49,101
Kruskal-Wallis test: how to calculate the exact p-value?
The general idea for computing p-values over a finite set of possibilities is to run over all possibilities and count how many times the statistic is greater than your observed statistic. Example for a binomial: you want to test whether a coin is biased, based on 10 tosses (heads/tails). Compute the statistic $h_0=|\text{heads}-5|$ for your dataset. Under the null (unbiased), for all sequences of 0s and 1s (which all have the same probability), compute the statistic and see how often it is greater than $h_0$. This is the p-value. There are $2^{10}=1024$ sequences to generate.

For Kruskal-Wallis: call $\omega_0$ your dataset and $h_0$ the value of the KW statistic for your dataset. Call $\omega$ a permutation of the 37 possible values. For each $\omega$, assign the first 12 values to A, the next 9 to B... You get one virtual dataset, also called $\omega$. For each $\omega$, compute the value of the KW statistic, say $h(\omega)$. Then count how many times this value of $h(\omega)$ is greater than or equal to $h_0$. Also count the total number of permutations. Divide, and you get the p-value.

How many $\omega$? The number of permutations: $37!\approx 10^{43}$ (Stirling's formula). A typical computer performs around $10^9$ atomic operations per second, and each permutation would take at least 1000 such operations, so you can assume roughly $10^6$ permutations per second. This is not feasible. You can instead work directly on combinations rather than permutations; their number is $\binom{37}{12}\binom{37-12}{9}\cdots=\frac{37!}{12!\,9!\cdots}\approx10^{22}$. Still not feasible.

This only describes the "naive" algorithms. Advanced algorithmic ideas (for exact values) become quite hard to follow as soon as the aim is to make them feasible even for values as small as 37, and they still require intensive computing resources. Further reading: http://faculty.virginia.edu/kruskal-wallis/

However, truly "exact" values are not necessary. The usual method is a simple Monte Carlo approach: instead of testing all permutations/combinations, test $N$ of them picked at random. This is very simple to implement. It converges to the true p-value at rate $1/\sqrt{N}$, in a way that can be controlled easily, so you get the precision you want in a reasonable time. Compared to the $\chi^2$ approximation, which can be poor for small sample sizes (37, 12, 9, ...), it is almost exact, since you do not assume $37\approx+\infty$ but use an $N$ as large as you want. This is what most statistical software does.
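A minimal Monte Carlo sketch in R of this idea, using made-up data with three hypothetical groups of sizes 12, 9 and 16 (37 values in total); kruskal.test() supplies the KW statistic and sample() permutes the group labels:

set.seed(1)
g <- factor(rep(c("A", "B", "C"), times = c(12, 9, 16)))   # hypothetical group sizes
y <- rnorm(37)                                             # hypothetical measurements

h0 <- kruskal.test(y, g)$statistic       # observed KW statistic

N <- 10000                               # number of random permutations
h <- replicate(N, kruskal.test(y, sample(g))$statistic)

p_mc <- (sum(h >= h0) + 1) / (N + 1)     # Monte Carlo p-value (+1 so it is never exactly 0)
p_mc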
49,102
Why do saddle points become "attractive" in Newtonian dynamics?
There's not really a more intuitive way to think about this. Suppose that you have the eigendecomposition of the Hessian, with $P$ an orthonormal matrix of eigenvectors and $D$ a diagonal matrix of eigenvalues: $$ \begin{align} \nabla^2 f(x) &= PDP^\top \\ \left[\nabla^2 f(x)\right]^{-1} &= PD^{-1}P^\top \end{align} $$ This is relevant to Newton's method because the update is given by $$ \begin{align} x^{(t+1)} &= x^{(t)}-\left[\nabla^2 f(x^{(t)})\right]^{-1}\nabla f(x^{(t)}) \\ &= x^{(t)}-PD^{-1}P^\top\nabla f(x^{(t)}) \end{align} $$ Saddle points have gradient 0, and Newton's method seeks points with gradient 0. Notice that the update does not care about the sign of the eigenvalues in $D$: along directions of negative curvature, multiplying by $D^{-1}$ reverses the gradient step, so the iterate moves toward the stationary point rather than away from it. If the problem is non-convex, then depending on the starting point, you may find yourself in the "basin of attraction" of a saddle point. I also think these posts are of interest: "Gradient descent on non-convex functions" and "Why is Newton's method not widely used in machine learning?"
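As a small numerical illustration (not part of the original answer), here is a sketch in R of Newton's method on the textbook saddle $f(x,y)=x^2-y^2$: the saddle at the origin is a fixed point the iteration jumps to, while plain gradient descent from the same start escapes along the negative-curvature direction.

f_grad <- function(p) c(2 * p[1], -2 * p[2])   # gradient of x^2 - y^2
f_hess <- function(p) diag(c(2, -2))           # constant Hessian, one negative eigenvalue

p <- c(1.3, 0.7)                      # arbitrary starting point
p <- p - solve(f_hess(p), f_grad(p))  # one Newton step lands exactly on the saddle
p                                     # (0, 0)

gd <- c(1.3, 0.7)                     # gradient descent with step size 0.1, for contrast
for (i in 1:20) gd <- gd - 0.1 * f_grad(gd)
gd                                    # x-coordinate shrinks, y-coordinate grows: it escapes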
49,103
Training models on biased samples
1) You have no data on the false negatives (i.e. cases that are risky but that the existing model deems 'not risky'), which makes these cases impossible to identify.

2) You can train a new model to distinguish between the true positives and false positives that you do have data on, i.e. the samples that have been manually investigated because the existing model deemed them risky. However, this would not tell you anything about the data that have not been manually investigated, and you would still not be able to correct for the false negatives in your data. But it means that you could probably improve on the existing model by coupling the existing model with the new model: use the two models sequentially, with the new model applied only to the data that the existing model deems risky (a sketch of this coupling is given below).

3) If your existing model is not a black box, you could attempt to learn its 'decision rules' and use that to develop a new model that functions alone (instead of the two coupled models suggested above). However, you still cannot improve performance on the false negatives without more data on them.
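A hypothetical sketch in R of the sequential coupling described in point 2; the data, column names and thresholds here are invented purely for illustration:

set.seed(1)
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
d$flagged <- d$x1 + rnorm(500) > 0.5                          # decisions of the existing model (given)
d$truly_risky <- ifelse(d$flagged,
                        rbinom(500, 1, plogis(d$x1 + d$x2)),  # label from manual review
                        NA)                                   # never investigated -> no label

# new model trained only on the investigated (flagged) cases
new_model <- glm(truly_risky ~ x1 + x2, family = binomial, data = subset(d, flagged))

# sequential use: raise a final alert only if the existing model flags the case
# AND the new model also considers it risky
d$p_new <- predict(new_model, newdata = d, type = "response")
d$final_alert <- d$flagged & d$p_new > 0.5
table(existing = d$flagged, final = d$final_alert)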
49,104
Training models on biased samples
Under certain assumptions, one way to capture some of those hidden false negatives would be to do clustering and possibly outlier detection. Then, in addition to the high-risk examples, you can manually examine some representative examples from each cluster and/or the outliers. You can also check whether your clusters / model of the data change significantly over time.
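A rough sketch in R of this idea (clustering plus a simple distance-based outlier rule); the data and the number of clusters are arbitrary placeholders:

set.seed(1)
X <- matrix(rnorm(400), ncol = 2)            # stand-in feature matrix (200 x 2)
km <- kmeans(X, centers = 4, nstart = 20)

# distance of each point to its assigned cluster centre
d_centre <- sqrt(rowSums((X - km$centers[km$cluster, ])^2))

# representatives: the point closest to each centre, to review as "typical" cases
reps <- sapply(1:4, function(k) {
  idx <- which(km$cluster == k)
  idx[which.min(d_centre[idx])]
})

# crude outliers: the 5% of points farthest from their centre, to review as unusual cases
outliers <- which(d_centre > quantile(d_centre, 0.95))

reps
outliers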
49,105
Difference between OOB score and score of random forest model in scikit-learn package?
clf.score(X, y) returns the coefficient of determination ($R^2$) of the trained model on the given data. Since you pass the same data used for training, this is your training score; if you instead pass "unseen" test data, you get a validation score. clf.oob_score_ provides the coefficient of determination computed by the out-of-bag (OOB) method, i.e. on "unseen" out-of-bag data. This score serves as a cross-validation estimate, and according to L. Breiman the OOB estimate can stand in for a cross-validation score: https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm.
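The same distinction can be illustrated in R with the randomForest package (assumed installed), whose rf$predicted holds the out-of-bag predictions; in scikit-learn, clf.score(X, y) and clf.oob_score_ play the corresponding roles:

library(randomForest)
set.seed(1)
df <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
df$y <- df$x1 + 0.5 * df$x2 + rnorm(300)

rf <- randomForest(y ~ ., data = df)

r2 <- function(truth, pred) 1 - sum((truth - pred)^2) / sum((truth - mean(truth))^2)

r2(df$y, predict(rf, newdata = df))   # "training" R^2 on the data used to fit: optimistic
r2(df$y, rf$predicted)                # out-of-bag R^2: the honest, CV-like estimate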
49,106
Pre-computing feature crosses when using XGBoost?
Both are correct. Tree-based methods cut perpendicular to a feature axis, so if the boundary is actually a diagonal, many perpendicular splits will be used to approximate it. On the one hand, trees (like xgb) can work out this boundary; on the other hand, lots of splits will be needed to accomplish it. Which strategy (letting the computer work it out, or pre-specifying feature products) is more efficient is problem-specific. If the data are not partitioned "diagonally", or are so partitioned only rarely, products of features offer no improvement, so you'll be wasting time whenever xgb considers an unhelpful product feature. On the other hand, if product features are helpful, including them can preclude the many splits that would otherwise be needed to approximate a diagonal. Finally, if you had $p$ features originally, including all pairwise products adds $\frac{p(p-1)}{2}$ additional features, which may itself be prohibitive.
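A small base-R sketch of pre-computing all pairwise product features (the data are made up, and whether this helps is, as noted above, problem-specific):

set.seed(1)
p <- 4
X <- matrix(rnorm(100 * p), ncol = p, dimnames = list(NULL, paste0("x", 1:p)))

pairs_idx <- combn(p, 2)                           # choose(p, 2) = p*(p-1)/2 pairs
crosses <- apply(pairs_idx, 2, function(ij) X[, ij[1]] * X[, ij[2]])
colnames(crosses) <- apply(pairs_idx, 2, function(ij) paste0("x", ij[1], ":x", ij[2]))

X_aug <- cbind(X, crosses)   # pass X_aug instead of X to the booster
dim(X_aug)                   # 100 rows, 4 + 6 = 10 columns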
49,107
Calculate $E[XY]$ for $(X,Y)\sim N(\mu_{1},\mu_{2},\sigma_{1}^{2},\sigma_{2}^{2}, \rho)$
The hint you suggest in your question is to use conditional expectations. Although not the only way to solve this problem, it is a quick method if you are comfortable with conditioning arguments. $\mathbb{E}(XY) = \mathbb{E}_X(\mathbb{E}_Y(XY|X))$, where the subscripts denote which variable the expectation is taken with respect to (for clarity to the reader). Then $\mathbb{E}_X(\mathbb{E}_Y(XY|X))= \mathbb{E}_X(X\,\mathbb{E}_Y(Y|X)).$ Now if you know the distribution of $Y|X$ this is easy: $Y|X \sim N\!\left(\mu_y+\rho\frac{\sigma_y}{\sigma_x}(X-\mu_x),\; \sigma_y^2(1-\rho^2)\right).$ Substituting the conditional mean gives $\mathbb{E}_X\!\left(X\left(\mu_y+\rho\frac{\sigma_y}{\sigma_x}(X-\mu_x)\right)\right).$ Now multiply the $X$ through and take expectations to get $\mathbb{E}_X(X)\mu_y+\rho\frac{\sigma_y}{\sigma_x}\left[\mathbb{E}_X(X^2)-\mu_x\mathbb{E}_X(X)\right].$ Using $\mathbb{E}_X(X^2)=\sigma_x^2+\mu_x^2$ (the variance decomposition of the second moment) gives $\mu_x\mu_y+\rho\frac{\sigma_y}{\sigma_x}\left[\sigma_x^2 +\mu_x^2 -\mu_x^2\right],$ and a little simplification yields $\mu_x\mu_y+\rho\sigma_y\sigma_x.$ You can check this against the correlation by subtracting $\mu_x\mu_y$ to get the covariance, verifying that the result is correct.
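A quick Monte Carlo sanity check of the result in R, with arbitrary illustrative parameter values:

set.seed(1)
mu_x <- 1; mu_y <- -2; s_x <- 2; s_y <- 0.5; rho <- 0.7
n <- 1e6

z1 <- rnorm(n); z2 <- rnorm(n)                       # build the correlated pair from
x <- mu_x + s_x * z1                                 # two independent standard normals
y <- mu_y + s_y * (rho * z1 + sqrt(1 - rho^2) * z2)

mean(x * y)                     # Monte Carlo estimate of E(XY)
mu_x * mu_y + rho * s_x * s_y   # closed form: -2 + 0.7 = -1.3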
49,108
Fitting MA(q) and ARIMA(q) model
When you write

[...] to initialize a time series of random white noise (the errors), and then perform a first fit to obtain a first model, then calculate the errors compared to the actual data, and then fit it again with the newly obtained errors, compare it to the actual data to obtain new errors, and so on,

you actually outline the need for an estimation method that simultaneously handles the estimation of the vector of residuals and of the parameters. Say one has
$${y}_t = \boldsymbol{x}_t\boldsymbol{\beta} + {\varepsilon}_t,$$
where $\boldsymbol{x}_t$ may contain anything you want, why not ${y}_{t-i}$ for $i=1,...,p$ (putting aside the discussion about the conditions related to $p$). In the MA(q) case, one assumes that ${\varepsilon}_t = {r}_t + \sum_{i=1}^q \lambda_i {r}_{t-i}$, which leads to
$${y}_t = \boldsymbol{x}_t\boldsymbol{\beta} + {r}_t + \sum_{i=1}^q \lambda_i {r}_{t-i},$$
or, reformulated in matrix terms using the backshift operator,
$$\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta} + \left(\boldsymbol{I} + \sum_{i=1}^q \lambda_i \boldsymbol{B}_i\right)\boldsymbol{r}.$$
Given that working with MLE actually means working with distribution-conditioned errors, you have to rearrange the last equation as
$$\left(\boldsymbol{I} + \sum_{i=1}^q \lambda_i \boldsymbol{B}_i\right)^{-1}\left(\boldsymbol{y} - \boldsymbol{X}\boldsymbol{\beta}\right) = \boldsymbol{r}.$$
In practice, this means working with distribution-conditioned residuals
$$\left(\boldsymbol{I} + \sum_{i=1}^q \widehat{\lambda}_i \boldsymbol{B}_i\right)^{-1}\left(\boldsymbol{y} - \boldsymbol{X}\widehat{\boldsymbol{\beta}}\right) = \widehat{\boldsymbol{r}}.$$
So arranged, one can maximize the (knowledge-driven) likelihood assumed for the residuals jointly with the estimation of the parameters, that is, simultaneously. Hence the frequent use of MLE. But iterative approaches like the one you described are also used in practice, for instance with GMM when dealing with endogeneity, stopping when a convergence criterion is met.
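A small R illustration of this joint ML estimation, using the built-in arima() function on simulated MA(2) data (the true coefficients below are chosen arbitrarily for the simulation):

set.seed(1)
y <- arima.sim(model = list(ma = c(0.6, -0.3)), n = 500)   # simulate an MA(2) process

# arima() estimates the MA coefficients and the residual series jointly by (Gaussian) ML,
# rather than by the iterated refit-the-errors scheme described in the question
fit <- arima(y, order = c(0, 0, 2))
fit$coef               # estimated ma1, ma2 (close to 0.6 and -0.3) and the intercept
head(residuals(fit))   # the jointly estimated one-step-ahead residuals r_t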
49,109
Why is p(x|z) tractable but p(z|x) intractable?
In Bayesian inference, when you have some data $x$, you first specify a likelihood, $p(x|z)$, also called a sampling distribution, which depends on some unknown parameters $z$ (also called latent variables, following your notation). We then have to specify a prior on these latent variables, $p(z)$, to completely specify the data-generating process. It is called the data-generating process because we can imagine first sampling some latent variables from the prior, $z^*\sim p(z)$, and then sampling a data point from the likelihood at this sample $z^*$, $x^*\sim p(x|z^*)$. The reason the likelihood is tractable is because we say it is. This isn't specific to Bayesian inference either: in frequentist inference you also specify a likelihood (you just don't specify a prior). At some point you need to assume some model for your data so you can actually infer something! In the case of Bayesian inference, this model is the combination of likelihood and prior.
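A tiny R sketch of that data-generating process, with a Gaussian prior and Gaussian likelihood chosen purely for illustration:

set.seed(1)
n <- 1000

z <- rnorm(n, mean = 0, sd = 1)      # prior: z ~ N(0, 1)
x <- rnorm(n, mean = z, sd = 0.5)    # likelihood: x | z ~ N(z, 0.5^2)

# evaluating the likelihood at any (x, z) pair is trivial, because we wrote it down:
dnorm(x[1], mean = z[1], sd = 0.5)

# the posterior p(z | x), by contrast, involves the normalizing integral
# p(x) = integral of p(x | z) p(z) dz, which in general has no closed form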
49,110
What's the difference between Random Intercepts Model and linear model with dummies?
Random intercept models (and multi-level models in general) allow us to relax the assumption of independent errors. They do this by having two sorts of errors (labeled $\epsilon$ and $\mu$ in your question).
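A short R sketch contrasting the two specifications on simulated grouped data; it assumes the lme4 package is available:

library(lme4)
set.seed(1)
g <- factor(rep(1:20, each = 10))                 # 20 groups of 10 observations
x <- rnorm(200)
u <- rnorm(20, sd = 1)                            # group-level errors (the "mu" above)
y <- 2 + 0.5 * x + u[g] + rnorm(200, sd = 0.5)    # plus observation-level errors (epsilon)
d <- data.frame(y, x, g)

fit_dummy <- lm(y ~ x + g, data = d)          # fixed dummies: one free intercept per group
fit_ri    <- lmer(y ~ x + (1 | g), data = d)  # random intercepts: draws from N(0, sigma_u^2)

summary(fit_ri)   # reports both variance components: group-level (u) and residual (epsilon)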
49,111
Why do we use sampled points instead of mean to reconstruct outputs in variational autoencoder?
According to Auto-Encoding Variational Bayes (eq. 3), the objective of a variational autoencoder is the evidence lower bound $$\mathcal{L}(\theta,\phi;x^{(i)}) = -D_{KL}\!\left(q_\phi(z|x^{(i)})\,\|\,p_\theta(z)\right) + \mathbb{E}_{q_\phi(z|x^{(i)})}\!\left[\log p_\theta(x^{(i)}|z)\right]$$ (the loss is its negative). The expectation term is usually an intractable integral, so we want to approximate this expected value by drawing samples and then computing the average. The random value is added to generate samples from $q_\phi(z|x^{(i)})$; theoretically we should draw a large number of random values for an accurate approximation, but since training usually takes thousands of iterations, we can use only one random value per input instance. On the other hand, if we use the mean value instead, it is no longer the same loss function, as $$E[f(x)]\neq f(E[x]).$$

Update: The original autoencoder models the data likelihood $p_\theta(x)$ as a whole, so it's convenient to use mean squared error or binary cross-entropy to give a likelihood to optimize. In the variational autoencoder we introduce the latent variable $z$ and want to model $q_\phi(z|x)$ (encoder) and $p_\theta(x|z)$ (decoder) jointly. The difficulty in this case is that $p(x)$ can no longer be computed in a single pass of the network as in the original autoencoder. The paper addresses this by maximizing a variational lower bound on $p(x)$ (the objective above), which contains an intractable expectation term that needs to be approximated using sampling and the "reparameterization trick". I just found a very good Variational Autoencoder course on Coursera; it addresses exactly your question in the fourth video.
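A small R illustration of the two points above: the reparameterized draw $z=\mu+\sigma\epsilon$ used during training, and the fact that plugging in the mean changes the objective because $E[f(z)]\neq f(E[z])$; the function f below is just a stand-in for the reconstruction term:

set.seed(1)
mu <- 0.3; sigma <- 1.2          # illustrative encoder outputs for one input
f  <- function(z) exp(-z^2)      # stand-in nonlinearity for the decoder/reconstruction term

eps <- rnorm(1)
z   <- mu + sigma * eps          # single reparameterized sample, differentiable in mu, sigma

zs <- mu + sigma * rnorm(1e5)    # many samples, to approximate the true expectation
mean(f(zs))                      # E[f(z)], roughly 0.50 here
f(mu)                            # f(E[z]), roughly 0.91 -- a different (and wrong) objective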
49,112
Test to determine whether a variable changes more in one group than another
Is the affected group significantly more affected than controls by a change in dose - particularly at lower doses? What you are describing is equivalent to assessing the significance of an interaction in a regression model. Namely, the interaction between group and dose in the model: response ~ group * dose Your question doesn't state explicitly what the response is, but judging by the higher variance with higher values and what looks like a plateau, you may want to perform a transformation of the response. The right transformation depends on the process generating the data, so I can't tell for certain unless you include more information on this response.
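A sketch of that interaction test in R on made-up dose-response data (group names, doses and effect sizes invented for illustration):

set.seed(1)
d <- data.frame(group = rep(c("control", "affected"), each = 60),
                dose  = rep(c(1, 2, 4, 8, 16, 32), times = 20))
d$response <- with(d, 5 + 0.4 * log(dose) +
                      ifelse(group == "affected", 0.6 * log(dose), 0) +
                      rnorm(120, sd = 0.5))

fit_add <- lm(response ~ group + dose, data = d)   # additive model
fit_int <- lm(response ~ group * dose, data = d)   # adds the group:dose interaction

anova(fit_add, fit_int)   # F-test: does dose affect the two groups differently?
summary(fit_int)          # the group:dose row gives the same test for this single term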
49,113
Test to determine whether a variable changes more in one group than another
It looks like you have repeated measures for each individual across time which need to be accounted for. The most appropriate model is probably a linear mixed model (or growth model) with individual as your random factor and time and time squared as your random effects, and time and time squared as your fixed effects. Then you can add in a dummy variable for Affected/Control as a fixed effect to see if the curve for your affected group is higher than your curve for your control group.
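A hypothetical lme4 sketch of such a growth model (simulated data, arbitrary effect sizes; with so few time points the quadratic random slopes may trigger a singular-fit warning):

library(lme4)
set.seed(1)
d <- expand.grid(id = factor(1:40), time = 0:4)              # 40 individuals, 5 time points
d$group <- ifelse(as.integer(d$id) <= 20, "control", "affected")
b0 <- rnorm(40, sd = 1)                                      # individual-specific intercepts
d$y <- 2 + 0.8 * d$time - 0.05 * d$time^2 +
       ifelse(d$group == "affected", 0.5, 0) +
       b0[d$id] + rnorm(nrow(d), sd = 0.5)

# fixed effects: time, time^2, group; random effects: intercept, time and time^2 per individual
fit <- lmer(y ~ time + I(time^2) + group + (time + I(time^2) | id), data = d)
summary(fit)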
49,114
When the effect size of a covariate is high and yet not significant
Significance means detectability. That, in turn, depends (among other things) on the amount of data. A common way to see large but insignificant effect sizes, then, is when there isn't much data. Since such examples are numerous and easy to create, I won't dwell on this rather uninteresting point.

There are subtler things that can go on. Even with relatively large amounts of data, a model can fail to detect an effect because that effect is masked or otherwise confused with another effect. Here's an example involving a plain-vanilla linear regression with two explanatory variables $x_1$ and $x_2$ and a response $y$ that is conditionally independent, Normal, and of constant variance--in other words, as beautiful a situation as one could hope for when applying Least Squares methods.

This scatterplot matrix shows how the three variables are related within a sample of three hundred observations. Perhaps, for instance, an experimenter was able to observe a system in three different conditions; measured $x_1,x_2,$ and $y$ one hundred times in each condition; and wishes to understand how $y$ might be related to the $x_i$. That sounds like a typical (and therefore important) situation to understand.

There does seem to be a relationship: although the values of $y$ are spread over ranges of roughly $0-4$, $1-5$, and $2-6$, these ranges shift upwards as the values of the $x_i$ increase from near $-1$ to near $1$. The regression overall is significant (the p-value is too tiny to be calculated). Here are the coefficient estimates and their associated statistics:

             Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.06269    0.05619  54.501   <2e-16
x.1          -1.95875    1.88340  -1.040    0.299
x.2           2.96594    1.88206   1.576    0.116

Notice:

The p-values for the coefficients are 0.299 and 0.116. Neither would be considered "significant" in most situations: they are too large.

The coefficient estimates ("effect sizes") of $-1.96$ and $2.97$ are large. What does "large" mean here? Simply that since each of the $x_i$ varies by more than $1 - (-1)=2$ in the dataset, a coefficient of (say) $2.97$ translates to variations of more than $2\times 2.97\approx 6$ in the prediction of $y$. Since the total variation in $y$ is only from $0$ to $6$, this means $x_1$ alone can completely determine the value of $y$! That's large.

This failure-to-detect the effects of the $x_i$ comes about because the $x_i$ separately give us nearly the same information: they are said to be (almost) collinear. Collinearity can be subtle and much more difficult to detect when there are more than two explanatory variables. Search our site for more examples.

You may modify this example to explore the effects of sample size, variability, coefficients, and so on. Here is the R code. Comment out the line set.seed(17) when experimenting, so that you get randomly different results each time.

library(data.table)

n <- 100       # One-third of the sample size
tau <- .02     # Conditional standard deviation of the explanatory variables
sigma <- 1     # Error standard deviation
#
# Create data.
#
set.seed(17)                              # Creates reproducible data
x <- c(rep(1,n), rep(0,n), rep(-1,n))     # Experimental "condition"
X <- data.table(x.1=rnorm(3*n, x, tau),   # Regressor x.1
                x.2=rnorm(3*n, x, tau))   # Regressor x.2
invisible(X[, y := rnorm(3*n, 3+x+x.2-x.1, sigma)])  # y (including error)
#
# Plot the data.
#
pairs(X, pch=19, col="#00000010")
#
# Perform least squares regression.
#
fit <- lm(y ~ ., X)
summary(fit)
49,115
Complete sufficient statistic and unbiased estimator
Can we always find such an unbiased estimator if we have a complete sufficient statistic? A slight modification of the example given in the comments: let $X_1,X_2,\ldots,X_n$ follow $B(m,\theta)$. Then the function $g(\theta)=\frac{1}{\theta}$ does not admit an unbiased estimator, while $\sum_{i=1}^{n}{X_i}$ is a complete sufficient statistic.

Or conversely, if an unbiased estimator of θ (the parameter) exists as a function of a sufficient statistic, does that imply that the sufficient statistic is complete? Let $X_1,X_2,\ldots,X_n$ follow a Poisson distribution $P(\theta)$. Then $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}{{(X_i-\bar{X}})}^2$ is an unbiased estimator of $\theta$ and is a function of the statistic $T(X) = \left(\sum_{i=1}^{n}{X_i},\sum_{i=1}^{n}{X_i}^2\right)$, which is sufficient. But $T(X)$ is not a complete sufficient statistic.
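For completeness, here is the standard (sketched) argument for why $1/\theta$ has no unbiased estimator in the binomial case; this step is implicit in the answer above. For any estimator $T(X_1,\dots,X_n)$,
$$\mathbb{E}_\theta[T]=\sum_{x_1=0}^{m}\cdots\sum_{x_n=0}^{m}T(x_1,\dots,x_n)\prod_{i=1}^{n}\binom{m}{x_i}\theta^{x_i}(1-\theta)^{m-x_i},$$
which is a polynomial in $\theta$ of finite degree and hence bounded on $(0,1)$, whereas $1/\theta\to\infty$ as $\theta\to 0^+$. So no estimator can have expectation $1/\theta$ for every $\theta\in(0,1)$.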
49,116
Complete sufficient statistic and unbiased estimator
No, there are examples where there is a complete sufficient statistic but there is some function of the parameter that does not admit an unbiased estimator. One binomial example, noted in comments, is discussed here at math SE. There are many other examples, this is one. A paper discussing the topic is A Class of Parameter Functions for Which the Unbiased Estimator Does Not Exist.
49,117
Moments of the sample median of a normal distribution
The following is not an exact answer, but it does provide some help in the form of a heuristic for the central moment. As mentioned in the comments by others, a theoretical expression is difficult (except for the odd moments, which are zero), if not unobtainable. Yet, for the 2nd moment of the median of samples with 3 and 5 variables, it is possible to find expressions using integration by parts and tables of integrals.

Experimental

I started plotting some computations in order to get an idea in what direction the integral might go. The result is close to a power-law relationship for the ratio of the moment of the median of $2n+1$ Gaussian-distributed variables to the corresponding moment of the Gaussian distribution. For the $m$-th central moment of the Gaussian distribution we have

$E((X-\mu_X)^m) = \begin{cases} \sigma^m(m-1)!! & \text{for $m$ even,} \\ 0 & \text{for $m$ odd,} \end{cases}$

where $!!$ denotes the double factorial. For the $m$-th central moment of the median of a sample of $2n+1$ Gaussian-distributed variables, $X_{2n+1}$, we find roughly a power law:

$\frac{E((X_{2n+1}-\mu_{X_{2n+1}})^m)}{E((X-\mu_X)^m)} \sim (2n+1)^{-m/2}$

In the plot with the computations I have added a line that is based on the following fit:

$\frac{E((X_{2n+1}-\mu_{X_{2n+1}})^m)}{E((X-\mu_X)^m)} \sim (1+0.6355323(n-1))^{-m/2}$

Note that the parameters in this fit change only a little if different moments are calculated.

Legend for the image below (2nd moment of the median as a function of the number of variables in the sample): black dots are computed 2nd central moments; green crosses are theoretical calculations; the red line is a power-law relationship based on an experimental linearized model fit.

Theoretical

The image above contains two points that have been calculated exactly. These points were found using integration by parts. First we can eliminate the $\dfrac{(2n+1)!}{(n!)^2}$ factor by using a ratio:

$\frac{\int \dfrac{(2n+1)!}{(n!)^2}y^m[F_{X_i}(y)(1-F_{X_i}(y))]^{n}f_{X_i}(y)dy}{\int \dfrac{(2n+1)!}{(n!)^2}[F_{X_i}(y)(1-F_{X_i}(y))]^{n}f_{X_i}(y)dy}$

The integration by parts follows a pattern like (I won't write down the entire procedure)

$\int y^a f^b (F-0.5)^c = \frac{(a-1)}{b}\int y^{a-2} f^b (F-0.5)^c + \frac{c}{b} \int y^{a-1} f^{b+1} (F-0.5)^{c-1}$

In this way you eliminate the $y$-term, so that you are left with only products of $f^b F^c$ in the integral. The cases $f^b$, $f F^c$, and $f^b F$ are easy to calculate. However, if both $b>2$ and $c>2$, it becomes difficult. For the case of 5 variables we do get an integral with an $f^3F^2$ term. This can be solved using the tables from Murray Geller, 'A table of integrals of the Error functions', in which we find

$\int_0^\infty \mathrm{erf}(ax) \, \mathrm{erf}(bx) \, e^{-c^2x^2} dx = \frac{1}{c\sqrt{\pi}} \mathrm{tan}^{-1} \left( \frac{ab}{c\sqrt{a^2+b^2+c^2}} \right)$

In cases of more than 5 variables we end up with larger power terms for which no standard integrals are published; you will have to play around with more than simple integration by parts. So for samples of 3 and 5 variables we get

$E((X_{3}-\mu_{X_{3}})^2) \sim E((X-\mu_X)^2) \frac{\frac{1}{6}-\frac{1}{2\pi\sqrt{3}}}{\frac{1}{6}}$

and

$E((X_{5}-\mu_{X_{5}})^2) \sim E((X-\mu_X)^2) \frac{\frac{1}{30}-\frac{1}{4 \sqrt{3} \pi}+\frac{6}{4\sqrt{3}\,\pi^2} \mathrm{tan}^{-1}\left( \frac{1}{\sqrt{15}} \right)}{\frac{1}{30}}$

I apologize for not simplifying the expressions or showing every detail of the derivation; I leave that open.
The code below shows that these expressions work. Code for the image:

# function for double factorial
odfactorial <- function(x) {
  l <- (x + 1) / 2
  return(factorial(2 * l) / factorial(l) / 2^l)
}

# define some stuff
mom  <- 2
sigm <- 1
x <- 10 * c(-1000:1000) / 1000   # x variable
k <- c(0:500)                    # loop over odd numbers of observations
n <- 1 + 2 * k                   # number of observations

# calculate moments
moments_p <- lapply(k, FUN = function(kj)
  sum(x^mom * dnorm(x, 0, sigm) * (pnorm(x, 0, sigm) - pnorm(x, 0, sigm)^2)^kj) /
  sum(        dnorm(x, 0, sigm) * (pnorm(x, 0, sigm) - pnorm(x, 0, sigm)^2)^kj))
moments_p <- as.numeric(moments_p)   # convert list to vector

# plot expected moment versus number of observations
plot(n, moments_p, xlab = "observations", ylab = paste0(mom, "-th moment"), log = "xy")
# lines(n, sigm^(mom) * (odfactorial(mom - 1)) / n^(mom / 2))

# draw heuristic line
m_t <- sigm^(mom) * (odfactorial(mom - 1)) * (1 + 0.6355323 * (n - 1))^(-mom / 2)
lines(n, m_t, col = 2)

# draw calculated points
m1_predict <- sigm^(mom) * (odfactorial(mom - 1))
points(1, m1_predict, col = 3, pch = 3, cex = 1.45)
m2_predict <- sigm^(mom) * (odfactorial(mom - 1)) * (1/6 - sqrt(1/3) / sqrt(2 * pi)^2) / (1/6)
points(3, m2_predict, col = 3, pch = 3, cex = 1.45)
m3_predict <- sigm^(mom) * (odfactorial(mom - 1)) *
  ((1/30) - 0.5 / sqrt(3) / 2 / pi + 6 * (atan(1 / sqrt(3) / sqrt(5)) / pi^2 / sqrt(3) / 4)) / (1/30)
points(5, m3_predict, col = 3, pch = 3, cex = 1.45)
49,118
Expected value of $X^{-1}$, $X$ being a noncentral $\chi^2$. Cannot understand a step of a equation in a paper
Yes, $EY^{−r}$ stands for $E[Y^{−r}]$. (I dislike not making it explicit because it leaves too many opportunities for misunderstandings and errors.) With respect to the later part, consider: $Y=\frac{Y}{E(Y)}\cdot E(Y)= E(Y)\cdot [\frac{Y}{E(Y)}-1+1]= E(Y)\cdot [\frac{Y-E(Y)}{E(Y)}+1]$ Therefore $EY^{-r} = [E(Y)]^{-r}\cdot E \left[ \left( 1 + \frac{Y-EY}{EY} \right)^{-r}\right] \, .$ So now compare with the paper. The equation in the paper is $\mu'_r=EX^r \cdot EY^{-r} = EX^r \cdot E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\,.$ Dividing through by $EX^r$ we have $\mu'_r/EX^r = EY^{-r} = E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}$. So we see from that last equality that they're asserting $EY^{-r} = E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\quad{^\ddagger}$ and so we can see $[E(Y)]^{-r}$ seems to have disappeared -- there's a term missing in the paper. $^\ddagger$ -- keeping in mind that when they write $E\text{<term>}^{-r}$ it seems they intend $E(\text{<term>}^{-r})$, so this means $E\left( \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\right)$
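A quick numerical illustration of the missing term (a sketch of my own, assuming numpy; the degrees of freedom, noncentrality and $r$ below are arbitrary choices): simulating a noncentral $\chi^2$ shows that $E\left[\left(1+\frac{Y-EY}{EY}\right)^{-r}\right]$ on its own does not equal $EY^{-r}$; the factor $[E(Y)]^{-r}$ is exactly what reconciles the two.

import numpy as np

rng = np.random.default_rng(1)
df, nonc, r = 10, 3.0, 1                     # arbitrary example values
y = rng.noncentral_chisquare(df, nonc, size=1_000_000)

EY = df + nonc                               # exact mean of a noncentral chi-square
lhs = np.mean(y**-r)                                  # E[Y^-r]
rhs = EY**-r * np.mean((1 + (y - EY) / EY)**-r)       # with the [E(Y)]^-r factor
paper = np.mean((1 + (y - EY) / EY)**-r)              # as printed in the paper
print(lhs, rhs, paper)   # lhs and rhs agree; 'paper' differs by the factor EY^r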
Expected value of $X^{-1}$, $X$ being a noncentral $\chi^2$. Cannot understand a step of a equation
Yes, $EY^{−r}$ stands for $E[Y^{−r}]$. (I dislike not making it explicit because it leaves too many opportunities for misunderstandings and errors.) With respect to the later part, consider: $Y=\frac{
Expected value of $X^{-1}$, $X$ being a noncentral $\chi^2$. Cannot understand a step of a equation in a paper Yes, $EY^{−r}$ stands for $E[Y^{−r}]$. (I dislike not making it explicit because it leaves too many opportunities for misunderstandings and errors.) With respect to the later part, consider: $Y=\frac{Y}{E(Y)}\cdot E(Y)= E(Y)\cdot [\frac{Y}{E(Y)}-1+1]= E(Y)\cdot [\frac{Y-E(Y)}{E(Y)}+1]$ Therefore $EY^{-r} = [E(Y)]^{-r}\cdot E \left[ \left( 1 + \frac{Y-EY}{EY} \right)^{-r}\right] \, .$ So now compare with the paper. The equation in the paper is $\mu'_r=EX^r \cdot EY^{-r} = EX^r \cdot E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\,.$ Dividing through by $EX^r$ we have $\mu'_r/EX^r = EY^{-r} = E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}$. So we see from that last equality that they're asserting $EY^{-r} = E \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\quad{^\ddagger}$ and so we can see $[E(Y)]^{-r}$ seems to have disappeared -- there's a term missing in the paper. $^\ddagger$ -- keeping in mind that when they write $E\text{<term>}^{-r}$ it seems they intend $E(\text{<term>}^{-r})$, so this means $E\left( \left[ 1 + \frac{Y-EY}{EY} \right]^{-r}\right)$
Expected value of $X^{-1}$, $X$ being a noncentral $\chi^2$. Cannot understand a step of a equation Yes, $EY^{−r}$ stands for $E[Y^{−r}]$. (I dislike not making it explicit because it leaves too many opportunities for misunderstandings and errors.) With respect to the later part, consider: $Y=\frac{
49,119
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Let me use Hinton's own writing to answer this question: The CD learning procedure is based on ignoring derivatives that come from later steps in the Markov chain (Hinton, Osindero and Teh, 2006), so it tends to approximate maximum likelihood learning better when the mixing is fast. The ignored derivatives are then small for the following reason: When a Markov chain is very close to its stationary distribution, the best parameters for modeling samples from the chain are very close to its current parameters. Hinton, A Practical Guide to Training Restricted Boltzmann Machines, 2010 So let's follow the "white rabbit": [W]e can compute the derivatives of the log probability of the data. Let us start by computing the derivative for a generative weight, $\omega^{0,0}_{i,j}$, from a unit $j$ in [hidden] layer $H_0$ to unit $i$ in [visible] layer $V_0$ (see Figure 3). In a logistic belief net, the maximum likelihood learning rule for a single data vector [and layer!], $\vec{v}^0$, is $$\frac{\delta \log p(\vec{v}^0)}{\delta \omega^{0,0}_{i,j}} = \langle \vec{h}^0_j ( \vec{v}^0_i - \hat{\vec{v}}^0_i ) \rangle$$ where ⟨·⟩ denotes an average over the sampled states and $\hat{\vec{v}}^0_i$ is the probability that unit $i$ would be turned on if the visible vector was stochastically reconstructed from the sampled hidden states. From Section 2.1 and Formula 2.2 in Hinton et al., A Fast Learning Algorithm for Deep Belief Nets, 2006 Note that the super-script numbers represent layers in your DBN, so for an RBM you can just "ignore" them. Next, Hinton goes on to expand the ML rule over all layers of the DBN. But check out Figure 4, as that depicts how you would then use alternating Gibbs sampling (in theory: to infinity!) on your Markov chain to optimize a [single] "RBM" layer. In a nutshell, it simply depicts making the angle-bracketed estimate with your MCMC sampler, up to infinity. He also gives us the probabilistic interpretation of ML learning your RBM: Maximizing the log probability of the data is exactly the same as minimizing the Kullback-Leibler divergence, $KL(P^0||P^\infty_\theta)$, between the distribution of the data, $P^0$, and the equilibrium distribution defined by the model, $P^\infty_\theta$. In contrastive divergence learning (Hinton, 2002), we run the Markov chain for only $n$ full steps before measuring the second correlation. However, then: An empirical investigation of the relationship between the maximum likelihood and the contrastive divergence learning rules can be found in Carreira-Perpinan and Hinton (2005). Bad luck, another redirection is needed to fully resolve all your questions; yet, we at least already understand how the ML approach will work for our RBM (Bullet 1). One question in that bullet, though, was "I didn't get that concept at all. Are there even any training examples involved?". I am not sure I fully understand what you are asking for, but yes, of course, you initialize your visible units according to the training data, just like with the CD method (yet, that seems almost too trivial a question/answer?). So next, let us dive into the answers for Bullet 2: Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. 
Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a different function called "contrastive divergence" (CD). And: Fast CD learning can therefore be used to get close to an ML solution and slow ML learning can then be used to fine-tune the CD solution. From the abstract of Carreira-Perpinan and Hinton, On contrastive divergence learning, 2005 Luckily, the introduction spells out what you might have already guessed from our understanding of the ML approach above: However, [ML learning with MCMC] is typically very slow, since running the Markov chain to equilibrium can require a very large number of steps, and no foolproof method exists to determine whether equilibrium has been reached. A further disadvantage is the large variance of the estimated gradient. So all that is left to answer is "can someone give me a clear comparison (in terms of probability) between the two techniques?" (I believe/hope). Luckily, this, too, is answered immediately in the introduction of that paper: ML learning minimises the Kullback-Leibler divergence $$KL(p_0||p_\infty) = \sum_x{p_0(\vec{x})\log\frac{p_0(\vec{x})}{p(\vec{x};\vec{W})}}$$ CD learning approximately follows the gradient of the difference of two divergences (Hinton, 2002): $$CD_n = KL(p_0||p_\infty) - KL(p_n||p_\infty)$$ In CD learning, we start the Markov chain at the data distribution $p_0$ and run the chain for a small number $n$ of steps (e.g. $n = 1$). This greatly reduces both the computation per gradient step and the variance of the estimated gradient, and experiments show that it results in good parameter estimates (Hinton, 2002). I believe this has answered all questions, and for any further details, I strongly recommend studying the cited work. Hinton's work is always so refreshingly clear! ADDENDUM: So to put it quite blatantly, this whole work is a very elaborate, academic way of showing that running a "1-step MCMC" on our RBM is not really significantly worse than running to "infinity" (or, more likely, some kind of convergence), while it makes finding the weights of a RBM/DBN tractable, and therefore this simplification "deserved" a new name ("CD learning" as opposed to "ML learning").
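To make the $CD_n$ recipe concrete for $n=1$, here is a minimal numpy sketch of a single contrastive-divergence update for a binary RBM. This is my own illustration rather than code from any of the cited papers; the layer sizes, learning rate and the omission of bias terms are arbitrary simplifications.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))           # weights; bias terms omitted for brevity

v0 = rng.integers(0, 2, size=(1, n_vis)).astype(float)   # one (made-up) training visible vector

# positive phase: hidden probabilities/states given the data
ph0 = sigmoid(v0 @ W)
h0 = (rng.random(ph0.shape) < ph0).astype(float)

# one step of alternating Gibbs sampling (the "n = 1" in CD_n)
pv1 = sigmoid(h0 @ W.T)
v1 = (rng.random(pv1.shape) < pv1).astype(float)
ph1 = sigmoid(v1 @ W)

# CD-1 update: data correlation minus one-step reconstruction correlation
W += lr * (v0.T @ ph0 - v1.T @ ph1)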
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Let me use Hinton's own writing to answer this question: The CD learning procedure is based on ignoring derivatives that come from later steps in the Markov chain (Hinton, Osindero and Teh, 2006), so
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning Let me use Hinton's own writing to answer this question: The CD learning procedure is based on ignoring derivatives that come from later steps in the Markov chain (Hinton, Osindero and Teh, 2006), so it tends to approximate maximum likelihood learning better when the mixing is fast. The ignored derivatives are then small for the following reason: When a Markov chain is very close to its stationary distribution, the best parameters for modeling samples from the chain are very close to its current parameters. Hinton, A Practical Guide to Training Restricted Boltzmann Machines, 2010 So let's follow the "white rabbit": [W]e can compute the derivatives of the log probability of the data. Let us start by computing the derivative for a generative weight, $\omega^{0,0}_{i,j}$, from a unit $j$ in [hidden] layer $H_0$ to unit $i$ in [visible] layer $V_0$ (see Figure 3). In a logistic belief net, the maximum likelihood learning rule for a single data vector [and layer!], $\vec{v}^0$, is $$\frac{\delta \log p(\vec{v}^0)}{\delta \omega^{0,0}_{i,j}} = \langle{ \vec{h}^0_j ( \vec{v}^0_i - \hat{\vec{v}}^0_i ) \rangle}$$ where ⟨·⟩ denotes an average over the sampled states and $\vec{v}^0_i$ is the probability that unit $i$ would be turned on if the visible vector was stochastically reconstructed from the sampled hidden states. From Section 2.1 and Formula 2.2 in Hinton et al., A Fast Learning Algorithm for Deep Belief Nets, 2006 Note that the super-script numbers represent layers in your DBN, so for a RBM, you just can "ignore" them. Next, Hinton goes on to expand the ML rule over all layers of the DBN. But check out Figure 4, as that depicts how you would then use alternative Gibbs sampling (in theory: to infitinty!) on your Markov chain to optimize a [single] "RBM" layer. In a nutshell, it simply depicts making the angle-bracketed estimate with your MCMC sampler, up to infinity. He also gives us the probabilistic interpretation of ML learning your RBM: Maximizing the log probability of the data is exactly the same as minimizing the Kullback-Leibler divergence, $KL(P^0||P^\infty_\theta)$, between the distribution of the data, $P^0$, and the equilibrium distribution defined by the model, $P^\infty_\theta$. In contrastive divergence learning (Hinton, 2002), we run the Markov chain for only $n$ full steps before measuring the second correlation. However, then: An empirical investigation of the relationship between the maximum likelihood and the contrastive divergence learning rules can be found in Carreira-Perpinan and Hinton (2005). Bad luck, another redirection to fully resolve all your questions; Yet, we at least already understand how the ML approach will work for our RBM (Bullet 1). One question in that bullet, though, was "I didn't get that concept at all. Are there even any training examples involved?". I am not sure I fully understand what you are asking for, but yes, of course, you initialize your visible units according to the training data, just like with the CD method (yet, that seems almost to trivial an question/answer?). So next, let us dive into the answers for Bullet 2: Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. 
Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a different function called "contrastive divergence" (CD). And: Fast CD learning can therefore be used to get close to an ML solution and slow ML learning can then be used to fine-tune the CD solution. From the abstract of Carreira-Perpinan and Hinton, On contrastive divergence learning, 2005 Luckily, the introduction spells out what you might have already guessed from our understanding of the ML approach above: However, [ML learning with MCMC] is typically very slow, since running the Markov chain to equilibrium can require a very large number of steps, and no foolproof method exists to determine whether equilibrium has been reached. A further disadvantage is the large variance of the estimated gradient. So all that is left to answer is "can someone give me a clear comparison (in terms of probability) between the two techniques?" (I believe/hope). Luckily, this, too, is answered immediately in the introduction of that paper: ML learning minimises the Kullback-Leibler divergence $$KL(p_0||p_\infty) = \sum_x{p_0(\vec{x})\log\frac{p_o(\vec{x})}{p(\vec{x};\vec{W})}}$$ CD learning approximately follows the gradient of the difference of two divergences (Hinton, 2002): $$CD_n = KL(p_0||p_\infty) - KL(p_n||p_\infty)$$ In CD learning, we start the Markov chain at the data distribution $p_0$ and run the chain for a small number $n$ of steps (e.g. $n = 1$). This greatly reduces both the computation per gradient step and the variance of the estimated gradient, and experiments show that it results in good parameter estimates (Hinton, 2002). I believe this has answered all questions, and for any further details, I strongly recommend studying the cited work. Hinton's work is always so refreshingly clear! ADDENDUM: So to put it quite blatantly, this whole work is a very elaborate, academic way of showing that running a "1-step MCMC" on our RBM is not really significantly worse than running to "infinity" (or, more likely, some kind of convergence), while it makes finding the weights of a RBM/DBN tractable, and therefore this simplification "deserved" a new name ("CD learning" as opposed to "ML learning").
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning Let me use Hinton's own writing to answer this question: The CD learning procedure is based on ignoring derivatives that come from later steps in the Markov chain (Hinton, Osindero and Teh, 2006), so
49,120
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Thank you very much for your effort, @fnl! Although I couldn't follow all of your points (probably because I'm quite a beginner), your answer gave me some clarity. Could you point out the problem again in terms of energy? I came across the following equation several times, and one drawback of the ML method is that the denominator requires summing over all possible combinations of visible and hidden states, which is exponentially complex, right? $$ \arg \max_\theta \log p(v | \theta) = \log \frac{\sum_{h} e^{-E(v,h)}}{\sum_{v', h'}e^{-E(v',h')}} = \log \sum_{h} e^{-E(v,h)} - \log {\sum_{v', h'}}e^{-E(v',h')} $$ However, I haven't completely understood how the energy-related formulation fits into the whole problem. Also, I remember a quote (I believe by Hinton in one of his Coursera videos) which was like (in reference to the above equation) The gradient of the first sum is easy to compute, while the gradient of the second is not. It could be approximated when having samples, but even sampling is hard, so we also need to approximate them. So does the "first" approximation mean to repeatedly run the Markov chain and the "second" approximation to perform Gibbs sampling?
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning
Thank you very much for your effort, @fnl! Although I couldn't follow all of your points (probably because I'm quite a beginner), your answer gave me some clarity. Could you point out the problem agai
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning Thank you very much for your effort, @fnl! Although I couldn't follow all of your points (probably because I'm quite a beginner), your answer gave me some clarity. Could you point out the problem again in terms of energy? I came across the following equation several times and one drawback of the ML method is that the denominator requires to sum over all possible combinations of visible and hidden states, which is exponentially complex, right? $$ \arg \max_\theta \log p(v | \theta) = \log \frac{\sum_{h} e^{-E(v,h)}}{\sum_{v', h'}e^{-E(v',h')}} = \log \sum_{h} e^{-E(v,h)} - \log {\sum_{v', h'}}e^{-E(v',h')} $$ However, I haven't completely understood, how the energy-related formulation fits into the whole problem. Also, I remember a quote (I believe by Hinton in one of his Coursera videos) which was like (in reference to the above equation) The gradient of the first sum is easy to comute, while the gradient of the second is not. It could be approximated when having samples, but even sampling is hard, so we also need to approximate them. So does the "first" approximation mean to repeatedly run the Markov chain and the "second" approximaton to perform Gibbs sampling?
Restricted Boltzmann Machines - Understanding contrastive divergence vs. ML learning Thank you very much for your effort, @fnl! Although I couldn't follow all of your points (probably because I'm quite a beginner), your answer gave me some clarity. Could you point out the problem agai
49,121
Why is chi-squared / z-test used for a/b testing in marketing?
A typical A/B test in marketing is fundamentally a test of equal proportions, and there are several ways to perform this test. In a marketing campaign, a certain number of people are contacted or exposed to an impression, and of those a certain number will "convert," which often means purchase something, but can be some other evidence of engagement, such as creating an account or simply clicking on something. This basic framework holds true for direct mail campaigns, email campaigns, display and paid search advertisements, and so on. To A/B test a marketing campaign, we divide the people who will be exposed into two groups, conventionally called A and B, and use a different message, appeal, call-to-action, or graphic design for the two groups. Which version an individual is exposed to is completely random. For each group, then, we know the total number of people exposed to each version of the message, and we know the number of conversions for each group. The data can be tabulated like so:
                | A    | B
----------------+------+-----
did not convert | 605  | 195
converted       | 351  | 41
These same data could be displayed in an equivalent form:
        | n    | p
--------+------+------
Group A | 956  | 0.367
Group B | 236  | 0.174
We are interested to know if the two groups have the same conversion rate (the proportion of conversions in each group) or if there is a statistically significant difference in conversion rate. In the language of statistics, we need to perform a two-sided test of equal proportions. The null hypothesis is that the two groups have equal proportions, and the alternative hypothesis is that the "true" proportions are not equal. There are three popular tests to consider: Fisher's exact test, Pearson's $\chi^2$ test, and the z-test on equal proportions. These vary mainly by the degree of approximation involved, and consequently by the total sample size required before we meet the assumptions of the test. For a typical marketing campaign, we will have many thousands of impressions and conversions. The z-test is used because it is easy to interpret and makes it easy to talk about effect size and confidence intervals. In some cases, some of the counts will be small (less than 30), usually the number of conversions. In those cases, the $\chi^2$ or Fisher's exact test will be used for formal hypothesis testing and reporting the significance level. In my experience, the z-test is most often used, because if the sample sizes are that small, the campaign isn't worth analyzing anyway. (Although I have seen certain clients apply Fisher's exact test because they had only a handful of conversions, each worth tens of thousands of dollars, and campaigns for other big-ticket items like sports cars would likewise need to be analyzed with a very small number of conversions.) The z-test of equal proportions can be understood as modeling each group as a Binomial distribution: $$ \begin{align} A & \sim \text{Binomial}(n_A, p_A) \\ B & \sim \text{Binomial}(n_B, p_B) \\ \Delta & = B - A \end{align} $$ But what is the distribution of an RV defined as the difference of two Binomial RVs? Well, it's actually a bit intractable, so we approximate it with a normal distribution. By dividing by the standard error of $\Delta$, we can obtain a $z$ statistic which has, under the null hypothesis and our approximation, a standard normal distribution. 
This makes it very easy and intuitive to reason about. With the other tests we gain a modicum of validity at small sample sizes, but lose some of this ease of interpretation.
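As an illustration, here is a short Python sketch (my own addition, assuming numpy and scipy) that runs the pooled two-proportion z-test on the counts tabulated above; statsmodels' proportions_ztest should give essentially the same numbers.

import numpy as np
from scipy.stats import norm

conv = np.array([351, 41])      # conversions in groups A and B (from the table above)
n = np.array([956, 236])        # total exposed in each group

p_hat = conv / n                             # per-group conversion rates
p_pool = conv.sum() / n.sum()                # pooled rate under the null of equal proportions
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n[0] + 1 / n[1]))

z = (p_hat[0] - p_hat[1]) / se               # z statistic for the difference in proportions
p_value = 2 * norm.sf(abs(z))                # two-sided p-value
print(z, p_value)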
Why is chi-squared / z-test used for a/b testing in marketing?
A typical A/B test in marketing is fundamentally a test of equal proportions, and there are several ways to perform this test. In a marketing campaign, a certain number of people are contacted or expo
Why is chi-squared / z-test used for a/b testing in marketing? A typical A/B test in marketing is fundamentally a test of equal proportions, and there are several ways to perform this test. In a marketing campaign, a certain number of people are contacted or exposed to an impression, and of those a certain number will "convert," which often means purchase something, but can be some other evidence of engagement, such as creating an account or simply clicking on something. This basic framework holds true for direct mail campaigns, email campaigns, display and paid search advertisements, and so on. To A/B test a marketing campaign, we divide the people who will are exposed into two groups, conventionally called A and B, and use a different message, appeal, call-to-action, or graphic design for the two groups. Which version an individual is exposed to is completely random. For each group then, we know the total number of people exposed to each version of the message, and we know the number of conversions for each group. The data can be tabulated like so: | A | B ----------------+------+---- did not convert | 605 | 195 converted | 351 | 41 These same data could be displayed in an equivalent form: | n | p --------+------+------ Group A | 956 | 0.367 Group B | 236 | 0.174 We are interested to know if the two groups have the same conversion rate (the proportion of conversions in each group) or if there is a statistically significant different in conversion rate. In the language of statistics, we need to perform a two-sided test of equal proportions. The null hypothesis is that the two groups have equal proportions, and the alternative hypothesis is that the "true" proportions are not equal. There are three popular tests to consider: Fisher's exact test, Pearson's $\chi^2$ test, and the z-test on equal proportions. These vary mainly by the degree of approximation involved, and consequently by the total sample size required to before we meet the assumptions of the test. For a typical marketing campaign, we will have many thousands of impressions and conversions. The z-test is used because it is easy to interpret and makes it easy to talk about effect size and confidence intervals. In some cases, some of the counts will be small (less than 30) usually for the number of conversions. In those cases, the $\chi^2$ or Fisher's exact test will be used for formal hypothesis testing and reporting significance level. In my experience, the z-test is most often used, because if the sample sizes are so small, the campaign isn't worth analyzing anyway. (Although I have seen certain clients apply Fisher's exact test because they had only a handful of conversion, each worth tens of thousands of dollars, and perhaps other big ticket items like sports cars would likewise need to analyze campaigns with a very small number of conversions.) The z-test of equal proportions can be understood as modeling each group as a Binomial distribution: $$ \begin{align} A & \sim \text{Binomial}(n_A, p_A) \\ B & \sim \text{Binomial}(n_B, p_B) \\ \Delta & = B - A \end{align} $$ But what is the distribution of an RV defined as the difference of two Binomial RVs? Well, it's actually a bit intractable, so we approximate it with a normal distribution. By dividing by the standard error of $\Delta$, we can obtain a $z$ statistic which has, under the null hypothesis and our approximation, a standard normal distribution. 
You can easily find the formulas for this standard error; what's important to remember is that it is the difference in proportions between the two groups that drives our test statistic. This makes it very easy and intuitive to reason about. With the other tests we gain a modicum of validity at small sample sizes, but lose some of this ease of interpretation.
Why is chi-squared / z-test used for a/b testing in marketing? A typical A/B test in marketing is fundamentally a test of equal proportions, and there are several ways to perform this test. In a marketing campaign, a certain number of people are contacted or expo
49,122
Why is chi-squared / z-test used for a/b testing in marketing?
Totally marketing-naive, but in general you're testing if the difference between A and B can be attributed to random chance or not, i.e. is the difference significant? A chi-squared test tests whether a ratio is significantly different (e.g. ratio of visitors that click on a link) and a z-test tests whether means are significantly different. I believe you're slightly confused, in that both tests do directly compare the A version to the B version. There are probably many more out there, but the discussion in this question might be helpful re: the logic of using the tests. A/B tests: z-test vs t-test vs chi square vs fisher exact test
Why is chi-squared / z-test used for a/b testing in marketing?
Totally marketing-naive, but in general you're testing if the difference between A and B can be attributed to random chance or not, i.e. is the difference significant? A chi-squared test tests whether
Why is chi-squared / z-test used for a/b testing in marketing? Totally marketing-naive, but in general you're testing if the difference between A and B can be attributed to random chance or not, i.e. is the difference significant? A chi-squared test tests whether a ratio is significantly different (e.g. ratio of visitors that click on a link) and a z-test tests whether means are significantly different. I believe you're slightly confused, in that both tests do directly compare the A version to the B version. There are probably many more out there, but the discussion in this question might be helpful re: the logic of using the tests. A/B tests: z-test vs t-test vs chi square vs fisher exact test
Why is chi-squared / z-test used for a/b testing in marketing? Totally marketing-naive, but in general you're testing if the difference between A and B can be attributed to random chance or not, i.e. is the difference significant? A chi-squared test tests whether
49,123
Estimation in STAN - help modelling a multinomial
You are doing the right thing. According to the Stan User Manual, the multinomial distribution figures out what N, the total count, is by calculating the sum of y. In your case, it will know that there were 7 subjects in the first row by calculating 0 + 1 + 6. Stan can't do this for the binomial distribution, since the data there is just the number of successes and not the number of successes and the number of failures (from which it would have been able to calculate N). If you expect the same parameter $\theta$ to govern all of your replicates, then you should just estimate a single $\theta$, the way you are doing now. If for some reason you think that the different replicates will have different probability vectors governing them, you should estimate separate $\theta$s.
Estimation in STAN - help modelling a multinomial
You are doing the right thing. According to the Stan User Manual, the multinomial distribution figures out what N, the total count, is by calculating the sum of y. In your case, it will know that ther
Estimation in STAN - help modelling a multinomial You are doing the right thing. According to the Stan User Manual, the multinomial distribution figures out what N, the total count, is by calculating the sum of y. In your case, it will know that there were 7 subjects in the first row by calculating 0 + 1 + 6. Stan can't do this for the binomial distribution, since the data there is just the number of successes and not the number of successes and the number of failures (from which it would have been able to calculate N). If you expect the same parameter $\theta$ to govern all of your replicates, then you should just estimate a single $\theta$, the way you are doing now. If for some reason you think that the different replicates will have different probability vectors governing them, you should estimate separate $\theta$s.
Estimation in STAN - help modelling a multinomial You are doing the right thing. According to the Stan User Manual, the multinomial distribution figures out what N, the total count, is by calculating the sum of y. In your case, it will know that ther
49,124
During oversampling of rare events, why are the beta coefficients of the independent variables not affected, but only the intercept?
Let $X$ and $Y$ be two binary random variables, e.g. exposure and disease status. Here is the logistic model: $logit(\pi(X=x)) = \alpha + \beta x $ where $\pi(X=x)=P(Y=1|X=x)$. Then, $\alpha = logit(\pi(X=0)) = \log (\frac{\pi(X=0)}{1-\pi(X=0)})$ so $\alpha$ is simply the log odds of disease for the unexposed, $X=0$. But why is the estimator of $\alpha$ biased when the diseased are oversampled? This sampling scheme is called case-control. In case-control sampling, the disease status $Y$ is fixed because one generally fixes the number of diseased and non-diseased and NOT the exposure status. Consequently, $X|Y \sim Binom$, so we can only estimate the probability of exposure given the disease status, which is not what we want. We want the probability of disease given exposure, $\pi(X=x) = P(Y=1|X=x)$. But in case-control sampling, $\pi(X=x)$ is not estimable. To see this, in case-control we only observe $P(Y=1|X=x, \text{Sampled}=1)$, while the marginal probability is $P(Y=1|X=x) = P(Y=1|X=x, \text{Sampled}=0) P(\text{Sampled}=0) + P(Y=1|X=x, \text{Sampled}=1) P(\text{Sampled}=1).$ (Note: $P(\text{Sampled}|X=x)=P(\text{Sampled})$ because the sampling is not based on exposure status.) So we have learnt that, unless you know the sampling probabilities, the estimator for $\pi(X=x) = P(Y=1|X=x)$ will be biased because $P(Y=1|X=x) \neq P(Y=1|X=x, \text{Sampled}=1)$; therefore $\alpha = logit(\pi(X=0))$ is estimated with bias, and this holds in general, not only under the rare-disease condition. Moving on to $\beta$, you can see from the logistic model that $\beta = logit(\pi(X=1)) - logit(\pi(X=0)) = \log(\frac{\pi(X=1)/(1-\pi(X=1))}{\pi(X=0)/(1-\pi(X=0))}) = \Omega$ which is the log odds ratio between exposed and unexposed. This has some nice properties, among which the most important one is its "unbiasedness" under different sampling schemes. To see this, with a simple probability exercise (Bayes' rule), we can show $\Omega=\log(\frac{P(Y=1|X=1) P(Y=0|X=0)}{P(Y=1|X=0) P(Y=0|X=1)}) =\log(\frac{P(X=1|Y=1) P(X=0|Y=0)}{P(X=1|Y=0) P(X=0|Y=1)})$ We saw earlier that $X|Y \sim Bin$ in case-control sampling, so there is no problem estimating $P(X|Y)$; therefore $\hat{\beta}$ is unbiased.
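Here is a small simulation sketch of my own (assuming numpy and statsmodels; the true $\alpha=-3$, $\beta=1$ and the sampling scheme are arbitrary choices) showing that oversampling cases shifts the intercept estimate while leaving the slope essentially unchanged.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, alpha, beta = 200_000, -3.0, 1.0

x = rng.binomial(1, 0.3, size=N)                       # exposure
p = 1 / (1 + np.exp(-(alpha + beta * x)))              # P(Y=1 | X)
y = rng.binomial(1, p)                                 # disease status

def fit(yy, xx):
    return sm.Logit(yy, sm.add_constant(xx)).fit(disp=0).params

print(fit(y, x))                                       # roughly [-3, 1] on the full cohort

# case-control: keep all cases, subsample an equal number of controls
cases = np.where(y == 1)[0]
controls = rng.choice(np.where(y == 0)[0], size=len(cases), replace=False)
idx = np.concatenate([cases, controls])
print(fit(y[idx], x[idx]))                             # intercept shifted, slope still near 1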
During oversampling of rare events, why are the beta coefficients of the independent variables not a
Let 2 binary random variables $X$ and $Y$ e.g. exposure and disease status. Here is the logistic model: $logit(\pi(X=x)) = \alpha + \beta x $ where $\pi(X=x)=P(Y=1|X=x)$. Then, $\alpha = logit(\pi(
During oversampling of rare events, why are the beta coefficients of the independent variables not affected, but only the intercept? Let 2 binary random variables $X$ and $Y$ e.g. exposure and disease status. Here is the logistic model: $logit(\pi(X=x)) = \alpha + \beta x $ where $\pi(X=x)=P(Y=1|X=x)$. Then, $\alpha = logit(\pi(X=0)) = \log (\frac{\pi(X=0)}{1-\pi(X=0)})$ so $\alpha$ is simply the log of odds for the unexposed, $X=0$. But why is the estimator of $\alpha$ biased when the diseased are oversampled? This sampling scheme is called case-control. In case-control, the disease status $Y$ is fixed because one generally fixes the number of diseased and non-diseased and NOT the exposure status. Consequently, $X|Y \sim Binom$ so we can only estimate the probability of exposure given the disease status, which is not what we want. We want the probability of disease given exposure $\pi(X=x) = P(Y=1|X=x)$. But in case control, $\pi(X=x)$ is not estimable. To see this, in case-control, we only have $P(Y=1|X=x, \text{Case Sampled}=1)$ and the marginal probability is $P(Y=1|X=x) = P(Y=1|X=x, \text{Case Sampled}=0) P(\text{Case Sampled}=0) + P(Y=1|X=x, \text{Case Sampled}=1) P(\text{Case Sampled}=1).$ (Note: $P(\text{Case Sampled}|X=x)=P(\text{Case Sampled})$ because the sampling is not based on exposure status) So, we've learnt that, unless you know the sampling probability, the estimator for $\pi(X=x) = P(Y=1|X=x)$ will be biased because $\pi(X=x) \neq \pi(X=x, Sampled=1)$, therefore, $\alpha = logit(\pi(X=0))$ is biased and this is general, not only in rare disease condition. Moving on to $\beta$, you can see from the logistic model that $\beta = logit(\pi(X=1)) - logit(\pi(X=0)) = \log(\frac{\pi(X=1)/(1-\pi(X=1))}{\pi(X=0)/(1-\pi(X=0))}) = \Omega$ which is the log odds ratio between exposed and unexposed. This has some nice properties, among which the most important one is its "unbiasedness" under different sampling. To see this, with simple probability exercise, we can show $\Omega=\log(\frac{P(Y=1|X=1) P(Y=0|X=0)}{P(Y=1|X=0) P(Y=0|X=1)}) =\log(\frac{P(X=1|Y=1) P(X=0|Y=0)}{P(X=1|Y=0) P(X=0|Y=1)})$ We saw earlier that $X|Y \sim Bin$ in case-control so there is no problem estimating $P(X|Y)$, therefore, $\hat{\beta}$ is unbiased.
During oversampling of rare events, why are the beta coefficients of the independent variables not a Let 2 binary random variables $X$ and $Y$ e.g. exposure and disease status. Here is the logistic model: $logit(\pi(X=x)) = \alpha + \beta x $ where $\pi(X=x)=P(Y=1|X=x)$. Then, $\alpha = logit(\pi(
49,125
Distance between discrete histograms
This is a partial answer. I don't give a solution, but explanations of why it does not work for some distances. When using a distance between two distributions it is important to distinguish between two kinds of distances. Let's take a simple example. You have a range of values $\{0,1,...,100\}$. Then consider the distribution $\delta_{a}$: it gives all the weight (probability 1) to $a$. It is a single-bar histogram. Distances on distributions treat the distance between $\delta_{a}$ and $\delta_{b}$ differently. Some distances will consider that $\delta_{0}$ is as far from $\delta_{1}$ as from $\delta_{100}$, because they do not take the distance between $a$ and $b$ into account. Examples: Kullback–Leibler divergence, Hellinger distance, Bhattacharyya distance, different vector norms (L1, L2, Inf), Chi-squared. You can rule them out because it's not what you want (as far as I understand). Some distances, however, use the idea that $\delta_{0}$ is closer to $\delta_{1}$ than to $\delta_{100}$: values that are close on the underlying axis are considered close by the distance on distributions. This includes the Wasserstein metric. It is reasonable to use the Wasserstein metric. The fact that it does not work here may be related to the small additional noise everywhere in the "orange" distribution, or to the fact that the total weight of an orange impulse may not match the weight of the closest "blue" impulse. The Wasserstein metric will try to "assign" each point (in the noise or in the extra weight of an impulse) to another impulse, which can produce an undesired and impractical result.
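A small illustration of the distinction (my own sketch, assuming scipy; the point masses are the $\delta_a$ example above): the Wasserstein distance grows with how far apart the supports are, while an L1-type distance between the histograms stays the same.

import numpy as np
from scipy.stats import wasserstein_distance

# point masses delta_0, delta_1, delta_100 represented as one-point samples
print(wasserstein_distance([0], [1]))     # 1.0
print(wasserstein_distance([0], [100]))   # 100.0

# L1 distance between the histograms over {0, 1, ..., 100} ignores the geometry
d0, d1, d100 = (np.eye(101)[i] for i in (0, 1, 100))
print(np.abs(d0 - d1).sum(), np.abs(d0 - d100).sum())   # both 2.0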
Distance between discrete histograms
This is a partial answer. I don't give a solution but explanations why it does not work for some distances. When using a distance between two distributions it is important to distinguish between two k
Distance between discrete histograms This is a partial answer. I don't give a solution but explanations why it does not work for some distances. When using a distance between two distributions it is important to distinguish between two kind of distances. Let's take a simple example. You have a range of values $\{0,1,...100\}$. Then consider the distribution $\delta_{a}$: it gives all the weight (probability 1) to $a$. It is a single bar histogram. Distances on distributions treat the distance between $\delta_{a}$ and $\delta_{b}$ differently. Some distances will consider that $\delta_{0}$ is as far from $\delta_{1}$ as from $\delta_{100}$ because it does not consider the distance between $a$ and $b$. Examples: Kullback–Leibler divergence Hellinger distance Bhattacharyya distance Different vector norms (L1, L2, Inf) Chi-squared You can rule them out because it's not what you want (as far as I understand). Some distances however use the idea that $\delta_{0}$ is closer to $\delta_{1}$ than $\delta_{100}$. Values who are close are considered close by the distance on distributions. This includes : Wasserstein metric. It is reasonable to use the Wasserstein metric. The fact it does not work may be related to the small additional noise everywhere in the "orange" distribution or to the fact that the total weight of an orange impulse may not match the weight of the closest "blue" impulse. The Wasserstein metric will try to "assign" each point (in the noise or extra weight of the impulse) to another impulse, resulting in an undesired and impracticable result.
Distance between discrete histograms This is a partial answer. I don't give a solution but explanations why it does not work for some distances. When using a distance between two distributions it is important to distinguish between two k
49,126
Notation for median in a formula
As Wikipedia says there is no standard notation so if it is clear in the text you can use any notation you want. For example if you state precisely that for any set of integers $A$, $med(A)$ represents the median of the set $A$ then you can use this notation as in your example.
Notation for median in a formula
As Wikipedia says there is no standard notation so if it is clear in the text you can use any notation you want. For example if you state precisely that for any set of integers $A$, $med(A)$ represent
Notation for median in a formula As Wikipedia says there is no standard notation so if it is clear in the text you can use any notation you want. For example if you state precisely that for any set of integers $A$, $med(A)$ represents the median of the set $A$ then you can use this notation as in your example.
Notation for median in a formula As Wikipedia says there is no standard notation so if it is clear in the text you can use any notation you want. For example if you state precisely that for any set of integers $A$, $med(A)$ represent
49,127
Utility of the Bayesian Cramer-Rao Bound (van Trees inequality)
I would wager that this has a lot to do with the fact that many Bayesian models do not have tractable posterior distributions, so that the van Trees inequality is likely to be of little use. Additionally, a major thrust of Bayesian research over the last 30 years or so has been on Markov chain Monte Carlo (MCMC) methods. MCMC methods are a collection of methods to draw samples from the posterior distribution. With enough of these samples you can calculate the numeric value of any variances you want and do not need to worry about using the van Trees inequality.
Utility of the Bayesian Cramer-Rao Bound (van Trees inequality)
I would wager that this has a lot to do with the fact that many Bayesian models do not have tractable posterior distributions so that the van Trees inquality is likely to be of little use. Additionall
Utility of the Bayesian Cramer-Rao Bound (van Trees inequality) I would wager that this has a lot to do with the fact that many Bayesian models do not have tractable posterior distributions so that the van Trees inquality is likely to be of little use. Additionally, a major thrust of Bayesian research over the last 30 years or so has been on Markov chain Monte Carlo (MCMC) methods. MCMC methods are a collection of methods to draw samples from the posterior distribution. With enough of these samples you can calculate the numeric value of any variances you want and do not need to worry about using the van Trees inequality.
Utility of the Bayesian Cramer-Rao Bound (van Trees inequality) I would wager that this has a lot to do with the fact that many Bayesian models do not have tractable posterior distributions so that the van Trees inquality is likely to be of little use. Additionall
49,128
How to understand / calculate FLOPs of the neural network model?
input_shape = (3, 300, 300)  # Format: (channels, rows, cols)
conv_filter = (64, 3, 3, 3)  # Format: (num_filters, channels, rows, cols)
stride = 1
padding = 1
activation = 'relu'

n = conv_filter[1] * conv_filter[2] * conv_filter[3]  # vector_length
flops_per_instance = n + 1  # general definition for number of flops (n: multiplications and n-1: additions)

num_instances_per_filter = ((input_shape[1] - conv_filter[2] + 2*padding) / stride) + 1  # for rows
num_instances_per_filter *= ((input_shape[2] - conv_filter[3] + 2*padding) / stride) + 1  # multiplying with cols

flops_per_filter = num_instances_per_filter * flops_per_instance
total_flops_per_layer = flops_per_filter * conv_filter[0]  # multiply with number of filters

if activation == 'relu':
    # Here one can add the number of flops required
    # Relu takes 1 comparison and 1 multiplication
    # Assuming for Relu: number of flops equal to length of input vector
    total_flops_per_layer += conv_filter[0] * num_instances_per_filter

print(total_flops_per_layer)
This might help you.
How to understand / calculate FLOPs of the neural network model?
input_shape = (3,300,300) # Format:(channels, rows,cols) conv_filter = (64,3,3,3) # Format: (num_filters, channels, rows, cols) stride = 1 padding = 1 activation = 'relu' n = conv_filter[1] * conv_f
How to understand / calculate FLOPs of the neural network model? input_shape = (3,300,300) # Format:(channels, rows,cols) conv_filter = (64,3,3,3) # Format: (num_filters, channels, rows, cols) stride = 1 padding = 1 activation = 'relu' n = conv_filter[1] * conv_filter[2] * conv_filter[3] # vector_length flops_per_instance = n + 1 # general defination for number of flops (n: multiplications and n-1: additions) num_instances_per_filter = (( input_shape[1] - conv_filter[2] + 2*padding) / stride ) + 1 # for rows num_instances_per_filter *= (( input_shape[2] - conv_filter[3] + 2*padding) / stride ) + 1 # multiplying with cols flops_per_filter = num_instances_per_filter * flops_per_instance total_flops_per_layer = flops_per_filter * conv_filter[0] # multiply with number of filters if activation == 'relu': # Here one can add number of flops required # Relu takes 1 comparison and 1 multiplication # Assuming for Relu: number of flops equal to length of input vector total_flops_per_layer += conv_filter[0]*num_instances_per_filter print(total_flops_per_layer) This might help you.
How to understand / calculate FLOPs of the neural network model? input_shape = (3,300,300) # Format:(channels, rows,cols) conv_filter = (64,3,3,3) # Format: (num_filters, channels, rows, cols) stride = 1 padding = 1 activation = 'relu' n = conv_filter[1] * conv_f
49,129
Prediction on individual cases in survival analysis
There isn't anything unique about survival analysis that prevents individual prediction. Just like other regression techniques, you can make individual predictions. In fact, survival analysis often gives you something better: the full distribution of the duration! Let me explain. Linear regression gives you an estimate for $E[Y_i|x_i]$, which is a summary statistic for the distribution of the random variable $Y_i | x_i$. If you did have the distribution of $Y_i | x_i$, you could compute the expected value, but also other quantities like the median, or some other business-influenced summary statistic. But alas, linear regression only gives you the expected value. Survival regression, on the other hand, focuses on estimating the survival function (what you call survival probability over time). The predicted survival function is an estimate for $P(Y_i > t | x_i)$, which carries the same information as the distribution of $Y_i | x_i$. Hence we can choose the summary statistic we want: $E[Y_i | x_i]$, the median, some percentile, etc. You mention expected time-to-event for an individual, so it sounds like you want the expected value. You can compute this from the survival function yourself, but often the software does it for you. In Python, two packages that can do this are lifelines and scikit-survival.
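For example, with lifelines (a sketch of my own, assuming a reasonably recent version of the package; it uses lifelines' bundled Rossi recidivism dataset, whose duration and event columns are "week" and "arrest"):

from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()
cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")

# covariates of a single (here: the first) subject
subject = rossi.drop(columns=["week", "arrest"]).iloc[[0]]

surv_curve = cph.predict_survival_function(subject)  # P(T > t | x) over time
median_t = cph.predict_median(subject)               # individual median time-to-event
expected_t = cph.predict_expectation(subject)        # individual expected time-to-event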
Prediction on individual cases in survival analysis
There isn't anything unique about survival analysis that prevents individual prediction. Just like other regression techniques, you can make individual predictions. In fact, survival analysis often gi
Prediction on individual cases in survival analysis There isn't anything unique about survival analysis that prevents individual prediction. Just like other regression techniques, you can make individual predictions. In fact, survival analysis often gives you something better: the full distribution of the duration! Let me explain. Linear regression gives you an estimate for $E[Y_i|x_i]$, which is a summary statistic for the distribution of the random variable $Y_i | x_i$. If you did have the distribution of $Y_i | x_i$, you could compute the expected value, but also other quantities like the median, or some other business-influenced summary statistic. But alas, linear regression only gives you the expected value. Survival regression, on the other hand, focuses on estimating the survival function (what you call survival probability over time). The predicted survival function is an estimate for $P(Y_i > t | x_i)$, which has the same information as the distribution of $Y_i | x_i$. Hence we can choose the summary statistic, like $E[Y_i | x_i]$, or the median, or some percentile, etc. You mention, expected time-to-event for individual so it sounds like you want the expected value. Either you have to compute this from the survival function, but often the software does this for you. In Python, two packages that can do this are lifelines and scikit-survival.
Prediction on individual cases in survival analysis There isn't anything unique about survival analysis that prevents individual prediction. Just like other regression techniques, you can make individual predictions. In fact, survival analysis often gi
49,130
Prediction on individual cases in survival analysis
As far as I know, individual prediction is a whole other type of analysis. You can't just simply predict for an individual, as you have to take into account all the different predictive determinants/characteristics of that individual case. So you'll have to construct a risk model for individualized prediction (which you'll have to not only derive from a cohort, but also preferably validate to see how well it predicts). So this is not survival analysis and also takes quite a lot of time to learn (I say this from experience unfortunately). Disclaimer: I'm relatively new to individual prediction and just starting to learn, but my department does a lot of individualized prediction. So if there is someone who is more knowledgeable in this subject, feel free to correct me if my assumptions are wrong.
Prediction on individual cases in survival analysis
As far as I know, individual prediction is a whole other type of analysis. You can't just simply predict for an individual, as you have to take into account all the different predictive determinants/c
Prediction on individual cases in survival analysis As far as I know, individual prediction is a whole other type of analysis. You can't just simply predict for an individual, as you have to take into account all the different predictive determinants/characteristics of that individual case. So you'll have to construct a risk model for individualized prediction (which you'll have to not only derive from a cohort, but also preferably validate to see how well it predicts). So this is not survival analysis and also takes quite a lot of time to learn (I say this from experience unfortunately). Disclaimer: I'm relatively new to individual prediction and just starting to learn, but my department does a lot of individualized prediction. So if there is someone who is more knowledgeable in this subject, feel free to correct me if my assumptions are wrong.
Prediction on individual cases in survival analysis As far as I know, individual prediction is a whole other type of analysis. You can't just simply predict for an individual, as you have to take into account all the different predictive determinants/c
49,131
Convolutional neural networks backpropagation
A convolutional network layer is just a fully-connected layer where two things are true: certain connections are removed, meaning their weights are forced to be a constant zero, so you can just ignore these connections/weights; and each of the non-zero weights is shared across multiple connections, i.e. across multiple pairs of input/output neurons. When a weight is shared across connections, the gradient update for that weight is the sum of the gradients across all the connections that share the same weight. So then looking at your questions: Or are you supposed to update each weight (each value in the 2x2 filter) individually by its own gradient like in a normal artificial neural network ANN? Each weight is updated just like in a 'normal ANN', which I interpret to mean a fully-connected, or 'dense', or 'linear' layer. However, as stated, the update applied to each of the weights will be the sum of the gradients from all the connections that share that weight. The 2x2 delta is backpropagated onto a 2x2 filter which creates a 3x3 grid of the change in weights If you are supposed to add up those 9 values and then add them to the original 2x2 filter at all positions it will just change each value in the 2x2 filter by the same amount and in the same directions. Is that the correct way to update weights in a CNN? The gradients 'flow back' to where they came from. In the forward pass, each weight in the CNN 'kernel' is used to calculate the output of multiple connections. In the backward pass, the gradients 'flow backwards' along those exact same connections, onto the originating weights. In the worst case, rather than thinking of the CNN as some magical thing, you can, as stated, consider a CNN to be a standard fully-connected/linear layer, with many connections/weights forced to be zero, and the remaining weights shared across multiple connections. Then you can use your existing knowledge of fully-connected/linear layers to handle the CNN layer. For background, some conceptual aims/motivations of a CNN layer compared to a fully-connected layer are: it enforces a prior on adjacency, since pixels in images tend to be more correlated with adjacent pixels, so we keep only connections between nearby input/output pixels and remove the other connections; and, partly to reduce the number of parameters (to avoid over-fitting) and partly to enforce a prior of invariance to translation, we share the weights, so that the same input pattern gives the same output no matter where in the image it appears, simply translated in the output map.
Convolutional neural networks backpropagation
A convolutional network layer is a just a fully-connected layer where two things are true: certain connections are removed; means their weights are forced to be constant zero. So you can just ignore
Convolutional neural networks backpropagation A convolutional network layer is a just a fully-connected layer where two things are true: certain connections are removed; means their weights are forced to be constant zero. So you can just ignore these connections/weights each of the non-zero weights is shared across multiple connections, ie across multiple pairs of input/output neurons When a weight is shared across connections, the gradient update for that weight is the sum of the gradients across all the connections that share the same weight. So then looking at your questions: Or are you supposed to update each weight (each value in the 2x2 filter) individually by its own gradient like in a normal artificial neural network ANN? Each weight is updated just like in a 'normal ann', by which case I interpret this to mean in a fully-connected, or 'dense', or 'linear' layer, right? However, as stated, the update applied to each of the weights will be the sum of the gradients from all the connections that share that weight. The 2x2 delta is backpropagated onto a 2x2 filter which creates a 3x3 grid of the change in weights If you are supposed to add up those 9 values and then add them to the original 2x2 filter at all positions it will just change each value in the 2x2 filter by the same amount and in the same directions. Is that the correct way to update weights in a CNN? The gradients 'flow back' to where they came. In the forward pass, each weight in the CNN 'kernel' will be used to calculate the output of multiple connections. In the backward pass, the gradients will 'flow backwards', along those exact same connections, onto the originating weights. In the worst case, rather than thinking of the CNN as some magical thing, you can, as stated, consider a CNN to be a standard fully-connected/linear layer, with many connections/weights forced to be zero, and the remaining weights shared across multiple connections. Then, you can use your existing knowledge for fully-connected / linear layers, to handle the CNN layer. For background, some conceptual aims/motivations of a CNN layer compared to a fully-connected layer are: enforces a prior on adjacency. Since pixels in images tend to have more correlation and relationships with adjacent pixels, so we keep only connections between adjacent input/output pixels, and remove the other connections partly to remove parameters, to avoid over-fitting, and partly because we want to enforce a prior on invariance to translation, we share the weights such that input pixels in one region will give identical outputs, no matter where in the image they are, but simply translated in the output image
Convolutional neural networks backpropagation A convolutional network layer is a just a fully-connected layer where two things are true: certain connections are removed; means their weights are forced to be constant zero. So you can just ignore
49,132
Convolutional neural networks backpropagation
In the answer you reference, the goal is to calculate the update for $w_k$, one of the weights in one filter of a (convolutional) neural network. The update formula for any parameter in a neural network, using gradient descent, is quite simple if you just express it as a derivative of the cost function $J$: $\Delta w_k = -\eta \frac{\partial J}{\partial w_k}$ But computing $\frac{\partial J}{\partial w_k}$ is where things get tricky. In a convolutional neural network, this weight is shared, so you need to calculate $\frac{\partial J}{\partial w_k} = \sum_{l} \frac{\partial J}{\partial w_k^{(l)}}$, where $w_k^{(l)}$ denotes the $l$-th occurrence of that shared weight and the sum is taken over all of its occurrences. So, if you have a 2x2 filter and an 8x8 image (stride 1), the filter is applied 7x7=49 times, and in each application of the filter the top-left weight is used. So, the sum is over these 49 values. For the top-right weight, the sum is still over 49 values, but it's now a different set of values, and so the update is different.
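A minimal numpy sketch of that sum for the 2x2-filter / 8x8-image case (my own illustration; "delta" stands for the upstream gradient at the 7x7 output, and the values are random placeholders):

import numpy as np

H, W, kh, kw = 8, 8, 2, 2
x = np.random.randn(H, W)                          # input image
delta = np.random.randn(H - kh + 1, W - kw + 1)    # upstream gradient at the 7x7 output

grad_w = np.zeros((kh, kw))
for i in range(kh):
    for j in range(kw):
        # every output position (r, c) multiplied w[i, j] by x[r + i, c + j],
        # so the gradient for this shared weight is the sum over all 49 positions
        grad_w[i, j] = np.sum(delta * x[i:i + delta.shape[0], j:j + delta.shape[1]])
print(grad_w)   # one (different) summed gradient per filter weight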
49,133
Deal with percentage data
This kind of data is known as compositional data, and you might find this interesting summary of transformation techniques to be helpful. You designate one of the markers as the baseline and it won't be used directly in any analysis, though you can back out its results in the end.
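As a concrete illustration of the baseline idea, here is a small base-R sketch of the additive log-ratio (alr) transform; the data and marker names are invented for the example, so adapt them to your own variables:

    set.seed(1)
    # toy compositions: 4 markers per subject, each row sums to 1
    x <- matrix(runif(40), 10, 4)
    x <- x / rowSums(x)
    colnames(x) <- c("markerA", "markerB", "markerC", "baseline")

    # additive log-ratio transform relative to the chosen baseline marker
    alr <- log(x[, 1:3] / x[, 4])
    head(alr)

    # the alr values are unconstrained, so ordinary multivariate methods can be applied;
    # the baseline's share can be backed out afterwards as 1 / (1 + rowSums(exp(alr)))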
49,134
Iteratively updating the decomposition of a covariance matrix
I think you can achieve part of what you want by using an incremental SVD and/or an online PCA algorithm. Given a known decomposition we update it to take into account a new data-point. In terms of theoretical background, I would suggest you look at the computer vision literature. They have nice application papers that explain the relevant mechanics without getting too heavy into the numerics side of things. The papers "Incremental singular value decomposition of uncertain data with missing values" (Brand, 2002) and "Incremental learning for robust visual tracking" (Ross et al. 2008) are well-cited papers that can offer a good starting point. The whole idea is that the "forgetting vector" f encapsulates how much we value our recent points compared to the ones we have already observed. Setting f to something very small will lead us to obtain essentially a new eigen-decomposition for $n+1$ points given we have already seen $n$ points. In terms of implementations in R I would suggest you look at the package onlinePCA. It offers an online PCA implementation using incremental SVD (function incRpca); it also offers the updateCovariance and updateMean functions if you want to take matters into your own hands (for PCA purposes incremental SVD will be much faster though). The second part of the task at hand is removing the influence of existing data-points from the decomposition already computed. In that case the procedure we need is decremental SVD or SVD downdating. In terms of theoretical background, decremental SVD is a much less popular problem than incremental SVD and I have no experience with it. Starting references appear to be the papers "Merging and splitting eigenspace models" (Hall et al. 2000) and "Efficiently downdating, composing and splitting singular value decompositions preserving the mean information" (Melenchon and Martinez, 2007); I also came across a paper that implements, among other things, decremental LDA - Incremental and decremental LDA learning with applications (Pang et al. 2010) - which might prove useful too. I have not come across any decremental SVD routines. For a particular problem, if there are at most one or two consecutive down-dates regarding the most recent points (e.g. we just read in corrupted points), we might even get away with simply storing the past couple of decompositions and retrieving them. The Melenchon and Martinez paper outlines a relatively simple implementation if you are inclined to try your hand (in which case please publish it!). Update on the matter: I looked around; the Eigen library offers a rank update for their $LDL^T$ decomposition (i.e. a variant of the standard Cholesky decomposition) and it allows for down-dates. That means we can remove from a decomposition a previously-added vector. Eigen can be very easily linked with R through the excellent RcppEigen package. If it is possible to work with the Cholesky decomposition of the covariance matrix, this functionality might fully resolve the issue presented.
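For the "add one point" direction, the running mean and covariance updates are simple enough to write directly. The sketch below is a generic base-R illustration of the standard rank-one update formulas (it is not taken from the onlinePCA package, whose functions have their own interface):

    # add a new observation x to a running mean m and covariance C based on n points
    update_cov <- function(m, C, n, x) {
      m_new <- m + (x - m) / (n + 1)
      # rank-one update of the scatter matrix, then rescale to the unbiased covariance
      C_new <- ((n - 1) * C + tcrossprod(x - m_new, x - m)) / n
      list(mean = m_new, cov = C_new, n = n + 1)
    }

    set.seed(1)
    X  <- matrix(rnorm(50 * 3), 50, 3)
    st <- list(mean = colMeans(X[1:10, ]), cov = cov(X[1:10, ]), n = 10)
    for (i in 11:50) st <- update_cov(st$mean, st$cov, st$n, X[i, ])
    all.equal(st$cov, cov(X))   # matches the batch covariance

The same algebra run in reverse gives a down-date, which is where the numerical-stability concerns discussed above come in.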
49,135
How does clogit() (in R) handle incomplete strata?
The residual will be zero for the remaining observation in that stratum. There's no need to remove it, since it doesn't provide any information if there were only two observations in the stratum.

    > library(survival)
    > data(retinopathy)
    > head(retinopathy)
      id laser   eye age     type trt futime status risk
    1  5 argon  left  28    adult   1  46.23      0    9
    2  5 argon  left  28    adult   0  46.23      0    9
    3 14 argon right  12 juvenile   1  42.50      0    8
    4 14 argon right  12 juvenile   0  31.30      1    6
    5 16 xenon right   9 juvenile   1  42.27      0   11
    6 16 xenon right   9 juvenile   0  42.27      0   11
    >
    > allmodel <- clogit(status ~ trt + strata(id), data = retinopathy)
    > allmodel
    Call:
    clogit(status ~ trt + strata(id), data = retinopathy)

          coef exp(coef) se(coef)      z       p
    trt -1.371     0.254    0.280 -4.896 9.8e-07

    Likelihood ratio test=29.9  on 1 df, p=4.544e-08
    n= 394, number of events= 155
    > resid(allmodel)[1:4]
             1          2          3          4
     0.0000000  0.0000000 -0.2025316  0.2025316
    >
    > retinopathy$trt[3] <- NA
    > missmodel <- clogit(status ~ trt + strata(id), data = retinopathy)
    > missmodel
    Call:
    clogit(status ~ trt + strata(id), data = retinopathy)

           coef exp(coef) se(coef)      z        p
    trt -1.3545    0.2581   0.2804 -4.831 1.36e-06

    Likelihood ratio test=28.97  on 1 df, p=7.344e-08
    n= 393, number of events= 155 (1 observation deleted due to missingness)
    > resid(missmodel)[1:3]
    1 2 4
    0 0 0

[If there had been more than two observations in the stratum to begin with, the stratum would still be informative, of course. The residuals would then be what you'd expect for the remaining data in the stratum]
49,136
How does clogit() (in R) handle incomplete strata?
In principle, strata with missing values on the control and/or case observation should be removed from the analysis. I haven't used the "clogit" command recently, but I am pretty sure this is done automatically, or at least an error/warning message is displayed during estimation. Otherwise you could remove incomplete strata manually (as a general rule of thumb, I think it is actually better to do data cleaning yourself rather than relying on estimation commands).
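If you do want to drop incomplete strata yourself before fitting, a base-R sketch along these lines should work; dat, id, trt and status are placeholder names (mirroring the retinopathy example in the other answer), so substitute your own:

    # keep only strata (id) in which every observation has complete data on the model variables
    ok   <- complete.cases(dat[, c("status", "trt")])
    keep <- ave(ok, dat$id, FUN = all)        # TRUE only if the whole stratum is complete
    dat_complete <- dat[as.logical(keep), ]

    fit <- survival::clogit(status ~ trt + strata(id), data = dat_complete)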
49,137
Backpropagation algorithm in neural networks (NN) with logistic activation function
Yes, you got it right. Just to add (sorry for being nitpicky :): when you write $\frac{\partial E}{\partial y_i}$ it is implied that the output is a vector, so maybe writing $$\frac{\partial\frac{1}{2}\sum_i(t_i - y_i)^2}{\partial y_i} = \frac{\partial\frac{1}{2}(t_i - y_i)^2}{\partial y_i} = y_i - t_i$$ would be clearer, with $T = (t_1, \dots, t_n)^T$ being the correct output.
49,138
Uniform distribution of correlation coefficients in correlation matrix
It's not only possible, it's easy to create any distribution $F$ whatsoever supported on the interval $[-1/(N-2), 1]$, provided only that $K \le N-2$. Here's one way. It creates datasets in which all the variables have the same correlation with each other. Let $\rho$ be a random variable with distribution $F$. Define $U \ge 1/(N-1)$ as the unique solution to $$\rho = \frac{1 + 2 U - (N-1)U^2}{2 - 2(N-2)U + (N-1)(N-2)U^2}.$$ Set $V = (N-2)U-1$ and construct the $K$ vectors, each of length $N$, given by $$\left\{\eqalign{ X_1 &= (1, V, -U, -U, \ldots, -U) \\ X_2 &= (1, -U, V, -U, -U, \ldots, -U) \\ &\ldots \\ X_K &= (1, -U, -U, \ldots, -U, V, -U, \ldots, -U). }\right.$$ Each $X_i$ has a $1$ in the first place, $V$ in the $i+1^\text{st}$ place, and $-U$ everywhere else. A computation (which is simple because all the $X_i$ have zero means and the same variance) shows that $\rho$ is the correlation coefficient between each $X_i$ and $X_j$. Therefore all the correlation coefficients of these $K$ random vectors of length $N$ equal $\rho$, QED. Appendix: Illustration via simulation This R code simulates from a given distribution $F$. It displays histograms of the correlation coefficients and tests them for uniformity. The comments explain the details.

    #
    # Specify the situation.
    #
    N <- 20      # Dataset size
    K <- 4       # Number of variables
    n.sim <- 1e4 # Simulation size
    #
    # Predefine some objects.
    #
    f <- function(rho, n) {
      # Maps `rho` to `U`
      (1 + (n-2)*rho + sqrt(n * (1-rho)*(1+(n-2)*rho))) / ((n-1) * (1+(n-2)*rho))
    }
    pattern <- cbind(diag(rep(1, K)), matrix(0, K, N-K))
    mask <- lower.tri(outer(1:K, 1:K))
    #
    # Conduct the simulation.
    #
    # rF <- runif # The random number generator
    # qF <- qunif # The quantile function
    # dF <- dunif # The density function
    rF <- function(n) rbeta(n, 1, 3)
    qF <- function(q) qbeta(q, 1, 3)
    dF <- function(x) dbeta(x, 1, 3)
    rho <- rF(n.sim) # Draw values of `rho`
    #
    # Construct the data and compute their correlation coefficients.
    # Each row of `sim` will record one particular correlation coefficient.
    # Its columns are the iterations.
    #
    U <- f(rho, N)
    sim <- sapply(U, function(u) {
      v <- (N-1)*u - 1
      x <- matrix(rep(c(rep(-u, N-1), 1), K), nrow=K, byrow=TRUE) + v*pattern
      cor(t(x))[mask]
    })
    #
    # Display the distributions of the correlation coefficients.
    #
    n.plots <- choose(K,2)
    n.rows <- floor(sqrt(n.plots))
    n.cols <- ceiling(n.plots/n.rows)
    par(mfrow=c(n.rows, n.cols))
    breaks <- qF(seq(0, 1, by=1/20))
    invisible(apply(sim, 1, function(x) {
      H <<- hist(x, main="Marginal Histogram", freq=FALSE, breaks=breaks)
      curve(dF(x), add=TRUE, col="Red", lwd=2)
      #
      # Test the uniformity with a chi-squared test.
      #
      p <- chisq.test(H$counts)$p.value
      mtext(paste0("(Test of uniformity: p = ", signif(p, 3), ")"), cex=0.75)
    }))
    par(mfrow=c(1,1))
49,139
Terminology for "time-series of events"
The term intermittent comes to mind, reflecting a measure of an activity that takes place, but not at fixed intervals, such as the quantity of gas purchases for your auto.
Terminology for "time-series of events"
The term intermittent comes to mind reflecting a measure of an activity that takes place but not at fixed intervals such as the quantity of gas purchases for your auto.
Terminology for "time-series of events" The term intermittent comes to mind reflecting a measure of an activity that takes place but not at fixed intervals such as the quantity of gas purchases for your auto.
Terminology for "time-series of events" The term intermittent comes to mind reflecting a measure of an activity that takes place but not at fixed intervals such as the quantity of gas purchases for your auto.
49,140
Terminology for "time-series of events"
Unevenly spaced time series is a term that is used, while most statistics theory is about evenly spaced time series. In the comments a point process is also proposed, but that seems to be a special case where only the times themselves are observed and of interest. So if the observations are occurrence times of earthquakes in California, that would be a point process, also since there is no value associated with the other (non-occurrence) times. But when there is a continuous, underlying value, for example the luminosity of a variable star, but it is only observed at irregular times, due to weather conditions and observer availability, it is an unevenly spaced time series.
Terminology for "time-series of events"
Unevenly spaced time series is a term that is used. While most statistics theory is about evenly spaced time series. In the comments there is also proposed point process, but that seems to be a specia
Terminology for "time-series of events" Unevenly spaced time series is a term that is used. While most statistics theory is about evenly spaced time series. In the comments there is also proposed point process, but that seems to be a special case where only the times itself are observed and of interest. So if the observations are occurrence times of earthquakes in California, that would be a point process, also since there is no value associated with other (non-nonoccurence) times. But when there is a continuous, underlying value, for example the luminosity of a variable star, but it is only observed at irregular times, due to weather conditions and observer availability, it is an unevenly spaced time series.
Terminology for "time-series of events" Unevenly spaced time series is a term that is used. While most statistics theory is about evenly spaced time series. In the comments there is also proposed point process, but that seems to be a specia
49,141
Multinomial logistic regression with class probability as target variable
The polytomous extension of the beta regression is Dirichlet regression. For beta you have just one proportion $y$ which you could also see as a composition of $(y_1, 1 - y_1)$. More generally, one could also have $(y_1, y_2, \dots, y_{k-1}, 1 - \sum_{j = 1}^{k-1} y_j)$ with the additional restriction that $0 < y_j < 1 \forall j$. The Dirichlet distribution then provides a probabilistic model for this kind of data. And there are different parameterizations that could be employed in a regression setup. The R package DirichletReg at https://CRAN.R-project.org/package=DirichletReg implements two possible parameterizations. See http://epub.wu.ac.at/4077/ for an introduction.
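A minimal sketch of what such a fit might look like with the DirichletReg package (the column names y1, y2, y3 for the class probabilities and the predictor x are invented; check the package vignette linked above for the exact interface and parameterization options):

    library(DirichletReg)

    # y1, y2, y3 hold the observed class probabilities (each row sums to 1), x is a covariate
    dat$Y <- DR_data(dat[, c("y1", "y2", "y3")])   # wrap the compositions
    fit   <- DirichReg(Y ~ x, data = dat)
    summary(fit)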
49,142
FGLS and time fixed effects
Yes, including both violates certain statistical properties. The pggls documentation indirectly states exactly that: Conversely, this structure is assumed identical across groups and thus general FGLS estimation is inefficient under groupwise heteroskedasticity. Note also that this method requires estimation of T(T+1)/2 variance parameters, thus efficiency requires N >> T (if effect="individual", else the opposite). FGLS requires an estimator of the covariance matrix of the regression disturbances. For the individual effects panel data model: $$y_{it} = x_{it}\beta + c_i + u_{it},$$ where $c_{i}$ is an individual effect, it is assumed that $u_{it}$ is independent from $u_{jt}$ for each $i\neq j$, so you are left with $\frac{T(T+1)}{2}$ covariances $\text{cov}(u_{it},u_{is})$. For the time effects panel data model: $$y_{it} = x_{it}\beta + d_t + u_{it},$$ where $d_t$ is a time effect, it is assumed that $u_{it}$ is independent from $u_{is}$ for each $t\neq s$, so you are left with $\frac{N(N+1)}{2}$ covariances $\text{cov}(u_{it},u_{jt})$. Now if you have both time and individual effects: $$y_{it} = x_{it}\beta + c_i + d_t + u_{it},$$ the question arises: which covariances $\text{cov}(u_{it},u_{js})$ are zero? If you assume that none of them are zero, you are left with $\frac{NT(NT+1)}{2}$ unknown parameters with $NT$ data points, which makes the problem infeasible. Note 1. The independence assumption mentioned can be relaxed to zero covariances. Note 2. In both the individual and time effect cases there are $NT$ data points. FGLS is an asymptotic procedure and it requires that the number of data points increases while the number of parameters remains fixed. Hence for individual effects $N$ must increase, and for time effects $T$ must increase. More often than not these requirements are satisfied for totally different data sources, hence the answer to your problem depends on your data source. Is it more likely that $N$ is increasing or $T$? Since you mention time dummies, I suspect that $N$ is increasing, hence I suggest using time dummies.
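In plm terms that suggestion amounts to letting pggls handle the individual dimension and entering the time effects as dummies; a hedged sketch (y, x1, x2, year and the pdata.frame pdat are placeholder names, and you should check ?pggls for the current argument set):

    library(plm)

    # FGLS over the individual dimension, with time effects entered as dummies instead
    fit <- pggls(y ~ x1 + x2 + factor(year), data = pdat, effect = "individual")
    summary(fit)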
49,143
Probability of throwing n different numbers in m throws of a die
Let: $T_{n, m}$ be the event "exactly $n$ different numbers in $m$ throws of a die". $A$ be the event "in the $m^{th}$ throw, a number that has been seen before appears". $D$ be the number of sides on your die. We assume that the die is fair. The objective is to find $P (T_{n, m })$. Then by the law of total probability, we have: \begin{align} P (T_{n, m}) =& P (T_{n, m}|A)P(A) + P (T_{n, m}|\overline{A})P(\overline{A})\\ =& P (T_{n, m -1}|A)P(A) + P (T_{n-1, m-1}|\overline{A})P(\overline{A}) \\ =& P (A| T_{n, m -1})P (T_{n, m -1}) +P(\overline{A}|T_{n-1, m -1})P (T_{n-1, m -1}) \\ =& \frac{n}{D}P (T_{n, m -1}) + \frac{D - (n - 1)}{D}P (T_{n-1, m -1}) \end{align} Base cases: $P (T_{n, n })=\frac{D(D-1)\cdots(D-n+1)}{D^n}$ (all $n$ throws distinct) and $ n>m\Rightarrow P (T_{n, m }) =0 $.
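A short base-R implementation of this recursion, checked against simulation (the particular choice of D = 6, n = 4, m = 10 is arbitrary, just for the example):

    p_exact <- function(n, m, D) {
      if (n < 0 || n > m || n > D) return(0)
      if (m == 0) return(as.numeric(n == 0))
      n / D * p_exact(n, m - 1, D) + (D - n + 1) / D * p_exact(n - 1, m - 1, D)
    }

    # exactly 4 distinct faces in 10 throws of a 6-sided die: recursion vs. Monte Carlo
    p_exact(4, 10, 6)
    mean(replicate(1e5, length(unique(sample(6, 10, replace = TRUE))) == 4))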
49,144
What does the W value in Wilcoxon test mean
The Wilcoxon test does not test for equality of means; rather it tests $$H_0: P(X_a > X_b) = 0.5$$ namely that a randomly drawn observation of group a has a 50% chance of being larger than a randomly drawn observation from group b. Only if you see a location shift (i.e. the distributions in both groups have the same shape but a different mean (location)) can you formulate your conclusion in terms of means. With the p-value you have you can reject $H_0$ at the 5% significance level. The $W$ is the Wilcoxon test statistic and is, as the name says, the sum of the ranks in one of the two groups. You could enumerate the exact distribution of $W$ under $H_0$ by typing wilcox.test(raj_reps.a, raj_reps.b, exact=TRUE) This is very computer-intensive, so usually for such large sample sizes the asymptotic normality of $W$ is used for calculating the p-value, see ?wilcox.test
49,145
Self Play in Reinforcement Learning
I'm a chess player, so I'll use chess in my answer. (a) and (b) are not identical. In (a), you have two agents playing against each other. Their underlying models might not be comparable. Even if they had the same model, their parameters highly likely wouldn't converge simultaneously. This is like matching two different chess engines. In (b), you have an agent playing against itself as many times as possible. This is the typical definition of self-learning reinforcement learning, and is commonly seen in chess programming. (b) is very common in chess, so it does make sense. In chess, we can map the definitions like this: agent -> chess engine; both sides learning -> update piece-square-table (PST) and evaluation parameters (the PST table defines where White pieces and Black pieces should go; evaluation parameters define how a position should be evaluated statically, for example, rook on the seventh rank, two-bishop advantage, isolated pawns, protected passed pawn etc.); left side play -> white colour; right side play -> black colour; flips the current state / Minimax changes all X to O -> get the alpha-beta evaluation from the last ply and change its sign; updates the values -> update parameters based on whether the game is won/drawn/lost, and choose an appropriate learning rate. I only have experience with self-play as defined in (b). To my knowledge, nobody has done (a) for chess engine programming. NeuroChess is a chess engine built by reinforcement learning. The engine learns from the final outcome of the game and it does that by playing itself.
49,146
What is the loss function in neural networks?
The backpropagated deltas are derived via the chain rule of calculus. Notice that, although they are valid over all inputs, in the weight update step they are multiplied with the actual activation of that layer. For the loss function we usually use MSE for linear layers or cross-entropy for softmax layers, such that the backpropagated error becomes the difference of the prediction and the target. For a detailed understanding I suggest studying the topic in the deep learning book by Goodfellow et al.: http://www.deeplearningbook.org/ A more limited treatment can be found here: Backpropagation with Softmax / Cross Entropy
49,147
What is the loss function in neural networks?
You ask for a simple explanation of how a neural network should train. The cost function used in a network depends on what you want to do and sometimes on the network architecture. For a regression problem, the most common is least squares. For classification, cross-entropy is popular. Imagine you want your network to output 10, but it actually gives you 9. Assuming a least-squares cost function, your error would be 1. We need to adjust the network parameters because there is an error. This error (1 in our example) is propagated back through the network by something known as the chain rule. We repeat and keep training until our error rate is minimized.
49,148
Should MCMC posterior be used as my new prior?
One thing to consider would be to re-formulate your model to be hierarchical, with some dependence structure that allows you to borrow strength across experiments for parameter estimation. There are many ways to do this, and how you create these dependence structures is problem specific. But to give you an idea, a simple and ubiquitous example of this kind of thinking is the eight-schools problem presented in the book Bayesian Data Analysis by Gelman et al. A simpler solution you might consider is using a power prior. In a nutshell, a power prior uses a scalar parameter $a_0$ to weight the historical data relative to the likelihood of the current data. $a_0=0$ corresponds to no weight on the historical data, while $a_0=1$ corresponds to the prior for the new study being the posterior from the previous study. $0 < a_0 < 1$ of course corresponds to something in between. You can also put a prior on the $a_0$ parameter if you wish. The idea would be to down-weight the historical data enough that it doesn't skew the results from your second experiment.
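As a toy illustration of the power prior idea in a conjugate setting (all numbers invented): for binomial data with a Beta(1, 1) initial prior, the historical counts enter the new prior down-weighted by $a_0$, so the whole calculation stays in closed form:

    a0     <- 0.5                   # weight on the historical experiment (0 = ignore, 1 = full pooling)
    y_hist <- 30; n_hist <- 100     # historical successes / trials
    y_new  <- 12; n_new  <- 40      # new experiment

    # power prior for theta: Beta(1, 1) initial prior times the historical likelihood^a0
    a_prior <- 1 + a0 * y_hist
    b_prior <- 1 + a0 * (n_hist - y_hist)

    # posterior after the new data
    a_post <- a_prior + y_new
    b_post <- b_prior + (n_new - y_new)
    qbeta(c(0.025, 0.5, 0.975), a_post, b_post)   # posterior quantiles for theta

In your MCMC setting the same idea becomes adding $a_0 \times$ the historical log-likelihood to the log-posterior of the new experiment.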
49,149
Is $\min(f(x)g(y),f(y)g(x))$ a positive definite kernel?
Yes, it is. The proof is as follows, $$K(x, y) = \min(f(x)g(y), f(y)g(x))=\min\left(\frac{f(x)}{g(x)}, \frac{f(y)}{g(y)}\right)g(x)g(y)$$ Since (as the construction requires) the ratios $f/g$ are non-negative, $$\min\left(\frac{f(x)}{g(x)}, \frac{f(y)}{g(y)}\right)=\int_0^\infty\mathbb{1}_{[0, \frac{f(x)}{g(x)}]}(t)\, \mathbb{1}_{[0, \frac{f(y)}{g(y)}]}(t)\,dt = \langle \mathbb{1}_{[0, \frac{f(x)}{g(x)}]}, \mathbb{1}_{[0, \frac{f(y)}{g(y)}]} \rangle = \langle \phi(x), \phi(y) \rangle$$ By the Aronszajn theorem, using $\phi(x) = \mathbb{1}_{[0, \frac{f(x)}{g(x)}]}$, $\min(\frac{f(x)}{g(x)}, \frac{f(y)}{g(y)})$ is a positive definite kernel. $g(x)g(y)$ is trivially a positive definite kernel, as it is a product of a function of $x$ with the same function of $y$. $K(x, y)$ is thus the product of two positive definite kernels, so it is a positive definite kernel.
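A quick numerical sanity check of the claim in R (with arbitrary positive choices for f and g, invented for the example): the kernel matrix built from min(f(x)g(y), f(y)g(x)) should have no eigenvalue below zero, up to rounding error.

    set.seed(1)
    f <- function(x) exp(x)      # any strictly positive functions will do for the check
    g <- function(x) 1 + x^2
    x <- runif(50, -2, 2)

    K <- outer(x, x, function(a, b) pmin(f(a) * g(b), f(b) * g(a)))
    min(eigen(K, symmetric = TRUE, only.values = TRUE)$values)   # non-negative up to rounding error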
49,150
What information can I gain from the fourier transform?
The FFT is used to analyse periodic data. You use the Short Time Fourier Transform (basically the FT over small segments of the time series) to analyse how the frequencies change over time (e.g. in music). Your plots are too low detail to zoom in, but I cannot imagine that a keystroke has any particular repetitive signal (e.g. that your fingers vibrate differently based on the key pressed) [I have little imagination :)]. I could imagine you look at frequency information to remove periodic noise (perhaps you are on a train). Another possible use would be to identify typing frequency and maybe stretch your signal accordingly (i.e. to standardise your signal across typing speeds).
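A short-time spectrum of this kind can be computed in base R by taking the FFT of successive windows of the trace; the sketch below runs on a fake signal with an assumed 100 Hz sampling rate, since I don't have your sensor data:

    set.seed(1)
    fs <- 100                                    # assumed sampling rate in Hz
    t  <- seq(0, 10, by = 1 / fs)
    x  <- sin(2 * pi * 2 * t) + 0.5 * rnorm(length(t))   # fake trace: 2 Hz motion plus noise

    win    <- 256                                # window length for the short-time FT
    hann   <- 0.5 - 0.5 * cos(2 * pi * (0:(win - 1)) / (win - 1))
    starts <- seq(1, length(x) - win, by = win / 2)
    stft   <- sapply(starts, function(s) Mod(fft(x[s:(s + win - 1)] * hann))[1:(win / 2)])

    freqs <- (0:(win / 2 - 1)) * fs / win
    image(starts / fs, freqs, t(stft), xlab = "time (s)", ylab = "frequency (Hz)")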
49,151
What information can I gain from the fourier transform?
Informally speaking, the frequency domain tells us "how fast things change", and about the different "components" of your data. You mentioned audio data, which is a perfect example for understanding the frequency domain. The low frequency components can be the sound from the bass guitar, and the high frequency components can be the sound from the lead guitar. Suppose we want to know "how much work" has been done by the two players. The "power spectrum on different frequency components" can tell you that. Also, suppose you want to practice lead and filter out the lead sound in the original track. Frequency analysis can help to design a filter to do such a task. Back to your example of sensor data. Suppose you have some sensor on a person that can collect all the movement data. The data should be very complicated and contain slow movements (such as walking) and fast movements (such as the heart beat). Similar to the audio example, frequency analysis can separate these different components. Say, suppose you want to get the heart beat data only; a special filter can be derived and applied to your data to get that. BTW, here is a related question from me. You can see that if your data is periodic (as shown in your figure), using a Fourier basis is better. What's wrong to fit periodic data with polynomials?
49,152
What information can I gain from the fourier transform?
FFT will give you an idea of the dominant frequencies that are associated with the tap events you mention above. My guess is that you would see different spectral 'signatures' for each tap event (or for small sets of events like consecutive taps of the same number/button).
49,153
Autocorrelation and GLS
GLS is the model that takes autocorrelated residuals into account, while Cochrane-Orcutt is one of the many procedures to estimate such a GLS model. Strictly speaking, the GLS model requires the true value of $\rho$ in $\varepsilon_t = \rho\varepsilon_{t-1} + w_t$ to be known. However, $\rho$ is apparently unobservable. Cochrane-Orcutt is one (of many) iterative procedures to estimate $\rho$ and the GLS model. (a random reference/further reading) GLS is not the only way to fix the autocorrelated-residual problem. For example, adding lagged terms of the dependent/independent variables as predictors to the OLS may also fix the problem, depending on the true relationship among the variables.
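In R, one way to fit the GLS model with AR(1) errors directly is nlme::gls with a corAR1 correlation structure (the formula, data and time variable names below are placeholders); the orcutt package provides the Cochrane-Orcutt iteration itself if you prefer that route.

    library(nlme)

    # regression with AR(1) errors; rho is estimated together with the coefficients
    fit <- gls(y ~ x1 + x2, data = dat, correlation = corAR1(form = ~ time))
    summary(fit)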
49,154
Autocorrelation and GLS
The solution is known generally as a Transfer Function, and sometimes as a Dynamic Regression; see How to forecast a time series which is dependent on different time series? You might also want to look at ARIMAX model's exogenous components? and An example of autocorrelation in residuals causing misinterpretation.
49,155
Mathematical Principle behind ANOVA?
I would encourage OP to conceptually separate the mathematical and statistical principles of ANOVA. Mathematical Principles of ANOVA Consider variable $Y_k, \; k = 1, \ldots, N,$ with sample variance $s^2 = \sum_{k = 1}^N (Y_k - \bar{Y}_{\centerdot})^2/(N-1).$ Now consider a grouping index $i = 1, \ldots, I$ with no particular meaning that divides $1, \ldots, N$ into equal (for convenience) groups of size $n$. We can then rewrite the variance as $$s^2 = \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{\centerdot \centerdot})^2/(N - 1).$$ This is the exact same quantity with a different indexing scheme. The following two operations, subtracting and adding the group means and then expanding the square (which requires showing that the cross-product term vanishes), are entirely algebraic: \begin{align*} (N - 1)s^2 &= \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{i \centerdot} + \bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 \\ &= n\sum_{i = 1}^I (\bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 + \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{i \centerdot})^2. \end{align*} The math doesn't care about the interpretation of these terms, and the decomposition always works (at least for the one-way layout). Statistical Principles of ANOVA So far in this example, not a single distributional statement was made about $Y_k$ or the re-indexed $Y_{ij}$, and that's because the mathematical decomposition didn't need any. A statistical device, the null hypothesis that there are no group effects, along with the assumption of normality, leads to $Y_{ij} \sim N(\mu, \sigma^2)$ for all $i, j$. I won't go through every step along the way to the F-statistic, but note that the sample variance of the $\bar{Y}_{i \centerdot}$ is $\sum_{i = 1}^I (\bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 / (I - 1)$, and, when scaled by the appropriate constant, has a $\chi_{I-1}^2$ distribution. You can probably "see" this quantity in the decomposition above, as well as hints of the F-statistic if you divide the first term by the second. In Casella and Berger's text Statistical Inference, both $t$ and $F$ distributions are introduced under a section "The Derived Distributions." As far as I know, the F-distribution was derived ad hoc (from a scaled ratio of $\chi^2$ random variables) for the purposes of testing ANOVA null hypotheses.
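A quick numerical check of the algebraic identity (the group layout and the random data below are made up purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
I, n = 3, 10                        # I groups of equal size n
Y = rng.normal(size=(I, n))         # Y[i, j]

grand_mean = Y.mean()
group_means = Y.mean(axis=1, keepdims=True)

total_ss   = ((Y - grand_mean) ** 2).sum()
between_ss = n * ((group_means - grand_mean) ** 2).sum()
within_ss  = ((Y - group_means) ** 2).sum()

# The identity holds regardless of any distributional assumption on Y.
print(np.isclose(total_ss, between_ss + within_ss))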
Mathematical Principle behind ANOVA?
I would encourage OP to conceptually separate the mathematical and statistical principles of ANOVA. Mathematical Principles of ANOVA Consider variable $Y_k, \; k = 1, \ldots, N,$ with sample variance
Mathematical Principle behind ANOVA? I would encourage OP to conceptually separate the mathematical and statistical principles of ANOVA. Mathematical Principles of ANOVA Consider variable $Y_k, \; k = 1, \ldots, N,$ with sample variance $s^2 = \sum_{k = 1}^N (Y_k - \bar{Y}_{\centerdot})^2.$ Now consider a grouping index $i = 1, \ldots, I$ with no particular meaning that divides $1, \ldots, N$ into equal (for convenience) groups of size $n$. We can then rewrite the variance as $$s^2 = \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{\centerdot \centerdot})^2/(N - 1).$$ This is the exact same quantity with a different indexing scheme. The following two operations of subtracting and adding the group means, and expanding the square (needs demonstration that the cross-product goes to zero), is entirely algebraic: \begin{align*} (N - 1)s^2 &= \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{i \centerdot} + \bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 \\ &= n\sum_{i = 1}^I (\bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 + \sum_{i = 1}^I \sum_{j = 1}^n (Y_{ij} - \bar{Y}_{i \centerdot})^2. \end{align*} The math doesn't care about the interpretation of these terms, and the decomposition always works (at least for the one-way layout). Statistical Principles of ANOVA So far in this example, not a single distributional statement was made about $Y_k$ or the re-indexed $Y_{ij}$, and that's because the mathematical decomposition didn't need any. A statistical device, the null hypothesis that there are no group effects, along with the assumption of normality, leads to $Y_{ij} \sim N(\mu, \sigma^2)$ for all $i, j$. I won't go through every step along the way to the F-statistic, but note that the sample variance of $\bar{Y}_{i \centerdot}$ is $\sum_{i = 1}^I (\bar{Y}_{i \centerdot} - \bar{Y}_{\centerdot \centerdot})^2 / (I - 1)$, and, when scaled by the appropriate constant, has a $\chi_{I-1}^2$ distribution. You can probably "see" this quantity in the decomposition above, and well as hints of the F-statistic if you divide the first term by the second. In Casella and Berger's text Statistical Inference, both $t$ and $F$ distributions are introduced under a section "The Derived Distributions." As far as I know, the F-distribution was derived ad hoc (from a scaled ratio of $\chi^2$ random variables) for the purposes of testing ANOVA null hypotheses.
Mathematical Principle behind ANOVA? I would encourage OP to conceptually separate the mathematical and statistical principles of ANOVA. Mathematical Principles of ANOVA Consider variable $Y_k, \; k = 1, \ldots, N,$ with sample variance
49,156
Usage of tensor notation in statistics
The most obvious and straightforward application of tensors (that I know of) in statistics is computing high-order moments of a multivariate distribution. For example, consider a random vector $x\sim F$, where $F$ is some $p$-dimensional distribution. Given some data matrix $X \in \mathbb{R}^{n\times p}$ where $n$ is the number of observations, each of which is drawn iid from $F$, the second moment $\mathbb{E}(xx^\top) = \mathbb{E}(x\otimes x)$ can be estimated from the sample $X$ as follows $$\hat{\mathbb{E}}(x\otimes x) = \frac1n \sum_{i=1}^n X_{i\cdot} \otimes X_{i\cdot} = \frac1n X^\top X \in \mathbb{R}^{p\times p}$$ where $X_{i\cdot}$ is the $i^{th}$ row of $X$. Certainly this is a matrix that is only a few operations away from the covariance matrix. Continuing on to the third moment, which is again related to "co-skewness," we see we are dealing with an order-3 tensor $$\hat{\mathbb{E}}(x\otimes x \otimes x) = \frac1n \sum_{i=1}^n X_{i\cdot} \otimes X_{i\cdot}\otimes X_{i\cdot} \in \mathbb{R}^{p\times p\times p}$$ The "co-kurtosis" tensor is order 4 and so on for higher-order moments. These moment tensors have been applied in financial portfolio optimization decades ago, multivariate data standardization (standardize by skew, not just mean and variance), and obviously in deep learning (eg. tensorflow) where the gradients of the loss function with respect to model parameters contain tensors that are used in back-propagation. I believe there are additional applications in natural language processing, multivariate time series, and stochastic block models. I agree with @whuber: When the indices are not pivotal to the work, it certainly is an intuitive and flexible generalization that sheds light on the lower-dimensional cases. However, it tends to make things difficult for statisticians and engineers that have to stress out about three or more indices and weird complicated generalization of what seemed like ergonomic rules (eg. what is the trace of a high-order tensor? what does symmetry mean? etc) That's probably why many of the statisticians and applied mathematicians I know avoid tensors and simply stack/flatten the 2d cross-sections of each tensor into a tall matrix.
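A minimal sketch of the moment tensors described above, using numpy's einsum (the sample is synthetic and the dimensions are arbitrary):

import numpy as np

# Hypothetical data matrix: n observations of a p-dimensional vector.
rng = np.random.default_rng(2)
n, p = 500, 4
X = rng.normal(size=(n, p))

# Second-moment matrix: the average outer product, equal to (1/n) X^T X.
M2 = np.einsum('ni,nj->ij', X, X) / n

# Third-moment (order-3) tensor: the average of x (x) x (x) x over the sample.
M3 = np.einsum('ni,nj,nk->ijk', X, X, X) / n

print(M2.shape, M3.shape)   # (p, p) and (p, p, p)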
Usage of tensor notation in statistics
The most obvious and straightforward application of tensors (that I know of) in statistics is computing high-order moments of a multivariate distribution. For example, consider a random vector $x\sim
Usage of tensor notation in statistics The most obvious and straightforward application of tensors (that I know of) in statistics is computing high-order moments of a multivariate distribution. For example, consider a random vector $x\sim F$, where $F$ is some $p$-dimensional distribution. Given some data matrix $X \in \mathbb{R}^{n\times p}$ where $n$ is the number of observations, each of which is drawn iid from $F$, the second moment $\mathbb{E}(xx^\top) = \mathbb{E}(x\otimes x)$ can be estimated from the sample $X$ as follows $$\hat{\mathbb{E}}(x\otimes x) = \frac1n \sum_{i=1}^n X_{i\cdot} \otimes X_{i\cdot} = \frac1n X^\top X \in \mathbb{R}^{p\times p}$$ where $X_{i\cdot}$ is the $i^{th}$ row of $X$. Certainly this is a matrix that is only a few operations away from the covariance matrix. Continuing on to the third moment, which is again related to "co-skewness," we see we are dealing with an order-3 tensor $$\hat{\mathbb{E}}(x\otimes x \otimes x) = \frac1n \sum_{i=1}^n X_{i\cdot} \otimes X_{i\cdot}\otimes X_{i\cdot} \in \mathbb{R}^{p\times p\times p}$$ The "co-kurtosis" tensor is order 4 and so on for higher-order moments. These moment tensors have been applied in financial portfolio optimization decades ago, multivariate data standardization (standardize by skew, not just mean and variance), and obviously in deep learning (eg. tensorflow) where the gradients of the loss function with respect to model parameters contain tensors that are used in back-propagation. I believe there are additional applications in natural language processing, multivariate time series, and stochastic block models. I agree with @whuber: When the indices are not pivotal to the work, it certainly is an intuitive and flexible generalization that sheds light on the lower-dimensional cases. However, it tends to make things difficult for statisticians and engineers that have to stress out about three or more indices and weird complicated generalization of what seemed like ergonomic rules (eg. what is the trace of a high-order tensor? what does symmetry mean? etc) That's probably why many of the statisticians and applied mathematicians I know avoid tensors and simply stack/flatten the 2d cross-sections of each tensor into a tall matrix.
Usage of tensor notation in statistics The most obvious and straightforward application of tensors (that I know of) in statistics is computing high-order moments of a multivariate distribution. For example, consider a random vector $x\sim
49,157
Standard equivalent of OLS for minimizing the $L_1$ norm
We know, from the same intuition behind the necessity of the Bessel correction, that $$\arg\min_x \sum_{j=1}^n (x_j - x)^2 = \bar{x},$$ the sample mean. It similarly turns out that $$\arg\min_x \sum_{j=1}^n |x_j - x| = \mathrm{med}(x_1, \dots, x_n),$$ the sample median. Commonly in regression, we minimize the squared error, giving us estimates for the mean, but, if we were to instead minimize the absolute error, we'd get estimates for the median. Of course, when the regression model is $y \sim \mathcal{N}(X \beta^*, \sigma^2I)$, the median is the mean, so the differences between these two methods aren't too pronounced. Indeed, I've commonly seen quantile regression motivated as being useful in the presence of outliers. An interesting history of the use of absolute error in regression is included here (in section 2): Portnoy, S., & Koenker, R. (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators. Statistical Science, 12(4), 279–300. It's discussed that the idea to use this loss arose long ago, but, due to computational intractability, it didn't gain widespread attention. More generally, this is an example of quantile regression, which applies asymmetric weights to the absolute error; the symmetric (unweighted) case yields the median.
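A small numerical check of the two argmin claims, using a made-up sample and a brute-force grid search (in practice one would use a quantile-regression routine such as statsmodels' QuantReg rather than a grid):

import numpy as np

# The squared-error minimizer should be the mean; the absolute-error minimizer the median.
x = np.array([1.0, 2.0, 2.5, 4.0, 10.0])
grid = np.linspace(x.min(), x.max(), 10001)

sq_loss  = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
abs_loss = np.abs(x[:, None] - grid[None, :]).sum(axis=0)

print(grid[sq_loss.argmin()],  x.mean())       # approximately equal
print(grid[abs_loss.argmin()], np.median(x))   # approximately equal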
Standard equivalent of OLS for minimizing the $L_1$ norm
We know from intuition of the necessity of the Bessel correction that $$\arg\min_x \sum_{j=1}^n (x_j - x)^2 = \bar{x},$$ the sample mean. It similarly turns out that $$\arg\min_x \sum_{j=1}^n |x_j - x
Standard equivalent of OLS for minimizing the $L_1$ norm We know from intuition of the necessity of the Bessel correction that $$\arg\min_x \sum_{j=1}^n (x_j - x)^2 = \bar{x},$$ the sample mean. It similarly turns out that $$\arg\min_x \sum_{j=1}^n |x_j - x| = \mathrm{med}(x_1, \dots, x_j),$$ the sample median. Commonly in regression, we minimize the squared error, giving us estimates for the mean, but, if we were to instead minimize the absolute error, we'd get estimates for the median. Of course, when the the regression model is $y \sim \mathcal{N}(X \beta^*, \sigma^2I)$, the median is the mean so the differences between these two methods aren't too pronounced. Indeed, I've commonly seen quantile regression motivated as being useful in the presence of outliers. An interesting history of the use of absolute error in regression is included here (in section 2): Portnoy, S., & Koenker, R. (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators. Statistical Science, 12(4), 279–300. It's discussed that the idea to use this loss was had long ago, but, due to computational intractability, it didn't gain widespread attention. More generally, we could this is an example of quantile regression, which just installs weights into the absolute error that works for the median.
Standard equivalent of OLS for minimizing the $L_1$ norm We know from intuition of the necessity of the Bessel correction that $$\arg\min_x \sum_{j=1}^n (x_j - x)^2 = \bar{x},$$ the sample mean. It similarly turns out that $$\arg\min_x \sum_{j=1}^n |x_j - x
49,158
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
I tried to implement this at one point in an existing rf framework and found it difficult to get good performance in terms of training time. The standard CART algorithm used in random forests and xgboost works by recursive partitioning of the data. In this case the data (or an array of indices into it) can be reordered in memory so each of the partitions is contiguous, and further splits can be calculated by scanning left to right across the current partition as ordered by the feature under consideration, iteratively updating the impurity estimates. This can all be done with zero memory allocations, making it quite fast. Jungles allow nodes to be recombined, which doesn't work well with this reorder, scan and split algorithm. MS has probably figured out a clever way to implement it, but my naive implementation ended up being much slower to train than a standard rf. The most recent papers are interesting and if more results like that are found perhaps it will catch on.
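A simplified sketch of the reorder-and-scan split search described above, for a single numeric feature with a squared-error (regression) impurity; this illustrates the idea only and is not the actual random forest or decision jungle code:

import numpy as np

def best_split(x, y):
    """Scan left-to-right over one sorted feature and return the split value
    that most reduces the total squared error of the two resulting partitions."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    n = y_sorted.size

    # Running sums let each candidate split be evaluated in O(1).
    csum = np.cumsum(y_sorted)
    csum_sq = np.cumsum(y_sorted ** 2)
    total, total_sq = csum[-1], csum_sq[-1]

    best = (np.inf, None)
    for i in range(1, n):                       # left partition = first i points
        if x_sorted[i] == x_sorted[i - 1]:
            continue                            # cannot split between equal values
        left_sse = csum_sq[i - 1] - csum[i - 1] ** 2 / i
        right_sse = (total_sq - csum_sq[i - 1]) - (total - csum[i - 1]) ** 2 / (n - i)
        sse = left_sse + right_sse
        if sse < best[0]:
            best = (sse, 0.5 * (x_sorted[i] + x_sorted[i - 1]))
    return best

# Made-up data with a step at x = 0.5.
rng = np.random.default_rng(3)
x = rng.uniform(size=100)
y = (x > 0.5).astype(float) + 0.1 * rng.normal(size=100)
print(best_split(x, y))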
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
I tried to implement this at one point in an existing rf framework and found it difficult go get good performance in terms of training time. The standard cart algorithm used in random forests and xgbo
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft? I tried to implement this at one point in an existing rf framework and found it difficult go get good performance in terms of training time. The standard cart algorithm used in random forests and xgboost works by recursive partitioning of the data. In this case data (or an array of index into it) can be reordered in memory so each of the partitions is continuous and further splits can be calculated by scanning left to right across the current partition as ordered by the feature under consideration iteratively updating the impurity estimates. This can all be done with zero memory allocations making it quite fast. Jungles allow nodes to be recombined which doesn't work well with this reorder, scan and split algorithm. MS has probably figured out a clever way to implement it but my naive implementation ended up being much slower to train then a standard rf. The most recent papers are interesting and if more results like that are found perhaps it will catch on.
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft? I tried to implement this at one point in an existing rf framework and found it difficult go get good performance in terms of training time. The standard cart algorithm used in random forests and xgbo
49,159
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
Microsoft's "Decision Jungle" shares its name with the "Random Jungle" tool. The "Random Jungle" software is an implementation of "Random Forest" and can run on many servers in a parallel manner. The rjungle refers to this "Random Jungle" tool. Source code: https://sourceforge.net/p/randomjungle/code/HEAD/tree/trunk/ https://github.com/gitpan/RandomJungle https://github.com/insilico/randomjungle https://github.com/texane/randomjungle Paper: https://academic.oup.com/bioinformatics/article/26/14/1752/177075
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft?
Microsoft's "Decision Jungle" share the name with the "Random Jungle" tool. The "Random Jungle" software is an implementation of "Random Forest" and can run on many servers in a parallel manner. The r
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft? Microsoft's "Decision Jungle" share the name with the "Random Jungle" tool. The "Random Jungle" software is an implementation of "Random Forest" and can run on many servers in a parallel manner. The rjungle refers to this "Random Jungle" tool. Source code: https://sourceforge.net/p/randomjungle/code/HEAD/tree/trunk/ https://github.com/gitpan/RandomJungle https://github.com/insilico/randomjungle https://github.com/texane/randomjungle Paper: https://academic.oup.com/bioinformatics/article/26/14/1752/177075
Random jungle - why was the algorithm/tool abandoned in general but not by Microsoft? Microsoft's "Decision Jungle" share the name with the "Random Jungle" tool. The "Random Jungle" software is an implementation of "Random Forest" and can run on many servers in a parallel manner. The r
49,160
Why are hidden Markov models (HMM) also called mixture models?
Mixture models are probability density functions formed as weighted sums of independent component processes; the weights sum to one, so the total density has area 1, as any probability density function must. Consider, for example, that two people are cutting pencils on an assembly line. The first cuts a fraction $0<p<1$ of the pencils with an average pencil length of $\mu_1$ and a standard deviation of $\sigma_1$. A second person is cutting $1-p$ of the pencils with an average pencil length of $\mu_2$ and a standard deviation of $\sigma_2$. Then the mixture distribution (under a normal distribution assumption) of pencils coming off of the assembly line is $MD(p,\mu_1,\sigma_1,\mu_2,\sigma_2)=pN(\mu_1,\sigma_1)+(1-p)N(\mu_2,\sigma_2)$. In a hidden Markov model, the state (which pencil cutter) is not directly visible, but the output (e.g., the assembly line output), which depends on the state, is visible. Each state has a probability distribution over the possible outputs, and the states themselves occur with probabilities $p$ and $1-p$ in our case. Now a hidden Markov model does not have to be a mixture model; for example, it can be unimodal, but the mixture-model type of hidden Markov model is simple to solve. To better explore whether, as claimed in Wikipedia, a hidden Markov model can be considered a generalization of a mixture model, or whether that is just too narrow a view, I posed this as a separate question: Are there any examples of hidden Markov models that are not mixture models? As it turns out, convolutions can be HMMs as well, and most people would consider convolution to be a different operation from mixture addition. It would seem that HMMs are not only useful for mixture models, but for convolution models and possibly others.
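A small sketch of the two-component mixture density from the pencil example (all parameter values are made up):

import numpy as np
from scipy.stats import norm

# Illustrative parameters for the pencil example (assumed, not from the answer).
p, mu1, sd1, mu2, sd2 = 0.3, 10.0, 0.2, 12.0, 0.5

x = np.linspace(8, 14, 601)
mixture_pdf = p * norm.pdf(x, mu1, sd1) + (1 - p) * norm.pdf(x, mu2, sd2)

# The mixture is still a proper density: it integrates to (approximately) 1.
print(mixture_pdf.sum() * (x[1] - x[0]))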
Why are hidden Markov models (HMM) also called mixture models?
Mixture models are generic probability density functions which are the weighted sums of independent processes that add to a total density function with a total area of 1, which area is common to all p
Why are hidden Markov models (HMM) also called mixture models? Mixture models are generic probability density functions which are the weighted sums of independent processes that add to a total density function with a total area of 1, which area is common to all probability density functions. Consider, for example that two people are cutting pencils on an assembly line. The first cuts a fraction $0<p<1$ of the pencils with an average pencil length of $\mu_1$ with a standard deviation of $\sigma_1$. A second person is cutting $1-p$ of the pencils with average pencil length of $\mu_2$ with a standard deviation of $\sigma_2$. Then the mixture distribution (normal distribution assumption) of pencils coming off of the assembly line is $MD(p,\mu_1,\sigma_1,\mu_2,\sigma_2)=pN(\mu_1,\sigma_1)+(1-p)N(\mu_2,\sigma_2)$. In a hidden Markov model, the state (pencil cutters) is not directly visible, but the output (e.g., assembly line output), dependent on the state, is visible. Each state has a probability distribution over the possible output tokens ($p$ and $1-p$ in our case). Now a hidden Markov model does not have to be a mixture model, for example, it can be unimodal, but the mixture model type of hidden Markov model is simple to solve. To better explore if, as claimed in Wikipedia, a hidden Markov model can be considered a generalization of a mixture model or whether that is just too narrow a view, I posed this as a separate question; Are there any examples of hidden Markov models that are not mixture models? And as it turns out convolutions can be HMM as well, and most people would consider convolution to be a different operation from mixture addition. It would seem that HMM are not only useful for mixture models, but for convolution models and possibly others.
Why are hidden Markov models (HMM) also called mixture models? Mixture models are generic probability density functions which are the weighted sums of independent processes that add to a total density function with a total area of 1, which area is common to all p
49,161
Why are hidden Markov models (HMM) also called mixture models?
(This answer would be better as a comment to build on @Eskapp's comment) I think it is important to give the general and simple formula $$p(Y) = \sum_{X} p(X,Y) = \sum_{X} p(X)p(Y|X)$$ (also appearing on Wikipedia). This clearly shows that in an HMM, it is the observation process ($Y$) which is modeled as a mixture. However, as already noted, HMMs are not called mixture models.
Why are hidden Markov models (HMM) also called mixture models?
(This answer would be better as a comment to build on @Eskapp's comment) I think it is important to give the general and simple formula $$p(Y) = \sum_{X} p(X,Y) = \sum_{X} p(X)p(Y|X)$$ (also appearing
Why are hidden Markov models (HMM) also called mixture models? (This answer would be better as a comment to build on @Eskapp's comment) I think it is important to give the general and simple formula $$p(Y) = \sum_{X} p(X,Y) = \sum_{X} p(X)p(Y|X)$$ (also appearing on Wikipedia). This clearly shows that in HMM, it is the observation process ($Y$) which is modeled as a mixture. However, as already noted, HMM are not called mixtures models.
Why are hidden Markov models (HMM) also called mixture models? (This answer would be better as a comment to build on @Eskapp's comment) I think it is important to give the general and simple formula $$p(Y) = \sum_{X} p(X,Y) = \sum_{X} p(X)p(Y|X)$$ (also appearing
49,162
Relation between: Likelihood, conditional probability and failure rate
My question is about the possibility of showing equivalence between the hazard rate, the conditional probability (of failure) and a likelihood function. TLDR; There is no such equivalence. Likelihood is defined as $$ \mathcal{L}(\theta \mid x_1,\dots,x_n) = \prod_{i=1}^n f_\theta(x_i) $$ so it is a product of probability density functions evaluated at the points $x_i$, given some fixed value of the parameter $\theta$. So it has nothing to do with the hazard rate, since the hazard rate is the probability density function evaluated at the point $x_i$, parametrized by $\theta$, divided by the survival function evaluated at $x_i$, parametrized by $\theta$ $$ h(x_i) = \frac{f_\theta(x_i)}{1-F_\theta(x_i)} $$ Moreover, likelihood is not a probability and it is not a conditional probability. It is a conditional probability only in the Bayesian understanding of likelihood, i.e. if you assume that $\theta$ is a random variable. Your understanding of conditional probability also seems to be wrong: Intuitively, all conditional probabilities are purely multiplicative processes. [...] Is it generally true, that if all conditional probabilities are purely multiplicative processes of numbers between one and zero, they all decrease with some exponential rate $\lambda$? The answer is no. Conditional probability is a relation between joint and individual probabilities $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ So it is not a process, and it is not multiplicative. Moreover, a multiplicative relation is the definition of independence $$ P(A \cap B) = P(A)\,P(B) $$ or equivalently $$ P(A \mid B) = P(A) $$ Even if you are talking about a random process, this is not true. To give an example, imagine a series of coin tosses; if they are independent, then the probability of tossing heads given that the previous toss resulted in tails is simply the (unconditional) probability of tossing heads.
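To make the distinction concrete, here is a small sketch for an exponential model (the rate and the data points are made up): the hazard is a pointwise ratio of density to survival, while the likelihood is a single number obtained by multiplying densities over the whole sample.

import numpy as np

# Exponential(lam): pdf = lam*exp(-lam*x), survival = exp(-lam*x),
# so the hazard f/(1-F) is the constant lam, while the likelihood is a product.
lam = 0.5
x = np.array([0.4, 1.3, 2.2, 0.7])

pdf = lam * np.exp(-lam * x)
survival = np.exp(-lam * x)
hazard = pdf / survival                 # constant, equal to lam at every point

likelihood = np.prod(pdf)               # one number, a function of lam
log_likelihood = np.sum(np.log(pdf))

print(hazard, likelihood, log_likelihood)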
Relation between: Likelihood, conditional probability and failure rate
My question is about the possibility of showing equivalence between the hazard rate, the conditional probability (of failure) and a likelihood function. TLDR; There is no such equivalence. Likeli
Relation between: Likelihood, conditional probability and failure rate My question is about the possibility of showing equivalence between the hazard rate, the conditional probability (of failure) and a likelihood function. TLDR; There is no such equivalence. Likelihood is defined as $$ \mathcal{L}(\theta \mid x_1,\dots,x_n) = \prod_{i=1}^n f_\theta(x_i) $$ so it is a product of probability density functions evaluated at $x_i$ points, given some fixed value of parameter $\theta$. So it has nothing to do with hazard rate, since hazard rate is probability density function evaluated at $x_i$ point parametrized by $\theta$, divided by survival function evaluated at $x_i$ parametrized by $\theta$ $$ h(x_i) = \frac{f_\theta(x_i)}{1-F_\theta(x_i)} $$ Moreover, likelihood is not a probability and it is not a conditional probability. It is a conditional probability only in Bayesian understanding of likelihood, i.e. if you assume that $\theta$ is a random variable. Your understanding of conditional probability also seems to be wrong: Intuitively, all conditional probabilities are purely multiplicative processes. [...] Is it generally true, that if all conditional probablities are purely multiplicative processes of numbers between one and zero, they all decrease with some exponential rate $\lambda$? The answer is no. Conditional probability is a relation between joint and individual probabilities $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ So it is not a process, and it is not multiplicative. Moreover, multiplicative relation is a definition of independence $$ P(A \cap B) = P(A)\,P(B) $$ or equivalently $$ P(A \mid B) = P(A) $$ Even if you are talking about a random process, then this is not true. To give an example, imagine a series of coin tosses, if they are independent, then probability of tossing head given that previous toss resulted in tails is simply a (unconditional) probability of tossing head.
Relation between: Likelihood, conditional probability and failure rate My question is about the possibility of showing equivalence between the hazard rate, the conditional probability (of failure) and a likelihood function. TLDR; There is no such equivalence. Likeli
49,163
What are the advantages of normalizing flow over VAEs with deep latent gaussian models for inference?
So the answer lies in the PhD thesis of Durk Kingma. In his thesis he has mentioned that The framework of normalizing flows [Rezende and Mohamed, 2015] provides an attractive approach for parameterizing flexible approximate posterior distributions in the VAE framework The term "flexible approximation" matters because the approximate posterior adapts to the data, which reduces the distance between the real distribution and the model distribution. The problem with the framework of normalizing flows, however, is that it doesn't scale well to high-dimensional latent spaces. To overcome this issue [Kingma et al., 2016]: we propose inverse autoregressive flows, a flexible class of posterior distributions based on normalizing flows, allowing inference of highly non-Gaussian posterior distributions over high-dimensional latent spaces. All the information for the above answer was taken from the PhD thesis of Durk Kingma; the link can be found here: http://dpkingma.com
What are the advantages of normalizing flow over VAEs with deep latent gaussian models for inference
So the answer lies in the PhD thesis of Durk Kingma. In his thesis he has mentioned that The framework of normalizing flows [Rezende and Mohamed, 2015] provides an attractive approach for paramete
What are the advantages of normalizing flow over VAEs with deep latent gaussian models for inference? So the answer lies in the PhD thesis of Durk Kingma. In his thesis he has mentioned that The framework of normalizing flows [Rezende and Mohamed, 2015] provides an attractive approach for parameterizing flexible approximate posterior distributions in the VAE framework The term "flexible approximation" is useful so that the model adapts to the data, which reduces the distance between real distribution and model distribution. But the problem which is faced that the framework of normalizing flows is that it doesn't scale well in high dimensional latent spaces. To overcome this issue [Kingma et al., 2016] we propose inverse autoregressive flows, a flexible class of posterior distributions based on normalizing flows, allowing inference of highly non-Gaussian posterior distributions over high-dimensional latent spaces. All the information for the above answer was taken from PhD thesis if Durk Kingma, the link of the same can be found here: http://dpkingma.com
What are the advantages of normalizing flow over VAEs with deep latent gaussian models for inference So the answer lies in the PhD thesis of Durk Kingma. In his thesis he has mentioned that The framework of normalizing flows [Rezende and Mohamed, 2015] provides an attractive approach for paramete
49,164
"Branching Stick" Regression
I don't know of a fully worked out solution. If the data really look like your example, you could try doing cluster analysis first, then separate regressions in each cluster. You would probably want to try several numbers of clusters. Another possibility, if the data set is not too large, is to try to do pairs of regressions on a great many different splits of the data. Even with moderate-size data, the number of possible splits grows very fast, but it is probably possible to limit the number of splits quite a bit, either on an ad-hoc basis (it looks like this!) or with some more principled approach to searching through splits.
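A rough sketch of the cluster-then-regress idea using scikit-learn (the two-branch data are simulated; whether k-means actually separates the branches well depends on the data and on scaling, so treat this only as a starting point):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Made-up two-branch data; x and y would come from the actual problem.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, size=300)
branch = rng.integers(0, 2, size=300)
y = np.where(branch == 0, 1.0 * x, 3.0 * x) + rng.normal(scale=1.0, size=300)

# Cluster in (x, y) space first, then fit one line per cluster.
XY = np.column_stack([x, y])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(XY)

for k in np.unique(labels):
    mask = labels == k
    fit = LinearRegression().fit(x[mask].reshape(-1, 1), y[mask])
    print(k, fit.coef_[0], fit.intercept_)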
"Branching Stick" Regression
I don't know of a fully worked out solution. If the data really look like your example, you could try doing cluster analysis first, then separate regressions in each cluster. You would probably want
"Branching Stick" Regression I don't know of a fully worked out solution. If the data really look like your example, you could try doing cluster analysis first, then separate regressions in each cluster. You would probably want to try several clusters. Another possibility, if the data set is not too large, is to try to do pairs of regressions on a great many different splits of the data. Even with moderate size data, the total possible splits grows very fast, but it is probably possible to limit the number of splits quite a bit, either on an ad-hoc basis (it looks like this!) or maybe some more principled approach of searching through splits.
"Branching Stick" Regression I don't know of a fully worked out solution. If the data really look like your example, you could try doing cluster analysis first, then separate regressions in each cluster. You would probably want
49,165
"Branching Stick" Regression
The problem with your "branching stick" regression is that it is very difficult to parametrize, as it is not easily described by a threshold and an indicator function as in the segmented regression case. A first way to relax the segmented regression is by not requiring that the lines join; this could give you a first idea. This is usually called threshold regression, a sort of generalisation of segmented regression. However, you will have no overlap of lines as you have in your "branching stick". The most general (parametric) model is a mixture regression (or latent class regression), where you fit two lines, one for each group, and membership in one of the two groups is attributed based on a data-driven procedure. This is a very general model, but it won't guarantee that the two lines cross as you wish. So if you really want this exact "branching stick", you could run a mixture regression imposing the specific functional form you are interested in. This will require a fair amount of coding, but basically just implies modifying the standard mixture regression algorithms (mostly based on EM).
"Branching Stick" Regression
The problem with your "branching stick" regression is that it is very difficult to parametrize, as it is not easily described by a threshold and indicator function as in the segmented regression case.
"Branching Stick" Regression The problem with your "branching stick" regression is that it is very difficult to parametrize, as it is not easily described by a threshold and indicator function as in the segmented regression case. A first way to relax the segmented regression is by not requiring that the lines join, this could give you a first idea. This is called usually threshold regression, a sort of generalisation of segmented regression. However, you will have no overlap of lines as you have in your "branching stick" The most general (parametric) model is a mixture regression (or latent class regression), where you fit two lines, one for each group, where membership to one of the two groups is attributed based on a data driven procedure. This is a very general model, but it won't guarantee that the two lines cross as you wish. So if you really want this exact "branching stick", you could run a mixture regression imposing the specific functional form you are interested. This will require a fair amount of coding, but basically just implies modifying the standard mixture regression algorithms (mostly based on EM).
"Branching Stick" Regression The problem with your "branching stick" regression is that it is very difficult to parametrize, as it is not easily described by a threshold and indicator function as in the segmented regression case.
49,166
"Branching Stick" Regression
The first case mentioned in the question can be solved with a very simple method (not iterative, no initial guess) thanks to the piecewise linear regression given on page 12 of the paper: https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf The result $\quad\begin{cases} y=p_1x+q_1 & x<a_1 \\ y=p_2x+q_2 & x>a_1 \end{cases}\quad$ appears on the next figure: The second case, the so-called "Branching Stick" regression, is substantially different because the intended function is multi-valued on a range of $x$. In fact, this case is a sub-case of the "degenerate conic regression" which is treated on page 19 of another paper: https://fr.scribd.com/doc/14819165/Regressions-coniques-quadriques-circulaire-spherique . The sought multi-valued function must be of the second order at least, for example a conic section of equation: $$a_{02}y^2+a_{20}x^2+a_{11}xy+a_{01}y+a_{10}x+1=0 \tag 1$$ A degenerate case is the case of two straight lines $y=p_1x+q_1$ and $y=p_2x+q_2$, whose equation is: $$(p_1x+q_1-y)(p_2x+q_2-y)=0 \tag 2$$ A linear regression gives approximate values of the coefficients $a_{02},a_{20},a_{11},a_{01},a_{10}$. As explained in the referenced paper, equations $(1)$ and $(2)$ match if there is no scatter in the data. In case of scattered data, the result of the regression is a hyperbola, equation $(1)$, close to the asymptotes, equation $(2)$. The formulas to compute $p_1,q_1,p_2,q_2$ are provided in the referenced paper. A numerical example is shown in the figure below. The hyperbola computed from the regression is drawn in green, the asymptotes in blue. The coordinates of the "branching point" are $(x_c,y_c)$. In the referenced paper, it is pointed out that the method of "degenerate conic regression" fails if the scatter of the data is too large, because the gap between the hyperbola and the asymptotes becomes too large.
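The linear regression for the coefficients of equation (1) can be set up directly as an ordinary least-squares problem; a minimal sketch (with made-up two-line data) is below. Recovering the asymptote parameters $p_1,q_1,p_2,q_2$ from these coefficients uses the formulas in the referenced paper and is not reproduced here.

import numpy as np

def fit_conic(x, y):
    """Least-squares estimate of (a02, a20, a11, a01, a10) in
    a02*y^2 + a20*x^2 + a11*x*y + a01*y + a10*x + 1 = 0  (equation (1) above)."""
    A = np.column_stack([y**2, x**2, x*y, y, x])
    coef, *_ = np.linalg.lstsq(A, -np.ones_like(x), rcond=None)
    return coef

# Made-up 'branching' data: points scattered around two crossing lines.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)
lines = rng.integers(0, 2, 400)
y = np.where(lines == 0, 0.5 * x + 1, 2.0 * x - 1) + 0.1 * rng.normal(size=400)

print(fit_conic(x, y))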
"Branching Stick" Regression
The first case mentioned in the question can be solved with a very simple method (not iterative, no initial guess) thanks to the piecewise linear regression given page 12 of the paper : https://fr.scr
"Branching Stick" Regression The first case mentioned in the question can be solved with a very simple method (not iterative, no initial guess) thanks to the piecewise linear regression given page 12 of the paper : https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf The result $\quad\begin{cases} y=p_1x+q_1 & x<a_1 \\ y=p_2x+q_2 & x>a_1 \end{cases}\quad$ appears on the next figure : The second case, so called "Branching Stick regression" is substantially different because the intended function is multi-valuated on a range of $x$. In fact, this case is a sub-case of the "degenerated conic regression" which is treated page 19 in another paper : https://fr.scribd.com/doc/14819165/Regressions-coniques-quadriques-circulaire-spherique . The sought multi-valuated function must be of the second order at least, for example a conic section of equation : $$a_{02}y^2+a_{20}x^2+a_{11}xy+a_{01}y+a_{10}x+1=0 \tag 1$$ A degenerate case is the case of two straight lines $y=p_1x+q_1$ and $y=p_2x+q_2$ , which equation is : $$(p_1x+q_1-y)(p_2x+q_2-y)=0 \tag 2$$ A linear regression gives approximate values of the coefficients $a_{02},a_{20},a_{11},a_{01},a_{10}$ . As explained in the referenced paper, the equations $(1)$ and $(2)$ match if there is no scatter on data. In case of scattered data, the result of regression is an hyperbola equation $(1)$, close to the asymptotes equation $(2)$. The formulas to compute $p_1,q_1,p_2,q_2$ are provided in the referenced paper. A numerical example is shown on the figure below. The hyperbola computed from the regression is drawn in green, the asymptotes in blue. The coordinates of the "Branching point" are $(x_c,y_c)$. In the referenced paper, it is pointed out that the method of "degenerated conic regression" fails if the scatter of data is too large because the gap between the hyperbola and the asymptotes becomes too large.
"Branching Stick" Regression The first case mentioned in the question can be solved with a very simple method (not iterative, no initial guess) thanks to the piecewise linear regression given page 12 of the paper : https://fr.scr
49,167
Forecast time-series with two seasonal patterns
You have a multiple seasonal time series with seasonalities of length $12$ and $12\times 24=288$. This is the sort of data for which the TBATS method was designed. usage_train_ts <- msts(usage_train, seasonal.periods=c(12,288)) fit <- tbats(usage_train_ts) fc <- forecast(fit) plot(fc) Details of the TBATS model are given here: http://robjhyndman.com/papers/complex-seasonality/
Forecast time-series with two seasonal patterns
You have a multiple seasonal time series with seasonalities of length $12$ and $12\times 24=288$. This is the sort of data for which the TBATS method was designed. usage_train_ts <- msts(usage_train,
Forecast time-series with two seasonal patterns You have a multiple seasonal time series with seasonalities of length $12$ and $12\times 24=288$. This is the sort of data for which the TBATS method was designed. usage_train_ts <- msts(usage_train, seasonal.periods=c(12,288)) fit <- tbats(usage_train_ts) fc <- forecast(fit) plot(fc) Details of the TBATS model are given here: http://robjhyndman.com/papers/complex-seasonality/
Forecast time-series with two seasonal patterns You have a multiple seasonal time series with seasonalities of length $12$ and $12\times 24=288$. This is the sort of data for which the TBATS method was designed. usage_train_ts <- msts(usage_train,
49,168
Forecast time-series with two seasonal patterns
Please review Robust time-series regression for outlier detection as that problem/question is similar to yours in that there are two seasons in play. You have 12 readings per hour and 24 hours per day for three days, or a total of 864 values. What I might suggest is that you build a model for each of the 24 hours (step 1). With only three readings, I would think that the simple average of the three values would be a good baseline forecast for the next day's hourly values. I would then develop a Transfer Function (ARMAX) model for each of the 12 readings per hour, using the hourly total as the X variable and using the 24 predicted hourly values from step 1. As usual, Outliers/Level Shifts/Time Trends should be identified along with the customized ARIMA structure for each of these 12 models (step 2), using 72 historical values for Y and 3 distinct values for X. Forecasts can then be computed for each of these 12 time slots using the hourly predictions from step 1.
Forecast time-series with two seasonal patterns
Please review Robust time-series regression for outlier detection as that problem/question is similar to yours in that there are two seasons in play. You have 12 readings per hour and 24 hours per day
Forecast time-series with two seasonal patterns Please review Robust time-series regression for outlier detection as that problem/question is similar to yours in that there are two seasons in play. You have 12 readings per hour and 24 hours per day for three days or a total of 864 values. What I might suggest is that you build a model for each of the 24 hours (step 1). With only three readings I would think that the simple average of three values) would be a good baseline forecast for the next day's hourly values. I would then develop a Transfer Function (ARMAX) model for each of the 12 readings per hour using the hourly total as the X variable and using the 24 predicted hourly values from step 1. As usual Outliers/Level Shifts/Time Trends should be identified along with the customized ARIMA structure for each of these 12 models (step2) using 72 historical values for Y and 3 distinct values for X . Forecasts can then computed for each of these 12 time slots using the hourly predictions from step 1.
Forecast time-series with two seasonal patterns Please review Robust time-series regression for outlier detection as that problem/question is similar to yours in that there are two seasons in play. You have 12 readings per hour and 24 hours per day
49,169
Apparent contradiction between t-test and 1-way ANOVA
Both Student's t-test and ANOVA work by evaluating the observed differences between means relative to the observed variation. In this case the ANOVA uses an average variation of all three groups, but the t-test uses only two groups. The two groups tested with the t-test have much lower variation than the third group and so the t-test yields a smaller p-value than the ANOVA. Reconciling the statistical and practical conclusions is usually not something that can be accomplished using the dichotomous interpretation of significant/not significant. Instead, consider the p-values as continuous indices of the strength of evidence in the data about the null hypothesis and statistical model. If the p-value from the primary F-test of the ANOVA is larger than 0.05 then the p-value from the t-test is probably not very small. In that case you do not have very strong evidence against the null hypotheses in either case. Unless you have enough information from outside the experiment in hand to make a reasoned argument that backs up any conclusion that you want to make, you probably should defer any firm conclusion. It's rarely a mistake to run the experiment again!
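For reference, both p-values are easy to compute side by side with scipy; the groups below are placeholders chosen to mimic the situation described (two tight groups plus one highly variable group), not the OP's data:

from scipy import stats

# g1, g2, g3 stand in for the three experimental groups.
g1 = [5.1, 5.3, 5.2, 5.4]
g2 = [5.8, 6.0, 5.9, 6.1]
g3 = [4.0, 7.5, 5.5, 6.9]      # much more variable than the other two

f_stat, p_anova = stats.f_oneway(g1, g2, g3)
t_stat, p_ttest = stats.ttest_ind(g1, g2)

# With a highly variable third group, the pooled-variance F-test can end up with
# a larger p-value than the two-group t-test, as described above.
print(p_anova, p_ttest)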
Apparent contradiction between t-test and 1-way ANOVA
Both Student's t-test and ANOVA work by evaluating the observed differences between means relative to the observed variation. In this case the ANOVA uses an average variation of all three groups, but
Apparent contradiction between t-test and 1-way ANOVA Both Student's t-test and ANOVA work by evaluating the observed differences between means relative to the observed variation. In this case the ANOVA uses an average variation of all three groups, but the t-test uses only two groups. The two groups tested with the t-test have much lower variation than the third group and so the t-test yields a smaller p-value than the ANOVA. Reconciling the statistical and practical conclusions is usually not something that can be accomplished using the dichotomous interpretation of significant/not significant. Instead, consider the p-values as continuous indices of the strength of evidence in the data about the null hypothesis and statistical model. If the p-value from the primary F-test of the ANOVA is larger than 0.05 then the p-value from the t-test is probably not very small. In that case you do not have very strong evidence against the null hypotheses in either case. Unless you have enough information from outside the experiment in hand to make a reasoned argument that backs up any conclusion that you want to make, you probably should defer any firm conclusion. It's rarely a mistake to run the experiment again!
Apparent contradiction between t-test and 1-way ANOVA Both Student's t-test and ANOVA work by evaluating the observed differences between means relative to the observed variation. In this case the ANOVA uses an average variation of all three groups, but
49,170
Apparent contradiction between t-test and 1-way ANOVA
Especially when you have only one way (therefore no interaction effects) and few groups, the ANOVA is not the only valid tool. You could do t-tests as long as you correct for multiple testing. The commonly used Tukey tests are not really post-hoc tests for an ANOVA, but a collection of t-tests that can be performed with or without the ANOVA. The ANOVA has some assumptions regarding homoscedasticity, normality and sample sizes that, for example, Welch t-tests do not have. Also, even if significant, the ANOVA can tell you less than t-tests can because it does not compare groups one by one. I would suggest you use t-tests (Welch t-tests if you are unsure about the assumptions) and do a Bonferroni or, better, Holm correction for multiple testing. PS. Are those real data points on the plot? It looks like someone deliberately constructed a corner case to expose a weakness of ANOVA vs. t-tests.
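A sketch of this recipe (pairwise Welch t-tests followed by a Holm correction) with scipy and statsmodels; the group values are placeholders:

from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

# 'groups' stands in for the actual data; these numbers just show the mechanics.
groups = {'A': [5.1, 5.3, 5.2, 5.4], 'B': [5.8, 6.0, 5.9, 6.1], 'C': [4.0, 7.5, 5.5, 6.9]}

pairs, pvals = [], []
for (name1, g1), (name2, g2) in combinations(groups.items(), 2):
    # equal_var=False gives the Welch version of the t-test
    pvals.append(stats.ttest_ind(g1, g2, equal_var=False).pvalue)
    pairs.append((name1, name2))

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='holm')
for pair, p, r in zip(pairs, p_adj, reject):
    print(pair, round(p, 4), r)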
Apparent contradiction between t-test and 1-way ANOVA
Especially when you have only one way (therefore no interaction effects) and few groups, the ANOVA is not the only valid tool. You could do t-tests as long as you correct for multiple testing. The com
Apparent contradiction between t-test and 1-way ANOVA Especially when you have only one way (therefore no interaction effects) and few groups, the ANOVA is not the only valid tool. You could do t-tests as long as you correct for multiple testing. The commonly used Tuckey's tests are not really post-hoc tests for an ANOVA, but a collection of t-tests that can be performed with or without the ANOVA. The ANOVA has some assumptions regarding homoscedasticity, normality and sample sizes that for example Welch t-tests do not have. Also, even if significant, the ANOVA can tell you less than t-tests can because it does not compare groups one by one. I would suggest you use t-tests (Welch t-tests if you are unsure about the assumptions) and do a Bonferroni or better Holm correction for multiple testing. PS. Are those real data-points on the plot? It looks like someone deliberately constructed a corner case to expose a weakness of ANOVA vs. t-tests
Apparent contradiction between t-test and 1-way ANOVA Especially when you have only one way (therefore no interaction effects) and few groups, the ANOVA is not the only valid tool. You could do t-tests as long as you correct for multiple testing. The com
49,171
Tuning Order XGBoost
Here is a good article on the topic: Complete Guide to Parameter Tuning in XGBoost (with codes in Python) Also, some people have had good success using hyperopt for tuning hyperparameters. Amine Benhalloum provides some Python code for tuning XGBoost: https://github.com/bamine/Kaggle-stuff/tree/master/otto
Tuning Order XGBoost
Here is a good article on the topic: Complete Guide to Parameter Tuning in XGBoost (with codes in Python) Also, some people have had good success using hyperopt for tuning hyperparameters. Amine Benh
Tuning Order XGBoost Here is a good article on the topic: Complete Guide to Parameter Tuning in XGBoost (with codes in Python) Also, some people have had good success using hyperopt for tuning hyperparameters. Amine Benhalloum provides some Python code for tuning XGBoost: https://github.com/bamine/Kaggle-stuff/tree/master/otto
Tuning Order XGBoost Here is a good article on the topic: Complete Guide to Parameter Tuning in XGBoost (with codes in Python) Also, some people have had good success using hyperopt for tuning hyperparameters. Amine Benh
49,172
Tuning Order XGBoost
import xgboost as xgb
from sklearn.model_selection import RandomizedSearchCV

# Search space over the main complexity / regularization parameters.
param_grid = {
    'silent': [1],
    'max_depth': [4, 5, 6, 7],
    'learning_rate': [0.001, 0.01, 0.1, 0.2, 0.3],
    'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0],
    'gamma': [0, 0.25, 0.5, 1.0],
    'reg_lambda': [0.1, 1.0, 5.0, 10.0, 50.0, 100.0],
    'n_estimators': [100]}

# Arguments forwarded to XGBClassifier.fit (early stopping on an evaluation set).
fit_params = {
    'eval_metric': 'logloss',
    'early_stopping_rounds': 10,
    'eval_set': [(X_train_tfidf, y_train_tfidf)],
    'verbose': False}

clf = xgb.XGBClassifier(n_jobs=-1)
randomized_search = RandomizedSearchCV(
    clf, param_grid, n_iter=30, n_jobs=-1, verbose=0, cv=5,
    scoring='neg_log_loss', refit=False, random_state=42)

# Fit parameters are passed at fit time so they reach XGBClassifier.fit.
randomized_search.fit(X_train, y_train, **fit_params)
Tuning Order XGBoost
param_grid = { 'silent': [1], 'max_depth': [4,5,6,7], 'learning_rate': [0.001, 0.01, 0.1, 0.2, 0,3], 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'colsample_bytree': [0.4, 0.5, 0.6
Tuning Order XGBoost param_grid = { 'silent': [1], 'max_depth': [4,5,6,7], 'learning_rate': [0.001, 0.01, 0.1, 0.2, 0,3], 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'colsample_bytree': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'colsample_bylevel': [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'min_child_weight': [0.5, 1.0, 3.0, 5.0, 7.0, 10.0], 'gamma': [0, 0.25, 0.5, 1.0], 'reg_lambda': [0.1, 1.0, 5.0, 10.0, 50.0, 100.0], 'n_estimators': [100]} fit_params = {'eval_metric': 'logloss', 'early_stopping_rounds': 10, 'eval_set': [(X_train_tfidf, y_train_tfidf)], 'verbose' : False } clf = xgb.XGBClassifier(n_jobs=-1) randomized_search = RandomizedSearchCV(clf, param_grid, n_iter=30, n_jobs=-1, verbose=0, cv=5, fit_params=fit_params, scoring='neg_log_loss', refit=False, random_state=42) randomized_search.fit(X_train, y_train)
Tuning Order XGBoost param_grid = { 'silent': [1], 'max_depth': [4,5,6,7], 'learning_rate': [0.001, 0.01, 0.1, 0.2, 0,3], 'subsample': [0.5, 0.6, 0.7, 0.8, 0.9, 1.0], 'colsample_bytree': [0.4, 0.5, 0.6
49,173
How does FaceNet (Google's facerecognition) handles a new image?
After the network is trained, we can throw away the loss layer. Actually FaceNet (and many other networks for facial recognition) is trained to extract features, that is, to represent the image by a fixed-length vector (embedding). The triplet loss basically says that the distance between feature vectors of the same person should be small, and the distance between different persons should be large. After training, for each given image, we take the output of the second-to-last layer as its feature vector. Thereafter we can do verification (to tell whether two images are of the same person) or clustering based on the features and some distance function (e.g. Euclidean distance).
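Once embeddings are extracted, verification reduces to thresholding a distance; here is a minimal sketch (the threshold value and the random embeddings are placeholders; in practice the threshold is tuned on validation pairs):

import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """Verification by Euclidean distance between L2-normalized embeddings.
    The threshold is a placeholder; it must be tuned on a validation set."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return np.linalg.norm(a - b) < threshold

# Stand-in 128-dimensional embeddings (FaceNet uses 128-D vectors).
rng = np.random.default_rng(6)
emb1, emb2 = rng.normal(size=128), rng.normal(size=128)
print(same_person(emb1, emb2))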
How does FaceNet (Google's facerecognition) handles a new image?
After the network is trained, we can throw away the loss layer. Actually the facenet (and many other networks for facial recognition) is trained for extracting features, that is to represent the image
How does FaceNet (Google's facerecognition) handles a new image? After the network is trained, we can throw away the loss layer. Actually the facenet (and many other networks for facial recognition) is trained for extracting features, that is to represent the image by a fixed length vector (embedding). The triplet loss basically says, the distance between feature vectors of the same person should be small, and the distance between different persons should be large. After training, for each given image, we take the output of the second last layer as its feature vector. Thereafter we can do verification (to tell whether two images are of the same person) or clustering based on the features and some distance function (e.g. Euclidean distance).
How does FaceNet (Google's facerecognition) handles a new image? After the network is trained, we can throw away the loss layer. Actually the facenet (and many other networks for facial recognition) is trained for extracting features, that is to represent the image
49,174
Autoencoders' gradient when using tied weights
Following the notation in the slides, a one-layer autoencoder with tied weights is given by $$o(\hat{a}(x))=o(c+W^Th(x))=o(c+W^T\sigma(b+Wx))$$ The gradient wrt $W$, according to the product rule, is $$\frac{\partial l}{\partial W_{ij}}=\frac{\partial l}{\partial \hat{a}_j}\frac{\partial \hat{a}_j}{\partial W_{ij}}=\frac{\partial l}{\partial \hat{a}_j}(h_i+W_{ij}\frac{\partial h_i}{\partial W_{ij}})=\frac{\partial l}{\partial \hat{a}_j}h_i+\frac{\partial l}{\partial \hat{a}_i}x_j$$ which equals the sum of the backpropagation gradients of the encoder and decoder layers.
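A numerical check that the tied-weight gradient is the sum of the encoder and decoder backpropagation terms, here using a squared-error loss and an identity output nonlinearity for simplicity (these choices, and all the numbers, are assumptions made for the sketch, not necessarily what the slides use):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(7)
d, k = 5, 3                         # input dim, hidden dim
x = rng.normal(size=d)
W = rng.normal(size=(k, d)) * 0.1   # tied: encoder uses W, decoder uses W^T
b, c = rng.normal(size=k), rng.normal(size=d)

def loss(W):
    h = sigmoid(b + W @ x)
    xhat = c + W.T @ h              # identity output for simplicity
    return 0.5 * np.sum((xhat - x) ** 2)

# Analytic gradient = decoder term + encoder term (the "sum over layers").
h = sigmoid(b + W @ x)
e = (c + W.T @ h) - x                               # dl/d(xhat)
grad_decoder = np.outer(h, e)                       # from xhat = c + W^T h
delta = (W @ e) * h * (1 - h)                       # backprop through the sigmoid
grad_encoder = np.outer(delta, x)                   # from h = sigma(b + W x)
grad_analytic = grad_decoder + grad_encoder

# Finite-difference check.
grad_num = np.zeros_like(W)
eps = 1e-6
for i in range(k):
    for j in range(d):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        grad_num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.allclose(grad_analytic, grad_num, atol=1e-5))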
Autoencoders' gradient when using tied weights
Following the notation in the slides, a one layer autoencoder with tied weights is given by $$o(\hat{a}(x))=o(c+W^Th(x))=o(c+W^T\sigma(b+Wx))$$ The gradient wrt $W$ according to the product rule $$\fr
Autoencoders' gradient when using tied weights Following the notation in the slides, a one layer autoencoder with tied weights is given by $$o(\hat{a}(x))=o(c+W^Th(x))=o(c+W^T\sigma(b+Wx))$$ The gradient wrt $W$ according to the product rule $$\frac{\partial l}{\partial W_{ij}}=\frac{\partial l}{\partial \hat{a}_j}\frac{\partial \hat{a}_j}{\partial W_{ij}}=\frac{\partial l}{\partial \hat{a}_j}(h_i+W_{ij}\frac{\partial h_i}{\partial W_{ij}})=\frac{\partial l}{\partial \hat{a}_j}h_i+\frac{\partial l}{\partial \hat{a}_i}x_j$$ which is equal to adding up the backpropagation gradients of each layer.
Autoencoders' gradient when using tied weights Following the notation in the slides, a one layer autoencoder with tied weights is given by $$o(\hat{a}(x))=o(c+W^Th(x))=o(c+W^T\sigma(b+Wx))$$ The gradient wrt $W$ according to the product rule $$\fr
49,175
Does $r$ overestimate true effects for small sample size datasets?
In fact the sample correlation is biased, but it's not biased upward; it's biased toward 0 (this has been known for at least a century). For example, I just did a little simulation -- across 10000 simulations of samples of size 3, where the pairs were generated from a bivariate normal population with $\rho= 0.1$, the average sample correlation was $0.0685$. Soper (1913) [1] came up with some approximations to the expected value of the sample correlation when sampling from a bivariate normal (including the approximation $E[r]\approx \rho(1-\frac{1-\rho^2}{2n})\,$) and Fisher [2] worked on the problem and did some of the mathematics in detail (continuing on in later papers). [1] Soper, H.E. (1913), "On the probable error of the correlation coefficient to a second approximation", Biometrika, vol 9, p91-115 pdf [2] Fisher, R.A. (1915), "Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population", Biometrika, vol 10, no. 4, 507-521 pdf
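A sketch of this kind of simulation in numpy (the seed, sample size and replication count are arbitrary); the last line also prints Soper's approximation, which is only rough for samples this small:

import numpy as np

rng = np.random.default_rng(8)
rho, n, reps = 0.1, 3, 10000
cov = np.array([[1.0, rho], [rho, 1.0]])

r = np.empty(reps)
for s in range(reps):
    sample = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    r[s] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]

# The average sample correlation tends to sit below rho = 0.1 (bias toward 0).
# Soper's second-order approximation E[r] ~ rho*(1 - (1 - rho^2)/(2n)) is crude at n = 3.
print(r.mean(), rho * (1 - (1 - rho**2) / (2 * n)))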
49,176
Maximum likelihood estimation of a Poisson binomial distribution
As I see your problem, you have $K$ individuals completing $N$ trials that result in binary outcomes (success or failure). So you are dealing with $N\times K$ random variables $X_{ij}$. You are interested in computing probabilities of success for each trial, $p_i$. So the first thing to notice is that you assume here that participants are exchangeable, i.e. there are no more or less skilled participants - is this assumption correct for your data? The first thing that comes to my mind is that for data such as yours, Item Response Theory models would be better suited. Using such models you could estimate a model assuming that the tasks vary in their difficulty and that the participants vary in their skills (e.g. using a simple Rasch model). But let's stick to what you said and assume that participants are exchangeable and you are interested only in probabilities per task. As others already noticed, you are not dealing here with a Poisson binomial distribution, since we use such a distribution for sums of successes from $N$ independent Bernoulli trials with different probabilities $p_1,\dots,p_N$. For this you would have to introduce a new random variable defined as $Y_j = \sum_{i=1}^N X_{ij}$, i.e. the total number of successes per participant. As noted by Xi'an, the parameters $p_i$ are not identifiable here, and if you have data on the result of each trial by each participant, it is better to think about them as Bernoulli variables parametrized by $p_i$. From what you are saying, you would like to test whether a Poisson binomial distribution fits the data better than an ordinary binomial. As I read it, you want to test whether the trials differ in probabilities of success, versus whether the probability of success is the same for each trial (since $p_1 = p_2 = \dots = p_N$ is an ordinary binomial distribution). Said the other way around, your null hypothesis would be that not only participants, but also trials are exchangeable, so identifying particular trials tells us nothing about the data since they all have the same probability of success. If we have the null hypothesis stated like this, it instantly leads to a permutation test, where you would randomly "shuffle" your $N\times K$ matrix and compare the statistic computed on such permuted data to the statistic computed on the unshuffled data. For the statistic to compare I would use the combined variance $$ \sum_{i=1}^N \hat p_i(1-\hat p_i) $$ where $\hat p_i$ is the probability of success estimated from the data for the $i$-th trial (the per-trial means). In the case of equal $\hat p_i$'s it would reduce to $N \hat p(1-\hat p)$. To illustrate this I conducted a simulation with three different scenarios: (a) all $p_i = 0.5$, (b) they come from a Beta(1000,1000) distribution, (c) they come from a uniform distribution. In the first case the $p_i$'s are all equal; in the second case they are "random", but grouped around a common mean; and in the third case they are totally "random". The plot below shows the distributions of the test statistic under the null hypothesis (i.e. computed on shuffled data); the red lines show the variance computed on the unshuffled data. As you can see, the combined variance of the unshuffled data falls within the null distribution in the first case (the test is not significant) and only slightly approaches the distribution in the second case (a significant difference from the null). In the third case the red line is not even visible on the plot since it is that far from the null distribution (a significant difference).
So the test correctly identified the "all the same $p_i$'s" scenario (a), but didn't find the "similar but not the same" scenario (b) to fulfill the criterion of equality. The question is whether you want to be that strict about it. Nonetheless, this is a direct implementation of a test for your hypothesis. It compares the basic criterion that would enable you to distinguish an ordinary binomial from a Poisson binomial (their variances). There are of course lots of other possibilities, more or less appropriate depending on your problem, e.g.: comparing the individual confidence intervals, pairwise $z$-tests, ANOVA, using some kind of logistic regression model, etc. However, as I said before, this sounds rather like a problem for Item Response Theory models, and assuming equal skills of participants sounds risky.
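A minimal R sketch of that permutation test, with a simulated $N\times K$ matrix of 0/1 outcomes standing in for the real data (here generated under scenario (c), so the test should reject):
set.seed(1)
N <- 20; K <- 15                       # trials x participants
p_true <- runif(N)                     # trial probabilities differ (scenario (c))
X <- matrix(rbinom(N * K, 1, rep(p_true, K)), nrow = N)
comb_var <- function(M) {              # sum_i phat_i * (1 - phat_i) over trials
  p_hat <- rowMeans(M)
  sum(p_hat * (1 - p_hat))
}
obs  <- comb_var(X)
perm <- replicate(2000, comb_var(matrix(sample(X), nrow = N)))  # shuffle the whole matrix
mean(perm <= obs)   # one-sided permutation p-value: small when trial probabilities really differ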
49,177
For random forest, what's the difference between out-of-bag error and k-fold cross validation?
OOB error will give a misleading indication of performance on a time-series dataset because it will be evaluating performance on past data using future data. This does not give a good indication of the model's ability to perform on future data. Therefore, use a methodology like TimeSeriesSplit. By holding out future data points for model evaluation, you examine your model's ability to perform in the future. To build your intuition on why it's "cheating" to evaluate your forecasting ability using future data points: suppose I ask you to predict my weight a year from now, given measurements I took over the past 365 days. That might be challenging, unless you see a clear pattern. Now suppose that I ask you to predict my weight on day 155 of the 365 days I already recorded, and all I do is hold out day 155. You could easily estimate my weight as (w[154] + w[156]) / 2. You just took the average of the weights between two days. But did you forecast the future? (No.) Does a measurement of your success imply you can predict my future weight? (Certainly not!)
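For illustration, a small R sketch of a time-ordered (rolling-origin) evaluation, analogous in spirit to scikit-learn's TimeSeriesSplit; the data and the simple linear model below are just stand-ins for a real series and a random forest:
set.seed(1)
n <- 365
dat <- data.frame(t = 1:n, y = 70 + 0.01 * (1:n) + rnorm(n))  # made-up daily weights
horizon <- 30                                # predict the next 30 days in each fold
origins <- seq(200, n - horizon, by = horizon)
errors <- sapply(origins, function(o) {
  train <- dat[dat$t <= o, ]                 # only past data for fitting
  test  <- dat[dat$t > o & dat$t <= o + horizon, ]
  fit   <- lm(y ~ t, data = train)           # any model could go here
  sqrt(mean((predict(fit, test) - test$y)^2))  # error measured on future data only
})
mean(errors)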
49,178
Resampling/Interpolating monthly rates to daily rate estimates in R
If I understand the question correctly, the idea is to resample the energy-usage rates conservatively. To ensure conservative resampling, you should resample the extensive quantity ("mass" = cumulative energy used) rather than the intensive quantity ("density" = usage rate). This is very similar to how resampling a probability density is tricky, but resampling a cumulative distribution is straightforward (i.e. no coordinate-change adjustment is required). In the current case, we have a time series of cumulative energy usage $(E_k,t_k)$. The original question is phrased in terms of the average energy-usage rate (power), which is the ratio of first differences, i.e. $\bar{r}_k=\Delta E_k/\Delta t_k$, and notes difficulty in conservative resampling of the $\bar{r}_k$ time series. However, if we consider the energy series itself, then we have $$\Delta E_k = E_{k+1}-E_k = \int_{t_k}^{t_{k+1}}r[t]dt = \bar{r}_k\Delta t_k$$ The energy series itself $E(t)$ can be interpolated using any monotone scheme (e.g. linear interpolation, or monotone cubic). The monotone requirement ensures that it will always be non-decreasing through time: $E_k\leq E_{k+\phi}\leq E_{k+1}$ for $\phi\in[0,1]$. Once this is done, the new energy series can be differenced to get average usage rates over the new time intervals. Summary: the average usage rate is by definition $\bar{r}=\Delta E/\Delta t$, so if you interpolate the cumulative energy $E(t)$ (monotonically) then you will automatically have conservative results. I do not use R, but skimming the help, it looks like you can do something like:
nt <- length(ts$start_date)
t <- c(ts$start_date, ts$end_date[nt])
E <- cumsum(c(0, ts$energy_use))
Espline <- splinefun(x = t, y = E, method = 'monoH.FC')
dEdt_spline <- function(t) Espline(t, deriv = 1)
Then you can evaluate the average power consumption as $\langle r\rangle_{t\in[t_1,t_2]} = \frac{E(t_2)-E(t_1)}{t_2-t_1}$, and the "instantaneous" power consumption with $E'(t)$. (Note: I quickly tried this on R-fiddle and it seemed to work, but your integrate test still did not work. I strongly believe this must be due to some code error, either on my part or in the R libraries. That is, by design $E(t)$ has been fit with a monotone interpolating spline, which has an analytic derivative, that itself has an analytic integral, as they are piecewise polynomial functions. Most likely the inconsistency is due to my lack of R knowledge, or it could be due to numerical approximations used in the spline calculus functions.) Update: As expected, the above was due to my lack of R knowledge, as shown in the updated question. (I had literally never written any R before Googling to do the above, so not too shabby!) Note also that as seen there, the monotone cubic spline functions will have a discontinuous second derivative (seen as kinks in the $E'(t)$ plots). This could be avoided with a monotone C2 interpolant (e.g. this), though I do not know what R package this might be in. Note on monotone interpolation: This simply means that the interpolation does not introduce any new local maxima/minima not already present in the data. For example the following picture from Wikipedia demonstrates how the standard cubic spline is not monotone. Public Domain, https://en.wikipedia.org/w/index.php?curid=9051137
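As a short usage sketch building on the functions above (assuming the dates have been converted to numbers, e.g. days, and $t_1 < t_2$ lie inside the observed range):
avg_power  <- function(t1, t2) (Espline(t2) - Espline(t1)) / (t2 - t1)  # mean rate over [t1, t2]
inst_power <- dEdt_spline                                               # "instantaneous" rate E'(t)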
49,179
Resampling/Interpolating monthly rates to daily rate estimates in R
What I did is calculate the usage at mid month needed to balance the end-of-month kw usage calculation using linear interpolation. This works out to be a simple formula: $kw_{m-1/2}=\frac{3}{2}kw_m-\frac{1}{2}kw_{m-1}$. That is, the mid-month consumption is 3/2 of the end-of-month consumption minus 1/2 of the prior month's utilization. Why? Simple geometry. Note that the area of each trapezoid under the curve is the average height times the base. Note that the area of the two half-month trapezoids culminating in the end-of-month usage is thus $\frac{1}{2}\frac{kw_{m-1}+\frac{3}{2}kw_m-\frac{1}{2}kw_{m-1}}{2}+\frac{1}{2}\frac{\frac{3}{2}kw_m-\frac{1}{2}kw_{m-1}+kw_m}{2}=kw_m.$ To show this, I made up pseudo-data because I didn't have any. Here are the step-by-step calculations. Pseudo-data and calculations:
month        end kw   days   kw/day        mid month          mid month usage (3/2*kw(m)-1/2*kw(m-1))
01/10/2017   546
01/11/2017   294      31     17.61290323   16/10/2017 12:00   168
01/12/2017   493      30     9.8           16/11/2017 0:00    592.5
01/01/2018   593      31     15.90322581   16/12/2017 12:00   643
Putting together the true usages and mid-month usages in a single table gives us a table of all values:
dates              kw
01/10/2017         546
16/10/2017 12:00   168
01/11/2017         294
16/11/2017 0:00    592.5
01/12/2017         493
16/12/2017 12:00   643
01/01/2018         593
We then proceed to plot that table:
Note, we could further require that the slope for each month and mid month be the same approached from the left or right, or we could solve the same problem a bazillion other ways. No matter how it is solved, there will be "slumps" or "bumps" in between the months to account, respectively, for the greater or lesser utilization during the prior month. For real-life utilization, the actual curve is closer to a step function with many mostly tiny steps and some big ones too, so there is no way we can emulate it exactly.
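For illustration, the same calculation takes a few lines of R (using the pseudo-data above):
kw  <- c(546, 294, 493, 593)                    # end-of-month usage, Oct 2017 .. Jan 2018
mid <- 1.5 * kw[-1] - 0.5 * kw[-length(kw)]     # kw_(m-1/2) = 3/2*kw_m - 1/2*kw_(m-1)
mid                                             # 168.0 592.5 643.0, matching the table
# interleave month-end and mid-month values, as in the combined table
all_kw <- c(rbind(kw[-length(kw)], mid), kw[length(kw)])
all_kw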
49,180
How can one produce many `p-values` in regression analysis?
When you regress on a factor you have an indicator (dummy) variable for each level of the factor bar one (the "baseline" category). As a result the p-values of the coefficients represent p-values for the pairwise comparisons with the baseline. Here's an example in R, a data set on weights of chicks on different feed:
> summary(lm(weight~feed,chickwts))
[... snip ...]
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)    323.583     15.834  20.436  < 2e-16 ***
feedhorsebean -163.383     23.485  -6.957 2.07e-09 ***
feedlinseed   -104.833     22.393  -4.682 1.49e-05 ***
feedmeatmeal   -46.674     22.896  -2.039 0.045567 *  
feedsoybean    -77.155     21.578  -3.576 0.000665 ***
feedsunflower    5.333     22.393   0.238 0.812495    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 54.85 on 65 degrees of freedom
Multiple R-squared:  0.5417, Adjusted R-squared:  0.5064
F-statistic: 15.36 on 5 and 65 DF,  p-value: 5.936e-10
The last column in the coefficients table is a set of p-values for comparisons with the mean of the baseline (casein) category.
49,181
Sum of truncated Gammas
I'm not sure if the above is correct, or how to calculate the second term in the brackets.
Your solution appears to be correct. And the incomplete Beta does not pose a problem ... Given: $X_1$ and $X_2$ each have a $\text{Gamma}(a,b)$ distribution truncated above at $w$, with pdf $f(x)$: Note that your parameter $\beta = \frac{1}{b}$, and that you are using the lower incomplete gamma function, whereas I am using the incomplete gamma. Checking with the development version of mathStatica returns the sum $Y = X_1 + X_2$ has pdf $h(y)$: where: Beta[z,a,c] denotes the incomplete beta function $B_z(a,c)$ and Gamma[a,w] is the incomplete gamma function $\Gamma(a,w) = \int _w^{\infty } t^{a-1} e^{-t} d t$ which appears to match your own workings. The inclusion of the incomplete Beta function does not impose any practical problem: it is commonly available in any number of software packages. Here is a plot of the pdf $h(y)$ when $a = 1.2$, $b= 3$, and $w = 4$
Monte Carlo check
Here is a quick Monte Carlo check comparing the 'empirical' pdf of the sum of two truncated Gammas (blue wiggly) to the exact theoretical solution above (dashed red curve), for the same parameter values: All looks good :)
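For readers without mathStatica, the Monte Carlo part is easy to reproduce in R, sampling the truncated Gamma by the inverse-CDF method restricted to $[0,w]$ (same parameter values as above):
set.seed(1)
a <- 1.2; b <- 3; w <- 4; nsim <- 1e5
rtgamma <- function(n) {                       # Gamma(a, scale = b) truncated above at w
  u <- runif(n, 0, pgamma(w, shape = a, scale = b))
  qgamma(u, shape = a, scale = b)
}
y <- rtgamma(nsim) + rtgamma(nsim)             # sum of two independent truncated Gammas
plot(density(y, from = 0, to = 2 * w), main = "Empirical pdf of Y = X1 + X2")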
49,182
How to get the data set size required for neural network training?
There's really no fixed rule that you can apply here. The number of training samples for training depends on the nature of the problem, the number of features, and the complexity of your network architecture. Try "simple" architectures first, i.e., fewer layers, fewer units per layer and experiment a bit with different training sizes and architectures to get a feeling for that. I know, the answer may be a bit disappointing, but as far as I know, it's all empirical for now. Also, maybe learning curves could help (although, be aware that it's expensive; it's useful for developing a "feeling" for the dataset and model complexity though) E.g., I did this one for a MNIST subset using a simple softmax algorithm (1-layer) some time ago. I used 1500 samples for testing for the different training set sizes, and I would conclude from this figure that more training data may help to fit a "more accurate" model.
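A rough R sketch of such a learning curve, here with simulated data and plain logistic regression standing in for the network, just to show the mechanics:
set.seed(1)
n <- 2000; p <- 10
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
y <- rbinom(n, 1, plogis(X %*% rnorm(p)))
dat <- data.frame(y = y, X)
test  <- dat[1:500, ]                         # fixed test set
train <- dat[-(1:500), ]
sizes <- c(50, 100, 200, 400, 800, 1500)
acc <- sapply(sizes, function(m) {
  fit  <- glm(y ~ ., data = train[1:m, ], family = binomial)
  pred <- predict(fit, test, type = "response") > 0.5
  mean(pred == (test$y == 1))                 # test accuracy for this training size
})
plot(sizes, acc, type = "b", xlab = "training set size", ylab = "test accuracy")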
49,183
How to get the data set size required for neural network training?
I'll copy my answer from the very related question How few training examples is too few when training a neural network? (any update will be performed there): It really depends on your dataset, and network architecture. One rule of thumb I have read (e.g., in (2)) was a few thousand samples per class for the neural network to start to perform very well. In practice, people try and see. It's not rare to find studies showing decent results with a training set smaller than 1000 samples. (2) Cireşan, Dan C., Ueli Meier, and Jürgen Schmidhuber. "Transfer learning for Latin and Chinese characters with deep neural networks." In The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1-6. IEEE, 2012. https://scholar.google.com/scholar?cluster=7452424507909578812&hl=en&as_sdt=0,22 ; http://people.idsia.ch/~ciresan/data/ijcnn2012_v9.pdf: For classification tasks with a few thousand samples per class, the benefit of (unsupervised or supervised) pretraining is not easy to demonstrate.
49,184
How to get the data set size required for neural network training?
The "data set size" is property of the data set, not of the NN. If you are working with MNIST data set - the full data set is 60,000 images. If you split 10% for validation, you'd have 54,000 images for training. The training data set size will be 54,000.
49,185
Formal definitions for nonparametric and parametric models
Mark J. Schervish's Theory of Statistics (1995) puts it like this (p. 1): Most paradigms for statistical inference make at least some use of the following structure. We suppose that some random variables $X_1, ..., X_n$ all have the same distribution [i.e., their induced probability measures are all equal], but we may be unwilling to say what that distribution is. Instead, we create a collection of distributions called a parametric family and denoted $P_0$. For example, $P_0$ might consist of all normal distributions, or just those normal distributions with variance 1, or all binomial distributions, or all Poisson distributions, and so forth. Each of these cases has the property that the collection of distributions can be indexed by a finite-dimensional real quantity, which is commonly called a parameter. For example, if the parametric family is all normal distributions, then the parameter can be denoted $Θ = (M, E)$, where $M$ stands for the mean and $E$ stands for the standard deviation. The set of all possible values of the parameter is called the parameter space and is often denoted by $Ω$. Schervish's terminology is a little idiosyncratic in that he calls the vector of real numbers determining the distribution "the parameter", singular, as opposed to the more common practice of referring to the individual real numbers as "parameters". Notice Schervish's qualification "finite-dimensional". Parametric families with infinite-dimensional parameter spaces do see use, but the resulting model is (perhaps confusingly) called nonparametric. In my opinion, "nonparametric" is also a fair description of any method that doesn't fit into the parametric-modeling framework at all, such as nearest-neighbors classification, but one could argue instead that the terms "parametric" and "nonparametric" are simply inapplicable.
49,186
Formal definitions for nonparametric and parametric models
Parametric (model): data which can be described by a finite number of parameters (for example gaussian with both unknown mean and variance). Nonparametric (model): data which need an infinite number of parameters to be described (for example bounded probability distributions, continuous regression functions...) Parametric/nonparametric algorithms: algorithms devised to work for parametric/nonparametric models. Basic SVM only works for linear data, so it's a parametric algorithm. More advanced SVM work for more complicated data. If your SVM works for nonparametric data (for example regression functions only assumed to be continuous), then it is nonparametric.
49,187
Ways of implementing Translation invariance
This answer by Matt Krause on What is translation invariance in computer vision and convolutional neural network? contains some pointers: One can show that the convolution operator commutes with respect to translation. If you convolve $f$ with $g$, it doesn't matter if you translate the convolved output $f*g$, or if you translate $f$ or $g$ first, then convolve them. Wikipedia has a bit more. One approach to translation-invariant object recognition is to take a "template" of the object and convolve it with every possible location of the object in the image. If you get a large response at a location, it suggests that an object resembling the template is located at that location. This approach is often called template-matching. You may also find this technical report interesting, they give some overview: Leibo, Joel Z., Jim Mutch, Lorenzo Rosasco, Shimon Ullman, and Tomaso Poggio. "Learning generic invariances in object recognition: translation and scale." (2010). https://scholar.google.com/scholar?cluster=17887886525836197513&hl=en&as_sdt=0,22 ; http://cbcl.mit.edu/cbcl/publications/ps/Efficiency_of_invariance_and_learning_CBCL_TR.pdf
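A toy R sketch of template matching by sliding a template over a 1-D signal (the 2-D image case works the same way, with a double loop or a convolution routine; all values below are made up):
set.seed(1)
template <- c(1, 3, 5, 3, 1)                 # the "object" we look for
signal   <- rnorm(100, sd = 0.3)
signal[40:44] <- signal[40:44] + template    # hide the object at position 40
n <- length(signal); k <- length(template)
score <- sapply(1:(n - k + 1), function(i)
  sum(signal[i:(i + k - 1)] * template))     # cross-correlation with the template
which.max(score)                             # ~40: the response peaks where the template sits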
49,188
Ways of implementing Translation invariance
I can't give you a specific link, but I'd start looking into convolutional neural networks (CNNs). I don't know whether there are other approaches to this problem.
49,189
Understanding formulation of hypotheses in difference between two sample means (z test)
1) If you want your hypotheses to partition the universe, where do you stop? Imagine you have a dataset drawn from a normal distribution. You might consider the hypotheses: $H_1$: $\mu>0$ $H_0$: $\mu\leq0$ $H_{-1}$: The data weren't Normal after all but followed some other distribution $H_{-2}$: The data didn't even have to be real valued, we just happened to observe nothing but real values ... In practice, if the sample mean of population 2 is lower than that of population 1 we will always be rejecting an alternative hypothesis of $\mu_2 > \mu_1$ whether the null hypothesis contained an equals or a $"\leq"$. It is perfectly possible to perform a likelihood ratio test for an "$=$" hypothesis against a "$<$" hypothesis. 2) Yes, you can swap the signs but you don't swap which hypothesis is the null. The two formulations are equivalent: $H_0: \mu_1 \leq \mu_2\\H_1: \mu_1 > \mu_2$ $H_0: \mu_2 \geq \mu_1\\H_1: \mu_2 < \mu_1$ The reason for this is that the datasets were symmetric in a sense, but the null hypothesis enjoys a special privilege (it is "innocent until proven guilty", or at least 95%-so.)
49,190
Understanding formulation of hypotheses in difference between two sample means (z test)
Your assumption is correct and you explained it nicely yourself. If you swap the hypotheses, then you must keep in mind that level of significance $\alpha$ and 1-power will swap places.
49,191
How are unobserved components predicted in random effect models?
So the true model has the unobserved, individual-level, time-invariant heterogeneity: $y_{it}=\beta x_{it}+c_i+e_{it}$. So we estimate: $y_{it}=\alpha + \beta x_{it}+u_{it}$, where $u_{it}=c_i-\alpha+e_{it}$. Use pooled OLS to get $\hat u_{it}$ and $\hat\alpha$. Letting $c_i-\alpha=\mu_i$, take $\hat\mu_i=(1/T)\sum_t \hat u_{it}$ and $\hat e_{it}=\hat u_{it}-\hat\mu_i$. More info here: http://www.utdallas.edu/~d.sul/Econo1/lec_note_part3.pdf
49,192
How are unobserved components predicted in random effect models?
Ok, I managed to get the answer I wanted, which also explains why the estimator is unbiased and consistent. Here it is: The model is: $$y_{it} = x_{it}\beta + c_{i} + \epsilon_{it} $$ From RE we obtain an estimation of $\beta$. Define the estimation error $\hat{u}_{it}$: $$ \hat{u}_{it} \equiv y_{it} - x_{it}\hat{\beta} $$ Now, define the linear predictor $\bar{u}_{it}$ as the mean of the estimation error: $$ \bar{u}_{it} \equiv \frac{\sum_{t=1}^{T}\hat{u}_{i}}{T} = \bar{y_{it}} - \bar{x}_{i}\hat{\beta} $$ This is, allegedly, the BLUP estimator of $c_{i}$. To confirm this, let us evaluate the statistical properties of this predictor. To do this, replace the original model into the above expression. After some rearranging, the outcome is: $$ \bar{u}_{it} = \bar{x}_{i}\beta - \bar{x}_{i}\hat{\beta} + c_{i} + \frac{\sum_{t=1}^{T}\epsilon_{it}}{T} $$ The expectation of this estimator is: $$ E(\bar{u}_{it}) = \bar{x}_{i}\beta - \bar{x}_{i}E(\hat{\beta}) + E(c_{i}) + \frac{\sum_{t=1}^{T}E(\epsilon_{it})}{T} $$ Assume $\hat{\beta}$ is an unbiased estimator of $\beta$ (requires strict exogeneity, unobserved component orthogonal to regressors, and rank condition). Moreover, $E(\epsilon_{it}) = 0$ (trivial when constant included in $x_{it}$). In consequence, $\bar{u}_{it}$ is an unbiased estimator of $E(c_{i})$. Regarding consistency, the probability limit of this predictor is: $$ p \lim\limits_{T \rightarrow \infty} \bar{u}_{it} = p \lim\limits_{T \rightarrow \infty} \left( \bar{x}_{i}\beta\right) - p \lim\limits_{T \rightarrow \infty} \left(\bar{x}_{i}\hat{\beta}\right) + p \lim\limits_{T \rightarrow \infty} c_{i} + p \lim\limits_{T \rightarrow \infty} \left( \frac{\sum_{t=1}^{T}\epsilon_{it}}{T}\right) $$ Again, $\hat{\beta}$ is a consistent estimator of $\beta$. This is, $p \lim\limits_{T \rightarrow \infty} \hat{\beta} = \beta $. Furthermore, $p \lim\limits_{T \rightarrow \infty} \left( \frac{\sum_{t=1}^{T}\epsilon_{it}}{T}\right) = E(\epsilon_{it})$, which is zero. Therefore: $$ p \lim\limits_{T \rightarrow \infty} \bar{u}_{it} = c_{i} $$ Or, equivalently: $$ \bar{u}_{it} \xrightarrow{P} c_{i} $$ This proves that the predictor is indeed BLUP.
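For illustration, a small R simulation of this predictor (individual means of the estimation error, here with pooled OLS standing in for the RE/GLS fit), showing that $\bar{u}$ tracks the true $c_i$ up to a constant shift:
set.seed(1)
N <- 100; Tt <- 50                      # individuals and time periods
id  <- rep(1:N, each = Tt)
x   <- rnorm(N * Tt)
c_i <- rnorm(N, sd = 2)                 # true individual effects
y   <- 1 + 0.5 * x + c_i[id] + rnorm(N * Tt)
fit   <- lm(y ~ x)                      # pooled OLS
u_bar <- tapply(resid(fit), id, mean)   # individual means of the estimation error
cor(u_bar, c_i)                         # close to 1: u_bar recovers c_i up to a constant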
49,193
Bayesian A/B testing a continuous value (Not a success rate)
Let's take a look at what information you have: Whether the customer purchased anything $(Y_K)$. This is binary. If the customer made a purchase, which incentive threshold they were willing to buy at $(Y_L)$. This is ordinal, and ranges from 0 to 3. The value of this is meaningless when $Y_K=0$. The amount of money the customer spent ($Y_M$). This is a continuous, positive value, and does not include customers for whom $Y_K=0$. The test treatment $(X)$, which could plausibly affect the above $Y$ values. It's often helpful to try to model the process that generated the data you're analyzing. The amount of money a person spends ($Y_M$) appears to be influenced by the incentive they desire ($Y_L$), which is contingent on whether they decide to purchase anything at all ($Y_K$). This suggests something like the following model: $$\begin{align} Y_{K} & \sim\text{Bernoulli}(p_{x})\\ Y_{L} & \sim\text{Ordered-Logistic}(\eta_{x},c)\\ Y_{M} & \sim\text{LogNormal}(\alpha_{L}+\beta_{x}, \sigma)\end{align}$$ $p_x$ is the probability of making a purchase for group $X$. You can either put a prior on the $p$ parameters directly or use a logistic regression. The ordered logistic regression has $\eta_x$ as the predictor for which incentive group the customer would choose given their A/B group membership. $c$ is a vector of cutpoint parameters that determine which values of $Y_L$ $\eta$ maps to over its range. These parameters can be transformed to calculate the posterior probability of a given treatment selecting each incentive level. For more details, I'd recommend reading up on ordered logit regression. $\alpha_L$ is the intercept for $Y_M$, and it depends on the value of $Y_L$. This should (in theory) take care of the multimodality. Given that you know what the incentive thresholds are, you should use informed priors for this parameter. $\beta_x$ is the effect of one of your treatment groups. I suggested a log-normal distribution since it is easily parameterized and is often used for positive data with a skew, but you should feel free to try other options. You may also wish to consider investigating whether $\beta_x$ or $\sigma$ should vary among levels of $Y_L$. You should then be able to apply a loss function to the posterior distribution of the parameters to determine when you should stop the test. The whole model (including posterior transformations and loss function calculation) could be implemented in Stan.
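To make the generative structure concrete, here is a small R sketch that simulates data from a model of this form; every numeric value (purchase rates, cutpoints, intercepts, treatment effect) is an arbitrary assumption for illustration, and fitting the model would still be done in something like Stan:
set.seed(1)
n    <- 5000
grp  <- rbinom(n, 1, 0.5)                           # A/B assignment X
yk   <- rbinom(n, 1, ifelse(grp == 1, 0.12, 0.10))  # Y_K: did they buy? (assumed rates)
eta  <- 0.3 * grp                                   # ordered-logistic predictor eta_x (assumed)
cuts <- c(-1, 0.5, 1.5)                             # cutpoints c separating levels 0..3 (assumed)
draw_level <- function(e) sample(0:3, 1, prob = diff(c(0, plogis(cuts - e), 1)))
buyers <- which(yk == 1)
yl <- rep(NA_real_, n); yl[buyers] <- sapply(eta[buyers], draw_level)   # Y_L for buyers only
alpha <- c(3.0, 3.3, 3.6, 4.0)                      # intercepts alpha_L by incentive level (assumed)
ym <- rep(NA_real_, n)
ym[buyers] <- rlnorm(length(buyers), alpha[yl[buyers] + 1] + 0.05 * grp[buyers], 0.5)  # Y_M
Simulating from the assumed model like this is also a cheap way to sanity-check the Stan implementation later: fit the simulated data and see whether the known parameters are recovered.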
49,194
Bayesian A/B testing a continuous value (Not a success rate)
Take the 99% interval (the HDR, or highest density region) of your posterior distribution at each step, and check whether your $H_0$ value (the stopping point) is inside it. If you don't know the distribution in closed form, you can simply take a big sample from the posterior and then take the interval based on the percentiles of your sample (strictly speaking that gives an equal-tailed credible interval rather than an HDR, but for unimodal, roughly symmetric posteriors the two are very close).
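A small R sketch of that check, assuming you already have a vector of posterior draws for the quantity being tracked; the draws and the stopping value below are made up purely for illustration:
draws <- rnorm(20000, mean = 0.8, sd = 0.3)    # stand-in for posterior draws of the effect
h0    <- 0                                     # stopping / reference value being monitored
ci    <- quantile(draws, c(0.005, 0.995))      # 99% equal-tailed credible interval
h0 < ci[1] || h0 > ci[2]                       # TRUE: h0 falls outside the interval, so stop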
49,195
Help understanding Standard Error
Q1: yes, this is talking about bias/dependence in the observations/errors. Those formulae only hold strictly true if the data are IID (independent and identically distributed); if they are not, you have to apply a correction. Q2: yes, although the convergence will slow as the standard error approaches zero. Q3: "somewhat deficient in the left side of the plot" is referring to the fact that the errors are imbalanced about the linear regression line (a greater number below it, larger values above it). An ideal (or at least very good) regression line will have a fairly balanced distribution of errors above and below it. The imbalance is a visual illustration of what you asked about in Q1: the errors in this data are not IID.
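A quick R sketch of the Q2 point, showing the estimated standard error of a sample mean shrinking toward zero at the $1/\sqrt{n}$ rate as the sample size grows (the normal population here is just an assumption for the demo):
set.seed(1)
for (n in c(10, 100, 1000, 10000, 100000)) {
  x <- rnorm(n, mean = 5, sd = 2)
  cat(sprintf("n = %6d   SE of mean = %.4f\n", as.integer(n), sd(x) / sqrt(n)))
}
Each tenfold increase in n only cuts the standard error by a factor of roughly sqrt(10), which is the slowing-down referred to above.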
49,196
Help understanding Standard Error
Q1 A: The observations could be correlated through some background variable. For instance, your sample might have been influenced by the time at which you took it (the sun standing in a particular position, or whatever). Then you would obtain a different result than by sampling the whole day (which may be what you intended). B: As there is no precise definition of the symbol, we cannot say for sure. However, the reference to the plot implies what the author meant: the variance should be constant at every position $x_{i}$ -> TVs in the plot. Q2: Clearly yes. The distribution of the mean also gets closer to the normal distribution as the sample size grows. This and this may help to understand. Q3: I referred to this in my answer to Q1 B.
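A short R sketch of the second point under Q2: means of repeated samples from a clearly non-normal population look increasingly normal as the sample size grows (the exponential population is just a convenient assumption for the demo):
set.seed(1)
means_small <- replicate(10000, mean(rexp(5)))     # n = 5: still visibly right-skewed
means_large <- replicate(10000, mean(rexp(100)))   # n = 100: close to normal
par(mfrow = c(1, 2))
hist(means_small, breaks = 50, main = "n = 5",   xlab = "sample mean")
hist(means_large, breaks = 50, main = "n = 100", xlab = "sample mean")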
49,197
Procedure for testing covariate balance for generalized propensity score estimator
The method you describe would be a coarse way to evaluate balance, but a finer way is the following: For each covariate, compute the correlation between the covariate and the treatment variable after conditioning. If it is 0, then the variable will no longer confound the estimate of the treatment effect. Calculating standardized mean differences in the context of binary treatments essentially examines the same thing. Fong, Hazlett, and Imai (2015) consider continuous treatments and compute the absolute Pearson correlations between covariates and treatment to establish balance. It would also be a good idea to evaluate the correlation between treatment and the squared and other polynomial and interaction terms of the covariates; you want all of these to be as close to 0 as possible. In general, you want treatment to be independent from the covariates, so you can use whatever methods are appropriate to determine this (e.g., visually examining scatterplots, etc.). The method you describe from Guo & Fraser is effective in theory, and of course would be approximately equivalent in large samples with many subclasses. It would actually be superior, because you aren't limited to polynomial correlations (for the same reason subclassification on the PS is superior to covariate adjustment with the PS: you don't have to assume the functional form of the relationship). The problem is that you are coarsening your treatment into 5 categories, which it is not: it's a continuous variable, so independence should be met over the whole distribution, not just within subclasses. Also, although they recommend it, avoid using hypothesis tests of any kind for balance assessment: balance can become conflated with power when using them. If you're using R for propensity score analysis, consider the cobalt package for assessing balance. In the next release, balance assessment for continuous treatments will be implemented. [Edit: it can now assess balance for continuous treatments.]
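Here is a hedged base-R sketch of the correlation check described above, applied before and after weighting; treat, the covariate data frame X, and the generalized-propensity-score-based weights w are all placeholders standing in for whatever you have produced, and the covariates are assumed numeric:
wcor <- function(a, b, w) cov.wt(cbind(a, b), wt = w, cor = TRUE)$cor[1, 2]  # weighted correlation
bal_raw <- sapply(X, function(v) cor(treat, v))          # treatment-covariate correlations, unadjusted
bal_wtd <- sapply(X, function(v) wcor(treat, v, w))      # same, after applying the weights
bal_sq  <- sapply(X, function(v) wcor(treat, v^2, w))    # also check squared terms
round(cbind(unadjusted = bal_raw, weighted = bal_wtd, weighted_sq = bal_sq), 3)
All of the weighted correlations (including whatever higher-order terms you choose to inspect) should be close to zero; the cobalt package mentioned above automates this kind of check for continuous treatments.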
49,198
Can neural networks be used to predict pseudo-random numbers?
A recent paper in this vein can be found in "Learning from Pseudo-Randomness with an Artificial Neural Network – Does God Play Pseudo-Dice?" by Fenglei Fan & Ge Wang. Inspired by the fact that the neural network, as the mainstream method for machine learning, has brought successes in many application areas, here we propose to use this approach for decoding hidden correlation among pseudo-random data and predicting events accordingly. With a simple neural network structure and a typical training procedure, we demonstrate the learning and prediction power of the neural network in pseudorandom environments. Finally, we postulate that the high sensitivity and efficiency of the neural network may allow to learn on a low-dimensional manifold in a high-dimensional space of pseudo-random events and critically test if there could be any fundamental difference between quantum randomness and pseudo randomness, which is equivalent to the classic question: Does God play dice? Moreover, the references cited in the paper comprise a partial literature review of other efforts to model PRNGs using neural networks.
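As a toy version of the kind of experiment run in papers like this, here is a hedged R sketch that trains a small feed-forward network (via the nnet package) to predict the next output of a deliberately weak generator, a linear congruential generator; the generator parameters, lag length, and network size are arbitrary choices for illustration and are not taken from the paper:
library(nnet)
set.seed(1)
lcg <- function(n, seed = 1, a = 1103515245, cc = 12345, m = 2^16) {  # weak LCG (assumed params)
  out <- numeric(n); s <- seed
  for (i in 1:n) { s <- (a * s + cc) %% m; out[i] <- s / m }
  out
}
z <- lcg(5000)
k <- 5                                        # predict z[t] from the previous k outputs
X <- embed(z, k + 1)                          # row t: z[t], z[t-1], ..., z[t-k]
dat <- data.frame(y = X[, 1], X[, -1])
fit <- nnet(y ~ ., data = dat, size = 10, linout = TRUE, maxit = 500, trace = FALSE)
cor(predict(fit, dat), dat$y)                 # in-sample correlation; near 0 means no structure found
A held-out test set (or a comparison against a cryptographic generator) would be needed before reading anything into the result; this is only meant to show how such an experiment is set up.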
49,199
My data are very skew and I can't see any detail in a histogram. How do I see the shape?
With or without exact zeros a histogram of a very skew distribution can look like this. It has nothing to do with the spread, nor with the existence of zeros, but with how far above the bulk of the data the largest observation is. You're dealing with two different problems at once here -- $\:$ a. The distribution is very skew. $\:$ b. The default number of bins in R is a good deal too low for seeing the shape well in general and much too low when the distribution is skew or has heavy tails. The second problem is easy to deal with -- use far more bins. Obtaining a good display with very skewed data is not a single-step process. It may take several attempts and some decision-making about how best to represent the information. If the data were actually lognormal with a large $\sigma$ parameter (and so very skew), it can look just like your plot. Here I generated some data (in "x", with n=1800) which has a particular lognormal distribution: I made that plot have about twice as many bins as you get from the default, but it still didn't show any detail. How skewed is this? Well, the sample mean is about 8000 times as big as the median. Those large values dominate more than just the plot. (I made a second data set that's somewhat less skew, to show the potential value of some of the options I suggest.) The most obvious thing to do would be to plot on a log scale: Note that the axis labels are in original values not log-currency, just as you'd get with plot(...,log="x") (in fact that's what I used to make this plot, after extracting the results of hist by putting it into a variable); I also added some detail on the x-axis but this isn't really necessary. If you have exact zeros, this approach of plotting on a log scale is obviously not suitable for them as-is, since you can't take the log of exact 0's (which are impossible in a lognormal; clearly you don't have a lognormal). How you might deal with them depends on what you're trying to do with the variable and how many there are. (What's the actual proportion of exact zeros? You didn't say.) Anyway, here's the simplest step in the process of trying to find a suitable display: try a histogram with a lot of bins - at least a hundred. And plot in a bigger window. This can sometimes help a lot but if your data is pretty skew, it won't solve the problem: That may not work (it didn't for my most skewed sample there), so what else is there? Cut off the largest values and list them on the plot, or do two displays, one with the top end cut off and one showing the larger values (equivalently, on one display, show a complete scale break and plot the two parts on two different x-scales). Something a bit like this: This can often help a lot but if your data is really skewed, it won't solve the problem: This is about the best plot possible with that data -- you really can't get more detail on the right without losing it all on the left. Cut off the exact zeros, show a count of them, and plot on the log scale (but with original currency-scale tick-labels) as in my second diagram above. Consider a transformation that can manage zeros, such as cube root, but still show currency on the axis. (This would involve writing some R code, so it may be too hard for you at this point. I don't really suggest it in this case, since people are much more used to seeing financial variables like income either on the log scale or on the original currency scale.)
Since someone is bound to ask, here's how I did the log-scale plot (absent additional fiddling with axis ticks): res <- hist(log(x), n=30); with(res, plot(counts ~ exp(mids), type="h", lwd=10, col=8, log="x", lend=2, xlab="income")) (You need lwd wide enough that the bars just touch. You need lend to make the bars have square ends. You can make a version of "hist" to do this but it takes more work.)
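If you want to reproduce something like the displays above end to end, here is a hedged sketch; the lognormal parameters are guesses chosen only to give a comparably extreme skew, not the values used for the original figures:
set.seed(1)
x <- rlnorm(1800, meanlog = 0, sdlog = 4)       # very skewed positive data (assumed parameters)
mean(x) / median(x)                             # huge ratio: the mean dwarfs the median
hist(x, breaks = 100, xlab = "income")          # many bins, raw scale: still mostly one spike
res <- hist(log(x), n = 30, plot = FALSE)       # bin on the log scale...
with(res, plot(counts ~ exp(mids), type = "h", lwd = 10, col = 8, log = "x", lend = 2, xlab = "income"))  # ...but label in currency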
49,200
If we have an automatic differentiation tool, do we still need the EM algorithm?
In some cases, yes; autodiff certainly makes life easier in many circumstances, but the EM algorithm may still be more appropriate in other cases. For example, consider fitting mixture models. At each step, the EM algorithm will always return valid parameters for the distribution. Although gradient-based optimization methods can be used, complicated constraints may be necessary: the mixture weights must be constrained to be positive and sum to one, covariance matrices must be constrained to be positive semidefinite, etc. The EM algorithm may also have better convergence for some (but certainly not all) problems. The following posts may also be of interest: 1, 2, 3.
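To make the constraint point concrete, here is a minimal R sketch of EM for a two-component univariate Gaussian mixture; every M-step update below automatically yields a weight in (0, 1) and strictly positive variances with no explicit constraints, which is exactly the convenience described above (the data and starting values are illustrative):
set.seed(1)
x <- c(rnorm(300, -2, 1), rnorm(700, 3, 1.5))    # simulated two-component data
pi1 <- 0.5; mu <- c(-1, 1); s2 <- c(1, 1)        # crude starting values
for (iter in 1:100) {
  d1 <- pi1 * dnorm(x, mu[1], sqrt(s2[1]))       # E-step: responsibilities of component 1
  d2 <- (1 - pi1) * dnorm(x, mu[2], sqrt(s2[2]))
  r  <- d1 / (d1 + d2)
  pi1   <- mean(r)                               # M-step: weighted averages -- valid by construction
  mu[1] <- sum(r * x) / sum(r)
  mu[2] <- sum((1 - r) * x) / sum(1 - r)
  s2[1] <- sum(r * (x - mu[1])^2) / sum(r)
  s2[2] <- sum((1 - r) * (x - mu[2])^2) / sum(1 - r)
}
round(c(weight1 = pi1, mean1 = mu[1], mean2 = mu[2]), 2)   # roughly 0.3, -2, 3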