H: How does the seed value work in Weka for clustering?
I have recently been learning about the various clustering methods, and I decided to apply Furthest Point clustering in Weka with varying seed values.
The seed value dictates the initial choice of points in the data set. It can't be totally random, since repeatedly doing the clustering with, say, seed = 3 gives the same result.
How does it select data points based on the seed?
AI: Pseudo-random number generators are initialized with a seed.
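As a quick illustration in Python (a different tool than Weka, but the principle is the same): re-seeding the generator with the same value reproduces exactly the same "random" numbers, which is why clustering with seed = 3 always picks the same initial points.
import random

random.seed(3)
print([random.randint(0, 9) for _ in range(5)])   # some fixed sequence

random.seed(3)
print([random.randint(0, 9) for _ in range(5)])   # exactly the same sequence again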
There is little real randomness available. Most of it just looks random to us because the pattern is too complex. Read up on some examples of pseudo-random number generation, such as the Mersenne Twister. |
H: What is training
I'm a newbie on deep learning and I have a simple question.
I'm reading some article about the neural network. It says that people created a simple neural network named perceptron. And this network can't solve a very simple question: exclusive or.
I think it's because I don't know how to train a neural network or how to set the weights for a neural network.
Could you describe the process of training a perceptron with the AND example: 0 & 0 = 0, 1 & 0 = 0, 0 & 1 = 0, 1 & 1 = 1?
With such training, what do the weights in the perceptron look like?
AI: Welcome to the Site!
Assuming that by simple neural network you mean a single-layer perceptron:
The key point is that a single-layer perceptron (one with no hidden layer) cannot solve the XOR problem, because the XOR classes are not linearly separable.
To overcome this, we use a Multi-Layer Perceptron; you can go through this Link for a better understanding.
Regarding your question about the weights in a neural network solving XOR, please go through this lecture and this video; the video explains how the weights are adjusted in every iteration before reaching the solution.
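As a minimal sketch (my own illustration, not from the linked lecture) of how training could look for the AND example in your question, using the classic perceptron learning rule with a step activation:
import numpy as np

# AND truth table from the question
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # one weight per input
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)   # step activation
        error = target - pred
        w += lr * error * xi                # perceptron learning rule
        b += lr * error

print(w, b)   # both weights end up positive and the bias negative, so only (1, 1) fires
This works for AND because it is linearly separable; the same loop never converges for XOR, which is exactly the limitation discussed above.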
If you are stuck somewhere do let me know, will try to help you!
Other links:
Link-1
Another link suggested by @Yves : Link |
H: How to get rid of the expectation in Monte Carlo Policy Gradient method?
Taken from Policy Gradient lecture notes, slide 16 onward:
Here in the David Silver lecture series I perfectly understand how he took the expectation value in the theorem part by combining the equations.
But how did he remove the expectation part when talking about stochastic gradient descent?
AI: But how did he remove the expectation part when talking about stochastic gradient descent?
A result that shows what happens in expectation can be estimated by sampling it, and that is precisely what stochastic gradient descent (or ascent in this case) methods do - they operate on individual samples on the assumption that this will, on average, produce a reasonable direction for optimising the parameters.
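As a tiny numerical illustration of that idea (not the policy-gradient update itself, just the principle that sampling estimates an expectation):
import numpy as np

rng = np.random.default_rng(0)

# E[X^2] = 1 exactly for X ~ N(0, 1); the sample average approximates it
samples = rng.normal(size=100_000)
print(np.mean(samples ** 2))   # close to 1.0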
So there is no need to "get rid of" the expectation. In fact the sequence of equations is working deliberately towards it, because in most situations we do not know the full characteristics of the MDP and cannot calculate $d(s)$ (and maybe not even $\mathcal{R}_{s,a}$) - the expectation is used to remove the need for knowing those terms.
Importantly, $d(s)$ is the probability density of the state, i.e. the likelihood, over a random sample of all states visited under a certain policy, of finding the agent/environment in that state. This is often hard to calculate directly, even if you know the MDP and the policy, and it is intractable if you do not have a model of the MDP. However, when you take many samples, then just by the act of sampling and using the value of $s$ that is observed, you will approximate the true distribution of states in the long run. |
H: Why I didn't get any significant variable in my logistic model?
I decided to apply the logistic regression method to my categorical and quantitative data.
So, I followed these steps:
Eliminating the bad and inconsistent data.
Preparing the target variable (categorical variable).
Testing the dependencies between categorical variables and the target variable using the chi-squared (χ²) test, to select the variables that are well linked with the target variable.
Testing the correlation between quantitative variables to avoid the choice of two correlated variables at a time.
Crossing some variables to improve their significance.
After all this, I do not find any significant variables in my logistic regression model, even though the database is coherent and well cleaned.
I work with the R language and I used the glm function:
glm (formula, family = familytype (link = linkfunction), data =)
You can see my database below. Note that Achat_client is the target variable:
> M=data.frame(Type_peau,PEAU_CORPS,SENSIBILITE,IMPERFECTIONS,BRILLANCE ,GRAIN_PEAU,RIDES_VISAGE,ALLERGIES,MAINS,PEAU_CORPS,
+ INTERET_ALIM_NATURELLE,INTERET_ORIGINE_GEO,Crois_Prior1_Milieu,Crois_Profil_Prio,Crois_ALL_AGE,
+ INTERET_VACANCES,INTERET_COMPOSITION, PEAU_CORPS,Nbre_gift,w,Achat_client)
> str(M)
'data.frame': 836 obs. of 21 variables:
$ Type_peau : Factor w/ 5 levels "","Grasse","Mixte",..: 3 4 5 3 4 3 3 3 2 3 ...
$ PEAU_CORPS : Factor w/ 4 levels "","Normale","Sèche",..: 2 3 3 2 2 2 3 2 3 2 ...
$ SENSIBILITE : Factor w/ 4 levels "","Aucune","Fréquente",..: 4 4 4 2 4 3 4 2 4 4 ...
$ IMPERFECTIONS : Factor w/ 4 levels "","Fréquente",..: 3 4 3 4 3 2 3 4 3 3 ...
$ BRILLANCE : Factor w/ 4 levels "","Aucune","Partout",..: 4 2 2 4 4 4 4 4 3 4 ...
$ GRAIN_PEAU : Factor w/ 4 levels "","Dilaté","Fin",..: 4 4 4 2 4 2 4 4 2 4 ...
$ RIDES_VISAGE : Factor w/ 4 levels "","Aucune","Très visibles",..: 2 2 2 4 4 2 4 2 4 2 ...
$ ALLERGIES : Factor w/ 4 levels "","Non","Oui",..: 2 2 2 2 2 2 2 2 2 2 ...
$ MAINS : Factor w/ 4 levels "","Moites","Normales",..: 3 4 4 3 3 3 3 4 4 4 ...
$ PEAU_CORPS.1 : Factor w/ 4 levels "","Normale","Sèche",..: 2 3 3 2 2 2 3 2 3 2 ...
$ INTERET_ALIM_NATURELLE: Factor w/ 4 levels "","Beaucoup",..: 2 4 4 4 2 2 2 4 4 2 ...
$ INTERET_ORIGINE_GEO : Factor w/ 5 levels "","Beaucoup",..: 2 4 2 5 2 2 2 2 2 2 ...
$ Crois_Prior1_Milieu : Factor w/ 14 levels "Per_nature_éclatante",..: 11 13 8 12 8 6 8 9 13 11 ...
$ Crois_Profil_Prio : Factor w/ 294 levels "Eclatante / Hydratée_éclatante",..: 141 227 221 74 56 184 13 86 227 68 ...
$ Crois_ALL_AGE : Factor w/ 6 levels "jeune_Avec_ALL",..: 2 4 2 4 2 2 4 4 6 2 ...
$ INTERET_VACANCES : Factor w/ 6 levels "","À la mer",..: 3 4 2 2 3 2 3 2 3 2 ...
$ INTERET_COMPOSITION : Factor w/ 4 levels "","Beaucoup",..: 2 2 2 4 2 2 2 2 4 2 ...
$ PEAU_CORPS.2 : Factor w/ 4 levels "","Normale","Sèche",..: 2 3 3 2 2 2 3 2 3 2 ...
$ Nbre_gift : int 1 4 1 1 2 1 1 1 1 1 ...
$ w : num 0.25 0.25 0.5 0.25 0.5 0 0 0 0 0.75 ...
$ Achat_client : num 0 0 0 0 0 0 1 0 0 0 ...
Then I split my database into two parts: 70% of the data for fitting the model and 30% for testing it:
split = sample.split(M$Achat_client, SplitRatio = 0.70)
final.train = subset(M, split == TRUE)
final.test = subset(M, split == FALSE)
Then for application of logistic regression:
final.log.model <- glm(formula=Achat_client ~ .-1,family=binomial(link="logit"),data = final.train)
The result is:
> summary(final.log.model)
Call:
glm(formula = Achat_client ~ . - 1, family = binomial(link = "logit"),
data = final.train)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.60306 -0.26636 -0.04545 -0.00003 2.83714
Coefficients: (5 not defined because of singularities)
Estimate Std. Error z value Pr(>|z|)
Type_peauGrasse -7.530e+00 2.811e+00 -2.679 0.007383 **
Type_peauMixte -8.278e+00 2.535e+00 -3.265 0.001095 **
Type_peauNormale -7.902e+00 2.568e+00 -3.077 0.002087 **
Type_peauSèche -9.710e+00 2.583e+00 -3.759 0.000171 ***
PEAU_CORPSSèche 8.639e-01 5.966e-01 1.448 0.147612
PEAU_CORPSTrès sèche 1.660e+00 8.959e-01 1.853 0.063843 .
SENSIBILITEFréquente -1.978e-02 9.583e-01 -0.021 0.983531
SENSIBILITEOccasionnelle -1.123e-01 8.905e-01 -0.126 0.899640
IMPERFECTIONSOccasionnelle -2.825e-01 5.859e-01 -0.482 0.629712
IMPERFECTIONSRares -1.234e+00 9.116e-01 -1.353 0.175956
BRILLANCEPartout 1.720e+00 1.351e+00 1.273 0.202993
BRILLANCEZone T 2.942e-01 7.643e-01 0.385 0.700299
GRAIN_PEAUFin 1.901e+00 9.640e-01 1.972 0.048609 *
GRAIN_PEAUMoyen 6.999e-01 6.976e-01 1.003 0.315696
RIDES_VISAGETrès visibles 2.350e+00 1.101e+00 2.134 0.032843 *
RIDES_VISAGEVisibles -1.224e-02 5.696e-01 -0.021 0.982857
ALLERGIESOui -1.770e+01 4.714e+03 -0.004 0.997004
MAINSNormales 1.767e+00 1.055e+00 1.675 0.093963 .
MAINSSèches 1.679e+00 1.073e+00 1.565 0.117595
PEAU_CORPS.1Sèche NA NA NA NA
PEAU_CORPS.1Très sèche NA NA NA NA
INTERET_ALIM_NATURELLEPas du tout -6.929e-01 1.490e+00 -0.465 0.641833
INTERET_ALIM_NATURELLEUn peu -1.801e+00 7.381e-01 -2.441 0.014666 *
INTERET_ORIGINE_GEOPas du tout 1.025e+00 1.293e+00 0.793 0.427673
INTERET_ORIGINE_GEOUn peu -4.041e-01 6.494e-01 -0.622 0.533728
Crois_Prior1_MilieuPer_nature_hydratée -7.592e-01 2.015e+00 -0.377 0.706322
Crois_Prior1_MilieuPer_nature_lisse -3.454e-01 1.756e+00 -0.197 0.844073
Crois_Prior1_MilieuPer_nature_matifiée -1.509e-01 2.495e+00 -0.060 0.951761
Crois_Prior1_MilieuPer_nature_nourrie -1.860e+01 6.608e+03 -0.003 0.997754
Crois_Prior1_MilieuPer_nature_purifiée -3.315e-01 1.611e+00 -0.206 0.837001
Crois_Prior1_MilieuPer_nature_reposée 1.284e+00 1.883e+00 0.682 0.495542
Crois_Prior1_MilieuPer_urbain_éclatante -1.819e-02 1.534e+00 -0.012 0.990538
Crois_Prior1_MilieuPer_urbain_hydratée 3.449e-01 1.356e+00 0.254 0.799243
Crois_Prior1_MilieuPer_urbain_lisse 1.080e+00 1.334e+00 0.810 0.418107
Crois_Prior1_MilieuPer_urbain_matifiée 1.051e+00 1.398e+00 0.752 0.452290
Crois_Prior1_MilieuPer_urbain_nourrie 3.070e+00 1.627e+00 1.887 0.059226 .
Crois_Prior1_MilieuPer_urbain_purifiée -1.456e+00 1.430e+00 -1.018 0.308601
Crois_Prior1_MilieuPer_urbain_reposée 2.520e+00 1.493e+00 1.688 0.091498 .
Crois_Profil_PrioLisse / Eclatante_éclatante 2.472e+00 1.486e+00 1.663 0.096222 .
Crois_Profil_PrioMatifiée / Eclatante_éclatante 2.845e+00 1.631e+00 1.744 0.081110 .
Crois_Profil_PrioNourrie / Eclatante_éclatante -1.685e+01 7.795e+03 -0.002 0.998276
Crois_Profil_PrioPurifiée / Eclatante_éclatante 1.182e+00 1.523e+00 0.776 0.437707
Crois_Profil_PrioReposée / Eclatante_éclatante -1.895e+01 5.224e+03 -0.004 0.997106
Crois_Profil_PrioEclatante / Hydratée_hydratée -1.763e+01 2.457e+03 -0.007 0.994276
Crois_Profil_PrioLisse / Hydratée_hydratée -1.409e-02 1.625e+00 -0.009 0.993081
Crois_Profil_PrioMatifiée / Hydratée_hydratée -1.623e+01 4.667e+03 -0.003 0.997225
Crois_Profil_PrioNourrie / Hydratée_hydratée 1.917e+00 1.854e+00 1.034 0.301226
Crois_Profil_PrioPurifiée / Hydratée_hydratée 1.841e+00 1.417e+00 1.299 0.193915
Crois_Profil_PrioReposée / Hydratée_hydratée -1.581e+01 7.573e+03 -0.002 0.998334
Crois_Profil_PrioEclatante / Lisse_lisse 1.464e+00 1.427e+00 1.026 0.305064
Crois_Profil_PrioHydratée / Lisse_lisse -1.757e+01 3.171e+03 -0.006 0.995578
Crois_Profil_PrioMatifiée / Lisse_lisse 3.308e+00 1.879e+00 1.760 0.078343 .
Crois_Profil_PrioNourrie / Lisse_lisse 1.318e+00 1.829e+00 0.721 0.471155
Crois_Profil_PrioPurifiée / Lisse_lisse -1.844e+01 3.417e+03 -0.005 0.995695
Crois_Profil_PrioReposée / Lisse_lisse -1.627e+01 6.889e+03 -0.002 0.998115
Crois_Profil_PrioEclatante / Matifiée_matifiée -1.651e+01 4.405e+03 -0.004 0.997010
Crois_Profil_PrioHydratée / Matifiée_matifiée 3.058e+00 1.669e+00 1.832 0.066970 .
Crois_Profil_PrioLisse / Matifiée_matifiée 2.699e+00 1.761e+00 1.533 0.125319
Crois_Profil_PrioNourrie / Matifiée_matifiée 3.910e+00 2.316e+00 1.688 0.091412 .
Crois_Profil_PrioPurifiée / Matifiée_matifiée -1.614e+01 3.076e+03 -0.005 0.995814
Crois_Profil_PrioReposée / Matifiée_matifiée -1.751e+01 1.235e+04 -0.001 0.998868
Crois_Profil_PrioEclatante / Nourrie_nourrie 2.304e+00 1.620e+00 1.422 0.154909
Crois_Profil_PrioHydratée / Nourrie_nourrie -1.622e+01 3.909e+03 -0.004 0.996688
Crois_Profil_PrioLisse / Nourrie_nourrie 4.018e+00 2.082e+00 1.931 0.053542 .
Crois_Profil_PrioMatifiée / Nourrie_nourrie -1.683e+01 9.424e+03 -0.002 0.998575
Crois_Profil_PrioPurifiée / Nourrie_nourrie 2.889e+00 1.891e+00 1.528 0.126629
Crois_Profil_PrioEclatante / Purifiée_purifiée 1.983e-01 1.773e+00 0.112 0.910932
Crois_Profil_PrioHydratée / Purifiée_purifiée -1.700e+00 2.025e+00 -0.840 0.401098
Crois_Profil_PrioLisse / Purifiée_purifiée -1.741e+01 5.381e+03 -0.003 0.997419
Crois_Profil_PrioMatifiée / Purifiée_purifiée -1.840e+01 4.515e+03 -0.004 0.996749
Crois_Profil_PrioReposée / Purifiée_purifiée -1.771e+01 1.039e+04 -0.002 0.998640
Crois_Profil_PrioEclatante / Reposée_reposée 2.290e+00 1.796e+00 1.275 0.202363
Crois_Profil_PrioHydratée / Reposée_reposée -1.609e+01 3.890e+03 -0.004 0.996699
Crois_Profil_PrioLisse / Reposée_reposée 2.916e+00 1.552e+00 1.879 0.060212 .
Crois_Profil_PrioMatifiée / Reposée_reposée -1.625e+01 9.461e+03 -0.002 0.998629
Crois_Profil_PrioNourrie / Reposée_reposée -1.724e+01 8.232e+03 -0.002 0.998329
Crois_Profil_PrioPurifiée / Reposée_reposée 6.497e-01 1.724e+00 0.377 0.706274
Crois_ALL_AGEjeune_Sans_ALL -7.512e-01 7.326e-01 -1.025 0.305170
Crois_ALL_AGEsenior_Avec_ALL 1.664e+01 4.714e+03 0.004 0.997183
Crois_ALL_AGEsenior_Sans_ALL 3.611e-01 6.555e-01 0.551 0.581703
Crois_ALL_AGEvieux_Avec_ALL 1.930e+01 4.714e+03 0.004 0.996733
Crois_ALL_AGEvieux_Sans_ALL NA NA NA NA
INTERET_VACANCESÀ la montagne 1.208e+00 5.537e-01 2.182 0.029090 *
INTERET_VACANCESEn ville 2.452e+00 9.143e-01 2.682 0.007317 **
INTERET_COMPOSITIONPas du tout -1.763e+01 4.579e+03 -0.004 0.996929
INTERET_COMPOSITIONUn peu 4.350e-01 8.082e-01 0.538 0.590461
PEAU_CORPS.2Sèche NA NA NA NA
PEAU_CORPS.2Très sèche NA NA NA NA
Nbre_gift 2.144e-01 1.520e-01 1.411 0.158265
w 1.796e+00 1.027e+00 1.749 0.080317 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 812.37 on 586 degrees of freedom
Residual deviance: 170.68 on 501 degrees of freedom
AIC: 340.68
Number of Fisher Scoring iterations: 19
Besides, I tested all types of family:
*Family Default Link Function
binomial (link = "logit")
gaussian (link = "identity")
Gamma (link = "inverse")
inverse.gaussian (link = "1 / mu ^ 2")
poisson (link = "log")
quasi (link = "identity", variance = "constant")
quasibinomial (link = "logit")
quasipoisson (link = "log")*
Why, then, can I not find any significant variable in my model?
Thanks,
AI: Welcome to the site!
So far what you have done is good.
Before going to modelling you need to take care of a couple of things (exploratory analysis), like:
Removing all the unnecessary variables based on business knowledge
Imputing Missing values
Removing Outliers
Removing unimportant variables
Correlation analysis between variables
Once these are achieved, then look into modelling (you have done most of the above steps).
Now coming to your problem:
When you are trying to apply logistic regression to data for prediction, you need to take care of a couple of things:
Remove all the factors with a single level.
Then you need to make sure that the partitioned data (the split between test and train) has the same factor levels before you predict. For now you didn't see the issue, but once you predict you will get an error.
An important thing to remember is that logistic regression converts each categorical variable into dummy variables before fitting the model; with this many high-cardinality factors that produces more dummy columns than the data can reliably support, so the model cannot return stable, significant estimates. Since most of your variables are categorical, I would suggest not relying on logistic regression here.
To get predictor importance (important independent variables), you can use an R package called Boruta; this link has the implementation in R.
Once you get the important variables you can apply Random Forest or Decision Trees (rpart) and see how they perform.
Let me know if you have any issues. |
H: Why are policy gradient methods preferred over value function approximation in continuous action domains?
In value-function approximation, in particular, in deep Q-learning, I understand that we first predict the Q values for each action. However, when there are many actions, this task is not easy.
But in policy iteration we also have to output a softmax vector related to each action. So I don't understand how this can be used to work with a continuous action space.
Why are policy gradient methods preferred over value function approximation in continuous action domains?
AI: But in policy iteration we also have to output a softmax vector related to each action
This is not strictly true. A softmax vector is one possible way to represent a policy, and works for discrete action spaces. The difference between policy gradient and value function approaches here is in how you use the output. For a value function you would find the maximum output, and choose that (perhaps $\epsilon$-greedily), and it should be an estimate of the value of taking that action. For a policy function, you would use the output as the probability of choosing each action, and you do not know the value of taking that action.
So I don't understand how this can be used to work with a continuous action space?
With policy gradient methods, the policy can be any function of your parameters $\theta$ which:
Outputs a probability distribution
Can be differentiated with respect to $\theta$
So for instance your policy function can be
$$\pi_{\theta}(s) = \mathcal{N}(\mu(s,\theta), \sigma(s,\theta))$$
where $\mu$ and $\sigma$ can be functions you implement with e.g. a neural network. The output of the network is a description of the Normal distribution for the action value $a$ given a state value $s$. The policy requires you to sample from the normal distribution defined by those values (the NN doesn't do that sampling, you typically have to add that in code).
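A minimal sketch of such a policy, with made-up parameters and with $\mu$ and $\sigma$ as simple functions of the state (in practice they would be neural network outputs):
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.5, -0.2, 0.1])   # toy policy parameters

def sample_action(state, theta):
    mu = theta[0] * state + theta[1]   # mean of the action distribution
    sigma = np.exp(theta[2])           # exponential keeps sigma positive
    return rng.normal(mu, sigma)       # sample a continuous action

action = sample_action(state=0.7, theta=theta)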
Why are policy gradient methods preferred over value function approximation in continuous action domains?
Whilst it is still possible to estimate the value of a state/action pair in a continuous action space, this does not help you choose an action. Consider how you might implement an $\epsilon$-greedy policy using action value approximation: it would require performing an optimisation over the action space for each and every action choice, in order to find the estimated optimal action. This is possible, but likely to be very slow/inefficient (there is also a risk of finding only a local maximum).
Working directly with policies that emit probability distributions can avoid this problem, provided those distributions are easy to sample from. Hence you will often see things like policies that control parameters of the Normal distribution or similar, because it is known how to easily sample from those distributions. |
H: Classifier Threshold
I am designing a classifier for an imbalanced data set. I have a query regarding choosing the threshold for a classifier: currently I am using the mean of the predicted probabilities as the threshold, and I am maximizing the recall on the positive class. Is this a correct way of choosing a threshold (i.e. the mean of predicted probabilities), or should I try doing something like in this book, using recall as a metric with K-fold cross-validation?
Any suggestion would be highly appreciable. Thanks :)
AI: The threshold you choose depends on the specifics of the problem you are trying to solve. More specifically, it should be based on how you weigh false positives vs. false negatives, i.e. how bad each of these is relative to the other. You mention that you are trying to maximize recall on the positive class, but if that were true you could just classify everything as the positive class and get a recall of 1.0. Based on the domain you are working in, you should decide how much a false positive 'costs' vs. how much a false negative 'costs'. Once you decide this, you can find the threshold that minimizes the function
total cost = false negative count x FN cost + false positive count x FP cost |
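A minimal sketch of that idea, assuming you already have true labels y_true and predicted probabilities y_prob as NumPy arrays (the costs here are placeholders):
import numpy as np

def best_threshold(y_true, y_prob, fn_cost=5.0, fp_cost=1.0):
    # sweep candidate thresholds and keep the one with the lowest total cost
    thresholds = np.linspace(0.0, 1.0, 101)
    costs = []
    for t in thresholds:
        y_pred = (y_prob >= t).astype(int)
        fn = np.sum((y_true == 1) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        costs.append(fn * fn_cost + fp * fp_cost)
    return thresholds[int(np.argmin(costs))]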
H: What is the difference between float64 and double in TensorFlow?
In storing floating-point values, both overflow and underflow problems cause loss of data. In machine learning tasks, underflow is a common problem. I wanted to know whether double is better than float64 in TensorFlow, and whether there is any difference between them.
AI: Taking a look at tensorflow's dtypes.py, there's this line:
double = float64
So double is exactly the same as float64. |
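You can verify this in a Python session (a quick check, assuming a reasonably recent TensorFlow):
import tensorflow as tf

print(tf.double is tf.float64)   # True: they are the same dtype object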
H: How to add incorporate meta data into text classification?
I have a collection of statements which I need to classify into 5 classes. Each statement has metadata in different columns:
Author|Editor| date of release| statement | Class
How can one use the meta data to improve the text classification task?
AI: Some models cannot really handle this, while others lend themselves to it easily. I'll explain two approaches that you could use:
Naive bayes
With Naive Bayes you can use other categorical values as well as your normal n-grams or sparse bag-of-words vectors. Just add them one-hot encoded to your features and they are incorporated as well. With numerical features you would need to use something like Gaussian Naive Bayes, to fit a distribution to your features per target class; then you can use the likelihoods of these features per class to compute the probabilities.
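A minimal sketch of that idea with scikit-learn (assuming a version whose OneHotEncoder accepts string categories; the toy statements and authors are made up):
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.naive_bayes import MultinomialNB

statements = ["rates will rise next year", "the committee met on tuesday"]
authors = np.array([["Smith"], ["Jones"]])
labels = [0, 1]

text_features = CountVectorizer().fit_transform(statements)   # sparse bag of words
meta_features = OneHotEncoder().fit_transform(authors)        # one-hot encoded author
X = hstack([text_features, meta_features])                    # combine into one sparse matrix

clf = MultinomialNB().fit(X, labels)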
Neural network
If you use a neural network approach like CNNs or RNNs, you can add any type of feature representation network and concatenate it somewhere in your original network. In your case you would have a softmax at the end of your RNN. Before this, concatenate the output of your 'normal-feature' neural network, add some dense layers and feed this to your softmax output layer. This way you can train your model end-to-end and it will learn important interactions as well. |
H: Need a Work-around for OneHotEncoder Issue in SKLearn Preprocessing
So, it seems that OneHotEncoder won't work with the np.int64 datatype (only np.int32)! Here's a sample of code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
a = np.array([[56748683,8511896545,51001984320],[18643548615,28614357465,56748683],[8511896545,51001984320,40084357915]])
b = pd.DataFrame(a, dtype=np.int64)
ohe = OneHotEncoder()
c = ohe.fit_transform(b).toarray()
When I run this I get the following error: "ValueError: X needs to contain only non-negative integers."
As you can see, X DOES contain only non-negative integers! When I trim a few of the digits and change the datatype to int32 it works fine:
a = np.array([[56748,8511896,51001984],[18643548,28614357,56748],[8511896,51001984,40084357]])
b = pd.DataFrame(a, dtype=np.int32)
ohe = OneHotEncoder()
c = ohe.fit_transform(b).toarray()
Unfortunately, the data I need to encode has 11 digits (which can't be represented by int32). So, any suggestions would be helpful...
Also, I should mention, I don't necessarily need a one hot encoding, just need to create dummy variables. Thanks!
AI: Try the following approach:
In [51]: pd.get_dummies(b.astype(str), prefix_sep='')
Out[51]:
018643548 056748 08511896 128614357 151001984 18511896 240084357 251001984 256748
0 0 1 0 0 0 1 0 1 0
1 1 0 0 1 0 0 0 0 1
2 0 0 1 0 1 0 1 0 0 |
H: concatenating the content of list in python
I have a list.
list = ['It is a delightfully naive and entertaining movie',
'The songs are boring and dated in 2009',
'was a great movie from genre director Luc Besson']
and I want a result like:
list_result = ['It is a delightfully naive and entertaining movie The songs are boring and dated in 2009 was a great movie from genre director Luc Besson']
How can I do this? (list_result can also be a corpus.)
AI: result = ""
for sentence in list:
    result += sentence
    result += " "
list_result = [result]
Go over list comprehension if you want a more pythonic way to do it; the version above is the most understandable one to begin with. |
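For reference, the more pythonic version (assuming the same list of sentences) is a single join:
sentences = ['It is a delightfully naive and entertaining movie',
             'The songs are boring and dated in 2009',
             'was a great movie from genre director Luc Besson']

list_result = [" ".join(sentences)]   # one string containing all the sentences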
H: How does neural network solve XOR problem
I'm reading a wonderful tutorial about neural networks. This is the best tutorial I've ever seen, but I can't understand one thing, as below:
In the link above, it is talking about how the neural network solves the XOR problem.
It says that we need two lines to separate the four points. But I don't understand the second table.
XOR:
input1 input2 output
0 0 0
0 1 1
1 0 1
1 1 0
First table:
input1 input2 output
0 0 0
0 1 1
1 0 1
1 1 1
In my opinion, the first table is OK because it includes the XOR, which means that what the second table needs to do is remove the fourth row. So I think the second table should be as below:
input1 input2 output
0 0 1
0 1 1
1 0 1
1 1 0
However, the link says the second table is like this:
input1 input2 output
0 0 0
0 1 0
1 0 0
1 1 1
In a word, I can understand why the single-layer neural network can't solve the XOR problem, but I can't understand how the two-layer neural network works to solve it.
AI: Notice that the first table (orange line) is performing an OR operation and the second table (blue line) is performing an AND operation. XOR can be defined as (x OR y) AND NOT (x AND y) or $(x \lor y) \land \lnot (x \land y)$, so in other words: orange should fire and blue shouldn't fire. If we now look at this figure from your tutorial:
we can see that the weight of blue in the second layer is negative and small enough (more negative) such that the output can never fire if blue fires, i.e. the output can't fire if both inputs are firing.
$0 + 0 \ngtr 1 : \emptyset $ shouldn't fire
$-2 + 0 \ngtr 1 : (x \land y) = T$ shouldn't fire
$0 + 1.1 \gt 1 : (x \lor y) \land \lnot (x \land y) = T$ should fire
$-2 + 1.1 \ngtr 1 : (x \land y) \land (x \lor y) = T$ shouldn't fire |
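A small numerical check of this reasoning (the output weights 1.1 and -2 and the threshold 1 are the ones used above; the hidden-layer thresholds are my own assumption for an OR-like and an AND-like unit):
import numpy as np

def step(z, threshold):
    return (z > threshold).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

h_or  = step(X.sum(axis=1), 0.5)   # orange: fires if at least one input is 1
h_and = step(X.sum(axis=1), 1.5)   # blue: fires only if both inputs are 1

output = step(1.1 * h_or - 2.0 * h_and, 1.0)
print(output)   # [0 1 1 0] -> XOR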
H: What is Compatible Function Approximation theorem in reinforcement learning?
I am following David Silver's RL course. In the policy gradient section, I found this slide that I would like to have an explanation of.
What are these two conditions?
What is the logic behind the first derivative equality? Is it just that we assume these two derivatives should be equal, since there should be some kind of connection between the direction of the value-function approximation gradient and our policy likelihood?
Then what is the epsilon value? What is this mean squared value? The Q value with w parameters means the new approximation of the Q-value function, and the other Q value is the value obtained by following our policy, right? So is this off-policy value approximation or on-policy? I think this is on-policy, since normally in the Q-value update we take the max.
AI: There are lots of questions but I will try to answer in a way that might clear things up for you and also give you some guidance. Please note that the proofs for your questions involve lots of math operations so instead I will provide you with references.
Your main reference is the paper from Sutton, PG Methods with Function Approximation. I highly recommend reading the paper a couple of times (or even more!) and doing some searching in the relevant literature once you are familiar with the main objectives, notation and math around the general approach of the methods. PG methods are not easy to get a grasp of, mainly because of their sampling nature, notation and the discrete/continuous math involved.
PG Methods satisfy (or at least should) the PG theorem (eq. 2 from the paper). An interesting approach would be to substitute the true $Q^\pi (s,a)$ by some approximate function ($f_w$ in the paper, $Q_w$ in your question). Now, we are wondering what conditions should be satisfied by that proposed approximation in order to satisfy the PG Theorem.
The first thing you notice is that a natural choice for updating the parameters $w$ is to update them towards the direction that minimizes the mean squared error of the exact $Q^\pi (s,a)$ with the function approximation. In your question this is the $\epsilon$. In such a scenario the exact $Q^\pi (s,a)$ is estimated using unbiased samples such as $r_t$.
This is explained in detail in Part 2 of the paper.
For the PG theorem to hold (proof consists of the 3 lines before Part 3) the grad of your approximate function should satisfy the compatibility condition. To sum up we started from PG theorem and we found a suitable family of function approximators for our action-value function that the PG theorem holds. In Part 3 you can see an example of a compatible function. From this of course you can use even non-linear approximators such as NNs.
A clarification on on/off-policy: the David Silver slide that you posted here has to do with theoretical guarantees and has nothing to do with an actual RL algorithm. By the way, the Q-learning algorithm, in which you use $\max_{a'} Q(s',a')$, is off-policy, as you don't actually use the ongoing policy for the updates.
Hope this helps! |
H: Regression and Neural networks
I'm trying to restore this function:
$$ F(x) = x*sin(\alpha x)+b; \space\space \alpha,b \in (-20,20) $$
My NN model(with Keras) is:
Layer 1: GRU, 9 neurons, selu activation
Layer 2: GRU, 3 neurons, selu activation
Layer 3: GRU, 7 neurons, selu activation
Layer 4: Dense, 1 neuron, linear activation
kernel init he_normal for all layers
For the training dataset I've generated function values $f(x), x \in -5..5$ (e.g. linspace(-5,5,500)) and random integer values for $\alpha$ and $b$, 250 times.
Then I've selected $f[i-1]$ and $f[i]$ (previous steps) and for this "xs" the output is $f[i+1]$. So the training dataset is like:
X:
first row: previous previous $f$, $\alpha$, $b$
second row: previous $f$, $\alpha$, $b$
Y:
current $f$ value
After training over 200 epochs, the MAE on validation data was 0.1851.
Now if I try to predict new data it seems normal, but when I try to predict new values using points previously predicted by the model, it breaks down and doesn't look like a sine function at all.
What am I doing wrong?
AI: The problem is that your error is accumulating and diverging. In other words, a small error in the first prediction is leading to a larger error in your second prediction, which is leading to an even larger error in your third prediction, and so on.
With that said, this LSTM example seems to do well predicting multiple future values for a sine wave. |
H: Classes of neural nets and their applications
Would you say, you could design, tune and/or train any DNN for any application, or do their designs inherently postulate some specialization?
Is there such a review?
For example, are CNNs better for the spatial domain (e.g image analysis), and RNN for temporal (e.g. time series data, or speech), or it does not matter?
AI: Convolutional Neural Networks have consistently outperformed other methods for image recognition and related tasks. In fact, beyond a certain image size it is not practical to train a fully-connected network on raw pixels due to the enormous number of parameters that would be required. The number of parameters in a CNN however grows much more slowly as you increase the input image size, since they usually do not require larger filters, and the number of parameters per filter is independent of input size.
With that said, there is a much more remarkable demonstration of the power of the architecture of a CNN for image processing in the recent paper Deep Image Prior. The abstract reads:
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
So they were able to demonstrate that the raw network prior to any training was powerful enough for sophisticated tasks, which is the strongest and most direct illustration of the power of the specific architecture itself that I have seen.
The answer is not so clear for time series prediction, since CNNs with various tricks have performed on par with RNNs for some datasets, but in general RNNs seem much more suited to the problem, since they can handle inputs of arbitrary length, and they can store state.
However, either method will significantly outperform previous methods such as passing in a sliding window of fixed size into a fully-connected network. Even when a fully-connected network is combined with some state-preserving model such as ARIMA it still cannot match the performance of CNNs or RNNs. See Deep Learning for Time-Series Analysis. |
H: Can linear regression or any other models be used to judge if Y increases as X increases?
I have a database like:
Site X Y
S1 1 1.5
S1 1 1.3
S1 2 1.7
S1 1 1.1
S1 4 5.9
S2 3 4.0
S2 2 2.5
S2 4 9.1
S2 4 9.2
S2 1 2.3
I need to find if $Y$ increases as $X$ increases for every site. In other words, Bigger $X$ corresponds to bigger $Y$.
I know linear regression might suit this problem. But please take a look at the following graph:
Figure 1 is not what I want because small $X$ corresponds to big $Y$. However, Figure 2 is what I want. When I use a linear regression model and RMSE as a measure, it cannot tell the difference between Figure 1 and Figure 2.
Another thing about my database: the $X$ values are like levels, which are the same for all the sites, but the $Y$ values of every site are different. For example, for one site, $X=1$ and $Y=20$ means $20$ is a small value because it corresponds to the lowest level of $X$. But for another site, $Y=15$ and $X=6$ means $15$ is the highest value because $X$ is the highest level.
So, my problem is: for every site, I need to use a linear model or any other algorithms to judge if $Y$ increases as $X$ does. Then, I need to use a measure to select some sites.
AI: Linear Regression will help you decide whether Y tends to increase with X, but it is not a good tool to prove that Y always increases with X. For this you need to design an algorithm.
To prove that Y always increases with X at each site, first create a table of unique X in ascending order with the corresponding min and max Y for each site:
S1:
X min(Y) max(Y)
1 1.1 1.5
2 1.7 1.7
4 5.9 5.9
S2:
X min(Y) max(Y)
1 2.3 2.3
2 2.5 2.5
3 4.0 4.0
4 9.1 9.2
Now for each site verify the following: For each X check that max(Y) is less than min(Y) for X+1. If this condition ever fails then you have shown that Y does not always increase with X at every site, otherwise you can say that it does. |
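A sketch of this check in pandas, assuming the data is in a DataFrame with columns Site, X and Y (I read "min(Y) for X+1" as "min(Y) at the next larger X that actually occurs"):
import pandas as pd

df = pd.DataFrame({
    "Site": ["S1"]*5 + ["S2"]*5,
    "X":    [1, 1, 2, 1, 4, 3, 2, 4, 4, 1],
    "Y":    [1.5, 1.3, 1.7, 1.1, 5.9, 4.0, 2.5, 9.1, 9.2, 2.3],
})

def y_increases_with_x(group):
    stats = group.groupby("X")["Y"].agg(["min", "max"]).sort_index()
    # max(Y) at each X must be below min(Y) at the next larger X
    return bool((stats["max"].values[:-1] < stats["min"].values[1:]).all())

print(df.groupby("Site").apply(y_increases_with_x))   # True for both S1 and S2 here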
H: What is the name for this chart which splits a quantity by allocation to a class hierarchy?
I need to create this type of chart, where you can see a quantity is split into a hierarchy of classes or taxonomy. In the example below the quantity is a household monthly income, and the classes are different monthly expenditures, classified broadly to the left, and progressively split into more detailed sub-classes.
However, I don't know what it is called and thus cannot find which tool to use.
AI: This is called a Sankey-Diagram:
"Sankey diagrams are a specific type of flow diagram, in which the width of the arrows is shown proportionally to the flow quantity." (wiki
An easy package to make these graphs would be "d3Network" in R. But many other options are available. |
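If you prefer Python, a minimal Sankey sketch with plotly (a different library than the R packages mentioned above; the categories and numbers here are made up):
import plotly.graph_objects as go

labels = ["Income", "Housing", "Food", "Rent", "Utilities", "Groceries", "Dining out"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[0, 0, 1, 1, 2, 2],   # indices into labels: where each flow starts
        target=[1, 2, 3, 4, 5, 6],   # where each flow ends
        value=[1500, 800, 1100, 400, 500, 300],
    ),
))
fig.show()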
H: How should classification be done for a very small data set?
I am looking at data from the London Data Store based on social characteristics between London boroughs.
Since there are only about 30 London boroughs, the data sets I am looking at are naturally very small. For example, I might be fitting regression/correlations to a plot of about 30 points.
What are appropriate ways to conduct classification on such small data sets, and why? 'Why' is important.
I was thinking of something like SVM, or Naive Bayes. Or regression if the data is continuous.
What are very inappropriate ways to conduct classification here?
AI: I don't think you need a classification algorithm; you can use your basic understanding of the data / business knowledge to do the classification. As the number of data points is too low, a model cannot give you good/generalised results.
Even if you try applying a complex algorithm like an SVM or a neural network, it is of no use, as there is too little data.
If you still want to apply a machine learning algorithm, you can try Naive Bayes or a Decision Tree, as these basic algorithms can do the job. |
H: First layer weights for transfer learning with new input tensor in keras.applications models?
In the pre-implemented models in keras (VGG16 etc.) it is specified that we can change the shape of the inputs of the models and still load the pre-trained ImageNet weights.
What I am confused about is what happens to the first layer weights then. If the input tensor has a different shape, then the number of weights will be different than for the pre-trained models, won't it?
Here is the implementation of the Keras VGG16 model for reference.
AI: The first layers are convolution and pooling ones:
For the convolutional layers, the only weights are the kernels and the biases, and they have fixed size (e.g. 3x3x3, 5x5x3) and do not depend on the input tensor shape.
The pooling layers do not have weights at all.
That's why you can reuse the weights independently from the input tensor shape.
With dense layers (i.e. the final layers), you need shapes to match, so you cannot reuse them if they do not. |
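A minimal sketch of this in Keras (the input size 160x160 is an arbitrary example): dropping the dense layers with include_top=False lets the convolutional ImageNet weights load for a new input shape.
from keras.applications import VGG16

# The convolutional kernels do not depend on the input tensor shape, so the
# ImageNet weights can be reused; include_top=False drops the dense layers,
# whose weights would no longer match.
base = VGG16(weights="imagenet", include_top=False, input_shape=(160, 160, 3))
base.summary()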
H: What is the dimensionality of the bias term in neural networks?
I am trying to build a neural network (3 layers, 1 hidden) in Python on the classic Titanic dataset.
I want to include a bias term following Siraj's examples and the 3Blue1Brown tutorials, updating the bias by backpropagation, but I know my dimensionality is wrong. (I feel I am updating the biases incorrectly, which is causing the incorrect dimensionality.)
The while loop in the code below works for a training dataset, where the node products and biases have the same dimension, but once I pass a test example into the predict function, the dimensions do not match up and I get an error. I have commented my code with the dimensions of the calculations of dot products between nodes and inputs.
Can someone help me understand what the dimensionality of the bias term should be, both in this particular case and in general, and how it should be added (row-wise, column-wise)?
Code:
def sigmoid(x, deriv=False):
    """
    Activation function
    """
    if(deriv==True):
        return (x*(1-x))
    return 1/(1+np.exp(-x))

# learning rate, hidden layer dimension, error threshold, dropout rate
alpha, hidden_size, threshold, drop_rate = (0.035,32,0.1,0.5)

# x_train and y_train are the training dataset and corresponding classes
# syn0 and syn1 are the synapses, weight matrices between layers (3 layers, 2 synapses)
syn0 = 2*np.random.random((x_train.shape[1],hidden_size)) - 1 # NxH
syn1 = 2*np.random.random((hidden_size,1)) - 1 # Hx1

b1 = np.random.random((x_train.shape[0],hidden_size)) # MxH
b2 = np.random.random((x_train.shape[0],1)) # Mx1

layer_2_error = 100*np.abs(np.random.random((y_train.shape[0],1))) - 1 # Mx1

avg_err = []
count = 0

while np.mean(np.abs(layer_2_error)) > threshold:
    # Forward
    layer_0 = x_train # training dataset
    A = np.dot(layer_0,syn0) + b1 # MxN X NxH + MxH ~ MxH
    layer_1 = sigmoid(A)
    # drop out to reduce overfitting
    layer_1 *= np.random.binomial([np.ones((len(x_train),hidden_size))],1-drop_rate)[0] * (1/(1-drop_rate))
    B = np.dot(layer_1,syn1) + b2 # MxH X Hx1 + Mx1 ~ Mx1
    layer_2 = sigmoid(B)

    # Backprop
    layer_2_error = layer_2 - y_train # Mx1
    layer_2_delta = layer_2_error * sigmoid(layer_2,deriv=True) # Mx1 * Mx1 ~ Mx1
    layer_1_error = np.dot(layer_2_delta,syn1.T) # Mx1 X 1xH ~ MxH
    layer_1_delta = layer_1_error * sigmoid(layer_1,deriv=True) # MxH * MxH ~ MxH

    # update weights
    syn1 -= alpha*np.dot(layer_1.T,layer_2_delta) # HxM X Mx1 ~ Hx1
    syn0 -= alpha*np.dot(layer_0.T,layer_1_delta) # NxM X MxH ~ NxH

    # update biases
    b2 -= alpha*layer_2_delta # Mx1
    b1 -= alpha*layer_1_delta # MxH

    avg_err.append(np.mean(np.abs(layer_2_error)))
    if count % 500 == 0:
        print("Error after",count,"iterations:",np.mean(np.abs(layer_2_error)))
    count += 1

def predict(x, w0, w1, b1, b2):
    """
    Function to predict an output given data x, weight matrices w0 & w1 and biases b1 & b2
    """
    A = np.dot(x,w0) + b1 # mxN X NxH (+ MxH) ~ mxH
    layer_1 = sigmoid(A)
    B = np.dot(layer_1,w1) + b2 # mxH X Hx1 (+ Mx1) ~ mx1 (preds)
    layer_2 = B
    return (sigmoid(layer_2) > 0.5).astype(int)
AI: As per the general case, the bias vector must have the same dimensions as the output vector.
Please, have a look at this excellent presentation:
In this example by M. Görner, there are 10 classes, so the bias dimension is also 10. Once the inputs are multiplied by the weights, the bias is added pointwise (it is 'broadcast'). And that's pretty much it.
For those wondering about the origin of 100 and 784: there are 100 training examples (images) and 784 features (total pixels per image). |
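Applied to the code in the question, a sketch of the corrected shapes (the toy sizes here are made up; only the shapes matter): each bias is one row that NumPy broadcasts over the batch, and its update accumulates the deltas over the batch axis.
import numpy as np

M, N, H = 8, 6, 32                         # batch size, input features, hidden units (toy values)
alpha = 0.035
layer_1_delta = np.random.random((M, H))   # stands in for the deltas from the question's code
layer_2_delta = np.random.random((M, 1))

b1 = np.random.random((1, H))              # one bias per hidden unit, broadcast over the batch
b2 = np.random.random((1, 1))              # one bias for the single output unit

# bias update: sum the per-example deltas over the batch axis
b1 -= alpha * layer_1_delta.sum(axis=0, keepdims=True)   # (M, H) -> (1, H)
b2 -= alpha * layer_2_delta.sum(axis=0, keepdims=True)   # (M, 1) -> (1, 1)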
H: Better input for Doc2Vec
I want to perform Doc2Vec on a Twitter dataset. As each tweet consists of a number of special characters, numbers, URLs, mentions, hashtags and non-English words, what should my input for Doc2Vec be? How should I approach the initial tweet pre-processing?
I saw many tutorials, but all of them used plain text. I am a newbie in data science.
AI: There are plenty of different approaches you can use, and none is the universal best solution. However, in general, preprocessing of Twitter data, especially for Doc2Vec, follows these steps:
tokenizing (nltk tokenizer, custom regex tokenizer) to identify words. Depending on the application, you can work on special cases such as English contractions, negations (especially for sentiment analysis) and others
normalizing urls and/or mentions, transforming each of them to the same version to reduce vocabulary size (http://someurl, @mention) or totally removing them.
removing numbers, punctuation or other characters. Note that this is highly domain dependent; for example "!" has been shown to express a lot in sentiment analysis.
In fact, doc2vec doesn't expect anything; it uses the vocabulary you give it, so you decide what you want to keep. A good practice is to experiment with different tokenizers and training settings, if you have time, to see what works best. Also use what has already been done in your field of application.
Don't forget that the input of Doc2Vec is an iterator over a list of TaggedDocument. See this tutorial for more.
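A minimal sketch with gensim (the toy tweets are made up and assumed to be already tokenized; parameter names follow recent gensim versions):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = [["great", "movie", "URL"], ["boring", "songs", "MENTION"]]   # pre-processed token lists

docs = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(tweets)]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20)

vector = model.dv[0]   # embedding of the first tweet (model.docvecs[0] in older gensim)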
Good luck ! |
H: Cannot see what the "notation abuse" is, mentioned by author of book
From Sutton and Barto, Reinforcement Learning: An Introduction (second edition draft), in equation 3.4 of page 38.
The probabilities given by the four-argument function p completely characterize the dynamics of a
finite MDP. From it, one can compute anything else one might want to know about the environment,
such as the state-transition probabilities (which we denote, with a slight abuse of notation, as a three-argument function)
$p(s' \mid s, a) \doteq \Pr\{S_t = s' \mid S_{t-1} = s, A_{t-1} = a\} = \sum_{r \in \mathcal{R}} p(s', r \mid s, a)$
The author mentioned "with a slight abuse of notation".
Where is the abuse in the notation, please? I didn't see anything that is not proper.
Thank you.
AI: The mathematical expression is completely legit. The abuse is in the fact that the function $p$, which is defined for the first time in equation 3.2 as:
The function $p: \mathcal{S} \times \mathcal{R} \times \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$ is an ordinary
deterministic function of four arguments...
is re-defined slightly differently just two lines after this definition (equation 3.4), as a three-argument function $p: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$.
If they had used $p$ to represent the regular probability measure, there would be no abuse. In the authors' notation, $p$ is a deterministic function, while the regular probability function is denoted as $Pr$; keeping the same name for slightly different functions is where the "innocent" notation abuse comes from. |
H: What is the best way to normalize histogram vectors to get distribution?
I have the following sample of 4 vectors of dimension 5. They are sparse vectors, in the sense that each value in a vector represents a frequency (number of occurrences of a value). For instance
v_1=[0,4,0,0,1]
The 4 at index 1 means we have four values of 1, and the 1 at index 4 means we have one value of 4.
The purpose of normalizing these histograms is to get a distribution.
Here is a sample of my data.
Vectors=[[0,4,0,0,1],[5,0,0,3,1],[1,0,0,0,0],[0,0,6,0,0]]
What I tried:
vectors=(vectors-np.mean(vectors,axis=0)) / np.std(vectors,axis=0)
and I tried:
vectors=sklearn.preprocessing.scale(vectors)
A) Is what I have tried correct?
B) I have a question about normalization by subtracting means and dividing by standard deviations:
Across all the vectors, for each dimension, the mean (of that dimension) is computed. Then each data point of this dimension is subtracted from its mean. Is what I'm saying correct?
In other words, each dimension has its own mean.
For instance, if we have dim(vectors)=(400,2000) such that 400 is the number of examples,
we compute 2000 means and 2000 stds since we have 2000 dimensions. And then each data point is subtracted from its means and divided by its std (the mean and std of that dimension).
Thank you for correcting me.
AI: What you are saying is okay, except for the
each data point is subtracted from its means
part, because it's the other way around: you have to subtract the means from their data points.
I guess your problem comes from the fact that you put your vectors into the matrix as row vectors, but np.mean(..., axis=0) calculates the column means.
If you want to use the above formula for normalizing then you should transpose your matrix.
vectors = np.array([[0,4,0,0,1],[5,0,0,3,1],[1,0,0,0,0],[0,0,6,0,0]])
vectors.T
array([[0, 5, 1, 0],
[4, 0, 0, 0],
[0, 0, 0, 6],
[0, 3, 0, 0],
[1, 1, 0, 0]])
And now the np.mean(vectors.T, axis=0) gives the correct means for your vectors:
array([ 1. , 1.8, 0.2, 1.2])
Subtract the means from the transposed matrix like vectors.T - np.mean(vectors.T, axis=0)
array([[-1. , 3.2, 0.8, -1.2],
[ 3. , -1.8, -0.2, -1.2],
[-1. , -1.8, -0.2, 4.8],
[-1. , 1.2, -0.2, -1.2],
[ 0. , -0.8, -0.2, -1.2]])
And finally divide it by the standard deviation of the transposed matrix
t_vectors = (vectors.T-np.mean(vectors.T,axis=0)) / np.std(vectors.T,axis=0)
t_vectors
array([[-0.64549722, 1.65027399, 2. , -0.5 ],
[ 1.93649167, -0.92827912, -0.5 , -0.5 ],
[-0.64549722, -0.92827912, -0.5 , 2. ],
[-0.64549722, 0.61885275, -0.5 , -0.5 ],
[ 0. , -0.4125685 , -0.5 , -0.5 ]])
Sklearn gives the same result: sklearn.preprocessing.scale(vectors.T). |
H: scikit-learn classifier reset in loop
I'm trying to evaluate a comparison of classifiers by running the sample script that can be found here.
What I noticed is that in some cases the classifier is not reset.
In fact, duplicating some of those (with no parameter change), the score and the contour change between the two.
This can be seen by simply replacing AdaBoostClassifier() in the classifier list with another MLPClassifier(alpha=1).
I guess that at every cycle of the for loop the classifier should be reset in order to make a fair comparison among the different models, and this case should behave the same, I think.
In particular, differences are noticed when duplicating the MLP (Neural Net) and the Random Forest, while there is no change when duplicating KNN or RBF SVM.
I also tried to clone the classifier, and even del clf in the loop, but the behaviour stays the same.
How can I make the evaluation replicable and not influenced by the previous run? I want to be sure that when I use the same model and only change the parameters the result is correct, and this will be possible only if two identical models yield the same result.
AI: The behaviour you are seeing is not related to improperly resetting models but to the stochastic nature of most of these algorithms. By setting a random seed the same random numbers will be generated every time. See:
How to seed the random number generator for scikit-learn?
However, while this will lead to a reproducible sample, it might still not be fair. If one model randomly gets a good seed and another one a bad one, you will unfairly always favor the first one. What you could do is run the models multiple times with the same hyperparameters but with different seeds and look at the average performance. This way you get a fairer comparison and reproducibility. Pick the seeds up front, then you can loop over them. Something like this:
seeds = (1, 2, 3, 4, 5)
performances = []
for seed in seeds:
    performances.append(score(Model(param1=1, param2=2, random_state=seed))) |
H: When visualizing data that has <1 or <5 ppm how do you display this?
I have some data (parts per million) where some of it is 1 or greater than one (but has an actual number.) However, some of the data simply lists "<1" ppm.
What is a good way to visualize this in a graph?
Should I pick an arbitrary decimal less than one, so that it can be shown on the graph? I feel setting it to zero would not be right either, as they went to the extent of listing it, so it must be significant.
AI: Is it a univariate plot? If so, just bin the data and toss it into the lowest bin. |
H: When do we say that the dataset is not classifiable?
I have many times analysed a dataset on which I could not really do any sort of classification. To see whether I can get a classifier I have usually used the following steps:
Generate box plots of label against numerical values.
Reduce the dimensionality to 2 or 3 to see if classes are separable, also tried LDA sometimes.
Forcefully try to fit SVMs and Random Forests and look at feature-importance to see if the features make any sense or not.
Try to change the balance of classes and techniques like under-sampling and over-sampling to check if class imbalance might be an issue.
There are many other approaches I can think of, but have not tried. Sometimes I know that these features are not good and not at all related to the label we are trying to predict. I then use that business intuition to end the exercise, concluding that we need better features or totally different labels.
My question is: how does a Data Scientist report that the classification cannot be done with these features? Is there any statistical way to report this, or is fitting the data with different algorithms first and looking at a validation metric the best option?
AI: It depends on your data. There is something called human-level error. For tasks like reading printed books, humans do not struggle to read, and mistakes rarely happen unless the print quality is bad. For cases like reading hand-written manuscripts, it often happens that not all words can be understood if the writer's handwriting looks odd to the reader. In the first situation the human-level error is very low and learning algorithms can reach the same performance, but the second example illustrates the fact that in some situations the human-level error is much higher, and in the usual setting (if you use the same features as humans do) your learning algorithm will also have a high error rate.
In statistical learning, there is something called the Bayes error: whenever the distributions of the classes overlap, the error rate is large. Without changing the features, the Bayes error of the current distributions is the best achievable performance and cannot be reduced at all.
I also suggest reading here. Problems with a large Bayes error for the chosen features are considered not classifiable within the space of those features. As another example, suppose you want to classify cars with their lights on. If you try to do that from images taken in the morning, you yourself may make lots of errors, and if you use the same images for training, the learning algorithm will too.
Also, I recommend you not to change the distribution of your classes. In such cases, the result of the classifier near the boundary would be completely random. The distribution of the data for training your machine learning algorithm should not be changed and should be as it is in the real condition. |
H: What exactly is the input of decoder in autoencoder setup
I am reviewing various autoencoder setups for MNIST reconstruction, Seq2Seq translation and others.
My naive understanding of data flow is as follows:
Input -> [Encoder] -> Hidden Representation -> [Decoder] -> Output.
However, in case of Seq2Seq translation task similar to Sutskever et. al. decoder input is combined from hidden state + input sequence.
I wonder how the input of the decoder depends on the target task. Why do we need to feed the input sequence in addition to the hidden state? Any high-level explanation is appreciated.
AI: You are mixing two different beasts. Despite both having encoder and decoder parts, the way in which a normal feedforward image transformation network (i.e. the autoencoder) and an autoregressive model (i.e. the seq2seq) are actually used is very different:
In the type of image transformation done in a vanilla autoencoder, you input an image and get another image at the output. That's it, both at training and at inference time.
In an autoregressive model like seq2seq, you feed the network with the input and the initial tokens of the expected output sequence (i.e. the prefix), and you get the next token as output. During training, those initial tokens are normally from the gold data (aka teacher forcing), that is, you feed as prefix real tokens from the training data. In inference, those tokens are generated by the model itself: you generate the first token, then you generate the second token given the first one you previously generated, then you generate the third token given the previous 2 generated tokens, and so on. |
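A schematic greedy decoding loop that shows this feedback (encoder and decoder here are hypothetical callables standing in for whatever model you use):
def greedy_decode(encoder, decoder, source_tokens, bos_token, eos_token, max_len=50):
    hidden = encoder(source_tokens)            # hidden representation of the input
    output = [bos_token]
    for _ in range(max_len):
        next_token = decoder(hidden, output)   # next token given hidden state + generated prefix
        if next_token == eos_token:
            break
        output.append(next_token)
    return output[1:]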
H: Using machine learning technique to predict commodity prices
Has anyone here tried to predict a commodity's price by using other commodities prices as features in a machine learning algorithm? What techniques have been successful?
AI: Based on your question, there are a couple of things which I would assume in order to answer it:
As you need to predict the commodity's price, the data which is collected is time series data.
Since you want to use other commodities to predict it, it means that you don't have any past data for the product whose price you want to predict.
The answer could be derived by performing some exploratory analysis on the existing data, i.e., based on your business understanding you need to decide which existing product is similar to the new product. This kind of technique is used to understand the sales of a new product / how it is going to perform after launch.
Techniques which can be used here are time series models like ARIMA; SARIMA if seasonality is present; Exponential Smoothing if there is no trend (too many spikes); and other models like Auto Regression, Moving Average, or Croston if there are zeros, etc.
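As a minimal sketch of fitting one of these models with statsmodels (the price series is made up and the ARIMA order (1, 1, 1) is just a placeholder you would tune):
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

prices = pd.Series([101.2, 103.5, 102.8, 105.1, 107.4, 106.9, 109.3, 111.0],
                   index=pd.date_range("2017-01-01", periods=8, freq="MS"))

model = ARIMA(prices, order=(1, 1, 1)).fit()
print(model.forecast(steps=3))   # forecast the next three months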
This is one way of looking at your problem. |
H: How word embedding work for word similarity?
I am trying to apply word2vec/doc2vec to find similar sentences. First consider word2vec for word similarity. What I understand is that CBOW can be used to find the most suitable word given a context, whereas skip-gram is used to find the context given some word, so in both cases I am getting words that co-occur frequently. But how does it work to find similar words? My intuition is that, since similar words tend to occur in similar contexts, word similarity is actually measured from the similarity among contextual/co-occurring words. In the neural net, when the vector representation of some word at the hidden layer is passed through to the output layer, it outputs probabilities of co-occurring words. So the co-occurring words influence the vectors of some words, and since similar words have similar sets of co-occurring words, their vector representations are also similar. To find the similarity, we need to extract the hidden layer weights (or vectors) for each word and measure their similarities. Do I understand it correctly?
Finally, what is a good way to find tweet text (full sentence) similarity using word2vec/doc2vec?
AI: I think you have it mostly correct.
Word embeddings can be summed up by: A word is known by the company it keeps. You either predict the word given the context or vice versa. In either case similarity of word vectors is similarity in terms of replaceability. i.e. if two words are similar one could replace the other in the same context. Note that this means that "hot" and "cold" are (or might be) similar within this context.
If you want to use word embeddings for a similarity measure of tweets there are a couple approaches you can take. One is to compute paragraph vectors (AKA doc2vec) on the corpus, treating each tweet as a separate document. (There are good examples of running doc2vec on Gensim on the web.) An alternate approach is to AVERAGE the individual word vectors from within each tweet, thus representing each document as an average of its word2vec vectors. There are a number of other issues involved in optimizing similarity on tweet text (normalizing text, etc) but that is a different topic. |
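A minimal sketch of the averaging approach with gensim (the toy tweets are made up and assumed to be tokenized already):
import numpy as np
from gensim.models import Word2Vec

tweets = [["great", "movie", "tonight"], ["boring", "songs", "today"]]

w2v = Word2Vec(tweets, vector_size=50, min_count=1, epochs=20)

def tweet_vector(tokens, model):
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

v0 = tweet_vector(tweets[0], w2v)   # average word2vec vector for the first tweet
Cosine similarity between such averaged vectors is then a simple tweet-to-tweet similarity measure.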
H: What is the definition of Data Scout?
I am looking for a definition of Data Scout.
what is the difference between Data Mining and Data Scout?
AI: In business, there is little time to look through the data that has been eliminated before a specific analysis project, so I put that data in an 'elim_bin' (tagged with project code # and project date). I always attach the data source and the date pulled for each eliminated record.
Later, I can "scout" that data with an algorithm to see if something could be useful/learned.
This is how I use "scout" data. Hope this helps. |
H: Are neural networks able to deal with non-normalised inputs?
All the techniques/models that I have learnt so far for deep learning start with some sort of normalization to the features, for example gaussian method, minmax scaling, robust scaling, batch normalization, instance normalization.
Are there any techniques to run neural networks without normalization so that the network can see (in absolute values) the magnitude of the value and respond according to that instead of normalized values? Will there be exploding/vanishing gradient issues if I don't normalize my data?
For example, if I am training a custom LSTM network for multivariate time series data, the input dimension for a feature vector $x$ is all the values from $t-n$ to $t$, where $n$ is the number of time steps and the output vector is the value at $t+1$. Is there any need for normalization in this case?
AI: Normalization helps to eliminate scale factors that might exist between variables in your data. Take, for example, the classic problem of predicting home prices. If you represent the square footage of your home in square millimeters, a large change in this value will have a relatively small effect on home price, implying a small gradient on this variable. If you represent that value in square kilometers, a small numerical change will have a large impact on price, implying a large gradient. Normalization isn't necessarily required, but can help to balance the problem by making all variables have "equal weight" in your model. If you were to include both the square millimeter and square kilometer variables in your training data, the neural network would likely spend a lot of effort optimizing on the square kilometer variable, since it is numerically more important. You can still do training with un-normalized data, but it will likely take longer, and possibly have worse output if your important variables are numerically smallest. |
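If you do decide to normalize, here is a minimal scikit-learn sketch (X_train and X_test are assumed to be your feature matrices; the scaler is fit on the training data only and reused at test time):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                          # or MinMaxScaler() for [0, 1] scaling
X_train_scaled = scaler.fit_transform(X_train)     # learn mean/std from training data only
X_test_scaled = scaler.transform(X_test)           # reuse the same statistics at test time
# scaler.inverse_transform(...) recovers the original absolute magnitudes if needed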
H: How to find a similarity value between cars
So I have a database of web-scraped cars, and I want to find the similarity between cars based on the km driven (e.g. 69000), the model year (e.g. 2012), and the trim of the car, which will be one of three categories: base, mid, top.
What similarity measure can I use that will give me a decently accurate similarity ratio?
AI: I suggest you use a weighted per-attribute similarity. For instance, let a and b be a pair of tuples representing the attributes of some car A and another car B. For example:
a = (69000, 2012, base)
b = (70000, 2013, base)
For the km driven I would use Skm = 1 / (1 + |km(a) - km(b)|), so:
Skm = 1 / (1 + |69000 - 70000|) = 1 / 1001 = 0.0009
For the case of the year the same function could be use, then:
Syear = 1 / (1 + |2012 - 2013|) = 1 / 2 = 0.5
Last, for the trim of the car I would use a function that returns 0 if the categories were different 1 otherwise.
Strim = (base == base) = 1
Finally,
S = Wkm * Skm + Wyear * Syear + Wtrim * Strim
Where Wkm + Wyear + Wtrim = 1.0, and I would adjust the values according to what you consider a decently accurate similarity ratio. Setting the values to Wkm = 0.6, Wyear = 0.2, Wtrim = 0.2 gives the similarity S(a, b) = 0.30054. This similarity has the nice property that tuples with the same attributes yield a value of 1.
A good practice is to normalize the values of each attribute so you don't implicitly assign a low weight to any attribute. For instance, you could divide the km and year columns by their maximum values; for the pair of tuples in the example you would then have the following transformed tuples:
a' = (0.985, 0.99, base)
b' = (1, 1, base)
Now the similarity using the same weights is S(a, b) ≈ 0.99. Alternatively, you could use a similarity matrix for the trim of the car; that is, base could be more similar to mid than to top. For example:
cat | base | mid | top |
-------------------------
base | 1 | 0.5 | 0.2 |
-------------------------
mid | 0.5 | 1 | 0.7 |
-------------------------
top | 0.2 | 0.7 | 1 |
The values of the matrix have to be adjusted to fit your needs. For a more in-depth comparison of similarity functions for categorical attributes (trim) see this; for a comparison of different similarity measures for continuous attributes (year, km) see this.
Again, which similarity is best will depend on the data and on your perception. I would select a random set of pairs of cars and manually assign a similarity to each pair, then split this set into a train set and a test set. Use the train set to adjust the parameters of the different candidate similarities (in the case of my proposal, Wkm, Wyear and Wtrim), and once you're satisfied, evaluate on the test set.
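Here is a minimal Python sketch of the weighted similarity described above, using the illustrative 0.6/0.2/0.2 weights and the simple 0/1 trim rule rather than the similarity matrix:

def attribute_sim(x, y):
    return 1.0 / (1.0 + abs(x - y))

def car_similarity(a, b, w_km=0.6, w_year=0.2, w_trim=0.2):
    km_a, year_a, trim_a = a
    km_b, year_b, trim_b = b
    s_km = attribute_sim(km_a, km_b)
    s_year = attribute_sim(year_a, year_b)
    s_trim = 1.0 if trim_a == trim_b else 0.0
    return w_km * s_km + w_year * s_year + w_trim * s_trim

a = (69000, 2012, "base")
b = (70000, 2013, "base")
print(car_similarity(a, b))   # ~0.30 without normalizing km and year first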
H: Neural Networks overfitting
How do neural networks overfit in a regression setting? Does the network try to match each individual observation value, or the sum of all observations?
AI: Neural networks basically act as high-capacity, memory-based machine learning algorithms. For a given dataset, the chance of the model aligning almost perfectly with all the training data is high, as it can effectively end up memorizing every data point you give it.
Overfitting occurs precisely because of this: when new, unseen data is introduced, the model cannot adjust its fit to that data; graphically, it ends up missing more of the values than it is supposed to fit.
In conclusion, it does not work well when the scoring population is significantly different from the training sample.
H: What are the best ways to use a time series data for binary classification
I have a large number of CSV files, each of which is a time series sampled every 5 seconds for 2-3 minutes. I have 20k such files with 200-300 variables in each file.
I am aggregating the data by taking the mean over the entire 2-3 minute window and using it for binary classification.
Currently I am using the mean of each column in the CSV file to represent that file, so basically I am summarizing each CSV with one scalar value per column.
So each file is one sample, represented by its respective mean values.
Could anyone suggest me some better ways to summarize the timeseries data.
Thanks for your time.
AI: From your comment, I understand that you are trying to solve the binary classification problem using your aggregated data and you are getting very poor results when you simply use the mean.
Depending on specifics of your data and the shape of your time series, there are several alternatives that you could try. Note, that you might need (significantly) more than just a single number per time series to solve your problem.
In addition to the mean, you could use the quantiles or some other summary statistic, like standard deviation, min, or max.
You could try to sample the data, i.e. instead of taking the entire time series, pick only the values that are minutes, hours or days apart. Or pick only mid-day values. The frequency of the sampling depends on your data.
Or just pre-aggregate by calculating averages for every hour, day, month, etc.
Additionally, you could calculate the periodicity of your time series and use it as a new feature.
Or calculate some trends.
Try to fit some standard time series models to your data, e.g. ARIMA and use the coefficients as informative features.
Last but not least, use domain knowledge regarding what could be a relevant feature for your classification problem: the biggest jump (max first-order difference), change of regime, etc.
Edit
I’d pick at least 10-20 features per time series generated as described above and apply logistic regression with LASSO or even xgboost.
After selecting 10-20 features per time series you also could try PCA to reduce the dimension. |
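As a minimal sketch of richer per-file summarization with pandas (assuming each CSV loads into a DataFrame of numeric columns and lives under a hypothetical data/ folder; the chosen statistics are just examples):

import glob
import pandas as pd

def summarize_file(path):
    df = pd.read_csv(path)
    feats = {}
    for col in df.columns:
        s = df[col]
        feats[col + "_mean"] = s.mean()
        feats[col + "_std"] = s.std()
        feats[col + "_min"] = s.min()
        feats[col + "_max"] = s.max()
        feats[col + "_q90"] = s.quantile(0.9)
        feats[col + "_max_jump"] = s.diff().abs().max()   # biggest jump between samples
    return feats

X = pd.DataFrame([summarize_file(p) for p in glob.glob("data/*.csv")])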
H: Understanding what is going on
I have a collection of 1000s of bottles of wine. I want to understand what could / likely to be driving the price either up or down. Below is an example of the data
Wine Country Area Grape Class Year Price
A France Burgundy Pinot Noir Grand 2014 +6%
B France Burgundy Chardonnay 1er 2014 -1%
C France Burgundy Pinot Noir 1er 2014 +4%
D USA California Pinot Noir 2013 +1%
E USA California Blend 2014 +0.5%
F USA California Chardonnay 2014 -4%
G USA California Chardonnay 2013 -5%
From these fictitious numbers I would assume the reason bottle A is up is that Pinot Noir is up, but more importantly that France is up versus USA wine. Similarly, Chardonnay is doing poorly, but the USA is again doing worse than France.
Here this is rather easy, but if this were extended to thousands of rows with more categories added, it becomes much harder to see the trends.
My original thought would be to use the importance of a feature from a decision tree. But is there a better way?
Thanks
AI: What you are looking for is feature contributions to the final score of an observation having a positive or negative price. Feature importances from a decision tree or random forest is not going to help you because a feature's importance is fixed across all observations (wines). They tell a story about the overall model, but nothing about the individual observations. So if you want to know what caused a wine having a negative or positive price, you should look at feature contributions.
Let's elaborate a little bit.
Let's forget about single decision trees and go for an ensemble of them, say a random forest, for all the good reasons. (I highly recommend reading up on why ensemble models generally beat the base learner, and how they help with the high-bias / high-variance problems base learners might have.)
The first step would be to label your dependent variable, price, as $1$ and $0$, depending on the price being positive and negative ($y = 1$ iff $price > 0$). After the typical exploratory analysis (where you can look at boxplots and stuff as a beginning study) and feature engineering, you can fit your random forest to predict a wine having a positive or negative price (i.e. $y$ being $1$ or $0$).
Once you do that, you will have an estimated probability for each and every wine, representing the classification model's estimation of that wine's price being positive or negative. Something like this:
Wine Prob
A 0.4
B 0.7
C 0.9
D 0.3
These probabilities actually consist of the summation of a bias term and the individual contributions of all features to that observation only:
$$P(y_i = 1) = bias + \sum_{k=1}^{m}(contributionOfFeature_k),$$
where $m$ is the total number of features. This is how you can analyze each and every feature's contribution to the final probability of a particular wine's price being positive or negative.
Regarding how to code this, if you are using scikit, you can use this very easy and convenient tool:
https://github.com/andosa/treeinterpreter |
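As a rough illustration of how that package is typically used (rf is assumed to be a fitted scikit-learn RandomForestClassifier, X the wine feature matrix and feature_names the column names; check the package's README for the exact, current API):

from treeinterpreter import treeinterpreter as ti

prediction, bias, contributions = ti.predict(rf, X)

# contributions[i, j, c] is the contribution of feature j to class c for wine i
for name, contrib in zip(feature_names, contributions[0, :, 1]):
    print(name, round(contrib, 4))   # per-feature contribution to P(price up) for wine 0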
H: Softmax classifier never allows for 100% probability in LSTM?
When working with LSTM I am using a softmax classifier and a one-hot encoded vector approach. The softmax looks like this:
$$S(h_i) = \frac{e^{h_i}}{\sum e^{h_{total}}}$$
notice, LSTM's result is a $h=tanh(c) \circ \sigma(p)$
Where c is the cell state passed through a tanh as well, and $\circ$ is the component-wise product of two vectors.
Recall that tanh never goes beyond -1 and never goes above 1; The $\sigma$ never goes below 0 and never above 1
Does this mean that if we have a 4-neurons on the output - the best guess a network can make will be 1, -1, -1, -1?
When softmaxed, this will produce $\frac{2.72}{3.83}, \frac{0.37}{3.83}, \frac{0.37}{3.83}, \frac{0.37}{3.83}$ which are these probabilities:
0.7,
0.1,
0.1,
0.1
and we can never get ~100% certainty, (Edit: or in this example even above 70%) no matter the learning rate?
Is there a way to combat it without destroying its derivative working nicely with Cross-entropy?
Edit
As a solution, I've tried to determine how close to $e^{−1}$ and $e^1$ the $e^x$ actually lies. Basically an 'inverse lerp', and it indeed puts values to 0 and 1 range. However, I am not sure if it affects derivatives & if it no longer will work out of the box when using Cross Entropy.
Here is the usual way to compute softmaxed-vector:
float totalSum = 0;
Vec softmaxedVec;
for(int i=0; i< tanhVec.Length; ++i){
softmaxedVec[i] = exp( tanhVec[i]);
totalSum += softmaxedVec[i];
}
for(int i=0; i<softmaxedVec.Length; ++i){
softmaxedVec[i] = softmaxedVec[i] / totalSum;
}
and here is what I am now trying to do:
float totalSum = 0;
Vec softmaxedVec;
for(int i=0; i< tanhVec.Length; ++i){
softmaxedVec[i] = inverse_lerp(0.3678, 2.71828, exp( tanhVec[i]));
totalSum += softmaxedVec[i];
}
for(int i=0; i<softmaxedVec.Length; ++i){
softmaxedVec[i] = softmaxedVec[i] / totalSum;
}
//returns percentage: 0 the value sits on min, 1.0 the value sits on max
float inverse_lerp(float min, float max, float currValue){
return (currValue - min) / (max - min);
}
However, with this I am afraid that I might have lost that gorgeous $\frac{\partial E}{\partial W} = expected - target $ (courtesy of softmax & cross-entropy working together)
..or if I basically ruined the benefit of softmax
AI: If you feed the output of the LSTM directly into a softmax, you probably won't get good results.
If you use a softmax layer after a tanh layer, bad stuff happens. As you say, the confidence will never get near 100%. For instance, if there are two classes, you can never get above about 88% confidence. If there are $k$ classes, you can never get confidence above $e/(e + (k-1)/e) = e^2/(e^2 + k-1)$.
So, rather than directly feeding the output of the LSTM directly into softmax, you can instead use the output of the LSTM as the input to one or more (fully-connected) layers of neural network. |
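A minimal Keras sketch of that idea (layer sizes and the placeholder shapes are illustrative only):

from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, n_features, n_classes = 26, 50, 26              # placeholder shapes
model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, n_features)))  # LSTM output is tanh/sigmoid-bounded
model.add(Dense(64, activation="relu"))                    # fully-connected layer on top
model.add(Dense(n_classes, activation="softmax"))          # its pre-activation logits are unbounded,
                                                           # so the softmax can approach 100% confidence
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])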
H: Implementing a CNN with one convolution layer
I am trying to implement a 1-channel CNN by slightly changing this article.
The problem is that I am new to Keras and deep learning, and so far I don't know why I am getting this error:
ValueError: Negative dimension size caused by subtracting 100 from 1 for 'conv2d_1/convolution' (op: 'Conv2D') with input shapes: [?,1,70,100], [100,100,100,64]
Obviously, it's a mismatch in the dimensions.
I am using this code:
from keras.layers import Embedding
from keras.layers import Conv2D
from keras.models import Sequential
from keras.layers import MaxPooling2D
from keras.layers import Reshape
from keras.layers import Flatten, Dense, Dropout
import pdb
Vocab_Size=11123
MAX_SEQUENCE_LENGTH=70
EMBED_DIM=100
model = Sequential()
embed1=Embedding(Vocab_Size+1,EMBED_DIM,input_length=MAX_SEQUENCE_LENGTH,input_shape=(MAX_SEQUENCE_LENGTH,EMBED_DIM,1))
nb_labels=6
model = Sequential()
model.add(embed1)
model.add(Reshape((1,MAX_SEQUENCE_LENGTH, EMBED_DIM)))
model.add(Conv2D(64, strides=5, kernel_size=EMBED_DIM, activation="relu", padding='valid'))
model.add(MaxPooling2D((MAX_SEQUENCE_LENGTH-5+1,1)))
model.add(Flatten())
model.add(Dense(256, activation="relu"))
model.add(Dropout(0.3))
model.add(Dense(nb_labels, activation="softmax"))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['acc'])
Edit1:
I updated the padding to 'same' as mentioned by Media.
Now I have another problem for the next layer:
ValueError: Negative dimension size caused by subtracting 66 from 1 for 'max_pooling2d_1/MaxPool' (op: 'MaxPool') with input shapes: [?,1,14,64].
AI: I guess you should change the following line to solve the problem:
model.add(Conv2D(64, strides=5, kernel_size=EMBED_DIM, activation="relu", padding='valid'))
instead use this code:
model.add(Conv2D(64, strides=5, kernel_size=EMBED_DIM, activation="relu", padding='same'))
this will keep the dimension of your input. If it does not work, let me know to help you more. |
H: Exploration vs exploitation tradeoff to find a price that maximizes revenue
Is there a practical strategy that can learn to price a product optimally? Right now I have the following arbitrary hill-climbing algorithm:
Run an experiment at starting price P and gather 500 data points (e.g. 20 buy and 480 not buy).
Run a t-test on what confidence level P yields higher revenue per visitor than P * 1.1 and P * 0.9. Then do a 3-way weighted coin-flip and the winner gets to run the next experiment.
There are many problems with this approach. For example, if the price is already at the optimum, the algorithm can't move to a slightly more optimal price, e.g. P * 1.03. Another is that if at some price point P = K we happen to get really unlucky and get 1 buy out of 500 data points, the algorithm won't converge fast.
The problem gets easy if we take lots of data points but that would reduce long term revenue. Is there a fast algorithm that can converge to the optimal price and then not do anymore exploration?
AI: Without making any underlying assumptions you will not get anywhere. That said, there are multi-arm bandit strategies that try to optimize the rewards, there is a ton of research on this field. It comes down to sampling from a distribution of your options (in your case two) and adapting this distribution based on the rewards.
https://en.wikipedia.org/wiki/Multi-armed_bandit
Once you know that the reward distribution from each bandit comes from a specific distribution, you can deduce optimal sampling strategies. Once you have at least some prior information you can do fairly well although not always optimal. Regardless, most strategies will do better than normal A/B testing if the strategy is not super greedy. |
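A minimal Thompson-sampling sketch for this pricing problem (each candidate price is one arm with a Beta posterior over its conversion rate; the true_conv array only simulates the real world and would be replaced by actually showing the price to a visitor):

import numpy as np

rng = np.random.RandomState(0)
prices = np.array([9.0, 10.0, 11.0])        # candidate price points (arms)
true_conv = np.array([0.05, 0.04, 0.025])   # hidden conversion rates, simulation only
alpha = np.ones(len(prices))                # Beta(1, 1) prior on each arm's conversion rate
beta = np.ones(len(prices))

for visitor in range(100000):
    theta = rng.beta(alpha, beta)            # sample a conversion rate per arm
    arm = int(np.argmax(theta * prices))     # pick the arm with best sampled revenue per visitor
    bought = rng.rand() < true_conv[arm]     # in production: show prices[arm] to a real visitor
    alpha[arm] += bought
    beta[arm] += 1 - bought

print(prices[np.argmax(alpha / (alpha + beta) * prices)])   # current best price estimate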
H: What is the difference between CountVectorizer token counts and TfidfTransformer with use_idf set to False?
We can use CountVectorizer to count the number of times a word occurs in a corpus:
# Tokenizing text
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
If we convert this to a data frame, we can see what the tokens look like:
For example, the 35,780th word of the 3rd document occurs twice.
We can use TfidfTransformer to count the number of times a word occurs in a corpus (only the term frequency and not the inverse) as follows:
from sklearn.feature_extraction.text import TfidfTransformer
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
Converting this to a data frame, we get:
We can see the representation is different. The TF is shown as 0.15523. Why is this different than the token count using CountVectorizer?
AI: Actually, the documentation was pretty clear. I'll keep it posted in case someone else searches before reading:
The TfidfTransformer transforms a count matrix to a normalized tf or tf-idf representation. So although both the CountVectorizer and TfidfTransformer (with use_idf=False) produce term frequencies, TfidfTransformer is normalizing the count. |
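A minimal check of that statement on a small corpus (X_train_counts is the count matrix from the question; TfidfTransformer defaults to L2 normalization):

import numpy as np
from sklearn.preprocessing import normalize
from sklearn.feature_extraction.text import TfidfTransformer

tf = TfidfTransformer(use_idf=False, norm="l2").fit_transform(X_train_counts)
counts_l2 = normalize(X_train_counts, norm="l2")    # raw counts, L2-normalized row by row

print(np.allclose(tf.toarray(), counts_l2.toarray()))   # expected: True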
H: My LSTM can't reduce error down to zero, when overfitting
I have implemented an LSTM in C++ whose error steadily decreases, but the decrease slows down at a certain error value. It also seems to predict most of the characters, but gets stuck and is not able to correct some mistakes (or corrects them very slowly), even after 5000 backprop iterations. For example, asking it to predict characters one by one might result in
abcdefgjffklmnopqrstuxwxyz
or something similar. Notice, the network almost gets things right. Also, it never 'gets lost' after making a mistake, for instance in the above example, after jff it produces k and gets back on track as if it had never made that mistake. The results are always different - sometimes network learns all the letters. However, the error still plateaus at the same value.
The error starts from around 7.0 and decreases down to 2.35 after which it slows down with every iteration, which seems like it's hitting a plateau.
If my alphabet consists only of a,b, then the network almost instantly realises it should be producing abababababab; however, the error starts at 0.8 and now always plateaus at around 0.2 to 0.28
If, with a,b, we set 4 timesteps, the network learns to produce abab, but after 50 000 back-props (being well-stuck even after 25 000) it predicts 'a' only with 85%-ish certainty, even though I would expect it to be 99.999%; similar value when 'b' has to be predicted. Once it gets stuck it maintains values similar to these. So it could keep guessing the same value over and over again when working with the a,b dataset;
Strangely, when working with that a,b dataset, most of the time I observe the final learnt probabilities to be:
a=[0.68, 0.31] b=[0.15, 0.85]
and sometimes, after re-initializing the network it learns the final probabilities as a=[0.8205, 0.1794] and b=[0.1795, 0.8205]
Disabling momentum (previous frame's grads times zero) still has same effect on a,b
The network doesn't explode, but its gradients seem to vanish.
Question:
Is it usual to get stuck at those values? Back-propping after 26 timesteps, done 200 000 times, and by that time the changes get turtle slow. The error sits at around 2.35 and is not worth the wait for another 0.000001 change in error.
Experimenting with a smaller learning rate (0.0004) allows the error to get down to 2.28 -that's the best I've got. Also, using a momentum coefficient with 0.2; It's applied to the previous frame's gradient. I don't increase momentum while the program executes, but keep it at a constant 0.2;
newgradient = newgradientMatrix + prevFrameGradientMatrix*0.2
I am not using any form of dropout etc. I just want the network to overfit, but it's getting stuck at 2.35 for 26-char aphabet.
I am only getting 0 error when an entire alphabet consists of a single character. In that case, the NN will predict aaaaaaaaaaaaaaaaa and error will be 0
Forward prop:
All is done on a single thread, in CPU. Using 'float' for the components of vectors.
Tanh on dataGate and after the 'cell' is constructed (before the 'cell' is multiplied by an output gate)
Sigmoid on Input, Forget and Output gate
Each gate has Matrix of weights where each column is weights from neuron in current LSTM unit to all neurons in previous LSTM unit. Last column is ignored because nothing should feed into our bias. Also, bias value (but not the weights from it!) is set manually to 1.0 just to be sure.
Each gate has a separate NxN matrix with recurrent weights (U-matrix) operating on the results of the current LSTM unit at [time-1]
Both W and U keep the last row, so they both sample bias of lower-LSTM. This shouldn't create issues, granted that both of biases are back-propagated properly. In fact, last row was removed from U-Matrix altogether - just to be sure, but the error still plateaus at the same 2.35 quantity regardless.
Weight initialization:
Xavier (Glorot & Bengio) initialization with a uniform distribution, page 253, bottom right.
Boundaries of the uniform distribution are defined as mentioned here, like this:
low = -4*np.sqrt(6.0/(fan_in + fan_out)); // use |4| for sigmoid gates, |1| for tanh gates
high = 4*np.sqrt(6.0/(fan_in + fan_out));
Cost function
Result of LSTM unit are softmaxed (bias is ignored), then passed through cross-entropy function to get a single float value.
Cross-entropy is the correct one for multi-class classification.
float cost = 0;
for(each vector component){
if(predictedVec[i] == 0){ continue; } // skip to avoid log(0)
cost += -(targetVec[i])*naturalLog(predictedVec[i]);
}
return cost;
The cost is then summed up across all timesteps and its average is returned right before we do the backprop. This is where I am getting plateaus at 2.3 for 26-character alphabet
By the way, the Cell (aka c) and Result (aka h) are cached after the last (26th timestep). After back-propagation they are used by timestep0. This could be (and was for some time) disabled, but the results are similar.
Backpropagation
Will list a couple of important gotchas & keypoints that I took care of:
de_dh is simply (targetVec - predictedVec), in that order. That's because the gathered gradient will be subtracted from the W and U matrices. This is due to derivatives cancelling out nicely when the softmax & crossEntropy are used together during forward prop.
To de_dh an extra gradient is added from t+1. That added quantity is sum from all 4 gates. To explain better, recall that one of such gates was forward-propping as follows:
dataGate = tanh(W * incomingVal + U * lstmResultPrevT + bias_ComesFrom_W_and_U); //one of the 4 gates
In the above formula, the bold represents the quantity from where the gradient is taken of one of such four gates at [t+1]. Such a gradient is then summed up across those 4 gates and added to de_dh, as stated originally. It's necessary to be done because during forward prop, 'H' of [t] has affected all 4 gates of [t+1]
When computing the gradient for Cell at [t], the cell's gradient from [t+1]is added. Afterwards, we compute a gradient for C of [t-1], to be able to repeat this process when we arrive to the earlier timestep.
The gradient for the U-weights leading to bias at [t-1] is computed with remembering that the bias's original value was 1.0; Also, it's double-checked to ensure gradient doesn't flow from our bias at [t] to the neurons at [t-1]. That's because nothing fed into our bias originally. As follows, the entire last column of U-gradient matrix is always 0.0;
Similar thing is done for such a bias-column of the W matrix too - that entire column is zero.
Finally, the gradient is computed for each H of [t-1] for each of the four gates. This is done so that the '2. key-point' is possible (adding the 4-grads to de_dh), when we get to the earlier timestep in this back-prop.
Unit Tests & debugging:
after 20 000 backprops (done every 26 timesteps) a file is collected.
it was observed that gradients were very small on all 4 gates, especially after being passed through the activation function at each gate.
This is one of the reasons why Xavier init (above) was introduced, to prevent weights from being too large (shrinks grad after pushing-back through activation) or being too small (shrinks grad after pushing-back through weights).
A significant improvement was observed after 'norm clipping' was used, allowing my LSTM to seemingly learn a correct sequence even when 56 unique characters were used and backprop was done after 56 timesteps. Similar to the original example (with the 26 chars) only a couple of characters were predicted incorrectly. However the error still always plateaus, at a higher value (around 4.5)
Once again, is this traditional behavior, and I just have to rely on the things like dropout and averaging the results of multiple networks? However it seems that my network isn't even capable of overfitting...
Edit:
I've discovered one thing - the result of the LSTM is a vector whose components cannot be less than -1 or greater than 1 (courtesy of tanh and sigmoid). As a result, $e^x$ cannot be smaller than ~0.36 or greater than ~2.71
So the probabilities always have some 'precipitation' dangling, and network always 'worries' that it can't reach 100% confidence? Tried to get clarification on that here
AI: As stated in the last edit of my question, the issue indeed was to do with the softmax function.
As clarified here, we shouldn't apply softmax directly to the result of the last LSTM layer. Notice that the LSTM produces a vector of values, each of which is bounded between -1 and 1 (due to the tanh squashing function that's applied to the Cell).
Instead, I've created a traditional fully-connected layer (just additional weight matrix), and feed result of LSTM to that layer. This "output" layer isn't activated - it feeds into a softmax function, which actually serves as an activation instead.
I modified the back-prop algorithm to supply the gradient generated by the softmax to the Output layer. Of course, if you used a cross entropy Cost function originally, then such a gradient will remain $(predicted - expected)$. It's then pushed through the weights of that Output Layer, to get the gradient w.r.t. LSTM. After this the backprop is applied as usual and the network finally converges.
Edit: There is slight improvement to momentum.
Also, using a momentum coefficient with 0.2; It's applied to the previous frame's gradient. I don't increase momentum while the program executes, but keep it at a constant 0.2;
newgradient = newgradientMatrix + prevFrameGradientMatrix*0.2
That's fine, but changing the momentum-coefficient will require us to also re-adjust learning rate. A cleaner version will be:
newgradient = newgradientMatrix*(1-0.9) + prevFrameGradientMatrix*0.9
Which is an exponential moving average, that remembers roughly $\frac{1}{0.1} = 10$ days.
On the 10th day, the coefficient will be $$\frac{(1-\epsilon)^{\frac{1}{\epsilon}}}{0.9} = \frac{(1-0.1)^\frac{1}{0.1}}{0.9} = \frac{0.9^{10}}{0.9} \approx 0.387 \approx \frac{1}{e}$$ of the peak; because the 9 newer days have larger coefficients (larger than 0.387), their average really makes the 10th day and older days negligible.
Any older days will have even less contribution
Also, don't forget about the bias correction, which helps get a better estimate when we are "just starting" to compute the average. Without the bias correction it would start very low and take some time to catch up with the expected exponential moving average.
newval = curval*(1-0.9) + prevVal*0.9
newval /= 1-(0.9)^t //where t is timestep
However, in practice, it will be fine after approximately 10 timesteps - and so there is no real need for the bias correction in Momentum, but it should be done if we are using combination of Adam (a combination of Momentum & RMSProp) |
H: After choosing top models in classification? Can I apply it on the rest of my dataset
I am working with a corpus that has 5 datasets in product reviews (A, B, C, D and E), mine is a text classification problem and I need to find the best 5 top models in terms of classification performance (F1).
I started with collection A, the mp3 reviews, because it has the largest number of documents (900 yes, 750 no).
I trained the data using 10-fold CV with different algorithms and pre-processing tasks, and got the weighted results for all experiments.
I chose the top 5 models and I want to apply them to the rest of the corpus: B, C, D and E (other products' reviews).
My plan is to run 10-fold CV, get the results for all the collections, and compute the micro-average for precision, recall and F1.
Is this the right way to choose a model for a large collection?
AI: This is an interesting question.
In general the split of data is about the underlying distribution. That means you split a dataset into train-test sets in such a way that a random train-test split does not dramatically affect the distribution. But splitting based on topics is not random!
Especially in your case, you are talking about text, for which the distribution is super sensitive to the domain, i.e. if you collect the commentary of 1000 football games and the narrations of 1000 documentary movies about wildlife, you will see that they are literally two different things. The conceptual difference between products most likely affects the distribution of words/terms/phrases, therefore the model trained on reviews of mp3s MUST NOT be validated on reviews of football shoes!
In your case, I would say the train-test split (CV folds) should be done on whole data together so that you maintain the original topology of word distribution (topology here is not a Math term but I simply mean the shape of distribution).
In this case if you do Topic Modeling on the whole training data you simply see 5 different product topics. Or if you use word2vec or doc2vec you hopefully see 5 different clusters. Then you can run your models in this setting for evaluation.
NOTE:
If the class sizes are very different, you need to come up with some solution for the small classes. If that is the case, just drop me a line in the comments and we can discuss solutions.
Good Luck :) |
H: Interpret clustering results after variable transformation
For some time I have had a question to which I have not yet found a proper answer.
My doubt concerns the interpretation of the results of a clustering algorithm which was run on features to which a log-transformation was applied.
Specifically, let's assume we want to run a k-means algorithm on 3 interval variables. Unfortunately, these three interval variables are extremely badly distributed and the k-means gives the worst result we have ever seen.
However, let's imagine that by applying a log transformation to each variable, we obtain three almost perfect normal distributions.
Then, we run again the k-means and we obtain perfect clusters.
Now, my doubt concerns the interpretation of the clusters obtained by running k-means on three log-transformed variables: should the interpretation of the clusters be based on the original variables or on the log-transformed variables?
Clearly, my example is related to log-transformation but we can talk about z-score or min-max normalization or any other kind of transformation that we apply in order to improve the quality distribution before running the clustering algorithm.
To clarify, what I mean by interpretation is the profiling of the cluster, which means try to describe which are the characteristics common to the individuals belonging to that cluster.
AI: Very interesting!
What you did to your data is simply a feature mapping/transformation. So how this affects the clustering results?
Clustering is not a clearly defined problem, but at least we know something about it: it's about internal similarities (patterns), so these similarities should be maintained through the feature transformation. In your example, if you found clusters in the transformed space, it shows that you had clusters in the original space as well; you just couldn't see them with the algorithm you used in that space!
For instance if you use Kernelized versions of algorithms you easily find that what they do is nothing but what you did as transformation. They first use a kernel to map the data into new space and then use the algorithm in that space (of course with a bit of theoretical differences/constraints).
To summarize, no transformation produces a fake pattern in the data. In the worst case it destroys the original pattern, and in the best case it reveals a pattern that was not visible originally (which is your case).
I mentioned Fake Pattern above so let me say a bit more on it. I think there is a fundamental issue concerning your question:
You assume that there is a Right clustering that you got after transformation. Actually there is no right clustering!
We do not have fake patterns! If there is a pattern in a feature space, then that is true, i.e. you found an interesting representation of your data. If it does not match the labels, then either the data is very noisy or the wrong features have been chosen to represent the classes (there may be more reasons; just these two came to my mind now). If there are no labels (your case), be sure there is a correlation between the features of those cluster members.
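On the practical side of profiling, a minimal sketch (assuming a NumPy array X holding the three positive interval variables): cluster in the transformed space, then map the centroids back with the inverse transform so the profiles are readable in the original units.

import numpy as np
from sklearn.cluster import KMeans

X_log = np.log1p(X)                        # log-transform the three variables
kmeans = KMeans(n_clusters=3, random_state=0).fit(X_log)

centers_log = kmeans.cluster_centers_      # centroids live in log space
centers_original = np.expm1(centers_log)   # back-transform for profiling in original units
print(centers_original)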
H: Building CNN, Need More Images
I'm building a custom Convolutional Neural Network for image recognition. I'm running into the issue of only having around 100 images or so to train and test on. From my research and model results, this is not enough.
Does anyone know of a service for hire where someone will search the web and build up a set of images based on certain criteria?
AI: I recommend using Keras and employing its pre-trained models. Because of the low number of samples in your dataset, you should use transfer learning. There is lots of research on this, for example here. Based on the data you have, you should choose a model that is appropriate for your task and has been trained on tasks similar to yours, and then use it. I believe ResNet and GoogLeNet-style (Inception) models have already been trained on the ImageNet dataset and are available in Keras. You have to freeze the weights of the convolution and dense layers and replace the softmax layer with your own. In this kind of learning, the pre-trained model already has the ability to extract features; you just let it learn how to classify your data.
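A minimal Keras transfer-learning sketch along these lines (ResNet50 with ImageNet weights as a frozen feature extractor; n_classes and the training arrays are placeholders for your own data):

from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import Dense

n_classes = 5                                        # placeholder
base = ResNet50(weights="imagenet", include_top=False, pooling="avg")
for layer in base.layers:
    layer.trainable = False                          # freeze the pre-trained convolutional weights

outputs = Dense(n_classes, activation="softmax")(base.output)   # your own softmax layer
model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...) with heavy data augmentation, given only ~100 images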
H: What is LSTM, BiLSTM and when to use them?
I am very new to Deep learning and I am particularly interested in knowing what are LSTM and BiLSTM and when to use them (major application areas). Why are LSTM and BILSTM more popular than RNN?
Can we use these deep learning architectures in unsupervised problems?
AI: RNN architectures like LSTM and BiLSTM are used when the learning problem is sequential, e.g. you have a video and you want to know what it is all about, or you want an agent to read a line of a document for you which is an image of text and not in machine-readable text format. I highly encourage you to take a look here.
LSTMs and their bidirectional variants are popular because they have tried to learn how and when to forget, and when not to, using gates in their architecture. In previous RNN architectures, vanishing gradients were a big problem and prevented those nets from learning much.
Using bidirectional LSTMs, you feed the learning algorithm with the original data once from beginning to end and once from end to beginning. There are debates here, but it usually learns faster than the one-directional approach, although it depends on the task.
Yes, you can use them in unsupervised learning too depending on your task. take a look at here and here. |
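A minimal Keras sketch of a BiLSTM text classifier (vocabulary size, sequence length and the number of classes are placeholders):

from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense

vocab_size, max_len, n_classes = 20000, 100, 6       # placeholders
model = Sequential()
model.add(Embedding(vocab_size, 128, input_length=max_len))
model.add(Bidirectional(LSTM(64)))                   # reads the sequence forwards and backwards
model.add(Dense(n_classes, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])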
H: Using ML to create unique descriptors?
I have a problem that doesn't seem to fall into a common machine learning category, and I was wondering if this still could potentially be solved with ML.
Problem: I have two signals recorded from two sensors, and would like to determine whether they are correlated (i.e. record the same physical event) or not.
The catch: I don't have access to the full signal time series of both sensors, but only one at a time - I can only exchange a small descriptor on the order of 32 bits to see if the signals match or not.
Our current approach is to calculate a bunch of numerical signal features such as mean, derivative, zero-crossings, FFT etc. and see which ones provide the best correlation - but that seems to be a lot of guesswork and doesn't work very well in any case.
So now I had the following idea:
Start with a neural network which takes a fixed window out of the signal (+ possibly the FFT of that window) as an input, and produces a 32-bit output
Pick two random correlated samples out of the pool of examples, and run the network twice, once with each sample (and its FFT)
Take the difference between the two output values as error measure and perform backpropagation as usual
Repeat from 2. until the difference for all examples is below a threshold
Here are my questions:
Does this approach seem feasible at all?
As someone relatively new to machine learning, how would I implement this?
I've had a look at Keras - would this be a suitable starting point?
Thanks in advance, and best regards, Florian
Addendum: I've found this somewhat related post (Is it possible using tensorflow to create a neural network that maps a certain input to a certain output?), but I don't think that this is the same problem, as I don't actually care what the output looks like, just that it is as unique as possible for each matching pair of samples.
AI: If I understood your question correctly, you want a function that takes a signal (a fixed window) and outputs a 32-bit representation in such a way that the correlation between it and any other signal is preserved. Mathematically speaking, given signals $s_1$ and $s_2$ in $S$ and a correlation function $corr(s_1, s_2)$, you want some function $f : S \rightarrow B$ (where $B$ is the space of 32-bit binary numbers) together with a correlation function on the codes such that $corr_f(b_1, b_2) \approx corr(s_1, s_2)$.
If that is what you want, you should look at hashing techniques, in particular learning to hash. Essentially, in hashing what you do is represent your input by binary numbers in such a way that the Hamming distance between the binary codes preserves some target similarity (distance) function, such as the correlation. In particular, for cross-correlation (inner product) there are methods based on random projections.
So once you have learned (designed) your hashing function $f$, what I would do is:
b1 = f(s1)
send b1
receive b1
b2 = f(s2)
return h(b2, b1) # this value is going to tell you if the signals are correlated |
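A minimal NumPy sketch of the random-projection idea (both sensors must share the same projection matrix, e.g. by fixing the same random seed; the Hamming similarity of the 32-bit codes then roughly tracks the correlation of the windows):

import numpy as np

WINDOW = 256                      # samples per fixed window
rng = np.random.RandomState(42)   # both devices must use the same seed / projection
R = rng.randn(32, WINDOW)         # 32 random hyperplanes

def signal_hash(window):
    w = (window - window.mean()) / (window.std() + 1e-9)   # normalize the window
    return (R @ w > 0).astype(np.uint8)                    # 32-bit sign hash

def hash_similarity(b1, b2):
    return np.mean(b1 == b2)      # fraction of matching bits; ~0.5 means uncorrelated

# b = signal_hash(my_window); exchange the 32 bits and compare with hash_similarity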
H: AttributeError: 'numpy.ndarray' object has no attribute 'predict'
I have trained and saved a model :
import numpy as np
# load the dataset
dataset = np.loadtxt("modiftrain.csv", delimiter=";")
# split into input (X) and output (Y) variables
X_train = dataset[:,0:5]
Y_train = dataset[:,5]
from sklearn.naive_bayes import GaussianNB
# create Gaussian Naive Bayes model object and train it with the data
nb_model = GaussianNB()
nb_model.fit(X_train, Y_train.ravel())
# predict values using the training data
nb_predict_train = nb_model.predict(X_train)
# import the performance metrics library
from sklearn import metrics
# Accuracy
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(Y_train, nb_predict_train)))
print()
# import the lib to load / Save the model
from sklearn.externals import joblib
# Save the model
joblib.dump(nb_predict_train, "trained-model.pkl")
Then, I'm loading the model and trying to make predictions on a new dataset:
# import the lib to load / Save the model
from sklearn.externals import joblib
import numpy as np
# Load the model
nb_predict_train = joblib.load("trained-model.pkl")
# load the test dataset
df_predict = np.loadtxt("modiftest.csv", delimiter=";")
X_train = df_predict
nb_predict_train.predict(X_train)
print(X_train)
Here comes the error :
File "predict01.py", line 14, in <module>
nb_predict_train.predict(X_train)
AttributeError: 'numpy.ndarray' object has no attribute 'predict'
AI: You don't want to pickle the predictions but rather the fit.
Change joblib.dump(nb_predict_train, "trained-model.pkl") to joblib.dump(nb_model, "trained-model.pkl") |
H: Reinforcement Learning - What's the formula for the value function
I'm trying to implement a value-iteration algorithm to solve a grid-world problem (I'm new to the field). The usual formula that I encounter about the value function V(s) is:
$$V(s) = R(s) + max_{a \in A} \sum_{s' \in S} T(s, a, s') V(s')$$
where $S$ is the set of states, $A$ the set of actions, $T$ the transition model
$$T(s, a, s') = P(s_{t+1} = s' | s_t = s, a_t = a)$$
and $R$ the reward function.
Since I'm working on a model-based problem, $T$ and $R$ should be known; the problem is that I don't know how to define (or compute knowing the details of the problem) $R$. If I'm in a state $s$ and take an action $a$ that would make the agent hit a boundary of the grid world, the new state $s'$ will be $s$, and the reward, as defined by the problem, is -5.
On the other hand, if I had arrived in state $s$ through another state, the reward will be -1. So basically, $R(s)$ depends on the previous state and the previous action. How do I represent that in the $V(s)$ formula?
AI: The formula you have quoted is a bit unwieldy, precisely because the R function as defined needs to "look ahead" to all possible outcomes and their probabilities, but how to do that is not included explicitly.
There are actually a few variants of the Bellman equation that express more or less detail. A good place to start for a truly generic version is Sutton & Barto (2nd Edition):
$$v^*(s) = \max_{a \in \mathcal{A}}\left[\sum_{r,s'}p(r,s'|s,a)\left(r + v^*(s')\right)\right]$$
Where $\sum_{r,s'}$ is over all possible reward and next state pairs.
The above equation changes your transition function that only handles next state, to a similar function that handles successor state and reward:
$$p(r,s'|s,a) = Pr\{ R_{t+1} = r, S_{t+1} = s' | S_{t} = s, A_{t} = a \}$$
Usually this does not increase the number of items to sum over, or add much complexity, because the reward is most often associated with the transition.
The benefit is that this approach removes the need for a reward function that works with expected reward, and just works with specific rewards. Other variants are possible too, such as an expected reward function based on $(s,a)$ or $(s,a,s')$ - the difference is just a little bit of juggling with the expression so that it remains effectively the same given the subtle differences in definition for $r$. |
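A minimal value-iteration sketch written against that (s, a) form, so a "bump into the wall" transition can carry its own -5 reward (the transitions dictionary and gamma are placeholders you would fill in from your grid world):

def value_iteration(states, actions, transitions, gamma=1.0, tol=1e-6):
    # transitions[(s, a)] = list of (prob, next_state, reward) tuples, so the reward can
    # depend on the specific transition (e.g. -5 when the move hits a boundary, -1 otherwise)
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[(s, a)])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V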
H: About training and cross validation on a time series problem
I am new to machine learning.
I'm having a task of predicting whether a user will churn in March, given February feature data and the churn result. However, March data is leaked and now I'm assigned to predict for April.
My strategy is to train the model with February data and perform cross validation with March data. Then I will try to predict April data using model trained with February data.
Here are my questions:
Is my strategy good?
Or should I append both February and March data to predict April data?
AI: You might want to check out my answer to a related question here.
If you want to cross-validate time series data, I would suggest creating some sort of sliding window on which you train your model and predict the next month following that window.
For example, you could train your model on February data and then predict the data of the first week of March. Then slide your window to include data from the second week of February through the first week of March and then predict the data of the second week in March. The length of the windows that you use to train and test on are parameters that you will need to play around with yourself to see what gets you the best results.
Once you feel like you are getting good results from your cross-validation, I would try training a new model on all of the data you have and see how it performs on the new data coming in. Depending on the data and model you are using, this may provide better results having trained on more data. Of course, this depends on the problem you are trying to solve, the nature of your data and your choice of model. |
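If your data are fine-grained enough, a minimal sketch of this rolling evaluation with scikit-learn (X and y are assumed to be NumPy arrays ordered by time; the choice of logistic regression is only illustrative):

from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

tscv = TimeSeriesSplit(n_splits=5)           # each fold trains on the past, tests on the future
for train_idx, test_idx in tscv.split(X):
    model = LogisticRegression()
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict_proba(X[test_idx])[:, 1]
    print(roc_auc_score(y[test_idx], preds))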
H: Classify future performance of customer
I have a dataset with monthly revenue per customer. I want to build a model that can try to predict if the customer will exceed $10,000 3 months out (yes/no).
While this seems like a traditional ML problem, I have an important question:
Should I build my dataset with one row per customer id and let the label be the revenue 3 months out?
Or should I instead have one row per month per customer and let the label be the revenue 3 months out?
Thanks
AI: I'm not sure it will matter very much whether you have one row per user or if you have one row for per month per user. The important part is that the data you have is accurate for that user for a particular month. You might construct your training data like this:
cust. id | time on website | profit over 10,000?
---------|-----------------|--------------------
    3    |       30        |          0
    3    |       80        |          1
    5    |      100        |          1
    7    |        5        |          0
The important thing to notice is that even though customer 3 is in the dataset more than once, he/she has different values for their data features on which to predict, and different from how they were represented in the previous month. This is assuming that you are aggregating the data by month per customer.
This blog predicts customer churn, but you might be able to use the general strategy for your problem. |
H: What exactly is a class in kNN search?
I am trying to create a kNN search from scratch for a project. I think I have the concept of it and how it works, but I can't understand what exactly is a class.
I have a matrix of $X \times Y$ where $X$ is the number of elements and $Y$ the dimension of their vectors. So if the values are completely random, is every element/row a different class, or do I specify the classes some other way?
AI: Given the context of your question I assume your are referring to the k-NN classifier. The idea of the classification method is you have a set of feature vectors $f_i$, in general, $f_i \in \mathbb{R}^d$ where $d$ is the dimension of the vectors. Additionally you have a class $c_i$ for each $f_i$. To classify an unseen feature vector $f_j$ you select the $k$ nearest neighbors (under some distance, usually the euclidean distance) and the most common class among the neighbors is your prediction.
So, summarizing, the input of the algorithm are the feature vectors $f_i$ and the classes $c_i$. For more on the algorithm see this wikipedia link.
The image (from wikipedia) below illustrates a typical example; in this example the classes are red triangle and blue square, and the point to predict is the green circle. Each feature vector is a two-dimensional point, i.e. $f_i, f_j \in \mathbb{R}^2$. In this situation the predicted class will be red triangle.
For an explanation using both Python and R see this. |
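A minimal scikit-learn sketch (X is the element-by-dimension matrix you describe, y is the vector of class labels you have to supply, and X_new holds the points to classify; without labels there is no classification, only clustering):

from sklearn.neighbors import KNeighborsClassifier

# X: shape (n_elements, n_dimensions), y: one class label per row of X
knn = KNeighborsClassifier(n_neighbors=5)    # k = 5, Euclidean distance by default
knn.fit(X, y)
predicted_classes = knn.predict(X_new)       # majority vote among the 5 nearest rows of X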
H: Predicting Missing Features
I have "millions" of items each having N binary features. When a feature is "0" it could be that the information is simply missing. So, given the data with the currently observed 1's, I would like to have a probability of the "0" features being "1".
I am thinking this can be a Neural network with all features as input and same as output. But then I don't know how the training would work. I don't have ground truth.
I would like some help expressing my problem and hopefully not reinvent the wheel. Is this is a classical problem in ML, and what approach can be applied?
AI: A simple approach could be the following: suppose $i \in \{0,1\}^d$ is the vector you want to predict which of the $0$ entries could be $1$ and $j \in J$ the rest of the feature vectors. Take the $k$ nearest neighbors, under some suitable distance (Jaccard, Hamming, Manhattan distance). For each $0$ entry the probabilities could be the percentage of the $k$ nearest neighbors that have $1$ in the corresponding entry.
This problem has been extensively studied in the collaborative filtering community, the best known example being the Netflix Prize. This blog post provides a nice explanation of this approach for binary data.
Another, more involved, approach is matrix completion, in particular check this reference. If you are into deep learning check this. |
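A minimal NumPy sketch of the neighbor-based idea (X is the item-by-feature 0/1 matrix; Jaccard similarity on the observed 1's is one reasonable, but assumed, choice):

import numpy as np

def zero_to_one_probabilities(X, item_index, k=20):
    x = X[item_index]
    intersection = (X & x).sum(axis=1)        # Jaccard similarity to every other item
    union = (X | x).sum(axis=1)
    sim = intersection / np.maximum(union, 1)
    sim[item_index] = -1                      # exclude the item itself
    neighbors = np.argsort(sim)[-k:]          # indices of the k most similar items
    probs = X[neighbors].mean(axis=0)         # fraction of neighbors with a 1 per feature
    return np.where(x == 1, 1.0, probs)       # observed 1's stay at probability 1

# X must be a NumPy array of 0/1 integers (or booleans) for & and | to work as intended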
H: Text annotating process, quality vs quantity?
I have a question regarding annotating text data for classification.
Assume we have ten volunteers who are about to annotate a large number of texts into label A or B. They probably won't have time to go through all the text samples, but at least a significant portion of them.
Should we focus on generating new samples for each annotator? (They never see the same text samples as any other annotator) (quantity approach).
Or should all annotators see the same samples and the annotator agreement is taken to account? (quality approach).
Thoughts:
1. will generate more unique samples than 2 (more training samples for a classifier), hoping that in the feature extraction part the useful features will appear by themselves.
2. will generate fewer unique samples, but with the annotator agreement taken into account (fewer training samples for a classifier, but with higher quality).
AI: Both methods go against the nature of machine/statistical learning. Lack of data will not let you generalize well in either case, though the effect will be different. Generally speaking:
1. You will be able to classify more test samples, but will sometimes get them wrong due to errors in the data.
2. You will have precise classification of some test samples, but will have no idea how to classify the rest of them.
So it's up to your classification score/quality estimation function which case is better for you.
H: How can autoencoders be used for clustering?
Suppose I have a set of time-domain signals with absolutely no labels. I want to cluster them in 2 or 3 classes. Autoencoders are unsupervised networks that learn to compress the inputs. So given an input $x^{(i)}$, weights $W_1$ and $W_2$, biases $b_1$ and $b_2$, and output $\hat{x}^{(i)}$, we can find the following relationships:
$$z^{(i)} =W_1x^{(i)}+b_1$$
$$\hat{x}^{(i)} =W_2z^{(i)}+b_2$$
So $z^{(i)}$ would be a compressed form of $x^{(i)}$, and $\hat{x}^{(i)}$ the reconstruction of the latter. So far so good.
What I don't understand is how this could be used for clustering (if there is any way to do it at all). For example, in the first figure of this paper, there is a block diagram I'm not sure I understand. It uses the $z^{(i)}$ as the inputs to the feed-forward network, but there is no mention to how that network is trained. I don't know if there is something I'm ignoring or if the paper is incomplete. Also, this tutorial at the end shows the weights learned by the autoencoder, and they seem to be kernels a CNN would learn to classify images. So... I guess the autoencoder's weights can be used somehow in a feed-forward network for classification, but I'm not sure how.
My doubts are:
If $x^{(i)}$ is a time-domain signal of length $N$ (i.e. $x^{(i)}\in\mathbb{R}^{1\times N}$), can $z^{(i)}$ only be a vector as well? In other words, would it make sense for $z^{(i)}$ to be a matrix with one of its dimensions greater than $1$? I believe it would not, but I just want to check.
Which of these quantities would be the input to a classifier? For example, if I want to use a classic MLP that has as many output units as classes I want to classify the signals in, what should I put at the input of this fully-connected network ($z^{(i)}$,$\hat{x}^{(i)}$, any other thing)?
How can I use the learned weights and biases in this MLP? Remember that we assumed that absolutely no labels are available, so it is impossible to train the network. I think the learned $W_i$ and $b_i$ should be useful somehow in the fully-connected network, but I don't see how to use them.
Observation: note that I used an MLP as an example because it is the most basic architecture, but the question applies to any other neural network that could be used to classify time-domain signals.
AI: Clustering is difficult to do in high dimensions because the distance between most pairs of points is similar. Using an autoencoder lets you re-represent high dimensional points in a lower-dimensional space. It doesn't do clustering per se - but it is a useful preprocessing step for a secondary clustering step. You would map each input vector $x_i$ to a vector $z_i$ (not a matrix...) with a smaller dimensionality, say 2 or 3. You'd then use some other clustering algorithm on all the $z_i$ values.
Maybe someone else can chime in on using auto-encoders for time series, because I have never done that. I would suspect that you would want one of the layers to be a 1D convolutional layer, but I am not sure.
Some people use autoencoders as a data pre-processing step for classification too. In this case, you would first use an autoencoder to calculate the $x$-to-$z$ mapping, and then throw away the $z$-to-$\hat{x}$ part and use the $x$-to-$z$ mapping as the first layer in the MLP. |
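A minimal Keras sketch of this pipeline (a plain dense autoencoder for fixed-length signals; X is assumed to be an array of signals of length N, and for raw time-domain data you might prefer 1D convolutional or recurrent layers as discussed above):

from keras.models import Model
from keras.layers import Input, Dense
from sklearn.cluster import KMeans

N = 256                                     # length of each time-domain signal
inputs = Input(shape=(N,))
hidden = Dense(32, activation="relu")(inputs)
code = Dense(3, activation="linear")(hidden)          # bottleneck: the compressed z
hidden_dec = Dense(32, activation="relu")(code)
outputs = Dense(N, activation="linear")(hidden_dec)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=50, batch_size=32)       # unsupervised: reconstruct the input

z = encoder.predict(X)                                # low-dimensional code for every signal
labels = KMeans(n_clusters=3).fit_predict(z)          # cluster in the compressed space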
H: What is the difference between "expected return" and "expected reward" in the context of RL?
The value of a state $s$ under a certain policy $\pi$, $V^\pi(s)$, is defined as the "expected return" starting from state $s$. More precisely, it is defined as
$$
V^\pi(s) = \mathbb{E}\left(R_t \mid s_t = s \right)
$$
where $R_t$ can be defined as
$$
\sum_{k=0}^\infty \gamma^k r_{t+k+1}
$$
which is a sum of "discounted" rewards after time $t$, i.e. starting from time $t+1$.
$V^\pi(s)$ can also be interpreted, even more precisely, as the expected cumulative future discounted reward. This denotation contains all the words which refer to specific parts of the formula above, where
"Expected" refers to the "expected value"
"Cumulative" refers to the summation
"Future" refers to the fact that it's an expected value of a future quantity with respect to the present quantity, i.e. $s_t = s$.
"Discounted" refers to the "gamma" factor, which is a way to adjust the importance of how much we value rewards at future time steps, i.e. starting from $t + 1$.
"Reward" refers to the main quantity of interested, i.e. the reward received from the environment.
Meanwhile, I've heard the term "expected reward", but I am not sure if it refers to the same concept or not, that is if "expected reward" or "expected return" are the same thing or not.
I know there's also the concept of "expected value of the next reward", often denoted as $\mathcal{R}^a_{ss'}$, and defined as
$$
\mathcal{R}^a_{ss'} = \mathbb{E}\left(r_{t+1} \mid s_t = s, a_t = a, s_{t+1} = s' \right)
$$
which, again, is the value we expect for the reward at the next time step, that is at time step $t+1$, given that action $a$ from state $s$ brings us to state $s'$.
Is the "expected reward" actually $\mathcal{R}^a_{ss'}$ instead of $V^\pi(s)$?
AI: Is the "expected reward" actually $\mathcal{R}^a_{ss'}$ instead of $V^\pi(s)$?
In short, yes.
Although there is some context associated - $\mathcal{R}^a_{ss'}$ is in the context of specific action and state transition. You will also find $\mathcal{R}^a_{s}$ used for expected reward given only current state and action (which works fine, but moves around some terms in the Bellman equations).
"Return" may also be called "Utility".
RL suffers a bit from naming differences, however the meaning of reward is not one of them.
Notation differences also abound, and in Sutton & Barto Reinforcement Learning: An Introduction (2nd edition), you will find:
$R_t$ is a placeholder for reward received at time $t$, a random variable.
$G_t$ is a placeholder for return received after time $t$, and you can express the value equation as $v_{\pi}(s) = \mathbb{E}[G_t|S_t=s] = \mathbb{E}[\sum_{k=0}^{\infty}\gamma^kR_{t+k+1}|S_t=s]$
$r$ is a specific reward value
You won't see "expected reward" used directly in an equation from the book, as the notation in the revised book relies on summing over distribution of reward values.
In some RL contexts, such as control in continuous problems with function approximation, it is more convenient to work with maximising average reward, than maximising expected return. But this is not quite the same as "expected reward", due to differences in context (average reward includes averaging over the expected state distribution when following the policy) |
H: Can dropout and batch normalization be applied to convolution layers
Can dropout be applied to convolution layers or just dense layers. If so, should it be used after pooling or before pooling and after applying activation?
Also I want to know whether batch normalization can be used in convolution layers or not.
I've seen here but I couldn't find valuable answers because of lacking reference.
AI: In short, yes.
Batch Normalization: A Batch Normalization layer can be used in between two convolution layers, or between two dense layers, or even between a convolution and a dense layer. The important question is: does it help? Well, it is recommended to use a BN layer as it generally shows improvement, but the amount of improvement you get is more problem dependent.
Dropout: Convolution layers, in general, are not prone to overfitting, but that doesn't mean you shouldn't use dropout. You can, but again this is problem dependent. For example, I was trying to build a network where I used dropout in between conv blocks and my model got better with it. It is better if you apply dropout after the pooling layer.
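For illustration, here is a minimal Keras sketch (the layer sizes and dropout rates are arbitrary assumptions, not prescriptions) showing the placements described above: a BN layer between two conv layers and dropout after the pooling layer.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.BatchNormalization(),            # BN between two conv layers
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                   # dropout after the pooling layer
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),                    # dropout between dense layers
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])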
H: Idf values of English words
I'm working on keyword/phrase extraction from a single document. I started by doing term frequency analysis, but this returns words like "new" which aren't very helpful. So I want to penalize the common words and phrases, for which we normally use idf (inverse document frequency). But since it's for a single document, I'm not sure how to do idf analysis.
Is it possible to use tf-idf method with pre-calculated idf values for (all?) words?
And are such values available somewhere?
AI: The list of the 20,000 most common words in English is available here.
By using Zipf's law, we can obtain the probability of these words as below.
Zipf's Law
In the English language, the probability of encountering the $r$th most common word is given roughly by $P(r) = 0.1/r$ for $r$ up to 1000 or so. The law breaks down for less frequent words, since the harmonic series diverges. Pierce's (1980, p. 87) statement that $\sum P(r) > 1$ for $r = 8727$ is incorrect. Goetz states the law as follows: The frequency of a word is inversely proportional to its statistical rank $r$ such that
$$P(r) = \frac{1}{r\ln(1.78R)},$$
where $R$ is the number of different words.
These probability values can be used as a substitute for idf. |
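For instance, here is a small sketch of how such a substitute idf weight could be computed from a word's frequency rank (the example ranks below are hypothetical, not taken from the list above):

import math

def zipf_idf(rank, num_words=20000):
    # Zipf approximation P(r) = 1 / (r * ln(1.78 * R)); rarer words get larger weights
    p = 1.0 / (rank * math.log(1.78 * num_words))
    return -math.log(p)   # analogous to idf = -log(frequency)

word_ranks = {'the': 1, 'blood': 1500, 'glucose': 9000}   # hypothetical ranks
weights = {w: zipf_idf(r) for w, r in word_ranks.items()}
print(weights)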
H: What Would be Easier: Building a Deep Net From Scratch or Using an Existing Architecture?
In practice with CNNs, what would be easier: building a CNN from scratch or using an existing architecture with some updates?
AI: It depends on your task and the amount of data you have. If you have a lot of data but cannot find similar tasks with an appropriate architecture, you should stack convolution and dense layers yourself. But if you have an appropriate amount of data and good architectures already exist, then you have to decide what you want given your situation. Suppose that you have a recognition task; there are many architectures that have been applied to the ImageNet dataset. You can use transfer learning, but there is a point here. Suppose that you want to fine tune GoogleNet. This is a very large network and is capable of recognizing about a thousand distinct classes. If you have a recognition task with 5 classes and you have an agent that should be online, it is not logical to have such a big network. You may get similar performance by stacking a few layers and get better time complexity. If you don't have much data, freezing the layers and applying transfer learning to the last layer may be a typical solution.
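For example, here is a rough Keras sketch of the "freeze the layers and retrain only the last layer" option (MobileNetV2 is used purely as an example of a pre-trained ImageNet network, and the 5-class head matches the scenario above):

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models

base = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pre-trained layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation='softmax'),   # new head for the 5-class task
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])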
H: Use 1 or 2 norm for Voronoi vector quantization?
I have a script from a lecture. Basically it says that based on the Voronoi partitioning we identify the corresponding (nearest) class $w_k$ to a vector $x$ where $\left| {{w_k} - x} \right| = \mathop {\min }\limits_i \left( {\left| {{w_i} - x} \right|} \right)$ given the classes $w$.
The script uses the absolute value notation. This does not make much sense as we are using vectors. Which vector norms can or should be used? The 1 or 2 norm?
The 1 norm is faster to compute but maybe there are drawbacks I cannot currently think of.
AI: In general the quality of the quantization is measured using the mean squared error (MSE) between the input vector $x$ and the reproduction value $q(x)$ ($w_k$ in the notation you used in the question). For the MSE the best partition is the one defined by Voronoi regions under the Euclidean distance ($l^2$ norm), i.e.:
$$V_j = \{ x \in R^d : \|x - w_j \|^2 \leq \|x - w_i \|^2\ for\ all\ i \}$$
You can find the proof here. Also here and here the Euclidean distance is used.
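As a small illustration, here is a NumPy sketch of assigning a vector to its nearest reproduction value under the squared Euclidean distance (the codebook values are made up):

import numpy as np

def quantize(x, codebook):
    # squared l2 distance from x to every w_i, then pick the closest one
    d = np.sum((codebook - x) ** 2, axis=1)
    return np.argmin(d)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])   # toy reproduction values w_i
x = np.array([0.9, 0.8])
print(quantize(x, codebook))   # -> 1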
H: Updating the weights of the filters in a CNN
I am currently trying to understand the architecture of a CNN. I understand the convolution, the ReLU layer, pooling layer, and fully connected layer. However, I am still confused about the weights.
In a normal neural network, each neuron has its own weight. In the fully connected layer, each neuron would also have its own weight. But what I don't know is whether each filter has its own weight. Do I just have to update the weights in the fully connected layer during back propagation? Or do the filters all have a separate weight that I need to update?
AI: In a normal neural network, each neuron has its own weight.
This is not correct. Every connection between neurons has its own weight. In a fully connected network each neuron will be associated with many different weights. If there are n0 inputs (i.e. n0 neurons in the previous layer) to a layer with n1 neurons in a fully connected network, that layer will have n0*n1 weights, not counting any bias term.
You should be able to see this clearly in this diagram of a fully connected network from CS231n. Every edge you see represents a different trainable weight:
Convolutional layers are different in that they have a fixed number of weights governed by the choice of filter size and number of filters, but independent of the input size.
Each filter has a separate weight in each position of its shape. So if you use two 3x3x3 filters then you will have 54 weights, again not counting bias. This is illustrated in a second diagram from CS231n:
The filter weights absolutely must be updated in backpropagation, since this is how they learn to recognize features of the input. If you read the section titled "Visualizing Neural Networks" here you will see how layers of a CNN learn more and more complex features of the input image as you go deeper in the network. These are all learned by adjusting the filter weights through backpropagation.
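If you want to verify those counts yourself, a quick Keras sketch (the input shape is an arbitrary assumption) reproduces them in the model summary:

from tensorflow.keras import layers, models

m = models.Sequential([
    layers.Conv2D(2, (3, 3), input_shape=(32, 32, 3)),   # two 3x3x3 filters: 54 weights + 2 biases = 56 params
    layers.Flatten(),
    layers.Dense(10),                                    # fully connected: n0*n1 weights + 10 biases
])
m.summary()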
H: Grouping of similar looking text
I have a data frame which has two columns, "Title" and "Description". The title column has a bunch of titles related to clinical lab tests. Unfortunately, most of the titles are a repeat of the same test but, due to minor changes in the titles, titles are shown as unique.
values = [('Complete blood picture', 'AB'), ('Complete BLOOD test', 'AB'), ('blood glucose', 'AB'), ('COMplete blood Profile', 'AB')]
labels = ['title', 'description']
import pandas as pd
labtest = pd.DataFrame.from_records(values, columns = labels) # Create data frame
This is what the data frame looks like. [The actual dataset has many such titles; this is only for the purpose of this question]
Title Description
Complete blood test AB
COMPLETE Blood test AB
Blood glucose AB
Complete blood picture AB
And this is what I would like the data frame to look like:
Title Description
Blood test AB
Blood test AB
Blood test AB
Blood test AB
I would like to search for the word "blood" in each title and if its true, then change the whole title with "Blood test". Is there a way to do this?
AI: One possible solution is the following:
import re
import pandas as pd
pattern = re.compile('blood', re.IGNORECASE)
def change(text):
if pattern.findall(text):
return 'Blood test'
else:
return text
values = [('Complete blood picture', 'AB'), ('Complete BLOOD test', 'AB'), ('blood glucose', 'AB'), ('COMplete blood Profile', 'AB')]
labels = ['title', 'description']
# Create data frame
labtest = pd.DataFrame.from_records(values, columns=labels)
labtest['title'] = labtest['title'].apply(change)
print labtest
The output was:
title description
0 Blood test AB
1 Blood test AB
2 Blood test AB
3 Blood test AB
The first line imports the regex (regular expression) module of Python. The line:
pattern = re.compile('blood', re.IGNORECASE)
creates a regex that finds the word blood, ignoring case. The function change replaces the input text with 'Blood test' in case the string 'blood' is found. Finally, the apply method from pandas DataFrame, as the name suggests, 'applies' the function change to every value in the 'title' column, transforming it.
More info on regular expressions using Python and pandas apply method can be found here and here. If you want to know more about text processing in Python I would recommend you take a look at the pointers in this question. |
H: A math question about solving the Lagrangian of Support Vector Machine
$$\mathcal{L}(w,b,\xi,\alpha,r) = \frac12w^Tw+C\sum_{i=1}^m \xi_i-\sum_{i=1}^m \alpha_i[y^{(i)}(x^Tw+b)-1+\xi_i]-\sum_{i=1}^mr_i\xi_i$$
Here, the $\alpha_i$'s and $r_i$'s are our Lagrange multipliers (constrained to be $\ge 0$)
To maximize the Lagrangian of soft margin SVM (see the formula above), we set the derivatives with respect to $w$, $\xi$ and $b$ to $0$ respectively.
But what if we set the derivatives w.r.t $r$ to zero first? Wouldn't that result in $\xi$ being all $0$s? Meaning that the optimal solution is reached only when all the relaxing terms $\xi$ are $0$? But that doesn't seem right, does it?
AI: Since you have an inequality constraint you need to meet the necessary Kuhn-Tucker Conditions which for non-negativity constraints are:
$\xi \geq 0$
$-r \cdot \xi = 0$
It is not enough to set the derivative w.r.t. $r$ to zero, as is the case for equality constraints and Lagrange multipliers.
Therefore, there are two cases to distinguish:
Either the condition is tight ($\xi=0$) and you can have $r \neq 0$
Or $\xi > 0$ and $r = 0$. |
H: Creating dummy variables to match fitted model at inference
I have built a machine learning classifier using Sklearn and pandas as my main tools. Now, one of the input features to the model is country (to letter country code such as US). I have fit a model using the pd.get_dummies function.
Now I want to run inference on the data but a few of the countries haven't appeared in my dataset over the past months, so the pd.get_dummies function is misaligned with the fitted model.
How can this be dealt with?
AI: I think the answer to this question will solve your problem.
import pandas as pd
train = pd.DataFrame(data = [['a', 123, 'ab'], ['b', 234, 'bc']],
columns=['col1', 'col2', 'col3'])
test = pd.DataFrame(data = [['c', 345, 'ab'], ['b', 456, 'ab']],
columns=['col1', 'col2', 'col3'])
train_objs_num = len(train)
dataset = pd.concat(objs=[train, test], axis=0)
dataset_preprocessed = pd.get_dummies(dataset)
train_preprocessed = dataset_preprocessed[:train_objs_num]
test_preprocessed = dataset_preprocessed[train_objs_num:]
If that doesn't help, a simple solution would be to add columns of all zeros for each country that the new dataset is missing. This will give you the correct shape of data that your classifier is expecting.
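As a sketch of that second suggestion, you can save the training columns and reindex the new dummies against them (reusing the variable names from the snippet above):

train_columns = train_preprocessed.columns               # columns seen at fit time
new_batch = pd.get_dummies(test)                         # dummies for new / inference data
new_batch_aligned = new_batch.reindex(columns=train_columns, fill_value=0)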
H: Plot of two different matrices in R
I'm trying to plot two different size matrices using one graph (in R), but can't manage to do so.
I tried using matplot and the regular plot, but it didn't work.
Does anyone know how to plot it?
AI: (Assuming this is a similar question to what was posted on Cross Validated, but was closed): You can merge the data from both matrices while adding a variable that specifies the origin (group), then plot them together in ggplot2:
library( ggplot2 )
N1 = 19
N2 = 17
M = 10
m1 = matrix( rnorm(N1*M,mean=0,sd=1), N1, M)
m2 = matrix( rnorm(N2*M,mean=0,sd=1), N2, M)
y = c( as.vector( t( m1 ) ), as.vector( t( m2 ) ) )
x = c( rep(1:10, each = N1 ), rep(1:10, each = N2 ) )
group = c( rep( '1', N1 * M ), rep( '2', N2 * M ) )
df = data.frame( state = x, value = y, group = group )
ggplot( df, aes( x = state, y = value, colour = group ) ) +
geom_point() +
ggtitle( "State values in group 1 and 2" ) +
labs( x = "State", y = "Value" ) +
scale_x_continuous( breaks = seq(10) )
This is the result: |
H: How to correctly infer vectors in Gensim doc2vec?
I would like to know which is the correct procedure for inferring vectors in Gensim doc2vec.
I have a dataframe df with a feature, called name, and composed of two subsets train and test.
df = train + test
My aim is to find the most similar name in train given a name in test.
For doing this I have to train the doc2vec model, and I have two possible choices:
train the model on the entire df and then infer the most similar name by model.infer_vector() on test.
train the model on train, letting out test, and then use model.infer_vector() on test.
I suppose that the correct procedure is the first one, but I am not sure.
Also, in doing so, there is the possibility that the most similar name given test will again be in test and not in train.
AI: I would use the first approach: given that both train and test are known, there is no need for generalization, i.e. you don't expect unseen vectors.
In order to avoid the problem you mentioned, you have to find the most similar vector to a vector in test considering only vectors in train. For example:
train = [v1, v2, v3]   # vectors of the train names
test = [v4, v5]        # vectors of the test names

most_similar = {}
for i, vector in enumerate(test):
    most_similar[i] = train[0]          # start with the first train vector
    for vector2 in train:
        if similarity(vector2, vector) > similarity(most_similar[i], vector):
            most_similar[i] = vector2   # keep the closest train vector so far
In the end, most_similar maps each test vector (by its index) to the most similar vector in train.
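For completeness, here is a rough gensim sketch of that idea (attribute names as in gensim 3.x; train_names and test_names are assumed to be lists of strings): train Doc2Vec on all names, then compare each test vector only against the train vectors.

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

all_names = list(train_names) + list(test_names)
docs = [TaggedDocument(words=n.split(), tags=[i]) for i, n in enumerate(all_names)]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# vectors of the train names only
train_vecs = [model.docvecs[i] for i in range(len(train_names))]

for j, name in enumerate(test_names):
    vec = model.docvecs[len(train_names) + j]
    best = max(range(len(train_vecs)), key=lambda i: cosine(train_vecs[i], vec))
    print(name, '->', train_names[best])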
H: Incorporate luck in statistical modelling
I was wondering if it is possible (and if yes, how is it done) to incorporate a luck component in statistical models.
So let’s assume I’d perform a regression on the goal difference between two teams in order to model the outcome of a sports match. How could I also model a random luck component that would make the underdog draw or win in certain rare cases?
And what if I took it a step further and tried to model the match outcome using neural networks? Is is possible to include luck in this case?
AI: Use a probabilistic model. For example, if the probability of one team beating the other is $\sigma(w \cdot (\text{team}_1 - \text{team}_2))$, where $\sigma(\cdot)$ is the logistic function, then there's always a probability that the weaker team will win.
If you wanted to use a neural network you'd use it to replace the logistic function.
Answer copied from comments. |
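A tiny sketch of what that looks like in code (the team strengths and the weight are made-up numbers):

import numpy as np

def win_probability(team1_strength, team2_strength, w=1.0):
    # logistic model: even the weaker team keeps a non-zero chance of winning
    return 1.0 / (1.0 + np.exp(-w * (team1_strength - team2_strength)))

p = win_probability(1.2, 2.0)      # underdog team1 against a stronger team2
outcome = np.random.rand() < p     # simulate one match: True means team1 wins
print(p, outcome)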
H: How to persist data scaler for predictions
I have a Support Vector Machine in Scikit-learn (Python) that gets trained once in a while when enough new data has accumulated (user help train the model by submitting new data).
I store the model in .pkl format for persistence. However, the SVM needs scaled data and I'm wondering what would be a good way to persist the scaler, since it was fitted on the train data that changes over time (as users submit more data).
Is there a common solution for this? Or is there a value that I can store with the model that works as some sort of seed to recreate the scaler?
EDIT (added implementation):
I have a database with a dataset. Users periodically add data points to this dataset. I also have an estimator database which keeps track of all the trained models and their .pkl file locations. When enough new data points are added, the system will start to train a new estimator. It extracts all the features and scales the data, then trains a model. After that it compares the existing best model with the newly trained one and, if the new one is better, it sets the new one to 'active'. There is only one active model at a time and that is the one that will be used for predictions on new data. (I hope this helps)
AI: So after looking some more I found this question: save scaler model in sklearn
which says that one can pickle the scaler as well, which works.
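For reference, here is a minimal sketch of persisting and reloading both objects together (the file names, X_train, y_train and X_new are placeholders):

import joblib
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

scaler = StandardScaler().fit(X_train)
model = SVC().fit(scaler.transform(X_train), y_train)

joblib.dump(scaler, 'scaler.pkl')   # persist the fitted scaler next to the model
joblib.dump(model, 'svm.pkl')

# later, at prediction time:
scaler = joblib.load('scaler.pkl')
model = joblib.load('svm.pkl')
predictions = model.predict(scaler.transform(X_new))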
H: How do I implement the sigmoid function in Octave?
so given that the sigmoid function is defined as $h_\theta(x) = g(\theta^T x)$, how can I implement this function in Octave given that g = zeros(size(z))?
AI: This will compute the sigmoid of a scalar, vector or matrix.
function g = sigmoid(z)
% SIGMOID Compute sigmoid function
% g = SIGMOID(z) computes the sigmoid of z.
% Compute the sigmoid of each value of z (z can be a matrix,
% vector or scalar).
SIGMOID = @(z) 1./(1 + exp(-z));
g = SIGMOID(z);
end |
H: Layman's comparison of RMSE
I don't have a maths / stats / data science background and need to evaluate which of the two evaluations below (numerical regression on Amazon Machine Learning) predicts more accurately. Both models use the same data set but look at different time frames, both on the independent and dependent variables.
How can I evaluate which one of the two models is more accurate? And is there a way to tell how accurate these two models are in general (e.g. 75%)?
Model 1
Model 2
AI: Typically you want a smaller RMSE, and without getting into detail it should be sufficient to just take the smaller one. However, I am concerned because you state that the models were run on the same dataset but at different timeframes. Since the RMSE scale depends on the dependent variable scale, it's entirely possible that these two timeframes are scaled differently. A somewhat contrived example would be energy consumption: I would expect a model trained on daytime consumption to have a higher RMSE than one trained between 1am and 3am. In that case, comparing the RMSE may be meaningless. You can try to normalize your data and RMSE to help with this, but I'm unsure if AWS provides this ability.
As for your second question, you really won't get a 75%-style accuracy number for regression. You can look at the deviations of the residuals or do cross-validation and see how well the model performs.
Again this may not be possible in AWS.
edit: I just realized that the histograms were residual plots. Do three things: increase the bin size, check to see if the residuals are centered around 0, and then check if there is skewness in the data. If the data is centered around 0 and symmetric, then you can say the model error is basically random and does not favor over- or under-predicting. If the data is not centered around 0 and there is skewness, then the errors can be systematic, and in that case consider adding more variables.
H: How to create an array from the list of arrays in python
I was trying to write a python code that can set some neural network channels or neurons to zero at the inference; and I wrote the code below. The code generates 10 different arrays for different percentage of the channels or neurons that are set to zero. My challenge is how to combine these arrays into a single array or a list so that they can be individually accessed, but all I was getting was a list containing arrays with zero elements. Please help me.
import numpy as np
def three_steps(n):
return step(n, steps=[1, 0.5, 0.25])
B = np.linspace(0.1, 1.0, 10)
print B
B = np.array(three_steps(100))
print "A = ", A
for i, value in enumerate(B):
A[i:int(value*100)] = 0
print A
list = [np.append(A, i) for k in range(i + 1)]
print list
AI: This post from stack overflow should give you what you want. The magic code boils down to the following.
flat_list = [item for sublist in l for item in sublist]
You can also loop through the list of lists and merge them.
lists = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
merged_list = []
for l in lists:
merged_list += l
print(merged_list)
[1, 2, 3, 4, 5, 6, 7, 8, 9] |
H: Advice and Ideas appreciated - Machine Learning one man project
I have a project where I am supposed to start from scratch and learn how machine Learning works. So far everything is working out better than expected but I feel as I am offered to many ways to choose from.
My Project:
I have data with 700 rows and 108 columns as my features and I get pretty decent results when using a RandomForestClassifier.
By now I was using train_test_split to split my data but I was reading a lot of articles where it was recommended to split data into 3 Sets (train, dev, test).
Since I don't have that much data, I thought of using cross-validation.
My Problem:
So I implemented it, but couldn't really find the difference between a CV and a train_test_split with shuffle.
Before doing so I thought I knew the difference between these two model-selection strategies, but now I am a bit confused.
My Knowledge And Questions:
1. train_test_split has the problem that the sets can be unbalanced, so if I am unlucky I train my model only with positive or negative examples.
--> can't that be solved by using stratify=True?
2. train_test_split doesn't split the sets the same way all the time so the results aren't comparable
--> setting random_state=0 solves the problem?
3. When does it make sense for me to use CV?
4. How to understand the following picture and what benefit does it provide:
What would be a good way to proceed?
thanks a lot for all the help and time in advance!
Cheers!
AI: train_test_split has the problem that the sets can be unbalanced, so if I am unlucky I train my model only with positive or negative examples.
--> can't that be solved by using stratify=True?
--> yes, that's what stratify=True is for. However you still only train on the data of your training set and test with the data from the test set
train_test_split doesn't split the sets the same way all the time so the results aren't comparable
--> setting random_state=0 solves the problem?
--> it does...keep in mind that random_state= will work fine as long as you keep the seed aka random-state the same
for 3. and 4. let me make sure you understand the difference between CV and TTS. TTS splits your data once, trains on the now "fixed" training set and tests on the "fixed" test set. However this introduces a kind of bias to your evaluation because you are not training or testing on all observations. By going for a CV you make sure that all observations are used for testing and training. This reduces the bias of setting a "fixed" training and test set.
Now for 3.:
When in doubt use CV. HOWEVER...sometimes this doesn't make sense. Think about time series data and how cv would work then and what the result would be. That being said...now think about how TTS with stratify or random would work and what those would mean. CV usually is a good way to go however you always have to think about if the evaluation method fits your problem. TTS has its merit for time series as well as for performance reasons using large datasets.
And 4.:
The picture visualizes TTS and CV. First you have CV which splits your training data so that each small data set is used as test data. The "Testing" set is then used to measure generalization of your model. This is called the holdout method
The diagram that says "Data Permitting" demonstrates TTS with an additional "Testing" set for measuring generalization. You train on "Training", validate the model on "Validation" and as with the holdout method you keep a "Testing" set for measuring how well your model generalizes on unknown data.
Last but not least...5.:
If you get good results and generalization by using TTS...go for it. If you get problems with over-/underfitting or generalization try CV and see if it helps. Of course as with everything in Data Science and Machine Learning there are multiple levels of complexity that you can add on top of TTS and CV as well. For example you could read about K-Fold or LOOCV etc. I don't think that is necessary though. |
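To make the comparison concrete, here is a short sketch of both options with scikit-learn (assuming your 700 x 108 features are in X and the labels in y):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

clf = RandomForestClassifier(random_state=0)

# 1) stratified, reproducible train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf.fit(X_train, y_train)
print('hold-out score:', clf.score(X_test, y_test))

# 2) 5-fold cross-validation on the same data
scores = cross_val_score(clf, X, y, cv=5)
print('CV scores:', scores, 'mean:', scores.mean())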
H: XGBoost validation for number of trees
I have a simple Question:
I am using XGBoost to classify some data:
1.) With 100 estimators I have the following scores (roc_score):
train_data : 98.5
validation_data : 97.2
2.) With 500 estimators I have the following scores (roc_score):
train_data : 99.4
validation_data : 97.7
According to the above, can we say the model with 500 estimators works better? Or should I change the validation data a few times and see if a similar kind of increment happens when shifting from 100 to 500 estimators?
AI: At first glance, your conclusion appears correct, but there are some important caveats to keep in mind.
First, what are the sizes of your training and validation sets? If your validation set is too small, then the observed difference may not be statistically significant.
Second, you should verify that your validation set is a representative sample. (i.e. it should come from the same distribution as the training set). If it's not representative, then it may give poor estimates of performance.
Third, when tuning hyperparameters, it's a good idea to split your dataset into three shards - training, validation, and testing. You can use the training and validation sets to find optimal hyperparameters (as you have done), then use the testing set to generate a performance estimate for the tuned model. If you trust the validation accuracy obtained during hyperparameter tuning, you are liable to a subtle form of overfitting where hyperparameters are specialized for the validation set.
Finally, if you have the computational resources, then it's always a good idea to evaluate accuracy with cross-validation rather than a train-test split. This will give you a more robust estimate of accuracy.
If you've checked all these boxes, then you have good reason to believe that 500 estimators is better than 100 estimators!
[S]hould I change the validation data few times and see if similar kinds of increment happens by shifting from 100 to 500 estimators?
Yes, it's always a good idea to try many different configurations of hyperparameters. You can use scikit-learn's GridSearchCV or RandomizedSearchCV to easily run a search over the hyperparameter space. |
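As a sketch of that last point, you can let GridSearchCV compare the two settings (and more) with cross-validation directly (X and y are assumed to hold your training data):

from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [100, 300, 500]}
search = GridSearchCV(XGBClassifier(), param_grid, scoring='roc_auc', cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)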
H: Confusion matrix - determine the values of FP FN TP and TN
After running my code, I get the values of accuracy, precision and recall, and I want to determine the values of FP, FN, TP and TN from these metrics. I tried to calculate them using the formula of each metric but I couldn't. Is there any way to do this?
AI: You should modify the code to produce the confusion matrix itself. But assuming that's impossible for some reason...
A bit of linear algebra helps here. @n1k31t4 is right that given only accuracy, precision, and recall, you can't expect to reproduce the confusion matrix: you have three equations in four unknowns, and the equations can be expressed as linear equations (in the unknowns; see below), so there are definitely infinitely many solutions (but made finite by the non-negativity requirement, and in odd cases made few or even unique by the integer requirement).
If you happen to also know the total number of samples (or perhaps some other confusion-matrix measurement), you can recover everything. You don't need both P and N as @BenjiAlbert uses (although that produces more pleasing formulas IMO). Below I've done it by putting everything else in terms of $TP$, but there are sure to be several routes to the answer.
From $\text{recall}=\frac{TP}{TP+FN}$, we get
$\frac{1}{\text{recall}} = 1+\frac{FN}{TP}$ and so
$FN = (\frac{1}{\text{recall}}-1)TP$.
Similarly, from $\text{precision}=\frac{TP}{TP+FP}$ we obtain
$FP = (\frac{1}{\text{precision}}-1)TP$.
Finally,
$TN = \text{accuracy}\cdot\text{count} - TP$,
so
$$\begin{align*}
\text{count} &= TP+TN+FP+FN \\
&= \text{accuracy}\cdot\text{count} + (\frac{1}{\text{precision}}-1)TP + (\frac{1}{\text{recall}}-1)TP,
\end{align*}
$$
and now you can solve for TP:
$$TP = \frac{(1 - \text{accuracy})\cdot(\text{count})}{\frac{1}{\text{precision}}+\frac{1}{\text{recall}}-2}$$
Plugging that back into the above formulas gives the values for all the others. |
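Putting the formulas together, here is a small helper implementing them (note it breaks down if precision and recall are both 1, since the denominator becomes zero):

def confusion_from_metrics(accuracy, precision, recall, count):
    # recover TP, FP, FN, TN from accuracy, precision, recall and the sample count
    tp = (1 - accuracy) * count / (1 / precision + 1 / recall - 2)
    fn = (1 / recall - 1) * tp
    fp = (1 / precision - 1) * tp
    tn = accuracy * count - tp
    return round(tp), round(fp), round(fn), round(tn)

# quick check with a known matrix TP=40, FP=10, FN=20, TN=30 (count=100):
# accuracy=0.7, precision=0.8, recall=2/3
print(confusion_from_metrics(0.7, 0.8, 2/3, 100))   # -> (40, 10, 20, 30)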
H: Pandas: How can I update dataframe values?
I have two spreadsheets where one is updating the other.
How can I update this data using the pandas library?
Example, where 'b' updates 'a':
a = {'field': ['a', 'b', 'c'], 'value': ["", None, 1]}
b = {'field': ['a', 'b', 'd'], 'value': [1, 2, 1]}
Expected outcome:
c = {'field': ['a', 'b', 'c', 'd'], 'value': [1, 2, 1, 1]}
AI: import pandas

df_a = pandas.DataFrame(a)
df_b = pandas.DataFrame(b)
c = pandas.concat([df_a, df_b], ignore_index=True).drop_duplicates(subset=['field'], keep='last')
H: feature extraction from single word for classification into nouns and names
I would like to write a NN that can classify different kinds of words (e.g. nouns, verbs, names) and am struggling to find information on how to do feature extraction on single words. For example, I would like the NN to learn that "street" is a noun while other words are names or verbs. How would I go about doing that? I'm really new to this and searching for it always seems to yield only examples of text feature extraction, which is not what I'm looking for.
Thank you in advance kind strangers!
AI: I suppose you could try to learn a mapping from characters (or character-level n-grams) to part-of-speech. This would be analogous to document classification. Instead of a document, you have a single word. And instead of a sequence of words, you have a sequence of letters.
With this formulation, most of the tricks which extract features from text could also be applied to extract features from characters (although your mileage may vary). At the simple end of the scale, you could try "old-school" techniques like bag-of-words and TF-IDF (except you would have a bag-of-characters, and it would be Character-Frequency Inverse-Word-Frequency). On the complex end of the scale, you could try to learn embeddings of characters or character n-grams.
However, before you get started, there are a few things I think you should keep in mind:
Do you think there is enough information in sub-word features to classify parts of speech? I think it's pretty unlikely (at least for English). Your model might learn some of the easy cases (e.g. an "-ly" suffix often indicates that the word is an adverb), but I don't think it would perform well in general.
This is actually a multilabel classification problem, because there are many words which can serve as more than one part-of-speech. Also, names (proper nouns) are a subclass of nouns.
What happens if you feed your model a non-existent word? Should your model try to classify it as a part-of-speech, or do you want it to recognize non-existent words?
If you're new to machine learning, consider putting this aside for a little while. Jumping headfirst into a difficult classification problem with deep learning is a hard, confusing way to learn.
Finally, are you doing this for experimental/fun reasons, or is this intended to be part of a production application? If it's the former, then go for it! Worst case: it doesn't work but you learn something.
But if this project is for anything more serious, then you should not train a ML model to solve this problem. Your model will never outperform a dictionary :) |
H: TensorFlow MLP loss increasing
When I train my model the loss increases over each epoch. I feel like this is a simple fix and I am missing something obvious, but I cannot figure out what it is. Any help would be greatly appreciated.
The neural network:
def neural_network(data):
hidden_L1 = {'weights': tf.Variable(tf.random_normal([784, neurons_L1])),
'biases': tf.Variable(tf.random_normal([neurons_L1]))}
hidden_L2 = {'weights': tf.Variable(tf.random_normal([neurons_L1, neurons_L2])),
'biases': tf.Variable(tf.random_normal([neurons_L2]))}
output_L = {'weights': tf.Variable(tf.random_normal([neurons_L2, num_of_classes])),
'biases': tf.Variable(tf.random_normal([num_of_classes]))}
L1 = tf.add(tf.matmul(data, hidden_L1['weights']), hidden_L1['biases']) #matrix multiplication
L1 = tf.nn.relu(L1)
L2 = tf.add(tf.matmul(L1, hidden_L2['weights']), hidden_L2['biases']) #matrix multiplication
L2 = tf.nn.relu(L2)
output = tf.add(tf.matmul(L2, output_L['weights']), output_L['biases']) #matrix multiplication
output = tf.nn.softmax(output)
return output
My loss, optimiser and loop for each epoch:
output = neural_network(x)
loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y) )
optimiser = tf.train.AdamOptimizer().minimize(loss)
init = tf.global_variables_initializer()
epochs = 5
total_batch_count = 60000//batch_size
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
avg_loss = 0
for i in range(total_batch_count):
batch_x, batch_y = next_batch(batch_size, x_train, y_train)
_, c = sess.run([optimiser, loss], feed_dict = {x:batch_x, y:batch_y})
avg_loss +=c/total_batch_count
print("epoch = ", epoch + 1, "loss =", avg_loss)
sess.close()
I have a feeling my problem lies in either the loss function or the loop I wrote for each epoch; however, I am new to TensorFlow and cannot figure this out.
AI: You are using the function softmax_cross_entropy_with_logits which, according to Tensorflow's documentation, has the following specification for logits,
logits: Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities.
Hence, you should pass the activations before the non-linearity application (in your case, softmax). You can fix it by doing the following,
def neural_network(data):
hidden_L1 = {'weights': tf.Variable(tf.random_normal([784, neurons_L1])),
'biases': tf.Variable(tf.random_normal([neurons_L1]))}
hidden_L2 = {'weights': tf.Variable(tf.random_normal([neurons_L1, neurons_L2])),
'biases': tf.Variable(tf.random_normal([neurons_L2]))}
output_L = {'weights': tf.Variable(tf.random_normal([neurons_L2, num_of_classes])),
'biases': tf.Variable(tf.random_normal([num_of_classes]))}
L1 = tf.add(tf.matmul(data, hidden_L1['weights']), hidden_L1['biases']) #matrix multiplication
L1 = tf.nn.relu(L1)
L2 = tf.add(tf.matmul(L1, hidden_L2['weights']), hidden_L2['biases']) #matrix multiplication
L2 = tf.nn.relu(L2)
logits = tf.add(tf.matmul(L2, output_L['weights']), output_L['biases']) #matrix multiplication
output = tf.nn.softmax(logits)
return output, logits
Then, outside your function, you can retrieve the logits, and pass it to your loss function, as in the example bellow,
output, logits = neural_network(x)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y))
I remark that you may still be interested in the outputs tensor, for calculating your network's accuracy. If this substitution doesn't work, you should also experiment with the learning rate parameter on your AdamOptimizer (see the documentation here). |
H: random forest classifier - impact of small n_estimator and repeated training
trying to have a better understanding of random forest algorithm here. With the same training and holdout datasets, I tried two things here:
Set a small n_estimator (10), train on my training dataset and apply to my holdout dataset. If I repeat this several times, the result (e.g. correctly predicted target class) varies somewhat from run to run. My understanding is that since the # of trees is small in my model, there are variations in my model after training thus leading to different results.
Set a high n_estimator (300) and do the same. Then the results don't vary. My take is that impact of high n_estimator reduces variation in the model and thus i get the same prediction every time.
So if I run my scenario 1 a bunch of times and consolidate the results (i.e. run 1 predicts A B in class 1, run 2 predicts A C in class 1, run 3 predicts D in class 1), my final results would be A B C D are in class 1. My question is:
1. Is this essentially the same as running it once with a large n_estimator?
2. Is this approach problematic because I am relying more on "guessing" (e.g. small n_estimator leads to larger variation in outcomes)?
Thanks!
AI: There is a parameter in estimators called random_state that fixes a seed for the algorithm's run, so rerunning the exact same code gives exactly the same results. It makes your code deterministic. But yes, in random forests in particular, due to their nature of averaging across all trees created when the forest is grown, variance in your result becomes less evident. So your reasoning is correct.
If you want to make your low-estimators RF produce the same results at every run, just add random_state and give it a fixed number.
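For example:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=10, random_state=42)   # any fixed seed works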
H: Substituting nan values with mean code
for x in num_cols:
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(np.array(ds[x]).reshape(-1,1))
ds[x] = imp.transform(np.array(ds[x]).reshape(-1,1))
AI: Here, you are substituting the missing values (nans) with something; it can be either the most frequent value, the median, the average (mean), whatever. The mean strategy replaces the nan values with the average of the non-nan values.
for x in num_cols:
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(np.array(ds[x]).reshape(-1,1))
ds[x] = imp.transform(np.array(ds[x]).reshape(-1,1))
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
For transforming missing values we use SimpleImputer, here, the missing values are np.nan and we want to use the mean strategy for transforming these nans. Thereby, their values would be substituted by the average value of each feature.
imp.fit(np.array(ds[x]).reshape(-1,1))
You have a one-dimensional array (np.array(ds[x])) of length $n$; by reshape(-1,1) you convert it to shape $(n, 1)$. Here -1 tells NumPy to infer the number of rows and 1 stands for having one column.
Then you fit your transformer (imp) to your data (np.array(ds[x]).reshape(-1,1))
ds[x] = imp.transform(np.array(ds[x]).reshape(-1,1))
Here, the transformed data would be substituted in ds[x] |
H: Similarity of XGBoost models?
Is xgboost with n_estimators = 100 and learning_rate = 0.1, same as xgboost with n_estimators = 50 and learning_rate = 0.2 ?
AI: No, they won't have either the same performance or the same architecture if you were to try to visualize them. An XGBoost with 100 n_estimators and a learning rate of 0.1 is 100 trees grown sequentially, with each tree's output multiplied by 0.1. The latter is a model of 50 trees grown the same way with outputs multiplied by 0.2, so your second model (given the usual size of datasets nowadays) will probably underfit the data: a low number of trees to learn from the data and a high learning rate will make it somewhat inaccurate.
Unless you're looking for another kind of similarity, this should explain it enough, I hope. If not, feel free to comment to clarify your question more.
H: Should I have "normal" sampled data in my dataset?
I am busy working on a project to find the reasons why kids in normal households are doing badly in school.
I have a dataset which consists of kids that live in environments where the family is middle class, has access to necessary facilities, and the kid is not suffering from any disorders but is failing grades in school.
It is understandable for kids that have poor living conditions to have problems at school, but why kids that have all the necessities in life do not do at least average in school needs a bit more research.
Now that I have this dataset, does it make sense to add kids with the same living environments that are doing ok at school to the dataset?
I am planning to use SOM for data mining if that helps.
AI: In short, yes.
If your goal is to understand the drivers behind poor school performance (or even model + predict school performance), you will need both "positive" and "negative" outcomes in your dataset.
Of course, you need to be careful that the positive outcomes are "similar" enough to the negative outcomes to avoid introducing distortions into your data - i.e. ensure that the data you are adding are from middle-class families with access to facilities etc.
You should also consider the ratio of positive:negative outcomes in your final data set, as you will ideally like to avoid imbalanced data. However, there are techniques and approaches to handle imbalanced data if necessary. |
H: Membership ratio graph
Does anyone know what the kind of graph in this image is called?
Each color represents a class and the height of the color, in a particular instant, represents the ratio of elements that belong to that class.
How can I produce such a graph, as an example, in R or Python?
AI: This looks like a "proportional stacked area chart." The R Graph Gallery has a similar one here:
They have great sample code to provide guidance on how to re-create it (https://www.r-graph-gallery.com/136-stacked-area-chart.html).
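If you prefer Python, a small matplotlib sketch with made-up data does the same thing once each column is normalised by its total:

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(10)
counts = np.random.rand(3, 10)              # 3 classes over 10 time steps
proportions = counts / counts.sum(axis=0)   # heights now sum to 1 at every instant

plt.stackplot(t, proportions, labels=['class A', 'class B', 'class C'])
plt.legend(loc='upper right')
plt.ylabel('membership ratio')
plt.show()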
H: pandas.isna() vs pandas.DataFrame.isna()
I've seen the two documentation pages for pandas.isna() and pandas.DataFrame.isna() but the difference is still unclear to me. Could someone explain the difference to me using examples?
AI: They call the same underlying method, so there is no functional difference.
Calling the dataframe member function is preferred for OOP patterns, but there are many redundancies/aliases in pandas and python in general.
In case you are curious, here is how the source code breaks down (it is a mess).
The DataFrame (pandas/core/frame.py) method is simply:
def isna(self):
return super().isna()
Where DataFrame extends NDFrame (implemented in pandas/core/generic.py). NDFrame subsequently invokes:
def isna(self):
return isna(self).__finalize__(self)
Which was imported here:
from pandas.core.dtypes.missing import isna, notna
In pandas/core/dtypes/missing.py:
def isna(obj):
return _isna(obj)
The _isna function is later aliased as _isna = _isna_new because there is a deprecated method _isna_old(obj).
The _isna_new(obj) function then performs the logic operations:
def _isna_new(obj)
if is_scalar(obj):
return libmissing.checknull(obj)
# hack (for now) because MI registers as ndarray
elif isinstance(obj, ABCMultiIndex):
raise NotImplementedError("isna is not defined for MultiIndex")
elif isinstance(obj, type):
return False
elif isinstance(
obj,
(
ABCSeries,
np.ndarray,
ABCIndexClass,
ABCExtensionArray,
ABCDatetimeArray,
ABCTimedeltaArray,
),
):
return _isna_ndarraylike(obj)
elif isinstance(obj, ABCGeneric):
return obj._constructor(obj._data.isna(func=isna))
elif isinstance(obj, list):
return _isna_ndarraylike(np.asarray(obj, dtype=object))
elif hasattr(obj, "__array__"):
return _isna_ndarraylike(np.asarray(obj))
else:
return obj is None
Ultimately, the DataFrame method passes itself as a parameter to the same function that you call with pandas.isna(). |
H: Can a decision tree learn to solve a xOR problem?
I have read online that decision trees can solve xOR type problems, as shown in images (xOR problem: 1) and (Possible solution as decision tree: 2).
My question is how can a decision tree learn to solve this problem in this scenario. I just don't see a way for any metric (Information gain, Gini Score, ...) to choose one of the splits in image 2 over any other random split.
Is it possible to solve the presented problem with a decision tree?
Would using a random forest solve the problem in any way?
Thank you in advance.
AI: Yes, a decision tree can learn an XOR.
I have read online that decision trees can solve xOR type problems...
Often things are phrased not carefully enough. A neural network can perfectly sort a list of integers, but training one to do that would be quite hard. Your image shows that a tree can easily represent the XOR function, but your question is how to learn such a tree structure.
My question is how can a decision tree learn to solve this problem in this scenario. I just don't see a way for any metric (Information gain, Gini Score, ...) to choose one of the splits in image 2 over any other random split.
Indeed, the first split is probably quite random, or due to noise (if you go for $\operatorname{sign}(x\cdot y)$ with continuous $x,y$ instead of the discrete $x,y$ and XOR). But, as long as your algorithm makes the plunge with one of those first splits, the next splits are obvious and your tree will make them.
Is it possible to solve the presented problem with a decision tree?
Here's a notebook (github/colab, suggestions welcome) demonstrating that yes, a (sklearn) decision tree can learn $\operatorname{sign}(x\cdot y)$ (perhaps with some errors when points are extremely close to 0); but it also goes on to show some of the difficulties, e.g. when variables other than $x,y$ are available to the tree to split on. Up-shot: noise variables can wreck that first split I mentioned above, and even useful variables can make the tree lose track of the XOR.
Would using a random forest solve the problem in any way?
Probably not the basic problem, but it looks like it helps with, e.g., the noise variables above. |
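As a quick sanity check (a separate sketch, not taken from the notebook above), scikit-learn's tree fits the four-point XOR exactly:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                        # XOR labels

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict(X))                            # -> [0 1 1 0]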
H: Is it possible to use an array of graph coordinates as an input variable?
Say I have 1000 graphs that show sales every year for the last 10 years for 1000 different companies. And say each of those graphs belongs to either domestic countries or foreign countries.
Is it possible I could input the different graphs into a classifier? That is, could a model predict based on the graphs whether or not the country was domestic or foreign? If so, how would you do that in python or r?
AI: Yes, the graph coordinates can be used as features for training your algorithm (whichever classification algorithm you choose). The features will be X and Y and the output will be the label-encoded type of country.
H: Normal equation for linear regression
I am going through the derivation of normal equation for multivariate linear regression. The equation is given by :
$\theta = (X^{T}X)^{-1}X^{T}Y$
The cost function is given by:
$J(\theta) = \frac{1}{2m}(X\theta-Y)^{T}(X\theta-Y)$
Simplifying,
$J(\theta) = \frac{1}{2m}(\theta^{T}X^{T}X\theta - 2(X\theta)^{T}Y + Y^{T}Y)$
Differentiating w.r.t $\theta$ and equating to zero
$\frac{dJ(\theta)}{d\theta} = \frac{d}{d\theta}(\theta^{T}X^{T}X\theta)-\frac{d}{d\theta}(2(X\theta)^{T}Y) = 0$
I want to specifically understand the differentiation of the left term:
$\frac{d}{d\theta}(\theta^{T}X^{T}X\theta) = X^{T}X\frac{d}{d\theta}(\theta^{T}\theta) $
$\frac{d}{d\theta}(\theta^{T}\theta) = [\frac{d}{d\theta_1}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2}),\frac{d}{d\theta_2}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2}) ,...., \frac{d}{d\theta_n}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2})]$
$\frac{d}{d\theta}(\theta^{T}\theta) = [2\theta_1,2\theta_2,...,2\theta_n] $
$\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta^{T} $
But the final equation is obtained by using $\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta$
How is $\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta$ and not $2\theta^{T}$
AI: It is basically a matter of convention, which becomes a bit more clear if you write the whole thing in terms of elements, rather than vectors. Consider
$$ \theta^T \theta = \sum_{n=1}^N \theta_n \theta_n = \sum_{n=1}^N \theta_n^2 $$
What is usually meant if you write $\frac{df}{d\theta}$ is that you take the gradient of a scalar $f$, i.e. you get a vector $\frac{df}{d\theta}$ where each element $i$ is the derivative with respect to the respective coordinate $\theta_i$:
$$ \left( \frac{df}{d\theta} \right)_i = \frac{df}{d\theta_i} $$
Let's apply this to $\theta^T\theta$:
$$ \left( \frac{d}{d\theta} \theta^T\theta \right)_i = \frac{d}{d\theta_i} \sum_{n=1}^N \theta_n^2 = 2 \sum_{n=1}^N \theta_n \frac{d\theta_n}{d\theta_i} $$
and since $\frac{d\theta_n}{d\theta_i} = \delta_{i,n}$:
$$ \left( \frac{d}{d\theta} \theta^T\theta \right)_i = 2 \theta_i $$
If you interpret this as columns or rows is pretty much up to you. Commonly we would take $\theta_i$ as the elements of a column vector $\theta$ and for practicality we usually want the gradient to live in the same vector space as our coordinates, so $\left( \frac{df}{d\theta} \right)_i$ would also be a column vector. Hence
$$ \frac{d}{d\theta} \theta^T\theta = 2 \theta $$ |
H: Error encoding categorical features using sklearn pipelines
I am new to sklearn pipelines and am using this post as a guide for my code:
https://www.codementor.io/bruce3557/beautiful-machine-learning-pipeline-with-scikit-learn-uiqapbxuj
I am trying to encode a categorical feature using a transformation pipeline, but no matter what encoder I use, I get the same error. As far as I can tell from reading other posts, scikit-learn should be able to handle categorical variables as strings from version 0.20 or greater (namely with the OneHotEncoder).
ValueError: could not convert string to float: 'Male'
Where I have entered xxxxxxxxxxx below replace with one of the following
ce.OneHotEncoder
ce.TargetEncoder
OneHotEncoder
OrdinalEncoder
from sklearn.pipeline import FeatureUnion
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
import category_encoders as ce
from sklearn.preprocessing import OrdinalEncoder
from sklearn.pipeline import Pipeline
import pandas as pd
import numpy as np
# create example data
example_df = pd.DataFrame({'Sex':['Male','Female','Female'],'Survived':[1,1,0]})
X_train = example_df.drop('Survived', axis=1)
y_train = pd.DataFrame(example_df['Survived'])
# build example pipeline
cat_pipe = ("categorical_features", ColumnTransformer([
("categorical", Pipeline(steps=[
("impute_stage", Imputer(missing_values=np.nan, strategy="median")),
("label_encoder", xxxxxxxxxxx())]), ["Sex"]
)])
)
example_pipeline = Pipeline(steps=[cat_pipe])
# fit pipeline
example_pipeline.fit(X_train, y_train)
# Name Version
scikit-learn 0.20.3
category_encoders 1.3.0
numpy 1.16.2
pandas 0.24.2
AI: In this line:
("impute_stage", Imputer(missing_values=np.nan, strategy="median"))
Because your input type is string, you cannot fill the null values with the median (we cannot average string values).
From the documentation, you can instead fill null values with a string constant. Note that the old Imputer does not support this; SimpleImputer (from sklearn.impute, available since scikit-learn 0.20) does:
SimpleImputer(missing_values=np.nan, strategy="constant", fill_value="NULL")
to represent null values in your string field.
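Putting it together, one possible fix for the categorical branch is sketched below (a sketch assuming scikit-learn >= 0.20; the fill value and the use of OneHotEncoder with handle_unknown="ignore" are my own choices, not the only option):

from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

cat_pipe = ("categorical_features", ColumnTransformer([
    ("categorical", Pipeline(steps=[
        ("impute_stage", SimpleImputer(missing_values=np.nan, strategy="constant", fill_value="missing")),
        ("encoder", OneHotEncoder(handle_unknown="ignore"))]), ["Sex"]
    )])
)

example_pipeline = Pipeline(steps=[cat_pipe])
example_pipeline.fit(X_train, y_train)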
H: Rank links from rss feed
I am trying to create a script to filter the most "interesting" articles from an RSS feed and rank them.
feeds = ['http://feeds.theguardian.com/theguardian/technology/rss',
'http://rss.cnn.com/rss/money_news_international.rss',
'https://news.ycombinator.com/rss?format=xml',
'http://feeds.reuters.com/reuters/companyNews',
'http://feeds.reuters.com/reuters/businessNews']
I tried applying the K-means algorithm to RSS feeds to filter the most popular articles from thousands of links in an attempt to reduce my personal RSS reading time.
However, I feel that this might not be state of the art.
Any suggestions on papers, actual implementations or approaches to get a proper list of "must-read" articles?
I appreciate your replies!
AI: Big Note: Unless you define what "interesting" is, Machine Learning cannot do anything for you. Clustering only tells you which documents are similar to each other, but YOU need to define which cluster is interesting (BTW, clustering in terms of context is called Topic Modeling, as each cluster of written text is about some topics).
Now my suggestions:
Topic Modeling
Use a simple LDA to detect the topics of each document and cluster documents based on topics. For each topic, print the first 5 dominant keywords and, if they interest you, read the articles.
Keyword Search
Set up a list of keywords you would like to read about. Using a simple TF-IDF model you can find "high score" documents according to your favorite keywords.
Keyword Extraction
You may extract keywords from each document using RAKE with a score higher than a manual threshold. See if the keywords interest you or not. (Conceptually close to LDA, but here you can extract keywords from each individual document, while LDA makes sense if you have a corpus of text.)
Update
According to the additional info in the comments, the question turned even more towards classic recommender systems.
We have the number of clicks per article and we use it as the interestingness score. Now one may say (a very basic idea to be honest):
Crawl all the articles and use RAKE to extract keywords from the n most interesting articles. Use those keywords as search queries.
Use BM25 algorithm to search those queries and get bm25 score for each article according to queries.
Aggregate those BM25 scores with the interestingness score (e.g. simply multiply them) and rank articles from most interesting to least interesting.
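As a rough sketch of steps 2-3 (assuming the rank_bm25 package, and that articles is a list of article texts with clicks holding the matching click counts):

from rank_bm25 import BM25Okapi

tokenized = [text.lower().split() for text in articles]
bm25 = BM25Okapi(tokenized)

query = ['machine', 'learning', 'startup']          # keywords from RAKE or your own list
relevance = bm25.get_scores(query)

# combine BM25 relevance with the per-article click counts (interestingness)
final = [r * c for r, c in zip(relevance, clicks)]
ranking = sorted(range(len(articles)), key=lambda i: final[i], reverse=True)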
H: Create Nodes/Edges From CSV (latitude and longitude) for Graphs
The Ultimate Goal: I want to find the shortest and coolest (in terms of temperature) path between two points (for a given pair of latitudes and longitudes on the map)!
I am aware of algorithms like Dijkstra or A*, which are apparently the ones used in navigation systems. In fact I was able to successfully create a dummy graph using NetworkX in Python and find the shortest path easily:
import matplotlib.pyplot as plt
import networkx as nx
%matplotlib inline
#graph object
g = nx.Graph()
#graph nodes
g.add_nodes_from(['s','a','b','c','g'])
#add edges
g.add_edge('s','a')
g.add_edge('s','b')
g.add_edge('a','b')
g.add_edge('a','c')
g.add_edge('a','g')
g.add_edge('b','c')
g.add_edge('c','g')
#labels
g.add_edge('s','a', weight=1)
g.add_edge('s','b', weight=4)
g.add_edge('a','b', weight=2)
g.add_edge('a','c', weight=5)
g.add_edge('a','g', weight=12)
g.add_edge('b','c', weight=2)
g.add_edge('c','g', weight=3)
# pos = nx.spring_layout(g)
fixed_positions = {'s':(0,4),'a':(4,6), 'b':(4,2), 'c':(8,4), 'g':(12,4)}#dict with two of the positions set
edge_labs = dict([( (u,v), d['weight']) for u,v,d in g.edges(data=True)])
nx.draw_networkx(g, fixed_positions )
nx.draw_networkx_edge_labels(g, fixed_positions, edge_labels=edge_labs)
nx.draw_networkx_nodes(g, fixed_positions, node_color='r')
plt.title("Simple Graph")
plt.show()
Once the graph is created, it is super easy, this one-liner gives the shortest path:
nx.shortest_path(g,'s','g')
['s', 'a', 'g']
Or the Dijkstra path:
nx.dijkstra_path(g, 's','g')
['s', 'a', 'b', 'c', 'g']
Or even A* Search:
nx.astar_path(g,'s', 'g', heuristic = None )
['s', 'a', 'b', 'c', 'g']
The Problem: I am struggling to create the graph (nodes and edges, weights) from real data for the Ultimate Goal mentioned first. The data looks like this:
time, longitude, latitude, temperature
2017-11-13 12:53:49,11.558139,48.102061, 22.1
2017-11-13 12:54:21,11.557347,48.102176, 22.3
2017-11-13 12:55:35,11.554643,48.099852, 22.1
2017-11-13 12:55:55,11.559246,48.099049, 22.2
2017-11-13 12:56:24,11.559256,48.098769, 22.3
2017-11-13 12:56:39,11.559191,48.098996, 22.4
2017-11-13 12:56:49,11.559029,48.099175, 22.2
2017-11-13 12:57:07,11.558799,48.098782, 22.1
2017-11-13 12:57:39,11.558861,48.098965, 22.3
The data contains time, longitude, latitude, and temperature (geodata).
How do I create nodes from here? Is each longitude, latitude pair a node? Is this how it is done for navigation and routing? It does not sound efficient to loop over each point and create a node! Or should only particular longitude, latitude points on the map be regarded as nodes, not all the points? Some sort of coarse sampling maybe?
This question is closely tied to the earlier one. How do I go about the edges? If longitude, latitude pairs are nodes, do I create an edge between two consecutive pairs?
Also, for the weights, in this case temperature, do I have to loop again over all these edges to assign appropriate weights?
How about time? I am thinking it has no impact, right? Because I am not interested in providing a path in the past. What matters is the present, or if I go for forecasting later on using ML in the future (not critical for now!).
I have studied this for quite some time now, and have come across many suggestions like this old question, or tools like NetworkX mentioned earlier or Gephi, and many other tutorials, but I do not get to see how to create the graph easily from such geodata. I thought this would already be well established, since maps use it widely (maybe not open sourced). All I see in tutorials or blog posts are either intuitive explanations of concepts, implementations for a very simple graph like the one shown above, or material that is not about geodata.
Any helps or comments or guides how to implement are appreciated. Ideally I would like to know how do I do go from CSV to creation of Graph so that I can perform shortest path algos.
UPDATE 09.10.2019:
Hugely inspired by the great answer of Brian Spiering (see below), I continued searching and came across a series of courses held at the University of Helsinki in 2017 teaching how to do tons of awesome stuff using street graphs directly retrieved from OpenStreetMap data; for example, look here. As you can see in the link, there is an awesome Python package, osmnx, that enables easy access to download and visualize OpenStreetMap data. The good news is that all these graphs are compatible with Python NetworkX, meaning you can do whatever you want, e.g. finding the shortest path as maps do. The developer of osmnx provides a good number of examples of how to use osmnx together with NetworkX that I found super useful and interesting; definitely check it out (link to osmnx examples).
Concretely, I was able to do the followings in a few lines of code (for passionate readers):
import osmnx as ox
import matplotlib.pyplot as plt
import networkx as nx
from IPython.display import IFrame
ox.config(log_console=True, use_cache=True)
place_name = "Steglitz, Berlin, Germany"
graph = ox.graph_from_place(place_name, network_type='walk')
# project the network to an appropriate UTM (automatically determined)
graph_projected = ox.project_graph(graph)
# you can also plot/save figures as SVGs to work with in Illustrator later
fig, ax = ox.plot_graph(graph_projected, save=True, file_format='svg')
# use networkx to calculate the shortest path between two nodes
origin_node = list(graph.nodes())[0]
destination_node = list(graph.nodes())[20]
route = nx.shortest_path(graph, origin_node, destination_node)
print(route)
[1638866960,
1832211366,
443546729,
443546728,
27433702,
241881515,
241881517,
241881519,
241881521,
241881560,
4422819618,
237937128,
5471327997,
28196761,
27434765,
26627352,
26627351,
27434717,
1802301824,
2375778405]
Please note that here the route is a list of osmid values assigned to each node (a pair of latitude and longitude) in the graph, and that the source and destination above were also chosen by node index. You can provide the inputs directly as a pair of latitude and longitude (based on the nearest node found on the graph) like:
origin = ox.get_nearest_node(graph, tuple(nodes[['x','y']].iloc[0].tolist()))
destination = ox.get_nearest_node(graph, tuple(nodes[['x','y']].iloc[20].tolist()))
route = nx.shortest_path(graph, origin, destination)
You can extend this further to minimize the path by distance (herein length):
route_length = nx.shortest_path(graph, origin_node, destination_node, weight='length')
# plot the route with folium
route_map_length = ox.plot_route_folium(graph, route_length)
# save as html file then display map as an iframe
filepath = 'data/route_length.html'
route_map_length.save(filepath)
IFrame(filepath, width=600, height=500)
I am currently exploring how to add an extra weight (in my case temperature) to the edges for The Ultimate Goal! I am getting some ideas from one of the osmnx tutorials, where they show shortest paths accounting for edge grade impedance (link). A hedged sketch of one possible approach is shown below.
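This is only a sketch, reusing the graph, origin and destination objects from above: nearest_temperature is a hypothetical helper that would interpolate the CSV measurements onto an edge, and alpha is an assumed distance/heat trade-off; only the NetworkX attribute and shortest-path calls are standard.
# attach a temperature to every edge and route on a combined impedance
alpha = 0.5   # assumed trade-off between distance and heat

for u, v, k, data in graph.edges(keys=True, data=True):
    data['temperature'] = nearest_temperature(graph, u, v)          # hypothetical helper
    data['impedance'] = data['length'] * (1 + alpha * data['temperature'] / 40.0)

coolest_route = nx.shortest_path(graph, origin, destination, weight='impedance')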
Good luck and happy path finding!
AI: Data needs to be modeled as a graph to use graph algorithms. Longitude and latitude can not be modeled directly as nodes, thus graph algorithms can not be directly applied. The biggest reason is that nodes in a graph have no notion of distance, just weight. Pairs of longitude and latitude have an inherent notion of distance.
There are at least 2 options:
Model longitude and latitude as locations on a sphere. Then plan the path between locations on a sphere. The optimization problem will involve both distance and temperature.
Join with graph data. For example, street data is graph data. Locations can be modeled as nodes and paths of travel can be modeled as edges. Then there are two sets of weights, distance along a path of travel and temperature. Graph algorithms can minimize the combination of the two weights. |
H: Tensorflow uses more memory, the more epochs it completes
I created a genetic algorithm "optimizer" for TensorFlow, but it is written in Python. I know TensorFlow was not designed to be used like this and that I should really create the optimizer in C++ using their APIs, but I only found that out after I had already programmed the optimizer, and I don't really have the time to research how to create a tf.train.Optimizer(). My GA "optimizer" is not really an optimizer; it is more a tf.Variable.assign() call. See the code below.
The "optimizer" updates a series of weight matrix variables after the evolution takes place. Each variable represents a weight matrix between two layers. If it is a network with inputs, one hidden layer and outputs, then there will be two variables: weights between input and hidden, then hidden and output. Each of these would normally be a 2D matrix, but my weights act as an individual in my population, so I have a 3D matrix: one dimension represents the population size and the remaining 2D matrix the weights. Weight updates are then done by crossover or mutation, which consists of a lot of multiplication, matmul and summation operations. My guess is that the variables I create for the new layers are not garbage collected. I am guessing TensorFlow keeps the variables alive, as I never call a train step provided by the tf.train.Optimizer classes since I created my own.
Here is my code to run my GA optimizer
for i in range(epochs):
sess.run(fitness_per_individual, feed_dict={x: train_data, y: train_labels},
options=run_options,
run_metadata=run_metadata)
evolved_layers = evolve_layer(nn_layers, fitness_per_individual, population)
for tensor_idx in range(len(nn_layers)):
layer = nn_layers[tensor_idx]
evolved_layer = evolved_layers[tensor_idx]
layer = layer.assign(evolved_layer)
# I don't have to feed my training data again I just do it here to keep tensorflow happy.
sess.run(layer, feed_dict={x: train_data, y: train_labels},
options=run_options,
run_metadata=run_metadata)
Is there a way I can dispose of variable matrices created by tf.multiply or is there no way around it without creating a tf.train.Optimizer. Also is there a good practice guideline I can read up for TensorFlow.
AI: It seems like you are adding nodes to your computational graph in this line: layer = layer.assign(evolved_layer). The assign operation is, just like multiplication or addition, a node in the graph, and you construct a new one in every step. Call tf.Graph.finalize() after defining your model. This will raise an error whenever a node is added to the graph and help you debug your code. Ideally you want the graph to be fixed after construction. I suspect you will get an error in the optimizer.
You want all the operations to be defined before you run the training loop and then just run the respective nodes. In your case you'd define layer_assign = layer.assign(evolved_layer) before the loop and in the loop only use sess.run(layer_assign). Of course you would have to treat all operations in evolve_layers in the same way. |
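A minimal sketch of that restructuring, reusing the names from the question (and assuming evolve_layer itself only builds graph ops, so it is also called once up front):
# build the evolution and assign ops once, before training starts
evolved_layers = evolve_layer(nn_layers, fitness_per_individual, population)
assign_ops = [layer.assign(evolved)
              for layer, evolved in zip(nn_layers, evolved_layers)]

sess.graph.finalize()   # any accidental graph growth now raises an error

for i in range(epochs):
    sess.run(fitness_per_individual, feed_dict={x: train_data, y: train_labels})
    sess.run(assign_ops, feed_dict={x: train_data, y: train_labels})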
H: Neural Network Multiple | Averging predictions
I am training multiple neural networks with various parameters and am trying to average their predictions, but I am not really sure what that means; I am confused about what exactly to average. Here is what I mean: for a single observation in binary classification, for example, the final node gives a value p between 0 and 1 (or -1 and 1 if you're using the hyperbolic tangent activation function); this p is then rounded to 1 or 0 depending on whether it exceeds your decision boundary (e.g. 0.5).
Now, here is what I don't understand: should I average p1, p2 and p3 produced by the models before rounding, or should I round the values to True/False responses and then compute the average? And how does that work exactly?
AI: There are multiple ways you could do. All of them are categorized under Ensemble methods in machine learning.
Voting classifiers: the simplest way. You take votes based on the label from all models and use the majority label. That means you should first round all predictions to 0 or 1 and then take the majority.
Weighted voting classifiers: similar to the previous one, but some models have higher weights in the voting.
For more information look here. |
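As a rough numpy illustration of the first option (hard voting), next to simply averaging the raw probabilities before thresholding (often called soft voting), with assumed probability values:
import numpy as np

# probabilities for one observation from three models (assumed values)
p = np.array([0.62, 0.48, 0.71])

# hard (majority) voting: round each output first, then take the majority label
hard_vote = int(np.round(p).sum() > len(p) / 2)   # votes are 1, 0, 1 -> label 1

# soft voting: average the probabilities first, then threshold once
soft_vote = int(p.mean() > 0.5)                   # mean is ~0.60 -> label 1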
H: Risk score from Neural Network classifier (more than 2 categories)
I am trying to use a neural network to perform multiclass classification. The classes represent insurance risk levels: the most risky level is Level 1 and the least risky corresponds to Level 10. The labels came from unsupervised clustering of unannotated insurance data. The architecture is:
Input Layer 43 features
Dense, 1000 neurons, ReLu activation
Dense 500 neurons, ReLu activation
Output 10 neurons, Softmax activation
In particular, for binary classification we can use a single neuron as the last layer and define the score to be the input value to this final layer. But for more than 2 classes I don't know what to do.
My question: is there a way to obtain a risk score that is consistent, that is, the greater the level is, the smaller the score is? I tried to use logits, unnormalized probabilities, etc., but I am not sure whether there is a way to get a single consistent score from logits. Thanks for any hint on this.
AI: How about doing it in an easier way?
Perform the classification in the normal way: the greater the score, the greater the level. Then, for the final output, just reverse the level.
For example, from the classification, the lowest score corresponds to level 1 and the highest score to level 10. Then map level 1 to level 10 and level 10 to level 1 for the final output.
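A tiny sketch of that reversal on top of the softmax output (assuming the 10 output neurons map to levels 1 to 10 in order; model and x_new are hypothetical placeholders for a trained Keras-style model and a new input):
import numpy as np

probs = model.predict(x_new)[0]            # softmax over the 10 output neurons
predicted_level = np.argmax(probs) + 1     # class index 0..9 -> level 1..10

# reverse the scale so that higher (less risky) levels get a lower score
risk_score = 11 - predicted_level          # level 1 -> 10 (riskiest), level 10 -> 1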
H: Split time series by python or by keras?
In Python you can use scikit-learn's TimeSeriesSplit() to split a time series properly for training, but you can also do the same(?) in Keras with TimeseriesGenerator.
Which one is recommendable? And/or what are certain differences?
AI: Assuming they can both do what you want them for (can perform the correct splits for your needs), I would recommend simply using the one that fits best with your pipeline. E.g. the data is being fed straight into a Keras model, just do as much as possible in Keras -> use the TimeseriesGenerator. Otherwise, stick to Sci-kit Learn.
They both generate the splits, meaning they only actually create each split as you loop through the data. This saves memory.
Looking at the source code of the Keras variant, it doesn't seem to support running on the GPU if you combine it e.g. with your TensorFlow graph - meaning there really isn't a big difference between the two, just the APIs/functionality.
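A small side-by-side sketch of the two APIs on a toy series (the import paths assume scikit-learn and tf.keras):
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

series = np.arange(100, dtype=float).reshape(-1, 1)

# scikit-learn: yields expanding train/test index splits usable with any model
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(series):
    train, test = series[train_idx], series[test_idx]

# Keras: yields (window, next value) batches that can be fed straight to model.fit
gen = TimeseriesGenerator(series, series, length=10, batch_size=16)
x_batch, y_batch = gen[0]    # x_batch shape (16, 10, 1), y_batch shape (16, 1)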
H: Neural Networks for Unsupervised Learning
Why can't we use neural networks for unsupervised learning problems?
I do know that they can be used via Kohonen's Self-Organizing Map (KSOM), but is this the only method we can use, or are there others?
AI: Yes, there are others. The most important dimensionality reduction technique in Deep Learning is the Autoencoder. Autoencoders are neural networks with a "funnel" structure that shrinks the size of the signal, forcing the network to learn how to represent the same information with fewer nodes.
Autoencoders are much more common than SOMs in practical DL. While SOMs are used mostly for data visualization (usually they reduce a dataset to a 2D representation), Autoencoders give you absolute freedom in the number of factors that you can extract. This makes them more useful for any research or production purpose.
A good theoretical introduction can be found here.
Please also take a look at this practical implementation in TensorFlow 2.0. |
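For illustration, a minimal dense autoencoder sketch in Keras (input size and layer widths are assumed):
from tensorflow.keras import layers, models

n_features = 30          # assumed input dimensionality
encoding_dim = 2         # size of the bottleneck (the narrowest point of the "funnel")

inputs = layers.Input(shape=(n_features,))
encoded = layers.Dense(16, activation='relu')(inputs)
bottleneck = layers.Dense(encoding_dim, activation='relu')(encoded)
decoded = layers.Dense(16, activation='relu')(bottleneck)
outputs = layers.Dense(n_features, activation='linear')(decoded)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(X, X, epochs=50)           # unsupervised: the input is also the target

encoder = models.Model(inputs, bottleneck)   # use this part for dimensionality reduction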
H: Why is my prediction using ARIMA better if I'm using less historic data?
I have a data set containing hourly electricity prices from 01.01.19 until September. Since the process turned out to be (weakly) stationary, I applied an ARIMA model in Python in order to predict the prices for the next day.
It turned out that the best prediction was made using the last two days as historic data and the worst was the one using almost 6000 values.
What is a possible explanation for this happening?
AI: Electricity prices are essentially the same as stock prices: best modelled by a random walk, where the best prediction for tomorrow is the price today. Therefore I am not really surprised that you get worse results using more historical data.
Some versions of ARIMA will also include regularisation, which will punish your model for including more and more data - inclusion of new data must be justified by contributing to a lower residual error to be "worth" inclusion.
Other models that tend to be a little more robust use features other than the actual target, here the price. For example, trying to predict the volatility of the price might prove to be more accurate. For this there are GARCH models (Generalised AutoRegressive Conditional Heteroskedasticity).
Another thing you might consider is to include external data... for example, electricity consumption is heavily influenced by the weather - if it is cold outside, a lot of people heat their homes using electrical heaters, they also drink more hot drinks etc. |
H: Are validation sets necessary for Random Forest Classifier?
Is it necessary to have train, test and validation sets when using random forest classifier?
I understand it is important with neural networks, but I am not understanding the importance of it with RF. I understand that having a third, unseen set of data to test on is important to know the model isn't overfitting, especially with neural networks. But with RF it seems like you could almost not even have test or validation data (I know in practice this isn't true), at least in theory, since each tree of the forest uses a random sample (with replacement) of the training dataset.
At the moment I am missing out on approx. 250 samples by keeping them unseen from the train and test set, and I know the model would improve with the extra data, so is it possible to have only train and test and not designate a separate validation set, whilst still having a reliable model?
AI: is it possible to have only train and test and not designate a separate validation set, whilst still having a reliable model?
Sure! You can train a RF on the training set, then test on the testing set. That's perfectly valid as long as the model doesn't see any of the testing data during training. (Or, better yet, you can run cross-validation since RFs are quick to train)
But if you want to tune the model's hyperparameters or do any regularization (like pruning), then you'll need a validation set. Train with the training set, use the validation set for tuning, then generate an accuracy estimate with the testing set. |
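A rough sketch of the cross-validation route (X and y are assumed to be your full labelled data):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5)   # 5-fold CV: every sample is used for training and testing in turn
print(scores.mean(), scores.std())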
H: How to arrange the sets to predict y on x in time series?
I'm building my first NN with my own data, and while I was already tuning the parameters I stumbled over an aspect which confuses me, such that I'm no longer sure what is right and what is wrong.
Given this (head of the) data df_train1_raw:
timestamp A_phsA
2018-02-05 14:00:00 1.517839e+09 856.436487
2018-02-05 15:00:00 1.517843e+09 859.653339
2018-02-05 16:00:00 1.517846e+09 836.635463
2018-02-05 17:00:00 1.517850e+09 801.097284
2018-02-05 18:00:00 1.517854e+09 794.855960
(...)
The timestamp column is basically the index just converted into numeric form so I can use this time information for the NN model.
Goal: Predict A_phsA on timestamp
First, I create the train and test sets:
# Prepare data
X_train1_raw = df_train1_raw.values
y_train1_raw = X_train1_raw
# Split data into appropriate sets
## Standardize and scale data
scaler = StandardScaler()
tscv = TimeSeriesSplit(n_splits = 5)
pyplot.figure(1)
index = 1
fig, ax = plt.subplots(1, 1, figsize=(24,7))
for train_index, test_index in tscv.split(X_train1_raw):
X_train1, X_test1 = scaler.fit_transform(X_train1_raw[train_index]), scaler.fit_transform(X_train1_raw[test_index])
y_train1, y_test1 = scaler.fit_transform(y_train1_raw[train_index]), scaler.fit_transform(y_train1_raw[test_index])
pyplot.subplot(510 + index)
pyplot.plot(X_train1[:, 1])
pyplot.plot([None for i in X_train1[:, 1]] + [x for x in X_test1[:, 1]])
index +=1
pyplot.show();
This looks reasonable. When I plot the loss and val_loss values of the nn later it also looks reasonable.
But what actually puzzles me is this line at the beginning:
y_train1_raw = X_train1_raw
I can't tell if it is plainly wrong or if I just can't get my head around it anymore. The reason is that when I look, for example, at KFold:
X = list(range(10))
print (X)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [x*x for x in X]
print (y)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
kf = KFold(n_splits=5)
X = np.array(X)
y = np.array(y)
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
print("X_test: ", X_test)
they have different X and y. Which makes sense I would say.
But when I adjust my code accordingly
X_train1_raw = df_train1_raw.iloc[:, 1].values
y_train1_raw = df_train1_raw.iloc[:, 2].values
I get an error when performing
fig, ax = plt.subplots(1, 1, figsize=(24,7))
for train_index, test_index in tscv.split(X_train1_raw, y_train1_raw):
X_train1, X_test1 = scaler.fit_transform(X_train1_raw[train_index]), scaler.fit_transform(X_train1_raw[test_index])
y_train1, y_test1 = scaler.fit_transform(y_train1_raw[train_index]), scaler.fit_transform(y_train1_raw[test_index])
pyplot.subplot(510 + index)
pyplot.plot(X_train1[:, 1])
pyplot.plot([None for i in X_train1[:, 1]] + [x for x in X_test1[:, 1]])
index +=1
pyplot.show();
ValueError: Expected 2D array, got 1D array instead:
array=[1.5178392e+09 1.5178428e+09 1.5178464e+09 1.5178500e+09
1.5178536e+09
And I don't understand why. Which part is not correct?
edit: Is it scaler.fit_transform() ?
AI: fig, ax = plt.subplots(1, 1, figsize=(24,7))
for train_index, test_index in tscv.split(X_train1_raw, y_train1_raw):
X_train1, X_test1 = scaler.fit_transform(X_train1_raw[train_index].reshape(-1,1)), scaler.fit_transform(X_train1_raw[test_index].reshape(-1,1))
y_train1, y_test1 = scaler.fit_transform(y_train1_raw[train_index].reshape(-1,1)), scaler.fit_transform(y_train1_raw[test_index].reshape(-1,1))
pyplot.subplot(510 + index)
pyplot.plot(X_train1[:, 0])
pyplot.plot([None for i in X_train1[:, 0]] + [x for x in X_test1[:, 0]])
index +=1
pyplot.show();
This should fix your code. The error happens because StandardScaler expects a 2D input of shape (n_samples, n_features), where each sample is an array of n_features to transform. reshape(-1, 1) gives you exactly that when you have a single feature to scale (and the plotting then uses column index 0, since there is only one scaled feature).
H: General approach on time series for customer retention/churn in retail
I have a time series of data in the following form:
| purchase_date | cutomer_id | num_purchases | churned |
2018-10-31 id1 39 0
2018-11-31 id1 0 0
2019-01-31 id1 6 1
2019-03-31 id2 300 0
2018-04-31 id2 2 1
...
I grouped the data by month and summed num_purchases by month. The churned column for user id1, for example, represents the month in which the customer churned; so id1 in my case churned in January. Before this, to label who has churned or not, we sampled customers based on a 2-month inactivity period from the churn date. I need to predict if a user is going to churn 2 months from now. I am not sure what the best approach for this is.
Q1: Should I be grouping customers like I am doing, on a monthly basis or I have to group them on a 2-month basis since that is how they were labeled?
Q2: Also, how do I model this? Do I keep customer_id as a feature of the model or not? Is the gap in dates for each customer relevant and how should I deal with it (if)? The dates repeat for different users, should I create index out of a date but it won't be unique or should I create index out of customer_id?
Q3: If I need to predict whether the user is going to churn by the end of the year for example or in the next 6 months, would that change how I group/arrange my date and model this?
I plan to add more features to this dataframe (both categorical and numerical).
AI: OK, let me try to dissect your questions:
Q1: Should I be grouping customers like I am doing, on a monthly basis
or I have to group them on a 2-month basis since that is how they were
labeled?
This depends a bit on what your goal is and what kind of features you want to use. With the limited information I have from your question, I would go for a daily prognosis for the next two months (so if you're working with daily data you can forecast the two-month period day by day). The reason is that otherwise you might lose some inactivity "features" that are implicitly part of your data. As an alternative, you can group your data and create features that introduce that information anyway. For example: activity_in_last_5d, activity_in_last_10d, etc.
Q2: Also, how do I model this? Do I keep customer_id as a feature of
the model or not?
Customer_id doesn't seem to be a feature to me but rather the key for identifying which features to feed into your model.
Is the gap in dates for each customer relevant and
how should I deal with it (if)?
You are basically asking about imputing your data (see this for more: https://towardsdatascience.com/6-different-ways-to-compensate-for-missing-values-data-imputation-with-examples-6022d9ca0779). Gaps can cause problems in your modeling. Some models (for example ARIMA for time series) won't work at all if you have gaps that aren't handled. Looking at your use case, I think taking the last known value for a gap should work fine since a gap means your customer didn't churn on that day.
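A small pandas sketch of that last-known-value imputation (column names follow the sample above, including the 'cutomer_id' spelling; the daily frequency is an assumption):
import pandas as pd

df['purchase_date'] = pd.to_datetime(df['purchase_date'])
df = df.set_index('purchase_date')

# put each customer on a daily grid and carry the last known values forward
filled = (df.groupby('cutomer_id', group_keys=False)
            .apply(lambda g: g.resample('D').ffill()))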
The dates repeat for different users, should I create index out of a date but it won't be unique or should I create index out of customer_id?
Probably go for indexing the customer_id and creating features based on the date! Maybe there are some seasonality features interesting as well (for example how many people churned on the same day last year?)
Q3: If I need to predict whether the user is going to churn by the end
of the year for example or in the next 6 months, would that change how
I group/arrange my date and model this?
You would have to change the size of the period you are predicting for. Changing features, grouping, etc. might be useful as well, but that really depends on which features you choose (i.e. which turn out to be important) and how your model is going to work.
H: Conceptual questions on MLP and Perceptrons
I am facing some confusion regarding the terminologies assocaiated to classification and regression problems esp. using the MLP and Perceptron models.
These are the following:
1) When the data is linearly inseparable, we use MLP. Here what is meant by "data"--is it the response or the input feature that is linearly inseparable?
2) If it is linearly inseparable then does it mean that the mapping function from input to output will always be non-linear? Hence, we prefer MLP or the latest new models such as deep learning?
3) Linear regression fails in the case of linearly inseparable data or can linear regression work for inseparable data but if the function mapping is nonlinear then it fails?
AI: When the data is linearly inseparable, we use MLP. Here what is meant by "data"--is it the response or the input feature that is linearly inseparable?
This means that a linear function of the input features is unable to separate the response.
To answer your question a bit more directly: Given only a linear function of the inputs, the response is the thing that's inseparable.
If it is linearly inseparable then does it mean that the mapping function from input to output will always be non-linear? Hence, we prefer MLP or the latest new models such as deep learning?
Yes. If the mapping from input to output were linear, then the output would necessarily be linearly separable by the input.
Linear regression fails in the case of linearly inseparable data or can linear regression work for inseparable data but if the function mapping is nonlinear then it fails?
Linear regression will never be able to perfectly separate linearly inseparable data. Consider the following example, where the input features are x1 and x2, and the output is the color:
It doesn't matter how you draw a line in the 2D space - you'll never be able to separate the colors. The same idea applies in higher dimensions.
I hope that helps! |
H: class_weight on sklearn's DecisionTreeClassifier
Can class_weight='balanced' on scikit-learn's DecisionTreeClassifier be interpreted as having identical duplicate data points for the minority classes?
I know that doesn't work that way, class_weight works as a misclassification cost. But I want to understand if it would give the same results as oversampling the minority classes.
AI: From sklearn's documentation,
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
It puts bigger misclassification weights on minority classes than majority classes. This method has nothing to do with resampling; it modifies the misclassification cost matrix instead.
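A tiny illustration of that formula on a toy label vector (this mirrors what sklearn computes internally):
import numpy as np

y = np.array([0] * 90 + [1] * 10)                  # imbalanced toy labels
n_samples, n_classes = len(y), 2
weights = n_samples / (n_classes * np.bincount(y))
print(weights)   # [0.556 5.0]: misclassifying the minority class costs about 9x more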
Changing the misclassification cost of each class is a different approach from resampling approaches. In my opinion, it won't give exactly the same result as oversampling the minority class. Having said that, these two approaches are both helpful to dealing with imbalanced (or unbalanced) data classification |
H: clarification on train, test and val and how to use/implement it
So far I think I understood the differences between the training, test and validation set. Basically it is like in this image:
Training set: the data the model is trained on
Validation set: data the model has not been trained on, used to tune hyperparameters
Test set: in principle the same as the validation set, just used at the very end, after the model has been tailored.
The training set is usually set up via some cross validation. When fitting via model.fit(X_train, y_train,..) will Keras shuffle the data autonomously ?
Next, in Keras, you are able to provide the validation set inside the model.fit() method as validation_data=(x_test, y_test) but there is also the possibility to provide e.g. validation_split = 0.2
What is the difference?
And after that, the test/prediction set will be taken into account just as model.pred(X_pred, y_pred,..). Right?
AI: When fitting via model.fit(X_train, y_train,..), will Keras shuffle the
data autonomously?
Yes. shuffle = True is default. So, it basically shuffles every time.
Next, in Keras, you are able to provide the validation set inside the model.fit() method as validation_data=(x_test, y_test) but there is also the possibility to provide e.g. validation_split = 0.2 What is the difference?
The difference is that you can manually provide a validation data set. It's not X_test, Y_test; rather, X_test and Y_test are used for model evaluation with model.evaluate() or model.predict(), while model.fit is used for the training dataset. When you set validation_split = 0.2, Keras takes 20% of the training data (the last 20% of the provided arrays, before shuffling) and reports the validation accuracy and loss on it.
And after that, the test/prediction set will be taken into account just as model.pred(X_pred, y_pred,..). Right?
You need to use model.predict(X_pred) to get the predictions, or model.evaluate(X_test, Y_test) to score the model on the held-out test data.
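A short sketch of how these pieces fit together (assuming a compiled Keras model and already-prepared arrays, all of which are placeholders here):
# validation_split: Keras carves the validation set out of the training data itself
history = model.fit(X_train, y_train, epochs=20, validation_split=0.2)

# validation_data: you provide an explicit, separate validation set instead
# history = model.fit(X_train, y_train, epochs=20, validation_data=(X_val, y_val))

results = model.evaluate(X_test, y_test)   # final loss/metrics on held-out test data
y_pred = model.predict(X_test)             # raw predictions for the test set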
H: Convey time lag information to a linear regression model
I am using a simple linear regression to predict the number of units an item has moved and price of the item is one of the input parameters.
For a few items, the older prices are not relevant and hence this results in incorrect predictions. The definition of old price varies from item to item.
Is there a way to make the linear regression know that recent prices are more relevant?
AI: Linear regressions have a fixed number of variables. An alternative could be to run a separate regression for each item group and add a different number of time lags for each. If you can't do that, I suggest you find an optimal number of time lags to be used in this general model. You can take a look at ACF, and specifically PACF, plots to understand how much memory (time dependency) your time series have.
In theory, adding more time lags shouldn't result in a problem: if the n-th time lag is not relevant to explain your y, then its parameter will show no statistical significance.
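If it helps, a small sketch of those diagnostic plots with statsmodels (item_prices is assumed to be a pandas Series holding one item's price history):
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axes = plt.subplots(2, 1, figsize=(10, 6))
plot_acf(item_prices, lags=30, ax=axes[0])    # overall time dependency
plot_pacf(item_prices, lags=30, ax=axes[1])   # significant spikes suggest how many lags to include
plt.show()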
H: Sublime Text 2 with Pandas for Excel (Combining Data) & Data Visualization
I'm new to programming with Python, and so far it's been a headache to create a build environment, so I need your support and expertise in this area.
Background: I'm running a MacBook, using Sublime Text 2, and I need to learn Python. I'd like to finish a YouTube tutorial on data science (manipulating Excel files, really "aggregating data in Excel"; I'll post the link below), and I need to import pandas, but apparently I can't.
Also worth mentioning: I'll also be using NumPy, xlrd and Matplotlib in the future, and I am not sure if these modules are available in Sublime Text 2.
The Challenge:
When I run this line, I get an error.
import pandas as pd
I have researched the problem, and it seems that I don't have package control on my sublime text and found this site with the package control addition- so here's the website that I found with a long code to download the package.
https://packagecontrol.io/installation#st2
1) Is this a legit site to download this software?
2) Can I even add these modules to Sublime Text 2 to do Data Science & view excel documents
3) If I add this installation "package control", will this package allow me to download pandas, and other modules into Sublime Text 2
4) If Yes, where can I find instructions for installing the Pandas/Numpy/Xlrd modules?
https://www.youtube.com/watch?v=4_BPNnKEMn8
AI: Sublime Text is a text editor - it is independent of your Python setup. I would suggest following a tutorial on setting up Anaconda on your machine, then follow a second tutorial on making sublime text find the Anaconda Python executable.
Here is a tutorial for setting up Python with Anaconda on a Mac.
Here is something I found with a simple search (disclaimer: I don't know anything about sublime text!)
Once you have anaconda installed, you would need to run this command in terminal to install the packages you need:
conda install numpy pandas xlrd matplotlib |