H: Is there any reason why providing symbolic features into an MLP wouldn't outperform feeding raw pixels to a CNN in a RL task?
I am tackling a RL problem (relaxed version of Space Fortress) with DQN. The usual approach would be to feed pixels into a CNN but that is usually very slow. I am considering feeding symbolic features such as coordinates, velocity and angles of the relevant elements into a Multi Layer Perceptron because then the training would be faster and I could try more things quicker. My idea is that the perceptual part of the task is not that interesting to learn here and I would like to focus on the RL part instead (I am experimenting with Hierarchical RL).
Given that, with all the symbolic features I would provide, it would be possible to reconstruct the full image and hence the state of the environment, is there any reason why a CNN pixel-based approach would be better?
PT: The environment has a fixed number of elements
Thanks
AI: If the features are able to fully define the image, there would be no reason to use a pixel-based approach. You are essentially performing lossless compression and (nearly) instantly acquiring a feature set, so you would see a dramatic performance difference between the CNN and MLP.
If you needed a quick solution, CNNs are great out-of-the-box algorithms, but if you are willing to put in the time to develop an MLP, it would provide a performance boost without losing accuracy. |
H: Package/function to convert the word to one hot vector in Python NLP
Is there a package or function in NLP which can be used to convert a word into a one-hot vector?
Thank you.
AI: There are several ways to convert words to one-hot encoded vectors. Since I do not know the data structure in which you store your data, I will assume it is a list.
from keras.preprocessing.text import Tokenizer

samples = ['The', 'dog', 'mouse', 'elephant']
tokenizer = Tokenizer(num_words=len(samples))

# This builds the word index
tokenizer.fit_on_texts(samples)

# One-hot representation
one_hot_results = tokenizer.texts_to_matrix(samples, mode='binary')
By changing the mode from binary to 'tfidf' or 'count', you can make a matrix of any type, apart from one hot.
You can achieve the same result using other packages such as sklearn, but it involves a few more lines of code.
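For completeness, here is a minimal sklearn sketch of the same idea (the word list is illustrative; OneHotEncoder expects a 2-D array, hence the reshape):
from sklearn.preprocessing import OneHotEncoder
import numpy as np

samples = np.array(['The', 'dog', 'mouse', 'elephant']).reshape(-1, 1)

# each unique word becomes one column; each row is that word's one-hot vector
encoder = OneHotEncoder(sparse=False)
one_hot = encoder.fit_transform(samples)
print(encoder.categories_)
print(one_hot)
Depending on your sklearn version, the argument may be called sparse_output instead of sparse. |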
H: Size of folds in k-fold cross-validation
When evaluating results using cross-validation, several strategies can be adopted, such as using 5 or 10 folds, doing leave-one-out cross-validation, or simply doing an 80/20 split.
Under which general conditions should I try one or another?
AI: I generally advocate for cross-validation in addition to a hold-out sample. As for the number of folds, that depends heavily on your data. Generally you start to approach diminishing returns after some point, but you should try, and evaluate, several regimes. This is very much an empirical question with no hard and fast best answer.
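As a minimal sketch of what "trying several regimes" can look like in practice (the dataset and model here are just placeholders):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# compare a few fold counts empirically; the mean score often stabilizes past some k
for k in (3, 5, 10):
    scores = cross_val_score(model, X, y, cv=k)
    print(k, scores.mean(), scores.std())
Looking at both the mean and the spread of the fold scores gives a feel for when adding folds stops paying off. |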
H: Keras VAE example loss function
The code here:
https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py
Specifically line 53:
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
Why is the cross entropy multiplied by original_dim? Also, does this function calculate cross entropy only across the batch dimension (I noticed there is no axis input)? It's hard to tell from the documentation...
AI: Keras metrics.binary_crossentropy computes the cross entropy averaged across all inputs (pseudocode):
original_dim = 3
x = [1,1,0]
x_decoded = [0.2393,0.7484,-1.1399]
average_BCE = binary_crossentropy(x, x_decoded)
print(average_BCE)
>>>0.1186
For this part of the autoencoder loss we need the sum, not the average, of the per-pixel reconstruction terms, and that sum is simply average_crossentropy_of_pixels * num_pixels (original_dim):
print(original_dim * average_BCE)
>>>0.3559
Another way of writing this part would be the following (which I think is more illustrative of what's happening, but probably less performant in Keras land):
xent_loss = K.sum(K.square(x - sigmoid(x_decoded)))
print(xent_loss)
>>>0.3559
Regarding the second part: since the first operation you are really doing is a subtraction, it implies that the tensors for the input and output are the same size. You can see this if you check the implementation code - which is better than the docs in this case - at line 3056:
https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow_backend.py |
H: Why is cross-validation score so low?
I am using Scikit-Learn for this classification problem. The dataset has 3 features and 600 data points with labels.
First I used a Nearest Neighbour classifier. Instead of using cross-validation, I manually ran the fit 5 times, each time re-splitting the dataset (80-20) into a training set and a test set. The average score turns out to be 0.61.
clf = KNeighborsClassifier(4)
score = 0
for i in range(5):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf.fit(X_train, y_train)
    score += clf.score(X_test, y_test)
print(score / 5.0)
However when I ran cross-validation, the average score is merely 0.45.
clf = KNeighborsClassifier(4)
scores = cross_val_score(clf, X, y, cv=5)
scores.mean()
Why does cross-validation produce significantly lower score than manual resampling?
I also tried Random Forest classifier. This time using Grid Search to tune the parameters:
param_grid = {
    'bootstrap': [True],
    'max_depth': [8, 10, 12],
    'min_samples_leaf': [3, 4, 5],
    'min_samples_split': [8, 10, 12],
    'n_estimators': [62, 64, 66, 68, 70]
}
clf_ = RandomForestClassifier()
grid_search = GridSearchCV(estimator = clf_, param_grid = param_grid,
                           cv = 5, n_jobs = -1, verbose = 2)
grid_search.fit(X, y)
grid_search.best_params_, grid_search.best_score_
The best score turned out to be 0.508 with the following parameters
({'bootstrap': True,
'max_depth': 10,
'min_samples_leaf': 4,
'min_samples_split': 10,
'n_estimators': 64},
0.5081967213114754)
I went ahead and made predictions on the whole 600 data points, and the accuracy is quite high: 0.7688.
best_grid = grid_search.best_estimator_
y_pred = best_grid.predict(X)
accuracy_score(y, y_pred)
I know .best_score_ is the 'Mean cross-validated score of the best_estimator'. But I don't understand why it is so much lower than the prediction accuracy on the whole set.
AI: In your random forest case, this is due to the fact that your final model is overfitting. Sklearn's GridSearchCV has a default argument refit = True, which takes the model with the best cross-validated performance and retrains it on the whole dataset. Your accuracy score is very high because it is measured only on your training data, whereas best_score_ measures how your model performs on data that it has not seen.
To wrap up: in your random forest you are overfitting very badly, as there is a big gap between your validation and training error. Try refit = False and you will not see this gap anymore (but you still have a problem, since you are still overfitting your training set with this model).
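A common way to sanity-check this is to keep a held-out test set that the grid search never sees; a minimal sketch using the same X, y and param_grid as above (the train_test_split and random_state are my additions):
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

grid_search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
grid_search.fit(X_train, y_train)

print('CV score:   ', grid_search.best_score_)                               # estimated on unseen folds
print('Test score: ', grid_search.best_estimator_.score(X_test, y_test))
print('Train score:', grid_search.best_estimator_.score(X_train, y_train))   # optimistic
The train score will typically sit well above the other two, which is exactly the gap described above. |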
H: De-normalization in Linear Regression
I have implemented a linear regression model on a dataset of 7 independent variables and 1 target, with the 2 approaches below:
1) Without normalization of the data, I get a mean squared error of 36530921.0123 and an R2 value of 0.7477.
2) With normalization of the data, I get a mean squared error of 5.468490570335696e-10 and an R2 value of 0.92750882996584. But in this case the target variable is also normalized.
Which is the better approach: adding new features to the 1st case, or going with the 2nd case, which has better R2 and MSE values?
If we go with the 2nd case, what steps need to be taken to denormalize the target variable?
Thank you.
AI: Unless you normalize the MSE in scenario 1 (or denormalize the MSE in scenario 2), comparing two MSEs on two different scales is meaningless.
You can have data with values varying from 10 to 30 million, centered and then normalized to -1/+1. Suppose you have an MSE of 1000 in the first case and 0.1 in the second: after (de)normalization you will easily see that the second MSE is actually far larger relative to the scale of the data.
That said, if you want to retrieve the target in scenario 2, you need to apply the reverse of the operations that were used to produce the "normalized" target. Assume, for instance, that you centered the target and divided it by its mean:
$$
Z = \frac{Y-\bar{Y}}{\bar{Y}}
$$
$$
Z = \alpha X_1 + \beta X_2 + \gamma X_3
$$
Where $Y$ is your initial target, $\bar{Y}$ its average, $Z$ your normalized target and $X_i$ your predictors.
When you apply your model and get a prediction, say $\hat{z}$, you can calculate its corresponding value $\hat{y}$ after applying the reverse transformations to your normalization.
$$
\hat{z} = \alpha x_1 + \beta x_2 + \gamma x_3
$$
$$
\frac{\hat{y}-\bar{y}}{\bar{y}} = \alpha x_1 + \beta x_2 + \gamma x_3
$$
$$
\hat{y} = \bar{y} *(\alpha x_1 + \beta x_2 + \gamma x_3 ) + \bar{y}
$$
Now in this particular case, as Ankit Seth pointed out, you do not need to normalize your target. The linear model adjusts its coefficients automatically, because the transformation applied to the target is itself linear.
However, if you apply a non-linear transformation to the target, the same logic applies. Say, for instance, that your model predicts the logarithm of your target.
$$
log(Y) = \alpha X_1 + \beta X_2 + \gamma X_3
$$
The reverse operation is to apply its inverse function, i.e. the exponential:
$$
Y = exp(\alpha X_1 + \beta X_2 + \gamma X_3)
$$
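In practice, if you normalized the target with a library scaler, the "reverse operations" above are just the scaler's inverse_transform; a minimal sketch with made-up numbers:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([[120.], [340.], [560.], [780.]])      # hypothetical raw target values
scaler_y = MinMaxScaler()
y_scaled = scaler_y.fit_transform(y)                # what the model is trained on

y_pred_scaled = np.array([[0.42]])                  # a model output on the normalized scale
y_pred = scaler_y.inverse_transform(y_pred_scaled)  # back to the original units
print(y_pred)
The same pattern works for StandardScaler or a manual transformation, as long as you keep the fitted scaler around. |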
H: Space between an object and the ground truth bounding box
Which way is better for drawing the ground truth boxes for object detection?
Drawing it so tight that the sides of the box touch the border of the object, or
Making a little space between the box and the object?
AI: The first image is a better bounding box. A perfect bounding box has no space in between the edges of the object and the box, but it also fully contains the object. |
H: Should I update my regularisation L1 and L2 regularisation parameters in online setting?
I have been working on online learning for a few weeks now, especially with Vowpal Wabbit and logistic regression. My understanding of the online learning algorithms and the problem is alright but I can't get my head straight about the regularisation issue.
In a standard machine learning problem, one uses a validation dataset in order to tune regularisations parameters for L1 and L2. But how do you choose those regularisation parameters in an online setting? Do you just fix them from start, or should I update them while training occurs?
AI: As you say @Alexis, cross-validation is the standard way of selecting a model (or its parameters) in offline learning (especially when prior knowledge is not available). For online training, one usually incorporates some sort of (automatic) compensation for the model parameters that adjusts them at the same time that training takes place (the alternative being to cross-validate on some gathered data, or equivalently to pre-optimize your parameters). Check also this answer as well as, for example, this paper (it shows that this issue is a very active research area)! |
H: Stochastic Gradient Descent Batching
I'm new to regression, and we are doing a very simple exercise in a course I'm taking to get a basic understanding of GD and SGD for linear regression.
From my understanding, the only difference between GD and SGD is that instead of performing the algorithm on the full dataset of size m, as GD does, SGD performs the operation on subsets of m.
My question is, for SGD does one simply perform the algorithm on the mini-batch, or is there some sort of summation of the results to come out with a final answer? Apologies if I'm not asking in the correct terms, I'm newer to some of the mathematical concepts involved.
AI: In SGD you just feed an example to your model, compute the gradient of the loss function of that example and update the weights according to the gradient of the loss of that example.
In mini-batch gradient descent you feed a batch to your model, compute the gradient of the loss of that batch and update the weights according to the gradient of the loss of that batch.
In fact, SGD is mini-batch gradient descent with batch size equal to 1. |
H: When to clean data?
I am very new to data science / ML and I have what I think is a very basic question - when to 'clean' the data?
Do I clean data before using it to train a classifier (a binary classifier in my experiments)?
Do I clean data that I try to classify using this classifer?
Both?
The data in my case is just a series of Tweets.
AI: Data cleaning, or data munging as it is often called, is the process of transforming the data from the raw form in which they exist after collection into another format, with the intent of making them more appropriate for further processing, e.g. training models.
This process takes place at the beginning of the whole procedure, before training and validating the models. In text-mining problems you also have to deal with punctuation marks, remove stopwords (this depends on the data representation you choose: for unigrams it is fine, but for bigrams it is not recommended at all), and apply stemming or lemmatization.
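For tweets, a minimal cleaning sketch could look like the following (the regexes and steps are illustrative, and nltk's stopword list needs to be downloaded once with nltk.download('stopwords')):
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stop_words = set(stopwords.words('english'))

def clean_tweet(text):
    text = text.lower()
    text = re.sub(r'http\S+|@\w+|#', ' ', text)   # drop URLs, mentions and hash signs
    text = re.sub(r'[^a-z\s]', ' ', text)         # drop punctuation and digits
    tokens = [stemmer.stem(t) for t in text.split() if t not in stop_words]
    return ' '.join(tokens)

print(clean_tweet("Loving the new #DataScience course!! https://example.com @someuser"))
Whatever steps you choose, apply the exact same function both to the tweets you train on and to the tweets you later classify. |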
H: Which Loss cross-entropy do I've to use?
I'm working with the dataset at https://www.kaggle.com/c/sf-crime to predict the crime category using Keras. I've encoded the category with pd.get_dummies and then used it as the validation data. At first I tried categorical_crossentropy as the loss with the adam optimizer, but the result was a very bad accuracy of nearly 0.3 and a loss of about 2.7. So I experimented with a different loss, binary_crossentropy. That result seems exaggerated: the accuracy is higher than 0.97 and the loss is less than 0.1 after the first epoch. I also apply a dropout of 0.3 to each layer. So which cross-entropy is correct, or better to use, for this scenario, and is this classification multi-label or multi-class?
AI: which kind of this classification is multi-label or multi-class?
If we take a look at the data panel at the competition's page in kaggle:
Category - category of the crime incident (only in train.csv). This is the target variable you are going to predict.
The target we are trying to predict is the category of the crime (which can take multiple values) so this answers your question: the problem is multi-class.
So which crossentropy is accurate or better use for this scenario?
There are three main types of cross-entopy in keras:
categorical cross-entropy is used for multi-class classification, where the target variable is one-hot encoded.
sparse categorical cross-entropy is used for multi-class classification, where the target variable is not one-hot encoded.
binary cross-entropy is used for binary classification problems.
From your question I assume that you think the different types of cross-entropy are interchangeable. That is not the case: you should select the cross-entropy based on your problem and on how you encoded your data.
In your case, since you have a multi-class problem and you state that you have used pandas.get_dummies(), which one-hot encodes your data, you should use categorical cross-entropy and nothing else!
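A minimal Keras sketch of a matching setup; y_onehot stands for the pd.get_dummies output, and num_features and the layer sizes are placeholders. The number of output units must equal the number of one-hot columns:
from keras.models import Sequential
from keras.layers import Dense, Dropout

num_classes = y_onehot.shape[1]        # number of columns produced by pd.get_dummies

model = Sequential([
    Dense(128, activation='relu', input_shape=(num_features,)),
    Dropout(0.3),
    Dense(num_classes, activation='softmax'),     # one probability per class
])
model.compile(loss='categorical_crossentropy',    # matches one-hot encoded targets
              optimizer='adam',
              metrics=['accuracy'])
With binary cross-entropy, each output is scored as an independent yes/no decision, which is why the reported accuracy looked misleadingly high. |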
H: Implementation of Stochastic Gradient Descent in Python
I am attempting to implement a basic Stochastic Gradient Descent algorithm for a 2-d linear regression in Python. I was given some boilerplate code for vanilla GD, and I have attempted to convert it to work for SGD.
Specifically -- I am a little unsure as to if I correctly implemented the loss function and partial derivatives, since I am new to regressions in general.
I do see that the errors tend to "zig zag" as expected. Does the following look like a correct implementation or have I made any mistakes?
#sample data
data = [(1,1),(2,3),(4,3),(3,2),(5,5)]

def compute_error_for_line_given_points(b, m, points):
    totalError = 0
    x = points[0]
    y = points[1]
    return float(totalError + (y - (m * x + b)) ** 2)

def step_gradient(b_current, m_current, points, learningRate):
    N = float(1)
    for i in range(0, 1):
        x = points[0]
        y = points[1]
        b_gradient = -(2/N) * (y - ((m_current * x) + b_current)) #this is the part I am unsure
        m_gradient = -(2/N) * x * (y - ((m_current * x) + b_current)) #here as well
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

err_log = []
coef_log = []
b = 0 #initial intercept
m = 0 #initial slope
iterations = 4
for i in range(iterations): #epochs
    for point in data: #one point at a time for SGD
        err = compute_error_for_line_given_points(b,m, point)
        err_log.append(err)
        b,m = step_gradient(b,m,point,.01)
        coef_log.append((b,m))
AI: There is only one small difference between gradient descent and stochastic gradient descent. Gradient descent calculates the gradient based on the loss function computed across all training instances, whereas stochastic gradient descent calculates the gradient based on the loss of a single training instance (or a small batch). Both of these techniques are used to find optimal parameters for a model.
Let us try to implement SGD on this 2D dataset.
The algorithm
The dataset has 2 features; however, we will want to add a bias term, so we prepend a column of ones to the data matrix.
shape = x.shape
x = np.insert(x, 0, 1, axis=1)
Then we initialize our weights. There are many strategies to do this; for simplicity I will set them all to 1, although setting the initial weights randomly is probably better in order to be able to use multiple restarts.
w = np.ones((shape[1]+1,))
Our initial separating line, plotted from these starting weights, is shown in the original answer (plot omitted here).
Now we will iteratively update the weights of the model if it mistakenly classifies an example.
for ix, i in enumerate(x):
    pred = np.dot(i,w)
    if pred > 0: pred = 1
    elif pred < 0: pred = -1
    if pred != y[ix]:
        w = w - learning_rate * pred * i
This line is the weight update w = w - learning_rate * pred * i.
We can see that doing this process continuously will lead to convergence; the original answer shows the separator after 10, 20, 50 and 100 epochs, and finally the converged result (plots omitted here).
The code
The dataset for this code can be found here.
The function which will train the weights takes in the feature matrix $x$ and the targets $y$. It returns the trained weights $w$ and a list of historical weights encountered throughout the training process.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def get_weights(x, y, verbose = 0):
    shape = x.shape
    x = np.insert(x, 0, 1, axis=1)
    w = np.ones((shape[1]+1,))
    weights = []
    learning_rate = 10
    iteration = 0
    loss = None
    while iteration <= 1000 and loss != 0:
        for ix, i in enumerate(x):
            pred = np.dot(i,w)
            if pred > 0: pred = 1
            elif pred < 0: pred = -1
            if pred != y[ix]:
                w = w - learning_rate * pred * i
            weights.append(w)
            if verbose == 1:
                print('X_i = ', i, ' y = ', y[ix])
                print('Pred: ', pred )
                print('Weights', w)
                print('------------------------------------------')
        loss = np.dot(x, w)
        loss[loss<0] = -1
        loss[loss>0] = 1
        loss = np.sum(loss - y )
        if verbose == 1:
            print('------------------------------------------')
            print(np.sum(loss - y ))
            print('------------------------------------------')
        if iteration%10 == 0: learning_rate = learning_rate / 2
        iteration += 1
    print('Weights: ', w)
    print('Loss: ', loss)
    return w, weights
We will apply this SGD to our data in perceptron.csv.
df = np.loadtxt("perceptron.csv", delimiter = ',')
x = df[:,0:-1]
y = df[:,-1]
print('Dataset')
print(df, '\n')
w, all_weights = get_weights(x, y)
x = np.insert(x, 0, 1, axis=1)
pred = np.dot(x, w)
pred[pred > 0] = 1
pred[pred < 0] = -1
print('Predictions', pred)
Let's plot the decision boundary
x1 = np.linspace(np.amin(x[:,1]),np.amax(x[:,2]),2)
x2 = np.zeros((2,))
for ix, i in enumerate(x1):
    x2[ix] = (-w[0] - w[1]*i) / w[2]
plt.scatter(x[y>0][:,1], x[y>0][:,2], marker = 'x')
plt.scatter(x[y<0][:,1], x[y<0][:,2], marker = 'o')
plt.plot(x1,x2)
plt.title('Perceptron Seperator', fontsize=20)
plt.xlabel('Feature 1 ($x_1$)', fontsize=16)
plt.ylabel('Feature 2 ($x_2$)', fontsize=16)
plt.show()
To see the training process you can print the weights as they changed through the epochs.
for ix, w in enumerate(all_weights):
    if ix % 10 == 0:
        print('Weights:', w)
        x1 = np.linspace(np.amin(x[:,1]),np.amax(x[:,2]),2)
        x2 = np.zeros((2,))
        for ix, i in enumerate(x1):
            x2[ix] = (-w[0] - w[1]*i) / w[2]
        print('$0 = ' + str(-w[0]) + ' - ' + str(w[1]) + 'x_1'+ ' - ' + str(w[2]) + 'x_2$')
        plt.scatter(x[y>0][:,1], x[y>0][:,2], marker = 'x')
        plt.scatter(x[y<0][:,1], x[y<0][:,2], marker = 'o')
        plt.plot(x1,x2)
        plt.title('Perceptron Seperator', fontsize=20)
        plt.xlabel('Feature 1 ($x_1$)', fontsize=16)
        plt.ylabel('Feature 2 ($x_2$)', fontsize=16)
        plt.show() |
H: Predicting customers purchase
Suppose we have a list of products categorized into 10 categories. We also have customers order details such as
order_id, product_name, quantity, order_date
We want to know, for a particular month, the most probable categories from which a customer might buy products. How can we approach this problem? Any suggestions?
AI: I recommend using an LSTM RNN or CNN to rank the most popular categories for the current month based on past purchasing history. I have built a working product-ranking model for the online shopping site Lands' End in the United States.
In order to achieve this, you need to follow several steps.
collecting the dataset
I think you already gathered the dataset for this part.
Preprocessing of the dataset for supervised learning:
Cleaning up the dataset - remove rows that contain NaN values.
Mapping the categories and products to one-hot vectors - i.e. row vectors whose elements are 1s and 0s.
Normalization - rescale all real-valued features into [0, 1].
split the dataset into the training, valid and test data.
Build Deep learning model like LSTM RNN
select the deep learning framework. I recommend Tensorflow, Keras.
set the dimension of model. i.e. the number of layers, the number of neurons per layer.
optimizer, metrics
Training the model and testing
Get the predicted ranking of the categories by month.
Pro tip: the most important thing for getting a better result is which column you set as the output for ranking. You should sum up the total purchase count per product or per category by month or week.
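As a minimal sketch of that aggregation step with pandas (the column names follow the order fields in the question; the category column is assumed to have been joined in from the product list):
import pandas as pd

orders = pd.DataFrame({
    'order_date': pd.to_datetime(['2018-01-03', '2018-01-20', '2018-02-05']),
    'category':   ['electronics', 'grocery', 'grocery'],
    'quantity':   [1, 3, 2],
})

# total purchase count per category per month, sorted so the top categories come first
monthly = (orders
           .assign(month=orders['order_date'].dt.to_period('M'))
           .groupby(['month', 'category'])['quantity']
           .sum()
           .reset_index()
           .sort_values(['month', 'quantity'], ascending=[True, False]))
print(monthly)
These monthly counts (or their rankings) are what a model such as an LSTM would then be trained to predict. |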
H: Linear Regression. Interpret weights as an effect on a prediction?
In a basic linear regression, can I use the weights of each explanatory variable to describe their relative effects on the predicted value? If parameter A has a weight of 100, and parameter B has a weight of 10, can I say that parameter A has 10 times more effect on the outcome than parameter B?
An example: From a weather data set, I used two parameters, humidity and pressure, to make a prediction on temperature.
When I plot humidity vs. temperature, there's a very clear inverse relationship between the two. That is, the scatter plot tends down and to the right.
When I plot pressure vs. temperature, there's no relationship at all. The pressure in this report has very little variance, and the scatter plot is nearly vertical.
Based on those two plots, I intuit that changes in humidity have an effect on the high temperature, while changes in pressure have almost no effect.
I built a linear regression model (gradient descent, by hand, in Octave). Humidity had a weight of -.32386. Pressure had a weight of -.02219. Humidity's weight was 14.6 times larger than pressure's weight.
Based on that, can I state that parameter Humidity has nearly 15 times more effect on Temperature than parameter Pressure?
Forgive what must seem like a very amateur question. If you can give a relatively simple answer and point me toward resources where I can learn more, I'd really appreciate it. Thanks in advance.
AI: In a basic linear regression, can I use the weights of each
explanatory variable to describe their relative effects on the
predicted value? If parameter A has a weight of 100, and parameter B
has a weight of 10, can I say that parameter A has 10 times more
effect on the outcome than parameter B?
Somewhat, yes. Precisely, it means that if A changes by one unit, the outcome changes by 100 units on average (while B is held fixed, because it is a multiple linear regression - you have more than one independent variable).
There are tons of examples and blog posts explaining it; see for example here.
You can also analyze your regression model further with some statistical tests, such as checking how the variables are correlated using Variance Inflation Factors and estimating the statistical significance (p-value, F-test); here is a very nice intro.
An example: From a weather data set, I used two parameters, humidity
and pressure, to make a prediction on temperature.
I do not see any problem with your analysis. As you said, you see an inverse correlation between humidity and temperature in your plot, as well as via your -0.32386 weight. The weight of -0.02219 for pressure, on the other hand, is nearly zero and negligible, and accordingly you won't see such a relationship between pressure and temperature. |
H: Data Visualisation for Dataframe having 3 columns
I have the data below, and I am looking for an effective visualization or graphical method (in Python) to understand how err is distributed with respect to min and max.
Data_X_err.head()
Out[137]:
min max err
0 35435 35933 1.40
1 35155 36382 3.43
2 35305 36042 2.07
3 35216 36225 2.82
4 35196 36259 2.98
Thank you.
AI: Use matplotlib:
import matplotlib.pyplot as plt
%matplotlib inline

f, (ax1, ax2) = plt.subplots(1, 2)
# use bracket access: .min and .max are DataFrame methods, not the columns
ax1.scatter(Data_X_err['min'], Data_X_err['err'])
ax2.scatter(Data_X_err['max'], Data_X_err['err'])
Alternatively you could use seaborn's pairplot:
import seaborn as sns
%matplotlib inline

sns.set(style="ticks")
sns.pairplot(Data_X_err) |
H: Grad Checking, verify by average?
I am running Gradient Checking, to spot any discrepancy between my mathematically-computed gradient and the actual sampled gradient - to reassure my backprop was implemented correctly.
When computing such a discrepancy, can I sum-up squares of differences, then take their average? I could then use this average as my estimate of how correctly the network computes the gradient:
$$\frac{1}{m}\sum_{i=0}^{i=m}(g_i-n_i)^2$$
or even:
$$\sqrt{\sum_{i=0}^{i=m}(g_i-n_i)^2}$$
where $g$ is a gradient from backpropagation, and $n$ is gradient from gradient checking.
However, Andrew Ng instead recommends:
$$\frac{\vert \vert (g-n) \vert \vert _2 }{
\vert \vert g \vert \vert _2 + \vert \vert n \vert \vert _2}$$
where $\vert \vert . \vert \vert _2$ is the length of the vector.
Another post also recommends a slightly different approach: https://stats.stackexchange.com/a/188724/187816
Why would their approaches be better than mine?
AI: Let me give you an example where Andrew's recommendation works better than yours:
Let's say that the real gradient is $(0, 0, 0)$ and the gradient you have computed is $(10^{-4}, 10^{-4}, 10^{-4})$. Then your average would return $10^{-8}$, while Andrew's recommendation would return $1$. Your metric could fool you into thinking that your gradient is computed properly and the error is just a numeric issue, while Andrew's cannot, because it takes into account that the gradient itself can be very small.
To wrap up: if your gradient's norm is not close to zero, it doesn't matter much. However, when the gradient is close to zero, your metric can fool you into thinking that your gradient is right when it is not.
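A quick numerical check of that example (plain numpy, in float64):
import numpy as np

g = np.zeros(3)            # "true" gradient
n = np.full(3, 1e-4)       # numerically estimated gradient

mse   = np.mean((g - n) ** 2)                                            # ~1e-8, looks harmless
ratio = np.linalg.norm(g - n) / (np.linalg.norm(g) + np.linalg.norm(n))  # exactly 1.0
print(mse, ratio)
The relative form immediately flags a 100% disagreement that the raw mean of squared differences hides. |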
H: How to find the ranges in Equal frequency/depth binning?
I have been looking into the site http://www.saedsayad.com/unsupervised_binning.htm and there it shows range values to the right under equal frequency binning ....
I have so much looked to find how the ranges are formed but has been in vain. So , can somebody explain how those ranges are formed for the given dataset or explain the freq binning procedure in a good manner.
Great thanx for any help on this!
AI: As far as I can see, the choice of the bin size/frequency is arbitrary in those examples. Equal-frequency binning simply means choosing your bin boundaries so that each bin contains the same number of elements. For the equal-frequency approach, it looks like they order the elements by size and place each bin edge in the middle between the highest element of bin A and the lowest element of bin B.
If you want to be really fancy you can use methods like Bayesian blocks, which choose the bin sizes dynamically with respect to information content.
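A minimal pandas sketch of the idea, on a small illustrative series (not the exact data from the linked page):
import pandas as pd

values = pd.Series([5, 10, 11, 13, 15, 35, 50, 55, 72, 92, 204, 215])

# equal-frequency binning: each of the 3 bins holds the same number of points
print(pd.qcut(values, q=3).value_counts())

# equal-width binning for comparison: the value range is split into 3 equal intervals
print(pd.cut(values, bins=3).value_counts())
Comparing the two outputs makes it clear how the equal-frequency edges end up between neighbouring sorted values. |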
H: How to use the same minmaxscaler used on the training data with new data?
I'm using a Keras LSTM model to make predictions, and the code below is used to scale the data.
The inputs are shaped like (n, 11, 1) and the label is 1D.
DailyDemand.py
#scaling data
scaler_x = preprocessing.MinMaxScaler(feature_range =(-1, 1))
x = np.array(x).reshape ((len(x),11 ))
x = scaler_x.fit_transform(x)
scaler_y = preprocessing.MinMaxScaler(feature_range =(-1, 1))
y = np.array(y).reshape ((len(y), 1))
y = scaler_y.fit_transform(y)
# Split train and test data
x_train=x[0: train_end ,]
x_test=x[train_end +1: ,]
y_train=y[0: train_end]
y_test=y[train_end +1:]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
# Train and save the Model named fit1 in a json and h5 files
[....]
# serialize model to JSON
model_json = fit1.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
fit1.save_weights("model.h5")
print(">>>> Model saved to model.h5 in the disk")
Now I'm trying to predict new values for new data with this trained model, so I loaded the model from the files:
predict.py
from DailyDemand import scaler_y
from DailyDemand import scaler_x
[...]
# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
########################################
# make prediction with the loaded model
FeaturesTest = [267,61200,695,677,70600,116700,130200,768,659,741,419300]
xaa = np.array(FeaturesTest).reshape ((1,11 )).astype(float)
print(xaa)
xaa = scaler_x.fit_transform(xaa)
xaa = xaa.reshape(xaa.shape +(1,))
print("print FeaturesTest scalled: ")
print(xaa) # incorrect scalled value, always returns -1 ones
xaa = [[[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]]]
tomorrowDemand = loaded_model.predict(xaa)
print("tomorrowDemand scalled: ", tomorrowDemand)
prediction = scaler_y.inverse_transform(np.array(tomorrowDemand).reshape ((len(tomorrowDemand), 1))).astype(int)
print ("the real demand is 95900 and the prediction is: ", prediction)
The problem is: how can I use the same scaler that was used on the training data on the new data?
I want to know if I made a mistake in this code when reusing the scaler on the new data.
AI: You are refitting scaler_x on your test set, which you don't want. Change this line:
xaa = scaler_x.fit_transform(xaa)
to
xaa = scaler_x.transform(xaa)
You are getting [-1, -1, ..., -1] because with one sample, each feature is equal to the minimum.
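A small standalone illustration of the difference (made-up numbers):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(-1, 1))
train = np.array([[0.], [50.], [100.]])
scaler.fit(train)                     # learns min=0, max=100 from the training data

new = np.array([[75.]])
print(scaler.transform(new))          # 0.5  -> scaled with the training min/max
print(scaler.fit_transform(new))      # -1.0 -> refitted on the single new sample
If the training and prediction code live in different scripts, persist the fitted scalers (for example with joblib.dump) instead of re-creating them. |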
H: Very low accuracy of new data compared to validation data
I'm trying to train the neural network to predict the movement of a particular security on the market.
I train on historical data collected over a year. The inputs to the neural network are candlestick data: close price and value.
Before being fed to the network, these data are normalized separately for each dataset using the z-score algorithm. This immediately raises a question: the normalized output is not confined to [0, 1] or [-1, 1] and can reach 10 or more (in both directions). Is that okay?
All sets are first shuffled and then divided into two parts: train and test (80:20).
The output is a class: the price will go up or down.
For train, the error is reduced to zero and the accuracy to 100%. For test, the error reaches a minimum of 0.28 and the accuracy reaches 90%
I tried to use this neural network to predict the next two months, which were not used in training. They went through exactly the same normalization. However, the accuracy of this forecast is only 0.5289256204750912, and the error is 1.8223290671986982.
I know I'm doing something wrong, but I don't know what... I hope someone can help me figure this out.
PS: z-score was used instead of the usual min-max scaling; however, no particular improvement was observed with min-max either.
AI: You are experiencing Data Leakage. In a comment, you explained that you shuffle your data before splitting into train/validation. For each validation point, you likely are showing the model data that is nearby temporally both before and after the time of the validation point. This is information that the model can not possibly hope to have when running in real time.
To alleviate this, I would keep the data in the correct time order, and instead take the validation data as a contiguous chunk of time. If you want to be the most careful, you could throw data away in a small window around the validation data, so that the training data won’t contain information about the edges of the validation data.
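A minimal sketch of such a time-ordered split (X and y are assumed to be sorted by time; the 80% cut-off and the gap size are arbitrary choices):
import numpy as np

n = len(X)
cut = int(0.8 * n)
gap = 50                                   # samples discarded around the boundary

X_train, y_train = X[:cut - gap], y[:cut - gap]
X_val,   y_val   = X[cut:],       y[cut:]
There is no shuffling before the split, so the validation period lies strictly after (and slightly apart from) the training period.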
This appears to be a good resource, but I have not read it too thoroughly: https://www.kaggle.com/dansbecker/data-leakage |
H: Image resizing and padding for CNN
I want to train a CNN for image recognition. The training images do not have a fixed size. I want the input size for the CNN to be 50x100 (height x width), for example. When I resize some small images (for example 32x32) to the input size, the content of the image is stretched horizontally too much, but for some medium-sized images it looks okay.
What is the proper method for resizing images while avoiding the content being destroyed?
(I am thinking about resizing images while keeping the ratio of width and height, and then padding them with 0s up to the full size. Would that method be okay?)
AI: This question on stackoverflow might help you. To sum up, some deep learning researchers think that padding a big part of the image is not a good practice, since the neural network has to learn that the padded area is not relevant for classification, and it does not have to learn that if you use interpolation, for instance.
[Update April 11, 2022]
Based on more recent research (2019), it seems that zero-padding does not harm the CNN model, since zero inputs do not change the learned weights during forward or backward propagation through the convolution.
Paper reference: https://journalofbigdata.springeropen.com/articles/10.1186/s40537-019-0263-7
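Regarding the approach proposed in the question (resize keeping the aspect ratio, then pad to the target size), a minimal sketch with Pillow could look like this; the function name, the centred placement and the black padding colour are my choices:
from PIL import Image

def resize_with_padding(img, target_w=100, target_h=50):
    # scale so the image fits inside the target while keeping its aspect ratio
    scale = min(target_w / img.width, target_h / img.height)
    new_w, new_h = int(img.width * scale), int(img.height * scale)
    resized = img.resize((new_w, new_h), Image.BILINEAR)
    # paste onto a zero (black) canvas, centred
    canvas = Image.new(img.mode, (target_w, target_h))
    canvas.paste(resized, ((target_w - new_w) // 2, (target_h - new_h) // 2))
    return canvas

padded = resize_with_padding(Image.open('example.jpg'))
This avoids the extreme stretching of small images while keeping a fixed network input size. |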
H: Default value of mtry for random forests
It is argued that the default value of mtry for random forests is the square root of the total number of features (for classification) and the number of features divided by 3 (for regression). Can someone point me to the literature where this is specifically mentioned?
AI: The textbook 'The Elements of Statistical Learning' by authors Trevor Hastie, Robert Tibshirani, and Jerome Friedman discuss how to tune the number of variables to sample as you build the trees in Chapter 15, Section 3. The pdf of the text is freely available and can be found here.
Note that the values you have listed are in fact default values, as you have mentioned. In practice, it is best to tune the value of 'mtry', as the best value will depend on the specific problem and dataset you are working with. |
H: Using handcrafted features in CNN
What is the difference between using CNN with handcrafted features and CNN without handcrafted features?
Thank you
AI: A CNN automatically extracts features, so hand-crafting features has become unnecessary for most applications. It learns what features to extract via backpropagation.
You can introduce handcrafted features in a few ways to boost the performance of a CNN:
inject hand-crafted features into the fully connected layer
feed the CNN output into another classifier alongside the hand-crafted features
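A minimal Keras sketch of the first option (concatenating handcrafted features into the fully connected part); the input shapes and layer sizes are placeholders:
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Concatenate

img_in = Input(shape=(64, 64, 1))    # raw image input
feat_in = Input(shape=(10,))         # handcrafted feature vector

x = Conv2D(16, (3, 3), activation='relu')(img_in)
x = MaxPooling2D()(x)
x = Flatten()(x)

# learned CNN features and handcrafted features meet before the dense layer
merged = Concatenate()([x, feat_in])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[img_in, feat_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
The network can then learn how much weight to give the handcrafted features relative to the ones it extracts itself.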
This paper explores such concepts. |
H: Class distribution discrepancy training/validation. Loss now uninterpretable?
I have a 3-class image classification problem. The classes are highly unbalanced (with about 98% of the images beloning to one class). To counteract this unbalanced data, I weight the losses by using the inverse of the class probabilities (1/(Proportion of class)). This means that the loss gets multiplied a great lot for the minority classes. This does help in counteracting the class imbalance.
However, for this weighted loss I use the proportions from the training dataset. In the validation dataset the distribution is somewhat different (96.66%, 2.59%, 0.75% in the validation set as opposed to 98.28%, 0.98%, 0.73% in the training set). This means that if I use the weights based on the training set, misclassified class-1 images in particular get punished a lot more than they should according to the distribution in the validation set.
The reason for this distribution discrepancy is that I can't just do a random split on my images, since they are taken by some 10 cameras all taking pictures of the same respective static scenery (highways). This means that I have to hold out all images of about 2/3 cameras as validation and use the other cameras for training, otherwise the model will be too optimistic, since validation images come from the same camera as the train images.
This makes the weighted loss uninterpretable for the validation data. Is there a way to counteract this? For instance, would giving separate weights for the train/validation sets (based on their respective distributions) help?
AI: The validation set is used to estimate how well the model is fitting to your solution space. For instance, if your training loss continues to decrease but your validation loss remains constant or increases, then you should employ early stopping as to avoid overfitting. This does not require a weighted loss.
If you wanted to compare the training loss to the validation loss, you cannot use the same weights if the distributions differ. Instead, you can simply compare the average unweighted loss per set. The weighted loss is for penalization. |
H: Improve test accuracy for TensorFlow CNN
I'm trying to use TensorFlow for signal classification. The signals are either normal or high-risk. For this purpose I used convolutional neural networks. The length of the signals is 685 and the architecture is:
Convolution layer with 27 channels and 1 by 16 window size and stride 1.
Max pooling layer with 1 by 2 window size and stride 2.
Convolution layer with 14 channels and 1 by 32 window size and stride 1.
Max pooling layer with 1 by 2 window size and stride 2.
Convolution layer with 4 channels and 1 by 32 window size and stride 1.
Max pooling layer with 1 by 2 window size and stride 2.
Convolution layer with 3 channels and 1 by 10 window size and stride 1.
Max pooling layer with 1 by 2 window size and stride 2.
Fully connected layer with 20 neurons and dropout layer.
Fully connected layer with 10 neurons and dropout layer.
And finally Soft max layer.
After training the network using AdamOptimizer with learning rate 0.001 on 150000 signals, the training accuracy is near 95 percent (batch training with batch size 16 is used); however, the testing accuracy on 20000 new signals is almost 50 percent. Since there are just 2 classes, this accuracy is no better than a random guess.
model="conv1d-27-16-1,maxpool-2,conv1d-14-32-1,maxpool-2,conv1d-4-32-1,maxpool-2,conv1d-3-10-1,maxpool-2,full-20,full-10,softmax"
cnn=CNN_1D(model,input_size=685,n_classes=2,num_epochs=20,batch_size=16,dropout=0.75)
cnn.read_data('train_input','train_targets','test_input','test_targets')
cnn.build_model()
cnn.training(validation_set='all')
How can I improve the testing accuracy in my network?
AI: This is clearly a case of overfitting, since your validation error is way bigger than your training error. Having understood that overfitting is your problem, there are many ways you can address this problem:
Dropout (keras)
Weight decay (L2 regularization)
Early stopping (keras)
I can see that you are already using dropout, so setting the dropout rate higher should help. Moreover, you can try using dropout in more layers, not just the last two. If raising the dropout rate and adding dropout to more layers does not work, then your training and validation data probably do not come from the same distribution; you can study the statistical properties of both sets and check whether they match.
Extra comment: if there are just two classes, why don't you use a sigmoid activation function in the last layer?
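For reference, a minimal Keras sketch combining the three suggestions above (dropout, L2 weight decay, early stopping), shown on a simple dense model with a sigmoid output for brevity; the layer sizes are arbitrary:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers
from keras.callbacks import EarlyStopping

model = Sequential([
    Dense(64, activation='relu', input_shape=(685,),
          kernel_regularizer=regularizers.l2(1e-4)),   # weight decay
    Dropout(0.5),                                      # dropout
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
The same regularizers and callback can be attached to the convolutional architecture described in the question. |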
H: GradientChecking, can I blame float precision?
I am trying to GradientCheck my c++ LSTM. The structure is as follows:
output vector (5D)
Dense Layer with softmax (5 neurons)
LSTM layer (5 neurons)
input vector (5D)
My gradient check uses this formula
$$d = \frac{\vert \vert (g-n) \vert \vert _2 }{
\vert \vert g \vert \vert _2 + \vert \vert n \vert \vert _2}$$
The Dense Layer returns a discrepancy of 2.843e-05 with an epsilon of 5e-3.
With 5e-5 or lower the network doesn't see any changes in Cost at all, anything greater than 5e-2 results in low precision.
This is only for top layer, and I am using 'float' 32 bit values for everything.
For the LSTM layer I am using 5e-3 as well, but the discrepancy is 0.95.
The LSTM seems to converge fairly quickly on any task
0.95 bothered me, so I manually compared NumericalGradient array against the BackpropGradient array, side by side. All of the gradients match sign, and only differ in size of each entry.
For example:
numerical: backpropped:
-0.015223 -0.000385
0.000000 0.000000
-0.058794 -0.001509
-0.000381 -9.238e-06
9.537e-05 2.473e-06
0.000215 6.266e-06
-0.015223 -0.000385
...
As you can see, the signs do indeed match, and the numerical gradient is always larger than the back propped gradient
Would you say it is acceptable and I can blame float precision?
Edit:
Somewhat solved - I simply forgot to turn off "average the initial gradients by the number of timesteps" during backprop of my LSTM.
That's why my gradients were always smaller in the "backpropped" column.
I am now getting a discrepancy of 0.00025 for the LSTM.
Edit: setting epsilon to 0.02 (lol) seems like a sweet spot, as it results in a discrepancy of 6.5e-05. Anything larger or smaller makes it deviate from 6.5e-05, so it seems like a numerical issue... Only 2 layers deep though, weird af.
Has anyone run into this precision issue before?
AI: The fact that there is a sweet spot is a common issue in numerical differentiation. Using a large epsilon gives a large truncation error, because the numerical derivative has error $O(\epsilon)$ (or $O(\epsilon^2)$ if you use centered differences), while using a very small epsilon results in cancellation error (see catastrophic cancellation), because you are subtracting two numbers that are very close. It seems that $\epsilon = 0.01$ is a fair trade-off between these two extreme situations in your case.
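For reference, a small numpy sketch of a centered-difference gradient check together with the relative-error metric (computed in float64 here; with 32-bit floats, as in the question, the achievable discrepancy floor is correspondingly higher):
import numpy as np

def numerical_grad(f, w, eps=1e-4):
    # centered-difference estimate of the gradient of f at w
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def relative_error(g, n):
    return np.linalg.norm(g - n) / (np.linalg.norm(g) + np.linalg.norm(n) + 1e-12)

# toy check: f(w) = sum(w**2), whose exact gradient is 2*w
w = np.array([0.3, -1.2, 0.7])
print(relative_error(2 * w, numerical_grad(lambda v: np.sum(v ** 2), w)))
With float32 arithmetic everywhere, the same check typically bottoms out several orders of magnitude higher than in float64, which is consistent with what you are seeing. |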
H: Ideal aggregation function for Partially Connected Neural Network (PCNN)
I am building a Python library that creates Partially Connected Neural Networks based on input and output data (X,Y). The basic gist is that the network graph is arbitrarily updated with nodes and edges in the hidden layers.
Example Data:
X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=np.float)
Y = np.array([[0],[1],[1],[0]], dtype=np.float)
Example Graph:
I am currently using the sum product aggregation function to calculate each layer's values:
$$
\sum_{i=0}^n w_{ij}x_i
$$
The synapse weights are denoted by two subscripts $ij$. Subscript $i$ represents the previous neuron and subscript $j$ represents the current neuron under consideration. The sum product for current neuron $j$ is computed as:
$$
s_j = w_{0j}x_0 + w_{1j}x_1 + ... + w_{nj}x_n = \sum_{i=0}^n w_{ij}x_i
$$
Is sum product an ideal aggregation function for PCNNs? Dot product won't work as the tensor sizes are almost always incompatible.
Update: As Bence Mélykúti suggested, my solution was to use a weight vector $w$ that is padded with zeros and reshaped for dot product. Here is how I accomplished it in my Python library:
from functools import reduce
import numpy as np
import math

# An example of some input/sensor data
X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=np.float16)
# From the graph, two inbound synapses in layer 1, here randomly generated
W = np.random.random((2,))

# Enumerate the element count in each array
xElements = reduce(lambda x, y: x*y, list(X.shape))
wElements = reduce(lambda x, y: x*y, list(W.shape))

# If previous layer has more elements than the weights in the current layer
if xElements > wElements:
    # Create a list of zeros of shape X.shape
    A = np.zeros(X.shape).flatten().tolist()
else:
    # Otherwise, create a list with the larger number of elements in W
    A = np.zeros(W.shape).flatten().tolist()

# If the list is odd, add one element to ensure it is even
# Done so the array can be reshaped to fit the data and dot product
if len(A) % 2 != 0:
    A.extend([np.float16(0.0)])

# Fill the list with the elements from W
for ix,ele in enumerate(W):
    A[ix] = ele
A = np.array(A)

# Convert list to an array and reshape to be compatible with dot product
A = A.reshape(X.shape[-1],(math.floor(A.shape[0]/2)))

result = np.dot(X,A)
The result of the dot product looks like:
print(result)
array([[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ],
[0.45510922, 0.79058734, 0. , 0. ],
[0.45510922, 0.79058734, 0. , 0. ]])
Thanks Bence!
AI: What you call sum product is the same as a dot product of two vectors of equal length, $(w_{0j},w_{1j},\dots,w_{nj})$ and $(x_0,x_1,\dots,x_n)$.
Based on your diagram, your difficulty is that you want to be able to define connections between neurons that are not in consecutive layers (in other words, the connections might skip layers). I assume that there are no directed edges (synapses) that would go in the opposite direction (from layer $n$ to layer $m$ for some $m<n$). In this setting, you have to forward propagate information to layer $n$ from input $X$ and layers $1,2,\ldots,n-1$. I don't know when and how often you want to 'update' your network topology. Do you want to change it during training?! (Otherwise you'd fix a partially connected NN and you wouldn't write update in your question.)
For forward propagation in a fixed network topology, I would associate to neuron $j$ a vector of weights $(w_{ij})_{i\in I_j}$, where $I_j$ is the set of neurons that bring information to $j$ (those $i$ where there exists a directed edge from $i$ to $j$). Then you could either cherry-pick the neurons $i$ from the set of all neurons (by maintaining a list of indexes to find them in the vector of all neurons) and put them into a vector $x$, which is now as long as $(w_{ij})_{i\in I_j}$, and you can use dot product on the two vectors. (With numpy for Python, numpy.dot would work.) You would need to do this separately for all $j$ in the given layer. As these are receiving information from differing numbers of inputs and neurons, they indeed cannot be arranged into a matrix/tensor, not even receiving neurons of the same layer.
Or you could force all potential preceding neurons into one vector. Neuron $j$ of layer $n$ can only receive information from neurons $x=(<$input nodes$>$, $<$neurons of layer $1>$, $\ldots$, $<$neurons of layer $n-1>)$. This is a vector, and you can define a vector $w$ of equal length for $j$, which would indeed have many zeros, and $x$ and $w$ can be multiplied. For all neurons of layer $n$, you could stack these $w$ as row vectors on top of each other into a matrix/tensor of size (the number of neurons in layer $n$) times (the size of input $+$ the number of neurons in layer $1$ $+\ldots+$ the number of neurons in layer $n-1$) and use np.dot between this matrix and the $x$.
How to do training (backpropagation) with such topologies is something you have to consider a bit. I suspect it will work without much adjustment compared to a fully connected neural network without layer skipping. In the second option, you would only need to zero out the weights which must be kept zero after each backpropagation update. This indeed carries overhead if you're not careful. But if you maintained a binary ($0$ or $1$) 'mask' matrix of which weights are live (which correspond to existing connections), then you would only ever update these weights in the backpropagation.
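A small numpy sketch of that second option (a zero/one mask over a full weight matrix); the shapes, activation and connectivity pattern are made up for illustration:
import numpy as np

x = np.array([0.0, 1.0])                      # input vector (2 features)
W1 = np.random.random((3, 2))                 # layer-1 weights (3 neurons)
mask1 = np.array([[1, 0], [1, 1], [0, 1]])    # 0 = connection does not exist
h1 = np.tanh((W1 * mask1) @ x)                # aggregation + activation

# layer 2 may receive both the input features and layer-1 neurons (skip connections)
prev = np.concatenate([x, h1])
W2 = np.random.random((2, prev.size))
mask2 = np.random.randint(0, 2, W2.shape)     # arbitrary partial connectivity
h2 = np.tanh((W2 * mask2) @ prev)
During training, the same masks would be applied after each weight update (or directly to the gradients) so that non-existent connections stay at zero.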
Additionally, you may want to introduce a bias term $w_{-1j}$ (also known as intercept term, a constant offset) as if $x$ had an extra dimension that is kept constant $1$. After the aggregation with the sum product, you should also pass the result through a non-linear activation function before entering the value into the neuron in the next layer. |
H: When is precision more important over recall?
Can anyone give me some examples where precision is important and some examples where recall is important?
AI: For rare-cancer data modelling, anything that doesn't account for false negatives is a crime: missing an actual case is very costly, so recall is a better measure than precision.
For YouTube recommendations, false negatives are less of a concern, whereas recommending irrelevant videos (false positives) hurts more, so precision is better here. |
H: Do I need a next for loop in order to get all values?
When I print this in the module without writing to a file, it prints all values. But when I print to the designated file, it only prints the first value.
#Create the model base
from sklearn.feature_selection import chi2
import numpy as np

N = 2
for UNSPSC, category_id in sorted(category_to_id.items()):
    features_chi2 = chi2(features, labels == category_id)
    indices = np.argsort(features_chi2[0])
    feature_names = np.array(tfidf.get_feature_names())[indices]
    unigrams = [v for v in feature_names if len(v.split(' ')) == 1]
    bigrams = [v for v in feature_names if len(v.split(' ')) == 2]
    with open('results.csv', 'w') as f:
        print("# '{}':".format(UNSPSC), file=f)
        print(" . Most correlated unigrams:\n. {}".format('\n. '.join(unigrams[-N:])), file=f)
        print(" . Most correlated bigrams:\n. {}".format('\n. '.join(bigrams[-N:])), file=f)
How can I adjust this so that all values get printed to the file?
AI: Don't forget to open the file first - and open it only once, outside the loop. Your code re-opens results.csv in 'w' mode on every iteration, which truncates the file each time, so only one category's results survive.
Py 2.x:
the_opened_file = open(filepath, 'w')
for item in my_list:
    the_opened_file.write("%s\n" % item)
the_opened_file.close()
Py 3.x:
with open(filepath, 'w') as fileptr:
    for item in my_list:
        fileptr.write("{}\n".format(item))
If you want to save the list itself, convert it to numpy format using np.asarray(my_list), save it with np.save(file, array, ...), and reload it whenever required using np.load().
H: Neural network using Tensorflow
Is there any possibility that the cost function might end up in a local minimum rather than the global minimum when implementing a neural network using TensorFlow?
AI: Most of the critical points in a neural network are not local minima but saddle points, as can be seen in this question. Although it is not impossible to fall into a local minimum, the probability of it happening is so low that in practice it does not happen, except in very special cases such as a single-layer perceptron.
Local minima being so hard to find, it is even more unlikely that your optimization method will land exactly in the global minimum. All that we do in deep learning is decrease the loss function to find fairly good parameters; reaching an exact local or global minimum is extremely unlikely. |
H: What does "baseline" mean in the context of machine learning?
What does "baseline" mean in the context of machine learning and data science?
Someone wrote me:
Hint: An appropriate baseline will give an RMSE of approximately 200.
I don't get this. Does he mean that if my predictive model on the training data has a RMSE below 500, it's good?
And what could be a "baseline approach"?
AI: A baseline is the result of a very basic model/solution. You generally create a baseline first and then try more complex solutions in order to get a better result. For a regression problem like yours, a typical baseline is simply predicting the mean (or median) of the training targets for every instance and reporting its RMSE.
If you achieve a better score than the baseline, that is good.
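As a minimal sketch with scikit-learn (X_train, y_train, X_test, y_test are assumed to exist):
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error

baseline = DummyRegressor(strategy='mean')   # always predicts the training mean
baseline.fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_test, baseline.predict(X_test)))
print(rmse)
Any real model should beat this number comfortably; the hint suggests such a baseline lands around an RMSE of 200 on your data. |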
H: Accuracy and loss don't change in CNN. Is it over-fitting?
My task is to perform classify news articles as Interesting [1] or Uninteresting [0].
My training set has 4053 articles out of which 179 are Interesting. The validation set has 664 articles out of which 17 are Interesting.
I have preprocessed the articles and converted to vectors using word2vec.
The CNN architecture is as follows:
sentence_length, vector_length = 500, 100

def create_convnet(img_path='../new_out/cnn_model_word2vec.png'):
    input_shape = Input(shape=(sentence_length, vector_length, 1))

    tower_1 = Conv2D(8, (vector_length, 3), padding='same', activation='relu')(input_shape)
    tower_1 = MaxPooling2D((1,vector_length-3+1), strides=(1, 1), padding='same')(tower_1)
    tower_1 = Dropout(0.25)(tower_1)

    tower_2 = Conv2D(8, (vector_length, 4), padding='same', activation='relu')(input_shape)
    tower_2 = MaxPooling2D((1,vector_length-4+1), strides=(1, 1), padding='same')(tower_2)
    tower_2 = Dropout(0.25)(tower_2)

    tower_3 = Conv2D(8, (vector_length, 5), padding='same', activation='relu')(input_shape)
    tower_3 = MaxPooling2D((1, vector_length-5+1), strides=(1, 1), padding='same')(tower_3)
    tower_3 = Dropout(0.25)(tower_3)

    merged = concatenate([tower_1, tower_2, tower_3], axis=1)
    merged = Flatten()(merged)

    dropout1 = Dropout(0.5)(merged)
    out = Dense(1, activation='sigmoid')(dropout1)

    model = Model(input_shape, out)
    plot_model(model, to_file=img_path)
    return model

some_model = create_convnet()
some_model.compile(loss=keras.losses.binary_crossentropy,
                   optimizer='adam',
                   metrics=['accuracy'])
The model predicts all articles in the validation set as Uninteresting [0]. The accuracy is 97.44% which is same as the ratio of Uninteresting articles in the validation set. I have tried variations of this architecture but still, the issue exists.
For experimentation, I predicted on the training data itself, for that too, it predicts all as Uninteresting [0].
Here are the logs for 10 epochs:
some_model.fit_generator(train_gen, train_steps, epochs=num_epoch, verbose=1, callbacks=callbacks_list, validation_data=val_gen, validation_steps=val_steps)
Epoch 1/10
254/253 [==============================] - 447s 2s/step - loss: 0.7119 - acc: 0.9555 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00001: val_loss improved from inf to 0.41266, saving model to ../new_out/cnn_model_word2vec
Epoch 2/10
254/253 [==============================] - 440s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00002: val_loss did not improve
Epoch 3/10
254/253 [==============================] - 440s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00003: val_loss did not improve
Epoch 00003: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 4/10
254/253 [==============================] - 448s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00004: val_loss did not improve
Epoch 00004: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 5/10
254/253 [==============================] - 444s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00005: val_loss did not improve
Epoch 00005: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
Epoch 6/10
254/253 [==============================] - 443s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00006: val_loss did not improve
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
Epoch 7/10
254/253 [==============================] - 443s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00007: val_loss did not improve
Epoch 00007: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 8/10
254/253 [==============================] - 443s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00008: val_loss did not improve
Epoch 00008: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 9/10
254/253 [==============================] - 444s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00009: val_loss did not improve
Epoch 00009: ReduceLROnPlateau reducing learning rate to 1e-07.
Epoch 10/10
254/253 [==============================] - 440s 2s/step - loss: 0.7099 - acc: 0.9560 - val_loss: 0.4127 - val_acc: 0.9744
Epoch 00010: val_loss did not improve
Epoch 00010: ReduceLROnPlateau reducing learning rate to 1e-07.
Out[3]: <keras.callbacks.History at 0x7f19898b90f0>
AI: Your dataset is highly imbalanced. The optimizer is just minimizing the loss function, and with a training set this imbalanced, a model that predicts Uninteresting regardless of the input is already close to the minimum it can find. Moreover, you are not overfitting, since your training accuracy is lower than your validation accuracy.
In order to obtain a model that learns something less trivial (you may have to pay the price of a lower overall accuracy), I would do the following: when providing a mini-batch to your optimizer, generate a mini-batch that is more balanced, i.e. bias the elements you select towards the interesting articles. For instance, if your batch size is 64, ensure that it has 32 interesting elements and 32 uninteresting elements. With this, your network might start learning features about the words themselves, and in principle it should give you a less trivial predictor.
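As a concrete illustration, here is a minimal sketch of such a balanced batch generator (the arrays X_pos and X_neg, holding your already-encoded interesting and uninteresting articles, are assumptions, not names from your code):
import numpy as np

def balanced_batch_generator(X_pos, X_neg, batch_size=64):
    # yields batches with an equal number of interesting (1) and uninteresting (0) articles
    half = batch_size // 2
    while True:
        pos_idx = np.random.choice(len(X_pos), half)
        neg_idx = np.random.choice(len(X_neg), half)
        X_batch = np.concatenate([X_pos[pos_idx], X_neg[neg_idx]])
        y_batch = np.concatenate([np.ones(half), np.zeros(half)])
        order = np.random.permutation(batch_size)
        yield X_batch[order], y_batch[order]
You would then pass a generator like this to fit_generator in place of your current train_gen.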
H: WEKA Multilayer Perceptron with large dataset
I'm new to data mining using WEKA.
I was trying out a large dataset (2000+ attributes and 90 instances) and left the default parameters as they are.
Why is Multilayer Perceptron running long on a dataset with 2000+ attributes?
K-Nearest Neighbour does a better job in terms of speed given the same dataset.
How does the hiddenLayer in MLP affect the speed and accuracy of the training set?
What is the most recommended way of handling such a large dataset, or is there none?
AI: That is a lot of attributes. Moreover, WEKA's default MLP size of the hidden layer is "a", where the following preset sizes are given:
a = (features + classes) / 2
i = features
o = classes
t = features + classes.
It is expected that the MLP will take longer. Some things you can try:
use dimensionality reduction such as PCA to reduce the number of features (see the sketch at the end of this answer)
reduce the number of folds if you are using k-fold cross validation
reduce hidden layer size (which will probably negatively affect accuracy)
use another model such as a SVM, which would be a lot faster
I recommend going through this high level introduction to neural networks. It is important to gain some intuition about the models you are using and the options you have available before tackling your problem. |
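WEKA has its own PCA filter you can apply in the preprocessing panel, but if you want to prototype the dimensionality-reduction idea quickly in Python, a minimal scikit-learn sketch could look like this (the feature matrix X of shape (90, 2000+) is an assumption):
from sklearn.decomposition import PCA

# keep enough components to explain ~95% of the variance
pca = PCA(n_components=0.95, svd_solver='full')
X_reduced = pca.fit_transform(X)     # X: your (90, 2000+) feature matrix
print(X_reduced.shape)               # at most 90 components, far fewer columns for the MLP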
H: The goal of fine tuning
I would like to ask what is the goal of fine-tuning a VGGNet on my dataset.
What does fine-tuning mean? Does it mean to change the weights or keep the weight values?
AI: Fine-tuning means changing the weights such that the VGGNet can perform the task you want on your dataset. The reason it is called fine-tuning rather than training (which is essentially what you are doing) is that it implies you start from a network that has already been trained on another dataset. The concept is the same as training; you just happen to do it with a convenient set of initial weights.
H: Why does Ensemble Averaging actually improve results?
Why does ensemble averaging work for neural networks? This is the main idea behind things like dropout.
Consider an example of a hypersurface defined by the following image (white means lowest Cost). We have two networks: yellow and red, each network has 2 weights, and adjusts them to end up in the white portion.
Clearly, if we average them after they were trained we will end up in the middle of the space, where the error is very high.
AI: I think there is a misunderstanding in your question. You imply that you take the average of the weights of the networks, and you should not do this. Instead, what you average are the predictions of the different networks. For this reason, if you average two correct predictions the result will be a correct prediction, and the problem you were thinking about does not exist.
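A minimal sketch of what averaging the predictions (not the weights) looks like; model_a, model_b and the input batch x are assumptions:
import numpy as np

# class probabilities from two independently trained networks, shape (n_samples, n_classes)
preds_a = model_a.predict(x)
preds_b = model_b.predict(x)

ensemble = (preds_a + preds_b) / 2.0   # average the outputs, never the weights
labels = ensemble.argmax(axis=1)       # final ensemble decision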
H: Binarized neural network
I am currently looking at a paper by Hubara et al on binarized neural network.
I am stuck in understanding algorithm 2 of the paper. The algorithm uses shift-based (bit-shifting) AdaMax, where AdaMax is an extension of the Adam optimizer. In particular, they are using
$$m_{t} = \beta_{1}.m_{t-1} + (1 - \beta_{1}).g_{t} $$
$$v_{t} = \max(\beta_{2}.v_{t-1}, |g_{t}|) $$
$$\theta_{t} = \theta_{t-1} - (\alpha\oslash(1-\beta_{1}^t)).(m_{t} \oslash v_{t}^{-1} ) $$
where $g_{t}$ is the gradient, $\theta_{t-1}$ is the previous parameter, and $\alpha$, $\beta_{1}$, $\beta_{2}$ are the learning rate and the betas of the Adam optimizer. They stated that $\oslash$ stands for both left and right bit shift. I know the left shift and the right shift on their own, but I am not sure how we can have both at the same time. Help will be much appreciated. Thank you.
AI: I took a look at their GitHub.
Here are the relevant parts:
local stepSize = lr/biasCorrection1 --math.sqrt(biasCorrection2)/biasCorrection1
stepSize=math.pow(2,torch.round(math.log(stepSize)/(math.log(2))))
and also
state.v:copy(torch.pow(2,torch.round(torch.log(state.v):div(math.log(2)))))
state.v:add(epsilon)
tmp:addcdiv(1, state.m, state.v)
It seems that for $\alpha \oslash (1-\beta_1^t)$, what is being done is $$2^{\lfloor \log_2 \frac{\alpha}{1-\beta_1^t} \rfloor}$$
which is a left shift if $\lfloor \log_2 \frac{\alpha}{1-\beta_1^t} \rfloor > 0$ and a right shift if $\lfloor \log_2 \frac{\alpha}{1-\beta_1^t} \rfloor < 0$.
On the other hand, for $m_t \oslash v_t^{-1}$, what is being done is
$$\frac{m_t}{2^{\lfloor \log_2 v_t\rfloor}}$$
which is a right shift if $\lfloor \log_2 v_t\rfloor> 0$ and a left shift if $\lfloor \log_2 v_t\rfloor < 0$. |
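A small numeric sketch of the idea, following the round-to-nearest-power-of-two used in the quoted Torch code (the example values are made up):
import numpy as np

def ap2(x):
    # approximate x by the nearest power of two: 2 ** round(log2(|x|))
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x)))

print(ap2(0.3))   # 0.25 -> multiplying by it is a right shift by 2
print(ap2(5.7))   # 8.0  -> multiplying by it is a left shift by 3
So each factor is always a pure shift; whether it is a left or a right shift just depends on the sign of the rounded exponent, which is why the paper can use one symbol for both.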
H: Trade off between Bias and Variance
What are the best ideas or approaches to trade off between bias and variance in Machine Learning models.
AI: You want to decide this based on how well your model performs and generalizes. If your model is underfitting, you want to increase your model's complexity, increasing variance and decreasing bias. If your model is overfitting, you want to regularize the model and/or feed it more training data, decreasing variance and increasing bias. |
H: which NN should I use for Time-series dataset, whose pattern change as time goes
I am analyzing a time-series dataset using (supervised) TensorFlow deep learning. The TensorFlow code is given a series of inputs, and based on each input, the NN has to predict an output value in the near future.
For training there are lots of pre-labeled input/output pairs from the past. Each input is an array of state values per time step, and each input is labeled with a result value. This kind of task is common in deep learning, and I used multiple GRU layers to solve it.
The problem is that the pattern mapping inputs to the appropriate output actually changes over time. Approximately every 10 hours the pattern changes, so at the start of each pattern period (roughly the first 1-2 hours of the 10) the NN is bad at predicting, but after that, prediction improves. I also assume the data has some noise.
So far my implementation with GRUs does its job, but I want to find a better way to build the NN if possible. I currently know some basic supervised learning techniques and some advanced ones like LSTM, and for reinforcement learning I know DQN and policy-gradient.
AI: Do you know when the new pattern starts? You could reset the hidden state of the RNN each time this happens during training. There is a more detailed explanation in this paper: https://arxiv.org/pdf/1706.04148.pdf
In that paper, they try to predict the next item in a user's session. They use a GRU to create a user-level representation (which in your case could be the pattern representation) and another GRU to predict the next item based on the current session information.
To do that, they reset the hidden state of the second GRU every time the session changes and reset the hidden state of the first GRU every time the user changes; analogously, you could reset the hidden state every time the pattern changes. It is difficult to explain it all right here, so I recommend reading the paper mentioned above.
EDIT:
To find where the pattern change points are, you could train another network, maybe using as labels the points where your GRU's accuracy decreases substantially, and then use this network's predictions to decide when to reset the hidden state at training time, as mentioned before.
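A minimal Keras sketch of the reset-on-boundary idea; the feature count and the stream of (x_t, y_t, pattern_changed) triples are assumptions:
from keras.models import Sequential
from keras.layers import GRU, Dense

n_features = 10                                      # hypothetical number of state variables per step
model = Sequential()
model.add(GRU(32, stateful=True, batch_input_shape=(1, 1, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

for x_t, y_t, pattern_changed in training_stream:    # training_stream is assumed to yield one step at a time
    if pattern_changed:
        model.reset_states()                          # drop the previous pattern's memory
    model.train_on_batch(x_t.reshape(1, 1, n_features), y_t.reshape(1, 1))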
H: Removing irrelevant content in columns
I have a dataset with a column named Package Size, whose values are written in two units, inches and meters, spelled out as words. How do I preprocess the data so that I remove the unnecessary word and also convert everything to a single unit?
Example:
Package Size:              13 inches    4 meters
Package Size (in meters):  0.3302       4
AI: Since you have not tagged which language you are using, I am providing a sample solution in Python, using pandas to load and transform the data.
>> import pandas as pd
>> sample = pd.DataFrame({'a': ['1 inch', '2 meters', '1 metre', '2.5 inches', '2.8 meters']})
a
0 1 inch
1 2 meters
2 1 metre
3 2.5 inches
4 2.8 meters
>> def get_number(a):
for _ in a.split(' '):
try:
return float(_.strip())
except ValueError:
pass
>> sample['b'] = sample['a'].apply(lambda a: get_number(a) * 0.0254 if 'inch' in a else get_number(a))
>> sample
a b
0 1 inch 0.0254
1 2 meters 2.0000
2 1 metre 1.0000
3 2.5 inches 0.0635
4 2.8 meters 2.8000
Hope this gives you a general idea of how to do it. |
H: How do I get a count of values based on custom bucket-ranges I create for a select column in dataframe?
I have a column in my dataset named 'production_output' whose values are in the range of 2.300000e+01 to 1.110055e+08.
I want to arbitrarily split the values in this column into different buckets based on, say, percentile ranges like [0, 25, 50, 75, 100] and get a count of the number of values in each of these buckets.
How do I do this using python data-science packages?
AI: numpy.histogram
https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.histogram.html
Use numpy.percentile to get the bin edges you desire.
https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.percentile.html |
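Putting the two together, a minimal sketch (assuming the column lives in a pandas DataFrame called df):
import numpy as np

values = df['production_output'].values
edges = np.percentile(values, [0, 25, 50, 75, 100])   # quartile bin edges
counts, _ = np.histogram(values, bins=edges)          # number of values in each bucket
print(dict(zip(['0-25%', '25-50%', '50-75%', '75-100%'], counts)))
pandas.qcut offers an alternative if you prefer to stay in pandas.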
H: Feature Selection in Linear Regression
I have an insurance dataset, as given below, for which I need to build a model to calculate the charges.
age sex bmi children smoker region charges
0 19 female 27.900 0 yes southwest 16884.92400
1 18 male 33.770 1 no southeast 1725.55230
2 28 male 33.000 3 no southeast 4449.46200
3 33 male 22.705 0 no northwest 21984.47061
4 32 male 28.880 0 no northwest 3866.85520
But I am not sure whether the column "region" can be dropped. Is there any test that can be performed to keep only significant variables?
AI: First, transform your categorical columns (for example with one-hot encoding), then fit the linear regression. To decide whether a feature such as region matters, look at the influence of each feature on your selected dependent variable.
Read this article, as it provides great insight into how the coefficients produced by the algorithm can be interpreted and further discussed or tweaked.
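One common significance check is to fit an ordinary least squares model and inspect the p-values of the one-hot-encoded region dummies. A minimal sketch, assuming the data is in a pandas DataFrame called df:
import pandas as pd
import statsmodels.api as sm

features = ['age', 'sex', 'bmi', 'children', 'smoker', 'region']
X = pd.get_dummies(df[features], drop_first=True)   # one-hot encode the categorical columns
X = sm.add_constant(X).astype(float)
ols = sm.OLS(df['charges'], X).fit()
print(ols.summary())   # large p-values on the region dummies suggest region adds little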
H: What does Logits in machine learning mean?
"One common mistake that I would make is adding a non-linearity to my logits output."
What does the term "logit" means here or what does it represent ?
AI: Logits are interpreted as the unnormalised (or not-yet-normalised) predictions (or outputs) of a model. They can give results, but we don't normally stop at logits, because interpreting their raw values is not easy.
Have a look at their definition to help understand how logits are produced.
Let me explain with an example:
We want to train a model that learns how to classify cats and dogs, using photos that each contain either one cat or one dog. You build a model and give it some of the data you have, so it can approximate a mapping between images and predictions. You then give the model some unseen photos in order to test its predictive accuracy on new data. As we have a classification problem (we are trying to put each photo into one of two classes), the model will give us two scores for each input image: a score for how likely it believes the image contains a cat, and a score for its belief that the image contains a dog.
Perhaps for the first new image, you get logit values out of 16.917 for a cat and then 0.772 for a dog. Higher means better, or ('more likely'), so you'd say that a cat is the answer. The correct answer is a cat, so the model worked!
For the second image, the model may say the logit values are 1.004 for a cat and 0.709 for a dog. So once again, our model says the image contains a cat. The correct answer is once again a cat, so the model worked again!
Now we want to compare the two results. One way to do this is to normalise the scores. That is, we normalise the logits! Doing this, we gain some insight into the confidence of our model.
Let's use the softmax, which makes all results sum to 1 and so allows us to think of them as probabilities:
$$\sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}} \hspace{20mm} for \hspace{5mm} j = 1, …, K.$$
For the first test image, we get
$$prob(cat) = \frac{exp(16.917)}{exp(16.917) + exp(0.772)} \approx 0.9999999$$
$$prob(dog) = \frac{exp(0.772)}{exp(16.917) + exp(0.772)} \approx 0.0000001$$
If we do the same for the second image, we get the results:
$$prob(cat) = \frac{exp(1.004)}{exp(1.004) + exp(0.709)} = 0.5732$$
$$prob(dog) = \frac{exp(0.709)}{exp(1.004) + exp(0.709)} = 0.4268$$
The model was not really sure about the second image, as it was very close to 50-50 - a guess!
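As a quick sanity check, here is a tiny numpy sketch that reproduces these numbers:
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))   # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([16.917, 0.772])))   # ~[1.0, 0.0]  -> very confident it's a cat
print(softmax(np.array([1.004, 0.709])))    # ~[0.57, 0.43] -> almost a coin flip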
The last part of the quote from your question likely refers to a neural network as the model. The layers of a neural network commonly take input data, multiply that by some parameters (weights) that we want to learn, then apply a non-linearity function, which provides the model with the power to learn non-linear relationships. Without this non-linearity, a neural network would simply be a list of linear operations, performed on some input data, which means it would only be able to learn linear relationships. This would be a massive constraint, meaning the model could always be reduced to a basic linear model.
That being said, it is not considered helpful to apply a non-linearity to the logit outputs of a model, as you are generally going to be cutting out some information, right before a final prediction is made. Have a look for related comments in this thread. |
H: Pros/Cons of stop word removal?
What are the pros/cons of removing stop words from text in the context of a text classification problem? I'm wondering what the best approach is (i.e. to remove or not to remove).
I've read somewhere (but can't locate the reference) that it may be detrimental to the performance of a model in the case of sentiment analysis to remove stop words.
AI: In the context of sentiment analysis, removing stop words can be problematic if context is affected. For example suppose your stop word corpus includes ‘not’, which is a negation that can alter the valence of the passage. So you have to be cautious of exactly what is being dropped and what consequences it can have. |
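A small illustration of the risk (NLTK's English stop-word list happens to contain 'not'):
from nltk.corpus import stopwords   # may require nltk.download('stopwords') first

stop = set(stopwords.words('english'))
review = "this movie was not good"
filtered = [w for w in review.split() if w not in stop]
print(filtered)   # ['movie', 'good'] -- the negation is gone and the valence flips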
H: Advantages of one shot learning over image classification
This is a rather conceptual question. From what I've read I gather that one shot learning is useful for use cases in which you don't have datasets of millions of images of employees etc. By a one shot operation, the network or networks will do a similarity measure to determine whether the given image belongs to a bunch of images or not.
However, some of these siamese networks are trained on millions or hundreds of millions of images. So doesn't that defeat the purpose of one shot learning ? Why can't plain image classification + transfer learning be used instead ? For instance, in face recognition systems, for every image - classify, doesn't fall into any of the classes - reject. If this is a legitimate new employee image, add the image to the database and retrain the network using transfer learning.
Does one shot learning remove the need for this retraining ? Can someone help me with these basic intuitions please
AI: In the case of face detection in particular it can get very use case specific:
1a) Let's say I am Facebook, and I have a billion possible faces of users to tag. With the one-shot approach, I am storing a representation of each new face as you suggest (I assume something like One-shot Learning with Memory-Augmented Neural Networks). Now I have a problem: with a 'memory' of a billion faces, lookup requires a lot of computation. The representations learned must also grow in size with the dataset in order to distinguish between instances, so I get hit again.
1b) With Deepface, which uses the siamese architecture you mention I have the advantage that to classify a new user, I just have to run it on maybe 1k images of friends around the network where that image comes from. I do not have to retrain the network if a new human is found, since after the initial training, there is not such a problem with doing a comparison between images, as the network is doing a one-vs-one, not a one-vs-all query. The network is not learning classes, but representations and the differences between representations that are most indicative of two examples being different classes.
2a) Now let's say I am a modeling agency with 800 employees, and I have 1000 examples per person. In that case, the one-shot learning approach is feasible, especially if you have downloaded a pretrained network and do the fine-tuning as you suggested. There is close to no cost for any projected growth (the one-shot learning paper trained on a 4-million-example dataset).
2b) In this case it may be feasible to fine-tune a network with just a final layer of 1000 logits (800 for employees and 200 for growth). This would be totally infeasible for a billion or even a million classes (individual humans) - but even here it is probably not advisable.
In short, it depends on how many examples per class you have and how many classes there are. Many architectures out there only work when you have millions of images for relatively few classes (this applies to all the ImageNet entries). Others are designed to learn from 10k examples or fewer, with 50 examples per class, and yet others avoid the explicit notion of classes altogether, for when even that becomes too big.
For a comprehensive review of face recognition check out Deep Face Recognition: A Survey. New on arXiv 4/18/2018
H: ggplot aes() choice
What is the difference between
ggplot(mtcars, aes(mpg)) +
geom_histogram(aes(y = ..density..))
and
ggplot(mtcars, aes(mpg), aes(y = ..density..)) +
geom_histogram()
I know that aes() i the geom layer overrides the aes() in the data layer. But are one of the code snippets above preferable?
AI: The difference is that when the aesthetics are set in the ggplot function, they are inherited by any other geoms that build on top of it. If you specify the aesthetics only in a geom, they will only be used in that geom. And, as you mentioned, any aesthetics set in the geom override the settings in the ggplot function.
As for which is preferable, I think it depends on your goal with the ggplot objects. For example, if you are only creating a single plot then it doesn't really matter which method you use. However, if you plan to present multiple different visualizations of the same data, you could reuse the ggplot object and simply add different geom layers to it:
Create a reusable ggplot object:
p <- ggplot(mtcars)
Add histogram geom:
p + geom_histogram( aes(mpg, ..density..))
Reuse ggplot object with different geom:
p + geom_point(aes(cyl, mpg))
This is a simple example but you can understand that when creating more complicated visualizations, the ability to reuse plot objects comes in handy. |
H: Predict method of the perceptron algorithm
Can someone explain to me how the predict method of the perceptron algorithm works?
def predict(self, pattern):
activation = np.dot(self.w, pattern) + self.b
if activation > 0:
return 1
else:
return -1
In this example b stands for bias.
def train(self, pattern, label, step = 0.01):
out_label = self.predict(pattern)
if out_label == label:
return True
self.w += step * (label - out_label) * pattern
self.b += step * (label - out_label)
return False
This is the train method. But as far as I understand, out_label will only ever be 1 or -1. How could this possibly be equal to label? If we returned activation instead, I think I would understand it.
AI: Here is how the code appears to break down:
method parameters:
The train(...) method takes an input vector called pattern along with the target output called label and a weight update coefficient (learning rate) called step.
method execution:
train(...) first passes the input vector through our neuron using the predict(...) method.
The result of predict(...) is stored in a variable named out_label.
The out_label variable is either 1 or -1.
Depending on the pattern that we provide we want the target output (label) to be either 1 or -1.
If out_label is equal to label (1 == 1 or -1 == -1), then we return True
Otherwise, the neuron outputted a value we do not want (-1 != 1 or 1 != -1), so we update the weights accordingly, then return False
example where the neuron is correct:
Let's say we have the following parameters:
input vector (pattern) = [-5,2,8]
target output (label) = 1
The train(...) method executes as follows:
we call the predict(...) which does the following:
passes our vector [-5,2,8] through the neuron using the function np.dot(self.w, pattern) + self.b
stores the result of the above function in a variable called activation (let's say activation = 0.23)
returns 1 because activation > 0
we store the result of predict(...) in the variable out_label, which is 1 in this example
because our target output (label = 1), is equal to the actual output (out_label = 1) of the neuron, we return True and consider the neuron output correct.
example where the neuron is incorrect:
Let's say we have the following parameters:
input vector (pattern) = [-1,-5,3]
target output (label) = 1
The train(...) method executes as follows:
we call the predict(...) which does the following:
passes our vector [-1,-5,3] through the neuron using the function np.dot(self.w, pattern) + self.b
stores the result of the above function in a variable called activation (let's say activation = -0.65)
returns -1 because activation <= 0
we store the result of predict(...) in the variable out_label, which is -1 in this example
because our target output (label = 1), is not equal to the actual output (out_label = -1) of the neuron, we update the weights using:
self.w += step * (label - out_label) * pattern
self.b += step * (label - out_label)
we then return False because the neuron was incorrect |
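To make the flow above concrete, here is a minimal self-contained version of the class with a toy usage example (the class name, the zero initialisation and the toy data are my own additions, not from your code):
import numpy as np

class Perceptron:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)   # weight vector
        self.b = 0.0                    # bias

    def predict(self, pattern):
        activation = np.dot(self.w, pattern) + self.b
        return 1 if activation > 0 else -1

    def train(self, pattern, label, step=0.01):
        out_label = self.predict(pattern)
        if out_label == label:
            return True
        self.w += step * (label - out_label) * pattern
        self.b += step * (label - out_label)
        return False

# toy data: the label is the sign of the sum of the two features
p = Perceptron(n_features=2)
X = np.array([[2.0, 1.0], [-1.0, -2.0], [3.0, 0.5], [-0.5, -1.5]])
y = np.array([1, -1, 1, -1])
for _ in range(10):                       # a few passes over the data
    for xi, yi in zip(X, y):
        p.train(xi, yi)
print([p.predict(xi) for xi in X])        # [1, -1, 1, -1]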
H: BOVW - Combine vocabularies
What I have so far
I have a set of images that I am trying to classify. I can also extract different feature descriptors from the images using algorithms such as Hu moments, color histograms, and SIFT. I can then build a vocabulary from each of these algorithms.
What I do not understand is how I can combine these vocabularies together. From my understanding these feature descriptors can be of different sizes depending on what the features are. What would be the proper way to build a vocabulary from these different algorithms?
What is confusing me is how we would combine feature descriptors from different algorithms that have different sizes. SIFT will give an m x 128 array, where m is the number of feature descriptors of fixed size 128. A color histogram may give a count of each color in the range 0-255. How would I combine these two in a meaningful way?
Here is the paper I was reading which gave me the idea of combining the vocabularies.
A Visual Vocabulary for Flower Classification
Here is what I was reading before:
Implementing Bag of Visual words for Object Recognition
If anyone can point me in the right direction I would greatly appreciate it.
AI: I also found an interesting paper (with code) where they merged local and global descriptors.
Scene Recognition by Combining Local and Global Image Descriptors
According to the paper: they take the DAISY features (local features), cluster them into k clusters, and build a histogram with k bins. They normalize the histogram and then concatenate this vector with the normalized HOG vector (global features). So in the end we cluster the local features, create a histogram, and concatenate it with the global vector, and that gives us one training sample.
Also there may be an alternate method where you just use local features to cluster: https://www.quora.com/How-do-I-merge-features-from-different-feature-extractors-i-e-color-histogram-and-SIFT-for-bag-of-visual-words# |
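A minimal sketch of that recipe (the variables all_descriptors, local_descriptors and global_descriptor, as well as k, are assumptions; I use KMeans here, but any clustering would do):
import numpy as np
from sklearn.cluster import KMeans

k = 100
codebook = KMeans(n_clusters=k, random_state=0).fit(all_descriptors)   # pooled training descriptors

def image_vector(local_descriptors, global_descriptor):
    words = codebook.predict(local_descriptors)            # assign each local descriptor to a visual word
    hist, _ = np.histogram(words, bins=np.arange(k + 1))   # k-bin bag-of-visual-words histogram
    hist = hist / (hist.sum() + 1e-8)                      # normalize
    return np.concatenate([hist, global_descriptor])       # fixed-length combined feature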
H: Pretraining neural net example in Aurelien Geron's book
I am testing the pretraining example in Chapter 15 of Aurélien Géron's book "Hands-On Machine Learning with Scikit-Learn and TensorFlow". The code is on his github page: here - see the example in section "Unsupervised pretraining".
Pretraining the network with the weights from the previously trained encoder should assist in training the network. To check this I slightly modified Aurelien's code so it outputs the error after every batch, and also reduced the batch size. I did this so I could see the error at the start of the training, where the effect of the pretrained weights should have been most obvious. I expected the pretrained network would start with a lower error (compared to the network not using pretraining) because it was starting with pretrained weights. However the pretraining seems to make training slower.
Has anyone any idea why this could be?
The first few lines of output (when using pretraining) is:
0 Train accuracy after each mini-batch: 0.08
0 Train accuracy after each mini-batch: 0.24
0 Train accuracy after each mini-batch: 0.32
0 Train accuracy after each mini-batch: 0.2
0 Train accuracy after each mini-batch: 0.32
0 Train accuracy after each mini-batch: 0.26
0 Train accuracy after each mini-batch: 0.32
0 Train accuracy after each mini-batch: 0.5
0 Train accuracy after each mini-batch: 0.58
0 Train accuracy after each mini-batch: 0.48
0 Train accuracy after each mini-batch: 0.54
0 Train accuracy after each mini-batch: 0.48
0 Train accuracy after each mini-batch: 0.5
0 Train accuracy after each mini-batch: 0.56
0 Train accuracy after each mini-batch: 0.64
0 Train accuracy after each mini-batch: 0.56
0 Train accuracy after each mini-batch: 0.68
0 Train accuracy after each mini-batch: 0.62
0 Train accuracy after each mini-batch: 0.74
0 Train accuracy after each mini-batch: 0.78
As you can see the accuracy is initially low. In contrast, when using He-initialized weights (i.e. not using pretraining), the initial accuracy is actually higher:
0 Train accuracy after each mini-batch: 0.62
0 Train accuracy after each mini-batch: 0.5
0 Train accuracy after each mini-batch: 0.52
0 Train accuracy after each mini-batch: 0.38
0 Train accuracy after each mini-batch: 0.56
0 Train accuracy after each mini-batch: 0.56
0 Train accuracy after each mini-batch: 0.6
0 Train accuracy after each mini-batch: 0.7
0 Train accuracy after each mini-batch: 0.72
0 Train accuracy after each mini-batch: 0.86
0 Train accuracy after each mini-batch: 0.86
0 Train accuracy after each mini-batch: 0.8
0 Train accuracy after each mini-batch: 0.82
0 Train accuracy after each mini-batch: 0.84
0 Train accuracy after each mini-batch: 0.88
0 Train accuracy after each mini-batch: 0.9
0 Train accuracy after each mini-batch: 0.82
0 Train accuracy after each mini-batch: 0.9
0 Train accuracy after each mini-batch: 0.84
0 Train accuracy after each mini-batch: 0.98
0 Train accuracy after each mini-batch: 0.96
In other words, the pretraining seems to slow down training, the opposite of what it should be doing!
My modified code is:
import numpy as np
import sys
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
def train_stacked_autoencoder():
reset_graph()
# Load the dataset to use
mnist = input_data.read_data_sets("/tmp/data/")
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150 # codings
n_hidden3 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.01
l2_reg = 0.0001
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights3_init = initializer([n_hidden2, n_hidden3])
weights4_init = initializer([n_hidden3, n_outputs])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.Variable(weights3_init, dtype=tf.float32, name="weights3")
weights4 = tf.Variable(weights4_init, dtype=tf.float32, name="weights4")
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_hidden3), name="biases3")
biases4 = tf.Variable(tf.zeros(n_outputs), name="biases4")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
hidden3 = activation(tf.matmul(hidden2, weights3) + biases3)
outputs = tf.matmul(hidden3, weights4) + biases4
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
optimizer = tf.train.AdamOptimizer(learning_rate)
with tf.name_scope("phase1"):
phase1_outputs = tf.matmul(hidden1, weights4) + biases4 # bypass hidden2 and hidden3
phase1_reconstruction_loss = tf.reduce_mean(tf.square(phase1_outputs - X))
phase1_reg_loss = regularizer(weights1) + regularizer(weights4)
phase1_loss = phase1_reconstruction_loss + phase1_reg_loss
phase1_training_op = optimizer.minimize(phase1_loss)
with tf.name_scope("phase2"):
phase2_reconstruction_loss = tf.reduce_mean(tf.square(hidden3 - hidden1))
phase2_reg_loss = regularizer(weights2) + regularizer(weights3)
phase2_loss = phase2_reconstruction_loss + phase2_reg_loss
train_vars = [weights2, biases2, weights3, biases3]
phase2_training_op = optimizer.minimize(phase2_loss, var_list=train_vars) # freeze hidden1
init = tf.global_variables_initializer()
saver = tf.train.Saver()
training_ops = [phase1_training_op, phase2_training_op]
reconstruction_losses = [phase1_reconstruction_loss, phase2_reconstruction_loss]
n_epochs = [4, 4]
batch_sizes = [150, 150]
use_cached_results = True
# Train both phases
if not use_cached_results:
with tf.Session() as sess:
init.run()
for phase in range(2):
print("Training phase #{}".format(phase + 1))
for epoch in range(n_epochs[phase]):
n_batches = mnist.train.num_examples // batch_sizes[phase]
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
X_batch, y_batch = mnist.train.next_batch(batch_sizes[phase])
sess.run(training_ops[phase], feed_dict={X: X_batch})
loss_train = reconstruction_losses[phase].eval(feed_dict={X: X_batch})
print("\r{}".format(epoch), "Train MSE:", loss_train)
saver.save(sess, "./my_model_one_at_a_time.ckpt")
loss_test = reconstruction_loss.eval(feed_dict={X: mnist.test.images})
print("Test MSE (uncached method):", loss_test)
# Train both phases, but in this case we cache the frozen layer outputs
if use_cached_results:
with tf.Session() as sess:
init.run()
for phase in range(2):
print("Training phase #{}".format(phase + 1))
if phase == 1:
hidden1_cache = hidden1.eval(feed_dict={X: mnist.train.images})
for epoch in range(n_epochs[phase]):
n_batches = mnist.train.num_examples // batch_sizes[phase]
for iteration in range(n_batches):
print("\r{}%".format(100 * iteration // n_batches), end="")
sys.stdout.flush()
if phase == 1:
# Phase 2 - use the cached output from hidden layer 1
indices = np.random.permutation(mnist.train.num_examples)
hidden1_batch = hidden1_cache[indices[:batch_sizes[phase]]]
feed_dict = {hidden1: hidden1_batch}
sess.run(training_ops[phase], feed_dict=feed_dict)
else:
# Phase 1
X_batch, y_batch = mnist.train.next_batch(batch_sizes[phase])
feed_dict = {X: X_batch}
sess.run(training_ops[phase], feed_dict=feed_dict)
loss_train = reconstruction_losses[phase].eval(feed_dict=feed_dict)
print("\r{}".format(epoch), "Train MSE:", loss_train)
saver.save(sess, "./my_model_cache_frozen.ckpt")
loss_test = reconstruction_loss.eval(feed_dict={X: mnist.test.images})
print("Test MSE (cached method):", loss_test)
def unsupervised_pretraining():
reset_graph()
# Load the dataset to use
mnist = input_data.read_data_sets("/tmp/data/")
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 150
n_outputs = 10
learning_rate = 0.01
l2_reg = 0.0005
activation = tf.nn.elu
regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
initializer = tf.contrib.layers.variance_scaling_initializer()
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
y = tf.placeholder(tf.int32, shape=[None])
weights1_init = initializer([n_inputs, n_hidden1])
weights2_init = initializer([n_hidden1, n_hidden2])
weights3_init = initializer([n_hidden2, n_outputs])
weights1 = tf.Variable(weights1_init, dtype=tf.float32, name="weights1")
weights2 = tf.Variable(weights2_init, dtype=tf.float32, name="weights2")
weights3 = tf.Variable(weights3_init, dtype=tf.float32, name="weights3")
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_hidden2), name="biases2")
biases3 = tf.Variable(tf.zeros(n_outputs), name="biases3")
hidden1 = activation(tf.matmul(X, weights1) + biases1)
hidden2 = activation(tf.matmul(hidden1, weights2) + biases2)
logits = tf.matmul(hidden2, weights3) + biases3
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
reg_loss = regularizer(weights1) + regularizer(weights2) + regularizer(weights3)
loss = cross_entropy + reg_loss
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
pretrain_saver = tf.train.Saver([weights1, weights2, biases1, biases2])
saver = tf.train.Saver()
n_epochs = 4
batch_size = 50
n_labeled_instances = 2000
pretraining = True
# Regular training (without pretraining):
if not pretraining:
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
n_batches = n_labeled_instances // batch_size
for iteration in range(n_batches):
#print("\r{}%".format(100 * iteration // n_batches), end="")
#sys.stdout.flush()
indices = np.random.permutation(n_labeled_instances)[:batch_size]
X_batch, y_batch = mnist.train.images[indices], mnist.train.labels[indices]
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy after each mini-batch:", accuracy_val)
sys.stdout.flush()
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy after all batched:", accuracy_val, end=" ")
saver.save(sess, "./my_model_supervised.ckpt")
accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print("Test accuracy (without pretraining):", accuracy_val)
# Now reuse the first two layers of the autoencoder we pretrained:
if pretraining:
training_op = optimizer.minimize(loss, var_list=[weights3, biases3]) # Freeze layers 1 and 2 (optional)
with tf.Session() as sess:
init.run()
pretrain_saver.restore(sess, "./my_model_cache_frozen.ckpt")
for epoch in range(n_epochs):
n_batches = n_labeled_instances // batch_size
for iteration in range(n_batches):
#print("\r{}%".format(100 * iteration // n_batches), end="")
#sys.stdout.flush()
indices = np.random.permutation(n_labeled_instances)[:batch_size]
X_batch, y_batch = mnist.train.images[indices], mnist.train.labels[indices]
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy after each mini-batch:", accuracy_val)
sys.stdout.flush()
accuracy_val = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
print("\r{}".format(epoch), "Train accuracy after all batched:", accuracy_val, end=" ")
saver.save(sess, "./my_model_supervised_pretrained.ckpt")
accuracy_val = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print("Test accuracy (with pretraining):", accuracy_val)
if __name__ == "__main__":
# Seed the random number generator
np.random.seed(42)
tf.set_random_seed(42)
# Fit a multi-layer autoencoder and save the weights
# - this part is from Aurelien Geron's Ch 15, "Training one Autoencoder at a time in a single graph" example
train_stacked_autoencoder()
# Fit a network, using the weights previously saved for pretraining
# - this part is from Aurelien Geron's Ch 15, "Unsupervised pretraining" example
unsupervised_pretraining()
AI: [NOTE: I have not myself worked through Aurélien Geron's tutorials, but I have read the book]
On an intuitive level, I can persuade myself that the training would actually be slower for a pretrained model. In other words, it could make sense that the rate at which the error decreases (or accuracy increases) might be lower. The fact that training accuracy is lower is (for me at least) a little more complex and, perhaps, case specific.
Rate of learning
However the pretraining seems to make training slower.
Using a pretrained model, we have essentially taken a set of weights, which are already (at least partially) optimised for one problem. They are geared towards solving that problem based on the dataset they received, which means they expect the input to correspond to a certain distribution. You have frozen the first two layers with this line:
if pretraining:
training_op = optimizer.minimize(loss, var_list=[weights3, biases3])
Freezing two layers (in your case, out of three), intuitively kind of restricts the model.
Here is a somewhat contrived analogy that I might use to explain such cases to myself. Imagine we had a clown who could juggle with three balls, but now we want them to learn to use a fourth ball. At the same time, we ask an amateur to learn how to juggle, also with four balls. Before measuring their rate of learning, we decide to tie one of the clown's hands behind their back. So the clown already knows some tricks, but is also constrained in some way during the learning process. In my mind, the amateur would most likely learn a lot faster (relatively), as there is more to learn - but also because they have more freedom to explore the parameter space i.e. they can move more freely using both arms.
In the setting of optimisation, one might imagine that the position of the pretrained model on the loss surface is already in a place where gradients are very small along certain dimensions (don't forget, we have a high-dimensional search space). This ends up meaning that it cannot make changes as quickly whilst backpropagating errors, as the weight updates are multiples of these potentially very small gradients.
...Ok - that might sound plausible, but this only addresses the problem of slow learning - what about the fact that the actual training accuracy is lower than that of the model with random initialisation??
Intial training accuracy
I expected the pretrained network would start with a lower error (compared to the network not using pretraining)...
Here I tend to agree with you. In the optimal case, we could take a pretrained model, use the initial layers as they are and just fine-tune the final layers. There are, however, some cases in which this might not work.
Looking into related literature, there is a possible explanation from the abstract of the paper: How transferable are features in deep neural networks? (Yosinski et al.):
Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected.
I find the second reason to be particularly interesting and relevant to your setup. This is because you actually only have three layers. You are therefore not allowing much freedom to fine-tune, and the final layer was likely very dependent on its relationship to the preceding layer.
What you might expect to see as a result of using a pretrained model, is rather that the final model exhibits better generalisation. This may indeed come at the cost of a lower test accuracy on the hold-out set of the specific dataset you train on.
Here is another thought, summarised well by the amazing (and free) Stanford CS231n course:
Learning rates. It’s common to use a smaller learning rate for ConvNet weights that are being fine-tuned, in comparison to the (randomly-initialized) weights for the new linear classifier that computes the class scores of your new dataset.
In your code, the learning rate seems to be fixed for all learning phases at 0.01. This is something you could experiment with; making it smaller for the pretrained layers, or just starting with a lower learning rate globally.
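One way to implement layer-specific learning rates in your existing graph is to use two optimizers over different variable lists (this replaces the single training_op; the split into 'slow' pretrained variables and 'fast' new variables is my own sketch, not from the book):
# lower learning rate for the pretrained layers, higher one for the fresh output layer
slow_opt = tf.train.AdamOptimizer(learning_rate * 0.1)
fast_opt = tf.train.AdamOptimizer(learning_rate)
train_slow = slow_opt.minimize(loss, var_list=[weights1, biases1, weights2, biases2])
train_fast = fast_opt.minimize(loss, var_list=[weights3, biases3])
training_op = tf.group(train_slow, train_fast)   # run both updates in one step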
Here is a comprehensive introduction to transfer learning that might give you some more ideas about why/where you might make some different modelling decisions.
H: Why Root Finding is important in Logistic Regression? (i.e. Newton Raphson)
I'd like to ask what is the main reason why we find the roots in logistic regression (i.e. why we use Newton Raphson method on logistic regression ). I understand the basics of Newton Raphson method, but I just can't understand what is the importance of finding the roots or using second derivatives.
P.S. I know the idea of Newton Raphson, I am wondering why do we need to use this method and find the zero's or roots of function? What does it want to tell us for logistic regression for instance?
AI: This blogpost gives a broad answer to your question. In short, Newton's method is not used to find a root of the loss, but a root of the gradient. If you find a root of the gradient, then you are either in a maximum, a minimum, or a saddle point (the three of them are critical points). When using the cross-entropy loss function in logistic regression, it can be proved that this loss function has only one critical point, and this critical point corresponds to the global minimum. For this reason, finding a root of the gradient is equivalent to finding the only critical point, therefore the global minimum of the cross-entropy loss function. |
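To make this concrete, here is a minimal numpy sketch of the Newton-Raphson (IRLS) update for logistic regression; note that it iterates towards a root of the gradient, not of the loss (the design matrix X with an intercept column and the 0/1 labels y are assumptions):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logistic(X, y, n_iter=10):
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ theta)
        grad = X.T @ (p - y)                    # gradient of the cross-entropy loss
        H = X.T @ (X * (p * (1 - p))[:, None])  # Hessian
        theta -= np.linalg.solve(H, grad)       # Newton step towards a root of the gradient
    return theta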
H: Validation showing huge fluctuations. What could be the cause?
I'm training a CNN for a 3-class image classification problem. My training loss decreased smoothly, which is the expected behaviour. However, my validation loss shows a lot of fluctuation.
Is this something that I should be worried about, or should I just pick the model which scores the best on my performance measure (accuracy)?
Additional info: I'm fine-tuning the last layer of a Resnet-18 that was pre-trained on ImageNet data in PyTorch. I must note that I'm using a weighted loss function for the training phase since my data is highly unbalanced. To plot the losses, however, I use the unweighted loss as to be able to compare the validation and training loss. I would use the unweighted loss, were it not that the distributions of the training dataset and validation dataset are somewhat different (they are both highly unbalanced however).
AI: From simple inspection of your plot, I could make a few conclusions and list things to try. (This is without knowing any more about your setup: training parameters and model hyperparameters).
It looks like the loss is decreasing (put a line of best fit through the validation loss). It also looks like you might be able to train for longer to improve results, as the curve is still headed downwards.
First I will try answer your title question:
what is the cause of the fluctation in the validation loss?
I can think of three possibilities:
Regularisation - to help smoothing the learning process and make the model weights more robust. Adding/increasing your regularisation will prevent large updates to weights being introduced.
Batch size - is it relatively small (e.g. < 20?). This would mean that the measured mean error at the end of the network is computed using only a few samples. With a batch size of, say 8, then getting 4/8 correct and compared to getting 6/8 correct has a large relative difference when looking at the loss. Taking the mean of the errors with such small batches will lead to a not-so-smooth loss curve. If you have enough GPU memory/RAM, try increasing batch size.
Learning Rate - might be too large. This is similar to the first point regarding regularisation. To make smoother improvements, you might need to slow down the pace of learning as you approach a loss minimum. You could run this on a schedule, whereby the learning rate is reduced by some factor (e.g. multiplied by 0.5) every time the validation loss has not improved after, say, 6 epochs (see the sketch at the end of this answer). This will prevent you from taking big steps, overshooting a minimum and then just bouncing around it.
Specific to your task, I would also suggest trying to perhaps unfreeze another layer, to increase the scope of your fine-tuning. This will give the Resnet-18 a little more freedom to learn, based on your data.
Regarding your last question:
Is this something that I should be worried about, or should I just pick the model which scores the best on my performance measure (accuracy)?
Should you be worried? In short, no. A validation loss curve like yours can be perfectly fine and deliver reasonable results; however, I would try some of the steps I mentioned above before settling for it.
Should you just pick the best performing model? If you mean taking the model at its point with best validation loss (validation accuracy), then I would say to be more careful. On your plot above, this might equate to around epoch 30, but I would personally take a point that has trained a little more, where the curve gets a little less volatile. Again, after having tried some of the steps outlined above. |
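Regarding the learning-rate schedule, PyTorch ships this behaviour as ReduceLROnPlateau. A minimal sketch, assuming you fine-tune only the final layer and already have train_one_epoch/evaluate helpers (those names are placeholders for your own loops):
import torch

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.5, patience=6)

for epoch in range(n_epochs):
    train_one_epoch(model, optimizer)   # your existing training loop
    val_loss = evaluate(model)          # your existing validation pass
    scheduler.step(val_loss)            # halve the LR if val loss stalls for 6 epochs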
H: Boruta Feature Selection package
I am using Boruta feature selection (with Random forest) to decide the important features in the below data set.
Gender Married Dependents Education Self_Employed ApplicantIncome \
0 Male No 0 Graduate No 5849
1 Male Yes 1 Graduate No 4583
2 Male Yes 0 Graduate Yes 3000
3 Male Yes 0 Not Graduate No 2583
4 Male No 0 Graduate No 6000
CoapplicantIncome LoanAmount Loan_Amount_Term Credit_History \
0 0.0 NaN 360.0 1.0
1 1508.0 128.0 360.0 1.0
2 0.0 66.0 360.0 1.0
3 2358.0 120.0 360.0 1.0
4 0.0 141.0 360.0 1.0
Property_Area Loan_Status
0 Urban Y
1 Rural N
2 Urban Y
3 Urban Y
4 Urban Y
Please help me in clarifying the below doubts
1) Do I need to convert all categorical variables into numeric variables (using one-hot encoding) before applying Boruta?
2) Will NA values be taken care of by Boruta, or do we need to remove them before feeding the data in?
3) In the case of a regression problem, does the Boruta approach (random forest -> Boruta) remain the same as in a classification problem?
Thank you.
AI: In R, Boruta relies on the ranger implementation of random forest. So:
Converting input variables from categorical to numeric is not necessary.
You will need to address NA values prior to running the algorithm.
Be aware that Boruta can be very slow! |
H: Why is there a big drop off in performance in my GBM?
I'm working on an employee attrition predictive model using sklearn's GradientBoostingClassfier. I have 9,000 observations, which I split 50/50 for training and testing. I have another set of 1,200 observations that I use for a final validation. All 10,200 observations were obtained in similar fashion.
I carried out a grid search with 5-fold cross-validation in order to obtain a suitable set of hyper parameters. The results for my test set are good and very stable. However, there is a big drop off in performance when use my final validation data.
Results for the test set
-> Precision: 0.836 / Recall: 0.629 / Accuracy 0.874
Results for the final validation set
-> Precision: 0.149 / Recall: 0.725 / Accuracy 0.484
At first I thought this could be caused by data leakage, but even after removing "suspicious" features, there is still a big drop off when comparing the test results with the final validation results.
Surely I'm doing something wrong, but I'm at a loss as to what exactly. Here are the relevant lines of code (nothing fancy):
> X = pd.read_csv('train_test.csv')
> y = X.pop('Target')
> X_final = pd.read_csv('final_validation.csv')
> y_final = X_final.pop('Target')
> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
> gb = GradientBoostingClassifier(n_estimators=300, max_depth=5, learning_rate=0.2)
> gb_model = gb.fit(X_train, y_train)
> # test set
> y_pred = gb_model.predict(X_test)
> precision, recall, fscore, support = score(y_test, y_pred, average='binary')
> # final validation set
> y_hat = gb_model.predict(X_final)
> precision, recall, fscore, support = score(y_final, y_hat, average='binary')
Any thoughts?
AI: I would first suggest trying to plot the results during training. How do your metrics (or at least the loss) vary over the training process for training and cross-validation datasets?
The loss at each iteration is appended to your GBC object gb_model in the train_score_ attribute.
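A minimal sketch of that diagnostic, using attributes and methods sklearn already exposes on the fitted model (variable names follow your code):
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score

# training deviance recorded at every boosting iteration
plt.figure()
plt.plot(gb_model.train_score_)
plt.title('training deviance per boosting iteration')

# accuracy on the final validation set at every stage, to see where overfitting starts
val_acc = [accuracy_score(y_final, y_hat) for y_hat in gb_model.staged_predict(X_final)]
plt.figure()
plt.plot(val_acc)
plt.title('final validation accuracy per boosting iteration')
plt.show()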
Normally, when there is such a big gap between training and test performance, it indicates that you are overfitting to your training data and that the model does not generalise well to unseen data. You could think about shuffling your data in order to balance the training/validation/test datasets - [if you are looking at a time-series problem, you should be careful as to how you do this].
H: Ratio between embedded vector dimensions and vocabulary size
Using an Embedding layer in Keras on a fairly small vocabulary (~300), I am looking at how to choose the output dimension of this layer (the dense vector) when given a 300-dimensional input. I think that the embedded vector needs to have a minimum length to be able to map a given vocabulary.
AI: The ratio between vocabulary size and embedding length doesn't really matter for determining the size of the other layers in a neural network. Word embeddings are usually between 100 and 300 in length; longer embedding vectors don't add enough information and smaller ones don't represent the semantics well enough. What matters more is the network architecture, the algorithm(s) and the dataset size.
A simple way to understand this concept is that a bidirectional LSTM model with 50 neurons (nodes) followed by a fully connected layer of 70 neurons will outperform a simple MLP of 1000 neurons (nodes) connected to a embedding layer simply due to its architecture. Adding dropout will improve performance as well.
In addition, even if the vocabulary is just 300 words, using pre-trained embeddings will probably yield better results than training the embeddings directly on the dataset. The same applies to data size, a dataset with more samples will make a better classifier than a dataset with just a couple thousand samples.
In summary, it is preferable to try many architectures and cross-validate them (and/or ensemble them depending if you have a large enough dataset) with the smallest number of neurons possible and then start building up in size, depending on what computational resources you have and the speed of development you need. Large models slow down development speed whereas small models speed it up. This goes whether your vocabulary is the size of common crawl or just 300. As usual, try feature engineering (sentence length, special characters, etc.) and increase the dataset size as doing so often helps in whatever task you're trying to predict. |
H: np.c_ converts data type to object. Can I prevent that?
Was trying my hand at the Titanic dataset, when I wanted to One Hot Encode a categorical feature, after which I wanted to combine the original data with the new one hot vectors. The datatypes are as such:
data : Pandas Dataframe
Titles_ohe : Numpy sparse matrix (float64)
I tried to merge them into a dataframe using np.c_ :
columns = (list(data))+list(Titles.values)
data = pd.DataFrame(np.c_[data.values, Titles_ohe.toarray()], columns=columns)
However on checking the data type of the resulting Dataframe, all the attributes have been changed to the object datatype. Is there any way I can prevent this while using np.c_, or is there an alternative solution? Thanks in advance for any help!
AI: I'd use DataFrame.join() in this case:
data = data.join(pd.SparseDataFrame(Titles_ohe, index=data.index, columns=Titles)) |
H: One hot encoding at character level with Keras
I am reading Chollet's book on deep learning at the moment and in the NLP chapter he says:
Note that Keras has built-in utilities for doing one-hot encoding of text at the word level or character level.
I have looked into Keras methods and I cannot find which function he is referring to. keras.utils.to_categorical does not seem to be applicable directly here, as it requires int inputs.
I would like to efficiently encode some text at character level for a small RNN project: what can I use there?
AI: I think that you are looking for the keras Tokenizer with the char_level=True flag:
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(char_level=True)
tokenizer.fit_on_texts(your_dataset_train)
sequence_of_int = tokenizer.texts_to_sequences(your_dataset_train_or_test)
Now that you have sequences of integers, you can use keras.utils.to_categorical =)
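For completeness, that final step might look like this (the +1 assumes index 0 is reserved, since the Tokenizer starts its indices at 1):
from keras.utils import to_categorical

num_chars = len(tokenizer.word_index) + 1
one_hot = [to_categorical(seq, num_classes=num_chars) for seq in sequence_of_int]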
H: What is difference between a Data Scientist and a Data Analyst?
https://www.datacamp.com/community/tutorials/learn-data-science-infographic
https://www.datacamp.com/community/blog/data-engineering-vs-data-science-infographic
These links contain almost everything but not the difference between data science and data analytics.
Is data analytics a part of the data science workflow? Is data analytics a subset of data science?
AI: Please visit this and read the difference between Data analyst and Data scientist.
You will find the above link very useful: it directly addresses your question and also includes the perspective of a data scientist at LinkedIn.
Important lines from the above link:
Data analysts are masters in SQL and use regular expression to slice and dice the data. With some level of scientific curiosity data analysts can tell a story from data. A data scientist on the other hand possess all the skills of a data analysts with strong foundation in modelling, analytics, math, statistics and computer science. What differentiates a data scientist from a data analyst is the strong acumen along with the ability to communicate the findings in the form of a story to both IT leaders and business stakeholders in such a way that it can influence the manner in which a company approaches a business challenge. |
H: How to represent relation between users as a feature?
I'm developing a model for unsupervised anomaly detection. I have a dataset representing communications between users (each example represents a communication): there are many features (time, duration, ...) and the ids of sender and receiver. My question is: how to represent the link between those two users?
I have several ideas, but each of them seems to have serious drawbacks:
Use id as is. Drawback: even if ids are integers, they have no numerical sense (id 15 is not 3 times id 5) and I think this may mislead the system
Use one-hot style vectors: for example, with 3 users: user1 = (0 0 1), user2 = (0 1 0), user3 = (1 0 0). Drawback: the number of users may vary over time, thus the number of features would vary as well and I would have to re-train my model.
Graph theory: I've heard of that way of representing data, which could fit perfectly my data model. Drawback: I've absolutely no knowledge in graph analysis
Assign each user an id which is a prime number. That way a communication could be represented in a unique way as the product of the 2 ids. Drawback: as for point 1, ids do not have a "numerical sense"
What do you think may be the better way to represent these relations?
AI: There's a couple of approaches you can take depending on the nature of your data. It sounds like you're trying to detect social anomalies in your data so you need to model the communication boundaries between them which leads to some sort of graph representation.
If you don't have too many users in the system (say $n$) then you can create an $n\times n$ matrix $M$ over a time period that represents the communications between users. The component $M_{ij}$ could be either $1$ or $0$, depending on whether users $i$ and $j$ communicated, or alternatively the number of times that they communicated.
If you have more data then you would want to represent the data in terms of nodes and edges. Nodes would be the users and edges would be the presence of a communication. This can be done manually or by using a library such as NetworkX.
Here's a Python tutorial on getting started with graph network analysis.
If you're doing this at a large scale then you might want to use a graph database such as Neo4J. |
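If you want to try the graph route without prior graph-analysis experience, a minimal NetworkX sketch could look like the following; the edge list and ids are made-up placeholders for your sender/receiver pairs:
import networkx as nx

# toy communication log: (sender_id, receiver_id) pairs - the ids are made-up placeholders
edges = [(15, 5), (5, 7), (15, 7), (15, 5)]

G = nx.MultiGraph()          # a MultiGraph keeps repeated communications as parallel edges
G.add_edges_from(edges)

print(G.number_of_nodes(), G.number_of_edges())   # 3 nodes, 4 edges

# for a small number of users you can also get the n x n matrix M directly;
# entry (i, j) sums the parallel edges, i.e. counts the communications
A = nx.adjacency_matrix(G)
print(A.todense())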
H: how to derive the steps of principal component analysis?
I'm trying to understand the theoretical reasoning behind the method, but I can't understand a particular step in the middle of this page.
"The constraint on the numbers in $v_1$ is that the sum of the squares of the coefficients equals $1$. Expressed mathematically, we wish to maximize
$$\frac1N\sum_{i=1}^NY_{1i}^2$$
where
$$y_{1i}=v_1'z_i,$$
and $$v_1'v_1=1$$
(this is called "normalizing" $v_1$).
Computation of first principal component from $R$ and $v_1$. Substituting the middle equation in the first yields
$$\frac1N\sum_{i=1}^NY_{1i}^2=v_1'Rv_1."$$
I don't understand how $R$ suddenly appeared in this equation. The right hand side "$v_1' R v_1$" seems to have appeared out of nowhere.
Please help
AI: \begin{align}
\frac1N\sum_{i=1}^N Y_{1i}^2 &= \frac1N\sum_{i=1}^N (v_1'z_i)(z_i'v_1)\\
&=v_1'\left(\frac1N \sum_{i=1}^N z_iz_i' \right) v_1 \\
&= v_1'Rv_1
\end{align}
where $R=\frac1N \sum_{i=1}^N z_iz_i'.$ The key step is that each $Y_{1i}=v_1'z_i$ is a scalar, so it equals its own transpose $z_i'v_1$; writing $Y_{1i}^2=(v_1'z_i)(z_i'v_1)$ lets you pull the constant vector $v_1$ out of the sum, which is where $R$ comes from. |
H: why is mse training drastically different from the begining of each training with Encoder-Decoder
I am using encoder-decoder model to predict binary images from grayscale images. Here is the model
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, UpSampling2D, concatenate
from keras.models import Model
from keras import optimizers

inputs = Input(shape=(height, width, 1))
conv1 = Conv2D(4, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
conv1 = Conv2D(4, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
drop2 = Dropout(0.2)(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(drop2)
conv4 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
conv4 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
drop4 = Dropout(0.2)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
conv5 = Conv2D(32, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
drop5 = Dropout(0.2)(conv5)
up6 = Conv2D(16, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
conv6 = Conv2D(16, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)
up7 = Conv2D(8, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
merge7 = concatenate([drop2,up7], axis = 3)
conv7 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
conv7 = Conv2D(8, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)
up9 = Conv2D(4, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(4, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(4, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)
model = Model(inputs=inputs, outputs=conv10)
nadam = optimizers.Nadam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.0004)
model.compile(loss='mean_squared_error', optimizer=nadam, metrics=['accuracy', norm_mse])
nb_epoch = 30
batch_size = 8
history = model.fit(training, trainingLabel,
validation_data=(validation, validationLabel),
batch_size=batch_size, epochs=nb_epoch, shuffle="batch", callbacks=[checkpointer], verbose=1)
My training dataset size is 6912 and my validation dataset size is 1728.
Each time I start training my encoder-decoder in the very beginning I get different training accuracy and normalized MSE. It significantly affects the testing each time. I understand that weights in the beginning are randomly chosen and so it will affect the performance on the first epoch. My concern is that the difference is too large. Here are some examples:
Epoch 1/30
6912/6912 [==============================] - 27s 4ms/step - loss: 0.0612 - acc: 0.9252 - normalized_mse: 0.3661 - val_loss: 0.0367 - val_acc: 0.9559 - val_normalized_mse: 0.2920
Another example:
Epoch 1/30
6912/6912 [==============================] - 28s 4ms/step - loss: 0.1251 - acc: 0.8982 - normalized_mse: 0.5686 - val_loss: 0.1077 - val_acc: 0.9564 - val_normalized_mse: 0.5302
The last one:
Epoch 1/30
6912/6912 [==============================] - 26s 4ms/step - loss: 0.1721 - acc: 0.9400 - normalized_mse: 0.6751 - val_loss: 0.1582 - val_acc: 0.9582 - val_normalized_mse: 0.6473
If I run my model for more than 30 epochs it starts overfitting.
The input images are 256x256 grayscale, each with quite a few features; the output is a binary image.
Can someone please help me to understand if I am doing something wrong? And how can I make my model more stable?
[SOLVED] For those who face the same problem: just give the kernel_initializer a fixed random seed, e.g. kernel_initializer=he_normal(seed=seed_val). That will solve the issue.
AI: Just a couple of ideas:
Batch size: 8 is quite a small batch, meaning the average loss that is computed might have high volatility. If you have enough memory, you could increase this.
Diversity of input: try adding batch normalisation layers in the encoder part, to try to smooth the input for the conv layers. You said there are quite a few features, so perhaps this makes for noisy input, which would benefit from being normalised.
You could trying running the same experiment for 15 epochs, then plotting the training and validation losses (as they evolve perhaps, using the TensorBoard callback alongside your others). Do they follow any patterns or converge after some time?
You could try using different initialisation methods, or even gradient clipping, in order to make training a little smoother - constraining the size of the updates to weights during backpropagation.
Finally, another (brand-new!) result from research into GAN models shows that progressively increasing the size of the inputs to your models might help to smooth learning and also extract a more robust set of features, which generalise better. Have a read of this section of an article from FastAI on their experience.
EDIT: information regarding the ELU activation
Using this activation may help learning, as it has a little more flexibility than e.g. the ReLU, because it may also assume negative values. Here is an image from the original paper:
Here is the official definition (might slightly differ in the implementation of your framework):
The authors mention that this activation assists in pushing the mean activation closer to zero, just as batch normalisation does. In your case this might mean simply getting past the bumpy initial epochs and converging a little more quickly.
The final major point that the authors highlight is that the models they trained also generalised better - so you might enjoy an improved performance with your model using ELUs versus a model using ReLUs (assuming both are trained for similar time). |
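To make the two suggestions concrete, here is a hedged Keras sketch combining the seeded initializer from the [SOLVED] note with an ELU activation; the seed value and layer sizes are illustrative assumptions, not tuned choices:
from keras.layers import Conv2D
from keras.initializers import he_normal

seed_val = 42   # any fixed value - this exact number is an arbitrary assumption

# seeded initializer (the fix from the [SOLVED] note) plus an ELU instead of a ReLU
conv_layer = Conv2D(8, 3, activation='elu', padding='same',
                    kernel_initializer=he_normal(seed=seed_val))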
H: Skewness in response variable
Why is it a problem that your response variable is skewed in regression? Is taking logarithms the only way to solve it?
AI: Taking logarithms is not a one-size-fits-all approach - what if your response is negatively skewed?
"Simple" linear regression assumes that the response variable is normally distributed. If you'd like to perform this kind of regression, then you should probably transform your response variable to be as normal as possible.
There are other approaches / model types to deal with skewed responses - check out the Tweedie family. |
H: Creating labels for Text classification using keras
I have a text file with information that needs to be classified based on keywords. The text file contains a number of paragraphs, and the paragraphs contain keywords that we want (let's say salary amount, interest rate and so on).
I want to write a model which will extract the paragraph (or 3 to 4 lines of text) containing the keyword I want. How do I create a label in this case? All I have is raw text.
I am new to NLP. Any suggestions on how I can approach this?
AI: You can build a text classification application with a CNN using the Keras library.
Please take a look at this git repository. Here
As you can see, you need to create training and testing data by loading the polarity data from files, splitting the data into words, generating labels, and returning the split sentences and labels.
You can then create the convolutional neural network with Keras' Dense, Embedding, Conv2D and MaxPool2D layers.
Here is the final model training snippet.
from keras.layers import Input, Dense, Embedding, Conv2D, MaxPool2D
from keras.layers import Reshape, Flatten, Dropout, Concatenate
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
from keras.models import Model
from sklearn.model_selection import train_test_split
from data_helpers import load_data
print('Loading data')
x, y, vocabulary, vocabulary_inv = load_data()
# x.shape -> (10662, 56)
# y.shape -> (10662, 2)
# len(vocabulary) -> 18765
# len(vocabulary_inv) -> 18765
X_train, X_test, y_train, y_test = train_test_split( x, y, test_size=0.2, random_state=42)
# X_train.shape -> (8529, 56)
# y_train.shape -> (8529, 2)
# X_test.shape -> (2133, 56)
# y_test.shape -> (2133, 2)
sequence_length = x.shape[1] # 56
vocabulary_size = len(vocabulary_inv) # 18765
embedding_dim = 256
filter_sizes = [3,4,5]
num_filters = 512
drop = 0.5
epochs = 100
batch_size = 30
# this returns a tensor
print("Creating Model...")
inputs = Input(shape=(sequence_length,), dtype='int32')
embedding = Embedding(input_dim=vocabulary_size, output_dim=embedding_dim, input_length=sequence_length)(inputs)
reshape = Reshape((sequence_length,embedding_dim,1))(embedding)
conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
maxpool_0 = MaxPool2D(pool_size=(sequence_length - filter_sizes[0] + 1, 1), strides=(1,1), padding='valid')(conv_0)
maxpool_1 = MaxPool2D(pool_size=(sequence_length - filter_sizes[1] + 1, 1), strides=(1,1), padding='valid')(conv_1)
maxpool_2 = MaxPool2D(pool_size=(sequence_length - filter_sizes[2] + 1, 1), strides=(1,1), padding='valid')(conv_2)
concatenated_tensor = Concatenate(axis=1)([maxpool_0, maxpool_1, maxpool_2])
flatten = Flatten()(concatenated_tensor)
dropout = Dropout(drop)(flatten)
output = Dense(units=2, activation='softmax')(dropout)
# this creates a model mapping the inputs to the softmax output
model = Model(inputs=inputs, outputs=output)
checkpoint = ModelCheckpoint('weights.{epoch:03d}-{val_acc:.4f}.hdf5', monitor='val_acc', verbose=1, save_best_only=True, mode='auto')
adam = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
print("Traning Model...")
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_data=(X_test, y_test)) # starts training
By running this code, you will get the trained model weights saved in HDF5 format (via the ModelCheckpoint callback). Finally, you can use your model for prediction. |
H: Number of Fully connected layers in standard CNNs
I have a question targeting some basics of CNN. I came across various CNN networks like AlexNet, GoogLeNet and LeNet.
I have read in a lot of places that AlexNet has 3 fully connected layers with 4096, 4096 and 1000 neurons respectively. The layer containing 1000 nodes is the classification layer and each neuron represents one class.
Now I came across GoogLeNet. I read about its architecture here.
It says that GoogLeNet has 0 FC layers. However, you do need the 1000 node layer in the end with Softmax activation for the classification task. So the final layer isn't treated as FC in this?
Also then, what is the number of FC layers in LeNet-5?
A bit confused. Any help or leads would be greatly appreciated.
AI: I think the confusion with the Inception module comes from its somewhat complicated structure. The point on the relevant CS231n slide (#37), saying there are no FC layers, is partially correct. (Remember this is only a summary of the model to get the main points across!) In the actual part of the model being explained on that slide, they are referring only to the Inception modules:
No FC layers!
Definitions will, however, play a big role in deciding whether or not there are FC layers in the model.
In the bigger scheme of things (beyond a single Inception module), we have first to distinguish between the train and test time architectures.
At train time there are auxiliary branches, which do indeed have a few fully connected layers. These are used to force intermediate layers (or Inception modules) to be more aggressive in their quest for a final answer, or, in the words of the authors, to be more discriminative.
From the paper (page 6 [Szegedy et al., 2014]):
One interesting insight is that the strong performance
of relatively shallower networks on this task suggests that the features produced by the layers in the
middle of the network should be very discriminative. By adding auxiliary classifiers connected to
these intermediate layers, we would expect to encourage discrimination in the lower stages in the
classifier, increase the gradient signal that gets propagated back, and provide additional regularization.
The slice of the model shown below displays one of the auxiliary classifiers (branches) on the right of the Inception module:
This branch clearly has a few FC layers, the first of which is likely followed by a non-linearity such as a ReLU or tanh. The second one simply squishes the 1000 input weights into whatever number of classes are to be predicted (coincidentally or not, this is a 1000 here for ImageNet).
However, at test time, these branches are not active. They were used simply to train the weights of the modules, but do not contribute to the final classification probabilities produced at the end of the entire model architecture.
This all leaves us with just the suspicious-looking block right at the end of the model:
There is clearly a big blue FC layer there!
This is where definitions come into play. It is somewhat subjective.
Is a fully connected layer one in which each of the $m$ weights is connected to each of the $n$ nodes? Is it a layer in which representations are learned, and if so, does the layer require a non-linearity? We know that neural networks require non-linearities, such as ReLU and tanh functions, to be applied to the outputs of a layer (thinking in forward flow). Without these, neural networks would simply be compositions of linear functions, and so going deeper wouldn't theoretically add any power, as we would essentially just be performing a huge linear regression.
In this spirit, we can look at the final piece of the puzzle, and point out that this final FC layer is noted to simply be linear! That is, it takes all the weights resulting from the preceding Average Pooling layer, and combines them into a linear combination of only 1000 values - ready for the softmax. This can all be understood from the tabular overview of the network architecture:
So, do you agree with the Stanford guys or not? I do! |
H: How to optimize for time correlated hidden function - the magical candy machine
Let's assume that we have this magical candy machine which takes candies as input and delivers again candies as output.
For any given time t, it picks a random function which is strictly
increasing up to a point such as f(2) = 6 here, and then it strictly
decreases. It likes to be challenging but if you become greedy it
punishes you like most of the stuff in life.
f(1) = 5
f(2) = 6
f(3) = 4
f(4) = 2
f(100) = 0
The tricky point is that this function is changing all the time, but
still highly correlated with time. So f() will be similar between
t(1) and t(2), but very different between t(1) and t(100).
I want to write a program to maximize my candies using this magical candy machine. I know the fundamentals of ML but I'm not sure which approach would fit best here. Any ideas?
Note: You can only play once every minute.
AI: It sounds like a use case for stochastic optimization algorithms: since your reward function is not observed but highly correlated in time, I would expect stochastic optimization algorithms to converge fast to the maximum of the reward function and follow it closely as it slowly changes.
Because of this, I expect that you would obtain pretty good results with simple stochastic optimization algorithms without going to complex models such as Reinforcement Learning. Also, it's always good to start easy.
Note: whether stochastic optimization algorithms can converge to the maximum of the reward function depends on how fast the reward function changes.
Edit, after request for example:
I would start with Random Search.
Basically the idea is to
pick a position at random
pick one new position at random
select the position with highest reward
repeat steps 2-3
The trick is in generating good candidate points (step 2) in order to speed up convergence to the maximum. I have a few basic ideas that can help:
pick a random position by drawing from a random distribution (say gaussian) with mean equal to the current position and standard deviation somehow inversely proportional to the current reward $F_t$ (something like $\alpha e^{(-\beta F_t)}$) so that if we are close to the maximum of the reward function we pick a new candidate close to the current position, and if we have a low reward we can explore farther away from the current position. These two strategies are usually called "exploitation" and "exploration"
the idea above has the disadvantage of being "memory-less", i.e. it doesn't take the gradient into account. In fact, drawing from a Gaussian distribution can push you either closer to or away from the maximum with equal chance, the Gaussian being symmetric around its mean. A possible workaround would be to draw the new random candidate position from a skew normal distribution, with the skewness parameter somehow dependent on the sign of the gradient of the reward function at the previous step.
I think that in your case you don't need much fancier algorithms since - if I understood correctly - your reward function is not multimodal thus we do not risk getting stuck in a local maximum. |
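To make the idea concrete, here is a minimal Python sketch of the random-search loop with the exploration/exploitation proposal described above; the candy_machine function and the alpha/beta values are stand-ins for the real, unknown reward and are pure assumptions:
import numpy as np

def candy_machine(x, t):
    # stand-in for the unknown reward f_t(x): a peak that drifts slowly with time
    peak = 2 + 0.01 * t
    return max(0.0, 6 - (x - peak) ** 2)

alpha, beta = 5.0, 0.5              # exploration parameters (assumed values)
x = np.random.uniform(0, 10)        # step 1: random initial position
reward = candy_machine(x, 0)

for t in range(1, 200):             # one play per minute
    sigma = alpha * np.exp(-beta * reward)       # high reward -> exploit, low reward -> explore
    candidate = x + np.random.normal(0, sigma)   # step 2: random candidate position
    candidate_reward = candy_machine(candidate, t)
    if candidate_reward >= reward:               # step 3: keep the better position
        x, reward = candidate, candidate_reward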
H: Why not use Scaler.fit_transform on total dataframe?
In sklearn I'm normalizing the data with MinMaxScaler. The example I'm following uses
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train, X_test, y_train, y_test = train_test_split(X_crime, y_crime,random_state = 0)
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
Now I wonder why this is done separately on the train and test set, and not on the X_crime dataframe like:
X_crime_scaled = scaler.fit_transform(X_crime)
X_train_scaled, X_test_scaled, y_train, y_test = train_test_split(X_crime_scaled, y_crime, random_state = 0)
the R-squared score is higher and with this option I know all my values are normalized between 0 and 1.
AI: If you would use the scaler on the full dataset you would provide the algorithm with some information about the values in the test set that it would not have otherwise.
This additional information ("data leakage") from your test set to your training set is problematic because it leads to unreliable performance estimates. It is therefore not surprising that you achieve a higher R-squared. Due to the data leakage, this R-squared might be overly optimistic because it could depend on the additional information that you introduced into the training set.
This is particularly true for the MinMaxScaler because it is by definition very sensitive to outliers. The effect will probably less problematic if you use the RobustScaler (or even the StandardScaler).
The same holds for other preprocessing steps like outlier removal, feature selection etc.
If you are worried that your training data does not adequately reflect the true distribution, you can fall back to a cross-validation approach with multiple folds so you can estimate the effect across multiple splits of the data. Again, remember to fit the scaler on all the training folds and apply it to the test fold for every iteration. |
H: Why is spam detection a classification problem and not a class modelling problem
Trying to get my feet wet with machine learning on text.
The most common dataset I've seen in this space is the sms dataset with classes ham and spam.
And the most common and successful approach seems to be to model this as a binary classification problem and to use a multinomial naïve Bayes to solve it.
However I'm trying to understand why this is a binary classification problem.
I understand that the spam category has some common features associated with it across the class - such as ads, offers , free discounts and so on.
But there's no definition for what is a ham class is there? The definition of ham is - everything other than spam.
So why is this a binary classification task?
For more context - I'm trying to solve the problem of whether a news article belongs to the politics class or to the non-political class.
Suppose I have a labelled dataset of around 3000 samples in each class.
The non political class is a mix of classes like sports , religion , science and technology and miscellaneous.
Will a binary classifier work better than an algorithm such as oneclassSVM where anything other than political news is an outlier ?
What are some of the other algorithms that I can use to solve this problem? I have heard about PU learning but I haven't seen any implementations of algorithms in any machine learning libraries ( I'm working with python)
If any of you have experience doing class modelling on text. Please share your comments and insights
Thank you!
AI: Regarding the case of spam vs ham, you are right that the spam category has common features (words), whereas the ham category could have multiple sub-categories, each with distinct feature sets. However, these distinct features can also be used to label an instance as "ham". Eg: If spam messages don't usually talk about sports scores, then the occurrence of the word "score", could be used to classify a message as "ham", even when it co-occurs with typical spam words such as "ad", "offer", etc. However, the one-class classifier cannot make use of such a mechanism.
In a similar manner, specific features will help to identify a news article as belonging to "sports", "religion", etc, thereby making them easier to label as "non-political". The miscellaneous class would be more diverse, however.
One-class SVMs are useful in cases where you want to detect novel instances, the kind of which you have not seen before, and hence cannot characterize in advance. Binary classification can be used when the understanding is that the available labeled dataset covers the typical kind of examples you would see while using the model for prediction. |
H: How to release datasets with fingerprinting
I intend on monetising some large datasets. These datasets are anonymised and released to (paying) clients via a web api. Are there any standard algorithms such that if the datasets are intentionally leaked publicly, the data can be altered such that the responsible party can be identified, while at the same time the data remains practically useful?
There are certain approaches which come to mind, such as every client's data being very slightly different with known changes. For example in spatial data, every lon/lat pair is altered by the same very small vector. My worry is that if the data is anonymised again by the client before being leaked, a naive attempt might easily be circumvented.
(I am not a data scientist so I'm not really sure what the correct jargon is for what I am looking for)
AI: "Digital watermarking" is a set of techniques that might be useful in this context.
From the wikipedia page:
"Watermarking" is the process of hiding digital information in a carrier signal... Digital watermarks may be used to verify the authenticity or integrity of the carrier signal or to show the identity of its owners. [Emphasis added]
To address your requirement, you would insert a unique watermark for each client that receives your data. Watermarking techniques address requirements such as robustness to modification and imperceptibility.
As an example, this paper talks about watermarking numerical data: "Watermarking Numerical Data in the Presence of Noise", pdf |
H: How to Normalize & Scale a Single Data Point
I do understand the concept of normalizing & scaling the training/test data; it does help with the converging of the cost function. It is a great helper for many of the machine learning algorithms.
I train and validate my model with the normalized data (MinMaxScaler) and save my model.
A new input data comes in and I want to use my saved model to make predictions.
I no longer have access to the training/test data at this point. All I have is the new single row of input data.
How will I normalize this single vector of input data so that it can be fed to my model?
The only normalization that I can think of is a simple linear transformation of the data (e.g. simply map values from the [40, 70] range to [-1, 1]). I need a normalization technique that doesn't depend on the full range of data.
Thanks!
AI: You need to normalise the input in the same way that the training data was normalised -- however, you don't need access to this training data during predictions of new data. If you have used a MinMaxScaler for example, then you can re-use this to transform your new data point:
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_normalised = scaler.fit_transform(X)
# do other stuff here like train & validate your model
# now we have a new test data point from somewhere, we scale
# it using the scaler we fitted on the training data
new_test_point_normalised = scaler.transform(new_test_point)
# now we can classify it with our model!
You can serialise the scaler to disk using pickle and re-load it when predicting new data points if necessary (i.e., the model training and testing happen in two different scripts). |
H: Evaluation methods for multi-class classification
I am looking for single-number evaluation method that can be used in multi-class classification tasks that take into account imbalanced data-sets. For instance, ROC-AUC defined by binary classifiers, is a single-number and takes into account imbalanced data-sets. On the other hand, accuracy is single-number, defined for multi-class classifiers and does not take into account imbalanced data-sets. Finally, the confusion matrix is defined for multi-class, takes that into account but is not single-number. Is there any evaluation method that satisfies the three conditions?
AI: How about a weighted log-loss?
Let's say we have $m$ classes $c_1, \dots, c_m$. We can give each class $c_i$ a weight $w_i$ which is inversely proportional to the percentage of the dataset that belongs to $c_i$. Then, the loss for some data set with actual classes $y = y_1, \dots, y_n$ and predicted class probabilities $\hat{y}$ (where $\hat{y}_{ji}$ is the predicted probability that example $j$ belongs to class $i$) can be defined as
$$
\text{loss}(y, \hat{y}) = -\frac{1}{n} \sum_{j=1}^n\sum_{i=1}^m w_i \, {I}_{(y_j = i)}\log(\hat{y}_{ji})
$$
where ${I}_{(y_j = i)}$ is an indicator function which evaluates to 1 if $y_j = i$ and 0 otherwise. The minus sign makes the loss non-negative, so that lower values really do correspond to better predictions.
One disadvantage is that, given some value of the loss function on its own, it's not immediately obvious how good that value is. However, it is easy to compare two values (lower is better). |
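A small NumPy sketch of the formula above (the toy labels, probabilities and weights are made up for illustration):
import numpy as np

def weighted_log_loss(y_true, y_pred_proba, class_weights):
    # y_true: (n,) integer labels; y_pred_proba: (n, m) predicted probabilities
    n = len(y_true)
    picked = y_pred_proba[np.arange(n), y_true]      # probability assigned to the true class
    w = class_weights[y_true]                        # weight of the true class
    return -np.mean(w * np.log(np.clip(picked, 1e-15, 1.0)))

# toy example: class 1 is rarer, so it gets the larger weight
y_true = np.array([0, 0, 1, 0])
y_pred = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
weights = np.array([0.25, 0.75])
print(weighted_log_loss(y_true, y_pred, weights))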
H: Help me choose a Data Science book in Python
I've been a Data Scientist for a few years now, but I've only recently started to do most of my work in Python (boy, do I miss ggplot2! But altair is coming to the rescue).
I want to improve my Python skills and, since most of my work is related to developing Data Science & Analytics applications, I'd rather learn from a book about these topics than from a book on, say, Application Server frameworks. Also, since I mostly develop Deep Learning models, I was looking for a PyTorch book. However I could only find two, of which one is really crappy, and the other one is from authors I really respect, but on a topic I don't work with (NLP):
https://www.amazon.com/Natural-Language-Processing-PyTorch-Applications/dp/1491978236
So I've done a bit of research and found about the following books:
Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
Data Science from Scratch: First Principles with Python
Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython(second edition recently came out)
Python Data Science Handbook: Essential Tools for Working with Data
Which book would you choose? You're free to suggest other titles. Requirements, in order of importance:
It should be a good book about Python, so the code should be Pythonic
It should be about about PyTorch, but not only about NLP (this requirement will probably be unfulfilled, for reasons above). Not about Keras, or mainly about Keras: I already have a reference for that.
It should be about Data Science
Price is not an issue.
AI: I am reading Hands-On Machine Learning with Scikit-Learn and TensorFlow; it's a great book (I'm halfway through). However, it uses Scikit-Learn as per the title, and some of the mechanics inside Scikit-Learn are a black box (not explained in depth). The general concepts, however, are still quite well written.
The 2nd half of the book is about TensorFlow, so again, perhaps not what you want if you're focused on PyTorch? |
H: How to improve accuracy of deep neural networks
I am using Tensorflow to predict whether the given sentence is positive and negative. I have take 5000 samples of positive sentences and 5000 samples of negative sentences. 90% of the data I used it for training the neural network and rest 10% for testing.
Below is the parameter initialisation.
Batch size = 100
Epochs = 30
Number of hidden layers = 3
Nodes in each hidden layer = 100
I could see in each epoch the cost function is getting reduced reasonably. However the accuracy of the model on test set is poor (only 56%)
Epoch 1 completed out of 30 loss : 22611.10902404785
Epoch 2 completed out of 30 loss : 12377.467597961426
Epoch 3 completed out of 30 loss : 8659.753067016602
Epoch 4 completed out of 30 loss : 6678.618850708008
Epoch 5 completed out of 30 loss : 5391.995906829834
Epoch 6 completed out of 30 loss : 4476.406986236572
Epoch 7 completed out of 30 loss : 3776.497922897339
-------------------------------------------------------
Epoch 25 completed out of 30 loss : 478.93606185913086
Epoch 26 completed out of 30 loss : 450.8017848730087
Epoch 27 completed out of 30 loss : 435.0913710594177
Epoch 28 completed out of 30 loss : 452.10553523898125
Epoch 29 completed out of 30 loss : 539.5199084281921
Epoch 30 completed out of 30 loss : 685.9198244810104
Accuracy of Train : 0.88155556
Accuracy of Test : 0.524
Is there any parameter that can be tuned to increase the accuracy of the model considering the same number of data set.
AI: You need much more data.
Deep NN shines when you have excessive amounts of data.
With only a little bit of data it can easily overfit. The big difference between training and test performance shows that your network is overfitting badly. This is likely also because your network model has too much capacity (variables, nodes) compared to the amount of training data. A smaller network (fewer nodes) may overfit less. |
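As an illustration only (not a tuned architecture), a smaller, regularised model could look like the Keras sketch below; num_features stands for the size of your sentence vectors and is an assumption, and the same idea applies to your raw TensorFlow graph:
from keras.models import Sequential
from keras.layers import Dense, Dropout

num_features = 1000   # assumption: the length of your bag-of-words / sentence vectors

model = Sequential([
    Dense(32, activation='relu', input_dim=num_features),  # far fewer nodes than 3 x 100
    Dropout(0.5),                                          # extra regularisation
    Dense(32, activation='relu'),
    Dropout(0.5),
    Dense(2, activation='softmax'),                        # positive / negative
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])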
H: custom delimiter for dat file
I've a dat file in the following format which I'm trying to load using pandas.
28::Persuasion (1995)::Romance
29::City of Lost Children, The (1995)::Adventure|Sci-Fi
30::Shanghai Triad (Yao a yao yao dao waipo qiao) (1995)::Drama
31::Dangerous Minds (1995)::Drama
32::Twelve Monkeys (1995)::Drama|Sci-Fi
33::Wings of Courage (1995)::Adventure|Romance
34::Babe (1995)::Children's|Comedy|Drama
Code
movies = pd.read_csv('movies.dat')
I'm getting this error
ParserError: Error tokenizing data. C error: Expected 1 fields in line 11, saw 2
Line 11:
11::American President, The (1995)::Comedy|Drama|Romance
I understand that it's because of the comma. One option I found is to skip these lines, which I don't want to. Is it possible to set newline as the delimiter here? Any help will be truly appreciated.
AI: You have to tell
pd.read_csv
the separator that you want to use by doing:
pd.read_csv('movies.dat', sep = '::')
This should work. |
H: What is normalization for?
I am new to Python and data science (and not great in math). I am learning machine learning. I got the following normalize function. Can you please explain what this normalize function does?
def normalize(array):
return (array - array.mean()) / array.std()
Also please explain what this array - array.mean() does?
AI: Also, please explain what this array - array.mean() does.
Basically, it is doing an element-wise subtraction after broadcasting. The np.mean function finds the mean of your array, and its result is a scalar, a single number. Your array is a numpy array, so that single scalar gets broadcast (extended) to the shape of the array; an element-wise subtraction is then performed for each entry, and the result has the same shape as the original array.
Can you please explain what this normalize function does?
Normalizing data is done to accelerate optimization. If you have features with different scales, it will take too much time for your optimizer to find optimal points. Suppose you have an age feature which can range from 0 to 150 (!) and a salary which can range from 0 to whatever, like 500,000,000 $. The optimization algorithm used in your ML model will take too much time, if it manages at all, to find appropriate weights for each feature. Moreover, if you don't scale your data, your ML algorithm may pay too much attention to the features with large scales. |
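A quick NumPy example of what happens step by step (the array values are arbitrary):
import numpy as np

def normalize(array):
    return (array - array.mean()) / array.std()

array = np.array([10., 20., 30., 40.])
print(array.mean())            # 25.0
print(array - array.mean())    # [-15.  -5.   5.  15.]  (25.0 is broadcast to every entry)
print(normalize(array))        # approx [-1.34 -0.45  0.45  1.34]  (zero mean, unit std)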
H: autoEncoder as LSTM input, any benefit?
When training my LSTM, is there any incentive to pre-pass its inputs through an auto-encoder, or should I always supply as raw data as possible?
All I have is a small amount of training data, and it contains only a few very rare significant events (spikes, spaced out in time).
AI: The purpose of the autoencoder would be similar to doing an embedding of text: to pass latent space features of the input to the LSTM, as opposed to raw inputs. In the case of text, this is usually done to reduce the very large and sparse dictionary (word, subword, whatever) space to a space which is less sparse, significantly smaller, and therefore more computationally efficient.
In your case, it looks like the data is not high dimensional to start with (I am assuming this because of the significant events over time), so you would not get the benefit of mapping the input to a latent space. If your trend over time is high dimensional, then it may be worth it, as the embedding/autoencoding may pick up and reduce correlations between trends in a few dimensions. |
H: GridSearch mean_test_score vs mean_train_score
I am working with scikit learn and GridSearch in order to find the best parameters in my classifiers.
I have a map of different hyperparameters and I want to print out GridSearch results, but I do not understand one thing - what is the difference between mean_test_score and mean_train_score?
As I understand, GridSearch performs cross-validation in order to find the best classifier, but how do these 2 params differ from one another? I always thought that cross-validation gives only one mean, which is a mean of the performance from trained models using N subsets of given data. For example, if I perform a cross-validation with X subsets, I will have X different accuracy scores and then I will have only one mean value.
So, how can I interpret these 2 parameters and what is the difference between them?
AI: When you do k-fold cross-validation, you train k models, each one of them leaving the proportion $1/k$ of the data out. For each of the models, you can compute its train error and validation error. The train error will be the error on the data selected to train the model, and the validation error will be the data left out of the training.
For this reason, you have k training errors and k validation/test errors, and computing their averages will give you the quantities you are talking about. |
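You can see both quantities directly in cv_results_; here is a small sketch on the iris data (the estimator and parameter grid are just placeholders for your own):
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(SVC(), param_grid={'C': [0.1, 1, 10]},
                    cv=10, return_train_score=True)
grid.fit(X, y)

# one entry per hyperparameter setting; each value is averaged over the 10 folds
print(grid.cv_results_['mean_train_score'])
print(grid.cv_results_['mean_test_score'])
print(grid.best_params_)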
H: MemoryError for np.array
I was trying the Keras CNN Stater Code on Ubuntu 16.04, from the below link:
https://www.hackerearth.com/challenge/competitive/deep-learning-3/machine-learning/predict-the-energy-used-612632a9/#c144537
I get “MemoryError:” for
X_train = np.array(train_img, np.float32) / 255.
Any idea, what should I be doing?
AI: MemoryError means exactly what it says: you have run out of RAM for your code to execute.
When this error occurs it is likely because you have loaded the entire data into memory. For large datasets you will want to use batch processing. Instead of loading your entire dataset into memory you should keep your data in your hard drive and access it in batches. If you are using Keras there is a helper class with a very efficient implementation of batch processing. Take a look at this blog post. This is a good starting point for avoiding MemoryError.
As a short term fix you can train your model using a subset of the data available to you and discard the rest. Doing this really is a shame however. |
H: Training with data of different shapes. Is padding an alternative?
I have a dataset of about 1k samples and I want to apply some unsuspervised techniques in order to clustering and visualization of these data.
The data can be interpreted as a table of a spreadsheet and unfortunately it doesn't have a very well-defined structure. The number of table lines varies, but not the columns.
The data is structured like this:
sample 1:
{
"table1": {
"column1": [
"-",
"-",
"-"
],
"column2": [
"2017-04-16 10:00",
"2017-04-16 10:00",
"2017-04-16 10:00"
],
"column3": [
"-",
"-",
"-"
],
"column4": [
"name X",
"name Y",
"name Z"
],
"column5": [
"0",
"0",
"0"
],
}
}
sample 2:
{
"table1": {
"column1": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column2": [
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00"
],
"column3": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column4": [
"name A",
"name Z",
"name B",
"name X",
"name C",
"name D",
"name E",
"name F"
],
"coumn5": [
"",
"",
"3",
"1",
"0",
"3",
"0",
"0"
]
}
}
These samples come from alarms generated by a system that collects information from a lot of nodes (these nodes are named "name A", "name B"...). My objective is to transform these data into a matrix (n_samples x n_features) to apply clustering and visualization algorithms.
How can I work with these data for unsupervised training? Is padding a way forward for this problem? If so, how can I apply the padding on this case?
AI: Whether or not padding is appropriate really depends on the entire structure of your dataset, how relevant the different variables/columns are, and also the type of model you want to run at the end.
Padding would be used, whereby you would have to fix the length of each sample (either to the length of the longest sample, or to a fixed length - longer samples would be trimmed or filtered somehow to fit into that length). Variables that are strings can be padded with empty strings, variables with number can be padded with zeros. There are however many other ways to pad, e.g. using the average of a numerical variable, or even model-based padding, padding with values that "make most sense" for filling gaps in that specific sample. Getting deep into it like that might more generally be called imputation, instead of padding - it is common in time series data, where gaps aren't always at one end of a sample.
Below I outline one approach to padding or standardizing the length of each sample. It it not specifically padding.
As you did not mention a programming language, I will give and code snippet in Python, but the same is easily achievable in other languages such as R and Julia.
The Approach
Based on the two examples you provide, it seems each example would be a calendar day, on which there are a variable number of observations.
There are also columns that are strings, and others are strings of numbers (e.g. column 5 in sample 2).
In time-series analysis in general, it is desirable to have a continuous frequency of data. That means having one day give one input. So my approach would be to make your data into a form that resembles a single input for each of the variables (i.e. columns) of each sample.
This is general approach, and you will have to try things out or do more research on how this would look in reality for your specific data at large.
Timestamps
I would use these as a kind of index, like the index in a Pandas DataFrame. One row = one timestamp. Multiple variables are then different columns.
Dealing with strings
I will assume that your dataset has a finite number of possible strings in each column. For example, that column 4 (holding names) will always hold a name from a given set. One could perform set(sample['table1']['column4']) on one sample to see which values there are (removing duplicates). Or even:
# Gather a single list containing all strings in column 4 of all samples
all_names = []
for sample in samples:  # 'samples' is assumed to be a list of parsed dicts like the two shown above
    all_names.extend(sample['table1']['column4'])

# Check how many strings there are
from collections import Counter
counter = Counter(all_names)
print(len(counter)) # see how many unique values exist
print(counter) # see how many times each string appears
print(counter.most_common(5)) # see the most common 5 strings
Now assuming this shows there is a finite number (more than likely the case), You could look into using a sparse representation of each sample (that means for each day). For example, if all the words in the entire dataset were: ['hi', 'hello', 'wasup', 'yo', 'bonjour'] (duplicates removed), then for one single sample with column 4 holding e.g. ['hi', 'hello', 'yo', 'hi'], your sparse representation for this sample would be: [2, 1, 0, 1, 0], because the sample has two 'hi', one 'hello', zero 'wasup' and so on. This sparse representation would then be your single input for column 4 for the timestamp (that single sample).
It might be worth looking into something like the DictVectorizer and CountVectorizer from Scikit-Learn.
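As a hedged sketch of that idea, you could build the sparse count vectors with a Counter per sample and a DictVectorizer; the sample name lists below are toy values:
from collections import Counter
from sklearn.feature_extraction import DictVectorizer

# one list of column-4 names per sample (per day) - toy values
samples_names = [
    ['name X', 'name Y', 'name Z'],
    ['name A', 'name Z', 'name B', 'name X'],
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([Counter(names) for names in samples_names])
print(vec.get_feature_names())   # the fixed vocabulary (get_feature_names_out in newer versions)
print(X)                         # one count vector per sample, zeros for absent names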
Dealing with number columns
As I mentioned right at the beginning, you could pad these to a chosen length, perhaps matching the length of the string based representation above (or not!), depending on your final model.
You can then pad the inputs with a value that makes sense for your model (see the kind of options I mentioned at the beginning of my answer).
This should land you with, once again, a single vector for the given day, containing the information in the numerical column. |
H: Calculate average Intersection over Union
I want to have a global IoU metric for each class in a segmentation model with a neural net. The idea is, once the net is trained, doing the forward pass over all training examples an calculate the IoU, I'm thinking in two approaches (for each class):
1) Calculate IoU for each training instance, and finally, calculate the mean IoU (per class)
2) Accumulate the intersections and unions along all the training instances, (per class) and finally taking the ratio.
To illustrate the problem, let's take two training instances in which for class=0, intersection_1 = 2, intersection_2 = 3, union_1=7, union_2=6. The mean IoU (approach 1) wil be 0.3929 and the second approach will be 5/13 = 0.3846. What method do you think will give better/unbiased result?
AI: The two approaches won't usually make a big difference if all your images and objects are of reasonable size. By reasonable I mean you are not working on objects that are only a few pixels big.
I would usually prefer the second approach. One particular reason is you don't have to worry about instances where both I and U are 0, which could happen frequently at the beginning of your training stage.
From my experience most machine learning software adopts the second approach. For instance, the mean_iou in Tensorflow simply flattens the input tensor into a vector before calculating the IoU for each class. |
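A small NumPy sketch of the two approaches, using the intersection/union numbers from the question:
import numpy as np

def intersection_and_union(pred, target):
    # pred, target: boolean masks for one class of one image
    return np.logical_and(pred, target).sum(), np.logical_or(pred, target).sum()

# toy numbers matching the example in the question: I1=2, U1=7, I2=3, U2=6
inters, unions = [2, 3], [7, 6]

mean_iou = np.mean([i / u for i, u in zip(inters, unions)])   # approach 1 -> 0.3929
global_iou = sum(inters) / sum(unions)                        # approach 2 -> 5/13 = 0.3846
print(mean_iou, global_iou)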
H: What does 1024 by 3 model mean?
I was watching this video and Sentdex mentioned he had to switch around 1024 by 3 model. What does he mean by 1024 by 3 model and what did he change around?
Edited: Youtube link with the timestamp where he talks about this topic
AI: At 3:30 in video, he mentions this switching. What he is referring to by 'switch around' is that he is specifying the size of the LSTM being used. The 1024 refers to number of cells (or size) of the LSTM, and the 3 is the number of layers.
In the tutorial page being referenced, the snippet he is referring to is:
python translate.py
--data_dir [your_data_directory] --train_dir [checkpoints_directory]
--size=256 --num_layers=2 --steps_per_checkpoint=50
For details on why LSTMs are useful for this task, check out the paper. |
H: Caret and rpart - does caret automatically prune rpart trees
Question relating to the caret package 'rpart' method.
Does the method='rpart' automatically prune the tree? If so, what rules does it follow? If not, how does one go about directing caret to do this?
AI: To give a proper background for rpart package and rpart method with caret package:
1. If you use the rpart package directly, it will construct the complete tree by default.
If you want to prune the tree, you need to provide the optional parameter rpart.control which controls the fit of the tree. R documentation below, eg.:
rpart(formula, data, method, control = prune.control)
prune.control = rpart.control(minsplit = 20,
minbucket = round(minsplit/3), cp = 0.01,
maxcompete = 4, maxsurrogate = 5, usesurrogate = 2,
xval = 10, surrogatestyle = 0, maxdepth = 30 )
these are the hyper parameters you can tune to obtain a pruned tree.
One commonly followed approach is to not fix cp (the complexity parameter) up front and to perform cross validation (xval) instead, something like:
rpart.control(minsplit = 20, minbucket = round(minsplit/3), xval = 10)
The complexity parameter (cp) can be thought of as a measure of the complexity / number of splits of your model, and you want to increase complexity only as long as your model keeps generalizing to new observations, i.e. regularization.
Therefore, evaluate the cross-validated error vs cp and choose the cp value that gives a good (low) error (call it cp_good).
Finally, add it as your control parameter i.e. rpart.control(cp = cp_good) or use the prune function i.e. prune(fit, cp = cp_good) to get the desired tree.
2. The caret package, on the other hand, already implements the rpart method with cp as the tuning parameter. By default, caret will prune your tree based on a run it makes over a default parameter grid (even if you don't supply any tuneGrid or trControl while training your model):
model <- train(data,
labels,
method = "rpart") |
H: Understanding scipy sparse matrix types
I am trying to select the best scipy sparse matrix type to use in my algorithm. In that, I should initialize data in a vij way, then I should use it to perform matrix vector multiplication. Eventually I have to add rows and cols. Trying to select the best for my problem, I want to understand which are the best cases to use each of this types: lil_matrix, coo_matrix, csr_matrix, csc_matrix, dok_matrix. Can someone explain me? Its not necessary to show examples of all the types in the same answer.
AI: Ok, I was looking for an answer and now I have it clearer:
Scipy documentation does not elaborate too much on the explanation, but wikipedia article is much more clear. For those who are looking for an answer, there are two major groups of sparse matrices:
a) Sparse types used to construct the matrices:
DOK (Dictionary Of Keys): a dictionary that maps (row, column) to the value of the elements. It uses a hash table so it's efficient to set elements.
LIL (LIst of Lists): LIL stores one list per row. The lil_matrix format is row-based, so if we want to use it then in other operations, conversion to CSR is efficient, whereas conversion to CSC is less so.
COO (COOrdinate list): stores a list of (row, column, value) tuples.
b) Sparse types that support efficient access, arithmetic operations, column or row slicing, and matrix-vector products:
CSR (Compressed Sparse Row): similar to COO, but compresses the row indices. Holds all the nonzero entries of M in left-to-right top-to-bottom ("row-major") order (all elements in the first row, then all elements in the second row, and so on). More efficient in row indexing and row slicing, because elements in the same row are stored contiguously in memory.
CSC (Compressed Sparse Column): similar to CSR except that values are read first by column. More efficient in a column indexing and column slicing.
Once the matrices are build using one of the a) types, to perform manipulations such as multiplication or inversion, we should convert the matrix to either CSC or CSR format. |
H: Ridge and Lasso Regularization
Recently, I started working on Ridge and Lasso regularization for Linear and Logistic Regression.
My doubts are given below:
Is the penalty the same (by same proportion) for all the coefficients or is it based on variable importance?
If it is the latter I believe we can directly apply regularization rather than spending time in feature selection.
Is multi-collinearity taken care of by ridge and lasso regularization?
Thank you.
AI: The penalty factor $\lambda$ is the same for all coefficients; what each coefficient contributes to the penalty depends only on its magnitude, not on any measure of variable importance. The penalization added to the cost function is $\lambda ||\omega||_2^2$ for Ridge or $\lambda ||\omega||_1$ for Lasso.
As to whether it is more convenient to apply regularization or feature selection: Lasso already does some feature selection for you, as the estimated weights for Lasso are sparse (there will be many coefficients equal to 0).
About multi-collinearity, Ridge handles collinear variables by shrinking their coefficients towards each other without dropping any of them, while Lasso tends to pick one variable from a collinear group and shrink the others to zero.
All that I have said is taken from An Introduction to Statistical Learning book, from James, Witten, Hastie and Tibshirani. |
H: Linear Regression - finding theta using the Normal equation
This is to find the theta which will give the minimum cost function. Why is the x0 column required? Why can't we assign size as x0? Why do we need the feature count to be n+1?
AI: Consider the 1-D equivalent of the table you have provided. In this case, you have one input x and one output y (see figure). If you try to fit the data points with the equation y=mx, you cannot fit the data successfully. You need an equation like y=mx+c to have a good fit; y=mx will be a good fit only if the line goes through the origin.
Now you can look at
y=mx+c
as
y = m * x + c * 1
y = m * X1 + c * X0
y = W1 * X1 + W0 * X0
I hope the figures below will clarify why we need the intercept term X0 and its weight W0=c. |
H: What is the input space of a neural network (or other supervised learning algorithms)?
While training the neural network (or any other supervised learning algorithms), we supply input variables and corresponding outputs. The input variables can be continuous or discrete (binary in many cases).
What happens if after training with a binary input data, we supply a continuous value for the same input at the time of evaluation? Does the algorithm internally treat all variables as continuous variables?
For example, suppose one of the inputs is Young/Old encoded in the form of 0/1 in the training dataset. What happens if we supply a value of, say, 0.2 at the prediction stage? Does it/should it make any sense to the network?
AI: I guess it depends on the algorithm, but linear models, as well as neural networks, treat all variables as continuous. The algorithm will not explode or anything if you supply 0.2 at prediction stage. However, your algorithm is trained on data. The algorithm can at best do what it has learnt from the training data. For this reason, do not expect anything meaningful when you feed an example with a value that has not been seen in the whole training set, or that it does not follow the training set distribution. |
H: Why is a correlation matrix symmetric?
I'm sorry for being so weak in math. (I'm a student) For eg. this is a correlation matrix.
Q1 Q2 Q3
Q1 1.000000 0.707568 0.014746
Q2 0.707568 1.000000 -0.039130
Q3 0.014746 -0.039130 1.000000
Why is it symmetric? Why is Q1:Q2, the same as Q2:Q1? Shouldn't they be inverses of each other? How do I read this and understand the relation?
AI: The correlation matrix is a measure of linearity. It does not express how two variables are dependent on each other. If the relationship is approximately linear, the absolute value of correlation will be closer to 1. If there is no linear relationship, the value is zero.
Consider two sets of variables (x1,y1) and (x2,y2).
y1 = 2 * x1
y2 = 1000 * x2
In both these cases, the correlation is 1.
The exact relationship between x1 and y1 cannot be understood by looking only at the correlation matrix. |
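The symmetry itself follows directly from the definition:
$$\operatorname{corr}(X,Y)=\frac{\operatorname{cov}(X,Y)}{\sigma_X\,\sigma_Y},\qquad \operatorname{cov}(X,Y)=E\big[(X-\mu_X)(Y-\mu_Y)\big]=\operatorname{cov}(Y,X),$$
so corr(Q1, Q2) = corr(Q2, Q1): swapping the two variables just reorders a product. There is no "inverse" relationship, because correlation does not encode which variable drives which.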
H: Question about K-Fold Cross Validation
In a machine learning procedure, suppose we've chosen k=10 for the "K-Fold Cross Validation".
After we do the k steps of "K-Fold Cross Validation", how do we choose the final model for the classifier ? (The one will we use in order to predict new data)
AI: K-fold cross validation only gives you an estimate of the error you are going to make with a given model. K-fold can be used in order to tune hyperparameters: we have a model that depends on some hyperparameters, we compute the K-fold validation error using different values of the hyperparameters, and we take the model with the lowest cross-validation error. Finally, we train the model with the chosen hyperparameters on all the training data and we build a classifier based on that model. |
H: Is there any alternative to L-BFGS-B algorithm for hyperparameter optimization in Scikit learn?
The Gaussian process regression can be computed in scikit learn using an object of class GaussianProcessRegressor as:
gp= GaussianProcessRegressor(alpha=1e-10, copy_X_train=True,
kernel=1**2 + Matern(length_scale=2, nu=1.5) + WhiteKernel(noise_level=1),
n_restarts_optimizer=0, normalize_y=False,
optimizer='fmin_l_bfgs_b', random_state=None)
Here, we see that the L-BFGS-B algorithm has been used to optimize the hyperparameters. Is there any alternative (for example trust-region-reflective algorithm) to this algorithm available in sklearn?
EDIT: It provides some Constrained multivariate methods for optimization. Is there such functions available for other methods like trust-region-reflective algorithm and swarm intelligence (eg. Firefly algorithm)?
AI: The scikit-learn package has a limited selection of optimizers. The scipy package has many more optimizers, including the trust-region-reflective algorithm. You would have to use another third-party package for the Firefly algorithm. |
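As a hedged sketch (the callable signature below follows the scikit-learn documentation for the optimizer parameter, and scipy's 'trust-constr' is used as a trust-region-type stand-in rather than the exact trust-region-reflective algorithm; treat the details as assumptions to verify against your versions), you could pass a custom optimizer like this:
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def custom_optimizer(obj_func, initial_theta, bounds):
    # obj_func is the objective supplied by sklearn (negative log-marginal-likelihood);
    # here we only request function values and let scipy's trust-region method do the work
    res = minimize(lambda theta: obj_func(theta, eval_gradient=False),
                   initial_theta, bounds=bounds, method='trust-constr')
    return res.x, res.fun

kernel = 1.0 * Matern(length_scale=2, nu=1.5) + WhiteKernel(noise_level=1)
gp = GaussianProcessRegressor(kernel=kernel, optimizer=custom_optimizer)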
H: How to build a data analysis pipeline procedure
I have a series of scripts. Some are in R, some in Python, and others in SAS. I have built them in such a way that one code outputs a .csv file that the next code obtains and then that code outputs a .csv file, and so on...
I want to create a script that will automatically run each script in order so that the final output can be generated automatically.
What method would be best for this and can anyone point me to any examples of the procedure?
AI: Nominally, I would simply try to write a bash script (or PowerShell on Windows) and just string the commands together. However, this approach is rather fragile: files get overwritten, and it breaks down when the end-to-end process involves long-running batches.
I tend to use a workflow package like Luigi or Airflow when stringing dependent tasks together. The idea for Luigi is that you can break each action into a task. Each task has three needed functions.
Requirement - what needs to exist before this task runs?
Output - where is the output going?
Run - what is the task?
So essentially you chain a bunch of tasks and define each run function to call your previously built scripts using something like subprocess. In the requirement, you reference the previous step, and for output you point to where the file is being written.
The pro of doing this is that if your process breaks at task 50 out of 100, you don't have to rerun the first 49 tasks; Luigi walks down the dependency tree until it finds an unfulfilled requirement. A minimal sketch follows the links below.
Calling R from Python
Luigi |
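A minimal Luigi sketch of two chained steps (script names such as clean_data.R and build_features.py are placeholders for your own scripts):

import subprocess
import luigi

class CleanData(luigi.Task):
    """Wraps an existing R script that writes cleaned.csv."""
    def output(self):
        return luigi.LocalTarget('cleaned.csv')
    def run(self):
        subprocess.run(['Rscript', 'clean_data.R'], check=True)

class BuildFeatures(luigi.Task):
    """Runs the Python feature script once cleaned.csv exists."""
    def requires(self):
        return CleanData()
    def output(self):
        return luigi.LocalTarget('features.csv')
    def run(self):
        subprocess.run(['python', 'build_features.py'], check=True)

if __name__ == '__main__':
    luigi.build([BuildFeatures()], local_scheduler=True)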
H: tree.DecisionTree.feature_importances_ Numbers correspond to how features?
clf = tree.DecisionTreeClassifier(random_state = 0)
clf = clf.fit(X_train, y_train)
importances = clf.feature_importances_
The importances variable is an array of numbers representing the importance of each variable. What order are they in? Is the order the same as the columns of X_train?
I am trying to make a plot from this. So order matters.
AI: You can take the column names from X and tie it up with the feature_importances_ to understand them better. Here is an example -
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
clf = DecisionTreeClassifier(random_state=0)
iris = load_iris()
iris_pd = pd.DataFrame(iris.data, columns=['sepal_length', 'sepal_width', 'petal_length', 'petal_width'])
clf = clf.fit(iris_pd, iris.target)
I am taking the iris example, converting to a pandas.DataFrame() and fitting a simple DecisionTreeClassifier. Once the training is done, you can take the columns attribute of a pandas df and make a dict with the feature_importances_ output.
print(dict(zip(iris_pd.columns, clf.feature_importances_)))
This will give you what you want -
{'sepal_length': 0.0, 'sepal_width': 0.013333333333333329, 'petal_length': 0.064055958132045052, 'petal_width': 0.92261070853462157}
Hope this helps! |
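Since the goal is a plot, here is a small follow-up sketch (assuming matplotlib is available and reusing clf and iris_pd from above):

import matplotlib.pyplot as plt

importances = pd.Series(clf.feature_importances_, index=iris_pd.columns).sort_values()
importances.plot(kind='barh')        # bars are labelled with the matching column names
plt.xlabel('Feature importance')
plt.tight_layout()
plt.show()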
H: Detecting text region from an image
So I'm working on a document processing AI and I already have a character recognition model that performs decently well. Now the problem is: how do I feed each character to the model to make predictions? A sliding window is one technique for segmenting arbitrary parts of the image, but it may not work in many cases because words can get cut off. Can anyone suggest a robust way to detect text regions in an image? I've been researching MSER, but it is said to work well on grayscale images rather than RGB. I know we can convert RGB to grayscale, but that is not performant enough.
AI: You can use OpenCV's Scene Text Detection https://docs.opencv.org/3.3.1/d4/d61/group__text.html |
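The scene-text module lives in opencv-contrib and needs trained classifier files. If you just want region proposals first, a rough MSER-based sketch (the input file name is a placeholder) looks like this, and the grayscale conversion it needs is a single cheap call:

import cv2

img = cv2.imread('document.jpg')               # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # MSER operates on a single channel
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)
for (x, y, w, h) in bboxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite('regions.jpg', img)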
H: R vs. Python Decision Tree
From my experiences the R Decision tree returns more accurate results than the python decision tree. Can anymore confirm this assumption and maybe knows the reason?
AI: Decision trees involve a lot of hyperparameters -
minimum / maximum samples in each leaf
overall size of the tree
depth of the tree
criterion for splitting (gini / entropy), etc.
Now, different packages may have different default settings. Even within R or Python, if you use multiple packages and compare results, chances are they will differ.
There is nothing which suggests R is "better"
If you want to get the same results, you need to make sure the implicit defaults are similar. For instance, try running the following:
fit <- rpart(y_train ~ ., data = x_train,method="class",
parms = list(split = "gini"),
control = rpart.control(minsplit = 2, minbucket = 1, xval=0, maxdepth = 30))
(predicted5= predict(fit,x_test))
setosa versicolor virginica
149 0 0.3333333 0.6666667
Here, the parameters minsplit = 2, minbucket = 1, xval=0 and maxdepth = 30 are chosen so as to be identical to the sklearn-options, see here. maxdepth = 30 is the largest value rpart will let you have;
sklearn on the other hand has no bound here. If you want probabilities to be identical, you probably want to play around with the cp parameter as well.
Similarly, with
from sklearn import tree
from sklearn.datasets import load_iris
iris = load_iris()
model = tree.DecisionTreeClassifier(criterion='gini',
                                    min_samples_split=20,
                                    min_samples_leaf=7,  # round(20.0 / 3.0)
                                    max_depth=30)
model.fit(iris.data, iris.target)
I get
print(model.predict([iris.data[49]]))
print(model.predict([iris.data[99]]))
print(model.predict([iris.data[100]]))
print(model.predict([iris.data[149]]))
print(model.predict([[6.3, 2.8, 6, 1.3]]))
[0]
[1]
[2]
[2]
[1]
which looks similar to your initial R output.
All in all, I believe the defaults in R are better suited to the dataset you are working on, hence the "better" results. But rest assured, the two implementations behave similarly once the parameters are made explicit and equal.
Hope this helps! |
H: Handling outliers and Null values in Decision tree
Outliers : As I understand, decision trees are robust to outliers. Can anybody please confirm if my hypothesis is right with an example?
(What if I have a feature ranging from 0 to 9, but there is an outlier whose value is 10000?)
Would it create a separate leaf for that outlier sample, or would it be merged into some other leaf of the tree?
NULL Values : Do we need to replace the null values before building the model using Decision tree or it would be taken care automatically by the decision tree technique?
Thank you.
AI: Outliers: In decision tree learning, splits are made based on a metric that depends on the proportions of the classes in the left and right leaves after the split (for instance, Gini impurity). If there are only a few outliers (which should be the case; if not, you cannot use any model), they will barely affect these proportions. For this reason, decision trees are robust to outliers.
Null values: You have to replace them (unless the software you use already does that for you, which is not generally the case).
Edit about outliers:
What I said about outliers above applies only to classification trees. It is certainly not true for regression trees. The regression tree split criterion depends on the averages of the two groups being split, and since the average is severely affected by outliers, the regression tree will suffer from outliers in the target. There are two main approaches to solve this problem: either remove the outliers, or build your own decision tree algorithm that makes splits based on the median instead of the average, since the median is not affected by outliers. However, basing the tree algorithm on the median will be very slow, as computing the median is much slower than computing the average.
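A quick empirical check of the classification case (synthetic data, not from the original question): injecting one extreme feature value barely moves the learned split.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.uniform(0, 9, size=(200, 1))
y = (X[:, 0] > 4.5).astype(int)

X_outlier = X.copy()
X_outlier[0, 0] = 10000                        # one extreme feature value

clf_clean = DecisionTreeClassifier(random_state=0).fit(X, y)
clf_dirty = DecisionTreeClassifier(random_state=0).fit(X_outlier, y)
# Root split thresholds stay essentially the same (close to 4.5 in both trees)
print(clf_clean.tree_.threshold[0], clf_dirty.tree_.threshold[0])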
H: Are Hadoop and Python SciPy used for the same?
Right now I've been doing a bit of research because I'm quite new to the Big Data world. Among several other tools and frameworks, I've read about Apache Hadoop and Python for data analysis.
Specifically, I've read that:
Hadoop: allows you to perform any task you want, as long as it fits the MapReduce paradigm. It can add concurrency using a cluster, etc.
SciPy: a Python-based ecosystem for scientific computing (in fact, for a lot of things)...
So my question is
If I've data collected from an environment, and want to correlate the data, calculate means, extract conclusions, etc...
Is SciPy as suitable as Hadoop (or more so) for doing this?
I'm basically trying to choose between them in order to build a little example, so I wonder which one fits better, or may I be misunderstanding the uses of each one.
AI: I think you're quite confused.
Hadoop is a collection of software that contains a distributed file system called HDFS. Essentially, HDFS is a way to store data across a cluster. You can access file stores much as you would a local file system (with some modification) and manipulate them via the Java API. Furthermore, ON TOP OF the file system there exists a MapReduce engine that allows for distributed workflows.
Python, on the other hand, is a general-purpose programming language that can be made to do a myriad of tasks, such as building a web application, generating reports, and even performing analytics.
SciPy is a package that can be used in conjunction with Python (and often NumPy) to perform common scientific tasks.
Truthfully, they focus on different paradigms. If you have LARGE DATA (i.e. terabytes worth of it), it might be worthwhile to set up a Hadoop cluster (i.e. multiple servers and racks) and use Java MapReduce, Hive, Pig or Spark (of which there is a Python version) to do analytics.
If your data is small or you only have one computer, then it probably makes sense to just use Python instead of adding the overhead of setting up Hadoop.
Edit: Made correction via comment. |
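For the concrete tasks listed in the question (means, correlations, drawing conclusions), the Python route is only a few lines; a hedged sketch with placeholder file and column names:

import pandas as pd
from scipy import stats

df = pd.read_csv('sensor_readings.csv')   # placeholder: your collected environment data
print(df.mean())                          # per-column means
print(df.corr())                          # pairwise Pearson correlations
r, p_value = stats.pearsonr(df['temperature'], df['humidity'])  # placeholder column names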
H: Handling Concat and Shift Feature in Pandas for Data Science
I am trying to use lag features with the concat() and shift() functions:
series = Series.from_csv('sugar_price_2.csv', header=0, sep=';')
In [25]: prices = DataFrame(series.values)
In [26]: dataframe = concat([prices.shift(3), prices.shift(2), prices.shift(1), prices], axis=1)
In [27]: dataframe.coloumns = ['t-2', 't-1', 't', 't+1']
In [28]: print(dataframe.head(20))
0 0 0 0
0 NaN NaN NaN 2800
1 NaN NaN 2800.0 2800
2 NaN 2800.0 2800.0 2800
3 2800.0 2800.0 2800.0 2800
4 2800.0 2800.0 2800.0 2800
5 2800.0 2800.0 2800.0 2800
But the 't-2', 't-1', 't' column names aren't showing up.
Can anyone say what's wrong with my code?
AI: As was pointed out by @Stephan Rauch in his comment, the names of the columns are stored in dataframe.columns - the OP had a typo.
Below is a working example with dummy data, getting the same output as the user - using instead a little loop to compute the shifted values.
from pandas import DataFrame
prices = dict(
col1=[0, 1, 2, 3, 4, 5, 6],
col2=[2, 3, 4, 5, 6, 7, 8],
col3=[5, 6, 7, 8, 9, 10, 11],
col4=[12, 13, 14, 15, 16, 17, 18])
dataframe = DataFrame.from_dict(prices)
print(dataframe)
new_col_names = ['t-2', 't-1', 't', 't+1']
dataframe.columns = new_col_names
print(dataframe)
# Number of columns we have
N = len(dataframe.columns)
for n, col in enumerate(dataframe.columns):
shift_by = N - n - 1 # don't shift the final column
dataframe[col] = dataframe[col].shift(periods=shift_by, axis=0)
print(dataframe)
# If desired, remove the new NaNs that appear in the first
final_dataframe = dataframe.drop(labels=dataframe.index[:N - 1], axis='index')
print(final_dataframe) |
H: The differences between SVM and Logistic Regression
I am reading about SVMs and have come across the point that non-kernelized SVMs are nothing more than linear separators. Is the only difference between an SVM and logistic regression, then, the criterion used to choose the boundary?
Apparently, SVM chooses the maximum margin classifier and logistic regression is the one that minimizes the cross-entropy loss. Are there situations where SVM performs better than logistic regression or vice-versa?
AI: If you use logistic regression with the cross-entropy cost function, the cost surface is convex and has a single minimum. But during optimization you may end up with weights that are near the optimal point rather than exactly at it. This means you can obtain multiple classifiers that reduce the error (perhaps even to zero on the training data) but with slightly different weights, leading to different decision boundaries. This approach is statistically motivated: with slight changes in the weights you can get different decision boundaries, all of which have zero error on the training examples.
What an SVM does is attempt to find a decision boundary that reduces the risk of error on test data. It looks for a boundary that is equally distant from the boundary points of both classes, so that each class gets the same margin of empty space in which there is no data. SVM is geometrically motivated rather than statistically motivated.
Non-kernelized SVMs are nothing more than linear separators. Therefore, is the only difference between an SVM and logistic regression the criterion used to choose the boundary?
They are both linear separators, and if you believe your decision boundary can be a hyperplane, it is better to use an SVM to reduce the risk of error on test data.
Apparently SVM chooses the maximum margin classifier and logistic regression the one that minimizes the cross-entropy loss.
Yes, as stated SVM is based on geometrical properties of the data whilst logistic regression is based on statistical approaches.
In this case, are there situations where SVM would perform better than logistic regression, or vice-versa?
At first glance their results may look similar, but they are not: SVMs tend to generalize better 1, 2.
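A small side-by-side sketch on synthetic data (purely illustrative; neither the model settings nor the data come from the original question):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

log_reg = LogisticRegression().fit(X_train, y_train)   # minimizes cross-entropy loss
svm = LinearSVC(C=1.0).fit(X_train, y_train)           # maximizes the margin (hinge loss)

# Both learn a linear boundary; only the criterion for choosing it differs
print('logistic regression:', log_reg.score(X_test, y_test))
print('linear SVM:', svm.score(X_test, y_test))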
H: Is 10,000 images for one class of spectrograms good enough for music classification model?
I'm debating whether to use a DNN or a CNN for this classification, and I have 10,000 spectrograms of songs. I was wondering if I need more, because the images have low variance.
AI: Agreed with Emre. One thing that can be helpful when asking the question is to look at comparable datasets since there's usually some around. For spectrograms for example, there are 150,000 samples w/ 12 labels here, 2890 samples here, and 1 million samples with various labels here.
Find a few examples and gauge a rough order of magnitude of samples per class, given how close those tasks are to yours. That should give you at least a starting point. Then, if one set is very large, usually someone will have written a paper using that set, which can give you a starting point for both architecture and dataset size (and maybe a baseline for transfer learning) for your task.
H: Evaluation metrics for Decision Tree regressor and KNN regressor
I have started working on the Decision Tree Regressor and KNN Regressor.
I have built the models but am not sure which metrics need to be considered for evaluation. As of now I have used root mean squared error.
Can we use $R^2$ for decision tree and KNN regression, or is it applicable only to linear regression?
I have also pasted the $R^2$ and MSE values.
As per my understanding, the linear regression model is stable, the decision tree is overfitting, and KNN has lower error. Which model should be chosen here?
R2 Score for train - KNN regression 0.8215942683102192
R2 Score for test - KNN regression 0.7160388084850589
Mean squared error for train - KNN regression 49.92162362176166
Mean squared error for test - KNN regression 78.30907381395349
R2 Score for train - linear regression 0.6141419744748021
R2 Score for test - linear regression 0.6117893766210736
Mean squared error for train - linear regression 107.97107771851036
Mean squared error for test - linear regression 107.0583420197463
R2 Score for train - Decision Tree regression 0.9962039204515297
R2 Score for test - Decision Tree regression 0.7866182225490949
Mean squared error for train - Decision tree regression 1.0622217832469776
Mean squared error for test - Decision tree regression 58.84511637596899
Updating the AIC and BIC values
AIC value - Test - KNN : -3.8272461328797505
BIC value - Test - KNN : 1169.4748549110836
AIC value - Test - Linear Reg: -4.452667046616746
BIC value - Test - Linear Reg: 1250.154152783156
AIC value - Test - Decision T: -3.2766787253336602
BIC value - Test - Decision T: 1098.4516593376377
Thank you.
AI: Generally, whenever we are trying to compare models and choose the best one, we look at other metrics such as AIC and BIC along with $R^2$ (AUC is not applicable here, since it is used for classification).
These criteria matter because AIC tries to select the model that most adequately describes an unknown, high-dimensional reality; that is, reality is assumed never to be in the set of candidate models being considered. BIC, on the contrary, tries to find the TRUE model among the set of candidates. I find the assumption that reality is instantiated in one of the models the researchers built along the way quite odd, and this is a real issue for BIC.
Generally we use both AIC and BIC together.
You can go through this Link to understand AIC and BIC better and which is preferable, but the conclusion is that both are important.
Now when we compare between $R^2$ and AIC value:
$R^2$ and AIC are answering two different questions. I want to keep this breezy and non-mathematical, so my statements are non-mathematical. $R^2$ is saying something to the effect of how well your model explains the observed data. If the model is regression and non-adjusted $R^2$ is used, then this is correct on the nose.
AIC, on the other hand, is trying to explain how well the model will predict on new data. That is, AIC is a measure of how well the model will fit new data, not the existing data. Lower AIC means that a model should have improved prediction.
Frequently, adding more variables decreases predictive accuracy and in that case the model with higher $R^2$ will have a higher (worse) AIC. A nice example of this is in "Introduction to Statistical Learning with R" in the chapter on regression models including 'best subset' and regularization. They do a pretty thorough analysis of the 'hitters' data set. One can also do a thought experiment. Imagine one is trying to predict output on the basis of some known variables. Adding noise variables to the fit will increase $R^2$, but it will also decrease predictive power of the model. Thus the model with noise variables will have higher $R^2$ and higher AIC.
To understand more with respect to the above explanation, you can go through this Link
I have also pasted the R2 and MSE values. As per my understanding Linear regression model is stable, Decision tree is over fitting and KNN is having less error. Which model needs to be considered here?
To answer this I think you need to derive AIC and BIC values and finally you can decide on which model to choose.
Deriving the AIC value for linear regression can be done via OLS, where the AIC value is usually reported directly.
If you want to derive, you can go through this Link
AIC:
$AIC = n \cdot \ln(SSE/n) + 2k$ (the least-squares form of the general $AIC = 2k - 2\ln(\hat{L})$)
where k = number of parameters (variables) and n = number of observations
BIC:
$BIC = n * ln(sse/n) + k * ln(n)$
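If you prefer to compute them yourself, a small helper sketch (sse, n and k come from your own fitted model; many OLS implementations also report AIC/BIC directly):

import numpy as np

def aic(sse, n, k):
    return n * np.log(sse / n) + 2 * k          # least-squares form of AIC

def bic(sse, n, k):
    return n * np.log(sse / n) + k * np.log(n)  # matches the formula above

# example: sse = np.sum((y_test - y_pred) ** 2), n = len(y_test), k = number of parameters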
Do let me know if you have any additional questions.
H: Confusion-matrix clarification from Python
confusion_matrix(y_test1, pred)
That is my codes
Confusion matrix, without normalization
              Predicted values
True    [[724258    438]
values   [ 25396    302]]
I understand that this is the confusion matrix, but I do not know the order of the class labels. Is it:
Option A:                    Option B:
0 [[724258    438]           1 [[724258    438]
1  [ 25396    302]]          0  [ 25396    302]]
      0       1                    1       0
Which order is it in general? I checked the documentation, but it does not specify the order.
AI: It's the first one (Option A):
0 [[724258 438]
1 [ 25396 302]]
0 1
According to the documentation
sklearn.metrics.confusion_matrix(y_true, y_pred, labels=None, sample_weight=None)
labels : array, shape = [n_classes], optional List of labels to index
the matrix. This may be used to reorder or select a subset of labels.
If none is given, those that appear at least once in y_true or y_pred
are used in sorted order. |
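In other words, you can fix the order yourself by passing the labels argument explicitly; a tiny check with made-up data:

from sklearn.metrics import confusion_matrix

y_true = [0, 1, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1]
print(confusion_matrix(y_true, y_pred, labels=[0, 1]))
# [[2 0]
#  [1 2]]   rows = true class (0 then 1), columns = predicted class (0 then 1)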
H: Next best predictions in decision tree
I am using a decision tree classifier to predict which block is selected, based on the data below.
I am able to predict the "Block selected" column from the data. How can I get the second best, third best prediction and so on (I need an ordered list)? Can I get this using a decision tree, or should I be using a different model?
Any thoughts on how to do this using python scikit-learn?
AI: What you are looking for is probabilistic classification over 2+ classes (multi-class), i.e. evaluating the probability of each of the k classes given an observation; k = 4 blocks/classes in your example. It's a one-vs-many situation.
Binary classifiers (like logistic regression) have been extended to multi class problems.
One strategy is to train/test your model k times, each time keeping one class intact and collapsing the other three classes into a single dummy class. Then use a binary probabilistic classifier (e.g. logistic regression) to predict class probabilities. The order of class preference is the decreasing order of these probabilities.
For logistic regression and for other classifiers, this is inherently built in by scikit and applied by default:
from sklearn import linear_model
lr = linear_model.LogisticRegression(multi_class='ovr')  # 'ovr' is "one vs rest"
lr.fit(train_features, train_labels)  # train_features / train_labels: your prepared data
lr.predict_proba(test_features)
refer to Multi-class capability built
in scikit
Yes, you can even use a pruned decision tree to get the class probabilities. But most probably you will not be able to get 2nd, 3rd... best predictions for most of your observations from a single tree, due to the underlying splitting mechanism of the algorithm (many leaves assign probability 1 to a single class).
from sklearn import tree
dt = tree.DecisionTreeClassifier(min_samples_split=25)
dt.fit(train_features, train_labels)
dt.predict_proba(test_features)
Therefore, rather than relying on a single decision tree, a better option would be to use a random forest classifier, which gives the proportion of votes going to each class when making a prediction for an observation.
In short, any multi-class classifier (or ensemble) which outputs a likelihood over multiple classes will do; another example is XGBoost, as pointed out in the comment below by bradS.
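A hedged sketch of turning predicted probabilities into an ordered list of blocks (X_train, y_train, X_test stand in for your prepared data):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

proba = rf.predict_proba(X_test)             # shape (n_samples, n_classes)
ranked = np.argsort(proba, axis=1)[:, ::-1]  # class indices, most to least likely
ordered_blocks = rf.classes_[ranked]         # best, second best, third best, ... per row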
H: GAN - why doesn't the generator nullify the noise input?
In a GAN architecture, during training, what keeps the generator's output dependent on the input noise? Why don't the weights of the noise input become zero (plus a bias)?
I would expect the generator to converge to outputting a single picture which is extremely realistic and indistinguishable from a real picture, and to ignore the noise altogether, since this is a "cheaper" way (in convergence time and in number of parameters used) to decrease the generator's loss.
AI: If the generator always outputs the same image, then it's easy for the discriminator to win the game and tell apart the output of the generator from random images in the training set: if the input to the discriminator is that one image, it outputs "came from the generator", otherwise it outputs "came from the training set". The game is set up so that the generator is rewarded for fooling the discriminator. Always outputting the same image isn't going to fool the discriminator.
H: Question about Knn and split validation
I have a big database with 40k records and 2 classification classes. In this database, 76% of the records belong to the first class.
I've used a 70-30 split partition with stratified sampling, and k-NN gives the best accuracy at k = 20.
1) Is that too big a value for k?
2) Is it possible that this large value of k is due to the class imbalance in the database, even though I used stratified sampling?
AI: There is generally a trade-off in k-NN:
k should be large enough to cancel out any noise.
k shouldn't be too large to create big biased boundaries.
One rule of thumb is to select an odd k value to avoid ties in binary class problems.
What counts as a small or large value of k depends on the underlying structure of the data set itself, which is why we check over a range of k values.
Both of the above get settled if you do a good job with cross-validation.
So the most important question you need to focus is:
Have you chosen the right metric for model selection, taking the imbalance of the classes into consideration?
I would recommend first identifying the class that is more important for you to predict correctly. Then look at the confusion matrix and weigh different metrics (e.g. recall, precision) before finalizing a model / k value.
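A minimal sketch of doing exactly that with scikit-learn (the scoring choice and the k range are assumptions you should adapt to whichever class matters most):

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': list(range(1, 41, 2))}   # odd k values only
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      scoring='f1',                    # or 'recall', depending on the class of interest
                      cv=5)
search.fit(X_train, y_train)                           # your 70% stratified training split
print(search.best_params_, search.best_score_)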
H: Is this the correct way to apply a recommender system based on KNN and cosine similarity to predict continuous values?
My data is:
userID, gameID, rating (1.0 through 10.0)
First, I normalize the ratings of each row.
I use cosine similarity to create a similarity matrix where each cell represents similarity between a pair of userIDs (value 0.0 through 1.0).
For each unrated gameID, I find the 10 most similar users that have rated the gameID. The predicted rating is equal to the sum of each neighbor's rating times similarity, divided by 10 (the number of neighbors).
This seems fine for finding the best predictions (just take the top N rating values), but for actually predicting the values it doesn't perform so well. Intuitively, the average of a group of similar userIDs' ratings should be an accurate prediction. When each rating is made smaller by multiplying by a value that is always less than 1.0, the predicted value is consistently smaller than expected.
This seemed to follow formulas that I found, but I feel like I missed something. I am using Python, Pandas, and Numpy.
Edit
I am seeing that this final value is sometimes referred to as a "weighted predicted rating". I am wondering how to "de-weight".
AI: The predicted rating is equal to the sum of each neighbor's rating times similarity, then divided by 10 (the number of neighbors).
You want to obtain a weighted-average of ratings, where the weight is the similarity score. Instead of dividing by 10 above, you should divide by the sum of similarities, to get a correct normalized score. Dividing by 10 is the likely reason for the predictions to be smaller than expected.
The formula you should use is:
$\text{Final score} = \dfrac{\sum_{\text{neighbours}} (\text{Rating} \times \text{Similarity})}{\sum_{\text{neighbours}} \text{Similarity}}$
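A tiny numeric sketch (made-up ratings and similarities) of the corrected weighting:

import numpy as np

neighbour_ratings = np.array([8.0, 7.5, 9.0])   # ratings from the most similar users
similarities = np.array([0.9, 0.8, 0.4])        # their cosine similarities with the target user

# Dividing by the similarity sum keeps the prediction on the original rating scale
predicted = np.sum(similarities * neighbour_ratings) / np.sum(similarities)
# equivalently: np.average(neighbour_ratings, weights=similarities)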