H: Data augmentation in deep learning
I am working on a deep learning project for face recognition. I am using the pre-trained model VGG16.
The dataset has around 100 classes, and each class has 80 images. I split the dataset into 60% training, 20% validation, and 20% testing. I used data augmentation (ImageDataGenerator()) to increase the training data.
The model gave me different results when I changed the ImageDataGenerator() arguments. See the following cases:
Case1:
train_datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validate_datagen = ImageDataGenerator()
Test_datagen = ImageDataGenerator()
Case1 result: high training and validation accuracy, but the training accuracy is lower than the validation accuracy. Check the following image:
Case2:
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
validate_datagen = ImageDataGenerator()
Test_datagen = ImageDataGenerator()
Case2 result: Overfitting. check the following image:
Case3:
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
validate_datagen = ImageDataGenerator(rescale=1./255)
Test_datagen = ImageDataGenerator(rescale=1./255)
Case3 result: high training and validation accuracy, but the training accuracy is lower than the validation accuracy. Check the following image:
1- Why does rescaling the validation and testing data with ImageDataGenerator(rescale=1./255) in case3 give a different result than case2?
2- Is adding ImageDataGenerator(rescale=1./255) to the testing and validation data better than not adding it?
3- Do you think there is a problem in the result of the first case?
AI: 1 and 2: If you rescale your images, you should do it on all partitions: training, validation and test. If you only rescale your images on the training set, then your network will see very different values (0~255 vs 0.0~1.0) on the validation/test set and therefore give poor accuracy. That's your case 2.
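For example, a consistent set-up could look like the following (a minimal sketch; the particular augmentation arguments are just placeholders):
from keras.preprocessing.image import ImageDataGenerator

# augmentation only on the training data, but the same rescaling on every partition
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=15,
                                   horizontal_flip=True)
validate_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)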
3: I don't see any obvious problem with the result of the first case. |
H: Xor gate accuracy improvement
I am pretty fresh in machine learning, so I probably made a rookie mistake somewhere, but I have tried a lot over the last few days and I can't find a way to further improve my network. Hope you guys can help!
So I was asked to make a "perceptron XOR gate" in Tensorflow with data that I generated myself. Since a single perceptron can't solve it, I used an MLP. My generated train and test data look like this:
# TRAIN SET
number_of_elements = 20000
top_right, top_left, bottom_right, bottom_left = [np.random.rand(number_of_elements, 2) for _ in range(4)]
top_left[:, 0] *= -1
bottom_right[1] *= -1
bottom_left * -1
X = np.concatenate((top_right, top_left, bottom_right, bottom_left))
top_right_y, bottom_left_y = np.zeros((2, number_of_elements))
top_left_y, bottom_right_y = np.ones((2, number_of_elements))
y = np.concatenate((top_right_y, top_left_y, bottom_right_y, bottom_left_y))
y = np.reshape(y, [y.shape[0], 1])
# TEST SET
number_of_test_elements = 40000
first_el_random = random.randint(number_of_test_elements / 2, number_of_elements * 2)
second_el_random = random.randint(number_of_test_elements / 2, number_of_elements * 2)
third_el_random = random.randint(number_of_test_elements / 2, number_of_elements * 2)
forth_el_random = random.randint(number_of_test_elements / 2, number_of_elements * 2)
top_right = np.random.rand(first_el_random, 2)
top_left = np.random.rand(second_el_random, 2)
bottom_right = np.random.rand(third_el_random, 2)
bottom_left = np.random.rand(forth_el_random, 2)
top_left[:, 0] *= -1
bottom_right[1] *= -1
bottom_left * -1
X_test_v1 = np.concatenate((bottom_right, bottom_left, top_left, top_right))
top_right_y = np.zeros(first_el_random)
top_left_y = np.ones(second_el_random)
bottom_right_y = np.ones(third_el_random)
bottom_left_y = np.zeros(forth_el_random)
y_test_v1 = np.concatenate((bottom_right_y, bottom_left_y, top_left_y, top_right_y))
y_test_v1 = np.reshape(y_test_v1, [y_test_v1.shape[0], 1])
Then I made two implementations, one in Keras and one in Tensorflow. They look almost the same so I will just show Keras code:
# KERAS MODEL
model = Sequential()
model.add(Dense(64, input_dim=2, activation='tanh', kernel_initializer='normal'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='tanh', kernel_initializer='normal', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(16, activation='tanh', kernel_initializer='normal', kernel_constraint=maxnorm(3)))
model.add(Dense(1, activation='sigmoid', kernel_initializer='normal'))
optimize = Adam()
learning_rate_reduction = ReduceLROnPlateau(monitor='val_binary_accuracy', patience=10, verbose=2, factor=0.5, min_lr=0.00001)
model.compile(loss='binary_crossentropy', optimizer=optimize, metrics=['binary_accuracy'])
model.fit(X, y, batch_size=200, shuffle=True, epochs=100, verbose=2, validation_split=0.33,
callbacks=[learning_rate_reduction])
preds = model.predict_classes(X_test_v1)
real_accuracy = np.mean(np.equal(y_test_v1, preds))
print('Test classification accuracy: %.4f' % real_accuracy)
Model accuracy is around 75%. I tried to make it higher by:
- increasing the train data size
- adding additional dense layers, removing layers, adding dropout, removing dropout, adding neurons, removing neurons, etc.
- changing activation functions; the only one that I didn't change is the sigmoid on the output layer, because when I mess with that my results don't make sense
- messing with different optimizers, learning rates and callbacks. I didn't touch binary_crossentropy as the loss function, though.
- changing batch sizes and epochs, setting shuffle to False
The problem seems pretty simple in general, but I just can't pinpoint what I am doing wrong. If someone can tell me, I would be really grateful. Thanks!
AI: After carefully reviewing my code I found a bug in my train and test sets.
The bug was in this line:
bottom_right[1] *= -1
should be:
bottom_right[:, 1] *= -1
After that my tensorflow accuracy went up to 99% and my Keras accuracy to 89%.
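For reference, a compact way to generate this kind of XOR-style dataset is to sample points uniformly in [-1, 1]^2 and label each point by whether the signs of its coordinates differ (just a sketch, not the exact distribution used above):
import numpy as np

n = 20000
X = np.random.uniform(-1, 1, size=(n, 2))
# label 1 when the two coordinates have different signs (top-left / bottom-right quadrants)
y = (np.sign(X[:, 0]) != np.sign(X[:, 1])).astype(int).reshape(-1, 1)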
Anyway, if someone has a better solution or sees something I am doing wrong, please let me know :) |
H: keras' ModelCheckpoint not working
I'm trying to train a model in keras and I'm using ModelCheckpoint to save the best model according to a monitored validation metric (in my case the Jaccard index).
While I can see the model improving in tensorboard, when I try to load the weights and evaluate the model it isn't working at all. Furthermore, by the timestamp on the file where the weights are supposed to be stored, I can tell that they are not being saved at all. The timestamp corresponds roughly to the time I started training.
Has anyone encountered such a problem before?
AI: Do you run ModelCheckpoint on its default parameters (besides monitor)?
ModelCheckpoint has a parameter called mode which specifies the type of metric to be used. mode can take 3 values: 'min', 'max' and 'auto' (which is the default):
min: means that you want to minimize the metric (e.g. the loss function).
max: means you want to maximize the metric (e.g. accuracy).
auto: attempts to figure out what to do on its own. If you look at the code, it checks if the metric's name contains 'acc' or if it starts with 'fmeasure'. If yes, it sets the mode to max; if not, it sets it to min.
In your case, you monitor the jaccard index, which is a metric you want maximized, so you want the mode set to max. Normally because "jaccard" contains the string "acc", even if the mode is set to auto it should work fine.
If however you named your metric something arbitrary (e.g. my_metric), the default mode will be set to min, which means that it will store the weights that achieve the least performance on your metric, which should be the weights of the first epoch.
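For example, an explicit configuration could look like this (the metric name and file path here are just assumptions):
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('best_weights.h5',
                             monitor='val_jaccard_index',  # whatever name your metric is logged under
                             mode='max',                    # keep the weights with the highest value
                             save_best_only=True,
                             verbose=1)
# model.fit(X_train, y_train, validation_data=(X_val, y_val), callbacks=[checkpoint])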
Suggestion: next time try with mode='max' to be sure. |
H: padding on mnist for LeNet Architecture
LeNet accepts a 32x32 image. So, to use LeNet on the MNIST dataset, we have to change the size from 28x28 to 32x32. I came across this implementation. I am confused about how the following line of code works.
np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
The above line of code pads a 28x28 pixel image so that it becomes a 32x32 image. Can anyone help me understand how exactly it's done?
AI: Basically, it does exactly what you specify. The numpy function used pads values in each dimension. The amount of padding on each axis is specified by ((0,0),(2,2),(2,2),(0,0)), given the shape of your dataset, which is:
10000 (samples) x 28 (image dimension 1) x 28 (image dim. 2) x 1 (grayscale value!)
Now let's see what your specification means in that regard:
(0,0): pad 0 values (as in the amount, i.e. nothing) before and after along the first (sample) dimension
(2,2): pad 2 values before and 2 after along dim. 1 of your image data: 28 values -> 32
(2,2): pad 2 values before and 2 after along dim. 2 of your image data: 28 values -> 32
(0,0): again, pad nothing in the grayscale value dimension
That means you will end up with a 32x32 image in the respective dimension. Now, the only thing that's left is: Which values do we pad? The answer is quite simple, you do not specify any constant_values, meaning it will pad with the default constant_values (which is specified on the above linked page). Namely this value is 0.
To sum it up, simply imagine you have a 32x32 image, your 28x28 is in the middle, and on the outside you have a 2-value-thick border of 0's. |
H: Gradient computation
I am a beginner in data science. I am trying to understand this PyTorch code for gradient computation using a custom autograd function:
class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_x = grad_output.clone()
        grad_x[x < 0] = 0
        return grad_x
However, I don't understand this line : grad_x[x < 0] = 0. Can anyone explain this part?
AI: The example you found calculates the gradient for the ReLU function, whose gradient is
$$\text{ReLU}'(x)=\left\{
\begin{array}{c l}
1 & \text{if } x>0\\
0 & \text{if } x<0
\end{array}\right.$$
Therefore when x<0, you make the gradient 0 by grad_x[x < 0] = 0. |
H: Training network with variable frame rate?
I would like to train a temporal network, but the available video data are at different frame rates (e.g. 7, 12, 15, 30). How should I train this network without down-sampling the higher frame rate videos?
I tried up-sampling everything, but some artifacts are generated.
What is the suitable approach?
AI: I don't believe there is a well-known method to deal with this.
Simple pre-processing
While I haven't done this with images/videos, I know from general time-series analysis that you basically have to interpolate the lower frame rates or down-sample the higher ones. If you think about it, what else is there to do?
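As a rough illustration of this pre-processing route, frames can be resampled to a common target rate by index lookup (a minimal sketch using nearest-neighbour repetition; the frame array and rates are made up):
import numpy as np

def resample_frames(frames, src_fps, dst_fps):
    """Nearest-neighbour resampling of a (T, H, W, C) frame array to a new frame rate."""
    duration = len(frames) / src_fps
    n_out = int(round(duration * dst_fps))
    idx = np.round(np.arange(n_out) * src_fps / dst_fps).astype(int)
    idx = np.clip(idx, 0, len(frames) - 1)
    return frames[idx]

# e.g. bring a 7 fps clip up to 30 fps by repeating the nearest frames
clip_30 = resample_frames(np.zeros((70, 64, 64, 3)), src_fps=7, dst_fps=30)
print(clip_30.shape)  # (300, 64, 64, 3)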
Modelling solution
Nvidia released a research paper with an accompanying video showing how they were able to train a model, which could estimate the frames between frames - effectively interpolating video and increasing its frame rate. This would essentially be the equivalent of interpolation between frames and allow you to scale up your lower frequency videos to match the higher frequency ones. The paper is named:
Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation
... sounds like something worth reading.
There are older algorithms that try to do the same thing (e.g. "twixtor"), but I read they have problems with things such as rotating objects. Another thing to keep in mind is the usual GIGO: garbage in garbage out. There are still some artefacts of interpolation in the Nvidia video, but that likely comes from blurry input images used during training when e.g. objects were moving faster than the recording frame rate could handle.
It seems that they train two models: the first encodes the optical flow between frames and the second model uses that, along with the base images to perform the interpolation. Please read the paper for more details. It also outlines how they train the model (learning rates, number of epochs, augmentation steps, etc.).
Here is the sketch of their model for flow computation/interpolation:
We can see that it is an encoder/decoder-looking model, introducing a bottleneck that condenses the information, before upsampling again. This is based on the U-net model architecture: an encoder/decoder that also introduces skip connections between layers of different scales. |
H: What more does TensorFlow offer to keras?
I'm aware that keras serves as a high-level interface to TensorFlow.
But it seems to me that Keras provides a lot of functionality on its own (data input, model creation, training, evaluation).
Furthermore, some of TensorFlow's functionality can be used directly from Keras (e.g. it is possible to use a tf metric or loss function in Keras).
My question is, what does TensorFlow offer that can't be reproduced in keras?
AI: Deep Learning frameworks operate at 2 levels of abstraction:
Lower Level: This is where frameworks like Tensorflow, MXNet, Theano, and PyTorch sit. This is the level where mathematical operations like Generalized Matrix-Matrix multiplication and Neural Network primitives like Convolutional operations are implemented.
Higher Level: This is where frameworks like Keras sit. At this level, the lower level primitives are used to implement neural network abstractions like layers and models. Generally, at this level other helpful APIs like model saving and model training are also implemented.
You cannot compare Keras and TensorFlow because they sit on different levels of abstraction. I also want to take this opportunity to share my experience of using Keras:
I do not agree that Keras is only useful for basic deep learning work. Keras is a beautifully written API. The functional nature of the API helps you and then gets out of your way for more exotic applications, and Keras does not block access to the lower level frameworks.
Keras results in much more readable and succinct code.
Keras' model serialization/deserialization APIs, callbacks, and data streaming using Python generators are very mature.
Keras has been declared the official high level abstraction for TensorFlow. |
H: Dealing with extreme values in softmax cross entropy?
I am dealing with numerical overflows and underflows with the softmax and cross-entropy functions for multi-class classification using neural networks. Given logits, we can subtract the maximum logit to deal with overflow, but if the values of the logits are far apart then one shifted logit will be zero and the others large negative numbers, resulting in 100% probability for a single class and 0% for the others. When the loss is calculated as cross-entropy, if our NN predicts 0% probability for the true class then the loss is NaN ($\infty$), which is correct theoretically since the surprise and the adjustment needed to make the network adapt are theoretically infinite.
I know this can be dealt with by normalizing the data and choosing weights and biases from a standard normal distribution, but this is a what-if scenario where the data is mean-preprocessed but not standard-deviation-preprocessed; I believe this can also occur even after preprocessing both mean and stddev.
AI: Where exactly in the computations are these underflows manifesting? See here for a brief explanation around the extremes of the softmax.
Quick fixes could be to either increase the precision of your model (using 64-bit floats instead of, presumably, 32-bit floats), or just introduce a function that caps your values, so anything below zero or exactly zero is made to be close enough to zero that the computer doesn't freak out. For example, use X = np.log(np.maximum(x, 1e-9)) before going into the softmax.
In any case, the softmax shouldn't have problems with your input, as the final activations are exponentiated:
$$
\sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}}
$$
This means all values will now be in the range [0, 1].
The cross-entropy equation should also be able to deal with the output of this.
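As a concrete illustration, one common numerically stable approach is to work in log space via the log-sum-exp trick, so the loss stays finite even for extreme logits (a sketch with made-up logits):
import numpy as np

def log_softmax(logits):
    """Log of the softmax, computed with the log-sum-exp trick to avoid overflow/underflow."""
    shifted = logits - np.max(logits)
    return shifted - np.log(np.sum(np.exp(shifted)))

def cross_entropy(logits, target_class):
    # -log p(target) computed directly from the logits, never exponentiating large values
    return -log_softmax(logits)[target_class]

print(cross_entropy(np.array([1000.0, 10.0, -500.0]), target_class=1))  # ~990, finite rather than NaN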
If none of this helps your specific issue, could you share a specific example of your problem? |
H: Does bagging create iid trees?
As the title suggests, I have a question regarding the trees produced through the bagging procedure.
Namely, since the bootstrap samples created to fit trees on are independent and identically distributed (iid), are the resulting trees also iid?
In other words, is there any reason there may be correlation between the "bootstrap trees"?
AI: This question may be more suited to CrossValidated. However:
"Bagging" is short for "bootstrap aggregating", meaning that a random sample with replacement is taken from the overall set. The key here is "with". For example, suppose this is your dataset:
{1, 2, 3, 4, 5}
and you are interested in obtaining 4 samples of size 3. You could end up with the following:
{1, 4, 4}
{1, 4, 5}
{2, 3, 4}
{3, 3, 4}
That is, you can have repeated elements within the same bootstrap sample (e.g. the first result, {1, 4, 4}), and each bootstrap sample could contain the same element (e.g. notice the value '4' in each sample). You can see more on this here.
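For reference, drawing bootstrap samples like the ones above is a one-liner in numpy (a small sketch):
import numpy as np

data = np.array([1, 2, 3, 4, 5])
# 4 bootstrap samples of size 3, drawn with replacement
samples = [np.random.choice(data, size=3, replace=True) for _ in range(4)]
print(samples)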
Yes, there can be correlation among the bootstrap samples, and therefore among the resulting trees: every sample is drawn from the same underlying data set, so bagged trees are identically distributed but not independent. If you were to train 100 trees on bootstrap samples of the same data set, you could of course end up with correlated trees. This is exactly why a random forest goes one step further and also selects a random subset of features at each split, which reduces (but does not completely remove) the correlation between the trees. You can see more on this here.
Hope that helps! |
H: 0.1 accuracy on MNIST fashion dataset following official Tensorflow/Keras tutorial
My goal is to classify products pictures into categories such as dress, sandals, etc.
I am using the MNIST fashion dataset, following this official tutorial word for word: https://www.tensorflow.org/tutorials/keras/basic_classification so my code is 100% identical to what can be read there:
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
Problem: The resulting accuracy is always around 0.1, much lower than the tutorial's example output of 0.876.
I am obviously doing something wrong, but I can't figure out what. How to improve accuracy to something reasonable?
My output:
$ python classify-products.py
1.10.1
Epoch 1/5
2018-09-18 13:33:46.971437: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
60000/60000 [==============================] - 3s 47us/step - loss: 13.0161 - acc: 0.1924
Epoch 2/5
60000/60000 [==============================] - 3s 46us/step - loss: 12.8998 - acc: 0.1997
Epoch 3/5
60000/60000 [==============================] - 3s 46us/step - loss: 13.3386 - acc: 0.1724
Epoch 4/5
60000/60000 [==============================] - 3s 47us/step - loss: 12.9031 - acc: 0.1995
Epoch 5/5
60000/60000 [==============================] - 3s 47us/step - loss: 13.6666 - acc: 0.1521
10000/10000 [==============================] - 0s 26us/step
('Test accuracy:', 0.1005)
Switching to 20 epochs does not improve accuracy.
AI: You haven't normalized your image data, e.g. by scaling the pixel values into the 0-1 range, which helps the classifier converge faster.
Please do it with the operation below:
train_images = train_images / 255.0
test_images = test_images / 255.0
It also seems you are using 50% of the data for training as well as testing; try to use the data in a ratio of about 7:3 for training and testing. Also increase the number of epochs; hopefully it will work. |
H: How to generate data if algo itself is involved in the process with a feedback loop?
I have an algorithm whose job would be a rather easy classification task, with a set of features and a class output, which I would like to solve with a machine learning algorithm.
But I am having issues and doubts about the data generation. The features my algo uses as inputs are processed beforehand by other algorithms and, more importantly, also have a feedback loop to the algorithm I want to change.
Basically, the better my algo gets, the fewer false positives there should be. But with fewer false positives, I have more and more imbalanced data to work with, which means it is harder to train the algorithm.
I could reduce the performance of my algo on purpose and generate data, but then I am not sure if the data I am getting is at all meaningful, as there is a feedback loop.
To me this seems like a chicken-and-egg problem.
AI: Are you perhaps doing ensembles?
Usually, for an imbalanced dataset, the easiest way is to oversample or undersample the data: you either repeat some data in classes containing few samples, or cut off some samples in classes with very high frequency, to make a balanced dataset.
Another technique is to use class weights set with respect to the frequency of each class, as sketched below.
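A small sketch of inverse-frequency class weights (the labels here are made up; many libraries accept such a dictionary, e.g. as a class_weight argument):
import numpy as np

y = np.array([0] * 900 + [1] * 100)   # toy labels with a 9:1 imbalance

classes, counts = np.unique(y, return_counts=True)
weights = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
print(weights)  # roughly {0: 0.56, 1: 5.0}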
Yet another option is to build a model that generates artificial inputs, as in generative adversarial networks. |
H: How do I add a column to a Pandas dataframe based on other rows and columns in the dataframe?
I've tried a lot of different methods, but I can't seem to find the right way to do this. I want to create a new column based on the time and id of the df. However, ids appear multiple times. Here's my dataframe:
df = pd.DataFrame({'time': [1,2,3, 1, 2 ,3],
'id': ['A', 'A', 'A', 'B', 'B', 'B'],
'num': [10,11,12, 20, 21, 22]}
)
and its output:
id num time
A 10 1
A 11 2
A 12 3
B 20 1
B 21 2
B 22 3
What I want is for the new column's value to be the num value at time==1 for each unique id. Here's what I would like the output to be:
id num time y
A 10 1 10
A 11 2 10
A 12 3 10
B 20 1 20
B 21 2 20
B 22 3 20
One attempt I've made is to create a reference table like this:
df['y'] = np.where(df['time']==1, df['num'], None)
ref = df[['id','y']]
ref = ref.dropna()
But I still don't know where to go from here. Thank you!
AI: One can create a new dataframe with only the first entry of each id, copy num to a new column y, and merge this with the original dataframe:
newdf = df.drop_duplicates('id')
newdf['y'] = newdf['num']
newdf = df.merge(newdf, how='outer')
However, it will put NaN for non-first id rows:
print(newdf)
id num time y
0 A 10 1 10.0
1 A 11 2 NaN
2 A 12 3 NaN
3 B 20 1 20.0
4 B 21 2 NaN
5 B 22 3 NaN
One can change these NaN to the previous values with the following simple loop:
tempval = 0 # a variable to store value temporarily
newy=[]
for x in newdf['y']:
if not pd.isnull(x): tempval = x
newy.append(tempval)
newdf['y'] = newy
The desired dataframe is obtained:
print(newdf)
id num time y
0 A 10 1 10.0
1 A 11 2 10.0
2 A 12 3 10.0
3 B 20 1 20.0
4 B 21 2 20.0
5 B 22 3 20.0
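For reference, the same result can be obtained more compactly by mapping each id to its num value at time == 1 (a sketch using the dataframe from the question):
import pandas as pd

df = pd.DataFrame({'time': [1, 2, 3, 1, 2, 3],
                   'id': ['A', 'A', 'A', 'B', 'B', 'B'],
                   'num': [10, 11, 12, 20, 21, 22]})

first_num = df.loc[df['time'] == 1].set_index('id')['num']  # num value at time == 1 per id
df['y'] = df['id'].map(first_num)
print(df)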
Actually, this question belongs to https://stackoverflow.com/ |
H: StandardScaler before or after splitting data - which is better?
When I was reading about using StandardScaler, most of the recommendations were saying that you should use StandardScaler before splitting the data into train/test, but when I was checking some of the code posted online (using sklearn), there were two major usages.
Case 1: Using StandardScaler on all the data. E.g.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_fit = sc.fit(X)
X_std = X_fit.transform(X)
Or
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit(X)
X = sc.transform(X)
Or simply
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_std = sc.fit_transform(X)
Case 2: Using StandardScaler on split data.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform (X_test)
I would like to standardize my data, but I am confused about which approach is best!
AI: In the interest of preventing information about the distribution of the test set leaking into your model, you should go for option #2 and fit the scaler on your training data only, then standardise both training and test sets with that scaler. By fitting the scaler on the full dataset prior to splitting (option #1), information about the test set is used to transform the training set, which in turn is passed downstream.
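One convenient way to guarantee this in scikit-learn is to wrap the scaler and the estimator in a Pipeline, so the scaler is only ever fitted on the data the pipeline is fitted on (a sketch with a placeholder dataset and estimator):
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)           # scaler statistics come from X_train only
print(pipe.score(X_test, y_test))    # X_test is transformed with the training statistics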
As an example, knowing the distribution of the whole dataset might influence how you detect and process outliers, as well as how you parameterise your model. Although the data itself is not exposed, information about the distribution of the data is. As a result, your test set performance is not a true estimate of performance on unseen data. Some further discussion you might find useful is on Cross Validated. |
H: Should I use rescale parameters for data augmentation?
I am using Keras library to build a CNN model. I want to use data augmentation for training data.
Should I use rescale parameters for data augmentation?
ImageDataGenerator(rescale=1./255)
AI: The rescale argument by itself does not augment your data.
If your input pixels are in the range [0, 255], you can rescale to [0, 1] using the code in your question. Just remember to do the same on all partitions of your dataset.
If your input is already in the range [0, 1], then obviously you should not rescale. |
H: Should reinforcement learning always assume (PO)MDP?
I recently just started learning reinforcement learning and learned that reinforcement learning algorithms work under the assumption of MDP or POMDP.
However, as I read A3C and recent vision-based deep RL papers, it seems some of them aren't assuming MDPs but use RNNs or LSTMs to make it seem as if it were an MDP.
So my question is: how do reinforcement learning algorithms work without the assumption of (PO)MDPs?
AI: How do reinforcement learning algorithms work without the assumption of (PO)MDPs?
They don't. The theory of reinforcement learning is tied very strongly to an underlying MDP framework. The RNN-based solutions that you are referring to are fully compatible with such an MDP model, and don't even require a POMDP to be useful.
Without the core guarantees of a (PO)MDP model, or something closely equivalent, it is not clear that any learning could occur, with any kind of agent. The MDP model of an environment is about describing consistent behaviour, that is predictable to some degree, and random/stochastic otherwise, and the predictable parts make it amenable to at least some optimisation. The split into states, actions, time steps and rewards help organise the thinking around this. They are not necessary for other kinds of policy-search approaches, such as genetic algorithms. However, if you try to break away from something that would fit to a (PO)MDP, it would break any other kind of meaningful policy too:
If actions had no consequence, then you could learn the value of being in a particular state, but you could not optimise an agent. This could be modelled as a Markov Reward Process, provided state transitions were not completely random, otherwise just learning associations of state to reward using a supervised learning approach would be the best you could do.
If rewards were not consistently based on any data available to the agent, not even history, but not random, then there is no way to learn how to predict or optimise rewards.
Similarly for state transitions, if they bear no relation to any information known about the environment, current state or history, but are not random, then there is no way to learn about the non-randomness, and no kind of agent could generate a meaningful policy to take advantage of knowledge about the system, because the knowledge available is not relevant. However, if the current state still influenced what rewards were available to which actions, then a contextual bandit approach might work (plus a supervised learning approach could predict currently available rewards).
When the information about rewards or state transitions is not directly available, but can be inferred or guessed at least partially from history or context, then you can model this as a POMDP.
One common scenario you can face is that you have available some observations about the environment, but are not sure how to construct a state description that has the Markov property. The velocity of an object might be such a detail, when your observations only give you positions. Technically a POMDP and this observation/state mismatch are the same basic issue, if you decided arbitrarily that your observation was the state.
When faced with this mismatch between easily-available observations and a state description based on history that would be more useful, you can either try to engineer useful features, or you can turn to learning models that will infer them. That is where using RNNs can come in useful as part of RL, and they can help with both observation to state mapping and also inferring more complex hidden state variables in POMDPs. Use of hidden markov models to model a "belief state" that augments the observed state is similar to the latter use of RNNs. |
H: What is the difference between using numpy array images and using images files in deep learning?
What is the difference between using numpy array images and using images files in deep learning?
Which way is better?
AI: In order to pass an image as an input to a model, you first need to convert it to a numpy array. Each image is actually represented as an array of values when you load it into Python. Even if you don't do the conversion explicitly (e.g. when using keras' ImageDataGenerator), it is done behind the scenes.
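For instance, loading a single image file already gives you an array (a small sketch; the file name is just a placeholder):
import numpy as np
from PIL import Image

img = Image.open('example.jpg')   # placeholder path to any image file
arr = np.array(img)               # shape (height, width, channels), dtype uint8
print(arr.shape, arr.dtype)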
If your question is: Is it better to use generators than loading the images in a large numpy array?
The answer is: it depends. Is the dataset small enough to fit in your memory?
If not, you are forced to use a generator that loads the images in batches and passes each batch to the model.
If yes, you either can use a generator to save memory for other things (e.g. the model) or you can load the images into a numpy array so that you can save on computation time (i.e. the overhead of loading images again and again). |
H: Data augmentation parameters
When I use data augmentation to increase the train dataset, should I use all augmentation techniques (parameters in keras)?
Which data augmentation parameters should I use with flow_from_directory?
AI: This entirely depends on your data!
Generally, the more augmentation, the more situations your model will be exposed to during training and therefore the more robust it will be when being tested on unseen data.
However, what if we were, for example, working on a model for self-driving cars? Using vertical_flip just doesn't make sense, because the car will (hopefully!) never be driving along on its roof.
I would suggest starting with no augmentation and slowly adding one possibility at a time. For example, you record an accuracy of 80% with no augmentation. Then adding featurewise_center and featurewise_std_normalization gives you an accuracy of 85%. Then adding horizontal_flip gets you to 90%. Finally, you try adding zca_whitening and that sends you back down to 86%.
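To answer the flow_from_directory part, the augmentation arguments live on the generator, and the directory call stays the same while you add or remove them (a sketch; the directory path and target size are assumptions):
from keras.preprocessing.image import ImageDataGenerator

# start with a small set of augmentations and add more one at a time
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=15,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
    'data/train',              # assumed path to your training images
    target_size=(224, 224),    # assumed input size of your model
    batch_size=32,
    class_mode='categorical')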
The reverse approach may also work well for you, starting with all augmentation parameters turned on and removing them one by one. In any case, it is completely dependent on your specific problem and your available data. Keras' ImageDataGenerator has a long list of parameters, so having a think about what makes sense will save you a lot of time. |
H: How can I prove bottleneck layer of my CNN auto encoder contain useful information?
I am using a CNN autoencoder to create a state representation layer which I will later feed into my reinforcement learning agent. So I trained my CNN autoencoder and it is giving nice state representations. But I have the following questions:
Can my autoencoder layer be overfitted?
If there's an overfit, will it cause rubbish information in my bottleneck layer?
AI: Yes to both of your questions. Your autoencoder can overfit and this will cause your bottleneck to store useless information (besides any useful information it already stores).
Some ways to prevent this are:
Find a larger dataset, or augment the current.
Add noise to the input (see de-noising autoencoders).
Regularization (e.g. early stopping, sparsity constraints) |
H: Voice recognition with fourier transformation with audio input in python
First of all, is using the Fourier transform even a good method for recognizing different speakers? I'm not sure if it could recognize a voice if the things that are said are different. I know Google and Amazon have voice/speaker recognition features in their voice assistants, but what would be a good way to build that too if the Fourier transform doesn't work out?
I want to recognize voices using a neural network. To do that I first need a good input for the neural network, but just giving the sound recording as input I don't think would work, because it is based on frequency and time. So I found the Fourier transform, and now I'm trying to transform my audio file with it and plot it.
My questions are:
How can I plot a Fourier transformation with audio input in python?
And if that is working, how can I feed the Fourier transform into the neural network (I thought perhaps giving every neuron a y value, with the neurons as the corresponding x values)?
I tried something like (a combination of things I found on the internet):
import matplotlib.pyplot as plt
from scipy.io import wavfile as wav
from scipy.fftpack import fft
import numpy as np
import wave
import sys
spf = wave.open('AAA.wav','r')
#Extract Raw Audio from Wav File
signal = spf.readframes(-1)
signal = np.fromstring(signal, 'Int16')
fs = spf.getframerate()
fft_out = fft(signal)
Time=np.linspace(0, len(signal)/fs, num=len(signal))
plt.figure(1)
plt.title('Signal Wave...')
plt.plot(Time,fft_out)
plt.show()
but considering my input in the mic was 'aaaaaa' it does not seem right.
AI: The vanilla version of the Fourier transform (fft) is not the best feature extractor for audio or speech signals. This is primarily because the FT is a global transformation, meaning that you lose all information along the time axis after the transformation.
You need to be familiar with the concept of the short-time Fourier transform (STFT). Basically, the STFT tells you what frequency components exist in your signal at each timestamp. The result of the STFT (its squared magnitude, to be precise) is called the spectrogram, which is what people usually visualize. An example of a spectrogram from the link above:
You may refer to matplotlib.pyplot.specgram or scipy.signal.stft regarding how to plot a spectrogram in Python.
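A minimal sketch of plotting a spectrogram for the recording from the question (the FFT size and overlap are arbitrary choices):
import matplotlib.pyplot as plt
from scipy.io import wavfile

fs, signal = wavfile.read('AAA.wav')   # the same file as in the question
if signal.ndim > 1:
    signal = signal[:, 0]              # keep one channel if the recording is stereo

plt.specgram(signal, NFFT=512, Fs=fs, noverlap=256)
plt.xlabel('Time [s]')
plt.ylabel('Frequency [Hz]')
plt.show()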
The most commonly used speech features (as input for neural networks) are the Mel-Frequency Cepstral Coefficients, or MFCCs, which carry similar semantic meaning to the spectrogram. Other commonly used features include PLP, LPCC, etc., which you can google for more details. But directly feeding the result of the FT or STFT into a neural network is not the best practice. |
H: Data aggregation and split train test samples
I'm working on a data science project where the goal is to predict daily electricity consumption of a building based on some of its characteristics (e.g., size, location, etc.) and weather features (e.g., temperature, humidity, wind, rain, sun radiation).
Some of my weather features (temperature, humidity, wind) are in hourly intervals and others in daily intervals (rain, sun radiation).
My target variable (electricity consumption) is also recorded at hourly intervals.
My goal is to predict daily consumption, so I need to aggregate my input data at a daily interval (the sum of 24 values for consumption, the average of 24 temperatures, ...).
My question is: do I have to aggregate the data before the train/test split or after?
If I do the aggregation first, I will consider only those days with 24 values (one for each hour) and drop the others in order not to introduce bias.
Then I will split train/test. So basically, I will clean my data before splitting.
Am I missing something?
AI: If you are training the model on the aggregated values anyway, then it wouldn't make any difference to the final dataset that is fed to the model.
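For the aggregation step itself, pandas' resample handles the hourly-to-daily conversion with a different aggregation per column (a sketch; the dataframe and column names are assumptions):
import numpy as np
import pandas as pd

# toy hourly data standing in for the real inputs
idx = pd.date_range('2018-01-01', periods=48, freq='H')
hourly = pd.DataFrame({'consumption': np.random.rand(48),
                       'temperature': np.random.rand(48)}, index=idx)

daily = hourly.resample('D').agg({'consumption': 'sum',    # daily consumption = sum of 24 hourly values
                                  'temperature': 'mean'})  # daily temperature = average of 24 values
# keep only complete days (24 hourly rows), as described in the question
daily = daily[hourly.resample('D').size() == 24]
print(daily)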
You might consider aggregating earlier in your pipeline so as to reduce the amount of work that has to be done afterwards (whatever steps you go through). Aggregating means fewer data points, so the following steps will run faster. |
H: Why should we use (or not) dropout on the input layer?
People generally avoid using dropout at the input layer itself. But wouldn't it be better to use it?
Adding dropout (given that it's randomized, it will probably end up acting like another regularizer) should make the model more robust. It will make it less dependent on any given set of features, which always matter, and let the NN find other patterns too, so the model generalizes better even though we might be missing some important features, but that's randomly decided per epoch.
Is this an incorrect interpretation? What am I missing?
Isn't this equivalent to what we generally do by removing features one by one and then rebuilding the non-NN-based model to see the importance of it?
AI: Why not? Because the risks outweigh the benefits.
It might work on images, where the loss of pixels/voxels could be somewhat "reconstructed" by other layers; also, pixel/voxel loss is somewhat common in image processing. But if you use it on other problems like NLP or tabular data, dropping columns of data randomly won't improve performance and you will risk losing important information randomly. It's like running a lottery to throw away data and hoping other layers can reconstruct the data.
In the case of NLP you might be throwing away important keywords, and in the case of tabular data, you might be throwing away data that cannot be replicated anywhere else, like genes in a genome, or numeric values or factors in a table, etc.
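For completeness, input-level dropout is easy to express if you do want to experiment with it (a minimal Keras sketch; the layer sizes are arbitrary):
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dropout(0.2, input_shape=(100,)))  # dropout applied directly to the input features
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))                      # the usual dropout after a hidden layer
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])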
I guess this could work if you are using an input-dropout-hidden layer model, as you described, as part of a larger ensemble, so that the model focuses on other, less evident features of the data. However, in theory, this is already achieved by dropout after the hidden layers. |
H: Boundaries of Reinforcement Learning
I finally developed a game bot that learns how to play the videogame Snake with Deep Q-Learning. I tried different neural networks and hyper-parameters, and I found a working set-up for a specific set of rewards. The problem is: when I reward the agent for going in the right direction - positive rewards in case the coordinates of the agent increase or decrease accordingly to the coordinates of the food - the agent learns pretty fast, obtaining really high scores. When I don't reward the agent for that, but only give negative rewards for dying and positive rewards for eating the food, the agent does not learn. The state takes into account if there's any danger in proximity, if the food is up, down, right or left, and if the agent is moving up, down, right or left.
Here's the question: is rewarding the agent for going in the right direction a "correct approach" in Reinforcement Learning? Or is it seen as cheating, because the system needs to learn that by itself? Is passing the coordinates of the food as state another way of "cheating"?
AI: Here's the question: is rewarding the agent for going in the right direction a "correct approach" in Reinforcement Learning?
It depends on what you are hoping the agent is capable of learning by itself. This is an issue for you here, because you have a "toy" problem where you can control a lot more of the environment and alter the meaning of what it means to win.
In general, then yes this is "cheating", at least in terms of claiming to have written an RL agent that solves the game. The academically ideal basic RL agent is rewarded by the gain of something meaningful in the context of the problem being solved, and is not helped by interim rewards. In a game of snake, and any other arcade-style game, it should really be the official points scored in the game and nothing else.
Is passing the coordinates of the food as state an other way of "cheating"?
Again it depends on what you expect the agent to learn from. If, in your target production environment, this data was easy to obtain, and you intended to use it to write a game bot working from the trained policy, then this is fine. There is no requirement that you do one thing or another if you have a practical problem to solve.
However, learning from a pixel-only state, as in the DQN original papers, is of academic interest, because that is a generic state representation that applies to many problems, whilst the distance from the snake to food is a specific feature that you have engineered that makes learning easier in a smaller set of games.
The main issue here is again that your goal is not really to put a "snake bot" into a production system, but to learn how RL works. RL is tricky, and often doesn't work as well as you expect - or at all, for many combinations of algorithm and problem.
It is worth reading this article: Deep Reinforcement Learning Doesn't Work Yet - it may put disappointing results from basic DQN into perspective.
I would encourage you to strip back your Snake problem to remove "helpful" rewards and state, and instead look into extensions to the core DQN algorithm, or different learning agents such as A3C. |
H: LSTM future steps prediction with shifted y_train relatively to X_train
I'm trying to predict simple one feature time series data with shifted train data. The source looks like this:
DATE PRICE
0 1987-05-20 18.63
1 1987-05-21 18.45
2 1987-05-22 18.55
3 1987-05-25 18.60
4 1987-05-26 18.63
Actual code with data link download gisted here: gist
So the main problem is that it actually can't predict the next steps. Roughly speaking: y_train is "shifted" relative to X_train by the number of timesteps defined in the parameters. So for X_train and y_train we get something like this:
timesteps = 5
data = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
# After manipulations which you can find in the gist we get this:
X_train =
[[ 1 2 3 4 5]
[ 2 3 4 5 6]
[ 3 4 5 6 7]
[ 4 5 6 7 8]
[ 5 6 7 8 9]
[ 6 7 8 9 10]
[ 7 8 9 10 11]
[ 8 9 10 11 12]
[ 9 10 11 12 13]
[10 11 12 13 14]]
y_train =
[[ 6 7 8 9 10]
[ 7 8 9 10 11]
[ 8 9 10 11 12]
[ 9 10 11 12 13]
[10 11 12 13 14]
[11 12 13 14 15]
[12 13 14 15 16]
[13 14 15 16 17]
[14 15 16 17 18]
[15 16 17 18 19]]
So it is fair to assume that after training the LSTM model with X_train (as input) and y_train (as output) we get a model which is able to forecast n timesteps ahead. BUT I encountered a problem: the trained model is not predicting anything - it only "duplicates" the X_test data. For convenience,
I rebuilt the X_test data and plotted it with the y_test data returned from model.predict():
So this is the result, which also contains 'Dataset prices' (pure data from dataset[upper_train + timesteps:]) for clarity.
I cannot find where I made a mistake (or maybe this approach is bad?), so I will be grateful for any help!
AI: Unfortunately, it is more likely that this approach itself is bad. It's not the fault of your LSTM or neural network.
You may be able to find a lot of online tutorials using RNN/LSTM to predict stock price/crude oil price/bitcoin price or whatever price, based on its price history, and their results are almost surely meaningless.
Note that I am not saying analyzing a market price is meaningless, but predicting future prices based only on historical prices is usually meaningless, given that the market is well established and most relevant information influencing its price is readily available; or in other words, the market is efficient.
Outside of the domain of data science, the assumptions for the efficient-market hypothesis are not entirely true for most markets, but its consequence is close to the truth for large markets: the asset price follows a random walk. In terms of prediction into the future, the price behaves as a martingale, i.e. the best prediction for tomorrow's price is today's price. |
H: How to plot train test error for classification models like Support Vector Classification(SVC)
How do I plot train and test error for classification models like Support Vector Classification (SVC)? I am using SVC from the sklearn module and am not able to get the train and test errors to plot.
AI: Well you haven't defined what "error" means, so I'll just assume that you want the log loss.
First, you need to create your SVC object telling it you'll want probability estimations:
model = sklearn.svm.SVC(probability=True)
Then, after fitting the model on your training data, you can compute the log loss for both the training and the test set:
probs_train = model.predict_proba(X_train)
train_loss = sklearn.metrics.log_loss(y_true=y_train, y_pred=probs_train)
probs_test = model.predict_proba(X_test)
test_loss = sklearn.metrics.log_loss(y_true=y_test, y_pred=probs_test) |
H: Which recommender system: Content based or Collaborative filtering?
I want to build a recommender system for a coupons website which should do the following:
Given the past purchase behaviour of a user, recommend coupons which the user is likely to buy.
The data does not have any ratings for coupons by the user. It tells whether a user bought a certain coupon or not, and the gender and location of the user who bought it. Between content-based and collaborative filtering, which would be the best recommender system to choose in this scenario? Personally, I think that a content-based recommender system fits the requirement because I want to look only at the user's own historical behaviour. What would you guys advise? Thanks.
AI: I think it will strongly depend on the details of the content you have for the coupons, but you can always train a hybrid model with different options for selecting the final recommendation, like weighting the results or selecting the top one from each model. The most important thing is that you build a reliable test set and can compare performance between models. |
H: Neural Network for classifying humans
I am currently searching for a neural network that can classify whether there is a human in an image or not.
I checked the ImageNet dataset, but the 1000 classes there contain nothing like human or person or such, mostly animals and things.
Segmentation CNNs usually have a lot longer runtime, and all I really need is basically a binary NN, just "Yes = Human in Image", "No = No human in image". Can someone point me to a training set or a pre-trained network?
AI: In Part 1 of this "tutorial/discussion" there is a good explanation of the problem and a possible solution using OpenCV's built-in Haar Cascades or HOG for Human Detection (this is the keyword you should use to search for information about your task).
They are both really fast compared to NNs and require little training.
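A minimal sketch of the built-in OpenCV HOG person detector, reduced to the binary yes/no output you describe (the image path is a placeholder):
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # pre-trained pedestrian detector

img = cv2.imread('test.jpg')                                      # placeholder path to your image
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))
print('Human in image' if len(rects) > 0 else 'No human in image')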
Part 2 explains more modern approaches using Deep Learning. A notable mention would be AlexNet.
At the end there is even a tutorial on Setting up a Basic Human Detector with Tensorflow. |
H: RL's policy gradient (REINFORCE) pipeline clarification
I am trying to build a policy gradient RL machine. Let's look at REINFORCE's equation for updating the model parameters by taking a gradient step to make the ascent (I apologize if the notation is slightly non-conventional):
$$\omega = \omega + \alpha \cdot \nabla_\omega \log \pi(A_t|S_t) \cdot V_t$$
My questions I am unsure about are the following:
Do I calculate the gradient values at each time step $t$ (in SGD fashion), or is averaging the gradient over all timesteps of the episode a better option?
Do I care about getting the gradient values of the selected action's probability output only (ignoring the outputs for the other actions, in a discrete case)? In other words, do I consider the $V_t$ term for non-selected actions to be 0, which makes the gradient values equal 0 as well?
In a discrete case the cross-entropy (the loss) is defined as:
$$H(p,q) = -\sum_x P(x) \log Q(x)$$
(source: wikipedia)
Does that mean that if I substitute the labels (denoted as $P(x)$)
with the $V_t$ terms (non-zero for selected action only) in my
neural network training, I will be getting the correct gradient
values of the log-loss which fully satisfy the REINFORCE definition?
AI: For notation and visualizations please take a look at this excellent tutorial Policy Gradients.
For your questions:
The second is correct. In PGs we try to maximize the expected reward. In order to do this, we approximate the expectation with the mean reward over sampled trajectories under a parametrized policy. In other words, sample actions and get their respective rewards over a period of time within an episode. Compute the discounted return from the last step back to the first step (this is $V_t$ in your notation; you will usually find it written as $R_t$, the discounted return). Multiply the returns with the log-probabilities and sum. Take a look at slides 8 and 9 of (1) to see how REINFORCE is implemented, along with these code examples (lines 59-75).
As you may have already realized, no. It's a return over an episode (multiple timesteps) and is calculated as a discounted sum of all the rewards that you collected. Even if you get 1 at the end and 0 everywhere else the reward is propagated back to the first step (doing it by hand helps a lot!) so all rewards at every timestep are converted into returns from that state and timestep.
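To make point 2 concrete, here is a minimal sketch of turning per-step rewards into discounted returns (the reward sequence and discount factor are made up):
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Convert per-step rewards into discounted returns, working backwards from the last step."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# a single reward of 1 at the end is propagated back to every earlier step
print(discounted_returns([0, 0, 0, 1]))  # approximately [0.970299 0.9801 0.99 1.]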
Look at slide 13 of (1). Your intuition is correct. The maximum likelihood equation for a classification problem (cross-entropy) multiplied by the return equals the policy gradient loss. If you use a simple neural network with REINFORCE with two actions to perform a task, you will notice that the gradients propagated back are the same as in a classification task, but here they are multiplied by their respective return (line 69 of (2)) instead of the class label (0/1).
A bit of intuition for point 3: this is in accordance with PG methods being model-free methods that map states to actions. If you use a clustering technique to cluster the hidden representations of the last layer of your network (after dimensionality reduction) and color the data points according to which action your network chose, you will find out that the representations are naturally clustered into separate groups depending on the action. At a high level, you can state that REINFORCE performs a type of classification having the reward signal as label.
Hope it helps! |
H: Too low accuracy on MNIST dataset using a neural network
I am beginning with deep learning. This is an implementation of a simple neural network with just 1 hidden layer on the MNIST dataset. Why is it that the loss doesn't change at all after any epoch? It clearly means that it is not learning at all. The accuracy is approx. 11%, which is like random guessing. But should it be so low?
I have used Adam optimizer and cross_entropy loss.
input_nodes = 784
hl1_nodes = 64
output_nodes = 1
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train_reshaped = X_train.reshape(X_train.shape[0],784)
model = Sequential()
model.add(Dense(hl1_nodes, activation='relu', input_shape=(input_nodes,)))
model.add(Dense(output_nodes, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x=X_train_reshaped, y=y_train, validation_split=0.33, verbose=1,epochs=10)
#output
Train on 40199 samples, validate on 19801 samples
Epoch 1/10
40199/40199 [==============================] - 4s 87us/step - loss: -55.0254 - acc: 0.1142 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 2/10
40199/40199 [==============================] - 3s 76us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 3/10
40199/40199 [==============================] - 3s 74us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 4/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 5/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 6/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 7/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 8/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 9/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
Epoch 10/10
40199/40199 [==============================] - 3s 75us/step - loss: -55.0284 - acc: 0.1141 - val_loss: -55.1361 - val_acc: 0.1088
What am I missing?
Edit: It is the same up to the 4th decimal digit even after the 90th epoch.
AI: What am I missing?
Incorrect architecture for the classification task. You have a single binary output, trained using binary_crossentropy, so the NN can only classify something as in a class (label 1) or not (label 0). Instead, you most likely want 10 outputs using softmax activation instead of sigmoid, and categorical_crossentropy as the loss, so that you can classify which digit is most likely given an input.
Incomplete processing of MNIST raw data. The input pixel values in X_train range from 0 to 255, and this will cause numeric problems for a NN. The target labels in y_train are the digit value (0,1,2,3,4,5,6,7,8,9), whilst for classification you will need to turn that into binary classes - usually a one-hot coding e.g. the label 3 becomes a vector [0,0,0,1,0,0,0,0,0,0].
Scale the inputs - a quick fix might be X_train = X_train/ 255 and X_test = X_test/ 255
One-hot code the labels. A quick fix might be y_train = keras.utils.to_categorical(y_train)
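Putting the changes described above together gives something like the following sketch (keeping the 64 hidden units from the question):
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 784) / 255.0   # scale the inputs
y_train_cat = to_categorical(y_train)                      # one-hot code the labels, shape (n, 10)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))                  # one output per digit class
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train_cat, validation_split=0.33, epochs=10, verbose=1)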
I made those changes to your code and got this after 10 epochs:
val_loss: 0.1194 - val_acc: 0.9678 |
H: Fully connected layer in deep learning
How to determine the best number of the fully connected layers in CNN? Can I use only one fully connected layer in CNN? How to determine the dimension of the fully connected layer output?
AI: In CNNs, the convolutional layers are used to extract features from the input in order to reduce the cost function. These extracted convolutional features are then classified using dense layers; in other words, the role of the dense layers is to classify the convolutionally extracted features. Based on the complexity of the high-level features extracted in the deep convolutional layers, you can have one or more dense layers.
Take a look at
How to set the number of neurons and layers in neural networks
to investigate how to set the number of neurons in dense layers.
Can I use only one fully connected layer in CNN?
Yes, you can. For instance, you can use only one layer for the MNIST data set and get acceptable learning.
How to determine the dimension of the fully connected layer output?
The dimension of each fully connected layer's output is equal to the number of neurons in that layer. For instance, suppose you have a weight matrix $W$ which is $10\times20$; the latter number represents the number of output neurons. Consequently, the output belongs to $\mathbb{R}^{20}$. |
H: TypeError: '<' not supported between instances of 'int' and 'str'
I have the following code
rf = RandomForestClassifier()
rf.fit(X_train, Y_train)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 2), rf.feature_importances_), X_train), reverse=True))
and I get the following error:
> TypeError                                 Traceback (most recent call last)
> <ipython-input-109-c48c3ffd74e2> in <module>()
>       2 rf.fit(X_train, Y_train)
>       3 print ("Features sorted by their score:")
> ----> 4 print (sorted(zip(map(lambda x: round(x, 2), rf.feature_importances_), X_train), reverse=True))
> TypeError: '<' not supported between instances of 'int' and 'str'
I am not sure what I am doing wrong. I only have int and float in my dataframe.
AI: Based on our discussion, omit reverse=True and wrap the zip in list() instead of sorted(), then print it. I guess you will then see what you want. |
H: Interpretation of the loss function for word2vec
I am trying to understand the loss function which is used for the word2vec model, but I don't really follow the argumentation behind this video https://www.youtube.com/watch?v=ERibwqs9p38&t=5s, at 29:30.
The formula which is unclear is the following:
$$J(\theta) = -\dfrac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j\ne 0}} \log p(w_{t+j}\mid w_t).$$
$T$ is the number of words in the vocabulary
$w_t$ is a given word and we try to calculate the probablity that another word $w_{t+j}$ occurs within a window of +/- $m$ words ahead.
$\theta$ is the solution we're after. It is essentially a $2dT\times 1$ dimensional vector which contains all the columns of the matrices V and U.
At first sight it all looks pretty clear: we're iterating through the whole vocabulary and, for each (then fixed) word, we add up all probabilities that another word occurs within a window around that fixed word.
However, it falls apart for me when I consider that a word $w_t$ occurs in many positions in the corpus and might occur multiple times with a different word. E.g., the words 'deep learning' often occur together, which indicates that there's a contextual relation between them. Why would we only count them twice? It seems like the formula above counts each pair $p(w_{t+j}|w_t)$ just twice (e.g. once for $p(deep|learning)$ and once for $p(learning|deep)$). IMO we need a correcting term that adjusts for the missing 'frequency', e.g.
$$J(\theta) = -\dfrac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j\ne 0}} \log\big(\lambda(w_t,w_{t+j})\,p(w_{t+j}\mid w_t)\big).$$
In the case where $\lambda(x,y) = 1$, we get the formula above, but we would also be free to choose a function that boosts frequent occurrences (pairwise). The formula above could then be seen as a special case when you don't care that words that occur more often together get a boost.
On the other hand, when the formula above already accounts for multiple occurrences, then where is this visible?
The author then continues and defines $p(o|w)$ as $\exp(o^T w)/\sum_u \exp(u^T w)$. In this particular case, I don't see that we're counting the dot-product as many times as the word $o$ is in the neighbourhood of the word $w$. Maybe the choice of $p$ is just very simplistic and represents a model where the fact that 2 words are in the neighbourhood (just somewhere in the corpus) is enough (bag of words model???). It's hard to see then that such models deliver good performance in NLP though.
AI: Not sure what the video said, but $T$ should not be the vocabulary size, but the training corpus size (number of all words).
For example, if your training corpus is
deep learning is popular . i love deep learning . i want to learn more about it.
Then when you sum up over $T$, you will sum up all the word pairs in the corpus including duplicates. The word pair (deep learning) is indeed calculated twice.
For details please refer to the original skip-gram paper, and notice definition (1) on page 2.
H: What does the co-ordinate output in the yolo algorithm represent?
My question is similar to this topic.
I was watching this lecture on bounding box prediction by Andrew Ng when I started thinking about output of yolo algorithm.
Let's consider this example, We use 19x19 grids and only one receptive field with 2 classes, so our output will be => 19x19x1x5.
The last dimension(array of size 5) represents the following:
1) The class (0 or 1)
2) X-coordinate
3) Y-coordinate
4) height of the bounding box
5) Width of the bounding box
I don't understand whether the X, Y coordinates represent the bounding box with respect to the size of the entire image or just the receptive field (filter). In the video the bounding box is represented as a part of the receptive field, but logically the receptive field is much smaller than the bounding box, and also people might tinker with the filter size, so positioning bounding boxes with respect to the filter makes no sense.
So, basically what does the coordinates of bounding boxes of an image represent ?
AI: Simply the top-left vertex of the bounding box.
With X, Y and its height and width you can locate the bounding box in the image (with respect to the size of the entire image).
H: Getting rid of maxpooling layer causes running cuda out memory error pytorch
Video card: gtx1070ti 8Gb, batchsize 64.
I had such a UNET with resnet152 as encoder which worked pretty fine:
class UNetResNet(nn.Module):
def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
pretrained=False, is_deconv=False):
super().__init__()
self.num_classes = num_classes
self.dropout_2d = dropout_2d
if encoder_depth == 34:
self.encoder = torchvision.models.resnet34(pretrained=pretrained)
bottom_channel_nr = 512
elif encoder_depth == 101:
self.encoder = torchvision.models.resnet101(pretrained=pretrained)
bottom_channel_nr = 2048
elif encoder_depth == 152:
self.encoder = torchvision.models.resnet152(pretrained=pretrained)
bottom_channel_nr = 2048
else:
raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU(inplace=True)
self.conv1 = nn.Sequential(self.encoder.conv1,
self.encoder.bn1,
self.encoder.relu,
self.pool) #from that pool layer I would like to get rid off
self.conv2 = self.encoder.layer1
self.conv3 = self.encoder.layer2
self.conv4 = self.encoder.layer3
self.conv5 = self.encoder.layer4
self.center = DecoderCenter(bottom_channel_nr, num_filters * 8 *2, num_filters * 8, False)
self.dec5 = DecoderBlockV(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec4 = DecoderBlockV(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec3 = DecoderBlockV(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2, is_deconv)
self.dec2 = DecoderBlockV(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,
is_deconv)
self.dec1 = DecoderBlockV(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv)
self.dec0 = ConvRelu(num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
def forward(self, x):
conv1 = self.conv1(x)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
conv5 = self.conv5(conv4)
center = self.center(conv5)
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(dec2)
dec0 = self.dec0(dec1)
return self.final(F.dropout2d(dec0, p=self.dropout_2d))
# blocks
class DecoderBlockV(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True):
super(DecoderBlockV, self).__init__()
self.in_channels = in_channels
if is_deconv:
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2,
padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
else:
self.block = nn.Sequential(
nn.Upsample(scale_factor=2, mode='bilinear'),
ConvRelu(in_channels, middle_channels),
ConvRelu(middle_channels, out_channels),
)
def forward(self, x):
return self.block(x)
class DecoderCenter(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True):
super(DecoderCenter, self).__init__()
self.in_channels = in_channels
if is_deconv:
"""
Paramaters for Deconvolution were chosen to avoid artifacts, following
link https://distill.pub/2016/deconv-checkerboard/
"""
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2,
padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
else:
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
ConvRelu(middle_channels, out_channels)
)
def forward(self, x):
return self.block(x)
Then I edited my class looks to make it work without pooling layer:
class UNetResNet(nn.Module):
def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
pretrained=False, is_deconv=False):
super().__init__()
self.num_classes = num_classes
self.dropout_2d = dropout_2d
if encoder_depth == 34:
self.encoder = torchvision.models.resnet34(pretrained=pretrained)
bottom_channel_nr = 512
elif encoder_depth == 101:
self.encoder = torchvision.models.resnet101(pretrained=pretrained)
bottom_channel_nr = 2048
elif encoder_depth == 152:
self.encoder = torchvision.models.resnet152(pretrained=pretrained)
bottom_channel_nr = 2048
else:
raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')
self.relu = nn.ReLU(inplace=True)
self.input_adjust = nn.Sequential(self.encoder.conv1,
self.encoder.bn1,
self.encoder.relu)
self.conv1 = self.encoder.layer1
self.conv2 = self.encoder.layer2
self.conv3 = self.encoder.layer3
self.conv4 = self.encoder.layer4
self.dec4 = DecoderBlockV(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec3 = DecoderBlockV(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec2 = DecoderBlockV(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2, is_deconv)
self.dec1 = DecoderBlockV(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,is_deconv)
self.final = nn.Conv2d(num_filters * 2 * 2, num_classes, kernel_size=1)
def forward(self, x):
input_adjust = self.input_adjust(x)
conv1 = self.conv1(input_adjust)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
center = self.conv4(conv3)
dec4 = self.dec4(center) #now without centblock
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = F.dropout2d(self.dec1(torch.cat([dec2, conv1], 1)), p=self.dropout_2d)
return self.final(dec1)
is_deconv is True in both cases. After this change it stopped working with batch size 64: it only works with a batch size of 16, or with batch size 64 but resnet16 only - otherwise CUDA runs out of memory. What am I doing wrong?
Full stack trace of the error:
~/Desktop/ml/salt/open-solution-salt-identification-master/common_blocks/unet_models.py in forward(self, x)
418 conv1 = self.conv1(input_adjust)
419 conv2 = self.conv2(conv1)
--> 420 conv3 = self.conv3(conv2)
421 center = self.conv4(conv3)
422 dec4 = self.dec4(center)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
355 result = self._slow_forward(*input, **kwargs)
356 else:
--> 357 result = self.forward(*input, **kwargs)
358 for hook in self._forward_hooks.values():
359 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
65 def forward(self, input):
66 for module in self._modules.values():
---> 67 input = module(input)
68 return input
69
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
355 result = self._slow_forward(*input, **kwargs)
356 else:
--> 357 result = self.forward(*input, **kwargs)
358 for hook in self._forward_hooks.values():
359 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torchvision-0.2.0-py3.6.egg/torchvision/models/resnet.py in forward(self, x)
79
80 out = self.conv2(out)
---> 81 out = self.bn2(out)
82 out = self.relu(out)
AI: I'm not really familiar with pytorch (I only know keras) so I'm not really sure. But here are some possible reasons for the memory error:
The garbage collector isn't working properly, so the neural network models you've created while doing trial and error are just piling up in memory and aren't being cleared. This can occur when you are using a notebook and making modifications to the NN model there. In keras, you can add a few lines of code to manually free up the GPU memory.
Removing the maxpooling layer makes the model too large for the memory to handle. Basically, the function of the maxpooling layer is to keep only the maximum values produced by the previous convolution layers. By removing the maxpooling layer, you tell the model to use all of the output produced by the previous convolution layers. These could be in the hundreds of millions, versus only a few thousand when only the maximum values are kept by the maxpooling layer.
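Regarding the first point, a hedged sketch of manually freeing GPU memory between runs in Keras with a TensorFlow backend (in PyTorch, deleting the model and calling torch.cuda.empty_cache() plays a similar role):

from keras import backend as K

K.clear_session()  # destroys the current graph and frees the GPU memory it was holding
# rebuild and recompile the model from scratch after this point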
H: 1x1 Convolution. How does the math work?
So I stumbled upon Andrew Ng's course on $1x1$ convolutions.
There, he explains that you can use a $1x1x192$ convolution to shrink it.
But when I do:
input_ = torch.randn([28, 28, 192])
filter = torch.zeros([1, 1, 192])
out = torch.mul(input_,filter)
I obviously get $28x28x192$ matrix. So how can I shrink it?
Just add the result of every $1x1x192 * 1x1x192$ kernel result? So I'd get a $28x28x1$ matrix?
AI: Let's go back to normal convolution: let's say you have a 28x28x3 image (3 = R, G, B).
I don't use torch, but keras, but the principle applies I think.
When you apply a 2D convolution, passing the size of the filter, for example 3x3, the framework adapts your filter from 3x3 to 3x3x3! The last 3 is due to the depth of the image.
The same happens when, after a first layer of convolution with 100 filters, you obtain an image of size 28x28x100: at the second convolution layer you decide only the first two dimensions of the filter, let's say 4x4. The framework instead applies a filter of dimension 4x4x100!
So, to reply to your question: if you apply a 1x1 convolution to 28x28x100, passing k as the number of filters, you obtain an activation map (result) of dimension 28x28xk.
And that's the shrink suggested by Ng.
Again, to fully reply to your question, the math is simple: just apply the theory of convolution using 3D filters, i.e. the sum of the products of the overlapping elements of filter and image.
Edit: Simple example
I am going to show you a simple example of the operations that occur during the deep convolution between M, a tensor of dimension (z= 2, x=2, y=2), where you can see z as your k that you want to shrink to 1 and W, a filter of dimension (2, 1, 1). You will have to implement your own function with a loop to operate the stride of the filter.
import numpy as np

M = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])  # input tensor of shape (2, 2, 2): depth z=2, x=2, y=2
W = np.array([[[2.]], [[3.]]])                      # 1x1 filter spanning the full depth, shape (2, 1, 1)

C = np.ones(shape=(2, 2))                           # output activation map, shape (2, 2)
C[0, 0] = np.sum(M[:, 0, 0] * W.T)
C[0, 1] = np.sum(M[:, 0, 1] * W.T)
C[1, 0] = np.sum(M[:, 1, 0] * W.T)
C[1, 1] = np.sum(M[:, 1, 1] * W.T)
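For completeness, the same deep 1x1 convolution can be written as a single vectorized expression (a sketch; C_vec is just a name introduced here):

C_vec = np.einsum('kij,k->ij', M, W[:, 0, 0])  # sum over the depth axis k
print(np.allclose(C, C_vec))                   # True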
H: Adding a new custom column in a python data frame
I need to add a new column in a python data frame that has the dates of January in a column. Each date is repeated 24 times in the column without any intervention. The total number of entries will thus be 31*24 = 744 entries. Can you please help me code this part ?
AI: This turns out to be fairly simple! There is a handy method called repeat on a datetime index. Here are the steps:
import pandas as pd
import numpy as np
Define a date range, supplying start and end
jan = pd.date_range(start="1-Jan-2018", end="31-Jan-2018") # could specify any year
Now provide how many times you want to repeat each date and create the repeated column
num_repeats = 24
repeated_jans = jan.repeat(num_repeats)
Let's create the random dummy dataframe as a base
total_dates = num_repeats * len(jan) # 24 x 31 = 744
df = pd.DataFrame(np.random.randint(0, 10, total_dates))
This is how we add a column - the name of the column can be anything
df['repeated_jans_lalala'] = repeated_jans
Have a look at some dates:
print(df.iloc[[0, 24, 48, 71, 72]]) # multiples of 24...we can see one repeated date
0 repeated_jans_lalala
0 7 2018-01-01
24 4 2018-01-02
48 3 2018-01-03
71 6 2018-01-03
72 3 2018-01-04
If try adding the column to a dataframe with a different number of rows, we get an error:
df_error = pd.DataFrame(np.random.randint(0, 10, 743)) # require 744!
#df_error['repeated_jans'] = repeated_jans # raises ValueError
If you want to change the way the dates look, you can use the strftime method on the dates
jans_fancy = jan.strftime('%d-%B-%y')
df['fancy_jans'] = jans_fancy
df.head()
0 repeated_jans_lalala fancy_jans
0 7 2018-01-01 01-January-18
1 2 2018-01-01 01-January-18
2 8 2018-01-01 01-January-18
3 9 2018-01-01 01-January-18
4 7 2018-01-01 01-January-18
If you don't want to show the actual year, just leave out the %y part! |
H: TypeError: object of type 'int' has no len(), LogisticRegression()
I'm trying to fit my Logistic Regression model, but I'm running into an error that I don't understand. Looked around and haven't found a straight answer.
Shape of independent features (X): (495,30)
Shape of dependent feature (y): (495,)
The dependent feature holds binary values (0,1) and the values are currently of type "int". This is probably where the problem is occurring, but I don't understand what I need to do to fix the issue.
This is the error I'm getting when I try fitting to sklearn LogisticRegression():
c_param = [1,10,100,1000]
rand_grid = {'kbest__k':list(range(5,10)),
'log__C':c_param,
'log__random_state':42}
log_rand = RandomizedSearchCV(pipe_opt('log',LogisticRegression()),rand_grid,n_iter=100,cv=3,
random_state=42,n_jobs=-1)
log_rand.fit(X_train,y_train)
AI: RandomizedSearchCV expects you to provide a dictionary of parameter values to try, like in your code above:
rand_grid = {'log__C':[1,10,100,1000],
             'log__random_state':42} # <-- this value should also be inside a list, e.g. [42]
Since the idea is to search for the values which give the best performance, you are expected to provide several values as a list. Though in your case you provided only one value, it won't raise an error as long as it is written as a list (inside []).
However, it's a bit odd to have only one value to search over. So it is better to fix those values when you initialize the estimator (i.e. LogisticRegression()), e.g.:
# you want to see which values perform best for params C and max_iter.
rand_grid = {'log__C':[1,10,100,1000]
'log__max_iter': [100,200,300]}
# you have fix value for params random_state, penalty and fit_intercept
logReg = LogisticRegression(random_state=42, penalty='l1', fit_intercept=False)
log_rand = RandomizedSearchCV(pipe_opt('log',logReg),
rand_grid,n_iter=100,cv=3,
random_state=42,n_jobs=-1) |
H: Is max_depth in scikit the equivalent of pruning in decision trees?
I was analyzing the classifier created using a decision tree. There is a tuning parameter called max_depth in scikit's decision tree. Is this equivalent of pruning a decision tree? If not, how could I prune a decision tree using scikit?
dt_ap = tree.DecisionTreeClassifier(random_state=1, max_depth=13)
boosted_dt = AdaBoostClassifier(dt_ap, random_state=1)
boosted_dt.fit(X_train, Y_train)
AI: Is this equivalent of pruning a decision tree?
Though they have similar goals (i.e. placing some restrictions on the model so that it doesn't grow very complex and overfit), max_depth isn't equivalent to pruning. The way pruning usually works is that you go back through the tree and replace branches that do not help with leaf nodes.
If not, how could I prune a decision tree using scikit?
You can't through scikit-learn (without altering the source code).
Quote taken from the Decision Tree documentation: Mechanisms such as pruning (not currently supported)
If you want to post-prune a tree you have to do it on your own:
You can read this excellent post detailing how to do so. |
H: How to plot mean_test score and mean_train score of GridSearchCV
How to plot mean_train_score and mean_test_score values in GridSearchCV for C and gamma values of SVM?
AI: You could visualize them as a heatmap.
For example you could use the C values as the rows, the gamma values as the columns and the color intensity of each element in the heatmap array would correspond to the mean_test_score.
To implement this you first need to create a pandas.DataFrame like this:
$$
\begin{array}{c | c c c}
& C & gamma & mean\_test\_score \\ \hline
1 & 0.1 & 0.001 & 0.798 \\
2 & 1 & 0.001 & 0.813 \\
3 & 1 & 0.01 & 0.801 \\
4 & 10 & 0.001 & 0.787 \\
\end{array}
$$
To do this you need to store each run you make in a different line, which will contain all necessary hyper-parameters and the result. Then you will need to make a pivot table which will use C as the rows, gamma as the columns and mean_test_score as the values.
pivot = pd.pivot_table(df, values='mean_test_score', index='C', columns='gamma')
This pivot will be the array that will form your heatmap. Now you should select your aesthetic parameters (e.g. colormap) and proceed to make the heatmap.
sns.heatmap(pivot) # plus any other aesthetic parameters you wish |
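Putting it all together, a sketch of the full workflow might look like this (assuming grid is your fitted GridSearchCV object; the param_* column names in cv_results_ depend on how the search was set up):

import pandas as pd
import seaborn as sns

df = pd.DataFrame(grid.cv_results_)
df = df.rename(columns={'param_C': 'C', 'param_gamma': 'gamma'})
pivot = pd.pivot_table(df, values='mean_test_score', index='C', columns='gamma')
sns.heatmap(pivot, annot=True)
# the same recipe works for 'mean_train_score' if the search was run with return_train_score=True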
H: Resampling pandas Dataframe keeping other columns
I'm facing a problem with a pandas dataframe. Actually my Dataframe contains 3 columns: DATE_TIME, SITE_NB, VALUE.
For some SITE_NB there are missing rows. For example:
DATE_TIME;SITE_NB; VALUE
2011-01-03 01:00; 1; 10.7
2011-01-03 04:00; 1; 3.2
2011-01-03 05:00; 1; -2.1
So here, rows for 2011-01-03 00:00, 2011-01-03 02:00 and 2011-01-03 03:00 are missing. What I want is add these rows with the same SITE_NB (=1) and with VALUE (=NaN)
I want to do the same for all different SITE_NB in my dataframe. So for each SITE_NB, add missing rows based on DATE_TIME with a frequency of 1 Hour, and putting NaN in VALUE for freshly added rows.
I tried resampling but did not get the right output...
Can somebody help me to solve this issue?
Thanks!
AI: To add missing indices, use:
full_idx = pd.date_range(start='<start_date>', end='<end_date>', freq='H')
df = df.set_index('DATE_TIME')   # reindex works against the (datetime) index
df = df.reindex(full_idx)
That will put NaN's in SITE_NB and VALUE columns.
If each dataframe has only one value of SITE_NB, you can use:
df['SITE_NB'].fillna(df['SITE_NB'].dropna().unique()[0], inplace=True)
which replaces all NaN's with the first non-null values in the column.
If you actually have one dataframe with multiple SITE_NB values, could you please show what that looks like? Are the time indices overlapping in some cases or not?
Also, this answer might work for you: https://stackoverflow.com/questions/32275540/pandas-reindex-dates-in-groupby |
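If your single dataframe does hold several SITE_NB values, one possible sketch is to reindex each site's group separately (column names follow your example; this assumes DATE_TIME can be parsed as datetimes):

def reindex_site(group):
    full_idx = pd.date_range(group['DATE_TIME'].min(), group['DATE_TIME'].max(), freq='H')
    group = group.set_index('DATE_TIME').reindex(full_idx)
    # the freshly added rows get NaN everywhere, so restore the site number
    group['SITE_NB'] = group['SITE_NB'].fillna(method='ffill').fillna(method='bfill')
    return group

df['DATE_TIME'] = pd.to_datetime(df['DATE_TIME'])
df_full = df.groupby('SITE_NB', group_keys=False).apply(reindex_site)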
H: Data augmentation for the inputs of CNNs to identify flowers
I want to make a neural network to identify flowers from images like this:
or similar images e.g as on https://www.pexels.com/search/flowers/
I want to use a CNN for this as in https://stackoverflow.com/questions/52463538/reducing-memory-requirements-for-convolutional-neural-network
My questions are:
Which will be better: to put whole images into training set or to divide each image into 4 parts (cutting in midline horizontally and vertically)?
Will it help if I rotate/tilt these images and put them also in training set?
Will it help if I blur these images and put them also in training set?
The main aim of network is to identify which kind of flower it is. For this, it will be trained with images of about 20 flowers. Hence, classification is the objective.
Edit: we can assume there will be only one kind of flowers in one image.
AI: As you've mentioned, your task is classification, and since you are working with images it is better to use convolutional neural networks. First, a suggestion: try to find an appropriate size with the same dimensions for all the images and feed them to your network. You can keep the aspect ratio or not, depending on the environment in which you are going to test your model. You can also take a look at Why do we scale down images before feeding them to the network.
Which will be better: to put whole images into training set or to divide each image into 4 parts (cutting in midline horizontally and vertically)?
I guess you are attempting to do data augmentation. If so, it depends. If you do that, you may get images that do not contain the flower at all, or that contain a part of a flower which is common among different classes. Consequently, it may increase the Bayes error. If you are sure you do not have these problems, you can use it, although I doubt that is the case.
Will it help if I rotate/tilt these images and put them also in training set?
Yes, it is better than the previous technique, but be aware that it is dangerous in some cases. Basically, you should train your network on a training set drawn i.i.d. from the same distribution as your test data. Suppose that while testing your model, all of your test data comes from flowers placed vertically in the scene. In that case, if you train your network with just rotated versions, you may not get a good test result.
Will it help if I blur these images and put them also in training set?
Again like the previous answer, it depends on your test data. If it is something that happens while testing, it is legitimate.
I hadn't looked closely at the images. Based on the comment of our friend, I have updated the answer. Your classes can be characterised in two ways: they can be exhaustive or not, and they can be mutually exclusive or not. If they are exhaustive, every input belongs to at least one class. If they are mutually exclusive, each input should contain only one class; if they are not mutually exclusive, a single input can contain multiple classes.
As an update to the answers above, consider that data augmentation by cutting images into smaller parts can be difficult, because it needs an expert to re-label the resulting inputs by hand, which is time consuming.
H: Does the number of outputs of a DNN have any effect on the speed of finding the optimal answer?
Does the number of outputs of a DNN have any effect on the speed of finding the optimal answer?
For instance, are more episodes needed to train a DNN when the number of outputs is larger? Is that correct?
AI: If by the number of outputs you mean the number of classes, then the answer is yes. Increasing the number of outputs will increase the number of parameters that you need to tune. The last set of layers in the DNN are fully connected and they contribute the most to the number of parameters; usually the sizes of these fully connected layers depend on the number of classes. The more classes, the larger these fully connected layers and thus the more parameters.
H: What are features in the context of reinforcement learning?
In machine learning, "feature" is a synonym for explanatory variables. I know what a feature is. However, in the specific case of RL, it's not clear to me what features are. What are "features" in the specific case of RL? What are examples of features in RL? Which RL algorithms require the specification of features? Why do we need "features" in RL?
AI: What are "features" in the specific case of RL?
In RL, the supervised learning component is used to learn a function approximation for either a value function or a policy. This is usually a function of the state, and sometimes of the state and action. The features are therefore explanatory variables relating to the state (or state and action) and need to be sufficient to explain the value or optimal action of that state.
In the RL literature, there is often no difference between the terms "state representation" and "features", although you may need to process the state representation, using feature engineering, into suitable features for whatever supervised learning you are using. With neural networks, this may involve normalising values that might otherwise cause problems.
What are examples of features in RL?
In OpenAI's gym, CartPole-v1 has the following features describing state:
Cart Position
Cart Velocity
Pole Angle
Pole Velocity At Tip
The positions and speeds are measured in arbitrary units in this case, but you might consider them to be in SI units - metres along the track, metres per second, radians, metres per second.
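For instance, you can inspect these features directly in gym (a quick sketch):

import gym

env = gym.make('CartPole-v1')
observation = env.reset()
print(observation)            # e.g. array([ 0.03, -0.01,  0.04,  0.02]) - the 4 state features
print(env.observation_space)  # Box(4,) describing the valid ranges of those features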
Typically in RL, you will want features that:
Describe the state (and possibly the action, depending on implementation)
Are feasible to collect by observation of the environment
Together with the action choice, can adequately explain expected rewards and next state. In RL terms, they should in aggregate possess the Markov property
Which RL algorithms require the specification of features?
None of them*. However, without descriptive features of state, you are limited to enumerating all possible states, and using tabular methods. Working with tabular methods constrains you in practice by the time it takes to fully explore all the states, and the memory space required to represent all the individual estimates of value.
If you are looking into solving your problem using DQN, A3C or other "deep" RL method that uses neural networks, then you need to be thinking in terms of a state representation that is composed of features, and to treat those features as if they were inputs to the same neural networks used in supervised learning.
Why do we need "features" in RL?
To work with complex environments with large state spaces that cannot be solved without some form of approximation and (hopefully) generalisation.
* This is slightly more complex with policy gradient methods, as they do require that you use a function approximator, such as a neural network. However, it is possible to make the "features" in that case a one-hot coded vector of the state enumeration, which is the same representation as tabular value-based methods. There is little reason to do that in practice, but it is possible in principle. |
H: Can grid-based clustering method be use for customer segmentation?
I am trying some clustering methods for customer segmentation and I stumbled upon grid based methods like: STING, MAFIA, WAVE CLUSTER, and CLIQUE. However, from what i've read, most of them are for image segmentation.
So before I invest my time in implementing these algorithms, I would like to know if anyone has tried using grid based clustering for clustering customer data before or on something that is not image based?
AI: Depends on what your customer data is, but here's an idea:
You can look for a way to map your customer data onto a 2D plot. Then you could save these plots as rasters (images in a lossless format; ML libraries commonly use .tiff). Make sure to strip decorations, scales, labels and anything that is not pure data representation, so that every pixel in the raster represents what has now become a datapoint for you.
If you achieve this, you would be able to apply the desired algorithms on the newly created images.
The advantage of such an approach is that you add (or rather underline pre-existing) topological properties to your data, such as the proximity of what now become your pixels. It should go without saying that such properties should be utilised only if they contextually make sense for your data. And due to the existence of a continuous distance metric on numerical dimensions, the proximity example is better applicable to such numerical dimensions.
H: Statistical machine translation word alignment for FR-ENG and ENG-FR: what is p(e) and p(f)?
I'm currently trying to implement this paper, but am struggling to understand some of the math here. I'm pretty sure I understand how to implement the E-step, but I'm confused about how to compute the M-step. It says just before section 3.1 that $p_1(x, z; \theta_1) = p(e)p(a, f|e; \theta_1)$, and then the same for $p_2$ but with $e$ and $f$ swapped. The second part of this makes sense to me, but what is $p(e)$ or $p(f)$? From my understanding, $e, f$ are sentences in the bi-text. So how would we compute the probability of a sentence?
It says earlier that $p(e)$ and $p(f)$ are arbitrary distributions that don't affect the optimization problem, but then how do we compute $p_1(x, z; \theta_1)$?
Thanks!
AI: You are right that $p(e)$ is the probability of the English sentence. Estimating the probability of a sentence is achieved by a language model.
This kind of machine translation model is known as the noisy channel model. The noisy channel model says that given a french sentence $f$, its best English translation is
$$e^* = \arg\max_{e\in E} p(e)p(f|e)$$
In this equation the $p(e)$ is the language model. Back in the era of IBM models (which are built upon the noisy channel approach), it is usually an n-gram based language model, calculated as (assuming bigram) $$p(e_1e_2...e_n)=p(e_1|<s>)p(e_2|e_1)p(e_3|e_2)...p(</s>|e_n)$$
And $p(f|e)$ is the translation model where you need to use the EM algorithm to solve. Inside the EM algorithm you do not update the language model parameters, so yes, $p(e)$ and $p(f)$ don't affect the optimization problem. |
H: Always getting value one for a binary classifier
I'm using Keras. I have a classification problem; the output should be either 0 or 1. I trained my model and I'm getting 86.59% accuracy, but when I check the predicted output what I'm seeing is all ones. I tried creating a categorical classifier with two nodes and tried the same. The test accuracy is 86.59%, but when I check the output the prediction contains only one node with value one for the entire dataset.
This is the code
from keras.models import Sequential
model = Sequential()
from keras.utils import to_categorical
y_traine = to_categorical(y_train)
y_teste = to_categorical(y_test)
from keras.layers import Dense
x_train = x_train.reshape(1300,64)
model.add(Dense(units=64, activation='relu', input_dim=64))
model.add(Dense(units=2, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
# x_train and y_train are Numpy arrays --just like in the Scikit-Learn API.
model.fit(x_train, y_traine, epochs=50, batch_size=32)
x_test = x_test.reshape(len(x_test),64)
loss_and_metrics = model.evaluate(x_test, y_teste, batch_size=128)
classes = model.predict(x_test, batch_size=128)
print (loss_and_metrics)
print (classes)
output
[0.32096896952051834, 0.875968992248062]
[[0.84422934 0.1557707 ]
[0.8332991 0.16670085]
[0.86778754 0.13221247]
[0.9261704 0.07382962]
[0.85143256 0.14856751]
.
.
What I'm doing wrong here? Why I'm getting training accuracy as 86% if my predictions are wrong?
AI: As with most problems like this, it is always best to see the dataset upfront to gain a full understanding.
That said, if your categorical dependent variable is between 0 and 1, have you ensured that the independent variables in your dataset have also been scaled in this way? From looking at your code, it doesn't look like this is the case.
If your data has not been transformed to a common scale, then the neural network won't necessarily give you accurate results.
In this regard, you might try scaling your x data with MinMaxScaler if you haven't done so already and see what you com up with.
For instance, suppose you have variables x1, x2, and x3.
import numpy as np
import statsmodels.api as sm
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# x1, x2, x3 and y are your existing feature columns and target
x = np.column_stack((x1, x2, x3))
x = sm.add_constant(x, prepend=True)
x_scaled = MinMaxScaler().fit_transform(x)
x_train, x_test, y_train, y_test = train_test_split(x_scaled, y, test_size=0.2)
Essentially, you are scaling the x variables between 0 and 1, so that the x variables now have the same scale as the y variable. It might be an idea to try this if you haven't already and see what you come up with. |
H: Error in using fit() on RandomForest Classifier where X was a pandas.DataFRame object
On using fit() method on sklearn.ensemble.RandomForestClassifier I am getting a value error that says.
ValueError: could not convert string to float: 'male'
The data-set used is the one in Titanic:Machine Learning from Disaster competition on Kaggle.
Here is the link- https://www.kaggle.com/c/titanic
Can someone please help me how to deal with this, why is it occurring and how to prevent it in future.
Note-There are no NaN in my DataFrame for train_X, i.e I have replaced all NaN with df.fillna(df.mean()), also I cross-checked that no NaN values exist by using
train_X.isnull().sum()
where, train_X is the training data for features.
Please Help!!
AI: As an extension to @marco_gorelli's answer, another option apart from one-hot encoding is to use LabelEncoder from sklearn.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['sex_enc'] = le.fit_transform(df['sex'])
df['sex_enc'] = df['sex_enc'].astype('category') |
H: Coefficient of determination is close to 1 but the value of RMSE is large. What does it mean?
I am working with the DecisionTreeRegressor and trying to understand how well the data fits the model. I calculated both RMSE and coefficient of determination. At a certain depth, coefficient of determination has the value equal to 0.9918744073066561 but the value of RMSE is equal to 75.0025. I cannot understand this. The value of RMSE is quite large but the value of the coefficient of determination is close to 1.0. What does it really mean? Is the model/fit good enough?
import math
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error
def rmse(y_true, y_pred):
return math.sqrt(mean_squared_error(y_true, y_pred))
sample_depth = np.linspace(1,40, num = 40, dtype=int)
dt_score_list = []
for index, depth in enumerate(sample_depth):
boosted_regressor = AdaBoostRegressor(DecisionTreeRegressor(max_depth=depth), random_state=1)
boosted_regressor.fit(X_train, y_train)
dt_score_list.append(boosted_regressor.score(X_test, y_test))
print(boosted_regressor.score(X_test, y_test), rmse(y_test, boosted_regressor.predict(X_test)), depth)
AI: Compared to $R^2$, $RMSE$ depends on the variance of the target values. Even if $R^2$ is close to 1, if the standard deviation of the target is high then the value of $RMSE$ will also be high.
$$RMSE=\sigma_{y_{true}}\sqrt{1-R^2}$$
where $\sigma$ is a standard deviation
As can be seen from the formula $RMSE$ and $R^2$ have a strict mathematical relationship that shows that $RMSE$ changes according to standard deviation ($\sigma$) and the higher the standard deviation of the true values the higher the value of $RMSE$.
$RMSE$ shows the deviation of the error in the units of the target values, while $R^2$ shows the share of variance that is explained by the model. You can explain 99% of the variance ($R^2$) while your numbers are in millions of dollars and your error still varies in thousands of dollars ($RMSE$). Whether thousands of dollars is OK for your error or not depends on the case. If you predict profit, plus or minus \$1000 may be OK. If you predict revenue, then plus or minus \$1000 could be the difference between a profitable and a non-profitable company.
Proof of the formula:
According to sklearn the formula for $R^2$ is
$$R^2 = 1 - \frac{\sum_i^n(y_{true} - y_{pred})^2}{\sum_i^n(y_{true} - \overline{y}_{true})^2}$$
RMSE is
\begin{align}
RMSE &= {\sqrt{\frac1n\sum_i^n(y_{true} - y_{pred})^2}}
\\RMSE^2 &=\frac1n\sum_i^n(y_{true} - y_{pred})^2
\\\frac{RMSE^2}{\frac1n\sum_i^n(y_{true} - \overline{y}_{true})^2} &=\frac{\frac1n\sum_i^n(y_{true} - y_{pred})^2}{\frac1n\sum_i^n(y_{true} - \overline{y}_{true})^2}
\\\frac{RMSE^2}{\frac1n\sum_i^n(y_{true} - \overline{y}_{true})^2} &=\frac{\sum_i^n(y_{true} - y_{pred})^2}{\sum_i^n(y_{true} - \overline{y}_{true})^2}
\\1-\frac{RMSE^2}{\frac1n\sum_i^n(y_{true} - \overline{y}_{true})^2} &=1-\frac{\sum_i^n(y_{true} - y_{pred})^2}{\sum_i^n(y_{true} - \overline{y}_{true})^2}
\\1-\frac{RMSE^2}{\frac1n\sum_i^n(y_{true} - \overline{y}_{true})^2} &=R^2\\1-\frac{RMSE^2}{\sigma_{y_{true}}^2} &=R^2
\\RMSE&=\sigma_{y_{true}}\sqrt{1-R^2}
\end{align}
where $\sigma$ is a standard deviation and $\overline{y}$ is an average |
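You can check the relationship numerically with a quick sketch (the numbers are made up):

import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(rmse, np.std(y_true) * np.sqrt(1 - r2))  # both print the same value (~0.612)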
H: Neural network for variable length data classification
How can I create a network which can predict labels of variable lengths data:
Training data:
label1: abcdeaefafere
label1: afdfdofdjfdjdfdofdj
label1: dffdpodfdajfdjdddfddfd
label2: reorefdfpreperpe
label2: rexcxfrerperuetupterer
label2: erfdfdrpoeregjroeptreereter
...
...
test data:
fdldjffdjfjdfd
xcdjeioreweoforpeeedfdfd
...
...
Please note these are sequences belonging to different classes and this is a classification problem. I am not trying to predict future data for these sequences.
Thanks for your insight.
AI: Each input sequence should be padded to the same length. The most common method is to find the longest sequence and then add zeros to all shorter sequences. Most Deep Learning frameworks will have a built-in function to do this.
Consistently sized-input data allows common neural networks models (e.g., Convolutional Neural Network and Recurrent Neural Network) to be fit in a straight-forward manner. |
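For example, with Keras you could encode each character as an integer and then pad (a sketch; the integer encoding of your strings is up to you):

from keras.preprocessing.sequence import pad_sequences

# each sequence is already encoded as integers, e.g. a=1, b=2, c=3, ...
sequences = [[1, 2, 3, 4, 1, 5],
             [1, 6, 4, 6],
             [4, 6, 6, 16, 15, 4, 6, 4, 10]]
padded = pad_sequences(sequences, padding='post')  # zero-pads every sequence to the longest length
print(padded.shape)  # (3, 9)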
H: Batch data before feed into CNN network
I am working on a project to classify CT scan images using the CNN model, the image size is huge and I want to feed it into the network using the idea of batches, tried doing that with this pieces of code:
# train_data size = 5460
num_epochs = 14
batch_size = 390
batch = 0
print("Starting training...")
for epoch in range(num_epochs):
train_batch = train_data[batch:batch_size]
batch += batch_size
batch_size += batch_size
ep_loss = 0
for data in train_batch:
X = data[0]
Y = data[1]
_, c = sess.run([optimizer, cost], feed_dict={x_img: X, y_label: Y})
my questions are:
1- Is this a right way to do batching? or there is a better way?
2- with the above code I am using 'AdamOptimizer' is it a good optimization technique for the idea of batching or I should use another one?
AI: It is good practice to use batches to train neural networks. As Yann LeCun said:
Training with large minibatches is bad for your health. More
importantly, it's bad for your test error. Friends don't let friends
use minibatches larger than 32.
That said, it will not help you deal with large images. This is what convolutions are for.
If you are using Keras, then the batch implementation is entirely handled for you. If you use Tensorflow, then you can use the tf.data API or do your own implementation. The code you provided seems to work for your particular training size, but you may want to adapt it to any training size (namely using num_iterations = training_size // batch_size) and make sure that, when the batch size does not divide the number of examples evenly, the last remaining examples are still included before ending the epoch...
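A hedged sketch of such an epoch/batch loop, using plain NumPy indexing and the variable names from your snippet:

import numpy as np

batch_size = 64
num_examples = len(train_data)
num_iterations = num_examples // batch_size

for epoch in range(num_epochs):
    indices = np.random.permutation(num_examples)  # reshuffle every epoch
    for i in range(num_iterations):
        batch_idx = indices[i * batch_size:(i + 1) * batch_size]
        batch = [train_data[j] for j in batch_idx]
        # ... feed `batch` to sess.run as before
    # any leftover examples (num_examples % batch_size) can be fed as one final, smaller batch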
I have had good results with the Adam Optimizer, although you might want to tweak the hyperparameters to get better results. |
H: Unbalanced training data for different classes
What precautions do I need to take while trying to develop a CNN for classification of images if there is much more training data for one label. For example:
label1 : 1000 images
label2 : 100 images
label3 : 100 images
label4 : 100 images
Numbers will become larger later but proportion is likely to stay the same.
Thanks for your insight.
AI: The problem you face is commonly called the class imbalance and has been the subject of quite a bit of research. Here's a literature review, if you're interested: He, H., & Garcia, E. A. (2008). Learning from imbalanced data
In particular, you might encounter two forms of imbalance:
Absolute imbalance/rarity occurs when, while you have plenty of data from some classes, you have only a few examples of some other classes (or subconcept of a class). In this case, the issue is that there may not be enough data for the learning algorithm to learn the minority class. In the example you give, with 100 examples of the minority class, depending on the nature of your data, you might have this problem. If you expect to have more data in the future, however, absolute rarity should eventually no longer be an issue.
Relative imbalance, on the other hand, does not go away with more data. You have a relative imbalance when the prior probability of certain classes is much larger than that of some other class or classes. For example, you will always have 10 times as many examples of class 1 than examples of class 2, because class 1 occurs 10 times as often.
Most learning algorithms for classification optimize for accuracy, or something similar like RMSE. This means that, when solving the real classification problem is hard enough, and the data is strongly imbalanced towards one class, the model may resort to predicting the majority class whenever there's a doubt. Recall for the majority class may be great, but not so much for the minority class.
This becomes an issue in many domains where detection of the minority class is particularly important. For example, in medical diagnoses, we may be willing to sacrifice overall accuracy (because of more false positives) in order to have a better true positive rate.
In short, it depends on your domain. Are you OK with optimizing for overall accuracy, or is it more important to have comparable performances across the classes? If you choose the latter, then there are a few things you may try:
Use cost-sensitive learning: Certain learning algorithms and implementations allow you to assign a cost to each class, essentially describing how bad it is if an example of that class is misclassified. If I recall correctly, this is usually considered the best approach when you have a good idea of these different costs.
Rebalance the classes: You can oversample the minority class (which carries a risk of overfitting), undersample the majority class (which is dangerous if you don't have a lot of data), use a mixture of both or perhaps something a bit more advanced like synthetic sampling (attempt to generate new examples of the minority class using something like SMOTE)
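For example, if you train with Keras, the first option (cost-sensitive learning) can be approximated via the class_weight argument of fit (a sketch; the weights are illustrative):

# label1 has 10x more images than the others, so give the rarer classes more weight
class_weight = {0: 1.0, 1: 10.0, 2: 10.0, 3: 10.0}
model.fit(X_train, y_train, epochs=20, batch_size=32, class_weight=class_weight)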
Overall, you should also be careful to pick the right evaluation metric. Evaluating your model using accuracy may lead you to believe that your model is performing very well when in fact it is classifying everything into the majority class. There are many metrics you may use, each of which have their pros and cons. The area under the ROC curve (AUC) is a common metric which gives you a general idea of your model's performance averaged over different class misclassification costs. If you can plot the ROC curve for multiple models and notice that one curve dominates the others across the entire width of the plot, then that is the clearest sign that you have a winner. There's a whole chapter on the subject, if you're interested, in the book Imbalanced Learning: Foundations, Algorithms, and Applications. |
H: Predict compatibility of 2 people as boolean classification problem
How can I predict the compatibility of 2 people as a boolean classification problem?
I want to know if below is an appropriate approach to modelling compatibility, or if I should be using "market basket analysis" or some other approach instead?
I'm less interested in the specific result below, and more interested in if this is a realistic way to frame this data science problem.
Background:
Assume people only have 3 attributes: compassion, extroversion and humor. These are also boolean and can be modelled as 1s and 0s in a list ([compassion, extroversion, humor]).
So someone with all 3 characteristics would be [1,1,1] and someone with only humor would be [0,0,1].
We have pairs of people who match and do not match, specified by 1 or 0, where 1=match and 0=no_match.
How to solve this?
I don't consider this a simple distance problem (ie: euclidean distance) because its very possible that generally an introvert and extrovert get along, but 2 extroverts don't.
Data:
person1 person2 match?
-------- -------- ------
([1,1,0], [1,0,1]) => 1
([0,0,0], [1,1,1]) => 1
([1,0,1], [1,0,0]) => 0
([1,1,1], [0,1,0]) => 0
([0,0,0], [0,1,1]) => 1
([1,1,0], [1,1,1]) => 0
([1,0,0], [1,0,1]) => 0
([0,0,1], [0,0,0]) => 0
([0,0,0], [0,0,1]) => 0
([0,0,0], [0,1,1]) => 1
([0,1,0], [0,1,1]) => 0
([0,1,0], [0,1,1]) => 0
([0,1,0], [1,0,0]) => 1
What I've tried:
My first thought was to concatenate both individuals' data for each example. Then use that to fit the model.
Data structured as python code:
X_train = [
[1,1,0,1,0,1],
[0,0,0,1,1,1],
[1,0,1,1,0,0],
[1,1,1,0,1,0],
[0,0,0,0,1,1],
[1,1,0,1,1,1],
[1,0,0,1,0,1],
[0,0,1,0,0,0],
[0,0,0,0,0,1],
[0,0,0,0,1,1],
[0,1,0,0,1,1],
[0,1,0,0,1,1],
[0,1,0,1,0,0],
]
y_train = [1,1,0,0,1,0,0,0,0,1,0,0,1]
X_test = [
[0,1,1,0,0,0],
[1,1,0,1,0,1],
[1,0,0,1,0,0],
[0,0,0,1,0,0],
[0,1,0,0,0,0],
[0,0,0,0,0,0],
]
y_test = [1,1,0,0,1,0]
Computing the match:
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print(classification_report(y_test,y_pred))
precision recall f1-score support
0 0.60 1.00 0.75 3
1 1.00 0.33 0.50 3
avg / total 0.80 0.67 0.62 6
I'm less concerned about the results here and more interested in if this is a proper way to frame this problem.
Can you offer a suggestion?
AI: Your approach to modeling compatibility seems sound, and definitely makes more sense than market basket analysis. From what I could gather, your goal is to predict compatibility, and you have access to a certain signal that you can use as your compatibility label for the training set (for example, you know for a fact that persons X and Y are compatible but Y and Z are not). In these circumstances, supervised learning definitely sounds appropriate.
Based on the example you gave, there are a few things about this problem that you may consider (if you haven't already):
Should the features (e.g. compassion, extroversion, humor) be binary, or on a continuous spectrum? This depends on your data, of course, but perhaps you may get a better measure of compatibility if you can get the finer grain of detail that continuous features provide.
Same for the class: is it exclusively binary (classification) or could this be treated as a regression problem (predict the degree of compatibility)? Even if treating it as classification, certain models output class posterior probabilities, which could be interpreted as degrees of compatibility.
Compatibility is most likely a symmetrical relationship: if a is compatible with b, then b is compatible with a. If your dataset is formed of the two individuals' data concatenated, should you create two examples per pair (one in each order)? Is there another way to enforce symmetry in your model? Also, can the signal you use for your compatibility label really be interpreted in this symmetrical sense, or does it only tell you that, for instance person A likes person B, but not necessarily the other way around.
How will you evaluate your model? If compatibility is more continuous than discrete, for example, then perhaps recall, precision and accuracy aren't the best metrics, and you may want to use something like RMSE? |
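On the symmetry point, a minimal sketch of enforcing it through data augmentation, using the concatenated-list format from your example:

X_train_sym, y_train_sym = [], []
for features, label in zip(X_train, y_train):
    person1, person2 = features[:3], features[3:]
    X_train_sym.append(person1 + person2)  # original order
    X_train_sym.append(person2 + person1)  # swapped order, same label
    y_train_sym.append(label)
    y_train_sym.append(label)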
H: Jupyter notebook running very slow
I am running a RandomForestClassifier on my data but my jupyter notebook is very slow. It took almost 2 hours to run the below code:
rf = RandomForestClassifier()
rf_random = RandomizedSearchCV(estimator = rf, param_distributions =
random_grid, n_iter = 100, cv = 3, verbose=2, random_state=42, n_jobs = -1)
rf_random.fit(X_train, y_train)
My dataset has 30K rows and 300 features.
I am not sure if something is wrong with my code or jupyter notebook configuration.
I am using a remote desktop windows machine.
I would really appreciate any help!
Thanks in adnvace
AI: In this case you are running a RandomizedSearchCV which is running 100 iterations.
If you consider the fact that for every run of your 30K rows worth of data with your 300 features (which is a fair amount), you would be looking at an average run time of ~1.2 minutes per run.
You could however speed this up if you were running this via GPUs instead of CPUs, as you could do more rapid calculations.
So to answer your question the issue is not with your machine or your Jupyter Notebook.
Rather it is with how many iterations you have had with your RF Randomized Search Algorithm. If you reduce the iterations, you will also see a reduction in the run time. |
H: What is the relationship between MDP and RL?
What is the relationship between Markov Decision Processes and Reinforcement Learning?
Could we say RL and DP are two types of MDP?
AI: What is the relationship between Markov Decision Processes and Reinforcement Learning?
In Reinforcement Learning (RL), the problem to resolve is described as a Markov Decision Process (MDP). Theoretical results in RL rely on the MDP description being a correct match to the problem. If your problem is well described as a MDP, then RL may be a good framework to use to find solutions. That does not mean you need to fully describe the MDP (all the transition probabilities), just that you expect an MDP model could be made or discovered.
Conversely, if you cannot map your problem onto a MDP, then the theory behind RL makes no guarantees of any useful result.
One key factor that affects how well RL will work is that the states should have the Markov property - that the value of the current state is enough knowledge to fix immediate transition probabilities and immediate rewards following an action choice. Again you don't need to know in advance what those are, just that this relationship is expected to be reliable and stable. If it is not reliable, you may have a POMDP. If it is not stable, you may have a non-stationary problem. In either case, if the difference from a more strictly defined MDP is small enough, you may still get away with using RL techniques or need to adapt them slightly.
Could we say RL and DP are two types of MDP?
I'm assuming by "DP" you mean Dynamic Programming, with two variants seen in Reinforcement Learning: Policy Iteration and Value Iteration.
In which case, the answer to your question is "No". I would say the following relationships are correct:
DP is one type of RL. More specifically, it is a value-based, model-based, bootstrapping and off-policy algorithm. All of those traits can vary.
Probably the "opposite" of DP is REINFORCE which is policy-gradient, model-free, does not bootstrap, and is on-policy. Both DP and REINFORCE methods are considered to be Reinforcement Learning methods.
DP requires that you fully describe the MDP, with known transition probabilities and reward distributions, which the DP algorithm uses. That's what makes it model-based.
The general relationship between RL and MDP is that RL is a framework for solving problems that can be expressed as MDPs. |
H: Running multiple random forest and combining them
I am trying to build a random forest model in R (RStudio). My training dataset has around 2 million rows and 38 variables. When I tested 5000 rows from this dataset I was able to build the random forest but when I run on the whole dataset I get the following error:
Error in randomForest.default(m, y, ...) :
long vectors (argument 24) are not supported in .C
Can anyone please suggest, apart from removing the number of rows, how can I fix this? Can I run multiple random forests and then combine them into one? If yes, can someone please recommend how can I try this?
Many thanks in advance.
AI: I'm very surprised you're running into this error with only 2mil rows and 38 variables. I would encourage you to have a go doing this in python using SKLearn and see if you run into the same issue. More generally though, if you have too much data (imagine you had 200 million rows), the right thing to do would not be to build multiple forests, but to build each tree with a smaller fraction of the data.
I don't have a suggestion for where to find such an implementation (there might be some way of combining sklearn and dask, I've seen such things for XGBoost), but the general principle would be that you wouldn't hold all of your data in memory, you'd read a subset of it from disk, train a tree, read a new subset, etc. |
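If you do give Python a try, a hedged sketch would be the following (max_samples, which lets each tree see only a fraction of the rows, requires a recent scikit-learn version; otherwise you can subsample the rows yourself):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200,
                            max_samples=0.1,  # each tree is fit on ~10% of the 2M rows
                            n_jobs=-1)        # use all available cores
rf.fit(X_train, y_train)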
H: Target data values are not evenly distributed
Data nature:
I have 10 numeric features and another 10 categorical ones with many distinct values; after one-hot encoding I end up with a matrix of 600 columns. My problem is with accuracy, which is 0.7, knowing that other peers got more than 0.9.
Problem:
Target data is binary and is not evenly distributed at all. Trying blindly after pre-processing, sklearn.linear_model.LogisticRegression and sklearn.svm scored 0.7 and 0.75 using roc_auc_score.
Back to basics, I run this
train['cible'].value_counts() / train['cible'].count()
and got
1 0.970791
0 0.029209
Name: cible, dtype: float64
Quite interesting I think, but how can I improve accuracy. Any hints ?
Note: I will edit and add False Positive Rate and True Positive Rate as I lost output, after scaling, missing data imputation and retraining the model which takes couple of hours.
AI: From scikitlearn LogisticRegression docs:
class_weight : dict or ‘balanced’, default: None
Weights associated with classes in the form {class_label: weight}. If
not given, all classes are supposed to have weight one. The “balanced”
mode uses the values of y to automatically adjust weights inversely
proportional to class frequencies in the input data as n_samples /
(n_classes * np.bincount(y)). Note that these weights will be
multiplied with sample_weight (passed through the fit method) if
sample_weight is specified. New in version 0.17:
class_weight=’balanced’
So try to add class_weight='balanced' in your call to LogisticRegression().
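A minimal sketch of that change:

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(class_weight='balanced')
clf.fit(X_train, y_train)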
Or maybe if this doesn't work, try to use as trainSet an evenly split dataset: where the number of samples of class 1 is equal to class 0. |
H: Dropout in Deep Neural Networks
I was reading the paper on Dropout. What I find difficult to understand is that, in the training phase, a unit is present with a probability $p$ and not present with a probability $1-p$, while in the test phase all units are present, but we multiply each of them by the probability.
Now, is it like this: say we have 4 input units originally named a, b, c, d. In the training stage, after applying dropout with a dropout rate of $0.5$, we are left with units a and c. So, as in the test stage all the units are present, do we multiply each of the units by $0.5$? Also, is $p$ defined for each of the units in the network, or for the entire neural network?
Also, in doing so, how is the result the same for the training and test stages?
AI: I guess you have not fully grasped the concept of dropout yet. First, the reason we apply it is to add some noise to the architecture so that the network does not become dependent on any particular node. It was observed that while training a network, once it starts overfitting, the weights of some neurons grow and cause the network to depend on them. With dropout we are no longer dependent on any single node, because any node may be dropped while training.
Now, to answer your questions. First, bear in mind that the probability is applied to each node of the layer independently. A rate of $0.5$ therefore does not mean you will end up with exactly those two units a and c; it means each node has a 50% chance of being dropped on any given training pass. Dropout is applied per layer, and it is customary to use it on fully connected layers; you set the hyper-parameter for that layer. Note that the paper parameterises dropout by the probability $p$ of keeping a node, while many libraries ask for the drop rate $1-p$; at $0.5$ the two coincide. While testing, you don't drop any node, and you don't multiply the neurons themselves by the probability during training; the scaling by $p$ happens at test time so that the expected output is preserved.
Okey-doke! I have updated the answer. As you can read in the paper:
At test time, it is not feasible to explicitly average the predictions from exponentially
many thinned models. However, a very simple approximate averaging method works well in
practice. The idea is to use a single neural net at test time without dropout. The weights
of this network are scaled-down versions of the trained weights. If a unit is retained with
probability p during training, the outgoing weights of that unit are multiplied by p at test
time as shown in Figure 2. This ensures that for any hidden unit the expected output (under
the distribution used to drop units at training time) is the same as the actual output at
test time. By doing this scaling, 2n networks with shared weights can be combined into
a single neural network to be used at test time. We found that training a network with
dropout and using this approximate averaging method at test time leads to significantly
lower generalization error on a wide variety of classification problems compared to training
with other regularization methods.
I guess the easiest way to understand it is to watch this video. As you can see, there are different implementations, but the reason for the multiplication is that, for any hidden unit, the expected output (under the distribution used to drop units at training time) should be the same as the actual output at test time. To be concise, it is done so that the distribution of the layer's outputs does not change between training and testing.
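As a small numpy sketch of that expected-output argument (the activation values are arbitrary and p here is the keep probability):
import numpy as np

rng = np.random.RandomState(0)
p = 0.5                               # probability of keeping a unit
activations = rng.rand(4)             # outputs of 4 hidden units (a, b, c, d)

# Training: sample many binary keep/drop masks, one entry per unit.
masks = rng.binomial(1, p, size=(100000, 4))
expected_train_output = (masks * activations).mean(axis=0)

# Test: keep every unit but scale its output by p.
test_output = p * activations

print(expected_train_output)          # approximately equal to test_output
print(test_output)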
H: Tensorflow: how to look up and average a different amount of embedding vectors per training instance, with multiple training instances per minibatch?
In a recommender system setting: let's say I want to learn to predict future item purchases based on user past purchases using an approach inspired by Youtube's recommender system:
Concretely, let's say I have a trainable content-based network that receives as input an item and, based on its content, returns an embedding for such item. Now, let's say each user has purchased a variable number of items in the past (some users might have purchased 5 items, others maybe 1, others maybe 10, some outliers maybe 100, etc.). I want to generate a user vector, a candidate item vector and then a user-item match score as follows:
I map each item purchased by that user to its embedded item vector using the trainable content-based network
I calculate the average of all those embedded item vectors (as illustrated in the picture)
I apply a couple of ReLu layers on top of this average, thus obtaining a user vector
I map a candidate item (to be recommended) to its embedded item vector using the same trainable content-based network of step 1 (the weights of this network are always shared, like a Siamese network so to speak)
Finally, I compute the dot product between the user vector and the candidate item vector, apply a cross entropy loss during training, etc.
So my question is about the technical details of how to implement the embedding lookup and average of a variable number of embedded item vectors per user using Tensorflow, considering that during training each mini-batch may contain many training instances, where each training instance possibly consists of a different user with a different amount of purchased items in the past. Although the context is different, my question is very similar to this one, but unfortunately nobody has answered that question up to now.
AI: Use tf.gather().
Single instance case
In the example below, we selected a variable number of embedding vectors from the matrix embedding. The selection indexing vector user can be of variable length. Then we calculate the average embedding.
import numpy as np
import tensorflow as tf

with tf.Graph().as_default():
embedding = tf.placeholder(shape=[10,3], dtype=tf.float32)
user = tf.placeholder(shape=None, dtype=tf.int32)
selected = tf.gather(embedding, user)
average = tf.reduce_mean(selected, axis=0)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
embedding_ = np.random.randn(10,3)
user_ = [1,3,5]
print(sess.run(average, feed_dict={embedding:embedding_, user:user_}))
print(np.mean(embedding_[user_], axis=0))
Multiple instances in a mini-batch
You may manually specify the first vector in embedding to be a zero vector, and pad the selection vectors with 0s so that every row in the batch has the same length. For example
with tf.Graph().as_default():
embedding = tf.placeholder(shape=[10,3], dtype=tf.float32)
user = tf.placeholder(shape=[None, None], dtype=tf.int32)
selected = tf.gather(embedding, user)
non_zero_count = tf.cast(tf.count_nonzero(user, axis=1), tf.float32)
embedding_sum = tf.reduce_sum(selected, axis=1)
average = embedding_sum / tf.expand_dims(non_zero_count, axis=1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
embedding_ = np.concatenate([np.zeros((1,3)),np.random.randn(9,3)], axis=0)
user_ = [[3,5,7,0], [1,2,0,0]]
print(sess.run(average, feed_dict={embedding:embedding_, user:user_}))
print(np.sum([embedding_[i] for i in user_], axis=1) / np.atleast_2d(np.count_nonzero(user_, axis=1)).T)
You can use tf.gather() like this even if the embedding is a trainable variable instead of a placeholder. |
H: How can I sort a data frame by groups?
I am using R. I have a data frame with "year , price, mileage" columns. I want to group the df by year first and then sort each group by mileage. How can I do this?
AI: The 'dplyr' package in R is ideal for these types of data manipulation tasks. The arrange function, for example, can sort a dataframe by one column and then by another within it, which is exactly "group by year first, then sort each group by mileage". For example:
arrange(df, year, desc(mileage))
(drop desc() if you want mileage in ascending order within each year).
See arrange for documentation on the arrange function, and dplyr for the dplyr package description. |
H: Too little or too much maxpooling?
I am creating a CNN in Keras where model.summary() shows:
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 62, 62, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 62, 62, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 31, 31, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 29, 29, 64) 18496
_________________________________________________________________
activation_2 (Activation) (None, 29, 29, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 14, 14, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 12, 12, 64) 36928
_________________________________________________________________
activation_3 (Activation) (None, 12, 12, 64) 0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 6, 6, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 4, 4, 64) 36928
_________________________________________________________________
activation_4 (Activation) (None, 4, 4, 64) 0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 2, 2, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 256) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
activation_5 (Activation) (None, 128) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 128) 0
_________________________________________________________________
dense_2 (Dense) (None, 17) 2193
_________________________________________________________________
activation_6 (Activation) (None, 17) 0
=================================================================
Total params: 128,337
Trainable params: 128,337
Non-trainable params: 0
The inputs are images of size $64\times64$. How can I find out if there are too many, too few, or just the right number of max-pooling layers? This page explains it, but I am not able to work it out from Keras's output here.
AI: Using max-pooling everywhere is not a good idea on its own. The reason is that each standard $2\times2$ max-pooling layer throws away $75$% of the values that reach it. If your input is small, like the $64\times64$ images you refer to, it is better not to max-pool that much. Although max-pooling adds relative spatial invariance to the objects in the input and is useful for classification tasks, the main reason it is employed nowadays is to reduce the number of parameters to train. For instance, for images which belong to $R^{224\times224}$, it is wise to use it in some layers because it lessens the number of training parameters significantly. For smaller images, the input signal on its own has fewer entries (features), and by employing max-pooling repeatedly you are discarding information which may be necessary and prominent for generalisation. You are actually shrugging it off!
H: A single column has many values per row, separated by a comma. How to create an individual column for each of these?
As you can see below, I have a column called code with multiple values per row, separated by a comma. How can I create a column for each of these codes and make them all binary values?
i.e. code6254, code5854 etc...., where all these columns will be of binary value 0 or 1 depending on whether that row has the code or not? Thanks in advance :)
AI: It would be better if you could provide some code which allows us to reproduce at least part of your DataFrame, such as this:
import pandas as pd
df = pd.DataFrame({'code': ['6254', '5854, 5676, 7265, 6051', '5815']})
At the start, your dataframe looks like this:
code
0 6254
1 5854, 5676, 7265, 6051
2 5815
A possible solution would be to do this:
df['code'].str.get_dummies(sep=', ').add_prefix('code')
which gives you this:
code5676 code5815 code5854 code6051 code6254 code7265
0 0 0 0 0 1 0
1 1 0 1 1 0 1
2 0 1 0 0 0 0 |
H: Creating similarity metric with Doc2Vec and additional features
I have a dataset which contains many features. Each record is company that has many features.
For example...
Company A:
Keywords - data, big data, tableau, dashboards, etc.
Industry - Information Technology
Sub-Industry - Data Visualization
Total Funding - $150,000,000
I want to create a similarity metric between multiple companies, incorporating both doc2vec embeddings trained on the keyword lists as well as the additional features listed. I had a hard time searching/finding papers that did something like this. Any ideas?
AI: You could think of your similarity measure as a search problem if you consider one record a query, and the "near" records as search results.
I've had some good results following this paper: https://arxiv.org/pdf/1602.01137.pdf
As I understand it, the document vectors used in the paper were only good for improving the relevance of search results for results that were already decent (the top N results).
That to me suggests you might try developing a similarity score that works with your other attributes first, and then do something like a weighted average, where the significance of the doc2vec score decays quickly based on the first metric. |
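As a rough sketch of that weighted-combination idea in Python (the field names, the equality-based similarities and the 0.5 text weight are placeholder assumptions, not something taken from the paper):
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def company_similarity(a, b, w_text=0.5):
    # a and b are dicts holding a doc2vec vector of the keywords plus the other fields.
    text_sim = cosine(a['doc2vec'], b['doc2vec'])
    industry_sim = float(a['industry'] == b['industry'])
    sub_sim = float(a['sub_industry'] == b['sub_industry'])
    # Relative difference in funding, kept in (0, 1]; the +1 avoids division by zero.
    funding_sim = 1.0 - abs(a['funding'] - b['funding']) / (max(a['funding'], b['funding']) + 1.0)
    other_sim = np.mean([industry_sim, sub_sim, funding_sim])
    return w_text * text_sim + (1 - w_text) * other_sim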
H: Why this sequential model is not starting?
I am using following code:
input_shape = (75, 75, 3)
x = Input(input_shape)
model = BatchNormalization(axis = 3)(x)
Above code works all right. However, following code does not work:
from keras.models import Sequential
input_shape = (64,64,3)
model = Sequential()
model = model.add(InputLayer(input_shape=input_shape))
model = model.add(BatchNormalization(axis = 3))
But at last line, I get error:
AttributeError: 'NoneType' object has no attribute 'add'
If I change to:
model = model.add(Input(input_shape))
I get following error:
TypeError: The added layer must be an instance of class Layer.
Found: Tensor("input_1:0", shape=(?, 64, 64, 3), dtype=float32)
Where is the problem and how can it be solved?
(PS: If you find this question to be interesting/important, please upvote it.)
AI: from keras.models import Sequential
from keras.layers import InputLayer, BatchNormalization

input_shape = (64, 64, 3)
model = Sequential()
model.add(InputLayer(input_shape=input_shape))
model.add(BatchNormalization(axis=3))
This should work. The first error was because model was being reassigned: model.add() returns None, so after model = model.add(...) the variable no longer refers to the model. The second error was because Input() is a function from keras.layers that returns a tensor (it is meant for the functional API), not a Layer instance, whereas InputLayer is a layer class that can be added to a Sequential model.
H: What is difference between intersection over union (IoU) and intersection over bounding box (IoBB)?
Can someone give a detailed explanation IoU and IoBB along with that the differences between them.
AI: The Intersection over Bounding Box is the Intersection over Union (IoU) for object detection tasks, where you have a bounding box.
There are many tasks (e.g. image segmentation) where you have an IoU (the predicted segment vs the actual segment), but there are no bounding boxes. |
H: normalizing data and avoiding dividing by zero
I have data that I'm compressing with AutoEncoders (3-layer neural network) and I would like to normalize my data first. I would like to try to use the coded latent vector and feed it into an anomaly detection algorithm and see what happens.
I would like to normalize the data for the autoencoder so my values are either between 0 and 1 or between -1 and 1, because my output activation function will either be a sigmoid or tanh. This way my algorithm can train and the input will be in the same range as the output values of the NN.
However, when I normalized with
$(x_i - x_{mean}) / (x_{max} - x_{min})$
I ended up dividing by 0 in several features of the data, which gave NaN. Is it possible to normalize my data so it is between -1 and 1, or 0 and 1, while avoiding dividing by 0?
AI: While you could do this manually, scikit-learn has a handy little class called MinMaxScaler, which will automatically apply max-min normalization to scale data between 0 and 1.
Assume we have an array of 200 values for variables s and t:
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
mu, sigma = 20, 10 # mean and standard deviation
s = np.random.normal(mu, sigma, 200)
t = np.random.normal(mu, sigma, 200)
Reshape your variables if necessary:
s=np.reshape(s,(-1,1))
t=np.reshape(t,(-1,1))
Now, you can see that we are forming two new variables, snew and tnew, which we are scaling using MinMaxScaler.
scaler = MinMaxScaler()
snew = scaler.fit_transform(s)
tnew = scaler.fit_transform(t)
Here is a sample of our new variables:
>>> snew
array([[0.24896606],
[0.63121206],
[0.60448469],
.......
[0.49044733],
[0.28131596],
[0.32909155]
>>> tnew
array([[0.91224005],
[0.74540598],
[0.3938718 ],
.......
[0.75749275],
[0.80709325],
[0.19440844] |
H: What is the difference between the final episodes of training and test in DQN?
What is the difference between running the final episode in training mode and running in test mode in DQN?
Is there any difference beyond the fact that, after training and tuning the hyper-parameters, we test for one episode without any exploration?
Does this mean that test mode is similar to training mode in episode n+1 but without exploring (while we train for n episodes)? Is that correct?
Why do some DQN test scripts test for multiple episodes?
AI: What is the difference between running the final episode in training mode and running in test mode in DQN?
As Q learning is an off-policy method, and DQN is based on Q learning, then the agent effectively has two different policies:
A behaviour policy that explores actions regardless of whether data so far suggests that they are optimal. The simplest and perhaps most commonly implemented behaviour policy is $\epsilon$-greedy with respect to action values $\hat{q}(s,a,\theta)$.
A target policy that is being learned, which is the best guess at an optimal policy so far. In Q learning that policy is fully greedy with respect to action values $\hat{q}(s,a,\theta)$.
When assessing how well the agent has learned the environment after training, you are interested in the target policy. So with Q learning you need to run the environment with the agent choosing actions greedily. That could be achieved by setting $\epsilon = 0$, or with a different action selection routine which doesn't even check $\epsilon$ (which to use is an implementation detail).
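As a minimal sketch, an epsilon-greedy action selector over the estimated action values makes the relationship explicit: during training you might use e.g. epsilon = 0.1, and for evaluation you pass epsilon = 0 to recover the purely greedy target policy.
import numpy as np

def select_action(q_values, epsilon):
    # Behaviour policy: explore with probability epsilon, otherwise act greedily.
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))   # random exploratory action
    return int(np.argmax(q_values))               # greedy action (target policy)

# Training: select_action(q, 0.1)    Evaluation: select_action(q, 0.0)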
Is there any difference beyond the fact that, after training and tuning the hyper-parameters, we test for one episode without any exploration?
Essentially yes to second part - without exploration. But . . .
Why do some DQN test scripts test for multiple episodes?
That is because one episode is often not enough to get reliable statistics.
If the environment is completely deterministic, including having only one start state, then one episode would be sufficient to assess an agent.
However, if the environment is stochastic, or you are interested in multiple possible start states, then multiple episodes are necessary in order to have low mean squared error bounds on the average total return, or any other metrics you might be interested in. Otherwise it would be hard to compare results between different hyperparameters.
How many episodes are enough? There is no fixed number, it depends on how much variance there is in the environment and how accurately you want to know the values. In some cases - e.g. OpenAI gym, there are suggested values. For instance, LunarLander suggests a target score over 100 episodes, so they are suggesting measurements are accurate enough over 100 episodes. However, the advice only applies to that problem. In general, you can measure the variance in the score, like any other statistical measure, and calculate error bounds on your metric. |
H: Why this single layer network doesn't work
I am trying the following code (modified from https://www.kdnuggets.com/2017/10/seven-steps-deep-learning-keras.html ):
def single_layer(input_shape, nb_classes):
print("input shape:", input_shape)
print("print nb_classes:", nb_classes)
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(nb_classes, input_shape=input_shape, activation='softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy')
model.summary()
return model
However, when I try to fit this model with an X_train of dimensions 64,64,3 and 17 classes, the following is the output with the error:
input shape: (64, 64, 3)
print nb_classes: 17
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64, 64, 17) 68
=================================================================
Total params: 68
Trainable params: 68
Non-trainable params: 0
_________________________________________________________________
Traceback (most recent call last):
....
....
File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training.py", line 950, in fit
batch_size=batch_size)
File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training.py", line 787, in _standardize_user_data
exception_prefix='target')
File "/home/abcd/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 127, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (10396, 17)
Why is this code not working, and how should it be modified to make it work?
AI: I can't find the code in the link you posted to get more background information on what kind of data source it is that you are using.
However, a data source that is $64\times64\times3$ is described as being high dimensional. You have a total of 12,288 input features which need to be mapped down to 17 different classes. This is actually a very complex task and will not be successful using a single layer network. Such a simple network will not have enough parameters to capture the non-linearities between features in your input space.
That being said, if you insist on using a single layer network with data of size $64\times64\times3$ and 17 output classes, the code is as follows.
Let's first create some artificial data of the same dimension as your data
import numpy as np
n = 1000
x_train = np.zeros((n,64,64,3))
y_train = np.zeros((n,))
for i in range(n):
x_train[i,:,:,:] = np.random.random((64,64,3))
y_train[i] = np.random.randint(0,17)
x_train = x_train.reshape(n,64,64,3,)
n = 100
x_test = np.zeros((n,64,64,3))
y_test = np.zeros((n,))
for i in range(n):
x_test[i,:,:,:] = np.random.random((64,64,3))
y_test[i] = np.random.randint(0,17)
x_test = x_test.reshape(n,64,64,3,)
print('Training data: ', x_train.shape)
print('Training labels: ', y_train.shape)
print('Testing data: ', x_test.shape)
print('Testing labels: ', y_test.shape)
(1000, 64, 64, 3)
(1000,)
(100, 64, 64, 3)
(100,)
For a classification task we should convert our outputs to categorical vectors. Where we use one-hot encoding to identify the correct class.
import keras
# The known number of output classes.
num_classes = 17
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
We then build the model.
from __future__ import print_function
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.models import model_from_json
from keras import backend as K
input_shape = (64,64,3,)
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(17, activation='softmax'))
model.compile(loss=keras.losses.mean_squared_error,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
You can see the summary of the model by using
model.summary()
Then we can train this model using
batch_size = 128
epochs = 10
model.fit(x_train, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test_binary))
This code works. However, the data is completely random thus the model cannot learn anything. However, even if your data is easily distinguishable, as I said above, I do not expect this model to be complex enough to distinguish such a large input space from 17 different possible classes.
If you post the data source you are using we can design a model to get a good result. |
H: What do I initialise each model in cross validation with in a multi-layer Perceptron?
So, as far as my understanding goes, cross-validation is used to determine the best model.
I understand that once we determine the best model, we then train it on the entire dataset. I'm supposed to be using cross-validation for the multi-layer perceptron that can classify the MNIST dataset. I don't seem to get how cross-validation fits in training the model.
Let's say I'm using 5-fold cross-validation, which means I will have to make 5 different models but, how will the training of these individual model proceed? In particular, I have the following questions:
Will the training of these individual model be as usual(backward propagation)?
What do I initialise each model with? (Random Weights?)
After completing the cross-validation, I have the best model(say B) with me now, what does it mean to train this model on the entire dataset?
(Does it mean, I initialise the weights of the new model being trained on the whole dataset, with those of B).
AI: I think you are trying to do cross validation with hyperparameter tuning, so here is how it's done with k-fold CV (from this answer):
You can split your data into 2 datasets: training and test. k-folds cross validation takes a model (and specified hyperparameters) and partitions the training dataset into k equally sized subsets. Then, it does the following k times:
Trains the model on k-1 of the subsets
Evaluates the model accuracy on the subset that wasn’t trained on.
It then reports the average error. To do hyperparameter tuning, repeat the steps above, each time with a different hyperparameter combination. Then, choose the set of parameters for which k-folds reports the lowest error. However, be careful not to excessively minimize the k-folds error, since it will often lead to overfitting.
Ultimately, we want a measure of how well our final model will generalize. This is why we created the test set at the beginning—evaluating the model’s accuracy on this set is a useful estimation of its success.
So, k-fold doesn't mean k different models, but k folds of the dataset!
To reply to your questions:
Will the training of these individual model be as usual(backward propagation)?
Yes, each training run is as usual; only the training folds and the hyperparameters change.
What do I initialise each model with? (Random Weights?)
Yes, (still) as usual in neural networks. You are not re-using old weights.
After completing the cross-validation, I have the best model(say B) with me now, what does it mean to train this model on the entire dataset? (Does it mean, I initialise the weights of the new model being trained on the whole dataset, with those of B).
Well, you are again mixing up weights and hyperparameters, but if you have a very big dataset and cross-validating on the entire dataset takes too long, you can:
take a portion of your dataset (maybe 10%), let's call it A
Use A to find the best hyperparameters using k-fold CV as I described before
Now you can use the entire dataset for training (except a test set) the model using those best hyperparameters. With the hope that's the real best model. |
H: How does the validation_split parameter of Keras' fit function work?
Validation-split in Keras Sequential model fit function is documented as following on https://keras.io/models/sequential/ :
validation_split: Float between 0 and 1. Fraction of the training data
to be used as validation data. The model will set apart this fraction
of the training data, will not train on it, and will evaluate the loss
and any model metrics on this data at the end of each epoch. The
validation data is selected from the last samples in the x and y data
provided, before shuffling.
Please note the last line:
The validation data is selected from the last samples in the x and y
data provided, before shuffling.
Does it mean that the validation data is always fixed and taken from the bottom of the main dataset?
Is there any way it can be made to randomly select a given fraction of the data from the main dataset?
AI: You actually would not want to resample your validation set after each epoch. If you did this, your model would eventually have been trained on every single sample in your dataset, so the validation loss would no longer be a fair estimate of generalisation and this encourages overfitting. You should always split your data before the training process, and the algorithm should then only be trained using the training subset of the data.
The function as it is designed ensures that the data is separated in such a way that the model always trains on the same portion of the data for each epoch. All shuffling is done within the training sample between epochs, if that option is chosen.
However, for some datasets taking the last few instances is not useful, specifically if the dataset is grouped or ordered by class: then the class distributions in your training and validation sets will be skewed. Thus you will need some random way to extract a subset of the data to get balanced class distributions in the training and validation sets. For this I always like to use the sklearn function as follows
from sklearn.model_selection import train_test_split
# Split the data
x_train, x_valid, y_train, y_valid = train_test_split(data, labels, test_size=0.33, shuffle= True)
It's a nice easy to use function that does what you want. The variables data and labels are standard numpy matrices with the first dimension being the instances. |
H: Investigate why data is missing? After finding out reasons, what should I do next?
x1 x2 x3 x4 - - - - - - x10 .... x21 x22 x23
1 |
. Complete |
. data |
88 |
89 |
90 ------------------------------------------
. | |
.complete | missing | complete
. | |
. | |
. -------------------------------------------
100
101 Complete data
102------------------------------------------
The dataset looks like this. 10 % of the data is missing. It doesn't appear missing at random.
From row 90 to 99. Variables x4 to x10 are missing. All other rows do not have any missing values. It is not missing at random. Is there any statistical way to investigate why they are missing.
My initial plan is to create a new column, 0 is not missing, 1 is missing.
Run a logistic regression on non-missing columns. Is this the correct way to do it or not?
My questions:
How should I investigate why data is missing by just playing around with the data set?
If I found out the reasons or data is not missing not random, What should I do next?
AI: Add the binary column, but start with summary statistics by group (missing/not missing) for all of the x's. That may reveal an implied 'why', e.g. the values are missing whenever some other x is always above/below some value.
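For instance, a minimal pandas sketch of that first step; the toy frame and column names are only placeholders for your x1-x23 columns:
import numpy as np
import pandas as pd

# Toy frame standing in for your data: x1..x3 always observed, x4 missing in rows 90-99.
rng = np.random.RandomState(0)
df = pd.DataFrame(rng.rand(102, 4), columns=['x1', 'x2', 'x3', 'x4'])
df.loc[90:99, 'x4'] = np.nan

# Binary indicator column: 1 if the row has any missing value in the affected columns.
df['is_missing'] = df[['x4']].isnull().any(axis=1).astype(int)

# Summary statistics of the fully observed columns, split by missing / not missing.
print(df.groupby('is_missing')[['x1', 'x2', 'x3']].describe())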
H: Keras Conv1D for simple data target prediction
I am trying to use conv1D layer from Keras for predicting Species in iris dataset (which has 4 numeric features and one categorical target). Following is my code:
import numpy as np
import pandas as pd
irisdf = pd.read_csv('iris.csv')
Xall = irisdf.drop('Species', axis=1)
print(Xall.shape)
Xall = np.expand_dims(Xall.values, axis=2)
print(Xall.shape)
Yall = irisdf['Species']
nb_classes = 3
import keras
from keras.models import Sequential
from keras.layers import Dense, InputLayer, Dropout, Flatten, BatchNormalization, Conv1D
input_shape = (Xall.shape[1:],)
print(input_shape)
model = Sequential([
InputLayer(input_shape=input_shape),
Conv1D(32, 2),
Dense(nb_classes, activation='softmax')
])
model.compile(loss=keras.losses.mean_squared_error,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.summary()
model.fit(Xall, Yall, epochs=25, verbose=True)
However, it is giving following error:
Traceback (most recent call last):
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/eager/execute.py", line 141, in make_shape
shape = tensor_shape.as_shape(v)
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 946, in as_shape
return TensorShape(shape)
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 541, in __init__
self._dims = [as_dimension(d) for d in dims_iter]
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 541, in <listcomp>
self._dims = [as_dimension(d) for d in dims_iter]
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 482, in as_dimension
return Dimension(value)
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/framework/tensor_shape.py", line 37, in __init__
self._value = int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'tuple'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "rnkeras_conv1d_iris.py", line 40, in <module>
InputLayer(input_shape=input_shape),
File "/home/abcde/.local/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/abcde/.local/lib/python3.5/site-packages/keras/engine/input_layer.py", line 86, in __init__
name=self.name)
File "/home/abcde/.local/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 515, in placeholder
x = tf.placeholder(dtype, shape=shape, name=name)
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1735, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4923, in placeholder
shape = _execute.make_shape(shape, "shape")
File "/home/abcde/.local/lib/python3.5/site-packages/tensorflow/python/eager/execute.py", line 143, in make_shape
raise TypeError("Error converting %s to a TensorShape: %s." % (arg_name, e))
TypeError: Error converting shape to a TensorShape: int() argument must be a string, a bytes-like object or a number, not 'tuple'.
Where is the problem and how can it be solved?
(PS: If you find this question to be interesting/important, please upvote it;)
AI: The immediate error comes from input_shape = (Xall.shape[1:],): that wraps a tuple inside another tuple, while Keras expects a flat tuple of ints such as (4, 1). Beyond that, Keras will also not work with the string species labels directly, so you will want to transform them into one-hot encoded vectors to train your model. Here is some code to do this.
Getting the data
import pandas as pd
df = pd.read_csv('iris.csv', header=None, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
This will assign a class label, we will one-hot encode them later
df['labels'] =df['species'].astype('category').cat.codes
Splitting the data and reshaping the data
First we will split the data into a training and testing set. Then we will one-hot encode the labels. And finally we will structure the inputs to match what is expected from Keras. To use a 1D convolution we need to add a spatial dimension.
from sklearn.model_selection import train_test_split
import keras
X = df[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
Y = df['labels']
x_train, x_test, y_train, y_test = train_test_split(np.asarray(X), np.asarray(Y), test_size=0.33, shuffle= True)
# The known number of output classes.
num_classes = 3
# Input image dimensions
input_shape = (4,)
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
x_train = x_train.reshape(100, 4,1)
x_test = x_test.reshape(50, 4,1)
The model
Your model was insufficient to get good results so I added an additional hidden layer into the mix to get acceptable results.
from __future__ import print_function
from keras.models import Sequential
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv1D
from keras.callbacks import ModelCheckpoint
from keras.models import model_from_json
from keras import backend as K
model = Sequential()
model.add(Conv1D(32, 3, input_shape=(4,1), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.summary()
Now let's train the model
batch_size = 128
epochs = 10
model = model.fit(x_train, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test_binary))
100/100 [==============================] - 0s 50us/step - loss: 1.0906 - acc: 0.6400 - val_loss: 1.0893 - val_acc: 0.7000
We get 70% accuracy, that's not so bad. But it can be improved by changing the model to better suit the data source.
Plot the convergence
import matplotlib.pyplot as plt

plt.plot(model.history['loss'])
plt.plot(model.history['val_loss'])
plt.title('model train vs validation loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show() |
H: Neural network options for simple data classification
I want to clear about keras neural network options for classification of simple data where there are a number of features and one target column, as in iris flower dataset (Species is target):
SL, SW, PL, PW, Species
5.1, 3.5, 1.4, 0.2, setosa
4.9, 3.0, 1.4, 0.2, setosa
4.7, 3.2, 1.3, 0.2, setosa
...
...
I am finding that in almost all examples, various combinations of Dense and Dropout layers are the only options:
model = Sequential()
model.add(Dense(12, input_dim=4, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='softmax'))
What other keras layers can be used in such situations, especially if the data is large, say with 50K rows and 100 features?
Edit: My specific question is whether Dense and Dropout are the only kind of layers for this purpose and such data?
AI: You may not be very familiar with deep learning yet. Each kind of network is suited to a particular kind of task; you cannot just stack LSTMs, GRUs, dense layers and other components arbitrarily. If your patterns are local and may appear in multiple locations within an input, you can employ convolutional layers for feature extraction and dense layers for classifying the extracted features. If you want to classify data that has some kind of sequence in it (temporal data), you can employ LSTMs and GRUs, stack them for better accuracy, and feed their output to other layers based on your need. MLPs are good for learning non-linear mappings.
Dropout is used for avoiding overfitting. |
H: Is it advisable to combine two dataset?
I have two datasets on the heart rate of subjects that were recorded in two different places (two different continents, to be exact). The two research experiments aimed to find the subjects' emotions based on how much their heart rate changes over time. I am using machine learning to predict the subjects' emotions, and I am getting acceptable results when testing on each dataset separately. However, I get even better results if I merge the two datasets.
I am not, however, sure if combining the two datasets is acceptable. As I am combining two somewhat different datasets, will it create statistical bias? How should I report my findings in a journal paper?
AI: If you add ‘continent’ or ‘location’ as a feature for the model, then you will be able to control for potential bias while getting the results of the additional data. |
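For example, in pandas you could tag each study before merging (a sketch with made-up column names):
import pandas as pd

# Tiny placeholders standing in for the two heart-rate studies.
df_a = pd.DataFrame({'hr_change': [5.1, -2.3], 'emotion': ['happy', 'sad']})
df_b = pd.DataFrame({'hr_change': [4.0, -1.1], 'emotion': ['happy', 'sad']})

df_a['site'] = 'A'   # continent / location where the data was recorded
df_b['site'] = 'B'

combined = pd.concat([df_a, df_b], ignore_index=True)
combined = pd.get_dummies(combined, columns=['site'])   # one-hot encode the site feature
print(combined)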
H: Keras functional api explanation of activation() layer?
With the keras functional api it is possible to write something like this:
x = Activation('relu')(x)
x = Dense(8, activation='softmax')(x)
My question is whether the Activation() function is a separate Layer (equivalent to Dense(128, activation='relu')), or, if not, why and when this notation is used?
AI: As stated in the docs, the Activation layer in Keras simply applies an activation function to the output of the previous layer, so a (linear) Dense layer followed by an Activation layer is equivalent to a dense layer with the same activation passed as an argument.
This
x = Dense(64)(x)
x = Activation('relu')(x)
is equivalent to
x = Dense(64, activation='relu')(x)
As per your example if the activation layer is used as a layer, this will act as a transformation of the outputs of the previous layer.
As you can see in the model
inputs = Input(shape=(784,))
x = Dense(32, activation='tanh')(inputs)
x = Activation('relu')(x)
predictions = Dense(8, activation='softmax')(x)
The output of the first layer is the result of a densely connected layer with a tanh function. Then these outputs will each be transformed by a relu function. You can see how this is simply a transformation and does not introduce any new model parameters using
model.summary() |
H: Does Orange scale the data automatically for the linear regression with Ridge regularization
I'm using the linear regression tool with the Ridge regularization. To use the Ridge regularization, I have to scale the data first. Does Orange scale the data automatically? I can't find any information about this mentioned in Orange's documentation for Ridge regularization.
In python's scikit-learn, I have to scale the data manually before using Ridge Regression. In MATLAB, the scaling in the Ridge function included. So, do I have to scale the data manually before I'm using the Ridge Regression in orange?
Thanks for your help.
AI: Not by default, no, as shown by the normalize=False here:
class Orange.regression.linear.RidgeRegressionLearner(alpha=1.0,
fit_intercept=True, normalize=False, copy_X=True, max_iter=None,
tol=0.001, solver='auto', preprocessors=None) |
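If you are scripting rather than using the widget, the signature above suggests you can request the scaling explicitly; a minimal sketch based only on that signature (check the behaviour in your Orange version):
from Orange.regression.linear import RidgeRegressionLearner

# normalize=True asks the learner to scale the regressors before fitting;
# the default shown above is normalize=False, i.e. no automatic scaling.
learner = RidgeRegressionLearner(alpha=1.0, normalize=True)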
H: Help with reusing glove word embedding pretrained model
When using the pretrained GloVe.6B vectors for embedding generation, how can I get only the top 100000 most frequently used words rather than all the 4M words in the file?
AI: I was stuck in a similar problem while working with glove. Assuming that you have a dataset in text form, from which you want to collect the topmost 100000 words, you'll have to make a list of those words. In the glove file, each embedding is on a separate line, with each line starting with the word itself and then the embedding. You'll have to write a code to compare your list of words with the words in glove file and extract the lines which make a hit. Have a look here for example code. |
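A rough sketch of that comparison step; the vocabulary set and the file name are placeholders (glove.6B ships several dimensionalities, e.g. glove.6B.100d.txt):
import numpy as np

# Your own top-100k vocabulary, built from the corpus you care about.
vocab = {'the', 'movie', 'was', 'great'}          # placeholder vocabulary

embeddings = {}
with open('glove.6B.100d.txt', encoding='utf8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        if parts[0] in vocab:                     # keep only words you actually need
            embeddings[parts[0]] = np.asarray(parts[1:], dtype='float32')

print(len(embeddings), 'vectors kept')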
H: Why Pair plot is taking up the target variable into consideration?
I am working on Haberman's cancer survival data set.
I tried to visualize the pair plot with AGE, Op_Year and axil_nodes_det as features and Surv_status as the target variable. I expect 12 plots but am getting 16, including plots with Surv_status, which is my target variable.
here is my code:
sns.set_style("whitegrid");
sns.pairplot(patients, hue="Surv_status", size=3);
AI: Maybe it is because Surv_status is a number, so seaborn treats it as just another feature.
There are two solutions:
Explicitly pass the feature columns via vars:
sns.set_style("whitegrid"); sns.pairplot(patients, hue="Surv_status",vars=["AGE", "Op_Year", "axil_nodes_det"], size=3);
change the target values to strings:
patients['Surv_status'] = patients['Surv_status'].astype(str)
or
patients['Surv_status'] = patients['Surv_status'].apply(str)
H: Where is the output in the LSTM?
I'm trying to understand where the output of the LSTM is. Please refer to the following picture:
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
It seems that at each timestep, we output h_t and C_t, which correspond to the hidden state and the memory cell.
Now suppose I'm trying to model stock price movements, which are binary [0,1]: 0 for down, 1 for up; this is my y_i.
I feed in x_t, which is a feature vector at each timestep, and I expect to get a 1-dimensional output y_t after the last timestep.
Is h_t what I'm looking for? This would imply that h_t matches the output dimension, but for some reason I thought it is independent of the output dimension.
AI: Usually you have to add a Dense layer after the LSTM unit. That layer will learn how to map the output of the LSTM to your target.
For example in Keras:
model = Sequential()
model.add(LSTM(4, input_shape=(1, 5)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam') |
H: Implemented early stopping but came across the error SGDClassifier: Not fitted error in sklearn
Below is a simple implementation of early stopping which I came across in a book and wanted to try.
# Implement SGD Classifier
sgd_clf = SGDClassifier(random_state=42,
warm_start=True,
n_iter=1,
learning_rate='constant',
eta0=0.0005)
minimum_val_error = float('inf')
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_clf.fit(X_train_scaled,y_train)
predictions = sgd_clf.predict(X_val_scaled)
error = mean_squared_error(y_val,predictions)
if error < minimum_val_error:
minimum_val_error = error
best_epoch = epoch
best_model = clone(sgd_clf)
Once the above snippet is executed, the best model and best epoch are stored in the variables best_model and best_epoch. So, to test the best_model, I ran the statement below.
y_test_predictions = best_model.predict(X_test)
But then I came across the error This SGDClassifier instance is not fitted yet.
Any hints on how to solve this would be greatly helpful. Thanks
AI: It is because clone will only copy the estimator with the same parameters, but not with the attached data. So it results a new estimator that has not been fit on the data. Hence, you couldn't use it to make prediction.
Instead of clone, you can use either pickle or joblib.
1. pickle
import pickle
...
for epoch in range(1000):
...
if error < minimum_val_error:
best_model = pickle.dumps(sgd_clf)
Later if you want to use the stored model:
sgd_clf2 = pickle.loads(best_model)
y_test_predictions = sgd_clf2.predict(X_test)
2. joblib
You can also use joblib, and store the model to the disk.
from sklearn.externals import joblib
...
joblib.dump(sgd_clf, 'filename.joblib')
To use the stored model
clf = joblib.load('filename.joblib') |
H: How to train ML model with multiple variables?
I am trying to learn Machine Learning concepts these days. I understand in a traditional ML data, we will have features and labels. I have following toy data in my mind where I have features like 'units_sold' and 'num_employees' and a label of 'cost_$'. I would like to train the model to learn these features and label for a particular 'city' and 'store'. For example if I perform Linear Regression, the model learn intercept and coefficient for the city and store it relates to. When I input units_sold and num_employees for next year, I get the prediction.
city store units_sold num_employees cost_$
New York A 10 4 11000
New York B 12 4 11890
New York C 14 5 15260
New York D 17 6 17340
London A 23 5 22770
London B 27 6 25650
London C 22 3 21450
Paris A 4 2 5200
Paris B 7 3 9590
I'm trying to brainstorm about it and would like to know how to approach this problem?
AI: This is very little data, so doing much with it is very difficult, especially considering you expect to get 'next year' estimates without having any historical data to base yourself on.
However, if you want to be able to estimate a cost given your input features which are city, store, units_sold and num_employees then this problem setup is very standard in machine learning.
First I will put your data into a pandas DataFrame and I will encode your categorical features using numerical values. I did this randomly, but I think you can get better results by getting the latitude and longitude of the stores, as well as some data about the neighborhood (density, wealth, etc.).
import pandas as pd
data = {'city': ['New York', 'New York', 'New York', 'New York', 'London', 'London', 'London', 'Paris', 'Paris'],
'store': ['A', 'B', 'C', 'D', 'A', 'B', 'C', 'A', 'B'],
'units_sold': [10, 12, 14, 17, 23, 27, 22, 4, 7],
'num_employees': [4,4,5,6,5,6,3,2,3],
'cost': [11000, 11890, 15260, 17340, 22770, 25650, 21450, 5200, 9560]}
df = pd.DataFrame(data)
df['store'] =df['store'].astype('category').cat.codes
df['city'] =df['city'].astype('category').cat.codes
When I am working with very little data I always like to pull some visualizations to give me some intuition about the data. Let's first see how the features are correlated.
import numpy as np
import matplotlib.pyplot as plt
plt.matshow(df.corr())
plt.xticks(np.arange(5), df.columns, rotation=90)
plt.yticks(np.arange(5), df.columns, rotation=0)
plt.colorbar()
plt.show()
We can see that the cost is associated directly with units_sold. That is not much of a surprise I guess. But it also correlates with the num_employees and is strongly inversely correlated with the city.
If we plot the num_employees and units_sold we can see that the correlations observed above more clearly.
A predictive model
Now we want to be able to give a model our inputs and get an estimation of the cost.
Let's first put our data into a training and testing set. This is very problematic with such a small dataset because you have a very high probability of ending up with a training set that omits a city, or a store type. This will significantly affect the abiltiy of the model to predict an output for data it has never seen.
The labels of the data are the cost.
import numpy as np
from sklearn.model_selection import train_test_split
X = np.asarray(df[['city', 'num_employees', 'store', 'units_sold']])
Y = np.asarray(df['cost'])
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, shuffle= True)
Let's start with a standard linear regression
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LinearRegression
lineReg = LinearRegression()
lineReg.fit(X_train, y_train)
print('Score: ', lineReg.score(X_test, y_test))
print('Weights: ', lineReg.coef_)
plt.plot(lineReg.predict(X_test))
plt.plot(y_test)
plt.show()
Score: 0.963554136721
Weights: [ 506.87136393 -15.48157725 376.79379444 920.01939237]
A better alternative is to use ridge regression.
from sklearn import linear_model
reg = linear_model.Ridge (alpha = .5)
reg.fit(X_train, y_train)
print('Score: ', reg.score(X_test, y_test))
print('Weights: ', reg.coef_)
plt.plot(reg.predict(X_test))
plt.plot(y_test)
plt.show()
Score: 0.971197683706 Weights: [ 129.78467277 2.034588
97.11724313 877.73906409]
If you do another split and rerun both these models you will see that the performance varies greatly for both of them as we expected due to the limited data. However, the effect is much more pronounced for the simple linear regression.
To get a better idea of the actual results, let's do something a bit naughty. We will split the data over and over again to get different models and get the average of their scores.
scores = []
coefs = []
for i in range(1000):
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, shuffle= True)
lineReg = LinearRegression()
lineReg.fit(X_train, y_train)
scores.append(lineReg.score(X_test, y_test))
coefs.append(lineReg.coef_)
print('Linear Regression')
print(np.mean(scores))
print(np.mean(coefs, axis=0))
scores = []
coefs = []
for i in range(1000):
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, shuffle= True)
lineReg = linear_model.Ridge (alpha = .5)
lineReg.fit(X_train, y_train)
scores.append(lineReg.score(X_test, y_test))
coefs.append(lineReg.coef_)
print('\nRidge Regression')
print(np.mean(scores))
print(np.mean(coefs, axis=0))
Linear Regression
-1.43683760609
[ 1284.47358731 1251.8762943 -706.31897708 846.5465552 ]
Ridge Regression
0.900877146134
[ 228.05312491 95.33306385 123.49517018 873.49803782]
We can see that across 1000 trials the ridge regression was by far the superior model.
Now you can use this model to estimate costs by passing the model a vector with the features in the same order as the dataset as follows
reg.predict([[2, 4, 1, 12]])
The resulting score is
array([ 12853.2132658])
This is not enough data to do any machine learning regression reliably. However you can get some insight into what factors affect your cost the most. |
H: How to use LeakyRelu as activation function in sequence DNN in keras?When it perfoms better than Relu?
How do you use LeakyRelu as an activation function in sequence DNN in keras?
If I want to write something similar to:
model = Sequential()
model.add(Dense(90, activation='LeakyRelu'))
What is the solution? Put LeakyRelu similar to Relu?
The second question is: what are the best general settings for tuning the parameters of LeakyReLU? When is its performance significantly better than ReLU?
AI: You can use the LeakyRelu layer, as in the python class, instead of just specifying the string name like in your example. It works similarly to a normal layer.
Import the LeakyReLU and instantiate a model
from keras.layers import LeakyReLU
model = Sequential()
# here change your line to leave out an activation
model.add(Dense(90))
# now add a LeakyReLU layer explicitly:
model.add(LeakyReLU(alpha=0.05))
Being able to simply write e.g. activation='relu' is made possible because of simple aliases that are created in the source code.
For your second question:
what are the best general setting for tuning the parameters of LeakyRelu? And when its performance is significantly better than Relu?
I can't give you optimal settings for the LeakyReLU, I'm afraid - they will be model/data dependent.
The difference between the ReLU and the LeakyReLU is the ability of the latter to retain some degree of the negative values that flow into it, whilst the former simply sets all values less than 0 to be 0. In theory, this extended output range offers a slightly higher flexibility to the model using it. I'm sure the inventors thought it to be useful and perhaps proved that to be the case for a few benchmarks. In practice, however, people generally just stick to the ReLU, as the benefits of the LeakyReLU are not consistent and the ReLU is cheaper to compute and therefore models train slightly faster. |
H: best activation function for ensemble?
I have created several logistic regression models (with different preprocessing) using a softmax function, and I mix all the models in an ensemble with a hierarchical method, so the output of all the base models is used as input for the final model (also a logistic regression).
The default base models use a softmax function. I think transforming a confidence value into a probability will lose much information, so I plan to change the softmax to another activation function.
From what I learned in my CNN class, ReLU is the best default activation function for images, but my case is multi-class email classification.
Which activation function should I choose?
ReLU, sigmoid or other?
Thanks
AI: I think you will have to experiment; there isn't generally a one-activation-fits-all for hierarchical models. Give the ones you mentioned a try, but perhaps starting with sigmoid and tanh and also a LeakyReLU.
The reason I'd perhaps leave out a normal ReLU at first is because that too would potential trim out information before the final model, as it simply removes negative weights to zero. A good final model should be able to correct for this, but I think its unnecessary to force it to. |
H: Models that converged before aren't converging anymore in Keras
I have two models with saved data that worked well previously but won't anymore.
First, it happened with one of my Jupyter notebooks. I can even load the saved model and weights that work. When I train more with the exact same model, the performance actually drops!
For example, I get a dice coefficient of -.39 with my previous training when it worked. Now if I load the same model, weights, and data, it drops to -0.04. (Loss of -1 is perfect).
So I load one of my older notebooks with a different model and saved data that worked well. It doesn't converge to nearly as high of a performance as it did previously either.
However, I tried setting up a simple MNIST CNN classifier and it worked fine.
Is there any way for there to be persistent changes to occur so that the exact same code/data that performed well before no longer does?
AI: It was actually just the random initial weights and incredible luck that I got good convergence the first three tries in a row, while most other runs afterwards did not converge.
I tried setting the seed and found certain seeds that gave me good results every time, whereas most other seeds wouldn't converge for many epochs.
from numpy.random import seed
seed(5)
from tensorflow import set_random_seed
set_random_seed(42) |
H: Creating a single number from a numpy array - Python
I am working on a gender classification project. I am extracting the pixels of an image using a Numpy array in Python, similar to the one below:
[[[129 155 191] [123 150 185] [120 149 183]]
How can I use these values to extract a single meaningful number to be used in a csv file for K-Nearest Neighbor Algorithm. For example single number like
0.34232
In summary, how can I turn [[[129 155 191] [123 150 185] [120 149 183]] into a single number like 0.34232
AI: Welcome to Data Science! Your question needs a little more detail... there are many, many ways to turn an array into a single number. You should say a little more about what you mean by meaningful.
Here are a few examples, using your example array, which may seem outrageously simple, but do indeed form the basis to many of the techniques used in modern research:
In [1]: import numpy as np
In [2]: x = np.array([[129, 155, 191], [123, 150, 185], [120, 149, 183]], dtype=np.float32)
In [3]: x
Out[3]:
array([[129., 155., 191.],
[123., 150., 185.],
[120., 149., 183.]], dtype=float32)
Now here are a few (randomly selected) ways to create a single number from your array:
In [4]: np.mean(x) # the mean / average
Out[4]: 153.88889
In [5]: np.sum(x) # the sum
Out[5]: 1385.0
In [6]: np.std(x) # the standard deviation
Out[6]: 25.722641
In [7]: np.linalg.norm(x) # the Frobenius norm - a distance measure
Out[7]: 468.07156
In [8]: np.max(x)
Out[8]: 191.0
These might seem stupidly simple, but if we were to treat your array as a single block from a larger image-array, then these might represent pooling-layers that are used to down-sample arrays as they are passed through a neural network. Just have a look at the available pooling layers within the Keras library.
Which method you might want to use will heavily depend on your use case, your dataset and your model. |
H: Problem with calculating error rate for KNN
I am trying to validate the accuracy of my KNN algorithm for the movie rating prediction.
I have $2$ vectors: $Y$ - with the real ratings, $Y'$ - with predicted ones.
When I calculate the Standard Error of the Estimate (is it the one I need to calculate?) using the following formula:
$$\sigma_{est} = \sqrt{\frac{\sum (Y-Y')^2}{N}}$$
I'm getting a result of $\sim 1.03$, but I thought that it can't be $> 1$. If that assumption is wrong, what does this number tell me?
results = load('first_try.mat');
Y = results(:,1);
Y_predicted = results(:,2);
o = sqrt(sum((Y-Y_predicted).^2)/rows(Y))
AI: The quantity you are computing is the root mean squared error (RMSE) of the predictions, and, like the distances k-NN works with, it depends on the scale of your data, so there is no reason for it to stay below 1. If the ratings are on a scale from 0 to 100 and you predict very poorly, you will evidently get values much larger than 1.
For example, for a very bad predictor
import numpy as np
Y = [100, 90, 100, 90]
Y_p = [10, 10, 10, 10]
np.sqrt(np.sum(np.subtract(Y, Y_p)**2)/len(Y))
85.14693182963201 |
H: Keras input dimensions for a MLP
I have been training a multilayer perceptron using Keras to make a prediction on a function similar to that of a normal distribution. I have 4 input variables, and I have one output value.
When I set my input layer to have 4 neurons, as such
model.add(Dense(4, input_dim=4, activation= 'relu'))
the model learns with a accuracy.
When I tried to use 35 neurons in my input layer, as such
model.add(Dense(35, input_dim=4, activation= 'relu'))
my model learns it with an accuracy.
I'm not understanding the logic behind this. Surely you have to have only 4 neurons for the input layer; what is happening with the other neurons?
AI: If I understand your question properly: in the first example you have 4 inputs connected to a layer of 4 neurons; in the second, the same 4 inputs are connected to 35 neurons. That's it, you simply add more neurons to the (first) hidden layer. By the way, what do you mean by saying "learns it with an accuracy"?
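To make the parameter difference concrete, here is a minimal sketch of the two set-ups (assuming the same Keras Sequential API as in the question); the input stays 4-dimensional in both cases, only the width of the first hidden layer changes:
from keras.models import Sequential
from keras.layers import Dense

# First hidden layer with 4 neurons
small = Sequential()
small.add(Dense(4, input_dim=4, activation='relu'))
small.add(Dense(1))

# First hidden layer with 35 neurons - the input is still 4 values
wide = Sequential()
wide.add(Dense(35, input_dim=4, activation='relu'))
wide.add(Dense(1))

small.summary()  # first layer: 4*4 weights + 4 biases = 20 parameters
wide.summary()   # first layer: 4*35 weights + 35 biases = 175 parameters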
H: Keras: Dimension Error with Sparse Categorical Crossentropy
I'm trying to make an NN that, given the time on the clock, would try to predict which class (out of 32 in this example) is making a request to the system. As a first attempt, I've tried to use categorical_crossentropy, but this will obviously not work because the targets are very sparse, so the system will be heavily rewarded by just always predicting the non-requests.
Now I'm trying to use sparse_categorical_crossentropy, but I keep getting a dimension mismatch error (the train and test sets are the same in this case because I just wanted to evaluate performace in the training set at first):
Error when checking target: expected dense_90 to have shape (1,) but got array with shape (32,)
The DataFrame is here (a simple clock and another column for the requests) and the code is:
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
import numpy as np
import tensorflow as tf
requests = df['requests'].values
requests_cat = to_categorical(requests, 32)
length = len(df['clock'])
train = np.reshape(df['clock'].values, (length, 1))
train = train.astype(np.int)
target = requests_cat
model = Sequential()
model.add(Dense(25, activation = 'relu', input_shape = (train.shape[1],)))
model.add(Dense(25, activation = 'relu'))
model.add(Dense(32, activation = 'softmax'))
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy')
model.fit(x = train, y = target, epochs = 100, validation_data = (train, target))
On a sidenote:
This architecture doesn't seem to be the best in this case. As a second prototype I was thinking about doing something with an LSTM, since past requests can affect later ones. Is there a standard architecture for scheduling?
What would be the proper way of splitting sparse sets into training and testing ones?
AI: I think you've misunderstood what the difference between categorical_crossentropy and sparse_categorical_crossentropy is. The "sparse" part doesn't refer to the sparsity of the data but to the format of the labels.
If your labels are one-hot encoded: use categorical_crossentropy
If your labels are encoded as integers: use sparse_categorical_crossentropy |
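A small sketch of the two options (assuming integer class labels with values 0..31, as in the question); pick one label format and the matching loss, not a mixture of the two:
import numpy as np
from keras.utils import to_categorical

y = np.array([3, 0, 31, 7])       # integer-encoded labels

# Option 1: keep the integers and use the sparse loss
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(X, y, ...)

# Option 2: one-hot encode and use the plain categorical loss
y_onehot = to_categorical(y, 32)  # shape (4, 32)
# model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(X, y_onehot, ...)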
H: How does training a ConvNet with huge number of parameters on a smaller number of images work?
I have two questions:
I am wondering why it is that a very deep model such as VGG-16, which has approximately 138 million parameters (Source), can be trained on just 1.3 million images. Even though the authors of VGG-16 used dropout and regularisation to avoid overfitting (source), I just feel that the number of parameters is still significantly larger than the number of images.
Is it also true that when you train for $N$ epochs, you "effectively" have $K \cdot N$ training samples, where $K$ is the number of training samples you have? In the case above, $K$ = 1.3 million. Personally I don't think this is the case, but I am not sure.
I probably have not done research or read thoroughly, so I would like to apologize if the answers are already somewhere out there. Thank you.
AI: First of all, you are ignoring the dimensionality of the problem. Images are very high-dimensional. Let's say an image has a resolution of $256 \times 256$, which means each image has $65,536$ pixels. ImageNet images are RGB, so each image has 3 channels, resulting in $196,608$ pixels per image. Now, the whole dataset ($1.3$m images) has more than $255$ billion pixels associated with its images.
Because filters work at the pixel level, the information the network gains from one image is more than what a regular ML algorithm would gain from a single training example.
Consider, for example, a $3 \times 3$ convolution kernel (whose weights you want to update, and which accounts for just 9 parameters): it is much smaller than the input image. It would be a mistake to consider a single image (of around $200,000$ pixels, as we saw previously) as only capable of updating one parameter.
Secondly, the input of each layer changes from layer to layer, because of the sequential nature of the network. The second layer sees the output of the first, and so on... This means that the second layer's filters won't see the same image as the first and, in fact, as training progresses, their input will also gradually change (because the first layer's filters will be getting more effective).
By the time it reaches let's say the $M^{th}$ layer, the input will have changed a lot from the original image, so it would be a mistake to consider that the original image is updating the totality of the parameters in the network.
Thirdly, data augmentation is also a thing. By making simple random transformations to each image (flips, shifts, scales, rotations, brightness/contast adjustments etc.) the network is tricked into thinking that this is a totally new image. This can exponentially increase the size of the dataset.
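For completeness, a minimal augmentation sketch using Keras' ImageDataGenerator (the specific transformation ranges here are just illustrative choices, not recommendations):
from keras.preprocessing.image import ImageDataGenerator

# Every epoch the generator yields randomly transformed variants of the
# original images, so the network rarely sees exactly the same pixels twice.
augmenter = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True)

# model.fit_generator(augmenter.flow(x_train, y_train, batch_size=32), ...)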
Finally, you should take a look at where the parameters in a network are. The VGG19 architecture does have more than $143$ million, but around $124$m of those come from the last 3 fully-connected layers; in fact, the first of the three has around $103$m parameters on its own! This is a very inefficient network design, and the research community has strayed far from it recently. A more representative network to look at is the ResNet architecture. For example, a 50-layer ResNet (much deeper than the 19-layer VGG) has only around $25$m parameters, while achieving a similar, if not better, performance.
H: Relationship between train and test error
I have some specific questions for which I could not extract answers from books. Therefore, I ask for help here and shall be extremely grateful for an intuitive explanation if possible.
In general, neural networks have a bias/variance tradeoff and thus we need to have a regularizer. Higher bias --> underfitting; Higher Variance--->overfitting.
To solve overfitting, we use regularization to constrain the weights. The regularization strength is a hyperparameter and, based on my understanding, should be tuned during training using cross-validation. Thus, the dataset is split into a train, validation and test set. The test set is independent and unseen by the model during learning, but we have the labels available for it. We usually report statistics such as false positives, the confusion matrix and misclassifications based on this test set.
Q1) Is this bias/variance problem encountered in other algorithms such as SVM, LSTM etc as well?
In the convolutional neural network (MATLAB toolbox) I have not seen any option for specifying the regularization constant. So, does this mean that CNNs don't need a regularizer?
Q2) What is the condition if training error and test error are both zero? Is this the ideal best situation?
Q3) What is the condition if training error > test error?
Q4) What is the condition if training error > validation error?
Please correct me where wrong. Thank you very much.
AI: First of all, be very clear about the use of the training set, validation set and test set. These play a crucial part in tuning your DL model. Usually, the validation set is used to keep a check on the model during training. The following intuitive observations can be made from the training, validation and testing accuracy during data fitting:
If the model has low training accuracy (and consequently low validation accuracy), it is underfitting.
If the model has high training accuracy but much lower validation accuracy, it is overfitting.
Bias-variance trade-off problem is a central problem of supervised machine learning algorithms.
The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself.
Ref |
H: Normalize matrix in Python numpy
I've an array like this:
array([[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7],
[ 8, 9],
[10, 11],
[12, 13],
[14, 15]])
I want to normalize this array to the range between -1 and 1. I'm currently using NumPy as a library.
AI: nazz's answer doesn't work in all cases and is not a standard way of doing the scaling you're trying to perform (there are infinitely many possible ways to scale to [-1,1]). I assume you want to scale each column separately:
1) centre each column and divide by its absolute maximum:
arr = arr - arr.mean(axis=0)
arr = arr / np.abs(arr).max(axis=0)
2) But if the maximum of one column is 0 (which happens when the column is full of zeros) you'll get an error (you can't divide by 0), so guard against it:
arr = arr - arr.mean(axis=0)
safe_max = np.abs(arr).max(axis=0)
safe_max[safe_max==0] = 1
arr = arr / safe_max
Still, this is not the standard way to do this. You're trying to do some "Feature Scaling" see here
Then the formula is:
import numpy as np
def scale(X, x_min, x_max):
nom = (X-X.min(axis=0))*(x_max-x_min)
denom = X.max(axis=0) - X.min(axis=0)
denom[denom==0] = 1
return x_min + nom/denom
X = np.array([
[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7],
[ 8, 9],
[10, 11],
[12, 13],
[14, 15]
])
X_scaled = scale(X, -1, 1)
print(X_scaled)
Result:
[[-1. -1. ]
[-0.71428571 -0.71428571]
[-0.42857143 -0.42857143]
[-0.14285714 -0.14285714]
[ 0.14285714 0.14285714]
[ 0.42857143 0.42857143]
[ 0.71428571 0.71428571]
[ 1. 1. ]]
If you want to scale the entire matrix (not column-wise), then remove the axis=0 arguments and replace the line denom[denom==0] = 1 with denom = denom + (denom == 0).
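For reference, a whole-matrix version might look like this (a sketch following the same pattern as the function above):
def scale_global(X, x_min, x_max):
    # Scale all entries of X jointly, using the global min and max
    denom = X.max() - X.min()
    denom = denom if denom != 0 else 1   # guard against a constant matrix
    return x_min + (X - X.min()) * (x_max - x_min) / denom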
H: Adaboost vs Gradient Boosting
How is AdaBoost different from a Gradient Boosting algorithm since both of them use a Boosting technique?
I could not figure out the actual difference between these two algorithms from a theory point of view.
AI: Both AdaBoost and Gradient Boosting build weak learners in a sequential fashion.
Originally, AdaBoost was designed in such a way that at every step the sample distribution was adapted to put more weight on misclassified samples and less weight on correctly classified samples. The final prediction is a weighted average of all the weak learners, where more weight is placed on stronger learners.
Later, it was discovered that AdaBoost can also be expressed in terms of the more general framework of additive models with a particular loss function (the exponential loss). See e.g. Chapter 10 in (Hastie) ESL.
Additive modeling tries to solve the following problem for a given loss function $L$:
$ \min_{\alpha_{n=1:N},\beta_{n=1:N}} L\left(y, \sum_{n=1}^N \alpha_n f(x,\beta_n) \right)$
where $f$ could be decision tree stumps. Since the sum inside the loss function makes life difficult, the expression can be approximated in a linear fashion, effectively allowing us to move the sum in front of the loss function, iteratively minimizing one subproblem at a time:
$ \min_{\alpha_n,\beta_n} L\left(y, f_{n-1}(x) + \alpha_n f_n(x,\beta_n) \right)$
For arbitrary loss functions this is still a tricky problem, so we can further approximate it by applying steepest descent with a line search, i.e. we update $f_n$ by taking a step in the direction of the negative gradient.
In order to avoid overfitting on the gradient, the gradient is approximated with a new weak learner. This gives you the gradient boosting algorithm:
Start with a constant model $f_0$
Fit a weak learner $h_n$ to the negative gradient of the loss function w.r.t. $f_{n-1}$
Take a step $\gamma$ so that $f_n= f_{n-1} + \gamma h_n$ minimizes the loss $L\left(y, f_n(x) \right)$
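A minimal sketch of this loop for squared loss, where the negative gradient is simply the residual, using regression stumps as weak learners (the learning rate and number of rounds are arbitrary illustrative choices):
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, lr=0.1):
    y = np.asarray(y, dtype=float)
    f0 = y.mean()                               # constant model f_0 (minimises squared loss)
    f = np.full(len(y), f0)
    learners = []
    for _ in range(n_rounds):
        residuals = y - f                       # negative gradient of the squared loss
        h = DecisionTreeRegressor(max_depth=1)  # a stump as weak learner
        h.fit(X, residuals)                     # fit the learner to the negative gradient
        f = f + lr * h.predict(X)               # take a (damped) step
        learners.append(h)
    return f0, learners

def boosted_predict(X, f0, learners, lr=0.1):
    return f0 + lr * sum(h.predict(X) for h in learners)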
The main differences, therefore, are that Gradient Boosting is a generic algorithm to find approximate solutions to the additive modeling problem, while AdaBoost can be seen as a special case with a particular loss function. Hence, Gradient Boosting is much more flexible.
On the other hand, AdaBoost can be interpreted from a much more intuitive perspective and can be implemented without the reference to gradients by reweighting the training samples based on classifications from previous learners.
See also this question for some further references (quote):
In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients.
In Adaboost, ‘shortcomings’ are identified by high-weight data points. |
H: Maximum likelihood estimation vs calculating distribution parameters "manually"
I'm sorry for asking a probably elementary question, but I cannot understand how estimating probability distribution parameters using the maximum likelihood estimation (MLE) method differs from calculating these parameters from observed data "manually".
For MLE we need to know the type of probability distribution anyway, so why don't we just use the known formulas for calculating the corresponding parameters from the observed data?
I believe that MLE is somehow a more general method, but I cannot see what the real advantage of MLE is compared to getting these parameters "manually".
Thanks for explanation.
Tomas
AI: These "known formulas" that you are thinking about are precisely the ones that maximize the likelihood of the distribution (e.g. taking the mean of a sample gives you the maximum likelihood estimate of the $\mu$ parameter of a normal distribution fit to that sample).
In many cases, however, there aren't any closed-form formulas for the "best" parameters of a distribution, in which case you have to follow an iterative optimization approach (e.g. generalized linear models), if that's what you meant to ask.
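A small sketch of the contrast (the normal case has a closed form, whereas in general you would maximise the log-likelihood numerically; NumPy and SciPy assumed available):
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.RandomState(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)

# "Manual" / closed-form MLE for a normal distribution
mu_hat, sigma_hat = x.mean(), x.std()

# Generic MLE: numerically minimise the negative log-likelihood
def neg_log_lik(params):
    mu, log_sigma = params
    return -norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0])
print(mu_hat, sigma_hat)            # closed-form estimates
print(res.x[0], np.exp(res.x[1]))   # numerical estimates (should match closely)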
H: Which is the fastest image pretrained model?
I have been working with pre-trained models and was curious to know the fastest forward-propagating model among the common computer-vision pre-trained models. I have been trying to achieve faster processing in one-shot learning and have timed the forward propagation of a few models over a single image; the results are as follows:
VGG16: 4.857 seconds
ResNet50: 0.227 seconds
Inception: 0.135 seconds
Can you tell me the fastest pre-trained model available out there, and explain the drastic difference in time consumption amongst the above-mentioned models?
AI: The answer will depend on several things, such as your hardware and the image you process. Additionally, we should distinguish whether you are talking about a single run through the network in training mode or in inference mode. In the former, additional intermediate values are computed and cached, and several layers, such as dropout, are active, whereas they are simply left out during inference. I will assume you want to produce a single prediction for a single image, so we are talking about inference time.
Factors
The basic correlation will be:
more parameters (i.e. learnable weights, a bigger network) - slower than a model with fewer parameters
more recurrent units - slower than a convolutional network, which is slower than a fully-connected network1
complicated activation functions - slower than simple ones, such as ReLU
deeper networks - slower than shallow networks (with the same number of parameters), as less of the computation can run in parallel on a GPU
Having listed a few factors in the final inference time (the time taken to produce one forward pass through the network), I would guess that MobileNetV2 is probably among the fastest pre-trained models available in Keras. We can see from the model overview table in the Keras applications documentation that this network has a small memory footprint of only 14 megabytes with ~3.5 million parameters. Compare that to your VGG test, with its ~138 million... 40 times more! In addition, the main workhorse layer of MobileNetV2 is a conv layer - its blocks are essentially clever, smaller versions of residual blocks.
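If you want to benchmark this on your own hardware, a rough timing sketch (assuming the Keras applications module and a random dummy image; the numbers will vary with your CPU/GPU):
import time
import numpy as np
from keras.applications import MobileNetV2

model = MobileNetV2(weights='imagenet')
dummy = np.random.rand(1, 224, 224, 3).astype('float32')

model.predict(dummy)                 # warm-up run (graph building, memory allocation)
start = time.time()
for _ in range(50):
    model.predict(dummy)
print('mean inference time:', (time.time() - start) / 50, 'seconds')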
Extra considerations
The reason I referred to the whole Keras applications table (rather than just MobileNetV2's row) was to highlight that small memory footprints and fast inference times come at a cost: lower accuracy!
If you compute the ratios of top-5 accuracy versus number of parameters (and generally versus memory), you might find a nice balance between inference time and performance.
1 Have a look at this comparison of CNNs with Recurrent modules |
H: How can I check the correlation between features and target variable?
I am trying to build a regression model, and I am looking for a way to check whether there's any correlation between the features and the target variable.
This is my sample dataset
Loan_ID Gender Married Dependents Education Self_Employed ApplicantIncome\
0 LP001002 Male No 0 Graduate No 5849
1 LP001003 Male Yes 1 Graduate No 4583
2 LP001005 Male Yes 0 Graduate Yes 3000
3 LP001006 Male Yes 0 Not Graduate No 2583
4 LP001008 Male No 0 Graduate No 6000
CoapplicantIncome LoanAmount Loan_Amount_Term Credit_History Area Loan_Status
0.0 123 360.0 1.0 Urban Y
1508.0 128.0 360.0 1.0 Rural N
0.0 66.0 360.0 1.0 Urban Y
2358.0 120.0 360.0 1.0 Urban Y
0.0 141.0 360.0 1.0 Urban Y
I am trying to predict LoanAmount column based on the features available above.
I just want to see if there's a correlation between the features and the target variable. I tried LinearRegression and GradientBoostingRegressor, and I'm hardly getting a score of around 0.30 - 0.40.
Any suggestions on algorithms, params etc that I should use for better prediction?
AI: Your data can be put into a pandas DataFrame using
import pandas as pd
data = {'Loan ID': ['LP001002', 'LP001003', 'LP001005', 'LP001006', 'LP001008'],
'Married': ['No', 'Yes', 'Yes', 'Yes', 'No'],
'Dependents': [0, 1, 0, 0, 0],
'Education': ['Graduate', 'Graduate', 'Graduate', 'Not Graduate', 'Graduate'],
'Self_Employed': ['No', 'No', 'Yes', 'No', 'No'],
'Income': [5849, 4583, 3000, 2583, 6000],
'Coapplicant Income': [0, 1508, 0, 2358, 0],
'LoanAmount': [123, 128, 66, 120, 141],
'Area': ['Urban', 'Rural', 'Urban', 'Urban', 'Urban'],
'Loan Status': ['Y', 'N', 'Y', 'Y', 'Y']}
df = pd.DataFrame(data)
Now to get a correlation we need to convert our categorical features to numerical ones. Of course the choice of order will affect the correlation but luckily all of our categories seem to be binary. If this is not the case you will need to devise a custom ordering.
df = pd.DataFrame(data)
df['Married'] =df['Married'].astype('category').cat.codes
df['Education'] =df['Education'].astype('category').cat.codes
df['Self_Employed'] =df['Self_Employed'].astype('category').cat.codes
df['Area'] =df['Area'].astype('category').cat.codes
df['Loan Status'] =df['Loan Status'].astype('category').cat.codes
Now we can get the correlation between the 'LoanAmount' and all the other features.
df[df.columns[1:]].corr()['LoanAmount'][:]
Now using some machine learning on this data is not likely to work. There just is not sufficient data to extract some relevant information between your large number of features and the loan amount.
You need at least 10 times more instances than features in order to expect to get some good results.
To only obtain the correlation between a feature and a subset of the features you can do
df[['Income', 'Education', 'LoanAmount']].corr()['LoanAmount'][:]
This will take a subset of the DataFrame and then apply the same corr() function as above. Make sure that the subset of columns selected includes the column with which you want to calculate the correlation, in this example that's 'LoanAmount'. |
H: which forecasting models could be chosen?
I'm new to data analysis. I got some data from the regional environmental center.
Measurements:
Datetime, PointID, SubstanceID, Value (substance concentration in air), MeteoID, NextValue
Meteorological data:
MeteoID, Datetime, temperature, wind speed, wind direction, humidity, pressure, precipitation.
Substances (8 substances: CO, Cl2, NO2, etc.; 5 of them have about 1.2 million records):
SubstanceID, Name, MaxVal (maximum allowable concentration value).
Points (9 static monitoring stations):
PointID, Address, Longitude, Latitude.
The Measurements table contains about 8 million records of substance concentrations in air (local MySQL). I linked the measurement data and the meteorological data (closest in time), and added the next value to each measurement record. The time between measurements is 20 minutes on average; for some periods data are missing.
I want to do a predictive analysis and get short-term forecasts of the substance concentrations based on the previous values and the weather data (maybe a few hours ahead). I am still thinking about which methods and techniques to choose, and I want to consider different ones. Which toolkit is most suitable for this case? Should the data be considered as a multidimensional time series? I'm currently looking into the direction of some kind of neural network and an implementation in Python (but I still don't know which package).
AI: Should the data be considered as a multidimensional time series?
This depends on whether the target variable (the one that you want to predict) depends on the others. If not, there is no point in doing it. A fast way of checking whether the variables are linearly dependent, and therefore whether multidimensional forecasting is meaningful, is to compute the linear correlation of the variables. Then select only the variables that have a high correlation (>0.5) with the target variable to include in your prediction model.
I am still thinking about which methods and techniques to choose.
The model that I recommend for time series forecasting is a Recurrent Neural Network. This is because of its inherent ability to store previous timesteps in its memory and to incorporate them into future predictions. This is very important, because it is among the few approaches that exploit the temporal dependencies between samples.
I'm currently looking into the direction of some kind of neural network and an implementation in Python (but I still don't know which package).
The most convenient way of implementing a recurrent neural network in Python is by utilizing the Keras framework. Please go carefully through this tutorial, as it will definitely be a very good first step to attack your problem. |
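As a starting point, a minimal Keras sketch for multivariate short-term forecasting (window length, layer sizes and feature count are placeholders you would adapt to your data):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

n_timesteps, n_features = 24, 8   # e.g. the last 24 measurements, 8 variables
                                  # (concentration + selected weather features)

model = Sequential()
model.add(LSTM(64, input_shape=(n_timesteps, n_features)))
model.add(Dense(1))               # next concentration value
model.compile(optimizer='adam', loss='mse')

# X: sliding windows of shape (samples, 24, 8), y: (samples,) next value
# model.fit(X, y, epochs=20, batch_size=64, validation_split=0.1)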
H: Tuning svm and cart hyperparameters
I am trying to optimize the hyperparameters of SVM and CART with tune() function of e1071 R package, but I have a doubt.
Should I tune the parameters on the training data, fit the model on the training data and then test it on the test data, or may I avoid the second step?
AI: There's two common approaches here, depending on how much data you have.
If you have plenty of data, you'll probably be fine to:
set part of the training set aside, and call it your validation set;
start with a set of hyperparameters, train a model on your training set, evaluate performance on the validation set;
repeat step 2 with different hyperparameters;
pick the hyperparameters which give you the best score on the validation set;
train your model on the training set and the validation set;
Test your model ONCE on your test set.
Otherwise, you can do the following:
start with a set of hyperparameters, evaluate your model's performance on unseen data via cross-validation on the training set;
repeat step 1 with different hyperparameters;
pick the hyperparameters which give you the best cross-validation score;
train your model on the entire training set;
Test your model ONCE on your test set. |
H: To which category does this algorithm belongs?
I have come across the CatBoost package. Among the categories of estimators in scikit-learn, CatBoost would belong to the ensemble methods.
What, then, are the advantages of CatBoost over AdaBoost, bagging, etc.?
AI: Ensemble Methods as defined in Wikipedia:
In statistics and machine learning, ensemble methods use multiple
learning algorithms to obtain better predictive performance than could
be obtained from any of the constituent learning algorithms alone.
All those methods you mentioned are tree-based ensemble models:
Bagging (Breiman, 1996): Fit many large trees to bootstrap-resampled versions of the training data, and classify by majority vote.
Random Forests (Breiman 1999): Fancier version of bagging (only a subset of features are selected at random, unlike bagging where all features are considered for splitting a node).
Boosting (Freund & Schapire, 1996): Fit many large or small trees to reweighted versions of the training data. Classify by weighted majority vote. There is a nice article explaining gradient boosting trees.
In general (in terms of prediction capability, boosting tends to be the best):
Boosting > Random Forests > Bagging > Single Tree
You might be wondering where AdaBoost fits in.
Adaptive Boosting (AdaBoost for short, the first really successful boosting algorithm) works by improving the base learner especially where it fails on its predictions. Please note that the base learner can be any machine learning algorithm upon which boosting is applied to obtain a strong learner. When decision stumps are used as the base learner, AdaBoost is comparable to the above-mentioned boosting trees. You might be asking, again, what their differences are; see the discussion of modern boosting trees below (and this book for more detail):
Modern Boosting Trees
Following the success of gradient boosting trees, several boosting implementations have become popular, namely Gradient Boosting, XGBoost, and CatBoost. They are conceptually very similar, yet they differ in e.g. sampling methods, regularization, handling of categorical features, and performance. I strongly recommend checking this article out if you are interested in learning more.
Personal note: About 1.5 years ago I was a fan of XGBoost (for many reasons), until I experimented with CatBoost. Now I really like CatBoost. First of all, it easily handles a mixture of numerical and categorical features EVEN without encoding the categorical ones. And the default hyperparameters give results comparable to fine-tuned hyperparameters in XGBoost, thus less hassle. At present the CatBoost community is smaller than, let's say, XGBoost's, which makes it a bit less attractive, but it is growing. Last note: I am not affiliated with any of these methods/implementations.
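For example, a minimal CatBoost sketch that passes a categorical column directly, without any manual encoding (toy data, assuming the catboost package is installed):
import pandas as pd
from catboost import CatBoostClassifier

# Tiny toy example: one categorical and one numerical feature, no encoding needed
X = pd.DataFrame({'colour': ['red', 'blue', 'red', 'green', 'blue', 'green'],
                  'size':   [1.0, 2.5, 0.7, 3.1, 2.2, 0.9]})
y = [0, 1, 0, 1, 1, 0]

model = CatBoostClassifier(iterations=50, verbose=False)  # defaults work well
model.fit(X, y, cat_features=['colour'])  # categorical column passed by name
print(model.predict(X))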
Hope it is clearer now! ;-)
H: Is train/test-Split in unsupervised learning of neural network necessary?
I am using an autoencoder for anomaly detection in warranty data. It is unsupervised. I calculate the reconstruction error with the model, and records with a high reconstruction error are considered anomalies. I would like to know whether it is necessary to train/test-split the data.
Any help is much appreciated!
AI: Yes, it is still necessary: you are fitting your model on that data and teaching it to find a good representation of that sample. Validating whether or not a record is actually an anomaly is a lot more difficult on data the model has already been fitted to.
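A rough sketch of how the split fits in (assuming X is your numeric warranty-data matrix and autoencoder is an already-compiled Keras model; the mean-plus-three-standard-deviations threshold is just one common heuristic):
import numpy as np
from sklearn.model_selection import train_test_split

X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

autoencoder.fit(X_train, X_train, epochs=50, validation_split=0.1, verbose=0)

# Score reconstruction error on data the model was NOT fitted on
recon = autoencoder.predict(X_test)
errors = np.mean((X_test - recon) ** 2, axis=1)

threshold = errors.mean() + 3 * errors.std()
anomalies = X_test[errors > threshold]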
H: Which ML method would be best for deriving a rough formula for prediction based on existing data?
Which ML method would you say is the easiest to derive a mathematical formula from, based on already existing data of predictor stats and outcomes?
I have this data:
Opponent 1:
Strength: x
Battle Score: y
I also have a model that I put against the opponent:
Opponent 2:
Strength: z
Battle Score: k
Finally, all outcomes of fights are written into a database (which currently has around 2800 outcomes) and look something like this:
Fight:
Strength: x - z
Battle Score: y - k
Outcome: win/lose
I would want to get proper weights for Strength and Battle Score, so I can derive a simple formula from it and thus somewhat predict whether the next fight will be won or lost.
AI: If you want "the easiest" for "a simple formula", then for sure it will be a linear regression on the battle score, or a logistic regression on "win/lose". That way you'll have the coefficients of the model, and they will be interpretable (which you won't get from a neural network with hundreds of parameters). |
H: How to analyze CNN model summary and improve it?
I am using a CNN (adapted from a few examples on the net) for an image classification task. There are about 8000 images of size 128x128 each, belonging to 13 different classes. Following is the output of model.summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
batch_normalization_1 (Batch (None, 128, 128, 3) 12
_________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 32) 896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 64, 32) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 64, 32) 128
_________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 32, 32, 64) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 32, 32, 64) 256
_________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 32, 128) 73856
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 128) 0
_________________________________________________________________
batch_normalization_4 (Batch (None, 16, 16, 128) 512
_________________________________________________________________
conv2d_4 (Conv2D) (None, 16, 16, 64) 73792
_________________________________________________________________
global_average_pooling2d_1 ( (None, 64) 0
_________________________________________________________________
dense_1 (Dense) (None, 13) 845
=================================================================
Total params: 168,793
Trainable params: 168,339
Non-trainable params: 454
How does one analyze this model summary and how can this model be improved?
AI: What is the meaning of: Non-trainable params: 454? Should this ideally be 0? If so, how can this be made 0.
It is due to the batch-normalisation layers. Each BatchNormalization layer keeps a moving mean and a moving variance per channel; these are updated during training but are not learned through back-propagation, so Keras counts them as non-trainable. In your model that is 2 x (3 + 32 + 64 + 128) = 454 parameters, which is expected and does not need to be made 0.
Should the dense layer be added at the end (in the part you mentioned in your comment), or can it be added in previous layers also?
Not really: dense layers should be placed after the conv layers. What they do is classify the features extracted by the conv layers, while the conv layers themselves reduce the number of parameters and find local patterns.
There is no consensus on how to change the number of filters in convolutional layers, at least as far as I know. But there is one point here: in the following lines of your code, you've used a global pooling layer just before the dense layer. If the number of activations coming from the conv layer is large you can use it, but consider that by doing so you throw away potentially important features. I suggest not doing that, especially for the last conv layer. Also, try to increase the number of neurons in the dense layer, or add extra dense layers, for better accuracy (one possible modification is sketched right after the excerpt below).
conv2d_4 (Conv2D) (None, 16, 16, 64) 73792
_________________________________________________________________
global_average_pooling2d_1 ( (None, 64) 0
_________________________________________________________________
dense_1 (Dense) (None, 13) 845
=================================================================
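For instance, one possible modified head (a sketch only, assuming the same Sequential model object and keeping the conv base from the summary above; the 128 units and the 0.5 dropout rate are arbitrary choices):
from keras.layers import Flatten, Dense, Dropout

# ... same batch-norm / conv / max-pooling blocks as above, then:
model.add(Flatten())                        # keep all 16*16*64 activations instead of averaging them
model.add(Dense(128, activation='relu'))    # a wider classification layer
model.add(Dropout(0.5))                     # some regularisation for the extra parameters
model.add(Dense(13, activation='softmax'))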
As for batch norm, it is used to address a problem called internal covariate shift. It simply tries to keep the distributions of the outputs of the different layers stable in order to facilitate the learning process.
Based on your questions, I highly recommend watching Professor Andrew Ng's course about ConvNets on Coursera.
H: How to correctly perform data sampling for train/test split in multi-label dataset?
Problem statement
I have a text multi-label classification dataset, and I've found a problem with the dataset sampling.
I'm facing two different strategies. The first one consists of preprocessing the corpus all together and then making the train/test split just before training. The second one starts with a pre-made train/test split, so the preprocessing is done separately.
The preprocessing step simply consists of transforming the labels into a one-hot representation and keeping only the N most frequent ones. I expect similar (the same) behaviour, but I'm getting really weird results. Let's take a closer look.
Train+Test all together and then split
Start with:
|ID |TEXT |LABELS|
|-----|------|------|
|1.txt|the |A:B |
|2.txt|lazy |B |
|3.txt|fox |C |
|4.txt|jumps |B:C |
|5.txt|over |C:D |
|6.txt|crazy |D |
After preprocessing and split:
Train
|ID |TEXT|A|B|C|D|
|-----|----|-|-|-|-|
|1.txt|the |1|1|0|0|
|2.txt|lazy|0|1|0|0|
|3.txt|fox |0|0|1|0|
Test
|ID |TEXT |A|B|C|D|
|-----|-----|-|-|-|-|
|4.txt|jumps|0|1|1|0|
|5.txt|over |0|0|1|1|
|6.txt|crazy|0|0|0|1|
The results are good. Let's take this as reference. F1-Score = 0.61.
Pre-made Train/Test split
Start with:
Train
|ID |TEXT|LABELS|
|-----|----|------|
|1.txt|the |A:B |
|2.txt|lazy|B |
|3.txt|fox |C |
Test
|ID |TEXT |LABELS|
|-----|-----|------|
|4.txt|jumps|B:C |
|5.txt|over |C:D |
|6.txt|crazy|D |
After preprocessing:
Train
|ID |TEXT|A|B|C|
|-----|----|-|-|-|
|1.txt|the |1|1|0|
|2.txt|lazy|0|1|0|
|3.txt|fox |0|0|1|
Test
|ID |TEXT |B|C|D|
|-----|-----|-|-|-|
|4.txt|jumps|1|1|0|
|5.txt|over |0|1|1|
|6.txt|crazy|0|0|1|
The results are totally degraded. F1-Score = 0.15.
What is going on? What could be causing the divergence in results?
Extra information
The labels predicted at prediction time are not necessarily compatible with the labels in the test set. I've taken that into account and it is correctly managed; that's not the problem.
The splits are exactly the same. The documents in train/test are the same in both situations.
AI: I'm facing two different strategies. The first one consists in preprocessing the corpus all together and then make the train/test split just before training. The second one starts with a pre-made train/test split, so the preprocess is made separately.
Strategy 2 is "correct" from a "don't reuse your data" perspective. Data pre-processing is part of training. Your train/test pipeline should, in principle, account for this. For example, if you have a feature $A$ with 5 categories, but only 4 of those categories appear in the data, the correct thing to do is to only use 4 categories in your one-hot encoding, and treat the 5th category as an "unknown" value.
In practice, however, this isn't always practical. In some cases the pre-processing pipeline is compute-intensive (e.g. training word vectors) or requires processing a massive amount of data (e.g. ingesting 5 GB of images off of AWS). In those cases, you have to accept the fact that some knowledge of your test set will "leak" into your training set by way of your preprocessing steps, but it's usually not so bad.
Other times you don't have enough data after train/test splitting to conduct preprocessing correctly. Or you didn't stratify your splits, and so you ended up with incomplete class representation or an otherwise imbalanced training set. This is where you find yourself. Just look at the headings to see why:
Train:
|ID |TEXT |A|B|C|
Test:
|ID |TEXT |B|C|D|
Class "A" is missing from your test set, and class "D" is missing from your training set. This will ruin just about any model.
There has been some research done on the stratification of multi-label data. Notably, there is "On the Stratification of Multi-label Data" (Sechidis, Tsoumakas, and Vlahavas; ECML PKDD 2011), which introduces "iterative stratification". There is an implementation of this technique in the scikit-multilearn Python library. |
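A small sketch of using that implementation (assuming X is a feature matrix and Y is the binary indicator matrix of labels, e.g. an array of shape (n_samples, n_labels)):
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# X: (n_samples, n_features), Y: (n_samples, n_labels) one-hot / multi-hot labels
X_train, Y_train, X_test, Y_test = iterative_train_test_split(X, Y, test_size=0.2)

# Each label should now appear in both splits with roughly the same frequency
print(Y_train.sum(axis=0))
print(Y_test.sum(axis=0))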