H: Where and how to do large scale supervised machine learning?
I'm a beginner in ML and I have a large dataset with 15 features and 6M rows, so it becomes challenging to work on it locally. I can train one model locally, but when I perform hyperparameter tuning and cross-validation on my MacBook Pro it runs out of memory and lacks the processing speed and capacity. I tried Spark but that gives poor results, so I would prefer the Python-native ecosystem of pandas and sklearn.
So I want to know: what are my options? How do professionals do it? Should I provision a VM in the cloud with high memory and CPU, or are there other cloud-based or SaaS platforms that I can check out?
AI: First, when working with big data most of the time it's more convenient to work with a random subset rather than the whole thing: usually during the design and testing stages there is no need to work with the full data since optimal performance is not needed.
Second, it's often useful to do an ablation study in order to check that using the full data is actually useful for the model. Sometimes training the model with a subset gives the same results as with the full available data, in which case there's no advantage in using all the data.
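As a minimal sketch of this subset workflow (assuming pandas and scikit-learn; the DataFrame, column names and model here are hypothetical stand-ins for your own data):
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the real 6M-row dataset; replace with e.g. pd.read_csv("your_data.csv")
df = pd.DataFrame(np.random.rand(1_000_000, 15), columns=[f"f{i}" for i in range(15)])
df["target"] = (df["f0"] + np.random.rand(len(df)) > 1.0).astype(int)

subset = df.sample(n=100_000, random_state=42)   # prototype on a random subset only

X, y = subset.drop(columns="target"), subset["target"]
scores = cross_val_score(RandomForestClassifier(n_jobs=-1), X, y, cv=3)
print(scores.mean())
Hyperparameter tuning and cross-validation run on such a subset usually fit comfortably in a laptop's memory; the final model can then be retrained once on the full data.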
Finally there are indeed cases where one needs to process a large dataset or run a long training process which cannot be done on a regular computer. There are various options depending on the environment:
Buy the required hardware (it's rarely the best option but it needs to be mentioned)
Use a commercial cloud service such as AWS
Some organizations have their own in-house computing servers/clusters. In particular, if you're a student it's likely that you have access to this kind of service through your university; ask around (as far as I know most decent universities provide it nowadays). |
H: Using batchnorm and dropout simultaneously?
I am a bit confused about the relation between the terms "Dropout" and "BatchNorm". As I understand it,
Dropout is a regularization technique, which is used only during training.
BatchNorm is a technique which is used to accelerate training, improve accuracy, etc. But I also saw some conflicting opinions about the question: is BatchNorm a regularization technique?
So, can somebody please answer some questions:
Is BatchNorm a regularization technique? Why?
Should we use BatchNorm only during training process? Why?
Can we use Dropout and BatchNorm simultaneously? If we can, in what order?
AI: Batch normalization can be interpreted as an implicit regularization technique because it can be decomposed into a population normalization term and a gamma decay term, the latter being a form of regularization. This was described in the article Towards Understanding Regularization in Batch Normalization, which was presented at the ICLR'19 conference.
Batch normalization happens at training time. However, at inference time we still apply the normalization, but with the mean and variance statistics learned during training, not with the current batch. The Wikipedia page for Batch normalization gives a nice description of this.
It is possible to use both dropout and batch normalization in the same network, with no specific ordering required. However, in some setups, the performance of their combination is worse than applying them separately. This was studied in the article Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift, presented at the CVPR'19 conference:
[...] Dropout shifts the variance of a specific neural unit when we transfer the state of that network from training to test. However, BN maintains its statistical variance, which is accumulated from the entire learning procedure, in the test phase. The inconsistency of variances in Dropout and BN (we name this scheme “variance shift”) causes the unstable numerical behavior in inference that leads to erroneous predictions finally. [...]
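For completeness, a minimal PyTorch sketch of a block that uses both layers together (the ordering shown, BatchNorm before Dropout, is just one common choice, not a prescription from the paper):
import torch
import torch.nn as nn

# A small fully-connected block combining BatchNorm and Dropout.
block = nn.Sequential(
    nn.Linear(128, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 10),
)

x = torch.randn(32, 128)   # batch of 32 samples
block.train()              # training behaviour: batch statistics + random dropping
out_train = block(x)
block.eval()               # inference behaviour: running statistics, dropout disabled
out_eval = block(x)
print(out_train.shape, out_eval.shape)  # torch.Size([32, 10]) in both cases
The train()/eval() switch also answers the second question: both layers stay in the network at inference time, but BatchNorm switches to its accumulated running statistics and Dropout is turned off. |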
H: LSTM for regression in time and space
I am trying to implement my first LSTM for a supervised regression problem.
The data is in the following format: every row has month, day and hour in three separate columns, other 10 predictive features and one output.
Of those 10 predictive features, two are the coordinates in space where the output value was measured. Since the space was modeled as a 6 x 8 grid, there are 48 combinations, and hence 48 rows for every time step.
The number of rows is therefore: 365 (days/year) * 24 (hours/day) * 48 (points in space/hour).
I have already solved the problem using a simple fully-connected neural network, but I have been asked to compare it with other models including LSTM.
The problem is that I don't understand if I have to rearrange the data, and what the input sizes that I should use for the problem are.
I need one prediction for every point in space, for every hour of every day of every month. I would like to implement this in Keras/Tensorflow.
EDIT: to clarify, I am confused on how to implement this as a time-series problem because, for every point in time, I have 48 rows (one per grid point in space), instead of having one row per point in time.
I could technically pivot wider, but this would lose any information about the x- and y- coordinates, which means that I could not extrapolate with other information.
AI: A few days ago I had the same problem, and I found these sites quite useful: https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/, https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ and https://www.tensorflow.org/tutorials/structured_data/time_series.
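Beyond those tutorials, here is a minimal hedged sketch of the part that usually causes the confusion: getting the data into the 3-D shape (samples, timesteps, features) that a Keras LSTM expects. All shapes and array contents below are hypothetical placeholders:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical shapes: each sample is a window of past hours with n_features values per step
n_samples, window, n_features = 1000, 24, 12
X = np.random.rand(n_samples, window, n_features)  # (samples, timesteps, features)
y = np.random.rand(n_samples, 1)                   # one regression target per sample

model = keras.Sequential([
    layers.LSTM(64, input_shape=(window, n_features)),
    layers.Dense(1),                                # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
However you decide to arrange the 48 grid points (as extra features per time step, or as one sequence per grid point), the only hard requirement is that the input ends up in this (samples, timesteps, features) layout. |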
H: Does it make sense to use UMAP for dimensionality reduction for modeling (rather then presentation/exploration)?
Reducing dimensionality via PCA before training is a common practice, but PCA cannot make use of nonlinear relations between features.
I read about UMAP (e.g. https://adanayak.medium.com/dimensionality-reduction-using-uniform-manifold-approximation-and-projection-umap-4aa4cef43fed), a technique for reducing dimensionality that is able to make sense of nonlinear relations between features.
However, I only saw its use in data presentation and exploration.
Would it make sense to use UMAP as a form of feature engineering/dimensionality reduction when creating input for downstream model training?
AI: Yes, it makes sense, and that is one of the advantages UMAP has over t-SNE. While t-SNE has no ability to operate on out-of-sample data, UMAP creates a map to the lower-dimension space that can be applied to out-of-sample data just like the PCA matrix would be applied to out-of-sample data.
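A minimal hedged sketch of this pattern (assuming the umap-learn package; the data here is synthetic):
import numpy as np
import umap                                # pip install umap-learn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 30)
y = np.random.randint(0, 2, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

reducer = umap.UMAP(n_components=5, random_state=0)
X_tr_low = reducer.fit_transform(X_tr)     # learn the embedding on training data only
X_te_low = reducer.transform(X_te)         # apply the learned map to unseen data

clf = LogisticRegression().fit(X_tr_low, y_tr)
print(clf.score(X_te_low, y_te))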
(Certainly we could run everything through the t-SNE algorithm and then do the data split, but that is cheating in a major way. What happens when we get new observations that didn't exist when we built the model, the way Siri is supposed to understand the speech of people who have not even been born yet, once they can talk in a few years?) |
H: How to know if a time series sequence is predictable or just random (univariate time series prediction)?
I'm trying to predict the current value of a variable based on its previous 10 values. I tried multiple time series approaches including ARIMA, LSTM and linear regression... None of them really performed well, so I'm starting to think that the sequence of data I have is just random and not predictable.
Please if you have any advice, let me know. Or if you know of any metrics I can compute to make sure that the sequences of data I have are not just random.
For LSTM, I'm trying to use the window method to do my prediction, as described in the following link:
https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/
Here's the auto-correlation plot and a plot of part of the data sequence I'm using:
AI: A metric for time series forecastability is the spectral entropy. I learned about it from a talk by Rob Hyndman, so here is the description of his implementation in the R tsfeatures package, entropy:
The spectral entropy is the Shannon entropy $-\int_{-\pi}^{\pi} \hat{f}(\lambda) \log \hat{f}(\lambda)\, d\lambda$, where $\hat{f}(\lambda)$ is an estimate of the spectral density of the data. This measures the “forecastability” of a time series, where low values indicate a high signal-to-noise ratio, and large values occur when a series is difficult to forecast.
entropy(AirPassengers)
#> entropy
#> 0.2961049 |
H: How to use efficient net as feature extractor for meta/Few shot learning in PyTorch
I am working on few-shot learning and I wanted to use EfficientNet as the backbone feature extractor. Most of the models use ResNet as the feature extractor. For example, I can use the lines of code below and they extract the features for me:
from model.res50 import ResNet
self.encoder = ResNet()
self.fc = nn.Linear(hidden_dim, num_classes)
def forward(self, data):
    out = self.encoder(data)
    out = self.fc(out)
    return out
I am using this PyTorch implementation of EfficientNet: EfficientNet-PyTorch. I am not sure how to use this EfficientNet as a feature extractor as I did with ResNet. Help is highly appreciated.
AI: Just to make things clear, the features of a model are the raw model outputs before the fully connected layers, i.e. the output of your backbone. So in your code, you get the features of the model as shown in the comments:
from model.res50 import ResNet
self.encoder = ResNet()
self.fc = nn.Linear(hidden_dim, num_classes)
def forward(self, data):
    out = self.encoder(data)   # <-- This line gives you the features of the model (backbone) in the 'out' tensor
    out = self.fc(out)         # <-- These features are then fed to the fully connected layer to produce the final result. At the time of final prediction, activation functions are used (e.g. sigmoid or softmax).
    return out
These features of a CNN can be 1D, 2D or 3D in shape depending on the architecture. So in the link that you have mentioned, you can get the features of EfficientNet using:
x = self.extract_features(inputs) # refer to line number 314 in model.py
Then global average pooling is done to further reduce the size of the output tensor (as originally done by the authors of ResNet). So it is up to you whether you want to do global average pooling or not. Finally, these features are flattened (reshaped into 1D). Ideally I would use this flattened tensor as the model features to perform few-shot learning.
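Putting that together, a minimal sketch with the EfficientNet-PyTorch package (the pooling and flattening steps are optional, as discussed above):
import torch
import torch.nn.functional as F
from efficientnet_pytorch import EfficientNet   # pip install efficientnet_pytorch

backbone = EfficientNet.from_pretrained('efficientnet-b0')   # downloads pretrained weights
x = torch.randn(4, 3, 224, 224)                  # dummy batch of images

with torch.no_grad():
    feats = backbone.extract_features(x)         # (4, 1280, 7, 7) feature maps for b0
    pooled = F.adaptive_avg_pool2d(feats, 1)     # global average pooling
    embeddings = pooled.flatten(start_dim=1)     # (4, 1280) vectors for few-shot learning

print(feats.shape, embeddings.shape)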
That is how you can get the features. And note that we don't consider the outputs of the fully connected layers (even without the activation) to be features. |
H: How can compare suggestion models with different performances?
I have 4 binary classification models, one per class. These models identify which classes a particular student is suited for.
For example, we have user 1 and a recommendation model for each of the 4 classes.
The models identify how much this user would like to take each class.
By reading user 1's personal profile data (features), models A, B, C and D each predict the fitness of their class. The binary classification thresholds were all 50%.
model A: 77%, True
model B: 65%, True
model C: 33%, False
model D: 88%, True
Based on these results, the system recommends classes A, B, and D to user 1.
However, the models' performances are all different. Each model may have a different F1-score, for example, model A: 77%, model B: 64%, model C: 81%, and model D: 55%.
How can we weigh each recommendation score rationally, based on the models' F1-scores?
I also thought that some recommender system might work; however, recommendation algorithms are limited in how they can utilize the user's profile.
AI: I found some information about this problem. This kind of problem is called multilabel classification. Unlike multiclass classification, multilabel classification predicts a set of labels per instance, which can contain several 1s or even all zeros.
You can refer to these ideas in sklearn: https://scikit-learn.org/stable/modules/multiclass.html
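As a small hedged sketch of that multilabel setup with scikit-learn (synthetic data; one 0/1 label column per class, mirroring the four models above):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X = np.random.rand(200, 10)                 # user profile features (synthetic)
Y = np.random.randint(0, 2, size=(200, 4))  # one 0/1 column per class A, B, C, D

clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
probas = [p[:, 1] for p in clf.predict_proba(X[:1])]  # per-class probability of "fit"
print(np.round(probas, 2))
Each output column is still fitted by its own binary classifier under the hood, so the per-class thresholds and F1-scores you already computed carry over directly. |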
H: Calculating confidence interval for model accuracy in a multi-class classification problem
In the book Applied Predictive Modeling by Max Kuhn and Kjell Johnson, there is an exercise concerning the calculation of a confidence interval for model accuracy. It reads as follows.
One method for understanding the uncertainty of a test set is to use a
confidence interval. To obtain a confidence interval for the overall accuracy,
the base R function binom.test can be used. It requires the user
to input the number of samples and the number correctly classified to
calculate the interval. For example, suppose a test set sample of 20 oil
samples was set aside and 76 were used for model training. For this test
set size and a model that is about 80% accurate (16 out of 20 correct),
the confidence interval would be computed using
> binom.test(16, 20)
Exact binomial test
data: 16 and 20
number of successes = 16, number of trials = 20, p-value = 0.01182
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.563386 0.942666
sample estimates:
probability of success
0.8
In this case, the width of the 95% confidence interval is 37.9%. Try
different samples sizes and accuracy rates to understand the trade-off
between the uncertainty in the results, the model performance, and the
test set size.
The dataset used here contains samples belonging to 7 classes. Since this is a multiclass classification problem, not a binary one, shouldn't the probability of success in the null hypothesis be equal to 1/7? Is there a reason why the authors have chosen 1/2 as the probability of success?
AI: The author uses binom.test. The binomial test is used when an experiment has two possible outcomes and you have an idea of what the probability of success is; similar to other statistical tests, you measure whether the success rate in the observed set is significantly different from what was expected.
From here it gets more subtle: the author is trying to obtain a confidence interval for the overall accuracy. Therefore, the quantity of interest is not the accuracy per group (each of the seven oil types) but rather that of the whole model: whether it predicts correctly or not. So you are dealing with a binary outcome, success vs. failure, and you calculate the confidence interval of that.
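For reference, the same exact (Clopper-Pearson) interval that binom.test reports can be reproduced in Python; a minimal sketch assuming statsmodels is installed:
from statsmodels.stats.proportion import proportion_confint

# 16 correct predictions out of 20 test samples, 95% exact (Clopper-Pearson) interval
low, high = proportion_confint(count=16, nobs=20, alpha=0.05, method="beta")
print(low, high)   # roughly 0.563 to 0.943, matching the R output above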
So I assume that if you play with the data and build various models with different data sizes (more/fewer than 76 data points), you will see that using more data leads to a narrower confidence interval. How important the confidence interval is when it comes to predictive analysis is another question, which I encourage you to ask on the sister community https://stats.stackexchange.com/ |
H: LogCoshLoss on pytorch
Hi, I am currently testing multiple losses in my code using PyTorch, but when I stumbled on the log-cosh loss function I did not find any resources in the PyTorch documentation, unlike TensorFlow which has it as a built-in function.
Does it exist in PyTorch under a different name?
AI: No, it is not available in PyTorch under a different name, but you can build it on your own,
or you can look at this GitHub repository, which has multiple loss functions:
import torch as T
import torch.nn as nn

class LogCoshLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, y_t, y_prime_t):
        ey_t = y_t - y_prime_t                        # prediction error
        return T.mean(T.log(T.cosh(ey_t + 1e-12)))    # mean log-cosh of the error |
H: How to compare models and which settings to keep constant?
I already posted this in another forum but no response. So, posting it here.
Currently, in clinical practice, clinicians use a score (as a single feature) to predict the mortality of a patient. Now in my project based on clinician inputs, we have created two new features to see whether we can enhance the prediction accuracy. Since our objective is to predict yes or no, I tried logistic regression and got the below results
Now my question is as below
a) Since our dataset of 3.2k rows is imbalanced (87:13), I tried to optimize the decision threshold for classification. But the optimized decision threshold value changes based on the feature set. So, I guess that's expected and my comparison is still valid. For example, I am comparing model A with only one feature (existing_feature) with model B with two features (existing_feature and new_feat_1) and model C with three features (existing_feature, new_feat_1, and new_feat_2). Is my comparison valid? I can't have the same threshold because when I have a different number of features, the optimal threshold will definitely change. I can't just choose 0.5 as the default because my dataset is imbalanced (87:13). So any advice, please? Should I change or not change the optimal threshold for different models?
b) Since my dataset is imbalanced, am focusing on f1-score as an evaluation metric. We can see that there is some improvement in the f1-score due to the addition of 2 new features. How can I know that these 2 new features are indeed useful (and add value to prediction) and not just by some random chance? Any suggestion on how to assess the usefulness of this feature? Please note that am using the random_state variable in the scikit-learn log regression function. So, everything else is controlled. It's just that am adding new features and changing the optimal threshold for it. I don't change anything else.
c) In a hospital setting, false negatives are costly. Meaning, if we fail to predict that a person at high risk of dying will die, it is costly. So, I guess we have to look at precision for it. However, you can see that my recall is dropping heavily. So, how should I decide whether this model and the new features are helpful or not?
d) For the purpose of interpretability, I used only the logistic regression model. Do you advise me to run other models like SVM, RF, and Xgboost, etc? Since my dataset is imbalanced and non-linear separation, do you think it's good to try other models as well. or based on your experience, looking at the results, do you think there cannot be any further improvement to this?
e) I am using the class_weight=balanced parameter to run my log reg model, as my dataset is imbalanced. Do you advise me to oversample the minority class? In reality, people who are dead are always fewer compared to people who are alive (for the problem that I am studying). Meaning, the positive class will always be the minority class. So should I oversample it, or just use the class_weight=balanced parameter and carry on with my tasks?
f) I can use gridsearchCV and find the best parameter to be used for model A. Can the same parameters be best for model B? Do I have to run gridsearchCV for each of the models A, B, and C to identify the best parameters. If I am going to use different best parameters for each model, then I cannot compare them for performance. Am I right? Because I am changing the hyperparameters which violate controlled settings criteria for model comparison and evaluation. How should I do it?
AI: General remark: your two new models give very different results in terms of precision and recall, I find this a bit surprising. I would probably try different learning methods (e.g. decision trees, SVM) in order to investigate if this is really due to the features or not.
a) Absolutely, the threshold should be specific to the model and features, it would be suboptimal to keep the same value.
b) Typically one would check whether there is a statistically significant difference in the performance between the models. I'm not sure which significance test is appropriate here.
c) Then you should evaluate your models with the $F_{\beta}$-score instead of just F1, with $\beta$ higher than 1 in order to favour recall. I'd say at least 2; it depends on how much more costly a FN error is than a FP error. Note that the threshold should also be selected based on this measure.
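A tiny sketch of this with scikit-learn (the labels are made up and $\beta=2$ is only an example value):
from sklearn.metrics import fbeta_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

# beta=2 weights recall roughly twice as much as precision
print(fbeta_score(y_true, y_pred, beta=2))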
d) I would certainly try different methods, because it's always possible that another method would perform better. I always recommend decision trees because they are robust and interpretable.
e) I'm not sure but there might be a confusion here: class_weight=balanced means that you give a higher importance to the minority class than what it really represents in the data, in other words the learning algorithm will work as if the two classes have the same number of instances. So as far as I know there would be no point in resampling additionally to using the weights. However you could provide manual weights, for instance 0.1,0.9 if you want to favour detecting the minority class even more (that would increase recall). By the way I would suggest to also try without using weights at all: it's unlikely to be good in terms of recall but that it could be useful just to know the precision/recall in this case.
f) For tuning hyper-parameters the important point is to use a validation set different from the final test set. For every "method" (set of features) independently:
Use grid search to determine the best parameters for the method, evaluating on the validation set only.
Pick the best parameters. At this stage if you want you can train the final model using both training set+validation set (but only with the selected parameters).
Then you can apply the 3 final models on the unseen test set and compare their performance. Essentially the parameter tuning is part of the training process (it's a kind of "meta-training"), so there's no data leakage as long as you don't use the test set for determining the best parameters. |
H: How can I aggregate/combine 3 columns of a data frame into one column with the sum of the values of the other three in R?
Part of my dataset is shown in this image, I want to combine the columns GR_S01_w1_a, GR_S01_w1_b and GR_S01_w1_c into a single column - GR_S01_w1 - whose values are the sum of the three.
I know how to use mutate to add a new column which does this, but I also want to delete the other three, and do this about 100 more times for all the other samples I have. So essentially - I have three replicates of each sample in the form of a column of the format samplename_a, samplename_b and samplename_c, and I want to replace these with a single column, many times over.
I have tried using mutate like this -
Gregory <- Gregory %>% mutate(GR_S01_w1 = sum(GR_S01_w1_a, GR_S01_w1_b, GR_S01_w1_c))
but for all of the samples that I have this would of course take far too long. Is there a quick way for me to do this (other than manually on excel which is what I'm doing at the moment)?
AI: Answer
This can be done by following a number of steps:
Use grep to get the groups of columns to sum over
Use rowSums over each group of columns
base <- c("GR_S01_w1", "GR_S01_w2")
cols <- lapply(base, grep, names(Gregory), fixed = TRUE)
for (i in seq_along(base)) {
Gregory[, base[i]] <- rowSums(Gregory[, cols[[i]]])
}
This automates the whole process without defining any names manually (apart from the group names), and without having to transform your dataset to long and then back to wide.
Finding the sample names automatically
If you also don't want to have to specify the samples by hand, then you can use grep and sub. Here, we make the assumption that your structure is always "sample underscore letter", e.g. sample_d or test_sample_b. We can do this by using grep:
relevant_columns <- grep(".*_[a-zA-Z]{1}$", names(Gregory), value = TRUE)
base <- unique(sub("(_[a-zA-Z]{1})$", "", relevant_columns))
base
# [1] "GR_S01_w1" "GR_S01_w2"
What the grep term means:
.*: Any number of any characters.
_: Presence of an underscore, followed by...
[a-zA-Z]: Any of the alphabetic letters (lowercase or upper case).
{1}: Only one of those.
$: This is the end of the word.
Next, we just use sub to remove that part, select the unique values, and we're done.
This does assume that:
There are no other columns that end with _[a-zA-Z]; you can just avoid those columns by inputting names(Gregory)[-1] or whatever columns you DON'T want to consider.
Names are only followed by ONE letter, not e.g. two or three. |
H: TypeError: __init__() missing 1 required positional argument: 'num_features'
I was trying to denoise an image using Deep Image Prior. When I use ResNet as the architecture, I get an error.
INPUT = 'noise' # 'meshgrid' get_noise function
pad = 'reflection'
OPT_OVER = 'net' # 'net,input'
reg_noise_std = 1./30. # set to 1./20. for sigma=50
LR = 0.01
OPTIMIZER='adam' # 'LBFGS'
show_every = 100
exp_weight=0.99
num_iter = 1000
input_depth = 3
figsize = 4
net = get_net(input_depth, 'ResNet', pad, upsample_mode='bilinear').type(dtype)
net_input = get_noise(input_depth, INPUT, (img_pil.size[1], img_pil.size[0])).type(dtype).detach()
# Compute number of parameters
s = sum([np.prod(list(p.size())) for p in net.parameters()]);
print ('Number of params: %d' % s)
# Loss
mse = torch.nn.MSELoss().type(dtype)
img_noisy_torch = np_to_torch(img_noisy_np).type(dtype)
the error:
TypeError Traceback (most recent call last)
<ipython-input-7-7c96e1404ffa> in <module>()
16
17
---> 18 net = get_net(input_depth, 'ResNet', pad, upsample_mode='bilinear').type(dtype)
19
20
2 frames
/content/models/common.py in act(act_fun)
90 assert False
91 else:
---> 92 return act_fun()
93
94
TypeError: __init__() missing 1 required positional argument: 'num_features'
But get_net clearly doesn't have num_features as an argument. You can see the source of get_net below.
def get_net(input_depth, NET_TYPE, pad, upsample_mode, n_channels=3, act_fun='LeakyReLU', skip_n33d=128, skip_n33u=128, skip_n11=4, num_scales=5, downsample_mode='stride'):
    if NET_TYPE == 'ResNet':
        # TODO
        net = ResNet(input_depth, 3, 10, 16, 1, nn.BatchNorm2d, False)
    elif NET_TYPE == 'skip':
        net = skip(input_depth, n_channels,
                   num_channels_down = [skip_n33d]*num_scales if isinstance(skip_n33d, int) else skip_n33d,
                   num_channels_up = [skip_n33u]*num_scales if isinstance(skip_n33u, int) else skip_n33u,
                   num_channels_skip = [skip_n11]*num_scales if isinstance(skip_n11, int) else skip_n11,
                   upsample_mode=upsample_mode, downsample_mode=downsample_mode,
                   need_sigmoid=True, need_bias=True, pad=pad, act_fun=act_fun)
    elif NET_TYPE == 'texture_nets':
        net = get_texture_nets(inp=input_depth, ratios = [32, 16, 8, 4, 2, 1], fill_noise=False, pad=pad)
    elif NET_TYPE == 'UNet':
        net = UNet(num_input_channels=input_depth, num_output_channels=3,
                   feature_scale=4, more_layers=0, concat_x=False,
                   upsample_mode=upsample_mode, pad=pad, norm_layer=nn.BatchNorm2d, need_sigmoid=True, need_bias=True)
    elif NET_TYPE == 'identity':
        assert input_depth == 3
        net = nn.Sequential()
    else:
        assert False
    return net
Where am I going wrong in the Deep Image Prior code?
AI: The issue is in the get_net function. The ResNet model is not instantiated correctly; try the code snippet below.
def get_net(input_depth, NET_TYPE, pad, upsample_mode, n_channels=3, act_fun='LeakyReLU', skip_n33d=128, skip_n33u=128, skip_n11=4, num_scales=5, downsample_mode='stride'):
    if NET_TYPE == 'ResNet':
        # TODO
        # net = ResNet(input_depth, 3, 10, 16, True, 'LeakyReLU', True, nn.BatchNorm2d, 'reflection')
        net = ResNet(
            num_input_channels=input_depth,
            num_output_channels=3,
            num_blocks=10,
            num_channels=16,
            need_residual=True,
            act_fun='LeakyReLU',
            need_sigmoid=True,
            norm_layer=nn.BatchNorm2d,
            pad='reflection')
    elif NET_TYPE == 'skip':
        net = skip(input_depth, n_channels,
                   num_channels_down = [skip_n33d]*num_scales if isinstance(skip_n33d, int) else skip_n33d,
                   num_channels_up = [skip_n33u]*num_scales if isinstance(skip_n33u, int) else skip_n33u,
                   num_channels_skip = [skip_n11]*num_scales if isinstance(skip_n11, int) else skip_n11,
                   upsample_mode=upsample_mode, downsample_mode=downsample_mode,
                   need_sigmoid=True, need_bias=True, pad=pad, act_fun=act_fun)
    elif NET_TYPE == 'texture_nets':
        net = get_texture_nets(inp=input_depth, ratios = [32, 16, 8, 4, 2, 1], fill_noise=False, pad=pad)
    elif NET_TYPE == 'UNet':
        net = UNet(num_input_channels=input_depth, num_output_channels=3,
                   feature_scale=4, more_layers=0, concat_x=False,
                   upsample_mode=upsample_mode, pad=pad, norm_layer=nn.BatchNorm2d, need_sigmoid=True, need_bias=True)
    elif NET_TYPE == 'identity':
        assert input_depth == 3
        net = nn.Sequential()
    else:
        assert False
    return net |
H: Shapley value, conditional expectation vs reference point
In Shapley, the marginal contribution of a feature is computed by comparing the performance of a model with and without a feature over all possible subsets of features.
A common choice is to use the average value of a feature when that feature is not present in the selected subset.
What would be the implication of using a fixed constant value for a missing feature (e.g., a reference point) instead of its average value?
AI: Most probably it will introduce a bias in the estimation of the Shapley value.
However, on average this should not pose a problem.
Taking the average value (i.e. the arithmetic mean) of a feature over a given dataset in no way guarantees that it is the most probable or characteristic value, unless the distribution is of a specific kind (e.g. normal).
So taking another reference point might introduce bias, but on average, given random means, it should not pose a problem.
The above is a deduction, not a detailed analysis.
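To make the two choices concrete, here is a hedged sketch with the shap package (the model, the synthetic data and the zero reference point are all hypothetical); the background data passed to the explainer plays the role of the values used for "missing" features:
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(300, 4)
y = X[:, 0] * 2 + np.random.rand(300)
model = RandomForestRegressor().fit(X, y)

background_mean = X.mean(axis=0, keepdims=True)   # common choice: the average values
background_ref = np.zeros((1, X.shape[1]))        # a fixed reference point instead

expl_mean = shap.KernelExplainer(model.predict, background_mean)
expl_ref = shap.KernelExplainer(model.predict, background_ref)

x0 = X[:1]
print(expl_mean.shap_values(x0))   # attributions relative to the average point
print(expl_ref.shap_values(x0))    # attributions relative to the fixed reference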
References:
A Unified Approach to Interpreting Model Predictions
Shapley value |
H: Meaning of the covariance matrix?
I wonder about the excessive usage of the covariance matrix across all kinds of machine learning tools. So far, for me, the covariance is just a pre-step to get to the correlation. And since there is an obvious use for the correlation itself, I wonder why I encounter the covariance so often, and why in general it is used so much. What are the purposes of the covariance matrix?
AI: It is essential when you look at the theory of linear models and matrix algebra, and you can also see its usefulness in the book Methods of Multivariate Analysis (because in ML you use more than one variable, and the theory behind that is explained there in detail).
In simple words: the covariance matrix describes the magnitude of the spread and the directional relationship between variables for multivariate data in multidimensional space, and it is a useful tool for decorrelating variables or as a transformation applied to other variables. Here is some math info: cor and cov, cor vs cov in ML space.
Also, it is very useful for models, and especially for big-data modeling and dimensionality reduction, i.e. PCA and its family. I use it for feature reduction too; there you can start from either the covariance matrix or the correlation matrix to build the principal components and more (this is a lot to explain here and is really part of explaining PCA).
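A tiny sketch of these relationships with NumPy (covariance as the unscaled version of correlation, and the matrix whose eigendecomposition PCA is built on):
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 variables
X[:, 1] += 0.8 * X[:, 0]               # introduce some linear dependence

C = np.cov(X, rowvar=False)            # 3x3 covariance matrix
R = np.corrcoef(X, rowvar=False)       # correlation = covariance rescaled by std devs
print(C)
print(R)

# PCA directions are the eigenvectors of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
print(eigvals)                          # variance along each principal axis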
Hope this helps; I can add a mathematical explanation if needed. |
H: How to design a model for contour recognition? In particular, how to shape the output layer?
I want to design and train a neural network for the automatic recognition of the edges, in some microscopic images.
I am using Keras for a start, I may consider PyTorch later.
The structure of the images is rather simple, with some dark areas, and some clear areas, relatively easy to distinguish, and the task is to select the pixels of the contour between dark and clear areas.
The transition between dark and clear is gradual, so my result is not a single line of edge pixels, but rather a 10 or 15 wide ribbon of edge pixels.
I have manually annotated 200-something images, so for each image I have another image, of the same size, where the pixels of the contours are black, and all the other pixels are white.
I have seen many tutorials on how to design, compile and fit a model (a neural network), and then how to test it, using the manually annotated data.
However, most of the tutorials work on problems of classification, where the number of neurons in the output layer is the number of categories.
My problem is not a problem of classification, and ideally my output should be an image of the same size of the input.
So, here is my question:
What is the best way to design the output layer? Is a layer with a number of neurons equal to the number of pixels the best idea? Or this is a waste, and there is a more efficient way?
Addendum
The images are "easy", but it is still difficult to find the contour pixels, so it is worth using the machine learning approach.
The transition between dark and clear is a little gradual, so my result is not a single line of pixels on the edge, but rather a band, a 10 or 15 wide ribbon of edge pixels.
Since I am after a ribbon of pixels, my categories should be "edge" and "not-edge". If I use the categories "dark pixels" and "clear pixels", and then numerically find the pixels between the two areas I do not get the "ribbon" result, which I need.
AI: You probably don't require the use of artificial intelligence for your task; it seems quite 'easy' to do. Check out the OpenCV library, it probably has a function that does the job without AI (here is one for example).
The use of a neural network may get you a more accurate result though, so I would do it using classic image segmentation (U-Net is one of the most commonly used architectures), with 2 classes: one for the darker areas, and one for the lighter areas. Once you have your image segmented into the 2 classes, lighter and darker areas, apply an algorithm that detects whether a pixel is an edge to get your result.
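On the output-layer question specifically, here is a minimal hedged sketch in Keras (a toy fully convolutional network, not a full U-Net, which would add skip connections; the image size is hypothetical). Instead of one dense neuron per pixel, the usual choice is a convolutional output of the same spatial size with one sigmoid unit per pixel, trained against your black/white annotation masks:
from tensorflow import keras
from tensorflow.keras import layers

H, W = 256, 256                                     # hypothetical image size

model = keras.Sequential([
    layers.Input(shape=(H, W, 1)),                  # grayscale microscope image
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                          # downsample
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.UpSampling2D(),                          # back to full resolution
    layers.Conv2D(1, 1, activation="sigmoid"),      # per-pixel edge / not-edge probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
The output still has exactly one value per pixel, but it is produced by shared convolutional filters rather than an enormous dense layer, which is far more efficient. |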
H: CNN: training accuracy vs. validation accuracy
I just finished training two models, one pretrained and the other trained from scratch, and afterwards created two diagrams with their data, but as I am very new to machine learning, I don't understand what they show.
Why is the training accuracy so low? Did I use too little data? I had about 7200 pictures for training and 800 for validation!
What does it mean that the validation accuracy of the pretrained algorithm is so much higher than the other one? Does it mean the pretrained one is two times better than the one trained from scratch?
AI: Why is the training accuracy so low?
This is because your model is underfitting. A few of the reasons for this could be:
you might be using a small learning rate.
your model architecture is simple (small) and not big enough to recognize patterns in the data. Try adding more layers.
try removing regularization, if any.
To the best of my knowledge, I think the following could be some of the reasons for the validation accuracy being higher than the training accuracy. You might consider investigating these areas.
The dataset domain might not be consistent. This means that there might be different types of images present in your dataset. For some images (or one type of images) the model is able to learn correctly (hence the ~50% accuracy on the train set), and for the rest of the images the model gets confused, i.e. it is difficult for the model to recognize that other 50% of images. There is also a possibility that the particular type of images that are easier for the model to recognize is what ends up in the validation set. You should ensure that the domain of the train and validation sets is the same.
The dataset might not be properly split? This means the domain might be consistent but the dataset has imbalanced classes? There might be a possibility that the train set might contain some classes having more instances (majority classes) and some classes having very less instances (minority classes). Generally, model gets a hard time recognizing these minority classes, hence less train accuracy. And perhaps the validation set is containing only majority classes, which are very easy for the model to recognize.
What does it mean, that the validation accuracy of the pretrained algorith is so much higher as the other one? Does it mean the pretrained is two times better then the one trained from scratch?
Yes, it means that given the same 800 unseen images, the pretrained model's predictions are two times better than those of the model trained from scratch.
Edited as per the suggestion from Nikos M. |
H: How do I find pairwise maximum of multiple rows in a column using python?
I have a column with float values. The column has 300 rows. I want to get the pairwise max of each row with the row below it. For example: if my column has 2, 25, 1, 24 as row values, I want to find max of 2 and 25, then max of 25 and 1 and so on. I also want to be able to create a new column with max values. How do I do it?
AI: This would be another, slightly shorter approach:
df.assign(resultado = lambda x: x.rolling(2).max())
EDIT:
For your comment try:
import numpy as np

def idx(x):
    return x.index.values[np.argmax(x.values)]

df.rolling(2).agg(['max', idx])
This will return both the pairwise maximum and the index that corresponds to that value. |
H: How to train 3 models with single loss function in pytorch
optimizer=torch.optim.AdamW(list(model3.parameters())+list(model1.parameters())+list(model2.parameters()))
optimizer.zero_grad()
prediction=model3(model1(x)+model2(x))
loss=nn.BCELoss(prediction,labels)
loss.backward()
optimizer.step()
How can I update the parameters of all three models with a single loss?
AI: There is no problem with the code you showed: the gradients are propagated all the way back through all three models, unless you do something to prevent it (e.g. .detach(), param.requires_grad = False, etc.). |
H: How can i extract words from a single concatenated word?
I'm stuck on this problem and would love some input.
I have multiple words such as getExtention, getPath, someWord or someword, and I want to separate each concatenated word into its component words, such as:
getExtention ---> [get][Extention].
someword --> [some][word].
The concatenated words can also be in all small letters.
Do you guys have any ideas how I could achieve that?
AI: You can use a package that relies on a spellchecker to find the best way to split, like this one: https://pypi.org/project/compound-word-splitter/
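For the camelCase part of your examples (getExtention, getPath) no dictionary is needed at all; a small regex sketch handles it, and a dictionary-based splitter like the linked package is only required for the all-lowercase case like someword:
import re

def split_camel(word):
    # Split on uppercase boundaries: "getExtention" -> ["get", "Extention"]
    return re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])', word)

print(split_camel("getExtention"))  # ['get', 'Extention']
print(split_camel("getPath"))       # ['get', 'Path']
print(split_camel("someword"))      # ['someword'] -> needs a dictionary-based splitter
For fully lowercase concatenations there is no way around a wordlist, which is exactly what the linked package relies on. |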
H: Gradient descent method
If we suppose that this is the formula for the gradient descent method:
$$x_{n+1}=x_n-\lambda\cdot{{df(x)}\over{dx}},\ n=0,1,2,3,...$$
Since there is no exact value that we subtract other than the derivative, does it mean that we subtract the value of the derivative only to control the direction of the next position of $x$? And why do we subtract the derivative and not some other value that depends on $x$?
AI: Suppose I want to find the minimum of a function $f(x)$ around $x_m$. Then I have 3 options:
minimum is $x_m$ within some accuracy (end of procedure)
minimum is to the right of $x_m$ (and update my initial guess appropriately)
minimum is to the left of $x_m$ (and update my initial guess appropriately)
So:
If $x_m$ is close (within desired accuracy) to the minimum then derivative at this point will be (approximately) zero (basic analysis).
If the minimum is to the right of $x_m$ (let's say at $x_{m'}$) then $f'(x_m)$ will be negative (the negative of the derivative points in the direction of the minimum).
If the minimum is to the left of $x_m$ (let's say at $x_{m'}$) then $f'(x_m)$ will be positive (the negative of the derivative points in the direction of the minimum).
So far the derivative seems a good choice to use for updating my initial guess. If the magnitude is also taken into account we have:
If $x_m$ is close (within the desired accuracy) to the minimum then the derivative at this point will be (approximately) zero.
If the minimum is to the right or left of $x_m$ (let's say at $x_{m'}$) then $f'(x_m)$ will have a non-zero magnitude that, for well-behaved functions, becomes greater the further $x_m$ is from the minimum.
So derivative magnitude can also be used.
So far we have observed and deduced that: $x_{m'}-x_m = \Delta x \sim -f'(x_m)$ or that: $x_{m'} = x_m - \lambda f'(x_m)$ is a good scheme to update my guess.
$\lambda$ (learning rate) is a necessary parameter that satisfies two things:
if $x$ and $f'(x)$ have different physical dimensions then it scales the derivative appropriately to match the dimensions of $x$.
It allows one to adjust the step size so as not to overshoot the desired minimum or move too slowly, in case the derivative has some critical points.
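A tiny numerical sketch of this update rule, minimizing $f(x) = (x-3)^2$ whose derivative is $2(x-3)$:
def f_prime(x):
    return 2 * (x - 3)           # derivative of f(x) = (x - 3)^2

x = 10.0                         # initial guess
lam = 0.1                        # learning rate (lambda)
for n in range(50):
    x = x - lam * f_prime(x)     # x_{n+1} = x_n - lambda * f'(x_n)

print(x)                         # converges towards the minimum at x = 3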
References:
Gradient descent
Why sign of gradient is not enough for finding steepest ascent |
H: Classification report and confusion matrix problem
I am working on a sign language recognition system using HOG and KNN. I have 26 classes with 180 images per class. The dataset was split into 2/3 (67%) for training and 1/3 (33%) for testing after feature extraction with HOG. The model achieved a recognition accuracy of 95% on the testing dataset. But I am not understanding the confusion matrix and classification report generated. I believed 1/3 (33%) of each class should be 60 images for testing per class. The confusion matrix and classification report generated are shared below. A very confusing report. Kindly help. I can see a TP count of 65, which is more than the expected number of test images per class.
AI: This looks completely normal to me: your dataset has 26x180=4680 instances, so the test set should have 4680x0.33=1544.4 instances. According to the classification report it contains 1545 instances, which is consistent with this calculation.
It's important to understand that by default the dataset is split between training and test set randomly across all the instances, without taking their class into account. This means that by chance some of the classes can have a bit more or a bit less than 33% instances in the test set. This is what can be observed in the classification report and it's not a problem.
Sometimes this can be an issue for classes which have very few instances in total. In that case one should use stratified sampling in order to apply the proportion to each class independently.
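With scikit-learn this is just the stratify argument of the split; a minimal sketch (the feature matrix and labels below are synthetic stand-ins for your HOG features and 26 classes):
import numpy as np
from sklearn.model_selection import train_test_split

# X: HOG feature matrix, y: the 26 class labels (synthetic stand-ins here)
X = np.random.rand(4680, 100)
y = np.repeat(np.arange(26), 180)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=42)
print(np.bincount(y_test))   # each class now contributes 59 or 60 test images
This keeps the per-class counts in the confusion matrix close to the 60 you expected, if that is what you prefer. |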
H: Should I train from scratch or use pre-trained weights?
With YOLOv4, I am training on an 80k-image dataset that is used to classify different species of fish. Currently, I am using the following pre-trained weights: yolov4.conv.137.
Now I was wondering: is this a backbone, or weights trained on the COCO dataset?
Would I benefit more from using these pre-trained weight files, or from just training from a blank slate?
AI: My general intuition says yes, your model might benefit from pre-trained weights.
But in transfer learning, two things can happen:
Positive transfer - the model pretrained on a big dataset performs very well when trained on the new dataset.
Negative transfer - the model pretrained on a big dataset shows a performance decline when trained on the new dataset.
I would suggest a small experiment:
Make a small subset of your fish dataset of ~10% (~8K images) with the same class proportions as the original, and train your model in both settings, with transfer learning and without. This small experiment will quickly determine which method is the most successful in your case (fish dataset and YOLOv4).
More detailed discussion on the same is here -
https://stats.stackexchange.com/questions/450801/the-negative-transfer-problem-in-machine-learning |
H: Can I set the rewards of a multi armed bandit problem with deterministic values?
I am new to reinforcement learning and I am trying to understand the multi-armed bandit problem.
I think I have understood that it consists in choosing the bandit that maximizes the future reward.
My doubt is in the implementation. In all the examples I have seen, the reward from each bandit is set with a probability distribution. My question is:
Can I set the rewards of a multi armed bandit problem with deterministic values?
AI: I think I have understood that it consists in choosing the bandit that maximizes the future reward.
Yes, in expectation.
The target of finding the best action is often easy to get eventually correct - you could for instance try each action 1000 times in turn and calculate the average reward from it. So, there are usually two other important goals that bandit algorithms try to solve:
Finding the best action with the least number of trials.
Scoring the highest reward during learning. This may also be expressed as "minimising regret", the idea of looking back once you know what the best action is and figuring out how far away from optimal all your previous choices were.
Can I set the rewards of a multi armed bandit problem with deterministic values?
You can. A deterministic value is a special case of a probability distribution - it has $p(X = r) = 1$ and $p(X \ne r) = 0$ - so you can even use exactly the same description of the problem.
However, deterministic multi-armed bandits are not very challenging. If you know in advance that the results are deterministic then you could try each action once and then pick the maximising result from then on. Although technically this is still a learning algorithm, it is not a very interesting one.
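A tiny sketch of that trivial learning procedure (the reward values are made up):
import numpy as np

rewards = [1.0, 3.0, 2.0]                 # hypothetical deterministic reward per arm

def pull(arm):
    return rewards[arm]                   # same value every time: no noise

# "Learning": one pull per arm is enough to identify the best one
observed = [pull(a) for a in range(len(rewards))]
best = int(np.argmax(observed))
print(best)                               # from now on, always pull this arm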
Deterministic contextual bandits, where the agent is given an observation that is known to influence the rewards obtained from different actions, do still have some challenge when solving for deterministic rewards. The agent then has to learn the mapping from input features to rewards whilst exploring different actions. |
H: An universal sentence encoder for a specific language?
I am making a model that uses encoded articles (multiple sentences). I have found the Universal Sentence Encoder by Tensorflow, but it says it is only for English. Specifically, I am looking for an encoder for the Macedonian language. Can I use this encoder and if not is there a multilingual model that understands Macedonian?
AI: This Universal Sentence Encoder that you link is trained specifically on English data, so it's going to work very poorly on any other language (to be clear, it's likely to produce garbage).
Unfortunately it's quite unlikely that you'll find a similar pre-trained model for Macedonian. You would have to train your own model from Macedonian data, and you need a really large amount. Btw that's the main reason why these pre-trained models are often trained on English only, since there's a lot of English text available. In case you want to try this, there is a Macedonian corpus as part of the Universal Dependencies project. |
H: Can this dataset be separated linearly?
Is this dataset linearly separable? If not, can it be converted into one by applying some function as it seems to follow the same pattern?
Also, which classification algorithms could be used to fit this dataset?
AI: As Nikos said, the dataset is not linearly separable: one cannot draw a single straight line which has all the red points on one side and all the green points on the other.
There is an obvious repetition pattern along $x_2$. If we assume that $x_2$ ranges from 0 to 100 on this graph, the value $x_2$ modulo 50 would make it linearly separable. That would be feature engineering (see the sketch below).
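A hedged sketch of that idea (the data and labels here are synthetic, generated under the same 0-100 range and period-50 assumption made above):
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 100, 1000)
x2 = rng.uniform(0, 100, 1000)
y = ((x2 % 50) > 25).astype(int)          # synthetic labels following the assumed pattern

X_raw = np.column_stack([x1, x2])
X_eng = np.column_stack([x1, x2 % 50])    # engineered feature: x2 modulo 50

print(LogisticRegression(max_iter=1000).fit(X_raw, y).score(X_raw, y))  # limited: not linearly separable
print(LogisticRegression(max_iter=1000).fit(X_eng, y).score(X_eng, y))  # close to 1.0: now separable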
Notice that $x_1$ doesn't have any impact on the class. So imho the best algorithm in this case is to just map $x_2$ intervals to the class. |
H: How to convert a dataframe into a single dictionary that is not nested?
I have a dataframe as below:
+----+----------------+-------------+----------------+-----------+
| | attribute_one| value_one | attribute_two | value_two |
|----+----------------+-------------+----------------+-----------|
| 0 | male | 10 | female | 15 |
| 1 | 34-45 | 17 | 55-64 | 8 |
| 2 | graduate | 32 | high school | 5 |
...
I want to convert it into dictionary that gives this output:
{'male': '10',
'34-45':'17',
'graduate':'32'
'female':'15',
'55-64': '8',
'high school': '5'
}
How do I do that? I only want attribute columns as keys and their value columns as values.
AI: The explanation is given in the comments.
# Created some data like yours
data = {
'attribute_one':['male','34-45','graduate'],
'value_one':[10,17,32],
'attribute_two':['female','55-64','high school'],
'value_two':[15,8,5]
}
# Pandas for handling dataframes
import pandas as pd
# Created a dataframe from the given data
df = pd.DataFrame(data)
# Sliced the columns of interests
df1 = df.iloc[:,0:2] # all values of first two columns
df2 = df.iloc[:,2:4] # all values of last two columns
# Final dictionary for the output
your_dict = {}
# Iterate through numpy values of dataframes
for i in df1.to_numpy():
    your_dict[i[0]] = i[1] # populate the dictionary with first dataframe
for i in df2.to_numpy():
    your_dict[i[0]] = i[1] # populate the dictionary with second dataframe
# Your dictionary is ready
print(your_dict) |
H: What we can learn from the data if PCA scree plot bins are almost the same?
Suppose we have a data-set with 4 features.
Suppose we calculate the PCA for this dataset and we plot the scree-plot:
What we can learn from the features? Can we say that they are not linearly correlated? Can we say something else?
AI: In short: what you can learn is that your data's variability is distributed across all of the orthogonal dimensions, and PCA cannot reduce it further.
Longer: in an extreme scenario, imagine you have four features/dimensions with variability distributed evenly across all of them. It is hard to picture a four-dimensional object, but think of a perfect cube (three dimensions) whose edges take on various colors (the fourth dimension). Here is a dedicated post on understanding four-dimensional objects geometrically.
Since all these dimensions are orthogonal (in this example the color is not, but just imagine it meets the statistical assumptions of PCA), PCA cannot do much. (In fact the PCA of a cube is a cube, and nothing less.)
What you can do: explore some non-linear dimensionality reduction. Principal components from PCA are orthogonal; perhaps if you choose a non-linear method you will find a representation that captures the variability in fewer dimensions, though with four evenly loaded components I doubt you can reduce it much.
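A quick hedged illustration of the flat scree plot scenario with synthetic isotropic data (scikit-learn assumed):
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))        # 4 independent features: variance spread evenly

pca = PCA().fit(X)
print(pca.explained_variance_ratio_)  # all four values close to 0.25 -> flat scree plot
When the bars are this even, dropping any component throws away roughly as much information as keeping it, which is exactly the situation described above. |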
H: Distance between any two points after DBSCAN
DBSCAN is a clustering model which is also robust for detecting outliers. A parameter $\epsilon$, i.e. a radius, is an input of the algorithm, and (as I understand it) a point is said to be an outlier if the ball of radius $\epsilon$ around it contains no point other than the point at its center. I have detected the outliers for a dataset, but then I observed that every point has some other point within distance $\epsilon$. I'm confused now: is my understanding of DBSCAN wrong, or is there some mistake in my code?
import numpy as np
import pandas as pd
df = pd.read_csv('HomeC.csv')
time_index = pd.date_range('2016-01-01 00:00', periods=503911, freq='min')
#time_index = pd.DatetimeIndex(time_index)
L = []
for i in range(len(time_index)):
L.append("")
for i in range(len(time_index)):
if int(str(time_index[i])[10:13]) <4 and int(str(time_index[i])[10:13]) >= 0:
L[i] = 'Night'
if int(str(time_index[i])[10:13]) <9 and int(str(time_index[i])[10:13]) >= 4:
L[i] = 'Morning'
if int(str(time_index[i])[10:13]) <12 and int(str(time_index[i])[10:13]) >= 9:
L[i] = 'Late Morning'
if int(str(time_index[i])[10:13]) <15 and int(str(time_index[i])[10:13]) >= 12:
L[i] = 'afternoon'
if int(str(time_index[i])[10:13]) <18 and int(str(time_index[i])[10:13]) >= 15:
L[i] = 'late afternoon'
if int(str(time_index[i])[10:13]) <21 and int(str(time_index[i])[10:13]) >= 18:
L[i] = 'Evening'
if int(str(time_index[i])[10:13]) <24 and int(str(time_index[i])[10:13]) >= 21:
L[i] = 'Late evening'
df = df.iloc[:,:].values
from sklearn.preprocessing import LabelEncoder, OneHotEncoder #sklearn is inside numpy module
labelencoder_X = LabelEncoder()
df[:,0] = L
df[:, 0] = labelencoder_X.fit_transform(df[:, 0]) # Converting Categorical feature to numerrical feature
df[:,20] = df[:,20].astype('str')
df[:,23] = df[:,23].astype('str')
labelencoder_Y = LabelEncoder()
df[:, 20] = labelencoder_Y.fit_transform(df[:, 20])
labelencoder_Z = LabelEncoder()
df[:, 23] = labelencoder_Z.fit_transform(df[:, 23])
df = df[58:,:]
df = df.astype('float')
df = df[:len(df)-1]
df = np.log(df+10)
house1 = []
house2 = []
house3 = []
house4 = []
for i in range(0,len(df)):
if i % 4 == 0:
house1.append(df[i])
elif i % 4 == 1:
house2.append(df[i])
elif i % 4 == 2:
house3.append(df[i])
else:
house4.append(df[i])
X_house1 = house1[:5000]
y2 = dbscan(X_house1)
mins = []
count = 0
for i in range(5000):
print(i)
temp = []
count = 0
for j in range(5000):
if i != j:
temp.append(np.sqrt(np.sum(np.square(X[i]-X[j]))))
if min(set(temp)) > eps:
count += 1
mins.append(min(set(temp)))
house1 = np.array(house1)
house2 = np.array(house2)
house3 = np.array(house3)
house4 = np.array(house4)
from sklearn.cluster import DBSCAN
def dbscan(X):
clustering = DBSCAN(eps=0.6 , min_samples=200).fit(X)
y = clustering.labels_
y_2 = []
for i in range(len(y)):
if y[i] != -1:
y_2.append(0)
else:
y_2.append(1)
return np.array(y_2)
X = X_house1
eps = 0.6
mins = []
count = 0
for i in range(5000): #Calculating the distance of each pair of points
print(i)
temp = []
count = 0
for j in range(5000):
if i != j:
temp.append(np.sqrt(np.sum(np.square(X[i]-X[j]))))
if min(set(temp)) > eps:
count += 1
mins.append(min(set(temp)))
print(count,sum(y2)) #count is 0, but should be equal to sum(y2), sum(y2) is total number of the outliers
link of the dataset https://www.kaggle.com/taranvee/smart-home-dataset-with-weather-information
AI: I think that you are missing the effect of the second parameter: min_samples=200.
DBSCAN does not only detect outliers; what it labels is so-called noise. When we do clustering via DBSCAN, we do not only look at the distance eps=0.6, but we also check whether the cluster candidate is populated with at least min_samples=200 objects.
You don't see "outliers"; you see all the objects that do not form a cluster. That is why an object can have a neighbor within a ball of radius 0.6 and still be assigned to the -1 "noise" cluster. |
H: Shared classifier for 3 neural networks (is this weights sharing?)
I would like to create 3 different VGGs with a shared classifier. Basically, each of these architectures has only the convolutions, and then I combine all the nets, with a classifier.
For a better explanation, let’s see this image:
I have no idea how to do this in PyTorch. Do you have any examples that I can study? Is this a case of weight sharing?
Edit: here is my actual code. Do you think it is correct?
class VGGBlock(nn.Module):
def __init__(self, in_channels, out_channels,batch_norm=False):
super(VGGBlock,self).__init__()
conv2_params = {'kernel_size': (3, 3),
'stride' : (1, 1),
'padding' : 1
}
noop = lambda x : x
self._batch_norm = batch_norm
self.conv1 = nn.Conv2d(in_channels=in_channels,out_channels=out_channels , **conv2_params)
self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop
self.conv2 = nn.Conv2d(in_channels=out_channels,out_channels=out_channels, **conv2_params)
self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop
self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
@property
def batch_norm(self):
return self._batch_norm
def forward(self,x):
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = F.relu(x)
x = self.max_pooling(x)
return x
class VGG16(nn.Module):
def __init__(self, input_size, num_classes=1,batch_norm=False):
super(VGG16, self).__init__()
self.in_channels,self.in_width,self.in_height = input_size
self.block_1 = VGGBlock(self.in_channels,64,batch_norm=batch_norm)
self.block_2 = VGGBlock(64, 128,batch_norm=batch_norm)
self.block_3 = VGGBlock(128, 256,batch_norm=batch_norm)
self.block_4 = VGGBlock(256,512,batch_norm=batch_norm)
@property
def input_size(self):
return self.in_channels,self.in_width,self.in_height
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
x = torch.flatten(x,1)
return x
class VGG16Classifier(nn.Module):
def __init__(self, num_classes=1,classifier = None,batch_norm=False):
super(VGG16Classifier, self).__init__()
self._vgg_a = VGG16((1,32,32),batch_norm=True)
self._vgg_b = VGG16((1,32,32),batch_norm=True)
self._vgg_star = VGG16((1,32,32),batch_norm=True)
self.classifier = classifier
if (self.classifier is None):
self.classifier = nn.Sequential(
nn.Linear(2048, 2048),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(2048, 512),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(512, num_classes)
)
def forward(self, x1,x2,x3):
op1 = self._vgg_a(x1)
op2 = self._vgg_b(x2)
op3 = self._vgg_star(x3)
x1 = self.classifier(op1)
x2 = self.classifier(op2)
x3 = self.classifier(op3)
return x1,x2,x3
return xc
model1 = VGG16((1,32,32),batch_norm=True)
model2 = VGG16((1,32,32),batch_norm=True)
model_star = VGG16((1,32,32),batch_norm=True)
model_combo = VGG16Classifier(model1,model2,model_star)
EDIT: I changed the forward of VGG16Classifier: previously I took the outputs of the 3 VGGs, concatenated them, and passed the result to a classifier. Instead, now the same classifier is applied to each VGG separately.
Now, my question is, I want to implement this loss:
And here is my attempt at an implementation:
class CombinedLoss(nn.Module):
def __init__(self, loss_a, loss_b, loss_star, _lambda=1.0):
super().__init__()
self.loss_a = loss_a
self.loss_b = loss_b
self.loss_star = loss_star
self.register_buffer('_lambda',torch.tensor(float(_lambda),dtype=torch.float32))
def forward(self,y_hat,y):
return (self.loss_a(y_hat[0],y[0]) +
self.loss_b(y_hat[1],y[1]) +
self.loss_combo(y_hat[2],y[2]) +
self._lambda * torch.sum(model_star.weight - torch.pow(torch.cdist(model1.weight+model2.weight), 2)))
Probably the lambda*sum part is wrong; however, my question is: with this setup, I have to split my dataset into 3 parts to obtain y[0], y[1] and y[2], right?
If it is not possible to ask this in this post, I will create a new question.
AI: Everything seems good, but you are not using any outputs from model1, model2 and model_star.
Here is how I would code this:
import torch
import torch.nn as nn
import torch.nn.functional as F
class VGGBlock(nn.Module):
def __init__(self, in_channels, out_channels,batch_norm=False):
super(VGGBlock,self).__init__()
conv2_params = {'kernel_size': (3, 3),
'stride' : (1, 1),
'padding' : 1}
noop = lambda x : x
self.conv1 = nn.Conv2d(in_channels=in_channels,out_channels=out_channels , **conv2_params)
self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop
self.conv2 = nn.Conv2d(in_channels=out_channels,out_channels=out_channels, **conv2_params)
self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop
self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2))
def forward(self,x):
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = F.relu(x)
x = self.max_pooling(x)
return x
class VGG16(nn.Module):
def __init__(self, input_size, num_classes=1,batch_norm=False):
super(VGG16, self).__init__()
self.in_channels,self.in_width,self.in_height = input_size
self.block_1 = VGGBlock(self.in_channels,64,batch_norm=batch_norm)
self.block_2 = VGGBlock(64, 128,batch_norm=batch_norm)
self.block_3 = VGGBlock(128, 256,batch_norm=batch_norm)
self.block_4 = VGGBlock(256,512,batch_norm=batch_norm)
@property
def input_size(self):
return self.in_channels,self.in_width,self.in_height
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
x = torch.flatten(x,1)
return x
class VGG16Classifier(nn.Module):
def __init__(self, num_classes=1, classifier=None, batch_norm=False):
super(VGG16Classifier, self).__init__()
self._vgg_a = VGG16((1,32,32),batch_norm=True)
self._vgg_b = VGG16((1,32,32),batch_norm=True)
self._vgg_c = VGG16((1,32,32),batch_norm=True)
self.classifier = classifier
if (self.classifier is None):
self.classifier = nn.Sequential(
nn.Linear(2048, 2048),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(2048, 512),
nn.ReLU(True),
nn.Dropout(p=0.5),
nn.Linear(512, num_classes)
)
def forward(self,x1,x2,x3):
op1 = self._vgg_a(x1)
op2 = self._vgg_b(x2)
op3 = self._vgg_c(x3)
xc = torch.cat((op1,op2,op3),0)
xc = self.classifier(xc)
return xc
model = VGG16Classifier()
ip1 = torch.randn([1, 1, 32, 32])
ip2 = torch.randn([1, 1, 32, 32])
ip3 = torch.randn([1, 1, 32, 32])
# Model inference
print(model(ip1,ip2,ip3).shape) # torch.Size([3, 1])
Training also becomes straightforward; you can do it just like for any other network, e.g. defining the optimizer like -
import torch.optim as optim
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
And similarly, the loss and optimizer steps also remain the same -
loss.backward()
optimizer.step() |
H: How should a stateless data transformation be applied in regard to train/test split?
I want to apply the spatial sign transformation to my data, but unlike other transformations this one is stateless. I am using sklearn and normally I would first use the .fit() function on the training set and then .transform() on the test set, but in the documentation it says (even though the fit method is useless in this case: the class is stateless as this operation treats samples independently). Since this is a transformation to reduce the influence of outliers, should it be applied before the train/test split? Or should I just transform both sets independently?
The exact class I am using is Normalizer.
AI: In this case it makes no difference, due to the nature of the transform: each sample is normalized independently, so applying it before or after the split is equivalent.
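A quick check with made-up data (a minimal sketch; Normalizer is the class mentioned in the question) illustrates this:
import numpy as np
from sklearn.preprocessing import Normalizer
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)                                 # made-up data
X_train, X_test = train_test_split(X, random_state=0)

norm = Normalizer().fit(X_train)                           # fit() learns nothing here
both_at_once = Normalizer().fit_transform(np.vstack([X_train, X_test]))
separately = np.vstack([norm.transform(X_train), norm.transform(X_test)])
print(np.allclose(both_at_once, separately))               # True: each row is scaled on its own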
H: What type of visualization is this and what are my options to produce something like it?
I am looking to reproduce the bottom part of the visualization below. What is this type of visualization called? What are my options to reproduce it? Preferably using Python or R, but I'm open to using other tools as well.
AI: It's similar to a violin plot, which shows the shape of the distribution of a variable. However here the X axis shows only categorical values, it's not clear if the shape is based on some underlying numerical variable (this is a requirement for a violin plot).
Violin plots can be made in Python or in R. |
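For reference, a minimal seaborn sketch (using a built-in example dataset as a placeholder, since the data behind the figure is not available):
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")                      # placeholder data
sns.violinplot(x="day", y="total_bill", data=tips)   # one violin per category
plt.show()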
H: Influence of label names on the classfierier perfromance
I am building a text classifier, the labels in my training data are not just short names like "Dog" or "Cat", they are more of lengthy sentences that range from 2 words to around 20 words.
Does the length of the label/class name affect the performance of the classifier? in other words, should I try to shorten the names?
AI: Definitely not, because the classifier doesn't care how you name your classes, only how they are encoded.
Basically, if you can have only one class per input value, you can use one-hot encoding: for each input value the target string is converted to a vector, e.g. "Cat" becomes [0, 1] given that you have two classes, and the other class, "Dog", becomes [1, 0].
Of course, this is by no means limited to 2 classes. You could have any N classes, be it in the order of tens or even hundreds, but the idea still applies, one 1 and N-1 0s.
You could also convert your classes to integers, some algorithms can work with that too.
Now, if you can have multiple classes per input value, you could for example encode them as 1 for each class position in the target vector and 0s for the rest.
Let's say you have the following classes: Has fur, Is carnivorous, Feeds alone. Then a lion would have the target value encoded as [1, 1, 0], and a sheep will correspond to [1, 0, 0].
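A quick sketch of both encodings with scikit-learn (the class names here are placeholders):
from sklearn.preprocessing import LabelBinarizer, MultiLabelBinarizer

# One class per input: the length of the class name is irrelevant
single = LabelBinarizer().fit_transform(["Cat", "Dog", "a very long descriptive label"])

# Multiple classes per input: one column per class, 1 if the class applies
multi = MultiLabelBinarizer().fit_transform([{"Has fur", "Is carnivorous"}, {"Has fur"}])
print(single)
print(multi)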
Hope this answers your question. |
H: What are the advantages/disadvantages of using tfidf on n-grams generated through countvectorizer?
What are the advantages/disadvantages of using tfidf on n-grams generated through countvectorizer when your end goal is to see the frequent occurring terms in the corpus with the occurrence percentage?
AI: First, CountVectorizer produces a matrix of token counts, not TF-IDF weights. In order to obtain TF-IDF weights you would have to use TfidfVectorizer.
If the goal is to study term frequency, there is no point using TF-IDF since TF-IDF weights are different from frequency. TF-IDF is used to reduce the weight of tokens which appear frequently compared to tokens which appear rarely. Moreover, TF-IDF weights are at the level of a document, so they cannot be used as a measure of global comparison across all documents. Across documents you could use the IDF (Inverse Document Frequency) part only, but then why not simply use Document Frequency.
Note that the same applies to a token count matrix: the values are at the level of a document. In order to find the global frequency one has to sum across the documents for every token.
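As a sketch of that summing step (docs is a placeholder for your list of documents; this assumes a recent scikit-learn with get_feature_names_out):
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the cat ran", "a dog ran"]   # placeholder documents
vec = CountVectorizer(ngram_range=(1, 2))            # unigrams and bigrams
X = vec.fit_transform(docs)                          # documents x n-grams count matrix
counts = X.sum(axis=0).A1                            # global count of each n-gram
freqs = counts / counts.sum()                        # occurrence percentage
top = sorted(zip(vec.get_feature_names_out(), freqs), key=lambda t: -t[1])[:10]
print(top)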
Finally if you are trying to find the most frequent terms/n-grams of any length, it's difficult to compare frequencies between n-grams of different length. Additionally you're going to find real "terms" mixed with frequent grammatical constructs, for example "it is" is not a term but it's a frequent n-gram. |
H: How to reshape or clean data to be able to visualize it with violin plots?
My end goal is to visualize some data using a violin plot or something similar using Python.
I have the following data in a file (test.csv). The first column is a list of species. The other columns determine abundance of the species at a certain latitude (e.g. how abundant is species A at altitude 1000, 2000?). (Ignoring units for now.) How can I plot this as a violin plot (or something similar)?
test.csv
species,1000,2000,3000,4000,5000,6000,7000
species_A,0.5,0.5,,,2,1,2
species_B,0.5,1,0.5,0.5,1,1,10
species_C,1,1,10,3,15,4,5
species_D,15,3,2,1,0.5,1,3
The Python code I tried so far is below. This does not work because it only plots the distribution of altitudes, which is the same for all species (because they were all sampled from the same set of altitudes).
file = "test.csv"
df = pd.read_csv(file)
# convert columns to list
colnames = list(df.columns)
colnames.remove("species")
# Transform the data so that I have a dataframe with only three columns: species, Altitude, and Count
df = pd.melt(df, id_vars=['species'], value_vars=colnames, value_name="Count", var_name="Altitude")
df.species = df.species.astype('category')
df.Altitude = df.Altitude.astype('int')
# Plot the data
sns.violinplot(x="species", y="Altitude", data=df)
plt.title("Abundance of Species at Various Altitudes")
plt.grid(alpha=0.5, ls="--")
plt.xticks(rotation=90)
# show graph
plt.show()
AI: I ended up creating a new Pandas DataFrame using the code below. I was hoping for something simpler or more elegant.
# Create a new dataframe
df_2d = pd.DataFrame()
for _, sp in df.iterrows():
count = 0 if np.isnan(sp['Count']) else int(np.ceil(sp['Count']))
df_2d = df_2d.append([{"species": sp["species"], "Altitude": sp["Altitude"]}] * count) |
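A more concise alternative (a sketch against the melted df from the question) uses Index.repeat instead of appending row by row:
counts = df['Count'].fillna(0).apply(np.ceil).astype(int)
df_2d = (df.loc[df.index.repeat(counts), ['species', 'Altitude']]
           .reset_index(drop=True))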
H: understanding pytorch pycoco tools object detection output
I am using pytorch vision library for object detection. I am using utilities provided for objection detection metrics. https://github.com/pytorch/vision. I am seeing following output
Epoch: [2] [0/10] eta: 0:00:50 lr: 0.000500 loss: 1.1589 (1.1589) loss_classifier: 0.1807 (0.1807) loss_box_reg: 0.0592 (0.0592) loss_objectness: 0.6662 (0.6662) loss_rpn_box_reg: 0.2528 (0.2528) time: 5.0786 data: 1.0029 max mem: 8476
My question is: what do the numbers in the brackets denote? Another question is: what is the difference between loss and loss_classifier in this context? Kindly help me to understand the output. Thanks for your time and help.
AI: Looking at the source code, it seems that the first value is the median value for the epoch and the second value in parentheses is the global average/mean. The difference between the value for loss and loss_classifier is that the value for loss is the sum of the losses of the individual parts (including that of the classifier, 0.1807 + 0.0592 + 0.6662 + 0.2528 = 1.1589), whereas the value for loss_classifier is the loss for just the classifier of the model, i.e. the part that classifies the pixels into the different classes. |
H: How many training data should I use in multilabel classification?
Now I'm using Keras to implement a multi-label classification model. Specifically, I want to classify who is present in an audio clip (up to 8 people). The data labels have 8 bits, for example [0,1,0,0,1,0,1,1], which means that in total the data can have 2^8=256 combinations. So far I have only collected part of the data (3700 samples, covering only 20 of the labels) for model training. Although the model performs well on seen data, it performs badly on data with unseen labels (data with the other 236 labels).
I wonder how I can improve the model performance? Or do I have to train this model with as much data with different labels as possible? I think that would cause a combinatorial explosion in the data collection workload.
AI: General rule: use all the data you can get, keeping aside a test set and validation set.
It is unlikely there is a combinatoric issue as an 8 dimensional output is not large in the context of machine learning. Also, not all of the $2^8$ possible labels may occur, due to correlation or other inter-relationships between label dimensions (depending on the data). For example, in the extreme, all digits might always be the same so only two labels ever occur: (0,0,0, ...) and (1,1,1, ...) meaning there is effectively only 1 classification task rather than 8. In your case, maybe certain people always speak at the same time. If any combination is possible then it really is effectively 8 independent tasks, but that is not necessarily a problem.
The data requirement here is more about you learning a pattern in very high-dimensional data (sound clips) from relatively few samples (I don't know how many dimensions each clip has, but presumably more than the number of samples you have?). That means there are many possible functions that map x to y for the training data, many of which may not capture the latent structure needed to generalise (i.e. they are finding spurious correlations). In short, you are probably over-fitting. If so, remedies are:
get more labelled data if possible;
regularise, e.g. $\ell_2$ to keep weights small, $\ell_1$ to make them sparse, dropout; and
if you can get unlabelled sound clips use semi-supervised learning. |
H: Does Keras MultiHeadAttention with 1 head equal normal self-attention?
If Keras MultiHeadAttention is used with a single head (num_heads=1), how is it different from the Keras Attention layer?
Also, is multi-head attention by default a self-attention type?
AI: The Keras attention layer is a Luong attention block of type dot-product. Optionally, you can specify the layer to have a learnable scaling factor, with use_scale=True.
A single-head attention block from the Transformer model is also a dot-product, but scaled to the fixed dimension of the embedding ($\frac{1}{\sqrt{d_k}}$).
Therefore, their only difference is the scaling factor, which is learned in the case of the Keras attention layer (if enabled), while it is fixed in the case of the Transformer attention. |
H: How can the accuracy of the dictionary-based approach be measured and improved?
I recently used TextBlob and the NLTK library to do sentiment analysis. I used both dictionary-based and machine learning-based approaches. It is relatively easy to measure accuracy when we use machine learning approach, just define a test set. The same goes for improving accuracy, just modify the training set.
But what about dictionary-based approaches instead? How do you measure and improve their accuracy?
AI: Evaluation is always based on the task, not on the method. Since the dictionary-based method gives an output similar to the ML-based approach, you can evaluate it in the same way, using a test set with gold-standard labels (preferably the same test set as the other method, or at least similar in size).
Maybe what confuses you is that the dictionary-based method doesn't require training so you there's no need for splitting the data between training and test set.
Note: the dictionary-based approach is a heuristic method. |
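As a sketch, assuming TextBlob polarity as the dictionary-based method and a tiny made-up gold standard (the texts and labels are placeholders):
from textblob import TextBlob
from sklearn.metrics import accuracy_score

texts = ["Great product, works perfectly", "Terrible service, never again"]  # placeholder reviews
gold = ["pos", "neg"]                                                         # gold-standard labels

pred = ["pos" if TextBlob(t).sentiment.polarity >= 0 else "neg" for t in texts]
print(accuracy_score(gold, pred))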
H: Can landmark detection be only used for faces and human bodies?
I want to use landmark detection for finding specific points of interest in an indoor setting e.g. bedrooms, bathrooms etc. Is it possible to use it? So far I have only seen landmark detection being used for things like faces or human bodies. Any suggestions or ideas?
AI: Yes, it is totally possible.
Any suggestions or ideas?
You will need to train your own model because you might not find any pre-trained models for the same. (but do check if they are available)
Pick any state-of-the-art model; I would suggest choosing a pose estimation model.
Collect images and annotate your dataset in the appropriate format using various tools that are available (that the authors of the model used to annotate their dataset).
Experiment whether the pre-trained weights of the model are helpful or not. For this, you can try to quickly train the model on a small subset of the dataset.
If you are getting good results, train the model using the whole dataset.
If you are not getting good results then try increasing data, collecting more images, or changing the model. |
H: Does the abstraction of a class affects the performance of neural networks?
For example, if I have 3 audio classes including
Ambulance Siren
Police Car Siren
Firetruck Siren
assuming these 3 classes can be distinguished by humans. If I just want the model to classify all these sounds as a "Siren" sound only, what approach gives the better performance if I:
Group these classes together into 1 class (Siren sound) and merge all datasets together.
Separate these classes into their own individual categories.
AI: Generally speaking, if the requirement is just to classify those sounds as "siren", you will want to group everything as a "siren" sound.
However, if some other sounds (example: a whale sound) could be confused with a specific siren, the generalisation could be a problem and it would be better to learn every siren sound separately to avoid confusion with other sounds. Then you can group the sirens in one.
As with any recognition problem, we have to take into account the similarities in all the data in order to tune the neural network correctly.
Hope this answers your question,
Nicolas |
H: Measure of Separation for fuzzy clustering
Is there a measure of separation such as the Silhouette score for fuzzy clustering? I understand the logic for hard-clustering algorithms but not sure about fuzzy. Is there a Python package for that such as scikit-learn?
AI: Why not use a classic distance measurement such as the one used in K-Means?
Otherwise, this page has code about fuzzy c-means including a distance calculation:
https://pythonhosted.org/scikit-fuzzy/_modules/skfuzzy/cluster/_cmeans.html
There is also a publication but no code:
https://www.researchgate.net/publication/256471114_Fuzzy_Distance_Measure_and_Fuzzy_Clustering_Algorithm |
H: Use of multiple models vs training a single model for multiple outputs
So let's say I have data with numerical variables A, B and C.
I believe that the value of A has an effect on B.
I also believe that A and B both have an effect on C.
I don't think C has an effect on either A or B.
I want to use machine learning to predict A, B, and C. I obviously have A and B as training data, and I have other variables as training data too.
Do I simply create multiple models to predict all three, or is there a way to make one model predict them all if I just throw the entire dataset at it?
AI: Do you have a data sample, so we can answer your question better?
For instance, are those variables related to time?
If yes, time series based models could be interesting like multi LSTM:
LSTM Multi-class classification for large number of classes
If not, you could use a random forest regressor.
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
My best advice is to start with a simple single model that makes predictions on A, B and C, and then try more complex ones. |
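For the single-model option, here is a sketch (X and Y are placeholders: X holds the other predictor variables, Y holds the columns A, B and C side by side; scikit-learn handles the 2-D target natively):
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(100, 4)                   # placeholder predictors
Y = np.random.rand(100, 3)                   # placeholder targets: columns A, B, C
model = RandomForestRegressor().fit(X, Y)    # one model, three outputs
print(model.predict(X[:5]).shape)            # (5, 3): one prediction per target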
H: SAS Studio seems to imply that apparently non-normal data is normal
I have some data I'm trying to analyze in SAS Studio (university edition). I am using the Distribution Analysis feature to try to test some data for normality.
It gives me the following histogram:
Skewness is approximately 2.934 and Kurtosis is approximately 9.013. I would have assumed based on that (and the fact that the shape of the histogram looks so different than the normal curve) that this is not normally distributed. However, my goodness-of-fit tests are:
The Kolmogorov-Smirnov D statistic is 0.2820865 with Pr > D < 0.010.
The Cramer-von Mises W-Sq statistic is 2.5706303 with Pr > W-Sq <0.005.
The Anderson-Darling A-Sq statistic is 13.2288360 with Pr > A-Sq <0.005.
Unless I'm misreading that horribly, isn't that implying that this is, in fact, normal?
Can someone point out what I'm missing?
AI: You have mixed up the null and alternative hypotheses.
The null hypotheses are that the data are normally distributed. You get three small p-values that would lead most people to reject the null hypotheses in favor of the alternative hypotheses of non-normal distributions.
This is completely consistent with your visual examination and inspection of the skewness and kurtosis values. |
H: Why checking the distribution of data is needed before calculating Gower distance?
I read this article(Clustering datasets having both numerical and categorical variables) to learn how to perform clustering on datasets with not just numerical variables.
Before calculating the Gower distance, the distributions of the data are plotted and the positively skewed distribution is log-transformed (the one in the top right corner).
Does anyone know the reason for doing that? Can you explain it in an easy way? Thanks!
AI: Log transformation is necessary to avoid the data being too sparse or having too high a variability. In other words, the log has the role of compressing the data and producing cleaner distributions, as you can see in the top right corner.
If you try to calculate the Gower distance without the log, you will see that the distances are not meaningful relative to each other.
H: ValueError: Input 0 of layer sequential_7 is incompatible with the layer
I have 77 columns, with 4 class labels (already one-hot-encoded) by get_dummies.
x_train = X_train.reshape(-1, 1, 77)
x_test = X_test.reshape(-1, 1, 77)
y_train = y.reshape(-1, 1, 4)
y_test = y_test.reshape(-1, 1, 4)
batch_size = 32
model = Sequential()
model.add(Convolution1D(64, kernel_size=77, padding="same", activation="relu", input_shape=(77, 1)))
model.add(MaxPooling1D(pool_size=5))
model.add(BatchNormalization())
model.add(Bidirectional(LSTM(64, return_sequences=False)))
model.add(Reshape((128, 1), input_shape = (128, )))
model.add(MaxPooling1D(pool_size=5))
model.add(BatchNormalization())
model.add(Bidirectional(LSTM(128, return_sequences=False)))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
print(model.summary())
This is the model summary :
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d (Conv1D) (None, 77, 64) 4992
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 15, 64) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 15, 64) 256
_________________________________________________________________
bidirectional (Bidirectional (None, 128) 66048
_________________________________________________________________
reshape (Reshape) (None, 128, 1) 0
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 25, 1) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 25, 1) 4
_________________________________________________________________
bidirectional_1 (Bidirection (None, 256) 133120
_________________________________________________________________
dropout (Dropout) (None, 256) 0
_________________________________________________________________
dense (Dense) (None, 5) 1285
_________________________________________________________________
activation (Activation) (None, 5) 0
=================================================================
Total params: 205,705
Trainable params: 205,575
Non-trainable params: 130
_________________________________________________________________
None
When I tried to fit the model:
history = model.fit(x_train, y_train,validation_data=(x_test,y_test), epochs=10)
I got this error :
raise ValueError(
ValueError: Input 0 of layer sequential_7 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 1, 77)
What is wrong in the input_shape ?
AI: In the definition of the convolutional layer you set the input shape to (77, 1), but your actual input has shape (None, 1, 77). As you can see, the last two axes are swapped. They should match.
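A sketch of the fix, keeping the model's input_shape=(77, 1) and reshaping the data to match (you will presumably also need the final Dense layer to output 4 units instead of 5, to match the 4 one-hot label columns):
x_train = X_train.reshape(-1, 77, 1)   # (samples, timesteps, channels) matches input_shape=(77, 1)
x_test = X_test.reshape(-1, 77, 1)
y_train = y.reshape(-1, 4)             # one label vector per sample
y_test = y_test.reshape(-1, 4)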
H: Attention transformation - matrices
Could somebody explain which matrix dimension should be found here - K? and if it is for example 3X3, should I use just 9?
AI: $d_k$ is the dimensionality of the query/key/value vectors. In your example, the length of those vectors is 3, so $d_k = 3$ |
H: Random forest and the number of samples
I am new to AI and ML and I am learning how random forest works. I implemented a small experiment. I have got a dataset with 1.6M samples and about 120 features. It is a classification problem; the output, which I am trying to predict, is a binary value. I am using RandomForestClassifier from sklearn in Python. I am aiming to maximize accuracy calculated by the accuracy_score function. At the first attempt there was a big difference between train and test set accuracy, e.g. 100% train and 50% test, so I came to the conclusion that my forest was overfitting. I did hyper-params tuning and managed to reduce the difference. Eventually I ended up with the following set:
model = RandomForestClassifier(
n_estimators = 200,
max_features = 11,
max_depth = 30,
min_samples_leaf = 30,
n_jobs = 12,
verbose = 1)
Then I played around with the number of samples and I got the following results: the more samples I use, the lower accuracy I get. Here are results for 2'500, 10'000 and 100'000 samples, on x axis the number of steps ahead I am trying to predict, on y axis accuracy, red is the train set, blue is the test set. It further decreases with more samples.
I find it counterintuitive, since I believe more data should improve quality, so I would first like to understand why it behaves this way. The only reason I can come up with is that, since some of the features used clearly show a trend and are not stationary, the algorithm performs well on a subset of the data, which is "more stationary", but not on the whole set, which exhibits more changeability. Would that be correct reasoning?
If so, how can I improve it? I can think of a couple of ideas.
De-trend features, which are not stationary. Seems to be against the general rule, which says decision trees do not require data preprocessing.
Just use a subset of the most recent data. Again intuitively the more data the better, so it sounds awkward.
Accept the fact that with these features this is the best I can get and look for different/more features.
Thanks in advance.
AI: For all three sample sizes the training accuracy saturates around 85%.
However, with more training data, the gap between training and test accuracy widens.
This is a clear sign of overfitting. The hyper-parameter tuning has not alleviated the overfitting yet.
Try hyper-parameter tuning with k-fold cross validation.
Here is an article on the same: https://towardsdatascience.com/hyperparameter-tuning-the-random-forest-in-python-using-scikit-learn-28d2aa77dd74 |
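A minimal sketch of what that could look like (the grid values are placeholders; X_train/y_train are your arrays):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    'max_depth': [10, 20, 30],
    'min_samples_leaf': [10, 30, 100],
    'max_features': [5, 11, 20],
}
search = GridSearchCV(RandomForestClassifier(n_estimators=200, n_jobs=-1),
                      param_grid, cv=5, scoring='accuracy')   # 5-fold cross validation
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)                # cross-validated score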
H: What is the scope of Keras' ImageDataGenerator.flow_from_dataframe seed parameter?
I've been working on a U-Net model using training images stored on my local drive. To load these I have been using Keras' ImageDataGenerator.flow_from_dataframe method and optionally applying some augmentations.
I have had no problems with this but noticed some odd behaviour when I retrieve batches of data from the flow.
In the below, simplified, example I am loading 8-bit RGB files from a directory and setting the seed - I've omitted augmentation parameters in this example but get the same behaviour with and without those present.
For QA/QC purposes I will typically get a batch and look at a random selection of images. However, when I get a batch and generate some random image indices I always get the same result. This only occurs after batch generation, not initialisation of the flow generator object.
# Step 1
# Set up image data flow
img_generator = ImageDataGenerator(rescale=1/255.)
train_gen = img_generator.flow_from_dataframe(
img_df, # filnames are read from column "filename"
img_dir, # local directory containing image files
y_col=None,
target_size=(512,512),
class_mode=None,
shuffle=False, # I'm using separate mask images so no shuffling here
batch_size=16,
seed=42 # behavior occurs when using seed
)
# Step 2
# Generate and print 8 random indices
# No batch of images retrieved yet; no use of seed
print(np.random.randint(16, size=8))
>>> [ 7 15 13 3 6 3 2 14] # always random
# Step 3
# Now get a batch of images; seed is used
batch = next(train_gen)
# Step 4
# Generate and print 8 random indices
print(np.random.randint(16, size=8))
>>> [ 6 1 3 8 11 13 1 9] # always the same result
Using a seed of 42, the output of Step 2 changes each time Steps 1 & 2 are executed. This is expected behaviour since Step 1 should not impact Step 2. However, once a batch is retrieved from the generator in Step 3, Step 4 always returns the same indices.
This behaviour continues as new batches are yielded; the seed is changed on each yield so each batch returns different indices but always the same indices.
With the seed set to 42 the indices generated after first few batches are:
>>> [ 6 1 3 8 11 13 1 9] # Batch 1
>>> [10 10 5 5 5 8 10 11] # Batch 2
>>> [ 5 3 0 10 4 9 15 2] # Batch 3
This suggests to me that when a batch of images is generated the global numpy seed is changed. In practical terms, I end up always examining the same sample of images. When the seed parameter is not provided the global seed remains unmodified and no outputs are alike.
I'm wondering if others have come across this - is this a bug or am I misunderstanding something?
AI: Further investigation confirms that, in this case, Keras does indeed modify the global random number generator.
The repo has active issues and PRs that address this behaviour in other areas of the library by using a local random state e.g. this issue. |
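Until that is fixed, a workaround (a sketch) is to shield your own sampling from Keras, either by snapshotting and restoring the global NumPy state around the batch call, or by drawing your QA indices from a dedicated generator:
# Option 1: save and restore the global state around the batch call
state = np.random.get_state()
batch = next(train_gen)          # may reseed the global RNG
np.random.set_state(state)

# Option 2: use a local Generator that ignores the global seed
rng = np.random.default_rng()
print(rng.integers(16, size=8))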
H: Types Of Plots for Discrete Data
So I have a lot of discrete variables in my dataset and want to visualize them (univariate for now). I went through various articles on the internet and it is suggested that histograms and count plots are apt choices for plotting discrete data. Many of the discrete variables in my dataset have 500+ unique discrete values, and when I plot them on a histogram it takes a lot of time to show any output. So is my approach correct? Can we actually plot discrete variables with this many unique values using a histogram, or do you suggest any other type of plot for the same?
Edit: Just got the output for a variable with 400+ discrete values and the histogram (sns.histplot) is empty; the x and y axes are visible but there are no bars in the histogram. Why would that be?
I have attached a reference photo of my column with the value_counts()
function's output. There are about 400 discrete values
AI: [completely edited after clarification from OP]
A histogram is built by making bins of equal size across the range of values taken by the variable. For example if the variable ranges from 0 to 500 one might decide to create 50 bins of size 10. Then the actual values of the distribution are counted by bin: every value between 0 and 9 goes into the first bin, every value between 10 and 19 goes into the second bin etc.
The number of discrete values does not matter (in fact the values can be continuous) because the values are binned, i.e. they are grouped by how close they are to each other (with arbitrary interval bounds).
I can see that the data you have is already formatted as
<value> <frequency>
The problem you have certainly comes from the fact that this format is incorrect for the function: typically histogram functions create the bins themselves, so there's no need to have counted the values beforehand. This means that you should provide a single vector containing all the values as many times as they occur.
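As a sketch, assuming your counted data sits in a frame with placeholder columns value and frequency, the original vector can be rebuilt with np.repeat and handed to histplot, which then does the binning itself:
import numpy as np
import seaborn as sns

raw = np.repeat(df_counts['value'].to_numpy(), df_counts['frequency'].to_numpy())
sns.histplot(raw, bins=50)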
Alternatively you could create the bins yourself beforehand: decide the intervals then count how many values in each bin. Then use a simple bar plot to show the count for every bin. This option is usually less convenient. |
H: Training a YOLO-style object detector
tl;dr I'm trying to train a small CNN (two conv layers and two connected layers) to find humans in the COCO dataset. Is my network big enough, and if so, roughly how many epochs of training will it need (there are 64115 training images)?
I am trying to make a neural network that can draw bounding boxes around humans in an image.
I initially intended to use YOLO, since it already exists and does exactly what I want. However, I found that it took many seconds to do a single forward pass through the YOLO network, which is far too slow for my purposes. Since my task is much simpler (YOLO can distinguish between many object classes, whereas I'm only interested in humans), and I don't need as much accuracy, I decided to make a smaller CNN in the style of YOLO, but with far fewer layers and parameters.
I have made a CNN with two convolutional layers and two fully connected layers, which can do a forward pass in a fraction of a second, and I am training it on the images from the COCO dataset that contain humans. My problem now is that, since the network is so small, I have no idea whether it can actually perform the task, and I don't know how long to train it before trying a bigger architecture. I'm also concerned that, since I'm on an ordinary laptop, it might take months or even years for enough training to take place.
If anyone could tell me what the minimum network size is for this sort of task, and how many epochs of training are usually required, it would be much appreciated. Alternatively, if I've made a wrong assumption (i.e. maybe I'm using the pretrained networks wrong for them to be so slow), it would be very helpful if someone could point that out.
AI: As rightly said by @Nikos M., it is based on trial and error. And here are some tips you might find useful -
Create a good enough validation set.
Use YOLO-tiny versions instead of custom architecture.
Use Google Colab
how many epochs of training will it need
Your data is very large. Training time depends on batch_size, learning_rate, and other hyperparameters. So I suggest you run your training loop in terms of steps (one step = one backprop). Start running the training for a large number of steps (or infinitely many steps), but test your model on the val set every 100-200 steps (according to your speed) and check your model's accuracy. If the model reaches sufficient accuracy after, say, 2000 steps, then interrupt the training. Make sure to save checkpoints after a certain number of steps.
I initially intended to use YOLO
I am assuming that you are using the original YOLO from 2016. There are many versions of YOLO after that, and I mainly suggest you try the YOLO-tiny models. You will find these tiny models for each YOLO version (v3, v4, v5), and they are super fast and super small. One more advantage is that they come pre-trained, so you will save a lot of training time if any is needed at all.
since I'm on an ordinary laptop
Use google colab for free high-end GPUs. And if getting your data in google drive is an issue then try resizing your images to the standard input size of your network (like 448x448) which will drastically reduce the dataset size. If that doesn't shrink the dataset size by a lot, try training on a part of your dataset, and try getting good accuracy on Val set. I feel that 60K is a very huge dataset already and I have trained models with very good accuracy using 3K-5K images max. (but again I have not seen your task and your images). |
H: Yolov3 Tiny: What do each of the 2535 cells detect?
Source: https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b
According to this image, it says the red grid is responsible for detecting the dog.
Similarly, do other cells detect "dog" as well or only the center ones?
For example, what do you think the first cell (0,0) would detect?
AI: As the blog mentioned, each cell predicts three things -
bbox coords (tx,ty,tw,th)
objectness score (po)
class scores (p0 - pc)
and again each cell predicts three boxes. Hence you get that big red tensor as model output.
So yes, other cells can detect "dog" and more precisely, each cell can detect any class.
what do you think the first cell (0,0) would detect?
Any object. for eg a sample one box prediction of that cell would be like -
box coords - [ 0.23, 0.12, 0.4, 0.2]
objectness score - (0.6) # this is a probability whether this box contains an object
class scores - [0.2,0.3,0.5] # let's say you have three classes, so here you will get the probability scores for those classes.
And each cell predicts three boxes, so you will get two more similar predictions from this cell. All of these are packed in that red tensor. |
H: What is fully connected layer additive bias?
I'm going to use PyTorch specifically but I suspect my question applies to deep learning & CNNs in general therefore I choose to post it here.
Starting at this point in this video and subsequently:
https://www.youtube.com/watch?v=JRlyw6LO5qo&t=1370s
George H. explains that the PyTorch function torch.nn.Linear with the bias parameter set to False then makes the torch.nn.Linear functionally equivalent (other than GPU support of course) to the following NumPy line:
x = np.dot(weights, x) + biases
Note that in torch.nn.Linear bias by default is set to True:
https://pytorch.org/docs/stable/generated/torch.nn.Linear.html
Here is the PyTorch documentation for the bias parameter:
bias – If set to False, the layer will not learn an additive bias.
Default: True
Can anybody please explain what "additive bias" is? In other words, what additional steps is PyTorch doing if the torch.nn.Linear bias parameter is set to True? Surprisingly I was not able to find much on this topic upon Googling.
AI: You probably misunderstood the video; it does not say that a linear layer with bias set to False is equivalent to:
x = np.dot(weights, x) + biases
Because that is not true, a layer without Bias is equivalent to
x = np.dot(weights, x)
The way he recreates the layer without Bias is actually with the following function :
x = x.dot(l1) # X = W1.X First linear layer
x = np.maximum(x, 0) # X = ReLU(X)
x = x.dot(l2) # X = W2.X Second linear layer
Setting bias=True means the layer has an additive bias term that it adds after multiplying the input by the weights (as in the formula you quoted):
x = np.dot(weights, x) + biases |
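A quick sketch to verify this numerically:
import torch
import torch.nn as nn

lin = nn.Linear(4, 3, bias=True)
x = torch.randn(2, 4)
manual = x @ lin.weight.T + lin.bias        # weight multiplication plus the additive bias
print(torch.allclose(lin(x), manual))       # True

lin_no_bias = nn.Linear(4, 3, bias=False)
print(torch.allclose(lin_no_bias(x), x @ lin_no_bias.weight.T))   # True: no bias term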
H: Normalize data between 0 and 95 instead of between 0 and 100
I want to normalize the data between 0 and 95 instead of 0 and 100. I am using this formula to normalize between 0 and 100, please let me know how to edit it.
def normalization(data):
return(data - np.min(data)) / (np.max(data) - np.min(data))
AI: Just multiply every value by $0.95$. Your original $0$ will stay at $0$; your original $100$ will be reduced to $95$; and the original values in between will be reduced a bit, too. |
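Applied directly to the function from the question (which, as written, maps the data to [0, 1]), that looks like the sketch below; multiplying the [0, 1] output by 95 is the same as multiplying a 0-100 output by 0.95:
def normalization(data, new_max=95.0):
    return new_max * (data - np.min(data)) / (np.max(data) - np.min(data))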
H: Should I normalise my data if future unseen data may have a different range?
I'm new to ML and researching data prep, more specifically feature normalisation.
My question is whether it's a good idea to normalise data when its range may change over time?
For example, if I'm trying to predict stock prices in my train dataset, the prices range from 100 to 200 and later (unseen data) they could reach 300.
Should I normalise the training data?
AI: whether it's a good idea to normalise data when its range may change over time?
Short answer : Yes
Long answer : It is generally a good idea to scale/normalise your data in machine learning.
However, I am not sure what you mean when you say :
when its range may change over time?
I would assume that you are asking if the range of the dependent variable (y) changes over time. Note that you don't necessarily scale the dependent variable (y), just the independent variables. So, in this case you won't have to do anything additional.
However, there is one boundary case that you might need to consider: if the change in the range of the dependent variable (y) is because of some change in the distribution of the data, you might need to go back, redo the necessary data preparation (and hence the normalisation) and train your model again.
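In practice the usual pattern is to fit the scaler on the training data only and reuse those statistics on any later data, even if its range drifts (a sketch, with StandardScaler as an arbitrary choice):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)     # statistics come from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)   # the same statistics are applied to unseen data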
H: relying on a feature during training that won't (necessarily) be available during prediction
I'm doing a little project on bug prediction. My goal is to predict which group each bug will (eventually) be assigned to (this is my label, obviously).
For training, I'm relying on a bug database from which I'm extracting various features (as many as possible) for each bug.
traces
panics
git blame (if available)
While most of the above features will always be available to me during prediction, there's another feature I thought I could use, which is the "comments" between group members. This feature will probably not be available to me during prediction (since I'm planning to predict the bug at an early stage).
Now I'm a bit confused. Is it ok for me to rely on it during training? Am I cheating myself? Needless to say, the score is much higher when using it (around 80% without it vs 90% or more while using it).
AI: 3 points:
If the feature is certainly (or most of the time) not available during prediction, then no, you can't use it.
If it is sometimes available and sometimes not, you must include no-comment bugs into your training as well and choose a default value which means no-comment (e.g. 'no-comment' string! or None)
In case they are available only for training, you can still benefit from them during EDA. Extracting keywords, topics, etc. from them will help you understand the situation of different labels and possibly helps you validate your labels, score them and/or understand the relation between other features (by clustering-like analysis)
If you want to use them, be careful how you do it. You have a bunch of categorical and/or numerical features, and if you want to put a text feature next to them you need to think about feature representation. For example, if you want to use TF-IDF, you suddenly introduce potentially thousands of features which may drown out the information of your main features. So try to keep it as sparse as possible, e.g. by extracting keywords or topics from those texts and using those as categories. If you use any embedding to model them, check whether you need to normalize your feature set, as the scale of the embedding values and of the other numerical features might be different.
Hope it helped. Good Luck! |
H: Why it is recommended to use T SNE to reduce to 2-3 dims and not higher dim?
According to the wiki, it is recommended to use T-SNE to map to 2-3 dimensions.
I can understand this if we want to visualize the data.
If we want to reduce the number of features (e.g. from 30 features to 5 dimensions), is it recommended to do this with T-SNE, or should we use another dimensionality reduction algorithm?
AI: Big Alarm!
T-SNE is NOT a dimensionality reduction algorithm (like PCA, LLE, UMAP, etc.). It is ONLY for visualization, and for that sake, more than 3 dimensions does not make sense.
T-SNE is not a parametric method, so you do not get a base vector representation with which you could reduce the dimensionality of a new dataset (validation, test). That's why it cannot be used for dimensionality reduction.
It is calculated stochastically, based only on the data it sees, so if you use it on the train set there is no way to do the same calculation for your test set, thus no modeling with T-SNE.
If you see Sklearn functions, for PCA and other dimensionality algorithms you see both fit(), transform() and fit_transform() functions but for T-SNE you have only fit() and fit_transform() because you will have no model to only transform() a new dataset.
I tried to be intuitive. If you need more technical explanation just drop a comment. |
H: T-SNE with high number of features
If we have high number of features (more than 50), should we use T-SNE ?
According to https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html:
It is highly recommended to use another dimensionality reduction method (e.g. PCA for dense data or TruncatedSVD for sparse data) to reduce the number of dimensions to a reasonable amount (e.g. 50) if the number of features is very high. This will suppress some noise and speed up the computation of pairwise distances between samples
It seems that if we have more than 50 features, it is better to work with PCA and not with T-SNE. Did I understand it correctly?
Why T-SNE is not good with high number of features ?
Why the document suggest to work with PCA and not with other dimension-reduction (like UMAP) ?
AI: Be aware that PCA is a linear dimensional reduction algorithm, whereas t-SNE or UMAP are non linear (=gaussian). Consequently the results are usually better with t-SNE or UMAP, even with a large number of features.
Then, you should be carefull with the features because not all of them have the same weight, and some are too noisy, which creates bad results (no clear clusters).
I usually recommend using less features or simplified data, see if the results are correct, and then increase the number of features or the data complexity.
Then, the main advantage of UMAP is that the clusters are correlated to each other (their relative positions are meaningful), but t-SNE could be better for point-to-point correlations.
Note that t-SNE could require data normalisation, whereas UMAP doesn't. |
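For reference, the pipeline suggested in the scikit-learn documentation quoted in the question boils down to something like this sketch (X is your feature matrix):
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X_reduced = PCA(n_components=50).fit_transform(X)            # denoise / compress first
X_embedded = TSNE(n_components=2).fit_transform(X_reduced)   # then embed for visualisation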
H: What is the entity of cross entropy (loss)
Cross-entropy (loss), $-\sum y_i\;\log(\hat{p_i})$, estimates the amount of information needed to encode $y$ using Huffman encoding based on the estimated probabilities $\hat{p}$. Therefore one could claim it should be considered to measure the amount of information, for example a number of bits.
Depending on the base of $\log$, these can be binary bits or digits, but typically are Euler-bits since $\ln$ is mostly used. Is there a popular or official name or unit for these so called Euler-bits? Is it OK to consider the unit of cross-entropy loss this way?
Obviously, loss is used for optimization and most don't care about the exact unit, but I'm curious and would like to use the correct terminology when explaining ML to others.
AI: Basically, the first one who defined such an amount of information was Shannon, thanks to his definition of entropy: https://en.wikipedia.org/wiki/Entropy_(information_theory)
using Shannon bits.
It describes the level of "interest" or "surprise" of information and it is also based on log 2. But I don't know if it is applicable to cross entropy.
H: Is Cross validation and GridSearchCV required every time we train a model?
I have a repetitive process that will build a model weekly based on the previous week's data. While in development I used GridSearchCV and cross-validation to find the best hyperparameters and validate my model. Is this flow required every time I build a model, or can the best hyperparameters from my development time be used without checking every other time?
AI: You simply need to see the dynamic of change in the incoming data. The need for retraining is a direct function of change in data distribution.
If new data, theoretically, does not change the distribution then there is no need, however in practice that is simply impossible. So how much change in the distribution of data (both input and output) is a threshold for a retraining? This is the question. And more importantly how can I see that change in practice?
The empirical error shows you that change. Monitoring the performance of your system on a test set, you can observe if there is a significant drop and you can set a threshold for automatic retraining whenever you see that drop in performance (for this you need a test set which is different than the validation set you used for parameter tuning).
As data is dynamically change, you must have new incoming samples inside your test set every time!
Long story short:
Set up a data extraction pipeline which selects a part of new incoming, labeled data and injects it to the test set.
Set up a monitoring pipeline that uses that test set to calculate evaluation metrics on a scheduled basis.
Set up a retraining pipeline that is triggered by a significant drop in performance due to (2) |
H: MOOC - for causal analysis - no statistics background
Am a software guy with no background in causal inference.
While I am now familiar with prediction techniques due to the plethora of courses available online, I would like to seek recommendations from people here for causal inference.
As you might know, how prediction techniques tutorials are available in a way that can be consumed by people from a software background, I would like to learn causal inference techniques like Propensity Score Matching, Comparative Effectiveness Research, epidemiology analytics, effect estimates, Hazard Ratios, Odd ratios, etc in a proper course which can take us from beginner to advanced curriculum. Basically, things that are used in observational healthcare research and public health fields. Are there any courses like what Udacity offers for Data Science (which only deals with Prediction problems)? Am looking for a similar course that can take us from A-Z of causal analysis.
Can you guys direct me to some resources where I can learn such techniques? Doesn't really have to be a degree but can also be online tutorials or Youtube series
AI: Even I am getting accustomed to the concepts of causal inference and working on a use case for telecom data. Sharing some of the resources which might be helpful to you:
Causalnex Package
Pyro Tutorials https://github.com/altdeep/causalML
https://github.com/DeekshaD/causalML-lecturenotes (Notes of the course)
Some courses though they may be stat heavy:
Introduction to Causal Inference
Causal Inference
Looking forward to more suggestions from the community :) |
H: How to convert float type nan in a dictionary value to 0.50?
I have a dictionary as below:
{
'$175000-199999': nan,
'$698506': nan
}
I want to convert the nan to 0.50. I tried using dictionary comprehensions
{k:v is 0.50 if v == nan else v for (k, v) in dictionary.items()} but it throws an error saying nan is float. How do I fix this?
AI: With reference to this answer, here's a running example to solve your problem,
nan_obj = float( 'nan' )
# dict as mentioned in the question
dictionary = {
'$175000-199999': nan_obj,
'$698506': nan_obj
}
# Loop through key-value pairs
# For different ways to check if a number is NaN,
# see https://stackoverflow.com/questions/944700/how-can-i-check-for-nan-values
for key , value in dictionary.items():
if value != value:
dictionary[ key ] = 0.5
print( dictionary ) |
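If you prefer the dictionary-comprehension style from the question, here is a sketch using math.isnan (the v == nan comparison fails because NaN never compares equal to anything, including itself):
import math

cleaned = {k: 0.5 if isinstance(v, float) and math.isnan(v) else v
           for k, v in dictionary.items()}
print(cleaned)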
H: Evaluation metric for time-series anomaly detection
I have a question for AI or data experts. I'm writing a paper
My dataset is time-series sensor data and the anomaly ratio is between 5% and 6%.
1.
For time-series anomaly detection evaluation, which one is better, precision/recall/F1 or ROC-AUC ?
When empirically studying this issue, I found some papers use precision/recall/F1 and some papers use ROC-AUC .
Considering that positive samples (anomalies) are much fewer than negative samples (normal points), which one is better?
I'm confused with this issue
2.
If I use precision/recall/F1 , should I check precision/recall/F1 only for positive class ?
I think that because the number of positive samples is small, it's not appropriate to check precision/recall/F1 only for the positive class.
Thus, should I check precision/recall/F1 for both positive class and negative class?
If that's right, can I report precision/recall/F1 with macro avg in my paper?
(you can see the picture below. I used classification_report in sklearn library)
Thank you for your explanation !
AI: Hi and welcome to the community!
Don't get confused between those; they are different ways of explaining the same concept. The point is that in such problems with very imbalanced class populations, you need to use an evaluation metric which considers the effect of a fine-grained inspection, i.e. TP, FP, TN and FN. Precision/recall and AUC/ROC both use them.
But what's the main difference between them? AUC/ROC gives you a wonderful visual representation (of course along with a number) and precision/recall gives you a more comprehensive, detailed numerical evaluation. So the first is good for comparing several models and the second one is better for a deep inspection of each model (of course they are still used the other way around, but less nicely). Do not hesitate to include both; it just enriches the evaluation section of your paper.
The positive class is the main point of your paper; however, you also want to keep track of the performance on the trivial class (normal points), so I suggest including both, i.e. the classification_report that you posted is a great way of reporting results.
You should certainly report the macro average! In imbalanced problems you actually use precision/recall or AUC/ROC to get rid of something, and the weighted average is exactly computing that thing!
That "thing" is the evaluation being dominated by the size of the big classes.
Example: here you see that precision is very good for normal points and very bad for anomalies. What is the weighted average telling you? It says 0.91, which looks very good. But how is the performance? It is 0.1 on detecting anomalies, which is the point of your paper! So be careful... imbalanced problems should be evaluated with the macro average.
H: Label data set for sentiment analysis
I am a beginner in this field. I have a scraped review data set. It contains a review score (1-10) and the review content. I am going to label the reviews according to the review score as below:
0-2 -> negative, 3-6 -> neutral, 7-10 -> positive
Is it possible to directly label contents like this? Is there any specific process to do this? Do I need to validate my labeling ?
AI: Is it possible to directly label contents like this? Is there any specific process to do this? Do I need to validate my labeling ?
Yes, it's definitely possible to define sentiment classes in this way. One can reasonably assume that the review score is a good approximation of the review sentiment.
it's just a method to define the gold standard, there's no particular process for that. It's important to realize that defining the gold standard is an important part of designing the task itself, as opposed to designing a system which tries to solve the task.
In some cases it makes sense to prove that whatever is used as gold standard corresponds to the goal of the task, but in this case it's straightforward: it's safe to assume that a user who writes a review gives as score a value which corresponds to their overall sentiment.
Even if this is a reasonable design, it's also important to notice the limitations:
By discretizing the score into 3 classes, the score information
is simplified. For example the difference between a 7 and a 10 is lost.
The arbitrary cut-off points cause a threshold effect. Normally there is less difference between a 2 and a 3 than between a 3 and a 6, but the classes reverse this relation.
Note that sentiment analysis does not have to be a classification task (predicting a categorical variable), it can also be defined as a regression task (predicting a numerical variable). In this case the target variable could be the score itself, and that would avoid some of the problems mentioned above. This is also a design choice, it depends mostly on what the application is for. |
H: How to obtain Accuracy of Feature Selection methods?
I used the following methods:
Variance_Threshold: selecto_vth = VarianceThreshold(threshold=1.0)
ANOVA: anova = SelectKBest(score_func=f_classif, k=20)
Mutual_Information: fs_mutual = SelectKBest(score_func=mutual_info_classif, k=20)
Sequential_Feature_Selector: sfs = SequentialFeatureSelector(RandomForestClassifier(), n_features_to_select=20, scoring='accuracy')
But I did not find how to get the their Accuracy, unlike Recursive Feature Elimination:
print(accuracy_score(y_test, rfe.predict(x_test))) # it worked
AI: One way (I know no other way, btw) is to create a model with the (best) selected features and measure the accuracy of that model.
This accuracy will be parametrised by the model you used. For example, using a different model might alter the accuracy, so using a handful of models and getting the average will give you a hint of the accuracy of the features selected.
This procedure is essentially the same as the one followed by sklearn.feature_selection.RFE, where a model (an estimator) is passed as a parameter.
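Here is a sketch for one of the selectors from the question (ANOVA), with RandomForestClassifier as an arbitrary choice of estimator and x_train/x_test/y_train/y_test as your existing splits:
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

anova = SelectKBest(score_func=f_classif, k=20).fit(x_train, y_train)
clf = RandomForestClassifier().fit(anova.transform(x_train), y_train)
print(accuracy_score(y_test, clf.predict(anova.transform(x_test))))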
H: How to structure unstructured data
I am analysing tweets and have collected them in an unstructured format. What is the best way to structure this data so I can begin the data mining processes?
Somebody suggested using Python packages such as spacy, but I am not sure how to go about using them.
AI: In Natural Language Processing it's crucial to choose the representation of the data and the design of the system based on the intended task, there is no generic method to represent text data which fits every application. This is not a simple technical problem, it's an important part of designing the system.
The simplest method to structure text data is to represent the sentence or document as a bag of words (BoW), i.e. a set containing all the tokens in the sentence or document. Such a set can be represented with One-Hot-Encoding (OHT) over the full vocabulary (all the words in all the documents) in order to obtain structured data (features). Many preprocessing variants can be applied: remove stop words, replace words with their lemma, filter out rare words, etc. (don't neglect them, these preprocessing options can have a huge impact on performance).
Despite their simplicity, BoW models usually preserve the semantic information of the document reasonably well. However they cannot handle any complex linguistic structure: negations, multiword expressions, etc. |
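As a sketch of the BoW idea with scikit-learn (the tweets are placeholders; get_feature_names_out assumes a recent scikit-learn):
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["just watched the game, amazing!", "traffic is terrible this morning"]  # placeholder tweets
vec = CountVectorizer(lowercase=True, stop_words='english')
X = vec.fit_transform(tweets)              # rows = tweets, columns = vocabulary tokens
print(vec.get_feature_names_out())
print(X.toarray())                         # the structured (bag-of-words) representation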
H: GridSearchCV and time complexity
So, I was learning and trying to implement a GridSearch. I have a question regarding the following code, which I wrote:
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV
dtc = DecisionTreeClassifier(random_state = 42, max_features = "auto", class_weight = "balanced")
clf = AdaBoostClassifier(base_estimator= dtc, random_state= 42)
parameters = {'base_estimator__criterion': ['gini', 'entropy'],
'base_estimator__splitter': ['best', 'random'],
'base_estimator__max_depth': list(range(1,4)),
'base_estimator__min_samples_leaf': list(range(1,4)),
'n_estimators': list(range(50, 500, 50)),
'learning_rate': list(range(0.5, 10)),
}
scorer = make_scorer(fbeta_score, beta=0.5)
grid_obj = GridSearchCV(clf, parameters, scoring=scorer, n_jobs= -1)
grid_fit = grid_obj.fit(X_train, y_train)
best_clf = grid_fit.best_estimator_
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
Is this overkill? I mean, I have a decent PC, or at least I think it is decent, but this piece of code ran for more than 5 hours and... it did not finish! I had to stop it and write a simpler version with fewer parameters for the GridSearch in order for it to run and deliver some results. The simpler version was the following:
parameters = {'base_estimator__splitter': ['best'],
'base_estimator__max_depth': [2, 3, 5],
'n_estimators': [100, 250, 500],
'learning_rate': [1.0, 1.5, 2.0],
}
and it took 20 minutes.
I do not know much about GridSearch, and I think the code is fine (it runs without errors), so it didn't deliver any result simply because it never finished running. Is there any good reference about the time complexity of running a grid search, in particular when searching over AdaBoost parameters? I have no intuition for this, and after 3 or 4 hours I wasn't sure whether my PC was not strong enough, whether the code would run forever because I wrote something wrong, or something else. But since I changed it, everything went fine, so the problem must be the time. Somebody told me to run a RandomizedSearch, which I had never heard about until yesterday. It could be a solution. But is this always an issue with GridSearch? Is there any piece of code that can tell me how many parameter combinations have been searched by the algorithm, or anything like that? Because it is frustrating to look at the screen and the clock and realize that 5 hours have passed and I have no idea how much longer it will run, or even whether it will ever finish...
AI: There's no difficult time complexity issue, you just need to understand what GridSearchCV does, it will clarify everything. It's actually very simple:
sklearn documentation says that GridSearchCV performs an "exhaustive search over specified parameter values for an estimator".
An exhaustive search means that the algorithm simply iterates over all the possibilities in the search space.
The search space consists of all the combinations of values for the parameters, that is:
base_estimator__criterion: 2,
base_estimator__splitter: 2,
base_estimator__max_depth: 3,
base_estimator__min_samples_leaf: 3,
n_estimators: 9,
learning_rate: I don't know because range(0.5, 10) gives an error, let's say 5 for instance.
Now that gives us $2*2*3*3*9*5 = 1620$ combinations of parameters. By default GridSearchCV uses 5-fold CV, so the function will train the model and evaluate it $1620*5=8100$ times. Of course the time taken depends on the size and complexity of the data, but even if it takes only 10 seconds for a single training/test process that's still around 81000 sec = 1350 min = 22.5 hours.
By comparison, your second set of parameters contains only $3*3*3=27$ combinations. Since it took 20 minutes, we can estimate that a single combination (full CV) requires roughly 0.74 min to run. This means that in the first case above you would need $1620*0.74 \approx 1200$ min = 20 hours for the full process (approximately).
The usual solutions are:
decrease the number of combinations, as you did in the second case
parallelize the process with the n_jobs argument, that will roughly divide the time by $n$. Of course it works only if you have multiple cores and you don't plan to use them during this time ;) |
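Regarding RandomizedSearchCV and tracking progress: a hedged sketch reusing your clf, parameters and scorer could look like this (note that learning_rate must be a list of floats, e.g. [0.5, 1.0, 2.0], since range() does not accept floats; the n_iter value is just an example):
from sklearn.model_selection import RandomizedSearchCV

random_obj = RandomizedSearchCV(
    clf,
    parameters,          # same search space as before
    n_iter=50,           # only 50 sampled combinations -> 50*5 = 250 fits with 5-fold CV
    scoring=scorer,
    n_jobs=-1,
    verbose=2,           # prints each fit, so you can see how far along the search is
    random_state=42,
)
random_obj.fit(X_train, y_train)
print(random_obj.best_params_)
The same verbose argument also works with GridSearchCV, which answers the question about seeing how many combinations have been processed so far.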
H: Question regarding weights in a Model
Let's take a linear regression model. I just want to know: are the weights the same for every row once the model is trained? Or are the weights a vector or an array? For example, if I have data X of shape (4, 5), i.e. four rows and 5 features, will the weights W be the same for every row or will they have a different value for every row?
I am a beginner, so please spare me these basic questions.
AI: In the basic regression setup, the weights are a function of the whole of the training dataset therefore they are the same for every row. |
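A tiny sketch (with made-up random data) showing that there is one weight per feature, shared by all rows:
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.rand(4, 5)    # 4 rows, 5 features
y = np.random.rand(4)

reg = LinearRegression().fit(X, y)
print(reg.coef_.shape)      # (5,) -> one weight per feature, the same for every row
print(reg.intercept_)       # a single bias term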
H: Tips for clustering rows of a gigantic "distance" matrix
I've been assigned to the following task:
I was given 1,000,000 data points and was asked to create a sort of distance matrix and to cluster the rows.
So this matrix is 1,000,000 x 1,000,000 which is obviously way too large to fit on my poor 8GB of RAM.
I'd like to ask for some tips of how to handle this kind of data.
I'd like to choose maybe 100,000 data points at random and cluster their distances instead hoping they represent the entirety of the data.. even so this seems like a hard task.
So what kinds of clustering methods could work here? If I can't feed all my data at once to some algorithms which usually can handle lots of data such as hierarchical clustering or DBscan what options do I have left?
AI: Starting with a small random sample is always a good idea, otherwise there would be too much processing without good result.
It also depends on the project's objective: if the objective is to have an overview of the data, a random sample of 5%-10% would be enough. You can run several tests with different random samples to ensure that the percentage is representative enough. If the objective is to have a complete understanding of all the data, you should start with a small sample and increase the amount progressively until reaching 100%.
If processing time becomes too long, you may want to use multiprocessing, fast computation libraries, out-of-core (disk-based) processing or more efficient code.
In both cases, you can start with a small random sample of 2000 elements to compare quickly different clustering algorithms, thanks to their low amount (= little processing time).
Then, if your data has many features, I would recommend dimensional reduction with algorithms like t-SNE (https://www.youtube.com/watch?v=wvsE8jm1GzE) or UMAP (https://www.youtube.com/watch?v=6BPl81wGGP8) to make meaningful clusters. Those clusters could be identified automatically using Kmeans for instance. |
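A rough sketch of that workflow, assuming the raw points (rather than the full distance matrix) are available as a NumPy array data of shape (1_000_000, n_features); the sample size, perplexity and number of clusters are only placeholders to tune:
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sample_idx = rng.choice(len(data), size=2000, replace=False)   # small random sample
sample = data[sample_idx]

embedded = TSNE(n_components=2, random_state=0).fit_transform(sample)   # dimensional reduction
labels = KMeans(n_clusters=5, random_state=0).fit_predict(embedded)     # identify the clusters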
H: Memorization in deep neural networks, random vs. properly labelled datasets
From about 19:20 in the video here: https://www.youtube.com/watch?v=IHZwWFHWa-w
it shows the difference in value of the cost function for randomly labelled data vs. properly labelled data.
What do they mean by randomly labelled? That the labels change on every batch or epoch? Or that they are just assigned some random label which are then fixed throughout training?
Then how is it possible for a classification network with a softmax layer for the output to do any better on the properly labelled data? They are just labels and meaningless to the network it seems.
AI: They are just comparing learning curves between random labels and real (properly assigned) labels, to understand better how neural networks behave. "Randomly labelled" here means that each training example is assigned a random label once, before training, and that label is then kept fixed throughout training; the labels are not re-shuffled at every batch or epoch.
The lesson is that neural networks learn much faster on properly labelled (structured) data; they can also memorize randomly labelled data, but much more slowly.
H: Studying and choosing between different neural network structures
I would like to develop a model that uses convolutional neural networks for image classification. From the many different network structures described in papers and articles online, I would like to choose, as a starting point, the one that better suits my problem and dataset.
I know that there is no certain answer and the best structure is highly dependent on each problem, but I imagine that there is some method behind building such a network beyond pure chance and testing. What properties and hyperparameters should I pay attention to when reading papers and comparing structures? In order to acquire this intuition about different models, is it better to read more literature or focus on experimenting with different models?
Though I have special interest in convolutional neural networks, this question also applies to studying the architecture of neural networks in general.
AI: First of all, get familiar with the standard benchmarking dataset, Imagenet. One of the reasons why it is a famous dataset is because, if a technique shows effectiveness on Imagenet, there is a very high probability that the same technique will be useful for any other dataset.
After that, start reading papers or try to find some good articles/blogs/tutorials about these papers on the internet. Try to answer this question - What novel contributions did this paper make? Start all the way from LeNet to the latest state-of-the-art models. If you want to be extremely good at understanding these papers, implement them from scratch. It will take a long time. But you will get the best intuition about CNN architectures. What all have people tried to explore till now. What has worked for them, what is used the most, what is not used anymore. For example - talking about the ResNet paper, one of the main contributions of the paper was introducing Skip Connections. And today's latest CNNs like efficientnet also use these Skip Connections. So you got one intuition that you must give a try at Skip Connections. On the other hand, in the VGG paper, they did not use the global average pooling layer. Which makes it one of the heaviest models. This means not using the global average pooling layer will increase model parameters by a lot. And lastly, the efficientnet paper will give you a very good idea about designing and scaling CNN architectures. You will gain a lot of insights and methods like this by reading these papers. This knowledge will be very useful when you will make CNNs from scratch.
Now, once you gain this theoretical knowledge, let's talk about applying it to a new dataset. For this, we need a trial and error method. But your theoretical intuition will give you a great boost. Also, I use a technique to implement this trial and error method -
I create a small subset of the dataset if the dataset is very big.
I create a very good validation set.
As this new subset of the dataset is small, I run many experiments on this small dataset. I start by implementing very simple models and go up to implementing complex techniques till I get the satisfying performance on my validation set.
Once I get a good architecture (one that has satisfying accuracy), I try to train it on the whole dataset. And, I also try to scale it (efficientnet paper).
Lastly, I would like to mention that it is not only the model architecture that will make a difference always. There are also techniques like student-teacher learning and meta pseudo labels (novel training methodologies) that boost the performance significantly. |
H: Training & Test feature shape is different from number of columns in dataset
I am making a sequential neural network for regression with 3 dense layers, which will be trained on a simple dataset. But before I even get to the part of the code that executes the model, I am getting a different shape for my features than the number of columns in the dataset. The columns of the dataset include:
1) one categorical "Name" column which is one-hot encoded
2) the other 20 columns, which are integers/floats
I have 21 features in my dataset, yet the ValueError involves 36 while there are only 21 columns.
When I check the shape using X.shape for my dataset it is telling me the shape is (98, 36). My dataset has 98 rows x 21 columns. There are only 21 features in my dataset. How is it getting a shape of 36?
I am consequently receiving this error of course when I try to run my Keras model
Error
ValueError: Input 0 of layer sequential_1 is incompatible with the layer: expected axis -1 of input shape to have value 21 but received input with shape (None, 36)
Here is my code when I import and clean the dataset
Import Dataset
N_df_1 = pd.read_csv('/', error_bad_lines=False) #I can't show dataset paths
N_df_2 = pd.read_csv('/', error_bad_lines=False)
N_df_3 = pd.read_csv('/', error_bad_lines=False)
N_df_4 = pd.read_csv('/', error_bad_lines=False)
N_df_5 = pd.read_csv('/', error_bad_lines=False)
N_df_6 = pd.read_csv('/', error_bad_lines=False)
N_df_7 = pd.read_csv('/', error_bad_lines=False)
N_df_8 = pd.read_csv('/', error_bad_lines=False)
N_df_9 = pd.read_csv('/', error_bad_lines=False)
N_df_10 = pd.read_csv('/', error_bad_lines=False)
Cleaning data
#Had to combine datasets through concatenation
N_df = pd.concat([N_df_1, N_df_2, N_df_3, N_df_4, N_df_5, N_df_6, N_df_7, N_df_8, N_df_9, N_df_10], ignore_index=False, axis=0)
#Getting rid of all NaN values
N_df.dropna(axis = 0, how = 'all', inplace = True)
Encoding Categorical data
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough')
X = np.array(ct.fit_transform(N_df))
AI: The issue is that you are applying one-hot encoding, which expands a categorical column into one new column per category. Suppose you have a binary variable (Male/Female); in the data file it is a single column.
After one-hot encoding, that single column becomes two columns (one for Male, one for Female), each filled with 0/1 indicators.
In your case you have 20 numeric columns plus the encoded "Name" column: 36 - 20 = 16, so your "Name" variable must contain 16 distinct names. That is why the transformed data has 36 columns instead of 21, and why you get the shape error.
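A small hypothetical illustration of what the encoding does (the Gender column is just an example, not from your data):
import pandas as pd

df = pd.DataFrame({"Gender": ["Male", "Female", "Male"]})
print(pd.get_dummies(df["Gender"], dtype=int))
#    Female  Male
# 0       0     1
# 1       1     0
# 2       0     1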
H: Visualization suggestion:-
I have a data frame like below:-
Here I have 194 countries and the columns are fan_out values, expressed as a percentage of the total population.
For example, for country AD the total fan_out value is 2.24e-06 % of the total population.
I tried a stacked chart like below:-
The only issue is that it's not presentable, because the fan_out values for 5, 6 or 7 are very small and not clearly visible.
Is there any way I can build clusters to make sense of this?
My only question is how to represent this data in one graph so that it makes more sense, for example by finding clusters or a pattern, or even by applying an ML algorithm to find a pattern.
AI: You can use a dimensional reduction algorithm like t-SNE:
https://youtu.be/wvsE8jm1GzE
https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
It is quite easy to implement and you will see correlations between countries clearly. |
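A hedged sketch, assuming df is your dataframe with one row per country (country codes as the index) and the fan_out columns as features; the perplexity value is just a starting point:
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(df.values)

plt.scatter(embedded[:, 0], embedded[:, 1])
for (x, y), country in zip(embedded, df.index):
    plt.annotate(country, (x, y), fontsize=6)   # label each point with its country code
plt.show()
Countries with similar fan_out profiles should end up close together, which makes clusters and outliers easy to spot.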
H: Overfitting in active learning
How can I make sure that the initial model trained on a small dataset will not suffer from overfitting before applying the active learning sampling techniques? because I will use this model to select new unlabeled samples.
AI: I'm not totally sure about my answer so please take it with a grain of salt.
I think you shouldn't worry too much about the initial model being overfitted:
This is likely to happen since the initial dataset is small, so the model might have no choice other than to capture patterns that happen by chance.
The process of active learning is intended to "correct" the initial model progressively. This is not only about the model capturing new details, it can also be about the model re-evaluating previous patterns based on the data.
So my intuition would be to just let the model overfit a bit if it has to. However, if the model overfits a lot and/or is too complex, it means that it will require a lot (possibly too many) instances to be labeled. Depending on the context, this could be a more serious problem: the initial model should be decent enough for the active learning process not to need the labeling of many/all instances. |
H: can't understand the Architecture of Neural Network
Please explain how Z1 is computed. I just want to know why W is of shape (4,3). I understand that there are four weights and that we are performing (4,3)*(3,1) + (4,1), but I don't understand what the 3 in (4,3) is.
Just write the full equation of Z1; the rest is self-explanatory.
I am new to this field, so please spare me; it's very common to forget basic concepts in the beginning.
AI: A neural network is nothing but a stack of multiple logistic regressors. What is the equation of logistic regression?
hypothesis = $W^T * x$ + b
activation = Sigmoid (hypothesis)
Now, what does $Z_1$ tell us? It tells us to first take the dot product between the input (x) and the weight matrix, and then apply the non-linearity. The input vector has 3 components, $(x_1, x_2, x_3)$, and the weight matrix is
$$W^1 = \begin{bmatrix} w^{1}_{11} & w^{1}_{12} & w^{1}_{13} & w^{1}_{14} \\ w^{1}_{21} & w^{1}_{22} & w^{1}_{23} & w^{1}_{24} \\ w^{1}_{31} & w^{1}_{32} & w^{1}_{33} & w^{1}_{34} \end{bmatrix}$$
Now you're multiplying x with the transpose of W, i.e. $W^T x$. The transpose of the matrix W is
$$(W^1)^T = \begin{bmatrix} w^{1}_{11} & w^{1}_{21} & w^{1}_{31} \\ w^{1}_{12} & w^{1}_{22} & w^{1}_{32} \\ w^{1}_{13} & w^{1}_{23} & w^{1}_{33} \\ w^{1}_{14} & w^{1}_{24} & w^{1}_{34} \end{bmatrix}$$
So the input dimensions are (3, 1), the transposed matrix dimensions are (4, 3), and $W^T x$ is a (4, 3) * (3, 1) product, so after the multiplication the dimensions are (4, 1).
Now, Z1 is computed in this way.
The superscript in $W^1$ indicates the layer number: in your case this weight matrix belongs to hidden layer number 1. The subscripts, as in $w_{11}$ or $w_{12}$, are the indices used in the matrix multiplication: in $w_{1j}$ the first number tells us which input it comes from (here $x_1$), and in $w_{i2}$ the second number tells us which neuron of the hidden layer it goes to (here neuron number 2).
Now, let's compute $Z_1$:
$Z_1 = (w^{1}_{11} * x_1 + w^{1}_{21} * x_2 + w^{1}_{31} * x_3) + \text{bias}$
$a_1 = \text{sigmoid}(Z_1)$
The rest are computed in the same way; now that you have the weight matrix and the input vector, you can compute them yourself.
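A tiny NumPy sketch of the shapes involved (values are random, just for illustration):
import numpy as np

x = np.random.rand(3, 1)          # input: 3 features
W = np.random.rand(3, 4)          # weights: 3 inputs x 4 neurons
b = np.random.rand(4, 1)          # one bias per neuron

Z = W.T @ x + b                   # (4,3) @ (3,1) + (4,1) -> (4,1)
A = 1 / (1 + np.exp(-Z))          # sigmoid activation, still (4,1)
print(Z.shape, A.shape)           # (4, 1) (4, 1)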
H: Plot NaNs as a category seaborn countplot
I have a column in my dataframe which has 'True' as a value and all other values are NaNs (so there are no 'False' values). I want to plot a countplot for this data in seaborn but want to include the NaNs as well. Basically, I want to convert the NaNs to 'False' values and then plot a graph, but I don't want to make any changes to my original column. Is there a way I can create a separate category for NaNs as a False category and plot it alongside the True category?
AI: Yes, you can make a copy of your dataframe to a new dataframe and then apply the conversion from NaN to false as follows:
df2 = df.copy()
df2['column'] = df2['column'].fillna(False) |
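Then you can plot the copy as usual, for instance (assuming the column is really called 'column' and seaborn is available):
import seaborn as sns
sns.countplot(x='column', data=df2)   # now shows both True and False bars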
H: Does it means a high accuracy of model will have high confidence score
I am trying to select a model from several trained CNN models based on some criterion.
Initially, I was considering using the model confidence score to decide which model is better. Now I am considering using accuracy to select a good model among the others. I need to ask: is there a relation between accuracy and confidence score? Does high accuracy lead to a high confidence score, and vice versa? Thanks
AI: If by confidence score you mean the probability output of your CNN, then you have to consider whether your models are well calibrated. A well calibrated model is one where a confidence score of, e.g., 0.8 implies 80% accuracy. Another way to think about it is that if you get 100 predictions of class 0 with a confidence score of 0.8, then 80 of those predictions should indeed be of class 0. A specific type of plot, a 'calibration curve', can help you identify that. The above answers the question about the relation between the score and accuracy.
Now, in practical applications, you might not want to use the confidence score as a measure of performance. That is because of effects such as covariate shift and concept drift which have an unpredictable effect on the confidence scores. |
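A hedged sketch of such a calibration check for a binary problem, assuming y_test and the model's predicted probabilities probs already exist:
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

frac_positives, mean_predicted = calibration_curve(y_test, probs, n_bins=10)

plt.plot(mean_predicted, frac_positives, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("mean predicted confidence")
plt.ylabel("observed fraction of positives")
plt.legend()
plt.show()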
H: What is the disadvantage of using a completely normalized training set for Deep learning?
Batch normalization, which normalizes the output of the activation function in each layer, is generally preferred in deep learning (since the distribution of each layer's outputs changes depending on the input).
Instead, if the training set is normalized before passing through layers, does this solve the same problem?
AI: Batch normalization supposedly solves the issue of covariate shift within each layer. Even if you start out with a normalized/whitened dataset, after an input is passed through a layer of the network it may no longer be centered around 0 with unit variance.
For example, if one considers layers of the same size with constant weight matrix $w_{ij}=10$ and bias $b_i = 4$, so that $z^{L}_i = \sum_{j}w_{ij}z^{L-1}_j + b^L_i$, then if the neurons in layer $L-1$ (i.e. $z^{L-1}_j$) have zero mean and unit variance, the neurons in layer $L$ will have mean 4 and a variance inflated by the squared weights (here I'm ignoring the activation function for simplicity).
Therefore even if the input is normalized, the distribution of neuron values within the network are not guaranteed to be normalized. Batch norm solves this issue by normalizing each layer. |
H: Returning a DataFrame with nlargest values based on a particular column
This is my sample DataFrame:
inputArr = [['A', 0, 6],
['A', 1, 57],
['A', 2, 81],
['A', 3, 9],
['A', 4, 87],
['B', 0, 24],
['B', 1, 30],
['B', 2, 96],
['B', 3, 54],
['B', 4, 81],
['C', 0, 6],
['C', 1, 6],
['C', 2, 6],
['C', 3, 93],
['C', 4, 99],
['D', 0, 0],
['D', 1, 90],
['D', 2, 6],
['D', 3, 87],
['D', 4, 75],
['E', 0, 93],
['E', 1, 60],
['E', 2, 63],
['E', 3, 48],
['E', 4, 36]]
trialPD = pd.DataFrame(inputArr, columns = ["Name", "rating", "num_items"])
Now, I want to select rows from trialPD where the rating is in top 3 for a particular name. I tried trialPD.groupby("Name")["rating"].nlargest(3) but that doesn't seem to be returning the last column ("num_items") at all. is there a way to get the index where the rows match top 3 within a particular grouping?
P.S: This is my first question on StackExchange, so do tell me if I am doing anything wrong. BTW, I found the nlargest solution also on this same forum so thank you all for this!
AI: This gives a list of tuples (group_name, index) identifying the top-3 rows of each group (note the column is called 'rating' in your dataframe):
list(trialPD.groupby('Name')['rating'].nlargest(3).index)
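To get the full rows back (including num_items), you can use the second level of that MultiIndex, which holds the original row positions, e.g.:
idx = trialPD.groupby("Name")["rating"].nlargest(3).index.get_level_values(1)
print(trialPD.loc[idx])   # the complete top-3 rows for each Name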
H: What is difference between Validation steps and Steps per epoch?
I am trying to understand the difference between validation steps and steps per epoch. Can anybody tell me the difference between these two terms? I also want to know how they help during training and what numbers we should set for them.
AI: Validation steps
While training, a machine learning model performs training steps on the training data. A training step is a single forward and backward pass (one backpropagation) on one batch. A validation step, in contrast, is a single forward pass (an evaluation, with no backpropagation) on one batch of the validation data. Validation steps are generally performed after a number of training steps, typically at the end of each epoch, to track the validation accuracy. This way, we can track during training whether the model is overfitting or not.
Steps per epoch
Once we know what a step is, the next question is how many steps the model performs in a single epoch. One epoch is completed when the model has performed enough steps to go through the whole training data once; in the next epoch, the model starts again from the beginning of the training data. When training with the stochastic gradient descent process (the one generally used to train deep learning models) we need to specify a batch size. For example, if we specify a batch size of 16, the model performs 16 forward propagations, then the loss is calculated on all 16 predictions and a single backpropagation is performed to correct the weights. Now let's say there are 1600 training examples in the training set, so there will be 1600/16 = 100 batches, and therefore 100 backpropagations. 1 backpropagation = 1 step. Thus, there are 100 steps per epoch for this dataset.
Choosing numbers for both of them
The number of validation steps is not really something to tune: you should normally evaluate the whole validation set, i.e. (validation set size) / (batch size) steps, and track the result as your validation accuracy (or relevant metric). If you mean choosing how often the validation is run, that depends only on your speed and convenience; it won't affect the performance of the model directly.
Steps per epoch will directly affect the performance of the model. And, it depends on the batch size. This blog will help you understand how batch sizes affect the training process. And (steps per epoch) = (total number of training instances) / (batch size). |
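As a hedged Keras sketch (model, train_generator, val_generator, x_train and x_val are assumed to exist already):
batch_size = 16
model.fit(
    train_generator,
    steps_per_epoch=len(x_train) // batch_size,    # e.g. 1600 // 16 = 100
    validation_data=val_generator,
    validation_steps=len(x_val) // batch_size,     # cover the whole validation set
    epochs=10,
)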
H: Cosine vs Manhattan for Text Similarity
I'm storing sentences in Elasticsearch as dense_vector field and used BERT for the embedding so each vector is 768 dim. Elasticsearch gives similarity function options like Euclidean, Manhattan and cosine similarity. I have tried them and both Manhattan and cosine gives me very similar and good results and now i don't know which one should i choose ?
AI: Intuitively, if you normalized the vectors before using them, or if they all ended up having almost unit norm after training, then a small $l_1$ norm will imply that the angle between the vectors is small, hence the cosine similarity will be high. Conversely, almost colinear vectors will have almost equal coordinates because they all have unit length. So if one works well, the other will work well too.
To see this, remember the equivalence of $l_1$ and $l_2$ norms in $\mathbb{R}^n$, in particular that for any $x \in \mathbb{R}^n$ it holds that $||x||_2 \le ||x||_1$. We can use that to prove the first of the statements (the other is left as an exercise ;)
If $||u||_2 = ||v||_2 = 1$ and $||u-v||_1 \le \sqrt{2\epsilon}$, then $\langle u, v \rangle \ge 1 - \epsilon$.
To prove this just expand $||u-v||_2^2 = 2-2 \langle u, v \rangle$ to obtain:
$$\langle u, v \rangle = 1 - \frac{1}{2} ||u-v||_2^2 \ge 1- \frac{1}{2} ||u-v||_1^2 \ge 1 - \epsilon.$$
So in the end which one you choose is up to you. One reason to prefer the cosine is differentiability of the scalar product, which if you assume normed vectors is all you need. |
H: Identifying problematic binary features in classification task
I am working on a classification problem consisting of data with binary features. I am trying to find which features, when equal to 1 give a false negative for a particular class.
To better illustrate my point consider the data below consisting of samples with 5 features along with their GT and predicted classes.
Features |GT Pred
0 0 0 1 0|A A
0 0 0 0 1|A A
0 0 0 1 1|A A
1 0 0 1 1|A A
1 0 0 0 0|B B
0 1 0 0 0|B B
1 1 0 0 0|B B
0 0 1 0 0|B B
0 1 1 0 0|C C
1 0 1 0 0|C C
1 1 1 0 1|C A
1 0 1 0 1|C A
What statistical tests can I perform on the samples with GT label C that would tell me that feature 5 equal to 1 results in a misclassification?
AI: You could try modelling misclassification as a binary variable, and then you have $p$ (number of features) independence tests for two binary variables. You could use the likelihood ratio test or Pearson's $\chi^2$ test (see Wasserman, All of Statistics, Chapter 15).
Note that you will have to correct for the fact that you are doing multiple testing. The most crude approach is Bonferroni's correction, which in case of small $p$ and clear enough dependencies might be enough. Because it is so crude (conservative) you might want to look into a procedure that controls the false discovery rate instead (Wasserman, AoS, Chapter 10.7).
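A rough sketch of the per-feature test, assuming X is the (n_samples, p) 0/1 feature matrix of the GT-C samples and misclassified is a 0/1 vector (1 = the sample was misclassified):
import numpy as np
from scipy.stats import chi2_contingency

p = X.shape[1]
alpha = 0.05 / p                           # Bonferroni-corrected significance level
for j in range(p):
    table = np.zeros((2, 2))
    for xj, m in zip(X[:, j], misclassified):
        table[int(xj), int(m)] += 1        # 2x2 table: feature value vs misclassified
    # note: with very few samples some cells may be empty; Fisher's exact test is an alternative
    stat, pvalue, dof, expected = chi2_contingency(table)
    if pvalue < alpha:
        print(f"feature {j} is associated with misclassification (p={pvalue:.3g})")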
Update after OP's comment:
One idea to to estimate which single features caused a misclassification could be to estimate the average treatment effect of each one of them, considering each feature as a "treatment" and misclassification as the outcome. For each sample $X, Y$ the outcome $Y$ can be either correctly classified or not and you are interested in whether setting feature $X_{j} = 1$ had an effect. In order to approximate the ATE you need to randomize the treatment, i.e.randomize $X_{j}$ across all samples and then you can estimate
\[ \widehat{\operatorname{ATE}} = \hat{\mathbb{E}} [Y|X_{j} = 1] -
\hat{\mathbb{E}} [Y|X_{j} = 0] . \]
The reason that you can use correlations is that, because you randomized the treatment, it is now independent of the potential outcome for each sample.
This method is however quite naive and assumes that single features can flip the classification. Maybe they are related in a more complex way and $X_3=1$ and $X_{12}=1$ lead to a misclassification but neither of them does separately.
H: Efficient way to tackle card games with many q-table states?
I'm currently in the process of developing an AI for a popular card game here in Germany (called "Schafkopf"). Obviously, one could try to find a perfect strategy with the help of some game theory, but I tried the path with ML. Now after implementing the game and going down the line with a deep q-learning (reinforcement learning) approach, I faced the following issue:
I ran the agent for about five hours and my q-table grew to a size of ~49k rows. I therefore concluded that a q-table is ineffective for a game with a tremendous number of states (i.e. cards dealt to you, cards left in the stack in each given round ("card counting"), which cards are considered trump cards, and so forth).
Now my question arises: Is there a more efficient way / approach to such card games? Genetic algorithms? Supervised learning?
AI: How is it possible to use DeepQ along with a tabular Q representation? The whole purpose of Q-learning with action-value function approximation (= DeepQ) is to overcome the limitations of a tabular approach.
In any case, you need to consider that, in a sense, game theory + RL $\approx$ multi-agent RL (MARL). The game that you are trying to tackle is (most likely) a zero-sum game, as it is a competitive game among 2 or more players. If you use one agent vs another opponent agent (how did you model the opponents?) you treat the opponent as part of the environment, which makes the environment non-stationary from the learning agent's point of view. This problem is quite critical: most of the time, unless the opponent is easily exploitable and has very limited strategies, it will lead to conditions that won't allow your training agent to converge. Then you have to deal with the problem of partial observability, as players cannot see each other's hands. In other words, treating a MARL problem as plain RL is not a good idea.
Here is Hanabi a cooperative multi-agent card game. There are lots of MARL "solutions" (there is no actual solution just better performance) for this game out there. This might give you a starting point.
Before starting, consider that MARL contains quite advanced material and if you are not skilled with RL (and understand the math behind) it won't actually help you. Whatever algorithm you find it won't be an easy plug-n-play solution. |
H: Input to `.fit()` should have rank 4. Got array with shape: (31500, 784)
I am new to CNNs and am working on/using the MNIST dataset. After splitting the data to train and test sets, I needed to use 'ImageDataGenerator'. The code I used is the same code on the Keras API site.
The shapes are as below:
print(X_train.shape,
X_test.shape
,y_train.shape
,y_test.shape)
(31500, 784) (10500, 784) (31500,) (10500,)
But I suddenly faced a ValueError. Here is my code below:
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
validation_split=0.2)
datagen.fit(X_train)
The error:
ValueError: Input to `.fit()` should have rank 4. Got array with shape: (31500, 784)
How can I handle this error? Could anyone help?
AI: The fit method of ImageDataGenerator expects an input with four dimensions (n_samples, height, width, n_channels). The data you are providing only has two dimensions, i.e. (n_samples, height*width*n_channels). Try reshaping the data before using the fit method as follows:
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
validation_split=0.2)
datagen.fit(X_train.reshape(31500, 28, 28, 1))  # (n_samples, height, width, n_channels); -1 also works for the first dimension
H: Given M binary variables and R samples, what is the maximum number of leaves in a decision tree?
Given M binary variables and R samples, what is the maximum number of leaves in a decision tree?
My first assumption was that the worst case would be a leaf for each sample, thus R leaves maximum. Am I wrong and there should be a kind of connection with the number of variables M? I know that the maximum depth of a decision tree is M as a variable can appear once in a branch, but I don't see the relation with the number of leaves.
Thanks in advance!
AI: The maximum possible number of combinations with M binary variables is
$$2^M$$
so essentially, if all these combinations can carry different classes, the maximum number of leaves is
$$\min(R,\ 2^M)$$
i.e. $R$ if $R < 2^M$, and $2^M$ otherwise.
A tree which has all possible leaves for M binary variables can contain at most $2^M$ leaves: think of each leaf as labelled with one possible combination, e.g. 0000, 0001, 0010, ..., 1110, 1111, and each combination can appear only once, because only one label can be associated with each leaf.
If the same input combination appears with multiple labels, the maximum number of leaves equals the number of unique input combinations in R:
A B label
0 1 0 0
1 0 0 1
2 1 1 2
3 1 1 1
The number of leaves in this case would be 3 and not 4 (number of inputs) |
H: Really confused with characteristics of Naive Bayes classifiers?
Naive Bayes classifiers have the following characteristics-:
They are robust to isolated noise points because such points are averaged out when estimating conditional probabilities from data.
Naive Bayes classifiers can also handle missing values by ignoring the
example during model building and classification.
They are robust to irrelevant attributes. If X_i is an irrelevant attribute then P(X_i|Y) becomes almost uniformly distributed. The class conditional probability for X_i has no impact on the overall computation of the posterior probability.
I barely understand anything said here. The book doesn't even provide examples and most of the resources available online are exact photocopies of this book. None of those materials dive deep into these things and actually explain this.
Can you guys help me out here to explain what this means via examples? I will be really glad. I have been banging my head against the wall to get this concept for a long time. I will be glad with some recommended reading that I need to do as well.
AI: Before explaining I just want to point out that these points are only about the advantages of NB classification, there are also disadvantages (in particular NB is very prone to overfitting).
They are robust to isolated noise points because such points are averaged out when estimating conditional probabilities from data.
An "isolated noise point" has features values which differ a lot from the majority of the points. Since by definition there are very few such points, their values play a very small role in the conditional probability across all the points.
In my opinion this argument is a bit questionable, because isolated points can also cause a NB model to overfit due to rare feature values (this applies to Bernoulli NB, probably not to Gaussian NB).
Naive Bayes classifiers can also handle missing values by ignoring the example during model building and classification.
For a particular feature $x_i$, if some of the instances do not have a value for this feature it's still possible to calculate the conditional probabilities $p(x_i|Y)$ using the other instances. Interestingly, the model can ignore different instances for different features, which makes NB a bit more robust (i.e. kind of flexible) than other methods.
They are robust to irrelevant attributes. If X_i is an irrelevant attribute then P(X_i/Y) becomes almost uniformly distributed. The class conditional probability for X_i has no impact on overall computation of posterior probability.
An "irrelevant feature" is a feature which does not help predicting the class, which means that $p(x_i,Y) \approx p(x_i)p(Y)$ (the variables are close to independent). This is equivalent to $p(x_i|Y) \approx p(x_i)$, therefore the probability for this feature will be identical for every possible class $Y=y_k$ so it gives the same weight to every class.
Note: I think saying that "$P(X_i|Y)$ becomes almost uniformly distributed" is ambiguous at least, because normally $p(a|b)$ means the distribution of varying value $a$ given fixed value $b$. In my opinion it should be: $P(X_i|Y)$ becomes almost identical to $P(X_i)$. |
H: Alternative to EC2 for running ML batch training jobs on AWS
We are building an ML pipeline on AWS, which will obviously require some heavy-compute components including preprocessing and batch training.
Most the the pipeline is on Lambda, but Lambda is known to have time limits on how long a job can be run (~15mins). Thus for the longer running jobs like batch training of ML models we will need(?) to access something like EC2 instances. For example a lambda function could be invoked and then kick off an EC2 instance to handle the training.
Are there any alternatives to using EC2 for the heavy compute? Is there a way to still host/run the job on AWS without leveraging any EC2 to do the compute?
The idea is to avoid the extra management that comes with EC2 since we’re not currently using it. Keeping everything ad close to Lambda-like as possible is ideal.
AI: For batch training I have been using SageMaker; it's a bit more expensive than EC2, but it's easy to set up and get started.
Make a Docker container and push it to ECR, then start the training
and track the metrics using a monitoring tool like wandb.
If your use case doesn't require any custom packages, then you can also use a HuggingFace DLC, which can make it even easier to start training.
References:
Sagemaker_Examples
HuggingFace DLC |
H: What is the benefit of training an ML model with an AWS SageMaker Estimator?
It looks like there are different routes to deploying an ML model on SageMaker. You can:
pre-train a model, create a deployment archive, then deploy
create an estimator, train the model on SageMaker with a script, then
deploy
My question is: are there benefits of taking the second approach? To me, it seems like writing a training script would require a bit of trial and error and perhaps some extra work to package it all up neatly. Why not just train a model running cells sequentially in a jupyter notebook where I can track each step, and then go with the first approach?
Does anyone have experience and can compare/contrast these approaches?
AI: The first approach works well when you are training a small model or one that doesn't take much compute time, but when training large models the second approach is usually preferred.
Reasons are as follows:
When training a large model you might require distributed training, either data- or model-parallel.
When running a large model it's best practice to train via the second approach: if the training stops abruptly, the job gets stopped and you will not be charged, which is not the case when you run via a notebook instance.
So that extra work pays off, as it helps you train faster and saves your bill!
H: Are there any popular English corpus?
Are there any popular English corpus?
AI: Finding corpora for NLP research can be hit and miss, my advice would be to study the availability of adequate data when deciding about the research direction, not afterwards. Of course this completely depends on the type of requirement for the data. In case you have to create your own corpus, design the corpus collection and annotation very carefully because papers with weaknesses in the data collection can be rejected (at least in selective venues). There's no particular problem about collecting text data from the web, as long as this can be justified (for instance social media is not a good source for grammatically correct sentences ;) ).
Honestly I'm not aware of any simple way to find corpora. Here are some sources:
The Linguistic Data Consortium has a catalog of corpora, some free and some not.
ELDA also has a catalog, it's also a semi-commercial provider.
The LRE Map is a repository (also by ELDA) for people to register their research data and software.
A major source of quality data are the various shared tasks which are often organized jointly with major conferences. It's very task specific though.
For the rest it's often about following specific parts of the domain, for example if you find papers related to your task of interest check where the authors found their data, whether they make some data available on their webpage, etc.
For phrasal verbs the PARSEME Shared Task corpora might suit your needs. |
H: What does bootstrap mean in scikit-learn's BaggingClassifier?
I just started using scikit-learn and was learning about the BaggingClassifier. I am a little confused on what bootstrap means. The meaning on the scikit-learn website doesn't make sense. Just asking for a little help. Thanks.
AI: Bootstrapping methods resample from the data with replacement to "fake more data". You've got many good explanations in stats SE. For bagging this means sampling a "new" data set from the training data for each base estimator that is fitted. The bootstrap parameter of BaggingClassifier simply controls whether this resampling is done with replacement (True, the default) or without replacement (False).
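A quick sketch (the hyperparameter values are just illustrative; the default base estimator is a decision tree):
from sklearn.ensemble import BaggingClassifier

bag = BaggingClassifier(
    n_estimators=10,
    max_samples=1.0,     # each resampled set is as large as the training set
    bootstrap=True,      # sample with replacement; False -> without replacement
    random_state=0,
)
# bag.fit(X_train, y_train) would then fit 10 trees, each on its own bootstrap sample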
H: Reducing The Impact of Luck On My Training Data
I am new to this data science stuff and I am trying a project on my own to learn more about this field. So I have a project that has the goal of taking in a bunch of features and indicating whether a player will make or miss a shot.
My current training data has a bunch of features alongside the output for each observation. I plan on using a Random Forrest model, as I am comfortable with it (and it fits the objective), however, one issue I see includes making sure luck does not play a role in the decision of the output.
I am trying to think of ways to limit the impact of luck on the model. For anyone familiar with basketball: sometimes a player takes a great shot and misses, and sometimes he takes a horrible shot and makes it (both of those situations will be included in my training set). I do not want the model "thinking" that a shot is good/bad because of lucky/unlucky makes/misses.
So my question is: how can I limit the impact of luck within my data sets? Or am I able to simply assume that a large enough data set will take care of the luck, since one gets lucky and unlucky at relatively equal rates (normal distribution)? Or should I instead revert to an unsupervised model where the data does not include whether the shot was a miss or a make? Or is there another option I have not considered to make the data better?
Thank you for your feedback.
AI: If your training data is large enough, the model will have enough information to deal with chance through the statistics in the data. For example maybe a great shot is successful 80% of the time, so if there are 10 instances of great shot in the data there should be around 8 of them successful. In other words, the model will use the distribution of the data in order to make the best predictions possible.
When applying the model the predictions are deterministic, so there can be only one possible outcome for one instance. However with most types of models you can obtain the probability of success according to the model instead of a binary answer.
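For instance, with a random forest (a hedged sketch, assuming X_train, y_train and some new shots X_new exist):
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_new))         # hard make/miss predictions
print(model.predict_proba(X_new))   # estimated probability of miss/make for each shot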
Minor notes:
lucky or unlucky would be a Bernoulli or binomial distribution, not a normal one.
unsupervised learning would be a completely different task, it would not make sense to do it for this reason. |
H: Different values of mean absolute error when using GridSearchCV for max_leaf_nodes vs manually optimising max_leaf_nodes
I am trying out hyperparameter tuning vs manually selecting the best parameter (max_leaf_nodes) on a decision tree model with mean absolute error as the scoring. In theory, both should give me the same MAE and max_leaf_nodes; but, both are giving me different MAEs. Also, if I change the value of cv in GridSearchCV I get different results. So basically I have two questions:
Why am I getting different max_leaf_nodes and MAE in both cases?
How do I determine the value of cv in GridsearchCV, because I get different results for cv = 3, cv = 5, and cv = 10?
AI: Your manual approach gives the MAE on the test set. Because you've set an integer for the parameter cv, the GridSearchCV is doing k-fold cross-validation (see the parameter description in grid search docs), and so the score .best_score_ is the average MAE on the multiple test folds.
If you really want a single train/test split, you can do that in GridSearchCV, see e.g. this SO post. |
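To make the comparison concrete, a short hedged sketch (assuming grid is your fitted GridSearchCV and X_test/y_test are your held-out data):
from sklearn.metrics import mean_absolute_error

print(grid.best_score_)   # mean score over the CV folds (its sign depends on the scorer used)
print(mean_absolute_error(y_test, grid.best_estimator_.predict(X_test)))   # single test split
The two numbers measure different things (an average over folds of the training data vs one held-out split), so they generally won't match; likewise, changing cv changes which folds are averaged, which explains the different results you see.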
H: Gamma parameter in xgboost
As per the original paper on xgboost, the best split at a node is found by maximising the quantity below
$$
\mathcal{L}_{\rm split} = \frac{1}{2} \left[ \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda} \right] - \gamma
$$
There exists a gamma parameter in xgboost package; assuming it is referring to the same parameter as in the equation, why would it impact the choice of the split if its value does not change?
AI: You are correct that it does not affect the choice of which split to make. Instead, it affects the choice of whether to make any split. If every $\mathcal{L}_{\text{split}}$ is negative, then no split will be made at the node, pre-pruning the tree.
See also Tree complexity in xgboost |
H: Traditional alternatives to Caputure Words Sequence information in NLP
What were the traditional/earlier methods in which NLP researchers captured the word sequence information through feature engineering?
I know the current methods rely on deep learning models like RoBERTa and BERT and work well at capturing sequence information. I also know about embeddings like word2vec, but they fail to capture the sequence information.
For example, I would like a feature which can differentiate between
"cat ran after the dog." and "dog ran after the cat."
AI: There are different methods, it depends what kind of sequential task.
Conditional Random Fields are typically used for sequence labeling tasks like POS tagging or NER.
n-gram models can be used for standard language models, but n-grams can also be used as features in various tasks where order matters. However the larger $n$ the more data is required, so it's rare to go beyond $n=5$ and this is a limitation for tasks which require long distance relations to be represented.
In order to reasonably capture semantics, the traditional approach is to deploy a chain of components: POS tagging, syntactic parsing (e.g. dependency parsing), then semantic role labeling. This would result in an explicit semantic representation of the sentence. |
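A small illustration of the n-gram-as-features idea with scikit-learn (the two example sentences are taken from the question):
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["cat ran after the dog", "dog ran after the cat"]
unigrams = CountVectorizer(ngram_range=(1, 1)).fit_transform(docs).toarray()
bigrams = CountVectorizer(ngram_range=(2, 2)).fit_transform(docs).toarray()

print(np.array_equal(unigrams[0], unigrams[1]))   # True  -> unigrams cannot tell them apart
print(np.array_equal(bigrams[0], bigrams[1]))     # False -> bigrams capture the local order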
H: What is the difference between BERT and Roberta
I want to understand the difference between BERT and Roberta. I saw the article below.
https://towardsdatascience.com/bert-roberta-distilbert-xlnet-which-one-to-use-3d5ab82ba5f8
It mentions that Roberta was trained on 10x more data but I don't understand the dynamic masking part. It says masked tokens change between epochs.
Shouldn't this flatten the learning curve?
AI: The masked language model task is the key to BERT and RoBERTa. However, they differ in how they prepare such masking. The original RoBERTa article explains it in section 4.1:
BERT relies on randomly masking and predicting tokens. The original BERT implementation performed masking
once during data preprocessing, resulting in a single static mask. To avoid using the same mask for
each training instance in every epoch, training data
was duplicated 10 times so that each sequence is
masked in 10 different ways over the 40 epochs of
training. Thus, each training sequence was seen
with the same mask four times during training.
We compare this strategy with dynamic masking where we generate the masking pattern every
time we feed a sequence to the model. This becomes crucial when pretraining for more steps or
with larger datasets.
This way, in BERT, the masking is performed only once at data preparation time, and they basically take each sentence and mask it in 10 different ways. Therefore, at training time, the model will only see those 10 variations of each sentence.
On the other hand, in RoBERTa, the masking is done during training. Therefore, each time a sentence is incorporated in a minibatch, it gets its masking done, and therefore the number of potentially different masked versions of each sentence is not bounded like in BERT. |
H: Choosing a Change Point Detection Algorithm
I am currently working on a dataset that belongs to the restaurant and food delivery domain. After completing sentiment analysis and quantification, I now need to select a Change Point Detection Algorithm and detect a shift in the sentiment signal in the reviews on each category of restaurant. The signal will be a score that is the difference between the positivity and negativity of the reviews in a certain time frame.
I have considered 3 sentiments that are positive, neutral and negative and am thus contemplating the use of a multivariate time series. Since, I have the complete dataset at hand, I will be conducting an offline change point detection algorithm to detect a shift in the sentiment of the reviews in the pre-covid and post-covid time.
Please provide some help on how to select an algorithm. I am very confused; a few factors to consider, with links to resources, are welcome.
AI: The problem with Change point detection (or positive/negative trend detection), is that it depends on many things, including noise and sensitivity through time.
For instance, you cannot send an alert when the scores just started being negative in one day. You have to wait a few days to see if the trend is actually bad or not.
Consequently, you have to tune your model to define the "few days" and the "criticality".
That's why I would recommend visualizing every category's trend using a general score (e.g. the sum of +1 for positive, 0 for neutral and -1 for negative) and applying a smoothing function (e.g. a Kalman filter with different noise-reduction values). This way, you should be able to determine the sensitivity required to detect when the situation is becoming critical or improving.
For the smoothing function, you can use pykalman. After evaluating the right noise reduction and the right quantity of days N, you can apply the diff function to measure the difference of the filtered curve in the last N days. |
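A simplified sketch of that idea, using a rolling mean as a stand-in for the Kalman filter mentioned above; scores is assumed to be a pandas Series of daily category scores, and the window and threshold values are placeholders to tune:
import pandas as pd

N = 7                                                        # the "few days" window
smoothed = scores.rolling(window=N, min_periods=1).mean()    # noise reduction
trend = smoothed.diff(N)                                     # change over the last N days

threshold = 0.5                                              # criticality level
alerts = trend[trend.abs() > threshold]                      # days where the trend shifted strongly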
H: How does Word2Vec actually help with sentiment analysis?
I'm trying read in a whole article, separate the article by sentences, and then words. Then I pass this into the Word2vec Model and the output comes out.
However, my goal is to find the positive or negative sentiment of the article. The input is unsupervised in that it does not have a label.
Do I need to perform some sentiment scoring on the article before inputting it into word2vec? I don't understand how word2vec actually helps with sentiment analysis. All it tells me is that words are close together / have the same context, but not whether the words are positive or negative.
I've read articles claiming to "use word2vec for sentimental analysis", but none actually do, so I'm not sure if I am misreading something here.
I'm wondering how I should go about this. Thanks.
AI: In general, you need labeled data to perform Sentiment Analysis. In case you don't have, you need to improvise. I found one article where the author claims that his implementation of unsupervised learning worked adequately.
The post: Unsupervised Sentiment Analysis
I quote some parts of it:
The main idea behind this approach is that negative and positive words
usually are surrounded by similar words. This means that if we would
have movie reviews dataset, word ‘boring’ would be surrounded by the
same words as word ‘tedious’, and usually such words would have
somewhere close to the words such as ‘didn’t’ (like), which would also
make word didn’t be similar to them. On the other hand, it would be
unlikely to have happened, that word ‘tedious’ had more similar
surrounding to word ‘exciting’, than to word ‘boring’. With such
assumption, words could form clusters (based on similarity of their
surrounding) of negative words that have similar surroundings,
positive words that have similar surroundings, and some neutral words
that end up between them (such as ‘movie’).
So what he actually did was to use word2vec to transform his texts to vectors and then a simple K-Means with K=2. He expected that positive words will gather in one cluster and negative words in the other cluster.
Then using gensim’s most_similar method he compared a word with each of the clusters.
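A rough sketch of that unsupervised approach (gensim 4.x API assumed; tokenized_docs is a list of token lists from your articles, and the hyperparameters are placeholders):
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

w2v = Word2Vec(sentences=tokenized_docs, vector_size=100, window=5, min_count=2)
kmeans = KMeans(n_clusters=2, random_state=0).fit(w2v.wv.vectors)    # hoped-for positive/negative split

word = "boring"                                       # must appear in your corpus vocabulary
print(kmeans.predict(w2v.wv[word].reshape(1, -1)))    # which cluster the word falls into
print(w2v.wv.most_similar(word, topn=5))              # its nearest neighbours in the embedding space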
It's nice to experiment like this, but nowadays, it is super easy to find a labeled dataset to use in almost any language. |
H: How is there an inverse relation between precision and recall?
What I know?
Firstly,
Precision= $\frac{TP}{TP+FP}$
Recall=$\frac{TP}{TP+FN}$
What book says?
A model that declares every record has high recall but low precision.
I understand that if predicted positive is high, precision will be low. But how will recall be high if predicted positive is high.
A model that assigns a positive class to every test record that matches one of the positive records in the training set has very high precision but low recall.
I am not able to properly comprehend how there is inverse relation between precision and recall.
Here is a doc that I found, but I could not understand it from this doc either.
https://www.creighton.edu/fileadmin/user/HSL/docs/ref/Searching_-_Recall_Precision.pdf
AI: There is an overall inverse relationship, but not a strictly monotone one. See e.g. the precision-recall curves in the sklearn examples.
A model that declares every record [to be positive class] has high recall but low precision.
If the model declares every record positive, then $TP=P$ and $FP=N$ (and $FN=TN=0$). So recall is 1; and the precision is $P/(P+N)$, i.e. the proportion of positives in the sample. (That may be "low" or not.)
Rather than addressing your second quote immediately, I think it might be beneficial to just examine (nearly) the opposite case to the above. Suppose your classifier only makes one positive prediction; assuming the model rank-orders reasonably well, it's very likely this is a true positive. Then $TP=1$, $FP=0$, $TN$ and $FN$ are both large. Then precision is 1, and recall is very small.
The second quote makes the assumption there more solid: every positive prediction is a true positive (assuming no opposite-class clones), but there are very few positive predictions. |
H: what can be done using NLP for a small sentence samples?
I am new to NLP. I have around 100 textual sentences (100 rows in a dataframe) with an average length of 10 words per sentence. I would like to know what interesting insights (from simple descriptive ones to advanced ones) can be derived using NLP techniques. I don't intend to predict anything, just to analyze and get some interesting insights.
I have thought of the below items that can be done using sample data that I have.
Count the number of occurrences of each word in a sentence and finally find out the most frequently used (top) word and least used (bottom) word in the list of sentences that I have
Find the Entity in each sentence using NER. Which entity is discussed most in the sentences that I have?
Find which sentences are similar using textual similarity metrics.
I can identify the sentiment of the sentence
Can LDA be used to identify the topic of a sentence that has only 10 words on average, given that my dataset itself has only 100 sentences?
What do you think is the use of creating syntactic/dependency trees? What can we infer from them? This might be useful for linguists, but can it help layman end-users/business folks get some insight? A simple explanation of this topic, or pointers to resources, would be helpful.
I think we cannot summarize the sentences because they only contain 10 words on average.
Can you help me with q5, q6, and q7?
Is there anything else that you think can be done with this kind of data?
AI: First I think it's worth mentioning that in the context of an exploratory study with a small dataset, manual analysis is certainly as useful as applying NLP methods (if not more) since:
Small size is an advantage for manual study and a disadvantage for automatic methods.
There's no particular goal other than uncovering general patterns or insights, so it's unlikely that the results of an automatic unsupervised method would exhibit anything not directly observable.
That being said, one can always apply automatic methods, if only to observe what they can and cannot capture.
Observing frequency (point 1) can always be useful. You may consider variants with/without stop words and using document frequency (number of documents containing a term) instead of term frequency.
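For instance, term frequency and document frequency can be computed with plain Python (the sentences below are placeholders for your own data):
from collections import Counter

sentences = ["the plot was tedious", "the plot twist was exciting"]   # placeholder data
tokenized = [s.lower().split() for s in sentences]

term_freq = Counter(w for toks in tokenized for w in toks)        # total occurrences of each word
doc_freq = Counter(w for toks in tokenized for w in set(toks))    # number of sentences containing each word
print(term_freq.most_common(5))
print(doc_freq.most_common(5))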
Points 3 and 5 are closely related: LDA essentially clusters the sentences by their similarity, using conditional word probabilities as hidden variables. But the small size makes things difficult for any probabilistic method, and there could be many sentences which have little in common with any other.
Syntactic analysis with dependency parsing can perfectly be applied to any sentence, but the question is what for? As far as I know this kind of advanced analysis is not used for exploratory study, it's used for specific applications where one needs to obtain a detailed representation of the full sentence. Traditionally this was used for higher-level tasks involving semantics, often together with semantic role labeling and/or relation extraction. I'm not even sure that this kind of symbolic representation is still in use now that end-to-end neural methods have become state of the art in most applications.
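If you want to see what a dependency parse actually gives you, a quick way is spaCy (this assumes the small English model en_core_web_sm has been downloaded; it is only an illustration, not a recommendation for your use case):
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes: python -m spacy download en_core_web_sm
doc = nlp("The movie was surprisingly tedious despite the strong cast.")
for token in doc:
    print(f"{token.text:12} {token.dep_:10} head: {token.head.text}")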
I agree that summarizing a short sentence is pointless. You could try to summarize the whole set of sentences though, if that makes sense.
In the logic of playing with any possible NLP method, you could add a few things to your list:
Lemmatizing the words; this can actually be useful as preprocessing.
Using embeddings or not: on the one hand this can help finding semantic similarities through the embedding space, on the other hand the small size makes it questionable to project the data in a high dimension space.
Finding collocations (words which tend to appear together in the same sentence) with association measures such as Pointwise Mutual Information (see the short sketch after this list).
Spelling correction and/or matching similar words with string similarity measures.
It's unlikely that there's any interest in it but there are also stylometry methods, i.e. studying the style of the text instead of the content. These range from general style like detecting the level of formality or readability to trying to predict whether two texts were authored by the same person.
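As a small illustration of the collocation idea, here is a minimal PMI sketch with NLTK; the sentences are placeholders for your own 100 sentences:
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

sentences = ["the plot was tedious", "the plot was exciting"]   # placeholder data
tokens = [w for s in sentences for w in s.lower().split()]

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)              # ignore word pairs seen only once
print(finder.nbest(measures.pmi, 10))    # top word pairs by Pointwise Mutual Information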
H: Implementing computational graph and autograd for tensor and matrix
I am trying to implement a very simple deep learning framework like PyTorch in order to get a better understanding of computational graphs and automatic differentiation. I implemented an automatic differentiator for scalar values inspired by this and then was trying to implement a tensor automatic differentiator that you can see in my code below.
import numpy as np

class Tensor:
    def __init__(self,data,_children=()):
        self.data=data
        self.grad=np.zeros_like(self.data)   # gradient of the final output w.r.t. this tensor
        self._prev=set(_children)            # tensors this one was computed from
        self._backward=lambda :0             # set by the operation that created this tensor
    def __str__(self):
        return f"Tensor of shape {self.data.shape} with grad {self.grad}"
    def dag(self):
        # topological ordering of the computational graph rooted at self
        topo=[]
        visited=set()
        def build_topo(v):
            visited.add(v)
            for i in v._prev:
                if(i not in visited):
                    build_topo(i)
                else:
                    pass
            topo.append(v)
        build_topo(self)
        topo.reverse()
        return topo
    def backward(self):
        # propagate gradients from the output back towards the leaves
        topo=self.dag()
        self.grad=np.ones_like(self.data)
        for v in topo:
            v._backward()
    @staticmethod
    def sum(self,other):
        _tensor=Tensor(self.data+other.data,(self,other))
        def _back():
            self.grad=_tensor.grad
            other.grad=_tensor.grad
        _tensor._backward=_back
        return _tensor
    @staticmethod
    def dot(self,other):
        assert self.data.shape[1]==other.data.shape[0], \
            f"can't multiply two Tensor with shape {self.data.shape} and {other.data.shape}"
        _tensor=Tensor(np.dot(self.data,other.data),(self,other))
        def _back():
            # note: elementwise product with the transpose, mirroring the scalar rule
            self.grad=(_tensor.grad*other.data.T)
            other.grad=(_tensor.grad*self.data)
        _tensor._backward=_back
        return _tensor
My question is: how am I supposed to implement an automatic differentiator when we have a matrix of input data where each input is a column vector of the matrix (like what we do when training a neural network)?
I would appreciate it if you could give me some study material or sample code so I can implement PyTorch-like autograd for matrix inputs or tensors.
AI: Making my comments an answer.
Effectively, the micrograd engine uses the reverse mode of automatic differentiation (stored in the backward method) in order to calculate the derivatives as they pass back through the various layers, starting from the final output (hence "reverse mode"). This is what is needed for training NNs.
To generalise this to vector/matrix entities one can follow one of 2 approaches:
Construct matrices from scalar Values, and keep using micrograd.
Generalise the reverse mode differentiation rules to use matrices/tensors.
Both can be used; which is most efficient depends on various factors, along with ease of implementation.
To follow the 2nd approach (and make something like PyTorch), one has to generalise the gradients of scalars to gradients of matrices, and the reverse-mode differentiation rules for scalars to rules for matrices.
For example (reference 3 below):
Assuming a binary operator $C = f(A, B)$ that takes two matrices $A, B$ and produces a third matrix $C$ (and similarly for unary operators $C = f(A)$), the reverse-mode automatic differentiation rules are:
$C = f(A, B) = A+B$, reverse mode grad: $\operatorname{grad}A=\operatorname{grad}C, \operatorname{grad}B=\operatorname{grad}C$
$C = f(A, B) = A \cdot B$, reverse mode grad: $\operatorname{grad}A=\operatorname{grad}C \cdot B^T, \operatorname{grad}B=A^T \cdot \operatorname{grad}C$
$C = f(A) = A^T$, reverse mode grad: $\operatorname{grad}A=\operatorname{grad}C^T$
$C = f(A) = A^{-1}$, reverse mode grad: $\operatorname{grad}A=-C^T \cdot \operatorname{grad}C \cdot C^T$
and so on (see reference 3).
So that is the way to generalise the operations to take account of matrix entities (these rules would be put in the respective backward methods). Note: for scalar values (i.e. degenerate matrices of shape 1×1) the above rules reduce to the micrograd (reverse differentiation) rules for scalar Values.
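To make the $C = A \cdot B$ rule concrete, here is a small self-contained numpy check (my own example, not micrograd or PyTorch code); note that this is exactly the situation of a weight matrix applied to a batch whose columns are individual input vectors:
import numpy as np

A = np.random.randn(3, 4)      # e.g. a weight matrix
B = np.random.randn(4, 5)      # e.g. a batch of 5 column-vector inputs
C = A @ B

grad_C = np.ones_like(C)       # upstream gradient, as seeded in backward()
grad_A = grad_C @ B.T          # gradA = gradC . B^T
grad_B = A.T @ grad_C          # gradB = A^T . gradC

# finite-difference sanity check on one entry of A, with loss = C.sum()
eps = 1e-6
A_shift = A.copy()
A_shift[0, 0] += eps
approx = ((A_shift @ B).sum() - C.sum()) / eps
print(np.isclose(approx, grad_A[0, 0]))    # True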
References:
Matrix calculus
Vector, Matrix, and Tensor Derivatives
An extended collection of matrix derivative results for forward and reverse mode algorithmic differentiation
Automatic Differentiation for Matrices (Differentiable Programming)
H: Loss is Nan even with clipvalue set and Adam optimizer
I'm currently doing this task from Kaggle.
I've normalized my data with the min-max scaler and avoided the dummy variable trap by removing one column from every set of dummy variables I created.
Here is the first row of my training data:
array([0.45822785, 0.41137515, 0.41953444, 0.01045186, 0.00266027,
0.13333333, 0. , 0.02342393, 0.62156863, 0.16778523,
0.09375 , 0. , 1. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 1. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 1. , 0. , 0. ,
0. , 0. , 0. , 1. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0. , 0. , 1. , 0. , 0. ,
0. , 0. , 1. , 1. , 0. ])
This is the model I'm using:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation,Dropout
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=25)
optimizer = optimizers.Adam(clipvalue=1)
model = Sequential()
# https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw
model.add(Dense(units=85,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=42,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=42,activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=1,activation='sigmoid'))
# For a binary classification problem
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.fit(x=X_train,
          y=y_train,
          epochs=600,
          validation_data=(X_test, y_test), verbose=1,
          callbacks=[early_stop]
          )
As stated, I'm using a clipvalue of 1 with the Adam optimizer, which is what was recommended in this and this post.
Despite that, the output I'm getting is the following:
Epoch 1/600
8664/8664 [==============================] - 7s 810us/step - loss: nan - val_loss: nan
Epoch 2/600
8664/8664 [==============================] - 7s 819us/step - loss: nan - val_loss: nan
Epoch 3/600
8664/8664 [==============================] - 7s 818us/step - loss: nan - val_loss: nan
Epoch 4/600
8664/8664 [==============================] - 7s 824us/step - loss: nan - val_loss: nan
Epoch 5/600
8664/8664 [==============================] - 7s 805us/step - loss: nan - val_loss: nan
Epoch 6/600
8664/8664 [==============================] - 7s 800us/step - loss: nan - val_loss: nan
Epoch 7/600
8664/8664 [==============================] - 7s 833us/step - loss: nan - val_loss: nan
Epoch 8/600
8664/8664 [==============================] - 7s 821us/step - loss: nan - val_loss: nan
Epoch 9/600
8664/8664 [==============================] - 7s 806us/step - loss: nan - val_loss: nan
Epoch 10/600
8664/8664 [==============================] - 7s 813us/step - loss: nan - val_loss: nan
AI: It ended up being that I didn't save my changes after removing the NaN values... With NaNs still present in the input features, the loss is NaN from the very first batch, so no amount of gradient clipping can help.
Changed this:
df.dropna()
To this:
df = df.dropna()
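For anyone hitting the same issue: pandas' dropna returns a new DataFrame rather than modifying it in place, so the result has to be reassigned (or inplace=True used). A quick illustration:
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan, 3.0]})
df.dropna()                      # returns a new frame; df still contains the NaN
print(df.isna().sum().sum())     # 1
df = df.dropna()                 # reassign to actually drop the row
print(df.isna().sum().sum())     # 0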