H: When to use GloVe vocabulary vs. building a vocabulary from the training data?
While studying some (pytorch) examples that use pretrained GloVe vectors I came across two variants:
Use the vocabulary of the GloVe vectors and thus initialize the embedding layer with the pretrained GloVe vectors.
Build a vocabulary from the corpus and then only use the pretrained GloVe vectors that correspond to that vocabulary to initialize the embedding layer.
To me it seems that by using the vocabulary of the GloVe vectors there is a chance that some of the tokens in the training set may not have corresponding GloVe vectors and are thus excluded from the vocabulary. Therefore, you may miss out on tokens that are significant for the task.
On the other hand, building a vocabulary from the corpus implies that the model cannot handle unseen words (as far as I understand that correctly).
Therefore, I was wondering: When should one use the GloVe vocabulary vs. building a vocabulary from the training data? And would it make sense to use the "union" of the two vocabularies instead?
AI: The arguments you present are pretty much correct.
The primary reason for using pre-trained embeddings is typically the lack of task-specific training data. In tasks that (at least for some languages) have vast amounts of training data, such as machine translation, the word embeddings are always trained jointly with the rest of the model.
The mere fact that you are using GloVe indicates that your training data is not that large. In that case, it is very likely that at inference time words will appear that were not in the training data, but GloVe still has a good representation for them and the rest of the model knows how to use it. GloVe has a really large vocabulary, so an unseen word at inference time is much more likely to be covered by GloVe than by your training vocabulary.
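For illustration, here is a rough PyTorch sketch of initializing the embedding layer from a corpus-built vocabulary while falling back to randomly initialized vectors for words GloVe does not cover (the vocabulary and GloVe dictionary below are placeholders, not real data):
import numpy as np
import torch
import torch.nn as nn
# placeholder vocabulary built from the training corpus and a dict of loaded GloVe vectors
vocab = ["the", "cat", "frobnicate"]
glove = {"the": np.random.rand(100), "cat": np.random.rand(100)}
dim = 100
# random init for words GloVe does not know, pretrained vectors where available
weights = np.random.normal(scale=0.6, size=(len(vocab), dim))
for i, word in enumerate(vocab):
    if word in glove:
        weights[i] = glove[word]
embedding = nn.Embedding.from_pretrained(torch.tensor(weights, dtype=torch.float), freeze=False)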
You can also opt for a hybrid solution. If there are words in the training data that are not in GloVe but are still frequent enough, you can learn their embeddings jointly with the rest of the model while keeping the rest of the GloVe embeddings frozen. (But implementing this might be a little bit tedious.) |
H: Why do I get an ValueError for an SVR model with RFE, but only when using pipeline?
I am running five different regression models to find the best predicting model for one variable. I am using a Leave-One-Out approach and using RFE to find the best predicting features.
Four of the five models are running fine, but I am running into issues with the SVR. This is my code below:
from numpy import absolute, mean, std
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.feature_selection import RFECV
from sklearn.pipeline import Pipeline
# one hot encoding
dataset.Gender.replace(to_replace=['M','F'],value=[1,0],inplace=True)
# select predictors and dependent
X = dataset.iloc[:,12:]
y = dataset.iloc[:,2]
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(X)
First I run LOOCV with all features, this runs fine
## LOOCV with all features
# find number of samples
n = X.shape[0]
# create loocv procedure
cv = LeaveOneOut()
# create model
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
# evaluate model
scores = cross_val_score(regressor, X, y, scoring='neg_mean_squared_error', cv=n)
# force positive
#scores = absolute(scores)
# report performance
print('MSE: %.3f (%.3f)' % (mean(scores), std(scores)))
Next, I want to include RFECV to find the best predicting features for the model, this runs fine for my other regression models.
This is the part of the code where I get the error:
# automatically select the number of features with RFE
# create pipeline
rfe = RFECV(estimator=SVR(kernel = 'rbf'))
model = SVR(kernel = 'rbf')
pipeline = Pipeline(steps=[('s',rfe),('m',model)])
# find number of samples
n = X.shape[0]
# create loocv procedure
cv = LeaveOneOut()
# evaluate model
scores = cross_val_score(pipeline, X, y, scoring='neg_mean_squared_error', cv=n)
# report performance
print('MSE: %.3f (%.3f)' % (mean(scores), std(scores)))
The errors I receive are
ValueError: when `importance_getter=='auto'`, the underlying estimator SVR should have `coef_` or `feature_importances_` attribute. Either pass a fitted estimator to feature selector or call fit before calling transform.
I am not sure what this error means?
AI: RFE operates by fitting its estimator and then eliminating the worst feature(s), and recursing. The "worst" feature(s) are determined by using feature importance from the model, by default using either coef_ or feature_importances_ (as noted in the error message). SVR has no such attribute, and indeed does not really come with built-in feature importances, especially with a nonlinear kernel. See also https://stats.stackexchange.com/q/265656/232706
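If a linear ranking model is acceptable, one practical workaround (a sketch on top of your code, not something prescribed by the docs) is to let RFECV rank features with a linear-kernel SVR, which does expose coef_, while keeping the RBF SVR as the final model:
from numpy import mean, std
from sklearn.svm import SVR
from sklearn.feature_selection import RFECV
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score, LeaveOneOut
# linear-kernel SVR exposes coef_, so RFECV can rank features with it
rfe = RFECV(estimator=SVR(kernel='linear'))
model = SVR(kernel='rbf')
pipeline = Pipeline(steps=[('s', rfe), ('m', model)])
# X and y as defined earlier in your code
scores = cross_val_score(pipeline, X, y, scoring='neg_mean_squared_error', cv=LeaveOneOut())
print('MSE: %.3f (%.3f)' % (mean(scores), std(scores)))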
With the estimator being a pipeline, you'd anyway need to give more detail to the RFE on where to get the coefficients, see the second paragraph of the docs for importance_getter:
Also accepts a string that specifies an attribute name/path for extracting feature importance (implemented with attrgetter). For example, give regressor_.coef_ in case of TransformedTargetRegressor or named_steps.clf.feature_importances_ in case of sklearn.pipeline.Pipeline with its last step named clf.
Finally, if you really want to use SVR, have a look at the third paragraph of the docs for importance_getter:
If callable, overrides the default feature importance getter. The callable is passed with the fitted estimator and it should return importance for each feature.
You can write a callable that uses, say, permutation importance (though this will be expensive) or some other agnostic importance measure. Erm, actually, since the callable only gets the fitted estimator, and not the data, permutation importance won't work. See also https://stats.stackexchange.com/q/191402/232706 |
H: How can I measure time and memory complexity for a deep learning model?
How can I measure or find the time complexity and memory complexity for a model like VGG16 or Resnet50? Also, will it be different from one machine to another like using GTX GPU or RTX GPU?
AI: Talking of deep learning specifically, you will see a lot of research papers that report the following metrics to compare time complexity (speed of inference) and space complexity (model size) -
Time complexity in terms of FLOPs (floating-point operations) -
FLOPs are often used to describe how many operations are required to run a single instance of a given model. The more FLOPs, the more time the model will take for inference.
Model size in terms of the number of parameters -
Just as a perceptron with 5 inputs has 5 trainable weight parameters, you can count the total number of trainable parameters in a neural network to measure its size. This is typically reported in millions or billions.
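For instance, counting trainable parameters in PyTorch is a one-liner (this sketch assumes torchvision is installed for the VGG16 definition):
from torchvision import models
model = models.vgg16()
# sum the sizes of all trainable tensors
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params / 1e6:.1f}M trainable parameters")  # roughly 138M for VGG16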
For example, in the following image from the paper, the author compares their approach with other models using these two metrics.
The reason we need both of these metrics is that a model with fewer parameters may have more computations (for example, residual connections in ResNet are not trainable yet contribute to the computational cost, i.e. more FLOPs). Vice versa, a model with more parameters might have fewer computations.
Lastly, these metrics are independent of the hardware machines. That means these metrics will scale relatively for the chosen machine for all models. For eg if a model takes 1 second for 10 FLOPs on GPU_1 and takes 2 seconds for the same on GPU_2, then another model with 100 FLOPs will take 10 seconds on GPU_1 and 20 seconds on GPU_2. I am using these numbers just for analogy, don't take them literally.
To calculate FLOPs in PyTorch you can follow this link. And for TensorFlow follow this link. |
H: AUC-PR but there is no recall or precision
Is it possible to have a Precision-Recall curve like this if your recall is zero and your precision is not defined? How do I interpret this?
I have checked that all the scores are right and I still don't understand the appearance of the curve.
AI: The recall is 1 when the system predicts every instance as positive, 0 if it predicts every instance as negative. But if all the instances are predicted as negative then precision is undefined (division by zero: there's no TP nor FP). I assume that the undefined value is replaced with zero on the curve in order to make it look continuous.
Technically the only way to have zero for both precision and recall is this:
zero TP
at least one FP (otherwise precision is undefined)
at least one FN (otherwise recall is undefined)
This would mean that the system predicts every gold positive instance as negative and at least some gold negative instances as positive. Therefore the system can be improved... just by switching the predicted labels :) |
H: Date and Time to only date
I have the following date format
2018-02-06T16:50:36Z
I am trying to change it to only date format in R using the following however it is not working:
date_1 <- data %>% mutate(date_time_submitted = as.Date(date_time_submitted, format = "%Y-%m-%d:%H:%M:%S:%Z"))
Any ideas?
Thanks
AI: You are just using an incorrect format string: you are missing the T character between the date and the time, and you have extra : characters (between the day and the hour, and between the seconds and the timezone indicator). Changing the format to "%Y-%m-%dT%H:%M:%SZ" gives the correct date:
> as.Date("2018-02-06T16:50:36Z", format="%Y-%m-%dT%H:%M:%SZ")
[1] "2018-02-06"
Another option would be to use the as_date/as_datetime function from the lubridate package, which give the correct date(time) without specifying the format:
> lubridate::as_date("2018-02-06T16:50:36Z")
[1] "2018-02-06"
> lubridate::as_datetime("2018-02-06T16:50:36Z")
[1] "2018-02-06 16:50:36 UTC" |
H: Applying standardization using ImageDataGenerator
I have a multiclass image dataset (8 classes) that is divided as follows: the main folder is called training and it contains 8 subfolders, one per class. I know how to perform data standardization using ImageDataGenerator:
train_datagen = ImageDataGenerator(
samplewise_center=True,
rescale=1. / 255,
shear_range=30,
zoom_range=30,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2)
then :
train_datagen.fit(x_train)
My problem is with x_train: I don't know how I could pass my dataset as x_train? The images are contained in a directory as described above and there are 8 classes (and hence 8 folders).
AI: If your data is not loaded into memory yet but only stored in a directory, you will have to use the flow_from_directory method instead of the fit method. The fit method is only needed if you are standardizing at the feature level (or when you already have all data loaded into memory), but you are not using the featurewise_center or featurewise_std_normalization arguments (see also the docs). The 'standardizations' you are applying can simply be done with the data present in a specific sample or batch, without needing any statistics from the whole dataset. The examples section of the docs shows how you can use the flow_from_directory method when just rescaling.
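A minimal sketch of how that could look for your folder layout (the directory name, image size and the model variable are assumptions):
train_generator = train_datagen.flow_from_directory(
    'training',               # main folder containing the 8 class subfolders
    target_size=(224, 224),   # assumed input size, match your model
    batch_size=32,
    class_mode='categorical')
# 'model' is your compiled Keras model
model.fit(train_generator, epochs=10)
|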
H: Reasons for a model predicting probability of being class 1 at x value
All,
This is a general question. I have a binary classification model which predicts whether someone is rich or not. I had a question from someone asking: if the probability that someone is rich is 0.6 and another person is also given this probability, are the reasons WHY they are rich the same?
I am using an xgboost and my instinct is to say no. E.g. if I were to profile each population >= 0.5, >= 0.6, ... etc, would I find differences in their features? I would say it's hard because there's no linear relationship between the features and the target most of the time; it can be complex.
In general I guess my question is: if two people are given the same probability of being class 1, will the model's reasons for giving each of these people this 0.6 be the same? 'Reasons' being features/feature values.
AI: Not necessarily. While it can be the case that two observations belong to the same 'group' and end up in the same leaf node (and thus get the same predicted value), there can also be multiple groups of observations that have the same predicted value. Whether this is the case in your example is of course dependent on the data you are using. It would indeed be a time-intensive task to manually check why certain observations have a certain predicted value, but with the increased focus on interpretable/explainable machine learning there are an increasing number of methods/python packages that help explain a model's decisions. An example of such a method is using Shapley values, which is implemented in the shap python package and can be applied easily to tree ensembles (see the example on the linked github page).
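A rough sketch of how this could look for your case with the shap package (model and X are assumed to be your trained xgboost model and feature matrix):
import shap
# explain individual predictions of the trained tree ensemble
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# compare why two people ended up with the same predicted probability
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])
shap.force_plot(explainer.expected_value, shap_values[1, :], X.iloc[1, :])
|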
H: Training a Fuzzy Distance for Clustering later
I have a set of strings $ s_i \in S $ and associated labels $ y_i $, where $ y_i $ could possibly be null.
There are many labels, but the cardinality is much smaller than the strings.
$$
1 \ll |\{ y_i \}| \ll |\{ s_i \}|
$$
Given 2 strings $ (s_i, s_j) $, I want to train a model $ \nu: S \rightarrow \mathbb{R}^n $ that computes,
$$
Pr(y_i = y_j | s_i, s_j) \propto e^{-\beta ~ \nu(s_i) \cdot \nu(s_j)}
$$
Effectively, $ \nu $ converts the string into a real vector that can be used to cluster strings.
With $ \nu $, I would store my strings in my database with a Spatial Index so that I can search for related strings quickly.
Has this been done before? Can you please provide references to applicable research?
Also, https://hackernoon.com/chars2vec-character-based-language-model-for-handling-real-world-texts-with-spelling-errors-and-a3e4053a147d is basically what I was thinking.
AI: One option is training an embedding space that includes both the strings and the labels as entities. Example algorithms are doc2vec (where the doc is the label) and StarSpace.
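A rough gensim doc2vec sketch of the label-as-document idea, using character-level tokens in the spirit of the chars2vec link (pairs is a hypothetical list of (string, label) tuples):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
# skip unlabelled strings; tag each string with its label so strings sharing a label are pulled together
docs = [TaggedDocument(words=list(s), tags=[str(y)]) for s, y in pairs if y is not None]
model = Doc2Vec(docs, vector_size=64, min_count=1, epochs=40)
vec = model.infer_vector(list("some new string"))  # dense vector usable for L2/cosine search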
The result is a dense vector representation that can be used with common distance metrics like L2 and cosine distance. |
H: Dataset and why use evaluate()?
I am starting in Machine Learning, and I have doubts about some concepts. I've read we need to split our dataset into training, validation and test sets. I'll ask four questions related to them.
1 - Training set: It is used in .fit() for our model to learn parameters such as weights in a neural network?
2 - Validation set: Can also be used in .fit(). The validation set is used so we can validate our model at the end of each epoch (to tune some hyperparameters, like the number of nodes in a hidden layer)?
3 - If 2 is correct (i.e., the validation set was already used in .fit()), do we still need to use .evaluate()? And why?
4 - Test set: New inputs (x) never seen by the model, so I can make predictions on them? Used through the .predict() function?
AI: First to be clear we're talking specifically about supervised learning here: there's a training stage during which the model is provided with labelled instances (features and class).
Simple analogy: the model is like a student.
Training: the student sees many exercises with their answer. The goal is for the student/model to learn how to find the answer based on the input in the exercise.
Evaluation: Equipped with their new knowledge, the student takes a test where the answers are not provided with the questions. The teacher knows the answers, and grades the student based on how correctly they answered. This step also requires annotated data in order to calculate how the model performs.
Production: Later the student may have a job where they apply the knowledge they acquired. In general nobody checks their answers, that's not the goal.
The most important distinction is between training set and test set, these two are always necessary in supervised learning. A validation set is optional: it's an intermediate test set which is used when there are different "levels" of training. For example at level 1 multiple models (e.g. with different features or hyper-parameters) are trained on the training set and evaluated on the validation set. At level 2 the best model is picked out of these multiple models: level 2 is a kind of "meta-training", so the final model still needs to be evaluated with a fresh test set (unseen data). Deep NNs also require a validation set, since they iteratively apply a big training/evaluation loop in order to train a model.
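In Keras terms (a minimal sketch; the model and the x/y arrays are assumed to exist), the three stages map onto three calls:
# training: labelled data, with a validation set monitored at the end of each epoch
history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
# evaluation: unseen but labelled data, used to grade the final model
test_loss, test_acc = model.evaluate(x_test, y_test)
# production: new inputs, no labels needed
predictions = model.predict(x_new)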
So the answer to question 3 is: because the validation set was used during training (level 2), it cannot be considered a good evaluation of the final model (since the model uses information from it). |
H: Anomaly Detection Techniques
Often the hardest part of solving an Anomaly Detection problem can be finding the right technique for the job.
Different Anomaly techniques are better suited for different types of data and different problems.
Are there any flowchart/tree diagrams designed to give users a rough guide to the different anomaly detection techniques and to how to approach a problem with regard to which technique to try on the data?
AI: Anomaly Detection Techniques
Here is a link from coggle that has different Anomaly Detection Techniques |
H: Using precision as a metric - how to gauge if more TP's
So precision is calculated as tp/(tp+fp)
But this doesn't seem to be a good way to assess a model as both of the below would give a result of 1?
Binary Classification
[0,0,0,0,0,0,0,0,1] or [0,1,1,0,1,1,1,0,1]
Am I missing something here? Or is there a better metric for binary classification which would limit the amount of false positives while also producing the MOST amount of true positives?
I guess you are going to tell me F1?
AI: Your example doesn't make sense without a gold standard, evaluation measures work by comparing predicted labels against true labels.
To answer your question: precision is not sufficient on its own to evaluate a model, it's always used together with recall. And the two are often combined into F-score, indeed.
Precision only represents the proportion of correct positive predictions among all the positive predictions.
Recall only represents the proportion of correct positive predictions among all the gold positive instances.
Am I missing something here? Or is there a better metric for binary classification which would limit the amount of false positives while also producing the MOST amount of true positives?
"limit the amount of false positives" -> precision
"producing the MOST amount of true positives" -> recall
Technically accuracy satisfies both conditions:
$$accuracy=\frac{TP+TN}{TP+TN+FP+FN}$$
However accuracy doesn't distinguish between positive and negative instances, and this is often an issue (unless the classes are balanced). |
H: Help starting ML project in python (novice)
I am starting a machine learning project (for fun!), but I am not sure where to start from... I am fairly new to ML so any hints are appreciated.
I have a relatively large data-set where each input is a list of roughly 300 integers (mainly zeros). The output is a list of 20ish integers. The goal is to predict the output given a random input (obviously). And I am not sure what is the best method for that. I have started reading a bit into neural networks, which seem like they could be a good way to solve such a problem, but there seems to be a whole range of different activations etc. (not sure what it's actually called), so I am not really sure what to do.
Any hints on what direction to look into?
Thanks a lot!
AI: In case you have several output columns (targets), you have a limited choice of models since most models predict one output column. See the sklearn docs for a good overview. In essence you deal with a multioutput problem.
The sklearn models for regression simply aim at predicting each output separately. The RegressorChain tries to successively predict one target value after another, taking the previous prediction into account. From the docs:
Multioutput regression support can be added to any regressor with
MultiOutputRegressor. This strategy consists of fitting one regressor
per target. Since each target is represented by exactly one regressor
it is possible to gain knowledge about the target by inspecting its
corresponding regressor. As MultiOutputRegressor fits one regressor
per target it can not take advantage of correlations between targets.
Regressor chains (see RegressorChain) is analogous to ClassifierChain
as a way of combining a number of regressions into a single
multi-target model that is capable of exploiting correlations among
targets.
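A minimal sklearn sketch of the first option, using synthetic data with roughly your dimensions (300 inputs, 20 targets):
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
X, y = make_regression(n_samples=1000, n_features=300, n_targets=20, random_state=0)
model = MultiOutputRegressor(RandomForestRegressor(n_estimators=100))
model.fit(X, y)
print(model.predict(X[:5]).shape)  # (5, 20): one prediction per target column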
If you want to try neural nets, you can use Keras' "functional API" which allows you to define several outputs (columns). Find a minimal example here. In a nutshell you need to define some input (the $X$ matrix, aka the explanatory variables) and you need to define all of the outputs separately. Inputs and outputs can be fed into the model definition and the model fit statement as shown below.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
# Input and model architecture
Input_1=Input(shape=(13, ))
x = Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.05))(Input_1)
x = Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)
x = Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)
x = Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)
x = Dense(8, activation='relu', kernel_regularizer=regularizers.l2(0.05))(x)
# Outputs
out1 = Dense(1)(x)
out2 = Dense(1)(x)
# Compile/fit the model
model = Model(inputs=Input_1, outputs=[out1,out2])
model.compile(optimizer = "rmsprop", loss = 'mse')
# Add actual data here in the fit statement
model.fit(train_data, [train_targets,train_targets2], epochs=500, batch_size=4, verbose=0, validation_split=0.2) |
H: Is it normal for exhaustive feature selector to run for three days?
I'm trying to optimize my features in a dataset to get a better predictive model. I used Exhaustive feature selector from mlxtend. This checks all possible combinations of features. I have a dataset with 80+ features, and I set the maximum of features to be selected to 20 with 10-Fold cross validation. There are 9k entries and the estimator is Random Forest Regressor. It's been roughly three days this has been running. Is this normal? Previously I have run Recursive Feature Elimination on the same dataset with 10 fold cross validation. It took roughly 24 hours.
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
from sklearn.ensemble import RandomForestRegressor

efs = EFS(RandomForestRegressor(),
min_features=3,
max_features=20,
scoring='neg_mean_squared_error',
cv=10)
efs.fit(X, Y)
AI: An exhaustive search means that the 10-fold CV is run for every possible subset of features. The number of possible subsets of size 20 for a set of 80 features is:
$$\binom{80}{20} \approx 3.5 \times 10^{18}$$
And this is only for size 20 exactly, so the total number is higher.
So yes, it might take a very long time. You could estimate the duration before running a potentially very long experiment like this:
First take a lower maximum number of features and calculate the corresponding number of runs. For instance with 5 as the maximum, the number of subsets would be roughly 26 million (I let you calculate the exact number).
Then run the experiment and measure the duration. Applying a simple proportion, you can estimate how much time it would take for any maximum number of features. You might find out that your current experiment would take many years ;)
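For instance, you can count the subsets directly with math.comb (Python 3.8+) and scale a measured runtime (a rough sketch):
from math import comb
n_features = 80
total = sum(comb(n_features, k) for k in range(3, 21))  # subset sizes 3..20
small = sum(comb(n_features, k) for k in range(3, 6))   # subset sizes 3..5
print(f"{total:.2e} vs {small:.2e} subsets")            # roughly 5e18 vs 2.6e7
# if the smaller run takes T seconds, a rough estimate for the full search is T * total / small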
Exhaustive search on a large set is rarely a good idea, I'd suggest looking at genetic search. |
H: How to determine if my data split is appropriate for my data size?
I currently have a model that has a pretty large dataset (50ishMB) and was performing pretty well with a 80:20 split. However, when I tried changing it up to a 50:50 split, the model performed 28% better than the 80:20 split. Note this is a time-series problem.
Since I have evaluated the model to perform better with that data split, is it fine to move forward with it, or should I continue with the 80:20 split? Why?
AI: Assigning more examples for training allows your model to be exposed to more types of hidden patterns in your data & presumably learn a better representation. While assigning more examples for testing gives you a better fidelity of the evaluation over these data patterns. Ultimately this is a project specific tradeoff you make.
CONSIDERATIONS
Does the evaluation need to be precise for a specific trained model or could it be a range approximation? If the latter, then cross validation lets you best use all the data to learn & evaluate.
Is the model used classical or deep learned? The latter techniques often require way more data so I would favor a 99:1 train/test split over the typical 70:30 or 80:20 splits.
Is the problem domain simple or extremely difficult? ie: are there sub patterns, many edge cases, lots of classes to learn with high human disagreement? You can plot learning curves to see how well your model learns with different volumes of data (see the sketch after this list). Once it plateaus you can allocate excess data for evaluation.
How expensive is it to get quality & representative labeled data? If cheap consider creating ship-gates (unit test equivalent) datasets to capture various data scenarios.
Is the data heavily imbalanced, very noisy or very repetitive? if so I would apply smarter splitting techniques like stratified sampling, imbalance handling, etc over just random splitting.
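A sketch of the learning-curve idea from the list above (the estimator and X, y are placeholders; TimeSeriesSplit keeps the temporal order intact, which matters for your time-series setting):
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, learning_curve
sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),
    cv=TimeSeriesSplit(n_splits=5),
    scoring="neg_mean_squared_error")
print(sizes, val_scores.mean(axis=1))  # look for where the validation score plateaus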
It is common to allocate more data for training. I would attribute the perceived improved performance in your 50:50 data split simply to producing a less over-fitted model due to having fewer learning examples. |
H: Not in index Error
xtrain is a numpy array
from sklearn.linear_model import LogisticRegression
outer_kfold = KFold(n_splits=5, random_state=27, shuffle=True)
final_scores = list()
for train, test in outer_kfold.split(xtrain):
    x_train, x_test = xtrain[train], xtrain[test]
    y_train, y_test = ytrain[train], ytest[test]

    model = LogisticRegression()
    model.fit(x_train, y_train)
    preds = model.predict(x_test)
    final_scores.append(accuracy_score(y_test, preds))
    print("Score:", final_scores[-1])

print("\nAverage Score:", np.average(final_scores))
Error
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/var/folders/vd/lb0gkn7j2t34s4ljbgdb7g5r0000gn/T/ipykernel_1376/2946208749.py in <module>
5 for train, test in outer_kfold.split(xtrain):
6 x_train, x_test = xtrain[train], xtrain[test]
----> 7 y_train, y_test = ytrain[train], ytest[test]
8
9 model=LogisticRegression()
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/series.py in __getitem__(self, key)
964 return self._get_values(key)
965
--> 966 return self._get_with(key)
967
968 def _get_with(self, key):
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/series.py in _get_with(self, key)
999 # (i.e. self.iloc) or label-based (i.e. self.loc)
1000 if not self.index._should_fallback_to_positional():
-> 1001 return self.loc[key]
1002 else:
1003 return self.iloc[key]
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/indexing.py in __getitem__(self, key)
929
930 maybe_callable = com.apply_if_callable(key, self.obj)
--> 931 return self._getitem_axis(maybe_callable, axis=axis)
932
933 def _is_scalar_access(self, key: tuple):
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
1151 raise ValueError("Cannot index with multidimensional key")
1152
-> 1153 return self._getitem_iterable(key, axis=axis)
1154
1155 # nested tuple slicing
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/indexing.py in _getitem_iterable(self, key, axis)
1091
1092 # A collection of keys
-> 1093 keyarr, indexer = self._get_listlike_indexer(key, axis)
1094 return self.obj._reindex_with_indexers(
1095 {axis: [keyarr, indexer]}, copy=True, allow_dups=True
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/indexing.py in _get_listlike_indexer(self, key, axis)
1312 keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr)
1313
-> 1314 self._validate_read_indexer(keyarr, indexer, axis)
1315
1316 if needs_i8_conversion(ax.dtype) or isinstance(
/opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis)
1375
1376 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
-> 1377 raise KeyError(f"{not_found} not in index")
1378
1379
KeyError: '[7, 39, 44, 45, 54, 58, 74, 79, 82, 90, 94, 98, 99, 103, 108, 115, 116, 119, 130, 134, 147, 157, 159, 161, 177, 186, 188, 192, 201, 203, 217, 218, 219, 220, 229, 231, 234, 239, 248, 250, 254, 273, 276, 285, 288, 291, 299, 301, 305, 307, 308, 313, 314, 321, 324, 325, 331, 342, 343, 345, 347, 354, 368, 372, 386, 394, 407, 413, 433, 438, 442, 447, 448, 462, 470, 481, 487, 496, 503, 507, 513, 517, 536, 538, 545, 554, 558, 559, 569, 573, 574, 587, 589, 592, 596, 599, 602, 605, 607, 608, 623, 627, 634, 642, 644, 650, 654, 662, 664, 666, 675, 687, 691, 705, 712, 714, 716, 717, 718, 724, 729, 755, 758, 761, 781, 783, 793, 802, 813, 822, 823, 847, 858, 859, 863, 867, 871, 874, 895, 900, 911, 925, 941, 946, 951, 955, 958, 961, 963, 968, 975, 982, 995, 1002, 1005, 1028, 1038, 1041, 1049, 1050, 1051, 1060, 1063, 1070, 1071, 1073, 1075, 1089, 1117, 1122, 1139, 1144, 1145, 1153, 1155, 1156, 1159, 1160, 1162, 1167, 1169, 1172, 1177, 1178, 1183, 1190, 1197, 1198, 1213, 1217, 1221, 1224, 1225, 1227, 1233, 1253, 1256, 1262, 1264, 1266, 1271, 1282, 1287, 1288, 1289, 1297, 1299, 1305, 1307, 1308, 1322, 1328, 1331, 1332, 1342, 1347, 1356, 1373, 1380, 1392, 1406, 1409, 1416, 1422, 1447, 1454, 1458, 1463, 1494, 1508, 1513, 1519, 1520, 1525, 1532, 1534, 1536, 1540, 1544, 1550, 1554, 1564, 1578, 1604, 1614, 1620, 1624, 1628, 1652, 1657, 1664, 1676, 1682, 1683, 1685, 1686, 1692, 1703, 1711, 1717, 1720, 1729, 1732, 1735, 1736, 1740, 1743, 1745, 1748, 1750, 1753, 1758, 1768, 1773, 1785, 1790, 1793, 1814, 1817, 1827, 1828, 1829, 1851, 1857, 1873, 1885, 1892, 1909, 1917, 1924, 1929, 1933, 1939, 1945, 1947, 1956, 1958, 1961, 1971, 1988, 1989, 1990, 2017, 2019, 2025, 2031, 2040, 2045, 2054, 2064, 2065, 2089, 2090, 2097, 2098, 2119, 2120, 2134, 2136, 2140, 2157, 2161, 2164, 2168, 2174, 2181, 2196, 2209, 2229, 2252, 2255, 2262, 2271, 2285, 2296, 2325, 2328, 2331, 2333, 2345, 2349, 2351, 2353, 2355, 2362, 2363, 2364, 2377, 2378, 2385, 2398, 2402, 2403, 2407, 2408, 2411, 2437, 2438, 2445, 2450, 2456, 2458, 2481, 2487, 2488, 2489, 2491, 2495, 2505, 2512, 2514, 2532, 2537, 2539, 2550, 2567, 2570, 2573, 2586, 2588, 2593, 2597, 2605, 2606, 2608, 2615, 2619, 2624, 2637, 2639, 2640, 2645, 2652, 2653, 2656, 2657, 2666, 2677, 2684, 2688, 2694, 2696, 2698, 2701, 2702, 2704, 2706, 2711, 2722, 2723, 2735, 2736, 2755, 2773, 2774, 2776, 2787, 2800, 2807, 2812, 2815, 2820, 2827, 2831, 2837, 2842, 2856, 2858, 2861, 2864, 2866, 2868, 2878, 2883, 2887, 2889, 2897, 2899, 2900, 2901, 2909, 2917, 2918, 2919, 2921, 2927, 2929, 2932, 2939, 2954, 2959, 2981, 2989, 2999, 3001, 3005, 3014, 3016, 3023, 3032, 3039, 3053, 3069, 3072, 3079, 3080, 3081, 3092, 3095, 3099, 3100, 3108, 3111, 3112, 3126, 3134, 3140, 3144, 3153, 3157, 3165, 3167, 3191, 3196, 3198, 3207, 3210, 3211, 3224, 3234, 3242, 3248, 3265, 3272, 3283, 3285, 3287, 3291, 3293, 3304, 3329, 3338, 3339, 3369, 3370, 3371, 3376, 3382, 3384, 3391, 3397, 3419, 3422, 3423, 3426, 3427, 3431, 3435, 3455, 3458, 3461, 3463, 3472, 3473, 3477, 3484, 3485, 3489, 3491, 3492, 3498, 3500, 3502, 3504, 3505, 3511, 3516, 3522, 3531, 3532, 3554, 3563, 3565, 3571, 3585, 3588, 3593, 3595, 3611, 3619, 3628, 3636, 3644, 3645, 3658, 3662, 3663, 3665, 3669, 3675, 3680, 3689, 3690, 3692, 3696, 3715, 3716, 3729, 3737, 3738, 3741, 3755, 3761, 3762, 3767, 3769, 3771, 3777, 3784, 3789, 3792, 3801, 3802, 3803, 3807, 3808, 3811, 3812, 3816, 3819, 3823, 3829, 3830, 3832, 3834, 3835, 3849, 3862, 3865, 3866, 3872, 3878, 3891, 3897, 3901, 3903, 3906, 3916, 3920, 3925, 3928, 3935, 3938, 3943, 3945, 3954, 3963, 3979, 
3985, 3986, 3988, 3993, 4008, 4023, 4029, 4040, 4045, 4051, 4058, 4060, 4067, 4071, 4076, 4078, 4084, 4086, 4089, 4098, 4106, 4109, 4113, 4117, 4124, 4133, 4140, 4145, 4152, 4154, 4158, 4165, 4175, 4184, 4192, 4194, 4195, 4203, 4205, 4207, 4224, 4227, 4228, 4230, 4232, 4236, 4253, 4258, 4261, 4268, 4269, 4272, 4276, 4288, 4290, 4295, 4296, 4301, 4305, 4306, 4322, 4326, 4331, 4332, 4364, 4367, 4369, 4370, 4381, 4382, 4396, 4399, 4406, 4410, 4422, 4423, 4424, 4425, 4429, 4437, 4444, 4445, 4446, 4460, 4462, 4464, 4479, 4481, 4482, 4484, 4486, 4487, 4492, 4495, 4497, 4503, 4510, 4514, 4520, 4530, 4544, 4546, 4556, 4557, 4558, 4563, 4569, 4571, 4575, 4576, 4583, 4586, 4589, 4591, 4594, 4599, 4613, 4621, 4627, 4629, 4636, 4646, 4649, 4652, 4656, 4661, 4673, 4678, 4679, 4685, 4688, 4695, 4698, 4705, 4706, 4708, 4714, 4727, 4728, 4732, 4736, 4737, 4741, 4744, 4748, 4757, 4760, 4797, 4844, 4846, 4848, 4859, 4870, 4874, 4888, 4897, 4907, 4914, 4925, 4926, 4928, 4932, 4939, 4951, 4953, 4954, 4958, 4964, 4966, 4976, 4977, 4980, 4983, 4984, 4992, 5005, 5013, 5014, 5021, 5041, 5042, 5051, 5053, 5063, 5065, 5066, 5069, 5073, 5078, 5080, 5083, 5084, 5089, 5090, 5112, 5114, 5116, 5118, 5137, 5142, 5160, 5163, 5173, 5180, 5189, 5196, 5198, 5206, 5207, 5210, 5211, 5216, 5221, 5224, 5232, 5234, 5240, 5241, 5243, 5244, 5247, 5260, 5261, 5265, 5281, 5282, 5289, 5290, 5301, 5309, 5311, 5316, 5319, 5323, 5328, 5329, 5345, 5347, 5355, 5364, 5365, 5368, 5375, 5376, 5388, 5389, 5402, 5407, 5408, 5414, 5418, 5430, 5433, 5438, 5443, 5451, 5455, 5460, 5461, 5476, 5477, 5478, 5483, 5487, 5488, 5492, 5493, 5495, 5509, 5513, 5522, 5531, 5535, 5549, 5552, 5555, 5557, 5558, 5564, 5570, 5583, 5590, 5592, 5599, 5603, 5604, 5615, 5629, 5638, 5640, 5641, 5644, 5648, 5651, 5668, 5669, 5676, 5680, 5682, 5684, 5689, 5699, 5727, 5729, 5735, 5736, 5738, 5748, 5756, 5757, 5761, 5764, 5765, 5774, 5792, 5793, 5796, 5799, 5803, 5817, 5822, 5824, 5832, 5833, 5838, 5872, 5877, 5878, 5887, 5888, 5899, 5910, 5915, 5916, 5921, 5944, 5946, 5950, 5959, 5987, 5989, 5991, 5992, 5993, 5996, 6005, 6012, 6013, 6021, 6027, 6034, 6037, 6042, 6046, 6049, 6054, 6055, 6058, 6061, 6063, 6064, 6070, 6075, 6077, 6083, 6087, 6091, 6102, 6117, 6126, 6129, 6137, 6147, 6149, 6160, 6168, 6170, 6181, 6184, 6189, 6190, 6191, 6192, 6200, 6211, 6213, 6215, 6219, 6228, 6233, 6238, 6240, 6252, 6257, 6260, 6262, 6274, 6275, 6279, 6280, 6302, 6303, 6305, 6315, 6335, 6364, 6366, 6368, 6377, 6384, 6385, 6386, 6392, 6397, 6404, 6418, 6426, 6442, 6446, 6450, 6453] not in index'
AI: The indices returned by KFold are positions within the training data, so both slices have to be taken from ytrain; indexing ytest with them is what raises the KeyError. Replace the $7^{th}$ row with:
y_train, y_test = ytrain[train], ytrain[test] |
H: Select one best model according to accuracy, precision, recall, f1 score and roc score
I have two classifiers that classify the same dataset with these results:
Model | Accuracy| Precision| Recall | F1 Score| ROC Score
--------------| --------| ---------|--------|---------|---------
Random Forest| 90% | 0.61 | 0.7 | 0.64 | 0.81
XGBoost | 91% | 0.70 | 0.67 | 0.66 | 0.80
I am not sure, but it seems to me XGBoost is the best model since it has 91% accuracy and higher precision and F1 score than Random Forest. Can anyone please help me to choose the best model from the above result?
AI: As the comment suggests, XGBoost is the better model. Its F1 score (which is nothing but a combination of precision and recall) is slightly better than Random Forest's.
Depending on your use case, if recall is more important, then go for Random Forest. |
H: Some questions about supervised learning, model evaluation and preprocessing
I've been trying to employ some basic techniques of supervised learning on a dataset that I have and I have several questions about the overall procedure (i.e. data preprocessing, model evaluation etc).
Before I start posing the questions, let me give you an idea of what my dataset looks like. The dataset is from the OpenML repository; it consists of 22 different types of articles (the targets or classes) and 1079 different words (features). The aim is to classify the upcoming blurbs of these articles based on these 1079 words. Below you can see the first 5 rows of my dataset.
As you can see from the above snippet, in the first 22 columns I have my targets (i.e. type of articles) and the rest of the columns belong to my word predictors. The values of my features are binary, i.e. «1» if the word appeared in the blurb, and «0» otherwise. First, I do a little preprocessing: I separate the targets from the features, I change the boolean values that correspond to the targets to 0 and 1, and I give labels to my articles (i.e. 0: «Entertainment», 1: «Interviews» etc). In the following snippet you can see the distribution of my samples according to each different type of article.
As you can see from the above image, my dataset is unbalanced. My aim is to try the following classification algorithms and choose the best one at the end: 1) GaussianNB (GNB), 2) KNearestNeighbors (KNN), 3) LogisticRegression (LR), 4) Multi-Layer Perceptron (MLP) and 5) Support Vector Machines (SVM). Before I start to preprocess my dataset in more detail, I split it into 70% train and 30% test and do an out-of-the-box test of these algorithms; in the following table you can see my results:
Classifier | f1score(train) | acc(train) | f1score(test) | acc(test) | Time – f1score(train)
-----------|----------------|------------|---------------|-----------|----------------------
GaussianNB | 23 % | 41 % | 24 % | 40 % | 0.54 (sec)
Dummy | 1 % | 16 % | 2 % | 19 % | 0.01 (sec)
1NN | 4 % | 18 % | 8 % | 24 % | 0.22 (sec)
Logistic | 34 % | 56 % | 40 % | 62 % | 1.04 (sec)
MLP | 33 % | 54 % | 39 % | 59 % | 10.5 (sec)
SVM | 19 % | 46 % | 32 % | 57 % | 3.7 (sec)
The first two columns refer to the outcome of the classifier using a 2-fold cross validation on my training set according to accuracy and f1(average = macro) metrics and the last two columns are the results of the classifiers on the test set according to the aforementioned metrics. From the above table you can observe that some of these classifiers have low performance. In my next task I am trying to preprocess my data in more detail using techniques like StandarScaler, VarianceThreshold, PCA or SelectBestK, RandomOverSampling and optimize the parameters of my classifiers with GridSearch. At the moment I can pose my first question.
Q1 Are the above techniques (except GridSearch which I completely understand) going to improve the performance of the classifiers for sure? I mean, is there any justification that these techniques generally work better, or is it just a trial and observation procedure?
In the next code block I create a pipeline that sequentially applies the following manipulations before training the model: oversampling and deleting the features with low variance (in this case I use only GaussianNB). Observe that I first do the splitting of my dataset into train and test and then use the oversampling.
# Case 1 splitting first and then oversampling on training set
from imblearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import RandomOverSampler
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
selector = VarianceThreshold()
scaler = StandardScaler()
over = RandomOverSampler()
clf = GaussianNB()
pipe = Pipeline(steps = [('over', over),('selector', selector), ('scaler', scaler),\
('GNB', clf)])
train_new, test_new, train_new_labels,\
test_labels_new = train_test_split(features, labels, test_size = 0.3)
pipe.fit(train_new, train_new_labels)
pipe.score(test_new, test_labels_new)
The accuracy on the test set with this pipeline is 38 %, which is 2 % less than the score of the GaussianNB without this preprocessing procedure, so my second question is the following.
Q2 Why these modifications deteriorated the performance of the classifier? Is there any sign from the values of my dataset or its structure that could predict this outcome?
Now, if I change the order of the splitting and the oversampling I get completely different results, for example if I run the following block of code.
# Case 2 oversampling before splitting
clf = GaussianNB()
features2, labels2 = over.fit_resample(features, labels) # First do the oversampling
train_new, test_new, train_new_labels, test_labels_new \
= train_test_split(features2, labels2, test_size = 0.3) # Then do the splitting
train_new = selector.fit_transform(train_new)
test_new = selector.transform(test_new) # This block of code does all
train_new = scaler.fit_transform(train_new) # all the preprocessing.
test_new = scaler.transform(test_new)
train_new = selector2.fit_transform(train_new, train_new_labels)
test_new = selector2.transform(test_new)
clf.fit(train_new, train_new_labels) # Fit on train
clf.score(test_new, test_labels_new) # Evaluate on test
I get 74 % accuracy on the test set, which is much better than before. So my question is:
Q3 Why changing the order of the splitting with oversampling changed the result that much? In general, I must do the splitting first and then do the preprocess only on my training set? For example, I understand that if first do some preprocess like scaling, PCA and then split my set, then my results would be biased since I have preprocessed the test set also, but I don't understand why this happens with oversampling also (if this is the case).
To give you another view of the above result below I can show you the learning curve of the GaussianNB on a 10-fold cross validation in the second case where I first do the oversampling and then the splitting.
As you can see from the above snippet, the validation score and the training score converge to the same number, which is a good indication that the model can achieve a good generalization performance.
Q4 Which of the above two cases is more reliable to give me good results on future unseen samples? Furthermore, what kind of preprocessing would you suggest for the above dataset? For example, in these two runs I deleted all samples from my dataset that belonged to 2 or more classes; this modification gave me back 84% of my initial dataset. Would it be better if I had created duplicates of these samples to reduce the imbalance of the dataset?
PS: Excuse me for the long post, I don't expect to get answers for all the above questions, I would be very pleased to share your opinions if you have any insightful answers to any of the above questions! Thanks in advance!
AI: Q1
Q1 Are the above techniques (except GridSearch which I completely understand) going to improve the performance of the classifiers for sure? I mean, is there any justification that these techniques generally work better, or is it just a trial and observation procedure?
Q1 is very broad because it involves many different techniques which can each have their purpose in some cases but not in general. As a whole it's unlikely to always have a positive effect on performance, especially with a set of very different learning algorithms. But when used properly it's not a trial and error procedure, it's about using appropriate technique(s) knowing the characteristics of the data and algorithm (and the possible problems with them).
Q2
Q2 Why these modifications deteriorated the performance of the classifier? Is there any sign from the values of my dataset or its structure that could predict this outcome?
Probably most of the difference is explained by the oversampling. We see a lot of questions here on DSSE about resampling, presumably because this is a very simple technique which can sometimes have a large impact on performance... especially when used the wrong way!
It's useful to understand resampling and there might be a few rare cases where it makes sense, but imho it's more often just a lazy option to avoid studying the specific data and problem properly, and it's also a big source of confusion and mistakes (see next question). In general resampling simply doesn't improve performance, see for instance this answer.
Q3
Q3 Why changing the order of the splitting with oversampling changed the result that much? In general, I must do the splitting first and then do the preprocess only on my training set?
The most important point is to understand what is a proper evaluation method in supervised learning:
A ML model is trained to solve one particular "target problem", and this problem must be clearly defined from the start. In particular the distribution of the classes is an important part of the definition of the problem itself. For example it's not the same problem to classify between N equal classes and between N imbalanced classes, and a classifier trained for one problem is unlikely to work well for the other.
Logically, the evaluation stage must always represent the target problem as defined, including of course the class distribution. If the test set doesn't follow the "true distribution" for the target problem, the resulting performance is unreliable, as we simply don't know how the model would perform on the true distribution.
In Q3 the model is evaluated on the oversampled data. Unless you really mean to use the model on this distribution (unlikely since it's not the one in the source data), this performance doesn't mean anything for the real target problem with the original distribution. The reason why the performance is much higher is that the balanced version of the problem is much easier, so it's like solving an easy question and pretending that it's the same as the original difficult question ;)
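If you do want to keep the oversampling, a sketch of the safer pattern is to leave it inside an imblearn pipeline so it is only ever applied to the training folds, while every test fold keeps the original class distribution (features and labels as in your code):
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import RandomOverSampler
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
pipe = Pipeline(steps=[('over', RandomOverSampler()), ('GNB', GaussianNB())])
scores = cross_val_score(pipe, features, labels, cv=10, scoring='f1_macro')
print(scores.mean())  # evaluated on folds that keep the original class distribution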
Q4
Q4 Which of the above two cases is more reliable to give me good results on future unseen samples? Furthermore, what kind of preprocessing would you suggest for the above dataset? For example, in these two runs I deleted all samples from my dataset that belonged to 2 or more classes; this modification gave me back 84% of my initial dataset. Would it be better if I had created duplicates of these samples to reduce the imbalance of the dataset?
As per the answer above, future unseen samples probably follow the same distribution as the source data, so any performance calculated on another distribution is meaningless for this.
For the same reasons deleting or adding instances can indeed modify performance, but that also means changing the target problem.
See also this question about supervised learning, in case it helps. |
H: What is the best way to limit number of features in TF-IDF?
I am using tf-idf to build representations. It is a large dataset and it quickly becomes too much for my RAM if I convert the matrix to a DataFrame.
What is the best way to reduce the number of features/columns and retain the highest possible level of information.
The model has the possibility of setting max_features to a number, but that retains the features with the highest term frequency, which kind of defeats the purpose of tf-idf. You can also set stop-words, but that doesn't reduce the dimensionality much in my case.
AI: Generally speaking, the size of the tf-idf representation is a hyperparameter to be optimized.
As suggested in the answers above, you can go for the min_df parameter (a form of regularization), which controls the minimum representativeness a term needs in order to appear in the term-document matrix.
A reasonable approach would be a combination of both min_df and ngram_range, again selected as hyperparameters using the CV score, and of course it is highly recommended to preprocess/clean the data by removing stopwords.
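A minimal sketch of those knobs together (the exact values are assumptions to be tuned via the CV score; corpus stands for your list of documents):
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=5, ngram_range=(1, 2), stop_words='english')
X = vectorizer.fit_transform(corpus)  # keep it sparse; avoid converting to a dense DataFrame
print(X.shape)
|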
H: Activation and Loss Function not chosen correctly when use Neural Network
I have three classes for my text dataset before.
These are my classes:
0 = Cat
1 = Not Both
2 = Dog
Then I use this code:
df_result = df[df["class"] != 1]
So, now my classes are 0 and 2.
When I use neural networks, what should I choose for the loss and activation? And what should I choose for the last Dense layer of the neural network's model?
Before, my code like this:
model.add(Dense(2, activation = 'softmax'))
history = model.compile(loss = sparse_categorical_crossentropy,
optimizer = 'adam', metrics=['accuracy'])
I do not know whether this results in bad accuracy and bad validation accuracy or not. So, I am still confused. Please give me a hand.
AI: There are 2 possible scenarios here.
You can use all 3 categories and build a multiclass classification model, where the output layer has 3 neurons, the activation function is softmax and the loss is sparse categorical cross-entropy. If you choose to go with this method then make sure to use LabelEncoder to encode your target variable.
The other scenario is that you use only 2 classes, as you suggest. In that case you can build a binary classification model where the output layer has 1 neuron, the activation function is sigmoid and the loss is binary cross-entropy. If you choose to go with this method then make sure your target is encoded as a single 0/1 column (e.g. by mapping the two remaining classes with LabelEncoder) so that it matches the single output neuron (see the sketch below).
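For the binary case, a minimal sketch of the target encoding and the last layers could look like this (it assumes df_result and the rest of your model as in your code):
# map the remaining classes {0: Cat, 2: Dog} to {0, 1}
df_result["label"] = (df_result["class"] == 2).astype(int)
# single sigmoid output + binary cross-entropy
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])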
I would suggest trying both methods and seeing which one gives better results.
Cheers! |
H: Error with decision tree prediction
I write this script in R about decision tree.
library(class)
library(ggplot2)
library(gmodels)
library(scales)
library(caret)
library(tidyverse)
db_data <- iris
row_train <- sample(nrow(iris), nrow(iris)*0.8)
db_train <- iris[row_train,]
db_test <- iris[-row_train,]
unique(db_train$Species)
table(db_train$Species)
model_dt<-train(Species ~ ., data = db_train, method = "rpart",tuneLenght = 9)
summary(model_dt)
#-------
#PREDICTION NEW RECORD
#-------
test_data <- db_test
db_test$predict <- predict(model_dt, newdata=test_data, interval='confidence')
confusionMatrix(data=factor(db_test$predict),reference=factor(db_test$Species))
but when I run model_dt I have this error:
Error in train(Species ~ ., data = db_train, method = "rpart", tuneLenght = 9) :
unused arguments (data = db_train, method = "rpart", tuneLenght = 9)
why?
AI: Something that is wrong is the spelling of tuneLength: the code has tuneLenght = 9. Fixing that may get the program further. |
H: SMOTE for multi-class balance changes the shape of my dataset
So I have a dataset of shape (430,17), that consists of 13 classes (imbalanced) and 17 features.
The end goal is to create a NN which btw works when I import the imbalanced dataset, however when I try to over-sample the minority classes using SMOTE in a Jupyter notebook, the classes do get balanced but the shape also changes.
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import OneHotEncoder
from imblearn.pipeline import Pipeline
steps = [('onehot', OneHotEncoder()), ('smt', SMOTE())]
pipeline = Pipeline(steps=steps)
X_res, y_res = pipeline.fit_resample(X, y)
The y_res shape is (754,) from y shape which was (430,), so upsampling works, also by checking:
unique, counts = np.unique(y_res, return_counts=True)
print(np.asarray((unique, counts)).T)
the classes have been balanced.
However, the X_res shape has now changed to (754, 5553), from X shape which was (430, 17).
Then, if I fit these data to my NN it doesn't work of course, since the input_dim has changed for my input layer.
My question is, did the SMOTE procedure add not only rows to balance the classes but also columns?
Shouldn't I get X_res with shape (754, 17)? And because I need these data for a NN they have to be arrays, or numpy arrays, instead of pd.DataFrames; it is also hard to understand where those 5553 columns come from.
I am new in python and jupyter so I do not know how to solve this, and I would really appreciate any help :)
AI: Your understanding is correct: data balancing techniques like SMOTE will only add/remove rows (data points) not columns (features). I suspect your extra dimensions are due to one-hot-encoding. |
H: Not sure how I can get class labels from Neural Network predictions. Would this be an acceptable alternative to the old method
I am currently working with a dataset that has over 300 variables and a target variable with 10 different classes. My goal is to use said variables and produce a prediction for the target variable. Since the target variable has mutually exclusive classes, I've come up with the following script:
y_train_1 = pd.get_dummies(y_train["Class_target"])
y_train_1.columns = [f"Class_{i}" for i in y_train_1.columns]
model = keras.Sequential([
keras.layers.Dense(250, activation = "relu", input_shape = [X_train.shape[1]]),
keras.layers.Dense(250, activation = "relu"),
# Softmax is used as the target variable has 10 different mutually exclusive classes
keras.layers.Dense(10, activation = "softmax")
])
early_stopping = callbacks.EarlyStopping(
min_delta = 0.001,
patience = 20,
restore_best_weights = True
)
model.compile(
loss = keras.losses.CategoricalCrossentropy(),
optimizer = "adam",
metrics = ["accuracy"]
)
model.fit(
X_train_1,
y_train_1,
validation_data = (
X_test_1,
y_test_1
),
batch_size = 32,
epochs = 3,
callbacks = [early_stopping],
verbose = 1
)
# The predictions are converted into a dataset and as the sum of each row is 1, I transpose the dataset and reset the index of the dataset to get a column of class labels. I used y_train_1.columns to get column labels for the probability dataset.
predicted_df = pd.DataFrame(model.predict(X_train))
predicted_df.columns = y_train_1.columns
predicted_df_1 = predicted_df.T.reset_index()
new_list = []
# I then use the following loop to get the class labels for each row based on the maximum probability for each column
values = list(predicted_df_1["index"])
for i in predicted_df_1.columns:
if i != "index":
column_list = list(predicted_df_1[i])
max_index = column_list.index(max(column_list))
new_list.append(values[max_index])
df_1 = pd.DataFrame({
"Predicted Classes":new_list
})
The part that I'm particularly uneasy about is the part where I use y_train's column names as the predicted tables column names. I have no idea whether this is the correct way to get class labels or not as the old way of getting class labels (model.predict_classes()) is deprecated.
I'd very much appreciate any form of assistance regarding this matter.
AI: Instead of looping through all rows to get the max over the different columns, it is much faster and easier to use numpy.argmax to automatically find the index with the highest value. You can then use these indices to get the corresponding labels.
import numpy as np
labels = ["a", "b", "c", "d", "e"]
preds = np.random.rand(10, 5)
preds = preds / preds.sum(axis=1, keepdims=True)
indices = np.argmax(preds, axis=1)
np.array(labels)[indices]
# array(['c', 'e', 'a', 'b', 'b', 'c', 'c', 'b', 'd', 'c'], dtype='<U1')
You are correct in using the training labels to get the labels for new data, as the model is predicting the labels it has been trained on, so the output for any new data will also be over the same labels that are in the training dataset. That is also why you can't simply have a new unseen label in your test set; since the model hasn't seen it during training, it cannot directly predict it. |
H: Smoothing target variable
I am training a regression model (using quantile regression forest) to forecast crop yield deviations from trend (residuals) using weather variables with different lag times. Trying to improve the accuracy and confidence of my results, I recently tested replacing the target variable with a smoothed version of it (computed with locally weighted scatterplot smoothing, LOWESS) using as independent variable a feature with no lag time (i.e., with lag time = 0) trying to remove noise from the measured data. In the figure below, the crosses represent the observed values, the red dots are outliers, and the gray line is the smoothed version of the dependent variable.
I got significant improvements in the result, but something doesn't seem right with this approach.
I have been looking into the use of smoothing techniques in machine learning and have found that, indeed, smoothing is a technique used in data preprocessing, feature engineering, and data mining for noise filtering (e.g., here or here; or here, applied for time series forecasting). On the one hand, it sounds logical to remove noise from the target variable to estimate "true values" derived from the process I am trying to model; however, I've learned that the preprocessing is applied to features (explanatory variables) and I am not sure that smoothing the target variable is a valid procedure. To summarize:
Is it valid to replace the target variable with a smoothed version of it?
If it is, given that the independent variable used to smooth the target variable is not available a priori, how should I proceed?
a. train the model with the smoothed target variable and test it with the raw target variable; or
b. train and test the model with the smoothed target variable.
Any thoughts will be appreciated.
AI: Is it valid to replace the target variable with a smoothed version of it?
It does not sound like a valid approach.
I’m not convinced that you removed the noise and not legitimate effects. If you consider any two neighboring points on your graph, the difference in the target variable is likely to be driven by a number of factors (not shown on your plot). It might appear as noise (big jumps) because we don’t account for those other factors.
Imagine that your target variable is the height of a student and you smooth using the height ~ age loess, because you observe some big jumps in height e.g. between 17 and 17.5 y.o. The problem is that half of your students are from the Netherlands (the tallest nation in Europe). If you smooth your target as above you’ll never be able to get a good fit (to the original target, of course), even if you include nationality as a feature.
In other words, applying such smoothing you disregard the influence of all other factors and implicitly assume that the target can be modelled as a smooth function of a single (in your case unobservable / unavailable) factor.
In general, removing the noise might help, but I guess it’s very difficult to differentiate between true effects and the noise in practice. Instead of pre-processing the target I would just control for overfitting (e.g. with some regularization or early stopping) and let the model smooth the data. |
H: Is binary classification the right choice in this case?
I am somewhat new to text classification and I have some questions if you folks can help:
I have some text that I need to classify as belonging to a single class or not (each text is usually 1-10 sentences long). For the class, I have around 500 examples, but the non-class case can really be any text at all; of that I have hundreds of documents with tens of thousands of sentences (which are not the class). What I have to do is classify each of the sentences in each document as belonging to the class or not. The vast majority won't belong.
I'm using a BERT based Binary Classifier (simpletransformer) to identify the text similar to (or exactly) the 500 class examples, does this seem reasonable/possible?
How should I deal with the class imbalance of 500 to 10000's? I tried oversampling the minority class (my target), but it seems to overfit when I do that.
What is the usual way of handling this particular use case? The 1-class anomaly detection doesn't seem to fit here, from what I can tell. Is there a similar NLP style training that works for this case? Or something else?
Would it make sense to just do a semantic similarity comparison of some sort? That is, just take the class examples, and for each sentence in a document, test to see how similar it is to each class example and if the text is "close enough" to any of the class examples, then it's a "hit"? this would seem slow... Is there a standard/good library for semantic comparison?
AI: Well, I think that you identified the options and problems quite well.
The main problem in this kind of text classification task is that it's impossible to obtain a representative sample of the negative class.
Binary classification is certainly a reasonable option, but since a classifier learns to separate the two classes there's always a risk that some future negative example won't look like any of the training examples and end up misclassified.
One-class classification is also a reasonable option. By definition it's supposed to handle the open classification problem better, but it's not always the case in practice.
Calculating a similarity measure against the reference documents is possible, but it's not efficient, so it's rarely convenient. The performance also depends a lot on the data and the measure chosen, and of course there is the problem of determining an optimal threshold.
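If you decide to try the similarity route, here is a minimal sketch with TF-IDF vectors and cosine similarity (the 0.8 threshold is an arbitrary assumption that you would need to tune on held-out data):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
# class_examples: your ~500 sentences of the target class; candidates: sentences from a new document
class_examples = ["an example sentence of the target class", "another class example"]
candidates = ["a new sentence to classify", "something completely unrelated"]
vectorizer = TfidfVectorizer().fit(class_examples + candidates)
ref_vectors = vectorizer.transform(class_examples)
cand_vectors = vectorizer.transform(candidates)
# similarity of each candidate to its closest reference example
max_sim = cosine_similarity(cand_vectors, ref_vectors).max(axis=1)
predictions = max_sim >= 0.8  # True means "belongs to the class"
A sentence-embedding model would likely give better similarities than TF-IDF, but the thresholding logic stays the same.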
The class imbalance should probably not be treated by resampling, it's not going to solve anything. Imbalance is a problem only because the model doesn't find good indications in the features, so imho the only good option is to investigate and understand why the features don't help the model in cases of errors. Very often it's because the minority class is simply not representative enough. |
H: I want to do sub groups in pandas data frame
I have a dataframe as below I want to do first group by trip_number and then subgroup by mode.
trip_number mode Km time
1 0 10 2021-10-01 01:44:50+00:00
1 0 10 2021-10-01 01:44:51+00:00
1 0 10 2021-10-01 01:44:52+00:00
1 1 10.2 2021-10-01 01:44:53+00:00
1 1 10.3 2021-10-01 01:44:54+00:00
1 1 10.5 2021-10-01 01:44:55+00:00
1 1 11 2021-10-01 01:44:56+00:00
1 1 11.2 2021-10-01 01:44:57+00:00
1 0 11.2 2021-10-01 01:44:58+00:00
1 0 11.2 2021-10-01 01:44:59+00:00
1 0 11.2 2021-10-01 01:45:00+00:00
1 0 11.2 2021-10-01 01:45:01+00:00
2 0 11.2 2021-10-01 01:45:02+00:00
2 0 11.2 2021-10-01 01:45:03+00:00
2 0 11.2 2021-10-01 01:45:04+00:00
2 0 11.2 2021-10-01 01:45:05+00:00
2 1 11.2 2021-10-01 01:45:06+00:00
2 1 11.5 2021-10-01 01:45:07+00:00
2 1 11.6 2021-10-01 01:45:08+00:00
2 0 11.6 2021-10-01 01:45:09+00:00
2 0 11.6 2021-10-01 01:45:10+00:00
2 0 11.6 2021-10-01 01:45:11+00:00
2 0 11.6 2021-10-01 01:45:12+00:00
2 0 11.6 2021-10-01 01:45:13+00:00
2 0 11.6 2021-10-01 01:45:14+00:00
Can someone please help me or guide me on how I can achieve this?
The output data frame I want is to like this:
trip_number mode startkm endkm starttime endtime subtrip
1 0 10 10 2021-10-01 01:44:50+00:00 2021-10-01 01:44:52+00:00 1
1 1 10.2 11.2 2021-10-01 01:44:53+00:00 2021-10-01 01:44:57+00:00 2
1 0 11.2 11.2 2021-10-01 01:44:58+00:00 2021-10-01 01:45:01+00:00 3
2 0 11.2 11.2 2021-10-01 01:45:02+00:00 2021-10-01 01:45:05+00:00 1
2 1 11.2 11.6 2021-10-01 01:45:06+00:00 2021-10-01 01:45:08+00:00 2
2 0 11.6 11.6 2021-10-01 01:45:09+00:00 2021-10-01 01:45:14+00:00 3
AI: First you would have to create a column that indicates the subtrip, this can be done by comparing the value for the mode column with the value in the row before it. You can then simply group on the trip_number, subtrip, and mode columns and aggregate to get the minimum and maximum for the columns you need.
(
df
# compare value for mode column with previous row
.assign(subtrip = lambda x: (x["mode"] != x.groupby("trip_number")["mode"].shift(1)))
# use cumsum() to get the subtrip number within each group
.assign(subtrip = lambda x: x.groupby("trip_number")["subtrip"].transform(pd.Series.cumsum))
.groupby(["trip_number", "subtrip", "mode"])
.agg(
startkm = ("Km", "min"),
endkm = ("Km", "max"),
starttime = ("time", "min"),
endtime = ("time", "max"),
)
.reset_index()
)
trip_number  subtrip  mode  startkm  endkm  starttime                  endtime
          1        1     0     10     10    2021-10-01 01:44:50+00:00  2021-10-01 01:44:52+00:00
          1        2     1     10.2   11.2  2021-10-01 01:44:53+00:00  2021-10-01 01:44:57+00:00
          1        3     0     11.2   11.2  2021-10-01 01:44:58+00:00  2021-10-01 01:45:01+00:00
          2        1     0     11.2   11.2  2021-10-01 01:45:02+00:00  2021-10-01 01:45:05+00:00
          2        2     1     11.2   11.6  2021-10-01 01:45:06+00:00  2021-10-01 01:45:08+00:00
          2        3     0     11.6   11.6  2021-10-01 01:45:09+00:00  2021-10-01 01:45:14+00:00
H: How important is real analysis/measure theory to this field?
I am a college student, struggling to decide whether or not to take pure maths electives on topics such as real analysis and measure theory. If I were to take them then I would definitely have to invest a lot of time into them in order to understand them and get better grades. However I am considering the opportunity cost for this, as I could have learned new skills such as deepening my programming skills. How important was this to you? Thanksss
AI: It’s probably one of the less important skills.
This isn’t to say that it would be worthless to take. Indeed, for someone pursuing a PhD in a data science field like statistics, I would expect some probability theory based on measure theory. If you plan to go that route, then you would be doing measure theory at some point.
However, so much of data science in industry is pretty unrelated to theoretical statistics or even statistics of any kind. Data science is more of a subfield of software engineering than statistics. Even when data science involves statistics, it tends to have more to do with the implementation of known methods than invention of new ones.
I know a lot of good, $uccessful software engineers who definitely don’t know what measure theory is.
(Even if you do plan to get a PhD in statistics or mathematics, there is not much expectation of a background in measure theory. People will have bits and pieces of measure theory in undergrad, but that’s mostly a topic for graduate school. My recommendation, if you’re pursuing that path, is to take the elective classes that have the best teachers. If you can get a recommendation letter from a hotshot in their field, that has its advantages, too.) |
H: Do Bloom filters have a higher probability of colliding when strings are similar?
Because if that is the case, it really suits my need.
I'm trying to discover popular searches on a website. For this, I'm using TopK algorithm which is based on Bloom Filter hashing.
I don't want "Hello world" and "hello world" to be counted twice. So if those collide, that would be really appropriate for my use case.
In practice, I'm using this implementation.
Alternatively, I could run string similarity before adding to the TopK bucket: for any new string, if it is similar to at least one string already in the bucket, convert it to that string before adding it. But this would be stacking the same logic twice (in case it could be achieved directly with Bloom filters).
AI: It depends on the hash function, but in general no, because standard hash functions are designed so that similar objects do not get similar hash codes. [edited, see comment]
For your use case you should probably use a custom hash function which makes it more likely for two similar strings to get the same code. A one-hot encoding of the characters or character bigrams in the string might work.
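As a concrete illustration of a "custom hash", here is a minimal sketch that simply normalizes the query before hashing, so that near-duplicates like "Hello world" and "hello   world!" collide on purpose (the normalization rules are an assumption, adapt them to your queries):
import hashlib
def normalized_key(query: str) -> str:
    # collapse case, punctuation and extra whitespace
    cleaned = "".join(ch for ch in query.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())
def query_hash(query: str) -> int:
    digest = hashlib.md5(normalized_key(query).encode("utf-8")).hexdigest()
    return int(digest, 16)
print(query_hash("Hello world") == query_hash("hello   world!"))  # True
You could also feed normalized_key(query) into your existing TopK structure instead of replacing its internal hash functions.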
H: Why is my training loss not changing?
I'm trying to train a semantic segmentation model based on this architecture, using this one as a base.
The base model uses about 10 ReLU activations, and when implemented according to the first paper, the number jumps up to 14.
The input images are of dimensions 216 x 64, and the output labels can be one of 8 classes.
Here's the complete model implementation.
I've written a custom training step, since the paper calls for it:
@tf.function
def train_step(batch_size, x_batch, y_batch, loss_func):
with tf.GradientTape() as tape:
# print(y_batch_train.shape)
logits_strong, logits_weak = model(x_batch, training=True) # Logits for this minibatch
# logits = tf.concat([logits_strong[0:batch_size//2], logits_weak[batch_size//2:batch_size]], 0)
loss_strong_value = loss_func(y_batch[0:batch_size//2], logits_strong[0:batch_size//2])
loss_weak_value = loss_func(y_batch[batch_size//2:batch_size], logits_weak[batch_size//2:batch_size])
loss_value = loss_strong_value + loss_weak_value
# loss_value = loss_func(y_batch, logits)
# tf.print(loss_value.shape)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_acc_metric.update_state(y_batch[0:batch_size//2], logits_strong[0:batch_size//2])
train_acc_metric.update_state(y_batch[batch_size//2:], logits_weak[batch_size//2:])
# train_acc_metric.update_state(y_batch, logits)
return loss_value
def train(model, start_epoch, num_epochs,train_dataset, optimizer, model_path, train_acc_metric, loss_fn=customized_loss, model_weights=None,):
"""
Run a for loop with number of epochs. Run an inner for loop for each minibatch and get logits_strong and logits_weak.
Drop second half of logits_strong, and first half of logits_weak. Compute cross entropy loss separately and add.
Finally, compute grads and apply.
Save model and weights after every 20 or so epochs.
Save losses and acc for each epoch and plot after epochs are done.
NOTE: All minibatches need to contain strong labels for first half and weak labels for second half. DO NOT SHUFFLE.
Parameters: model, start_epoch, no. of epochs, optimizer, path to model, metric for train acc, model weights, loss func.
"""
train_acc=[]
batch_size=16
epochs = num_epochs
end_epoch=start_epoch + num_epochs
if model_weights:
load_status = model.load_weights(model_path + f"/weights/{model_weights}")
load_status.assert_consumed()
for epoch in range(start_epoch, end_epoch):
print(f"\nStart of epoch {epoch}")
start = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_value = train_step(batch_size, x_batch_train, y_batch_train, loss_fn)
# print(loss_value.shape, len(model.trainable_weights))
# Log every 200 batches.
if step % 5 == 0:
print(
"Training loss (for one batch) at step %d: %.4f"
% (step, float(np.sum(loss_value)))
)
print("Seen so far: %s samples" % ((step + 1) * batch_size))
train_acc_epoch = train_acc_metric.result()
train_acc.append(train_acc_epoch)
print("Training acc over epoch: %.4f" % (float(train_acc_epoch),))
print("Time taken: %.2fs" % (time.time() - start))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
if epoch % 10 == 0:
model.save_weights(model_path + f"/weights/ckpt_DB_{start_epoch}_{end_epoch}")
Here are the optimizer and loss details:
optimizer = tf.keras.optimizers.SGD(learning_rate = 0.001, momentum = 0.9, nesterov = True)
#Calculation of the dice co-efficient based on actual and predicted labels
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_coef_loss(y_true, y_pred):
return 1-dice_coef(y_true, y_pred)
#Combined loss of weighted multi-class logistic loss and dice loss
def customized_loss(y_true,y_pred):
# print("Shape of ground truth:", y_true.shape)
# print("Shape of prediction:", y_pred.shape)
return (1*K.categorical_crossentropy(y_true, y_pred))+(0.5*dice_coef_loss(y_true, y_pred)) # + 0.01*np.linalg.norm())
When I try to train it (on a relatively small dataset), this is the output:
Start of epoch 0
Training loss (for one batch) at step 0: 566345.3750
Seen so far: 16 samples
Training loss (for one batch) at step 5: 1526504.7500
Seen so far: 96 samples
Training loss (for one batch) at step 10: 1538868.5000
Seen so far: 176 samples
Training loss (for one batch) at step 15: 1445873.7500
Seen so far: 256 samples
Training loss (for one batch) at step 20: 1514306.7500
Seen so far: 336 samples
Training loss (for one batch) at step 25: 1492221.5000
Seen so far: 416 samples
Training loss (for one batch) at step 30: 1438761.3750
Seen so far: 496 samples
Training acc over epoch: 0.8664
Time taken: 13.09s
Start of epoch 1
Training loss (for one batch) at step 0: 1411657.2500
Seen so far: 16 samples
Training loss (for one batch) at step 5: 1526504.7500
Seen so far: 96 samples
Training loss (for one batch) at step 10: 1538868.5000
Seen so far: 176 samples
Training loss (for one batch) at step 15: 1445873.7500
Seen so far: 256 samples
Training loss (for one batch) at step 20: 1514306.7500
Seen so far: 336 samples
Training loss (for one batch) at step 25: 1492221.5000
Seen so far: 416 samples
Training loss (for one batch) at step 30: 1438761.3750
Seen so far: 496 samples
Training acc over epoch: 0.8944
Time taken: 10.71s
Start of epoch 2
Training loss (for one batch) at step 0: 1411657.2500
Seen so far: 16 samples
Training loss (for one batch) at step 5: 1526504.7500
Seen so far: 96 samples
Training loss (for one batch) at step 10: 1538868.5000
Seen so far: 176 samples
Training loss (for one batch) at step 15: 1445873.7500
Seen so far: 256 samples
Training loss (for one batch) at step 20: 1514306.7500
Seen so far: 336 samples
Training loss (for one batch) at step 25: 1492221.5000
Seen so far: 416 samples
Training loss (for one batch) at step 30: 1438761.3750
Seen so far: 496 samples
Training acc over epoch: 0.8944
Time taken: 10.69s
Every epoch after 2 until 20 gives the same loss and accuracy for the same batches.
I'm also suspicious of the high accuracy right from the get-go.
I've already tried reducing the lr to 0.001, but no change.
Could this be a divergence issue? Dying ReLU?
Most importantly, how can I fix it?
AI: Dying ReLU is a good guess. But the question is why this is not happening in the original paper. The answer is likely the input data features. Have you normalized the data? You can debug this issue by trying to find the original dataset from the paper and training your model with it.
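As a sketch, per-channel normalization of the inputs could look like this (the arrays here are random stand-ins for your 216 x 64 images):
import numpy as np
x_train = np.random.rand(32, 216, 64, 1).astype("float32")  # stand-in for your training images
x_val = np.random.rand(8, 216, 64, 1).astype("float32")
mean = x_train.mean(axis=(0, 1, 2), keepdims=True)  # statistics from the training set only
std = x_train.std(axis=(0, 1, 2), keepdims=True) + 1e-8  # avoid division by zero
x_train_norm = (x_train - mean) / std
x_val_norm = (x_val - mean) / std  # reuse the training statistics on validation/test data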
H: What is the problem that causes overfitting in the code?
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from keras import models
from keras.layers import Dense
from keras.regularizers import l1
from keras.layers import Activation
from keras.layers import Dropout
from sklearn.preprocessing import StandardScaler
std=StandardScaler();
x_train, x_test, y_train, y_test=train_test_split(features,target,test_size=0.2,stratify=target,random_state=1)
X_train_std=std.fit_transform(x_train)
X_test_std=std.transform(x_test)
network = models.Sequential()
network.add(Dropout(0.2, input_shape=(55,)))
network.add(Dense(units=16, activation='linear', activity_regularizer=l1(0.0001)))
network.add(Activation('relu'))
network.add(Dropout(0.2))
network.add(Dense(units=32, activation='linear', activity_regularizer=l1(0.0001)))
network.add(Activation('relu'))
network.add(Dropout(0.2))
network.add(Dense(units=1, activation='sigmoid'))
network.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
history=network.fit(X_train_std,y_train,epochs=100,batch_size=10,validation_data=(x_test, y_test))
AI: Based on your comment, I would like to point out a few things.
1.) First of all, neural nets (or deep learning in general) are used when we have a large amount of data. The data you have is very small, and a neural net needs more data to learn from. You would be much better off using traditional machine learning models instead of deep learning.
2.) Since your data is imbalanced, accuracy is the worst choice of metric you could be using, and it is the reason you are getting high accuracy: on imbalanced data the model can always predict the majority class and still score well. You should use other metrics such as the F1 score instead; a simple Google search will show you the options, and there is a small evaluation sketch after this list.
3.) The reason you are getting a constant validation loss might have something to do with the way you modelled your neural net. I once had the same issue but I don't remember how I solved it. Most probably it had something to do with the activation function or the number of neurons, but it could be different in your case. Try different activation functions and numbers of neurons to see if something works. If it works out, then do let me know here what the problem was!
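For point 2, a minimal sketch of evaluating with the F1 score and a full classification report instead of accuracy (assuming the network, X_test_std and y_test variables from your snippet):
from sklearn.metrics import classification_report, f1_score
# threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions
y_pred = (network.predict(X_test_std) > 0.5).astype(int).ravel()
print(classification_report(y_test, y_pred))
print("F1:", f1_score(y_test, y_pred))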
Cheers! |
H: Factor Analysis vs PCA
Could someone please explain when FA is used or when PCA is used, as I understood FA do dimensionality reduction, however PCA - the main goal is the same. Then which one should I use and in which cases?
AI: Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability while minimizing information loss. It does so by creating new, uncorrelated variables that successively maximize variance.
Factor analysis is a way to condense the data in many variables into just a few variables. For this reason, it is also sometimes called "dimension reduction": you reduce the "dimensions" of your data into one or more "super-variables".
PCA decomposes the data into a smaller number of components and is typically computed via a Singular Value Decomposition (SVD). Factor analysis is used to understand the underlying "causes": the latent factors that capture much of the information in a set of observed variables in the dataset.
For More,
What are the differences between Factor Analysis and Principal Component Analysis?
In terms of a simple rule of thumb, I'd suggest that you:
Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables.
Run principal component analysis if you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables.
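In scikit-learn both are available with a very similar interface, so you can try them side by side; a minimal sketch on random stand-in data:
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
X = np.random.rand(100, 10)  # 100 samples, 10 observed variables
pca = PCA(n_components=3).fit(X)
fa = FactorAnalysis(n_components=3).fit(X)
print(pca.explained_variance_ratio_)  # variance captured by each principal component
print(fa.components_.shape)  # factor loadings: 3 latent factors x 10 observed variables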
H: How to give feedback if it's uncertain whether or not a choice was correct?
Suppose you have a machine learning system, which aims to predict whether or not the recipient of a parcel will be at home, before delivery is attempted. For example, Alice is working from home, so the likelihood of her being at home at the time of delivery is very high. Bob, however, is working on a construction site, so he is presumed to not be at home during possible delivery windows.
The system would learn by making predictions about each person, and then receive feedback whether or not that prediction was correct. If the system predicted that the recipient would be at home, then a delivery attempt would be made, and it could be recorded whether the recipient was actually at home or not.
However, if the system would predict that the recipient was not at home, then it would not be possible to verify whether that prediction was correct. It could then be presumed that the system would always predict that the user was not at home.
Possible Solutions
I came up with two possible strategies to mitigate this issue. First, even if the system stated that the recipient was likely not at home, there would be a random chance for the delivery attempt to be made regardless. This would then be able to confirm whether or not the system was correct. This, however, would come to the downside of the added "cost" of attempting to deliver a parcel to someone who was likely not at home.
The second, somewhat more elegant solution, is to ask people whether or not they were at home during the initial delivery window, when they pick up their package. This initially seems better, but could lead to negative responses from recipients. For instance, they could be upset that they were at home, but were not delivered their parcel, because a computer didn't deem them worthy. Or, it could lead to recipients lying about having been at home, as for them, being categorized as "someone, who is likely at home" is a strictly positive categorization. All the downsides of being categorized as such only affect the delivery service, but not the recipient. It is thus in the best interest to give dishonest feedback in an attempt of being categorized more favourably.
Is there a better way of modelling this? Or would I just have to "bite the bullet" and accept that people will attempt to game the system?
AI: What you are essentially trying to do here (if I understand you correctly) is collect data (i.e. labels) while also predicting the labels.
Your second approach does look promising. One question is: why would you assume people would lie about being home when they were not? Don't they want to get their package? That doesn't seem like an assumption you can make. In this case, if you are trying to verify the predictions, human-annotated data (in your case, the manually provided yes/no feedback) should always carry more weight than any prediction your system makes.
After collecting the manual responses, you would want to use them to improve your predictions, and once you have reached a certain threshold of accuracy (e.g. 90%) you could stop the manual response collection.
H: Finding global optimum of unknown and expensive function
I would like to find optimal combination of parameters for the algorithm affecting the disk space used by some storage. Therefore, several algorithm parameters (x1, x2, x3, where 0 < x1 < 1, 10 < x2 < 100, 0.1 < x3 < 0.5) are used as an input for the model, and the disk space occupied by storage S(x1, x2, x3) is the cost function I'd like to minimize.
The problem is that every function call S(x1, x2, x3) is very expensive, and it may take minutes or even hours to end up, so it's almost infeasible to prepare the training set. Instead, I would like ML library to suggest, which combination of (x1, x2, x3) values to use to estimate S on every next optimization step.
I believe this task is quite common for the major ML frameworks, but I wasn't able to find anything suitable yet. So what is the name for this class of problems, and which ML framework provide the routines to solve it?
AI: What you have is a standard optimization problem, which has got nothing to do with ML. I don’t know what modelling language you use, but Scipy in Python offers many nonlinear optimization algorithms - global or local.
If I were you, for this problem I would stay away as much as possible from gradient descent: if the evaluation of your cost function does not also provide gradient information, then you have to calculate it numerically, which involves many more function evaluations. Whether your cost function provides gradient outputs or not, gradient descent is just about the worst optimization algorithm you can think of in the realm of nonlinear solvers.
Global solvers will attempt to give you a “global optimum” - but they are much more expensive to run and you will never know whether the minimum/maximum you found is the actual “global” one. Local algorithms are faster to converge but they only ever provide a locally optimum minimum/maximum.
https://docs.scipy.org/doc/scipy/reference/optimize.html |
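As a minimal sketch with your three bounded parameters (the storage-size function is mocked here, plug in your real, expensive S(x1, x2, x3)); differential evolution is one of the derivative-free global solvers mentioned above, and the small maxiter/popsize keep the number of expensive evaluations down:
from scipy.optimize import differential_evolution
def storage_size(params):
    # placeholder for the real, expensive S(x1, x2, x3)
    x1, x2, x3 = params
    return (x1 - 0.3) ** 2 + (x2 - 40) ** 2 / 100 + (x3 - 0.2) ** 2
bounds = [(0, 1), (10, 100), (0.1, 0.5)]
result = differential_evolution(storage_size, bounds, maxiter=20, popsize=10, seed=0)
print(result.x, result.fun)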
H: Correlating & Combining categorical and numerical features for classification problem
I just started with my first Machine Learning project (Jupyter Notebook, Python, Scikit-learn, pandas) and I am working on a Palmer Penguin Dataset. I have been doing some data analysis and I got stuck when trying to correlate and combine some of the attributes from the dataset.
Correlation.
The dataset contains 2 categorical (gender, island) and 4 numerical (body weight, flipper length, culmen depth, culmen width) features as well as a label (penguin species). I was first trying to correlate the features with the label - I have used LabelEncoder from scikit-learn to transform the species label into a numerical attribute since the correlation function of pandas dataframe only works on numerical attributes. Even though I got some results, I have read from many sources that it is an improper approach for measuring the correlation between categorical and numerical attributes. I would appreciate it if somebody can clarify for me, what is the preferred way of measuring the correlation of such types.
Combining.
One of the reasons why I wanted to measure the correlation between those attributes is to possibly replace some of them with combined attributes, if that would make sense. One combination that logically makes sense is combining gender with body weight - I was thinking that gender separately does not bring much value, but it could be more valuable if we combine it with body weight. Do such combinations make sense, and can they improve the accuracy of the classification model? If yes, how should I approach combining categorical with numerical attributes using the specified tools?
AI: Regarding your first point, you can correlate your features to your target the way you have done (to my knowledge; I'm not 100% sure). This is because you are only converting the categorical classes into numerical ones, so in the end you might get results like "feature 1 has a correlation with class 1 of about 0.4" and so on. Then you can reverse your LabelEncoder encoding to get back the categorical classes, i.e. class 1 is species A and class 2 is species B. I think it should be correct, but someone please correct me if I am wrong.
Regarding point 2, what you describe is known as feature engineering, where you combine multiple features into a single feature based on domain knowledge. If you do not have domain expertise then I would suggest to stay away from combining features.
The example you gave does not make sense to me as gender is a categorical feature and weight is a numerical feature. So how can you combine both of them? It will probably lead to incorrect/bad results. |
H: Which Keras metric for multiclass classification
I have a multiclass classification data where the target has 11 classes. I am trying to build a Neural Net using Keras. I am using softmax as activation function and categorical_crossentropy as the loss function. I have one hot encoded the target before passing it into the net. The issue I am facing is which Keras metric should I use for this purpose? The official documentation does not mention which metric is suitable for multiclass classification.
This link mentions using categorical_accuracy as the metric for multiclass classification, but other than that, all other questions on this site are about multilabel classification metrics, like this and this link.
Is there any implementation of lets say f1_score in Keras using the custom metric function, since f1_score is the go to metric for multiclass classification I guess?
EDIT 1:
Would something like this work using the custom metric functionality in Keras?
from sklearn.metrics import f1_score
def my_metric_fn(y_true, y_pred):
f1 = f1_score(y_true, y_pred)
return f1
model = Sequential()
model.add(Dense(input_dim = 12, units = 128, activation = 'relu', kernel_initializer = 'he_uniform'))
model.add(Dense(units = 64, activation = 'relu', kernel_initializer = 'he_uniform'))
model.add(Dense(units = 32, activation = 'relu', kernel_initializer = 'he_uniform'))
model.add(Dense(units = 11, activation = 'softmax', kernel_initializer = 'glorot_uniform'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = [my_metric_fn])
history = model.fit(train_x5, train_encoded, validation_split = 0.2, epochs = 5, batch_size = 1000)
EDIT 2: This is giving me an error in the last line as follows:
OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
AI: One option is to implement F1 score in Keras:
from tensorflow.keras import backend as K

def f1(y_true, y_pred):
    def recall_m(y_true, y_pred):
        TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        Positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = TP / (Positives + K.epsilon())
        return recall

    def precision_m(y_true, y_pred):
        TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        Pred_Positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = TP / (Pred_Positives + K.epsilon())
        return precision

    precision, recall = precision_m(y_true, y_pred), recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))
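You can then pass it to compile like any built-in metric, for example (a sketch reusing the model from your question):
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy', f1])
Unlike the sklearn-based version in your edit, this runs inside the TensorFlow graph, so it avoids the AutoGraph error; note that it is computed per batch, so it only approximates the epoch-level F1 score.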
H: How to evaluate triple extraction in NLP?
In my current NLP work, I am extracting triples using the triple extraction functions in the Stanford NLP and spaCy libraries. I am looking for a good method to evaluate how good the extraction has been. Any suggestions?
AI: The standard evaluation method works for this kind of task: measure precision, recall and F1-score on a manually annotated sample.
In general one can find which evaluation measure is standard for a particular task in the literature. For example this paper seems to address the topic (I didn't read it). |
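A minimal sketch of such an evaluation, assuming you represent each triple as a (subject, relation, object) tuple, have a manually annotated gold set, and count only exact matches as correct:
def triple_prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
gold = [("Obama", "born_in", "Hawaii"), ("Obama", "president_of", "USA")]
pred = [("Obama", "born_in", "Hawaii"), ("Obama", "born_in", "USA")]
print(triple_prf(pred, gold))  # (0.5, 0.5, 0.5)
In practice you may want a more lenient matching scheme (e.g. partial or normalized string matches), but the precision/recall bookkeeping stays the same.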
H: Get Hidden Layers in PyTorch TransformerEncoder
I am trying to access the hidden layers when using TransformerEncoder and TransformerEncoderLayer. I could not find anything like that in the source code for these classes.
I am not using hugging face but I know one can get hidden_states and last_hidden_state. I am looking for something similar.
Do you know how I can access them?
AI: Not exactly sure which hidden layer you are looking for, but the TransformerEncoderLayer class simply has the different sub-layers as attributes which you can easily access (e.g. self.linear1 or self.self_attn). The TransformerEncoder is simply a stack of TransformerEncoderLayer layers, which are stored in its layers attribute as a ModuleList. For each layer in that list you can then access the hidden layers as mentioned.
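If you want to collect the per-layer hidden states at runtime rather than digging into attributes, a minimal sketch using forward hooks:
import torch
import torch.nn as nn
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
encoder = nn.TransformerEncoder(layer, num_layers=3)
hidden_states = []
hooks = [l.register_forward_hook(lambda module, inputs, output: hidden_states.append(output)) for l in encoder.layers]
x = torch.rand(10, 2, 64)  # (sequence length, batch, d_model)
last_hidden_state = encoder(x)
print(len(hidden_states), hidden_states[0].shape)  # 3 torch.Size([10, 2, 64])
for h in hooks:
    h.remove()  # detach the hooks when you are done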
H: How to find median/average values between data frames with slightly different columns?
I am trying to combat run-to-run variance of the data I collect by combining the data from different runs and finding the mean/average. The problem is that in each run there is a chance that some of the features may not appear:
x y z
0 0 2 2
1 0 1 3
2 5 3 0
3 1 1 0
4 0 2 0
x y d
0 1 0 2
1 1 1 3
2 0 4 2
3 0 2 0
4 0 2 1
z y
0 0 2
1 0 1
2 0 2
3 1 0
4 3 0
As you can see from this example, the rows are always consistent, but some runs might provide less columns than the rest. Therefore in a theoretical dataframe where all the columns are averaged, in some columns the values would have to be divided by a lower number than others (in this case the values in the y column will have to be divided by 3, but in the x column - by 2).
Bonus question: Is there a way make this row-specific: do the same thing, but not take into account the 0s, since in my case that indicates "no data", so it might interfere with the results (y for row 0 has one zero, so the average will be $(2+2)\over 2$, whereas in row 1 it would be $(1+1+1)\over3$.
AI: Assuming that you have the data stored in separate dataframes, you can use a combination of pandas.concat and pandas.DataFrame.groupby to achieve what you are looking for:
import pandas as pd
import numpy as np
df1 = pd.DataFrame({
"x": [0,0,5,1,0],
"y": [2,1,3,1,2],
"z": [2,3,0,0,0]
})
df2 = pd.DataFrame({
"x": [1,1,0,0,0],
"y": [0,1,4,2,2],
"z": [2,3,2,0,1]
})
df3 = pd.DataFrame({
"y": [2,1,2,0,0],
"z": [0,0,0,1,3]
})
df = (
# combine dataframes into a single dataframe
pd.concat([df1, df2, df3])
# replace 0 values with nan to exclude them from mean calculation
.replace(0, np.nan)
.reset_index()
# group by the row within the original dataframe
.groupby("index")
# calculate the mean
.mean()
)
index    x    y    z
    0    1    2    2
    1    1    1    3
    2    5    3    2
    3    1  1.5    1
    4  nan    2    2
H: Why tfidf of one document is not zero?
I'm new to NLP. Recently I wanted to do some small NLP tasks, and came across a strange thing.
That is, I ran the following code
from sklearn.feature_extraction.text import TfidfVectorizer
docs = ["strange event"]
tfIdf_vectorizer = TfidfVectorizer(analyzer='word', tokenizer=word_tokenize,
stop_words=stopwords, ngram_range=(1, 2), use_idf=True,
norm='l2')
tfidf = tfIdf_vectorizer.fit_transform(docs)
print(tfidf)
and see the following result
(0, 2) 0.5773502691896258
(0, 0) 0.5773502691896258
(0, 1) 0.5773502691896258
shouldn't the tfidf of one single document be zero? (Since the IDF=log(1/1)=0)
AI: It is because, by default, sklearn's TF-IDF vectorizer will normalize the results. See the Tf-idf Term Weighting section of the User Guide. For your example,
n = 1
df = 1
idf = np.log(n/df) + 1 = 1
You have 3 terms ("strange", "event", "strange event"), each with tf = 1 and identical idf. So, the L2-normalized tf-idf of each term is computed as
abs(1)/sqrt(1+1+1) = 0.577 |
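You can check this yourself by switching the normalization off; a quick sketch with sklearn's default tokenizer and no stop words, to keep it self-contained:
from sklearn.feature_extraction.text import TfidfVectorizer
docs = ["strange event"]
raw = TfidfVectorizer(ngram_range=(1, 2), use_idf=True, norm=None).fit_transform(docs)
print(raw.toarray())  # [[1. 1. 1.]] -> each of the 3 terms has tf = 1 and idf = 1
l2 = TfidfVectorizer(ngram_range=(1, 2), use_idf=True, norm="l2").fit_transform(docs)
print(l2.toarray())  # [[0.577 0.577 0.577]]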
H: How Can I Process SageMaker Ground Truth NER JSON Output into DataFrame?
So, I've recently created a job using AWS SageMaker Ground Truth for NER purposes, and have received an output in the form a manifest file. I'm now trying to process the manifest file into a dataframe, and I'm failing greatly.
The JSON file is incredibly complex. Here's an example of it based on the documentation:
{
"source": "Amazon SageMaker is a cloud machine-learning platform that was launched in November 2017. SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud. SageMaker also enables developers to deploy ML models on embedded systems and edge-devices",
"ner-labeling-job-attribute-name": {
"annotations": {
"labels": [
{
"label": "Date",
"shortDisplayName": "dt"
},
{
"label": "Verb",
"shortDisplayName": "vb"
},
{
"label": "Thing",
"shortDisplayName": "tng"
},
{
"label": "People",
"shortDisplayName": "ppl"
}
],
"entities": [
{
"label": "Thing",
"startOffset": 22,
"endOffset": 53
},
{
"label": "Thing",
"startOffset": 269,
"endOffset": 281
},
{
"label": "Verb",
"startOffset": 63,
"endOffset": 71
},
{
"label": "Verb",
"startOffset": 228,
"endOffset": 234
},
{
"label": "Date",
"startOffset": 75,
"endOffset": 88
},
{
"label": "People",
"startOffset": 108,
"endOffset": 118
},
{
"label": "People",
"startOffset": 214,
"endOffset": 224
}
]
}
},
"ner-labeling-job-attribute-name-metadata": {
"job-name": "labeling-job/example-ner-labeling-job",
"type": "groundtruth/text-span",
"creation-date": "2020-10-29T00:40:39.398470",
"human-annotated": "yes",
"entities": [
{
"confidence": 0
},
{
"confidence": 0
},
{
"confidence": 0
},
{
"confidence": 0
},
{
"confidence": 0
},
{
"confidence": 0
},
{
"confidence": 0
}
]
}
}
So far, I've only been able to extract the "source" and the "entities", but now the dataframe has a list of dictionaries on its second column.
How should I process the JSON file into a DataFrame using Pandas? Or is there a better way to process this output?
Many thanks in advance.
Edit: Here's what I'm hoping to see
AI: You can use base python syntax to transform the dictionary into a format that would work for pandas before converting it to a dataframe:
from itertools import groupby
import pandas as pd
# pull the annotated spans out of the manifest record
entities = data["ner-labeling-job-attribute-name"]["annotations"]["entities"]
# map each entity to a (label, text) pair using the character offsets into the source
entities = [
    (entity["label"], data["source"][entity["startOffset"]:entity["endOffset"]])
    for entity in entities
]
# group the extracted texts by label
# (note: itertools.groupby only groups consecutive items, so sort by label first if your labels are not contiguous)
entities = {
    key: [[x[1] for x in group]]
    for key, group in groupby(entities, lambda x: x[0])
}
df = pd.DataFrame({"Source": data["source"], **entities})
print(df.to_markdown(index=False))
| Source | Thing | Verb | Date | People |
|---|---|---|---|---|
| Amazon SageMaker is a cloud machine-learning platform that was launched in November 2017. SageMaker enables developers to create, train, and deploy machine-learning (ML) models in the cloud. SageMaker also enables developers to deploy ML models on embedded systems and edge-devices | ['cloud machine-learning platform', 'edge-devices'] | ['launched', 'deploy'] | ['November 2017'] | ['developers', 'developers'] |
H: Suggestions on how to visualization this data
I was hoping to pick people's brains on a better way to visualize data that looks like this:
Two categorical variables
One continuous variable
I'm trying to visualize this data in a way that is more appropriate than a heat map. Does anyone have any suggestions?
Here's the code plus data (Note: There are more rows than the example data I have provided.):
test_data <- structure(list(Toys = c("Slinky", "Slinky", "Slinky", "Slinky",
"Slinky", "Slinky", "Tin Solider", "Tin Solider", "Tin Solider",
"Tin Solider", "Tin Solider", "Tin Solider", "Hungry Hungry Hippo",
"Hungry Hungry Hippo", "Hungry Hungry Hippo", "Hungry Hungry Hippo",
"Hungry Hungry Hippo", "Hungry Hungry Hippo"),
Manufacturer = c("Manufacturer A", "Manufacturer B", "Manufacturer C", "Manufacturer A", "Manufacturer A",
"Manufacturer A", "Manufacturer B", "Manufacturer B", "Manufacturer B",
"Manufacturer B", "Manufacturer B", "Manufacturer B", "Manufacturer C",
"Manufacturer C", "Manufacturer C", "Manufacturer C", "Manufacturer C",
"Manufacturer C"),
Price = c(5.99, 6.99, 7.99, 9, 6, 5.54, 7,
9.99, 6.99, 6.75, 8, 7.99, 9.99, 7.99, 5.99, 8.99, 10.99, 9.75)),
class = "data.frame", row.names = c(NA, -18L))
melted_test_data <- reshape::melt(test_data %>% select(Toys,Manufacturer, Price))
library(plotly)
library(scales)
plot_test_data <- melted_test_data %>%
ggplot(aes(x = Manufacturer, y = Toys, fill = value)) +
geom_tile() +
scale_fill_distiller(palette = 'Accent', label = label_comma()) +
theme(panel.background = element_rect(fill = 'white'), axis.text.x = element_text(angle = 45, hjust = 1), plot.title = element_text(hjust = 0.5)) +
labs(title ="Price by Manufacturer and Toys", x = "Manufacturers", y = "Manufacturer") +
guides(fill = guide_colourbar(title = "price($)"))
ggplotly(plot_test_data)
AI: If you want to make some kind of comparison of cost across the categories and the manufacturers, then you can compare the prices using the mean/median.
library(tidyverse)
test_data %>%
group_by(Toys, Manufacturer) %>%
summarise(Price=mean(Price)) %>%
ggplot()+
geom_bar(aes(x=Manufacturer, y=Price, fill=Toys), stat = 'identity',position = 'dodge')+
scale_fill_brewer(palette = 'Set1')+
theme_bw()+
theme(legend.position = 'top')
Side note:
Significance of the graph?
Hungry Hippo and Tin soldier are made by only 1 Manufacturer while Slinky is produced by all three. Even though it is produced by all the three manufacturers, the average price varies and it is the highest for Manufacturer C. |
H: What kind of model is this?
Can anyone help me identify what kind of Architecture is behind this Application? Is it a "simple" Classification Network? If so how are the heatmaps generated?
https://www.youtube.com/watch?v=c_giaOUzB_8
Thanks
AI: This technique is called saliency maps [1][2].
From the documentation of Keras-Vis:
The idea is pretty simple. We compute the gradient of output category with respect to input image. This should tell us how output category value changes with respect to a small change in input image pixels. All the positive values in the gradients tell us that a small change to that pixel will increase the output value. Hence, visualizing these gradients, which are the same shape as the image should provide some intuition of attention.
The idea behind saliency is pretty simple in hindsight. We compute the gradient of output category with respect to input image.
$\frac{∂output}{∂input}$
This should tell us how the output value changes with respect to a small change in inputs. We can use these gradients to highlight input regions that cause the most change in the output. Intuitively this should highlight salient image regions that most contribute towards the output. |
H: File format where column names are repeated on each row
I have received a dataset in text file with the following format
col1=datac1r1,col2=datac2r1,col3=datac3r1
col1=datac1r2,col2=datac2r2,col3=datac3r2
col1=datac1r3,col2=datac2r3,col3=datac3r3
col1=datac1r4,col2=datac2r4,col3=datac3r4
Each row is a unique entry, with columns separated by comma, just that the column name is repeated in each element.
I need to parse this in R and analyze it. I have worked with csv files extensively, but I have never seen this format before.
Is it a std. format I can import it in? Or do I need to write a script to convert it into a csv format?
AI: Using Miller (https://github.com/johnkerl/miller)
mlr --ocsv unsparsify input.txt
you will have this CSV
col1,col2,col3
datac1r1,datac2r1,datac3r1
datac1r2,datac2r2,datac3r2
datac1r3,datac2r3,datac3r3
datac1r4,datac2r4,datac3r4 |
H: Getting stock data in a discipline manner from Yahoo finance
I used the below code for downloading stock data from yahoo finance:-
import yfinance as yf
import datetime
stocks = ["AXISBANK.NS", "HDFCBANK.NS", "ICICIBANK.NS" ,"INDUSINDBK.NS",
"KOTAKBANK.NS",
"SBIN.NS",
"YESBANK.NS"]
start = datetime.datetime(2018,1,1)
end = datetime.datetime(2019,7,17)
data = yf.download(stocks, start=start, end=end)
data
I get the data in the below manner:-
I saved the data using panda:-
import pandas as pd
df = pd.DataFrame(data)
# saving the dataframe
df.to_csv('BANKING STOCK.csv')
I got the data in this format:-
But I want my data in this format:-
Because this format is more convenient for analyzing data through SQL. How can I change the format of the data?
AI: Use:
data.stack().reset_index().rename(index=str, columns={"level_1": "Symbol"}).sort_values(['Symbol','Date'])
Output:
Date Symbol Adj Close ... Low Open Volume
0 2018-01-01 AXISBANK.NS 564.80 ... 560.50 563.80 6943234
7 2018-01-02 AXISBANK.NS 558.81 ... 556.35 567.00 6292268
14 2018-01-03 AXISBANK.NS 559.75 ... 555.75 561.30 3990149
21 2018-01-04 AXISBANK.NS 559.21 ... 558.00 564.85 2766664
28 2018-01-05 AXISBANK.NS 562.55 ... 556.70 561.05 3249056
35 2018-01-08 AXISBANK.NS 564.75 ... 562.45 565.80 3770339
42 2018-01-09 AXISBANK.NS 564.60 ... 563.00 564.00 4607504
49 2018-01-10 AXISBANK.NS 562.10 ... 560.45 566.15 2411625
The basic operation is stack that stacks the column headers into a multiindex. This multiindex is then converted to columns (reset_index()) which then get proper names. Finally the data are sorted by Symbol and Date as required.
Remark: You don't need df = pd.DataFrame(data), yf.download already returns a DataFrame. |
H: Repeated training and testing in Weka?
I am using one file for training (e.g. train.arff) and another for testing (e.g. test.arff) with a 70-30 ratio in Weka. I want to ask how I can use repeated training/testing in Weka when I have separate train and test data files. The second part of the question is: what is the advantage of using repetition, and what happens if we don't use it? Thanks in advance
AI: In general the advantage of repeated training/testing is to measure to what extent the performance is due to chance. The most common source of chance comes from which instances are selected as training/testing data. One can use k-fold cross-validation in order to mitigate the effect of chance in this case. Weka performs 10-fold CV by default, as far as I remember, but this is not compatible with providing a specific training/test set.
[edit based on OP's comments]
In the video mentioned by OP, the author loads a dataset and sets the "percentage split" at 90%. This means that the full dataset will be split between training and test set by Weka itself. Weka randomly selects which instances are used for training, this is why chance is involved in the process and this is why the author proceeds to repeat the experiment with different values for the random seed: every time Weka will selects a different subset of instances as training set, resulting in a different accuracy. In other words, the purpose of repeating the experiment is to change how the dataset is split between training and test set. In this case (J48 with default options) there would be no point repeating the experiment with a fixed training set, because there's no chance involved in the process so there's no variation in the result.
It's worth noticing that this lesson by the author of the video seems to be used as an introduction to the more general concept of k-fold cross-validation, presented a couple of lessons later in the course. |
H: speed up the fit of model
What are the ways to speed up the fit of a model on large files (more than 200 MB)? I tried dividing the data into pieces (chunks) and using dask, but the effect is unsatisfactory. I use 16GB RAM and an i7 2.2GHz 4-core processor. Is the only remaining option to buy a cloud service or use a better computer?
My classifier
clf = xgb.XGBClassifier(n_estimators=500,
n_jobs=4,
max_depth=9,
learning_rate=0.05,
subsample=0.7,
colsample_bytree=0.7,
missing=-999,
gamma = 0.1)
# Number of variable > 400
# Number of rows > 200000
AI: One thing you could try is to use a "normal" Logit, as it is computationally not very expensive. Using Lasso, Elastic Net, or Ridge can yield good results (often similar to boosting), as shown here.
Here is a recent code example for Logit/Lasso.
If you would like to stick with boosting, you may check LightGBM as it tends to be faster. XGBoost tends to be "heavy" in terms of data handling. LightGBM (as the name indicates) aims at resolving this problem and I have had good experiences so far.
Here is the link to the LightGBM docs.
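For reference, a rough LightGBM equivalent of your XGBoost setup might look like this (hyperparameters carried over approximately, not tuned):
import lightgbm as lgb
clf = lgb.LGBMClassifier(n_estimators=500, n_jobs=4, max_depth=9, learning_rate=0.05, subsample=0.7, colsample_bytree=0.7)
# clf.fit(X, y) as usual; it follows the same scikit-learn API as XGBClassifier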
H: Pytorch doing a cross entropy loss when the predictions already have probabilities
So, normally categorical cross-entropy could be applied using a cross-entropy loss function in PyTorch or by combining a logsoftmax with the negative log likelihood function such as follows:
m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
pred = torch.tensor([[-1,0,3,0,9,0,-7,0,5]], requires_grad=True, dtype=torch.float)
target = torch.tensor([4])
output = loss(m(pred), target)
print(output)
The thing is: what if the data at the output is already given as probabilities, i.e. the variable pred already contains the probabilities, presented like the following:
pred = torch.tensor([[.25,0,0,0,.5,0,0,.25,0]], requires_grad=True, dtype=torch.float)
How could the cross-entropy then be completed in PyTorch?
AI: You can implement categorical cross entropy pretty easily yourself. It is calculated as
$$
\text{cross-entropy} = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} \mathbf{y}_{ij} \log \hat{\mathbf{y}}_{ij}
$$
where $n$ is the number of samples in your batch, $m$ is the number of classes, $\mathbf{y}_i$ is the one-hot target for example $i$, $\mathbf{\hat{y}}_i$ is the predicted probability distribution, and $\mathbf{y}_{ij}$ refers to the $j$-th element of this array.
In PyTorch:
def categorical_cross_entropy(y_pred, y_true):
y_pred = torch.clamp(y_pred, 1e-9, 1 - 1e-9)
return -(y_true * torch.log(y_pred)).sum(dim=1).mean()
You can then use categorical_cross_entropy just as you would NLLLoss in the training of a model. The reason that we have the torch.clamp line is to ensure that we have no zero elements, which will cause torch.log to produce nan or inf.
One difference you'll have to make in your code is that this version expects a one-hot target rather than an integer target. You can easily convert your current target list like so:
one_hot_targets = torch.eye(NUM_CLASSES)[targets]
where targets is a torch.tensor with integer values and NUM_CLASSES is the number of output classes that you have. |
H: What does this symbol means, what operator is it?
I am confused about the $E_{x\sim P_{data}(x)}$; what does $E$ mean here? I cannot find an appropriate answer on the internet, and hence I am trying Data Science Stack Exchange. Please help.
AI: $\mathbb{E}$ means expected value.
The subscript is there to clarify over which random variable (and under which distribution) the expected value is taken.
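Concretely, for some function $f$ of the random variable $x$ drawn from the data distribution:
$$
\mathbb{E}_{x\sim P_{data}(x)}\big[f(x)\big] = \sum_{x} P_{data}(x)\, f(x) \quad \text{(discrete case)} \qquad \text{or} \qquad \int f(x)\, p_{data}(x)\, dx \quad \text{(continuous case)},
$$
i.e. the average value of $f(x)$ when $x$ is sampled from the data distribution $P_{data}$.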
H: Feature selection method explanation
In the context of feature relevance, I am trying to understand the meaning of the correlation method for feature selection. Can somebody please explain: if the following correlation coefficient results arise, should I select that feature? The rule is to select the features for which corrcoeff values are greater than 0.5. Please correct me if wrong. The way I am calculating is using Matlab's corrcoeff(target,feature), where target and feature are vectors.
Case1: corrcoeff returns NaN values --
Nan Nan
Nan 1
Should the feature be selected since the value is greater than 0.5?
Case2: corrcoeff returns 0 values
0 0
0 1
In this case, I should reject the feature.
Case3:
-0.3 0
0 -0.3
Negatively correlated but absolute values less than 0.5, so reject the feature
Case4: What if there is no linear relationship at all in which case corrcoeff will not work. How do I know if there is no linear relationship and in that case how to do feature selection; is there any other function or technique?
AI: The idea is quite simple: If you find features that are correlated with your target variable but uncorrelated with each other, it will give you information with which you can predict your target variable.
The threshold of 0.5 is chosen by a person and should be changed depending on your data. For example, if you find 10 features that are uncorrelated with each other but all have a 0.3 absolute correlation coefficient with the S&P 500, then you have found the stock market equivalent of the Holy Grail.
Also, as you already mentioned, the correlation coefficient only captures linear correlations. This could cause you to exclude strongly related features just because they have a more complex relationship to your target variable.
Therefore, I would strongly advise against this method. Instead, I recommend plotting the data in pairplots. |
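If you are open to doing the exploration step in Python, here is a quick sketch of both ideas: pairwise plots plus a rank correlation that also captures monotonic non-linear relationships (the toy data is made up):
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.stats import spearmanr
df = pd.DataFrame({"feature": np.random.rand(100)})
df["target"] = df["feature"] ** 3 + 0.05 * np.random.rand(100)  # non-linear but monotonic relation
print(np.corrcoef(df["feature"], df["target"])[0, 1])  # Pearson: linear association only
rho, p = spearmanr(df["feature"], df["target"])
print(rho)  # Spearman: any monotonic association
sns.pairplot(df)  # visual check of the relationships
(MATLAB's corr function also supports Spearman correlation, if you prefer to stay there.)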
H: Getting a ValueError from train_test_split
I'm working on this dataset. I'm trying to select features using Random Forest. This is the relevant code:
x_train, y_train, x_test, y_test = train_test_split(train, test_size = 0.3)
sel = SelectFromModel(RandomForestClassifier(n_estimators=100))
sel.fit(x_train, y_train)
sel.get_support()
selected_feat = x_train.columns[sel.get_support()]
The code x_train, y_train, x_test, y_test = train_test_split(train, test_size = 0.3) throws the following ValueError:
Traceback (most recent call last):
File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 25, in <module>
x_train, y_train, x_test, y_test = train_test_split(train, test_size = 0.3)
ValueError: not enough values to unpack (expected 4, got 2)
I don't know what it means by not enough values. The dataset has a big dimension.
AI: This is the correct way to call the train_test_split function:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
Split your train into X and y and try again! |
H: In the context of Deep Learning, what is training warmup steps
I found the term "training warmup steps" in some of the papers. What exactly does this term mean? Has it got anything to do with "learning rate"? If so, how does it affect it?
AI: This usually means that you use a very low learning rate for a set number of training steps (warmup steps). After your warmup steps you use your "regular" learning rate or learning rate scheduler. You can also gradually increase your learning rate over the number of warmup steps.
As far as I know, this has the benefit of slowly starting to tune things like attention mechanisms in your network. |
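As a sketch, a linear warmup followed by linear decay can be written as a simple function of the step and plugged into, for example, PyTorch's LambdaLR (the model, optimizer and step counts below are just placeholders):
import torch
warmup_steps, total_steps = 1000, 10000
def lr_scale(step):
    # scale factor applied to the base learning rate
    if step < warmup_steps:
        return step / max(1, warmup_steps)  # ramp up linearly from 0 to 1
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))  # then decay to 0
model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_scale)
# in the training loop: optimizer.step() then scheduler.step()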
H: Data driven project for a self-catering accommodation business
I want to apologise in advance for my ignorance, but I'm hitting my head against a wall here as I'm not sure how to proceed with my project and I've got no experience in that field, even though I'm doing a masters in Data Science (don't slag me too much pls)
So I've started a project with this small business that offers self-catering accommodation like Airbnb, and what they want is to increase their occupancy. Basically they have properties which perform well and some that don't, and they want to know why. I've got a list of variables but no clue how to use any software other than Weka, and I am not sure if it will be appropriate. I am also clueless about how to proceed with the analysis, so I would be grateful if you have any ideas. I've added a screenshot of the variables to give you an idea.
My university supervisor is not very helpful so I've decided to reach out for help here and any tips will be much appreciated.
Would it be more helpful to provide you with the excel sheet? I am not sure how to upload it
AI: I would start with a simple decision tree regression with the occupancy rate as target value. By visualizing the tree after training you can see which features impact the occupancy rate. You can do this in Weka with M5P regression: http://weka.sourceforge.net/doc.stable/weka/classifiers/trees/M5P.html |
H: How does Decision Tree with Gini Impurity Calculate Root Node?
I couldn't figure out how it selected the root node with <= 7.5 and why its Gini impurity is 0.45; when I tried to calculate it manually, the weighted Gini impurity I got for it was 0.27.
Can anyone explain me how the calculation is done here just for the Root Node?
Here's a small dataset that I've generated,
import pandas as pd
import numpy as np
a = [5, 6, 7, 8, 9]
a1 = [1, 0, 1, 0, 0]
df = pd.DataFrame(np.c_[a, a1], columns=['val','target'])
val target
0 5 1
1 6 0
2 7 1
3 8 0
4 9 0
Here my code,
from sklearn.tree import DecisionTreeClassifier
from IPython.display import Image
import pydot
dt = DecisionTreeClassifier()
dt.fit(df.val.to_frame(), df.target.to_frame())
data_dot = export_graphviz(dt, out_file=None, class_names=['0', '1'])
graph= pydot.graph_from_dot_data(data_dot)
Image(graph[0].create_png())
AI: From ISLR:
... we consider all predictors $X_1$, . . . , $X_p$, and all
possible values of the cutpoint s for each of the predictors, and then
choose the predictor and cutpoint such that the resulting tree has the
lowest RSS ...
Since it's a classification problem, the best split is chosen by maximizing the Gini Gain, which is calculated by subtracting the weighted impurities of the branches from the original Gini impurity.
For c total classes, where p(i) is the probability of picking a data point of class i, the Gini impurity is calculated as:
\begin{equation}
G = \sum_{i=1}^{c} [ p(i) * (1 − p(i)) ]
\end{equation}
1. Gini Impurity
Here, c = 2 , P(0) = 3/5 and P(1) = 2/5
G = [P(0) * (1 - P(0))] + [P(1) * (1 - P(1))]
G = [3/5 * (1 - 3/5)] + [2/5 * (1 - 2/5)] = 12/25
G = 0.48
2. Gini Gain
Now, let's determine the quality of each split by weighting the impurity of each branch. This value - the Gini Gain - is used to pick the best split in a decision tree.
In layman's terms, Gini Gain = original Gini impurity - weighted Gini impurity. So, the higher the Gini Gain, the better the split.
Split at 6.5:
Gini Impurity G_left = [1/2 * (1 - 1/2)] + [1/2 * (1 - 1/2)] = 0.50
Gini Impurity G_right = [2/3 * (1 - 2/3)] + [1/3 * (1 - 1/3)] = 0.444
Weighted Gini = (2/5 * 0.50) + (3/5 * 0.444) = 0.47
(the left node {5, 6} holds 2 of the 5 samples and the right node {7, 8, 9} holds 3, hence the weights 2/5 and 3/5)
Gini Gain = 0.48 - 0.47 = 0.01
Split at 7.5:
Gini Impurity G_left = [2/3 * (1 - 2/3)] + [1/3 * (1 - 1/3)] = 0.444
Gini Impurity G_right = [2/2 * (1 - 2/2)] = 0
Weighted Gini = (3/5 * 0.444) + (2/5 * 0) = 0.27
Gini Gain = 0.48 - 0.27 = 0.21
Split at 8.5:
Gini Impurity G_left = [2/4 * (1 - 2/4)] + [2/4 * (1 - 2/4)] = 0.500
Gini Impurity G_right = [1/1 * (1 - 1/1)] = 0
Weighted Gini = (4/5 * 0.5) + (1/5 * 0) = 0.40
Gini Gain = 0.48 - 0.40 = 0.08
So, 7.5 will be chosen as the best split because it has the highest Gini Gain.
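A quick sketch to verify these numbers programmatically, using the toy dataset from the question:
import numpy as np

def gini(labels):
    # Gini impurity of an array of class labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * (1 - p))

vals = np.array([5, 6, 7, 8, 9])
target = np.array([1, 0, 1, 0, 0])
parent = gini(target)  # 0.48

for threshold in [5.5, 6.5, 7.5, 8.5]:
    left, right = target[vals <= threshold], target[vals > threshold]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(target)
    print(threshold, round(parent - weighted, 3))  # Gini Gain per candidate split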
H: Why could my DDQN get significantly worse after beating the game repeatedly?
I've been trying to train a DDQN to play OpenAI Gym's CartPole-v1, but found that although it starts off well and starts getting full score (500) repeatedly (at around 600 episodes in the pic below), it then seems to go off the rails and do worse the more it plays.
I'm pretty new to ML so I'm not really sure what could cause this so I'm not sure how to start debugging (I've tried tweaking some of the hyper-parameters, but nothing seems to stop this trend).
If it helps, here's the (probably) relevant parts of my agent:
def _build_model(self):
model = Sequential()
model.add(Dense(24, input_dim=self.state_size, activation="relu"))
model.add(Dense(24, activation="relu"))
model.add(Dense(self.action_size, activation="linear"))
model.compile(optimizer=Adam(lr=self.learning_rate), loss="mse")
return model
def get_action(self, state):
# Use random exploration for the current rate.
if np.random.rand() < self.epsilon:
return random.randrange(self.action_size)
# Otherwise use the model to predict the rewards and select the max.
q_values = self.model.predict(state)
return np.argmax(q_values[0])
def replay(self, batch_size):
if len(agent.memory) < minibatch_size:
return
# Decay the exploration rate.
self.epsilon *= self.epsilon_decay
self.epsilon = max(self.epsilon_min, self.epsilon)
minibatch = random.sample(self.memory, minibatch_size)
state_batch, q_values_batch = [], []
for state, action, reward, next_state, done in minibatch:
# Get predictions for all actions for the current state.
q_values = self.model.predict(state)
# If we're not done, add on the future predicted reward at the discounted rate.
if done:
q_values[0][action] = reward
else:
f = self.target_model.predict(next_state)
future_reward = max(self.target_model.predict(next_state)[0])
q_values[0][action] = reward + self.gamma * future_reward
state_batch.append(state[0])
q_values_batch.append(q_values[0])
# Re-fit the model to move it closer to this newly calculated reward.
self.model.fit(np.array(state_batch), np.array(q_values_batch), batch_size=batch_size, epochs=1, verbose=0)
self.update_weights()
def update_weights(self):
weights = self.model.get_weights()
target_weights = self.target_model.get_weights()
for i in range(len(target_weights)):
target_weights[i] = weights[i] * self.tau + target_weights[i] * (1 - self.tau)
self.target_model.set_weights(target_weights)
And the full notebook is here.
AI: This is called "catastrophic forgetting" and can be a serious problem in many RL scenarios.
If you trained a neural network to recognise cats and dogs and did the following:
Train it for many epochs on a full dataset until you got a high accuracy.
Continue to train it, but remove all the cat pictures.
Then in a relatively short space of time, the NN would start to lose accuracy. It would forget what a cat looks like. It would learn that its task was to switch the dog prediction as high as possible, just because on average everything in the training population was a dog.
Something very similar happens in your DQN experience replay memory. Once it gets good at a task, it may only experience success. Eventually, only successful examples are in its memory. The NN forgets what failure looks like (what the states are, and what it should predict for their values), and predicts high values for everything.
When something bad happens and the NN's high predicted value is completely wrong, the error can be high, and the NN may have incorrectly "linked" its state representations so that it cannot distinguish which parts of the feature space are the cause of this. This creates odd effects in terms of what it learns about values of all states. Often the NN will behave incorrectly for a few episodes but then re-learn optimal behaviour. But it is also possible that it completely breaks and never recovers.
There is lots of active research into catastrophic forgetting and I suggest you search that term to find out some of the many types of mitigation you could use.
For Cartpole, I found a very simple hack made the learning very stable. Simply keep aside some percentage of replay memory stocked with the initial poor performing random exploration. Reserving say 10% to this long term memory is enough to make learning in Cartpole rock solid, as the NN always has a few examples of what not to do. The idea unfortunately does not scale well to more complex environments, but it is a nice demonstration. For a more sophisticated look at similar solutions you could see the paper "The importance of experience replay database composition in deep reinforcement learning" |
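A minimal sketch of that hack; the class and parameter names are illustrative and not taken from the notebook linked in the question:
import random

class ReservedReplayMemory:
    # Keeps the earliest (mostly random, poorly performing) transitions forever
    # and mixes a small fraction of them into every sampled minibatch.
    def __init__(self, capacity=10000, reserved_fraction=0.1):
        self.reserved_size = int(capacity * reserved_fraction)
        self.reserved = []                         # never overwritten
        self.recent = []                           # normal FIFO memory
        self.recent_capacity = capacity - self.reserved_size

    def add(self, transition):
        if len(self.reserved) < self.reserved_size:
            self.reserved.append(transition)
        else:
            self.recent.append(transition)
            if len(self.recent) > self.recent_capacity:
                self.recent.pop(0)

    def sample(self, batch_size):
        n_reserved = max(1, int(batch_size * 0.1))
        n_recent = batch_size - n_reserved
        return (random.sample(self.reserved, min(n_reserved, len(self.reserved)))
                + random.sample(self.recent, min(n_recent, len(self.recent))))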
H: Algorithm or function or model that encourages clustered classfication?
I have a soft classfication problem, i.e., the correct label for a certain instance is not just one class with 100% probability, but rather bunch of classes with probabilities that sum up to one.
What I know as apriori information (I know it because I understand the underlying physical phenomenon) that the classes are close together, like a cluster. E.g, class one and five and eight always come together cause they're adjacent (PS: I can create adjacency matrix if required).
What do I want?
I want some way to tell the NN about this fact. Currently it is vanilla classfication neural net. Any suggestions or readings or guidannce are appreciated.
AI: Why not create another set of classes which corresponds to the groups you want? This works if you care more about groups; later on you can subsequently classify within groups. Another idea would be to devise a loss function which penalizes within-group misclassification less and out-of-group misclassification more, but this involves going into the implementation details of your NN. As an idea, it would involve something like multiplying the softmax output with a misclassification cost matrix, as sketched below.
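A sketch of that cost-matrix idea, assuming a Keras/TensorFlow model with a softmax output and soft labels; the cluster used to fill the matrix (classes 0, 4 and 7) is purely illustrative:
import numpy as np
import tensorflow as tf

n_classes = 10
# Low cost for confusions inside a cluster, full cost otherwise.
cost_matrix = np.ones((n_classes, n_classes), dtype="float32")
np.fill_diagonal(cost_matrix, 0.0)
cluster = [0, 4, 7]                      # classes known to be adjacent
for i in cluster:
    for j in cluster:
        if i != j:
            cost_matrix[i, j] = 0.2

def cost_sensitive_loss(y_true, y_pred):
    # y_true: soft labels (batch, n_classes); y_pred: softmax probabilities.
    # Expected misclassification cost given the true label distribution.
    expected_cost = tf.matmul(y_true, tf.constant(cost_matrix))
    return tf.reduce_sum(expected_cost * y_pred, axis=-1)

# model.compile(optimizer="adam", loss=cost_sensitive_loss)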
H: Depending samples in ad ranking and click rate prediction
I am struggling with the following problem:
Suppose we fit a machine learning model to model advertisers click rates. I used a Logistic Regression approach using a one-hot/dummy encoding.
We have two advertisers A and B with a click rate of 10% and 9% respectively. Hence, we would rank A before B and would show users ads from A more often, say 90% of all times. Hence our overall click rate would be close to 10%.
Now we introduce a new feature which says a user either clicked on the advertiser before or not. For advertiser A, we observe a click rate of 8% for users that have not clicked before and a rate of 30% for users that have at least clicked once on an ad of the advertiser. For advertiser B we see no difference in the click rate for this feature.
So when it now comes to ranking the ads and we get a "fresh" user that has not clicked on either advertiser, we would rank B above A. Since fresh users are obviously more common, we would mostly show users ads from B, say 90% of the time.
As a consequence, our overall click rate would drop from almost 10% to a little over 9%. The machine learning model does everything correctly, since we did not introduce any information that the two states of this new feature are dependent over time.
From an intuitive approach I would still rank advertiser A over B for these fresh users since it's not clear if they're going to click.
Any ideas how to tackle this problem to get a ranking that in the end maximizes the click rate?
AI: Let $X$ represent the event "a user clicks an ad" with advertiser X,
Let event $Y$ represent a new user ($\neg Y$ a returning user),
for any advertiser $X$ the probability of a user clicking an ad is:
$p(X) = p(X\wedge Y) + p(X\wedge \neg Y)$
$p(X) = p(X|Y)p(Y) + p(X| \neg Y)p(\neg Y)$
$p(X) = p(X|Y)p(Y) + p(X| \neg Y)(1 - p(Y))$
If it's possible to obtain the information "new user" (event $Y$) in the system, then obviously there should be two distinct rankings, one for $Y$ and one for $\neg Y$. Phrased in a ML context: $Y$ is a binary feature and the predicted best advertiser depends on it. In this scenario a different advertiser would be selected depending on whether the user is new or not, here A for returning users and B for new users.
If not possible, then the advertisers should be ranked according to their overall probability $p(X)$, which depends on the probability of a new user $Y$. Based only on your initial values 10% vs. 9%, this means selecting advertiser $A$ is optimal overall.
To illustrate how this works we can calculate the detail based on the values provided. First we calculate $p(Y)$:
$p(X) = p(X|Y)p(Y) + p(X| \neg Y) - p(X| \neg Y)p(Y)$
$p(X)-p(X| \neg Y) = (p(X|Y)- p(X| \neg Y))p(Y) $
$(p(X)-p(X| \neg Y)) / (p(X|Y)- p(X| \neg Y)) = p(Y) $
We have $p(A)=0.1$, $p(A|Y) = 0.08$ and $p(A| \neg Y) = 0.3$, so we can derive $p(Y)=0.91$.
What happens in this example is that the high value $p(A| \neg Y) = 0.3$ makes up for
the low $p(A|Y) = 0.08$ despite $p(Y)=0.91$:
A: $0.91*0.08+0.09*0.3 = 0.1$
B: $0.91*0.09+0.09*0.09 = 0.09$
Of course this would be different if $p(A| \neg Y)$ was 0.15 instead of 0.3 or if $p(B|Y)$ was 0.11 instead of 0.09, etc.
[edit based on comments by OP]
Indeed it seems that a more accurate representation of the problem would take into account the probability that a user clicks again (and again, and again...) on an ad.
This reminds me of classical probability exercises such as the Russian roulette game, the tree of possible outcomes is somewhat similar:
user doesn't click on the ad at all: $n=0$
user clicks on the ad at least once
user never clicks again: $n=1$
user clicks on the ad at least twice
user never clicks again: $n=2$
user clicks on the ad at least 3 times
user never clicks again: $n=3$
.......
So the goal would be to represent as accurately as possible the random variable $n$ which represents how many times a user clicks on an ad, and to use the expected value of $n$ (that is, how many times a user clicks on average) to rank the advertisers.
I'm guessing that this can get pretty sophisticated depending on how we choose to represent the problem. I'm probably not able to do anything sophisticated so I'll stick to a simple representation where I assume that there are two probabilities involved:
initial probability $F=p(n\geq 1)$ for a new user to click
probability for a user to click again after having clicked $m$ times: $G=p(n\geq m+1|n\geq m)$ for $m>0$.
This representation has the advantage to be very close to the original probabilities in the question: new user clicks and returning user clicks.
Notice that the model assumes that the probability to click again is a constant, which is unlikely in reality. Also the model doesn't take into account any other features, such as for how long a user hasn't clicked. I'll just assume that one can estimate these probabilities from a training set for the sake of simplicity.
Now let's calculate the expected value [fair warning, I might get it wrong!]
$p(n=0) = 1 - F$
$p(n=1) = F \times (1-G)$
$p(n=2) = F \times G \times (1-G)$
$p(n=3) = F \times G \times G \times (1-G)$
...
$p(n=i) = F \times G^{i-1} \times (1-G)$
Therefore the expected value is:
$\sum_{i\geq 0} (p(n=i) \times i) = \sum_{i\geq 1} (F \times G^{i-1} \times (1-G) \times i)$, which is a standard geometric-series sum and simplifies to $\frac{F}{1-G}$.
[Ok so normally now we can plug in the values for advertisers A and B. The problem is that I just calculated the result for the first few values and it doesn't really work... I probably made a mistake somewhere, or maybe the values don't represent the same thing... Anyway I hope this gives you the idea, maybe I'll give it another try later] |
H: One hot encoding with too many features (~ 10,000)
I am building a model to predict time off and sick leave for a specific employee.
Each of the employees has one row per day from 01/01/2013 to 31/12/2018 in the dataset flagged with 0 or 1 (if that day was taken as a time off or sick day). I am using information like location, age, job position, etc.
I was thinking of using the employee id as a feature to train the model. Some employees have obvious patterns, e.g., taking Monday off for several consecutive weeks. On the other hand, I feel like I do not have enough information to have one model per employee, especially if they have been hired recently.
After researching a bit, seems that the only encoding that would make sense for employee id would be 'One Hot Encoding'. However, this would generate up to 10,000 features which doesn't seem very optimal.
Do you have any ideas that can help me build this model in a better way?
Thanks in advance
AI: I think you are on the right track. What you are looking for is what is known as fixed effects in econometrics/statistics. You have a model in which features X determine outcome y (so y = bX + u). On top of that you have unobserved heterogeneity (each individual employee) in your model. For each employee, the relation y <-> X may be a little different (fixed effect per employee), so that your model becomes y = b1X + b2Z + u, where Z denotes employees.
See here for an overview and here for an R implementation.
You can solve this by adding one dummy/indicator/one-hot per employee (known as least squares dummy variable fixed effects). So technically speaking, each employee gets his/her own intercept in the model. Thus the Z above is a massive matrix of one-hot encoded employee labels.
There are solutions in Stata (areg) and R (felm) which "absorb" the massive dummy variable set, so that it does not bother you when looking at the regression coefficients. If you are not interested in statistical inference (but only in making predictions), you will not look at the coefficients anyway.
I don't know the details of your data (i.e. observations per employee), but from what you say, adding one dummy (aka one-hot feature) per employee would be the right way to go to control for unobserved heterogeneity of the employees.
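A minimal sketch of the least-squares-dummy-variable idea in Python, with illustrative column names (employee_id, age, is_monday, absent) that you would swap for your own; a sparse one-hot encoding keeps the ~10,000 dummies manageable:
from scipy.sparse import csr_matrix, hstack
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

# df has one row per employee per day; 'absent' is the 0/1 target
X_other = csr_matrix(df[["age", "is_monday"]].values)          # regular features
encoder = OneHotEncoder(handle_unknown="ignore")
X_employee = encoder.fit_transform(df[["employee_id"]])        # ~10,000 sparse dummies

X = hstack([X_other, X_employee])                              # sparse design matrix
model = LogisticRegression(max_iter=1000).fit(X, df["absent"])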
H: DQN - target values vs action values?
I'm trying to understand the difference between target-values and action-values in Deep Q Networks.
From what I understand, the action-value tries to approximate the reward of a given action (at some state). The target-value is also an approximation of the reward. How are they different?
Reference
- https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf
AI: Action value and target value in DQN refer to the same thing in terms of what they measure, but are obtained and used in different ways.
An action value does not approximate the reward of a given action, but represents the related concept of expected return - the expected discounted sum of future rewards when taking action $a$ in state $s$. You see it very often associated with the action value function $q(s,a)$.
Action values are not inherently approximated, they are more conceptual than that. However, in practice in Reinforcement Learning (RL), the action value function must be learned, and some form of approximation used based on observed data. When an action value function is approximated by a neural network during learning you may see it written $\hat{q}(s,a,\theta)$ which makes explicit that the NN is approximating some "true" function that you don't know, and that it is parametrised by $\theta$, the neural network weights and biases.
During learning, the agent takes actions and observes resulting states and rewards. It must use these observations to improve its estimates of values. To do so in DQN, the agent constructs a temporal difference (TD) target - for single-step Q-learning this is $G_{t:t+1} = r_{t+1} + \gamma\text{max}_{a'}\hat{q}(s_{t+1},a',\theta)$. This is the "target value" and there are multiple ways to construct it, giving rise to variants of RL algorithms.
The value $G_{t:t+1}$ is also an estimate of expected return. Technically it is a sampled estimate, which may vary and may be biased (due to starting conditions of the neural network). However, it includes some real experience, so may be used to improve the neural network through training. You can use it like ground truth associated with the state and action, to train the neural network.
The TD target or "target value" gets its name because by updating a Q table or training a NN with it as a ground truth, the estimator will output values in future closer to the supplied value. The estimator "gets closer to the target". |
H: Is it okay to use training data for validifying the trained model?
Currently, I have trained my model through 5-fold cross validation with a very small sample (n=100).
I used the whole data set to train and got quite low performance in terms of accuracy, which is a bit higher than 70%.
However, if I put the data which was used for training back into the trained model to validate, it gives me higher accuracy (80%).
So, my question is: is it okay to say that I have verified my trained model using the training set and got 80% accuracy? Or should I stick with the 70% accuracy that I received from 5-fold cross validation?
AI: Scoring your model using the training data is not a best practice. The reason is that you have already used this data to develop the "guess" at an accurate model.
This is akin to a guessing game: 'Pick a random number between 1 and 10, 20 times - and here's a list of the last 10 numbers I picked to get you started.' You pick 20 new numbers and get 5 correct (25%). What you are suggesting is that instead of reporting the 25% accuracy, you would throw the first 10 back into the mix and report 50% accuracy, because now you've gotten 15 out of 30 right.
So there are two different things here. The MSE of your initial model against the data you gave it provides an understanding of how well the model reproduces the history it has already seen, while the MSE (or other error metric) on the 'test' data shows how well you can predict the future.
H: Using strong predictor in Model training?
I am trying to build a Disease predictor based on symptoms. I am using data scraped from Symcat website. After sampling the data we have symptoms to disease mapped for training purpose.Data looks like these: X --> S1,S2,S4... Y --> D1,D2,D5. The problem is some symptoms are strong predictors for a particular disease. Say S1 is a strong predictor for D1. So basically model should predict higher probability score for D1 compared to others due to presence of S1. How to imbibe the information that S1 is strong predictor for D1 so that it can be used in model training
I tried oversampling those strong predictor-to-disease combinations, but I don't think that is right as I have a multi-label data set; the results are not reflecting it either.
Dataset looks like this:
![Symptoms D1 D2 D3
S1,S2,S4 0 1 1
S3,S4,S5 1 0 0
S1,S4,S6 1 0 1]1
Symptoms are features. We are using symptoms definition embeddings. Y is 1 hot vector.
AI: I can think of two ways of inducing priors into a supervised machine learning training process.
1) Using a Bayesian framework(PYMC for example) where you can define you random variables with whatever prior you like.
2) Engineering features that describe the needed relation. For example, in your case I would create a new Boolean feature named 'is_strong_predictor_by_prior', and would assign true if x=S1 and y=D1 and false otherwise. This new feature can hold multiple priors (not only the S1>>D1 relation).
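A small sketch of the second idea, assuming the symptoms come as a list per record and a hypothetical dictionary of known strong symptom-to-disease pairs:
import pandas as pd

strong_priors = {"S1": "D1", "S4": "D3"}     # hypothetical prior knowledge

def add_prior_features(df):
    # df has a 'symptoms' column containing lists like ['S1', 'S2', 'S4']
    for symptom, disease in strong_priors.items():
        col = "is_strong_predictor_{}_{}".format(symptom, disease)
        df[col] = df["symptoms"].apply(lambda s: int(symptom in s))
    return df

df = pd.DataFrame({"symptoms": [["S1", "S2", "S4"], ["S3", "S4", "S5"]]})
df = add_prior_features(df)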
H: What is the difference between multiply and dot functions that is used to merge layer in Keras?
I want to merge two CNN deep learning model using Keras and would like to know what is the difference multiply and dot functions that is used to merge layer?
keras.layers.multiply(inputs)
keras.layers.dot(inputs, axes, normalize=False)
AI: The multiply() function performs element-wise multiplication. For example, consider a 1D CNN for simplicity: if you pass two inputs of batch size b with a tensor length of 5, the output will be of shape (b, 5), as it is element-wise multiplication.
Let us assume two tensors of length 5 as follows: [1,2,3,4,5] and [6,7,8,9,10]; the result will be [6,14,24,36,50], as it's just element-wise multiplication.
In the case of dot(), it takes the dot product, which for 1D is mathematically defined as a.b = sum(a_i * b_i), where i ranges from 0 to n-1 and n is the number of elements in vectors a and b. (For 2-D, you can consider it as matrix multiplication.) So the result will be of shape (b, 1), where b is the batch size. In the case of our example, the dot product of [1,2,3,4,5] and [6,7,8,9,10] is [130].
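A small sketch illustrating the difference on concrete inputs, using the functional wrappers named in the question:
import numpy as np
from keras.layers import Input, multiply, dot
from keras.models import Model

a = Input(shape=(5,))
b = Input(shape=(5,))
elementwise = multiply([a, b])        # output shape (batch, 5)
dot_product = dot([a, b], axes=1)     # output shape (batch, 1)

model = Model([a, b], [elementwise, dot_product])
x1 = np.array([[1, 2, 3, 4, 5]], dtype="float32")
x2 = np.array([[6, 7, 8, 9, 10]], dtype="float32")
mult_out, dot_out = model.predict([x1, x2])
print(mult_out)   # [[ 6. 14. 24. 36. 50.]]
print(dot_out)    # [[130.]]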
H: what do we actually study in machine learning?
I just started learning machine learning and have learned a few basic algorithms, and there is one stupid doubt in my mind that I am unable to find the answer to. What do we actually study in machine learning algorithms that lets them solve different kinds of problems based on the data given to them?
Take an example of linear regression:
We give data.
We tell the machine how to calculate the error.
Using gradient descent, we are also telling the machine how to optimize, and based on all of the above some optimized parameters come out.
Why is there actually a "learning" word in this algorithm, although we are telling it how to perform the steps and have provided all the mathematics?
AI: Let me try to explain you with an example,
Consider we have a dataset with features X1, X2 and Y is the target value.
Now in machine learning our target is to get an equation which calculates a value as close as possible to the target value.
Consider a simple equation,
f(X) = w1X1 + w2X2 + b
and here our goal is to identify the best values of the weights, i.e. w1 and w2, such that f(X) ~ Y.
Generally, f(X) initially gives you a very different value (as the weights are not tuned), but gradually, with each iteration (epoch), f(X) becomes better (the weights get tuned); that is called "parameter learning".
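A minimal numpy sketch of that parameter-learning loop for the equation above (made-up data, plain gradient descent on the mean squared error):
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                          # features X1, X2
y = 3 * X[:, 0] + 2 * X[:, 1] + 1                      # "true" relation the machine must learn

w = np.zeros(2)    # w1, w2 start untuned
b = 0.0
lr = 0.1

for epoch in range(200):
    pred = X @ w + b                                   # f(X) = w1*X1 + w2*X2 + b
    error = pred - y
    w -= lr * (X.T @ error) / len(y)                   # gradient step on the weights
    b -= lr * error.mean()

print(w, b)   # close to [3, 2] and 1: the parameters were "learned" from the data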
H: Understanding LSTM Training and Validation Graph and their metrics (LSTM Keras)
I have trained an RNN/LSTM model. I would like to interpret my model results after plotting the graph of loss and accuracy (between the training and validation data sets).
My objective is to classify the labels (either 0 or 1) if I provide only a partial input to the model. I have performed the training in the same way.
Train_Validate_Test_Split
Train 80% ; Validate 10 % ; Test 10%
X_train_shape : (243, 100, 5)
Y_train_shape : (243,)
X_validate_shape : (31, 100, 5)
Y_validate_shape : (31,)
X_test_shape : (28, 100, 5)
Y_test_shape : (28,)
Model Summary
Model Graph
Model Metrics
Question or Interpretation from the model results
Q1: What can I understand/interpret from the loss and accuracy graphs? How can I confirm whether the model trained properly on my data set or not?
Q2: Do the oscillations in both loss and accuracy have some effect on model training, or is that normal behavior? If it is not normal, how can I regularize my model to avoid the oscillations?
Q3: What can I interpret or understand from my metrics table? My Y_test accuracy is higher than the training and validation accuracy; what can I interpret from this behavior?
AI: From visually inspecting the graph, we see that the validation loss and accuracy have improved with each epoch - with the training loss and accuracy higher than those of validation. This indicates that accuracy has improved with training.
As suggested in another post, one potential solution is to calculate the exponential moving average of the validation loss to remove the oscillations and better determine the improvement in this metric.
If you are finding that the test accuracy is higher than that of training, this might suggest underfitting. This could imply that more training on your model is required, or has been over-regularized. |
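A quick sketch of that smoothing, assuming history is the object returned by model.fit:
import pandas as pd
import matplotlib.pyplot as plt

val_loss = pd.Series(history.history["val_loss"])    # per-epoch validation loss
smoothed = val_loss.ewm(alpha=0.3).mean()            # exponential moving average

plt.plot(val_loss, alpha=0.4, label="raw val_loss")
plt.plot(smoothed, label="EMA-smoothed val_loss")
plt.xlabel("epoch")
plt.legend()
plt.show()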
H: Clustering with geolocation (lat/long pairs) attributes
I am trying to cluster customer behavior based on where they shop given by lat/long pairs. I also have other numeric attributes such as volume, average amount spent, etc. I am considering using HDBSCAN to create clusters. However, I'm not sure whether to feed the dataframe directly to the clustering algorithm or whether I would need to scale/normalize the data.
Is it wise to scale the geolocation pairs? Or would important location information be lost?
Any help would be much appreciated.
https://stats.stackexchange.com/questions/89809/is-it-important-to-scale-data-before-clustering
This page explains a lot. However, in the answer by @Anony-Mousse, he mentions not to scale lat/long pairs. That's good but what about other continuous variables?
AI: Don't treat clustering algorithms as black boxes. If you don't understand the question, don't expect to understand the answer.
So before dumping the data and hoping that magically a desired results comes out, understand what you are doing...
Standardizing latitude/longitude is a horrible idea. These values are angles on a sphere. Linearly scaling these values breaks everything that these values mean. There are many valid transformations - even rotations can be good to get a desirable Mercator projection, for example. But standardizing them, I cannot imagine what this would be good for.
Mixing variables with different meaning rarely works well. It's not just the problem of scale. Scaling often helps as a heuristic to prevent one variable dominating another. It also has the nice property that it doesn't matter if your data were feet or yards. But the need to do so usually means that there is something wrong with your approach at a deeper level: that you apparently are trying really hard to compare apples and oranges... You'll get out some result. It's probably even interesting. But once you try to explain or act on it, you're back to square one: what does it mean if you scale your data this way, and why is that better than the infinitely many alternative ways, infinitely many of which lead to other results? |
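If you do decide to cluster on spatial proximity alone and then summarise the behavioural attributes per cluster, a possible sketch (assuming the hdbscan package, which accepts a haversine metric on 2-D radian coordinates, and coordinates given in degrees):
import numpy as np
import hdbscan

# coords: array of shape (n, 2) with [latitude, longitude] in degrees
coords_rad = np.radians(coords)

clusterer = hdbscan.HDBSCAN(min_cluster_size=15, metric="haversine")
labels = clusterer.fit_predict(coords_rad)     # haversine expects radians

# Volume, average spend, etc. can then be aggregated per cluster label
# instead of being mixed into a single distance measure.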
H: How to merge two CNN deep learning model using weighted sum and weighted product in Keras?
I am using Keras to create a deep learning model and I want to merge two CNNs by using weighted sum or weighted product.
How can I merge two CNNs using weighted sum and weighted product?
AI: I think the most elegant way is to write a layer that does this. For example for the case of the weighted sum:
class WeightedSum(Layer):
def __init__(self, a, **kwargs):
self.a = a # "weight" of the weighted sum
super(WeightedSum, self).__init__(**kwargs)
def call(self, model_outputs):
return self.a * model_outputs[0] + (1 - self.a) * model_outputs[1]
def compute_output_shape(self, input_shape):
return input_shape[0]
Suppose you have two models model1 and model2 with outputs out1 and out2 respectively. This layer simply performs the operation:
$$
out = a \cdot out_1 + (1-a) \cdot out_2
$$
You can compute a weighted product the same way, just change the call method.
Example
from keras.layers import Layer, Input, Dense
from keras.models import Model
import keras.backend as K
import tensorflow as tf
# Define the custom layer
class WeightedSum(Layer):
def __init__(self, a, **kwargs):
self.a = a
super(WeightedSum, self).__init__(**kwargs)
def call(self, model_outputs):
return self.a * model_outputs[0] + (1 - self.a) * model_outputs[1]
def compute_output_shape(self, input_shape):
return input_shape[0]
# Create model1
inp1 = Input((5,))
d1 = Dense(100)(inp1)
out1 = Dense(10)(d1)
model1 = Model(inp1, out1)
# Create model2
inp2 = Input((7,))
d2 = Dense(70)(inp2)
out2 = Dense(10)(d2)
model2 = Model(inp2, out2)
# Weighed sum of the two models' outputs with a = 0.1
out = WeightedSum(0.1)([model1.output, model2.output])
# Create the merged model
model = Model(inputs=[inp1, inp2], outputs=[out])
Let's check the summary:
>>> model.summary()
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) (None, 5) 0
__________________________________________________________________________________________________
input_6 (InputLayer) (None, 7) 0
__________________________________________________________________________________________________
dense_9 (Dense) (None, 100) 600 input_5[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 70) 560 input_6[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 10) 1010 dense_9[0][0]
__________________________________________________________________________________________________
dense_12 (Dense) (None, 10) 710 dense_11[0][0]
__________________________________________________________________________________________________
weighted_sum_10 (WeightedSum) (None, 10) 0 dense_10[0][0]
dense_12[0][0]
==================================================================================================
Total params: 2,880
Trainable params: 2,880
Non-trainable params: 0
__________________________________________________________________________________________________
Let's see if it works:
import numpy as np
a = np.random.random(size=(32, 5)) # input for model1 (batch size 32)
b = np.random.random(size=(32, 7)) # input for model2 (batch size 32)
pred = model.predict([a, b])
Let's see if it has the right shape:
>>> pred.shape
(32, 10)
Let's see if it's the correct thing:
# Generate model outputs manually:
o1 = model1.predict(a) # model1's output for array a
o2 = model2.predict(b) # model2's output for array b
# Compute their weighted sum manually:
o = 0.1 * o1 + 0.9 * o2
Now if we're correct, o should be equal to pred:
>>> np.array_equal(o, pred)
True |
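For the weighted-product half of the question, one possible sketch that reuses the setup above; it reads "weighted product" as $out = out_1^{a} \cdot out_2^{1-a}$, which assumes the outputs are positive (e.g. after a softmax or sigmoid):
class WeightedProduct(Layer):
    def __init__(self, a, **kwargs):
        self.a = a
        super(WeightedProduct, self).__init__(**kwargs)

    def call(self, model_outputs):
        # out1^a * out2^(1-a); adjust if your outputs can be negative
        return K.pow(model_outputs[0], self.a) * K.pow(model_outputs[1], 1 - self.a)

    def compute_output_shape(self, input_shape):
        return input_shape[0]

out_prod = WeightedProduct(0.1)([model1.output, model2.output])
model_prod = Model(inputs=[inp1, inp2], outputs=[out_prod])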
H: Machine Learning: Balanced training set but highly unbalanced prediction set? How to adjust?
I am trying to train a model to detect gender in a dataset of CEO speeches. Here are the datasets that I have:
Final Dataset: 20K CEO voices analyzed (around 95% male)
Testing dataset (?): 1K CEO voices analyzed from the final dataset, less unbalanced because I added more females on purpose (80% male)
Training dataset: 6K voices analyzed from audiobooks and TED talks (55% male).
For now, I have been trying different models by training and splitting dataset #3 (70%, 30%). I get good accuracy (95%) using this method. However, when I apply the trained model to dataset #2, I get an accuracy of 85%.
I am not sure what to do. Should I undersample women in the training dataset so that its distribution is more similar to the final dataset?
AI: Over- or undersampling should be your second choice. Currently, the best method to deal with class imbalances is to use the class_weight argument which sklearn and keras classifiers support (see for example the DecisionTreeClassifier).
On a general sidenote, I would recommend focusing on the f1-score, AUC, and the confusion matrix to evaluate the model's performance, as accuracy might be not as informative in this case. |
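A short sketch of both suggestions, assuming features X and binary labels y are already prepared:
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, test_size=0.3)

# class_weight="balanced" re-weights samples inversely to their class frequency
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

pred = clf.predict(X_val)
print(f1_score(y_val, pred))
print(roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))
print(confusion_matrix(y_val, pred))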
H: What is the use of @ between two matrices
I was looking into a repo, where I came across this snippet of code:
priors = x[None, :, :, None, :] @ self.route_weights[:, None, :, :, :]
It basically tried to incorporate weights of an input image matrix.
I am really not sure what is the use of @ out here and what are the advantages of using this. Is there any simple alternative too, like won't simple matrix multiplication work?
AI: According to here, the matmul function implements the semantics of the @ operator introduced in Python 3.5 following PEP465.
Hence, rather than writing np.matmul(a, b), we can simply use a @ b instead.
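A quick check that the two are equivalent, and an example of the broadcasting behaviour that makes @ convenient for stacked tensors like the one in the snippet above:
import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)
print(np.allclose(a @ b, np.matmul(a, b)))   # True
print(np.allclose(a @ b, np.dot(a, b)))      # True for plain 2-D arrays

# matmul/@ broadcast over the leading ("batch") dimensions:
c = np.random.rand(5, 1, 2, 3)
d = np.random.rand(1, 7, 3, 4)
print((c @ d).shape)                          # (5, 7, 2, 4)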
H: Which method of NLP is this?
I have been searching for 2 weeks and I got no where so far.
There is a list of diseases
Diabetes
Heart Transplant
Fingertip amputation
Injury by sharp tools
.
.
.
and My dataset is a list of medical text reports.
the training dataset has diseases that can be generated from each record
example that I made
This patient has suffered a cut while using his Carving Chisel and led
to losing the fingertip therefore we had to operate to sew
the tip.....
from this report we get these diseases
1- Injury by sharp tools
2- Fingertip amputation
3- Sewing injury
another report results may have 3 or less or more diseases
I have searched a lot
I found many examples about NLP classification
where a text will be classified into Sports, Politics, Culture, Science, etc.
I found NER where person names, locations, dates, etc can be extracted from a text.
But did not find anything for a single text could have multiple values (similar to my dataset)
I dont know where to start.
Could anyone please help me finding what is the name of this method of extracting list of issues from a text?
Edit
What else do I need to exclude the negations, if the report says
"This patient has stomach problem but not diabetes "
How can I make AI understand there is a negation (NOT) before diabetes so it should not be included?
So the result will be
stomach problem
as diabetes should be excluded because of the negation word
AI: You might want to look at multi-label classification. If you have a sufficient number of samples as your training data, you can build a model that can predict more than one label for a test sample. You can find more about the implementation at the sklearn page for the same here.
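A minimal scikit-learn sketch of multi-label classification on hypothetical data (TF-IDF features and one binary classifier per disease label):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

reports = ["patient suffered a cut and lost the fingertip ...",
           "patient reports a stomach problem but not diabetes ..."]
labels = [["Injury by sharp tools", "Fingertip amputation"],
          ["Stomach problem"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)        # (n_reports, n_diseases) binary indicator matrix

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(reports, Y)

pred = clf.predict(["a cut with a chisel led to losing the fingertip"])
print(mlb.inverse_transform(pred))   # set of diseases predicted for the new report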
H: Why does discriminiator accuracy falls to 0%, and is there a fix around this?
I am training a Vanilla-GAN(or original GAN 2016) on a pokemon dataset https://www.kaggle.com/kvpratama/pokemon-images-dataset, for few epochs the discriminator has 100% accuracy over the real examples, but as the epochs pass it falls to 0% around 4-5 epochs.
One interesting effect however is that when I increase my batch size from 32 to 64, this effect is seemed to have a delayed effect. i.e It kicks in around 400-500 epochs.
Please help.
AI: I figured out my answer a while back. So here it goes...
Ensure that your training data is normalized (see the sketch after this list). This was what was causing the discriminator's accuracy to fall.
You need to decrease your dropouts as it can cause heavy bias if done ineffectively.
Remove any unnecessary layers in your discriminator as they can cause overfitting.
Use batch normalization.
Introduce dropouts in your generator. |
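For the first point, a typical normalization sketch for 8-bit image data when the generator uses a tanh output (the variable name is illustrative):
import numpy as np

# images: uint8 array of shape (n, height, width, channels)
images = images.astype("float32")
images = (images - 127.5) / 127.5     # scale pixel values to [-1, 1]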
H: What it means for a neural network to memorize data
I've seen at many places that sometimes neural networks simply memorizes training data. What that means actually ?
Neural network consists of bunch of weights which gets trained and outputs based on input data. It'll output different thing for different input. Where does this memorization comes into play ?
AI: Memorization is the same as overfitting. The memory is implicitly represented by your weights. If your network does have enough parameters it will be able to memorize/overfit.
In order to understand what I mean by overfitting and memorization let us look at the polynomial regression
$$y_n=w_0+w_1x_n+w_2x_n^2+ \varepsilon.$$
We have three coefficients. If we only had three data points (which do not perfectly lie on a line) we could fit a quadratic regression equation without any error. Hence, the model would memorize the data by using three coefficients.
We would have a loss of zero, but as you know this result would also be very likely overfitting the model to the data. |
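A quick numeric illustration of that memorization, with three made-up points and numpy:
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 3.0, 7.0])            # three points that do not lie on a line

coeffs = np.polyfit(x, y, deg=2)         # three coefficients: w2, w1, w0
residuals = y - np.polyval(coeffs, x)
print(residuals)                          # ~[0, 0, 0]: the data has been "memorized"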
H: Do we need to tune same model differently for different window sizes in time series data classification?
I am currently working on a time series data classification problem using deep learning. As we all know, in time series we process the data sequentially, some time steps at a time, through the model; this chunk is called a window. We slide the window and the next window is our input again. Since I have just started working in the deep learning and time-series domain, my question is: if we tune some deep learning model for some window size, let's say 10, should the same tuned model be used to get similar accuracies for a different window (say 15) on the same data, or should the model be tuned again for the latter window?
Currently, if I use the same model for a different window, the accuracy decreases by 4-5% compared to the previous window. From this I believe that window size is again a hyperparameter which, when changed, requires the model to be tuned again.
So, what is the right thing in this case? Should the same model be used or should it be tuned again? I tried searching this online but I couldn't get any help on this. Help is appreciated as I am new to this domain.
AI: In my opinion it should be tuned again.
Reasoning:
Using a different window size is almost equivalent to using different features (as in non-time-series modeling). Tuning hyper-parameters is usually done after the features were selected (i.e. for a given set of features).
H: In the context of natural language processing, can anyone give a concrete example of True Positive, True Negative, False Positive, False Negative?
A Google post gives an interesting explanation of True Positive, True Negative, False Positive, and False Negative:
True Positive (TP): Reality: A wolf threatened. Shepherd said: "Wolf."
Outcome: Shepherd is a hero.
True Negative (TN): Reality: No wolf threatened. Shepherd said: "No
wolf." Outcome: Everyone is fine.
False Positive (FP): Reality: No wolf threatened. Shepherd said:
"Wolf." Outcome: Villagers are angry at shepherd for waking them up.
False Negative (FN): Reality: A wolf threatened. Shepherd said: "No
wolf." Outcome: The wolf ate all the sheep.
In the context of CV, the classifier predicts whether an image contains a cat:
True Positive (TP):
Reality: the image contains a cat.
classifier predicts: cat.
True Negative (TN):
Reality: the image does not contain a cat.
classifier predicts: no cat.
False Positive (FP):
Reality: the image does not contain a cat.
classifier predicts: cat.
False Negative (FN):
Reality: the image contains a cat.
classifier predicts: no cat.
Can anyone gives a concrete example of TP、TN、FP、FN like above, in the context of natural language processing?
AI: Imagine a hot news classifier.
True Positive (TP):
Reality: a piece of hot news.
classifier predicts: hot.
True Negative (TN):
Reality: not a piece of hot news.
classifier predicts: not hot.
False Positive (FP):
Reality: not a piece of hot news.
classifier predicts: hot.
False Negative (FN):
Reality: a piece of hot news.
classifier predicts: not hot. |
H: How does class_weight work in Decision Tree
The scikit-learn implementation of DecisionTreeClassifier has a parameter as class_weight.
As per documentation:
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.
and
The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
My understanding is that it should be used in case of imbalanced classes.
Question: How does the DT (classification) algorithm use this parameter while determining the ideal split for a given node? Does it consider some kind of "weighted" majority class instead of simple majority class in a region in the prediction space?
AI: When deciding on a split at a node, the algorithm basically calculates some metric (entropy or gini impurity) for the given node and for the two resulting left and right nodes after the split. Comparing them tells you how much the split would improve the result.
The statistics for the child nodes are weighted by the number of samples in the left and right node, respectively.
When you use sample_weight this adjusts the count and replaces it with the sum of the sample weights. class_weight assigns the same sample weight to every sample of a given class, according to the class weights you specify (or, with "balanced", inversely proportional to the class frequencies).
For example, the improvement in impurity is calculated as
$$ \frac{N_{parent}}{N_{total}} * (impurity_{parent} - \frac{N_{right}}{N_{parent}} * impurity_{right-child}
- \frac{N_{left}}{N_{parent}} * impurity_{left-child}) $$
Without class_weight or sample_weights, the $N$s are just counts. With class_weight you replace them with the corresponding weights.
The idea is the same for entropy, even though calculated differently.
source code |
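A small sketch of what the "balanced" weights look like, using the formula from the documentation quoted above:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

y = np.array([0] * 90 + [1] * 10)        # imbalanced labels
X = np.random.rand(100, 3)

# n_samples / (n_classes * np.bincount(y))
weights = len(y) / (2 * np.bincount(y))
print(weights)                            # [0.5555..., 5.0]

# Equivalent ways of passing the weights to the tree:
clf_balanced = DecisionTreeClassifier(class_weight="balanced").fit(X, y)
clf_manual = DecisionTreeClassifier(class_weight={0: weights[0], 1: weights[1]}).fit(X, y)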
H: What is sample and feature
I'm reading the Scikit-learn documentation and I can't understand what sample and feature mean in (n_samples, n_features).
Can anybody describe those by example?
AI: [[1,2,3,4], [1,2,2,3], [2,3,2,1]]
The data above has 4 features. We can give those features labels with a header; we'll just call them feature 1, feature 2, feature 3 and feature 4. For the first entry, feature 1 has a value of 1, feature 2 has a value of 2, and so on.
A sample is a single row (observation) taken from your dataset. [1,2,3,4] is a single sample of the dataset.
Whatever you are trying to do with Scikit-learn wants to know how many samples and features you have; my example has 3 samples and 4 features (or columns).
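The same example expressed as a numpy array, which is the shape scikit-learn expects:
import numpy as np

X = np.array([[1, 2, 3, 4],
              [1, 2, 2, 3],
              [2, 3, 2, 1]])

print(X.shape)   # (3, 4) -> n_samples = 3, n_features = 4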
H: Uneven dataset, how I should train the model if I can't load all pictures at once
I have a dataset of multiple classes (About 50). The dataset doesn't have the same number of pictures per class, some have 300, some have 1000, and some more, and I've seen that this ruined my accuracy on my model
First of all, the data is too big for me to store in the RAM, so I have to split it into parts (I take 300 pictures of each class at once), train the network on that data then repeat the process until I have no more photos left.
The question I have is: is it a problem if I do this (I guess it is, judging from my results)? For example, out of 100 classes of pictures, only 10 have images left near the end, and I train the model only on those 10, then on only 5, then on 3, etc. I did that, and after I went over the smallest classes the overall accuracy went up, but the accuracy on those smaller classes went down; in the end, from about 80% per class at the beginning, I got 0-2% on 48 classes and 99% on 2 classes.
How can I solve this 'unevenness' (class imbalance) so I won't have this problem anymore?
AI: From your tags, I see that you use keras. Keras offers you the class ImageDataGenerator which has the method flow_from_directory() (see here). This method loads the images in your training directory batch by batch from the hard drive and stores only the current batch in the RAM. This eliminates the bottleneck you currently face when loading the images.
To tackle the class imbalance, the recommended method is to use the class_weight argument of the keras classifiers. This argument assigns a weight to every class in your data, allowing you to give higher importance to images from the minority classes. This answer shows how you can calculate the class weights.
In the code below, I put that all together:
from collections import Counter
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Define constants - change them according to your requirements
BATCH_SIZE = 128
EPOCHS = 50
IMAGE_SIZE = 224
# Set up Image Data Generator
train_datagen = ImageDataGenerator(dtype=np.float16) # here you can also do some data augmentation
# Set up flow from directory
train_generator = train_datagen.flow_from_directory(directory="path/to/your/directory",
class_mode="categorical",
target_size=(IMAGE_SIZE, IMAGE_SIZE), # resize the images if required
batch_size=BATCH_SIZE)
# Calculate class weights
counter = Counter(train_generator.classes)
max_val = float(max(counter.values()))
class_weights = {class_id: max_val/num_images for class_id, num_images in counter.items()}
#
# Here you set up your model ...
#
# After compiling the model, you fit it to your data using fit_generator
model.fit_generator(train_generator,
steps_per_epoch=train_generator.n // BATCH_SIZE,
epochs=EPOCHS,
class_weight=class_weights, # use the class_weights as method parameter
verbose=1) |
H: Do RL agents learn the optimal "degree" of an action to take?
I have a game environment I want to train an RL model on. This environment has 2 fundamental actions that the agent can take; "Left" or "Right" (say, 0 or 1).
However, the actions "Left" or "Right" can be taken in a discrete number of "degrees". For example, I can take action "Left" with degree 70% , or take action "Right" with degree 16%.
Assuming a discrete action space between 0-100% for each "Left" or "Right", making the total action space a discrete size of 201 (0-200 in increments of 1), does an agent learn the optimal degree to take either "Left" or "Right" in any given state?
AI: If you are using a value-based method, like Q-learning in Deep Q Networks (DQN), then the "degree" concept has little meaning to the agent, and you are effectively training an agent to learn the best discrete action out of 200 actions.
Yes this could take a lot longer than learning a simple 2-action scenario, as the agent will not easily learn that the value of "Right 49%" is correlated to the value of "Right 52%" in any particular state.
This is a similar problem you might face in supervised learning if you wanted to move from classifying a dog vs cat image classifier to add bounding boxes for where the creature is. You don't add the bounding box as a large number of classifications - instead you add it as a regression, mapping form the image directly to some vector of real numbers.
There might be ways to address this using value-based methods, but the usual approach here would be to move to a continuous action space (maybe with -1 for 100% Right and +1 for 100% Left), and use a Policy Gradient method. There are a few different choices of Policy Gradient, but they all share the basic principle of modelling a policy function - which might be a probability distribution over action space which you need to sample from, or a specific action - converting from state to action.
The most basic Policy Gradient approach, which you could use as an introduction for your problem if it is simple enough, is REINFORCE. Algorithms like A3C, DDPG, PPO are based over this idea and can cope with more challenging environments. |
H: Why do trained RL agents still display stochastic "exploratory" behavior on testing data?
I am training a PPO2 RL model using stable baselines. One thing I found is that a trained agent will still display some stochastic behavior on test data, as shown by the predict method of PPO2; the deterministic flag should be set to True to see deterministic (un-stochastic) behavior.
I understand that when training an RL agent, there is some amount of random exploration so that an agent can learn the optimal value-policy network. However, I thought that once an agent was trained, and used to act (predict) on new test data, the model parameters would be used and no exploration would take place. Therefore, when testing, I thought the agents actions and subsequent rewards would be deterministic (once the test data is static).
Why is the trained agent exploring on test data / Why is the trained agent still stochastic when predicting test data? To what degree is the trained agent stochastic (will it follow its model predictions 90% of the time and guess the other 10%)?
AI: The PPO approach directly generates stochastic policies. Its output is some probability distribution function over the action space.
This is not the case for all RL algorithms, but is common for many Policy Gradient methods.
In some cases you may want this behaviour to continue. Paper-Scissors-Stone is a classic example of a game where a stochastic policy is optimal, and there are other examples from game theory.
Also, the agent may have learned on-policy (I'm not sure in case of PPO without looking it up), in which case the stochastic behaviour should match with expected value predictions. This is similar conceptually to using SARSA to generate an $\epsilon$-greedy policy. The value functions match following that policy, although if $\epsilon$ is low enough you may take a reasonable guess that a fully greedy policy is optimal.
To what degree is the trained agent stochastic (will it follow its model predictions 90% of the time and guess the other 10%)?
To the degree that the output of the policy is stochastic. It will always "follow its model predictions".
Switching deterministic on actually stops the agent from following the model, and typically will select the mode (highest probability density) of the action distribution.
Unlike SARSA, with Policy Gradient methods, there is not always access to a "greedy" policy that picks according to action values. So instead, your deterministic flag is likely picking greedily according to action probability.
In some cases (e.g. Actor-Critic) you may also have a value based estimator for V(s) or Q(s,a), and it could be used instead, but that is usually considered secondary to the policy function (in addition, using this on a large or continuous action space is very inefficient compared to a real-valued policy function)
Also note in some cases e.g. DDPG (which stands for Deep Deterministic Policy Gradients), a policy gradient method can use a deterministic policy with exploration added as a behaviour policy, making the algorithm off-policy. DDPG should not exhibit exploration during testing as PPO is for you. You may find DDPG behaves closer to your original expectations because of this. |
H: Statistical comparison of model performance when training and validation data is always the same
I have a labelled dataset of text in 4 languages (1000 samples per languages makes a total of 4000 samples). In one experiment I would like to assess the performance of a classification algorithm (neural network) on an unseen language. That means using the 3000 tweets from three languages as training data and 1000 samples from the fourth language as validation data:
training data:
- 1000 English samples
- 1000 German samples
- 1000 Italian samples
validation data:
- 1000 Dutch samples
Now, I am trying to find a statistic to find if there is a significant difference between 2 versions of the classification algorithm. I have been reading and find that 2x5 cross-validation or 10x10 cross validation using a modified student t-test is (one of) the best options. While I can apply this to most other experiments, I don't think I can apply this directly to this particular experiment since I am not using folds.
My question is, if I can still use the corrected student t-test training the network a number of times on the same training and validation data? If so, what should be the number of training and validation folds? If not, what would be a better approach?
For reference, below is my Python implementation of the Nadeau and Bengio correction using the equation stated here.
from math import sqrt
from statistics import stdev
from scipy.stats import t

def corrected_dependent_ttest(data1, data2, n_training_folds, n_test_folds, alpha):
n = len(data1)
differences = [(data1[i]-data2[i]) for i in range(n)]
sd = stdev(differences)
divisor = 1 / n * sum(differences)
test_training_ratio = n_test_folds / n_training_folds
denominator = sqrt(1 / n + test_training_ratio) * sd
t_stat = divisor / denominator
# degrees of freedom
df = n - 1
# calculate the critical value
cv = t.ppf(1.0 - alpha, df)
# calculate the p-value
p = (1.0 - t.cdf(abs(t_stat), df)) * 2.0
# return everything
return t_stat, df, cv, p
AI: If your model is deterministic (no randomness), then repeating the training/testing on the exact same set of data is pointless - you will get the exact same answer every time. The benefit of cross-validation is that it provides an unbiased estimate of your model performance, and does so by using different perturbations of the train/test data. You can still do something similar, selecting 80% of your training data and testing on some subset of the test data, and repeatedly doing a resampling. There's a slight difference from traditional CV, where your train/test set are mutually exclusive and essentially define one another, whereas in this case, your training and test datasets can be defined totally independently (but that shouldn't be a problem).
Incidentally, what model are you using that you expect can accurately predict data that's completely unlike what it's been trained on? Usually the point of the training data is to provide the model with examples of the data you expect to see, along with the correct output. It's not clear to me why you'd want to train a model to classify tweets in English, but then test how it performs at classifying Dutch tweets - the model has never seen Dutch, so I don't expect it would perform well. It seems this evaluation would test how similar English and Dutch are, rather than testing how good your classification model is. |
H: Mapping one embedding to another using Deep Learning
I am trying to write a model that has the input vector of one embedding (say $E_1$) and predicts the corresponding vector in the second embedding $E_2$. Both are n-dimensional real dense vectors $\mathbb{R}^n$.
Concretely one is a skipgram word embedding, and the other is a node2vec graph embedding. I have approximately 30 000 training examples that provide a mapping between the two. Since they are both just real vectors, it seems like a trivial task to write a simple MLP that learns the non-linear transformation of one to the other (I actually don't really care about over fitting here, since the domain is closed).
However, I can't seem to get it to work properly.
In Keras, naively something like this should work:
inp = Input((e1_dim,))  # "in" is a reserved word in Python, so use a different name
hidden = Dense(some_value, activation="tanh")(inp)
out = Dense(e2_dim)(hidden)
I have tried adding more hidden layers, but I think my problem lies with the fact that the input and output vectors are in the domain (-1,1), and so the choice of initializer, loss function, and activation function is critical.
I have tried setting the initializer to RandomUniform, but still no good results. For the loss I have tried MAE and cosine_proximity, but both seem to produce terrible results. In particular, cosine_proximity seems to not get above -0.5, which might be a sign. Any thoughts on the choice of architecture and loss function for mapping one embedding onto another (essentially a high-dimensional non-linear regression)?
AI: You should take into account a few factors:
some_value is key here. You should probably choose a value greater than e1_dim and e2_dim. This may be a problem due to the large memory needs for high embedding dimensionalities.
if e2_dim is bounded to $(-1,+1)$, you may want to set a tanh activation in the last layer (i.e. after Dense(e2_dim)). With that in place, you may want to replace the activation of the hidden layer with a ReLU, to avoid squashing the gradients unnecessarily.
If your embeddings are directional, or they are in the unit ball, then you should be fine with cosine_proximity. Maybe a better option would be to use the von Mises-Fisher distance, but I think it's not straightforward. Otherwise, you may try the Word Mover's Distance.
That being said, a lot of the embedded-space mapping work out there assumes that the embedded spaces are approximately isomorphic and just goes ahead with a linear transformation. You may want to have a look at the relevant bilingual embedding mapping literature; you may start with the work by Artetxe et al., 2016. I don't know if the mentioned assumption is met to any degree in your scenario (skipgram $\leftrightarrow$ node2vec), but you may assess this with the VF2 algorithm, like Artetxe et al. did.
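Putting the first two points together, a hedged Keras sketch (the dimensions are illustrative; the cosine loss assumes your target vectors are directional):
from keras.layers import Input, Dense
from keras.models import Model

e1_dim, e2_dim, some_value = 300, 128, 512            # illustrative sizes

inp = Input((e1_dim,))
hidden = Dense(some_value, activation="relu")(inp)    # ReLU in the hidden layer
out = Dense(e2_dim, activation="tanh")(hidden)        # tanh keeps outputs in (-1, 1)

model = Model(inp, out)
model.compile(optimizer="adam", loss="cosine_proximity")
# model.fit(E1_vectors, E2_vectors, epochs=50, batch_size=64)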
H: What is the right way to keep track of the different things we try?
In Machine Learning we usually try many combinations of different features, filters we apply to the data, transformations on the features or the target variables and different versions of hyperparameters.
This fact makes it difficult to keep track of what works and what doesn't if we are not exhaustive with how we keep track of the different combinations we try.
I am wondering if there are any best practices around this problem. My current approach is to keep track of the different combinations naming files making reference to the parts that compose it, for example a hyperparameters pickle file I would name it booster_params_{}_{}_{}_{}.pickle'.format(filter_name, features_name, model_target, params_iteration)
where filter name is the set of filters I'm applying to the data, features name refers to the set of features used, model target to the target I'm modeling and params iteration refers to the version of the hyperparameters.
This seems like an overkill, and that is why I'm looking for ideas on how to tackle this problem.
AI: You can maintain multiple versions of booster_params.pickle either:
1) through a version control system, or
2) manually, e.g. booster_params_v1.pickle, ...v2, etc., together with a separate file where you describe each version.
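A lightweight alternative to encoding everything in file names is to append one row per experiment to a run log. A minimal sketch (the function and field names here are illustrative, not from the question):
import csv, json, uuid, datetime

def log_run(params, scores, logfile="experiments.csv"):
    row = {
        "run_id": uuid.uuid4().hex[:8],
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "params": json.dumps(params, sort_keys=True),   # filters, features, target, hyperparameters
        **scores,                                        # e.g. {"cv_rmse": ..., "train_rmse": ...}
    }
    with open(logfile, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:                                # new file: write the header once
            writer.writeheader()
        writer.writerow(row)

log_run({"filter": "no_outliers", "features": "v3", "target": "sales"},
        {"cv_rmse": 0.42, "train_rmse": 0.31})
Tools such as MLflow or Weights & Biases automate exactly this kind of experiment tracking if the manual log becomes unwieldy.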
H: Does a larger action space take longer to train an RL agent?
I am playing around with the openai gym to try and better understand reinforcement learning. One agent parameter you can modify is the action space i.e. the specific actions an agent can take in an environment at each state e.g. "Left", "Right", "Up" or "Down" if the environment is a game with 4 discrete actions.
In my research, I have not found anywhere that explicitly states that an RL model, specifically PPO2, will take longer to train if the action space is larger.
All else being held the same; same data, same environment, same hyperparameters, same hardware, will a model with a larger action space (more possible actions) take longer to train (1 episode) than a model with a smaller action space?
(e.g. will an agent with 100 possible actions take longer to train 1 episode than an agent with 2 possible actions?)
Intuitively, I would have thought that the more actions an agent has, the more "choice" it has at each state, and therefore it would take longer in choosing in one of those actions. But again, I haven't found anything proving or disproving this.
AI: I think there are two things in your question:
The number of parameters of your network. If you have more actions to predict, the action layer of your network will have more parameters and will, in theory, take a little longer to update (though the time difference might be negligible depending on your computational resources).
The total training time. Yes, with more choices it will take the algorithm longer to build good estimates of the expected reward per action and reach satisfactory performance. Even if your critic uses a state value $V(s)$ instead of $Q(s,a)$, the algorithm will still take longer to explore the action space and map your input state to a probability distribution over the action space.
H: How to read specific column with specific row in x_test using python
I have a dataset with four inputs. I am trying to predict the X1 input for the next time period using an LSTM model.
I split the data into 80% x_train and 20% x_test.
In x_test, some rows of the first input have 0 values.
I want to read those 0-value entries separately. I tried, but it didn't work for me. Can anyone help me solve this problem?
Here my code:
data=data.values
scaler_x = preprocessing.MinMaxScaler(feature_range =(0, 1))
x = np.array(x).reshape ((len(x),4 ))
x = scaler_x.fit_transform(x)
scaler_y = preprocessing.MinMaxScaler(feature_range =(0, 1))
y = np.array(y).reshape ((len(y), 1))
y = scaler_y.fit_transform(y)
train_end = 80
x_train=x[0: train_end ,]
x_test=x[train_end : ,]
y_train=y[0: train_end]
y_test=y[train_end :]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
After writing the code for the LSTM, I tried to predict values for x_test. Before that, I want to identify the rows with 0 values.
x_test_n = np.flip(x_test_n, axis=0)
curr_frame = x_test_n[0]
Then it just reads the first row with all four inputs together.
What I want is to read the first input where the value is 0.
Image for what I got:
What I want to read
After changing the code:
pred=[]
for index, row in x_test_n.iterrows():
if index_row, index_col = np.where(x_test_n == 0)
x_test_n.iloc[index] = pred[-1]
row['X_test_n'] = pred[-1]
pred1 = model.predict(x_test_n)
pred.append(pred1)
It gave me an error:
if index_row, index_col = np.where(x_test_n == 0)
AI: For a 2D np array x_test_n, you could use index_row, index_col = np.where(x_test_n == 0) to return the indices with 0 values. |
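A small illustration of that suggestion on a toy 2D array (not the asker's data):
import numpy as np

x_test_n = np.array([[0.0, 0.2, 0.5, 0.1],
                     [0.3, 0.0, 0.4, 0.2],
                     [0.6, 0.7, 0.8, 0.9]])

index_row, index_col = np.where(x_test_n == 0)
print(index_row)   # [0 1] -> rows containing a zero
print(index_col)   # [0 1] -> the matching columns

# rows whose first column (the X1 input) is zero:
zero_first_input = np.where(x_test_n[:, 0] == 0)[0]
print(zero_first_input)   # [0]
Note that if your array has been reshaped to 3D for the LSTM, you can run the same check on the 2D version before reshaping, or squeeze the last axis first.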
H: My NN has less than 1% accuracy rate, what's wrong with my model/dataset?
Data info
X_train.shape
(20160, 44)
y_train.shape
(20160,)
#first row of the dataset
X_train.values[0]
array([93., 87., 85., 79., 78., 74., 64., 60., 60., 56., 51., 41., 37.,
36., 0., 36., 0., 31., 0., 0., 32., 0., 0., 31., 0., 23.,
22., 17., 0., 0., 14., 0., 12., 8., 5., 1., 0., 0., 0.,
0., 0., 0., 0., 0.])
y_train.values[0]
106.0
Neural Network
n_cols = len(X_train.columns)
es = EarlyStopping(patience = 5)
adam = Adam(lr = 0.0001)
model = Sequential()
model.add(Dense(n_cols, activation = 'relu', input_shape = (n_cols,)))
model.add(Dense(50, activation = 'relu'))
model.add(Dense(50, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer = adam, loss = 'mean_squared_error', metrics = ['accuracy'])
history = model.fit(X_train, y_train, batch_size = X_train.shape[0], epochs = 100, validation_data = (X_test, y_test), callbacks = [es], verbose = 0)
The code above gives me a maximum accuracy of around 0.0025. I've also tried normalising X_train, but I get the same results.
A linear regression with the exact same data returns an R2 of 0.97
AI: You note that y_train.values[0] returns 106.0 - is that correct? Sigmoid activation binds the output to between 0 and 1, so it's not possible for it to predict a 106.
Try changing the activation for your final layer to relu too.
You should also switch to a more regression-appropriate metric like mean_squared_error or R2. Accuracy is more of a classification thing. |
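A sketch of those changes, keeping the question's variables (n_cols, adam, es, the data splits) and layer sizes; the batch size of 32 and the MAE metric are my own assumptions, not part of the original setup:
model = Sequential()
model.add(Dense(n_cols, activation='relu', input_shape=(n_cols,)))
model.add(Dense(50, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(1))   # linear output (or 'relu', since the targets are non-negative) can produce values like 106.0
model.compile(optimizer=adam, loss='mean_squared_error', metrics=['mae'])

history = model.fit(X_train, y_train, batch_size=32, epochs=100,
                    validation_data=(X_test, y_test), callbacks=[es], verbose=0)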
H: Interpretability of RMSE and R squared scores on cross validation
I'm working on a regression problem with 30k rows in my dataset, and I decided to use XGBoost mainly to avoid processing the data for a quick primitive model. I noticed upon doing cross-validation that there's a noticeable difference between R² for train and R² for CV => clear signs of overfitting. Here's my code for CV:
oof_train = np.zeros((len(train_maisons)))
ind = 0
cv_scores = []
train_scores=[]
for ind,(ind_train,ind_val) in (enumerate (kfolds.split(X,y))):
#ind=+1
X_train,X_val = X.iloc[ind_train],X.iloc[ind_val]
y_train,y_val = y.iloc[ind_train],y.iloc[ind_val]
xgb = XGBRegressor(colsample_bytree=0.6,gamma=0.3,learning_rate=0.1,max_depth=8,min_child_weight=3,subsample=0.9,n_estimators=1000,objective='reg:squarederror',eval_metric='rmse')
xgb.fit(X_train,y_train)
val_pred = xgb.predict(X_val)
train_pred = xgb.predict(X_train)
oof_train[ind_val] += val_pred
score_fold_validation=np.sqrt(mean_squared_error(y_val, val_pred))
score_fold_train=np.sqrt(mean_squared_error(y_train, train_pred))
train_scores.append(score_fold_train)
cv_scores.append(score_fold_validation)
#r2_score(y_val, grid.best_estimator_.predict(X_val))
print('Iteration : {} - CV Score : {} - R² Score CV : {} - Train Score : {} - R² Score train : {}'.format(str(ind+1),score_fold_validation,r2_score(y_val, val_pred),score_fold_train,r2_score(y_train,train_pred)))
end_train_score=np.mean(train_scores)
train_scores.append(end_train_score)
end_cv_score=np.mean(cv_scores)
Using SquaredError as objective ( loss function ) , evaluating with RMSE and R², here are the metrics' outputs :
CV Score : 96416.84137549331 - R² Score CV : 0.6545903695464426 - Train Score : 30605.655815355676 - R² Score train : 0.9730563148067477
My question: is this considered an overwhelming overfitting problem, or is it mild? Should I do more feature engineering or tune hyperparameters more? (I used GridSearchCV for the current hyperparameters.) And one last thing: is my result on X_train indicative that my features are informative enough to learn the target, or is the R² train score somehow biased?
Note: In this code I'm using 10 folds for CV. Using 3 folds gave me a better result on CV; if someone can also explain that, it would be great.
AI: And one last thing, is my result on X_train indicative that my features are informative enough to learn the target? or is the R² train score somehow biased?
High scoring fits on training data does not necessarily indicate that your features are informative enough to learn the target in a general fashion. Only your cross validation scores can do so.
Note : In this code i'm using 10 folds for CV. Used 3 folds gave me a better result on CV, if someone can also explain that , it would be great.
I wouldn't expect wildly significant differences so it depends on how much better they were, but remember that you're randomly selecting a subset of data for each fold. It's plausible that when you ran it for 3 folds the model was coincidentally better able to learn from the selected training sets to predict the validation sets. Generally speaking, the more folds you use the more confident you should be in the reliability of the score.
My question: is this considered an overwhelming overfitting problem? or is it mild? and should I do more feature engineering or tune hyperparameters more? ( used GridSearchCV for the current hyperparameters )
I'm no professional, but I would consider that to be severely overfit. I would in the first instance return to hyperparameter tuning to try and bring them in line. I'll also perform the obligatory plug of RandomizedSearchCV at this point: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
EDIT:
Since you've asked for further details:
1) Does a high score against training data indicate that the features are informative enough to learn the target?
Consider the following example:
import numpy as np
from xgboost import XGBRegressor
x = np.random.rand(5000, 12)
y = np.random.randint(0, 200, 5000)
rfr = XGBRegressor(colsample_bytree=0.6,gamma=0.3,learning_rate=0.1,max_depth=8,min_child_weight=3,subsample=0.9,n_estimators=1000,eval_metric='rmse')
rfr.fit(x, y)
rfr.score(x, y)
Out[26]: 0.999918392694166
Entirely random targets trained against entirely random inputs, and still scoring almost perfectly. That's a completely useless model; validating against your training data does not allow you to determine that your features are informative enough to allow you to predict the target against data that your model has not seen. Only cross-validation against unseen data can do this.
2) Used 3 folds gave me a better result on CV, if someone can also explain that , it would be great.
The RNG gods smiled upon you. Again, consider an example:
from xgboost import XGBClassifier, XGBRegressor
import pandas as pd
from sklearn.model_selection import KFold
df = pd.read_csv(r'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv', index_col=None)
x = df.drop('species', axis=1)
y = df.species
xgb = XGBClassifier(max_depth=3)
kf = KFold(n_splits=10, shuffle=True)
for train_idx, test_idx in kf.split(x):
xgb.fit(x.iloc[train_idx], y.iloc[train_idx])
print(xgb.score(x.iloc[test_idx], y.iloc[test_idx]))
Out[17]: 0.9333333333333333
0.9333333333333333
0.9333333333333333
1.0
0.8666666666666667
1.0
1.0
0.9333333333333333
0.8666666666666667
1.0
That code splits the data 10 times and then trains and scores the model on each selected slice. As you can see, there's a 13.4% difference between the highest and worst scoring slices.
Your 3-Fold run coincidentally gave you 3 'good' folds. The more Folds you have, the more representative of the true result your cross-validation is.
3) is this considered an overwhelming overfitting problem? or is it mild? and should I do more feature engineering or tune hyperparameters more? ( used GridSearchCV for the current hyperparameters )
Well, whether or not it's overwhelming is something of a matter of opinion. In my mind however a loss against the validation set that is 300% the training score is very severely overfit indeed, but if your validation RMSE is still within the margin of error you're willing to accept then I suppose you could go ahead and use it anyway. It's really your call.
To reduce the overfitting, you need to tune your hyperparameters better. Reducing max_depth and increasing min_child_weight (XGBoost's rough analogue of min_samples_split in sklearn trees) is my usual go-to with trees. If your revised model (exhibiting either no overfitting or at least significantly reduced overfitting) then has a cross-validation score that is too low for you, you should return at that point to feature engineering. We can highlight the effect of hyperparameters on overfitting quite easily by plotting the effect of tweaking them:
from xgboost import XGBClassifier, XGBRegressor
import pandas as pd
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder
df = pd.read_csv(r'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/diamonds.csv', index_col=None)
ohe = OneHotEncoder(sparse=False)
x = pd.DataFrame(ohe.fit_transform(df[['cut', 'color', 'clarity']]))
df.drop(['cut', 'color', 'clarity'], axis=1, inplace=True)
df = df.merge(x, left_index=True, right_index=True)
x = df.drop('price', axis=1)
y = df.price
cv = {}
for i in range(20):
xgb=XGBRegressor(max_depth=i+1)
x_train, x_test, y_train, y_test = train_test_split(x, y)
xgb.fit(x_train, y_train)
cv[i+1] = (xgb.score(x_train, y_train), xgb.score(x_test, y_test))
train = []
test = []
for i in cv.keys():
train.append(cv[i][0])
test.append(cv[i][1])
fig = plt.figure()
plt.plot(train)
plt.plot(test)
plt.legend(['train', 'test'])
plt.xlabel('max_depth')
plt.xlim(0, 20)
plt.ylabel('R^2 Score')
plt.show()
Apologies for the stupid x-axis labels :P. We can easily see the effect of the max_depth hyperparameter on the model's tendency to overfit; anything over 3-ish and the train/test scores begin to diverge.
Hope that those revisions are more helpful to you :) |
H: Keras: Misunderstanding the Conv2D's param "filters"?
From https://keras.io/layers/convolutional/ (Conv2D):
keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid',
data_format=None, dilation_rate=(1, 1), activation=None,
use_bias=True, kernel_initializer='glorot_uniform',
bias_initializer='zeros', kernel_regularizer=None,
bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None)
filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
Why is it possible to indicate the number of feature maps (results of the scalar product between the kernel and the input), since we specify the kernel's size and the input?
Indeed, if we know the kernel's size (width and height) and the input's size (width and height), then we know the number of feature maps.
AI: filters are the numbers of kernels or feature detectors that we choose for the convolutional layer to learn.
in the end, the number of feature maps that we get equals to this number of filters that we have declared here.
Now, what are these feature detectors? these are matrixes usually smaller (much smaller) than the input images. It's occasionally $3by3$, $5 by 5$ or like the one in Alexnet $7by7$.
In the convolution layer, a convolution operation is done using each of these filters, each time we take a filter and our input image, then we put the filter on the left top corner of the input image and multiply each matching cells, then we would sum up the results and put that in the very first cell of our output (feature map) matrix.
Then we move this filter on the image and do the same,... till we fill the whole feature map (stride defines how many cells we move in the image). Once we have our feature map number one that is a result of applying filter number one on the input image, we take another filter and do the same and so on and so forth.
Obviously, the outputted feature maps are smaller than the input, the amount by with the input size is reduced depends on the stride that we choose, moreover, we have kinda aggregated features in our feature map (also called convolution/convolved map)
Why is it possible to indicate the number of features maps (results of the scalar product between the kernel and the input), since we specify the kernel's size and the input?
Indeed, if we know the kernel's size (width and height) and the input's size (width and height), thus we know the number of features maps.
The thing is, if we only know the input size and the kernel size, we still don't know the stride or padding; setting a stride of 2, or 1, or anything else gives feature maps of different sizes. And even with those fixed, the input and kernel sizes only determine the width and height of each feature map. The number of feature maps is an independent design choice, and that is exactly what the filters argument specifies: each filter spans the full depth of the input and produces one feature map.
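A quick check of this relationship in Keras (a toy 28x28 grayscale input; the numbers are only for illustration):
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

inp = Input(shape=(28, 28, 1))
x = Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), padding='valid')(inp)
print(Model(inp, x).output_shape)   # (None, 26, 26, 32)
# spatial size: (28 - 3)/1 + 1 = 26, set by input/kernel/stride; the depth of 32 comes only from `filters`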
H: An exhaustive, representative test database in phrase search algorithm
For a phrase searching algorithm, imagine the goal is to search for a name phrase and return matched results based on a pre-defined threshold. For example, searching for "Jon Smith" could return "Jon Smith", "Jonathan Smith", "Jonathan David Smith", "Jonathan Smith-Mikel", "Jonathan 'Smith' Mikel" etc.
The plan is to manually choose N test cases and put them in a benchmark database. I have concern about this plan because the test cases is likely to be not exhaustive. I know there are pretty mature search engines there, so is there a test database which covers all possible cases, such as different name combinations, punctuations, symbols, etc. such that we can use this as our benchmarks instead of guessing?
For example, this test database should contain all cases for "Jon Smith", as well as connectors such as hyphen, apostrophe, and so on.
AI: Based on the example I assume that the target is persons names.
Let's be clear: there's no such thing as an exhaustive dataset containing all possible persons' names in the world. A crucial part of the question is also: what kind of names? In which language? Persons' names in English are pretty different from names in Chinese, for instance. And there is also the difficult question of transliteration of proper names.
That being said, a few resources exist. They would usually be found by searching resources for "personal names", "record linkage", "named entities matching/coresolution". The following ones probably don't cover all the requirements but it's a start:
Febrl (see also here)
I found this paper, which presents a large resource, and the corresponding resource description, but I couldn't find the data itself.
The EMM news explorer has an interesting database of named entities including persons names with all the spelling variants/transliterations. |
H: How to make this intersection chart?
I want to prepare the following chart. Say, I have three datasets of different lengths, for example:
A = [c,d,e,g,h,o,m,n,o,x,y,z]
B = [d,e,f,g,h,i,j,k,l]
C = [a,b,c,d,e]
Any dataset might contain items already there in other 2 datasets. Now I want to show the size of each dataset as a circle. More items, bigger circle. But all three circles also overlap each other depending on how much items are common. As demonstrated below.
What this chart is actually called? And what library can I use which might have this kind of chart available?
AI: Thanks to @Peter. It's called Venn Diagram and here's my needed Javascript Library. https://www.benfrederickson.com/venn-diagrams-with-d3.js/ |
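If you are working in Python rather than JavaScript, the third-party matplotlib_venn package draws the same kind of diagram; a quick sketch with the sets from the question:
from matplotlib_venn import venn3
import matplotlib.pyplot as plt

A = set("cdeghomnxyz")
B = set("defghijkl")
C = set("abcde")

venn3([A, B, C], set_labels=("A", "B", "C"))
plt.show()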
H: How does R-CNN and AlexNet compare?
I know AlexNet does object classification in images [categories] and R-CNN does object localization [category and bounding box].
How does R-CNN and AlexNet compare?
Are they used for the same purpose or R-CNN does more? Does R-CNN use Alexnet as a sub-module?
AI: I think you got most of it from the way you wrote your question.
How does R-CNN and AlexNet compare?
Are they used for the same purpose or R-CNN does more?
They are different things. AlexNet is a CNN architecture, i.e. a neural network with a specific set of layers. R-CNN is a multi-step method that does object localization and classification, using a region-proposal (selective search) algorithm together with a CNN (of any architecture) and an SVM.
Does R-CNN use Alexnet as a sub-module?
R-CNN can use AlexNet, or other CNN architectures, as its feature extractor.
H: Why is it necessary to overfit a yolov2 on a subset of images before model training?
I am using YOLOv2 for passport details detection with 8 classes. My object detection bounding boxes have errors, though recognition is fine.
I want to know whether overfitting the detector on a small subset of training data, say 3 images, helps determine prediction performance. If yes, how?
AI: As far as I know, overfitting a small sample of training data doesn't help predict performance. Normally, overfitting a batch of training data is a debugging method.
It's often very hard to detect logical errors in a neural network. One easy test: if our neural network can overfit a small batch of training data, we can reasonably assume that it has no critical bug and start training. The reason is that a neural network can fit even a random labeling of training data, so if it cannot fit a small batch, it's almost guaranteed that there's a bug in your network.
Keep in mind that this is only one test to perform, there might be some other subtle bugs that prevent you from getting a good result. |
H: Training vs test data set for supervised learning in real life scenario
In the tutorials, I have noticed that only similar data has been used for model training and prediction.
I was wondering about cases where you can't find training data that is similar to your final use case (test data).
What happens if I am building a recommendation system for house rentals but the closest training data I can find is for movies. And it has fewer features than I would have in the house scenario.
How common are scenarios like this and how are they generally handled?
AI: One important assumption of the data for a lot of Machine Learning algorithms is that the training set and test set are i.i.d. (independent and identically distributed) and come from the same probability distribution. More explanation here. That said, you cannot train a model for movie recommendation and use it for house rentals, or at least not directly, as far as I'm aware.
However, it's possible to train a model for house rentals recommendation if you have a lot of data for movies recommendation and a small amount of data for house rentals recommendation, using a technique called Transfer Learning. The idea is simple: If two tasks are similar, we should be able to use what we learn from one task to learn the other task faster and better.
Following is the general process (for neural networks):
Using movies data, train a model for movies recommendation
Fine-tune this recommendation system using house rentals data: freeze most of the movies recommender and retrain the last few layers with the house rentals data (a sketch of this step is shown below).
This is just a direction about how it can be done, there's quite a lot of related literature for you to learn more. I would say this is a not very uncommon scenario. |
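A minimal sketch of the fine-tuning step in Keras (the model file name, layer indices and output head are illustrative assumptions, not a reference implementation):
from tensorflow.keras import layers, models

movies_model = models.load_model("movies_recommender.h5")   # trained in step 1

# freeze the pretrained layers
for layer in movies_model.layers:
    layer.trainable = False

# cut off the old head (illustrative index) and attach a new, trainable one
x = movies_model.layers[-3].output
x = layers.Dense(64, activation="relu")(x)
output = layers.Dense(1, activation="sigmoid")(x)
rentals_model = models.Model(movies_model.input, output)

rentals_model.compile(optimizer="adam", loss="binary_crossentropy")
rentals_model.fit(X_rentals, y_rentals, epochs=10)   # the small house-rentals dataset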
H: Poor performance of regression model for imbalanced data
I am trying to train a neural network model to solve a regression problem. The specificity of my dataset is that it has something like an exponential distribution of target values (imbalanced). Therefore, the model seems to output only values less than 2 (if the range is [0,6]), for instance, and it completely ignores larger target values, which are much rarer in the dataset. How is it possible to improve the model's results in such a case?
For example, when it comes to multiclass classification, we can weight the penalties for errors on a smaller class to improve performance on imbalanced data. Are there any tricks in terms of regression? Which loss functions could be useful? It seems that the MSE loss function is preferable to RMSE. Are there more powerful loss functions for this problem?
There is a paper about such a problem of imbalanced regression (http://proceedings.mlr.press/v74/branco17a/branco17a.pdf) which might be helpful to someone. However, I'm more interested in special tricks for a neural network, not pre-processing approaches (I can't generate more data for example).
AI: Neural networks can, in general, be treated like any other regression model here, and as such you can apply well-known ways of dealing with this. This paper gives you a good introduction to different approaches. For instance, you can upsample the minority class, or you can do loss weight balancing during training.
For instance, consider a data point $x_i$ that can belong to one of two classes, $a$ and $b$ (for regression, think of these as bins of the target range), where class $a$ is the minority. During training you would multiply the per-sample loss by a class-dependent weight: $$L_i=\begin{cases}l_a\cdot \ell(x_i),&\text{if data point } x_i \text{ is in the minority class}\\ l_b\cdot \ell(x_i),&\text{if data point } x_i \text{ is in the majority class}\end{cases}$$ where $\ell(x_i)$ is the unweighted loss for $x_i$ and $l_a > l_b$, which would be natural for this example.
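A sketch of this weighting for a regression network in Keras, assuming model, X_train and y_train already exist; the threshold and weight values are arbitrary assumptions to be matched to your target distribution:
import numpy as np

weights = np.where(y_train > 2.0, 5.0, 1.0)   # e.g. 5x weight for the rare, large-target range

model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, sample_weight=weights, epochs=50, batch_size=64)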
You could also look into continuous performance measures that work nicely with imbalanced datasets, for instance the generalised Dice (F1) score.
Referenced papers:
Provost 2000, Machine Learning from Imbalanced Data Sets 101
Sudre et. al 2017, Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations |
H: Keras: Misunderstanding what Keras does when invoking a Model, providing an Input?
In a class named Generator, I have defined a model with the below method define_model:
def define_model(self):
conv2d = Generator.__last_block()
output = Activation('tanh')(conv2d)
model = Model(self.input_layer, output)
return model
In another file, I am writing the entry-point of my Python program, which calls the generator's define_model method:
generator = Generator()
generator_model = generator.define_model()
input_low_resolution = Input(shape=low_resolution_shape)
generated_high_resolution_images = generator_model(input_low_resolution)
While reading a course, I have a question about this code: what does Keras do when it executes the line generated_high_resolution_images = generator_model(input_low_resolution)?
As far I can understand, it doesn't define a model (my model is already defined thanks to generator.define_model()!). And since the Keras methods train, fit or other aren't called, I deduce that this line doesn't train the model.
By the way, it's a really weird line for me, because it passes a parameter to a reference (the reference to the object is generator_model and the parameter is input_low_resolution). Normally we pass parameters to the reference's methods (constructor and other methods).
AI: Keras does a few important things when you call a model on a tensor. The model itself (its layers and their weights) was already built by define_model; calling generator_model(input_low_resolution) does not create new weights. Instead, it wires the existing model into a new computation graph: the given tensor becomes the model's input, and the call returns the symbolic output tensor (here generated_high_resolution_images), which you can feed into further layers or another model.
Because the call reuses the same underlying layers, anything built on top of it shares the generator's weights. For instance, if you did out1 = generator_model(input_a) and out2 = generator_model(input_b), both outputs are produced by the exact same parameters, so training a model built on out1 also affects out2.
This weight sharing is exactly what makes the pattern useful when you want to mix and match multiple layer configurations, or when working with more complex structures like GANs or ADDA.
I would argue that your implementation is not exactly canonical and actually I'd recommend looking at the tensorflow.keras.Model example. Here they introduce a very nice, canonical way of making your own model.
Furthermore maybe Keras Model class API and Getting started with the Keras functional API can be of help. |
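As a hedged sketch of how this typically appears in a GAN-style setup (discriminator_model is a hypothetical second model, not something defined in the question):
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

input_low_resolution = Input(shape=low_resolution_shape)
fake_images = generator_model(input_low_resolution)    # reuses the generator's weights
validity = discriminator_model(fake_images)            # assumed second model

discriminator_model.trainable = False                  # usual trick when training the generator
combined = Model(input_low_resolution, validity)
combined.compile(optimizer="adam", loss="binary_crossentropy")
# training `combined` now updates generator_model's weights directly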
H: User Location Prediction
I am trying to predict the user location for a mobile app for a certain timestamp. So far I've gathered training data throughout different days. The data contains the latitude, longitude and the timestamp.
What would be the best algorithm/best way in order to train a model that takes a timestamp as an input and gives as output a latitude and longitude pair?
AI: KNN:
The prediction of Lat/Lng for a given timestamp could be performed by analysing the user's previous data. This is a totally naive approach but could be tried:
Take an input timestamp from the user. Let's call it $x$.
From the previous data of the user, search for the timestamp which is the closest to $x$.
Once you find the closest timestamp, output the Lat/Lng associated with that timestamp.
This approach is called K-NN ( K Nearest Neighbours ) classification.
It will work well if the user is mostly at a common place at a given time (timestamp).
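A minimal sketch of this nearest-neighbour idea with scikit-learn; history is a placeholder for the collected (timestamp, lat, lng) records, and the time-of-day/day-of-week features are an assumption about what "closest timestamp" should mean for a daily routine:
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

df = pd.DataFrame(history, columns=["timestamp", "lat", "lng"])
t = pd.to_datetime(df["timestamp"])
X = pd.DataFrame({"seconds_of_day": t.dt.hour * 3600 + t.dt.minute * 60 + t.dt.second,
                  "day_of_week": t.dt.dayofweek})
y = df[["lat", "lng"]]

knn = KNeighborsRegressor(n_neighbors=5)     # averages the 5 closest recorded positions
knn.fit(X, y)

query = pd.DataFrame({"seconds_of_day": [14 * 3600], "day_of_week": [2]})
print(knn.predict(query))                    # predicted [lat, lng] for Wednesday at 14:00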
Clustering method:
This method works by clustering the user's data points. For a given time interval you will get a representative timestamp (the centre of the cluster).
Once you have the clusters of timestamps, find the timestamp (cluster centre) which is closest to the input $x$.
For this timestamp, you will have Lat/Lng values which are close to one another.
To predict the Lat/Lng, use the centre of the cluster and output the associated Lat/Lng values.
Basically, you are clustering similar data points which help in prediction. You can find where a user's location is clustered at given time intervals. |
H: What are named entity advantages over strings?
I am unclear about the practical benefit of named entity recognition. Specifically, I don't get how this entity is any better than a string representation of the same object. Let's take the popular example of movies. I can create a model where each row is a movie. Any attribute you can have in a recommendation system can just as easily be a column in my movie table: release date, actors, genre, etc. If a user does a search for 'animation' - or even if they don't, but you know from that user's history that they saw The Lion King, any decent search engine could pull up Moana as a recommendation, so how is a named entity any better?
Or you pull a named entity out of a text document. Great, but now what? Isn't it still just a string, which can be assigned as an object with its own unique dict of values, like genre and release date? I assume there are benefits over strings, or it wouldn't be such a big deal to so many people. I just don't know what those benefits are.
AI: The comparison between named entity and string is irrelevant, because the concept of named entity doesn't belong to data types such as string or numeric values. Of course a named entity is usually represented as a string but so is a text document, a cryptographic key or a translation of a text to Vietnamese: what matters here is the semantics, not the technical representation.
The reason why named entities (NEs) are often offered special treatment is because they are particularly relevant for certain tasks and are notoriously difficult to detect and analyze:
The task of Named Entity Recognition (NER) consists in detecting named entities in a raw text document. It's not a trivial task, the problem is still an active area of research.
Even after having been identified, named entities are often difficult to disambiguate. The same entity may appear under many different variants for various reasons:
"John Smith", "J. Smith" and "John A. Smith" may or may not be the same person.
"The president" and "Donald Trump" may or may not be the same entity.
"Big Apple" and "New York" represent the same entity, but not "big apple" and "New York".
...
Of course when one is provided with the result of the NE recognition/disambiguation process, they can process NEs like any other piece of information... but that's because all the hard work about NEs has been done before. |
H: Why Generator error increases after some epoch
I'm training a Relativistic GAN on dog images. I don't understand why my generator loss increased after 100 epochs.
This is the generator loss graph.
AI: In your question, it's pretty hard to answer specifically what is going on as you don't share any details of what you're trying to do. However, I'll try to address the general case:
In GANs (and other adversarial models) an increase of the loss functions on the generative architecture could be considered preferable because it would be consistent with the discriminator being better at discriminating.
As the discriminator is trained, it actually changes the loss landscape of the generator, which in turn increases the loss of said generator. This is probably why.
Another reason could be that the discriminator is getting too good and the generator is getting stuck, which can often be fixed by adjusting the learning rates in the optimizers or increasing the number of update steps for one of the adversaries. In this particular case I'd try decreasing the discriminator learning rate or increasing the number of steps the generator is trained.
I'd suggest reading the original Generative Adversarial Nets paper and perhaps also the Improved Techniques for Training GANs paper. |
H: Difference between from nltk import word_tokenize and from nltk.tokenize import word_tokenize?
What is the difference between word_tokenize imported directly from nltk and the one imported from the nltk.tokenize package?
AI: So basically
from nltk import word_tokenize
and
from nltk.tokenize import word_tokenize
give you the same thing: nltk re-exports word_tokenize from its tokenize subpackage, so both import statements point to the exact same function, are used the same way, and produce identical results.
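A quick way to confirm this yourself (a small check, assuming nltk and its punkt tokenizer data are installed):
from nltk import word_tokenize as wt1
from nltk.tokenize import word_tokenize as wt2

print(wt1 is wt2)             # True -> one function, two import paths
print(wt1("Hello, world!"))   # ['Hello', ',', 'world', '!']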
H: Comparing models using P-value
I have 2 different ML architectures for a translation task, I evaluate them using BLEU score (higher is better)
I've run them 9 times each, yielding the following scores
Architecture 1: 36.52, 36.27, 35.9, 35.22, 37.13, 35.53, 35.3, 34.14, 35
Architecture 2: 36.85, 35.64, 36.37, 36.82, 36.74, 36.46, 35.77, 37.31, 36.68
Means are 35.67 for the first and 36.52 for the second
I want to calculate a P-value that confirms (or not) the superiority of the second model, with alpha = 0.05
Thanks in advance
AI: I think you can just apply a t-test, or a Wilcoxon test if you don't believe in the normality of your observations.
In R:
A <- c(36.52 , 36.27 , 35.9 , 35.22 ,37.13 , 35.53 , 35.3 , 34.14 , 35)
B <- c(36.85 , 35.64 , 36.37 , 36.82 , 36.74,36.46 , 35.77 , 37.31 , 36.68)
wilcox.test(A,B,alternative="less",paired=T)
t.test(A,B,alternative="less",paired=T)
I've done paired tests under the assumption that you did the same folds for each architecture. If that is false, then do unpaired tests.
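The same paired tests in Python with SciPy, matching the R code above (the alternative argument needs a reasonably recent SciPy version):
from scipy import stats

A = [36.52, 36.27, 35.9, 35.22, 37.13, 35.53, 35.3, 34.14, 35]
B = [36.85, 35.64, 36.37, 36.82, 36.74, 36.46, 35.77, 37.31, 36.68]

print(stats.ttest_rel(A, B, alternative="less"))   # paired t-test, H1: mean(A) < mean(B)
print(stats.wilcoxon(A, B, alternative="less"))    # paired Wilcoxon signed-rank test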
Edit: You may be interested in a question I posted a few months ago. Cross-validation for model comparison: use the same folds? |
H: How to optimize hyperparameters in stacked model?
I was wondering whether somebody could explain how to optimize hyperparameters for the base learners and meta algorithm when stacking? In many tutorials they seem to be plucked out of thin air!
Thanks,
Jack
AI: I believe the most common way involves some slight data leakage during the training step that is often ignored. The "correct" way involves giving up more training data but many have empirically realized that giving up more training data is often worse.
Split your data into training and testing.
Split the training set into k-folds.
Train your base models using all of the k folds. Save the predictions from the best (optimized in terms of hyperparameters) base models, for each of the k folds.
Example: suppose we use 3 fold cross validation and have two base models. For each base model, we find the best hyperparameters that on average optimize some loss function over these three folds. We then save the predictions made by this optimal base model over these same three folds (i.e. we save the exact same predictions that you must have made when scoring your model over all three folds).
In essence, you are transforming your original training set with your base models in pieces and reconstructing it back together.
Using your reconstructed training set of predictions from folds 1, 2 and 3, use k fold cross validation again to train your combiner (i.e. find optimal hyperparameters again but for the combiner). You can choose to use the same splits as you did in step three but it doesn't matter. This is where the problem lies with data leakage. No matter how you split your data here you will have variables in your test set that were directly created from observations in your training set. For example, suppose fold 1 and fold 2 in the previous example make up the training set, and therefore fold 3 is used for validation. Since both fold 1 and fold 2 were created using fold 3 observations there is a chance that you overfit here due to optimism in your validation scores. In Kaggle competitions this is often ignored. My guess is probably because if you are stacking you have a large dataset, and if you have a large dataset minor data leaks become less of an issue (in general).
Fit your base models using your entire training dataset, using the hyperparameters found in step 3. Predict the training set to generate the required meta features for your combiner, and also predict the test set created in step 1.
Fit your combiner using the training set generated in the previous step, using the hyperparameters found in step 4.
Predict the test set that you had created from your base models in step 5, using the combiner model from the prior step. Score your loss and report this score as your model's performance.
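A condensed sketch of steps 2-7 using scikit-learn's cross_val_predict to build the out-of-fold meta-features (assuming X and y exist; the base models and grids are placeholders):
import numpy as np
from sklearn.model_selection import cross_val_predict, GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)   # step 1

# steps 2-3: tune each base model, then collect its out-of-fold predictions
base_models = [
    GridSearchCV(RandomForestRegressor(), {"max_depth": [3, 5, 8]}, cv=3).fit(X_train, y_train).best_estimator_,
    GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=3).fit(X_train, y_train).best_estimator_,
]
meta_train = np.column_stack([cross_val_predict(m, X_train, y_train, cv=3) for m in base_models])

# step 4: tune the combiner on the reconstructed training set
combiner = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=3).fit(meta_train, y_train).best_estimator_

# steps 5-6: refit the base models on all training data and build meta-features for the test set
meta_test = np.column_stack([m.fit(X_train, y_train).predict(X_test) for m in base_models])
print(combiner.score(meta_test, y_test))   # step 7: held-out performance
Note that sklearn's StackingRegressor/StackingClassifier automate the out-of-fold construction for you, although they tune hyperparameters less flexibly than the manual procedure above.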
H: Intuition behind the entropy definition
The definition of information entropy (for a discrete random variable $X$) is: $$H(X) = -\sum_{x} p(x)\log p(x)$$
This looks fine, but I have no intuition for why it is defined this way. Could anyone share their ideas on this? Thanks!
AI: Please refer to Pattern Recognition and Machine Learning for details, section 1.6: Information Theory.
The left graph is Information (of an event) vs. Probability, and the right graph is Expected Information/Uncertainty vs. Probability.
If we look carefully at the left graph, the variation at either extreme of the curve dies down, representing less change. Translated into the current context, it means not much information flows in at the extremes.
Staying with the left curve, it's somewhere in the middle (around 45 degrees from the vertex) where there's relatively more "meat", in the sense that more change happens; translated, more information flows in.
The same thing is shown by the right curve: at the two extremes, the change/expected information is low, and it is highest at the centre.
A crude Graph using Excel: |
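The two curves described above can be reproduced in a few lines (here for a two-outcome variable, using log base 2; the exact base only rescales the plots):
import numpy as np
import matplotlib.pyplot as plt

p = np.linspace(1e-6, 1 - 1e-6, 500)
info = -np.log2(p)                                    # information ("surprise") of a single event
entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # entropy of a two-outcome variable

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(p, info)
ax1.set(xlabel="p", ylabel="-log2 p", title="Information vs probability")
ax2.plot(p, entropy)
ax2.set(xlabel="p", ylabel="entropy", title="Expected information vs probability")
plt.show()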
H: Do CNN convolution and pooling layers get backpropagated?
I can't find a simple answer to this by Googling which leads me to think the answer is no, but I want to be sure...
In a feed forward network, all of the layers of weights get backpropogated, but what happens in a convolutional neural network on the backprop step? Is it only the feedforward part of the network (after the convolution and pooling layers) that gets backpropped? Which would mean that the convolutional layers are a form of static feature extraction...
AI: All layers of a neural network take part in the back-propagation process.
This includes the convolutional layers and the pooling layers. In general, every step of the network that the input has to go thru, the back propagation goes thru as well (in reverse order).
However, not all layers contain trainable parameters. For example standard pooling layers (max-pooling, average-pooling) and standard activation layers (sigmoid, ReLU, softmax) don't have any parameters to adjust. They still take part in the back propagation, contributing their partial derivatives, but they just have no weights that can be updated.
The convolutional layers do contain weights that are updated during the process (the parameters of the filter and their bias).
Note: I assume that what you refer to as the "feed forward part" are the fully-connected layers that are usually placed as the final layers of a network. In standard CNNs, the entire network is "feed forward" (including the convolutional layers); it just means that the input goes through a sequential pipeline until becoming the final output.
H: Does an RL agent still learn if its actions are "blocked"?
Say we have a game that is a maze environment where there is a character to be controlled through the maze. When the agent (the character) approaches a wall, it may try to execute an action that would lead it into the wall, which is not permitted. Instead of executing this action, we place an if statement in the code that checks if the action is permitted, and if not, does not execute the action, and simply proceeds to the next state.
Another similar example is if an RL agent is being trained in a stock trading environment, where say it tries to sell more stock than it actually owns, or buy less than the minimum amount. Similarly as before, we place an if statement that checks for these conditions and either allows the action (allows the trade) or moves on to the next state.
Does the agent still learn, even if we "override" it and block certain actions? How else would we go about restricting the agent to certain actions in certain states e.g. if a wall is to the left of the agent in the game environment, how would we restrict the agent to only move forward, backward or right, but not left?
AI: By overriding the agent's action, the agent can theoretically take this action over and over and do nothing else; there is nothing to stop the agent from doing this, or to teach the agent that this is not the intended action here.
What I usually do in these situations is punish the agent if the action is not desirable. So if the agent takes the action to go left but there is a wall on the left side, I would not change the state of the environment (so the agent won't move), but I would also send a negative value (punishment) as the reward. This way, after some training, the agent learns that this action is not desirable.
The same can be applied to your stock example: if the agent tries to sell more stock than it actually has, you just don't execute the sale but also punish it with a large negative reward. This way it is easier for the agent to actually understand the environment. A sketch of how this can look inside the environment's step function is shown below.
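A minimal sketch inside a gym-style step() method; the helper names (_is_valid, _transition, _reward, _is_terminal) and the penalty value are illustrative assumptions:
def step(self, action):
    if not self._is_valid(action):             # e.g. moving into a wall, over-selling stock
        reward = -1.0                           # punish the invalid choice...
        return self.state, reward, False, {}    # ...and leave the state unchanged
    self.state = self._transition(self.state, action)
    reward = self._reward(self.state)
    done = self._is_terminal(self.state)
    return self.state, reward, done, {}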
Hope this helps. |