H: Random Forest significantly outperforms XGBoost - problem or possible?
I have a dataset of around 180k observations of 13 variables (a mix of numerical and categorical features). It is a binary classification problem, but the classes are imbalanced (25:1 in favour of the negative class). I wanted to deploy XGBoost (in R) and reach the best possible Precision & Recall. To deal with the imbalance I tried upsampling the positive class, as well as giving the positive class a high weight in XGB. However, although Recall is pretty high, Precision is very poor (around 0.10).
My parameter tuning for XGB:
Random search of parameters - 10 iterations
5-fold CV
Parameter intervals:
max_depth = 3-10
lambda = 0-50
gamma = 0-10
min_child_weight = 1-10
eta = 0.01-0.20
Then, I tried Random Forest with the upsampled dataset and it performed surprisingly well, with Recall 0.88 and Precision 0.73 (on the test dataset).
Could someone please tell me if it is possible that RF outperforms XGB by so much, or is it a sign that I am doing something wrong?
Thank you very much.
AI: There are two important things in random forests: "bagging" and "random". Broadly speaking: bagging means that only a part of the "rows" are used at a time (see details here) while "random" means that only a small fraction of the "columns" (features, usually $\sqrt{m}$ as default) are used to make a single split. This helps to also let seemingly "weak" features have a say in the prediction or to avoid dominance of few features in the model (and thus to avoid overfitting).
Looking at your XGB parameters, I notice that you do not subsample rows and columns, which is possible by using the parameters colsample_bytree and subsample. You could also use scale_pos_weight to tackle imbalanced classes. Subsetting columns and rows would possibly be useful if you have some dominant features or observations in your data. I suspect that using subsampling (this would be „stochastic gradient boosting“), the XGB results would improve and be "closer" to the results obtained by using a random forest.
Also make sure you have enough boosting rounds (to allow good learning progress). You can add a watchlist and an early_stopping_rounds criterion to stop boosting once no more progress is made. In this case you would set nrounds to a "high" number and let boosting stop when no learning progress has been made for early_stopping_rounds consecutive rounds, as in this generic code.
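For illustration, here is a generic sketch of such a setup using the Python xgboost API (the parameter names are the same in the R package; dtrain and dvalid are assumed to be DMatrix objects built from your data, and the values are only starting points to tune):
import xgboost as xgb

params = {
    "objective": "binary:logistic",
    "eval_metric": "aucpr",        # precision-recall AUC suits the 25:1 imbalance
    "max_depth": 6,
    "eta": 0.05,
    "subsample": 0.8,              # row subsampling ("stochastic" gradient boosting)
    "colsample_bytree": 0.8,       # column subsampling, as in a random forest
    "scale_pos_weight": 25,        # roughly the negative:positive ratio
}

bst = xgb.train(
    params,
    dtrain,
    num_boost_round=5000,          # set high; early stopping picks the best round
    evals=[(dtrain, "train"), (dvalid, "valid")],
    early_stopping_rounds=50,
)
In R the equivalent is xgb.train() with a watchlist and early_stopping_rounds. |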
H: Conv2d for image with additional features as input layer
I would like to train a model with Keras and TensorFlow.
My input consists of images and some additional features.
I would like to use conv2d for the images and dense for the other inputs.
The question is: how do I build an architecture that has multiple input types, where each one gets connected to a different layer?
AI: This is quite easy to do using the keras functional API. Assuming you have an image of size 28 by 28 and 5 additional features, your model could look something like this:
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, concatenate
input_image = Input(shape=(28, 28, 3))
input_features = Input(shape=(5,))
# apply convolutional layers to image branch
x = Conv2D(32, 3)(input_image)
x = Conv2D(32, 3)(x)
x = MaxPool2D(2)(x)
x = Flatten()(x)
# concatenate flattened image branch with input features
concat = concatenate([x, input_features])
# apply dense layers on concatenated data
dense = Dense(64)(concat)
output = Dense(64)(dense)
# create models using inputs and output specified above
model = Model(inputs=[input_image, input_features], outputs=output)
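When fitting or predicting, the two inputs are then passed as a list, in the same order as in Model(inputs=[...]). A minimal sketch, assuming image_data (shape (n, 28, 28, 3)), feature_data (shape (n, 5)) and targets are NumPy arrays, and picking a loss that matches your actual output layer:
model.compile(optimizer="adam", loss="mse")
model.fit([image_data, feature_data], targets, epochs=10, batch_size=32)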
When visualized, the model structure would look like this:
from tensorflow import keras
keras.utils.plot_model(model, show_shapes=True) |
H: Why people use precision and not true negative rate (specificity)?
From my experience the standard way to evaluate a classifier is not to check only its accuracy but also its recall and precision.
People tend to assume that recall measures the model on positive samples, and precision measures it on negative samples. In addition, those measures are considered a good way to evaluate a model when you have an imbalanced distribution.
However, precision does not measure the model's performance only on negative samples. In addition, in contrast to recall, it is not agnostic to the positive-negative distribution. In fact, while recall is a fixed attribute of the model, precision is an attribute of both the model and the positive-negative distribution.
Recall is the True Positive Rate, and there is a symmetric measure - True Negative Rate, with all of its good qualities.
So my question is - Why, in the ML field, people tend to use recall and precision and not recall and true negative rate?
AI: The positive/negative distinction is not what the precision/recall pair of measures tries to capture.
precision measures the proportion of correctly predicted instances among the instances predicted as positive. In other words, if X is the precision then one can say "when the classifier predicts an instance as positive, it is correct X% of the time".
recall measures the proportion of correctly predicted instances among the gold-standard positive instances. In other words, if Y is the recall then one can say "when an instance is gold-positive, the classifier identifies it correctly Y% of the time".
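Written in terms of the usual confusion-matrix counts (TP, FP, FN, TN):
$$\text{precision} = \frac{TP}{TP+FP}, \qquad \text{recall} = \text{TPR} = \frac{TP}{TP+FN}, \qquad \text{TNR} = \frac{TN}{TN+FP}$$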
Thus the pair precision/recall focuses on the distinction "predicted as true" vs. "gold-standard true". It is designed for tasks where the positive class is really the class of interest, and the negative class is not essential.
Of course it's also a matter of habit: people get used to thinking in terms of precision/recall, but technically other measures (like TPR vs. TNR) could also be appropriate.
[Update]
Is it desirable for the evaluation measure to be agnostic to the class distribution?
I don't know if there is a canonical answer to this question, I suspect that there isn't and that it's a matter of interpretation.
Personally I don't have a strong answer to this question, but at least it doesn't seem obvious to me that it is.
Let's look at an example where the difference appears:
Confusion matrix A (gold positive = 10%)
             pred pos   pred neg
  gold pos       2          8
  gold neg      10         80
Precision=0.16, Recall=0.2
TPR=0.2, TNR=0.89
Confusion matrix B (gold positive = 50%)
             pred pos   pred neg
  gold pos       2          8
  gold neg       1          8
Precision=0.66, Recall=0.2
TPR=0.2, TNR=0.89
Interpretation
First it's important to keep in mind that when the class distribution is different, it means that the problem itself is different. Since in general evaluation measures are used to compare systems on the same task, there's no strong reason to favour an evaluation which is constant across different class distributions.
In the example above the precision is higher in case B, where TNR is equal in the two cases.
In the case of precision, the higher performance in case B cannot be interpreted as "B is better than A", since the problem is different (as mentioned above; btw this is a mistake that we see quite often on DSSE, when resampling is wrongly done on both the training and test set). This would be an argument leaning in favour of TNR.
However it could be interpreted as an indication that the problem in case B is easier than in case A. It's indeed easier in general to find the positive instances when they represent a larger proportion of the data. The fact that TNR cannot give this kind of information may sometimes be a disadvantage.
The precision/recall pair is intended for problems where the focus is the positive class, which is usually the minority class. A typical example is the detection of a disease in a population. Intuitively, in this kind of problem knowing the precision score is more useful than knowing the TNR. Also the TNR is usually very high and its variations are small. |
H: Semi-supervised classification with SelfTrainingClassifier: no training after calling fit()
I am practicing semi-supervised learning, at the moment experimenting with sklearn.semi_supervised.SelfTrainingClassifier. I found a dataset for multiclass classification (tweet sentiment classification into 5 sentiment categories) and randomly removed 90% of the targets.
Since it is textual data, preprocessing is needed: I applied CountVectorizer() and created a sklearn.pipeline.Pipeline with the vectorizer and the self-training classifier instance.
For the base estimator of the self-training classifier I used RandomForestClassifier.
My problem is, when running the below script, no training happens. The argument verbose is set to True so if any iteration happened, I would see its output. Also when inspecting the predicted labels, they are identical to the initial ones, confirming that despite no errors showing, something is not in order.
The full code:
import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
# Coronavirus dataset from Kaggle: https://www.kaggle.com/datatattle/covid-19-nlp-text-classification
# For this semi-supervised demonstration, only train file is used.
df = pd.read_csv("./datasets/Corona_NLP_Train.csv", encoding='latin-1')
# subsample the dataset (purely for efficiency, i.e. running the examples quicker)
df = df.sample(frac=0.1)
print("Original data shape: ", df.shape)
# Unlabeled data must be denoted by -1 in the target column. Since original data is labeled, we remove labels for 90% of target
rand_indices = df.sample(frac=0.90, random_state=0).index
# create new 'Sentiment_masked' column
df['Sentiment_masked'] = df['Sentiment']
df.loc[rand_indices, 'Sentiment_masked'] = -1
# check original 'Sentiment' distribution
print("Original (unaltered) sentiment distribution:\n", df['Sentiment'].value_counts())
# check masked sentiment distribution
print("Masked sentiment distribution:\n", df['Sentiment_masked'].value_counts())
X = df['OriginalTweet']
y = df['Sentiment_masked']
stclf = SelfTrainingClassifier(
base_estimator = RandomForestClassifier(n_estimators = 100),
threshold = 0.9,
verbose = True)
pipe = Pipeline([('vectorize', CountVectorizer()), ('model', stclf)])
pipe.fit(X, y)
And I returned the updated/modified labels using:
pd.Series(pipe['model'].transduction_).value_counts()
which yielded:
-1 3704
Positive 117
Negative 93
Neutral 79
Extremely Positive 72
Extremely Negative 51
i.e. the exact same as what df['Sentiment_masked'].value_counts() yielded earlier.
What am I missing here?
AI: The reason you are not seeing any verbose output from the model fitting, and no change in the model's labels, is that the threshold you are currently using is too high, so the model cannot add any new pseudo-labels to the dataset. Decreasing the threshold (e.g. to 0.7) does produce output with the number of labels added in each iteration:
End of iteration 1, added 54 new labels.
End of iteration 2, added 163 new labels.
End of iteration 3, added 310 new labels.
End of iteration 4, added 576 new labels.
End of iteration 5, added 982 new labels.
End of iteration 6, added 1350 new labels.
End of iteration 7, added 249 new labels.
End of iteration 8, added 12 new labels.
End of iteration 9, added 3 new labels.
End of iteration 10, added 1 new labels.
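For reference, the only change needed is the threshold argument when constructing the estimator (a sketch reusing the names from your code):
stclf = SelfTrainingClassifier(
    base_estimator = RandomForestClassifier(n_estimators = 100),
    threshold = 0.7,   # lowered from 0.9
    verbose = True)
pipe = Pipeline([('vectorize', CountVectorizer()), ('model', stclf)])
pipe.fit(X, y)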
The reason you are not seeing any change when getting the value counts for the different labels is because the model doesn't actually add the newly generated pseudo-labels to the original dataset. It only adds the labels internally (see the source code) and after fitting returns the class itself which now contains the classifier trained on the original dataset plus the pseudo-labels. This is stored in the base_estimator_ attribute which is then called when predicting on new data (e.g. see the predict method of sklearn.semi_supervised.SelfTrainingClassifier). |
H: Which method is more suitable? Overfitting of training data or low accuracy?
Recently, I tested two methods after embedding in my data, using Keras.
Convolution after embedding
Maxpooling after embedding
The first method's loss and validation loss look like this:
The second method's loss and validation loss look like this:
As you can see, the first one reduces the loss a lot for the training data, but the loss increases significantly in the validation set.
The second method's loss for the training data is higher than with the first method, but the losses on the training data and validation data are almost the same.
Which method is more suitable for further analysis with additional variables?
AI: The performance on in-sample data almost does not count. The performance on out-of-sample data is more indicative of how you should expect your model to perform on future inputs.
The second model has better out-of-sample performance. With just that information, I would prefer the second model. |
H: Is there any point in having layers in a neural network for regression problems?
In my textbook I read that an MLP with linear activation functions for the hidden layers can be reduced to a simple input-output system, i.e. one without hidden layers. This makes sense to me. Later on I read that for regression problems, the linear activation function is commonly used (along with the MSE loss function). How do these statements work together? Is it reasonable to say that no hidden layers are ever needed when using an MLP to solve a regression problem, or is one of the statements wrong (or are they compatible)?
AI: In regression, it makes perfect sense to have multiple hidden layers to model the complex relationships between inputs and outputs. However, those layers need to have non-linear activation functions, otherwise, they would be equivalent to a single layer.
When your textbook said that "linear activation function is commonly used (along with MSE loss function)", it refers only to the last layer, to allow the model to generate unbounded real numbers. The hidden layers must have non-linear activation functions.
P.S.: the equivalence of multiple layers with linear activations and a single layer can be proved easily:
$(xW_1 + b_1)W_2 + b_2 = xW_1W_2 + b_1W_2 + b_2 = x(W_1W_2) + (b_1W_2 + b_2)$.
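A quick numerical check of this equivalence (a sketch with random weights; any compatible shapes work):
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                 # batch of 4 inputs with 3 features
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)

two_layers = (x @ W1 + b1) @ W2 + b2        # two linear layers, no activation
one_layer = x @ (W1 @ W2) + (b1 @ W2 + b2)  # the single equivalent layer
print(np.allclose(two_layers, one_layer))   # True
Adding a non-linear activation between the two layers breaks this equivalence, which is exactly why hidden layers need one. |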
H: Which analytics metrics are useful?
I am working on a home project and the question is :
A session is defined as a period of activity in the app. If there is a period longer than 5 minutes when the user does not do anything actively the session ends. The session time is the time between the first action to the last action of the session.
The distribution of segments A and B is:
We want to figure out if segment A is performing differently than segment B. How would you decide if one is better than the other? Do not calculate any results, but list the metrics you would calculate in the order you would calculate them.
Edit: if we assume that "better" means longer session times, how do we determine which segment is better?
AI: First thing I would do is to try a better visualization, at least using a log scale on the Y axis.
For the question itself you want to compare the two distributions, so you could go with the usual Wilcoxon significance test (Student t-test is obviously not appropriate since the distribution is not normal). One may also be interested in the distance between the two distributions, in which case KL divergence or its variants could be used.
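For instance, a minimal sketch of such a test, assuming session_times_a and session_times_b are arrays of session durations for the two segments (SciPy's mannwhitneyu is the unpaired rank-sum form of the Wilcoxon test):
from scipy.stats import mannwhitneyu

# two-sided test: do the two session-time distributions differ?
stat, p_value = mannwhitneyu(session_times_a, session_times_b, alternative="two-sided")
print(p_value)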
Knowing which one is better depends on what is meant by "better"; from this description we don't even know whether it is a good thing for a user to spend more time in the app. |
H: Interpreting 'values' of a Decision Tree
I am trying to interpret my decision tree here, which resulted as part of pre-pruning:
I am trying to understand why the values in my nodes have decimal places. Ideally, they should represent the number of observations that belong to a particular class in my binary classifier, so they should be whole numbers.
They are not percentages either, since they don't add up to 100.
this is the code I used to plot this tree-
plt.figure(figsize=(15, 10))
out = tree.plot_tree(
best_model2,
feature_names=feature_names,
filled=True,
fontsize=9,
node_ids=False,
class_names=None,
)
for o in out:
arrow = o.arrow_patch
if arrow is not None:
arrow.set_edgecolor("black")
arrow.set_linewidth(1)
plt.show()
This tree is a post-pruned tree, with the following code:
best_model2 = DecisionTreeClassifier(
ccp_alpha=0.002, class_weight={0: 0.15, 1: 0.85}, random_state=1
)
best_model2.fit(X_train, y_train)
AI: This answer was submitted by the user @Vlad_Z
These values represent the weighted observations for each class, i.e. the number of observations per class multiplied by the respective class weight. Since your class weights aren't integers, the resulting values are the way they are. If you want to recover the class counts, you can simply divide the values by the class weights.
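For instance (a sketch with a hypothetical pair of node values; the weights are the ones you passed to DecisionTreeClassifier):
import numpy as np

class_weights = np.array([0.15, 0.85])    # class_weight={0: 0.15, 1: 0.85}
node_values = np.array([45.3, 212.5])     # hypothetical "value" pair shown in a node
raw_counts = node_values / class_weights  # per-class sample counts
print(raw_counts)                         # [302. 250.]
The recovered counts may be off by a small rounding error, but they should be very close to whole numbers. |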
H: Is it bad to use "coefficient of determination" for recommendation?
This is a general question about recommendation:
Is it a bad idea to use "coefficient of determination"($R^2$) as metrics for recommendation?
I am building a recommendation model and wonder if it is a good idea to use the coefficient of determination for evaluating models.
For pure regression I often use the coefficient of determination.
On the other hand, I cannot find "coefficient of determination" on recommendation Kaggle kernels or some benchmarks.
AI: First of all, the coefficient of determination ($R^2$) is not necessarily a good indicator for evaluating regression models. The equation for calculating $R^2$ is:
$R^2 = 1-\frac{RSS}{TSS}$
Here RSS is the residual sum of squares and TSS is the total sum of squares. $R^2$ only tells you how much of the variability in $Y$ the linear model can explain. See this post for a list of examples where $R^2$ is not a useful indicator for model selection.
Back to the recommendation question: I'm not an expert, but my intuition is that $R^2$ cannot evaluate and rank the robustness of every single recommendation. See the Kaggle notebooks below for a list of evaluation metrics.
Cosine similarity, RMSE link
Similarity score and customized score link |
H: Deep Learning 2D-Convol Layer: When does the Channel =1?
Sorry if this sounds basic, but for input_shape I understand that channels = 3 for coloured images.
But for black-and-white and greyscale images, channels = 1?
Thank you
model.add(Conv2D(filters=32, kernel_size=(4,4),input_shape=(28, 28, 1), activation='relu',))
input_shape = image_height, image_width, channels
AI: This is indeed correct: when talking about single images, a coloured RGB image has three channels (R, G, and B), while a black-and-white image has one channel indicating the pixel luminance. |
H: Estimate timeline for a ML Project
I am a novice data scientist and have been asked to provide an estimate for a data science project in our organization.
From the problem statement description, I am able to understand that it is a traditional binary classification problem.
However, I am new to the domain, dataset, etc. (and I don't have access to the full dataset yet).
Through my work, I will also have to interact with business users throughout to clarify my questions regarding the data, domain, etc.
How can I propose a timeline to my supervisor without any experience in this space? Are there any guidelines that I can follow to come up with a reasonable timeline?
AI: Look at your past experience. Even though you're a novice, you were hired as a data scientist, so you'll probably have some experience with data science projects. A simple binary classification problem with a few hundred datapoints can be solved in a productive afternoon, whereas a large project that requires significant upfront engineering for the acquisition of your dataset could take months.
Honesty is always key, as it leads to proper expectation management. Just stating the different phases of the project with an indication of how long they could take will already be quite nice. This could even be very rudimentary like:
data acquisition: 1 week ~ 3 months
EDA and preprocessing: ...
If you don't have a better guess than 'somewhere between 1 week and 3 months', don't try to make a better guess. Because it will only lead to disappointment. Trust me, I'm speaking from experience here.
Your supervisor will probably know you're a novice, and should not be offended and/or surprised if you come up with a timeline that is still quite abstract and prone to change over the coming time period.
Also always take into account Hofstadter's law:
It always takes longer than you expect, even when you take into account Hofstadter's Law |
H: Difference between Doc2Vec and BERT
I am trying to understand the difference between Doc2Vec and BERT. I do understand that Doc2Vec uses a paragraph ID which also serves as a paragraph vector. I am not sure, though, whether that paragraph ID helps the vector better capture context.
Moreover, BERT definitely understands context and assigns different vectors to a word such as "bank". For instance:
I robbed a bank
I was sitting by the bank of a river.
BERT would allocate different vectors to the word BANK here. I am trying to understand whether Doc2Vec also captures this context, since the paragraph ID would be different here (for Doc2Vec). Can anyone please help with this?
AI: The main difference is that BERT includes attention mechanisms, whereas Doc2Vec doesn't.
Attention mechanisms are functions to detect context between words, i.e. learning from words positions using attention weights.
This gives a better result than classic embedding approaches like Doc2Vec, thanks to a contextual approach of data.
On the other hand, BERT can handle out of vocabulary words because it uses subwords (example: "sub" + "word" + "s") instead of complete words (ex: "subwords"), which gives more meaningful information about the data. |
H: How to tell if downsampling helped model performance
Fitting a logistic classifier to imbalanced data. My target variable is 5% 1, 95% 0. As such, I think it's probably best to use the PR-AUC to evaluate the model rather than ROC-AUC. I get a PR-AUC of 0.1, which is better than nothing I guess.
Another way to potentially increase performance is to downsample the majority class (or upsample the minority or some combination but let's stick with downsampling).
The problem is that how do I tell if the downsampling actually helped model performance? unlike ROC-AUC which falls between 0.5 for a random model and 1 for a great model, PR-AUC is relative to the percentage of data in the positive class. Because the percentage of the positive class is different in these models by design, how can I compare them?
AI: You never downsample your test data. The test data should maintain the same class distribution as the original data. You then compare test results before and after sampling to see if it works. |
H: Can t-SNE be applied to visualize time series datasets
I have multiple time-series datasets containing 9 IMU sensor features. Suppose I use the sliding window method to split all these data into samples with a sequence length of 100, i.e. the dimension of my dataset would be (number of samples, 100, 9). Now I want to visualize those split samples to find the patterns inside. Can I treat it as tabular data, first transform the original dimension to (number of samples, 900), and then apply the t-SNE method directly on that to find the patterns? Or is there any other method to better find patterns in time-series data?
AI: It all depends on the data variability.
If the time series are too variable between each other in terms of raw values, you might not see any meaningful cluster.
That's why you will want to transform the data to make the time series more comparable.
A first step would be to use relative values instead of absolute values if you want to detect behavioral patterns.
Then, it is always recommended to normalize the data. Without normalization, the Gaussian functions behind the t-SNE algorithm would not segregate the small and the big values the same way, and this could lead to more variability (= no clear clusters).
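As a minimal sketch of the flatten-and-normalize approach you describe (assuming X is a NumPy array of shape (n_samples, 100, 9)):
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

X_flat = X.reshape(len(X), -1)                     # (n_samples, 900)
X_scaled = StandardScaler().fit_transform(X_flat)  # normalize each column
embedding = TSNE(n_components=2, perplexity=30).fit_transform(X_scaled)
# embedding has shape (n_samples, 2) and can be scatter-plotted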
In addition to that, if the variability is too high in terms of raw value, it could be interesting to apply a log, but be aware that there would be precision loss, which is generally not critical as you only want to visualize data in a lower dimension.
Finally, if the time series are too long (100 could be the case), and t-SNE can't detect visible similarities between sensors, you should simplify the data by taking mean values over 2 or more consecutive values.
Please keep in mind that t-SNE is mainly useful when you have many variables (e.g. comparing 50 engines having 9 sensors with 100 values each, where you want to reduce those 9 sensors to 2 dimensions) that you want to visualize to detect similarities or differences.
My last advice is to start with one important sensor before considering more sensors. As soon as you reach interesting results with one sensor, you can increase the complexity with more sensors. Alternatively, you can apply UMAP: it has the benefit of making relations between clusters meaningful and needing no normalization, but the result might be worse than t-SNE for individual points in the lower dimension. A correlation study between sensors can also be interesting: sensors with no or low correlation, or that are too noisy, could be discarded.
In conclusion, data preprocessing is crucial and depends on the algorithm you use, but always start with smaller datasets whose result quality you can evaluate, before considering the whole datasets. |
H: Pandas and data imputation from a multiindex dataframe
I have the following dataframe:
df = pd.DataFrame([[np.nan, 2, 20, 4],
[3, 1, np.nan, 1],
[3, 1, 15, 1],
[np.nan, 1, np.nan, 1],
[10, 1, 30, 4],
[50, 2, 35, 4],
[10, 1, 37, 4],
[40, 2, 30, 1]],
columns=list("ABCD"))
I want to fill the Nan values with their group means.
Towards that purpose, I run the following:
df_mean=df.groupby(["B","D"]).mean()
df_mean
         A     C
B D
1 1   3.0  15.0
  4  10.0  33.5
2 1  40.0  30.0
  4  50.0  27.5
Is there a way to fill the dataframe df with the values computed in df_mean?
AI: This is more of a programming question than a data science question and would therefore be better suited for stackoverflow stackexchange, but the following code should do what you're looking for:
df[["A", "C"]] = (
df
# create groups
.groupby(["B", "D"])
# transform the groups by filling na values with the group mean
.transform(lambda x: x.fillna(x.mean()))
)
 A  B   C  D
50  2  20  4
 3  1  15  1
 3  1  15  1
 3  1  15  1
10  1  30  4
50  2  35  4
10  1  37  4
40  2  30  1 |
H: Tidy vs. untidy data
In R for Data Science, the authors explain their idea of tidy data. They give an example for data that is not tidy:
#> # A tibble: 3 x 3
#> country `1999` `2000`
#> * <chr> <int> <int>
#> 1 Afghanistan 19987071 20595360
#> 2 Brazil 172006362 174504898
#> 3 China 1272915272 1280428583
A tidy version of the same data, according to the authors, would be:
#> # A tibble: 6 x 3
#> country year population
#> <chr> <chr> <int>
#> 1 Afghanistan 1999 19987071
#> 2 Afghanistan 2000 20595360
#> 3 Brazil 1999 172006362
#> 4 Brazil 2000 174504898
#> 5 China 1999 1272915272
#> 6 China 2000 1280428583
I see that in the untidy version, one must already know that the data is population figures, otherwise it is impossible to understand what 1999 and 2000 mean. However, that could be derived from context, e.g. if the tibble is stored in a variable called population.
Now, who am I to doubt their judgment -- I do not. But I would like to better understand their idea. What are the actual advantages of the second version? I cannot intuitively see them; e.g. calculation of the mean etc. would be easy in both cases AFAICS, even if different functions need to be used. On the other hand, I would think that it is e.g. easier to calculate the correlation between the population of two years if the data is stored in the "untidy" form. What is my mistake here?
AI: As you mention, the first example is data in an "untidy" format, which can make analysis more difficult for multiple reasons. The first one is the column names: as you mention, you do not know what the values in the columns actually represent. You mention that this can be derived from the context (e.g. the variable name), but what if the data is originally stored in a csv file called data.txt? In addition, how would this work if you have another type of value for which you also have data for 1999 and 2000 - what would you name the columns? A second (and probably the biggest) advantage of using tidy data is that it allows for easier and more standardized analysis. I think the following quote from the original paper describes it well:
Tidy data makes it easy for an analyst or a computer to extract needed
variables because it provides a standard way of structuring a dataset.
Compare Table 3 to Table 1: in Table 1 you need to use different
strategies to extract different variables. This slows analysis and
invites errors. If you consider how many data analysis operations
involve all of the values in a variable (every aggregation function),
you can see how important it is to extract these values in a simple,
standard way. Tidy data is particularly well suited for vectorised
programming languages like R, because the layout ensures that values
of different variables from the same observation are always paired.
Sections 4 and 5 of the paper give more in-depth information on how the tidy format works with existing functions within R and how to apply them. |
H: Associationg Rules/ Mining is this association rule strong
I have a problem: I cannot solve this exercise. What is the best way to solve it? What are the approaches for this kind of exercise?
The following table summarizes transactions in a supermarket where customers
bought tomatoes and/or mozzarella cheese or neither.
We study the association "mozzarella => tomatoes" (with the idea that many people
like to eat mozzarella with tomatoes (plus fresh basil plus olive oil - yum!)) and
assume a minimum support threshold of 25% and a minimum confidence threshold of
50%. Is this association rule strong?
AI: Market Basket Analysis is a technique which identifies the strength of association between pairs of products purchased together and identify patterns of co-occurrence. A co-occurrence is when two or more things take place together.
Market Basket Analysis creates If-Then scenario rules, for example, if item A is purchased then item B is likely to be purchased. The rules are probabilistic in nature or, in other words, they are derived from the frequencies of co-occurrence in the observations. Frequency is the proportion of baskets that contain the items of interest. The rules can be used in pricing strategies, product placement, and various types of cross-selling strategies.
Here is the calculation, using the counts implied by your table (5000 transactions in total, 2500 containing mozzarella, 3000 containing tomatoes, 2000 containing both):
Support({Moz} => {Tom}) = transactions containing both Moz and Tom / total number of transactions
= 2000 / 5000 = 0.4
Confidence({Moz} => {Tom}) = transactions containing both Moz and Tom / number of transactions containing Moz
= 2000 / 2500 = 0.8
Lift({Moz} => {Tom}) = Confidence({Moz} => {Tom}) / fraction of transactions containing Tom
= 0.8 / (3000 / 5000) = 0.8 / 0.6 = 1.33
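The same calculation as a quick sketch in Python (the counts are the ones listed above):
n_total, n_moz, n_tom, n_both = 5000, 2500, 3000, 2000

support = n_both / n_total             # 0.4, above the 25% minimum support
confidence = n_both / n_moz            # 0.8, above the 50% minimum confidence
lift = confidence / (n_tom / n_total)  # ~1.33, > 1 so the items are positively associated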
Since the support (0.4) is above the 25% minimum support threshold and the confidence (0.8) is above the 50% minimum confidence threshold, this association rule is strong for your table. |
H: Accessing regression coefficients when using MultiOutputRegressor
I am working on a multioutput (nr. targets: 2) regression task. The original data has a huge dimensionality (p>>n, i.e. there are far more predictors than observations), hence for the baseline models I chose to experiment with Lasso regression, wrapped in sklearn's MultiOutputRegressor. After optimizing the hyperparameters of the Lasso baseline, I wanted to look into model explainability by retrieving the coef_ of the wrapped Lasso regression model(s), but this doesn't seem to be possible. I'm now wondering how I could look into the model's coefficients and have a better understanding of the predictions it makes.
My idea was to return the estimator with the best hyperparameters from GridSearchCV by setting refit=True. Then, accessing the estimator argument of it which yields MultiOutputRegressor(Lasso()), as intended. Now MultiOutputRegressor also has an estimator argument, accessing it would return Lasso(). Last, Lasso has a coef_ argument that returns the coefficients of the regressor. According to sklearn documentation the shape of the array returned by this coef_ argument is either (n_features,) or (n_targets, n_features), so multioutput regression coefficients seem to be supported.
Sample data and code:
from numpy import logspace
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
X, y = make_regression(n_samples=1000, n_features=10, n_informative=5, n_targets=2, random_state=1, noise=0.5)
search = GridSearchCV(
MultiOutputRegressor(Lasso()),
param_grid={'estimator__alpha': logspace(-1,1,3)},
scoring='neg_mean_squared_error',
cv=10,
n_jobs=-1,
refit=True
)
best_model = search.fit(X, y)
print(best_model)
print(best_model.estimator.estimator.coef_)
AI: Instead of using the estimator attribute you should be using the best_estimator_ attribute, after which you can access the underlying estimators of the MultiOutputRegressor using the estimators_ attribute. You can then access the coefficients as follows:
coefficients = [estimator.coef_ for estimator in best_model.best_estimator_.estimators_]
# [array([-0. , 30.91353913, -0. , 76.42321339, 93.22724698,
# -0. , 0. , 86.41714933, 12.34299398, -0. ]),
# array([ 0. , 88.99494183, 0. , 8.93482644, 26.63584122,
# -0. , -0. , 3.19035541, 33.95384004, 0. ])] |
H: Retrieving the ordinal encoding of a variable after it's placed in a pipeline/columntransformer
I am applying ordinal encoding to a dataset through a column transformer - how can I retrieve the ordinal encoding of a feature (e.g. Area)?
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
import matplotlib.pyplot as plt
df = fetch_openml(data_id=41214, as_frame=True).frame
df
df_train, df_test = train_test_split(df, test_size=0.33, random_state=0)
dt_preprocessor = ColumnTransformer(
[
(
"categorical",
OrdinalEncoder(),
["VehBrand", "VehPower", "VehGas", "Area", "Region"],
),
("numeric", "passthrough", ["VehAge", "DrivAge", "BonusMalus","Density"]),
],
remainder="drop",
)
f_names = ["VehBrand", "VehPower", "VehGas", "Area", "Region", "VehAge", "DrivAge", "BonusMalus", "Density"]
dt = Pipeline(
[
("preprocessor", dt_preprocessor),
(
"regressor",
DecisionTreeRegressor(criterion='squared_error', max_depth=3, ccp_alpha=1e-5, min_samples_leaf=2000),
),
]
)
dt.fit(
df_train, df_train['ClaimFreq'], regressor__sample_weight=df_train["Exposure"]
)
fig, ax = plt.subplots(figsize=(75, 50))
tree.plot_tree(dt['regressor'], feature_names=f_names, ax=ax, fontsize=30)
plt.show()
AI: You can access the steps within a pipeline by their name using the named_steps attribute. After getting the preprocessing step you can then use the transformers_ attribute in combination with standard Python indexing to get to the OrdinalEncoder. Using the categories_ attribute then gives you the categories of the encoder and, since the index of each category is its encoded value, also the corresponding mapping.
dt.named_steps["preprocessor"].transformers_[0][1].categories_
# [array(['B1', 'B10', 'B11', 'B12', 'B13', 'B14', 'B2', 'B3', 'B4', 'B5',
# 'B6'], dtype=object),
# array([ 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.]),
# array(['Diesel', 'Regular'], dtype=object),
# array(['A', 'B', 'C', 'D', 'E', 'F'], dtype=object),
# array(['R11', 'R21', 'R22', 'R23', 'R24', 'R25', 'R26', 'R31', 'R41',
# 'R42', 'R43', 'R52', 'R53', 'R54', 'R72', 'R73', 'R74', 'R82',
# 'R83', 'R91', 'R93', 'R94'], dtype=object)]
If you want the mapping in a dictionary format you can use a dictionary and list comprehension:
categories = dt.named_steps["preprocessor"].transformers_[0][1].categories_
[
{
value: encoding
for value, encoding in zip(col_values, range(len(col_values)))
}
for col_values in categories
]
# [
# {
# "B1": 0,
# "B10": 1,
# "B11": 2,
# "B12": 3,
# "B13": 4,
# "B14": 5,
# "B2": 6,
# "B3": 7,
# "B4": 8,
# "B5": 9,
# "B6": 10
# },
# ...
# ] |
H: name 'layers' is not defined
I am trying to use EfficientNetB7 from the Keras implementation
Image classification via fine-tuning with EfficientNet
but the following code always gives me an error:
def build_model(num_classes):
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    x = img_augmentation(inputs)
    model = EfficientNetB7(include_top=False, input_tensor=x, weights="imagenet")
    # Freeze the pretrained weights
    model.trainable = False
    # Rebuild top
    x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
    x = layers.BatchNormalization()(x)
    top_dropout_rate = 0.2
    x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="pred")(x)
    # Compile
    model = tf.keras.Model(inputs, outputs, name="EfficientNet")
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
    model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
    return model
and I call the function using:
with strategy.scope():
    model = build_model(num_classes=NUM_CLASSE)

epochs = 10  # @param {type: "slider", min:8, max:80}
hist = model.fit(train_set, epochs=epochs, validation_data=test_set, verbose=2)
plot_hist(hist)
Error:
NameError: name 'layers' is not defined
can somebody help me?
AI: The reason for this seems to be that you are importing
from tensorflow.keras.layers import *
but when calling you are using:
layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
This call will give you an error, so instead try the import below.
Please try this import:
from tensorflow.keras import layers |
H: exclude variables with no variation during prediction?
I am working on a binary classification problem.
I do have certain input categorical variables such as gender, ethnicity etc.
But all the records have the same value. Meaning, all 10K records in my dataset have female as gender value. Same for ethnicity as well.
Is it okay to exclude these variables outright during model building?
Or is it important to retain them? Since there is no variation (in these variables between output classes), I assume they will not contribute anything to the output.
Can you help me with this?
AI: If you have only females in your dataset, adding the gender feature to the model input will not improve it.
The technical explanation of why it won't help differs between models, but the intuition is simple - the model tries to find correlations between the features and the labels, and a feature with a fixed value cannot correlate with anything (its covariance with any variable is zero).
You didn't directly ask about it, but it is worth mentioning that if the classification problem is related to gender, the model will work better on females than on males because you have no data about the latter. That will be true whether or not you add those features.
I talked about the gender feature as an example, but the answer is valid for any other feature.
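If you want to drop such constant columns automatically, here is a minimal sketch using scikit-learn's VarianceThreshold (assuming X_encoded is a DataFrame of already-encoded numeric features):
from sklearn.feature_selection import VarianceThreshold

selector = VarianceThreshold(threshold=0.0)      # removes zero-variance (constant) columns
X_reduced = selector.fit_transform(X_encoded)
kept_columns = X_encoded.columns[selector.get_support()]
This only removes columns that carry no information at all; features that vary but are irrelevant are a separate question. |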
H: T-test against normalised or standardised data gives different results
I am studying the problem of predicting the popularity of a tweet, and want to test the null hypothesis: there is no relationship between favorite_counts and another set of variables, like the number of friends of users.
I am not sure whether to normalise or standardise the variables, because I am still thinking about how to model popularity and don't know what the distributions of likes and friends among users look like (please advise).
So I tried both, and ran an independent t-test.
I get very different results:
from sklearn.preprocessing import StandardScaler, MinMaxScaler
do_scaled = pd.DataFrame(StandardScaler().fit_transform(do[columns].values), columns=columns)
ttest_ind(do_scaled.favorite_count, do_scaled.user_favourites_count)
#Ttest_indResult(statistic=-1.682257624164912e-16, pvalue=0.9999999999999999)
#pvalue is about 1 : the association is likely due to pure chance
here a boxplot to show the distribution of outliers (StandardScaler)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
do_scaled = pd.DataFrame(MinMaxScaler().fit_transform(do[columns].values), columns=columns)
ttest_ind(do_scaled.favorite_count, do_scaled.user_favourites_count)
#Ttest_indResult(statistic=-5.999028611045575, pvalue=2.3988962933916377e-09)
#pvalue is almost 0 (less than 1%) : there is an association between predictor and response.
here a boxplot to show the distribution of outliers (MinMaxScaler)
I don't understand why I get opposite results and don't know how to interpret them.
Can you please advise?
Can you please help me approach the problem?
AI: First, the t-test's null hypothesis is that there is no difference between the means of the two samples. The p-value is the probability of observing the data given that the null hypothesis is correct, so if the p-value is small you are likely to reject the null hypothesis. So in your case it is actually the other way around from what you wrote:
In case of StandardScaler your test says that "two samples are taken from the distributions with the same mean"^[1].
And in case of MinMaxScaler it says that "two samples are unlikely to be taken from the distributions with the same mean".
Now to the second part: why you get this result. The answer is actually quite straightforward. To compute Student's statistic one uses 3 parameters (6 in the case of comparing the means of two samples): the mean of the sample, the variance (or standard deviation) of the sample, and the size of the sample^[2]. StandardScaler applies z-scoring:
$$
X_{\text{standardized}} = \frac{X - \text{mean}(X)}{\text{std}(X)}
$$
Thus, after standardization both of the columns have zero mean and unit variance, therefore the Student t-test says that the means of the two samples are the same (because they are indeed the same and equal to 0).
Conversely, MinMaxScaler:
$$
X_{\text{minmax}} = \frac{X - \text{min}(X)}{\text{max(X)} - \text{min(X)}}
$$
makes neither the mean nor the variance of the two samples equal (it maps the minimal value of each sample to 0 and the maximal to 1), therefore the Student t-test says that they are different.
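A quick numerical illustration (the two columns here are arbitrary, just drawn from differently shaped distributions):
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

rng = np.random.default_rng(0)
data = np.c_[rng.exponential(5.0, size=1000), rng.normal(10.0, 2.0, size=1000)]

print(StandardScaler().fit_transform(data).mean(axis=0))  # both columns ~0: the t-test sees equal means
print(MinMaxScaler().fit_transform(data).mean(axis=0))    # clearly different means: the t-test can reject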
[1] To be more precise, your result says that you cannot reject the null hypothesis (you can never accept a null hypothesis in statistical testing).
[2] You could check the technicalities of the t-test on wiki page for Welch's t-test https://en.wikipedia.org/wiki/Welch%27s_t-test (Unpaired independent T-test for two samples of different sizes with different variance, which is an appropriate version of the test in your case) |
H: AutoML for categorical feature encoding
I have an input dataset with more than 100 variables where around 80% of the variables are categorical in nature.
While some variables like gender, country, etc. can be one-hot encoded, I also have a few variables which have an inherent order in their values, such as rating - very good, good, bad, etc.
Is there any auto-ML approach which we can use to do this encoding based on the variable type?
For ex: I would like to provide the below two lists as input to the auto-ml arguments.
one-hot-list = ['Gender', 'Country'] # one-hot encoding
ordinal_list = ['Feedback', 'Level_of_interest'] # ordinal encoding
Is there any auto-ML package that can do this for us?
Or is there any other efficient way to do this, as I have 80 categorical columns?
AI: Sure, there are plenty of ways; using scikit-learn it will look as follows:
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

ohe = OneHotEncoder(sparse_output=False)  # named sparse=False in scikit-learn < 1.2
ordin = OrdinalEncoder()

oh_col_names = [...]
ordin_col_names = [...]

encoded_oh = ohe.fit_transform(X[oh_col_names])  # supposing X is your pandas.DataFrame
encoded_ordin = ordin.fit_transform(X[ordin_col_names])
You could also use the method get_feature_names_out (or get_feature_names in sklearn versions before 1.2) to get appropriate names for the encoded features:
X = X.drop(oh_col_names, axis=1)
X[ohe.get_feature_names_out()] = encoded_oh

X = X.drop(ordin_col_names, axis=1)
X[ordin.get_feature_names_out()] = encoded_ordin |
H: LightGBM eval_set - what to do when I fit the final model (there's no test data left)
I'm using LightGBM's eval_set feature when fitting my model. This enables early stopping on the number of estimators used.
callbacks = [lgb.early_stopping(80, verbose=0), lgb.log_evaluation(period=0)]
fit_params = {"callbacks":callbacks, "eval_metric" : "auc", "eval_set" : [(x_train,y_train), (x_test,y_test)], "eval_names" : ['train', 'valid']}
lg = LGBMClassifier(n_estimators=5000, verbose=-1,objective="binary", **{"scale_pos_weight":train_weight, "metric":"auc"})#"binary_logloss"})
This works great when doing cross validation and early stopping is triggered.
But when I have finally selected a model and want to train it on the full data set, I have no test data left to trigger early stopping.
What's the accepted practice here? Can I use the holdout data?
Or shall I keep another set of data purely for the eval_set?
EDIT:
Come to think of it, is there data leakage if in a cross validation I pass my test data to eval_set? Am I doing this all wrong?
AI: It's always good practice to keep a completely unused evaluation data set for stopping your final model.
Repeating the early stopping procedure many times may result in the model overfitting the validation dataset. This can happen just as easily as overfitting the training dataset.
One approach is to only use early stopping once all the other hyperparameters of the model have been chosen.
Another strategy may be to use a different split of the training dataset into train and validation sets each time early stopping is used.
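For the last strategy, a minimal sketch of the final fit (assuming X_full / y_full are the full training data and reusing the callback from your code):
from sklearn.model_selection import train_test_split

# hold out a small slice of the full data purely to drive early stopping
X_fit, X_es, y_fit, y_es = train_test_split(X_full, y_full, test_size=0.1, stratify=y_full)

final_model = LGBMClassifier(n_estimators=5000, objective="binary")
final_model.fit(X_fit, y_fit, eval_set=[(X_es, y_es)],
                callbacks=[lgb.early_stopping(80, verbose=0)])
The held-out slice is only used to pick the stopping round, not to report performance. |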
H: Custom Named-Entity Recognition (NER) in product titles using deep learning
I am new to machine learning and Natural Language Processing (NLP). I am trying to identify which brand, product name, dimension, color, ... a product has from its product title. That is, from
'Sony ZX Series Wired On-Ear Headphones, Black MDR-ZX110'
I want to extract
'brand=''Sony''', 'item=''Headphones''', 'color=''Black''', 'model_number=''MDR-ZX110'''.
I understand that this amounts to something like custom Named-Entity Recognition.
The very minimum of what my algorithm should do is identify the 'item' attribute value, i.e. it should recognize that the above example refers to ''Headphones''.
My dataset contains product titles such as the one above with appropriate attribute-value pairs scraped from e-commerce websites.
Which deep learning algorithm is best-suited to solve this problem? Which type of input (embeddings?), neural network model, output layer should I choose to start?
AI: I am currently working on a similar project, but limited only to brand detection in product titles. The task is a named entity recognition task and can be solved by different models; the most used ones are BI-LSTM + CRF (a bidirectional LSTM with a CRF layer on top). You could try to use spaCy for the task, which has nice documentation and a good workflow to train NER models. Or you can build the model yourself using TensorFlow (here is a good tutorial) or PyTorch (there is a tutorial here)
Obviously, you will need annotated data to train your models.
Here are some papers that might help you get some insights (at least they did for me) :
Sequence tagging with BI-LSTM CRF
Product name detection in user generated content (a bit old)
Edit
You can find my project which is similar to yours here with pretrained models: https://github.com/annis-souames/brand-ner
Here's a second project on Github :
https://github.com/maciej-cecot/brand-detection |
H: Should number of classes be the same in few shot learning train and test?
I used to believe that in k-way n-shot few-shot learning, k and n (the number of classes and the number of samples from each class, respectively) must be the same in the train and test phases. But now I have come across a git repository for few-shot learning that uses different numbers in the train and test phases:
parser.add_argument('--dataset')
parser.add_argument('--distance', default='l2')
parser.add_argument('--n-train', default=1, type=int)
parser.add_argument('--n-test', default=1, type=int)
parser.add_argument('--k-train', default=60, type=int)
parser.add_argument('--k-test', default=5, type=int)
parser.add_argument('--q-train', default=5, type=int)
parser.add_argument('--q-test', default=1, type=int)
Are we allowed to do so?
AI: In few-shot learning, the number of classes does not have to be the same in the training and inference stages. Generally speaking, the number of classes in training is bigger than that in inference. The most crucial setting in few-shot learning is that the classes in the inference phase must not be present in the training phase. In other words, the intersection of the classes in training and the classes in inference is the empty set. |
H: Rough ideas of expected performance boost from over-sampling techniques?
I am trying to train a classifier for a multi-class classification task. However, the dataset is very imbalanced. About half of the around 160 unique labels have only 10 or fewer samples each. There are about 20 labels that occur exactly once. So the dataset contains a few classes that are well represented and a very long and skinny tail of rare labels.
There are around 50 features (both numerical and categorical) and around 20 000 samples in the dataset.
I have looked briefly into this python library: https://imbalanced-learn.org/stable/index.html but I haven't yet gotten good results. Might be that I am not using it correctly.
What kind of boost from SMOTE or other methods for handling imbalanced datasets can I realistically expect? I know this is context dependent, but I am trying to get some rough ideas.
AI: My experience is that oversampling with replacement may give better classification performance than SMOTE on imbalanced data, although the latter is considered more advanced than the former. If the minority classes are too small, the synthetic data generated by SMOTE can have wrong labels, i.e. a synthetic instance is class 0 (a minority class) but should be class 1 (a majority class). This is because SMOTE labels the synthetic instances based on their K nearest neighbors in the training set. When the class 0 instances and the class 1 instances of the training set are mixed together in the multi-dimensional space of the features, i.e. the classes cannot be separated either linearly or non-linearly, SMOTE will label the synthetic instances wrongly, which leads to poor classification performance of the trained classifier. Boosting can lead to over-fitting of the ensemble model to the training set and poorer generalization than bagging, because the base classifiers of boosting are trained one by one on the training instances that were classified wrongly by the preceding base classifier, which over-fits the training set.
The balanced random forest algorithm of the imbalanced-learn library normally gives promising results on imbalanced datasets. The balanced bagging classifier is also worth trying.
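A minimal usage sketch (assuming X_train and y_train hold your encoded features and labels; see the documentation links below):
from imblearn.ensemble import BalancedRandomForestClassifier

clf = BalancedRandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)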
https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedRandomForestClassifier.html
https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedBaggingClassifier.html#imblearn.ensemble.BalancedBaggingClassifier |
H: Is it possible to reduce the time of computing DTW with the dtw-python package by disabling computation of the warping path?
I am trying to classify some time series using dtw-python package which is a python version of R package implementing Dynamic Time Warping described in this nice paper. By default a call to dtw function returns DTW object containing the distance as well as found warping path (docs).
Classification with K-Nearest-Neighbours requires computing DTW(X,Y) for each of possible pairs $X \in T, Y \in E$, where $T$ is a test set and $E$ is eval set. Results can be stored in $|T| \times |E|$ distances matrix $\lambda$ and later used to find the best matches. It is a lot of computations though. In the cited paper we can read that:
$\lambda$ can be straightforwardly computed through proxy::dist(q, q, method = "DTW"). Computing K × K alignments in this way is faster than iterating over the plain dtw call, because only the numeric value of the distance is computed, while the construction of the actual warping path is bypassed.
I don't need to know the warping paths so I'd like to use this approach. Unfortunately this is valid for R version only as proxy is R package. Can I do something equivalent so that dtw in dtw-python doesn't return warping paths and only the distance so that it is faster?
AI: What function are you currently using to compute DTW? Without knowing much about DTW and the package, it seems that the dtw function has a keyword argument distance_only (see the docs) that can be used to calculate only the distance between two time series.
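A minimal sketch of that call (assuming x and y are 1-D NumPy arrays holding two series):
import numpy as np
from dtw import dtw

x = np.random.randn(200)
y = np.random.randn(250)

# skip construction of the warping path, keep only the distance
alignment = dtw(x, y, distance_only=True)
print(alignment.distance)
Looping this over all pairs of series then fills the distance matrix without ever building the warping paths. |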
H: Gan paper: sampling the distribution
In the Gan paper it is said page 3 Figure 1:
"The lower horizontal line is
the domain from which z is sampled, in this case uniformly. The horizontal line above is part of the domain
of x. The upward arrows show how the mapping x = G(z) imposes the non-uniform distribution pg on
transformed samples"
For those who wants to see the figure:
I wanted to know what that would mean in practical case.
Let's say you are working with images that are normalized between [0,..,1]; this would be the domain of x as referred to in the paper, right? Does this mean that I would have to sample my z from the domain of x, i.e. [0,..,1]?
In most implementations I see people sampling points randomly using things such as:
np.random.randn(latent_dim)
AI: Let's say you are working with images that are normalized between [0,..,1]; this would be the domain of x as referred to in the paper, right?
No, the domain of X would be "images of [whatever they contain (e.g. dogs)] normalized between 0 and 1".
Does this mean that I would have to sample my z from the domain of x, i.e. [0,..,1]?
No, they are both different domains and the generator $G$ maps between them.
In the paragraph you linked, the authors just point out that the generator $G$ is a function that maps the input data (i.e. random vectors following a uniform distribution in $[0, 1]$) to the output data (e.g. images of dogs) and that the mapping is non-regular, meaning that very different inputs may lead to similar outputs and vice versa. |
H: RCNN to predict sequence of images (video frames)?
In the following work the authors apply a convolutional recurrent neural network (RNN) to predict the spatiotemporal evolution of microstructure represented by 2D image sequences. In particular, they apply some sort of 3D-CNN & LSTM architecture to predict crystal growth:
In the above picture, we can see RNN predictions (P) versus ground truth (G) from a testing case, in which the RNN outputs 50 frames based on 10 input frames.
Now, this is something new to me: how is it possible for an RCNN to generate images as output? From my (limited) knowledge, the only structures able to generate new images as output are generative adversarial networks (GANs) and convolutional encoder-decoder NNs (like VAEs), but apparently the authors achieve these results by solely stacking together 3D-Convs and RNN units.
Have you ever seen these kind of architectures?
AI: The authors provide this image in their supplemental information:
There, you can see their explanation. The convolutional layers encode the image into some latent space representation. The RNN operates in this latent space, generating a new latent space representation based on the previous observations. For any latent space representation, the decoder can convert it into an image.
Thus the RCNN uses essentially the same procedure as the model types you mentioned (GANs, Convolutional Encoder-Decoders); there is a decoder that takes representations from the latent space to the image space. |
H: Backpropagation of Bias in Neural networks
My goal is to calculate backpropagation (especially the backpropagation of the bias).
For example, X, W and B are Python numpy arrays, such as [[0,0],[0,1]], [[5,5,5],[10,10,10]] and [1,2,3] respectively. And suppose dL/dY is [[1,2,3],[4,5,6]].
How do I calculate dL/dB? The answer should be [5, 7, 9]. Why is it calculated that way?
AI: The image you provided already shows how to calculate the derivative of the loss with respect to the biases: it is equal to the derivative of the loss with respect to the Y values. The only thing you have to do is take the sum of $\frac{\partial L}{\partial Y}$ over the samples, which gives you $\frac{\partial L}{\partial B} = [1 + 4, 2 + 5, 3 + 6] = [5, 7, 9]$.
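The same computation as a quick NumPy sketch:
import numpy as np

dL_dY = np.array([[1, 2, 3],
                  [4, 5, 6]])
dL_dB = dL_dY.sum(axis=0)   # sum over the batch (sample) axis
print(dL_dB)                # [5 7 9]
The bias is added to every sample's output, so its gradient accumulates the per-sample gradients - exactly this column-wise sum. |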
H: Does ValueError: 'rat' is not in list mean the word does not exist in the tokenizer?
Does this error mean that the word doesn't exist in the tokenizer?
return sent.split(" ").index(word)
ValueError: 'rat' is not in list
the code sequence is like:
def sentences():
    for sent in sentences:
        token = tokenizer.tokenize(sent)
        for i in token:
            idx = get_word_idx(sent, i)

def get_word_idx(sent: str, word: str):
    return sent.split(" ").index(word)
sentences split returns ['long', 'restaurant', 'table', 'with', 'rattan', 'rounded', 'back', 'chairs'],
and I think rattan here is the problem.
AI: First, a tokenizer doesn't rely on a dictionary of predefined whole words, so "adding a new token" to the tokenizer is not really the point here.
Instead it uses indications in the text in order to separate the tokens. The most common indication is of course a whitespace character " ", but there are lots of cases where it's more complex than that: punctuation marks, and - for sub-word tokenizers such as WordPiece/BPE - rare words that get split into smaller pieces (for example rattan may come back as something like rat + ##tan). This is why there are many cases where the second method with sent.split(" ").index(word) would not see the same tokens as tokenizer.tokenize(sent).
So yes, the error means that the token 'rat' produced by the tokenizer is not an element of the whitespace-split list, which is exactly the mismatch described above. Btw rattan is a real word, so the problem is not the word itself but the fact that the two splitting methods disagree.
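You can see the mismatch directly; the sketch below assumes a Hugging Face sub-word tokenizer (the question does not say which tokenizer is actually used):
from transformers import AutoTokenizer  # assumption: a Hugging Face sub-word tokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
sent = "long restaurant table with rattan rounded back chairs"

print(tok.tokenize(sent))   # rare words may come back as sub-word pieces (exact pieces depend on the vocabulary)
print(sent.split(" "))      # a plain whitespace split keeps 'rattan' whole
# any sub-word piece that is not itself a whole whitespace-separated word will make
# sent.split(" ").index(word) raise "ValueError: ... is not in list"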
H: I have two dataframes, df and df1, I want to check if df1 columns values is present in df columns and print the entire row of the df if present
first df:
df = pd.DataFrame({'col1':[1,2,3,4],
'col2':[5,6,7,8],
'col3':[9,10,11,12],
'col4':[13,14,15,16]})
second df:
df1= pd.DataFrame({'col1':[4,5,12,10],
'col2':[1,5,25,12]})
I used this :
mergged = pd.merge(df,df1, on=['col1'],how='inner')
print(mergged)
What I want as output: compare both dataframes' col1 and col2, and if the values match, display the whole record.
Desired Output:
4 8 12 16
1 5 9 13
Any ideas or help would be very helpful, thanks.
AI: You can accomplish this by comparing the two columns using .isin(), like so:
import pandas as pd
df1 = pd.DataFrame({'col1': [1, 2, 3, 4],
'col2': [5, 6, 7, 8],
'col3': [9, 10, 11, 12],
'col4': [13, 14, 15, 16]})
df2 = pd.DataFrame({'col1': [4, 5, 12, 10],
'col2': [1, 5, 25, 12]})
df3 = df1[df1["col1"].isin(df2["col1"])]
df4 = df1[df1["col2"].isin(df2["col2"])]
print(df3.values[0])
print(df4.values[0])
Outputs:
[ 4 8 12 16]
[ 1 5 9 13]
If you need to check all different combinations of columns I believe you would need to loop over the different combinations. E.g. for col_name in df1.columns would get you the column name as a string. |
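A minimal sketch of that loop, assuming you only want to compare columns that exist in both dataframes:
# check every column shared by both dataframes (here col1 and col2)
for col_name in df1.columns.intersection(df2.columns):
    matches = df1[df1[col_name].isin(df2[col_name])]
    print(f"rows of df1 whose {col_name} appears in df2:")
    print(matches)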
H: In ML why selecting the best variables?
Almost all ML notebooks out there have a section where they select the best features to use in the model. Why is this step always there ? How bad can it be to keep a variable that is not correlated with the response variable ? If you are really unlucky then yes a feature that is positively correlated with your response in your training set could in fact be negatively correlated with it in the real world. But then, it's not even sure that one will be able to catch it with a feature selection routine.
My assumption is that it used to be a necessary step when computing resources were scarce, but with today's resources it is basically irrelevant.
What is your view ? Can you give a real world example where it would harm the model to keep all training features ?
AI: You are right. If someone is using regularization correctly and doing hyperparameter tuning to avoid overfitting, then it should not be a problem theoretically (i.e. multicollinearity will not reduce model performance).
However, it may matter in a number of practical circumstances. Here are two examples:
You want to limit the amount of data you need to store in a database for a model that you are frequently running; it can be expensive storage-wise and computation-wise to keep variables that don't contribute to model performance. So although computing resources are not "scarce", they are still monetarily expensive, and carrying features you could drop is also a time sink.
For interpretation's sake, it is easier to understand the model if you limit the number of variables - especially if you need to present the model to stakeholders (if you work as a data scientist) and explain its performance.
H: Awful predictions of RNN while MSE is very low
I have encountered a strange situation where the predictions of RNN are just awful despite the fact that NN has found a minimum of loss function at 0.002 for training and 0.0013-0.0015 for validation data.
My data set consists of 12470 samples with 60 time steps each.
I've used sklearn shuffling method for every prepared dataset (x_train, y_train etc)
like this:
x_train, y_train, x_valid, y_valid, x_test, y_test = shuffle(x_train, y_train, x_valid, y_valid, x_test, y_test, random_state=0 )
The model looks like this:
n_neurons = 64
epochs = 150
lr = 0.0001
initializer = tf.keras.initializers.HeNormal()
model = keras.models.Sequential([
keras.layers.SimpleRNN(n_neurons, activation='elu', return_sequences=True, input_shape=[None, 1], kernel_regularizer='l2', bias_regularizer='l2', kernel_initializer=initializer, bias_initializer=tf.keras.initializers.RandomNormal),
keras.layers.Dropout(0.2),
keras.layers.SimpleRNN(n_neurons, activation='tanh', return_sequences=True, kernel_regularizer='l2', bias_regularizer='l2', kernel_initializer=initializer, bias_initializer=tf.keras.initializers.RandomNormal),
keras.layers.Dropout(0.2),
keras.layers.SimpleRNN(n_neurons, activation='selu', return_sequences=True, kernel_regularizer='l2', bias_regularizer='l2', kernel_initializer=initializer, bias_initializer=tf.keras.initializers.RandomNormal),
keras.layers.Dropout(0.2),
keras.layers.SimpleRNN(n_neurons, kernel_regularizer='l2', bias_regularizer='l2', kernel_initializer=initializer, bias_initializer=tf.keras.initializers.RandomNormal),
keras.layers.Dropout(0.2),
keras.layers.Dense(1, activation='linear', kernel_regularizer='l2', bias_regularizer='l2', kernel_initializer=initializer, bias_initializer=tf.keras.initializers.RandomNormal)
])
np.random.randn(42)
tf.random.set_seed(42)
optimizer = keras.optimizers.Adam(learning_rate=lr)
model.compile(loss='mse', optimizer=optimizer)
history = model.fit(x_train, y_train, epochs=epochs, batch_size=64,
validation_data=(x_valid, y_valid))
I am aware of the fact that He initialization is not optimal for tanh, I was just changing activation functions during testing of my model, to see which one will fit.
Now I've tested learning rates in the range 0.0001 to 0.0009 with a step of 0.0001.
But after a number of epochs, models with these learning rates got stuck at 0.03 MSE loss and then changed to NaN.
A learning rate of 0.0001 appears to be the best (at this very moment I am testing lower learning rates).
As I wrote above this learning rate appears to be the best due to two facts:
has found a minimum
the loss curve is ideal
But as you see, the prediction is awful; the NN only kind of gets the "range" it should be in.
More examples below:
What is causing such a bad performance of the RNN? How could I fix this?
AI: If I am getting you right, you are trying to predict the absolute price, and not just the change in price from the last time step. This task is much more difficult than just predicting the change in price from the previous time step, as the net accurately needs to memorize what the price in the previous time steps were. I suggest you calculate the log returns from each time step to the next and input these values into the RNN. The log return at time step t is equal to log(price(t)/price(t-1)). The log is a small addition, which can help as log returns are additive (they still reflect the price change after adding log returns of different time steps together). Then your model could predict a change in price from the most recent time step.
This is one way of doing this, but remember that predicting prices is not a trivial task and "good results" are very hard to achieve.
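A minimal sketch of the log-return transformation (assuming `prices` is a 1-D array of prices ordered in time; the variable name is my assumption):
import numpy as np

prices = np.asarray(prices)
log_returns = np.log(prices[1:] / prices[:-1])   # log return at step t: log(price(t) / price(t-1))

# these log returns (instead of raw prices) would then be windowed into 60-step sequences for the RNN,
# and the model would predict the next log return rather than the absolute price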
H: Merge two dataframes on multiple columns, only if not NaN
Given two Pandas dataframes, how can I use the second dataframe to fill in missing values, given multiple key columns?
Col1 Col2 Key1 Key2 Extra1 Col1 Col2 Key1 Key2 Col1 Col2 Key1 Key2 Extra1
-------------------------------------------------------------------------------------------------------------------
["A", "B", 1.10, 1.11, "Alice"] ["A", "B", 1.10, 1.11, "Alice"] # left df has more non-NaNs, so leave it
["C", "D", 2.10, 2.11, "Bob"] [np.nan, np.nan, 1.10, 1.11] ["C", "D", 2.10, 2.11, "Bob"] # unmatched row should still exist
[np.nan, np.nan, 3.10, 3.11, "Charlie"] + ["E", "F", 3.10, 3.11] = ["E", "F", 3.10, 3.11, "Charlie"] # left df has NaN, so fill in values
["I", np.nan, 5.10, 5.11, "Destiny"] ["G", "H", 4.10, 4.11] ["I", np.nan, 5.10, 5.11, "Destiny"] # no matching values in second df, so leave it
[np.nan, "J", 6.10, 6.11, "Evan"] [np.nan, "J", 6.10, 6.11, "Evan"] # no matching values in second df, so leave it
My attempt:
import pandas as pd
import numpy as np
data = [
["A", "B", 1.10, 1.11, "Alice"],
["C", "D", 2.10, 2.11, "Bob"],
[np.nan, np.nan, 3.10, 3.11, "Charlie"],
["I", np.nan, 5.10, 5.11, "Destiny"],
[np.nan, "J", 6.10, 6.11, "Evan"],
]
df1 = pd.DataFrame(data, columns = ['common_1', 'common_2', 'common_3', 'common_4', 'extra_1'])
data = [
[np.nan, np.nan, 1.10, 1.11],
["E", "F", 3.10, 3.11],
["G", "H", 4.10, 4.11],
]
df2 = pd.DataFrame(data, columns = ['common_1_other', 'common_2_other', 'common_3_other', 'common_4_other'])
df3 = pd.merge(
df1,
df2,
how="left",
left_on=["common_3", "common_4"],
right_on=["common_3_other", "common_4_other"],
)
"""
Want df3 to be:
["A", "B", 1.10, 1.11, "Alice"]
["C", "D", 2.10, 2.11, "Bob"]
["E", "F", 3.10, 3.11, "Charlie"]
["I", np.nan, 5.10, 5.11, "Destiny"]
[np.nan, "J", 6.10, 6.11, "Evan"]
"""
AI: If you remove all the "_other" from the column names of your df2, then you can do
df1.set_index(['common_3', 'common_4']).fillna(df2.set_index(['common_3', 'common_4'])).reset_index()
This should fill nan in any of the Col1 and Col2 if there is a match in both Key1 and Key2 |
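A sketch of the full step, including the renaming of the "_other" columns from the question:
df2_renamed = df2.rename(columns=lambda c: c.replace("_other", ""))

df3 = (
    df1.set_index(["common_3", "common_4"])
       .fillna(df2_renamed.set_index(["common_3", "common_4"]))
       .reset_index()
)
DataFrame.fillna with another dataframe aligns on both the index and the column names, which is why the keys are moved into the index first.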
H: The Python TSAI library for time series -- how can I make it less verbose?
I tried to RTFM, here is the documentation:
https://timeseriesai.github.io/tsai/
I found no mention of how to get rid of the table that is output while training. Here is an example of what I do NOT want it to print:
# Assuming you have X, y, and splits
batch_tfms = TSStandardize()
fcst = TSForecaster(X, y, splits=splits, batch_tfms=batch_tfms, bs=nrows, arch=TST, metrics=mae)
fcst.fit_one_cycle(len(splits[0]), 1e-3)
# I do not want this stuff --+
# |
# v
epoch train_loss valid_loss mae time
0 0.156252 0.108844 0.295480 00:00
1 0.132855 0.075484 0.238252 00:00
2 0.126005 0.058170 0.203647 00:00
3 0.122865 0.048346 0.181954 00:00
4 0.119844 0.043026 0.169333 00:00
5 0.117942 0.039927 0.161866 00:00
6 0.116912 0.037847 0.156890 00:00
7 0.114023 0.036735 0.154284 00:00
8 0.112187 0.036117 0.152822 00:00
AI: The extra output during training is not from the tsai package itself but from the underlying fastai package. Depending on what out you want to show/hide you can use a few different context managers. To hide just the table output you should be able to use the model.no_logging() context manager:
with fcst.no_logging():
fcst.fit_one_cycle(25, 1e-3)
You can also use the model.no_bar() context manager to disable the progress bar shown. |
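If you want both suppressed at once, the two context managers can be combined (same fcst learner as above):
with fcst.no_bar(), fcst.no_logging():
    fcst.fit_one_cycle(25, 1e-3)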
H: Approaches for matching leads to salesmen
I'm starting to tackle a new problem where we are trying to optimally match new leads (prospective customers) for our product to our sales representatives in the hopes of improving bottom-line metrics like conversion rate, average sale price, etc. We have a bunch of data from the leads when they fill out their info on web forms and from 3rd party data providers we use to enrich the core web form data (we try and pull their soft credit score, income, etc. based on the info they provide, this is all automated).
On the salesman side, we don't have nearly as much data on them (mainly just who they are and their sales performance history). I suppose we could actually run them through our data enrichment service to pull additional info on them though.
My question is simply: from an ML perspective what, would be the best way to structure this problem? I was thinking of just building models for each salesman and assigning the lead to the salesman with the highest predicted score (e.g. for conversion) but this seems a bit crude. I was also considering recommender systems given the matching nature of the problem but my background is more in traditional ML so not sure what subtype would be best to start with (content-based, collaborative, etc.).
Any input is greatly appreciated.
AI: I'm going to make a bunch of assumptions about the shape of your data and model choice, just to make the setup simple and concrete. Hopefully the broader ideas will generalize from there.
Suppose you wrangle your data into a matrix with a response vector of zeros and ones representing whether a sale was made. This is a nice simple supervised classification problem and logistic regression is probably the first thing you try in this case.
If you ignore which sales rep each customer was assigned to, you will get a model that tells you the probability of a sale based on customer characteristics (income, etc). But it won't tell you anything about which is the best sales rep to assign.
If you fit a separate model for each sales rep then you could compare the outputs of each model. But I share your concern about this approach. Each model could pick up some idiosyncrasies. Also, if a sales rep got lucky with getting good leads in the past then they are likely to fit to a model with a high general probability of sale for all leads simply because the constant term is higher in their models than others.
There might be another constraint in your ideal system here -- presumably you want to avoid building a model that just assigns all leads to the best sales rep.
Another approach would be to fit a single logistic regression but include indicator (or dummy) variables for the sales reps: a column for each rep with a one if they worked with that customer and a zero otherwise. Your coefficient vector will then include a coefficient for each sales rep. This feels like a step in the right direction but will necessarily result in a model where all leads are assigned to a single sales rep -- the one with the highest coefficient. One nice aspect of this approach is that it requires no additional information about the sales reps; only information about which customers they previously worked with is needed.
A next step from there might be to add cross-terms, that is, features that are the product of a customer characteristic and a sales rep indicator. This isn't guaranteed to produce a model where leads are assigned evenly between sales reps, but it might produce recommendations of the form "assign low income customers to rep A and high income customers to rep B". (Whether or not such a recommendation is politically acceptable in your firm is a different question entirely.)
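A rough sketch of the dummy-variable plus cross-term idea; the dataframe `df` and the column names (`sales_rep`, `income`, `sale`) are hypothetical:
import pandas as pd
from sklearn.linear_model import LogisticRegression

rep_dummies = pd.get_dummies(df["sales_rep"], prefix="rep")
X = pd.concat([df[["income"]], rep_dummies], axis=1)

# cross-terms: customer characteristic x sales-rep indicator
for col in rep_dummies.columns:
    X[f"income_x_{col}"] = df["income"] * rep_dummies[col]

model = LogisticRegression(max_iter=1000).fit(X, df["sale"])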
I'm not sure that model is going to get you to something that you would be willing to use in production but it might be a nice first step to get a sense of the data and which variables tend to be predictive.
One last thought: your dataset might include some information that precludes some sales reps working with some customers. The rep and the customer have to be in the same geographic region, perhaps. You're definitely going to want to work that in somehow. If customers and sales reps are split into disjoint regions then you just fit a model per region. If it is more complicated than that then your model will be, inevitably, more complicated. |
H: What is exactly the difference between Validation data and Testing data
I asked this question on stack overflow and was told that this is a better place for it.
I am confused by the terms validation and testing - is validating the model the same as testing it? Is it possible to use the testing data for validation?
What confuses me even more is when to use validation. Is it a necessary step for the model? Also, is it possible to do validation instead of testing?
Also, can the training data be the same as the validation data?
Also, can you tell if this code does testing? It is really confusing me:
model.fit_generator(
training_gen(1000,25),
steps_per_epoch=50,
epochs=10000,
validation_data=validation_gen(1000, 25),
validation_steps=1,
callbacks=[checkpoint],
verbose=2)
model.load_weights('./temp_trained_25.h5')
BER = []
for SNR in range(5, 30, 5):
y = model.evaluate(
validation_gen(10000, SNR),
steps=1
)
BER.append(y[1])
print(y)
print(BER)
noting that training_gen and validation_gen are:
def training_gen(bs, SNRdb = 20):
while True:
index = np.random.choice(np.arange(train_size), size=bs)
H_total = channel_train[index]
input_samples = []
input_labels = []
for H in H_total:
bits = np.random.binomial(n=1, p=0.5, size=(payloadBits_per_OFDM,))
signal_output, para = ofdm_simulate(bits, H, SNRdb)
input_labels.append(bits[0:16])
input_samples.append(signal_output)
yield (np.asarray(input_samples), np.asarray(input_labels))
def validation_gen(bs, SNRdb = 20):
while True:
index = np.random.choice(np.arange(train_size), size=bs)
H_total = channel_train[index]
input_samples = []
input_labels = []
for H in H_total:
bits = np.random.binomial(n=1, p=0.5, size=(payloadBits_per_OFDM,))
signal_output, para = ofdm_simulate(bits, H, SNRdb)
input_labels.append(bits[0:16])
input_samples.append(signal_output)
yield (np.asarray(input_samples), np.asarray(input_labels))
I'm quite new to deep learning and it seems like everything confuses me. Sorry if my questions seem dumb and unreasonable, but if you can help me figure out this confusion I would be thankful.
Thanks in advance!
AI: Usually you first split your dataset into train/test set, and then if your model training process requires a validation set, you can further split your train-set into the final train-set and the validation-set. A simple rule is that the test set never shows up in your model development process, including when you develop your data preprocessing steps (such as your data normalizer).
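A generic sketch of that two-step split (X and y stand for your features and labels; the proportions are just an example):
from sklearn.model_selection import train_test_split

# first carve out the test set, then split the rest into the final train set and a validation set
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)
# -> roughly 60% train, 20% validation, 20% test; the test set is never touched during development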
You need a validation set in the following cases:
Training a gradient boosted decision tree (lgbm, xgboost, etc.) with early-stopping enabled, because it needs to evaluate your model with a validation set after each training step to see if the early-stopping criterion is satisfied
Training a neural network. This is optional but suggested because you can get the validation score curve in addition to the always-there training score curve to monitor if your model begins to overfit. This is required when you use early-stopping.
You are doing cross validation. The idea is to fit the model with the same hyperparameter set N times but at each time it uses a different train-set and validation-set. In this way you know how the same set of hyperparameters works for different data scenarios.
One validation set should only serve one purpose, so if you do both 1 and 3, then you first split your data into train/test set. Then at each cross validation round (out of N rounds), you split your train-set into another train-set and the $1^{st}$ validation set. Then in training your GBDT model, you again split your train-set into the final train-set and your $2^{nd}$ validation set. Your $1^{st}$ validation set is for cross validation purposes. Your $2^{nd}$ validation set is for GBDT early-stopping.
For your code, I see two potential problems:
If your source of data is channel_train, then both training_gen and validation_gen draw from the same source, with the choices differing only through the random generator. This is a problem because you do not guarantee that your train and validation data are mutually exclusive.
You called your validation generator two times. The first time it serves the purpose of point number 2 stated above. The second time it should serve the purpose of testing -- which is what you asked about. But again, it is not certain that the test data and the train data do not overlap.
Therefore, you have code that does everything, but you may not have made sure that the train/valid/test data are mutually exclusive. |
H: In DTW, is the distance the sum of the shortest path's elements or the fathest element?
The title says it. In dynamic time warping, I keep hearing that the distance between two sequences is the sum of the shortest path's elements. But I also see the distance given as the element in the farthest corner. Could somebody please clear this up?
Example:
x = [0,1,1,2]
z = [0,2,2]
gives this cost matrix:
[[ 0. inf inf inf]
[inf 0. 2. 4.]
[inf 1. 1. 2.]
[inf 2. 2. 2.]
[inf 4. 2. 2.]]
meaning that the distance should be 0 + 0 + 1 + 2 + 2 = 5. But every DTW implementation I found returns a distance of 2 (i.e. always returns the element in the farthest corner). Thank you!
AI: There's a confusion about how the algorithm works.
DTW is inspired by the Levenshtein edit distance, which is an example of dynamic programming to calculate the minimum distance efficiently. The main idea is to incrementally calculate the minimum distance between any two subsequences $1..i$ and $1..j$.
So the DTW matrix is not a cost matrix, at least not in the sense of cost between individual positions. Any cell $m[i,j]$ of the matrix is the smallest possible distance between the subsequences $1..i$ and $1..j$, so the last cell (the bottom-right corner) already contains the smallest distance for the full sequences. In other words there's nothing to sum; actually summing the values doesn't make sense, since the larger subsequences already include the smaller ones.
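To make the recurrence concrete, here is a minimal implementation of the dynamic program; it reproduces the matrix from the question and shows that the bottom-right cell is already the final distance:
import numpy as np

def dtw_distance(x, z):
    n, m = len(x), len(z)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - z[j - 1])
            # each cell holds the best distance between x[:i] and z[:j]
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m], D

dist, D = dtw_distance([0, 1, 1, 2], [0, 2, 2])
print(D)      # reproduces the matrix from the question
print(dist)   # 2.0 -- already the full-sequence distance, nothing left to sum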
H: How to merge with smallest Euclidean distance?
Given a main left dataset, how can I merge with right dataset with smallest Euclidean distance (d = sqrt(a^2 + b^2)) on specified columns?
Details:
if both of key1 and key2 from left exist in right, merge with row that matches key1 and key2, and has smallest value from sqrt((aux1r - aux1l)^2 + (aux2r - aux2l)^2)
if both of key1 and key2 are not NaN and either of key1 and key2 from left does not exist in right, merge with row that has smallest value from sqrt((aux1r - aux1l)^2 + (aux2r - aux2l)^2)
if one of key1 or key2 from left is NaN, merge with row from right that matches the non-NaN key1 or key2, and has smallest value from sqrt((aux1r - aux1l)^2 + (aux2r - aux2l)^2)
if both of key1 and key2 from left is NaN, merge with row from right that has smallest value from sqrt((aux1r - aux1l)^2 + (aux2r - aux2l)^2)
Example of input dataframes, and wanted dataframe after merge:
import pandas as pd
import numpy as np
left = pd.DataFrame([
# "key1" "key2" "aux1l" "aux2l" "left1" "left2"
[np.nan, np.nan, 1.00, 1.00, "unimportant", "unimportant"], # left[0]
["key1", np.nan, 1.00, 1.00, "unimportant", "unimportant"], # left[1]
[np.nan, "key2", 1.00, 1.00, "unimportant", "unimportant"], # left[2]
["key1", "key2", 1.00, 1.00, "unimportant", "unimportant"], # left[3]
["key1unique", "key2unique", 1.00, 1.00, "unimportant", "unimportant"], # left[4]
["key1repeat", "key2repeat", 1.00, 1.00, "unimportant", "unimportant"], # left[5]
["key1repeat", "key2repeat", 1.00, 1.00, "unimportant", "unimportant"], # left[6]
], columns=["key1", "key2", "aux1l", "aux2l", "left1", "left2"])
right = pd.DataFrame([
# "key1" "key2" "aux1r" "aux2r" "right1" "right2"
[np.nan, "key2", 0.99, 0.97, "unimportant", "unimportant"],
["key1", "key2", 0.99, 0.96, "unimportant", "unimportant"],
["key1repeat", "key2repeat", 1.85, 1.56, "unimportant", "unimportant"],
["key1repeat", "key2repeat", 0.99, 0.99, "unimportant", "unimportant"],
], columns=["key1", "key2", "aux1r", "aux2r", "right1", "right2"])
# what to do here?
# left.merge(right) discards left with no matches (left[4] discarded, but want to fill with closest match with aux1l/aux2l with aux1r/aux2r)
# it does not matter if aux1r and aux2r is included
wanted = pd.DataFrame([
# "key1" "key2" "auxl1" "aux2l" "left1" "left2" "right1" "right2"
[np.nan, np.nan, 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
["key1", np.nan, 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
[np.nan, "key2", 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
["key1", "key2", 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
["key1unique", "key2unique", 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
["key1repeat", "key2repeat", 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
["key1repeat", "key2repeat", 1.00, 1.00, "unimportant", "unimportant", "unimportant", "unimportant"],
], columns=["key1", "key2", "aux1l", "aux2l", "left1", "left2", "right1", "right2"])
AI: Let's simplify the requirements
A wanted row from the right dataframe is a row that
has the least amount of unmatched keys
has the minimum euclidean distance
def merge_left_and_right(left, right):
# save original input from modification
left = left.copy()
right = right.copy()
# numerate rows in the right dataframe
right['row_num'] = range(len(right))
# find the best matchings for left dataframe
left['row_num'] = [
find_the_best_matching(right,
key1=row['key1'],
key2=row['key2'],
aux1l=row['aux1l'],
aux2l=row['aux2l'])
for index, row in left.iterrows()
]
# merge dataframe by row number
right = right.drop(['key1', 'key2'], axis=1)
merged = left.merge(right, on='row_num', how='left')
merged = merged.drop('row_num', axis=1)
return merged
def find_the_best_matching(right, key1, key2, aux1l, aux2l):
right = right.copy()
# keys match only when they aren't NaN and equal ("np.nan != np.nan" is True)
right['unmatched_key_count'] = 0
right['unmatched_key_count'] += (right['key1'] != key1).astype(int)
right['unmatched_key_count'] += (right['key2'] != key2).astype(int)
right['euclidean_distance'] = np.sqrt((right['aux1r'] - aux1l) ** 2 + (right['aux2r'] - aux2l) ** 2)
# Sort by unmatched amount, then by distance. The first row will be best
return right.sort_values(['unmatched_key_count', 'euclidean_distance']).iloc[0]['row_num'] |
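With the example left and right dataframes from the question, this would then be used as:
merged = merge_left_and_right(left, right)
print(merged)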
H: Supervised vs Unsupervised - Flag fake accounts on social medias
I have this project I'm working on where I scraped users' data from social media to predict if they are bots, fake accounts or legit users based on their comments, likes, posts, public data only.
I'm at the point where I have data about more than 80k users. The data is not labeled, so what are my options here? I could manually look at each account and flag it as fake/not fake, so that I can use supervised ML and get an accuracy score, but this will take forever as I'll have to accurately label my data.
I could also do unsupervised learning by clustering those users but then how can I measure if my model is doing a good job? How do I convey my results? I haven't worked a lot with unsupervised learning.
I'm at this point where I've been scraping the data, and I'm now doing the EDA but I'm not sure about the next steps and would love some help and guidance.
AI: this will take forever as I'll have to accurately label my data.
This is why one should plan the full experimental setup before collecting the data, ideally ;)
I could also do unsupervised learning by clustering those users but then how can I measure if my model is doing a good job?
This is the right question, and the answer is: you can't.
To be clear, this task requires some labelled data at least for the purpose of evaluation, and likely also for training. Thus at least a sample needs to be labelled manually, because no automatic method can provide you with a dataset which is guaranteed to be labelled accurately for this task. And anyway you would have to evaluate this automatic method itself with... labelled data.
Before doing any manual labelling, you should define a clear methodology. Typically annotators study a sample of data in order to establish precise annotation criteria, before even starting the actual annotation process. Even with clearly defined criteria, this task is likely to involve subjective decisions, since you're going to find cases where the answer is not clear. Ideally one would use multiple different annotators for such tasks, in order to measure discrepancies between them. |
H: Relationship of Bias and size of dataset
I was reading the following book: http://www.feat.engineering/resampling.html where the author mentioned the below:
Generally speaking, as the amount of data in the analysis set shrinks,
the resampling estimate’s bias increases. In other words, the bias in
10-fold cross-validation is smaller than the bias in 5-fold
cross-validation.
I am unable to understand what the author is trying to say here. My understanding is, as we reduce the size of the dataset, we can induce bias that, a certain sample is restricted to those values only, which is not true since it's just a sample of a larger set. Is this what the author meant? If so, then how does 10-fold cross-validation has a smaller bias than 5-fold cross-validation, since the 10-fold cross-validation will have fewer samples in each fold, than a 5-fold cross-validation? Thanks.
AI: In K-fold cross-validation, the training data is divided into K folds. In each iteration, K-1 folds are used for training the model and the remaining fold is used for validation. The error estimate is averaged over all K trials to get the total effectiveness of the model.
As can be seen, every data point gets to be in a validation set exactly once, and gets to be in a training set K-1 times. The bias the author refers to comes from the size of the analysis (training) portion: with 10-fold CV each model is fit on 90% of the data, whereas with 5-fold CV it is fit on only 80%. The closer the training portion is to the full dataset, the closer each fitted model is to the one you would obtain on all the data, and hence the smaller the bias of the resampling estimate - which is why 10-fold CV has less bias than 5-fold CV, even though each individual validation fold is smaller.
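A small illustration of running both settings (the synthetic dataset and the model are just placeholders):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000)

# 5-fold: each model is trained on 80% of the data; 10-fold: on 90%
for k in (5, 10):
    scores = cross_val_score(model, X, y, cv=k)
    print(k, "folds -> mean accuracy:", scores.mean())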
H: Best way to optimize dataframe row by row sum of squared errors calculation?
I am seeing a significant slowdown with the following small snippet of code which computes the sum of squared errors between two dataframes - e.g. it takes approximately 2.5 seconds to run when combopd has a length of 1140.
In the example below, target is a dataframe with one row and 8 columns, and combopd is a dataframe with i rows and 8 columns. The goal is to compute the sum of squared errors of each column value between target and combopd and then create a new column in combopd called "SSE" which stores the value of the error calculation:
for i in range(len(combopd)):
row = combopd.iloc[i]
sse = ((target["x1"] - row["x1"]) ** 2) + ((target["x2"] - row["x2"]) ** 2) + ((target["x3"] - row["x3"]) ** 2) + ((target["x4"] - row["x4"]) ** 2) + ((target["x5"] - row["x5"]) ** 2) + ((target["x6"] - row["x6"]) ** 2) + ((target["x7"] - row["x7"]) ** 2) + ((target["x8"] - row["x8"]) ** 2)
combopd.at[row.name, 'SSE'] = sse.values[0]
Any thoughts on a more faster/efficient/better way of accomplishing this would be much appreciated.
AI: Iterating through Dataframes is (generally speaking) an anti-pattern. Always try to avoid it if you can!
You can easily vectorize this operation by subtracting the scalar value from target rather than treating target like another array:
# Vectorized squared errors
combopd["SSE"] = sum(
(target[f'x{n}'].values[0] - combopd[f'x{n}'])**2 # squared error
for n in range(1, 9) # for each of the columns x1, x2, ... x8
)
Time comparison with 10K rows shows a pretty dramatic speedup :)
import pandas as pd
import numpy as np
import timeit
ROW_COUNT = 10000
combopd = pd.DataFrame(
np.random.random(size=(ROW_COUNT, 8)), columns=[f'x{n}' for n in range(1, 9)]
)
target = pd.DataFrame(
np.random.random(size=(1, 8)), columns=[f'x{n}' for n in range(1, 9)]
)
def vectorized():
return sum(
(target[f'x{n}'].values[0] - combopd[f'x{n}'])**2
for n in range(1, 9)
)
def original():
for i in range(len(combopd)):
row = combopd.iloc[i]
sse = (
((target["x1"] - row["x1"]) ** 2)
+ ((target["x2"] - row["x2"]) ** 2)
+ ((target["x3"] - row["x3"]) ** 2)
+ ((target["x4"] - row["x4"]) ** 2)
+ ((target["x5"] - row["x5"]) ** 2)
+ ((target["x6"] - row["x6"]) ** 2)
+ ((target["x7"] - row["x7"]) ** 2)
+ ((target["x8"] - row["x8"]) ** 2)
)
combopd.at[row.name, 'SSE'] = sse.values[0]
return combopd['SSE']
assert np.array_equal(vectorized(), original())
vectorized_time = timeit.timeit('vectorized()', globals=globals(), number=10)
original_time = timeit.timeit('original()', globals=globals(), number=10)
print(f'Vectorized time: {vectorized_time:0.4f}s'
f'\nOriginal time: {original_time:0.4f}s')
# Vectorized time: 0.0230s
# Original time: 192.0022s
H: Why -2 is seen in supervised binning using decision tree?
I have continuous variables such as salary and age, and an output variable loan_status.
Instead of me choosing the cut off points for salary and age bins , I used Decision Tree to compute the bins based on loan_status.
I tried the below
clf = DecisionTreeClassifier(criterion = 'entropy', max_depth = 4)
clf.fit(X_train.values.reshape(-1,1),y_train.values)
threshold = clf.tree_.threshold
Got an output like below
array([ 4.8750e+04,  2.0800e+03,  5.5200e+02,  5.5000e+01, -2.0000e+00,
       -2.0000e+00,  1.9625e+03, -2.0000e+00, -2.0000e+00,  2.3904e+04,
        4.9075e+03, -2.0000e+00, -2.0000e+00,  4.1600e+04, -2.0000e+00,
       -2.0000e+00,  4.0000e+06,  1.3765e+06,  1.2765e+06, -2.0000e+00,
       -2.0000e+00, -2.0000e+00, -2.0000e+00])
a) Can you help me on why do we see -2 items in the threshold output?
b) I don't have any negative values in my dataset for salary, so I don't know what -2 indicates and how I can avoid generating it.
c) How can I restrict the number of bins to only 3? Currently it produces several thresholds, which will increase my number of bins to 7 or 8, etc.
AI: I believe they are leaf nodes. See here. - "For example, the arrays feature and threshold only apply to split nodes. The values for leaf nodes in these arrays are therefore arbitrary."
Try to use the code on that page to print out the structure of the tree, or plot the tree, to find the thresholds of the split nodes. Then adjust the parameters to control your bin sizes.
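A small sketch (reusing clf, X_train and y_train from the question): leaf nodes carry placeholder values (threshold -2, children -1), so you can keep only the split-node thresholds, and limiting the number of leaves is one way - not the only one - to get exactly 3 bins:
import numpy as np

is_split_node = clf.tree_.children_left != -1          # leaves have children_left == -1
split_thresholds = np.sort(clf.tree_.threshold[is_split_node])
print(split_thresholds)                                 # the actual cut points, no -2 values

# 3 bins need exactly 2 cut points, i.e. at most 3 leaves
clf3 = DecisionTreeClassifier(criterion="entropy", max_leaf_nodes=3)
clf3.fit(X_train.values.reshape(-1, 1), y_train.values)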
H: zero padding problem
i need to implement this code by using padding
PAD=0
embeddings = []
for sent_idx in sentences:
# pad sentence to correct size
while len(sent_idx) < 49:
sent_idx.append(PAD)
but got
AttributeError: 'str' object has no attribute 'append'
sent_idx returns restaurant has modern wooden tables and chairs
len(sent_idx) returns 46
AI: Currently, each sentence sent_idx in sentences is a string. However, it should not be a string but a list of integers representing token indexes.
You must tokenize your sentence (i.e. split it into pieces and convert each piece of text in a numeric index to the token table) before appending any padding token. |
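A minimal sketch of that order of operations; `vocab` here is a hypothetical token-to-index map and `tokenizer` is whatever tokenizer you already use:
PAD = 0
MAX_LEN = 49

padded = []
for sent in sentences:
    # first turn the sentence into a list of integer ids (a list, not a str)
    sent_idx = [vocab[token] for token in tokenizer.tokenize(sent)]
    while len(sent_idx) < MAX_LEN:
        sent_idx.append(PAD)          # .append now works because sent_idx is a list
    padded.append(sent_idx[:MAX_LEN])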
H: How to set vocabulary size, padding length and embedding dimension in LSTM network?
Usually in a LSTM network, we have certain parameters that need to be set before the model can begin training. I am specifically talking about vocabulary size, padding length and embedding dimension. Below is a simple LSTM network where I have randomly chosen the 3 parameters:
vocab_size = 20000
pad_size = 35
embedding_dim = 50
# ONE HOT REPRESENTATION FOR TRAINING SET
ohr_train = [one_hot(i, vocab_size) for i in train_x]
# PAD EACH TEXT IN TRAIN SET SO THAT EACH TEXT IS OF SAME LENGTH
train_embedded_docs = pad_sequences(ohr_train, padding = 'pre', maxlen = pad_size)
# DEFINE THE MODEL
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length = pad_size))
model.add(Dense(1, activation = 'sigmoid'))
My question is how do you set all the three above mentioned parameters?
PS: From the answers I understood how to set vocabulary size. Padding length should be more than the maximum length of text in train set (not the whole set as that would lead to data leakage!).
But when setting the embedding dimension via HP tuning, it will be a time consuming process as for each combination, I would have to run the whole model and as you know neural nets take a long time to run. Isn't there a better way?
AI: Vocabulary size, padding length and embedding dimension are like hyperparameters which needs to chosen wisely to get good performance from model
Vocabulary Size : The set of unique words used in the text corpus is referred to as the vocabulary. When processing raw text for NLP, everything is done around the vocabulary. When the text corpus is large and you need to limit the vocabulary size to increase training speed or prevent overfitting on infrequent words. To do this most people restrict it to specific number or say apply a threshold for example Vocabulary size is equal to words which have frequency greater than 10.
Padding Length : Since LSTM takes input of same length, inputs are padded to the maximum length of the sequence in the batch
Embedding Diemnsions : Usually people use multiple of 2 like 128, 256 and 512. Higher the dimension better the capturing of information but more the time required for training |
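A rough sketch of how the first two values can be derived from the training texts alone (train_x is assumed to be a list of raw training strings; the threshold of 10 and the starting embedding size are arbitrary):
from collections import Counter

counts = Counter(word for text in train_x for word in text.split())

vocab_size = sum(1 for c in counts.values() if c >= 10) + 2   # +2 for padding and out-of-vocabulary tokens
pad_size = max(len(text.split()) for text in train_x)          # longest text in the *train* set only
embedding_dim = 50                                             # start small, then tune as a hyperparameter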
H: how to link the predicted output to the original observation?
Am working on a binary classification using logistic regression.
I have 1000 rows and 28 features. Three to four variables are ID variables like product_id, subject_id, etc.
During the train-test split, I drop them as shown below
X = df.drop(['status','Product_ID','subject_ID'], axis=1)
y = df.status
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=0)
Once I do this I do some preprocessing and modelling tasks as below
a) encoding categorical variables for train and test separately
b) model.fit()
c) model.predict()
d) Finally, I get the y_pred and I compare it with y_test.
My question are as follows
a) When there is no identifier in y_pred, how can I link back to get the full row of that instance? Meaning, I want the full data row of that observation along with a new column predicted_status (beside the already existing actual status column). Is there any way to include the ID variables in the model-building process but make their effect 0 (or just have them stay there as unused columns)?
b) Would the same order be preserved during train-test split, encoding, cross-validation, testing, etc.?
c) What happens if we split train and test based on some criteria like year between 2015-2020 (becomes train) and anytime after 2020 becomes test?
AI: Keep subject_ID, and after train_test_split pass dataframes without the ID column to the model, as in:
df.loc[ : , df.columns != 'subject_ID']
Unless you are explicitly shuffling datapoints during prediction, I believe that the returned predictions commonly preserve the initial order.
EDIT:
See an example of entire process below:
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
iris = sns.load_dataset('iris')
# you don't need this, you have subject IDs.
iris['Subject_ID'] = np.random.randint(1, 10000, iris.shape[0])
# use subject IDs as index.
iris.set_index('Subject_ID', inplace=True)
# Split your data, index will be persisted.
Xtrain, Xtest, ytrain, ytest = train_test_split(
iris.loc[ : , iris.columns != 'species'],
iris['species'],
random_state=1
)
# Model and predict.
from sklearn.naive_bayes import GaussianNB # 1. choose model class
model = GaussianNB() # 2. instantiate model
model.fit(Xtrain, ytrain) # 3. fit model to data
y_hat = model.predict(Xtest, ) # 4. predict on new data
# Append predictions, original datapoint IDs will be encoded in the index.
Xtest["predictions"] = y_hat
# Test by joining original labels, using a dummy dataset for prediction.
Xtest = Xtest.join(iris['species'], how='inner')
Xtest.head()
>>>
            sepal_length  sepal_width  petal_length  petal_width  predictions     species
Subject_ID
2265        5.8           4.0          1.2           0.2          setosa          setosa
1961        5.1           2.5          3.0           1.1          versicolor      versicolor
4177        6.6           3.0          4.4           1.4          versicolor      versicolor
6041        5.4           3.9          1.3           0.4          setosa          setosa
8500        7.9           3.8          6.4           2.0          virginica       virginica
H: Look ahead bias predicting a time series using features
I am building some ML models (RF, RNN, MLP) to predict a time series value 'y' based on features 'X' and not on the time series 'y' itself. My question is about the bias I might be introducing, since I am doing a simple random train-test split for the fit and evaluation process, so I am using data from different days (past and future) and not splitting by time. Is this valid for this prediction process, or am I still introducing bias - even though I am not using the time series itself to predict future values - because the features I use are also time series themselves (I get them daily)? I tried both ways and got far better results using the simple random train-test split, so I got suspicious.
AI: The answer depends on whether there is autocorrelation in the y target variable which is not accounted for in your X regressors. In other words: if you know all the X values and you are trying to predict the current time step y value, would it help you at all to know the previous time steps' y values? If so, there is autocorrelation in y that is not reducible to your X features, and your simple random train-test-split is not advisable.
A simple example: suppose X is just temperature, and y is depth of snow on the ground each morning in, say, some particular spot in northern Canada. Even though X is a pretty good predictor of y, the relevant autocorrelation still holds, in the sense that even if I know X, my prediction of today's y value will be much, much better if I also know yesterday's y value.
Of course, in real world situations, we usually never really know for sure whether there is the kind of autocorrelation described above. If you suspect it might be there, then play it safe and use a train/test or cross validation method that respects the time domain by training only on data prior to the test split(s). |
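A sketch of a time-aware evaluation with scikit-learn's TimeSeriesSplit (X and y are assumed to be arrays ordered by date, and `model` is whichever estimator you are using):
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(X):
    model.fit(X[train_idx], y[train_idx])          # train only on data prior to the test fold
    print(model.score(X[test_idx], y[test_idx]))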
H: N-Gram Smoothing
I am wondering if there is a good example out there that compares N-Gram with various smoothing techniques.
I found this notebook that applies Laplace (add-one) smoothing but that is about it. Any suggestions are greatly appreciated.
AI: There have been many papers on this topic, but you probably should search with the term language modeling (the name of the main application for smoothing n-grams frequencies).
Apparently there's a recent paper, but otherwise you'll most likely find papers from 20 years ago, as this used to be an important question for statistical machine translation (it isn't anymore with neural MT, afaik). |
H: When it is okay to stick with low performance models?
I posted here already but it is marked to close, so thought of posting it here (as this might be the right forum)
Am working on a simple logistic regression with 1000 records and 28 features.
My business users suggest that they want to first see what the AI can do by itself based on our data as it is. Meaning, they don't want me to do feature engineering, trying out multiple algorithms etc.
They want me to avoid all that because they feel feature engineering takes time and they wish to showcase something quicker and earlier. For the first cut, they wish to go live with a baseline model with no feature engineering (even if it is 50% accuracy).
They are okay with a low recall like 30% or 40% (at least for now) for one of the classes, because currently nothing is being done to solve this problem. No one is tackling this problem or has even thought about solving it, so this is new to them. So even with low recall for the negative class, they feel it is a good start (because the positive class has high recall). Meaning, they identify those positive cases accurately and go follow up with them. Since this model is reliable (for them) in terms of positive cases, they wish to go live with it and focus on those positive cases. Of course, the recall for negative cases is a serious concern for them, but at least they have a solution for one of the classes and they are happy. Ultimately, they would like to have a solution for the negative class as well, so they suggest I do feature engineering, model experimentation etc. after going live. By live, I mean just a simple static dashboard (and not high-end MLOps etc.).
Later, they want to know whether, with all these model experiments and new features, the recall for the negative class has improved or not.
Is this the right way to go? As a novice data scientist, I don't feel right about this. If it had been at least 80% (my random choice), I would have been a bit more okay with it. I don't have any evidence to prove that 80% is the right choice other than saying that a higher number of actuals are predicted correctly.
So my questions are
a) What should I do and what are the pitfalls/points that I should make them aware of?
b) Is there anything important that I should highlight to them?
c) Should this project still be dropped if the business is okay with 50% accuracy? Can we continue to use this model as long as the business is fine with it?
d) Do you have any real-world experience from your model deployment decisions?
Can you share your views on this? It would really be helpful for me to learn and also to keep them aware.
AI: I completely agree with the answer on statsSE, I don't have much to add to it:
Essentially this is a business decision: you can voice your concerns if you think that the company is making a bad decision, but at the end of the day this is their choice to make.
There's one point in particular that I think is worth making clear to the company, it's what it means practically for this system to have low performance. For example it's easy to do a random classifier or a classifier which always predicts the majority class, but it's not useful. Are they aware that in the worst case, "low performance" means that the classifier might be as bad as these?
Assuming that things are clear for the company, in general I think it's always a good idea to start on a new problem with a basic model which can be used later as a baseline for future improvements. So I don't see any issue with this decision, as long as they understand what it means. As a side note, starting from a very poor baseline model will make your future models look even better ;)
(note that I'm in academia so I don't have any direct experience like this) |
H: Sequence-to-Sequence Transformer for Neural machine translation
I am using the tutorial in the Keras documentation here. I am new to deep learning. I am working on a different dataset, the Menyo-20k dataset, with about 10071 total pairs: 7051 training pairs, 1510 validation pairs, 1510 test pairs. The highest validation accuracy and test accuracy I have gotten is approximately 0.26. I tried the list of things below:
Using the following optimizers: SGD, Adam, RMSprop
Tried different learning rate
Tried the dropout rate of 0.4 and 0.1
Tried using different embedding dimensions and feed-forward network dimension
Used Early stopping and patience =3, the model does not go past the 13th epoch.
I tried the model itself without changing any parameters and the validation accuracy never got to 0.3. I tried changing the different parameters in order to figure out what I am doing wrong, but I can't work it out. Please, what am I doing wrong? Thank you in advance for your guidance.
AI: A few things can be tried:
Increase the number of iterations a lot
Add more neural network / Transformer layers
Use the BLEU score metric instead of accuracy, as accuracy does not make much sense for machine translation (see the sketch after this list)
You may need a larger dataset
It is hard to say in advance what will finally work - you have to experiment.
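A small sketch of computing a corpus-level BLEU score with NLTK (the lists `references` and `hypotheses` of decoded sentences are assumptions about how you collect your outputs):
from nltk.translate.bleu_score import corpus_bleu

list_of_references = [[ref.split()] for ref in references]   # each hypothesis may have several references
tokenized_hypotheses = [hyp.split() for hyp in hypotheses]
print("BLEU:", corpus_bleu(list_of_references, tokenized_hypotheses))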
H: Vanishing gradients: examine output gradients
For a feedforward network or RNN, in theory we should examine the output gradients with respect to the weights over time to check whether they vanish to zero. In my code below I am not sure whether it is appropriate to feed the input 'xtr' into the backend function defined on the weights.
weights_vars= model.layers[1].trainable_weights # weights on 2nd hidden layer
sess= k.get_session()
# Obtain the actual gradients:
grad_fun= k.gradients(model.output, weights_vars[0]) # [0] for weight, [1] for bias
grad_value= sess.run(grad_fun, feed_dict={model.input: xtr})
I have seen posts demonstrating how to obtain gradients of output wrt $\textit{inputs}$, aka Jacobians. Feeding inputs to function defined on model.input is certainly correct.
grad_fun= k.gradients(model.output, model.input)
grad_value= sess.run(grad_fun, feed_dict={model.input: xtr})
My questions are:
Can I use these Jacobians to check the extent of vanishing gradients, as a proxy to the gradients with respect to weights?
How can I use backend.function defined on weights to obtain gradients? What do I put in feed_dict?
If there is a better way to examine the output gradients on weights please let me know. Thanks in advance.
AI: Jacobians are a proxy for gradients. Sometimes in a NN we need to find the partial derivatives of a function whose input and output are both vectors; the matrix containing all such partial derivatives is known as the Jacobian matrix. So yes, you can use the Jacobian matrix as a proxy for the gradient.
The second question has already been answered here
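If eager execution / TF2 is available, a GradientTape-based sketch avoids feed_dict entirely (assuming the same `model` and batch `xtr` as in the question):
import tensorflow as tf

x = tf.convert_to_tensor(xtr, dtype=tf.float32)
with tf.GradientTape() as tape:
    out = model(x)
grads = tape.gradient(out, model.layers[1].trainable_weights)   # gradients of output w.r.t. 2nd layer weights
print([tf.reduce_max(tf.abs(g)).numpy() for g in grads])        # inspect magnitudes for vanishing gradients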
H: Multi-label one-hot encoding
So I'm having this particular problem trying to do one-hot encoding on multilabel data: the encoder is reading more classes than it should, and I don't know why.
let me show you:
Here's my data (17 classes)
#'Admisión_de_Aire','Antisurge','Compresión','Control','Electrónico','Eléctrico','Enfriamiento','Escape','Gas_Combustible','Lubricación','Mecánico','Normal','Proceso','Sellos','Surge','Vibración','Válvulas'
>>>in: y
>>> Out:
357 Normal
1776 Normal
1777 Normal
1778 Normal
11927 Normal
...
67226 Normal
67227 Normal
67682 Normal
67704 Normal
67707 Normal
Name: CLASS_TARGET, Length: 39519, dtype: object
running the code
>>>in:
# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y)
num_classes = len(label_encoder)
print('num classes:',num_classes)
y.describe() #double check clasess
>>> Out:
num classes: 49 #Reads 32 more classes?
count 39519
unique 17 #Describe finds 17 classes
top Normal
freq 32266
Name: CLASS_TARGET, dtype: object
With those 32 extra classes it finds, I get this result, but I don't know why this happens. Any suggestions?
>>>in:
label_encoder.class_to_index
>>> Out:
{'A': 0,
'C': 1,
'E': 2,
'G': 3,
'L': 4,
'M': 5,
'N': 6,
'P': 7,
'S': 8,
'V': 9,
'_': 10,
'a': 11,
'b': 12,
'c': 13,
'd': 14,
'e': 15,
'f': 16,
'g': 17,
'i': 18,
'l': 19,
'm': 20,
'n': 21,
'o': 22,
'p': 23,
'r': 24,
's': 25,
't': 26,
'u': 27,
'v': 28,
'á': 29,
'é': 30,
'ó': 31,
'Admisión_de_Aire': 0,
'Antisurge': 1,
'Compresión': 2,
'Control': 3,
'Electrónico': 4,
'Eléctrico': 5,
'Enfriamiento': 6,
'Escape': 7,
'Gas_Combustible': 8,
'Lubricación': 9,
'Mecánico': 10,
'Normal': 11,
'Proceso': 12,
'Sellos': 13,
'Surge': 14,
'Vibración': 15,
'Válvulas': 16}
this is how the encoder works
import itertools
from collections import Counter
from typing import List, Sequence, Tuple
class LabelEncoder(object):
"""Label encoder for tag labels."""
def __init__(self, class_to_index={}):
self.class_to_index = class_to_index
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
def __len__(self):
return len(self.class_to_index)
def __str__(self):
return f"<LabelEncoder(num_classes={len(self)})>"
def fit(self, y):
classes = np.unique(list(itertools.chain.from_iterable(y)))
for i, class_ in enumerate(classes):
self.class_to_index[class_] = i
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
return self
def encode(self, y: pd.Series) -> np.ndarray:
"""Encode a collection of labels using (multilabel) one-hot encoding.
Args:
y (pd.Series): Collection of labels as a pandas Series object.
Returns:
Labels as (multilabel) one-hot encodings
"""
y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
for i, item in enumerate(y):
for class_ in item:
y_one_hot[i][self.class_to_index[class_]] = 1
return y_one_hot
def decode(self, y):
classes = []
for i, item in enumerate(y):
indices = np.where(item == 1)[0]
classes.append([self.index_to_class[index] for index in indices])
return classes
def save(self, fp):
with open(fp, 'w') as fp:
contents = {'class_to_index': self.class_to_index}
json.dump(contents, fp, indent=4, sort_keys=False)
@classmethod
def load(cls, fp):
with open(fp, 'r') as fp:
kwargs = json.load(fp=fp)
return cls(**kwargs)
AI: Your code processes characters instead of words: itertools.chain.from_iterable(y) iterates over the individual characters of each label string, which is why single letters show up as classes.
Fixes
# classes = np.unique(list(itertools.chain.from_iterable(y)))
classes = np.unique(y)
# for class_ in item:
# y_one_hot[i][self.class_to_index[class_]] = 1
y_one_hot[i][self.class_to_index[item]] = 1
Also, take a look at sklearn.preprocessing.OneHotEncoder
from sklearn.preprocessing import OneHotEncoder
label_encoder = OneHotEncoder(sparse=False)
label_encoder.fit(y.to_frame())
label_encoder.transform(y.to_frame()) |
H: How can I define such a function that can name for same values?
I have the following dataframe, and I want to group rows both when they are consecutive and when they have the same value. In fact, I would like to give every interval a different name so that I can print them and see where the beds spread from start to finish.
Input:
THICKNESS DEPTH
738 0.2 50
739 0.2 50.05
740 0.2 50.10
741 0.2 50.15
742 0.2 50.20
800 0.10 120
801 0.10 120.05
802 0.10 120.1
950 0.15 170
951 0.15 170.05
952 0.15 170.1
953 0.15 170.15
Output Except:
BED1= (50,50.20)
BED2= (120,120.1)
BED3= (170,170.15)
Normally I could define an if/elif function to assign names to every single interval. But the problem is that I want to use this code for different databases; what I mean is that I cannot hard-code such a function for every specific value. So I should use a lambda function, but I'm struggling with writing it. I tried this kind of function but I got NaN results in the dataframe.
df['BEDS'] = (
df2.groupby(df2['THICKNESS'].ne(df2['THICKNESS'].shift()))
.transform(lambda x: x.diff().sum()))
Edit: The key point should be the thickness. It should give the depth value at the beginning and end of each constant interval.
Edit 2: This is my code.
import pandas as pd
df = pd.read_excel('V131BLOG.xlsx')
df['THICKNESS'] = (
df.groupby(df['LITHOLOGY'].ne(df['LITHOLOGY'].shift()).cumsum())['DEPTH_MD']
.transform(lambda x: x.diff().sum()))
out = df[(df['LITHOLOGY'] == 1) & (df['THICKNESS'] >= 0.35)]
Output for these lines:
Unnamed: 0 DEPTH_MD CALIPER GR LITHOLOGY SHALLOW DEEP WELL THICKNESS
738 738 266.90 32.5626 11.34590 1 2.72669 2.49057 V131B 0.8
739 739 266.95 33.2834 3.81954 1 2.86810 2.60241 V131B 0.8
740 740 267.00 33.4193 11.42080 1 2.99825 2.70382 V131B 0.8
741 741 267.05 34.1252 3.84500 1 3.12187 2.78883 V131B 0.8
742 742 267.10 33.9583 3.84500 1 3.23839 2.85871 V131B 0.8
... ... ... ... ... ... ... ... ... ...
1618 1618 310.90 30.5434 22.69180 1 3.79042 3.25649 V131B 1.0
1619 1619 310.95 29.5919 22.69180 1 3.80181 3.27215 V131B 1.0
1620 1620 311.00 29.2208 11.30880 1 3.80493 3.28251 V131B 1.0
1621 1621 311.05 27.2846 26.64850 1 3.79330 3.28023 V131B 1.0
1622 1622 311.10 28.1289 26.47380 1 3.76142 3.26074 V131B 1.0
As you can see, my dataframe has many rows, and the thickness values shown are for rows where the LITHOLOGY column equals 1. I want to give consecutive names to every different thickness interval.
AI: The following should give the information you are looking for; the only difference is that the three fields (i.e. bed number, minimum value and maximum value of the 'DEPTH' field) are in separate columns instead of concatenated into a single string, which should be easy to change if that is really what you need.
import pandas as pd
df = pd.DataFrame({
"THICKNESS": [0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.15, 0.15, 0.15, 0.15],
"DEPTH": [50, 50.05, 50.1, 50.15, 50.2, 120, 120.05, 120.1, 170, 170.05, 170.1, 170.15]
})
(
df
# get the bed number by comparing rows in the 'THICKNESS' column
.assign(BED = lambda x: (x["THICKNESS"] != x["THICKNESS"].shift()).cumsum())
# get the minimum and maximum value of the 'DEPTH' column for each bed
.groupby("BED")
.agg(
DEPTH_min = ("DEPTH", "min"),
DEPTH_max = ("DEPTH", "max")
)
.reset_index()
)
BED  DEPTH_min  DEPTH_max
1    50         50.2
2    120        120.1
3    170        170.15
H: Auto-ML for only fixed estimator
Am working on a binary classification with 1000 rows and 28 columns.
I would wish to use an Auto-ML solution to try out different combinations of hyperparameters etc but the algo should only be logistic regression.
I don't wish to use other algorithms for the lack of interpretability.
So, I would like my auto-ML solution to stick to logistic regression and try out different values for hyperparameters. Of course, I might use fixed estimators like Decision trees, random forests etc as well
Is there any auto-ML solution that can use fixed estimator?
I read about Tpot, Evalml, AutoML etc but they all try multiple algorithms and finally output the best one (which may not be logistic regression). How can I restrict my auto-ML solution to only use logistic regression?
AI: I found that we can do this using TPOT's config_dict and pass it to the classifier, as shown below:
tpot_config = {
'sklearn.linear_model.LogisticRegression': {
'penalty': ["l1", "l2"],
'C': [1e-4, 1e-3, 1e-2, 1e-1, 0.5, 1., 5., 10., 15., 20., 25.],
'dual': [True, False]
},
}
tpot = TPOTClassifier(max_time_mins=10,verbosity=2,
config_dict=tpot_config,scoring='f1')
tpot.fit(ord_train_t, y_train)
This ensures that TPOT searches for the best pipeline based only on the configuration provided in config_dict.
However, if there are other Auto-ML tools that support a fixed estimator, I am interested to hear about them from others here as well.
H: Encoding before vs after train test split?
I am new to ML and am working on a dataset with a lot of high-cardinality categorical variables.
I observed that in a lot of encoding tutorials, like the one here, the encoding is applied after the train-test split.
Can I check why is it done so?
Why can't we apply the encoding even before the train test split?
Can't we apply the encoding to the full dataset and after encoding, split it into train and test sets?
What difference does it make?
AI: If you perform the encoding before the split, it will lead to data leakage (train-test contamination): you would be building the encoding (e.g. the integer mapping of a label encoder) using information from the test rows as well, which tends to give good validation scores but poor results in deployment.
Instead, once the categories in the train and validation data match up, call fit_transform on the train data and then only transform on the validation data, so the validation set is encoded with the mapping learned from the train data alone.
Almost all feature engineering, such as standardisation and normalisation, should likewise be fitted after the train-test split. Hope it helps — a minimal sketch of the pattern is below.
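A minimal sketch of this pattern with a one-hot encoder; X and y stand for your own features and target, nothing here is specific to your data:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

enc = OneHotEncoder(handle_unknown='ignore')
X_train_enc = enc.fit_transform(X_train)   # learn the category mapping on the train rows only
X_test_enc = enc.transform(X_test)         # re-use that mapping; never re-fit on the test rows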
H: Study names not showing in forest plot (R)
Context: I followed an existing R script to create a forest plot in R using the forest function from the metafor package.
Problem: The plot is showing, however, the study names are not showing on it and I am not sure why.
See data frame here. I want author and year to show in the forest plot below in place of "Study X":
See resulting forest plot here:
Code below:
library("here") # detect root directory
library("tidyverse") # data science
library("robumeta")
library("metafor") # for meta-analysis functions
library("dplyr")
library("weightr")
library("knitr")
bio_path <- here("data", "biomarkers.csv") # specify path for dataset
biomarkers <- read.csv(bio_path) %>% rename("cytokine"="ï..cytokine") %>% select(author:year, cytokine:dep_measure) # load data, rename + reorder col
df <- biomarkers %>% rename(c("ri"="r", "ni"="n")) # rename df and some col for z transformations
df <- escalc(measure="ZCOR", ri=ri, ni=ni, data=df, slab=paste(author, year, sep=", ")) # convert r to z
print(df) # `r`=correlation coefficients; `n`=sample sizes; the new `yi` variable=z score transformation; `vi` variable=corresponding estimated sampling variance.
ifn_y <- df %>% filter(cytokine=="IFN-y" | cytokine=="IFN-y_PHA") %>% mutate(id=1:4) %>% select(id, author:vi)
# print out the data and calculate and print the confidence interval for the amount of heterogeneity (I^2)
res_ifny <- rma(yi, vi, data=ifn_y)
predict(res_ifny, digits=3, transf=transf.ztor) # convert z back to r
confint(res_ifny) # calc heterogeneity measures with 95% CI
# visualize the meta analysis with a forest plot
forest(res_ifny, xlim=c(-1.6,1.6), atransf=transf.ztor, at=transf.rtoz(c(-.4,-.2,0,.2,.4,.6)), digits=c(2,1), cex=.8, header = "Author(s), Year", xlab="Correlation coefficient")
AI: Have you tried looking at some examples? You should be able to set the labels for the different studies using the slab argument of the forest function. Something like this should work:
forest(
res_ifny,
xlim=c(-1.6,1.6),
atransf=transf.ztor,
at=transf.rtoz(c(-.4,-.2,0,.2,.4,.6)),
digits=c(2,1),
cex=.8,
header="Author(s), Year",
xlab="Correlation coefficient",
slab=paste(ifn_y$author, ifn_y$year, sep=", ")
) |
H: Explainable AI solutions and packages in Python
I recently built a logistic regression for binary classification
While I understand that logistic regression is inherently interpretable, I am trying to use explainable AI solutions to understand its predictions. For that, I am trying to make a list of all XAI solutions and compare their results against each other.
As of now, these are below that I have identified
XAI solutions
a) SHAP
b) LIME
c) PFI
d) Dependence plots
Python packages implementations
a) Explainerdashboard
b) ExplainX.ai
c) Shapash
Is there anything that you can add to the list?
Would really be useful for beginner like me
AI: A few which I am aware of are:
Permutation importance; a Python package for this is ELI5.
LIME, SHAP, PDP and dependence plots you have already covered.
To understand AI explainability I would highly suggest reading:
https://www.bankofengland.co.uk/working-paper/2019/machine-learning-explainability-in-finance-an-application-to-default-risk-analysis
Taking this course on Kaggle: https://www.kaggle.com/learn/machine-learning-explainability |
H: Difference between the different measurement metric
Can someone explain what each of these means, both in simple terms and in terms of TP, TN, FP, FN?
Also are there any other common metrics that I am missing?
F-measure or F-score
Recall
Precision
Accuracy
AI: Let's say you are creating a model to predict whether a person has a disease or not.
True Positives (TP): Model predicts yes (they have the disease), and they do have the disease.
True Negatives (TN): Model predicts no, and they don't have the disease.
False Positives (FP): Model predicts yes, but they don't actually have the disease. (Also known as a "Type I error.")
False Negatives (FN): Model predicts no, but they actually do have the disease. (Also known as a "Type II error.")
If you look at an example confusion matrix with TP = 100, TN = 50, FP = 10 and FN = 5 (165 cases in total):
Accuracy: Overall, how often is the classifier correct?
(TP+TN)/total = (100+50)/165 = 0.91
True Positive Rate: When it's actually yes, how often does it predict yes?
TP/actual yes = 100/105 = 0.95
also known as "Sensitivity" or "Recall"
Precision: Model predicts yes, how often is it correct?
TP/predicted yes = 100/110 = 0.91
F1 Score = 2 * (Precision * Recall)/ Precision + Recall
= 2 * .91 * .95 / (0.91+.95)
Other terms which people usually use are :
AUC - Area Under Curve
Cohen's Kappa |
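As a small worked check of the formulas above (using the same counts, TP = 100, TN = 50, FP = 10, FN = 5):
TP, TN, FP, FN = 100, 50, 10, 5

accuracy  = (TP + TN) / (TP + TN + FP + FN)                  # 0.91
recall    = TP / (TP + FN)                                   # 0.95 (sensitivity / TPR)
precision = TP / (TP + FP)                                   # 0.91
f1        = 2 * precision * recall / (precision + recall)    # ~0.93

print(accuracy, recall, precision, f1)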
H: Loss-value of normal equation vs gradient descent
My question is whether gradient descent can give a better approximation than the normal equation in Python.
for the Loss function, I wrote
import numpy as np
def loss(w, X, y):
X = np.concatenate([np.ones(shape=(len(X),1)),X], axis = 1)
dif_y = y - X@w
lossValue = np.mean(dif_y**2)
return lossValue
and for gradient descent (grad_desc) and normal equation (solv)
def grad_desc(w,X,y, alpha = 0.01, maxiter = 1000, eps = 1e-2):
count = 0
while loss(w,X,y) >= eps and count <= maxiter:
w = w - alpha*grad(w,X,y)
count += 1
return w
def solv (X,y):
X = np.concatenate([np.ones((len(X),1)),X], axis = 1)
s = np.linalg.inv(X.T @ X) @ X.T @y
return s
where my outputs for
alpha = 0.000000001
gradDescent = grad_desc(w, Z, y, alpha, 10000)
normalEq = solv(Z,y)
print(loss(gradDescent,Z,y))
822.5912253816151
print(loss(normalEq,Z,y))
10674.419453701901
AI: The normal equations do not approximate a solution — they are closed form. The result is therefore the single best least-squares estimate of the parameters.
Gradient descent will never find a better estimate of the parameters than the normal equations (assuming the normal-equation method is applicable, i.e. $X^TX$ is invertible).
The result you show is idiosyncratic to your implementation — most likely a bug in the gradient or in how the two solutions are compared. A self-contained check is sketched below.
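A self-contained sketch on synthetic data (not your data) illustrating the point — with a correct gradient and enough iterations, gradient descent approaches the closed-form loss but never beats it:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 3 + rng.normal(scale=0.1, size=100)

Xb = np.concatenate([np.ones((len(X), 1)), X], axis=1)   # add intercept column

# closed form (normal equations)
w_ne = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y

# plain gradient descent on the same design matrix
w_gd = np.zeros(Xb.shape[1])
for _ in range(20000):
    grad = 2 * Xb.T @ (Xb @ w_gd - y) / len(y)
    w_gd -= 0.01 * grad

print(np.mean((y - Xb @ w_ne) ** 2))   # best achievable MSE
print(np.mean((y - Xb @ w_gd) ** 2))   # approaches, but does not beat, the value above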
H: What are the differences between bert embedding and flair embedding
I read about the BERT embedding model and the FLAIR embedding model, and I'm not sure I can tell what the differences between them are.
BERT uses transformers while FLAIR uses a BiLSTM.
With BERT, we feed words into the BERT architecture, and with FLAIR we feed characters into the FLAIR architecture.
What are the strengths of BERT embeddings?
What are the strengths of FLAIR embeddings?
In which cases would we prefer to use one model rather than another ?
AI: In general, BERT is probably better for tasks where meaning plays an important role. FLAIR is probably just as good on tasks related to syntax and morphology. Also, the typical advantage of character-level models is their better robustness towards noise (cf. case study in machine translation). There is not much direct comparison, however, there are many indirect clues that suggest so.
FLAIR is evaluated on named entity recognition, chunking, and part-of-speech tagging. All these three tasks rely heavily on syntax. FLAIR reports the F-1 score of 93.09 on the CoNLL-2003 Named Entity Recognition dataset, the same as BERT reports the F1-score of 92.8. (Note, however, that there are BERT-like models that are much better than the original BERT, such as RoBERTa or ALBERT.)
The semantic abilities of pre-trained representations are evaluated in the GLUE and SuperGLUE benchmarks, where BERT is used as a baseline and FLAIR does not appear in the leaderboards at all, which suggests it would not fare as well there. Also, very recent attempts to build character-level pre-trained representations (cf. Charformer, CANINE) struggle to reach good semantic performance, which also suggests that FLAIR, as a character-level representation, might have problems with semantic-oriented tasks.
To summarize, I would only consider FLAIR for the tasks they test it on (NER, POS tagging, chunking), especially for noisy user-generated data. For everything else I would use RoBERTa or other BERT-like models. |
H: What happens to auc when true positive rate grows
How does a change in the true positive rate affect AUC? Does an increase in TPR lead to an increase in AUC as well?
AI: Theoretically, if you are able to increase the TPR while keeping the FPR constant, your AUC will increase.
Let's first understand the ROC curve to see why this happens:
AUC is calculated from the ROC curve. The ROC curve shows the performance of a classification model at all thresholds, using two parameters:
True Positive Rate: TP / (TP + FN) (TP: True Positive, FN: False Negative)
False Positive Rate: FP / (FP + TN) (FP: False Positive, TN: True Negative)
For different thresholds, you plot the values of TPR and FPR.
The image below represents a ROC curve, with TPR on the Y axis and FPR on the X axis (Image Source):
FPR and TPR are interrelated: if you lower the classification threshold, both FPR and TPR will increase.
H: Can we not backpropagate model
I saw a CNN-based model for question classification. The authors said that they don't backpropagate gradients to the embeddings. How is it possible to update the network if you don't backpropagate to them?
Thanks
AI: When we use pretrained embeddings as model inputs, we sometimes don't want to update them, so the embedding layer is frozen and gradients are not backpropagated into it. The rest of the network is still updated as usual, because backpropagation only stops at the embedding weights.
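In Keras, for example, this is typically done by marking the embedding layer as non-trainable — a minimal sketch, where embedding_matrix stands for a pretrained (vocab_size, embedding_dim) array you have already loaded:
import tensorflow as tf

embedding_layer = tf.keras.layers.Embedding(
    input_dim=embedding_matrix.shape[0],
    output_dim=embedding_matrix.shape[1],
    embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
    trainable=False   # gradients are not propagated into the embedding weights
)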
H: Error from linear regression in scikit-learn
I am trying to implement simple linear regression on iris dataset.
my code is:
from sklearn.linear_model import LinearRegression
df = sns.load_dataset('iris')
x = df['sepal_length']
y = df['sepal_width']
model = LinearRegression()
model.fit(x,y)
However, I got this error:
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
AI: This is more of a programming question than a data science question and would therefore be better suited for the stackoverflow stackexchange. The error already gives quite a good explanation on what the issue is, you are passing a 1D array to the linear regression where it is expecting a 2D array. This because you are using a single string to index the feature column, which returns a pandas.Series, instead of using a list of strings, which would return a pandas.DataFrame. Changing the way you are selecting your feature(s) from the dataframe solves the issue:
import seaborn as sns
from sklearn.linear_model import LinearRegression
df = sns.load_dataset('iris')
x = df[['sepal_length']] # change from single string to list of strings
y = df['sepal_width']
model = LinearRegression()
model.fit(x, y) |
H: Why is my test accuracy higher than train accuracy? SKLEARN
As the title says, my code produces a lower train accuracy than test accuracy.
First I split my data set to train and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
I then fitted my model with the training sets
model = LogisticRegression(max_iter=10000, penalty='l2')
model.fit(X_train, y_train)
After which I printed the Training accuracy of the model
predictions = model.predict(X_train)
print('Training Accuracy: {}'.format(accuracy_score(y_train, predictions)))
The training accuracy showed 0.8275. I then validated my model using stratified KFold cross validation and it gave me an 0.8125 mean accuracy score.
cross_val = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
scores = cross_val_score(model, X_train, y_train, scoring='accuracy', cv=cross_val, n_jobs=-1)
for index, score in enumerate(scores):
print('Iteration {} Accuracy score: {}'.format(index + 1, score))
print('\nMean Accuracy: {}'.format(np.mean(scores)))
After that I evaluated by testing set with the following code:
testing_predictions = model.predict(X_test)
print(classification_report(y_test,testing_predictions))
cm = confusion_matrix(y_test, testing_predictions, labels=model.classes_)
display = ConfusionMatrixDisplay(confusion_matrix = cm, display_labels=model.classes_)
display.plot()
Surprisingly, it gave me a testing accuracy of 0.86, higher than the training accuracy of 0.82. I have also read that it is supposed to be impossible to have a higher testing accuracy than training accuracy. Did I do something wrong in my process?
AI: With your sample size I can see that happening purely by chance. Rerun your train/test split 100 times (dropping the fixed random seed) and see how often performance is better on the test set — a quick sketch of that experiment is below.
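A quick sketch of that experiment; X and y are your own data, and the model is the same logistic regression as in your code:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

gaps = []
for _ in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)   # no fixed random_state
    m = LogisticRegression(max_iter=10000, penalty='l2').fit(X_tr, y_tr)
    gaps.append(accuracy_score(y_te, m.predict(X_te)) - accuracy_score(y_tr, m.predict(X_tr)))

print(np.mean(np.array(gaps) > 0))   # fraction of splits where the test score beats the train score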
H: How to train multioutput classification with hyperparameter tuning in sklearn?
I am working on a simple multioutput classification problem and noticed this error showing up whenever running the below code:
ValueError: Target is multilabel-indicator but average='binary'. Please
choose another average setting, one of [None, 'micro', 'macro', 'weighted', 'samples'].
I understand the problem it is referencing, i.e., when evaluating multilabel models one needs to explicitly set the type of averaging. Nevertheless, I am unable to figure out where this average argument should go to; accuracy_score, precision_score, recall_score built-in methods have this argument which I do not use explicitly in my code. MultiOutputClassifier doesn't have such an argument, neither does the RandomizedSearchCV's .fit() method. I also tried passing methods like precision_score(average='micro') directly to the scoring and refit arguments of RandomizedSearchCV but that didn't solve it either since methods such as precision_score() require correct and true y labels as arguments, which I have no access to in the individual K-folds of the randomized search.
Full code with data:
from sklearn.datasets import make_multilabel_classification
from sklearn.naive_bayes import MultinomialNB
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
X, Y = make_multilabel_classification(
n_samples=1000,
n_features=2,
n_classes=5,
n_labels=2
)
pipe = Pipeline(
steps = [
('scaler', MinMaxScaler()),
('model', MultiOutputClassifier(MultinomialNB()))
]
)
search = RandomizedSearchCV(
estimator = pipe,
param_distributions={'model__estimator__alpha': (0.01,1)},
scoring = ['accuracy', 'precision', 'recall'],
refit = 'precision',
cv = 5
).fit(X, Y)
AI: The solution to this problem is relatively straight-forward, as the scikit-learn documentation mentions you can use strings (or a list of strings) to specify the scoring method(s) used. Following the link of scoring options you will see a list of predefined values for the scorers. For some scorers there are multiple types as mentioned by the error you are getting, e.g. for the F1 score you can use the default one for binary targets or any of the micro, macro, weighted, or samples options. You therefore simply have to change the strings you are providing to RandomizedSearchCV for the scoring and refit parameters to select the specific type of scorer you want to use. This would look something like this:
search = RandomizedSearchCV(
estimator=pipe,
param_distributions={'model__estimator__alpha': (0.01, 1)},
scoring=['accuracy', 'precision_micro', 'recall_micro'],
refit='precision_micro',
cv = 5
).fit(X, Y) |
H: Removing strings after $ in a tibble
Say I have a tibble that looks like this
library(tidyverse)
df <- tibble(`var` = c(1,2), `text` = c("elephant $ more text", "cat $ some more text"))
and I would like to programmatically remove all the text from the $ character onwards, so as to obtain the equivalent of
df <- tibble(`var` = c(1,2), `text` = c("elephant ", "cat "))
I tried to use the equivalent of Removing strings after a certain character in a given text
but it seems that the special character $ does not yield the desired outcome.
AI: You can achieve this by escaping the $ character with a double backslash in the regular expression:
df %>%
  mutate(text = gsub("\\$.*", "", text))
  var  text
1   1  "elephant "
2   2  "cat "
H: How FLAIR NER algorithm detects entities with typo?
I'm checking the NER FLAIR algorithm with typos:
'Jackson has a number of apartments in Les Angeles San Diego and Oakland'
I wrote Les Angeles instead of Los Angeles.
I deleted the comma between Les Angeles and San Diego.
And it seems that FLAIR detects the correct locations:
Span [1]: "Jackson" [− Labels: PER (0.9986)]
Span [8,9]: "Les Angeles" [− Labels: LOC (0.9788)]
Span [10,11]: "San Diego" [− Labels: LOC (0.9483)]
Span [13]: "Oakland" [− Labels: LOC (0.9818)]
How does FLAIR detect that Les Angeles is a location?
How does it detect that Les Angeles is one location and not two (two words)?
How does it detect that Les Angeles and San Diego are two different locations (there is no comma)?
AI: This is not specific to FLAIR, this is how NER models work in general. A NER model captures the clues in a sentence which are likely to correspond to an entity of a particular category, for example:
A location is often preceded by the word in
A proper name starts with a capital letter, and most locations are proper names.
These two clues above probably explain (1).
After one word has been recognized as part of an entity, it's likely that the next word is also part of the same entity.
This probably explains (2).
I'm not sure how the model gets (3), possibly because it has seen the entity San Diego a few times in the training data.
All these rules have been statistically inferred from the training data during the training stage. |
H: When an author says Features are the input to Machine Learning Model what does it mean?
I am reading an article about graph neural network and it is mentioned:
In this step, we extract all newly update hidden states and create a final feature vector describing the whole graph. This feature vector can be then used as input to a standard machine learning model.
What does it mean that this feature vector can be used as an input to a standard machine learning model? Isn't machine learning all about obtaining the features in the first place? And what does it imply that the feature vector would be the input to a machine learning model? Should it be part of the ML model rather than an input?
AI: There's quite a lot of confusion here:
The word "features" is a bit vague but it usually refers to the structured information provided as input to a ML system.
No, features are not part of a ML model. They are the input used to train a model.
No, ML is not "all about obtaining features". Obtaining the features is just the stage of obtaining the information in a usable way, so it's as if you said that "learning" is all about "obtaining a book".
In this sentence the author means there are (at least) two stages to this ML process: the first step trains a neural network model, then the values
of the hidden states of the NN are collected in order to provide them to a second model which uses them as input features. |
H: Sequential NN Using Keras Questions
I have some questions about the sequential model from Keras. Please let me know if there is anything wrong with how I presented my questions and I will be more than happy to fix it to make it align with community guidelines.
I am working with a data set of about 6000 points. I have three input variables (all real numbers) and I have one output (also a real number). It is essentially a regression problem.
I found here that the sequential model is not a good idea to use if you have multiple inputs. Is this true? I also found many examples of people using this network for multiple input values so I'm confused.
My three data inputs represent three different parameters of a structural design. Is this considered sequential data — is that correct?
The current code I have is giving me rather good $R^2$ values (~.85) so I am wondering if this is a fluke or if what I have done is valid even though I have a multi-dimensional input.
This is more of a conceptual question but in case it helps, here is the code I wrote:
model = Sequential()
model.add(Dense(3, input_shape=(3,), activation='relu'))
model.add(Dense(13, activation='relu'))#hiddn layer 1
model.add(Dense(9, activation='relu'))#hidden layer 2
model.add(Dense(9, activation='relu'))#hidden layer 3
model.add(Dense(9, activation='relu'))#hidden layer 4
model.add(Dense(1,))
model.compile(Adam(learning_rate = 0.001), 'mean_squared_error')
earlystopper = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.fit(X_trainL, y_trainL, epochs = 5000, validation_split = 0.1,shuffle = True, verbose = 0,callbacks=[earlystopper])
AI: When it is said that the Sequential model cannot have multiple inputs, this does not refer to your case, where you have a single input with 3 features. "Multiple inputs and multiple outputs" refers to models such as the following:
Training a model to generate image captions requires both image and text data during training — that is a multi-input model.
No, the three variables you have are not sequential data in the sense the API refers to; "Sequential" only describes a linear stack of layers, not the data.
Your R² values may be valid, but only rely on them if they are calculated on test data.
H: why sign flip to indicate loss in hyperopt?
I am using hyperopt to find the best hyperparameters for a random forest.
My objective is to get the parameters which return the best f1-score, as my dataset is imbalanced.
So, I defined the below objective function based on a tutorial online
space = {
"n_estimators": hp.choice("n_estimators", [100, 200, 300, 400,500,600]),
"max_depth": hp.quniform("max_depth", 1, 15,1),
"criterion": hp.choice("criterion", ["gini", "entropy"]),
"max_features": hp.quniform("max_features", 1, 14,1)
}
def hyperparameter_tuning(params):
clf = RandomForestClassifier(**params,n_jobs=-1)
f1score = cross_val_score(clf, ord_train_t, y_train,scoring="f1").mean()
return {"loss": -f1score, "status": STATUS_OK}
Am I doing this right?
If I remove the loss keyword from the return statement, it throws a KeyError.
Why is that? I just wanted to name the key f1score, but it threw an error.
Once I changed it back to loss, it started working fine.
I get output like below
88%|████████████████████████████████████████▍ | 88/100 [29:42<03:25, 17.14s/tri best loss: -0.8615863383404218
Does it mean my model returns an f1-score of 86%?
Shouldn't the loss be 100-86 = 14?
btw, why is it that we should sign-flip the f1score to indicate loss?
AI: The fmin function is the optimization routine that iterates over different hyperparameter settings and minimizes the objective function, and it expects the objective to return a dictionary containing a 'loss' key (and a 'status' key) — that is why renaming the key raises a KeyError. In Hyperopt we always look for the parameters which minimize the loss. So for evaluation metrics that need maximizing, like accuracy or the F1 score, we flip the sign, so that minimizing the loss is equivalent to maximizing the metric.
So if you use the F1 score, which you want to maximise, you return its negation, and minimizing that negation maximises the score:
F1 Score = 0.89 → Loss = -0.89
F1 Score = 0.91 → Loss = -0.91
The negation makes sure the hyperparameters which maximise the F1 score (0.91) are chosen, even though Hyperopt minimized the loss. Likewise, a best loss of -0.8616 means the best cross-validated F1 score found so far is about 0.86 — there is no "100 - 86" to compute, because F1 is not an error rate.
Also, if you use metrics like RMSE or MAE, which need to be minimised, you don't need to flip the sign.
H: Is it good to use .fit to xtest when we use PolynomialFeatures() of sklearn?
My teacher did this in class, and I'm wondering: is it OK to use .fit_transform on xtest? Shouldn't it just be poly.transform(xtest)?
Teacher's Code
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3)
xtrain_poly = poly.fit_transform(xtrain)
xtest_poly = poly.fit_transform(xtest)
As I think it should be:
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3)
xtrain_poly = poly.fit_transform(xtrain)
xtest_poly = poly.transform(xtest)
As an optional question, what do fit() and transform() do in PolynomialFeatures? Does transform() scale the data based on some value(s) computed by fit(), as when using sklearn.preprocessing.StandardScaler?
AI: Technically there is no difference in the output between the two methods, the main reason being that fitting the PolynomialFeatures class to data does not store any data-dependent parameters internally, unlike for example the StandardScaler class, which stores the mean and standard deviation during fit (see also the source code for PolynomialFeatures). However, I would say it is still better to use transform instead of fit_transform, both to stay consistent with other transformers from scikit-learn and because for transformers that do learn parameters, calling fit_transform on the test set would be a genuine mistake (data leakage). A quick check of the equivalence for PolynomialFeatures is sketched below.
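A quick way to convince yourself of this on any data (random numbers stand in for xtrain/xtest here):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
xtrain, xtest = rng.normal(size=(100, 2)), rng.normal(size=(20, 2))

poly = PolynomialFeatures(degree=3).fit(xtrain)
a = poly.transform(xtest)                                 # your version
b = PolynomialFeatures(degree=3).fit_transform(xtest)     # teacher's version

print(np.allclose(a, b))   # True: the outputs are identical for this transformer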
H: Why best hyperparameters leads to drop in test performance?
I am working on a binary classification problem using random forests (75:25 - class proportion). Label 0 is minority class. So, I am following the below approach
a) execute RF with default hyperparameters
b) execute RF with best hyperparameters (GridsearchCV with stratified K fold and scoring was F1)
While with default hyperparameters, my train data was overfit as can be seen from the results below. But, I went ahead and tried the default paramters in test data as well (results below)
default hyperparameters - Test data confusion matrix and classification report
Best hyperparameters - Test confusion matrix and classification report
So, my question is
a) Why do the best parameters lead to a drop in test-data performance, despite model.best_score_ returning 86.5 as the f1-score? I thought f1 scoring would find the best f1-score for both classes, but it looks like it only focuses on class 1. How can I make the scoring function increase the f1-score of the minority class?
b) This makes me feel like it is okay to stick with the overfit model, as it gives me relatively good performance on the test data (compared to the best-parameter model, which performs poorly).
c) My objective is to maximize metrics like recall and precision for label 0 (the minority class). How can I do that? Any suggestions please?
d) In this case, should I go ahead with the overfit model with default parameters?
update
when I invert the labels based on below answer, meaning 0's as 1's and 1's as 0's, I get the below performance
update - code for best hyparameters
from sklearn.model_selection import GridSearchCV
param_grid = {
'n_estimators': [100,200,300,500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8],
'criterion' :['gini', 'entropy']
}
skf = StratifiedKFold(n_splits=10, shuffle=False)
model = GridSearchCV(rfc, param_grid=param_grid, cv=skf, scoring='f1')  # pass the grid defined above
model.fit(ord_train_t, y_train)
print(model.best_params_)
print(model.best_score_)
rfc = RandomForestClassifier(random_state=42, max_features='sqrt', n_estimators= 500, max_depth=8, criterion='gini')
rfc.fit(ord_train_t, y_train)
y_train_pred = rfc.predict(ord_train_t)
y_test_pred = rfc.predict(ord_test_t)
y_train_proba = rfc.predict_proba(ord_train_t)
y_test_proba = rfc.predict_proba(ord_test_t)
code for default hyparameters
rfc = RandomForestClassifier()
rfc.fit(ord_train_t, y_train)
y_train_pred = rfc.predict(ord_train_t)
y_test_pred = rfc.predict(ord_test_t)
y_train_proba = rfc.predict_proba(ord_train_t)
y_test_proba = rfc.predict_proba(ord_test_t)
AI: A simple hack: the minority class is usually encoded as 1, since most metrics (including the default 'f1' scoring) only take that class into account by default. So you can swap the mapping — 0 to 1 and 1 to 0 — and see if it works.
I think this is happening because the grid search is optimising the f1-score of the 1s, which in this case is not aligned with what you want to do. Alternatively, you can keep your labels as they are and point the scorer at label 0, as sketched below.
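Concretely, you can point the grid search at the F1 score of label 0 with a custom scorer — a sketch reusing the rfc, param_grid and skf objects from your own code:
from sklearn.metrics import f1_score, make_scorer

f1_label0 = make_scorer(f1_score, pos_label=0)   # optimise F1 of the minority class (label 0)

model = GridSearchCV(rfc, param_grid=param_grid, cv=skf, scoring=f1_label0)
model.fit(ord_train_t, y_train)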
H: what tree generation algorithm is used to create DT, ID3? C4.5, CART?
By default, what tree-generation algorithm is used to create a decision tree — ID3, C4.5, or CART?
AI: Assuming that you're using the scikit-learn library, the DT algorithm implemented is an optimized version of CART. From the documentation:
scikit-learn uses an optimised version of the CART algorithm; however, scikit-learn implementation does not support categorical variables for now. |
H: Does higher coefficient equate to high feature importance?
I have read in this blog post that the higher a feature's coefficient, the more important the feature is.
However, the author did not explain why a higher coefficient equates to higher importance, which confused me.
AI: It is not correct to claim that "higher" coefficients per se equate to more importance. The reason is that the scale of the data matters in linear models.
df = data.frame(y=c(1,1,1,0,0,0),x1=c(10,11,12,5,4,6), x2=c(11310,12520,10110,6010,5020,4010))
logit1 <- glm(y ~ x1 + x2, data = df, family = "binomial")
summary(logit1)
df$x2_alt = df$x2 / 1000
df$x2_alt
logit2 <- glm(y ~ x1 + x2_alt, data = df, family = "binomial")
summary(logit2)
Logit 1:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.274e+01 1.679e+05 0 1
x1 4.561e+00 1.072e+05 0 1
x2 4.450e-03 1.019e+02 0 1
Logit 2:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -72.743 167870.599 0 1
x1 4.561 107247.232 0 1
x2_alt 4.450 101948.329 0 1
As you can see, a linear transformation on $x_2$ changes the estimated coefficient in my dummy example above.
In order to compare coefficients in linear models, all $x$ need to be on the same scale (sometimes called beta regression), i.e. the $x$ need to have a standard deviation of one. If all $x$ are scaled, you can claim that a higher coefficient has a higher impact on $y$: a one-standard-deviation change in an $x$ with a high coefficient is associated with a stronger change in $y$ than the same change in an $x$ with a low coefficient.
However, this is not neccesarily the same thing as "feature importance", which means "high predictive power" of some $x$.
In the context of a Logit model you would use "Lasso" or "Ridge" (or Elastic Net) to "shrink" coefficients which are not very useful in making a prediction. See ISL, Ch. 6.2.2. In these models an additional "penalty term" is used to "shrink" coefficients which contribute little (or less than others). In the case of OLS with a Lasso penalty, the minimised objective would look like:
$\sum_{i=1}^{n}\big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\big)^2 + \lambda\sum_{j=1}^{p}|\beta_j|$
sklearn.linear_model.LogisticRegression by default uses an l2 penalty (option penalty) which is "Ridge".
So overall, you can use Ridge/Lasso to select features (some see this as feature importance). However, the claim that "larger coefficients" are per se "more important" is not correct. |
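In scikit-learn terms, putting all features on the same scale before fitting is what makes coefficient magnitudes comparable — a minimal sketch, with X and y standing for your own feature matrix and binary target:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(StandardScaler(), LogisticRegression(penalty='l2'))
pipe.fit(X, y)

# the coefficients now refer to standardised features, so their magnitudes can be compared
print(pipe.named_steps['logisticregression'].coef_)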
H: Comparing distributions Python
I have a larger dataset (random variable) 'x' containing values approximating a Gaussian distribution. From 'x', a much smaller random variable 'y' is sampled without replacement. I want to compare their distributions using histograms. The code in Python 3.9 is as follows:
# Create a gaussian distribution-
x = np.random.normal(loc = 0, scale = 2.0, size = 20000000)
# Sample from 'x' without replacement-
y = np.random.choice(a = x, size = 400000, replace = False)
x.size, y.size
# (20000000, 400000)
# Compare the distributions using 'histplot()' in seaborn with different bin sizes for x & y-
sns.histplot(data = x, bins = int(np.ceil(np.sqrt(x.size))), label = 'x')
sns.histplot(data = y, bins = int(np.ceil(np.sqrt(y.size))), label = 'y')
plt.xlabel("values")
plt.legend(loc = 'best')
plt.title("Comparing Distributions")
plt.show()
This produces the output:
# Compare the distributions using 'histplot()' in seaborn with same bin sizes for x & y-
sns.histplot(data = x, bins = int(np.ceil(np.sqrt(x.size))), label = 'x')
sns.histplot(data = y, bins = int(np.ceil(np.sqrt(x.size))), label = 'y')
plt.xlabel("values")
plt.legend(loc = 'best')
plt.title("Comparing Distributions")
plt.show()
This produces the output:
In my opinion, the second plot is wrong because each histogram should be computed and visualized with its own bin size for the given data.
To further analyze the two distributions using a histogram-
n_x, bins_x, _ = plt.hist(x, bins = int(np.ceil(np.sqrt(x.size))))
n_y, bins_y, _ = plt.hist(y, bins = int(np.ceil(np.sqrt(y.size))))
# number of values in all bins-
n_x.size, n_y.size
# (4473, 633)
# bin size-
bins_x.size, bins_y.size
# (4474, 634)
# bin-width-
bw_x = bins_x[1] - bins_x[0]
bw_y = bins_y[1] - bins_y[0]
bw_x, bw_y
# (0.004882625722377298, 0.02781399915135907)
Since 'y' is much smaller than 'x', its bin-width (0.0278) is consequently much larger than the bin-width of 'x' (0.0049). Hence, this produces a different histogram and visualization. Since 'y' is sampled from 'x', using the Kolmogorov-Smirnov two-sample test doesn't make sense.
What's the appropriate way to compare these two distributions?
AI: If you want to compare their shape you need to do two things:
account for size of the set
account for number of bins
the more data you have, the taller the bars will be; the more bins you have, the shorter the bars will be (because you're dividing the same quantity of data into more bins)
this is what I came up with
#importing
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#Create a gaussian distribution-
x = np.random.normal(loc = 0, scale = 2.0, size = 200000)
# Sample from 'x' without replacement-
y = np.random.choice(a = x, size = 40000, replace = False)
#binning
n_x, bins_x = np.histogram(x, bins = int(np.ceil(np.sqrt(x.size))))
n_y, bins_y = np.histogram(y, bins = int(np.ceil(np.sqrt(y.size))))
#normalizings
n_x=n_x/len(x)/len(bins_y)
n_y=n_y/len(y)/len(bins_x)
#plotting
plt.plot(bins_x[:-1],n_x)
plt.plot(bins_y[:-1],n_y)
plt.show()
which renders this
I'm not sure if the y value, in this case, is of any practical utility.
Edit:
It strikes me that scaling up the subset to match the original set might make more sense under more use cases (for instance, trying to efficiently plot a subset of data). This would do the trick:
#Scaling the subsample
n_y=n_y/len(y)/len(bins_x)*len(x)*len(bins_y)
then you wouldn't have to scale n_x at all |
H: A question on intercepts and coefficients in linear regression
So I was studying through some sites and saw a Linear regression problem where a company is attempting to find the correct amount to spend on marketing. The example had a small dataset with units sold and total expenditure historically. The example then shows a function with 3 variables: historic expenditure (list), units sold (list) and the desired number of units they wish to sell. The training data used the historic expense as the independent variable and the units sold as our dependent variable, which was fit with the LinearRegression model. In their example, the answer boiled down to (desired units sold - model.intercept_)/ model.coef_
Conceptually, I understand the intercept as the constant value when the predictor variable is 0, so it makes sense to subtract that constant from the value we want to hit (our desired number of units sold). But I'm having a hard time understanding why we then divide that value by the model coefficient. I don't fully understand what the coefficients really mean (weights or something?). Any explanation with math would be helpful.
AI: Since historic expense $E$ is an independent variable and unit sold $S$ is a dependent variable, the model should be like this
$S = m \times E + c$
where $m$ should be the model.coef_, and $c$ should be the model.intercept_.
The coefficient $m$ (model.coef_) is the slope: the number of additional units sold per additional unit of expenditure, while $c$ (model.intercept_) is the constant. So if you want to work back the required $E$ for a target $S$, rearrange the equation: $S - c = m \times E$, hence
$E = \frac{S - c}{m}$
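In code, with a fitted scikit-learn LinearRegression (here called model) and a target of, say, 500 units:
target_units = 500
# model.coef_[0] is the slope m, model.intercept_ is the constant c
required_expense = (target_units - model.intercept_) / model.coef_[0]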
H: Interpret Naive Bayes output Python
I am running Python code off Kaggle on the adult dataset using Naive Bayes. How do you interpret the results below, I know that it's for each instance the likelihood they make less than 50K or more than 50k.
How do you interpret say the first row in simple terms? Is there a way I could make it in standard form?
y_pred_prob_df = pd.DataFrame(data=y_pred_prob, columns=['Prob of - <=50K', 'Prob of - >50K'])
y_pred_prob_df
AI: The first row simply means that for the first observation the model predicts a probability of 99.99% (essentially 100%) that they make less than 50k. For the second record your model also predicts a probability of 99.97% of making less than 50k, but the probability of making more than 50k is not 31% but 0.0312% (notice the e-4 in the scientific notation). The two numbers always add up to 100%.
H: Can't understand an MSE loss function in a paper
I'm reading a paper published in nips 2021.
There's a part in it that is confusing:
This loss term is the mean squared error of the normalized feature
vectors and can be written as what follows:
Where $\left\|.\right\| _2$is $\ell_2$ normalization,$\langle , \rangle$ is the dot product operation.
As far as I know MSE loss function looks like :
$L=\frac{1}{2}(y - \hat{y})^{2}$
How does the above equation qualify as an MSE loss function?
AI: Recall what mean squared error is actually measuring: the (squared) Euclidean distance between some regressed function $\hat y$ and the true signal/function $y$, evaluated at every input $x$. The expression in the paper is a more formalized vector version of exactly the same idea.
Starting from the idea that the Euclidean distance is what comes into play:
$ d(f_{1}(x),f_{2}(x))^{2} = \langle f_{1}(x) - f_{2}(x), f_{1}(x) - f_{2}(x) \rangle = \langle f_{1}(x),f_{1}(x) \rangle + \langle f_{2}(x),f_{2}(x) \rangle - 2 \langle f_{1}(x),f_{2}(x) \rangle = 2 (1 - \langle f_{1}(x),f_{2}(x) \rangle) = 2 - 2 \langle f_{1}(x),f_{2}(x) \rangle$,
where the second-to-last step uses $\langle f_{i}(x), f_{i}(x) \rangle = 1$, which holds because the vectors are $\ell_2$-normalized.
The denominator in the paper's formula is just there to make each vector unit length.
Hope this helps! |
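A quick numeric check of that identity with random unit vectors:
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=8), rng.normal(size=8)
u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)   # l2-normalise both vectors

print(np.sum((u - v) ** 2))    # squared Euclidean distance
print(2 - 2 * np.dot(u, v))    # same number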
H: XGBoost Regression Prediction
I trained an XGBoost Regression model that tries to predict the number of conversions that a campaign provides. Independent variables are monthly dummy, location dummy, and 4 columns of campaign rules (numerical). Total of 6 columns of ind. variables.
I trained the model. Now I am trying to predict a given campaign's performance using this model.
My aim is to input the rules, month and location of a new campaign, and get the amount of conversions that XGBoost predicts. Do you know how can I implement this kind of prediction in Python?
Thanks
AI: You should be able to simply use the predict method: give it the data for which you want a prediction and the model will output the predicted number of conversions. Just make sure that the data you want to predict on has the same format as the data you used for training (i.e. the same columns, encodings and scaling, if applicable) and pass it as the X argument of the predict method. If you only want to predict for one campaign, you might need to add an extra dimension so you have an array of shape (1, 5) (or however many columns you have), as the model expects a 2D input. A minimal sketch is shown below.
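A minimal sketch, assuming you trained an XGBRegressor (scikit-learn API) on a pandas DataFrame — the column names below are placeholders for your own dummy and rule columns:
import pandas as pd

new_campaign = pd.DataFrame([{
    'month_7': 1, 'month_8': 0,          # monthly dummies (placeholders)
    'location_A': 1, 'location_B': 0,    # location dummies (placeholders)
    'rule_1': 0.5, 'rule_2': 10, 'rule_3': 3, 'rule_4': 1,
}])
# the columns must have the same names and order as the training data
predicted_conversions = model.predict(new_campaign)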
H: One-hot encode a numeric categorical feature (e.g. year built, satisfaction out of 10, etc) or not?
I'm trying to understand the pros and cons of different approaches for encoding a certain feature rather than keeping its numerical value.
Let's say we have a dataframe that has a Satisfaction column with values in the range 1-10 and we were trying to regress on a continuous Y value which is the rate of probability of the client to return
10 Very Excellent
9 Excellent
8 Very Good
7 Good
6 Above Average
5 Average
4 Below Average
3 Fair
2 Poor
1 Very Poor
In this example the variable is categorical, but there is an order relationship between the values which could be useful for predicting Y: a user with a 10 would be more inclined to return than a user with a 1.
But the model would probably be able to figure out by itself which category is more likely to return while looking at the Y variable in the train dataset during training.
Also, by keeping the numeric 1-10 relationship we assume that all categories are equally spaced, while I'd say that the emotional distance between Average and Good is probably different from the distance between Excellent and Very Excellent.
I could:
One hot-encode the column to eliminate the numerical relationship between the values
Transform the data as numeric
Do both, keep the values as numeric and add a one-hot encoding
Can someone make light on what would be the nuances between all of the options?
AI: That rating system is not categorical, it's ordinal, meaning there is a scale and order to the data.
There are few hard and fast rules, because sometimes doing things "wrong" produces better results under particular circumstances. However, I would recommend using numeric data. Predicting a regression instead of a classification will help the model to understand that there is an order to the output, which will likely improve performance in a use case like this. If you present it as categorical, your model has to spend more "effort" learning that there is an order to the output. This may require a more complex model, meaning longer training times, less efficient prediction, and a higher risk of overfitting.
Your output will likely be easier to interpret too. Instead of
80% very poor
10% poor
3% moderate
you would get a single value between 0-1. So 0.13 for instance. (you will want to normalize your data range by shrinking it to be between 0 and 1).
types of data |
H: A/B test on model - split on results
I developed a predictive model that assigns the best product (P1, P2, P3) for each customer. I wanted to compare the conversion rate using this model VS the as-is deterministic assignment, so I applied an A/B test: I decided the product between P1, P2, P3
using the model on 50% of my users
using the deterministic rules on the other 50%
and then I compared the different conversion rates.
My question is: is it correct to split the analysis on the test results by product? So I'd like to compare
test CR on P1 VS control CR on P1
same on P2, P3
I have a doubt since I'm using the model's results (so the product assignment) as the splitting feature. Is it correct, or am I introducing bias in the analysis?
AI: Provided everything is kosher, and you're doing the analysis of the conversion rate in a way that is devoid of any bias or error, I don't see an explicit problem with either approach. I would do both
Compare the conversion rate across all products
Compare the conversion rate on a product by product basis
That will give you an idea of aggregate and specific performance, which is useful. You may also want to do a third:
compare the conversion rate across meaningful customer subsegments
Maybe your model is better at converting certain types of users, or users in certain areas? The more granular you get, the more you need to worry about confirmation bias (correlation does not imply causation, and if you go digging around you will find compelling trends based on chance), but these tests will allow you to know where you should further improve your approach, and may inspire additional meaningful study. |
H: If an SVM decision boundary is the perpendicular bisector of the line connecting the support vectors, why iterate for it using a loss function?
Would it not make more sense to do some linear algebra to find the vector of the decision boundary? Is that more computationally expensive?
AI: You have to find the support vectors!
Anyway, the title is not quite correct, because there can be many support vectors, so "the line connecting" them doesn't necessarily exist. |
H: Word Embedding Dimensions Reduction
In my NLP task, I use GloVe to get each word embedding. GloVe gives 50 floating-point numbers as the embedding of every word in my sentence. My corpus is large, and the resulting model is too large to fit on my machine, so I'm looking for a way to reduce each word embedding from 50 numbers to something less — maybe even a single floating-point number, if possible. Is there an approach to do that?
I was thinking about an aggregation approach, like taking the average of the 50 numbers — would that work?
AI: Taking the average (or any other statistical combination) of the embedding vector for each word is NOT a good idea, because the embedding dimensions are independent and you will lose a lot of the nuance of the learned space.
You could do a few things, however. The first would be to run PCA on the embedding matrix to reduce the dimensionality a little, say from 50 down to 45 dimensions (a minimal sketch is below). This is a quick hack, so don't overdo it. Ideally you would train another, smaller embedding on top of this one — say 10-30 dimensions — that still gives you good results for your task. You could remove unused or very rare words from your embedding altogether. You could also normalize your embedding vocabulary so that case variants and accented forms don't get separate entries.
From an engineering standpoint, if fitting in RAM is still an issue, you could also load chunks of the embedding vocabulary into memory at a time via some caching mechanism such as an LRU cache.
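For the PCA route, a minimal sketch — embedding_matrix stands for your (vocab_size, 50) GloVe matrix, assumed already loaded:
from sklearn.decomposition import PCA

pca = PCA(n_components=45)
reduced_embeddings = pca.fit_transform(embedding_matrix)   # shape (vocab_size, 45)
print(pca.explained_variance_ratio_.sum())                 # fraction of variance kept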
H: train-test split on forecasting a time series using external features
I have a question regarding the train-test split when forecasting a time series using features instead of the time series itself. I know that I should use a time-based train-test split if I use lagged values of the series to predict, but I am wondering whether that is also the case if I only use an external feature. Suppose I try to forecast watermelon consumption using only the temperature (the X feature) instead of the watermelon series itself. Leaving aside that it might be better to use the time series, would it be valid to do a normal (random) train-test split for the feature-based forecast, so that I could train on days from November (temperature, watermelon consumption) and test on unseen data that was technically gathered earlier (let's say September)?
Just thinking of the validity of the random train-test split: I know that different months might matter and not just the temperature itself, but it's just a simple example to clarify my concern.
Thanks in advance.
AI: Simple answer is no, you should not.
When performing a forecasting task, you don't want your trained model to have any information about the future it has to forecast. Otherwise, it might use this information.
Now, if you are 100% sure that the latter data have no relation with the earlier, you could do a random train/test split (similarly using time series directly or features). For example, you could create a weather forecast model based on year 1990 and test it on year 1960 (I guess, I am no weather expert).
In any case, if you can do without future data, I would say it is better (as there would be no data leak for sure). |
H: What does the RMSE of an LSTM model tell us?
Suppose I made a model which has an RMSE of 50.
Now I predict the next data point and the prediction is 500.
Does that mean the actual value has a high probability of being within the range 450-550? If so, what is the probability that it will be in this range?
Or does it mean the actual value has a high probability of being within the range 475-525? If so, what is the probability that it will be in this range?
AI: RMSE is calculated over multiple predictions, so it isn't as simple as saying that when you predict 500 the actual result will deterministically lie within 500 ± RMSE.
Instead, you would expect that for a new batch of, e.g., n = 500 predictions, the square root of the mean squared difference would be close to 50.
Note that, because of the squaring and the square root, an RMSE of 50 indicates a typical absolute difference of around 50, which should be read as 500 ± 50, not 500 ± 25.
In practice individual results of course can vary much more e.g. for:
y_pred y_act
50 100
50 100
50 100
the RMSE is the same as
y_pred y_act
50 50
50 50
50 136
and so on. This means that you can use normal distribution assumptions to say:
"Assuming my errors are not biased and normally distributed, my range of error should be +-50 for 68% of my values." |
H: Accuracy over 100%
I am using OpenFL, the Intel framework for Federated Learning.
If I run their tutorial example, I have that loss decreases and accuracy is in range 0-100%, like this:
[16:21:05] METRIC Round 4, collaborator env_one train result train_loss: 3.083468 experiment.py:112
[16:21:29] METRIC Round 4, collaborator env_one localy_tuned_model_validate result acc: 0.640100 experiment.py:112
[16:21:53] METRIC Round 4, collaborator env_one aggregated_model_validate result acc: 0.632200 experiment.py:112
METRIC Round 4, collaborator Aggregator localy_tuned_model_validate result acc: 0.640100 experiment.py:112
METRIC Round 4, collaborator Aggregator aggregated_model_validate result acc: 0.632200 experiment.py:112
[16:21:58] METRIC Round 4, collaborator Aggregator train result train_loss: 3.083468
So basically, an accuracy of 64%.
The problem is when I run my experiments. From their code I changed only the dataset (I am using MNIST) and the neural network. My neural network is the following:
class VGG16(nn.Module):
def __init__(self, num_classes):
super(VGG16, self).__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=1,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(256),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.BatchNorm2d(512),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.classifier = nn.Sequential(
nn.Linear(out_features, 4096),  # out_features must equal the flattened size of block_4's output (512 * 1 * 1 = 512 for 28x28 MNIST inputs)
nn.ReLU(True),
nn.Dropout(p=0.65),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(p=0.65),
nn.Linear(4096, num_classes)
)
for m in self.modules():
if isinstance(m, torch.nn.Conv2d) or isinstance(m, torch.nn.Linear):
nn.init.kaiming_uniform_(m.weight, mode='fan_in', nonlinearity='leaky_relu')
# nn.init.xavier_normal_(m.weight)
if m.bias is not None:
m.bias.detach().zero_()
# self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
And I pass 10 as number of classes:
# Create the model
model = VGG16(10)
output = model(test_x)
output.shape
torch.Size([1, 10])
If I run this, I obtain accuracies like this:
[15:22:51] METRIC Round 0, collaborator env_one train result train_loss: 0.741962 experiment.py:112
[15:22:54] METRIC Round 0, collaborator env_one localy_tuned_model_validate result acc: 13.395700 experiment.py:112
[15:22:57] METRIC Round 0, collaborator env_one aggregated_model_validate result acc: 12.433300 experiment.py:112
METRIC Round 0, collaborator Aggregator aggregated_model_validate result acc: 12.433300 experiment.py:112
[15:23:03] METRIC Round 0, collaborator Aggregator train result train_loss: 0.741962 experiment.py:112
METRIC Round 0, collaborator Aggregator localy_tuned_model_validate result acc: 13.395700
Loss is correct, while accuracies are out of range.
My loss:
def cross_entropy(output, target):
"""Binary cross-entropy metric
"""
#return F.cross_entropy(input=output,target=target)
#return F.binary_cross_entropy_with_logits(input=output,target=target)
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
return loss
My validation function:
@task_interface.register_fl_task(model='net_model', data_loader='val_loader', device='device')
def validate(net_model, val_loader, device):
torch.manual_seed(0)
device = torch.device('cuda')
net_model.eval()
net_model.to(device)
val_loader = tqdm.tqdm(val_loader, desc="validate")
val_score = 0
total_samples = 0
with torch.no_grad():
for data, target in val_loader:
samples = target.shape[0]
total_samples += samples
data, target = torch.tensor(data).to(device), \
torch.tensor(target).to(device, dtype=torch.int64)
output = net_model(data)
pred = output.argmax(dim=1,keepdim=True)
val_score += pred.eq(target).sum().cpu().numpy()
return {'acc': val_score / total_samples,}
AI: I solved it in this way:
#original
#pred = output.argmax(dim=1,keepdim=True)
#my solution
_, pred = torch.max(output, dim=1)
I do not know exactly why, but my solution works. A likely explanation: with keepdim=True, pred has shape [batch, 1] while target has shape [batch], so pred.eq(target) broadcasts to a [batch, batch] comparison and over-counts matches — which is exactly how the accuracy can exceed 100%. torch.max(output, dim=1) returns indices of shape [batch], so the comparison is element-wise; keeping argmax and comparing against target.view_as(pred) would also fix it. A minimal sketch of the shape issue is shown below.
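A minimal sketch of the shape issue described above:
import torch

output = torch.randn(4, 10)                 # batch of 4 samples, 10 classes
target = torch.tensor([1, 2, 3, 4])

pred_keepdim = output.argmax(dim=1, keepdim=True)   # shape [4, 1]
print(pred_keepdim.eq(target).shape)                # torch.Size([4, 4]) -- broadcast, over-counts matches

_, pred = torch.max(output, dim=1)                  # shape [4]
print(pred.eq(target).shape)                        # torch.Size([4])    -- element-wise, correct count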
H: Module 'sklearn' has no attribute 'datasets'?
Isn't scikit-learn version 1.0.2 supposed to have an attribute datasets? If so, why am I getting an error?
Python 3.9.2 (default, Feb 28 2021, 17:03:44)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn as skl
>>> skl.__version__
'1.0.2'
>>> data = skl.datasets.load_diabetes()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'sklearn' has no attribute 'datasets'
AI: Works fine for me when the submodule is imported explicitly — import sklearn alone does not automatically import sklearn.datasets (submodules are not loaded eagerly), which is why skl.datasets raises an AttributeError:
import sklearn
from sklearn.datasets import load_diabetes
X, y = load_diabetes(return_X_y=True)
print('The scikit-learn version is {}.'.format(sklearn.__version__))
The scikit-learn version is 0.20.3.
See docs and example. |
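If you want to keep the module-style access from the question, import the submodule explicitly:
import sklearn.datasets as datasets   # or: from sklearn import datasets

data = datasets.load_diabetes()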
H: WMT: What are the differences of WMT14, WMT15 and WMT16 datasets?
Each year, the Workshop on Statistical Machine Translation (WMT) holds a conference that focuses on new tasks, papers, and findings in the field of machine translation.
Let's say we are talking about the parallel dataset Newscommentary. There is the Newscommentary in WMT14, WMT15, WMT16 and so on.
How much does the dataset differ between conference years? Is it possible to read about this somewhere?
AI: The most straightforward way is to check the article published to summarize the results of each WMT edition, where you can find tables with the figures for each of the datasets supplied for the competition — e.g. the dataset tables in the findings papers of WMT'2014, WMT'2015 and WMT'2016.
You can find all the papers in the ACL Anthology, searching for Findings of the Conference on Machine Translation 20XX, specifying the target year.
Note that in 2016, WMT changed its name from "Workshop on Statistical Machine Translation" to " Conference on Machine Translation", despite keeping the WMT acronym. |
H: Which Machine Learning algorithm should I use for a sports prediction study?
I am trying to choose a ML algorithm and use it in my final year project, the class I am in has just started and we are studying K-Means and DBSCAN.
My supervisor for the project suggested a Neural Network algorithm but we are not studying this in the ML module I am enrolled in.
I am wondering which neural network model or other ML algorithm I should use for my project.
The goal of the project is to predict soccer results. There will be plenty of control variables (home/away goals for and against, wins, draws, losses, referee results, each team's starting 11 with plenty of stats for each player, ...).
I am wondering which model would be the best to use — and is a neural network the best route to go?
AI: Welcome to the wonderful world of ML.
I'd use XGBoost. It's simple to get started. It can be kind of a pain to install on windows, but this might help. As I recall, on linux it's a breeze.
It's a gradient-boosted ensemble of decision trees, so it takes all your inputs and learns a series of thresholds (if x>y and z<7, they'll win). This has several advantages, especially for a beginner in the field (see the sketch after this list):
it's very tolerant to poorly formatted data (non normalized)
most of the hyperparameters are pretty intuitive
it has a tendency to work fairly well out of the box.
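For a concrete starting point, here is a minimal sketch using XGBoost's scikit-learn API; the file name and column names below are placeholders, not real data:
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# hypothetical CSV of past matches with numeric features and a 0/1 "home_win" label
df = pd.read_csv("matches.csv")
X = df[["home_goals_avg", "away_goals_avg", "home_wins", "away_wins"]]  # placeholder features
y = df["home_win"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))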
It will be daunting; the first time you implement just about any algorithm is challenging. Just keep your head down and persevere.
If you do want to go with a NN (which is also an excellent choice), I recommend using tf.keras. There are excellent beginner tutorials by this guy. This is arguably a more useful library, but it can also be tough to get started. If you watch a few tutorials, though, you'll be fine.
You will quickly find that the choice of model is often the easy part. It's the data preprocessing, training/validation, etc. that is a pain. So, if I were you, I would just pick a model and get started ASAP; your objective is to learn, not to make a perfect model.
Some other things you'll probably need in your tool belt:
python in general
pandas for storing and manipulating data
numpy for messing around with data types
matplotlib.pyplot for plotting
sklearn for miscellaneous stuff (or more, if you look into it) |
H: Why is gridsearchCV.best_estimator_.score giving me r2_score even if I mentioned MAE as my main scoring metric?
I have a lasso regression model with the following definition :
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import scale
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score
folds = KFold(n_splits = 5, shuffle = True, random_state = 100)
# specify range of hyperparameters
hyper_params = [{
"n_features_to_select": [0.25, 0.5, 0.75, 1.0],
"estimator__alpha" : [0.2, 0.5, 0.7, 1, 1.2]}]
scoring_list = ['explained_variance','neg_mean_absolute_error','r2']
# specify model
lm = Lasso()
#lm.fit(x_train,y_train)
rfe = RFE(lm)
# set up GridSearchCV()
model_cv = GridSearchCV(estimator = rfe,
param_grid = hyper_params,
scoring= scoring_list,
cv = folds,
verbose = 3,
return_train_score=True,
refit = 'neg_mean_absolute_error')
The best estimator was found to be
RFE(estimator=Lasso(alpha=0.2), n_features_to_select=0.5)
with best score of 3.513 (MAE).
I wanted to use the best predictor to score my test dataset
model_cv.best_estimator_.score(x_test,y_test)
which gives 0.6548
I tried to use predict to check whether the value corroborates what I get when I manually compute the metrics with a scorer.
from sklearn.metrics import r2_score , mean_absolute_error
y_pred = model_cv.best_estimator_.predict(x_test)
mean_absolute_error(y_test,y_pred) # gives 3.4804479077256256
r2_score(y_test,y_pred) # gives 0.6548
This shows that model_cv.best_estimator_.score is giving the r2_score.
My question is why is it giving the r2_score when the refit parameter is neg_mean_absolute_error .
I have not included toy data as the question is data-agnostic.
AI: This is the default behavior of the score method for any scikit-learn estimator, and as far as I know, it cannot be modified: score returns $R^2$ for regressors and accuracy for classifiers. (check)
The refit parameter only controls which metric is used to select the best hyperparameters (and which scorer the search object's own score method uses); it does not change the score method of the refitted estimator.
If you want to evaluate the best estimator with MAE you simply have to do:
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, model_cv.best_estimator_.predict(x_test))
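As a side note (an addition, not part of the original answer): with multi-metric scoring, the search object's own score method does use the scorer named in refit, so you can see both behaviours side by side (x_test, y_test and model_cv are the objects from the question):
print(model_cv.score(x_test, y_test))                  # negative MAE (the refit scorer)
print(model_cv.best_estimator_.score(x_test, y_test))  # R^2 (the estimator's default score)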
Hope it helps! |
H: What does re.split(r'[_]' , i) do?
What does re.split(r'[_]' , i) do?
I have a function with the above code. Can someone please explain how the split occurs.
AI: This is more of a programming question than a data science question and is therefore better suited for the stackoverflow stackexchange page, but the provided code simply splits a string using a regex pattern ([_]). The regex pattern used simply finds any underscores (_) in the string, as a result the code simply splits the string on underscore characters:
import re
text = "test_string"
print(re.split(r'[_]' , text))
# ['test', 'string'] |
H: My custom stop-words list using tf-idf
I want to make my own stop-words list; I have computed tf-idf scores for my terms.
Can I consider those words highlighted in red to be stop words? And what should my threshold be for stop words based on tf-idf? Should I consider the high values of tf-idf as the most important words that I need to keep?
@Erwan answered this question; check their answer and the question they linked to, it is very informative.
AI: There's no standard definition of stop-word, but in general stop words are very frequent words which don't contribute to the meaning of the text, like determiners, pronouns, etc. Importantly stop-word is a property which applies to unique words in the vocabulary. For example if the word $w$ is considered as a stop-word then this applies to all the occurrences of $w$ in the text, not only to some of them.
On the contrary TFIDF applies to the words in the sentences/documents, so the same word $w$ may have a different TFIDF value in different sentences/documents:
IDF is a property at the vocabulary level, i.e. all the occurrences of $w$ have the same IDF.
TF is specific to the sentence/document. If $w$ appears 3 times more often in document A than in document B, then it has 3 times higher TFIDF value in A than in B.
This is why it doesn't really make sense to use the TFIDF value to select stop-words: TFIDF is specific to a sentence/document, whereas being a stop-word is not. You could use the IDF part only, but there's no difference with just using the document frequency, and practically it would give the same results as using the overall frequency.
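To illustrate the last point, here is a small sketch (the corpus and the threshold are made up for illustration) that picks stop-word candidates directly from document frequency:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
docs = ["the cat sat on the mat",
        "the dog ate the food",
        "a cat and a dog ran to the park"]  # toy corpus
vec = CountVectorizer()
X = vec.fit_transform(docs)
# document frequency: in how many documents each term occurs at least once
doc_freq = np.asarray((X > 0).sum(axis=0)).ravel()
terms = np.array(vec.get_feature_names_out())
# candidate stop words: terms appearing in (almost) every document
candidates = terms[doc_freq >= 0.8 * len(docs)]
print(candidates)  # here only "the" clears the threshold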
H: How do I fine-tune model performance after the initial run? (Scikit-Learn)
I've just started learning regression using scikit-learn and stumbled upon a problem. For a given dataset, let's say that I've imputed the missing data and one-hot encoded all categorical features. This point is where it starts getting confusing for me.
After hot-encoding categorical features, I usually end up with a lot of columns. How do I know that all of these columns benefit the model's performance? If not, how can I determine which columns/features to keep? Is there a method of determining the importance of these columns (their 'influence' to the model, perhaps?) or is it more of a trial and error situation?
While I understand that modeling is an iterative process, where even after the initial data analysis and modeling, the results from that first model must be used to improve the model by 'fine-tuning' the hyperparameters or data accordingly. However, I have no intuition/idea on what to do after the first model fitting. Ideally, how should one approach fine-tuning model parameters/ data configurations based on the model's initial run?
I would greatly appreciate some help.
AI: Since you added that you use linear regression, few ideas (but still a very broad question):
How do I know that all of these columns benefit the model's
performance? [...] how can I determine which columns/features to
keep?
Have a look at Introduction to Statistical Learning (ISL) (Chapter 6.1). You can use stepwise (feature) selection for a start. The book comes with "R-Labs", so you can directly see how it is done (also available for Python).
Is there a method of determining the importance of these columns
Yes, use "shrinkage" of (standardized) features to see which features have a "strong" impact. This is Chapter 6.2 in ISL.
However, I have no intuition/idea on what to do after the first model
fitting.
In linear models there is not much to be tuned. You can do feature selection / feature engineering / feature generation, but apart from that there are no hyperparameters to be tuned. However, if you are up for a predictive model, make sure you have a proper test strategy. This is explained in Ch. 5 of ISL ("Resampling Methods").
The remaining chapters (i.e. Ch. 7, 8) give a good overview of what you can do if you want to go beyond purely linear models. When you face strongly non-linear data, you may look at "generalized additive models" (GAM, Ch. 7 in ISL). Often Random Forest is a good choice as well when the parameterization of the data is unclear (Ch. 8 in ISL).
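To make the shrinkage point above concrete, here is a small sketch on synthetic data (the dataset and the alpha value are made up for illustration):
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
# 10 features, only 3 of which actually carry signal
X, y = make_regression(n_samples=200, n_features=10, n_informative=3, random_state=0)
model = make_pipeline(StandardScaler(), Lasso(alpha=1.0))
model.fit(X, y)
# coefficients shrunk to (near) zero indicate features with little impact
print(model.named_steps["lasso"].coef_)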
H: How to use a model after cross_validation in predicting a test data?
I want to do the following:
train a model using cross-validation
use the model for prediction (test dataset)
check the algorithmic bias towards some feature values
I wonder if what I am doing is right, or if there is another way. Also, I have some features that have many values. Is there a better way to split the test data into subsets based on those values (like the split into male and female below)?
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier as RFC
from sklearn.metrics import f1_score
include_fesatures=all features except demographics like gender and region
X=full_DF[include_fesatures]# include_feature are numeric here
y=X.pop('target')
X, X_test, y, y_test = train_test_split(X, y, test_size=0.2,random_state=1,stratify=y)
skf = StratifiedKFold(n_splits=10)
clf = RFC()
c=0
for train_index, test_index in skf.split(X,y):
Xtr,Xte,ytr,yte=X.iloc[train_index],X.iloc[test_index],y.iloc[train_index],y.iloc[test_index]
clf.fit(Xtr,ytr)
y_predict = clf.predict(Xte)
acc = f1_score(yte, y_predict)#accuracy_score(yte, y_predict)
c= c+ acc
print ('Accuracy:', float(c)/10)
indexes=X_test.index
mixed_df=full_DF[full_DF.index.isin(indexes)]
mdf_idx=mixed_df[mixed_df['gender']=='M'].index
fdf_idx=mixed_df[mixed_df['gender']=='F'].index
mX,my=X_test[X_test.index.isin(mdf_idx)],y_test[y_test.index.isin(mdf_idx)]
fX,fy=X_test[X_test.index.isin(fdf_idx)],y_test[y_test.index.isin(fdf_idx)]
print(f1_score(y_test, clf.predict(X_test)))
print(f1_score(my, clf.predict(mX)))
print(f1_score(fy, clf.predict(fX)))
AI: Cross validation generally is used to assess model performance. Usually, you will train the model on some part of the data (e.g. 4/5 in 5-fold CV) and test on the remaining part (1/5). This process is repeated for all folds. Also see Introduction to Statistical Learning (Chapter 5.1) for a good overview.
What you miss in your approach is hyperparameter tuning. This is important for many models. So far you only train with default parameters. One way of doing this (just an example, see sklearn docs for details) could be:
Do test/train split
Tune using 5-fold cv on train data
Fit the final model with the "best" parameters on the train data
Test overall performance on test data
Minimal example:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_halving_search_cv # noqa
from sklearn.model_selection import HalvingGridSearchCV
from sklearn.metrics import accuracy_score
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
param_grid = {'max_depth': [10,50,100],
'min_samples_split': [2, 5, 10, 20]}
base_estimator = RandomForestClassifier(random_state=0)
sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5,
factor=2, resource='n_estimators',
max_resources=30).fit(X_train, y_train)
print(sh.best_estimator_)
print("CV Score",sh.best_score_)
clf = RandomForestClassifier(max_depth=50, min_samples_split=20, n_estimators=24, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Predicted Acc." , round(accuracy_score(y_test, y_pred),2)) |
H: Scikit learn ComplementNB is outputting NaN for scores
I have an unbalanced binary dataset with 23 features, 92000 rows are labeled 0, and 207,000 rows are labeled 1.
I trained models on this dataset such as GaussianNB, DecisionTreeClassifier, and a few more classifiers from scikit learn, and they all work fine.
I want to run ComplementNB on this dataset, but when i do so, all the scores are coming out as NaN.
Below is my code:
from sklearn.naive_bayes import ComplementNB
features = [
# Chest accelerometer sensor
'chest_accel_x', 'chest_accel_y', 'chest_accel_z',
# ECG (2 leads)
'ecg_1', 'ecg_2',
# Left ankle sensors
'left_accel_x', 'left_accel_y', 'left_accel_z',
'left_gyro_x', 'left_gyro_y', 'left_gyro_z',
'left_mag_x', 'left_mag_y', 'left_mag_z',
# Right lower arm sensors
'right_accel_x', 'right_accel_y', 'right_accel_z',
'right_gyro_x', 'right_gyro_y', 'right_gyro_z',
'right_mag_x', 'right_mag_y', 'right_mag_z',
]
df = pd.read_csv('mhealth_s_m.csv')
X = df[features]
y = df['label']
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size = 0.2, random_state = 69)
def K_fold_unbalanced(train_X, train_y):
scoring = ['accuracy', 'f1', 'precision', 'recall', 'roc_auc']
print('Unbalanced Data')
model = ComplementNB()
start_time = time.time()
scores = cross_validate(model, train_X, train_y, scoring = scoring, cv = 5, return_train_score = True)
print(scores)
print('Took', time.time() - start_time, 'to run')
print('=======================================')
K_fold_unbalanced(train_X, train_y)
output is:
train accuracy nan
train f1 nan
train precision nan
train recall nan
train roc auc nan
test accuracy nan
test f1 nan
test precision nan
test recall nan
test roc auc nan
Took 0.12271976470947266 to run
Any ideas why all the values are NaN? My data can be found here
AI: The issue is that you have negative values in your input features $X$
Try using a scaler like MinMaxScaler prior your fit.
from sklearn.naive_bayes import ComplementNB
from sklearn.model_selection import cross_validate, train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
import pandas as pd
import time
df = pd.read_csv('mhealth_s_m.csv')
X = df[features]  # `features` is the column list defined in the question
y = df['label']
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size = 0.2, random_state = 69)
def K_fold_unbalanced(train_X, train_y):
scoring = ['accuracy', 'f1', 'precision', 'recall', 'roc_auc']
print('Unbalanced Data')
# model = ComplementNB()
model = Pipeline([("scaler",MinMaxScaler()), ("clf",ComplementNB())])
start_time = time.time()
scores = cross_validate(model, train_X, train_y, scoring = scoring, cv = 5, return_train_score = True)
print(scores)
print('Took', time.time() - start_time, 'to run')
print('=======================================')
return scores
scores = K_fold_unbalanced(train_X, train_y) |
H: Recommendation for sklearn model persistent
I want to serve a sklearn model in server, any suggestions what is best format/method to save sklearn models. Currently I am using pickle.dump method to save the model.
AI: According to the docs:
It is possible to save a model in scikit-learn by using Python’s
built-in persistence model, namely pickle [...] In the specific case
of scikit-learn, it may be better to use joblib’s replacement of
pickle (dump & load), which is more efficient on objects that carry
large numpy arrays internally as is often the case for fitted
scikit-learn estimators, but can only pickle to the disk and not to a
string [...]
I usually use joblib as it is very practical.
from joblib import dump, load
dump(clf, 'filename.joblib')
clf = load('filename.joblib')
As of version 1.0.2, there is an additional remark on "interoperable formats":
For reproducibility and quality control needs, when different
architectures and environments should be taken into account, exporting
the model in Open Neural Network Exchange format or Predictive Model
Markup Language (PMML) format might be a better approach than using
pickle alone. These are helpful where you may want to use your model
for prediction in a different environment from where the model was
trained [...]
So in case you want to be flexible, you should look into ONNX or PMML. |
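For the ONNX route, a rough sketch (this assumes the third-party skl2onnx package is installed; its API may differ between versions, so treat this as illustrative):
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import to_onnx
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)
# convert using a sample of the input data to infer the input signature
onx = to_onnx(clf, X[:1].astype(np.float32))
with open("model.onnx", "wb") as f:
    f.write(onx.SerializeToString())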
H: Levenshtein distance vs simple for loop
I have recently begun studying different data science principles, and have had a particular interest as of late in fuzzy matching. As a preface, I'd like to include smarter fuzzy searching in a proprietary language named "4D" at my workplace, so access to libraries is pretty much non-existent. It's also worth noting that the client side is currently single-threaded, so taking advantage of multi-threaded matrix manipulations is out of the question.
I began studying the Levenshtein algorithm and got it implemented, but it's well known for its slowness. Further, it builds a matrix by looping through both words, resulting in O(mn) time complexity.
That got me thinking: what's the advantage of the Levenshtein algorithm over a single for loop that checks the character at every index, compares, and throws back a -1 if incorrect (along with checking string lengths, etc.), giving O(n) time complexity? Or does Levenshtein do extra checks that I'm unaware of?
AI: A simple example shows the difference: let's compare the strings hello! and hhello!:
The Levenshtein distance finds that the 2nd character doesn't match, but also that the following ones can be aligned (in other words it finds one insertion). It returns a score corresponding to 1 edit out of 6 characters.
The "single loop" described in the question finds that the first character matches and every other doesn't. It returns a score corresponding to 5 edits out of 6 characters.
The Levenshtein distance is an alignment algorithm. Without the double loop there's no alignment, so only the characters at the same position get compared. |
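To make the difference concrete, here is a small sketch comparing a naive Levenshtein implementation with the positional loop (illustrative only, not optimized):
def levenshtein(a, b):
    # classic O(m*n) dynamic-programming edit distance
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]
def positional_mismatches(a, b):
    # the "single loop": counts per-index mismatches, no alignment
    return sum(c1 != c2 for c1, c2 in zip(a, b)) + abs(len(a) - len(b))
print(levenshtein("hello!", "hhello!"))            # 1 (one insertion)
print(positional_mismatches("hello!", "hhello!"))  # 5 (positions shift, most no longer match)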
H: How does SMOTE work for dataset with only categorical variables?
I have a small dataset of 977 rows with a class proportion of 77:23.
For the sake of metrics improvement, I have kept my minority class ('default') as class 1 (and 'not default' as class 0).
My input variables are categorical in nature. So, the below is what I tried. Let's assume we don't have age and salary info
a) Apply encoding like rare_encoding and ordinal_encoding to my dataset
b) Split into train and test split (with stratify = y)
c) Apply SMOTE to resample the training data only.
However, my question is about how SMOTE works/resamples when there are only categorical variables, like below:
gender degree occupation Country status
MALE BE ENGINEER USA default
MALE ME RESEARCHER UK default
FEMALE BSc Admin staff NZ default
FEMALE MS Scientist sweden default
Now if my objective is to oversample the minority class using SMOTE, how will the above sample look? Will they just randomly populate/shuffle gender, degree, occupation and country in different permutations and combinations?
Is there any simple explanation or tutorial that you can share for someone who would like to apply this technique?
My objective is to understand how SMOTE works for a dataset with only categorical variables.
AI: SMOTE itself wouldn't work well for a categorical only feature-set for a few reasons:
It works by interpolating between different points. Depending on how the data is encoded, you might end up with some undefined class (when using one-hot encoding, you might end up with a point that is half of one category and half of another), or you might end up with a valid category that doesn't make any sense from an interpolation point of view (for example, if you encode the country on a numerical scale like 1 -> US, 2 -> UK, 3 -> NZ, it doesn't make much sense to interpolate between US and NZ and end up at UK).
SMOTE uses k-nearest neighbours to select the points to interpolate between. If you encode your categorical features using one-hot encoding, you typically end up with a lot of sparse dimensions (dimensions in which most points take only the value 0). Euclidean nearest-neighbour search typically won't perform very well in such a space, and points that are nearby in this space might not look a lot like each other.
What you can do is use a modification of the SMOTE algorithm, called SMOTE-N (see https://imbalanced-learn.org/dev/over_sampling.html#smote-variants), which works when all features are categorical.
This modifies the SMOTE algorithm to
Use a different interpolation method: selects the most common class of the nearest neighbors
Use a different distance metric (Value Difference Metric) instead of Euclidean distance in the encoded space.
In that link this method is attributed to the original SMOTE paper (https://www3.nd.edu/~dial/publications/chawla2002smote.pdf) where it's found in Section 6.2. There is also SMOTE-NC which is a combination of SMOTE and SMOTE-N for data which has both numerical and categorical features.
For your example, let's say for some reason 3 of the points given:
MALE ME RESEARCHER UK default
FEMALE BSc Admin staff NZ default
FEMALE MS Scientist sweden default
are considered nearby each other and are used for interpolation. Then a possible added point by SMOTE-N would be:
FEMALE (because that's the majority class)
MS (all 3 values have equal frequency, so one is picked at random)
RESEARCHER (idem to above)
NZ (idem to above)
default (majority class) |
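If you want to try this in code, here is a minimal sketch using imbalanced-learn's SMOTEN (this assumes a recent imbalanced-learn version; the toy data, class labels and the ordinal-encoding step are made up for illustration):
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from imblearn.over_sampling import SMOTEN
# toy categorical-only data
X = pd.DataFrame({
    "gender": ["MALE", "MALE", "FEMALE", "FEMALE", "MALE", "FEMALE", "MALE", "MALE"],
    "degree": ["BE", "ME", "BSc", "MS", "BE", "BSc", "ME", "BE"],
    "country": ["USA", "UK", "NZ", "sweden", "USA", "NZ", "UK", "USA"],
})
y = ["default", "default", "default", "not default",
     "not default", "not default", "not default", "not default"]
# encode categories as integers; SMOTEN treats them as nominal, not ordinal
enc = OrdinalEncoder()
X_enc = enc.fit_transform(X)
X_res, y_res = SMOTEN(random_state=0, k_neighbors=2).fit_resample(X_enc, y)
print(enc.inverse_transform(X_res))  # synthetic rows reuse existing category values
print(pd.Series(y_res).value_counts())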