H: Normalising Image Data
Hi, I am wondering: when it comes to normalising images across each of the channels, do you use the same scaling factors for the test set that were used for training, or separate ones?
In traditional ML problems using scikit-learn, the usual procedure is to normalise the training data and apply the same scaler to the testing data:
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle = True)
scaler = MinMaxScaler()
X_train_norm = scaler.fit_transform(X_train)
X_test_norm = scaler.transform(X_test)
However, when using deep learning, I am wondering whether the same procedure is used for image data.
For example
import torch
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
# Resize images to (3,150,150) and convert to torch tensors; ToTensor already scales values from 0-255 to 0-1, so there is no need to divide by 255 manually
train_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
])
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
])
train_datasets = ImageFolder(root = "data/dogs-vs-cats/concise_dataset/train", transform = train_transforms)
test_datasets = ImageFolder(root = "data/dogs-vs-cats/concise_dataset/test", transform = test_transforms)
# just a function to get mean of the mean and std for each channel across the entire train and test sets separately
def get_mean_and_std(dataset):
    mean_values = torch.zeros(len(dataset), 3)
    std_values = torch.zeros(len(dataset), 3)
    for idx, (img, lab) in enumerate(dataset):
        mean_values[idx, :] = img.mean(dim=[1, 2])
        std_values[idx, :] = img.std(dim=[1, 2])
    print(f"mean of entire dataset : {mean_values.mean(dim=0)}")
    print(f"std of entire dataset : {std_values.mean(dim=0)}")
get_mean_and_std(train_datasets)
# mean of entire dataset : tensor([0.4854, 0.4515, 0.4143])
# std of entire dataset : tensor([0.2233, 0.2178, 0.2185])
get_mean_and_std(test_datasets)
# mean of entire dataset : tensor([0.4902, 0.4571, 0.4188])
# std of entire dataset : tensor([0.2257, 0.2203, 0.2207])
Now apply these means and standard deviations separately for training and testing data.
train_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4854, 0.4515, 0.4143], std = [0.2233, 0.2178, 0.2185])
])
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4902, 0.4571, 0.4188], std = [0.2257, 0.2203, 0.2207])
])
or should I apply the same mean and std of train set to the test set?
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4854, 0.4515, 0.4143], std = [0.2233, 0.2178, 0.2185])
])
AI: The method is the same as it is for traditional ML problems, i.e. you need to apply the same mean and standard deviation to the test data as you do for the training data. The mean and standard deviation used are derived from the training data, but depending on the type of problem and data used you can also use the values derived from the ImageNet dataset. |
H: Can GridSearchCV be used for unsupervised learning?
I'm trying to build an outlier detector to find outliers in test data. That data varies a bit (more test channels, longer/shorter testing).
First I'm applying the train test split because I want to use grid search for hyperparameter tuning. This is time series data from multiple sensors and I removed the time column beforehand.
X shape : (25433, 17)
y shape : (25433, 1)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=(0))
I standardize afterwards and then change the arrays into int arrays, because GridSearch doesn't seem to like continuous data. This surely can be done better, but I want this to work before I optimize the coding.
'X'
mean = StandardScaler().fit(X_train)
X_train = mean.transform(X_train)
X_test = mean.transform(X_test)
X_train = np.round(X_train,2)*100
X_train = X_train.astype(int)
X_test = np.round(X_test,2)*100
X_test = X_test.astype(int)
'y'
yeah = StandardScaler().fit(y_train)
y_train = yeah.transform(y_train)
y_test = yeah.transform(y_test)
y_train = np.round(y_train,2)*100
y_train = y_train.astype(int)
y_test = np.round(y_test,2)*100
y_test = y_test.astype(int)
I chose the IForest because it's fast, has pretty good results and can handle huge datasets (I currently only use a chunk of the data for testing). Setting up the GridSearchCV:
clf = IForest(random_state=47, behaviour='new',
n_jobs=-1)
param_grid = {'n_estimators': [20,40,70,100],
'max_samples': [10,20,40,60],
'contamination': [0.1, 0.01, 0.001],
'max_features': [5,15,30],
'bootstrap': [True, False]}
fbeta = make_scorer(fbeta_score,
average = 'micro',
needs_proba=True,
beta=1)
grid_estimator = model_selection.GridSearchCV(clf,
param_grid,
scoring=fbeta,
cv=5,
n_jobs=-1,
return_train_score=True,
error_score='raise',
verbose=3)
grid_estimator.fit(X_train, y_train)
The Problem:
I can't fit the grid_estimator.
GridSearchCV needs a y argument; without y it gives me the "missing y_true" error.
What should be used as a target here? At the moment I just passed an important data column to y for testing, but I'm getting this error that I don't understand:
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput
targets
I also got the advice that I need a scoring function and that the IForest doesn't have one.
I couldn't find useful information on this; are there any helpful guides or info that can help me?
AI: The goal of GridSearchCV is to iterate over (hence search) all possible combinations (hence grid) of hyperparameters and evaluate a model with cross-validation (hence CV). You do need some score to compare models with different sets of hyperparameters. If you can come up with some reasonable way to score a model after the fit, you can write a custom scoring function. If this scoring function does not require the target (y) to be computed, you can simply pass an array of zeros to GridSearchCV. An example of such a scorer is given here.
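For illustration, here is a minimal sketch of such a target-free scorer, using scikit-learn's IsolationForest and its score_samples method (the scoring heuristic itself and the variable X_train are just assumptions for the example, not a recommendation):
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import IsolationForest

def if_scorer(estimator, X, y=None):
    # Illustrative heuristic: mean anomaly score of the fitted forest on X
    # (higher, i.e. less negative, means the bulk of the data looks "normal")
    return np.mean(estimator.score_samples(X))

param_grid = {'n_estimators': [20, 40, 70, 100], 'max_samples': [10, 20, 40, 60]}
grid = GridSearchCV(IsolationForest(random_state=47), param_grid,
                    scoring=if_scorer, cv=5)
grid.fit(X_train, np.zeros(len(X_train)))  # dummy y, never used by the scorer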
Otherwise, if you use some supervised model on a filtered (by IsolationTrees) data, you can do that using Pipelines, and run GridSearchCV on that, see examples in sklearn docs:
from sklearn.pipeline import Pipeline
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
estimators = [('filter_data_it', IsolationForest()),
('clf', LogisticRegression())]
pipe = Pipeline(estimators)
param_grid = dict(filter_data_it__max_features=[5,15,30], clf__C=[0.1, 10])
grid_search = GridSearchCV(pipe, param_grid=param_grid)
Recall that when you use Pipelines you need to prefix the param_grid keys with the name of the pipeline step.
UPD1. As stated in the comments, IF doesn't have a transform method, thus simple chaining will not work. The way IF works is by predicting outliers, not by filtering the data (you are supposed to filter outliers afterwards). However, there is a way around this problem: we need to create a new class with a transform method, which will run IF and filter the data based on its predictions. I will update the code snippet.
It turns out there is no clear way to adapt the sklearn API for that purpose, as stated in these questions (1, 2); this answer also suggests a solution, however it is relatively complex. Thus, I suggest you proceed with the scorer example. |
H: Is it possible to train a RNN using multiple time series?
I have multiple time series (about 200) of soil moisture behavior after saturation in different soil types. They are all the same length and nearly the same shape, differing only in their ultimate value and rate of soil moisture decline due to the effects of different soil properties.
What I need is an RNN model that can predict the time series with only one sequence as input. This RNN must be able to detect, at least internally, which of the 200 training sequences the input sequence corresponds to and then predict the next values. Is something like this possible? What I tried was to concatenate all the time series into one and I trained an RNN with 3 layers and different numbers of hidden units, but I didn't get good results. Should I increase the complexity of the model or try a new approach?
AI: Yes, you can use a Multivariate RNN.
Multivariate RNN
In this architecture, multiple sequential features (i.e., a number of sequences) are used as input to your recurrent layers.
Taking pytorch as a reference, you can see that the input of LSTM object is a tensor of shape
$$input = (L, H_{in})$$ where $L$ is the length of your sequences whereas $H_{in}$ is the number of input features* (i.e., a number of sequences). I attach below a couple of resources in case they are helpful:
Keras implementation: https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/
Pytorch implementation: https://stackoverflow.com/a/56893248
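For example, a minimal PyTorch sketch of the shape convention described above (the dimensions are hypothetical, and unbatched (L, H_in) input requires a reasonably recent PyTorch version):
import torch
import torch.nn as nn

L, H_in, H_hidden = 100, 3, 32    # hypothetical: sequence length, number of input series, hidden size
lstm = nn.LSTM(input_size=H_in, hidden_size=H_hidden)

x = torch.randn(L, H_in)          # unbatched input of shape (L, H_in)
output, (h_n, c_n) = lstm(x)      # output has shape (L, H_hidden)
print(output.shape)               # torch.Size([100, 32])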
Hope it helps!
* Input can also have $(L, N, H_{in})$ for $N$ batches. |
H: What is the best approach to tackle stance prediction?
I am working on a task where I need to predict one of the following stances for a tweet: "In favor", "Against", "Neutral", "Not related", and "Yes if". I've been trying to use scikit-learn and transformers for classification, but both seem to produce quite poor results. The problem is that the categories are not usual categories, but rather the attitude of the writer toward a specific topic, which probably should be tackled differently. I think there should be something that works with stances, but I managed to find only sentiment analysis and topic modeling tutorials so far. Is there anything I can take a look at? Any links, models, and advice would be greatly appreciated!
AI: Perhaps you could fit two (or three) models. The first one asks the question: is the tweet "related" or not? Then you can fit to 4 ordinal classifications. Finally, if you wanted to get even fancier, you could add a model to analyze your Yes responses and determine whether they were a "Yes if" or not.
This is likely not optimal but just an approach.
The obvious alternative is to fit to 5 categorical levels with no sense of scale or order. Probably less efficient though. |
H: What can I do when my object detection model learns background images instead of the objects?
I'm training a machine learning model using YOLOv5 from Ultralytics (arch: YOLOv5s6).
The task is to detect and identify laundry symbols.
For that, I've scraped and labeled 600 images from Google.
Using this dataset, I receive a result with an mAP around 0.6.
But 600 images is a tiny dataset and there are multiple laundry symbols where I have only 1-4 images for training and symbols where I have 100 and more.
So I started writing a Python script which generates more images of laundry symbols.
The script basically takes a background image and adds randomly positioned 1-10 laundry symbols in different colors and rotations. No background is used twice.
With that script, I generated around 6,000 entirely different images, such that every laundry symbol appears at least 800 times in the dataset.
Here are examples of the generated data:
I combined the scraped and the generated dataset and retrained the model with the same configuration. The result is really bad: the mAP dropped to 0.15 and the model overfits. The confusion matrix told me why:
Why is the model learning the background instead of the objects?
First I thought my annotation might be wrong, but the training script from Ultralytics saves a few examples of training batch images - there the boxes are drawn perfectly around the generated symbols.
For completeness, more analytics about the training were attached as images: the label distributions, the training curves, and more examples from the dataset.
AI: I asked the same question on Reddit and got a few replies. The main reason my model is not performing on synthetic data is that YOLO looks at the whole picture and tries to learn the context, not only the patterns of the laundry symbols. The background is just too random for YOLO.
A Reddit user even created a video about this, explaining it using a card game: https://www.youtube.com/watch?v=auEvX0nO-kw
Referenced Reddit posts:
https://www.reddit.com/r/MachineLearning/comments/ydc9n1/p_object_detection_model_learns_backgrounds_and/
https://www.reddit.com/r/datascience/comments/ydbkaf/object_detection_model_learns_backgrounds_and_not/ |
H: Best approach for rule-based system in multilabel classification-problem?
I’m new to the world of NLP and am looking for some guidance. I want to create a rule-based system that “grades” text in accordance to some set of criteria. For example, one criteria could be “The author mentions that he/she wants money”, another “The author mentions working toward promotion”.
My initial idea was to use some available, open-source NLP-model, such as en_core_web_lg from the spaCy library. With such a model I could look at all verbs in a text, and classify texts as adhering to certain criteria when they have an appropriate verb with appropriate subject and object. I’ve read somewhere that exploiting the linguistic structure of sentences is a bad/unreliable way to go about things. The problem is that I don’t have any substantive data so as to allow supervised learning.
How does one typically go about creating a rule-based system for such a task? Is there any name for the problem I want to solve, maybe "Multi-label classification"? Any resources you could point me to?
Help a noob out!
I greatly appreciate it.
AI: I think the first step should be to define the task more formally: do you mean there should be a grade for every such criterion? Is the set of the criteria fixed or an input parameter?
From the point of view of people understanding what you want to do, you should also mention how long a text is, how many texts there are, and how many criteria. And if possible, add an actual example with its expected output.
The problem is that I don’t have any substantive data so as to allow supervised learning.
This is a serious issue not only because you can't do supervised learning, but also because you can't evaluate the system. Evaluation is a must: if you don't know how well your system works, you can't guarantee anything about its output... so it's pointless to use it.
You should probably manually annotate a sample yourself. It might feel boring but it's actually useful indirectly for you to design the task correctly, because it forces the annotator to think about the details.
I’ve read somewhere that exploiting the linguistic structure of sentences is a bad/unreliable way to go about things.
I don't know where you read that but this is wrong. This approach might not be optimal compared to modern ML methods, but it can be a perfectly decent method.
Is there any name for the problem I want to solve, maybe “Multi-label classification”?
“Multi-label classification” represents a broad type of tasks, and that's not even the correct type if you want to predict grades :) Grades are numerical values so yours would be a regression task, as opposed to classification.
Anyway this is not the name of a specific problem, and it's very unlikely that your problem has a standard name or method.
How does one typically go about creating a rule-based system for such a task?
That's the design part and it's not easy. You need to study examples, try to find the clues that a human would use to decide, then try to transform these clues into actionable rules.
To be honest, a rule-based system for some kind of highly semantic and interpretative task is unlikely to perform very well, but why not try. |
H: What is the next after finding best distribution for my data?
I found the distribution of my data with the "distfit" library for Python. But what now? The best distribution that describes my data is the Weibull distribution. But I don't know what I can do with this knowledge. Can someone help?
AI: Imho there's probably nothing to do with this information, especially considering only the technical side of it.
It's highly subjective, but I think that what people mean when they say to "know the distribution of your data" is that it is useful to have an intuitive understanding of what your data consists of: main stats, characteristics, how much variance, imbalance, important patterns between variables, etc. This information, put together with the expert knowledge related to the specific task at hand, would normally help an experienced data scientist decide the design of the system (what kind of algorithm, preprocessing, etc.).
But it's not a recipe: you can't expect to follow deterministic steps like with a manual. It's more an analysis depending on the context, the time one wants to spend, etc. My advice for improving the performance of any system: start by investigating a sample of the errors it makes. See if these errors are preventable (they might not be), and if yes, what prevents the system from finding the correct answer. |
H: Unbalanced Classification: What happens when many points of the bigger class are inside of the smaller class' area?
Let's assume we have an unbalanced dataset: 90% of the data belong to class A, 10% belong to class B. Furthermore, there are around as many points from class B inside class A's cluster. Someone with a lot of expertise told me that models will weight class A more in that area.
But as far as I know, models don't just automatically weight the classes. Am I wrong? How would different models behave and why?
AI: If we take a simple classification model like KNN, there are ways to handle this kind of imbalance in the data. Such issues are also commonly seen in real-world datasets.
In KNN we can use distance-based weights to help in predicting classes. Check out the weights parameter of KNN. By default the model uses uniform weights, but if you know you have an imbalance, use weights='distance'.
In tree-based classifiers you can see this as well. Check the class_weight parameter of DecisionTreeClassifier. By default it is None, i.e. all classes have the same weight.
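As a minimal sketch of those two options (the training data and fit calls are assumed, not shown):
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Distance-weighted KNN: closer neighbours count more than distant ones
knn = KNeighborsClassifier(n_neighbors=5, weights='distance')

# Decision tree with class weights inversely proportional to class frequencies
tree = DecisionTreeClassifier(class_weight='balanced')

# knn.fit(X_train, y_train) and tree.fit(X_train, y_train) would then be called on the (assumed) training data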
There are some other ways to deal with this issue,
Upsample the minority class
Downsample the majority class
Use SMOTE (it creates new data based on existing points); note that model training time is impacted. SMOTE should only be used on training data, never on testing data. |
H: Using Simple imputer replace NaN values with mean error
I am trying to replace 2 missing NaN values in data using the SimpleImputer.
I load my data as follow;
import pandas as pd
import numpy as np
df = pd.read_csv('country-income.csv', header=None)
df.head(20)
As we can see I have 2 NaN values which I am trying to replace with mean() values using SimpleImputer and I get the following error:
imputer = SimpleImputer(missing_values=np.nan, strategy='mean', fill_value=None)
imputer.fit(df)
Because I have some categorical data (hence the error), I tried to take only the numeric columns so I tried this method:
missing_vars_numeric = [var for var in df.columns
if df[var].isnull().mean() > 0 and df[var].dtype != "0"]
missing_vars_numeric
Output: [1,2]
But when I use `missing_vars_numeric` in the imputer I get the following error:
imputer = SimpleImputer(missing_values=np.nan, strategy='mean', fill_value=None)
imputer.fit(missing_vars_numeric)
ValueError: Expected 2D array, got 1D array instead:
array=[1. 2.].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
I also tried using astype() and It did not work for me. What am I missing?
Sample of the DataFrame
df = pd.DataFrame({'0': ['Region', 'India','Brazil', 'USA','Brazil','USA','India','Brazil','India','USA','India'],
'1': ['Age', '49', '32', '35','43','45','40','NaN','53','55','42'],
'2': ['Income', 86400, 57600, 64800,73200,'NaN',69600,62400,94800,99600,80400],
'3': ['Online Shopper','No','Yes',' No',' No','Yes','Yes','No','Yes','No','Yes']},
index=['0', '1', '2', '3','4','5','6','7','8','9','10'])
AI: One-liner
df.fillna(df.select_dtypes(np.number).mean(), inplace=True)
df.select_dtypes(np.number) selects only the numeric columns of the dataframe
.mean() computes the mean of each column, returning a new dataframe
df.fillna() accepts a dataframe (or other forms) to impute NaNs in named columns
inplace just means it happens in the original dataframe itself, without making a copy
You can probably use this to accomplish many more variations of imputation - replacing .mean() with whatever you need.
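For instance, the same pattern with median imputation (only the aggregation is swapped):
df.fillna(df.select_dtypes(np.number).median(), inplace=True)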
Update with df from OP
The example dataframe you provided has columns with mixtures of data types. Every column contains strings (e.g. '49' is a string). Only column 2 contains integer types.
When a pandas column contains strings, the column's dtype becomes object. This type is not part of np.number, meaning you cannot select any columns with the method in my one-liner solution above.
Note: OP originally showed a CSV being loaded, which pandas likely loaded into the correct data types. The example snippet for df = pd.DataFrame(...) gives nearly all values as strings. There is a difference then between the original question and the updated snippet.
Solution
I will walk you through the steps required based on your example snippet.
In general, you need to ensure that all your column types are correct. E.g. the Age column should have type int, whereas Region is str. You need to convert your column types.
In [1]: import pandas as pd, numpy as np
In [2]: df = pd.DataFrame({'0': ['Region', 'India','Brazil', 'USA','Brazil','USA','India','Brazil','India','USA','India'],
...: '1': ['Age', '49', '32', '35','43','45','40','NaN','53','55','42'],
...: '2': ['Income', 86400, 57600, 64800,73200,'NaN',69600,62400,94800,99600,80400],
...: '3': ['Online Shopper','No','Yes',' No',' No','Yes','Yes','No','Yes','No','Yes']},
...: index=['0', '1', '2', '3','4','5','6','7','8','9','10'])
...:
In [3]: df
Out[3]:
0 1 2 3
0 Region Age Income Online Shopper
1 India 49 86400 No
2 Brazil 32 57600 Yes
3 USA 35 64800 No
4 Brazil 43 73200 No
5 USA 45 NaN Yes
6 India 40 69600 Yes
7 Brazil NaN 62400 No
8 India 53 94800 Yes
9 USA 55 99600 No
10 India 42 80400 Yes
In [4]: df.dtypes
Out[4]:
0 object
1 object
2 object
3 object
dtype: object
The column names are stored as the first row, so make them the actual column names and remove that first row:
In [5]: column_names = df.iloc[0].tolist()
In [6]: df = df.iloc[1:]
In [7]: df.columns = column_names
The missing NaN values are stored as strings; replace them with numpy.nan:
In [8]: df[df == "NaN"] = np.nan
Convert the types of all columns, using a column-name-to-type mapping instead of all object. Note that np.nan is actually a float, so we can't use int:
In [9]: df = df.astype({"Region": str, "Age": float, "Income": float, "Online Shopper": bool})
In [10]: df
Out[10]:
Region Age Income Online Shopper
1 India 49.0 86400.0 True
2 Brazil 32.0 57600.0 True
3 USA 35.0 64800.0 True
4 Brazil 43.0 73200.0 True
5 USA 45.0 NaN True
6 India 40.0 69600.0 True
7 Brazil NaN 62400.0 True
8 India 53.0 94800.0 True
9 USA 55.0 99600.0 True
10 India 42.0 80400.0 True
In [11]: df.dtypes
Out[11]:
Region object
Age float64
Income float64
Online Shopper bool
dtype: object
The one-liner solution now works:
In [12]: imputed_df = df.fillna(df.select_dtypes(np.number).mean())
In [13]: imputed_df
Out[13]:
Region Age Income Online Shopper
1 India 49.000000 86400.000000 True
2 Brazil 32.000000 57600.000000 True
3 USA 35.000000 64800.000000 True
4 Brazil 43.000000 73200.000000 True
5 USA 45.000000 76533.333333 True
6 India 40.000000 69600.000000 True
7 Brazil 43.777778 62400.000000 True
8 India 53.000000 94800.000000 True
9 USA 55.000000 99600.000000 True
10 India 42.000000 80400.000000 True
You might want to convert the columns to new types, e.g. making Age of type int. I will leave this as an exercise for you. I think this shows you many of the tools you will need to work it out. |
H: How does this way of creating random data work?
I need to create random data using these lines:
n_samples = 3000
X = np.concatenate((
np.random.normal((-2, -2), size=(n_samples, 2)),
np.random.normal((2, 2), size=(n_samples, 2))
))
but I didn't get the difference between the two random lines here. I understand that this is used to concatenate two sets of random numbers to create 2 clusters, but why does one of them use (-2,-2) and the other (2,2)? And is the 2 in size there because concatenate is used to merge 2 groups of random data, or not?
AI: Providing multiple values to either the loc or scale arguments can be used to generate multiple random distributions at once with different parameters. In the code you provided the values for the loc argument are the same, meaning that you could also just use the value -2 instead of (-2, -2). You can see this when fixing the seed and generating new numbers
import numpy as np
np.random.seed(0)
print(np.random.normal((-2, -2), size=(5,2)))
# [[-0.23594765 -1.59984279]
# [-1.02126202 0.2408932 ]
# [-0.13244201 -2.97727788]
# [-1.04991158 -2.15135721]
# [-2.10321885 -1.5894015 ]]
np.random.seed(0)
print(np.random.normal(-2, size=(5,2)))
# [[-0.23594765 -1.59984279]
# [-1.02126202 0.2408932 ]
# [-0.13244201 -2.97727788]
# [-1.04991158 -2.15135721]
# [-2.10321885 -1.5894015 ]]
The difference between the two lines is that one is generating random noise from a normal (Gaussian) distribution with a mean of -2 and the other from a mean of 2; see also the loc keyword in the documentation.
H: What advantages does Data Visualization have in EDA?
It is not clear to me what advantage data visualization provides in EDA. By advantage I mean: what decision will I make according to one or the other visualization?
Could someone give me an example where data visualization makes me decide for one or the other algorithm?
E.g., from the book "Introduction to ML with Python":
Visualising datasets before fitting any models can be extremely useful. It allows us to see obvious patterns and relationships, and may suggest a sensible form of analysis. With multivariate data, finding the right kind of plot is not always simple, and many different approaches have been proposed.
How does whether I have seen this visualization or not change the way to proceed?
AI: First, visualization is just an easy and intuitive way to understand underlying patterns in your data. Everything that you can achieve through this, can also be achieved through painstakingly printing different values and statistics.
I will just mention two simple examples of algorithms chosen because of patterns in the data. They are very simple, but they can be generalized.
Regression
If you find out that the data is linear, Linear Regression can be a good choice of algorithm
Classification
If the data are linearly separable, SVM is suitable
These are visualizations of the datapoints themselves, but other visualizations like histograms can help find underlying distributions too.
In addition, visualization can be useful in other parts of the process. For example, if you see a normal distribution, you can impute missing data using the mean value, while for a skewed distribution the median is more suitable. |
H: Flipping the labels in a binary classification gives different model and results
I have an imbalanced dataset and I want to train a binary classifier to model the dataset.
Here was my approach which resulted into (relatively) acceptable performance:
1- I made a random split to get train/test sets.
2- In the training set, I down-sampled the majority class to make my training set balanced. To do that, I used the resample method from the sklearn.utils module.
3- I trained the model, and then evaluated the performance of the model on the test set (which is unseen and still imbalanced).
I got fairly acceptable results including precision, recall, f1 score and AUC.
Afterwards, I wanted to try out something. Therefore, I flipped the labels in both training set and testing set (i.e. converting 1 to 0 and 0 to 1).
Then I repeated the step 3 and trained the model again with flipped labels. This time, the performance of model dropped and I got much lower precision and f1 score on test set.
Additional details:
The model was trained with GridSearchCV using a LogisticRegression estimator.
I then have two questions: is there anything wrong with my approach (i.e. the downsampling)?
And how come flipping the labels led to worse results?
I have a feeling that it could be due to the fact that my test set is still imbalanced. But more insight will be appreciated.
AI: First I'd like to say that you're asking the right questions, and doing an experiment like this is good way to understand how things work.
Your approach is not wrong by itself and the performance difference is not due to downsampling. Actually resampling rarely works well, it's a very simplistic approach to handle class imbalance, but that's a different topic.
The second question is more important, and it's about what precision and recall mean: these measures rely on which class is defined as the positive class, thus it is expected that flipping the label would change their value.
For example, let's assume a confusion matrix:
A B <- predicted as class
A 9 1
B 10 70
^
true class
Precision is the proportion of correct predictions (true positive, TP) among instances predicted as positive (TP+FP):
if A = positive: 9/(9+10) = 0.47
if B = positive: 70/(70+1) = 0.99
Recall would also be different. The logic of these measures is that the task is defined with a specific class as "main target", usually the minority class (A in this example). So flipping the labels is like changing the definition of the task, there's no reason that the performance has to be the same.
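As a small sketch of the same effect with scikit-learn (the labels below are chosen just to reproduce the confusion matrix above):
from sklearn.metrics import precision_score

y_true = ['A'] * 10 + ['B'] * 80
y_pred = ['A'] * 9 + ['B'] * 1 + ['A'] * 10 + ['B'] * 70

print(precision_score(y_true, y_pred, pos_label='A'))  # ~0.47
print(precision_score(y_true, y_pred, pos_label='B'))  # ~0.99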
Note: accuracy would have the same value no matter which class is positive, and this is why it's not recommended with an imbalanced dataset (it gives too much weight to the majority class). |
H: How to get ROC curves in a multi-label scenario
I have a data set with multi-labels. I am trying to generate the ROC curves. Unfortunately, I cannot use the code which I frequently used while doing binary classification. How should I modify the code in order to be able to get the ROC curves in a multi-label scenario? The error message says:
multiclass format is not supported
The code that I use is:
from sklearn import metrics
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
plt.figure()
models = [
{
'label': 'Logistic Regression',
#'model': LogisticRegression(),
'y_pred': predict_proba[:,1]
},
{
'label': 'SVM',
#'model': SVM(),
'y_pred': preds
},
{
'label': 'RandomForestClassifier',
#'model': RandomForestClassifier(),
'y_pred': Y_Pred_proba[:,1]
},
]
for m in models:
    print('LABEL:', m['label'])
    y_pred = m['y_pred']
    fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
    # fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label= 'ovr')
    auc = metrics.roc_auc_score(y_test, y_pred)
    # Now, plot the computed values
    plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], auc))
# Custom settings for the plot
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('1-Specificity(False Positive Rate)')
plt.ylabel('Sensitivity(True Positive Rate)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
AI: ROC is a way to evaluate how well a classifier can separate one class distribution from another in a given dataset. For a multiclass setting this is, by definition, not possible. What you can do is either treat this as a "One vs Rest" scenario, where you evaluate the performance of your classifier in separating one class from all the others combined, repeating this for every class, or treat this as a "One vs One" scenario where you compare every possible combination of two classes.
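For the "One vs Rest" case, a minimal scikit-learn sketch could look like this (it assumes y_test holds the class labels and y_score holds the per-class predicted probabilities, e.g. from predict_proba):
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

classes = np.unique(y_test)
y_test_bin = label_binarize(y_test, classes=classes)

for i, cls in enumerate(classes):
    # One-vs-rest ROC for class `cls`
    fpr, tpr, _ = roc_curve(y_test_bin[:, i], y_score[:, i])
    print(f"class {cls}: AUC = {auc(fpr, tpr):.2f}")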
You can find an example with illustrations here (not mine):
ROC Curve - Multiclass.ipynb |
H: ML for predictions that exist "in-between" classification targets
I have 5 broad metagenomic "ecoregion" categories (just think lots of DNA at different nice locations) which become the training targets for their complete (and augmented) metagenomic data. Any standard model works fine notably random forest, naive Bayes, SVM, confusion matrix is okay and ROC fine. These are small data sets of 10E5 to 10E6.
The categories are very broad and most predictions (metagenomic data from other "ecoregions") will fall between these categories. ML, in contrast, will 'relocate' the prediction into 1 of the 5 categories. So if I have a "woodland" category and a "lake" category, a marsh will fall "in between" the trained classifications, but ML will call it either a 'wood' or a 'lake'.
How do I attain that in-between classification status via ML?
AI: The task should be reframed as regression or ordinal classification. Using the ecosystem metaphor, a regression target could be the amount of ground covered in water or an ordinal classification target could be sequenced categories of landscapes. |
H: What does it mean when a model is more conservative?
I'm trying to tune some parameters in XGBoost and read a lot about "...makes the model more conservative". Can somebody explain to me what the word conservative means in this case?
I can imagine that it learns slower (more similar observations needed) and is less prone to overfitting? I'm not sure if my assumption is correct though.
Example from the docu:
eta [default=0.3, alias: learning_rate]
Step size shrinkage used in update to prevents overfitting. After each boosting step, we can directly get the weights of new features, and eta shrinks the feature weights to make the boosting process more conservative.
gamma [default=0, alias: min_split_loss]
Minimum loss reduction required to make a further partition on a leaf node of the tree. The larger gamma is, the more conservative the algorithm will be.
lambda [default=1, alias: reg_lambda]
L2 regularization term on weights. Increasing this value will make model more conservative.
and many more..
AI: As you suggest, and although the term conservative might be confusing, it means having a less complex model by reducing possible overfitting.
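Concretely, nudging these knobs toward more conservative values might look like the sketch below (the values are arbitrary illustrations, and dtrain is assumed to be an existing xgb.DMatrix):
import xgboost as xgb

params = {
    'eta': 0.05,     # smaller step size shrinkage (default 0.3)
    'gamma': 1.0,    # require a larger loss reduction to split (default 0)
    'lambda': 5.0,   # stronger L2 regularization on weights (default 1)
}
booster = xgb.train(params, dtrain)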
Parameters like lambda (L2 regularization) make clear that, in this context, conservativeness means having a model that, although it could be more adjusted to your training set (trying to capture all the possible information from your data and being more sensitive to the class of interest), gives you a more "balanced" fit with lower variance, so you have more confidence when applying it to new data in the inference phase and minimize the generalization error. A refresher on these concepts. |
H: What is the best way to determine if there is variable interactivity between independent parameters in a prediction model
OK, the best way to describe this is with an example. (admittedly simplified)
I want to predict the speed of drivers on a motorway and I have two input variables
the nationality of the driver
how heavy it is raining
Clearly, these 2 are independent of each other, so throwing this into a simple linear regression I get something like speed = intercept + X1[Nationality] + X2 * Rain_In_Inches + Error
So I may infer from this that British drivers go 7mph slower than Turkish drivers and speed decreases by 2mph for every inch of rain - so far so good
However, the effect of rain is applied across the whole population here; what I am trying to determine is how rain affects the speed of English drivers vs Turkish drivers. For example, I might expect that one is hardly affected and the other is affected a lot.
Is there a neat way to do this without individually building a model for each category? The above is simple but I want to do it with lots of categories and more parameters
I feel like I'm missing something but can't determine what.
Thanks
AI: You can add an interaction term to the linear regression model. An interaction term models the effect that one feature has at different levels of another feature.
If nationality is one-hot encoded, you will have to add a separate interaction term for each level of nationality. For example:
$$ Speed = \beta_0 + \beta_1 Rain + \beta_2 British + \beta_3 Turkish + \beta_4 (Rain \times British) + \beta_5 (Rain \times Turkish) + Error $$
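In Python, a hedged sketch of this with the statsmodels formula API (the column names and values are hypothetical; the `*` in the formula expands to main effects plus the interaction):
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame with columns 'speed', 'rain' and 'nationality'
df = pd.DataFrame({
    'speed':       [70, 65, 60, 55, 68, 58],
    'rain':        [0.0, 1.0, 2.0, 3.0, 0.5, 2.5],
    'nationality': ['British', 'British', 'British', 'Turkish', 'Turkish', 'Turkish'],
})

model = smf.ols('speed ~ rain * C(nationality)', data=df).fit()
print(model.params)  # a separate rain slope per nationality via the interaction coefficient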
Most statistical software programs can automatically create the one-hot encoding and interaction terms. |
H: Is there a hardware-independent standard for comparing ML models complexity?
Let us say I have two machine learning models on different machines and one on the cloud. Comparing them using elapsed execution times does not make sense since they run on different hardware.
Since all models are, at their core, equations, why is there no standard method to calculate the complexity of these models?
AI: Yes, you can use FLOPs to count the floating point operations and understand the CPU cycles required for the algorithms you are building.
In python you can do this with a package/module called pypapi
Here is a basic 'hello world' tutorial of pypapi
If you are working in PyTorch specifically, there is a valuable and simple tool, flopth, that shows you FLOPs for your convolutions. You can read about its implementation here. |
H: How do well informed labels for ordinal encoding improve model performance?
From Kaggle's intermediate machine learning tutorial, it was stated that
for each column, we randomly assign each unique value to a different integer. This is a common approach that is simpler than providing custom labels; however, we can expect an additional boost in performance if we provide better-informed labels for all ordinal variables.
Here's what I understood:
If I had a column named place with the unique values being [first, second, third], then I would get better performance by encoding those as [1,2,3] compared to [2,1,3]. Is my understanding correct? If so, how does this lead to better performance? Since the integers are just used as a numeric placeholder for the unique values, does the ordering even matter as long as those integers can uniquely identify each value?
AI: If I had a column named place with the unique values being [first, second, third], then I would get better performance by encoding those as [1,2,3] compared to [2,1,3]. Is my understanding correct?
Yes, your understanding is correct.
If so, how does this lead to better performance? Since the integers are just used as a numeric placeholder for the unique values, does the ordering even matter as long as those integers can uniquely identify each value?
There is a confusion between:
the actual meaning for a human of these integers in this particular case -> in this sense yes, they are just used as a numeric placeholder
how these values are interpreted by an ML algorithm, e.g. a classifier -> the ordering does matter for the algorithm; it considers these values as numerical and treats them like any other feature.
The second point implies that the algorithm may use a condition like v >= 2; this could cause overfitting if the ordering doesn't make any sense.
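For illustration, a minimal sketch with scikit-learn's OrdinalEncoder, where the well-informed order is specified explicitly (the column values are made up):
from sklearn.preprocessing import OrdinalEncoder
import numpy as np

X = np.array([['third'], ['first'], ['second']])

# Well-informed order: first < second < third -> 0, 1, 2
enc = OrdinalEncoder(categories=[['first', 'second', 'third']])
print(enc.fit_transform(X).ravel())  # [2. 0. 1.]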
For the record I don't understand the logic of this part of the tutorial: to me an ordinal variable has by definition an order which is assumed to be known, it doesn't make sense to use OrdinalEncoder on the wrong order. It looks like they are talking about using OrdinalEncoder on some categorical variable (sometimes done to avoid high number of features with one-hot encoding) or some variable of unknown type, maybe? |
H: Failed to convert a NumPy array to a Tensor (Unsupported object type float) in Python
I am trying to build an MLP with Keras and an error appears. I do not have experience with neural networks so it is difficult for me. When I run the code for the NN, after some time it says:
'Failed to convert a NumPy array to a Tensor (Unsupported object type float)
in Python'
The code I have, including the preprocess of the dataset, is the following:
import pandas as pd
from tensorflow.keras.utils import get_file
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 5)
dfs = []
for i in range(1, 5):
    path = './UNSW-NB15_{}.csv'  # There are 4 input csv files
    dfs.append(pd.read_csv(path.format(i), header=None))
all_data = pd.concat(dfs).reset_index(drop=True) # Concat all to a single df
# This csv file contains names of all the features
df_col = pd.read_csv('./NUSW-NB15_features.csv', encoding='ISO-8859-1')
# Making column names lower case, removing spaces
df_col['Name'] = df_col['Name'].apply(lambda x: x.strip().replace(' ', '').lower())
# Renaming our dataframe with proper column names
all_data.columns = df_col['Name']
# display 5 rows
pd.set_option('display.max_columns', 48)
pd.set_option('display.max_rows', 21)
all_data
all_data['attack_cat'] = all_data['attack_cat'].str.strip()
all_data['attack_cat'] = all_data['attack_cat'].replace(['Backdoors'], 'Backdoor')
all_data.groupby('attack_cat')['attack_cat'].count()
all_data["attack_cat"] = all_data["attack_cat"].fillna('Normal')
all_data.groupby('attack_cat')['attack_cat'].count()
all_data.drop(all_data[all_data['is_ftp_login'] >= 2.0].index, inplace = True)
all_data.drop(['srcip', 'sport', 'dstip', 'dsport'],axis=1, inplace=True)
df = pd.concat([all_data,pd.get_dummies(all_data['proto'],prefix='proto')],axis=1)
df.drop('proto',axis=1, inplace=True)
df_2 = pd.concat([df,pd.get_dummies(df['state'],prefix='state')],axis=1)
df_2.drop('state',axis=1, inplace=True)
df_encoded = pd.concat([df_2,pd.get_dummies(df_2['service'],prefix='service')],axis=1)
df_encoded.drop('service',axis=1, inplace=True)
df_encoded['ct_flw_http_mthd'] = df_encoded['ct_flw_http_mthd'].fillna(0)
df_encoded['is_ftp_login'] = df_encoded['is_ftp_login'].fillna(0)
df = pd.DataFrame(df_encoded)
temp_cols=df_encoded.columns.tolist()
index=df.columns.get_loc("attack_cat")
new_cols=temp_cols[0:index] + temp_cols[index+1:] + temp_cols[index:index+1]
df=df_encoded[new_cols]
df_encoded = df.drop('label', axis=1)
x_columns = df_encoded.columns.drop('attack_cat')
x = df_encoded[x_columns].values
dummies = pd.get_dummies(df['attack_cat'])
products = dummies.columns
y = dummies.values
import numpy as np
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from sklearn import metrics
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
model = Sequential()
model.add(Dense(10, input_dim= x.shape[1], activation= 'relu'))
model.add(Dense(9, activation= 'relu'))
model.add(Dense(9,activation= 'relu'))
model.add(Dense(y_train.shape[1],activation= 'softmax', kernel_initializer='normal'))
model.compile(loss= 'categorical_crossentropy', optimizer= 'adam', metrics= ['accuracy'])
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train,y_train,validation_data=(x_test,y_test),
callbacks=[monitor],verbose=2, epochs=1000)
pred = model.predict(x_test)
pred = np.argmax(pred,axis=1)
y_compare = np.argmax(y_test,axis=1)
score = metrics.accuracy_score(y_compare, pred)
print("Accuracy score: {}".format(score))
The dataset I'm using is the UNSW-NB15 (2+ million inputs).
The error appears after executing the last block of code (begins at import numpy as np)
Thanks for any tip that you can give me to solve the problem.
The error appearing after the update provided by Muhammad is the following:
ValueError Traceback (most recent call last)
Input In [13], in <cell line: 22>()
18 model.compile(loss= 'categorical_crossentropy', optimizer= 'adam', metrics= ['accuracy'])
19 monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
20 verbose=1, mode='auto', restore_best_weights=True)
---> 22 model.fit(tf.cast(x_train,dtype=tf.float32),y_train,validation_data=(x_test,y_test),
23 callbacks=[monitor],verbose=2, epochs=1000)
26 pred = model.predict(x_test)
27 pred = np.argmax(pred,axis=1)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\util\dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
209 # TypeError, when given unexpected types. So we need to catch both.
210 result = dispatch(wrapper, args, kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\math_ops.py:988, in cast(x, dtype, name)
982 x = ops.IndexedSlices(values_cast, x.indices, x.dense_shape)
983 else:
984 # TODO(josh11b): If x is not already a Tensor, we could return
985 # ops.convert_to_tensor(x, dtype=dtype, ...) here, but that
986 # allows some conversions that cast() can't do, e.g. casting numbers to
987 # strings.
--> 988 x = ops.convert_to_tensor(x, name="x")
989 if x.dtype.base_dtype != base_type:
990 x = gen_math_ops.cast(x, base_type, name=name)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\profiler\trace.py:163, in trace_wrapper.<locals>.inner_wrapper.<locals>.wrapped(*args, **kwargs)
161 with Trace(trace_name, **trace_kwargs):
162 return func(*args, **kwargs)
--> 163 return func(*args, **kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\ops.py:1566, in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1561 raise TypeError("convert_to_tensor did not convert to "
1562 "the preferred dtype: %s vs %s " %
1563 (ret.dtype.base_dtype, preferred_dtype.base_dtype))
1565 if ret is None:
-> 1566 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1568 if ret is NotImplemented:
1569 continue
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\tensor_conversion_registry.py:52, in _default_conversion_function(***failed resolving arguments***)
50 def _default_conversion_function(value, dtype, name, as_ref):
51 del as_ref # Unused.
---> 52 return constant_op.constant(value, dtype, name=name)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:271, in constant(value, dtype, shape, name)
174 @tf_export("constant", v1=[])
175 def constant(value, dtype=None, shape=None, name="Const"):
176 """Creates a constant tensor from a tensor-like object.
177
178 Note: All eager `tf.Tensor` values are immutable (in contrast to
(...)
269 ValueError: if called on a symbolic tensor.
270 """
--> 271 return _constant_impl(value, dtype, shape, name, verify_shape=False,
272 allow_broadcast=True)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:283, in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
281 with trace.Trace("tf.constant"):
282 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 283 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
285 g = ops.get_default_graph()
286 tensor_value = attr_value_pb2.AttrValue()
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:308, in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
306 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
307 """Creates a constant on the current device."""
--> 308 t = convert_to_eager_tensor(value, ctx, dtype)
309 if shape is None:
310 return t
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:106, in convert_to_eager_tensor(value, ctx, dtype)
104 dtype = dtypes.as_dtype(dtype).as_datatype_enum
105 ctx.ensure_initialized()
--> 106 return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).
Column types :
df_encoded.info(verbose=True)
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2539861 entries, 0 to 2540046
Data columns (total 205 columns):
# Column Dtype
--- ------ -----
0 dur float64
1 sbytes int64
2 dbytes int64
3 sttl int64
4 dttl int64
5 sloss int64
6 dloss int64
7 sload float64
8 dload float64
9 spkts int64
10 dpkts int64
11 swin int64
12 dwin int64
13 stcpb int64
14 dtcpb int64
15 smeansz int64
16 dmeansz int64
17 trans_depth int64
18 res_bdy_len int64
19 sjit float64
20 djit float64
21 stime int64
22 ltime int64
23 sintpkt float64
24 dintpkt float64
25 tcprtt float64
26 synack float64
27 ackdat float64
28 is_sm_ips_ports int64
29 ct_state_ttl int64
30 ct_flw_http_mthd float64
31 is_ftp_login float64
32 ct_ftp_cmd object
33 ct_srv_src int64
34 ct_srv_dst int64
35 ct_dst_ltm int64
36 ct_src_ltm int64
37 ct_src_dport_ltm int64
38 ct_dst_sport_ltm int64
39 ct_dst_src_ltm int64
40 proto_3pc uint8
41 proto_a/n uint8
42 proto_aes-sp3-d uint8
43 proto_any uint8
44 proto_argus uint8
45 proto_aris uint8
46 proto_arp uint8
47 proto_ax.25 uint8
48 proto_bbn-rcc uint8
49 proto_bna uint8
50 proto_br-sat-mon uint8
51 proto_cbt uint8
52 proto_cftp uint8
53 proto_chaos uint8
54 proto_compaq-peer uint8
55 proto_cphb uint8
56 proto_cpnx uint8
57 proto_crtp uint8
58 proto_crudp uint8
59 proto_dcn uint8
60 proto_ddp uint8
61 proto_ddx uint8
62 proto_dgp uint8
63 proto_egp uint8
64 proto_eigrp uint8
65 proto_emcon uint8
66 proto_encap uint8
67 proto_esp uint8
68 proto_etherip uint8
69 proto_fc uint8
70 proto_fire uint8
71 proto_ggp uint8
72 proto_gmtp uint8
73 proto_gre uint8
74 proto_hmp uint8
75 proto_i-nlsp uint8
76 proto_iatp uint8
77 proto_ib uint8
78 proto_icmp uint8
79 proto_idpr uint8
80 proto_idpr-cmtp uint8
81 proto_idrp uint8
82 proto_ifmp uint8
83 proto_igmp uint8
84 proto_igp uint8
85 proto_il uint8
86 proto_ip uint8
87 proto_ipcomp uint8
88 proto_ipcv uint8
89 proto_ipip uint8
90 proto_iplt uint8
91 proto_ipnip uint8
92 proto_ippc uint8
93 proto_ipv6 uint8
94 proto_ipv6-frag uint8
95 proto_ipv6-no uint8
96 proto_ipv6-opts uint8
97 proto_ipv6-route uint8
98 proto_ipx-n-ip uint8
99 proto_irtp uint8
100 proto_isis uint8
101 proto_iso-ip uint8
102 proto_iso-tp4 uint8
103 proto_kryptolan uint8
104 proto_l2tp uint8
105 proto_larp uint8
106 proto_leaf-1 uint8
107 proto_leaf-2 uint8
108 proto_merit-inp uint8
109 proto_mfe-nsp uint8
110 proto_mhrp uint8
111 proto_micp uint8
112 proto_mobile uint8
113 proto_mtp uint8
114 proto_mux uint8
115 proto_narp uint8
116 proto_netblt uint8
117 proto_nsfnet-igp uint8
118 proto_nvp uint8
119 proto_ospf uint8
120 proto_pgm uint8
121 proto_pim uint8
122 proto_pipe uint8
123 proto_pnni uint8
124 proto_pri-enc uint8
125 proto_prm uint8
126 proto_ptp uint8
127 proto_pup uint8
128 proto_pvp uint8
129 proto_qnx uint8
130 proto_rdp uint8
131 proto_rsvp uint8
132 proto_rtp uint8
133 proto_rvd uint8
134 proto_sat-expak uint8
135 proto_sat-mon uint8
136 proto_sccopmce uint8
137 proto_scps uint8
138 proto_sctp uint8
139 proto_sdrp uint8
140 proto_secure-vmtp uint8
141 proto_sep uint8
142 proto_skip uint8
143 proto_sm uint8
144 proto_smp uint8
145 proto_snp uint8
146 proto_sprite-rpc uint8
147 proto_sps uint8
148 proto_srp uint8
149 proto_st2 uint8
150 proto_stp uint8
151 proto_sun-nd uint8
152 proto_swipe uint8
153 proto_tcf uint8
154 proto_tcp uint8
155 proto_tlsp uint8
156 proto_tp++ uint8
157 proto_trunk-1 uint8
158 proto_trunk-2 uint8
159 proto_ttp uint8
160 proto_udp uint8
161 proto_udt uint8
162 proto_unas uint8
163 proto_uti uint8
164 proto_vines uint8
165 proto_visa uint8
166 proto_vmtp uint8
167 proto_vrrp uint8
168 proto_wb-expak uint8
169 proto_wb-mon uint8
170 proto_wsn uint8
171 proto_xnet uint8
172 proto_xns-idp uint8
173 proto_xtp uint8
174 proto_zero uint8
175 state_ACC uint8
176 state_CLO uint8
177 state_CON uint8
178 state_ECO uint8
179 state_ECR uint8
180 state_FIN uint8
181 state_INT uint8
182 state_MAS uint8
183 state_PAR uint8
184 state_REQ uint8
185 state_RST uint8
186 state_TST uint8
187 state_TXD uint8
188 state_URH uint8
189 state_URN uint8
190 state_no uint8
191 service_- uint8
192 service_dhcp uint8
193 service_dns uint8
194 service_ftp uint8
195 service_ftp-data uint8
196 service_http uint8
197 service_irc uint8
198 service_pop3 uint8
199 service_radius uint8
200 service_smtp uint8
201 service_snmp uint8
202 service_ssh uint8
203 service_ssl uint8
204 attack_cat object
dtypes: float64(12), int64(27), object(2), uint8(164)
memory usage: 1.2+ GB
New error after removing ct_ftp_cmd :
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 12>()
8 from sklearn import metrics
10 x_cast = tf.cast(x,dtype=tf.float32)
---> 12 x_train, x_test, y_train, y_test = train_test_split(
13 x_cast, y, test_size=0.25, random_state=42)
16 model = Sequential()
17 model.add(Dense(10, input_dim= x.shape[1], activation= 'relu'))
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\model_selection\_split.py:2443, in train_test_split(test_size, train_size, random_state, shuffle, stratify, *arrays)
2439 cv = CVClass(test_size=n_test, train_size=n_train, random_state=random_state)
2441 train, test = next(cv.split(X=arrays[0], y=stratify))
-> 2443 return list(
2444 chain.from_iterable(
2445 (_safe_indexing(a, train), _safe_indexing(a, test)) for a in arrays
2446 )
2447 )
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\model_selection\_split.py:2445, in <genexpr>(.0)
2439 cv = CVClass(test_size=n_test, train_size=n_train, random_state=random_state)
2441 train, test = next(cv.split(X=arrays[0], y=stratify))
2443 return list(
2444 chain.from_iterable(
-> 2445 (_safe_indexing(a, train), _safe_indexing(a, test)) for a in arrays
2446 )
2447 )
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\utils\__init__.py:378, in _safe_indexing(X, indices, axis)
376 return _pandas_indexing(X, indices, indices_dtype, axis=axis)
377 elif hasattr(X, "shape"):
--> 378 return _array_indexing(X, indices, indices_dtype, axis=axis)
379 else:
380 return _list_indexing(X, indices, indices_dtype)
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\utils\__init__.py:202, in _array_indexing(array, key, key_dtype, axis)
200 if isinstance(key, tuple):
201 key = list(key)
--> 202 return array[key] if axis == 0 else array[:, key]
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\util\dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
209 # TypeError, when given unexpected types. So we need to catch both.
210 result = dispatch(wrapper, args, kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\array_ops.py:1014, in _slice_helper(tensor, slice_spec, var)
1012 new_axis_mask |= (1 << index)
1013 else:
-> 1014 _check_index(s)
1015 begin.append(s)
1016 end.append(s + 1)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\array_ops.py:888, in _check_index(idx)
883 dtype = getattr(idx, "dtype", None)
884 if (dtype is None or dtypes.as_dtype(dtype) not in _SUPPORTED_SLICE_DTYPES or
885 idx.shape and len(idx.shape) == 1):
886 # TODO(slebedev): IndexError seems more appropriate here, but it
887 # will break `_slice_helper` contract.
--> 888 raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got array([ 214948, 2349007, 452929, ..., 2356330, 2229084, 2219110])
AI: Add tf.keras.regularizers.l1(0.1) to your Dense layers. This may increase the number of epochs you need; also try to run it in a Colab setup on the GPU. |
H: In neural networks, is applying dropout the same as zeroing random neurons?
Is applying dropout equivalent to zeroing the output of random neurons in each mini-batch iteration and leaving the rest of the forward and backward steps in back-propagation unchanged? I'm implementing the network from scratch in numpy.
AI: Indeed. To be precise, the dropout operation will randomly zero some of the input tensor elements with probability $p$, and furthermore the rest of the non-dropped out outputs are scaled by a factor of $\frac{1}{1-p}$ during training.
For example, see how elements of each tensor in the input (top tensor in output) are zeroed in the output tensor (bottom tensor in output) using pytorch.
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
input = torch.randn(3, 4)
output = m(input)
print(input, '\n', output)
>>> tensor([[-0.9698, -0.9397, 1.0711, -1.4557],
>>> [-0.0249, -0.9614, -0.7848, -0.8345],
>>> [ 0.9420, 0.6565, 0.4437, -0.2312]])
>>> tensor([[-0.0000, -0.0000, 2.1423, -0.0000],
>>> [-0.0000, -0.0000, -1.5695, -1.6690],
>>> [ 0.0000, 0.0000, 0.0000, -0.0000]])
EDIT: please note the post has been updated to reflect Todd Sewell's addition in the comments. |
H: Why is Data with an Overrepresented Class called Imbalanced not Unbalanced?
I've seen the term Imbalanced used to describe data that has an over-representation of one class.
What's the reasoning behind naming this type of data Imbalanced as opposed to Unbalanced, which seems to fit the intended meaning perfectly already?
AI: Unbalanced:
not balanced: such as
a: not in equilibrium
b: mentally disordered : affected with mental illness
c: not adjusted so as to make credits equal to debits
an unbalanced account
Imbalanced:
lack of balance : the state of being out of equilibrium or out of proportion
a structural imbalance
a chemical imbalance in the brain
Imbalanced is used when something is out of proportion, Unbalanced when you can destabilise something. Although Imbalanced may sometimes be more correct or the only correct word to use, I rarely see it used outside of academic circles, data science or similar areas.
H: Advice on How to Train Neural Networks
I am relatively new to neural networks and AI, and I have a question regarding the training method in such networks. In particular spiking neural networks (SNNs) are the type we are working with.
I am confused with the best way to train spiking neural networks when high accuracy is the most desired performance metric I am working towards.
For context, we are doing supervised learning with a SNN as an anomaly detector to classify various input data samples, inputted as spike trains, into 2 classes: Healthy and Unhealthy. Our training data has one healthy input sample that we want the SNN to recognise as healthy, and we make up random unhealthy input samples that we want the SNN to recognise as unhealthy. This leads to my question:
How should you train a SNN? Take an example where you have a training dataset with 100 samples and say 50% are healthy and the other 50% are unhealthy, how should this network be trained in terms of the ratio of healthy and unhealthy training samples used to train?
Do you need more than one epoch, or more iterations?
Should you leave some training samples unshown to the SNN for testing?
And as I only have one healthy sample, will this work?
AI: There are many ways to train SNNs. This publication explains a few of them:
https://arxiv.org/pdf/2109.12894.pdf
However, we can start with some useful tips.
SNNs highly depend on a variable threshold (according to max values), the learning rate, and the number of spikes per sample (which impacts weight training & prediction). You will want to make several trials to find the right parameter values, and the right number of iterations and checks, before reaching a good result.
In addition to that, 150 samples could be enough as soon as they cover most cases. I don't know the data, so I can only speak in general terms.
Finally, weights initialization also plays an important role: testing several weights initialization could be necessary to reach good results.
Here are some codes that could be helpful:
https://github.com/fangwei123456/spikingjelly
https://github.com/Shikhargupta/Spiking-Neural-Network |
H: Predictive value of short text fields
I am working on a classification model using one of the following three algorithms: RandomForestClassifier, a TensorFlow model and a LogisticRegression model.
The data set I am working with has a feature that is represented by a single word that uses ASCII characters (may or may not be a valid word in any language). I don't see any advantage in treating this column as categorical data since number of unique words/total number of rows is very close to 1, i.e., almost every word is unique.
Is there any obvious way to use this column to improve the predictive capabilities of the resulting classification model?
The data I am working with are player IDs that are strings and are meaningless in any language. But would the answer to the above question change if I were working with single English words?
AI: In general there is no reason to include a meaningless id as feature, it has no semantic value.
At the semantic level, the question is whether knowing this information provides any help with knowing the target label. In other words, would a human expert be able to use this information? If not it's very unlikely to be relevant.
At the technical level, you can measure the amount of information brought by this variable about the target, for example with conditional entropy. |
H: Time-series classification in a production environment - Doubts
I created an ML model to classify five IoT signals (say A, B, C, D, and E) I get in CSV files monthly. Each signal has a value in the sampled timestamps.
My questions (doubts) are:
Do I have to preprocess new data in production on the same (in this example, daily) timestamp basis; in other words, with the same number of values (features) for each time-series sample as during the model's training? I am pretty sure that is true, but I wonder if there is something specific to time series.
Since my data are normalized and standardized, what would be the suggestion regarding the length of the time series, since that is important for the standardization of input data to the model in the production environment?
During training, I split the values by daily timestamp (say 5000 values for each signal in a day), so my time-series samples are on a daily basis. I have finished training, and the results on the test dataset and with cross-validation are acceptable for production. However, I would like not to give wrong directions to the data-acquisition team.
AI: Applying time series to IoT data can be quite complex because you have to deal with model constraints (in general, a model can't process too much data), business constraints (what you want to predict and with what accuracy), and device constraints (sensors can have different calibrations and the components are not 100% identical).
So the first step would be to define a time range of between 50 and 200 steps, with a clear start and end, ideally corresponding to a cycle.
I recommend starting with simple business objectives, because you already have a lot of complexity due to the model and the devices. The same applies to devices: starting by studying one device is more efficient for understanding the main behaviors, and you can add more devices progressively.
Then you have to choose the right model. Random Forest is quite universal and could take into account several variables.
https://pyts.readthedocs.io/en/latest/generated/pyts.classification.TimeSeriesForest.html
LSTMs are great to learn patterns, but they are quite sensitive to noise. You may have to know your devices very well to smooth their signal correctly.
https://www.analyticsvidhya.com/blog/2019/01/introduction-time-series-classification/
https://www.kaggle.com/code/meaninglesslives/simple-neural-net-for-time-series-classification
Sktime could be interesting:
https://www.sktime.org/en/v0.9.0/examples/02_classification_univariate.html
Note that if the classification rules apply to any IoT, it is not a multi-variate case, but rather a pattern recognition applicable to any similar device. However, be aware that the devices should be comparable enough to make good classification (= data normalization and maybe noise reduction). |
H: Ordering training text data by length
If I have text data where the length of documents greatly varies and I'd like to use it for training where I use batching, there is a great chance that long strings will be mixed with short strings and the average time to process each batch will increase because of padding within the batches.
I imagine sorting documents naively by length would create a bias of some sort, since long documents and short ones would tend to be similar to each other.
Are there any methods that have been tried that can help reduce training time in this case without sacrificing model performance?
AI: What you are referring to is called "bucketing". It consists of creating batches of sequences with similar length, to minimize the needed padding.
In tensorflow, you can do it with tf.data.Dataset.bucket_by_sequence_length. Take into account that previously it lived in different python packages (tf.data.experimental.bucket_by_sequence_length, tf.contrib.data.bucket_by_sequence_length), so examples online may contain the outdated name.
To see some usage examples, you can check this jupyter notebook, or other answers in stackoverflow, or this tutorial. |
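For illustration, here is a minimal sketch of bucket_by_sequence_length on a toy dataset of variable-length sequences; the boundaries and batch sizes are arbitrary, and on older TensorFlow versions the function lives under tf.data.experimental instead.
import tensorflow as tf

sequences = [[1, 2], [3, 4, 5, 6], [7], [8, 9, 10, 11, 12, 13]]
ds = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec(shape=[None], dtype=tf.int32))

# Group sequences into length buckets, so padding only happens within a bucket.
ds = ds.bucket_by_sequence_length(
    element_length_func=lambda seq: tf.shape(seq)[0],
    bucket_boundaries=[3, 5],       # buckets: <3, 3-4, >=5 tokens
    bucket_batch_sizes=[2, 2, 2])   # one batch size per bucket (len(boundaries) + 1)

for batch in ds:
    print(batch.shape)  # each batch is padded only to the longest sequence in its bucket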
H: How to perform some calculations after dbscan clustering
I have performed a clustering with geospatial data with the dbscan algorithm. You can see the project and the code in more detail here: https://notebook.community/gboeing/urban-data-science/15-Spatial-Cluster-Analysis/cluster-analysis
I would like to calculate the following in a dataframe:
the area of each cluster. It can be calculated as: (lat_max - lat_min) * (lon_max - lon_min)
number of points belonging to each cluster
At the moment I have added to the original dataset a column with the cluster to which the coordinate belongs.
for n in range(num_clusters):
df['cluster'] = pd.Series(cluster_labels, index=df.index)
Any idea of simple code that would allow me to do this?
AI: A simple solution is to apply Voronoi Diagrams to the DB Scan clusters:
https://www.arianarab.com/post/unsupervised-point-pattern-clustering-using-voronoi-tessellation-and-density-based-scan-algorithms
You can get the polygon coordinates and calculate the polygon area like this:
import numpy as np

# Example polygon: vertices approximating a quarter circle of radius 1.
x = np.arange(0, 1, 0.001)
y = np.sqrt(1 - x**2)

# Shoelace formula: area of a polygon from its ordered vertex coordinates.
def PolyArea(x, y):
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
Sources:
https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates
https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Voronoi.html |
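If you only need the bounding-box area defined in the question and the number of points per cluster, a plain pandas groupby is enough. A minimal sketch, assuming df has lat and lon columns plus the cluster label added above (DBSCAN noise points, labelled -1, are excluded):
import pandas as pd

clustered = df[df["cluster"] != -1]
summary = clustered.groupby("cluster").agg(
    n_points=("lat", "size"),
    lat_min=("lat", "min"), lat_max=("lat", "max"),
    lon_min=("lon", "min"), lon_max=("lon", "max"))

# Area as defined in the question: (lat_max - lat_min) * (lon_max - lon_min)
summary["area"] = (summary["lat_max"] - summary["lat_min"]) * \
                  (summary["lon_max"] - summary["lon_min"])
print(summary[["n_points", "area"]])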
H: select rows containing max value basing on another duplicated rows of a group
I want to select rows by the maximum values of another column which would be the duplicated rows containing duplicated maximum values of a group.
This should contain three steps:
(1) group dataframe by column A;
(2) get duplicated rows with duplicated maximum values of column B;
(3) get rows if it contains maximum values of column C (if it is still duplicated, pick the first).
Example:
df_test = pd.DataFrame({'A':[1,1,2,3,4,2,4,3,3,2],
'B':[3,3,2,4,5,2,5,3,4,3],
'C':[80,85,88,90,70,83,85,90,90,70]})
df_result=pd.DataFrame({'A':[1,2,3,4],
'B':[3,3,4,5],
'C':[85,70,90,85]})
AI: This is more of a programming than a data science question, and would therefore be better suited for stackoverflow, but this can be achieved relatively easily using a combination of sorting and grouping:
(
df
# sort such that the first row within each group is the one you want
.sort_values(["A", "B", "C"], ascending=[True, False, False])
# group based on column A
.groupby("A")
# select the first row within each group
.first()
# reset the index such that A is a column instead of the index
.reset_index()
)
Which gives the following result:
   A  B   C
0  1  3  85
1  2  3  70
2  3  4  90
3  4  5  85
H: what are the solutions for efficient feature selection in a very large feature space?
I have a classification dataset (50k observations and 10 features) on which I can't get a good result..
I want to try increasing the number of features..
I plan to automatically generate many feature options from whatever seems reasonable to me for my dataset.
When I roughly calculated the number of possible features, it turned out to be about 100 million or more...
Naturally, such a large number of features cannot be processed at once, and 99.9% of the features will turn out to be unimportant.
My plan is this:
create a dataset of 100/1000 features
train the model
choose features that are important from the point of view of the classification algorithm
save important features
transition to point "1" but with new features
My question is this:
What are the effective strategies for selecting features if there are potentially millions or more features.
Is my plan correct, or am I missing something?
upd_for_Dave===========================
Here is a little R code showing how I plan to generate features with a very simple and "small" grammar
library("gramEvol")
grammarDef <- CreateGrammar(list(
expr = grule(op(expr, expr), func(expr), var),
func = grule(sin, cos, log, sqrt),
op = grule("+","-","*","/"),
var = grule(var, var^n),
n = gvrule(1:4),
var = grule(x1,x2)))
Here are the potential features that this grammar generates
gramEvol::GrammarRandomExpression(grammarDef,numExpr = 10)
[[1]]
expression(x2)
[[2]]
expression(x2)
[[3]]
expression(cos(sqrt(x1 + sqrt(x1)) + sqrt(x1)))
[[4]]
expression(cos(sqrt(sin(sqrt(cos(log(sin(log(x2))) * cos(sqrt(cos(x1)))))))) - cos(x2 * x2))
[[5]]
expression(log(x1))
[[6]]
expression(x2)
[[7]]
expression(sqrt(cos(x2 * (x1 - sqrt(log((sqrt(x1) + x1)/x2/x2)/(sin(x1) * x2))))))
[[8]]
expression(cos(x1))
[[9]]
expression(x2)
[[10]]
expression(cos(x2))
So many unique combinations
summary(grammarDef)
No. of Unique Expressions: 3.993138e+15
And this is only with this simple grammar, but you can do not only mathematical expressions, but code, text .. anything ..
But I think this is all beyond the scope of the question.
AI: Your version is good enough, but features based on 1 feature can also be used.
I use Weight of Evidence (WOE) and Information Value (IV) when there are many features. I would bin the variable, calculate the IV, throw out the obviously bad ones (iv < 0.03 for example) and make a feature / iv table. I also have some heuristic rule: I will add features to the model as long as they add at least 0.01 roc_auc_score. If the 1-variable model has abs(roc_auc_score) < 0.501 then I remove it from the sample. |
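A rough sketch of such an IV computation, assuming a numeric feature x and a binary target y (the bin count and the small smoothing constant are arbitrary choices):
import numpy as np
import pandas as pd

def information_value(x, y, bins=10):
    data = pd.DataFrame({"x": x, "y": y})
    data["bin"] = pd.qcut(data["x"], q=bins, duplicates="drop")
    grouped = data.groupby("bin")["y"].agg(["sum", "count"])
    events = grouped["sum"]                          # positives per bin
    non_events = grouped["count"] - grouped["sum"]   # negatives per bin
    pct_event = (events + 0.5) / events.sum()        # +0.5 avoids log(0)
    pct_non_event = (non_events + 0.5) / non_events.sum()
    woe = np.log(pct_event / pct_non_event)
    return ((pct_event - pct_non_event) * woe).sum()

# Features with information_value(...) below ~0.03 would be discarded, as described above.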
H: Error using sigmoid activation function in the last dense layer of an LSTM
Trying to use sigmoid as an activation function for the last dense layer of an LSTM, I get this error
ValueError: `logits` and `labels` must have the same shape, received ((None, 60, 1) vs (None,)).
The code is this
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train) #scaled_train
X_test_s = scaler.transform(X_test) #scaled_test
length = 60
n_features=89
generator = TimeseriesGenerator(X_train_s, Y_train['TARGET_ENTRY_LONG'], length=length, batch_size=1)
validation_generator = TimeseriesGenerator(X_test_s, Y_test['TARGET_ENTRY_LONG'], length=length, batch_size=1)
# define model
model = Sequential()
model.add(LSTM(90, activation='relu', input_shape=(length, n_features), return_sequences=True, dropout = 0.3))
model.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
# fit model
model.fit(generator,epochs=3,
validation_data=validation_generator)
#callbacks=[early_stop])
If I replace the last layer declaration with the following one
model.add(Dense(1))
I get no errors, but probably also not the expected result. Any idea?
AI: I found the cause of the problem after several attempts: it was in the layer just before the last one. The LSTM layer that feeds the final Dense layer must not have return_sequences=True when that Dense layer does binary classification with a sigmoid activation (earlier stacked LSTM layers still need it to feed the next LSTM layer). Therefore, this layer
model.add(LSTM(30,activation='relu',return_sequences=True, dropout = 0.3))
should instead be written as follows:
model.add(LSTM(30,activation='relu', dropout = 0.3)) |
H: Should I use "Recency" as an predictor for churn if I want to catch churners early?
I want to build a customer churn prediction model that predicts probability of churn the next day and I'm looking for some features that might be important for the target variable which has outcomes churn (1) or not churn (0).
A customer is considered to have churned if more than 365 days have passed since the date of last purchase. Naturally then, "Recency" (Time since last purchase) will be an important predictor for predicting churn. So if a customer is on his/her 364th day of purchase-inactivity the model will with high probability predict a churn next day. But I want to be able to predict churn sooner and not depend too much on time since last purchase.
How would I go about if I want to be able to catch churners early and not depend on time too much?
AI: You may calculate the ratio of churn-90 customers who became churn-365 to understand the problem deeper. I expect it depends on industry and other factors. In general, churn-a to churn-b ratio may give you a better vision. Then plot different combinations of a and b.
The last step is to pick a reasonable threshold, which definitely involves some unpleasant trade-off between model usefulness and model accuracy.
H: Variable selection and NA
I have a very large dataset with a lot of NAs in the data. I want to perform an analysis and have to select the variables that are of most interest.
I feel like I have to take 3 steps before I can start to analyse. I have to perform PCA, I will have to fill in the NAs and also, some variables contain very, very little data. Almost half of the variables have less than 50% of the observations filled in. So I feel like I have to delete these at some point. My question is, what is the most appropriate order?
My intuition is to delete variables with very little data, then do PCA with remaining variables and then fill in the NAs using Nearest Neighbor. But I would like to hear your takes on it! :)
Also, when deleting variables with little data, what is an appropriate percentage of NAs per variable?
AI: My question is, what is the most appropriate order?
PCA creates "new" variables in a different dimensional space. I am not even sure most PCA libraries can handle missing values, and it wouldn't make sense anyway (in that case, the whole output vector would be NaN, since each output is a linear combination of all the input variables). So you first need to do some sort of imputation and then try to reduce the dimensions with PCA.
when deleting variables with little data, what is an appropriate percentage of NAs per variable?
There is no silver bullet solution I think. You can try to see what produces the best outcomes. In my experience, more than 20% NaN is too much (but it depends on the size of your data too, 50% on a dataset with a billion rows might provide enough information while 50% on 100 rows might be too little).
In general, I would propose the following (a short code sketch follows the list):
First drop the features with too many missing data (what is too many is up to you to find)
Then impute the missing values using whatever method you want (you mentioned KNN but even simpler methods like mean or mode or constant might work while taking a lot less time)
Finally do the PCA (or any other method of feature selection or dimensionality reduction, since you mentioned in the comments that you wanted to drop features, not dimensionality reduction) |
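A minimal sketch of that order, assuming the remaining columns are numeric, with an assumed 50% missing-value threshold and mean imputation:
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# 1. Drop columns with more than 50% missing values (the threshold is a judgement call).
keep = df.columns[df.isna().mean() < 0.5]
X = df[keep]

# 2. Impute the remaining gaps (mean here; KNNImputer is a drop-in alternative).
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)

# 3. Scale, then reduce dimensions, keeping 95% of the variance.
X_reduced = PCA(n_components=0.95).fit_transform(StandardScaler().fit_transform(X_imputed))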
H: How to split train/test datasets according to labels' classes
I faced a problem while I using sklearn.train_test_split().
Here is the code I use.
xtrain, xtest, ytrain, ytest = train_test_split(X_source, Y_source, test_size=0.3)
The shape of X_source is (2427, features_size) and of Y_source is (2427, 1). There are 65 different label classes in Y_source; what I mean is that Y_source is a vector of length 2427 with values from 1 to 65.
My problem is that when I use train_test_split, the output (ytrain/ytest) only contains some of the label classes, not all of them. I think it is because the method just splits off 30% of the whole data without caring about the label classes. What should I do to deal with it? I want to make sure the output contains all the label classes, and that every class is split in the same proportion. Is there a function that can do this, or do I need to reshape my data according to the different label classes?
AI: You can use the argument stratify=Y_source to maintain the proportions after splitting. |
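Applied to the code in the question:
xtrain, xtest, ytrain, ytest = train_test_split(
    X_source, Y_source, test_size=0.3, stratify=Y_source)

Note that stratification requires every class to appear at least twice in Y_source.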
H: Can I fine tune GPT-3?
Can anyone fine-tune the GPT-3 model on commodity hardware without GPU?
What I meant is can we fine tune available GPT-3 equivalent models?
For example, we have only access to GPT-J.
Can we fine-tune GPT-J with commodity hardware, or let's say with only a basic GPU such as one RTX 3080?
Can we fine-tune these models (not training from scratch)?
Or will it need high-end infrastructure with GPUs?
AI: The weights of GPT-3 are not public. You can fine-tune it but only through the interface provided by OpenAI. In any case, GPT-3 is too large to be trained on CPU.
As for other similar models, like GPT-J, they would not fit on an RTX 3080, because it has 10/12 GB of memory and GPT-J takes 22+ GB for float32 parameters. It should be possible to fine-tune some special versions that use int8 precision, like this one.
H: Can I use scikit learn's cross_val_predict with cross_validate?
I am looking to make a visualization of my cross validation data in which I can visualize the predictions that occurred within the cross validation process. I am using scikit learn's cross_validate to get the results of my bayesian ridge model's (scikit learn BayesianRidge) performance, but am unsure if my plot using cross_val_predict expresses the same predictions? My plot is a one-to-one plot of the predicted labels that occurred during cross validation versus the observed labels the model trained on. I use the same number of folds in both cross_validate and cross_val_predict.
Basically, I just want to know if the plot I make with cross_val_predict can be described by the returned performance metrics from cross_validate?
Thanks for the help
AI: No, the folds used will (almost surely) be different.
You can enforce the same folds by defining a CV Splitter object and passing it as the cv argument to both cross-validation functions:
cv = KFold(5, shuffle=True, random_state=42)  # shuffle=True is required when setting random_state
cross_validate(model, X, y, cv=cv, ...)
cross_val_predict(model, X, y, cv=cv, ...)
That said, you're fitting and predicting the model on each fold twice by doing this. You could use return_estimator=True in cross_validate to retrieve the fitted models for each fold, or use the predictions from cross_val_predict to generate the scores manually. (Either way though, you'd need to use the splitter object to slice to the right fold, which might be a little finicky.) |
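For example, a sketch of deriving per-fold scores from the cross_val_predict output, assuming y is a numpy array and using R² as the metric:
import numpy as np
from sklearn.metrics import r2_score

preds = cross_val_predict(model, X, y, cv=cv)
# cv.split(X) regenerates the same folds because the splitter has a fixed random_state.
fold_scores = [r2_score(y[test_idx], preds[test_idx]) for _, test_idx in cv.split(X)]
print(np.mean(fold_scores))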
H: Is it Possible to plot Scatter Plot + Histogram + Correlation Values in a single plot (in python)?
I recently came across corrmorant package in R.
It allows to plot all three basic EDA plots together: Scatter Plot + Histogram + Correlation Values.
Is it possible to do same in Python also?
AI: Corrmorant is based on ggplot, but it seems that there is no equivalent in Python.
However, you can redo it thanks to this code:
import matplotlib.pyplot as plt
from scipy import stats
import seaborn as sns
import numpy as np
def corrdot(*args, **kwargs):
corr_r = args[0].corr(args[1], 'pearson')
corr_text = round(corr_r, 2)
ax = plt.gca()
font_size = abs(corr_r) * 80 + 5
ax.annotate(corr_text, [.5, .5,], xycoords="axes fraction",
ha='center', va='center', fontsize=font_size)
def corrfunc(x, y, **kws):
r, p = stats.pearsonr(x, y)
p_stars = ''
if p <= 0.05:
p_stars = '*'
if p <= 0.01:
p_stars = '**'
if p <= 0.001:
p_stars = '***'
ax = plt.gca()
ax.annotate(p_stars, xy=(0.65, 0.6), xycoords=ax.transAxes,
color='red', fontsize=70)
sns.set(style='white', font_scale=1.6)
iris = sns.load_dataset('iris')
g = sns.PairGrid(iris, aspect=1.5, diag_sharey=False, despine=False)
g.map_lower(sns.regplot, lowess=True, ci=False,
line_kws={'color': 'red', 'lw': 1},
scatter_kws={'color': 'black', 's': 20})
g.map_diag(sns.distplot, color='black',
kde_kws={'color': 'red', 'cut': 0.7, 'lw': 1},
hist_kws={'histtype': 'bar', 'lw': 2,
'edgecolor': 'k', 'facecolor':'grey'})
g.map_diag(sns.rugplot, color='black')
g.map_upper(corrdot)
g.map_upper(corrfunc)
g.fig.subplots_adjust(wspace=0, hspace=0)
# Remove axis labels
for ax in g.axes.flatten():
ax.set_ylabel('')
ax.set_xlabel('')
# Add titles to the diagonal axes/subplots
for ax, col in zip(np.diag(g.axes), iris.columns):
ax.set_title(col, y=0.82, fontsize=26)
Source: https://stackoverflow.com/questions/48139899/correlation-matrix-plot-with-coefficients-on-one-side-scatterplots-on-another |
H: Test-set error in cross-validation with regularization
I am performing regularization (Ridge regression) using cross validation. I understand that for a certain value of the regularization parameter $\lambda$, we first fit on the training-set and then we use the resulting parameters to calculate the error on the test-set and the optimal $\lambda$ is the one that minimizes the test-set error.
My question is: for the calculation of the test-set error do we include the penalty term (for the value of $\lambda$ that we used in the training set), or not (i.e. use $\lambda =0$)?
On one hand, I see the need to treat the training and test sets "on an equal footing" and use the same $\lambda$ for the test-set error calculation.
On the other hand, the parameter values that we get from fitting the training-set have already been penalized, and we want to see how these values perform on the test-set, so why use a penalty term again?
AI: You should use the same value of λ on both training and test data sets.
The value of λ influences the values of the parameters. The values of the parameters should be consistent across training and test data sets, thus the value of λ should be the same. |
H: preprocess unbalanced skewed data
I am trying to find a way to preprocess my data.
The data is as follows:
study      person_id  energy_1  energy_2  y
study_id   A          2.3       -1.05     1
study_id2  B          1.03      0.04      0
Statistically speaking, we can see that for each study, the values of energy_1 and energy_2 bring a lot of value in determining whether the person is 0 or 1 in the y column: we can mostly use them alone to make the prediction.
But when we are using the whole dataset and mixing the studies together, the model used (a binary XGBoost classifier) is no longer able to properly predict the label.
Can you give hints on how to preprocess/transform my data so that the model could react properly independently of the study?
I am aware that XGBoost does not need normalized data.
AI: I would use mixed effects logistic regression here; it was designed and built for things like this. Y is binary here (per the comment), so you can use logistic regression. Energy 1 and 2 should be the fixed effects; we want to know the effect of, say, increasing energy_1 by 10%. The random effects can be person_id and study. Person and study are just a collection of potentially infinite possibilities; we don't care about the "effect" of having person A vs. person B taking the test. By treating them as random effects, we save degrees of freedom and can estimate the effect of energy better. Finally, note that mixed effects logistic regression has a unique interpretation you should know: https://stats.oarc.ucla.edu/r/dae/mixed-effects-logistic-regression/#:~:text=Mixed%20effects%20logistic%20regression%20is,both%20fixed%20and%20random%20effects.
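If you want to try this in Python, statsmodels offers a Bayesian mixed-effects GLM for binary outcomes. The sketch below is only an illustration under assumptions: the formula, the variance-component syntax and the column names should be checked against your data and the statsmodels documentation.
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Fixed effects: energy_1, energy_2. Random intercepts: study and person_id (assumed columns).
vc = {"study": "0 + C(study)", "person": "0 + C(person_id)"}
model = BinomialBayesMixedGLM.from_formula("y ~ energy_1 + energy_2", vc, df)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())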
H: Does fine-tuning require retraining the entire model?
Would it be necessary to retrain the entire model if we were to perform fine-tuning?
Let's say we somehow got the GPT-3 model from OpenAI (I know GPT-3 is closed source).
Would anyone with access to a couple of RTX 3080 GPUs be able to fine tune it if they got the GPT-3 model weights?
Or would it need infrastructure like the big companies?
AI: No, you don't need to retrain the entire model. Fine-tuning refers to taking the weights trained in the general model and then continuing training a bit using your specific data. Using this approach, typically the only things you need to fully train are the models performing the downstream task from the model creating the representation of the data, often just a handful of densely connected layers to perform e.g. classification, which are orders of magnitude less expensive to train than the representation model. |
H: What should the target variable (y) look like here?
I am doing some data science problems for practice, and this is the question I'm currently tackling:
Given a list of L values generated independently by some unknown process, we will use the mean of L to predict unseen values generated by the same process. Use leave-one-out cross-validation to estimate the mean absolute error (MAE) of this process.
Input: An array of floats arr
Output: A float score
Example:
arr = [1,2,3],
score = 1.0
Now, usually, the input variables (X) and target variable (y) have the same number of rows. But in this case, since it says "we will use the mean of L to predict unseen values", what does y look like? Because in the given example, X has just one column, so if we take the mean of X, we will get a scalar value, which gives error when trying to do cross-validation:
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.linear_model import LinearRegression
import numpy as np
# input list of values
x = [[2, 5, 4, 3, 4, 6, 7, 5, 8, 9]]
# define the output as the mean of the inputs, as specified in the question
y = [np.mean(x)]
# build multiple linear regression model
model = LinearRegression()
# define cross-validation method to use
cv = LeaveOneOut()
# use LOOCV to evaluate model
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
# view mean absolute error
np.mean(np.absolute(scores))
>>>
---------------------------------------------------------------------------
Empty Traceback (most recent call last)
File ~/miniforge3/lib/python3.10/site-packages/joblib/parallel.py:862, in Parallel.dispatch_one_batch(self, iterator)
861 try:
--> 862 tasks = self._ready_batches.get(block=False)
863 except queue.Empty:
864 # slice the iterator n_jobs * batchsize items at a time. If the
865 # slice returns less than that, then the current batchsize puts
(...)
868 # accordingly to distribute evenly the last items between all
869 # workers.
File ~/miniforge3/lib/python3.10/queue.py:168, in Queue.get(self, block, timeout)
167 if not self._qsize():
--> 168 raise Empty
169 elif timeout is None:
Empty:
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Input In [70], in <cell line: 18>()
15 cv = LeaveOneOut()
17 # use LOOCV to evaluate model
---> 18 scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
20 # view mean absolute error
21 np.mean(np.absolute(scores))
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:515, in cross_val_score(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch, error_score)
512 # To ensure multimetric format is not supported
513 scorer = check_scoring(estimator, scoring=scoring)
--> 515 cv_results = cross_validate(
516 estimator=estimator,
517 X=X,
518 y=y,
519 groups=groups,
520 scoring={"score": scorer},
521 cv=cv,
522 n_jobs=n_jobs,
523 verbose=verbose,
524 fit_params=fit_params,
525 pre_dispatch=pre_dispatch,
526 error_score=error_score,
527 )
528 return cv_results["test_score"]
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:266, in cross_validate(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch, return_train_score, return_estimator, error_score)
263 # We clone the estimator to make sure that all the folds are
264 # independent, and that it is pickle-able.
265 parallel = Parallel(n_jobs=n_jobs, verbose=verbose, pre_dispatch=pre_dispatch)
--> 266 results = parallel(
267 delayed(_fit_and_score)(
268 clone(estimator),
269 X,
270 y,
271 scorers,
272 train,
273 test,
274 verbose,
275 None,
276 fit_params,
277 return_train_score=return_train_score,
278 return_times=True,
279 return_estimator=return_estimator,
280 error_score=error_score,
281 )
282 for train, test in cv.split(X, y, groups)
283 )
285 _warn_or_raise_about_fit_failures(results, error_score)
287 # For callabe scoring, the return type is only know after calling. If the
288 # return type is a dictionary, the error scores can now be inserted with
289 # the correct key.
File ~/miniforge3/lib/python3.10/site-packages/joblib/parallel.py:1085, in Parallel.__call__(self, iterable)
1076 try:
1077 # Only set self._iterating to True if at least a batch
1078 # was dispatched. In particular this covers the edge
(...)
1082 # was very quick and its callback already dispatched all the
1083 # remaining jobs.
1084 self._iterating = False
-> 1085 if self.dispatch_one_batch(iterator):
1086 self._iterating = self._original_iterator is not None
1088 while self.dispatch_one_batch(iterator):
File ~/miniforge3/lib/python3.10/site-packages/joblib/parallel.py:873, in Parallel.dispatch_one_batch(self, iterator)
870 n_jobs = self._cached_effective_n_jobs
871 big_batch_size = batch_size * n_jobs
--> 873 islice = list(itertools.islice(iterator, big_batch_size))
874 if len(islice) == 0:
875 return False
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_validation.py:266, in <genexpr>(.0)
263 # We clone the estimator to make sure that all the folds are
264 # independent, and that it is pickle-able.
265 parallel = Parallel(n_jobs=n_jobs, verbose=verbose, pre_dispatch=pre_dispatch)
--> 266 results = parallel(
267 delayed(_fit_and_score)(
268 clone(estimator),
269 X,
270 y,
271 scorers,
272 train,
273 test,
274 verbose,
275 None,
276 fit_params,
277 return_train_score=return_train_score,
278 return_times=True,
279 return_estimator=return_estimator,
280 error_score=error_score,
281 )
282 for train, test in cv.split(X, y, groups)
283 )
285 _warn_or_raise_about_fit_failures(results, error_score)
287 # For callabe scoring, the return type is only know after calling. If the
288 # return type is a dictionary, the error scores can now be inserted with
289 # the correct key.
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_split.py:86, in BaseCrossValidator.split(self, X, y, groups)
84 X, y, groups = indexable(X, y, groups)
85 indices = np.arange(_num_samples(X))
---> 86 for test_index in self._iter_test_masks(X, y, groups):
87 train_index = indices[np.logical_not(test_index)]
88 test_index = indices[test_index]
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_split.py:98, in BaseCrossValidator._iter_test_masks(self, X, y, groups)
93 def _iter_test_masks(self, X=None, y=None, groups=None):
94 """Generates boolean masks corresponding to test sets.
95
96 By default, delegates to _iter_test_indices(X, y, groups)
97 """
---> 98 for test_index in self._iter_test_indices(X, y, groups):
99 test_mask = np.zeros(_num_samples(X), dtype=bool)
100 test_mask[test_index] = True
File ~/miniforge3/lib/python3.10/site-packages/sklearn/model_selection/_split.py:163, in LeaveOneOut._iter_test_indices(self, X, y, groups)
161 n_samples = _num_samples(X)
162 if n_samples <= 1:
--> 163 raise ValueError(
164 "Cannot perform LeaveOneOut with n_samples={}.".format(n_samples)
165 )
166 return range(n_samples)
ValueError: Cannot perform LeaveOneOut with n_samples=1.
Curiously, if I duplicate the contents of X and y, the error goes away, and a score of 0.0 is outputted:
# input list of values
x = [[2, 5, 4, 3, 4, 6, 7, 5, 8, 9], [2, 5, 4, 3, 4, 6, 7, 5, 8, 9]]
# define the output as the mean of the inputs, as specified in the question
y = [np.mean(x),np.mean(x)]
...
...
...
>>> 0.0
Why is that?
AI: You have not interpreted the problem correctly.
I will try to explain using your example, with the array [1, 2, 3].
Because there are only 3 samples, the cross validation is called "leave one out".
First fold, elements [1, 2] are used for training and [3] for testing.
The mean of the train elements is 1.5, so the prediction is 1.5, so the absolute error is 3-1.5 = 1.5.
Similarly we repeat by choosing 2 and 1 as the test elements and the other two as train.
Mean of 1 and 3: 2, absolute error = 2-2 = 0
Mean of 2 and 3: 2.5, absolute error = |1 - 2.5| = 1.5
So, the mean absolute error will be mean([1.5, 0, 1.5]) = 1.0.
You tried to think about the problem as a usual machine learning problem with tabular data, but essentially your X is not a row (the problem statement mentions that the input is an array, but you define it as a 2D array in the code), it is a column which happens to be both your feature, and the target variable, and the model you have to use is simply y_pred = np.mean(x).
The following snippet does not use library functions (well, apart from np.mean) and is easy to understand:
import numpy as np
def model(X):
return np.mean(X)
def cross_validation(X, model):
errors = []
for i in range(len(X)):
test_element = X[i]
train_elements = X[0:i] + X[i+1:len(X)]
prediction = model(train_elements)
error = abs(prediction - test_element)
errors.append(error)
return np.mean(errors)
arr1 = [1,2,3]
arr2 = [2, 5, 4, 3, 4, 6, 7, 5, 8, 9]
print(cross_validation(arr1, model))
print(cross_validation(arr2, model))
and produces
1.0
1.9555555555555557 |
H: Keras model does not construct the layers in sequence
Every time I print the model summary it prints out the convlstm first and then batchnormalization at the end.
Here is the output of model.summary
|¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯|
| conv_lstm2d (ConvLSTM2D) multiple 416256 |
| |
| conv_lstm2d_1 (ConvLSTM2D) multiple 295168 |
| |
| conv_lstm2d_2 (ConvLSTM2D) multiple 33024 |
| |
| batch_normalization (BatchN multiple 256 |
| ormalization) |
| |
| batch_normalization_1 (Batc multiple 256 |
| hNormalization) |
|_______________________________________________________________|
I want the batch_normalization layer after every conv_lstm2d except for the last layer.
AI: It looks like you're seeing this issue: BUG!! tf.keras.model.summary() output is wrong. When you use the subclass API, the summary() method prints the layers in the order they are created, not the order they appear in the network. If you plot the network using the plot_model utility, this should output the layers in the correct order (output goes to a png file). |
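For reference, a minimal plot_model call (the filename is arbitrary, and pydot/graphviz need to be installed):
import tensorflow as tf

tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True)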
H: What is the name for 'cheating' with data, i.e. letting machine learning models use out-of-sample data?
What is the term for abusing data and machine learning methods to get better results than normally (with proper training and test set)? Some of my co-workers call it data-mining but I do not believe that is the correct word?
AI: I found the word: data snooping |
H: Find clusters of industrial probe
I'm new to data science, but in these last months I have started to gather many samples of values coming from different industrial probes (water temperature, pressure, kW consumed, etc.). I have developed a procedure to collect many different values from many different PLCs every 5 seconds, and I now have about 40M records of data. After doing some basic analysis and visualization, I would like to start by finding, for every probe, the clusters of data and of course the outliers, so I can understand when I have a "strange" situation. Can you give some advice on what kind of analysis I can try to do?
Sorry, I understand the question could seem quite vague, but I'm really at the beginning and trying to understand something more about my data.
AI: Dimensionality reduction like UMAP or PaCMAP is a good way to understand complex, high-volume data.
Here is a demonstrator.
Nevertheless, 40M is a lot of data: you may not see clear clusters if you take all the data at once, and it might be useless: a lot of industrial cases focus on a time frame, and taking all the data would result in a blurred visualization.
I recommend starting with a sample of ~1000 values (e.g. 1 day), and then increasing progressively. You'll be able to detect outliers that fall outside the clusters.
In addition, you can also apply some PCA to understand linear dependencies in the data.
Here is a similar case to yours:
https://github.com/bharathsudharsan/Air-Quality-IoT-Analytics |
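A minimal sketch of that first step, assuming the probe readings are in a DataFrame df and probe_cols lists the numeric sensor columns (the package is umap-learn):
import umap  # pip install umap-learn

sample = df[probe_cols].dropna().sample(1000, random_state=0)  # start small, e.g. ~1 day
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(sample)
# embedding has shape (1000, 2); scatter-plot it to look for clusters and outliers.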
H: How do you Speed up the Calculation of a Correlation Matrix on a Large Dataset in Pandas?
I'm using a dataset with roughly 460,000 rows and 1,300 columns. I'd like to reduce the number of columns by seeing which have the largest effect on score using pandas' .corr() function.
However, on such a large dataset, calculating the correlation matrix takes about 20 minutes. Is there any way to speed up the calculation?
AI: You can use libraries with similar or identical pandas syntax, such as dask, pandarallel, ray, or modin. Each of these libraries allows all processor cores to work, whereas pandas often uses only 1 core. Dask and ray also allow you to work with big data.
It is also possible to select only part of the dataset. 460,000 rows is quite a lot; I think if you randomly take half of them, the result will be very similar to using the entire dataset. Unfortunately, I cannot estimate mathematically how much difference there will be.
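As a sketch of the sampling idea, restricted to the numeric columns (the sampling fraction is arbitrary):
import numpy as np

numeric = df.select_dtypes(include=np.number)
corr = numeric.sample(frac=0.25, random_state=0).corr()  # ~115k rows instead of ~460k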
H: Confusion regarding K-fold Cross Validation
In K-fold cross validation, we divide the dataset into k folds, train the model on k-1 folds, and test the model on the remaining fold. We do so until every fold has been used as the test set. In each of these iterations we train a new model that is independent of the model created in the previous iteration (every iteration uses a new instance of the model).
So my question is: if I divide my dataset into train and test sets and use only the training set for the k-fold cross-validation process, then, since every iteration uses a new model, which model coming out of the cross-validation should I evaluate (ROC curve, F1-score, precision and so on) with the test set? One way to implement k-fold cross validation is sklearn.model_selection.cross_val_score, and this returns only an array of scores, one per run of the cross validation, which confirms my problem: no model is returned to be further evaluated on the test set. What should I do in this case?
AI: One of the most common uses of k-fold cross-validation is model selection, i.e. deciding which type of model (such as linear regression, random forest, or a neural network) is best for the problem and/or what the appropriate hyperparameter settings are. We train models using k-fold cross-validation for each model type and/or set of hyperparameters we want to test, and select the best one based on the cross-validation results. Then we use all the training data to train another model, with the hyperparameters set to the values found by the cross-validation process. This last model is the one evaluated using the test data.
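A sketch of this workflow with scikit-learn (the model type and parameter grid are placeholders):
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(RandomForestClassifier(), {"n_estimators": [100, 300]}, cv=5)
search.fit(X_train, y_train)              # k-fold CV runs on the training data only
# With refit=True (the default), the best hyperparameters are refit on all training data,
# and that single model is the one evaluated once on the held-out test set.
test_score = search.score(X_test, y_test)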
H: Reversing features after applying log transform, what does it mean and when should i apply it?
So, I have a skewness problem in my data (the features, not the target variable) that I want to feed into a NN. According to this and a few other links, I have quite a few options to deal with it, but some of them mention:
you’ll have to reverse it once when making predictions
I don't understand when making predictions means. does it mean when I do forward prop i should reverse the log transform? or does it mean when making predictions on dev and test set?
And, should i also apply log transform to dev and test set?
Also, if i were to deploy this model, does it mean that I always need to log transform features that i apply log transform while training?
AI: You don’t have to back-transform your features. Just apply the transformations to your features and make your predictions. For instance, if you $\log$ your features during training, so the same in production.
Back-transforming your predictions means that, if you predict the expected value of the $\log$, you need to wrestle with your model to tease out the expected value. However, this only applies when you transform $y$, which you do not.
I’m not convinced that you need to transform the features, but if you know to expect particular relationships, transformations can help. All you have to watch out for is applying the same transformations to your features in production. |
H: How does Pandas' Correlation Method Handle Non-Numeric Columns?
I'm using Pandas' .corr() method to figure out which columns I can eliminate from a large dataset. Some of those columns have non-numeric types.
How does Pandas handle these columns?
AI: According to this source, Pandas will ignore any columns that are non-numeric.
If you want Pandas to perform correlations on your categorical variables you'll have to turn them into dummy variables using pandas.get_dummies() (reference) or something similar. |
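For example, assuming your DataFrame has a categorical column named "category":
import pandas as pd

corr = pd.get_dummies(df, columns=["category"]).corr()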
H: How do you Specify which Columns Pandas reads in?
I have a huge dataset with ~470,000 rows and ~1400 columns. I only require 184 of the available columns. I'd like to limit the columns I read in to improve my dataset's loading times.
How do you specify the columns read in by a function like pandas.read_csv()?
AI: As can be seen in the pandas documentation, this can be done using the usecols keyword:
usecols: list-like or callable, optional
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or
strings that correspond to column names provided either by the user in
names or inferred from the document header row(s). If names are given,
the document header row(s) are not taken into account. For example, a
valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar',
'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1,
0]. To instantiate a DataFrame from data with element order preserved
use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for
columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo',
'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the
column names, returning names where the callable function evaluates to
True. An example of a valid callable argument would be lambda x:
x.upper() in ['AAA', 'BBB', 'DDD']. Using this parameter results in
much faster parsing time and lower memory usage. |
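For example, with hypothetical column names:
import pandas as pd

wanted = ["id", "age", "score"]               # the 184 columns you actually need
df = pd.read_csv("data.csv", usecols=wanted)  # only these columns are parsed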
H: Linear Regression line not showing in plot
It's a silly problem, I know, but it's getting on my nerves. Everything seems fine, but I cannot get the line to show on the plot.
I've put it in a public Google notebook, for your convenience.
t represents months, and f_t are sales (accumulated). I feed the model 12 months of data, and use the 13th month for prediction. Simple.
import random
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot as plt
muestra = []
# create the 13 samples according to the formula provided in the problem statement
for i in range(1,14):
t = 3*i + 2*(random.uniform(-1, 1))
muestra.append((i, t))
df = pd.DataFrame(muestra)
df.rename(columns={0:"t", 1:"f_t"}, inplace=True)
train = df.loc[:11]
test = df.loc[12:]
X_train = train.t.values.reshape(-1, 1)
y_train = train.f_t.values
X_test = test.t.values.reshape(-1, 1)
y_test = test.f_t.values
LR_model = LinearRegression()
LR_model.fit(X_train, y_train)
y_pred = LR_model.predict(X_test)
%matplotlib inline
plt.plot(X_test, y_pred, label="Regresión Lineal", color='g')
plt.scatter(X_train, y_train, label="Muestra",color='b')
plt.scatter(X_test, y_test, label="Mes 13",color='r')
plt.legend()
plt.xlabel('Meses (t)')
plt.ylabel('Ventas (f(t))')
plt.title("Análisis en base a la técnica de regresión lineal simple")
plt.show()
Now, I get the scatter points, but not the regression line. What am I missing?
Thank you.
AI: You are trying to plot a single predicted point, when I believe you are actually looking to plot the fitted model. To do that you'll need the coef_ and intercept_ properties of the model. I have included a link to the documentation on this if you want to learn more.
%matplotlib inline
f_x = lambda x: (x * LR_model.coef_) + LR_model.intercept_
x_range = [0,13]
LR_model_y = list(map(f_x, x_range))
plt.plot(x_range,LR_model_y, label="Regresión Lineal", color='g')
https://scikit-learn.org/stable/modules/linear_model.html |
H: Difference between sklearn's LogisticRegression and SGDClassifier?
What is the difference between sklearn's LogisticRegression classifier and its SGDClassifier? I understand that SGD is an optimization method, while Logistic Regression (LR) is a machine learning algorithm/model. I also understand that SGDClassifier is a linear classifier that is optimized by SGD (per this answer), but how do these two models below differ?
SGDClassifier(loss='log') and LogisticRegression(solver='sag') ?
AI: Logistic regression has different solvers {‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, which SGDClassifier does not have; you can read about the differences in the references that sklearn provides.
SGDClassifier is a more general linear model trained with stochastic gradient descent. In it you can specify the learning rate, the number of iterations and other parameters. There are also many identical parameters, for example l1 and l2 regularization.
If you select loss='log', then indeed the model turns into a logistic regression model. However, the biggest difference is that SGDClassifier can be trained batch by batch, using the partial_fit() method - for example, if you want to do online training, active learning, or training on big data.
That is, you can configure the learning process more flexibly and track metrics for each epoch, for example.
In this case, the training of the model will be similar to the training of a neural network. Moreover, you can create a neural network with 1 layer and 1 neuron and take the logistic loss function for training on tensorflow or pytorch or another framework. And you will get logistic regression again.
If you use only the fit function, then there should not be a big difference in the use of these models, except for the one introduced by various LogisticRegression solvers, which are not in SGD Classifier. |
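As an illustration of the mini-batch difference, a sketch with partial_fit (here batches stands for any iterable of mini-batches you provide; on scikit-learn versions before 1.1 use loss='log' instead of 'log_loss'):
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")   # logistic loss => logistic regression
classes = np.unique(y)                 # all classes must be declared on the first call
for X_batch, y_batch in batches:
    clf.partial_fit(X_batch, y_batch, classes=classes)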
H: Does Scikit-Learn's OneHotEncoder make all Columns Categorical?
I've been using Scikit-Learn's OneHotEncoder to turn categorical data into binary columns, however, it seems that fitting OneHotEncoder to a dataset with numerical and categorical variables causes it to make binary columns for the numerical data too.
I've tried searching the documentation for an explicit answer, but I can't find one. Does OneHotEncoder automatically avoid encoding numerical columns? If not, how can I make a pipeline with it without splitting and re-joining the dataframe?
AI: (My basic answer, post a more informed one and I'll accept it.)
Examples from this article show that the OneHotEncoder will encode for every unique value in a column.
You can check what's being encoded by using the OneHotEncoder().categories_ attribute. This attribute will give you a series of arrays that include all the unique values of each column that have been encoded for. If you feed numerical values into the OneHotEncoder you'll notice that these arrays contain every unique numerical value as a category.
To avoid this, you should pre-select your categorical columns and feed those to the OneHotEncoder. See the following SciKit-Learn Tutorial, and the reference for ColumnTransformer to see how this can be included in a pipeline. |
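A minimal sketch of that pre-selection using pandas dtypes (it assumes your categorical columns are stored as object or category dtype):
from sklearn.preprocessing import OneHotEncoder

categorical_cols = df.select_dtypes(include=["object", "category"]).columns
encoded = OneHotEncoder(handle_unknown="ignore").fit_transform(df[categorical_cols])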
H: How can I let Non-Transformed Values through Scikit-Learn's Column Transformer?
I'm working with a heterogenous dataset that includes numerical and categorical columns. I want to use Scikit-Learn's ColumnTransformer to OneHotEncode the categorical data, however, ColumnTransformer will only recombine the columns that I apply a transformer to.
I don't want to apply a transformer to the numerical columns, how can I let the non-transformed values through the function unchanged and included in the output?
AI: ColumnTransformer comes with an option called remainder, by default it's set to 'drop' which means columns that aren't used by a transformer are dropped from the dataframe.
Setting this option to remainder='passthrough' will let the unused columns pass through the function and join the returned dataframe unchanged.
The implementation in code looks like this:
ColumnTransformer(transformers=YOUR_TRANSFORMER_LIST_HERE,
remainder='passthrough') |
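A fuller sketch with assumed column names:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

preprocessor = ColumnTransformer(
    transformers=[("cat", OneHotEncoder(handle_unknown="ignore"), ["color", "city"])],
    remainder="passthrough")            # numeric columns are appended unchanged
X_out = preprocessor.fit_transform(df)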
H: How to interpret the sample_weight parameter in MiniBatchKMeans?
I am using scikit-learn MiniBatchKMeans to do text clustering.
In the fit() function there is a parameter sample_weight described as follows:
The weights for each observation in X. If None, all observations are
assigned equal weight (default: None).
How should I interpret it exactly?
For instance, if I assigned greater weight to certain observations, will the centroid be nearer to that observations?
AI: Yes, the parameter is available in the vanilla K-Means too.
The algorithm supports sample weights, which can be given by a parameter sample_weight. This allows assigning more weight to some samples when computing cluster centers and values of inertia. For example, assigning a weight of 2 to a sample is equivalent to adding a duplicate of that sample to the dataset X.
scikit-learn.org |
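For example, the weights are passed per observation at fit time:
from sklearn.cluster import MiniBatchKMeans

km = MiniBatchKMeans(n_clusters=5, random_state=0)
km.fit(X, sample_weight=w)   # w: one non-negative weight per row of X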
H: Is there a good systematic approach to explore and analyze data (prior to modelling)?
I have found a few examples of kernels on Kaggle people have made where they seem to follow a certain methodology in order to systematically analyze and explore the data, to make sure to find all outliers, missing values etc. They are just practical examples and leave quite a few questions.
I would just assume there must be one, or a few, recipes/methodologies/flowcharts that I can learn/use in order to have a good checklist and systematically work my way through my data and catch all data quality issues as well as note them in a well defined way. I.e. 1. Do this, 2. Do that, if result A do this, if result B note that for later this way 3. Do this etc.
What should I look into in order to find something like this? I can't imagine this not existing. It seems very much like Data Science Course 101 (which I am currently taking but it seems like a lousy course) but I can't find anything.
Any references to YouTube-videos or Coursera-courses that explain this would be welcome as well.
AI: I assume you are asking about tabular data not vision or NLP. It doesn't exist because 1) there is so much type of data and weird problems and 2) univariate EDA is generally not enough. I can detail you what I usually do for univariate analysis if this helps.
In the context of supervised ML, I have a generic R markdown file that is used to generate a Word report on a given variable. I like the Word format because I can annotate, comment and share the reports easily in a professional context. I use another R script to generate those reports for all of my variables. At some point I tried to do the same in Python but didn't manage to get something similar with easy formatting.
In this report I have by default (tweaks in parentheses), for continuous variables:
name of the variable, format
generate the univariate distribution plot and log plot (removing 0.001 and 0.999 quantile outliers and missing values; this can get tricky if the variable takes negative values - often I use a |x| * log(1+c*|x|) transformation - not optimal but ok).
same graphs but with distribution for positive / negative classes (scaling might be tricky for unbalanced problems)
some sort of partial dependence plot (group the variable into buckets and see the associated positive ratio)
a table with the main values (mean, median, min, max, extreme quantiles, count of missing values, count of weird encodings for NA; it also detects if some value is taken by more than 5% of instances and gives that count) and the associated positive ratio as calculated above
distribution (generally in log scale) regarding some of my main identification variables (think sex or race), time (to look if there is some drift) and geographical distribution of some average value.
This is relatively difficult to do as each variable is different and you will always have a weird variable causing some bug and formatting is a pain. As you can see it is quite dependent on your variables and the problem. Nothing easy (It took me around 150 tries to have it to work on all of my variable the first time).
Then you have to look at each report individually, this can take mutlitple days if you want to do it properly as there is no rule on what you are looking for exactly. Sometimes its a weird bump in distribution, sometimes it is missing values encoded incorrectly, sometimes it is a discrepency in categories, sometimes it is a very skewed variable. As the problems often depends on the data generating process, their solutions depend on that too and there is no general rule to deal with them.
At some point I tried to have something similar for categorical but didn't get anything satisfactory. Provided the category number is quite low I just do the count of positive class by category and some of the main table mentionned above.
Then things get weird as you go to multivariate EDA. What I usually do:
Some dimension reduction (UMAP) for a 2D plot to see if there are clusters or not, if the output is clustered too.
Some linear correlation analysis: correlation matrix + some clustering / dendrograms, but I rarely remove anything as I have imbalanced data sets where information may come from the difference between two highly correlated variables.
I abandon any idea of an iterative variable selection process and go with one simple model with a dozen variables (selected by experts) to create a benchmark (think glmnet), then use a model with strong regularisation on all my variables (xgboost or a vanilla NN).
Then I remove the 50%-80% of variables that have no importance in the big model.
After that, there is no rule (except answering your manager positively).
H: How can collaborative filtering be extended to include more features?
Looking at the following:
https://realpython.com/build-recommendation-engine-collaborative-filtering/#using-python-to-build-recommenders
I can see that userID, itemID, rating are the standard features used in a collaborative filtering model. However, my question is how can I incorporate more features into the model (such as the reviewText)?
AI: There are a lot of different ways to embed contextual user & item features into recommendation systems. These algorithms are often called "Hybrid Recommendation Systems".
The highest performers are usually big, complicated neural nets, but where I would start is with Factorization Machines (FMs).
FMs are a staple in the recsys community today, and are probably the best thing to try first if you're trying to build a hybrid recommendation model.
Some characteristics of FMs to be aware of:
They perform well with sparse data
FMs have linear complexity, so they train fast and once deployed they predict fast.
Very flexible in terms of input data, it can take any real valued feature vector.
They're much more easily interpretable than neural networks, if you know how to interpret a regression you're probably capable of figuring out what the FM is doing.
They are capable of both regression (ratings) and binary classification. In my experience you can mix implicit and explicit feedback by weighting interactions differently.
They have a very unique data model. You will notice that in the FM data matrix there are at least 4 sections: the user index, user features, item index, and item features. You will sometimes see an interaction-features section as well. The fundamental idea here is that each row is a record of all the data involved in the interaction between a user and an item (or interactions between whatever). The user & item feature sections will look more familiar; they are the features of the respective users and items. What may be new to you are the user and item indices. It's basically two separate one-hot encodings for the users and the items, but you should only ever have 1 user and 1 item switched on for any interaction. This will make more sense once you see the matrix visuals; I suggest staring at them for a while to wrap your head around it.
Some additional resources/further reading:
Jefkine Blog
Berwyn's Blog
FastFM Python library (This is an older library, it works well and is good for out of the box performance but if this is for a long term commercial project I would suggest looking for a more modern implementation)
LightFM Python Library (Another FM Library that I personally prefer over FastFM)
RankFM Python Library (Another FM Library specifically adapted for when you only have implicit feedback and want to rank/recommend items.) |
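To make the hybrid idea concrete, here is a minimal sketch using the LightFM library listed above. The user/item IDs and feature names are made up, and this is only meant to show where side information (e.g. features derived from reviewText) plugs in, not a production recipe:
from lightfm import LightFM
from lightfm.data import Dataset

# Hypothetical interactions and item side features
interactions_raw = [("u1", "i1"), ("u1", "i2"), ("u2", "i1")]
item_features_raw = [("i1", ["genre:comedy", "sentiment:positive"]),
                     ("i2", ["genre:drama", "sentiment:negative"])]

dataset = Dataset()
dataset.fit(users=["u1", "u2"],
            items=["i1", "i2"],
            item_features=["genre:comedy", "genre:drama",
                           "sentiment:positive", "sentiment:negative"])

interactions, weights = dataset.build_interactions(interactions_raw)
item_features = dataset.build_item_features(item_features_raw)

model = LightFM(loss="warp", no_components=32)   # 'warp' ranks items; other losses exist
model.fit(interactions, item_features=item_features, epochs=10)

# Score items 0 and 1 for internal user id 0
scores = model.predict(0, [0, 1], item_features=item_features)
The same pattern works with user features; text such as reviews is typically turned into item or interaction features (e.g. sentiment or topic indicators) before being fed in.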
H: Scaling the data iteratively one by one vs batch scaling
I have 2000 signals in a dataset of shape (2000, 400000) where each signal is recorded within the range -127, 128. I want to downscale each signal from (-127, 128) to (-1,1) to save memory space and also for better visualization. There are two approaches:
Approach 1:
Iteratively apply minmax_scale individually at each signal something like the following:
from sklearn.preprocessing import minmax_scale
data = read_dataset(...)
for i in range(len(data)):
    data[i] = minmax_scale(data[i], feature_range=(-1, 1))
Approach 2:
Fit the whole dataset using MinMaxScaler something like the following:
from sklearn.preprocessing import MinMaxScaler
data = read_dataset(...)
scalar = MinMaxScaler(feature_range=(-1,1))
scaled_data = scalar.fit_transform(data)
I use the first approach because the dataset does not fit in memory, but I am worried that it might be incorrect. I want to make sure my choice is sound in theory.
Thank you very much.
AI: With the first approach, you are completely disregarding the global scale of the signal and only focusing on the relative scale. This will most probably hurt the performance of whatever system you train on that data (or any analysis you perform on it), compared to training on the globally scaled data, unless the relative values are the only important pieces of information in your signals.
I noticed that you already know the range in which the data is defined: (-127, 128). If you want to scale your data, why don't you scale the data with a fixed computation like $((x + 127) \cdot 2 / 255) - 1$ ?
Anyway, I don't see how a mere linear change in scale would save memory or help visualization, which seem to be your final goal. |
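For illustration, a minimal sketch of that fixed, global rescaling — it needs no fitting, so it can be applied chunk by chunk and never requires the whole dataset in memory (read_chunk is a hypothetical generator, not part of your code):
import numpy as np

def scale_fixed(x):
    """Map values from the known range [-127, 128] to [-1, 1]."""
    return ((x.astype(np.float32) + 127.0) * 2.0 / 255.0) - 1.0

# Example usage, processing the dataset in blocks:
# for chunk in read_chunk("signals.dat"):
#     scaled = scale_fixed(chunk)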
H: Different strategies for dealing with features with multiple values per sample in python machine learning models
I have a dataset which contains pregnancy, maternal, foetal and children data and I am developing a predictive machine learning model to predict adverse pregnancy outcomes.
The dataset contains mostly features with a single value per pregnancy, e.g. maternalObesity = ["Yes", "No"]. However, I have some features that have multiple values per pregnancy, such as the foetal abdominal circumference and estimated foetal weight, which have been recorded multiple times at different points during gestation (so each pregnancy will have between 1 and 26 observations for each of these features), like so:
PregnancyID gestationWeek abdomcirc maternalObesity
1 13 200 Yes
1 18 240 Yes
1 30 294 Yes
2 11 156 No
2 20 248 No
So in pregnancy 1, we can see that the abdominal circumference was recorded 3 times at weeks 13, 18 and 30.
All questions I have seen here which have addresses the issue of multiple values per sample have been about categorical features, like this and this. Here the suggested solution was to OneHotEncode the features. However, like I said, this does not apply in my case as I have continuous (float) variables.
I have spent the last few months attempting different methods to best handle these features such that I don't lose any valuable information. Simply adding these features into my dataset would result in almost duplicate rows, as the vast majority of my samples have single values (like in the table above).
Here are some of the different approaches I have considered to handle these features:
Derive statistical values from the features, like here. So I compute the mean, maximum, minimum, variance, range, etc. of all the observations per pregnancy. However, the downfall of this approach is that the time at which the values were recorded is neglected. The time of the measurement may be significant, as a higher abdominal circumference earlier in pregnancy may be more correlated with the adverse outcome I am trying to predict.
Summarise the measurements into a fixed number of features by grouping them into 3 trimesters, like here. So I can group all measurements by the 3 trimesters, and each feature would hold the maximum measurement recorded during that trimester.
So my dataset will look like this:
PregnancyID MotherID abdomCirc1st abdomCirc2nd abdomCirc3rd
1 1 200 315 350
2 2 156 248 NaN
This approach takes into account the time range of the measurement, but will result in a lot of NaNs in the new features, as many pregnancies do not have a measurement for each trimester. Also, the maximum may result in some statistical information being lost, unlike approach 1.
I initially thought about using a python list for these features. However, I do not know if a machine learning model can handle this data type, and again, the time each measurement was taken is neglected in this approach.
So my data will look something like this:
PregnancyID maternalObesity abdomcirc
1 Yes [200, 240, 294]
2 No [156, 248]
In conclusion, I need some guidance as I have found a lack of examples and resources out there about this issue. So please advise what the best approach is in this case and if there are any detailed examples out there that address this issue I would appreciate it.
AI: You should turn "abdomCirc" into multiple separate features (e.g. one for each month or trimester).
Then handle the NaNs as they should be handled, i.e.
Remove columns that have more NaNs than a threshold
Try to fill NaN for cases where only a few are missing.
If we can't accommodate the above solution because of NaN counts, then we should accept that we don't have enough data and ignore the feature-set (i.e. abdomCirc) altogether.
Otherwise you may be at risk of finding a pattern that doesn't exist because of the very small amount of available data points for abdomCirc.
I initially thought about using a python list for these features. However, I do not know if a machine learning model can handle this data type
No model will accept this data type directly, i.e. a Python list.
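As a minimal sketch of the trimester-based reshaping discussed in the question (the trimester cut points are an assumption), the long-format table can be pivoted into fixed columns with pandas:
import pandas as pd

long_df = pd.DataFrame({
    "PregnancyID":   [1, 1, 1, 2, 2],
    "gestationWeek": [13, 18, 30, 11, 20],
    "abdomcirc":     [200, 240, 294, 156, 248],
})

# Assign each measurement to a trimester (the week boundaries here are illustrative)
long_df["trimester"] = pd.cut(long_df["gestationWeek"],
                              bins=[0, 13, 27, 45],
                              labels=["abdomCirc1st", "abdomCirc2nd", "abdomCirc3rd"])

# Keep the maximum measurement per pregnancy and trimester, then pivot to wide format
wide = (long_df.groupby(["PregnancyID", "trimester"])["abdomcirc"]
               .max()
               .unstack("trimester"))
print(wide)   # missing trimesters appear as NaN, which then need the handling described above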
H: Do the benefits of ridge regression diminish with larger datasets?
I have a question about ridge regression and about its benefits (relative to OLS) when the datasets are big. Do the benefits of ridge regression disappear when the datasets are larger (e.g. 50,000 vs 1000)? When the dataset is large enough, wouldn't the normal OLS model be able to determine which parameters are more important, thus reducing the need for the penalty term? Ridge regression makes sense when the data sets are small and there is scope for high variance, but do we expect its intended benefits (relative to OLS) to disappear for large datasets?
AI: One thing that you might be overlooking in your reasoning is the fact that more predictors do not necessarily lead to a better model. In fact, the more predictors you have, the higher the risk of collinearity between them - thus increasing the utility of ridge regression.
In fact, a second point:
The larger the number of predictors that you have in your model the more useful techniques like ridge regression are. This is because with so many predictors it is very difficult to determine if collinearity or other relationships exist between predictors. When you have a smaller model with say 5 predictors, you could verify this fairly easily. With 1,000 predictors this would be much more difficult.
In the case of the Lasso regression, the benefits are even more obvious. Since predictor coefficients values can be shrunk fully to zero this acts as a form of feature selection. Here, you are potentially removing predictors that capture redundant information.
Also - in answer to the second part of your question - OLS does not perform any form of feature selection. Therefore, it will not be able to pick out which parameters are the most important simply by adding more data. The more parameters you add to your model, the higher the variance gets. Why is this? You have more estimations to make and therefore, collectively, your model is less reliable. Consequently, your model will likely inflate the coefficients on the predictors, giving undue "weight" or importance to certain predictors.
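A small self-contained experiment with synthetic, nearly collinear data (the numbers are illustrative, not from the question) can make the comparison concrete:
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 1000, 50
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)      # nearly collinear pair of predictors
true_coef = rng.normal(size=p)
y = X @ true_coef + rng.normal(scale=5.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("OLS", LinearRegression()), ("Ridge", Ridge(alpha=10.0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: test MSE = {mean_squared_error(y_te, model.predict(X_te)):.2f}")
Whether ridge actually wins in a run like this depends on the noise level, the degree of collinearity and the sample size - which is the point: its benefit shrinks as n grows relative to p, but rarely vanishes when predictors are strongly correlated.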
H: Loss function for ReLu, ELU, SELU
Question
What is the loss function for each different activation function?
Background
The choice of the loss function of a neural network depends on the activation function. For sigmoid activation, the cross-entropy log loss results in the simple gradient form (z - label) * x for the weight update, where z is the output of the neuron.
This simplicity with the log loss is possible because of the derivative of the sigmoid, in my understanding. An activation function other than the sigmoid, which does not share this property, would not be a good combination with the log loss. Then what are the loss functions for ReLU, ELU, SELU?
References
Derivative of Binary Cross Entropy - why are my signs not right?
AI: The question requires some preliminary clarification IMHO. The choice of activation and loss function both depend on your task, on the kind of problem you want to solve. Here are some examples:
If you are training a binary classifier you can solve the problem with sigmoid activation + binary crossentropy loss.
If you are training a multi-class classifier with multiple classes, then you need softmax activation + crossentropy loss.
If you are training a regressor you need a proper activation function with MSE or MAE loss, usually. By "proper" I mean linear, in case your output is unbounded, or ReLU in case your output takes only positive values. There are countless examples.
With activation function here I refer to the activation of the output layer. The activation that performs the final prediction scores doesn't have to be (and usually isn't) the same as the one used in hidden layers.
ELU and SELU are typically used for the hidden layers of a Neural Network, I personally never heard of an application of ELU or SELU for final outputs.
Both choices of final activation and loss function depend on the task, this is the only criterion to follow to implement a good Neural Network. |
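To make the pairing concrete, here is a minimal PyTorch sketch of the three typical output-layer/loss combinations; the hidden activation (here SELU) is a free choice and independent of which output/loss pair you pick:
import torch
import torch.nn as nn

x = torch.randn(8, 16)                                   # dummy batch: 8 samples, 16 features
hidden = nn.Sequential(nn.Linear(16, 32), nn.SELU())     # hidden layers may use ReLU/ELU/SELU

# 1) Binary classification: 1 logit + BCEWithLogitsLoss (sigmoid applied inside the loss)
binary_head, binary_loss = nn.Linear(32, 1), nn.BCEWithLogitsLoss()
y_bin = torch.randint(0, 2, (8, 1)).float()
loss1 = binary_loss(binary_head(hidden(x)), y_bin)

# 2) Multi-class classification: C logits + CrossEntropyLoss (softmax applied inside the loss)
multi_head, multi_loss = nn.Linear(32, 5), nn.CrossEntropyLoss()
y_cls = torch.randint(0, 5, (8,))
loss2 = multi_loss(multi_head(hidden(x)), y_cls)

# 3) Regression: linear output (or ReLU for non-negative targets) + MSELoss / L1Loss
reg_head, reg_loss = nn.Linear(32, 1), nn.MSELoss()
y_reg = torch.randn(8, 1)
loss3 = reg_loss(reg_head(hidden(x)), y_reg)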
H: How to improve results from ML model? (spam classification)
I am trying to build a model that predicts if an email is spam/not-spam. After building a logistic regression model, I have got the following results:
precision recall f1-score support
0.0 0.92 0.99 0.95 585
1.0 0.76 0.35 0.48 74
accuracy 0.92 659
macro avg 0.84 0.67 0.72 659
weighted avg 0.91 0.92 0.90 659
Confusion Matrix:
[[577 8]
[ 48 26]]
Accuracy: 0.9150227617602428
The F1-score is the metric I am looking at. I am having difficulties explaining the results: I think they are very bad results! May I ask you how I could improve it?
I am currently considering a model that looks at the text of the emails (subject + body).
After Erwan's answer:
I oversampled the dataset and these are my results:
Logistic regression
precision recall f1-score support
0.0 0.94 0.77 0.85 573
1.0 0.81 0.96 0.88 598
accuracy 0.86 1171
macro avg 0.88 0.86 0.86 1171
weighted avg 0.88 0.86 0.86 1171
Random Forest
precision recall f1-score support
0.0 0.97 0.54 0.69 573
1.0 0.69 0.98 0.81 598
accuracy 0.77 1171
macro avg 0.83 0.76 0.75 1171
weighted avg 0.83 0.77 0.75 1171
AI: In your results you can observe the usual problem with imbalanced data: the classifier favors the majority class 0 (I assume this is class "ham"). In other words it tends to assign "ham" to instances which are actually "spam" (false negative errors). You can think of it like this: with the "easy" instances, the classifier gives the correct class, but for the instances which are difficult (the classifier "doesn't know") it chooses the majority class because it's the most likely.
There are many things you could do:
Undersampling the majority class or oversampling the minority class is the easy way to deal with class imbalance.
Better feature engineering is more work but it's often how to get the best improvement. For example I guess that you use all the words in the emails as features right? So you probably have too many features and that probably causes overfitting, try reducing dimensionality by removing rare words.
Try different models, for instance Naive Bayes or Decision Trees. Btw Decision Trees are a good way to investigate what happens inside the model. |
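As a minimal sketch of the first and third suggestions above (the train/test variables are assumed to come from your existing pipeline; the resampling part assumes the imbalanced-learn package, while class_weight is a lighter built-in alternative):
from imblearn.over_sampling import RandomOverSampler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Oversample only the training split, never the test split
ros = RandomOverSampler(random_state=42)
X_train_res, y_train_res = ros.fit_resample(X_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_train_res, y_train_res)
print(classification_report(y_test, clf.predict(X_test)))

# Lighter alternative: reweight the classes instead of resampling
clf_w = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)
print(classification_report(y_test, clf_w.predict(X_test)))
Note that resampling should be applied after the train/test split and only to the training data; otherwise the test scores become overly optimistic.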
H: Result of uniform weight initialization in all neurons
Background
cs231n has the question regarding how to initialize weights.
Question
Please confirm or correct my understanding. I think the weight values will be the same in all the neurons with ReLU activation.
When W = 0 or less in all neurons, the gradient update on W is 0. So W will stay 0. When W = 1 (or any constant), the gradient update on W is the same value in all neurons. So W will be changing but the same in all neurons.
AI: When the weights are the same, the output will be the same for all the neurons in every layer.
Hence, during backpropagation, all the neurons in a particular layer will get the same portion of the gradient, so the weights will change by the same amount.
But the very first layer (connected to the input features) will work normally, as its input is the actual features themselves, which are always different.
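A tiny numpy sketch (toy numbers, one hidden layer) makes the symmetry visible: with a constant initialization, every hidden neuron produces the same output and receives the same gradient, so the neurons stay identical after the update:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # 4 samples, 3 features
y = rng.normal(size=(4, 1))

W1 = np.full((3, 5), 0.5)            # constant init: every hidden neuron is identical
W2 = np.full((5, 1), 0.5)

h = np.maximum(0, X @ W1)            # ReLU hidden layer -> all 5 columns are equal
out = h @ W2
grad_out = 2 * (out - y) / len(X)    # dMSE/dout
grad_W1 = X.T @ ((grad_out @ W2.T) * (h > 0))   # gradient w.r.t. the hidden weights

print(np.allclose(h[:, :1], h))                  # True: identical activations per neuron
print(np.allclose(grad_W1[:, :1], grad_W1))      # True: identical per-neuron gradients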
H: Is the interval variable considered as a type of numerical variable or ordinal variable?
I have a fundamental question about interval variables. I have searched different tutorials but I am still not sure.
"An interval scale is one where there is order and the difference between two values is meaningful." from graphpad.com
"The interval variable is a measurement variable that is used to define values measured along a scale, with each point placed at an equal distance from one another. It is one of the 2 types of numerical variables and is an extension of the ordinal variable." from formpl.us
My question is about the last sentence above, which states that the interval variable is numerical and an extension of the ordinal. If we have an attribute 'age' given as ranges (bins) as follows:
Then based on the given definitions above, "age" must be an interval attribute. However, doesn't it seem to be more ordinal than numerical?
AI: This is the formal definition of an interval (from Wikipedia):
In mathematics, a (real) interval is a set of real numbers that
contains all real numbers lying between any two numbers of the set.
For example, the set of numbers x satisfying 0 ≤ x ≤ 1 is an interval
which contains 0, 1, and all numbers in between.
This definition doesn't tell us what to do to represent an interval for ML purposes... and that's normal :)
For ML purposes we need to decide a suitable representation, and there's rarely a mathematically pure or unique way to represent a mathematical object. Your confusion comes from the fact that the sites you mention propose their own representation, i.e. they consider a particular definition of interval, presumably because it suits their goal. The second one in particular is clearly defining an "interval variable" as an object in a particular language, with constraints which have nothing to do with the above definition.
Now to answer your question: an interval is not atomic, since it's not a single value. So strictly speaking it's neither a numerical nor an ordinal.
But you're probably not very satisfied with this answer, since you still want to be able to use this kind of feature. How? As usual, it depends:
Most of the time an interval can be converted to an ordinal value (note that it's a conversion, i.e. we simplify the data to make it usable). The conversion to ordinal is typically done by numbering the possible intervals (following their order) with integers. If the variable is going to be a label in a classification task, it can even be converted to a categorical variable.
However there are cases where an ordinal would not be a good representation, in particular if the intervals are not equal and the ML method involves distances. In such cases it's probably safer to convert the interval to a numerical value, typically the mean of the interval. This way both order and distances are preserved. |
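As a small illustration of both conversions (the age bins below are hypothetical, since the original table is not shown):
import pandas as pd

df = pd.DataFrame({"age": ["18-25", "26-35", "36-50", "18-25", "51-70"]})

# Option 1: ordinal encoding -- preserves only the order
order = ["18-25", "26-35", "36-50", "51-70"]
df["age_ordinal"] = pd.Categorical(df["age"], categories=order, ordered=True).codes

# Option 2: numerical encoding via the interval midpoint -- preserves order and (approximate) distance
midpoints = {"18-25": 21.5, "26-35": 30.5, "36-50": 43.0, "51-70": 60.5}
df["age_mid"] = df["age"].map(midpoints)
print(df)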
H: Modern methods for reducing dimensions and feature engineering
I am training a binary classifier in Python to estimate the level of risk of credit applicants. I extracted a little over a thousand independent variables to model the observed behavior of four million people. My target is a binary column that tells me whether or not a person defaulted on a loan (1 for event, 0 for non event).
I am asking this question because I feel overwhelmed by the dimensionality of the problem. I want to know some common and modern ways used to:
drop features (dimensionality reduction)
create new features based on combinations of other features (feature engineering)
So far, I dropped features based on their Information Values and kept only the most relevant ones. From the remaining set of features, I calculated each pair's correlation coefficient and for every highly correlated pair, I kept the feature with the highest information value of the two.
I now want to make new features based on this subset of remaining variables, such as ratios and multiplications (i.e. Number of open accounts divided by number of closed accounts). However, I think that my elimination process can be improved. My current method is very old school, as I have mostly relied on univariate analysis to drop features (one variable against the target).
AI: I'm not really knowledgeable about the modern techniques but I can tell you about the old ones: ;)
First, there are two main approaches to dimensionality reduction: feature selection and feature extraction. You're using the former, which consists of discarding some of the original features. The latter consists of some kind of "merging" of similar variables; it can be worth trying, especially if you have redundant features.
As you rightly noticed, feature selection based on individual features is rarely optimal. There are methods which can take the full set of features as a basis for selection, in particular genetic feature selection. |
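As a less univariate alternative to filtering on information value alone, here is a hedged sketch using scikit-learn's model-based selection (recursive feature elimination with cross-validation); X and y are assumed to be your feature matrix and default flag, and with four million rows you would probably run this on a subsample:
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

estimator = LogisticRegression(penalty="l2", max_iter=1000)
selector = RFECV(estimator,
                 step=0.1,                   # drop 10% of the remaining features per iteration
                 cv=StratifiedKFold(3),
                 scoring="roc_auc",
                 n_jobs=-1)
selector.fit(X, y)
print("Selected features:", selector.support_.sum())
This is a sketch of the idea rather than a prescription; any estimator exposing coefficients or feature importances can be used in place of the logistic regression.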
H: Train error vs. Test error in linear regression by samples analysis
I have run a multivariate linear regression model on a small set of about 3500 samples. While the model's error is as large as expected, I also ran a bias vs. variance analysis by comparing the train set error vs. the test set error using different sample sizes. I was expecting something like this:
But instead I found that the train error doesn't plateau at any point.
The code that generates this graph is the following:
def loss_by_sample(X, Y, testX, testY, learning_rate):
    samples_list = list()
    train_loss_list = list()
    test_loss_list = list()
    m = X.shape[0]
    for i in range(50, len(X), 50):
        samples_list.append(i)
        weights, loss = gradient_descent(learning_rate, X[:i], Y[:i])
        X2 = tf.concat([tf.ones([X.shape[0], 1]), X], 1)
        train_loss = (1/(2 * m) * tf.tensordot(tf.transpose(h(X2[:i], weights) - Y[:i]), (h(X2[:i], weights) - Y[:i]), axes=1))[0][0]
        train_loss_list.append(train_loss)
        X2 = tf.concat([tf.ones([testX.shape[0], 1]), testX], 1)
        test_loss = (1/(2 * m) * tf.tensordot(tf.transpose(h(X2, weights) - testY), (h(X2, weights) - testY), axes=1))[0][0]
        test_loss_list.append(test_loss)
    # plot train_loss_list, test_loss_list
I have tried different learning rates and the one I picked is the one that minimizes the loss using mean squared error:
(Values to the right of the x axis after the last datapoint are inf.)
Is there something I can conclude from this? Any reason why the train error can surpass the test error?
AI: It occurs because your data is noisy. Suppose that your model is
$$ y = X\beta + noise. $$
Suppose that $\beta$ is recovered exactly. Since the noise is present, our model will NOT predict exactly: there will always be some noise. In other words, think of it as if, every time we predict the values, there is some random perturbation. The amount of perturbation is of the same order each time. When we compute the error, we square these perturbations, sum them up, and then take the square root, i.e., compute the norm.
In your example, the size of the training set is growing and, therefore, the train error WITHOUT proper averaging is growing as well -- you sum up the squares of the perturbations over more and more samples while dividing by a fixed constant. The size of the test set is steady, so the norm of the error is about the same there.
Solution: use appropriate averaging for both the train and test sets.
train_loss = (1/(2 * i) * tf.tensordot(tf.transpose(h(X2[:i], weights) - Y[:i]), (h(X2[:i], weights) - Y[:i]), axes=1))[0][0]
test_loss = (1/(2 * X2.shape[0]) * tf.tensordot(tf.transpose(h(X2, weights) - testY), (h(X2, weights) - testY), axes=1))[0][0]
P.S. X2 in my formulas is as defined in your code. I would suggest choosing a different name for that variable.
H: First perform data augmentation or normalization?
Should I first perform data augmentation or normalization in deep learning? I am mainly interested in 2D and 3D input data. In tutorials that I have seen so far the data augmentation always comes first. Is there a (mathematical) reason for that? Would it also work vice versa?
AI: In the specific context of image/video problems, it may be okay to normalize the data before augmentation, because you already know that each feature (pixel) will have a value between 0 and 255. As long as you normalize w.r.t. the theoretical min and max value, then the order of normalization/augmentation shouldn't matter.
But in general, data augmentation should always come first. Otherwise, the normalized features may be incorrect after augmentation operators are applied. After all, you can think of augmentation as gathering additional data. The normalization computed on your original dataset may not be valid after gathering additional data.
To illustrate, suppose we have a dataset consisting of three 2x2 matrices like this:
$$
\begin{bmatrix}
0 & 0 \\
0 & 200
\end{bmatrix}
%
\begin{bmatrix}
50 & 100 \\
200 & 0
\end{bmatrix}
%
\begin{bmatrix}
100 & 200 \\
0 & 0
\end{bmatrix}
$$
And suppose we decide that one good way to augment the data is to flip one or more of the images horizontally. If we do that for the third image before normalizing, then the dataset becomes:
$$
\begin{bmatrix}
0 & 0 \\
0 & 200
\end{bmatrix}
%
\begin{bmatrix}
50 & 100 \\
200 & 0
\end{bmatrix}
%
\begin{bmatrix}
100 & 200 \\
0 & 0
\end{bmatrix}
%
\begin{bmatrix}
200 & 100 \\
0 & 0
\end{bmatrix}
$$
And after normalization, the dataset will be:
$$
\begin{bmatrix}
0.0 & 0.0 \\
0.0 & 1.0
\end{bmatrix}
%
\begin{bmatrix}
0.25 & 0.5 \\
1.0 & 0.0
\end{bmatrix}
%
\begin{bmatrix}
0.5 & 1.0 \\
0.0 & 0.0
\end{bmatrix}
%
\begin{bmatrix}
1.0 & 0.5 \\
0.0 & 0.0
\end{bmatrix}
$$
This is the correct encoding of our dataset, and it can be transformed back into the original feature space without errors.
Now what happens if we normalize first, then apply our augmentation operators?
Again we start with our dataset of three matrices:
$$
\begin{bmatrix}
0 & 0 \\
0 & 200
\end{bmatrix}
%
\begin{bmatrix}
50 & 100 \\
200 & 0
\end{bmatrix}
%
\begin{bmatrix}
100 & 200 \\
0 & 0
\end{bmatrix}
$$
This time we normalize first. Notice that the top-left feature has a range [0, 100] in our dataset, while all other features have a range [0, 200]. So after normalization the dataset becomes:
$$
\begin{bmatrix}
0.0 & 0.0 \\
0.0 & 1.0
\end{bmatrix}
%
\begin{bmatrix}
0.5 & 0.5 \\
1.0 & 0.0
\end{bmatrix}
%
\begin{bmatrix}
1.0 & 1.0 \\
0.0 & 0.0
\end{bmatrix}
$$
Now let's augment the dataset like we did before, by flipping the last matrix horizontally. The augmented dataset is:
$$
\begin{bmatrix}
0.0 & 0.0 \\
0.0 & 1.0
\end{bmatrix}
%
\begin{bmatrix}
0.5 & 0.5 \\
1.0 & 0.0
\end{bmatrix}
%
\begin{bmatrix}
1.0 & 1.0 \\
0.0 & 0.0
\end{bmatrix}
%
\begin{bmatrix}
1.0 & 1.0 \\
0.0 & 0.0
\end{bmatrix}
$$
Oh no! The last two matrices are identical in the normalized space. But we know that they should represent two different instances. And if we try to map these matrices back into their original form, we would get incorrect results: the last two matrices would be identical, even though we know one should be the horizontal flip of the other.
Usually the errors would not be this extreme, but this example shows why it is dangerous to normalize before augmentation. After augmentation, the features may not "line up" correctly with the original normalization.
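In practice, with torchvision this simply means placing the random augmentation transforms before ToTensor/Normalize in the pipeline, for example (the mean/std values below are placeholders, not computed from any particular dataset):
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.RandomHorizontalFlip(),      # augmentation first
    transforms.RandomRotation(10),
    transforms.ToTensor(),                  # then conversion to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # then normalization
                         std=[0.229, 0.224, 0.225]),
])
Since the per-channel mean/std here are fixed constants rather than fitted per-feature min/max values, the order matters less for images, as noted above, but keeping augmentation first is the safe default.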
H: NLP CounterVectorizer (sklearn), not able to get it to fit my code
I was starting an NLP project and simply get a "CountVectorizer()" output anytime I try to run CountVectorizer.fit on the list. I've had the same issue across multiple IDE's, and different code. I've looked online, and even copy and pasted other codes with their lists and I receive the same CountVectorizer() output.
My code is as follows:
cv = CountVectorizer()
messages = ['This is a good product', 'It was a bit pricey for what it does', 'I found good value here']
cv.fit(messages)
**output -----> CountVectorizer()**
I'm really stumped on this issue. Any advice would be greatly appreciated. Thanks.
Update: This seems to be a local issue as I am able to get it to fit on Colab. If anyone can suggest what might be going on I'd be ecstatic.
AI: This is normal: the fit method of scikit-learn's CountVectorizer object changes the object in place and returns the object itself, as explained in the documentation. This holds true for any sklearn object implementing a fit method, by the way.
When you run:
cv.fit(messages)
The cv object learns the transformation (here it is simply about counting words) and stores all the necessary parameters in order to be able to later apply the transformation to any new data point.
If you want to transform the data, you need to use first the fit method (to learn the transformation as said before) and then the transform method.
In your case if you want to transform your list of sentences you should do as follows:
cv = CountVectorizer()
messages = ['This is a good product', 'It was a bit pricey for what it does', 'I found good value here']
cv.fit(messages)
vectorized_messages = cv.transform(messages)
vectorized_messages
This will output:
<3x14 sparse matrix of type '<class 'numpy.int64'>'
with 15 stored elements in Compressed Sparse Row format>
So you will have successfully transformed your list of strings to a a matrix of token counts.
Note that sklearn objects often also implement a fit_transform method that does both at the same time:
cv = CountVectorizer()
cv.fit(messages)
vectorized_messages = cv.transform(messages)
#is the same as:
cv = CountVectorizer()
vectorized_messages = cv.fit_transform(messages)
H: How to get pixel location in after rotating the image?
I'm trying to rotate some images with bounding boxes, but I couldn't get the new bounding box. So if I have an image of 100x70 and I have a pixel at (19,39), and then I rotate the image by angle = 45,
how can I calculate the new position of this pixel?
AI: EDIT: As pointed out by Frank, the original answer mixed up row vector and column vector affine transformations. The formulas have been edited to use row vectors to be consistent with the referenced Mathworks page.
You first need to define where you are rotating your image around.
If you are rotating it around (0,0) (i.e. your image is located in the positive x and y quadrant), you can apply a rotation matrix in its vanilla form:
$\left[\begin{array}{c}x' \\y' \\1\end{array}\right]^T=$
$\left[\begin{array}{c}x \\y \\1\end{array}\right]^T
\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta & \cos \theta &0 \\0 &0 &1\end{bmatrix}$=$\left[\begin{array}{c}x\cos\theta + y\sin\theta \\-x\sin\theta+y\cos\theta \\1\end{array}\right]^T$
where x and y is your starting pixel location, x' and y' is your new pixel location, and $\theta$ is the counterclockwise angle to rotate. T means to transpose and is mostly there so that the row vector fits on the screen. You can ignore the 3rd row for now. Notice that because you are using sine and cosine functions, the new pixel location is unlikely to be an integer, so you will need to do some rounding.
However, if you are rotating around the center of the image (which is the more common use-case), you will need to first translate the "origin" of your image to the center, apply your rotation, then move your image back.
This alters the calculations to this:
$\left[\begin{array}{c}x' \\y' \\1\end{array}\right]^T$
$=\left[\begin{array}{c}x \\y \\1\end{array}\right]^T$
$\begin{bmatrix}1 &0 &0\\0 &1 &0 \\-X/2 &-Y/2 &1\end{bmatrix}$
$\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta & \cos \theta &0 \\0 &0 &1\end{bmatrix}$
$\begin{bmatrix}1 &0 &0\\0 &1 &0 \\X/2 &Y/2 &1\end{bmatrix}$
$=\left[\begin{array}{c}(x-X/2)\cos\theta + (y-Y/2)\sin\theta + X/2 \\-(x-X/2)\sin\theta+(y-Y/2)\cos\theta + Y/2 \\1\end{array}\right]^T$
where capital X and Y are the size of your image in the x and y directions.
You can read more about image transformations on the mathworks page here. |
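A minimal numpy sketch of the centre-rotation formula above (the pixel and angle come from the question; whether the angle is measured clockwise or counterclockwise depends on your image library's convention, so double-check the sign against an actual rotated image):
import numpy as np

def rotate_pixel(x, y, theta_deg, W, H):
    """Rotate pixel (x, y) by theta degrees around the centre of a W x H image."""
    t = np.deg2rad(theta_deg)
    xc, yc = x - W / 2, y - H / 2                # translate the centre to the origin
    xr = xc * np.cos(t) + yc * np.sin(t)         # rotate
    yr = -xc * np.sin(t) + yc * np.cos(t)
    return round(xr + W / 2), round(yr + H / 2)  # translate back and round to a pixel

print(rotate_pixel(19, 39, 45, W=100, H=70))
For a bounding box, apply this to all four corners and take the min/max of the resulting coordinates to get the new axis-aligned box.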
H: Confused about polynomial regression with multiple variables
I'm trying to create a multivariable polynomial regression model from scratch but I'm getting kind of confused by how to structure it.
So, I have an array of feature vectors such that each vector can be displayed like so:
[height, weight, age]
I know with multivariable linear regression I would create an algorithm like so:
y=B0+B1*x0+...Bn*xn
Where x0 would be the first element of each in the feature vector.
So for multiple variable polynomial regression would it go something like this:
y = B0+B1*x0+B2*x1**2+...Bn*Xn**d
Where d is the degree of the polynomial. Apologies if this is painstakingly obvious and formatted badly, I'm just a small bit lost.
AI: Not exactly.
Polynomial regression means that the relationship in the dataset is not linear, so we transform the features to a chosen polynomial degree (based on the dataset) so that we can still fit a linear model.
Decide a polynomial degree first, let's say 2
$y=b_0+b_1*x_0^2+b_2*x_1^2+...b_n*x_n^2$
If we want to add feature interaction,
$y=b_0+b_1*x_0^2+b_2*x_1^2+b_3*x_0*x_1....$
Basically, you have to implement the equivalent of this scikit-learn class:
PolynomialFeatures |
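For reference, a minimal sketch of how this looks with scikit-learn itself (note that PolynomialFeatures by default also keeps the degree-1 terms and the interaction terms; the feature values and targets below are toy numbers):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

X = np.array([[170, 65, 30],     # [height, weight, age]
              [160, 55, 25],
              [180, 85, 40]])
y = np.array([1.0, 0.8, 1.5])

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)
print(model.predict([[175, 70, 35]]))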
H: For very simple linear regression can we quantify the prediction accuracy hit between using one hot encoding and simple numerical mapping?
Suppose I had a simple linear regression model that had the following input or X variable:
[North]
[East]
[West]
[South]
[North, East]
...
[North, East, West, South]
and I decided to numerate them like:
[North] - 0
[East] - 1
[West] - 2
[South] - 3
[North, East] - 4
...
[North, East, West, South] - 15
I had someone take a look at my model and tell me to use One Hot Encoder or One Hot Binary Encoder instead of assigning inputs like this.
My question is: from a linear regression perspective, what is the advantage of using OHE over my simple numerical mapping? If we can quantify an accuracy loss, would it be substantial? If I had 10 model variables that I had to map like this, would the loss be more substantial?
I want to know what sacrifice I'm making not using OHE
AI: The issue with numerical encoding in this context is that you are enforcing that your input variable X is ordinal when it's likely not. This tells your model that the encoded values increase or decrease monotonically with your target. Let's say you encoded your data like this:
[North] - 0
[East] - 1
[West] - 2
[South] - 3
[North, East] - 4
...
[North, East, West, South] - 15
If you train a linear regression model with this encoding you are telling your model that [North] either indicates a higher or lower target than [North, East, West, South], and that [East], [West], [South], and [North, East] are somewhere in between. What if [West] typically has a lower target than either [North] or [North, East, West, South]? In this case you would be enforcing some constraint on your model which is not true.
To test this you could split your input data in a 70/30 train-test-split and evaluate how your numerical encoding performs against one-hot-encoding on the test set. Another alternative would be to look into Target Encoding - this would allow you to keep a small feature space (an issue with one-hot-encoding) while attempting to keep your input and target increasing/decreasing monotonically. I would recommend testing all three encoding methods to figure out the ideal solution for your given problem. |
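A hedged sketch of that comparison (df and its 'directions' and 'target' columns are placeholders standing in for your own data):
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X_ord = df["directions"].astype("category").cat.codes.to_frame()   # arbitrary integer mapping
X_ohe = pd.get_dummies(df["directions"])                            # one column per combination
y = df["target"]

for name, X in [("ordinal", X_ord), ("one-hot", X_ohe)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print(name, r2_score(y_te, model.predict(X_te)))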
H: Is it right to maintain the train distribution in test set for unbalanced data?
If the training set is unbalanced, chances are the model will be biased. But if the data distribution in the test set is the same as in the train set, this kind of bias is not going to affect validation accuracy. My question is: is this the right thing to do? Isn't it cheating? What if we want to use the model for a commercial business where we have no idea what the distribution of the data will be? In this case, what is the right thing to do?
AI: If the training set was unbalanced the chances are the model will be biased.
Not really; it depends on the loss function you use. Also, note that for data to be considered unbalanced the classes usually have to be in a proportion of at least 1/100.
The rest of the questions:
ML is based on the hypothesis that train and test look alike. Oversampling methods can help at training time; still, in validation and test you should not oversample - validate with the real data.
Use your evaluation metrics on the real test set, with the test distribution, and don't oversample there.
we want to use the model for commercial business that we have no idea how the distribution of data would be?
If you have no idea of how the distribution will be, you have a problem. The hypothesis people make is that the future distribution resembles the current real distribution (last week, last month, one year ago, ...).
H: PCA: after transforming the data, is it still the same as the original data (if the same dimensionality is kept)?
After the PCA step, I tried to plot the data (see pic).
For example, the original data (A) has 2 features => after PCA I keep 2 components.
If I do not reduce the dimensionality at all, does the transformed data still have the same meaning as the original data? (The picture looks a bit different.)
It is a circle example for classification data.
Does the data still look similar after the PCA step?
AI: As you can see from your pics there is a change.
From an algebra point of view, PCA is a change of basis. The interest of the algorithm is how the target basis is designed: it is designed so as to explain the variance in the dataset. Dimension reduction is achieved through removing the basis vectors with the least associated variance. In other words: PCA is a change to a basis that is interesting in terms of variance explained, which might be followed by a projection onto a sub-basis.
When you remove the dimension reduction component you still have a change of basis, which in the general case amounts to some rotation, translation and scaling, but no significant change in the underlying data. That seems to be what happens in your second graph: a rotation by 45° plus some scaling.
H: Principal components analysis need standardization or normalization?
Does principal component analysis need standardization or normalization?
After some googling, I am confused. PCA needs the features to be on the same scale, so which should I use?
Which technique needs to be applied before PCA?
Does PCA need standardization? (The standardized mean will always be zero, and the standard deviation will always be one.)
Does PCA need normalization? (Range zero to one.)
Or both?
AI: The purpose of PCA is to find the directions that maximize the variance. If the variance of one variable is higher than the others, the PCA components will be biased in that direction.
So the best thing to do is to make the variance of all variables the same. One way of doing this is by standardizing all the variables.
Normalization does not make all variables have the same variance.
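In scikit-learn this is typically done by chaining a StandardScaler and PCA, for example (X is assumed to be your feature matrix):
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

pca_pipeline = make_pipeline(StandardScaler(), PCA(n_components=2))
X_pca = pca_pipeline.fit_transform(X)
print(pca_pipeline.named_steps["pca"].explained_variance_ratio_)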
H: What do the parameters used in crop mean?
When we have an image to be used as an input to a CNN and we want to classify only part of the image, we usually feed the classifier with a crop of the image.
Let's say my image is called frame and x, y, w and h are xmin, ymin, xmax and ymax, respectively:
frame = frame[y:y + h, x:x + w] #Crop a part of the image
What does y:y + h or x:x + w mean, and why do we add h and w to y and x, respectively?
I've been seeing some people performing the crop in the following way:
frame = frame[y:h, x:w] #Crop a part of the image without adding to `w` and `h`
I saw the second approach being used in some places like in the following line: https://github.com/balajisrinivas/Face-Mask-Detection/blob/master/detect_mask_video.py#L51
What's the difference?
AI: Lets say my image is called frame and x, y, w and h are xmin, ymin, xmax and ymax
You're confusing $w$ with $xmax$ and $h$ with $ymax$: Usually $w$ is the width of the crop whereas $xmax$ is the horizontal position of the end of the crop. Similarly $h$ is the height and $ymax$ is the vertical position of the end of the crop.
Logically since $x$ is the (horizontal) start of the crop and $w$ is the width, we can obtain $xmax$ like this: $xmax=x+w$.
Example: in a 100x100 image, let's say we want to crop a 20x20 square in the centre: $x=40, y=40, w=20, h=20, xmax=60, ymax=60$.
In the following code:
frame = frame[y:y + h, x:x + w]
the operator : is used to represent a sequence (for instance 3:7 means 3,4,5,6) so y:y + h represents the sequence from y to y+h, i.e. from $y$ to $ymax$. Same for x+w, so this line would select the part of the array corresponding to the crop.
Your second example is wrong due to the same confusion, the actual code is:
face = frame[startY:endY, startX:endX]
In this case the author is directly using the end coordinate endY (same as $ymax$) instead of calculating it as startY+h. |
H: Does testing a training dataset guarantee successful results?
If I test an image that has been previously used to train a classification model, is it guaranteed to classify correctly?
My guess is that since the parameters have been trained with other images as well, there is no guarantee of getting a correct classification, just a high probability.
AI: This is correct, there's no guarantee at all, not even a high probability. As usual, it depends on the type of model, the data, and the number and distribution of the classes.
However there's of course a higher chance that the instance would be correctly classified. That's why one shouldn't use a test set containing training instances to estimate the performance of the model, since there's a high risk the performance would be overestimated (data leakage). |
H: How do I interpret loss and accuracy per epoch while training a CNN?
I am really new to Neural Networks, and I am training a CNN for image classification, and while training, I get the following:
which tells me the training loss and accuracy and the validation loss and accuracy; someone please correct me if I am wrong.
So, my question is:
What precisely are these quantities? And is there a way to tell whether I am doing well on my problem by looking at them?
AI: "loss" refers to the loss value over the training data after each epoch. This is what the optimization process is trying to minimize with the training so, the lower, the better.
"accuracy" refers to the ratio between correct predictions and the total number of predictions in the training data. The higher, the better. This is normally inversely correlated with the loss, but not always.
The "validation" counterparts are the same concepts but computed over the validation data, which is not used for training and hence "unseen" to your model. If the training loss and accuracy are good but the validation counterparts are bad, it means that your model is "overfitting", as it can't generalize to unseen data.
Normally, these measures are plotted together into a training and validation loss plot and a training and validation accuracy plot, so that you can better evaluate their behavior over time. You can see some examples on how to interpret different trends here.
In your case, the training is not finished, so we can't be sure. From what we can see, the accuracy is less than 30%, so it is bad, but maybe more training leads to improvements. |
H: Searchable list of Kaggle challenges
Is there a way to search Kaggle for a list of current and prior challenges, or an external site that does that?
When I search on Kaggle it will only bring up solution notebooks and datasets, it doesn't seem to have a filter on challenge page.
AI: Have you checked the competition's stage?
In there you will see all the active and concluded contests
https://www.kaggle.com/competitions |
H: Concept of xml files for haar cascade in object detection with opencv?
I've just started ML, and I joined a project on object detection where Haar cascades are used and XML files are loaded via OpenCV for detection. I would like to understand why XML files are used. Also, can I get such files for detecting other kinds of objects?
AI: XML files are generated when you train the Haar Cascade Classifier. From the OpenCV docs:
Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper, "Rapid Object Detection using a Boosted Cascade of Simple Features" in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images
More information on Haar cascades can be found here.
Some tutorials for training Haar Cascades are available.
Link 1
Link 2
Link 3 |
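For illustration, a minimal OpenCV snippet that loads one of the pre-trained XML cascades shipped with the opencv-python package and runs it on an image (the image path is a placeholder):
import cv2

# cv2.data.haarcascades points to the folder with the bundled XML files
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")                      # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                         # draw a box around each detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
For other targets you can either use other pre-trained XMLs (eyes, full body, etc.) or train your own cascade as described in the linked tutorials.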
H: Evaluation of recommendation systems
I have developed a content-based recommendation system and it is working fine. The input is a set of documents={d1,d2,d3,...,dn} and the output will be Top N similar documents for a given document output={d10,d11,d1,d8,...}. I eyeballed the results and found it to be satisfactory, the question I have is how do I measure the performance, accuracy of the system.
I did some research and found that recall, precision, and F1-score are used to evaluate recommendation systems that predict user ratings. For this, we should know the original ratings, the system should predict the ratings, and then we can plot the confusion matrix and compute the aforementioned metrics. However, in my case, I don't predict anything; instead I measure the cosine similarity score, sort it in descending order and pick the top N.
In this use case, how do I evaluate the system?
Thanks
AI: There's some confusion about different kinds of output and their corresponding evaluation:
One can consider the top N results as predicted positive, any result lower than N as predicted negative. In this option one can use binary classification evaluation measure: precision, recall, f1-score would be the standard measures in this case.
One can consider the ratings/scores assigned to the full set of results. In this case there are two options:
if the numerical results are comparable, e.g. same kind of rating, then standard regression evaluation measures can be used, for instance RMSE.
if not, then it's still possible to compare the order of the results. Spearman rank correlation is a common evaluation measure in this case.
It seems that in your case you could use either the classification or the ranking evaluation measures. Of course, any of these evaluation methods requires gold standard results in order to compare the predictions against them. |
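For illustration, a small sketch of both kinds of evaluation for a single query document, assuming you have a gold-standard list of relevant documents (hard-coded here) to compare against:
from scipy.stats import spearmanr

# Top-N documents returned by the system (highest cosine similarity first)
predicted_top = ["d10", "d11", "d1", "d8", "d3"]
# Documents a human judged relevant for the same query document
gold_relevant = {"d10", "d1", "d7", "d3"}

# Classification-style: precision@N and recall@N
hits = [d for d in predicted_top if d in gold_relevant]
precision_at_n = len(hits) / len(predicted_top)
recall_at_n = len(hits) / len(gold_relevant)

# Ranking-style: Spearman correlation between predicted and gold ranks of shared items
predicted_rank = {d: r for r, d in enumerate(predicted_top)}
gold_rank = {"d10": 0, "d1": 1, "d3": 2, "d7": 3}
shared = [d for d in predicted_top if d in gold_rank]
rho, _ = spearmanr([predicted_rank[d] for d in shared], [gold_rank[d] for d in shared])
print(precision_at_n, recall_at_n, rho)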
H: How is the target variable passed to the final estimator in this pipeline?
There is a pipeline like below. X is features and y is the target variable.
I would like to know how y is passed to the estimator, LinearSVC. As far as I know, StandardScaler returns only the transformed X. So, I thought that y was not passed to LinearSVC. However, this code worked and I could make predictions. Thus, I would like to know how y reached the final estimator.
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
svm_clf = Pipeline([
("scaler",StandardScaler()),
("linear_svc",LinearSVC(C=1,loss="hinge")),
])
svm_clf.fit(X,y)
AI: The Pipeline object performs a .fit_transform(X, y) sequentially, based on how you structure your pipeline, on every step up to just before the last one (the estimator). The estimator does not perform a transform because its purpose is not to transform your input array X, but to make a prediction. The fit_transform method performs a fit, then a transform, on that pipeline step when it is called. In your case that means:
Your standard scaler is fit on X (y is passed to its fit method but not used)
Your standard scaler transforms your input data X
Your LinearSVC is fit based on your previously transformed X and your un-transformed target y
Since your LinearSVC is the last step of the pipeline object and doesn't have a transform method, the process stops. Your Pipeline object, which contains the StandardScaler and the LinearSVC, now contains a fitted version of both these objects. Now you can call .predict() on this pipeline, which will cause the fitted StandardScaler to perform a .transform() on the input X; this transformed X is then passed to the final estimator LinearSVC for a .predict(), which returns your predicted y array.
H: How do I deal with the fact that I have images which are not consistent with the class they belong in an image classification problem with CNN?
I am really new to Neural Networks and to Machine Learning in general, and I have been given a dataset composed by images for performing multi-class image classification with a CNN.
The images were already divided into classes, and looking at the images I noticed that some of them are completely different from the class they belong to. For example, if I have a class Fruits, with images of fruits, in the folder of this class I also have some pictures of cars, people, ..., which of course are not fruits and do not belong to any other class in the classification problem.
This creates some problems when I train my CNN and results in low accuracy; in fact I cannot go above 0.5.
How do I deal with the fact that I have images which are not consistent with the class they belong?
AI: There are different ways of doing this, but the final idea in regard to this is that you need to clean your dataset.
You could go through it manually and then separate the images. This is extremely slow if you're dealing with a large dataset.
A faster, less robust way would be to do a principal component analysis (PCA):
Step 1: Do PCA to reduce the dimensionality to a 2D or 3D space
Step 2: Plot and see if there are any clusters. Usually, images which are different fall into different clusters
Step 3: Cluster it via a convex clustering algorithm like k-means
Step 4: Store the images belonging to a cluster into a folder
Step 5: Go through the folders and do the necessary cleaning
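A hedged sketch of steps 1-4 with scikit-learn; the image loading is assumed to be done already, with images an array of flattened images of equal size, and the number of clusters is a guess you would tune by inspection:
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

images = np.asarray(images, dtype=np.float32) / 255.0          # (n_images, height*width*channels)

embedding = PCA(n_components=2).fit_transform(images)           # step 1: reduce to 2D
# step 2: plot `embedding` with matplotlib and inspect the clusters visually

labels = KMeans(n_clusters=5, n_init=10).fit_predict(embedding) # step 3: cluster

for cluster_id in np.unique(labels):                            # step 4: group indices per cluster
    idx = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(idx)} images")            # then copy/move the files accordingly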
H: Trained BERT models perform unpredictably on test set
We are training a BERT model (using the Huggingface library) for a sequence labeling task with six labels: five labels indicate that a token belongs to a class that is interesting to us, and one label indicates that the token does not belong to any class.
Generally speaking, this works well: loss decreases with each epoch, and we get good enough results. However, if we compute precision, recall and f-score after each epoch on a test set, we see that they oscillate quite a bit. We train for 1,000 epochs. After 100 epochs performance seems to have plateaued. During the last 900 epochs, precision jumps constantly to seemingly random values between 0.677 and 0.709; recall between 0.729 and 0.798. The model does not seem to stabilize.
To mitigate the problem, we already tried the following:
We increase the size of our test data set.
We experimented with different learning rates and batch sizes.
We used different transformer models from the Huggingface library, e.g. RoBERTa, GPT-2 etc.
Nothing of this has helped.
Does anyone have any recommendations on what we could do here? How can we pick the “best model”? Currently, we pick the one that performs best on the test set, but we are unsure about this approach.
AI: BERT-style finetuning is known for its instability. Some aspects to take into account when having this kind of issues are:
The number of epochs typically used to finetune BERT models is normally around 3.
The main source of instability is that the authors of the original BERT article suggested using the Adam optimizer but disabling the bias compensation (such a variant became known as "BertAdam").
Currently, practitioners have shifted from Adam to AdamW as optimizer.
It is typical to do multiple "restarts", that is, train the model multiple times and choose the best performing one on the validation data.
Model checkpoints are normally saved after each epoch. The model we choose is the checkpoint with the best validation loss among all epochs of every restart we tried.
There are two main articles that study BERT-like finetuning instabilities that may be of use to you. They describe in detail most of the aspects I mentioned before:
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
Revisiting Few-sample BERT Fine-tuning |
H: Machine Learning - Precision and Recall - differences in interpretation and preferring one over other
I have summarised this from a lot of blogs about precision and recall.
Precision is:
Proportion of instances predicted as positive by the classifier that are actually positive,
meaning: out of the samples identified as positive by the classifier, how many are actually positive?
and Recall is:
Proportion of actual positives that were correctly predicted as positive,
meaning: out of the ground-truth positives, how many were correctly identified by the classifier as positive?
That sounded very confusing to me. I couldn't interpret the difference between them or relate each one to real examples. Some very small questions about interpretation that I have are:
If avoiding false positives matters the most to me, I should be measuring precision; and if avoiding false negatives matters the most to me, I should be measuring recall. Is my understanding correct?
Suppose, I am predicting if a patient should be given a vaccine, that when given to healthy person is catastrophic and hence should only be given to an affected person; and I can't afford giving vaccine to healthy people. assuming positive stands for should-give-vaccine and negative is should-not-give-vaccine, should I be measuring Precision? or Recall of my classifier?
Suppose, I am predicting if an email is spam(+ve) or non-spam(-ve). and I can't afford a spam email being classified as non-spam, meaning can't afford false-negatives, should I be measuring Precision? or Recall of my classifier?
What does it mean to have high precision (> 0.95) and low recall (< 0.05)? And what does it mean to have low precision (< 0.05) and high recall (> 0.95)?
Put simply, in what kind of cases is it preferable or a good choice to use precision over recall as the metric, and vice versa? I get the definitions but I can't relate them to real examples to tell when one is preferable over the other, so I would really like some clarification.
AI: To make sure everything is clear let me quickly summarize what we are talking about. precision and recall are evaluation measures for binary classification, in which every instance has a ground truth class (also called gold standard class, I'll call it 'gold') and a predicted class, both being either positive or negative (note that it's important to clearly define which one is the positive one). Therefore there are four possibilities for every instance:
gold positive and predicted positive -> TP
gold positive and predicted negative -> FN (also called type II errors)
gold negative and predicted positive -> FP (also called type I errors)
gold negative and predicted negative -> TN
$$Precision=\frac{TP}{TP+FP}\ \ \ Recall=\frac{TP}{TP+FN}$$
In case it helps, I think a figure such as the one on the Wikipedia Precision and Recall page summarizes these concepts quite well.
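In scikit-learn these quantities can be computed directly from the gold and predicted labels, e.g.:
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_gold = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = positive class
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_gold, y_pred).ravel()
print(tp, fp, fn, tn)                      # 3, 1, 1, 3
print(precision_score(y_gold, y_pred))     # 3 / (3 + 1) = 0.75
print(recall_score(y_gold, y_pred))        # 3 / (3 + 1) = 0.75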
About your questions:
if avoiding false-positives matter the most to me, i should be measuring precision; And if avoiding false-negatives matters the most to me, i should be measuring recall. Is my understanding correct?
Correct.
Suppose, I am predicting if a patient should be given a vaccine, that when given to healthy person is catastrophic and hence should only be given to an affected person; and I can't afford giving vaccine to healthy people. assuming positive stands for should-give-vaccine and negative is should-not-give-vaccine, should I be measuring Precision? or Recall of my classifier?
Here one wants to avoid giving the vaccine to somebody who doesn't need it, i.e. we need to avoid predicting a positive for a gold negative instance. Since we want to avoid FP errors at all cost, we must have a very high precision -> precision should be used.
Suppose, I am predicting if an email is spam(+ve) or non-spam(-ve). and I can't afford a spam email being classified as non-spam, meaning can't afford false-negatives, should I be measuring Precision? or Recall of my classifier?
We want to avoid false negative -> recall should be used.
Note: the choice of the positive class is important, here spam = positive. This is the standard way, but sometimes people confuse "positive" with a positive outcome, i.e. mentally associate positive with non-spam.
What does it mean to have high precision(> 0.95) and low recall(< 0.05)? And what does it mean to have low precision(> 0.95) and high recall(< 0.05)?
Let's say you're a classifier in charge of labeling a set of pictures based on whether they contain a dog (positive) or not (negative). You see that some pictures clearly contain a dog so you label them as positive, and some clearly don't so you label them as negative. Now let's assume that for a large majority of pictures you are not sure: maybe the picture is too dark, blurry, there's an animal but it is masked by another object, etc. For these uncertain cases you have two possible strategies:
Label them as negative, in other words favor precision. Best case scenario, most of them turn out to be negative so you will get both high precision and high recall. But if most of these uncertain cases turn out to be actually positive, then you have a lot of FN errors: your recall will be very low, but your precision will still be very high since you are sure that all/most of the ones you labeled as positive are actually positive.
Label them as positive, in other words favor recall. Now in the best case scenario most of them turn out to be positive, so high precision and high recall. But if most of the uncertain cases turn out to be actually negative, then you have a lot of FP errors: your precision will be very low, but your recall will still be very high since you're sure that all/most of the true positives are labeled as positive.
Side note: it's not really relevant to your question, but the spam example is not very realistic for a case where high recall is important. Typically high recall matters in tasks where the goal is to find all the potential positive cases: for instance a police investigation trying to find everybody who might have been at a certain place at a certain time. Here FP errors don't matter much since detectives are going to check afterwards, but an FN error could mean missing a potential suspect. |
H: BERT uses WordPiece, RoBERTa uses BPE
In the original BERT paper, section 'A.2 Pre-training Procedure', it is mentioned:
The LM masking is applied after WordPiece tokenization with a uniform masking rate of 15%, and no special consideration given to partial word pieces.
And in the RoBERTa paper, section '4.4 Text Encoding' it is mentioned:
The original BERT implementation (Devlin et al., 2019) uses a
character-level BPE vocabulary of size 30K, which is learned after
preprocessing the input with heuristic tokenization rules.
I appreciate if someone can clarify why in the RoBERTa paper it is said that BERT uses BPE?
AI: BPE and word pieces are fairly equivalent, with only minimal differences. In practical terms, their main difference is that BPE places the @@ at the end of tokens while wordpieces place the ## at the beginning.
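As a rough illustration, here is a sketch using the Hugging Face transformers library; the exact token splits shown in the comments are indicative only, and note that RoBERTa's byte-level BPE marks word boundaries with a 'Ġ' prefix rather than the '@@' suffix convention:
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # WordPiece
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")     # byte-level BPE

print(bert_tok.tokenize("unaffable"))
# something like ['una', '##ffa', '##ble'] -- continuation pieces carry a '##' prefix
print(roberta_tok.tokenize("unaffable"))
# something like ['un', 'aff', 'able'] -- no '##'; a 'Ġ' prefix would mark a preceding space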
Therefore, I understand that the authors of RoBERTa take the liberty of using BPE and wordpieces interchangeably. |
H: Problem with convergence of ReLu in MLP
I created neural network from scratch in python using only numpy and I'm playing with different activation functions. What I observed is quite weird and I would love to understand why this happens.
The problem I observed depends on initial weights. When using sigmoid function it does not matter that much if weights are random numbers in ranges of [0,1] or [-1,1] or [-0.5,0.5]. But when using ReLu the network very often has a huge problem with ever converging when I'm using random weights in range [-1,1]. But when I changed the range of initialization of weights to [-0.5,0.5] it started to work. This only applies to ReLu activation function and I totally don't get it why it won't work for [-1,1]. Shouldn't it be able to converge with any random weights?
Also when I changed initial weights to normal distibution, it has no problem with convergence. I understand that normal distribution should work better and faster than random [-1,1]. What I don't understand is why it can't converge (error remains the same epoch after epoch) with [-1,1] and has no problem with converging with normal distribution... Shouldn't it always be able to converge just slower and faster with different initialization method?
PS. I'm using normal backpropagation with softmax as last layer and MSE as loss function
AI: I will start with a toy example for the convergence part. Suppose that the loss function is $f(x) = x^4$ and we want to minimize it using the gradient descent. Clearly, the minimum is attained at zero and, in general, we would like the magnitude of the current approximation to decrease.
The update rule of gradient descent is
$$ x_{k+1} = x_k - \lambda \nabla f = x_k - \lambda \cdot 4x_k^3.$$
Simplifying the expression, we get
$$x_{k+1} = x_k(1 - 4\lambda x_k^2).$$
And now the combination of initialization and learning rate starts to appear. If $|1-4\lambda x_0^2| < 1$, then $|x_0| > |x_1| > |x_2| > ...$ and the sequence eventually goes to zero. If $|1-4\lambda x_0^2| > 1$, then $|x_0| < |x_1|$, and in turn $|x_1| < |x_2|$ and so on -- the sequence will grow. Therefore, if the learning rate $\lambda$ is fixed, the initial value $x_0$ determines whether gradient descent converges or not.
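A minimal numerical sketch of this toy example (plain Python; the starting points are chosen so that one satisfies the condition and the other does not):
def grad_descent_x4(x0, lr=0.1, steps=5):
    # gradient descent on f(x) = x**4, whose gradient is 4 * x**3
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - lr * 4 * xs[-1] ** 3)
    return xs

print(grad_descent_x4(0.5))   # |1 - 4*0.1*0.5**2| = 0.9 < 1 -> magnitudes shrink towards 0
print(grad_descent_x4(3.0))   # |1 - 4*0.1*3.0**2| = 2.6 > 1 -> magnitudes blow up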
When does gradient descent converge?
The math says that gradient descent converges when the learning rate $\lambda$ and the gradient $\nabla f$ satisfy
$$\|\lambda \nabla f\| < 1$$
along the optimization path. Note that this condition does not need to be valid for all values of $x$, but at every $x_0, x_1, x_2, ...$
For many "good" functions, it suffices to require $\|\lambda \nabla f\| < 1$ only at $x_0$. The reason is that after the first iteration, we are closer to the local minimum, and for many "good" functions, it means that the gradient will be smaller.
What about weight choice in [-0.5,0.5] and [-1, 1]?
I think of it as follows: Suppose that we selected weights in $[-0.5, 0.5]$ (model 1) and then multiplied all weights by 2 to get uniform distribution in $[-1, 1]$ (model 2). Suppose that the learning rate is identical in both cases, and let's check how the SGD performs. For simplicity of the argument, I replace it with the gradient descent.
How does it transfer to the NN?
Note that a linear map (say, for the dense layer) has the following property. Suppose that all weights W are multiplied by 2, then
$$\|2W\| = 2\|W\|.$$
ReLU is quasi-linearly scalable: for every $a > 0$,
$$a\cdot\text{ReLU}(x)=\text{ReLU}(ax).$$
Note that if your NN has $d$ layers, then multiplying all weights by 2 increases your output by a factor of $2^d$ (a factor of 2 for each layer).
It is hard to compute the gradient for the NN, but using the product rule, I expect it to increase by approximately a factor of $2^{d}$ as well. So if model 1 satisfies the gradient descent convergence condition, then for model 2 we need to decrease the learning rate by a factor of about $2^{d}$ to guarantee that the condition still holds. |
H: Similarity Threshold Standards
When using similarity measures (eg. Resnik Information Content, Cosine Similarity, etc.) for any type of data, are there any standard similarity thresholds that are used, or does it all depend on the situation? A similarity threshold would be the value X in [0,1] such that all pairs with similarity score greater than X are "connected" while ones with similarity score below X are not.
Also, are low similarity thresholds (~0.15) acceptable when higher thresholds simply do not produce enough "connected" pairs and having a low similarity threshold still works well in practice?
AI: I don't think there's any standard, but there might be some exceptions in very specific cases where the distribution of the scores is known precisely.
There's no standard because in general the optimal value of the threshold strongly depends on the task and the data. That's why thresholds are usually determined empirically based on the desired outcome. In other words, a threshold can be seen as a hyper-parameter: its optimal value can be found by maximizing the performance of the target task on a training set (or validation set). |
H: Document clustering to merge common labels
I am building a recommendation system and I have to clean up some of the labels that I have. For example of the data
df['resolution_modified'].value_counts()
Gives
105829
It is recommended to replace scanner 1732
It is recommended to reboot station 1483
It is recommended to replace printer 881
It is recommended to replace keyboard 700
...
It is recommended to update both computers in erc to ensure y be compliant with acme 1
It is recommended to configure and i have verify alignement printer be work now corrado 1
It is recommended to create rma for break devices please see tt for more information resolve this in favor of rma ticket create 1
It is recommended to replace keyboard manually clear hd space add to stale profile manager instal windows update 1
It is recommended to switch out dpi head from break printers 1
Notice that It is recommended to replace keyboard and It is recommended to replace keyboard manually clear hd space add to stale profile manager instal windows update are very similar. Ideally, I would like to converge to the string that occurs more frequently; thus the second string should be mapped to the first.
I am thinking of using document clustering to handle this approach. I have tried using fuzzywuzzy but since I have a lot of strings the process below is too slow
from fuzzywuzzy import fuzz
def replace_similars(input_list):
    # Replaces strings that are 90% or more similar
    for i in range(len(input_list)):
        for j in range(len(input_list)):
            if i < j and fuzz.ratio(input_list[i], input_list[j]) >= 90:
                input_list[j] = input_list[i]

def generate_mapping(input_list):
    new_list = input_list[:]  # copy list
    replace_similars(new_list)

    mapping = {}
    for i in range(len(input_list)):
        mapping[input_list[i]] = new_list[i]

    return mapping
res = h['resolution_modified'].unique()
res.sort()
mapping = generate_mapping(res)
for k, v in mapping.items():
    if k != v:
        h.loc[h['resolution_modified'] == k, 'resolution_modified'] = v
I wanted to know if there is some document clustering approach I could apply that takes into account how often each string occurs, so that the less frequent strings are merged into the most similar, more frequently occurring string. Does anyone have a recommendation on which method to use?
What I have Tried Thus Far:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
v = TfidfVectorizer()
x = v.fit_transform(df['resolution_modified'])
kmeans = KMeans(n_clusters=2).fit(x)
test_strings = ['It is recommended to replace keyboard', 'It is recommended to replace keyboard manually clear hd space add to stale profile manager instal windows update']
kmeans.predict(v.transform(test_strings))
Which gives
array([1, 0], dtype=int32)
Obviously not working so far, will try to increase the number of clusters.
AI: I might misunderstand something but it looks to me like you're trying to find a complex method for a simple problem: if there are many strings which occur multiple times in the list, you should deduplicate the list before comparing all the pairs. You could use a set, but since you will need to count how frequent each string is you should probably directly create a map (dictionary) which stores the frequency for every string (just iterate over the list of strings, then increment the frequency of this string (key) in the map).
Depending on how many distinct strings you have, this simple step might be enough to allow you to compare all the pairs of strings efficiently.
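A minimal sketch of that deduplication step (assuming the dataframe and column names from the question):
from collections import Counter

# frequency of every distinct statement, built in one pass over the column
freq = Counter(df['resolution_modified'])
distinct_strings = sorted(freq)                 # deduplicated: compare these instead of every row
print(len(df), '->', len(distinct_strings))     # number of rows vs. number of distinct statements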
Then you could for example decide on a threshold for frequency, for instance keep only the strings which appear at least 10 times. For any string which appears less than 10 times, replace it with the frequent string (more than 10 times) which has the highest similarity with it. |
H: Encoder-Decoder LSTM for Trajectory Prediction
I need to use an encoder-decoder structure to predict 2D trajectories. As almost all available tutorials are related to NLP, with sparse vectors, I couldn't be sure how to adapt the solutions to continuous data.
In addition to my ignorance in seqence-to-sequence models, embedding process for words confused me more. I have a dataset that consists of 3,000,000 samples each having x-y coordinates (-1, 1) with 125 observations, which means the shape of each sample is (125, 2). I thought I could think of this as 125 words with 2 dimensional already embedded words, but the encoder and the decoder in this Keras Tutorial expect 3D arrays as (num_pairs, max_english_sentence_length, num_english_characters).
I doubt I need to train each sample (125, 2) separately with this model, as the way Google's search bar does with only one word written.
As far as I understood, an encoder is a many-to-one type model and a decoder is a one-to-many type model. I need to get a memory state c and a hidden state h as vectors(?). Then I should use those vectors as input to the decoder and extract predictions in the shape of (x, y), as many as I choose, from the decoder output.
I'd be so thankful if someone could give an example of an encoder-decoder LSTM architecture over the shape of my dataset, especially in terms of dimensions required for encoder-decoder inputs and outputs, particulary on Keras model if possible.
AI: There are multiple questions in your description:
The input to LSTMs are normally continuous representations. In NLP, you normally embed discrete elements as vectors in a continuous representation space and then you pass these vectors to an LSTM. You already have continuous representations, so you just pass your 2-dimensional vectors as input to the LSTM.
In NLP neural networks, like most neural networks, the training happens passing as input not one sample, but a minibatch of N samples. This N most of the times is chosen as the maximum number that makes the model, data and intermediate computations fit in the GPU memory.
The 3D array expected as input by the LSTM has shape [N, timesteps, feature]. In your case, this would be [N, 125, 2].
I think you don't need an encoder-decoder architecture, only the encoder. Therefore, a single LSTM would suffice. You would train it to receive any number of input elements and predict the next one. If you want more predictions ahead, you can feed the model's own predictions as input, autoregressively. To find an analogy in the NLP world, your model would be a language model, which receives words (or letters) and generates the following word.
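A minimal Keras sketch of such a next-step predictor for your (N, 125, 2) trajectories; the random placeholder data, layer sizes and number of epochs are just illustrative assumptions:
import numpy as np
import tensorflow as tf

# placeholder trajectories of shape (num_samples, 125, 2); replace with your real data
trajectories = np.random.uniform(-1, 1, size=(1000, 125, 2)).astype("float32")
x = trajectories[:, :-1, :]   # steps 0..123 as input
y = trajectories[:, 1:, :]    # steps 1..124 as next-step targets

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(None, 2)),
    tf.keras.layers.Dense(2)  # predicted (x, y) at every timestep
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, batch_size=32, epochs=10)

# autoregressive forecasting: feed the model's own predictions back as input
current = trajectories[:1, :100, :]              # the first 100 observed points of one trajectory
for _ in range(25):                              # predict 25 further points
    next_point = model.predict(current)[:, -1:, :]
    current = np.concatenate([current, next_point], axis=1)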
P.S.: I may be wrong about you needing only the encoder instead of an encoder-decoder architecture, as I don't know the specific nature of the predictions you want to make. |
H: Difficulties training a classifier
I'm a begginer in data science but I tried to build a classifier for my own bank transactions, I collected ~ 50.000 in total. My intention is to create a relationship between the statement of the transaction and the type of transaction. For example:
Statement: Payment with card number XXXXXX in Wallmart.
Type/label: Buy in a supermarket
For doing so I trained my model with 80% of the total statements (previously labeled) and reached a fairly high accuracy (85%). After that I realized that something was wrong: there are many statements with the text:
Statement: Payment with card number XXXXX in _______ .
Because this is a "default" statement text in my bank when someone buys a product in a supermarket. Then, if the model tries to find out in what category the statement is it will be pretty simple because there will be many statements repeated and the probability of predict one of those will be high. What I did later was trying to delete all the repeated statements, since the repeated ones will not affect in the training process, but because of this my total of statements went down to ~ 2.000, and reaching then only a 10% of accuracy in the prediction.
My questions are: What is the correct process to train my model? Should I keep the repeated statements? (I feel like I'm cheating the prediction by keeping the repeated ones.) What am I doing wrong?
AI: If I understand correctly, you noticed that a lot of correct predictions were due to a case which always has the same label and is very frequent in your data, right?
Let me say that I think you had a good reasoning: you analyzed the problem, designed a potential solution and then tried it. This kind of problem is very common, but in this case the solution is not really to delete the repeated cases in your training data.
What happens is that you have imbalanced data, that is you have a class which appears very frequently: when you deleted the very frequent instance you mentioned, your data size went from 50000 to 2000, so this instance represents 96% of your data. This instance has probably almost always the same label, so my guess is that this instance with this class as label represents 85% of your data. At first you thought your classifier was working well since you had 85% accuracy, but it's very easy for the classifier to reach 85% of correct predictions if the most frequent class represents 85% of the data: it just needs to always answer this class.
The first lesson here is that accuracy is not a good evaluation measure for imbalanced data: it's too simple, it can be very high even though the classifier doesn't do anything useful. That's why you should probably look at precision, recall and f1-score by class, and typically at macro f1 score as a global evaluation score.
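A minimal sketch of how to look at these scores with scikit-learn (assuming y_test holds the gold labels and y_pred the classifier's predictions):
from sklearn.metrics import classification_report, f1_score

print(classification_report(y_test, y_pred))        # precision, recall and f1-score per class
print(f1_score(y_test, y_pred, average="macro"))    # macro F1: unweighted mean of per-class F1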
Second point, I'm going to mention resampling because it's usually the standard answer to imbalanced data: the idea is simply to artificially "re-balance" the distribution of the classes in the data by either:
undersampling the frequent class, that is using only a random subset of this class. Note that your idea to delete the frequent instance is actually not so different.
oversampling the other classes, that is repeating the rare instances as many times as necessary to make the data balanced.
Importantly, the resampling must be done only on the training set, otherwise the evaluation on the test set is biased.
Resampling is quite easy, but the results are often disappointing. This is because quite often the problem is deeper: the classifier is just not able to distinguish the classes from the information provided by the features.
That brings me to my final point: the fact that you get 10% accuracy after removing the frequent case shows that the classifier doesn't have a clue what it's doing. Most of the time designing a good ML system needs some serious work with the features, especially with text data. The type of model (and parameters) can also play an important role. You didn't explain what kind of model and what kind of features you use, so there's not much I can say about this. Don't hesitate to post a new question with these details, and maybe we can help you improve this model. |
H: Adapting ZFNet to a 224x224 image using a 7x7 filter
I am building a model based on ZFNet in Tensorflow 2.0. I am using the Petal images dataset. The images are of size 224x224x3.
So my question is: when implementing the first layer (conv2d) with filter size = 7, a stride of 3 and padding of 0, I am getting an output dimension of 109.5 using the formula (n+2p-f)/S + 1. So if I use the above-mentioned values, what dimension will TensorFlow return for the first layer?
and secondly, how can I adjust the parameter values so it returns a whole number.
reference formula: (n + 2p - f)/S + 1
reference calculation:
(224 + 2*0 - 7)/2 + 1 = 109.5
Thanks.
AI: As per the formula for the feature map dimension:
$$feature_{dim} = \frac{n+2p-f}{S} + 1$$
The values for :
n = 224
p = 0
f = 7
s = 3
$$feature_{dim} = \frac{224+2*0-7}{3} + 1 = 73.33 $$
As you've guessed, this is not an integer, so it cannot literally be the size of the feature map.
Tensorflow takes it as 73.
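You can verify this quickly with a short check (a sketch; the number of filters, 96, is just an arbitrary example here):
import tensorflow as tf

x = tf.random.normal((1, 224, 224, 3))   # one dummy 224x224 RGB image
conv = tf.keras.layers.Conv2D(filters=96, kernel_size=7, strides=3, padding="valid")  # "valid" means p = 0
print(conv(x).shape)                     # (1, 73, 73, 96): floor((224 - 7) / 3) + 1 = 73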
If you're relying only on the formula, you are missing out on a concept: the convolution is a process in which the kernel slides over the input, hence the feature map dimension has to be an integer. What happens is that the kernel, while sliding with a stride of 3, leaves out the last few pixels and never reaches the other edge.
If you're trying to get an integer feature map size while keeping a constant filter size of 7, then your stride is:
$$S = \frac{217}{N - 1}$$
where N is your desired output size.
If you choose N to be 8 or 32, you'll end up with a stride of 31 or 7 respectively. It's better to choose S = 7 (i.e. N = 32) to retain most of the information. But in practice it doesn't matter much, as TensorFlow handles the non-integer case by simply discarding the leftover pixels. |
H: What does it mean for an image to have a distribution?
While studying Independent Component Analysis, specifically SVD to be used for the task of image separation. The lines given in the textbook is like this:-
"The first assumption will be that the two images are statistically independent."
Now, statistical independence means the images must have some kind of distribution to start with. But I am unable to grasp the meaning behind this. What does it mean for an image to have a distribution? For example, what does an image with a Gaussian or a uniform distribution look like? It would be great if some examples were presented.
AI: First, we must understand that an image can be represented as the numerical values of its pixels. For example, a 256x256 grayscale image can be represented by a vector of 65536 integer values between 0 and 255.
Now, let's consider that we are talking about images of human faces, like the CelebA dataset, but in grayscale.
And finally, let's imagine that we sample a vector of 65536 components from a uniform distribution between 0 and 255, that is, from $x \sim U[0, 255]^{65536}$. If we interpret such a randomly sampled vector as an image, like the interpretation we described in the first paragraph, do you think it is probable that we would see a face? Well, probably not. Most likely we would see just noise, like this:
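(A minimal sketch of how such a uniform-noise image can be generated, assuming numpy and Pillow are available:)
import numpy as np
from PIL import Image

# sample 256x256 pixel values independently and uniformly from {0, ..., 255}
pixels = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
Image.fromarray(pixels, mode="L").save("uniform_noise.png")   # looks like pure static, not a face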
Why do you know that we won't see a face? Well, because you know that a face is not composed of uniformly random pixels, but has some structure in it. In other words, because you intuitively know that the distribution of pixel values of a face image does not match $U[0, 255]^{65536}$, but has a different distribution.
As a summary, the distribution of an image domain is the distribution the pixel values follow. Note that this is applicable not only to grayscale images, but to color images expressed in the RGB space or any other numeric space like HSB.
P.S.: this is how an image of Gaussian noise looks (it was generated with Imagemagick, with convert -size 256x256 xc:gray +noise Gaussian out.png): |
H: Finding Correlations in two Datasets
I have an assignment where I am trying to find correlations between Lightning Strikes and Telecommunication damage. The two datasets consist of many columns (especially the human-recorded Telecommunication damage one), but let's assume it was something like this :
Telecommunication_Damage_df.columns = (timestamp, geolocation(lat, lon), type_of_damage) and
Lightning_Strikes.columns = (timestamp, geolocation(lat, lon))
I have done some EDA and cleaned the data, also assigned each lightning strike and each telecommunication damage row to a particular location(cities/areas), but now I am confused as to how I should proceed. Every other Data Science / Machine Learning project I have been involved in was much more direct and usually had one training/testing dataset whereas with this one I am stuck as to how I should proceed, is there a model that could help? Is there a methodology I am unfamiliar with?
I tried following this crime/geolocation tutorial (https://www.kdnuggets.com/2020/02/introduction-geographical-time-series-crime-r-sql-tableau.html) but it's not exactly the same, because there is one dataset being used, crimes, if a crime occurred then that's it, whereas here if a lightning strike occurred that doesn't necessarily mean a telecommunication problem was found and vice versa.
I know it's a bit vague, but I've been stuck for quite a while, and I was hoping that someone could guide me to any direction, because at the moment I am idle.
AI: About the type of problem: apparently here the goal is not to do any kind of supervised learning, it seems to be more a kind of descriptive task.
The first thing I would try to do is to align the two datasets, i.e. merge them based on events in one occurring at the same time and place as in the other one. In order to do that you need to be able to compute, for any two events, whether they appear in the same area around the same time. I guess you'll need a window, for example defining two events as related if they occur within 1 hour of each other and are less than 10 km apart. Be careful about noise in the data: the time of a telecom incident might be the time it's reported, not the time it actually happens.
Once you have linked potential related events, you can merge the two datasets into one. You should probably preserve the events which have no match in the other dataset for statistical purposes, so use an outer join.
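A rough pandas sketch of this matching step (the column names timestamp, lat and lon are assumptions — adapt them to your actual dataframes, and make sure the timestamp columns are datetimes sorted in ascending order):
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometres
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * np.arcsin(np.sqrt(a))

damage = Telecommunication_Damage_df.sort_values("timestamp")
strikes = Lightning_Strikes.sort_values("timestamp")

# attach to each damage report the nearest-in-time strike within 1 hour (unmatched reports are kept too)
matched = pd.merge_asof(damage, strikes, on="timestamp",
                        direction="nearest", tolerance=pd.Timedelta("1h"),
                        suffixes=("_dmg", "_strike"))

# keep the link only if the two events are also within 10 km of each other
dist = haversine_km(matched["lat_dmg"], matched["lon_dmg"],
                    matched["lat_strike"], matched["lon_strike"])
matched["related"] = matched["lat_strike"].notna() & (dist <= 10)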
At this stage you have an exploitable dataset. I would start by doing plots such as overall probability distribution of related events, probability of related event by location. Probability of related events across time. |
H: Custom loss function with multiple outputs in tensorflow
my network has two outputs and single input. I am trying to write a custom loss function
$$
Loss = Loss_1(y^{true}_1, y^{pred}_1) + Loss_2(y^{true}_2, y^{pred}_2)
$$
I was able to write a custom loss function for a single output. But for multiple outputs, I am stuck. Below is the MWE I tried
def model(input_shape=4, output_shape=3, lr=0.0001):
    """
    single input and multi-output
    loss = custom_loss(out_1_true, out_1_pred) + mse(out_2_true, out_2_pred)
    """
    input_layer = Input(input_shape)
    layer_1 = Dense(units=32, activation='elu', kernel_initializer='he_uniform')(input_layer)

    # outputs
    y_1 = Dense(units=output_shape, activation='softmax', kernel_initializer='he_uniform')(layer_1)
    y_2 = Dense(units=1, activation='linear', kernel_initializer='he_uniform')(layer_1)

    def custom_loss(y_true, y_pred):
        # both true values of out_1, out_2 are encoded in y_true
        y_true_1 = y_true[:, :1+output_shape]
        y_true_2 = y_true[:, 1+output_shape:]

        # (this part is wrong... I don't know how to split y_pred)
        y_pred_1, y_pred_2 = y_pred[:, :1+output_shape], y_pred[:, 1+output_shape:]

        # custom loss for y_pred_1
        entropy_loss = -y_pred_1 * K.log(y_true_1 + 1e-10)

        # mse for y_pred_2
        mse = -K.mean(K.square(y_pred_2 - y_true_2))

        # net loss
        loss = entropy_loss + C * mse
        return loss

    Network_model = Model(inputs=input_layer, outputs=[y_1, y_2])
    Network_model.compile(loss=custom_loss, optimizer=RMSprop(lr=lr))
    return Network_model
I think the main issue lies in splitting the y_pred tensor.
PS: For the purpose of the MWE, I have used the normal cross-entropy and MSE loss functions in the code above. However, my actual cost functions are different.
AI: import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras import Model
from tensorflow.keras.optimizers import RMSprop
import tensorflow.keras.backend as K
import numpy as np

def model(input_shape=4, output_shape=3, lr=0.0001):
    """
    single input and multi-output
    loss = custom_loss(out_1_true, out_1_pred) + mse(out_2_true, out_2_pred)
    """
    input_layer = Input(input_shape)
    layer_1 = Dense(units=32, activation='elu', kernel_initializer='he_uniform')(input_layer)

    # STEP-1: name your outputs
    y_1 = Dense(units=output_shape, activation='softmax', kernel_initializer='he_uniform', name='actions')(layer_1)
    y_2 = Dense(units=1, activation='linear', kernel_initializer='he_uniform', name='values')(layer_1)

    # STEP-2: Define the custom loss. Note that the args should be y_true, y_pred
    def custom_loss(y_true, y_pred):
        entropy_loss = -y_pred * K.log(y_true + 1e-10)
        return entropy_loss

    # STEP-3: Define the dicts for losses and loss weights, keyed by the names of the output layers
    LossFunc = {'actions': custom_loss, 'values': 'mse'}
    lossWeights = {'actions': 0.5, 'values': 0.5}

    Network_model = Model(inputs=input_layer, outputs=[y_1, y_2])

    # STEP-4: compile using the LossFunc and lossWeights dicts
    Network_model.compile(optimizer=RMSprop(lr=lr), loss=LossFunc, loss_weights=lossWeights, metrics=["accuracy"])
    return Network_model

# training example:
# model
Network_model = model(4, 2)

# input
S = np.reshape([1., 2., 3., 4.], (1, -1))

# true outputs
action, value = np.reshape([0.1], (1, -1)), np.reshape([0.1, 0.9], (1, -1))
Y = [action, value]

Network_model.fit(S, Y)
1/1 [==============================] - 0s 2ms/step - loss: 6.7340 - actions_loss: 1.1512 - values_loss: 12.3169 - actions_accuracy: 0.0000e+00 - values_accuracy: 1.0000 |
H: Context Based Embeddings vs character based embeddings vs word based embeddings
I am working on a problem that uses English alphabets in the text but the language is not English. It's a mixture of English and text in a different language, but all words are written using English alphabets. Now, word-based pre-trained embedding models will not work here, as they give a random embedding to out-of-vocabulary words.
Now my question is: how do context-based pre-trained embeddings deal with "out of vocabulary" words?
Besides, what's the difference between context-based embeddings and character-based embeddings?
AI: Context-based or contextual means that the vector contains information about the use of the word in a context of a sentence (or rarely a document). It thus does not make sense to talk about the word embeddings outside of the context of the sentence.
Models such as BERT use input segmentation into so-called subwords which is basically a statistical heuristic. Frequent words are kept intact, whereas infrequent words get segmented into smaller units (which often resemble stemming or morphological analysis, and often seem pretty random), ultimately keeping some parts of the input segmented into characters (this is typically the case of rare proper names). As a result, you get contextual vectors of subwords rather than words.
Character-based embeddings usually mean word-level embeddings inferred from character input. For instance, ELMo used character-level inputs to get word embeddings that were further contextualized using a bi-directional LSTM. ELMo embeddings are thus both character-based and contextual.
Both when using sub-words and when using embeddings derived from characters, there are technically no OOV words. With subwords, the input ultimately breaks down into characters (and all characters are always in the vocabulary). With character-level methods, you always get a vector from the characters. There is of course no guarantee that the characters are processed reasonably, but in most cases they are.
Models that use static word embeddings (such as Universal Sentence Encoder) typically reserve a special token for all unknown words (typically <unk>), so the model is not surprised by a random vector at the inference time. If you limit the vocabulary size in advance, the OOV tokens will naturally occur in the training data. |
H: How to get non-normalized feature importances with random forest in scikit-learn
I'm using the feature_importances_ attribute of the random forest classifier in scikit-learn to plot the importances of each feature. However, I'd like to plot these importances non-normalized. I have searched around for how to do this, but there doesn't seem to be an easy way. I tried it manually:
temp = [t.tree_.compute_feature_importances(normalize=True) for t in clf.estimators_]
arr = np.array(temp)

arr2 = []
for i in range(19):
    arr2.append(sum(arr[:, i]))
arr3 = np.array(arr2)

indices = np.argsort(arr3)[::-1]
indices.reshape(1, -1)

plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), arr3[indices],
        color="r", align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Which gives these results for normalize=True and normalize=False respectively. (the normalize parameter seems to be inverted somehow...?)
For reference, this is the result from using feature_importances_ (the error-bars are not relevant to the question):
AI: It seems you are normalizing the Gini dip for every tree and then summing those values.
You should do the normalization at the end after the sum.
Result might vary depending upon the overall numbers in the two scenarios.
See this dummy example
import numpy as np
# Three Tree and 3 Features. F1 sum across Trees = 50, F2 = 48, F3 = 20
gini_dip = np.array([[20, 15, 5], [20, 15, 5], [10, 18, 10]])
# Normalization before sum
gini_dip_n = np.array([((elem-min(elem))/(max(elem)-min(elem))) for elem in gini_dip])
fi_1 = gini_dip_n.sum(axis=0)
# Normalization after sum
gini_dip_sum = gini_dip.sum(axis=0)
fi_2 = (gini_dip_sum - gini_dip_sum.min())/(gini_dip_sum.max() - gini_dip_sum.min())
fi_1, fi_2
(array([2. , 2.33333333, 0. ]),
array([1. , 0.93333333, 0. ]))
F2 is best in scenario #1 and F1 is best in scenario #2 |
H: Oscilations in loss curve
I saw a similar question, but I think my problem is something different.
While training, the training loss and the validation loss move around one number, not decreasing significantly.
I have 122707 training observations and 52589 test observations with 55 explanatory variables and one dependent variable, one Conv1D layer with 24 filters, 2 LSTM layers with 24 units and one dense layer. I've added a dropout rate of 0.2 between the layers. Total parameters: 13417.
Seems like my model is not learning at all. Does it mean that the dataset is not a good representation of the specific problem? Should I increase the number of epochs? I use Adam optimizer with default learning rate.
Adding additional info:
I am trying to predict next-hour air pollution based on the previous air pollution concentration and the previous hour's meteorological data such as temperature, wind speed, etc. Day, hour and month are also included and encoded with one-hot encoding. Additionally, the wind degree is decomposed into its sin and cos components. Previously I tried normalization of the data but it didn't seem to make any difference. I haven't tried any other models. Here is the model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(filters=24, kernel_size=3,
                           strides=1, padding="causal",
                           activation="relu",
                           input_shape=[None, 55]),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.LSTM(24, return_sequences=True),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.LSTM(24, return_sequences=True),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Lambda(lambda x: x * 1000)
])
Somewhere I saw a Lambda layer after the Dense layer in regression. I noticed that adding the Lambda layer at the end speeds up the learning. I multiply the output by 1000 because it is the maximum value of the variable I want to predict.
AI: Rather than oscillations, it looks like noise: the loss is essentially wandering randomly. In other words, as you said, your model is not learning anything.
Unfortunately it's impossible to say what's wrong, since we can't see any code. We need more information about dataset, how you processed it, model implementation, all the hyperparams you chose, what other versions you tried before that one, ... the list is countless. But most importantly it's really hard to help you without code.
If the dataset is a good one the problem must be some error you made along the way.
EDIT:
Here's what I think:
You don't need Conv layers followed by RNN layers. This doesn't really make sense. Let the LSTM receive raw input.
Don't use Dropout with RNNs, they don't go along very well together. Dropout makes sense with Dense and Conv layers, but in RNNs, where the sequence is everything, it can actually make things worse. Some people use recurrent dropout as an alternative, but it's not necessary here.
Don't use return_sequences=True between an LSTM and a Dense layer. That must be used between LSTM's only.
That Lambda layer at the end is probably causing most of the error. If you multiply all your predictions by 1000, what you get is by definition a prediction that is on a completely different scale than your target values.
The network overall is too deep and has too many parameters. I assume you are working with the famous Beijing air quality dataset. In this case, it is enough to work with one LSTM layer, followed by a Dense node to make the prediction. Everything else is overkill and not necessary for a simple dataset like that.
Something much more simple, like:
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(24, input_shape=(seq_len, n_vars)),
    tf.keras.layers.Dense(1),
])
has higher chance to work. (Please specify the input shape correctly). Try playing with its hyperparameters, after you made sure all variables are properly scaled between train and test data.
Good luck! |
H: Does standardization result in normal distribution?
I have a question about standardization (subtract mean, divide by standard deviation) of data consisting of different features with different ranges.
I read some information that seemed to be contradictory.
Standardization
does not alter the fundamental shape of the data
but it gives the data the properties of a normal distribution: mean of 0 and standard deviation of 1
So, does it turn the data into a normal distribution or not?
AI: No, standardization does not change the shape of the distribution. It centers the distribution by subtracting the mean and scales it by dividing by the standard deviation.
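A quick numerical sketch of this (assuming numpy and scipy are available), using a clearly non-normal, right-skewed sample:
import numpy as np
from scipy.stats import skew

x = np.random.exponential(scale=2.0, size=100000)   # exponential data: mean 2, std 2, skewness ~2
z = (x - x.mean()) / x.std()                        # standardization

print(x.mean(), x.std(), skew(x))   # roughly 2.0, 2.0, 2.0
print(z.mean(), z.std(), skew(z))   # roughly 0.0, 1.0, 2.0 -> same skewness, i.e. same shape
The skewness is unchanged, so standardization only shifts and rescales the data: if the distribution was not normal before, it is not normal afterwards. |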
H: replace values based on Number of duplicate rows are occured
I have a dataframe that looks like this:
site Active
0 deals Active
1 deals Active
2 deals Active
3 discount Active
4 discount Active
I don't want to drop the duplicate items, but I want to change the Active column's value based on the site column. For each duplicated value in the site column, only the last occurrence should stay Active and every earlier occurrence should become InActive.
Expected
site Active
0 deals InActive
1 deals InActive
2 deals Active
3 discount InActive
4 discount Active
AI: I would do this manually. First, let us create the index set of entries whose state must remain active. To do this, I iterate over all rows and record active instances. Note that a later occurrence overrides earlier ones, so we keep only the last occurrence of an active event for each site.
last_active = dict()
for i, row in df.iterrows():
    if row['Active'] == 'Active':
        last_active[row['site']] = i
keep_active = last_active.values()
Now I assign the state 'Active' to those entries whose index is in keep_active and InActive otherwise.
df['refined_active'] = df.apply(lambda x: 'Active' if x.name in keep_active else 'InActive', axis=1)
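If you simply want the last row of every site to be Active (which matches the expected output above), a more idiomatic pandas alternative could be a sketch like this:
import numpy as np

# mark every occurrence of a site except the last one as InActive
df['refined_active'] = np.where(df.duplicated(subset='site', keep='last'), 'InActive', 'Active')
Note that this ignores the original Active values, whereas the loop above keeps the last row that was originally Active, so pick whichever semantics matches your data. |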
H: Python library to detect a bank/financial institution name in a string
I would like to extract bank names from a given text, like Wells Fargo, Chase... Is there a Python library for this? I know there are entity taggers in spaCy and Flair, but they only identify the entity type (ORG/PERSON).
AI: As mentioned in the comment you can use regex, but you would need to define a set of rules for it. You can also try LexNLP, which is trained on legal documents, and use it to extract data types like addresses, companies and persons.
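If you already know which institutions you care about, a simple regex lookup can be enough; here is a minimal sketch (the list of bank names is hypothetical):
import re

BANKS = ["wells fargo", "chase", "bank of america", "citibank"]   # hypothetical list, extend as needed
pattern = re.compile(r"\b(" + "|".join(re.escape(b) for b in BANKS) + r")\b", re.IGNORECASE)

def find_banks(text):
    return [m.group(0) for m in pattern.finditer(text)]

print(find_banks("I opened an account at Wells Fargo and another at Chase."))
# ['Wells Fargo', 'Chase']
For bank names that are not in your list you would still need an NER-style approach such as LexNLP or a custom-trained model. |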
H: How to Identify Repeating Data Entries when the Repeated Entries are Spelled or Constructed Differently
I have a dataset of entries and a variable for the owner of the entry. Some of these people occur more than once. However, the names are sometimes written differently. I want to eventually be able to aggregate the other data to the single owner. These are the names of business owners so sometimes it's a singular name, sometimes it's more than one name, and sometimes it's just the company name. Here's an example of some of the styles of names in the data:
DOE JOHN
DOE JOHN J
DOE, JOHN
DOE, JOHN + JANE
DOE JOHN + JANE
JOHN DOE J ETAL % JOHN J DOE
COMPANY CO
I've never done anything like this before. How could I go about identifying some of the same people? Is there a way to create an index to identify the similarity between these groups? Most of the ones I've seen are for longer text. Is there an index well suited for this?
I apologize if this is too basic a question. I'm new to doing things like this and I'm not sure if I know exactly what to search for. I'm most comfortable with Stata and R but I've used Python before and I could eventually figure out how to do something with that.
AI: For R: have a look at the stringr package. I would use, for example, the str_detect() function as follows: str_detect(column_of_different_names, "DOE|company_name"). This will return TRUE for each string that includes "DOE" or the company name in "company_name". |
H: How to generate and visualize results of all possible parameter values in Python?
I have two parameters: demand, ticket_qty.
demand is an integer and can have a possible range of [100 - 200]
ticket_qty is also an integer and can have a possible range of [0 - 200]
I want to visualize how values for the above parameters affect the functions below:
price = demand - tickets_qty
revenue = price * ticket_qty
In Excel I created two matrixes (price and revenue) with demand as row indexes and ticket_qty as column indexes and simply applied the formula for each cell based on the rules above. However, seeing as this is a Kaggle microchallenge, I'd like to perform this using Python so as to then be able to plot and show optimal levels of price/revenue and do all my work inside the Jupyter notebook.
Questions:
How would I go about doing this in Python? pandas? numpy?
In statistics/data science, what is this "process" of mapping out possible values called? So as to facilitate further research
AI: It's probably fastest to do this using numpy, in which you can define the possible ranges of your values using numpy.arange as follows:
import numpy as np
demand = np.arange(100, 201, 1) # integers from 100-200 inclusive
ticket_qty = np.arange(0, 201, 1) # integers from 0-200 inclusive
If you then want to calculate the possibilities using the different values you can use the .outer method of the different universal numpy functions (i.e. numpy.subtract to subtract values) to easily get the results without looping over the arrays.
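As a sketch of that idea, with demand as the rows and ticket_qty as the columns (mirroring your Excel matrices):
price = np.subtract.outer(demand, ticket_qty)   # price[i, j] = demand[i] - ticket_qty[j]
revenue = price * ticket_qty                    # ticket_qty broadcasts across each demand row

best = np.unravel_index(np.argmax(revenue), revenue.shape)
print(demand[best[0]], ticket_qty[best[1]], revenue[best])   # demand, quantity and revenue at the optimum
From there you can wrap the matrices in a pandas DataFrame for nicer labels, or visualize the revenue surface with matplotlib (e.g. plt.imshow or a contour plot). |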
H: Bag-of-words and Spam classifiers
I implemented a spam classifier using Bernoulli Naive Bayes, Logistic Regression, and SVM. Algorithms are trained on the entire Enron spam emails dataset using the Bag-of-words (BoW) approach. Prediction is done on the UCI SMS Spam Collection dataset. I have 3 questions:
During test time, while creating the term-frequency matrix, what if none of the words from my training BoW are found in some of my test emails/smses. Then, wouldn't the document vectors be zero vectors for those datapoints. How should I tackle this?
What if a new word from my test email/sms doesn't exist in BoW?
How do I choose my BoW so as to improve my prediction accuracy?
AI: Supervised ML works on the assumption that the test data follows the same distribution as the training data. This is not always the case in real-world use cases, but it's at least necessary that the test data is "mostly similar" to the training data.
As a consequence, a BoW model can only be applied to data which uses mostly the same vocabulary, with a mostly similar distribution over the words. It is true that out of vocabulary (OOV) words frequently appear in the test data, because words in natural languages follow a Zipf distribution so there are many words which occur rarely. The general assumption in BoW ML models is that since these words occur rarely, they can reasonably be ignored.
During test time, while creating the term-frequency matrix, what if none of the words from my training BoW are found in some of my test emails/smses. Then, wouldn't the document vectors be zero vectors for those datapoints. How should I tackle this?
This event is supposed to be unlikely: a sentence usually contains at least a few common words (again due to the Zipf distribution). However this could happen with a very short text message. Anyway there is nothing special to do about it: all the words are ignored, the vector contains only zeros indeed, the model gives a prediction for this vector of features.
What if a new word from my test email/sms doesn't exist in BoW?
This is the traditional case of OOV words mentioned above. The most simple (and probably most common) option is to completely ignore the unknown word. With some probabilistic models smoothing can be used to account for the existence of OOV words, but as far as I know this is used only with n-grams.
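A minimal sketch of this behaviour with scikit-learn's CountVectorizer (the toy sentences are made up):
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer()
vec.fit(["win a free prize now", "meeting at noon tomorrow"])   # training vocabulary

# 'lottery' and 'jackpot' are OOV: they are silently dropped at transform time
print(vec.transform(["free lottery jackpot now"]).toarray())    # only the 'free' and 'now' columns are non-zero
print(vec.transform(["lottery jackpot"]).toarray())             # an all-zero vector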
How do I choose my BoW so as to improve my prediction accuracy?
Experimentally: use a validation set in order to evaluate several methods, then select the one which performs best and apply only this one on the test set. |