H: How to process Dicom Images for CNN?
I am building a disease classifier. I have Dicom scans for many patients.
The scans have different slice thicknesses, and different scans have different numbers of slices. However, within a single patient's scan, all slices have the same thickness.
For example:
Patient1's scan : 100 slices of 2mm thickness
Patient2's scan : 500 slices of 1mm thickness
The number of pixels for each slice is 512 x 512,
so currently the shape of nd array containing the information for
Patient1 is 100*512*512
Patient2 is 500*512*512
I want to pass all patients information into a CNN.
Do I need to resample the slices and make it uniform for all patients (like 512*512*n). If yes, how to do it, and what should be the value of 'n'?
AI: Can you use zero padding? It's a pre-processing technique for CNNs: it consists in creating a frame of zeros around the image, so that all input images have the same size.
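For instance, a minimal sketch with NumPy (assuming each scan is already a (n_slices, 512, 512) array; here the padding is applied along the slice axis so every patient ends up with the same number of slices, and pad_volume / patient1_volume are just illustrative names):
import numpy as np

def pad_volume(volume, target_slices):
    """Zero-pad a (n_slices, 512, 512) volume along the slice axis.
    Assumes target_slices >= volume.shape[0]."""
    pad_total = target_slices - volume.shape[0]
    pad_before = pad_total // 2
    pad_after = pad_total - pad_before
    return np.pad(volume, ((pad_before, pad_after), (0, 0), (0, 0)), mode='constant')

# e.g. pad every patient up to the largest slice count in the dataset
# padded = pad_volume(patient1_volume, target_slices=500)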
The CNN will then learn autonomously to ignore the zeros. It's a common technique, Keras layers already have padding built-in arguments. |
H: Skin Detection Classifier
I have a small data set containing around 80 images of people, and the corresponding ground truth for skin regions. I want to train a classifier to be able to detect skin, and use it later on my own generated data, but I'm not sure what the right way to go is. I believe it would be hard to train a CNN due to the small number of images. So can someone suggest a direction?
AI: I suggest to give transfer learning a try for two reasons:
Your dataset is relatively small
Skin, faces and similar human body features have been successfully classified or segmented by existing neural networks
Specifically, you might want to have a look at Skin Lesion Analysis Toward Melanoma Detection. While the tasks were slightly different it did include a segmentation task of Melanomas. Here is an example of what they did:
And this is a snapshot of how some of the models performed:
Source: both images are from the paper linked above
It might be close enough to your task to re-use their networks or some part of them. Once you have found a transfer network, you can vary the degree to which you replace or retrain its layers to see what works. Given your relatively small dataset I would not retrain too much but start with the final layers only.
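As a rough illustration, here is a minimal Keras transfer-learning sketch; it uses a generic ImageNet backbone (MobileNetV2) as a stand-in for a skin-specific network and assumes a patch-level skin / non-skin classification setup rather than full segmentation:
import tensorflow as tf

# generic pretrained backbone; swap for whichever transfer network you end up choosing
base = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained features, retrain only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # skin vs. non-skin
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])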
However, depending on what exactly your dataset looks like, there might be other tasks (e.g. from security applications, where one needs to detect, classify or mask an object such as a person or a face) which resemble yours even better. Therefore, before digging into a specific model and applying transfer learning, I'd do some more research. |
H: Fitting glm without explicit declaration of each covariate
When I fit a linear model with many predictor variables, I can avoid writing all of them by using . as follows:
model = lm(target_deathrate~., data = full_data)
But for models with higher complexity, I cannot make this work:
x = glm(target_deathrate~poly(., i),data = full_data)
In these cases I have to write all variables.
How to avoid writing all variable names and include all variables in my model?
AI: This has already been answered in stackoverflow here and here.
The main idea is to create the formula from a string with as.formula:
xnam <- paste("x", 1:25, sep="")
fmla <- as.formula(paste("y ~ ", paste(xnam, collapse= "+"))) |
H: Risk prediction vs classification model
I am working on a binary classification model. Currently, when I use scikit-learn logistic regression, it outputs binary values like 0s and 1s. However, I understand from online reading that it outputs probabilities and, based on a threshold of 0.5, converts them into two classes.
1) Does building a risk prediction model mean just stopping our project as soon as we get the probability output and not applying this threshold? Is that what is called a risk prediction model? If yes, how do I do that using scikit-learn logistic regression?
2) Does scikit-learn logistic regression allow us to modify the threshold?
3) Can all classification algorithms like SVM, RF, XGBOOST, etc. be used to build a risk prediction model without going for the threshold cutoff?
AI: I will try to answer your question as shortly as possible.
Yes, if you define probability as a risk, then the probabilities are risk scores. But there's a catch in these scenarios: you will have to include the prevalence of a class to calibrate them. If Person A has a risk score of 0.9 but you have observed that the positive class is only 20% of the data, then the actual risk is much lower than the probability itself. You can use the clf.predict_proba() function to get these probabilities.
Once you have these probabilities, you should be able to set your custom threshold.
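A minimal sketch of that with scikit-learn (X_train, y_train and X_test are placeholders for your own data):
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression().fit(X_train, y_train)   # placeholder training data
risk_scores = clf.predict_proba(X_test)[:, 1]      # probability of the positive class
threshold = 0.3                                    # any custom cut-off instead of 0.5
predictions = (risk_scores >= threshold).astype(int)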
Yes, most of these models in different libraries have predict_proba() functions.
Let me know if you have more questions. |
H: What inference can we draw from the frequency distribution of thresholds?
I have the probability scores of the positive class from two models. The frequency distributions of those probability scores (thresholds) look like this:
Model #1
Model #2
It's a binary classification problem, and the thresholds obtained are from the test data. The stats in the confusion matrix are nearly the same for the two models. Can we say that Model #2 is doing well in separating classes? What other conclusions can we draw from the frequency of the thresholds?
AI: I'd say Model 1 is performing really well. If you can use different colors while plotting for the positive and negative classes, then you should be able to see the difference. When you are trying to do binary classification, the distributions of the negative and positive classes should be dipping in the mid region, whereas Model 2 shows the opposite. |
H: data splitting into 3 sets based on years
Let's suppose we have customer data from 2015 to 2019. I want to split my data (in the spirit of train_test_split()) into three sets: set 1 is 2015 to 2017 (3 years), on which I will train my model; set 2 is 2018 (1 year), on which I will validate my model; and set 3 is 2019 (1 year), on which I will test my model. I want code to divide the data into 3 sets based on time (years).
AI: Seems to me the best (or at least quickest) way to do this would be have all the data in a Pandas dataframe, then create masks based on year and create new dataframes for each group. Ex:
train_df = data[data['year'].isin(['2015', '2016', '2017'])]
validate_df = data[data['year'] == '2018']
test_df = data[data['year'] == '2019']
Hope this is what you're looking for. If not, let me know and we can work out another solution. |
H: Activation function between LSTM layers
I'm aware the LSTM cell uses both sigmoid and tanh activation functions internally, however when creating a stacked LSTM architecture does it make sense to pass their outputs through an activation function (e.g. ReLU)?
So do we prefer this:
model = LSTM(100, activation="relu", return_sequences=True, input_shape=(timesteps, n_features))
model = LSTM(50, activation="relu", return_sequences=True)(model)
...
over this?
model = LSTM(100, return_sequences=True, input_shape=(timesteps, n_features))
model = LSTM(50, return_sequences=True)(model)
...
From my empirical results when creating an LSTM-autoencoder I've found them to be quite similar.
AI: Simple explanation with images
We know that an activation is required between matrix multiplications to afford a neural network the ability to model non-linear processes.
A classical LSTM cell already contains quite a few non-linearities: three sigmoid functions and one hyperbolic tangent (tanh) function, here shown in a sequential chain of repeating (unrolled) recurrent LSTM cells:
Images borrowed from "colah's blog"
So far this is just a single LSTM layer, and here we see that the cell output is already the multiplication of two activations (a sigmoid and a hyperbolic tangent). In this case, you could agree there is no need to add another activation layer after the LSTM cell.
You are talking about stacked layers, where we would put an activation between the hidden output of one layer and the input of the stacked layer. Looking at the central cell in the image above, it would mean a layer between the purple ($h_{t}$) and the stacked layer's blue $X_{t}$. You will notice then that the output in this case, like the sequential output, has already been activated, as it is the exact same output as the black left-to-right arrow (namely $h_{t}$). What is more, the first thing the input would do in the stacked layer is to be passed through the sigmoids and hyperbolic tangents of the forget/input/output gates.
So there are plenty of non-linearities being used, meaning it is unnecessary to add yet another one between the stacked LSTM layers. You might like to think of it as simply applying two ReLU layers after a fully-connected layer. The results might be slightly different compared to just using one, but not much; as in your experiments with stacked LSTMs.
Documentation
If you look at the Tensorflow/Keras documentation for LSTM modules (or any recurrent cell), you will notice that they speak of two activations: an (output) activation and a recurrent activation. It is here that you can decide which activation to use and the output of the entire cell is then already activated, so to speak. PyTorch doesn't seem to (by default) allow you to change the default activations.
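For example, in Keras both can be set explicitly (these are already the default values, shown here only to make the two arguments visible):
from tensorflow.keras.layers import LSTM

layer = LSTM(100,
             activation='tanh',              # output/candidate activation
             recurrent_activation='sigmoid', # gate (input/forget/output) activation
             return_sequences=True)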
Real world stacked models
Common applications of recurrent networks are found in NLP, for example the ELMo model. If you look through the network design code, you see only basic LSTM cells being used, without additional activation layers. They only mention adding activations for the fully-connected layers (namely a ReLU) at the final output.
The first usage of stacked LSTMs (that I know of) was applied to speech recognition (Graves et al.), and the authors also do not mention the need for activation layers between the LSTM cells; only at the final output in conjunction with a fully-connected layer. |
H: Synthetic time series generation according to some distribution
I'm trying to develop a change detection model that uses sliding windows. Given a time series with some features, I have a sliding window that analyses a time period and compares it with the successive time window. If the distribution of the data has changed, then it is notified. Anyway, I don't have enough real data for my problem, so I need to generate some artificial time series, but with different distributions. It does not matter which distributions; the important thing is that they are different, so that I can test when my model detects a change in the data. How can I generate such data?
AI: If I understand you correctly, I think I have a simple way and you do not need to overthink it.
Take a Gaussian (Normal) distribution random number generator, it could even be just downloading a set from https://www.random.org/gaussian-distributions/
Multiply by the desired standard deviation, add the desired mean.
From a certain timestamp, change the standard deviation and mean to a new value to simulate the distribution change. This is the same as concatenating smaller sets with different mean and standard deviation.
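A minimal NumPy sketch of those steps (the means, standard deviations and segment lengths below are arbitrary choices):
import numpy as np

rng = np.random.default_rng(42)
segment1 = rng.normal(loc=0.0, scale=1.0, size=500)   # first regime
segment2 = rng.normal(loc=3.0, scale=2.0, size=500)   # changed mean and std
series = np.concatenate([segment1, segment2])         # change point at index 500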
See if/when your model detects it. |
H: How to plot multiple columns with ggplot in R?
I do have a data frame with different categorical and numerical columns with the following schema:
Id | num_col_1 | num_col_2 | num_col_3 | cat_col_1 | cat_col_2
Now I want to draw a combined plot with ggplot where I (box)plot certain numerical columns (num_col_1, num_col_2), with boxplot groups according to cat_col_1 factor levels per numerical column. Along the y axis is the spread of the respective selected column (not another column). So far I couldn't solve this combined task.
Thank you.
AI: If I understand your question correctly, you are looking to plot selected numerical columns against a selected categorical column of your dataset, am I right?
If so, you can have the use of dplyr, tidyr and ggplot2 packages to achieve this.
Starting with this dataframe:
id num1 num2 num3 cat cat2
1 C -0.48892284 1.417909 2.8884577 a f
2 C -0.62795166 1.472390 1.6625688 c f
3 B -0.04691673 2.731553 0.9692889 c e
4 B 0.16261812 -1.152528 2.4308332 a d
5 C 1.29230591 -1.609465 2.2089074 a f
6 E -0.46355650 -1.070132 0.4517597 b f
Basically, you are selecting first your columns of interest (here num1, num2 and cat), then, you reshape data into a longer format using pivot_longer function to obtain something like that:
library(tidyr)
library(dplyr)
df %>% select(id, num1, num2, cat) %>%
pivot_longer(., cols = c(num1,num2), names_to = "Var", values_to = "Val")
# A tibble: 200 x 4
id cat Var Val
<fct> <fct> <chr> <dbl>
1 C a num1 -0.489
2 C a num2 1.42
3 C c num1 -0.628
4 C c num2 1.47
5 B c num1 -0.0469
6 B c num2 2.73
7 B a num1 0.163
8 B a num2 -1.15
9 C a num1 1.29
10 C a num2 -1.61
# … with 190 more rows
Finally, you can add to this pipe sequence the plotting part by calling ggplot and geom_boxplot :
library(tidyr)
library(dplyr)
library(ggplot2)
df %>% select(id, num1, num2, cat) %>%
pivot_longer(., cols = c(num1,num2), names_to = "Var", values_to = "Val") %>%
ggplot(aes(x = Var, y = Val, fill = cat)) +
geom_boxplot()
Is it what you are looking for ?
Data
set.seed(123)
id <- sample(LETTERS[1:5],100, replace = TRUE)
num1 <- rnorm(100)
num2 <- rnorm(100)*2
num3 <- rnorm(100)+2
cat <- sample(letters[1:3],100, replace = TRUE)
cat2 <- sample(letters[4:6],100, replace = TRUE)
df <- data.frame(id, num1, num2,num3, cat,cat2) |
H: Justification for values used in backpropagation
I'm learning the method for backpropagation in adjusting weights. A generalization of the formula used to determine the change made to a respective weight is
$$\Delta w_i \propto -\frac{\partial E_{total}}{\partial w_i}$$ where $\frac{\partial E_{total}}{\partial w_i}$ is the rate at which the total error changes as the i-th weight changes. I get why this value is interesting in the context of what we're trying to accomplish with backpropagation, but I don't totally understand why this is a good value to use when determining how to adjust the new weight, particularly when there are multiple weights influencing the total error. Why is this a "good" number to use when changing our weight?
AI: The objective of back-propagation is to isolate the effect of each weight on the total error. Once the effect is isolated, each weight can be changed individually such the total error is minimized.
During back-propagation, the effects of other weights are automatically isolated. Take, for example, the function:
$$
E_{total}(x) = w_1x + w_2x^2 \\
$$
Then
$$
dE_{total}(x)/dw_1 = d(w_1x)/dw_1 + d(w_2x^2)/dw_1 \\
dE_{total}(x)/dw_1 =x + 0
$$
So, you see, back-propagation ignores the effects of other weights by definition of differentiation. When you back-propagate with respect to $w_1$, all other $w$s are considered constant.
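You can check this symbolically, for example with SymPy (just an illustration of the differentiation step above, not part of an actual training loop):
import sympy as sp

w1, w2, x = sp.symbols('w1 w2 x')
E_total = w1 * x + w2 * x**2
print(sp.diff(E_total, w1))  # x      (the w2 term is treated as a constant)
print(sp.diff(E_total, w2))  # x**2   (the w1 term is treated as a constant)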
$dE_{total}/dw_i$ is the change in error as you change a particular weight $w_i$ and only that weight. So it makes sense that you would want to change $w_i$ in the opposite direction of that change i.e. $-dE_{total}/dw_i$. |
H: Knowing Joint probability distribution between feature-label space
I am doing a course CORNELL CS4780 "Machine Learning for Intelligent Systems". you can find the link here for the one I am going to refer 1st lecture
The professor explains, we have a sample
$D = \{ (X_1,y_1),(X_2,y_2), \ldots,(X_n,y_n)\} \sim P$
Where, (Xi,yi) is a feature-label pair. There is a joint distribution
over the feature-label space and is denoted by $P$.
We never have access to $P$; only God knows $P$. What we want to do in this supervised learning task is to take data from this distribution and learn a mapping/function from $X$ to $y$.
I agree/understand till this point.
Then, the professor goes on to make a statement in the lecture, precisely at 34 minutes 26 seconds, that
"If we had access to this distribution, everything would be easy." But he doesn't explain this statement.
Now my question is: what would have been easy if we knew the distribution? Does he mean that, if we had access to the distribution, then we would know the probabilities of each $(X_i,y_i)$ pair, and then we could learn a mapping/parameters such that we reduce the out-of-sample error?
AI: The interest is to predict $y$.
If we know the real distribution $P$ of $(x,y)$, there is no need to build any more machine learning model. Given $x$, we can directly consult $P$ to know the probability of $P(y|x)$. For the case of discrete $Y$, we have $P(y|x)=\frac{P(x,y)}{\sum_z P(x,z)}$. |
H: Handling Categorical Features on NGBoost
Recently I have been doing some research on NGBoost, but I could not see any parameter for categorical features. Is there any parameter that I missed?
__init__(self, Dist=<class 'ngboost.distns.normal.Normal'>, Score=<class 'ngboost.scores.MLE'>, Base=DecisionTreeRegressor(ccp_alpha=0.0, criterion='friedman_mse', max_depth=3,
| max_features=None, max_leaf_nodes=None,
| min_impurity_decrease=0.0, min_impurity_split=None,
| min_samples_leaf=1, min_samples_split=2,
| min_weight_fraction_leaf=0.0, presort='deprecated',
| random_state=None, splitter='best'), natural_gradient=True, n_estimators=500, learning_rate=0.01, minibatch_frac=1.0, verbose=True, verbose_eval=100, tol=0.0001)
https://github.com/stanfordmlgroup/ngboost
AI: It does not support categorical features at the moment (it will come eventually, just as XGBoost did not have to have it at first).
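In the meantime, a common workaround is to encode the categorical columns yourself before fitting, for example with one-hot encoding (the import and column names below are illustrative and may need adjusting to your ngboost version and data):
import pandas as pd
from ngboost import NGBRegressor  # adjust to the class/interface your ngboost version exposes

X_encoded = pd.get_dummies(X, columns=['cat_col_1', 'cat_col_2'])  # hypothetical column names
model = NGBRegressor()
model.fit(X_encoded, y)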
As for why: given that it's a boosting method in the first place, one can ask what the history of XGBoost and the subsequent CatBoost and LightGBM looks like. The XGBoost implementation of gradient boosting did not handle categorical features because it did not have to; it was sufficient as it was. What made XGBoost special was the use of Hessian information: while other implementations (e.g. gbm in sklearn in Python) used just gradients, XGBoost used Hessian information when boosting, which in turn made it much faster.
TL;DR handling categorical features is not just a matter of convenience, it is also important for speed. This feature will come eventually (if we contribute!); for the time being it's all about representing uncertainty. |
H: Interpreting Gradients and Partial Derivatives when training Neural Networks
I am trying to understand of purpose of partial differentiation in NN training by knowing how to interpret gradients and their partial derivatives. Below is my way of interpreting them so I would like to know if I am correct, and if not, could someone please point me in the right direction.
If we are working with functions that depend on a single variable, then the derivative of that function with respect to that variable is a slope, which tells us how changes in that variable affect the function value.
If we are working with functions that depend on several (N) variables, then the derivative of that function with respect to all of these variables is the gradient (i.e. a vector of partial derivatives), which points in the direction of steepest increase of the function. Each partial derivative corresponds to a specific dimension in the N-dimensional space that we are trying to optimize over (e.g. the quadratic cost function C(W,b)).
My question is, when we calculate partial derivative with respect to one parameter (e.g. weight between input x1 and 1st hidden layer neuron) then we are treating all other weights and biases as constants and we are evaluating how will cost function change if we were to take a step in the direction that is represented by that particular weight. Is this correct? If not, please correct my understanding of partial differentiation in NN training procedure.
Also, what is the role of Jacobian matrix in NN training?
Thank you so much!
AI: My question is, when we calculate partial derivative with respect to one parameter (e.g. weight between input x1 and 1st hidden layer neuron) then we are treating all other weights and biases as constants and we are evaluating how will cost function change if we were to take a step in the direction that is represented by that particular weight. Is this correct?
Yes.
Long answer: that is exactly the meaning of partial derivative. It's the impact of a specific variable, while keeping all the others constant. It's an all things equal condition (or ceteris paribus, if you like Latin).
You need it in order to understand how much that specific parameter, at that current value, is contributing to the final Loss. You have to keep everything else equal in order to understand what is that parameter's responsibility of the final model's error.
Also, what is the role of Jacobian matrix in NN training?
More generally, the Jacobian matrix of a Neural Network is the matrix of partial derivatives of the outputs with respect to the parameters. Its size is:
( Number of observations, Number of parameters )
It's a way to store/represent your gradient information. The application of the chain rule to execute backprop is executed on those in practice. |
H: Trying to return more than just the top result from sklearn NearestNeighbors
I'm trying to compare a list of names (duplicated into a clean file and a messy file). I then compare the files against each other. My problem is that it returns only the top 1 result for each, which is itself (the identical record in each file). What I am trying to capture is the second result, which would be the closest match, not being itself.
names = pd.read_csv('C:/Temp/messynames.txt', sep='\t')
org_names = names['VariationName'].unique()
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams)
tf_idf_matrix = vectorizer.fit_transform(org_names)
clean_org_names = pd.read_csv('C:/Temp/cleannames.txt', sep='\t')
org_name_clean = clean_org_names['StandardName'].unique()
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams, lowercase=False)
tfidf = vectorizer.fit_transform(org_name_clean)
nbrs = NearestNeighbors(n_neighbors=3, n_jobs=-1).fit(tfidf)
unique_org = set(names['VariationName'].values)
def getNearestN(query):
queryTFIDF_ = vectorizer.transform(query)
distances, indices = nbrs.kneighbors(queryTFIDF_)
return distances, indices
distances, indices = getNearestN(unique_org)
unique_org = list(unique_org) #need to convert back to a list
matches = []
for i,j in enumerate(indices):
temp = [round(distances[i][0],2), clean_org_names.values[j][0][0],unique_org[i]]
matches.append(temp)
matches = pd.DataFrame(matches, columns=['Match confidence (lower is better)','Matched name','Original name'])
matches.to_csv('C:/Temp/matchednames.txt', sep='\t', encoding='utf-8', index=False, quoting=3)
For a file with the following four names:
NOKIA
NOKIAA
NOKIA LMD
NOKIA LTD
The results looks like this:
Match confidence Matched name Original name
0 0.0 NOKIA LMD NOKIA LMD
1 0.0 NOKIAA NOKIAA
2 0.0 NOKIA NOKIA
3 0.0 NOKIA LTD NOKIA LTD
I'm trying to get to something more like:
Match confidence Matched name Original name
0 0.1 NOKIA LTD NOKIA LMD
1 0.1 NOKIA NOKIAA
AI: My guess would be that in the following piece of code:
for i,j in enumerate(indices):
temp = [round(distances[i][0],2), clean_org_names.values[j][0][0],unique_org[i]]
matches.append(temp)
The variable (array element):
distances[i][0]
contains the top element match for the $i^{th}$ line. Replace the second array index so it becomes:
distances[i][1]
You may want to increase one of the two zero indices in the following line as well, so that the matched name also corresponds to the second-nearest neighbour (consistent with the change above):
clean_org_names.values[j][0][0] |
H: Modifying a distribution by adding in samples incrementally
I would like to calculate the distribution (e.g., Gaussian) of a set of samples. However, I would also like to see how the distribution changes as I fit the samples into the distribution incrementally.
One way to do this would be to compute the distribution over all relevant samples every increment (e.g., first increment: calculate distribution of 2 samples, second increment: calculate distribution of 3 samples). However, this is computationally intensive.
Would I be able to calculate the distribution of 3 samples from solely the 3rd sample and the properties of the distribution of 2 previous samples?
For example, say I have 5 ordered samples.
I start off by calculating the mean and standard deviation of the first 2 samples This is the 1st Gaussian distribution.
I then look at the third sample, and fit it into the first Gaussian distribution (knowing the mean, std, number of samples). This is the 2nd Gaussian distribution.
I then look at the fourth sample, and fit it into the 2nd Gaussian distribution (knowing the mean, std, number of samples). This is the 3rd Gaussian distribution.
AI: Calculation of standard deviation on the fly is possible (turn to our brothers at math.stackexchange):
https://math.stackexchange.com/questions/198336/how-to-calculate-standard-deviation-with-streaming-inputs
It is easier to keep track of the variance and only take the square root to calculate the stdev when you really need it.
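A minimal sketch of that streaming update (Welford's algorithm, in the spirit of the linked answer; this version tracks the population variance):
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0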
And the mean is even easier, don't overthink it. If you have 4 samples with a mean of 4 and 3 samples with a mean of 3, this totals up to 25 in 7 samples.
So
$$\mu = \frac{\mu_1 N_1 + \mu_2 N_2 + \mu_3 N_3}{N_1 + N_2 + N_3}$$
and, for combining the variances, you also need the batch means (mean of the squares minus the square of the combined mean):
$$\sigma^2 = \frac{N_1(\sigma_1^2 + \mu_1^2) + N_2(\sigma_2^2 + \mu_2^2) + N_3(\sigma_3^2 + \mu_3^2)}{N_1 + N_2 + N_3} - \mu^2$$ |
H: How to properly apply CrossValidation and/or split the dataset?
I have a particular problem and do not really now how to properly validate my experiments in this scenario.
There is one big data set with 100.000 samples, 99.000 y=0, 1.000 y=1
Each sample has 1.000 Features
There are 10 different subsets of feature combinations which have to be evaluated to get information about more or less expressive feature groups
Because of the different number and type of features in each subset, an adequate model architecture has to be figured out manually
The performance of a few different kinds of models have to be evaluated
The performance of several different sampling techniques have to be evaluated
10-fold Cross Validation has to be applied
Random Search has to be applied for each experimental configuration (Combination of subset + model + sampling technique)
These facts lead to the following experimental routine:
train, val = splitDataset()
iterate each featureSubset:
iterate each modelType:
model = manuallySearchGoodArchitecture(featureSubset, modelType, train, val)
iterate each samplingTechnique:
train_temporary = applySampling(samplingTechnique)
HPs = HPOptimization(model, train_temporary, val)
result = k-FOldCV(model, HPs, samplingTechnique, train)
end
end
end
So, now I'm not sure about whether the CrossValidation is applied correctly. Because the model which has to be validated has already seen the train data (during manuallySearchGoodArchitecture and HPOptimization) and those are also used for testing purpose during CV.
So, is this routine faulty?
What would be the correct way of performing those experiments while applying proper validation technique?
How must the data set be splitted?
AI: I'm not sure that I understand every part of the process but there is one clear issue with it: because the CV is applied in the inner loop, there is a serious risk of overfitting the model with respect to the other parameters (feature subset, model type, sampling technique). Depending on the goal, this is not necessarily wrong but it's important to interpret the results accordingly. For instance in this case the results will show how different subsets of features perform on the data, but this difference in performance across subsets shouldn't be considered reliable: it's possible that a particular subset happens to be better than another by chance.
It's quite a complex setting and I assume that there are efficiency constraints to take into account. If possible, the most reliable results would be obtained by doing several stages of CV (or other techniques, e.g. bagging) using different subsets of data. For instance you could run the whole process a few times, each time using a different random subset of instances: in this way you can average performance and see whether a particular subset of features is consistently better than another (and the same idea applies to the other parameters).
[edited] Disclaimer: I don't know if there is any standard way to proceed with a complex multi-level setting like this, my advice is based only on the experience I had with a few broadly similar cases.
Generally the idea is that every choice to make can be considered as an hyper-parameter, including the subset of features, the type of model, the architecture, the sampling technique. Therefore I think that ideally one would cross-validate at every level, i.e. put every loop level in a function and call this function k times with a different subset of data. It would look like something like this:
train1, val1 = splitDataset()
iterate each featureSubset:
train2, val2 = splitData(train1)
resultTrain2 = kfoldCV(processLevel2, train2)
resultLevel2 = apply(resultTrain2, val2)
resultLevel1 = apply(resultLevel2, val1)
processLevel2:
iterate each modelType:
train3, val3 = splitData(train2)
resultTrain3 = kfoldCV(processLevel3, train3)
...
remark: I'm not 100% sure about the algorithm, maybe I over-complicated it. I think it gives the general idea though.
But of course following this logic the computational complexity becomes way too high, so you will probably have to take a few shortcuts. One thing I've tried successfully in the past is to use genetic learning in order to optimize different parameters at the same time: that would mean having different "genes" which represent the different parameters (feature subset, model type, etc.), each with its set of values, and then run the genetic process which is supposed to converge to a set of optimal values for all the parameters (I was using CV every time a particular combination of parameters is evaluated). But again I don't know if it's a very orthodox method :)
[edit2]
After more thought I think I would try to do something like this:
innerData, valOuter = splitDataset() // keep large amount for validation set, say between 20-40%
train, valInner = splitDataset(innerData)
iterate each featureSubset:
iterate each modelType:
model = manuallySearchGoodArchitecture(featureSubset, modelType, train, valInner)
iterate each samplingTechnique:
train_temporary = applySampling(samplingTechnique)
HPs = HPOptimization(model, train_temporary, valInner)
result = k-FOldCV(model, HPs, samplingTechnique, train)
end
end
end
bestHPCombinations = select top N HPCombinations from first stage
for each HPCombination in bestHPCombinations
model = train(innerData)
result = apply(model, valOuter)
end
It's simpler: the idea is just to re-evaluate the results from the first stage on some fresh data in order to avoid overfitting (here HPCombination includes featureSubset, modelType, etc.). The second stage could also include CV or bagging for more reliable results. But in general I think this option would be reasonably reliable, since it's unlikely that the best models from the first stage would also be the best models in the second stage by chance only. |
H: How to validate regex based Resume parser efficiently
I am using rule-based logic to extract features from resumes. Basically, I am trying to find whether the candidate switched companies in less than 1 year, and I have Python code in place to do this with regexes. However, to validate it, I am currently checking a few resumes manually: I open the resume, check whether the candidate switched companies in less than 1 year, and compare it with what my regex logic gives. This is a time-consuming way to validate.
Is there any better way to validate this piece of Python code?
AI: Is there any better way to validate this piece of Python code?
No, because any automatic method would be equivalent to creating another learner, and there would be no way to know if it's good or bad at the job. So unless you find another annotated dataset (i.e. a set of resumes with labels indicating whether or not they switched jobs in the past year), you must create your own annotated set. |
H: Does it make sense to do train test split when trainning GANS?
For normal supervised learning the dataset is split in train and test (let's keep it simple).
Generative Adversarial Networks are unsupervised learning but there is a supervised loss function in the discriminator.
Does it make sense to split the data into train and test when training GANs?
My first opinion will be no, but I am not 100% sure. Is there any reason why having a test set will help the Generator?
AI: Training GANs is only a partially unsupervised task, IMHO. It's certainly unsupervised for the Generator, but it's supervised for the Adversarial Network. So it might be useful to test the Discriminator's ability to distinguish fake and true cases on new data it has never seen before.
In other words, it makes sense to split your dataset in train(-validation)-test if you want to understand the Discriminator's ability to generalize its task on data it has never seen before. In case this is not of your interest, I guess you don't need to do it. |
H: Comparing excel data sets in Pandas
Pretty new to Python, but as an SEO I'm looking at the benefits of using notebooks in my workflow.
I've got two excel files which I've cleaned and imported into a new notebook using pandas.
I'm trying to compare position changes and create a new dataframe with new columns to show previous, new, and changes in positions.
Have a look at the screengrabs of the data below. Thanks in advance.
AI: You can do a pandas.DataFrame.join if you know how this works.
-- Edit: merge is apparently a better choice: See the example at the end.
I think you need an outer join on Keyword.
This should give a new DataFrame that contains unique rows for the Keyword in both tables. Some entries may be NULL/None. This indicates that the keyword was not present in the old or new table, and you should treat it as a new keyword, or a keyword that has dropped from the list.
Rename the columns in the new table appropriately, and then compute the change between the old and new position columns, taking into account that some values are NULL.
You can do a similar thing in Excel: https://superuser.com/questions/1023123/how-to-simulate-a-full-outer-join-in-excel
Edit:
Minimalistic example:
import pandas as pd
old = pd.DataFrame({'keyword': ['football', 'soccer', 'rugby'], 'position': [2, 1, 3]})
new = pd.DataFrame({'keyword': ['hockey', 'rugby', 'soccer'], 'position': [3, 2, 1]})
old.keyword = old.keyword.astype(str)
new.keyword = new.keyword.astype(str)
old.set_index(['keyword'])
new.set_index(['keyword'])
old = old.rename(columns={"position": "position_old"})
new = new.rename(columns={"position": "position_new"})
print(old)
print(new)
merged = pd.merge(old, new, how='outer', on='keyword')
print(merged)
Output:
keyword position_old
0 football 2
1 soccer 1
2 rugby 3
keyword position_new
0 hockey 3
1 rugby 2
2 soccer 1
keyword position_old position_new
0 football 2.0 NaN
1 soccer 1.0 1.0
2 rugby 3.0 2.0
3 hockey NaN 3.0 |
H: What applications does linear programming have in data science?
I'm currently learning about linear programming in my degree. I'm wondering how this is relevant to anything in data science?
AI: Whenever you have an optimization problem, the first question that you have to ask yourself is:
Can I make it a Linear Programming problem?
For python I normally use Gurobi. Here is a basic example to get started: https://www.gurobi.com/resources/food-manufacture-i/
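If you prefer an open-source option, SciPy also ships a simple LP solver; a toy example (maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0):
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize 3x + 2y
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [1, 3]],
                 b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)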
You can also do Machine Learning with it, which is the hot topic nowadays, but if you can make it a linear programming problem you will achieve optimality. |
H: Consider ratings as sentiment labels?
Beginner here!
I have a dataset with reviews of a product as text and ratings for the product.
My original plan was to use a Naive Bayes classifier for sentiment analysis, but my data doesn't have the required sentiment labels (negative/positive).
Shall I use the ratings (1-5) and encode it as positive and negative?
Or using the lexicon-based methods is more valid?
AI: That would depend on the exact goal of the task and the specifics of the dataset, but in general I would say that it's always better to use the information specifically provided with the data if it's relevant for the task. In this case the rating for the product is indeed very likely to reflect the sentiment of the text, so I would go with it. Notice that you could also do both and compare the cases where the predicted sentiment differs from the one derived from the rating.
Given that the ratings are provided as 1-5 scores I would also consider the option of treating the task as a regression problem, instead of the standard binary classification setting. |
H: Reason for generally using RMSE instead of MSE in Linear Regression
In linear regression, why do we generally use RMSE instead of MSE? The rationale I know is that it's easy to minimize the error with RMSE instead of MSE by gradient descent, but I need to know the exact reason.
AI: Although RMSE is just the square root of MSE, the gradient of RMSE with respect to the $i^{th}$ prediction differs from that of MSE.
$$\frac{\partial RMSE}{\partial \hat{y}_i} = \frac{1}{2}\frac{1}{\sqrt{MSE}}\frac{\partial MSE}{\partial \hat{y}_i}$$
The gradient of RMSE is equal to the gradient of MSE multiplied by the factor $\frac{1}{2}\frac{1}{\sqrt{MSE}}$, which rescales the step size much like a change of learning rate would. This shows that RMSE and MSE cannot be used interchangeably with gradient-based methods, for example when fitting linear regression with the gradient descent optimization algorithm.
Further, in cases when it is better to give more weight (a higher error) to large errors, RMSE can be a better error measure. I believe it depends on your data distribution which of them to pick. In linear regression, not caring about the points which are considerably off the actual prediction line can lead to a bad model, so it might be better to give a higher error value to these points.
This and this might also help. |
H: How to choose the best parameter values for TfidfVectorizer in sklearn library?
Recently, I used TfidfVectorizer in scikit-learn library to calculate a matrix of TF-IDF features. However, I do not know how to set some parameters such as max_features, min_df, max_df, etc.
AI: I think these parameters are mostly used when you combine the vectorizer and a machine learning model in a pipeline. Therefore, you should tune these parameters based on the outcome of your model training.
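A minimal sketch of that idea, treating the vectorizer parameters as hyper-parameters of a text-classification pipeline (texts and labels are placeholders for your own corpus and targets):
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

pipeline = Pipeline([('tfidf', TfidfVectorizer()),
                     ('clf', LogisticRegression())])
param_grid = {'tfidf__max_features': [5000, 10000, None],
              'tfidf__min_df': [1, 2, 5],
              'tfidf__max_df': [0.8, 0.9, 1.0]}
search = GridSearchCV(pipeline, param_grid, cv=5)
# search.fit(texts, labels)
# print(search.best_params_)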
For example, if your task is to classify input texts, you may want to tune the max_features parameter such that the number of features is not too large, but your model can still perform reasonably well. In general, these parameters are used to control the size of the vector to be fed into your machine learning model, so as to avoid overfitting and improve the efficiency of model training. |
H: "Pip install tensorflow" only installs version 1.14 – how to install version 2 (with eager mode enabled?)
The question is already in the title: when installing tensorflow from the command line (in a virtual environment or not...), it is always tensorflow version 1.14.0 that gets installed. Consequently, eager execution does not work and I get error messages ("Tensor object has no attribute numpy..." etc.). However, the tensorflow documentation states that it should be the recent TensorFlow 2 version, which is in fact the one I need. I would be glad to read your advice on this issue.
Kehrwert
AI: You install the version you want with:
pip install tensorflow=={version you want}
for example:
pip install tensorflow==2.0.0-beta1
If you are working in Google Colab it's even simpler, just type:
%tensorflow_version 2.x
import tensorflow as tf
and it will automatically import the latest version of TensorFlow 2.
Eager mode is default in TensorFlow 2.x, therefore you don't need to manually activate it. As soon as you import it it's immediately available. |
H: Decision tree regression: Polynomials unnecessary?
I am testing out different models for a regression task.
When using OLS, Ridge and Lasso, I use different polynomial degrees of the explanatory variables.
Example: For two variables x and y, degree 2 would give the explanatory variables x, x^2, xy, y, y^2.
When using a decision tree, however, I am not sure whether it makes sense to use any degrees higher than 1 as explanatory variables. Example: Does it make sense to test for x^2, xy and y^2 when applying a decision tree regressor?
The reason I ask is that the decision tree regressor is a non-linear regressor. On the one hand, this could perhaps be an argument for it not making sense to include higher order polynomials, as the decision tree can already deal with non-linearity.
AI: Since CARTs (Classification And Regression Tree) are a non-parametric algorithm, they should be able to find interactions between variables and non-linear behaviors.
Nevertheless, building polynomials can help them have a better performance. |
H: Assistance needed on what machine learning approach to use
I'm currently writing my Master's Thesis on Subjective tagging of sounds and I feel that I've been stuck with the same problem for quite a time now and need assistance to progress. I'll, in short, describe my goal and what I've accomplished so far.
I'm writing a program that lets users manually tag audio files with a single label according to their own, i.e. subjective, perception. The tags are always adjectives, like happy, spooky or spacy. The goal is to have the program automatically tag similar sounds with the correct label by finding related sounds using machine learning. The program currently looks like this:
Usage of the program goes like this:
The user adds sounds to be labeled.
The program analyses the files and searches for related sounds using Algorithm 1.
The user tags a sound and the program automatically tags related sounds using Algorithm 2.
The user can verify tags given by the program and choose to accept them or give the sound a new tag.
Repeat from step 1 or step 3.
By looking at the program flow above I've been able to implement step 1 using AudioCommons Timbral Models to convert sounds to a usable data format, and step 2 with Algorithm 1 using Mean shift to find groups of related sounds. The idea is to use the groups found by Algorithm 1 as a starting point when no tag has yet been given to a group. When one of the sounds in the group is tagged, all the sounds receive the same tag.
My problem, however, lies in step 3 with Algorithm 2 and step 4. I've tried a few different approaches, but they all seem a bit off and not quite fit for the job.
My goal is to have the program continuously or recursively learn from user validation, hence improving as it is used. My thought process is to have a verified flag on each sound, which the user can use to say whether a tag given to a sound by the program is correct or not. Algorithm 2 should (re)learn from verified sounds in step 4, so that when step 3 is repeated, the automatic tagging of sounds done by the program is more accurate according to the user's choices.
So my question is: What kind of algorithm(s) is suitable for step 3 and 4 in place of Algorithm 2?
I've tried to make myself as clear as possible, but I'm quite new to machine learning, so feel free to point out confusing sections or errors.
In hope of assistance,
Andreas
Notes:
I've used tag and label interchangeably throughout the text; they refer to the same thing.
I want all machine learning to occur in isolation for a user, i.e. I don't want to learn from other user's input (like Collaborative filtering).
AI: Just a few observations from my side:
I think you should clarify your goal for yourself. Writing a program isn't a goal in itself. I'm missing the top level picture.
The second part: 'The goal is to have the program automatically tag similar sounds with the correct label by finding related sounds using machine learning.' is more like a goal, but where is the subjective part in it, and how does this answer questions in your thesis?
I'm not sure about the meaning of the output of Algorithm 1. Is this supervised or unsupervised classification? If it is supervised, the 'supervision' is not subjective (until Algorithm 2). If it is unsupervised, the classes will capture similarities, but the similarities are unlikely to correspond to the labels you intend to use.
However, if you take e.g. $k=9$ classes with k-means (you will have to optimize the number) the similarities in the classes will increase, and you could 'learn' the matching of labels and the classes. The $k$ classes should map to one of 3 labels.
Don't try to fit all these steps into one model. Create a few and connect them properly. Measure, interpret and report intermediate steps in your thesis. |
H: What is a 'hidden state' in BERT output?
I'm trying to understand the workings and output of BERT, and I'm wondering how/why each layer of BERT has a 'hidden state'.
I understand that RNNs have a 'hidden state' that gets passed to each time step, which is a representation of previous inputs. But I've read that BERT isn't an RNN - it's a CNN with attention.
But you can output the hidden state for each layer of a BERT model. How is it that BERT has hidden states if it's not a RNN?
AI: BERT is a transformer.
A transformer is made of several similar layers, stacked on top of each others.
Each layer have an input and an output. So the output of the layer n-1 is the input of the layer n.
The hidden state you mention is simply the output of each layer.
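For example, with recent versions of the HuggingFace transformers library you can ask for all of them (the exact return format may differ slightly between library versions):
from transformers import BertTokenizer, BertModel
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

inputs = tokenizer("Hello world", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))       # 13: embedding output + 12 encoder layers
print(outputs.hidden_states[-1].shape)  # (batch, seq_len, 768), the last layer's output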
You might want to quickly look into this explanation of the Transformer architecture :
https://jalammar.github.io/illustrated-transformer/
Note that BERT use only Encoders, no Decoders. |
H: RandomForest and tree feature importance in scikit-learn
What is the difference between model.feature_importances_ and tree.feature_importances_ in the following code:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
# Boston Housing dataset
from sklearn.datasets import load_boston
boston = load_boston()
# Convert 'skleran.bunch' to Pandas dataframe
data = pd.DataFrame(boston.data, columns=boston.feature_names)
# Create train and test sets for cross-validation
X,y = data.iloc[:,:-1], data.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 123)
model = RandomForestRegressor()
model.fit(X_train, y_train)
As I understand the following is importance of features:
importance = model.feature_importances_
importance_df = pd.DataFrame(importance, index=X_train.columns,
columns=["Importance"])
Importance
-----------------
CRIM 0.025993
ZN 0.002781
INDUS 0.004832
CHAS 0.000315
NOX 0.028655
RM 0.406285
AGE 0.017987
DIS 0.040696
RAD 0.003615
TAX 0.009281
PTRATIO 0.009103
B 0.012354
LSTAT 0.438106
And what is tree.feature_importances_?:
[tree.feature_importances_ for tree in model.estimators_]
How do they differ, how are they calculated, and which is more important, 0.2 or 0.9? I can't find it in the docs.
AI: A random forest model is an agglomeration of decision trees. tree.feature_importances_ gives the feature importances of each individual tree, while model.feature_importances_ gives the feature importances for the forest as a whole. The docs give the explanation for the calculation as:
The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. In scikit-learn, the fraction of samples a feature contributes to is combined with the decrease in impurity from splitting them to create a normalized estimate of the predictive power of that feature.
By averaging the estimates of predictive ability over several randomized trees one can reduce the variance of such an estimate and use it for feature selection. This is known as the mean decrease in impurity, or MDI. Refer to [L2014] for more information on MDI and feature importance evaluation with Random Forests.
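A quick way to see the relationship on the model fitted above (the forest-level importances are, up to normalization, the average of the per-tree importances):
import numpy as np

per_tree = np.array([tree.feature_importances_ for tree in model.estimators_])
print(np.allclose(per_tree.mean(axis=0), model.feature_importances_))  # True in practice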
The higher the number, the more important the feature |
H: Understanding cosine distance with word vectors
I'm a new DL4J user, and I'm running all the works of Shakespeare through a Word2Vec neural net. I've got a pretty basic question about how to understand the results so far. In the below example, there's an obvious association with the "ing" in king and the "ing" in other words that probably don't have much to do with king. Am I missing something about how a word2vec formula uses the characters inside the words it is mapping? Or is my net just really untrained?
Also, what does the cosine distance between those example words say to you about the results, if anything? Thank you for your advice!
List<String> abc = vec.similarWordsInVocabTo("king", 0.8); //80% similar
System.out.println(abc);
String[] words = {"woman", "king", "boy", "child", "human"};
for (String word : words) {
System.out.println(vec.similarity("man", word));
}
Output - Similar words to king:
[taking, drinking, kingly, picking, waking, singing, wringing, knight, feigning, beginning, ink, thinking, kin, knocking, making, bringing, knowing, lingring, winking, neighing, king-, kings, asking, stinking, king, liking]
Output - Vector similarity between "man" and woman, king, boy, child, human:
woman: 0.8305895924568176
king: 0.00203840178437531
boy: 0.2974374294281006
child: 0.4752597510814667
human: -0.10414568334817886
AI: Word2Vec algorithm does not go inside words. Word “king” is never used as a gerund, so there is no reason why it should be similar to gerunds.
My guesses are:
Your corpus might be wrongly tokenized. Maybe there are some OCR-related errors with word splitting, something like "li-↲ king".
You might be using a different algorithm for getting the embeddings (e.g., FastText) that goes inside the words and infers the word embedding from embeddings of the character n-grams the word consists of.
On the other hand, the words similar to man look fine. If you think about how Word2Vec (and also FastText) is trained, you should not ask the question "Do the words have as similar a meaning as possible?" but rather "Does the word appear similarly frequently and in a similar context in Shakespeare's works?"
(Of course, when the embeddings are trained on data which is large enough, there is almost no difference between these two questions.) |
H: Keras RNN (batch_size
I created an RNN model for text classification with an LSTM layer, but when I pass batch_size to the fit method, my model seems to train on the whole dataset instead of on mini-batches of the given size.
This also happened when I used GRU and Bidirectional layer instead of LSTM.
What could be wrong?
def create_rnn_lstm():
input_layer = layers.Input((70, ))
embedding_layer = layers.Embedding(len(word_index) + 1, 300, weights=[embedding_matrix], trainable=False)(input_layer)
embedding_layer = layers.SpatialDropout1D(0.3)(embedding_layer)
lstm_layer = layers.LSTM(100)(embedding_layer)
output_layer1 = layers.Dense(70, activation="relu")(lstm_layer)
output_layer1 = layers.Dropout(0.25)(output_layer1)
output_layer2 = layers.Dense(2, activation="softmax")(output_layer1)
model = models.Model(inputs=input_layer, outputs=output_layer2)
model.compile(optimizer=optimizers.Adam(), loss='binary_crossentropy')
return model
LSTM_classifier = create_rnn_lstm()
LSTM_classifier.fit(X_train_seq, y_train, batch_size=128, epochs = 10, shuffle=True)
AI: In Keras, verbose=1 (the default parameter of the fit method) will display the total number of samples processed, not the batch number.
If your batch size is 128, then the progress bar will jump by multiples of 128. You can try to change batch_size parameter to 13714, and you will see the progress bar jumping straight from 0/13714 to 13714/13714, since you would have defined a batch size of the size of your whole training dataset.
Finally, if you want to train your model only on one mini batch, you can extract it from your training set and fit your model on it.
import random
indices = random.sample(range(13714), 128)
X_train_seq_one_batch = X_train_seq[indices]
y_train_one_batch = y_train[indices]
LSTM_classifier.fit(X_train_seq_one_batch, y_train_one_batch, batch_size=128, epochs = 10)
The above code will train your network only on one random batch only. random library is used to draw a random batch of indices in your training set. |
H: h in LSTM increasing in size?
So I was reading about the LSTM architecture and I was having trouble understanding a certain aspect of it. This article mentions the step in question near the bottom of the page. Here is the image given:
The issue I have is this: If ot is the result of an operation on the concatenation of ht-1 and xt, then ot will be larger than ht-1. Then, ht is calculated by an operation on the now larger ot. ht is then 'passed' on to the next timestep. So wouldn't the size of h continually increase over time?
I'm definitely misunderstanding some part of this, so thanks in advance for any help. :)
AI: o(t) is not just the concatenation of h(t-1) and x(t): the concatenated vector is multiplied by a weight matrix of shape (hidden_size, hidden_size + input_size), so the result is projected back to the hidden size. That is why h keeps the same size at every time step.
See wikipedia for further details:
https://en.wikipedia.org/wiki/Long_short-term_memory |
H: Train-Test split for a recommender system
In all implementations of recommender systems I've seen so far, the train-test split is performed in this manner:
+------+------+--------+
| user | item | rating |
+------+------+--------+
| u1 | i1 | 2.3 |
| u2 | i2 | 5.3 |
| u1 | i4 | 1.0 |
| u3 | i5 | 1.6 |
| ... | ... | ... |
+------+------+--------+
This is transformed into a rating matrix of the form:
+------+-------+-------+-------+-------+-------+-----+
| user | item1 | item2 | item3 | item4 | item5 | ... |
+------+-------+-------+-------+-------+-------+-----+
| u1 | 2.3 | 1.7 | 0.5 | 1.0 | NaN | ... |
| u2 | NaN | 5.3 | 1.0 | 0.2 | 4.3 | ... |
| u3 | NaN | NaN | 2.1 | 1.3 | 1.6 | ... |
| ... | ... | ... | ... | ... | ... | ... |
+------+-------+-------+-------+-------+-------+-----+
where NaN corresponds to the situation where a user has not rated that particular item.
Now, from each row (user) of the matrix, a certain percentage of the numeric (non-NaN) values are removed and set aside into a new matrix, representing the test set. The model is then trained on the initial matrix, with test samples removed, and the goal of the recommender is to fill-in the missing values, with the smallest possible error.
My question is, can the train-test split be somehow done user-wise? For example to keep a set of users separate, train the recommender on the rest of the user set and then try to predict the ratings for the new users? I know this goes a bit against the idea that "if a recommender does not know you, it cannot recommend something you like", but I am wondering if some k-NN can be done.
AI: My two cents,
Evaluating results of a recommendation engine where test-set is unseen users only, will allow you doing exactly that.
Evaluating results on unseen users only.
If this is indeed the motivation (product/business wise) behind your recommendation engine, I would suggest trying to tackle this problem directly.
If evaluating performance on 'unseen' users is an additional metric and not the only one, I would stick with the current scope of train/test splitting (which should create some unseen users in the test set).
Regarding methods on how to best recommend to unseen users, your intuition(K-nn) is one common solution.
See this for example |
H: If a dataset is imbalanced in real life, should I train on my machine learning model on an imbalanced dataset
I have a dataset where around 20% of the data is the positive class and 80% of the data is the negative class. When I undersample and train my classifier on a balanced dataset and test on a balanced dataset, the results are pretty ok. However, if I train on the balanced dataset and test on an imbalanced dataset that replicates the real world (80-20 split) the metrics are not great. Should I train the model on the original imbalanced dataset if I want it to perform well on real world test data that is also imbalanced.
AI: When I undersample and train my classifier on a balanced dataset and test on a balanced dataset, the results are pretty ok
It's not surprising that the results are good since the job is easier in this case. It's actually a mistake to test on the artificially balanced dataset, since it's not a fair evaluation of how the system will perform with real data.
Should I train the model on the original imbalanced dataset if I want it to perform well on real world test data that is also imbalanced.
Both training on the original dataset or the balanced dataset are valid methods, choosing between the two options is a matter of design and performance. It's often a good idea to try both and then pick the one which performs better than the other on the real imbalanced dataset. |
H: How to deal with rows with zero in every feature while clustering?
I am working on a clustering problem which has 13000 observations and 15 features. Around 3000 observations in the dataset have zeros in every feature (i.e. all values are zero in those 3000 rows). I am trying to do clustering on top of it. What is a better way to deal with it? I have a few things in mind but would like to get clarity on:
Check for the number of rows with all zeros and remove them?
Include the rows with zero values in every feature and let the clustering algorithm handle them?
Also, please suggest if there is any better way to handle this.
Note: I am using k-means clustering.
AI: If you expect that all zeros is a result of error in the measuring of the features (i.e. the observations should not be all 0s but they are), then I would say: Keep all the data, but increase k (from k-means) by 1. This extra one will hopefully become the class of all these wrong observations.
If you expect that all zeros is correct (i.e. these observations are indeed all zeros) just keep them and go on as normal.
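For example, a minimal sketch of both options (X is assumed to be a NumPy array of the observations, and k is your chosen number of clusters):
import numpy as np
from sklearn.cluster import KMeans

all_zero = ~X.any(axis=1)                 # rows in which every feature is 0

# Option 1: drop the all-zero rows before clustering
labels_dropped = KMeans(n_clusters=k, random_state=0).fit_predict(X[~all_zero])

# Option 2: keep them and add one extra cluster to absorb them
labels_all = KMeans(n_clusters=k + 1, random_state=0).fit_predict(X)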
Note: If you keep using K-Means having all these extra observations should be fine. However, if you switch to another algorithm which takes densities into account (e.g. Mean Shift) then having all these extra observations might influence your model in ways that you do not expect. For example, the observations [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.1] might end up being far from the all 0s cluster. |
H: On regression to minimize log distance rather than distance
Suppose I have a lot of points $ x_i \in \mathbb{R}^N $ with corresponding non-negative labels $ y_i \in \mathbb{R} $ and I want to do regression and make a prediction on some new datapoint $ x^* \in \mathbb{R}^N $ for which I don't have a label. Is there a name for the procedure of choosing a parametric model $ f_\theta : \mathbb{R}^N \rightarrow \mathbb{R} $ so as to minimize the cost function $ \sum_i {|\log(f_\theta(x_i)) - \log(y_i)|^2 } $ rather than $ \sum_i{|f_\theta(x_i) - y_i|^2} $? It seems that minimizing the difference between logs has some nice properties, and I'm surprised I didn't see this discussed in Bishop's machine learning book, for example.
I thought of this when I was considering a house pricing problem, where I figured I cared more about the percentage by which I was wrong than the pure difference. After all, in my application (and I'm sure many others like it), being wrong by \$50,000 is terrible for a \$60,000 home, but it's okay for a $2.5M home.
Any data science veterans reading this who have used a cost function like the one I suggested above with logs, or who can tell me what it's called (if it has a formal name)?
AI: There is a loss called Root Mean Squared Log Error (RMSLE): $\sqrt{\frac{1}{n}\sum_{i=1}^n{(\log(y_i + 1) - \log(\hat{y_i} + 1))^2}}$ (do not forget the $+1$ as $\log$ is not defined at $0$)
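A minimal implementation (equivalent to applying RMSE after a log1p transform):
import numpy as np

def rmsle(y_true, y_pred):
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# equivalently, with scikit-learn:
# from sklearn.metrics import mean_squared_log_error
# np.sqrt(mean_squared_log_error(y_true, y_pred))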
You will find a brief explanation and discussion here. It has also been used in competitions as for example here. |
H: Make a random forest estimator the exact same of a decision tree
The idea is to make one of the trees of a Random Forest be built exactly the same as a Decision Tree.
First, we load all libraries, fit a decision tree and plot it.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
import random
from pprint import pprint
import pdb
random.seed(0)
np.random.seed(0)
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
data = load_iris()
dtc = DecisionTreeClassifier(random_state=0)
dtc.fit(data['data'].squeeze(),data.target)
tree.plot_tree(dtc)
We then do the same thing with the random forest
rf = RandomForestClassifier(n_estimators=1,max_features=None,random_state=0)
rf.fit(data['data'].squeeze(),data.target)
tree.plot_tree(rf.estimators_[0])
My question:
Is it possible to make the exact same the first tree of the random forest and a decision tree?
AI: You need to set bootstrap=False in the random forest to disable the subsampling. (I originally commented because I expected there to be more impediments [in addition to your already-coded random_states and max_features=None], but I guess there aren't any!)
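For example, reusing data and dtc from the question (comparing predictions and node counts is just one quick way to check that the trees match):
rf = RandomForestClassifier(n_estimators=1, max_features=None,
                            bootstrap=False, random_state=0)
rf.fit(data['data'], data.target)

# the single tree is now built on exactly the same samples as dtc
print((rf.estimators_[0].predict(data['data']) == dtc.predict(data['data'])).all())
print(rf.estimators_[0].tree_.node_count, dtc.tree_.node_count)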
You probably don't want to do this in general; by stripping out all the randomness so that the first tree is the same as the DecisionTreeClassifier, you'll end up with all the trees being the same, and the random forest loses its usefulness. |
H: How to keep the test data from leaking into the training process of a machine learning algorithm?
I read in many different sources that I need to split my data into a training set and a test set. Then I have to make sure that the algorithm is trained only on the training data, and do my best to keep the test data from leaking into the training process.
To avoid learning insignificant details of the data (which will increase the algorithm's generalization ability) I can further split my training data into the proper training set and a validation set multiple times, and choose the parameters of the algorithm that give the best average performance for all such splittings.
Finally I evaluate my algorithm on the test set, and get some numbers: MSE, RMSE, etc. But do these numbers really show, how good is my algorithm, and aren't they affected by the test set?
Of course, I didn't use my test data during training but the parameters of the algorithm that I got are valid only for this particular splitting into the training and test sets. If I split my data differently, I will get different MSE, RMSE, and different optimal parameters.
I can then do many different splittings into training/test, and calculate the error for each split. In that case, how is it different from the cross-validation that I did during the first split? Do I need to further divide my training data into the proper training set and a validation set?
If I do many different splittings into the training an test sets, doesn't it mean that my test data leak into the training process because some data were in the test set for one splitting but in the training set for another splitting?
AI: Just to clarify (and I think you've got this right, but I'm just being careful), it is best practice to:
1: Split your data into train and test
2: Split train into train and eval
3: Grid search over hyperparameters, for each combination, train on train, evaluate on eval. Select the hyperparameters which allow you to get the best score on the eval set
4: Using the best model (as ascertained in step 3), calculate the test loss, using the test data set created in step 1. Use your answer from step 4 to get the best estimate of how well your model will perform in production. (A minimal sketch of these four steps is given below.)
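A minimal sketch of steps 1-4 with scikit-learn (the dataset, model and grid are illustrative placeholders; here GridSearchCV's internal cross-validation plays the role of the train/eval split in steps 2-3):
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Step 1: train / test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Steps 2-3: hyperparameter search using only the training data
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={'n_estimators': [100, 300], 'max_depth': [3, None]},
                    cv=5)
grid.fit(X_train, y_train)

# Step 4: estimate production performance on the untouched test set
print(grid.best_params_, grid.score(X_test, y_test))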
To be very clear, step 4 is not about hyperparameter selection, the hyperparameters are fixed by step 3. Step 4 is all about getting a better estimate of production performance, as what you get from step 3 is likely to be overly optimistic, as the particular hyperparameter combination used is overly optimised (overfit) to that particular train-eval split.
You can repeat steps 1-4 many times, but what you should not do, is try to use this repetition in any way, to find the best hyperparameters, you are using it to answer the question "how well is my model likely to perform on unseen data". For example, you might run it ten times and get a spread of accuracies from 65%-67%. You can then tell your boss, that if you deployed this model in production, it would likely have an accuracy of 65%-67% (or you're a bit more quantitative and you say it's the mean $\pm$ the standard error)
This leaves the question, "how do I select my best hyperparameters?". After all, you've run steps 1-4 many times and step 3 has given you a different best hyperparameter combination every time.
This is where you need to run step 3 one more time, but slightly differently. This time, you don't need a train-test split. You simply take all of your data, and do a train-eval split. You then run one further hyperparameter grid search, and that's the model you use in prod (or you could combine the train and eval sets, and train one more time, using those hyperparameters, depending on whether you're using a model that needs an eval set for early stopping like a neural net/xgboost or one that does not, like a Random Forest). However, at this stage, you ignore the accuracy/performance metrics that the training procedure is telling you. You already estimated your production accuracy/performance when you ran steps 1-4 multiple times, and that was the point of doing this.
I hope this answers your question, both on how to do the splitting, as well as illustrates why test data at no point leaks into the training data |
H: Explainable AI and unsupervised algorithms
There are several packages that allow explaining ML algorithms (Lime, Shap and so on). However, it is not clear how we can explain unsupervised algorithms for example, if we use PCA for dimensionality reductions first and random forest after, how it is possible to explain the whole process?
AI: PCA isn't a great example, as it's inherently explainable, in the same way that linear regression is inherently explainable.
In supervised learning, at a high level, the Shapley value for each feature tells you the extent to which including that feature in an ML model allows you to make better predictions about the target variable than if you did not have access to that feature. One way of implementing this is to calculate the difference in test cross entropy between a model which includes that feature and one which does not (and it's much more complicated than that in reality, as there are many models which do not include that feature, you have to average over all coalitions of other features, but that's another story).
So in short, you need some sort of cost function, which including features tends to improve.
Now let's turn our example to the simplest of all clustering algorithms, KMeans. In KMeans, the most sensible cost function is probably to split your data into train and test, fit on a training set, and then take the sum over your test set, of each point's euclidean distance from its nearest centroid.
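For concreteness, a sketch of that cost (X is a placeholder feature matrix and the number of clusters is arbitrary):
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X_tr, X_te = train_test_split(X, test_size=0.2, random_state=0)
km = KMeans(n_clusters=5, random_state=0).fit(X_tr)
# distance of every held-out point to its nearest centroid
cost = np.min(km.transform(X_te), axis=1).sum()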
Now, what happens if you remove a feature? Well, in general, post fitting, the sum described above will run over fewer features, so the cost will be lower. This in turn will mean that every feature has a negative Shapley value...not good
More generally, in unsupervised learning, the cost function tends to be related to the probability of seeing a datapoint, according to the model, i.e. the likelihood. The key however, is that this is not just the likelihood of the target, it's the entire, multidimensional likelihood of all features (indeed, there is no target in unsupervised learning).
What you would need, in order to calculate Shapley values for unsupervised learning problems, is some way to describe "how good the clustering is" with one number. The "goodness of clustering" would need to be comparable when you changed the dimensionality of the problem...and standard cost functions used in inferential clustering (of which KMeans is a kind of simplified form) do not satisfy this.
So I've only answered a subset of your problem, I've said that Shapley values don't really make sense for inferential clustering problems, rather than "explainability doesn't make sense for unsupervised algorithms". I don't know enough about all unsupervised algorithms to claim Shapley values don't make sense for any unsupervised algorithm, but it certainly seems to me to be the case, that you'll need an algorithm which lends itself to determining for any coalition of features included in the clustering, "how good the clustering is", in a way which somehow "objectively measures whether similar instances are grouped", rather than which calculates full data likelihood in a way which is inversely proportional to the number of features in the model. This seems to go against the idea of unsupervised learning, namely that there's no objective truth. |
H: Why is my validation loss going up while my validation accuracy also goes up?
Scenario:
I've been training a CNN for the cifar10 dataset.
I'm using tensorflow, and a CNN with 12 conv layers and 1 dense layer before a softmax dense layer.
I'm using data augmentation as well with batch normalization.
After a few hundred epochs I achieved a maximum of 92.73 percent accuracy on the validation set.
My problem:
Validation loss goes up slightly as I train more.
While validation loss goes up, validation accuracy also goes up.
Example:
One epoch gave me a loss of 0.295, with a validation accuracy of
90.5%. My best epoch for validation accuracy gave me 92.73% with a validation loss of 0.33.
Question:
Why is my validation accuracy increasing while my validation loss is
going up?
Should I use a loss metric diferent to cross_entropy?
AI: It is possible for accuracy and cross entropy to increase at the same time. For example, for a positive sample your predicted probability could go from 0.4 to 0.1 (still wrong but worse, increasing the entropy loss) and for another positive sample your predicted probability could change from 0.49 to 0.51 (changes from wrong to right, improving accuracy). The first case would increase your entropy loss while the second would improve accuracy without significantly changing the entropy loss.
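A quick numeric check of that example (binary cross-entropy with natural logs):
import numpy as np

loss = lambda p: -np.log(p)     # loss of a positive sample predicted with probability p
print(loss(0.4), loss(0.1))     # ~0.92 -> ~2.30: loss increases, sample still misclassified
print(loss(0.49), loss(0.51))   # ~0.71 -> ~0.67: tiny loss change, but the prediction flips to correct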
It's a little difficult to say if cross entropy is not a good metric for your case without knowing any details. But most likely you would want to stick with it for a couple of reasons. For example, the cross entropy gives better probability estimates and has nice properties for training with gradient descent (smooth gradients, training doesn't stall for large values because the log off-sets the exp in the sigmoid activation etc.). |
H: Why does my GridSearchCV always break up?
GridSearchCV for my Random Forest breaks up. I need to know the reason and the solution to make it work:
# Grid-Search for Random Forest
param_grid = {
'bootstrap': [True],
'n_estimators': [100, 200, 300, 400, 500],
'max_depth': [50, 100, None],
'max_features': ['auto', 200],
'min_impurity_decrease':[0],
'min_samples_split': [2, 5],
'min_samples_leaf': [2, 5],
'oob_score': [True],
'warm_start': [True]
}
# Base-Model for improvement
rf_gridsearch = RandomForestRegressor(random_state=42)
# Grid-Search initiation
rf_gridsearch = GridSearchCV(estimator = rf_gridsearch, param_grid = param_grid,
scoring = 'neg_mean_absolute_error', cv = 5,
n_jobs = -1, verbose = 5)
# Perform the grid search for the model
rf_gridsearch.fit(X_train, y_train)
AI: First, you are fitting $5 \cdot 3 \cdot 2 \cdot 2 \cdot 2 \cdot 5 = 600$ models (120 hyperparameter combinations times 5 CV folds), and n_estimators=500 is quite big.
Of course, this depends on your dataset and in your computing power.
My first guess is that you do not have enough RAM on your laptop (if you are running it there) and that is why it is crashing.
If the error is indeed this one, I recommend sampling your data down to 1/10 or less (depending on your data), searching for the best hyperparameters there, and then using your whole data for the final model.
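A hedged sketch of that workflow (it assumes X_train and y_train are pandas objects; the 10% fraction is only an example):
X_small = X_train.sample(frac=0.1, random_state=42)
y_small = y_train.loc[X_small.index]

rf_gridsearch.fit(X_small, y_small)                      # grid search on the sample only
final_rf = RandomForestRegressor(random_state=42, **rf_gridsearch.best_params_)
final_rf.fit(X_train, y_train)                           # final model on all the training data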
H: How do two convolutional layers make computation increase quadratically in GoogLeNet?
In the GoogLeNet paper: "For example, in a deep vision network, if two convolutional layers are chained, any uniform increase in the number of their filters results in a quadratic increase of computation."
But I don't know how a quadratic increase is possible when the number of filters increases.
AI: For any CNN layer $n$ with $F_n$ filters and kernel size $K_n^{height},K_n^{width}$ the number of parameters to train is
$$F_{n-1} \cdot F_n \cdot K_n^{height} \cdot K_n^{width} + F_n$$
where the first summand stands for the weights and the second for the bias parameters (if $n$ is the first CNN layer then $F_{n-1}$ equals the number of input channels). And for the special case of
$F_{n-1} = F_n$ (i.e. same number of filters for layer $n$ and layer $n-1$)
and $K_n^{height} = K_n^{width}$ (i.e. square shaped Kernel)
the number of parameters for layer $n$ is $F_n^2 \cdot K_n^2 + F_n$. Accordingly, you have a non-linear dependency of trainable parameters with regards to the number of filters. |
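A quick sanity check of the formula with Keras (the input shape and filter counts are arbitrary):
import tensorflow as tf

F_prev, F_n, K = 64, 64, 3
m = tf.keras.Sequential([tf.keras.Input(shape=(32, 32, F_prev)),
                         tf.keras.layers.Conv2D(F_n, kernel_size=K)])
print(m.count_params())              # 36928
print(F_prev * F_n * K * K + F_n)    # 36928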
H: Practical way to convert jupyter notebook to MS Word document?
What would be a practical way to convert a Jupyter Notebook to a Word document (.doc) ? I am asking this in a professional context, so I'd like to avoid manual solutions, do it in an efficient way (fast), avoid third parties... etc. Something that works like Rmarkdown to produce .doc would be very welcome.
AI: The easiest way is probably using a method similarly to what is described in this answer, that is, convert the notebook to markdown and then use any of the tools available (such as Pandoc) to convert the markdown to a Word document. |
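For example, one possible two-step pipeline driven from Python (notebook.ipynb is a placeholder; it assumes nbconvert and Pandoc are installed):
import subprocess

# jupyter nbconvert --to markdown notebook.ipynb   ->   notebook.md
subprocess.run(["jupyter", "nbconvert", "--to", "markdown", "notebook.ipynb"], check=True)
# pandoc notebook.md -o notebook.docx
subprocess.run(["pandoc", "notebook.md", "-o", "notebook.docx"], check=True)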
H: What level of math is required for machine learning research
There are several levels of math understanding:
Know the math
Know the intuitions behind math concepts
Know the intuitions and proofs of math concepts
Know the intuitions, proofs of math concepts and be able to apply them to deduce new results
My question is what level of understanding is required for machine learning research? Especially for publishing papers in conferences like CVPR or NeurIPS.
AI: There can be no objective answer to this question. Obviously the more one understands the better, but the field of ML is vast, quite specialized and ranges from very theoretical to very applied research, so it's perfectly reasonable to publish in ML without a strong background in maths.
A better way to estimate your own ability to publish papers in a particular area or journal is to study recent papers published in this area/journal. You should be able not only to understand them but to redo the reasoning:
Understand the problem and the solution proposed by the authors
starting from the initial problem, how would you solve it? Can you think of alternatives to the authors' solution?
Can you find limitations to their approach and improve on it?
If you reach step 3 then congratulations: you are ready to publish your own research! |
H: Dealing with low-information centroids using Nearest Centroid Classifier and bag of words method
I am currently working on a problem where we have projects and e-mails that belong to a single project each.
My goal is to create a recommendation system for incoming e-mails which presents the projects the e-mail might belong to.
The number of projects is constantly growing, just like the number of e-mails. This is why I decided to use the Nearest Centroid Classifier because the "training" of new classes is easy (after all, just calculating a mean over the e-mails belonging to a centroid) and it seemed to be promising to me.
I use NCC in combination with the bag of words method and for that I am calculating the scores of the words via TF-IDF.
The data pool is not the greatest actually, which is another reason I tried using a less complex model like NCC. I only have 5000 useful e-mails and around 300 projects.
The problem, though, is that when I calculate the distances to every project centroid, some centroids win in every case. For nearly every e-mail the first 10 centroids of the recommendation are the same, all the time. When I had a look at them I noticed that the "best" centroids are simply centroids which do not hold much information; they seem to have very little text data. And if the e-mail does not have much text, obviously the error is low and thus the distance is low.
Is there any way to deal with that problem? Or is TF-IDF and NCC not a good combination?
AI: This is a standard problem with distance/similarity measures between texts of different length. I'm not aware of any standard way to solve it, but in your case I would simply try to remove any email shorter than a certain length from the training set (you can experiment with different thresholds). This would hopefully force the centroids to be more specific, the goal being that none of them can easily attract all the instances. |
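A minimal sketch of that filtering step (the DataFrame name, column name and threshold are assumptions):
min_tokens = 30
long_enough = emails['text'].str.split().str.len() >= min_tokens
emails_filtered = emails[long_enough]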
H: Search for hyperparameters with different features using Random Forest
I have a dataset in which I would like to perform a classification model, so I have decided to use Random Forest. The number of features that I have is approximately 200 and I would like to test which set of features gives me a better result. For the sake of experimentation, I do not want to use any feature importance method.
So what I have done is to split my features, without replacement, into groups of 20, 40, 60, 80 and 100 features. I am using GridSearch for tuning the hyperparameters of my RF. The question that I have is: if I want to compare the accuracy of each one of the models (the one with 20 features, with 40, and so on), would it be fair to apply GridSearch with each subset of the features? Or should I only perform GridSearch once, let's say with the model of 20 features, and then use the same hyperparameters with the set of 40 features, 60 features and so on?
Any help?
Thanks
AI: By performing GridSearch I understand you mean searching for the best hyperparameters.
For the sake of simplicity, let's say that you want to fit a linear regression with a penalty (lasso/ridge), with 1 feature and with 100 features. The hyperparameter that you are looking for is the $\lambda$ penalty.
It is easy to see that with 1 feature your model might need a certain penalty; it could even be that the feature has a perfect distribution and doesn't need any penalty at all. But when we go to 100 features there is some noise and you might need penalties to make sure that your model generalizes well. So $\lambda$ will be different.
With this example, my point is that hyperparameter tuning is a problem specific to each task.
For Random Forest it would be the same. Different features will require different parameters, so yes, you have to do a GridSearchCV with every subset of features if you want to achieve optimality (see the sketch below).
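A sketch of what that looks like in practice (feature_subsets and param_grid are placeholders):
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

results = {}
for name, cols in feature_subsets.items():       # e.g. {"20_features": [...], "40_features": [...]}
    gs = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    gs.fit(X_train[cols], y_train)
    results[name] = (gs.best_params_, gs.best_score_)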
H: Should Feature Selection processes be apply on training data or on all data?
I've realized that in examples and guides, feature selection processes (correlation elimination, backward/stepwise) are sometimes applied on the train data after splitting all the data, and sometimes on all the data.
So is there any clear answer for that? Which is more logical?
AI: Like any preprocessing step, feature selection must be carried out using the training data, i.e. the process of selecting which features to include can only depend on the instances of the training set.
Once the selection has been made, i.e. the set of features is fixed, the test data has to be formatted with the exact same features. This step is sometimes called "applying feature selection" but it's an abuse of language: it's only about preparing the test data with the features which were previously selected during the training stage.
Applying feature selection on the test data is a mistake because the training depends on it, so that would mean that the model "has seen" the instances of the test set and therefore invalidate the results on the test set. |
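For example with scikit-learn (the value of k and the scoring function are arbitrary choices):
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(f_classif, k=20).fit(X_train, y_train)   # selection uses training data only
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)                         # the same fixed features applied to the test data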
H: Loss being outputted as nan in Keras RNN
Since the first Epoch of the RNN, the loss value is being outputted as nan.
Epoch 1/100
9787/9787 [==============================] - 22s 2ms/step - loss: nan
I have normalized the data.
...,
[9.78344703e-01],
[1.00000000e+00],
[9.94293976e-01]]])
Example of my X_train (float64 of size (9787,60,1))
-
array([6.59848480e-04, 6.98212803e-04, 6.90540626e-04, ...,
1.00000000e+00, 9.94293976e-01, 9.95909540e-01])
Example of my y_train (float64 of size (9787,))
My RNN:
# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape =
(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
AI: It could possibly be caused by exploding gradients, try using gradient clipping to see if the loss is still displayed as nan. For example:
from keras import optimizers
optimizer = optimizers.Adam(clipvalue=0.5)
regressor.compile(optimizer=optimizer, loss='mean_squared_error') |
H: Compare Coefficients of Different Regression Models
In my project, I am using a suite of shallow and deep learning models in order to see which has the best performance on my data. However, within the pool of shallow machine learning models, I want to be able to compare the coefficients of the regression models with each other. For example, I have:
Lasso
Ridge
ElasticNet
AdaBoostRegressor
GradientBoostRegressor
I am aware that I could get the coefficients of Lasso, Ridge, and ElasticNet from model.coef_ and model.intercept_ in sklearn. However, AdaBoostRegressor does not have these, but rather has weights assigned to each of its estimators. Similarly, GradientBoostingRegressor in sklearn does not even have weights at all. How am I to compare the coefficients between each of the regression models I am using?
AI: Weights across different types of models are not always comparable, so
I think that it would make more sense to do this kind of comparison not across different types of model but within a single type of model varying:
the hyper-parameters (if any),
the set of instances (e.g. selecting different subsets randomly),
the set of features.
I'd recommend in particular varying the instances: if the weight of a feature tends to change a lot depending on the training set it's a sign of overfitting.
If the goal is to compare the importance of some specific features for different models, it's probably more reliable to directly evaluate how the model performs with/without the feature(s). |
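A hedged sketch of that with/without comparison (model, X, y and the feature name are placeholders; X is assumed to be a DataFrame):
from sklearn.model_selection import cross_val_score

with_feature = cross_val_score(model, X, y, cv=5).mean()
without_feature = cross_val_score(model, X.drop(columns=['some_feature']), y, cv=5).mean()
print(with_feature - without_feature)    # rough importance of 'some_feature' for this model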
H: Duplicated features for gradient descent
Suppose that our data matrix X has a duplicated column, i.e, there is a duplicated feature and the matrix is not full column rank. What happpens?
I guess that we can not find a unique solution because that's the case for the closed form in linear regression, but I do not see how to show that intuitively, or even if it is true or not.
AI: In 'Efficient Backprop' by Lecun and others (http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf), they explain why correlated variables are bad (§ 4.3 normalizing the inputs).
Duplicated data is a special case of linear dependence, which is a special case of correlation. Say you have duplicated variables $X_1 = X_2$, so the network output is constant over the weight lines $w_2 = c - w_1$, where $c$ is a constant. It means that the gradient of the error is 0 along those lines: moving along those lines has no effect on the learning. In that sense the solution won't be unique.
It's bad because you could be solving a problem of lesser dimension. Removing one of the inputs will leave you with a network of lesser dimension.
Other than the dimension problem, for simple learner I don't think it will affect the learning process that much. For more complex learning processes (learning rate depending on time / on variables), it might get more complex. |
H: Feature Importance from GridSearchCV
I created a GridSearchCV for a Random Forest Regressor. Now I want to check the feature importance. I searched around and I found this:
rf_gridsearch.best_estimator_.named_steps.feature_importances_
This already works, but my training data is huge, 669 attributes. Therefore, I need the attribute names. So I found this code:
rf_gridsearch.best_estimator_.named_steps["step_name"].feature_importances_
But I don't know what the "named_steps["step_name"]" are.
I tried something like this:
named_steps = X_train.columns
But this doesn't work. Could somebody explain me what named_steps["step_name"] is?
AI: I think that you just need:
feature_importances = rf_gridsearch.best_estimator_.feature_importances_
This provides the feature importance for all the attributes in your dataset. For more information on this as well as other options, you may also refer to the Scikit-learn official documentation. |
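To attach the attribute names, you can pair the importances with the training columns (this assumes X_train is a DataFrame and that the best estimator is the random forest itself, not a pipeline):
import pandas as pd

importances = pd.Series(rf_gridsearch.best_estimator_.feature_importances_,
                        index=X_train.columns).sort_values(ascending=False)
print(importances.head(20))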
H: Is there a link between Training, Test errors based on k fold CV and not doing CV?
I am using Matlab to train a feedforward NN using Cross validation (CV) approach. My understanding of CV approach is the following. (Please correct me where wrong)
Let X be the entire dataset with Y as the label set. Split X into 90/10 ratio to get: [Xtrain,Xtest] using holdout approach by calling the cvpartition(Y,'Holdout',0.1,'Stratify',true)
Apply CV on Xtrain: For every fold I calculated the accuracy. At the end of the CV loop I have an average accuracy score. Let accCV denote this variable. Inside the CV loop xtrain is further split into [xtrain_cv,xtrain_val].
After the CV loop, I reinitialize the weights and re-train a new model using Xtrain. Then I get a training accuracy which I denote by the variable accTrain.
Using the model obtained in Step 3, I evaluate it on Xtest and consider this to be the generalization performance, that is, the performance on unseen future data. I call this accuracy accTest.
Question1: Is it possible that accCV will be less than the accuracy over the train set Xtrain when not using the CV approach? That is, if I train the NN model over Xtrain only once and record the accuracy, denoted by the variable accTrain, is it possible that accCV ~ accTrain? Intuitively, accCV should be close to the accuracy when not using the CV approach since the dataset is the same, namely Xtrain. If this is the case, then why use CV when outside the CV we do not reuse the model that was created inside the CV? What does it tell us?
Question2: If accCV < accTest but the accuracy on the entire dataset Xtrain without using CV is close to that of accTest (accTrain ~ accTest) are we doing something wrong? What is the best case scenario? Is it accCV ~ accTest?
AI: It is expected that accCV < accTrain: the former is the accuracy on the test folds (averaged over all the splits), so represents models' scores on unseen-to-them data. Similarly, you would expect accTrain > accTest.
There are two main reasons to evaluate a model, whether by k-fold cross-validation or simple train/test split: for hyperparameter optimization / model selection, or to estimate future performance. (N.B., k-fold should generally give better predictions than simple train/test split.) If you make any decisions based on the scores, then they no longer represent unbiased estimates of future performance. If, in your setup, you make no decisions based on step 2, then you should expect accCV ~ accTest, and there's no real reason to include that step. If you do make decisions based on step 2, you may expect accCV > accTest, though the gap is probably substantially smaller than the gap in accTrain > accTest. You may see discrepancies here, due to natural variation in the datasets, or perhaps data leakage. |
H: What are some best Text Representation techniques in NLP
I've studied about various text representation techniques like : Bag of Words, N-gram data modelling, Tf-idf, word embedding etc.
I would like to know which among all the techniques are most efficient when it comes to data modelling or representation for a supervised text classification across a large number of categories.
I might have around 40 categories and then around a same number of sub-categories upto 4 levels.
AI: There's no simple answer to this question. As far as I know in general the choice depends mostly on the type of classification:
Bag of Words (usually with tf-idf weights) is a simple but quite efficient representation for classification based on the text topic or similar, assuming the classes are reasonably distinct from each other.
Word embeddings are a more advanced option for semantic-based classification. They can handle more subtle semantic relations but require being trained on a large training corpus. Using pre-defined embeddings can be a solution but then there's the risk that the original training data isn't perfectly suitable for the dataset.
N-grams models can be used in many different ways but are often chosen when the classification involves syntax and/or writing style. Note that the higher the value $n$, the larger the training corpus needs to be, this can also be taken into account in the choice.
I might have around 40 categories and then around a same number of sub-categories upto 4 levels.
It depends on the data but 40 classes is already a very challenging classification task. For the sake of simplicity let's assume a uniform distribution over classes: a random baseline accuracy would be 1/40 = 2.5%. Of course it depends on the data and a good classifier will do better than that, but don't expect too much...
Now 4 levels of 40 sub-categories means 40^4 = 2.5 million classes! Even assuming you have enough data (say around 10 instances per class on average, that is 25 million instances!), it's very unlikely that a classifier will be able to predict anything useful from such a huge number of classes.
H: validation accuracy and loss increase
I am training a generic LSTM-based autoencoder to get sentence embeddings; the BLEU score is the accuracy metric. The model is coded to output the same number of tokens as the length of the labels, so the loss is calculated as the cross-entropy between each output token and the corresponding label token, summed into a total loss to be backpropagated. The embedding size is 1000 throughout.
Here are the logs:-
Training Epoch: 1/20, Training Loss: 78.32446559076034, Training Accuracy: 0.23442341868755373
Validation Epoch: 1/20, Validation Loss: 75.11487170562003, Validation Accuracy: 0.28851715943634565
Training Epoch: 2/20, Training Loss: 60.702940691499734, Training Accuracy: 0.3043263558919579
Validation Epoch: 2/20, Validation Loss: 68.58432596068359, Validation Accuracy: 0.337459858582381
Training Epoch: 3/20, Training Loss: 51.62519727157313, Training Accuracy: 0.35618672202599283
Validation Epoch: 3/20, Validation Loss: 64.17064862141332, Validation Accuracy: 0.37158793060235135
Training Epoch: 4/20, Training Loss: 44.40417488866389, Training Accuracy: 0.4094415453046547
Validation Epoch: 4/20, Validation Loss: 61.230048799977716, Validation Accuracy: 0.3955376494828317
Training Epoch: 5/20, Training Loss: 38.78325418571326, Training Accuracy: 0.46050421873328257
Validation Epoch: 5/20, Validation Loss: 59.78918521062842, Validation Accuracy: 0.4063787247291398
Training Epoch: 6/20, Training Loss: 33.65953556655257, Training Accuracy: 0.5193937894102788
Validation Epoch: 6/20, Validation Loss: 58.64455007580877, Validation Accuracy: 0.41980867690343776
Training Epoch: 7/20, Training Loss: 29.35849161540994, Training Accuracy: 0.5831378755700898
Validation Epoch: 7/20, Validation Loss: 58.26881152131025, Validation Accuracy: 0.4261582422867802
Training Epoch: 8/20, Training Loss: 25.244888168760856, Training Accuracy: 0.6488748581642462
Validation Epoch: 8/20, Validation Loss: 57.62903963564669, Validation Accuracy: 0.43286079887479756
Training Epoch: 9/20, Training Loss: 22.05663261861035, Training Accuracy: 0.7039174093261202
Validation Epoch: 9/20, Validation Loss: 58.09752491926684, Validation Accuracy: 0.4399501875046306
Training Epoch: 10/20, Training Loss: 19.248559526880282, Training Accuracy: 0.7486352249548112
Validation Epoch: 10/20, Validation Loss: 58.613073462421454, Validation Accuracy: 0.4470900014647744
Training Epoch: 11/20, Training Loss: 16.95602631587501, Training Accuracy: 0.7857343322245365
Validation Epoch: 11/20, Validation Loss: 58.38435334806304, Validation Accuracy: 0.44778823347334884
Training Epoch: 12/20, Training Loss: 14.74661236426599, Training Accuracy: 0.8136944976817879
Validation Epoch: 12/20, Validation Loss: 59.63633590068632, Validation Accuracy: 0.45206057264928495
Training Epoch: 13/20, Training Loss: 13.507415059699248, Training Accuracy: 0.8299945959036563
Validation Epoch: 13/20, Validation Loss: 60.149887264208886, Validation Accuracy: 0.4512303133278385
Training Epoch: 14/20, Training Loss: 12.026118357521792, Training Accuracy: 0.8491757446561087
Validation Epoch: 14/20, Validation Loss: 59.89944394497038, Validation Accuracy: 0.45497359431776685
Training Epoch: 15/20, Training Loss: 10.785567499923806, Training Accuracy: 0.8628473173326144
Validation Epoch: 15/20, Validation Loss: 61.482036528946125, Validation Accuracy: 0.45541000266481596
Training Epoch: 16/20, Training Loss: 9.373574649788727, Training Accuracy: 0.8767987081840235
Validation Epoch: 16/20, Validation Loss: 62.18386231796834, Validation Accuracy: 0.4580630794998584
Training Epoch: 17/20, Training Loss: 8.5658748998932, Training Accuracy: 0.8878869616990712
Validation Epoch: 17/20, Validation Loss: 63.56435154233743, Validation Accuracy: 0.4606744393166781
Training Epoch: 18/20, Training Loss: 7.807730126944895, Training Accuracy: 0.8960175152587504
Validation Epoch: 18/20, Validation Loss: 63.88373188037895, Validation Accuracy: 0.4606897915210869
Training Epoch: 19/20, Training Loss: 6.829077819740428, Training Accuracy: 0.9038927070366026
Validation Epoch: 19/20, Validation Loss: 65.59262917371629, Validation Accuracy: 0.4639800374912485
Training Epoch: 20/20, Training Loss: 6.152266260986982, Training Accuracy: 0.9090036335609419
Validation Epoch: 20/20, Validation Loss: 66.84154795008956, Validation Accuracy: 0.4672414105594907
Here are the accuracy and loss vs. epoch graphs:
I want to know why it is that the validation loss and accuracy is increasing.
AI: An increase in validation loss while training loss is decreasing is an indicator that your model overfits. Check out this article for an easy to read general explanation.
In the context of autoencoders this means your neural net almost reproduces the input image. Try to reduce overfit by applying regularization, e.g. add dropout, add input noise, use less layers or use less nodes per layer (not all at once but one by one). |
H: A bump in CV score curve! What does it mean?
My learning curve is behaving strangely and I don't know if I'm doing anything wrong or it is because of the dataset nature.
I'm using a Neural network with (30,30,15,1) layers and 'ReLU' activation function for training and linear for the output layer.
My learning curve shows weird behavior while learning. There is a bump-like increase and decrease in the validation curve around 100 epochs, and I have no idea about its source or how to resolve it (or even whether it needs to be resolved). Can anybody help me with this? (Any related comments are welcome.)
AI: That pattern is common in neural networks training.
Train performance is an estimate of bias, and validation performance is an estimate of variance. Initially both go down. Bias continues to go down, but variance goes up. That is the classic bias-variance tradeoff. However, in neural networks variance will start to go down again. This is called the “double descent curve”.
It is not known why neural networks training show this “double descent curve.” One idea is that this pattern might be a concatenation of the two curves. One curve might be from optimization and the other curve might be from sampling. The paper "A Modern Take on the Bias-Variance Tradeoff" goes into more detail about it. |
H: Word Embedding or Hash?
In my dataset I have a 'text' column and a 'followers' column containing lists of follower IDs, i.e. '1093777852477116417, 936194589043683328,...'. Some of the 'followers' values contain thousands of IDs.
I am preprocessing the data for LSTM, and I will do word embedding on the text column.
My question is, should I add the follower IDs to the word embedding of the text column, or should I hash the follower IDs and add an extra LSTM input layer for the IDs?
Thanks in advance!
AI: It depends…
The general rule of thumb is that there should be at least 40 occurrences of an item to train an embedding model to find a robust representation. If most follower IDs repeat then an embedding model can learn which ones co-occur. If follower IDs are sparse then hashing (which randomly assigns numbers) is a better choice.
Which method is better is an empirical question. You can create both models, benchmark, and then choose the data processing pipeline that is best for your task. |
H: Passing data to SMOTE after applying train/test split
I'm trying to resample my dataset after splitting it into train and test partitions using SMOTE. Here's my code:
smote_X = df[cols]
smote_Y = df[target_col]
#Split train and test data
smote_train_X,smote_test_X,smote_train_Y,smote_test_Y = train_test_split(smote_X,smote_Y,test_size = .25,random_state = 111)
smote_train_Y_series = smote_train_Y.iloc[:,0]
#oversampling minority class using smote
os = SMOTE(random_state = 0)
os_smote_X,os_smote_Y = os.fit_sample(smote_train_X,smote_train_Y_series)
I added line #5 to convert the DataFrame coming out of train_test_split to Series as the newer version of SMOTE fit_sample (docs) wants this data type but it now throws the following error.
Any ideas how to fix it?
-------------------------------------------------------------------------- KeyError Traceback (most recent call
last) in
16 #oversampling minority class using smote
17 os = SMOTE(random_state = 0)
---> 18 os_smote_X,os_smote_Y = os.fit_sample(smote_train_X,smote_train_Y_series)
19 os_smote_X = pd.DataFrame(data = os_smote_X,columns=cols)
20 os_smote_Y = pd.DataFrame(data = os_smote_Y,columns=target_col)
/opt/conda/lib/python3.6/site-packages/imblearn/base.py in
fit_resample(self, X, y)
86 if self._X_columns is not None:
87 X_ = pd.DataFrame(output[0], columns=self._X_columns)
---> 88 X_ = X_.astype(self._X_dtypes)
89 else:
90 X_ = output[0]
/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py in
astype(self, dtype, copy, errors, **kwargs) 5863
results.append( 5864 col.astype(
-> 5865 dtype=dtype[col_name], copy=copy, errors=errors, **kwargs 5866 ) 5867
)
/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py in
astype(self, dtype, copy, errors, **kwargs) 5846 if
len(dtype) > 1 or self.name not in dtype: 5847
raise KeyError(
-> 5848 "Only the Series name can be used for " 5849 "the key in Series dtype mappings."
5850 )
KeyError: 'Only the Series name can be used for the key in Series
dtype mappings.'
Upd 1/28/2020:
Tried two more options with no luck so far. Still looking for help.
A. Passing the raw outputs of train_test_split:
#oversampling minority class using smote
os = SMOTE(random_state = 0)
os_smote_X,os_smote_Y = os.fit_sample(smote_train_X,smote_train_Y)
--------------------------------------------------------------------------- AttributeError Traceback (most recent call
last) in
1 #oversampling minority class using smote
2 os = SMOTE(random_state = 0)
----> 3 os_smote_X,os_smote_Y = os.fit_resample(smote_train_X,smote_train_Y)
4 os_smote_X = pd.DataFrame(data = os_smote_X,columns=cols)
5 os_smote_Y = pd.DataFrame(data = os_smote_Y,columns=target_col)
/opt/conda/lib/python3.6/site-packages/imblearn/base.py in
fit_resample(self, X, y)
73 """
74 check_classification_targets(y)
---> 75 X, y, binarize_y = self._check_X_y(X, y)
76
77 self.sampling_strategy_ = check_sampling_strategy(
/opt/conda/lib/python3.6/site-packages/imblearn/base.py in
_check_X_y(self, X, y, accept_sparse)
148 if hasattr(y, "loc"):
149 # store information to build a series
--> 150 self._y_name = y.name
151 self._y_dtype = y.dtype
152 else:
/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py in
getattr(self, name) 5177 if self._info_axis._can_hold_identifiers_and_holds_name(name): 5178
return self[name]
-> 5179 return object.getattribute(self, name) 5180 5181 def setattr(self, name, value):
AttributeError: 'DataFrame' object has no attribute 'name'
B. Converting smote_train_X to matrix before passing it alongside smote_train_Y being converted to Series:
smote_train_X_matrix = smote_train_X.as_matrix()
smote_train_Y_series = smote_train_Y.iloc[:,0]
#oversampling minority class using smote
os = SMOTE(random_state = 0)
os_smote_X,os_smote_Y = os.fit_resample(smote_train_X_matrix,smote_train_Y_series)
Note that the resulting matrix and series show a shape of (4633, 46) and (4633,) respectively.
--------------------------------------------------------------------------- ValueError Traceback (most recent call
last)
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/managers.py
in create_block_manager_from_blocks(blocks, axes) 1677
blocks = [
-> 1678 make_block(values=blocks[0], placement=slice(0, len(axes[0]))) 1679 ]
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/blocks.py
in make_block(values, placement, klass, ndim, dtype, fastpath) 3283
-> 3284 return klass(values, ndim=ndim, placement=placement) 3285
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/blocks.py
in init(self, values, placement, ndim)
127 "Wrong number of items passed {val}, placement implies "
--> 128 "{mgr}".format(val=len(self.values), mgr=len(self.mgr_locs))
129 )
ValueError: Wrong number of items passed 46, placement implies 44
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call
last) in
2 os = SMOTE(random_state = 0)
3 os_smote_X,os_smote_Y = os.fit_resample(smote_train_X_matrix,smote_train_Y_series)
----> 4 os_smote_X = pd.DataFrame(data = os_smote_X,columns=cols)
5 os_smote_Y = pd.DataFrame(data = os_smote_Y,columns=target_col)
6 ###
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py in
init(self, data, index, columns, dtype, copy)
438 mgr = init_dict({data.name: data}, index, columns, dtype=dtype)
439 else:
--> 440 mgr = init_ndarray(data, index, columns, dtype=dtype, copy=copy)
441
442 # For data is list-like, or Iterable (will consume into list)
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/construction.py
in init_ndarray(values, index, columns, dtype, copy)
211 block_values = [values]
212
--> 213 return create_block_manager_from_blocks(block_values, [columns, index])
214
215
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/managers.py
in create_block_manager_from_blocks(blocks, axes) 1686
blocks = [getattr(b, "values", b) for b in blocks] 1687
tot_items = sum(b.shape[0] for b in blocks)
-> 1688 construction_error(tot_items, blocks[0].shape[1:], axes, e) 1689 1690
/opt/conda/lib/python3.6/site-packages/pandas/core/internals/managers.py
in construction_error(tot_items, block_shape, axes, e) 1717
raise ValueError("Empty data passed with indices specified.") 1718
raise ValueError(
-> 1719 "Shape of passed values is {0}, indices imply {1}".format(passed, implied) 1720 ) 1721
ValueError: Shape of passed values is (8410, 46), indices imply (8410,
44)
AI: Your smote_train_Y is already a series, so there is no need to use iloc[:,0]. Just use it directly in the fit_sample function:
#oversampling minority class using smote
os = SMOTE(random_state = 0)
os_smote_X,os_smote_Y = os.fit_sample(smote_train_X_matrix, smote_train_Y) |
H: Predicting parallel time series with multiple features
I am trying to predict sales for 2 departmental stores which share similar demographic properties. My goal is to make a single LSTM model to predict sales from these parallel time series having multiple features.
My input features for training would be
+----------+-------+--------------+-------+
| Date | Store | DayOfTheWeek | Sales |
+----------+-------+--------------+-------+
| 1/1/2019 | A | 2 | 100 |
| 1/2/2019 | A | 3 | 200 |
| 1/3/2019 | A | 4 | 150 |
| 1/1/2019 | B | 2 | 300 |
| 1/2/2019 | B | 3 | 550 |
| 1/3/2019 | B | 4 | 1000 |
+----------+-------+--------------+-------+
and my output for training would be
+----------+-------+--------------+-------+
| Date | Store | DayOfTheWeek | Sales |
+----------+-------+--------------+-------+
| 1/4/2019 | A | 5 | 220 |
| 1/4/2019 | B | 5 | 700 |
+----------+-------+--------------+-------+
The problem is that the LSTM takes its input as 3D, i.e. (n_samples, n_timesteps, n_features), and with that I can only pass a single time series for a specific store (e.g. A).
If I had a univariate mutiple time series I can reshape my input data as follows and pass it to LSTM.
+----------+---------+---------+
| Date | A_Sales | B_Sales |
+----------+---------+---------+
| 1/1/2019 | 100 | 300 |
| 1/2/2019 | 200 | 550 |
| 1/3/2019 | 150 | 1000 |
+----------+---------+---------+
But I need to identify how I can predict parallel multivariate time series. Is there any other way to define in the LSTM input layer that there are 2 time series with 2 features each, i.e. (2*2)?
AI: Here are two ways to model the problem. The first one is simpler, the second one is more complex but closer to your original statement of the problem.
Store as an input feature
You can consider the store as a feature to pass to your LSTM. With two different stores, just add a binary input feature "store" where store A is 0 and store B is 1, for example.
Then, you can implement it this way.
from keras.models import Sequential
from keras.layers import LSTM, Dense
timesteps = 20
n_features = 5
model = Sequential()
# add +1 to n_features for the store identifier
model.add(LSTM(32, input_shape=(timesteps,n_features + 1), return_sequences=True))
model.add(LSTM(32))
model.add(Dense(1,activation="relu"))
model.compile(optimizer="rmsprop", loss="mse")
One sample of your data at a given timestep will be a vector of the form (feature_1, feature_2, ..., feature_n, store).
Multivariate time-series prediction
Here we input both time series and aim to predict the next values for both stores. So you have a shared LSTM processing each store separately, then concatenate the two produced embeddings, and compute the predicted values.
from keras.models import Model
from keras.layers import LSTM, Dense, Concatenate, Input
timesteps = 20
n_features = 5
# Store A and B time-series inputs
a_inputs = Input(shape=(timesteps, n_features))
b_inputs = Input(shape=(timesteps, n_features))
# Stacked LSTM
lstm_1 = LSTM(32, return_sequences=True)
lstm_2 = LSTM(32)
# Stacked LSTM on Store A
a_embedding = lstm_1(a_inputs)
a_embedding = lstm_2(a_embedding)
# Stacked LSTM on Store B
b_embedding = lstm_1(b_inputs)
b_embedding = lstm_2(b_embedding)
# Concatenate embeddings and define model
outputs = Concatenate()([a_embedding, b_embedding])
outputs = Dense(64)(outputs)
outputs = Dense(2, activation="relu")(outputs)
model = Model([a_inputs, b_inputs], outputs)
model.compile(optimizer="rmsprop", loss="mse")
Inspired by section 7.1.5 Shared weight sharing, in Deep Learning with Python by F. Chollet |
H: How to approach a new data set with no dependent variable
Basically I am taking a small course in data science.
And one of our first assignments is that we have gotten a pretty huge data set (hundreds of thousands of samples, and roughly 150 independent variables). There is no response variable in it, but the assignment is pretty much: "Find something that is interesting". Which is obviously kind of broad.
My questions is: How would/should you/I approach this ?
Up until now I have just cleaned the data set as well as I can. I removed variables I don't think make much sense, made sure there are no NaN values, and things like that. But what would be the next step in order to "find something useful" in a data set that may or may not contain anything interesting?
AI: Your assignment is basically the process we call EDA - Explorative Data Analysis.
So what should you do? Simply explore!
What is the shape of your dataset?
How do variables behave, do they have a factor structure, correlate, etc.
What are the main descriptives of your dataset, to they tell an interesting story, etc.
And once you start doing this you will find something that might be interesting to explore a bit deeper depending on your dataset. Do not just use summary functions like mean, median, etc. but also try to build simple graphs and comment everything in a neat notebook!
My tip:
Look at some EDA notebooks on Kaggle for inspiration or watch this superior video by a master at work:
https://www.youtube.com/watch?v=go5Au01Jrvs
Also here is a beginner guide as well:
https://towardsdatascience.com/exploratory-data-analysis-eda-a-practical-guide-and-template-for-structured-data-abfbf3ee3bd9 |
H: Unable to save the TF-IDF vectorizer
I'm working on a multi-label classification problem. I'm facing an issue while saving the TF-IDF vectorizer as well as the model, using both the pickle and joblib packages.
Below is the code:
vectorizer = TfidfVectorizer(min_df=0.00009, max_features=200000, smooth_idf=True, norm="l2", \
tokenizer = lambda x: x.split(), sublinear_tf=False, ngram_range=(1,3))
x_train_multilabel = vectorizer.fit_transform(x_train)
x_test_multilabel = vectorizer.transform(x_test)
classifier = OneVsRestClassifier(SGDClassifier(loss='log', alpha=0.00001, penalty='l1'), n_jobs=-1)
classifier.fit(x_train_multilabel, y_train)
predictions = classifier.predict(x_test_multilabel)
Error message while saving the TF-IDF vectorizer.
Any suggestions ?
Thanks in advance.
AI: The issue is due to your lambda function passed to the tokenizer keyword argument.
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from joblib import dump
>>> t = TfidfVectorizer()
>>> dump(t, 'tfidf.pkl')
['tfidf.pkl']
No issues. Now let's pass a lambda function to tokenizer
>>> t = TfidfVectorizer(tokenizer=lambda x: x.split())
>>> dump(t, 'tfidf.pkl')
Which throws the following error:
_pickle.PicklingError: Can't pickle <function <lambda> at 0x100e18b90>: it's not found as __main__.<lambda>
To get around this, create a function to split the text:
>>> def text_splitter(text):
... return text.split()
Try dumping again:
>>> t = TfidfVectorizer(tokenizer=text_splitter)
>>> dump(t, 'tfidf.pkl')
['tfidf.pkl']
Now you can you save the vectorizer. |
H: What does 'lm' mean in the lmplot() in seaborn?
I was learning seaborn, but could not find the full name of seaborn.lmplot. What do the 'l' and 'm' stand for in the name 'lmplot'?
AI: "lm" = "linear model". As reported in the docs:
Plot data and regression model fits across a FacetGrid. [...] It is intended as a convenient interface to fit regression models across conditional subsets of a dataset. |
H: Pandas dataframe, create columns depending on the row value
I get a csv that if you read it, looks like:
import pandas as pd
df = pd.DataFrame([['de,ch,fr', '1,2,3'],['fr,ch,dk', '3,4,5']], columns=['countries', 'numbers'], index=['abc', 'bcd'])
I want to make it look like this:
df = pd.DataFrame([[1,2,3,0], [0,4,3,5]], columns=['de_number', 'ch_number', 'fr_number', 'dk_number'], index=['abc', 'bcd'])
Meaning exploding the countries column and getting for every value in index the number for every country in a separate column. I have the list of all the countries for this dataframe beforehand (meaning that I knew beforehand that I'm going to have the values ['de', 'ch', 'fr', 'dk'])
Is there a nice clean way of doing it? Everything that comes into my mind is quite messy.
AI: First we use DataFrame.explode to unnest your lists to rows.
Then we use DataFrame.pivot_table to pivot your dataframe from rows to column to get your desired result:
dfn = df.assign(countries=df['countries'].str.split(',')).explode('countries')
dfn['numbers'] = df.assign(numbers=df['numbers'].str.split(',')).explode('numbers')['numbers']
dfn = (
dfn.pivot_table(index=dfn.index,
columns='countries',
values='numbers',
aggfunc=lambda x: x,
fill_value=0)
.rename_axis(None, axis='columns')
.add_suffix('_number')
)
Output
ch_number de_number dk_number fr_number
abc 2 1 0 3
bcd 4 0 5 3 |
H: Several independent variables based on the same underlying data
I got a data containing, among others, two feature variables, which are based from the same underlying data (i.e. have mutual information), but they convey different information/message. How to handle such cases?
Since, logically, they will be highly correlated, it would make sense to only use one of them, preferably the one which conveys more information. But:
Is this the correct approach, or do we actually lose a valuable
information by not including it?
If including it is the correct approach, is there anything else needed to be done and/or checked to prevent messing up the model
(since they will be highly correlated)?
Example 1:
Let's say we have a feature which can be a pair of any numbers from 1 to 3, e.g. (1,1), (3,2), (2,1), etc.
And we also have another feature which tells us how many ones (i.e., 1) are in the previous feature, so for the previous cases this would correspond to 2, 0, 1, etc.
Although this second feature does not provide us with any new information not already present in the first feature per se (i.e. can be deduced from the first feature), it does have some special meaning, i.e. lets say that the number of ones is expected to influence the results (dependent variable).
Example 2:
One variable is a discrete/integer value, and the other one is 0 if the value of the first feature is below some specific value, and 1 if higher or the same.
Just as in the Example 1, the second feature has some special meaning.
AI: For predictive power, in general, including both shouldn't be a problem. But there is a lot of nuance here.
Foremost, if predictive power isn't all you care about: if you're making statistical inferences, or care about explainability and feature importances, then including both can cause issues. Briefly, your model may split the importance of the underlying variable across all the derived ones.
In some cases, it might not help at all: a tree model on your second example can already easily discover the derived variable given only the original. It may be actively harmful: adding too many of these derived variables might provide noise for your model to overfit to, rather than useful signal.
In some cases, it might help a lot: in your first example, a linear classifier won't see the derived feature at all from the original, and a tree model would require several consecutive splits to see it. A neural network could build it, but it's not clear whether the training process would find it. |
H: Combine 100 2D arrays into one 3D array
I have 100 2D lists, a0 to a99; it is easy to convert them into arrays (I have already verified this):
print(np.array(a0).shape) # (2,150)
print(np.array(a1).shape) # (5,150)
print(np.array(a2).shape) # (10, 150)
# the sizes of a0 to a99 vary from (1,150) to (10, 150) only
I want to combine these 100 2D arrays into ONE 3D array, for example combined_array:
print(combined_array.shape) # (100,10,150)
print(combined_array[0].shape) # (2,150)
print(combined_array[1].shape) # (5,150)
print(combined_array[2].shape) # (10,150)
I use Python 3.7.
AI: Your arrays have different shapes on the 0 axis, so you cannot use numpy.stack directly.
You can either use padding or put all arrays in a list. Using padding:
import numpy as np
a0 = np.empty((2,150))
a1 = np.empty((5,150))
a2 = np.empty((10,150))
max_shape = [0,0]
for a in [a0, a1, a2]:
if max_shape[0] < a.shape[0]:
max_shape[0] = a.shape[0]
if max_shape[1] < a.shape[1]:
max_shape[1] = a.shape[1]
arrays = []
for a in [a0, a1, a2]:
arrays.append(np.pad(a, pad_width=((0, max_shape[0] - a.shape[0]),
(0, max_shape[1] - a.shape[1])),
mode='constant'))
stacked_array = np.stack(arrays) |
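And the other option mentioned above is simply to keep the ragged arrays in a plain Python list instead of forcing a rectangular 3D array (a minimal sketch, reusing a0, a1, a2 from the snippet):
ragged = [a0, a1, a2]        # extend to a0 ... a99
print(ragged[0].shape)       # (2, 150)
print(ragged[2].shape)       # (10, 150)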
H: What is the layer above/below in a NN?
In the lecture notes of CS231n, it says (emphasis mine)
... There are three major sources of memory to keep track of:
From the intermediate volume sizes: These are the raw number of activations at every layer of the ConvNet, and also their gradients (of equal size). Usually, most of the activations are on the earlier layers of a ConvNet (i.e. first Conv Layers). These are kept around because they are needed for backpropagation, but a clever implementation that runs a ConvNet only at test time could in principle reduce this by a huge amount, by only storing the current activations at any layer and discarding the previous activations on layers below.
......
Normally we imagine a neural network propagating from left (input) to right (output), but what is the layer above/below the current layer? Is the input the bottom layer or the top layer?
I guess output is the top layer. This is because we only need the gradients of the next (closer to output) layer to compute those of the current layer. The exact activations of the next layer are not required, so we can discard them after the corresponding gradients have been computed. This is possible because we compute gradients backwards (last layers first, first layers last).
AI: Normally we imagine a neural network propagating from left (input) to right (output), but what is the layer above/below the current layer? Is the input the bottom layer or the top layer?
Yes, a "below" layer means an "earlier" layer, i.e. closer to the input. For CNNs it is quite common to speak of lower layers, lower level feature maps etc.
And for RNNs you can even take this literally as they are usually visualized bottom-to-top instead of left-to-right (e.g. see figures here). |
H: How to compute modulo of a hash?
Let's say that I have a set of users in my database, that have GUIDs as their IDs. I use xxhash to generate fixed-length hashes for each value, so that I can then proceed to "bucketizing" them and being able to do random sampling with the help of the modulo function.
That said, if I have a hash such as 367b50760441849e, I want to be able to use hash % 20 == 0 to randomly pick 5% of the population (hence, 20 "buckets"). This is the approach that is used in Kusto hash() with a modulo argument.
With this in mind, what is the approach that should be used to calculate an integer value from the hash, so that I can calculate the modulo?
AI: Any good hash will be uniformly distributed, which means that you can assume a uniform distribution when you apply modulo n, as long as $n < 2^{M/2}$, where M is the number of bits in your hash, see here. So for SHA1-32 you would at most modulo by $2^{16}$.
There is no special conversion procedure needed to obtain an integer value; what you have there is a hexadecimal representation of the hash, so you just need to convert it to a numeric type if you obtained it as a string. XXH32() and XXH64() both already produce an unsigned int output.
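In Python the conversion and bucketing is then just (a minimal sketch using the hash from the question):
h = "367b50760441849e"           # xxhash64 digest as a hex string
bucket = int(h, 16) % 20         # 20 buckets -> roughly 5% of the population each
sample_this_user = (bucket == 0)
print(bucket, sample_this_user)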
H: Word2Vec and Tf-idf how to combine them
I'm currently working on a text mining project and I'd like to know, once I get to the vectorisation step, which method is better.
Is it Word2Vec or Tf-Idf?
I have also seen that we can combine them. Why is that? Does it improve the quality of the data?
What about GloVe?
Thanks
AI: Word2Vec algorithms (Skip Gram and CBOW) treat each word equally, because their goal is to compute word embeddings. The distinction becomes important when one needs to work with sentence or document embeddings; not all words equally represent the meaning of a particular sentence. Here different weighting strategies are applied, and TF-IDF is one of those successful strategies.
At times it does improve the quality of inference, so the combination is worth a shot.
GloVe is a Stanford baby, which has often proved to perform better. You can read more about GloVe versus Word2Vec here, among many other resources available online.
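As a rough illustration of such a weighting strategy, here is a sketch of a TF-IDF-weighted average of word vectors as a sentence embedding. It assumes a gensim KeyedVectors model wv and a dict idf_weights mapping words to their TF-IDF/IDF weights; these names are placeholders, not a fixed API:
import numpy as np

def weighted_sentence_embedding(tokens, wv, idf_weights, dim=300):
    vecs, weights = [], []
    for tok in tokens:
        if tok in wv and tok in idf_weights:
            vecs.append(wv[tok])
            weights.append(idf_weights[tok])
    if not vecs:
        return np.zeros(dim)                      # fallback for out-of-vocabulary sentences
    return np.average(vecs, axis=0, weights=weights)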
H: Dynamic pricing with aggregate constraints
so I have this situation. I am trying to understand whether my customers will buy my product at a certain price based on a previous offer made to them. Specifically, I have a lot of data on my clients and I have rolled a previous "campaign" round and I saw who bought it and who did not. Now I am trying to calibrate new prices for those who did not and I wanted to find prices that maximize the chances of them buying it but at the same time I have an aggregate constraint that I have to uphold (a weighted aggregate profit margin).
I am in the early stages but I wanted to understand the models of reference for this kind of problems. Any suggestion to which area/material is most relevant is very appreciated.
p.s.: I have tried to scour the forum for similar questions without success but, in case I missed something, do close the question and kindly refer me to that content.
AI: Hi Asher11 and welcome to the community. I have worked in pricing in the past and I will try to give you some directions, although your question is quite general. It all comes down to how you want to formulate your problem. I suggest you to not look for models first but to figure out the problem you are facing and translate into math.
For example:
Do you want to know if a customer will buy a specific product according to some characteristics (features) of them? Then you have a prediction problem. You could possibly represent each customer with a set of features and try to predict if they will buy a product by training a Supervised Learning model. Start with linear regression in order to have a performance baseline.
Do you want to optimize profit/revenue? then you have an optimization problem and potentially you can use Reinforcement Learning methods to solve it. If the sequence of the events (offering and buying) does not play an important role in the decision then you can try algorithms from Bandit theory.
Another question you need to answer is why you want to use ML methods? You might be surprised but you will have a way easier solution by looking at Pricing Optimization with Linear Programming. You will need to make some assumptions about the supply and demand curve of each product (or a group of products), add appropriate constraints and optimize in order to get the best price. The product grouping might be done with ML methods or according to your intuition. By doing this exercise you will start getting an idea of the best way to formulate your problem.
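To make the linear-programming idea concrete, here is a minimal sketch. All numbers and the probability/margin matrices are made up; in practice they would come from your previous campaign data. It relaxes the binary "offer price j to customer i" choice to a fraction and uses scipy:
import numpy as np
from scipy.optimize import linprog

n_customers, n_prices = 4, 3
rng = np.random.default_rng(0)
buy_prob = rng.uniform(0.1, 0.9, (n_customers, n_prices))   # estimated P(buy | price)
margin = rng.uniform(5.0, 20.0, (n_customers, n_prices))    # profit margin per sale
target_margin = 30.0                                        # aggregate constraint

c = -buy_prob.ravel()                                        # maximise expected purchases
A_ub = [-(buy_prob * margin).ravel()]                        # expected aggregate margin >= target
b_ub = [-target_margin]
A_eq = np.kron(np.eye(n_customers), np.ones(n_prices))       # exactly one price choice per customer
b_eq = np.ones(n_customers)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n_customers, n_prices))                  # (fractional) price assignment
Rounding the relaxed solution (or solving the integer version with a MILP solver) then gives one candidate price per undecided customer; the ML model's main job is to estimate the purchase probabilities that feed this optimization.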
Be aware that training an RL agent to optimize profit/revenue means the agent will try random prices LIVE (RL training is based on trial and error). For this reason you need to look up techniques for pre-training a network to suggest prices within a specific range, and then use RL live without the fear that the algorithm will output unreasonable values. Another way is to build a "market" simulator with the data that you have and train the RL agent directly there.
By writing the above, I tried to give you few keywords that you could look up in order to specify your problem better and possibly find a solution. |
H: Recommender system that matches similar customers with similar highly rated products?
I have a dataset of 1,000 customers that bought 20 distinct phones and rated them 1-5. I have several demographic attributes for these customers (gender, age). My website offers 100 distinct devices, each with several attributes (screen size, case material, operating system, price).
For new customers, I want to recommend a ranked list of 5 devices they might enjoy. This might be based on 1) the new customer having a similar demographic profile to the 1,000 customers that gave ratings and 2) devices that were highly rated or had similar attributes to the devices that were highly rated by a similar customer.
All the examples I’ve found show how to make recommendations to similar customers for products that were rated highly by customers of similar profiles. I want to do this without limiting recommendations to the products that were rated. Instead, I want to build a system that might recommend phones I have ratings for, and also phones that haven’t been rated (ex: the phone is new, I need to get rid of inventory, or I want them to try something they might enjoy more) based on their similarity to other highly rated products by similar customers.
What is a good approach to build a recommended system for this scenario? I’ve read about content-based, collaborative filtering, and hybrid recommenders, but I can’t find a good example of this scenario. Is there a name for the type of system I am describing here?
AI: You want to do three things:
Cluster similar customers based on their profiles and their preferences for certain devices
Cluster similar devices based on ratings but also device attributes
Create a recommender system based on clusterings 1 and 2
Here is how I would do it:
Cluster customers via usual means (PCA, K-Means, etc.) based on their demographics
For each cluster develop a model that predicts the "rating" of a device based on the devices attributes (e.g. via ensemble learning, association rules, etc.). Then use the model to predict the rating of a new device for each cluster!
Implement a simple recommender based on the cluster membership and a relevant device set based on ratings, this works because of step 2 each device now has a ranking. If you want to "manipulate the customer" further you can add simple rules to add devices below the relevant rating threshold if they satisfy certain criteria (price, "is on sale", etc.).
Voila! |
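A minimal sketch of steps 1-3 could look like this (the file and column names are made-up placeholders, not a prescribed schema):
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

customers = pd.read_csv("customers.csv")   # customer_id, gender, age (already numeric/encoded)
ratings = pd.read_csv("ratings.csv")       # customer_id, device_id, rating
devices = pd.read_csv("devices.csv")       # device_id, screen_size, price, ...

# Step 1: cluster customers on their demographics
kmeans = KMeans(n_clusters=5, random_state=0)
customers["cluster"] = kmeans.fit_predict(customers[["gender", "age"]])

# Step 2: per cluster, learn the rating as a function of device attributes
device_features = ["screen_size", "price"]
joined = ratings.merge(customers, on="customer_id").merge(devices, on="device_id")
models = {c: RandomForestRegressor(random_state=0).fit(g[device_features], g["rating"])
          for c, g in joined.groupby("cluster")}

# Step 3: score every device (rated or not) for one cluster and recommend the top 5
devices["predicted_rating"] = models[0].predict(devices[device_features])
print(devices.nlargest(5, "predicted_rating"))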
H: How to analyse player and enemy position for data analysis
I am trying to analyse a Serious Game for students learning.
In one of the game levels, there are multiple position features, such as the player position (x, y, z), the enemy position (x, y, z), and the position the player shot at (x, y, z).
There are other features as well, like items acquired.
How do I use these for analysis, e.g. to categorize some students as "more-exploring" than others?
I am sorry if the question is not clear. But I am just confused with the project.
AI: Either do unsupervised learning with something like k-means clustering or DBSCAN, where you attempt to segment students into groups and see if you can discern any insights from the clusters generated (a sketch of this route is below). Alternatively, pick a threshold for certain categories, create a class column, label each student, and build a classification model.
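A minimal sketch of the clustering route; the position-data loading is assumed and the feature choices are only examples:
import numpy as np
from sklearn.cluster import KMeans

def exploration_features(positions):
    # positions: (n, 3) numpy array of (x, y, z) samples for one student's session
    steps = np.diff(positions, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()                       # total distance moved
    bbox_volume = np.prod(positions.max(axis=0) - positions.min(axis=0))    # volume of space covered
    return [path_length, bbox_volume]

# all_student_positions: assumed list with one (n, 3) array per student
X = np.array([exploration_features(p) for p in all_student_positions])
clusters = KMeans(n_clusters=2, random_state=0).fit_predict(X)   # e.g. "more" vs "less" exploring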
H: What's the purpose of padding with Maxpooling?
As mentioned in the question, I've noticed that sometimes there are pooling layers with padding.
More specifically, I found this Keras tutorial, where there's a net which contains MaxPooling layers with padding.
If padding=same in convolutional layers, our output size (at least height and width, depth can change based on the number of filters) is the same as the input.
I expected the same with the MaxPooling layer, but Keras model.summary() (as shown in the article) shows that the output size after the pooling layers is half of the input.
What's the point of adding padding to the Pooling layer if we still get an output which is half of the input?
AI: The whole purpose of pooling layers is to reduce the spatial dimensions (height and width). Therefore, padding is not used to prevent a spatial size reduction like it is often for convolutional layers. Instead padding might be required to process inputs with a shape that does not perfectly fit kernel size and stride of the pooling layer.
This is an example where it perfectly fits and your pooling layer does not require any padding:
Side note: The output dimensions are calculated using the usual formula of $O=\frac{I-K+2P}{S}+1$ with $I$ as input size, $K$ as kernel size, $P$ as padding and $S$ as stride.
However, let's take another example where it does not fit as nicely:
Here you need padding since your input size is not an integer multiple of your kernel size. Therefore, you need to add padding on one side in order make it work.
So padding="same" in Keras does not mean the spatial dimensions do not change. It just means that padding is added as required to make up for overlaps when the input size and kernel size do not perfectly fit.
Also this question for a discussion what the difference between same and valid padding is for pooling layers. |
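You can check this behaviour directly (a small sketch, assuming a TF 2.x-style Keras API):
import tensorflow as tf

x = tf.random.normal((1, 5, 5, 3))                                    # 5 is not a multiple of 2
same = tf.keras.layers.MaxPooling2D(pool_size=2, padding="same")(x)
valid = tf.keras.layers.MaxPooling2D(pool_size=2, padding="valid")(x)
print(same.shape)    # (1, 3, 3, 3): padding added so no input values are dropped
print(valid.shape)   # (1, 2, 2, 3): the last row/column of the input is ignored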
H: matplotlib graph to plot values and variance
I am really new to the world of matplotlib graphing as well as using those graphs to understand data.
I have written a simple python code where I read a .csv file in and then store the values of one column into a variable. Then plotting them similar to the code bellow:
dev_x= X #storing the values of the column to dev_x
plt.plot(dev_x)
plt.title('Data')
The graph looks like this, which seems quite messy and hard to understand. So, I am asking for some advice on how to make more cohesive graphs.
This is what my .csv column looks like. There are just many more rows like this.
['40' '20' '10' '0' '10' '30' '50' '70' '90' '110' '130' '150' '170' '200'
'240' '290' '40' '20' '10' '0' '10' '30' '50' '70' '90' '110' '130' '150'
'170' '200' '240' '290' '40' '20' '10' '0' '10' '30' '50' '70' '90' '110'
At the end of the day I would like a way to display these in a better way so I can also find the variance of this column.
AI: You have currently stored your numbers as strings, causing matplotlib to treat your variable as categorical, hence the y-axis is not ordered as expected. Before plotting you should therefore first convert them to numbers, for example like this:
x = [float(i.replace(",", ".")) for i in dev_x]
You can then use plt.plot(x) once again to plot the values, this should give you the following plot:
Edit:
Using the csv file you've provided, I am using the following code to read in the data and create the plot:
import matplotlib.pyplot as plt
import pandas as pd
# Read in csv file
df = pd.read_csv("DATA.csv")
# Set figure size
plt.figure(figsize=(15, 5))
# Create plot
plt.plot(df["DATA"])
This should give the following plot: |
H: How to correctly calculate average F1 score, precision and recall of a Named Entity Recognition system?
My Named Entity Recognition (NER) pipeline built with Apache uimaFIT and DKPro recognizes named entities (called datatypes for now) in texts (e.g. persons, locations, organizations and many more).
I have a gold corpus and a result corpus on which I want to calculate precision, recall and F1 score. As of now, I calculate these metrics like this:
1. Calculate precision, recall and F1 score from TP, FP and FN per datatype per document
2. Average precision, recall and F1 score per datatype for all documents
In the table you can see the results of step 2 in the corresponding datatype rows.
Regarding step 2: I think this way of calculating the F1 score is neither macro nor micro. I calculate precision and recall in the macro way (as explained here). But I don't calculate the F1 score as the harmonic mean of the average precision and recall (the macro way), but as the average F1 score over every datatype for all documents. I am getting higher results for the macro-way F1 score compared to the way I am doing it at the moment.
Question: What is the right way to calculate the average F1 score for every datatype? Both ways seem to be correct to me. Please name sources for your answers.
AI: There is a quite detailed comparison with references here: https://towardsdatascience.com/a-tale-of-two-macro-f1s-8811ddcf8f04
Basically the two definitions are used and both can be considered valid. For the sake of clarity I would recommend mentioning which definition you are using when you report your results. |
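A tiny numeric illustration of why the two definitions differ (the precision/recall values for the two datatypes are made up):
import numpy as np

precision = np.array([0.9, 0.5])    # per-datatype precision, averaged over documents
recall = np.array([0.6, 0.8])       # per-datatype recall, averaged over documents

f1_per_type = 2 * precision * recall / (precision + recall)
macro_f1_average_of_f1 = f1_per_type.mean()                     # average of per-datatype F1 scores
macro_f1_of_averages = 2 * precision.mean() * recall.mean() / (precision.mean() + recall.mean())
print(macro_f1_average_of_f1, macro_f1_of_averages)             # the two values generally differ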
H: What does these points mean in Naive Bayes?
I have two conceptual questions related to Naïve Bayes.
Naïve Bayes is robust to irrelevant features. What does this mean? Can anyone give an example how does the irrelevant features cancels out and what are the irrelevant features?
It is optimal if the independence assumption holds. Can anyone give an example of independence assumption not holding? I think it would be related to presence of words like Hong Kong, United Kingdom etc in a sentence.
Regards, Akshit Bhatia
AI: Naïve Bayes is robust to irrelevant features. What does this mean? Can anyone give an example how does the irrelevant features cancels out and what are the irrelevant features?
Imagine a classifier for sentiment analysis. For a strongly positive word like $w=great$, the conditional probability $p(w|pos)$ is going to be quite high whereas $p(w|neg)$ is going to be quite low, so the posterior $p(pos|d)$ for a document $d$ containing this word is likely to be much higher than $p(neg|d)$.
Now what happens with a neutral word $w=today$? Neither $p(w|pos)$ or $p(w|neg)$ is going to be much higher than the other. So all other things being equal, the difference between the two posterior probabilities is not going to depend much on this word compared to other more relevant words, for instance "great".
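A toy numeric sketch of that cancellation (the probabilities are made up and priors are ignored for simplicity):
p_word_given_pos = {"great": 0.08, "today": 0.01}
p_word_given_neg = {"great": 0.01, "today": 0.01}

score_pos = score_neg = 1.0
for w in ["great", "today"]:
    score_pos *= p_word_given_pos[w]
    score_neg *= p_word_given_neg[w]
# "today" multiplies both scores by the same factor, so it does not change which
# class wins; the decision is driven by the relevant word "great".
print(score_pos, score_neg)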
It is optimal if the independence assumption holds. Can anyone give an example of independence assumption not holding? I think it would be related to presence of words like Hong Kong, United Kingdom etc in a sentence.
In practice the independence assumption almost never holds with real data. For example words in a text actually depend on each other, that's how they make sense together in a sentence. This is true for entities like "Hong Kong" but also for virtually any sentence. For instance "I love chocolate but you hate it" doesn't mean that same as "You love chocolate but I hate it", or "it chocolate you hate love but" which means nothing. NB will treat all these variants the same way: basically the model assumes independence for the purpose of making things simpler and easier to compute, and it turns out that it works quite well in general, despite the massive simplification. |
H: Fitting a model as a significance testing alternative
My colleagues use a somewhat unusual approach to estimate how far the performance of some of our company-related processes diverges from historical levels. Their nature isn't really relevant for the purpose of this discussion; we may just assume that the final result can be represented by some real number which fluctuates over time and has some seasonality components.
They fit a regression model (with XGBoost) on historical data excluding the most recent period (say, the last month) and then compare the most recent values with the model's forecast.
I argue that this is basically overkill and some sort of crooked statistical significance test. Rather than fitting a model and making a forecast, we should calculate some normalised historical indicators (stratified by relevant groups and taking seasonality into account, where appropriate) and then do a standard significance test using a significance level that makes sense from a business perspective.
Am I right or not? Are there any better ways to approach the problem of estimating performance and comparing it to historical data?
Does the modelling/forecasting approach have any legs? How can it be justified from a statistical standpoint?
AI: Depends on the question you're asking, I suppose. You are right that you need some kind of significant test. The null hypothesis is that performance is "as expected," hasn't changed, and that any deviation from that is just sampling error or noise.
But what is "as expected"? if the business process were indeed static, it'd be straightforward. You know the distribution of measurements and can compute p-values or something.
But you have seasonality, and variation by some grouping. You want to control for that? sure. Doing that is also building some kind of time series model.
xgboost isn't particularly suited to timeseries forecasting, in the way that maybe FB prophet is. Also, you really want some distribution over predictions to assess how unusual the actual value is. prophet should give you that, but you would need to run xgboost with k-fold CV or something to start to infer the distribution of its prediction.
Like, I'm wondering what it means to compare one xgboost prediction to the actual result. What is "too different" and how would you know?
You're sort of on the same track but yes I think you are looking for a significance test at some level, and probably a better tool than xgboost. |
H: Why this calculation of weight vector in linear regression is only for small dataset?
Slides from my university say that the following way of calculating the weight vector is suitable only for small datasets. Can you please explain why it may be suitable only for small datasets?
Here, X is the observed variable and Y is the dependent variable (usually 0 or 1).
AI: $X$ is your design matrix (data), so can get large, both in number of data points and number of predictors. The issues are indeed:
Forming $XX^T$ may be computationally expensive
Computing it explicitly may be inaccurate due to floating-point roundoff, as you're adding lots of products
Inverting a matrix can be expensive here if the number of predictors is large as the Gramian is large
With lots of predictors and a big Gramian, it's more likely to be ill-conditioned and suffer from numerical instability |
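A small numerical sketch of this trade-off, using rows as data points (the usual numpy convention; the slide's $XX^T$ form corresponds to data points stored as columns):
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                        # n samples x d predictors
y = X @ rng.normal(size=20) + rng.normal(scale=0.1, size=1000)

w_normal_eq = np.linalg.inv(X.T @ X) @ X.T @ y         # explicit Gramian + inverse, as on the slide
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)        # QR/SVD-based solver, numerically safer
print(np.allclose(w_normal_eq, w_lstsq, atol=1e-6))    # same answer here, but the inverse scales badly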
H: How is the target_f updated in the Keras solution to the Deep Q-learning Cartpole/Gym algorithm?
There's a popular solution to the CartPole game using Keras and Deep Q-Learning:
https://keon.github.io/deep-q-learning/
But there's a line of code that's confusing; the same question has been asked under that article and many people are confused, but there's no complete answer.
They are basically creating a main network but also a target network to try to approximate the Q function.
In this part of the code they are replaying from the buffer to train the target network:
# Sample minibatch from the memory
minibatch = random.sample(self.memory, batch_size)
# Extract informations from each memory
for state, action, reward, next_state, done in minibatch:
# if done, make our target reward
target = reward
if not done:
# predict the future discounted reward
target = reward + self.gamma * \
np.amax(self.model.predict(next_state)[0])
# make the agent to approximately map
# the current state to future discounted reward
# We'll call that target_f
target_f = self.model.predict(state)
target_f[0][action] = target
# Train the Neural Net with the state and target_f
self.model.fit(state, target_f, epochs=1, verbose=0)
What I can't understand is this line:
target_f[0][action] = target
In terms of code, the predict function is returning a numpy array of arrays, like this one for example:
[[-0.2635497 0.03837822]]
Writing target_f[0] to access the first predicted action is understandable, but why are they using the [action]?
Thank you very much for the help!
AI: Hi David and welcome to the community! I think that the [0] is for accessing the array of actions as it is in double brackets. After this you need the action index to update appropriately.
The Q network would have as many outputs as the actions available (in your case 2). Then you want to update the weights of the network in the output layer that is responsible for estimating the Q that corresponds to the action selected. So it seems that you have received a $r(s,action)$ and thus you assign it as target in order to use it for the MSE($Q(s,action)$, $r(s,action)$) between estimation and real reward.
The weights of the other outputs cannot be updated by using the collected $r(s,action)$, as the Q-learning update equation would be wrong, so their target remains the same as the prediction (resulting in an MSE of 0). Think of each of the network's outputs as responsible for estimating the return of performing a particular action at a particular state. For example, the first output is responsible for estimating the return (given the input state) for action=left ($Q(s,left)$) and the other for action=right ($Q(s,right)$). You train each network head (output layer) with the MSE between the real reward and the estimation. Each time, the reward sampled from the environment is the result of one selected action; thus you update only the corresponding head by assigning the reward sample as its target.
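A toy illustration of that line with made-up numbers (a sketch, not part of the original code):
import numpy as np

target_f = np.array([[-0.26, 0.04]])   # model.predict(state): Q(s, action 0), Q(s, action 1)
action, target = 1, 0.95               # the action actually taken and its computed target
target_f[0][action] = target
print(target_f)                        # [[-0.26  0.95]]: only the taken action's head gets a non-zero error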
H: Classify graphs using machine learning
I am trying to classify an input graph (a 2D point sequence) into one of the predefined graphs (A, B, C, etc.) using machine learning. The goal is to identify which type of graph the input graph belongs to.
I have done classification of single data points before, but I have never classified sequences of data like graphs before. The only way I could think of is calculating 'mean squared error' between input & each of A,B,C graphs and choose the category with the lowest error.
3 example outputs might look like this.
input graph belongs to type A (confidence: 82%)
input graph belongs to type C (confidence: 68%)
input graph doesn't belong to any type (max confidence: 12%)
How can I achieve this using classification techniques or any other accurate way?
AI: The wording of your question makes it seem as if you only want to answer whether the current graph matches a specific graph A, B or C. If this is the case, machine learning might not be the best approach, and in fact I would say your MSE-based approach is probably a good starting place. If so, you may want to calculate the performance of this approach and see if it meets your needs. If you have a bunch of examples of graphs of class A, B, and C, then machine learning is probably a good approach.
If you have a bunch of examples of each class then you'll first need to calculate a set of features on each graph before you can apply machine learning techniques. After calculating the value of each feature on the graph you'll have a feature vector. From this point, the classification problem should be the same as the "single" data point problems you've already encountered.
Example features:
Min or max x or y value
The average slope of the series
Path length of the series
Maximum curvature of the series
Coefficients from a polynomial fit
The tsfresh package will calculate a bunch of features for time series and help you sort out the relevant ones, so it's probably a good place to start.
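A minimal hand-rolled sketch of such a feature vector for one (x, y) point sequence (the feature choices are only illustrative):
import numpy as np

def graph_features(x, y):
    # x, y: 1D numpy arrays describing one graph
    slope = np.polyfit(x, y, 1)[0]                        # average slope of the series
    path_length = np.hypot(np.diff(x), np.diff(y)).sum()  # path length of the series
    cubic_coeffs = np.polyfit(x, y, 3)                    # coefficients from a polynomial fit
    return np.concatenate(([y.min(), y.max(), slope, path_length], cubic_coeffs))

# Build one feature vector per labelled example graph, then train any standard
# classifier (e.g. sklearn's RandomForestClassifier) and use predict_proba to get
# the confidence values asked for in the question.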
Good luck! |
H: How to support a dynamic shape input for tf.where()
Tensorflow (tf.where) function does not support dynamic input shapes.
For example, the following function calculates the svd of a matrix A and tries to identify the singular values greater than a certain threshold to invert them while filling other values with zeros.
def func(A, threshold):
    s, u, v = tf.svd(A)
    condition = tf.greater(s, threshold)
    z = tf.where(condition, tf.reciprocal(s), tf.zeros(s.shape))
    return z
will result in an error:
Cannot convert a partially known TensorShape to a Tensor: (?,)
Any idea how to overcome this issue?
AI: Just do tf.zeros_like(s) instead of using the shape explicitly. |
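Applied to the function from the question (a sketch, keeping the TF 1.x-style API used there):
import tensorflow as tf

def func(A, threshold):
    s, u, v = tf.svd(A)
    condition = tf.greater(s, threshold)
    # tf.zeros_like infers the (possibly dynamic) shape from the tensor itself
    z = tf.where(condition, tf.reciprocal(s), tf.zeros_like(s))
    return z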
H: Dealing with issues in "test" predictons for single "items" (null values, standardization in place, etc)
I know this is kind of a broad question, but I have tried to scour both this forum and the internet in general to no avail for this particular situation. So imagine I have a trained model for which, though the data initially might not be complete and clean, I took steps to make the data compliant with the model requirements (no outliers where appropriate, de-skewed if necessary, normalized if necessary, null values imputed appropriately). This is done in a cross-validation framework. All this works and is absolutely fine when tuning the model, but I run into problems when I try to make a single prediction out of it (meaning I have a single "test" record - think of a web service with some fields that can be null). In fact, null values generally need a dataset to refer to for filling, as well as for the normalization/outlier procedures.
Initially I thought about linking such "test" record to a portion of the "train" dataset so that I would not run into this problem (such problems would be resolved) but at that point other issues would arise: how would I choose such dataset? if I used the most recent data, would I bias it somehow? and using the whole dataset is impractical as well as potentially unfeasible when dealing with "big" data.
Do you happen to know whether there are some best practices on the topic, or could you refer me to the themes/keywords that deal with these issues?
p.s.: for the relevance of the problem, the null values most likely will remain there (I have no way of forcing them beforehand in the web application in order to have a smoother user experience)
AI: You need to save the instructions for performing these preprocessing steps, not necessarily the dataset that you extracted them from.
See
Obtaining consistent one-hot encoding of train / production data
Binary Classification - One Hot Encoding preventing me using Test Set
one-hot-encoding categorical data gives error
In particular, sklearn preprocessors can be pickled, then used with their transform method in production, if you can use sklearn in deployment. PMML also cam translate most transformers. Or you can write your own simple transformer.
As to using newer data to rework the transformers, that's getting closer to retraining; in most settings, I would keep it in the same place as refitting the model: either both offline or both online. |
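A minimal sketch of that pattern with sklearn and joblib (X_train and the single incoming record below are placeholders):
import joblib
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

prep = Pipeline([("impute", SimpleImputer(strategy="median")),
                 ("scale", StandardScaler())])
prep.fit(X_train)                          # medians / means / stds learned once, from training data
joblib.dump(prep, "preprocessor.pkl")      # ship this next to the model

# Later, inside the web service, for one record with possible nulls:
prep = joblib.load("preprocessor.pkl")
x_ready = prep.transform(np.array([[3.2, None, 7.0]], dtype=float))   # no reference dataset needed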
H: Is it possible to pass dataframes between R and Python using Arrow with zero-copy?
Arrow is an in-memory format, so I can have a dataframe in Python backed by the arrow format. So the dataframe sits in memory, can I use that dataframe directly from R without making a copy of the dataframe? Imagine if the dataframe is 100G in size so copying is very inefficient.
I read in the documentation of Arrow that there is zero-copy streaming, but there isn't a way to make the whole dataframe available.
AI: Not currently, though hopefully very soon. https://issues.apache.org/jira/browse/ARROW-3750 is in progress and hopefully will resolve in the coming weeks. |
H: Will Keras fit( ) function automatically shuffles the input dataset by default?
I am asking about this model fit() function.
fit(x=array_x, y=array_y, batch_size=32, epochs=10)
The question is straightforward:
Whether fit( ) will automatically shuffle the input dataset? (array_x, array_y in this case)
I ask this question because I find that my fit() and fit_generator() have different outputs with the same input. My generator feeds inputs to fit_generator() in order. I guess the reason is related to shuffling.
AI: Yes, by default it does shuffle.
Here is the documentation.
The default call signature:
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
validation_split=0.0, validation_data=None, shuffle=True,
lass_weight=None, sample_weight=None, initial_epoch=0,
steps_per_epoch=None, validation_steps=None, validation_freq=1,
ax_queue_size=10, workers=1, use_multiprocessing=False)
where the description for shuffle is:
shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
This means it will not shuffle if you have set a value for steps_per_epoch. |
H: How to split data into 3 parts in Python - training(70%), validation(15%) and test(15%) and each part have similar target rate?
I'm working on a company project in which I need to partition the data into 3 parts - Train, Validation, and Test (holdout).
Does anyone know how I can split the data into the 3 parts above so that each part has a similar response variable (target rate), i.e. similar class proportions for classification and a similar mean of the response for regression?
I know how to split data into 3 parts by using train_test_split function from SKLEARN
from sklearn.model_selection import train_test_split
x, x_test, y, y_test = train_test_split(xtrain,labels,test_size=0.2,train_size=0.8)
x_train, x_cv, y_train, y_cv = train_test_split(x,y,test_size = 0.25,train_size =0.75)
But this does not give a similar target rate, can someone help me?
AI: For classification you can use the stratify parameter:
stratify: array-like or None (default=None)
If not None, data is split in a stratified fashion, using this as the class labels.
See sklearn.model_selection.train_test_split. For example:
x, x_test, y, y_test = train_test_split(xtrain,labels,test_size=0.2, stratify=labels)
This will ensure the class distribution is similar between train and test data.
(side note: I have tossed the train_size parameter since it will be automatically determined based on test_size)
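To get the full 70/15/15 split asked for, you can chain two stratified calls (using the variable names from the question):
from sklearn.model_selection import train_test_split

x_train, x_tmp, y_train, y_tmp = train_test_split(
    xtrain, labels, test_size=0.30, stratify=labels, random_state=42)   # 70% train
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)      # 15% validation / 15% test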
For regression there is, to my knowledge, no current implementation in scikit learn. But you can find a discussion and manual implementation here and here with regards to cross-validation. |
H: K-Means Clustering for data points with multiple attributes
I'm very new to K-Means clustering. Every example that I have seen has a two-dimensional data set.
I am working to classify recipes of varying ingredient composition into families. Each recipe is composed of a number of ingredients, and I want to group together the recipes that are similar to each other. Each recipe on average has 8 ingredients.
Is there any way I can use K-Means clustering to group these recipes together? Or would you suggest another unsupervised machine learning method?
AI: In k-means clustering, the "k" defines the amount of clusters - thus classes, you are trying to define.
You should ask yourself: how many different groups (=clusters) of recipes am I looking for?
In your case, your data points (features) (=recipes), are of variable dimensions (attributes) (avg 8 dimensions).
What you must realize is that the dimension of each data point must be of the same length. (So the number of ingredients for each recipe should be the same.)
This is because k-means calculates the distance of each datapoint to each centroid (k centroids).
To create data points of the same dimension, you can introduce "DTW" (dynamic time warping).
More about this here:
https://stats.stackexchange.com/questions/26100/how-to-run-k-means-clustering-on-data-points-of-varying-dimensionality
An easier solution would be to populate your dataset with a value for each ingredient present in the dataset, and set the value to "0" when the ingredient is absent.
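A minimal sketch of that zero-filled representation plus K-Means (recipe names and ingredients are made up):
from sklearn.cluster import KMeans
from sklearn.preprocessing import MultiLabelBinarizer

recipes = [["flour", "egg", "sugar"],
           ["flour", "water", "yeast"],
           ["egg", "milk", "sugar"]]
X = MultiLabelBinarizer().fit_transform(recipes)   # one column per ingredient, 0 if absent
families = KMeans(n_clusters=2, random_state=0).fit_predict(X)
print(families)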
H: Clustering data set with multiple dimensions
I have a data set which is similar to the following:
It is recipe data along with the composition of the recipe (in %)
I have 91 recipes and 40 ingredients in total. I want to be able to cluster these recipes together into families based on similarity of ingredient composition.
How would I achieve this? Which clustering method can be used and how?
AI: welcome to the community.
There are many criteria on the basis of which you can cluster the recipes. The usual way to do this is to represent recipes in terms of vectors, so each of your 91 recipes can be represented by vectors of 40 dimensions. This means that now the system or machine will identify your recipes as vectors in a 40-dimensional space.
Now, to check the "similarity" between the recipes you have two of the most common metrics, one is the euclidean distance. Check it out:- https://en.wikipedia.org/wiki/Euclidean_distance
The other is the cosine similarity. Check it out:- https://en.wikipedia.org/wiki/Cosine_similarity
Coming back to how to cluster the data, you can use KMeans, it is an unsupervised algorithm. The only thing you need to input here is how many clusters you want. Scikit-Learn in Python has a very good implementation of KMeans. Visit this link.
However, there are two conditions:- 1) As said before, it needs the number of clusters as an input. 2) It is a Euclidean distance-based algorithm and NOT a cosine similarity-based.
A better alternative to this is Hierarchical clustering. It creates the clusters in a top-down approach(divisive) or bottom-up approach(agglomerative) recursively. Read about it here. It is better than KMeans in two ways:-
1) You have some flexibility in how to cut the recursion to obtain the clusters: either on the basis of the number of clusters you want (like KMeans) or on the basis of the distance between cluster representatives (both options are sketched after this list).
2) You can also choose among various similarity criteria or affinity, like euclidean distance, cosine similarity, etc.
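A minimal sketch with scikit-learn (X is assumed to be your 91 x 40 composition matrix, with 0 for absent ingredients; the threshold value is just a placeholder):
from sklearn.cluster import AgglomerativeClustering

# Cut by a fixed number of clusters ...
families = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X)
# ... or cut by a distance threshold instead (point 1 above)
families_by_distance = AgglomerativeClustering(
    n_clusters=None, distance_threshold=50.0, linkage="ward").fit_predict(X)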
Hope this helps,
Thanks. |
H: Can numerical discrete finite data be always treated also as categorical?
In many sources, for example here data is classified as being qualitative (categorical) and quantitative (numerical). Where numerical data can be continuous or discrete, and discrete can be finite or infinite.
I want to establish whether numerical, discrete and finite data can also be treated as categorical data.
I know that it depends on 'the meaning' of the data and requires some common sense analysis but I want to establish if the following statement is always true:
"Numerical, discrete and finite data can also be a categorical data"
In the classification of data, numerical data is said to have 'mathematical meaning as a measure of something'. But 'technically', without assessing the meaning of the data, this also makes it capable of being categorical data (ordinal or not), if we strip it of its mathematical meaning.
Example can be a following array of items:
Energy
15
15
20
25
25
Every observation has an 'Energy' characteristic. It can be treated as a discrete and finite numerical value measuring the energy an item has. But it can also be treated as a category: two items are in category 15, one is in category 20 and two are in category 25.
Thanks for confirming this.
AI: I would separate value with representation in this case.
Energy as you mentioned, in the real world, holds a very continuous value. However, we may choose (for various reasons) to represent this value in different forms.
We can take values as they are (15.21252, 23.76535), we can round them into integers (15, 24), we can even decide to represent this data by clusters ("UNDER 20", "OVER 20").
Technically, all data can be represented as categorical data, we need to consider what value does this data represent, and what are we losing/gaining from using different representations. |
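In pandas, the three representations look like this (a small sketch):
import pandas as pd

df = pd.DataFrame({"Energy": [15.21252, 23.76535, 15.0, 25.0]})
as_is = df["Energy"]                              # continuous numerical values
rounded = df["Energy"].round().astype(int)        # discrete numerical values
binned = pd.cut(df["Energy"], bins=[0, 20, 100],
                labels=["UNDER 20", "OVER 20"])   # categorical representation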
H: Simulating mental disorders in machine learning systems
Is there any work as the following example in ML:
Suppose a reinforcement learning system which has a fixed penalty for every action, so there would be regret for taking an action if the estimated reward is wrong most of the time. And suppose the estimated action values reach that limit over many iterations; say the agent estimates there is a +100 reward 10 steps ahead, while taking each step has a -1 reward (+1 penalty). After taking 10 steps the agent has a total of -10 reward, and if the +100 promise comes true the total sum would be +100 + (-10) = +90, but in our case that +100 is wrong most of the time and the true reward would be -10 + 0 = -10. This agent will end up taking no action, as the action-value method is flawed and taking action has a cost. But reaching that point (taking no/little action) takes some iterations.
This is a simple example, but now our system resembles depression as seen in humans. Similarly, we can simulate machine mental disorders (depression, anxiety, ...) in more complicated settings, and studying these problems can help with similar disorders in psychiatry and vice versa.
Is there some related work (links are appreciated)? Which keywords should I use for searching?
AI: When you look for work on mental disorders and data science you will probably only find things related to real world data which is being analyzed, e.g. predict a mental disorder based on social-economic and medical features of people.
However, your toy example is somewhat similar to the toy examples in chapter "17 Making Complex Decisions" and "21 Reinforcement Learning" in Artificial Intelligence: A Modern Approach. While you will not find anything related to your topic in the book, it points in the direction I would focus research on: intelligent agents and related terms to agent theory. That is probably a more promising direction of research than "data science" or "machine learning" here.
As a side note: I would keep in mind that psychological models of mental disorders itself are an active area of research. You could, for example, look into a reinforcement learning agent to investigate learned self-helplessness as a potential cause of clinical depression. However, learned self-helplessness as a cause of clinical depression has, to my knowledge, not been fully validated in clinical psychology yet and is only one psychological model among others. |
H: Matching Algorithms in Python
We run an online marketplace for the Commercial Real Estate industry and are looking to write matching algorithms to reduce the cost of search and transaction for property owners/tenants.
We have two groups of users - owners and tenants - and would like to implement matching algorithms based on their characteristics. Without any prior transaction data to work with, how would you approach this problem? More specifically, what matching algorithm techniques would you implement?
Code is in python.
Example:
Property
"property": [
{
"amenities": ["kitchen",
"conference room",
.....,
.....]
"location": "Munich",
"feature2: "xyz"
...
}
Tenants
"Tenants": [
{
"amenities": ["kitchen",
"conference room",
.....,
.....]
"location": "Munich",
"Type": 'Retail',
"feature3": 'xyz'
...
}
AI: According to your description you can only use similarities between descriptions, and since there's no labelled data it has to be unsupervised.
Option 1: heuristic (i.e. ad-hoc unsupervised method). Based on your knowledge of the specifics of the data, implement a function which returns a score representing how similar two descriptions are. For example a basic matching could simply count the number of common items between the two descriptions divided by the number of items in the longest description. For a query description return the N top similar matches after comparing it to every potential match. Naturally this can be improved in many ways.
Option 2: represent each description as a vector and use any generic similarity measure over pairs of vectors (e.g. cosine). The representation can be some kind of direct representation, e.g. some kind of TF-IDF vector, or a more sophisticated representation as a tree or graph embedding. |
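A minimal sketch of Option 1 for your property/tenant dicts (the 0.7/0.3 weights are arbitrary and meant to be tuned):
def match_score(prop, tenant):
    a, b = set(prop["amenities"]), set(tenant["amenities"])
    amenity_overlap = len(a & b) / max(len(a), len(b)) if a and b else 0.0
    same_location = 1.0 if prop["location"] == tenant["location"] else 0.0
    return 0.7 * amenity_overlap + 0.3 * same_location

# Rank all properties for one tenant and keep the top N matches:
# top_n = sorted(properties, key=lambda p: match_score(p, tenant), reverse=True)[:N]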
H: Loading models from external source
I have a 500MB model which I am committing to Git. That is really bad practice, since with newer model versions the repository will become huge. It will also slow down all builds for deployments.
I thought of using another repository that contains all the models and then fetching them at run time.
Does anybody know a clean approach or alternative?
AI: In most cases, you would use a file-storage solution such as Amazon S3 or Google Cloud and many others, which provide designated solutions for large object storage and retrieval.
You would then ideally want to update your code to retrieve the model directly from the file storage. Whether this download needs to be done on every run or only once (storing the model locally for future runs) should be decided based on your specific needs.
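For example with Amazon S3 (a sketch; the bucket, key and local paths are placeholders):
import boto3
import joblib

s3 = boto3.client("s3")
s3.download_file("my-model-bucket", "models/v3/model.pkl", "/tmp/model.pkl")
model = joblib.load("/tmp/model.pkl")   # cache the local file if you only want to download once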
H: What should I do with a test and train set not having same classes?
Assume we have a train dataset with 5 classes a, b, c, d, e, but the test dataset has only d and e, plus an extra class f not present in the train set. If I want to do machine learning with this dataset in Weka or Python, what should I do with it? Should I transform the data, and how? Should I remove classes from the train set which are not in the test set?
(it's a multi-class classification, and the letters a-f are the target classes)
AI: Well, what you should do depends highly on what your goal here is.
Do you - for whatever reason - have to use the given test and training sets as such?
If not, a valid approach would be to shuffle the available data and determine a new test and training set, so that they each contain all classes.
Do you want a classifier only for d and e, and have to use the given test and training sets?
Then nothing really speaks against just training on the training set with all classes and testing only the ones you are interested in/ have test data for. Of course, then you can not make statements about the accuracy for other classes.
You could also try removing the other classes from the training set, if they won't appear in your application, which may or may not make the classifier perform better on the test data.
Essentially, it's not easy to provide a certain answer without having more context about why you have this kind of data, and what your goal is.
It would be great if you can provide a few more details, so we can work on a more fitting answer. |
H: Getting feature vectors from CatBoost pool
I have a dataset with some numerical and categorical features and I am trying to apply CatBoost for categorical encoding and classification.
Since my dataset is highly imbalanced, with a large number of data samples with label 0 compared to those with label 1, I'm also trying to use SMOTE to synthesize label 1 data samples before CatBoost classification.
My code -
# train_categorical_features is a list of columns that have categorical values
train_pool = Pool(data = X,
label = y,
cat_features = train_categorical_cols)
X_enc = train_pool.get_features()
print(X_enc)
y_enc = train_pool.get_label()
print(y_enc)
smote = SMOTE()
X_res, y_res = smote.fit_resample(X_enc, y_enc)
print('Resampled dataset samples per class {}'.format(Counter(y_res)))
predictions = []
for i in range(10):
clf = CatBoostClassifier(learning_rate=0.08,
depth = 10,
loss_function='Logloss',
l2_leaf_reg = 4,
iterations=1000,
task_type="GPU",
random_seed=i,
logging_level='Silent')
clf.fit(train_pool, plot=True,silent=True)
predictions.append(clf.predict_proba(test[inputcols])[:,1])
print(clf.get_best_score())
I get an error on X_enc = train_pool.get_features() that says -
CatBoostError: Pool has non-numeric features, get_features supports
only numeric features
My questions are -
Is my approach towards applying SMOTE with CatBoost correct?
I've diligently followed the catboost documentation, and I am not able to understand or fix the error I've mentioned above. Would appreciate any help.
AI: The reason you are getting the error is that your pool contains categorical features. If all of your features were numerical, it would work fine.
SMOTE is irrelevant to this particular error.
H: How to select one record from multiple record for a subject during analysis?
I have a dataset where I am working on a binary classification. I have two classes of subjects. One is Outpatients and Other is Inpatients. (66:33 is the class proportion)
My objective is to identify the risk factors that influence hospital admission (Inpatients).
But the problem here is, I have my dataset like as below
1) Let's say we have a subject called "John". He has visited hospital 19 times based on my data duration from Jan 2001- Dec 2005. All of his 19 visits are outpatients.
2) Let's say we have another subject called "Jack". He has visited the hospital 34 times based on data duration from Jan 2001-Dec 2005. Out of 34 visits, he has been admitted as inpatient 18 times and rest 16 are outpatient visits.
So now my question is
1) Usually for analysis, we only see one record per subject/individual. Right? But now on what basis should I pick that one record?
Meaning, for John out of his 19 visits, which one should I pick?
Similarly for Jack, out of his 18 inpatient visits, which one should I pick?
I choose only one out of 18 from Jack because we don't need his outpatient info as we already have a separate group of outpatients and jack is considered for Inpatient class (because he has inpatient records too unlike John).
2) Is it really necessary to have only one record per person for analysis? Is there anyway to do this? Or is it like I have to represent in aggregate form the info of multiple records in one record? Is there any theory that allows analysis of multiple records for an individual?
Hope my question is clear and kindly request you to help me
AI: I will try and be as concise as possible. First, let's redefine the way you think about your data points. There can ever only be two types of visits in terms of time. Periodic and Non-Periodic. Let's call each visit an event. Some events could be related to chronic conditions where periodic visits are quite common. Some events could be related to flu, head injury, etc., These are non-periodic visits. You need to think about what you are trying to predict. Are you predicting an inpatient visit based on periodic events or non-periodic events?
1) I would not recommend picking a single record out of multiple visits, as it is insufficient. For example, if I come in for a visit for a blood pressure check-up and you pick that event, then you only get that information, whereas I might be visiting for a follow-up check-up or something like that, or it could be a pre-surgery visit. Some of these events could be both inpatient and outpatient depending on the nature of the event. It is also possible that some of the outpatient visits will lead to an inpatient visit. Jack's earlier outpatient visits could be a sign of him coming in as an inpatient. All of John's visits are outpatient, but will he ever be an inpatient? Is that what you are trying to predict?
2) You can definitely have multiple records per person and treat them as a sequence of visits over a period of time. This basically means that you are treating these visits as sequences. Most of the health care related risk prediction stuff is based on sequences. It is just the way you treat sentences. Each patient is described by a sentence of visits. This meaning, you can leave John's visits and Jack's visits as they are and use padding techniques to make them appropriate for a deep learning solution to classify which one would be inpatient.
You can refer to this paper.
This is definitely not concise but the number of permutations and combinations when it comes to healthcare stuff is high. Please let me know if you've got any questions. |
H: Why am I getting a different t statistic?
import scipy.stats as st
st.t.ppf(0.95, df=9)
Output: 1.8331129326536335
As per this, I should get 2.262 with dof = 9.
AI: t.ppf is calculating a 1-tail inverse cdf. It looks like you're trying to look up the t-value for p=.95, but the value you are referring to in the table is a 2-tailed value of .95, meaning the one tailed value is 1 - (1-.95)/2 = .975. So
In [104]: st.t.ppf(0.975, df=9)
Out[104]: 2.2621571627409915 |
H: How to use the $\chi^2$ test to select the features, that can be String or categorical?
I want statistics to select the characteristics that have the greatest relationship to the output variable.
Thanks to this article, I learned that the scikit-learn library proposes the SelectKBest class that can be used with a set of different statistical tests to select a specific number of characteristics.
Here is my dataframe:
Do you agree Gender Age City Urban/Rural Output
0 Yes Female 25-34 Madrid Urban Will buy
1 No Male 18-25 Valencia Rural Won't
2 ... ... ... ... ... Undecided
....
The output is 'Will buy', 'won't' and 'undecided'.
I then tried the chi-square statistical test for non-negative characteristics to select 10 of the best characteristics:
import pandas as pd
import numpy as np
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
data = pd.read_csv("D://Blogs//train.csv")
X = data.iloc[:,0:20] #independent columns
y = data.iloc[:,-1] #target column i.e price range#apply SelectKBest class to extract top 10 best features
bestfeatures = SelectKBest(score_func=chi2, k=10)
fit = bestfeatures.fit(X,y)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X.columns)
#concat two dataframes for better visualization
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score'] #naming the dataframe columns
print(featureScores.nlargest(10,'Score')) #print 10 best features
But certain columns are 'String'. So, I get the terminal back:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-59-e64d61febefd> in <module>
1 bestfeatures = SelectKBest(score_func=chi2,k=10)
----> 2 fit = bestfeatures.fit(X,y)
3 dfscores = pd.Dataframes(X.columns)
4 #concat two dataframes for better visualization
5 featuresScores = pd.concat([dfcolumns,dfscores], axis = 1)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\feature_selection\univariate_selection.py in fit(self, X, y)
339 self : object
340 """
--> 341 X, y = check_X_y(X, y, ['csr', 'csc'], multi_output=True)
342
343 if not callable(self.score_func):
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator)
754 ensure_min_features=ensure_min_features,
755 warn_on_dtype=warn_on_dtype,
--> 756 estimator=estimator)
757 if multi_output:
758 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False,
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
565 # make sure we actually converted to numeric:
566 if dtype_numeric and array.dtype.kind == "O":
--> 567 array = array.astype(np.float64)
568 if not allow_nd and array.ndim >= 3:
569 raise ValueError("Found array with dim %d. %s expected <= 2."
ValueError: could not convert string to float: 'Yes'
AI: You can only compute chi2 between two numerical arrays. You are getting that error because you are passing string values. (I am also not sure whether it works for multi-class classification.)
from sklearn.preprocessing import LabelEncoder
df = df.apply(LabelEncoder().fit_transform)
This will solve the problem for you. But there are a thousand ways to encode features and for sure other will work better for you. |
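A minimal end-to-end sketch under that approach, using a toy dataframe shaped like the one in the question (the toy values and k=3 are assumptions for illustration; k must be at most the number of feature columns):
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import SelectKBest, chi2

# Toy data shaped like the question's dataframe
data = pd.DataFrame({
    "Do you agree": ["Yes", "No", "Yes", "No"],
    "Gender": ["Female", "Male", "Male", "Female"],
    "Age": ["25-34", "18-25", "25-34", "18-25"],
    "City": ["Madrid", "Valencia", "Madrid", "Valencia"],
    "Urban/Rural": ["Urban", "Rural", "Urban", "Rural"],
    "Output": ["Will buy", "Won't", "Undecided", "Won't"],
})

# Label-encode every column so chi2 receives non-negative integers
encoded = data.apply(LabelEncoder().fit_transform)
X = encoded.drop(columns=["Output"])
y = encoded["Output"]

selector = SelectKBest(score_func=chi2, k=3)
selector.fit(X, y)
scores = pd.DataFrame({"Specs": X.columns, "Score": selector.scores_})
print(scores.nlargest(3, "Score"))
Note that sklearn's chi2 expects non-negative features such as booleans or frequencies, so one-hot encoding the categories (e.g. with pd.get_dummies) is often a statistically cleaner choice than integer label encoding.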
H: How to encode a column containing both strings and numbers
I have a column in my dataset which contains both numbers and strings as values. I want to encode the string values so that I can use the column for prediction. What is the best way to do this?
AI: Encoding is a way to transform categories into numerical variables; there are many techniques.
The best technique depends on what information you want to encode and which model you are going to use. Some models benefit more from one technique than others.
You should ask yourself the following questions to try to find the best solution:
- Why do you have numerical and categorical values in a column?
- Does it make sense to have them?
- Which model am I using?
- What is the best way to feed them to this model?
From what you describe, I would recommend that you either apply target encoding to the whole column as it is, or split that feature into two (one categorical, one numerical) and then target-encode the categorical part, and possibly the numerical part as well. Just check which works better.
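As a rough illustration of target encoding with plain pandas (the column names and values below are made up; in practice the encoding should be fitted on the training split only to avoid target leakage, and libraries such as category_encoders provide a TargetEncoder that adds smoothing):
import pandas as pd

df = pd.DataFrame({
    "mixed_col": ["red", 5, "blue", 5, "red", 7],   # hypothetical column mixing strings and numbers
    "target":    [1, 0, 0, 1, 1, 0],
})

# Treat every value as a category and replace it with the mean target for that category
df["mixed_col"] = df["mixed_col"].astype(str)
encoding = df.groupby("mixed_col")["target"].mean()
df["mixed_col_encoded"] = df["mixed_col"].map(encoding)
print(df)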
H: Relation between hyperparamters and training set for an object detection model
I have 2 instances of an object detection model. The only difference between these two models is the training data used:
The first model was trained with a small training set
The second model was trained on a larger training set than the first one
The first model was trained on the following hyperparameters:
Number of iterations: 250k
Batch Size: 10
Learning Rate: warms up to 0.001 and decreases to 0.0002 after 150k iterations
Since the second model has more training data, I assumed I needed to change the hyperparameters a bit. So I tried training the second model with the following hyperparameters:
Number of iterations: 600k
Batch Size: 10
Learning Rate: warms up to 0.001 and decreases to 0.0002 after 400k iterations
When I measure the mAP for both models on a testing set, the first model vastly outperforms the second model.
model 1 mAP: 0.924
model 2 mAP: 0.776
This leads me to my question for you:
How would the hyperparameters (batch size, learning rate etc) change when the size of your training set increases? What factors need to be considered for this increase in training set size, in order to get the most optimal model possible?
Any and all responses will be greatly helpful. Thank you :)
AI: A major difference between the first and the second model you trained is the size of the training data (assuming the model is not pretrained). More data generally needs more iterations, and the batch size usually needs to grow accordingly.
Batch Size:
On the smaller dataset, a batch size of 10 worked well: the error was averaged over 10 samples and then back-propagated through the model. On the larger dataset the batch size stayed the same, so each update is still based on only 10 samples, which is a relatively small slice of a much larger dataset; consider increasing the batch size as well (hardware permitting).
Learning Rate:
With the larger dataset you also train for more iterations. The learning rate scales the gradient of the loss with respect to the parameters: a smaller learning rate helps prevent overshooting the minimum of the loss, but if the schedule decays too early over a much longer run, progress can stall. Consider keeping the learning rate higher for longer and then decaying it gradually as the loss approaches its minimum.
If you are training a popular architecture (Inception, VGG, etc.) on datasets like ImageNet or COCO with few modifications, you should definitely read the corresponding research papers, as their published training schedules provide a good starting point.
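A minimal sketch of one way to scale the schedule when the dataset grows. The idea of keeping the number of passes over the data roughly constant, the warmup fraction, and the decay point at 60% of training (mirroring 150k/250k in the first run) are all assumptions for illustration, not tuned values:
def lr_schedule(step, total_steps, base_lr=0.001, final_lr=0.0002,
                warmup_frac=0.02, decay_frac=0.6):
    # Warm up to base_lr, hold it, then drop to final_lr after decay_frac of training
    warmup_steps = int(warmup_frac * total_steps)
    decay_step = int(decay_frac * total_steps)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    if step < decay_step:
        return base_lr
    return final_lr

# Scale the number of iterations with the dataset size so that each sample is
# seen roughly as often as in the smaller run (hypothetical dataset sizes).
small_dataset_size, small_iters = 10_000, 250_000
large_dataset_size = 30_000
large_iters = int(small_iters * large_dataset_size / small_dataset_size)
print(large_iters, lr_schedule(step=500_000, total_steps=large_iters))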
H: Export pandas dataframe to a nested dictionary from multiple columns
It's a similar question to
Export pandas to dictionary by combining multiple row values
but in this case I want something different.
from pandas import DataFrame
df = DataFrame([
['A', 123, 1],
['B', 345, 5],
['C', 712, 4],
['B', 768, 2],
['B', 768, 3],
['A', 123, 9],
['C', 178, 6],
['C', 178, 5],
['A', 321, 3]],
columns=['maingroup', 'subgroup', 'selectedCol'])
I'd want the output to be:
{
'A': {'123':[1, 9], '321':[3]},
'B': {'345':[5], '768':[2, 3]},
'C': {'712':[4], '178':[6, 5]}
}
AI: Using dict comprehension with nested groupby:
d = {k: f.groupby('subgroup')['selectedCol'].apply(list).to_dict()
for k, f in df.groupby('maingroup')}
Output:
{'A': {123: [1, 9], 321: [3]},
'B': {345: [5], 768: [2, 3]},
'C': {178: [6, 5], 712: [4]}} |
H: How to drop points from a data series for presentation?
We monitor long running industrial engines and we have data series that we want to present on a line chart on a web page. For instance, we have sensors that monitor the oil temperature and pressure on the engine.
There are several other similar data series on the components of the equipment.
The objective is to have a human operator identify deviations in engines, for post-analysis. Our chart will display 24 or 48 hours of engine operation and the operator may identify peaks in temperature or pressure, or on the other measurements.
As such, it is a large amount of data to present on the chart on the web page, and we're starting to hit limitations in several places.
At 24 hours * 3600 seconds/hour * 1 data point/second, that is 86,400 data points on the chart.
This many points slows down the rendering of the web page and is a lot of data to transfer.
We want to reduce the count of data points presented on the chart, without losing much context. So I ask:
How can I drop data points without losing much precision?
What techniques are usually applied in this scenario?
A first (naive) thought was to group them in 5-second windows and only return one data point to represent the 5-second window on the chart;
should I average the points in the window?
should I consider the maximum in the window?
Are there other techniques than grouping data points in window, to reduce the loss of meaning for the monitoring?
AI: Indeed, this problem isn't as simple to deal with as it looks. There are several techniques for reducing the number of points of a time series for display, generally referred to as "downsampling"; a simple window-based sketch is shown below.
A little literature (an MSc thesis on downsampling time series for visual representation, which introduces the Largest-Triangle-Three-Buckets algorithm): https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf
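For the window-based approach from the question, pandas resampling gives both aggregates in a couple of lines; which one to keep depends on what the operator must not miss (the max preserves short peaks that a mean would flatten, so showing both, or mean plus min/max bands, is a common compromise). The column name and values below are assumptions:
import numpy as np
import pandas as pd

# Hypothetical 1 Hz oil-temperature readings over 24 hours
idx = pd.date_range("2023-01-01", periods=24 * 3600, freq="1s")
readings = pd.DataFrame({"oil_temp": np.random.normal(90, 2, len(idx))}, index=idx)

# One point per 5-second window: the mean smooths noise, the max preserves peaks
downsampled = readings["oil_temp"].resample("5s").agg(["mean", "max"])
print(len(readings), "->", len(downsampled), "points")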
hope this helps,
Cheers |
H: Extracting encoded features after CatBoost
I have a dataset containing numerical as well as categorical variables.
After I've fit my dataset with a CatBoostClassifier, I want to extract the entire feature set, with the categorical variables encoded in whatever way the classifier decided to encode them.
How can I extract the fully transformed (encoded) features? (similar to what a fit_transform method would return)
AI: I don't believe this is possible: CatBoost does target encoding per split, so you end up with different encoded values in different trees.
Before each split is selected in the tree (see Choosing the tree
structure), categorical features are transformed to numerical. This is
done using various statistics on combinations of categorical features
and combinations of categorical and numerical features.
https://catboost.ai/docs/concepts/algorithm-main-stages_cat-to-numberic.html
However, if you just want to apply the CatBoost encoding algorithm yourself, you can use the CatBoostEncoder from the category_encoders package.
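A minimal sketch of that encoder with placeholder data; note that because it runs outside the model, the resulting values will not match whatever CatBoost computed internally during training:
import pandas as pd
from category_encoders import CatBoostEncoder

X = pd.DataFrame({"city": ["Madrid", "Valencia", "Madrid", "Bilbao"],
                  "size": [10, 20, 15, 30]})
y = pd.Series([1, 0, 1, 0])

encoder = CatBoostEncoder(cols=["city"])
X_encoded = encoder.fit_transform(X, y)   # the categorical column is replaced by numeric values
print(X_encoded)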
H: Using a JSON dataset for lstm
I am a bit new to Python. I have a JSON dataset which I have to use in an LSTM with Keras. My implementation is about emotion analysis for a set of reviews using an LSTM.
I have run the code below:
a = "ive probably mentioned this before but i really do feel proud of myself for actually keeping up with my new years resolution of monthly and weekly goals"
# Encode samples
tokenized_sample = a.split(" ")
encoded_samples = [[word2id[word] for word in tokenized_sample]]
# Padding
encoded_samples = keras.preprocessing.sequence.pad_sequences(encoded_samples, maxlen=max_words)
# Make predictions
label_probs, attentions = model_with_attentions.predict(encoded_samples)
label_probs = {id2label[_id]: prob for (label, _id), prob in zip(label2id.items(), label_probs[0])}
# Get word attentions using attenion vector
token_attention_dic = {}
max_score = 0.0
min_score = 0.0
for token, attention_score in zip(tokenized_sample, attentions[0][-len(tokenized_sample):]):
token_attention_dic[token] = math.sqrt(attention_score)
print(label_probs)
where my variable 'a' is a sentence and I got my results as shown below:
{'fear': 0.19682443, 'disappointed': 0.19954187, 'joy': 0.20219588, 'happy': 0.20103683, 'sad': 0.20040102}
But I have to run this on a JSON dataset, where I have to add this output for each review in the JSON file. Can someone please guide me on how to modify the above code to get that result?
Below is an example of my Json file:
{
"name": "3",
"Date": "September 25, 2017",
"comment": "Nice Room, good entertainment - well organised\nFood was wonderful at Brasserie Chic - Seafood Platter (for 2)\nThats’ where it ends…\nStaff was woeful, very badly mannered and almost rude\nTiling works in lobby from Saturday 3pm till 11:30pm without any notice or apologies\nPolishing of tiles at 6:45am Sunday morning, again with no notice, or apology given.\nPillows smell of sweat\nPipe noises every 5 minute for 10sec in walls!\nMuch to improve on, shame as the renovated hotel looks very nice, and the food great!\nDisappointed."
},
{
"name": "2",
"Date": "June 05, 2017",
"comment": "A wonferful experience. The hotel staff went out of their way to make the stay personal and special."
},
{
"name": "1",
"Date": "September 18, 2015",
"comment": "At the desk, they don’t give any information at all when you check-in (like at what time breakfast is served, you have to guess) or promoting the hotel; communication is very scarce. It’d have been better to do your own check-in. For a 5* hotel, English is very poor except when you call over the phone. We asked a question in English at the reception desk, one of them stepped back & the other one asked if we speak French. Overall ok, very nice hotel but room standard more of a 4 star hotel ,swimming pool funny. The men carrying the Luggage, and on car park duty were friendly & the food at the Yusu restaurant is good"
}
Additional: I want my result to appear like this:
{
"name": "3",
"Date": "September 25, 2017",
"comment": "Nice Room, good entertainment - well organised\nFood was wonderful at Brasserie Chic - Seafood Platter (for 2)\nThats’ where it ends…\nStaff was woeful, very badly mannered and almost rude\nTiling works in lobby from Saturday 3pm till 11:30pm without any notice or apologies\nPolishing of tiles at 6:45am Sunday morning, again with no notice, or apology given.\nPillows smell of sweat\nPipe noises every 5 minute for 10sec in walls!\nMuch to improve on, shame as the renovated hotel looks very nice, and the food great!\nDisappointed."
"emotion": "{'fear': 0.19682443, 'disappointed': 0.19954187, 'joy': 0.20219588, 'happy': 0.20103683, 'sad': 0.20040102}"
},
{
"name": "2",
"Date": "June 05, 2017",
"comment": "A wonferful experience. The hotel staff went out of their way to make the stay personal and special."
"emotion": "{'fear': 0.19682443, 'disappointed': 0.19954187, 'joy': 0.20219588, 'happy': 0.20103683, 'sad': 0.20040102}"
},
{
"name": "1",
"Date": "September 18, 2015",
"comment": "At the desk, they don’t give any information at all when you check-in (like at what time breakfast is served, you have to guess) or promoting the hotel; communication is very scarce. It’d have been better to do your own check-in. For a 5* hotel, English is very poor except when you call over the phone. We asked a question in English at the reception desk, one of them stepped back & the other one asked if we speak French. Overall ok, very nice hotel but room standard more of a 4 star hotel ,swimming pool funny. The men carrying the Luggage, and on car park duty were friendly & the food at the Yusu restaurant is good"
"emotion": "{'fear': 0.19682443, 'disappointed': 0.19954187, 'joy': 0.20219588, 'happy': 0.20103683, 'sad': 0.20040102}"
}
Can you please help me out.
AI: I'm not sure how you are processing your JSON file right now, but pandas is a really convenient way of doing it.
To start, you can read the file with pandas, following the instructions here.
This creates a DataFrame; after you have done all your processing and added the columns you need, you can use pandas.DataFrame.to_json to convert it back to a JSON file.
You will then have a new column named "emotion", which you can fill with the label_probs values that you are currently printing.
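A rough sketch of that flow, assuming the file contains a top-level JSON array of records like the ones shown (the file names are placeholders, and predict_emotions is a stub standing in for the tokenize/pad/predict code from the question):
import pandas as pd

def predict_emotions(comment):
    # Placeholder: run the tokenize -> pad -> model_with_attentions.predict code
    # from the question on `comment` and return its label_probs dictionary.
    return {"fear": 0.2, "disappointed": 0.2, "joy": 0.2, "happy": 0.2, "sad": 0.2}

reviews = pd.read_json("reviews.json")                 # columns: name, Date, comment
reviews["emotion"] = reviews["comment"].apply(predict_emotions)
reviews.to_json("reviews_with_emotions.json", orient="records", indent=2)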
H: What does online learning mean in Topic modeling (LDA) - Gensim
I came across this line in the Gensim Documentation- Gensim LDA - "The model can also be updated with new documents for online training."
My assumption about what this means is: 'Once we have a model trained on one corpus, we can add new data and continue to train the model with it, thereby adding more vocabulary and enriching the results.' Is this correct?
Is this the same approach discussed in the paper - Online Learning for LDA ? Help me understand this technique.
AI: Yes, your intuition about what online learning means in topic modeling (LDA) is broadly correct:
"The model can also be updated with new documents for online training."
However, I would also quote the standard definition of online learning in machine learning:
It is a method in machine learning in which data becomes available in sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques, which generate the best predictor by learning on the entire training dataset at once. It is very useful in areas of machine learning where it is computationally infeasible to train over the entire dataset at once, requiring out-of-core algorithms.
You can find more about it here.
In your case (NLP), online learning would:
- Train the model further on the newly added documents.
- Update the topics/results accordingly.
All of this keeps happening incrementally as new data arrives. One caveat: in Gensim the model's dictionary is fixed when the model is created, so words that are not already in the vocabulary are ignored during an online update; to genuinely extend the vocabulary you would need to rebuild the dictionary and retrain.
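A minimal Gensim sketch of this update loop, with toy tokenised documents standing in for the real corpus:
from gensim.corpora import Dictionary
from gensim.models import LdaModel

first_batch = [["hotel", "room", "staff"], ["food", "restaurant", "staff"]]
new_batch = [["room", "food", "breakfast"], ["hotel", "pool", "breakfast"]]

dictionary = Dictionary(first_batch)
corpus = [dictionary.doc2bow(doc) for doc in first_batch]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=5)

# Online update: the new documents are converted with the existing dictionary,
# so unseen words such as "breakfast" and "pool" are simply dropped here.
new_corpus = [dictionary.doc2bow(doc) for doc in new_batch]
lda.update(new_corpus)
print(lda.print_topics())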