H: Why spectral clustering results in disjointed cluster?
I'm working on a project where I have to dynamically cluster the positions of objects with respect to one coordinate. So I'm essentially dealing with subsequent frames, and each frame represents a one-dimensional dataset. The intuition behind the clustering is to form clusters out of points that are at a similar distance to other points within the cluster and can be naturally connected. I use spectral clustering due to its ability to cluster points by their connectedness rather than their absolute location, and I use the rbf kernel due to its non-linear transformation of distance. However, in some frames the algorithm results in unnatural assignments. One example is
import numpy as np
from sklearn.cluster import SpectralClustering
X = np.array([[51.08354988], [57.10594997], [70.51259995], [76.74425011],
[61.24844971], [89.00615082], [98.55859985], [61.26575031], [88.35105019],
[87.40859985]])
clustering = SpectralClustering(n_clusters=4, random_state=42,
                                gamma=5 / (X.max() - X.min()))
clustering.fit(X)
and the result of clustering is presented in the form of a swarm plot below, so only the x coordinate matters here (each color represents a cluster and the labels are array indices):
What I cannot understand is why the points marked as red are clustered together, as the similarity between points {4, 7} and {5, 8, 9} should be really low. My first thought was that maybe this is caused by an unlucky random initialisation, but I tried many different random states and the resulting clusters seem to be persistent. So I guess this is connected to the chosen affinity measure (rbf_kernel) and its gamma parameter. As points move with each frame, and the distances between them depend somewhat on their overall range, I tried to set gamma to 5 / (X.max() - X.min()). The intuition behind this was that if the range is bigger, then the distances between points are usually bigger and we should penalise them more, to obtain values of exp(-gamma * ||x-y||^2) similar to those obtained within a smaller range. But it doesn't seem to work as expected and results in faulty clustering where the red cluster is formed out of points divided by the green cluster. My expectation would rather be clusters formed as follows: {0, 1, 4, 7}, {2, 3}, {5, 8, 9} and {6}, or {0}, {1, 4, 7}, {2, 3}, {5, 6, 8, 9}.
So my questions are:
Is affinity choice and its gamma parameter really the problem here?
If so, how can I choose gamma better?
Otherwise, what approach should I consider to deal with faulty assignments where separated points end up in the same cluster, as presented here?
(Side question) Is there any measure/index that would be suitable to automatically compare clusterings with different number of clusters?
@Edit: As can be observed under this link, those separated clusters occur only for a short period of time, but still, the problem seems to be recurrent.
AI: First, please note that spectral clustering is very sensitive to the affinity kernel. With the standard RBF kernel, my experience is that spectral clustering often isolates outliers (in the spectral space), leaving clusters with numerous observations which can be separated by great distances. This is a major difference from direct k-means: there is no notion of distance anymore, it's just about connectivity.
In the case shown in your example, the results are "strange" because of the low dimensionality: 1D is typically a case where spectral clustering does not provide "logical" results. In addition, you are quite unlucky in several respects:
The number of clusters (4): try it out with different numbers of clusters, and you will find more trivial results
The positions of observations: move #0 to the left and you may find results that make more sense
The low number of observations: #6 is not considered as an outlier, due to its vicinity to the #5, #8, #9 triplet. Indeed, the more points there are in an area, the larger its influence is. This is also how #4 and #7 can be considered part of the (#5, #6, #8, #9) cluster, rather than the (#2, #3) cluster. Try removing #9 and see how the results suddenly make sense (see the sketch after this list).
The chosen kernel (not tried on my side, but it is likely that the results may vary a lot if you increase the gamma value or change the kernel shape)
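A quick way to check these points empirically (a minimal sketch reusing the data from the question; the exact labels you get may differ slightly between sklearn versions):
import numpy as np
from sklearn.cluster import SpectralClustering

X = np.array([[51.08354988], [57.10594997], [70.51259995], [76.74425011],
              [61.24844971], [89.00615082], [98.55859985], [61.26575031],
              [88.35105019], [87.40859985]])

# vary the number of clusters and the gamma of the rbf kernel
for n_clusters in (2, 3, 4, 5):
    for gamma in (5 / (X.max() - X.min()), 0.1, 1.0):
        labels = SpectralClustering(n_clusters=n_clusters, gamma=gamma,
                                    random_state=42).fit_predict(X)
        print(n_clusters, round(gamma, 3), labels)

# remove observation #9 and re-run
X_no9 = np.delete(X, 9, axis=0)
print(SpectralClustering(n_clusters=4, random_state=42,
                         gamma=5 / (X_no9.max() - X_no9.min())).fit_predict(X_no9))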
TL;DR: your case is an example of how spectral clustering can go wrong with a low number of observations and dimensions. |
H: In datasets, why don't we represent nominal values that are part of a scale with numbers?
I am trying to pre-process a small dataset. I don't understand why I am not supposed to do the thing I explained below:
For example, say we have an attribute that describes the temperature of the weather with a set of 3 nominal values: hot, mild and cold. I understand these definitions may have been derived from numerical values during summarisation.
But why would we summarise such values that are on a scale, and lose the scale in the process?
Would it not help to have the algorithm (any classification algorithm) realise that the difference between hot and cold is twice the difference between hot and mild, by representing hot, mild and cold as the integers 1, 2 and 3 respectively?
AI: Nice and necessary question to be clear about when doing machine learning preprocessing. There are two points here to take into account:
depending on the learning algorithm, you may need to convert your categorical data into a numeric format; e.g. decision trees do not need it and can handle categorical data, whereas other algorithms, such as regressions, do need numbers as input
in case you have to convert data into numbers, you have two possibilities:
integer encoding: this is the case you are correctly describing; here you can simply replace the labels by integers while keeping the ordinal order (in fact this is better, since the algorithm can learn that the natural distance between cold and hot is bigger than between cold and mild). Nevertheless, for a decision tree algorithm it should not matter, as the values keep acting as labels.
one-hot-encoding: when the algorithm requires numbers and there is NOT an ordinal nature in the data, this is the option required to prevent non-ordinal data from being interpreted as ordinal (see the sketch below).
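As a small illustration of both options (a minimal sketch; the column name temperature is just a placeholder):
import pandas as pd

df = pd.DataFrame({"temperature": ["hot", "mild", "cold", "mild"]})

# integer (ordinal) encoding: keep the natural order cold < mild < hot
order = {"cold": 1, "mild": 2, "hot": 3}
df["temperature_ordinal"] = df["temperature"].map(order)

# one-hot encoding: one binary column per category, no order implied
df_onehot = pd.get_dummies(df["temperature"], prefix="temperature")

print(df.join(df_onehot))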
More info here |
H: Time series with additional information
Given a time series with job-submission counts, how can I predict certain features about the jobs?
I need to predict how many jobs and which jobs arrived in some system. Using pandas.groupby, I've sliced the data into 15 minutes intervals.
I can predict how many jobs arrived but also need to predict which type of job will arrive and some other features.
AI: I'm assuming the displayed time series shows number of jobs submitted per 15 minute interval.
Categorical features
Divide the time series per category. If the jobs can be divided into type1, type2, type3 then make a time series for each type and predict each series individually. So type1-time series has number of type1-jobs per 15 minute interval.
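A minimal pandas sketch of this split (assuming a DataFrame jobs with a timestamp column and a job_type column; both names are placeholders):
import pandas as pd

# count job submissions per 15-minute interval, one column per job type
counts = (jobs
          .groupby([pd.Grouper(key="timestamp", freq="15min"), "job_type"])
          .size()
          .unstack(fill_value=0))

# counts["type1"] is now the time series for type1 jobs, and so on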
Continuous features
For continuous features, e.g. time-to-do-job, you can divide the jobs into categories time00, time10, time20, time30 for jobs that take 0-9 minutes, 10-19 minutes, 20-29 minutes, etc. respectively. As before, generate a time series per division.
Depending on how much data you have and how it is distributed you can make more groups or space them differently. |
H: Will the MAE of testing data always be higher than MAE of training data?
On the Kaggle Course Page the chart below shows that the MAE of the testing data is always higher than the MAE of the training data. Why is this the case? Is it only limited to the DecisionTreeRegressor model? Or is the graph wrong, and in practice can the MAE of testing be lower than the MAE of training?
AI: Train MAE is "generally" lower than Test MAE but not always.
Now coming to your questions.
Q1 Why does this happen?
A1. Train MAE is generally lower than Test MAE because the model has already seen the training set during training. So it's easier to score high accuracy on the training set. The test set, on the other hand, is unseen, so we generally expect Test MAE to be higher, as it is more difficult to perform well on unseen data.
However, it is not always necessary for Train MAE to be lower than Test MAE. It might happen "by chance" that the test set is relatively easier (than the training set) for the model, hence leading to a lower Test MAE!
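A minimal sketch reproducing this behaviour on synthetic data (any regression dataset would do):
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for max_leaf_nodes in (5, 50, 500, 5000):
    model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0).fit(X_train, y_train)
    print(max_leaf_nodes,
          mean_absolute_error(y_train, model.predict(X_train)),   # train MAE
          mean_absolute_error(y_test, model.predict(X_test)))     # test MAE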
Q2. Is this true only for DecisionTreeRegressor?
A2. No, this plot is not specific to DecisionTreeRegressor. Notice that in my explanation I haven't made any assumption about the model!
Q3. Is the graph incorrect?
A3. No, the graph is not wrong. We speak of the general case, i.e. what we expect on average. If you were to draw a graph for only a particular/current instance of the model running, you could have Train MAE above Test MAE. |
H: How to find the various matrix sizes in designing a CNN
I am trying to understand CNNs, especially the maths and working mechanism, using Matlab as the coding language. I have a few confusions regarding the concept and the associated programming and would be immensely grateful for an intuitive answer.
Below is the structure of my CNN for 5 classes. I could calculate only the output structure of the first conv layer and am stuck on determining the number of parameters, i.e., the number of neurons.
The output for the first convolution layer that I could calculate: in the first layer an input of size [50 x 50 x 2] is convolved with a set of M_1 5 x 5 filters applied over all the input channels. The first 2D convolutional layer is composed of M_1 = 20 filters of size [5 x 5 x 1], with a stride of 1 for traversing the input vertically and horizontally, creating a feature map of size (h - f_h + 1) x (w - f_w + 1) x 1 x M_1 = (50-5+1) x (50-5+1) x 20 = [46 x 46 x 20]. So we have 20 channels.
AI: For a CNN layer with input of dimensions h * w * d, kernel size k * k and number of kernel filters f, the number of parameters is k * k * d * f if we ignore the biases. If we use biases, then the number of parameters becomes (k * k * d + 1) * f.
For example, the 1st conv layer has 5 * 5 * 2 * 20 parameters if we are ignoring the biases. With biases, the number of parameters becomes (5 * 5 * 2 + 1) * 20.
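A tiny helper that reproduces this calculation (plain Python, simply mirroring the formula above):
def conv_params(k, d, f, bias=True):
    """Parameters of a conv layer with k x k kernels, d input channels, f filters."""
    return (k * k * d + (1 if bias else 0)) * f

print(conv_params(5, 2, 20, bias=False))  # 1000
print(conv_params(5, 2, 20, bias=True))   # 1020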
Note that the number of parameters does not depend on the stride, padding, pooling or dropout, nor on the spatial dimensions of the input or output!
To find the total number of parameters in the network one needs to add the parameters of all the individual layers in the model. |
H: Which kind of model is better for keyword-set classification?
There is a similar task named text classification.
But I want to find a kind of model whose inputs are keyword sets, and the keyword sets do not come from a sentence.
For example:
input ["apple", "pear", "water melon"] --> target class "fruit"
input ["tomato", "potato"] --> target class "vegetable"
Another example:
input ["apple", "Peking", "in summer"] --> target class "Chinese fruit"
input ["tomato", "New York", "in winter"] --> target class "American vegetable"
input ["apple", "Peking", "in winter"] --> target class "Chinese fruit"
input ["tomato", "Peking", "in winter"] --> target class "Chinese vegetable"
Thank you.
AI: Use a segment embedding (an idea from BERT) on top of the original text classification model.
For example:
input ["apple", "Peking", "in summer"] += segment emb [1,2,3,3,0]
input ["tomato", "New York", "in winter"] += segment emb [1,2,2,3,3]
where 1, 2, 3 indicate something like the data-source type of each input token.
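A minimal Keras sketch of this idea (all sizes here — vocabulary size, number of segment types, sequence length, number of classes — are hypothetical placeholders):
from tensorflow.keras import layers, Model

vocab_size, n_segments, max_len, emb_dim, n_classes = 10000, 4, 8, 64, 5

tokens   = layers.Input(shape=(max_len,), dtype="int32")   # keyword token ids (padded)
segments = layers.Input(shape=(max_len,), dtype="int32")   # segment/source ids, e.g. 1, 2, 3 (0 = padding)

x = layers.Add()([layers.Embedding(vocab_size, emb_dim)(tokens),
                  layers.Embedding(n_segments, emb_dim)(segments)])  # BERT-style sum of embeddings
x = layers.GlobalAveragePooling1D()(x)                               # order-insensitive: the input is a keyword set
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = Model([tokens, segments], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])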
Another improvement: check out PCNN or PCNN+ATT |
H: looking for approaches to detecting outliers in individuals unequal sequential time series
I am looking for approaches related to outlier detection in time series.
Example:
A person visits the hospital multiple times over a period, and some measurements are made (bmi, blood_pressure, stress_level) on each occasion. Usually the stress value will be roughly the same for most individuals, but for one person it somehow increases continuously over time. So the idea is to find such persons who are dissimilar to all others over time.
Some of the literature I have already searched deals with time series of continuous data, and some of the approaches are similarity-based, where an average similarity threshold is created (DTW and DTW-Adaptive) and, with nearest-neighbour approaches, the most dissimilar series whose distance is greater than the threshold is identified. These series are usually long and consist of sensor observations.
The series which I am looking at is a short time series of discrete observations and may vary in length. I am looking for approaches that are usually suitable for them.
Could multivariate outlier detection algorithms be suitable here, or could they be adapted to account for time?
Additionally, are there any outlier models which take time into account? (As I could not find anything on this in the literature)
An example:
Patient Time bmi blood_pressure stress
1 t1 v1 v2 v3
2 t1 v1 v2 v3
3 t1 v1 v2 v3
4 t1 v1 v2 v3
5 t1 v1 v2 v3
1 t2 v1 v2 v3
2 t2 v1 v2 v3
4 t2 v1 v2 v3
3 t2 v1 v2 v3
5 t2 v1 v2 v3
Some of the multivariate approaches already looked at are:
1. https://pyod.readthedocs.io/en/latest/ - Only holds for static data.
2. Shokoohi-Yekta, M., Hu, B., Jin, H., Wang, J., & Keogh, E. (2017). Generalizing DTW to the multi-dimensional case requires an adaptive approach. Data mining and knowledge discovery, 31(1), 1-31. (Time-series based approach but uses sensors)
I also recently thought about whether I could include time itself as a feature to model outliers. Has anyone done it this way, or does anyone have any literature along these lines?
AI: Outlier detection doesn't sound like the most promising approach to me, as you have a model for the data. Some ideas you could try: use a hypothesis test to check the hypothesis that the stress values fit iid Gaussian with pre-defined standard deviation and unknown mean; use linear regression to fit a line that predicts stress as a function of time, and see if the slope is greater than zero by a statistically significant amount; etc. |
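For instance, the regression idea from the answer above could be sketched like this (a minimal sketch assuming a DataFrame df shaped like the example table, with numeric Time values; the 0.05 significance level is an arbitrary choice):
from scipy.stats import linregress

def stress_slope(group):
    # slope of stress over time, plus its p-value, for one patient
    res = linregress(group["Time"], group["stress"])
    return res.slope, res.pvalue

for patient, group in df.groupby("Patient"):
    slope, pvalue = stress_slope(group)
    if slope > 0 and pvalue < 0.05:
        print(f"Patient {patient}: stress increases over time (slope={slope:.3f}, p={pvalue:.3f})")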
H: How should I approach such classification problem where the input is an array of integers?
I am training a model for predicting a number between 0 and 10, in this case the number of roots of a polynomial. The input array for each polynomial consists of the coefficients of that polynomial from $x^n$ down to the constant. Even though I prepared a balanced dataset in which all numbers of roots (from 0 to 10) are equally represented, my input is an array of arrays of integers.
For example:
23 43 -545 34 45 -34 234 -434 234 434 -2344334
And the output of this is the number of roots of that polynomial (as I said before).
I tried many combinations of layers, but the accuracy never gets higher than 50%. By accuracy, I mean the number of correct predictions (I count the biggest probability as the prediction).
My Keras code for modeling:
from keras.models import Sequential
from keras.layers import Dense

# degree, X and y are defined elsewhere
model = Sequential()
model.add(Dense(32, input_dim=degree+1, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(degree+1, activation='softmax'))
model.compile(loss="categorical_crossentropy",
              optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=500, batch_size=50, verbose=0)
Is there something that I am doing wrong? Is this a good model for this kind of problem? I am new to deep learning, so if the answer is obvious, I am sorry.
Thanks.
AI: Your information is not discriminative enough
Why? The coefficients of a polynomial don't give (at least not fully) discriminative information about the roots of the polynomial. In other words, different coefficients can give the same roots.
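A quick numerical illustration of that last point:
import numpy as np

print(np.roots([1, 0, -1]))   # x^2 - 1   -> roots  1 and -1
print(np.roots([5, 0, -5]))   # 5x^2 - 5  -> same roots, very different coefficients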
It does not matter how complex your network is; it can't capture what is not there to be captured in the first place. |
H: Why can't RNNs learn long-term dependencies?
In Colah's blog, he explains this.
In theory, RNNs are absolutely capable of handling such “long-term
dependencies.” A human could carefully pick parameters for them to
solve toy problems of this form. Sadly, in practice, RNNs don’t seem
to be able to learn them. The problem was explored in depth by
Hochreiter (1991) [German] and Bengio, et al. (1994), who found some
pretty fundamental reasons why it might be difficult.
As a quick explanation, what are the fundamental reasons why it might be difficult?
AI: There is no explicit notion of memory (like the gates in LSTM and GRU).
Gates are a way to optionally let information through; omitting this functionality, we are just updating weights whose contribution fades away in the process (vanishing gradients), hence longer-term memory is hard to learn.
H: How to interpret dummy variable in ML prediction?
I am working on a binary classification problem where I have a mix of continuous and categorical variables.
Categorical variables were created by me using get_dummies function in pandas.
Now my questions are,
1) I see that there is a parameter called drop_first which usually is given the value True. Why do we have to do this?
Let's say, for the purpose of example, we have 2 values in the gender column, namely Male and Female. If I use drop_first=True, it returns only one column, like gender_male with binary 1 and 0 as values.
For example, if my feature importance returns gender_male as an important feature, am I right to infer that it is only the Male gender that influences the outcome (because male is denoted as 1 and female as 0) and that females (0s) don't impact the model outcome? Or that 0s in general don't play any role in ML model predictions?
2) Let's say my gender has 3 values for example Male,Female,Transgender. In this case if I use drop_first=True, it would only returns two columns
gender_male with 1 and 0 - Here 0 represents Transgender right?
gender_female with 1 and 0 - Here 0 represents Transgender right?
3) What's the disadvantage of not using drop_first=True? Is it only about the increase in the number of columns?
Can you help me with the above queries?
AI: 1) Using drop_first=True is more common in statistics and often referred to as "dummy encoding" while using drop_first=False gives you "one hot-encoding" which is more common in ML. For algorithmic approaches like Random Forests it does not make a difference. Also see "Introduction to Machine Learning with Python"; Mueller, Guido; 2016:
The one-hot encoding we use is quite similar, but not identical, to
the dummy encoding used in statistics. For simplicity, we encode
each category with a different binary feature. In statistics, it is common
to encode a categorical feature with k different possible values
into k–1 features (the last one is represented as all zeros). This is
done to simplify the analysis (more technically, this will avoid making
the data matrix rank-deficient).
However, using dummy encoding on a binary variable does not mean that a 0 has no relevance. If gender_male has high importance that does not generally say anything about the importance of gender_male==0 vs gender_male==1. It is variable importance and accordingly calculated per variable. If you, for example, use impurity based estimates in Trees it only gives you the average reduction in impurity achieved by splitting on this very variable.
Moreover, if your gender variable is binary, gender_male==1 is equivalent to gender_female==0. Therefore from a high variable importance of gender_male you cannot infer that being female (or not) is not relevant.
2) In this case gender_male==0 AND gender_female==0 means Transgender is true.
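A quick check in pandas (note that get_dummies drops the first category in sorted order, so which level becomes the all-zeros baseline depends on the ordering of your labels):
import pandas as pd

df = pd.DataFrame({"gender": ["Male", "Female", "Transgender"]})
print(pd.get_dummies(df, drop_first=False))  # one column per category (one-hot encoding)
print(pd.get_dummies(df, drop_first=True))   # one column fewer (dummy encoding); the dropped
                                             # category is the row that is all zeros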
3) see 1). For algorithmic approaches in ML there is no statistical disadvantage using one-hot-encoding. (as pointed out in the comments it might even be advantageous since tree-based models can directly split on all features when none is being dropped) |
H: Rescaling each image Individually with keras
I am a beginner working on a simple CNN to classify X-ray detector images. Due to the source intensity, all images have different max values. I want to use ImageDataGenerator to rescale those images to be in the range [0,1], but can't find a way to do that for each individual image. As far as I understand it, people usually just divide by 255 because that is the RGB max, but in my case the max could be anything between 1 and several million. Does anyone have an idea how to do this within ImageDataGenerator? Thank you for your time!
Edit:
I ran over my images and normalized all of them prior to feeding them to the generator. Inspecting them then showed that the generator had scaled them back to RGB values (I assume due to color_mode). I then used rescale with the standard 1./255 and got the expected max value of 1. However, after adding back samplewise_std_normalization, I am again getting values larger than 1. I understand why this is happening, but I'm not sure which "rule" is more important to follow: normalize to [0,1] or use samplewise_std_normalization.
AI: Normalize to $[0, 1]$ is what I'd do to ensure that the model gets expected (sanitized) inputs.
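Per-image scaling to [0, 1] can be done through ImageDataGenerator's preprocessing_function argument, which is applied to each image individually — a minimal sketch:
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def scale_per_image(img):
    # divide each image by its own maximum so every image ends up in [0, 1]
    img = img.astype("float32")
    m = img.max()
    return img / m if m > 0 else img

datagen = ImageDataGenerator(preprocessing_function=scale_per_image)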
Using the samplewise_std_normalization is something that I'd do inside the model to highlight features.
E.g. a white pixel is more important in a mostly black image than in a noisy image. |
H: Terminology - regression with one output and multiple output variables
I am trying to predict the response when the input is represented by Fourier transform. These form the features and are typically represented as a vector, $x_1,x_2,...,x_d$ where $d$ is the length of the fourier transform. Based on my understanding each such $d$ dimensional vector can be an input to a regression model. The output is $y_1$ which is a scalar real-valued number and there is another output $y_2$ denoting another scalar real-valued number. These are the dependent variables. I have $N$ number of $d$ dimensional examples of fourier transform each labelled by $y_1$ and $y_2$.
Question 1) When the task is to predict only one output response using the input Fourier transform, is the problem termed univariate regression? Is 'univariate' associated with the input's dimension (which is d>1) or the output's dimension (which is 1)?
Question 2) When the task is of jointly predicting the two response variables - $y_1$ and $y_2$ then is the problem termed multivariate regression?
AI: Question 1. Both. If you think of it in opposition to the multivariate case, then in univariate regression both the input and output variables should be 1-d.
Question 2. Multivariate regression is where more than one independent variable (predictor) and more than one dependent variable (response) are linearly related. So the input also needs to have more than one predictor.
H: XGBoost Feature Importance, Permutation Importance, and Model Evaluation Criteria
I have built an XGBoost classification model in Python on an imbalanced dataset (~1 million positive values and ~12 million negative values), where the features are binary user interaction with web page elements (e.g. did the user scroll to reviews or not) and the target is a binary retail action. My ultimate goal was not so much to achieve a model with an optimal decision rule performance as to understand which user actions/features are important in determining the positive retail action.
Now, I have read quite a bit in forums and literature about evaluating/optimizing an XGBoost model and subsequent decision rule, which I assume is required before achieving my ultimate goal. It seems that there are a lot of different ways to evaluate the decision rule part (e.g. Area Under the Precision Recall Curve, AUROC, etc) and the model (e.g. log-loss). I believe that both AUC and log-loss evaluation methods are insensitive to class balance, so I don't believe that is a concern. However, I am not quite sure which evaluation method is most appropriate in achieving my ultimate goal, and I would appreciate some guidance from someone with more experience in these matters.
Edit: I did also try permutation importance on my XGBoost model as suggested in an answer. I saw pretty similar results to XGBoost's native feature importance. Should I now trust the permutation importance, or should I try to optimize the model by some evaluation criteria and then use XGBoost's native feature importance or permutation importance? In other words, do I need to have a reasonable model by some evaluation criteria before trusting feature importance or permutation importance?
AI: So your goal is only feature importance from xgboost?
Then don't focus on evaluation metrics, but rather splitting.
I would suggest reading this. Using the default feature importance from tree-based methods can be slippery. |
H: How to assess that a cross entropy based model has converged
I have a question regarding cross entropy convergence using Stochastic Gradient Descent. I am a little bit confused about how convergence should be assessed. Should we treat the model as converged if the loss hits a minimum on a single example, or maybe on a certain number of examples? I am asking because the model might converge on a single example just by chance.
AI: Pure, hardcore Stochastic Gradient Descent (when you feed just one observation at a time) is not advisable at all. The descent of the Gradient is so noisy that after a certain minimal loss reduction it will stop learning anything. It will wander around the loss function in unpredictable ways. In this case, you're right: there's no way to assess final convergence. Even if the Gradient hits the global minimum, it will jump away from it at the following iteration.
The most powerful technique is a robust version of SGD: Mini-Batch Gradient Descent. In this case, you feed a batch of data at each training iteration (usually of size between 30 and 250, but it's up to you). Batch size is another hyperparameter, therefore you must find a trade-off between robustness to noise on one side and, on the other side, avoiding getting stuck in local minima and keeping computational times low. |
H: Is there a paper accomplishing finding physical law from observation without premade perception, using machine learning?
For example:
Isaac Newton found the law of universal gravitation just by looking at a falling apple, without any premade perception of that phenomenon. Is it possible to accomplish that kind of discovery using machine learning?
AI: You can check this paper: Discovering physical concepts with neural networks - https://arxiv.org/pdf/1807.10300.pdf
I quote one of the first sentences
Here, we present a neural network architecture that can be used to discover physical concepts from experimental data without being provided with additional prior knowledge |
H: How can I compare my regressors?
I am trying to build a regressor for a dataset which gives info about students' school performance and the probability of getting admitted to the university of their choice.
The first 5 observations look like this :
GRE_Score TOEFL_Score Uni_Rating LOR CGPA Research Chance_of_admit
0 337 118 4 4.5 9.65 1 0.92
1 324 107 4 4.5 8.87 1 0.76
2 316 104 3 3.5 8.00 1 0.72
3 322 110 3 2.5 8.67 1 0.80
4 314 103 2 3.0 8.21 0 0.65
I have built the following regressors so far: a linear regressor, a knn regressor and a recurrent neural network. (I will try a few more later.)
So far, in order to choose among my models, I used the "score" method on the test set for the first two regressors (it returns the $R^2$ for each one of them) and the "evaluate" method on the test set for the network (it returns the MeanSquaredError).
So, keeping in mind that $R^2$ and MeanSquaredError have different formulas, how can I compare my network with the other two models ??
Any help is much appreciated.
AI: If you are after a predictive model, you look for a model which performs well on the test set, and the metric of interest is the mean squared error, which indicates by how much you fail to predict $y$ on average. So don't use $R^2$; just compare all models based on their test MSE. |
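Concretely, the comparison suggested above could look like this (a sketch assuming you already have fitted models lin_reg, knn_reg and nn_model, plus a held-out X_test, y_test):
from sklearn.metrics import mean_squared_error

mse = {
    "linear":  mean_squared_error(y_test, lin_reg.predict(X_test)),
    "knn":     mean_squared_error(y_test, knn_reg.predict(X_test)),
    "network": mean_squared_error(y_test, nn_model.predict(X_test).ravel()),  # Keras predict returns a 2-D array
}
print(sorted(mse.items(), key=lambda kv: kv[1]))   # lowest test MSE wins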
H: Val_accuracy (val_acc) very low
We have a data set that is converted from signal data to video. We want to classify these images using convolution. We tried many different methods but val acc is consistently low. Training accuracy is 99% and val_acc is 40%. We need your help in this respect. Thank you
from tensorflow.keras import Input, Model, regularizers
from tensorflow.keras.layers import (Conv2D, BatchNormalization, MaxPooling2D,
                                     GlobalMaxPooling2D, Dense, Dropout, Activation)

weight_decay = 0.0005
reg = regularizers.l2(0.001)  # passing the string '0.001' directly is not a valid regularizer
input_ = Input(shape=(125, 125, 1))
# Block 1
x = Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(input_)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool1')(x)
# Block 2
x = Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool2')(x)
# Block 3
x = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(x)
x = BatchNormalization()(x)
x = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool3')(x)
# Block 4
x = Conv2D(512, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(x)
x = BatchNormalization()(x)
x = Conv2D(512, (3, 3), strides=(1, 1), activation='relu', padding='same', kernel_regularizer=reg)(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool4')(x)
x = GlobalMaxPooling2D()(x)
x = Dense(800, kernel_regularizer=reg)(x)
x = Dropout(0.5)(x)
x = BatchNormalization()(x)
x = Dense(800, kernel_regularizer=reg)(x)
x = Dropout(0.5)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dense(8)(x)
x = Activation('softmax')(x)
model = Model(inputs=input_, outputs=x)
my kernel and dataset
https://www.kaggle.com/ultrasonraporlama/video-kernel/
AI: Try adding dropout layers to your network; this should work very well to reduce the amount of overfitting. |
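For instance, dropout could also be placed after the pooling layers of the convolutional blocks in the model from the question (the rate of 0.25 is just a starting point to tune):
# e.g. in Block 1 of the model above
x = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool1')(x)
x = Dropout(0.25)(x)   # repeat after pool2, pool3 and pool4 as well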
H: Tableau: Clustering based on value-range for map coloring
Is there a possibility to cluster coloring for certain statistical ranges?
This is what I have been able to achieve so far.
AI: You can, as an example, create a binned field for the measure. The value range can be specified in the tooltip. I used a single color based continuous palette since (population) total is a continuous field.
[Edited]
Based on your comment, I tested with a fixed dimension based binned field for the continuous measure. At least in my example file, the values make more sense now with the new binned field. I see China at 16M population, India at 14M and so on.
If this solution is relevant to your use case, you will need to be careful with dimensions included/excluded in the fixed calculation. |
H: Does cross_validate in scikit-learn automatically fits and train the model?
# from the titanic dataset
import pandas as pd
X = df.drop(columns="survived")
y = df.survived
scoring = ['accuracy', 'precision', 'roc_auc', 'f1']
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LogisticRegression

def model_LR():  # logistic regression
    index = ["kfold-1", "kfold-2", "kfold-3", "kfold-4", "kfold-5"]
    s = cross_validate(LogisticRegression(), X, y, scoring=scoring, cv=5)
    s = pd.DataFrame(data=s, index=index)
    display(s)
    print("The mean scores for the above:\n", s.mean())

model_LR()
# OUTPUT :
fit_time score_time test_accuracy test_precision test_roc_auc test_f1
kfold-1 0.003998 0.006969 0.774809 0.711340 0.823673 0.700508
kfold-2 0.003990 0.005005 0.820611 0.778947 0.856481 0.758974
kfold-3 0.003003 0.003989 0.774809 0.715789 0.796667 0.697436
kfold-4 0.002992 0.003992 0.767176 0.709677 0.841852 0.683938
kfold-5 0.001994 0.003989 0.819923 0.819277 0.877081 0.743169
The mean scores for the above:
fit_time 0.003195
score_time 0.004789
test_accuracy 0.791466
test_precision 0.747006
test_roc_auc 0.839151
test_f1 0.716805
dtype: float64
My Question:
In my code above, by calling cross_validate with LogisticRegression, I get scores (such as roc_auc) as if the model has been fitted and trained.
I did not use any fit or train function.
Does that mean that cross_validate does that automatically?
Thx.
AI: Yes, you do not need to fit or train explicitly when using cross_validate(). Also see this example from the SKLearn documentation:
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_validate
from sklearn.metrics import make_scorer
from sklearn.metrics import confusion_matrix
from sklearn.svm import LinearSVC
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
lasso = linear_model.Lasso()
cv_results = cross_validate(lasso, X, y, cv=3)
Just as in your case you can already get the results with no further steps, e.g.
cv_results['test_score']
gives this:
OUT: array([0.33150734, 0.08022311, 0.03531764]) |
H: Evaluating a IR system (Precision and Recall)
I am currently studying IR systems, in particular the evaluation of IR system outputs related to a specific query, but I need some help to understand it properly.
My book states that when an IR system has to be evaluated, we need a test document collection, a set of query examples, and an assessment (relevant or not) for each query/document pair, defined by experts in the field. So we need two measures to know quantitatively whether an IR system is good: Precision and Recall.
My doubt is related to the following question: Do we use those two measures only if we are testing an IR system or not?
I'll explain: before we calculate Precision and Recall related to a specific query example (see above), we need to know how many elements belong to the relevant set, which is impossible if there isn't an assessment (relevant or not) for the query we are using. My book says we can increase Recall in a search engine by using the relevance feedback technique (query expansion and term reweighting): in this case do we assume the Recall value is unknown?
For example, every day many documents are shared on the Internet and Google can find them. So it is impossible to apply Recall and Precision to this scenario, in which information grows and there is no assessment of every new document for each specific query. It is also impossible to predict all the possible queries a user can make on a search engine.
AI: My doubt is related to the following question: Do we use those two measures only if we are testing a IR system or not?
Technically the answer is no because precision and recall are used to evaluate not only IR systems but also many other tasks. However your question seems to be specific to IR so I'll assume that it's actually about the distinction between testing and evaluation:
Testing a ML system consists in predicting the target variable for a set of instances given as input (in the case of supervised learning a "model" obtained from a previous stage of training is required as input as well). At this stage we don't know whether the predictions are correct or not.
Evaluation is the process of assessing the quality of the predictions: it's done after obtaining the predictions from the testing stage, and it requires some form of "gold standard", i.e. data which says what is the correct answer for every instance.
In IR, the testing stage happens every time the system is run to find relevant documents based on a query.
Naturally one wants at first to make sure that the system works properly and returns actually relevant documents, so the system needs to be evaluated, for instance with precision and recall using a dataset containing some queries and their relevant documents (gold standard).
Once the quality is evaluated, the goal is to use the IR system (testing) without evaluating the results every time. Of course there's no evaluation so the performance measures (precision and recall) are not used. |
H: How to explain the connection between the input layer and H1 of this CNN Architecture?
I am currently reading the paper proposed by LeCun et al. for handwritten zip code recognition. There is this figure below visualizing the CNN architecture. But I do not really understand how the connection between Layer H1 and input layer makes sense. If there are 12 kernels with size 5x5, shouldn't the layer H1 be 12x144? Or is there any downsampling taking place here too?
AI: Yes, the spatial dimensions (height and width) are reduced: the input is 16x16, H1 is 8x8 and H2 is 4x4.
Also see the first paragraph in the architecture section:
Source
In modern terms you would say that they use a stride of 2. Which reduces the spatial dimensions accordingly.
EDIT (based on your comment)
The formula for the spatial output dimension $O$ of a (square shaped) convolutional layer is the following:
$$O = \frac{I - K + 2P}S + 1$$ with $I$ being the input size, $K$ being the kernel size, $P$ the padding and $S$ the stride. Now you might think that in your example $O = \frac{16 - 5 + 2*2}2 + 1 = 8.5$ (assuming $P=2$)
But take a closer look at how it actually plays out when the 5x5 kernel of layer H1 scans the 16x16 input image with a stride of 2:
As you can see from the light grey area the required and effective padding is actually not 2 on all sides. Instead for the width or height respectively it is 2 on one side and 1 on the other side, i.e. on average $(2+1)/2=1.5$.
And if you plug that into the equation to calculate the output size it gives: $O = \frac{16 - 5 + 2*1.5}2 + 1 = 8$. Accordingly the convolutional layer H1 will have spatial dimensions of 8x8. |
H: About the maximum likelihood, when we convert the maximization problem into minimization, why we take the negative?
On page 12, we take $log$ on both side.
$\max_{\boldsymbol{w}}L(\boldsymbol{w})=\max_{\boldsymbol{w}}\displaystyle\prod_{i=1}^N p(t^{(i)}|x^{(i)};\boldsymbol{w})$
$\ell(\boldsymbol{w})=-\log L(\boldsymbol{w})$
$\ \ \ \ \ \ \ =-\displaystyle\sum_{i=1}^N \log p(t^{(i)}|x^{(i)};\boldsymbol{w})$
The $\log$ function is increasing as $\boldsymbol{w}$ increases. Why do we have to take the negative?
AI: It is common to define optimization problems as minimization problems instead of maximization. And by multiplying your target functions with $-1$ you can transform one into the other:
$$\max_{w} \log{L(w)} \Leftrightarrow \min_{w} -\log{L(w)}$$
So to maximize the log-likelihood you minimize the negative log-likelihood. Basically it just comes down to conventions in optimization theory.
Moreover, since $L(w) \in [0,1]$ its logarithm $\log{L(w)}$ will be less than or equal to $0$ (note that $\log{0}$ is not defined). Accordingly $\max_{w} \log{L(w)}$ means to maximize a negative number which is, at least to me, less intuitive than minimizing a positive number.
The more interesting part is actually the log-transformation which increases numerical stability of your calculations (since it "transforms" the multiplication to a sum and thereby reduces the risk of underflowing). |
H: 1: 10 rule in logistic regression - EPV
I have a dataset with 4712 records. Label Yes - 1558 records and Label No - 3554 records.
I read online that 1:10 rule is based on the frequency of lower occurring class.
In my case, frequency of lower occurring class is 1558
According to the 1:10 rule, am I right to understand that it is calculated as 1558/10 = 155.8, which I round down to roughly 150 predictors?
So in my logistic regression, I can use 150 variables/input features in the model without the risk of overfitting. Am I right?
By any chance, do we also have to look at the frequency of the other (higher-occurring) class to determine the number of predictors that I can use? If yes, can you tell me what has to be done to determine the predictor count?
I am aware that we could also use 1:20 or 1:50 rule. But my question is mainly on
1) Whether is there any other consideration for determining the number of predictors in logistic regression model?
2) How do people calculate min sample size required based on this?
Can someone help me with this?
AI: 1) Whether is there any other consideration for determining the number of predictors in logistic regression model?
The right number of predictors depends on your data and on your theories about data, and on that only. All these rules of thumb seem completely arbitrary to me, and they lack any scientific ground.
2) How do people calculate min sample size required based on this?
What do you mean by "min sample size"?
EDIT: Of course, the number of explanatory variables cannot be larger than the number of observations. Based on this answer, a rule of thumb is to use at least 10-20 observations for each variable. I also wish to stress one point: they must be useful variables, i.e. variables with actual explanatory power. If a variable is a linear combination of others, it won't improve the model by any means, and statistical software such as R would delete one automatically.
However, my suggestion is to try to always employ all the data available, don't just stick to a minimum threshold. |
H: Batch Normalization vs Other Normalization Techniques
In the context of neural networks, I understand that batch normalization ensures that activation at each layer of the neural net does not 'blow-up' and cause a bias in the network. However, I don't understand why it would be used as opposed to other normalization techniques such as Cosine or Weight normalization, these achieve the same goal and don't seem to be any more computationally complex.
Could someone please explain to me the advantages and disadvantages of using batch normalization vs other normalization techniques, and which contexts batch norm would be most beneficial?
AI: Cosine normalisation results from the fact that we bound the dot product, and hence decrease the variance, when we use cosine similarity or centred cosine similarity instead of the dot product in neural networks (the dot product being a quasi cornerstone of NNs).
The main benefit of cosine normalisation is that it bounds the pre-activation of a neuron within a narrower range, thus giving neurons a lower variance.
It also does not depend on any statistics of the batch or mini-batch examples, and it performs the same computation in forward propagation at training and inference time. In convolutional networks, it normalizes the neurons based on their receptive fields rather than on the same layer or the batch size.
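Concretely, instead of the usual pre-activation $\vec{w} \cdot \vec{x}$, cosine normalization (as described in the paper linked below) uses
$$net_{norm} = \cos \theta = \frac{\vec{w} \cdot \vec{x}}{\left|\vec{w}\right| \left|\vec{x}\right|}$$
which is bounded in $[-1, 1]$.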
Have a look at this paper showing an empirical comparison between the normalisations you mentioned. Cosine normalisation comes out on top. |
H: Does it make sense to expand word embeddings so that each array index is a feature input or should the embedding itself be a model input?
If you are building a DNN, say, with two layers, and you want to use embeddings as one of your feature inputs, what's the best way to input the embedding?
I'm trying to understand if I should break the embeddings up so that every array value in the embedding becomes its own input feature to the model or whether the embedding should be kept in array form.
I've been following AirBnB's model for inspiration.
I'm trying to predict a binary classification in the final layer.
AI: By breaking up the array you lose the distributed-representation point of word embeddings.
There is information in the other dimensions that is distributed in the word-embedding representation; breaking them up into separate input features makes no sense.
H: feature selection using genetic algorithm in Python?
I have a dataset of 4712 records and 60+ features working on a binary classification problem. I already tried out all the feature selection approaches like filter, embedded and wrapper but am just curious to learn and try genetic algorithm for feature selection.
The reason for choosing a genetic algorithm is that I guess it will provide me with the best model fit based on the best features.
1) I understand it might take time but would you people help me know how can I do this in Python?
2) In addition, is genetic algorithm any different or better than all other feature selection approaches discussed above? What are its disadvantages?
Is there any python packages and tutorial available on how to use this?
I see tutorials but they all about the theory of genetic algorithm
Can you help me by sharing any tutorial or package for genetic algorithm?
AI: Feature selection is a combinatorial optimization problem. And genetic algorithms is an optimization technique.
So there really isn't anything special: you just need to formulate your problem as an optimization one, and understand how genetic algorithms optimize. There are enough tutorials on this.
Whether it's better or worse you already know the answer. It depends. On the dataset, constraints etc. What I can tell you from experience is that
You can not expect it to blow your mind but they do work pretty well
They are a great ensembler, meaning results are pretty different (yet accurate) from tree-based methods, NN etc...
Finally, regarding implementation, here is a completely (maybe too much) automated library based on genetic programming (notice that the word programming here refers to optimization, not writing code). Also, it covers feature selection. |
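If the library referred to above is TPOT (the genetic-programming-based AutoML tool mentioned elsewhere in this collection), a minimal usage sketch looks roughly like this (all hyperparameter values are arbitrary):
from tpot import TPOTClassifier

tpot = TPOTClassifier(generations=5, population_size=50, scoring="f1",
                      cv=5, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)        # evolves preprocessing + feature selection + model pipelines
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")   # writes the winning pipeline as plain sklearn code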
H: Why replay memory store old states and action rather than Q-value (Deep Q-learning)
Here is the algorithm use in Google's DeepMind Atari paper
The replay memory D store transition (old_state, action performed, reward, new_state)
The old_state and the performed action a are needed to compute the Q-value of this action in this state.
But since we already compute the Q-value of action a in state old_state in order to choose a as the best action, why don't we simply store directly the Q-value ?
AI: But since we already compute the Q-value of action a in state old_state in order to choose a as the best action, why don't we simply store directly the Q-value ?
That is because calculating a relevant TD Target e.g. $R + \gamma \text{max}_{a'}Q(S', a')$ requires the current target policy estimates for action value $Q$. The action value at the time of making the step can be out of date for two reasons:
The estimates have changed due to other updates, since the experience was stored
The target policy has changed due to other updates, since the experience was stored
Storing the $Q$ value at the time the experience was made should still work to some degree, provided you don't keep the experience for so long that the values are radically different. However, it will usually be a lot less efficient, as updates will be biased towards older less accurate values.
There is a similar, but less damaging, effect from the experience replay table even with $Q$ recalculations. That is because the distribution of experience may not match what the current policy generates - something that most function approximators (e.g. neural networks used in DQN) are sensitive to. However, there are other factors in play here too, and it can be beneficial to train on a deliberately different distribution of experience - for instance prioritising experiences with larger update steps can speed learning and keeping older experiences available can reduce instances of catastrophic forgetting.
Note that if you were using an off-policy Monte Carlo method, you could store the Monte Carlo return in the experience replay table, since it does not bootstrap by using current value estimates. However, the early parts of older less relevant trajectories would stop contributing to updates in that case once the target policy had changed significantly during learning. |
H: PyTorch: How to use pytorch pretrained for single channel image
If I have to create a model in PyTorch for images having only a single channel, how can I adapt a pretrained model to this new input without compromising the pre-trained weights it has been trained with?
AI: I came across code where a user had a very innovative method to tackle this problem. Here is the small trick to convert any pre-trained network to accept 1-channel images without losing the pretrained weights.
import torch, torch.nn as nn
from torchvision import models

arch = models.resnet50(num_classes=1000, pretrained=True)
arch = list(arch.children())
w = arch[0].weight  # pretrained 3-channel conv1 weights
arch[0] = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel conv1 (padding=3 as in the original resnet)
arch[0].weight = nn.Parameter(torch.mean(w, dim=1, keepdim=True))  # channel-wise mean of the pretrained weights
arch = nn.Sequential(*arch[:-1], nn.Flatten(), arch[-1])  # flatten before the final fc so the rebuilt model runs
Basically, the weights of the first Conv2d are taken and stored in w. Afterwards, we create a new Conv2d which has 1 input channel and replace its parameters with the channel-wise mean of w.
I am pretty sure this can be achieved in Keras or TensorFlow too.
Let me know if you happen to try it. |
H: Auto ML vs Manual ML for a project
I was recently introduced to an AutoML library based on genetic programming called tpot. Thanks to @Noah Weber. I have a few questions:
1) When we have AutoML, why do people usually spend time on feature selection, preprocessing, etc.? I mean, these at least reduce the search space/feature space.
2) I mean, at least they reduce our work to some extent, and we can work from the output of the AutoML solution and tune further if required. We don't really have to do GridSearchCV by manually keying in the range of values that we might require. Right?
3) Is there any disadvantage to it? I understand it might be a black box, but for data analysis, doesn't it make things easier? Computer scientists may not prefer it. Of course, we need to have some sort of knowledge to be able to fine-tune the model, interpret the results, etc.
4) What's the advantage of doing manual ML when compared to AutoML?
5) Will it be possible for us to improve the results further once we get the output from AutoML?
Can you help me understand this?
AI: 1) Feature selection should be done by AutoML; on the other hand, preprocessing is normally done by the user in order to make sense of the data.
2) AutoML takes care of the hyper-parametrization.
3) The disadvantage that I mostly find is that it is extremely computationally expensive. And from what I have seen in Kaggle, most of the winning solutions use manual ML, not AutoML.
4) For me, one of the advantages is that sometimes it finds a good algorithm that I have not tried (or thought) and that it avoids me spending some coding time. Also, it happens to do some good ensembles of different models.
5) You can do manual ML on your side and then build an ensemble of your personal ML model and the AutoML one. This doesn't guarantee improvement, but it could boost your performance.
You can have a look at H20 AutoML, I quote the documentation, I believe it can be helpful in this case in order to have an intuition about it:
Although H2O has made it easy for non-experts to experiment with machine learning, there is still a fair bit of knowledge and background in data science that is required to produce high-performing machine learning models. Deep Neural Networks in particular are notoriously difficult for a non-expert to tune properly. In order for machine learning software to truly be accessible to non-experts, we have designed an easy-to-use interface which automates the process of training a large selection of candidate models. H2O’s AutoML can also be a helpful tool for the advanced user, by providing a simple wrapper function that performs a large number of modeling-related tasks that would typically require many lines of code, and by freeing up their time to focus on other aspects of the data science pipeline tasks such as data-preprocessing, feature engineering and model deployment.
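A minimal usage sketch of H2O AutoML (assuming a pandas DataFrame df with a target column named 'target'; all settings are arbitrary):
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.H2OFrame(df)
train["target"] = train["target"].asfactor()   # classification target

aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=1)
aml.train(y="target", training_frame=train)    # x defaults to all other columns
print(aml.leaderboard)                         # ranked models, including stacked ensembles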
You can also have a look at this blog of Bojan Tunguz where he defines what are the phases of AutoML:
Level 0: No automation. You code your own ML algorithms. From scratch. In C++.
Level 1: Use of high-level algorithm APIs. Sklearn, Keras, Pandas, H2O, XGBoost, etc.
Level 2: Automatic hyperparameter tuning and ensembling. Basic model selection.
Level 3: Automatic (technical) feature engineering and feature selection, technical data augmentation, GUI.
Level 4: Automatic domain and problem specific feature engineering, data augmentation, and data integration.
Level 5: Full ML Automation. Ability to come up with super-human strategies for solving hard ML problems without any input or guidance. Fully conversational interaction with the human user. |
H: How to select best feature set and not ranking for tree based models?
I am currently using feature selection approaches like filter, wrapper, embedded etc.
All these methods give different sets of features, and I rank the features based on their frequency of occurrence across the feature selection approaches.
Ex: If Age occurs in all 6 feature selection algorithms, then it will have a rank of 6. If gender occurs in only 2 feature selection algorithms, its rank would be 2.
So I arrange them in descending order and choose the features which have occurred in at least 3-4 feature selection algorithms.
But where I am trying to seek your help is
1) Is there any systematic way to get only a subset of features which returns the best output? I thought a genetic algorithm for feature selection would return a feature subset giving a high output, but unfortunately it lists all features with their importance. I don't wish to define a threshold myself to select a few. For example, currently XGBoost produces an F1-score of 81 with around 27-28 features, but when I manually play around it gives around 81.9 or 82.1 with just 6-7 features. So I would like a way to select these 6-7 features automatically/systematically without human intervention.
2) Is there any algorithm like RFE, but which does an exhaustive search like a genetic algorithm and finally returns the best feature set? A mix of RFE and a genetic algorithm is what I need.
3) I wish to have an exhaustive search of the feature space that finally provides me with the best feature set, i.e. the one which gives the best F1-score, which is my objective.
I am looking for a systematic/algorithmic way to do this rather than me deciding to pick features which occur in at least 3 feature selection algorithms.
Hope my question is clear. Can you help me with this?
AI: I believe that what you are looking for is Best Subset, Forward Stepwise, or Lasso?
Here is the R implementation of Best Subset.
If anyone knows a python implementation, please let me know! |
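In Python, scikit-learn (from version 0.24) ships a forward/backward stepwise selector; a minimal sketch (the estimator and the target number of features are arbitrary choices):
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=7,
                                direction="forward",   # or "backward"
                                scoring="f1", cv=5)
sfs.fit(X, y)
print(sfs.get_support())   # boolean mask of the selected features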
H: Why joint probability in generative models?
I have been reading about generative models for the past few days and there's one question that's bugging me. Many sources (Wiki, Google dev article) mention that generative models try to model the joint probability distribution p(x, y) and that, to generate new samples, we sample from this distribution. My question is: why model and sample from the joint distribution and not the conditional distribution p(x|y)?
AI: It's because the joint distribution covers all of the values the variable Y can take, while the conditional covers only one. Additionally, it could be easier to compute or use approximate methods for the joint. |
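One way to see it: the joint factorizes as
$$p(x, y) = p(x \mid y)\, p(y)$$
so sampling from the joint amounts to first drawing $y \sim p(y)$ and then $x \sim p(x \mid y)$; the conditional alone, without $p(y)$, does not tell you how to pick $y$ in the first place.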
H: R: Error when using Aggregate function to compile monthly means into yearly means
Disclaimer: I'm extremely new to R and have been getting by with using google as my professor.
I have a somewhat large collection of monthly values over a period of several years from several different locations. I am attempting to use the aggregate function to calculate the yearly means for each location so that yearly rates of change can be calculated. However, when I run the code
read_csv_filename <- function(filename){
ret <- read.csv(filename)
ret$Source <- filename #EDIT
ret
}
import.list <- ldply(filenames, read_csv_filename)
by1 <- import.list$Source
by2 <- import.list$Result
by3 <- import.list$Year
Yearly_Mean <- aggregate(import.list, by==list(by1, by2, by3), FUN= "mean")
I get an error like this
> Yearly_Mean <- aggregate(import.list, by==list(by1, by2, by3), FUN= "mean")
Error in by == list(by1, by2, by3) :
comparison (1) is possible only for atomic and list types
I've spent quite a bit of time looking here and elsewhere for similar issues, but haven't found a case that helped me out at all. Any advice on how to fix this (or a completely new, easier method) would be appreciated.
Thanks!
AI: You can use the group_by() and summarize function from the dplyr package to achieve the above easily:
library(dplyr)
import.list <- ldply(filenames, read_csv_filename)
# No need to create b1, b2 and b3
#by1 <- import.list$Source
#by2 <- import.list$Result
#by3 <- import.list$Year
# here you want to group by the data based on 'Year' as you mentioned you want the yearly mean and you want to calculate mean of 'Result' right?.
import.list %>%
group_by(Year) %>%
summarize(Mean = mean(Result, na.rm=TRUE))->Final_Output
The above will group the data by Year and calculate the mean of the Result column. |
H: How does L1 regularization make low-value features more zero than L2?
Below are the formulas for L1 and L2 regularization.
Many experts say that L1 regularization makes low-value features zero because of its constant value. However, I think that L2 regularization could also produce zero values. Could you please explain why L1 has a greater tendency to produce zero values? (It would be great if you could explain the reason using formulas, like the equations above!)
AI: The penalty on coefficients of L1 is more aggressive on values close to zero than it is for L2.
With L1, when a weight value comes closer to zero, it tends to get even closer, because of the $\epsilon\lambda$ penalty which stays constant.
With L2, the $\epsilon\lambda w^{(t)}$ term gets smaller, so the regularization gets smaller as a weight comes closer to zero, with the update depending only on $\epsilon\Delta E(w)$.
So the main argument for this to my knowledge is that
$$
\lim_{w\rightarrow0}\lambda = \lambda\\
\lim_{w\rightarrow0}\lambda w = 0
$$ |
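To make this explicit with the update rule (a sketch, using a gradient step with learning rate $\epsilon$ and regularization strength $\lambda$, matching the notation above):
$$
\frac{\partial}{\partial w}\,\lambda |w| = \lambda\,\operatorname{sign}(w)
\qquad
\frac{\partial}{\partial w}\,\tfrac{1}{2}\lambda w^{2} = \lambda w
$$
So the L1 step subtracts a constant amount $\epsilon\lambda\,\operatorname{sign}(w)$ regardless of how small $w$ already is, which can push the weight to exactly zero and keep it there, while the L2 step $\epsilon\lambda w$ shrinks in proportion to $w$ and therefore only drives the weight towards zero asymptotically.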
H: Different encoders applied to a dataset
I have a dataset which have both categorical features with high cardinality (>8000) and low cardinality (4 or 5).
Would that be ok to encode the high cardinality ones with one encoder (target encoder, for example) and the others with low cardinality with another encoder (one hot encoder) and put everything together to train a model?
Is this wrong and should apply the same encoder to all the features regardless of their cardinality?
Many thanks for your inputs!
AI: It is perfectly fine. When you encode you want to extract the most information possible. You can apply one encoding to each feature, or even two.
Also, you can encode numerical variables, or bin them and then encode the bins.
For different encoding techniques, I recommend Category Encoders.
Be aware that some encodings can decrease your model's performance, so you should still select features somehow; your model will not perform better simply because you add more encodings.
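As a minimal sketch of mixing the two encoders in one pipeline (the column names here are made up; TargetEncoder comes from the category_encoders package and the rest from scikit-learn):
import category_encoders as ce
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

high_card_cols = ["city"]      # hypothetical high-cardinality feature
low_card_cols = ["channel"]    # hypothetical low-cardinality feature

preprocess = ColumnTransformer([
    ("target_enc", ce.TargetEncoder(), high_card_cols),
    ("one_hot", OneHotEncoder(handle_unknown="ignore"), low_card_cols),
])

model = Pipeline([("prep", preprocess), ("clf", RandomForestClassifier())])
model.fit(X_train, y_train)    # the target encoder needs y during fit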
H: Which metric to choose for tracking model performance?
I am working on a binary classification problem with class proportion of 33:67.
Currently what I am doing is running multiple models like LR,SVM,RF,XgBoost for classification.
RF and Xgboost models perform better.
However I am reading online that AUC, F1-score or Accuracy comparison between models may not be a good metric (as they are sensitive) to measure the model performance.
I see these are the mostly used scoring metrics but can you let me know which metric should I look for measuring binary classification model performance?
Can you let me know why should we choose a different metric?
These are the metrics that I see in scikit-learn
AI: F1 is just based on the confusion matrix (and takes class imbalance into account), hence different models should only focus on getting the confusion matrix right, and if they don't they are simply wrong, not "sensitive".
F1Score is a metric to evaluate predictors performance using the formula
F1 = 2 * (precision * recall) / (precision + recall)
recall = TP/(TP+FN)
precision = TP/(TP+FP)
but the main thing is: |
H: Multi-label classification using output quantization
Problem statement
It's a fact that in order to train the network for multilabel-dataset, a one-hot-vector output is usually used.
Example:
dog [1 0 0]
cat [0 1 0]
rabbit [0 0 1]
Consequently, we're increasing size of the weight matrix as well as the required training time.
Question: Is there an approach that we can use one output and quantize it for different classes?
Example:
0 <= output < =1
dog [0.0 - 0.33]
cat [0.33 - 0.66]
rabbit [0.66 - 1]
AI: TL;DR
Bad idea, don't do that.
Explanation
1)
You are introducing an ordering in the classes that doesn't exist. Basically, with your example, you are saying that: "dog is close to cat", "cat is close to rabbit", but "dog is far from rabbit", which is an extra learning feature your network needs to learn.
For example, if the network outputs 0.35, then it is class cat but it is also close to dog and far from rabbit.
2)
Another problem is that some results cannot be represented.
For example, what if you changed the range to:
0 <= output < =1
cat [0.0 - 0.33]
rabbit [0.33 - 0.66]
dog [0.66 - 1]
How would you represent the same example from before ("it is class cat but it is also close to dog and far from rabbit")? It becomes impossible.
With the previous representation you could have:
output = [0.5, 0.0, 0.5] # cat, rabbit, dog
how would you map that to the new range where cat = [0.0-0.33] and dog = [0.66-1.0]? Can we pick the middle (0.5)? No, because that is class rabbit.
Conclusion
By doing that, you reduce the number of weights in the final layers (good); however, you introduce dependencies which need to be learned, and some results are not representable or interpretable.
H: How to incorporate keyboard positions on character level embeddings?
I am working with NLP and have character level embeddings.
I have embeddings learned from Wikipedia text.
Now, I want to learn embeddings from chat data (where misspellings and abbreviations are way more common). Usually, the character n doesn't follow from character b, however, during texting, this can be common because they are close together on the keyboard, and a misspelling occurs.
So, my question is: what are the strategies to incorporate character keyboard position information into a traditional character-level embedding?
Note: it can be assumed that only QWERTY keyboards exist.
AI: Character keyboard position information is an example of noisy channel model information, an error that depends on how a word is transmitted. It is very common to add noisy channel model information to spell checkers, including spell checkers that use character-level embeddings.
Most character-level embedding models would automatically learn to model common transmission mistakes. Characters that are frequently confused in the dataset would be embedded closer to each other because they frequently co-occur. There would be minimal gain by explicitly adding channel information to a character-level embedding model during training. |
H: NLP: How to group sub-field into fields?
Suppose I have a list of strings that captures a sub-field of academic research and would like to group them as higher-level fields.
For example,
'Quantum Mechanics' => 'Physics'
'Abstract Algebra' => 'Mathematics'
....
My understanding is that standard NLP techniques may not fit here, because the relationship between sub-fields and fields are linked by its meanings but not word-frequency or word-embedding etc.
I wonder if there is anything done that could be useful to tackle this problem (papers or packages)?
AI: You mentioned:
My understanding is that standard NLP techniques may not fit here, because the relationship between sub-fields and fields are linked by its meanings but not word-frequency or word-embedding etc.
However, your understanding is not totally correct, because word embeddings do convey meaning in them and could be used in your case.
Here is an example, given a list of countries, you can figure out their capitals in the vector space. Even though they are linked by the geographical location.
You would for example be able to do the following: Rome - Italy + France and you would get Paris.
So, you could create your own word-embeddings where Physics - Quantum Mechanics + Abstract Algebra = Mathematics. The only things you would need is a seed relationship (e.g. Quantum Mechanics - Physics), then all the other relationships would be a simple displacement in the vector space, which you can figure out by subtracting and adding the words. |
H: ML Project - Achieve 2 Objectives
I have a dataset with 5K records focused on binary classification. I am posting it here to seek your suggestions on project methodology
Currently what is my objective is
1) Run statsmodel logistic regression to find risk factors that influence the outcome
2) Then build a predictive model based on best features (may or may not include risk factors). because as you may know not all significant variables are good predictors.
Though I can use scikit-learn logistic regression to build a predictive model but I am planning to go with Xgboost because it provides better performance in my dataset (non-linear data slightly imbalanced)
I do step one because I have to find what are the risk factors that influence the outcome, so I am doing it. (ex: risk factors that influence customer to default in loan repayment) You know where we get p-values and find significant risk factors.
In 2nd step, I build predictive model because I realized through running the built model that not all risk factors are good predictors. So in the end, I include new set of features that help in better prediction along with risk factors
Do you think that I am right in having/approaching this as two objectives problem?
Do you think what I am doing is redundant or am proceeding in the right direction?
Do you think there is no reason to use 2 algorithms separately?
Do you have any suggestions or tips to make it easy to achieve my objective?
AI: Xgboost does the feature selection for you. If you want to report how valuable certain features are for prediction, print the feature importances. Those will, however, just tell you "feature $x_1$ is very important for predicting the outcome, feature $x_2$ is nearly useless for predicting the outcome, etc.".
To get risk factors with p-values, you need a more interpretable model. You have to use linear classification, for example. Then you can make statements like "high $x_1$ correlates with a positive outcome".
If you really want to only use a subset of features, train an Xgboost model on a validation dataset and drop features that have low importance. Then run an Xgboost model with the remaining features on the remaining training set.
You need to think about what you want: Do you want to explain observations and extract explicit knowledge from data and try to find possible causal links? Use linear methods. Do you want to predict an outcome for new patient data? Use gradient boosting. You can obviously do both. |
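A rough sketch of doing both with the same data (assuming X is a pandas DataFrame of features and y the binary outcome):
import statsmodels.api as sm
from xgboost import XGBClassifier

# 1) interpretable model: coefficients, p-values, confidence intervals
logit_res = sm.Logit(y, sm.add_constant(X)).fit()
print(logit_res.summary())

# 2) predictive model: feature importances
xgb = XGBClassifier().fit(X, y)
for name, importance in zip(X.columns, xgb.feature_importances_):
    print(name, importance)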
H: Is there any difference between a weak learner and a weak classifier?
While reading about decision tree ensembles Gradient Boosting, AdaBoost etc.
I have found the following two concepts weak learner and weak classifier.
Are they the same?
If there is any difference what is it?
AI: A weak learner can be either a classification or a regression algorithm:
Boosting (Schapire and Freund 2012) is a greedy algorithm for fitting adaptive basis-function models of the form in Equation 16.3, where the $\phi_m$ are generated by an algorithm called a weak learner or a base learner. The algorithm works by applying the weak learner sequentially to weighted versions of the data, where more weight is given to examples that were misclassified by earlier rounds.
This weak learner can be any classification or regression algorithm, but it is common to use a CART model.
Source: "Machine Learning - A Probabilistic Perspective"; Murphy; 2012
So a weak classifier is simply a weak learner which is a classifier. |
H: Best clustering algorithm to identify clusters and determine the closest cluster each individual response is near?
I have a survey where each question is related to a different 'shopper' type (there are 5 types so 5 questions). Each question is either binary (True/False) or scale based.
IE:
1. Do you like to shop at our physical location store ? (True/False)
2. Do our discounts entice you to shop more? a. no b. maybe c. yes
For each response I convert the answer choice to a numerical value. So True becomes 1, answer choice 2C becomes 3 etc.
At this point, I am clueless as to what clustering algorithm to use so that I can create clusters for each of the 'shopper' types, measure each individual survey response submitted to determine the single cluster closest to the responses given, and label the response with that cluster.
IE. This individual that submitted the survey response is the 'location-conscious shopper' type
Open to any new method of analysis not just clustering
AI: Since they are categorical variables, I would cluster them using the k-medoids clustering method. Before applying this method, one-hot encode all the predictors.
See a tutorial here:
https://towardsdatascience.com/k-medoids-clustering-on-iris-data-set-1931bf781e05
Sklearn has an implementation:
https://scikit-learn-extra.readthedocs.io/en/latest/generated/sklearn_extra.cluster.KMedoids.html |
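A minimal sketch of that pipeline (the survey_df name and its columns are made up; KMedoids comes from the scikit-learn-extra package linked above):
import pandas as pd
from sklearn_extra.cluster import KMedoids

encoded = pd.get_dummies(survey_df.astype(str))   # one-hot encode all answers

kmed = KMedoids(n_clusters=5, metric="manhattan", random_state=0)
labels = kmed.fit_predict(encoded)                # one cluster id per respondent

survey_df["shopper_type"] = labels                # map the ids to named types afterwards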
H: Why do we have to run the make.sh file initially in darknet and YOLO object detection?
I have seen a couple of text and object detection algorithms where the first step everyone does is to install Cython and run a make.sh file. Why do we have to run the make.sh file initially in darknet/YOLO object detection?
Below are the 2 links I am referring to for building the YOLO algorithm using darknet, and I am not able to understand why Cython is needed and why it is necessary to run make as the first step to build the YOLO model.
How to compile on Linux (using cmake)
How to compile on Linux (using make)
AI: I know what Cython and make are (but I have never used YOLO!)
Cython is a C extension for Python. It allows you to write C/C++ code in a Python script (used for very fast program execution).
make is a command which executes your makefile. You can consider the makefile a build script that creates/tunes the necessary things like the environment, folders, etc.
H: What to choose: an overfit model with higher evaluation score or a non-overfit model with lower one?
For lack of a better term, overfit here means a higher discrepancy between train and validation score and non-overfit means a lower discrepancy.
This "dilemma" just showed in neural network model I've recently working on. I trained the network with 10-fold cross-validation and got overfitted model (0.118 score difference):
0.967 accuracy for training set and
0.849 for validation set.
Then, I applied dropout layer with dropout rate of 0.3 after each hidden layer and got "less overfitted" model (0.057 score difference):
0.875 accuracy for training set and
0.818 for validation set
which is supposedly good since have lower discrepancy thus have better reliability for unknown data. The problem is, it has lower validation set score. My uninformed intuition says that no matter how overfitted your model is, validation set score is what matters because it indicates how well your model sees new data, so I choose the first model.
Is that a right intuition? How to go for this situation?
AI: TLDR: I think you can do that as long as you understand why this is happening.
I think first you should be really sure your validation set is not in any way polluted by your training data. This can sometimes happen very indirectly; in that case you would still be at risk. Otherwise there is nothing fundamentally wrong with using an overtrained predictor that still generalizes well enough.
Think about examples like the Titanic dataset. It's pretty small, so it's not hard to learn all survivors in your training sample while still getting the general trend right.
Another point you should consider is how big your samples are. If they are small (maybe a few hundred datapoints) you could also observe statistical noise that can be quite large. |
H: Can colored images have more than 3 channel values?
I was reading this well-known paper and noticed something in figure 1 below:
It says in the caption (The number of channels is denoted on top of the box). You can see that the number of channels is ranging from 1 up to 1024. I am confused here because it is known that the number of channels in colored images are 3 (R,G,B). Did I misunderstand something here?
Thank you.
AI: Here it's not with respect to the number of channels in an image, it's related to the number of out_channels you get after you have applied Conv operations.
in_channels is the number of channels in the input (generally for an image it's either 1 or 3 depending on the image data; for video or other inputs it can be 4 or more, etc.)
out_channels Number of channels produced after the Conv operation on the given input. |
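A tiny PyTorch sketch of how a convolution changes the channel count (the shapes are arbitrary):
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)   # one RGB image: 3 input channels
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
print(conv(x).shape)              # torch.Size([1, 64, 224, 224]): now 64 channels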
H: Threshold to consider new feature as a new finding to a model?
I am working on binary classification problem with 5K records and 60 features.
Through feature selection, I narrowed it down to 14 features.
In existing literature, I see that there are well-known 5 features.
I started my project with an aim to find new feature that can help improve the predictive power of the model
However, I see that with well known features (reported in literature), it produces an AUC of 84-85 and having all my 14 features decreases it to 82-83.
So I tried manual add and drop and found out that if I add only one feature (let's say magic feature), it increases the AUC to 85-86.
I see that there is a difference of 1 point in AUC.
1) Is it even useful to be happy that this adds some info to the model?
2) Or me looking at AUC is not the right way to measure model performance?
3) Does it mean the other new features (9 out of 14) that I selected based on different feature selection/ genetic algorithm aren't that useful? Because my genetic algorithm returned 14 features, so I was assuming that was the best subset but still through my previous experiments I know that model had better performance when it had 5 features. Any suggestions here? What can I do?
4) I am currently using train and test split as my training and testing data. I applied 10 fold cv to my data. Should I be doing anything different here?
5) If I add around 16-17 features, I see the AUC is increased to 87, but this can't be overfitting, right? Because if it's overfitting, shouldn't I be seeing the AUC as 97-100 or just 100? I know we have Occam's razor principle to keep the model parsimonious, but in this case just having 16-17 features in the model is not too complex or heavy. Am I right? Because it's increasing the AUC. Any suggestions on this?
AI: A lot of questions here. Here are some thoughts.
should you be happy about a 1 point increase in AUC? Yes. An effect can be genuine, but small. A 1 point advantage is still an improvement. But do I trust that outcome? Not sure.
you need some more data. Your sample size is not large. Furthermore, cross-validation is a wonderful thing, but you have been running a lot of tests on the same small data set, so cross-validation notwithstanding -- we really don't know how your classifier will perform on brand new, unseen data.
point 3. This sounds like an issue with over-fitting.
point 4. I'm not sure what you are doing here. Cross-validation is an alternative to "train" and "test" sets, since each fold acts as the "test" set to the model fit on the "Non-fold" part of the data. Did you mean to say that you were doing CV on the train part, and then used the "test" part for a final check on performance? That would be a good thing to do, but if you have been using the same test set to check every model you have tried, then that test set is beginning to look like a training set. You will need another "test" set.
point 5. You don't need a ridiculously high AUC to be guilty of over-fitting. Over-fitting happens when you fit features to the noise in your data set. An over-fit model will have a higher mean squared error on a fresh test set than the optimal model, although it will do better on the training set. Having 16 to 18 features is too many if, in fact, the outcome is well explained with only 5.
Given that you have reduced your candidate features to 14, you now have a much smaller problem, and it should be possible to examine the features from a subject matter perspective. Which features would a SME recommend you to retain? And you can also examine correlations between the 14 features and check for redundancy that way. With a small data problem (which is what this is), you can work to understand your data directly. That approach might yield some interesting insights.
You might want to use a different method of model selection than maximal AUC. This paper discusses the merits of AUC and offers an alternative. Could be interesting. |
H: How neural style transfer work in pytorch?
I am using this pytorch script to learn and understand neural style transfer. I understood most of the code but am having a hard time understanding some parts of it.
In line 15 it's not clear to me how model_activations works. I made a sample style tensor of the shape style.shape -> torch.Size([3, 300, 374]) and tried this sample code first without the layers dict.
x = style
x = x.unsqueeze(0)
for name,layer in model._modules.items():
x = layer(x)
print(x.shape)
Output:
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 300, 374])
torch.Size([1, 64, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 150, 187])
torch.Size([1, 128, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 75, 93])
torch.Size([1, 256, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 18, 23])
torch.Size([1, 512, 9, 11])
and then with layers
layers = {
'0' : 'conv1_1',
'5' : 'conv2_1',
'10': 'conv3_1',
'19': 'conv4_1',
'21': 'conv4_2',
'28': 'conv5_1'
}
x = style
x = x.unsqueeze(0)
for name,layer in model._modules.items():
if name in layers:
x = layer(x)
print(x.shape)
Output:
torch.Size([1, 64, 300, 374])
torch.Size([1, 128, 300, 374])
torch.Size([1, 256, 300, 374])
torch.Size([1, 512, 300, 374])
torch.Size([1, 512, 300, 374])
torch.Size([1, 512, 300, 374])
My question is how the second method maintained the height and width of the style tensor (300, 374)?
The second confusion is in the line 91 where optimizer was set as optimizer = torch.optim.Adam([target],lr=0.007). In most pytorch tutorials I have seen them doing some thing like this optimizer = torch.optim.Adam(model.parameters(), lr = 0.01)
Why is the optimizer initialization in neural style transfer different from other neural network tutorials?
What is the reason behind this: optimizer = torch.optim.Adam([target],lr=0.007)?
AI: Question 1
My question is how the second method maintained the height and width of the style tensor (300, 374)?
Applying the second piece of code does not call all layers but only the ones listed in the layers dict. In contrast to that the first method runs through all layers. Now, if you look at the network architecture:
IN: model
OUT:
Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(17): ReLU(inplace=True)
(18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(24): ReLU(inplace=True)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): ReLU(inplace=True)
(27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(31): ReLU(inplace=True)
(32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(33): ReLU(inplace=True)
(34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(35): ReLU(inplace=True)
(36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
You can see that in fact all max pooling layers are being skipped with the second method. Accordingly, your input does not get pooled, i.e. height and width remain unchanged. (the CNN layers in this network do not change height and width but only the channels)
If you slightly change the second piece of code to:
x = style
x = x.unsqueeze(0)
for name,layer in model._modules.items():
x = layer(x)
if name in layers:
print(x.shape)
It runs through all layers again and gives:
torch.Size([1, 64, 300, 374])
torch.Size([1, 128, 150, 187])
torch.Size([1, 256, 75, 93])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 37, 46])
torch.Size([1, 512, 18, 23])
Question 2
Why is the optimizer initialization in neural style transfer different from other neural network tutorials?
In image style transfer learning you do not optimize the weights (model parameters) but the target image. Since you would like the target image to resemble your content and style pictures as close as possible. All weights (model parameters) are being kept constant. Therefore, the optimizer needs to receive the target image as an input and provide updates for it (and not for your model parameters like usually).
Question 3
What is the reason behind this: optimizer = torch.optim.Adam([target],lr=0.007)?
See Question 2. The target is initialized in line 69. Typically, you use the content image as a start:
target = content.clone().requires_grad_(True).to(device)
As you can see it also activates the gradient for the target.
And this target image is then being optimized in order to minimize the loss
total_loss = content_wt*content_loss + style_wt*style_loss.
A good read on the general approach is Image Style Transfer Using Convolutional Neural Networks. |
H: What is a manifold for Unsupervised Learning?
I've been watching Dr. G. Hinton lectures on Neural Networks in Machine Learning, and in one of the lectures he explains what the goals of Unsupervised Learning are.
I am having trouble understanding the part where high-dimensional inputs such as images live on or near a low-dimensional manifold (or several such manifolds). What is a manifold exactly, and why is this the case?
Thanks!
AI: It depends who you ask, but generally a manifold is just some structure in a (high-)dimensional space having finite dimension: a line, a curve, a plane, a rock, a sphere, a ball, a cylinder, a torus, a "blob"... something like this:
If you ask a mathematician
they would say that it is a general term describing "a curve" (dimension 1) or "surface" (dimension 2), or a 3D object (dimension 3)... for any possible finite dimension $n$. A one dimensional manifold is simply a curve (line, circle...). A two dimensional manifold is simply a surface (plane, sphere, torus, cylinder...). A three dimensional manifold is a "full object" (ball, full cube, the 3D space around us...).
TO ANSWER YOUR QUESTION: what he is referring to is that the information in this high-dimensional space can be compressed and preserved in some smaller space (remember, images, when quantified, live in some space that contains information about them).
H: Maximize one data point
I am completely new to data science and looking to narrow down the search and reduce the learning curve required to solve problems like the one given below
I have a data set with 7 columns ,
Column A(all positive decimal) is the data point I want to maximize.
Column B and C are boolean values
remaining columns are a combination of positive and negative decimal numbers.
I want to find some relations and insights from all columns such that I can maximize the sum of column A.
AI: In R you can run a linear regression. Consider this "academic" minimal example:
df = data.frame(c(3,5,2,7,5,3), c(1,0,1,0,1,0), c(0,1,1,0,1,0))
colnames(df) = c("A", "B", "C")
df
Take this data as an example:
A B C
1 3 1 0
2 5 0 1
3 2 1 1
4 7 0 0
5 5 1 1
6 3 0 0
Now we can see how B and C describe A in the best way.
reg = lm(A~B+C, data=df)
summary(reg)
Output:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.917 1.322 3.719 0.0338 *
factor(B)1 -1.750 1.774 -0.987 0.3966
factor(C)1 0.250 1.774 0.141 0.8968
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.048 on 3 degrees of freedom
Multiple R-squared: 0.2525, Adjusted R-squared: -0.2459
F-statistic: 0.5066 on 2 and 3 DF, p-value: 0.6463
This tells us that when B and C are 0, A=4.917; if B=1 we would have A=4.917-1.750, and if C=1 we would have A=4.917+0.25.
So, we can also make predictions:
predict(reg, newdata=df)
Which would be in this case:
1 2 3 4 5 6
3.166667 5.166667 3.416667 4.916667 3.416667 4.916667
This is a simple form of ML (linear regression), where the sum of squared residuals is minimized in order to find the coefficients for the intercept as well as B and C which best describe A.
You would write this model like: $A = \beta_0 + \beta_1 B + \beta_2 C + u$, where $u$ is the statistical error term. You would solve this model by minimizing $\sum u^2$ (the sum of squared residuals).
In matrix algebra you could write $y = X\beta + u$, and you would solve this by $\hat{\beta} = (X'X)^{-1}X'y$.
So we do not "maximise" but minimize the statistical error $u$ in order to find the best "fit" for columns B, C given column A.
Have a look at the great book "Introduction to Statistical Learning" to get the main concepts sorted. |
H: Machine learning dataframe dimension concept vs NumPy dimension
From Machine Learning for Absolute Beginners: A Plain English Introduction:
Contained in each column is a feature. A feature is also known as variable, a dimension or an attribute - but they all mean the same thing.
From here (the supplement file for this book):
In NumPy, each dimension is called an axis.
The number of axes is called the rank.
For example, the above 3x4 matrix is an array of rank 2 (it is 2-dimensional).
The first axis has length 3, the second has length 4.
An array's list of axis lengths is called the shape of the array.
For example, the above matrix's shape is (3, 4).
The rank is equal to the shape's length.
The size of an array is the total number of elements, which is the product of all axis lengths (eg. 3*4=12)
Question: Is the dataframe dimension completely different not related to the NumPy dimension (just same word but describing different concept)?
I am learning Python and machine learning but am familiar with R and R dataframes from a statistical perspective.
AI: The dataframe case refers to the linear algebraic notion of a dimension. In the NumPy context, it just means the number of axes or rank. |
H: Why and how to match variables in logistic regression?
I have a dataset of ~4.7K records focused on binary classification with 60 features. class 1 is of 1554 records and class 2 is of 3558 records.
Now I would like to find the risk factors that influences the outcome which is disease present or not. This is a supervised learning problem
I understand that people do matching to ensure that both the classes have similar distribution, so that the comparison results are reliable.
1) I see people usually do matching based on demographics like Age etc. Is it to infer what factors really influence the outcome if we keep Age constant. Am I right to understand this way?
2) If I put all the variables in logistic regression model, doesn't that account for confounding? Why do I have to do matching?
3) Out of 60 features, I would like to do matching based on 4 variables. How do I do this for my full dataset? Is there any python package to do this?
Can someone help me on how to do this?
AI: It appears (if I understand you correctly) that you want to model the causal effect of some confounders $X$ on some outcome $y$ by using Logit. If this is correct, and if you know the outcome $y$, you should be fine with using just Logit, because in this case (and if you are able to contol for all relevant confounders $X$), you can identify the marginal effect of $x$ on $y$.
I guess you refer to "Propensity Score Matching" (PSM) in your question. This technique is used to "predict" cases in which you do not know for sure if some $i$ would have received treatment ($y=1$ or $y=0$) and you try to "predict" this outcome. In other words: The propensity score is the conditional (predicted) probability of receiving treatment given some $x$.
However, in case you observe $y$, and in case you don't have reasons to believe that there is a bias in $y$, you should be fine with a normal Logit. Here it would be really important for you to clarify what your problem actually is (this is not clear from your question). So if treatment ($y$) is non-random, you may think about "correcting" this bias using PSM. If not, you can go with normal Logit.
PSM is used in econometrics and related fields. So there is no off the shelf Python implementation to my best knowledge. However, you may have a look at this post. Or if you don't mind using R, have a look at this.
Anyway: PSM has its pros and cons. It also is a little outdated since matching has made huge progress in recent years. So the most important thing for you to do is - as it appears to me - to get a good idea if you really need to "adjust" the treated/non-treated $y$ (so if you have reasons to believe that the data generating process is biased or non-random).
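If you do decide to try propensity score matching, a rough Python sketch (not a polished implementation) could look like the following, where X holds the confounders used for matching and treat is the 0/1 group indicator:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

ps_model = LogisticRegression(max_iter=1000).fit(X, treat)
propensity = ps_model.predict_proba(X)[:, 1]      # P(treat = 1 | X)

treated_idx = np.where(treat == 1)[0]
control_idx = np.where(treat == 0)[0]

# 1-nearest-neighbour matching on the propensity score
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(propensity[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]   # one matched control per treated unit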
H: TensorFlow Sigmoid activation function as output layer - value interpretation
My TensorFlow model has the following structure. It aims to solve a binary classification problem where the labels are either 0 or 1. The output layer uses a sigmoid activation function with 1 output.
model = keras.Sequential([
layers.Dense(10, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(1, activation='sigmoid')
])
optimizer = 'adam'
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall(), tf.keras.metrics.Accuracy()])
The output given is an array of dtype=float32 numbers that lie between 0 and 1.
array([[9.5879245e-01],
[3.6847022e-01],
[3.4174323e-04],
...,
[2.6283860e-03],
[3.2045375e-04],
[1.0798702e-03]], dtype=float32)
The Tensorflow tutorials state that
"Using the sigmoid activation function, this value is a float between 0 and 1 and represents a probability, or confidence level".
https://www.tensorflow.org/tutorials/keras/text_classification
My question is:
Do I interpret the float values from output as:
"How likely it belongs to the first class label - in this case class 0 is my first class label?" e.g. model.predict() yields 0.99998 and therefore has a 99% chance of belonging to my first class label (class 0) and 1% belonging to the other class (class 1).
or
"The closer the output from model.predict() is to 0.0 the more likely it is class 0 and the closer the output is to 1.0 the more likely it is class 1"
AI: In short, value of model.predict() function is interpreted as mentioned in option 2.
In order to clarify, let's assume we are talking about spam detection application. Label 0 represents that text/email is not spam and label 1 represents that text/email is spam.
Suppose, after running the function model.predict(), we get value 0.9899. Then we can interpret it as:
chance of given text/email being spam is 98.99%
chance of given text/email not being spam is 1.01%
Usually, when we talk about a binary classification problem, we will often use labels 0 & 1 to represent the classes. So the predicted value being closer to 0 means it is more likely to fall under class 0 (in our example class 0 represents the text/email not being spam). Likewise, the predicted value being closer to 1 means it is more likely to fall under class 1 (in our example class 1 represents the text/email being spam).
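In code, turning the probabilities into class labels is just a thresholding step (a minimal sketch, assuming the usual 0.5 cut-off):
import numpy as np

probs = model.predict(X_test)                # floats in [0, 1] = P(class 1)
preds = (probs >= 0.5).astype(int).ravel()   # 0 -> class 0, 1 -> class 1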
H: How to use random forest model to new data?
I am new to this Data Science field. I have a question to apply Random forest to new data.
I have this table.
Y prop_A prop_B
A 0.8 0.2
A 0.7 0.3
B 0.5 0.5
B 0.4 0.6
B 0.1 0.9
I assumed that if the proportion of the group is high, chances are high that it is in the group. I built a model using random forest and test it with validation set (8/2 splits).
I thought the above model can be used for new data. This is an example of the data. The data structure and variable meaning is same, but the number of variable is different.
Y prop_C prop_D prop_E prop_F
- 0.8 0.1 0.05 0.05
- 0.6 0.3 0.05 0.05
- 0.5 0.4 0.05 0.05
- 0.4 0.2 0.4 0
- 0.1 0.5 0.4 0.4
The new data is unlabeled so I would like to make a label using the Random forest I used with previous data. Is it right approach to label the new data?
In the model, it doesn't work (due to the different independent variables).
How should I do to label the new data based on a model using labelled data, which is different?
AI: Yes, you can do that. However, how accurate your new labels are depends on the ability of your model to generalize to new data and the similarity of the new data to your training data. Therefore, it is something you need to test for in the first place by using a separate test dataset to assess model performance. In contrast to that, a validation dataset is used for model selection and hyperparameter tuning (e.g. max depth of your trees).
An in-depth coverage of the topic you can find in chapter 7 of "The Elements of statistical Learning" by Hastie et al.
With regards to your second question: If your training data does not contain the variables prop_C and prop_D then your model cannot make use of them. So either include these in the training data or ignore them in the new data. |
H: Multilingual Bert sentence vector captures language used more than meaning - working as interned?
Playing around with BERT, I downloaded the Huggingface Multilingual Bert and entered three sentences, saving their sentence vectors (the embedding of [CLS]), then translated them via Google Translate, passed them through the model and saved their sentence vectors.
I then compared the results using cosine similarity.
I was surprised to see that each sentence vector was pretty far from the one generated from the sentence translated from it (0.15-0.27 cosine distance) while different sentences from the same language were quite close indeed (0.02-0.04 cosine distance).
So instead of having sentences of similar meaning (but different languages) grouped together (in 768 dimensional space ;) ), dissimilar sentences of the same language are closer.
To my understanding the whole point of Multilingual Bert is inter-language transfer learning - for example training a model (say, and FC net) on representations in one language and having that model be readily used in other languages.
How can that work if sentences (of different languages) of the exact meaning are mapped to be more apart than dissimilar sentences of the same language?
My code:
import torch
import transformers
from transformers import AutoModel,AutoTokenizer
bert_name="bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(bert_name)
MBERT = AutoModel.from_pretrained(bert_name)
#Some silly sentences
eng1='A cat jumped from the trees and startled the tourists'
e=tokenizer.encode(eng1, add_special_tokens=True)
ans_eng1=MBERT(torch.tensor([e]))
eng2='A small snake whispered secrets to large cats'
t=tokenizer.tokenize(eng2)
e=tokenizer.encode(eng2, add_special_tokens=True)
ans_eng2=MBERT(torch.tensor([e]))
eng3='A tiger sprinted from the bushes and frightened the guests'
e=tokenizer.encode(eng3, add_special_tokens=True)
ans_eng3=MBERT(torch.tensor([e]))
# Translated to Hebrew with Google Translate
heb1='חתול קפץ מהעץ והבהיל את התיירים'
e=tokenizer.encode(heb1, add_special_tokens=True)
ans_heb1=MBERT(torch.tensor([e]))
heb2='נחש קטן לחש סודות לחתולים גדולים'
e=tokenizer.encode(heb2, add_special_tokens=True)
ans_heb2=MBERT(torch.tensor([e]))
heb3='נמר רץ מהשיחים והפחיד את האורחים'
e=tokenizer.encode(heb3, add_special_tokens=True)
ans_heb3=MBERT(torch.tensor([e]))
from scipy import spatial
import numpy as np
# Compare Sentence Embeddings
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_heb1[1].data.numpy())
print ('Eng1-Heb1 - Translated sentences',result)
result = spatial.distance.cosine(ans_eng2[1].data.numpy(), ans_heb2[1].data.numpy())
print ('Eng2-Heb2 - Translated sentences',result)
result = spatial.distance.cosine(ans_eng3[1].data.numpy(), ans_heb3[1].data.numpy())
print ('Eng3-Heb3 - Translated sentences',result)
print ("\n---\n")
result = spatial.distance.cosine(ans_heb1[1].data.numpy(), ans_heb2[1].data.numpy())
print ('Heb1-Heb2 - Different sentences',result)
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng2[1].data.numpy())
print ('Heb1-Heb3 - Similiar sentences',result)
print ("\n---\n")
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng2[1].data.numpy())
print ('Eng1-Eng2 - Different sentences',result)
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng3[1].data.numpy())
print ('Eng1-Eng3 - Similiar sentences',result)
#Output:
"""
Eng1-Heb1 - Translated sentences 0.2074061632156372
Eng2-Heb2 - Translated sentences 0.15557605028152466
Eng3-Heb3 - Translated sentences 0.275478720664978
---
Heb1-Heb2 - Different sentences 0.044616520404815674
Heb1-Heb3 - Similar sentences 0.027982771396636963
---
Eng1-Eng2 - Different sentences 0.027982771396636963
Eng1-Eng3 - Similar sentences 0.024596810340881348
"""
P.S.
At least the Heb1 was closer to Heb3 than to Heb2.
This was also observed for the English equivalents, but less so.
N.B.
Originally asked on Stack Overflow, here
AI: Sadly, I don't think that Multilingual BERT is the magic bullet that you hoped for.
As you can see in Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT (Wu and Dredze, 2019), the mBERT was not trained with any explicit cross-lingual task (for example, predicting a sentence from one language given a sentence from another language).
Rather, it was trained using sentences from Wikipedia in multiple languages, forcing the network to account for multiple languages but not to make the connections between them.
In other words, the model is trained with predicting a masked instance of 'cat' as 'cat' given the rest of the (unmasked) sentence, and predicting a foreign word meaning 'cat' in a masked space given a sentence in that language. This setup does not push the model towards making the connection.
You might want to have a look in Facebook's LASER, which was explicitly trained to match sentences from different languages.
P.S
The fact that the sentences do not have the same representation does not mean that the mBERT cannot be used for zero-shot transfer learning across languages. Again, please see Wu and Dredze |
H: TensorFlow / Keras: What is stateful = True in LSTM layers?
Could you elaborate on this argument? I found the brief explanation from the docs unsatisfying:
stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
Also, when stateful = True is to be chosen? What are practical cases of its use?
AI: This flag is used to have truncated back-propagation through time: the gradient is propagated through the hidden states of the LSTM across the time dimension in the batch and then, in the next batch, the last hidden states are used as input states for the LSTM.
This allows the LSTM to use longer context at training time while constraining the number of steps back for the gradient computation.
I know of two scenarios where this is common:
Language modeling (LM).
Time series modeling.
The training set is a list of sequences, potentially coming from a few documents (LM) or complete time series. During data preparation, the batches are created so that each sequence in a batch is the continuation of the sequence at the same position in the previous batch. This allows having document-level/long time series context when computing predictions.
In these cases, your data is longer than the sequence length dimension in the batch. This may be due to constraints in the available GPU memory (therefore limiting the maximum batch size) or by design due to any other reasons.
Update: Note that the stateful flag affects both training and inference time. If you disable it, you must ensure that at inference time each prediction gets the previous hidden state. For this, you can either create a new model with stateful=True and copy the parameters from the trained model with model.set_weights() or pass it manually. Due to this inconvenience, some people simply set stateful = True always and force the model not use the stored hidden state during training by invoking model.reset_states(). |
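A minimal tf.keras sketch of the stateful setup (the dimensions are made up; note that a stateful LSTM needs a fixed batch size via batch_input_shape, the number of samples must be divisible by it, and shuffling must be disabled so batches stay in order):
import tensorflow as tf

batch_size, timesteps, features = 32, 20, 8

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, stateful=True,
                         batch_input_shape=(batch_size, timesteps, features)),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")

for epoch in range(10):
    model.fit(x_train, y_train, batch_size=batch_size, epochs=1, shuffle=False)
    model.reset_states()   # clear the carried-over hidden states between epochs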
H: How to use scikit metrics for a statsmodel or vice versa?
Am working on binary classification problem with 5K records. Label 1 is 1554 and Label 0 is 3558.
I did refer this post but not sure whether it is updated now or anyone has any way to compute this metrics
Currently I use logit model as shown below
model = smm.Logit(y_train, X_train_std)
result=model.fit()
y_pred = result.predict(X_test_std)
print("Accuracy is ", accuracy_score(X_test_std, y_pred)) #throws error from here and all the line below
print(classification_report(X_test_std, y_pred))
print("ACU score is ",roc_auc_score(X_test_std, y_pred))
print("Recall score is",recall_score(X_test_std,y_pred))
print("Precision score is",precision_score(X_test_std,y_pred))
print("F1 score is",f1_score(X_test_std,y_pred))
The reason why I am trying to do this is because statsmodel has p-values, coeff, intervals etc and I was hoping to get the usual metrics through scikit metrics as shown above but it isn't accepted.
On the other hand, Through scikit logistic regression I can get usual metrics and coeff, but what about p-values, conf intervals? Is there anyway to do the reverse?
Can someone help me with this?
AI: Interpreting the code (since the error received is not mentioned), it looks like the code is passing in the X matrix and y-pred to the metrics. According to the documentation, the metrics want the y-true and y-pred. This would lead to an error mentioning incorrect dimensions.
I have used statsmodels and call scikit metrics. Many of the scikit examples, like the above documentation, show arrays being passed in, not from a specific model.
If that is not it, please post the error received. |
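For completeness, a corrected sketch of the metrics calls (assuming a y_test from your train/test split; note that statsmodels' Logit.predict returns probabilities, so they need to be thresholded before most of the metrics):
from sklearn.metrics import (accuracy_score, classification_report, roc_auc_score,
                             recall_score, precision_score, f1_score)

y_prob = result.predict(X_test_std)
y_pred = (y_prob >= 0.5).astype(int)

print("Accuracy is", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("AUC score is", roc_auc_score(y_test, y_prob))   # AUC can use the raw probabilities
print("Recall score is", recall_score(y_test, y_pred))
print("Precision score is", precision_score(y_test, y_pred))
print("F1 score is", f1_score(y_test, y_pred))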
H: Which feature selection technique to pickup(Boruta vs RFE vs step selection)
I have data with 103 columns. I would like to understand which algorithm is best for feature selection and what may be the logic to call any feature as best.
I run below feature selection algorithms and below is the output:
1) Boruta(given 11 variables as important)
2) RFE(given 7 variables as important)
3) Backward Step Selection(5 variables)
4) Both Step Selection(5 variables)
I am not able to decide which one to pick; with domain knowledge it appears I should pick the results from Boruta (as it gives the largest number of variables and all of them seem important).
However, I don't find any concrete reason for picking the best combination.
AI: There is a tradeoff between selecting features and precision. Fewer features probably have less precision (predictive power).
Select the features that make sense for your problem, taking into consideration the trade-off of information vs. performance. The fewer features the model sees, the less predictive power it has.
H: sklearn SimpleImputer too slow for categorical data represented as string values
I have a data set with categorical features represented as string values and I want to fill-in missing values in it. I’ve tried to use sklearn’s SimpleImputer but it takes too much time to fulfill the task as compared to pandas. Both methods produce the same output.
Here is the code to reproduce the behavior on a synthetic data:
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
lst = np.array(['a', 'b', np.nan], dtype='object')
arr = np.random.choice(lst, size=(10**6,1), p=[0.6, 0.3, 0.1])
ser = pd.Series(arr.ravel())
Using SimpleImputer:
%%time
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imp.fit_transform(arr)
Wall time: 13 s
Using pandas:
%%time
ser.fillna(value=ser.mode()[0])
Wall time: 64.8 ms
Things get even worse when string values are longer (e.g. ‘abc’ instead of one letter ‘a’). For numerical data pandas still outperform sklearn, but the difference is not that huge.
What am I doing wrong?
AI: Searching the source code of sklearn for SimpleImputer (with strategy="most_frequent"), the most frequent value is calculated within a loop in Python, and that is the part of the code that is so slow. In the source code of SimpleImputer there is also a comment that explains why they do not use scipy.stats.mstats.mode, which is much faster:
scipy.stats.mstats.mode cannot be used because it will no work properly if the first element is masked and if its frequency is equal to the frequency of the most frequent valid element. See https://github.com/scipy/scipy/issues/2636
So if you want to use the SimpleImputer with this strategy, a faster way would be to use the "constant" strategy and pass the most frequent value by yourself (ser.mode()[0]) then the time is almost the same:
import time

t0 = time.time()
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imp.fit_transform(arr)
print('Simple Imputer (Most Frequent) Time Elapsed:', time.time()-t0)
t0 = time.time()
imp = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=ser.mode()[0])
imp.fit_transform(arr)
print('Simple Imputer (Constant) Time Elapsed:', time.time()-t0)
t0 = time.time()
ser.fillna(value=ser.mode()[0])
print('Pandas Time Elapsed:', time.time()-t0)
And the time elapsed for each strategy:
Simple Imputer (Most Frequent) Time Elapsed: 14.320188045501709
Simple Imputer (Constant) Time Elapsed: 0.052472829818725586
Pandas Time Elapsed: 0.04726815223693848 |
H: In which cases are non-linear learning methods preferred than logistic regression in classification problems?
We know that neural networks and other learning methods can have better performance relative to logistic regression in some non-linear classification problems. But, it is known too that logistic regression can separate classes with a line that can be curvy if only we add more predictors that are a square, cube, etc of the given predictors (still considered a linear decision boundary).
So my question is, can we theoretically solve any classification problem through logistic regression, or are there limitations that I do not realize that make other non-linear learning methods mandatory to certain classification problems?
AI: There are several assumptions that are needed to be satisfied in order for logistic regression to work. From personal experience the two most important assumptions:
Features need to be uncorrelated (or only weakly correlated) with each other, because if they are correlated then a change in one feature also indicates a change in the other, which is a problem when you try to solve the optimization problem
May some features not be ordinal
You can also check here for more information.
Other learning methods such as RandomForest require no such assumptions and may work better for some datasets. But keep in mind the no free lunch theorem, most of the times you need to get your hands dirty and test several learning models before you decide which works best for your data. |
H: Algorithm selection rationale (Random Forest vs Logistic Regression vs SVM)
I want to understand the criteria of selection of ML algorithms, i.e., what are the guidelines on which algorithm to be selected in which case?
The reasons I know are:
Logistic regression is to be picked in case we want to assess the impact on the y variable of changes in any x variable.
Random forest works good on mixed data and very effective for categorical data. Also, it does feature selection first (so dimension reduction is not needed).
Random forest is not to be picked with high-dimensional, multi-category data due to its high processing time.
SVM works well with the closely placed data points like in image processing identification of dog vs cat.
But these are not sufficient to pick one, as I don't have any reason for why a particular algorithm should not be picked, like when to choose SVM over logistic regression or RF over logistic regression.
The only rationale I have is performance, so I run all the algorithms and whichever performs best is the one I select (but this is not the right way).
AI: I suppose I will suggest as a starting point and expand on what you suggested by just adding the following
Knowing the type of data you are working with and it's characteristics, (categorical, supervised/unsupervised, data size etc.).
Knowing what accuracy requirements you need, timeframe and computational power you have at your disposal vs accuracy and really answering "why, am I trying to solve this problem?"
After answering these questions you can at least narrow down slightly what you may use (and eliminate those you clearly don't believe fit). After that, I suppose it's trial and error, experience and comparing to others who dealt with similar datasets and problems.
I have this crude flow chart I found in my favourites from the scikitlearn website. Not sure where I found it to be honest. Take it for what you will, hopefully it helps somewhat:
https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html |
H: How to approach TF-IDf based analysis?
Problem statement :
We have documents with list of words in them.
Overall these documents are classified into 2 group (say, good quality vs bad)
docs -
doc1 = [w1,w2,w3,w4]
doc2 = [w4,w3,w3,w4]
doc3 = [w2,w4,w8,w1]
doc4 = [w5,w4,w0,w9]
doc group -
good_grp = [doc2, doc1]
bad_grp = [doc3, doc4]
Now we have to find out which words actually are important to make the document good vs bad ?
Idea 1:
Merge all words from documents that belong to document group 1 into a single document (say, the good-quality doc), do the same for the other group (the bad-quality doc), and calculate the tf-idf score per doc; but in this case we lose information about document-level words and only see document-group-level word importance.
doc1 = [w1,w2,w3,w4]
doc2 = [w4,w3,w3,w4]
doc3 = [w2,w4,w8,w1]
doc4 = [w5,w4,w0,w9]
good_grp = [w1,w2,w3,w4,w4,w3,w3,w4]
bad_grp = [w2,w4,w8,w1,w5,w4,w0,w9]
Can someone help me to direct to a better approach tf-idf or any other technique to solve this problem?
AI: I think here you must maintain the actual tf-idf and create a corpus over it. Assuming you already have labels for the documents available, you can run classification over it.
The best classifier I am anticipating for this problem would be naive Bayes.
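A minimal sketch of that idea with scikit-learn (the docs and labels below are the toy ones from the question; feature_log_prob_ is naive Bayes' per-class word log-probability, and get_feature_names_out needs a recent scikit-learn):
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = [" ".join(d) for d in [doc1, doc2, doc3, doc4]]   # each doc is a list of words
labels = [1, 1, 0, 0]                                    # 1 = good group, 0 = bad group

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

clf = MultinomialNB().fit(X, labels)

# words whose log-probability differs most between the two groups
diff = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
words = np.array(vec.get_feature_names_out())
print(words[np.argsort(diff)[-10:]])                     # most "good"-indicative words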
H: How to save and load model from unsupervised learning?
[Beginner]
Sorry if this is dumb question.
I am following the model from this article and below.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import datasets
from sklearn.decomposition import PCA
# Dataset
iris = datasets.load_iris()
data = pd.DataFrame(iris.data,columns = iris.feature_names)
target = iris.target_names
labels = iris.target
#Scaling
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
data = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
#PCA Transformation
pca = PCA(n_components=3)
principalComponents = pca.fit_transform(data)
PCAdf = pd.DataFrame(data = principalComponents , columns = ['principal component 1', 'principal component 2','principal component 3'])
datapoints = PCAdf.values
m, f = datapoints.shape
k = 3
#Visualization
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
X_reduced = datapoints
ax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=labels,
cmap=plt.cm.Set1, edgecolor='k', s=40)
ax.set_title("First three PCA directions")
ax.set_xlabel("principal component 1")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("principal component 1")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("principal component 1")
ax.w_zaxis.set_ticklabels([])
plt.show()
def init_medoids(X, k):
from numpy.random import choice
from numpy.random import seed
seed(1)
samples = choice(len(X), size=k, replace=False)
return X[samples, :]
medoids_initial = init_medoids(datapoints, 3)
def compute_d_p(X, medoids, p):
m = len(X)
medoids_shape = medoids.shape
# If a 1-D array is provided,
# it will be reshaped to a single row 2-D array
if len(medoids_shape) == 1:
medoids = medoids.reshape((1,len(medoids)))
k = len(medoids)
S = np.empty((m, k))
for i in range(m):
d_i = np.linalg.norm(X[i, :] - medoids, ord=p, axis=1)
S[i, :] = d_i**p
return S
S = compute_d_p(datapoints, medoids_initial, 2)
def assign_labels(S):
return np.argmin(S, axis=1)
labels = assign_labels(S)
def update_medoids(X, medoids, p):
S = compute_d_p(datapoints, medoids, p)
labels = assign_labels(S)
out_medoids = medoids
for i in set(labels):
avg_dissimilarity = np.sum(compute_d_p(datapoints, medoids[i], p))
cluster_points = datapoints[labels == i]
for datap in cluster_points:
new_medoid = datapoints
new_dissimilarity= np.sum(compute_d_p(datapoints, datap, p))
if new_dissimilarity < avg_dissimilarity :
avg_dissimilarity = new_dissimilarity
out_medoids[i] = datap
return out_medoids
def has_converged(old_medoids, medoids):
return set([tuple(x) for x in old_medoids]) == set([tuple(x) for x in medoids])
#Full algorithm
def kmedoids(X, k, p, starting_medoids=None, max_steps=np.inf):
if starting_medoids is None:
medoids = init_medoids(X, k)
else:
medoids = starting_medoids
converged = False
labels = np.zeros(len(X))
i = 1
while (not converged) and (i <= max_steps):
old_medoids = medoids.copy()
S = compute_d_p(X, medoids, p)
labels = assign_labels(S)
medoids = update_medoids(X, medoids, p)
converged = has_converged(old_medoids, medoids)
i += 1
return (medoids,labels)
results = kmedoids(datapoints, 3, 2)
final_medoids = results[0]
data['clusters'] = results[1]
#Count
def mark_matches(a, b, exact=False):
"""
Given two Numpy arrays of {0, 1} labels, returns a new boolean
array indicating at which locations the input arrays have the
same label (i.e., the corresponding entry is True).
This function can consider "inexact" matches. That is, if `exact`
is False, then the function will assume the {0, 1} labels may be
regarded as the same up to a swapping of the labels. This feature
allows
a == [0, 0, 1, 1, 0, 1, 1]
b == [1, 1, 0, 0, 1, 0, 0]
to be regarded as equal. (That is, use `exact=False` when you
only care about "relative" labeling.)
"""
assert a.shape == b.shape
a_int = a.astype(dtype=int)
b_int = b.astype(dtype=int)
all_axes = tuple(range(len(a.shape)))
assert ((a_int == 0) | (a_int == 1) | (a_int == 2)).all()
assert ((b_int == 0) | (b_int == 1) | (b_int == 2)).all()
exact_matches = (a_int == b_int)
if exact:
return exact_matches
assert exact == False
num_exact_matches = np.sum(exact_matches)
if (2*num_exact_matches) >= np.prod (a.shape):
return exact_matches
return exact_matches == False # Invert
def count_matches(a, b, exact=False):
"""
Given two sets of {0, 1} labels, returns the number of mismatches.
This function can consider "inexact" matches. That is, if `exact`
is False, then the function will assume the {0, 1} labels may be
regarded as similar up to a swapping of the labels. This feature
allows
a == [0, 0, 1, 1, 0, 1, 1]
b == [1, 1, 0, 0, 1, 0, 0]
to be regarded as equal. (That is, use `exact=False` when you
only care about "relative" labeling.)
"""
matches = mark_matches(a, b, exact=exact)
return np.sum(matches)
n_matches = count_matches(labels, data['clusters'])
print(n_matches,
"matches out of",
len(data), "data points",
"(~ {:.1f}%)".format(100.0 * n_matches / len(labels)))
How can I save the above model after training without having to rerun the above code every time a new record that has not been assigned a cluster is added to the data set?
I have also streamlined the code on my local machine to comment out all visualizations and everything after the #Count, and I am still able to get cluster assignments on my dataset. I just don't want to run the above code every time we get a new record.
I can save and load a model post training with Keras/Tensorflow, not sure if I have to use only those tools to do what I want.
AI: Not very familiar with k-medoids, but I guess it works much like k-means, right? If so, the most time-consuming part of the whole procedure is updating the medoids: we randomly select an initial set and iteratively update it to get better cluster assignments.
I suggest you pickle final_medoids. When you have new data, compute the PCA, pass it to the kmedoids function with the pickled final_medoids as starting medoids, and then use the remaining functions to compute scores. There might be some technical details to iron out, but the main idea is to save the converged medoids so you don't need to spend a lot of time updating them.
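A rough sketch of that idea (it assumes the scaler, pca, final_medoids and the helper functions defined above are still in scope, and new_records is a hypothetical array/DataFrame of new rows with the same columns; the fitted scaler and PCA are persisted too, because new records must go through the same transformation):
import pickle

# after training: persist the fitted preprocessing objects and the medoids
with open("kmedoids_state.pkl", "wb") as f:
    pickle.dump({"scaler": scaler, "pca": pca, "medoids": final_medoids}, f)

# later: reload and assign clusters to new records without re-running the loop
with open("kmedoids_state.pkl", "rb") as f:
    state = pickle.load(f)

new_points = state["pca"].transform(state["scaler"].transform(new_records))
S_new = compute_d_p(new_points, state["medoids"], 2)
new_labels = assign_labels(S_new)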
H: How are samples selected from training data in Xgboost
In Random Forest, each tree is not fed with the full batch of training data, only a sample.
How does this work for Xgboost? If this sampling happens as well, how does it work for this ML algorithm?
AI: In Gradient Boosting the simple tree is built for only a randomly selected sub-sample of the full data set (random without replacement).
In Random Forest, on the other hand, the samples for each decision tree are selected via bootstrapping, i.e. sampling the dataset with replacement.
Specifically for XGBoost (see the paper here), the sampling ratio of each tree can be modified with the following hyperparameter:
subsample [default=1]
Subsample ratio of the training instances. Setting it to 0.5 means that XGBoost would randomly sample half of the training data prior to growing trees. and this will prevent overfitting. Subsampling will occur once in every boosting iteration.
There are more parameters such as colsample_bytree, colsample_bylevel, colsample_bynode.
Sampling techniques can vary with the implementation. Normally I use the CatBoost implementation for several reasons.
You can read about the sampling options CatBoost offers in its documentation here.
Moreover, a paper that I found interesting a while ago is Minimal Variance Sampling in Stochastic Gradient Boosting. I will quote one of the sentences from the abstract:
Different sampling approaches were proposed, where probabilities are not uniform, and it is not currently clear which approach is the most effective...it leads
to a new sampling technique, which we call Minimal Variance Sampling (MVS)...The superiority of the algorithm was confirmed
by introducing MVS as a new default option for subsampling in CatBoost
Here the Catboost development team explains what is the new sampling technique that is the default in their algorithm. |
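As a sketch, these are the sampling-related arguments in the XGBoost scikit-learn wrapper (the values below are only illustrative, not recommendations):
from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=300,
    subsample=0.8,          # sample 80% of the training rows at each boosting iteration
    colsample_bytree=0.8,   # sample columns once per constructed tree
    colsample_bylevel=1.0,  # sample columns at each tree depth level
    colsample_bynode=1.0,   # sample columns at each split
)
# model.fit(X_train, y_train)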
H: Does variance and standard deviation both measure how spread out the numbers are?
I've heard from different sources that, standard deviation measures how spread out the numbers are. But I've also heard the same for variance.
Is it technically correct to say this statement for both std and var?
AI: Yes it is. Standard deviation is the square root of variance. The square root is a monotonic transformation, meaning that it preserves the order, e.g. if a > b then sqrt(a) > sqrt(b), assuming a and b are non-negative, which they always are for variance.
Standard deviation is easier to interpret and is more commonly used: when we calculate variance we square every term, and taking the square root for the standard deviation undoes that.
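A quick numerical illustration (note that np.var and np.std compute the population versions by default, ddof=0):
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.var(x))           # 4.0
print(np.std(x))           # 2.0
print(np.sqrt(np.var(x)))  # 2.0, the same as np.std(x)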
H: Clustering algorithm which does not require to tell the number of clusters
I have a dataframe with 2 columns of numerical values. I want to apply a clustering algorithm to put all the entries into the same group, which have a relatively small distance to the other entries. But which clustering algorithm can I use, although I do not know how many groups will be formed? It would be ideal if there is a parameter to determine the maximum distance allowed. And if there isn't such an algorithm, maybe it would be really helpful to come up with some intuitions, how such an algorithm can be implemented by myself. Thanks a lot!! :)
The data could look like this:
a,b
20,30
19,31
10,10
9,8
12,11
31,11
32,11
AI: I'd suggest looking at hierarchical clustering:
It's simple so you could implement and tune your own version
It lets you decide at which level you want to stop grouping elements together, so you could have a maximum distance.
Be careful however that this approach can sometimes lead to unexpected/non-intuitive clusters. |
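As a sketch, scikit-learn's AgglomerativeClustering (0.21 or newer) can be run with a distance threshold instead of a fixed number of clusters; the threshold below is only an illustration and would need tuning for your data:
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[20, 30], [19, 31], [10, 10], [9, 8], [12, 11], [31, 11], [32, 11]])

# n_clusters=None + distance_threshold: keep merging points/clusters until the
# remaining clusters are further apart than the threshold
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=10, linkage="single")
print(clustering.fit_predict(X))  # three groups for this toy data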
H: what are the steps in adaboosting?
I went through adaboost tutorial and below are my simplified understanding:
Sample weight of equal value is given to all sample in dataset.
Stumps are created which uses only one feature from data set.
Using total error and sample weight stump importance is calculated.
Samples weights are changed i.e. samples which were predicted wrongly by stumps with high importance get sample weight increased and rest of the sample weight are decreased in the same order.
The process is repeated till the number of iterations mentioned where sample weight provides path for training.
Does AdaBoost contain multiple stumps with different splitting values for a single feature?
As mentioned, are the stumps created as a first step, or is it a continuous process?
AI: First, I want to note that AdaBoost is an ensemble of stumps, and each stump is added to the ensemble sequentially, trying to compensate for the errors of the existing ensemble.
With that said:
In every step you have a new weighted dataset, and the Adaboost algorithm tries to fit a stump that splits the new dataset best. So yes, it is possible, and it does happen, that a single feature is used more than once (but the splitting value calculated by the stump should be different in order to reduce the ensemble's error.)
Stump creation is a continuous process: a new stump is fitted at every step on the newly weighted dataset.
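A small sketch with scikit-learn that illustrates both points; note that the base-learner argument is called base_estimator in older scikit-learn versions and estimator in recent ones:
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# an ensemble of stumps (trees with a single split), added sequentially
ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1), n_estimators=50)
ada.fit(X, y)

# each stump uses one feature and one threshold; the same feature can appear
# several times, usually with a different splitting value
for stump in ada.estimators_[:10]:
    print(stump.tree_.feature[0], stump.tree_.threshold[0])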
H: Classify pdf files - image approach vs. text approach
I'm about to start a project with the objective to classify PDF-documents. I'm wondering if there's a best practice approach to tackle this problem.
Concretely I'm wondering if one of the following two approaches performs usually better:
use an OCR reader to convert the files to text and train the classifier model on the text data
convert the files to images and train a CNN classifier model
I'm planning to classify different testimonials and certificates mostly.
Since most of these files share a similar layout and text within the classes the ideas should both work out. I'm wondering if anybody has already made experience with this and could tell me about some advantages/disadvantages when using a specific approach.
I'd highly appreciate any kind of help.
AI: Both methods will be beneficial for different cases.
If you think that dependencies in the text are more discriminative of the classes, go with the NLP approach. An image-based approach would need to be very complex to capture this kind of information.
On the other hand, layout and position could be very informative; this information may not be encoded in the text at all, in which case only the image approach can capture it.
Conclusion: maybe think about hand-encoding layout and position features and passing them along to the NLP model.
H: Including the validation file in the training process after tunning
Should I include the validation file in the training process after finishing the tuning process (e.g. searching for params using the validation file)?
AI: It depends on the distribution of the train, valid and holdout/test set.
There are a couple of possibilities (basically permutations). In general, any difference in distribution (covariate shift) is bad and you should repair it. If this is the case, including the validation set is the least of your problems (though you should include it in this case to make corrections) and you should worry about the covariate shift itself.
If the distributions are the same between the sets, it won't make any negative difference, and it can only help to add the validation/hyperparameter-tuning dataset to the training data.
H: How to recognize product based on image using neural network?
The company has many products on offer (around 100,000), and some of these are very similar to each other. In the database only one image per product is available.
The company wants to make it possible to recognize a product from a video camera feed and display its specification. Is it possible to train a new model with this kind of data, or to build it using an existing model?
AI: Is it possible to train new model with this kind of data
Yes, you need Convolutional Neural Networks (CNNs) for image classification. If you only have one image per product in your dataset, I suggest you use a lot of image augmentation, a technique that is meant to artificially increase the size of an image dataset by applying combinations of distortions to the image data.
or build it using existing model?
You can re-train existing models. On the TensorFlow website you can find a huge list of pretrained CNN architectures that you can download and use for your needs. It's a powerful alternative to training a new model from scratch.
H: What is purpose of the [CLS] token and why is its encoding output important?
I am reading this article on how to use BERT by Jay Alammar and I understand things up until:
For sentence classification, we’re only only interested in BERT’s output for the [CLS] token, so we select that slice of the cube and discard everything else.
I have read this topic, but still have some questions:
Isn't the [CLS] token at the very beginning of each sentence? Why is that "we are only interested in BERT's output for the [CLS] token"? Can anyone help me get my head around this? Thanks!
AI: CLS stands for classification and it's there to provide a sentence-level representation for classification.
In short, this token was introduced to make BERT's pooling scheme work: its final hidden state acts as a fixed-size summary of the whole sequence. I suggest reading this blog, where this is also covered in detail.
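A minimal sketch with the Hugging Face transformers library (the successor of pytorch_pretrained_bert; it assumes a recent version where the model output exposes last_hidden_state):
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

inputs = tokenizer("This is a sentence", return_tensors="pt")  # adds [CLS] ... [SEP]
with torch.no_grad():
    outputs = model(**inputs)

# position 0 is the [CLS] token: a single vector summarising the whole sequence,
# which is the slice used for sentence classification
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # torch.Size([1, 768])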
H: How to convert Scikit Learn logistic regression model to TensorFlow
I would like to use existing Scikit Learn LogisticRegression model in the BigQuery ML. However, BQ ML currently has a hard limit of 50 unique labels and my model needs to handle more than that.
BQ accepts TensorFlow models, which do not seem to have this limit.
How can I convert existing Scikit logistic regression model to TensorFlow model?
AI: Sure, here is a skeleton in TF 2.0.
import tensorflow as tf
weights = tf.Variable(tf.random.normal(shape=(784, 10), dtype=tf.float64))
biases = tf.Variable(tf.random.normal(shape=(10,), dtype=tf.float64))
def logistic_regression(x):
lr = tf.add(tf.matmul(x, weights), biases)
return lr
def cross_entropy(y_true, y_pred):
y_true = tf.one_hot(y_true, 10)
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
return tf.reduce_mean(loss)
def grad(x, y):
with tf.GradientTape() as tape:
y_pred = logistic_regression(x)
loss_val = cross_entropy(y, y_pred)
return tape.gradient(loss_val, [weights, biases]) |
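Since the question is about reusing an already fitted scikit-learn model, another option (a sketch, not an official conversion path) is to copy its learned coefficients into TF variables and export a SavedModel; this mirrors the multinomial (softmax) formulation, and sk_model is assumed to be your fitted LogisticRegression:
import numpy as np
import tensorflow as tf

class LogReg(tf.Module):
    def __init__(self, sk_model):
        super().__init__()
        # scikit-learn stores coef_ as (n_classes, n_features); transpose it
        self.w = tf.Variable(sk_model.coef_.T.astype(np.float64))
        self.b = tf.Variable(sk_model.intercept_.astype(np.float64))

    @tf.function(input_signature=[tf.TensorSpec([None, None], tf.float64)])
    def __call__(self, x):
        return tf.nn.softmax(tf.matmul(x, self.w) + self.b)

tf_model = LogReg(sk_model)
tf.saved_model.save(tf_model, "logreg_savedmodel")  # a TF model with no label limit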
H: Using Majority Class to Predict Minority Class
Suppose I want to train a binary model to predict the probability of who will buy a personal loan, and in the dataset only 5 percent of the examples are people marked as having bought one. In this scenario maybe I can leverage downsampling or upsampling to balance the dataset, but if my dataset isn't big enough there may be very few examples left, or maybe upsampling isn't appropriate. Then suppose I decide to use the whole dataset and partition it into training and test sets in order to predict the probability of who won't buy a personal loan. Considering it's a binary model, does it make sense to subtract this model's output probabilities from 1 and predict who will buy a personal loan using this result?
AI: Yes that's correct, but assuming that you follow the exact same methodology you will obtain exactly the same performance at the end, so there's no advantage.
Keep in mind that the problem with class imbalance is not that one class is harder to identify than the other, but rather that it's harder to properly separate the two classes.
[edit] It would be a different story when using one-class classification. I'm not sure if it makes sense in this case but maybe it could be something worth trying. |
H: R: Producing multiple plots (ggplot, geom_point) from a single CSV with multiple subcategories
I have a collection of bacteria data from approximately 140 monitoring locations in California. I would like to produce a scatterplot for each monitoring location with the Sampling Date on the Y-axis and the Bacteria Data on the X-axis. The Sampling Date, Bacteria Data, and Monitoring Location all reside within their own column.
I've come up with the below code:
## Create List of Files ##
filenames <- list.files(path = "C:\\Users\\...")
## Combine into one CSV ##
All_Data <- ldply(filenames, read.csv)
All_Data$SampleDate <- as.Date(All_Data$SampleDate, origin="1899-12-30")
## Save CSV for possible future use ##
write.csv(All_Data, file= "C://Users//...", row.names = FALSE)
## Construct Plots ##
ggplot(All_Data) + geom_point(mapping =aes(SampleDate, Total.Result)) + facet_wrap( ~ Identifier) +ylim(0,20000)
but this produces the below plot where every single location is crammed into the plot frame. Is there a way to print each plot individually to a folder?
I tried to incorporate the subset function like so
ggplot(All_Data) + geom_point(mapping =aes(SampleDate, Total.Result)) + facet_wrap( ~ subset(All_Data, Identifier)) +ylim(0,20000)
but received the error
Error in subset.data.frame(All_Data, Identifier) :
'subset' must be logical
Alternatively, is it better to do this through some sort of loop through the original 15 csvs that I've combined together? I would still have the challenge of creating one plot per monitoring location. Thanks in advance for any suggestions!
AI: something like this might work:
library(plyr)
library(ggplot2)
# All_Data should already be built as in the question
dlply(All_Data, 'Identifier', function(dataSubset) {
  # one scatterplot per monitoring location
  g <- ggplot(dataSubset) + geom_point(mapping = aes(SampleDate, Total.Result)) + ylim(0, 20000)
  # file name based on the location identifier, e.g. "Scatter_<id>.tiff"
  file_name <- paste0("Scatter_", unique(dataSubset$Identifier), ".tiff")
  ggsave(file_name, g)
})
(I didn't test it) |
H: Using a GAN discriminator as a standalone classifier
The goal of the discriminator in a GAN is to distinguish between real inputs and inputs synthesized by the generator.
Suppose I train a GAN until the generator is good enough to fool the discriminator much of the time. Could I then use the discriminator as a classifier that tests whether an input belongs to a single class?
For instance, if I train StyleGAN to be able to synthesize photorealistic cats, could I use the trained discriminator to detect whether an image is a cat or not?
My thinking is that perhaps the discriminator would be more accurate than other classifier models because it has effectively trained on many, many more inputs thanks to the generator.
On the other hand, perhaps the discriminator is somehow worse because it has been trained overwhelmingly on cat-like images (assuming the generator has gotten pretty good), and hasn't seen a wide variety of negative examples. It is concerned less with "is this a cat?" than "what are the tell-tale signs of this being synthetic?"
AI: Yes, we can use the Discriminator of the GAN to classify images. But we should make sure that the images produced by the Generator are real looking.
If you have trained your GAN on a large number of images and it is performing pretty well on the dataset, then I suggest treating the discriminator as a pretrained model (like we do in transfer learning) and training it again on images which were not used to train the GAN earlier. Thus the model is fine-tuned on data the GAN wasn't trained on before.
Another similar way could be to only use the weights of the CNN layers and load them into our new model.
Suppose I train a GAN until the generator is good enough to fool the
discriminator much of the time. Could I then use the discriminator as
a classifier that tests whether an input belongs to a single class?
You should definitely try reusing a Discriminator from a GAN and share your results too :-). |
H: Kernel selections in SVM
I want to understand the kernel selection rationale in SVM.
Some basic things I understand: if the data is linear, then we should go for a linear kernel, and if it is non-linear, then others.
But the question is how to tell whether the given data is linear or not, especially when it has many features.
I know that with cross-validation I can try different kernels and select whichever performs best, but I'm looking for some early indications anyway.
AI: Start with a linear kernel and see whether your data is linearly separable or not. Doing that is simpler than looking for early indications.
Linear kernels are suggested when the number of features is larger than the number of observations in the dataset (otherwise RBF will be a better choice).
However, once you conclude that you have non-linear data, you can try to visualize it and map the non-linearly separable data into a higher dimensional space to see if that makes the data linearly separable. That is what a kernel does anyway, so visualizing will give you insights and indications about the type of kernel to be used.
This link might help. |
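A small sketch of that workflow: compare a linear and an RBF kernel with cross-validation; if the linear kernel scores about as well as RBF, that is a quick indication the data is (close to) linearly separable:
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for kernel in ["linear", "rbf"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    print(kernel, cross_val_score(clf, X, y, cv=5).mean())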
H: Convert Numpy array with 'n' and 'y' into integer array of 0 and 1
I have a NumPy array of strings ('n', 'y') and want to convert it into an integer array of 0s and 1s. How can I do that?
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
X = imp.fit_transform(X)
X
array([['n', 'y', 'n', ..., 'y', 'n', 'y'],
['n', 'y', 'n', ..., 'y', 'n', 'y'],
['n', 'y', 'y', ..., 'y', 'n', 'n'],
...,
['n', 'y', 'n', ..., 'y', 'n', 'y'],
['n', 'n', 'n', ..., 'y', 'n', 'y'],
['n', 'y', 'n', ..., 'y', 'n', 'n']], dtype=object)
AI: It is quite trivial
(X=='y').astype(int)
Should do the trick. It simply converts your array to True or False according to your requirements and then astype will impose the required datatype. By default int will give you 1 for True and 0 for False. |
H: Should I create a separate column for each Id value in a feature column or can I use the feature column as it is?
I am working on developing a model for predicting the revenue that a movie will make. One of the features in the training set contains the id of the series that a movie belongs to. Say the Star Wars series has id 1; then the corresponding value in that column for the movie "Return Of Jedi" will be 1.
Should I create a separate column for each of the series and add the value 1 if the film belongs to that series, or will just keeping the existing column containing ids be fine?
AI: You should not leave the column as it is, since similar id values are not semantically related. Maybe a deep neural network could handle this, but linear models definitely cannot.
There are two options, depending on the cardinality (the number of unique entries/categories) of the column.
If the cardinality is low you should one-hot-encode the series, one column for each series, like you propose. However, in your dataset, there are probably many series, and your data will end up having too many features. See this image (source), you need one column/feature for each category if you one-hot encode.
If the cardinality is high, you should find an encoding for the id column. One way would be using target statistics: Calculate the mean target (in your case revenue) a movie of a series makes for each series-category of the training set. Then replace the id column with a column of the target-means for the respective series. If there is a series in the test data, which is not in the training data, just use the overall target mean of the training set.
It is important for this approach that you never use target-statistics calculated from the test set, otherwise, your error estimate will be highly optimistic!
The rationale behind this approach is, that the model is allowed to learn things like "continuations of successful movies are often successful". |
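A pandas sketch of the target-statistics approach (column names are made up for illustration); note that the means are computed on the training set only:
import pandas as pd

train = pd.DataFrame({"series_id": [1, 1, 2, 2, 3], "revenue": [100, 120, 40, 60, 300]})
test = pd.DataFrame({"series_id": [1, 4]})  # series 4 never appears in training

series_means = train.groupby("series_id")["revenue"].mean()
global_mean = train["revenue"].mean()

# replace the id column by the mean revenue of its series,
# falling back to the overall training mean for unseen series
train["series_encoded"] = train["series_id"].map(series_means)
test["series_encoded"] = test["series_id"].map(series_means).fillna(global_mean)
print(test)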
H: KeyError: 'i' and Error
I am implementing the code from https://www.kaggle.com/eray1yildiz/using-lstms-with-attention-for-emotion-recognition,
which is emotion analysis from text. I am getting an error when encoding my samples with their corresponding integer values. The code is:
X = [[word2id[word] for word in sentence] for sentence in input_sentences]
Error:
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
X = [[word2id[word] for word in sentence] for sentence in input_sentences]
File "<pyshell#32>", line 1, in <listcomp>
X = [[word2id[word] for word in sentence] for sentence in input_sentences]
File "<pyshell#32>", line 1, in <listcomp>
X = [[word2id[word] for word in sentence] for sentence in input_sentences]
KeyError: 'i'
can someone please help me. thanks
AI: The dictionary word2id does not contain the word "i". I can run the Kaggle Kernel and do not get the error.
When I run the code my word2id dictionary looks like this:
{'i': 0,
'feel': 1,
'awful': 2,
'about': 3,
'it': 4,
'too': 5,
'because': 6,...}
The word "i" is definitely in there, I think you did not run the code to create the word2id dictionary properly. Make sure to run all the cells of the kernel. |
H: How to balance class weights correct for a CNN in Keras, given an unbalanced data set?
I want to use class weights for training a CNN with an imbalanced data set.
The question is whether the sum of the weights of all examples has to stay the same.
My previous plan was to use the function compute_class_weight('balanced', np.unique(y_train), y_train) from scikit-learn.
But I'm totally unsure whether this is even suitable for the class weights of a CNN.
Thank you in advance for each tip
AI: If the "cost" for experimenting is not really that big I suggest you take the time to experiment and take this as a learning opportunity and just try if it could actually work.
There are many approaches to address class imbalance and setting class weight is one of them and the easiest to implement.
Change loss function (for example to focal loss for binary classification with extreme imbalance)
Oversampling and Undersampling
Setting class weights
Use specific algorithms that are built to address this problem, e.g. a Siamese network, which is very useful when you have, say, only very few training samples of the object of interest.
etc.
Specifically for your case, I can tell you, based on my experience, the situation in which this approach can fail. It is very likely to fail when you have extreme class imbalance, say 1% positive and 99% negative. The reason is that class weighting then puts a very high weight on each positive sample, so whenever your model fails to detect one, the penalty is very high, which leads to unstable training. To top it off, consider a hypothetical situation where your model predicts the positive class correctly in epoch 10 and then fails in epoch 11: you might get a loss of, for example, 1.3 in epoch 10, but in epoch 11 your loss could jump to something like 37.7 simply because the model failed to detect that sample. This can also affect any callbacks that use this loss.
In summary, if your situation is as described then don't use this approach; otherwise just play around and find out what works best for you.
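Regarding the concrete question about compute_class_weight: yes, it can be fed to a Keras CNN; a sketch (it assumes model, X_train and y_train already exist, and uses keyword arguments as required by recent scikit-learn versions):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
class_weight = {int(c): w for c, w in zip(classes, weights)}

# Keras scales each sample's contribution to the loss by its class weight,
# so the sum of the weights does not have to stay the same
model.fit(X_train, y_train, epochs=10, class_weight=class_weight)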
H: In survival analysis, which is the correct way to introduce a variable which changes the survival rate but occurs at different times?
I am making a survival analysis with a Cox regression with proportional hazards; we want to analyze whether the introduction of a phenomenon influences the time until the death of an individual.
A similar example would be: we have patients who were given a medicine at different stages of the disease (and at different ages too, and some were not given the medicine at all). How should the treatment variable be introduced so we can measure the "strength" of the introduced change?
I have a possible solution:
Create a dummy exogenous variable for ranges of the time in which the medicine was introduced. "First 0-6 months", "6 m - 1 year", "1-2 years", "Never", this measured since the birth of the patient.
Create the same exogenous variable but not dummy.
Change the time variable and measure it not from birth but from the introduction of the medicine (the problem would be the definition of this variable for the patients without medicine).
Duplicate the rows of the individuals who had the medicine. In the first repetition, the time would be between birth and the introduction of the medicine as a "censored death"; the second repetition would include the time between the introduction of the medicine and the actual death.
Nowadays, the model includes the introduction of the medicine as a dummy but doesn't take into account the time at which it was introduced.
AI: The problem you'll run into if you are not careful is the "immortal time bias". In short, the problem is that a subject isn't "in" the "1-2 years" group until they have been at least 1 year under observation. This 1-year period is called immortal because patients can't die then. More concretely, if I naively partition my population into "First 0-6 months" vs "1-2 years" and measure survival, the latter group is going to look like it has much better survival in the first year, because in order to qualify for the latter group, you need to live longer.
So what do you do? You need to model the time-varying nature of your data. Check out the "long" format for survival data. Below I have a Python example that uses lifelines. There are four individuals, each will be treated in a different treatment period. We use multiple lines (but same id) to denote different time periods (note the mutually exclusive start/stop). A dummy variable of the treatment period is provided.
import pandas as pd
from lifelines import CoxTimeVaryingFitter
df = pd.DataFrame([
{'id': 1, 'start': 0, 'stop': 12, 'E': 1, 't1': 0, 't2': 0, 't3': 0}, # never received treatment, died at t=12
{'id': 2, 'start': 0, 'stop': 10, 'E': 1, 't1': 1, 't2': 0, 't3': 0}, # received treatment at very start
{'id': 3, 'start': 0, 'stop': 3, 'E': 0, 't1': 0, 't2': 0, 't3': 0}, # will received treatment in second "period"
{'id': 3, 'start': 3, 'stop': 9, 'E': 1, 't1': 0, 't2': 1, 't3': 0}, # received treatment in second "period"
{'id': 4, 'start': 0, 'stop': 6, 'E': 0, 't1': 0, 't2': 0, 't3': 0}, # will received treatment in third "period"
{'id': 4, 'start': 6, 'stop': 11, 'E': 1, 't1': 0, 't2': 0, 't3': 1}, # received treatment in third "period"
])
ctv = CoxTimeVaryingFitter().fit(df, id_col='id', start_col='start', stop_col='stop', event_col='E')
Another solution instead of using dummy variables is to create an interaction with the time variable above, but that will be harder to interpret I believe. |
H: Resampling with Python SMOTE
I am trying to do a simple ML re-sampling approach after the train-test split. However when I do this, it throws the below error. Can you please help me understand what this error is about?
KeyError: 'Only the Series name can be used for the key in Series dtype mappings.'
The code is given below:
# split into training and testing datasets
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from imblearn.over_sampling import SMOTE
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 2, shuffle = True, stratify = y)
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
print("Before OverSampling, counts of label '1': {}".format(sum(y_train==1)))
print("Before OverSampling, counts of label '0': {} \n".format(sum(y_train==0)))
sm = SMOTE(random_state=2)
X_train_res, y_train_res = sm.fit_sample(X_train, y_train.ravel()) # error is thrown here
print('After OverSampling, the shape of train_X: {}'.format(X_train_res.shape))
print('After OverSampling, the shape of train_y: {} \n'.format(y_train_res.shape))
print("After OverSampling, counts of label '1': {}".format(sum(y_train_res==1)))
print("After OverSampling, counts of label '0': {}".format(sum(y_train_res==0)))
Here is the full error message:
KeyError Traceback (most recent call last)
<ipython-input-216-af83b63865ac> in <module>
3
4 sm = SMOTE(random_state=2)
----> 5 X_train_res, y_train_res = sm.fit_sample(X_train, y_train.ravel())
6
7 print('After OverSampling, the shape of train_X: {}'.format(X_train_res.shape))
~\AppData\Local\Continuum\anaconda3\lib\site-packages\imblearn\base.py in fit_resample(self, X, y)
86 if self._X_columns is not None:
87 X_ = pd.DataFrame(output[0], columns=self._X_columns)
---> 88 X_ = X_.astype(self._X_dtypes)
89 else:
90 X_ = output[0]
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in astype(self, dtype, copy, errors, **kwargs)
5863 results.append(
5864 col.astype(
-> 5865 dtype=dtype[col_name], copy=copy, errors=errors, **kwargs
5866 )
5867 )
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in astype(self, dtype, copy, errors, **kwargs)
5846 if len(dtype) > 1 or self.name not in dtype:
5847 raise KeyError(
-> 5848 "Only the Series name can be used for "
5849 "the key in Series dtype mappings."
5850 )
KeyError: 'Only the Series name can be used for the key in Series dtype mappings.'
AI: Do it without ravel (or any reshaping of y_train).
Or, if you are going to use ravel, then convert the DataFrame X_train into a plain matrix as well, so that X and y are passed to fit_sample in a consistent format.
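A sketch of both options, using the pandas objects from the split above (whether this resolves the error can depend on the imblearn version; in newer releases fit_sample was renamed to fit_resample):
sm = SMOTE(random_state=2)

# option 1: pass the pandas objects as they are, without ravel
X_train_res, y_train_res = sm.fit_sample(X_train, y_train)

# option 2: convert both X and y to plain NumPy arrays before resampling
X_train_res, y_train_res = sm.fit_sample(X_train.values, y_train.values.ravel())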
H: Machine learning methods for panel (longitudinal) data
I have a panel data set, for example:
obj time Y x1 x2
1 1 0.1 1.28 0.02
2 1 0.11 1.27 0.01
1 2 -0.4 1.05 -0.06
2 2 -0.3 1.11 -0.02
1 3 -0.5 1.22 -0.06
2 3 1.2 1.06 0.11
I'm new to ML and until recently I did not know that this is a special (panel) data type. I predicted the value of a variable $Y(t+1)$ from the values $x_1(t)$ and $x_2(t)$ (time lag) using a linear regression model and an MLP. But now I have read some information about panel data analysis and realized that the methods I used were not suitable. So far I have found that fixed/random effects models are suitable for panel data analysis. So, I have several questions:
What other methods are correctly used to analyze panel data (I'm interested in neural network models)? I read that these methods must take into account the dependency between a particular object's values and the previously occurring values of this object (as is done in models with fixed and random effects).
I also tried to use an MLP by feeding 2D data to it. I divided the panel data into k = (number of time quanta) 2D blocks and passed this data to the MLP input. For the example above, k = 3 (input layer size = 4 = number of predictors × number of objects per block). In this case batch size = 1.
If I make the batch size = 2 and feed the neural network with 1D data (input layer size = 2 for the example above), will there be any difference? In both cases, the weights of the neural network will be updated after the observations on all objects for one time quantum have been passed.
AI: Panel data = multi-object time series.
In other words, you have a time-series problem (time) for different objects (obj) whose target (Y) you are trying to predict.
If I were you I would just dissect this problem and start thinking in terms of time series plus an extra discriminative column called obj. Which time-series approaches do you already know? Here is a really cool and modern time-series tutorial, check it out.
Regarding NNs, why are you trying to squeeze one in so hard? Let the data tell you which algorithm can model it. Personally, given these 3 features, a NN is just too much and you can achieve similar results with less complexity using simpler/less expensive approaches.
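As a sketch of the simplest route: build per-object lag features with pandas, so an ordinary regressor can use x1(t), x2(t) (and past Y) to predict Y for each object without mixing observations of different objects:
import pandas as pd

df = pd.DataFrame({
    "obj":  [1, 2, 1, 2, 1, 2],
    "time": [1, 1, 2, 2, 3, 3],
    "Y":    [0.1, 0.11, -0.4, -0.3, -0.5, 1.2],
    "x1":   [1.28, 1.27, 1.05, 1.11, 1.22, 1.06],
    "x2":   [0.02, 0.01, -0.06, -0.02, -0.06, 0.11],
}).sort_values(["obj", "time"])

# lag the predictors within each object, never across objects
for col in ["Y", "x1", "x2"]:
    df[col + "_lag1"] = df.groupby("obj")[col].shift(1)

print(df.dropna())  # each row now has Y(t) together with the t-1 predictors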
H: How to impute Missing values not the usual way?
I have a dataset of 4712 records and am working on binary classification. Label 1 is 33% and Label 0 is 67%. I can't drop records because my sample is already small, and there are a few columns which have around 250-350 missing records each.
How do I know whether this is missing at random, missing completely at random or missing not at random? For example: 4400 patients have the readings and 330 patients don't. But we expect these 330 to have the readings because it is a very usual measurement. So what is this called?
In addition, for my dataset it doesn't make sense to use mean or median straight away to fill missing values. I have been reading about algorithms like Multiple Imputation and Maximum Likelihood etc.
Is there any other algorithms that is good in filling the missing values in a robust way?
Is there any python packages for this?
Can someone help me with this?
AI: To decide which strategy is appropriate, it is important to investigate the mechanism that led to the missing values to find out whether the missing data is missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR).
MCAR means that there is no relationship between the missingness of the data and any of the values.
MAR means that that there is a systematic relationship between the propensity of missing values and the observed data, but not the missing data.
MNAR means that there is a systematic relationship between the propensity of a value to be missing and its values.
Given what you have told, it is likely MCAR (the assumption being that you have already tried to find this propensity yourself using domain knowledge, or built a model between the missing columns and the other features, and failed in doing so).
As for other techniques to impute the data, I would suggest looking at KNN imputation (from experience it gives consistently solid results), but you should try different methods.
The fancyimpute package supports this kind of imputation, using the following API:
from fancyimpute import KNN
# Use 10 nearest rows which have a feature to fill in each row's missing features
X_fill_knn = KNN(k=10).fit_transform(X)
Here are different methods also supported by this package:
• SimpleFill: Replaces missing entries with the mean or median of each column.
• KNN: Nearest neighbor imputation which weights samples using the mean squared difference on features for which two rows both have observed data.
• SoftImpute: Matrix completion by iterative soft thresholding of SVD decompositions. Inspired by the softImpute package for R, which is based on Spectral Regularization Algorithms for Learning Large Incomplete Matrices by Mazumder et al.
• IterativeSVD: Matrix completion by iterative low-rank SVD decomposition. Should be similar to SVDimpute from Missing value estimation methods for DNA microarrays by Troyanskaya et al.
• MICE: Reimplementation of Multiple Imputation by Chained Equations.
• MatrixFactorization: Direct factorization of the incomplete matrix into low-rank U and V, with an L1 sparsity penalty on the elements of U and an L2 penalty on the elements of V. Solved by gradient descent.
• NuclearNormMinimization: Simple implementation of Exact Matrix Completion via Convex Optimization by Emmanuel Candes and Benjamin Recht, using cvxpy. Too slow for large matrices.
• BiScaler: Iterative estimation of row/column means and standard deviations to get a doubly normalized matrix. Not guaranteed to converge but works well in practice. Taken from Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.
EDIT: MICE was deprecated in fancyimpute and the functionality was moved to scikit-learn as IterativeImputer.
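For completeness, a sketch of the scikit-learn (0.22+) equivalents mentioned in the edit; IterativeImputer still requires the experimental enable-import:
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: needed before the import below
from sklearn.impute import IterativeImputer, KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

# KNN imputation: fill each missing value from the most similar rows
print(KNNImputer(n_neighbors=2).fit_transform(X))

# MICE-style imputation: model each incomplete feature from the other features
print(IterativeImputer(random_state=0).fit_transform(X))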
H: Extracting name, date and total from a set of heterogeneous receipts
So, this is how the problem goes: I am trying to extract information from scanned receipts like this,
What I have been told is that I would get the textual data from a OCR software, so in short I will be working with a textual version of the image directly.
Problem:
The problem in hand here is that I have to extract certain information here, namely,
The Location. (for eg: New York, United States)
The complete Total (after all the discounts, tips, etc. have been involved) (Eg: 1033.42)
The Currency. ($, £, €, etc)
The Date. (Easier to guess)
The reason I want to extract the Location information is that, if for example, the currency is not explicitly mentioned here, then I can infer from the location where the receipt is generated.
Challenges:
The challenge here is that, information like Total Due could be anything like Grand Total or Total (only), or something semantically similar to Total, as I am not going to get the same receipt from the same restaurant only. (The restaurant could be anywhere in the world, but the problem is limited to only English speaking countries for now.)
The other challenge is to actually get the total information. It's very easy for us to see that 1033.42 is the total in the above receipt. But how do I get the software to know that? The way I see it, is that 1033.42 is near to the total(proximity). But there could be other numbers near to it as well.
Where I have tried and failed:
I have been told to start from NLTK(NER) but NER doesn't work for everything here. I can get the date information through it but the problem isn't just only to identify what the named entities are, imo.
What I think would work
The way I see it, I think I need to use a machine/deep learning model where the machine would be able to understand the proximity match between anything which semantically says Total and the number near to it (most probably on the right side).
Any help regarding which model would work best in terms of speed(first and foremost) and accuracy would be greatly appreciated.
I would also appreciate any help regarding where I could find any dataset or existing model which I could use for Transfer Learning.
AI: There is already an ML engine that does these extractions, here is a general process layout:
Here is the original paper describing the architecture, features, approaches etc.; read it there instead of me copying it here: cloudscan.
H: How to evaluate performance of a new feature in a model?
I am working on a binary classification where I have 4712 records with Label 1 being 1554 records and Label 0 being 3558 records.
When I tried multiple models based on 6,7 and 8 features, I see the below results. Based on the newly added 7th or (7th & 8th) feature, I see an AUC improvement only in one of the models (LR scikit and Xgboost).
I also came across articles online that say AUC or F1-score aren't strict scoring rules. We could use the log-loss metric, but it's only applicable to logistic regression; we can't use log-loss for XGBoost or RF or SVM, right? So, is there any common metric which I can use to compare? Am I missing something here?
Does this mean that new feature is helping us improve the performance? But it decreases the performance in other models?
Please note that I split the data into train and test and did 10 fold CV on train data.
So, how do I know that this newly added 7th feature is really helping in improving the model performance?
update based on answer
from statsmodels.stats.contingency_tables import mcnemar
# define contingency table
table = [[808,138], # here I added confusion matrix of two models together (I mean based on TP in model 1 is added with TP in model 2 etc)
[52, 416]]
# calculate mcnemar test
result = mcnemar(table, exact=True)
# summarize the finding
print('statistic=%.3f, p-value=%.3f' % (result.statistic, result.pvalue))
# interpret the p-value
alpha = 0.05
if result.pvalue > alpha:
print('Same proportions of errors (fail to reject H0)')
else:
print('Different proportions of errors (reject H0)')
AI: Informally: democracy, i.e. a vote across your models.
So ask how many classifiers the new feature improved; if it is only 1, then don't add it.
Formally there are a couple of statistical tests.
Cochran's Q test is a generalisation of McNemar's test for comparing machine learning models.
Or read this formal paper where they discuss it.
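One library that implements both tests directly on model predictions is mlxtend; a sketch (y_test, y_pred_without and y_pred_with are assumed to be the true test labels and the test-set predictions of the model without/with the new feature):
from mlxtend.evaluate import cochrans_q, mcnemar, mcnemar_table

# Cochran's Q: do the models have the same proportion of errors?
q, p_value = cochrans_q(y_test, y_pred_without, y_pred_with)
print(q, p_value)

# pairwise follow-up with McNemar's test; mcnemar_table builds the proper
# contingency table (model A right/wrong vs model B right/wrong) directly
# from the predictions, rather than adding two confusion matrices together
table = mcnemar_table(y_target=y_test, y_model1=y_pred_without, y_model2=y_pred_with)
chi2, p = mcnemar(ary=table, corrected=True)
print(chi2, p)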
H: Why results of statsmodel logreg is different from scikit-learn logreg?
I am trying to do a binary classification. I have only 6 input variables and one output variable. Label 1 is 1554 records and Label 0 is 3558 records.
As you can see below, the metrics that I get from these two are different. I am not sure what other info I can share more about this issue. I just tried to do the classification through different methods
# code for statsmodel logreg
model = smm.Logit(y_train, X_train_std) #std indicates standardized inputs
result=model.fit()
result.summary()
y_pred = result.predict(X_test_std)
y_pred[y_pred > 0.5] = 1
y_pred[y_pred < 0.5] = 0
cm = confusion_matrix(y_test, y_pred)
print(cm)
print("Accuracy is ", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("ACU score is ",roc_auc_score(y_test, y_pred))
print("Recall score is",recall_score(y_test,y_pred))
print("Precision score is",precision_score(y_test,y_pred))
print("F1 score is",f1_score(y_test,y_pred))
# code for scikit-learn
#log reg optimized parameters
op_param_grid = {'C': [0.01],'class_weight':['balanced'],'penalty': ['l1'], 'solver': ['saga'],'max_iter':[200]}
logreg=LogisticRegression(random_state=41)
logreg_cv=GridSearchCV(logreg,op_param_grid,cv=10,scoring='f1')
logreg_cv.fit(X_train_std,y_train)
May I know why this is happening and which one I should rely upon?
Is it not right to expect the same results from both approaches? Do they work differently?
Can anyone help?
AI: As far as I can tell, you use a "normal" Logit in one approach and a Logit with an L1 penalty in the other case ('penalty': ['l1']), which is called "Lasso". In statsmodels, an L1 penalty would be implemented as stated in the docs.
Lasso and "normal" Logit are two different approaches. In the first case, (some) parameters are shrunken and can be set to zero, while with normal Logit, no parameter is shrunken. Lasso is often used to improve fit with noisy $x$, to do feature selection, or in case of high dimensional data (lot of $x$). Compare Chapters 4.3 and 6.2 in Introduction to Statistical Learning.
ISLR comes with Python labs. This can give you a good starting point how to apply one or the other method in a proper way. |
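If you want to verify that the difference really comes from the penalty (and the other non-default settings such as class_weight), a sketch is to fit both libraries without regularization and compare the coefficients; note that statsmodels does not add an intercept automatically, and penalty='none' needs a reasonably recent scikit-learn (the newest releases spell it penalty=None; in very old ones use a very large C instead):
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# statsmodels: unpenalized Logit with an explicit intercept column
sm_model = sm.Logit(y_train, sm.add_constant(X_train_std)).fit()

# scikit-learn: disable the penalty so both solve the same optimization problem
sk_model = LogisticRegression(penalty="none", max_iter=1000).fit(X_train_std, y_train)

print(sm_model.params)
print(sk_model.intercept_, sk_model.coef_)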
H: Why ML model produces different results despite random_state defined? And how to set global random seed for sklearn
I have been running a few ML models on the same set of data for a binary classification problem with a class proportion of 33:67.
I used the same algorithms and the same set of hyperparameters for yesterday's and today's runs.
Please note that I also have the parameter random_state in each estimator function as shown below
np.random.seed(42)
svm=SVC() # i replace the estimator here for diff algos
svm_cv=GridSearchCV(svm,op_param_grid,cv=10,scoring='f1')
svm_cv.fit(X_train_std,y_train)
q1) Why does this change happen even when I have random_state configured?
q2) Is there anything else that I should do to reproduce the same results every time I run?
Please find below the results that differ. Here auc-Y denotes yesterday's run.
AI: Not every seed is the same: different libraries keep their own random number generators, so setting one seed does not necessarily cover all of them.
Here is a definitive function that sets ALL of your seeds and you can expect complete reproducibility:
import os
import random

import numpy as np
import torch

def seed_everything(seed=42):
    """
    Seed everything.
    """
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
The imports of random, os, numpy and torch are included above so the function is self-contained.
UPDATE: How to set a global random seed for sklearn models:
Given that sklearn does not have its own global random seed but uses the NumPy random seed, we can set it globally with the above:
np.random.seed(seed)
Here is a little experiment with the scipy library; the behaviour is analogous for sklearn (which generates random numbers, e.g. for initial weights):
import numpy as np
from scipy.stats import norm
print('Without seed')
print(norm.rvs(100, size = 5))
print(norm.rvs(100, size = 5))
print('With the same seed')
np.random.seed(42)
print(norm.rvs(100, size = 5))
np.random.seed(42) # reset the random seed back to 42
print(norm.rvs(100, size = 5))
print('Without seed')
np.random.seed(None)
print(norm.rvs(100, size = 5))
print(norm.rvs(100, size = 5))
outputting and confirming
Without seed
[100.27042599 100.9258397 100.20903163 99.88255017 99.29165699]
[100.53127275 100.17750482 98.38604284 100.74109598 101.54287085]
With the same seed
**[101.36242188 101.13410818 102.36307449 99.74043318 98.83044407]**
**[101.36242188 101.13410818 102.36307449 99.74043318 98.83044407]**
Without seed
[101.2933838 100.52176902 101.38602156 100.72865231 99.02271004]
[100.19080241 99.11010957 99.51578106 101.56403284 100.37350788] |
H: Compare between similar and dissimilar couples of instances
I label couples of similar and dissimilar instances based on user behavior.
Each instance has a lot of features.
I have a few ways of labeling the couples.
I now want to evaluate which of the labeling methods produces the most homogeneous distribution within the groups, or to tell whether the two groups come from the same distribution.
I am looking mostly for statistical measures.
Any suggestions?
AI: You can compute a similarity score between each couple of instances (diff in features) and then you can check if the distribution of the difference for each group (similar and dissimilar) is significantly different using the Kolmogorov-Smirnov test. |
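A sketch with scipy (the two arrays below are placeholders for the per-couple similarity scores of each group):
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(0)
similar_scores = rng.normal(0.2, 0.1, size=200)      # placeholder data
dissimilar_scores = rng.normal(0.8, 0.1, size=200)   # placeholder data

statistic, p_value = ks_2samp(similar_scores, dissimilar_scores)
print(statistic, p_value)  # a small p-value means the distributions differ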
H: What should I use as training data for base (level 1) classifiers in ensembling?
Can I just take all training data that I have, train the base models on them and then take their results and use them for training level 2 model? Is this a good practice, or should it be done differently?
AI: You can do that, but your model will not generalize well. You should not use base-model predictions on data that was used to fit the base model.
Thus, you have to get the base model predictions for the training data using cross-validation. This is called "model stacking".
This page has a good explanation:
Split your training data into subsets, predict the target for each subset using all other subsets.
Fit the base model on the whole training data and predict the target for the test set.
Do this for multiple base models. Now you have train and test set predictions for each base model. In this example we have two base models:
Fit an ensemble model on the base training predictions and evaluate the performance on the base test predictions. |
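A sketch of these steps with scikit-learn, using out-of-fold probabilities from cross_val_predict as the level-2 training features (the dataset and base models are only illustrative):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [RandomForestClassifier(random_state=0), SVC(probability=True, random_state=0)]

# level-2 training features: out-of-fold predictions on the training data
train_meta = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
# level-2 test features: base models refit on the whole training set
test_meta = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in base_models
])

meta_model = LogisticRegression().fit(train_meta, y_train)
print(meta_model.score(test_meta, y_test))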
H: Multiclassification Error: NotFittedError: This MultiLabelBinarizer instance is not fitted yet
After picking the model, when I try to use it, I am getting error -
"NotFittedError: This MultiLabelBinarizer instance is not fitted yet.
Call 'fit' with appropriate arguments before using this estimator."
X = <training_data>
y = <training_labels>
# Perform multi-label classification on class labels.
mlb = MultiLabelBinarizer()
multilabel_y = mlb.fit_transform(y)
p = Pipeline([
('vect', CountVectorizer(min_df=min_df, ngram_range=ngram_range)),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(clf))
])
# Use multilabel classes to fit the pipeline.
p.fit(X, multilabel_y)
AI: This code will work. Just let sklearn.linear_model.LogisticRegression handle the multiclassification for you.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
X = ["How to join amazon company ","How to join google ",'Stay home']
y = ["Career Advice", "Fresher",'Other' ]
# Perform multi-label classification on class labels.
clf = LogisticRegression()
p = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(clf))
])
# Use multilabel classes to fit the pipeline.
p.fit(X, y);
p.predict(X)
H: How does BERT and GPT-2 encoding deal with token such as <|startoftext|>,
As I understand, GPT-2 and BERT are using Byte-Pair Encoding, which is a subword encoding. Since special start/end tokens such as <|startoftext|> and <|endoftext|> are used a lot, I imagine the encoder should encode such a token as one single piece.
However, when I use the pytorch BertTokenizer it seems the encoder also separates the token into pieces. Is this correct behaviour?
from pytorch_pretrained_bert import BertTokenizer, cached_path
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
tokenizer.tokenize('<s> This is a sentence <|endoftext|>')
The results are:
['<',
's',
'>',
'This',
'is',
'a',
'sentence',
'<',
'|',
'end',
'##oft',
'##ex',
'##t',
'|',
'>']
AI: BERT is not trained with this kind of special tokens, so the tokenizer is not expecting them and therefore it splits them as any other piece of normal text, and they will probably harm the obtained representations if you keep them. You should remove these special tokens from the input text.
In the case of GPT-2, OpenAI trained it only with <|endoftext|>, but it has to be added after the tokenization. Some people mistakenly add it before tokenization, leading to problems.
<|startoftext|> is specific to the library gpt-2-simple. |
H: Need of maxpooling layer in CNN and confusion regarding output size & number of parameters
In my CNN architecture for binary classification, I have 2 convolutional layers, 2 maxpooling layers, 2 batchnormalization operations, 1 RELu and 1 fullyconnected layer.
Case1: When the number of channels, $d=1$:
In the first layer an input of size $[28*28*d]$, $d=1$ channel is convolved with $M_1=20 $ number of filters applied over all the input channels of size {$f_h \times f_w \times d$} = $[3\times 3\times 1]$ having the step size (stride) as 1 that creates a feature map of size ${(h-f_h+1) \times (w - f_w +1)\times d \times M_1} = (28-3 +1)\times(28-3+1)\times 10 = [26\times 26\times 20]$.
The second convolutional layer contains twice the number of filters = 40 of same size $[3\times 3 \times 1]$. So, the number of parameters becomes $[23 * 23 * 1 * 40]$ as the output from the second convolutional layer. So total number of parameters = $[26\times 26\times 20]+ [23 * 23 * 40]$
Case 2: When $d=2$ and all other sizes are same. The filter size become $[3 \times 3 \times 2]$. The output of the first convolutional layer will contain: $(28-3 +1)\times(28-3+1)\times 2 \times 20 = [25\times 25\times 40]$. For the second convolutional layer, the output will contain $[23 \times 23 \times 2 \times 40]$ parameters.
Question1) Is my calculation for each case above correct?
Question2) I have read that the purpose of maxpooling is to reduce the dimensionality of the feature map. In my case each maxpooling is of size 3 and stride 2. What does it mean by reducing the dimensionality and then what will be the size of the output from each layer upon considering that there is a maxpooling operation after each convolutional layer?
AI: Question 1) Is my calculation for each case above correct?
No, it is not correct. The formula to calculate the spatial dimensions (height and width) of a (square shaped) convolutional layer is
$$O = \frac{I - K + 2P}S + 1$$ with $I$ being the spatial input size, $K$ being the kernel size, $P$ the padding and $S$ the stride. For your two cases that is (assuming padding of $1$ since otherwise numbers won't fit later with pooling):
Case 1:
Layer 1: The spatial dimensions are $O_1 = \frac{28 - 3 + 2\cdot1}1 + 1 = 28$. So the total layer has size $height \cdot width \cdot depth = 28 \cdot 28 \cdot 20$.
Layer 2: The second layer has $O_2 = \frac{28 - 3 + 2\cdot1}1 + 1 = 28$ with a total layer size of $height \cdot width \cdot depth = 28 \cdot 28 \cdot 40$.
With regards to parameters please note that does not refer to the size of a layer. Parameters are the variables which a neural net learns, i.e. weights and biases. There are $1 \cdot 20 \cdot 3 \cdot 3$ weights and $20$ biases for the first layer. And $20\cdot 40 \cdot 3 \cdot 3$ weights and $40$ biases for the second layer.
Case 2:
The sizes of the convolutional layer do not depend on the number of input channels. However, the number of parameters for the first layer increases to $2 \cdot 20 \cdot 3 \cdot 3$ while the number of biases remains unchanged. The second convolutional layer is not affected at all.
Question2) I have read that the purpose of maxpooling is to reduce the dimensionality of the feature map. In my case each maxpooling is of size 3 and stride 2. What does it mean by reducing the dimensionality and then what will be the size of the output from each layer upon considering that there is a maxpooling operation after each convolutional layer?
Typically convolutional layers do not change the spatial dimensions of the input. Instead pooling layers are used for that. Almost always pooling layers use a stride of 2 and have size 2x2 (i.e. the pooling does not overlap). So your example is quite uncommon since you use size 3x3.
You can apply the same formula as above (assuming padding again - see footnote (1) for an explanation)
$O_{pool\text{ }1} = \frac{28 - 3 + 2\cdot0.5}2 + 1 = 14$
Since now the input to the second conv. layer has different spatial dimensions you would get:
$O_{CNN\text{ }2} = \frac{14 - 3 + 2\cdot1}1 + 1 = 14$
And finally:
$O_{pool\text{ }2} = \frac{14 - 3 + 2\cdot0.5}2 + 1 = 7$
Footnotes:
(1) The padding of $0.5$ is the average padding required in each spatial dimension. As an example, have a look at how the second maxpooling kernel scans an input (I have chosen the second one since the input is smaller and, therefore, easier to visualize). The input has width and height of $14$:
As you can see from the drawing the actual padding is only required on one side for height and width respectively. Therefore, the actually performed padding is
$$padding_{width} = \frac{padding_{left} + padding_{right}}{2}=\frac{0 + 1}{2}=0.5$$
And the same numbers apply to the height. Accordingly, the actual padding is $(0.5, 0.5)$.
Moreover, the drawing shows 7x7 dark blue squares (kernel centers), which visualizes the calculation leading to an output of $pool_2$ with spatial dimensions of 7x7 ($O_{pool\text{ }2}=7$). |
H: Forward pass vs backward pass vs backpropagation
As mentioned in the question, I have some issues understanding the differences between these terms.
From what I have understood:
1) Forward pass: compute the output of the network given the input data
2) Backward pass: compute the output error with respect to the expected output and then go backward into the network and update the weights using gradient descent, etc.
What is backpropagation then? Is it the combination of the previous 2 steps? Or is it the particular method we use to compute dE/dw (chain rule, etc.)?
AI: In a narrow sense backpropagation only refers to the calculation of the gradients. So it does, for example, not include the update of any weights. But usually it is used to refer to the whole backward pass.
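To make the distinction concrete, here is a minimal sketch for a single linear layer with a squared-error loss (plain NumPy, a toy example of my own rather than code from any framework):
import numpy as np

x = np.random.randn(4, 3)   # 4 samples, 3 features (toy data)
y = np.random.randn(4, 1)   # targets
W = np.random.randn(3, 1)   # weights of a single linear layer
lr = 0.1

# 1) forward pass: compute the network output and the loss
y_hat = x @ W
loss = 0.5 * np.mean((y_hat - y) ** 2)

# 2) backpropagation (narrow sense): compute dE/dW via the chain rule
dloss_dyhat = (y_hat - y) / len(x)
dE_dW = x.T @ dloss_dyhat

# 3) weight update (gradient descent): part of the backward pass,
#    but not of backpropagation in the narrow sense
W -= lr * dE_dW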
Also see Wikipedia. |
H: NA in LR model summary(R)
So, I was trying to improve my LR model by performing multiple linear regression on a dataset. I had a categorical variable, region.
Region(variable):
Midwest
Northeast
South
West
I made a dummy variable for each of them and it did improve my model a bit.
Previous Model Summary
After adding these variables (which I made using a different variable):
I am getting NA in the coefficient for West, and I don't understand why. Can someone explain?
AI: You've given all four regions a dummy variable, so these are perfectly multicollinear, and the (unpenalized?) regression doesn't have a unique solution. R automatically drops a column in this situation and reports the NA.
https://stats.stackexchange.com/q/212903/232706
https://stackoverflow.com/q/7337761/10495893
https://stats.stackexchange.com/q/25804/232706
(You can see where R calls C and subsequently FORTRAN functions, then inserts the NAs at
https://github.com/wch/r-source/blob/0f07757ad10ca31251b28a2c332812e63c0acf38/src/library/stats/R/lm.R#L117
A nice article that helped me find that: http://madrury.github.io/jekyll/update/statistics/2016/07/20/lm-in-R.html ) |
H: PCA Regression Problem
I have a regression problem whereby my data has 21 features and I wish to apply dimensionality reduction using PCA. All the tutorials I have seen so far use PCA for classification problems. I did do PCA for regression, but I am unable to display the nice scatterplots that show PC1 on the x-axis, PC2 on the y-axis, and the targets in the middle.
I wrote the following code
X = self.X
pca = PCA(n_components=NUM_FEATURES_PCA)
principal_components = pca.fit_transform(X)
principalDf = pd.DataFrame(data=principal_components,
columns=['PC1', 'PC2'])
finalDf = pd.concat([principalDf, self.df[[self.target_variable]]], axis=1)
plt.scatter(finalDf.loc[self.df[self.target_variable], 'PC1']
, finalDf.loc[self.df[self.target_variable], 'PC2'], s=50)
plt.xlabel('PC1', fontsize=15)
plt.ylabel('PC2', fontsize=15)
plt.title('2 component PCA', fontsize=20)
plt.show()
So, in other terms, could we display such plots for PCA in regression? Or should we transform the continuous target variable to a categorical (labeled) one through binning or the like?
references: these plots
AI: First of all, you can project your explanatory variable (continuous) onto your first plane (PC1 + PC2). The direction of the arrow (projection) and how far it goes from the axis origin will tell you how the points are distributed according to this representation of variables in your factorial plane.
On the other hand, the quick answer is to group your continuous variable into chunks (discretize it into an ordinal one); then you'll have the same plot as the reference.
Furthermore, you could try to color your scatter plot using a color scale (from white to black, red to blue...), and you'll see if there's some kind of progression of your data in that factorial plane according to the continuous variable.
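For the coloring option, a minimal sketch with scikit-learn and matplotlib could look as follows (synthetic data stands in for your 21-feature regression dataset):
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA

# synthetic stand-in for your data: 21 features, continuous target
X, y = make_regression(n_samples=200, n_features=21, noise=10, random_state=0)

pcs = PCA(n_components=2).fit_transform(X)

# color each point by the value of the continuous target
sc = plt.scatter(pcs[:, 0], pcs[:, 1], c=y, cmap='viridis', s=50)
plt.colorbar(sc, label='target')
plt.xlabel('PC1', fontsize=15)
plt.ylabel('PC2', fontsize=15)
plt.title('2 component PCA', fontsize=20)
plt.show()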
These three "strategies" actually show the same thing, although the second one is more sensitive to the cut points.
Summarizing:
Project the continuous variable in the factorial plane (what PCA usually does).
Group your continuous variable into bins and plot it (the easiest way to see it, although it is sensitive to the cuts).
Color your scatter plot using a scale (using the min and max values of your continuous variable). |
H: Previous work Replication and Research ethics
I am very much concerned about abiding by research ethics in my work, especially issues to do with plagiarism. I came across a recent research paper in my field of study that applies state-of-the-art tools (deep learning architectures) using a publicly available dataset.
I am impressed by their work and feel I should apply the same methodology they used but using my dataset (private).
Would this be considered a plagiarised version of their work?
AI: I wouldn't think so. If they're publishing their methodology, they want other people to see how well it works and apply it to their work. You'll probably want to explain why you think this method works best for your dataset and compare the performance results to other methods commonly used. |
H: (pre-trained) python package for semantic word similarity
I am searching for a python package that calculates the semantic similarity between words. I do not want to train a model (which is what most packages seem to offer) - the package should have been pre-trained on ideally thousands of natural-language books and documents (e.g. on how often words occur in close proximity to each other in the training material) and be simple to install/use. As in the pseudo code example below:
import XYZ
assessor = XYZ.loadPreTrainedModel("standard_text")
assessor.scoreWords("pilot", "airplane") # returns 0.94 (I made up these numbers)
assessor.scoreWords("student", "university") # returns 0.91
assessor.scoreWords("cat", "dog") # returns 0.82
assessor.scoreWords("cat", "airplane") # returns 0.13
assessor.scoreWords("student", "apple") # returns 0.25
...
AI: The spaCy Python package might work for you. It allows you to easily "install" large pre-trained language models, and it provides a nice high-level interface for comparing word vectors.
To install spaCy:
pip install spacy
Then you need to download a language model. I believe these models are trained on Common Crawl, which is a massive dataset. You should choose one of the medium or large models, because the small models do not ship with word vectors.
python -m spacy download en_core_web_md
Using spacy models to compute word similarity is a breeze:
import spacy
# load the language model
nlp = spacy.load('en_core_web_md')
word1 = 'cat'
word2 = 'dog'
# convert the strings to spaCy Token objects
token1 = nlp(word1)[0]
token2 = nlp(word2)[0]
# compute word similarity
token1.similarity(token2) # returns 0.80168
Here's an example that's more similar to the one in your question:
import spacy
nlp = spacy.load('en_core_web_md')
token = lambda word: nlp(word)[0] # shortcut to convert string to spacy.Token
score_words = lambda w1, w2: token(w1).similarity(token(w2))
score_words("pilot", "airplane") # 0.5998
score_words("student", "university") # 0.7238
score_words("cat", "dog") # 0.8017
score_words("cat", "airplane") # 0.2654
score_words("student", "apple") # 0.0928 |
H: Fastest way to relearn machine/deep learning
I hope I came to the right place to ask this question.
Back when I was at college I studied machine and deep learning in depth. My whole programme was based on those areas. I knew all the underlying maths; even today I know how to derive backpropagation for any feed-forward network. Well, maybe I would need to take a peek. But I still understand the math and I can follow without problems. Back then (2017) I was even working with and researching GANs, which were a novelty at the time.
Basically, I was at the hotspot at that time. I was working with all sorts of algorithms, from logistic regression and SVMs to MLPs, CNNs, RNNs (mostly LSTM), and was trying the already mentioned GANs. Oh, heuristics too: GAs, tabu search, simulated annealing, etc. There was also some NLP involved.
And then after college I went into the gaming industry, heh. During that time I was (and still am) working with ML/DL (and some OpenCV), but mostly on easy, toy projects (although one was a real-life project, it was easy: I had to extract handwritten digits from paper and classify them).
So, my question is: what is the best (and fastest) way to get back on track considering my, I would say, pretty strong background? I saw that Kaggle has some courses on their site; for example, the course on DL is estimated at 4 hours, and the course on feature engineering is also 4 hours. That is not a lot of time, but I am afraid it would be too easy for me and consequently a waste of time.
What are some good resources to refresh/relearn ML/DL considering my background?
AI: If you enjoy reading, that's probably the best ML/DL book ever. |
H: Implementing "full convolution" to find gradient w.r.t the convolution layer inputs
I've been trying to implement "full convolution" to compute the gradient with respect to the convolution layer inputs. According to this article, it looks like this:
So, I wrote this function:
def full_convolve(filters, gradient):
filters = np.ones((5,5))
gradient = np.ones((8,8))
result = list()
output_shape = 12
filter_r = filters.shape[0] - 1
filter_c = filters.shape[1] - 1
gradient_r = gradient.shape[0] - 1
gradient_c = gradient.shape[1] - 1
for i in range(0,output_shape):
if (i <= filter_r):
row_slice = (0, i + 1)
filter_row_slice = ( 0 , i + 1)
elif ( i > filter_r and i <= gradient_r):
row_slice = (i - filter_r, i + 1)
filter_row_slice = (0, i + 1)
else:
rest = ((output_shape - 1) - i )
row_slice = (gradient_r - rest, i + 1 )
filter_row_slice = (0 ,rest + 1)
for b in range(0,output_shape):
if (b <= filter_c):
col_slice = (0, b + 1)
filter_col_slice = (0, b+1)
elif (b > filter_c and b <= gradient_c):
col_slice = (b - filter_c, b + 1)
filter_col_slice = (0,b+1)
else:
rest = (output_shape - 1 ) - b
col_slice = (gradient_r - rest , b + 1)
filter_col_slice = (0, rest + 1)
r = np.sum(gradient[row_slice[0] : row_slice[1], col_slice[0] : col_slice[1]] * filters[filter_row_slice[0]: filter_row_slice[1], filter_col_slice[0]: filter_col_slice[1]])
result.append(r)
    result = np.asarray(result).reshape(12,12)
    return result
I tested this with ones and the output seems correct (if I get "full convolution" right):
[[ 1. 2. 3. 4. 5. 5. 5. 5. 4. 3. 2. 1.]
[ 2. 4. 6. 8. 10. 10. 10. 10. 8. 6. 4. 2.]
[ 3. 6. 9. 12. 15. 15. 15. 15. 12. 9. 6. 3.]
[ 4. 8. 12. 16. 20. 20. 20. 20. 16. 12. 8. 4.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 4. 8. 12. 16. 20. 20. 20. 20. 16. 12. 8. 4.]
[ 3. 6. 9. 12. 15. 15. 15. 15. 12. 9. 6. 3.]
[ 2. 4. 6. 8. 10. 10. 10. 10. 8. 6. 4. 2.]
[ 1. 2. 3. 4. 5. 5. 5. 5. 4. 3. 2. 1.]]
However, I don't like all these manual checks and if/else statements. I feel there is a better way to implement this in NumPy (perhaps using some zero padding or something like that). Can anyone suggest a better approach? Thanks
AI: Code:
import numpy as np
from scipy import signal
j5 = np.ones((5,5))
j8 = np.ones((8,8))
c58 = signal.convolve2d(j5, j8, boundary='fill') # by default filled with 0, which is correct for your case
Results in:
print(c58)
[[ 1. 2. 3. 4. 5. 5. 5. 5. 4. 3. 2. 1.]
[ 2. 4. 6. 8. 10. 10. 10. 10. 8. 6. 4. 2.]
[ 3. 6. 9. 12. 15. 15. 15. 15. 12. 9. 6. 3.]
[ 4. 8. 12. 16. 20. 20. 20. 20. 16. 12. 8. 4.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 5. 10. 15. 20. 25. 25. 25. 25. 20. 15. 10. 5.]
[ 4. 8. 12. 16. 20. 20. 20. 20. 16. 12. 8. 4.]
[ 3. 6. 9. 12. 15. 15. 15. 15. 12. 9. 6. 3.]
[ 2. 4. 6. 8. 10. 10. 10. 10. 8. 6. 4. 2.]
[ 1. 2. 3. 4. 5. 5. 5. 5. 4. 3. 2. 1.]]
Reference to see other options when you'd need them:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html |
H: Random Forest prediction fails due to unseen Features
I have trained a Random Forest model on some dataset and would like to predict outcomes on other data which were not seen in training. When doing this, I get
ValueError: Number of features of the model must match the input. Model n_features is 12 and input n_features is 13
The problem is that there are some variables from the training data not existent in my prediction set. E.g. I capture the count of some feature via dummy variables D_0, D_1, D_2, D_3 indicating the number of occurences of D. I might have no D_2 in my training data but D_2 in my prediction data set.
What's best practice in such a case? I am planning to use this estimator repeatedly on future data and I can't know which features will be existent. Should I rather check for inconsistencies between both feature lists and manually correct those which do not overlap? In the above example, I'd code all occurences of D_2 to D_3 in order to align feature lists.
AI: The problem is the way you're doing one-hot encoding.
Best practice for any type of encoding:
You should fit the one-hot encoder on the training data only, and when encoding test data, you should reuse that same fitted encoder.
E.g., sklearn.preprocessing.OneHotEncoder does this, and it has a parameter called handle_unknown:
handle_unknown{‘error’, ‘ignore’}, default=’error’
Whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise). When this parameter is set to ‘ignore’ and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature will be all zeros. In the inverse transform, an unknown category will be denoted as None.
The best option is to set this parameter to 'ignore', so that unknown feature values are ignored instead of raising an error, until you eventually retrain your model and add the new feature values to it.
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder(handle_unknown='ignore')
train = ohe.fit_transform(train)
test = ohe.transform(test)
Or you could, as you said, manually correct differences in the feature space, but that would be time-consuming at each update of your model, and it would not exclude the possibility of your code raising an error if you're sloppy in the manual correction. |
H: Why continuous features are more important than categorical features in decision tree models?
I have both categorical and continuous features in my prediction model and want to select (and rank) the most important features.
I have converted all categorical variables into dummy variables using one hot encoding (for better interpretation in my logistic regression model).
On one hand, I use LogisticRegression (sklearn) and rank the most significant features by using their coefficients. In this way, I see both categorical and continuous variables among the most important features.
On the other hand, When I want to rank the features by using Decision Tree models (SelectFromModel) they always give higher scores (feature_importances_) first to continuous features and then to categorical (dummy) variables. A completely different behavior in comparison with Logistic Regression.
Whilst the performance of Decision Tree models is much higher (about 15%) than the performance of Logistic Regression, I want to know which sorting of features (Decision Tree or Logistic Regression) is more correct? And why Decision Tree models give more priority to continuous features?
AI: It could be the way that you encode categorical variables.
If you do one-hot encoding (dummy variables), each encoded feature will only have two possible values [0, 1]. Binary variables normally have less importance in decision trees given how the feature importance is computed.
Say, for example, that you are trying to predict the condition of a patient at the hospital (Alive == 0, Dead == 1). Imagine that you have a binary feature called Head_Shot [0, 1] that is really rare; it only appears a few times in the dataset.
The linear model will assign a lot of weight to this coefficient since it is crucial for the target variable. If this happens, the rest of the features have no meaning.
A decision tree, however, might use it for a split at just one node of the tree, and since the importance of a feature is weighted by the number of times it appears in splits, it wouldn't get such a relevant weight.
I am assuming you are doing one-hot encoding; with other encoding techniques it will be different. I am also making assumptions about how the feature importance is calculated, so this is far from a scientific answer.
Continuous variables can have more importance in decision trees because each tree can do several splits along its way.
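As a rough illustration (a toy sketch with synthetic data, not taken from the question), you can see this effect by comparing the importance scores of a continuous feature and a one-hot style binary feature in scikit-learn:
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(42)
n = 1000

continuous = rng.normal(size=n)            # continuous feature
binary = (rng.rand(n) < 0.5).astype(int)   # dummy-style binary feature

# target depends on both features to a similar degree
y = ((continuous + binary + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

X = np.column_stack([continuous, binary])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# the continuous feature typically receives a noticeably higher score,
# because trees can split on it many times at different thresholds
print(model.feature_importances_)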
Sorry for the Head_Shot example, it is a bit drastic but I believe it makes the point. |