text
stringlengths 83
79.5k
|
---|
H: How to perform upsampling (and NOT interpolation) process theoretically modelled?
As an example, I know that sampling a signal $s$ is modelled by multiplication of s by a dirac comb, which has the effect of convolving the Fourier Transform (FT) of $s$ by the FT of the dirac comb which is another dirac comb but with inverse spacing between peaks.
My question is what is the corresponding process for when one upsamples (e.g., by a factor of 2), , i.e., inserts zeros between samples, and NOT yet applies interpolation since I know for interpolation you need to follow upsampling by a lowpassfilter whicg is a convolution a signal $s$?
AI: If I understood your question right, you want a mathematical expression for the I/O (input/output) relationship of a signal expander (name of the block that expands (upsamples) -without interpolation filering- an input signal $x[n]$)
Below is a block diagram of signal expander by a factor of $L$:
$$ x[n] \longrightarrow \boxed{ \uparrow L } \longrightarrow x_e[n] \tag{1}$$
An expression for the expanded sequence $x_e[n]$ can be written as :
$$ x_e[n] = \begin{cases} { x[\frac{n}{L}] ~~~,~~~ n=m\cdot L ~~~,~~~ m=...,-1,0,1,... \\ ~~~ 0 ~~~~~~,~~ ~ \text{otherwise} } \end{cases} \tag{2} $$
An identical expression is also the following :
$$x_e[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n-L\cdot k] \tag{3}$$
where $\delta[n]$ is a unit sample (discrete-time impulse).
With $L=3$, an input $x[n]=[1,2,3,4]$ becomes output $x_e[n] =[1,0,0,2,0,0,3,0,0,4,0,0]$. |
H: Determine relationship between users and age?
I would like to understand how to find an association between users, spam and email's age.
My dataset looks like as follows:
User Spam Age (yr)
porn_23 1 1
Mary_g 0 6
cricket_s54 0 4
rewuoiou 1 0
pure75 1 2
giogio35 0 10
viv3roe 1 1
I am looking at the correlation using Pearson. Is it right? I would like to determine the correlation between age and user: spam email should likely come from users having recent email's addresses (fake account / email).
AI: If you are using pandas, all you need to do is:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
corrMatrix = df.corr()
Then you can print the correlation matrix and also plot it using seaborn or any other plotting method.
sns.heatmap(corrMatrix, annot=True)
plt.show()
Hope this helps. |
H: Why do we need to have the test set remain consistent across multiple runs?
In the book Hands-on machine learning with scikit-learn and tensorflow: concepts, tools, and techniques to build intelligent systems, more specifically in Chapter 2, the writer is teaching us how to create a test set. He mentions that we need to keep the test set consistent across multiple runs. To do so, he mentioned the following:
A common solution is to use each instance’s identifier to decide
whether or not it should go in the test set (assuming instances have a
unique and immutable identifier). For example, you could compute a
hash of each instance’s identifier and put that instance in the test
set if the hash is lower or equal to 20% of the maximum hash value.
This ensures that the test set will remain consistent across multiple
runs, even if you refresh the dataset. The new test set will contain
20% of the new instances, but it will not contain any instance that
was previously in the training set.
My question is why is it important to do so? Why do we not like to generate different test sets for each run?
AI: You will judge the performance of the trained model based on certain performance metrics.
If you keep updating your test set, then you will not know whether one run is better than the other.
For e.g. you are trying to predict the value of house. And you have only one data point in your test set and you are using mean absolute error (MAE) as your performance criteria.
Scenario 1: the actual value of house is 100K, your prediction with first model is 101K. MAE = 1000
Scenario 2: the the actual value of house is 25K, your prediction with second model is 26K. MAE = 1000
In reality, prediction is better with model1 as compared to model2. But because you changed your test set, you will not be able to make that conclusion. Had you kept your test set same in scenario2, your prediction would have been much higher making MAE much higher, clearly indicating that model1 is better than model2.
Hope this helps. |
H: Why is it valid to remove a constant factor from the derivative of an error function?
I was reading the book 'Make your own neural network' by Tariq Rashid. In his book, he said:
(Note - He's talking about normal feed forward neural networks)
The $t_k$ is the target value at node $k$, the $O_k$ is the predicted output at node $k$, $W_{jk}$ is the weight connecting the node $j$ and $k$ and the $E$ is the error at node $k$
Then he says that, we can remove the 2 because we only care about the direction of the slope of the error function and it's just a scaling factor. So, can't we remove $sigmoid($$\sum_{j}$$ W_{jk}. O_j)$, as we know it would be between $0$ and $1$, and so it would also just act as a scaling factor. If you then see, we can remove everything after $(t_k-O_k)$, as we know the whole expression would be between $0$ and $1$, and so it would just act as a scaling factor. So that leaves us with just:
$$t_k-O_k$$
which is definitely the wrong derivative.
If we can't remove that whole expression, then why did he removed the $2$, as they both were scaling factors?
AI: You can remove the factor because:
It is constant with respect to the variable you compute the derivate on.
The constant is positive
You have this constant factor for all variables you compute the derivative on. So if $\nabla f(\mathbf{W})$ would be the correct gradient, by setting the constant factor to $1$, you get $\frac{1}{2} \nabla f(\mathbf{W})$.
When you use (stochastic) gradient descent, you scale the gradient anyway (with the learning rate / step size). So it is important to have the correct gradient, up to a scaling factor (which must be independent of the variables, and positive).
You could also set the scaling factor $\frac{1}{2}$ already in the loss function $L$. So if $L$ is the unscaled loss and $L' = \frac{1}{2}L$, then you have $L(\mathbf{w})$ is minimal if and only if $L'(\mathbf{w})$ is minimal. |
H: Why deep feedforward neural networks are restricted to solve classification problems?
I want to use deep neural networks for regression problems, but as far as I've read through, it's mainly for classification. I was wondering why can't a regular convolutional network, or a multilayer perception work as a linear regression, just with multiple layers.
I tried connecting two convolutional layers with max pooling on each, and a fully connected layer together, without activation function, with 10000 MNIST images, 100 epochs, and with 100 batch size. Used mean squared error as the loss function, and regular stochastic gradient descent as the optimizer. Instead of one hot encoding, I used the actual expected numbers as labels.
The results are very promising even though I haven't played optimizing the parameters. The loss after the training is under 1, and I've got R2 values over 0.9 at testing.
So the question is, why neural networks are mainly used for classification while it could handle regression as well?
AI: There is regression by neural networks. For example, have a look at this analysis (Deep Regression), RBF neural networks, and General regression neural networks. Also, linear regression can be implemented using a neural network.
If using neural networks for interpolation, this may work very good. However, if regression is used to extrapolate, results may be unsatisfactory.
Note also that the universal approximation theorems say only that a contineous function can be approximated within a compact set (thus in particular bounded set!). |
H: Are "Gradient Boosting Machines (GBM)" and GBDT exactly the same thing?
In the category of Gradient Boosting, I find some terms confusing.
I'm aware that XGBoost includes some optimization in comparison to conventional Gradient Boosting.
But are Gradient Boosting Machines (GBM) and GBDT the same
thing? Are they just different names?
Apart from GBM/GBDT and XGBoost, are there any other models fall into
the category of Gradient Boosting?
AI: Boosting is an ensemble technique where predictors are ensembled sequentially one after the other(youtube tutorial. The term gradient of gradient boosting means that they are ensembled using the optimization technique called gradient descent (Boosting Algorithms as Gradient Descent.
Given this, you can boost any kind of model that you want (as far as I know). Moreover in the scikit learn library, gradient boosting, its under the ensemble folder. You can boost any kind of model (linear, svm) its just that decision trees achieve normally great results with this kind of ensemble. Same way that you can do a bagging with any kind of estimator, but if you do it with a decision tree and add a couple technichalities more, you can call it Random Forest.
From scikit learn documentation: GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function.
But are Gradient Boosting Machines (GBM) and GBDT the same thing? Are they just different names?
Gradient boosting machines are a kind of ensemble and Gradient Boosting Decision Tree the particular case when a tree is used as the estimator.
Apart from GBM/GBDT and XGBoost, are there any other models fall into the category of Gradient Boosting?
You can use any model that you like, but decision trees are experimentally the best.
"Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like support vector machines (SVM)." Kai Ming TingLian Zhu, Springer |
H: Replacing value not in list in Pandas
Say I have a data frame in Python.
If I want to replace all values in the say, the Size column that are not 'M' or 'S' or 'L' with nan, how do I do so?
Thanks in advance
AI: I don't know if it's the best way, but I'd write a function and apply it to the df['size']:
def rename_size(size):
if size not in ['M','S','L']:
return np.nan
else:
return size
df['size']= df['size'].apply(rename_size) |
H: In XGBoost, how is a leaf index corresponding to the particular leaf node in actual base learner trees?
I've trained a XGBoost model for regression, where the max depth is 2.
# Create the ensemble
ensemble_size = 200
ensemble = xgb.XGBRegressor(n_estimators=ensemble_size, n_jobs=4, max_depth=2, learning_rate=0.1,
objective='reg:squarederror')
ensemble.fit(train_x, train_y)
I've plotted the first tree in the ensemble:
# Plot single tree
plot_tree(ensemble, rankdir='LR')
Now I retrieve the leaf indices of the first training sample in the XGBoost ensemble model:
ensemble.apply(train_x[:1]) # leaf indices in all 200 base learner trees
array([[6, 6, 4, 6, 4, 6, 5, 5, 4, 5, 4, 3, 5, 4, 5, 3, 6, 3, 5, 5, 3,
3,
3, 5, 4, 4, 3, 4, 3, 6, 6, 6, 4, 6, 6, 3, 5, 3, 5, 4, 6, 4, 4, 6,
3, 3, 6, 3, 6, 3, 4, 3, 6, 6, 3, 6, 5, 3, 6, 6, 3, 4, 6, 5, 3, 3,
3, 6, 3, 4, 3, 6, 3, 6, 3, 3, 3, 4, 6, 3, 4, 4, 6, 3, 3, 6, 3, 6,
6, 3, 3, 4, 4, 4, 3, 3, 6, 6, 3, 3, 6, 3, 3, 3, 6, 6, 6, 4, 4, 3,
5, 3, 3, 3, 4, 5, 3, 3, 6, 3, 3, 6, 3, 4, 5, 3, 6, 3, 5, 3, 4, 4,
3, 3, 4, 6, 6, 6, 6, 3, 4, 4, 3, 5, 6, 6, 3, 5, 3, 3, 6, 6, 3, 3,
6, 3, 3, 4, 4, 3, 4, 3, 5, 3, 3, 3, 3, 3, 4, 4, 6, 3, 6, 4, 4, 5,
6, 3, 4, 5, 6, 3, 4, 3, 4, 5, 6, 6, 5, 4, 3, 3, 6, 6, 3, 6, 5, 4,
3, 3]], dtype=int32)
Here is my question:
Since there are four leaf nodes in the first tree, how come there is
index 6 for the first training sample?
In the official doc for apply(), it says "Leaves are numbered within [0; 2**(self.max_depth+1)), possibly with gaps in the numbering." So if max_depth is 2, the leaves are numbered between 0 and 7. Since there are only four leaves in a binary tree of depth 2, shouldn't the leaves numbered within [0, 4)? What is the reason behind the design $[0; 2^{(self.max\_depth+1)})$?
Related question: https://stackoverflow.com/questions/58585537/how-to-interpret-the-leaf-index-in-xgboost-tree
AI: I think what you are seeing is the fact that all nodes in the tree are indexed because a priori the model doesn't know where splits will happen (i.e. any node could be a leaf). My guess is that the nodes follow an ordering similar to:
In your case all of the leaf nodes are at the max depth of the tree, so nodes 3-6 show up in your list. By contrast, if your data was all the same value I would expect all labels to be node 0 (because the split criteria was not met). And then you could have intermediate situations where after 1 split there is a node which does not meet the split criteria (in this case you could see either node 1 or 2 show up in your list).
Hope this helps! |
H: Back Propagation Vs Learning rate in Neuralnet Optimisation
I was doing some research on how backpropagation works? I read that, backpropagation is used to find the optimal weight of each neuron after every iteration using partial derivates and updates the weights of the neuron.
On the other hand, we have hyperparameter called 'learning-rate' used to update the weight of the neuron in each iteration by calculating the direction of the error.
These two cases look like working independently, I mean, while backpropagation algorithm itself finding the optimal weight, we do not need a learning rate parameter itself.
Is my understanding correct? Please correct me if I am wrong.
AI: Using backpropagation is nothing else than performing (stochastic) gradient descent.
It computes the gradient, but it is not the "optimal" weight. The gradient is used to update the current weight (according to the gradient descent algorithm).
The gradient descent algorithm needs a step size (which is called learning rate in the context of machine learning).
The step size defines how "strong" the current weights are updated by the current gradient. |
H: Hyperparameter Tuning in Random Forest Model
I'm new to the machine learning field, and I'm learning ML models by practice, and I'm facing an issue while using the machine learning model.
While I'm implementing the RandomForestClassifier model with hyper tunning it's taking too much time to predict output. And I'm also using GridSearchCV on it. so it's take much time.
Is there any way how can I solve this problem.
OR, Is Google Colab or Kaggle Notebook editor can perform better than Jupiter Notebook ?
AI: You can access the GPU by going to the settings:
Runtime> Change runtime type and select GPU as Hardware accelerator. |
H: Analysis of passed/not passed students by enrolled year
I am analysing some data from my University about successful students through years. The dataset looks like as follows
Student Passed Enrolled_Year
A 0 2016
B 1 2008
C 1 2008
D 0 2012
E 1 2007
F 1 2019
G 1 2006
H 1 2006
I 0 2012
L 1 2019
M 0 2018
N 1 2008
I already plotted the frequency of passed/not passed through years, to see when students, who passed exams.
I would like to know if it makes sense to look at a possible correlation between Passed and Enrolled_Year, when the number of Passed students is small compared to those who not passed.
I would like to get some insights from my dataset:
https://drive.google.com/file/d/1tUgHoFdzq2vSLHaPzAiNQC5Xkrk2SmKX/view?usp=sharing and one of this was looking at the possible correlation.
I would appreciated it if you could let me know if it makes sense a such analysis with big difference among frequency.
AI: To start with, you could use a simple thresholding.
If you have the dataset $S$ where an element has the form $(x,y,c) \in S$, $x$ denotes the year, $y$ is a binary value (exam passed or not), and $c$ is the student id.
you can obtain a classifier by using
$\{(x,y,c) \in S \mid x \leq \theta\}$ and $\{(x,y,c) \in S \mid x > \theta\}$.
Now you can check all plausible $\theta$ values (..,2008,2009,..,2020) to see how good your data can be seperated. For example you could use Matthews correlation coefficient to evalute which $\theta$ is best.
But If you want to see if there is some linear correlation, yes you could compute the correlation coefficient, but I would assume that there is no linear correlation. Essentially you are computing the Point-biserial correlation coefficient.
Also with the thresholding method, in case you see that all $\theta$ perform more or less the same, and badly, you can see that there is no linear correlation.
You should have a look at histograms to see how your data is distributed. |
H: EarlyStopping based on the loss
When training my CNN model, based on the random initialization of weights, i get the prediction results. In other words, with the same training and test data i get different results every time when i run the code. When tracking the loss, i can know if the result would be acceptable or not. Based on this, i want to know if there is a way to stop the training if the loss begins by a value superior to a desired one in order to re-run it. The min_delta of the EarlyStopping does not treat this case.
Thanks in advance
AI: You can extend the base Keras implementation of callbacks with a custom on_epoch_end method which compares your metric of interest against a threshold for early stopping.
From the linked article they provide a code sample with a custom callback class + calling that class during model.fit:
import tensorflow as tf
# Implement callback function to stop training
# when accuracy reaches ACCURACY_THRESHOLD
ACCURACY_THRESHOLD = 0.95
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc') > ACCURACY_THRESHOLD):
print("\nReached %2.2f%% accuracy, so stopping training!!" %(ACCURACY_THRESHOLD*100))
self.model.stop_training = True
# Instantiate a callback object
callbacks = myCallback()
# Load fashion mninst dataset
mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
# Scale data
x_train, x_test = x_train / 255.0, x_test / 255.0
# Build a conv dnn model
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', \
loss='sparse_categorical_crossentropy', \
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, callbacks=[callbacks])
Check the linked article for more details:
https://towardsdatascience.com/neural-network-with-tensorflow-how-to-stop-training-using-callback-5c8d575c18a9 |
H: Can I do bagging method as improvement technique to decision tree in research?
Bagging use decision tree as base classifier. I want to use bagging with decision tree(c4.5) as base as the method that improve decision tree(c4.5) in my research that solve problem overfitting. Is that possible because some lecturers said not right as bagging is other classifier not hybrid between two?
AI: Let's clarify a few things first:
The bagging technique is an ensemble method which is not specific to decision trees, it can be applied to any classification method.
It's worth noting that there is another ensemble method specifically for decision trees, it's called Random Forest. While it's not the same method, it is known to generally improve performance compared to a regular Decision Tree algorithm like C4.5.
These techniques exist because they have been proved to improve performance in general, but whether they would improve performance on a specific problem (and by how much) has to be tested.
Also just to be clear: these techniques are already established, so using them wouldn't be an original research contribution in the field of Machine Learning. Their application to a new problem in a specific research domain might be a contribution to this domain, but that depends on the specific context. |
H: Is Recursive Feature Elimination finding best features subset?
On a set of 9 features I have applied Recursive Feature Elimination (RFE) algorithm using SVM estimator, following approach from (1). When requesting a subset of size 1 to be found, then RFE returned feature X.
However, when I trained SVM over each feature individually, I found another feature Y to have higher accuracy than SVM trained over X.
I thought that RFE finds features with the highest accuracy.
Is my understanding of RFE wrong?
(1): Gene selection for cancer classification using support vector machines
AI: No, RFE cannot guarantee that it finds the feature subset with optimal score.
As with most greedy processes, the point of RFE is to reduce the computational cost (fitting a model for each of the $2^m$ feature subsets), at the cost of perhaps not finding the actual optimum (but hopefully "close enough").
See also https://stats.stackexchange.com/questions/232461/question-about-recursive-feature-elimination |
H: Learning with Positive labels only
I have ~7 million rows of customer data (~500 sparse attributes)
A million out of them have opted in to a new service.
How do I use this signal to predict which of the remaining customers are likely to adopt the service? And how do I measure the effectiveness?
Problems face so far -
Unable to treat this as a supervised problem due to lack of definitely negative variable
Unable to apply label propagation because there is only one class
Apart from treating this as an anomaly detection problem (oneclasssvm etc.), I also tried using nearest neighbors based approach.
Looking for other ways to solve the problem if there are some go-to techniques that I am missing.
I know there is an answer here but it only talks about oneclasssvm that I have already tried. Also trying to find ways to measure model effectiveness along with any novel ways to solve.
AI: The topic you are interest in is called "PU learning" or "positive and unlabeled learning".
You can start by having a look into survey literature. |
H: Pytorch LSTM not training
So I am currently trying to implement an LSTM on Pytorch, but for some reason the loss is not decreasing. Here is my network:
class MyNN(nn.Module):
def __init__(self, input_size=3, seq_len=107, pred_len=68, hidden_size=50, num_layers=1, dropout=0.2):
super().__init__()
self.pred_len = pred_len
self.rnn = nn.LSTM(
input_size=input_size,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
bidirectional=True,
batch_first=True
)
self.linear = nn.Linear(hidden_size*2, 5)
def forward(self, X):
lstm_output, (hidden_state, cell_state) = self.rnn(X)
labels = self.linear(lstm_output[:, :self.pred_len, :])
return lstm_output, labels
And my training loop
LEARNING_RATE = 1e-2
net = MyNN(num_layers=1, dropout=0)
compute_loss = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
all_loss = []
for data in tqdm(list(train_loader)):
X, y = data
optimizer.zero_grad()
lstm_output, output = net(X.float())
# Computing the loss
loss = compute_loss(y, output)
all_loss.append(loss)
loss.backward()
optimizer.step()
# Plot
plt.plot(all_loss, marker=".")
plt.xlabel("Epoch")
plt.xlabel("Loss")
plt.show()
And this is what I got
I have been trying to look for what the hell I am doing wrong but I have no idea. Also, before I used a keras LSTM and it worked well on the dataset.
Any help?
Thanks!
AI: You look at loss at every batch. You should average your loss over all batches. When you look at different batches your loss may increase simply because one batch is harder to predict than the other one. That's why it's not really interpretable. So start with that. If the problem persists it's probably exploding gradients. In that case lower your learning rate to 1e-3 or 1e-4 or even less if it continues. |
H: Machine learning, speech recognition technologies for Sound of Animals interpretation
https://www.google.com/search?q=sound+of+animals&client=ms-android-lava&prmd=inv&sxsrf=ALeKk02xrn0-yn-FZSkidTogB4l4B_TH6A:1600539091086&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjVhuTf6PXrAhWR4HMBHWS-AYQQ_AUoAXoECA8QAQ&biw=360&bih=592
https://www.google.com/search?q=sound+of+animals&source=lmns&bih=592&biw=360&client=ms-android-lava&prmd=inv&hl=en&sa=X&ved=2ahUKEwjc79L07vXrAhWQYysKHSWxBr8Q_AUoAHoECAAQAw
https://www.instagram.com/p/CFVAPhNlTjh/?utm_source=ig_web_button_share_sheet
https://www.instagram.com/p/CFVBPaqF00M/?utm_source=ig_web_button_share_sheet
Is it possible using AI , speech recognition & Machine learning technologies for interpreting Sound of Animals?
Input Dataset will be Sound files viz .wav format of animals.
Output will be recognising the sound and naming the animal.
AI: Yes it is very easily possible.
If you want quick output use teachable machine.
https://github.com/seth814/Audio-Classification Here is a sample git rep which you could use to enter into the domain. Be sure to label the sounds properly for higher efficiency. |
H: Clarify recurrent neural networks
I'm in the beginning to learn and understand recurrent neural networks. As far as I can imagine, its multiple feed-forward neural networks with one neuron at each layer put next to each other, and connected from left to right, where each neuron is connected not just with the neuron below it, but the one at the left from the previous time. Not sure if it's a right way to think about it, but so far it's my first impression.
Some things are unclear though.
As far as I understood the final output of each timestep is supposed to predict the input of the next timestep. Is this true? What if I would just like to show the network two images of for example a horse, and depending on them, predict what distance did it go, and in which direction? Is this possible?
In the illustration above there's $A_0$. From where? I would assume at least two timesteps are needed to make a prediction, so in my understanding an $x_0$ is missing from the left side of the diagram. Am I right?
I've been reading through an article which says "Lets train a 2-layer LSTM with 512 hidden nodes". Does it mean two layer of activations, and 512 timesteps?
AI: As far as I can imagine, its multiple feed-forward neural networks with one neuron at each layer put next to each other, and connected from left to right, where each neuron is connected not just with the neuron below it, but the one at the left from the previous time.
Not really. Each cyan box in your image represents the exact same cell. Now this cell can be a lot of things just take a look at LSTM cell (h and c represents your A) but it can also be a network which takes $A_i$ and $X_{i+1}$ as input and returns $A_{i+1}$ and $Y_{i+1}$ as output.
It may be true if the RNN tries to predict e.g. a time series. To train such a net you would provide the series as a training input and the same time series but in the next step as an output (so it would try to predict $X_{i+1}$ based on $\forall_{j \in [1;i)} X_{j}$. But in general it's not true. The output may be in a completely different format and represent completely different thing exactly like in your example. In your example your $X_i$ is an encoded i-th frame and $Y_i$ is what the network thinks the distance the horse has traveled is up until i-th frame.
$A_0$ is the starting state of the RNN. What it is exactly depends on the exact architecture used but it's common to just set it all to zeros. We need this starting state because as I mentioned the same cell is used at each recurrent step so there has to be something to provide as network state in the beginning. There is no $X_0$ missing. Also there is nothing stopping you from making a prediction based on a sequence of length 1. It's just that it's not useful to use a RNN in such a situation.
The number of timesteps taken depend on the data - not the network. You can use the same network on sequence of length 512, 2 and 2 million. That's why they're commonly used to solve problems with varying length like speech recognition. LSTMs have weights just like a normal neural network does. You can think of these 512 hidden nodes as the size of a hidden layer in the cell. Using two layers of LSTM means using two LSTMs with 512 nodes, and using output of the first as the input of the second one. The output of the second LSTM is the output of 2-layer LSTM. |
H: Why you shouldn't upsample before cross validation
I have an imbalanced dataset and I am trying different methods to address the data imbalance. I found this article that explains the correct way to cross-validate when oversampling data using SMOTE technique.
I have created a model using AdaBoost algorithm and set the following parametres to be used in Grid Search:
ada = AdaBoostClassifier(n_estimators=100, random_state=42)
params = {
'n_estimators': [50, 100, 200],
'random_state': [42]
}
According to the article, this is the wrong way to oversample:
X_train_upsample, y_train_upsample = SMOTE(random_state=42).fit_sample(X_train, y_train)
# cross-validate using grid search
grid_naive_up = GridSearchCV(ada, param_grid=params, cv=kf,
scoring='recall').fit(X_train_upsample,
y_train_upsample)
grid_naive_up.best_score_
0.6715940782827282
# test set
recall_score(y_test, grid_naive_up.predict(X_test))
0.2824858757062147
Whereas the correct way to oversample is like so:
from imblearn.pipeline import Pipeline, make_pipeline
imba_pipeline = make_pipeline(SMOTE(random_state=42),
AdaBoostClassifier(n_estimators=100, random_state=42))
cross_val_score(imba_pipeline, X_train, y_train, scoring='recall', cv=kf)
new_params = {'adaboostclassifier__' + key: params[key] for key in params}
grid_imba = GridSearchCV(imba_pipeline, param_grid=new_params, cv=kf, scoring='recall',
return_train_score=True)
grid_imba.fit(X_train, y_train);
# How well do we do on our validation set?
grid_imba.best_score_
0.29015614186873506
# compare this to the test set:
y_test_predict = grid_imba.predict(X_test)
0.2824858757062147
So, according to the article, the first method is wrong because when upsampling before cross validation, the validation recall isn't a good measure of the test recall (28.2%).
However, when using the imblearn pipeline for upsampling as part of the cross validation, the validation set recall (29%) was a good estimate of the test set recall (28.3%). According to the article, the reason for this is:
When upsampling before cross validation, you will be picking the most
oversampled model, because the oversampling is allowing data to leak
from the validation folds into the training folds.
Can anyone explain to me simply how the oversampling allows data to leak into the validation and causes the overfitting? And why does this problem not occur in the imblearn pipeline?
AI: To see clearly why the procedure of upsampling before CV is mistaken and it leads to data leakage and other undesired consequences, it is useful to imagine first the simpler "baseline" case, where we simply upsample (i.e. create duplicate samples) without SMOTE.
The first reason why such a procedure is invalid is that, this way, some of the duplicates due to upsampling will end up both to the training and the validation splits (CV folds); the result being that the algorithm is validated with some samples that have already been seen during training, which invalidates the very fundamental requirement of a validation set (fold) and it is actually the very definition of data leakage. For more details, see own answer in the SO thread Process for oversampling data for imbalanced binary classification; quoting from there:
I once witnessed a case where the modeller was struggling to understand why he was getting a ~ 100% test accuracy, much higher than his training one; turned out his initial dataset was full of duplicates -no class imbalance here, but the idea is similar- and several of these duplicates naturally ended up in his test set after the split, without of course being new or unseen data...
But there is also a second reason: this procedure shows biased performance measures in our validation folds that are no longer representative of reality: remember, we want our validation folds to be representative of the real unseen data, which of course will be imbalanced. Performing CV after upsampling results also to artificially balancing our validation folds; doing so, and claiming that we get X% accuracy when a great part of this accuracy will be due to the artificially upsampled minority class makes no sense, and gives misleading impressions. For details, see own answer in the SO thread Balance classes in cross validation. Notice that the author of the post you have linked to says (rather cryptically, and only in a parenthesis):
(we are smart enough not to oversample the test data)
For more corroboration, here is Max Kuhn, creator of the caret R package and co-author of the (highly recommended) Applied Predictive Modelling textbook, in Chapter 11: Subsampling For Class Imbalances of the caret ebook:
You would never want to artificially balance the test set; its class frequencies should be in-line with what one would see “in the wild”.
Now, it's true that the above hold for the case of balancing through simple upsampling of the minority class; but SMOTE does not do that - it uses interpolation to create synthetic samples that are "close" enough to the real minority ones. How does this change the situation?
Not much.
The second reason stated above (biased performance measures in the validation folds) is still fully applicable - in fact, it holds independently of the exact nature of the upsampling (duplicate samples or synthetic ones).
Given that the synthetic samples generated by SMOTE are indeed highly correlated with the real ones, the problems due to the first reason mentioned above are still largely present, although somewhat ameliorated.
In contrast, the pipeline approach does not suffer from these issues, because it first splits into training and validation folds, and applies SMOTE subsequently only to the training ones. |
H: Time series forecast for small data set
I am new in data science so please accept my apology in advance if my question sounds stupid. I want to do a time series forecast of outage mins in the current regulatory year. The regulatory year starts from 1 April and ends on 30 March of next year. I have data of around six months i.e. from April to September. Outage does not occur every day. So I have only 144 data points (or days out of 171 days) where the outage occurred. I have plotted the data in the following graph. The graph shows the cumulative sum of outage mins.
Now I am trying to predict the value from October to March. I wanted to forecast the value that what would be the cumulative outage mins by the end of March next year. I tried to use Exponential smoothing but it did not work, it may be because I don't have a lot of observation. Then I was reading about ARIMA but not sure whether its the right algorithm to use or not as I don't think that there would be any seasonality in this scenario and also I don't have long data points. Could anyone help with which algorithm should I use to forecast the value? I am using Python as a programming language. Any help would be really appreciated.
AI: ARIMA could work, I think it's the right approach. It's simple enough to be used on a small dataset, but sufficiently flexible at the same time. If you are using Python, library statsmodels allows you to implement ARIMA regressions. You have to grid search and find the right parameters to find the best fit, and run the prediction.
If you want to know how to do it, take a look here and here.
Alternatively, even simpler models could work correctly, such as a simple moving average (MA), or auto-regression (AR). But that's something you can find from the ARIMA grid search above. |
H: Why is $Y=\beta_0 x^{\beta_1} e$ a linear model?
Why is $Y=\beta_0 x^{\beta_1} e$ a linear model? When we apply the transform, it becomes $lnY = ln\beta_0+\beta_1 lnx +lne$, and why is it still linear when the $\beta_0$ part is under ln?
AI: The term "linearity" is context-dependent, so a linear regression model is not necessarily the same as a linear function.
A linear function is classified as such via the superposition principle, requiring both additivity and homogenity. We can generalize linear maps with to multivariate functions simply:
Additivity: $f(\vec{x}_1+\vec{x}_2)=f(\vec{x}_1)+f(\vec{x}_2)\enspace \forall\ \vec{x}_1,\vec{x}_2\in \Bbb{R}^n$
Homogeneity: $f(\alpha \vec{x})=\alpha f(\vec{x})\enspace \forall\ \vec{x}\in \Bbb{R}^n$ and $\alpha\in \Bbb{R}$
So the equation $f(\beta_0,\beta_1)=\beta_0 x^{\beta_1}e$ is a nonlinear function because it is not additive nor homogeneous.
However, w.r.t. linear regression, linear models are of the form $Y=\beta_0+\beta_1f_1(x_1)+\beta_2f_2(x_2)+\cdots+\beta_nf_n(x_n)+c$, regardless of whether any $f_i$ is a nonlinear map.
So, after a nonlinear transformation such as natural logarithm, you can consider the resulting regression model as the form $Y'=\beta_0'+\beta_1ln(x)+1$ where the primes denote the natural log transformed variables. This demonstrates linearity between $Y'$ and the parameters $\beta_0'$ and $\beta_1$. |
H: Is is possible to train on Hypergraphs in Keras?
I know that vanilla Keras doesn't support operations with graphs.
For example, the Spektral library, based on Keras API, provides some layers to work with simple graphs. However, it doesn't support graphs with multi-node connections (i.e. hypergraphs).
Is there any way to train a deep learning model on the hypergraph input data?
AI: As far as I'm aware, deep learning on hypergraphs is still a relatively new area, so I don't think there's any ready-made solution for hypergraphs. I did find this repo, which implements some models in keras to accompany a recent paper on hypergraph learning, but it is hardly a library.
You may also check out this paper, which cites a pair of techniques for converting a hypergraph to a graph:
Notably, previous methods [for hypergraph learning] typically decompose the hyperedge into pair-wise relationships where the decomposition methods can be divided into two categories: explicit and implicit. For instance, given a hyperedge (v_1,v_2,v_3), the explicit approach would decompose it directly into three edges, $(v_1,v_2), (v_2,v_3), (v_1,v_3)$, while the implicit approach would add a hidden node $e$ representing the hyperedge before decomposition, i.e., $(v_1,e),(v_2,e),(v_3,e)$.
If you're okay with doing this, then there are a few options besides Spektral:
Deep Graph Library provides a Tensorflow.keras compatible API, although their documentation seems to favor PyTorch.
There is also Graph Nets, which is built on top of Tensorflow.keras, although it's quite bare-bones at the moment. |
H: Why are we taking the square root of the gradient in Adagrad?
This is how we update weights with Adagrad:
$$w_i = w_i - \frac{lr}{\sqrt{g_i+E}}$$
where, $w_i$ is the $i^{th}$ weight, $lr$ is the learning rate, $g_i$ is the gradient of the $i^{th}$ weight at all the timesteps and $E$ is epsilon to prevent division by zero.
Now why are we taking the square root of $g_i+E$, can't we just remove the root and just divide.
I know removing the root gives bad performance, but why does it gives bad performance on removing the root?
AI: We are taking the square root of $G_i$ because $G_i$ is the sum of squares of the gradient of weight $i$ from iterations $1$ to the current iteration. Since we squared all the gradients of all timesteps, $G_i$ was very large, and so we take its square root. Now you may think that, then why don't we just add the gradients instead of squaring the gradients, then adding. The answer to this question is that we square it to prevent negative gradients, so that negative gradients and positive gradients don't cancel each other out. Now you may think that why don't we take absolute value of the gradient then, the answer to this question is
here
That post was about root mean squared error or mean absolute error.
If you think carefully, these both issues are the same.
So, we take the square and not the absolute value. Now you may think that why don't we power the gradient to some even power, why only square? Now, I leave this question to the consideration of the reader |
H: Creating a image classification model
I am working on a dataset to classify facial expressions.
Dataset has 7 classes, training images 28000 and test images 7000. I created 2 models
Model1:
this model has 11 layers. Initially model was working fine but now accuracy is increasing very slowly after 60% and drooping sometimes. model 1 link
then I thought that due large dataset my model is over fitting and I reduced the images by randomly deleting in each class and reducing train set images to 15000 and test set images to 3900. and created another model.
Model 2:
This model has 9 layers. The graph for first 40 epochs was fine but after then when I resumed training its seems to overfit.model 2 link
first 40 epoch
this seems fine i guess
after resuming training
I have also tried model with fewer layers nut cant figure out why my accuracy is stuck at and model is not improving
dataset link, I don't understand is there a problem with dataset, can anyone provide me link to another dataset. Figure out what changes are required in model.
AI: 1.) You should not change your test set, even if you downsample the training set.
2.) At least in the plots you show there is no clear indication of overfitting. Did you see what happens if you train for longer epochs, and tried different learning rates ?
3.) If you have some overfitting, normally you would simplify the model, and not reduce the size of the dataset. Also you can think about data augmentation to prevent overfitting.
4.) You can compare your solution to the solutions available at kaggle.
5.) With such a black-box solution (vanilla CNN) you will not get super high results. Though I do not know the upper bound. You can compare to kaggle for that..
6.) To further improve you have to integrate some prior knowledge (e.g. facial key points). |
H: What library to choose for machine learning on swift?
I have to do a ML project for university and I’ve chosen to do something similar as this https://m.youtube.com/watch?v=Aut32pR5PQA, since I have most experience with swift language, I would like to use some ML library for swift. I’ve googled and found this https://medium.com/@ricardocastellanos_13596/ios-machine-learning-libraries-and-frameworks-e458f18e5a18 but i have no experience with ML so I don’t know what to choose, what would be good option? Or should I start using python anyway?
AI: If you want to stick with Swift, your main option is Swift for Tensorflow. There are some tutorials in their web page (e.g. this one for training a neural net on the Iris dataset).
I would suggest, however, to stay away from Swift if your main focus is machine learning. Swift for Tensorflow is one of the most popular ML libraries for Swift, and there are rumors that it is no longer active. Other frameworks listed in the webpage you linked seem not to be active either, e.g. Bender's last commit was on January 2019, SwiftAI on May 2017, BrainCore on March 2017. The same seems to apply to the Objective-C ones.
Python and R are the programming languages with most ML tooling available, so I suggest you go for one of them and use their vast and super active ML ecosystems. |
H: How to include user features in a recommender system?
I'm novice in that matter but I was thinking about the formulation of a recommender system. Let's take the example of a movie recommendation system. We have a column dedicated to movies ID (or names), a matrix related to the rates the users gave to each movie, and a matrix with movie features (romance, drama, etc..)- joined a photo that shows this formulation. What about I would like to use users features to improve my recommendation? If I had information like age, profession, revenue of each user I would like to use it in my formulation. But if I include user features, this formulation is not Content-‐based neither Collaborative filtering anymore. Anyone knows what kind of formulation it can assume?
AI: This is referred to as side information. This is used to enhance the recommender system.
A good library for collaborative filtering (and beginner friendly) is turicreate. Have a look at this link. o summarise, the traditional, basic matrix factorisation will encode user i and items j respectively as vectors $u_i$ and $v_j$ so that the predicted score that a user would give to the unseen item is:
$$
score(i,j) = u_i^T v_j
$$
but you can have a more complex model that will also take into account idiosyncratic characteristics of both items and users:
$$
score(i,j) = u_i^T v_j + a^T x_i + b^T y_j
$$
which increases the capacity of the model. |
H: Confusion with Notation in the Book on Deep Learning by Ian Goodfellow et al
In chapter 6.1 on 'Example: Learning XOR', the bottom of page 168 mentions:
The activation function $g$ is typically chosen to be a function that
is applied element-wise, with $h_i = g(x^TW_{:,i}+c_i).$
Then we see equation 6.3 is defined as (assuming g as ReLU):
We can now specify our complete network as
$f(x; W,c,w,b) = w^T$
max$\{0, W^Tx + c\} + b$
Wondering why the book uses $W^Tx$ in equation 6.3, while I expect it to be $x^TW$. Unlike XOR example in the book where $W$ is a $2\times2$ square matrix, we may have non-square $W$ as well, and in such cases, $x^TW$ is not same as $W^Tx$.
Please help me understand, if I'm missing something here.
AI: Let $\mathbf{y} = \mathbf{W}^T \mathbf{x}$
Then, $\mathbf{y}^T =(\mathbf{W}^T \mathbf{x})^T =\mathbf{x}^{T}(W^T)^T = \mathbf{x}^{T}W $. Note that $\mathbf{W}$ does not have to be a square matrix.
Let $e^{(i)}_{j} = \delta_{i,j} $.
Then, $y_{i} = \mathbf{y}^{T}e^{(i)} = (\mathbf{x}^T W) e^{(i)} = \mathbf{x}^{T}(We^{(i)}) = \mathbf{x}^{T}W_{:,i}$
and thus
$h_{i} = g(\mathbf{x}^T W_{:,i}+c_{i}) = g(y_{i}+c_{i})$
On the other hand,
$f(..) = w^{T} \max\{\mathbf{0},W^{T}\mathbf{x}+\mathbf{c}\}+b = w^{T} \max\{\mathbf{0},\mathbf{y}+\mathbf{c}\}+\mathbf{b}$.
Does that answer your question ? |
H: regex for JSON only for one nested data
Hi guys do you know how to change this regex, that it only looks for test :
AI: You need to apply two regexes: first, get r'^test:.*$' with m option, then your original regex on the result of the first. |
H: Generative chatbots with BERT pretrained vectors
Most places seem to train generative chatbots with one hot encoded vectors. See here for example, and even the official tutorial on pytorch.
But using one hot encoded vectors are undoubtedly the worst performing method. No tutorial seems to provide this using BERT vectors.
Why hasn't chatbots been built with BERT vectors?
Are BERT vectors are not meant to be used this way?
AI: First, let's clarify the issue with one-hot vectors: most NLP neural models nowadays don't use one-hot encodings for the model input; instead, they use (non contextual) embedding layers. While theoretically you get the same result multiplying a one-hot vector with a matrix, it is more practical just to index the position in the table directly, which is what embedding layers do.
The Pytorch model you linked makes use of an embedding layer for the input, not one-hot vectors.
Now, the specific questions:
BERT representations have actually been used for chatbots. A quick google search reveals multiple relevant blog posts, github repos, etc.
BERT representations are generic subword representations, meant to be reused in any context where text is received as input. The scenarios where it makes most sense to apply BERT's kind of transfer learning are those where training data is scarce. In the cases where there is abundant good-quality training data, the potential gains of using BERT representations are not so high.
Summing up, it is perfectly fine to use BERT representations to create chatbots and many people have done it before. |
H: how to find parameters used in decision tree algorithm
I use a machine learning algorithm, for example decision tree classifier similar to this:
from sklearn import tree
X = [[0, 0], [1, 1]]
Y = [0, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
clf.predict([[2., 2.]])
How to find out what parameters are used?
AI: Just type clf after defining it; in your case it gives:
clf
# result:
DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',
max_depth=None, max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort='deprecated',
random_state=None, splitter='best')
i.e. all arguments with their default values, since you did not specify anything in the definition clf = tree.DecisionTreeClassifier().
You can get the parameters of any algorithm in scikit-learn in a similar way.
Tested with scikit-learn v0.22.2
UPDATE
As Ben Reiniger correctly points out in the comment below, from scikit-learn v0.23 onwards, we need to set the display configuration first in order for this to work:
sklearn.set_config(print_changed_only=False) |
H: How to visualise a large correlation matrix?
I have a dataset with 24 variables, 21 of them numeric. As part of model building I decided to look into the correlation between features and so what I get is a large correlation matrix (21 * 21).
Now visualising such large matrices becomes a very messy task and you end up hurting your eyes. So what I have done is set a threshold and to slice out those rows that have greater than this value (say 0.60). However, I'm getting a matrix that has now several NaNs. When I try to drop these null values, the matrix loses all data and what I'm left is a 0*0 matrix.
corr_matrix = data.corr()
threshold = 0.60
high_corr = corr_matrix.loc[corr_matrix >= 0.60]
high_corr.dropna(inplace=True)
print(high_corr)
Empty DataFrame
Columns = []
Visualising the matrix with nans is a good idea but it also results in empty squares. I'm looking for a way where only those rows that have values >= threshold are retained, with no nans. That would make a much smaller matrix which is much less messier when plotted in matplotlib. However I haven't been able to code it that way; can anyone suggest some strategies to deal with such large matrices?
AI: Try this (note that I didn't add error checking, so it'll crash if your threshold removes all values). Also, I made it an absolute high pass rather than a normal high pass because I assume you'd be interested in strong negative correlation as well? If you're not, just remove the abs() in the filter function.
from numpy.random import randn
from pandas import DataFrame
from seaborn import heatmap
from matplotlib.pyplot import show
from itertools import combinations
def absHighPass(df, absThresh):
passed = set()
for (r,c) in combinations(df.columns, 2):
if (abs(df.loc[r,c]) >= absThresh):
passed.add(r)
passed.add(c)
passed = sorted(passed)
return df.loc[passed,passed]
labels = [chr(x) for x in range(65,91)]
corrDf = DataFrame(randn(26,26), index=labels, columns=labels).corr()
#heatmap(corrDf,cmap="YlGnBu")
heatmap(absHighPass(corrDf,0.5),cmap="YlGnBu")
show()
This is the filtered heatmap:
And this is another run but with the unfiltered heatmap: |
H: Why sparse features should have bigger learning rates associated? And how Adagrad achieves this?
I was learning about Adagrad optimizer. I came to know that it has a very helpful functionality which is that we can have lower learning rates for the features that are more common and greater learning rates for the features that are less common (or more sparse) when using Adagrad.
Now, why do we even want to have lower learning rates for the features that are most common?
Why do we even care if one feature is more sparse than any other feature?
AI: Intuition behind learning rates
With Adagrad, parameters that model the influence of the features in our problem tend to be updated at the same rhythm, something that is accomplished by what you explained in your question $\rightarrow$ the parameters are updated with different learning rates/ steps which may depend on how frequent (or sparse) the features are.
But why the parameters that we are learning should be updated with different learning rates? $\rightarrow$ For example, if our cost function was $f(x,y,z) = 20x^2+y^2+z^2 \rightarrow$ and $x,y,z$ were our parameters, it is clear that having a point far away from the global minimum, the function will be more sensitive to changes on $x$ than on $y$ or $z$. But the same parameter step applied to $y$ and $z$ will lead to the same change on our cost function $f(x,y,z)$.
In the general case, when we don't know exactly the terms of our cost function, one conservative rule for updating would be to use smaller learning rates on the directions with "big" gradients. This would prevent us from overshooting on these directions if the decision of moving along them with a big step was wrong.
To visualize the previous reasoning we can plot how would be the paths followed by our parameters applying different learning rates to them. In this case, the hypotetical cost function that I have used is $f(x,y)=20x^2+y^2$ and the number of iterations made are the same in all of them ($20$). With this plots we can see that:
Left plot $\rightarrow$ Applying a "big" learning rate to both $x$ and $y$ directions lead to overshooting on the $x$ direction, which causes that the path doesn't reach the minimum.
Center plot $\rightarrow$ Applying a "small" learning rate to both $x$ and $y$, solves the issue of overshooting, but causes the learning to be very slow.
Right plot $\rightarrow$ Applying a "small" learning rate to $x$ than to $y$ not only solves the issue of overshooting on the $x$ direction, but also makes faster the optimization.
Note that the optimization used here is not exactly like Adagrad optimization, but with it I pretend to show the effects of different learning rates.
Adagrad
With Adagrad we are able to take this previous intuition of why the learning rates should be different for each parameter but with another point of view.
For understanding why Adagrad behaves in the way described by the question, we should take a look at the practical formula that is commonly used for applying it:
$$ \mathbf{w}_{t+1}= \mathbf{w}_t - \frac{\alpha}{\sqrt{\epsilon I + \text{diag}(G_{t+1})}}\,\,g_{t+1} $$
Where $\mathbf{w}$ is the vector of parameters that we want to update, $g$ is the gradient of the cost function w.r.t. these parameters $\mathbf{w}$, $t$ is the number of the iteration taking place and $G_t$ is given by:
$$G_{t+1}=\sum_{\tau=1}^{t+1} g_\tau\,g_\tau^T$$
So, if we have a look at one parameter $w^j$ from the vector $\mathbf{w}= [w^1,w^2,...,w^n]^T$, we have:
$$ w_{t+1}^j = w_t^j - \frac{\alpha}{\sqrt{\epsilon + G_{t+1}^{jj}}}\,\,g_{t+1}^j \,\,\,\,\,\, \leftrightarrow \,\,\,\,\,\, G_{t+1}^{jj}=\sum_{\tau=1}^{t+1}(g_\tau^j)^2$$
What does this mean? $\rightarrow$ As $t\uparrow\uparrow$ the learning rate related to the parameter $w^j$ will tend to decrease. This is because in the denominator we have the cummulative sum of the absolute value of its previous gradients.
Thereby, using Adagrad we could also solve the problem presented in the previous plots. Due to the fact that the plotted function was more sensitive to changes on the $x$ direction (hence the x-component of the gradient was bigger), Adagrad will automatically give a lower learning rate to the updates in $x$ than on $y$.
But what is the relation of the above with the more/less common features?
Common features here take the meaning of features that during a significant number of updates their influence on the problem has been poorly analysed. This can happen because during these iterations the components of the gradient vector related to these features have had significant lower magnitudes.
Let's look at an example using Stochastic Gradient Descent $\rightarrow$ we will be updating the parameters for every single sample that we have.
Let's consider that our model of parameters is linear, and have two parameters $(w^{(1)},w^{(2)})$ that have to be trained $\rightarrow$ so we can have a predictive function like this:
$$\hat{y} = w^{(1)} x^{(1)} + w^{(2)} x^{(2)}$$
where $x^{(1)}$ and $x^{(2)}$ are the features of our problem.
So if we make use of a cost function ($J$) that represents the mean squared error of the prediction ($\hat{y}$) given by our parameters, the gradient at iteration $t$ will be given by a quantity proportional to the features:
$$ g_t = \left(\frac{\partial J}{\partial w^{(1)}_t}, \frac{\partial J}{\partial w^{(2)}_t}\right)^T \propto (x^{(1)}_t, x^{(2)}_t)^T $$
Thereby, using SGD the quantity that we use to update each of our parameters $\delta w_{t+1}$ is also proportional to the associated feature:
$$ \delta w_{t}^{(j)} \propto x^{(j)}_t$$
With this already set up, lets imaging that our training is done in three samples (just for visualization purposes). During these samples the $x^{(1)}$ feature is more frequent (or less sparse) than the other feature $x^{(2)}$, so we have a dataset, $\mathbb{X}$, similar to the next:
$$ \mathbb{X} = \begin{pmatrix}
1 & 0\\
5 & 0\\
3 & 3\\
\end{pmatrix}$$
Where each row represents a sample of $x^{(1)}_t$ and $x^{(2)}_t$, with $t\in\{1,2,3\}$.
Now it is clear that the parameter $w^{(2)}$ won't be updated until the last sample ($t=3$), because during the first two samples ($t=1, 2$) its gradient component has been zero (becase its proportional to $x^{(2)}_t$).
But what would have happened at this point to $w^{(1)}$? $\rightarrow$ it would have been updated during the first two samples (non-zero gradient component), and because of it, its associated learning rate will be smaller than the one of $w^{(2)}$ in spite of having the same $x^{(1)}_3 = x^{(2)}_3$!
So now we understand why the parameters associated with sparse features can be updated with greater learning rates using Adagrad.
Conclusion
Adagrad allows us to give more importance to updates in parameters that have associated features which are sparse, or more generally, to give more importance to parameter updates that have experimented a record of relatively lower gradients (in magnitude).
Why is this useful? To answer this, we can cite the original authors (see paper: "Adaptive Subgradient Methods for
Online Learning and Stochastic Optimization "):
> "Our procedures give frequently occurring features
very low learning rates and infrequent features high learning rates, where the intuition is that each
time an infrequent feature is seen, the learner should “take notice.” Thus, the adaptation facilitates
finding and identifying very predictive but comparatively rare features" |
H: One Neural network with multiple outputs or multiple neural networks with a single output?
I an building a feed forward deep learning model using tabular data. The inputs are numeric features or categorical features (represented with embeddings). The outputs are the same number of numeric input features.
Is there any known research or models out there which verifies that using a single model with multiple outputs would be better/worse than multiple models, each with a single output?
In essence, with N observations and M outputs, a single model minimizes:
$$
\frac{1}{N}\sum_n^N\sum_m^M \left(y_m^{(n)} - \hat{y}_m^{(n)} \right)^2
$$
while multiple models with single output, each minimize:
$$
\frac{1}{N}\sum_n^N \left(y_m^{(n)} - \hat{y}_m^{(n)} \right)^2
$$
For a single value of $m$.
Any reason one would be preferred over the other, or do I just have to try and see for myself?
AI: Given the information you provided, the most honest answer is: You have to test it by yourself, there is no general answer for it.
Still, it has been shown empirically in research that a neural network may benefit from having multiple outputs.
So let's say we have a neural network that has multiple outputs. Further, let us group them into specific tasks:
For example:
The output neurons of group 1 tell if the image containts a dog or a cat.
The output neurons of group 2 tell the size of the animal (width and height)
The output neurons of group 3 tell the color of the animal's hair (in some encoding)
and so on...
A common example would be Faster-RCNN vs Mask RCNN.
Assume that $g$ denotes the number of different groups of output neurons.
Now if you take a feed-forward neural network, you will have common layers that eventually branch to the different output groups. Let us call $\pi$ the function that maps an input image to this particular last common layer $L$ and let $\phi_{j}$ be the function that takes the information from layer $L$ to output the result of group $j$.
Thus, given an input image $\mathbf{I}$, the neural network maps it to $\begin{pmatrix} \phi_{1}(\pi(\mathbf{I})) \\ \vdots \\ \phi_{g}(\pi(\mathbf{I})) \end{pmatrix}$.
The output of the last common layer $\pi(\mathbf{I})=:\mathbf{f}$ can be understood as an image descriptor $\mathbf{f}$ of the input image $\mathbf{I}$.
In particular, all predicted outputs rely on the information contained in $\mathbf{f}$.
$\textbf{Therefore}$: Merging multiple outputs into a single neural network can be understood as a regularization technique. The image descriptor $\mathbf{f}$ must contain not only the information if the images shows a dog or a cat, but also all the other information. It must therefore be a more comprehensive (or "more realistic") description of the input, which makes it more difficult for the network to overfit. The network cannot solve a specific task using a non-plausible explanation, as the corresponding image descriptor would lead to bad results on the other tasks.
As a consequence adding additional (auxiliary) tasks to the neural network can improve the accuracy on the initial task, even if you are not interested in predicting these additional tasks.
So essentially, if there is a common description of your data, that can be used to solve your required tasks, the system may benefit by using one model with multiple outputs.
You may have a look into the literature, e.g. collaborative learning, multi-task learning, and auxiliary tasks.
I hope this answers your question. |
H: Numpy failing in subtraction even after same dimensions of arrays
When I subtract these two arrays, it returns a (354,354) shaped array because as per the documentation there is a mismatch in the shape of the arrays.
Why does it happen and what else can I do for this except numpy.reshape ?
AI: This is a problem that I have also run into before, right now your ytrain is a one dimensional array (advisable to avoid). Check this answer.
expanding(adding) additional dimension while assigning ytrain should fix your problem
x = np.array([1, 2])
x.shape
(2,)
y = np.expand_dims(x, axis=1)
y
array([[1],
[2]])
y.shape
(2, 1) |
H: What algorithm should I use to get a mapping between two variables?
I have a dataset that contains for every row, a list X of x items, that is a subset of X_total, and a list Y of y items, that is a subset of Y_total.
|Row| List X | List Y |
|1 | A, B | 1, 4 |
|2 | A, D, F | 2, 3, 6 |
|3 | B, E | 5 |
|4 | G, W, N | 1, 2, 5, 7|
|5 | W, G, D, T | 3, 5 |
I want to learn a mapping of X -> Y, so that when new items x appear, I want to know which items in y are likely to appear based on past history. The dataset has ~4k rows.
So far I tried a simple Sequential Model in Keras, but with 2000 inputs and 4000 outputs the results are meaningless , I think there could be a smarter way. I think Embeddings might be a good path to follow, because they can group together items of X that "tweek" the same Y's.
Any suggestions ? Something like RNN's ? Any useful links trying to solve some kind of similar problem ?
AI: Embeddings is a good way to go, however, the way I am thinking about embeddings, is that you can convert these lists to a form like A 1, A 4, B 1, B 4, by concatenating every element in list X to its corresponding element in list Y, and then learn embeddings, using word2vec model(example), however, I am not sure if this would be able to learn relationships where combination of items in list X might lead to an item in list Y. I think this aspect should be tackled while learning embeddings. You can also look at market basket analysis as one of the ways of solving this problem. This article provides an introduction to market basket analysis https://towardsdatascience.com/a-gentle-introduction-on-market-basket-analysis-association-rules-fa4b986a40ce |
H: Can a novelty detection model overfit?
Can a novelty detection model overfit? In novelty detection, the model is trained on normal data instances (not polluted by outliers) where no labels are used in the training process, while validated and tested on a data instances that contain outliers in them. An example of algorithms that can be used for novelty detection are one-class SVM (OCSVM) and Local Outlier Factor (LOF).
AI: Answering your question: yes, depending on the hyperparameters you choose, you could overfit the considered normal data, if you fit your separating hyperplane between normal and novel points being too much "shaped" on your input data.
There are, for instance in case of one-class support vector machines, some important hyperparams like nu or gamma:
nu: with this one, you tell the oc-SVM the fraction of novel points (i.e. anomalies) you want to consider in your input data; this way, you don't overfit your model by not considering normal all the input data (it also depends on your use case, how sure you want to be about normality of the input data points...)
You can test it with the scikit-learn package this way:
with nu=0.01
VS with nu=0.1, where you tell the model to consider a higher fraction of points as being abnormal:
so, the lower nu value is, the more you are "overfitting" your novelties detector (which could be better or worse depending on how well you know your input data).
gamma: now, look at the effect of the value of gamma, which is crucial for overfitting your model:
with gamma=0.1 (and rbf kernel), you have the following decision surface:
VS with gamma = 10
where, with this last option, your are overfitting so much. |
H: Sentiment Analysis on long and structured texts
I'm trying to learn how sentiment analysis based on machine learning techniques works by reading guides online and papers from the academic world and I'm struggling to understand the following:
Why don't people run - or, at least, hardly ever - sentiment analysis
on the long and structured text like newspaper articles or speeches
transcripts?
I noticed it's pretty common analyzing reviews and newspapers headlines as they are short in terms of characters. So... I was wondering if it is just because of the computational power and time required to train ML algorithms (thinks about neural networks) or because of other reasons.
Can someone help me to understand?
AI: I was wondering if it is just because of the computational power and time required to train ML algorithms
It is not because of that; it is arguably because a long and structured text may probably contain segments of "positive" sentiment along with "negative" ones, it can be infinitely more subtle and nuanced, and in principle trying to simply label it overall as "positive/negative" (or even adding a couple more sentiment categories) is futile, unproductive, and at the end of the day hardly useful.
Andrew Ng has famously said:
If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.
and this is exactly the idea behind sentiment analysis: for short text excerpts, and especially for the kinds of text sentiment analysis is usually deployed for (reviews, tweets etc), a typical person has no difficulty in classifying them into such a short list of possible sentiments; additionally, this is a task we want to automate, so that we can do it massively and in scale without having to put a person going through them one by one (not a scalable approach).
These are requirements that normally do not apply to long and structure texts, like (long) newspaper articles, essays etc.; and in these cases, it is not unheard of for people reading them to disagree if, overall, they are "positive", "negative", supportive, contrarian etc (you get the idea), so any thought of actually delegating such a task to an ML model is actually beyond consideration, at least for the present, and not for lacking computing power. |
H: Automation of finding a starting point of measurement in a large dataset
I am looking for a way to automatically find a starting point of rising in my signal in Python. The data are collected with the frequency 10k (0.0001 s each) so the differences between each point are very small, lost in the noise. I found this point (black dot) manually using data analysis software before but I have multiple files and the manual process is not gonna work well. I was trying to think of something to do with derivative (red dots) or rolling variance (green dots) but it's a dead end for me now. Here's how manual point was chosen:
I pick a point that looks to me that is the closest one to rising signal but is still in the middle of noise before rising. Chosing it manually is just my rough estimation but I don't mind being one or two points wrong from the "correct" starting rising point. I will use it to offset my signal so that rising starts more or less at X = 0.
And now I wanted to find it using python. The full signal looks like this:
The derivative:
The rolling variance:
So they're all close to the interest point (black dot) but I don't know what to do with them next. If I change the limits it all looks like this:
Any ideas how to solve my problem? The simple code sample is below (plotting excluded)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit
import scipy.signal as sig
#reading dataset
signal = pd.read_csv('dataset.txt', delimiter=' ' )
signal.columns = ['time','current']
#calculating derivative, finding max and min indices of derivative
signal_derivative = np.gradient(signal,axis=0)
signal['derivative'] = pd.DataFrame(signal_derivative[:,1])
index_derivative_max = signal['derivative'].index[signal['derivative'] == signal['derivative'].max()]
index_derivative_min = signal['derivative'].index[signal['derivative'] == signal['derivative'].min()]
#calculating rolling variance, range 50 points, finding indices of peaks
signal['rolling_var'] = signal['current'].rolling(window=50,center=False).std()
index_rolling_max = signal['rolling_var'].index[signal['rolling_var'] == signal['rolling_var'].max()]
index_rolling_2nd_max = signal['rolling_var'].index[signal['rolling_var'] == signal['rolling_var'][:100000].max()]
AI: Well ... I would do exactly what you did. The derivative on original signal is very noisy. I would probably take derivative out of moving-averaged smoothed signal, however it brings some delay into your detection. See this answer for more info and python code.
The other approach is to detect the point in time-frequency domain. Simply plot the STFT of your signal and see if that helps the accuracy of detection over derivative. |
H: Is each form of word classification also considered to be '(named) entity recognition'?
In an article that I am writing, I focus on word classification. A typical task that involves word classification is (named) entity recognition. Entity recognition is a rather broad task and seems to cover other sub-tasks as well.
Therefore, it seems fair to me to use the terms interchangeably.
Is this a fair assumption?
AI: Is this a fair assumption?
No: Named Entity Recognition (NER) is a specific task which consists in detecting named entities. The more general term for this kind of task in Machine Learning is sequence labelling, because it's not only about classifying words but annotating a sequence of instances in which order matters (e.g. words).
It's true that NER is certainly the most famous task of this kind, but there are other important ones, for instance Part-Of-Speech (POS) tagging. |
H: Train-Test split for Time Series Data to be used for LSTM
values = df.values
train, test = train_test_split(values)
#Split into train and test
X_train, y_train = train[:, :-1], train[:, -1]
X_test, y_test = test[:, :-1], test[:, -1]
Executing the above code splits the time series dataset into training- 70% and testing 30%. I want to control the train-test split as 80-20 or 90-10. Can someone please help me understand what train[:, :-1] does in this context?
It is borrowed from https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/.
Note : I cannot split the dataset randomly for train and test and the most recent values have to be for testing. I have included a screenshot of my dataset.
If anyone can interpret the code, please do help me understand the above. Thanks.
AI: the syntax arr[:,:-1] selects all rows and every column except the last one. Python can use negative indexing, but it's inclusive-exclusive such as [a,b): inclusive of a, exclusive of b.
If you don't use the : operator, such as arr[:,-1], then it simply selects the entire last column.
So in the context of your example, the last column is the value to be regressed/classified/etc according to the previous columns training data.
>>> import numpy as np
>>> arr = np.random.randn(5,5)
>>> print(arr)
[[-0.86690967 -0.63959234 0.99754053 -0.24828822 0.5346927 ]
[ 0.6174709 2.16558841 -1.28983554 1.15387215 0.64630439]
[ 0.35104248 -0.54240157 0.80377977 -0.9447689 -0.08145433]
[ 0.61195442 0.09407687 0.39065215 -0.8887228 -1.63845254]
[-1.58212796 -0.46017275 -0.2065184 0.44879872 -0.95037541]]
>>> print(arr[:,:-1])
[[-0.86690967 -0.63959234 0.99754053 -0.24828822]
[ 0.6174709 2.16558841 -1.28983554 1.15387215]
[ 0.35104248 -0.54240157 0.80377977 -0.9447689 ]
[ 0.61195442 0.09407687 0.39065215 -0.8887228 ]
[-1.58212796 -0.46017275 -0.2065184 0.44879872]]
>>> print(arr[:,-1])
[ 0.5346927 0.64630439 -0.08145433 -1.63845254 -0.95037541]
Notice that the final print is actually the last column of arr, but because it is a 1D array, it appears as a row-vector rather than column-vector. |
H: NLP Emotion Detection - Model fails to learn to recognize negations
I am working on a nlp emotion detection project. The emotions that I try to predict are 'joy', 'fear', 'anger', 'sadness'. I use some publicly available labeled datasets to train my model e.g. ISEAR, WASSA etc. I have tried the following approaches:
Traditional ML approached using bigrams and trigrams.
CNN with the following architecture: (X) Text -> Embedding (W2V pretrained on wikipedia articles) -> Deep Network (CNN 1D) -> Fully connected (Dense) -> Output Layer (Softmax) -> Emotion class (Y)
LSTM with the following architecture: (X) Text -> Embedding (W2V pretrained on wikipedia articles) -> Deep Network (LSTM/GRU) -> Fully connected (Dense) -> Output Layer (Softmax) -> Emotion class (Y)
The NN models achieve more than 80% accuracy but still when I use the trained model to predict the emotion on text that includes some negation I get the wrong results. For example:
Text : "I am happy with easy jet, it is a great company!"
Predicts Happy
Text: I am not happy with easyjet #unhappy_customer
Predicts Happy
Any suggestions on how to overcome this problem?
AI: In general this is a difficult problem, it's about the problem of Natural Language Understanding which is far from being solved.
The advanced option requires a full syntactic parsing of the sentence, ideally followed by some kind of semantic representation of the sentence, for example by extracting relations. As far as I know this is rarely used because these steps will often cause as many errors as they solve.
Some more reasonable heuristics can be considered, for instance detecting specific negation words and either including this information as feature or modifying the original features accordingly (e.g. when a negation is detected replace the token "happy" with "not(happy)" in the features).
Note that it's unlikely to be perfect anyway, due to the usual obstacles: hedging ("I would assume that it was quite good"), sarcasm ("Sure, I was very happy with the terrible service"), metaphors ("I was feeling like a fish in water"), etc. |
H: Sir Rod Stewart's and Celine Dion's voice after 15 years i.e. Year 2035
https://www.google.com/search?sxsrf=ALeKk03hQn_rH1aaf7yO0q7CgN7CxPw2vw%3A1601345036018&ei=DJZyX9BW_o_j4Q-Fn77QCg&q=rod+stewart+age&oq=Rod&gs_lcp=ChNtb2JpbGUtZ3dzLXdpei1zZXJwEAEYADIJCCMQJxBGEPsBMggIABCxAxCRAjIKCC4QsQMQFBCHAjIFCC4QsQMyBQguELEDMgUIABCxAzIFCAAQsQMyBQgAELEDOgQIABBHOgcIIxDqAhAnOgcILhDqAhAnOgQIIxAnOgUIABCRAjoICAAQsQMQgwE6AgguOgQILhAnOgQIABBDOgQILhBDUITwA1j4hwRglZIEaAFwAXgBgAHqAogB6AySAQcwLjIuNC4xmAEAoAEBsAEPyAEIwAEB&sclient=mobile-gws-wiz-serp
https://www.google.com/search?client=ms-android-lava&sxsrf=ALeKk02wz-Vxsa8IAbrEMJeSfJ05is0jxg%3A1601489970504&ei=Msx0X7q2Ht7Dz7sPv7OQqA0&q=celine+dion%27s+age&oq=Celine+di&gs_lcp=ChNtb2JpbGUtZ3dzLXdpei1zZXJwEAEYADIJCCMQJxBGEPsBMgUILhCxAzIFCAAQsQMyBQgAELEDMgoIABCxAxAUEIcCMgIIADICCAAyAgguOgQIABAKOgQILhAKOgoILhCxAxAUEIcCUKEbWLaSAWCBnAFoAnAAeACAAdcFiAGEF5IBDTAuOC4wLjEuMC4xLjGYAQCgAQHAAQE&sclient=mobile-gws-wiz-serp
https://youtu.be/YlP1v8s688Q
https://youtu.be/SCDcnOWftx0
Taking example of Sir Rod Stewart with current age as 75 years, i see the same melody,sweetness in his voice.
Input dataset : Inputting all songs i.e. .wav, .mpg files sound formats sung by Sir Rod Stewart till date.
Inputting all songs i.e. .wav,.mpg files sound formats sung by Celine Dion till date.
Output : Can Machine learning prediction algorithms predict in Year 2035 when Sir Rod Stewart will be 90 years whether his voice will be as melodious and sweet which is also at the current age?
Output : Can Machine learning prediction algorithms predict in Year 2035 When Celine Dion will be 67 years whether her voice will be as melodious and sweet which is also at the current age?
Output will be sound (.wav,.mpg) files.
https://www.quora.com/What-is-the-best-prediction-algorithm-for-machine-learning
https://www.google.com/search?q=random+forest+vs+decision+tree&oq=random+forests+v%2Fs&aqs=chrome.1.69i57j0l4.10570j0j7&client=ms-android-lava&sourceid=chrome-mobile&ie=UTF-8
AI: The base of any experiment or model for a machine learning problem is data. While not similar there are many models which channel faces into aging them and lowering them.
This problem statement is quite similar where in one wants to know the effect a voice would have (or no effect) with the aging of a person.
If you could collect audio files of peoples voices from now and before I guess you can reach a certain level of accuracy. You could start by collecting voices of singers who have been singing for a long time. While no one sings at 90 years of age professionally you can surely work from 18 - 60 years.
You should also look up on GAN's as this will be the base of your model. You could also try to hard code a few aspects by studying the statistical variations of voices in terms of pitch, modulation etc. |
H: What makes the validation set a good representative of the test set?
I am developing a classification model using an imbalanced dataset. I am trying to use different sampling techniques to improve the model performance.
For my baseline model, I defined an AdaBoost model like so:
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=False)
ada = AdaBoostClassifier(n_estimators=100, random_state=42)
params = {
'n_estimators': [50, 100, 200],
'random_state': [42]
}
grid_ada = GridSearchCV(ada, param_grid=params, cv=kf, n_jobs=-1,
scoring='precision').fit(X_train, y_train)
# The best precision score is
grid_ada.best_score_
0.5294693068932419
# look at all the validation scores
grid_ada.cv_results_['mean_test_score']
array([0.51916435, 0.52946931, 0.48800155])
# check the test scores are in line with what we expect from the CV scores
precision_score(y_test, grid_ada.predict(X_test))
0.4423076923076923
In this case, I am not able to determine if my validation result (~53%) was a good representative of my test result (~44%). And if it isn't, why is that the case
I suppose my question can be split into 3 parts:
When do we determine that a validation set is a good representative of a test set? Should the difference between the two results be between a certain range?
What are some of the reasons for large discrepancies between the validation and test set results? I know from a previous question that data leakage from training set into validation set by upsampling the data before splitting it can cause this. But are there any other obvious reasons?
Does class imbalance influence the reliability of the validation results? So should I be using StratifiedKFold as the scikit learn documentation states:
Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure that relative class frequencies is approximately preserved in each train and validation fold.
UPDATE:
I have taken two additional steps that have made my validation set more representative of my test set:
I am now using the StratifiedKFold for the cross validation, like so:
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5, shuffle=False)
Regarding the initial split of the data between and train and test split, I am now using the stratify option in this method because I expect that future data this model will receive to make predictions will have a similar, imbalanced distribution between classes, so stratification by class percentages make sense:
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
shuffle=True, test_size=0.2, random_state=42)
AI: The general idea of using a validation set is to analyse or control the training of the model. Usually, the model tends to under-fit or over-fit, and this can be observed using the validation split. In practical applications, we just have the visible training set and unknown test set. So, with some assumption on the test set, a simple assumption can be such that test set belongs to the similar distribution as of the training set, we use part of training set as validation set. Once the validation set is chosen, we train the system on the training set and at every required step we observe how the trained model performs on the validation set. (Here, we are still under an assumption that the test-set is not visible). Hence, we can use the validation to check if the model is being trained well and to control the training (early stopping) especially in the case of over-fitting. With this assumption a good validation set should be something which has similar distribution as that of the test set. However, most of the times since we split the available dataset into training, validation and test set, the validation set usually has similar distribution as that of test set.
As mentioned above, difference in the distributions of test set and validation set can cause some discrepancies in the results. As you pointed out, if we upsample the training set before obtaining the validation split, then the validation set will contain different distribution than that of the test set. The discrepancies also occur when the the test set is not provided before-hand, and if it has different distribution than that of training/validation set.
Generally, a random-uniform-split produces the similar distribution, i.e, the ratio of imbalance reflects in all the splits. However, when the number of samples are small (either overall number of samples or number of samples in minority class), the ratio may not be proportional across the splits. Hence, given an option to have stratified-split, you can prefer the stratified split. (As mentioned, it may not make much difference if random-uniform-distribution produces similar ratios across the split). |
H: Find inputs that give highest variance in output space
I have a relatively simple problem that I think should have a known solution, but that I can't figure out. Any help would be greatly appreciated.
Basically, I have a function $f : \mathbb{R}^d \rightarrow \mathbb{R}^p$ which has vector input of features and vector output of images. Given a training set of combinations of the input variables and their corresponding outputs $\{x_i, y_i\}_{i=1}^N$, I would like to find the input feature (i.e. element of the inputs) that leads to the highest variability in the outputs, i.e. highest pixel-wise variance. Broadly speaking, I'm hoping to quantify the relationship between each feature and its effect on the variance of the output, to get an idea of how "sensitive" that feature is.
Methods like PCA wouldn't work because I am working in a supervised setting. Whereas PCA tries to find the directions of maximum variance of an unsupervised dataset, I am trying, in a sense, to find the directions of maximum variance in the label (output) space as it relates to the input space. Does an answer to this exist? Thanks in advance.
AI: Very interesting question! I didn't get if the out put of one input vector is just one image or a set of images but this does not change the solution much.
I propose you directly learn the relation between input and the variance of output images. For simplicity, I assume you target the pixle-wise variance within an output image. Then your training data would become $\{x_i, Var(y_i)\}_{i=1}^N$ instead of $\{x_i, y_i\}_{i=1}^N$!
Then your problem can be solved as:
Classification: You threshold the "high variance" and discritize your output into high variance and low variance classes
Regression: Even better! you directly learn the value of variance and have more flexibility on interpreting the results at the end (e.g. the thresholding mentioned above can be applied afterwards).
I hope I understood your question right, I was clear and it could help!
Good Luck! |
H: Is the way to combine weak learners in AdaBoost for regression arbitrary?
I'm reading about how variants of boosting combine weak learners into final predication. The case I'm consider is regression.
In paper Improving Regressors using Boosting Techniques, the final prediction is the weighted median.
For a particular input $x_{i},$ each of the $\mathrm{T}$ machines makes a prediction $h_{t}, t=1, \ldots, T .$ Obtain the cumulative prediction $h_{f}$ using the T predictors: $$h_{f}=\inf\left\{y \in Y: \sum_{t: h_{t} \leq y} \log \left(1 / \beta_{t}\right) \geq \frac{1}{2} \sum_{t} \log \left(1 / \beta_{t}\right)\right\}$$ This is the weighted median. Equivalently, each machine $h_{t}$ has a prediction $y_{i}^{(t)}$ on the $i$'th pattern and an relabeled such that for pattern $i$ we have: $$
y_{i}^{(1)}<y_{i}^{(2)}<, \ldots,<y_{i}^{(T)} $$ (retain the association of the $\beta_{t}$ with its $y_{i}^{(t)}$). Then sum the $\log \left(1 / \beta_{t}\right)$ until we reach the smallest $t$ so that the inequality is satisfied. The prediction from that machine $\mathrm{T}$ we take as the ensemble prediction. If the $\beta_{t}$ were all equal, this would be the median.
An Introduction to Statistical Learning: with Applications in R: The final prediction is the weighted average.
As such, I would like to ask of the way of aggregation is mathematics-based, or because the researcher feels it's reasonable.
Thank you so much!
AI: The ISL description is of gradient boosting (regression, with mse as the loss function), not of AdaBoost. There, $\lambda$ is constant, not weights for each tree. Since each tree is fitted to the residuals, we need to add the results to better approximate the true values, not average.
However, the title question is still an interesting one. It does seem probably mostly arbitrary, but at least some testing has been done, see e.g. "Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression" by Shrestha and Solomatine. |
H: use the same gradient to maximize one part of the model and minimize another part of the same model
I want to calculate the gradient and use the same gradient to minimize one part and maximize another part of the same network (kind of adversarial case). For me, Ideal case would be, if there are two optimizers responsible for two part of the network/model and one of the optimizers has a negative learning rate. But it seems that PyTorch does not allow negative learning rate.
In this case what I am doing is:
loss.backward()
optimzer_for_one_part of the model.step()
and then
(-loss).backward()
Problem is, This time the again calculated gradient will not be the same(values are different but flopped of course) because some weights of the same network (same computation graph) have already been changed. But, Ideally, I want to use the flipped version of the previous gradient.
How can I achieve this?
AI: The trick you are looking for is called the Gradient Reversal Layer. It is a layer that does nothing (i.e., identity) in the forward pass, but it reverts the sign of the gradient, so everything behind the layer optimizes the opposite of the loss function.
There are several PyTorch implementations:
https://github.com/janfreyberg/pytorch-revgrad
https://github.com/jvanvugt/pytorch-domain-adaptation/blob/master/utils.py
Initially, it was introduced for unsupervised domain dataptaion. Now it has quite a lot of applications, such as removing sensitive information from CV representation or removing language identity from multilingual contextual embeddings. |
H: Why do we move in the negative direction of the gradient in Gradient Descent?
It is said that backpropagation, with Gradient Descent, seeks to minimize a cost function using the formula:
$$ W_{new} = W_{old} - learningRate \cdot \frac{\partial E}{\partial W} $$
My question is, if the derivate indicates in which direction the function (the graph of the error with respect to the weights) is decreasing, then why subtract from an already negative gradient?
Why not allow the current direction of the gradient (negative lets say) to be the driving factor for updating the weights:
$$ W_{new} = W_{old} + learningRate \cdot (-gradient) $$
AI: Consider a simple example where the cost function to be a parabola $y=x^2$ which is convex(ideal case) with a one global minima at $x=0$
Here your $y$ is the independent variable and $x$ is the dependent variable, analogus to the weights of model that you are trying to learn.
This is how it would look like.
Let's apply gradient descent to this particular cost function(parabola) to find it's minima.
From calculus it is clear that $dy/dx = 2*x$. So that means that the gradients are positive in the $1^{st}$ quadrant and negative in the $2^{nd}$. That means for every positive small step in x that we take, we move away from origin in the $1^{st}$ quadrant and move towards the origin in the $2^{nd}$ quadrant(step is still positive).
In the update rule of gradient descent the '-' negative sign basically negates the gradient and hence always moves towards the local minima.
$1^{st}$ quadrant -> gradient is positive, but if you use this as it is you move away from origin or minima. So, the negative sign helps here.
$2^{nd}$ quadrant -> gradient is negative, but if you use this as it is you move away from origin or minima(addition of two negative values). So, the negative sign helps here.
Here is a small python code to make things clearer-
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-4, 4, 200)
y = x**2
plt.xlabel('x')
plt.ylabel('y = x^2')
plt.plot(x, y)
# learning rate
lr = 0.1
np.random.seed(20)
x_start = np.random.normal(0, 2, 1)
dy_dx_old = 2 * x_start
dy_dx_new = 0
tolerance = 1e-2
# stop once the value has converged
while abs(dy_dx_new - dy_dx_old) > tolerance:
dy_dx_old = dy_dx_new
x_start = x_start - lr * dy_dx_old
dy_dx_new = 2 * x_start
plt.scatter(x_start, x_start**2)
plt.pause(0.5)
plt.show() |
H: Is there a way to classify an alphanumeric string?
I have data containing various items. Each item has a unique alphanumeric code associated with it (see the example below).
Is there a way to predict the item type based on the alphanumeric code?
Data:
Item code type
1 4S2BDANC5L3247151 book
2 1N4AL3AP1JC236284 book
3 3R4BTTGC3L3237430 book
4 KNMAT2MT1KP546287 book
5 97806773062273208 pen
6 07356196706378892 Pen
7 97807345361169253 pen
8 01008130715194136 chair
9 01076305063010CCE44 chair
etc
AI: Find a minimal implementation of (binary) text classification based on n-grams below. It's written in base R. You first need to extract the ngrams and apply some model (Lasso in my case) to predict each ngram's class. You can easily expand this to multiclass by changing the model accordingly. Note that this is just a minimal example with room to improve.
See also here: https://github.com/Bixi81/R-ml/blob/master/NLP_ngram_short_text.R
Find a Python implementation of GLMnet here: https://web.stanford.edu/~hastie/glmnet_python/
Data:
# Dummy data
# Note the little differences in the strings, however there is a clear pattern
df = data.frame(text=c("ab13ab12ab16","ag16ag16fg16","ab12ab12ab12","fg12fg16fg16","ab16ab12af12","fg16fg16fg16"),target=c(1,0,1,0,1,0))
head(df)
text target
1 ab13ab12ab16 1
2 ag16ag16fg16 0
3 ab12ab12ab12 1
4 fg12fg16fg16 0
5 ab16ab12af12 1
6 fg16fg16fg16 0
Ngrams:
# Set up lists to post results from loop
ngrams = list()
targets = list()
observation = list()
# Loop over rows of DF
counter = 1
for (row in 1:nrow(df)){
# Loop over strings in first colum per row
for (s in 1:nchar(as.character(df$text[[row]]))){
# Get "ngram" (sequence of two letters/digits)
substring=substring(as.character(df$text[[row]]), s, s+1)
# Append if >1, also post row and target
if (nchar(substring)>1){
ngrams[[counter]]<-substring
targets[[counter]]<-df$target[[row]]
observation[[counter]]<-row
counter = counter+1
}
}
}
# Lists to DFs
ngramdf=data.frame(ngram=matrix(unlist(ngrams), nrow=length(ngrams), byrow=T))
targets=data.frame(ngram=matrix(unlist(targets), nrow=length(targets), byrow=T))
obs=data.frame(obs=matrix(unlist(observation), nrow=length(observation), byrow=T))
# Get dummy encoding ("one hot") from all the ngrams
dummies = model.matrix(~ . -1 , data=ngramdf)
# Bind target and dummies to train DF
train = cbind(targets,dummies)
Estimate some model:
# Now apply Lasso to predict the classes
# https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
library(glmnet)
cvfit = cv.glmnet(as.matrix(train[,-1]), as.matrix(train[,1]), family = "binomial", type.measure = "class")
# Predict class per ngram
classes = predict(cvfit, newx = as.matrix(train[,-1]), s = "lambda.min", type = "class")
# predicted probability per ngram -> needs (type="response")
probs = predict(cvfit, newx = as.matrix(train[,-1]), s = "lambda.min", type = "response")
# Calculate average prob that a sequence (so ngrams per row) belongs to some class
result = cbind(probs,obs)
# Get mean per row
result = aggregate(. ~ result$obs, result[1], mean)
# Bind original text and target
result = cbind(result,df)
colnames(result)<-c("observation","estimated_prob", "text", "original_target")
Result:
observation estimated_prob text original_target
1 1 0.7173567 ab13ab12ab16 1
2 2 0.3020248 ag16ag16fg16 0
3 3 0.7674571 ab12ab12ab12 1
4 4 0.2922798 fg12fg16fg16 0
5 5 0.6815783 ab16ab12af12 1
6 6 0.2393033 fg16fg16fg16 0 |
H: What happens if we use Z-Score(Mean Normalisation) in Neural Networks to standardize values
I'm currently doing the DeepLearning.ai Specialization where they divide pixel intensity with the maximum value it has (255) to standardize the data when working on a classification NN for cats.
What effect does Z_Score have on this?
Will my code fail becuase pixels cannot have negative values which will be assigned by Z-Score?
AI: By normalizing the data the code is not likely to get negatively affected. This is because the network doesn't know a priori that images are being used as input data, it only receives a set of numeric values (which can represent any type of data) and finds the appropiate values of weights and biases that decrease our cost function.
Thereby even if we get negative values as the new input data, that doesn't mean that this preprocessing is going to cause a bad effect.
But, why normalization may be useful?
The useful thing about Z-score normalization here is that by doing it, all the "new pixels" have a value with zero-mean and the same variance. This is a good thing as explains Yann LeCun in his paper "Efficient Backprop" (page 8):
Why use data with features having a mean close to zero:
Convergence is usually faster if the average of each input variable over the training set is close to zero... ....When all of the components
of an input vector are positive, all of the updates of weights that feed into a
node will be the same sign. As a result, these weights can only
all decrease or all increase together for a given input pattern. Thus, if a weight
vector must change direction it can only do so by zigzagging which is inefficient
and thus very slow
The expression that he is refering to is the one that is used to update the value of the weights of the first layer, which is given by:
$$ \frac{\partial C}{\partial w_{jk}^{l=1}} = \delta_j^{l=1}x_k$$
Where $x_k$ represents an input pixel value and $w_{jk}^l$ a weight that connects that input $k$ to the neuron $j$ of the first layer ($l=1$). So, given the scalar term $\delta_j^{l=1}$ it's clear that all the weights that connect the input layer to the first layer of neurons will be updated in the same direction if all $x_k$ are positive, as the quote explains.
Why use data with features having the same variance?:
Scaling speeds
learning because it helps to balance out the rate at which the weights connected
to the input nodes learn.
As we saw earlier, the expression that gives the updates on the weights $w_{jk}^{l=1}$ are proportional to the input $x_k$. So if e.g. $x_1$ samples have bigger values than $x_2$ samples, then the updates on $w_{j1}$ on $w_{j2}$ may not be balanced and therefore the updates are made with different rythms on both parameters.
Just a side note $\rightarrow$ The method of dividing each feature (each pixel intensity) by their maximum value is not a Z-score normalization but is another way of normalizing the data (in order to use the Z-score, the quantity that we would have to use to divide the features is the standard deviation of each pixel throughout all the samples). |
H: vanishing gradient and gradient zero
There is a well known problem vanishing gradient in BackPropagation training of Feedforward Neural Network (FNN)(here we don't consider the vanishing gradient of Recurrent Neural Network).
I don't understand why vanishing gradient does not mean the zero gradient namely the optimal solution we want? I saw some answer said vanishing gradient is not exactly the zero gradient, just means the update of gradient is very slow. However, the stopping rule of gradient decent is just the unchange of parameter within $\epsilon.$
So can anyone give me a clear answer?
AI: The setting:
We have a neural network $\phi_{\mathbf{w}}:\mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ with weights $\mathbf{w} \in \mathbb{R}^{q}$. A loss function $\hat{L}: \mathbb{R}^{m} \times \mathbb{R}^{m} \rightarrow \mathbb{R}$ evaluates the quality of a prediction. If $x \in \mathbb{R}^{n}$ shall be mapped to $y \in \mathbb{R}^{m}$ by the neural network, the loss is given as $\hat{L}(\phi(x),y)$.
For a fixed dataset $D \subset \mathbb{R}^{n} \times \mathbb{R}^{m}$, we obtain the empirical error
$F(\mathbf{w}):= \sum_{(x,y) \in D} \hat{L}(\phi_{\mathbf{w}}(x),y)$.
Then $F: \mathbb{R}^{q} \rightarrow \mathbb{R}$.
Now $F$ is minimized using backpropagation.
Let us try to define the vanishing gradient term. I am not sure if there is a proper definition, but I would say we have a vanishing gradient at $p$ if $0 <||\nabla F(p)|| \leq c$ for some small $c$.
Raised questions:
If the gradient is almost zero due to the vanishing gradient, does that mean the current solution is very close to the optimum ? So we can stop iterating..
Why is it bad to have "vanishing gradients" ?
Adressing Question 1
Recall from school that if a functional $F$ has a local optimum at $p$, then $\nabla F(p) = \mathbf{0}$ and $D^2 F(p)$ definite.
If $D^2 F(p)$ is positive definite ($x^T D^2 F(p) x > 0$, for all $x$, where $D^2 F(p)$ is the Hesse matrix), then $p$ is local minimum.
If $\nabla F(p) = \mathbf{0}$ and $D^2 F(p)$ in definite, then $p$ is a saddle point.
In particular this shows that having a zero gradient does not always imply that the position is a local optimum.
(In case of $q = 1$ and $F$ being two times differentiable, $F$ has a local optimum at $p$, if $F'(p) = 0$ and $F''(p) \neq 0 $. )
We can also construct a function that can have arbitrary small gradient while being far away from the minimum:
Consider the function $f_{c}(x) = \max\{0,cx\}$ with $c>0$. Then $\min_{x \in \mathbb{R}} f_c(x) = 0$. For any $p>0$, we have $f'_{c}(p) = c$.
As an example let $p = 10^{9999}$ and $c = 10^{-90}$. Then, the value $f_{c}(p)$ is far away from the minimum, still for the gradient $f'_{c}(p) = 10^{-99}$ holds, which shows that a small gradient does not imply that the current point is close to the optimum.
Adressing Question 2
Note that performing backpropagation is performing the gradient descent algorithm.
Now to address the section questions, there are two directions (an analytical answer and a numerical answer).
The analytical answer would be that a vanishing gradient is nothing special that needs to be considered.
If the step size is choosen appropriately, it can be shown that the sequence of iterates $(p_k)$ is either finite with $\nabla F(p) = 0$, or it is an infinite sequence, and $\lim_{k \rightarrow \infty} \nabla F(p_{k}) = 0$, so that each limit point is a stationary point. This will work independent of any "vanishing gradients".
However, if we consider the question from the numerical aspect, there are certain issues.
1.) There is a machine epsilon $\epsilon$ so that updates with values smaller than $\epsilon$ cannot be performed numerically in a computer. This effectively means that the algorithm converges to some point if $||\nabla F(p)|| \leq \epsilon$.
2.) Even if the values are bigger than $\epsilon$, a "small" gradient vector results in very slow weight updates.
3.) The vanishing gradient problem may arise for example if the sigmoid function is used as activation in a deep neural network. This can be understood from the chain rule.
Let us compute a simple example where we have neural network consisting of $L$ layers, and each layer consists of a single neuron, without any bias. As activation function, we use the sigmoid function $\sigma(t) = \frac{1}{1+e^{-t}}$. Then $\sigma'(t) = \sigma(t)*(1-\sigma(t))$.
The output of the i-th layer is given as
$f_{i}(w) := \sigma(w o_{i-1})$, where $o_{i-1}$ is the output of the $i-1$-th layer.
Let us ignore the loss function and assume that $F$ is exactly the neural network.
The output at layer $i$ is denoted as $o_{i}$ so that we have $o_{i} := \sigma(w_{i} o_{i-1})$ and $o_{1} = w_{1}$.
Then $F(w_{1},\ldots,w_{L}) = o_{L} = \sigma(w_{L} o_{L-1})$.
According to the chain rule, we have $\frac{\mathrm{d}F}{\mathrm{d}w_{1}}(w) = \frac{\mathrm{d} \sigma}{\mathrm{d}w_{1}}(w_{L} o_{L-1}) = \sigma'(w_{L} o_{L-1}) \frac{\mathrm{d} w_{L} o_{L-1}}{\mathrm{d}w_{1}} = \sigma'(w_{L} o_{L-1}) w_{L} \frac{\mathrm{d} o_{L-1}}{\mathrm{d}w_{1}}$.
Repeatedly applying the chain rule results in:
$\frac{\mathrm{d}F}{\mathrm{d}w_{1}}(w) = \prod_{i = 2}^{L} w_{i} \prod_{i = 2}^{L} \sigma'(w_{i}o_{i-1})$.
Since $\sigma(t) \in [0,1]$, we have $\sigma'(t) \in [0,1]$. In particular, we often want $\sigma$ to output either $0$ or $1$. However, if $\sigma(w_{i}o_{i-1})$ is close to $0$ or $1$, then $\sigma'(w_{i}o_{i-1})$ will be close to $0$.
Now if one (or multiply) numbers are close to zero in $\frac{\mathrm{d}F}{\mathrm{d}w_{1}}(w)$, we obtain a very small number, which results in numerical issues (due to reaching machine epsilon), causing a very slow update during the algorithm. |
H: Metric MAP@k for what
What is the MAP@K metric for?
What are you measuring? And where does it make sense to use it?
Unfortunately, I can't find much about this on the Internet. Could someone help me with this? Thanks in advance.
AI: MAP@k is normally used in recommendation systems, but also in other kinds of systems. Quoting from here:
If you have an algorithm that is returning a ranked ordering of items, each item is either hit or miss (like relevant vs. irrelevant search results) and items further down in the list are less likely to be used (like search results at the bottom of the page), then maybe MAP is the metric for you!
Some application examples are these Kaggle competitions:
Expedia Hotel Recommendations
Santander Product Recommendation
National Data Science Challenge 2019
And these are some resources with more information about it:
Mean Average Precision (MAP) For Recommender Systems
How mean Average Precision at k (mAP@k) can be more useful than other evaluation metrics |
H: Evaluating Language Model on specific topic
I have finetuned a pretrained Language Model(GPT-2) on a custom dataset of mine. I would like a way of evaluating the ability of my model to generate sentences of a specific predefined topic, given in the form of either a single keyword(e.g. 'Computers') or a bag-of-words(e.g. 'Computers', 'Linux', 'Server'...).
For example given a LM, how relative are the outputs of the model to the topic specified by the word Computers?
What I have already tried: Generating a large enough number of sentences from the LM and taking the average cosine similarity between these sentences and the target topic(or every word in that topic we have more than one) as described here . I am not sure if this is a valid way to go and furthermore the cosine similarity between sentences yields poor results in many cases.
Thanks in advance for any help.
AI: I think there are (at least) two parts to take into account in evaluating such a model:
Whether the generated text correctly relate to the input topic
Whether the generated text is grammatically and semantically acceptable
In my opinion the first kind of evaluation could reasonably be done with an automatic method such as the one you propose. Note that cosine scores should not be interpreted absolutely: you should probably compute cosine similarity with a random sample of topics, and normally one expects the similarity to be much higher with the input topic than any other. You could also think of other variants, for instance training topic models on the generated text together with a sample of documents from various known topics, then check that the generated text belongs to the target topic (i.e. it should be grouped with the documents known to belong to this topic).
For the second kind of evaluation, it would be difficult and unreliable to use an automatic method. As far as I know the only reliable way would be to ask human annotators to assess whether the text is grammatically correct and whether its content makes sense. If you're going to do that you might as well ask them to annotate to what extent the text is related to the topic.
[added following comment]
if you check whether the generated text is similar to the topic only by computing similarity with this target topic, what you obtain is for instance an average cosine score. Then you would probably select a threshold: for instance if the similarity is higher than 0.5 then consider that the text is indeed related to the topic. But there are two problems with this option:
In some cases the average similarity will be lower than the threshold even though the text is correctly related to the topic. This could happen for example with a very "broad" topic which covers a large vocabulary.
On the contrary you might have cases where the average similarity is higher than the threshold, but actually comparing to another topic would give an even higher similarity value.
These issues are due to interpreting the similarity score "absolutely", as opposed to interpreting it relatively to other similarity scores. Instead you can calculate the similarity not only against the target topic but also against other topics, and then just check that the target topic is the most similar topic (or at least one of the top similar). This way
:
The target similarity score may be low, as long as it's higher than the other topics
you can detect the case where another topic happens to have higher similarity than the target topic |
H: Confusion matrix to check results
I am a new user in StackExchange and a new learner of Data Science. I am working on better understanding how to estimate the results collected, specifically fake users extracted from a dataset running some analysis.
Using a specific algorithm, I found some users
User_Alg
user1
user2
user3
user28
user76
user67
and I would like to estimate accuracy of my algorithm comparing with the dataset which contains all the fake users manually labelled:
User_Dat
user1
user5
user28
user76
user67
user2
user29
As you can see, there are some users, in my extracted list (User_Alg), who are missing, i.e. not included in the list manually labelled (all the fake users in the dataset; User_Dat).
I have thought to use a confusion matrix to check the accuracy, but I would like to know from people with more experience in statistics and machine learning than me, if such method can be ok and how it looks like, or if you recommend another approach.
Thanks for your attention and your time.
AI: A confusion matrix is indeed a very useful way to analyze the results of your experiment. It provides the exact number (or percentage) of instances with true class X predicted as class Y for all the possible classes. As such it gives a detailed picture of what the system classifies correctly or not.
But a confusion matrix is a bit too detailed if one wants to summarize the performance of the classifier as just one single value. This is useful especially when one wants to compare two different classifiers, since there's no general way to compare two confusion matrices. That's why people often use evaluation measures: for binary classification, the most common ones are:
Accuracy, which is simply the number of correct predictions divided by the total number of instances.
F-score, which itself is the harmonic mean of precision and recall. Quite ironically, F-score gives a more accurate picture of performance than accuracy because it takes into account the different types of possible errors. |
H: Help with type of ML problem: when training data is spread across different subgroups/categories
I've been searching around for a while without any luck - hopefully someone more knowledgable can give me some advice about the following ML problem that I've been thinking about:
Say you are trying to predict the Rotten Tomato "Tomatometer" score of a film before it's released. Typically you might approach this by compiling a list of features and labels for some existing films and input this into a supervised ML algorithm.
In this example, the feature list will be standard metrics that describe the film such as, budget, filming duration, number of actors, etc., while the label is the Tomatometer score of the film that is given as a value between 0 to 100. Every film can be expressed using this score, but individually they are spread across many genres, country of production, etc. meaning that there are natural subsets within the training data.
Let's say our training data contains only films belonging to five genres (e.g. Action, Thriller, Horror, Fantasy and Documentary), while we want our algorithm to be applicable to films outside of this genre (e.g. Sci-Fi or Animation), but for the sake of question, we do not have access to these entire categories. In this example, were also assuming that some features will be more important to certain genres than others, for example, a large cast size may correlate more with the score for Action films than for Animations.
What is the general way to transform the data to make it invariant to the subgroup (genre), or what ML algorithm can be used here (if any)? Is there a common name for this situation (some keywords I can search?)
AI: I'll try to rephrase your question: How does one use the information contained in some categorical feature x to predict y in the presence of unseen category values in the test set?
Assuming the train set is representative of the test set distribution, you'd expect the large categories to also be present in the test set.
We're thus mostly concerned with the small categories which might be present in the train set but absent from the test set and/or present in the test set but absent from the train set.
One way of dealing with such situation is merging small categories (e.g. under 2% of all observations) into a single category. That way you treat any new category level as part of the merged category.
Below I share how I implement the above in python, in a way that can be combined in scikit learn pipelines:
from collections import Counter
import pandas as pd
class mergeSmallCategoryLevels():
def __init__(self, min_frac):
self.min_frac = min_frac
def fit(self, X, y=None, **fit_params):
category_counts = pd.DataFrame.from_dict(Counter(X), orient = "index", columns = ["count"])
min_category_count = len(X)*self.min_frac
large_categories = category_counts[category_counts["count"] >= min_category_count]
self.large_categories = list(large_categories.index.values)
return self
def transform(self, X, **transform_params):
ans = [val if val in self.large_categories else ".merged" for val in X]
ans = pd.DataFrame({"category_feature":ans})
return ans.to_numpy().reshape(-1, 1) |
H: ValueError: cannot reshape array of size 136415664 into shape (2734,132,126,1)
I have a data set I loaded with cv2, but when I try to format it I get the above error. I start by moving the data into X_train and X_test (the loaded data is in x_train and x_test).
X_train = []
X_test = []
# Image matrices are different sizes so I am making them the same size
for i in range(len(x_train)-1):
resized = cv2.resize(x_train[i], (img_width, img_height))
X_train.append(resized)
for i in range(len(x_test)-1):
resized = cv2.resize(x_test[i], (img_width, img_height))
X_test.append(resized)
# Convert to numpy arrays
X_test = np.array(X_test)
X_train = np.array(X_train)
# Gather statistics
print(X_train.shape) # -> (2734, 132, 126, 3)
print(X_train.size) # -> 136415664
print(len(X_train)) # -> 2734
# Convert to black and white
X_train = X_train/ 255.
X_test = X_test/ 255.
# First line throws error
X_train = np.reshape(X_train, (len(X_train), img_height, img_width, 1))
X_test = np.reshape(X_test, (len(X_test), img_height, img_width, 1))
What am I doing wrong?
AI: You need $2734 \times 132\times 126\times 1=45,471,888$ values in order to reshape into that tensor. Since you have $136,415,664$ values, the reshaping is impossible. If your fourth dimension is $4$, then the reshape will be possible. |
H: How do I determine which variables contribute to the 1st PC in PCA?
Given the coefficients of PC1 as follows for each variable (0.30, 0.31, 0.42, 0.37, 0.13, -0.43, 0.29, -0.42, -0.11) which variables contributes most to this PC? Does the sign(+/-) matters or considering the absolute value is enough?
AI: Welcome to the site. PCA is an unsupervised dimensionality reduction algorithm. It works by transforming the original feature-set into eigen-vectors that are difficult to map with the original feature set. As such, the first Principal Component (PC) contains the features with maximum variance. The subsequent PCs contain features with decreased variance to the first PC.
With this background, I invite you to read this Q on SO. It has the solution to programmatically determine the features deemed most important by PCA.
[edited]
Regarding the sign of the components, eve if you change them you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights (prcomp( ... )$rotation) also change the sign, so the interpretation stays exactly the same:
set.seed( 2020 )
df <- data.frame(1:10,rnorm(10))
pca1 <- prcomp( df )
pca2 <- princomp( df )
pca1$rotation
gives
PC1 PC2
X1.10 0.9876877 0.1564384
rnorm.10. 0.1564384 -0.9876877
and pca2$loadigs gives,
Comp.1 Comp.2
SS loadings 1.0 1.0
Proportion Var 0.5 0.5
Cumulative Var 0.5 1.0
Then the question arises that why the interpretation remains the same
You do the PCA regression of y on component 1. In the first version (prcomp), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of the variable 1 (1:10 in df) is positive, that shows that the larger the variable 1, the larger the y.
Now use the second version (princomp). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y< over PC1 is now negative. But so is the loading of the variable 1; that means, the larger variable 1, the smaller the component 1, the larger y -- the interpretation is the same.
The conclusion is that for each PCA component, the sign of its scores and of its loadings is arbitrary and meaningless. It can be flipped, but only if the sign of both scores and loadings is reversed at the same time.
Furthermore, the directions that the principal components act correspond to the eigenvectors of the system. If you are getting a positive or negative PC it just means that you are projecting on an eigenvector that is pointing in one direction or 180∘ away in the other direction. Regardless, the interpretation remains the same! It should also be added that the lengths of your principal components are simply the eigenvalues. |
H: grid search result max_features = 'sqrt' in random forest - how to understand
I did a grid search at random forest params. the result of
print(randomforestreg.best_params_)
The result is =
{'max_depth': 28, 'n_estimators': 500 ',max_features': 'sqrt', 'min_samples_split': 2, 'min_samples_leaf': 1'}
The Random Forest documentation:
If “auto”, then max_features=sqrt(n_features).
If “sqrt”, then max_features=sqrt(n_features) (same as “auto”).
So 'sqrt' is like max_features=sqrt(n_features) --> same as 'auto'.
--> the number of features with which tree is build. I dont understand this. When I have 19 columns(features) and 100.000 rows, what is now the answer of the question; "what is the best value for max_features?". Is it the squared value of n features = 19^2 = 361?
AI: The way to understand Max features is "Number of features allowed to make the best split while building the tree". The reason to use this hyperparameter is, if you allow all the features for each split you are going to end up exactly the same trees in the entire random forest which might not be useful. To overcome this we let the model select a fixed number of features randomly, in this case, the no of features allowed = Square root of total no of features in your dataset.
Hope this clears up !! |
H: How do I control sensor data readings against (measured) outside influences?
I need some hints in this problem I have.
I have a dataset of electro-dermal activity readings that are, by nature, influenced by the movements of the person that is being surveyed. In the same dataset, however, I also have accelerometer data that I could use to "control" the data. Meaning that, I hope, I could somehow factor in the person's accelerometer readings when looking at EDA readings and filtering them/correcting against posture changes made obvious in the accelerometer values.
As an example, fig. 1 shows EDA (also called galvanic skin response, or gsr) readings with higher peak prominence at the beginning of the experiment:
This second figure shows accelerometer readings in x, y and z directions, appearing to show more activity during these phases of higher peak prominence:
How do I control my EDA values against these accelerometer readings? Any hints to literature or math will be greatly appreciated.
Thanks in advance!
AI: In short, I think the focus of your question is mostly on how to deal with covariates (watch out for the not always clear use of this term and its synonyms - see here and here. What I mean by covariates is "In statistics, a covariate represents a source of variation that has not been controlled in the experiment and is believed to affect the dependent variable.")
If we first take a step back and ask what you really want from your analysis, it is to my understanding to investigate the relationship between two (or more) variables. These would be e.g. EDA values on one hand and some output variable such as e.g. emotional states based on heart beat data on the other (since I don't have much information on your outcome variable(s) at this point, I will simply call them y). Thus, you have EDA values as an independent variable and y as a dependent variable. So far, so good - bringing together dependent and independent variables statistically can be done in many ways, ranging from even just calculating a correlation between the two (given they are both numeric variables) across fitting linear models and up to machine learning techniques etc. Let us for now go with setting up a linear model:
Short-hand notation: y = EDA
Same model written out more detailed: y = intercept + EDA + error
You already know, however, that you have this other outside influence that affects your EDA-values. Luckily, you even tried to measure/quantify this influence via the accelerometer. Thus, besides dependent variables that you are interested in and the independent variables you are interested in, you now have a third type of variable: the independent variables you are not really interested in, but that you expect to have an influence on the variables that you are interested in. This is what I mean by covariates. Taking a covariate into a model can greatly improve the statistical model in terms of its ability to actually analyze the variables we are interested in. In other words: In the optimal case, a covariate explains so much of the unexplained noise in the data, that the variability that is left lets us draw conclusions about the relationship between the other variables that we could not see before. So the model from above may become:
y = EDA + COVARIATE
y = intercept + EDA + COVARIATE + error
Thus, the question comes up on how to decide whether or not to include such a covariate in your model/analysis. One analysis that is very much in this realm is ANCOVA - analysis of covariance.
I will stop here to make sure I understood the problem correctly and would be glad to hear whether this is helping so far. |
H: Using a random forest, would a RandomForest performance be less if I drop the first or the last tree?
Suppose I've trained a RandomForest model with 100 trees. I then have two cases:
I drop the first tree in the model.
I drop the last tree in the model.
Would the model performance be less in the first or the second case?
As the last tree should be the best trained one, I would say that the first scenario should be less performant than the last one.
And what if I was using another model like a Gradient Boosting Decision tree? I guess it should be the same.
I am okay with some math to prove it, or any other way that might prove it.
Update
I tried with two different learning rates 0.1 and 8. With 0.1 I get:
# For convenience we will use sklearn's GBM, the situation will be similar with XGBoost and others
clf = GradientBoostingClassifier(n_estimators=5000, learning_rate=0.01, max_depth=3, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)[:, 1]
# "Test logloss: {}".format(log_loss(y_test, y_pred)) returns 0.003545821535500366
def compute_loss(y_true, scores_pred):
'''
Since we use raw scores we will wrap log_loss
and apply sigmoid to our predictions before computing log_loss itself
'''
return log_loss(y_true, sigmoid(scores_pred))
'''
Get cummulative sum of *decision function* for trees. i-th element is a sum of trees 0...i-1.
We cannot use staged_predict_proba, since we want to manipulate raw scores
(not probabilities). And only in the end convert the scores to probabilities using sigmoid
'''
cum_preds = np.array([x for x in clf.staged_decision_function(X_test)])[:, :, 0]
print ("Logloss using all trees: {}".format(compute_loss(y_test, cum_preds[-1, :])))
print ("Logloss using all trees but last: {}".format(compute_loss(y_test, cum_preds[-2, :])))
print ("Logloss using all trees but first: {}".format(compute_loss(y_test, cum_preds[-1, :] - cum_preds[0, :])))
which gives:
Logloss using all trees: 0.003545821535500366
Logloss using all trees but last: 0.003545821535500366
Logloss using all trees but first: 0.0035335315747614293
Whereas with 8 I obtain:
clf = GradientBoostingClassifier(n_estimators=5000, learning_rate=8, max_depth=3, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)[:, 1]
# "Test logloss: {}".format(log_loss(y_test, y_pred)) returns 3.03310165292726e-06
cum_preds = np.array([x for x in clf.staged_decision_function(X_test)])[:, :, 0]
print ("Logloss using all trees: {}".format(compute_loss(y_test, cum_preds[-1, :])))
print ("Logloss using all trees but last: {}".format(compute_loss(y_test, cum_preds[-2, :])))
print ("Logloss using all trees but first: {}".format(compute_loss(y_test, cum_preds[-1, :] - cum_preds[0, :])))
gives:
Logloss using all trees: 3.03310165292726e-06
Logloss using all trees but last: 2.846209929270204e-06
Logloss using all trees but first: 2.3463091271266125
AI: The two slightly-smaller models will perform exactly the same, on average. There is no difference baked in to the different trees: "the last tree will be the best trained" is not true. The only difference among the trees is the random subsample they work with and random effects while building the tree (feature subsetting, e.g.).
Gradient boosted trees are a different story. If you drop the first tree after you finish training, the resulting model will be mostly garbage. Every subsequent tree was trained to improve upon the fit of the previous trees, and removing any single tree will put all future trees out of context. (To give an extreme example, suppose the first tree actually captures "the correct" model. All future trees will just fit on the remaining noise.) On the other hand, removing the final tree is equivalent to having just trained one fewer tree, which may be good or bad depending on your bias-variance tradeoff at that point. |
H: Siamese Network - Sigmoid function to compute similarity score
I am referring to siamese neural networks introduced in this paper by G. Koch et al.
The siamese net computes 2 embeddings, then calculates the absolute value of the L1 distance, which would be a value in [0, +inf). Then the sigmoid activation function is applied to this non-negative input, so the output afterwards would be in [0.5, 1), right?
So, if two images are from the same class, your desired L1 distance should be close to 0, thus the sigmoid output should be close to 0.5, but the label given to it is 1 (same class); if two images are from different classes, your expected L1 distance should be very large, thus the sigmoid output should be close to 1, but the label given to it is 0 (diff. class).
How does the use of a sigmoid function in order to compute the similarity score (0 dissimilar, 1 similar) in a siamese neural network make sense here?
AI: I would like to augment the answer of @Shubham Panchal, since I feel the real issue is still not made explicit.
1.) $\alpha$ could also contain negative entries so that the sigmoid function maps to $(0,1)$.
2.) @Stefan J, I think you are absolutely correct: two identical embedding vectors would be mapped to $0.5$ while two vectors that differ would be mapped to (depending on $\alpha$) values towards $1$ or $0$, which is not what we want!
@Shubham Panchal mentions the Dense layer and provides a link to an implementation, which is correct.
Now to make it very clear and short, in the paper they forgot to mention that there is a bias!
So it should be $p = \sigma(b+ \sum_{j}\alpha_{j}|h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}|)$.
Let $\hat{h} := \begin{pmatrix}\hat{h}_{1} & \ldots & \hat{h}_{n}\end{pmatrix}^{T}$, where $\hat{h}_{j}:= |h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}|$.
Then we know that $\hat{h}_{i} \geq 0$ for all $i$.
If you consider now the classification problem geometrically, then $\alpha$ defines a hyperplane that is used to separate vectors $\hat{h}$ close to the origin from vectors $\hat{h}$ further away from the origin. Note that for $\alpha = 1$, we have $\sum_{j}\alpha_{j}|h_{1,L-1}^{(j)} - h_{2,L-1}^{(j)}| = ||\hat{h}||_{1}$. Using $\alpha$ results thus in a weighting of the standard $1$-norm, $\sum_{j}\alpha_{j}|\hat{h}^{(j)}|$.
Already for $n=2$ you can see that you can have two classes where the hyperplane must not go through the origin. For example, let's say two images belong together, if $\hat{h}_{1} \leq c_{1}$ and $\hat{h}_{2} \leq c_{2}$. Now you can not separate those points from points with $\hat{h}_{1} > c_{1}$ or $\hat{h}_{2}> c_{2}$ using a hyperplane that contains the origin. Therefore, a bias is necessary.
Using the Dense layer in Tensorflow will use a bias by default, though, which is why the presented code is correct. |
H: How to check two list (predicted and actual) having different lengths?
I got a list of fake accounts through an algorithm and I would like to determine the precision/accuracy of this result comparing it with the labelled dataset. The lists contain only fake accounts and not only the accounts were identified by my algorithm, so the length of the lists is different.
The predicted list (via algorithm) is the following:
['A','B','C','G','L'] # these values are unique; not have duplicate in the list
whereas the labelled dataset, i.e. the source dataset containing labelled data, is the following:
['A','C','D','H','J', 'L']
The length is different, as you can see. I was thinking of using a confusion matrix, but probably this is not the case.
There are 3 finding in common (A,B and L) and 2 not in common.
Any idea on how I could consider 'good' my model?
AI: A confusion matrix is indeed the right tool to do the job, if you view both the labelled data and your result not as sequences, but as assignments of binary labels (1 = fake, 0 = not fake) to all the accounts (in your case, probably A through L).
To construct the confusion matrix, you can either calculate the binary labels by a "contains" operation on both lists and use standard confusion matrix routines, or use set operations (if the labelled list is L and your result R, the true positive count would be L & R, the false negative count L - R, etc.)
Once you have the confusion matrix, you can use standard classification metrics to evaluate the accuracy of your model. |
H: My corr() function in Python keeps resulting in an "ValueError: The truth value of a Series is ambiguous..."
I am a very inexperienced programmer, this is my first question on the Data Science StackExchange, I sorry if it is formatted poorly or comes across as basic. For some strange reason, in Python, whenever I try to run a correlation function on the population density & total cases per million columns of my COVID-19 DataFrame (which I imported/read into Spyder as a csv), I keep getting the same long error message, namely, "ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()." regardless of whether I use the correlation function from Pandas or Numpy.
My first thought was that this error was caused by the presence of null values in those columns, so I used df.dropna(), then ran the correlation function again but I got the same "ValueError", so I have no idea what is going on here, I was able to run a correlation on those same columns just fine in RStudio which I am equally unskilled and inexperienced with.
AI: You should look at the documentation. You do not pass the column names as an argument.
subset_df = df[['col1', 'col2']]
subset_df.corr()
That should solve this for you. |
H: difference between scaling/normalizing data at a specific step
I am using the MinMaxScaler normalization method, however I have seen various ways that this can be done, I want to know if there is any actual difference between the following:
1. Standardizing/Normalizing the data before splitting the data into train and test
Code 1
scaler = MinMaxScaler() #Normalization
#Transform X and Y values with scaler
x = scaler.fit_transform(x)
y = y.reshape(-1,1)
y = scaler.fit_transform(y)
# Split Data in train and validation
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size = 0.25)
2. Standardizing/Normalizing the data after splitting the data into train and test and then scaling on train and test
# Split Data in train and validation
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size = 0.25)
# created scaler
scaler = MinMaxScaler() #Normalization
# transform training dataset
x_train = scaler.fit_transform(x_train)
# transform test dataset
x_valid = scaler.fit_transform(x_valid)
3. Standardizing/Normalizing the data after splitting the data into train and test. Then fitting on the training set and then scaling on both train and test
# Split Data in train and validation
x_train, x_valid, y_train, y_valid = train_test_split(x, y, test_size = 0.25)
# created scaler
scaler = MinMaxScaler() #Normalization
# fit scaler on training data
scaler = MinMaxScaler().fit(x_train)
# transform training dataset
x_train = scaler.fit_transform(x_train)
# transform test dataset
x_valid = scaler.fit_transform(x_valid)
AI: Case I -
With a single scaling step, you might leak the test info into the train. In this approach you have a common min/max otherwise it would have been two pairs
See, the plot for one of the Features of Iris dataset
Also, we don't scale the target. But I see this in your code.
Case II -
This is fine but you should also consider the online cases, where you will not have a test set to scale the new test data.
Case III -
This is a better and an agnostic approach.
Your code implementation incorrect as suggested in the comment |
H: Pull Random Numbers from my Data (Python)
Let's imagine I have a series of numbers that represents cash flows into some account over the past 30 days in some time window. This data is non-normal but it does represent some distribution. I would like to pull "new" numbers from this distribution in an effort to create a monte-carlo simulation based on the numerical data I have. How can I accomplish this? I've seen methods where you assume the data is normal & pull numbers based on some mean and standard deviation - but what about non-normal distributions? I'm using python so any reference including python or some python libraries would be appreciated.
AI: If what you want is to generate random numbers with the same distribution as your cashflow numbers I recommend you using Python's Fitter package
It is powerful and very simple to use.
You can in this way use it to find the distribution of your data and then generate random numbers with the same distribution.
From documentation:
from scipy import stats
data = stats.gamma.rvs(2, loc=1.5, scale=2, size=10000)
from fitter import Fitter
f = Fitter(data)
f.fit()
# may take some time since by default, all distributions are tried
# but you call manually provide a smaller set of distributions
f.summary()
Also useful resources might be found in stackoverflow |
H: training gradient boosting algorithm in python testing in Golang
What are the best strategy to train and save a gradient boosting algorithm, e.g. LightGBM or XGboost or Catboost in Python but load the model in GoLang and make prediction with Golang ?
AI: There's actually a few libraries that handle the inference part well. https://github.com/dmitryikh/leaves is probably the most common one and seems to fit your need. |
H: Why does a simpler model performs better than a complicated one?
This has happened to me, a complicated model couldn't solve the problem when a simpler one solved it in a few epochs. How is that? I believed that a more complicated model means more number of parameters and more number of parameters means a higher capability to solve a problem.
I have heard that people say, A simpler Model can perform well because it has a low variance. But, what does that even mean?
AI: High variance means that your model's performance varies from data to data, which is bad for the model.
For example:
You used a polynomial classifier with high degree, so it will overfit your training data resulting in good accuracy, but when tried on a different dataset it will yield less accuracy , resulting in variance.
The image on the right is way too complex which results in overfitting.
The middle image shows a simpler model than the image on right, but fits the data better ie. generalize the data better |
H: Use heading in Neural Network model
I am working on a prediction model where I must find out the destination of a boat based on its actual coordinates and heading (compass direction) :
In[8]: X.head()
Out[8]:
latitude longitude heading
0 0.094700 0.094700 332.398791
1 0.090828 0.090828 197.320172
2 0.085800 0.085800 140.537550
3 0.081676 0.081676 128.891893
4 0.077804 0.077804 129.881418
Latitude and longitude are in degrees (normalised here) and heading is in degrees.
The output variable is an integer between 1 and n (there are n possible destinations).
My problem here is to make the neural network understand how heading difference works : 5 degrees and 15 degrees are as close as 5 degrees and 355 degrees because we have to use a modulo 360 after making the difference :
In[11]: diff_heading(355, 5)
Out[11]: 10
In[12]: diff_heading(15, 5)
Out[12]: 10
I've got some acceptable results when I put the coordinates and heading in a basic multilayer perceptron neural network without any preprocessing on the heading. But I'm pretty sure it could be way more better if the network could take into account this heading problem. For now, the network must attribute weights to the heading which indicate that 0 and 360 degrees are opposite, which is false.
Do you have any idea on how to process the data or how to change the network structure to achieve this ?
AI: You could break the heading ($\theta$) into components: $sin(\theta)$ and $cos(\theta)$, which would remove the modulo problem. |
H: CUDA compatibility of GTX 1650ti versus 1650
I am confused about CUDA compatibility. I am studying deep learning and looking for a laptop to buy. One laptop has GTX 1650ti and another has GTX 1650. Will both be able to use GPU for model training, or only second one?
I checked for gpu compatibility. On the nvidia website only gtx 1650 is mentioned. But on some other forums I read that both can work.
AI: Both can run CUDA to accelerate deep learning. Cards that end in "ti" are slightly better versions than their non-ti counterparts.
NVIDIA, the corporation that makes the 1650 and 1650ti cards, also develops CUDA. Only (and all) NVIDIA graphics cards currently support CUDA because it is proprietary.
However, because deep learning models are often limited by memory capacity, I recommend buying a card such as the GTX 1070, which is comparably priced but has 8GB of RAM. |
H: Combine K-nearest neighbor with categorical embedding
I've tried a few ways to do my multi-class classification. For categorical data, I used the embedding technique with Tensorflow, which moves the entity closer with its similarity. This technique provides me with approximately 25%-30%, which is low.
For the numerical data, I used the KNN algorithm that gave me roughly 40% accuracy. I am wondering is there any way to "combine" these two techniques together to achieve a better result. For example, perhaps using the probability given by the KNN algorithm to form a layer concatenated with the embedding layer. Then, use the Dense layer to further train these data.
I've searched on the Internet. It's not the technique of ensembling, which averages the accuracy of each model. It's more like concatenating the layer together.
Any help is highly appreciated.
AI: If I understand well, you label encode categorical variables and fed them to a neural network. If this is the case, you can try the following:
add the numerical variables
create and train an autoencoder
use the encoder part to map input to a vector space and perform k-nearest neighbor to it.
You can read the second method described in https://towardsdatascience.com/detecting-credit-card-fraud-with-autoencoders-in-python-98391cace8a3 It uses a dataset with numerical variables only, but since you label encode categorical variables it applies in your case too. |
H: How to interprete percentile information from the describe function in Pandas?
I am a bit stumped on how to interpret the percentile information you see when you call the describe function on dataframes in Pandas.
I believe I have a basic understanding of what percentile means. For example if in a test someones score 40% which ranks at the 75% percentile, this means that the score is higher than 75% of the total scores.
But I don't know how to translate this knowledge to interpret what I see from the describe function.
To illustrate, given the following:
test = pd.DataFrame([1,2,3,4,5,1,1,1,1,9])
test.describe()
This prints out something similar to this:
| count | 10.000000 |
|-------|-----------|
| mean | 2.800000 |
| std | 2.616189 |
| min | 1.000000 |
| 25% | 1.000000 |
| 50% | 1.500000 |
| 75% | 3.750000 |
| max | 9.000000 |
Now I do not know how to interpret the values assigned to 25%, 50% and 75%. For example 5 out of the 10 values is set to 1, but the 50% has a value of 1.50000, clearly it is not saying 1.5 has a value of 50% because there is not even 1.5 in the data set.
Also why is 25% set to 1.000000 and 75% set to 3.750000?
I know I am interpreting this wrong hence this question! Would appreciate if someone can help understand this
AI: Pandas' describe function internally uses the quantile function. The interpolation parameter of the quantile function determines how the quantile is estimated. The output below shows how you can get 3.75 or 3.5 as the 0.75 quantile based on the interpolation used. linear is the default setting. Please take a look at Pandas' quantile function source code here 1
test = pd.Series([1,2,3,4,5,1,1,1,1,9])
test_series = test[0]
quantile_linear = test.quantile(0.75, interpolation='linear')
print(f'quantile based on linear interpolation: {quantile_linear}')
quantile based on linear interpolation: 3.75
quantile_midpoint = test.quantile(0.75, interpolation='midpoint')
print(f'quantile based on midpoint interpolation: {quantile_midpoint}')
quantile based on midpoint interpolation: 3.5 |
H: how to scale a dataset contains a b&w and Grayscale images
I have a dataset that contain both black and white images and a grayscale images (some of them are scnned by printer and other by camera and changes into gray)
how can make or scale my dataset so I can use it to feed a model ?
AI: $\hat{p}=2\big(\frac{p}{p_{max}}\big)-1$ so that $\hat{p}\in [-1,1]$ where $p\in[0,p_{max}]$
E.g. with 8-bit channels, $p_{max}=255$, black/white correspond to $-1$ and $1$ respectively, and grayscale is linearly mapped in $[-1,1]$. |
H: Tips on how to create box whisker plot for huge data set
I have a huge dataset (about 2 million lines) that I want to visualize to have an idea of how spread the data is. The problem now is when I create a box whisker plot the resulting graph is not legible due to the huge amount of data.
Is there any trick to be able to successfully create a box whisker plot for huge data set in the way that is readable?
AI: Hope you are aware of data sampling methods. They exist to solve problems such as these. They are of several types like;
Probability Sampling Methods
Simple random sampling
Systematic sampling
Stratified sampling
Clustered sampling
Non-Probability Sampling Methods
Convenience sampling
Quota sampling
Judgement (or Purposive) Sampling
Snowball sampling
When sampling the data, be careful of the bias.
Bias in sampling
There are five important potential sources of bias that should be considered when selecting a sample, irrespective of the method used. Sampling bias may be introduced when:
Any pre-agreed sampling rules are deviated from
People in hard-to-reach groups are omitted
Selected individuals are replaced with others, for example if they are difficult to contact
There are low response rates
An out-of-date list is used as the sample frame (for example, if it excludes people who have recently moved to an area)
Basis of the above detail, I think there is no need to use the entire dataset for visualization purpose. Consider an example, each country conducts a census survey to estimate its population of people. Such a dataset is huge both in size and in complexity. Do you think those statisticians use the complete dataset for visualization? I strongly doubt it. They use sampling methods.
Edit
Python Code
# load required libraries
import pandas as pd
# create some dummy data
df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
'num_wings': [2, 0, 0, 0],
'num_specimen_seen': [10, 2, 1, 8]},
index=['falcon', 'dog', 'spider', 'fish'])
print("## Original data ##\n",df)
## Original data ##
num_legs num_wings num_specimen_seen
falcon 2 2 10
dog 4 0 2
spider 8 0 1
fish 0 0 8
Sampling methods:
Simple random sampling: Extract 3 random rows
print("\n Simple random sample")
print(df.sample(3, random_state=10))
Simple random sample
num_legs num_wings num_specimen_seen
spider 8 0 1
falcon 2 2 10
fish 0 0 8
1.1. Simple random sampling: Extract 3 random elements from the Series
df['num_legs']
print("\n Simple random sample of particular column\n",df['num_legs'].sample(n=3, random_state=10))
Simple random sample of particular column
spider 8
falcon 2
fish 0
Name: num_legs, dtype: int64
1.2. Random sampling with replacement
print("\nA random 50% sample of the DataFrame with replacement:")
print(df.sample(frac=0.5, replace=True, random_state=10))
A random 50% sample of the DataFrame with replacement:
num_legs num_wings num_specimen_seen
dog 4 0 2
dog 4 0 2
1.3. Random upsampling with replacement
print("\nAn upsample sample of the DataFrame with replacement")
print(df.sample(frac=2, replace=True, random_state=10))
An upsample sample of the DataFrame with replacement
num_legs num_wings num_specimen_seen
dog 4 0 2
dog 4 0 2
falcon 2 2 10
fish 0 0 8
falcon 2 2 10
dog 4 0 2
fish 0 0 8
falcon 2 2 10
1.4. Random sampling with weights
print("\nUsing a DataFrame column as weights. Rows with larger value in the num_specimen_seen column are more likely to be sampled.")
print(df.sample(n=2, weights='num_specimen_seen', random_state=10))
# Using a DataFrame column as weights. Rows with larger value in the num_specimen_seen column are more likely to be sampled.
num_legs num_wings num_specimen_seen
fish 0 0 8
falcon 2 2 10
Stratified Random Sampling
print("\nStratified Random Sampling")
print(df.groupby('num_legs', group_keys=False).apply(lambda x:x.sample(min(len(x), 2))))
Stratified Random Sampling
num_legs num_wings num_specimen_seen
fish 0 0 8
falcon 2 2 10
dog 4 0 2
spider 8 0 1
This brief example should get you started! |
H: Is it possible to predict sentiment of unlabelled dataset using BERT?
I have a large unlabeled dataset and I want to predict sentiment for each document in this dataset. I want to know, is it possible that I can use BERT for sentiment analysis of unlabeled data? I have seen so many tutorials and read the blog posts but I couldn't find one. All shows the use of BERT on datasets that are already labeled such as the IMDB review dataset or Yelp review.
AI: Bert is a pre-trained language model with objectives like masked token prediction and next sentence prediction. So, it doesn't have any setup to do sentiment analysis,but you can use the pre-trained information to do the same.
There are couple of ways to solve your problem, they are
Annotate few of your documents and fine-tune the model for sentiment analysis.
Get open-source sentiment analysis datasets which fit your requirement, train Bert on the data and use the same classifier for your purpose (But this will work only if both data distributions look the same). |
H: Minimizing error on unseen data
The classifier aims to minimize the loss function (($F(x)$ - $\hat{F}(x)$)2), where $F(x)$ is unknown function and $\hat{F}(x)$ is the predicted function. If $F(x)$ is not known for unseen data, how do we compute this loss? Why is the training error used to estimate the error for unseen data?
AI: if we don't know $F(x)$ for unseen data, how does a decision tree minimize this error?
Every supervised ML method relies on the assumption that the test data (any unseen data) follows the same distribution as the training data (note that this is not specific to Decision Trees). In fact both the training data and the test data are assumed to be sampled from the true population data. As a consequence $F(x)$ is assumed to be the same for the training data and the test (unseen) data.
If one uses a trained model on some unseen data which is not distributed like the training data, the results are simply unpredictable and the performance is likely to drop.
Why do we estimate the error for unseen data using the error observed in the training data?
You seem to suggest to use the "unseen data" in the training process. You would indeed get better results on the "unseen data" if you optimized on it, but then you would lose the point of having a portion of data set apart. "Unseed data" is necessary to estimate how good your model will perform on data never seen before. If you don't keep some data set apart you may have a better model but you have no way of estimating how good it will be when put into production. |
H: ValueError: y should be a 1d array, got an array of shape (285, 30) instead
I am using this data set below and I am trying to find the support vector machine of the data set. Also
I have my code and error below as well.
http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html#sklearn.datasets.load_breast_cancer
import numpy as np
from sklearn import svm,datasets
breastcancer = datasets.load_breast_cancer()
#print(breastcancer)
everydata = breastcancer.data
#print(everydata)
everytarget = breastcancer.target
traindata = []
traintarget = []
testdata = []
testclasses = []
#Class 0 data separtion
for i in range(0,140):
traindata.append(everydata[i])
traintarget.append(everytarget[i])
for i in range(140,212):
testdata.append(everydata[i])
#Class 1 data separation
for i in range(212,357):
traindata.append(everydata[i])
traintarget.append(everytarget[i])
for i in range(357,569):
testdata.append(everydata[i])
traindata = np.concatenate((everydata[:140, :],everydata[212:357, :]),axis=0)
traintarget = np.concatenate((everydata[:140],everydata[212:357]),axis=0)
testdata = np.concatenate((everydata[140:212, :],everydata[357:569, :]),axis=0)
print(len(traindata))
print(traintarget)
print(testdata)
dd = svm.SVC(kernel='linear')
dd.fit(traindata,traintarget)
decide = dd.predict(testdata)
print(decide)
Why am I getting this error is my concatenation incorrect. The results are supposed to output 0s and 1s at the end.
AI: The problem is that you are using the everydaydata to build the traintarget dataset, but you should use the labels in everytarget. That is why is complaining abut the shape, because the labels should be one dimensional.
Try replacing this:
traintarget = np.concatenate((everydata[:140],everydata[212:357]),axis=0)
with this:
traintarget = np.concatenate((everytarget[:140],everytarget[212:357]),axis=0) |
H: Predictive output with your own model built
I would need to better understand how can be created a machine learning algorithm from scratch using an own model developed based on boolean values, for example # of words in a text, # of punctuation, # of capital letters, and so on, to determine if a text is formal or informal.
For instance: I have
Text
there is a new major in this town
WTF?!?
you're a great person. Really glad to have met you
I don't know what to say
BYE BYE BABY
I created some rules to assign a label on this (small) train dataset, but I would need to understand how to apply these rules to a new dataset (test):
if there is an upper case word then I;
if there is a short expression, like don't, 'm ,'s, ... , then I;
if there are two symbols (punctuation) close to each other, then I;
if a word is in list of extra words, then I;
otherwise F.
Suppose that I have a dataframe to test and to assign these labels (I or F):
FREEDOM!!! I don't need to go to school anymore
What are u thinking?
Hey men!
I am glad to hear that.
how could I apply my model to this new dataset, adding labels?
Test Output
FREEDOM!!! I don't need to go to school anymore I
What are u thinking? I
Hey men! I
I am glad to hear that. F
Update after mnm's comment:
Would it be considered a machine learning problem the following one?
import pandas as pd
import numpy as np
data = { "ID":[1,2,3,4],
"Text":["FREEDOM!!! I don't need to go to school anymore",
"What are u thinking?",
"Hey men!","
I am glad to hear that."]}
# here there should be the part of modelling
df['upper'] = # if there is an upper case word then "I"
df['short_exp'] = # if there is a short exp then "I"
df['two_cons'] = # if there are two consecutive symbols then "I"
list_extra=['u','hey']
df['extra'] = # if row contains at least one of the word included in list_extra then 'I'
# append cols to original dataframe
df_new = df
df_new['upper'] = df1['upper']
df_new['short_exp'] = df1['short_exp']
# and similar for others
It is not clear, however, the latest part, that one based on condition. How can I predict the new values for the other texts?
AI: What you are proposing is a heuristic method, because you define the rules manually in advance. From a Machine Learning (ML) point of view the "training" is the part where you observe some data and decide which rules to apply, and the "testing" is when you run a program which applies these rules to obtain a predicted label. As you correctly understood, the testing part should be applied to a test set made of unseen instances. The instances in the test set should also be manually labelled (preferably before performing the testing in order to avoid any bias), so that you can evaluate your method (i.e. calculate the performance).
Technically you're not using any ML approach here, since there is no part where you automatically train a model. However heuristics can be useful, in particular they are sometimes used as a baseline to compare ML models against.
[addition following comment]
I think most of common pre-processing approach requires to convert text into lower case, but a word, taken in different contest, can have a different weight.
This is true for a lot of tasks in NLP (Natural Language Processing) but not all of them. For example for tasks related to capturing an author's writing style (stylometry) one wouldn't usually preprocess text this way. The choice of the representation of the text as features depends on the task so the choice is part of the design, there's no universal method.
how to train a model which can 'learn' to consider important upper case words and punctuation?
In traditional ML (i.e. statistical ML, as opposed to Deep Learning), this question is related to feature engineering, i.e. finding the best way to represent an instance (with features) in relation with the task: if you think it makes sense for your task to have specific features to represent these things, you just add them: for instance you can add a boolean feature which is true if the instance contains at least one uppercase word, a numeric feature which represents the number of punctuation signs in the instance, etc.
Recent ML packages propose standard ways to represent text instances as features and it's often very convenient, but it's important to keep in mind that it's not the only way. Additionally nowadays Deep Learning methods offer ways to bypass feature engineering so there's a bit of a tendency to forget about it, but imho it's an important part of the design, if only to understand how the model works. |
H: NLP: what are the advantages of using a subword tokenizer as opposed to the standard word tokenizer?
I'm looking at this Tensorflow colab tutorial about language translation with Transformers, https://www.tensorflow.org/tutorials/text/transformer, and they tokenize the words with a subword text tokenizer. I have never seen a subword tokenizer before and don't know why or when it should be used as opposed to a word tokenizer.
The tutorial says The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
To get an idea of what the results can look like, the work Transformer gets broken down into index-subword pairs.
7915 ----> T
1248 ----> ran
7946 ----> s
7194 ----> former
Does anybody know what the advantages of breaking down words into subwords is and when somebody should use a subword tokenizer instead of the more standard word tokenizer? Is the subword tokenizer used because the translation is from Portuguese to English?
*The version of Tensorflow is 2.3 and this subword tokenizer belongs to tfds.deprecated.text
AI: Subword tokenization is the norm nowadays in NLP models because:
It mostly avoids the out-of-vocabulary (OOV) word problem. Word vocabularies cannot handle words that are not in the training data. This is a problem for morphologically-rich languages, proper nouns, etc. Subword vocabularies allow representing these words. By having subword tokens (and ensuring the individual characters are part of the subword vocabulary), makes it possible to encode words that were not even in the training data. There's still the problem with characters not present in the training data, but that's tolerable in most of the cases.
It gives manageable vocabulary sizes. Current neural networks need a pre-defined closed discrete token vocabulary. The vocabulary size that a neural network can handle is far smaller than the number of different words (surface forms) in most normal languages, especially morphologically-rich ones (and especially agglutinative ones).
Mitigates data sparsity. In a word-based vocabulary, low-frequency words may appear very few times in the training data. This is especially troublesome for agglutinative languages, where a surface form may be the result of concatenating multiple affixes. Using subword tokenization allows token reusing, and increases the frequency of their appearance.
Neural networks perform very well with them. In all sorts of tasks, they excel: neural machine translation, NER, etc, you name it, the state of the art models are subword-based: BERT, GPT-3, Electra,... |
H: Practical attention models
Attention is all you need is a nice paper that suggests using
positional encodings as an alternative to RNNs in their Transformer architecture.
GPT-2 and GPT-3 are examples of using this architecture which
are trained on input data of a massive scale.
Is there a paper and a model that uses positional encodings
and outcompetes RNN/LSTM based models for small scale datasets (MBs of text data, not terabytes)?
If there are many, which ones are the leading ones in production applications?
AI: Is there a paper and a model that uses positional encodings and outcompetes RNN/LSTM based models for small scale datasets (MBs of text data, not terabytes)?
Yes, there are several. Similar to GPT, they still pre-train on terabytes of data. But the embeddings they learn generalize well. Then you can fine-tune on a much smaller dataset. It works much in the same way as transfer learning on a CNN where a model first is trained on ImageNet and then trained on a specific task. It tends to give better results than RNN/LSTMs.
If there are many, which ones are the leading ones in production applications?
The one that sees most use is definitely BERT. Here is a really nice explanation of how it works. This transformers library from Huggingface makes it really easy to work with BERT and other transformers that have already been pre-trained. |
H: How can I make a face verification system?
I'm building a face verification system using face embeddings (of 512 dimensions) from a pretrained Facenet model. For this, if I have some 4 to 5 images for a person, how can I successfully verify a new unseen images as either the same or different person?
I think this of one class classification (because of having single class) task, and googled it, but unable to find any reliable sources which suits well for this task.
Then, I've trained a SVM classifier with embeddings of a person and also with dummy embeddings of some 4 to 5 persons (so that having positive and negative classes). But it doesn't seem to work well.
Please suggest me with a ML algorithm/technique for this task, thank you.
AI: I am assuming Facenet is an image classifier and it will give embedding for a face similar to other CNNs. If that is the case you don't need to train different classifiers, you can just remove the head of Facenet and initialize a small network on top of it and train an end to end network for better accuracies. |
H: unit-testing Machine Learning models
I have been asked to unit-test my machine learning model(not the code that made the model). Since we wouldn't actually know what predictions models make, how to carry out the unit-testing to check the model's predictions against? How is this done?
EDIT 1:
The machine learning model I have is trained on tabular data of patients. let's take an example of cancer prediction(I am not allowed to disclose the actual one, but this example is very close). It takes multiple reading from various tests as inputs and outputs how close or how risky a patient is to get cancer.
EDIT 2:
Is there any way, like testing for range of value for every set of inputs (or) adversarial inputs(inputs that are sure model will fail on) or extreme input cases. What ate the best practices for this?
AI: In the above case, you can collect annotated data ( which is not seen by the model during training) and validate the predictions made by the model.
And another way is if you are a domain expert or have sufficient knowledge on the data, you can tweak the input values and test the predicted output with your expected output. |
H: keras predicts nan values
I implemented a Keras model for my all-integer dataset with values greater than or equal to 0. The train data has dimensions of (393, 108) and prediction data has (1821, 108). Code is as follows.
import keras
from keras.models import Sequential
from keras.layers import Dense
X = data.iloc[:, :-1]
y = data.iloc[:, -1]
model = Sequential()
model.add(Dense(X.shape[1]-1, input_dim=X.shape[1], activation='tanh'))
for i in range(X.shape[1] - 2, 2, -100):
model.add(Dense(i, activation='tanh'))
model.add(Dense(1, activation='tanh'))
opt = keras.optimizers.Adam(learning_rate=100)
model.compile(loss='categorical_crossentropy', optimizer=opt)
model.fit(X, y)
model.predict(X0)
I am getting all nan values as results.
array([[nan],
[nan],
[nan],
...,
[nan],
[nan],
[nan]], dtype=float32)
AI: Your input is not standardized
The learning rate is way too high, start with the Default i.e. 0.001
Other suggested changes -
Use "relu" in the hidden/input layer
OHE the target
If the target is multi-class, the output layer should have same number of Neurons with softmax as activation
Data points are very less, Neural Net might not be the best option |
H: How to split train/test datasets having equal classes proportion
I would like to know how I can split in an equal number the following
Target
0 1586
1 318
in order to have the same proportion of 0 and 1 classes in a dataset to train, if my dataset is called df and includes 10 columns, both numerical and categorical.
I would consider the following
y=df['Target']
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)
so to do a stratification, but I do not know if it is right and I would appreciate if you could confirm it or provide an alternative to do that.
Sample of data
Fin Eco Target
High percentage 12 1
Low percentage 5 0
Medium percentage 48 0
NA 3 1
TBC NA 1
Low percentage 25 0
Medium percentage 12 0
How can I check if it is actually splitting in equal classes proportion my dataset?
I think the best way to train a model should be having an equal proportion of 0 and 1 values. Right now I have 5 times data with Target=0.
AI: If what you want is to have the same proportion of classes, 50% - 0 and 50% - 1. Then, there are two techniques oversampling (sampling more data from the smaller class) and undersampling (sampling less data from the bigger class). But I won't recommend you this for your problem. Your label seems fairly well balanced 1/5 it's a great proportion.
Still, this library has some implementations to do that
In this blog, you can see an overview of imbalanced datasets, but yours is not. Choosing a proper metric is more important.
If what you want to do is keep the same proportions of the label across the splits, what you are doing is right.
To validate your model properly, the class distribution and the different splits (train, validation, test) should be similar.
In the train test split documentation , you can find the argument:
stratifyarray-like, default=None
If not None, data is split in a stratified fashion, using this as the class labels.
One step beyond will be using Stratified K-Folds cross-validator.
This cross-validation object is a variation of KFold that returns stratified folds. The folds are made by preserving the percentage of samples for each class.
There are more splitting techniques in Scikit Learn that you can use, have a look.
To test if the function is doing what you want just calculate the percentages in the splits:
np.unique(y_train, return_counts=True)
np.unique(y_val, return_counts=True)
But this will make you have the same proportions across the whole data, if your original label proportion is 1/5, then you will have 1/5 in train and 1/5 in test. |
H: Fuzzy and FuzzyWuzzy: what are the differences in text comparison?
I have found a lot of information about fuzzy logic, but less information about fuzzywuzzy. I would like to know more about this, the function which determines the logic, if possible, and understand what partial_ratio in Python does.
Any information will be well welcome.
AI: In the source code you can find a simple explanation of what does partial ratio in fuzzywuzzy does:
Return the ratio of the most similar substring
as a number between 0 and 100
In this code snippet you can find the differences
from fuzzywuzzy import fuzz
fuzz.ratio("this is a test", "this is a test!")
Out: 97
fuzz.partial_ratio("this is a test", "this is a test!")
Out: 100
[1] https://github.com/seatgeek/fuzzywuzzy
[2] https://github.com/seatgeek/fuzzywuzzy/blob/master/fuzzywuzzy/fuzz.py |
H: Adding a trend line or horizontal mean±stdev lines in facet_grid view
Today was the final day for an On Demand event I adminned. We got some data back from the provider today. Vendors bought in at different tiers, and only one T1 was allowed because they're a sponsor. The higher the Tier number, the fewer graphics options and later one's exhibit appears.
I applied a facet_grid to the data to separate by Tier. I'd like to add a trend line or at least three hlines (mean, mean ± stdev) to the graphic that sorts by position from the start of the exhibition hall to illustrate that the later the position, the less likely a vendor will be visited.
This code
Interactions %>%
ggplot(mapping = aes(fill=Tier, color=Tier, x=reorder(`Booth Name`, -BoothOrder, max), y=`TotalInteractions`)) +
scale_fill_manual(values=GraphColors) +
geom_col() +
geom_text(aes(label=TotalInteractions), color='black', nudge_y = 10) +
facet_grid(Tier ~ ., scales = 'free_y', space = 'free_y', drop = T) +
xlab(label = 'Booth Name') +
ylab(label = 'Total Interactions, grouped by Tier, ordered by Booth Order in Exhibit Hall') +
coord_flip()
begets this graphic (vendor names anonymized).
I'd like in each facet panel to show the mean and stdev and/or a trend line illustrating the likelihood of an attendee visiting a vendor which is at the end of the virtual exhibition hall. This would be comparable to a live event having someone off in a corner, furthest from the entrance or a main attraction.
Do I need to add more geom_somethings or facet differently?
Here's the redacted dput if anyone wants it.
structure(list(`Booth Name` = structure(c("6066eecb44", "da7e90874c",
"76f9149b67", "ce285d23a7", "6e38489fe3", "eef7ae4fb6", "c171400d47",
"29cadfb808", "16d463a501", "06aed259dd", "5c3ed6d72e", "6196941184",
"8ad3ea5fa4", "98a8388f89", "b2f06f4240", "7034dda2fa", "da004a8aed",
"da317748e2", "ffd775a22b", "461ac5053c", "45a2dc3ba8", "9e28ff5dd5",
"23c6d72b14", "83a776083d", "3c13b35d6b", "83152ac13a", "9a1a86885c",
"c1599dec43", "2bb225f0ba", "b6f9b29b5e", "7cfe83e072", "717bfc4838",
"e213328e22", "c9af37768a", "122d80d313", "701a01a7d6", "cb2e52e25a",
"0214e13085", "47f08bcef3", "7ace29dd27", "e8ecf5ceff", "d8eb53a6b0"
), class = c("hash", "md5")), Tier = structure(c(1L, 2L, 2L,
2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L,
5L, 5L, 5L, 5L, 5L, 5L, 5L), .Label = c("1", "2", "3", "4", "5"
), class = "factor"), `Total Booth Visits` = c(187, 137, 101,
198, 107, 109, 119, 119, 95, 90, 191, 157, 51, 146, 97, 131,
62, 62, 161, 98, 54, 68, 67, 202, 274, 47, 100, 97, 135, 73,
74, 109, 68, 69, 79, 154, 45, 55, 38, 15, 73, 98), `Unique Booth Visits` = c(133,
112, 87, 137, 84, 99, 101, 102, 79, 75, 155, 133, 46, 111, 78,
110, 58, 54, 133, 80, 51, 57, 61, 156, 205, 40, 83, 82, 108,
65, 65, 95, 63, 60, 73, 125, 41, 43, 36, 13, 64, 88), `Documents Clicked` = c(7,
0, 3, 4, 0, 9, 20, 8, 5, 0, 6, 4, 2, 0, 1, 7, 9, 2, 0, 0, 0,
0, 0, 12, 12, 0, 0, 0, 1, 11, 6, 0, 0, 0, 0, 14, 0, 3, 0, 0,
13, 0), `Videos Viewed` = c(2, 0, 24, 9, 20, 13, 0, 0, 2, 0,
10, 0, 5, 0, 6, 0, 6, 13, 0, 0, 0, 18, 0, 11, 20, 2, 0, 0, 0,
0, 0, 0, 6, 5, 6, 28, 0, 0, 0, 0, 0, 0), `Tabs Clicked` = c(53,
6, 8, 13, 14, 12, 11, 10, 17, 4, 30, 37, 7, 34, 13, 4, 8, 3,
18, 36, 5, 19, 6, 25, 50, 6, 28, 16, 5, 1, 2, 27, 9, 14, 11,
43, 4, 2, 1, 0, 7, 26), BoothOrder = c(1, 11, 4, 2, 14, 5, 8,
6, 15, 17, 10, 9, 38, 22, 23, 13, 29, 25, 7, 30, 33, 34, 36,
12, 3, 39, 40, 19, 16, 20, 21, 24, 26, 27, 28, 18, 32, 31, 35,
42, 37, 41), DupeVisits = c(54, 25, 14, 61, 23, 10, 18, 17, 16,
15, 36, 24, 5, 35, 19, 21, 4, 8, 28, 18, 3, 11, 6, 46, 69, 7,
17, 15, 27, 8, 9, 14, 5, 9, 6, 29, 4, 12, 2, 2, 9, 10), TotalInteractions = c(249,
143, 136, 224, 141, 143, 150, 137, 119, 94, 237, 198, 65, 180,
117, 142, 85, 80, 179, 134, 59, 105, 73, 250, 356, 55, 128, 113,
141, 85, 82, 136, 83, 88, 96, 239, 49, 60, 39, 15, 93, 124)), row.names = c(NA,
-42L), class = c("tbl_df", "tbl", "data.frame"))
AI: What you can do is creating another dataset with the mean and mean±stdev grouped by Tier. Then you can use that in geom_hline and ggplot will pick up facets from Tier. See below;
library(dplyr)
library(tidyr)
library(ggplot2)
Interactions %>%
group_by(Tier) %>%
summarise(mean = mean(TotalInteractions),
`mean-stdev` = sum(mean(TotalInteractions), -sd(TotalInteractions), na.rm = T),
`mean+stdev` = sum(mean(TotalInteractions), +sd(TotalInteractions), na.rm = T)) %>%
pivot_longer(-Tier) -> Int_hline
Interactions %>%
ggplot(aes(fill=Tier, color=Tier,
x=reorder(`Booth Name`, -BoothOrder, max), y=`TotalInteractions`)) +
# scale_fill_manual(values=GraphColors) + ## commented-out since we don't have GraphColors
geom_col() +
geom_hline(data = Int_hline, aes(yintercept = value)) +
geom_text(aes(label=TotalInteractions), color='black', nudge_y = 10) +
facet_grid(Tier ~ ., scales = 'free_y', space = 'free_y', drop = T) +
xlab(label = 'Booth Name') +
ylab(label = 'Total Interactions, grouped by Tier, ordered by Booth Order in Exhibit Hall') +
coord_flip()
Created on 2020-10-11 by the reprex package (v0.3.0) |
H: Channels and neurons in a CNN
I am in the process of learning what a convolutional layer is, and its functions. I have a couple of questions. One is what in_channels and out_channels are. In my code, I have the following:
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(4,4), padding=1)
I understand that the in_channels for this layer is 3 because RGB, but I'm confused on why out_channels is 64, and why I can change it to seemingly any other number without the program crashing.
Another question is what exactly a "neuron" is. For convolutional layers, is a neuron each filter? Also, if it is, what do the activation functions judge when evaluating each filter?
AI: out_channels can be thought of as the number of features/filters the layer can learn. So, for eg., if a layer of a CNN has out_channels set to 64, it can "learn" 64 features. These features could be edges/patterns. This is a hyper-parameters and can set to an arbitrary number.
The activation of a neuron in a layer will be high if the input to that layer contains the feature learned by the layer, that's how a convolution (or correlation) function works.
In a CNN, a neuron is the result of computing the dot product of the input and the filter/kernel at every window of the convolution. |
H: TicTacToe Linear Regression low accuracy and R^2 score
Im using the python sklearn library to attempt a linear regression TicTacToe AI.
I create my training set by simply having the computer play random 'blind' games against itself. For example... Player one plays a random segment of the board. Next player two plays a random valid segment of the board etc. This goes on until the board is full or someone has won. Each time player one wins, i store the board states leading up to the win. Every loss, i simply mark that board state (and past board states of the same game ) as a loss for player one. For every tie game(full board) I do not count it as anything. I play about 20k of these games. At the end I get my training data set which includes the board state (the feature set) and the outcome which is the percentage (a floating pint value. eg .8 is 80%) of games won for that state.
So for example going from board top left to bottom right: [1, 1, 1, 2, 0, 2, 0, 0, 0] would be:
X X X
O - O
- - -
would have a '1' or 100 percent after playing 20k random games etc.
I'm trying to predict the success rate of player one's next move. Basically the success rate of any free segment based on the board state.
However; after training sklearn linear regression with my training data, I get a very low R^2 score of .14 and any test is highly inaccurate. I'm beginning to think there is a flaw in my data? Is this how data scientists would go about creating the training set for tic tac toe?
AI: Linear regression will not work for this problem because the relationship between the board features and target variable that you are using is not linear.
Is this how data scientists would go about creating the training set for tic tac toe?
It is not 100% clear what your goal is. For simplicity I will select that your goal as "Predict the probability of X winning eventually given the current board state and completely random play in future by both sides." That appears to be what you are doing.
As an aside, this is not a direct path to training a neural network to predict the best moves to make in a game. For this simple game, it might work acceptably if that is your eventual goal, but if you want machine learning for game playing you should probably look into reinforcement learning, and specifically self-play with reinforcement learning, as a framework to manage the training data.
Back to your question, what you are doing is acceptable for creating a data set, although I would want to check:
For every tie game(full board) I do not count it as anything
If that means you are still storing the states that lead to a tie, but with a different label, then that is ok. If you are discarding data about ties, then that will skew the dataset and might impact your predictions - unless you are also discarding ties when testing.
This is also slightly unusual:
At the end I get my training data set which includes the board state (the feature set) and the outcome which is the percentage (a floating pint value. eg .8 is 80%) of games won for that state.
This is unusual in that you have pre-processed the data into a summary row when features are identical. This skews the dataset when used with an approximation function (linear regression - like most ML statistical learners - is an approximation function), because you lose the number of times those features occurred. Any balancing the prediction function does to make itself more accurate for common states is lost when you do this. It is more normal to keep all records separate and have the ML method resolve the best way to take averages. If you measure the accuracy of your completed model by taking random samples of new played games, it could have lower accuracy than possible due to this.
For data collection of records, it is more usual to keep all observations separate and not to summarise them before training a classifier. The classifier can then fit the data allowing for the frequency of each observation.
Other than the caveats about ties (which you may well have right), and premature taking of averages, plus the limitation that your dataset will only help predict outcomes in fully random games, then the dataset collection looks ok to me. Neither of the above problems are major enough to cause the problem that you noticed. The reason your predictions are not working with linear regression is mainly due to needing non-linearity in the prediction function.
A simple fix for this would be to use a non-linear predictor such as a neural network or maybe a decision-tree algorithm like xgboost.
If you use a neural network, the following may help:
Use sigmoid activation in the output layer and binary cross-entropy loss. This should help when your output is a probability.
Use the value $-1$ instead of $2$ for marking positions in the board played by O. This is not strictly required, but neural networks tend to learn faster and more accurately when the input data in centered around zero with close to 1 standard deviation.
It is worth noting that your averaged win rate table is already quite a reasonable predictive model for game playing. For TicTacToe it should work quite well because there are a limited number of states. After 20k games with random play, you will have a record of nearly every possible state, and some will have reasonably accurate average values (for instance each initial play by X will have ~2000 sampled continuations which should give you the win rate within a few percent). The weakness of this approach is that it cannot generalise to new unseen states, but actually that is quite hard to do in board games where fine detail matters. |
H: Definition of the Q* function in reinforcement learning
I'm making my way through Sutton's Introduction to Reinforcement Learning. He gives the definition of the $q_*$ function as follows
$$
q_*(a) = \mathbf{E}[R_t | A_t = a]
$$
where $A_t$ is the action taken at time t and $R_t$ is the reward associated with taking $A_t$. From my understanding, $q_*$ represents the true value of taking action $a$, which is the mean reward when $a$ is selected.
But I'm confused about why $t$ is included in this equation at all. Should $q_*(a)$ really be $q_*(a, t)$? Or are we to understand $q_*$ as taking the expected reward across all $t$?
AI: The reward of action $a$ is defined as a stationary probability distribution with mean $q_*(a)$. This is independent of time $t$.
However the estimate of $q_*(a)$ at time $t$, denoted by $Q_t(a)$, is dependent on time $t$
Or are we to understand q∗ as taking the expected reward across all t?
The expectation is not over time, but over a probability distribution with mean $q_*(a)$.
For eg., in the 10-armed bandit problem, the reward for each of the 10 actions comes from a Normal distribution with mean $q_*(a), a= 1,...,10$ and variance 1. |
H: How to convert DataFrame column to Rows in Python?
I have the following dataset in df_1 which I want to convert into the format of df_2. In df_2 I have converted the columns of df_1 to rows in df_2 (excluding UserId and Date). I looked up for similar answers but they are providing little complex solutions. Is there a simple way to do this?
df_1
UserId Date -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7
87 2011-05-10 18:38:55.030 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
487 2011-11-29 14:46:12.080 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
21 2012-03-02 14:35:06.867 0 1 0 1 2 0 2 2 0 1 2 2 1 3 1
df_2
day | count
-7 0
-7 0
-7 0
-6 0
-6 0
-6 1
-5 0
-5 1
-5 0
. .
. .(Similarly for other columns in between)
. .
6 0
6 0
6 3
7 0
7 0
7 1
AI: Try using pandas melt
dataT = pd.DataFrame({"userID":[1,2,3],"date":["2020-01-01","2019-01-02","2020-08-12"],"-7":[0,1,1],"-6":[1,0,0],"-5":[0,0,0]})
Input:
dataT.melt(value_vars= ["-7","-6","-5"], value_name="count")
Output:
Update
By taking the comment by Benji the code would be:
dataT.melt(id_vars= ["userID","date"], value_name="count")
With output: |
H: Does data anonymization conflict with GDPR rules?
There are GDPR articles that relate to a person's ownership of their data e.g., Art. 17 GDPR Right to erasure (‘right to be forgotten’) and Art. 20 GDPR Right to data portability. In case one would anonymize the data without a way to "restore" the relation between the person (name + e-mail address) (which in turn would allow handling of the person-specific data), I'd say this would conflict with these GDPR articles. Are there data anonymization techniques that allow to "restore" the relation between name + contact e-mail after the data has been anonymized? This would allow satisfying these GDPR rules.
AI: Formally speaking, this is clarified in the GDPR Recital 26, unofficially titled Not Applicable to Anonymous Data:
The principles of data protection should therefore not apply to anonymous information, namely, information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.
This Regulation does not, therefore, concern the processing of such anonymous information, including for statistical or research purposes.
Informally speaking, the claim that data anonymization would violate the data subject's rights to data erasure and portability, hence we should seek to use reversible anonymization techniques, sounds awkward, and against the very spirit of GDPR; and the official interpretation regarding anonymization is very clear:
Effective data anonymization is made up of two parts:
It is irreversible.
It is done in such a way that it is impossible (or extremely impractical) to identify the data subject.
In other words:
if a subject's data have been effectively anonymized, they are no longer personal data, hence they are no longer governed by GDPR; consequently, Articles 17 & 20 are not applicable, and this does not constitute any conflict
if any personal data used as a source for the anonymized ones remain, they are subject to GDPR; data subjects can exercise their right to erasure and portability for these non-anonymized data
if the personal data used as a source for the anonymized ones are already erased (possibly in compliance with GDPR), then neither the right to erasure nor the right to portability are applicable anymore, and this does not constitute any kind of conflict either.
Notice that the fact that ensuring effective anonymization is not as clear-cut as it may sound has also been legally recognized in the Opinion 05/2014 on Anonymisation Techniques of the Data Protection Working Party:
Thus, anonymisation should not be regarded as a one-off exercise and the attending risks should be reassessed regularly by data controllers. |
H: Determining whether a Machine Learning model is overfitted with regard to the stability of the features
I need to know how would I get to know if I have overfitted my Machine Learning model on the train data. The performance metric I have used is Logistic Loss. Does the stability of the features affect the performance of my model? If yes, how do they relate?
AI: You need to look for the differences in the training loss and the cross-validation and test losses. If those are low, it means the model performs fairly well. Ideally, the training loss should be roughly equal to the cross-validation and test losses. If not, the model may be overfitting.
This difference also hints at an insignificant overlap between the training data points and the cross-validation and test data points. Such features are said to be unstable. In such a case, the model only gets to see the data points in the training data and not those in the cross-validation and test data and is thereby overfitting. Hence, it performs poorly. You can verify this by computing the percentage of data points present in the cross-validation and test data from those in the training data for different features in your dataset. |
H: How to Set the Same Categorical Codes to Train and Test data? Python-Pandas
NOTE:
If someone else it's wondering about this topic, I understand you're getting deeper in the Data Analysis world, so I did this question before to learn that:
You encode categorical values as INTEGERES only if you're dealing with Ordinal Classes, i.e. College degree, Customer Satisfaction Surveys as an example.
Otherwise if you're dealing with Nominal Classes like, gender, colors or names, you MUST convert them with other methods since they do not specific any numerical order, most known are One-hot Encoding or Dummy variables.
I encorage you to read more about them and hope this has been useful.
Check the link below to see a nice explanation:
https://www.youtube.com/watch?v=9yl6-HEY7_s
This may be a simple question but I think it can be useful for beginners.
I need to run a prediction model on a test dataset, so to convert the categorical variables into categorical codes that can be handled by the random forests model I use these lines with all of them:
Train:
data_['Col1_CAT'] = data_['Col1'].astype('category')
data_['Col1_CAT'] = data_['Col1_CAT'].cat.codes
So, before running the model I have to apply the same procedure to both, the Train and Test data.
And since both datasets have the same categorical variables/columns, I think it will be useful to apply the same categorical codes to each column respectively.
However, although I'm handling the same variables on each dataset I get different codes everytime I use these two lines.
So, my question is, how can I do to get the same codes everytime I convert the same categoricals on each dataset?
Thanks for your insights and feedback.
AI: First, note that Random Forests can handle categorical variables (moreover, if you have too much categories, reducing this number is a good practice).
If you want to apply a filter to your data, I'd suggest you using sklearn transformers (like OneHot Encoder, Label Encoding, ... pick the one you need according to what you want to do).
In this case, you have to fit the encoder in your train dataset, and then apply it in your test. If you want to apply this in a real case, you have to save your trained encoders alongside your trained model, so you can apply the encoder directly to the new data before predicting on it, so it has the same pattern.
Here is an example with Label Encoder
from sklearn import preprocessing
train, test = ... # SEPARATE YOUR DATA AS YOU WANT
le = preprocessing.LabelEncoder()
trained_le = le.fit(train)
train = trained_le.transform(train)
test = trained_le.transform(test) |
H: Smart sentence segmentation not splitting on abbreviations
Sentencer from SpaCy and NLTK does not catch the fact that typical abbreviations (e.g. Mio. for Million in German) and the resulting sentence split is not correct. I understand that sentencers are supposed to be simple and quick but I am wondering if there is a better one that takes into account something more than uppercased words and punctuation? Alternatively, how to make SpaCy / NLTK / ... sentencer work for such sentences?
I am interested primarily with sentencers with Python API.
AI: Neural tools trained on Universal Dependencies corpora use learned models for tokenization and sentence-spliting. Two I know of are:
UDPipe – developed at Charles University in Prague. Gets very good results (at least for parsing), but has a little unintuitive API.
Stanza – developed at Stanford University. The API is quite similar to Spacy.
However, they are quite slow compared to regex-based sentence-spliting. |
H: Model accuracy: how to determine it?
I have some doubts regarding the approach to building a classifier such as Multinomial Naive Bayes or SVM. I will go through the steps to see if the approach is fine. I do have not a lot of experience in model build, so any suggestions would be great!
My dataset has approx. 1115 obs having positive value (0) and 66 obs having negative value (1).
The distribution of the dependant variable is shown in the figure below.
I split the dataset into the train (70) and test (30), using stratify (it should help in case of such discrepancy between classes, hopefully):
from sklearn.model_selection import train_test_split
y=df['Label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, stratify=y)
Then I imported the sim model to create an SVM classifier:
from sklearn import svm
clf = svm.SVC(kernel='linear')
clf.fit(X_train, y_train)
And to Predict the response for the test dataset I used the following:
y_pred = clf.predict(X_test)
For accuracy calculation, I used the following code:
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred))
print("Recall:",metrics.recall_score(y_test, y_pred))
getting different values every time I re-run it:
Accuracy: 0.75
precision recall f1-score support
0 0.95 0.79 0.86 316
1 0.08 0.30 0.13 20
accuracy 0.76 336
macro avg 0.51 0.54 0.49 336
weighted avg 0.90 0.76 0.82 336
2nd re-run
Accuracy: 0.8005952380952381
precision recall f1-score support
0 0.94 0.84 0.89 316
1 0.07 0.20 0.11 20
accuracy 0.80 336
macro avg 0.51 0.52 0.50 336
weighted avg 0.89 0.80 0.84 336
Confusion Matrix:
[[265 51]
[ 16 4]]
3rd re-run
Accuracy: 0.7797619047619048
precision recall f1-score support
0 0.94 0.81 0.87 316
1 0.08 0.25 0.12 20
accuracy 0.78 336
macro avg 0.51 0.53 0.50 336
weighted avg 0.89 0.78 0.83 336
Confusion Matrix:
[[257 59]
[ 15 5]]
I have a couple of questions on these results and I hope to find answers to them:
Which value should I take into account for saying that my model has an accuracy of ...?
Does it make sense to run a model where there are so few values = 1 for the dependent variable?
AI: Which value should I take into account for saying that my model has an accuracy of ...?
None. Accuracy is practically meaningless in such class imbalanced settings; the metrics of interest here are precision, recall, and f1 score.
Now, it's true that the values of these metrics also fluctuate between runs, much similar to the reported accuracy. But this is to be expected due to the small-sample effects - your sample is so small that even a difference in the classification of a couple of samples in your (even smaller) validation set is enough to give the observed discrepancies.
Does it make sense to run a model where there are so few values = 1 for the dependent variable?
Indeed it does; there are plenty of applications where the positive values are a very small percentage of the whole dataset (thing of sick people versus the general population of healthy ones, or engine faults versus long-running times where no fault is present). That's why this is a (large enough) sub-topic of machine learning called class imbalance or imbalanced classification, with its own specific approaches. I suggest you start googling ruthlessly. |
H: Scikit-learn SelectKBest is picking up obviously unwanted Features
Dataset
Dataset Summary: Bank Loan (classification) problem
Problem Summary:
I am exploring ways to simplify EDA Process (Exploratory Data Analysis) of finding the best fit variables
I came across SelectKBest from Scikit Package
The implementation went fine except some variables it returned me are obviously not going be a good factor (like primary keys in the dataset)
Is there a problem in the implementation? or is the package supposed to behave in that manner?
import numpy
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.preprocessing import LabelEncoder
# My internal code to read the data file
from src.api.data import LoadDataStore
# Preping Data
raw = LoadDataStore.get_raw()
x_raw = raw.drop(["default_ind", "issue_d"], axis=1)
y_raw = raw[["default_ind"]].values.ravel()
# NA and Encoding
for num_var in x_raw.select_dtypes(include=[numpy.float64]).columns.values:
x_raw[num_var] = x_raw[num_var].fillna(-1)
encoder = LabelEncoder()
for cat_var in x_raw.select_dtypes(include=[numpy.object]).columns.values:
x_raw[cat_var] = x_raw[cat_var].fillna("NA")
x_raw[cat_var] = encoder.fit_transform(x_raw[cat_var])
# Main Part of this problem
test = SelectKBest(score_func=f_classif, k=15)
fit = test.fit(x_raw, y_raw)
ok_var = []
not_var = []
for flag, var in zip(fit.get_support(), x_raw.columns.values):
if flag:
ok_var.append(var)
else:
not_var.append(var)
ok_var
['id', 'member_id', 'int_rate', 'grade', 'sub_grade', 'desc', 'title', 'initial_list_status', 'out_prncp', 'out_prncp_inv', 'total_rec_late_fee', 'recoveries', 'collection_recovery_fee', 'last_pymnt_d', 'next_pymnt_d']
not_var
['loan_amnt', 'funded_amnt', 'funded_amnt_inv', 'term', 'installment', 'emp_title', 'emp_length', 'home_ownership', 'annual_inc', 'verification_status', 'pymnt_plan', 'purpose', 'zip_code', 'addr_state', 'dti', 'delinq_2yrs', 'earliest_cr_line', 'inq_last_6mths', 'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util', 'total_acc', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int', 'last_pymnt_amnt', 'last_credit_pull_d', 'collections_12_mths_ex_med', 'mths_since_last_major_derog', 'policy_code', 'application_type', 'annual_inc_joint', 'dti_joint', 'verification_status_joint', 'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m', 'open_il_6m', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m']
Its clear id, member_id should NOT belong to the best features list! any idea what I am doing wrong?
Edit: Did more digging and the reply by @Icrmorin is right. (its a kaggle dataset, so will not know why) but here is the box plot for id
AI: There seems to be two possible approaches to your problem :
If they are just identification features that you know aren't informative, you should remove them yourself. SelectKBest - like almost any other EDA tools - works on all the features you provide it, there is no way it knows what features are supposedly uninformative identification features and which are not.
It is possible that somehow the identification feature is informative. I can think of at least two reasons : correlation with time as the instance are entered in order and what you want to observe change with time. Or, if your identification feature is not unique (instances observed trough multiple different times), correlation between your observation. Depending on how your identification feature is built and what you want to achieve, you might want to keep this information or not. |
H: Can a Box plot be used for finding the useful features from the dataset?
I am reading a book by professor Trevor Hastie and professor Robert Tibshirani called "Introduction to Statistical Learning". In the applied section of the chapter 4, there is a question 11(b) that says:
Explore the data graphically in order to investigate the association between "mpg01" and the other features. Which of the other features seem most likely to be useful in predicting "mpg01"? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.
Here I tried plotting those box plots but how to find the useful features by analyzing them.
I'm certain that from the correlation matrix and the scatter plots I can judge what all features could be useful, but how come a box-plot be used for that matter, could someone please share their opinion.
AI: This is actually very simple: the more separation you see between the boxes, the stronger the 'predictive power' of the covariate.
Boxplots can be actually viewed as histograms looked at from the top. The box itself encompasses 50% of the data (from 25th to 75th percentile), and the line inside the box is the median. Whiskers show you the bounds of the data, up to 1.5 IQR (if I remember correctly). Anything over that range is an outlier.
Consider 2 plots: displacement and acceleration. In the displacement plot the boxes are completely separated, meaning that at least 50% of the data 'mass' is in a completely different place for mpg01='0' than for mpg01='1'. On the other hand, in acceleration the top of one box is around the median in another box - indicating poor separation. |
H: Unable to successfully merge dataframes in pandas along labels
I have two different dataframes, they both share the same labels, "Country" and "Year", I am trying to merge these together as one by these two columns.
This is my code:
joined = pd.merge(left = df, right = df1, on = ["Year", "Country"])
This is the result I receive for joined.head():
0 rows × 34 columns
Any suggestions?
Thanks.
AI: Try the following:
new_df = pd.merge(df, df1, how='left', left_on=["Year", "Country"], right_on = ["Year", "Country"]) |
H: Can we use feature selection and dimensionality reduction together?
I have a dataset having about 10,000s of features. The features have a hierarchy inherent to them. I found an algorithm performing feature engineering, taking the hierarchy of the features into consideration. After the procedure the feature space will be changed and the original features may not exist. This algorithm will reduce the number of features to about 2000 features. As the next step I am planning to use autoencoders(to perform dimensionality reduction) and obtain a latent representation to perform the classification task. The reason I didn't use the original dataset for the autoencoders is because I want to use the information on the hierarchy of the features for my model. Is this a meaningful model? Is it pointless to compress the feature space twice?
Thank you!
AI: Every time you compress the feature space you are losing some information. The original feature engineering stage you outlined sounds like a meaningful compression & might make sense in the context of your problem. The second compression on the other hand might only serve to lose some information.
I would only perform the second compression if the classifier you are training is struggling resource-wise to train your model after the initial compression. In this case you might want to sacrifice some information to train the model quicker. But this step should only be investigated after you try training your model with only the original compression & are not satisfied with the speed of training.
Good luck! |
H: Terminology in machine learning: exogenous features vs external features
I am currently writing a scientific paper and do not know whether to call some of my input features of my neural network either external or exogenous.
My neural network receives as input features like the outside temperature, which are completely independent of the mapped mathematical function.
Are these independent features called exogenous or external? When is which term used?
AI: Exogenous simply means a value that is determined outside the context of your model & is then imposed on your model. Endogenous means the model determines the value.
I don't know about "external" as this word seems to depend on context. But you would be right to say these variables are exogenous. |
H: Python sklearn model.predict() gives me different results depending on the amount of data
I train my XGBoostClassifier().
If my testing set has:
0: 100
1: 884
It attempts to predict 210 1's. Around 147 are wrong (False positives) and 63 1's correctly predicted (True positives).
Then I increase my testing sample:
0: 15,000
1: 884
It attempts to predict 56 1's. Around 40 are wrong (False positives) and 16 1's correctly predicted (True positives).
Am I missing something? some theory? some indication on how to use model.predict(X_test)?
Does it say somewhere - if you try to predict 10 items is gonna try harder than if you try to predict 10000 items? In what situation model.predict(X_test) would give me a different result for Joe Smith if his prediction is accompanied by 8000 more rows?
The code I use is the following:
from xgboost import XGBClassifier
xgb = XGBClassifier(subsample=0.75,scale_post_weight=30,min_child_weight=1,max_depth=3,gamma=5,colsample_bytree=0.75)
model = xgb.fit(X_train,y_train)
y_pred_output = model.predict(X_test)
cm = confusion_matrix(y_test, y_pred_output)
y_pred_output2 = model.predict(X_test2) #contains the same 884 1's plus 14500 more rows with 0's as the target value
cm = confusion_matrix(y_test2, y_pred_output2)
it produces two different matrices:
#Confusion matrix for y_test with 15000 0's and 884 1's
[[14864 136]
[ 837 47]]
#Confusion matrix for y_test with 500 0's and 884 1's
[[459 41]
[681 203]]
Notice that the same 884 positive class items are being used across both attempts. Why would the true positives go down to 47 just because we now have more Negatives on the X_test?
AI: If XGBoostClassifier is fed the same input data over and over again it will yield the same results. There is no inherent randomness in this classifier that would different results for the same input. Additionally - there should be no difference in the result of an individual prediction if it's requested in a smaller batch versus a larger batch (again the result will be identical).
On the other hand - if you train XGBoost on different data their outputs will definitely be different. If you add new data to the underlying dataset & train with that - new & different patterns will emerge that XGBoost will try to take advantage of and the entire tree network will be fit very differently.
I suspect what you are observing is a bug in structuring your input data that you are then feeding to the .predict() method. If you share a sample of your code maybe we can drill-down on the issue. |
H: Equation of a Multi-Layer Perceptron Network
I'm writing an article about business management of wine companies where I use a Multi-Layer Perceptron Network.
My teacher then asked me to write an equation that lets me calculate the output of the network. My answer was that due to the nature of multi-layer perceptron networks there is no single equation per se. What I have is a table of weights and bias. I can then use this formula:
$$f(x) = (\sum^{m}_{i=1} w_i * x_i) + b$$
Where:
m is the number of neurons in the previous layer,
w is a random weight,
x is the input value,
b is a random bias.
Doing this for each layer/neuron in the hidden layers and the output layer.
She showed me an example of another work she made (image on the bottom), telling me that it should be something like that. Looking at the chart, I suppose that it is a logistic regression.
So, my questions are the following:
Is there any equation to predict the output of a multi-layer perceptron network other than iterating over each neuron with $w*x+b$?
Should I just tell my teacher that a logistic regression is a different case and the same does not apply to this type of neural networks?
Is the first formula correct to show that a value of a neuron is the sum product of the previous layers plus the bias?
Edit 1: I didn't wrote the formula but I do also have activation functions (relu).
AI: You are forgetting one element of the MLP which is the activation function. If your activation function is linear - then you can simply flatten out all the neurons into one single linear equation. The advantage of MLP however is its non-linearities so I suspect in your network you do have some activation (sigmoid? tanh? relu? etc..).
As for your graph - you could simply output predictions from your MLP and plot the exact scatter plot you have above. The only difference would be you wouldn't have a simple way of expressing this network in algebraic notation (as you have done on the existing x-axis).
To describe networks effectively in text you should look into matrix notation describing the weights and inputs of each layer. Maybe take a look at something like this to get started: https://www.jeremyjordan.me/intro-to-neural-networks/ |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.