H: Should I report loss on the last batch or the entire training dataset?
In the training loop below, you can see that train_loss represents the loss on the most recent batch.
Whereas the eval_loss is computed on the entire training dataset.
When I print it out, should I be recalculating the train_loss against the entire training split instead of the last batch?
for epoch in range(epochs):
    # Loop over batches.
    for i, batch in enumerate(samples_train['features']):
        train_preds = model(samples_train['features'][i])
        train_loss = loss_fn(train_preds, samples_train['labels'][i])
        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()
    # Only the training data is batched.
    eval_preds = model(samples_evaluate['features'])
    eval_loss = loss_fn(eval_preds, samples_evaluate['labels'])
    print(f"Epoch: {epoch}, Train_Loss: {float(train_loss)}, Eval_Loss: {float(eval_loss)}")
AI: The losses should be calculated over the whole epoch (i.e. the whole dataset) instead of just the single batch. To implement this you could keep a running total that adds up the losses of the individual batches and divide it by the number of batches after the epoch ends, as sketched below. |
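A minimal sketch of that running average, reusing the variable names from the question (everything else in the loop is assumed unchanged):
for epoch in range(epochs):
    epoch_loss = 0.0
    num_batches = 0
    for i, batch in enumerate(samples_train['features']):
        train_preds = model(samples_train['features'][i])
        train_loss = loss_fn(train_preds, samples_train['labels'][i])
        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()
        epoch_loss += float(train_loss)      # accumulate the per-batch loss
        num_batches += 1
    mean_train_loss = epoch_loss / num_batches   # average loss over the whole epoch
    eval_preds = model(samples_evaluate['features'])
    eval_loss = loss_fn(eval_preds, samples_evaluate['labels'])
    print(f"Epoch: {epoch}, Train_Loss: {mean_train_loss}, Eval_Loss: {float(eval_loss)}")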
H: Are there any tree-based models that use a genetic algorithm to generate the trees?
I have a large dataset (195 features x 20m samples) that I have trained using XGBoost. I would like to see if a genetic algorithm can beat XGBoost since the data has so much noise it is prone to overfitting.
I would like to use a tree-based model so I don't have to standardize the data, and the features do have some interrelationships.
Are there any Python packages that have this all done? I.e., that can create trees through a genetic optimization process?
AI: Very new release:
genetic-tree
The main objective of the package is to allow creating decision trees that are better in some aspects than trees made by greedy algorithms.
The creation of trees is done by a genetic algorithm. In order to achieve as fast as possible evolution of trees, the most time-consuming components are written in Cython. There are also mechanisms implemented for using old trees to create new ones without the need to classify all observations from the beginning (currently in development). Multithreaded evolution is planned.
The created trees should have smaller sizes with comparable accuracy to the trees made by greedy algorithms.
Also worth checking:
PyGAD
PyGAD is an open-source Python library for building the genetic algorithm and optimizing machine learning algorithms.
References:
https://hal.inria.fr/hal-01405549/document
https://www.hindawi.com/journals/tswj/2014/468324/
https://www.kdnuggets.com/2018/07/genetic-algorithm-implementation-python.html |
H: What is this PIP syntax: "azureml-dataset-runtime[pandas,fuse]"
I came recently across this environment file:
name: azureml_mnist
channels:
- conda-forge
- defaults
dependencies:
- ipykernel=5.5.3
- matplotlib=3.4.1
- python=3.8
- pip
- pip:
- azureml-dataset-runtime[pandas,fuse]
Regarding this line:
azureml-dataset-runtime[pandas,fuse]
What do the square brackets mean? I've never seen packages declared like this and could not find anything in the docs to explain what this [ ] syntax means or does.
AI: Here you go, you can check several answers here:
https://stackoverflow.com/questions/46775346/what-do-square-brackets-mean-in-pip-install
So, [] denotes extra packages that should be installed together with azureml-dataset-runtime
Here is a link where you can check other extra packages required:
https://www.wheelodex.org/projects/azureml-dataset-runtime/ |
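For context on where such extras come from: a package declares them in its packaging metadata, e.g. with setuptools' extras_require. The sketch below uses made-up package and dependency names; azureml-dataset-runtime defines its own extras, listed at the wheelodex link above.
from setuptools import setup

setup(
    name="mypackage",                 # hypothetical package name
    install_requires=["numpy"],       # always installed
    extras_require={
        # installed only with: pip install "mypackage[pandas]"
        "pandas": ["pandas"],
        # installed only with: pip install "mypackage[fuse]"
        "fuse": ["fusepy"],
    },
)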
H: my graph does not show correct results of fine tuning and transfer learning, it shows a single line in each graph instead of two lines
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
AI: Based on this data, which I extracted from a comment of yours:
initial_epochs = 2.0
val_acc = [0.6486, 0.6486, 0.6486]
acc = [0.7000, 0.7000, 0.7000]
loss = [0.6015, 0.5935, 0.4653]
val_loss = [0.6964, 0.5359, 0.6738]
I can plot with:
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.4, 1]) # You had plt.ylim([0.8, 1])
plt.plot([initial_epochs-1, initial_epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.grid() # I added a grid for both plots
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1, initial_epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.grid() # I added a grid for both plots
plt.show()
I assume (based on the data I have) that the problem was due to plt.ylim([0.8, 1]): both val_acc and acc were < 0.8, meaning the axis limits were focusing on a part of the plot where there is no data.
See the following plot with plt.ylim([0.4, 1]): |
H: How to make a neural network output a specific number out of a certain range?
I have a neural network with an Input layer, 2 hidden Dense layers and an Output layer.
I would like for each neuron in the Output layer to give me a number between 0 and 2 (either 0, 1 or 2), like so:
If I use a neuron for each possibility (a neuron for 0, a neuron for 1 and another for 2) and then pick the one with the best prediction, the output layer length would be far too much.
Is there a way to implement this ? (I am fairly new to neural networks and the like)
AI: There are two ways to do that:
Scale your output data from [0, 2] to [0, 1] and apply Sigmoid activation at the end.
Make your own custom activation function that outputs everything in [0, 2]
I strongly suggest option no. 1, it's way faster to implement; a minimal sketch follows. |
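A minimal Keras sketch of option no. 1, assuming integer targets in {0, 1, 2}; the layer sizes and input shape are placeholders, not taken from the question:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(32, activation='relu', input_shape=(10,)),  # placeholder input size
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid'),                   # output in [0, 1]
])
model.compile(optimizer='adam', loss='mse')

y = np.array([0, 1, 2, 1, 0])   # original targets in [0, 2]
y_scaled = y / 2.0              # scale targets to [0, 1] to match the sigmoid
# After training, map predictions back to {0, 1, 2}:
# preds = np.rint(model.predict(X).squeeze() * 2)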
H: Large variation in cross-validation scores
I'm training a CNN with 5-fold stratified cross-validation. On the first fold, my accuracy is ~80%, on each subsequent fold the accuracy is ~50%. Finally, upon fitting the entire training set my accuracy jumps back to ~80%. I can see that on the folds with bad performance the accuracy and loss never really change. Is it simply that one fold received all/most of the signal from the dataset or is there something else going on here?
AI: Based on the description my guess is that any or all of the following problems occur:
the data is not randomized, which could explain why the first fold gets a very different performance than the others
perhaps the dataset is very small, causing massive variations in performance and overfitting.
in any case the model is very unstable, possibly due to overfitting caused by the model being too complex and/or dataset too small. |
H: How to calculate convolution for 2nd conv Layer in CNN, Do we need to average across all feature maps?
I understand that for the first layer (assuming we have a grayscale image) we calculate the convolution of a 3*3 receptive field as a weighted sum of the receptive field's pixels with the weights
$x_1 \cdot w_1 + x_2 \cdot w_2 + x_3 \cdot w_3 + \dots + x_n \cdot w_n$
But for the second layer, where we already have $N$ feature maps in the last convolution layer, how would we calculate the convolution(for a particular pixel/cell)? should we take an average of the $N$ weighted sums we have from feature maps?
If the question isn't clear, I have tried to highlight it in the image taken from the famous 3D visualization.
In the image, for the pixel in question(hovered and squared) we have inputs coming from 4 feature maps. I think they are four integers (weighted sums of the receptive field of the bottom right corner from each feature map).
How is the value (convolution) for this particular pixel/cell calculated? Should we take an average like below?
(I can add more details if the question is not clear)
(weighted_sum_fmap1 + weighted_sum_fmap2 + weighted_sum_fmap3 + weighted_sum_fmap4) / 4
AI: No, you don't average across all feature maps.
When the input has multiple channels, you need your convolution filter to have the same number of channels. Therefore, the filter "covers" the full depth of the input. Then, you simply perform the element-wise multiplication of the filter with the overlapping region in the input and add all the resulting elements together.
Therefore, if the input to a convolutional layer is a grayscale image, a filter is of dimensions [3,3,1], while if the input is a color image (with 3 channels), your filter is [3, 3, 3], like this (source):
The same happens with feature maps with N channels: we need a filter with N channels. |
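As a concrete illustration (with made-up numbers), a small numpy sketch showing that the filter multiplies and sums over all N channels at once; there is no division by N:
import numpy as np

N = 4                              # number of feature maps (channels) in the input
patch = np.random.rand(3, 3, N)    # 3x3 receptive field across all N channels
kernel = np.random.rand(3, 3, N)   # the filter has the same depth N

# One output value for this position: element-wise product summed over
# height, width and channels.
out_value = np.sum(patch * kernel)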
H: Using Linear Regression to Learn Polynomial Regression
Let's start by considering one-dimensional data, i.e., $d=1$. In OLS regression, we would learn the function
$$
f(x)=w_{0}+w_{1} x,
$$
where $x$ is the data point and $\mathbf{w}=\left(w_{0}, w_{1}\right)$ is the weight vector. To achieve a polynomial fit of degree $p$, we will modify the previous expression into
$$
f(x)=\sum_{j=0}^{p} w_{j} x^{j}
$$
where $p$ is the degree of the polynomial. We will rewrite this expression using a set of basis functions as
$$
f(x)=\sum_{j=0}^{p} w_{j} \phi_{j}(x)=\mathbf{w}^{\top} \boldsymbol{\phi}
$$
where $\phi_{j}(x)=x^{j}$ and $\phi=\left(\phi_{0}(x), \phi_{1}(x), \ldots, \phi_{p}(x)\right)$. We simply apply this transformation to every data point $x_{i}$ to get a new dataset $\left\{\left(\phi\left(x_{i}\right), y_{i}\right)\right\}$. Then we use linear regression on this dataset, to get the weights $\mathbf{w}$ and the nonlinear predictor $f(x)=\sum_{j=0}^{p} w_{j} \phi_{j}(x),$ which is a polynomial (nonlinear) function in the original observation space. Notes
How does this work? Could anyone give me an example and explain it to me in simple terms? How would I go about implementing this in Numpy?
AI: It is quite simple to understand (and to implement using matrices).
Consider a specific example (to generalise later). You have a polynomial function of a single feature $x$):
$$ f(x) = \omega_0 x^0 + \omega_1 x^1 + \ldots + \omega_n x^n $$
You can organise coefficients and features in vectors and get $f$ by a scalar product:
$$ \mathbf{\omega} = \begin{pmatrix} \omega_0 \\ \vdots \\ \omega_n \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} 1 \\ x \\ x^2 \\ \vdots \\ x^n \end{pmatrix}$$
Hence $$ f(x) = \omega^T\mathbf{x}$$.
This is nothing else than a multi-feature linear regression where the $i$-th feature is now the $i$-th power of $x$.
In numpy, imagine you have an array of data x.
To create the vector $\mathbf{x}$ above, you can do (for $n=3$, for instance)
import numpy as np
X = np.ones((len(x), 4))
X[:,1] = x
X[:,2] = np.power(x, 2)
X[:,3] = np.power(x, 3)
And then using sklearn LinearRegression,
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X, y)
UPDATE: sklearn also provides PolynomialFeatures, which performs precisely the transformation I described in numpy (you asked about numpy, but this might be useful as well). |
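A minimal sketch of that alternative (the degree and the toy data are placeholders):
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

x = np.linspace(0, 1, 20)
y = 1 + 2 * x - 3 * x**3 + 0.05 * np.random.randn(20)   # toy data

# Columns of X_poly are 1, x, x^2, x^3 -- exactly the basis functions phi_j(x).
X_poly = PolynomialFeatures(degree=3).fit_transform(x.reshape(-1, 1))
model = LinearRegression(fit_intercept=False).fit(X_poly, y)  # bias already in the first column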
H: Mutual Information in sklearn
I expected sklearn's mutual_info_classif to give a value of 1 for the mutual information of a series of values with itself but instead I'm seeing results ranging between about 1.0 and 1.5. What am I doing wrong?
This video on mutual information (from 4:56 to 6:53) says that when one variable perfectly predicts another then the mutual information score should be log_2(2) = 1. However I do not get that result:
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.feature_selection import mutual_info_classif
y = [1,1,1,1,1,0,0,0,0,0]
print("Confusion matrix:")
print(confusion_matrix(y,y))
print("Mutual information:")
result = mutual_info_classif(pd.DataFrame(y), y)
print(result)
which gives:
Confusion matrix:
[[5 0]
[0 5]]
Mutual information:
[1.28730159]
When the two variables are independent, I do however see the expected value of zero:
x = [1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1]
y = [1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0]
print("Confusion matrix:")
print(confusion_matrix(x,y))
print("Mutual information:")
result = mutual_info_classif(pd.DataFrame(x), y)
print(result)
which gives:
Confusion matrix:
[[5 5]
[5 5]]
Mutual information:
[0]
Why am I not seeing a value of 1 for the first case?
AI: Sklearn has different objects dealing with mutual information score.
What you are looking for is the normalized_mutual_info_score.
Both mutual_info_score and mutual_info_classif take into account (even if in different ways, the first as a denominator, the second as a numerator) the integration volume over the space of samples.
Your code becomes
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.metrics import normalized_mutual_info_score, mutual_info_score
y = [1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0]
print("Confusion matrix:")
print(confusion_matrix(y,y))
print("Mutual information:")
result = normalized_mutual_info_score(y, y)
result_not_normalised = mutual_info_score(y,y)
print('norm: ', result)
print('not-norm: ', result_not_normalised)
giving,
Confusion matrix:
[[10 0]
[ 0 10]]
Mutual information:
norm: 1.0
not-norm: 0.6920129648318737
Note also that $ln(2) \simeq 0.6931$. |
H: Why we use an activation function for introducing nonlinearity instead of a polynomial Perceptron implementation?
I perceive a single perceptron as a single linear function $y = a_1x_1 + a_2x_2 + ... + a_nx_n + b_0$ with the goal of calculating the best coefficient combination $a_1, a_2, ..., a_n$ that minimizes the given loss function.
The problem with this type of network is that it would not be able to perform well on a non-linear dataset, thus an activation function is used in order to tackle this. I am wondering what would happen if, instead of a linear Perceptron, we introduced a polynomial Perceptron of the form $y = a_1x_1^k + a_2x_2^r + ... + a_nx_n + b_0$, and how this would compare with the original perceptron.
AI: Well, thanks to the universal approximation theorem, from a purely theoretical point of view: absolutely nothing.
The main issue is with computation. You can find more information here. Mainly, you want functions easy to calculate (polynomials are ok) but with specific regions where derivatives are monotonic (here polynomials are not good) and approximating the identity near the origin (again, polynomials not good). |
H: NLP methods specific to a language?
What NLP methods / algorithms depend on the features existing only in some languages? For example, does French has any NLP algorithms that English NLP and Spanish NLP do not have?
AI: This question is quite open, but nonetheless, here are some:
lemmatization/stemming only makes sense in languages where there is a lemma/stem in the word. Some languages like Chinese have no morphological variations (apart from some arguable cases like the explicit plural 们), and therefore lemmatization and stemming are not applied in Chinese.
Word-based vocabularies are used to represent text in many NLP systems. However, in agglutinative and polysynthetic languages, using word-level vocabularies is crazy, because you can put together a lot of affixes and form a new word, therefore, a prior segmentation of the words is needed.
In some languages like Chinese and Japanese, there are no spaces between words. Therefore, in order to apply almost any NLP, you need a preprocessing step to segment text into words. |
H: Why is KNN better at K-Fold Cross Validation than XGBoost or Random Forest?
I've been running K-Fold cross validation multiple times for KNN, random forest and XGBoost.
KNN completes sklearn's cross_val_score much faster, consistently.
They all use the same preprocessed data, all have the same test/train split with a random state, etc.
When run individually, the timings are within 10% of each other.
When run using sklearn's cross_val_score, KNN can do 50 k-folds in less than 10 minutes, whereas the other two take over an hour each (this is consistent; I've run it 5 times for all 3 algorithms).
Mathematically, or maybe due to sklearn's code, is there a reason for this? (The dataset is 200,000 by 100 columns, at a 25% test/train split)
AI: It's not that KNN is better at k-fold cross validation.
It's just that KNN doesn't do any training apart from storing a footprint of the training data within the model.
KNN's logic resides in its inference step i.e. the predict() call where it determines the k nearest neighbors for the newly supplied instance from previously supplied training data and makes a prediction for the label.
So it's likely that KNN is faster than most other models for small to medium sized datasets. |
H: Difference between $Q(s,a)$, $V^*(s)$ and $V^\pi(s)$ in Markov Decision Process?
I am new to RL and I am trying to understand how to find solutions of an MDP.
This is what I understand so far -> since the nature of our environment is stochastic, at a state 's' if we take an action 'a' we do not know which state we would end up in. So we define the following terms :
T(s, a, s') or P(s'/s, a) - transition probability. This is the probability that starting at state 's' and taking action 'a' we end up in state s'
V(s) - value of a state, which tells us how good or bad this state 's' is.
R(s) - reward of being in a state 's', this is also known as the immediate reward of being in state 's'
Q(s, a) - state-action value function. This tells us that, given we are at a state 's' and we take a particular action 'a', what the expected utility of the state s' we might end up in is
Now coming to the part that confuses me.
From what I understand so far, the difference between v(s) and q(s,a) is that v(s) gives us the utility of a state assuming we do not know which action to take, while q(s,a) is the value of that state given we take a particular action 'a'.
Coming to $v^\pi(s)$ = same as what I defined v(s) to be, so the value of a state
$v^*(s)$ = value of a state given we take the optimal action 'a' so this is like following max (q(s)) over all actions
I read online how to mathematically define these terms and I am not sure if I correctly understand them. Also, I don't really understand the difference between $v^*(s) $ and $v^\pi(s)$
I found some conflicting ideas online but since I am just starting out I am not sure what is lacking in my understanding:
$$V^\pi(s) = R(s) + \gamma \max_a \sum_{s'} P(s'/s,a) \, V^\pi(s')$$
$V^*(s) = \max_a Q^*(s,a)$ and here we can substitute for $Q^*(s,a)$ as $Q^*(s,a) = \sum_{s'} T(s,a,s')\,[R(s) + \gamma V^*(s')]$ so we get $V^*(s) = \max_a \sum_{s'} T(s,a,s')\,[R(s) + \gamma V^*(s')]$
$$V(s) = \gamma \max_a \sum_{s'} P(s'/s,a) \, V(s')$$
As you can see there are different equations which are defining the same V(s) quantity? So what exactly is the clear definition of V(s)? Are these equations just dependent on the case of RL we are dealing with?
I don't understand how to distinguish between these, any suggestions/links/readings are much appreciated! Thanks!
AI: Your confusion seems to come from mixing up between some policy $\pi$ and an optimal policy $\pi^*$. Your summary is generally correct, but missing these extra details.
Let me try go through it again. Starting with the MDP definitions:
First of all, we have the transition probabilities $T(s,a,s') = P(s'|s,a)$ which are conditional probabilities of arriving at state $s'$, given that we've taken action $a$ in state $s$
The (expected) reward is generally associated with the state-action pair $R(s,a)$ - there is some slight variations about it in literature, but that's a subject for a different discussion.
Then we have the policy $\pi(a,s) = P(a|s)$ - the probability of taking an action $a$ in state $s$ for an agent following the policy.
Given all three things above: $T$, $R$ and $\pi$ (plus a discount factor $\gamma$) you can define your value functions as average discounted rewards collected by the agent that follows $\pi$.
$V^\pi(s)$ is the average reward that the agent following $\pi$ will collect, when starting from the state $s$.
$Q^\pi(s,a)$ is the average reward that the agent following $\pi$ will collect, when starting from the state $s$ and taking an action $a$.
Quite often authors drop the index $\pi$ and assume implicitly that we are dealing with some policy $\pi$, but one should keep in mind that some (maybe unspecified) policy is always in the context when we are talking about value functions $V$ or $Q$.
These value functions satisfy the following recursive relationships for any policy $\pi$ (note that there's no $\max_a$ in these):
$$V^\pi(s) = \sum_a\pi(a,s)\left[R(s,a) + \gamma \sum_{s'}T(s',s,a)V^\pi(s')\right]$$
$$Q^\pi(s,a) = R(s,a) + \gamma \sum_{s',a'}T(s',s,a)\pi(a'|s')Q^\pi(s',a')$$
Now, some policies maximize the expected reward - these policies are called "optimal" and standardly denoted with a star: $\pi^*(a,s)$ - optimal policy. The index $\pi$ for the value functions is usually replaced with a star as well: $V^*$ and $Q^*$. The optimal value functions satisfy Bellman equations (the ones that have $\max_a$ in them):
$$V^*(s) = \max_a\left[R(s,a) + \gamma \sum_{s'}T(s',s,a)V^*(s')\right]$$
$$Q^*(s,a) = R(s,a) + \gamma \sum_{s'}T(s',s,a)\max_{a'}Q^*(s',a')$$
Hope this clarifies it. |
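To make the optimal-value equations concrete, here is a tiny value-iteration sketch on a made-up 2-state, 2-action MDP (the numbers are arbitrary; it just shows the update $V(s) \leftarrow \max_a [R(s,a) + \gamma \sum_{s'} T(s,a,s') V(s')]$):
import numpy as np

gamma = 0.9
T = np.array([[[0.8, 0.2], [0.1, 0.9]],   # T[s, a, s'], made-up transition probabilities
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],                 # R[s, a], made-up rewards
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(200):                      # value iteration
    Q = R + gamma * (T @ V)               # Q[s, a] = R[s, a] + gamma * sum_s' T[s, a, s'] * V[s']
    V = Q.max(axis=1)                     # V*(s) = max_a Q*(s, a)

pi_star = Q.argmax(axis=1)                # greedy (optimal) policy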
H: Which Policy Gradient Method was used by Google's DeepMind to teach an AI to walk
I just saw this video on Youtube.
Which Policy Gradient method was used to train the AI to walk?
Was it DDPG or D4PG or what?
AI: They used Distributed Proximal Policy Optimization (DPPO). In the article that the video is associated with, they provide a brief overview of it:
In order to learn effectively in these rich and challenging domains, it is necessary to have a reliable and scalable reinforcement learning algorithm. We leverage components from several recent approaches to deep reinforcement learning. First, we build upon robust policy gradient algorithms, such as trust region policy optimization (TRPO) and proximal policy optimization (PPO) [7, 8], which bound parameter updates to a trust region to ensure stability. Second, like the widely used A3C algorithm [2] and related approaches [3] we distribute the computation over many parallel instances of agent and environment. Our distributed implementation of PPO improves over TRPO in terms of wall clock time with little difference in robustness, and also improves over our existing implementation of A3C with continuous actions when the same number of workers is used.
Here are some resources:
The DeepMind blog post describing the method
The original article: Emergence of locomotion behaviours in rich environments |
H: Can Boosting and Bagging be applied to heterogeneous algorithms?
Stacking can be achieved with heterogeneous algorithms such as RF, SVM and KNN. However, can such heterogeneity be achieved in Bagging or Boosting? For example, in Boosting, instead of using RF in all the iterations, could we use different algorithms?
AI: The short answer is yes. Both the boosting and bagging meta-algorithms do not assume specific weak learners, thus any learner will do, regardless of whether the learners use the same algorithm or different ones.
The way the meta-algorithms are defined, they use the weak learners as black-box models, without reference to their implementation or algorithmic principle, nor similarity.
For Boosting:
In machine learning, boosting is an ensemble meta-algorithm for
primarily reducing bias, and also variance[1] in supervised learning,
and a family of machine learning algorithms that convert weak learners
to strong ones.[2] Boosting is based on the question posed by Kearns
and Valiant (1988, 1989):[3][4] "Can a set of weak learners create a
single strong learner?"
A weak learner is defined to be a classifier that is only slightly
correlated with the true classification (it can label examples better
than random guessing). In contrast, a strong learner is a classifier
that is arbitrarily well-correlated with the true classification.
For Bagging:
Although it is usually applied to decision tree methods, it can be
used with any type of method. Bagging is a special case of the model
averaging approach.
For Bagging, the required condition to improve performance is that the weak learners should be unstable (so that perturbed versions of the learners affect outcomes), but other than that, learners are black boxes, as mentioned above. A manual sketch of heterogeneous bagging is given after the references below.
For further reference:
Boosting, wikipedia
Bagging, wikipedia
The Strength of Weak Learnability
Bagging Predictors |
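As an illustration (not from the original answer): sklearn's BaggingClassifier assumes a single base-estimator type, but the bagging recipe itself does not, so heterogeneous bagging can be sketched by hand:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

# Fit each (different) learner on its own bootstrap sample of the data.
learners = [KNeighborsClassifier(), SVC(), DecisionTreeClassifier()]
for clf in learners:
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap resampling
    clf.fit(X[idx], y[idx])

# Aggregate by majority vote, exactly as ordinary bagging would.
votes = np.stack([clf.predict(X) for clf in learners])
bagged_pred = (votes.mean(axis=0) > 0.5).astype(int)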
H: Are we allowed to transform the continuous target variable by creating a log transformation in order to have a normal distribution?
The following code gives the target variable Item_Outlet_Sales before transformation and Item_Outlet_Sales_log which is transformed
#treat extreme values in Item_Outlet_Sales
train['Item_Outlet_Sales_log'] = np.log(train.Item_Outlet_Sales)
test['Item_Outlet_Sales_log'] = np.log(test.Item_Outlet_Sales)
plt.figure(1)
plt.subplot(121)
sns.distplot(train.Item_Outlet_Sales)
sns.distplot(test.Item_Outlet_Sales);
plt.subplot(122)
sns.distplot(train.Item_Outlet_Sales_log)
sns.distplot(test.Item_Outlet_Sales_log);
Then use the new target variable (Item_Outlet_Sales_log):
#creating dummies for the training dataset
X = train.drop('Item_Outlet_Sales', 1) #drop the log target column
y = train.Item_Outlet_Sales_log
X = pd.get_dummies(X)
train = pd.get_dummies(temp_train)
AI: If your objective is to convert non-normal data into something that looks more normal / Gaussian, try the Box-Cox transform here.
It is a family of transforms that looks at your data and provides the best possible power transformation; a short scipy sketch follows. |
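A minimal scipy sketch (assuming the train DataFrame and column name from the question; Box-Cox requires strictly positive values):
from scipy import stats

sales = train['Item_Outlet_Sales'].values
transformed, fitted_lambda = stats.boxcox(sales)      # finds the best lambda automatically
train['Item_Outlet_Sales_boxcox'] = transformed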
H: How to operate dataset from paperswithcode.com?
I'm trying to use the IG-3.5B-17k dataset from paperswithcode.com, but I can't figure out how to do that exactly.
How do I do it? I guess I need to use the site's API? Am I even able to use this dataset?
AI: This is specified in the dataset description in paperswithcode.com (emphasis mine):
IG-3.5B-17k is an internal Facebook AI Research dataset for training image classification models. It consists of hashtags for up to 3.5 billion public Instagram images.
So the dataset is not public.
In the paper, however, the authors argue that the images and their hashtags are "visible" in Instagram:
Our datasets have two nice properties: public visibility and simplicity. By using publicly accessible images, the data used in our experiments is
visible to everyone. To see what it looks like, the images are browsable by hashtag at https://www.instagram.com/explore/tags/ followed by a specific hashtag;
for example https://www.instagram.com/explore/tags/brownbear shows images tagged with #brownbear. Our data is also taken from the “wild”, essentially as-is, with minimal effort to sanitize it. This makes the dataset construction process particularly simple and transparent.
I understand that they are saying "you can go and query Instagram yourself" to see the images in the dataset, but I don't think this is actually practical or even allowed in their terms of service. |
H: Is it wrong to transform the target variable and test the model without dropping the column that was transformed? What's the disadvantage about it?
I have a linear regression model, I have transformed the target variable Item_Outlet_Sales into Item_Outlet_Sales_log on both training and testing dataset. I did not delete the Item_Outlet_Sales.
Here is the snippet:
#treat extreme values in Item_Outlet_Sales
train['Item_Outlet_Sales_log'] = np.log(train.Item_Outlet_Sales)
test['Item_Outlet_Sales_log'] = np.log(test.Item_Outlet_Sales)
sns.distplot(train.Item_Outlet_Sales_log);
sns.distplot(test.Item_Outlet_Sales_log);
#distribution
I dropped the target variable: Item_Outlet_Sales_log and assigned it to y
#creating dummies for the training dataset
X = train.drop('Item_Outlet_Sales_log', 1) #drop the log target column
y = train.Item_Outlet_Sales_log
X = pd.get_dummies(X)
train = pd.get_dummies(train)
test = pd.get_dummies(test)
Is it wrong to do this in terms of giving the model proper training? What is the disadvantage of it? Is it recommended to delete the old target variable or not?
AI: The problem is that, by definition, your target variable is not available at inference time, and that is why you want to predict it. If your target variable was available at inference time, then there is no point in predicting it.
Therefore, if you use the target variable (or a transformation of it) as input to your model, what data are you going to feed to that variable at inference time? |
H: select hyperparameters using Latin hypercube sampling (LHS) from a large matrix/grid of parameter combinations
I have a matrix with each row corresponds to a hyperparameter for the XGBoost model. There are seven parameters to tune in XGBoost (as shown below: nrounds/iterations, max_depth, eta, gamma, colsample_byTree, min_child_weight, and subsample). I did a literature review to specify the range and interval of values for each parameter. Using those ranges and intervals, the parameter space generated around 62,500 parameter combinations. I am using R caret::train function to generate the best hyperparameter combination for my dataset. However, the amount of simulations (62,500) is too much. I read about the Latin hypercube sampling (LHS) and I think that is what I need to reduce the number of simulations by applying initial selection of hyperparameters using LHS. But I am having trouble implementing the approach in my dataset. My goal is to generate a manageable number of hyperparameter combinations (i.e., ~500) using LHS, and then use caret::train function to select best parameters. I would like to ask for help in implementing LHS using my parameter space.
nrounds <- seq(from = 200, to = 1000, by = 200)
maxdepth <- seq(from = 2, to = 10, by = 2)
eta <- c(0.01, 0.05, 0.1, 0.2, 0.3)
gamma <- seq(from = 0, to = 0.4, by = 0.1)
colsample_bytree <- seq(from = 0.4, to = 1, by = 0.2)
min_child_weight <- seq(from = 1, to = 5, by = 1)
subsample <- seq(from = 0.6, to = 1, by = 0.1)
dataGrid <- expand.grid(nrounds, maxdepth, eta, gamma, colsample_bytree, min_child_weight, subsample)
AI: dials from Tidymodels has a grid_latin_hypercube() function you can use for this: https://dials.tidymodels.org/reference/grid_max_entropy.html |
H: I am confused about these models
In the first problem, we are told to accept the maximum number of good customers, if at least 98% of the customers that do not repay their debt are correctly identified.
I am confused about what is meant by this. Do I need to train the models and set the threshold value accordingly so that I obtain 98% accuracy? And for the second one, do I need to decrease the threshold value so that accuracy comes down to 85%?
I am stuck in this for a while. Please help me show the correct path. Thanks.
data: German Credit Risk data.
Variables: Several independent variables with the dependent variable "Credit_Risk" which responses in "Good" and "Bad".
AI: The goal is to identify at least 98% of the customers that do not repay their debt, so the bank can "accept a maximum number of 'good' customers that can be granted loans". Here the goal is focused on the bad customers.
In the second problem, at least 85% of the good customers should be accepted, while the side focus is to reject as many bad customers as possible.
I think the difference between 1) and 2) is identifying bad customers in 1) and good customers in 2). |
H: Multiple Features with the same categorical value
In working with a telecommunications data set with multiple categorical variables which all depend on Internet Service, a separate categorical variable, I ran into the problem where 'No Internet Service' is dummy encoded several times in my training data. See the below pie charts for an illustration of the problem.
I was wondering what can be done to avoid this repetition of 'No Internet Service' as a feature value across several features, as it increases the dimensionality of my training data significantly and makes all of my features very highly interdependent.
Feature selection after encoding helps, but often leaves just the dummy variable 'No Internet Service' when removing the categories which don't contribute to the label.
AI: The increase in dimensionality due to 'No Internet Service' could be handled with a two-level model: if 'No Internet Service' is true, you run the first-level model with a handful of other variables (excluding those which depend on internet service availability). The second-level model runs only when internet service is available, with all the variables. |
H: Dimension of output in Dense layer Keras
I have the sample following model
from tensorflow.keras import models
from tensorflow.keras import layers
sample_model = models.Sequential()
sample_model.add(layers.Dense(32, input_shape=(4,)))
sample_model.add(layers.Dense(16, input_shape = (44,)))
sample_model.compile(loss="binary_crossentropy", optimizer="adam", metrics = ["accuracy"])
Input for the model:
sam_x = np.random.rand(10,4)
sam_y = np.array([0,1,1,0,1,0,0,1,0,1,])
sample_model.fit(sam_x,sam_y)
The confusion is that fit should have thrown a shape-mismatch error, as the input_shape for the 2nd Dense layer is given as (None, 44) but the output of the 1st Dense layer (which is the input of the 2nd Dense layer) will be of shape (None, 32). But it ran successfully.
I don't understand why there was no error. Any clarification will be helpful.
AI: The answer can be found by just printing
sample_model.summary()
giving
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 32) 160
_________________________________________________________________
dense_1 (Dense) (None, 16) 528
=================================================================
Total params: 688
Trainable params: 688
Non-trainable params: 0
_________________________________________________________________
Indeed, the input_shape argument in a layer that is not the first one is ignored in a Sequential model. |
H: Original k-Means Research Paper
I'm having difficulty finding the original published paper proposing k-means as an algorithm. I want to find it as a reference for similar work, inspired by this TowardsDataScience article.
I have tried wiki's references and Google Scholar but with no luck.
AI: According to the Wikipedia article, it doesn't look like there is a definitive research article that introduced the k-means clustering algorithm.
Hugo Steinhaus had the original idea in 1956. The standard algorithm was proposed by Stuart Lloyd at Bell Labs in 1957, but apparently that wasn't published until 1982. |
H: Data wrangling dates
I have a feature with data creation dates. I have normalized them all to the same format and split them into 'day', 'month' and 'year' columns. But now I have a question: should I apply normalization or standardization to these columns, or does this not make sense for dates?
AI: You might want to apply one-hot encoding instead. These are not really continuous features. If you consider each day of the week or month of the year a category, then you can instead treat them as categorical variables.
The year is trickier as it does not repeat itself. I would suggest, instead of using the year directly, to use a date difference, which can then be treated as a continuous variable. You can do any regular scaling (standard scaling, max abs scaling, ...); a small pandas sketch follows. |
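A minimal pandas sketch of both ideas (the column name, example dates and reference date are assumptions):
import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2019-03-01', '2020-07-15', '2021-01-30'])})

# Treat the month (and similarly the day) as a category via one-hot encoding.
month_dummies = pd.get_dummies(df['date'].dt.month, prefix='month')

# Replace the year with a continuous "days since a reference date" feature,
# which can then be scaled like any other numeric column.
df['days_since_ref'] = (df['date'] - pd.Timestamp('2019-01-01')).dt.days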
H: How to select the split point for Continuous Attribute Age
For the above table, the midpoints for possible split points are 22.5 and 35. I have calculated the entropy and gain for each value, and 35 had the minimum entropy and the highest gain. Is that correct?
Given High -> (-), and Low -> (+)
D<22.5 => [0+, 2-],
Entropy (D<22.5) = 0, since all the values are of the same class High.
D>22.5 => [2+, 2-],
Entropy (D>22.5) = 1, since the values are distributed equally among Low and High classes.
D<35 => [2+, 3-],
Entropy (D<35) = -[2/6 x $log_2$(2/6) + 3/6 x $log_2$(3/6)]= 0.5
D>35 => [0+, 1-],
Entropy (D>35) = 0, since all the values are of the same class High
Gain (D, Age>22.5) = 0.918 - 2/6 (0) - 4/6 (1) = 0.2513
Gain (D, Age>35) = 0.918 - 5/6 (0.5) - 1/6 (0) = 0.5103
Is that right?
AI: For the 35 split, there is an error in the denominator. It should be 5, as the total number of items for D<35 is 5:
D<35 => [2+, 3-], Entropy (D<35) = -[2/5 x $log_2$(2/5) + 3/5 x $log_2$(3/5)] |
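For checking such calculations by hand, a tiny helper (the class counts below are the ones from the question):
import numpy as np

def entropy(counts):
    p = np.array(counts) / np.sum(counts)
    p = p[p > 0]                    # ignore empty classes (0 * log 0 = 0)
    return -(p * np.log2(p)).sum()

print(entropy([2, 3]))              # D<35 with 2 Low and 3 High -> about 0.971
print(entropy([2, 2]))              # D>22.5 -> 1.0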
H: Precision or Recall when dealing with critical cases?
I have to create an AI that classifies mutliple objects in order to accept them or not inside a machine.
The problem is that some objects could be really harmful to the machine if they get accepted.
Should I focus on a high Precision or a high Recall, in order not to accept this kind of object?
AI: Let us say your target variable "1" represents "Harmful Object".
Precision answers the question - Of all objects predicted as Harmful - How many are really harmful ?
Recall answers the question - Of all the Harmful objects out there - How many were correctly identified ?
You want to focus on increasing Recall in your case, since that reduces the overall risk to the machine. |
H: Do weights of keywords for each topic add up to 1 in topic modeling?
When you run a topic modeling (say LDA), you can get outputs for some number of topics with corresponding keywords and their weights. Based on my understanding, people usually output top 10 or top 20 keywords for each topic. For these keywords, they also have weights which represent how important each keyword is for a certain topic.
For example, if I decided to draw out top 10 keywords for each topic, then the example output will be shown below.
topic 0: 0.2*keyword1 + 0.15*keyword2 + 0.09*keyword3 +... + 0.005*keyword10
topic 1: ...
...
topic n
I'm not sure how many maximum keywords I can draw out for each topic but do these weights for each topic add up to 1?
AI: Yes, the weights would add up to 1. This is a Dirichlet random variable - review the documentation on scipy here - https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.dirichlet.html |
H: How can I plot a line for time series data with categorical intervals in R
I am working with single time-series measurements that I want to plot for the time window of about 1 week.
This is the data I am working with.
This is my R script:
library(tidyverse)
library(ggplot2)
filesource <- "C:/ ... /testData.csv"
df <-read.csv(filesource, header = TRUE)
ggplot() +
geom_line(data = df, aes(x = date, y = value, group = 1), color = "red") +
ggtitle("Some Measure over Time") +
xlab("Time") +
ylab("Some Measure in %")
This produces this plot.
What I want is to have the individual unique weekdays show on the x-axis like this, as if I would plot the days as individual categories but only show the first one of each day. I cannot really hardcode this, because I am working with different participants, days and value amounts per day.
Expected Outcome:
So I created a new variable with the weekdays:
df$day <- weekdays(as.Date(df$date, '%d-%m-%Y'))
However, when I want to use this column as the x-axis variable, the days are not in the right order, and all values of one day obviously get plotted on top of each other:
geom_line(data = df, aes(x = day, y = value, group = 1), color = "red")
I have seen this being somewhat solved in python: Visualizing Time Series Data
However, I really want to use R and Markdown to create automated participant reports.
If this is easier to accomplish with another plotting function in R, I am open ears. I just like the customizability of ggplot.
I hope my example is clear. I guess this can be solved with the right ggplot() parameters and settings.
Does anyone have a solution for ending up with something more like the expected outcome montage?
AI: Would something like this work? I simply add an extra column that indicates the row number (which is later used as the x-axis) to make sure all values are displayed as new points instead of being plotted on top of each other for the same day. I then specify the custom x ticks and labels by selecting the first row for each day to get the row number (which specifies where the ticks and labels have to be drawn) and the day name (which specifies what the labels should display).
library(readr)
library(dplyr)
library(ggplot2)
df <- read_csv("testData.csv") %>%
mutate(
date = as.Date(date, "%d-%m-%Y"),
day = weekdays(date),
row = row_number()
)
ticks <- df %>% group_by(day) %>% filter(row_number() == 1) %>% select(row)
ggplot() +
geom_line(data = df, aes(x = row, y = value, group = 1), color = "red") +
ggtitle("Some Measure over Time") +
xlab("Time") +
ylab("Some Measure in %") +
scale_x_continuous(breaks=pull(ticks, row), labels=pull(ticks, day)) |
H: What representation of data should I choose, pandas, numpy or tensors to train neural network model
I have a pandas DataFrame divided into X_train, X_test, y_train, y_test by train_test_split, and I am using it to train my neural network model for binary classification. It is taking some time and I was wondering if it would be faster if I changed it from a pandas.DataFrame to a numpy.array or a tensor.
AI: It's hard to say for sure without knowing more about the size of your dataset, the architecture of your model, and the libraries you are using.
Changing from pandas.Dataframe to numpy.array is very unlikely to make your training faster. Pandas dataframes are already backed by numpy arrays.
Depending on what neural network library you are using and what hardware you have available, changing from Dataframes to tensors may help. Libraries like Tensorflow and PyTorch allow you to move computation to a GPU/TPU, which may speed up training.
The benefit of using tensor data structures from DL libraries is that they can be moved to a GPU/TPU. They aren't inherently faster. So if you don't have a GPU, or if you are using scikit-learn to train the model, then there's no point switching from pandas to tensors |
H: Why are convolutions still used in some Transformer networks for speech enhancement?
So I’ve read in Attention is All You Need that Transformers remove the need for recurrence and convolutions entirely. However, I’ve seen some TNNs (such as SepFormer, DPTNet, and TSTNN) that still utilize convolutions. Is there any particular reason for this? Doesn’t that defeat the purpose of Transformers?
AI: We find some justifications in the Conformer paper:
Convolutions are better than Transformers at detecting fine-grained patterns:
While Transformers are good at modeling long-range global context, they are less capable to extract fine-grained local feature patterns. Convolution neural networks (CNNs), on the other hand, exploit local information and are
used as the de-facto computational block in vision.
Together, Transformers and convolutions work better than separately:
Recent works have shown that combining convolution and self-attention improves over using them individually [14]. Together, they are able to learn both position-wise local features, and use content-based global interactions. |
H: I'm trying to do a time series model without a datetime field in python. Is this possible?
I have a dataset with data like this:
Day Revenue
1 1.2
2 1.5
3 1.1
4 1.34
I want to do a time series model on it, but am getting this error:
ValueError: view limit minimum -35.45 is less than 1 and is an invalid Matplotlib date value. This often happens if you pass a non-datetime value to an axis that has datetime units
When I plt, it assigns all of the date to 1/1/1970. I understand why, because it's not a date time field. Out of curiosity, I tried converting the Day column to a datetime, but it assigned every day to 1/1/70. Is there a way to either convert the column to a datetime field and have it assign a new date starting with a specific date (say 1/1/2017, 1/2/2017, etc) or is there a work around when you just have the day counts (1,2,3,4)?
AI: You can use timedelta function from datetime to achieve this, starting from a known date.
Example code can be something like,
from datetime import datetime, timedelta
start = datetime.strptime('2021-01-01', '%Y-%m-%d')
all_dates = [start + timedelta(x) for x in range(10)]
print(all_dates)
Replace the range(10) above with the sequence of the actual days in your dataset. |
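Alternatively, a small pandas sketch that maps the Day counts from the question directly to dates (the start date is an assumption):
import pandas as pd

df = pd.DataFrame({'Day': [1, 2, 3, 4], 'Revenue': [1.2, 1.5, 1.1, 1.34]})

start = pd.Timestamp('2017-01-01')
df['date'] = start + pd.to_timedelta(df['Day'] - 1, unit='D')   # Day 1 -> 2017-01-01, Day 2 -> 2017-01-02, ...
df = df.set_index('date')   # matplotlib/pandas will now treat the x-axis as dates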
H: Help with Classification using scikit-learn models
I'm using the Titanic data set to classify the missing Cabins. There are a lot of missing Cabin values. My objective is just to assign the letter of the Cabin without the room number, so I'm just wanting to use the models to assign the section letter. I've used 4 models:
(2) RandomForestClassifier with two different parameters
a LinearSVC model
a OVR with SVC as the base estimator
However, my scores are extremely poor. I have not tried using the GridSearchCV yet to try to find the best parameters for the models because I wanted to see if there is any recommendations on which parameters to use that could possible give better scores than what I was able to get.
Here is my notebook to see my work:
https://github.com/SugarfreeTX/kaggle/blob/master/cabin_classification.ipynb
The data set is in the repository.
AI: Let us try to think from first principles for this problem. It is not clear what outcome you want to achieve by trying to predict the cabin section here. I'm assuming it is purely to learn to apply ML algorithms. We can hypothesize that the cabin sections assigned to passengers would've been assigned by their class and the fare paid.
You can note (from the notebook link) that there are about 295 rows in the data where you have cabin section calculated. There are 7 unique values for cabin class. So you are effectively trying to fit a multi-class classifier with 7 possible output values from a data of size 295. It is unlikely to give much accuracy no matter which algorithm you choose.
You might find some mild success by trying to predict it using fare paid and passenger class. Other variables in dataset (sex, age) may not be that useful for this task.
Also, with so few columns and rows, using random forest etc. is overkill here (you don't have a large number of columns). tl;dr: this is probably not a good task for machine learning. |
H: VC dimension for Gaussian Process Regression
In neural networks, the VC dimension $d_{VC}$ approximately equals the number of parameters (weights) of the network. The rule of thumb for good generalization is then $N \geq 10\, d_{VC} \approx 10 \cdot (\text{number of weights})$.
What is the VC dimension for Gaussian Process Regression ?
My domain is $X = \mathbb{R}^{25}$, meaning I have 25 features, and I want to determine the number of samples $N$ I must have to archive good generalization.
AI: The expressiveness of the Gaussian process grows with the number of training points. So the Vapnik-Chervonenkis dimension is in fact infinite (pretty much the same way it's infinite for k-nearest neighbors) and unfortunately your rule of thumb is not applicable here.
You should probably rely on train/validation split to estimate the generalization. From my experience, GP generalizes much better than neural nets, but the exact required train size depends on data distribution complexity itself. |
H: Understanding clusters after applying PCA then K-means
I have a dataset grouped at the customer level, and the columns are sum_mexico, sum_uk, etc., to indicate if the customer has spent money at stores in those countries, and similarly counts for these as well. I end up with 200 columns.
I would like to cluster this data to observe the spending habits and see if i can group them by these features. I'm unsure what clustering method would be best but i would like to provide meaningful results to business.
I've read about using PCA then k-means. E.G. you can carry out K-means on your Principcal componenets. What i don't understand is how i can then interpret the results. e.g. say i have the below situation from https://365datascience.com/tutorials/python-tutorials/pca-k-means/:
I can visually see 4 clusters, but how can I get the characteristics of each cluster? What can I say about cluster 1, for example: is it characterized by high spend in Mexico? (assuming I got these results myself using my own data and the steps from the link!) My end goal is to understand what characterizes each cluster, e.g. it may be that certain customer IDs have high spends in Spain and wine, etc.
AI: PCA removes the connection with the original features, so the interpretation of the visualisations in the principal component space is not very meaningful.
E.g. cluster A has higher values of PC1, where cluster B has higher values of PC2.
If you can clearly see that PC1 is only representative of Feature X, then fine, but this isn't often the case.
Instead use PCA to discover which features best represent the data, and use this to remove unnecessary (unrepresentative) features so that you can visualise the original data using its most important original features. And therefore describe the clusters with meaningful differences.
You might want to experiment with a scatter matrix, to explore which feature spaces have the cleanest distinction between clusters; a small sketch of profiling the clusters with the original features follows. |
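A small sketch of that last step, profiling clusters with the original features (df is assumed to be your customers x spending-features table; names are placeholders):
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# df: one pandas row per customer, columns like sum_mexico, sum_uk, ...
X = StandardScaler().fit_transform(df)
labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)

# Mean of each original feature per cluster: large values show what
# characterizes a cluster (e.g. high spend in Mexico).
profile = df.assign(cluster=labels).groupby('cluster').mean()
print(profile)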
H: Why does the MAE still remain, at all?
This may seem to be a silly question. But I just wonder why the MAE doesn't reduce to values close to 0.
It's the result of an MLP with 2 hidden layers and 6 neurons per hidden layer, trying to estimate one output value depending on three input values.
Why is the NN (simple feedforward and backprop, nothing special) not able to maybe even overfit and meet the desired training values?
Cost function = $0.5 \, (\text{Target} - \text{Model output})^2$
EDIT:
Indeed, I found an inconsistency in the input data.
Already cheering, I was hoping to see a better result, after fixing the issue with the input data. But what I got is this:
I'm using a Minibatch-SGD and now I think it might get trapped in a local minimum. I read about Levenberg-Marquardt algorithm, which is said to be stable and fast. Is it a better algorithm for global minimum detection?
AI: There can be other reasons related to the model, but the simplest explanation is that the data contains contradictory patterns: if the same features correspond to different target values, there is no way for any model to achieve perfect performance.
Let me illustrate with a toy example:
x y
0 4
0 4
1 3
1 4
1 3
1 3
2 8
2 7
2 7
2 6
The best a model can do with this data is to associate x=0 with y=4, x=1 with y=3 and x=2 with y=7. Applying this best model to the training set would lead to 3 errors and the MAE would be 0.3. |
H: Which Algorithm did OpenAI used to create a hide and seek playing Agent?
I just saw this video on youtube: https://www.youtube.com/watch?v=kopoLzvh5jY&t=9s
Which Algorithm did OpenAI used to create a hide and seek playing Agent?
Was it Genetic Algorithm or Policy Gradients or something else?
If it was Policy Gradient method, then which Policy Gradient method did they used?
AI: This is specified in the original paper that led to that video:
Policies are optimized using Proximal Policy Optimization (PPO) (Schulman et al., 2017) and Generalized Advantage Estimation (GAE) (Schulman et al., 2015)
They, nevertheless, used concepts often used in evolutionary algorithms, specifically competitive co-evolution, as the goal of the seeking agents is contrary to the goal of the hiding agents, and this drives the reinforcement signal. |
H: Minimal working example or tutorial showing how to use Pytorch's nn.TransformerDecoder for batch text generation in training and inference modes?
I want to solve a sequence-to-sequence text generation task (e.g. question answering, language translation, etc.).
For the purposes of this question, you may assume that I already have the input part already handled. (I already have a tensor of dimensions batch_size x num_input_tokens x input_dim representing the input sequences. Also, all input sequences in my problem are of the same length, so no masking is required on the input side of things).
Now, I want to generate the output sequences using nn.TransformerDecoder. I'm aware of Pytorch's official tutorial SEQUENCE-TO-SEQUENCE MODELING WITH NN.TRANSFORMER AND TORCHTEXT. Unfortunately, the official tutorial doesn't meet my needs, for the following reasons:
nn.TransformerDecoder is not used in the example.
The example is about language modeling, not text generation. There is no forward loop that generates text word by word.
I've searched around the web and I've found a few things, but nothing like a simple and minimal working example that directly applies to my problem setting. Concretely, on the output side of things I need the following:
I want to generate output sequences in batch. I've found codes on GitHub where people appear to be doing text generation, but they do it for a single sequence at a time, not a batch of multiple sequences.
The output sequences may have different lengths.
I want to train my model with the teacher-forcing strategy and batches of multiple sequences. Given that in training I know the lengths of the sequences in advance, you may assume that I already have my batches padded with zeroes. However, I still need to figure out how to implement the forward function of my model, with a generation loop that uses nn.TransformerDecoder. Basically, I need to figure out how to iterate word-wise over my batch of output sequences, masking out the future words in each step (so that the model doesn't cheat by trivially predicting the next words).
Then, I need a similar forward function for inference mode. I need to figure out how to implement the generation loop to do basically the same as in training mode, except that instead of teacher-forcing I want to implement greedy search (i.e. use the tokens with highest predicted probability at iteration i as the next input for iteration i+1).
I already know how to do all this using LSTMs. Below you can see the forward function of a model that I implemented in the past to do exactly what I just said with an LSTM. The same forward function is used for both training and inference, depending on the value of the variable 'mode':
def forward(
    self,
    image_local_features,
    question_vectors,
    answers=None,
    max_answer_length=None,
    mode='train',
):
    if mode == 'train':
        batch_size, max_answer_length = answers.shape
        assert answers is not None
    else:
        batch_size = image_local_features.size(0)
        assert max_answer_length is not None
    y = self.embedding_table(self.start_idx).expand(batch_size, -1)
    o = torch.zeros(batch_size, self.hidden_size).to(DEVICE)
    h = self.W_h(question_vectors)
    c = self.W_c(question_vectors)
    if mode == 'train':
        answer_embeddings = self.embedding_table(answers.permute(1,0))
        assert answer_embeddings.shape == (max_answer_length, batch_size, self.embed_size)
    output = []
    for t in range(max_answer_length):
        y_bar = torch.cat((y,o),1)
        assert y_bar.shape == (batch_size, self.embed_size + self.hidden_size)
        assert h.shape == (batch_size, self.hidden_size)
        assert c.shape == (batch_size, self.hidden_size)
        h, c = self.lstm_cell(y_bar, (h, c))
        e = (self.W_attn(image_local_features) * h.unsqueeze(1)).sum(-1)
        att = torch.softmax(e,-1)
        a = (image_local_features * att.unsqueeze(2)).sum(1)
        assert a.shape == (batch_size, self.image_local_feat_size)
        u = torch.cat((a,h),1)
        assert u.shape == (batch_size, self.hidden_size + self.image_local_feat_size)
        v = self.W_u(u)
        o = self.dropout(torch.tanh(v))
        assert o.shape == (batch_size, self.hidden_size)
        output.append(self.W_vocab(o))
        if mode == 'train':
            y = answer_embeddings[t]  # teacher-forcing
        else:
            y = self.embedding_table(torch.argmax(output[t], 1))  # greedy search
        assert y.shape == (batch_size, self.embed_size)
    output = torch.stack(output, 1)
    assert output.shape == (batch_size, max_answer_length, self.vocab_size)
    return output
Another way to phrase my question would be: how can I reimplement what I did with LSTMs using nn.TransformerDecoder instead?
Any minimal working / hello world example that shows how to do batch training and batch inference with nn.TransformerDecoder for text generation will be very appreciated.
Note: alternatively, if there is a straightforward way of accomplishing the same with an out-of-the-box solution from hugginface, that would be awesome too.
AI: After a Googling around, I think this tutorial may suit your needs.
However, it seems you have a misconception about the Transformer decoder: in training mode there is no iteration at all. While LSTM-based decoders are autoregressive by nature, Transformers are not. Instead, all predictions are generated at once based on the real target tokens (i.e. teacher forcing). To train a Transformer decoder to later be used autoregressively, we use causal self-attention masks to ensure that each prediction only depends on the previous tokens, even though the decoder receives all tokens at once. You can have a look at the Annotated Transformer tutorial, in its Training loop section, to see how they do it.
Another difference between LSTMs and Transformers is positional encodings, which are used by Transformers to be able to know the position of each token.
Regarding inference time, the easiest approach is to implement greedy decoding (e.g. this), where at each timestep you simply take the most probable token. This decoding strategy, however, will probably give poor results (e.g. the typical token repetitions). A better option is beam search, where at each timestep you keep the most probable K partially decoded sequences, although it is more complex to implement and I have not found any implementation online meant for nn.TransformerDecoder; maybe you can have a look at OpenNMT's implementation. |
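To make the difference with the LSTM loop concrete, below is a minimal, hedged sketch of how the forward pass could look with nn.TransformerDecoder for both modes. The module names (self.decoder, self.embedding_table, self.pos_enc, self.W_vocab, self.start_idx) and the memory tensor are assumptions standing in for your own components, not a reference implementation; in training mode answers is assumed to be the right-shifted target sequence (start token prepended, last token dropped).
import torch
import torch.nn as nn

def forward(self, memory, answers=None, max_len=None, mode='train'):
    # memory: encoder/image features of shape (src_len, batch_size, d_model)
    if mode == 'train':
        # Teacher forcing: feed the whole shifted target sequence in a single call, no loop.
        tgt = self.pos_enc(self.embedding_table(answers).permute(1, 0, 2))  # (tgt_len, batch, d_model)
        tgt_len = tgt.size(0)
        causal_mask = torch.triu(torch.full((tgt_len, tgt_len), float('-inf'), device=tgt.device), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.W_vocab(out).permute(1, 0, 2)  # (batch, tgt_len, vocab_size)
    else:
        # Greedy decoding: grow the output one token per iteration.
        batch_size = memory.size(1)
        ys = torch.full((1, batch_size), self.start_idx, dtype=torch.long, device=memory.device)  # start_idx assumed to be an int
        for _ in range(max_len):
            tgt = self.pos_enc(self.embedding_table(ys))
            cur_len = ys.size(0)
            causal_mask = torch.triu(torch.full((cur_len, cur_len), float('-inf'), device=memory.device), diagonal=1)
            out = self.decoder(tgt, memory, tgt_mask=causal_mask)
            next_token = self.W_vocab(out[-1]).argmax(-1)  # (batch,)
            ys = torch.cat([ys, next_token.unsqueeze(0)], dim=0)
        return ys.permute(1, 0)  # (batch, max_len + 1), including the start token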
H: Performance metrics changing significantly based on batch size
I am working on a binary classification problem where there is significant class imbalance (minority class makes up nearly 10%). The dataset has ~15,000 observations and I have split this in to a training, validation and test set (that are stratified).
Using PyTorch I build a neural network with 5 fully connected layers (using ReLU activation), CrossEntropyLoss and SGD optimiser. Below are parts of my code
The problem is that my training vs validation loss changes a lot based on batch size (passed in the DataLoader). If I use a batch size of 64, the loss functions look like
which is quite odd. But if I use an unconventionally large batch size of say 1000, it looks like:
This looks more familiar but I can't make sense of what is going wrong here. I am also seeing that the training set reaches a high recall fairly quickly (after ~4 epochs) while the validation set improves slowly. So there seems to be an issue of overfitting as well.
I don't really know where I am going wrong: my neural network architecture consists of 5 fully connected layers with appropriate input and output dimensions. I initialise the weights. The forward function applies ReLU to the inputs (I don't use Softmax because I only need to classify 0 or 1 so I thought I can simply use argmax, see the 'c' variable in the code above).
I have tried setting Shuffle to true in the training_loader but this produces highly fluctuating training loss values.
AI: Batch size is very related to the learning rate, especially in non-adaptive optimizers like the vanilla SGD that you are using.
I would suggest two alternatives:
Tune (reduce) the learning rate. You can check the answers to this SO question for some heuristics on how to choose the value. Apart from that, I recommend the article Don't Decay the Learning Rate, Increase the Batch Size.
Use an adaptive optimizer, like Adam. |
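As a minimal illustration of the second suggestion (assuming model is the network from the question), switching to Adam is a one-line change; the learning rate value below is only a common starting point to tune:
import torch.optim as optim

# optimizer = optim.SGD(model.parameters(), lr=...)   # the original non-adaptive setup
optimizer = optim.Adam(model.parameters(), lr=1e-3)   # adaptive optimizer; 1e-3 is just a common default to tune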
H: Applying an algorithm on it's own training data for descriptive purposes?
Good Day, I am newer to data science so I am not confident in this. To set up the question I will describe my data and approach.
Data
I don't want to share specific data examples as I want to try and keep some anonymity. I have data for events between 2016 and 2019. Each event has ~14 features (categorical and numerical) as well as a binary label of success or failure. Each of these is run through different transformers (normalizing and one-hot encoding).
Approach
What I am interested in doing is knowing how likely events are to success (not necessarily predict if they will succeed).
I played around with train/test splits to find an algorithm that worked best. I ran a stratified k-fold cross-validation on the data in a grid search to tune the hyperparameter using logloss as my measurement of choice. I am implementing this in Python so Scikit-Learn as my tool of choice. The algorithm I settled on was GradientBoostingClassifier.
Problem
What I am interested in doing now is as I look at events in 2020, I am curious on how likely they are to succeed. When I look at my 2020 data I have the same 14 features (transform them the same way) and as well I instantly know if they succeeded or failed. So predicting Success/Failure is not interesting nor what I want to do. I can easily generate probabilities on this 2020 data, using my trained model, with predict_proba in sklearn.
Question
Now my partner wants me to apply this model to the same training data from 2016-2019. Initially this feels like a big no-go: you never predict on your own training data, as it will be heavily biased. But I am not predicting. This feels more like a traditional descriptive-statistics problem where I look at the nature of known data and see how it behaves.
So again, I am not interested in knowing/predicting if something WILL succeed or fail (we know this instantaneously). I am more interested in knowing whether it succeeded (or failed) vs how likely it was to do so. A success with a 5% chance vs succeeding with a 40% chance is way more interesting in this problem.
So my question, per the title is, can you apply a trained algorithm / model on its own training data if my interest is not in predicting forward but evaluating success vs likelihood of success (and still yield useful information)?
AI: can you apply a trained algorithm / model on its own training data if my interest is not in predicting forward but evaluating success vs likelihood of success (and still yield useful information)?
Yes, absolutely. There are quite a few cases where applying a model on the training data is useful. The most common is probably to detect overfitting: a high difference in performance between the training and test set is a sign of overfitting. In general it can also be useful to know how the model performs on the training set in order to obtain an upper baseline for the performance.
Everybody says "don't predict on the training set" simply because it's a simple rule to remember and such an easy mistake to make for beginners. But as long as one understands what they are doing and knows that obviously the predictions obtained on the training set are biased, there's no problem.
The task described in the question makes sense to me, you have my permission to predict on the training set ;)
Just one more remark: there might be a bias in the training data itself, in the sense that if success for an event is very unlikely then the event will fail more often than succeed even when the conditions for its success are satisfied. If the data contains this kind of case, it means that the model is trained to predict "fail" for potentially successful features. Whether or not this impacts the model depends on whether there are enough similar but successful events in the data, I think.
H: Training Objective of language model for GPT3
On page 34 of OpenAI's GPT-3 paper, there is a sentence describing a limitation of the objective function:
Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important.
I am not sure if I understand this correctly. In my understanding, the objective function is to maximize the log-likelihood of the token to predict given the current context, i.e., $\max L \sim \sum_{i} \log P(x_{i} | x_{<i})$. Although we aim to predict every token that appears in the training sentence, the tokens have a certain distribution based on their frequency of appearance in human literature, and therefore we do not actually assign equal weight to every token in loss optimization.
And what should be an example for a model to get the notion of "what is important and what is not". What is the importance refer to in here? For example, does it mean that "the" is less important compared to a less common noun, or does it mean that "the current task we are interested in is more important than the scenario we are not interested in ?"
Any idea how to understand the sentence by OpenAI?
AI: This may be best understood with a bit more of context from the article:
A more fundamental limitation of the general approach described in this paper – scaling up any LM-like model, whether autoregressive or bidirectional – is that it may eventually run into (or could already be running into) the limits of the pretraining objective. Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important. [RRS20] demonstrate benefits of customizing prediction to entities of interest.
I think that the relevant part of the reference [RRS20] is this paragraph:
Recently, Guu et al. (2020) found that a “salient span masking” (SSM) pre-training objective produced substantially better results in open-domain question answering. This approach first uses BERT (Devlin et al., 2018) to mine sentences that contain salient spans (named entities and dates) from Wikipedia. The question answering model is then pre-trained to reconstruct masked-out spans from these sentences, which Guu et al. (2020) hypothesize helps the model “focus on problems that require world knowledge”. We experimented with using the same SSM data and objective to continue pretraining the T5 checkpoints for 100,000 additional steps before fine-tuning for question answering.
With that context in mind, I understand that the sentence in the GPT-3 paper means that in normal language models the prediction of every token has the same importance weight in the computation of the loss, as the individual token losses are added together in an unweighted manner. This is as opposed to the salient span masking approach, which finds tokens that are important to predict by means of a BERT-based preprocessing.
H: Calculating optimal number of topics for topic modeling (LDA)
I am going to do topic modeling via LDA. I ran my commands to see the optimal number of topics. The output was as follows: it is a bit different from any other plot I have ever seen. Do you think it is okay, or is it better to use other algorithms rather than LDA? It is worth mentioning that when I run my commands to visualize the topic keywords for 10 topics, the plot shows 2 main topics and the others have a strong overlap. Is there any valid range for coherence?
Many thanks for sharing your comments, as I am a beginner in topic modeling.
AI: LDA being a probabilistic model, the results depend on the type of data and the problem statement. There is no strict valid range for the coherence score, but having more than 0.4 makes sense. After fixing the number of topics, you can experiment by tuning hyperparameters like alpha and beta, which will give you a better distribution of topics.
The alpha controls the mixture of topics for any given document. Turn it down and the documents will likely have less of a mixture of topics; turn it up and the documents will likely have more of a mixture of topics.
The beta controls the distribution of words per topic. Turn it down and the topics will likely have fewer words; turn it up and the topics will likely have more words.
The main purpose of LDA is to find the hidden meaning of a corpus and the words which best describe it.
To learn more about the coherence score you can refer to this.
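As a hedged sketch of how the coherence-versus-number-of-topics curve is usually produced with gensim (texts, corpus and dictionary are assumed to be your tokenized documents, bag-of-words corpus and gensim Dictionary):
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

scores = []
for k in range(2, 21):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   alpha='auto', eta='auto', passes=10, random_state=0)
    cm = CoherenceModel(model=lda, texts=texts, dictionary=dictionary, coherence='c_v')
    scores.append((k, cm.get_coherence()))

# pick the number of topics where the c_v score peaks or starts to flatten out
print(max(scores, key=lambda t: t[1]))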
H: How can I convert a dataframe column of fraction float that is stored as a string into float?
The df is as follows:
>>> df['csl']
0 250/500
1 500/1000
2 500/1000
3 500/1000
4 100/300
695 500/1000
696 250/500
697 100/300
698 250/500
699 250/500
Name: csl, Length: 700, dtype: object
but the dtype is in the form of array objects:
df.unique()
array(['250/500', '500/1000', '100/300'], dtype=object)
The following code gives me an error:
df['csl'] = df['csl'].astype(float)
ValueError: could not convert string to float: '250/500'
AI: The easiest way of achieving this would probably be to split the string column using the fraction character and then dividing the first value by the second value:
import pandas as pd
df = pd.DataFrame({"col": ["250/500", "100/300", "500/1000"]})
df["result"] = df["col"].str.split("/").apply(lambda x: float(x[0]) / float(x[1]))
# col result
# 250/500 0.500000
# 100/300 0.333333
# 500/1000 0.500000
If you have a very large dataframe it is faster to save the intermediate result and then perform the division to make use of vectorization:
df[["numerator", "denominator"]] = df["col"].str.split("/", expand=True)
df["result"] = df["numerator"].astype(float) / df["denominator"].astype(float)
# col numerator denominator result
# 250/500 250 500 0.500000
# 100/300 100 300 0.333333
# 500/1000 500 1000 0.500000 |
H: Removing correlation between independent variables
If there are two continuous independent variables that show a high amount of correlation between them, can we remove this correlation by multiplying or dividing the values of one of the variables with random factors (E.g., multiplying the first value with 2, the second value with 3, etc.). We would be keeping a copy of the original values of the independent variable whose values have been transformed for proper comparison of the original values with the prediction results received. Can this be considered as an alternative to dropping one of the highly correlated variables?
AI: High correlation between 2 features, eg $x_1$ and $x_2$ means that there is a linear relation between the two features (ie one is a linear transformation of the other), $x_2 = c_0 + s \cdot x_1 + \epsilon$.
That means any linear transformation of one or both features (eg multiplying by random factors), simply leaves the linear relation intact.
$x_{21}$ = $a \cdot x_2$, $x_{11} = b \cdot x_1$
Then
$x_{21} = a \cdot c_0 + a \cdot s/b \cdot x_{11} + \epsilon'$
Thus again there is linear relation thus high correlation.
Only non-linear transforms can alter the linear relation, but these can negatively affect the outcome.
Once a very strong correlation exists between 2 features then one of them is simply discarded, and only one remains, as it does not offer any new information by itself.
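A quick hedged sanity check of this in numpy, with made-up data: rescaling a feature by a constant factor leaves the Pearson correlation unchanged.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = 3.0 + 2.0 * x1 + rng.normal(scale=0.1, size=1000)   # strongly correlated with x1

print(np.corrcoef(x1, x2)[0, 1])         # close to 1
print(np.corrcoef(5.0 * x1, x2)[0, 1])   # identical correlation after rescaling x1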
See also: https://datascience.stackexchange.com/a/87653/100269 |
H: Explanation of random forest performance difference to when using categories and when using dummy variables
I have some hand coded feature which is a category with values "High", "Low", and "Normal".
I created this feature myself, and my classification performance increased dramatically when using it expanded into dummy variables.
Now since I'm trying random forest, I thought I change "High, Low, Normal" to 1, -1, 0 instead.
Now the same model doesn't learn at all.
I thought it should become easier actually for it to split.
Does this have to do with me putting normal to 0?
Thank you for any explanation helping me to understand this.
AI: It should work: the variable is ordinal so using numerical values makes sense.
So there's a bug somewhere, here are a few suggestions of things to look at:
Possibly a type conversion error somewhere: make sure the variable is interpreted as numerical (see the sketch after this list).
Check whether the model actually uses the variable: if not then it's likely some type error; if yes then I would investigate what goes wrong: for example it might help to plot this variable vs. target in the two cases where the variable is categorical or numerical.
Maybe some difference between the preprocessing of the training and test set: apply the model on the training set, if the performance is good then it's likely that there's something wrong in the preprocessing of the test set. |
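As a hedged sketch of the first two checks above, assuming a pandas DataFrame df whose hand-coded column is named 'level' (adjust the names to your data):
import pandas as pd

mapping = {'Low': -1, 'Normal': 0, 'High': 1}
df['level_num'] = df['level'].map(mapping)

print(df['level_num'].dtype)           # should be a numeric dtype, not object
print(df['level_num'].isna().sum())    # non-zero means some labels didn't match the mapping (casing, whitespace, ...)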
H: Probability that ensemble model is correct based on accuracies of its classifiers
I'm trying to understand what I did wrong when trying to answer this question. The exact question is:
Assume that we have 3 trained prediction models, and each model outputs either
-1 or 1. We then tested the accuracies of these models and obtained the following outcomes:
Model    Accuracy
m1       0.60
m2       0.55
m3       0.45
Let M be the ensemble model that outputs a plurality vote of these three models. If we assume that the errors of the models m1, m2, and m3 are independent, what is the probability that M(x) would be correct on a test instance x?
I thought that because this was a plurality vote, and the classification errors are independent of each other, I could simply take the weighted average of the accuracy of the three classifiers:
$
\begin{align*}
P(X) &= \sum_{all\ models\ M_j} P(C_i|x,M_j)P(M_j) \\
&= \frac{1}{L} \sum_{all\ models\ M_j} P(C_i|x,M_j) \\
&= \frac{1}{3}(0.60+0.55+0.45)\\
&= 0.53
\end{align*}
$
But I was told that this is incorrect (with no context as to why).
Can someone explain why this is incorrect? If this is a plurality vote (which to me assumes that the votes of each classifier are equal), why can I not simply take the weighted average?
AI: You cannot just average the accuracies; you have to treat them as probabilities and consider all the possible voting scenarios.
For the ensemble to be correct, either exactly two or all three models should be correct:
$P(M \text{ correct}) = [m_1 m_2 (1-m_3) + m_1 (1-m_2) m_3 + (1-m_1) m_2 m_3] + [m_1 m_2 m_3]$
$= [0.6 \cdot 0.55 \cdot (1-0.45) + 0.6 \cdot (1-0.55) \cdot 0.45 + (1-0.6) \cdot 0.55 \cdot 0.45] + 0.6 \cdot 0.55 \cdot 0.45$
$= 0.5505$
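A short hedged check of this calculation in Python, enumerating every combination of correct/incorrect models:
from itertools import product

acc = [0.60, 0.55, 0.45]

p_correct = 0.0
for outcome in product([True, False], repeat=3):   # which models are correct
    if sum(outcome) >= 2:                          # majority vote is then correct
        p = 1.0
        for a, ok in zip(acc, outcome):
            p *= a if ok else (1 - a)
        p_correct += p

print(round(p_correct, 4))   # 0.5505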
H: How can I group by elements in a column in pandas?
I have a dataset that looks like this:
I need to transform it so it looks like this:
Meaning I need to show the balance by balance groups and time gaps.
Now the time gaps are elements of a column. I tried to use the following;
b = a.pivot_table(values='GAP', index=a.index, columns='BALANCE_GROUP', aggfunc='first')
With different parameters but it outputs something really weird. I also tried to use the groupby function which works fine, but it won't allow me to set the elements in the gap column as seperate columns...
b = a.groupby(['BALANCE_GROUP','GAP']).sum()
and I got something like this:
AI: If I understand what you need, I think it is this:
b = a.pivot_table(values='TOTAL_BALANCE_EUR', index=['NSFR_GROUP', 'BALANCE_GROUP'], columns='GAP', aggfunc='sum')
b
It's easier for others to help you if you make the data available to others. Just make a tiny dataframe with 10 rows for instance. Also, you can make the code a bit easier to read by enclosing it in three backticks: ``` or using the code sample button when you write your post. |
H: Shaping columns (and column headers) into multi-index rows using pivot/pivot_table
Problem
I am trying to shape a dataset which includes stacked dates/data as rows and half-hourly data as columns (Trading periods, eg. TP1, TP2), into a chronological half-hourly order with the stacked data becoming columns (see image below for the output format) and the column headers (TP1, TP2, etc) becoming indices along with the dates.
I am new to Python and Pandas (C and MATLAB programming experience), initially I thought to loop through the data and append the relevant parts of each row into a new data-frame. It seems to me however that either the df.pivot() or pd.pivot_table() would be a more suitable way to achieve this as it would allow some aggregation at the same time.
Sample dataset:
df = pd.DataFrame({'POC_Code': ['ARA', 'ARA', 'ARA', 'BPE', 'BPE', 'BPE'],
'Trading_date': ['1/04/2018','2/04/2018','3/04/2018','1/04/2018','2/04/2018','3/04/2018'],
'TP1':[10120,11760,25930,3545,12749,11358],
'TP2':[10170,11790,25890,4329,13793,15448],
'TP3':[10200,11750,25860,3465,13943,16132]})
POC_Code Trading_date TP1 TP2 TP3
0 ARA 1/04/2018 10120 10170 10200
1 ARA 2/04/2018 11760 11790 11750
2 ARA 3/04/2018 25930 25890 25860
3 BPE 1/04/2018 3545 4329 3465
4 BPE 2/04/2018 12749 13793 13943
5 BPE 3/04/2018 11358 15448 16132
My Attempt
Using the following I get close to what I want (the image below), but I am still not sure how to turn TP1, TP2, etc. into indices:
piv = df.pivot_table(values=['TP1','TP2','TP3'], index=['Trading_date'], columns='POC_Code')
TP1 TP2 TP3
POC_Code ARA BPE ARA BPE ARA BPE
Trading_date
1/04/2018 10120 3545 10170 4329 10200 3465
2/04/2018 11760 12749 11790 13793 11750 13943
3/04/2018 25930 11358 25890 15448 25860 16132
I tried to access the column headers like this, but it's not valid (which I understand): piv = df.pivot_table(values=['TP1','TP2','TP3'], index=['Trading_date', df.columns.values[2:5]], columns='POC_Code')
Desired Output
I need the output to be a chronological format (by date & trading period) such as the multi-index way below, so that I can combine it into a single index of the form 2018-04-01-01 for export to another program. If this combination of the indices can be done in the same step, even better.
Is this possible with pivot/pivot_table, or would something like this (https://stackoverflow.com/questions/37430940/python-pandas-converting-column-headers-into-index) be a better approach (or a combination of both)? The full dataset includes 65 POC_Codes and 50 Trading periods, so I would like to use a pivot table as I also plan to sum these into various categories.
Full dataset can be found at: https://www.emi.ea.govt.nz/Wholesale/Datasets/Generation/Generation_MD/201804_Generation_MD.csv
AI: For this I would perform two steps, first transforming the data from a wide to long format using pandas.melt (i.e. transform the TP columns over the rows) and then use pandas.pivot to get the desired format.
result = (
df
.melt(id_vars=["POC_Code", "Trading_date"])
.pivot(index=["Trading_date", "variable"], columns="POC_Code")
.reset_index()
)
result
# Trading_date variable value
# ARA BPE
# 1/04/2018 TP1 10120 3545
# 1/04/2018 TP2 10170 4329
# 1/04/2018 TP3 10200 3465
# 2/04/2018 TP1 11760 12749
# 2/04/2018 TP2 11790 13793
# 2/04/2018 TP3 11750 13943
# 3/04/2018 TP1 25930 11358
# 3/04/2018 TP2 25890 15448
# 3/04/2018 TP3 25860 16132
If you want to change the column names you can simply assign the new column names to df.columns:
result.columns = ["Trading_date", "TP", "ARA", "BPE"] |
H: Should I make single machine learning model for predicting price of house or 6 different models, given 6 datasets for different cities?
I am currently working on "Housing Prices in Metropolitan Areas of India" (kaggle dataset), with 6 different csv files for 6 different cities.
All have the same 40 columns.
Working on a real estate website project, I am confused: should I make a separate model for each csv, or merge all the csv files to build one model (after data cleaning) to predict house prices?
AI: Both options would work. It would depend on which method you want to use, what you want to do afterwards, or computing capacity.
E.g. developing a model for each city might allow you to contrast and compare the outcomes more simply. But you can also do multivariate predictions to predict for each city even once you have combined your dataset.
Also if your dataset is very large it might make sense to treat the cities independently to make the computations slightly cheaper. |
H: Why is averaging the vectors required in word2vec?
While implementing word2vec using gensim by following a few tutorials online, one thing that I couldn't understand is the reason why word vectors are averaged once the model is trained. A few example links below.
https://www.kaggle.com/ananyabioinfo/text-classification-using-word2vec/notebook?scriptVersionId=11358361&cellId=7
https://www.kaggle.com/varun08/sentiment-analysis-using-word2vec?scriptVersionId=2185653&cellId=15
My questions are:
Is it just to create a single vector instead of one vector per word, or to increase the accuracy, or is there another reason behind this?
Is it mandatory to take the average of the vectors, or are there any alternatives to this?
I have gone through the original paper on word2vec but it doesn't give a clear explanation of this.
AI: The reason to average the embedded vectors of the words in a paragraph or document is to obtain a single fixed-size vector that represents the whole text. Then, the document-level vector can be used as input to a document classification model or any other document-level model.
If you explicitly want to compute word-level representations and then combine them into a document/paragraph-level representation, then averaging is the standard approach.
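A hedged sketch of that averaging step; embeddings is assumed to be any mapping from word to trained vector (for example a gensim model's wv) and dim its dimensionality:
import numpy as np

def document_vector(tokens, embeddings, dim):
    # average the vectors of the tokens that are in the vocabulary
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    if not vectors:
        return np.zeros(dim)            # fallback for documents with no known words
    return np.mean(vectors, axis=0)     # one fixed-size vector regardless of document length

doc_vec = document_vector("the movie was great".split(), embeddings, dim=100)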
On the other hand, to obtain document/paragraph/sentence-level representations in general, there are many alternatives to combining word-level vectors. Some remarkable examples include doc2vec for paragraph/document-level, or LASER or BERT for sentence-level representations. |
H: Recurrent Neural Network (RNN) Vanishing gradient problem - Why does it affect earlier timesteps more?
I understand the concept of backpropagation in standard neural networks and backpropagation through time with RNNs, why this causes exponentially smaller gradients at earlier time steps and most of the maths behind it all, but what I don’t understand is why this affects the earlier timestep in particular? Since, the parameters (weights) in the RNN are all shared between timesteps, why is it that the earlier timestep is more affected? Wouldn’t they all be affected since they all share the same badly optimized weight which is never updated due to the many small terms in the multiplicative product which produces $\frac{\partial E}{\partial w} $? I feel like I'm totally misunderstanding something here. Many thanks
AI: Of course all the weights are shared, but the update applied to them is a sum of contributions from each of the timesteps, and the contributions associated with the earliest timesteps are the ones most affected by the vanishing gradient problem.
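To make this concrete, the usual backpropagation-through-time decomposition (written here in generic notation, not tied to a particular cell) is
$$\frac{\partial E_t}{\partial W} = \sum_{k=1}^{t} \frac{\partial E_t}{\partial h_t} \left( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \right) \frac{\partial h_k}{\partial W}$$
Every term in the sum updates the same shared $W$, but the term for an early timestep $k$ carries a long product of Jacobians. When those factors have norm smaller than 1, the product shrinks exponentially with $t-k$, so the information coming from the earliest inputs contributes almost nothing to the update, while the terms for recent timesteps still do. The weights are still updated, just not in a way that reflects the early part of the sequence.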
H: why my batch size doesn't change
It happens quite a lot for me that I declare batch_size=64 (or any number) and end up watching Keras training my net on the whole data instead of just a batch. Then I edit it many times and finally end up with the very first code working.
for example this simple code:
data_train=pd.read_csv("train.csv")
y=data_train["label"]
dt=data_train.drop("label", axis=1)
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(500, activation="relu", name="dense_1")(inputs)
x = layers.Dense(500, activation="relu", name="dense_2")(x)
x = layers.Dense(300, activation="relu", name="dense_3")(x)
x = layers.Dense(64, activation="relu", name="dense_4")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
opt = keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
X_train, X_test, y_train, y_test = train_test_split(dt, y, test_size=0.2)
X_train, Xvalid, y_train, yvalid = train_test_split(X_train, y_train, test_size=0.2)
history=model.fit(X_train,y_train, batch_size=64 ,epochs=30,validation_data=(Xvalid,yvalid))
and it trains itself with:
26880/26880 [==============================] - 3s 108us/sample - loss: 0.4717 - acc: 0.8718 - val_loss: 0.4149 - val_acc: 0.8847
how I can fix it?
Thanks
AI: Using model.fit() will always train on the whole dataset for n epochs. The batch_size argument only denotes how many samples are used to calculate the gradient and update the parameters at each step; the progress counter (26880/26880 in your log) counts the samples seen during the epoch, which is why it always equals the size of the training split regardless of batch_size. If you want to train the model on just a batch of data (i.e. a subset of your total dataset), simply pass that subset to model.fit().
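As a hedged illustration of that last point, using the variables from the question:
# train on only the first 64 samples instead of the whole training split
history = model.fit(X_train[:64], y_train[:64], epochs=30, validation_data=(Xvalid, yvalid))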
H: Why can't I use 2D-arrays as features for CCA (Canonical Correlation Analysis) classifier?
The Problem
When using fit of the scikit-learn CCA classifier it won't allow me to use arrays as features. The error ValueError: Found array with dim 3. Estimator expected <= 2. can be produced with the following code
from sklearn.cross_decomposition import CCA
CCA_model = CCA(n_components = 3, max_iter=20000)
input_arr = [[[k*-1+j*-i*-1 for k in range(125)] for j in range(2)] for i in range(189)]
input_arr = np.array(input_arr)
print("INPUT SHAPE:", input_arr.shape)
input_lbl = [[(-(-1+(-1)**(1+k+j)))/2 for k in range(3)] for j in range(189)]
input_lbl = np.array(input_lbl)
print("LABEL SHAPE:", input_lbl.shape)
model = CCA_model.fit(input_arr, input_lbl)
>>INPUT SHAPE: (189, 2, 125)
>>LABEL SHAPE: (189, 3)
The question
Why is it so, shouldn't it be allowed to use arrays as single features? Is there any parameter I need to modify to do this?
AI: No: scikit-learn estimators expect a 2D feature matrix, so it cannot use multi-dimensional arrays as single features. You can flatten each sample though. Essentially, you want to reshape your input to be (189, 250).
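A hedged sketch of that reshape applied to the arrays from the question:
input_flat = input_arr.reshape(input_arr.shape[0], -1)   # (189, 2, 125) -> (189, 250)
model = CCA_model.fit(input_flat, input_lbl)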
H: Plot a training/validation curve in Pytorch Training
I have the following training method and I'm confused how may I modify the code to plot a training and validation curve history graph with matplotlib
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
since = time.time()
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
train_loss_min = np.Inf
for epoch in range(1, n_epochs + 1):
print('\n')
print('Running Epoch ', epoch, ' of ', n_epochs)
print('\n')
time_elapsed = time.time() - since
print('Training for {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
# train the model
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
## record the average training loss
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
if batch_idx % 10 == 0:
print('Epoch ', epoch, ' Training batch ', batch_idx)
print('Train Loss ', train_loss)
# validate the model
model.eval()
print('\n')
print(' Evaluating the model')
print('\n')
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
print('Valid Loss ', valid_loss)
# print training/validation statistics
print('\n')
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
print('\n')
## save the model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model..'.format(valid_loss_min, valid_loss))
torch.save(model.state_dict(), save_path)
print('Model Saved')
valid_loss_min = valid_loss
time_elapsed = time.time() - since
print('Training completed in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
return model
AI: You should use TensorBoard, which is integrated with PyTorch through torch.utils.tensorboard. See this.
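As a hedged sketch of how the training function above could log both losses for TensorBoard (the writer lines are the only additions; the tag names and log directory are arbitrary):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/experiment1')

# inside train(), once per epoch after train_loss and valid_loss are computed:
writer.add_scalar('Loss/train', train_loss, epoch)
writer.add_scalar('Loss/valid', valid_loss, epoch)

# after training, run `tensorboard --logdir runs` to see the curves
writer.close()
Alternatively, if you prefer matplotlib, append train_loss and valid_loss to two Python lists each epoch and call plt.plot on them once training finishes.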
H: Logistic regression with unbalanced data, scoring based only on rare class
I have a dataset off app. 600.000 data points in which 0.2% (1.200 samples) is labelled as signifying a rare event. I want to use logistic regression to help me predict this rare event, but even when I apply weighting, the classification accuracy is poor.
I know that I can rebalance the dataset, but the problem with that is that lots of other weird stuff is going on that signifies various events but which is not indicative of the exact type of event I’m trying to predict. Therefore, rebalancing would itself be a large and complicated task in trying to get representative weirdness into the cropped dataset.
I have found that I get great predictive performance by using a home-made logistic regression classifier that scores based on the error measure below, and then using scipy.optimize.minimize with method='Powell' to tune the parameters to minimize this measure:
e = 1 - tp/(tp+fp+fn)
for the rare class only, ignoring true negatives.
This lets me include the full dataset (and ignore weights and rebalancing considerations), and it makes it clear in the scoring when the classifier starts erroneously including other types of events that shouldn’t be caught by this classifier (i.e. increasing number of false positives).
The problem is that my home-grown classifier/trainer is naturally much slower to train than the sklearn version, and I’d like to build on mature optimized packages instead. So my question is: Is there a Python implementation of a logistic regression classifier that lets me perform fitting based only on the rare class as described above? It seems that setting the rare class weight arbitrarily high does not make it perform like my home-grown method that explicitly ignores the common class.
AI: Firstly, when you have an imbalanced dataset accuracy is not a good metric to be using (see https://en.wikipedia.org/wiki/Precision_and_recall#Imbalanced_data). You should consider what the ultimate use-case of this model is and what metric is properly capturing the performance of the model considering that use case. For example, when classifying the presence of cancer, false negatives are much more undesirable than false positives so you would want to ensure you are using a metric that captures that appropriately.
In sklearn there is a class_weight parameter of the LogisticRegression model which allows you to essentially weigh misclassifications of different classes differently. Setting this to 'balanced' will automatically adjust this weight to be inversely proportional to the amount of samples of that class in your data which might be beneficial. You may want to adjust this in a custom manner also.
Changing the metric you are evaluating on doesn't change the actual training of the model, so I am guessing that your custom implementation of logistic regression should not function significantly differently to the sklearn version in terms of performance (if it does, there may be other issues); it seems you are just using a different metric. There are also a number of other metrics besides accuracy that you can use in sklearn (https://scikit-learn.org/stable/modules/model_evaluation.html#model-evaluation), perhaps consider balanced accuracy to begin with. It is also not too hard to apply your own custom metric to the results from the sklearn logistic regression model.
Tools such as the classification_report (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) and/or confusion matrix (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) can also be enlightening when dealing with imbalanced data. |
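As a hedged sketch combining both suggestions, and using your own error measure e = 1 - tp/(tp+fp+fn) as the model-selection criterion (X and y stand for your feature matrix and labels, and label 1 is assumed to be the rare class):
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import GridSearchCV

def rare_class_error(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return 1 - tp / (tp + fp + fn)   # ignores true negatives, as in the question

scorer = make_scorer(rare_class_error, greater_is_better=False)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={'class_weight': [None, 'balanced', {0: 1, 1: 50}, {0: 1, 1: 200}]},
    scoring=scorer,
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)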
H: Specifying class or sample weights in Keras for one-hot encoded labels in a TF Dataset
I am trying to train an image classifier on an unbalanced training set. In order to cope with the class imbalance, I want either to weight the classes or the individual samples. Weighting the classes does not seem to work. And somehow for my setup I was not able to find a way to specify the sample weights. Below you can read how I load and encode the training data and the two approaches that I tried.
Training data loading and encoding
My training data is stored in a directory structure where each image is place in the subfolder corresponding to its class (I have 32 classes in total). Since the training data is too big too all load at once into memory I make use of image_dataset_from_directory and by that describe the data in a TF Dataset:
train_ds = keras.preprocessing.image_dataset_from_directory (training_data_dir,
batch_size=batch_size,
image_size=img_size,
label_mode='categorical')
I use label_mode 'categorical', so that the labels are described as a one-hot encoded vector.
I then prefetch the data:
train_ds = train_ds.prefetch(buffer_size=buffer_size)
Approach 1: specifying class weights
In this approach I try to specify the class weights of the classes via the class_weight argument of fit:
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds,
class_weight=class_weights
)
For each class we compute weight which are inversely proportional to the number of training samples for that class. This is done as follows (this is done before the train_ds.prefetch() call described above):
class_num_training_samples = {}
for f in train_ds.file_paths:
class_name = f.split('/')[-2]
if class_name in class_num_training_samples:
class_num_training_samples[class_name] += 1
else:
class_num_training_samples[class_name] = 1
max_class_samples = max(class_num_training_samples.values())
class_weights = {}
for i in range(0, len(train_ds.class_names)):
class_weights[i] = max_class_samples/class_num_training_samples[train_ds.class_names[i]]
What I am not sure about is whether this solution works, because the keras documentation does not specify the keys for the class_weights dictionary in case the labels are one-hot encoded.
I tried training the network this way but found out that the weights did not have a real influence on the resulting network: when I looked at the distribution of predicted classes for each individual class then I could recognize the distribution of the overall training set, where for each class the prediction of the dominant classes is most likely.
Running the same training without any class weight specified led to similar results.
So I suspect that the weights don't seem to have an influence in my case.
Is this because specifying class weights does not work for one-hot encoded labels, or is this because I am probably doing something else wrong (in the code I did not show here)?
Approach 2: specifying sample weight
As an attempt to come up with a different (in my opinion less elegant) solution I wanted to specify the individual sample weights via the sample_weight argument of the fit method. However from the documentation I find:
[...] This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x.
Which is indeed the case in my setup where train_ds is a dataset. Now I really having trouble finding documentation from which I can derive how I can modify train_ds, such that it has a third element with the weight. I thought using the map method of a dataset can be useful, but the solution I came up with is apparently not valid:
train_ds = train_ds.map(lambda img, label: (img, label, class_weights[np.argmax(label)]))
Does anyone have a solution that may work in combination with a dataset loaded by image_dataset_from_directory?
AI: ...Because the Keras documentation does not specify the keys for the class_weights..
You may get an idea with these two parameters,
labels: Either "inferred" (labels are generated from the directory structure), or a list/tuple of integer labels of the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via os.walk(directory) in Python).
class_names: Only valid if "labels" is "inferred". This is the explicit list of class names (must match names of subdirectories). Used to control the order of the classes (otherwise alphanumerical order is used).
So the class indices are assigned following the alphanumerical order of the subdirectory names if they are not specified explicitly.
traindata = image_dataset_from_directory("/content/Data") # Default
traindata = image_dataset_from_directory("/content/Data",class_names=['train_data','test_data']) #as per the values passed in class_names
Does anyone have a solution that may work in combination with a dataset loaded by image_dataset_from_directory?
You may try a wrapper of a custom Generator over the initial Keras generator. Below is a very basic code sample
import numpy as np

def cust_gen():
    for image, label in traindata:
        # the 3rd element is the per-sample weight; it should be a vector of shape (batch_size,)
        # np.ones(...) is only a placeholder; in practice compute it from the labels
        yield image, label, np.ones((batch_size,))

history = model.fit(cust_gen(), epochs=epochs)
You will have to write custom code to create the sample_weight for each batch.
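Building on that, here is a hedged sketch of the dataset.map route the question attempted; the np.argmax-based lambda from the question likely fails because map runs on symbolic tensors, so TensorFlow ops are used instead (class_weights is assumed to be the dict built in the question, keyed by class index):
import tensorflow as tf

weight_tensor = tf.constant([class_weights[i] for i in range(len(class_weights))],
                            dtype=tf.float32)

def add_sample_weight(image, label):
    # label is one-hot: look up the weight of the true class for every sample in the batch
    class_idx = tf.argmax(label, axis=-1)
    return image, label, tf.gather(weight_tensor, class_idx)

train_ds = train_ds.map(add_sample_weight)
model.fit(train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds)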
H: Neural network for time series forecasting with an auxiliary data
Let's say we have 2 data sets. The first is the close-price time series, and we want to predict its future values. The second is the volume for each price in the first data set; we do not want to predict it, but we want to use this data to help predict future price values.
What kind of neural network is suitable for this task?
In my view this may be an LSTM with some changes.
Please advise me.
AI: You can use RNN architectures like LSTM and GRU. RNNs take an input vector at each time step, so you can simply add your extra data (the volume) to that input vector. Your input shape will be batch_size x sequence_length x num_features, with num_features = 2 here (price and volume).
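A minimal hedged sketch in Keras; X is assumed to contain windows of shape (num_samples, 30, 2) holding [price, volume] pairs and y the next close price, with the window length of 30 and the layer sizes being arbitrary choices:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.LSTM(64, input_shape=(30, 2)),   # 2 features per time step: price and volume
    layers.Dense(1),                        # predicted next close price
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=20, batch_size=32)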
H: Does not using more filters in deeper CNN creates more images?
For example, we have applied 32 filters to a single image. Then it created 32 different images (stack of convolutional values).
And in the second layer, if we apply 64 filters, are all these filters going to be applied to all those 32 images? If so, will it create 64*32 outputs, or am I understanding it wrong?
I have become confused because when I studied the Keras documentation, it says that using 64 filters will create 64 outputs. If anybody can briefly enlighten me on how the second or deeper layers of a CNN work, it will be helpful for me.
AI: No, your understanding is not correct.
Each of the 64 filters of the second layer will be applied to each of the 32 channels from the output of the first layer, resulting in 64 channels in the output of the second layer.
When the input of a convolutional layer has multiple channels, the convolution filter itself has the same number of channels. In your example, if we are using $3\times3$ filters, each filter in the second layer will be a tensor of dimensions $3\times3\times32$. Therefore, the filter "covers" the full depth of the input. Then, you simply perform the element-wise multiplication of the filter with the overlapping region in the input and add all the resulting elements together. Applying just 1 filter, we obtain a result with 1 channel.
This way, the number of channels of the output of a convolutional layer is the same as the number of filters in the convolution. |
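A quick hedged check of the shapes in PyTorch:
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                          # one RGB image
h = nn.Conv2d(3, 32, kernel_size=3, padding=1)(x)
print(h.shape)                                         # torch.Size([1, 32, 32, 32]) -> 32 channels
out = nn.Conv2d(32, 64, kernel_size=3, padding=1)(h)
print(out.shape)                                       # torch.Size([1, 64, 32, 32]) -> 64 channels, not 64*32
# each of the 64 filters has shape (32, 3, 3): it spans all 32 input channels at once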
H: Training network 10 times slower on 16-core vs 8-core C++ API
Pytorch seems to run 10 times slower on a 16 core machine vs 8 core machine. Any thoughts on why that is and what/if any thing I can do to speed up the 16 core machine? Thank you
Below is a list of details in the order in which you find them.
16 core pytorch env
16 core lscpu
8 core pytorch env
8 core lscpu
16 core CMake Cache can be made available
8 core CMake Cache can be made available
Pytorch was built from source on both 16 core and 8 core
16 Core Details
PyTorch version: 1.7.0+cpu
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: version 3.10.2
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.7.0+cpu
[pip3] torchvision==0.4.2
[conda] Could not collect
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
Stepping: 7
CPU MHz: 2700.057
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5799.68
Virtualization: VT-x
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 40 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm arat pln pts md_clear flush_l1d
8 Core Details
PyTorch version: 1.7.0+cpu
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.4 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: version 3.10.2
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.7.0+cpu
[pip3] torchvision==0.4.2
[conda] Could not collect
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Model name: Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz
Stepping: 9
CPU MHz: 3491.793
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 5387.33
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
AI: I don't have any answer as to why this is. The hand-wavy answer I once received was that PyTorch doesn't effectively utilize a large number of CPU cores. But as to your second question, I have experienced the same issue using the Python framework and had success using the torch.set_num_threads(n) function to artificially limit cores on machines with more CPUs, which improved performance; perhaps this will work for the C++ API as well.
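For reference, here is a hedged sketch of the Python-side call; the libtorch C++ frontend exposes a similar at::set_num_threads (check your version's headers), and the OpenMP/MKL environment variables below are another common way to cap the thread count:
import torch

print(torch.get_num_threads())   # threads used for intra-op parallelism
torch.set_num_threads(8)         # e.g. match the faster machine's core count

# Often the same effect can be had from the shell before launching the program:
#   export OMP_NUM_THREADS=8
#   export MKL_NUM_THREADS=8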
H: Visualization methods in R to examine correlation of labels against response
Question
What are some good plotting methods in R for examining the relationship between a target variable and various explanatory variables? In particular, I'm looking for visualization techniques that scale to more variables than the traditional scatterplot matrix.
More details
The scatterplot matrix is a great tool for visualizing pairwise relationships between variables. For example, with the swiss dataset in R, we can easily plot a matrix of scatterplots.
library(datasets)
data(swiss)
plot(swiss[1:3])
which yields
I am interested in the case where I want to predict some response, say Fertility using some combination of explanatory variables. I want to closely examine how each explanatory variable correlates with Fertility. If I have many columns in my dataframe, using plot(swiss) becomes unwieldy.
For example, the following plot (generated following instructions here) shows pairwise correlations for all columns in a dataframe. If I could plot something like this but only showing correlations between Fertility and other columns, that would be useful.
library(datasets)
data(swiss)
plot(swiss[1:3])
library(devtools)
library(inspectdf)
library(tidyverse)
library(readr)
show_plot(inspect_cor(swiss))
which yields
AI: Below are two functions using my favorite packages:
The first one shows a scatterplot of every column against the target column
The second one shows the correlation of every column with the target column, with confidence intervals (I found how to do that with ggplot here).
Code:
library(ggplot2)
library(reshape2)
library(plyr)
scatterplot <- function(data, targetColumn='Fertility') {
d<-melt(data,id.vars = targetColumn)
# ggplot(d, aes_string('value',targetColumn))+geom_point()+facet_grid(variable~.)
ggplot(d, aes_string('value',targetColumn))+geom_point()+facet_wrap(variable~.)
}
corplotCI <- function(data, targetColumn='Fertility', method='pearson') {
d<-ldply(colnames(data), function(col) {
if (col != targetColumn) {
r <- cor.test(data[,col], data[,targetColumn],method=method)
data.frame(variable=col,cor=r$estimate, lowerCI=r$conf.int[1],upperCI=r$conf.int[2])
}
})
ggplot(d,aes(cor,variable))+geom_point(size=3)+geom_errorbarh(aes(xmin = lowerCI,xmax = upperCI),height=.5)+coord_cartesian(xlim=c(-1,1))
}
Usage:
scatterplot(swiss)
corplotCI(swiss) |
H: Preparing for interview - Logistic regression question
So I'm doing some exercises to prepare for an interview test. However there's one of the assignments I don't understand. Maybe some of you can explain what they want me to do? It would help me to understand the underlying question patterns I may come across.
This is the assignment:
Logit Regression
Have the function LogitRegression(arr) read the input array of 4 numbers x, y, a, b, separated by space, and return an output of two numbers for updated a and b (assume the learning rate is 1). Save up to 3 digits after the decimal points for a and b. The output should be a string in the format: a, b
def LogitRegression(arr):
# code goes here
return arr
# keep this function call here
print(LogitRegression(input()))
Logistic regression is a simple approach to do classification, and the same formula is also commonly used as the output layer in neural networks.
We assume both the input and output variables are scalars, and the logistic regression can be written as:
y = 1.0 / (1.0 + exp(-ax - b))
After observing a data example (x, y), the parameter a and b can be updated using gradient descent with a learning rate.
Examples:
Input: [1, 1, 1, 1]
Output: 0.881, 0.881
Input: [2.2, 0.0, 5.1, 5.7]
Output: 7.3, 6.7
What I don't understand is: do they only give me 4 scalars and want me to train on x, y and then predict a and b? Or are a and b supposed to be the weights and bias, and I need to return the updated ones? The output numbers in the second example don't match probabilities, so it can't be that. I might be overthinking this and it's just a simple thing I need to do?
This is the code I've tried:
arr = np.array([2.2, 0.0, 5.1, 5.7])
arr2 = np.array([1.0, 1.0, 1.0,1.0])
def Logit(arr):
learnrate = 1
X = arr[0]
y = arr[1]
weights = arr[2]
bias = arr[3]
y_hat = 1/(1+np.exp(np.dot(X, -weights) - bias))
new_weights = weights + learnrate * (y - y_hat) * X
new_bias = bias + learnrate*(y - y_hat)
print(new_weights, new_bias)
Logit(arr)
Output:
2.900000098664297 4.700000044847409
AI: They give you the current values of the model parameters $a$ and $b$, and a new data point $(x,y)$, and they request to perform one training step with gradient descent using the data point, and returning the updated values for $a$ and $b$.
The problem with your code is that the sign of the learning rate is wrong in the parameter update. If you change it to minus, the result is as expected:
new_weights = weights - learnrate * (y - y_hat) * X
new_bias = bias - learnrate*(y - y_hat) |
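Putting it together, a hedged sketch of a complete solution in the requested output format (the list-style input and the rounding are assumptions about how the grader calls the function):
import numpy as np

def LogitRegression(arr):
    x, y, a, b = [float(v) for v in arr]
    y_hat = 1.0 / (1.0 + np.exp(-a * x - b))
    # update rule matching the expected outputs (learning rate = 1)
    a_new = a - (y - y_hat) * x
    b_new = b - (y - y_hat)
    return "{}, {}".format(round(a_new, 3), round(b_new, 3))

print(LogitRegression([1, 1, 1, 1]))           # 0.881, 0.881
print(LogitRegression([2.2, 0.0, 5.1, 5.7]))   # 7.3, 6.7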
H: Combining textual and numeric features into pre-trained Transformer BERT
I have a dataset with 3 columns:
Text
Meta-data (intending to extract features from it, then use those i.e., numerical features)
Target label
Question 1: How can I use a pre-trained BERT instance on more than the text?
One theoretical solution suggests feeding the text to BERT and the numerical features to another neural network, and then aggregating their outputs in yet another neural network.
Is that the most efficient approach?
Question 2: How can you connect neural networks?
You get the output from each, but then what?
You get a classification output from BERT, and you get a classification output from the MLP based on the numerical features.
You concatenate these and feed them to another MLP, and you get the final prediction? Wouldn't that last prediction be less robust?
In other words, does the last MLP encapsulate the other 2 networks?
If so, what happens if BERT predicts at 90% but the first MLP at just 50%? Will we get a lesser outcome?
Question 3: Any tips on how to implement this in pytorch?
AI: When they are talking about aggregating their outputs, they mean the final embeddings (just before the classification layer), not the output of the network itself.
You take the embeddings from both networks and concatenate them along the feature dimension. You then use this concatenated embedding to predict the final output you wanted.
Torch has a function torch.cat for this purpose. |
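A hedged sketch of that wiring in PyTorch: text_emb is assumed to be the pooled BERT embedding of the text (for example the [CLS] vector of size 768) and num_feats the numeric features extracted from the meta-data. Training this single head together with BERT end-to-end, with one loss, avoids the two separate classifier outputs the question worries about.
import torch
import torch.nn as nn

class TextPlusNumeric(nn.Module):
    def __init__(self, text_dim=768, numeric_dim=10, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + numeric_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, text_emb, num_feats):
        x = torch.cat([text_emb, num_feats], dim=1)   # concatenate along the feature dimension
        return self.head(x)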
H: Forecasting with Neural network and understanding which underlying model is favoured
I have a very large set of data (~1 TB). How can I use a neural network on this data to understand which underlying distribution (e.g. a Gaussian or a Poissonian with a certain mean and sd) is favoured by the data? At the least, can I conclude that one of the two distributions is not favoured?
Hence the main goal is to see whether, when forecasting after learning from this dataset, I can know which underlying distribution the algorithm's predictions favour.
AI: To be clear, do you think the data could be distributed according to a particular probability distribution? If so you should model it directly without a neural network. Your model parameters are simply the parameters of that probability distribution ($\mu$ and $\sigma^2$ for a gaussian, $\lambda$ for a poisson etc.,).
If you think your data is distributed according to a distribution with parameters conditioned on some input features you should have a model (for example a linear model, neural network etc.,) output the parameters of that probability distribution.
In either of these cases, you could then take a fully Bayesian approach (See, for example, chapter 5.3 of Machine Learning A probabilistic Perspective K. Murphy), but a simpler approach would be to perform Maximum Likelihood Estimation. The calculations for Maximum Likelihood Estimation are straightforward and readily available for common distributions. For a Gaussian you simply calculate the sample mean and variance and those are your model parameters. If you are outputting the distribution parameters from a neural network you will need to set your loss as the negative log likelihood associated with that particular distribution. Perform Maximum Likelihood Estimation for all models you are considering, and then compare each of the model's likelihood under a held out validation set. The likelihood is calculated as the product of probabilities of each data sample under that model. The distribution that fits the data the best is the one which the has highest likelihood on the held-out validation set. |
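As a hedged sketch of the simpler, unconditional comparison with scipy (data and val are assumed to be your training and held-out samples; the Poisson only makes sense for non-negative integer data):
import numpy as np
from scipy import stats

# maximum likelihood fits on the training split
mu, sigma = np.mean(data), np.std(data)   # Gaussian MLE
lam = np.mean(data)                       # Poisson MLE

# compare held-out log-likelihoods: the higher, the better the fit
ll_gauss = np.sum(stats.norm.logpdf(val, loc=mu, scale=sigma))
ll_pois = np.sum(stats.poisson.logpmf(val, mu=lam))
print(ll_gauss, ll_pois)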
H: Pandas: Group by Single Column Entries
So I have this table above. I'm trying to aggregate the occupations such that the table results in:
I've tried using df.groupby(['Occupation']) but I get an error.
All I know is that my final step would be to set the index to "Occupation". But I still don't know how to group via entries in the single Occupation column here.
Also, what type of table would the final table be named/called?
I know it's not called a multi-index table because there is only one index that the results are being grouped by.
AI: Your issue might occur because you didn't tell pandas that the second row in the table should also be considered a header line.
To address this, try to reset the header at the beginning.
import pandas as pd
df = pd.read_csv('YOUR/FILE/DIRECTORY.csv', skiprows=1)  # skip the extra header line at the top of the file
df.rename(columns={0:'Index'}, inplace=True)  # optional
df.groupby(['Occupation']) |
H: Bellman operator and contraction property
Currently, I am learning about the Bellman operator in dynamic programming and reinforcement learning. I would like to know why the Bellman operator is a contraction with respect to the infinity norm. Why not another norm, e.g. the Euclidean norm?
AI: The proofs I have seen for contraction of the Bellman operator (this page has a really nice run-through) explicitly utilize the fact that an infinity norm is being used. This does not necessarily mean that contraction doesn't occur under other norms, but it does suggest one of two possibilities:
The contraction proof is only valid under the infinity norm.
The infinity norm is just the easiest metric to prove the contraction property.
When showing that the Bellman Operator converges to a fixed point it is satisfactory to simply show that it is a contraction, it doesn't matter what sort of contraction it is, so we would typically prove the contraction that is easiest to show.
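For reference, here is a sketch of the standard bound for the Bellman optimality operator $T$ with discount factor $\gamma < 1$, which shows exactly where the infinity norm enters:
$$\|TV_1 - TV_2\|_\infty \le \gamma \max_{s,a} \sum_{s'} P(s'|s,a)\,\big|V_1(s') - V_2(s')\big| \le \gamma \|V_1 - V_2\|_\infty$$
The last inequality holds because the transition probabilities sum to one, so the weighted average of $|V_1(s') - V_2(s')|$ is bounded by its maximum over states, which is precisely the infinity norm.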
That being said, intuitively, I would imagine that the question of whether or not this contraction holds under other norms would impact questions surrounding the rate of convergence to the fixed point (i.e., how many times we must apply the Bellman operator for convergence). After a cursory search I was not able to find any theory explicitly proving or disproving the contraction under other norms. This might be an interesting question to raise on the Mathematics Stack Exchange.
H: Computing SVD through Eigendecomposition of correlation matrix
I am following the excellent series on SVD by Steve Brunton from the University of Washington, on YouTube, but I have trouble interpreting his 4th video on the subject.
If I understand correctly, he mentions that one can compute the economy SVD decomposition $X = \hat{U}\hat{\Sigma}V^T$ with the following :
$$X^TX= V\hat{\Sigma}\hat{U}^T\hat{U}\hat{\Sigma}V^T = V\hat{\Sigma}^2V^T \implies X^TXV = V\hat{\Sigma}^2 $$
$$XX^T= \hat{U}\hat{\Sigma}V^TV\hat{\Sigma}\hat{U}^T = \hat{U}\hat{\Sigma}^2\hat{U}^T \implies XX^T\hat{U} = \hat{U}\hat{\Sigma}^2 $$
Instead of using U,S,VT = svd(X) in Python, I want to try decomposing the image in its SVD, and reconstructing the original using only $r$ values with this technique. I am trying to apply this on a picture of The Starry Night, loaded in a grayscale numpy array X, which I do this way :
r = 10
XT = X.transpose()
C_1 = XT @ X
C_2 = X @ XT
[Lambda_1, V_hat] = np.linalg.eig(C_1)
[Lambda_2, U_hat] = np.linalg.eig(C_2)
V_hat_T = V_hat.transpose()
X_tilde = U_hat[:,:r] @ np.diag(Lambda_1)[0:r,:r] @ V_hat_T[:r,:]
But I can't reproduce the image, whatever the value I attribute to $r$, whereas the product of U @ S @ VT , computed with the svd(X) function, perfectly reconstructs the original picture.
What am I doing wrong? It is certainly superfluous to mention that I am a beginner, and I probably made some big mistake.
AI: The problem is occurring for a few reasons:
You must ensure the correct ordering of eigenvalues/eigenvectors.
You should be taking the square root of the eigenvalues (Lambda_1) to get the singular values.
You need to take special care to ensure you are using the right signs for the eigenvectors as these won't necessarily be correct by default. This is because we assume that the singular values are the positive roots of the eigenvalues, but the two sets of eigenvectors we retrieve don't necessarily respect this assumption when joined in the singular value decomposition.
Here is the working example:
import numpy as np

r = 10
XT = X.transpose()
C_1 = XT @ X
C_2 = X @ XT
[Lambda_1, V_hat] = np.linalg.eig(C_1)
[Lambda_2, U_hat] = np.linalg.eig(C_2)
# The eigenvalues/eigenvectors returned from np.linalg.eig are not sorted so may differ between Lambda_1 and Lambda_2
# When performing economy SVD we also want to ensure the largest eigenvalues come first
i_1 = np.argsort(Lambda_1)[::-1]
i_2 = np.argsort(Lambda_2)[::-1]
V_hat = V_hat[:, i_1]
U_hat = U_hat[:, i_2]
# We must take the square root of eigenvalues to get the singular values
Lambda = np.sqrt(np.sort(Lambda_1)[::-1])
# X @ V_hat and U_hat @ np.diag(Lambda) should be equal. But the eigenvectors retrieved may not have the correct signs
# This code checks if X @ V_hat and U_hat @ np.diag(Lambda) have different signs and then updates V_hat wherever the sign is incorrect
same_sign = np.sign((X @ V_hat)[0] * (U_hat @ np.diag(Lambda))[0])
V_hat = V_hat * same_sign.reshape(1, -1)
V_hat_T = V_hat.transpose()
X_tilde = U_hat[:,:r] @ np.diag(Lambda)[0:r,:r] @ V_hat_T[:r,:] |
H: Pytorch: understanding the purpose of each argument in the forward function of nn.TransformerDecoder
According to https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html, the forward function of nn.TransformerDecoder contemplates the following arguments:
tgt – the sequence to the decoder (required).
memory – the sequence from the last layer of the encoder (required).
tgt_mask – the mask for the tgt sequence (optional).
memory_mask – the mask for the memory sequence (optional).
tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
memory_key_padding_mask – the mask for the memory keys per batch (optional).
Unfortunately, Pytorch's official documentation on the function isn't exactly very thorough at this point (April 2021), in terms of the expected dimensions of each tensor and when it does or doesn't make sense to use each of the optional arguments.
For example, in previous conversations it was explained to me that tgt_mask is usually a square matrix used for self attention masking to prevent future tokens from leaking into the prediction of past tokens. Similarly, tgt_key_padding_mask is used for masking padding tokens (which happens when you pad a batch of sequences of different lengths so that they can fit into a single tensor). In light of this, it makes total sense to use tgt_mask in the decoder, but I wouldn't be so sure about tgt_key_padding_mask. What would be the point of masking target padding tokens? Isn't it enough to simply ignore the predictions associated to padding tokens during training (say, you could do something like nn.CrossEntropyLoss(ignore_index=PADDING_INDEX) and that's it)?
More generally, and considering that the current documentation is not as thorough as one would like it to be, I would like to know what the purpose is of each argument of nn.TransformerDecoder's forward function, when it makes sense to use each of the optional arguments, and if there are nuances in the usage one should keep in mind when switching between training and inference modes.
AI: About the need for tgt_key_padding_mask
While padding is usually applied after the normal tokens (i.e. right padding), it is perfectly fine to apply it before normal tokens (i.e. left padding). For instance, fairseq supports parameter left_pad to specify precisely this.
For left padding to be handled correctly, you must mask the padding tokens, because the self-attention mask would not prevent the hidden states of the normal token positions to depend on the padding positions.
About the meaning of each argument
I think the best documentation is in MultiHeadAttention.forward:
key_padding_mask – if provided, specified padding elements in the key will be ignored by the attention. When given a binary mask and a value is True, the corresponding value on the attention layer will be ignored. When given a byte mask and a value is non-zero, the corresponding value on the attention layer will be ignored
attn_mask – 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all the batches while a 3D mask allows to specify a different mask for the entries of each batch.
With that information and knowing where keys, values and queries come from in each multi-head attention block, it should be clear the purpose of each parameter in nn.TransformerDecoder.forward. Also, the documentation of MultiheadAttention.forward contains info about the expected shapes. |
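As a minimal, self-contained sketch of how these arguments fit together (shapes and values are made up; note that by default nn.TransformerDecoder expects inputs of shape (seq_len, batch, d_model)):
import torch
import torch.nn as nn

T, S, N, E = 5, 7, 2, 16   # tgt length, memory length, batch size, d_model

decoder_layer = nn.TransformerDecoderLayer(d_model=E, nhead=4)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)

tgt = torch.rand(T, N, E)
memory = torch.rand(S, N, E)

# Causal (self-attention) mask: -inf strictly above the diagonal, shape (T, T)
tgt_mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)

# Padding mask: True marks padding positions, shape (N, T)
tgt_key_padding_mask = torch.zeros(N, T, dtype=torch.bool)
tgt_key_padding_mask[0, -2:] = True   # pretend the last 2 target tokens of sample 0 are padding

out = decoder(tgt, memory, tgt_mask=tgt_mask,
              tgt_key_padding_mask=tgt_key_padding_mask)
print(out.shape)   # torch.Size([5, 2, 16])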
H: How to determine precision level of outcome in ML models?
I understand there are lot of ML products that can help predict the time to event. For ex, customer will purchase in the next 30 days or Patient has chances of re-admission in next 60 days etc.
But why do you think why don't people predict to the level of date?
ex: For ex, customer will make his next purchase on Feb 2nd, 2016 or patient has chance of readmitting on Mar 23rd, 2018.
I don't know; from an ordinary layman's perspective, I feel dates might be more useful. Of course, all our predictions are reported with confidence intervals.
For ex: If we know the date when the course will end, we can plan our schedule accordingly to do other tasks/sign up for new courses etc. But if we are told that your course might end anytime in next 30/60/90 days, we are at risk of wasting few days before we enroll for another course etc. Hope my example makes sense.
Can experts here suggest me on when
a) Prediction to date level might be a good thing?
b) Prediction to date level might be a bad thing?
AI: The reason you get predictions without an exact time is that this is how the models are trained: they are not trained to predict the exact time but a window for it.
A model is not trained to predict the exact time because doing so introduces a lot of problems, starting with data imbalance and the huge number of classes it would require. Also, DL/AI is not God or Laplace's demon, so predicting the exact date is not going to happen.
H: LSTM for Stock Return Prediction
I am writing my masters thesis and am using LSTMs for daily stock return prediction. So far I am only predicting numerical values but will soon explore a classification style problem and predict whether it will go up or down each day.
I have explored several scenarios
A single LSTM using as input only the past 50 days return data
A stacked (2 layers) using as input only the past 50 days return data
The results are not great for either (and I didn't expect them to be). So I tried some feature engineering using 3 day MA, 5 day MA, 10 day MA, 25 day MA, 50 day MA of the daily returns as well as the actual daily return, meaning I have 6 input features. All other variables are kept constant yet the model now overfits (see the training and test loss plots below). Does anyone have any ideas why this may be?
Test Loss in orange and Train in blue
AI: I am not sure this type of model is a good use case for the particular task. More specifically, citing Chollet from Deep Learning with Python book,
Always remember that when it comes to markets, past performance is not a good predictor of future returns—looking in the rear-view mirror is a bad way to drive. Machine learning, on the other hand, is applicable to datasets where the past is a good predictor of the future.
— Deep learning with python, Francois Chollet
What is essentially being argued here is that historical stock data is not a phenomenon that repeats itself based on its own underlying distribution, so the past is not a good predictor of the future.
H: 3d input for Dense Layer Keras
Is there any example of how Keras Dense layer handles 3D input.
The documentation explains the following:
If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot).
But I could not understand the internal matrix calculation
For example:
import tensorflow as tf
from tensorflow.keras.layers import Dense
sample_3d_input = tf.constant(tf.random.normal(shape=(4,3,2)))
dense_layer = Dense(5)
op = dense_layer(sample_3d_input)
based on the documentation for a 3D input of shape (m,d0,d1), the shape of Layer's weight_matrix (or) kernel will have the shape (d1,units) which is (2,5) in this case. But I dont understand how the op is calculated to have the shape (m,d0,units)
AI: In a regular fully connected layer (Dense), the computation is done using the following Matrix operation : $$R = A*W + B$$
With all matrices being vectors (if batch size = 1), except $W$, which has size (input_size, output_size).
For a 3D input, the way TF computes the output is simply by applying this formula only to the last dimension, considering all other dimensions as analogous to the batch dimension.
So in your example with an input of size (4,3,2), TensorFlow 'flattens' the tensor into a (12, 2) matrix, computes the result as if it were a 2D input, which gives a (12, 5) matrix, and then 'unflattens' it to give back its 3 dimensions: (4,3,5).
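A small sketch to verify this behaviour against tf.tensordot, reusing the shapes from the question:
import tensorflow as tf
from tensorflow.keras.layers import Dense

x = tf.random.normal(shape=(4, 3, 2))
dense_layer = Dense(5)
op = dense_layer(x)

# Manual computation: contract only the last axis of x with axis 0 of the kernel
manual = tf.tensordot(x, dense_layer.kernel, axes=[[2], [0]]) + dense_layer.bias

print(op.shape)                                    # (4, 3, 5)
print(tf.reduce_max(tf.abs(op - manual)).numpy())  # ~0.0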
H: Memory issue when trying to initiate zero tensor with pytorch
I am facing a memory issue while trying to initialize a torch.zeros -
torch.zeros((2000,2000,3200), device=device)
Getting the following error:
RuntimeError: CUDA out of memory. Tried to allocate 47.69 GiB (GPU 0; 8.00 GiB total capacity; 1.50 KiB already allocated; 6.16 GiB free; 2.00 MiB reserved in total by PyTorch)
My question is: why does a zero tensor need that big memory? Or am I doing any mistake?
P.S. I was checking with getsizeof in another system - the size of this tensor is showing as 72bytes only.
AI: The reason the tensor takes up so much memory is that by default the tensor stores values with the type torch.float32. This data type uses 4 bytes for each value in the tensor (check using .element_size()), which gives a total of ~48GB after multiplying by the number of zero values in your tensor (4 * 2000 * 2000 * 3200 bytes ≈ 47.7GiB). What you could try is changing the datatype from torch.float32 to something like torch.int8, which only uses 1 byte per value, reducing the memory needed by 75% to ~12GB. But given that you only seem to have 8GB available this will not solve your problem either. The only solution therefore is simply to use a tensor with fewer values.
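A quick sketch to check the arithmetic without actually allocating the huge tensor:
import torch

t = torch.zeros(10, dtype=torch.float32)
print(t.element_size())                    # 4 bytes per float32 value
print(4 * 2000 * 2000 * 3200 / 1024**3)    # ~47.7 GiB for the full tensor
print(torch.zeros(10, dtype=torch.int8).element_size())   # 1 byte per int8 value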
H: Is positional encoding (in transformers) an estimation of the relative positions of words in the training corpus texts?
Is this some kind of estimation of the relative positions of words in the training texts? Are they creating some kind of statistical "distribution" of words? Is "cat" usually 2 or 3 words away from "milk" in the English language? Things have to have a meaning, haven't they? Is BERT just adding some additional dimensions to the vector space to include info on the relative positions of words?
AI: First, we should clarify that positional encodings are not necessarily applied to words but tokens. Nowadays, using subword tokens is the norm, and infrequent words can be divided into many subword tokens.
Regarding the role of positional encoding: according to research, positional encodings have different roles depending on where they are used:
Transformer decoders for autoregressive language modeling: positional embeddings learn about absolute positions (see What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding).
Masked language models (i.e. BERT): the positional encodings only help differentiating between tokens, and the order does not actually matter much (see Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little).
So, to answer the question:
No, positional encodings do not estimate the relative positions of words in the training corpus texts. |
H: Reason for adding 1 to word index for sequence modeling
I notice in many of the tutorials 1 is added to the word_index. For example considering a sample code snippet inspired from Tensorflow's tutorial for NMT https://www.tensorflow.org/tutorials/text/nmt_with_attention :
import tensorflow as tf
sample_input = ["sample sentence 1", "sample sentence 2"]
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
lang_tokenizer.fit_on_texts(sample_input)
vocab_inp_size = len(lang_tokenizer.word_index)+1
I don't understand the reason for adding 1 to the word_index dictionary. Won't adding a random 1 affect the prediction? Any suggestions will be helpful.
AI: First, note that they are just adding 1 to the size of the vocabulary, not to the token IDs themselves, so the predictions are not affected.
Then, why add 1?
Because Tokenizer.word_index is a python dictionary that contains token keys (string) and token ID values (integer), and where the first token ID is 1 (not zero) and where the token IDs are assigned incrementally. Therefore, the greatest token ID in word_index is len(word_index). Therefore, we need vocabulary of size len(word_index) + 1 to be able to index up to the greatest token ID.
Update: note that adding 1 to the vocabulary size has nothing to do with out-of-vocabulary words: words that were not seen when fitting the tokenizer are encoded as the out-of-vocabulary token (oov_token) if it was provided when building the tokenizer, or ignored if not. The oov token, if provided, has index 1.
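A tiny sketch illustrating both points (the exact indices may differ depending on token frequencies):
from tensorflow.keras.preprocessing.text import Tokenizer

tok = Tokenizer(oov_token="<OOV>")
tok.fit_on_texts(["sample sentence one", "sample sentence two"])
print(tok.word_index)            # e.g. {'<OOV>': 1, 'sample': 2, 'sentence': 3, 'one': 4, 'two': 5}
print(len(tok.word_index) + 1)   # vocabulary size needed to index the largest token ID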
H: Concat two tensors of different dimensions
I have two tensors. For example -
a = torch.randn((500, 200, 10))
b = torch.randn((500, 5))
I have to concat each of b tensor to all elements of corresponding a tensor i.e., each 200 tensors of a[0] should get concatenated with b[0] - final dimension should be (500, 200, 15).
Without using explicit for loop, how can I achieve this in Pytorch efficiently?
AI: For that, you should repeat b 200 times in the appropriate dimension this way:
c = torch.cat([a, torch.unsqueeze(b, 1).repeat(1, 200, 1)], dim=2)
c.shape
As desired, the shape of the result is torch.Size([500, 200, 15]) |
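If memory is a concern, an equivalent sketch uses expand instead of repeat, which creates a broadcasted view rather than materializing 200 copies of b before the concatenation:
c = torch.cat([a, b.unsqueeze(1).expand(-1, a.shape[1], -1)], dim=2)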
H: Stacked Bar Chart in R
I wondered if someone could help with a re-labelling 'fill' variables ' and the x-axis on a stacked bar chart please?
My ambition is to:
Express the fill variable(s) as of six domains in the health survey, titled ' MR, Mob, SC, Act, Pain, Anx')
And have the x-variable representing the 'health states' (0,1, 2, 3,4, 5), rather than X0, X1....X5
library(ggplot2)
library(dplyr)
library(reshape2)
library(viridis)
library(hrbrthemes)
library(readxl)
data <- read.table(text = "0 1 2 3 4 5
MR 155 211 64 14 1 1
Mob 0 393 51 2 0 0
SC 0 427 12 7 0 0
Act 0 386 45 15 0 0
Pain 0 379 62 5 0 0
Anx 0 355 73 18 0 0", header = TRUE)
data$row <- seq_len(nrow(data))
data2 <- melt(data, id.vars ="row")
ggplot(data2, aes(x = variable, y = value, fill = row)) +
geom_bar(stat = "identity") +
xlab("\nHealthState") +
ylab("Count\n") +
theme_bw()
Would appreciate all feedback.
AI: Thank you for providing a good reproducible code, I can't resist answering the question :)
Hope the following code helps:
data <- read.table(text = "0 1 2 3 4 5
MR 155 211 64 14 1 1
Mob 0 393 51 2 0 0
SC 0 427 12 7 0 0
Act 0 386 45 15 0 0
Pain 0 379 62 5 0 0
Anx 0 355 73 18 0 0", header = TRUE)
#data$row <- seq_len(nrow(data))
data$row <- row.names(data)
data2 <- melt(data, id.vars ="row")
data2$variable <- sub("X", "", as.character(data2$variable), fixed=TRUE)
ggplot(data2, aes(x = variable, y = value, fill = row)) +
geom_bar(stat = "identity") +
xlab("HealthState") +
ylab("Count") +
theme_bw() |
H: Reducing (Variance) | the gap between my weights
I have ML ready samples. And each sample has a weight.
The weights distribute between [0-1]
My problem arise because there are a lot of samples which are 0.001, 0.00x
And a lot of samples which are 0.997, 0.99x
I am going to sample data based on these weights. And samples with 0.99x will overshadow the other samples in the data set while 0.00x samples will have 0 significance.
The solution I am looking for is some kind of function over those weights that will balance them a little bit / reduce those huge gaps (thereby reducing the variance) AND still preserve their order.
So if 0.997 turned into 0.88, 0.996 will turn into something < 0.88
For example:
in [0.01, 0.1, 0.4, 0.6, 0.8, 0.997]
I would want something like:
[0.15, 0.2, 0.42, 0.6, 0.78, 0.9] (Just an example)
AI: Try the softmax function:
import numpy as np

weights = [0.01, 0.1, 0.4, 0.6, 0.8, 0.997]
temperature = 1
weights = np.array(weights) / temperature
new_weights = np.exp(weights) / np.exp(weights).sum() # softmax function
You can tweak the temperature hyperparameter: the higher its value, the closer together the "new_weights" become, yet their order is preserved.
H: X Second samples taken on unevenly spaced intervals
I have dataset of following specification:
512 samples taken at unevenly spaced intervals over the year
Each sample is an 8 second data from sensors with 4ms resolution
Samples are not labeled
For example, I have 5 samples taken on the first day, then more than 10 samples taken on the 5th day, and so on.
I want to cluster data to check if I can infer the mode of operation for the machine from single 8 second sample. Also, I want to measure the performance of the component over the year for predictive maintenance.
Currently I want to use self organizing maps for clustering purposes. I am new to this data science and am currently learning. The usual methods use evenly spaced samples. Also each samples in these cases is single input (Like stock value at the end of a day) instead of X second data taken at time Y.
My question is: How do I input such data into any model?
AI: It depends how you want to cluster the data, but here are some options....
FEATURE ENGINEERING
You could, for example, completely ignore the timestamps and just seek to cluster the different modes of operation based on the magnitude of the feature alone. Here, simply extract the values into a new feature list.
Otherwise, you have to consider what is important about the samples. For example, does the fact that one snapshot contains 5 samples vs 8 samples distinguish between modes? If so, build your feature vector by counting the number of samples in a day.
In doing so, you will create single features for each day from various attributes of the day (e.g. magnitude, number of samples), enabling you to cluster the days based on these features.
RESAMPLING
Conversely, you could resample all the data into uniform intervals, but this would probably need lots of assumptions, and would therefore introduce noise. |
H: Train at 99% and Validation split accuracy is not more than 70%
I am training a model that reaches up to 99% accuracy on the train set,
but the validation split accuracy does not increase beyond 70-72%.
this is how my model is configurated:
model = Sequential()
#model.add(TimeDistributed(Dense(64, activation='linear')))
#model.add(GRU(512, return_sequences=True, activation='linear',))
model.add(LSTM(128, return_sequences=True, activation='linear'))
model.add(Conv1D(64, 64, strides=1, padding='same'))
model.add(Dense(128, activation="linear"))#, kernel_regularizer=regularizers.l1_l2(l1=1e-4, l2=1e-6)
model.add(Dropout(0.2))
#model.add(Conv1D(16, 16, strides=1, padding='same'))
#model.add(TimeDistributed(Dense(32, activation='linear')))
#model.add(GRU(512, return_sequences=True, activation='linear'))
model.add(LSTM(128, return_sequences=True, activation='linear'))
model.add(Conv1D(64, 64, strides=1, padding='same'))
model.add(Dense(128, activation="linear"))#,kernel_regularizer=regularizers.l1_l2(l1=1e-4, l2=1e-6)
model.add(Dropout(0.2))
#model.add(BatchNormalization())
model.add(Dense(1,activation='linear'))
model.compile(optimizer='adamax',loss="mse",metrics=['accuracy'])
model.fit(X_train,y_train,epochs=8000, batch_size=256, verbose=1, validation_split=0.1, callbacks=callback)
what can be the issue?
AI: This means your model is overfitting the train set. Try simplifying the model by reducing the number of neurons in each layer. That will reduce your train accuracy but may allow your validation accuracy to increase.
You know that you are not overfitting when both train and validation accuracy hover around the same value. |
H: why my pytorch liner regression failed?
I am new to pytorch and want to start from a simple example: linear regression.
I created some random training and test samples.
here is my code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as opt
from numpy import random
from util import *
np.random.seed(100)
class MLP(nn.Module):
def __init__(self):
super(MLP, self).__init__()
self.l1 = nn.Linear(100, 100)
self.l2 = nn.Linear(100, 1)
def forward(self, x):
x = (self.l1(x))
x = (self.l2(x))
return x
mlp = MLP().float()
target = nn.MSELoss()
o = opt.SGD(mlp.parameters(), lr=0.02, momentum=0.9)
w = torch.tensor(np.random.rand(100, 1) * 3).float()
x = torch.tensor(np.random.rand(100, 100) * 100).float()
y = torch.mm(x, w) + 2
test_x = torch.tensor(np.random.rand(100, 100) * 10).float()
test_y = torch.mm(test_x, w) + 2
for epoch in range(100):
op = mlp(x)
if epoch == 10:
print(op)
sys.exit(1)
o.zero_grad()
loss = target(op, y)
loss.backward()
o.step()
test_pred = mlp(test_x)
print(test_pred.shape)
print(test_y.shape)
print('%dth: loss=%.4f os_loss=%.4f'%(epoch, loss.item(), target(test_pred, test_y).item()))
I found that the loss becomes NaN after several rounds have passed.
I can't find out why; I think my neural network setup is correct. Can you help with this?
AI: The problem seems to come from your learning rate and the non-normalization of your data. Here your network is clearly unstable and thus reaches sky-high values (10^20) which lead to NaN values. A typical learning rate for SGD is 0.001, but this is for normalized data (inputs-outputs between 0 and 1). Here your inputs and outputs have high values, which are amplified even more by MSE (which squares the error). So this is why there is an issue with the learning rate here: the resulting gradient is way too strong.
This is how I understand the network behaviour, that is clearly unstable with lr=0.02.
A way to solve it is to greatly diminish your learning rate (lr=0.00000001 worked for me). Another way around it is to normalize your data (inputs and outputs).
Here is your modified code, you can try changing the learning rate and see how it changes the network behaviour (from stable to unstable).
import torch
import torch.nn as nn
import torch.optim as opt
import numpy as np
import matplotlib.pyplot as plt
import math
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" # This line may be useless
class MLP(nn.Module):
def __init__(self):
super(MLP, self).__init__()
self.l1 = nn.Linear(100, 100)
self.l2 = nn.Linear(100, 1)
def forward(self, x):
x = self.l1(x)
x = self.l2(x)
return x
mlp = MLP().float()
target = nn.MSELoss()
o = opt.SGD(mlp.parameters(), lr=0.00000001) #Try 0.0000001 and 0.000001
w = torch.tensor(np.random.rand(100, 1) * 3).float()
x_train = torch.tensor(np.random.rand(100, 100)*100).float()
y_train = torch.mm(x_train, w) + 2
test_x = torch.tensor(np.random.rand(100, 100)*10).float()
test_y = torch.mm(test_x, w) + 2
losslist = []
losstestlist = []
for epoch in range(100):
op = mlp(x_train)
o.zero_grad()
loss = target(op, y_train)
losslist.append(math.log10(loss))
loss.backward()
o.step()
test_pred = mlp(test_x)
loss = target(test_pred, test_y)
losstestlist.append(math.log10(loss))
plt.plot(losslist, 'r', label='train loss')
plt.plot(losstestlist, 'b', label='test loss')
plt.legend()
plt.show()
Hope this helps. |
H: Dimensions of Transformer - dmodel and depth
Trying to understand the dimensions of the Multihead Attention component in Transformer referring the following tutorial https://www.tensorflow.org/tutorials/text/transformer#setup
There are 2 unknown dimensions - depth and d_model which I dont understand.
For example, if I fix the dimensions of the Q,K,V as 64 and the number_of_attention_heads as 8, and input_embedding as 512 , can anyone please explain what is depth and d_model?
AI: d_model is the dimensionality of the representations used as input to the multi-head attention, which is the same as the dimensionality of the output. In the case of normal transformers, d_model is the same size as the embedding size (i.e. 512). This naming convention comes from the original Transformer paper.
depth is d_model divided by the number of attention heads (i.e. 512 / 8 = 64). This is the dimensionality used for the individual attention heads. In the tutorial you linked, you can find this as self.depth = d_model // self.num_heads. Each attention head projects the original representation into a smaller representation of size depth, then computes the attention, and then all the attention head results are concatenated together, so that the final dimensionality is again d_model. You can find more details on the individual computations in this other answer.
Note that the implementation of the multi-head attention in the tutorial is not a straightforward implementation from the original paper but it is equivalent: in the original paper, there are different matrices $W_i^Q, W_i^K, W_i^V$ for each attention head $i$, while in the implementation of the tutorial there are combined matrices $W^Q, W^K, W^V$ that compute the projection for all attention heads, which is then split into the separate heads by means of the function split_heads. |
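A minimal sketch of the splitting step itself (mirroring the tutorial's split_heads, with made-up batch and sequence sizes):
import tensorflow as tf

batch_size, seq_len, d_model, num_heads = 2, 7, 512, 8
depth = d_model // num_heads                 # 64

x = tf.random.normal((batch_size, seq_len, d_model))
x = tf.reshape(x, (batch_size, seq_len, num_heads, depth))
x = tf.transpose(x, perm=[0, 2, 1, 3])       # (batch, num_heads, seq_len, depth)
print(x.shape)                               # (2, 8, 7, 64)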
H: Storing and loading bottleneck features for transfer learning on large data sets (Keras)
I would like to apply transfer learning on a pretty large image data set in order to solve a classification problem. Currently I load a pre-trained net without the top layers, add my own top layers, freeze the base model and start training.
Since the data set is too big to fit into memory I load the train and validation sets using prefetched TF Datasets:
train_ds = keras.preprocessing.image_dataset_from_directory (training_data_dir,
batch_size=batch_size,
image_size=img_size,
label_mode='categorical')
val_ds = keras.preprocessing.image_dataset_from_directory (validation_data_dir,
batch_size=batch_size,
image_size=img_size,
label_mode='categorical')
train_ds = train_ds.prefetch(buffer_size=buffer_size)
val_ds = val_ds.prefetch(buffer_size=buffer_size)
Now in order to reduce the training time I found a nice approach on this page: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
Before training the model, they first run the base model for each input image and store the output of the base model, that is the bottleneck features. Training is then done on these bottleneck features. That means that during training the base network does not need to be executed, saving a lot of computation time.
Now for their approach they have a data set that fits in memory and the set of all bottleneck features over all images is represented as a single numpy array which is stored in and loaded from a single file.
Unfortunately this is not possible in my case since the original image set is so large that it will not fit into memory, and even converted to a bottleneck feature set it is still too large. So I am looking for a solution where the bottleneck features of a single image are stored in a single file and where those bottleneck feature vectors can be loaded in a way that is similar to how image_dataset_from_directory loads images.
AI: You can first write the bottleneck features into a tfrecords file, and then load them as a dataset for the training phase.
In the tensorflow documentation you can find complete examples of how to do both. |
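As a rough sketch of this approach (here base_model is the frozen pre-trained network and feat_dim the flattened size of its output, both of which are assumptions; train_ds is the dataset from the question):
import tensorflow as tf

# --- Writing: run the frozen base model once and store its outputs ---
def serialize_example(features, label):
    feature = {
        "features": tf.train.Feature(float_list=tf.train.FloatList(value=features.ravel().tolist())),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

with tf.io.TFRecordWriter("bottleneck.tfrecord") as writer:
    for images, labels in train_ds:
        feats = base_model(images, training=False).numpy()
        for f, l in zip(feats, labels.numpy().argmax(-1)):
            writer.write(serialize_example(f, int(l)))

# --- Reading: a dataset of bottleneck features to train the top layers on ---
feature_spec = {
    "features": tf.io.FixedLenFeature([feat_dim], tf.float32),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(record):
    parsed = tf.io.parse_single_example(record, feature_spec)
    return parsed["features"], parsed["label"]

bottleneck_ds = (tf.data.TFRecordDataset("bottleneck.tfrecord")
                 .map(parse)
                 .batch(32)
                 .prefetch(tf.data.AUTOTUNE))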
H: How to explain a relationship between Accuracy and F1 Score / F-Measure?
I am building a CNN model for pitch estimation using a song recording. Pitch estimation is done by inputting a spectrogram to the CNN model and making the CNN predict a pitch sequence (250 pitch values per recording) from that spectrogram. For the evaluation metrics, I am using Accuracy and F1 Score. A sample of the overall test results, using mean measurements, is given below.
Some notes:
Val-Acc is the validation accuracy. I am using this to see how well the model analyze new data that is not given during training.
Delta acc is the difference value between accuracy and val-acc.
Right now, I am wondering how can I explain the relationship between Accuracy and F1 Score. My supervisor said to me that accuracy is measured to get how accurate the model performs, and F1 is how well the model performs. Is the relationship really like that? May I get some insight on how to explain the relationship between them?
AI: Saying that accuracy is measured to get how accurate the model performs, and F1 is how well the model performs
This doesn't mean anything, it's obviously too vague.
The first things to check in order to understand this relationship are the definitions of accuracy and F1-score.
Wikipedia has a good page which explains how different classification evaluation measures are related.
Observations on your results:
The accuracy and F1-score are almost identical everywhere. This suggests that your data is probably quite well balanced, i.e. the difference in the number of positive vs. negative instances isn't very big. Why? Because if the data was imbalanced then the model would over-predict the majority class, and this would cause the F1-score to be much lower than the accuracy: assuming the majority class is the negative class, the recall would be somewhat low but the accuracy could still be high because most instances (majority class) would be correctly predicted.
As a consequence, there's no insight to gain from analyzing the relationship between accuracy and F1-score since they're virtually identical. The small differences might be due to the geometrical mean between precision and recall. F1-score is more informative in case of imbalanced data, but this is not the case here.
The F1-score is calculated only on the training data. It would be more useful to calculate it on the validation data.
There's some serious overfitting happening especially with the high learning rates, but with the low learning rates the fact that difference between training and validation accuracy increases is also worrying. Maybe the model is too complex or there are not enough instances in the data. Ideally the two accuracy values should converge. |
H: Get the prediction probability using prediction function
I'm new to SVM models. I took custom SVM classifier from the github. In there standard predict function was overwritten by custom predict function.
def predict(self, instances):
    predictions = []
    for instance in instances:
        # class is determined based on the sign: -1 or 1
        predictions.append(int(np.sign(self.weight_vector.dot(instance.T)[0] + self.bias)))
    return predictions
I want to get prediction probability of each class. [-1 probability, 1 probability]
I know from standard SVM predict_proba function I can get the probability. Since this is a customized SVM how do I get that? Can somebody help me.
Thank you
AI: I have followed scikit-learn https://github.com/scikit-learn/scikit-learn/blob/fd237278e/sklearn/linear_model/_base.py#L293 link and was able to get predicted probabilities. |
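For reference, what scikit-learn's linear models do internally (e.g. the _predict_proba_lr helper) is apply the logistic sigmoid to the decision scores. A comparable sketch for the custom class above would be the following; note these are squashed scores, not calibrated probabilities (for properly calibrated SVM probabilities you would need something like Platt scaling):
import numpy as np

def predict_proba(self, instances):
    # signed distance from the separating hyperplane, same expression as in predict()
    scores = np.array([self.weight_vector.dot(x.T)[0] + self.bias for x in instances])
    prob_pos = 1.0 / (1.0 + np.exp(-scores))             # logistic sigmoid
    return np.column_stack([1.0 - prob_pos, prob_pos])   # [P(class -1), P(class +1)]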
H: Applying Sci-kit Learn's kNN algorithm to Fresh Data
While I was studying Scikit-learn's kNN algorithm, I realized that if I use sklearn.model_selection.train_test_split, the provided data gets automatically split into the train data and the test data set, according to the proportions provided as parameters.
Then based on the train data, the algorithm looks at the k-nearest neighbor points closest to the test data points to determine whether the test data points belong to a certain criteria or not.
I was wondering whether there was a way to predict the criteria NOT for the test data sets, which were already a part of the provided data set, but brand new data that were not provided during the whole process.
Is there a way to do that using sci-kit learn?
AI: KNN is not fitted using "the k-nearest neighbor points closest to the test data points". You fit it explicitly on the training data, like:
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
Usually this will be xtrain, ytrain, while you test the model performance using "new" (unseen) data and compare the true targets to the prediction.
neigh.predict(xtest)
or
neigh.predict_proba(xtest)
See docs: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html |
H: Trouble performing feature selection using boruta and support vector regression
I was trying to select the most important features of a data set using Boruta in python. I have split the data into training and test set. Then I used SVM regressor to fit the data. Then I used Boruta to measure feature importance.The code is as follows:
from sklearn.svm import SVR
svclassifier = SVR(kernel='rbf',C=1e4, gamma=0.1)
svm_model= svclassifier.fit(x_train, y_train)
from boruta import BorutaPy
feat_selector = BorutaPy(svclassifier, n_estimators='auto', verbose=2, random_state=1)
feat_selector.fit(x_train, y_train)
feat_selector.support_
feat_selector.ranking_
X_filtered = feat_selector.transform(x_train)
But I get this error KeyError: 'max_depth'.
What might be causing this error?
Does Boruta work with any kind of models? i.e linear models, tree-based models, neural nets, etc.?
AI: Without the full code, there are few things to check but I would try as follows:
Note that you are passing the unfitted svclassifier to BorutaPy; according to your code, you should pass the fitted object, that is svm_model:
from sklearn.svm import SVR
svclassifier = SVR(kernel='rbf',C=1e4, gamma=0.1)
svm_model= svclassifier.fit(x_train, y_train)
from boruta import BorutaPy
feat_selector = BorutaPy(svm_model, n_estimators='auto', verbose=2, random_state=1)
feat_selector.fit(x_train, y_train)
feat_selector.support_
feat_selector.ranking_
X_filtered = feat_selector.transform(x_train)
EDIT:
After reading the implementation of Boruta and since this is based on tree models.
Parameters
----------
estimator : object
A supervised learning estimator, with a 'fit' method that returns the
feature_importances_ attribute. Important features must correspond to
high absolute values in the feature_importances_.
SVM does not have the attribute feature_importances_, so Boruta can only receive tree models like DecisionTrees, RandomForest, XGB, etc.
If you want to use SVM anyway, I would recommend changing the feature selection algorithm to permutation importance, which is a quite similar way of computing importance based on random repeated permutation, but in this case you will have to provide a metric to measure the decrease in performance when a feature is shuffled.
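One readily available option is scikit-learn's permutation_importance (the same idea as the eli5 PermutationImportance class); a sketch with the SVR model from the question and an explicit regression metric:
from sklearn.inspection import permutation_importance

result = permutation_importance(svm_model, x_train, y_train,
                                scoring="neg_mean_squared_error",
                                n_repeats=10, random_state=1)
print(result.importances_mean)   # larger mean drop in score means a more important feature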
H: How Many Features Are Used in Random Forest When max_features is not an integer?
In the RandomForestClassifier function in sklearn.ensemble, one of the parameters is max_features, which controls the size of the random subset of features that are considered at each node splitting. I would think this needs to be an integer, since of course the size of a subset needs to be an integer.
But one of the options here is "log2", which sets max_features to log_2(n_features).
I was wondering how that works, considering log_2(n_features) will not be an integer unless n_features is a power of 2.
Thanks.
AI: The argument for the number of features gets passed down to the BaseDecisionTree class. Looking at the code, you can see that the value that is used is calculated by first taking log with base two and then converting it to an integer (i.e. rounding it).
if self.max_features == "auto":
if is_classification:
max_features = max(1, int(np.sqrt(self.n_features_)))
else:
max_features = self.n_features_
elif self.max_features == "sqrt":
max_features = max(1, int(np.sqrt(self.n_features_)))
elif self.max_features == "log2":
max_features = max(1, int(np.log2(self.n_features_)))
else:
raise ValueError("Invalid value for max_features. "
"Allowed string values are 'auto', "
"'sqrt' or 'log2'.") |
H: Loading pretrained model with Pytorch
I saved my model with this code:
from google.colab import files
torch.save(net, 'model.pth')
# download checkpoint file
files.download('model.pth')
Then uploaded this way and checked on an image (x):
model = torch.load('model.pth')
model.eval()
torch.argmax(model(x))
And on the old session, it worked great, but then I started a new session and tried the code above, and got such an error:
AttributeError: Can't get attribute 'EfficientNet' on <module '__main__'>
Maybe somebody knows how to deal with it?
AI: Got the very same error recently.
Your network is usually defined as a class (here class EfficientNet(nn.Module)). It seems that when we load a model, the class needs to be defined so it can be instantiated.
In my case, the class was defined in the training .py file. So what I did to fix that error was just copy-paste (it seems importing it didn't work for me, so I had to copy paste) the whole class definition to the file where you load your model.
In my case the class definition looked something like that and needed to be pasted before using the torch.load function:
class FCNClass(nn.Module):
def __init__(self, in_feat=1000, nb_classes=3, nb_hid=1, hidden_size=500, act=nn.ReLU):
super(FCNClass, self).__init__()
self.act = act()
self.flat = nn.Flatten()
self.fc1 = nn.Linear(in_feat, hidden_size)
self.fcs = nn.ModuleList()
for nb in range(nb_hid):
self.fcs.add_module("hid" + str(nb), nn.Linear(hidden_size, hidden_size))
self.out = nn.Linear(hidden_size, nb_classes)
def forward(self, x):
x = self.flat(x)
x = self.fc1(x)
x = self.act(x)
for lay in self.fcs:
x = lay(x)
x = self.act(x)
x = self.out(x)
x = self.act(x)
return x
torch.load('Model.h5')
The class basically defines the architecture of the model, if it is not defined, then the load function doesn't know where to put the weights. Mine was done with torch.load(path) and not model.load_state_dict(torch.load(path)) but it should not make a difference |
H: Do I have to scale/normalize my training data for LSTM Classification, even if I only have one feature?
I have a time-series data as follows:
# Time, Bitrate, Class
0.2, 312, 1
0.3, 319 1
0.5, 227 0
0.6, 229 0
0.7, 219 0
0.8, 341 1
1.0, 401 2
I am using only the "Bitrate" column as a feature, and "Class" for the labels for an LSTM classification model. In case of multiple features, I need to scale my data of course, to prevent domination from one feature to another. However, in my case, do I still need to scale/normalize my data, considering there is only one feature?
Thanks!
AI: Normalizing the features ensures that they take on reasonable values, say between -3 and +3. This ensures that you don't run into numeric overflow or underflow issues in your network. For example, just see what value np.exp(312) or np.exp(-312) takes on,
where 312 is a bitrate value from your observations.
Certain activation functions such as the sigmoid might run into numerical precision issues if your data is not normalized. So, in this case it doesn't matter that your data contains only 1 feature.
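Standardizing a single feature is a one-liner with scikit-learn; a sketch using the bitrate values from the question (in practice, fit the scaler on the training split only and reuse it on the test split):
import numpy as np
from sklearn.preprocessing import StandardScaler

bitrate = np.array([312, 319, 227, 229, 219, 341, 401], dtype=float).reshape(-1, 1)
scaler = StandardScaler()
scaled = scaler.fit_transform(bitrate)   # zero mean, unit variance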
H: What if Training and testing dataset comes from the same source?
I am working on a classification problem in which I have to distinguish between healthy and damaged plates. when I use the combination of k-means clustering and SVM algorithm together with 10-fold cross validation, I can achieve the accuracy up to 95%. All the training and validation datasets come from the experiment.
For the testing, can I get the datasets after repeating same experiments with same specimen or I have to use different sets of specimens?
AI: You have to use a different set of specimens. Or you can keep one or two specimens from the original set aside and use them as test. Use data augmentation and transfer learning in that case. |
H: Which ANN structure to use?
Let $\mathcal{S}$ be the training input data set where each input $u^i \in \mathcal{S}$ has $d$ features.
I want to design a ANN so that the cost function below is minimized (the sum of
square of pairwise differences between model outputs) and the given constraint is satisfied, where $w$ is ANN model parameter vector.
Question: what kind of ANN is suitable for this purpose?
AI: A Siamese network (a network with shared weights applied to multiple inputs) will work for such a case: each input pair is passed through the same network and the pairwise cost above is applied to the outputs.
H: Can you choose a binary feature matrix for a binary classification model
This may be a stupid, but, I am new to deep learning (and machine learning for that matter) and I can't seem to find any literature to help with my question. All I can see when Googling many different questions (trying to change keywords to try get a hit on my question) is about binary classification. And also, binary classification where the feature matrix consists of real numbers.
I would like to know, is it possible to build a binary classifier with a binary feature matrix? And please can you point me to some literature.
AI: I'm not aware of any literature specific to the case of classification based on binary features since it's just a subset of the general case, but it's definitely possible.
A very common example is traditional text classification, where the document is represented as a bag of words: there are different options but each word in the vocabulary can be represented as a boolean variable, representing whether it belongs to the document or not. For example (among many others), a Bernoulli Naive Bayes classifier can be trained on such data. |
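A minimal sketch with randomly generated binary data (purely illustrative):
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X = np.random.randint(0, 2, size=(100, 20))   # binary feature matrix
y = np.random.randint(0, 2, size=100)         # binary labels
clf = BernoulliNB().fit(X, y)
print(clf.predict(X[:5]))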
H: Why is the kernel of a Convolutional layer a 4D-tensor and not a 3D one?
I am doing my final degree project on Convolutional Networks and trying to understand the explanation shown in Deep Learning book by Ian Goodfellow et al.
When defining convolution for 2D images, the expression is:
$$S(i,j) = (K*I)(i,j) = \sum_{m}\sum_{n}I(i+m,j+n)K(m,n)$$
where $I$ is the image input (two dimensions for widht and height) and $K$ is the kernel.
Later it states that the input is usually a 3D tensor. This is because usually the input image has multiple channels (say, red, green and blue channels). Furthermore, it says that it can also be a 4D-tensor when the input is seen as a batch of images, where the last dimension represents a different example, but that they will omit this last dimension for the sake of simplicity. I understand this.
Let now $I_{i,j,k}$ be the input element with row $j$, column $k$ (height and width) and channel $i$. The output of the convolution is a similarly-structured 3D-tensor $S_{i,j,k}$.
Then, the generalized convolution expression for this 3D-tensor input is
$$S_{i,j,k} = \sum_{m,n,l} I_{l,j+m-1,k+n-1}K_{i,l,m,n}$$
where "the 4-D kernel tensor $K$ with element $K_{i,j,k,l}$ giving the connection strength between a unit in channel $i$ of the output and a unit in channel $j$ of the input, with an offset of $k$ rows and $l$ columns between the output unit and the input unit".
I am completely lost in this definition. Why is the Kernel 4-D and not 3-D (which is a logical generalization of the first formula)? What is the analogous to the sentence "giving the connection strength between a unit in channel $i$ of the output and a unit in channel $j$ of the input" in the initial 2D Kernel tensor? I think that is the main things that has to be understood to understand the 4D-kernel.
AI: The first formula you quote is for an image with one input channel and one output channel, it just focuses on height and width. In this case, if we consider a 5x5 convolution, the Kernel will just have size 5x5, $m$ and $n$ and going from -2 to +2.
Now if our input has 3 channels (RGB, but could be feature maps). we need to use each channel as an input, and the weights will be different for each input map. So K becomes 3-D : 3x5x5. This new dimension corresponds to the $l$ in your formula.
If we want to have 10 outputs, we need 10 different ways of convoluting the input feature maps, so our Kernel will have one more dimension, leading to a 10x3x5x5 size for the Kernel. This last dimension corresponds to the $i$ of your formula.
To recap, the 4 dimensions of the Kernel $i, l, m, n$ stand for :
$i$ : Output dimension
$l$ : Input dimension
$m, n$ : height and width
About this sentence "giving the connection strength between a unit in channel $i$ of the output and a unit in channel $j$ of the input".
I feel it is wrong as I would rather interpret it as : "giving the connection strength between a unit in channel $i$ of the output and a unit in channel $l$ of the input" |
H: What do you do with one hot encoding items that are a non-match for all classes in a confusion matrix?
I have trained a model for one-hot binary prediction for many classes, and am now applying it to the testing set of samples. However, a lot of the predictions for samples are 0 for every class. I'm not sure what to do with these results, as I need to make a confusion matrix (nxn for the number of classes) but I don't know where these predicted-no-class results should go. Do I just discard them? I would imagine that this would create a faulty image of the error rate of the model.
AI: It depends on the design of your task, there are two options:
The task is regular multiclass classification, i.e. every instance must belong to exactly one class. In this case it would be a mistake to one-hot-encode the class, it can simply be encoded as an int (for example with LabelEncoder). The model will always predict exactly one class for one instance so the case of zero class is impossible.
The task is multi-label clasification, i.e. every instance can belong to zero, one or multiple classes. In this case an instance can be predicted as belonging to no class at all, this is normal. In this setting the confusion matrix should not be done with a $n \times n$ matrix across classes, because the classes are independent (btw it's not only about the case of zero class, the case of multiple classes would also be impossible to represent this way). Instead there should be one binary confusion matrix for every independent class. |
H: Train and predict two labels in a single process
I have a python program that makes predictions using scikit-learn RandomForestClassifier. The label is called "default" and it's the default status of a loan. This works fine.
What I need now is to extend this model, and have another label called "Prepayment Percentage" that needs to be trained using the same data as the "default" label. Ideally, the model will be trained once and the predictions will also run only once for both labels. Is this possible with RandomForestClassifier?
AI: This would not be possible since the two variables you are trying to predict are of a different type. You are first predicting the default label, which would be yes/no, so this is a classification problem. The second variable you are trying to predict is the prepayment percentage, which is a continuous variable, this is therefore a regression problem. You are not able to combine the two (i.e. a regressor and a classifier) into one model using RandomForestClassifier. You might be able to create a single model yourself that combines the two using the BaseForest base class. |
H: BERT embedding layer
I am trying to figure how the embedding layer works for the pretrained BERT-base model. I am using pytorch and trying to dissect the following model:
import torch
model = torch.hub.load('huggingface/pytorch-transformers', 'model', 'bert-base-uncased')
model.embeddings
This BERT model has 199 different named parameters, of which the first 5 belong to the embedding layer (the first layer)
==== Embedding Layer ====
embeddings.word_embeddings.weight (30522, 768)
embeddings.position_embeddings.weight (512, 768)
embeddings.token_type_embeddings.weight (2, 768)
embeddings.LayerNorm.weight (768,)
embeddings.LayerNorm.bias (768,)
As I understand, the model accepts input in the shape of [Batch, Indices] where Batch is of arbitrary size (usually 32, 64 or whatever) and Indices are the corresponding indices for each word in the tokenized input sentence. Indices has a max length of 512. One input sample might look like this:
[[101, 1996, 4248, 2829, 4419, 14523, 2058, 1996, 13971, 3899, 102]]
This contains only 1 batch and is the tokenized form of the sentence "The quick brown fox jumps over the lazy dog".
The first word_embeddings weight will translate each number in Indices to a vector spanned in 768 dimensions (the embedding dimension).
Now, the position_embeddings weight is used to encode the position of each word in the input sentence. Here I am confused about why this parameter is being learnt? Looking at an alternative implementation of the BERT model, the positional embedding is a static transformation. This also seems to be the conventional way of doing the positional encoding in a transformer model. Looking at the alternative implementation, it uses the sine and cosine functions to encode interleaved pairs in the input. I tried comparing model.embeddings.position_embeddings.weight and pe, but I cannot see any similarity. In the last sentence under A.2 Pre-training Procedure (page 13), the paper states
Then, we train the rest
10% of the steps of sequence of 512 to learn the
positional embeddings.
Why is the positional embedding weight being learnt and not predefined?
The next layer after the positional embedding is the token_type_embeddings. Here I am confused about how the segment label is inferred by the model. If I understand this correctly each input sentence is delimited by the [SEP] token. In the example above there is only 1 [SEP] token and the segment label must be 0 for that sentence. But there could be a maximum of 2 segment labels. If so, will the 2 segments be handled separately or are they processed in parallel all the same as one "array"? How does the model handle multiple sentence segments?
Finally, the outputs from these 3 embeddings are added together and passed through LayerNorm, which I understand. But are the weights in these embedding layers adjusted when fine-tuning the model to a downstream task?
AI: Why are positional embeddings learned?
This was asked in the repo of the original implementation without an answer. It didn't get an answer either in the HuggingFace Transformers repo and in cross-validated, also without answer, or without much evidence.
Given that in the original Transformer paper the sinusoidal embedding were the default ones, I understand that during preliminary hyperparameter tuning, the authors of BERT decided to go with learned embeddings, deciding not to duplicate all experiments with both types of embeddings.
We can, nevertheless, see some comparisons between learned and sinusoidal positional embedding in the ICLR'21 article On Position Embeddings in BERT, where the authors observe that:
The fully-learnable absolute PE performs better in classification,
while relative PEs perform better in span prediction.
How does the model handle multiple sentence segments?
This is best understood with the figure of the original BERT paper:
The two sentences are encoded into three sequences of the same length:
Sequence of subword tokens: the sentence tokens are concatenated into a single sequence, separating them with a [SEP] token. This sequence is embedded with the subword token embedding table; you can see the tokens here.
Sequence of positional embedding: sequentially increasing positions form the initial position of the [CLS] token to the position of the second [SEP] token. This sequence is embedded with the positional embedding table, which has 512 elements.
Sequence of segment embeddings: as many EA tokens as the token length of the first sentence (with [CLS] and [SEP]) followed by as many EB tokens as the token length of the second sentence (with the [SEP]). This sequence is embedded with the segment embedding table, with has 2 elements.
After embedding the three sequences with their respective embedding tables, we have 3 vector sequences, which are added together and used as input to the self-attention layers.
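A short sketch of this with the transformers library (assuming it is installed; the tokenizer builds the token and segment information, and model.embeddings sums the three embeddings):
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tok("The quick brown fox", "jumps over the lazy dog", return_tensors="pt")
print(enc["input_ids"])        # [CLS] sentence A [SEP] sentence B [SEP] as subword IDs
print(enc["token_type_ids"])   # 0 for segment A tokens, 1 for segment B tokens

emb = model.embeddings(input_ids=enc["input_ids"], token_type_ids=enc["token_type_ids"])
print(emb.shape)               # torch.Size([1, seq_len, 768])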
Are the weights in these embedding layers adjusted when fine-tuning the model to a downstream task?
Yes, they are. Normally, all parameters are fine-tuned when fine-tunine a BERT-based model.
Nevertheless, it is also possible to simply use BERT's representations as input to a classification model, without fine-tuning BERT at all.
In this article you can see how these two approaches compare. In general, for BERT, you obtain better results by fine-tuning the whole model. |
H: what does one Shot learning mean? do they only need one image to train for some new class detection?
Being new to deep learning I am somewhat struggling to grasp the idea of one shot learning.
Let us say I have a class to detect which didn't exist in a training dataset such as COCO or ImageNet. Can I train a model for that class using only one image, or must the training set be large, as for YOLO or R-CNNs?
AI: One-Shot Learning refers to the problem when you only have very few or a single sample for some classes in your training dataset. A common application is, for example, face recognition. Here you may have only a single image per person in your dataset. Nevertheless, you'd like your neural net to be able to recognize that person from new images. A good intro is provided by Andrew Ng here.
A popular example are Siamese Nets introduced by Koch et al. The basic idea is to learn a latent representation of the images in your training set. When a new image is presented during inference the net calculates the latent representation of that new image and searches for the image in your training set whose latent representation is most similar to the one of the new image and predicts the corresponding class.
When you do not have any sample of a specific class available this is considered Zero-Shot Learning. However, in this case the neural net requires the dataset to include some auxiliary information for all images. The most popular being the Animals with Attributes (AWA) dataset:
Initial approaches to solve the Zero-Shot Learning problem did so in two steps:
For a given image learn to predict the attributes
Learn to predict the class based on the attributes
For details you may refer to the paper Attribute-Based Classification for Zero-Shot Visual Object Categorization. More recent approaches take another route though.
However, both approaches require, as always in Deep Learning, large datasets to learn. |
H: Training is not stable with extreme class imbalance
I'm dealing with a multi-class classification problem with around 30 categories.
This problem has a severe class imbalance:
Around 300 examples for the least common class.
Around 100k examples for the most common class.
I don't want the classification model to act as a dummy classifier and predict the most common class for most of the examples; for this reason I'm using class_weight='balanced' in my LogisticRegression from sklearn. However, in this case the classes that the algorithm predicts are mostly the less frequent ones. I understand the model somehow overfits them, as it assigns every sample from these classes a very high weight.
On the other hand, if I don't apply the class weights, the model predicts the most common categories.
Is there a way to solve this? Is there a way to ensure the model predicts approximately the same proportion of samples for each category?
AI: There are probably many different strategies but it's a difficult problem when the imbalance is as severe as it is here.
Without any correction the model is likely to ignore the smallest classes, as you noticed. However forcing the class weight as if the data is balanced is certainly too strong a correction. A middle ground would be to resample the training set instances yourself before fitting the model: by trying different ways to undersample the large classes and/or oversample the small classes you should be able to find an optimal tradeoff between the two extremes (use a separate validation set to determine the optimal combination).
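For instance, a minimal sketch of manual undersampling with scikit-learn; the cap of 5000 samples per class is an arbitrary assumption that you would tune on a validation set:
import pandas as pd
from sklearn.utils import resample

def undersample(df, label_col, max_per_class=5000, seed=42):
    """Cap every class at max_per_class rows; smaller classes are kept as-is."""
    parts = []
    for label, group in df.groupby(label_col):
        if len(group) > max_per_class:
            group = resample(group, n_samples=max_per_class, replace=False, random_state=seed)
        parts.append(group)
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle the result
Oversampling the small classes can be done the same way with replace=True, and the two can be combined to explore the tradeoff between the extremes.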
Is there a way to ensure the model predicts approximately the same proportion of samples for each category?
Maybe I misunderstood but this looks like a bad idea: if the true proportions are not equal then the model shouldn't predict equal proportions either. The ideal scenario is for the model to predict the correct label every time, which implies predicting the true proportion for every class.
It might also be useful to analyze the performance in simpler configurations, e.g. by picking a few "average size" classes and observing how well the classifier discriminates between them only. The harder it is for a classifier to predict correctly, the more it relies on basic class proportion since it doesn't know any better. |
H: How to create a confusion matrix for one node of a decision tree?
I am doing past papers for my data science exam and was curious about one of the questions. They ask us to create a confusion matrix by hand for one node of a decision tree.
I understand how to create a confusion matrix for an entire model, but I am unsure how to create one for just a single node. Should the entire first row be 0s other than class A, since the node should always predict class A, or am I missing something?
I have attached the decision tree and the exact wording of the question below. Thanks!
"(iv) Present a confusion matrix for the [leftmost terminal] node."
AI: A confusion matrix compares true vs. predicted labels.
Since only one node is under discussion, there will be only one filled column of "Predicted" values, but all the possible rows of "True" labels.
Let's assume 100 samples fall into the node, and the node is classified as "A".
Below is the confusion matrix: rows are "True" labels and the single filled column ("A") is the "Predicted" class.
$$\begin{array} {|r|r|r|r|r|r|r|}
\hline
  & A  & B & C & D & E & F \\
\hline
A & 62 & - & - & - & - & - \\
\hline
B & 21 & - & - & - & - & - \\
\hline
C & 13 & - & - & - & - & - \\
\hline
D & 0  & - & - & - & - & - \\
\hline
E & 2  & - & - & - & - & - \\
\hline
F & 2  & - & - & - & - & - \\
\hline
\end{array}$$
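If you wanted to verify such a per-node matrix programmatically, a rough sketch with scikit-learn could look like the following; the fitted clf, the data X, y and the chosen leaf_id are assumed to exist:
import numpy as np
from sklearn.metrics import confusion_matrix

# clf is a fitted DecisionTreeClassifier; X, y the data; leaf_id the node of interest.
leaf_of_each_sample = clf.apply(X)      # index of the leaf each sample ends up in
mask = leaf_of_each_sample == leaf_id   # samples that fall into the chosen leaf

# Every sample in a leaf receives the same prediction, so only one column is non-zero.
cm = confusion_matrix(y[mask], clf.predict(X[mask]), labels=clf.classes_)
print(cm)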
H: scikit learn target variable reversed (DecisionTreeClassifier)
I created a Decision Tree Classifier using sklearn, defined the target variable:
#extract features and target variables
x = df.drop(columns="target_column",)
y = df["target_column"]
#save the feature name and target variables
feature_names = x.columns
labels = y.unique()
#split the dataset
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.3, random_state = 42)
Additionally, I checked the count of each of the two classes (Success, Failure) within y, which confirmed that each has the expected count.
Then I fitted my DTClassifier:
clf = DecisionTreeClassifier(
criterion='gini',
splitter='best',
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features=None,
random_state=42,
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
#class_weight="balanced",
presort='deprecated',
ccp_alpha=0.0,
)
clf.fit(x_train, y_train)
The problem becomes apparent at the visualization step: when I plot the tree, each node shows me class = Failure although Failure is the minority class, and vice versa. Further down the line, when plotting the confusion matrix and calculating the performance metrics, it also becomes apparent that the labels are reversed, and I cannot figure out why.
Any ideas where I might need to look for the answer? If more code is necessary to give a feedback I can provide.
Much appreciated.
AI: It might be because of a conflict between the ordering of model.classes_ and the ordering returned by series.unique().
For binary labels the two can differ:
model.classes_   -> array([0, 1])   (sorted order)
series.unique()  -> array([1, 0])   (order of first appearance)
So if you use y.unique() to label the plot or the confusion matrix, the names can end up swapped. Try passing the labels in a consistent order, e.g. np.array([0, 1]) or simply clf.classes_, and see whether the plot and the metrics line up.
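A minimal sketch of keeping the ordering consistent; this assumes the clf, x_test, y_test and feature_names from the question:
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.tree import plot_tree

# Use the classifier's own class order everywhere instead of y.unique().
labels = clf.classes_                    # e.g. array(['Failure', 'Success'], dtype=object)

y_pred = clf.predict(x_test)
cm = confusion_matrix(y_test, y_pred, labels=labels)   # rows/columns follow `labels`

# plot_tree expects class names in the same order as clf.classes_.
plot_tree(clf, feature_names=list(feature_names), class_names=[str(c) for c in labels])
plt.show()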
H: Replace column in a dataframe with another column based on index
I have the following dataframe A:
score content content_preprocessed for_analysis
1 5 a aaa True
2 5 b NaN False
3 1 c ccc True
4 1 d ddd True
I need to replace content_preprocessed column with the following new column from dataframe B:
content_preprocessed
1 aaz
3 ccf
4 ddo
which contains new values for content_preprocessed where for_analysis is set to True. The values for content_preprocessed should remain NaN if the corresponding for_analysis is False.
How do I do that?
AI: Try this:
A.loc[A["for_analysis"] & A.index.isin(B.index), "content_preprocessed"] = B["content_preprocessed"]
The assignment aligns on the index, so only rows 1, 3 and 4 are updated, and the NaN at row 2 (where for_analysis is False) is left untouched.
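As a quick check, a self-contained sketch reproducing the example frames from the question:
import numpy as np
import pandas as pd

A = pd.DataFrame(
    {"score": [5, 5, 1, 1],
     "content": ["a", "b", "c", "d"],
     "content_preprocessed": ["aaa", np.nan, "ccc", "ddd"],
     "for_analysis": [True, False, True, True]},
    index=[1, 2, 3, 4],
)
B = pd.DataFrame({"content_preprocessed": ["aaz", "ccf", "ddo"]}, index=[1, 3, 4])

A.loc[A["for_analysis"] & A.index.isin(B.index), "content_preprocessed"] = B["content_preprocessed"]
print(A["content_preprocessed"])   # 1: aaz, 2: NaN, 3: ccf, 4: ddo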
H: Tree complexity and gamma parameter in xgboost
According to xgboost paper, regularization is given by:
$$\Omega(f) = \gamma T + \lambda || w||^2$$
where $\gamma$ is the complexity of a tree (i.e., number of leaves in the tree).
The parameter gamma in the xgboost library, on the other hand, controls the minimum loss reduction required to make a further split at a node. Hence, is the $\gamma$ in the equation above the same quantity used by the xgboost software package? I could not find any reference to it.
AI: In the paragraph following equation (1):
$T$ is the number of leaves in the tree.
$\gamma$ is a hyperparameter that affects how much regularization occurs on the size (number of leaves) of the tree.
Now it turns out that you can interpret $\gamma$ (at least roughly, see note at bottom) as ([source]):
Minimum loss reduction required to make a further partition on a leaf node of the tree. The larger gamma is, the more conservative the algorithm will be.
You can see that from equation (2), the regularized objective:
$$\mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \sum_k \Omega(f_k),\\
\text{where }\Omega(f)=\gamma T + \frac12 \lambda \|w\|^2.$$
By making the split, you increase $T$ by one, so the penalty increases by $\gamma$, and so your base loss term $l$ needs to decrease by at least $\gamma$ for this to be an overall improvement. Note: Of course, this ignores what happens to the leaf weights $w$ in splitting one node into two. |
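In practice the paper's $\gamma$ and $\lambda$ correspond to the library's gamma and reg_lambda parameters. A minimal sketch follows; the values shown are arbitrary placeholders, not recommendations:
import xgboost as xgb

# gamma      -> the gamma in Omega(f): minimum loss reduction needed to add a leaf (split)
# reg_lambda -> the lambda in Omega(f): L2 penalty on the leaf weights w
model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=6,
    gamma=1.0,        # larger values -> fewer splits -> smaller trees
    reg_lambda=1.0,
    reg_alpha=0.0,    # optional L1 term on leaf weights (not in the equation above)
)
# model.fit(X_train, y_train)  # X_train, y_train assumed to exist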