H: The robustness of the Frobenius and L2,1 norms to outliers
I have a question about the properties of the Frobenius and L$_{2,1}$ norms. Why is the L$_{2,1}$ norm more robust to outliers than the Frobenius norm?
PS: For a matrix $A\in\mathbb{R}^{n\times d}$, it can be easily seen that
$$
\text{Frobenius norm:}\qquad\Vert A\Vert_F=
\left(\sum_{i=1}^{n}\sum_{j=1}^{d}\vert a_{i,j}\vert^2 \right)^{\frac{1}{2}}=\left(\sum_{i=1}^{n}\Vert A(i,:)\Vert_2^2\right)^{\frac{1}{2}},
$$
and
$$
L_{2,1}\,\,norm: \qquad\Vert A\Vert_{2,1}=\sum_{i=1}^{n}\left(\sum_{j=1}^{d}\vert a_{i,j}\vert^2 \right)^{\frac{1}{2}}=\sum_{i=1}^{n}\Vert A(i,:)\Vert_2,$$
where $A(i,:)$ is the $i$-th row of $A$.
I would be very grateful if someone could answer my question.
AI: I have only a couple of hints:
The Frobenius norm, by definition, takes equal account of all entries in the matrix (all rows and columns), and because the entries are squared, a single outlier row contributes quadratically to the (squared) norm.
The $L_{2,1}$ norm is instead a sum of per-row Euclidean norms, so an outlier row contributes only linearly, and outliers in other rows do not (equally) affect the contribution of the current row. |
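As a quick numerical illustration of the hints above — a minimal NumPy sketch with a made-up matrix, comparing the squared Frobenius norm (the quantity typically minimized as a loss) with the $L_{2,1}$ norm:
import numpy as np

A = np.ones((5, 3))        # clean matrix: every row has L2 norm sqrt(3)
A_out = A.copy()
A_out[0] *= 10             # make the first row a 10x outlier

frob_sq = lambda M: np.sum(M ** 2)                      # ||M||_F^2 = sum of squared row norms
l21     = lambda M: np.sum(np.linalg.norm(M, axis=1))   # ||M||_{2,1} = sum of row norms

print(frob_sq(A), frob_sq(A_out))   # 15.0 -> 312.0  (the outlier row's term grew 100x)
print(l21(A), l21(A_out))           # ~8.66 -> ~24.25 (the outlier row's term grew only 10x)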
H: How can I classify specific types of words in a document given I have the full text of the document and the labels
I am working on a project that involves picking out specific kinds of objects from text. The documents I am going through are life sciences and biomedical in nature, and in these documents there are specific biomedical "objects" I want to pick out. The nature and variety of the text objects means I can't use regex or string matching. It has to be some kind of classification.
These text objects can be one word, or multiple words, but they are always in sequence.
An example sentence would be like
During the process of protein synthesis, X was used.
I need to pick out X. Luckily, I have plenty of labeled documents, and plenty of labels to go along with it. So I know a human can pick out these objects. So the challenge now is to get a machine to be able to pick out these types of objects from unseen text. I am working under the assumption that these specific text objects all fall under somewhat similar grammatical and textual context, so given enough labeled data, a machine should be able to learn how to pick out the text object.
Two Main Questions.
How do I label specific words in a document such that some model will understand that, given a sequence of text, the object at position Y is labeled and is what we should be trying to classify?
Does anything I just said make any sense? Is there any research on what I've been talking about, because I've looked around and have not been able to find much.
AI: The task you describe corresponds exactly to Named Entity Recognition (NER). This is a standard task for which there are many available libraries. NER is usually done with sequence labelling models such as Conditional Random Fields. Sequence labelling implies that the data is provided as a sequence which looks like this:
During <features ...> O
the O
process O
of O
protein O
synthesis O
, O
X_token1 B_classX
X_token2 I_classX
X_token3 I_classX
was O
used O
. O
Here I'm using the common BIO format (Begin, Inside, Outside an entity) but there are variants. The model is trained with data annotated in this way, where there can be additional features (very often POS tag and others). Then when fresh text is provided (with the features) the model predicts the BIO tag for every token.
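As a rough illustration of how such a sequence labeller could be set up, here is a minimal sketch using the sklearn-crfsuite package; the feature function, the toy sentence and the tags are invented for illustration (a real system would train on your full annotated corpus and add richer features such as POS tags):
import sklearn_crfsuite

def token_features(sentence, i):
    # Very simple per-token features; real systems add POS tags, gazetteers, etc.
    word = sentence[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
        "prev_word": sentence[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": sentence[i + 1].lower() if i < len(sentence) - 1 else "<EOS>",
    }

# One training sentence with BIO tags (toy example, just to show the data format)
sent = ["During", "the", "process", "of", "protein", "synthesis", ",",
        "X_token1", "X_token2", "was", "used", "."]
tags = ["O", "O", "O", "O", "O", "O", "O", "B_classX", "I_classX", "O", "O", "O"]

X_train = [[token_features(sent, i) for i in range(len(sent))]]
y_train = [tags]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))   # predicted BIO tag for every token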
There has been a lot of research and resources produced for extracting specific entities in the specific context of biomedical data, so you might be interested in exploring these specific resources as well.
Medline and PMC are huge collections of biomedical abstracts/papers.
There are many tools for extracting biomedical annotations based on Medline/PMC data: PubTator, cTakes, SciSpacy, etc. |
H: Is it mandatory to change the dtype='object' to 'category' before label encoding
I have seen some people change the datatype(from object to category) of the feature they want to encode.
AI: It depends on the approach you take for label encoding. If you want to use .cat.codes then you need to convert it into category datatype.
You can also use sklearn's LabelEncoder, which handles this inherently because it can transform non-numerical labels (as long as they are hashable and comparable) to numerical labels. |
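A small sketch of both options mentioned above, with made-up column values:
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"color": ["red", "blue", "red", "green"]})  # dtype: object

# Option 1: requires converting to the category dtype first
df["color_cat"] = df["color"].astype("category").cat.codes

# Option 2: LabelEncoder works directly on the object column
le = LabelEncoder()
df["color_le"] = le.fit_transform(df["color"])

print(df)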
H: Neural Network Optimization steps order
I have a very basic question on the optimization algorithm. When I'm adjusting weights and biases in a NN, should I:
Forward propagate and backpropagate to compute the gradient descent (GD) update for each batch once, and then repeat for iterations_number times,
or
Forward propagate and backpropagate to compute the gradient descent (GD) update on one batch for iterations_number times, and then continue with the next batch.
AI: One iteration means that you do one forward pass and one backward pass for a single batch containing batch_size examples. Then you move on to the next batch. Also see this post on SE SO.
Note that this is not identical to an epoch. An epoch is only completed when all examples of your dataset have been passed through your network. And as long as your batch size is less than the number of datapoints you will need $\frac{n}{\text{batch size}}$ (with $n$ being the sample size) iterations to complete an epoch. Also see this answer on SE DS. |
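To make the loop order described above concrete, here is a minimal runnable sketch (plain NumPy linear regression on made-up data, not your network): one forward and backward pass per batch, then move to the next batch; a full pass over all batches is one epoch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr, num_epochs, batch_size = 0.1, 5, 10

for epoch in range(num_epochs):                      # one epoch = full pass over the data
    for start in range(0, len(X), batch_size):       # one iteration per batch
        xb, yb = X[start:start+batch_size], y[start:start+batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)    # forward + backward for THIS batch only
        w -= lr * grad                               # update, then move on to the next batch
    print(f"epoch {epoch}: loss {np.mean((X @ w - y) ** 2):.4f}")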
H: Representing the architecture of a deep CNN
Suppose I am feeding a $60\times60$ RGB image as the input to deep CNN with the first layer created using the following Keras code model.add(Conv2D(64, (3, 3), input_shape=(60, 60, 3))).
Will 64 filters be created for each channel (red, green and blue) of the image?
Is the following representation of the network correct? If 64 filters are created for each channel, then should I write "3X64@58X58"?
AI: Will 64 filters be created for each channel (red, green and blue) of the image?
No. Rather, each filter will also have 3 channels, i.e. each filter has shape (3×3×3).
It works something like this -
Borrowed from- http://datahacker.rs/convolution-rgb-image/
More here-
https://cs231n.github.io/convolutional-networks/. |
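You can verify this directly in Keras for the layer in the question: the layer learns 64 filters of shape 3×3×3 (plus one bias per filter), and the output feature map is 58×58×64, so the representation would usually be written "64@58×58" rather than "3×64@58×58". A minimal check:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), input_shape=(60, 60, 3))
])

kernel, bias = model.layers[0].get_weights()
print(kernel.shape)        # (3, 3, 3, 64): height, width, input channels, filters
print(bias.shape)          # (64,)
print(model.output_shape)  # (None, 58, 58, 64)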
H: Can a trained recognition model be used to generate a sample?
Suppose we have trained a cat classification network. It takes in an image (as a vector) x and returns $\hat{y}\in(0,1)$. The loss function is the typical cross entropy function. Shouldn't it be possible to now perform gradient descent on the space of images to obtain an example of a "cat picture" that our network really thinks is a cat?
I expect that this process would give us something very ugly, and not actually cat-like, but I am just curious if this works.
AI: Generative models can. Among others, you can use GANs, auto-encoders, VAEs, etc. As far as I know, discriminative approaches cannot generate a sample.
Hope it helps. |
H: why do transformers mask at every layer instead of just at the input layer?
Working through the Annotated Transformer, I see that every layer in both the encoder (padding mask) and the decoder (padding + future-positions mask) gets masked. Why couldn't it be simplified to just one mask at the first layer of the encoder and decoder?
AI: If the masking were only applied in the first layer, the self-attention in the subsequent layers would bring to each position information from future tokens.
Let's break it down with numbers:
At layer $i$, if causal masking is applied, the output at position $t$ contains information about layer $i-1$ only at positions $1..t$, that is, $L_{i,t} = f_i(L_{i-1,1},...,L_{i-1,t})$.
If no causal masking is applied, then the output at position $t$ contains information about layer $i-1$ at all positions in the sequence of length $T$, that is, positions $1..T$: $L_{i,t} = f_i(L_{i-1,1},...,L_{i-1,T})$.
If causal masking is applied at layer 1 (the first layer) but not at layer 2 or 3, we obtain that for position t at layer 3 we would have: $L_{3,t} = f_3(L_{2,1},...,L_{2,T}) = f_3(L_{2,1},...,f_1(L_{1,1},...,L_{1,T}))$, which means that position $t$ contains information from future tokens, as $T > t$.
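For reference, this is roughly what applying the (same) causal mask inside every attention layer looks like in practice — a minimal PyTorch-style sketch, not the exact code of the Annotated Transformer:
import torch

T = 5  # sequence length
# Upper-triangular mask: position t may attend only to positions <= t
causal_mask = torch.triu(torch.ones(T, T), diagonal=1).bool()

scores = torch.randn(T, T)                       # attention scores for one head
scores = scores.masked_fill(causal_mask, float("-inf"))
attn = torch.softmax(scores, dim=-1)             # each row only mixes past/current positions

# The same mask must be passed to *every* decoder layer's self-attention;
# otherwise layer i+1 would re-mix information across all positions.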
Note: The original answer was wrong and was completely edited. To check the original answer, refer to the post timeline. |
H: Import image into Tensorflow model
I trained a simple Handwriting model with Tensorflow and MNIST:
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28,28)),
keras.layers.Dense(100,activation = 'relu'),
keras.layers.Dense(100,activation = 'sigmoid'),
keras.layers.Dense(10,activation = 'sigmoid'),
])
As you can see, I flattened my first layer into a 784-pixel one-dimensional array.
Then I wrote a number on paper:
After changing its scale to 28*28 with an image editor (GIMP), I loaded my image into my code:
img_width, img_height = 28, 28
img = image.load_img('rgb_seven.jpeg', target_size=(img_width, img_height))
x = image.img_to_array(img)
This is x result:
array([[[167., 170., 179.],
[168., 171., 180.],
[168., 171., 180.],
...,
[174., 175., 180.],
[174., 175., 180.],
[167., 170., 175.]],
[[173., 176., 185.],
[172., 175., 184.],
[166., 169., 178.],
and :
images = np.vstack(x)
images
And it's images result is :
array([[167., 170., 179.],
[168., 171., 180.],
[168., 171., 180.],
...,
[166., 173., 179.],
[166., 173., 179.],
[166., 173., 179.]], dtype=float32)
Before predicting I have to flatten my images, so I do this:
x_images_flattened = images.reshape(len(images),28*28)
But I got error:
ValueError Traceback (most recent call last)
<ipython-input-30-db6c299f19c6> in <module>
3 images = np.vstack(x)
4 # images
----> 5 x_images_flattened = images.reshape(len(images),28*28)
ValueError: cannot reshape array of size 2352 into shape (784,784)
Why did I get cannot reshape array of size 2352 into shape (784,784)? My image has 28*28 size.
And how can I make a prediction with it?
AI: By default, the image is loaded as a color image, i.e. 784*3 = 2352 values.
Load the image as grayscale instead, i.e. use the parameter color_mode="grayscale".
There is no need for np.vstack(); simply reshape to (-1, 784). |
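Putting the advice above together, the prediction step might look roughly like this (a sketch that assumes the trained model and file path from the question; since that model already has a Flatten layer expecting 28×28 inputs, we only add a batch dimension here — if your first layer instead took a flat 784-vector, you would reshape to (-1, 784) as suggested). Note the pixel values should also be divided by 255, as in training:
import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('rgb_seven.jpeg', color_mode="grayscale", target_size=(28, 28))
x = image.img_to_array(img)            # shape (28, 28, 1)
x = x.astype('float32') / 255          # same scaling as the training data
x = x.reshape(-1, 28, 28)              # shape (1, 28, 28): a batch of one image

probs = model.predict(x)               # shape (1, 10)
print(np.argmax(probs))                # predicted digit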
H: What’s the most suitable programming language for AI development?
For the past couple of years I’ve been learning how to use Python to script. But I would like to start getting into scripting more things like computers and AI. So, with that said, and please no hate, what in your opinion would be best to script things like that? For example - JavaScript or c#/c+/c++
AI: Disclaimer: this is massively subjective topic, and below is just my opinion.
I would say it depends on your focus. You could do work in these areas with any major programming language, but some have better suited capabilities and/or better support from a community.
My biased and very simple answer would be to use Python for basically everything, and learn C++ for times when performance is important. And then, if you get really serious, you'll need to learn CUDA (extensions to C++ to programme directly for Nvidia GPUs). If you are proficient at Python and C++, there isn't really anything you couldn't do.
In summary
If you want to do research, you could probably do everything with just Python. There are libraries such as NumPy/PyTorch/Tensorflow which do all the heavy lifting for you.
If you want to go the way of robotics and embedded hardware/software, you'll likely need C/C++.
If you want to make models to deploy in a browser, perhaps Javascript would be useful. |
H: Using the whole dataset for testing (not validation) in case of small datasets
For an object detection task, I created a small dataset to train an object detector. The class frequency is more or less balanced; however, I defined some additional attributes with environmental conditions for each image, which results in a rather uneven distribution of classes depending on the viewed attribute (e.g. class X occurring with attributes A and B).
I used a typical training/validation split and the loss curve lets me conclude that no over-fitting is occurring. I am aware that in general for testing unseen images with a realistic feature distribution should be used, however due to the small size of the dataset, splitting off additional 10 or 20 % for testing or randomly drawing examples would result in certain combinations of classes and attributes not or only barely occurring in the test dataset and manually selecting examples would be very time-consuming.
I am therefore wondering whether it is reasonable to use the whole dataset (train+val) for testing and for calculating performance metrics, as a compromise, since I see no signs of over-fitting.
AI: Considering that you have not used a cross-validation strategy, you could try to use the LOOCV (Leave One Out Cross Validation) strategy, so you have several splits (as many as samples considered in your training dataset) and still leave some samples as a final never-seen-before validation set.
This is costly from a performance perspective, but interesting for small datasets as in your case. Another option, in case the dataset is not really small, is using k-fold cross-validation (with k being 5, 10, ...).
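For a plain classifier, LOOCV is only a few lines with scikit-learn — a sketch with a built-in dataset and a random forest as stand-ins for your data and model (for an object detector you would have to write the equivalent loop around your own training script):
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)                   # stand-in for your small dataset
model = RandomForestClassifier(random_state=42)     # stand-in for your own model
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(scores.mean())    # average accuracy over all leave-one-out splits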
A post with practical info can be found here and a related topic discussion can be found here |
H: What is the difference between TDNN and CNN?
I read about time delay neural network (TDNN) and I am not sure I understood it. From what I read it seems that tdnn works just like one dimensional convolutional neural network (CNN).
What are the differences between the architectures, if they exist?
AI: I found the answer - there is no difference.
According to the paper "Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks": "Time Delay Neural Networks (TDNNs), also known as one dimensional Convolutional Neural Networks (1-d CNNs)..." |
H: Using Palmer Penguins Dataset Instead of Iris Flower Dataset
I have been trying to replace the Iris Flower dataset with the Palmer Penguin dataset for a neural network tutorial. I am using the tutorial at https://www.kaggle.com/antmarakis/another-neural-network-from-scratch
The Palmer Penguin dataset should be a good replacement for the Iris Flower dataset because they both have 4 input variables and three species for the output. So I removed the rows with missing data from the penguin dataset and reduced it to 50 rows for each class to resemble the Iris Flower dataset. Unfortunately the training, validation, and testing accuracy are not very good and I cannot figure out how to improve it. I have tried removing the body mass in grams since it has far larger values than the other measurements in millimeters. I cannot get good predictions for data rows not used in the training.
Here is my modified code for the neural network:
import numpy as np
import pandas as pd
penguins = pd.read_csv("data/penguins_size_no_missing_extracted.csv")
penguins = penguins.sample(frac=1).reset_index(drop=True) # Shuffle
X = penguins[['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm']]
X = np.array(X)
print(X[:5])
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder(sparse=False)
Y = penguins.species
Y = one_hot_encoder.fit_transform(np.array(Y).reshape(-1, 1))
print(Y[:5])
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.15)
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.1)
def NeuralNetwork(X_train, Y_train, X_val=None, Y_val=None, epochs=10, nodes=[], lr=0.15):
    hidden_layers = len(nodes) - 1
    weights = InitializeWeights(nodes)
    for epoch in range(1, epochs+1):
        weights = Train(X_train, Y_train, lr, weights)
        if(epoch % 20 == 0):
            print("Epoch {}".format(epoch))
            print("Training Accuracy: {}".format(Accuracy(X_train, Y_train, weights)))
            if X_val.any():
                print("Validation Accuracy: {}".format(Accuracy(X_val, Y_val, weights)))
            print()
    return weights
def InitializeWeights(nodes):
    """Initialize weights with random values in [-1, 1] (including bias)"""
    layers, weights = len(nodes), []
    for i in range(1, layers):
        w = [[np.random.uniform(-1, 1) for k in range(nodes[i-1] + 1)]
             for j in range(nodes[i])]
        weights.append(np.matrix(w))
    return weights
def ForwardPropagation(x, weights, layers):
    activations, layer_input = [x], x
    for j in range(layers):
        activation = Sigmoid(np.dot(layer_input, weights[j].T))
        activations.append(activation)
        layer_input = np.append(1, activation)  # Augment with bias
    return activations
def BackPropagation(y, activations, weights, layers):
    outputFinal = activations[-1]
    error = np.matrix(y - outputFinal)  # Error at output
    for j in range(layers, 0, -1):
        currActivation = activations[j]
        if(j > 1):
            # Augment previous activation
            prevActivation = np.append(1, activations[j-1])
        else:
            # First hidden layer, prevActivation is input (without bias)
            prevActivation = activations[0]
        delta = np.multiply(error, SigmoidDerivative(currActivation))
        weights[j-1] += lr * np.multiply(delta.T, prevActivation)
        w = np.delete(weights[j-1], [0], axis=1)  # Remove bias from weights
        error = np.dot(delta, w)  # Calculate error for current layer
    return weights
def Train(X, Y, lr, weights):
    layers = len(weights)
    for i in range(len(X)):
        x, y = X[i], Y[i]
        x = np.matrix(np.append(1, x))  # Augment feature vector
        activations = ForwardPropagation(x, weights, layers)
        weights = BackPropagation(y, activations, weights, layers)
    return weights
def Sigmoid(x):
    return 1 / (1 + np.exp(-x))

def SigmoidDerivative(x):
    return np.multiply(x, 1-x)
def Predict(item, weights):
    layers = len(weights)
    item = np.append(1, item)  # Augment feature vector
    ##_Forward Propagation_##
    activations = ForwardPropagation(item, weights, layers)
    outputFinal = activations[-1].A1
    index = FindMaxActivation(outputFinal)
    # Initialize prediction vector to zeros
    y = [0 for i in range(len(outputFinal))]
    y[index] = 1  # Set guessed class to 1
    return y  # Return prediction vector
def FindMaxActivation(output):
    """Find max activation in output"""
    m, index = output[0], 0
    for i in range(1, len(output)):
        if(output[i] > m):
            m, index = output[i], i
    return index
def Accuracy(X, Y, weights):
    """Run set through network, find overall accuracy"""
    correct = 0
    for i in range(len(X)):
        x, y = X[i], list(Y[i])
        guess = Predict(x, weights)
        if(y == guess):
            # Guessed correctly
            correct += 1
    return correct / len(X)
f = len(X[0]) # Number of features
o = len(Y[0]) # Number of outputs / classes
layers = [f, 4, 8, o] # Number of nodes in layers
lr, epochs = 0.15, 100
weights = NeuralNetwork(X_train, Y_train, X_val, Y_val, epochs=epochs, nodes=layers, lr=lr);
print("Testing Accuracy: {}".format(Accuracy(X_test, Y_test, weights)))
print()
# Make predictions
x = [41.5,18.5,201]
guess = Predict(x, weights)
print("Prediction: ", end='')
print(guess)
x = [50.2,18.7,198]
guess = Predict(x, weights)
print("Prediction: ", end='')
print(guess)
x = [49.9,16.1,213]
guess = Predict(x, weights)
print("Prediction: ", end='')
print(guess)
The output is:
[[ 59.6 17. 230. ]
[ 49. 19.5 210. ]
[ 45.3 13.7 210. ]
[ 43.2 16.6 187. ]
[ 45.2 15.8 215. ]]
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]]
Epoch 20
Training Accuracy: 0.3508771929824561
Validation Accuracy: 0.38461538461538464
Epoch 40
Training Accuracy: 0.3508771929824561
Validation Accuracy: 0.38461538461538464
Epoch 60
Training Accuracy: 0.3508771929824561
Validation Accuracy: 0.38461538461538464
Epoch 80
Training Accuracy: 0.32456140350877194
Validation Accuracy: 0.23076923076923078
Epoch 100
Training Accuracy: 0.32456140350877194
Validation Accuracy: 0.23076923076923078
Testing Accuracy: 0.43478260869565216
Prediction: [0, 1, 0]
Prediction: [0, 1, 0]
Prediction: [0, 1, 0]
The CSV file has the following data:
species,island,culmen_length_mm,culmen_depth_mm,flipper_length_mm,body_mass_g,sex
Adelie,Torgersen,39.5,17.4,186,3800,FEMALE
Adelie,Torgersen,40.3,18,195,3250,FEMALE
Adelie,Torgersen,36.7,19.3,193,3450,FEMALE
Adelie,Torgersen,39.3,20.6,190,3650,MALE
Adelie,Torgersen,38.9,17.8,181,3625,FEMALE
Adelie,Torgersen,39.2,19.6,195,4675,MALE
Adelie,Torgersen,41.1,17.6,182,3200,FEMALE
Adelie,Torgersen,38.6,21.2,191,3800,MALE
Adelie,Torgersen,34.6,21.1,198,4400,MALE
Adelie,Torgersen,36.6,17.8,185,3700,FEMALE
Adelie,Torgersen,38.7,19,195,3450,FEMALE
Adelie,Torgersen,42.5,20.7,197,4500,MALE
Adelie,Torgersen,34.4,18.4,184,3325,FEMALE
Adelie,Torgersen,46,21.5,194,4200,MALE
Adelie,Biscoe,37.8,18.3,174,3400,FEMALE
Adelie,Biscoe,37.7,18.7,180,3600,MALE
Adelie,Biscoe,35.9,19.2,189,3800,FEMALE
Adelie,Biscoe,38.2,18.1,185,3950,MALE
Adelie,Biscoe,38.8,17.2,180,3800,MALE
Adelie,Biscoe,35.3,18.9,187,3800,FEMALE
Adelie,Biscoe,40.6,18.6,183,3550,MALE
Adelie,Biscoe,40.5,17.9,187,3200,FEMALE
Adelie,Biscoe,37.9,18.6,172,3150,FEMALE
Adelie,Biscoe,40.5,18.9,180,3950,MALE
Adelie,Dream,39.5,16.7,178,3250,FEMALE
Adelie,Dream,37.2,18.1,178,3900,MALE
Adelie,Dream,39.5,17.8,188,3300,FEMALE
Adelie,Dream,40.9,18.9,184,3900,MALE
Adelie,Dream,36.4,17,195,3325,FEMALE
Adelie,Dream,39.2,21.1,196,4150,MALE
Adelie,Dream,38.8,20,190,3950,MALE
Adelie,Dream,42.2,18.5,180,3550,FEMALE
Adelie,Dream,37.6,19.3,181,3300,FEMALE
Adelie,Dream,39.8,19.1,184,4650,MALE
Adelie,Dream,36.5,18,182,3150,FEMALE
Adelie,Dream,40.8,18.4,195,3900,MALE
Adelie,Dream,36,18.5,186,3100,FEMALE
Adelie,Dream,44.1,19.7,196,4400,MALE
Adelie,Dream,37,16.9,185,3000,FEMALE
Adelie,Dream,39.6,18.8,190,4600,MALE
Adelie,Dream,41.1,19,182,3425,MALE
Adelie,Dream,36,17.9,190,3450,FEMALE
Adelie,Dream,42.3,21.2,191,4150,MALE
Adelie,Biscoe,39.6,17.7,186,3500,FEMALE
Adelie,Biscoe,40.1,18.9,188,4300,MALE
Adelie,Biscoe,35,17.9,190,3450,FEMALE
Adelie,Biscoe,42,19.5,200,4050,MALE
Adelie,Biscoe,34.5,18.1,187,2900,FEMALE
Adelie,Biscoe,41.4,18.6,191,3700,MALE
Adelie,Biscoe,39,17.5,186,3550,FEMALE
Chinstrap,Dream,50,19.5,196,3900,MALE
Chinstrap,Dream,51.3,19.2,193,3650,MALE
Chinstrap,Dream,45.4,18.7,188,3525,FEMALE
Chinstrap,Dream,52.7,19.8,197,3725,MALE
Chinstrap,Dream,45.2,17.8,198,3950,FEMALE
Chinstrap,Dream,46.1,18.2,178,3250,FEMALE
Chinstrap,Dream,51.3,18.2,197,3750,MALE
Chinstrap,Dream,46,18.9,195,4150,FEMALE
Chinstrap,Dream,51.3,19.9,198,3700,MALE
Chinstrap,Dream,46.6,17.8,193,3800,FEMALE
Chinstrap,Dream,51.7,20.3,194,3775,MALE
Chinstrap,Dream,47,17.3,185,3700,FEMALE
Chinstrap,Dream,52,18.1,201,4050,MALE
Chinstrap,Dream,45.9,17.1,190,3575,FEMALE
Chinstrap,Dream,50.5,19.6,201,4050,MALE
Chinstrap,Dream,50.3,20,197,3300,MALE
Chinstrap,Dream,58,17.8,181,3700,FEMALE
Chinstrap,Dream,46.4,18.6,190,3450,FEMALE
Chinstrap,Dream,49.2,18.2,195,4400,MALE
Chinstrap,Dream,42.4,17.3,181,3600,FEMALE
Chinstrap,Dream,48.5,17.5,191,3400,MALE
Chinstrap,Dream,43.2,16.6,187,2900,FEMALE
Chinstrap,Dream,50.6,19.4,193,3800,MALE
Chinstrap,Dream,46.7,17.9,195,3300,FEMALE
Chinstrap,Dream,52,19,197,4150,MALE
Chinstrap,Dream,50.5,18.4,200,3400,FEMALE
Chinstrap,Dream,49.5,19,200,3800,MALE
Chinstrap,Dream,46.4,17.8,191,3700,FEMALE
Chinstrap,Dream,52.8,20,205,4550,MALE
Chinstrap,Dream,40.9,16.6,187,3200,FEMALE
Chinstrap,Dream,54.2,20.8,201,4300,MALE
Chinstrap,Dream,42.5,16.7,187,3350,FEMALE
Chinstrap,Dream,51,18.8,203,4100,MALE
Chinstrap,Dream,49.7,18.6,195,3600,MALE
Chinstrap,Dream,47.5,16.8,199,3900,FEMALE
Chinstrap,Dream,47.6,18.3,195,3850,FEMALE
Chinstrap,Dream,52,20.7,210,4800,MALE
Chinstrap,Dream,46.9,16.6,192,2700,FEMALE
Chinstrap,Dream,53.5,19.9,205,4500,MALE
Chinstrap,Dream,49,19.5,210,3950,MALE
Chinstrap,Dream,46.2,17.5,187,3650,FEMALE
Chinstrap,Dream,50.9,19.1,196,3550,MALE
Chinstrap,Dream,45.5,17,196,3500,FEMALE
Chinstrap,Dream,50.9,17.9,196,3675,FEMALE
Chinstrap,Dream,50.8,18.5,201,4450,MALE
Chinstrap,Dream,50.1,17.9,190,3400,FEMALE
Chinstrap,Dream,49,19.6,212,4300,MALE
Chinstrap,Dream,51.5,18.7,187,3250,MALE
Chinstrap,Dream,49.8,17.3,198,3675,FEMALE
Chinstrap,Dream,48.1,16.4,199,3325,FEMALE
Gentoo,Biscoe,50,16.3,230,5700,MALE
Gentoo,Biscoe,48.7,14.1,210,4450,FEMALE
Gentoo,Biscoe,50,15.2,218,5700,MALE
Gentoo,Biscoe,47.6,14.5,215,5400,MALE
Gentoo,Biscoe,46.5,13.5,210,4550,FEMALE
Gentoo,Biscoe,45.4,14.6,211,4800,FEMALE
Gentoo,Biscoe,46.7,15.3,219,5200,MALE
Gentoo,Biscoe,43.3,13.4,209,4400,FEMALE
Gentoo,Biscoe,46.8,15.4,215,5150,MALE
Gentoo,Biscoe,40.9,13.7,214,4650,FEMALE
Gentoo,Biscoe,49,16.1,216,5550,MALE
Gentoo,Biscoe,45.5,13.7,214,4650,FEMALE
Gentoo,Biscoe,48.4,14.6,213,5850,MALE
Gentoo,Biscoe,45.8,14.6,210,4200,FEMALE
Gentoo,Biscoe,49.3,15.7,217,5850,MALE
Gentoo,Biscoe,42,13.5,210,4150,FEMALE
Gentoo,Biscoe,49.2,15.2,221,6300,MALE
Gentoo,Biscoe,46.2,14.5,209,4800,FEMALE
Gentoo,Biscoe,48.7,15.1,222,5350,MALE
Gentoo,Biscoe,50.2,14.3,218,5700,MALE
Gentoo,Biscoe,45.1,14.5,215,5000,FEMALE
Gentoo,Biscoe,46.5,14.5,213,4400,FEMALE
Gentoo,Biscoe,46.3,15.8,215,5050,MALE
Gentoo,Biscoe,42.9,13.1,215,5000,FEMALE
Gentoo,Biscoe,46.1,15.1,215,5100,MALE
Gentoo,Biscoe,47.8,15,215,5650,MALE
Gentoo,Biscoe,48.2,14.3,210,4600,FEMALE
Gentoo,Biscoe,50,15.3,220,5550,MALE
Gentoo,Biscoe,47.3,15.3,222,5250,MALE
Gentoo,Biscoe,42.8,14.2,209,4700,FEMALE
Gentoo,Biscoe,45.1,14.5,207,5050,FEMALE
Gentoo,Biscoe,59.6,17,230,6050,MALE
Gentoo,Biscoe,49.1,14.8,220,5150,FEMALE
Gentoo,Biscoe,48.4,16.3,220,5400,MALE
Gentoo,Biscoe,42.6,13.7,213,4950,FEMALE
Gentoo,Biscoe,44.4,17.3,219,5250,MALE
Gentoo,Biscoe,44,13.6,208,4350,FEMALE
Gentoo,Biscoe,48.7,15.7,208,5350,MALE
Gentoo,Biscoe,42.7,13.7,208,3950,FEMALE
Gentoo,Biscoe,49.6,16,225,5700,MALE
Gentoo,Biscoe,45.3,13.7,210,4300,FEMALE
Gentoo,Biscoe,49.6,15,216,4750,MALE
Gentoo,Biscoe,50.5,15.9,222,5550,MALE
Gentoo,Biscoe,43.6,13.9,217,4900,FEMALE
Gentoo,Biscoe,45.5,13.9,210,4200,FEMALE
Gentoo,Biscoe,50.5,15.9,225,5400,MALE
Gentoo,Biscoe,44.9,13.3,213,5100,FEMALE
Gentoo,Biscoe,45.2,15.8,215,5300,MALE
Gentoo,Biscoe,46.6,14.2,210,4850,FEMALE
Gentoo,Biscoe,48.5,14.1,220,5300,MALE
AI: Have a look at the weights of your model at each step and the gradients that are being applied. In many cases the gradients are of order 10^-10 or smaller, meaning that the weights of the model basically do not change at all. The reason for this is that a neural network is sensitive to the scale of the data. It is therefore often good practice to scale your input variables, e.g. on a 0-1 scale. Simply dividing each column in the input by their max value using X_train /= X_train.max(axis=0) allows me to reach 90%+ training accuracy after 100 epochs (depending on the initialization of the weights). You can take the scaling even further by using something like MinMaxScaler or StandardScaler from scikit-learn. |
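A sketch of the scikit-learn variant mentioned above, fitted on the training data only and then applied to the other splits (the variable names X_train, X_val and X_test are the ones defined in the question's code):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)   # fit the 0-1 scaling on the training set only
X_val = scaler.transform(X_val)           # ...and reuse it for the validation and test data
X_test = scaler.transform(X_test)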
H: Why fitting/training a model can be considered as learning?
I was looking around and I can't find a good answer. I just want to know why it can be considered learning and is not just "calibration" or "parametrization".
I feel the word "learning" is overqualified for the things the models do.
Thanks in advance.
AI: When a baby tunes the connections between neurons in his brain until he can recognize a dog and say "this is a dog!" we say that he learned.
Why wouldn't we say the same about neural networks (or other models) that happened to be inside a computer and not in a human brain? |
H: Which algorithm should I choose and why?
My friend was reading a textbook and had this question:
Suppose that you observe $(X_1,Y_1),...,(X_{100},Y_{100})$, which you assume to be i.i.d. copies of a random pair $(X,Y)$ taking values in $\mathbb{R}^2 \times \{1,2\}$. You plot the data and see the following:
where black circles represent those $X_i$ with $Y_i=1$ and the red triangles represent those $X_i$ with $Y_i=2$. A practitioner tells you that their misclassification costs are equal, $c_1 = c_2 = 1$, and
would like advice on which algorithm to use for prediction. Given the options:
Linear discriminant analysis;
K-Nearest neighbours with $K=5$
K-Nearest neighbours with $K=90$.
What would be the best algorithm for this? I think it should be $5$, as the bigger the $K$, the worse the accuracy gets? What would be your choice and why?
AI: You can choose the optimal method using cross-validation. If your sample size is relatively small, use leave-one-out cross-validation... I would not be surprised if $K = 5$ worked well. Linear discriminant analysis (LDA) will not work here because it implies linear decision boundaries. Unless you enlarge the set of predictors with non-linear transformations.
Also, the picture above is a classic case where support vector machines (SVM) with a Gaussian kernel could be of use. R has a friendly implementation of SVM in the "kernlab" package. |
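A hedged sketch of how the cross-validation comparison suggested above could be run in Python with scikit-learn (rather than R); the data here is a random placeholder with a non-linear boundary, since the actual 100 observations are only shown as a plot:
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)                       # placeholder data
X = rng.normal(size=(100, 2))
y = (np.linalg.norm(X, axis=1) > 1).astype(int) + 1  # labels 1 and 2, non-linear boundary

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("5-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("90-NN", KNeighborsClassifier(n_neighbors=90))]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(name, round(acc, 3))   # leave-one-out accuracy for each candidate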
H: What does IBA mean in imblearn classification report?
imblearn is a python library for handling imbalanced data. A code for generating classification report is given below.
import numpy as np
from imblearn.metrics import classification_report_imbalanced
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report_imbalanced(y_true, y_pred,target_names=target_names))
The output for this is as follow
pre rec spe f1 geo iba sup
class 0 0.50 1.00 0.75 0.67 0.87 0.77 1
class 1 0.00 0.00 0.75 0.00 0.00 0.00 1
class 2 1.00 0.67 1.00 0.80 0.82 0.64 3
avg/total 0.70 0.60 0.90 0.61 0.66 0.54 5
What is the meaning of iba in this classification report. Here pre stands for precision, rec stands for recall, spe stands for specificity, f1 stands for f1 measure, and geo stands for geometric mean. All these are metrics for measuring performance of imbalanced classes.
AI: If you look at the imblearn documentation for classification_report_imbalanced, you can see that iba stands for "index balanced accuracy". For more information on what the index balanced accuracy is and its value in cases of imbalanced datasets, have a look at the original paper. |
H: Test set larger than train set
There is a two-class dataset with 1121 samples in total: 230 from one class and 891 from the other. The training set is chosen as 230+230=460 samples from both classes, and the test set is the entire 1121 samples.
1) Accuracy values are less than 0.50, and some are as low as 0.18 and 0.20. Does this make sense? For a two-class outcome, tossing a coin would give a better chance of an accurate prediction. Can there be an accuracy of less than 0.50 for a two-class prediction?
2) When both the training and test sets are chosen from the 460 class-balanced rows and k-fold cross-validation (1:10) is used, the accuracy levels are considerably higher, up to 0.90.
3) Can the difference between the results be because the test set is much larger than the training set?
AI: Do I understand correctly that the test data is the whole dataset, whereas the training data is only a subset of it? Training and test data must not overlap. The test set is a measure of quality on unseen, unfamiliar data.
In the case of imbalanced data and two-class classification, the naive classifier that always predicts the most probable class achieves an accuracy of 891/1121. Any sensible model should beat this score.
To handle imbalanced data you can use several approaches: undersampling the majority class or oversampling the minority class (https://machinelearningmastery.com/random-oversampling-and-undersampling-for-imbalanced-classification/). Also, many classifiers have a class-weights attribute, which can be set to "balanced"; this penalizes mistakes on the minority class more heavily.
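For example, with scikit-learn the class weighting is just a constructor argument — a sketch on synthetic data with roughly the same 80/20 imbalance as in the question (substitute your own classifier and data):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalance similar to the question: ~80% vs ~20%
X, y = make_classification(n_samples=1121, weights=[0.8, 0.2], random_state=0)

# "balanced" makes mistakes on the rare class count proportionally more
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(clf.score(X, y))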
In the case of imbalanced data, measure not only accuracy but precision and recall as well (https://en.wikipedia.org/wiki/Precision_and_recall). |
H: How to measure a almost (?) ordinal classification?
I have a model where I predict classes to define instructions for a trading robot. The classes are -2, -1, 1 and 2 (strong sell, light sell, light buy and strong buy) and I'm using a simple confusion matrix to assess the performance of the model; however, I would like to know if there are other ways that take into account the "distance" between the classes.
I've found some content about ordinal regression, but I already made my classification in other way (not sure if this content applies to me though).
Any suggestions?
AI: You could just calculate the mean absolute error.
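With your labels this is essentially a one-liner; here is a sketch with made-up predictions (the labels are first mapped to ordinal ranks so that adjacent classes differ by exactly 1, since there is no class 0 in your scheme):
from sklearn.metrics import mean_absolute_error

# Map the labels to ordinal ranks first, so adjacent classes differ by exactly 1
rank = {-2: 0, -1: 1, 1: 2, 2: 3}
y_true = [-2, -1, 1, 2, 2]          # made-up examples
y_pred = [-2, 1, 1, 2, -1]

print(mean_absolute_error([rank[c] for c in y_true],
                          [rank[c] for c in y_pred]))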
However, this quality measure (and any other measure taking into account the distance) would not be consistent with the loss function you used while training your classifier, I guess. So it would not be fair to your algorithm: it is as if your professor changed the grading system after you submitted your test.
Instead of changing the quality measure, you might want to reconsider your approach. The suggestion to use ordinal regression sounds good. If the cost of misestimation between neighboring classes is similar, you could even use plain regression and then round. |
H: How to access GPT-3, BERT or alike?
I am interested in accessing NLP models mentioned in scientific papers, to replicate some results and experiment.
But I only see waiting lists https://openai.com/blog/openai-api/ and licenses granted in large commercial deals https://www.theverge.com/2020/9/22/21451283/microsoft-openai-gpt-3-exclusive-license-ai-language-research .
How can a researcher not affiliated to a university or (large) tech company obtain access so to replicate experiments of scientific papers ?
Which alternatives would you suggest to leverage on pre-trained data sets ?
AI: OpenAI has not released the weights of GPT-3, so you have to access it through their API. However, all other popular models have been released and are easily accessible. This includes GPT-2, BERT, RoBERTa, Electra, etc.
The easiest way to access them all in a unified way is by means of the Transformers Python library by Huggingface. This library supports using the models from either Tensorflow or Pytorch, so it is very flexible.
The library has a repository with all the mentioned models, which are downloaded automatically the first time you use them from the code. This repository is called the "model hub", and you can browse its contents here. |
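Getting a pre-trained BERT running locally is then only a few lines — a minimal sketch using the Transformers pipeline API (the weights are downloaded automatically the first time; the example sentence is made up):
from transformers import pipeline

# Masked-language-model head on top of pre-trained BERT
unmasker = pipeline("fill-mask", model="bert-base-uncased")
print(unmasker("The goal of data science is to extract [MASK] from data."))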
H: How do the linear layers in the attention mechanism work?
I think I know the answer to my question but I haven't really had confirmation.
When taking a look at the multi-head-attention block as presented in "Attention Is All You Need", we can see that there are three linear layers applied on the key, query and value matrices, and then one layer at the end, which is applied on the output of the matrix multiplication of the score matrix and the value.
The three linear layers at the beginning: when the key/query/value with shape (seq-len x emb-dim) enters the linear layer, the output is still (seq-len x emb-dim). Does that mean the same linear layer is applied on every "index" of the input matrix? Like this (pseudo-code):
fc = linear(emb-dim, emb-dim)  # in-features and out-features have the shape of emb-dim
output_matrix = []
for x in key/query/value:
    # x is one row of the input matrix with shape (emb-dim x 1)
    x = fc(x)
    # x after the linear layer still has the shape (emb-dim x 1)
    output_matrix.append(x)
# output_matrix now has the shape (seq-len x emb-dim); the same as the input matrix
So is this indeed what happens? I couldn't explain why the output is the same as the input otherwise.
The linear layer before the output: the output of the matrix multiplication of the score matrix and the value is also (seq-len x emb-dim), and therefore the output of the linear layer is too. So the output of the whole attention block has the same shape as the input.
So I'm just asking for confirmation whether the explanation I wrote is correct. And if not: what am I understanding wrong?
Extra question: when I want to further use the output of the attention block for classification, I would have to take the mean along the seq axis in order to get a vector of fixed shape (emb-dim x 1) so I can feed it into a classification layer. But I guess that valuable information is getting lost in that process.
My question: Could I replace the last linear layer with an RNN to get the desired output shape and without losing information?
AI: Your understanding is not correct.
The relevant information is described in the original paper in section 3.2.2:
The three sets of projection matrices you are referring to are $W^Q_i \in \mathbb{R}^{d_{model} \times d_k}$ for the Queries, $W^K_i \in \mathbb{R}^{d_{model}\times d_k}$ for the Keys and $W^V_i \in \mathbb{R}^{d_{model}\times d_v}$ for the Values. Notice that the $i$ subindex in the matrix names refers to the attention head, indicating that there is a different matrix for each attention head.
The final projection matrix is $W^O \in \mathbb{R}^{hd_v \times d_{model}}$.
Given the number of attention heads, $h = 8$, the dimensions of the matrices are defined by $d_k=d_v=d_{model}/h=64$.
The three sets of matrices project the embedding dimensionality $d_{model}$ into a space with 8 times smaller dimensionality ($d_k=d_v=d_{model}/8$). However, note that for each of $W^K$, $W^V$ and $W^Q$ there are 8 matrices (one per attention head) and, analogously, 8 scaled dot products are computed. The results of the dot products are 8 vectors of dimensionality $d_{model}/8$; those 8 vectors are concatenated (see figure below), giving a tensor with the original dimensionality $d_{model}$.
Then, the final matrix multiplication by $W^O$ doesn't change the dimensionality, obtaining again the original one.
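A shape-only NumPy sketch with random weights (not actual trained projections), just to mirror the dimensions described above for $d_{model}=512$ and $h=8$ heads:
import numpy as np

d_model, h, seq_len = 512, 8, 10
d_k = d_v = d_model // h                       # 64

X = np.random.randn(seq_len, d_model)          # input sequence
W_Q = np.random.randn(h, d_model, d_k)         # one projection matrix per head
W_K = np.random.randn(h, d_model, d_k)
W_V = np.random.randn(h, d_model, d_v)
W_O = np.random.randn(h * d_v, d_model)

heads = []
for i in range(h):
    Q, K, V = X @ W_Q[i], X @ W_K[i], X @ W_V[i]      # (seq_len, 64) each
    A = np.exp(Q @ K.T / np.sqrt(d_k))
    A = A / A.sum(axis=-1, keepdims=True)             # softmax over the keys
    heads.append(A @ V)                               # (seq_len, 64)

out = np.concatenate(heads, axis=-1) @ W_O            # (seq_len, 512): back to d_model
print(out.shape)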
About your second question, either of the approaches you describe (averaging and using an RNN) is technically feasible, but what people normally do when using transformers for classification is to use BERT's approach, that is, adding a special token [CLS] at the beginning of the sequence and using the output at that position for the classification, just projecting with a matrix multiplication into a space with dimensionality equal to the number of classes. |
H: How to calculate the different metrics for multi class classification
My confusion matrix has the following structure:
             (Predicted)
C = (actual) [TN  FP
              FN  TP]
How can I calculate the Mathews Correlation Coefficient (MCC) value for multi-class expressed as
MCC = (TP .* TN - FP .* FN) ./ ... sqrt( (TP + FP) .* (TP + FN) .* (TN + FP) .* (TN + FN) );
Also, I have some doubts regarding the calculation of the following measures for multi-class. Please correct me where wrong.
for i=1:nClasses
TN(i)=C(i,i);
FP(i)=sum(C(i,:))-C(i,i);
FN(i)=sum(C(:,i))-C(i,i);
TP(i)=sum(C(:))-TP(i)-FP(i)-FN(i);
end
AI: As you can see this MCC formula is for binary classification, so you can only calculate its results by considering the problem as binary.
[edited to clarify OP's confusion] What is a confusion matrix? It shows for every true class $X$ as a row and every predicted class $Y$ as a column how many instances have true class $X$ and are predicted as $Y$. If there are only two classes (binary classification), the only possibilities are
$X$ positive and $Y$ positive -> TP
$X$ positive and $Y$ negative -> FN
$X$ negative and $Y$ positive -> FP
$X$ negative and $Y$ negative -> TN
However when there are more than two classes (multiclass classification) it's impossible to use this distinction positive/negative directly, so there are no general TP,FP,FN,TN cases.
With multiple classes one can calculate binary classification metrics for every class. This is done by considering the target class as positive and all the other classes as negative (as if they are merged into one big negative class).
Example: suppose we have classes A, B, C. If we focus on class A, the confusion matrix is like this:
A B C
A TP FN FN
B FP TN TN
C FP TN TN
to present it another way:
A B or C
A TP FN
B or C FP TN
Now if we focus on class B the confusion matrix becomes:
A B C
A TN FP TN
B FN TP FN
C TN FP TN
In your code the TP and TN categories are swapped:
TP(i)=C(i,i);
...
TN(i)=sum(C(:))-TP(i)-FP(i)-FN(i); |
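Note that if you only need the final multi-class MCC value (rather than the per-class TP/FP/FN/TN counts), scikit-learn in Python implements the generalized multi-class formula directly — a quick sketch with made-up label vectors:
from sklearn.metrics import matthews_corrcoef

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
print(matthews_corrcoef(y_true, y_pred))   # multi-class MCC from the raw labels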
H: Why removing rows with NA values from the majority class improves model performance
I have an imbalanced dataset like so:
df['y'].value_counts(normalize=True) * 100
No 92.769441
Yes 7.230559
Name: y, dtype: float64
The dataset consists of 13194 rows and 37 features.
I have tried numerous attempts to improve the performance of my models by oversampling and undersampling to balance the data, One Class SVM for outlier detection, using different scores, hyperparameter tuning, etc. Some of these methods have improved the performance slightly, but not as much as I would like:
Applying RandomUnderSampling:
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=42)
X_train_rus, y_train_rus = rus.fit_resample(X_train, y_train)
# Define and fit AdaBoost classifier using undersampled data
ada_rus = AdaBoostClassifier(n_estimators=100, random_state=42)
ada_rus.fit(X_train_rus,y_train_rus)
y_pred_rus = ada_rus.predict(X_test)
evaluate_model(y_test, y_pred_rus)
Using Oversampling techniques such as SMOTE:
# SMOTE
from imblearn.over_sampling import SMOTE
# upsample minority class using SMOTE
sm = SMOTE(random_state=42)
X_train_sm, y_train_sm = sm.fit_sample(X_train, y_train)
# Define and fit AdaBoost classifier using upsample data
ada_sm = AdaBoostClassifier(n_estimators=100, random_state=42)
ada_sm.fit(X_train_sm,y_train_sm)
y_pred_sm = ada_sm.predict(X_test)
# compare predicted outcome through AdaBoost upsampled data with real outcome
evaluate_model(y_test, y_pred_sm)
I then decided to attempt removing rows with missing data from samples from the majority class as I saw this in an article. I did this gradually by increasing the threshold (thresh) parameter in pandas dropna function, and each time I removed more rows, the performance improved. Finally, I removed all rows from the majority class with missing data like so:
df_majority_droppedRows = df.query("y == 'No'").dropna()
df_minority = df.query("y == 'Yes'")
dfWithDroppedRows = pd.concat([df_majority_droppedRows, df_minority])
print(dfWithDroppedRows.shape)
(1632, 37)
This reduced the number of rows I have dramatically down to 1632 and changed the distribution in the target variable such that what was perviously the minority class('Yes') was now the majority class:
Yes 58.455882
No 41.544118
Name: y, dtype: float64
Testing the model, I found it performed best, with high recall and precision values.
So my questions are,
Why did this method outperform other oversampling and undersampling techniques?
Is it acceptable that what was previously a minority class is now the majority class or can this cause overfitting?
Is it realistic to build a model that relies on input with no missing data for the majority class samples?
EDIT
In response to the questions in the comment by @Ben Reiniger:
I dealt with the missing values in the data by using KNNImputer for numeric data and SimpleImputer for categorical data, like so:
def preprocess(X):
    # define categorical and numeric transformers
    numeric_transformer = Pipeline(steps=[
        ('knnImputer', KNNImputer(n_neighbors=2, weights="uniform")),
        ('scaler', StandardScaler())])
    categorical_transformer = Pipeline(steps=[
        ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
        ('onehot', OneHotEncoder(handle_unknown='ignore'))])
    preprocessor = ColumnTransformer(transformers=[
        ('cat', categorical_transformer, selector(dtype_include=['object'])),
        ('num', numeric_transformer, selector(dtype_include=['float64','int64']))
    ])
    X = pd.DataFrame(preprocessor.fit_transform(X))
    return X
After dropping rows, I defined the feature of matrix and the target, preprocessed and then split the data, like so:
# make feature matrix and target matrix
X = dfWithDroppedRows.drop(columns=['y'])
y = dfWithDroppedRows['y']
# encode target variable
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
# preprocess feature matrix
X=preprocess(X)
# Split data into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=42)
Finally, to calculate if the missing values are identically distributed (originally) in the two classes, I ran the following
np.count_nonzero(df.query("y == 'No'").isna()) / df.query("y == 'No'").size
0.2791467938526762
np.count_nonzero(df.query("y == 'Yes'").isna()) / df.query("y == 'Yes'").size
0.24488639582979205
So the majority class has about 28% missing data and the minority class has about 25% missing data.
AI: You have a combination of two problems in your data:
imbalance
missing values
In your experiments there's a confusion about what the true distribution of the data is (or should be): either the "real data" is 97% no, or the "real data" is after removing missing values in which case it's almost balanced. It's very important to decide this based on the problem that you're trying to solve: "in production", does the model have to produce a prediction for every instance even if it has missing values? If yes the true distribution is 97% no (original problem). If no, then the model only predicts for "complete" instances, i.e. many instances are discarded due to missing values (and this happens much more often with "no").
This is a crucial point because whichever way you train the model, it should be evaluated on a test set which reflects the true distribution of the data.
I would assume that your real goal is to predict for every instance, i.e. you don't want to ignore instances even if they have missing values. I will try to address both options though:
option A: real data is 97% no.
option B: real data is 58% yes.
Why did this method outperform other oversampling and undersampling techniques?
The two methods were evaluated on two very different test sets, so the performance between them is simply not comparable.
If the resampling experiments were properly evaluated on the original data (not resampled), then they provide you with a reliable estimate of the performance in option A.
If the resampling experiments were (wrongly) evaluated on resampled data, then the difference is certainly due to the missing values, because imputing a very large proportion of the data causes a lot of noise. In this case the resampling experiments are neither valid for option A (wrong distribution in the test set) nor B (missing values in the test set).
Is it acceptable that what was previously a minority class is now the majority class or can this cause overfitting?
It depends which problem you're trying to solve:
For the original problem "option A", no it is not acceptable to modify the distribution in the test set.
For the new problem "option B", the majority class is "yes". The original data with 97% "no" is irrelevant.
Is it realistic to build a model that relies on input with no missing data for the majority class samples?
This is about specifying the exact problem you want to solve, that's for you to decide :) |
H: How can I save my learning rate on each finished epoch using Callbacks?
I used LearningRateScheduler for my model training. I want to save the learning rate at each epoch to a CSV file (or another document format).
Is there any way to save those learning rates using callbacks?
AI: You may write a custom callback and save the LR to a file.
You can get it via self.model.optimizer.learning_rate.
Custom Callback - Keras docs
class CustomCallback(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print("LR - {}".format(self.model.optimizer.learning_rate))

my_callbacks = [ CustomCallback() ]
LR - <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=0.001> |
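To actually write the value to a CSV file at every epoch, a small extension of the callback above might look like this (a sketch; the file name is just an example, and the learning rate is converted to a plain float before writing):
import csv
import tensorflow as tf
from tensorflow import keras

class LRLogger(keras.callbacks.Callback):
    def __init__(self, filename="lr_log.csv"):
        super().__init__()
        self.filename = filename

    def on_epoch_end(self, epoch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
        with open(self.filename, "a", newline="") as f:
            csv.writer(f).writerow([epoch, lr])   # one row per epoch: epoch index, LR

# model.fit(..., callbacks=[LRLogger()])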
H: ValueError: Input 0 of layer conv1_pad is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 224, 3]
I am trying to make a gender classifier. I am using MobileNet from Tensorflow with input shape (224,224,3). After training the model, I tried to check whether the model was working by passing an image to the model to predict, but it throws the error in the title. I tried debugging it, googled the error and tried everything I could, but the error still persists. Please help me understand why this error occurs.
model = tf.keras.applications.mobilenet.MobileNet(input_shape = (224,224,3), include_top = False, pooling = 'avg')
#model.summary()
x = Dropout(rate=0.4)(model.output)
x = Dense(3)(x)
x = Softmax()(x)
model= Model(model.inputs, x)
for layer in model.layers[:-3]:
    layer.trainable = False

model.compile(
    optimizer=Adam(lr=0.001),
    loss='categorical_crossentropy'
)
datagen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function = tf.keras.applications.mobilenet.preprocess_input,
shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, validation_split = 0.1)
training = datagen.flow_from_directory(
'...\Pictures\Model',
target_size=(224, 224)
)
validation = datagen.flow_from_directory(
'...\Pictures\Model',
target_size=(224, 224)
)
batch_size = 32
history = model.fit_generator(generator = training, steps_per_epoch = training.samples//batch_size, epochs = 10,
validation_data = validation, validation_steps = validation.samples//batch_size, verbose = 2)
man = np.array(tf.keras.preprocessing.image.load_img('...\\Pictures\\Detected Faces\\0156224224_face.jpg', target_size = (224,224)))
model.predict(man) #Here the model throws an error
AI: You need to make sure to add an extra dimension for the batch size; if you are passing in a single image, the batch size would be 1. You can use np.expand_dims to add the extra dimension. |
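In the case above that means a one-line change to the prediction call (man and model are the variables from the question):
import numpy as np

man_batch = np.expand_dims(man, axis=0)   # shape (1, 224, 224, 3): a batch of one image
pred = model.predict(man_batch)
For consistent results you may also want to apply the same preprocessing as in your generators (tf.keras.applications.mobilenet.preprocess_input) to the image before predicting.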
H: what's the motivation behind BERT masking 2 words in a sentence?
BERT and the more recent T5 ablation study agree that
using a denoising objective always results in better downstream task performance compared to a language model
where denoising == masked-lm == cloze.
I understand why learning to represent a word according to its bidirectional surroundings makes sense. However, I fail to understand why is it beneficial to learn to mask 2 words in the same sentence, e.g. The animal crossed the road => The [mask] crossed the [mask]. Why does it make sense to learn to represent animal without the context of road?
Note: I understand that the masking probability is 15% which corresponds to 1/7 words, which makes it pretty rare for 2 words in the same sentence to be masked, but why would it ever be beneficial, even with low probability?
Note2: please ignore the masking procedure sometimes replacing mask with a random/same word instead of [mask], T5 investigates this choice in considerable length and I suspect that it's just an empirical finding :)
AI: Because BERT accepts the artificial assumption of independence between masked tokens, presumably because it makes the problem simpler and yet gave excellent results. This is not discussed by authors in the article or anywhere else to my knowledge.
Later works like XLNet have worked towards eliminating such an independence assumption, as well as other potential problems identified in BERT. However, despite improving on BERT's results on downstream tasks, XLNet has not gained the same level of attention and amount of derived works. In my opinion, this is because the improvement did not justify the complexity introduced by the permutation language modeling objective.
The same assumption is made by other pre-training approaches, like Electra's adversarial training. The authors argue that this assumption isn’t too bad because few tokens are actually masked, and it simplifies the approach. |
H: input shape of keras Sequential model
I am new to neural networks using Keras.
I have training samples with input shape (150528, 1235) and output shape (154457, 1235), where 1235 is the number of training examples.
How should I set the input shape? I tried the code below, but it gave me a
ValueError: Data cardinality is ambiguous:
x sizes: 150528
y sizes: 154457
Please provide data which shares the same first dimension.
code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
physical_devices = tf.config.experimental.list_physical_devices('GPU')
print("Num GPUs Available: ", len(physical_devices))
tf.config.experimental.set_memory_growth(physical_devices[0], True)
model = Sequential([
Dense(16, input_shape=(150528,), activation='relu'),
Dense(32, activation='relu'),
Dense(154457)
])
model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.MeanSquaredError()])
model.fit(im_features, amp_vo_features, batch_size=10, epochs=30, shuffle=True, verbose=2)
AI: You just need to make sure that your x and y values have the same row dimension. This can be done using numpy.transpose or numpy.swapaxes.
import numpy as np
t = np.random.rand(150528, 1235)
np.transpose(t).shape # (1235, 150528)
np.swapaxes(t, 0, 1).shape # (1235, 150528)
This should give the shapes of (1235, 150528) for x and (1235, 154457) for y. |
H: How to run unmodified Python program on GPU servers with scheduled GPUs?
Say I have one server with 10 GPUs. I have a python program which detects available GPU and use all of them.
I have a couple of users who will run python (Machine learning or data mining) programs and use GPU.
I initially thought to use Hadoop, as I find Yarn is good at managing resources, including GPU, and YARN has certain scheduling strategies, like fair, FIFO, capacity.
I don't like hard-coded rules, eg. user1 can only use gpu1, user2 can only use gpu2.
I later found that Hadoop seems to require programs written in the map-reduce pattern, but my requirement is to run unmodified code as we run it on Windows or a local desktop, or to modify it as little as possible.
Which knowledge should I look at for running and scheduling python programs on a machine with multiple GPUs?
AI: A popular solution used for job management on GPU environments is SLURM.
SLURM allows specifying the resources needed by a job (e.g. 2 CPUs, 2Gb of RAM, 4 GPUs) and it will be scheduled for execution when the needed resources are available.
A job can be any program or script. |
H: Transformer architecture question
I am hand-coding a transformer (https://arxiv.org/pdf/1706.03762.pdf) based primarily on the instructions I found at this blog: http://jalammar.github.io/illustrated-transformer/.
The first attention block takes matrix input of the shape [words, input dimension] and multiplies by the attention weight matrices of shape [input dimension, model dimension]. The model dimension is chosen to be less than the input dimension and is the dimension used as output in all subsequent steps.
There is a residual connection around the attention block and the input is meant to be added to the output of the attention block. However the output of the attention block is shape [words, model dimension] and the input is form [words, input dimension]. Should I interpolate the input down to the model dimension as is done in ResNet? Or maybe add another weight matrix to transform the input?
AI: The input dimensionality is the embedding size, which is the same as the model dimensionality, as explained in section 3.4 of the article:
3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $d_{model}$.
Therefore, the input dimensionality and the model dimensionality are the same, which makes them suitable for the residual connection. |
H: Accuracy stays flat for about 200 epochs, then starts increasing
Can anyone explain the following observation?
Why does the accuracy stay flat (a straight line) for about 200 epochs while the loss decreases very smoothly?
By the way, why are the loss curves so beautifully smooth for the first 400 epochs?
Is this because of the learning rate or other reasons?
AI: The accuracy depends on a threshold, whereas the loss doesn't. ML software tends to assume a threshold of 0.5, which is not a good fit in cases where there's some class imbalance.
I believe that, until epoch 500, your model is learning (loss is going down), but the default threshold doesn't allow you to see it in terms of accuracy.
If you pick another threshold you might see different results.
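For example, instead of relying on the default 0.5 cut-off you can threshold the predicted probabilities yourself — a sketch that assumes a binary model with sigmoid output and that model, X_val and y_val are your trained model and validation data (0/1 labels):
import numpy as np

probs = model.predict(X_val).ravel()          # predicted probabilities
for threshold in (0.3, 0.5, 0.7):
    acc = np.mean((probs > threshold).astype(int) == y_val)
    print(threshold, acc)                     # accuracy at each candidate threshold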
Regarding the loss going from smooth to noisy, it might be that it is learning the "easy cases" first, and then decreasing very easily and smoothly, and after epoch 400-500 it starts to overfit some of "hard cases", thus the loss becoming noisier. |
H: How does tree-based algorithms handle linearly combined features?
While I am aware that tree-based algorithms (e.g., DT, RF, XGBoost) are 'immune' to multi-collinearity, how do they handle linearly combined features?
For example, is there is any additional value or harm in including the three feature: a, b and a+b in the model?
AI: If the sum of the two feature makes sense on the domain semantically, it might be a good idea.
But while trees can handle redundant features pretty well, increasing the number of features without adding any extra "value" or "information" can lead to lower performance in certain situations. For example, if there is no added value and you aggressively sample or restrict the number of features in each tree, the two related features will take the place of a useful feature when they are both selected.
If you think the sum is a better feature than its components, you might consider adding the sum only, or the sum and one of the components. |
H: How to train a model using a daunting huge training dataset
I have an extremely huge dataset and I'm wondering what would be the right way to set up an experiment that uses this data to train a model.
I understand that I can use data reduction to, for instance, drop some variables. But although data reduction can actually reduce the amount of data, as far as I can see this technique is intended to improve the effectiveness of model training, not to deal with the practical issues that come from the sheer amount of data.
One of my ideas is to shuffle the whole dataset first and then split it into 'small' chunks. Once I have, let's say, $N$ chunks, I can train the same model using each chunk as follows:
M = initialize()
for n in range(N):
    D = load_chunk(n)
    M = train(M, D)
Although this approach can be effective to fit the experiment to the computational resources at hand, I'm afraid that training the model this way can affect the model's quality by including bias from the later chunks. In addition, $N$ is now a new hyperparameter to be set.
Another alternative I can see is by using statistical sampling:
D = retrieve_sampling(sampling_size)
if is_good(D):   # pseudocode: some check that the sample is representative
    M = train(D)
I'm wondering if there are other ways to do it than the ones I've mentioned here.
AI: Quite often with massive datasets the model doesn't actually need the whole data. So I think the first step is to check whether using the whole data is useful: run an ablation study where you use say 1%, then 2%, 3%, .., up to say 10% of the data (adapt the levels to your case of course). Each run consists in training on the x % subset and evaluating on a validation set (be sure to separate your real final test set before anything, this study should only use a validation set).
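As a rough sketch of such an ablation loop (scikit-learn style; X_train, y_train and a separate validation split X_val, y_val are assumed to exist, and SomeClassifier is a placeholder for whatever model you plan to use):
from sklearn.model_selection import train_test_split

fractions = [0.01, 0.02, 0.05, 0.10]
scores = []
for frac in fractions:
    # draw a stratified subsample of the training data of the given size
    X_sub, _, y_sub, _ = train_test_split(X_train, y_train, train_size=frac,
                                          stratify=y_train, random_state=0)
    model = SomeClassifier()
    model.fit(X_sub, y_sub)
    scores.append(model.score(X_val, y_val))   # evaluate every run on the same validation set
print(list(zip(fractions, scores)))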
The goal is to estimate how much gain in performance is obtained by adding training data. Plotting the performance as a function of the amount of data should give a decent idea of the trend, even if it doesn't reach the point of maximal performance, i.e. when more data doesn't improve performance anymore. With this information you can make a better decision about how to proceed with the real training. |
H: What's wrong with the graph?
I wanted to plot a bar graph which shows COVID cases reported in different regions of the USA.
So here is my code:
import pandas as pd
import matplotlib.pyplot as plt
datainput = pd.read_csv("MD_COVID-19.csv")
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
rgn1=list(datainput['Allegany'])
rgn2=list(datainput['Anne_Arundel'])
rgn3=list(datainput['Baltimore'])
rgn4=list(datainput['Baltimore_City'])
rgn5=list(datainput['Calvert'])
rgn6=list(datainput['Caroline'])
rgn7=list(datainput['Washington'])
region=['Allegany','Anne_Arundel','Baltimore','Baltimore_City','Calvert','Caroline','Washington']
rgn1s=0
rgn2s=0
rgn3s=0
rgn4s=0
rgn5s=0
rgn6s=0
rgn7s=0
for item in rgn1:
rgn1s +=item
for item in rgn2:
rgn2s +=item
for item in rgn3:
rgn3s +=item
for item in rgn4:
rgn4s +=item
for item in rgn5:
rgn5s +=item
for item in rgn6:
rgn6s +=item
for item in rgn7:
rgn7s +=item
tc=[]
tc.append(rgn1s)
tc.append(rgn2s)
tc.append(rgn3s)
tc.append(rgn4s)
tc.append(rgn5s)
tc.append(rgn6s)
tc.append(rgn7s)
ax.bar(region,tc)
plt.ylabel("Number of Cases")
plt.show()
My bar graph is shown above. Why are only three bars showing up?
Where are the other four? Also, how can I draw a pie chart?
AI: The reason the other bars are missing is your method of summing the values combined with the missing values in your dataset. The way you are adding the values together means that if even one value is missing (NA) the total for that column will be missing as well, and as a result, will not be in your final plot. It is better to use pandas' built-in methods (for conciseness and speed, but also to avoid errors such as the one you encountered) to add up the values as follows (using random values):
import pandas as pd
import matplotlib.pyplot as plt
datainput = pd.read_csv("MD_COVID-19.csv")
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
regions = ['Allegany', 'Anne_Arundel', 'Baltimore', 'Baltimore_City', 'Calvert', 'Caroline', 'Washington']
# select all region columns and calculate the sum column-wise
total_cases = datainput[regions].sum()
ax.bar(regions, total_cases)
plt.ylabel("Number of Cases")
plt.show()
Which gives the following result: |
H: How to calculate the "Evidence" for Naive Bayes text classification?
I'm trying to write a Naïve Bayes text classification from scratch in Python, but I can't quite grasp what I should do to write the actual classifier.
One question that popped up was: "What formula do you use?"
Do you use the Bayes Rule/Theorem? Or do you use another equation? This information had conflicting answers (or as I thought) on different articles that I read.
Another question that popped up was: "How do you calculate the evidence?"
One article told me that I just don't calculate the evidence, and another one told me to do it, but never said how. The way I understand it, it's
P(X) * P(X1) * ... * P(XN)
But what's P(X)? How do you calculate that in text classification?
AI: Naive Bayes assumes feature independence, so to obtain the probability of a sentence w for class C1 you multiply the (per-class) probabilities of the words that make up w. The multiplication you mentioned comes from here.
Another thing: when you compare two classes, i.e. divide P(C1|w) by P(C2|w), the evidence P(w) appears in both and cancels out, so you don't need to compute it at all.
Lastly, work with log-probabilities in order to handle very small numbers, and use Laplace smoothing to avoid zero probabilities for unseen words.
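As a toy sketch of that whole recipe (not a full implementation; docs is assumed to be a list of token lists and labels the matching class labels):
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    priors = Counter(labels)                 # class counts, used for the prior P(C)
    word_counts = defaultdict(Counter)       # per-class word counts
    for tokens, y in zip(docs, labels):
        word_counts[y].update(tokens)
    vocab = {w for counts in word_counts.values() for w in counts}
    return priors, word_counts, vocab

def log_score(tokens, c, priors, word_counts, vocab):
    total = sum(word_counts[c].values())
    logp = math.log(priors[c] / sum(priors.values()))
    for w in tokens:
        # Laplace smoothing: +1 on every count so unseen words never give log(0)
        logp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
    return logp  # unnormalized: the evidence P(w) is dropped since it is the same for every class
The predicted class is then simply the one with the highest log score, so no evidence term is ever needed. |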
H: Is my model underfitting?
Model:
from tensorflow.keras import models, layers

model = models.Sequential()
#add model layers
model.add(layers.Conv1D(64, kernel_size=3, activation='relu',padding='same'))
#model.add(layers.Conv1D(16, kernel_size=3, activation='sigmoid',padding='same'))
model.add(layers.MaxPooling1D( pool_size=2, strides=None, padding='same', data_format=None))
#model1.add(tf.keras.layers.LSTM(100,return_sequences=True))
model.add(layers.Flatten())
model.add(layers.Dense(500, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(120, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(60, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(3, activation='softmax'))
I have initialized an early-stopping as well.
Is my model underfitting?
Epoch 213/400
360684/360684 [==============================] - 38s 106us/sample - loss: 0.3868 - acc: 0.8426 - val_loss: 0.2698 - val_acc: 0.9050
Epoch 214/400
360684/360684 [==============================] - 38s 106us/sample - loss: 0.3851 - acc: 0.8439 - val_loss: 0.2542 - val_acc: 0.9105
Epoch 215/400
360684/360684 [==============================] - 38s 105us/sample - loss: 0.3843 - acc: 0.8444 - val_loss: 0.2532 - val_acc: 0.9130
Epoch 216/400
360684/360684 [==============================] - 38s 105us/sample - loss: 0.3849 - acc: 0.8436 - val_loss: 0.2506 - val_acc: 0.9116
Epoch 217/400
360684/360684 [==============================] - 38s 105us/sample - loss: 0.3825 - acc: 0.8450 - val_loss: 0.2529 - val_acc: 0.9127
Epoch 218/400
360684/360684 [==============================] - 38s 106us/sample - loss: 0.3815 - acc: 0.8446 - val_loss: 0.2541 - val_acc: 0.9120
Epoch 219/400
360684/360684 [==============================] - 39s 108us/sample - loss: 0.3821 - acc: 0.8442 - val_loss: 0.2598 - val_acc: 0.9094
Epoch 220/400
360684/360684 [==============================] - 40s 110us/sample - loss: 0.3818 - acc: 0.8456 - val_loss: 0.2545 - val_acc: 0.9123
Epoch 221/400
360684/360684 [==============================] - 37s 104us/sample - loss: 0.3789 - acc: 0.8457 - val_loss: 0.2436 - val_acc: 0.9154
Epoch 222/400
360684/360684 [==============================] - 38s 105us/sample - loss: 0.3819 - acc: 0.8456 - val_loss: 0.2506 - val_acc: 0.9115
Epoch 223/400
360684/360684 [==============================] - 38s 105us/sample - loss: 0.3795 - acc: 0.8456 - val_loss: 0.2507 - val_acc: 0.9151
Epoch 224/400
360684/360684 [==============================] - 37s 104us/sample - loss: 0.3791 - acc: 0.8466 - val_loss: 0.2558 - val_acc: 0.9091
Epoch 225/400
360684/360684 [==============================] - 38s 106us/sample - loss: 0.3793 - acc: 0.8466 - val_loss: 0.2531 - val_acc: 0.9100
The difference between acc and val_acc is significantly small, like 5-6%. Should I be worried?
AI: I don't think you need to worry; instead, I would ask myself whether the accuracy I'm getting is good enough for the task the NN is supposed to do.
Having higher training loss than validation loss can mean different things:
Your validation data is easier to assess than training data. If the train/validation split is done randomly and there is enough data in both subsets, this shouldn't be the case.
You're using dropout in training but not in validation. This is the default of some deep learning libraries, and it makes sense. If this is the case and you want to see less of a gap, try reducing the dropout rate.
To sum up, I don't think it's an issue but you might be able to improve your validation performance by reducing the amount of regularization or increasing the complexity of the NN. However, this is just a hypothesis and the only way to know is to re-train and check the new performance.
Edit
By default, Keras doesn't apply dropout at prediction time, so this is likely your case since you have high dropout rates. |
H: Does experience with Keras count as experience with Tensorflow?
Many Machine Learning job postings I've seen request experience with Tensorflow. If I have experience with Tensorflow, but only through building neural networks using the Keras API. Does that count?
I have yet to see a tutorial or any code anywhere that uses Tensorflow without the Keras API, so I don't see how one learns Tensorflow strictly. But then why don't employers just request experience with the Keras API?
AI: Currently, Keras is part of Tensorflow, so if you are using the Keras that comes bundled with Tensorflow (tf.keras), technically one is a subset of the other. This was not always like that (see this), so if you are using the old version of Keras that is a separate package, then technically it hides the complexity of Tensorflow, so your experience would be only Keras.
About tutorials not using the Keras API (I understand that you are referring to using model.compile, model.fit, etc), you should understand that tutorials are normally introductory material, which makes using only Keras a sensible decision, avoiding the rougher Tensorflow reality (static graph vs. eager execution, sessions, etc). Bare Tensorflow (no model.fit) is used all over the place for real-world use.
Anyway, in the end, it is up to employers to decide whether your experience fits their needs. |
H: Which part should be frozen during transfer learning?
I want to use transfer learning and fine tuning and I need to decide which part of
the original model will be used and which part will be frozen. I'm thinking about four possible cases:
small/large new dataset and this set is similar/not similar to the original dataset. What should be done to achieve the best results in each of the cases?
AI: Ideally I follow the rules of thumb below for these four cases (the usual transfer learning guidelines). With them I could do successful object detection using a YOLO model by training only the last layer, and was able to do leaf classification by training a good number of layers.
Small new dataset, similar to the original: freeze the pretrained base and retrain only the final classifier layer(s); fine-tuning more layers risks overfitting.
Large new dataset, similar to the original: fine-tune some of the later layers (or even the whole network), initializing from the pretrained weights.
Small new dataset, not similar to the original: keep most of the network frozen and train a classifier on activations from earlier layers, since the later layers are too specific to the original task.
Large new dataset, not similar to the original: fine-tune a large part of (or the whole) network, still starting from the pretrained weights rather than from scratch.
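A minimal sketch of the "freeze the base" case in Keras (DenseNet121 is just an example choice of pretrained base, and num_classes is a variable you define yourself):
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras import layers, models

base = DenseNet121(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                    # freeze the whole pretrained base
model = models.Sequential([
    base,
    layers.Dense(num_classes, activation="softmax")   # only this new head gets trained
])
# For the large/similar case you could instead unfreeze the last few layers:
# for layer in base.layers[-20:]:
#     layer.trainable = True
The less similar and the larger your new dataset is, the more of the base you would typically unfreeze. |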
H: When would C become nescessary to do an analysis or manage data?
I use python in my day to day work as a research scientist and I am interested in learning C. When would a situation arise where python would prove insufficient to manipulate data?
AI: Most of the common libraries you would use for data manipulation do actually use C (or C++ or Fortran, etc.) under the hood.
There are even libraries such as CuPy, which offers the entire NumPy API, but can run your code on a GPU. Using GPUs for speed is a much more common use case these days (in my experience), compared to writing the C/C++ version.
EDIT: here is a related answer, about which programming languages are most competitive for AI, ML, DataScience, etc.
In my opinion, you might need to "do it yourself" in one of 3 cases:
1. Speed
You need it to run faster than current libraries offer - e.g. if the clustering algorithms in Scikit-Learn are too slow
2. Memory
You need to use less memory than existing algorithms - perhaps a specific method on your Pandas DataFrame uses more memory than you have available
3. New Algorithms
You need something that is fairly fast or very low level, and no existing library offers it. I would normally suggest trying your idea first using NumPy though, before trying to roll your own binaries. |
H: Random Forest Classifier cannot recognise parameter grid
I am trying to run the below code to extract the feature importances of my random forest, but I'm getting the following error TypeError: init() got an unexpected keyword argument 'randomforestclassifier__max_depth'. Can anyone tell me what is wrong?
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.metrics import f1_score
x, y = make_classification(n_samples=10000, weights=[0.99], flip_y=0)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 1)
paramgrid_rf = {'n_estimators': [500],
'max_depth': [4],
'random_state': [0],
'max_features': ['sqrt']
}
imba_pipeline_rf = make_pipeline(RandomOverSampler(sampling_strategy=0.35, random_state=0),
RandomUnderSampler(sampling_strategy=0.9, random_state=0),
RandomForestClassifier())
new_params = {'randomforestclassifier__' + key: paramgrid_rf[key] for key in paramgrid_rf}
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
grid_imba_rf = GridSearchCV(imba_pipeline_rf, param_grid=new_params, cv=cv, scoring='f1',
return_train_score=True)
scores = cross_val_score(imba_pipeline_rf, x_train, y_train, scoring='f1', cv=cv, n_jobs=-1)
grid_imba_rf.fit(x_train, y_train)
y_pred_rf = grid_imba_rf.predict(x_test)
print('F1 score on validation data: ', f1_score(y_test, y_pred_rf))
rf_final = RandomForestClassifier(**grid_imba_rf.best_params_).fit(x_train_res,y_train_res)
rf_final.feature_importances_
AI: Your grid search dictionary contains the argument names with the pipeline step name in front of it, i.e. 'randomforestclassifier__max_depth'. Instead, the RandomForestClassifier has argument names without the pipeline step name, i.e. max_depth. You therefore need to remove the first part of the string which denotes the name of the step in your original pipeline. You can do this using a dictionary comprehension:
# original
{'randomforestclassifier__max_depth': 4, 'randomforestclassifier__max_features': 'sqrt', 'randomforestclassifier__n_estimators': 500, 'randomforestclassifier__random_state': 0}
# splitting the key on '__' and take only the last part
{k.split("__")[-1]: v for k, v in grid_imba_rf.best_params_.items()}
# {'max_depth': 4, 'max_features': 'sqrt', 'n_estimators': 500, 'random_state': 0}
This changes one line in the original script to:
rf_final = RandomForestClassifier(
**{k.split("__")[-1]: v for k, v in grid_imba_rf.best_params_.items()}
).fit(x_train,y_train)
rf_final.feature_importances_ |
H: Changing the batch size during training
The choice of batch size is in some sense the measure of stochasticity :
On one hand, smaller batch sizes make the gradient descent more stochastic; the SGD can deviate significantly from the exact GD on the whole data, but this allows for more exploration and in some sense performs Bayesian inference.
Larger batch sizes approximate the exact gradient better, but in this way one is more likely to overfit the data or get stuck in a local optimum. Processing larger batches also speeds up calculations on parallel architectures, but increases the demand for RAM or GPU RAM.
It seems like a sensible strategy would be to start from smaller batch sizes to have a lot of exploration in the initial stages, and then gradually increase the batch size to fine-tune the model.
However, I have not seen this strategy implemented in practice. Did it turn out to be inefficient? Or does the appropriate choice of learning rate scheduler together with dropout already do this well enough?
AI: Efficient use of resources
It is a balancing game with the learning rate, and one reason you don't normally see people do this is that you want to utilise as much of the GPU as possible.
It is commonly preferred to start with the maximum batch size you can fit in memory, then increase the learning rate accordingly. This applies to "effective batch sizes" e.g. when you have 4 GPUs, each running with batch_size=10, then you might have a global learning rate of nb_gpu * initial_lr (used with sum or average of all 4 GPUs).
The final "best approach" is usually problem specific - small batch sizes might not work for GAN type models, and large batch sizes might be slow and sub-optimal for certain vision based tasks.
Friends don't let friends use large batch sizes
There is literature to support usage of small batch sizes at almost all times. Even though this idea was supported by Yann Lecun, there are differences of opinion.
Super convergence
There are also other tricks that you might consider, if you are interested in faster convergence, playing with learning rate cycling. |
H: Multi-Feature One-Hot-Encoder with varying amount of feature instances
Let's assume we have data instances like this:
[
[15, 20, ("banana","apple","cucumber"), ...],
[91, 12, ("orange","banana"), ...],
...
]
I am wondering how I can encode the third element of these datapoints. For multiple features values we could use sklearn's OneHotEncoder, but as far as I could find out, it cannot handle inputs of different length.
Here is what I've tried out:
X = [[15, 20, ("banana","apple","cucumber")], [91, 12, ("orange","banana")]]
ct = ColumnTransformer(
[
("genre_encoder", OneHotEncoder(), [2])
],
remainder='passthrough'
)
print(ct.fit_transform(X))
This will only output
[[1.0 0.0 15 20]
[0.0 1.0 91 12]]
as expected, because the tuples are handled as the possible values this feature can be represented with.
We can't embed our features directly (like [15, 12, "banana", "apple", "cucumber"]), because
we don't know how many instances of this feature we will have (two? three?)
each position would be interpreted as an own feature and thus if we had banana in the first nominal slot in one datapoint and in the second one in our second nominal slot, they would not count to the same "pool of values" a feature can embody
Example:
X = [["banana","apple","cucumber"], ["orange","banana", "cucumber"]]
enc = OneHotEncoder()
print(enc.fit_transform(X).toarray())
[[1. 0. 1. 0. 1.]
[0. 1. 0. 1. 1.]]
This representation contains 5 slots instead of 4, because the first slot is interpreted as using banana or orange, the second one as apple or banana and the last one only has the option cucumber.
(This would also not solve the problem of having different amounts of feature values per datapoint. And replacing empty ones with None does not solve the problem either, because then None faces this positional ambiguity.)
Any idea how to encode those "multi-multi" features that can take multiple values and consist of a varying number of elements? Thank you in advance!
AI: I think you can transform this into a text preprocessing problem and then use CountVectorizer. You basically build "documents" by putting together all the words in your raw data and then use CountVectorizer on those documents.
from sklearn.feature_extraction.text import CountVectorizer
X = [["banana","apple","cucumber"], ["orange","banana", "cucumber"]]
# Create documents
X_ = [' '.join(x) for x in X]
enc = CountVectorizer()
print(enc.fit_transform(X_).toarray())
Returns
[[1 1 1 0]
[0 1 1 1]]
which has 4 different values as you expected. |
H: Extract data from json format and paste to column using python
In my column with json data, I have this list I want to extract to column:
"list":[
{
"id":"list",
"item":[
{
"value":"Hergestellt in Italien aus 100% reinem Platin-Flüssigsilikon"
},
{
"value":"Geruchs- und geschmacksneutral"
},
{
"value":"Kältebeständig bis -60°C"
},
{
"value":"Inklusive Rezeptbuch und 50 Eisstielen"
},
{
"value":"spülmaschinengeeignet"
}
],
"decorators":[
]
},
I have extracted other data using this code but the value was string and not a dictionary:
if 'item' in item and item['item']:
    if isinstance(item['item'], str):
        cur_model_info[item['id']] = item['item']
    elif isinstance(item['item'], list):
        elements = [element['value'] for element in item['item']]
        cur_model_info[item['id']] = ','.join(elements)
I tried to use this for the above format of data but I got this error:
TypeError: sequence item 0: expected str instance, dict found
What should I change in order to be able to export in a separate column the data having each element from the list in a new cell of that column?
expected output
Using the above code, the data is exported in this format
AI: Given that the data snippet you provided is not a valid json structure I assume that the data is a list with dictionaries in it. Using this assumption, you can extract the values to a nested list using a list comprehension:
data = [
{
"id":"list",
"item":[
{
"value":"Hergestellt in Italien aus 100% reinem Platin-Flüssigsilikon"
},
{
"value":"Geruchs- und geschmacksneutral"
},
{
"value":"Kältebeständig bis -60°C"
},
{
"value":"Inklusive Rezeptbuch und 50 Eisstielen"
},
{
"value":"spülmaschinengeeignet"
}
],
"decorators":[]
}
]
print([[i["value"] for i in x["item"]] for x in data])
# [['Hergestellt in Italien aus 100% reinem Platin-Flüssigsilikon',
# 'Geruchs- und geschmacksneutral',
# 'Kältebeständig bis -60°C',
# 'Inklusive Rezeptbuch und 50 Eisstielen',
# 'spülmaschinengeeignet']] |
H: Once a predictive model is in production, how can it be evaluated?
I have a data science project: predicting a customer's next purchase day. One year of customers' behavioral data was split into 9 and 3 months for train and test. Using RFM analysis, I trained a model with different classifiers, and the best one's results are as follows:
Accuracy of XGB classifier on training set: 0.93
Accuracy of XGB classifier on test set: 0.68
This is my school's project, and I was wondering: in real-world projects, how can we evaluate a model's performance after it's in production? How can I measure how successful my model was? What if the performance measures in production are much lower than my test results?
AI: This is in fact a very good question.
The answer is simple, but depends on the case.
In general, what we do after pushing a model to production is apply an audit process. Let me explain: in reality, machine learning models that are pushed to production usually replace another process (e.g., a manual process - this is the case of automation). At the beginning, everything predicted by the machine learning model is audited through that other process (e.g., manually); we call this stage the pilot stage. By comparing the model's performance to the manual process we establish the quality of the model. Once we are happy, we start reducing the audit percentage from 100% to 5% or so (there is some math behind what the audit percentage should be). This audit never goes away: it is always used to measure the performance of the model and to establish ground-truth data for new samples that can be added to the training set.
In fact, training models in theory is one thing and using them in production is quite another; it is really a complex process. Just to mention: we also like to implement some protection mechanisms for the model, for example data drift detection, uncertainty detection and so on. |
H: Is there any package in python that can identify similarity between alphanumeric alias names of a parameter?
For example: for a parameter like input voltage,
Alias names : V_INPUT, VIN etc.
Now, I want the software to recognize each of the alias names as the same parameter. Is there any package/method by which I can achieve this?
NLTK only allows for dictionary words.
AI: If you know there are only specific variants, you can obviously make a look-up table yourself (i.e. a Python dictionary).
Otherwise you could try using a fuzzy matching library, like fuzzywuzzy.
This will give you a "closeness" score for your search term, based on your list of parameters (measurements). Here is an example of how you could use it:
In [1]: from fuzzywuzzy import process
In [2]: measurements = ["Voltage", "Current", "Resistance", "Power"]
In [3]: variants = ["VOLT", "voltage_in", "resistnce", "pwr", "amps"] # notice typos etc.
In [4]: for variant in variants:
...: results = process.extract(variant, measurements, limit=2)
...: print(f"{variant:<11} -> {results}") # See which two were found to be closest
...: best = results[0] # Take the best match by score (first in the list)
...: if best[1] < 70: # Set a threshold at 70%
...: print(f"Rejected best match for '{variant}': {best}")
VOLT -> [('Voltage', 90), ('Current', 22)]
voltage_in -> [('Voltage', 82), ('Resistance', 30)]
resistnce -> [('Resistance', 95), ('Current', 38)]
pwr -> [('Power', 75), ('Current', 30)]
amps -> [('Voltage', 26), ('Resistance', 22)]
Rejected best match for 'amps': ('Voltage', 26)
So most worked out pretty well, including the typo example.
Obviously this does not do any kind of semantic search, so amps does not get related to Current in any way.
To go the way of semantic encodings, you might want to look into "word embeddings", which do indeed try to match the real meaning of words, based on their semantic meaning. To start here, you could look into Word2Vec or GloVe embeddings. Perhaps there is even a tool or library that already offers this capability.
These approaches will not inherently deal with things like typos, so for best results, you could even combine the two approaches. |
H: What does the oob decision function mean in random forest, how get class predictions from it, and calculating oob for unbalanced samples
I am interested in finding the OOB score for a random forest using sklearn, when it is used for a binary classification task and there are unbalanced samples. What does the oob decision function mean in random forest, and how do I get class predictions from it?
I read RandomForestClassifier OOB scoring method but am still not clear. Does the oob decision function provide class probabilities, and if so, do I get the class predictions by taking whichever number is higher (e.g. by doing something like pred_train = np.argmax(forest.oob_decision_function_,axis=1))?
Since my classes are unbalanced, would it be correct to say I can't use sklearn's default OOB score here, and that I should do the above to get some kind of F1 score from the OOB predictions, to get a better estimate of my random forest's error?
AI: Every Tree gets its OOB sample.
So it might be possible that a data point is in the OOB sample of multiple Trees.
oob_decision_function_ calculates the aggregate predicted probability for each data point across the Trees for which that data point is in the OOB sample.
The reason for putting above points is that OOB will give you the mean of probability but it will not tell you anything about the standard deviation of the probability across Trees.
Does the oob decision function provide class probabilities,
Yes
and if so, do I get the class predictions by taking whichever number is higher (e.g. by doing something like pred_train = np.argmax(forest.oob_decision_function_,axis=1))?
Yes
Since my classes are unbalanced, would it be correct to say I can't use sklearn's default OOB score here
The OOB score is still the default score, i.e. accuracy, so it will not help with the imbalanced classes.
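Instead you can compute whatever metric you prefer (such as F1) directly from the OOB predictions, for example like this (reusing forest from your code, with y_train standing for your training labels; this assumes oob_score=True and enough trees that every sample was out-of-bag at least once, otherwise some rows of oob_decision_function_ contain NaN):
import numpy as np
from sklearn.metrics import classification_report, f1_score

oob_pred = np.argmax(forest.oob_decision_function_, axis=1)
print(f1_score(y_train, oob_pred))               # F1 for the positive class
print(classification_report(y_train, oob_pred))  # per-class precision/recall/F1
This gives you an out-of-bag F1 estimate without needing a separate validation split. |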
H: Find correlation between grades from two raters
The question is whether we can find a correlation between two sets of grades (categorical data).
Let’s say we have a dog competition and there are 1000 dogs participating.
There are two rounds of assessment
first round
dog owners give their assessment on a scale from A to C, where A is excellent and C is bad. There are four criteria for assessment during both rounds (behaviour etc.).
second round
one judge gives his assessment of each dog based on the same assessment criteria as in round 1. However, the grades are M - meeting expectations, E - exceeding expectations, B - below expectations.
We understand that M is B, E is A and B is C.
After two rounds our table would look like:
| dog | round one | round two |
| --------------- | --------- | --------- |
| Dog1_criteria1 | A | B |
| Dog1_criteria2 | A | E |
| Dog1_criteria3 | A | E |
| Dog1_criteria4 | B | M |
| Dog2_criteria1 | A | E |
| Dog2_criteria2 | B | M |
| Dog2_criteria3 | A | E |
| Dog2_criteria4 | C | B |
....
How do we find a correlation between the two sets of answers? Thank you!
AI: You can treat this as a type of inter-rater agreement problem and use Cohen's Weighted Kappa or a similar measurement. Weighted kappa takes into account the distribution of ratings for each round and the difference between grades.
Three matrices are involved: the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros. Off-diagonal cells contain weights indicating the level of separation between ratings. Often, cells one off the diagonal are weighted 1, those two off 2, etc.
source:Wikipedia
The equation for weighted κ is:
$$\kappa = 1 - \frac{{\sum_{i=1}^k}{\sum_{j=1}^k}w_{ij}x_{ij}}{{\sum_{i=1}^k}{\sum_{j=1}^k}w_{ij}m_{ij}}$$
where $k$ is the number of codes and $w_{ij}, x_{ij}, m_{ij}$ are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the standard unweighted calculation.
In practice, you may want to use an implementation in Python, R, or a statistical software package rather than manual calculations.
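For example, with scikit-learn, using the grades from your table (round-two grades mapped onto the round-one scale, E -> A, M -> B, B -> C, as you described):
from sklearn.metrics import cohen_kappa_score

round_one = ["A", "A", "A", "B", "A", "B", "A", "C"]
round_two = ["C", "A", "A", "B", "A", "B", "A", "C"]   # mapped round-two grades
print(cohen_kappa_score(round_one, round_two))                    # unweighted kappa
print(cohen_kappa_score(round_one, round_two, weights="linear"))  # linearly weighted kappa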
Here is some intuition from a similar example on why to use weighted kappa (round 2 grades are converted to round 1 scale):
| dog | round one | round two |
| --------------- | --------- | --------- |
| Dog1_criteria1 | A | C |
| Dog1_criteria2 | A | B |
| Dog1_criteria3 | A | A |
| Dog1_criteria4 | B | B |
| Dog2_criteria1 | A | A |
| Dog2_criteria2 | A | A |
| Dog2_criteria3 | A | A |
| Dog2_criteria4 | C | C |
You could look at % agreement and give a score of 6 of 8, or 0.75. That seems good, but suppose the judges just gave everyone an A. That would also be 0.75. So we need to factor in the frequency of the grades and the probability of agreement between each combination. That's where the expected matrix comes in.
Then there is the degree of agreement between two ratings. You usually want to assign more agreement to an A/B or B/C than to an A/C. And the differences may not be linear at all. The weight matrix allows you to account for the differences.
The observed matrix is simply the count of observations for each possible pair of ratings.
A final note: there are several variations of Kappa, such as quadratic weighted kappa. Most of them should work well for comparing grades. |
H: Finding the duplicate values between all columns and sort in new column with Pandas?
I have this DataFrame:
CL1 CL2 CL3 CL4
0 a a b f
1 b y c d
2 c x d s
3 x s x a
4 s dx s s
5 a c d d
6 s dx f d
7 d dc g g
8 f x s t
9 c x a d
10 x y y a
11 c a x y
12 f s d s
13 d d w a
Intention:
With the help of Pandas:
1- I want to search and find the similar values between all columns (CL1-CL4) and sort them into a new column (SIM).
2- I want to find the non-similar values between columns and sort them into another column (NON-SIM).
What I want:
How can I do that? With df.pivot_table I was not successful.
AI: Given your input data is saved in a variable df, I count the values which occur in all 4 unique columns as follows:
import pandas as pd
import numpy as np
output = (
df
.melt()
.drop_duplicates()
.groupby("value")
.agg(count=("value", "count"))
.reset_index()
)
output["SIM"] = np.where(output["count"] == 4, "SIM", "NON-SIM")
output = output.pivot(columns="similarity", values="value")
print(output)
similarity NON-SIM SIM
0 NaN a
1 b NaN
2 c NaN
3 NaN d
4 dc NaN
5 dx NaN
6 f NaN
7 g NaN
8 NaN s
9 t NaN
10 w NaN
11 x NaN
12 y NaN |
H: Keras next(); what does (2, 256, 128, 128, 3) mean
I have used the next() method on a Keras generator. I then turned this into a numpy array to get the shape:
data = generator.next()
data = np.array(data)
print(data.shape)
>>> (2, 256, 128, 128, 3)
256 is my batch size, and (128, 128, 3) is my image size. But what does 2 represent?
AI: The Keras generator returns a tuple of (data, labels) - that is where the 2 comes from.
First unpack it into the images and the labels, then use them.
yielding tuples of (x, y) where x is a numpy array containing a batch of images with shape (batch_size, *target_size, channels) and y is a numpy array of corresponding labels
img, labels = traindata.next()
img.shape, labels.shape
Output - ((16, 224, 224, 3), (16, 2))
We are also supposed to return a Tuple of batches of images/labels in case we build a custom generator for Keras. |
H: Individual models gives quite same distribution on Test set, whereas Ensembling gives better result but very different distribution
I am working on a binary classification problem with unbalanced data (17% for positive class).
The problem is as following:
My three individual models when predicting on the test set (for which I don't have the labels) gives quite similar distribution as for Train set.
But ensembling these models, while giving a slightly better result (F1-score), drastically changes the distribution on the test set, going from ~20% to 5%.
My question is:
I am confused between choosing the best individual model, which maintains almost the same distribution but loses some efficiency,
Or
The ensembled one, which gives a really different distribution.
And I have no idea about the test set distribution.
Thanks for any help
AI: Depending on the size of the test and training sets, and how they were sampled, their distributions may not be the same. If the sets aren't very big or weren't randomly sampled, their distributions won't necessarily match, and may not correspond to the distribution of the population.
One way to test this is to compare the distributions of the other variables in the test and training sets.
In terms of selecting individual models or the ensemble for prediction, I would recommend trying the individual models and the ensembled combination in k-fold validation, and selecting whichever approach results in the best performance. This way you are using the entire training set for training and validation, and the assessment of this performance should be the best approximation of test set performance. |
H: How to properly set up neural network training for stable accuracy and loss
I have a DenseNet121 implemented in Pytorch for image classification. For now, the training set-up is pretty straightforward:
the data is loaded. An important characteristic here is that the validation data is fixed from the outset and can never change. The rest of the data I split into training and testing. None of the data sets overlap!
for every epoch iterate through the training data loader, calculating loss, optimizing etc.
every 100 batches evaluate the loss using the validation data loader
at the end of an epoch compare the current state’s loss on validation data with the state that previously had the best validation loss (for the first epoch just compare this with a random high number like 1e5) and save the current state if it is better, or keep the older state.
after all epochs are finished save the state with the lowest validation error as the best model
the best model is then applied to test data. Accuracies are calculated and ROC curves drawn
I was wondering how to extend my set-up to make it a proper statistical experiment set-up, with the goal of getting stable accuracy and loss results ie I want to be able to say that my model gives consistently the same results. I was thinking along the lines of running the testing step say 10 times and averaging the error? Or do you see some deficiency in the training that I could improve to improve stability?
Thanks in advance
AI: The learning rate is one of those first and most important parameters of a model, and one that you need to start thinking about pretty much immediately upon starting to build a model. It controls how big the jumps your model makes, and from there, how quickly it learns.
There is a learning rate technique called Cyclical Learning Rates.
Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without the need to tune, and often in fewer iterations. Check out the paper Cyclical Learning Rates for Training Neural Networks, and for a PyTorch implementation with a detailed blog post see Adaptive - and Cyclical Learning Rates using PyTorch.
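One way to add this to a PyTorch training loop like the one you describe (just a sketch; model, train_loader and criterion stand for your own network, training DataLoader and loss function, and SGD is used purely as an example optimizer):
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-2,
                                              step_size_up=2000)
for data, target in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
    scheduler.step()   # advance the cyclical learning rate once per batch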
By this small trick, you can build a stable version of your model. Best of luck. |
H: "Rare words" on vocabulary
I am trying to create a sentiment analysis model and I have a question.
After I preprocessed my tweets and created my vocabulary, I noticed that I have words that appear fewer than 5 times in my dataset (and many of them appear only once). Many of them are real words and not gibberish. My thinking is that if I keep those words they will get wrong "sentimental" weights and will make my model worse.
Is my thinking right or am I missing something?
My vocab size is around 40000 words and those that are "rare" are around 10k. Should I "sacrifice" them?
AI: Instead of dropping rare words or incorporating them risking their scarcity in the training data leads to poor predictions, you can opt for a third alternative: using a subword vocabulary.
You can use approaches like byte-pair encoding (BPE) to extract a subword vocabulary, that removes the out-of-vocabulary word problem and reduces data sparsity in general. There is the canonical python implementation as well as the popular implementation by Google called sentencepiece. |
H: How to preprocess with NLP a big dataset for text classification
TL;DR
I've never done nlp before and I feel like I'm not doing it in the good way. I'd like to know if I'm really doing things in a bad way since the beginning or there's still hope to fix those problems mentioned later.
Some basic info
I'm trying to do some binary text classification for a university task and I'm struggling at the classification because the preprocessing with NLP is not being the best.
First of all, it's important to note that I need to have efficiency in mind when designing things because I'm working with very large datasets (>1M texts) that are loaded in memory.
These datasets contain data related to news articles with title, summary, content, published_date, section, tags, authors...
Also, it's important to mention that as this task being part of a learning process I'm trying to create everything by myself instead of using external libraries (only for boring or complex tasks)
Procedure
The basic procedure for the NLP preprocessing is:
Feature extraction -> str variable with title, summary and content attributes joined in the same string
Lemmatization -> same str as input but with lemmatized words
Stopword filtering
Corpus generation -> dict object with lemmatized words as key and the index they're being inserted in the dictionary as value.
After generating the corpus with all those samples, we can finally safely vectorize them (which is basically the same process as above but without the building corpus step).
As you might guess, I'm not strictly following the basic bag-of-words (BOW) idea since I need to reduce memory consumption, and this raises two problems when trying to work with AI algorithms like DecisionTreeClassifier from scikit-learn.
Problems
Some of the problems I've observed till the moment are:
Vectors generated from those texts need to have the same dimension. Does padding them with zeroes make any sense?
Vectors for prediction needs also to have the same dimension as those from the training
At prediction phase, those words that hasn't been added to the corpus are ignored
Also, the vectorization doesn't make much sense since they are like [0, 1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] and this is different to [1, 0, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] even though they both contain the same information
AI: Let me first clarify the general principle of classification with text data. Note that I'm assuming that you're using a "traditional" method (like decision trees), as opposed to Deep Learning (DL) method.
As you correctly understand, each individual text document (instance) has to be represented as a vector of features, each feature representing a word. But there is a crucial constraint: every feature/word must be at the same position in the vector for all the documents. This is because that's how the learning algorithm can find patterns across instances. For example the decision tree algorithm might create a condition corresponding to "does the document contains the word 'cat'?", and the only way for the model to correctly detect if this condition is satisfied is if the word 'cat' is consistently represented at index $i$ in the vector for every instance.
For the record this is very similar to one-hot-encoding: the variable "word" has many possible values, each of them must be represented as a different feature.
This means that you cannot use a different index representation for every instance, as you currently do.
Vectors generated from those texts need to have the same dimension. Does padding them with zeroes make any sense?
As you probably understood now, no it doesn't.
Vectors for prediction needs also to have the same dimension as those from the training
Yes, they must not only have the same dimension but also have the same exact features/words in the same order.
At prediction phase, those words that hasn't been added to the corpus are ignored
Absolutely, any out of vocabulary word (word which doesn't appear in the training data) has to be ignored. It would be unusable anyway since the model has no idea which class it is related to.
Also, the vectorization doesn't make much sense since they are like [0, 1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] and this is different to [1, 0, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] even though they both contain the same information
Indeed, you had the right intuition that there was a problem there, it's the same issue as above.
Now of course you go back to solving the problem of fitting these very long vectors in memory. So in theory the vector length is the full vocabulary size, but in practice there are several good reasons not to keep all the words, more precisely to remove the least frequent words:
The least frequent words are difficult to use by the model. A word which appears only once (btw it's called a hapax legomenon, in case you want to impress people with fancy terms ;) ) doesn't help at all, because it might appear by chance with a particular class. Worse, it can cause overfitting: if the model creates a rule that classifies any document containing this word as class C (because in the training 100% of the documents with this word are class C, even though there's only one) and it turns out that the word has nothing specific to class C, the model will make errors. Statistically it's very risky to draw conclusions from a small sample, so the least frequent words are often "bad features".
You're going to like this one: texts in natural language follow a Zipf distribution. This means that in any text there's a small number of distinct words which appear frequently and a high number of distinct words which appear rarely. As a result removing the least frequent words reduces the size of the vocabulary very quickly (because there are many rare words) but it doesn't remove a large proportion of the text (because the most frequent occurrences are frequent words). For example removing the words which appear only once might reduce the vocabulary size by half, while reducing the text size by only 3%.
So practically what you need to do is this:
Calculate the word frequency for every distinct word across all the documents in the training data (only in the training data). Note that you need to store only one dict in memory so it's doable. Sort it by frequency and store it somewhere in a file.
Decide a minimum frequency $N$ in order to obtain your reduced vocabulary by removing all the words which have frequency lower than $N$.
Represent every document as a vector using only this predefined vocabulary (and fixed indexes, of course). Now you can train a model and evaluate it on a test set.
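A minimal sketch of these three steps (train_docs is assumed to be an iterable of token lists, and N the minimum frequency you picked):
from collections import Counter

# Step 1: word frequencies computed over the training documents only
counter = Counter()
for doc in train_docs:
    counter.update(doc)

# Step 2: reduced vocabulary, with a fixed index per kept word
N = 2
vocab = {w: i for i, w in enumerate(sorted(w for w, c in counter.items() if c >= N))}

# Step 3: fixed-length bag-of-words vector for any document
def vectorize(doc):
    vec = [0] * len(vocab)
    for token in doc:
        idx = vocab.get(token)   # out-of-vocabulary words are simply ignored
        if idx is not None:
            vec[idx] += 1
    return vec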
Note that you could try different values of $N$ (2,3,4,...) and observe which one gives the best performance (it's not necessarily the lowest one, for the reasons mentioned above). If you do that you should normally use a validation set distinct from the final test set, because evaluating several times on the test set is like "cheating" (this is called data leakage). |
H: Does overfitting depend only on validation loss or both training and validation loss?
There are several scenarios that can occur while training and validating:
Both training loss and validation loss are decreasing, with the training loss lower than the validation loss.
Both training loss and validation loss are decreasing, with the training loss higher than the validation loss.
Training loss decreasing, but validation loss is increasing.
I am aware that overfitting occurs in scenario 3, but does overfitting occur in scenario 1? If so, does this mean that overfitting only occurs when either scenario 1 or scenario 3 occur? Otherwise, if overfitting only occurs in scenario 3, does this mean that overfitting only occurs when validation loss is increasing?
AI: In my opinion, only case 3 should be considered overfitting. As @stans has mentioned, there is not a very rigorous definition of overfitting so other people might think differently.
I wouldn't say the point where the validation loss stops decreasing is where bias and variance are minimized since there is a trade-off between bias and variance:
A constant model will have very low variance, but very high bias.
An overfitting model will have very low bias, but very high variance.
The point where the validation loss starts increasing can be considered optimal in terms of the sum of squared bias and variance, that is, an optimum of the generalization error. |
H: Unigram tokenizer: how does it work?
I have been trying to understand how the unigram tokenizer works since it is used in the sentencePiece tokenizer that I am planning on using, but I cannot wrap my head around it.
I tried to read the original paper, which contains so little details that it feels like it's been written explicitely not to be understood. I also read several blog posts about it but none really clarified it (one straight up admitted not undertanding it completely).
Can somebody explain it to me? I am familiar with the EM algorithm but I cannot see how it related to the loss function in order to find the subwords probabilities...
AI: The explanation in the documentation of the Huggingface Transformers library seems more approachable:
Unigram is a subword tokenization algorithm introduced in Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018). In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in the transformers, but it’s used in conjunction with SentencePiece.
At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was to be removed from the vocabulary. Unigram then removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, i.e. those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized.
Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary:
["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],
"hugs" could be tokenized both as ["hug", "s"], ["h", "ug", "s"] or ["h", "u", "g", "s"]. So which one to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that the probability of each possible tokenization can be computed after training. The algorithm simply picks the most likely tokenization in practice, but also offers the possibility to sample a possible tokenization according to their probabilities.
Those probabilities are defined by the loss the tokenizer is trained on. Assuming that the training data consists of the words $x_1, \dots, x_N$ and that the set of all possible tokenizations for a word $x_i$ is defined as $S(x_i)$, then the overall loss is defined as
$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$
There are some parts that are not very detailed, though, for instance, how it "initializes the base (seed) vocabulary to a large number of symbols".
This part is more clearly explained in the original article by the end of section 3.2:
There are several ways to prepare the seed vocabulary. The natural choice is to use the union of all characters and the most frequent substrings in the corpus. Frequent substrings can be enumerated in $O(T)$ time and $O(20T)$ space with the Enhanced Suffix Array algorithm (Nong et al., 2009), where T is the size of the corpus.
About the details of the expectation maximization algorithm used to compute probabilities, this is what happens:
[Expectation] Estimate each subword probability by the corresponding frequency counts in the vocabulary
[Maximization] Use the Viterbi algorithm to segment the corpus, returning the optimal segments.
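To make the "pick the most likely tokenization" idea concrete, here is a brute-force toy sketch with made-up subword probabilities (real implementations use the Viterbi algorithm instead of enumerating every segmentation):
import math

probs = {"h": 0.06, "u": 0.05, "g": 0.04, "s": 0.07, "ug": 0.03, "un": 0.02, "hug": 0.025}

def segmentations(word):
    # enumerate every way to split `word` into in-vocabulary subwords
    if not word:
        yield []
        return
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if prefix in probs:
            for rest in segmentations(word[i:]):
                yield [prefix] + rest

best = max(segmentations("hugs"), key=lambda seg: sum(math.log(probs[t]) for t in seg))
print(best)   # ['hug', 's'] wins with these made-up probabilities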
You can check the details, together with practical examples, in this tutorial. |
H: Filenotfounderror: [errno 2] no such file or directory: 'nsr001.ecg'
I am trying to save contents of physiobank Normal Sinus Rhythm RR Interval Database into a numpy array but I keep getting an error:
Traceback (most recent call last):
File "AverageRRI.py", line 20, in <module>
averageArray = np.fromfile(file,dtype=float)
FileNotFoundError: [Errno 2] No such file or directory: 'nsr001.ecg'
but the file does exist in the directory.
import os
import numpy as np
for root, dirs, files in os.walk('normal-sinus-rhythm-rr-interval-database-1.0.0'):
for file in files:
if file.endswith(".ecg"):
print(file)
averageArray = np.fromfile(file,dtype=float)
print(averageArray)
When I add the pathname like:
averageArray = np.fromfile('normal-sinus-rhythm-rr-interval-database-1.0.0/nsr001.ecg',dtype=float)
print(averageArray)
It works.
Thanks so much!
AI: You just need to add the root directory in front of the filename so that the full filepath is correct. For this you can use os.path.join:
import os
import numpy as np
for root, dirs, files in os.walk('normal-sinus-rhythm-rr-interval-database-1.0.0'):
for file in files:
if file.endswith(".ecg"):
print(os.path.join(root, file))
averageArray = np.fromfile(os.path.join(root, file), dtype=float) |
H: How is the validation set processed in PyTorch?
Say, one uses the MNIST dataset and splits the provided training data of size 60,000 into a training set (50,000) and a validation set (10,000). The provided test data of size 10,000 is used as the test set. The ML algorithm is a neural network.
The training set is processed (in minibatches) by the code below. First, one sets the gradients to zero. Then, the model makes a prediction, and the loss is calculated. Next, the gradients are computed, and the weights are updated via backpropagation.
def train(data, label):
model.zero_grad()
prediction = model(data)
loss = loss_function(prediction, label)
loss.backward()
optimizer.step()
return loss
As I understand, the validation set is used for hyperparameter tuning, whereas the test set is used for evaluation of the final model (as a reference to compare performance to other models). The accuracy on the test set is measured after "freezing" the model, like in the code below.
for parameter in model.parameters():
parameter.requires_grad = False
model.eval()
So, my questions are:
When processing the validation set, is it correct to use the code of
train() or must one omit the backpropagation?
If one assumes that a neural network applies dropout, is dropout
enabled or disabled while processing the validation set?
AI: With regards to the backpropagation: since the network is not trained on the validation set and the parameters are not updated, we do not have to use backpropagation. The calculation of gradients can be disabled by using the torch.no_grad() context manager from pytorch. By not calculating gradients the model is able to process data faster. Dropout is turned off when setting the model in evaluation mode using model.eval() and is often used in combination with torch.no_grad() to turn off the computation of gradients, see also this stackoverflow answer and the pytorch documentation.
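A minimal sketch of such a validation pass (val_loader is assumed to be your validation DataLoader; model and loss_function are the ones from your question):
import torch

model.eval()                       # disables dropout (and, if present, puts batch norm in inference mode)
val_loss = 0.0
with torch.no_grad():              # no gradients are tracked, and no backward()/step() is called
    for data, label in val_loader:
        prediction = model(data)
        val_loss += loss_function(prediction, label).item()
val_loss /= len(val_loader)
model.train()                      # switch back before resuming training
Using torch.no_grad() here both skips the backpropagation bookkeeping and reduces memory usage during validation. |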
H: Access keys of pandas dataframe when using groupby
I have the following database:
And I would like to know how many times a combination of BirthDate and Zipcode is repeated throughout the data table:
Now, my question is: How can I access the keys of this output? For instance, how can I get Birthdate=2000101 ZipCode=8002, for i = 0?
The problem is that this is a 'Series' object, so I'm not able to use .columns or .loc here.
AI: You may use
df.groupby(['BirthDate', 'ZipCode']).size().reset_index().rename(columns={0: 'n'})
and now you have a data frame that you can easily manipulate. |
H: Deploying ML/Deep Learning on AWS Lambda for Long-Running Training, not just Inference
Serverless technology can be used to deploy ML models to production, since the deployment package sizes can be compressed if too large (or built from source with unneeded dependencies stripped).
But there is also the use case of deploying ML for training, not just inference. For example, if a company wanted to allow power users to retrain a model from the front-end.
Is this feasible for Lambda given the long training times?
Whereas latency wouldn't be issue (cold start delay is fine) the runtime could be fairly long (hours).
AI: A single Lambda invocation is capped at 15 minutes of execution time, so training runs that last hours are not a good fit for Lambda itself. I've used ECS (Fargate) to train models instead; the retraining trigger could be the start of an ECS task. While ECS has a little startup latency, it handles long runtimes well.
You can then serve the model via a Lambda. |
H: Transformers understanding
I am in big trouble: I don't understand transformers. I understand embeddings, RNNs, GANs, even attention, but I don't understand transformers. Approximately 2 months ago I decided to avoid using transformers because I found them hard, but I can't avoid them anymore. Please help me - I want to understand how transformers work and be able to use them. How can I start working with them? Besides understanding their idea in general, I also want to be able to write/implement them using Keras/TensorFlow.
Of course I tried to read some tutorials, but I don't understand them anyway.
AI: These are the answers to the specific doubts that you pointed out in the comments:
Transformers use many building blocks, like self-attention, layer normalization, residual connections, etc. Tutorials like The illustrated transformer are very useful to understand these blocks and how they fit together.
The role of the softmax is to normalize the sum up to 1. In the example, you can see that the softmax-normalized values are 0.88 and 0.12, which add up to 1. The result of the softmax is then used as weights for the values, which are then added together.
The decoder is very similar to the encoder, especially at training time. The main differences are that the queries are taken from the target side while keys and values are from the source side and that the attention is masked to avoid the prediction for time t to be dependent on the tokens at the same and future positions.
The decoder receives both the output of the encoder and the target sequence, either the full sequence at training time or the partial sequence at inference time.
At training time, the decoder receives the whole target sentence tokens. At inference time, we don't have the target sentence; instead, we use the model autoregressively: at each decoding step we pass as input the previous predictions, get the prediction for the next token, concatenate it with the previous step input and use it as input for the next step; at the first step of the autoregressive decoding we simply pass as input a sequence with just the special token <s>. |
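As a toy illustration of the softmax weighting described in the second point above (a sketch with made-up numbers, mirroring the 0.88/0.12 example):
import numpy as np
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()
scores = np.array([14.0, 12.0])              # scaled dot products q·k1, q·k2
weights = softmax(scores)                    # ~[0.88, 0.12], sums to 1
values = np.array([[1.0, 0.0], [0.0, 1.0]])
output = weights @ values                    # weighted sum of the value vectors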
H: Why is np.where not returning '1'? Only returns '0'
This code should return a new column called orc_4 with the value of 1 if the value of the row in df['indictment_charges'] contains 2907.04 or 0 if not.
Instead it is returning all 0's
for index, item in enumerate(df.indictment_charges):
s = '2907.04'
if s in str(item):
df['orc_4'] = np.where(item == s, 1, 0)
Why won't it return 1?
Example output for df.indictment_charges:
['2903.112907.022907.042907.04']
AI: You are first checking if the item contains the string, but then in np.where you are checking if the values are equal (item == s), which is obviously different. In addition, you set the whole column equal to the value from np.where (overwriting it on every iteration), which results in the whole column getting the value based on the final row of the dataframe.
To avoid looping over the rows (which is relatively quite slow) you can use pandas.Series.str.contains to check if a string contains a certain value like this:
df["orc_4"] = np.where(df["indictment_charges"].str.contains("2907.04"), 1, 0) |
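One caveat worth noting (an addition, not part of the original answer): str.contains interprets its pattern as a regular expression by default, so the dot in "2907.04" matches any character. To match the literal string you can pass regex=False:
df["orc_4"] = np.where(df["indictment_charges"].str.contains("2907.04", regex=False), 1, 0)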
H: Information leakage when train/test are truly i.i.d.?
I am well aware that to avoid information leakage, it is recommended to fit any transformation (e.g., standardization or imputation based on the median value) on the training dataset and apply it to the test datasets. However, I am not clear on what the risk is of applying these transformations to the entire dataset prior to the train/test split if the data is i.i.d. and the train/test split is indeed random.
For example, if the original dataset has certain statistical characteristics (e.g., mean, median, and std), then I would expect a random data split to generate train and test datasets that have the same statistical characteristics. Therefore, standardizing the entire dataset and then splitting should produce the same results as splitting the dataset, standardizing based on the training dataset and transforming the test dataset. The same argument can be made for imputation based on the median value.
Am I missing something?
AI: Well, keep in mind that when you standardize/impute data you're estimating parameters. Given the conditions that you've defined and having enough data such that the estimates are good, then I don't think it should matter to use the training data or all the data (as a matter of fact, the estimate of the parameter using training data should be very similar to the parameter using all the data).
However, when the datasets are small these estimates may have high variance, and if you want your cross-validation to be reliable you might want to take it into account. In general, I think for scaling and missing imputation this should not be an issue, but if you're doing some things that are more prone to overfitting like target encoding, or imputing using a model instead of a single parameter, then you should be more careful.
In order to not even think about having enough data or not, the cross-validation framework tries to treat the test data as what the serving data should be when putting the model into production, and then you'll always be alright. |
H: Importance of normal Distribution
I have been reading about probability distributions lately and saw that the Normal Distribution is of great importance. A couple of the articles stated that it is advised for the data to follow normal distribution. Why is that so? What upper hand do I have if my data follows normal distribution and not any other distribution.
AI: This is an interesting question, so sorry for a long-winded answer. The tl;dr is that it is a mix of some real applicability, a theoretical basis, historical baggage (due to limited compute power) and an obsession with analytically tractable models (instead of simulation/computational models). We should be very careful and discerning when using it in real problems.
Details
The importance of normal distribution comes from the following facts/observations,
Many naturally occurring phenomena seem to follow a normal distribution when the sample size is large (more on this below).
In Bayesian statistics, the normal distribution is self-conjugate: with a normal likelihood (known variance), a normal prior on the mean gives a normal posterior. This makes computations easier.
Somewhat related, the central limit theorem tells us that the average of samples from (almost) any distribution with finite variance (no fat tails) approximately follows a normal distribution as the sample size grows. So the normal distribution is useful and provides a theoretical basis for making population-level parameter estimates from samples (think of election predictions). But again, this assumes the underlying data come from a well-behaved distribution in which extreme values are very unlikely.
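A quick simulation of the central limit theorem point above (a sketch; the exponential distribution is just an example of a clearly non-normal source):
import numpy as np
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=(10_000, 50))   # 10,000 samples of size 50
means = samples.mean(axis=1)                              # the sample means
# a histogram of `means` is approximately bell-shaped, even though the source is skewed
print(means.mean(), means.std())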
In short, the normal distribution can be thought of as a good base case: it is analytically tractable, easy to code up, and also seems to be applicable to many models of nature. A somewhat loose analogy: in physics we use linear second-order differential equations to study many systems. Not all systems actually are linear second-order, but under some constraints it is a reasonable approximation that is easier to analyze and code up.
And the over-use of the normal distribution everywhere is actually controversial.
As we now have more computing power and access to Monte Carlo simulation-based methods, we are no longer limited to analytically tractable distributions. We can use distributions that more accurately fit reality.
Normal distributions are useful for natural phenomena (heights of students in a class) but are often wildly inaccurate for modelling man-made systems (income of people in a town, potential swings of stock indices during a panic).
For example, many critics of probabilistic financial models observe that the underlying models use the normal distribution, but real market swings are mostly fat-tailed (distributions where extreme outcomes are more likely than under a normal distribution). If you want to go deeper into this, start with Statistical Consequences of Fat Tails by Nassim Nicholas Taleb. Fun fact: if you look at the wild swings in the price of GameStop stock from the r/wallstreetbets saga, Taleb pointed out that the swings are not actually wild if you consider a fat-tailed distribution. |
H: Python stemmer for Georgian
I am currently working with Georgian texts processing. Does anybody know any stemmers/lemmatizers (or other NLP tools) for Georgian that I could use with Python.
Thanks in advance!
AI: I don't know any Georgian stemmer or lemmatizer. I think, however, that you have another option: to use unsupervised approaches to segment words into morphemes, and use your linguistic knowledge of Georgian to devise some heuristic rules to identify the stem among them.
This kind of approach consists of a model trained to identify morphemes without any labels (i.e. unsupervisedly). The most relevant Python package for this is Morfessor. You can find its theoretical foundations in these publications: Unsupervised discovery of morphemes; Semi-supervised learning of concatenative morphology.
Also, there is a Python package called Polyglot that offers pre-trained Morfessor models, including one for Georgian. Therefore, my recommendation is for you to use Polyglot's Georgian model to segment words into morphemes and then write some rules by hand to pick the stem among them.
You should be able to evaluate the feasibility of this idea by adapting this example from Polyglot's documentation from English to Georgian (by changing the language code en and the list of words):
from polyglot.text import Text, Word
words = ["preprocessing", "processor", "invaluable", "thankful", "crossed"]
for w in words:
w = Word(w, language="en")
print("{:<20}{}".format(w, w.morphemes)) |
H: Math of Logistic regression cost function
In the current scikit-learn documentation for binary Logistic regression there is the minimization of the following cost function:
$$\min_{w, c} \frac{1}{2}w^T w + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1)$$
Questions:
what is the $c$ term? It is not explained in the documentation
What is the cost function minimized when LogisticRegression(multiclass=multinomial) is used instead?
AI: The $c$ (lowercase) term is the bias or intercept added to the model. This is similar to the intercept we add in the case of linear regression. The library allows you to set the bias term to zero too (via fit_intercept=False).
When set to the multinomial model, the cost function minimizes the cross-entropy loss. I always go back to this useful explanation of how cross-entropy loss works and how it is minimized. In other words, a multinomial logistic regression works like a single-layer neural network with a softmax output. |
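For reference, the multinomial version minimizes a softmax cross-entropy plus the same L2 penalty; roughly (up to scikit-learn's exact parameterization), with $w_k, c_k$ the weight vector and intercept of class $k$:
$$\min_{W, c}\; \frac{1}{2}\sum_{k} w_k^T w_k \;+\; C \sum_{i=1}^n -\log\frac{\exp(X_i^T w_{y_i} + c_{y_i})}{\sum_{k}\exp(X_i^T w_k + c_k)}$$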
H: Dimensionality reduction convolutional autoencoders
I don't understand how convolutional autoencoders achieve dimensionality reduction. For FFNN based autoencoder, the reduction is easy to understand: the input layer has N neurons, and the hidden ones have M neurons, where N is greater than M.
Instead, in a convolutional autoencoder, the input image is wide and thin, and it becomes small and thick. This can result in a representation containing more values than the original input.
I report a practical example to explain better what I mean:
# INPUT: 28 x 28 x 1 (wide and thin)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img) #28 x 28 x 32
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) #14 x 14 x 32
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) #14 x 14 x 64
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) #7 x 7 x 64
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2)
# OUTPUT: 7 x 7 x 128 (small and thick)
In this example, we start from a 28x28 single-channel image (784), and the encoder output will be 7x7x128 (6272). Is this a dimension reduction?
AI: Why don't you use a lower number of filters in the last convolution? Instead of 128 you can just choose whatever number you want, e.g. 10.
Also, normally after the convolutional (and pooling layers), you flatten the output (therefore losing the spatial information) and then project with a dense layer onto the final representation space. You can control the dimensionality of the representation space with the shape of the matrices of the last dense layer. |
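A minimal sketch of such a bottleneck on top of the layers above (sizes are illustrative; Flatten, Dense and Reshape come from tensorflow.keras.layers):
conv3 = Conv2D(16, (3, 3), activation='relu', padding='same')(pool2)   # 7 x 7 x 16 instead of 7 x 7 x 128
flat = Flatten()(conv3)                        # 7*7*16 = 784 values
encoded = Dense(32, activation='relu')(flat)   # bottleneck: 32 dimensions, much less than 784
# the decoder would then start with Dense(7*7*16) followed by Reshape((7, 7, 16))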
H: Semantic segmentation of an image with multiple labels per pixel
I am building a model for a multiclass sematic segmentation of a skin disease. At a moment I am using U-Net for binary classifications.
In this multiclass problem I have the following cases. There are four types of skin damage. There are four degrees of damage for each skin damage type: healthy, mild, moderate, severe. Healthy skin is A0_B0_C0_D0, "mild b and severe c" corresponds to A0_B1_C3_D0. I would like to train a single multiclass model to predict a dictionary {A: scoreA, B: scoreB, C: scoreC, D: scoreD}. Given that
4(damage types)**4(damage degrees) = 256 combinations
Do I need to train a 256 class model? My concern is that I only have 200 images in my training set and some of the combinations will not appear at all in the training set. Is there a way to train a 12 class model returning four values corresponding to the most likely damage type for each damage type?
Update. Consider an RGB image. You want to classify the brightness of each color channel into categories:
Intensity class
0-63 -> "0"
64-127 -> "1"
128-191 -> "2"
192-255 -> "3"
Then each pixel belongs to one of 64 classes (r0g0b0, r0g0b1, ..., r3g3b2, r3g3b3). The training set has pixels of colors r0, r1, r2, r3, g0, g1, g2, g3, b0, b1, b2, b3, but it has no pixels of color r0g1b2 or of color r2g3b0. Three separate models (one per channel) will easily learn to predict the channel category, but a 64-class model will never output the r0g1b2 and r2g3b0 classes because it has never seen those classes. How can I overcome this problem?
AI: Three separate models (one per channel) will easily learn to predict the channel category, but a 64-class model will never output the r0g1b2 and r2g3b0 classes because it has never seen those classes. How can I overcome this problem?
The only way to solve this problem is to use channels (one for each skin damage type) for each pixel, and treat it as a regression rather than a classification.
In other words, use a multi output regression.
For your output, use a convolutional layer that gives the same number of rows and columns as the input, but 4 channels (one for each skin damage type).
Your ground truth (y_true) should be an array the same width and height as your input, but with 4 channels (one for each skin damage type) with each holding the severity rating of that pixel for the corresponding skin damage type.
Your loss function could be something used for regression such as MAE (mean absolute error).
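A minimal sketch of such an output head and loss in Keras (adapt to your framework; the tensors x and input_img are placeholders for your U-Net decoder output and model input):
outputs = Conv2D(4, (1, 1), activation='linear', padding='same')(x)   # H x W x 4 severity map
model = Model(inputs=input_img, outputs=outputs)
model.compile(optimizer='adam', loss='mae')   # regression loss per pixel and channel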
This is because a classification will output 0 for classes it has never seen samples of since that is what minimizes the loss. A regression, on the other hand, will treat the target variable as continuous, and even if it hasn't seen examples of all severity levels of Type A skin damage (for example), it can still output them.
You could then choose thresholds to classify the outputs into the corresponding labels.
Ex. for each output: <0.5 is 0, 0.5-1.5 is 1, 1.5-2.5 is 2, >2.5 is 3,
so [0.1, 1.6, 0.7, 2.8] means {A: 0, B: 2, C: 1, D: 3}
This is also more useful and easier to interpret in a clinical context as the severity is ordinal rather than just categorical, so a doctor would be better off knowing that a particular mild case (for example) had a prediction of 1.4 rather than 0.6, because they both correspond to a mild prediction but a 1.4 is closer to moderate and may be treated differently than a 0.6
This is an interesting problem from a learning and research perspective, but I develop deep learning based prognosis and diagnosis models using medical images at a big pharma/life sciences company and can tell you that a dataset of 200 images for a task this complex will be insufficient for decent performance or reliable results. Multi-fold training/validation/testing AND some slight image augmentation will be necessary, but probably still insufficient. |
H: Backpropagation of a transformer
When a transformer model is trained, there is a linear layer at the end of the decoder which, as I understand it, is a fully connected neural network. During training, when the loss is obtained it is backpropagated to adjust the weights.
My question is: how deep does the backpropagation go?
Does it happen only up to the linear layer weights (the fully connected net)?
Or does it extend to all the decoder layers' weight matrices (Q, K, V) and feed-forward layer weights?
Or does it even extend to the encoder + decoder weights?
Please help me with the answer.
AI: Backpropagation extends to the full model, through all decoder and encoder layers up to the embedding tables. |
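A quick way to see this in PyTorch (a sketch using the built-in nn.Transformer, which operates on already-embedded inputs; the embedding tables would be separate nn.Embedding modules, but the same applies to them):
import torch
import torch.nn as nn
model = nn.Transformer(d_model=32, nhead=4, num_encoder_layers=2, num_decoder_layers=2)
src = torch.rand(10, 2, 32)   # (source length, batch, d_model)
tgt = torch.rand(9, 2, 32)    # (target length, batch, d_model)
model(src, tgt).sum().backward()
# every encoder and decoder parameter (Q/K/V projections, feed-forward, layer norms) has a gradient
print(all(p.grad is not None for p in model.parameters()))   # True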
H: What are the inputs to the first decoder layer in a Transformer model during the training phase?
I am trying to wrap my head around how the Transformer architecture works. I think I have a decent top-level understanding of the encoder part, sort of how the Key, Query, and Value tensors work in the MultiHead attention layers. What I am struggling with is the decoder part, specifically the inputs to the very first decoder layer.
I understand that there are two things. The output of the final encoder layer, but before that an embedded (positional encoding + embedding) version of... well something.
In the original paper in Figure 1, they mention that the first decoder layer input is the Outputs (shifted right). I am a little confused on what they mean by "shifted right", but if I had to guess I would say the following is happening
Input: <Start> How are you <EOS>
Output: <Start> I am fine <EOS>
and so the input to the first decoder layer will be [<Start> I am fine].
What is the need for shifting the sequence? Why would we not just input the target sequence itself? I was thinking maybe it's because of the auto-regressive nature of the decoder part, but then the only difference between the sequences would be the <EOS> token if I am seeing this correctly.
As you can probably tell I am a little bit confused by how some of the parts work, so any help to get to a better understanding would be much appreciated.
AI: Following your example:
The source sequence would be How are you <EOS>
The input to the encoder would be How are you <EOS>. Note that there is no <start> token here.
The target sequence would be I am fine <EOS> . The output of the decoder will be compared against this in the training.
The input to the decoder would be <start> I am fine (the final <EOS> does not need to be fed in, since there is nothing left to predict after it).
Notice that the input to the decoder is the target sequence shifted one position to the right by the token that signals the beginning of the sentence. The logic of this is that the output at each position should receive the previous tokens (and not the token at the same position, of course), which is achieved with this shift together with the self-attention mask. |
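In code, teacher forcing during training just shifts the target (toy token lists for illustration):
target        = ["I", "am", "fine", "<EOS>"]       # what the decoder must predict
decoder_input = ["<start>", "I", "am", "fine"]     # the target shifted right by one
# position t of decoder_input (together with the masked self-attention) is used to predict target[t]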
H: Feature and the Gaussian Distribution (classification)
I have a question regarding variable following or not a random distribution.
I selected 4 features negatively correlated to the label (Fraud/No Fraud). The notebook I'm taking the inspiration from plotted the distribution of these feature regarding the label. What came out is that my feature 1 (Fraud only) is following a Normal Distribution.
Here are my questions :
Why is it important to know if my feature is following a Normal Distribution ? -> My guess : some models need it for faster convergence or better results
Is there any interest to visualize my features as Non Fraud vs Fraud and compare the distributions ?
If my features are not following a Normal Distribution but are scaled, should I still force them to a Gaussian like shape ?
Thank you very much !
AI: It completely depends on the type of model. Some models need to represent the features with parameters: for example Naive Bayes with numerical features needs to have a way to calculate the probability based on the value, and the most common case is to assume that the features follow a normal distribution. On the other hand whether a feature is normally distributed or not doesn't matter at all for Decision Trees.
Yes, it can be very informative in order to know whether this feature is a good indicator or not: the more different the distributions, the more easily the algorithm can distinguish the classes using this feature.
No, don't change the distribution of a feature (unless you have a specific reason to do so, e.g. based on expert-knowledge for this particular data). Any way you would do that would certainly alter the overall distribution of the data and/or the way the features are related within an instance, so the model would not learn from the true distribution and therefore its predictions on real data would likely go wrong. |
H: Are all 110 million parameter in bert are trainable
I am trying to understand whether all 110 million parameters of the BERT uncased model are trainable. Are there any non-trainable parameters in the image below?
By trainable I mean that they are initialized with random weights and, during pretraining, these weights are updated through backpropagation.
AI: All these parameters are trainable.
Note that in normal Transformers it is typical to have fixed (non-trainable) positional embeddings, but in BERT they are learned.
Note also the "pooler" component, which is an extra projection that was not mentioned in the paper, but which the authors commented on later. |
H: Why does storing the output of .count() result in all 1's or 0's?
I have a dataframe which I run loc on to find all values within the loc parameters. Whatever the loc returns gets stored in a variable. I try to use this variable to store it in another dataframe column but it only returns 1's and 0's for the whole column.
How would I get what is being printed to be properly appended to a new dataframe?
I think the .count() has something to do with it, meaning it is changing the value of s4 and s5 when applied to the new dataframe.
for name in pros_unique:
s4 = df.loc[(df.pros_split == name) & (df.orc_4 == 1)].orc_4.count()
s5 = df.loc[(df.pros_split == name) & (df.orc_5 == 1)].orc_4.count()
tots_df['orc_4_totals'] = s4
tots_df['orc_5_totals'] = s5
# This prints out in the correct format
print(name, s4, s5)
Output of print statement:
name1 77 22
name2 62 32
name3 43 29
name4 1 1
name5 61 24
This is the tots_df dataframe:
pros orc_4_totals orc_5_totals
0 name1 1 0
1 name2 1 0
2 name3 1 0
3 name4 1 0
4 name5 1 0
AI: Because on each loop you assign a single value to the entire column:
tots_df['orc_4_totals'] = s4 # entire column orc_4_totals = s4!
(the same thing happens for the other column.)
The whole column then ends up with whatever value was assigned in the final loop iteration, which in your case is 1.
You need to insert single values instead, for each row of your target dataframe.
However, because I don't know if your target dataframe tots_df has an existing index, I would suggest first saving the values in a list, then assigning them:
orc_4_values = []
orc_5_values = []
for name in pros_unique:
# perform your checks, then assign using:
s4 = df.loc[(df.pros_split == name) & (df.orc_4 == 1)].orc_4.count()
s5 = df.loc[(df.pros_split == name) & (df.orc_5 == 1)].orc_4.count()
orc_4_values.append(s4)
orc_5_values.append(s5)
# Add to dataframe after the loop
tots_df['orc_4_totals'] = orc_4_values
tots_df['orc_5_totals'] = orc_5_values |
H: How to show two pictures in one cell in Jupyter Notebook? (matplotlib) (python)
As a beginner, can someone please help me with this? Is there any way to show both pictures in a single cell's output? My output could only display one picture. Thank you for your time and attention!
Here is my code:
from skimage import data
image_coffee = data.coffee()
image_horse = data.horse()
fig = plt.figure(dpi=100)
plt.imshow(image_coffee)
plt.imshow(image_horse)
AI: One way to do it (without getting into the inner workings of Jupyter notebooks) is to use two matplotlib Axes in one figure. Then you show one image on each of them:
import matplotlib.pyplot as plt
from skimage import data
image_coffee = data.coffee()
image_horse = data.horse()
fig, axs = plt.subplots(1, 2, figsize=(15, 8)) # one row of Axes, two columns = 2 plots
The axs variable is an array containing the two Axes, so just access each one and plot your image on it like this:
axs[0].imshow(image_coffee)
axs[1].imshow(image_horse)
If the plots don't pop automatically, either run plt.show() or make sure your notebook has executed %matplotlib inline in a cell. |
H: The upper range of a collected dataset is most likely accurate, but the rest may suffer biased omissions: How to call this phenomenon?
Background: In collecting a dataset of a specific unit ordered by a numeric variable, it is possible that the upper 'cloud' of the dataset is correct, while the 'tail' seems inaccurate.
I can thus trust the upper bounds of the dataset, while I deem the mid- and lower ranges to be rather questionable.
Question: Is there a data-science term for this phenomenon? If so, what is it called?
Example: I use various sources to gather a list of major publishers of scholarly journals. In total, I found 150 publishers that publish at least 50 journals.
At the top, I found the publisher AAA with 3.000 journals, BBB with 2.500 journals, CCC with 1.900 journals, DDD with 1.500 journals etc.
With my industry knowledge, I can confirm that the top 20 is likely to be accurate.
However, in the lower ranges, there are publishers like XXX with 51 journals, YYY with 50 journals, ZZZ with 50 journals etc. Many of them are rather obscure, and even as an expert I may have never heard of them. I can imagine that the 'tail' is rather inaccurate with large-scale omissions, such as publishers from the Global South.
I thus tend to trust the top 20 or so of that list, but not the ones that rank between #21 and #150.
AI: The case described can be cast as an imbalanced-dataset problem, or a rare-events problem.
Even more generally, it can be cast as a highly non-uniform underlying distribution problem (an umbrella term covering both cases).
References:
Machine Learning Tips: Handling Imbalanced Datasets
Handling imbalanced datasets in machine learning |
H: Hyperparameter tuning with Bayesian-Optimization
I'm using LightGBM for the regression problem and here is my code.
def bayesion_opt_lgbm(X, y, init_iter = 5, n_iter = 10, random_seed = 32, seed= 100, num_iterations = 50,
dtrain = lgb.Dataset(data = X_train, label = y_train)):
def lgb_score(y_preds, dtrain):
labels = dtrain.get_labels()
return 'r2', r2_score(labels, y_preds), True
def hyp_lgb(num_leaves, feature_fraction, bagging_fraction, max_depth, min_split_gain, min_child_weight):
params = {'application': 'regression',
'num_iterations': 'num_iterations',
'early_stopping_round': 50,
'learning_rate': 0.05,
'metric': 'lgb_r2_score'}
params['num_leaves'] = int(round(num_leaves))
params['feature_fraction'] = max(min(feature_fraction, 1), 0)
params['bagging_fraction'] = max(min(bagging_fraction, 1), 0)
params['max_depth'] = int(round(max_depth))
params['min_split_gain'] = min_split_gain
params['min_child_weight'] = min_child_weight
cv_results = lgb.cv(params,
train_set = dtrain,
nfold = 5,
stratified = False,
seed = seed,
categorical_feature = [],
verbose_eval = None,
feval = lgb_r2_score)
print(cv_results)
return np.max(cv_results['r2-mean'])
bounds = {'num_leaves': (80,100),
'feature_fraction': (0.1, 0.9),
'bagging_fraction': (0.8, 1),
'max_depth': (5,10,15,20),
'min_split_gain': (0.001, 0.01),
'min_child_weight': (10,20)
}
optimizer = BayesianOptimization(f = hyp_lgb, pbounds = bounds, random_state = 32)
optimizer.maximaze(init_points= init_iter, n_iter = n_iter)
bayesion_opt_lgbm(X_train, y_train)
When I run my code, I get an error like the one below. Please help me find what I am missing.
TypeError Traceback (most recent call last)
TypeError: float() argument must be a string or a number, not 'tuple'
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
<ipython-input-57-86f7d803c78d> in <module>()
40 #Optimize
41 optimizer.maximaze(init_points= init_iter, n_iter = n_iter)
---> 42 bayesion_opt_lgbm(X_train, y_train)
43
2 frames
/usr/local/lib/python3.6/dist-packages/bayes_opt/target_space.py in __init__(self, target_func, pbounds, random_state)
47 self._bounds = np.array(
48 [item[1] for item in sorted(pbounds.items(), key=lambda x: x[0])],
---> 49 dtype=np.float
50 )
51
ValueError: setting an array element with a sequence.
AI: The pbounds must all be pairs; you cannot specify a list of options for max_depth.
The package cannot deal with discrete hyperparameters very directly; see section 2, "Dealing with discrete parameters", of their "advanced tour" notebook about this. |
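For example, the bounds could be rewritten as pairs only (keeping your casting to int inside the objective):
bounds = {'num_leaves': (80, 100),
          'feature_fraction': (0.1, 0.9),
          'bagging_fraction': (0.8, 1),
          'max_depth': (5, 20),          # a (lower, upper) range, not a list of candidate values
          'min_split_gain': (0.001, 0.01),
          'min_child_weight': (10, 20)}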
H: Would there be any reason to pretrain BERT on specific texts?
So the official BERT English model is trained on Wikipedia and BookCurpos (source).
Now, for example, let's say I want to use BERT for movie tag recommendation. Is there any reason for me to pretrain a new BERT model from scratch on a movie-related dataset?
Could my model become more accurate if I train it on movie-related texts rather than general texts? Is there an example of such usage?
To be clear, the question is on the importance of context (not size) of the dataset.
AI: Sure, if you have a large and good quality in-domain dataset, the results may certainly be better than with the generic pretrained BERT.
This has already been done before: BioBERT is a BERT model pretrained on biomedical texts:
[...] a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement).
Of course, other factors may be taken into account in the decision to pretrain such a model, e.g. computational budget. |
H: Find parameters to maximise output score
Not sure this is the right place to ask. Let's say there is a function f() whose implementation is unknown but which returns a score. I would like to get the highest possible score by modifying the input parameters. I would also like to do better than brute force (trying all possible combinations of input parameters, if that is even possible).
I do know that
f() runs an algorithm against a known dataset. The algorithm is tweaked by the input parameters.
f() needs 6 parameters
I know the type of each parameter (int, float, boolean)
I know the range of each parameter i.e [-1,5](int), [0,1](float, percentage, i.e 0.5 = 50%)
Some parameters have an open range i.e >1 (int)
Some are dependent of each other. i.e min and max parameter. That is min < max.
Any good pointers to algorithms that could help me solve that would be highly appreciated.
AI: In this kind of optimization problem a genetic algorithm is often a good approach, assuming computing the value of f() is not too costly.
The idea is to represent the 6 parameters as "genes". In the first generation their values are assigned randomly, then each "individual" in the generation (combination of parameters) is evaluated (i.e. calculate f), and the top performing "individuals" are selected. The next generation is obtained by cross-over and random mutation, and the process is repeated until f converges to a maximum. |
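A minimal sketch of this idea in Python (everything here — the parameter ranges, the population sizes, and the assumption that your f accepts the six parameters positionally — is illustrative and should be adapted to your actual f):
import random
def random_params():
    a = random.randint(-1, 5)            # int in [-1, 5]
    b = random.random()                  # float in [0, 1]
    c = random.random() < 0.5            # boolean
    lo = random.randint(1, 50)           # "min" parameter
    hi = random.randint(lo + 1, 100)     # "max" parameter, enforcing min < max
    n = random.randint(2, 1000)          # open-ended int (> 1), capped for the search
    return (a, b, c, lo, hi, n)
def mutate(p):
    q = list(p)
    i = random.randrange(len(q))
    q[i] = random_params()[i]            # resample one gene
    return tuple(q) if q[3] < q[4] else p
def crossover(p1, p2):
    child = tuple(random.choice(genes) for genes in zip(p1, p2))
    return child if child[3] < child[4] else p1
population = [random_params() for _ in range(50)]
for generation in range(100):
    ranked = sorted(population, key=lambda p: f(*p), reverse=True)   # evaluate f on each individual
    parents = ranked[:10]                                            # keep the best performers
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children
best = max(population, key=lambda p: f(*p))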
H: Features selection in imbalanced dataset
I have some doubts regarding an analysis. I have a dataset with class imbalance. I am trying to investigate some information from that data, e.g., how many urls contain http or https protocols.
My results are as follows:
http in dataset with class 1: 10
http in dataset with class 0: 109
https in dataset with class 1: 180
https in dataset with class 0: 1560
I am trying to build a classifier based on some features and the presence of protocols was supposed to be taken into account.
However, on the basis of the above results, what do you think I should say?
Does it make sense to say that the most websites having class 0 have an https protocol, even if I have a dataset with class imbalance?
For a model, I would consider resampling techniques. Should I work on this analysis (i.e. draw this conclusion) after the resampling, or would it make sense to check feature importance with other tests (e.g., Pearson correlation, if it is appropriate in this case)?
Any suggestion would be greatly appreciated it.
AI: What this shows is that the protocol is not a very discriminative feature:
the probability of class 1 given http is 10/(109+10)=0.084
the probability of class 1 given https is 180/(180+1560)=0.103
If these conditional probabilities were very different this feature would be more helpful to predict the class, but they differ only slightly. Note that the feature might still be useful, but it doesn't have a huge impact on its own. In case you're interested to know if the difference is significant (i.e. not due to chance), you could do a chi-square test.
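For completeness, the test on the counts above could look like this (the table layout is rows = protocol, columns = class):
from scipy.stats import chi2_contingency
table = [[109, 10],      # http:  class 0, class 1
         [1560, 180]]    # https: class 0, class 1
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)   # p < 0.05 would mean the association is unlikely to be due to chance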
Does it make sense to say that the most websites having class 0 have an https protocol, even if I have a dataset with class imbalance?
It is factually correct, but most websites having class 1 also have https so it's not a very useful information (and on its own this information might be confusing for some readers).
For a model, I would consider resampling techniques. Should I work on this after the resampling, or it would make sense to check features importance with other tests (e.g., Pearson correlation, if it his appropriate in this case)?
Feature selection can be done either before or after resampling, it doesn't matter. The two things are independent of each other because the level of correlation between a feature and the class is independent from the proportion of the class.
I don't think Pearson correlation is good for categorical variables. I think conditional entropy would be more appropriate here (not 100% sure, there might be other options). |
H: Understanding the last two Linear Transformations in LeNet-5
I need help with understanding the LeNet-5 CNN:
How/why do FC3 and FC4 have 120 and 84 neurons?
How are the filters 6 and 16 chosen? (intuition based on the dataset?)
Everywhere I have looked, I haven't found an answer to #1, including LeCun's original paper.
What am I missing?
I am tasked with swapping out the 5 x 5 kernels (f = 5), with 3 x 3 kernels (f = 3). If I understand where those values (especially 84, 120) come from, I think I will be able to do it. I was able to implement LeNet-5 using PyTorch.
If you have any suggestions what values would work best and why, I would be grateful. The dataset is Cifar-10.
Update:
I modified my code, as @Oxbowerce suggested, to make sure the flattened size used in the view matches the input size of the first fully connected layer:
In my constructor for the class LeNet:
self.fc1 = nn.Linear(4 * kernel_size * kernel_size * 16, 120) # added 4
In the feed forward network:
x = x.view(-1, 4 * self.kernel_size * self.kernel_size * 16) # added 4
Here is my network class:
class LeNet(nn.Module):
def __init__(self, activation, kernel_size:int = 5):
super().__init__()
self.kernel_size = kernel_size
self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=kernel_size, stride=1)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=kernel_size, stride=1)
self.fc1 = nn.Linear(4 * kernel_size * kernel_size * 16, 120) # added 4
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
self.activation = activation
def forward(self, x):
x = self.pool(self.activation(self.conv1(x)))
x = self.pool(self.activation(self.conv2(x)))
x = x.view(-1, 4 * self.kernel_size * self.kernel_size * 16) # added 4
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.fc3(x)
return x
I call the train method with this:
model.train(activation=nn.Tanh(), learn_rate=0.001, epochs=10, kernel_size=3)
Train method:
def train(self, activation:Any, learn_rate:float, epochs:int, momentum:float = 0.0, kernel_size:int = 5) -> None:
self.activation = activation
self.learn_rate = learn_rate
self.epochs = epochs
self.momentum = momentum
self.model = LeNet(activation=activation, kernel_size=kernel_size)
self.model.to(device)
optimizer = torch.optim.SGD(params=self.model.parameters(), lr=learn_rate, momentum=momentum)
for epoch in range(1, epochs + 1):
loss = 0.0
correct = 0
total = 0
predicted = 0
for batch_id, (images, labels) in enumerate(self.train_loader):
images, labels = images.to(device), labels.to(device)
optimizer.zero_grad()
outputs = self.model(images)
# Calculate accurracy
predicted = torch.argmax(outputs, 1)
correct += (predicted == labels).sum().item() # <--- exception
total += labels.size(0)
loss.backward()
optimizer.step()
accuracy = 100 * correct / total
self.train_accuracy.append(accuracy)
self.train_error.append(error)
self.train_loss.append(loss.item())
self.test()
for i in self.model.parameters():
self.params.append(i)
Here is the stack trace of the error:
Error
Traceback (most recent call last):
File "/usr/lib/python3.9/unittest/case.py", line 59, in testPartExecutor
yield
File "/usr/lib/python3.9/unittest/case.py", line 593, in run
self._callTestMethod(testMethod)
File "/usr/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
method()
File "/home/steve/workspace_psu/cs510nlp/hw2/venv/lib/python3.9/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/home/steve/workspace_psu/cs510dl/hw2/test_cs510dl_hw2.py", line 390, in test_part2_relu_cel_k3
model.train(activation=activation, learn_rate=learn_rate, momentum=momentum, epochs=epochs, kernel_size=kernel_size)
File "/home/steve/workspace_psu/cs510dl/hw2/cs510dl_hw2.py", line 389, in train
correct += (predicted == labels).sum().item()
File "/home/steve/workspace_psu/cs510nlp/hw2/venv/lib/python3.9/site-packages/torch/tensor.py", line 27, in wrapped
return f(*args, **kwargs)
Exception: The size of tensor a (16) must match the size of tensor b (4) at non-singleton dimension 0
AI: The choice for the number of neurons in the last two dense layers and the number of filter is somewhat arbitrary and most of the time determined by trying different configurations (using something like a hyperparameter grid search). See also this answer on stats stackexchange. If you want to change the size of the 5x5 kernels you will only have to change the number of neurons in the first fully connected layer, the last two don't have to change for the network to be "valid". |
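To find the right input size for the first dense layer after changing the kernel size, it is usually easier to compute the flattened spatial size directly than to adjust a magic constant. A sketch assuming 32x32 CIFAR-10 inputs, stride-1 convolutions without padding and 2x2 max-pooling, as in your code:
def flat_features(in_size=32, k=5):
    s = in_size
    for _ in range(2):        # two conv + pool blocks
        s = s - k + 1         # convolution with stride 1, no padding
        s = s // 2            # 2x2 max-pooling
    return 16 * s * s         # 16 channels after conv2
flat_features(32, 5)   # 400 -> nn.Linear(400, 120)
flat_features(32, 3)   # 576 -> nn.Linear(576, 120)
Computing the size this way (or printing x.shape once before the view) avoids the batch-dimension mismatch in your traceback, which appears when the flattened size passed to view does not match the real tensor size.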
H: What are the activation functions in Convolutional Layers for?
I read a lot about CNNs but I didn't quite understand some things:
What are the activation functions in CLayers for? If I understood it right, the only weights in these layers are the ones in the filters, and an activation function needs a weighted sum?
The computational effort should increase, shouldn't it? When there are many filters, many feature maps (maps of the dot products) are produced. All of them are passed to the next layer, so if they are as big as the input image and there are 10 filters in the first CLayer, the second CLayer would have to apply roughly 10x the computational effort (per filter in the second layer) that the first layer had, since each of its filters takes all of the previous layer's outputs (the feature maps).
If there is more than one layer, how does backpropagation know what to change in each of them, especially since backpropagation occurs after all layers have been applied?
(CLayer = Convolutional Layer)
AI: 1 - Activation functions are non-linear functions. These are added in between layers which are simply Linear transformations.
Example without activation function:
ConvLayer1(Input) -> ConvMaps1
ConvLayer2(ConvMaps2) -> ConvMaps2
Mathematically, this would be $I_{nput} \circledast K_{ernel_1} \circledast K_{ernel_2} $,
which is equivalent to $I_{nput} \circledast K_{ernel_3} $,
where $K_{ernel_3} = K_{ernel_1} \circledast K_{ernel_2} $
That means that creating two layers would simply waste computational effort and give no gain whatsoever.
2 - Yes, if you keep the same image width and height and just stack more channels, the convolution maps will grow out of control; this is why you compensate with pooling operations and larger strides. Also, increasing the number of parameters without proper care and architecture design usually leads to overfitting.
3 - Backpropagation is simply a computationally efficient and convenient way of expressing the Partial Derivatives of the function and acts exactly as it would if you just took the derivative of the whole function at once. There is plenty of content online on it, you probably didn't understand the process of weight updating. |
H: Chi-Squared test: ok for selecting significant features?
I would have a question on the contingency table and its results.
I was performing this analysis on names starting with symbols as a possible feature, getting the following values:
Label 0.0 1.0
with_symb 1584 241
without_symb 16 14
getting a p-value which lets me conclude that the variables are associated (since it is less than 0.05).
My question is whether this is a good result based on the chi-squared test, i.e. whether I can include this feature in the model.
I am selecting features individually for the model based on the chi-squared test. Maybe there is another way to select the most appropriate and significant features for the model.
Any suggestions on this would be great.
AI: I will raise several issues that could arise if you are selecting features based on chi-2 tests
Repeated use of chi-2 test can lead to spurious results unless you correct for the number of times you run it
You can include features that are correlated with each other, e.g. A is correlated with B, and both are correlated with the label. I'm not sure, but I think this can lead to situations where the model performs worse with more features.
I would try starting with all the features and removing the ones that are linearly correlated. But this is just a suggestion.
Also, mutual information can be used to estimate how well any given feature describes the label. |
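If you want to try mutual information, scikit-learn provides an estimator for it (X and y are placeholders for your feature matrix and label vector; pass discrete_features=True if your features are categorical/discrete):
from sklearn.feature_selection import mutual_info_classif
mi = mutual_info_classif(X, y, random_state=0)   # one score per feature; higher = more informative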
H: Correlations, p-values and features selection
By using correlation matrix, I got some results:
Count_words -0.098857
Count_numbers -0.008305
Count_symbols -0.025853
Count_question -0.031649
Count_equal 0.224223
Count_characters 0.09
I used this line of code (in case you are familiar with Python): df.drop("Target", axis=1).apply(lambda x: x.corr(df.Target))
If I understand correctly, the above results should suggest that there is no (strong) correlation between the variables considered and the target.
Since I would like to add the above variables (or some of them) in a model already built with other features (textual), I would like to know if I can include all of them based on the fact that they are not correlated to each other and that the p-value is less than 0.05. My doubt is if the above results do not make sense and do not suggest that these variables can be used in the model.
I hope you can give me some advices on that. Thanks
AI: The fact that a feature has low correlation with the target variable shows that it's not a good indicator on its own, but that doesn't mean that it can't be useful for the model when combined with the other features.
The only way to know if these features are useful is to use them to train a model, then evaluate on a validation set and see if it improves performance.
the p-value is less than 0.05
Is this the result of a correlation significance test? It depends on the test but in general a p-value lower than 0.05 means that there is a significant difference, i.e. in this case it probably means that the correlation is truly not zero. Anyway imho this wouldn't prove anything with respect to using these features or not. |
H: Selecting most important features for multilinear regression
I have a set of 25 features. I would like to choose the best features for my model. Originally, I was looking at the correlation of features with respect to response, and only taking those which are highly correlated and run a regression model. Then, using that model I would predict the outcome based on test data, and compare it to actual (metric RMSE) and this would be how I assess it.
I could then add each feature in order of decreasing correlation with response to the feature set and keep calculating above.
Is there any other way I could select features? Could I e.g. run a random forest and use a feature importance report from that to also select the most important features? Then run a regression?
What is the best way to compare each regression model to the next? There are so many metrics (AIC, BIC, adjusted $R^2$) that I am confused as to which one is the simplest way to compare... in fact, MSE is not even given in the sm.OLS (statsmodels in Python) summary:
AI: Be careful choosing features based on correlation! Yes it is true that features that are correlated with the response variable may be good predictors, however if the features are correlated with each other then you are introducing multicollinearity into your model, which is bad.
If you want to avoid this you should choose features which are correlated with your response variable but not with each other. For example...
y = a + b + c
Assume:
y is correlated with all three (a, b, c)
a is correlated with b
c is not correlated with a and b
You should only use one of either a or b and c to predict y.
With regard to choosing which model is best, you should use a combination of metrics. As you increase the number of features your R^2 will tend to increase regardless of model performance. AIC and BIC can be used only to compare similar models and the "best" model according to AIC/BIC will be the one with the lowest score relative to the other models.
In my opinion RMSE is the best indicator of a good model, but you should also evaluate R^2, AIC, and BIC.
If you are unsure of which features to use I suggest you try stepwise regression to evaluate many models quickly and (maybe) find the best one. |
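To check the multicollinearity point above, the variance inflation factor is a common diagnostic; a sketch where X is assumed to be a DataFrame of your candidate features (adding a constant column first is also common):
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame({
    "feature": X.columns,
    "VIF": [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
print(vif.sort_values("VIF", ascending=False))   # values well above 5-10 suggest multicollinearity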
H: Keras weird loss and metrics during train
I am doing some testing with TensorFlow, and I bumped into a very weird behaviour.
Here is my code
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images1, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images1[:32] / 255.0
train_labels = train_labels[:32]
test_images = test_images / 255.0
batch_size = 32
epochs = 1
train_data = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).batch(batch_size)
adam = tf.keras.optimizers.Adam(lr=0.002, beta_1=0.9, beta_2=0.999)
input_layer = tf.keras.layers.Input(shape=(28,28,))
flatter = tf.keras.layers.Flatten()(input_layer)
dense1 = tf.keras.layers.Dense(128,
kernel_regularizer=tf.keras.regularizers.l2(0.01),
activation='relu')(flatter)
dense2 = tf.keras.layers.Dense(64,
kernel_regularizer=tf.keras.regularizers.l2(0.01),
activation='relu')(dense1)
output_layer = tf.keras.layers.Dense(10,
activation='softmax',name='output')(dense2)
model_naive = tf.keras.models.Model(inputs=input_layer,outputs=output_layer)
model_naive.compile(optimizer=adam,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy'])
model_naive.summary()
history = model_naive.fit(x=train_data, validation_data=train_data, epochs=epochs)
model_naive.evaluate(train_data)
I simply load fashion_mnist (only one batch) and pass it as both training and validation data, to make a comparison. There is no dropout in the network... so I would expect to find the same loss and metric, but this is the output:
1/1 [==============================] - 1s 815ms/step - loss: 5.4532 - accuracy: 0.0312 - val_loss: 5.0106 - val_accuracy: 0.3125
To be sure I even did a model.evaluate() and this is what I find
1/1 [==============================] - 0s 12ms/step - loss: 5.0106 - accuracy: 0.3125
exactly the same found during training.
So, provided the evaluation is correct...what are these numbers "loss: 5.4532 - accuracy: 0.0312" ? I am using only one batch, so I would expect no averages over batches are involved.
Please help me to understand, this is driving me crazy. Thank you!
EDIT:
With only 1 batch, Keras weirdly seems to print the loss and score computed before applying the gradients. The same does not happen with more than one batch, where probably some average over batches is performed.
The issue is still not solved; any comment is still very welcome!
AI: This is because the calculation of the loss and accuracy are done before the first weight update (i.e. the model with the initialized parameters). After the loss is calculated the first time the loss is used to backpropagate the error throughout the network and to update the parameters. The loss is not calculated again during training (since this would just be an added training cost). The adjusted parameters are then used to calculate the loss for the next batch (training or validation). It would therefore make more sense to compare the training statistics of the second batch with the validation statistics of the first batch. |
H: How can I solve the classification problem with cross-validation in LogisticRegression?
I want to make a data frame with the most repeated words in the sentences and build a classifier via logistic regression.
I tried to write the steps clearly in code.
The column B is my target.
What I have: (Sample)
raw_data={"A":["This is yellow","That is green","These are orange","This is a pen","This is an Orange"],
"B":["Yes","No","Yes","No","No"] }
df=pd.DataFrame(raw_data)
df
A B
0 This is yellow Yes
1 That is green No
2 These are orange Yes
3 This is a pen No
4 This is an Orange No
What I did:
### 1-Import Libraries:
import numpy as np
import pandas as pd
### 2- Create data set:
raw_data={"A":["This is yellow","That is green","These are orange","This is a pen","This is an Orange"],
"B":["Yes","No","Yes","No","No"] }
df=pd.DataFrame(raw_data)
df
A B
0 This is yellow Yes
1 That is green No
2 These are orange Yes
3 This is a pen No
4 This is an Orange No
### 3- Count the words and characters
df['word_count'] = df['A'].agg(lambda x: len(x.split(" ")))
df['char_count'] = df['A'].agg(lambda x:len(x))
df
A B word_count char_count
0 This is yellow Yes 3 14
1 That is green No 3 13
2 These are orange Yes 3 16
3 This is a pen No 4 13
4 This is an Orange No 4 17
### 4- Count the most repeated words in column "A"
df_word_count=pd.DataFrame(df.A.str.split(" ").explode().value_counts()).reset_index().rename({'index':"A","A":"Count"},axis=1)
display(df_word_count)
list_word_count=list(df_word_count["A"])
len(list_word_count)
A Count
0 is 4
1 This 3
2 yellow 1
3 These 1
4 orange 1
5 green 1
6 That 1
7 are 1
8 a 1
9 pen 1
10 Orange 1
11 an 1
### 5- Make a ZERO-Matrix
allfeatures=np.zeros((df.shape[0],len(list_word_count)))
allfeatures.shape
### 6- Create a data frame
for i in range(len(list_word_count)):
allfeatures[:,i]=df['A'].agg(lambda x:x.split().count(list_word_count[i]))
Complete_data=pd.concat([df,pd.DataFrame(allfeatures)],axis=1)
display(Complete_data)
A B word_count char_count 0 1 2 3 4 5 6 7 8 9 10 11
0 This is yellow Yes 3 14 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 That is green No 3 13 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0
2 These are orange Yes 3 16 0.0 0.0 0.0 1.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0
3 This is a pen No 4 13 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0
4 This is an Orange No 4 17 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0
### 7- change columns name from list
#This creates a list of the words
l = list(df_word_count["A"])
l.insert(0,"char_count")
l.insert(0,"word_count")
l.insert(0,"B")
l.insert(0,"A")
# Finally, I rename all the columns with the names that I have in the list l
Complete_data.columns = l
### 8- Define X and Y
x=Complete_data.drop(["A","B"],axis=1) # Features
y=Complete_data["B"] # Target
### 9- Encoding of Target
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
### 10- Train|Test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)
### 11- Import Sklearn needed packages
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
### 12- Prediction and Regression with Cross-Validation
LogReg=LogisticRegression()
LogReg.fit(x_train,y_train)
cv_LogReg=cross_val_score(LogReg,x_train,y_train,cv=2)
cv_LogReg_pred=cross_val_predict(LogReg,x_train,y_train,cv=2)
print("Score: ",r2_score(y_train,cv_LogReg_pred))
Error:
The Algorithm can't find any classification (0,1), although I used the LabelEncoder
ValueError Traceback (most recent call last)
<ipython-input-127-2d7e54ebfd6c> in <module>
4 #LogReg_pred=LogReg.predict(x_test)
5 cv_LogReg=cross_val_score(LogReg,x_train,y_train,cv=2)
----> 6 cv_LogReg_pred=cross_val_predict(LogReg,x_train,y_train,cv=2)
7
8 print("Score: ",r2_score(y_train,cv_LogReg_pred))
.
.
.
This solver needs samples of at least 2 classes in the data, but the data contains only one class: 0
I don't know what I did wrong.
AI: Since you are doing cross-validation on a very small sample, the split can end up with a fold that contains only one class, hence you are getting that error. If you have more data, you should not get this error. I performed a simple logistic regression on these 5 records and was able to create a model, so can you increase your data and check?
I have added data like this:
2- Create data set:
raw_data={"A":["This is yellow","That is green","These are orange","This is a pen","This is an Orange",
"This yllow","That geen","These ornge","This a pn","This an Ornge"],
"B":["Yes","No","Yes","No","No",
"Yes","No","Yes","No","No"] }
And one more thing, I changed the r2_score in the last line to accuracy score. |
H: Using R to organize/rearrange CSV - group by multiple columns?
I have a CSV that I need to clean up / organize in a usable way using R. I need to group by the property ID and then want to take all the unique years from the defor year column and make each year into a separate column with the amount of deforestation for that year. My data frame / CSV looks like this:
Prop_ID deforYear deforHA
1 2010 15
1 2011 0
1 2012 10
2 2010 35
2 2011 45
2 2012 0
and I want the output to look like this:
Prop_ID defor_2010 defor_2011 defor_2012
1 15 0 10
2 35 45 0
I'm assuming I use the group_by function in dplyr but I can't seem to figure it out.
AI: This is very simple using the pivot_wider function from tidyr:
library(tidyr)
df <- data.frame(
Prop_ID = c(1, 1, 1, 2, 2, 2),
deforYear = c(2010, 2011, 2012, 2010, 2011, 2012),
deforHA = c(15, 0, 10, 35, 45, 0)
)
df %>%
pivot_wider(names_from=deforYear, values_from=deforHA, names_prefix="deforYear_")
# result
Prop_ID deforYear_2010 deforYear_2011 deforYear_2012
1 1 15 0 10
2 2 35 45 0 |
H: Pytorch's CrossEntropyLoss?
Can anybody explain what's going on here? I thought I knew how cross entropy loss works.
I have tried with NLLLoss (negative log-likelihood) as well.
AI: The problem is that, in Pytorch, CrossEntropyLoss is more than its name suggests. The documentation says that:
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
This should behave like you expect:
import torch
import torch.nn as nn
Loss = nn.NLLLoss()
y = torch.tensor([0.25, 0.25, 0.29, 0.21]).unsqueeze(0)
y_true = torch.tensor([2])
Loss(torch.log(y), y_true)
Note that, as pointed out by the documentation of nn.NLLLoss, its input are log-probabilities, which explains why I added the torch.log. |
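Equivalently, you can check that CrossEntropyLoss applied to raw scores (logits) matches NLLLoss applied to their log-softmax (illustrative numbers):
import torch
import torch.nn as nn
import torch.nn.functional as F
logits = torch.tensor([[1.3, -0.2, 0.8, 0.1]])   # raw, unnormalized scores
target = torch.tensor([2])
ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))   # True: CrossEntropyLoss = LogSoftmax + NLLLoss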
H: Getting very low/ wrong accuracy from RandomizedSearchCV
I am currently using RandomizedSearchCV to optimize my hyperparameters. However, the reported scores of each iteration are very low. When I then evaluate the highest-scoring candidate I get very high accuracy (0.97), while RandomizedSearchCV reports something much lower (0.32).
search = clf_rand_search.fit(x_traintest, y_traintest)
print(search.score(x_validation,y_validation))
0.32
print(accuracy_score(y_validation.flatten(), search.predict(x_validation).flatten()))
0.9798260869565217
My input and output are both 2-D matrices with shapes (100, 9) and (100, 230) for the train/test data, with fewer samples for the validation data.
Should I format my data differently for RandomizedSearchCV?
Input features: the first two are normalised and the last is one-hot encoded.
Output: a classification of 0 or 1 for each of the 230 nodes.
clf = MLPClassifier(solver = 'adam',
max_iter=9999,
alpha=1e-5
)
hidden_layers = 8
neurons = list(range(10,210,5))
m = [0]*(hidden_layers*len(neurons))
for i in range(1,hidden_layers+1):
for idx,i2 in enumerate(neurons):
m[((i-1)*len(neurons)) + (idx)] = [neurons[idx]]*i
param_space = {
'hidden_layer_sizes': m,
'activation': ['identity', 'logistic', 'tanh', 'relu'],
'learning_rate': ['constant','invscaling','adaptive'],
'learning_rate_init': np.arange(1e-4,0.1+1e-4,1e-4)
}
clf_rand_search = RandomizedSearchCV(clf, param_space, n_iter=10,
scoring="accuracy", verbose=True, cv=2,
n_jobs=-1)
AI: UPDATE AFTER EXCHANGING COMMENTS
You might be facing issues with the computation of the accuracy
I think the MLP with the log-loss can work well with your output data. Your output data is a matrix of shape $(N, 230)$, with $N$ the number of samples, containing only 1s and 0s. This is effectively a multi-hot (multi-label) encoding, i.e. one-hot-style vectors with multiple 1s.
You are computing the accuracy by flattening the predictions and comparing them element-wise ($N \times 230$ elements).
For its own score, I suspect the classifier is not comparing the vectors (ground truths and predictions) element-wise, but checking whether the whole 230-dimensional vector matches for each of the $N$ samples. If only one of the 230 entries is misclassified in one sample, this per-sample accuracy drops by $1/N \cdot 100\,\%$, whereas the element-wise accuracy only drops by $1/(230 \cdot N) \cdot 100\,\%$...
Try to update your accuracy computation, assigning a 1 only if the predicted $(230,)$ vector of a sample is exactly equal to its $(230,)$ ground-truth vector.
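A minimal sketch of that per-sample (row-wise) accuracy, assuming y_validation and the predictions are both arrays of shape (N, 230):
import numpy as np

y_pred = search.predict(x_validation)                     # shape (N, 230)
# a sample counts as correct only if all 230 entries match its ground truth
per_sample_correct = np.all(y_pred == y_validation, axis=1)
print(per_sample_correct.mean())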
PREVIOUS ANSWER BEFORE COMMENTS
I guess you are experiencing problems with your data...
Your data does not seem to correspond to a binary classification problem; for binary classification your data should have the following dimensions:
Input: $(N, K)$
Output: $(N, 1)$ or $(N, 2)$ if is one-hot encoded.
with $N$ the number of samples in the split and $K$ the dimension of the input feature space.
If your output data is of size
$(N, D)$,
then it could be a regression problem that maps a feature space of dimension $K$ to one of dimension $D$. Your MLP would then try to mimic a function
$f: \mathbb{R}^K \to \mathbb{R}^D$
For regression problems you must use other metrics, like MAE or MSE
You can treat the problem from various perspectives (just some ideas).
as a masking/segmentation problem in 1D (making an analogy with mask segmentation for 2D images...)
A Bayesian approach could also work well. You can try to estimate the posterior distribution of the parameter of the Binomial distribution that models each node being 1 or 0, using your data as the likelihood... and then update the posterior with new data so that you get increasingly accurate credible intervals. Obviously you should think about whether a Binomial distribution models your data well (i.e. independence, etc.); maybe another probability distribution works better.
The following example works well. I know it does not answer the question (I'm just trying to help), but at least we can see with an example that RandomizedSearchCV gives the same score as accuracy_score with your code for a binary classification problem with an MLP:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import accuracy_score
from scipy.stats import uniform
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import numpy as np
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = MLPClassifier(solver = 'adam',
max_iter=9999,
alpha=1e-5
)
hidden_layers = 2
neurons = list(range(10,210,5))
m = [0]*(hidden_layers*len(neurons))
for i in range(1,hidden_layers+1):
for idx,i2 in enumerate(neurons):
m[((i-1)*len(neurons)) + (idx)] = [neurons[idx]]*i
param_space = {
'hidden_layer_sizes': m,
'activation': ['identity', 'logistic', 'tanh', 'relu'],
'learning_rate': ['constant','invscaling','adaptive'],
'learning_rate_init': np.arange(1e-4,0.1+1e-4,1e-4)
}
clf_rand_search = RandomizedSearchCV(clf, param_space, n_iter=2,
scoring="accuracy", verbose=True, cv=2,
n_jobs=-1)
search = clf_rand_search.fit(X_train, y_train)
print(search.score(X_test,y_test))
print(accuracy_score(y_test.flatten(), search.predict(X_test).flatten())) |
H: Can Micro-Average Roc Auc Score be larger than Class Roc Auc Scores
I'm working with an imbalanced data set. There are 11567 negative and 3737 positive samples in train data. There are 2892 negative and 935 positive samples in validation data. It is a binary classification problem and I used Micro and Macro averaged ROC for evaluation of my model. However, I noticed that Micro averaged Roc-Auc score is higher than Class specific Roc-Auc scores. That doesn't make sense to me.
As you can see on the plot, the micro-averaged ROC AUC score is higher at all points. If possible, can you explain the reason behind that? I used that sklearn link and converted it for binary classification (y_true -> one-hot representation). I also added my code below.
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from numpy import interp
from sklearn.metrics import roc_curve, auc
from xgboost import XGBClassifier

xgboost_model = XGBClassifier(n_estimators=450, max_depth=5, min_child_weight=2)
xgboost_model.fit(X_train,y_train)
yy_true,yy_pred = yy_val, xgboost_model.predict_proba(Xx_val)# .predict_proba gives probability for each class
# Compute ROC curve and ROC area for each class
y_test = flat(yy_true) # Convert labels to one hot encoded version
y_score = yy_pred
n_classes=2
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
lw = 2
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=2)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=2)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve for nonSampling Training Data')
plt.legend(loc="lower right")
plt.savefig('nonsample.png', format='png', dpi=600)
plt.show()
AI: In a binary problem there's no reason to average the ROC (or any other metric). Normally micro and macro performances are used to obtain a single performance value based on individual binary classification measures in the case of multiclass classification.
So here what happens is this:
The ROC curves for the two classes are the mirror of each other (using the top-left bottom-right diagonal as symmetry axis), because they represent exactly the same points but with the positive and negative classes swapped.
The macro-average curve is the average of both curves, which doesn't make a lot of sense since both already have the same shape. This is why all the AUC values are identical for macro, class 0 and class 1.
The micro-average ROC is the weighted average, so it's made mostly of the majority class (around 75%): since most points in the majority class are correctly predicted as the majority class, the performance looks much better. This is related to the fact that the micro-average F1-score is equal to accuracy, although in the ROC I'm not sure how the points are weighted exactly.
In my opinion, this is a good illustration of why ROC curves should be used very carefully (or not at all) in a multiclass setting. ROC curves are meant for binary (soft) classification, they are useful and interpretable in this context but not necessarily in another context.
In general it also illustrates that it's not because something can be calculated that the resulting value makes sense ;) |
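As a side note, for a binary problem a single AUC computed from the positive-class probabilities is all you need. A minimal sketch, assuming yy_true holds the original 0/1 labels (not the one-hot version) and yy_pred is the predict_proba output from your snippet:
from sklearn.metrics import roc_auc_score

# column 1 of predict_proba is the probability of the positive class
print(roc_auc_score(yy_true, yy_pred[:, 1]))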
H: Undersampling for credit card fraud detection before or after Train/Test Split
I have a credit card dataset with 98% transactions are Non-Fraud and 2% are fraud.
I have been trying to undersample the majority class before the train/test split and get very good recall and precision on the test set.
When I do the undersampling only on training set and test on the independent set I get a very poor precision but the same recall!
My question is :
Should I undersample before splitting into train and test , will this mess with the distribution of the dataset and not be representative of the real world?
Or does the above logic only apply when oversampling?
AI: Answering your questions: it is important to remember that this kind of transformation (changing the minority class distribution by oversampling, or the majority class by undersampling) must only be done on the training dataset, so that you do not alter the real-world class distribution your model will face when applied, which at training time is represented by your test and hold-out sets. This also explains what you observed: undersampling before the split leaves the test set roughly balanced, so with far fewer negatives there are far fewer false positives and precision looks artificially high, while recall (which only depends on the positives) stays the same. Evaluating on the untouched, imbalanced test set gives the realistic, lower precision. The same logic applies to oversampling.
About dealing with class imbalance, in addition to resampling methods, you can apply balancing weights to the learning algorithm itself, via parameters like class_weight for some classifiers with scikit-learn (for instance in this one) or via scale_pos_weight with XGBoost (read this) |
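A minimal sketch of the recommended workflow (split first, then undersample only the training split), assuming a feature matrix X and labels y and using imbalanced-learn's RandomUnderSampler; the classifier is just a placeholder:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from imblearn.under_sampling import RandomUnderSampler

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

# resample the training split only; the test split keeps the real 98/2 distribution
rus = RandomUnderSampler(random_state=0)
X_train_res, y_train_res = rus.fit_resample(X_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_train_res, y_train_res)
# evaluate precision/recall on the untouched, imbalanced test set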
H: Creating parallel keras layers
I am new to Keras and ML and I want to create a NN that can separate a bitmap-like image into its visual components.
My approach is to feed a two dimensional image (lets say 8 by 8 pixels) into a NN, which then outputs a three dimensional matrix (e.g 3 by 8 by 8). Every element of the first dimension represents an image of a component (see this illustration).
As a first version I created a Keras sequential model with 1 flatten layer and 2 dense layers, 192 units each (3x8x8=192). After around 5000 training images, the model performance was still mediocre.
Question: Is there a way of splitting the output of the last dense layer into three separate images that will then be processed in a parallel manner by further dense layers? Is there a better approach to the problem I am facing?
AI: So, yes and no.
First, there's not a layer that does this in the standard Keras API. It might be possible to write a custom layer to do it, but I'm not comfortable enough doing that to guide you.
What you COULD do is create three different layers, each of which accepts the whole thing as input, and let each of them figure out what parts they're responsible for. More on that in a minute.
Second, to do it requires you to use the Functional API. The Sequential API is just that - sequential. But functional is pretty easy once you understand what's going on.
Ok. Now for the above approach.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, Concatenate
from tobias_code import get_my_data, compiler_args
data = get_my_data() # obviously, this is a stand-in for however you get your data.
input_layer = Input(data.shape[1:])
hidden = Flatten()(input_layer)
This is where the Functional API differs from the Sequential API. Because the Sequential API assumes each layer attaches to the previous layer, you don't have to manually do so. The Functional API allows you to connect layers however you'd like! But to allow for that, it requires you to manually connect each layer to the previous. I typically reuse 'hidden' to chain my layers together, deviating only for the input, output, and any unusually connected layers.
hidden = Dense(192, activation='relu')(hidden)
main_output = Dense(192, activation='relu')(hidden)
Here's where yours would end. You'd call model = tf.keras.Model(input_layer, main_output), do whatever compiling you had to do, and begin training. But we're going to continue on. (You probably wouldn't use relu for your output layer, but eh...)
# I'm going to build each individual parallel set of layers separately
branch_a = Dense(96, activation='relu')(main_output)
branch_a = Dense(48, activation='relu')(branch_a)
output_a = Dense(24)(branch_a) # This will be one of my outputs, so I want a linear activation
branch_b = Dense(96, activation='relu')(main_output) # note that it is main_output again
branch_b = Dense(48, activation='relu')(branch_b)
output_b = Dense(24)(branch_b)
branch_c = Dense(96, activation='relu')(main_output) # and again. 3 layers are all sharing that output.
branch_c = Dense(48, activation='relu')(branch_c)
output_c = Dense(24)(branch_c)
Now, the Functional API can handle multiple outputs. You can define separate loss functions for each output and separate metrics for each output. But I haven't done a lot with that, so rather than tell you incorrectly how to do it, we're going to concatenate your outputs into one, which is probably what your output looked like in the sequential model.
final_output = Concatenate()([output_a, output_b, output_c])
model = tf.keras.Model(input_layer, final_output)
model.compile(**compiler_args)
model.summary()
Now, I haven't run it, and I make no claims as to its efficacy; I pretty much typed it here and am hoping for the best. But it should get you where you want to be, and hopefully if it's not exactly what you want it will set you on the right path. |
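For completeness, here is a minimal sketch of the multi-output route mentioned above (the 'mse' losses are arbitrary placeholders, not a recommendation): Keras accepts a list of outputs together with a matching list of losses.
multi_model = tf.keras.Model(input_layer, [output_a, output_b, output_c])
multi_model.compile(
    optimizer='adam',
    loss=['mse', 'mse', 'mse'],    # one loss per output
    loss_weights=[1.0, 1.0, 1.0],  # optional relative weighting
)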
H: Sum of squares for matrix valued data over $\mathbb{R}$ and $\mathbb{C}$
Let us assume we have $k \times k$ matrix valued data and assume this is organized (possibly as time series):
$$ M_1, M_2, \ldots, M_n $$
Now, assume we are interested in writing down an error function that mimics sums of squares. This can naively be written as
$$ \sum_{i=1}^n (M_i - \hat M_i)^2 $$
where $\hat M_i$ is the $i$-th estimation. The question is, what is actually the proper way to write this function explicitly? For vectors, the Euclidean norm is "naturally" picked. What about this case?
One option is to multiply out these matrices and treat each of the resulting matrix's elements on its own. For example the element at position 11 would have its own "error function" that looks like:
$$
\sum_i (a_{11}^2 +a_{12}a_{21})
$$
and similarly for the other three elements. Here $M-\hat M \equiv A = (a)_{ij}$. Does this even make sense?
Furthermore, how to treat the same example having complex valued matrices?
AI: Essentially you want to pick a function that will give you the "size" of a matrix. The most obvious way I can think of is by choosing a matrix norm, which is a map $\lVert \cdot \rVert \colon \mathbb{R}^{k, k} \to [0, \infty)$ (or you could generalise to a complex $k \times k$ matrix if you wished).
Your suggestion seems similar to computing
$$S = \sum_i (M_i - \hat M_i)^2$$
then using the Frobenius norm $\lVert S \rVert_F$ to turn this into a real number. The Frobenius norm essentially means "squash $S$ into a vector with $k \times k$ entries, then compute the Euclidean norm".
There are plenty of other norms you could consider, such as the operator norms. Interestingly enough it can be proved that any two matrix norms are equivalent up to a constant scaling factor, so your choice of norm isn't particularly important, and minimising the "sum of squares" under any norm should be more or less the same. |
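One natural concrete variant is to sum the squared Frobenius norms of the residuals, i.e. $\sum_i \lVert M_i - \hat M_i \rVert_F^2$. A minimal numerical sketch, assuming the data and the estimates are stacked in arrays M and M_hat of shape (n, k, k); the same expression works for complex-valued matrices because np.abs takes the modulus:
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
M = rng.standard_normal((n, k, k))
M_hat = rng.standard_normal((n, k, k))

# sum over samples of the squared Frobenius norm of each residual matrix
error = np.sum(np.abs(M - M_hat) ** 2)
print(error)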
H: Dealing with dates in dataframe
I have a dataframe which has two date columns - Start Date and End Date,
Let's suppose the start date is 4/10/2019 and the end date is 4/10/2020 (MM/DD/YYYY). Now I want to split these dates into a list containing all the intermediate months (from October 2019 to October 2020). Is there any function in Python that can help me with this?
AI: You can likely use pandas.date_range as follows assuming your columns are called start and end:
import pandas as pd
df.apply(lambda x: pd.date_range(x["start"], x["end"]), axis=1) |
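Note that date_range defaults to daily frequency; since you want the intermediate months, pd.period_range with a monthly frequency may be closer to what you describe. A sketch assuming the same column names:
df.apply(lambda x: pd.period_range(x["start"], x["end"], freq="M").tolist(), axis=1)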
H: How to Iterate over rows in a dataframe
So I've been trying to iterate over rows of my dataframe, and my goal is to find matching rows based on two particular columns (say C and P). Then I need to do some manipulation of the data in those rows. I've read quite a few answers here telling me to use iterrows() or itertuples(), but that doesn't serve my purpose because I cannot manipulate my data using them. The same goes for functions like groupby, since it only allows manipulation of whole groups, not elements of those groups (correct me if I am wrong here, because that's what I have seen in the articles on groupby()). What approach should I use to match rows in my dataframe to each other based on columns and then manipulate them?
AI: "What approach should I use to match rows in my data frame to each other based on columns and then manipulate them."
Use pandas.DataFrame.loc:
Setting values
Set value for all items matching the list of labels
df.loc[['viper', 'sidewinder'], ['shield']] = 50
df
max_speed shield
cobra 1 2
viper 4 50
sidewinder 7 50
Set value for rows matching callable condition
df.loc[df['shield'] > 35] = 0
df
max_speed shield
cobra 30 10
viper 0 0
sidewinder 0 0 |
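To tie this back to your setup (matching rows on two columns, here hypothetically named C and P, and manipulating them as a group), groupby combined with transform writes the result back onto the original rows. A minimal sketch with made-up data:
import pandas as pd

df = pd.DataFrame({
    "C": ["a", "a", "b", "b"],
    "P": [1, 1, 2, 3],
    "value": [10, 20, 30, 40],
})

# every row receives the mean of the rows sharing the same (C, P) pair
df["group_mean"] = df.groupby(["C", "P"])["value"].transform("mean")

# or select and modify only the rows matching a particular (C, P) combination
df.loc[(df["C"] == "a") & (df["P"] == 1), "value"] *= 2
print(df)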
H: Issue translating large amounts of tweets using Google Translate
I am working on translating large amounts of tweets using this deep-translator which uses the Google Translate API.
Initially everything was fine and tweets were translated with no problems whatsoever but I recently encountered an issue.
The issue with the data currently is that when there are non-English words in a particular tweet as in the attached image, Google Translate marks it as English and does not translate them. In the API I have set the source language to auto-detect the words then translate them to English.
The only workaround I have come up with to solve this is to turn the tweet into chunks and perform batch translations on them.
Example string:
"NEW YOUTUBE VIDEO OUT NOW:TOTTENHAM NEWS TRANSFER WINDOW UPDATE 손흥민 Son Award Link to Premier League Defen... TOTTENHAM NEWS TRANSFER WINDOW UPDATE Carabao Cup Win Final. 손흥민 Son Contract"
Output after chunking the string into batches with a maximum of 5 words:
['NEW YOUTUBE VIDEO OUT NOW:TOTTENHAM', 'NEWS TRANSFER WINDOW UPDATE 손흥민', 'Son Award Link to Premier', 'League Defen... TOTTENHAM NEWS TRANSFER', 'WINDOW UPDATE Carabao Cup Win', 'Final. 손흥민 Son Contract']
The issue is that translating the batch takes about 13 seconds, whereas translating the entire string (even though it won't work) takes under a second, since the batched string performs 6 API requests compared to a single API request for the normal string.
The time to translate will be quite high when I batch the strings and translate them.
Assuming it takes 13 seconds per tweet, with 5000 tweets in the CSV file I am currently working on it will take roughly 18 hours, and I have CSV files with significantly more tweets than this.
The chunk size of 5 is what I found to still be able to translate the words even if there is a single non-English word in the string. Anything more than that will still not translate the tweets.
Anyone have any workarounds that are faster than this?
AI: So this was my work around:
Convert the list of strings that need to be translated into an indexed tuple with the first value being the index and the second value being the string:
s = [(0, 'NEW'), (1, 'YOUTUBE'), (2, 'VIDEO'), (3, 'OUT'), (4, 'NOW:TOTTENHAM'), (5, 'NEWS'), (6, 'TRANSFER'), (7, 'WINDOW'), (8, 'UPDATE'), (9, '손흥민'), (10, 'Son'), (11, 'Award'), (12, 'Link'), (13, 'to'), (14, 'Premier'), (15, 'League'), (16, 'Defen...'), (17, 'TOTTENHAM'), (18, 'NEWS'), (19, 'TRANSFER'), (20, 'WINDOW'), (21, 'UPDATE'), (22, 'Carabao'), (23, 'Cup'), (24, 'Win'), (25, 'Final.'), (26, '손흥민'), (27, 'Son'), (28, 'Contract')]
Next, as the words that were causing issues when translating were the non-ASCII ones, I extracted them using this code:
[(index, item) for (index, item) in s if not item.isascii()]
# result
[(9, '손흥민'), (26, '손흥민')]
Then I ran the translations on these words instead of the entire string or the chunks of strings.
After translating I then added them back in the list of tuples
Convert the list of tuples to a list of strings and concatenate them into a single string
This significantly reduced the time to translate the data. |
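A minimal sketch of steps 4 and 5 (putting the translated words back by index and rebuilding the tweet), where the translated list below is a hypothetical stand-in for the output of the translation step:
words = dict(s)                                              # index -> original word
translated = [(9, "Son Heung-min"), (26, "Son Heung-min")]   # hypothetical translations
words.update(dict(translated))                               # overwrite only the translated positions

rebuilt_tweet = " ".join(words[i] for i in sorted(words))
print(rebuilt_tweet)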
H: MLP sequential fitting
I am fitting a Keras model, using SGD
Input dataset X_train has 55000 entries.
Can anyone explain the yellow highlighted values?
For me, when each epoch is done, this should correspond to 55000/55000.
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",metrics=["accuracy"])
history = model.fit(X_train , y_train, epochs=30 , validation_data=(X_valid, y_valid))
AI: These numbers refer to minibatches, not individual samples. The data is not fed to the model one by one, but in small groups called "minibatches" or simply "batches". The size of the minibatch (the number of elements to be included in each minibatch) can be specified as a parameter to the fit method. As you did not provide any value, it took the default value of 32. This is specified in the documentation:
batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
Your data size divided by the batch size gives us the number of batches, which is the number you see: ceil(55000 / 32) = 1719. As they are not divisible, the last batch would have a few elements less; specifically, the last batch has 24 elements (= 55000 - 32 * 1718) instead of 32. |
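A quick check of the arithmetic, and how you could make the batch size explicit in the call (32 here simply reproduces the default):
import math
print(math.ceil(55000 / 32))   # 1719 steps per epoch, the last one with 24 samples

history = model.fit(X_train, y_train, epochs=30, batch_size=32,
                    validation_data=(X_valid, y_valid))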
H: Regression analysis and least square method relation?
I want to know where regression analysis is most used, what its competitor methods are, and how the least squares method relates to regression analysis.
AI: I want to know where Regression analysis is most used at
Regression analysis is used for analyzing the relationship between some independent variables and a dependent variable. In particular it can be used for predicting/forecasting the dependent variable. In Machine Learning a regression task is a supervised task where the target (dependent variable) is numerical (as opposed to classification). It's used in a huge range of applications.
what's its competitor methods
There's no competitor since it's a whole domain: any method which does the same thing is a regression method so it's part of regression analysis.
how least square method relates to regression analysis.
Least Squares Regression is a specific method which can be used in regression analysis. Its most common use is with linear regression, which is a specific case of regression for a linear function (it's one of the most simple forms of regression). |
H: Does the model(best fitting line/curve) changes when the training data is changed in the cross validation?
From my understanding, a machine learning algorithm goes through the inputs (independent variables) and predicts the output (dependent variable). I believe the line/curve that best fits the training data depends on the training data itself. Once the best line or curve is decided, new inputs are plotted and, based on that, the target values are found.
My question is: during cross-validation the training data changes, due to the difference in splits. Shouldn't that change the best-fitting line, and shouldn't that make K different models (K = number of splits)? If yes (it makes K different models), then what use would that be? If no, then where am I going wrong?
AI: Your intuition that the model changes depending on the input data is right. What changes are the parameters, e.g. the weights if you are performing a regression analysis.
Weights change when you train on the data. You can reliably say that whenever you train the model, its parameters change.
During cross-validation, you split your dataset into an arbitrary number of subsets. You use some of them as training sets, the rest as validation sets, and measure the error.
You repeat this for each fold and measure the error each time. In each iteration you choose different subsets (the k-th iteration uses different train/validation sets than the (k+1)-th one), so you do change the model, since you are re-training; however, this is not a problem, because the goal is to approximate how the model will perform on new data. Finally, you get an estimate of the model's error (and you can also use this process to choose the features with the most predictive power, if you are performing cross-validation for feature selection). Edit: it is, however, important to note that cross-validation is used primarily for estimating how the model would perform in general, not for selecting which trained model to keep.
This is phrased in terms of numerical models, but I think it applies to decision trees and other probabilistic models as well, because changing the training data changes the quantities the model is fit on (e.g. the split criteria of a decision tree).
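A minimal sketch with scikit-learn, assuming a feature matrix X and a numerical target y (the linear model is just a placeholder): cross_val_score fits K separate models internally and returns K scores, and you would then typically retrain one final model on all the training data.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(scores)          # 5 scores, one per fold (and per temporarily fitted model)
print(scores.mean())   # the usual single estimate of generalisation performance

final_model = LinearRegression().fit(X, y)   # final model trained on all the data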
H: Tensorflow's .shuffle(BUFFER_SIZE)
I came across the following function in Tensorflow's tutorial on Machine Translation:
BUFFER_SIZE = 32000
BATCH_SIZE = 64
data_size = 30000
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
I went through several blogs to understand .shuffle(BUFFER_SIZE), but what puzzles me is the fact that a BUFFER_SIZE > DATA_SIZE results in a perfectly uniform shuffling. Neither do I understand what they mean by 'uniform shuffling', nor do I understand how a BUFFER_SIZE> DATA_SIZE is even possible.
From what I understand, tensorflow keeps a BUFFER_SIZE of elements, selects a random element and adds the next input element into the buffer. This makes sense if the BUFFER_SIZE is <= DATA_SIZE. But, what happens to the buffer in case we have more number of elements than the size of the dataset? Do we not have some NULL elements? How does it result in uniform shuffling?
Could anyone please explain to me with an example of how BUFFER_SIZE > DATA_SIZE results in a uniform shuffling? And, what exactly do we mean by uniform shuffling?
AI: Shuffling begins by making a buffer of size BUFFER_SIZE (which starts empty but has enough room to store that many elements). The buffer is then filled until it has no more capacity with elements from the dataset, then an element is chosen uniformly at random. This means that each example in the buffer is equally likely to be chosen, with probability 1/BUFFER_SIZE. Then, a new example is loaded to fill the slot in the buffer that was emptied. This continues until there is nothing left to load.
A uniform shuffle would be what you would think of as truly random: any sequence of examples is equally likely. If the buffer is smaller than the size of the dataset, this is not possible. Here is an example of why: imagine I have ten examples, labelled 1 to 10, and a buffer of size 2. First I load examples 1 and 2 into the buffer, then I have no more space so I must randomly select something from the buffer. Therefore, my random shuffle always begins with example 1 or 2: not uniformly random!
If you have a buffer as big as the dataset, you can obtain a uniform shuffle (think the same process through as above). For a buffer larger than the dataset, as you observe there will be spare capacity in the buffer, but you will still obtain a uniform shuffle. |
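A minimal sketch illustrating the buffer behaviour on hypothetical toy data (not from the tutorial): with buffer_size=2 the first emitted element can only ever be 1 or 2, whereas with buffer_size >= 10 any permutation is possible.
import tensorflow as tf

ds = tf.data.Dataset.range(1, 11)   # elements 1..10

small_buffer = list(ds.shuffle(2).as_numpy_iterator())
big_buffer = list(ds.shuffle(10).as_numpy_iterator())

print(small_buffer)   # first element is always 1 or 2
print(big_buffer)     # any ordering is possible (uniform shuffle)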
H: p-value and effect size
Is it correct to say that the lower the p-value, the larger the difference between the two means of the two groups in the t-test?
For example, if I apply the t-test between two groups of measurements A and B and then to two groups of measurements B and C and I find that in the first case the p-value is lower than the second case, could one of the possible interpretations be that the difference between the means of group A and B is greater than the difference between the means of group B and C?
AI: No.
What you refer to (the difference between the means of group A and B) is actually the effect size, and it has absolutely nothing to do with the p-values.
The situation is nicely summarized in the (highly recommended) paper Using Effect Size—or Why the P Value Is Not Enough (emphasis mine):
Why Report Effect Sizes?
The effect size is the main finding of a quantitative study. While a
P value can inform the reader whether an effect exists, the P value will not reveal the size of the effect. In reporting and
interpreting studies, both the substantive significance (effect size)
and statistical significance (P value) are essential results to be
reported.
Why Isn't the P Value Enough?
Statistical significance is the probability that the observed
difference between two groups is due to chance. If the P value is
larger than the alpha level chosen (eg, .05), any observed difference
is assumed to be explained by sampling variability. With a
sufficiently large sample, a statistical test will almost always
demonstrate a significant difference, unless there is no effect
whatsoever, that is, when the effect size is exactly zero; yet very
small differences, even if significant, are often meaningless. Thus,
reporting only the significant P value for an analysis is not
adequate for readers to fully understand the results.
In other words, the p-value reflects our confidence that the effect indeed exists (and it's not due to chance), but it says absolutely nothing about its magnitude (size).
In fact, the practice of focusing on the p-values instead of the effect size has been the source of much controversy and the subject of fierce criticism lately; see the (again, highly recommended) book The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives.
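A minimal sketch of reporting both quantities with SciPy/NumPy on synthetic data: the p-value from the t-test and Cohen's d as the effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(0.1, 1.0, 5000)   # tiny true difference in means

t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d: difference of means scaled by the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (a.mean() - b.mean()) / pooled_sd

print(p_value)   # likely "significant" with such a large sample
print(cohens_d)  # yet the effect size is small (around -0.1)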
The following threads at Cross Validated may also be useful:
P value as a measure of effect size?
Are effect sizes really superior to p-values? |
H: Sort Pandas Dataframe per column
I have a dataset with age + another 14 variables. I have created 13 bins representing different age groups like so:
data["age_bins"] = pd.cut(data["age"], [16,20,25,30,35,40,45,50,55,60,65,70,75,80])
Then calculated the mean value of those 14 other variables per age group like so
data_age_bins_means = data.groupby(["age_bins"]).mean()
resulting in a 14 by 13 DataFrame called data_age_bins_means
Finally, I want to output a data structure with the 5 variables with the greatest mean value in descending order per each age group i.e. first sort each age group column separately and then choose those five variables with the greatest mean values for each age group. I was thinking about a MultiIndex solution but would badly need some help on a neat solution here. Many thanks!
P.s. I finally want to save that data structure to .json for easy loading to JavaScript
AI: How about a lambda using df.apply? So,
import pandas as pd
import numpy as np
#initializing variable names
variable_names =['var_' + str(i) for i in range(1, 15, 1)]
variable_names.insert(0, 'age')
Generating random data
data =pd.DataFrame(np.random.randint(0,100,size=(100, len(variable_names))), columns=variable_names)
# 5 bins
data['age_bins'] =pd.cut(data["age"],5)
I assumed 5 bins; you can pass fixed bin edges if you like. You might have to deal with NaNs.
#remove the age column
data_age_bins_mean = data.groupby(['age_bins']).mean().drop(columns='age')
col_names =variable_names[1:]
#creating a dict for each age_bin containing key value pairs for the top 5 values
age_bin_top_vars =data_age_bins_mean.apply(lambda x: {col_names[i]: x[i] for i in np.argsort(x)[::-1][:5]},
axis =1)
So, np.argsort (reversed) is used to pick the positions of the five largest mean values in each row.
Output age_bin_top_vars
age_bins
(0.902, 20.6] {u'var_7': 52.523809523809526, u'var_6': 51.28...
(20.6, 40.2] {u'var_7': 65.36842105263158, u'var_6': 57.157...
(40.2, 59.8] {u'var_6': 52.0, u'var_3': 54.04347826086956, ...
(59.8, 79.4] {u'var_5': 52.2, u'var_14': 56.3, u'var_12': 5...
(79.4, 99.0] {u'var_4': 57.11764705882353, u'var_13': 58.35...
Convert the series to json
age_bin_top_vars.to_json()
'{"{"closed":"right","closed_right":true,"length":19.698,"open_left":true,"right":20.6}":{"var_7":52.5238095238,"var_6":51.2857142857,"var_14":51.8095238095,"var_12":52.4285714286,"var_9":51.5238095238},"{"closed":"right","closed_right":true,"length":19.6,"open_left":true,"right":40.2}":{"var_7":65.3684210526,"var_6":57.1578947368,"var_4":55.9473684211,"var_12":56.3684210526,"var_8":53.9473684211},"{"closed":"right","closed_right":true,"length":19.6,"open_left":true,"right":59.8}":{"var_6":52.0,"var_3":54.0434782609,"var_2":55.6086956522,"var_9":55.8695652174,"var_8":52.6086956522},"{"closed":"right","closed_right":true,"length":19.6,"open_left":true,"right":79.4}":{"var_5":52.2,"var_14":56.3,"var_12":54.05,"var_10":50.3,"var_8":50.55},"{"closed":"right","closed_right":true,"length":19.6,"open_left":true,"right":99.0}":{"var_4":57.1176470588,"var_13":58.3529411765,"var_11":58.4117647059,"var_1":65.8235294118,"var_10":60.3529411765}}'
age_bins is a categorical index; if instead of keys in the style of
"{"closed":"right","closed_right":true,"length":19.698,"open_left":true,"right":20.6}"
you want keys like '(0.902, 20.6]', convert age_bins to string first.
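As a possibly simpler alternative with the same idea, Series.nlargest keeps the top five means per row and already returns them in descending order; converting the index to string gives cleaner JSON keys:
import json

data_age_bins_mean.index = data_age_bins_mean.index.astype(str)
top5_per_bin = data_age_bins_mean.apply(lambda row: row.nlargest(5).to_dict(), axis=1)

with open('age_bin_top_vars.json', 'w') as f:
    json.dump(top5_per_bin.to_dict(), f)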
H: What is the num_initial_points argument for Bayesian Optimization with Keras Tuner?
I've implemented the following code to run Keras-Tuner with Bayesian Optimization:
def model_builder(hp):
NormLayer = Normalization()
NormLayer.adapt(X_train)
model = Sequential()
model.add(Input(shape=X_train.shape[1:]))
model.add(NormLayer)
for i in range(hp.Int('conv_layers',2,4)):
model.add(Conv1D(hp.Choice(f'kernel_{i}_nr',values=[16,32,64]), hp.Choice(f'kernel_{i}_size',values=[3,6,12]), strides=hp.Choice(f'kernel_{i}_strides',values=[1,2,3]), padding="same"))
model.add(BatchNormalization(renorm=True))
model.add(Activation('relu'))
model.add(MaxPooling1D(2,strides=2, padding="valid"))
model.add(Flatten())
model.add(Dropout(hp.Choice('dropout_flatten',values=[0.0,0.25,0.5])))
for i in range(hp.Int('dense_layers',1,2)):
model.add(Dense(hp.Choice(f'dense_{i}_size',values=[500,1000])))
model.add(Activation('relu'))
model.add(Dropout(hp.Choice(f'dropout_{i}_others',values=[0.0,0.25,0.5])))
model.add(Dense(hp.Choice('dense_size_last',values=[100,200])))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation('softmax'))
opt = Adam(learning_rate=lrn_rate_init)
earlystop = EarlyStopping(monitor='val_loss',patience=8,restore_best_weights=True)
model.compile(loss='categorical_crossentropy',optimizer=opt,metrics=['accuracy'])
return model
tuner = BayesianOptimization(model_builder,objective='val_loss',num_initial_points=??,max_trials=tuner_trials,directory='BayesianOptimization/',project_name='BayesianOptimization')
What do the num_initial_points argument does exactly and what should I set it to in my case?
Reading the documentation I see the description
The number of randomly generated samples as initial training data for Bayesian optimization
but not being an expert I don't exactly get what it means and how it will impact the optimization process.
AI: The Bayesian optimization algorithm selects points to test based on a balance between exploring uncertain regions and exploiting high-performing regions. But before you've tested very many points, there's not much information to go on. So, in this implementation you can specify a number of completely-at-random points to evaluate to start, and after that the actual Bayesian exploration begins.
Setting a high number of random points gives you guaranteed "exploration" points; indeed, in the documentation for the package bayesian-optimization, we find:
init_points: How many steps of random exploration you want to perform. Random exploration can help by diversifying the exploration space.
(The default is 5 in that package, and 3 times the dimension in keras-tuner.)
That said, you can also make the algorithm focus more or less on exploration/exploitation directly, using the beta parameter (kappa in bayesian-optimization, see this example notebook). |
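So in your call you could either omit the argument (letting it default to 3 times the number of hyperparameter dimensions) or set it explicitly, for example (10 here is an arbitrary illustrative value, everything else unchanged from your snippet):
tuner = BayesianOptimization(model_builder, objective='val_loss',
                             num_initial_points=10, max_trials=tuner_trials,
                             directory='BayesianOptimization/',
                             project_name='BayesianOptimization')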
H: Where to get models with weights instead of only weights? What's the purpose of .h5 files?
I have downloaded .h5 files from qubvel/resnet and qubvel/efficientnet. I was trying to use some models as a backbone for my model but I'm getting the following error:
ValueError: No model found in the config file.
As explained here this is because the .h5 file contains only weights, not a model.
So those .h5 files are only weights. What's the purpose of having only weights without architecture?
I was trying to do following code:
resnet18_path_to_file = "models/resnet18.h5"
resnet18 = tf.keras.models.load_model(resnet18_path_to_file)
resnet18.compile()
inputs = resnet18.input
outputs = resnet18.layers[-2].output
return tf.keras.models.Model(inputs=inputs, outputs=outputs, name="custom_resnet18")
AI: There are several options when saving and loading a keras model, as explained at https://www.tensorflow.org/guide/keras/save_and_serialize:
save the whole configuration, including the architecture, weights and even the last training state
but the model architecture and the weights can also be saved as independent files, and that is what you have probably loaded: an .h5 with only the weights. In that case you also need the .json with the model architecture, based on the snippet below:
-model architecture and model weights saved to disk as separate files, which is what the user who trained the model might have done:
json_config = model.to_json()
with open('model_config.json', 'w') as json_file:
json_file.write(json_config)
# weights saving to disk
model.save_weights('path_to_my_weights.h5')
So you need to do the following:
-load the model as follows (you need the model JSON as well):
with open('model_config.json') as json_file:
json_config = json_file.read()
new_model = keras.models.model_from_json(json_config)
new_model.load_weights('path_to_my_weights.h5') |
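Once the model is rebuilt this way, your original backbone snippet should work on it; a sketch reusing new_model from above and dropping the final classification layer, as in your code:
import tensorflow as tf

backbone = tf.keras.models.Model(inputs=new_model.input,
                                 outputs=new_model.layers[-2].output,
                                 name="custom_resnet18")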