H: breaking joined words into meaningful ones during text mining
I'm performing an aspect-based sentiment on consumer complaints. I'm tokenizing at the sentence level.
tidy_complaints <- tidy_complaints %>%
unnest_tokens(
output = sentence,
input = consumer_complaint_narrative,
token = 'sentences'
) %>%
mutate(sentence_id = 1:n())
However, some of the complaints contain words that are joined together, such as "trailcertified" or "creditorcompany".
Is there any need to break these words up into meaningful words? If yes, how do I accomplish this and still keep the sentence intact?
AI: Compound splitting is not trivial, but there are solutions that kind of work (as you can read in the link @Aditya shared).
Another way of dealing with the matter would be to tokenize the strings into character n-grams. |
H: How to use a RBF kernel to create a "Kernel Space" using the similarity of each pair of point?
I am trying to use Semi-Unsupervised clustering using reinforcement learning following this paper.
Assume I have n data-points each of which has d dimensions. I also have c pairwise constraints of whether two elements are supposed to be in the same cluster or not.
The paper states that "the original input dimension of the dataset is
appended to a kernel space with a similarity metric to each
pairwise point in the set of constraints" creating a d + 2c dimensional space. They also say the kernel they use is an RBF.
Can anybody explain to me what they mean? How can you use the similarity of a pair of elements to generate a whole new dimension for every data point in the dataset? What exactly is a "kernel space" in this case? If I try to Google it, I only get results about operating system kernels.
AI: Let's take a simple example, a binary classification using only 2 dimensions with only 8 observations. Consider the first 8 rows of the dataset. It is impossible to linearly separate the data using a hyper-plane.
So, we can use a kernel transform. The books usually drop you off right here (no help). They don't tell us how to transform. Transform using what? How many dimensions? What constraints?
We could consider the simplest transformation, f(x,y)=xy. We could take this transformation and now consider it a new dimension. So we started with (D2+Class) features and we could add 1 dim (to get D3+Class) if we chose.
Q. Does this help? I suggest plotting this dataset out by hand. Using f(x,y)=xy is not very helpful.
So let's try another, f'(x,y)=x^2 + y^2. We started with (D2+Class) features and added another dimension. We could conceivably plot D4+C, but using dimensions 1, 2 & 4 is easier to visualize. I suggest plotting {d1,d2,d4} by hand (for effect) or with 3D software.
Now ask yourself, Is this new situation linearly separable?
As for the constraints, do you remember Lagrange multipliers? Well, if you want to use a Gaussian-like constraint, we could use a Radial Basis Function (RBF).
Where:
RBF: K(x,y) = exp(-gamma * ||x − y||^2), gamma > 0
The first part is kernel 101. The Lagrangian is kernel 400. lol
dataset
| row | x (d1) | y (d2) | class (C) | d3: f1(x,y)=xy | d4: f'(x,y)=x^2+y^2 |
| ---:|-------:|-------:|----------:|---------------:|--------------------:|
| 1 | 1| 0| 0| 0| 1|
| 2 | 0| 1| 0| 0| 1|
| 3 | -1| 0| 0| 0| 1|
| 4 | 0| -1| 0| 0| 1|
| 5 | 2| 0| 1| 0| 4|
| 6 | 0| 2| 1| 0| 4|
| 7 | -2| 0| 1| 0| 4|
| 8 | 0| -2| 1| 0| 4| |
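A minimal numeric sketch of the idea, using the toy dataset above: add the two hand-crafted dimensions, then append an RBF similarity to each point of a constraint pair (the gamma value and the choice of constraint pair are illustrative assumptions):
import numpy as np

# the 8 toy points (x, y) from the table above
X = np.array([[1, 0], [0, 1], [-1, 0], [0, -1],
              [2, 0], [0, 2], [-2, 0], [0, -2]], dtype=float)

d3 = X[:, 0] * X[:, 1]            # f(x, y) = x*y      -> not helpful
d4 = X[:, 0]**2 + X[:, 1]**2      # f'(x, y) = x^2+y^2 -> linearly separable

def rbf(a, b, gamma=0.5):
    # RBF similarity K(a, b) = exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b)**2))

# one pairwise constraint, e.g. "row 1 and row 5 must not be in the same cluster"
pair = (X[0], X[4])
# each constraint contributes 2 new dimensions: similarity to each point of the pair
extra = np.array([[rbf(x, pair[0]), rbf(x, pair[1])] for x in X])

X_kernel_space = np.hstack([X, extra])   # original d dims + 2 dims per constraint
print(X_kernel_space.shape)              # (8, 4)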
H: I am not getting classification output by predict_generator()
I am trying to classify pre-downloaded images from my dataset into the "Rock", "Paper", and "Scissor" classes, but I am getting numbers as output.
I have already divided the dataset to Train folder where 3 folders with the names of the categories exist.
Is it because I am not using a convolutional neural network?
I don't understand what to do here.
Code:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Flatten
import numpy as np
train_directory = 'D:\D_data\Rock_Paper_Scissors\Train'
training_datgagen = ImageDataGenerator(rescale = 1./255)
training_generator = training_datgagen.flow_from_directory(
train_directory,
target_size = (28,28),
class_mode = 'categorical', classes = ["Rock", "Paper" , "Scissor"])
validation_directory = 'D:\D_data\Rock_Paper_Scissors\Train'
validation_datagen = ImageDataGenerator(rescale= 1./255)
validation_generator = validation_datagen.flow_from_directory(
validation_directory,
target_size = (28,28),
class_mode = 'categorical',
classes = ["Rock", "Paper" , "Scissor"]
)
model = Sequential()
model.add(Flatten(input_shape = (28,28,3)))
model.add(Dense(128,activation = 'relu'))
model.add(Dense(64, activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(3, activation = 'softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy',metrics = ['accuracy'],)
filenames = validation_generator.filenames
nb_samples = len(filenames)
desired_batch_size = 1
model.fit_generator(training_generator,epochs=20,validation_data = validation_generator)
predict = model.predict_generator(validation_generator, steps = np.ceil(nb_samples/desired_batch_size))
print(predict)
Output:
[[2.2792105e-01 7.7206087e-01 1.8017196e-05]
[2.8221733e-03 9.9715865e-01 1.9176909e-05]
[5.1788869e-04 9.9948126e-01 8.8551565e-07]
...
[9.9283433e-01 7.1645100e-03 1.1474091e-06]
[9.9999678e-01 3.2666808e-06 4.5816462e-09]
[9.9640071e-01 3.5960779e-03 3.1497514e-06]]
AI: model.predict_generator returns a numpy array of class probabilities. You can take the class with the maximum probability using numpy's argmax function, or use predict_classes. This link might help-
How to predict class label from class probability given by predict_generator for testdata? |
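A short sketch of that suggestion, reusing the variables from the question (note: the validation generator must be created with shuffle=False, otherwise the prediction rows won't line up with the file names):
import numpy as np

# index of the highest-probability class for each row of predictions
predicted_class_indices = np.argmax(predict, axis=1)

# map indices back to the class names known by the generator
labels = {v: k for k, v in validation_generator.class_indices.items()}
predicted_labels = [labels[i] for i in predicted_class_indices]

print(list(zip(filenames, predicted_labels))[:5])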
H: What is the model architectural difference between transductive GCN and inductive GraphSAGE?
What is the difference in the model design?
It seems the main difference is that GraphSAGE samples the data.
But what is the difference in model architecture?
AI: The main novelty of GraphSAGE is a neighborhood sampling step (but this is independent of whether these models are used inductively or transductively). You can think of GraphSAGE as GCN with subsampled neighbors.
In practice, both can be used inductively and transductively.
The title of the GraphSAGE paper ("Inductive representation learning") is unfortunately a bit misleading in that regard. The main benefit of the sampling step of GraphSAGE is scalability (but at the cost of higher variance gradients). |
H: Why is my training accuracy 0.0?
The sizes of the true labels and the predicted labels are the same, yet the training accuracy is 0.0.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
Data Preprocessing
train=pd.read_csv(r'C:\Users\yashd\Downloads\Datasets\titanic\train.csv')
train=train.dropna()
y_train=np.array(train['Survived'])
train=train.drop('Survived',axis=1) #removing the label from the data
train=train.drop('PassengerId',axis=1) #removing irrelevant features from the training data
train=train.drop('Cabin',axis=1)
train=train.drop('Name',axis=1)
train=train.drop('Ticket',axis=1)
train['Sex']=np.where(train['Sex']=='male',1,0) #assigning a value of 1 to male and 0 to female
train['Embarked']=np.where(train['Embarked']=='S',1,np.where(train['Embarked']=='Q',2,3))
train['Fare']=(train['Fare']-train['Fare'].mean())/train['Fare'].var()
train['Age']=(train['Age']-train['Age'].mean())/train['Age'].var()
x_train=np.array(train)
x_train=x_train.T
y_train=y_train.reshape(1,-1)
Neural network with 2 hidden layers: 128 neurons in the first hidden layer and 64 in the second hidden layer. The output layer consists of a single sigmoid neuron.
class FNN:
def __init__(self):
self.W1=None
self.b1=None
self.W2=None
self.b2=None
self.W3=None
self.b3=None
def sigmoid(self,x):
return 1/(1+np.exp(-x))
def forward_prop(self,x):
self.Z1=np.dot(self.W1,x)+self.b1
self.A1=np.tanh(self.Z1)
self.Z2=np.dot(self.W2,self.A1)+self.b2
self.A2=np.tanh(self.Z2)
self.Z3=np.dot(self.W3,self.A2)+self.b3
self.A3=self.sigmoid(self.Z3)
return self.A3
def back_prop(self,x,y):
self.forward_prop(x)
m=x.shape[1]
self.dZ3=self.A3-y
self.dW3=np.dot(self.dZ3,self.A2.T)/m
self.db3=np.sum(self.dZ3,axis=1,keepdims=True)/m
self.dZ2=np.dot(self.W3.T,self.dZ3)*(1-self.A2**2)
self.dW2=np.dot(self.dZ2,self.A1.T)/m
self.db2=np.sum(self.dZ2,axis=1,keepdims=True)/m
self.dZ1=np.dot(self.W2.T,self.dZ2)*(1-self.A1**2)
self.dW1=np.dot(self.dZ1,x.T)/m
self.db1=np.sum(self.dZ1,keepdims=True)/m
def fit(self,x,y,epochs=100,learning_rate=0.01,plot=True,disp_loss=False):
np.random.seed(4)
self.W1=np.random.rand(128,x.shape[0])
self.b1=np.zeros((128,1))
self.W2=np.random.randn(64,128)
self.b2=np.zeros((64,1))
self.W3=np.random.randn(1,64)
self.b3=np.zeros((1,1))
m=x.shape[1]
loss=[]
for i in range(epochs):
self.back_prop(x,y)
self.W1-=learning_rate*self.dW1
self.b1-=learning_rate*self.db1
self.W2-=learning_rate*self.dW2
self.b2-=learning_rate*self.db2
self.W3-=learning_rate*self.dW3
self.b3-=learning_rate*self.db3
logprobs=y*np.log(self.A3)+(1-y)*np.log(1-self.A3)
cost=-(np.sum(logprobs))/m
loss.append(cost)
e=np.arange(1,epochs+1)
if plot:
plt.plot(e,loss)
plt.show()
if disp_loss:
print(loss)
def predict(self,x):
y=np.where(self.forward_prop(x)>=0.5,1,0)
return y
F=FNN()
F.fit(x_train,y_train)
y_pred=F.predict(x_train)
print('Predicted Label:',y_pred)
print('True Label:',y_train)
acc=accuracy_score(y_train,y_pred)
print(acc)
Output
Loss Plot
AI: Based on your screenshot, it's quite clear that the accuracy isn't 0.0 since the first two predictions match the true labels. So something must be wrong with how the accuracy is calculated.
If you go to sklearn's documentation, you'll see that accuracy_score requires 1-d arrays, while it seems that you are feeding it 2-d arrays. My guess is that right now it compares the elements of your arrays and checks if they are identical. Because you feed it a 2-d array, it checks whether all predictions match, which, unless you are perfectly correct, will always yield 0.0.
Doing the following should fix your issue:
acc=accuracy_score(y_train[0], y_pred[0]) |
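A tiny illustration of the shape issue (toy labels, not the Titanic data): with 2-d inputs, accuracy_score treats each row as one multilabel sample and requires an exact match, which is why the result collapses to 0.0.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([[1, 0, 1, 1]])   # shape (1, 4): one 2-d row, like y_train here
y_pred = np.array([[1, 0, 0, 1]])

print(accuracy_score(y_true, y_pred))                   # 0.0 (the single row doesn't match exactly)
print(accuracy_score(y_true.ravel(), y_pred.ravel()))   # 0.75 (element-wise, as intended)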
H: Hill Climbing Algorithm - Optimum Step Size
I am implementing a standard hill climbing algorithm to optimise hyper-parameters for a predictive model. The hill climbing algorithm is being applied as part of a two-stage approach:
Apply grid search with large values applied to the hyper-parameter to find a 'best' starting point
Apply hill climbing algorithm in this space with a large number of different, random start points to find a local optimum
The large values that are passed in the first step, the grid search are
1*10^seq(-4, 5, by=1)
[1] 1e-04 1e-03 1e-02 1e-01 1e+00 1e+01 1e+02 1e+03 1e+04 1e+05
So I am struggling to choose an optimum step size that isn't so large that it skips the peak, nor so small that it takes too long to converge. I don't think a single value for the step size is appropriate for all values passed in the grid search since, for example, the difference between 1e-04 and 1e-03 is vastly different from that between 1e+03 and 1e+04. So I want the step size to be proportionate to the grid-search start point. I know the search space I'm looking at is
grid_search_optimum/10 to grid_search_optimum*10
My question, therefore, is what is an accepted value for the step size in relation to the search space? I haven't found any opinions in literature around this and, in general, the only advice is to choose a step size that is "sufficiently small". Any advice or pointers to relevant papers would be greatly appreciated!
AI: What usually worked for me, when the search space is known, is an annealing rate (divide the step size by the number of iterations) to gradually decrease/increase the step size on the way to the local max/min. The drawbacks are that it might get stuck in a local optimum and need some "momentum" to keep going, and that it might be very slow; however, that doesn't seem to be the concern in your question.
Please refer-
https://courses.cs.washington.edu/courses/csep573/11wi/lectures/04-lsearch.pdf
This paper proposes self adaptive step size search-
https://link.springer.com/chapter/10.1007%2F3-540-34783-6_56 |
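A rough sketch of that idea in Python (the step schedule, the init_frac value, and the toy objective are illustrative assumptions, not taken from the linked papers): the step is kept proportional to the current point, so start points like 1e-03 and 1e+03 are explored on comparable relative scales, and it is annealed over the iterations.
import numpy as np

def hill_climb(objective, start, n_iter=200, init_frac=0.5, seed=0):
    """Hill climb whose step size is a fraction of the current point,
    annealed linearly toward zero over the iterations."""
    rng = np.random.default_rng(seed)
    best_x, best_f = start, objective(start)
    for i in range(n_iter):
        step = init_frac * abs(best_x) * (1 - i / n_iter)   # proportional + annealed
        candidate = best_x + rng.uniform(-1, 1) * step
        f = objective(candidate)
        if f > best_f:
            best_x, best_f = candidate, f
    return best_x, best_f

# toy example: maximize an objective starting from a grid-search optimum of 1e-2
print(hill_climb(lambda x: -np.log10(x)**2 if x > 0 else -np.inf, start=1e-2))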
H: Implementation of BERT using Tensorflow vs PyTorch
BERT is an NLP model developed by Google. The original BERT model was built by the TensorFlow team; there is also a version of BERT built using PyTorch. What is the main difference between these two models?
AI: There are not only 2, but many implementations of BERT. Most are basically equivalent.
The implementations that you mentioned are:
The original code by Google, in Tensorflow. https://github.com/google-research/bert
Implementation by Huggingface, in Pytorch and Tensorflow, that reproduces the same results as the original implementation and uses the same checkpoints as the original BERT article. https://github.com/huggingface/transformers
These are the differences regarding different aspects:
In terms of results, there is no difference in using one or the other, as they both use the same checkpoints (same weights) and their results have been checked to be equal.
In terms of reusability, HuggingFace library is probably more reusable, as it is designed specifically for that. Also, it gives you the freedom of choosing TensorFlow or Pytorch as deep learning framework.
In terms of performance, they should be the same.
In terms of community support (e.g. asking questions in github or stackoverflow about them), HuggingFace library is better suited, as there are a lot of people using it.
Apart from BERT, the transformers library by HuggingFace has implementations for lots of models: OpenAI GPT-2, RoBERTa, ELECTRA, ... |
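As a small illustration of the reusability point, this is roughly how the same pretrained checkpoint is loaded with the HuggingFace transformers library in either framework (a sketch; it assumes both PyTorch and TensorFlow are installed):
from transformers import BertTokenizer, BertModel, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("BERT in two frameworks", return_tensors="pt")

pt_model = BertModel.from_pretrained("bert-base-uncased")     # PyTorch weights
tf_model = TFBertModel.from_pretrained("bert-base-uncased")   # same checkpoint, TF version

pt_outputs = pt_model(**inputs)   # results match across frameworks up to numerical precision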
H: Does this mean underfitting?
I am training model to classify fruit images belonging to 60 classes. I have this result:
Validation accuracy is greater than training accuracy. Does this mean underfitting? If yes, can I fix this by adding more layers to the neural network or by increasing the number of neurons?
AI: It seems weird.
Your validation has a higher score than your training. This literally means that your model performs better on unseen data than on the data it sees.
Typical underfitting means you achieve similarly poor scores on train and test.
In my opinion, since you are not providing much information, my guess is that you are not splitting the data correctly. It might be for a lot of reasons:
The test is too small or too easy to predict
There is a temporal dependency and you are not using it (data leakage).
There are groups in your dataset and you are splitting by groups...
And a thousand more. From the visualizations that you are adding, my guess is that your train/test split is not performed correctly. |
H: Address parsing using spaCy
I am trying to parse addresses from various documents using spaCy using NER but the results are not so accurate.
I know this is a bit of a generic question, but it would be a great help if I could get references to any past work, good articles or techniques to apply to this.
AI: Please look at my comment to add more information to your post. Based on the information you provided, here are my remarks:
SpaCy is trained to find locations, not addresses per se
If you use a "common" language, SpaCy is trained using WikiNER data, where locations aren't addresses but rather geographical places like city names, country names, etc. So it's quite normal that it can't detect full addresses.
You likely need to train your own entity recognizer. They detail how to do this on their website, including code samples: https://spacy.io/usage/training#ner
Don't underestimate SpaCy's rule-based matching
Is it a fancy neural network? No. Does it matter? Also no. SpaCy allows you to create rules to find entities, and addresses generally follow a pattern, which makes rule-based matching a good fit. |
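A tiny rule-based sketch of that last point (spaCy v3 Matcher signature; the pattern and the street-suffix list are illustrative, not exhaustive):
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

pattern = [
    {"LIKE_NUM": True},                    # house number
    {"IS_ALPHA": True, "OP": "+"},         # one or more street-name words
    {"LOWER": {"IN": ["street", "st", "avenue", "ave", "road", "rd", "blvd"]}},
]
matcher.add("ADDRESS", [pattern])

doc = nlp("Please ship it to 221 Baker Street before Friday.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)             # -> "221 Baker Street"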
H: Why is my loss increasing in gradient descent?
When the learning rate is 0.01 the loss seems to be decreasing, whereas when I increase the learning rate even slightly, the loss increases. Why does this happen? Are the gradients calculated wrongly?
Neural network with 2 hidden layers: 128 neurons in the first hidden layer and 64 in the second hidden layer. The output layer consists of a single sigmoid neuron.
class FNN:
def __init__(self):
self.W1=None
self.b1=None
self.W2=None
self.b2=None
self.W3=None
self.b3=None
def sigmoid(self,x):
return 1/(1+np.exp(-x))
def forward_prop(self,x):
self.Z1=np.dot(self.W1,x)+self.b1
self.A1=np.tanh(self.Z1)
self.Z2=np.dot(self.W2,self.A1)+self.b2
self.A2=np.tanh(self.Z2)
self.Z3=np.dot(self.W3,self.A2)+self.b3
self.A3=self.sigmoid(self.Z3)
return self.A3
def back_prop(self,x,y):
self.forward_prop(x)
m=x.shape[1]
self.dZ3=self.A3-y
self.dW3=np.dot(self.dZ3,self.A2.T)/m
self.db3=np.sum(self.dZ3,axis=1,keepdims=True)/m
self.dZ2=np.dot(self.W3.T,self.dZ3)*(1-self.A2**2)
self.dW2=np.dot(self.dZ2,self.A1.T)/m
self.db2=np.sum(self.dZ2,axis=1,keepdims=True)/m
self.dZ1=np.dot(self.W2.T,self.dZ2)*(1-self.A1**2)
self.dW1=np.dot(self.dZ1,x.T)/m
self.db1=np.sum(self.dZ1,keepdims=True)/m
def fit(self,x,y,epochs=100,learning_rate=0.01,plot=True,disp_loss=False):
np.random.seed(4)
self.W1=np.random.rand(128,x.shape[0])
self.b1=np.zeros((128,1))
self.W2=np.random.randn(64,128)
self.b2=np.zeros((64,1))
self.W3=np.random.randn(1,64)
self.b3=np.zeros((1,1))
m=x.shape[1]
loss=[]
for i in range(epochs):
self.back_prop(x,y)
self.W1-=learning_rate*self.dW1
self.b1-=learning_rate*self.db1
self.W2-=learning_rate*self.dW2
self.b2-=learning_rate*self.db2
self.W3-=learning_rate*self.dW3
self.b3-=learning_rate*self.db3
logprobs=y*np.log(self.A3)+(1-y)*np.log(1-self.A3)
cost=-(np.sum(logprobs))/m
loss.append(cost)
e=np.arange(1,epochs+1)
if plot:
plt.plot(e,loss)
plt.title('LOSS PLOT')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
if disp_loss:
print(loss)
def predict(self,x):
y=np.where(self.forward_prop(x)>=0.5,1,0)
return y
F=FNN()
F.fit(x_train,y_train)
y_pred=F.predict(x_train)
Output
Learning Rate:0.01
Learning Rate:1
AI: Based on your plots, it doesn't seem to be a problem in your case (see my comment). The reason behind that spike when you increase the learning rate is very likely due to the following.
Gradient descent can be simplified using the image below.
Your goal is to reach the bottom of the bowl (the optimum) and you use your gradients to know in which direction to go (in this simplistic case, should you go left or right). The gradient tells you in which direction to go, and you can view your learning rate as the "speed" at which you move. If your learning rate is too small, it can slow down the training. If your learning rate is too high, you might go in the right direction, but go too far and end up in a higher position in the bowl than previously. That's called diverging.
Also, good to note that it could be completely normal that your loss doesn't always decrease. This is particularly true if you use mini-batch gradient descent. In that scenario, your gradient may not always be completely accurate, and you might simply make a step in the wrong direction every once in a while.
I hope this explanation helps! |
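A tiny numerical illustration of the "bowl" analogy (a toy 1-d quadratic, not your network): minimizing f(w) = w^2, whose gradient is 2w, shows how a small learning rate converges while a too-large one overshoots and diverges.
def descend(lr, w0=1.0, steps=10):
    # gradient descent on f(w) = w**2, whose gradient is 2*w
    w = w0
    trajectory = [w]
    for _ in range(steps):
        w = w - lr * 2 * w
        trajectory.append(round(w, 4))
    return trajectory

print(descend(lr=0.1))   # |w| shrinks toward 0: converges
print(descend(lr=1.1))   # |w| grows every step: overshoots the bottom and diverges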
H: Clustering on categorical attributes
I have a dataset with only 2 categorical attributes out of 9. How can I perform a clustering analysis on it?
I am using R. Do you have any advice about instructions, how to do it, topics, ...?
here's my dataset
thanks
AI: Here is a nice implementation for mixed-type data in R-
https://dpmartin42.github.io/posts/r/cluster-mixed-types
This question right here-
K-Means clustering for mixed numeric and categorical data
and a Discussion Thread of Kaggle-
https://www.kaggle.com/general/19741
There are ways to either map your categorical data to a numeric type, and then go about business as usual, or to choose similarity measures that work for categorical data; in that case you have options to choose from, such as counting frequencies, etc. |
H: Linear vs Non linear regression (Basic Beginner)
So my doubt is basically this: in linear regression, we try to fit a straight line or a curve to a given training set. Now, I believe that whenever the number of features (independent variables) increases, the number of parameters also increases. Hence computing these parameters is computationally expensive. So, I guess that's the reason we move to non-linear? Is my understanding right?
And my next doubt is about overfitting in linear regression, where we say that the model memorizes. What I understand is that the parameters are adjusted in such a way that the model can only predict for the given inputs and will give bad results otherwise.
And the reason is that we haven't fitted a generalized line or curve, but rather one that passes through all the output values. Is my understanding correct?
AI: Whether you use a linear model or a non-linear model depends on whether the relationship between the features and the target is (approximately) linear, not on the number of features. If a linear function of the features can determine the dependent variable, you would use linear regression irrespective of the number of features. Since real-world relationships are rarely linear and linear regression does not provide accurate results on such data, non-linear regression is used.
And to answer your second question, yes you are correct. Overfitting occurs when your model has a very high accuracy for training data but a considerably low accuracy for validation/test data. It happens because the parameters are adjusted completely based on your training set, so there is not a generalised curve. Hence on newer data, the model would perform badly. |
H: Data analysis PCA
I have a question about the functioning of PCA. I have a dataset with only 2 categorical attributes out of 9. Is it good to calculate pca between those two? Does it help me understanding anything special?
here's my dataset
thanks
AI: PCA is not recommended for categorical features. There are equivalent algorithms for categorical features like CATPCA and MCA. |
H: Gradient of a function in Python
I've defined a function in this way:
def qfun(par):
return(par[0]+atan(par[3])*par[1]+atan(par[4])*par[2])
How can I obtain the gradient of this function for only some of the elements (par[0:2]) at a specific point? I only find examples with a single "x", so for those cases it is simple, but when your function has more parameters, what should I do?
AI: Several options:
You can use the definition of the derivative to get an approximation:
import numpy as np
def f(x):
return x[0]**2 + 3*x[1]**3
def der(f, x, der_index=[]):
# der_index: indices of the variables w.r.t. which to take the gradient
epsilon = 2.34E-10
grads = []
for idx in der_index:
x_ = x.copy()
x_[idx]+=epsilon
grads.append((f(x_) - f(x))/epsilon)  # forward-difference approximation
return grads
print(der(f, np.array([1.,1.]), der_index=[0, 1]))
If you can solve it analytically, it is better to write the derivative function yourself.
You can also use symbolic programming, like in Matlab, with the library sympy https://towardsdatascience.com/taking-derivatives-in-python-d6229ba72c64
Another way to do it is going for the "differentiable programming" paradigm or "software 2.0" |
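For the symbolic option, here is a small sympy sketch applied to the qfun from the question, differentiating only with respect to the first three parameters and evaluating at a specific (made-up) point:
import sympy as sp

p = sp.symbols('p0:5')                                   # p0 ... p4
qfun = p[0] + sp.atan(p[3]) * p[1] + sp.atan(p[4]) * p[2]

grad = [sp.diff(qfun, v) for v in p[:3]]                 # [1, atan(p3), atan(p4)]
point = {p[0]: 1.0, p[1]: 2.0, p[2]: 3.0, p[3]: 0.5, p[4]: -0.5}
print([float(g.subs(point)) for g in grad])              # numeric gradient at that point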
H: Keras Early Stopping: Monitor 'loss' or 'val_loss'?
I often use "early stopping" when I train neural nets, e.g. in Keras:
from keras.callbacks import EarlyStopping
# Define early stopping as callback
early_stopping = EarlyStopping(monitor='loss', patience=5, mode='auto', restore_best_weights=True)
# ...THE MODEL HERE...
# Call early stopping in .fit
history = model.fit_generator(..., callbacks=[early_stopping])
Question: I often wonder if it is better to monitor the loss (monitor='loss') or the validation loss (monitor='val_loss'). Are there some takeaways from the literature? What is best practice?
My intuition would be that monitoring the validation loss gives a more direct feedback of the learning process, since in case the validation loss does not increase any further after some epochs, there seems to be little more to learn.
On the other hand - when I use dropout for instance - the loss will often "lag behind" the validation loss (loss $>$ val_loss). In this case, I believe that there still is something to learn for the model, even if this does not immediately translate into improvements in the validation loss.
AI: Validation loss. Overall training loss should keep decreasing so monitoring it isn't as meaningful. If your issue is noise in the validation loss, increase patience. |
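In practice that just means switching the monitored quantity and, if the validation loss is noisy, raising the patience; a sketch (the model and data variables are placeholders):
from keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=10,
                               mode='auto', restore_best_weights=True)

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=100,
                    callbacks=[early_stopping])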
H: Having trouble understanding the x and y axis in SVM when training and testing data
I wrote some code based on this article.
In the code in the article they have created a partition of 80 percent training data and 20 percent test data
#What percentage of data you want to keep for training
percentage = 80
partition = int(len(hog_features)*percentage/100)
Later they created the following variables; the data frame is a two-dimensional np array --- data_frame[hog_features, labels]:
x_train, x_test = data_frame[:partition,:-1], data_frame[partition:,:-1]
y_train, y_test = data_frame[:partition,-1:].ravel() , data_frame[partition:,-1:].ravel()
clf.fit(x_train,y_train)
Now what I don't understand is why, for the second dimension of the array (the labels), the x variables include everything except the last value, while the y variables include just the last element of the array, from what I can understand using array slicing (maybe I'm wrong).
AI: The answer to your question is simply: that is how training works.
The training set is a matrix in which the last column correspond to the labels and the rest correspond to the samples.
When slicing the matrix, you have two parts: the first one concerns the rows, the second one concerns the columns.
ROWS:
:partition means in fact the first 80% rows - this concerns the training set x_train, y_train
partition: means the last 20% rows - concerns the test set, x_test and y_test
COLUMNS:
:-1 mean all but the last column - concerns x_train and x_test
-1: mean just the last column - concerns y_train and y_test
Therefore, here you get for x_train and x_test everything but the last column. Be careful: in fact, 80% of the data goes to x_train (:partition) and 20% to x_test (partition:). Now, for the labels you get only the last column, and 80% of the rows are for y_train while 20% of the rows are for y_test.
One more detail: your x_train and y_train are 5D arrays, according to the documentation. |
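A toy run of the same slicing, so the shapes are visible (5 made-up samples with 3 feature columns plus 1 label column):
import numpy as np

data_frame = np.array([[1, 2, 3, 0],
                       [4, 5, 6, 1],
                       [7, 8, 9, 0],
                       [1, 3, 5, 1],
                       [2, 4, 6, 1]])
partition = int(len(data_frame) * 80 / 100)   # = 4

x_train, x_test = data_frame[:partition, :-1], data_frame[partition:, :-1]
y_train, y_test = data_frame[:partition, -1:].ravel(), data_frame[partition:, -1:].ravel()

print(x_train.shape, y_train.shape)   # (4, 3) (4,)
print(x_test.shape, y_test.shape)     # (1, 3) (1,)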
H: ValueError: Shapes are incompatible when fitting using ImageDataGenerator
I got this error ValueError: Shapes (None, 1) and (None, 3) are incompatible when training my Sequential model. I could not figure out which shapes are actually incompatible. This is the first time I do image classification. Here are my codes:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(3, activation='softmax'))
model.summary()
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
# this throws value error
model.fit(
train_generator,
steps_per_epoch=25,
epochs=20,
validation_data=valid_generator,
validation_steps=5,
verbose=2
)
I want the last Dense layer to return an array of 3 probability scores (large, medium, small).
Here is how I created the train_generator and valid_generator:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
horizontal_flip=True,
shear_range = 0.2,
fill_mode = 'nearest')
index = len(df)//10
df_train = df[index:]
df_valid = df[:index]
train_generator = image_datagen.flow_from_dataframe(
df_train,
x_col = 'png_image',
y_col = 'target',
target_size=(150, 150),
batch_size=4,
color_mode = 'rgb',
class_mode='sparse',
)
valid_generator = image_datagen.flow_from_dataframe(
df_valid,
x_col = 'png_image',
y_col = 'target',
target_size=(150, 150),
batch_size=4,
color_mode = 'rgb',
class_mode='sparse',
)
The dataframes look like this:
AI: Have you tried setting class_mode='categorical' in your generators?
I believe class_mode='sparse' works when your loss function is defined as 'sparse_categorical_crossentropy' whereas 'categorical_crossentropy' works for class_mode='categorical'. |
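Sketches of both fixes (either one should resolve the shape mismatch; apply only one of them):
# Option 1: keep class_mode='sparse' in the generators and switch the loss
# to match the integer labels they yield.
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Option 2: keep loss='categorical_crossentropy' and make the generators
# yield one-hot labels instead (same change for valid_generator).
train_generator = image_datagen.flow_from_dataframe(
    df_train,
    x_col='png_image',
    y_col='target',
    target_size=(150, 150),
    batch_size=4,
    color_mode='rgb',
    class_mode='categorical',
)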
H: Implementing the SVHN CNN architecture in Srivastava et al. 2014 Dropout paper
I am trying to implement the CNN architecture introduced in the Srivastava et al. 2014 Dropout paper (appendix B.2) for the SVHN dataset. For the moment I implemented only the convolutional layers part, without dropout or any regularization, following this description from the paper:
"The convolutional layers have 96, 128 and 256 filters respectively.
Each convolutional layer has a 5 × 5 receptive field applied with a stride of 1 pixel. Each
max pooling layer pools 3 × 3 regions at strides of 2 pixels."
This is my code, implemented in TensorFlow 2.0 with Keras API
from tensorflow.keras import layers, Sequential
model = Sequential(name= "fMap_svhn_DANN")
model.add(layers.Conv2D(filters= 96, kernel_size= 5, activation= 'relu', input_shape= (32,32,3)))
model.add(layers.MaxPool2D(pool_size= 3, strides= 2))
model.add(layers.Conv2D(filters=128, kernel_size= 5, activation= 'relu'))
model.add(layers.MaxPool2D(pool_size= 3, strides= 2))
model.add(layers.Conv2D(filters= 256, kernel_size= 5, activation= 'relu'))
model.add(layers.MaxPool2D(pool_size= 3, strides= 2))
model.add(layers.Flatten())
And this is the error I get:
Negative dimension size caused by subtracting 5 from 4 for 'conv2d_11/Conv2D' (op: 'Conv2D') with input shapes: [?,4,4,128], [5,5,128,256].
Any idea to help me ?
AI: This means that your input shape was too small by the time it reached the last Conv2D layer.
If we remove the last Conv2D and MaxPool2D layers, this is how the model looks:
You can see that the last output before the Flatten layer has height and width of 4, which is smaller than your kernel_size of 5 in the last Conv2D layer.
Heres what you can do:
Increase the input image height and width OR
Set padding='same' in the Conv2D layers |
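A sketch of the second suggestion: the same architecture, but with 'same' padding so the feature maps stay large enough for the 5x5 kernels (padding the pooling layers as well is my assumption, not something stated in the paper):
from tensorflow.keras import layers, Sequential

model = Sequential(name="fMap_svhn_DANN")
model.add(layers.Conv2D(96, 5, padding='same', activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPool2D(pool_size=3, strides=2, padding='same'))
model.add(layers.Conv2D(128, 5, padding='same', activation='relu'))
model.add(layers.MaxPool2D(pool_size=3, strides=2, padding='same'))
model.add(layers.Conv2D(256, 5, padding='same', activation='relu'))
model.add(layers.MaxPool2D(pool_size=3, strides=2, padding='same'))
model.add(layers.Flatten())
model.summary()   # no negative dimensions: the spatial size goes 32 -> 16 -> 8 -> 4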
H: Neural Networks to Output Metrics other than Prediction
I have a deep learning network that predicts values. However, along with the prediction I also want it to output some further metrics. For example, for each image I want it to give me the total sum of all pixels. How does one achieve this?
AI: You could create a custom callback function. Let's say you want to find the total sum of all pixels in your validation data. Here's how the callback function would look:
import numpy as np
from keras.callbacks import Callback
class F1Callback(Callback):
def __init__(self, val_gen, batch_size, valid_freq, freq=1):
super().__init__()
self.val_gen = val_gen
self.batch_size = batch_size
self.pixel_sum = {}
"""
pixel_sum will look like:
{1: {"1.jpg": 5, "2.jpg": 3, ...},
2: {...},
...}
"""
self.epoch = 0
def on_epoch_begin(self, epoch, logs={}):  # start a fresh dict for each epoch
self.epoch += 1
self.pixel_sum[self.epoch] = {}
def on_epoch_end(self, epoch, logs={}):
batches = len(self.val_gen)
# logic to store the pixel values from predictions to self.pixel_sum
for batch in range(batches):
xVal, yVal = next(self.val_gen)
# deriving pixel values after passing it through model
pixel_values = np.asarray(self.model.predict(xVal))
# some way of retrieving image paths from val_gen
# storing
for pixel_value, image_path in zip(pixel_values, image_paths):
self.pixel_sum[self.epoch][image_path] = pixel_value
return |
H: Is vanilla RNN suitable for time series prediction?
I read this document: https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/
It was pretty simple, but I don't understand how to use it to predict the next value in a sequence (for example, in trading systems).
In the above article it "loops" and returns a "bit" each cycle, but what if I need only ONE output (as in trading systems)?
I suppose that I also have to feed the "outputs" back in; is this right?
Thank you
AI: Absolutely yes, vanilla RNNs can be very good predictors for time series data. You can check this detailed official TensorFlow tutorial on time series forecasting, for example.
As they said in part 1:
the model will be given the last 20 recorded temperature observations, and needs to learn to predict the temperature at the next time step
This also gives you information about how to structure your data for training. Given an input sequence of given length, predict the next n steps of the series - where n can be 1 or more. The choice is very task-specific.
Assuming you are working with univariate time series (i.e. single series with no other variables) then you can shape your X_train as:
( number of observations , length of input series , 1 )
If you are working with multivariate datasets, then change 1 with your number of explanatory variables.
At each training iteration, the model will look at input data, and the actual outcome, and try to "learn" a mapping between x and y. |
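A minimal sketch with the shapes described above (toy sine-wave series, 20 past steps in, 1 step ahead out; all values are illustrative):
import numpy as np
from tensorflow.keras import layers, Sequential

series = np.sin(np.arange(1000) * 0.1)                    # toy univariate series
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X.reshape(-1, window, 1)                              # (observations, length, 1)

model = Sequential([
    layers.SimpleRNN(32, input_shape=(window, 1)),        # vanilla RNN cell
    layers.Dense(1)                                       # one-step-ahead forecast
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=5, verbose=0)

next_value = model.predict(X[-1:])                        # forecast the next point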
H: Machine learning on classifying speech
So, I have 9k one-second wav files of a person speaking. These are labeled by whether the person speaking is wearing a face mask or not. I am supposed to come up with a machine learning model to classify these recordings.
So far I tried using KNN on the mfcc feature of the audio. This gets around 56% accuracy on test data.
I also tried to convert the wav files to jpg spectograms and apply a CNN. This one gets 60% accuracy on test data.
But I don't have that much experience, and I am not sure which audio features would best help with this problem.
Also, could you recommend a machine learning model for this particular problem?
AI: If you have 4,500 examples of each category, you’re doing better than random guessing. This sounds like a hard problem where the examples will have only subtle differences, so that’s an accomplishment. (I assume you’re doing some kind of out-of-sample testing.) Consider how you do in classifying perhaps 20 examples of each. I would be curious to hear how you do. This is quite a different problem if you get 38/40 than 20/40.
The way to apply a CNN, though, would be to convert to a spectrogram and then run the CNN on the spectrogram array. Converting to a literal picture is unnecessary and might harm performance. There are many ways to convert your signal to two dimensions of time-frequency space. You’ve probably tried a Fourier transform. Check out wavelets.
Since you have time series data, consider recurrent neural nets and long short-term memory. These can be combined with CNN. You will find examples of CNN/LSTM code on GitHub, I’d imagine.
You have a fairly small data set, however, for deep learning approaches. Consider simpler models like logistic regression. Your in-sample performance may suffer while improving out-of-sample performance.
Finally, consider a proper scoring rule like Brier score. Frank Harrell has written much about this on the statistics Stack, Cross Validated. Shamelessly, I will mention that you may be interested in a question of mine on CV where I somewhat challenge the idea of proper scoring rules when there is a hard decision to be made: https://stats.stackexchange.com/questions/464636/proper-scoring-rule-when-there-is-a-decision-to-make-e-g-spam-vs-ham-email. My example comes from NLP, but there could be a speech situation (say text dictation, where, at some point, you have to decide to print “dad” or “bad” or “bicycle”). |
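For the spectrogram-array point above, a small sketch of how one clip could be turned into a 2-d array for a CNN, skipping the intermediate jpg step (librosa assumed available; the file name and parameter values are illustrative):
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=16000)                  # one 1-second clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                          # log scale, shape (64, frames)

x = log_mel[np.newaxis, ..., np.newaxis]                    # (1, 64, frames, 1) for a CNN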
H: How to plot an image from a series of bits in Python?
I am given a dataset of 0s and 1s for some handwritten letters (below is a series for one image) and I'd like to visualize them in Python. How can this be achieved?
'000000000000001010000000000000000000000000000111110000000000000000000000000001111000000000000000000000000000111100000000000000000000000000001111000000000000000000000000000111110000000000000000000000000011111100000000000000000000000000111111100000000000000000000000001111111000000000000000000000000011111110000000000000000000000000111111110000000000000000000000011111111100000000000000000000000111111111100000000000000000000001111111111000000000000000000000011111111110000000000000000000001111111111110000000000000000000011111111111100000000000000000000111111011111100000000000000000001111100111110000000000000000000111111000111110000000000000000001111100001111110000000000000000011111000001111100000000000000001111110000011111100000000000000011111000000111111100000000000000111110000000111111000000000000011111100000001111110000000000000111110000000001111110000000000001111100000000011111100000000000111111000000000011111100000000001111110000000000111111110000000011111100000001111111111111100000111111111111111111111111111100011111111111111111111111111111000111111111111111111111111111100001111111111111110000011111100000011111100000000000000111111000000111110000000000000001111110000011111100000000000000001111110000111110000000000000000011111100001111100000000000000000011111000111111000000000000000000111110001111110000000000000000001111110011111100000000000000000011111100111110000000000000000000011111001111100000000000000000000111110011111000000000000000000001111101111110000000000000000000011111111111100000000000000000000011111111110000000000000000000000111111111100000000000000000000001111111110000000000000000000000001110'
Do I need to take it octet by octet to get a pixel value?
AI: You could use at least these two options:
PIL library:
from PIL import Image
img = Image.new('1', (width, height)) #width and height of the original image
data=[int(ch) for ch in x] # x being your string of bits
img.putdata(data)
img.save('my_handwritten_number.png')
img.show()
You can see here an example of the whole process, from reading a hand-written image file, converting it to an array, and turning it back into an image:
import numpy as np
np_img=np_img.reshape(28900, )
width = np.sqrt(len(np_img))
from PIL import Image
img = Image.new('1', (int(width), int(width)))
data=[int(v) for v in np_img]
img.putdata(data)
img.save('my_hand_written_digit.png')
img.show()
getting as a result this reconstructed digit:
Nevertheless, I see your array is a 1D array whose square root is not an integer, so it seems it does not come from a square image; you need to be able to convert it to a matrix (via the Image.new method).
If I do it with your array, this is what I get:
Another option you have is with Keras:
import tensorflow.keras as keras
keras.preprocessing.image.array_to_img(np_img.reshape(170, 170, 1),
data_format=None, scale=True, dtype=None) |
H: How to build a classifier for determining if two pairs are a match?
I'm trying to build a classifier that can determine if two addresses are a match or non-match.
Let's assume I have a data set of address pairs that have a match or non-match label.
I'm new to ML so something may be going over my head but is this how I'm thinking of approaching the task:
Approach for building classifier
Segment each address into its subcomponents, such as street name, zipcode, etc.
Construct a comparison vector with one dimension for each field. Each field holds the value from a string similarity method such as Jaro-Winkler.
All my comparison vectors with an associated match label can now be fed into training a classifier
Predicting if a pair is a match
Build a list of candidate address pairs by using a method such as blocking
Construct a comparison vector the same way as the classifier was built above
The comparison vector for each candidate is now fed into the classifier and this will resolve to either a match or a non-match
My approach was inspired by research paper Machine learning innovations in address matching: A practical comparison of word2vec and CRFs but a lot of the things are vaguely described for a beginner.
To sum up I'm looking for any input on whether this approach holds up or I'm misunderstanding something completely
AI: Generally your approach looks good to me. Here are a few comments/suggestions:
You didn't mention how the set of labelled pairs is obtained. This part can be tricky when doing record linkage among a large set of N addresses, since it's very difficult to manually annotate all the N*N pairs. Bootstrapping is a common approach afaik.
The way the data is obtained may also have consequences about the proportion of positive/negative cases. In general the proportion positive cases is very low, and this can cause the usual problems of class imbalance. It's important not to train the model with a positive/negative ratio completely different from the one in the test set/in production.
It's often useful to diversify the similarity measure: instead of using only Jaro-Winkler, you can think of cosine-TFIDF, Levenshtein edit distance, and a lot of variants. In the same logic it's useful to vary the levels of comparison, e.g. characters, characters bigrams/trigrams, words. |
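A small standard-library sketch of the comparison-vector step described in the question (field names and the two similarity measures are illustrative; Jaro-Winkler or others can be swapped in):
from difflib import SequenceMatcher

def trigrams(s):
    s = f"  {s.lower()} "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def ratio(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def comparison_vector(addr1, addr2, fields=("street", "city", "zipcode")):
    # one or more similarity scores per segmented field
    vec = []
    for f in fields:
        vec.append(ratio(addr1[f], addr2[f]))
        vec.append(jaccard(addr1[f], addr2[f]))
    return vec

a = {"street": "12 Baker Street", "city": "London", "zipcode": "NW1 6XE"}
b = {"street": "12 Baker St.", "city": "london", "zipcode": "NW16XE"}
print(comparison_vector(a, b))   # vectors like this are fed to the classifier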
H: Bar plot with varying length
Hello folks,
I am trying to plot a grouped bar plot of two variables with varying lengths, meaning the x and y lengths of the two variables are different. The format of the data is given below:
This is for NRI.
This is for RI.
I want these two datasets to be grouped together. When I try to plot them, the two datasets overlap each other. If anyone can help me in this regard it will be much appreciated.
Here is the code I used:
import numpy as np
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import matplotlib.path as mpath
from PIL import *
import os
import sys
import csv
from matplotlib import rc, rcParams
import pandas as pd
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
#df = pd.read_csv('E:/v.csv')
df = pd.read_excel('IC12_freq.xlsx',sheet_name='NRI')
df_1 = pd.read_excel('IC12_freq.xlsx',sheet_name='RI')
x = df.IC12.values
x1 = df_1.IC12.values
y = df.FRQ.values
y1 = df_1.FRQ.values
fig, ax = plt.subplots(figsize=(10,10))
ax.bar( x+1.3,y,color = 'w', width = 1.3,hatch='***',edgecolor='k',label='NRI',align='center')
#ax.twinx()
ax.bar( x1,y1,color = 'w', width = 1.3,hatch='/////',edgecolor='k',label='RI',align='center')
#ax.plot(x, y,'ro',color = 'k')
#ax.plot(x1, y1,'ro',color = 'r')
ax.xaxis.set_major_locator(MultipleLocator(10))
ax.xaxis.set_major_formatter(FormatStrFormatter('%d'))
ax.xaxis.set_minor_locator(MultipleLocator(5))
ax.xaxis.set_minor_formatter(FormatStrFormatter('%d'))
# For the minor ticks, use no labels; default NullFormatter.
ax.yaxis.set_major_locator(MultipleLocator(10))
ax.yaxis.set_major_formatter(FormatStrFormatter('%d'))
ax.yaxis.set_minor_locator(MultipleLocator(5))
ax.yaxis.set_minor_formatter(FormatStrFormatter('%d'))
plt.rcParams["font.weight"] = "bold"
axis_font = {'fontname':'Arial', 'size':'14'}
tick_font = {'fontname':'Arial', 'size':'5'}
ax.tick_params(which='both', width=2)
ax.tick_params(which='major', length=7)
ax.tick_params(which='minor', length=7)
plt.xlabel('IC12(kt)', fontweight='bold',**axis_font)
#plt.xticklabel(**tick_font)
plt.ylabel('Frequency(%)', fontweight='bold',**axis_font)
ax.set_facecolor("#f1f1f1")
ax.spines['top'].set_linewidth(1.5)
ax.spines['right'].set_linewidth(1.5)
ax.spines['bottom'].set_linewidth(1.5)
ax.spines['left'].set_linewidth(1.5)
leg = ax.legend()
#plt.grid(True)
#plt.style.use('ggplot')
##plt.xlabel("x axis", **axis_font)
#plt.ylabel("y axis", **axis_font)
#plt.bar(y,x)
#plt.savefig('IC12_frq.tif', bbox_inches='tight', dpi=300)
plt.show()
AI: I would first bin the x-axis so that the data can be plotted in groups. For example, if we want to group the values into bins of width 5 and then plot them next to each other, we would do something like this:
width_box = 5
low = -40
high = 40
width_graph = 1.5 # should be at most width_box/2 to not overlap
def sum_in_range(df, min_, max_):
return df[(df['IC12']<max_) & (df['IC12']>=min_)].FRQ.sum()
indices = np.array(range(low, high, width_box))
sum1 = [sum_in_range(df, i, i+width_box) for i in indices]
sum2 = [sum_in_range(df_1, i, i+width_box) for i in indices]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(indices,sum1,width_graph,color='b',label='-Ymin')
ax.bar(indices+width_graph,sum2,width_graph,color='r',label='Ymax')
ax.set_xlabel('Test histogram')
plt.show()
Which gives us
You can play around with the width_graph parameter to change how wide the bars look, and you can change width_box to group up the x-axis in different ranges |
H: Building a scorecard
I would like to build a scorecard from scratch.
Currently I have been working on data cleaning, and I created a dataset for training and another for test.
What I have is a dataset of around 700 rows (maybe not as many as I would like; it is for a computer science project) and 20 columns. The target is whether the borrower has defaulted or not (an indicator). Most of the factors are categorical or Boolean.
I am assigning a score based on these parameters, specifically:
- if the account has defaulted in the last 90 days, then assign 60;
- if the account has defaulted in the last 30 days, then assign 50;
- if the account has not defaulted in the last 90 days, then assign 80;
...
I would like to know if this is a good approach and/or if there is an algorithm to assign scores to the accounts, or if I should do something different.
I do not know very well the logic behind a scorecard, so any information would be extremely useful.
Thanks
AI: What you describe is NOT a typical approach for scorecard design.
Typically a binary target - "0/1" - is used as the target variable, and logistic regression is the typical method to create a scorecard.
Typically, to define the "default trigger" one fixes two time periods:
1) fix, say, 1 year from the credit start; that means defaults happening after 1 year are not considered as defaults
2) fix, say, a 90-day period; that means the "default trigger" is 1 if the client delays a payment by more than 90 days from the payment date, and a delay of less than 90 days is not considered a "default".
There can be some other conditions - like a threshold on the amount of delinquency (if below, do not consider it a "default") - and maybe some others.
You might consider the book:
Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring
by Naeem Siddiqi |
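A minimal sketch of the typical setup described above: a 0/1 default flag as the target, logistic regression as the model, and the predicted probability converted to a score (the column names, the tiny made-up data, and the 600/20 scaling are illustrative assumptions):
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "delinq_90d": [1, 0, 0, 1, 0, 1, 0, 0],
    "delinq_30d": [1, 1, 0, 1, 0, 1, 1, 0],
    "default":    [1, 0, 0, 1, 0, 1, 0, 0],   # 0/1 target, not a hand-assigned score
})
X, y = df[["delinq_90d", "delinq_30d"]], df["default"]

clf = LogisticRegression().fit(X, y)
prob_default = clf.predict_proba(X)[:, 1]

# one common scaling: higher score = lower risk, 20 points double the good/bad odds
odds = (1 - prob_default) / prob_default
score = 600 + 20 * np.log2(odds)
print(np.round(score))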
H: Error when checking target: expected Output to have 2 dimensions, but got array with shape (631, 80, 2641)
I'm making a text chunking program using a Bi-LSTM, following Model I from the paper "Neural Models for Sequence Chunking". The inputs are sequences of words and the outputs are "B-NP", "I-NP", and "O".
The problem I have is the output: the Output layer expects 2 dimensions, but my target array has 3 dimensions.
The X Train shape is (631, 80) while the Y Train shape is (631, 80, 2641).
I get the problem, but I don't know how to solve it. I've used np.array and reshape on the Y training data, but no luck.
I don't have experience using TensorFlow and Keras and this is my first time, so I was hoping you could help me.
# Token Indexing
word2idx = {"PAD": 0, "UNK": 1}
word2idx.update({w: i+2 for i, w in enumerate(vocab_words) if w in vocab_words})
tag2idx = {t: i for i, t in enumerate(vocab_tags)}
idx2tag = {v: k for k, v in iteritems(tag2idx)}
# Padding
X = [[word2idx[w[0]] for w in s] for s in quran_sentences]
X = pad_sequences(maxlen=max_length, sequences=X, padding="post",value=word2idx["PAD"])
y = [[tag2idx[w[1]] for w in s] for s in quran_sentences]
y = pad_sequences(maxlen=max_length, sequences=y, padding="post", value=tag2idx["O"])
y = [to_categorical(i, num_classes=n_tags) for i in y]
# Split Data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
# Parameters
hidden_state_encoder_size = 100
batch_size = 64
training_epoch = 200
embedding_size = 80
# Input
inputs = Input(shape=(max_length,), name="Input")
# Embedding
embed = Embedding(input_dim=n_words+2, output_dim=embedding_size, input_length=max_length, name="Embedding")(inputs)
# Bi-LSTM
encoder = Bidirectional(LSTM(units=hidden_state_encoder_size, return_state=True, name="LSTM"), name="Bi-LSTM")
encoder_outputs, forward_h, forward_c, backward_h, backward_c = encoder(embed)
# Concatenated
state_h = Concatenate(name="Concat_1")([forward_h, backward_h])
state_c = Concatenate(name="Concat_2")([forward_c, backward_c])
# Average
average = Average()([state_h, state_c])
# Outputs
outputs = Dense(num_decoder_tokens, activation="softmax", name="Output")(average)
# Build Model
model = Model(inputs, outputs, name="Seq2Seq Chunking")
# Compile & Training
# Compile
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
# run training
model.fit(X_tr, np.array(y_tr),
batch_size=batch_size,
epochs=training_epoch,
validation_split=0.1)
Error:
ValueError: Error when checking target: expected Output to have 2 dimensions, but got array with shape (631, 80, 2641)
AI: Alright, so... I got help from someone. He mentioned changing the LSTM to return sequences (return_sequences=True).
# Input
inputs = Input(shape=(max_length,), name="Input")
# Embedding
embed = Embedding(input_dim=n_words+1,
output_dim=embedding_size,
input_length=max_length,
name="Embedding")(inputs)
# Bi-LSTM
encoder = Bidirectional(LSTM(units=hidden_state_encoder_size,
return_sequences=True,
dropout=dropout_rate,
name="LSTM"),
name="Bi-LSTM")
encoder_outputs = encoder(embed)
# # Average
# average = Average()([state_h, state_c])
# Outputs
outputs = Dense(n_tags,
activation="softmax",
name="Output")(encoder_outputs)
The average layer is still wrong, but this is close. I might have to edit it later. |
H: P-values in orange data mining
I'm new to Orange and struggling to find p-values using the linear regression tools.
It seems the outputs are only coefficients. What am I missing?
AI: Orange mostly doesn't have p-values. We are generally against showing them, due to the Texas sharpshooter problem. You can read more about it here. |
H: Number of parameters in Resnet-50
I'm using Keras, and I am struggling to find out how many parameters ResNet-50 has. The Keras documentation says around 25M, while if I use model.count_params() after loading a ResNet-50 model, it says 234M.
Which one is correct? I'm confused.
model.summary() prints this:
Total params: 234,355,586
Trainable params: 210,767,874
Non-trainable params: 23,587,712
AI: from keras.applications.resnet50 import ResNet50
resnet_model = ResNet50(weights='imagenet')
#resnet_model.count_params()
resnet_model.summary()
Total params: 25,636,712
Trainable params: 25,583,592
Non-trainable params: 53,120
Check your code once to be sure that it is ResNet50 |
H: How to best handle imbalanced text classification with Keras?
I implemented a text classification model using Keras.
Most of the datasets that I use are imbalanced. Therefore, I would like to use SMOTE to handle said imbalance.
I tried it both on the plain text and after the text was vectorized, but I don't seem to be able to apply SMOTE to text data.
I use imblearn and received the following error:
Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 6
How can I fix this error? And is SMOTE a good idea? If not, what other ways could I deal with class imbalance?
AI: First of all, to reassure you, SMOTE should work on text data. SMOTE will work on any data type as long as there is a way to compute the distance between data points.
Based on the error message you receive, it seems that it's an implementation issue (adding part of your code or how much data you have would greatly help).
As the error states, you have only 3 samples but the method requires at least 6. My guess is that something went wrong and you should have much more than 3 samples. |
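For reference, a small sketch of SMOTE applied to vectorized text, with k_neighbors lowered so it never exceeds the number of minority samples minus one (toy sentences and labels, purely illustrative):
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ["spam offer now", "hello friend", "meeting at noon",
         "cheap pills offer", "see you tomorrow", "free offer click"]
labels = [1, 0, 0, 1, 0, 0]                     # only 2 positive samples

X = TfidfVectorizer().fit_transform(texts)      # SMOTE works on the vectors, not raw text
smote = SMOTE(k_neighbors=1, random_state=0)    # k_neighbors must be < minority count
X_res, y_res = smote.fit_resample(X, labels)
print(X_res.shape, list(y_res))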
H: Balanced Accuracy vs. F1 Score
I've read plenty of online posts with clear explanations about the difference between accuracy and F1 score in a binary classification context. However, when I came across the concept of balanced accuracy, explained e.g. in the following image (source) or in this scikit-learn page, I was a bit puzzled as I was trying to compare it with F1 score.
I know that it is probably impossible to establish which is better between balanced accuracy and F1 score as it could be situation-dependent, but I would like to understand some pros/cons of the two performance metrics, as well as some applications in which one could be more suitable and useful than the other (especially in an imbalanced binary classification context).
AI: One major difference is that the F1-score does not care at all about how many negative examples you classified or how many negative examples are in the dataset at all; instead, the balanced accuracy metric gives half its weight to how many positives you labeled correctly and how many negatives you labeled correctly.
When working on problems with heavily imbalanced datasets AND you care more about detecting positives than detecting negatives (outlier detection / anomaly detection) then you would prefer the F1-score more.
Let's say for example you have a validation set that contains 1000 negative samples and 10 positive samples. If a model predicts there are 15 positive examples (5 truly positive and 10 it incorrectly labeled) and predicts the rest as negative, then
TP=5; FP=10; TN=990; FN=5
Then its F1-score and balanced accuracy will be
$Precision = \frac{5}{15}=0.33...$
$Recall = \frac{5}{10}= 0.5$
$F_1 = 2 * \frac{0.5*0.33}{0.5+0.33} = 0.4$
$Balanced\ Acc = \frac{1}{2}(\frac{5}{10} + \frac{990}{1000}) = 0.745$
You can see that balanced accuracy still cares about the negative datapoints unlike the F1 score.
For even more analysis we can see what the change is when the model gets exactly one extra positive example correctly and one negative sample incorrectly:
TP=6; FP=9; TN=989; FN=4
$Precision = \frac{6}{15}=0.4$
$Recall = \frac{6}{10}= 0.6$
$F_1 = 2 * \frac{0.6*0.4}{0.6+0.4} = 0.48$
$Balanced\ Acc = \frac{1}{2}(\frac{6}{10} + \frac{989}{1000}) = 0.795$
Correctly classifying an extra positive example increased the F1 score a bit more than the balanced accuracy.
Finally let's look at what happens when a model predicts there are still 15 positive examples (5 truly positive and 10 incorrectly labeled); however, this time the dataset is balanced and there are exactly 10 positive and 10 negative examples:
TP=5; FP=10; TN=0; FN=5
$Precision = \frac{5}{15}=0.33...$
$Recall = \frac{5}{10}= 0.5$
$F_1 = 2 * \frac{0.5*0.33}{0.5+0.33} = 0.4$
$Balanced\ Acc = \frac{1}{2}(\frac{5}{10} + \frac{0}{10}) = 0.25$
You can see that the F1-score did not change at all (compared to the first example) while the balanced accuracy took a massive hit (decreased by 50%).
This shows how F1-score only cares about the points the model said are positive, and the points that actually are positive, and doesn't care at all about the points that are negative. |
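The first example's numbers can be double-checked with scikit-learn by reconstructing labels from the confusion counts:
import numpy as np
from sklearn.metrics import f1_score, balanced_accuracy_score

# TP=5, FN=5, FP=10, TN=990
y_true = np.array([1] * 10 + [0] * 1000)
y_pred = np.array([1] * 5 + [0] * 5 + [1] * 10 + [0] * 990)

print(f1_score(y_true, y_pred))                  # ~0.4
print(balanced_accuracy_score(y_true, y_pred))   # ~0.745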
H: In backpropagation, scale is also important?
I think backpropagation is needed to find the direction for the gradient descent method.
I also wonder: is the scale also important?
I have heard about the issue of vanishing (or exploding) gradients.
If the direction given by backpropagation is preserved,
we could still apply the gradient descent method (because we still know the direction in which to update) and could eventually reach the optimal answer.
If I'm correct, is there no real limit in deep learning?
I mean, although the speed may be slow, can we always finish training a neural network?
AI: In the process of backpropagation, first of all we feed the model forward. Each node receives the input from the previous layer, calculates the weighted sum, then applies the activation function to the weighted sum and passes the result to the next layer as input, until we get the output.
Let's say we are classifying animal images; each output node will more or less correspond to a different category, and the node with the largest output will be considered the output of the model.
Then the loss is calculated on the result. Let's say we are using SGD as the optimizer; it needs the gradient d(loss)/d(weights), and here backpropagation comes into play.
Since the output is calculated from the weights and the input, to change the result we need to change these two values (weights and input). Since we can't change the input, we change the weights. These weights are updated as
"new weight = old weight - (gradient*learning_rate)"
Here is the role of scale.
For the sake of simplicity, let's say a particular connection weight is initialized to 0.833 and its optimal value is 0.437.
Also, the gradient w.r.t. a particular weight in a network is the product of components that reside later in the network (these components are gradients w.r.t. other weights).
Therefore, the earlier the weight, the more components are required to calculate its gradient.
1. Vanishing gradient:
If we set our learning rate (considered here as the scale) to 0.0001, then
"gradient*learning_rate"
will be a very small value.
It will take a lot of time to reach the optimal state, and at some point training will be stuck at a point where there is almost no change in the weight.
2. Exploding gradient:
If we set our learning rate (considered here as the scale) to 0.01, then
"gradient*learning_rate"
will be large enough to step over the optimal value of the weight, and therefore the optimal value will be skipped.
For simplicity, let's say the gradient is 1.
"new weight=old weight - (gradient*learning_rate)"
new weight=0.833-0.01=0.823
new weight=0.823-0.01=0.813
.
.
.
new weight=0.453-0.01=0.443
new weight=0.443-0.01=0.433
new weight=0.433-0.01=0.423
It can be seen that the optimal value is skipped.
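As a rough illustration of the same idea, here is a toy sketch (not the original network): a single weight trained by gradient descent on the quadratic loss L(w) = (w - 0.437)^2, starting from 0.833 as in the example above.
# toy gradient descent on L(w) = (w - 0.437)**2
def train(w, learning_rate, steps=50):
    for _ in range(steps):
        gradient = 2 * (w - 0.437)        # dL/dw
        w = w - learning_rate * gradient  # new weight = old weight - (gradient * learning_rate)
    return w

print(train(0.833, learning_rate=0.0001))  # barely moves: convergence is extremely slow
print(train(0.833, learning_rate=0.9))     # reaches (approximately) the optimal 0.437
print(train(0.833, learning_rate=1.5))     # overshoots more and more each step and diverges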
That is why the scale is important and should be used carefully. |
H: Tensor Flow Time Series Tutorial Question
In the tutorial, they normalize the data and say "The mean and standard deviation should only be computed using the training data"
What does this refer to? Why should you only use the training data?
AI: When building any Machine Learning model, the only observable data you have is the training data. Test data is supposed to be unobserved data, meaning that even though you might have it now, you need to act as if you didn't. When you apply normalisation, you first observe the data to get the parameters you need. As you are only supposed to be able to observe the training data, you can't use the test data to calculate those values. Doing so would be like cheating, as you are accommodating your parameters to new unobserved data (how can you observe unobserved data?).
Imagine you build a model today and you want to make predictions tomorrow. You can't use tomorrow's data to build your model since you don't have it yet. You are not supposed to know tomorrow's mean and std, though your hope is that they will be similar enough. That is why, when you normalise/standardise, you compute the parameters on the training data and then use them to transform both the train and test data, so you can use them as inputs for your model.
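A minimal sketch of this idea (the made-up columns here just stand in for the tutorial's weather features):
import pandas as pd

train_df = pd.DataFrame({"temp": [10.0, 12.0, 9.0, 14.0], "pressure": [1000.0, 1005.0, 998.0, 1010.0]})
test_df = pd.DataFrame({"temp": [11.0, 13.0], "pressure": [1002.0, 1007.0]})

# the statistics are computed on the training split only...
train_mean = train_df.mean()
train_std = train_df.std()

# ...and then applied to both splits
train_scaled = (train_df - train_mean) / train_std
test_scaled = (test_df - train_mean) / train_std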
H: How to run a model stored as a .Rda file?
I am using an already written R code which has a line of code as shown below
model_predictors <- buildModel(flag, data, outcomeName, folder)
model<-model_predictors$model
predictorsNames<-model_predictors$predictorsNames
auc <- model_predictors$auc
save(model, file=paste(folder,studyName,'_model_', flag$model[1], '_', outcomeName,".Rda",sep=''))
As you can see in the last line of code, the (training) model is saved in folder in .Rda format with a naming convention containing _model_. Now, I have to use this model to test/evaluate on an unseen dataset.
So now my question: I see a .Rda file in my folder path and I can load the .Rda file in RStudio, but how do I make inferences from this model? I am new to R; can anyone help me with this, please?
Can someone help me understand how to run this model please?
AI: I suppose you come from Python. R is quite different, as it does not require you to save your model to external files; you just need to have the model object in your workspace. Since your model was saved with save(), calling load() on the .Rda file will restore the object named model into your workspace. Once you have a model object, you just have to use the predict function and specify the unseen dataset. Here's an example with a tree model:
model<-tree(Y~X1+X2, data=dataset) #estimate of the tree model
predictions<-predict(model, newdata=unseen_dataset) #prediction on new data
Be sure variable names between datasets correspond. |
H: Suspiciously low False Positive rate with Naive Bayes Classifier?
I am performing phishing URL classification, and I am comparing several ML classifiers on a balanced 2-class data-set (legitimate URL, phishy URL).
The ensemble and boosting classifiers such as Random Forest, AdaBoost, Extra Trees, etc., and also k-NN, achieve an accuracy of about 90% and a False Positive Rate of about 11-12%.
On the other hand, classifiers such as SVM, Logistic Regression, Multinomial NB and Bernoulli NB seem to perform poorly, with accuracies fluctuating between 70% and 80% and higher false positive rates.
Here is the thing.
I also tried Gaussian NB, and although it yields by far the worst accuracy (58.84%), it has an incredibly low False Positive Rate of 2.14% (and thus a high FNR).
I have no idea why this is happening. Any ideas?
Why some classifiers perform so poorly and others not?
I tuned them all with grid search, they are used on the same dataset (about 30k records of each class), and I perform 3-fold cross-validation. It doesn't make any sense to me, especially for SVM. Lastly, I use about 20 features.
P.S: I use python's sk-learn library
AI: I find the easiest way for people to understand this is to think of the confusion matrix. Accuracy score is just one measure of a confusion matrix, namely all the correct classifications over all the prediction data at large:
$$\frac{True Positives + True Negatives}{True Positives + True Negatives + False Positives + False Negatives}$$
Your False Negative Rate is calculated by:
$$\frac{False Negatives}{False Negatives + True Positives}$$
One model may turn out to have a worse accuracy, but a better False Negative Rate. For example, your model with worse accuracy may in fact have many False Positives but few False Negatives, leading to a lower False Negative Rate. You need to choose the model which produces the most value for your specific use case.
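As a quick illustration, these rates can be read straight off scikit-learn's confusion matrix (a sketch with made-up labels, where 1 = phishing):
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]  # made-up ground truth
y_pred = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]  # made-up predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
false_positive_rate = fp / (fp + tn)
false_negative_rate = fn / (fn + tp)
print(accuracy, false_positive_rate, false_negative_rate)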
Why do some classifiers perform poorly? While an experienced practitioner might surmise what could be a good modeling approach for a dataset, the truth is that for all datasets there is no free lunch... also known as "The Lack of A Priori Distinctions Between Learning Algorithms". You don't know ahead of time whether the best approach will be deep learning, gradient boosting, linear approaches, or any other number of models you could build.
H: Can we use one hot encoding instead of for loops?
I have an anomaly detection model, that I run per store with a bunch of features.
I intend to run this code every day, per store. Now, let's say I have 8000 stores. I would imagine I should write a for loop and iterate, creating the model per store:
final is the dataframe that consists of all the stores
Something like:
for store in final['StoreNbr'].unique():
    # run the model for this store's subset of the data
My input features include: store number, cashier, etc. (many other features). Can I use one-hot encoding for the StoreNbr column and then run the model once, as opposed to a for loop running the model per store number?
AI: If you want to train on the data each day and have a new model per store and per day, you should not use one-hot encoding. If you use one-hot encoding you will get one single trained model which will learn from the data you have in all the stores. I suppose you do not want one trained model, but rather one model for each store.
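A minimal sketch of the per-store loop (the model choice, IsolationForest, and the feature column names are placeholders, not something from your setup; final is your dataframe):
from sklearn.ensemble import IsolationForest

feature_cols = ["feature_1", "feature_2"]  # placeholder numeric feature columns
models = {}

for store in final["StoreNbr"].unique():
    store_data = final[final["StoreNbr"] == store]  # only this store's rows
    model = IsolationForest(random_state=0)
    model.fit(store_data[feature_cols])
    models[store] = model                            # keep one model per store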
H: Correlation between words, then texts
I have texts collected on different topics. I would like to study the possible correlation between these.
I have started looking at the word frequency, and it seems that in one dataset the word with the highest frequency is cat; in another dataset it is mice;
in another one it is house.
Do you know of some technique or approach (similarity, classification, ...) to show the possible correlation between datasets (e.g. between cat and mice)? Or does it only require expert judgement?
AI: One method you can try first is cosine similarity. It works by counting the number of occurrences of each word in the vocabulary for each individual document. Next, you put these counts into vectors and then you take the cosine of the angle between them.
If you have more than one text on a topic, you can combine them into a single text for the purpose of finding the cosine similarity between various topics.
If you have many texts that are examples of each topic, then you can create a supervised machine learning model. However, since your purpose is interpretation instead of prediction, I would recommend a technique like cosine similarity before getting insight from interpreting a predictive model. |
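A small sketch of the count-vector / cosine-similarity idea with scikit-learn (the three example texts are just placeholders for the combined text of each topic):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topics = {
    "topic_cat": "the cat sat on the mat and chased the mice",
    "topic_mice": "mice ran across the house while the cat slept",
    "topic_house": "the house has a big garden and a red door",
}

counts = CountVectorizer().fit_transform(topics.values())
similarity = cosine_similarity(counts)  # pairwise cosine similarity between the three topics
print(similarity)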
H: Why is the testing set standardized using fit values from the training set
I was wondering, in the feature scaling part of data preprocessing, why is the data in the testing set standardized using the fit values from the training set? Why aren't the fit values recalculated separately for the testing set and then used for transforming the testing set? Here is the code I am using as reference:
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train[:, 3:] = sc.fit_transform(x_train[:, 3:])
x_test[:, 3:] = sc.transform(x_test[:, 3:])
In this example, x_train is the training set and x_test is the testing set.
Thanks!
AI: You could do that. Assuming your train/test data are from the same distribution, which is in fact what you assume, they should have a similar mean and standard deviation. The result would be similar, in theory, so by the same token, why would you refit the scaling on test data?
In practice, the mean/variance estimate from the presumably smaller test set is probably not as accurate. You will be transforming the test data differently than the training data, and the model may not fit as well.
Taking it to a logical conclusion, how would you then predict for a single future instance? You can't scale one instance based on stats from that one instance; it has no variance.
The principle here is to fit a model on training data and then apply that fixed model, of which scaling may be a part, to future data. So there is no particular upside to refitting this specially, and some potential downside. |
H: Number of internal nodes in a Decision Tree
Is the number of internal nodes in a decision tree not equal to the size (n) of the input feature vector? Is it always less than n, or is the upper bound on the number of decision nodes n? How can I draw a relation between them?
AI: Building a decision tree is a process where the algorithm picks the first feature to split on, $i_1 \in \{1,...,n\}$, from the set of $n$ features it can split on.
After splitting, the algorithm has to decide whether to stop building the tree or to split on another feature $i_2 \in \{1,...,n\}$ with $i_2 \ne i_1$, and the process repeats: the algorithm keeps deciding whether to stop or to split on a new feature $i_3 \in \{1,...,n\}$ with $i_3 \ne i_2 \ne i_1$. This keeps going until the algorithm decides to stop building the tree, or until it has split on all $n$ features and there exists no feature $i_{n+1} \in \{1,...,n\}$ that satisfies the condition $i_{n+1} \ne i_n \ne ... \ne i_2 \ne i_1$. Thus there are no more features to split on.
Thus, under the assumption that each feature is split on at most once, the maximum number of decision nodes along any path from the root has to be less than or equal to the size ($n$) of the input feature vector.
H: Setting activation function to a leaky relu in a Sequential model
I'm doing a beginner's TensorFlow course, we are given a mini-project about predicting the MNIST data set (hand written digits) and we have to finish the code such that we get a 99% accuracy (measured as 0.01 loss) in less than 10 epochs.
In principle I am getting the accuracy, but the loss only reaches <0.01 at the 10th epoch (hence the assignment is counted as failed).
As per the instructions, I'm not allowed to change the model.compile arguments, so I decided to try changing the activation function to a leaky ReLU, using the code I was given. It is not as straightforward as it seems, and everything I found online does not seem to work. Most suggestions are in the model.add() format, which I can't figure out how to incorporate/substitute into the code they provided us without changing it too much (everything failed).
The current code is given below:
model = tf.keras.models.Sequential([
keras.layers.Flatten(input_shape=(28,28)),
keras.layers.Dense(128,activation=tf.nn.relu),
keras.layers.Dense(10,activation=tf.nn.softmax)
])
Any help would be appreciated!
AI: Try this, and tune alpha.
model = tf.keras.models.Sequential([
keras.layers.Flatten(input_shape=(28,28)),
keras.layers.Dense(128,activation=tf.keras.layers.LeakyReLU(alpha=0.3)),
keras.layers.Dense(10,activation=tf.nn.softmax)
]) |
H: Can I plug models like Linear Regression into a CNN feature map result?
I was learning about image recognition in the Orange software and I saw that I can feed my image database into a CNN (they call it image embedding) whose output is a feature map of the image, and then I can feed that feature map into models like logistic/linear regression. And that is my objective: to compare those models, but I want to do it without Orange.
I am thinking about using VGG16 as my CNN and extracting the feature map from it. After that I want to plug Linear/Logistic Regression in to predict my image's class. Does that work at all? Is it possible?
(Using Python and a properly labeled image database. Also, I don't want to use the CNN alone; I really would like to use those 'simpler' models consuming the feature map.)
AI: What you're describing is basically how almost every CNN model works: you have a fully connected layer at the end of the convolutions, and that is equivalent to having a linear/logistic regression at the end (given there's only one output).
The only difference between all of those is whether there's an activation function or not, and if you are trying to build a classifier then you would definitely want an activation function to map the output values to a probability value between 0 and 1.
Example (think of a typical CNN architecture diagram ending in a fully connected layer):
That "Fully Connected" layer is basically just a linear/logistic regression (10 of them, actually, since we have 10 outputs in this example) with a softmax applied to the output values to scale them all between 0 and 1 and ensure the sum is exactly 1.
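A rough sketch of the pipeline you describe, using Keras' pre-trained VGG16 and scikit-learn (the random arrays stand in for a real labeled image dataset, and downloading the ImageNet weights is required):
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.linear_model import LogisticRegression

# toy stand-ins for real images of shape (N, 224, 224, 3) and their labels
images = np.random.rand(20, 224, 224, 3) * 255.0
labels = np.random.randint(0, 2, size=20)

# convolutional base only; global average pooling gives one 512-dimensional vector per image
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")
features = extractor.predict(preprocess_input(images))

# the "simpler" model consumes the extracted feature map
clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels)
print(clf.predict(features[:5]))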
H: Is this over-fitting or something else?
I recently put together an entry for the House Prices Kaggle competition for beginners. I decided to try my hand at understanding and using XGBoost.
I split Kaggle's 'training' data into 'training' and 'testing'. Then I fit and tuned my model on the new training data using KFold CV and got a score with scikit's cross_val_score using a KFold with shuffle.
The average score on the training set with this cross-validation was 0.0168 (mean squared log error).
Next, with the fully tuned model, I check its performance on the never-before-seen 'test' set (not the final test set for the Kaggle leaderboard). The score is identical after rounding.
So, I pat myself on the back because I've avoided over-fitting... or so I thought. When I made my submission to the competition, my score became 0.1359, which is a massive drop in performance. It amounts to being a solid 25 grand wrong on my house price predictions.
What could be causing this, if not overfitting?
Here is the link to my notebook, if it helps: https://www.kaggle.com/wesleyneill/house-prices-walk-through-with-xgboost
AI: I'm not an avid Kaggler, but I do remember a case where the evaluation set in time-related data was randomly picked (which favored nearest-neighbour approaches, since exact duplicates could exist).
I'm not sure whether there are clues about the evaluation data this time (perhaps you can tell). But a possible overfit could be time-related.
If the test set is just a random subsample of the test/train part, and the evaluation part is not randomly sampled but is, for instance, a holdout of the year 2011, you can still learn rules specific to the time dimension and not find the problem in your test set.
A possible way of tackling that would be to resample the test set accordingly.
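A sketch of what such a resampling could look like for this competition, assuming the split were by time (the data does have a YrSold column; whether the hidden test set is actually a time holdout is just a hypothesis):
import pandas as pd

train = pd.read_csv("train.csv")  # Kaggle's training file

# hold out the most recent year of sales as the local test set instead of a random split
latest_year = train["YrSold"].max()
local_train = train[train["YrSold"] < latest_year]
local_test = train[train["YrSold"] == latest_year]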
H: Reason for Huge Jump in Loss For One Epoch Only?
I was wondering what possible reasons there could be for a huge jump in loss for only one epoch during training. I am getting a result like...
Epoch 1/10
2020-05-13 18:42:19.436235: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:184] Filling up shuffle buffer (this may take a while): 30910 of 40000
2020-05-13 18:42:22.360274: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:233] Shuffle buffer filled.
200/200 [==============================] - 173s 863ms/step - loss: 0.1844 - val_loss: 0.4250
Epoch 2/10
200/200 [==============================] - 167s 833ms/step - loss: 80890.9766 - val_loss: 0.5157
Epoch 3/10
200/200 [==============================] - 166s 830ms/step - loss: 0.0549 - val_loss: 0.2966
Epoch 4/10
200/200 [==============================] - 170s 849ms/step - loss: 0.0488 - val_loss: 0.2708
Which strikes me as very odd. This is a 3-layer LSTM network using Keras.
I graphed the data after normalization, but nothing out of the ordinary appears.
Here is the code I used to create the network...
# create new more complicated network
multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32,
return_sequences=True,
input_shape=
[past_history,len(features_considered)]))
multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu'))
# output layers are same as prediction count
multi_step_model.add(tf.keras.layers.Dense(future_target))
multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae')
# run training
multi_step_history = multi_step_model.fit(train_data_multi, epochs=EPOCHS,
steps_per_epoch=EVALUATION_INTERVAL,
validation_data=val_data_multi,
validation_steps=50)
AI: I cannot say for sure, but it seems that it might be an error in the printout. The validation loss doesn't seem to spike, so it seems that the loss may not actually be what is printed.
I can only advise running training multiple times and seeing if this happens again. If it does, try playing with the learning rate. There is a small chance that the learning rate is too high, making the loss diverge for one epoch.
Based on your code, it seems that your optimizer clips gradients with an absolute value larger than 1. In a scenario where most of your gradients are larger, this could lead to your model being optimized in the "wrong direction", which could make the loss that large for some epochs.
H: How to view entire 5788 rows and 7 columns.?
I want to see all 5788 rows and 7 columns of my CSV data. I used the code below, and it acknowledges the data by stating [5788 rows x 7 columns] but shows only 10 rows and 5 columns in the Python prompt. How do I get a display of all rows and columns there?
import pandas as pd
df = pd.read_csv("BusinessData.csv")
print (df)
AI: pandas has display options for the maximum number of rows and columns - https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html
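For example:
import pandas as pd

pd.set_option("display.max_rows", None)     # show all rows
pd.set_option("display.max_columns", None)  # show all columns

df = pd.read_csv("BusinessData.csv")
print(df)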
Though perhaps looking at a 5,000+ row CSV in an editor or a spreadsheet would be more useful; some IDEs also have a CSV viewer.
H: Is BERT a language model?
Is BERT a language model in the sense of a function that gets a sentence and returns a probability?
I know its main usage is sentence embedding, but can it also provide this functionality?
AI: No, BERT is not a traditional language model. It is a model trained on a masked language model loss, and it cannot be used to compute the probability of a sentence like a normal LM.
A normal LM takes an autoregressive factorization of the probability of the sentence:
$p(s) = \prod_t P(w_t | w_{<t})$
On the other hand, BERT's masked LM loss focuses on the probability of the (masked) token at a specific position given the rest of the unmasked tokens in the sentence.
Therefore, it makes no sense to use the token probabilities generated by BERT and multiply them to obtain a sentence level probability.
A secondary issue is that BERT's tokenization is subword-level, so even if it made sense to compute a sentence-level probability with BERT, such a probability would not be comparable with that of a word-level LM, as we would not be taking into account all possible segmentations of words into subwords.
UPDATE: there is a new technique called Masked Language Model Scoring (ACL'20) that allows precisely what the OP asked for. From the article:
To score a sentence, one creates copies with each token masked out.
The log probability for each missing token is summed over copies to
give the pseudo-log-likelihood score (PLL).
So the answer is now YES. It is possible to score a sentence using BERT, by means of the described pseudo-log-likelihood score. |
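A rough sketch of the pseudo-log-likelihood idea using the Hugging Face transformers library (this is a simplified illustration, not the authors' reference implementation):
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence):
    ids = tokenizer.encode(sentence, return_tensors="pt")
    total = 0.0
    for i in range(1, ids.size(1) - 1):  # mask one token at a time, skipping [CLS]/[SEP]
        masked = ids.clone()
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[ids[0, i]].item()
    return total

print(pseudo_log_likelihood("The cat sat on the mat."))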
H: Regime detection to identify transitions between habitats
The following figure represents the concentration of a substance (referred to as Element in the code) measured in an organism throughout its life. There are several distinct regimes in the data that correspond with this organism moving in and out of a certain area. I know this organism moved between these two areas, but I don't know the specific times at which the transition from one location to the other occurred. For instance, you can see that from time 0 to around 125 the organism is in what we will call location 1: Loc1, after which you can see the shift to what we will call location 2: Loc2. Further, you can see that it returns to and leaves Loc1 somewhere around time 300, and again around time 400. At the very end of the time series, you can see yet another drop which is "incomplete" but nevertheless corresponds to the organism returning to Loc1 (I know this to be true because it was captured in Loc1, thereby ending the data series). You will notice that each time the organism returns to Loc1, the concentration of this Element seems to decrease. This is likely a physiological function that I cannot yet explain, or it may have to do with the amount of time it spent in Loc1; either way, it may be important to consider in my question.
I actually have conducted this same experiment on numerous individuals that return to different locations (which would parallel the organism in this example returning to Loc1). My ultimate goal is to compare the concentrations of the Element between locations that the organisms use. However, since I don't have exact times at which I can say "organism a is in Loc1 at this point, and after that point it is in Loc2", I need a quantitative way to set boundaries for each location so that I have something unbiased to compare.
Below I will provide the data and code that I used to fit a Hidden Markov Model for Regime Detection for this example. The data for the organism is stored in data frame 'a'.
> dput(a)
structure(list(Element = c(0.94, 0.9, 0.91, 0.86, 0.87, 0.89,
0.96, 0.87, 0.87, 0.87, 0.8, 0.99, 0.9, 0.94, 0.81, 0.88, 0.71,
0.87, 0.78, 0.78, 0.91, 0.86, 0.81, 0.82, 0.79, 0.88, 0.8, 0.91,
0.8, 0.72, 0.91, 0.87, 0.88, 0.8, 0.9, 0.82, 0.81, 0.77, 1.03,
0.92, 0.92, 0.91, 0.88, 1.06, 0.95, 1.03, 0.88, 0.88, 0.76, 0.85,
0.76, 0.78, 0.83, 0.81, 0.83, 0.79, 0.91, 0.78, 0.83, 0.87, 0.91,
0.87, 0.86, 0.85, 0.94, 0.84, 0.92, 0.93, 0.96, 0.87, 0.81, 0.82,
0.89, 0.98, 0.89, 0.86, 0.92, 0.95, 0.85, 0.89, 0.92, 0.95, 0.8,
0.89, 0.76, 0.87, 0.88, 0.73, 0.92, 0.82, 0.89, 0.72, 0.86, 0.77,
0.72, 0.7, 0.91, 0.72, 0.74, 0.69, 0.79, 0.72, 0.72, 0.66, 0.67,
0.68, 0.71, 0.87, 0.54, 0.61, 0.57, 0.49, 0.53, 0.58, 0.57, 0.55,
0.56, 0.5, 0.54, 0.56, 0.51, 0.54, 0.54, 0.54, 0.55, 0.56, 0.56,
0.58, 0.59, 0.65, 0.5, 0.56, 0.67, 0.5, 0.68, 0.66, 0.6, 0.82,
0.62, 0.78, 0.79, 0.86, 0.87, 0.81, 1.23, 1.06, 1.22, 1.21, 1.32,
1.13, 1.22, 1.51, 1.33, 1.5, 1.63, 1.69, 1.93, 1.44, 1.86, 1.88,
1.7, 1.82, 1.8, 1.63, 1.85, 1.78, 1.96, 1.84, 1.81, 1.71, 1.89,
1.71, 1.52, 1.46, 1.78, 1.69, 1.73, 1.82, 1.58, 1.71, 1.73, 1.61,
1.78, 1.65, 1.8, 1.75, 1.72, 1.84, 1.72, 1.57, 1.36, 1.44, 1.42,
1.5, 1.42, 1.37, 1.33, 1.36, 1.2, 1.32, 1.3, 1.46, 1.14, 1.35,
1.24, 1.28, 1.23, 1.22, 1.24, 1.13, 1.35, 1.14, 1.33, 1.3, 1.09,
1.15, 1.04, 1.28, 0.95, 0.9, 1.04, 1.06, 0.83, 0.81, 0.86, 0.85,
0.92, 0.78, 0.79, 0.83, 0.92, 0.85, 0.86, 0.98, 0.9, 0.87, 0.9,
0.87, 1.03, 0.91, 0.93, 1.05, 0.96, 0.98, 0.96, 0.99, 1.12, 0.98,
1.09, 1.06, 1.07, 1.09, 1.04, 1.24, 1.04, 0.95, 1.01, 0.93, 1.17,
0.85, 1, 1.07, 1.08, 0.92, 1.21, 0.98, 0.86, 0.89, 0.85, 0.79,
0.92, 0.82, 1.02, 0.88, 0.84, 0.86, 0.86, 0.81, 0.96, 0.74, 0.76,
0.82, 0.82, 0.79, 0.78, 0.64, 0.62, 0.72, 0.67, 0.74, 0.71, 0.52,
0.73, 0.7, 0.67, 0.67, 0.56, 0.58, 0.59, 0.57, 0.65, 0.56, 0.62,
0.56, 0.59, 0.58, 0.58, 0.59, 0.59, 0.64, 0.73, 0.66, 0.63, 0.63,
0.76, 0.69, 0.76, 0.7, 0.86, 0.83, 0.96, 0.94, 1.03, 1, 1.02,
1.21, 1.05, 1.17, 1.15, 1.3, 1.08, 1.27, 1.21, 1.15, 1.33, 1.37,
1.44, 1.51, 1.51, 1.54, 1.52, 1.53, 1.48, 1.52, 1.56, 1.38, 1.47,
1.44, 1.52, 1.37, 1.39, 1.36, 1.11, 1.27, 1.19, 1.24, 1.08, 1.04,
1.06, 1.11, 1.06, 1.19, 1.16, 1.32, 1.2, 1.14, 1.28, 1.46, 1.23,
1.1, 1.21, 1.08, 1.21, 1.2, 1.37, 1.11, 0.96, 1.01, 1.2, 1.06,
1.1, 1.14, 1.02, 1.03, 0.89, 1.02, 0.95, 0.94, 1, 0.96, 0.93,
0.87, 0.84, 0.81, 0.87, 0.94, 0.94, 0.89, 0.78, 0.9, 1.05, 0.98,
0.95, 0.92, 1.04, 0.99, 1.09, 1.12, 1.13, 1.07, 1.26, 1.31, 1.28,
1.22, 1.25, 1.28, 1.35, 1.2, 1.4, 1.33, 1.24, 1.37, 1.42, 1.26,
1.29, 1.45, 1.27, 1.23, 1.31, 1.24, 1.49, 1.27, 1.36, 1.31, 1.6,
1.25, 1.36, 1.28, 1.24, 1.12, 1.17, 1.46, 1.22, 1.24, 1.29, 1.2,
1.26, 1.12, 1.31, 1.11, 1.19, 1.11, 1.01, 1.16, 1.07, 1.24, 1.03,
1.18, 1.13, 1.21, 1.18, 1.24, 1.13, 1.2)), class = "data.frame", row.names = c(NA,
-464L))
library(tidyverse)
library(quantmod)
library(depmixS4)
hmm.a <- depmix(Element ~ 1, family = gaussian(), nstates = 2, data=a)
hmmfit.a <- fit(hmm.a, verbose = FALSE)
plot.ts(a)
# Output both the true regimes and the
# posterior probabilities of the regimes
post_probs <- posterior(hmmfit.a)
layout(1:2)
plot(post_probs$state, type='s', main='True Regimes', xlab='', ylab='Regime')
matplot(post_probs[,-1], type='l', main='Regime Posterior Probabilities', ylab='Probability')
legend(x='topright', c('Loc1','Loc2'), fill=1:2, bty='n')
This seems to have done a pretty decent job at identifying boundaries for each of Loc1 and Loc2. However, it did not pick up the final return to Loc1, where the organism was captured.
Now that I have described what I am trying to accomplish, and given an example, could someone possibly suggest a more robust approach to accomplishing this, or maybe something I should change in this model so that it will pick up the final return to Loc1? Any advice is appreciated.
AI: There's too much noise in the data, which makes filtering the extrema a bit harder than it seems. But the best idea I can think of is using a smoothed curve fitted to the data as an indicator of the local minima and maxima in order to identify the regimes.
Below, I show a rudimentary trial to further explain this idea. It surely needs tuning of the parameters and other filters based on the nature of the data to give us the answer. As is, while it gives us some answers, there are extra points of change identified, which would not be desirable in your case.
##just adding row numbers as a column to the data
a$Time = row.names(a)
attach(a)
##fitting a smooth curve on top of our data
smoothingSpline = smooth.spline(Time, Element, spar=0.65)
##ploting actual data points and smooth curve on one plot for illustration
plot(Time,Element, type = "l", xaxt = "n")
lines(smoothingSpline, col = "red")
axis(1, at=seq(0,500, by=25))
detach(a)
There are a couple of points in the curve where Time < 100 which are local minima and maxima. Further, when I identify the points where the sign of the discrete first derivative changes (local extrema), those points are picked up as well.
##finding local max
which(diff(sign(diff(smoothingSpline$y)))==-2)+1
#> [1] 69 173 253 346 432
##finding local min
which(diff(sign(diff(smoothingSpline$y)))==2)+1
#> [1] 25 119 234 299 393
Tuning this solution can get you the points where the organism's state changes. I can tell you that the first point of each of the vectors above should be excluded, but how we can get R to tell us that, I am not sure at the moment. To be fair, I think this problem needs some human supervision at the end, but there may be a technique which I am overlooking here.
H: How does the Naive Bayes algorithm function effectively as a classifier, despite the assumptions of conditional indpendence and bag of words?
The Naive Bayes algorithm used for text classification relies on 2 assumptions to make it computationally speedy:
Bag of words assumption: the position of words is not considered
Conditional independence: words are independent of one another given the class
In reality, neither of these assumptions often holds, yet Naive Bayes is quite effective. Why is that?
AI: The main reason is that in many cases (but not always) the model obtains enough evidence to make the right decision just from knowing which words appear and don't appear in the document (possibly also using their frequency, but this is not always needed either).
Let's take the textbook example of topic detection in news documents. A 'sports' article is likely to contain at least a few words which are unambiguously related to sports, and the same holds for many topics, as long as the topics are sufficiently distinct.
In general, tasks which are related to the overall semantics of the text work reasonably well with unigrams (single words, unordered) as features, whether with NB or other methods. It's different for tasks which require taking syntax into account, or which require a deeper understanding of the semantics.
H: Value on Decision Tree plot
After plotting a sklearn decision tree, I check what it says in each box, and there is one item, "value",
that I am not sure what it refers to.
The first line will be the column and the value where it splits, gini is the "disorder" of the data, and samples is the number of samples in the node.
But value?
AI: value represents the number of items in each class.
If you look at the top node, you should view it as:
There are:
35100 samples of class 0
16288 samples of class 1
which sums up to 51388 samples total |
H: Does regularisation make the loss noisy?
I implemented dropout and got a loss plot like this
Before implementing regularisation, the loss was not noisy at all.
I understand why implementing dropout would increase the noise, as different neurons would be active in successive epochs, but does L2 regularisation add noise? Why or why not?
AI: I think your best bet to answer that question is to continue to test it on other datasets to confirm that your observation holds true in multiple situations.
Here is my intuition (although I'm by no means an expert).
The regular loss function is $(wx + b - y)^2$
The loss function for L2 regularization is $(wx + b - y)^2 + \lambda w^2$
In the first equation, you only have to worry about $w$ as one term. As a simple example, if you update it by 0.1 and the loss improves, you'd be safe to update it again by 0.1 in the next round.
In the L2 equation, you have an additional instance of $w$ to deal with, and it's squared and multiplied by another number. It is easy to imagine that the same update may lead you to jump the gap and skip over the point where the gradient is equal to zero.
H: What is the purpose of a confusion matrix in a classification problem?
I am studying machine learning. After some research I understood that a typical workflow for a classification problem (after having prepared the data) is the following:
Split data in test, train and validation sets
Train the model
Generate the confusion matrix
Analyze the metrics: accuracy, precision, recall and f1
Tune hyper-parameters based on the metric I have decided to optimize.
My question is: why do we ever need the confusion matrix? Shouldn't we already know what metric we need to optimize given the type of problem we are trying to solve?
I am asking this because, as far as I understand, if we have enough computational power, we could basically group steps 2 and 5 by applying a grid search (which basically consists of a cross-validation for each combination of tuning parameters) which takes as input the metric to be measured. This would imply that you need to know the metric beforehand, and furthermore you cannot get the confusion matrix.
Thanks in advance for the replies.
AI: Ok, so let me answer some of these questions for you:
What is the purpose of a confusion matrix?
A confusion matrix is merely a visual help for you to better interpret the performance of your model. It's a way to graphically visualize the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Confusion matrices become more and more useful when you deal with a large number of different classes. It can give you some great insights about how your model is doing. Let's say you are training an image detection classifier. It might be good to know that your model is confused between dogs and wolves but isn't confused between cats and snakes.
Another purpose of the confusion matrix is to define a related cost matrix. In my example, being confused between dogs and wolves might be understandable and it shouldn't mean that your model is bad at what it does. However, if it is confused between classes it shouldn't be confused about, this should be properly represented in your performance metric.
Here is a good blog detailing these concepts: https://medium.com/@inivikrant/confusion-cost-matrix-helps-in-calculating-the-accuracy-cost-and-various-other-measurable-a725fb6b54e1
Shouldn't we already know what metric we need to optimize given the type of problem we are trying to solve?
Here you are confusing two things. On the one hand, yes, you should know in advance which metric you want to optimize (i.e. accuracy, precision, recall, etc.) but it doesn't mean that you know the value of that metric in advance. If you dumb down hyperparameter tuning, it's roughly this:
train model $M$ with hyperparameters $H$
evaluate the performance $P$ of model $M$
choose new hyperparameters $H$ and repeat step 1 and 2
pick model $M$ with hyperparameters $H$ such that $P$ is optimized
Computing the accuracy, precision, recall, or F1-score can be done if you know the TP, FP, TN, FN (see this link for more info). So technically, you don't have to create the confusion matrix per se, but you definitely need to compute the TP, FP, TN, FN to evaluate the performance of your model.
if we have enough computational power, we could basically group steps 2 and 5
You can only optimize your hyperparameters if you compute the performance of your model for each set of hyperparameters. You could skip step 3 as it doesn't technically affect your training process. It only helps you understand better what's going on. But you definitely cannot skip step 4. |
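As an illustration of grouping the training and tuning steps around a metric chosen up front (a sketch with a generic classifier and a made-up parameter grid):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="f1",  # the metric to optimize is decided beforehand
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)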
H: Organization finder in spaCy
I am trying to find the parties between which the agreement is executed from the statement below.
AGREEMENT, dated as of January 10, 2000, is entered into by and between ABC-EFG GROUP Inc. having an address at 418 Mona Drive, Prominade 34, Florida 34673, United States of America and Rob Cummins, an individual residing in the state of Florida, and having an address at 13 test Dr, Arosa, FL 43566
To get this, I tried the following code:
for chunk in doc.noun_chunks: # after loading "nlp = spacy.load("en_core_web_sm")" and doc = nlp("string")
print(chunk.text, chunk.root.text, chunk.root.dep_,chunk.root.head.text)
which gives me:
AGREEMENT AGREEMENT nsubjpass entered
January January pobj of
ABC-EFG GROUP Inc. Inc. pobj between
an address address dobj having
418 Mona Drive Drive pobj at
United States States conj Drive
America America pobj of
Rob Cummins Cummins conj America
an individual individual appos States
the state state pobj in
Florida Florida pobj of
an address address dobj having
13 test test pobj at
Dr Dr ROOT Dr
FL FL appos Arosa
I don't understand how I am supposed to find the two parties from this output.
I am not tied to spaCy; a suggestion of any other model or method will also work.
Please note that I have already tried NER but the results are very poor.
AI: Using dependency parsing alone will not give you what you need. You may be able to get your answer by interpreting the dependency tree. For instance, in this case ABC-EFG GROUP Inc. is a pobj of between, which could suggest that it is party 1. In this particular case, the dependency parse isn't fully correct and party 2 (Rob Cummins) is difficult to find.
I would recommend you try the following things:
It seems that the parties are either organizations or people, so you could narrow down your candidates to proper nouns, and you could use NER to see whether any ORG or PER entities are detected. That will not tell you exactly whether they are the parties, but it could give you useful hints.
In this case, it feels like a plain regular expression could work. Whatever proper noun occurs after between seems to be party 1, and whatever follows and seems to be party 2. Of course, a regular expression would only work for cases written with the same pattern, but it is always wise to consider. Perhaps you can already cover a fair amount of your scenarios with a solution that will always work. You could also use the presence of words like "between" and "and" as a feature (see the sketch after this list).
If you don't have a lot of training examples, you could take a look at libraries like mimesis and faker. They allow you to generate company names, addresses, people names, etc. |
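A very rough sketch combining suggestions 1 and 2 (taking the first two ORG/PERSON entities that occur after the word "between" is an assumption tailored to this example, and, as you noted, the off-the-shelf NER may mislabel spans):
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("AGREEMENT, dated as of January 10, 2000, is entered into by and between "
        "ABC-EFG GROUP Inc. having an address at 418 Mona Drive, Prominade 34, "
        "Florida 34673, United States of America and Rob Cummins, an individual "
        "residing in the state of Florida, and having an address at 13 test Dr, "
        "Arosa, FL 43566")

doc = nlp(text)
candidates = [ent for ent in doc.ents if ent.label_ in ("ORG", "PERSON")]

# heuristic: the parties are the first two candidate entities found after "between"
between_pos = text.lower().find("between")
parties = [ent.text for ent in candidates if ent.start_char > between_pos][:2]
print(parties)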
H: What is the best ML/DL model to choose to calculate mobile network utilization increase
Let’s imagine the following scenario:
The marketing department decides to run a promotion next month and would like to give every single mobile customer an extra 10GB of data.
For simplicity, I have the following 5 features:
A: Amount of data volume downloaded by all customers per base station/transmitter (MB)
B: Number of connected customers to the base station (#)
C: Total duration of the connected customers to the base station (seconds)
D: Radio resource utilization (%)
E: Average throughput per customer (kbps)
Considering the marketing requirements, I can work out that the average increase in data volume per base station is 10% (Feature A).
The question is: what impact is this promotion going to have on the mobile network (every single base station), mainly on radio resource utilization (Feature D) and the average throughput per customer (Feature E)?
I have tons of data for each base station, from low- to high-traffic scenarios, but I cannot train the model with a 10% extra data volume increase as this situation never happened in the network. It is still going to be "the peak data volume per base station so far" plus an extra 10%.
How can I train the model if the target label is unknown (peak data volume + extra 10% never happened on the particular base station, and I cannot take the data/stats from a different base station as the traffic pattern is different)?
It would be enough to point me in the right direction; I can find more info and study it further. Thanks.
AI: I do not see why having a percentage greater than 100% would be a problem for you. You can normalize it by dividing by 100, so 100% becomes 1. That feature will be your label.
I suggest you try several regression algorithms, with different settings, and choose the best model, i.e. the model which provides the best result. You have a bunch of regressors in scikit-learn; you may also try XGBRegressor (from xgboost) or CatBoostRegressor (from catboost).
H: AWS SageMaker Model as endpoint size limit
Is there a size limit imposed on models deployed on AWS SageMaker as endpoints? I first tried to deploy a simple TensorFlow/Keras Iris classification model by converting to protobuf, tarring the model, and deploying. The size of the tarred file was around 10KB, and I was able to deploy that successfully as an endpoint. However, I tried the same process with a Nasnet model where the size of the tarred file ended up being around 350MB, and I got the following error:
The primary container for production variant AllTraffic did not pass
the ping health check. Please check CloudWatch logs for this
endpoint.
Could it be because the model is too large to deploy? I tried increasing the instance type from 'ml.m4.xlarge' to a higher tier but that did not work either.
AI: It doesn't seem that it is about the size of the model. I am no SageMaker expert, but the error message suggests that the model was deployed, but that something went wrong when the health check was run.
This could be caused by many different things, but the most probable would be that there is a bug in the code. Please check the following:
Can the model be loaded properly?
Can the model make a prediction?
Can the model make multiple predictions? |
H: sklearn serialize label encoder for multiple categorical columns
I have a model with several categorical features that need to be converted to numeric format. I am using a combination of LabelEncoder and OneHotEncoder to achieve this.
Once in production, I need to apply the same encoding to new incoming data before the model can be used. I've saved the model and the encoders to disk using pickle. The problem here is that the LabelEncoder keeps only the last set of classes (for the last feature it has encoded), so it can't be used to encode all the categorical features of the new data. To work around this I am saving a different LabelEncoder to disk for each one of the categorical features, but this does not seem to scale very well, especially when you have a large number of categorical features.
What is the common practice for this situation? Is it possible to serialize and save just one encoder for all the categorical features to be used in production?
AI: LabelEncoder is meant for the labels (target, dependent variable), not for the features. OrdinalEncoder can be used for features, and so can take a 2d array rather than the 1d array LabelEncoder requires, and so you can use a single transformer for all your categorical columns. (You can use a ColumnTransformer to select those categorical columns, if you have continuous ones too.)
But, you should not use OrdinalEncoder before OneHotEncoder anymore: OneHotEncoder has for some time now applied directly to string categorical columns. |
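A sketch of the single-transformer approach (the column names are made up; the fitted transformer can then be pickled once and reused in production):
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({
    "color": ["red", "blue", "red"],   # made-up categorical columns
    "city": ["paris", "rome", "paris"],
    "price": [10.0, 12.5, 9.9],        # continuous column, passed through untouched
})

preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["color", "city"])],
    remainder="passthrough",
)
X_encoded = preprocess.fit_transform(X)

with open("preprocess.pkl", "wb") as f:  # one serialized object for all categorical features
    pickle.dump(preprocess, f)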
H: What is the form of data used for prediction with generalized stacking ensemble?
I am very confused as to how training data is split and on what data level 0 predictions are made when using generalized stacking. This question is similar to mine, but the answer is not sufficiently clear:
How predictions of level 1 models become training set of a new model in stacked generalization.
My understanding is that the training set is split, base models are trained on one split, and predictions are made on the other. These predictions now become features of a new dataset: one column for each model's prediction, plus a column that contains the ground truth for these predictions.
Split training data into train/test.
Train base models on training split.
Make predictions on test split (according to linked answer, use k-fold CV for this).
Create a feature for each model, filling it with that model's predictions
Create a feature for the ground truth of those predictions.
Create a new model and train it on these predictions and ground truth features.
Question 1: Are these the only features used to train the "meta" model? In other words, are none of the actual features of the original data included? The linked answer says it is common to include the original data, but I have not read about it elsewhere.
Question 2: If the above algorithm is correct, What is the form of the data when making predictions? It seems like it would also have to have predictions as independent variables. If so, that means running all new incoming data through all base models again, right?
Question 3: I keep seeing an "out-of-fold" requirement for the first level predictions. It seems that doing a simple train/test split as mentioned above would fulfill this. However, would you not want a 3rd split to test the combined model's generalization? Or is this type of ensemble bulletproof enough not to worry about it?
AI: Q1. This can be done either way. You may use only the base model predictions, or those and all the original features, or anywhere in between. Passing along the original features may be known as "feature-weighted stacking," the idea being that the meta-estimator can learn that some of the base models are better on certain subsets of the original data (but AIUI the original approach there is to select/engineer more useful subsetting features, not to simply pass along all the original features; I assume this depends on context).
For example, the parameter passthrough in sklearn's StackingClassifier passes the original dataset along, as does use.feat in mlr's makeStackedLearner.
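For illustration, a minimal sketch of scikit-learn's stacking API with passthrough enabled (the estimator choices here are arbitrary):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)), ("svc", SVC(probability=True))],
    final_estimator=LogisticRegression(),
    passthrough=True,  # the meta-estimator sees the base predictions AND the original features
    cv=5,              # out-of-fold predictions are used to fit the meta-estimator
)
stack.fit(X, y)
print(stack.predict(X[:5]))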
Q2. Absolutely, making predictions requires each of the base models making their predictions (this can be done in parallel), then passing those off to the meta-estimator.
Q3. Yes, a simple split suffices to prevent the data leak that occurs in more naive stacking, where the meta-estimator is fitted using the "predictions" of the base models on their own train set. And yes, you will need another test set for evaluating the performance of the ensemble. k-fold cross-validation is often recommended for the base models so that more data is available for all the models to train on. |
H: Plotting clustered sentences in Python
I have the following three sentences, extracted from a dataframe. I would like to check the similarity and create clusters based on their level of similarity.
Authors Sentences
John Smith Some people do not completely understand the risk of UV rays.
Jane Lower People do not understand the risk of UV rays, wrote the journalist in the Herald.
Craig Avatan In Berlin, people do not know how dangerous can be for their health a long exposure to UV rays.
I would need to cluster them based on words and their sequences (like plagiarism).
I have tried to use k-means, but I have not completely understood how to create clusters to plot.
Something like this:
I have tried to use k-means as follows:
def sent_tokenization (line):
line = re.sub(r"[^a-zA-Z]", " ", line.lower())
words = word_tokenize(line)
words_lemmed = [WordNetLemmatizer().lemmatize(w) for w in words if w not in stop_words_list]
return words_lemmed
tfidf_vect = TfidfVectorizer(tokenizer= sent_tokenization)
tfidf = tfidf_vect.fit_transform(df['Sentences'])
kmeans = KMeans(n_clusters=2).fit(tfidf)
However, I am not able to plot the results. What I am looking for is something that can be easily visualised. Specifically, I would need a scatter plot as in the example, which can show the names of the authors based on their sentence similarity, as in plagiarism detection. I am trying to see which authors have written similar texts.
In my example, I should have the first two authors closers than the third one, as their sentences are very similar (in terms of words and structure).
Could you please give me advice on how to plot/cluster the above information?
If you need more information, feel free to ask.
AI: It can be done in many ways. My starting approach would be (1) to apply the tf-idf as in your code snippet, then (2) reduce the output matrix to a lower dimensionality (say 2D, for viewing it in a scatter plot) with a decomposition method (or with a topic modelling approach), and finally (3) apply a clustering algorithm to visualize the samples/documents of each cluster.
However, this may have many flaws, because I don't know if the decomposition results would make any sense and I'm not aware of how many clusters are good enough, but anyway it would be useful as an exploratory analysis.
The logic would be something like this (not tested):
import pandas as pd
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# 1
tfidf_vect = TfidfVectorizer(tokenizer= sent_tokenization)
tfidf = tfidf_vect.fit_transform(df['Sentences'])
# 2
decomp_method = NMF(n_components=2)
tf_2d = decomp_method.fit_transform(tfidf) # output shape: [N, 2]
# 3
kmeans = KMeans(n_clusters=2)
tf_kmeans_labels = kmeans.fit(tf_2d).labels_ # array
# visualization
data = pd.DataFrame(tf_2d, columns=['c1', 'c2'])
data['kmeans_labels'] = tf_kmeans_labels
sns.scatterplot(x='c1', y='c2', hue='kmeans_labels', data=data)
H: Split a string and create a news columns with parents
Let's say I have this dataframe, and I would like to split the section column into several rows after each "." and have it like that, with level and parent columns:
AI: I think you can achieve the expected result by applying the following method. I've just created a fake dataframe to test.
import pandas as pd
df = pd.DataFrame(columns=["section"], data=[["level1.level2.level3.level4.level5"],["level1.level2.level3.level4.level5.level6"]])
columns = ['section', 'level', 'parent']
def split_parent(row):
levels = row["section"].split(".")
count_level = len(levels)
sub_levels = '.'.join(levels[:(count_level-1)])
# create a new row for each level of section
new_df = pd.DataFrame(columns=columns, data=[[row["section"], count_level, sub_levels]])
for num_level in range(count_level-1, 0, -1):
sec = ".".join(levels[:num_level])
par = "" if num_level == 1 else ".".join(levels[:num_level-1])
new_df = new_df.append({'section': sec, 'level': num_level, 'parent': par}, ignore_index=True)
return new_df
result = pd.DataFrame(columns=columns)
for index, row in df.iterrows():
result = result.append(split_parent(row), ignore_index=True)
print(result)
As result, you'll get:
section level \
0 level1.level2.level3.level4.level5 5
1 level1.level2.level3.level4 4
2 level1.level2.level3 3
3 level1.level2 2
4 level1 1
5 level1.level2.level3.level4.level5.level6 6
6 level1.level2.level3.level4.level5 5
7 level1.level2.level3.level4 4
8 level1.level2.level3 3
9 level1.level2 2
10 level1 1
parent
0 level1.level2.level3.level4
1 level1.level2.level3
2 level1.level2
3 level1
4
5 level1.level2.level3.level4.level5
6 level1.level2.level3.level4
7 level1.level2.level3
8 level1.level2
9 level1
10
If you need any further explanation, just write to me.
I hope I helped you. |
H: Norm type in cost function of ANN
I'm reading a tutorial about ANN. They use the following cost function:
As you can see, this equation includes a norm. I'm new to the concept of a norm.
My question is: what kind of norm do they use here? (There are the absolute-value norm, Euclidean norm, Euclidean norm of a complex number, taxicab norm, Manhattan norm, p-norm, maximum norm, infinity norm, uniform norm, supremum norm, zero norm, etc.)
AI: Reading the article they state that
The two vertical lines represent the $L^2$ norm of the error, or what is known as the sum-of-squares error (SSE)
Otherwise known as the Euclidean norm, Euclidean length, $L^2$ distance, $ℓ^2$ distance, $L^2$ norm, $ℓ^2$ norm, $2$-norm, or square norm.
Also speaking from my experience, most of the time when a norm is specified without a subscript then you can generally assume that it is the Euclidean norm. |
H: how does gradient descent update weigths in neural network
I'm currently trying to learn about backpropagation, and it's going forward, but there's one thing that keeps me scratching my head and doesn't really seem to be answered in any of the videos or articles I'm looking at.
I understand now that, based on my loss, the weights of my network are updated. But what I don't understand is how this happens. Let's say I have this exercise network with the following weights:
W_1 = 1.2, W_2 = 0.4, W_3 = 1.0
Now I do some training, and let's say I have a loss of 0.8.
Now, when I use my loss to update my weights, what happens specifically to the weights? Is something being added, subtracted, or maybe multiplied?
Thanks a lot
AI: In short, it is "added" to the previous value of the weight.
The weight-update rule from Tom Mitchell's book is w = w + delta_w, where delta_w = -learning_rate * d(loss)/d(w); in your case it would be W_1 = W_1 + delta_1, W_2 = W_2 + delta_2, W_3 = W_3 + delta_3.
H: Fitting models to 2d data
I am reading the documentation on tf.lattice (https://www.tensorflow.org/lattice/overview)
I am wondering how the training data was created/trained. Is it represented by (x, y) integer positions and some intensity value as output?
AI: Is the answer already there?
"
The function can capture non-linear interactions between features. You can think of the lattice parameters as the height of poles set in the ground on a regular grid, and the resulting function is like cloth pulled tight against the four poles.
With $D$ features and 2 vertices along each dimension, a regular lattice will have $2^D$ parameters. To fit a more flexible function, you can specify a finer-grained lattice over the feature space with more vertices along each dimension. Lattice regression functions are continuous and piecewise infinitely differentiable.
"
EDIT-
Specifically for these pictures, they have given the example of a dataset ranking restaurants (links are below). These graphs show the number of reviews (y-axis) against the average rating (x-axis). It also has source code.
https://www.tensorflow.org/lattice/tutorials/shape_constraints
https://www.tensorflow.org/lattice |
H: How much features are needed for Reinforcement learning?
I am trying to learn and use reinforcement learning. Right now I have only 6 numeric features in my dataset. Can I still use RL? In other words, is using RL for such a number of features sensible?
AI: It is relatively common when learning basics of RL (as opposed to Deep RL with neural networks etc), to consider a single discrete state variable. For instance many grid worlds, maze solvers etc just enumerate the positions. For practical learners, the variable is effectively one-hot encoded, but it is still a single variable.
The number of state features is not relevant to whether RL is applicable or not.
Assuming you have correctly formulated your problem as a Markov Decision Process (MDP), then the important things to consider are:
Do the state variables capture enough data about the current state so that the problem has the Markov property (that predictions of future rewards and state can be made using only the current state and action choices)?
Do the state variables need to be pre-processed before being used as features for your learning algorithm?
Is your state space, once turned into features, small and suitable for learning with a simple tabular method?
If the state space is not small, does a simple transformation (such as using a discrete tiling) make the problem simple, possibly linear, or is it complex? Simple problems can be adjusted to work with tabular approaches or linear approximators, complex ones will need approaches from Deep RL, such as DQN or A3C. |
H: difference in l1 and l2 regularization
I have seen it said in different places that L1 regularization penalizes weights more than L2.
But the derivative of the L1 norm is $\lambda$ and that of the L2 norm is $2\lambda w$. So L1 regularization subtracts a smaller value than L2. Then why is it said that L1 penalizes weights more than L2? Or is it incorrect to say it like this?
AI: That is generally not true, to be more accurate we can say that L1 promotes sparsity. if a weight is larger than 1 then L2 cares more about it than L1 while if a weight is less than 1 then L1 cares more about it than L2.
For a quick example imagine two weights, $w_1 = 15$ and $w_2 = 0.02$, let's imagine that the model considers reducing both of those weights by a small number $\epsilon=0.001$ (for the sake of simplicity reducing either weight by $\epsilon$ increases the model-error part of the loss at the same rate). Now, let's calculate how much the loss changes for both weights under both regularization terms.
For $w_1$ and L1: $\Delta L=|15|-|14.999| = 0.001 $ (Loss decreased by 0.001)
For $w_1$ and L2: $\Delta L=|15|^2-|14.999|^2 = 0.029999 $ (Loss decreased by 0.029999)
We can see that for $w_1$ the loss decreases about 30 times more for L2 compared to L1. So L2 regularizes the weights more in this case (i.e. L2 is willing to sacrifice more model complexity to regularize this weight more).
For $w_2$ and L1: $\Delta L=|0.02|-|0.019| = 0.001 $ (Loss decreased by 0.001)
For $w_2$ and L2: $\Delta L=|0.02|^2-|0.019|^2 = 0.000039 $ (Loss decreased by 0.000039)
We can see that for $w_2$ the loss decreases about 25 times more for L1! So for smaller weights L1 is willing to sacrifice more model expressiveness just to reduce the weights, while L2 barely pays any attention (compared to L1) to weights close to 0.
It's possible that in practice you perceive L1 regularizes weights more than L2 becauses when you perform L1 regularization you notice a lot of the weights tend to be exactly 0 while in L2 almost no weights are exactly 0 so you might tend to think that L1 is "stronger", but that just comes from the point we notices above where L2 barely pays any attention to weights close to 0 while L1 still sees a benefit of $\epsilon$ no matter how small the weight was before the $\epsilon$ change and thus it promotes sparsity.
There are a lot of practical and theoretical differences between L1 and L2 regularization, too many to list here. For example one practical difference is that L1 can be a form of feature elimination in linear regression. A theoretical difference is how L2 regularization comes from the MAP of a Normal Distributed prior while the L1 comes from a Laplacean prior.
EDIT: I just reread your post and yes, looking at the derivatives you should also get the same insight. For $w>1 \Rightarrow w\lambda > \lambda$ thus L2 regularizes large weights more while for $w<1 \Rightarrow w\lambda < \lambda$ thus L1 regularizes small weights more. |
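A tiny sketch reproducing the penalty changes from the worked example above:
# change in the L1 and L2 penalty terms when a weight w is reduced by eps
def penalty_deltas(w, eps=0.001):
    delta_l1 = abs(w) - abs(w - eps)
    delta_l2 = w**2 - (w - eps)**2
    return delta_l1, delta_l2

print(penalty_deltas(15.0))  # the L2 change is ~30x the L1 change for a large weight
print(penalty_deltas(0.02))  # the L1 change is ~25x the L2 change for a small weight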
H: How would you - on-the-fly - prevent a neural network from overfitting using a Keras callback?
I have a neural network that starts to overfit in that the validation loss begins to increase while the training loss stays ~ flat with epochs.
Is there a generic algorithm - obvious or otherwise, well-known or not - to stop the training early if overfitting is somehow detected?
I note that catboost implements such an algorithm but I have found it nowhere else.
https://catboost.ai/docs/concepts/overfitting-detector.html
Is this all simply a matter of rolling my own callback function and stopping when the training and validation losses start to diverge..?
Preference for TF, Keras, python3, ...
Thanks as ever
AI: Sounds like you're just looking for EarlyStopping, which will stop training when validation loss does not improve for N epochs. It's the same as Iter in catboost. |
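A minimal sketch of what that looks like in Keras (the model and data variables are placeholders for whatever you already have):
from tensorflow.keras.callbacks import EarlyStopping

# stop when validation loss has not improved for 10 epochs
# and roll back to the best weights seen so far
early_stop = EarlyStopping(monitor="val_loss",
                           patience=10,
                           restore_best_weights=True)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=500,
          callbacks=[early_stop])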
H: Random Forrest Sklearn gives different accuracy for different target label encoding with same input features
I'm using sklearn Random Forrest to train my model. With the same input features for the model I tried passing the target labels first with label_binarize to create one hot encodings of my target labels and second I tried using label_encoder to encode my target labels. In both cases I'm getting different accuracy score. Is there a specific reason why this is happening, as I'm just using a different method to encode the labels without changing any input features.
AI: Yes. With y being a 1d array of integers (as after LabelEncoder), sklearn treats it as a multiclass classification problem. With y being a 2d binary array (as after LabelBinarizer), sklearn treats it as a multilabel problem.
Presumably, the multilabel model is predicting no labels for some of the rows. (With your actual data not being multilabel, the sum of probabilities across all classes from the model will probably still be 1, so the model will never predict more than one class. And if always exactly one class gets predicted, the accuracy score for the multiclass and multilabel models should be the same.) |
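A small sketch showing the two setups side by side (dummy data, just to make the shapes explicit):
import numpy as np
from sklearn.preprocessing import LabelEncoder, LabelBinarizer
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(100, 5)                        # dummy features
y = np.random.choice(["a", "b", "c"], size=100)   # dummy string labels

y_multiclass = LabelEncoder().fit_transform(y)    # shape (100,)   -> multiclass problem
y_multilabel = LabelBinarizer().fit_transform(y)  # shape (100, 3) -> multilabel problem

clf_a = RandomForestClassifier(n_estimators=100).fit(X, y_multiclass)
clf_b = RandomForestClassifier(n_estimators=100).fit(X, y_multilabel)

# clf_b can output an all-zero row (no label predicted at all), which the
# accuracy metric counts as wrong even if the most probable class was correct
print(clf_a.predict(X[:3]))
print(clf_b.predict(X[:3]))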
H: How to handle columns with large/infinte values in dataset for ML classification
Computed a column using a formula (the formula doesn't involve any log functions, just a group by with .sum()), but as expected this column ends up with infinite/extremely large values like below:
-inf
nan
inf
-3.000e+32
7.3297e+23 ... etc. (similar data)
My doubt is how I should handle this column, since I cannot simply drop it for my binary classification prediction. What techniques should be applied to this column before passing it to the model classifier?
Thanks!
AI: It depends a bit on the model you are going to run. I will explain a bit for linear models and for decision tree ensembles (gradient boosting and random forests).
Decision tree
Not much to do here: when the tree is built, each node chooses a split, and if splitting on a large value yields any information gain, the tree will use it.
Some implementations, such as CatBoost, apply quantization to the features. The split could then end up at something like the 95% quantile, which works fine with large values.
Generalized Linear Models
You will need to change this. My suggestion is creating a feature based on the values of the column (e.g. if df.col1 > 9999: 1 else 0),
and then winsorizing the column at a certain threshold.
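A rough sketch of both steps with pandas/numpy (the column name, the toy values and the cut-offs are just placeholders):
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [-np.inf, np.nan, np.inf, -3.000e+32, 7.3297e+23, 1.5, 2.3]})

# 1) keep the information "this value was extreme/undefined" as its own binary feature
df["col1_is_extreme"] = ((~np.isfinite(df["col1"])) | (df["col1"].abs() > 1e6)).astype(int)

# 2) replace +/-inf by NaN, then winsorize (clip) to e.g. the 1st/99th percentiles
df["col1"] = df["col1"].replace([np.inf, -np.inf], np.nan)
lower, upper = df["col1"].quantile([0.01, 0.99])
df["col1_clipped"] = df["col1"].clip(lower, upper)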
H: How is learning rate calculated in sklearn Lasso regression?
I was applying different regression models to Kaggle Housing dataset for advanced regression. I am planning to test out lasso, ridge and elastic net. However, none of these models have learning rate as their parameter.
How is the learning rate calculated for these models? Is it dependent on the dataset being trained? I know these models are regularized linear regression and must use a learning rate to update their model weights. Or is there a different way to update the model?
AI: With sklearn you can have two approaches for linear regression:
1) The LinearRegression object uses the Ordinary Least Squares (OLS) solver from scipy, since linear regression is one of the few models with a closed-form solution. This is achieved by just inverting and multiplying some matrices.
2) SGDRegressor, which is an implementation of stochastic gradient descent, a very generic one where you can choose your loss and penalty terms. To obtain linear regression you choose the squared loss with the penalty set to none (plain linear regression) or to L2 (ridge regression); this is the variant that actually exposes a learning rate.
There is no "typical gradient descent" because it is rarely used in practise. If you can decompose your loss function into additive terms, then stochastic approach is known to behave better (thus SGD) and if you can spare enough memory - OLS method is faster and easier (thus first solution).
This answer is mostly extracted from https://stackoverflow.com/questions/34469237/linear-regression-and-gradient-descent-in-scikit-learn-pandas/34470001#34470001
The OP there is @Netro and the answer is from @lejlot.
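To make the contrast concrete, here is a hedged sketch on synthetic data: LinearRegression (closed form) and Lasso (coordinate descent) expose no learning rate at all, while SGDRegressor does:
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, SGDRegressor

X, y = np.random.rand(200, 3), np.random.rand(200)

# no learning rate anywhere: closed form (OLS) / coordinate descent (Lasso)
ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.01).fit(X, y)

# explicit stochastic gradient descent: the learning rate is a real hyperparameter
sgd = SGDRegressor(penalty="l1",               # L1 penalty gives a lasso-style model
                   alpha=0.01,
                   learning_rate="invscaling", # schedule for shrinking the step size
                   eta0=0.01,                  # initial learning rate
                   max_iter=1000).fit(X, y)    # default loss is the squared loss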
H: While downsampling training data should we also downsample the validation data or retain validation split as it is?
I am dealing with class imbalance problem.
In this case, I am down sampling the majority class lables in the training set.
Among training, validation and test splits, the majority class in training split is down-sampled, and test split is retained as it is. However, should the validation split be downsampled according to the training-set or should it be retained as it is?
This is because the validation set controls the training process.
AI: I would recommend not to downsample the validation set. In the end you care about performance on the test set with the skewed class distribution. Therefore your validation set (used for hyperparameter selection, early stopping etc.) should have the same distribution in my opinion.
Have you considered upsampling the minority class? By downsampling you lose training data, which might contain valuable information, and therefore you might harm the learning process.
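A minimal sketch of upsampling only the training split with scikit-learn's resample (toy data; the column names are placeholders):
import pandas as pd
from sklearn.utils import resample

# toy training split: 90 negatives, 10 positives
df_train = pd.DataFrame({"feature": range(100),
                         "label":   [0] * 90 + [1] * 10})

majority = df_train[df_train["label"] == 0]
minority = df_train[df_train["label"] == 1]

minority_upsampled = resample(minority,
                              replace=True,             # sample with replacement
                              n_samples=len(majority),  # match the majority size
                              random_state=42)

df_train_balanced = pd.concat([majority, minority_upsampled])
# the validation and test splits are left untouched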
H: What are the differences and advantages of TensorFlow and Octave for machine learning?
I have been exploring the different libraries and languages you can use in order to implement machine learning. During this, I have stumbled upon the library TensorFlow and Octave (a high-level programming language), as both are intended for numerical computation.
What are the differences and advantages of using either?
AI: Octave is a great language for prototyping and experimenting with ML algorithms, as it has built-in support for numerical linear algebra such as matrix and vector calculations. Octave is optimized for rapid calculations, which is very useful in Machine Learning. It is also quite easy to do matrix multiplications in Octave as Matrices are first-class objects in Octave.
Tensorflow is indeed a versatile platform for machine learning, with an ever-expanding list of packages and frameworks being built around it.
Octave is a good tool for learning the essentials and internals of mathematics of machine learning and Tensorflow is a good platform for building industry solutions for machine learning projects. Hence both are good for their own purposes. |
H: How to pre-process the name String of a customer?
I implement logistic regression to predict if a customer is a business or a non-business customer with the help of TensorFlow in Python. I have several feature candidates like name, street, zip, longitude and latitude. At the moment I am thinking of how to use the name field. The name often has repeating parts like “GmbH” (e.g. “Mustermann GmbH”) which in this context has a similar meaning to Corp., which is an indicator that the customer is a business customer. This information is useless in combination with the other parts of the name, because then the name will be unique. So my question is: how should I pre-process this field so that only repeating parts will be used to predict the classification?
AI: You may want to tokenize the strings, e.g., “Mustermann GmbH” tokenizes into "Mustermann" and "GmbH". Allow for spaces and commas certainly, perhaps also hyphens and other punctuation.
You may want to look into Natural Language Processing (NLP) if you're classifying text, but whatever method you choose should have better luck sniffing out business vs. non-business using tokens of the strings. |
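A minimal sketch of the tokenization plus a "repeating part" feature (the suffix list here is only an illustration, not a complete one):
import re

LEGAL_SUFFIXES = {"gmbh", "ag", "kg", "ug", "ohg"}   # extend with whatever repeats in your data

def name_features(name):
    tokens = re.split(r"[\s,\-]+", name.lower())
    return {"has_legal_suffix": int(any(t in LEGAL_SUFFIXES for t in tokens))}

print(name_features("Mustermann GmbH"))   # {'has_legal_suffix': 1}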
H: Evaluate Keras model with scikit-learn metrics
How does Keras calculate accuracy, precision, recall, and AUC?
I've created a model for categorical classification (i.e., multiple classes) by using keras.losses.CategoricalCrossentropy() as loss function, and in the model.compile() method I've specified the following metrics:
METRICS = [
keras.metrics.CategoricalAccuracy(name='acc'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
When I evaluate the model with model.evaluate(X_test, y_test, verbose=2) the result is the following:
147/147 - 1s - loss: 0.5127 - acc: 0.8073 - precision: 0.8437 - recall: 0.7696 - auc: 0.9537
Then, I want to compare its performance with the results obtained with other algorithms (e.g., DecisionTreeClassifier() implemented in scikit-learn).
To evaluate the other classifiers, I'm using the following:
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
print("AUC: %2.3f" % roc_auc_score(y_test, pred_bin))
print("ACC: %2.3f" % balanced_accuracy_score(lb.inverse_transform(y_test), pred))
print("PRE: %2.3f" % precision_score(lb.inverse_transform(y_test), pred, average='weighted'))
print("REC: %2.3f" % recall_score(lb.inverse_transform(y_test), pred, average='weighted'))
However, when I test the performance of my NN with this piece of code, I obtain the following results:
ACC: 0.715 PRE: 0.801 REC: 0.807 AUC: 0.920
As you can note, the results differ from those obtained with model.evaluate() of Keras.
How can I fairly compare the different methods?
AI: You could use class KerasClassifier from keras.wrappers.scikit_learn, which wraps a Keras model in a scikit-learn interface, so that it can be used like other scikit-learn models and then you could evaluate it with scikit-learn's scoring functions, e.g.:
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.metrics import roc_curve, auc
keras_model = ...  # a function that builds and compiles the Keras model
classifier = KerasClassifier(build_fn=keras_model, batch_size=32)
classifier.fit(X_train, y_train)  # the wrapper must be fitted before predicting
y_pred_keras = classifier.predict(X_test).ravel()
fpr, tpr, thresholds = roc_curve(y_test, y_pred_keras)
auc_score = auc(fpr, tpr)
You can find more info in the Keras docs or in online tutorials about combining Keras models with scikit-learn metrics.
H: Why Scikit and statsmodel provide different Coefficient of determination?
First of all, I know there is a similar question, however, I didn't find it so much helpful.
My issue is concerning simple linear regression and the outcome of R-squared. I found that results can be quite different if I use statsmodels and scikit-learn.
First of all my snippet:
import altair as alt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
np.random.seed(0)
data = pd.DataFrame({
'Date': pd.date_range('1990-01-01', freq='D', periods=50),
'NDVI': np.random.uniform(low=-1, high=1, size=(50)),
'RVI': np.random.uniform(low=0, high=1.4, size=(50))
})
Output:
Date NDVI RVI
0 1990-01-01 0.097627 0.798275
1 1990-01-02 0.430379 0.614042
2 1990-01-03 0.205527 1.383723
3 1990-01-04 0.089766 0.142863
4 1990-01-05 -0.152690 0.292427
5 1990-01-06 0.291788 0.225833
6 1990-01-07 -0.124826 0.914352
My independent and dependent variable:
X = data[['NDVI']].values
X2 = data[['NDVI']].columns
Y = data['RVI'].values
Scikit:
regressor = LinearRegression()
model = regressor.fit(X, Y)
coeff_df = pd.DataFrame(model.coef_, X2, columns=['Coefficient'])
print(coeff_df)
Output:
Coefficient
NDVI 0.743
print("R2:", model.score(X,Y))
R2: 0.23438947208295813
Statsmodels:
model = sm.OLS(X, Y).fit() ## sm.OLS(output, input)
predictions = model.predict(Y)
# Print out the statistics
model.summary()
Dep. Variable: y R-squared (uncentered): 0.956
Model: OLS Adj. R-squared (uncentered): 0.956
Method: Least Squares F-statistic: 6334.
Date: Mon, 18 May 2020 Prob (F-statistic): 1.56e-199
Time: 11:47:01 Log-Likelihood: 43.879
No. Observations: 292 AIC: -85.76
Df Residuals: 291 BIC: -82.08
Df Model: 1
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
x1 1.2466 0.016 79.586 0.000 1.216 1.277
Omnibus: 14.551 Durbin-Watson: 1.160
Prob(Omnibus): 0.001 Jarque-Bera (JB): 16.558
Skew: 0.459 Prob(JB): 0.000254
Kurtosis: 3.720 Cond. No. 1.00
And scatterplot of data:
How should I proceed with this analysis?
AI: You need to add an intercept to statsmodels manually, while it is added automatically in sklearn.
import altair as alt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
np.random.seed(0)
data = pd.DataFrame({
'Date': pd.date_range('1990-01-01', freq='D', periods=50),
'NDVI': np.random.uniform(low=-1, high=1, size=(50)),
'RVI': np.random.uniform(low=0, high=1.4, size=(50))
})
X = data[['NDVI']].values
X2 = data[['NDVI']].columns
Y = data['RVI'].values
# Sklearn (note syntax order X,Y in fit)
regressor = LinearRegression()
model = regressor.fit(X, Y)
print("Coef:", model.coef_)
print("Constant:", model.intercept_)
print("R2:", model.score(X,Y))
# Statsmodels (note syntax order Y,X in fit)
X = sm.add_constant(X) # manually add a constant here
model = sm.OLS(Y, X).fit()
print(model.summary())
Results
Sklearn
Coef: [-0.06561888]
Constant: 0.5756540424787774
R2: 0.0077907160447101545
Statsmodels
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.008
Model: OLS Adj. R-squared: -0.013
Method: Least Squares F-statistic: 0.3769
Date: Tue, 19 May 2020 Prob (F-statistic): 0.542
Time: 11:18:42 Log-Likelihood: -25.536
No. Observations: 50 AIC: 55.07
Df Residuals: 48 BIC: 58.90
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.5757 0.059 9.796 0.000 0.457 0.694
x1 -0.0656 0.107 -0.614 0.542 -0.281 0.149
==============================================================================
Omnibus: 5.497 Durbin-Watson: 2.448
Prob(Omnibus): 0.064 Jarque-Bera (JB): 3.625
Skew: 0.492 Prob(JB): 0.163
Kurtosis: 2.122 Cond. No. 1.85
============================================================================== |
H: Replace human judgements with mathematical approach/theories
I would like to give a context of what I did.
1) Let's say there are two dictionaries (dict A and dict B) each containing a list of words/terms as shown below.
2) Now my task is to find matching words for dict A in dict B
3) I use an automated tool (fuzzy-matching/similarity) to do the above task and the output looks like below
4) Once I get the output as above, you can see that there are some records with match % less than 100. It is totally possible that dict B didn't have the exact matching term. It's fine.
5) So, what I do is review terms that have match % less than 50. Meaning I take those terms (that are less that are 50% match) and check for a related term in dict B again. Doing this, I am able to update the output like below. Because we know through human experience that sore throat lozenge and strepsils are related (matching is better now when compared to earlier where it was mapped to orange (totally irrelevant)). So this problem is more of a semi-automated task rather than full-blown ML task
So, now my question is (not on NLP or ML but below)
1) But how can I prove that choosing 50% as the threshold for manual review is the right one? Because this is a subjective thing/ based on individual judgment. Meaning I could have chosen 30% or 40% as well, it could have saved my time in manually reviewing terms
2) Meaning, this 50% isn't written in stone; what I am looking for is some theoretical/mathematical/statistical approach through which I can arrive at this threshold value, rather than a subjective judgment which I cannot defend/justify.
Can you people share some views/techniques on how can this be done in a systematic approach?
AI: When it comes to choosing a threshold, I can see 3 approaches:
Make an educated guess
This is what you are currently doing. You pick a value and would need to argue why this is a reasonable threshold. Obviously, the argument is as strong as the assumptions you make.
Unsupervised way
If you compute the matching score for all pairs between A and B, you can analyze the histogram of these scores. Most likely, you will have quite a lot of scores near 0 and a few scores near 100. Then, you can decide to pick a threshold. This itself can be done in different ways. You can pick the median/mean matching score, this becomes close to option 1 but at least, the number comes from a specific mathematical concept. You can use the Jenks-Fisher algorithm (here is a Python implementation). In summary, the algorithm will find a threshold that splits your data into clusters which minimize the intra-cluster deviation. This would be better than the median/mean since it's expected that the data will be quite skewed.
Supervised way
If you somehow have access to which matches are confirmed/overturned by humans, you could use entropy/information gain to find the best threshold. This is similar to building a decision tree of depth 1 (a decision stump), where your input is the matching score and the target is a binary variable (whether or not a human says this is a match).
So your data would be something like this:
score, is_match
0.0, 0
0.1, 0
0.2, 1
0.3, 0
0.4, 1
0.5, 0
0.6, 0
0.7, 0
0.8, 1
0.9, 1
1.0, 1
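For option 3, a decision stump fitted on exactly this toy data recovers the threshold (a sketch with scikit-learn):
import numpy as np
from sklearn.tree import DecisionTreeClassifier

scores   = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]).reshape(-1, 1)
is_match = np.array([0,   0,   1,   0,   1,   0,   0,   0,   1,   1,   1])

stump = DecisionTreeClassifier(max_depth=1, criterion="entropy")
stump.fit(scores, is_match)

# the split of the root node is the learned threshold
print("threshold:", stump.tree_.threshold[0])   # 0.75 for this toy data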
My personal recommendation would be to use the Jenks-Fisher algorithm (option 2) |
H: Finding gender affinity for businesses
What are the different models I can use to find the gender affinity of businesses using yelp dataset-- https://www.kaggle.com/yelp-dataset/yelp-dataset . I need to find Probablity (Male buying from a merchant) and Probablity (Female buying from a merchant) where both probablities add to 1 . There is no information about gender so I can use the Genderize api to find gender using names of users. This is an unsupervised problem
AI: Since you're only interested in the gender affinity of businesses in your dataset, no model is needed. You can compute gender affinity directly: For each business, find the total number of customers, the total number of male customers, and the total number of female customers. Then $Pr(Male) = \frac{males}{total}$ and $Pr(Female) = \frac{females}{total}$ |
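A minimal sketch of that computation with pandas (the table and column names are assumptions about how you join the reviews with the inferred gender):
import pandas as pd

# toy example: one row per (business, reviewer) with an inferred gender
reviews = pd.DataFrame({"business_id": ["b1", "b1", "b1", "b2", "b2"],
                        "gender":      ["M",  "F",  "F",  "M",  "M"]})

affinity = (reviews.groupby("business_id")["gender"]
                   .value_counts(normalize=True)   # share of each gender per business
                   .unstack(fill_value=0)
                   .rename(columns={"M": "P_male", "F": "P_female"}))
print(affinity)   # P_male + P_female sums to 1 for every business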
H: What's my target variable?
I am beginner in data science. I have this "aids" dataset from "mdhglm" package in R.
dataset = aids, info = Repeated Measures on AIDS Data
data("aids", package = "mdhglm")
Here, I want to know what is my target variable?
I am sorry, if the question is too basic.
AI: Your target variable is whatever you want to predict. For this particular dataset, logical choices are probably "death", "event", or "AZT". You'd typically want to use some kind of patient data to predict these outcomes. It wouldn't make much sense, for example, to try to build a model that predicts the treatment course from the death variable - although it's statistically feasible, you'd generally prefer a potentially causative relationship of predicting death from the treatment course. Death cannot possibly be causative of treatment course, since it always occurs after treatment is given.
Even if you could determine the treatment course from whether or not someone died, it's not going to be very useful from a practical, clinical perspective. There are few cases where you'd know whether someone died and didn't know their treatment course (but wanted to know); it'll be far more useful to have someone's current course of treatment, and predict whether they'll die. But in principle, your target can be any variable at all - whether the resulting model is useful, or meaningful, or implementable in practice will depend on what those values actually represent. |
H: Loss Function - Decreasing a lot at beginning of the epoch
I noticed something when looking at the verbose output during training. When I train my model, in the early part of the epoch (first 20%), the loss is decreasing a lot. Then, in the rest of the epoch (last 80%), the loss is very stable and doesn't change that much until the next epoch, where it does the same thing.
I build a model that is training a kind of large dataset (60000 entries). I am using Keras and Tensorflow and my model is just a simple regression model with conv2d and dense layers. I am trying to get the best loss function possible. The loss function that I am using is just a simple Mean Squared Error (MSE). I am also using Adam optimizer.
I don't understand how the loss is decreasing a lot at the beginning of the epoch and less for the rest of the training. Should I reduce the size of my dataset? Is it considered overtraining then?
Is there a way to speed up this process (like early stopping but just for the epoch) ? As I see it most of the training is done for the first part of the data. Maybe I am wrong.
AI: Have you already plotted the training and validation loss over multiple epochs? What you would expect is something that looks like an exponential decay.
Chances are your model learns the mean of the target variable in the first few batches, which already reduces the MSE loss considerably, and only later learns the subtle differences in your data. You can try to use different batch sizes as well.
Are you generally happy with the performance on your model? Does it perform well on the test/validation set? If this is the case there might be no reason to be worried about. |
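If you have not plotted it yet, a minimal sketch using the History object returned by model.fit (the model and data variables are placeholders):
import matplotlib.pyplot as plt

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=50, batch_size=64)

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("MSE loss")
plt.legend()
plt.show()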
H: How to understand ANOVA-F for feature selection in Python. Sklearn SelectKBest with f_classif
I am trying to understand what it really means to calculate an ANOVA F value for feature selection for a binary classification problem.
As I understand from the calculation of ANOVA from basic statistics, we should have at least 2 samples for which we can calculate the ANOVA value. So does this mean in the Sklearn implementation that these samples are taken from within each feature? What exactly do these samples represent in the case of feature selection for this problem?
I tried to setup a very simple example which I have listed below, but I am still struggling to understand what the ANOVA value really means here? I'm also struggling to understand how to calculate this by hand, which usually helps me see what is happening on the inside.
In this example, repayment status 0 means the loan is repaid and 1 means it defaulted. I have only supplied 5 rows of data to keep it simple.
The code and results are as follows:
AI: Intuition
We have two classes and we want to find a score for each feature saying "how well this feature discriminates between two classes". Now look at the figure bellow. There are two classes red and blue and two features on $x$ and $y$ axes.
$x$ feature is a better separator than $y$ because if we project data on $x$ axis we get two completely separated classes but if we project data onto $y$, two classes have overlap in the middle of axis (comment if we need more clarification).
What makes $x$ better than $y$? As you see in the figure above:
According to $x$, two classes are far from each other.
Math Translation: The distance between means of class distributions on $x$ is more than $y$.
According to $x$, the scatter of classes do not fall on each other but according to $y$ they do. It means that according to $x$, classes are more compact so more probable to not have an overlap with another class.
Math Translation: The variance of each single class according to $x$ is less than those of $y$.
Now we can easily say $\frac{distance\_between\_classes}{compactness\_of\_classes}$ is a good score! Higher this score is, better the feature discriminates between classes.
Now we know, according to this definition, what $good$ and $bad$ features mean. Let's find a math formulation to quantize it.
Mathematics (to do on paper)
Let's formulate our two criteria:
Distance between means of class distributions is the numerator. Population is taken into account, I assume for statistical significance (needs a reference from a statistician!).
A concept similar to sample variances of classes is the denominator. Here instead of dividing sum of squares by $(sample\_population -1)$, we sum up all $(sample\_population -1)$s and divide the final value by them.
Now Back To Your Data
To calculate the above you calculate sum of between-class distances and sum of within-class variations for each feature according to different classes. I do it for only one feature. Let's choose Loan.
Class 1: [5000, 18000]
Class 2: [47500, 45600, 49500]
Mean of all points: (47500 + 45600 + 49500 + 5000 + 18000) / 5 = 33120
Mean 1: (5000 + 18000) / 2 = 11500
Mean 2: (47500 + 45600 + 49500) / 3 = 47533
Numerator: 2 x (11500 - 33120)^2 + 3 x (47533 - 33120)^2 = 1,558,052,507
For denominator we go with Sum of Squares Within class (it is simply the numerator in formulation of sample variance):
SSW 1: (5000 - 11500)^2 + (18000 - 11500)^2 = 84,500,000
SSW 2: (47500 - 47533)^2 + (45600 - 47533)^2 + (49500 - 47533)^2 = 7,606,667
Na = 2, Nb = 3 --> (Na - 1) + (Nb - 1) = 1 + 2 = 3
Denominator: (84,500,000 + 7,606,667)/3 = 30,702,222
Now the F-Score for feature Loan is:
F-Score: 1,558,052,507 / 30,702,222 = 50.74
as you see with your calculation in Python.
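The same number comes out of scikit-learn directly. A minimal check using only the Loan column from the example (the exact label values don't matter for the F statistic, only the grouping):
import numpy as np
from sklearn.feature_selection import f_classif

loan   = np.array([[5000], [18000], [47500], [45600], [49500]])
status = np.array([0, 0, 1, 1, 1])   # the two groups used in the hand calculation

F, p = f_classif(loan, status)
print(F[0])   # ~50.7, matching the calculation above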
Note
I tried to explain it in a simple way. For example, the denominator of sample variance is called degree of freedom but I skipped those terms for simplicity.
Just understand the main idea. Further the means and smaller the within variances, better the feature is. You can formulate it yourself as well (however you will not have p-values anymore ;) )
Finding P-values and understanding what it means is another story, which I skipped.
Hope it helped.
Good Luck! |
H: Mean estimation for nested location data
I want to estimate the average income for a location. I have nested data in the following way: A block is inside a neighborhood, which is inside a zipcode, which is inside a district, which is inside a region, which is inside a state.
I want to estimate the average income at a block level, and the issue is that I don't have much data at that level. I have much more data at a state level, but it is not such a good approximation.
How would you deal with this problem? Are there any ways to incorporate the uncertainty of not having many data points at a block level? Are there any Bayesian frameworks that allow us to incorporate data of all levels? Is it possible that mixed models are able to do so?
If you explain any method, if you can provide a python package where that method is built, it'll be great!
Thanks!
AI: I don't know if that is the case, but if some kind of continuity assumptions are realistic, you could try to move away from categorical variables (block) to continuous variables (longitude and latitude). Then, if you have information on two neighboring blocks, you could interpolate those values with say a spline.
Of course, this can also be fitted into a machine learning model with predictors such as average income of blocks with distance < x.
And if you don't have data of nearby blocks, then your state average might be the next best approximation.
Your state level data can serve as a predictor and also as validation.
Also, plotting your data always helps get some kind of intuition. |
H: Learning Resources for Recommendation system
Beginner here:
Could you please suggest some of the learning resources (books/youtube/articles) for beginners who want to build a recommendation system for their organization.
Have no clue about it except for Amazon and Netflix example.
AI: If you have a technical background, I would certainly recommend chapter 9 of Mining Massive Datasets by Jure Leskovec. Video lectures on the topic are also available there (look for chapter 9). Blog posts on recommender systems could also give you some overall insight.
Please keep in mind that, ML-wise, recommendation is a more creativity-related topic than other classic questions like classification or regression, and indeed all those classic problems could be a step in building a reasonable recommender engine.
H: Machine learning methods on 1 feature dataset
Assuming I have the following dataset, but much longer: can I use any machine learning methods with only one feature, given Name and predicting Fullname? I'm a newbie and I think that it doesn't make much sense, because it comes down to basic statistics such as taking the mode. But maybe there is something better? If some methods can be used, which ones?
Name | Fullname
------------------
John | Novak
Adam | Johnson
Sophia | Loren
Dave | Miller
Anna | Smith
Dave | Smith
...
AI: Can I use any machine learning methods having only one feature?
Yes!
In fact, many NLP classifications tasks are in this format. Given 1 piece of text, classify something. For example:
Given 1 review, classify the sentiment
Given 1 news article, classify the topic
Given 1 chat message, classify the intent
And now you have:
Given 1 name, classify the Fullname
Can a better method be used?
Like you mentioned, you could just find the most common Fullname for a given name and every time you get a name you have a lookup table for the Fullname. However, what will happen when a name you have never seen before appears, how do you classify it? Are you also assuming that you already have the full list of Fullnames?
Assumption: you know all Names and Fullnames
In this case, do as you suggested. Create a dictionary mapping Name-Fullname by finding the most common Fullname for every Name.
Assumption: you know all Fullnames but not all Names
Let's say you have the mappings:
Peter -> Johnson
John -> Smith
Then, there is a name you have never seen before, Pete for example, which does not appear in your mapping table.
You could try two approaches:
The simple way - find which name in the mapping is closest to Pete using some word distance measure, like Levenshtein.
The more robust way - forget the notion of mapping table and use a machine learning model. You will need the following things:
A text vectorizer to transform your text into a numerical vector. I would suggest a character level n-gram TF-IDF.
A classifier. If you use the vectorizer I suggested, then you will need a linear classifier, like an SVM.
If you go with approach two, when you encounter the name Pete, it will be split into n-grams (e.g. [pe, et, te, pet, ete]) and vectorized.
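A rough sketch of that second approach with scikit-learn (using the toy rows from the question; a real setup would of course need much more data and a train/test split):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

names     = ["John", "Adam", "Sophia", "Dave", "Anna", "Dave"]
fullnames = ["Novak", "Johnson", "Loren", "Miller", "Smith", "Smith"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)),  # character n-grams
    LinearSVC())
model.fit(names, fullnames)

print(model.predict(["Jon"]))   # classified via character overlap with the known names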
Assumption: you don't know all Fullnames and you don't know all Names
This gets more interesting because you could be working with Fullname generation.
It could be used when you move to names from other countries as well.
For example, you already have the mapping:
Peter -> Johnson
John -> Smith
Then you start dealing with Dutch names and encounter Pieter and Jan.
Then you might want to get the following results where even the Fullnames are different:
Pieter -> Janssen
Jan -> Smeets
For this, you could use a seq-to-seq Recurrent Neural Network. The architecture can be similar to ones used for neural language translation.
However, all embeddings you create have to be character level. Instead of learning an embedding for every word, you learn for every character. You also feed your network one character at a time. This way, you will be less likely to find "out of vocabulary" tokens (except for when you find character from another alphabet). |
H: Transformer decoder output - how is it linear?
I'm not quite sure how's the decoder output is flattened into a single vector.
From my understanding, if we feed the encoder a length-N sentence, its output is N x units (e.g. N x 1000), and if we feed the decoder a length-M sentence, the output of the decoder will be an M x units output. M is not fixed (M should be the length of the decoder's raw input) and will change during the different steps of inference.
How do we go from here to a single vector?
Screen shot from "Attention is all you need"
AI: I'm not quite sure how's the decoder output is flattened into a single vector
That's the thing. It isn't flattened into a single vector. The linear transformation is applied to all $M$ vectors in the sequence individually. These vectors have a fixed dimension, which is why it works. |
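A tiny numeric sketch of that: the same projection matrix is applied to every one of the $M$ position vectors, so the output stays a sequence and nothing gets flattened (sizes here are arbitrary):
import numpy as np

M, d_model, vocab_size = 7, 512, 10000        # arbitrary example sizes

decoder_output = np.random.randn(M, d_model)  # M vectors of fixed size d_model
W = np.random.randn(d_model, vocab_size)      # one shared linear projection
b = np.zeros(vocab_size)

logits = decoder_output @ W + b               # shape (M, vocab_size)
print(logits.shape)                           # (7, 10000): one distribution per position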
H: BLEU_SCORE gives bad scores - what am I doing wrong?
I want to calculate the BLEU_SCORE but it gives me bad results and I don't know why.
For example, this is my reference and predicted sentences:
ref :
['electron', 'and', 'a', 'proton']
predicted :
['electron', 'and', 'a', 'proton']
ref :
['to', 'reach', 'the', 'nectar', 'at', 'the', 'bottom', 'of', 'flowers']
predicted :
['to', 'reach', 'the', 'nectar', 'at', 'the', 'bottom', 'of', 'flowers']
ref :
['during', 'the', 'summer', 'near', 'the', 'north', 'pole']
predicted :
['during', 'the', 'summer', 'near', 'the', 'north', 'pole']
ref :
['only', 'blue', 'light', 'is', 'reflected', 'by', 'the', 'block']
predicted :
['only', 'blue', 'light', 'is', 'reflected', 'by', 'the', 'block']
ref :
['between', '20', 'and', '40', 'degrees', 'latitude']
predicted :
['between', '20', 'and', '40', 'degrees', 'latitude']
ref :
['external', 'and', 'internal', 'combustion', 'engines']
predicted :
['external', 'and', 'internal', 'combustion', 'engines']
ref :
['cleaning', 'disinfecting', 'and', 'in', 'swimming', 'pools']
predicted :
['cleaning', 'disinfecting', 'and', 'in', 'swimming', 'pools']
ref :
['body', 'mass', 'index', 'bmi']
predicted :
['body', 'mass', 'index', 'bmi']
ref :
['they', 'put', 'nutrients', 'into', 'the', 'soil', 'that', 'plants', 'use', 'to', 'grow']
predicted :
['they', 'put', 'nutrients', 'into', 'the', 'soil', 'that', 'plants', 'use', 'to', 'grow']
ref :
['structure', 'of', 'earth', 'interior']
predicted :
['structure', 'of', 'earth', 'interior']
And here is the code I used to calculate the BLEU_SCORE :
from nltk.translate.bleu_score import corpus_bleu
print("Individual n-gram")
print("Individual 1-gram")
print('BLEU-1: %f' % corpus_bleu(ref, pre, weights=(1.0, 0, 0, 0)))
print("Individual 2-gram")
print('BLEU-2: %f' % corpus_bleu(ref, pre, weights=(0, 1.0, 0, 0)))
print("Individual 3-gram")
print('BLEU-3: %f' % corpus_bleu(ref, pre, weights=(0, 0, 1.0, 0)))
print("Individual 4-gram")
print('BLEU-4: %f' % corpus_bleu(ref, pre, weights=(0, 0, 0, 1.0)))
OUTPUT:
Individual n-gram
Individual 1-gram
BLEU-1: 0.015625
Individual 2-gram
BLEU-2: 0.000000
Individual 3-gram
BLEU-3: 0.000000
Individual 4-gram
BLEU-4: 0.000000
Can anyone help with this? I don't know why it doesn't give me good results.
AI: You must be getting a warning with this output. The warning pretty much tells you the reason why the scores are 0: there are no overlapping 2-grams or 3-grams in your example, because the references were passed with the wrong level of nesting.
Here is the detailed explanation, I couldn't explain it better-
https://github.com/nltk/nltk/issues/1838
EDIT-
Solution-
Although the warning tells the reason, here is how you can fix this-
Notice the extra level of nesting in ref compared to pre:
from nltk.translate.bleu_score import corpus_bleu
ref =[[['electron', 'and', 'a', 'proton']]]
pre =[['electron', 'and', 'a', 'proton']]
print("Individual n-gram")
print("Individual 1-gram")
print('BLEU-1: %f' % corpus_bleu(ref, pre, weights=(1.0, 0, 0, 0)))
print("Individual 2-gram")
print('BLEU-2: %f' % corpus_bleu(ref, pre, weights=(0, 1.0, 0, 0)))
print("Individual 3-gram")
print('BLEU-3: %f' % corpus_bleu(ref, pre, weights=(0, 0, 1.0, 0)))
print("Individual 4-gram")
print('BLEU-4: %f' % corpus_bleu(ref, pre, weights=(0, 0, 0, 1.0)))
You can refer to the Python help for corpus_bleu for more details.
H: Converting a column to a DataTimeIndex and query by day
I've a time series dataset which I thrown into Pandas. I have later converted the Date column into a DateTime column and then transformed to an index.
To here, everything is ok.
Problems arise when I try to segment the data by day. If I do something like df['2020-05-20'], Python throws an exception.
If I limit the selection to Year and Month everything is ok.
I initially thought this could be due to multiple rows with the same index, but this is not the case (I've tested with a unique date series).
See below a reproducible example:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.array([['2019-05-26','4'],['2019-06-02','4'],['2019-06-09','2'],['2019-06-16','3'],['2019-06-23','2'],['2019-06-30','3'],['2019-07-07','4'],['2019-07-14','4'],['2019-07-21','3'],['2019-07-28','2'],['2019-08-04','5'],['2019-08-11','4'],['2019-08-18','4'],['2019-08-25','3'],['2019-09-01','5'],['2019-09-08','4'],['2019-09-15','4'],['2019-09-22','3'],['2019-09-29','4'],['2019-10-06','3'],['2019-10-13','7'],['2019-10-20','4'],['2019-10-27','4'],['2019-11-03','3'],['2019-11-10','4'],['2019-11-17','3'],['2019-11-24','4'],['2019-12-01','3'],['2019-12-08','2'],['2019-12-15','2'],['2019-12-22','1'],['2019-12-29','3'],['2020-01-05','4'],['2020-01-12','5'],['2020-01-19','7'],['2020-01-26','23'],['2020-02-02','19'],['2020-02-09','9'],['2020-02-16','9'],['2020-02-23','52'],['2020-03-01','26'],['2020-03-08','30'],['2020-03-15','46'],['2020-03-22','62'],['2020-03-29','100'],['2020-04-05','87'],['2020-04-12','56'],['2020-04-19','55'],['2020-04-26','45'],['2020-05-03','45'],['2020-05-10','41'],['2020-05-17','29']]), columns=['Week', 'Volume'])
df['Week'] = pd.to_datetime(df['Week'])
df.set_index('Week', inplace = True)
df.index
At this stage, with your index converted to a DateTimeIndex, both the following statements fail:
df['05-26-2019']
# or
df['2019-05-26']
But a query by month alone with either df['May 2019'] or df['05-2019'] work fine.
Any idea why I can't segment also using the day?
AI: Solution-
df.loc['05-26-2019']
Borrowed from-
https://stackoverflow.com/questions/36871188/how-to-access-pandas-dataframe-datetime-index-using-strings |
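For completeness, on the DataFrame built in the question, day-level selection and date slicing both work once you go through .loc (a sketch; behaviour can vary slightly across pandas versions):
df.loc["2019-05-26"]                  # a single day
df.loc["2019-05-01":"2019-06-30"]     # a slice between two dates
df.loc["2019-06"]                     # a whole month also works via .loc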
H: How do I train a model with pre-saved weights?
I need to train a model but the data after preprocessing is too large to fit in the RAM. Is it possible to preprocess the data in batches and train a model for the small data set and then save the weights and then train it again for the next batches with the saved weights?
the workflow would be this way:
preprocess -> train -> save weights -> preprocess next batch -> train with saved weights ...
If yes, how can I do it?
AI: As you said:
Is it possible to preprocess the data in batches and train a model for the small data set
Yes! In practice, training of neural networks always happens with batches. You never fit the whole dataset into the model at once; any medium-sized dataset could crash the machine.
This is how it works: you extract a slice of your dataframe (the number is usually in the range of 32-256 observations, but it's very task-specific), you train the model on that, and you move to the next slice until the whole dataset has been used. This is what an epoch is, in the deep learning jargon.
... and then save the weights and then train it again for the next batches with the saved weights?
There's no need to save and reload the model at each iteration; just save it once it has been trained for the number of epochs of your choice. If the following day you want to keep training, just reload it and keep iterating one batch at a time.
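A hedged sketch of that workflow with Keras (the preprocess_in_batches generator, the model and the file names are placeholders for your own code):
from tensorflow.keras.models import load_model

# first session: train on whatever fits in memory, then persist the model
for x_batch, y_batch in preprocess_in_batches("data_part_1"):
    model.train_on_batch(x_batch, y_batch)
model.save("model.h5")

# later session: reload weights (and optimizer state) and continue training
model = load_model("model.h5")
for x_batch, y_batch in preprocess_in_batches("data_part_2"):
    model.train_on_batch(x_batch, y_batch)
model.save("model.h5")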
H: VAE generates blue images
I'm trying to train a VAE to generates celebA faces.
The problem I'm facing is that the model only generates blue faces and I'm not sure why and how to fix it.
The encoder:
def build_encoder(self):
conv_filters = [32, 64, 64, 64]
conv_kernel_size = [3, 3, 3, 3]
conv_strides = [2, 2, 2, 2]
# Number of Conv layers
n_layers = len(conv_filters)
# Define model input
x = self.encoder_input
# Add convolutional layers
for i in range(n_layers):
x = Conv2D(filters=conv_filters[i],
kernel_size=conv_kernel_size[i],
strides=conv_strides[i],
padding='same',
name='encoder_conv_' + str(i)
)(x)
if self.use_batch_norm: # True
x = BatchNormalization()(x)
x = LeakyReLU()(x)
if self.use_dropout: # False
x = Dropout(rate=0.25)(x)
# Required for reshaping latent vector while building Decoder
self.shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
self.mean_layer = Dense(self.encoder_output_dim, name='mu')(x)
self.sd_layer = Dense(self.encoder_output_dim, name='log_var')(x)
# Defining a function for sampling
def sampling(args):
mean_mu, log_var = args
epsilon = K.random_normal(shape=K.shape(mean_mu), mean=0., stddev=1.)
return mean_mu + K.exp(log_var / 2) * epsilon
# Using a Keras Lambda Layer to include the sampling function as a layer
# in the model
encoder_output = Lambda(sampling, name='encoder_output')([self.mean_layer, self.sd_layer])
return Model(self.encoder_input, encoder_output, name="VAE_Encoder")
The decoder:
def build_decoder(self):
conv_filters = [64, 64, 32, 3]
conv_kernel_size = [3, 3, 3, 3]
conv_strides = [2, 2, 2, 2]
n_layers = len(conv_filters)
# Define model input
decoder_input = self.decoder_input
# To get an exact mirror image of the encoder
x = Dense(np.prod(self.shape_before_flattening))(decoder_input)
x = Reshape(self.shape_before_flattening)(x)
# Add convolutional layers
for i in range(n_layers):
x = Conv2DTranspose(filters=conv_filters[i],
kernel_size=conv_kernel_size[i],
strides=conv_strides[i],
padding='same',
name='decoder_conv_' + str(i)
)(x)
# Adding a sigmoid layer at the end to restrict the outputs
# between 0 and 1
if i < n_layers - 1:
x = LeakyReLU()(x)
else:
x = Activation('sigmoid')(x)
# Define model output
self.decoder_output = x
return Model(decoder_input, self.decoder_output, name="VAE_Decoder")
The combined model:
def build_autoencoder(self):
self.encoder = self.build_encoder()
self.decoder = self.build_decoder()
# Input to the combined model will be the input to the encoder.
# Output of the combined model will be the output of the decoder.
self.autoencoder = Model(self.encoder_input, self.decoder(self.encoder(self.encoder_input)),
name="Variational_Auto_Encoder")
self.autoencoder.compile(optimizer=self.adam_optimizer, loss=self.total_loss,
metrics=[self.total_loss],
experimental_run_tf_function=False)
self.autoencoder.summary()
The loss function:
def r_loss(self, y_true, y_pred):
return K.mean(K.square(y_true - y_pred), axis=[1, 2, 3])
def kl_loss(self, y_true, y_pred):
kl_loss = -0.5 * K.sum(1 + self.sd_layer - K.square(self.mean_layer) - K.exp(self.sd_layer), axis=1)
return kl_loss
def total_loss(self, y_true, y_pred):
# return self.LOSS_FACTOR * self.r_loss(y_true, y_pred) + self.kl_loss(y_true, y_pred)
return K.mean(self.r_loss(y_true, y_pred) + self.kl_loss(y_true, y_pred))
AI: Thanks to @SalvadorMedina's comment I fixed this embarrassing problem.
the generated image was good, but I used OpenCV to save and show it, which defaults to BGR channel order instead of RGB.
To fix the problem I added the line:
im_rgb = cv2.cvtColor(im_cv, cv2.COLOR_BGR2RGB)
and now it looks like that:
Note:
this problem can be solved without cvtColor(), these should work as well:
im_rgb = im_bgr[:, :, [2, 1, 0]]
im_rgb = im_bgr[:, :, ::-1] |
H: Plot Decision boundary in 3D plot
I am running logistic regression on iris dataset. I computed thetas and this is how I draw a decision boundary line.
x_values = ([min(X_train[:,0]), max(X_train[:,0])])
y_values = - (theta[0] + np.dot(theta[1], x_values)) / theta[2]
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=plt.cm.Set1, edgecolor='k')
plt.plot(x_values, y_values )
I tried this, but the result is odd.
X= np.c_[ X_train, np.zeros(100) ]
theta = theta.reshape(3)
d=0
xx, yy = np.meshgrid(np.arange(np.min(X_reduced[:, 0]), np.max(X_reduced[:, 0])), np.arange(np.min(X_reduced[:, 1]), np.max(X_reduced[:, 1])))
z = (-theta[0] * xx - theta[1] * yy - d) * 1. / theta[2]
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(X[:100, 0], X[:100, 1], X[:100, 2], c=y_train, cmap=plt.cm.Set1, edgecolor='k', s=40)
ax.plot_surface(xx, yy, z, alpha = 0.5)
plt.show()
I guess d in plane equation (ax+by+c*z = d) shouldn't be equal to 0. So I'm completely confused about this.
AI: This answer is borrowed from-
https://stackoverflow.com/questions/36232334/plotting-3d-decision-boundary-from-linear-svm
from sklearn.svm import SVC
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from mpl_toolkits.mplot3d import Axes3D
iris = datasets.load_iris()
X = iris.data[:, :3] # we only take the first three features.
Y = iris.target
#make it binary classification problem
X = X[np.logical_or(Y==0,Y==1)]
Y = Y[np.logical_or(Y==0,Y==1)]
model = svm.SVC(kernel='linear')
clf = model.fit(X, Y)
# The equation of the separating plane is given by all x so that np.dot(svc.coef_[0], x) + b = 0.
# Solve for w3 (z)
z = lambda x,y: (-clf.intercept_[0]-clf.coef_[0][0]*x -clf.coef_[0][1]*y) / clf.coef_[0][2]
tmp = np.linspace(-5,5,30)
x,y = np.meshgrid(tmp,tmp)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(X[Y==0,0], X[Y==0,1], X[Y==0,2],'ob')
ax.plot3D(X[Y==1,0], X[Y==1,1], X[Y==1,2],'sr')
ax.plot_surface(x, y, z(x,y))
ax.view_init(30, 60)
plt.show() |
H: spaCy - Text Preprocessing - Keeping "Pronouns" in text
I am fairly new to machine learning and NLP in general. I am trying to wrap my head around how to do proper text pre-processing (cleaning the text).
I have built a custom text classification model. I have below method that I run on all input text, before serving it to my model. (both in training and testing).
The method will remove stopwords, punctuations and lemmatize the text.
import spacy
from spacy.lang.en.stop_words import STOP_WORDS
import string
def normalize(text, lowercase, remove_stopwords, remove_punctuation):
nlp = spacy.load("en_core_web_sm", disable=['parser', 'tagger', 'ner'])
stops = spacy.lang.en.stop_words.STOP_WORDS
if lowercase:
text = text.lower()
text = nlp(text)
if remove_punctuation:
text = [t for t in text if t.text not in string.punctuation]
lemmatized = list()
for word in text:
lemma = word.lemma_.strip()
if lemma:
if not remove_stopwords or (remove_stopwords and lemma not in stops):
lemmatized.append(lemma)
return " ".join(lemmatized)
Consider below input string:
Input: You're very beautiful!
If I clean that text, using my method:
test_text = "You're very beautiful!"
test_text = normalize(test_text, lowercase=True, remove_stopwords=True, remove_punctuation=True)
It will return: -PRON- beautiful
Is this a "good" way to clean the text? I notice that -PRON- is kept and therefore also used when training the model (same goes for when testing the model, as I use the same normalize method).
Should the -PRON- also be removed from the text? And should I complete even more text preprocessing?
AI: What you are doing seems fine in terms of preprocessing. Removing less informative words like stopwords, punctuation etc. is a very common technique. Here are some of my notes:
probably best for speed to load your "nlp" object outside of the function call
"-PRON-" must be the lemma for "you're" in this case. So you shouldn't remove it as it is following the correct logic. Though, you should investigate whether your model improves if you change your preprocessing logic
if you use SpaCy to create your classification model, I'd recommend experimenting training a model without using any preprocessing. As far as I know, SpaCy handles some of this implicitly through feature engineering |
H: Is using unsupervised learning to setup supervised classification reasonable?
I've got a biological dataset describing genes. The overall idea is that there are thousands of these genes to sort through, so if ML can rank them I can then know which should be going into the lab for functional research first. Currently, I make labels for supervised classification of these genes based on their known biology (so for example some genes interact with drugs related to a disease so I label them as 'most likely to cause the disease' and this goes down until I have a final 4th label of 'unlikely to cause the disease'). The way I make these labels seems impossible to not be biased, since I'm making all the decisions, so I'm wondering if I can compare my decisions with seeing how an unsupervised model would group the data (e.g. I've got 4 labels but if the model finds 5 groups then it shows how far off I am potentially?).
Would it even also be possible to use unsupervised learning to create the labels by itself or would this too be unreliable as you can't know why it's grouping certain genes together? Or would doing this step alone actually make the supervised step redundant anyway?
AI: Is using unsupervised learning to setup supervised classification reasonable?
Absolutely. This is a common strategy in ML. As you said yourself, using information coming from the data itself has the benefit of being less biased.
Would it even also be possible to use unsupervised learning to create the labels?
Technically yes. Though, some clustering techniques require you to specify the number of clusters, which isn't helpful. As you said, if you can cluster data points in a satisfactory manner, you don't need supervised learning anymore. Also, indeed, if your scenario requires you to have an understanding of what differentiates the clusters, you may not be lucky depending on which clusters come out. They are not always interpretable.
What I would suggest would be to turn your classification problem into a regression problem. 1.0 could be most likely to cause the disease, 0.0, least likely. This way, you don't have to worry about how many labels you need in the first place. |
H: How to choose the best algorithm
This might be simple, but I need to know how to choose the best algorithm for a given scenario.
I have a dataset. The target class is, let's say color, this color attribute can have upto 5 values (red, yellow, blue, green, pink)
So I have:
1.) 1 target class - color
2.) Multiple values for that target class - (red, yellow, blue, green, pink)
As this is a labelled dataset with known classes, I can use a Supervised Learning Algorithm
But I don't know which algorithm would be the best. Should it be multi-class classification? Or multi-label? Can someone help me find the best algorithm?
note: I know choosing an algorithm depends on many factors (such as its accuracy), but at a glance, what type of algorithms should I try?
Thank you
AI: Okay, so without going into details:
Your target values are nominal, discrete values, hence the classification model would solve your problem.
Under classification there is a variety of methods available, for example decision trees, Naive Bayes, etc. You may try all of these and check which gives the better confusion matrix and metrics.
Multi-class means there are multiple classes, such as in your colors.
If I have a book's features as input and I want to predict its genre, and the available categories I can put this book in are fictional, romance, and historical, then these labels make it multi-class because there are multiple genres. A book can belong to any one of the genres.
Multi-label means an instance can belong to more than one class.
Now, if this one book is a fictional romance, then this makes it multi-label, because out of the 3 available categories this one book belongs to 2 (it can be more than 2 as well).
Now, this was just a short answer to get you started; please read up on these keywords for further details.
1.) 1 target class - color
This is the target attribute, color.
2.) Multiple values for that target class - (red, yellow, blue, green, pink)
These are the classes/labels of the target attribute: red, yellow, blue, ... are classes/labels. Since there are more than 2 classes, it is multi-class.
If one input maps to 2 or more colors rather than just 1, then that makes it multi-label.
H: Online Audio annotation tools
I need to find a decent online annotation tool to transcribe audio.
There are some requirements for a potential tool:
I should be able to deliver audio files to a few labelers. I should
be able to track which files went to which labeler.
It should be safe in terms of data storage.
Any suggestions?
AI: I'm not sure if it meets all your criteria (mostly because I'm not sure I understand all your criteria!), but you could have a look at ELAN:
Description: With ELAN a user can add an unlimited number of textual
annotations to audio and/or video recordings. An annotation can be a
sentence, word or gloss, a comment, translation or a description of
any feature observed in the media. Annotations can be created on
multiple layers, called tiers. Tiers can be hierarchically
interconnected. An annotation can either be time-aligned to the media
or it can refer to other existing annotations. The content of
annotations consists of Unicode text and annotation documents are
stored in an XML format (EAF).
See also the Wikipedia page about ELAN. There is also a Wikipedia list of related software, I don't know if this is relevant. |
H: Feature importance in neural networks
Hello, I am using Keras to develop a neural network model and I have data with 45 numerical predictor variables and 2 categorical targets that will each be predicted with a different model. As far as I found, there is no feature importance method in Keras. There are three options I can use: the correlation ratio between the variables, Kendall's rank coefficient values, and lasso regularisation. Which one do you think is suitable for feature selection in neural networks?
AI: Unfortunately, there is no direct way to assess the "importance" of a variable in a Neural Network. One option, very time consuming, consists in removing each variable, one by one, replacing it with random noise, and checking how the performance changes. That will give you an idea on the contribution of a variable.
Alternatively, stick with importance scores of Tree-based models (such as Random
Forests), or with good old statistical analysis. Shapley value regressions are a famous example. |
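A rough sketch of the "replace with noise" idea from above (this is essentially permutation importance; the model, the validation arrays and the integer labels in y_val are all assumptions about your setup):
import numpy as np
from sklearn.metrics import accuracy_score

baseline = accuracy_score(y_val, model.predict(X_val).argmax(axis=1))

importances = {}
for j in range(X_val.shape[1]):
    X_perm = X_val.copy()                     # X_val assumed to be a NumPy array
    np.random.shuffle(X_perm[:, j])           # destroy the information in column j
    score = accuracy_score(y_val, model.predict(X_perm).argmax(axis=1))
    importances[j] = baseline - score         # a big drop means an important feature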
H: Model stacking with instance attributes
From what I have understood about model stacking: the meta estimator is trained to combine the N models' predictions to fit the ground truth. Once trained, it combines the 1st-level outputs to approach the ground truth.
The meta estimator is a model of type :
$ (y_{pred1}, y_{pred2}, y_{pred3})\rightarrow y_{pred-stack}$
So the combination is only based on the values of the 1st-level predictions. However, each line of the stacking data is also linked to other attributes: "Brand", "Model", "Power". Why wouldn't we take those attributes into account to determine the optimal combination? So if model 1 is the best when the brand is "NaN", the meta-model will learn it and redirect every prediction having a NaN brand to model 1.
So the meta estimator I propose is as follow :
$ (y_{pred1}, y_{pred2}, y_{pred3}, \text{brandIsNull}) \rightarrow y_{pred-stack}$
Does this approach exist?
If not, would it be a good or bad idea?
AI: The rationale for stacking learners is to combine the strengths of the individual learners.
On the one hand your idea makes sense: if learners have different strengths, adding some features might help the meta-model detect when to give more importance to a particular learner. It's totally possible that overall the meta-model will work better this way in some cases.
On the other hand, generalizing this idea would often defeat the purpose of stacking:
if we know in advance that a particular model is particularly good in some specific identified cases, then it's likely optimal to switch entirely to this model in such cases (no stacking).
if we don't have any particular knowledge, then this would require to add many/all of the original features to the meta-model, just in case they help. But by adding features it's likely that (1) the model will try to use the features directly, hence collapsing into an individual learner itself, and (2) the model will be more complex, therefore more prone to overfitting and likely not to make the best use of the individual learners' output.
To sum up: the stacking approach relies on "making things simple" for the meta-model, so that it only has to make a call about the answers of the individual learners. This way the meta-model can "focus" solely on optimally using these answers. The more features we add to it, the greater the risk that it will not do its job correctly.
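If you do want to experiment with the idea, scikit-learn's stacking implementation exposes it as the passthrough flag, which feeds the original features to the meta-model alongside the first-level predictions (a sketch with arbitrary base learners and synthetic data; StackingRegressor has the same flag):
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = np.random.rand(200, 5), np.random.randint(0, 2, 200)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("svc", SVC(probability=True))],
    final_estimator=LogisticRegression(),
    passthrough=True)   # False (the default) = meta-model sees only the base predictions
stack.fit(X, y)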
H: Decision tree on big categorical dataset
I would like to use the ID3 algorithm in order to build a decision tree for my dataset. I would like to see which attributes and values lead to the different values of the rating (1 <= x <= 5). Do you think it is possible to do this? I got stuck, but I don't know if it is because it is not possible or because of my own mistakes as a beginner. Can you give me some advice or hints please?
AI: You can see this dataset as a regression problem and not a classification one. The label is ordered, so predicting 2.8 is not a mistake, it is just that it is close to 3 (satisfactory but not there).
I can suggest to start with a regression tree and then round the results to have integers.
Also you can consider this a multiclassification task, but as a starter I would go with regression. |
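A minimal sketch of the regression-tree suggestion (synthetic data standing in for your encoded attributes and ratings):
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(200, 4)              # stand-in for your encoded attributes
y = np.random.randint(1, 6, size=200)   # ratings 1..5

tree = DecisionTreeRegressor(max_depth=5).fit(X, y)

raw_pred = tree.predict(X[:5])                               # e.g. 2.8, 4.1, ...
rating_pred = np.clip(np.round(raw_pred), 1, 5).astype(int)  # round back to 1..5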
H: How do I build a DQN which selects the correct objects in an environment based on the environment state?
I have an environment with 4 objects in it. All of these objects can either be selected or not selected. So the actions taken by my DQN should look like - [1,0,1,1],[0,0,0,1],[1,1,0,0]...etc
Where 1 denotes that the object was selected and 0 denotes that the object was not selected.
The environment state being given as input to the DQN consists of attributes for each of the objects and other factors of the environment. The DQN would get rewards based on the selection it made.
I'm new to reinforcement learning and I've only built DQNs that had to select one action out of the entire action space. But how do I build a DQN or a Reinforcement learning network for this particular environment?
AI: The DQN agent does not need to care what the actions represent, in your case it only needs to make a discrete choice, and it is simple to enumerate the action space. Ignoring the meaning of them for a moment, you have 16 discrete actions. The simplest way to model that is to have a single index discrete action space of 16 actions for the agent which you then map to the selections that you need in order to assess the results. As long as you do this consistently (e.g. take the binary representation of the action index number), this is fine.
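For example, with 4 objects the mapping between the action index (0-15) and the selection vector is just the 4-bit binary representation; a small sketch:
N_OBJECTS = 4

def index_to_selection(action_index):
    # e.g. 11 -> [1, 0, 1, 1]
    return [(action_index >> i) & 1 for i in reversed(range(N_OBJECTS))]

def selection_to_index(selection):
    # e.g. [1, 0, 1, 1] -> 11
    return int("".join(str(bit) for bit in selection), 2)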
It is also possible that using a more direct representation would help, depending on the true nature of the value function. In which case you could use it, provided you model the neural network for $\hat{q}(s,a,\theta)$ with action vector concatenated to state vector in the input, and a single output of the estimated action value for that specific combination. To assess which action to take, you would create a minibatch of 16 inputs, all of which have the same state component, and cover the 16 possible input variations. Then you would pick the combination with the highest estimate and look at the action part of the input vector to discover which action was estimated to be best.
If you are not sure which approach would suit the problem best, you could try both. |
H: Is overfitting okay if test accuracy is high enough?
I am trying to build a binary classifier. I have tried deep neural networks with various different structures and parameters and I was not able to get anything better than
Train set accuracy : 0.70102
Test set accuracy : 0.70001
Then I tried machine learning algorithms such as KNN and Decision Trees etc. And I found that Random forest Classifier from Scikit-learn with n_estimators=100 gave me
Train set accuracy : 1.0
Test set accuracy : 0.924068
I tried adjusting other parameters such as max_depth and criterion, but the decrease in training set accuracy also caused the test set accuracy to drop. For example:
Train set accuracy : 0.82002
Test set accuracy : 0.75222
My question is, is this
Train set accuracy : 1.0
Test set accuracy : 0.924068
acceptable? Even though the model is overfitting, the test set accuracy is better.
AI: If you properly isolate your test set such that it doesn't affect training, you should only look at the test set accuracy. Here are some of my remarks:
Having your model being really good on the train set is not a bad thing in itself. On the contrary, if the test accuracy is identical, you want to pick the model with the better train accuracy.
You want to look at the test accuracy. That is your primary concern. So pick the model that provides the best performance on the test set.
Overfitting is not when your train accuracy is really high (or even 100%). It is when your train accuracy is high and your test accuracy is low.
It is not abnormal that your train accuracy is higher than your test accuracy. After all, your model has an advantage with the train set, since it's been given the correct answers already.
At the end of the day, training a machine learning model is like studying for a test. You (the model) use learning resources such as books, past exams, flash cards etc. (train set) to perform well on a test/exam (test set). Knowing your learning resources perfectly doesn't mean you are overfitting. You would be overfitting if this is all you knew and couldn't perform well on the exam at all. |
H: Chi-square test - how can I say if attributes are correlated?
I am experimenting with a course's theoretical contents on this dataset. After data cleaning, I am trying to use the chi-square test. I wrote the following code:
chisq.test(chocolate$CompanyMaker, chocolate$Rating, simulate.p.value = TRUE)
chisq.test(chocolate$SpecificBeanOriginOrBarName, chocolate$Rating, simulate.p.value = TRUE)
chisq.test(chocolate$CompanyLocation, chocolate$Rating, simulate.p.value = TRUE)
chisq.test(chocolate$BeanType, chocolate$Rating, simulate.p.value = TRUE)
chisq.test(chocolate$BroadBeanOrigin, chocolate$Rating, simulate.p.value = TRUE)
chisq.test(chocolate$CompanyMaker, chocolate$CocoaPerc, simulate.p.value = TRUE)
chisq.test(chocolate$SpecificBeanOriginOrBarName, chocolate$CocoaPerc, simulate.p.value = TRUE)
chisq.test(chocolate$CompanyLocation, chocolate$CocoaPerc, simulate.p.value = TRUE)
chisq.test(chocolate$BeanType, chocolate$CocoaPerc, simulate.p.value = TRUE)
chisq.test(chocolate$BroadBeanOrigin, chocolate$CocoaPerc, simulate.p.value = TRUE)
And these are my results:
RATING
CompanyMarker = 0.29
Specific... = 0.6267
CompanyLocation = 0.1819
BeanType = 0.5372
BroadBeanOrigin = 0.1534
COCOA PERC
CompanyMarker = 0.0004998
Specific... = 0.902
CompanyLocation = 0.04748
BeanType = 0.8136
BroadBeanOrigin = 0.8356
Online, I read about the significance level, but I didn't quite understand it. In particular, is it 0.5 or 0.05? Which values are "ok"?
From what I understood, I should say that CompanyMarker, CompanyLocation and BroadBeanOrigin are related to Rating, while CompanyMarker and CompanyLocation are related to cocoaPercent.
Is this right?
If not, can you write or link me an example or a guide to do it right?
Thanks.
AI: Chi-square is used to determine which of the attributes are most informative. It's used in feature selection.
So, if you have attributes A, B and C and an output Y, we are trying to find out whether Y depends on A, B or C. A, B or C might also be independent of Y, i.e. have no effect on the output.
So chi-square is a statistical test to find out which attributes are independent of the output and can be removed.
A contingency table is created for each attribute, the frequencies/occurrences of its values are recorded, and a p-value above/below a threshold determines whether the attribute is relevant or not.
More about it here-
https://machinelearningmastery.com/chi-squared-test-for-machine-learning/
Online, I read about significance level, but i didn't quite understand
it. In particular, is it at 0.5 or 0.05? Which values are "ok"?
Please read about p-values: the conventional significance level is 0.05 (not 0.5). A p-value below 0.05 means you reject the hypothesis that the two variables are independent, i.e. there is evidence of a relationship; a p-value above 0.05 means there is no such evidence. So in your results, none of the attributes show a significant association with Rating at the 5% level, while CompanyMaker (p ≈ 0.0005) and CompanyLocation (p ≈ 0.047) do with CocoaPerc.
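As an illustration of the decision rule (in Python/scipy here, but the R chisq.test you are already using reports the same p-value), assuming the data is in a DataFrame called chocolate:
import pandas as pd
from scipy.stats import chi2_contingency

# Contingency table of two categorical columns
table = pd.crosstab(chocolate["CompanyMaker"], chocolate["Rating"])
chi2, p_value, dof, expected = chi2_contingency(table)

# p_value < 0.05: evidence that the two attributes are NOT independent
# p_value >= 0.05: no evidence of a relationship at the 5% level
print(p_value, p_value < 0.05)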
H: How to save and load a Q-Learning Agent
I know this may sound nooby, but how do I save a Deep Q-Learning agent's progress? I mean, when I stop at e.g. episode 500, once my agent is trained and I restart (in my case a pygame), my agent should load its data and continue with its progress (not start over from scratch :) ). How can it be done?
If you need more details/clarification, feel free to ask! :)
Thanks for any answer! I really do appreciate it!
AI: To save a model's state, it is enough to save the model's parameters. If you are using Torch, then you can save it as follows torch.save(model.state_dict(), path_to_save).
When you want to resume training with a saved model, you would have to first create the model instance, and then you can use model.load_state_dict(torch.load(path_of_saved_model)) to update the new instance with the parameter values of the saved model.
You can find more reference here: https://pytorch.org/tutorials/beginner/saving_loading_models.html
There will be something similar for Tensorflow or any other framework that you use.
You can also have an evaluation function that evaluates how good your model is after a certain number of iterations, and save the one which performs the best.
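As a concrete sketch for a DQN in PyTorch: besides the network weights you would typically also store the optimizer state and bookkeeping such as the episode counter and exploration rate, so training resumes where it stopped. The agent attribute names below are assumptions about your code:
import torch

def save_checkpoint(agent, path="checkpoint.pt"):
    torch.save({
        "model_state": agent.q_network.state_dict(),
        "optimizer_state": agent.optimizer.state_dict(),
        "epsilon": agent.epsilon,   # current exploration rate
        "episode": agent.episode,   # where to resume counting
    }, path)

def load_checkpoint(agent, path="checkpoint.pt"):
    checkpoint = torch.load(path)
    agent.q_network.load_state_dict(checkpoint["model_state"])
    agent.optimizer.load_state_dict(checkpoint["optimizer_state"])
    agent.epsilon = checkpoint["epsilon"]
    agent.episode = checkpoint["episode"]
If you want a perfect resume, the replay buffer can be pickled and saved separately as well.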
Hope that helps! |
H: Plotting graphic results
I am learning how to plot data. I am trying to obtain this graphic result on my dataset. The thing is, I can't do it. The cool plot is from a Kaggle kernel, whose code is:
ggplot(cocoa,aes(x=pct, y=rating)) +geom_point(aes(colour=factor(location))) +
theme( legend.position="bottom",legend.key.width=unit(0.2,"cm"),legend.key.height=unit(0.2,"cm")) +
xlab("Cocoa Percent(%) ") + ylab("Chocolate Bar Rating") +
ggtitle("Scattor plot of Cocoa Percent vs Chocolate Bar Rating")
When I try to install "ggplot" on RStudio, it says Error in install.packages : object "ggplot" not found. So I tried copying the text into the plot() function. Obviously, it gives me errors on the arguments.
The only thing I managed to do is this:
plot(data=chocolate, Rating~CocoaPerc, xlab="Rating", ylab="Cocoa Percentage", col="brown",
main="Valutazione e Percentuale di cacao")
Can anybody help me achieve the cool result? Or can you advise me on any other "tricks" or guides about data plotting and graphic study of datasets? Thanks
AI: You probably did:
install.packages(ggplot)
This is a common mistake: when installing a package you need quotes, otherwise R interprets the name of the package as a string and since the string isn't defined you get an "object not found" error. The correct version is:
install.packages("ggplot")
(with single or double quotes, it doesn't matter)
Note: the package that actually provides the ggplot() function is called ggplot2, so I would install that instead:
install.packages("ggplot2") |
H: Create new column with rounded values
I am trying to create a new column in this dataset. I would like the rating to be 1, 2, 3, 4 or 5 instead of 1.25, 2.00, 3.75, ..., 5.00.
Can you help me? The only solution I found is this
chocolate$Rat = gsub("1.25", "1", chocolate$Rating)
but I know it is not a good way to work. Is there something faster which can do the work efficiently?
Thanks
AI: The R function is round(). The default is to round to the nearest integer, which is what you want. The function takes vector inputs, so you can apply it to your column in one line instead of using a for loop.
Then you'll tack it on to your original data in any number of ways, for example expanded_data <- data.frame(original_data, round(column)).
H: Apriori gives 0 rules
I am trying to use apriori() on this dataset.
After cleaning it, I made all attributes categorical with as.factor(). Then I used this instruction:
chocolate_rules <- apriori(chocolateApriori, parameter=list(minlen=3, supp=0.1, conf=0.7), appearance=list(default="lhs", rhs=c("RatFactor=1", "RatFactor=2", "RatFactor=3", "RatFactor=4", "RatFactor=5")))
where RatFactor is the categorical, rounded "Rating" from the original dataset (and has only 5 possible values).
It seems to work, but when I call chocolate_rules the result is a set of 0 rules.
Can you explain why? Or can you help me get a different result?
AI: "Apriori" algorithm is used for "Association Rules" learning.
In very simple terms, it tries to determine things like: do people who buy chocolates also buy roses with them? Or do they buy chocolate with ice-cream more often? Or are chocolate + roses + ice-cream always bought together? Or any combination of these.
So, the data containing these purchase transactions is analysed and "frequent item sets" are determined.
From these, "rules" are derived, e.g. a rule can be: chocolates are bought with roses, or roses are bought with chocolates.
Each rule has a confidence and a support.
Now, your dataset doesn't seem to contain transactions; rather, it has attributes describing chocolates and their ratings, which doesn't suit association rule learning since the objective is different.
As for the "learning" itself, if the algorithm returns NO rules, it means there are no rules to be found, at least not at the chosen support and confidence thresholds.
Here is something to get you started:
https://en.wikipedia.org/wiki/Apriori_algorithm
https://en.wikipedia.org/wiki/Association_rule_learning |
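To see the difference concretely, here is a sketch of what transaction-style input looks like, written in Python with the mlxtend package (the R arules package expects the same kind of data: one set of purchased items per row, not one product with attributes per row):
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one purchase (a set of items)
transactions = [
    ["chocolate", "roses"],
    ["chocolate", "ice-cream"],
    ["chocolate", "roses", "ice-cream"],
    ["roses"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent_itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])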
H: When x is a vector, is diag(f'(x)) formal notation for the derivative?
https://web.stanford.edu/class/cs224n/readings/gradient-notes.pdf
Equation (4) of this note says:
$$
\frac{\partial \textbf{z}}{\partial \textbf{x}} = \text{diag}(f'(\textbf{x}))
$$
I know this means turning a vector into a square matrix which has the original vector as its diagonal.
But since $\textbf{x}$ is a vector, $f'(\textbf{x})$ can't be a vector, can it? The derivative of a vector with respect to a vector should be a matrix, right? Is this formal notation for the element-wise derivative of a vector?
AI: You are partially right and partially wrong: the derivative $\frac{\partial \textbf{z}}{\partial \textbf{x}}$ is indeed a matrix (the Jacobian), but $f'(\textbf{x})$ here is exactly what you guessed: the element-wise derivative, i.e. the vector $(f'(x_1), \dots, f'(x_n))$. $\text{diag}(\cdot)$ then turns that vector into a diagonal matrix, and because $f$ is applied element-wise, that diagonal matrix is the full Jacobian. So it is a compact, commonly used shorthand rather than fully formal notation.
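Spelled out entry by entry, since $z_i = f(x_i)$ depends only on $x_i$:
$$
\frac{\partial z_i}{\partial x_j} =
\begin{cases}
f'(x_i) & \text{if } i = j \\
0 & \text{if } i \neq j
\end{cases}
\quad\Longrightarrow\quad
\frac{\partial \textbf{z}}{\partial \textbf{x}} =
\begin{pmatrix}
f'(x_1) & & \\
 & \ddots & \\
 & & f'(x_n)
\end{pmatrix}
= \text{diag}(f'(\textbf{x}))
$$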
H: How does the distribution of data affect model performance?
I am working on the House Prices: Advanced Regression Techniques dataset. I was going through some kernels and noticed that many people converted SalePrice to log(SalePrice).
I can see that taking a log transform reduced the skewness of the data and made it more normal-like. But I was wondering: will it improve my model's performance, or is it useful in any way? If it is, how is a normal distribution of the target variable a catalyst?
AI: Good question. Your interpretation is adequate. Using a logarithmic function reduces the skewness of the target variable. Why does that matter?
Transforming your target via a logarithmic function linearizes it, which is useful for many models that expect linear targets. Scikit-Learn has a page describing this phenomenon: https://scikit-learn.org/stable/auto_examples/compose/plot_transformed_target.html
Important to note
If you modify your targets before training, you should apply the inverse transform at the end of your model to compute your "final" prediction. That way, your performance metrics can be comparable.
Intuitively, imagine that you have a very naive model which returns the average target regardless of the input. If your targets are skewed, you will under-/over-shoot for a majority of the predictions. Because of this, the range of your errors will be greater, which worsens scores such as the Mean Absolute or Mean Squared Error (MAE/MSE). By normalizing your targets, you reduce the range of your error, which ultimately should improve your model directly.
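A minimal sketch of that workflow with scikit-learn (the model choice and variable names are placeholders): TransformedTargetRegressor fits on log1p(y) and automatically applies the inverse transform at prediction time, so the reported errors stay on the original price scale.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge

model = TransformedTargetRegressor(
    regressor=Ridge(),
    func=np.log1p,          # applied to y before fitting
    inverse_func=np.expm1,  # applied to the predictions
)
model.fit(X_train, y_train)   # X_train, y_train: your housing data
pred = model.predict(X_test)  # already back on the original SalePrice scale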