H: Flattening output before calculating metrics
I use scikit-learn to calculate precision, recall and f1 scores which only accept 1D arrays, but my model's outputs are 2D (binary segmentation maps). My question is, is it ok to simply flatten the outputs, or is there some other function I should use to calculate the metrics in my case?
AI: You need to evaluate the fraction of pixels in the output map that were correctly classified, and it doesn't matter whether the map is represented as 2D or 1D. The only requirement is that the mask pixels and the ground-truth pixels stay aligned with each other.
You can find an example about Pixel Accuracy here.
So yes, you can simply flatten (stretch) the 2D maps into 1D vectors. Alternatively, you can implement the computation of precision and recall yourself. In Python it is easy to use constructions such as np.sum(mask), np.sum(target), or np.sum(mask*target) to calculate them directly on the 2D arrays:
import numpy as np  # mask and target are binary (0/1) arrays of the same shape
TP = np.sum(mask * target)                       # predicted 1, ground truth 1
FP = np.sum(mask * np.where(target == 0, 1, 0))  # predicted 1, ground truth 0
FN = np.sum(np.where(mask == 0, 1, 0) * target)  # predicted 0, ground truth 1
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1_measure = 2 * Precision * Recall / (Precision + Recall) |
H: How to disable GPU with TensorFlow?
Using tensorflow-gpu 2.0.0rc0. I want to choose whether it uses the GPU or the CPU.
AI: I've seen some suggestions elsewhere, but they are old and do not apply very well to newer TF versions. What worked for me was this:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
When that variable is set to -1, TF uses the CPU even when a CUDA GPU is available. It is safest to set it before TensorFlow is imported, so that the GPU is never initialized.
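If you prefer a programmatic switch instead of an environment variable, newer TF 2.x versions also expose one (the exact namespace may differ between versions, so treat this as a sketch rather than something verified on 2.0.0rc0):
import tensorflow as tf
tf.config.experimental.set_visible_devices([], 'GPU')  # hide all GPUs from TensorFlow
Either way, make the change before any operation has touched the GPU. |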
H: Decision Tree split error
During the split function of the decision tree, I am getting an assertion error stating:
Error > Assert: type(C) == dict
I couldn't find any errors in the code up until now. Could you please correct it if there is any problem with it?
My code:
def split(X,Y,i):
    def __find_best_split(self):
        best_split = {}
        for column in self.X:
            information_gain, split = self.__find_best_split_for_column(column)
            if split: continue
            if not best_split or best_split["information_gain"] < information_gain:
                best_split = {"split": split, "col": column, "information_gain": information_gain}
        C = (best_split)
        return C
AI: def __find_best_split(X, Y, i):
    best_split = {}
    for column in X:
        information_gain, split = (2, 3)  # self.__find_best_split_for_column(column)
        if not split: continue
        if not best_split or best_split["information_gain"] < information_gain:
            best_split = {"split": split, "col": column, "information_gain": information_gain}
    C = (best_split)
    return C
I ran this code in Python 3.7 and it gave me the correct output: {'split': 3, 'col': 'a', 'information_gain': 2}. Also, as pointed out by Ben Reiniger, it should be if not split: continue rather than if split: continue. In your case, based on the indentation, C should either never get defined or, if indented properly, end up as an empty dictionary. In both scenarios, getting the error assert type(C) == dict does not make sense. |
H: Hinge loss question
Hinge loss is usually defined as
$$L(y,\hat{y}) = max(0,1-y\hat{y}) $$
What I don't understand is why are we comparing zero with $1-y\hat{y}$ instead of some other constant. Why not make it $2-y\hat{y}$, or $\sqrt2-y\hat{y}$ or just take $y\hat{y}$, to check if the observation would be on the right side of the hyperplane? Is there any reason behind '1' as a constant?
Thanks
AI: There's no deep reason. It just needs some positive constant to set the margin, and 1 fits nicely because anything multiplied by 1 stays the same. You would get an equivalent classifier if you replaced it with a different positive number everywhere and adjusted the regularization accordingly. |
H: Delete/Drop only the rows which has all values as NaN in pandas
I have a DataFrame and I need to drop the rows in which all values are NaN.
ID Age Gender
601 21 M
501 NaN F
NaN NaN NaN
The resulting data frame should look like this:
Id Age Gender
601 21 M
501 NaN F
I used df.dropna(axis = 0), but this deletes a row if there is even one NaN value in it. Is there a way to do what I need?
AI: The complete command is this:
df.dropna(axis = 0, how = 'all', inplace = True)
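Applied to the frame from your question, this keeps the two partially filled rows and drops only the all-NaN one (a quick sketch with the values typed in by hand):
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID": [601, 501, np.nan],
                   "Age": [21, np.nan, np.nan],
                   "Gender": ["M", "F", np.nan]})
df.dropna(axis = 0, how = 'all', inplace = True)
print(df)
#       ID   Age Gender
# 0  601.0  21.0      M
# 1  501.0   NaN      F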
You must add the inplace = True argument if you want the dataframe itself to be updated. Alternatively, you could type:
df = df.dropna(axis = 0, how = 'all')
but that's less pythonic IMHO. |
H: Identifying whether a sentence contains information about education
Given a sentence I am trying to classify if the sentence contain information about education. For example:
sentence1 = "Require minimum four years of professional degree."
sentence2 = "no degree required for this job."
As a first step, I built a vocabulary containing a set of keywords for identifying education-related sentences. I was partly successful, until I ran into problems with sentences like this:
sentence3 = "BE or BTech or any degree equivalent to it"
In my vocab 'BE' is also a keyword, since 'BE' stands for Bachelor of Engineering (in India). Because the parsing in my algorithm is done in lower case, 'BE' becomes 'be' in the sentence, and as a result every sentence that contains the word 'be' is recognized as an educational sentence.
I don't have much data to build a strong ML model. If I want to use vocab-based recognition, I need to understand the words next to the keyword in the sentence.
Are there any pre-built models I can import to handle such scenarios, or a labelled dataset available for it?
Are there any methods for accomplishing such a task?
AI: I think what you are looking for is to differentiate between 'be' and 'BE' based on context. Word2Vec is a good place to start, but for telling words apart based on context look at 'Sense2Vec'. The Word Sense Disambiguation literature is also worth a look. I am not sure this is exactly what you need, because 'be' will appear in most sentences, so on its own it won't be an ideal solution. You might have to do some extra data preprocessing, like expanding 'BE' to 'Bachelor of Engineering' based on the sentence.
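A minimal sketch of that kind of preprocessing (the regex and the expansion string are just an illustration, not a full solution):
import re

sentence3 = "BE or BTech or any degree equivalent to it"
# expand the case-sensitive abbreviation *before* lowercasing
expanded = re.sub(r"\bBE\b", "Bachelor of Engineering", sentence3)
print(expanded.lower())
After this step, lowercasing no longer collides with the ordinary verb 'be'. |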
H: Understanding the "Wide" part of Google's wide and deep
Google's wide and deep recommender model sounds really cool, but I'm struggling to believe I'm grasping the wide section right so wanted to check my understanding.
Their paper says the following:
The wide component consists
of the cross-product transformation of user installed apps
and impression apps
Each example
corresponds to one impression
Let's say we have 5 apps, A through E. My understanding is that the cross-product transformation would represent that as 20 columns, representing each possible combination of installed and impressed app (making 25, but then presumably the 5 "matching" cross-products like and(installed=App_A, impressed=App_A) would be removed, because Google is presumably smart enough not to impress apps the user already has). Let's also say we have 3 users, called X - Z. X has installed apps A and C, and is shown apps B and D. Y has installed app B and is shown A and E. Z has installed apps A, C and D and is shown apps B and E. With that dataset, the cross-product transformation should look (I think) like this:
My question is: is my understanding of the transformation there correct? If so, that's going to be one gigantic matrix in fairly short order, particularly given they have over a billion users and a million different apps.
AI: Indeed you will have a large and very sparse matrix. The two important concepts in this are:
feature cross: you have two categorical features (here impressed and installed) and you take the Cartesian product of both to create a new feature (as you did here). The issue is that crosses generate sparse matrices (as you can see in your example).
Neural networks perform well with dense, correlated features, while linear models work better with sparse, weakly correlated features. Your cross features are sparse with low correlation, which leads to the second concept:
wide and deep network: since your dataset is a mix between dense features (continuous features) and sparse (cross) you can separate your network in two: the sparse will be directly sent to your output and thus treated as they would be in a linear model while your dense features will go through several hidden layers before going to the output.
The wide part behaves just like a linear model (you can also use sparse matrix calculus to speed it up) and the deep part like a traditional neural network and you get the best of both worlds. Given the size of the dataset (billion users x million apps), treating part of the data as a sparse input will speed up training and inference.
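To make the sparsity concrete, here is a small sketch of the cross-product transformation for the toy example in your question (the column naming is mine, not Google's actual pipeline):
import pandas as pd

apps = list("ABCDE")
installed = {"X": ["A", "C"], "Y": ["B"], "Z": ["A", "C", "D"]}
impressed = {"X": ["B", "D"], "Y": ["A", "E"], "Z": ["B", "E"]}

# one column per (installed, impressed) pair, skipping the 5 identical pairs -> 20 columns
pairs = [(i, j) for i in apps for j in apps if i != j]
rows = {u: [int(i in installed[u] and j in impressed[u]) for i, j in pairs]
        for u in installed}
cross = pd.DataFrame.from_dict(rows, orient="index",
                               columns=[f"and(installed={i}, impressed={j})" for i, j in pairs])
print(cross.sum(axis=1))  # each user lights up only a handful of the 20 columns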
I went a bit further than what you asked but I figured it could be of help. |
H: Why the Logistic regression model trained with tensorflow performed so poor
I trained a logistic regression model with tensorflow but the accuracy of the model was poor (accuracy = 0.68). The model was trained on a simulated dataset and the result should be very good. Is there something wrong with the code?
#simulated dataSet
sim_data <- function(n=2000){
library(dummies)
age <- round(abs(rnorm(n,mean = 60, sd = 20)))
lac <- round(abs(rnorm(n,3,1)),1)
wbc <- round(abs(rnorm(n,10,3)),1)
sex <- factor(rbinom(n,size = 1,prob = 0.6),
labels = c("Female","Male"));
type <- as.factor(sample(c("Med","Emerg","Surg"),
size = n,replace = T,
prob = c(0.4,0.4,0.2)))
linPred <- cbind(1,age,lac,wbc,dummy(sex)[,-1],
dummy(type)[,-1]) %*%
c(-30,0.2,4,1,-2,3,-3)
pi <- 1/(1+exp(-linPred))
mort <- factor(rbinom(n,size = 1, prob = pi),
labels = c("Alive","Died"))
dat <- data.frame(age=age,lac=lac,wbc=wbc,
sex=sex,type=type,
mort = mort)
return(dat)
}
set.seed(123)
dat <- sim_data()
dat_test <- sim_data(n=1000)
#logistic regression in conventional method
mod <- glm(mort~.,data = dat,family = "binomial")
library(tableone)
ShowRegTable(mod)
#diagnotic accuracy
pred <- predict.glm(mod,newdata = dat_test,
type = "response")
library(pROC)
roc(response = dat_test$mort,predictor = pred,ci=T)
predBi <- pred >= 0.5
crossTab <- table(predBi,dat_test$mort)
(crossTab[1]+crossTab[4])/sum(crossTab)
#choose different cutoff for the accuracy
DTaccuracy <- data.frame()
for (cutoff in seq(0,1,by = 0.01)) {
predBi <- pred >= cutoff;
crossTab <- table(predBi,dat_test$mort)
accuracy = (crossTab[1]+crossTab[4])/sum(crossTab)
DTaccuracy <- rbind(DTaccuracy,c(accuracy,cutoff))
}
names(DTaccuracy) <- c('Accuracy','Cutoff')
qplot(x=Cutoff, y = Accuracy, data = DTaccuracy)
#tensorflow method
library(caret)
y = with(dat, model.matrix(~ mort + 0))
x = model.matrix(~.,dat[,!names(dat)%in%"mort"])
trainIndex = createDataPartition(1:nrow(x),
p=0.7, list=FALSE,times=1)
x_train = x[trainIndex,]
x_test = x[-trainIndex,]
y_train = y[trainIndex,]
y_test = y[-trainIndex,]
# Hyper-parameters
epochs = 30 # Total number of training epochs
batch_size = 30 # Training batch size
display_freq = 10 # Frequency of displaying the training results
learning_rate = 0.1 # The optimization initial learning rate
#Then we will define the placeholders for features and labels:
library(tensorflow)
X <- tf$placeholder(tf$float32, shape(NULL, ncol(x)),
name = "X")
Y = tf$placeholder(tf$float32, shape(NULL, 2L), name = "Y")
#we will define the parameters. We will randomly initialize the weights with mean “0” and a standard deviation of “1.” We will initialize bias to “0.”
W = tf$Variable(tf$random_normal(shape(ncol(x),2L),
stddev = 1.0),
name = "weghts")
b = tf$Variable(tf$zeros(shape(2L)), name = "bias")
#Then we will compute the logit.
logits = tf$add(tf$matmul(X, W), b)
pred = tf$nn$sigmoid(logits)
#The next step is to define the loss function. We will use sigmoid cross entropy with logits as a loss function.
entropy = tf$nn$sigmoid_cross_entropy_with_logits(labels = Y,
logits = logits)
loss = tf$reduce_mean(entropy,name = "loss")
#The last step of the model composition is to define the training op. We will use a gradient descent with a learning rate 0.1 to minimize cost.
optimizer = tf$train$GradientDescentOptimizer(learning_rate = learning_rate)$minimize(loss)
init_op = tf$global_variables_initializer()
#Now that we have trained the model, let’s evaluate it:
correct_prediction <- tf$equal(tf$argmax(logits, 1L), tf$argmax(Y, 1L),
name = "correct_pred")
accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32),
name = "accuracy")
#Having structured the graph, let’s execute it:
with(tf$Session() %as% sess, {
sess$run(init_op)
for (i in 1:5000) {
sess$run(optimizer,
feed_dict = dict(X=x_train, Y=y_train))
}
sess$run(accuracy,
feed_dict=dict(X = x_test, Y = y_test))
})
The accuracy obtained by the glm() method is quite good (accuracy = 0.95), which is as expected; however, the accuracy was only 0.68 with the TensorFlow method. How can I solve the problem?
AI: Your learning rate is too high. In TensorFlow models I usually set a learning rate between 0.001 and 0.00005 to achieve acceptable results. |
H: sklearn.accuracy_score(y_test, y_predict) vs np.mean(y_predict == y_test)
What is the difference between these two methods for finding model accuracy?
I have used both methods in Python 3 and I normally get identical results. However, in a few cases I get completely different results, so I am trying to figure out the possible reason for this.
AI: The two methods can act differently depending on what you pass them.
np.mean(y_test==y_pred): if y_test and y_pred are plain Python lists, y_test == y_pred compares the two lists as whole objects and returns a single True/False, so the mean is either 1.0 or 0.0.
accuracy_score(y_test, y_pred) always compares the elements pairwise, counts the positions where they match, and divides by the total number of elements.
For example-
import numpy as np
from sklearn.metrics import accuracy_score
y_test = [2,2,3]
y_pred = [2,2,1]
print(accuracy_score( y_test, y_pred))
print(np.mean(y_test==y_pred))
This code returns -
0.6666666666666666
0.0
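If you convert the same toy data to NumPy arrays, == becomes an element-wise comparison and the two methods agree (reusing the imports from the snippet above):
y_test = np.array([2, 2, 3])
y_pred = np.array([2, 2, 1])
print(accuracy_score(y_test, y_pred))  # 0.6666666666666666
print(np.mean(y_test == y_pred))       # 0.6666666666666666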
So with NumPy arrays both methods give the same value; with plain lists they only coincide when the whole lists happen to be identical, or when you test a single element at a time. You can find more details here on accuracy_score and np.mean.
Also, accuracy_score is only for classification data, as mentioned in the first line here. |
H: Binary classfication vs One-class classification
Why do we need samples of both classes for the training of binary classification algorithms, if one-class algorithms can do the job with only samples from one class?
I know that one-class algorithms (like one-class svm) were proposed with the absence of negative data in mind and that they seek to find decision boundaries that separate positive samples (A) from negative ones (Not A).
Hence the traditional binary classification problem (between (A) and (B) for example) can be formulated as a classification of (A) and (not A = B).
Is it about better classification results or am I missing something?
Thank you in advance
AI: Binary classification is used when the requirement is to separate the data into two classes. If you cannot capture the data in two classes because you only have (or only need) one, you go for one-class classification.
you can check this link for better explanation
http://rvlasveld.github.io/blog/2013/07/12/introduction-to-one-class-support-vector-machines/
If you take binary classification, the SVM tries to find the best possible separation between A and B. If there is only one class A, the model tries to create a boundary around it and classifies accordingly. Take patient disease classification for example: positive patients have symptoms t1, t2, t3, t4, t5, while a negative patient has t1, t2, t7. In this case it is difficult to classify with a one-class model, because a patient having only t1 and t2 gets classified as positive through proximity to the positive class. The second label gives you more information and therefore better classification.
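As a small sketch of the contrast on toy data (the class locations and parameters are made up, not from your problem):
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.RandomState(0)
A = rng.normal(0, 1, size=(100, 2))   # class A
B = rng.normal(4, 1, size=(100, 2))   # class B, i.e. "not A"

# binary SVM: needs labelled samples of both classes
clf = SVC(gamma='scale').fit(np.vstack([A, B]), np.r_[np.ones(100), np.zeros(100)])

# one-class SVM: trained on A only; anything outside its boundary is predicted as -1
occ = OneClassSVM(gamma='scale', nu=0.05).fit(A)
print(clf.predict(B[:3]), occ.predict(B[:3]))
The binary model sees both classes at training time, while the one-class model only ever sees A and treats everything else as an outlier, which is usually less precise when you actually have samples of both classes. |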
H: CNNs: understanding feature visualization Channel Objectives (SOLVED)
I'm trying to follow a paper on deep NN feature visualization using beautiful examples from the GoogLeNet/Inception CNN. see: https://distill.pub/2017/feature-visualization/
The authors use backpropagation to optimize an input image to maximizes the activation of a particular (Inception) neuron/feature, or entire channel.
For example, Inception Layer 4a, Unit 11 is feature 12 of 192 from the 1x1 convolution path of Inception Layer 4a before filter concatenation (see: https://distill.pub/2017/feature-visualization/appendix/googlenet/4a.html#4a-11).
For Layer 4a 1x1 convolution the shapes are:
# Layer 4a
input: [14,14,480]
output: [14,14,512]
# 1x1 convolution
kernel: [1,1,480] # total 192 kernels
output: [14,14,192] # channels [0..192] of Layer4a output
Layer4a slice: tf.slice( layer4a_output, (0,0,0), (14,14,192) )
# Layer4a Unit 11
layer4a_unit_11 = tf.slice(layer4a_output, (11,0,0), (1,1,1)) # numpy [11,1,1]
In a related article, the authors state (see: https://distill.pub/2018/building-blocks/) ,
"We can think of each layer’s learned representation as a three-dimensional cube. Each cell in the cube is an activation, or the amount a neuron fires. The x- and y-axes correspond to positions in the image, and the z-axis is the channel (or detector) being run."
Furthermore, they offer a diagram which super-imposes the cube of Layer4a over the input image with the (x,y) axis overlaying the image itself.
I understand that the Neuron Objective is the input image that produces the highest activation for Layer 4a, Unit 11 which can be found at index=[11,0,0] of Layer 4a output=[14,14,512]. In this case, (x,y)=[11,0]. Each [1,1,480] kernel generates a feature map of shape=[14,14,1] with a total of 196 activations.
kernel => channel or feature map and activation => neuron or feature.
Question
But what is the intuitive concept of the (Positive) Channel Objective? In this example, Unit 11 sits in the same channel as 14x14=196 other neurons, but the channel objectives for all these neurons are different. If the optimized image for the Channel Objective maximizes the sum of neuron activations for channel 0, (e.g. slice=[14,14,0] of 192 1x1 convolutions or 512 total layer 4a channels) wouldn't it be the same for all 192 neurons in the same channel? Obviously, by the examples we see this is not true.
How does the Channel Objective relate to the Neuron Objective for Unit 11?
ANSWER
I understand that the Neuron Objective is the input image that produces the highest activation for Layer 4a, Unit 11 which can be found at index=[11,0,0] of Layer 4a output=[14,14,512].
This is where my understanding went off the rails. Layer 4a Unit 11 is actually channel/feature 12 of 192 for the 1x1 convolution. It is NOT neuron 12 of the 196 neurons in channel 1. My fault for confusing 192 channels with 196 neurons/channel.
Instead, as mentioned the in answer, Unit 11 is a single neuron in channel 11, usually located near the center, e.g. Neuron Objective is (x,y,z)=(7,7,11) and Channel Objective is (x,y,z)=(:,:,11)
AI: As you noticed, the activations on mixed4a for a normal sized input is [14,14,512]. Intuitively, these dimensions correspond to [x_position, y_position, channel].
When we talk about "channel" and "neuron" objectives for visualization of channel 11, we mean:
Neuron objective: maximize activations[7, 7, 11]
maximize channel 11 in one position (generally middle)
Channel objective: maximize tf.reduce_sum(activations[:, :, 11])
maximize channel 11 everywhere
This diagram from feature visualization also goes over the difference: |
H: Accuracy noise patterns during model training
I'm training a logistic regression model on a small dataset. I have about 1300 samples that I split into a training and a testing set (70% and 30% respectively).
The training seems ok, however when I plot the accuracy of my model w.r.t. the epoch, some repeating noisy patterns appear at the end well after the accuracy is stabilised (after 800 epochs, see images below).
The training is done with an Adam optimizer. I'm using a learning rate of 0.04 and a weight decay of 0.07 that I found after doing a random search.
Is it something that may happen when training and is without consequence or does it reflect an issue with my data / training / software implementation ?
AI: It is likely that during training you are reaching a local minimum; since you keep optimizing at every epoch, you get pushed off that point and then, after the 800 epochs, you reach it again.
Look at the image below: imagine you reach the blue point, but you keep going, looking for another point; you will wander off your track but eventually come back to it. The pattern will be clearer in logistic regression, since theoretically you are applying gradient descent to a simpler function than a complex neural network. |
H: how to create multiple plot from a panda Dataframe
I want to plot multiple plots.
The data is stored in a pandas DataFrame and each row should be a separate plot.
Each row has an ID (ZRD_ID), which doesn't matter, and a date (TAG), plus 24 values to be plotted.
import pandas as pd
import numpy as np
df = pd.read_csv('./Result_set_edited.csv')
df = df.drop("ZRD_ID", axis=1).drop("TAG", axis=1)
x = df.iloc[[0]]
print(df.head())
returns:
W01 W02 W03 W04 W05 ... W20 W21 W22 W23 W24
0 72616 156076 141025 72629 72631 ... 0 0 0 0 0
1 67114 171650 139920 67291 67292 ... 172924 93511 72445 72445 72445
2 66893 161919 134041 66913 66911 ... 166244 86672 67114 67120 67124
3 66603 171297 134227 66615 66631 ... 166078 86622 66871 66877 66879
4 66759 167198 133523 67126 67128 ... 163999 74525 66562 66568 66574
To start easier, since I am really new to this, I thought of plotting just the first row on its own.
Since the columns are named 'W01', 'W02', ..., 'W24' I thought I could use them as labels for the x-axis.
I just didn't find a way to do so, since it's the header of the df I guess.
So I created a new array and tried to plot it against the first row of my DataFrame:
y = np.arange(0,24,1)
y.reshape(1,24)
print(y)
print(df.iloc[[0]].values)
plt.plot(y, x)
plt.show()
when trying to plot my values I get the following Error:
ValueError: x and y must have same first dimension, but have shapes
(24,) and (1, 24)
Thanks for any help on how to fix the error for plotting the first line.
PS: I would appreciate some hints on how to improve my question, since it is my first one.
Cheers
AI: If I correctly got what you meant: you want to plot the first row of all the columns?
I guess this could work.
import matplotlib.pyplot as plt
x = range(0, 24)
y = df.iloc[0, :].values   # first row, all columns
xticks = df.columns.tolist()
I'm assuming you want a line plot
plt.plot(x, y)
plt.xticks(x, xticks)  # put the column names at the x positions
plt.show()
The '0' signifies the first row and ':' for all the columns. |
H: Why does accuracy remain the same
I'm new to machine learning and I'm trying to create a simple model myself. The idea is to train a model that predicts whether a value is above or below some threshold.
I generate some random values below and above the threshold and create the model:
import os
import random
import numpy as np
from keras import Sequential
from keras.layers import Dense
from random import shuffle
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
threshold = 50000
samples = 5000
train_data = []
for i in range(0, samples):
train_data.append([random.randrange(0, threshold), 0])
train_data.append([random.randrange(threshold, 2 * threshold), 1])
data_set = np.array(train_data)
shuffle(data_set)
input_value = data_set[:, 0:1]
expected_result = data_set[:, 1]
model = Sequential()
model.add(Dense(3, input_dim=1, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='relu'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(input_value, expected_result, epochs=10, batch_size=5)
_, accuracy = model.evaluate(input_value, expected_result)
print('Accuracy: %.2f' % (accuracy*100))
The problem is that accuracy is always about 0.5 and if I check the training process I see something like this.
Epoch 1/10
5/10000 [..............................] - ETA: 8:07 - loss: 6.4472 - acc: 0.6000
230/10000 [..............................] - ETA: 12s - loss: 7.4283 - acc: 0.5391
455/10000 [>.............................] - ETA: 7s - loss: 7.8642 - acc: 0.5121
675/10000 [=>............................] - ETA: 5s - loss: 7.9277 - acc: 0.5081
890/10000 [=>............................] - ETA: 4s - loss: 7.7693 - acc: 0.5180
1095/10000 [==>...........................] - ETA: 4s - loss: 7.9045 - acc: 0.5096
1305/10000 [==>...........................] - ETA: 3s - loss: 7.8306 - acc: 0.5142
1515/10000 [===>..........................] - ETA: 3s - loss: 7.7558 - acc: 0.5188
1730/10000 [====>.........................] - ETA: 3s - loss: 7.7516 - acc: 0.5191
1920/10000 [====>.........................] - ETA: 2s - loss: 7.7149 - acc: 0.5214
2120/10000 [=====>........................] - ETA: 2s - loss: 7.7245 - acc: 0.5208
2340/10000 [======>.......................] - ETA: 2s - loss: 7.7422 - acc: 0.5197
2565/10000 [======>.......................] - ETA: 2s - loss: 7.7668 - acc: 0.5181
2785/10000 [=======>......................] - ETA: 2s - loss: 7.8015 - acc: 0.5160
3000/10000 [========>.....................] - ETA: 2s - loss: 7.9032 - acc: 0.5097
3210/10000 [========>.....................] - ETA: 2s - loss: 7.9134 - acc: 0.5090
3435/10000 [=========>....................] - ETA: 2s - loss: 7.9629 - acc: 0.5060
3660/10000 [=========>....................] - ETA: 1s - loss: 7.9578 - acc: 0.5063
3875/10000 [==========>...................] - ETA: 1s - loss: 7.9696 - acc: 0.5055
4085/10000 [===========>..................] - ETA: 1s - loss: 7.9861 - acc: 0.5045
4305/10000 [===========>..................] - ETA: 1s - loss: 7.9823 - acc: 0.5048
4530/10000 [============>.................] - ETA: 1s - loss: 7.9737 - acc: 0.5053
4735/10000 [=============>................] - ETA: 1s - loss: 8.0063 - acc: 0.5033
4945/10000 [=============>................] - ETA: 1s - loss: 7.9955 - acc: 0.5039
5160/10000 [==============>...............] - ETA: 1s - loss: 7.9935 - acc: 0.5041
5380/10000 [===============>..............] - ETA: 1s - loss: 7.9991 - acc: 0.5037
5605/10000 [===============>..............] - ETA: 1s - loss: 8.0432 - acc: 0.5010
5805/10000 [================>.............] - ETA: 1s - loss: 8.0466 - acc: 0.5008
6020/10000 [=================>............] - ETA: 1s - loss: 8.0189 - acc: 0.5025
6240/10000 [=================>............] - ETA: 1s - loss: 8.0151 - acc: 0.5027
6470/10000 [==================>...........] - ETA: 0s - loss: 7.9843 - acc: 0.5046
6695/10000 [===================>..........] - ETA: 0s - loss: 7.9760 - acc: 0.5052
6915/10000 [===================>..........] - ETA: 0s - loss: 7.9926 - acc: 0.5041
7140/10000 [====================>.........] - ETA: 0s - loss: 8.0004 - acc: 0.5036
7380/10000 [=====================>........] - ETA: 0s - loss: 7.9848 - acc: 0.5046
7595/10000 [=====================>........] - ETA: 0s - loss: 7.9752 - acc: 0.5052
7805/10000 [======================>.......] - ETA: 0s - loss: 7.9568 - acc: 0.5063
8035/10000 [=======================>......] - ETA: 0s - loss: 7.9557 - acc: 0.5064
8275/10000 [=======================>......] - ETA: 0s - loss: 7.9802 - acc: 0.5049
8515/10000 [========================>.....] - ETA: 0s - loss: 7.9748 - acc: 0.5052
8730/10000 [=========================>....] - ETA: 0s - loss: 7.9944 - acc: 0.5040
8955/10000 [=========================>....] - ETA: 0s - loss: 7.9934 - acc: 0.5041
9190/10000 [==========================>...] - ETA: 0s - loss: 7.9854 - acc: 0.5046
9430/10000 [===========================>..] - ETA: 0s - loss: 7.9975 - acc: 0.5038
9650/10000 [===========================>..] - ETA: 0s - loss: 8.0190 - acc: 0.5025
9865/10000 [============================>.] - ETA: 0s - loss: 8.0337 - acc: 0.5016
10000/10000 [==============================] - 3s 255us/step - loss: 8.0397 - acc: 0.5012
I tried to change the layers count and the number of nodes in the layer but the result is basically the same. What am I missing to make it work?
AI: You have two separate problems going on.
Use sigmoid
First, when performing a binary classification problem, you should set the activation of your final layer to sigmoid (or softmax, which is equivalent in the binary classification case).
Scale your data
Second, when using neural networks, it's important to make sure your data is of a "reasonable" scale. A "reasonable" scale is usually something in the range of a 0-mean, unit-variance normal distribution.
Effect of fixing these issues
Let's look at the effect of fixing these issues after 5 epochs.
If I change your last layer to sigmoid:
model.add(Dense(3, input_dim=1, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
I get ~94% accuracy:
Epoch 1/5
10000/10000 [==============================] - 1s 127us/sample - loss: 107.2031 - acc: 0.5594
Epoch 2/5
10000/10000 [==============================] - 1s 118us/sample - loss: 0.8730 - acc: 0.6688
Epoch 3/5
10000/10000 [==============================] - 1s 118us/sample - loss: 0.6432 - acc: 0.7455
Epoch 4/5
10000/10000 [==============================] - 1s 119us/sample - loss: 0.5688 - acc: 0.7899
Epoch 5/5
10000/10000 [==============================] - 1s 119us/sample - loss: 0.3340 - acc: 0.8631
10000/10000 [==============================] - 0s 10us/sample - loss: 0.2087 - acc: 0.9440
Accuracy: 94.40
If I change keep the last activation as sigmoid, but also scale your input values to be between 0 and 1:
train_data.append([random.randrange(0, threshold) / 100000, 0])
train_data.append([random.randrange(threshold, 2 * threshold) / 100000, 1])
Then I get 99.8% accuracy.
Epoch 1/5
10000/10000 [==============================] - 1s 128us/sample - loss: 0.5206 - acc: 0.7013
Epoch 2/5
10000/10000 [==============================] - 1s 114us/sample - loss: 0.2051 - acc: 0.9732
Epoch 3/5
10000/10000 [==============================] - 1s 115us/sample - loss: 0.1083 - acc: 0.9943
Epoch 4/5
10000/10000 [==============================] - 1s 116us/sample - loss: 0.0697 - acc: 0.9953
Epoch 5/5
10000/10000 [==============================] - 1s 116us/sample - loss: 0.0512 - acc: 0.9967
10000/10000 [==============================] - 0s 10us/sample - loss: 0.0450 - acc: 0.9980
Accuracy: 99.80 |
H: Multi-Output Regression with neural network in Keras
I have an .xlsx Excel file with one input and 2 output columns. There are some coordinates and outputs in that file such as:
x= 10 y1=15 y2=20
x= 20 y1=14 y2=22 ...
I am trying to do that regression using tensorflow, but somehow I can't manage to do it. I am leaving my code here; I would appreciate it if someone could help! I also have test data ready as well.
training_data = pd.read_excel(...\training_data.xlsx',sheet_name="i1-o2")
training_data_X = training_data['i1']
training_data_Y = training_data[['o1','o2']]
testing_data = data = pd.read_excel(....\testing_data.xlsx',sheet_name="i1-o2")
testing_data_X = testing_data['i1']
testing_data_Y = testing_data[['o1','o2']]
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(6, activation='linear')
])
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['accuracy'])
model.fit(training_data_X,training_data_Y,epochs=10,batch_size=100)
val_loss,val_acc = model.evaluate(testing_data_X,testing_data_Y)
print(val_loss,val_acc)
AI: I found some mistakes:
input data must be numpy objects, not pandas
this Network has 6 output nodes, not 2
the number of layers is completely exaggerated IMHO
the Flatten() layer at the beginning is not correct
the way you called ReLU's is not correct
This should be enough:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.activations import relu

model = Sequential([
    Dense(128, activation = relu),
    Dense(128, activation = relu),
    Dense(2, activation = None)
])
Check if the loss works at this point. Alternatively, you need to write your own custom loss function using Keras backend functions. |
H: Python Pandas aggregation
I have a data set recording daily values of several metrics say R1, R2, for example:
Date Metric cur_val Mgmt_Lmt
1/1/2019 R1 38.94927536 100
1/2/2019 R1 38.83188406 100
1/3/2019 R1 38.71449275 100
1/4/2019 R1 38.59710145 100
1/5/2019 R1 38.47971014 100
1/6/2019 R1 38.36231884 100
I am trying to get the data in the format below: basically aggregated by month and year.
MGMT_LmtAgg Jan-19 Feb-19 Mar-19
100 min 80 80 80
R1 max 90 90 90
avg 85 85 85
75 min 80 80 80
R2 max 90 90 90
avg 85 85 85
I am trying df.groupby([pd.Grouper(freq='M'), 'Metric']).agg({'cur_val': ['sum', 'mean', 'min', 'last']}).T
this gives me R1 and R2 as columns, but I want to see them as rows. Could you help? I am a beginner in Python. Thanks!
AI: I think the way you want to see your data is best achieved by using pivot tables.
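(One assumption in the snippet below: pd.Grouper(freq='M') needs Date to be a DatetimeIndex, which your own groupby suggests it already is. If it were still a plain column, you would first do something like:)
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')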
Try the following code:
pd.pivot_table(df, index=['Metric', 'Mgmt_Lmt'],
columns=pd.Grouper(freq='M'),
values='cur_val',
aggfunc=['sum', 'mean', 'min', 'last'])
This will index your table by metric and mgmt and give Month-Year as columns:
sum mean min last
Date 2019-01-31 2019-01-31 2019-01-31 2019-01-31
Metric Mgmt_Lmt
R1 100 155.092754 38.773188 38.597101 38.597101
R2 100 76.842029 38.421014 38.362319 38.362319
By doing df_pivoted.stack(level=0) you will have:
Date 2019-01-31
Metric Mgmt_Lmt
R1 100 sum 155.092754
mean 38.773188
min 38.597101
last 38.597101
R2 100 sum 76.842029
mean 38.421014
min 38.362319
last 38.362319 |
H: Finding correlation between MNIST digits
What would be the correct way to calculate the correlation between, say, digit '1' and digit '7' images from MNIST? Would taking the average values of all digit '1' pixels and digit '7' pixels and computing the correlation between those be correct?
AI: You can't. Correlation is a measure of how one variable changes as another variable changes: one goes up by a certain amount, the other usually goes up too (positive correlation), and so on.
What you can calculate is how similar or how different images of 1s are compared to images of 7s. You could average all images of each digit by summing them, which gives one image with very high pixel values, and then dividing all pixel values by the number of images that you summed.
Then you can represent the average 1 and the average 7 as long vectors of 784 pixels each and calculate the distance between these two as a measure of their similarity.
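A rough sketch of that idea, assuming the standard Keras MNIST loader and plain Euclidean distance:
import numpy as np
from keras.datasets import mnist

(X, y), _ = mnist.load_data()
avg_1 = X[y == 1].mean(axis=0).reshape(-1)   # average '1' as a 784-dimensional vector
avg_7 = X[y == 7].mean(axis=0).reshape(-1)   # average '7' as a 784-dimensional vector
print(np.linalg.norm(avg_1 - avg_7))         # smaller distance = more similar on average
You could equally use cosine similarity between the two vectors instead of a raw distance. |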
H: Keras ANN Trained Model's Accuracy change on prediction
I have trained an ANN Binary classifier using Keras. It gives 90% accuracy.
After testing, when I predict on the same data again but pass only one class, the accuracy decreases to 40%.
I have figured out that if I pass mixed classes while predicting it gives me around 90% accuracy, and if I pass data points of only one class the accuracy decreases. As I add data points of the other class, the accuracy increases again.
Long story short:
CASE 1:
100 samples from class 0,
100 sample from class 1,
on predicting using trained model Accuracy = 90%
CASE 2:
The same 100 samples from class 0 passed to the same trained model give me 40% accuracy.
Why does the accuracy change?
EDIT: I'm performing standardization again before every prediction, which affects the predictions. How should I handle this case?
Any suggestions would be much appreciated. Thanks
AI: This problem is solved by using the same mean and standard deviation that were used to standardize the training samples, rather than re-computing them on the prediction data.
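A minimal sketch of that fix with scikit-learn's StandardScaler (the variable names are placeholders, not your actual code):
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_train = rng.normal(10, 3, size=(200, 5))        # stand-in for the real training features
X_class0_only = rng.normal(10, 3, size=(100, 5))  # stand-in for the class-0-only batch

scaler = StandardScaler().fit(X_train)            # mean/std estimated on the training data only
X_train_std = scaler.transform(X_train)           # what the Keras model is trained on
X_new_std = scaler.transform(X_class0_only)       # reuse the SAME scaler at prediction time
# predictions = model.predict(X_new_std)          # never call scaler.fit() on prediction data
Persisting the fitted scaler (e.g. with joblib) alongside the model keeps the two in sync. |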
H: Fine-tuning a Pre-trained model (Resnet50) do I need to validate it or just train it?
When fine-tuning a Resnet50 model should I do the standard train validation split and train like any normal CNN or should I just do the training and not the validation if I am going to use this model as a base model in another more advanced model?
I think I will need to do the train/val split but wanted to get a good answer for this from other people as I have seen on Github people just training with no validation.
Thanks for all the input.
AI: When fine-tuning an already trained Resnet50 model (or in any other transfer-learning setting), you have to train the model on your dataset, but not from scratch: just remove the last few layers of the pre-trained model, freeze the remaining layers, and add your own head on top. Then you should train the model as you normally would, with a validation split or cross-validation.
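A rough sketch of that setup in Keras (layer sizes, epochs and num_classes are placeholders, not a prescription):
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

num_classes = 10                                        # placeholder for your number of classes
base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
base.trainable = False                                  # freeze the pre-trained layers

model = models.Sequential([
    base,
    layers.Dense(256, activation='relu'),
    layers.Dense(num_classes, activation='softmax')     # new head trained on your data
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, validation_split=0.2, epochs=5)   # x_train/y_train are your data
The validation split (or a separate validation set) is what tells you when the fine-tuning starts to overfit. |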
H: Data Reshaping for CNN using Keras
I'm a beginner in Keras. I've loaded MNIST dataset in Keras and checked it's dimension.
The code is
from keras.datasets import mnist
# load data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
print("Shape: ", X_train[0].shape)
And the output is
(60000, 28, 28, 1)
(60000, 10, 2, 2, 2, 2)
(10000, 28, 28, 1)
(10000, 10, 2, 2)
Shape: (28, 28, 1)
As X_train and X_test are already in the shape (#samples, width, height, #channels), do we still need reshaping? Why?
The tutorial I'm following use the following reshaping code:
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
My second question: why is .astype('float32') used in the code?
Lastly, I could not understand the output of print(y_train.shape) and print(y_test.shape).
Please suggest.
I've already read Reshaping of data for deep learning using Keras; however, my doubts remain.
AI: Answer 1
The reason for reshaping is to ensure that the input data fed to the model is in the correct shape. But you could say that, when the arrays already have that shape, the reshape is a duplication of effort.
Answer 2
The reason for converting to float is so that the images can later be normalized to the 0-1 range without loss of information.
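For example, a common follow-up step (not shown in the tutorial snippet above):
X_train = X_train.astype('float32') / 255.0   # uint8 pixels 0-255 -> floats in [0, 1]
X_test = X_test.astype('float32') / 255.0
This keeps all of the pixel information, just rescaled to values between 0 and 1. |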
H: how to pass parameters over sklearn pipeline's stages?
I'm working on a deep neural model for text classification using Keras. To fine-tune some hyperparameters I'm using the Keras wrappers for the Scikit-Learn API.
So I built a sklearn Pipeline for that:
def create_model(optimizer="adam", nbr_features=100):
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(nbr_features,)))
...
model.compile(loss='binary_crossentropy', optimizer=optimizer,metrics=["accuracy"])
return model
estimator = Pipeline([("tfidf", TfidfVectorizer()),
('norm', StandardScaler(with_mean=False)),
("km", KerasClassifier(build_fn=create_model, verbose=1))])
grid_params = {
'tfidf__max_df': (0.1, 0.25, 0.5, 0.75, 1.0),
'tfidf__max_features': (100, 500, 1000, 5000,),
... }
gs = GridSearchCV(estimator,
param_grid,
...)
I want to pass the max_features parameter from the tfidf stage to the km stage as nbr_features. Any hack/workaround to do that?
AI: I figured out how to do that by monkey patching ParameterGrid.__iter__ and GridSearchCV._run_search methods.
ParameterGrid.__iter__ iterates over all possible combinations of hyperparameters (dict of param_name: value), so I modified what it yields (one configuration of hyperparameters, params) by adding "km__nbr_features" equal to 'tfidf__max_features':
params["km__nbr_features"] = params['tfidf__max_features']
Important: "km__nbr_features" must be missing from grid_params so the trick works.
Here is some code:
from sklearn.model_selection import GridSearchCV, ParameterGrid
import numpy as np
from itertools import product
def patch_params(params):
    # Updates a configuration of possible parameters
    params["km__nbr_features"] = params['tfidf__max_features']
    return params
def monkey_iter__(self):
"""Iterate over the points in the grid.
Returns
-------
params : iterator over dict of string to any
Yields dictionaries mapping each estimator parameter to one of its
allowed values.
"""
for p in self.param_grid:
# Always sort the keys of a dictionary, for reproducibility
items = sorted(p.items())
if not items:
yield {}
else:
keys, values = zip(*items)
for v in product(*values):
params = dict(zip(keys, v))
yield patch_params(params)
# replacing address of "__iter__" with "monkey_iter__"
ParameterGrid.__iter__ = monkey_iter__
def monkey_run_search(self, evaluate_candidates):
"""Search all candidates in param_grid"""
evaluate_candidates(ParameterGrid(self.param_grid))
# replacing address of "_run_search " with "monkey_run_search"
GridSearchCV._run_search = monkey_run_search
Then I performed Grid Search normally:
def create_model(optimizer="adam", nbr_features=100):
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(nbr_features,)))
...
model.compile(loss='binary_crossentropy', optimizer=optimizer,metrics=["accuracy"])
return model
estimator = Pipeline([("tfidf", TfidfVectorizer()),
('norm', StandardScaler(with_mean=False)),
("km", KerasClassifier(build_fn=create_model, verbose=1))])
grid_params = {
'tfidf__max_df': (0.1, 0.25, 0.5, 0.75, 1.0),
'tfidf__max_features': (100, 500, 1000, 5000,),
... }
# Performing Grid Search
gs = GridSearchCV(estimator,
param_grid,
...)
Update:
In case you used RandomizedGridSearchCV, you must monkey patch ParameterGrid.__getitem__ instead.
def monkey_getitem__(self, ind):
"""Get the parameters that would be ``ind``th in iteration
Parameters
----------
ind : int
The iteration index
Returns
-------
params : dict of string to any
Equal to list(self)[ind]
"""
# This is used to make discrete sampling without replacement memory
# efficient.
for sub_grid in self.param_grid:
# XXX: could memoize information used here
if not sub_grid:
if ind == 0:
return {}
else:
ind -= 1
continue
# Reverse so most frequent cycling parameter comes first
keys, values_lists = zip(*sorted(sub_grid.items())[::-1])
sizes = [len(v_list) for v_list in values_lists]
total = np.product(sizes)
if ind >= total:
# Try the next grid
ind -= total
else:
out = {}
for key, v_list, n in zip(keys, values_lists, sizes):
ind, offset = divmod(ind, n)
out[key] = v_list[offset]
return patch_params(out)
raise IndexError('ParameterGrid index out of range')
ParameterGrid.__getitem__ = monkey_getitem__ |
H: why multiplication (squares) doesn't work for neural networks?
The code below creates the sum of 2 random numbers; we train on 1000 examples and are then able to predict, which works fine.
Consider the below code for creating random data :
def random_sum_pairs(n_examples, n_numbers, largest):
X, y = list(), list()
for i in range(n_examples):
in_pattern = [randint(1,largest) for _ in range(n_numbers)]
out_pattern = sum(in_pattern)
X.append(in_pattern)
y.append(out_pattern)
# format as NumPy arrays
X,y = array(X), array(y)
# normalize
X = X.astype('float') / float(largest * n_numbers)
y = y.astype('float') / float(largest * n_numbers)
return X, y
# invert normalization
def invert(value, n_numbers, largest):
return round(value * float(largest * n_numbers))
training the model :
n_examples = 1000
n_numbers = 2
largest = 1000
n_batch = 100
n_epoch = 500
model = Sequential()
model.add(Dense(20, input_dim=n_numbers))
model.add(Dense(100, input_dim=n_numbers))
model.add(Dense(1000, input_dim=n_numbers))
model.add(Dense(100, input_dim=n_numbers))
model.add(Dense(20))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=adam)
X, y = random_sum_pairs(n_examples, n_numbers, largest)
model.fit(X, y, epochs=n_epoch, batch_size=n_batch, verbose=2)
predicting the model with:
result = model.predict(X, batch_size=n_batch, verbose=0)
# calculate error
expected = [invert(x, n_numbers, largest) for x in y]
predicted = [invert(x, n_numbers, largest) for x in result[:,0]]
rmse = sqrt(mean_squared_error(expected, predicted))
print('RMSE: %f' % rmse)
# show some examples
for i in range(20):
error = expected[i] - predicted[i]
print('Expected=%d, Predicted=%d (err=%d)' % (expected[i], predicted[i], error))
Result:
RMSE: 0.000000
Expected=120, Predicted=120 (err=0)
Expected=353, Predicted=353 (err=0)
Expected=1316, Predicted=1316 (err=0)
Expected=839, Predicted=839 (err=0)
Expected=731, Predicted=731 (err=0)
Expected=867, Predicted=867 (err=0)
Expected=276, Predicted=276 (err=0)
Expected=36, Predicted=36 (err=0)
Expected=601, Predicted=601 (err=0)
Expected=1805, Predicted=1805 (err=0)
Expected=1045, Predicted=1045 (err=0)
Expected=422, Predicted=422 (err=0)
Expected=1795, Predicted=1795 (err=0)
Expected=861, Predicted=861 (err=0)
Expected=469, Predicted=469 (err=0)
Expected=362, Predicted=362 (err=0)
Expected=119, Predicted=119 (err=0)
Expected=1021, Predicted=1021 (err=0)
But let's say I change the logic in random_sum_pairs to provide single numbers and squares of those numbers: (and change n_numbers = 1)
def random_sum_pairs(n_examples, n_numbers, largest):
X, y = list(), list()
for i in range(n_examples):
in_pattern = [randint(1,largest) for _ in range(n_numbers)]
#print(in_pattern)
out_pattern = in_pattern[0]*in_pattern[0]
#print(out_pattern)
X.append(in_pattern)
y.append(out_pattern)
# format as NumPy arrays
X,y = array(X), array(y)
# normalize
X = X.astype('float') / float(largest * largest)
y = y.astype('float') / float(largest * largest)
return X, y
This doesn't work at all and errors are huge. Results:
RMSE: 75777.312879
Expected=556516, Predicted=567106 (err=-10590)
Expected=403225, Predicted=458394 (err=-55169)
Expected=86436, Predicted=124424 (err=-37988)
Expected=553536, Predicted=565147 (err=-11611)
Expected=518400, Predicted=541642 (err=-23242)
Expected=927369, Predicted=779632 (err=147737)
Expected=855625, Predicted=742415 (err=113210)
Expected=159201, Predicted=227260 (err=-68059)
Expected=48841, Predicted=52929 (err=-4088)
Expected=71289, Predicted=97981 (err=-26692)
Expected=363609, Predicted=427054 (err=-63445)
Expected=116964, Predicted=171435 (err=-54471)
Expected=5476, Predicted=-91040 (err=96516)
Expected=316969, Predicted=387879 (err=-70910)
Expected=900601, Predicted=765921 (err=134680)
Expected=839056, Predicted=733601 (err=105455)
Why does this happen? For linear operations like summation we don't even need a neural network, yet here the network fails on a case as simple as squaring numbers. So how do I train a neural network to learn the squares of numbers? I am not looking for 100% accurate results, but I was expecting something at least reasonably close.
Note: I know we don't need such a dense network with that many hidden layers (I guess). I have tried a single hidden layer as well, with similar results.
AI: The reason you cannot fit non-linear functions (here sums of squares) is simply that your neural network is actually not a proper neural network: it simply resolves to a single linear element.
Why is that? Recall from the Keras documentation the Dense layer:
keras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
where
activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).
Since you don't specify explicitly any activation, you actually use a linear one for all your layers. And it is well-known that a neural network comprised simply of linear units is equivalent to a simple linear unit (check Andrew Ng's lecture Why Non-linear Activation Functions for a detailed explanation); in fact, it is only with non-linear activation functions that neural networks begin to be able to do interesting things.
So, you should add activation='relu' in all your layers except the final one, which should remain as is (final layers in regression settings, like here, need linear activation functions).
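For example, a minimal sketch of that change (the layer sizes are illustrative, not a recommendation):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(20, input_dim=1, activation='relu'))   # hidden layers get a non-linearity
model.add(Dense(20, activation='relu'))
model.add(Dense(1))                                    # linear output for the regression target
model.compile(loss='mean_squared_error', optimizer='adam')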
You may also find the discussion in the Stack Overflow thread Is deep learning bad at fitting simple non-linear functions outside training scope (extrapolating)? interesting.
UPDATE (after comment):
Determining the argument input_dim for any layer other than the first one is meaningless (the input dimension of layer N is simply the output dimension of layer N-1). Remove all input_dim arguments except in the first layer.
Start simple: In the linked SO thread, there is a clear demonstration of a much simpler network that can indeed calculate the square of its input. There is a good chance that 500 epochs are not enough for the weights of your (unnecessarily large) network to converge (this was not the problem before because, as said, your network was essentially a simple linear unit).
Additionally, you now seem to have an input you never use (in_pattern[1]); this can lead to further delay in the convergence of your model.
Please keep in mind that there is never a guarantee that any NN model can do a specific job, and experimenting with the architecture and the hyperparameters is always expected. |
H: What am I doing wrong with my CNN in Keras?
In my project I have 700 images for each class (pdr and nonPdr), totalling 1400 images. For validation I've put aside 28 samples.
The problem is that my validation loss and accuracy is unstable. This is my code:
def ReadImages(Path):
LabelList = list()
ImageCV = list()
classes = ["nonPdr", "pdr"]
# Get all subdirectories
FolderList = [f for f in os.listdir(Path) if not f.startswith('.')]
# Loop over each directory
for File in FolderList:
for index, Image in enumerate(os.listdir(os.path.join(Path, File))):
# Convert the path into a file
ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (256,256)))
#ImageCV[index]= np.array(ImageCV[index]) / 255.0
LabelList.append(classes.index(os.path.splitext(File)[0]))
ImageCV[index] = cv2.addWeighted(ImageCV[index],4, cv2.GaussianBlur(ImageCV[index],(0,0), 256/30), -4, 128)
return ImageCV, LabelList
visible = Input(shape=(256,256,3))
conv1 = Conv2D(16, kernel_size=(3,3), activation='relu', strides=(1, 1))(visible)
conv2 = Conv2D(16, kernel_size=(3,3), activation='relu', strides=(1, 1))(conv1)
bat1 = BatchNormalization()(conv2)
conv3 = ZeroPadding2D(padding=(1, 1))(bat1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(pool1)
conv5 = Conv2D(32, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(conv4)
bat2 = BatchNormalization()(conv5)
pool2 = MaxPooling2D(pool_size=(1, 1))(bat2)
conv6 = Conv2D(64, kernel_size=(3,3), activation='relu',strides=(1, 1), padding='valid')(pool2)
conv7 = Conv2D(64, kernel_size=(3,3), activation='relu',strides=(1, 1), padding='valid')(conv6)
bat3 = BatchNormalization()(conv7)
conv7 = ZeroPadding2D(padding=(1, 1))(bat3)
pool3 = MaxPooling2D(pool_size=(1, 1))(conv7)
conv8 = Conv2D(128, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.01))(pool3)
conv9 = Conv2D(128, kernel_size=(2,2), activation='relu', strides=(1, 1), padding='valid')(conv8)
bat4 = BatchNormalization()(conv9)
pool4 = MaxPooling2D(pool_size=(1, 1))(bat4)
conv10 = Conv2D(256, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.02))(pool4)
conv11 = Conv2D(256, kernel_size=(3,3), activation='relu', padding='valid', kernel_regularizer=regularizers.l2(0.02))(conv10)
bat5 = BatchNormalization()(conv11)
pool5 = MaxPooling2D(pool_size=(1, 1))(bat5)
flat = Flatten()(pool5)
output = Dense(1, activation='sigmoid')(flat)
model = Model(inputs=visible, outputs=output)
opt = optimizers.adam(lr=0.001, decay=0.0)
model.compile(optimizer= opt, loss='binary_crossentropy', metrics=['accuracy'])
data, labels = ReadImages(TRAIN_DIR)
test, lt = ReadImages(TEST_DIR)
model.fit(np.array(data), np.array(labels), epochs=8, validation_data = (np.array(test), np.array(lt)))
model.save('model.h5')
And after run it, I've got the follow return:
Train on 1400 samples, validate on 28 samples
Epoch 1/8
1400/1400 [==============================] - 2289s 2s/step - loss: 10.5126 - acc: 0.9529 - val_loss: 9.6115 - val_acc: 1.0000
Epoch 2/8
1400/1400 [==============================] - 2245s 2s/step - loss: 9.8477 - acc: 0.9550 - val_loss: 9.1845 - val_acc: 0.9643
Epoch 3/8
1400/1400 [==============================] - 2271s 2s/step - loss: 8.3761 - acc: 0.9864 - val_loss: 7.6834 - val_acc: 1.0000
Epoch 4/8
1400/1400 [==============================] - 2225s 2s/step - loss: 7.7146 - acc: 0.9736 - val_loss: 15.1970 - val_acc: 0.5000
Epoch 5/8
1400/1400 [==============================] - 2204s 2s/step - loss: 7.8170 - acc: 0.9436 - val_loss: 6.5526 - val_acc: 1.0000
Epoch 6/8
1400/1400 [==============================] - 2215s 2s/step - loss: 7.1557 - acc: 0.9407 - val_loss: 5.8400 - val_acc: 1.0000
Epoch 7/8
1400/1400 [==============================] - 2267s 2s/step - loss: 6.4109 - acc: 0.9450 - val_loss: 5.2029 - val_acc: 1.0000
Epoch 8/8
1400/1400 [==============================] - 2269s 2s/step - loss: 5.8860 - acc: 0.9479 - val_loss: 13.2224 - val_acc: 0.5000
And when I try to predict some test images, all of the samples return class 0 (wrong):
PREDICT.py
model = load_model('model.h5')
for filename in os.listdir(r'v/'):
if filename.endswith(".jpg") or filename.endswith(".png"):
ImageCV = cv2.resize(cv2.imread(os.path.join(TEST_DIR) + os.path.sep + filename), (256,256))
ImageCV = cv2.addWeighted(ImageCV,4, cv2.GaussianBlur(ImageCV,(0,0), 256/30), -4, 128)
ImageCV = ImageCV.reshape(-1,256,256,3)
print(model.predict(ImageCV))
print(np.argmax(model.predict(ImageCV)))
[[0.]]
0
[[0.]]
0
[[0.]]
0
So, what am I doing wrong in my project? How can I fix it?
I appreciate any help
UPDATE
After add this code:
perm = np.random.permutation(len(data))
data = data[perm]
labels = labels[perm]
Returns this numbers:
Train on 1400 samples, validate on 28 samples
Epoch 1/5
1400/1400 [==============================] - 2232s 2s/step - loss: 10.5725 - acc: 0.9629 - val_loss: 10.1279 - val_acc: 1.0000
Epoch 2/5
1400/1400 [==============================] - 2370s 2s/step - loss: 10.2828 - acc: 0.9729 - val_loss: 9.5293 - val_acc: 1.0000
Epoch 3/5
1400/1400 [==============================] - 2290s 2s/step - loss: 9.6735 - acc: 0.9707 - val_loss: 8.8646 - val_acc: 1.0000
Epoch 4/5
1400/1400 [==============================] - 2269s 2s/step - loss: 8.6198 - acc: 0.9950 - val_loss: 8.1976 - val_acc: 1.0000
Epoch 5/5
1400/1400 [==============================] - 2282s 2s/step - loss: 8.2455 - acc: 0.9836 - val_loss: 7.8586 - val_acc: 1.0000
The values look better, but when I try to predict images the output is always 0 (even though I'm passing images of class 0 and class 1).
What should I do now?
AI: I guess your model is biased toward the first half of the data. Although Keras has a built-in shuffle=True in the model.fit() arguments, according to this document it may not take effect depending on how steps_per_epoch is set.
I suggest shuffling your data before training using numpy.random.shuffle(array). Probably something like this:
data = np.array(data)
labels = np.array(labels)
perm = np.random.permutation(len(data))
data = data[perm]
labels = labels[perm]
model.fit(data, labels, epochs=8, validation_data = (np.array(test), np.array(lt))) |
H: Understanding python XGBoost model dump output of a very simple tree
I am trying to understand the model dump output from XGBoost. I would like to step through and see exactly how the model arrived at its prediction. To simplify, I trained a model with 1 tree and max depth 1, and as expected all records get one of two predictions since there is a single split - the values are {0.5386398434638977, 0.5011891722679138}. However, when I look at the model dump I see the following:
booster[0]:
0:[f40<70.5] yes=1,no=2,missing=1
1:leaf=0.00475667231
2:leaf=0.154868156
I have no idea how to interpret this in a way that makes sense with the prediction. What am I missing? Thanks!
AI: The scores at leaves measure log-odds, not probabilities. (With more trees, these scores get summed to give a final log-odds approximation, then goes through a sigmoid to get probability approximations.)
And indeed, $1/(1+e^{-0.00475667})\approx 0.501189$. |
H: Improving classification when some classes are less represented?
I have a multi-class classification problem. It performs quite well but on the least represented classes it doesn't. Indeed, here is the distribution :
And here are the classification results (I took the numbers off the labels):
Therefore, how can I improve classification when some classes are less represented?
I thought of duplicating a few rows of the classes it doesn't predict well in the train sample. But maybe this assumption is entirely false; maybe it is not because they are less represented that they are badly classified. Maybe I should have a look at the feature selection I did by hand and rather do a PCA?
Update
class weight with inverted frequency
I passed the class_weight parameter in model.fit() which is a list of the inverted frequency of the classes on the dataset:
>>> lossWeights = df['grade'].value_counts(normalize=True)
>>> lossWeights = lossWeights.sort_index().tolist()
>>> print(lossWeights)
[0.204064039408867, 0.2954361054766734, 0.29536185163720663, 0.13638619240799768, 0.04878839466821211, 0.014684149521877717, 0.0052792668791654595]
weights = {0: 1 / 0.204064,
1: 1 / 0.295436,
2: 1 / 0.295362,
3: 1 / 0.136386,
4: 1 / 0.048788,
5: 1 / 0.014684,
6: 1 / 0.005279}
history = model.fit(x_train.as_matrix(),
y_train.as_matrix(),
validation_split=0.2,
epochs=epochs,
batch_size=batch_sz, # Can I tweak the batch here to get evenly distributed data ?
verbose=2,
class_weight = weights,
callbacks=[checkpoint])
The test set accuracy diminished to 86.57% (it was 88.54% before), but the results on the confusion matrix are better balanced:
class weight with inverted frequency + focal loss
Focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. It focuses on training a sparse set of hard examples.
def focal_loss(gamma=2., alpha=4.):
gamma = float(gamma)
alpha = float(alpha)
def focal_loss_fixed(y_true, y_pred):
"""Focal loss for multi-classification
FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)
Notice: y_pred is probability after softmax
gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper
d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)
Focal Loss for Dense Object Detection
https://arxiv.org/abs/1708.02002
Arguments:
y_true {tensor} -- ground truth labels, shape of [batch_size, num_cls]
y_pred {tensor} -- model's output, shape of [batch_size, num_cls]
Keyword Arguments:
gamma {float} -- (default: {2.0})
alpha {float} -- (default: {4.0})
Returns:
[tensor] -- loss.
"""
epsilon = 1.e-9
y_true = tf.convert_to_tensor(y_true, tf.float32)
y_pred = tf.convert_to_tensor(y_pred, tf.float32)
model_out = tf.add(y_pred, epsilon)
ce = tf.multiply(y_true, -tf.log(model_out))
weight = tf.multiply(y_true, tf.pow(tf.subtract(1., model_out), gamma))
fl = tf.multiply(alpha, tf.multiply(weight, ce))
reduced_fl = tf.reduce_max(fl, axis=1)
return tf.reduce_mean(reduced_fl)
return focal_loss_fixed
model.compile(loss=focal_loss(alpha=1),
optimizer='nadam',
metrics=['accuracy'])
model.fit(X_train, y_train, epochs=3, batch_size=1000)
I just had to add it to my model:
def create_model(input_dim, output_dim):
print(output_dim)
# create model
model = Sequential()
# input layer
model.add(Dense(100, input_dim=input_dim, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
# hidden layer
model.add(Dense(60, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
# output layer
model.add(Dense(output_dim, activation='softmax'))
# Compile model
# model.compile(loss='categorical_crossentropy', loss_weights=None, optimizer='adam', metrics=['accuracy'])
model.compile(loss=focal_loss(alpha=1), loss_weights=None, optimizer='adam', metrics=['accuracy'])
return model
It gave me an overall accuracy of 88%. However it gave me back a very bad classification on the least represented class:
focal loss
It has a decent test set accuracy of 88.27% and the classification is better balanced:
Now I have two questions. I'm still not satisfied. How can I improve this classification? Which model should I use between the first and the last updates?
Resample
I tried to downsample the majority classes:
import os
from sklearn.utils import resample
# rebalance data
#df = resample_data(df)
if True:
count_class_A, count_class_B,count_class_C, count_class_D,count_class_E, count_class_F, count_class_G = df.grade.value_counts()
count_df = df.shape[0]
class_dict = {"A": count_class_A,"B" :count_class_B,"C": count_class_C,"D": count_class_D,"E": count_class_E, "F": count_class_F, "G": count_class_G}
counts = [count_class_A, count_class_B,count_class_C, count_class_D,count_class_E, count_class_F, count_class_G]
median = statistics.median(counts)
for key in class_dict:
if class_dict[key]>median:
print(key)
df[df.grade == key] = df[df.grade == key].sample(int(count_df/7), replace = False)
#replace=False, # sample without replacement
#n_samples=int(count_df/7), # to match minority class
#random_state=123)
# Divide the data set into training and test sets
x_train, x_test, y_train, y_test = split_data(df, APPLICANT_NUMERIC + CREDIT_NUMERIC,
APPLICANT_CATEGORICAL,
TARGET,
test_size = 0.2,
#row_limit = os.environ.get("sample"))
row_limit = 552160)
# Inspect our training data
print("x_train contains {} rows and {} features".format(x_train.shape[0], x_train.shape[1]))
print("y_train contains {} rows and {} features".format(y_train.shape[0], y_train.shape[1]))
print("x_test contains {} rows and {} features".format(x_test.shape[0], x_test.shape[1]))
print("y_test contains {} rows and {} features".format(y_test.shape[0], y_test.shape[1]))
# Loan grade has been one-hot encoded
print("Sample one-hot encoded 'y' value: \n{}".format(y_train.sample()))
However, the results were catastrophic. The model accuracy and the model loss seemed to have some issues:
And everything was classified in "A" on the test set.
AI: Class imbalance is a common problem; there are several ways in which people tackle it, a few of which are below.
If possible, try augmenting the classes for which there is less data (some might call it oversampling).
You can use class weighting, which during training penalizes errors on the classes with insufficient data more heavily.
Using Focal Loss, an example of it can be found at - Using focal loss for Fraud detection
Other ways are possible as well; try the one which is most suitable and convenient in your scenario.
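For the class-weighting option, scikit-learn can compute balanced (inverse-frequency) weights for you; a minimal sketch, assuming labels holds the integer class labels (not the one-hot encoding):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(labels)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=labels)
class_weight = dict(zip(classes, weights))  # roughly {0: 0.7, ..., 6: 27.1} for the distribution above

model.fit(x_train.as_matrix(), y_train.as_matrix(),
          validation_split=0.2, epochs=epochs, batch_size=batch_sz,
          class_weight=class_weight)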
H: How to install boruta in conda?
I want to install boruta in my anaconda environment, but if I execute
conda install boruta
It displays:
PackagesNotFoundError: The following packages are not available from current channels:
- boruta
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
I guess I could install it using the pip that belongs to the Anaconda environment, but I am not sure if that is the best way.
Any suggestions?
AI: According to the instructions at both Github and Anaconda cloud, you should try
conda install -c conda-forge boruta_py |
H: Infer family type, size from reviews
I have a bunch of reviews:
User_id, review
1, "We (a family of 4 adults) chose this and view and loved this place"
1, "My husband and I, with our 2 teen sons, visit this restaurant at least once..."
2,"My partner and I booked table for a short holiday, their wine menu was awesome"
2,"My wife is a fan of jazz and she's expecting, so visited this place "
What techniques/packages are available to, for instance, estimate that:
User Id 1 => family of 4, 2 sons (13-19)
User Id 2 => family of 2, expecting
:
:
I have been googling around with little success, and other than creating my own labeled dataset, I was hoping there are some NLP techniques that can help bootstrap my training set, which can then be curated by humans.
AI: I can think of two options:
Train a custom supervised tagger for your data, typically with a sequence labeling method such as CRF. This will require a quite large amount of annotated data (and a specific formatting), but if done well it should give you quite accurate results.
Use manually defined patterns based on keywords (such as "My * and I", "family", "friends") directly associated with a predefined category, and match these patterns in the data. You can do several iterations of defining new patterns for cases that are not matched, thus refining progressively. Depending on your data and how far you go with this you should be able to correctly match most cases, possibly reaching as good accuracy as a tagger for much less work.
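For the second option, a minimal sketch of what such keyword patterns could look like in Python (the patterns and category labels here are purely illustrative assumptions, to be refined iteratively):
import re

patterns = [
    (re.compile(r'\bmy (husband|wife|partner) and i\b', re.I), 'couple'),
    (re.compile(r'\bfamily of (\d+)\b', re.I), 'family'),
    (re.compile(r'\b(\d+) (teen )?(sons|daughters|kids|children)\b', re.I), 'with children'),
    (re.compile(r"\bshe'?s expecting\b", re.I), 'expecting'),
]

def tag_review(text):
    # Return every predefined category whose pattern matches the review
    return [label for regex, label in patterns if regex.search(text)]

print(tag_review("My husband and I, with our 2 teen sons, visit this restaurant at least once..."))
# ['couple', 'with children']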
Btw be careful how you represent your data: the same user id might not always give you the same group of people since they can go to a restaurant one day with their family, the next day with their wife, the day after with their mistress, etc. Also a "family of 2, expecting" usually becomes "a family of 3" after 9 months ;) |
H: SVM hyperplane margin
so that $H_0$ is equidistant from $H_1$ and $H_2$.
However, here the variable $\delta$ is not necessary. So we can set $\delta=1$ to simplify the problem.
$$w\cdot x+b=1 $$
and
$$w\cdot x+b=−1$$
Why is this assumption taken? If it is taken, can't we get the distance between the two planes as $2$ directly, because both are parallel and differ by $2$? How is it $\frac{2}{\left\|w\right\|}$ instead?
I got these equations from https://www.svm-tutorial.com/2015/06/svm-understanding-math-part-3/
AI: After we have
$$w^Tx + b = \pm \delta$$
We can always divide everything by $\delta$,
$$\left( \frac{w}{\delta}\right)^Tx + \left( \frac{b}{\delta}\right)=\pm1$$
Now, we can set $\tilde{w}=\frac{w}{\delta}$ and $\tilde{b}=\frac{b}{\delta}$.
$$\tilde{w}^Tx+\tilde{b}=\pm1$$
This is as if we have set $\delta=1$ from the beginning.
The derivation of the distance formula has been given in equation $(19)$ in the article that you linked to and you might like to be more specific if you can't understand it. The distance should be $\frac{2\delta}{\|w\|}$ if $\delta$ is not set to be $1$. |
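For completeness, a short sketch of that derivation: take $x_1$ on $H_1$ and $x_2$ on $H_2$ such that $x_1-x_2$ is parallel to $\tilde{w}$ (i.e. perpendicular to both hyperplanes). Then
$$\tilde{w}^Tx_1+\tilde{b}=1,\qquad \tilde{w}^Tx_2+\tilde{b}=-1 \;\Rightarrow\; \tilde{w}^T(x_1-x_2)=2,$$
and since $x_1-x_2$ is a multiple of $\tilde{w}$, its length is
$$\|x_1-x_2\|=\frac{|\tilde{w}^T(x_1-x_2)|}{\|\tilde{w}\|}=\frac{2}{\|\tilde{w}\|},$$
which becomes $\frac{2\delta}{\|w\|}$ if you keep $\delta$ instead of setting it to $1$.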
H: NMT, What if we do not pass input for decoder?
For transformer-based neural machine translation (NMT), take English-Chinese for example: we pass English to the encoder, and the decoder input (Chinese) attends to the encoder output to produce the final output.
What if we do not pass any input to the decoder and consider it as a 'memory' model for translation?
Is it possible and what will happen?
It seems the decoder could be removed so that only the encoder remains.
Could I do the translation task like text generation?
See:
https://github.com/salesforce/ctrl/blob/master/generation.py
https://einstein.ai/presentations/ctrl.pdf
AI: The currently dominant NMT paradigm is based on an encoder-decoder architecture where the translation is generated autoregressively. This means that each token in the translation is generated conditioning not only on the source sentence but also on the previously generated tokens. The encoder generates a representation of the source sentence in one go and then the decoder takes this representation and generates the target side tokens one by one. This way, the decoder is responsible for the autoregressive part.
When you have an architecture where there is no decoder, you have non-autoregressive (NAR) NMT. It is possible to build a NAR model, but most attempts to train models directly in such a way have failed. They usually repeat the same words again and again, generating utter garbage instead of actual translations.
Nevertheless, if NAR NMT were possible, it would lead to huge speedups ($O(1)$ instead of $O(n)$ complexity). This is currently a very active line of research. Most of the proposed attempts rely on creating some sort of intermediate (latent) representation to feed to the decoder, which then decodes non-autoregressively.
These are some of the latest articles from that area:
Non-Autoregressive Neural Machine Translation
Non-Autoregressive Machine Translation with Auxiliary Regularization
Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference using a Delta Posterior |
H: Counting the transition in a dataframe overtime
I am stuck at a problem and am thinking about how to get out of it. I want to write code in Python with a dataframe as below:
data = {'Id':['a', 'a', 'b', 'b', 'c', 'c'],
'value':['Active', 'Notactive', 'Active', 'Superactive', 'Notactive', 'Superactive'],
'date':['8-09-2019','15-09-2019','8-09-2019','15-09-2019','8-09-2019','15-09-2019']}
df = pd.DataFrame(data)
I want to reach the result where I count the number of users who made each transition over the 7-day difference:
Active to notactive: 1
Active to superactive: 1
Not active to superactive: 1
Active to Active: 0
Any help on how to proceed (on Python 3.7) would be appreciated.
Thanks
AI: For a small dataset try this:
# change type of date column so that it is possible to find min and max correctly
df['date'] = pd.to_datetime(df['date'])
# put value of the oldest resord and newest record side by side
df_min_max_grouped = pd.merge(
df.loc[df.groupby('Id')['date'].idxmin()],
df.loc[df.groupby('Id')['date'].idxmax()],
on='Id',
suffixes=('_old', '_new')
)
# group by old value and new value and find counts
results = df_min_max_grouped.groupby(['value_old', 'value_new']).size()
results
Output:
value_old value_new
Active Notactive 1
Superactive 1
Notactive Superactive 1
dtype: int64
The approach I chose for this answer:
Convert date values to be of type datetime; correct ordering will not work with string dates.
First find the oldest and the newest records for each Id.
This happens in df.loc[df.groupby("Id")['date'].idxmin()] for the oldest and df.loc[df.groupby("Id")['date'].idxmax()] for the newest records.
Join the oldest and newest records on the basis of Id, and then
Group by old value and new value combination to get a count in each group.
I found that resetting index of the resulting Series produces nice output:
results.reset_index()
   value_old    value_new  0
0     Active    Notactive  1
1     Active  Superactive  1
2  Notactive  Superactive  1
The combinations not listed here have 0 records against them. I hope you find this helpful! |
H: Policy gradient vs cost function
I was working with continuous system RL and obviously stumbled across this Policy Gradient.
I want to know: is this something like a cost function for RL? It kind of gives that impression, considering we are finding out how efficient the system is as a whole (a weighted sum of rewards multiplied by the policy terms).
Let's take the example of vanilla PG:
$$g = \mathbb{E}\Big[\sum_t R_t \, \frac{\partial}{\partial\theta}\ln\pi_\theta(a_t|s_t)\Big]$$
Here, the gradient is nothing but the expected value of the return (the discounted sum of all the rewards) multiplied by how the policy (the network output) needs to change with respect to the network weights.
This seems similar to a cost function, where we compute the total error using cross-entropy (something similar to the information given by the return) and then use it to see how the weights of the neural network can be changed through backpropagation.
Let me know if I've got this right.
AI: Yes there is a cost (score/utility) function and your intuition is correct. In vanilla PG we optimize the expected return $J$ of a trajectory $\tau$ under policy $\pi$ parametrized by $\theta$:
$$\nabla_{\theta} J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta} \log \pi_{\theta}\left(\tau_{i}\right) r\left(\tau_{i}\right)$$
(You can find lots of information here: https://spinningup.openai.com/en/latest/algorithms/vpg.html)
The vanilla PG is very closely related to maximum likelihood:
$$\nabla_{\theta} J_{\mathrm{ML}}(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta} \log \pi_{\theta}\left(\tau_{i}\right)$$
The two gradients are almost the same except for the reward multiplication. This reward is the learning signal. Thinking in terms of backpropagation, your gradients are being multiplied by the reward signal. You can think of vanilla PG as cost-sensitive classification, in a broad sense.
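To make the analogy concrete, here is a minimal sketch (names and shapes are assumptions, not code from this answer) of the surrogate "loss" usually implemented for vanilla PG; in an autodiff framework its gradient is exactly the estimator above, and with all returns equal to 1 it reduces to the maximum-likelihood / cross-entropy case:
import numpy as np

def vanilla_pg_surrogate_loss(log_probs, returns):
    # log_probs: log pi_theta(a_t | s_t) for each sampled step, shape (N,)
    # returns:   the (discounted) return R_t paired with each step, shape (N,)
    # Minimizing the negative return-weighted log-likelihood ascends the PG objective
    return -np.mean(log_probs * returns)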
I will give you a very simple example with neural networks in order to demonstrate the result of the PG learning mechanism. Assume a very simple architecture:
( input --> CNN --> CNN --> fully connected (fc) layer --> out1: V(input), out2: $\pi(a|input)$ )
which can be trained asynchronously or synchronously. The output has two heads one for the expected reward and one for the policy.
Let's assume a task in which an agent has to learn to select between two rewarding targets (orange r=10, green r=1). The targets are presented either together or one at a time, chosen randomly upon completion of an episode.
Assume now that the agent is fully trained. Taking the fc representations from various episodes and running tsne (clustering) we get the following picture:
This 2D representation tells us that the network has a "clear understanding" when there is only the green target, only the orange target, and both of them together. We could color the clusters differently to get a sense of the expected reward or the action preference (as the fc layer encodes spatial information, reward information and action preference).
Policy Gradients are mapping states to action distributions (and/or reward predictions). This means that the learned function (the Neural Network) should have the appropriate representations to do that mapping. And this is essentially what is learned by the network: a decision boundary (given input state). Please note that this is a very simplistic example to get an insight of what the network has learned so you can easily draw parallels to the classification case. |
H: Pyspark Matrix Transformation
Let's assume I have the following dataframe in PySpark:
Customer | product | rating
customer1 | product1 | 0.2343
customer1 | product2 | 0.4440
customer2 | product3 | 0.3123
customer3 | product1 | 0.7430
There can be several customer-product combinations but every combination is already unique. I want to achieve the following outcome in the most efficient manner:
Customer (Index) | product 1 | product 2 | product 3
customer 1 | 0.2343 | 0.4440 | 0.0000
customer 2 | 0.0000 | 0.0000 | 0.3123
customer 3 | 0.7430 | 0.0000 | 0.0000
Each combination which is not represented in the first table will be set to zero. It has to be efficient because the output matrix will have a size of 59578 rows × 21521 columns and I want to avoid the computational cost as much as possible.
Are there any solutions for this? I haven't found a good solution on the web so far.
Thanks in advance for your help.
AI: The way to do this in PySpark is to use groupBy and pivot. Since you don't want to do any actual aggregation, just the pivot, you can use first here.
from pyspark.sql.functions import first
(df.groupBy("Customer")
.pivot("product")
.agg(first("rating"))
.fillna(0))
pivot will give nulls when there is no value so fillna needs to be used as well to give the wanted result. |
H: Difference of sklearns accuracy_score() to the commonly accepted Accuracy metric
I am trying to evaluate the accuracy of a multiclass classification setting and I'm wondering why the sklearn implementation of the accuracy score deviates from the commonly agreed-on accuracy score: $\frac{TP+TN}{TP+TN+FP+FN}$
For sklearn the sklearn.metrics.accuracy_score is defined as follows(https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score):
$\texttt{accuracy}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i = y_i)$
This seems like it's completely neglecting the true negatives of the classification.
Example:
Predicted 1 2 3
Actual
1 5 2 0
2 8 6 2
3 3 4 12
And here the TP,TN,FP and FN:
TP TN FP FN
1 5 24 11 2
2 6 20 6 10
3 12 21 2 7
SUM 23 65 19 19
In the "standard" average score I would calculate:
$\frac{23+65}{23+65+19+19}=0,698$
In the sklearn implementation however it would be:
$\frac{1}{42}*23= 0,548$
Why is this different? And is the other metric somewhere mentioned in the literature, I couldn't find anything so far.
AI: Your "commonly agreed on" and "standard" accuracy is meant for binary classification, in which case it agrees with the other formula from sklearn. In that case, "positive/negative" refer to the two classes, so this is also a little different from your version.
Your version of it is a sort of average of (the "mediant" of) the implicit one-vs-rest classifiers. As such, your score is meaningful, but will generally be larger than the actual common multiclass accuracy metric. For a balanced problem, a constant classifier will get a mediant-of-OVR-accuracy score of $(n-1)^2/n^2$ but an accuracy score of just $1/n$. (Back to the binary case, to compare your method, you'd have to interpret e.g. "Sum(TN)" as including both diagonal entries, so the "accuracy" there is actually $1/2n$, which agrees with the mediant-of-OVR score.)
As such, your metric is similar to macro-averaged scores (though I've never heard of that for accuracy, only precision/recall/Fbeta). Micro Average vs Macro average Performance in a Multiclass classification setting
Finally, as an opinion, since accuracy measures the probability of getting the prediction right, it's easier to interpret; your metric gives credit for not misclassifying a sample into each class it's not put in, hence the inflation. Of course, this also perhaps makes a multiclass model's accuracy sound terrible (sklearn says "this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted"). |
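A quick numerical check of both formulas on the confusion matrix from the question (a sketch with numpy):
import numpy as np

cm = np.array([[ 5, 2,  0],    # rows = actual, columns = predicted
               [ 8, 6,  2],
               [ 3, 4, 12]])
n = cm.sum()

acc_sklearn = np.trace(cm) / n          # 23/42 ≈ 0.548, fraction of samples on the diagonal

tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = n - tp - fp - fn
acc_ovr = (tp.sum() + tn.sum()) / (tp + tn + fp + fn).sum()   # 88/126 ≈ 0.698
print(acc_sklearn, acc_ovr)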
H: How does the given data gets plotted on a graph
I come from a programming background and am now learning the math behind data science and algorithms.
I would like to understand the logic behind how data gets plotted on a graph when using logistic regression.
Let's consider test data like this:
ID Age Grade Location Churn
1 24 1 A 1
2 25 1 A1 1
3 28 1 A2 1
4 31 2 A3 0
And this could get plotted in logistic regression like this -
[A mocked image below]
How do these IDs usually get plotted on the graph? What's the math behind data like this getting plotted on a graph?
I understand the plane separating the points, but I couldn't understand how the points got there in the first place.
This may be basic for experienced people, but it has been a question in my mind for a while. I tried to find online materials but I couldn't, or maybe I don't know what to search for.
Also, does this plotting differ in any way for SVM and other models? I guess only the way the model splits the points will differ.
Please give me leads in understanding this one.
AI: First let's understand the problem. This is a classification task. The last column represents your target value and the rest of the columns are features for a given ID. The target value represents which class a given row will belong to. In the plot it is represented by red (class 1) and green (class 0). We use the rest of the column values to plot a given point.
Now we simply plot one value vs another. The graph that you have shown is for only two variables, but in reality we will be using all the variables and the separation will not be a line but a hyperplane. Since it is difficult to visualize, we usually use only two dimensions for demonstration purposes.
As an example, here is a plot for Age vs Grade marking different classes with different colors:
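As a minimal sketch of how such a scatter plot is produced (using the four rows from the question; the choice of columns is just for illustration):
import matplotlib.pyplot as plt

age   = [24, 25, 28, 31]
grade = [1, 1, 1, 2]
churn = [1, 1, 1, 0]   # target: 1 = red, 0 = green

colors = ['red' if c == 1 else 'green' for c in churn]
plt.scatter(age, grade, c=colors)
plt.xlabel('Age')
plt.ylabel('Grade')
plt.show()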
I hope you get the basic idea. Also, I suggest you look at tutorials for the kNN classifier. It will give you more understanding of the concept.
H: What technique to use in order to identify what position an audio sample is at in a longer audio sample?
I am interested in what techniques and algorithms could be used in order to tackle the following problem:
I have a database of audio samples, specifically live performances of various songs. I have about a dozen songs and for each of them I have again about a dozen samples of that song's performance.
I am hoping that by having more than one sample for each song, it will be possible to better "lock down" the song's general features and filter out noise and differences between performances. These being live performances, each sample is a bit different, some are captured at better quality than others (directly from the sound guy versus a phone recording in the crowd), some songs have interludes, false starts (guitarist forgot to turn on the amp), start too late, end too early…
Now the next thing that I have, aside from this database, is a live feed of a currently playing song, and I am interested in using ML to find out which song the live feed is most likely to be. The way I see this, it could either be that the live feed continues to be captured so the chances of matching it up with features of the existing sample database grow, or it might be more practical to periodically chop off fixed-sized chunks if it is not possible to use a live feed like this.
I am interested in finding out what is the most common / most reliable approach to find what song the live audio most likely is and on top of that what position in the song is the live feed currently at.
AI: As a first step, I think you should start by discretizing your song wave and then take a Fourier transform of each chunk (in numpy, for example, you can use numpy.fft.fft()). This link might be helpful. After that, you can try sequence pattern recognition models.
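A minimal sketch of that discretize-and-FFT step with numpy (the chunk size is an arbitrary assumption):
import numpy as np

def chunked_spectra(signal, chunk_size=2048):
    # Split the 1-D audio signal into fixed-size chunks and take the
    # magnitude of the FFT of each chunk (a crude spectrogram)
    n_chunks = len(signal) // chunk_size
    chunks = signal[:n_chunks * chunk_size].reshape(n_chunks, chunk_size)
    return np.abs(np.fft.rfft(chunks, axis=1))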
H: True positives and true negatives, F1 score: multi class classification
I have 4 classes for an application of classification of the animal kingdom: 1 --> invertebrates; 2 --> vertebrates; 3 --> mammal; 4 --> amphibian. Given a mixture of images, the objective is to identify mammals correctly. In the confusion matrix for this example, will TP denote class 3 (mammal)?
Q1: Therefore, in general, is TP the class category which is of utmost significance? If all the classes are equally important, then how do we denote TP and TN?
Q2: How do we calculate the F1 score? Should it be done separately for each class? Then there will be multiple TNs, one w.r.t. each class!
Can somebody please help clear these confusions? Thank you.
AI: In a multiclass problem there is one score for each class, counting any other class as a negative.
For example for class 1:
TP instances are gold standard class 1 predicted as class 1
FN instances are gold standard class 1 predicted as class 2,3 or 4
FP instances are gold standard class 2,3 or 4 predicted as class 1
TN instances are gold standard class 2,3 or 4 predicted as class 2,3 or 4 (here errors don't matter as long as class 1 is not involved)
In other words, the problem is evaluated as if it was a binary classification problem for every class individually. Doing the same process for every class independently (since the status of an instance depends on the target class), one obtains a different F1-score for each class.
After that, one generally calculates either the macro F1-score or the micro F1-score (or both) in order to obtain an overall performance statistic. |
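In scikit-learn this per-class computation and both averages are available directly; a minimal sketch with made-up labels:
from sklearn.metrics import f1_score

y_true = [1, 2, 3, 4, 3, 2, 1, 3]   # hypothetical gold standard
y_pred = [1, 2, 3, 4, 2, 2, 1, 3]   # hypothetical predictions

print(f1_score(y_true, y_pred, average=None))     # one F1 score per class
print(f1_score(y_true, y_pred, average='macro'))  # unweighted mean of the per-class scores
print(f1_score(y_true, y_pred, average='micro'))  # computed from global TP/FP/FN counts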
H: The effect of imbalanced distribution of data
I read on Google's ML website about the case where I have a classification dataset with 90% of the data in one class and 10% in the other.
In that case, should I use the exact same percentage of data for each classification?
i.e. deleting around 80% of the dataset to keep 10% for each class.
The reason is that Google said the ML model will learn that the 90% class is more likely, and that won't provide good predictions, i.e. the predictions might be biased towards a single label.
My dataset is split 90% to 10%, but that is indeed the actual ratio: the 90% class really is more likely.
Shall I delete 80% of my data, or keep it as is and let the model learn that the 90% class is indeed more likely?
AI: The best way forward here depends highly on the real life question you try to answer.
Let's say you want to make a medical diagnosis:
'Sick with exotic illness X' or 'Not sick with exotic illness X': in this case you might want to catch all instances of being sick as a warning sign and could live with 'false positives'.
Conversely, if your algorithm will be used to predict 'customers likely to cancel soon', it would not be a good idea to proactively talk to 'false positives', i.e. customers who did not plan to cancel, about why they might be dissatisfied.
In either case your training set, and indeed reality, might be severely unbalanced, but the cost and consequences of this vary.
In the first case I would recommend using balancing methods (like the aforementioned Under-/Oversampling, etc.) to improve recognition of the minority class while in the second case that might be unnecessary.
In any case I would practically go on to do the following:
Include balancing/sampling in your beauty contest of algorithms and parameters and check the impact on the accuracy of predicting the test set (which is left unbalanced as in the original).
This will simply show you whether the inherent bias of the training set is problematic for your real world case (i.e. produces models that never identify the minority class) or not. |
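As a sketch of that comparison (assuming X_train, X_test, y_train, y_test already exist; the logistic regression is just an example):
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Train once without and once with class weighting, then compare
# per-class precision/recall on the untouched (imbalanced) test set
for weights in (None, 'balanced'):
    clf = LogisticRegression(class_weight=weights, max_iter=1000)
    clf.fit(X_train, y_train)
    print(weights)
    print(classification_report(y_test, clf.predict(X_test)))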
H: Feature selection is not that useful?
I've been doing a few data science competitions now, and I'm noticing something quite odd and frustrating to me. Why is it frustrating? Because, in theory, when you read about data science it's all about features, and the careful selection, extraction and engineering of those to extract the maximum information out of raw variables; and so far, throwing every variable as it is into the mix seems to work fine with the right encodings. Even removing a variable that has 80% nulls (which in theory should be an overfitting contributor) decreases the performance of the regression model slightly.
For a practical case: I have long/lat for a pickup point and a destination point. I did the logical task of calculating the distance (all kinds of them) from these points and dropped the long/lat. The model performs way better when you include both (coordinates & distance) in the feature list. Any explanations? I'd also appreciate a general thought on my dilemma here about the real utility of feature selection/engineering/extraction.
EDIT: could it be that the information we can get out of the coordinates is greater than that of the distance? Is it just possible to extract features that are more beneficial to my model than plain long/lat?
AI: My experience is the same. I think in my case at least it's largely down to the algorithms I would generally use, all of which have the capacity to ignore features or down-weight them to insignificance where they're not particularly useful to the model. For example, a Random Forest will simply not select particular features to split against. A neural network will just weight features into having no effect on the output and so on. My experience is that algorithms which take every feature into account (like a vanilla linear regression model) generally suffer far more.
Additionally, in a "production" rather than competitive environment I found that feature selection became much more important. This is generally due to covariate shift - the distribution of values for certain features changes over time and, where that change is significant between your training dataset and the live predictions you're making day-to-day, this can really trash your model's outputs completely. This kind of problem seems to be administered out of the datasets used for competitions, so I never experienced it until starting to use ML at work. |
H: Is it possible to create a rule-based algorithm to compute the relevance score of question-answer pair?
In information retrieval or question answering systems, we use TF-IDF or BM25 to compute the similarity score of a question-question pair as the baseline or coarse ranking for deep learning.
In community question answering, we already have the question-answer pairs to collect some statistics info. Without deep learning, could we invent an algorithm like BM25 to compute the relevance score of question-answer pair?
AI: could we invent an algorithm like BM25 to compute the relevance score of question-answer pair?
It depends:
BM25 (actually cosine with BM25 weighted vectors) is a simple similarity measure, ultimately based on counting words in common. Proposing a different similarity measure is easy, for instance there are various measures used for MT evaluation (including some quite sophisticated ones) which could be used as well. Of course, these measures don't actually measure the relevance, they just offer a crude approximation.
However if there was such a rule-based algorithm which would be able to actually measure the relevance of an answer in any context, then for all intents and purposes we would have solved AI: judging the semantic relevance is much more subtle than counting words in common. In particular if there is such an algorithm, then the problem of question answering is solved: you can just generate all the possible answers and loop until one is found relevant to the question.
People have tried to do "intelligent" rule-based algorithms in NLP for decades, before realizing that ML is more efficient and performs much better in most tasks. So it's extremely unlikely that a rule-based algorithm would suddenly outperform ML on a non-trivial task like this. |
H: How to get Keras accuracy for each step in an epoch like in Tensorflow?
Like in tensorflow I get accuracy for each step -
Step 1, Minibatch Loss= 68458.3359, Training Accuracy= 0.800
Step 10, Minibatch Loss= 451470.3125, Training Accuracy= 0.200
Step 20, Minibatch Loss= 582661.1875, Training Accuracy= 0.200
Step 30, Minibatch Loss= 186046.3125, Training Accuracy= 0.400
Step 1, Minibatch Loss= 161546.6250, Training Accuracy= 0.600
Step 10, Minibatch Loss= 286965.3125, Training Accuracy= 0.400
Step 20, Minibatch Loss= 205545.7500, Training Accuracy= 0.600
Step 30, Minibatch Loss= 202164.6562, Training Accuracy= 0.800
Step 1, Minibatch Loss= 214717.7969, Training Accuracy= 0.600
Step 10, Minibatch Loss= 108088.7344, Training Accuracy= 0.800
Step 20, Minibatch Loss= 80130.6016, Training Accuracy= 0.800
Step 30, Minibatch Loss= 28674.1875, Training Accuracy= 0.800
Step 1, Minibatch Loss= 78675.6641, Training Accuracy= 0.400
Step 10, Minibatch Loss= 168231.2812, Training Accuracy= 0.600
Step 20, Minibatch Loss= 77828.1406, Training Accuracy= 0.600
Step 30, Minibatch Loss= 56584.9609, Training Accuracy= 0.800
Step 1, Minibatch Loss= 29474.0898, Training Accuracy= 0.600
Step 10, Minibatch Loss= 79742.9531, Training Accuracy= 0.800
Step 20, Minibatch Loss= 0.0000, Training Accuracy= 1.000
Step 30, Minibatch Loss= 6736.4688, Training Accuracy= 0.800
But in keras I get accuracy for each epoch -
156/156 [==============================] - 6s 39ms/step - loss: 13.0185 - acc: 0.1923
Epoch 2/10
156/156 [==============================] - 3s 18ms/step - loss: 12.9151 - acc: 0.1987
Epoch 3/10
156/156 [==============================] - 3s 18ms/step - loss: 13.1218 - acc: 0.1859
Epoch 4/10
156/156 [==============================] - 3s 18ms/step - loss: 12.9151 - acc: 0.1987
Epoch 5/10
156/156 [==============================] - 3s 18ms/step - loss: 13.1218 - acc: 0.1859
Epoch 6/10
156/156 [==============================] - 3s 18ms/step - loss: 12.9151 - acc: 0.1987
Epoch 7/10
156/156 [==============================] - 3s 18ms/step - loss: 12.8118 - acc: 0.2051
Epoch 8/10
156/156 [==============================] - 3s 18ms/step - loss: 12.8118 - acc: 0.2051
Epoch 9/10
156/156 [==============================] - 3s 18ms/step - loss: 12.8118 - acc: 0.2051
Epoch 10/10
156/156 [==============================] - 3s 18ms/step - loss: 12.9151 - acc: 0.1987
AI: Create a custom CallBack:
from tensorflow.keras.callbacks import Callback
class NBatchLogger(Callback):
"A Logger that log average performance per `display` steps."
def __init__(self, display):
self.step = 0
self.display = display
self.metric_cache = {}
def on_batch_end(self, batch, logs={}):
self.step += 1
for k in self.params['metrics']:
if k in logs:
self.metric_cache[k] = self.metric_cache.get(k, 0) + logs[k]
if self.step % self.display == 0:
metrics_log = ''
for (k, v) in self.metric_cache.items():
val = v / self.display
if abs(val) > 1e-3:
metrics_log += ' - %s: %.4f' % (k, val)
else:
metrics_log += ' - %s: %.4e' % (k, val)
            print(f"step: {self.step}/{self.params['steps']} ... {metrics_log}")
self.metric_cache.clear()
The inspiration for the code is here. |
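Example usage (assuming the model, xs and ys from your own code; verbose=0 keeps Keras' per-epoch progress bar from interleaving with the per-step log):
out_batch = NBatchLogger(display=10)
model.fit(xs, ys, epochs=10, verbose=0, callbacks=[out_batch])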
H: (Python Basic) more elegant way of creating a dictionary
Is there a more elegant way to write a code like this?
my_dic = {
'Model':['Apple', 'Banana', 'Pineapple', 'Melon', 'Orange', 'Grape'],
'AAA':[
method1(y, y_Apple),
method1(y, y_Banana),
method1(y, y_Pineapple),
method1(y, y_Melon),
method1(y, y_Orange),
method1(y, y_Grape)]
,'BBB':[
method2(y, y_Apple),
method2(y, y_Banana),
method2(y, y_Pineapple),
method2(y, y_Melon),
method2(y, y_Orange),
method2(y, y_Grape)]
,'CCC':[
method3(y,y_Apple),
method3(y, y_Banana),
method3(y, y_Pineapple),
method3(y, y_Melon),
method3(y, y_Orange),
method3(y, y_Grape)]
,'DDD':[
method4(y, y_Apple),
method4(y, y_Banana),
method4(y, y_Pineapple),
method4(y, y_Melon),
method4(y, y_Orange),
method4(y, y_Grape)]
,'EEE':[
method5(y, y_Apple),
method5(y, y_Banana),
method5(y, y_Pineapple),
method5(y, y_Melon),
method5(y, y_Orange),
method5(y, y_Grape)]
}
AI: You can try:
my_dic = dict()
my_dic['Model'] = ['Apple', 'Banana', 'Pineapple', 'Melon', 'Orange', 'Grape']

y_list = [y_Apple, y_Banana, y_Pineapple, y_Melon, y_Orange, y_Grape]
# Pair each output key with the function that computes it
keys = zip(['AAA', 'BBB', 'CCC', 'DDD', 'EEE'],
           [method1, method2, method3, method4, method5])

for name, method in keys:
    my_dic[name] = [method(y, y2) for y2 in y_list]
H: What do you call a feature that always has the same value?
Is there a standard term for a feature that always has the same value, i.e. that can be discarded without loss of information?
For example I am trying to classify cats vs dogs, and every example in my training set has has_two_eyes=true.
I am thinking something like "useless," "redundant," "constant," or "degenerate," but I don't know what the standard terminology is here.
AI: I think there are a few terms, but the one I have seen most often is "Zero Variance Predictor" or "Zero Variance Feature" |
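In scikit-learn, such zero-variance features can be dropped automatically; a minimal sketch:
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([[1, 0.2, 5],
              [1, 0.4, 7],
              [1, 0.3, 6]])   # first column is constant (zero variance)

selector = VarianceThreshold(threshold=0.0)   # drop features with zero variance
X_reduced = selector.fit_transform(X)
print(X_reduced.shape)   # (3, 2): the constant column has been removed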
H: Validation Curve Interpretations for Decision Tree
I'm working on a machine learning class, and we're using supervised learning right now, starting with decision trees. I'm using the UCI Credit Card dataset (whether or not certain people will default in their payments due to past history).
Using a decision tree classifier for this attempt. Running a validation curve using scikit-learn, I'm getting a plot I'm not quite sure how to interpret. As you can see from the axes, the parameter is min_samples_leaf, and I'm varying it from 1 to 30 (by 2).
Based on this plot and some Googling, I believe the correct way to interpret this is that this dataset has high bias with no variance and nothing is really being learned. Or, in other words, decision trees are not a good algorithm for this dataset, since there doesn't really seem to be a trade-off.
For max depth, I'm getting a validation curve that looks like this:
Based on what I see here, there is quite a bit of bias at smaller depths, and more variance as the depth increases. GridSearchCV returns an ideal max_depth of 5 and min_samples_leaf of 19 (edit: corrected numbers). Those numbers seem to indicate a very high bias, and that there really is nothing to be learned here using decision trees.
Overall, based on the min_samples_leaf, I would hesitate to recommend a decision tree for this data set. However, the learning curve and the max_depth validation curve both seem to show there might be some value.
Puzzling to me is that the accuracy score (using metrics.accuracy_score and the ideal parameters from GridSearchCV) is 82%, which doesn't seem that bad. How do I reconcile these crappy validation curves with the accuracy score?
AI: A few comments on the top of my head:
both parameters min_samples_leaf and max_depth are not very important for decision trees, so it's not surprising not to see much variation (or none at all) across different values:
the fact that min_samples_leaf doesn't influence the performance simply means that the algorithm finds enough good predictors in the features to create leaves with a high number of instances, apparently always more than 30.
I'm less sure about max_depth, but I assume that it's a parameter which is used to prune the tree in order to avoid overfitting. This is exactly what happens here as soon as it's increased above 5: the algorithm creates a deeper (more complex) tree by using very specific conditions on the features which turn out to be specific to the training set, hence the divergence between training score and CV score (that's a clear sign of overfitting).
Value 5 for max_depth is indeed optimal, as seen on the second graph (due to overfitting with higher values). The value 19 returned by grid search for min_samples_leaf is as optimal as any other value between 1 and 30, as seen on the first graph: grid search just happened to pick this one but it doesn't have any impact anyway.
The 82% accuracy is completely normal: that's the Y axis on both graphs ;) It indeed looks like a decent performance, but we can't say for sure since there's no comparison: maybe the dataset is super easy and 82% is just the majority class, or maybe it's super hard and 82% is a great achievement.
Actually these graphs don't show much about the learning, they just show that the parameters studied are not really relevant. To observe something more interesting try:
Ablation study: pick a random subset of the training set instances and train only with this subset. Do this for say 10%, 20%, ..., 100% of the data, then plot the performance as a function of the size of the training set. This will show how many instances are needed for the model to reach its max performance (educated guess: not that many).
Feature selection: use only the N most informative features (as measured by information gain for instance) and plot the performance for different values of N. If you're lucky you might see an increase until a particular optimal value of N followed by a small decrease when uninformative features are added. If it happens, that last part would be due to overfitting. |
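As a sketch of the ablation study using scikit-learn's learning_curve (assuming X and y hold your features and labels; the parameter values are the ones grid search returned):
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

sizes, train_scores, cv_scores = learning_curve(
    DecisionTreeClassifier(max_depth=5, min_samples_leaf=19),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 10),
    cv=10)
print(sizes)
print(cv_scores.mean(axis=1))   # CV accuracy as a function of training set size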
H: Transform test data when using a persistent model
I'm quite new to data science and only slowly following the necessary steps to get valid results using scikit-learn. As far as I understand you fit and transform the training data and only transform the test data (using the parameters retrieved by the earlier fitting). For my project a persistent model is necessary, for that I export the trained model using joblib.
When applying the model on test data later, is there a way to retrieve the parameters (for transformation) generated during the training process?
AI: In the same way that you use joblib to persist your saved model, you should persist the transformers that you use in the pipeline too. So for example if you're using StandardScaler() and OneHotEncoder(), those also need to be joblib.dump()ed so you can import them into your prediction script.
The simplest way to achieve this is to add both transformers and estimator into a Scikit Learn pipeline and use joblib to persist that. |
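A minimal sketch of that idea (X_train, y_train and later X_new are assumed to exist; the scaler and the model are just examples):
import joblib
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Fit the transformer and the model together and persist them as one object
pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X_train, y_train)
joblib.dump(pipe, 'model_with_preprocessing.joblib')

# Later, in the prediction script: the loaded pipeline transforms new data
# with the parameters learned during training before predicting
pipe = joblib.load('model_with_preprocessing.joblib')
predictions = pipe.predict(X_new)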
H: (Feature Selection) different results from L2-based and Tree-based
I am doing feature selection using Sklearn:
Tree-based feature selection : RandomForestClassifier.feature_importances_
L2-based feature selection: LogisticRegression.coef_
Target variable is binary classes. The training set is standardized.
How should I interpret it when a certain feature shows significant importance in the random forest estimator but a negative coefficient in the logistic regression?
AI: A negative coefficient in logistic regression means a negative relationship between the predictor variable and the response variable.
Example: in a model, price can be a predictor and will have a negative relationship with the binary response variable "product purchased or not".
A negative coefficient in logistic regression does not mean the relationship has low strength; it only means changes in the predictor have a reverse effect on the response variable. If the coefficient is highly negative, the feature is very important and small changes in it impact the response, but in the reverse direction.
Feature importances do not tell you the nature or direction of the relationship, only its strength, so they are never negative. Coefficients in logistic regression tell you both the strength and the direction (positive or negative) of the relationship. Also, high importance in a random forest means a strong relationship between predictor and response, but the importances derived from tree-based models are silent on the direction.
Hope this helps. |
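In practice, if you want to compare the two purely on strength, compare the forest importances against the absolute values (not the signs) of the logistic regression coefficients; a sketch, assuming rf and lr are already fitted on the same standardized features:
import numpy as np

rf_strength  = rf.feature_importances_       # always non-negative: strength only
lr_strength  = np.abs(lr.coef_).ravel()      # strength: magnitude of the coefficient
lr_direction = np.sign(lr.coef_).ravel()     # direction: +1 positive, -1 negative relationship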
H: Checking if ML model is possible
How can I check if a machine learning model is feasible on a given dataset? What techniques (EDA, correlation, etc.) can be used to judge whether a model is possible, i.e. whether the data and predictor variables will give reasonably accurate forecasts, or in other words whether there is good enough signal in the predictors?
AI: When in doubt simply build a model and test how good it is (accuracy, MSE, whatever).
If those KPIs are not up to par, think about improving the parameters, feature engineering, etc.
I find this approach to be more valuable than to spend a lot of time analyzing correlations and other classical statistical analysis.
If simple models (Regression, GLM, RandomForest, XGB) do not produce results with any sensible accuracy and you see no path to successful feature engineering then you have your answer as well.
The only downside to this approach might be a perfectly good model on the first try ;) |
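A minimal sketch of that "just build a model and test it" approach, comparing against a trivial baseline (assuming X and y already exist):
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

baseline = cross_val_score(DummyClassifier(strategy='most_frequent'), X, y, cv=5).mean()
model    = cross_val_score(RandomForestClassifier(), X, y, cv=5).mean()
print(baseline, model)   # if the model barely beats the baseline, there is little signal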
H: Using a trained Model from Pickle
I trained and saved a model that should predict a son's height based on his father's height.
I then saved the model to Pickle.
I can now load the model and want to use it, but unfortunately a second variable is demanded from me (besides the height of the father). I think I did something wrong when training the model?
I will post the part of the code where I think the error is; please ask if you need more.
#Spliting the data into test and train data
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
#Doing a linear regression
lm=LinearRegression()
lm.fit(X_train,y_train)
#testing the model with an example value
TestValue = 65.0
filename = 'Father_Son_Height_Model.pckl'
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.predict(TestValue)
print(result)
The error message says:
ValueError: Expected 2D array, got scalar array instead:
array=65.0.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Thanks alot in advance.
AI: You need to use loaded_model.predict(...), not loaded_model.score(...): the latter is for evaluating the model's accuracy, and it would also require the true height of the son, which is the y value it asks for. In addition, as the error message says, predict expects a 2D array of shape (n_samples, n_features), so pass the input as loaded_model.predict([[TestValue]]) (or reshape it with np.array([TestValue]).reshape(-1, 1)).
H: How to save/load a Model (Pickle) with a specific path/directory
It seems like a very basic thing, but I couldn't find an answer to it.
I want to save my model to a specific directory using pickle.
The two algorithms below work fine for saving it in the same directory as the code itself but I want to save all my models in a dedicated folder.
I tried to just change the "filename" to a "filepath", i.e. make it a path, but the world isn't that easy, it seems.
Example Path: C:\Learning\Python\Data Science\02_TrainedModels
# save the model to disk
filename = 'Father_Son_Height_Model.pckl'
pickle.dump(lm, open(filename, 'wb'))
filename = 'Father_Son_Height_Model.pckl'
loaded_model = pickle.load(open(filename, 'rb'))
With This Code:
# save the model to disk
filepath = r'H:\99_Lernen\Python\Data Science\02_Trained Models\Father_Son_Height_Model.pckl'
pickle.dump(lm, open(filepath, 'wb'))
I get this Error:
FileNotFoundError: [Errno 2] No such file or directory: 'H:\99_Lernen\Python\Data Science\02_Trained Models\Father_Son_Height_Model.pckl'
In this line of code:
pickle.dump(lm, open(filepath, 'wb'))
AI: The "\" escapes the following sign when parsed, so the path cannot be read.
Use "/" instead and it should work. Also this question was probably more of a SO question ;) |
H: How to validate a clustering model without a ground truth?
I'm dealing with a dataset (text messages about source code comments) that is not labeled. I don't have any assumptions about the implicit classes in this dataset. I want to discover (by clustering) the common hidden patterns shared by groups of messages. This is an unsupervised learning problem. I was asked how I will validate this method (pattern discovery, clusters) without a dataset of correct answers to compare the output of the model against "reality". I am also not a specialist in the domain of the message dataset, so I cannot manually inspect and label the data. So, how should I approach this question or provide a scientific explanation of the model output? How can I show that the clusters generated by the model are reasonable or correct?
AI: In my opinion there are two ways:
Ask a few experts to assess the quality of the clusters based on a sample (after the clustering has been done, much easier than pre-annotating the whole data especially in the case of clustering)
If the clustering is done in the perspective of using the result in another task, the performance of this other task will reflect the quality of the clustering.
Imho any measure based on the distance between clusters or other technical measure would be a flawed evaluation, because it would depend on the quality of the representation. Such measures might provide some useful indications though, just not a proper evaluation for the task. |
H: shapes (127,1) and (13,) not aligned: 1 (dim 1) != 13 (dim 0)
I am trying to find the score of a linear regression.
It gives me the error below; my code follows.
from sklearn import datasets
bostan=datasets.load_boston()
x=bostan.data
y=bostan.target
from sklearn import preprocessing
x_scale=preprocessing.scale(x)
from sklearn import model_selection
x_train,x_test,y_train,y_test=model_selection.train_test_split(x_scale,y)
from sklearn.linear_model import LinearRegression
clf=LinearRegression()
clf.fit(x_train,y_train)
y_pred=clf.predict(x_test)
y_pred=y_pred.reshape(-1,1)
y_test=y_test.reshape(-1,1)
y=clf.score(y_pred,y_test) # problem is in this line
AI: The score method of the classifier object does not work the way you are trying it to. You need to directly give x_test as input and that it will calculate y_pred on its own and give you the result with y_test. So, you do not need to reshape and the correct syntax would be:
y = clf.score(x_test, y_test)
If for any reason you need to calculate the score using y_pred then take a look at sklearn.metrics module. |
H: Meaningful and Non-Meaningful Data?
I can understand what meaningful data is: it's important information that can be used to evaluate something. But I don't get what non-meaningful data is. Is it less important data?
AI: "meaningful" is a vague word anyway, but yes you got the idea: in the context of a particular task meaningful data is the information which contributes to solving the task. Non-meaningful is the opposite, so information which doesn't help for the task. Sometimes non-meaningful data makes it harder to the ML algorithm to capture the relevant information, since it has to correctly identify what is meaningful and what is not (as opposed to the case where it's provided only with relevant information).
Example: if you want to predict how fast a car can go, engine type and manufacturer are meaningful data. Colour is not. |
H: Classification vs Regression Model what should I choose?
I am working on a problem like "customer next-month revenue prediction". Here revenue will be the target variable. We also segment the customers based on their revenue (e.g. if they give less than 200 they will be in category 'A', else 'B'). I have to predict both (probable revenue + category). What will be the right approach: choose a regression model, predict the revenue and then categorize, or should I choose separate models (regression for revenue, classification for category)?
AI: The segmentation sounds rather arbitrary: if it is simply a reflection of the revenue you are trying to predict, rather than having been derived from other objective data that then predicts revenue, it is unlikely to add very much (though obviously past revenue is a good predictor of future revenue; the question is whether the segmentation improves the prediction vs just using the past data).
So if it were me I would be inclined to do the following:
(1) try some dimension reduction and/or clustering to explore whether or not you really have meaningful customer segments rather than just arbitrary cutoffs by revenue.
(2) if you have a good segmentation, try creating a customer classifier to predict segment membership
(3) if your classifier has some value, use the (non-linear) segment prediction as an additional input to a regression to predict revenue.
(4) validate and/or test the model against some reserved data not just the main data set.
With any regression though correct model specification is vital. So the first thing you need to do is explore the dataset (plot lots of stuff against lots of other stuff, look at some distributions, etc) to try and understand its structure before you specify a model.
Particularly, if your primary data source is the time series of past sales, you probably need to look at including mean-reversion and momentum terms in the specification (see ARMA, ARIMA, etc. models).
(5) It would also be fun to try a Recurrent Neural Network, though I have never done this myself. |
H: Best way to visualize huge amount of data
I have a data set of around 3M rows. It has only 2 categories (with a 2:1 ratio). Now I want to visualize (scatter plot) its distribution to understand whether the data is linearly separable or not (in order to choose the model type). I already tried this and the plot is not understandable. What would be the best way to visualize this data set?
AI: I have three suggestions that may help.
Reduce the point size
Make the points highly transparent
Downsample the points
Since you do not provide any sample data, I will use some random data to illustrate.
## The purpose of S1 is to intermix the two classes at random
S1 = sample(3000000)
x = c(rnorm(2000000, 0, 1), rnorm(1000000, 3,1))[S1]
y = c(rnorm(2000000, 0, 1), rnorm(1000000, 3,1))[S1]
z = c(rep(1,2000000), rep(2,1000000))[S1]
plot(x,y, pch=20, col=rainbow(3)[z])
The base plot without any adjustments is not very nice. Let's apply suggestions 1 and 2.
plot(x,y, pch=20, cex=0.4, col=rainbow(3, alpha=0.01)[z])
Reducing the point size and making the points highly transparent helps some. This gives a better idea of the overlap between the two distributions.
If we downsample, we don't need quite as much transparency.
## The purpose of S2 is to downsample the data
S2 = sample(3000000, 100000)
plot(x[S2],y[S2], pch=20, cex=0.4, col=rainbow(3, alpha=0.1)[z[S2]])
This gives a different view that provides a similar, but not identical understanding of the two distributions.
These are not magic, but I think that they are helpful. |
H: Confusion for considering accuracy or standard deviation in selecting the best parameters
I have a model with various parameters to test.
The size of the dataset I have is not really large (~500 documents).
My issue is that when I test the parameters using 10-fold CV, some of them produce a high accuracy value but the standard deviation across the folds (of the fold accuracy values) is high.
ex.
Model setup 1: acc: 0.81, STD: 0.23
Model setup 2: acc: 0.76, STD: 0.05
Setup 1 has higher accuracy but the std is high, where setup 2 has lower accuracy but with more stable results.
Thus, how can I pick the best model?
AI: You are perfectly right to pay attention to the std dev across CV folds, especially with a small dataset. As you observed, different models show different values for the performance but also for the std dev, so you have to arbitrate a tradeoff between performance and stability:
The safe option is to choose the model with lower accuracy and low variance. It might not always perform optimally but at least it won't perform too bad.
The risky option is the high accuracy, high variance model: in average it will perform best, but you have a higher risk that it actually performs poorly.
This choice depends on the context, i.e. what the model is intended for. |
H: Why does the neural network keep giving out the same output for every input?
I made a neural network using TensorFlow's Keras that is supposed to match an IP to one of 7 types of vulnerabilities and output what type of vulnerability that IP has.
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(50, activation=tf.nn.relu),
tf.keras.layers.Dense(7, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(xs, ys, epochs=500)
xs is the list of IPs and ys is its corresponding vulnerability from 0 to 6 (seven in total).
The output for this remains the same for every input, i.e.,:
[[0.22258884 0.20329571 0.36828393 0.11352853 0.04444532 0.02388807 0.02396955]]
AI: Most probably, this is because your model is way too simple - a single 50-node hidden layer for a 7-class problem does not sound adequate.
Try adding more hidden layers (and not quite sure if you really need a Flatten layer just after the input), e.g.,:
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(100, activation=tf.nn.relu),
tf.keras.layers.Dense(50, activation=tf.nn.relu),
tf.keras.layers.Dense(50, activation=tf.nn.relu),
tf.keras.layers.Dense(7, activation=tf.nn.softmax)
])
Experiment with the exact number and size of the layers. |
H: Unsupervised Clustering for n-length word arrays
I have a series of arrays
[Apple,Banana,Cherry,Date]
[Apple,Fig,Grape]
[Banana,Cherry,Date,Elderberry]
[Fig,Grape]
and I would like to build some clusters that associate the arrays into groups based on overlap
Group1: Array1 and Array3 as they have 3 common words
Group2: Array2 and Array4 as they have 2 common words
etc..
I was thinking kmeans but there is really not a distance calculation - more like an overlap one.
Does anyone have a suggestions?
Thanks!
AI: Assuming the dimensionality is reasonable, I would not use K-means or any generic algorithm, instead I would write a code which directly gives me the exact result by building a map of the groups:
// Assuming data is an array of size N containing all the arrays
// clusters is a map associating each group with a set of arrays
for i=0 to N-1
for j=i+1 to N-1
group = overlap(data[i], data[j])
add data[i] to the set clusters[group]
add data[j] to the set clusters[group]
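If you work in Python, a minimal sketch of this first idea could look like the following (this is my own illustration, using frozensets of shared words as group keys, not a definitive implementation):
from collections import defaultdict

data = [
    ["Apple", "Banana", "Cherry", "Date"],
    ["Apple", "Fig", "Grape"],
    ["Banana", "Cherry", "Date", "Elderberry"],
    ["Fig", "Grape"],
]

clusters = defaultdict(set)  # overlap (frozenset of words) -> indices of arrays sharing it
n = len(data)
for i in range(n):
    for j in range(i + 1, n):
        group = frozenset(data[i]) & frozenset(data[j])
        if group:  # skip pairs with no common words
            clusters[group].add(i)
            clusters[group].add(j)

for group, members in clusters.items():
    print(sorted(group), "->", sorted(members))
With the arrays from the question this recovers the {Banana, Cherry, Date} and {Fig, Grape} groups (plus a small {Apple} group shared by the first two arrays).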
An alternative version if the number of different values and size of the sets allow it and/or if it's possible to precompute the groups of interest:
for i=0 to N-1
for every subset S of data[i]
add data[i] to the set clusters[S] |
H: Explanation behind the calculation of training loss in deep learning model
I am trying to model an image classification problem using a convolutional neural network. I came across some code on GitHub in which I am not able to understand the meaning of the following line for the loss calculation in the training loop.
I am omitting most of the detail and only showing the relevant code:
for batch_idx, (data, target) in enumerate(final_train_loader):
loss = criterion(output,target)
#Idea behind the below line
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
Cross-entropy loss function is being used here.
AI: The line you're asking about
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
is basically calculating the average train_loss for the finished batches
To illustrate, suppose 4 batches have been done (with average loss named avg_loss) and current is calculated from 5th batch (with loss named new_loss)
The new average loss is from
$\frac {4 \times \text{avg_loss} + \text{new_loss}} {5}$
This is exactly the same as
$\text{avg_loss} + \frac {\text{new_loss} - \text{avg_loss}} {5}$
which is the calculation done by the code |
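A quick numeric check (with made-up per-batch losses) shows the running update indeed reproduces the plain mean:
losses = [0.9, 0.7, 0.5, 0.4, 0.3]  # hypothetical per-batch losses
train_loss = 0.0
for batch_idx, loss in enumerate(losses):
    train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss - train_loss))

print(train_loss)                  # ~0.56
print(sum(losses) / len(losses))   # 0.56, the plain average of the five losses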
H: How to Avoid rarely used discrete feature values in a dataset
On Google's ML crash course it states:
Good feature values should appear more than 5 or so times in a data
set. Doing so enables a model to learn how this feature value relates
to the label. That is, having many examples with the same discrete
value gives the model a chance to see the feature in different
settings, and in turn, determine when it's a good predictor for the
label.
This makes sense, but if that happens what should we do? For example, consider a dataset with street_name as a feature: out of 2000 rows, only 4 of them have street_name equal to "Learning St.".
Should we remove those rows containing rare feature values? Or what?
Any insights would be greatly appreciated.
AI: Working with Categorical Features can raise a few challenges. For instance, you might encounter features with high cardinality in certain values or the other case (your case), features with rare categories.
The first thing you can consider is, what you will lose if you drop those rows. What extra info do you get with keeping the rare categories. Here you probably need the domain knowledge of the project to identify the impact of the specific data. If your dataset is small or you don't feel safe to remove those rows, then you should look for alternatives.
The second approach is to group all the rare categories into one new category called "Rare" or "Other". So, whenever you have a record with a rare value, you replace it with the new one.
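For illustration, a small pandas sketch of this grouping step (the column name and the threshold of 5 are just assumptions):
import pandas as pd

df = pd.DataFrame({"street_name": ["Main St."] * 10 + ["Learning St."] * 4})

counts = df["street_name"].value_counts()
rare = counts[counts < 5].index          # categories seen fewer than 5 times
df["street_name"] = df["street_name"].where(
    ~df["street_name"].isin(rare), "Other"
)

print(df["street_name"].value_counts())  # Main St.: 10, Other: 4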
Finally, you can try either a dimensionality reduction such as PCA, or feature hashing, where you map multiple categories into one new value.
H: Interpreting fraction of zero weights in TensorFlow
I am using the TensorFlow to do a simple linear classification using logistic regression. The graph included from the TensorBoard displays what they call the fraction of zero weights. How do I interpret this in terms of model evaluation? I am assuming this is good since I got the good results in terms of loss, precision, recall, etc but not sure.
Thank you.
AI: If you are performing linear (logistic) regression your weights are simply your $\beta_i$. If none of them are $0$ that simply means all features are 'important' to some degree. |
H: ValueError: Expected 2D array, got 1D array instead:
I was following this example online for simple text classification
And when I create the classifier object like this
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train', shuffle=True)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
X_train_counts.shape
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)
and run predict on the test data it gives an error
import numpy as np
twenty_test = fetch_20newsgroups(subset='test', shuffle=True)
predicted = clf.predict(twenty_test.data)
np.mean(predicted == twenty_test.target)
ValueError: Expected 2D array, got 1D array instead:
But when I do the same thing using Pipeline it works as in the example
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
text_clf = text_clf.fit(twenty_train.data, twenty_train.target)
twenty_test = fetch_20newsgroups(subset='test', shuffle=True)
predicted = text_clf.predict(twenty_test.data)
np.mean(predicted == twenty_test.target)
Out[37]: 0.7738980350504514
Why is that?
AI: It seems to be because the predict method on your Pipeline object requires the input to match the input of the first object in your pipeline, which is the CountVectorizer. It any case, it only requires an iterable object, which your 1d array indeed is.
The classifier you train without the pipeline ends up being a MultinomialNB object, whose predict method actually requires a 2d array of shape num_samples * num_features.
The Pipeline works because it applies the fitted CountVectorizer and TfidfTransformer to the test documents before calling the classifier. Without the pipeline, you need to pass your test data through each of those individual steps manually before feeding anything into the final classifier.
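For example, reusing the count_vect, tfidf_transformer and clf objects already fitted in your code:
X_test_counts = count_vect.transform(twenty_test.data)    # transform, not fit_transform
X_test_tfidf = tfidf_transformer.transform(X_test_counts)

predicted = clf.predict(X_test_tfidf)
np.mean(predicted == twenty_test.target)
This is exactly what the Pipeline does for you behind the scenes.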
H: XGBoost vs ARIMA for Time Series analysis
Doing time series analysis, I have doubts about choosing the right model. I want to predict the next 30-minute window from an input dataset which contains the error count for each 1-minute interval.
Should I use XGBoost or ARIMA regressions?
Most of the articles and tutorials I found online use ARIMA for time series, while XGBoost is used more in Kaggle competitions.
AI: Usually, ARIMA regressions are used in classical statistical approaches, when the goal is not just prediction, but also understanding how different explanatory variables relate to the dependent variable and to each other. ARIMA models are designed specifically for time series data.
On the contrary, XGBoost models are used in pure Machine Learning approaches, where we exclusively care about quality of prediction.
XGBoost regressors can be used for time series forecast (an example is this Kaggle kernel), even though they are not specifically meant for long term forecasts. But they can work.
In conclusion, I don't know your data, but if I had to bet I'd go for ARIMA. However both could work great, and I suggest you to try both and pick the best for your particular need. They are comparatively quick to implement, it shouldn't take too much time. |
H: Machine learning method to predict event date
Let's say I have a big dataset consisting of variables including but not limited to the start/end date of loans, their notional amount, a loan prepayment indicator etc.
My goal is to create a model that will be trained on past data in order to predict the prepayment date of current loans and I was wondering which ML method would be most suitable for this case. My first thought was to handle this as a classification problem, using interval dates to predict a prepayment date interval, but I believe that there should be a more robust & sophisticated approach to this.
AI: You can both utilize regression and classification models in here.
Also, I suggest log-transforming your financial features (e.g. debt, last credit risk, average loan amount, etc.) when using algorithms other than decision-tree-based models (since trees are not affected by monotonic transformations).
Classification
You can construct a model that predicts whether a customer will pay within "x" days after the payment date or not. In this setup you need to find the best "x" by comparing the results of models built for different values of x. You also need to find the optimal confidence range for each model in the comparison.
Regression
In this setting, you will predict how many days after the payment date a customer will pay. Again you should obtain confidence intervals: if you predict 3 days for a customer, it means she will pay 3 days after the payment date (it can be earlier for negative predictions). However, if you know that your model is accurate within a ±2 day interval, then you should say this particular customer will pay 1 to 5 days after the payment date.
Algorithms
I suggest XGBoost, LightGBM and RNNs. But it's not clear-cut: maybe your feature space is roughly linear and converges fast with SVMs, linear fits, etc. Still, the models above are pretty good, Kaggle-winning learners.
I hope it helps. I think it will be a good starting point.
H: How to find similar points to a positive set when you don't have any negative set?
The task I usually perform is the following. A client comes to see me with a set of clients (called positive companies) and wants me to find other similar prospects. Usually, he also gives me a set of negative companies, and I have a big set of potential companies (which I call the basket).
I perform this task by training an AdaBoost classifier on the positives and negatives. I then run this classifier on the basket. Each company in the basket receives a score, and the highest scores show the most promising prospects for the client.
Now, a new client doesn't have any set of negatives to give and I'm a little bit lost. I can not do a supervised learning anymore, obviously. I first thought of performing a k-nearest neighbours on each positive and I would receive a list of "close" prospects. The problem with that is that I don't have a score anymore.
Furthermore, with the k-nearest method, I should define a distance which I don't like because I don't want to give subjective weights to features. Indeed, the Adaboost classifier would learn some weights and would itself predict which features are important.
Could someone indicate me how I could tackle this problem?
AI: To summarize, you have labeled data in one class (positive) and unlabeled data. You want to find the positive examples in the unlabeled data. The general name for this setting in machine learning is one-class classification, which is a fairly broad field.
A sub-area that is particularly relevant is positive-unlabeled learning, which is the problem of training a classifier when one has just positive and unlabeled data.
Also note that you have all the examples that need to be predicted at training time. Therefore you can use a transductive learning algorithm. Particularly, if you have a notion of which companies are similar, you could construct a graph by connecting similar companies by edges. You could then run a graph propagation algorithm that would assign scores to the unlabeled items.
Finally, here is a similar question where the answer suggests a method of positive-unlabeled learning. |
H: what arguments should I pass to dbscan or optic in order to divide the data in a specific way
I have thousands of very similar datasets that need to be divided in a diagonal way into two groups.
for example:
and
I tried to play with the arguments of DBSCAN and OPTICS, such as epsilon and minPoints, and even the metric, and none of them helped me divide the data properly into 2 groups.
I only succeeded in dividing the data using DBSCAN when I first removed the noise between these groups to make them 2 completely separate groups, which I did using a histogram:
j = 1
hist, bin_edges = np.histogram(data, bins=500)
max_bin = np.where(np.amax(hist) == hist)[0][0]
max_noise = bin_edges[max_bin+j]
filtered_indicies = data > max_noise
data = data[filtered_indicies]
These lines remove noise from the data between the groups, but also around them when
j > 1
and that causes me to remove necessary data that I need to reprocess later.
So, going back to my main question: how can I know which epsilon, minPoints or other DBSCAN arguments can help me divide this data properly?
Or is there maybe a better way than what I presented above (the histogram) to remove the noise between these groups without removing necessary data?
AI: Start with what the parameters mean.
$\varepsilon$ is the search radius around each point. You need this search radius to be small enough that it can't fully "bridge the gap" between the clusters. If the gap width is variable, $\varepsilon$ needs to be small enough to accommodate the narrowest gap. Note that we can exclude the occasional straying point from consideration when determining the gap width as minPoints can handle a small amount of "bleed" into the gap. However, by inspecting the data you can see that making $\varepsilon$ smaller than the smallest gap would cause the points on the far right side to be excluded from the right cluster.
minPoints is the minimum number of points within $\varepsilon$ to point $x$ in cluster $c$ for $x$ to be included in $c$. When we determined the minimum gap width, we allowed a few points from each cluster to cross into the gap. minPoints must be large enough that those stray points don't bridge the clusters. Specifically, if there are $n$ points from cluster $c1$ within $\varepsilon$ of a point in cluster $c2$, minPoints must be greater than $n$. Note that like with $\varepsilon$, setting minPoints large enough to keep the primary clusters separate would cause some points on the edges to be excluded from these clusters.
DBSCAN can't cleanly cluster all points into the two clusters that you want because it connects using local connectivity, not global coherence. Even one bridge from one bunch of points to the other groups them into a single cluster. In order to separate the two clusters, there will be outliers. If outliers are unacceptable, you could join them to the two main clusters using a simple heuristic, such as joining to whichever cluster has a closer center to the outlier. If post-processing is unacceptable, you might want to use a global clustering method such as spectral clustering; you might also have success with mean shift. |
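As a rough sketch of that post-processing heuristic (the data and the eps/min_samples values here are placeholders that you would still need to tune as discussed above):
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(1000, 2)  # stand-in for your points

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(X)

# assign each noise point (label -1) to the cluster with the closest centre
centers = {k: X[labels == k].mean(axis=0) for k in set(labels) if k != -1}
if centers:
    for i in np.where(labels == -1)[0]:
        labels[i] = min(centers, key=lambda k: np.linalg.norm(X[i] - centers[k]))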
H: SMOTE vs SMOTE-NC for binary classifier with categorical and numeric data
I am using Xgboost for classification. My y is 0 or 1 (true or false). I have categorical and numeric features, so theoretically, I need to use SMOTE-NC instead of SMOTE. However, I get better results with SMOTE.
Could anyone explain why this is happening?
Also, if I use some encoder (BinaryEncoder, one hot, etc.) for categorical data, do I need to use SMOTE-NC after encoding, or before?
I copied my example code (x and y is after cleaning, include BinaryEncoder).
X_train, X_val, y_train, y_val = train_test_split(x, y, test_size=0.2, random_state=1)
smt = SMOTE()
X_resampled, y_resampled = smt.fit_resample(X_train, y_train)
params_model1 = {
'booster': ['dart', 'gbtree', 'gblinear'],
'learning_rate': [0.001, 0.01, 0.05, 0.1],
'min_child_weight': [1, 5, 10, 15, 20],
'gamma': [0, 0.5, 1, 1.5, 2, 5],
'subsample': [0.6, 0.8, 1.0],
'colsample_bytree': [0.6, 0.8, 1.0],
'max_depth': [3, 4, 5, 6, 7, 8],
'max_delta_step': [0, 1, 2, 3, 5, 10],
'base_score': [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65],
'reg_alpha': [0, 0.5, 1, 1.5, 2],
'reg_lambda': [0, 0.5, 1, 1.5, 2],
'n_estimators': [100, 200, 500]
}
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=1001)
xgb = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.3, gamma=1,
learning_rate=0.1, max_delta_step=0, max_depth=10,
min_child_weight=5, missing=None, n_estimators=1000, n_jobs=1,
nthread=None, objective='binary:logistic', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=0.8, verbosity=1)
scoring = 'f1'
rs_xgb = RandomizedSearchCV(xgb, param_distributions=params_model1, n_iter=1,
scoring=scoring, n_jobs=4, cv=skf.split(X_resampled, y_resampled), verbose=3,
random_state=1001)
rs_xgb.fit(X_resampled, y_resampled)
refit = rs_xgb.best_estimator_
joblib.dump(refit, 'validator1.pkl')
loaded_xgb = joblib.load('validator1.pkl')
y_predict = loaded_xgb.predict(X_val.as_matrix())
print(confusion_matrix(y_val, y_predict))
print("Final result " + str(f1_score(y_val, y_predict)))
AI: You have to keep in mind that machine learning is still largely an empirical field, full of ad-hoc approaches that, while they happen to work well in most cases, they lack a theoretical explanation as to why they do so.
SMOTE arguably falls under this category; there is absolutely no guarantee (theoretical or otherwise) that SMOTE-NC will work better for your data compared to SMOTE, or even that SMOTE will perform better compared with much simpler approaches, like oversampling/undersampling. Quoting from section 6.1 on SMOTE-NC of the original SMOTE paper (emphasis added):
SMOTE-NC with the Adult dataset differs from our typical result: it performs worse than plain under-sampling based on AUC. [...] even SMOTE with only continuous features applied to the Adult dataset, does not achieve any better performance than plain under-sampling.
The authors proceed to offer some possible explanations as to why they see such not-typical performance with SMOTE/SMOTE-NC on the said dataset, but as you will see this has to do with a deep focus on the dataset itself and its characteristics, and it is itself rather empirical in nature and hardly "theoretical".
Bottom line: there is not really much to be explained here regarding your question; any further detail will require going deep with the specific characteristics of your dataset, which of course is not possible here. But I would suggest to not bother, and continue guided by your experimental results, rather than by any (practically non-existent) theory on the subject... |
H: How to test hypothesis?
I have a table named app_satisfaction which has user_id, satisfaction, # of people they've invited.
I've grouped by satisfaction and found that, on average:
people in the satisfaction = "BAD" group invited 2.25 people,
the "GOOD" group invited 2.09 people, and
the "EXCELLENT" group invited 1.89 people.
So my hypothesis is that people who dislike the app are more likely to invite people, since inviting people gives them a free coupon and they do not like to spend their own money on an app they dislike.
I have a problem: just by looking at the average invites in each group, it seems unreasonable to draw a conclusion. Also, there are more people in the "GOOD" and "EXCELLENT" groups compared to the "BAD" group.
How can I test my hypothesis? what are the approaches one might take in real world problems?
AI: As far as I understand, you have factors ("bad", "good", etc.) and a continuous "invitations" variable. If you want to compare two groups you could use a two-sample test, e.g. the Wilcoxon rank sum test (a non-parametric alternative to the t-test). If you want to compare all of the groups, you could use a simple linear regression of the form:
$$ invitations = \beta_0 satisfaction_1 + \beta_1 satisfaction_2 + ... + u.$$
R example:
library("e1071")
iris = iris
table(iris$Species)
#iris = iris[!(iris$Species=="versicolor"),]
library(dplyr)
iris %>%
group_by(Species) %>%
summarise_at(vars(Sepal.Length), funs(mean(., na.rm=TRUE)))
Result (means):
# A tibble: 3 x 2
Species Sepal.Length
<fct> <dbl>
1 setosa 5.01
2 versicolor 5.94
3 virginica 6.59
Compare two groups:
# Two-samples Wilcoxon test
wilcox.test(iris$Sepal.Length[iris$Species=="setosa"], iris$Sepal.Length[iris$Species=="virginica"])
# The p-value is less than the significance level alpha = 0.05. We can conclude that Sepal Length is significantly different
Result:
Wilcoxon rank sum test with continuity correction
data: iris$Sepal.Length[iris$Species == "setosa"] and iris$Sepal.Length[iris$Species == "virginica"]
W = 38.5, p-value < 2.2e-16
alternative hypothesis: true location shift is not equal to 0
Regression:
# Simple linear regression
summary(lm(Sepal.Length~Species, data=iris))
# p-values are smaller than 0.05 which means each factor's contribution is statistically different from the intercept
Result:
Call:
lm(formula = Sepal.Length ~ Species, data = iris)
Residuals:
Min 1Q Median 3Q Max
-1.6880 -0.3285 -0.0060 0.3120 1.3120
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.0060 0.0728 68.762 < 2e-16 ***
Speciesversicolor 0.9300 0.1030 9.033 8.77e-16 ***
Speciesvirginica 1.5820 0.1030 15.366 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.5148 on 147 degrees of freedom
Multiple R-squared: 0.6187, Adjusted R-squared: 0.6135
F-statistic: 119.3 on 2 and 147 DF, p-value: < 2.2e-16
The interesting bit here is Pr(>|t|). If the number in this column is smaller than 0.05, you can say that the factor is significantly different from the intercept (which is the base category, in this case "setosa").
In this application, the column Estimate directly gives you the mean of "setosa" for the intercept. The effect for "versicolor" is 0.9300, where 5.0060+0.9300=5.936, which is the mean for "versicolor" and so on. |
H: how to use word embedding to do document classification etc?
I have just started learning NLP technology, such as GPT, BERT, XLNet, word2vec, GloVe, etc. I try my best to read papers and check source code, but I still cannot understand them very well.
When we use word2vec or GloVe to transform a word into a vector, it is like:
[0.1,0.1,0.2...]
So, one document should be like:
[0.1,0.1,0.2...]
[0.1,0.05,0.1...]
[0.1,0.1,0.3...]
[0.1,0.15,0.1...]
.......
So, one document is a matrix. If I want to use some traditional method like random forest to classify documents, how can I use such data? I was told that BERT or other NLP models can do this. But I am really curious about how word embeddings are applied in traditional methods.
AI: So, one document is a matrix. If I want to use some traditional method like random forest to classify documents, how to use such data?
You can't, at least not directly because traditional methods require a fixed number of features for every instance. In the case of document classification the instance must represent the document, so unless all the documents have exactly the same length (unrealistic) it's impossible to use a set of vectors as features.
The traditional approach would consist in representing a document with a vector where each cell represents a word in the vocabulary, and the value is for instance the TFIDF weight of the word in the document. |
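As a minimal sketch of this traditional fixed-length representation (toy documents and made-up labels, using scikit-learn):
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "dogs are great pets", "cats and dogs play"]
labels = [0, 1, 1]  # made-up classes

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)   # one row per document, one column per vocabulary word
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

print(clf.predict(vectorizer.transform(["a dog on a mat"])))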
H: Analyzing Sentiments of Financial News related to a Company
I'm trying to build a model which gives me the sentiments of the Financial News related to a company and I want to predict the stock price accordingly. But the major problem that I'm facing is understanding the news for the counterpart.
Let's say I have a news headline as "Total is under pressure and CEO has confirmed that they are planning to sell stakes soon". My model will always give negative sentiment which is correct, but this might actually be a good news for Shell or Exxon, lets say. But how do I tell my model that it is actually a good news for Shell.
Also, is there any good process to understand which news relates to which companies and how I can calculate the sentiment accordingly? Maybe a good labelled data-set or pre-trained architecture which might help me out?
P.S. Most important, is there any labelled dataset or any other pre-trained architecture which I can use to calculate the sentiments of financial news?
AI: But how do I tell my model that it is actually a good news for Shell.
You would probably need a complex semantic analysis from which you could build a knowledge graph, from which you could extract the logical implications of one news for other entities not mentioned in the news.
P.S. Most important, is there any labelled dataset or any other pre-trained architecture which I can use to calculate the sentiments of financial news?
You could ask on https://opendata.stackexchange.com/.
As far as I know, really good resources (data and algorithms) for financial applications are very expensive and usually kept secret by the biggest financial institutions. |
H: Predicted and true values distributions comparison
Is it alarming when the distribution of predicted values differs from the distribution of true values? I use XGBRegressor and get the following plots.
Using a Box-Cox transform doesn't improve the situation.
My data is spatio-temporal. I am making a cash-flow forecast for a city, and time is treated as 12 features corresponding to months that I feed to XGBoost.
The figure shows data for one year.
AI: I don't know the method you're using but I suspect that what you observe here is a common problem with supervised learning: models tend to favour predictions close to the mean, that is avoid extreme predictions because these are usually more risky (higher loss if it's a mistake). As a consequence the std deviation of the predictions is often significantly smaller than the s.d. of the ground truth.
Afaik there's no perfect solution. Typically you could try to encourage risky predictions a bit more in the loss function, if that's an option with your method. But in most applications it's safer to learn to live with this issue. |
H: Multivariate Regression Error “AttributeError: 'numpy.ndarray' object has no attribute 'columns'”
I'm trying to run a multivariate linear regression but I'm getting an error when trying to get the coefficients of the regression model.
The error I'm getting is this: AttributeError: 'numpy.ndarray' object has no attribute 'columns'
Here's the code I'm using:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as seabornInstance
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
%matplotlib inline
# Main files
dataset = pd.read_csv('namaste_econ_model.csv')
dataset.shape
dataset.describe()
dataset.isnull().any()
#Dividing data into "attributes" and "labels". X variable contains all the attributes and y variable contains labels.
X = dataset[['Read?', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6' , 'x7','x8','x9','x10','x11','x12','x13','x14','x15','x16','x17','x18','x19','x20','x21','x22','x23','x24','x25','x26','x27','x28','x29','x30','x31','x32','x33','x34','x35','x36','x37','x38','x39','x40','x41','x42','x43','x44','x45','x46','x47']].values
y = dataset['Change in Profit (BP)'].values
plt.figure(figsize=(15,10))
plt.tight_layout()
seabornInstance.distplot(dataset['Change in Profit (BP)'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
coeff_df
Full error:
Traceback (most recent call last):
File "", line 14, in coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
AttributeError: 'numpy.ndarray' object has no attribute 'columns'
Any help on this will be highly appreciated!
AI: Using .values on a pandas dataframe gives you a numpy array. This will not contain column names and such. You do this when setting X like this:
X = dataset[['Read?', 'x1', .. ,'x47']].values
But then you try to get the column names from X (which it does not have) by writing X.columns here:
coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
So store your column names in a variable or input them again, like this:
coeff_df = pd.DataFrame(regressor.coef_, ['Read?', 'x1', .. ,'x47'], columns=['Coefficient']) |
H: Classifying Letters using CNN - Help
So, some context: I'm trying to develop an OCR system (for fun). For that reason I decided to first find text within a page, parse it into letters, and from there try to classify the extracted letters one by one.
For the classification I'm trying to build a CNN based upon the following database:
http://www.ee.surrey.ac.uk/CVSSP/demos/chars74k/
and the following CNN architecture:
# CNN architecture
input = Input(shape=(32, 32, 1))
conv1 = Conv2D(32, (3, 3))(input)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
drop1 = Dropout(0.5)(pool1)
conv2 = Conv2D(64, (3, 3))(drop1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
drop2 = Dropout(0.5)(pool2)
flatten = Flatten()(drop2)
fully_connected = Dense(512, activation='relu')(flatten)
output = Dense(62)(fully_connected)
model = Model(inputs=input, outputs=output)
model.compile(loss='sparse_categorical_crossentropy',optimizer='Adam', metrics=['accuracy'])
print(model.summary())
model.fit(X, y, epochs=10 , validation_split=0.3)
However, all I seem to get, no matter what I try (epochs, batch size, validation split, etc.), is 0 accuracy... So assuming my database and labels are fine, what could be going wrong?
AI: So there are some things you could consider.
First of all, with the loss you chose (sparse_categorical_crossentropy), the 62 outputs of the network are treated as a probability distribution over the classes: each position stands for one class and the values should sum to one.
However, your last Dense(62) layer has a linear activation, so its outputs are not constrained to be probabilities. The first thing I would change is the activation function of the last layer to softmax (or keep it linear and use tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) as the loss).
Other things to consider are data preprocessing steps like normalizing the pixel values to the 0–1 range. While having no data preprocessing at all can lead to unstable training, it is unlikely to cause no learning at all.
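For instance, a minimal change to the model from the question (everything else unchanged):
# probabilities over the 62 character classes
output = Dense(62, activation='softmax')(fully_connected)

# and, as a preprocessing step, scale the pixel values (assuming X holds 0-255 grayscale images)
X = X / 255.0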
H: Forecasting time series outside the training/test set
I am trying to predict a time series based on previous values using an LSTM.
I have pretty good results when I compare the predicted time series with the test set (0.18% error).
I just don't know how to forecast outside the interval of the data.
I have to admit that I used a point by point prediction method that looks like this:
def predict_point_by_point(model, data):
predicted = model.predict(data)
predicted = np.reshape(predicted, (predicted.size))
return predicted
I then, I used it to override the predict function.
Maybe the original predict function could have produced a future time series? Or maybe the point-by-point approach isn't that bad either?
I mean: how could I predict a precise interval of the time series (3 months for example) without just referring to the test set?
Example: the test set starts 01/01/2018 and ends 01/12/2018 and I want to predict 4 months from 02/12/2018
Thanks in advance for your help
AI: Let's say you trained a forecasting model using the following base data:
time | index_1 | index_2 | label
01/01/2018 | 80 | 70 | 1
01/02/2018 | 60 | 30 | 0
01/03/2018 | 75 | 90 | 1
You used time, index_1 and index_2 to predict label. Then you would simply need a dataset like this to predict 01/04/2018:
time | index_1 | index_2
01/04/2018 | 60 | 75
Using your model on this data set should predict the label-value.
Now in a time series this can be more complicated. Let's say what you actually want to predict is the label-value of time X from the index values of time X-4 months. In this case your data to build the model should look like this:
time | index_1_lag_4_months | index_2_lag_4_months | label
01/04/2018 | 80 | 70 | 1
01/05/2018 | 60 | 30 | 0
01/06/2018 | 75 | 90 | 1
This model would predict the label-value for 01/04/2018 based on the index values of 01/01/2018. To actually get a prediction we again give a data set like this:
time | index_1 | index_2
01/04/2018 | 60 | 75
Only the output would not be the label-value for 01/04/2018 but instead for 01/08/2018. |
H: Classify if someone is home based on time
I have a dataset with locations and a timestamp of a subject. For each location and timestamp I determined by comparing the location to the home address if the subject was at home or not (0/1) and added this value to the dataset.
Now, I want to train a model to learn based on the timestamp when it is most likely that the subject is at home. Thus, if you give the model some timestamp, it will classify if the subject was at home at this time. The model learns the "best time" for someone being at home so to say.
Obviously people are not at home at the same time every day but over a long period of time there should be some pattern and I want the model to classify based on this pattern.
What would be a fitting algorithm to do this?
AI: This is an ideal case for feature engineering!
I did this same case for myself using my Google Takeout location data to predict whether I am at home or at work.
Instead of just using time I extracted the following features:
Work Day --> 1 / 0
Day of the Week
Month
Year
Time
I then trained a random forest classification model to tell me whether I am at home, at work or other place based on those five features.
As a successive step I used this model to actually identify dates where I "moved" or was "on holiday" because of the difference between prediction and actual labels. |
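A rough sketch of that feature extraction in pandas (the file and column names are assumptions, not your actual data):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("locations.csv", parse_dates=["timestamp"])  # hypothetical file

df["workday"] = (df["timestamp"].dt.dayofweek < 5).astype(int)
df["day_of_week"] = df["timestamp"].dt.dayofweek
df["month"] = df["timestamp"].dt.month
df["year"] = df["timestamp"].dt.year
df["hour"] = df["timestamp"].dt.hour + df["timestamp"].dt.minute / 60

X = df[["workday", "day_of_week", "month", "year", "hour"]]
y = df["at_home"]  # your existing 0/1 label

clf = RandomForestClassifier(n_estimators=200).fit(X, y)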
H: What is the meaning of inconsistency in machine learning and why is 1NN well known to be inconsistent?
I really need help understanding the meaning of consistency in machine learning, why it's important, and why 1NN is considered to be inconsistent.
AI: Consistency of an algorithm in machine learning or statistics means that, as you train on more and more data, the algorithm's estimate converges to the true value. In other words, if you could feed an infinite dataset into the algorithm, its error would converge to 0.
Now regarding 1NN:
1NN has an important property: When feeding infinite training examples the error rate of the 1NN converges to a limit of twice the Bayes error. Now if you take the definition above and match the 1NN's property you'll see the reason why it's inconsistent.
See this paper for a proof of the property: http://cseweb.ucsd.edu/~elkan/151/nearestn.pdf
H: Can a decision in a node of a decision tree be based on comparison between 2 columns of the dataset?
Assume the features in the dataframe are columns - A,B,C and my target is Y
Can my decision tree have a decision node which looks for say, if A>B then true else false?
AI: Yes, but not in any implementation that I am aware of.
The idea is mentioned in Elements of Statistical Learning, near the end of section 9.2.4 under the heading "Linear Combination Splits." But this is not implemented in the popular CART or Quinlan-family of decision tree algorithms*, it is not done in sklearn's trees, and I do not know of any other python or R package that use it.
Some R packages do a more limited version, where a split can be made on two features, but these splits are of the form "$x_1>\alpha\text{ and }x_2\leq\beta$" as opposed to direct comparisons of the variables. See https://stats.stackexchange.com/questions/4356/does-rpart-use-multivariate-splits-by-default
An obvious problem is computational requirements: just checking over all pairs of features is now quadratic, and allowing arbitrary linear combinations of two features is potentially much much larger. On the other hand, if you want to restrict to direct comparisons $x_1\geq x_2$ (with no coefficients), that should be tractable (if substantially slower than CART). The Elements authors suggests Hierarchical Mixtures of Experts model instead if incorporating linear combinations is desired.
Oh, one more comment. If you really want splits like $x_1\geq x_2$, you could just generate all the features $x_i-x_j$; then a more common implementation of decision trees will be able to make your splits, when considering these new features. (Probably there will be some side effects, and still the computational problem arises: you've added $\binom{m}{2}$ features.)
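A quick sketch of that workaround (toy dataframe): generate the pairwise difference features so an ordinary tree can express $x_1\geq x_2$ as a split on $x_1-x_2\geq 0$.
import pandas as pd
from itertools import combinations

df = pd.DataFrame({"A": [1, 5, 3], "B": [2, 1, 3], "C": [0, 4, 9]})

for c1, c2 in combinations(["A", "B", "C"], 2):
    df[f"{c1}_minus_{c2}"] = df[c1] - df[c2]

print(df.columns.tolist())
# ['A', 'B', 'C', 'A_minus_B', 'A_minus_C', 'B_minus_C']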
* I've found a comment that suggests that CART does support multi-feature ("surrogate") splits?:
https://stackoverflow.com/a/9996741/10495893 |
H: How to deploy machine learning models as a chrome extension?
I have trained a stance detection model using SVMs. I wanted to know how I can deploy this as a Chrome extension. I do understand that the question is a bit broad, but any links, suggestions, etc. will be appreciated :)
AI: Since what you want to do is to apply a pre-trained model and use the predictions on the client side, you have two options:
Client-side application of the model: the extension would have to download the model file and store it, then it would need to compute the predictions from this model. This would probably require the SVM library used for the training to be available on the client side, this might be an issue.
Server-side application of the model: the extension would have to upload the data to the server, then the server computes the predictions using the model, and sends the predictions back to the client.
Usually option 1 is preferable since it doesn't require transfering the input data to the server and then sending back the predictions. It's also safer with respect to the user privacy, their data is not uploaded anywhere. However it requires the client to be able to compute the predictions, so probably requiring the SVM library to be installed and the extension must be able to communicate with it, so it might not be feasible. |
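As an illustration of option 2 only, here is a minimal server sketch (the file names and the preprocessing object are assumptions) that the extension could call over HTTP:
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("stance_svm.pkl")        # your trained SVM, saved with joblib
vectorizer = joblib.load("vectorizer.pkl")   # whatever feature extraction you trained with

@app.route("/predict", methods=["POST"])
def predict():
    texts = request.json["texts"]            # the extension POSTs a JSON list of texts
    X = vectorizer.transform(texts)
    return jsonify(predictions=model.predict(X).tolist())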
H: Why don't we use space filling curves for high-dimensional nearest neighbor search?
Some space filling curves like the Hilbert Curve are able to map an n-dimensional space to a one dimensional line whilst preserving locality. Does that mean that we could map a dataset of high dimensional points to a line and expect the order of the nearest neighbors to be preserved?
If so, wouldn't that be more efficient than building a Ball tree?
AI: Space filling curves are sometimes used for nearest neighbor search. See these applications of Z-order curves and Hilbert curves.
The idea is as follows. Let $f$ be a space-filling curve. Given a point $x$, index it as $f^{-1}(x)$.* Given a query point $y$, return all points indexed in an interval around $f^{-1}(y)$. If $y$ is close to $x$, there is a good chance that $x$ will be returned so long as $f^{-1}$ tends to preserve locality. Different space filling curves have this property to different degrees.
* Note that space-filling curves are not injective so the inverse is not uniquely defined. But in practice we choose a finite grid on $[0, 1]^n$ and an appropriate iterate that is bijective so we don't have a problem. |
H: sklearn.metrics.average_precision_score getting different answers for same data but different formats
I was trying to learn how average precision (AP) is calculated and implemented in scikit-learn. I have read the documentation, but I don't think I fully understand it yet.
Consider the following two snippets:
import numpy as np
from sklearn.metrics import average_precision_score
y_true = np.array([0, 1, 0])
y_scores = np.array([0.4, 0.4, 0.8])
average_precision_score(y_true, y_scores) # 0.3333333333333333
and
y_true = np.array([[1,0], [0,1],[1,0]])
y_scores = np.array([[0.6,0.4], [0.6,0.4], [0.2,0.8]])
average_precision_score(y_true, y_scores) # 0.45833333333333326
From what I understand, these are the same data but formatted in different ways. The first is only showing the true labels and predicted scores for the positive class, while the second is giving information on both classes.
But why are they giving different answers? In particular, how are these two results calculated? And which one is correct? I was reading this post, but I didn't understand how that precision-recall table in the answer is constructed. Can anyone go through a similar calculation for my example?
AI: By explicitly giving both classes, sklearn computes the average precision for each class. Then we need to look at the average parameter: the default is macro:
Calculate metrics for each label, and find their unweighted mean...
If you switch the parameter to None, you get
average_precision_score(y_true, y_scores, average=None)
# array([0.58333333, 0.33333333])
whose second entry agrees with the answer for the positive class, and whose average gives you the other output you're seeing.
As for hand calculations:
For the positive class, with a threshold in $(0.8,1]$ we get zero positive predictions, and so 0 recall and 1 precision (by convention). With the threshold in $(0.4,0.8)$ we get one false positive, one false negative, and one true negative, so 0 recall and 0 precision. With the threshold in $[0, 0.4)$ we get one true positive and two false positives, so 1 recall and 0.333 precision. So the table from the linked question in this case is
R P
1 0.0 1.0
2 0.0 0.0
3 1.0 0.333
Finally, the average precision computation is
$$(0.0-0.0)\cdot 0.0 + (1.0-0.0)\cdot 0.333 = 0.333$$
Somewhat degenerate example, but it checks out.
For the other class:
$(0.6,1]$ gives zero "positive" predictions, so 0 and 1 again.
$(0.2,0.6)$ gives one true positive, one false positive, and one false negative.
$[0,0.2)$ gives two true positives, one false positive. So
R P
1 0.0 1.0
2 0.5 0.5
3 1.0 0.666
and average precision is
$$(0.5-0.0)\cdot 0.5 + (1.0-0.5)\cdot0.666 = 0.58333$$ |
H: Plotting Stacked Histogram for Time-series data
Given the dataset:
timestamp item itemcount
2019-03-18 07:40:08.759 A 10
2019-03-18 08:40:08.759 B 5
..................................................
2019-05-20 07:40:08.759 D 4
2019-05-21 07:40:08.759 E 8
I want to plot stacked histogram like:
where the x-axis should be the date and y axis the itemcount and stack will be each item. I want the graph with subplots for every month.
I am new here so will be happy to get any feedback on my mistakes. Thank you.
here's one sample code i found online which plots the same graph in the figure above.
# Import Data
df = pd.read_csv("https://github.com/selva86/datasets/raw/master/mpg_ggplot2.csv")
# Prepare data
x_var = 'manufacturer'
groupby_var = 'class'
df_agg = df.loc[:, [x_var, groupby_var]].groupby(groupby_var)
vals = [df[x_var].values.tolist() for i, df in df_agg]
# Draw
plt.figure(figsize=(16,9), dpi= 80)
colors = [plt.cm.Spectral(i/float(len(vals)-1)) for i in range(len(vals))]
n, bins, patches = plt.hist(vals, df[x_var].unique().__len__(), stacked=True, density=False, color=colors[:len(vals)])
# Decoration
plt.legend({group:col for group, col in zip(np.unique(df[groupby_var]).tolist(), colors[:len(vals)])})
plt.title(f"Stacked Histogram of ${x_var}$ colored by ${groupby_var}$", fontsize=22)
plt.xlabel(x_var)
plt.ylabel("Frequency")
plt.ylim(0, 40)
plt.xticks(ticks=bins, labels=np.unique(df[x_var]).tolist(), rotation=90, horizontalalignment='left')
plt.show()
AI: This solution uses plotnine which is based on R ggplot and the grammar of graphics.
Generally you have to transform the timestamp to months first and then understand that:
X = Months
Y = itemcount
Group = item
This should help also for your matplotlib solution!
from datetime import datetime
import pandas as pd
from plotnine import *
df['month'] = pd.DatetimeIndex(df['timestamp']).month
p = (ggplot(df, aes(x='month', y='itemcount', fill='item'))
     + geom_bar(stat='identity', position='fill'))
# fill here acts the same as group in matplotlib
print(p)
H: Publish without validation score?
My mentor wants me to write and submit an academic paper reporting a predictive model, but without any validation score.
Everything I have read in textbooks or the Internet says that this is wrong, but is there any case where only reporting a train score makes sense?
Background
The model was fit "by hand" by someone in our team, using a visual inspection of features extracted from our entire dataset. It is a linear model based on hand-crafted features extracted from some very nonlinear and high-dimensional data. The linear model is based on less than fifty features, but those features were extracted from thousands. We do not have any data left to use as validation.
AI: The most likely issue here is to do with
fifty features, but those features were extracted from thousands
If those features were selected according to a pre-data-analysis theory, and other selections were not considered, then a linear model that fit the data might be strong proof that the theory was plausible.
However, a linear model that fits well due to selection from a large feature set in order to make it fit is very likely to be overfit. You absolutely need a hold-out test data set in this case, as you have used your initial data to form a hypothesis, and have no proof of validity at all.
I cannot advise you whether to submit the paper or not. There may be ways you can word it to make it clear that the work establishes a hypothesis and does not validate it (but without making a song and dance about the lack of rigour in validation, as then you are undermining your own submission).
I think that as long as you do not try to obfuscate the lack of follow up work, and present results so far accurately, then it is a fair submission - it may then get rejected if a reviewer wants to see some validation, or it may get accepted and there will need to be follow up work that either validates or refutes the model in a second paper. That might be your work, it might be another team's.
How good/bad those scenarios are depends on how your field works in general. Perhaps ask with some relevant details on https://academia.stackexchange.com/ to gauge your response, as in some ways this is a people problem - how to please your mentor whilst retaining pride in your work and progressing your career (which in turn depends on a mix of pleasing your supervisor and performing objectively good work).
Your mentor may still be open to discussing the technical merits of the work. Perhaps they have not fully understood the implications that you are seeing for how the model was constructed. However, they might fully understand this, and may be able to explain from their view the merits of publishing at an early pre-validation stage for this project. |
H: Growth function of a 6-dimensional linear classifier
In our course, we are dealing with a d-dimensional classification problem
($\chi = \mathbb{R}^{d}$ as our input space, and $y = \{-1,+1\}$). Our hypothesis class $H$ consists of all hypotheses of the following form:
$h(x) = a\cdot \text{sign}(x_i - b)$, where $i = \{1,2,\dots,d\}$, $a \in \{-1,+1\}$, and $b\in\mathbb{R}$.
We have already shown that the growth function $m_{H}(3) = 2^3$ for $d=2$ by showing all 8 possible dichotomies for three chosen points. We further know that for a $d$-dimensional linear perceptron, the VC-dimension is always equal to $d+1$.
We now want to show that for $d=6$, $m_{H}(7) < 2^7$, i.e. that the VC-dimension of our hypothesis class is lower than 7.
Could you help us out with this?
Thanks a lot!
AI: Suppose we have $n$ points in $\chi$. We have $d$ choices of $i$, two choices of $a$, and at most $n+1$ effective choices of $b$ so the number of possible combinations of outputs on those $n$ points is at most $2d(n+1)$. I'm sure you can take it from here. |
H: Python : How to output graphs using lists method and how to change graph lines to "-" or "*"
Question
Please show me Python code that draws graphs using the list method.
Moreover, I want to know how to change graph lines to "-" or "*".
Thank you for your answer in advance.
%matplotlib inline
import decimal
import numpy as np
import matplotlib.pyplot as plt
decimal.getcontext().prec = 10
r=0.005
D0=12000000
HC=100000
x=0
y=0
for i in range(100):
D= D0*(1-r)**i - HC*(1-(1-r)**i)/r
x=i
y=D
print("D["+str(i)+"] ="+"{:,f}".format(D))
print("D["+str(i)+"] ="+"{:,.1f}".format(round(D,0)))
plt.plot(x, y, marker="*",color = "red", linestyle = "--")
plt.show()
AI: Partially, I solved it by myself.
%matplotlib inline
import decimal
import matplotlib.pyplot as plt
decimal.getcontext().prec = 10
r=0.005
D0=12000000
HC=100000
x=0
y=0
X=[];
Y=[];
for i in range(100):
D= D0*(1-r)**i - HC*(1-(1-r)**i)/r
x=i
y=D
X.append(x)
Y.append(y)
# print("D["+str(i)+"] ="+"{:,.1f}".format(round(D,0)))
plt.grid(True)
plt.plot(X, Y, color = "b")
#plt.show()
plt.savefig('./Graph.png') |
H: Minimum Possible Test MSE
I have a little confusion.
What follows is from Introduction to Statistical Learning (2013) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.
My understanding of what is going on is the following. The black curve is a function, let us say $y=f(x)$.
We have a random variable $g$ which I write as $g=f+\varepsilon$, $f$ plus noise.
The data points are a subset of the plane, $n$ trials of $g$, the (possibly multi)-set $T:=\{(x_i,g(x_i)):i=1,\dots,n\}$.
I imagine (incorrectly as I know that splines are not polynomials) that the yellow curve is a degree one polynomial (flexibility two) that is the best LS fit to the the trials (the training), the blue is the best degree five polynomial (flexibility six), and the green is the best degree $(20+\alpha)$ polynomial (flexibility $20+\alpha+1$).
In my head, the training should be the data points $T$, while the test should be $f$ (as in $f$ is the expectation of the test data).
I understand that the grey line is telling me that increasing the flexibility (degree of the polynomials in my head), allows me to approximate better the set $T$. However, if I have duplicates of $x$ in $T$, say $x_j=x_k=x^*$, with different $g$ values (e.g. something like $(20,3),\,(20,5)$ both in $T$), then I cannot have a polynomial (or indeed spline or any function) $p$ that has $p(x_j)$ and $p(x_k)$ different: $p(x_j)=p(x_k)=p(x^*)$, single-valued.
Therefore, if I have such duplications in the $x$ variable, I cannot reduce the MSE to zero.
In turn, the red line shows, that when we overfit the data with too much flexibility, the fitted curve is (my usage) biased to the training and so will not model well $f$, and so we have this increasing.
The problem I can't square (excuse the pun), is the dashed line.
It says minimum possible test MSE over all models. Whether 'test' refers to $f$ or $T$ this does not make sense to me.
If 'test' here means $f$, well surely this is zero? We can approximate $f$ arbitrarily well with a polynomial of large enough degree.
If 'test' here means the data $T$, we must conclude that $T$ contains $x$ duplicates: otherwise we could fit a polynomial of degree $n+1$ through all the test points and get this to zero. Therefore there must be duplicates, and so, perhaps, this theoretically best fit goes through all the points which are not duplicated, and goes through the average $g(x_i)$ of the duplicated points... and the answer turns out to be one... but then the grey line should not go below this...
Therefore I conclude that the dashed line is the best possible fit to $f$... but why isn't this zero?
Questions:
Am I right to be confused by this? Is the black $f$ the test or the
training?
Am I misunderstanding something else? Perhaps these (smoothing) splines cannot
well-approximate as well as polynomials?
AI: Important disclaimer: I'm not a statistician and I'm not sure about my interpretation!
I also thought at first about the duplicates, but I think the problem might be with this assumption:
In my head, the training should be the data points $T$, while the test should be $f$ (as in $f$ is the expectation of the test data).
Specifically the last part: in principle the test set is made of points from the same distribution as the training data, with the same risk of noise. In other words, the test set $t$ is similar to $T$: $t=\{(x_i,g(x_i)):i=1,\dots,m\}$ (and not $t=\{(x_i,f(x_i)):i=1,\dots,m\}$).
If 'test' here means the data $T$, we must conclude that $T$ contains $x$ duplicates: otherwise we could fit a polynomial of degree $n+1$ through all the test points and get this to zero.
Importantly, the test set $t$ is different from the training set $T$, and the estimated function $\hat{f}$ is based only on the points in $T$. So this way it makes sense that even a perfect estimate $\hat{f}=f$ might not be able to predict the true (noisy) value for every point $x\in t$. That could explain the non-zero minimum test MSE. |
H: Neural Networks: Predicting probabilities of the possible values of y, instead of just predicting y
I have a true value y that I'd like to predict with a regression, but I'm interested in the probabilities that y takes different values. y is theoretically continuous, but in the dataset it is rounded to integers. Let's say y can be 0-9. I want 10 probabilities, one for each possible value. I tried doing this categorically, with a neural network having 10 output nodes, but this hurts the predictions since we lose the relationships between categories (1 is closer to 2 than it is to 10).
Example toy problem:
y is the weight of an object in pounds. The dataset has Y values rounded s.t. y can be from 0 to 9 pounds. Predict the probabilities that y will be 0-9 pounds based on the features X.
Example output: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,] (ten # summing to 1)
I'd like to be able to accomplish this with Keras.
AI: Here are two approaches that use the idea of havng your prediction be a distribution over your continuous variables rather than a single value.
"Easy" but computationally expensive
Independently train K regressors. You now have K samples from a distribution, and you can
Big downside - you have to train lots of networks, and run lots of networks at inference time.
For some details, see "Confidence and prediction intervals for neural network ensembles".
Harder
Have one network explicitly predict a parametrized (e.g. Gaussian/ mixture of gaussians) distribution. For details see "Mixture density network", for example
So in the case of a single Gaussian, the network would output a mean and standard deviation, and so you have an estimate of the uncertainty.
A possible danger here is that you might need to be clever about how to train the variance parameter. You could imagine a case where a network does a good job on the means but is either over- or under- confident in its predictions. |
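As a very rough Keras sketch of the single-Gaussian case (the layer sizes and the feature dimension are placeholders; a full mixture density network would extend this to several components):
import tensorflow as tf
from tensorflow.keras import layers, Model

def gaussian_nll(y_true, y_pred):
    # y_pred holds [mu, raw_sigma]; softplus keeps sigma positive
    mu = y_pred[:, 0]
    sigma = tf.math.softplus(y_pred[:, 1]) + 1e-6
    y = tf.reshape(y_true, [-1])
    return tf.reduce_mean(tf.math.log(sigma) + 0.5 * tf.square((y - mu) / sigma))

inputs = layers.Input(shape=(20,))            # 20 input features, assumed
h = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(2)(h)                  # predicted mean and (raw) std dev
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss=gaussian_nll)
After training, the 10 probabilities could be obtained by integrating the predicted Gaussian over a unit-wide bin around each of the values 0, 1, ..., 9.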
H: Is it necessary to normalize data for XGBoost?
MinMaxScaler() in scikit-learn is used for data normalization (a.k.a feature scaling). Data normalization is not necessary for decision trees. Since XGBoost is based on decision trees, is it necessary to do data normalization using MinMaxScaler() for data to be fed to XGBoost machine learning models?
AI: Your rationale is indeed correct: decision trees do not require normalization of their inputs; and since XGBoost is essentially an ensemble algorithm comprised of decision trees, it does not require normalization for the inputs either.
For corroboration, see also the thread Is Normalization necessary? at the XGBoost Github repo, where the answer by the lead XGBoost developer is a clear:
no you do not have to normalize the features |
H: how to check all values in particular column has same data type or not?
I have column 'ABC' which has 5000 rows. Currently, dtype of column is object. Mostly it has string values but some values dtype is not string, I want to find all those rows and modify those rows. Column is as following:
1 abc
2 def
3 ghi
4 23
5 mno
6 null
7 qwe
8 12-11-2019
...
...
...
4900 ert
5000 tyu
In the above case, I can use a for loop to find the rows which do not have the desired dtype. I just wanted to know: is there a better way to solve this issue?
Note: I am using Pandas.
AI: You can get the type of the entries of your column with map:
df['ABC'].map(type)
So to select all the rows whose value is not stored as str, you can use this comparison as a boolean mask:
df[df['ABC'].map(type) != str]
If however you just want to check if some of the rows contain a string, that has a special format (like a date), you can check this with a regex like:
df['ABC'].str.match('[0-9]{4}-[0-9]{2}-[0-9]{2}')
But of course, that is no exact date check. E.g. it would also return True for values like 0000-13-91, but this was only meant to give you an idea anyway.
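And to then act on the rows flagged by the type check above, here is a small usage sketch (assuming your DataFrame is called df):
mask = df['ABC'].map(type) != str
print(df.loc[mask, 'ABC'])                               # inspect the non-string entries
df.loc[mask, 'ABC'] = df.loc[mask, 'ABC'].astype(str)    # or apply whatever fix you need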
H: Is it necessary to convert labels in string to integer for scikit_learn and xgboost?
I have a tabular data with labels that are in string. I will feed the data to decision trees in scikit_learn and XGBoost classifier.
Is it necessary to convert the labels in string to integers for these algorithms? Will the algorithms work less effectively when the labels are not in numerical format?
I am using python v3.7
AI: TL;DR
If your labels are part of your feature matrix, you need to convert them to numerals using OneHotEncoding (docs). However, if your labels are just your targets, you can leave them as is.
1) Is it necessary to convert the strings to integers?
You did not specify if your string labels are one of your features in your feature matrix X, or if they are your target y. So, here is some data to cover both cases:
X = [
['sunny'],
['rainy'],
['rainy'],
['cloudy'],
['very rainy'],
['sunny'],
['partially cloudy']
]
y = [
'no umbrella',
'umbrella',
'umbrella',
'no umbrella',
'umbrella',
'no umbrella',
'no umbrella'
]
If you try to fit a DecisionTreeClassifier with this, like this:
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X, y)
You get an error:
ValueError: could not convert string to float: 'sunny'
So definitely, you need to convert them to integers, for example:
from sklearn.preprocessing import OrdinalEncoder
encoder = OrdinalEncoder()
encoder.fit(X)
X_encoded = encoder.transform(X)
>> [[3.] # sunny
>> [2.] # rainy
>> [2.] # rainy
>> [0.] # cloudy
>> [4.] # very rainy
>> [3.] # sunny
>> [1.]] # partially cloudy
This transforms your string features into numbers, and now you are able to .fit() your model.
This means that features in X must be transformed to integers, however, target labels in y can remain as strings.
2) Will the algorithms work less effectively?
If you don't convert your targets y into integers, there will be no decrease in your algorithm's performance.
Now, given that you need to convert your string features X into numerals, the way that you convert will affect the algorithm.
In Sklearn you can use the OrdinalEncoder (docs) or the OneHotEncoder (docs). The way in which they encode X is different, observe:
OrdinalEncoder
from sklearn.preprocessing import OrdinalEncoder
encoder = OrdinalEncoder()
encoder.fit(X)
X_encoded = encoder.transform(X)
>> [[3.] # sunny
>> [2.] # rainy
>> [2.] # rainy
>> [0.] # cloudy
>> [4.] # very rainy
>> [3.] # sunny
>> [1.]] # partially cloudy
OneHotEncoder
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
encoder.fit(X)
X_encoded = encoder.transform(X)
[[0. 0. 0. 1. 0.] # sunny
[0. 0. 1. 0. 0.] # rainy
[0. 0. 1. 0. 0.] # rainy
[1. 0. 0. 0. 0.] # cloudy
[0. 0. 0. 0. 1.] # very rainy
[0. 0. 0. 1. 0.] # sunny
[0. 1. 0. 0. 0.]] # partially cloudy
For algorithms like DecisionTreeClassifier, the second option, namely OneHotEncoder, is better because there are more dimensions along which to find boundary lines, so you can have shallower trees. Whereas, if you encode with a simple OrdinalEncoder, you will need a deeper tree.
Have a look at the tree generated given the two types of input:
OrdinalEncoder: notice how 3 decision nodes are needed.
OneHotEncoder: notice how only 2 decision nodes are needed.
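Putting it together, a minimal end-to-end sketch using the toy X and y from above (string targets stay as they are):
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

encoder = OneHotEncoder()
X_encoded = encoder.fit_transform(X)

clf = DecisionTreeClassifier()
clf.fit(X_encoded, y)                                   # string targets are fine here
print(clf.predict(encoder.transform([['rainy']])))      # expected: ['umbrella']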
H: Multi-Output Regression with Keras
I am trying to do a multi-output regression using TensorFlow. I have got a dataset in Excel which includes a column of input points and 2 columns of output.
I converted all numbers to NumPy arrays. I am trying to do a basic regression, but the accuracy is always 1.0. I also want to draw a graph but don't know where to start. Could anyone please help? My code is here:
import os
import numpy as np
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from keras.models import Sequential
from keras.layers import Dense
from keras.activations import relu
training_data = pd.read_excel(r'C:\Users\lenovo\Desktop\Yeni klasör\training_data.xlsx',sheet_name="i1-o2")
training_data_X = training_data['i1']
traindataX = np.array(training_data_X)
training_data_Y = training_data[['o1','o2']]
traindataY = np.array(training_data_Y)
testing_data = pd.read_excel(r'C:\Users\lenovo\Desktop\Yeni klasör\testing_data.xlsx',sheet_name="i1-o2")
testing_data_X = testing_data['i1']
testing_data_Y = testing_data[['o1','o2']]
testdataX = np.array(testing_data_X)
testdataY = np.array(testing_data_Y)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128,activation=tf.nn.tanh))
model.add(tf.keras.layers.Dense(128,activation=tf.nn.tanh))
model.add(tf.keras.layers.Dense(2))
model.compile(optimizer='adam',
loss='mean_squared_error',
metrics=['accuracy'])
model.fit(traindataX,traindataY,epochs=500)
val_loss,val_acc = model.evaluate(testdataX,testdataY)
print(val_loss,val_acc)
AI: You are in a regression setting, where accuracy is meaningless (it is meaningful only in classification settings).
Simply remove the metrics=['accuracy'] argument from your model compilation, so that model.evaluate returns the loss only (discard also val_acc).
See the SO thread What function defines accuracy in Keras when the loss is mean squared error (MSE)? for more details. |
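Concretely, the relevant part of the code from the question becomes (a minimal sketch, everything else unchanged):
model.compile(optimizer='adam',
              loss='mean_squared_error')       # no accuracy metric for regression
model.fit(traindataX, traindataY, epochs=500)

val_loss = model.evaluate(testdataX, testdataY)  # a single number: the test MSE
print(val_loss)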
H: Training CNN for Regression
Background: I am using CNN to predict forces acting on a circular particle in a granular medium. Based on the magnitude of the forces, particle exhibits different patterns on its surface. The images are greyscaled 64-by-64 pixels. You can see different pictures with the magnitude of the corresponding force on the x-axis attached below.
My attempt at a solution: I am relatively new to deep learning and data science and decided to use a simple conv net to run a regression. My code is provided below. I tried to fit the model using adam optimizer and MSE as a loss function, but it takes forever and sometimes aborts execution by itself. What could be the problem? I am running it on a PC with 8GB RAM, 1TB SSD, Intel i7 CPU, and GTX 1080 GPU.
def build_model():
    model = Sequential()
    model.add(Conv2D(64, kernel_size=(3, 3), strides=(1, 1),
                     padding='valid', activation='relu',
                     input_shape=input_shape))
    model.add(Conv2D(64, kernel_size=(3, 3), strides=(1, 1),
                     padding='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(53824, activation='relu'))
    model.add(Dense(53824, activation='relu'))
    model.add(Dense(53824, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='linear'))
    return model
Images of 9 particles with different force labels on x-axis:
AI: Although building neural network models is admittedly still an art rather than a science, there are some (unwritten) rules, at least for initial approaches to a problem, such as yours here (I guess).
One of them is that dense layers with 50,000 nodes are too large, and AFAIK I have never seen such large layers in practice; multiply this x3 (layers), and no wonder your code takes forever.
I would certainly suggest experimenting with a dense layer size between 100 and 1000, and even starting with fewer than 3 dense layers. Reducing your first CNN layer to 32 filters is also certainly an option.
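For illustration, a hedged sketch of a smaller starting architecture along these lines (assuming 64x64 grey-scale inputs, i.e. input_shape = (64, 64, 1)):
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),   # hundreds of units, not tens of thousands
        layers.Dropout(0.5),
        layers.Dense(1, activation='linear')
    ])
    model.compile(optimizer='adam', loss='mse')
    return model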
H: Predict correct answer among ten answers for a given question
I have a case study to solve where I am given a dataset of questions and their answers; there are ten answers for each question.
It's a classification problem where correct answer is having class_label = 1 and all other nine answers having class_label = 0.
Which deep learning model would fit best for this type of case study and how should I proceed?
AI: It's not DL but I suggest you start with the following approach: for every question $Q$ and its set of candidate answers $(A_1,..,A_{10})$, represent each pair $(Q,A_i)$ as an instance with its label 0 or 1. You could start with a few simple features such as:
number of words in common
similarity score, e.g. cosine TFIDF
... other indicators of how well question $Q$ and answer $A_i$ match
Train a regression model on this (e.g. decision tree, SVM,...). When the model is applied to a new question+answers, it returns a score (mostly between 0 and 1) for each of the 10 answers; finally select the answer which has the highest score.
You can certainly improve on this idea, e.g. with sentence embeddings.
Note: a simple baseline system based on the same idea would be to select the answer which has the maximum number of words in common with the question. |
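A minimal sketch of the feature construction and training step described above (the lists questions, answers and labels are hypothetical placeholders, with one entry per (Q, A) pair):
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor

vec = TfidfVectorizer().fit(questions + answers)
Q, A = vec.transform(questions), vec.transform(answers)

overlap = [len(set(q.split()) & set(a.split())) for q, a in zip(questions, answers)]
cos = np.asarray(Q.multiply(A).sum(axis=1)).ravel()   # row-wise cosine (TF-IDF rows are L2-normalised)
features = np.column_stack([overlap, cos])

model = GradientBoostingRegressor().fit(features, labels)
# At prediction time, score a question's 10 candidate answers and pick the one with the highest score.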
H: Visualizing Variance / Standard Deviation for categories
Data Structure:
Method Category Variance for X
1 A 20
1 B 14
1 C 16
2 A 14
2 B 19
Where X was not used for classification, but is the evaluation criterion. The objective is to select the method which produces a classification with the minimum possible variance of X for most classes / the overall least variance.
My question: is there some standard (or obscure) method of visualizing variance for a large number of categories?
AI: I know this may not exactly correspond to what you need, but my first idea would not be to visualize the variances. Instead, I would define a metric for each method.
For instance, considering all results obtained with a given method, you have a vector of outputs (one Variance for X for each Category); then you can simply compute a p-norm of this vector and compare it to the norm obtained for other methods:
$$\left\lVert x\right\rVert_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p} $$
$p=2$ gives you the Euclidean norm; you can increase or decrease $p$ depending on whether you want to emphasize the maximal values or the average.
In terms of visualization, you could simply plot (in a bar plot for instance) a few norms that you selected (orders 1, 2, and infinity, for instance).
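As a small sketch (assuming the table above is in a pandas DataFrame df with columns 'Method', 'Category' and 'Variance for X'), the norms could be computed and plotted like this:
import numpy as np
import matplotlib.pyplot as plt

norms = df.groupby('Method')['Variance for X'].agg(
    l1=lambda v: np.linalg.norm(v, 1),
    l2=lambda v: np.linalg.norm(v, 2),
    linf=lambda v: np.linalg.norm(v, np.inf),
)
norms.plot.bar()
plt.ylabel('norm of per-category variances')
plt.show()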
For a pure visualization, you could plot the data in violin plots. For instance in python:
import seaborn as sns
sns.violinplot(x="Method", y="Variance for X", data=your_data_as_df)
Each violin would give you an idea of how the data is distributed over all categories. |
H: Basic sympy problem in anaconda
I have enabled sympy on Anaconda to use it to solve basic linear equations, but whenever I try to type something up it gives me an error when defining the variables:
from sympy import *
x,y=symbols(’x,y’);
solution=solve((4*x-3*y-17,7*x+5*y-11),x,y);
P=(solution[x],solution[y]);
print(P);
This code gives me this error message:
x,y=symbols(’x,y’);
^
SyntaxError: invalid character in identifier
Does anyone know how I could fix this?
AI: Just change the ’ to " and it works just fine, see:
from sympy import *
x,y=symbols("x,y");
solution=solve((4*x-3*y-17,7*x+5*y-11),x,y);
P=(solution[x],solution[y]);
print(P)
(118/41, -75/41) |
H: How to find bias for perceptron algorithm?
My question is very basic. I am starting with ML and am working on the perceptron algorithm. I successfully computed the weights for this input data:
X = [[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
Output_weights = [-0.7, 0.5]
But I didn't take bias into account, i.e. I assumed the discriminator line goes through the origin. Now, let's say I add another point into my training set:
new_X = [4,4]
new_Y = [-1]
How do I proceed if I want to compute the bias as well? In the first iteration for example, I'd set default weights to $[0,0]$, so I find the first point that is incorrectly classified.
Without bias, it is easy. I compute the dot product
0.8*0 + 0.1*0 = 0
should be $-1$, so it is incorrectly classified. I update the weights to:
[-0.8,-0.1]
However, taking bias into account, I get:
0.8*0 + 0.1*0 + bias
Now, how do I update the weights and the bias? What is the procedure?
I have searched several tutorials like this or this but didn't find an answer. A link to some resource would help, too.
AI: Rather than viewing your data as
X = [[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
You could instead treat the problem as
X = [[1, 0.8, 0.1], [1, 0.7, 0.2], [1, 0.9, 0.3], [1, 0.3, 0.8], [1, 0.1, 0.7], [1, 0.1, 0.9]]
Y = [-1, -1, -1, 1, 1, 1]
That is to append a $1$ in every single entry of $X$. The weight corresponding to $1$ would be the bias.
That is $Y=Ax+b$ can be written as $Y=[A, e]\begin{bmatrix}x \\b\end{bmatrix}$ where $e$ is the all one vector. |
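A minimal numpy sketch of the resulting update rule (the bias is just the weight attached to the constant 1 feature, so it is updated together with the other weights):
import numpy as np

X = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.3], [0.3, 0.8], [0.1, 0.7], [0.1, 0.9]])
Y = np.array([-1, -1, -1, 1, 1, 1])

Xa = np.hstack([np.ones((len(X), 1)), X])   # prepend a constant 1 column
w = np.zeros(Xa.shape[1])                   # w[0] plays the role of the bias

for _ in range(100):                        # perceptron passes over the data
    for x, y in zip(Xa, Y):
        if y * np.dot(w, x) <= 0:           # misclassified (or on the boundary)
            w += y * x                      # bias and weights updated in one step
print(w)                                    # w[0] = bias, w[1:] = weights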
H: If i no longer have access to feature in a data set should i retrain my model?
Based on naive Bayes: if I no longer have access to the address book that tells whether the email author is known, should I re-train the model, and if so, how?
AI: Naive Bayes, as far as I know, does not have any internal method of dealing with missing data (which is what you have). Thus, there are really two options:
1) Retrain the model, discarding the variable that indicates whether the author is known. This method will probably make your model worse if the is author known variable is important, but at least you won't have any missing data to deal with.
2) Impute the missing values using some sort of algorithm, like MICE, mode imputation, kNN, random forest, etc. For this problem, just from a first glance, there might be correlations between x1 and your other variables that you can exploit to make a good guess for what x1 would be, given the other known values. This introduces more uncertainty into your model and might even make it worse than simply dropping x1 outright (option 1), but it is worth a shot if x1 is an important predictor.
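A hedged sketch of option 2 using scikit-learn's SimpleImputer for mode imputation (X is a placeholder for your feature matrix, with NaNs where the author-known flag is now missing):
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy='most_frequent')
X_imputed = imputer.fit_transform(X)   # missing entries replaced by each column's mode
# ...then feed X_imputed to the existing Naive Bayes model as usual.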
H: What is the meaning of the parameter "metrics" in the method model.compile in Keras?
The meaning of the metrics parameter of the compile method of the Model class in Keras is not very clear to me:
model.compile(..., metrics = ['accuracy'], ...)
the documentation states:
List of metrics to be evaluated by the model during training and testing
what I don't understand is:
are these the metrics used to evaluate the performance of the network at the end of each training epoch, i.e. at the end of each epoch the code makes the network predict on the training set and calculates the metrics passed
OR
are these the metrics used to train the network, i.e. is the network trained with the goal of obtaining the best possible value for these metrics (I don't know how it could do this, something like: if the recall is low then it corrects the weights more when working with positive samples)?
(or it has another meaning?)
AI: The argument metrics is meant to define your criterion for training evaluation. Let me give an example: if you are training a classifier, you want to evaluate your model based on how accurate (in percentage) it is. Therefore, your metric is accuracy (expressed as a float in the [0, 1] range). The higher the accuracy (and, generally, the lower the loss), the better the model.
metrics must not be confused with loss. The loss function is something you need in order to "punish" your model when it makes mistakes. The loss function is at the basis of backpropagation and of the weight updates; it is the loss that neural networks use to learn. Metrics, instead, are something that humans watch to understand how good a model is and to communicate it; they are not used to update the weights.
The definition of some metrics is optional, you can evaluate a model based on the loss value only if you want. Sometimes you don't need to specify it. For example in regression tasks, i.e. when you have to predict continuous outputs, you specify a loss (usually MSE or RMSE) and also evaluate your model based on that.
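As a small illustration (a hypothetical compile call, not from the question): the loss drives backpropagation, while the metrics are only computed and logged.
import tensorflow as tf

model.compile(
    optimizer='adam',
    loss='binary_crossentropy',                       # used to update the weights
    metrics=['accuracy', tf.keras.metrics.AUC()]      # only evaluated and reported each epoch
)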
H: Computing Jaccard Similarity between two documents
Data Mining:
Compute the Jaccard similarity of D1 and D2 on 2-shingles: Sim(D1, D2) = ?
D1 = the quick brown fox jumps over the lazy dog
D2 = jeff typed the quick brown dog jumps over the lazy fox by mistake
Could you explain how to reach the conclusion, step-by-step? Thanks.
AI: You forgot a few 2-shingles (bigrams but without duplicates) in the second set but you got the idea right:
$S_1$ = { "the quick", "quick brown", "brown fox", "fox jumps", "jumps over", "over the", "the lazy", "lazy dog" }
$S_2$ = { "jeff typed", "typed the", "the quick", "quick brown", "brown dog", "dog jumps", "jumps over", "over the", "the lazy", "lazy fox", "fox by", "by mistake" }
Remark: For this particular example, in each of these two sets every sequence of 2 words appears only once, so there's no need to remove duplicates to obtain the set. In the general case this might be necessary (see the Wikipedia example).
To calculate Jaccard similarity we need to count:
The intersection $S_1 \cap S_2$, i.e. the 2-shingles in common: | { "the quick", "quick brown", "jumps over", "over the", "the lazy" } | = 5
The union $S_1 \cup S_2$, i.e. all the distinct 2-shingles: | { "the quick", "quick brown", "brown fox", "fox jumps", "jumps over", "over the", "the lazy", "lazy dog", "jeff typed", "typed the", "brown dog", "dog jumps", "lazy fox", "fox by", "by mistake" } | = 15
Jaccard similarity:
$$\frac{|S_1 \cap S_2|}{|S_1 \cup S_2|} = \frac{5}{15} \approx 0.33$$
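A short Python sketch reproducing these numbers:
def shingles(text, k=2):
    words = text.split()
    return {' '.join(words[i:i + k]) for i in range(len(words) - k + 1)}

d1 = "the quick brown fox jumps over the lazy dog"
d2 = "jeff typed the quick brown dog jumps over the lazy fox by mistake"

s1, s2 = shingles(d1), shingles(d2)
print(len(s1 & s2), len(s1 | s2), len(s1 & s2) / len(s1 | s2))   # 5 15 0.333...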
H: How do I deal with changing values in a categorical variable when I am aggregating customer records
My requirement is to build a model to predict if a new customer will return to their website. I need to analyze what drives customer repeat for both new and returning customers. The only information given is the dates in which the customer has performed the transaction, the marketing channel through which they came, the deal which they used, their age and income, a flag which says whether they are a new customer or not. The data is for the last 6 years. A customer is regarded as a new customer if they don’t return within 24 months.
My approach is to aggregate the customer transaction records and derive variables such as the average number of visits, the average transaction amount, and a flag variable recording whether they returned after their last visit. My problem here is that there are a lot of categorical variables, as you can see (the marketing channel they came through, the interface they used, their income, the deal program they used, etc.), which keep on changing for the same user in different transactions. How do I deal with those variables?
Any small hint at least is appreciated as I am stuck here completely.
Thanks in advance
AI: The standard option would be to create one feature for each possible pair (variable, category) and use the frequency of this particular pair as value for this particular customer. If the number of times doesn't matter, it could be transformed into a binary feature, i.e. just indicating whether this customer has ever been seen with this category.
In case some categories appear very rarely, it's usually counter-productive to include them so you could have a minimum frequency in order to filter out rare pairs. |
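A minimal pandas sketch of this idea (assuming a transaction-level DataFrame df with columns 'customer_id' and 'channel'; repeat for each categorical variable and join the results on customer_id):
import pandas as pd

channel_counts = pd.crosstab(df['customer_id'], df['channel'])            # frequency per (customer, category)
channel_counts.columns = [f'channel_{c}' for c in channel_counts.columns]

channel_flags = (channel_counts > 0).astype(int)                          # binary "ever used this channel" version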
H: Enormous dataset, how to proceed?
So I have started my master's thesis and I have been handed a time-series dataset with 2000 rows and nearly 600 columns of data. I have dealt with time series before, but nowhere near this level of complexity. Many courses on time series only deal with univariate series, and now I'm supposed to work on a time series with 600 different factors, with a huge number of NaN values, and present it in a simple and illustrative way. Plotting a single univariate time series sort of doesn't make sense, as there are still 600 other time series to consider. I was just wondering if anyone here has any tips on how to proceed? Any input would be appreciated.
AI: You don't give a lot of details about what you want to do so I'm going to say basic things... hopefully this helps:
Check and clean up the data: if you have columns which contain mostly NaN values they're likely useless, so you can discard them. You can also ditch any feature which always contains the same value.
Check the correlation between features: you might have some features which are redundant with each other, remove the ones which are less likely to be informative.
Work with a small subset (rows) first, analyze it and implement a pipeline using this subset. Check from time to time that your pipeline can handle the full dataset, but it's likely that a subset is enough for most of the development. If possible use a few different subsets to cover more cases.
Needless to say, keep a backup of the original data ;) |
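As a concrete starting point, here is a minimal sketch of the first two clean-up steps above (assuming the data sits in a pandas DataFrame df with one column per feature; the 50% and 0.95 thresholds are arbitrary choices to adjust):
import numpy as np
import pandas as pd

# Drop columns that are mostly NaN or constant
df = df.loc[:, df.isna().mean() < 0.5]
df = df.loc[:, df.nunique(dropna=True) > 1]

# Drop one feature out of each highly correlated pair
corr = df.corr(numeric_only=True).abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
df = df.drop(columns=to_drop)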
H: How do you view the elements of a CSV table with many columns (>30) whose column names are longer than 10 characters in pandas?
How do you view, in pandas, the elements of a CSV table with many columns (>25) whose column names are longer than 10 characters?
I have 5000 rows and 32 columns, and the labels of some columns are longer than 10 characters.
How can I see them and work with the different columns?
Excel does not work: all of the items are displayed sloppily.
Access is OK but could not handle the long column labels.
What is your solution for this?
AI: I don't know which environment you are using, but mine is Jupyter.
You can make wider notebook by:
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
By doing that you can display more columns with df.head(). If you still can't display all columns ("..." appears between columns), you can use iloc. For your data I used df.iloc[:5,:15] and df.iloc[:5,15:].
Also, a better solution is to change the pandas display options:
pd.set_option('display.max_columns', 500)
Here is the screenshot of my notebook: |
H: Predicting the speed of a car
I'm working on the commaai speedchallenge. The goal of the challenge is to predict the speed of a car based on a dashcam video. So far all the examples that I found (example 1, example 2) use some kind of method that takes in the information at frames $x_{t-h}, \ldots, x_{t-2}, x_{t-1}$ when trying to predict the speed at frame $x_t$. I think this makes sense, because you can get an idea of the speed by looking at differences between two frames. Without a reference point (with just one frame) it would be very hard to predict the car's speed.
Well apparently not! While looking for solutions I also found this repository. I'll summarise the approach that is used:
Rescale the frames to 50x50 images, and divide them into two sets: X_train and X_test.
Take X_train, X_test and pass them through the first two blocks of a convolutional network called VGG16 that is already trained, so that you get output X_train_features and X_test_features. The writer of the code explains that he wants to use transfer learning because his computer specs are not good enough to train a network himself.
Train a linear regressor based on the extracted features and compute the mean squared error.
If I understand the idea correctly, the author of this code just looks at each frame individually, passes the frame through the first two convolutional blocks of the network VGG16, and predicts the speed of the car based on the extracted features, without looking at earlier frames!
I've tested the code and it works, it has a mean square error just above 1!
Question: Am I missing something, or can you predict the speed of a car just by looking at the features of a single frame? If the answer is yes, then how?
I know that you can take the weights from the early layers of trained convolutional networks and use them for other purposes. This is possible because the early layers of convolutional networks usually just filter for very general/coarse shapes. What I just don't understand is how you could predict something that is relative to other frames based on just those low level features. If a human were to look at a single frame in the data, he would not be able to predict the speed of the car correctly, right?
AI: There are some visual indicators of speed that should be present even in one frame. An easy example to think of is "blur": looking at a static image, you could still detect "movement" (and therefore velocity) if the picture is blurred vs. absolutely sharp.
Additionally, the perspective, depth and shape of other objects might differ depending on velocity.
So it could certainly be possible to train a model based only on static images, but it should not outperform a model that uses both the features of static images and the differences between frames.
Edit:
I also looked at the repo and the presented method a bit. In the method used there could be a huge over-fitting problem if the video is only one single drive!
In this case the image detection would only identify "where" the picture was taken, and since any point along the drive is associated with a single speed it could work from there. I.e.: when the truck drove past a palm tree it was driving at 30 mph, so identifying a palm tree in the picture should predict 30 mph. This works for a single-drive data set but obviously is not able to generalize.
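For reference, a hedged sketch of the single-frame pipeline described in the question (frames is a hypothetical array of shape (n, 50, 50, 3) and y the speeds). Note that a random split on a single continuous drive leaks temporal information, which is exactly the over-fitting concern raised above:
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

base = VGG16(weights='imagenet', include_top=False, input_shape=(50, 50, 3))
extractor = Model(base.input, base.get_layer('block2_pool').output)   # first two conv blocks only

features = extractor.predict(preprocess_input(frames.astype('float32')))
features = features.reshape(len(features), -1)

X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print(((reg.predict(X_te) - y_te) ** 2).mean())                       # test MSE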