H: Word embeddings and punctuation symbols
I have a decent understanding of word embeddings (at its core, one can think of a word being converted into a vector of, say, 100 dimensions, with each dimension given a particular value... this allows one to do math with the words, and it also makes the training sets non-sparse...)
But today something came to my mind, what about punctuation symbols such as , . () ? ! ... ?
They do have a huge impact on the meaning of sentences and, like words, the position and context in which they are used is relevant.
So the question is: how should this be modeled? Are pretrained sets like GloVe including punctuation symbols? Should I simply remove punctuation symbols from the text?
AI: Punctuation symbols such as , . () ? ! ... are all included in pretrained word vectors such as GloVe. However, it is common to find phrases, abbreviations, and misspellings that are not contained within pretrained vectors. What to do with those is highly dependent on the goals of the analysis and the context of the text. |
H: Comparative Analysis of two sets of data
I have 2 sets of data which consist of marks of 60 students in a particular subject in year 1 and year 2 respectively.
Year 1 : 86, 76, 87, 67, 89, 95.... so on (60 students)
Year 2 : 82, 67, 99, 77, 65, 78.... so on (60 students)
I want to compare these two sets of marks and find if year 2 students performed better than year 1 students.
What kind of techniques are appropriate for this problem? I was told that statistical techniques like ANOVA and hypothesis testing are too common for this problem, and I was asked to try something with machine learning.
Does machine learning fit in the picture? If yes, what kind of algorithms could be used?
AI: No, this is a classical application of a statistical test. You should consider the problem in the scope of a two-sample test. Machine learning does not fit in the picture.
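As a minimal sketch of what that looks like in practice (assuming the two years' marks are stored as Python lists), a two-sample t-test with SciPy could be:
from scipy import stats
year1 = [86, 76, 87, 67, 89, 95]  # ... all 60 marks from year 1
year2 = [82, 67, 99, 77, 65, 78]  # ... all 60 marks from year 2
# Welch's t-test (does not assume equal variances); a small p-value suggests
# the mean marks of the two years really differ.
t_stat, p_value = stats.ttest_ind(year2, year1, equal_var=False)
print(t_stat, p_value)
A non-parametric alternative such as the Mann-Whitney U test (scipy.stats.mannwhitneyu) is safer if the marks are not roughly normally distributed. |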
H: How do I crop faces with a neural network?
I am looking to build a facial recognition system and realized I could probably pump up accuracy by first cropping the faces. I know I can use a Haar Cascade classifier to do this but would prefer to use a deep learning approach, as that is my current area of focus. To do this do I simply create a CNN with inputs being an image and outputs being the X, Y, Width, Height? Or is there more to it? I have tried this approach with an Adam optimizer and around 600 pictures but results haven't been great. Any ideas on how to improve this?
AI: You are looking for a fully convolutional neural network trained for image segmentation. The architecture is typically convolutions followed by de-convolutional layers without pooling.
The forward pass serves to extract information with convolutions. The de-convolutions serve to resize the tensors back to the input's height and width. A fully connected layer then classifies each pixel of the input. Deconvolutions demonstration.
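A rough Keras sketch of that overall shape (a hypothetical minimal model; the layer sizes, the 256x256 input and the use of a 1x1 convolution for the per-pixel classification are illustrative assumptions, not the architecture from the linked demonstration):
from keras.models import Sequential
from keras.layers import Conv2D, Conv2DTranspose

model = Sequential()
# encoder: strided convolutions extract features and shrink the feature map
model.add(Conv2D(32, 3, strides=2, activation='relu', padding='same', input_shape=(256, 256, 3)))
model.add(Conv2D(64, 3, strides=2, activation='relu', padding='same'))
# decoder: transposed convolutions resize back to the input height and width
model.add(Conv2DTranspose(32, 3, strides=2, activation='relu', padding='same'))
model.add(Conv2DTranspose(16, 3, strides=2, activation='relu', padding='same'))
# per-pixel face / not-face score
model.add(Conv2D(1, 1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')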
Unfortunately you need pixel-wise labels to train a network to do this. You could look for a pretrained model that already segments faces from images, or you could grind out hand-labeling 1000 images and use transfer learning (there are online tools for doing this, such as Labelbox). |
H: Change values of consecutive repeating letters in pandas column
I've got one Dataframe like:
id value block
1 a 1
2 a 1
3 a 2
4 a 2
5 b 3
6 c 4
And I want to change the value column to the next value based on where
the series changes, like below. The change must be defined by the
block column.
id value block
1 a 1
2 a 1
3 b 2
4 b 2
5 c 3
6 None 4
I thought about using shift but there are different lengths of continuity in the value column.
I think this should work somehow by using cumsum to detect the change.
AI: For this particular case, in which the difference between consecutive block elements is 0 or 1, you could use diff() and something like this:
import pandas as pd, numpy as np
df = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6], 'block': [1, 1, 2, 2, 3, 4]})
values = ['a', 'b', 'c', 'None']
indexs = np.insert(np.cumsum(np.diff(df.block)), 0, 0)
df['value'] = [values[int(x)] for x in indexs]  # int() instead of the deprecated np.asscalar()
When value is already part of your dataframe, you could simply replace the values list given above by the following:
df = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6], 'value': ['a', 'a', 'a', 'a', 'b', 'c'], 'block': [1, 1, 2, 2, 3, 4]})
values = np.append(df['value'].unique(), 'None')
indexs = np.insert(np.cumsum(np.diff(df.block)), 0, 0) # unchanged
df['value'] = [values[int(x)] for x in indexs] # unchanged
which leads to the same result. |
H: Splitting a keras model across multiple GPUs
Dear fellow Data Scientists.
I'm having a problem with splitting a model across multiple GPUs. I have read something about "towering" in native tensorflow, but my whole architecture is already written in keras (tensorflow backend, of course). As far as I know, keras only supports data parallelism, which is useless when operating on images bigger than 1760x1760, as in my case (Yolo architecture). I'm asking for advice on how I could achieve this without using native tensorflow. I must run this model on 4500x4500 images and I can use up to 4 Tesla K40 (11GB) GPUs.
EDIT:
I'm already using batch = 1
AI: For the device parallelism (aka model parallelism) see this FAQ: Device parallelism.
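As a rough sketch of the idea with the TensorFlow backend (assuming two visible GPUs; the layer sizes here are placeholders, not your Yolo architecture):
import tensorflow as tf
from keras.layers import Input, Conv2D, concatenate
from keras.models import Model

inp = Input(shape=(4500, 4500, 3))
with tf.device('/gpu:0'):
    branch_a = Conv2D(16, 3, activation='relu', padding='same')(inp)  # this part lives on GPU 0
with tf.device('/gpu:1'):
    branch_b = Conv2D(16, 3, activation='relu', padding='same')(inp)  # this part lives on GPU 1
with tf.device('/cpu:0'):
    out = concatenate([branch_a, branch_b])                           # merge on the CPU
model = Model(inp, out)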
Here is an example of doing this with Horovod. |
H: Accuracy differs between MATLAB and scikit-learn for a decision tree
Is it possible for the accuracy on the same data set to differ between MATLAB and a Jupyter notebook using Python code?
For the same data set, I first applied it in MATLAB and got 96% accuracy with the decision tree method; then I applied the same data set in a Jupyter notebook using Python code, where I got 53% accuracy for C4.5 (decision tree) using k-fold cross validation.
I don't understand why I get different accuracies for the same dataset and the same method.
My procedure in python code is given below:
import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.model_selection import KFold, cross_val_score
train=pd.read_csv('E://New.csv')
train.head()
# define X and y
feature_cols = ['Past', 'Family_History', 'Current',
                'current or previous workplace',
                'diagnosed with a mental health condition by a medical professional?',
                'do you feel that it interferes with your work when being treated effectively?',
                'Gender']
X = train[feature_cols]
# y is a vector, hence we use dot to access 'label'
y = train['Diagonised condition']
kfold = KFold(n_splits=10,random_state=None)
model = tree.DecisionTreeClassifier(criterion='gini')
results = cross_val_score(model, X, y, cv=kfold,scoring = 'accuracy')
result = results.mean()*100
std = results.std()*100
print (result)
AI: It is hard to make a direct comparison between a white-box implementation (scikit-learn) and a black-box implementation (MATLAB).
One guess is that they are using different algorithms: scikit-learn uses an optimized version of the CART algorithm, while MATLAB may use ID3, C4.5, or something else. Another guess is that the two implementations are using different hyperparameters (e.g., different splitting criteria, max depth, minimum node size, ...).
Since decision trees are white-box models, you can examine their internal structure. Plot both trained trees. See how each of them makes its splits and how many splits are being made.
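On the scikit-learn side, inspecting the fitted tree is straightforward (a small sketch reusing model, X, y and feature_cols from the question; plot_tree requires scikit-learn >= 0.21):
import matplotlib.pyplot as plt
from sklearn import tree

fitted = model.fit(X, y)                          # fit on the full data just for inspection
print(fitted.get_depth(), fitted.get_n_leaves())

plt.figure(figsize=(20, 10))
tree.plot_tree(fitted, feature_names=feature_cols, filled=True)
plt.show()
Comparing this against the tree view in MATLAB should quickly show whether the two models are even similar in shape. |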
H: Kmeans large dataset
we are currently performing a K-MEANS under scikit-learn on a data set containing 236027 observations with 6 variables in double format (64 bits).
According to our calculations, the complexity of the algorithm is O(n * k * v * i), with n the number of observations, k the number of clusters, v the number of variables and i the number of iterations which is 300 maximum.
Let (236027 * 3 * 6 * 300) * 64 (size of the double in bits): we have 81570931200 bits which is ~10.1 GB of memory.
However, by running the algorithm on a 600 GB VM, it crashed due to lack of memory.
Are we wrong in our calculation of the space required for KMEANS?
Is KMEANS suitable for this volume?
What solutions are available to us?
Thank you
AI: KMeans wasn't the problem; the silhouette analysis that follows it was. Python somehow jumped to the silhouette computation before the fitting of KMeans had terminated.
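Silhouette analysis needs pairwise distances, so its memory cost grows roughly with the square of the number of samples; one common workaround (a sketch, assuming X holds the 236027x6 data and labels the fitted cluster assignments) is to score a random subsample only:
from sklearn.metrics import silhouette_score

# evaluate the silhouette on a 10k random subsample instead of all 236027 rows
score = silhouette_score(X, labels, sample_size=10000, random_state=0)
print(score)
This keeps the pairwise-distance work that the silhouette needs down to a manageable size. |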
H: Why is logistic regression not sigmoidal?
The blue dots are the raw data and the line is my logistic regression. The line is quite straight and not sigmoidal as I would have expected. I suspect there is something wrong in my gradient descent equation but I don't understand the maths well enough to find the mistake.
This is the code I used to generate the logistic regression:
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from scipy import stats
def logit(z):
    return (1/(1+np.exp(-z)))

def cost(data, weights):
    x_val = data[:,0:2]
    y_val = data[:,2]
    m = len(x_val)
    scores = np.dot(x_val, weights)
    cost = np.sum(y_val*scores - np.log(1 + np.exp(scores)))
    return cost

def gradient_descent(data, weights, learning_rate):
    x_val = data[:,0:2]
    y_val = data[:,2]
    m = len(x_val)
    scores = np.dot(x_val, weights)
    weights -= (learning_rate/m)*np.dot(np.transpose(x_val),(logit(scores)-y_val))
    return weights
dataset = pd.read_csv("/Users/An/Desktop/data/glass.csv")
dataset.sort_values('Al', inplace=True) # sort by ascending "Al" values
dataset.head()
dataset['binary'] = dataset.Type.map({1:0, 2:0, 3:0, 5:1, 6:1, 7:1})
ones = np.ones(len(dataset))
data = np.stack((ones,dataset["Al"], dataset["binary"]), axis=-1)
weights = np.random.rand(2)
def log_reg(data, weights):
    for i in range(9000):
        weights = gradient_descent(data, weights, 0.000001)
        loss = cost(data, weights)
    return weights
w = log_reg(data, weights)
AI: I would suggest you implement the following formulas:
Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
Error function
$$\text{Error}(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
If you want the corresponding Python code for the above just let me know. I also have it but I guess it would be more useful for you (from the point of view of learning) to implement it yourself. In any case, if you want it, I can update my answer with the Python code.
Source code:
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

def error_formula(y, output):
    return - y*np.log(output) - (1 - y) * np.log(1-output)

def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias |
H: Stop CNN model at high accuracy and low loss rate?
I train my CNN model for a large number of epochs, and with each epoch I print the training loss and accuracy, but there are a lot of ups and downs in these two metrics. I want to do early stopping at, for example, a loss of 0.2 and an accuracy of 95% or more, because I reach this at more than one epoch. My questions are:
1- Is early stopping done on the train set or the validation set?
2- If on the validation set, should I print the validation loss and accuracy for each epoch along with the train set loss and accuracy?
3- Can you give an idea of how to do early stopping with a piece of code, but not in keras, because I use tensorflow, not keras?
AI: Early stopping is determined based on the validation set's results (either loss, accuracy or some other special metric).
Usually early stopping is checked every single epoch so you will need to check your validation accuracy/loss after each epoch. You don't have to print it, but if it is already calculated, there is no reason to withhold it from yourself. (you can also check validation results every x epochs if it slows your training process too much).
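A minimal hand-rolled version of the idea, independent of any Keras callback (a sketch: max_epochs, train_one_epoch and evaluate are placeholders for your own TensorFlow training and validation code, and the patience value is arbitrary):
best_val_loss = float('inf')
max_epochs, patience, bad_epochs = 1000, 5, 0

for epoch in range(max_epochs):
    train_loss, train_acc = train_one_epoch()   # your existing TF training step(s)
    val_loss, val_acc = evaluate()              # run the model on the validation set
    print(epoch, train_loss, train_acc, val_loss, val_acc)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        bad_epochs = 0
        # save a checkpoint here so the best weights can be restored later
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print("Stopping early at epoch", epoch)
            break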
There are many sources that show how to use early stopping with tensorflow:
Early Stopping with TensorFlow and TFLearn
Early stopping with tf.estimator, how?
Early-stopping functionality for use with tf.estimator API. |
H: Useful metrics to compare network-output image to true image?
I'm designing a supervised network that is required to output an image. I'm wondering what the best metrics are for finding similarity between the output and the actual target image. So far, my best idea is to calculate the distance of RGB values and give less weight to the background. Are there better ways?
AI: A widely adopted method to measure similarity between two images is structural similarity (SSIM):
Wikipedia
Useful links
Implementation
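As a quick illustration of what using it looks like (a sketch assuming two grayscale images of equal size as NumPy arrays; in scikit-image >= 0.16 the function lives in skimage.metrics):
import numpy as np
from skimage.metrics import structural_similarity as ssim

img_true = np.random.rand(128, 128)   # stand-in for the target image
img_pred = np.random.rand(128, 128)   # stand-in for the network output

# the score lies in [-1, 1]; 1 means the images are structurally identical
score = ssim(img_true, img_pred, data_range=1.0)
print(score)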
However, if you aim on training a network that should produce images of desired properties, I would consider GANs framework. In this case, the similarity between the generated and target image is defined by a special discriminator network.
Depending on what is your task, there are many GAN sub-types. |
H: Multi-label classification for text messages (convert text to numeric vector)
Given a dataset of messages which are labeled with 20 features, I want to predict the value of each feature for a new message.
Dataset example:
message feature1 feature2 feature3 feature3 feature4 ...
'hi' 1 0 1 1 0 ...
'i am bussy' 0 0 0 0 1 ...
... ... ... ... ... ... ...
Split data into train & test to train the model:
from sklearn.model_selection import train_test_split
x= df.iloc[:,0:1].values
y = df.iloc[:,1:-1].values
train_x, test_x, train_y, test_y = train_test_split(x, y, random_state=42)
Now, my train_x is an array of text values (impossible to fit into a train model), how could I convert them to numeric vectors?
AI: What you want to do is find a vector representation of those strings which are in your $X$ vector. Two such techniques are Bag-of-Words and $n$-grams.
Bag-of-Words (BoW)
This technique will build a dictionary with all the words that exist in your training set. Then we will build a vector with the count of each word in each instance. For example let's consider these three separate instances:
'hi'
'i am bussy'
'how are you doing?'
Then we can see that the following "words" in this training set are: hi, i, am, bussy, how, are, you, doing. So the vector representation of the above strings would be:
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 1, 1, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 1, 1, 1]
There are ways to make this technique more effective by removing tenses from verbs or plurality of words. This is called stemming and should be used with BoW.
n-grams
n-grams is a feature extraction technique for language based data. It segments the Strings such that roots of words can be found, ignoring verb endings, pluralities etc...
The segmentation works as follows:
The String: Hello World
2-gram: "He", "el", "ll", "lo", "o ", " W", "Wo", "or", "rl", "ld"
3-gram: "Hel", "ell", "llo", "lo ", "o W", " Wo", "Wor", "orl", "rld"
4-gram: "Hell", "ello", "llo ", "lo W", "o Wo", " Wor", "Worl", "orld"
Thus in your example, if we use 4-grams, truncations of the word Hello would appear to be the same. And this similarity would be captured by your features. Then you can vectorize the results of the $n$-gram in the same way as BoW.
Term Frequency-Inverse Document Frequency (TF-IDF)
Both of the above techniques can be enhanced with TF-IDF. This removes words that appear too often and do not have much information about the string, for example: the, like, a, ab, is...
For a term $t$ in a string $s$, the weight for that term is given by
$W_{t,s} = TF_{t,s} \cdot \log(\frac{N}{DF_t})$
where $N$ is the number of strings in your corpus, $TF_{t,s}$ is the number of times the term appears in the given instance string, and $DF_t$ is the number of documents in which the term appears.
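In scikit-learn all of this is available out of the box; a minimal sketch for the train_x/test_x split from the question could be:
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words='english')

# learn the vocabulary on the training messages only, then reuse it for the test messages
train_x_vec = vectorizer.fit_transform(train_x.ravel())
test_x_vec = vectorizer.transform(test_x.ravel())

print(train_x_vec.shape)   # (number of training messages, vocabulary size)
Swapping TfidfVectorizer for CountVectorizer gives the plain Bag-of-Words counts, and the analyzer and ngram_range arguments switch it to word or character n-grams. |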
H: Machine learning algorithm to classify matrices
I want to know what would be a good first choice of machine learning algorithm to classify matrices.
Each matrix belongs to either class A or class B.
The classification problem is: To classify each matrix into either class A or class B (say signal matrix or noise matrix).
Each matrix is arranged in a fashion where each row contains certain features of an object associated with the matrix.
I obviously have thousands of such class A and class B matrices to train on.
I will be really obliged if you could help. Thanks.
AI: You can indeed use a simple neural network for this task, flatten your matrix and feed the matrix entries as inputs to your network. However, I would suggest using a convolutional neural network, these have had very good performance on image data, which is essentially a matrix. This class of NN is powerful because it uses a neighborhood around each point as an input and thus significantly reduces the number of inputs to the network, which in turn reduces the model complexity.
I would also suggest using some matrix transformations to get better features. For example you can perform eigenvector decomposition of the matrix to ensure that the features you are feeding to your network are orthogonal. This may increase performance of your classifier.
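A small Keras sketch of the convolutional option (assuming, purely for illustration, 32x32 matrices with a single channel; signal vs. noise is the binary target):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(32, 32, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))   # P(matrix belongs to class A)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])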
If you give us some examples of your dataset we can help you build a network to start testing ideas. |
H: Is there a disadvantage to letting a model train for a large number of epochs?
I created a model to solve a time series forecasting problem. I had a limited amount of time series with which I could train the model therefore I decided to augment the data. The data augmentation strategy I used is quite basic but has shown to increase the accuracy of my model.
I wrote my own data_generator which I use to train my model with, using the fit_generator function in keras. Essentially it takes in the whole training data set that I have, shuffles all the time series and the augmentation process takes place specifically in each batch. In each batch I randomly pick, per time series in the batch, start and end points, so that each batch contains varying length slices of each series within the batch. This creates obviously an almost endless stream of data but it is entirely reliant on the number of epochs the model is run for as the dataset is not augmented upfront. No noise or anything is applied to the data set, the augmentation is purely from varying the lengths of the time series and the start and end points of the series.
I observe that my loss continues to decrease over time; I have tried 100, 500, 1000, 5000 and 10,000 epochs. In general the accuracy of the model's predictions does get better, but at some point with diminishing returns. It is hard to say when, as I am still tuning the model architecture and hyperparameters.
Does such an augmentation strategy affect how I can interpret the loss of model? As the longer I train the model for the more "new" data it sees instead of constantly seeing the same data and training on it.
AI: If you can be sure that the model is not seeing the same instances repeatedly, then there is a very good chance that your model is not overfitting, and that is precisely what you can measure with your validation set: you should see a continuously decreasing loss which will eventually plateau at the local optimum your model has reached, i.e. the best possible solution attained given the starting conditions. If your model were overfitting, you would see the validation loss start increasing.
The best way to know if you are overfitting is to take your model and then apply it to a completely new dataset and measure the performance. If the performance is good, then you are fine.
p.s. To avoid falling into a bad optimum you can train multiple models, each with different starting conditions for the model parameters, and then build an ensemble classifier. |
H: Are Intersection over Union (IoU) scores preferring larger objects?
With the FCN model I implemented, evaluated on the PASCAL VOC benchmark (here),
I find that larger objects in an image receive better IoU or AP scores on the test set. Why do the IoU scores have a bias towards larger-scale objects? Can anyone give an explanation of why this could happen?
AI: There are a lot of reasons for this behaviour:
training data: In your training data, there could be more larger objects than smaller objects, so the model didn't have enough information about smaller objects to learn to segment them well.
training procedure: Depending on your training procedure (e.g. loss function), your model will focus more on some type of objects (in your case, larger objects ...)
model architecture: Every model has some advantages, so some models are better at recognizing objects of various sizes. E.g. convolutional and pooling layers can increase the receptive field of neurons so that they can recognize objects of various sizes more easily. Also, some model architectures are specialized in recognizing objects of varying sizes.
Possible directions:
training data: Know your data! :) Inspect the dataset to see what's in. The things to look for: are all classes represented well, are labels (ground truth) good, sizes of objects, ...
training procedure: Try with various loss functions (cross entropy, dice loss, cross entropy + dice loss, focal loss, ...). Also, dataset augmentation techniques are important (e.g. with dataset augmentation you can create objects of various sizes ...).
model architecture: Resnet, Inception, InceptionResnet - those are the backbones that you might want to use for your backbone as they have proved to be good for semantic segmentation. |
H: In Pytorch, if I have a 2D tensor, how to iterate over this tensor to get every value changed
I have a 2D tensor whose size is 1024x1024, and the values in the tensor are 0.3333, 0.6667, and 1.0000, so I would like to change all these values to 0, 1, 2.
Could someone tell me how to iterate over this tensor?
AI: Consider the original tensor is $x$.
x = torch.tensor([[0.3333, 0.6667, 1.0000], [1.0000, 0.6667, 0.3333]])
produces
x
tensor([[ 0.3333, 0.6667, 1.0000],
[ 1.0000, 0.6667, 0.3333]])
We can do a broadcasting and then cast it to an integer tensor
y = (3 * x - 1).int()
y
tensor([[ 0,  1,  2],
        [ 2,  1,  0]], dtype=torch.int32) |
H: What model is suitable for classification of a small data set?
I have a dataset that consists of 365 records, and I want to apply a classification model on it (binary classification).
As an output, in addition to the classification labels, I want to retrieve the classification confidence for each instance.
I don't know how to deal with such a case. Can I use, for example, linear classifiers (SVM, logistic regression) with this small dataset? Because, I want to retrieve the classification confidence as well.
I read that decision trees can be a good classifier for small datasets, but how can I retrieve the classification confidence with it?
The dataset consists of tweets, each classified as positive or negative (from a sentiment perspective), and my feature vector consists of 2400 features (combination between word2vec embeddings and other features).
Also, do you recommend using word2vec embeddings with such a small dataset? I think the classifier can't learn much from them with a small dataset.
AI: The question whether to use a linear classifier depends less on the number of samples you have in your dataset and more whether your dataset is linearly separable (by the way, SVMs can be non-linear with the kernel trick).
Now with regards to confidence in the classification, In SVMs there is a method that calculates the probability that a given sample belongs to a particular class using Platt scaling ("Original Paper"). This is the approach that is used in sklearn's SVM confidence implementation. You can read more about it in the following link:
How To Compute Confidence Measure For SVM Classifiers
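In scikit-learn that boils down to enabling probability estimates when constructing the classifier (a minimal sketch; X_train, y_train and X_test are placeholders for your own split of the 2400-dimensional feature vectors):
from sklearn.svm import SVC

clf = SVC(kernel='linear', probability=True)   # probability=True turns on Platt scaling
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)         # per-class probabilities, one row per tweet
margin = clf.decision_function(X_test)    # signed distance from the separating hyperplane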
In both SVMs and linear models like logistic regression, you can calculate the distance of a sample from the decision boundary and treat it as a confidence measurement (but it is not exactly that).
With decision trees I'm not an expert but a similar question was posted and answered in the following link:
Decision tree, how to understand or calculate the probability/confidence of prediction result
I would strongly recommend using some known embedding method like the word2vec, since as you mentioned, your dataset is too small for your model to be able to properly learn an encoding of context and vocabulary from. |
H: How does Gradient Descent and Backpropagation work together?
Please forgive me as I am new to this. I have attached a diagram trying to model my understanding of neural network and Back-propagation? From videos on Coursera and resources online I formed the following understanding of how neural network works:
Input is given, which gets weight assigned to it using a probability distribution.
The activation functions use the weights to provide the predicted value.
The cost or loss functions calculate the error of the prediction between the actual class and the predicted value.
The optimization functions such as gradient descent use the results of the cost function to minimize the error.
If the above is correct then I am struggling to understand the connection between Gradient Descent and Backpropagation?
Here is an image of my understanding so far:
AI: First, remember that the derivative of a function gives the direction in which the function increases, and its negative, the direction in which the function decreases.
Training a model is just minimising the loss function, and to minimise it you want to move in the negative direction of the derivative. Back-propagation is the process of calculating the derivatives, and gradient descent is the process of descending through the gradient, i.e. adjusting the parameters of the model to go down through the loss function.
Back-propagation is called that because to calculate the derivatives you use the chain rule from the last layer (which is the one directly connected to the loss function, as it is the one that provides the prediction) to the first layer, which is the one that takes the input data. You are "moving from back to front".
In gradient descent one is trying to reach the minimum of the loss function with respect to the parameters, using the derivatives calculated in the back-propagation. The easiest way would be to adjust the parameters by subtracting the corresponding derivative multiplied by a learning rate, which regulates how much you want to move in the gradient direction. But there are some more advanced algorithms, like ADAM, for example.
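In symbols, for each parameter $\theta$ and learning rate $\eta$, the plain update described above is:
$$\theta \leftarrow \theta - \eta \, \frac{\partial \mathcal{L}}{\partial \theta}$$
where $\frac{\partial \mathcal{L}}{\partial \theta}$ is exactly the quantity that back-propagation computes. |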
H: how to update column in data frame based on condition
How do I update the IsLCap column in a dataframe, based on whether the Lvalue column is capitalized or not?
df.loc[df.Lvalue.istitle(), 'IsLCap'] = 1 # need to be corrected
getting an error -AttributeError: 'Series' object has no attribute 'istitle'
AI: You access string functions with .str. This should work:
df.loc[df.Lvalue.str.istitle(), 'IsLCap'] = 1
Update
To check the length of the strings in the column you can use the string method .str.len:
df.loc[df.Lvalue.str.len() < 3, 'less_than_three'] = 1
Pandas Series string methods |
H: Why doesn't class weight resolve the imbalanced classification problem?
I know that in imbalanced classification the classifier tends to predict all the test labels as the larger class's label, but if we use class weights in the loss function, it would be reasonable to expect the problem to be solved. So why do we need approaches like down-sampling or up-sampling for the imbalanced classification problem?
AI: Class weights do help with the imbalance problem ("resolve" seems too much), but upsampling has a certain advantage on it.
If you think about it, downsampling/upsampling the number of samples in each class to balance the dataset is almost exactly the same as using class weights.
For example, say you have a dataset containing 3 samples divided into 2 classes (two samples of class A and one sample of class B), and you are training with an MSE loss.
You can choose to upsample the number of samples from class B, which will get you the following cost function over a single epoch:
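Assuming, for concreteness, the class-A samples are $A_1, A_2$ and the class-B sample $B_1$ gets duplicated once, the MSE over the four resulting samples is:
$$L = \frac{1}{4}\left[(y_{A_1}-\hat{y}_{A_1})^2 + (y_{A_2}-\hat{y}_{A_2})^2 + (y_{B_1}-\hat{y}_{B_1})^2 + (y_{B_1}-\hat{y}_{B_1})^2\right]$$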
Here is where the small difference comes into effect. If you are training with batch gradient descent (a single weight update per epoch), the predictions for the 2 identical B1 samples will be the same, so the loss function can be written as:
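(keeping the $A_1, A_2, B_1$ notation from above)
$$L = \frac{1}{4}\left[(y_{A_1}-\hat{y}_{A_1})^2 + (y_{A_2}-\hat{y}_{A_2})^2 + 2\,(y_{B_1}-\hat{y}_{B_1})^2\right]$$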
Which is exactly the same as using a weighted loss function. However, if you are using mini-batch gradient descent (as most models do these days), the 2 duplicated samples may appear in different mini-batches, and so the predictions for them within the same epoch won't be the same (because one of them will pass through a model that was already updated once).
This is a small difference but sometimes it is important. It means that with weighted classes, the effective learning rate varies between mini-batches. In some cases that can make learning unstable. So, when possible, upsampling is the better approach (practically it produces slightly better results).
The problem is that you can't always upsample/downsample without any worry. If we go back to our example, but this time with a dataset of 5 samples divided into 2 classes (three samples of class A and two of class B):
Downsampling is problematic - which of the 3 A class samples do we ignore?
Upsampling is problematic - which of the 2 B class samples do we duplicate?
It can be solved by randomly choosing which samples to keep/ignore at each epoch; however, with very big datasets this can lead to slower processing... So weighted classes are still a valid option. |
H: Outliers handling
I have a large dataset of >100 columns with nearly all types of data.
I want to remove outliers from my dataset for which purpose I've decided to use IQR.
The problem is that even when I apply quantiles of 0.25/0.75, I still get a significant amount of outliers in columns like ClientTotalIncome, etc. Furthermore, by doing that, I eliminate more than 90% of the data.
My code in Python for outliers removal is as follows:
num_train = train.select_dtypes(include=['number'])
cat_train = train.select_dtypes(exclude=['number'])
Q1 = num_train.quantile(0.25)
Q3 = num_train.quantile(0.75)
IQR = Q3 - Q1
idx = ~((num_train < (Q1 - 1.5 * IQR)) | (num_train > (Q3 + 1.5 * IQR))).any(axis=1)
train_cleaned = pd.concat([num_train.loc[idx], cat_train.loc[idx]], axis=1)
Any ideas?
AI: It's not always a good idea to remove data from your dataset.
In some circumstances - and income is a good example - your data will be skewed / long-tailed and so will lie outside of the interquartile range. This doesn't imply that there is anything wrong with the data, but rather that there is a disparity between observations.
Nevertheless, if you are set on removing observations perhaps you should consider scaling your features prior to determining which observations are outliers. For example, taking the log of a feature and then applying your outlier removal based on the log(variable).
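A quick sketch of that idea for the skewed income column from the question (assuming the column is literally named ClientTotalIncome; log1p handles zero incomes gracefully, and the 1.5*IQR rule is then applied on the transformed values):
import numpy as np

log_income = np.log1p(train['ClientTotalIncome'])

q1, q3 = log_income.quantile(0.25), log_income.quantile(0.75)
iqr = q3 - q1
mask = log_income.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

train_cleaned = train[mask]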
Don't forget that IQR doesn't carry over well to categorical and ordered features. |
H: In linear regression, is there anything I can do if the coefficient for one of the features is unrealistic/inappropriate?
I'm building a simple linear regression model that predicts Home Price using Square Footage, Number of Bed(s), and Number of Bathroom(s).
After creating the model, I noticed that the coefficients for Square Footage and Number of Bed(s) were positive, which makes sense since Home Price increases as Square Footage/Number of Bed(s) increases. However, the coefficient for Number of Bathroom(s) was negative, which makes no sense since Home Price does not decrease as Number of Bathroom(s) increases!
My question is this: is the negative coefficient something I can "fix" by incorporating more data into my model, or is this just a quirk in linear regression?
AI: What could be happening here is multicollinearity - where some of your predictors are correlated. In this case, it is not unreasonable to think that square footage, number of beds and number of bathrooms are all positively correlated with each other. This is not a big deal if your main goal is to just predict house prices, but what multicollinearity does in essence is that it makes it 'harder' to estimate the actual coefficient values, so your interpretations of the coefficients can't carry as much weight. I suspect that using more data will indeed reduce the chance of this happening. |
H: Find all potential similar documents out of a list of documents using clustering
I'm working with the quora question pairs csv file which I loaded into a pd dataframe and isolated the qid and question so my questions are in this form :
0 What is the step by step guide to invest in sh...
1 What is the step by step guide to invest in sh...
2 What is the story of Kohinoor (Koh-i-Noor) Dia...
3 What would happen if the Indian government sto...
.....
19408 What are the steps to solve this equation: [ma...
19409 Is IMS noida good for BCA?
19410 How good is IMS Noida for studying BCA?
My dataset is actually bigger (500k questions) but I will use these questions to showcase my problem.
I want to identify pairs of questions that have a high probability of asking the same thing. I thought about the naive way, which is to turn each sentence into a vector using doc2vec and then for each sentence calculate the cosine similarity with every other sentence. Then, keep the one with the highest similarity and in the end print all those that have a high enough cosine similarity. The problem is this would take ages to finish so I need another approach.
Then I found an answer in another question that suggests to use clustering to solve a similar problem. So following is the code I implemented based on that answer.
"Load and transform the dataframe to a new one with only question ids and questions"
train_df = pd.read_csv("test.csv", encoding='utf-8')
questions_df=pd.wide_to_long(train_df,['qid','question'],i=['id'],j='drop')
questions_df=questions_df.drop_duplicates(['qid','question'])[['qid','question']]
questions_df.sort_values("qid", inplace=True)
questions_df=questions_df.reset_index(drop=True)
print(questions_df['question'])
# vectorization of the texts
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(questions_df['question'].values.astype('U'))
# used words (axis in our multi-dimensional space)
words = vectorizer.get_feature_names()
print("words", words)
n_clusters=30
number_of_seeds_to_try=10
max_iter = 300
number_of_process=2 # seads are distributed
model = KMeans(n_clusters=n_clusters, max_iter=max_iter, n_init=number_of_seeds_to_try, n_jobs=number_of_process).fit(X)
labels = model.labels_
# indices of preferable words in each cluster
ordered_words = model.cluster_centers_.argsort()[:, ::-1]
print("centers:", model.cluster_centers_)
print("labels", labels)
print("intertia:", model.inertia_)
texts_per_cluster = numpy.zeros(n_clusters)
for i_cluster in range(n_clusters):
    for label in labels:
        if label==i_cluster:
            texts_per_cluster[i_cluster] +=1

print("Top words per cluster:")
for i_cluster in range(n_clusters):
    print("Cluster:", i_cluster, "texts:", int(texts_per_cluster[i_cluster])),
    for term in ordered_words[i_cluster, :10]:
        print("\t"+words[term])
    print("\n")
print("Prediction")
text_to_predict = "Why did Donald Trump win the elections?"
Y = vectorizer.transform([text_to_predict])
predicted_cluster = model.predict(Y)[0]
texts_per_cluster[predicted_cluster]+=1
print(text_to_predict)
print("Cluster:", predicted_cluster, "texts:", int(texts_per_cluster[predicted_cluster])),
for term in ordered_words[predicted_cluster, :10]:
    print("\t"+words[term])
I thought that this way I could find for each sentence the cluster that it most likely belongs in and then calculate the cosine similarity between all other questions of that cluster. This way instead of doing it on all the dataset I will be doing it on far fewer documents. However using the code for an example sentence "Why did Donald Trump win the elections?" I have the following results.
Prediction
Why did Donald Trump win the elections?
Cluster: 25 texts: 244
trump
donald
clinton
hillary
president
vote
win
election
did
think
I know that my sentence belongs to cluster 25 and I can see the top words for that cluster. However, how can I access the sentences that are in this cluster? Is there any way to do it?
AI: As far as I understood from your description, you can just assign your labels as a new column to questions_df. There is no shuffling involved, so the following will do the trick:
questions_df["cluster"] = labels   # one cluster id per question, aligned with the rows of questions_df
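To then pull out every question that landed in a given cluster (for example, the cluster predicted for your example sentence), a simple boolean filter is enough:
cluster_questions = questions_df[questions_df["cluster"] == predicted_cluster]
print(cluster_questions["question"].tolist())
Computing cosine similarities only within this much smaller subset is then cheap. |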
H: What is the best way to predict multiple outcomes from a single entity?
Let's say I have three models: facial recognition, face landmark detection, and emotion recognition.
Now, if I want to predict those three features from a single image, what should my approach be?
Should I combine those three models? or
Run the three models in three different threads?
AI: All three models fit to single GPU
Since you have already trained the models and models are separate (do not share the features), you could construct the computational graph in a way that you have only one input (your image), but that input is pushed to three different branches of the computational graph (each branch is one of your three models). At the output of such constructed graph, you will get three outputs (one from each of three branches).
This way you will run all three models at once.
If you are using TF, it will look like this:
output_1, output_2, output_3 = sess.run(output_op, feed_dict={input_layer: input_image})
where the output_op holds a list of outputs from three models (hence, we unpack them to three variables output_1, output_2 and output_3); input_layer is the tensor operation which takes the image and pushes it to three branches as already described.
This is only possible if your GPU memory is large enough to fit all three models into the memory.
All three models do not fit to single GPU
In this case, assuming you have access to multiple GPUs, you could modify the computational graph which combines three models to use different GPUs for each branch.
Run one after another
Also, this can be always done.
If you are using TF, this link could be useful. |
H: Calculate image width
In the code below, a picture can be loaded into OpenCV, and then the region of interest (ROI) can be created by just selecting a box around something with the mouse and pressing enter.
What I am trying to do is measure the apparent width in pixels of the ROI. I am going through some of the theory in this pyimagesearch blog post for calculating distances in images.
I have a folder called images and a .png file named 1 within the folder.
import cv2
image = cv2.imread("images/1.png")
cv2.imshow("image", image)
marker = cv2.selectROI("image", image, fromCenter=False, showCrosshair=True)
print(marker)
print(marker[1]-marker[3])
The output is below, but I am not really sure how/if this can be used. I think it's stating the ROI x, y, height & width, but I am not sure:
(317, 181, 269, 359)
-178
>>>
Any tips help... Thanks
AI: cv2.selectROI() returns a 2D rectangle with (x, y, width, height) (see the Rect constructor - that object is created from selectROI()).
So, if you want to measure the apparent width in pixels, you need marker[2]. |
H: SAP HANA or Hadoop?
this is a question regarding a career choice. I am a fresher and I recently joined an MNC in a Data Engineering team. There I was offered training in either Hadoop or SAP HANA. I am in doubt as to which one I should choose. Can anyone help me make the right choice? Which of these two has better scope based on current trends?
Thanks in advance.
AI: If you are looking for current trends, you would choose Hadoop without question. The issue with items like SAP, SAS (or other massive enterprise packages like that) is that they are all 4th generation programming languages with a lot of overhead and the corresponding licensing fees. I would be concerned that more & more enterprises will find themselves moving towards Hadoop and away from the enterprise packages. I see those packages edging more and more towards the trend of serving as a "data lake" whereas Hadoop will be more of an active, day-to-day environment relied on for data science (and that's a good thing!). |
H: CV (Curriculum Vitae) Recommender System using Machine Learning, Python, Apache Solr(Back-end), AngularJS (Front-end)
I am very new to Machine Learning. This is my college project. I want to develop a web application for a CV RECOMMENDER SYSTEM in Python. I have lots of CVs in .txt format.
My questions are the following:
What type of learning can I apply? (Supervised/Unsupervised)
I have different files in one folder. So is there a need to make a corpus? If so, how can I create one?
I am very confused about the flow of the application.
Can anyone please suggest the steps for how I can start developing the web application?
Your help will be appreciated.
AI: I guess, there are many ways, to recommend CVs, but here is what I would do:
I would use TF-IDF or LSI, so an unsupervised learning approach and I would use a corpus.
I would apply word stemming, create a dictionary containing unique word stems of the CVs, create a corpus containing word stem id from dictionary and word stem count for each word stem in each CV, so the document frequencies (DF) for all word stems, then create TF-IDF (term frequency inverse document frequency) model or an LSI (latent semantic indexing) model from dictionary and corpus. Then you have TF-IDF or LSI vectors for all CVs.
For matching and recommending, you apply word stemming to text input you want to match CVs with, calculate the LSI or TF-IDF vector for this input and match it with the most similar CV by using cosine similarity calculation.
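With gensim, the TF-IDF variant of that pipeline could look roughly like this (a sketch: cv_texts stands for your already-tokenized and stemmed CV word lists, and query_tokens for the stemmed input text; both are assumptions, not code from the linked examples):
from gensim import corpora, models, similarities

dictionary = corpora.Dictionary(cv_texts)                   # unique word stems of all CVs
corpus = [dictionary.doc2bow(text) for text in cv_texts]    # stem id / count pairs per CV
tfidf = models.TfidfModel(corpus)                           # TF-IDF model from the corpus

index = similarities.MatrixSimilarity(tfidf[corpus])        # cosine-similarity index over all CVs
sims = index[tfidf[dictionary.doc2bow(query_tokens)]]       # similarity of the query to every CV
best_matches = sims.argsort()[::-1][:5]                     # indices of the 5 most similar CVs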
Here and here you can find short Python code examples for the approach I described using LSI or TF-IDF. |
H: LabelEncoding a Dataframe
I have a dataframe with integer and categorical variables. Should I label encode all variables (both integer and categorical) or should I encode only the categorical features?
AI: You only need to label encode the categorical features. For example, if the first column contains categorical data, then you can do something like the following:
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
x[:, 0] = labelencoder.fit_transform(x[:, 0]) |
H: How are weights calculated in a feed-forward neural network before they are summed up with bias?
I have read a lot of papers and watched different videos; they seem to explain how the weights are summed up with the bias before entering the activation function.
What I am trying to understand is the whole flow of data from the input matrix all the way through to the output layer. What I am struggling with is how the weights get their initial values.
I have drawn a diagram hoping to explain what I understand so far. Please let me know if I am on the right track and help me understand how weights are calculated.
AI: Initial weights are simply initialized by the programmer, usually according to some random distribution (e.g. Gaussian).
There are several well known initializers, such as xavier,
that have been shown to improve the training process.
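In Keras, for example, picking an initializer is a one-argument change (a small sketch; 'glorot_uniform' is the Keras name for the Xavier initializer, and it is already the default for Dense layers):
from keras.layers import Dense

layer = Dense(50, activation='relu', kernel_initializer='glorot_uniform')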
If you read ML papers, you will see that researchers always mention how they initialized weights for their models so that their results are reproducible. |
H: K-Means clustering - What to do if a cluster has 0 elements?
I'm writing code for k-means clustering. I have around 100000 vectors of size 128x1 (SIFT descriptors). I'm trying different initialization methods such as Forgy and Random Partition.
What if suppose, no vectors are classified to a cluster (in one of the iterations)? How to calculate centroid for that cluster?
AI: This is mostly an issue with really bad initialization (random vector generation as well as random labeling are stupid, don't use them - choose k points with sampling, or k-means++) and with data where k-means doesn't work well at all. So if this happens, you know the results won't be good!
Either way, the standard and straightforward solution is simple: use the previous mean if a cluster becomes empty. It could be assigned points later again. And if it doesn't, well, then the cluster is empty. No surprises here, no infinite loops, convergence issues, etc. |
H: Can ROC/AUC help model training or is it just used for model choosing?
If it can be used to help with training, how?
AI: Your answer would depend on the model you have in mind. For something like multi-variate linear regression, you would use it for factor reduction. For something like a neural network, you could use it while you test different settings for any number of things like the dropout rate or the learning rate. It really just depends on what you're working on at the time. |
H: Should unique vectors (SIFT descriptors) be used in K-Means Clustering?
I'm doing image classification by extracting SIFT features, clustering them and then finding BOVW histogram and classifying.
I have around 180 training images from which I'm extracting SIFT descriptors. I need to cluster these features using k-Means clustering. Now, among the SIFT descriptors, some are duplicate. Before applying K-Means, should I remove these duplicate vectors or should I not? Or does it not make any difference?
Thanks!
AI: It does make a difference.
If you have many duplicates, you can merge them into weighted vectors. It's straightforward to add to existing code:
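A sketch with scikit-learn's KMeans (assuming descriptors is the (N, 128) array of SIFT descriptors; the vocabulary size of 500 is arbitrary, and sample_weight requires scikit-learn >= 0.20):
import numpy as np
from sklearn.cluster import KMeans

# collapse duplicate descriptors and remember how many times each occurred
unique_desc, counts = np.unique(descriptors, axis=0, return_counts=True)

kmeans = KMeans(n_clusters=500, random_state=0)
kmeans.fit(unique_desc, sample_weight=counts)   # duplicates still pull the centroids proportionally
This is equivalent to clustering the full duplicated set, at a fraction of the cost. |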
H: Keras decision threshold for Multiple Label prediction
I'm training a Neural Network to predict multiple labels for a given input. My input is a 200 sized vector of integers and the output should be a boolean vector of size 28. My y has a 1 on the corresponding classes the example corresponds to, i.e. the y should look like:
[0, 0, 1, 0, ... 1, 0, 0].
Now, I've used a sigmoid function on my final layer, and I'm training with binary crossentropy since I want my model to treat each class as independent from each other.
When I fit my model, I get a pretty decent categorical accuracy, i.e. ~75%, but I'm wondering which is the threshold that Keras uses to say a class is or isn't present on the example, I mean, which is the threshold where it decides:
prediction[prediction >= threshold] = 1
prediction[prediction < threshold] = 0
AI: You can look at your multi class classification as 28 separate binary classifiers (1 for each output feature).
The output of a binary classification is the probability of a sample belonging to a class, so the threshold per each feature is 0.5. |
H: How to define a multi-dimensional neural network with keras
I have implemented a simple neural network with keras that takes an input of 50 values and returns a classification of '0' or '1'. I believe the model is expecting an input shape of (50, 1). I'd like to add another 50 data values for each input, but I'd like them to be associated with the original 50 respective inputs. So instead of making the input of shape (100, 1), I guess I'd like to make it of shape (50, 2). I would like the neural network to know from the start that each input feature has two values associated with it, instead of it thinking there are 100 separate input features. Here's what I have so far:
model = Sequential()
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(100, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Can anyone show me the way the alter this structure to accept my new input shape?
AI: If you want to use a simple neural network which takes in a 50x2 matrix, you can do this by flattening your matrix to a 100x1 vector. This will make no difference in the way that the neurons are processed. Each neuron will take all the inputs and make a decision on its weights accordingly.
If you want some neighborhood feature selection then you can use a convolutional neural network like
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

input_shape = (50, 2, 1)   # 50 features, 2 associated values each, 1 channel

model = Sequential()
model.add(Conv2D(32, kernel_size=(2, 2),
                 activation='relu',
                 input_shape=input_shape,
                 padding='same'))
model.add(Conv2D(64, (2, 2), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))   # sigmoid, not softmax, for a single binary output
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
This will do a feature mixing within a 2x2 window across your matrix which will thus mix your two values before feeding them to the deeper part of the network.
Doing this is also a good idea as it serves as a means to reduce the complexity of your model which may lead to better results. |
H: ML technique to predict next year output based on text quantities
I have a random data that I would like to predict how much a quantity will be in 2020.
The data looks like this:
year   components                                            total_components
2019   [Pen, Pencil, Books, Paper]                           4
2018   [Pen, Pencil, Books, Paper, Eraser]                   5
2017   [Pen, Pencil, Books, Paper, Eraser, Napkin]           6
2016   [Pen, Pencil, Books, Paper, Eraser, Napkin, laptop]   7
In my head, I thought of time series forecasting or RNNs but the data is a bit strange to apply the techniques.
Which ML technique would you suggest here? Thanks :-)
AI: I'm not sure machine learning is the right method to count the number of arguments in a list. You can just write a function that does that and it's all you need...
Now if you want to predict the number of total components, assuming it holds some kind of temporal relation, there are a few techniques possible. But remember: Garbage in, Garbage out...
The easiest is to build a regression model by using past numbers as features, this gives an equation like this:
$$Y_t = a_1 \times Y_{t-1} + a_2 \times Y_{t-2} + ... + b$$
Next you can look at time series (ARIMA, SARIMA), which are a bit more advanced: additional to the time steps, they can take moving average and seasonal components into account.
And finally, you can also look at neural networks. If you have enough, and more importantly meaningful data, you can look at multilayer perceptron with a time window, 1d-convolutions and RNN/LSTM... Plenty resources exist if you want to go with this. |
H: Problem with Linear Regression and Gradient Descent
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
x = np.linspace(0, 10, 1000)
points = np.genfromtxt("Dokumente/Salary.csv", delimiter=",")
points2 = points[1:, :]
def gradient_descent(current_b, current_m, learning_rate, points):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1] / (10 ** 4)
        b_gradient += -(2/N) * (y - ((x * current_m) + b_gradient))
        m_gradient += -(2/N) * x * (y - ((x * current_m) + current_b))
    b_new = current_b - (learning_rate * b_gradient)
    m_new = current_m - (learning_rate * m_gradient)
    return [b_new, m_new]
def compute_error(points, b, m):
    total_error = 0
    N = len(points)
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1] / (10 ** 4)
        total_error += (1/N) * ((y - ((m * x) + b)) ** 2)
    return total_error
def run(points, staring_b, starting_m, learning_rate, num_iterations):
    b = staring_b
    m = starting_m
    for i in range(num_iterations):
        learning_rate = learning_rate
        b, m = gradient_descent(b, m, learning_rate, np.array(points))
        plt.plot(i, compute_error(points, b, m), "ro")
    return [b, m]
learning_rate = 0.000001
init_b = 0
init_m = 0
number_of_iterations = 10000
print(compute_error(points2, init_b, init_m))
[b, m] = run(points2, init_b, init_m, learning_rate, number_of_iterations)
print(compute_error(points2, b, m))
plt.plot(points2[:,1] / (10 ** 4), "ro")
x = np.linspace(0, 35, 1000)
plt.plot(x, ((x * m) + b))
plt.show()
I played with the learning rate and the number of iterations, but this does not fit the data intuitively.
Can I get some tips on my code, please?
Thank you in advance.
AI: Gradient descent probably didn't converge. I'd suggest scaling your feature to a [-1,1] or [0,1] range, trying to increase your learning rate, and maybe adding some code which checks the gradient's slope: if there is a sequence like <0, >0, <0 it means that the learning rate is locally too large and the cost function can't converge. In that case, the learning rate should be decreased.
H: Why Epochs take longer as learning proceeds?
I am training a deep learning model in Tensorflow on GPU (Amazon AWS) and what I observe is that in the beginning each epoch takes only less than a second but let's say after 80 epochs one epoch takes more than 8 minutes.
What could be a possible reason?
AI: As the number of epochs increase the error goes down and the neural network has less to learn from the given data. The learning rate also decreases towards the end which makes the calculations of the gradients and weight updates numerically unstable. These are some of the reasons that contribute to slow learning. You can have a look here Quora for more details where Ian Goodfellow has answered a similar question. |
H: How to predict correct text from incorrect text with machine learning?
I have a dataframe like this:
import pandas as pd
df = pd.DataFrame({'incorrect': ['jak', 'mya', 'kfka'],
'correct': ['jack', 'maya', 'kafka']})
incorrect correct
jak jack
mya maya
kfka kafka
I want to predict correct results from incorrect values with machine learning in python. Which model can I use for this case? Thanks.
AI: The task you are trying to do is really difficult to accomplish using machine learning, as there is no simple pattern in how the corrected words are produced from the incorrect words. One thing you can do is populate a dictionary of different names from the internet and then follow the suggestions provided here.
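If a dictionary of valid names is available, a fuzzy lookup with Python's standard library already gets you quite far (a minimal sketch using the three names from the question as the dictionary):
import difflib

dictionary = ['jack', 'maya', 'kafka']

def correct(word, candidates):
    # return the closest dictionary entry, or None if nothing is similar enough
    matches = difflib.get_close_matches(word, candidates, n=1, cutoff=0.6)
    return matches[0] if matches else None

print([correct(w, dictionary) for w in ['jak', 'mya', 'kfka']])
For larger vocabularies, packages that implement edit-distance based spelling correction can be swapped in for the same idea. |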
H: What is the best way to predict time series data?
I have monthly price data for tomatoes for the last 9 yrs for a particular town and I'm looking to predict the prices of tomatoes 6 months into the future.
I had considered using Linear Regression in Tensorflow (something I learnt about a week ago), because there is a direct relationship between price and rainfall for the location in question. That is, high rainfall means a supply glut and a price drop, low rainfall means scarcity and price hikes.
However I've found that I can't get accurate weather forecasts for 6 months into the future and have to use some other way other than Linear Regression to predict future tomato prices.
What is the best way to do this?
AI: You can check for seasonality in your data (if it exists) and, by removing the seasonality, find the trend of the tomato price; then predict the future price using the seasonality and trend components. If you are using Python, you may find the seasonal library useful for applying this method to your data.
The advantage of this method over yours is that you don't need to worry about finding dependent features or doing any feature engineering.
Moreover, as you mentioned in the tags of your question, you can use an RNN to predict future values, as your data is intrinsically a sequential time series. You can find this article useful. |
H: Predicting of the function values
There is a function: $f(t,x,y,z)$, where $t$- time, $x,y,z$- some arguments.
The values of $f$ for $t\in [a,b]$ are known (100 samples). What is the most accurate way of predicting the value of $f$ for $t=b+1$, if the values of $x,y,z$ for $t\in [a,b+1]$ are known?
May I implement deep-learning, and if yes, which method would be more reliable?
ADDITIONAL INFO
I'm propagating the satellite position using an analytical method. During the propagation, the errors accumulate. I endeavor to predict the future position errors. $f$ represents the error value and arguments $x,y,z$ are the affecting parameters, such as distance to the Moon, Sun, solar flux, atmospheric density, etc.
AI: Before trying out deep learning I suggest you try out Random Forests, as the nature of your data seems non-linear to me and RFs are good at fitting non-linear data. If that doesn't work out well, try a simple feed-forward neural network with a non-linear activation function, as it also fits non-linear functions really well. Also, machine learning is more about trying and testing, as every dataset is different and you can never know in advance which model will work well.
H: The proper way to codify Na in a list in R
I am trying to impute missing time series present in different dimensions, row by row, over the whole data set. I checked the return type of na.kalman() and it happens to be a tibble; I am not so keen on R, so I thought converting it to a plain vector would do it, something like array(as.data.frame(imputed_slice))
library("imputeTS")
library(readr)
slice <- read_csv("slice.csv")
little_slice <- slice[c(750: 850)]
for (row in 1:nrow(little_slice)) {
ts <- little_slice[row,]
if(any(is.na(ts)))
{
imputed_ts <- na.kalman(ts, model = "auto.arima")
little_slice[row,] <- array(imputed_ts)
}
}
#529, 811, 812 missings
Looking for warning, I see something like:
imputeTS: No imputation performed for column 64 because of this Error
in na.kalman(data[, i], model, smooth, nit, ...): Input data needs at
least 3 non-NA data point for applying na.kalman
For every row, so I tried to force other columns to be NA
Like this:
slice$`811` <- NA
slice$`812` <- NA
slice$`822` <- NA
slice$`832` <- NA
To sum up, why is na.kalman() not recognizing my NAs?
AI: I made it work by coercing the time series to numerics:
little_slice <- slice[c(750: 850)]
for (row in 1:nrow(little_slice)) {
ts <- as.numeric(little_slice[row,])
if(any(is.na(ts)))
{
imputed_ts <- na.kalman(ts, model = "auto.arima")
little_slice[row,] <- imputed_ts
}
} |
H: Why is Local Outlier Factor classified as Unsupervised if it requires training data with no outliers?
In Scikit-Learn, the Local Outlier Factor (LOF) algorithm is defined as an unsupervised anomaly detection method.
So then I don't understand why this algorithm requires pre-filtered training data. Perhaps "training data" here simply means "data to start with?" But the example code provided by SciKit-Learn clearly shows training data which explicitly contains NO anomalies. Does that mean that this model would NOT work if the training data contains anomalies? And more importantly, how do I find anomalies in the training data using this algorithm?
Here is the website.
https://scikit-learn.org/stable/auto_examples/neighbors/plot_lof_novelty_detection.html#sphx-glr-auto-examples-neighbors-plot-lof-novelty-detection-py
I changed the sample code to include anomaly data in the training data set and the model still found a decision boundary that looked correct. So am I just getting confused because of the way the documentation and sample code are written? Or does this model really need a clean training data set?
AI: It appears that SciKit-Learn implements two "modes" for LocalOutlierFactor, where one is unsupervised and one is semi-supervised. I think I misunderstood the documentation and implementation at first.
The two modes are "outlier" and "novelty". The sample code provided on the SciKit-Learn website uses novelty detection mode. When using the "outlier" mode the model does NOT require a training dataset--an unlabeled dataset is used as a whole and a score is given to every data point.
novelty : boolean, default False
By default, LocalOutlierFactor is only meant to be used for outlier detection (novelty=False). Set novelty to True if you want to use LocalOutlierFactor for novelty detection. In this case be aware that that you should only use predict, decision_function and score_samples on new unseen data and not on the training set. |
H: How do I add together multiple columns based on header names?
Here's what my dataframe looks like
Server Performance123 Performance456
server1 1024 0
server2 110 0
server3 0 1024
Here's what I want it to ultimately look like
Server Tier1
server1 1024
server2 110
server3 1024
Now obviously I could just add the two columns together but I can't be sure what the "123" or "456" part of the CSV I'm importing will look like as it's the last part of the UID of the datastore.
I've tried this
df['Tier 1'] = df.filter(like='Performance')
But I can't assign that as a new column in the dataframe. Any advice? I'm a bit of a newcomer to this so ran out of ideas quickly.
AI: Well after much searching and a lot of errors I have the below working code
df['Tier 1'] = df.groupby(df.columns.str.contains('Performance'),axis=1).sum()[1] |
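For what it's worth, your original filter idea was close; it only needed a row-wise sum. A simpler sketch (assuming every relevant column name contains 'Performance'):
df['Tier 1'] = df.filter(like='Performance').sum(axis=1)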
H: Can I save prediction value in same csv file as a another column using panda python
I have a csv data file and I designed an LSTM model to predict values. I want to save the predicted values in the same csv file. Can I do that? When I tried one approach, my csv file ended up with only the prediction values and the other columns were deleted. Can anyone give me a suggestion?
import pandas as pd
import numpy as np
from sklearn import preprocessing
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation
data = pd.read_csv('data1.csv')
data = pd.DataFrame(data,columns=['x','x1','x2','y'])
data.columns = ['x', 'x1', 'x2','y']
pd.options.display.float_format = '{:,.0f}'.format
data = data.dropna ()
d = ['y']
y=data['y'].astype(int)
cols=['x', 'x1', 'x2']
x=data[cols].astype(int)
scaler_x = preprocessing.MinMaxScaler(feature_range =(-1, 1))
x = np.array(x).reshape ((len(x),3 ))
x = scaler_x.fit_transform(x)
scaler_y = preprocessing.MinMaxScaler(feature_range =(-1, 1))
y = np.array(y).reshape ((len(y), 1))
y = scaler_y.fit_transform(y)
print("row",len(y))
n = data.shape[0]
p = data.shape[1]
fill_missing(data.values)
train_start = 0
train_end = int(np.floor(0.65*n))
test_start = train_end+1
test_end = n
x_train = x[np.arange(train_start, train_end), :]
x_test = x[np.arange(test_start, test_end), :]
y_train = y[np.arange(train_start, train_end), :]
y_test = y[np.arange(test_start, test_end), :]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
seed = 20
np.random.seed(seed)
fit1 = Sequential ()
fit1.add(LSTM(
output_dim = 10,
activation='relu',
input_shape =(3,1)))
fit1.add(Dense(output_dim =1))
fit1.add(Activation('linear'))
batchsize = 10
fit1.compile(loss="mean_squared_error",optimizer="adam")
fit1.fit(x_train , y_train , batch_size = batchsize, nb_epoch =10, shuffle=True)
print(fit1.summary ())
pred1=fit1.predict(x_test)
pred1=fit1.predict(x_test)
real_test = scaler_y.inverse_transform(np.array(y_test).reshape ((len(y_test), 1))).astype(int)
pred1 = pd.DataFrame(pred1, columns=['pred1']).to_csv('data1.csv')
AI: if you want this column in the same dataframe just do
data['pred'] = pred1
data.to_csv('data1.csv')
The first line automatically adds a column called 'pred' to the dataframe with values coming from pred1.
Hope it helps. Good luck! |
H: How to compare two dataframes and put the counted unique values in a first dataframe's column?
I have two different (geo)dataframes, one with 690 and the other with 1826 rows. The first one is grouped based on the nearness (spatial near) of the second dataframe; thus, they are linked through the FID_1 and NEAR_FID columns. Every FID_1 in the first dataframe corresponds to at least 2 NEAR_FIDs in the second dataframe. My challenge is to count the NEAR_FIDs in the second dataframe and put the counts into the first one. I am using python 3.5+. Thanks for any suggestion.
AI: What you want can be achieved using the merge function in pandas. Here is the code:
df1 = pd.DataFrame({'FID_1':[1,2,3]})
df2 = pd.DataFrame({'NEAR_FID':[1,1,1,1,2,2,3,3]})
df_freq = df2.NEAR_FID.value_counts().reset_index()
df = pd.merge(df_freq, df1, how='inner', left_on=['index'], right_on=['FID_1'])
df
index NEAR_FID FID_1
0 1 4 1
1 3 2 3
2 2 2 2
You can remove any of the column index or FID_1 to get your desired output.
For more on merge function look at the official documentation
and for a very good explanation of its working look here.
EDIT 1:
Merge function only accepts Dataframes and not series. So, if your data is in series format, convert it to dataframe first.
You can check the type of your data by:
print(type(df2))
And that should output pandas.core.series.Series
Use this to change it to a dataframe:
df = df.to_frame().reset_index() |
H: Production: TensorFlow and Keras
I always hear that TensorFlow is good because it is used for deployment and production. Does that mean that people don't use Keras for deploying models? If Keras is now integrated into TensorFlow, does that mean it can also be used for deployment and production?
AI: Once a model has been trained with Keras:
It can be exported as a TensorFlow model, OR
With tf.keras it can be served as an HTTP service.
Examples:
https://towardsdatascience.com/deploying-keras-models-using-tensorflow-serving-and-flask-508ba00f1037
https://medium.com/tensorflow/training-and-serving-ml-models-with-tf-keras-fd975cc0fa27
https://medium.com/@mr.acle/exporting-deep-learning-models-from-keras-to-tensorflow-serving-7d4a6e49ce3 |
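As a minimal sketch of the export step with tf.keras in TensorFlow 2.x (the directory name, including the '1' version folder expected by TensorFlow Serving, is an assumption):
import tensorflow as tf

# 'model' is a trained tf.keras model; this writes it in the SavedModel format
tf.saved_model.save(model, 'export/my_model/1')

# equivalently:
model.save('export/my_model/1', save_format='tf')
The resulting SavedModel directory is what tensorflow_model_server (or a Flask wrapper, as in the linked posts) loads for serving.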
H: Meaning of subscript in min max value function
This possibly is a very stupid question, but i have not been able to find the answer on the internet and have got no clue which keywords to use while searching.
What's the meaning of $\mathbb{E}_{x \sim p_{data}(h)} [...]$
Where ... is some function.
in context like
$\mathbb{E}_{x \sim p_{data}}[...] + \mathbb{E}_{x \sim p_{z}} [...]$
and
$\mathbb{E}_{x \sim p_{data}(h)} [...] + \mathbb{E}_{x \sim p_{z}(z)} [...]$
This is written in a lot of papers, but what does it mean?
AI: I'm supposing you are reading the paper on GAN where $[...]$ is the loss function. $\mathbb{E}_{x \sim p_{data}(h)} [...]$ now represents the expected loss where the data is taken from the distribution $p_{data}(h)$.
Edit:
$\mathbb{E}[ ]$ is an operator which gives the expected value of the function it acts on. When it is applied to the loss function specifically, the result is called the risk or expected loss. The $[...]$ term you are asking about is the loss itself.
For more on its definition and explanation, see the below questions:
Risk function or expected loss function
why do we calculate risk when we already have loss functions? |
H: Divide a column by itself with mutate_at dplyr
Hi I'd like to turn each non zero value of my selected columns to a 1 using mutate_at()
BRAND MEDIA_TYPE INV1 INV2
<chr> <chr> <dbl> <dbl>
b1 newspapers 2 27
b1 magazines 3 0
b2 newspapers 0 0
b3 tv 1 145
b4 newspapers 4 40
b5 newspapers 5 0
b1 newspapers 1 0
b2 newspapers 0 28
The final result should be like follow:
BRAND MEDIA_TYPE INV1 INV2
<chr> <chr> <dbl> <dbl>
b1 newspapers 1 1
b1 magazines 1 0
b2 newspapers 0 0
b3 tv 1 1
b4 newspapers 1 1
b5 newspapers 1 0
b1 newspapers 1 0
b2 newspapers 0 1
Do you have any suggestion on how to solve it?
Thank you!!
AI: mutate_at(my_data, vars(starts_with("INV")), sign)
Although I see your INV1 column is a date? I'm not sure how much that'll affect things.. |
H: Cutting numbers into fixed buckets
I am trying to put numeric data into fixed number of buckets using Python/R.
I have data in key:value format {1 : 12.3, 2 : 4.7, 3 : 7.4, 4 : 15.9, ......, 50 : 24.1}, which is device_id:data_usage. I need to bucket these values using the bin edges (1,5,25,50,150,250,1000,5000,10000), so that later I can see which data points are in which bucket.
What function can do this in Python OR R?
AI: You don't really need to implement an algorithm to achieve this. There are a few tools that will do this for you.
You can get the data assigned to buckets for further processing using Pandas, or simply count how many values fall into each bucket using NumPy.
Assign to buckets
You just need to create a Pandas DataFrame with your data and then call the handy cut function, which will put each value into a bucket/bin of your definition. From the documentation:
Use cut when you need to segment and sort data values into bins.
In [1]: import pandas as pd
In [2]: import numpy as np # to create dummy data
Create some dummy data, put it in a dataframe and define the bins:
In [3]: data = np.random.randint(low=1, high=10001, size=1000)
In [4]: df = pd.DataFrame(data=data, columns=["data"])
In [5]: bins = np.array([1,5,25,50,150,250,1000,5000,10000])
Pass the data, along with the bin definitions to the cut function and assign it back as a new column in the dataframe:
In [6]: df["bucket"] = pd.cut(df.data, bins)
You can then inspect the first few rows to see that the values have now been labelled with the relevant bucket:
In [7]: df.head()
Out[7]:
data bucket
0 8754 (5000, 10000]
1 2970 (1000, 5000]
2 6778 (5000, 10000]
3 2550 (1000, 5000]
4 5226 (5000, 10000]
Counting how many in each bucket
Here is an example using NumPy, to get an idea of the distribution, as a histogram.
Using the data and bins as defined above, we pass them to the numpy histogram function, which will count how many data points fall into each bin:
In [8]: np.histogram(data, bins)
Out[8]:
(array([ 0, 2, 1, 8, 6, 61, 417, 505]),
array([ 1, 5, 25, 50, 150, 250, 1000, 5000, 10000]))
Where the first row tells you how many values fell into each bin, and the second row confirms the bins used.
You can get your dictionary of data into the same form as my dummy data above (into a numpy array) by doing this:
data = np.array([v for v in your_dict.values()]) |
H: Supervised or unsupervised learning for predicting energy consumption for new buildings
I’m working on a model for auto-dimensioning district heating pipes for new district heating areas (new customers). I have energy consumption data on an hourly basis and descriptive data about these consumers (e.g. building year, renovation year and floor area) for a lot of district heating consumers. I want to predict the dimensional load (energy consumption) for a new building connected to the grid based on the descriptive data (e.g. building year, renovation year, floor area). I have thought about using supervised or unsupervised learning, but do not know which of them fits my needs. Is there a model (e.g. clustering, SVM, ...) that is better than others for this purpose?
AI: Building on The Lyrist's answer, I would try to organize all that data and create as many Input-Output instances as possible. If I understand your case correctly, the input would be the multiple data points that you feel have an impact on the load (if you're not sure about their impact, chuck them in for now), set up as a vector of values. The output would then be a single load value, such as kWh/day, i.e. just one value you're interested in predicting. Don't forget to keep training, validation and test sets.
Now you can frame your problem either as a classification problem by creating categories, such as 0 to 10, 11 to 20, etc., or as a regression problem, predicting a continuous value. Do look up past applications of similar problems, as the best framing is, to some extent, application specific. Note that some algorithms might work better with regression, others with classification. Personally, I think that creating classification buckets might be easier, unless predicting an exact value is important to you.
Regarding the specific model to go for, it is usually recommended to start with a simple model (SVMs are a popular choice) and move to more complicated models (e.g. kernelized SVMs) only if you have enough data. Resist the temptation to go straight for a 12-layer MLP; those old SVMs can take you surprisingly far.
H: In Machine Learning, what is the point of using stratified sampling in selecting test set data?
I am currently learning machine learning via this book "Hands-On Machine Learning with Sci-kit learn and Tensorflow" by Aurelien Geron.
On pages 76 and 77, the author talks about using stratified sampling so that your test set would be more representative of the whole data. I didn't really understand the point of this, since it would not affect the accuracy of your training model. Or how would selecting better test sets affect the accuracy of your training model?
AI: When the distribution of your data is balanced or you have enough samples of each class, a normal shuffle split will work well. But if your data distribution is unbalanced and one of the classes is in minority, you do stratified sampling so as to ensure that your train and test splits represent the true nature of your data. For more details take a look here. |
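In scikit-learn, stratification is a one-argument change; a minimal sketch, assuming y holds the class labels:
from sklearn.model_selection import train_test_split

# stratify=y keeps the class proportions of y in both the train and test splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)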
H: How to generate list with out for loop in python
data frame
S/N Type Number Capacity
1 Bike 2 5
2 Tempo 1 30
3 Truck-1 1 60
4 Truck-2 1 90
I would like to generate capacitylist = [5,5,30,60,90]
Is it possible to do it without a for loop, e.g. by using the map function in Python? Thanks a lot.
AI: import numpy as np
ndf = df.loc[ np.repeat(df.index.values, df['Number']) ]  # repeat each row 'Number' times
values = ndf['Capacity'].tolist()
# [5, 5, 30, 60, 90]
Repeat the rows you wish you duplicate using np.repeat, and then, takes the selected column (here, capacity) and turn it into a list using .tolist(). |
H: How to find correlation between time-series of different units?
I have 3 time-series data.
NDVI(normalized difference vegetation index) mean
Precipitation
Temperature
All of these have their own unit.
Now I want to find similarity/correlation between NDVI and precipitation,
NDVI and temperature. Basically, my aim is to find "NDVI is more correlated with precipitation or temperature".
Should I normalize precipitation and temperature to NDVI values?
AI: I recommend you to scale all three time series to unitless values, with zero mean and standard deviation of 1. After that, you can look for correlation among the normalized time series. In Python, scikit-learn's StandardScaler class can help with this.
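A minimal sketch of that idea (the column names are assumptions); note that the Pearson correlation itself is unaffected by linear scaling, but working with standardized series makes the comparison and any plots easier to read:
import pandas as pd
from sklearn.preprocessing import StandardScaler

# df has one column per series: 'ndvi', 'precipitation', 'temperature'
scaled = pd.DataFrame(StandardScaler().fit_transform(df), columns=df.columns)

print(scaled['ndvi'].corr(scaled['precipitation']))  # NDVI vs precipitation
print(scaled['ndvi'].corr(scaled['temperature']))    # NDVI vs temperature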
H: How can I see a long string in my dataframe?
I have a column in my dataframe in which there are sentences that are too long. I want to see them as a whole, but every time I perform even a simple iloc operation I get output like 'i am going to...'. How can I remove the ... and see the whole sentence?
AI: At first, here is the Data.
data = {
'Capacity' : [
"qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze",
"qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze",
"qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze",
"qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze"
]
}
df = pd.DataFrame(data)
print(df['Capacity'])
# 0 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvg...
# 1 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvg...
# 2 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvg...
# 3 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvg...
You can set pd.options.display.max_colwidth to increase the number of characters you want to display.
pd.options.display.max_colwidth = 20
print(df['Capacity'])
# 0 qsfjvanzicgbaefz...
# 1 qsfjvanzicgbaefz...
# 2 qsfjvanzicgbaefz...
# 3 qsfjvanzicgbaefz...
pd.options.display.max_colwidth = 100
print(df['Capacity'])
# 0 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzb...
# 1 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzb...
# 2 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzb...
# 3 qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzb...
# This seeks the maximum length of the current column 'Capacity'.
pd.options.display.max_colwidth = int(df['Capacity'].str.len().max()) + 1
print(df['Capacity'])
Or you can use the .values method instead, so that you are watching the 'raw' data.
# Reset the option.
pd.options.display.max_colwidth = 50
print(df['Capacity'].values)
# ['qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze'
# 'qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze'
# 'qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze'
# 'qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze']
Or simply select one row from the column you're interested in.
print(df['Capacity'][2])
# qsfjvanzicgbaefzrkrsdvjcfhnefzraezsufchnezsuvgcfnezefzrdfchnzefzreicfjnefzrzekcrghzebefzrkcfjfzbefzrfkjhnvezkqcfhaze |
H: What Machine Learning Algorithm could I use to determine some measure in a date?
I am getting stuck with this problem. Let's say that we have the following information.
CustomerID: 1, Date: 3/2/2018, Quantity: 3, Total: 390.78, Min: 130.26, Max: 130.26
We want to determine, given a day, month and year, what the total sum of sales will be up to the last day of that month. I'm using Microsoft Azure Machine Learning Studio and I modified the data a little bit (I know a little bit of Python and R); for a day we could have the following data.
Date: 3/2/2018, TotalSalesToday: 4023.45, FirstToToday: 1322.92 TargetValue: 42611.27
where TargetValue is the total summation of sales until the last day of the month (or the value that we want to predict), FirstToToday the Summation of the sales from the first day up to that day, TotalSalesToday the total summation of the sales of that day. There are some columns that we can find or generate given the day like RemainingWorkDays, RemainingHolidays, RemainingNonWorkDays, etc. And maybe 31 columns that we can make telling the ML what is the summation of all the days before -1, all the days before -2, and so on.
I did an experiment in Microsoft Azure Machine Learning Studio and it's giving a 100% coefficient of determination (I was using Boosted Decision Tree Regression and Tune Model Hyperparameters). I think that's because the ML knows that within a given month the TargetValue doesn't change its value, so it effectively learns something like if(month == 2) PredictValue = 42611.27. What can I do? When testing this model, suppose that on the first day we got a TotalSalesToday of 1000000 and my model still returns 50000; obviously this is not a logical and coherent answer given that value (1000000).
Is there something I need to change in the data? What do we need in order to make the ML gives at least a coherent answer? Is there something I forget?
Thanks in advance!! :)
AI: I would calculate the daily deltas of your sales data and implement a time series forecast model by using an Azure ML R/Python model module that predicts the daily deltas until the end of the month and returns the resulting monthly sum.
If you are not familiar with time series analysis, here is a link to a good description of some basic methods and how to apply them using R.
I think I would predict the daily deltas using an ARIMA model. I found a website containing a description of a time series analysis workflow in R, with which you could find a good ARIMA model.
I hope, the above is useful for you. |
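A minimal Python sketch of that workflow with statsmodels (the series name, the ARIMA order and the helper variables remaining_days and this_month_mask are assumptions you would compute or tune yourself):
from statsmodels.tsa.arima.model import ARIMA

# daily_sales: pandas Series of daily sales totals up to "today"
deltas = daily_sales.diff().dropna()              # day-to-day changes

model = ARIMA(deltas, order=(1, 0, 1)).fit()      # order is a placeholder; choose it e.g. via AIC
future_deltas = model.forecast(steps=remaining_days)

# rebuild the forecast daily sales from the last observed day and the predicted deltas
future_sales = daily_sales.iloc[-1] + future_deltas.cumsum()

month_total = daily_sales[this_month_mask].sum() + future_sales.sum()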
H: Is there any similarity function to compare two strings and give them a score like scipy cosine similarity for comparing arrays?
I want to compare strings and give them a score based on how similar their content is, just like comparing two arrays with scipy cosine similarity.
For example :
string one : 'Pair of women's shoes'
string two : 'women shoes' pair'
Logically I would want a high score between the two strings. Is there any way to do so ? I am comparing array of strings with another array in a single column in my dataframe. I want to find similar rows this way. Can this be achieved ?
AI: You can try the Levenshtein Distance. From Wikipedia this is the abstract
In information theory, linguistics and computer science, the
Levenshtein distance is a string metric for measuring the difference
between two sequences. Informally, the Levenshtein distance between
two words is the minimum number of single-character edits (insertions,
deletions or substitutions) required to change one word into the
other.
Then you can use this Python function to compute it yourself or just install a Python package that does it for you
memo = {}
def levenshtein(s, t):
if s == "":
return len(t)
if t == "":
return len(s)
cost = 0 if s[-1] == t[-1] else 1
i1 = (s[:-1], t)
if not i1 in memo:
memo[i1] = levenshtein(*i1)
i2 = (s, t[:-1])
if not i2 in memo:
memo[i2] = levenshtein(*i2)
i3 = (s[:-1], t[:-1])
if not i3 in memo:
memo[i3] = levenshtein(*i3)
res = min([memo[i1]+1, memo[i2]+1, memo[i3]+cost])
return res
print(levenshtein("Python", "Pethno"))
print(levenshtein("Pair of women's shoes","women shoes' pair"))
>> 3
>> 16
Source code for the above snippet
Or if you want to do it directly on your DataFrame, you can do it like that
df['LD'] = df.apply(lambda row: levenshtein(row['text1'], row['text2']), axis=1) |
H: What machine learning model should be used to predict coincidence factor
I have coincidence factor for different sizes of groups and the associated attributes (e.g. building usage type and floor area) for each consumer in the group. I want to predict coincidence factor for new groups using the attribute for each consumer in the new group and the number of consumers in the group. What machine learning method can solve the problem? :)
Definitions:
Coincidence factor is the peak of a system divided by the sum of peak loads of its individual components
AI: From what you are saying, I guess you have both categorical and continuous variables. One suggestion would be to check the distributions of your features to see if, more or less, the criteria for a linear regression are met.
If you have many variables (I could not tell from your description how many you have), you could even apply PCA first and then a regression.
H: How to get accuracy, F1, precision and recall, for a keras model?
I want to compute the precision, recall and F1-score for my binary KerasClassifier model, but don't find any solution.
Here's my actual code:
# Split dataset in train and test data
X_train, X_test, Y_train, Y_test = train_test_split(normalized_X, Y, test_size=0.3, random_state=seed)
# Build the model
model = Sequential()
model.add(Dense(23, input_dim=45, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))
time_callback = TimeHistory()
# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback])
And then I am predicting on new test data, and getting the confusion matrix like this:
y_pred = model.predict(X_test)
y_pred =(y_pred>0.5)
list(y_pred)
cm = confusion_matrix(Y_test, y_pred)
print(cm)
But is there any solution to get the accuracy-score, the F1-score, the precision, and the recall? (If not complicated, also the cross-validation-score, but not necessary for this answer)
Thank you for any help!
AI: Metrics have been removed from Keras core. You need to calculate them manually. They removed them on 2.0 version. Those metrics are all global metrics, but Keras works in batches. As a result, it might be more misleading than helpful.
However, if you really need them, you can do it like this
from keras import backend as K
def recall_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc',f1_m,precision_m, recall_m])
# fit the model
history = model.fit(Xtrain, ytrain, validation_split=0.3, epochs=10, verbose=0)
# evaluate the model
loss, accuracy, f1_score, precision, recall = model.evaluate(Xtest, ytest, verbose=0) |
H: Do I classify all types if they are mutually exclusive
I am trying to classify an image that can represents 3 states. Up, down or Middle.
If the image does NOT represent Up or Down, then it is by default Middle.
Should I train my CNN with a dataset including all three, or just Up and Down?
Which choice would make the classification more or less accurate?
AI: You won't be able to train your model correctly if you set only two classes. Because for all the samples the model will only predict Up or Down. This will cause even the Middle classes to be predicted as Up or Down, which will reduce your model's accuracy. Hence you need to keep all three classes as labels. |
H: Data weight averaging
say I have 2 datasets and need to get an average but with a weight based on the number of students.
School 1 = 98.1 (50 students)
School 2 = 95.4 (169 students)
How would I get the average of both as a weighted average reflecting the student totals? I think it would be around 96-ish.
thanks for any insight!
AI: There are 50 + 169 = 219 students in total.
The weights are 50/219 = 0.23 and 169/219 = 0.77 respectively.
Hence, 0.23*98.1 + 0.77*95.4 = 96.021 |
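In Python this is a one-liner with NumPy:
import numpy as np
np.average([98.1, 95.4], weights=[50, 169])   # ~96.02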
H: imbalanced dataset in text classification
I have a data set collected from Facebook consisting of 10 classes, each class with 2500 posts, but when I count the number of unique words in each class, they have different counts, as shown in the figure.
Is this an imbalanced problem due to the word counts, or balanced according to the number of posts? And what is the best solution if it is imbalanced?
update
My python code:
data = pd.read_csv('E:\cluster data\One_File_nonnormalizenew2norm.txt', sep="*")
data.columns = ["text", "class1"]
data.dropna(inplace=True)
data['class1'] = data.class1.astype('category').cat.codes
text = data['text']
y = (data['class1'])
sentences_train, sentences_test, y_train, y_test = train_test_split(text, y, test_size=0.25, random_state=1000)
from sklearn.feature_extraction.text import CountVectorizer
num_class = len(np.unique(data.class1.values))
vectorizer = CountVectorizer()
vectorizer.fit(sentences_train)
X_train = vectorizer.transform(sentences_train)
X_test = vectorizer.transform(sentences_test)
model = Sequential()
max_words=5000
model.add(Dense(512, input_shape=(60874,)))
model.add(Dense(20,activation='softmax'))####
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(X_train, y_train,batch_size=150,epochs=10,verbose=2,validation_data=(X_test,y_test),shuffle=True)
predicted = model.predict(X_test)
predicted = np.argmax(predicted, axis=1)
accuracy_score(y_test, predicted)
0.9592031872509961
AI: Thank you for your message Ahmed. There are things to point out:
Is this an imbalanced problem? Which problem? THIS is not a problem. This is data.
What analysis is going to be done? In some cases you need posts and in some you need these keywords.
What method is going to be done for that analysis? Some methods get keywords as input and some get posts.
But about the numbers themselves; Not necessarily. The smallest class has 20% of the largest population and moreover, the scale is pretty high (20000 samples). So it is not necessarily an imbalanced class distribution. Again, see what you want to do with this data. That determines the answer much more accurate.
Hope it helped. If you write about the task you want to do I can post the solution here.
Cheers,
UPDATE
Well, then the problem is pretty straight-forward. These unique words are probably not much meaningful here. I certainly recommend that you try BoW models first (TF-IDF and classic BoW) for modeling your corpus. Then tune the hyperparameters of models and using a simple Multinomial Naive Bayes you will get an acceptable result.
Your data does not count as that imbalanced. I once had a problem in which some classes had 3000-4000 samples and some only 20! That is certainly called imbalanced, but here you still have enough data to represent your minority class, and you will use Precision-Recall for evaluation instead of Accuracy, so you will be fine. I strongly recommend you have a look at this for a Python implementation and to see some imbalanced data in practice.
The DL thing is answered in the comment. |
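A minimal sketch of the suggested TF-IDF + Multinomial Naive Bayes baseline, reusing the sentences_train / y_train splits from your code (the min_df and max_df values are assumptions to tune):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

clf = make_pipeline(TfidfVectorizer(min_df=5, max_df=0.85), MultinomialNB())
clf.fit(sentences_train, y_train)

# per-class precision and recall are more informative than plain accuracy here
print(classification_report(y_test, clf.predict(sentences_test)))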
H: How to learn irrelevant words in an information retrieval system?
Right now my recommender system for information retrieval uses word embeddings together with TF-IDF weights, as written here:
http://nadbordrozd.github.io/blog/2016/05/20/text-classification-with-word2vec/
Using TF-IDF improves results, but I have the problem that irrelevant keywords (high-frequency words) still have a large impact.
Can I train a system such that it learns which words to pay attention to - preferably in an unsupervised way?
What can you suggest for better information retrieval using word embeddings?
AI: If you are working with TF-IDF then it's important to experiment with the min_df and max_df parameters. I guess you are on Python since you linked a Python tutorial. Here is the TF-IDF documentation and the text relating to the above parameters.
max_df : float in range [0.0, 1.0] or int, default=1.0 When building the vocabulary ignore terms that have a document frequency
strictly higher than the given threshold (corpus-specific stop words).
If float, the parameter represents a proportion of documents, integer
absolute counts. This parameter is ignored if vocabulary is not None.
min_df : float in range [0.0, 1.0] or int, default=1 When building the vocabulary ignore terms that have a document frequency strictly
lower than the given threshold. This value is also called cut-off in
the literature. If float, the parameter represents a proportion of
documents, integer absolute counts. This parameter is ignored if
vocabulary is not None.
You might find several rules of thumb on the web. Some of them suggest using a flat number on the min_df close to 5-7 documents and a percentage on the max_df about 80-85%. Maybe even lower. With this, you will be able to get rid of garbage, misspelt or unwanted tokens. Keep in mind that you need to try different combinations to get the right balance in your model. |
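A small sketch of how those parameters are set in practice (the exact thresholds are assumptions you should tune on your corpus):
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(
    min_df=5,     # ignore terms that appear in fewer than 5 documents
    max_df=0.85,  # ignore terms that appear in more than 85% of documents
)
X = vectorizer.fit_transform(documents)  # documents: your list of texts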
H: Using the validation data
I'm unclear on the exact process of using the validation data.
Let's say that I fit my neural network model and adjust hyperparameters using the training set and validation set. Do I then evaluate the test set on this model? Or do I recombine the validation and training sets and fit a fresh model with the hyperparameters that I found during the validation phase, and then evaluate on the test data? I have seen a number of different notebooks and examples that do both ways.
Surely, once I've found my hyperparameters, it makes sense to fit a fresh model using the full training set (recombined with validation set), since the validation loss has no effect on the weights.
AI: The common procedure is this:
Fit the model on train split and adjust hyperparameters using the training split and validation split, as you already said.
Then, to get a "final measure" of performance, train model on both training and validation splits using the hyperparams found during validation phase, and evaluate on test split. |
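Schematically, with scikit-learn-style splits (the 60/20/20 proportions are just an example, and build_model is a hypothetical helper that builds your network from a set of hyperparameters):
from sklearn.model_selection import train_test_split

# 1) hold out a test set, then carve a validation set out of the remainder
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25)

# 2) tune hyperparameters by training on (X_train, y_train) and scoring on (X_val, y_val)

# 3) refit with the chosen hyperparameters on train + validation, evaluate once on test
final_model = build_model(best_hyperparams)   # hypothetical helper
final_model.fit(X_trainval, y_trainval)
final_score = final_model.evaluate(X_test, y_test)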
H: Display a subset of classes in axes in ggplot
I have the following plot, is there any way in ggplot to display just the numbers 1 to 10 instead of all of them?
Numbers from 10 and after are not so important, but I need to display the ones before.
Thank you
AI: I used this kind of workaround, but if you have another solution, please enlighten me :)
df %>%
  ggplot() +
  geom_bar(mapping = aes(x = N %>% as.factor())) +
  xlab(label = 'N') +
  theme(axis.text.x = element_text(angle = 45, size = 8))
H: Guys I am trying to create an application to create training data
import os
import cv2
import matplotlib.pyplot as plt
from colorama import Fore
import random
import numpy as np
import pickle
training_data = []
def create_training_data():
path = os.getcwd()
path_dataset = os.path.join(path,'dataset')
categories = ['a','b']
for category in categories:
path_categories = os.path.join(path_dataset,category)
index = categories.index(category)
for img in os.listdir(path_categories):
try:
path = os.path.join(path_categories,img)
img_array = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
training_data.append([img_array,index])
except Exception as e:
print(Fore.RED+path)
print(Fore.GREEN+f"imported all data from cateogry {category}")
#For every object in the training dataset
#training_data[element][data type*]
#0 - Stores the values of all images
#1 - stores the values of all category
create_training_data()
print(training_data[1201][1])
plt.imshow(training_data[1201][0],cmap="gray")
rows,columns = training_data[0][0].shape
print(f"rows-{rows} cols-{columns}")
plt.show()
random.shuffle(training_data)
X = []
Y = []
for feature,label in training_data:
X.append(feature)
Y.append(label)
X = np.array(X).reshape(-1,rows,columns,1)
pickle_out = open(X.pickle,'wb')
pickle.dump(X,pickle_out)
pickle_out.close()
pickle_out = open(Y.pickle,'wb')
pickle.dump(Y,pickle_out)
pickle_out.close()
After I run this script I am getting an error:
Traceback (most recent call last):
File "trainingdata.py", line 55, in
pickle_out = open(X.pickle,'wb')
AttributeError: 'numpy.ndarray' object has no attribute 'pickle'
AI: The error in the line pickle_out = open(X.pickle,'wb') is that the name of file must be in quotes. So the correct code would be:
pickle_out = open('X.pickle','wb')
pickle.dump(X,pickle_out)
pickle_out.close()
Do the same for Y also. Also you can do it more concisely in the following way:
with open('X.pickle', 'wb') as f:
pickle.dump(X, f) |
H: Transfer learning - small database
I am trying to use transfer learning on medical data (ultrasound pictures). The problem is that I have a very limited picture database of 400 images (360 + 40). I am using ResNet50 (I don't think this is important, but maybe I'm wrong). ResNet as a feature extractor + SVM is not great, but the normalized confusion matrix is roughly:
1.0 0
0.4 0.6
Now, I wanted to fine-tune resnet. And the problem is that CM at the beginning looks like:
0.8 0.2
0.6 0.4
is something like this:
1.0 0
0.8 0.2
Below you can see training + test loss/accuracy.
Now I thought it was overfitting (due to too large a capacity-to-database ratio), but someone pointed out that the network might not be learning. Which is the case?
AI: Your graphs do not show that the model is failing to learn, since the training accuracy is high; if the model weren't learning anything, even the training accuracy would be low. You are also right that the model is overfitting, because it performs very well on the training set and poorly on the test set.
The overfitting scenario is also confirmed by the fact that you have a very small dataset. You might want to fine-tune all the layers of the resnet as shown here. Also, try early stopping and dropout to prevent the model from overfitting. |
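A rough Keras sketch of those suggestions (dropout and early stopping on top of a frozen ResNet50 base); the input size, head size, dropout rate and patience are assumptions to tune:
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dropout, Dense
from keras.callbacks import EarlyStopping

base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False              # start with the backbone frozen

x = GlobalAveragePooling2D()(base.output)
x = Dropout(0.5)(x)                      # regularization against overfitting
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50,
          callbacks=[EarlyStopping(patience=5, restore_best_weights=True)])
Once this head has converged, you can unfreeze some or all of the base layers and continue training with a small learning rate to fine-tune.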
H: unsupported operand type(s) for -: 'list' and 'list' using python
Here I have a data file and I designed a neural network to predict a value. I have three inputs. These three inputs affect the predicted value by subtracting and adding. If my three inputs are x1, x2, x3: X1 and X2 are added together and X3 is subtracted from that value to give the predicted value. This process happens in x_train. I wrote the code below, but it gives me this error. Can anyone help me solve this problem?
My code is,
data = pd.DataFrame(data,columns=['X1','X2','X3'])
data = data.values
a = []
for i in range(0,len(data)):
X1 = data[i][0]
a.append([X1])
b = []
for i in range(0,len(data)):
X2 = data[i][1]
b.append([X2])
c = []
for i in range(0,len(data)):
X3 = data[i][2]
c.append([X3])
x_train =[]
x =[]
for i in range(0,len(data)):
x_train.append([a[i] ,b[i], c[i]])
x.append((a[i] - b[i] + c[i]))
x_train =np.array(x_train)
x = np_array(x)
AI: You are facing the error on the line x.append((a[i] - b[i] + c[i])) due to expression a[i] - b[i] + c[i]. Let us see the how it is being calculated. Suppose I have the following data:
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Then the values of a, b and c are:
a
Out[20]: [[1], [4], [7]]
b
Out[21]: [[2], [5], [8]]
c
Out[22]: [[3], [6], [9]]
Let us take i=0 and print a[0]:
Out[23]: [1]
Hence what you are calculating is [1] - [4] + [7] instead you should be calculating 1 - 4 + 7. To do this change the line with error to:
x.append((a[i][0] - b[i][0] + c[i][0])) |
H: Can I use the Softmax function with a binary classification in deep learning?
I want to create a deep learning model (CNN) for binary classification; can I use the softmax function instead of the sigmoid function for binary classification?
Adding the classification layer to the model, will be like this
model.add(Dense(2, activation='softmax'))
AI: For binary classification, it should give the same results, because softmax is a generalization of sigmoid for a larger number of classes. |
H: Intuitive Explanation of R-squared
Here is a nice definition of R-squared that I have found on the internet.
R-squared is a statistical measure of how close the data are to the
fitted regression line. It is also known as the coefficient of
determination, or the coefficient of multiple determination for
multiple regression.
The definition of R-squared is fairly straight-forward; it is the
percentage of the response variable variation that is explained by a
linear model. Or:
R-squared = Explained variation / Total variation
R-squared is always between 0 and 100%:
0% indicates that the model explains none of the variability of the
response data around its mean. 100% indicates that the model explains
all the variability of the response data around its mean. In general,
the higher the R-squared, the better the model fits your data.
However, there are important conditions for this guideline that I’ll
talk about both in this post and my next post.
Could someone please explain to me what does "variability of the response data around its mean" stand for, and also "it is the percentage of the response variable variation that is explained by linear model"? I am having trouble understanding these concepts.
AI: --variability of the response data around its mean---
This refers to how far the observed values of the response $y$ are spread around their mean $\bar{y}$. It is measured by the total sum of squares, $SS_{tot} = \sum_i (y_i - \bar{y})^2$. A model that always predicts the mean explains none of this variation.
--it is the percentage of the response variable variation that is explained by a linear model--
The model leaves some variation unexplained, measured by the residual sum of squares $SS_{res} = \sum_i (y_i - \hat{y}_i)^2$. R-squared compares the two: $R^2 = 1 - SS_{res}/SS_{tot}$. So $R^2 = 0$ means the model does no better than simply predicting the mean, and $R^2 = 1$ (100%) means the fitted values reproduce the observations exactly. In general, the higher the R-squared, the more of the spread of $y$ around its mean is accounted for by the model.
H: Should I expect major performance improvements by scaling my features?
I'm trying to decide whether I should scale my features & responses for training, and I'm in a situation where I can't just try both scaling and not scaling.
My features currently have an std around 0.05, and the behavior of the timeseries I'm studying is very much scale dependent (0.5 means a VERY different thing than a 0.05 in terms of what's happening in the market).
Can I expect major improvements in performance, optimizer-wise, if I scaled my features to have std of 1?
My models are different combinations of MLP and 1d conv, and my algorithm is gradient descent with the Adam optimizer.
Thank you! [and yes; I'm another one of those people who are trying to forecast the stock market]
Louis is right that I should try both. But I'm leaving the question up just in case someone comes along and gives a mathematical proof of exactly normalization is useful /useless.
AI: It's impossible to know whether the performance will improve without knowing what algorithm you are using. Even then the only way to tell is to try both.
That being said, I can't think of any scenario that standardisation would hurt the performance. |
H: How to put multiple features into RNN input vector
I am trying to code a recurrent neural network (LSTM) to create music in python and was considering using multiple features instead of just the note pitch as an input into the network. Initially I had just the note pitch so it was fed into the network by one-hot encoding it. The other two features I want to add are the note duration and the offset between the notes. How should the input vector be organised so that all the data is fed through the network?
I have tried combining all of the data into a long vector with all 3 features one-hot encoded and then concatenated but this caused the output to become 'NaN'. Any help would be appreciated.
Link a gist of my code:
https://gist.github.com/treutm/1b1f49e3d0a9de53cd67c136292f329f
AI: A common way to input several features to an LSTM (or any RNN) is, as you did, to concatenate them in a vector. I suspect your NaN are related to a different issue in the code, and I recommend you to debug it and see when and why it happens.
A different way of combining several features is using embeddings for each feature and combine them via concatenation. For example, for the note pitch, you have an embedding for each note, for the note duration, the same, and so on. The input to the RNN would be a concatenation of that.
A third option would be to have an embedding to all possible combinations of the embeddings. That would be an embedding for each triplet of note pitch, note duration and offset. This representation can be also powerful.
My recommendation is that you start with the easiest to implement, and if the result is not satisfactory in terms of model performance, think about the next one. |
H: Generating image embedding using CNN
I have a CNN model using the CIFAR-10 dataset. The model was built using Keras (TensorFlow).
Now, based on this model, I have to generate an image embedding (vector). That means an input image comes in and I have to output the embedding vector of that image.
I am not sure how to do that. This is not a straightforward prediction/classification output; rather, I have to output the embedding of the input image (which is of course a predicted embedding, but an embedding vector nonetheless).
Any suggestion?
AI: You should use something like an autoencoder.
Basically, you pass your images through a CNN (the encoder) with decreasing layer sizes.
The last layer of this network is the one that produce the embeddings (that is, a lower dimensional representation of your input), and the number of neurons you use here is the length of your vector embedding for the input images.
Now, your embeddings are useful only if they actual encode the data in your images.
To achieve this, you need another network (the decoder) that takes as input the image embedding and outputs an image with the same dimension of the input.
Here, you try to minimize a loss function that tells you the distance between the image you generate from the embedding and the initial image (it might be Euclidean distance between pixel values).
Finally, if you need to output the embedding of an image, you just need to pass the image through the encoder network and collect the output.
Here's a simple tutorial. |
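A minimal Keras sketch of that encoder/decoder idea for 32x32x3 CIFAR-style images (the layer sizes and the 128-dimensional embedding are assumptions):
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Dense, Reshape

inp = Input(shape=(32, 32, 3))
x = Conv2D(32, 3, activation='relu', padding='same')(inp)
x = MaxPooling2D(2)(x)                                      # 16x16
x = Conv2D(64, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(2)(x)                                      # 8x8
embedding = Dense(128, activation='relu')(Flatten()(x))     # the image embedding

x = Dense(8 * 8 * 64, activation='relu')(embedding)
x = Reshape((8, 8, 64))(x)
x = UpSampling2D(2)(x)
x = Conv2D(32, 3, activation='relu', padding='same')(x)
x = UpSampling2D(2)(x)
out = Conv2D(3, 3, activation='sigmoid', padding='same')(x)  # reconstructed image

autoencoder = Model(inp, out)
encoder = Model(inp, embedding)           # use this to get embeddings for new images
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(x_train, x_train, ...)  then  vectors = encoder.predict(images)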
H: What does the "Loss" value given by Keras mean?
I setup my neural net to use mean square error as shown below. To my understanding (and from reading the documentation) this means that if the correct result of a row is 0.7 and the net predicts 0.8 the contribution to the loss by this entry is (0.8 - 0.7) squared = 0.01
from keras.models import Sequential
from keras.layers import Dense
#...build up neural network layers here...
net.compile(optimizer = 'adam', loss = 'mean_squared_error')
net.fit(training_data, training_results, batch_size = 4, epochs = 100)
I get the following output.
Epoch 100/100
1190/1190 [==============================] - 0s 133us/step - loss: 0.0082
Wow the the loss is tiny, my little neural network is doing so well! However if I validate the result on my original training data
prediction = net.predict(training_data)
prediction_delta = (prediction - training_results)
Although some of the values in prediction_delta are small, overall the loss is way higher than 0.0082, with single values as high as 0.44. Note that this is for the same training data used to fit the net and is not the test data (which also shows similar results), so I would expect to get the value 0.0082 back. How does Keras calculate this loss number?
AI: When you evaluated the model, you didn't use the same way to calculate the loss as it was calculated during training.
During the training, you used mean_squared_error:
net.compile(optimizer = 'adam', loss = 'mean_squared_error')
So, you should use that same error function (mean_squared_error) while evaluating.
If you do it manually, you need this:
import numpy as np
prediction = net.predict(training_data)
prediction_delta = np.mean((prediction - training_results) ** 2)
Or you could let the Keras do it for you automatically with evaluate() function:
net.evaluate(training_data, training_results) |
H: Dataframe has no column names. How to add a header?
I am using a dataset to practice for building a decision tree classifier.
Here is my code:
import pandas as pd
tdf = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data', sep = ',', header=0)
tdf.info()
The columns have no names, and I have problems adding the column names; I have already tried reindex, pd.melt, rename, etc.
The column names I want to assign are:
Sample code number: id number
Clump Thickness: 1 - 10
Uniformity of Cell Size: 1 - 10
Uniformity of Cell Shape: 1 - 10
Marginal Adhesion: 1 - 10
Single Epithelial Cell Size: 1 - 10
Bare Nuclei: 1 - 10
Bland Chromatin: 1 - 10
Normal Nucleoli: 1 - 10
Mitoses: 1 - 10
Class: (2 for benign, 4 for malignant)
Thanks,
AI: For any dataframe, say df , you can add/modify column names by passing the column names in a list to the df.columns method:
For example, if you want the column names to be 'A', 'B', 'C', 'D', use this:
df.columns = ['A', 'B', 'C', 'D']
In your code, use header=None instead of header=0. header=0 tells pandas to take the first row of the file as the column headers (and simply removing the argument would not help, because pandas would still infer the header from the first row); header=None treats every row as data. Once that is done, use the above to assign the column names.
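Putting it together for this dataset, using the column names from your question:
import pandas as pd

names = ['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size',
         'Uniformity of Cell Shape', 'Marginal Adhesion', 'Single Epithelial Cell Size',
         'Bare Nuclei', 'Bland Chromatin', 'Normal Nucleoli', 'Mitoses', 'Class']

tdf = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/'
                  'breast-cancer-wisconsin/breast-cancer-wisconsin.data',
                  header=None, names=names)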
H: Newbie question on restricted boltzmann machine
I’m quite a newbie to RBMs, so I’m trying to understand how you feed real-valued data to them given that all the visible and hidden units are binary?
AI: Here is an answer which might help: https://www.quora.com/What-are-best-approaches-to-modeling-real-value-continuous-input-variables-in-RBM-pretraining
In short, the common approach is a Gaussian-Bernoulli RBM, where the binary visible units are replaced by linear units with Gaussian noise, so real-valued inputs can be modeled directly.
H: How large of a value should a weight have in a neural network?
If you're assigning random values to the weights in a neural network before back-propagation, is there a certain maximum or minimum value for each weight ( for example, 0 < w < 1000 ) or can weights take on any value? Could a network potentially have weights of 0.1, 0.0009, and 100000?
AI: One of the problems that can occur when training a neural network is known as the exploding gradient problem. A poorly initialised network could lead to a large increase in the norm of the gradient during training. These larger values will basically run the weights out of the number precision of the computer, resulting in NaN values.
This post gives more information on the exploding gradient problem and how to solve it. A related post discusses different initialization strategies. |
H: Recognizing circled numbers on a piece of paper
I've built a handful of CNNs using TensorFlow, Keras and PyTorch for recognizing text/numbers/objects in an image. What I'm trying to figure out now is how to recognize numbers on a piece of paper that are circled with a pen or pencil. On a piece of paper there are two rows of 6 sets of numbers, and someone will go through and circle 2-3 sets of numbers; I'd like to be able to take a picture or scan the paper and have a model recognize which number sets were circled. What would be the best way to start going about this?
AI: The problem you are describing is known as the Object Localization problem. The main DNN architecture used today includes R-CNN and related algorithms as well as YOLO algorithm. It's a well-studied problem in computer vision. you can found plenty of information online. |
H: Numpy array from pandas dataframe
I am new to using Python for data science.
What is the difference between selecting a column with df['name'].values, df.iloc[:,1].values and df.iloc[:,1:2].values? They return different types of numpy arrays. Why?
AI: Not entirely sure what you mean by "numpy vectors" but am assuming the question is why each of these methods return essentially (almost but not quite) the same output...
Reference: pandas docs.
df['name'].values is a "Series corresponding to colname". In other words, you're just calling the data from that column and putting the in an array by calling .values.
.iloc is a "Purely integer-location based indexing for selection by position". Same as above but you're calling the indexed location of the column where df.iloc[:, 1] is df.iloc[all rows, col 2]. Probably an easier method to call multiple consecutive columns in a DataFrame then writing out each individual column name.
df.iloc[:,1:2].values <-- creates a 2-D array of shape (n_rows, 1), where each row's value sits in its own sub-array. This happens because slicing columns with 1:2 returns a DataFrame (even though it contains a single column), whereas df.iloc[:, 1] returns a Series; calling .values on a DataFrame gives a 2-D array, while on a Series it gives a 1-D array.
H: What's the correct reasoning behind solving the vanishing/exploding gradient problem in deep neural networks.?
I have read several blog posts where the solution to solve the vanishing/exploding gradient problem in a deep neural network is suggested to be using Relu activation function instead of tanH & sigmoid.
But, I have encountered an explanation in Prof. Andrew Ng's lecture that a partial solution to the vanishing gradient problem is a better or more careful choice of the random initialization of the weights in your neural network.
i.e the solution is:
To set the variance of Wi to be equal to 1/n, where n is the number of input features that are going into a neuron. Along with the assumption that the input features of activations are roughly mean 0 and standard variance 1. So, what it's doing is that it's trying to set each of the weight matrices w so that it's not too much bigger than 1 and not too much less than 1, therefore, it doesn't explode or vanish too quickly.
So, if you are using a ReLU activation function, then setting the variance of Wi equal to 2/n (i.e. scaling the random weights by sqrt(2/n), often called He initialization) works better,
and if you are using a tanh activation function, then setting the variance of Wi equal to 1/n (scaling by sqrt(1/n)) works better,
or in some cases, it's being suggested to use Xavier initialization
Also, if needed, we can treat this variance scaling as another hyperparameter by multiplying a factor into the above formula and tuning that multiplier as part of the hyperparameter search.
Therefore, choosing a reasonable scaling for the weight initialization helps the weights not to explode or decay to zero too quickly, which in turn helps in training a reasonably deep network without the weights or gradients exploding or vanishing too much, rather than relying on ReLU alone.
Please correct me if my understanding is wrong or incomplete!
AI: I think the two aspects you mention are two faces of the same medal: if your weights are too high or low, the activation of a layer ends up being too high or low. If you use $tanh(z)$ or $sigmoid(z)$ as activation functions, you'll end up with values that will have a derivative almost to zero, and the gradient will vanish as it is back-propagated into the network.
By using $relu(z)$, instead, you use a linear function for $z>0$ and therefore the derivative is constant as the activation grows. This helps the gradient which does not approach zero while it is back-propagated.
This works in every case: both if you use weights that are too high or too small.
Tanh and sigmoid are similar in the sense that they are basically linear near $z=0$, but almost flat when $|z|>>0$, ReLU solves the problem in the latter case, because it becomes linear for any $z>0$.
On the other hand, another strategy is to prevent $z$ from being too large or too small, and therefore you attempt to keep it near the origin ($z=0$). In this way, both tanh and sigmoid are linear, as ReLU would have been for any $z>0$.
The expected result is the same: the derivative of the function is almost linear, and therefore the gradients can traverse the network without vanishing or exploding.
Of course, there are other factors here that must be considered (e.g. ReLU clamps to 0 every negative value, which has other sort of effects), but beside these, in both cases you get a similar outcome (non-vanishing gradients) with different approaches (by tweaking the activation function or by tweaking the weights). |
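To make the initialization scalings discussed above concrete, here is a small NumPy sketch (the layer sizes are arbitrary):
import numpy as np

n_in, n_out = 256, 128

# He initialization (recommended with ReLU): variance 2/n_in
W_relu = np.random.randn(n_out, n_in) * np.sqrt(2.0 / n_in)

# Xavier-style initialization (recommended with tanh): variance 1/n_in
W_tanh = np.random.randn(n_out, n_in) * np.sqrt(1.0 / n_in)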
H: clustering with heterogeneous (quantitative and qualitative)data?
I'm a PhD student and I have the results of some approaches (algorithms) that I would like to analyze. The data (results) are stored in csv files as follows: the rows describe each algorithm with its parameters and the result obtained; some columns (characteristics) are quantitative and others are qualitative (enumerated types), and the last column is the result, which represents the efficiency of the algorithm: a numerical value that I can convert into classes (intervals of values). I would first like to highlight the tendency of the best and worst algorithms, and then find which features contributed to these results. I'm a beginner in machine learning and I searched a lot on the net but didn't find the appropriate method. Could you point me to a clustering/learning method for this purpose?
I would be very grateful.
Best regards.
AI: K-means algorithm cannot be directly used for data with both numerical and categorical values because of its objective function. K-means uses Euclidean distance, which cannot be defined for categorical data.
However, there is a simple method which combines Euclidean distance and Hamming distance to measure the similarity between instances that include both numerical and categorical features. In this method, Euclidean distance is used for the numerical features and Hamming distance for the categorical features. Together, with appropriate weights, they form an objective function that can handle mixed data; this is the k-prototypes algorithm.
For more information check the relevant paper
https://grid.cs.gsu.edu/~wkim/index_files/papers/kprototype.pdf |
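As a sketch of how this could look in Python, assuming the third-party kmodes package (which implements k-prototypes) is installed; the toy data and column indices below are made up:
import numpy as np
from kmodes.kprototypes import KPrototypes

# toy mixed data: one numeric column followed by two categorical columns
X = np.array([[3.2, 'greedy', 'small'],
              [1.1, 'exact', 'large'],
              [2.9, 'greedy', 'small'],
              [0.8, 'exact', 'large']], dtype=object)

kproto = KPrototypes(n_clusters=2, random_state=0)
labels = kproto.fit_predict(X, categorical=[1, 2])  # indices of the categorical columns
print(labels)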
H: Least Squares optimization
The cost function $J(\beta) = (Y - X\beta)^T (Y - X\beta)$ is minimized to estimate the weights $\beta$. Here $X$ is the data and $Y$ is the output. Taking the derivative and setting it to zero gives the weight estimates $\hat{\beta}$. This is a Least Squares formulation.
1) Can Least Squares (LS) be used when the observations (outputs) $y_i$, $i=1,2,..,N$ ($N$ examples) are categorical? I don't quite see how classification with LS would work, in particular how the derivative is handled in the categorical case.
2) Can LS be used when the data $X$ is a one-hot encoding? Would the formulation and derivative be the same?
AI: Coming to your first question:
Yes, you can, but it's not advisable to use LS as the cost function for a classification task, since the optimization problem becomes non-convex. In logistic regression your model is a sigmoid (a non-linear function), so the squared-error cost has a lot of local optima (think of the cost surface as a Himalayan valley). When you use gradient descent (an iterative approach) to minimize it, it can get stuck in a local minimum instead of the global minimum, so you will not learn the best model parameters for your data. Therefore, we change the cost function from LS to log loss for the classification task, which gives a nice convex, bowl-shaped cost surface where gradient descent can reach the global minimum and return the best model parameters, which you can then happily use for predictions (on the test set)!
More on it here
Much more theoretical understanding of it you could find in ISLR at page 129 4.2 Why Not Linear Regression? |
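For reference (my addition, not part of the original answer), the log loss used in place of least squares for binary classification is
$$J(\beta) = -\frac{1}{N}\sum_{i=1}^{N}\left[ y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \right],$$
where $\hat{y}_i = \sigma(x_i^T \beta)$ is the sigmoid output; unlike the squared error composed with a sigmoid, this cost is convex in $\beta$.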
H: Does Gradient Boosting detect non-linear relationships?
I wish to train some data using the the Gradient Boosting Regressor of Scikit-Learn.
My questions are:
1) Is the algorithm able to capture non-linear relationships? For example, in the case of y=x^2, y increases as x approaches negative infinity and positive infinity. What if the graph looks like y=sin(x)?
2) Is the algorithm able to detect interactions/relationships among the features? Specifically, should I add features that are the sums/differences of the raw features to the training set?
AI: The GB method works by minimizing a loss function and by splitting each node in a fashion that produces highly pure leaves. There is no fixed parametric formula being estimated, so it can capture all sorts of relations between the target and the features, including non-linear ones such as y=x^2 or y=sin(x).
However, I wouldn't put highly correlated variables in the model, as:
For gradient boosted trees, there's generally no strong need to check
for multicollinearity because of its robustness. But practically
speaking, you still should do some basic checks. For example, if you
discover that two variables are 100% the same, then of course there's
no point in keeping both. Even if it's 98% correlated, it's usually
okay to drop one variable without degrading the overall model.
Source: Quora |
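As a quick illustrative check (my addition; the data below is synthetic), you can verify that the regressor picks up a non-linear shape like y=sin(x):
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

X = np.linspace(-5, 5, 500).reshape(-1, 1)
y = np.sin(X).ravel()

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X, y)
print(model.score(X, y))  # R^2 close to 1: the non-linear relationship is captured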
H: Splitting image dataset with few subjects but many data
I'm carrying out training/testing of a convolutional neural network for facial expression recognition with various datasets - all labelled by 7 emotion classes.
For other datasets, there are a large number of mostly unique subjects so I randomly split. In this case, however, there are only 6 subjects but a large number of images for each subject in each class. Randomly splitting seems ineffective because of the similarity in images - think of how an emotion changes per frame.
Is the best method to separate an entire subject for testing? Or something else?
I did run the network with random splitting and achieved 100% validation accuracy so I believe that is unlikely to be the best method. Thanks for your time.
AI: I think you're hitting on the fact that by training and testing on the same subjects your model is not going to generalize to new subjects very well. If you're only interested in predicting emotions for these particular subjects, you are taking the right approach. However, if you want to generalize your model to new, unseen subjects, you should split your data such that subjects in the training set are not in the test set, and vice versa, to get a more realistic test score. Most likely it will not perform as well, and you should consider collecting more data on new subjects if possible.
Here is a similar question on Stack Exchange. |
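A subject-wise split can be done with scikit-learn's group-aware splitters; a minimal sketch (X, y and subject_ids are placeholders for your images, labels and per-image subject IDs):
from sklearn.model_selection import GroupShuffleSplit

gss = GroupShuffleSplit(n_splits=1, test_size=2/6, random_state=0)  # e.g. hold out 2 of the 6 subjects
train_idx, test_idx = next(gss.split(X, y, groups=subject_ids))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]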
H: How to fill in missing value of the mean of the other columns?
I had a movie dataset including 'budget' and 'genres' attributes.
I'd like to fill in the missing value of budget with the mean budget of each genre.
I first create two dataframes with or without budget.
BudgetNull = data[data['budget'].isnull()]
BudgetNotNull = data[data['budget'].notnull()]
Then, calculate the mean budget of each genre based on the BudgetNotNull dataset.
budget_of_genre = BudgetNotNull.groupby('genres')['budget'].mean()
Finally, I'd like to fill in the budget of BudgetNull based on its genre.
For instance, 'Marine Boy' is an action movie, so its missing budget should be filled in with budget_of_genre['Action'].
How do I do this with a for loop? Or is there another way?
AI: Using a transform as follows could work:
df["budget"] = df.groupby("genres")["budget"].transform(lambda x: x.fillna(x.mean()))
The mean calculation uses only the non-null values in its calculation. So the mean of each group's non-null values are imputed to that same group's null values.
See also this question on Stack Overflow. |
H: How to turn linear regression into logistic regression
I followed these articles to implement logistic regression.
I'm confused, however, because after training the model and getting the weights for my variables I don't know how to use the one-hot vector to turn this into confidence scores for the different classes.
I've got the formula: y' = x1W1 + x2W2 + x3W3 + b
I've got values for all Ws and b.
I've got my one-hot vector: [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
How do I combine all this into confidence for each class?
AI: You should use softmax to convert your outputs into probabilities. For only two classes, you have the formula $P(x \in \text{class 1}) = \frac{\exp(y_{\text{class1}})}{\exp(y_{\text{class1}}) + \exp(y_{\text{class2}})}$. It is mentioned in your article.
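A minimal sketch of how this could be computed (my addition; the scores array is made up and stands for one y' = x1W1 + x2W2 + x3W3 + b value per class):
import numpy as np

def softmax(scores):
    exp = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical per-class linear outputs
print(softmax(scores))              # confidence for each class, sums to 1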
H: How to find what values are assigned to labels that were encoded using LabelEncoder?
places = ['India','France','India','Australia','Australia','India','India','France']
Here places is a pandas Series. Now how can I find which label was encoded with which value, e.g. India = 0, Australia = 1, France = 2?
This is fine for a few labels, but what if there are hundreds of labels in a huge dataset?
AI: Use the classes_ attribute of your LabelEncoder. For example:
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(places)
print(le.classes_)
The index of the label in le.classes_ is the encoded value of the label. See another example here. |
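To turn this into an explicit lookup table, which is handy when there are hundreds of labels, a small extension of the snippet above should work (note that LabelEncoder assigns codes in sorted label order):
mapping = dict(zip(le.classes_, le.transform(le.classes_)))
print(mapping)  # e.g. {'Australia': 0, 'France': 1, 'India': 2}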
H: Predicting Intent to do X with a confidence score or intent percentage score?
I have a data set like:
did_purchase action_1_30d action_2_20d action_2_10d ....
False 10 20 100
True ....etc
Where did_purchase shows whether the customer purchased or not, and the columns indicate the volume of actions taken before the purchase (or non-purchase) event.
So, for the first row the customer did 10 of action_1 within 30 days of the purchase event, but didn't purchase in the end.
I have been using sklearn's LogisticRegression to predict the did_purchase false/true, and can get about 89% accuracy, which is nice.
However, I'd like a percentage intent score instead. So it could say user-321 has a 46% chance of purchasing in the next 10 days.
What would be a good algo/approach for this?
AI: You could use the probabilities output by LogisticRegression's predict_proba method.
Almost all classifiers in sklearn give you a probability. One exception is the support vector classifier, which gives you a point's distance to the decision hyperplane; this can be interpreted as a confidence (you can get probabilities for the support vector classifier, but only through a computationally costly cross-validation process).
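A minimal sketch (my addition; X_train, y_train and X_new are placeholders for your feature matrix, the did_purchase labels, and the new users to score):
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(X_train, y_train)                       # y_train = did_purchase (True/False)
purchase_prob = clf.predict_proba(X_new)[:, 1]  # probability of the positive (purchase) class
print(f"user-321 has a {purchase_prob[0]:.0%} chance of purchasing")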
H: Constant Learning Rate for Gradient Descent
Suppose we have a learning rate $\alpha_n$ for the $n^{th}$ step
of the gradient descent process. What would be the impact of using a constant value for $\alpha_n$ in gradient descent?
AI: Intuitively, if $\alpha$ is too large you may "shoot over" your target and end up bouncing around the search space without converging. If $\alpha$ is too small your convergence will be slow and you could end up stuck on a plateau or a local minimum.
That's why most learning rate schemes start with somewhat larger learning rates for quick gains and then reduce the learning rate gradually. |
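As one concrete example of such a scheme (my addition, and only one of several common choices), time-based decay sets
$$\alpha_n = \frac{\alpha_0}{1 + k\,n},$$
where $\alpha_0$ is the initial learning rate and $k$ is a decay constant; a constant learning rate is the special case $k = 0$.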
H: How to use K.function with two inputs and a concatenate layer?
In Keras, I try to compute use the K.function between some layers. But I get an error when I use a Concatenate layer.
Here is a minimal code you can try yourself:
import numpy as np
from keras.layers import *
from keras.models import Model
from keras import backend as K  # needed for K.function below
def test_model(concat, conv2_dim=32, kernel=(3,3), dropout=0,
optimizer='adam', extra_conv=False, padding='valid',
width_dense=1024, batch_norm=False):
img_shape = (64,64,1)
image_input = Input(shape=img_shape)
entropy_vector = Input(shape=(1,))
conv = Conv2D(conv2_dim, kernel_size=kernel,
activation='relu', padding=padding,
input_shape=img_shape)(image_input)
conv = MaxPooling2D(pool_size=(2, 2), padding=padding,)(conv)
final_conv = Flatten()(conv)
dense = Dense(width_dense, activation='relu')(final_conv)
concat_layer = Concatenate()([dense, entropy_vector])
if(concat): #Skipping
flat = Dense(width_dense//4, activation='relu')(concat_layer)
else:
flat = Dense(width_dense//4, activation='relu')(dense)
output = Dense(1, activation='sigmoid')(flat)
model = Model(inputs=[image_input, entropy_vector], outputs=[output])
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
return model
X = [np.ones((1,64,64,1)), np.ones(1,)]
m = test_model(concat=False) # you can try with False
fn = K.function(m.input, [m.output])
m.predict(X)
fn(X)
But I get this error if concat=True in the test_model function:
InvalidArgumentError: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,1024] vs. shape[1] = [1]
[[{{node concatenate_18/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](dense_112/Relu, _arg_input_44_0_1/_4327, concatenate_18/concat/axis)]]
[[{{node dense_114/Sigmoid/_4329}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_68_dense_114/Sigmoid", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
AI: Use X = [np.ones((1,64,64,1)), np.ones((1, 1))]. The second input is declared as Input(shape=(1,)), so Keras expects a 2D array of shape (batch_size, 1) for it. np.ones(1,) only has shape (1,), which is why the Concatenate layer complains that the ranks of its input tensors don't match.
H: could not broadcast input array from shape (2,3) into shape (3) while using timestamp to build neural network in python
Here I want to predict a value every 60 minutes. I have 540 data points with three inputs, so I wrote code with time steps and it gave me the error below. Can anyone help me solve this problem?
my code :
y=data['y1'].astype(int)
cols=['x1', 'x2', 'x3']
x=data[cols].astype(int)
n = x.shape[0]
p = x.shape[1]
x = x.values
y = y.values
train_start = 0
train_end = int(np.floor(0.8*n))
test_start = train_end+1
test_end = n
x_train = x[np.arange(train_start, train_end), :]
x_test = x[np.arange(test_start, test_end), :]
y_train = y[np.arange(train_start, train_end), :]
y_test = y[np.arange(test_start, test_end), :]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
num_time_steps = 9
num_features = x.shape[1]
x_train = np.zeros((x_train.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
x_test = np.zeros((x_test.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
for i in range(len(x_train)):
for timestep in range(num_time_steps):
x_train[i][timestep] = x_train[i + timestep]
for i in range(len(x_test)):
for timestep in range(num_time_steps):
x_test[i][timestep] = x_test[i + timestep]
y_train = y_train[num_time_steps - 1:]
y_test = y_test[num_time_steps - 1:]
Changed code:
train_end = 80
x_train=x[0: train_end ,]
x_test=x[train_end +1: ,]
y_train=y[0: train_end]
y_test=y[train_end +1:]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
num_time_steps = 9
num_features = x.shape[1]
x_train_n = np.zeros((x_train.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
x_test_n = np.zeros((x_test.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
for i in range(len(x_train_n)):
for timestep in range(num_time_steps):
x_train_n[i][timestep] = x_train[i + timestep]
for i in range(len(x_test_n)):
for timestep in range(num_time_steps):
x_test_n[i][timestep] = x_test[i+timestep]
y_train_n = y_train[num_time_steps - 1:]
y_test_n = y_test[num_time_steps - 1:]
Error: could not broadcast input array from shape (2,3) into shape (3)
AI: First of all, you should use separate variables for the 3D and 2D x_trains:
x_train_n = np.zeros((x_train.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
x_test_n = np.zeros((x_test.shape[0] - num_time_steps + 1, num_time_steps, num_features), dtype="float32")
And, your outer loop should span the new x_train, i.e. x_train_n:
for i in range(len(x_train_n)):
for timestep in range(num_time_steps):
x_train_n[i][timestep] = x_train[i + timestep].squeeze()
Change the other loop (the one for the test case) as well. |
H: Is it possible to plot decision boundaries for only a subset of features?
I have a sklearn Random Forest classifier with 59 features as input. I'd like to plot the decision boundaries of only two features at indices i1 i2.
If I use the average/median values for the remaining features, the classifier ends up in a path that ignores the features i1/i2.
I'd like to plot the decision boundary of i1/i2 while ignoring the remaining features.
Is this possible? or useful at all?
AI: For decision trees, this can be both possible and useful. In a random forest (RF), each decision tree has its own split boundaries for these features. Since RF trains each tree on a subset of the original features, some trees may not use these features at all and therefore have no boundaries on them; this can also happen due to pruning. So you have a set of boundaries, which can be plotted together to get a feeling for how the trees in the forest learnt your dataset. However, these boundaries also depend on the other features: if some more explanatory feature is selected in a higher node in some trees, the low-level thresholds might mislead you.
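One way to extract those per-tree boundaries from a fitted sklearn forest is sketched below (my addition; rf and the feature index i1 are placeholders, while tree_.feature and tree_.threshold are part of sklearn's tree structure):
import numpy as np

def split_thresholds(forest, feature_index):
    # Collect every split threshold the forest learned for one feature
    thresholds = []
    for est in forest.estimators_:
        tree = est.tree_
        mask = tree.feature == feature_index  # nodes that split on this feature
        thresholds.extend(tree.threshold[mask].tolist())
    return np.array(thresholds)

# e.g. a histogram of split_thresholds(rf, i1) shows where the trees cut feature i1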
H: Keras input shape returning an error
I am currently learning about Keras and have a problem with the input shape of a Dense layer. I am working with the MNIST dataset. I understand that the shape of the training images is (60000, 28, 28), and I also understand that Keras ignores the first dimension because it is the batch size, so the input shape given to the Dense layer should be (28, 28). But when I use that I get an error, whereas an input shape of (784,) makes the model run. Could someone please explain why that is?
from keras.datasets import mnist
from keras import models, layers

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28,28)))
network.add(layers.Dense(10, activation='softmax'))
AI: You always need to flatten your pictures when connecting the input to a Dense layer in Keras (note that this is not the case for CNNs or RNNs). The reason is that when the Dense layer is built, the input dimension is taken from the last element of the input shape (input_dim = input_shape[-1], as you can see in the Dense layer code). Therefore, although you are passing an input of (28, 28), Keras only uses the trailing 28 as the feature dimension. This also explains why an input shape of (784,) does work.
You can check the Dense layer code here |
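Concretely, you could either reshape the data to (60000, 784) before training, or let the network do it with a Flatten layer, for example (a sketch reusing the names from your snippet):
network = models.Sequential()
network.add(layers.Flatten(input_shape=(28, 28)))  # turns each 28x28 image into a 784-vector
network.add(layers.Dense(512, activation='relu'))
network.add(layers.Dense(10, activation='softmax'))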
H: Neural Network Initialization - Every layer?
Does every layer of a Neural Network require weight initialization or just the first? Does the first layer feed into the next layer and initialize itself? My intuition is that every layer needs its own initialization but I'm finding it hard to see that said explicitly. Thanks!
AI: Yes, every layer that contains weights needs to be initialised.
Every weight is just a number (usually a decimal, floating point number). It has to be initialised with some value so that the algorithm (e.g. backpropagation) has some values to work with.
Any learned or adapted parameters need to be initialised, as they will be updated during training. For example, the per-parameter state kept by momentum is also initialised once and then changed over time.
There are "layers" that don't need initialisation. I mean layers that perform some operations, but don't have dynamic weights; weights that are altered during training.
An example would be a standard dropout layer (with no learned parameters). This is called a layer, but doesn't require initialisation as it doesn't have any weights that are changed during training. |
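In Keras, for instance, every weight-bearing layer takes its own initialiser; a small sketch (the layer sizes are arbitrary):
from keras import models, layers

network = models.Sequential()
network.add(layers.Dense(512, activation='relu', kernel_initializer='he_normal', input_shape=(784,)))
network.add(layers.Dropout(0.5))  # has no weights, so nothing to initialise
network.add(layers.Dense(10, activation='softmax', kernel_initializer='glorot_uniform'))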
H: Computing number of batches in one epoch
I have been reading through Stanford's code examples for their Deep Learning course, and I see that they have computed num_steps = (params.train_size + params.batch_size - 1) // params.batch_size [github link].
Why isn't it num_steps = params.train_size // params.batch_size instead?
AI: The double slash in Python stands for floor division (it rounds down to the nearest whole number), so params.train_size // params.batch_size would always drop the last, smaller-than-batch-size batch whenever train_size is not a multiple of batch_size. Adding batch_size - 1 before dividing turns the floor into a ceiling, so that partial batch is counted too.
For example:
Given a dataset of 10,000 samples and batch size of 15:
# Original calculation:
# (10000 + 15 - 1) // 15 = floor(667.6) = 667
# Your calculation:
# 10000 // 15 = floor(666.667) = 666
Your formula calculates the number of full batches; theirs gives the total number of batches, including the final partial one.
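An equivalent and arguably more readable way to write the same thing (my addition, assuming the same params object as in the question):
import math
num_steps = math.ceil(params.train_size / params.batch_size)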
H: Normally distribute occurrences or counts
I am creating a mock of sales data. One of the columns is salesperson_id where each id can occur more than once (a salesperson can have multiple sales). I want to generate this column in such a way that if I create a chart of distribution of sale count (not total sales) per salesperson_id, it would be a normal curve.
Let's say I have an array of unique salesperson_id values called salespersons, and I want to generate n records. For example, I have 6 salespersons and I want to generate a total of 14 records. This would be the distribution of their counts:
id count
---------
A 1
B 2
C 4
D 4
E 2
F 1
If arranged in a bar chart, this would look like a normal distribution. I also need some way to control the 'flatness' of the chart.
AI: Here is one way to do it:
from collections import Counter
from numpy.random import binomial
# Draw a salesperson id for each mock sale record from a binomial distribution
p = 0.5  # together with n, p controls the spread ("flatness") of the distribution
n_records = 14
salesperson_id = binomial(n=100, p=p, size=n_records).tolist()  # a binomial is a reasonable approximation of a discrete normal distribution
count_per_salesperson = Counter(salesperson_id)
id, count = zip(*count_per_salesperson.items()) |
H: Product of dot products in neural network
In a neural network, it is common to compute a dot product of the form
$$\langle w, x \rangle = w_1 x_1 + w_2 x_2 + \ldots + w_n x_n$$
and use it as argument to some activation function. This is done for several different weights $w$ and activation functions (one per neuron). The results are used as input to the next layer, where we compute again several dot products and use them as argument to more activation functions.
Using many layers leads do several theoretical difficulties of deep learning and one has to make some effort to make everything works. But at the end, hopefully it will end up working.
I was thinking of a way of "cheating" the use of many layers. The idea is to use just one layer, in such a way that the argument to the activation function is complex enough for the program to learn properly. Instead of a dot product I was thinking of using products of dot products:
$$\prod_{k=1}^L\langle w^{(i)}, x \rangle = (w_1^{(1)} x_1 + w_2^{(1)} x_2 + \ldots + w_n^{(1)} x_n) \cdot \ldots \cdot (w_1^{(L)} x_1 + w_2^{(L)} x_2 + \ldots + w_n^{(L)} x_n)$$
The result is then used as the argument to some activation function. Since this argument already involves several weight vectors, my hope is that several layers of activation functions would no longer be necessary. I even thought about using sums of these products and tested it. I'm getting $\sim 80\%$ accuracy on a classification problem where I know it is possible to get more than $90\%$. Does my approach have a fundamental limitation I'm not aware of?
Thanks.
Extra: Above I gave a simplified description of my idea (which I think is enough), but it may be helpful to put the whole thing here. My final model of learning is given by the following formula
$$\textbf{f}\left( \sum_{r=1}^R \left( \prod_{i=1}^{L-1} \langle \textbf{w}^{(i,r)}, \textbf{x} \rangle \right) \left[ \begin{array}{c} w_1^{(L,r)} \\ \vdots \\ w_m^{(L,r)} \end{array} \right] \right) = \left[ \begin{array}{c} f\left( \sum_{r=1}^R \sum_{i_1, \ldots, i_{L-1}=1}^n w_{i_1}^{(1,r)} \cdots w_{i_{L-1}}^{(L-1,r)} \, x_{i_1} \cdots x_{i_{L-1}} \, w_1^{(L,r)} \right) \\ \vdots \\ f\left( \sum_{r=1}^R \sum_{i_1, \ldots, i_{L-1}=1}^n w_{i_1}^{(1,r)} \cdots w_{i_{L-1}}^{(L-1,r)} \, x_{i_1} \cdots x_{i_{L-1}} \, w_m^{(L,r)} \right) \end{array} \right]$$
$f:\mathbb{R} \to \mathbb{R}$ is the activation function and $\textbf{f} = (f, \ldots, f)$ is the "vectorized" version of this function. Note that I tried to come up with a very flexible formulation before the activation function; the idea is that this would avoid the need for more layers. I hope everything is clear at this point. Thanks again.
AI: Well, you are basically just creating one big high-degree polynomial with none of the lower-degree terms.
So yes, it is fundamentally flawed: you don't have a bias in your weights, and the optimization becomes very complex because each parameter/weight depends on all the other entries in the computation.
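To make the first point concrete (my addition, not part of the original answer): expanding the product gives
$$\prod_{k=1}^{L}\langle w^{(k)}, x \rangle = \sum_{i_1, \ldots, i_L = 1}^{n} w_{i_1}^{(1)} \cdots w_{i_L}^{(L)} \, x_{i_1} \cdots x_{i_L},$$
a homogeneous polynomial of degree $L$ in $x$: every monomial has total degree exactly $L$, so there is no constant (bias) term and no lower-degree terms, unlike what a stack of layers with biases and non-linearities can represent.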
H: Autoencoder doesn't learn to reduce dimensions
I coded a neural network from scratch in Python. I tried it on the XOR problem and it learned correctly. So I tried to build an autoencoder with 3 inputs (and therefore also 3 outputs) to reduce a color (r, g, b) to one dimension. I have normalized the data from 0 to 1 so I can use activation functions like sigmoid, ReLU, etc. I have tried many different activation functions and learning rates, but the autoencoder's error (calculated with the mean squared error) stays high (the lowest I got is 0.1), although I have trained it for more than 30,000 iterations. Did I miss something? (I think so, but is it even possible to reduce a color to one dimension with good accuracy in the first place?)
Thank you all
AI: What kind of color are you trying to learn? If it's a random color, then yep, tough luck: you can't reduce the dimensionality of a 3D Euclidean manifold to less than 3D.
To be more precise, this is the case for an infinite manifold; for a non-infinite one, you could get a 1D curve that goes through the 3D cube (philosophically, it would be close to a Kohonen map, despite having just one node instead of a set of discrete nodes), but you would need a very big set of intermediate layers to make it happen.
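If you want a quick baseline to compare your from-scratch implementation against, a deeper bottleneck autoencoder along these lines could look like this in Keras (my sketch; the layer sizes are arbitrary choices and colors stands for your normalized (N, 3) array):
from keras.models import Sequential
from keras.layers import Dense

autoencoder = Sequential([
    Dense(16, activation='relu', input_shape=(3,)),
    Dense(1, activation='sigmoid'),   # 1-D bottleneck code
    Dense(16, activation='relu'),
    Dense(3, activation='sigmoid'),   # reconstruct (r, g, b) in [0, 1]
])
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(colors, colors, epochs=500, batch_size=32)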
H: What is the ideal database that allows fast cosine distance?
I'm currently trying to store many feature vectors in a database so that, upon request, I can compare an incoming feature vector against many other (if not all) stored in the db. I would need to compute the Cosine Distance and only return, for example, the first 10 closest matches. Such vector will be of size ~1000 or so.
Every request will have a feature vector and will need to run a comparison against all feature vectors belonging to a subset within the db (which will most likely be in the order of thousands of entries per subset in the worst case scenario).
Which database offers the flexibility to run such a query efficiently ?
I looked into postgres but I was wondering if there were alternatives that better fit this problem. Not sure it matters much, but I'm most likely going to be using Python.
I found this article about doing it in SQL.
EDIT: I am open to alternative solutions for this problem that are not necessarily tied to SQL.
AI: If it's only a few thousand entries, each with ~1,000 features, you may just be able to keep everything in RAM if you are running this on some kind of server. Then when you get a new feature vector, just run a cosine similarity routine. An easy way to do this is to use something standard like pandas and scikit-learn.
Alternatively you can keep everything in SQL, load it into something like pandas and use scikit-learn.
I'm actually not sure you'll get much of a speed up, if any, by writing the computation in SQL itself. |
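A minimal in-memory sketch with scikit-learn (my addition; the stored vectors here are random placeholders for your subset):
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

db_vectors = np.random.rand(5000, 1000)       # placeholder: the subset's stored feature vectors
query = np.random.rand(1, 1000)               # the incoming feature vector

similarities = cosine_similarity(query, db_vectors).ravel()
top10 = np.argsort(similarities)[-10:][::-1]  # indices of the 10 closest matches
print(top10, similarities[top10])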
H: Where is my error in understanding gradient descent calculated two different ways?
The gradient descent algorithm is, most simply, w'(i) = w(i)-r*dC/dw(i) where w(i) are the old weights, w'(i) are the new weights, C is the cost, r is the learning rate. I'm aware of the graphical justification for this.
For one weight, this is w' = w - r*dC/dw.
Second, we also have this equation deltaC ~= sum(dC/dw(i) * deltaw(i) ), which is just the definition of linearity of C near the point that its derivative is calculated. For one weight, this is the same as deltaC/deltaw ~= dC/dw, e.g., the definition of derivative.
Let there be only one weight, and let s = -deltaC. Then we have -s = dC/dw * (w'-w), where we've split up deltaw into the original and perturbed value. Then w'-w = s * (1/ (dC/dw)), and w' = w -s * (1/dC/dw). (Since we want to reduce the cost, we want deltaC to be <= 0, so s is >= 0, and looks like a normal positive learning rate.)
What I haven't been able to understand is why I get two different answers for what appears to be the same operation, updating the weights to lower the cost. In one case I use dC/dw, and in the other, I use 1/(dC/dw.) In both cases, r and s are small positive numbers.
What am I missing?
AI: Your second function is not an update rule, it is just a re-statement of the approximation between using finite differences to measure a gradient and the infinitesimal ones used in calculus. Most importantly, $s$ is not a learning rate or parameter that you are free to change. It is, by definition, the (negative of the) change observed in $C$ when you alter $w$ to $w'$.
The second equation you have derived is the same as:
$$w + \Delta w \approx w + \Delta C \frac{dw}{dC}$$
In addition, for some reason you decided to drop the approximation symbol partway through and treat your manipulations as if you had created a new weight update rule. Be careful when manipulating expressions on either side of an approximation, they do not always follow the same rules as for an equality, and treating them as such can lead you down some false paths. |
H: How to deal with overestimation of small values and underestimation of high values in XGBoost?
I'm running XGBoost to predict prices on a cars dataset, I was wondering what alternatives are there for this kind of problem where smaller values are overestimated and higher prices underestimated.
I tried applying log to prices since it has a skewed to the right distribution, but still having this undesirable effect.
Also, as a bonus question: using log(price) improved the prediction score (the mean relative error, MRE, calculated as mean(ABS(RD))) by 2 percent. If anyone has intuition on why this could have happened, that would be great.
In the image below RD is the relative difference between predictions and the actual values, and the price bucket is a bucketized variable where the number indicates the price low interval bound over 1000.
AI: I don't think it's necessarily related to the type of algorithm performing the regression (XGBoost here), but rather to the inherent nature of regression algorithms.
Many loss functions aim to reduce the distance between $y$ and $\hat{y}$.
That can lead to the distribution of the model's predictions being tighter around $y$'s mean.
A couple of things I would check to verify this:
1) Compare distribution of real $y$ and predicted $\hat{y}$.
2) Verify this error pattern on others regression model.
Things I would try to improve results:
1) over sample low/high y values in training set.
2) Adjust loss function so errors on low/high y values will have more weight.
3) Look for features that emphasize low/high y values and engineer them better. |
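As a rough illustration of point 2 (my sketch, not part of the original answer; X_train and y_train are placeholders and the weighting scheme is just one possible choice), XGBoost's sklearn wrapper accepts per-sample weights:
import numpy as np
import xgboost as xgb

# up-weight cars whose prices sit far from the mean price
weights = 1 + np.abs(y_train - y_train.mean()) / y_train.std()

model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train, sample_weight=weights)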
H: What is the difference between set() and word_tokenize()?
from nltk.tokenize import sent_tokenize ,word_tokenize
sentence = 'jainmiah I love you but you are not bothering about my request, please yaar consider me for the sake'
word_tok = word_tokenize(sentence)
print(word_tok)
set_all = set(word_tokenize(sentence))
print(set_all)
Actually word_tokenize() and set(word_tokenize()) both return the same answer here, so what is the difference?
AI: There are two differences between word_tokenize and set.
Word_tokenize
Returns a list (try print(type(word_tok)))
Returns all the tokens, regardless if there are duplicates
Set
Returns a set (try print(type(set_all)))
Returns all the unique tokens
Try this instead
sentence = 'jainmiah jainmiah jainmiah I love you but you are not bothering about my request, please yaar consider me for the sake'
word_tok = word_tokenize(sentence)
print(word_tok)
set_all = set(word_tokenize(sentence))
print(set_all) |