H: How to model a Bimodal distribution of target variable
I want to regress on this target. I have tried multiple transformations to bring it to normal, but they are not helping; I have read some suggestions online, but none have worked so far.
I am attaching the residual histogram as well; somehow the residuals are normally distributed.
Thanks in Advance.
AI: To my understanding you should be looking for something like a Gaussian Mixture Model (GMM) or a Kernel Density Estimation (KDE) model to fit to your data.
There are many implementations of these models and once you've fitted the GMM or KDE, you can generate new samples stemming from the same distribution or get a probability of whether a new sample comes from the same distribution.
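For the GMM option, a minimal scikit-learn sketch might look like this (my own illustration, assuming your target values are in a 1-D numpy array y):
import numpy as np
from sklearn.mixture import GaussianMixture

y = np.random.normal(0, 1, 300)  # stand-in for your bimodal target
gmm = GaussianMixture(n_components=2).fit(y.reshape(-1, 1))   # two components for two modes
new_samples, _ = gmm.sample(10)                               # draw new values from the fitted mixture
log_density = gmm.score_samples(y.reshape(-1, 1))             # log-density of each observed value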
For the KDE option, an example in Python would look like this (directly taken from here):
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn.neighbors import KernelDensity
# Plot a 1D density example
N = 100
np.random.seed(1)
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
                    np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
true_dens = (0.3 * norm(0, 1).pdf(X_plot[:, 0])
             + 0.7 * norm(5, 1).pdf(X_plot[:, 0]))
fig, ax = plt.subplots()
ax.fill(X_plot[:, 0], true_dens, fc='black', alpha=0.2,
        label='input distribution')
for kernel in ['epanechnikov', 'tophat', 'gaussian']:
    kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
    log_dens = kde.score_samples(X_plot)
    ax.plot(X_plot[:, 0], np.exp(log_dens), '-',
            label="kernel = '{0}'".format(kernel))
ax.text(6, 0.38, "N={0} points".format(N))
ax.legend(loc='upper left')
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), '+k')
ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
In the end, the KDE model could be used for sampling new data points or estimating the probability that a new sample was generated from this distribution.
You should experiment with different kernels in KDE models or the number of base distributions (components) in GMMs, along with other parameters, to get optimal results for your data. |
H: Non-mutually exclusive classification task examples
I am reading the excellent Hands-on Machine Learning with Scikit-Learn and TensorFlow and in chapter 10, the author says:
"For the output layer, the softmax activation function is generally a
good choice for classification tasks (when the classes are mutually
exclusive). For regression tasks, you can simply use no activation
function at all."
All classification problems I can think of (binary classification, image classification, etc) generate mutually exclusive classes.
Can someone give me a few examples of non-mutual exclusive classification problems?
AI: Consider, for example, the complexity of the images in the ImageNet database. Algorithms will often use hundreds or thousands of output nodes to be capable of classifying a large array of different things. Researchers also relax the cost function and allow the $k$ highest outputs to be considered; if one of these is correct, the example is considered to have been correctly classified. Furthermore, with such complex data, a certain object is often a subset of another object.
For example, consider the picture of a human being. If the algorithm detects a human with the highest activation at the output node, but also has high activations for hands, eyes, and face, this is not wrong! The image does in fact contain these objects as well, and them being detected by the model is encouraging.
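In practice, such non-mutually exclusive (multi-label) tasks are usually handled by replacing the softmax output with independent sigmoid units, so each class gets its own yes/no probability. A minimal Keras sketch (the layer sizes and data are placeholders of my own, not from the book):
from keras.models import Sequential
from keras.layers import Dense

n_features, n_labels = 100, 5  # hypothetical sizes
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=n_features))
model.add(Dense(n_labels, activation='sigmoid'))  # one independent probability per label
model.compile(loss='binary_crossentropy', optimizer='adam')
Several outputs can be above 0.5 simultaneously, e.g. an image can be tagged both "human" and "face". |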
H: LSTM text generation
Most of the examples I have found online for LSTMs refer to "random" text generation.
One of the problems that I'm trying to solve is to generate a "summary" of many docs into 1 doc. For example:
News Article: Barack Obama life history (2K words)
Wikipedia: Barack Obama (10K words)
Barack Obama biography (10K words)
Will generate an article about Obama in X words.
Is LSTM the right approach?
AI: Although you can generate text in this way - sampling from a RNN trained to predict next character or word - it will not be meaningful. At best it will be grammatically accurate (in terms of nouns, verbs, adjectives etc) but semantic nonsense. It would not be a summary, except by lucky accident.
To generate summaries using LSTM directly, you would need to train a network with real examples of summary inputs and outputs. This would be a similar task to machine translation, but much harder due to the variability in the size of the inputs. It is unlikely that you will find enough training data to test the idea fully, and it is not clear that such a direct approach can yield acceptable results even with large amounts of training data.
In fact text summarisation is not a solved problem by any means. There are deep learning approaches, such as Google Brain team's effort using TensorFlow, which you could study to get some sample code and a sense of state-of-the-art. This approach uses an attention model to extract apparently informational content (i.e. content that would have low probability of appearing in some assumed generic document, thus is assumed to be interesting due to standing out). It is possible to use a trained LSTM to build such an attention-based model - the intuition is that the parts of the document that the already-trained LSTM is least able to predict are likely to contain noteworthy information. |
H: python: Are there some classes like VotingClassifier for three or four regression models?
I want to ensemble three or four regression models, like GBDT, XGBDT, and SVM. I know there is votingC = VotingClassifier() for classifiers. I want to know whether there are methods or functions for ensembling regression models. Right now, I just know how to set a simple weight for each model. What other good methods are there for ensembling regression models in Python? Thanks!
AI: There are many ways in which you could create an ensemble from your base models. Some resources to take a look at are the following:
Firstly, I would point your attention towards this question, that has a lot of answers and interesting ideas.
Regarding implementation:
I have used in the past the brew module that has extensive implementations of different stacking-blending techniques etc, while using different combination rules and diversity metrics. It is also compatible with sklearn models.
Another interesting project is xcessiv, which I haven't used myself but provides a helpful gui for managing your ensemble's details.
Finally, regarding theoretical details, I would suggest you take a look at this survey that focuses on ensembles in regression tasks.
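Since the question also mentions simple weighting: a minimal hand-rolled weighted-average ensemble might look like this (the models, weights and data names here are placeholders, not a recommendation):
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

# X_train, y_train, X_test assumed to be defined elsewhere
models = [GradientBoostingRegressor(), SVR()]
weights = np.array([0.6, 0.4])  # hypothetical weights, e.g. tuned on a validation set

for m in models:
    m.fit(X_train, y_train)
preds = np.column_stack([m.predict(X_test) for m in models])
ensemble_pred = preds @ weights  # weighted average of the base predictions
More elaborate stacking/blending schemes (as in the brew module mentioned above) train a meta-model on these base predictions instead of fixing the weights by hand. |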
H: How to restore deleted objects in R?
Suppose if I delete all the objects from current session:
rm(list=ls())
Is there any way in base R or using a function from a package which lets me restore the deleted objects from the current session?
AI: The answer is unfortunately no. There is no handy ctrl-z method.
A tip to avoid these situations: I suggest you always save either the 'environment' or, as I prefer to do, the scripts with the code for the desired objects, and save them regularly. I never type any commands directly into a workspace, but always in a script which I save. This is so I can always look back at which steps I took in my coding. Personally, I save my scripts with date notations in the file name - so when I change a script I'll still have access to the old version. It has happened quite a few times that I've changed code and later realized that the old code was better. It's quite a hassle when you don't have the old file saved. |
H: What exactly means CNN is position equivariant
There is quite a good explanation which fully complies with my view. But it seems to lack one final step. As Jean states, moving an object significantly in the input image will cause a change in which neuron is activated in the yellow layer (the one previous to the first fully connected layer). So we see that the part of the network before the FCL is position equivariant. Then the author says that because the network detects an object at any location, the FCLs should have taken care of it.
Does the equivariance property hold for the whole network, including trailing classifier? (I've read Difference between "equivariant to translation" and "invariant to translation" but not sure I apply it to this case correctly)
Should a network detect an object in right-top corner if only trained with images with that object at left-bottom corner?
(Tried this demo but it doesn't seem to address the second question)
AI: In your link, the author Jean states:
Additionally, I believe that if a CNN is trained showing faces only at one corner, during the learning process, the fully-connected layer may become insensitive to faces in other corners.
I also believe this to be correct. The FCN does not in any way add or improve translation invariance. Instead it will treat all outputs - each individual pixel of the "feature maps" - of the last convolutional layer as entirely different features. It must be trained with enough examples in order to generalise well.
However, the feature maps are not themselves simple, clean detectors of objects as the simplified explanation of CNNs might imply. In a deep network they can be very complex and respond to a wide range of stimuli. They will also respond somewhat fuzzily, so that e.g. an eye or the side of a head showing an ear can trigger multiple feature map pixels (to the feature map's kernels, the same object slightly translated will look like distorted version of the same feature, and will still match enough sub-components of the object to trigger a positive response). The last layer will not necessarily detect full objects, but significant chunks of objects, areas of important texture etc. So position can still be quite fluid, and the "head detector one pixel off in last convolutional layer" scenario is not particularly realistic - although it may affect relative strength/confidence of predictions.
This can still be a problem if you need your network to generalise. If you suspect that your training data might not cover enough variations in position, orientation etc. of images, then a common approach is data augmentation. As a reflected, rotated or cropped image of an object should usually be classified as the same object, you can pre-process your training data using those transforms to make many random variations of the input images. Some deep learning frameworks will allow you to do this continuously, generating fresh images for each and every batch.
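A minimal Keras data-augmentation sketch of this idea (the parameter values are illustrative only, and model is assumed to be an already-compiled Keras model):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.2,
                             height_shift_range=0.2,
                             horizontal_flip=True)
# x_train, y_train assumed to be your image array and labels
train_generator = datagen.flow(x_train, y_train, batch_size=32)
model.fit_generator(train_generator, steps_per_epoch=len(x_train) // 32, epochs=10)
Each batch then contains freshly transformed copies, so objects appear at more positions and orientations than in the raw training set. |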
H: How to distort data in a clever way?
For example, I have some time series. How can I change my data so that it is not obvious what the original values were?
Ideally, the transformation would allow me to revert and reconstruct the original data with some noise (or to preserve the relationships between time series values with no option to revert to the original values).
I thought of multiplying each value of the series by some number, which would do a shift-and-scale operation, or computing some continuous function (like log) with my time series data as the argument.
Nevertheless, I believe there should be more advanced method.
Thanks!
Update:
After careful consideration, the answer of @D.W. below, and trying to transform the data, I found that I was spending more time on this than on the main problem I have.
Initial problem was twofold:
1) to send real data to home computer to play at home any time I want.
2) to send real data to home computer, transform it a bit, to share with people in the web, to get some advise on my main problem.
After thinking, stressing, asking and googling, I decided to drop the idea of doing anything.
A few good pointers for people who have the same problem and intend to transfer or transform data:
For problem 1), I believe a stream cipher is an option to read more about (as it transforms your data into a random string). The result is still bad: if someone looks at your work computer's logs, you are busted.
For problem 2), read about Laplace noise. I also believe this to be bad.
Regards.
AI: The short answer is: don't do that; anything you come up with on your own is likely to leak information in subtle ways.
The longer answer is: talk to your data protection authority. Only use a method if they consider it acceptable. (Hint: it's very likely that the answer you will get back is "no, don't do that".)
The technical details are: there is an entire body of work on how to anonymize or de-identify data, so it can be revealed in a way that can't be linked to the people whom the data is about. However, there's a long history of attempts to do this that failed; where it was actually possible to re-identify the people, violating their privacy. See the following research paper:
Myths and Fallacies of "Personally Identifiable Information". Arvind Narayanan and Vitaly Shmatikov. Communications of the ACM, 53:6, June 2010.
If you wanted to consider a scheme to do what you want, you'd need to tell us a lot about the nature of the data, what it reflects, what the privacy interests are, how the data is distributed, what other external sources might be available that might enable re-identification, and many other details. Since you haven't provided any of that information, your specific question isn't answerable in its current form.
But ultimately, I hope I've convinced you that what you're trying to do is a bad idea. Don't endanger people's data by going "cowboy" with some homebrew scheme to obfuscate the data; instead, talk to your data protection authority about what their requirements are. |
H: An Artificial Neural Network (ANN) with an arbitrary number of inputs and outputs
I would like to use ANNs for my problem, but the issue is that my input and output node counts are not fixed.
I did some google searches before asking my question and found that the RNN may help me with my problem. But, all examples which I've found somehow have a defined number of input and output nodes.
So, I'm looking for a strategy, how to make it real or at least some examples, preferable in Keras or PyTorch.
More details about my issue:
I have two input lists, where the length of the first one is fixed and equal to two, e.g.:
in_1 = [2,2]
but the length of the second list is flexible, the length can be from three to inf, e.g.:
in_2 = [1,1,2,2]
or
in_2 = [1,1,1,2,2,2,3,3,3]
Also, the input lists depend on each other. The first list gives the dimensions of the output list. So if in_1 = [2,2], the output must be reshapeable to a [2,2] form.
Currently, I'm thinking of combining the two input lists into one:
in = in_1 + in_2 = [2, 2, 1, 1, 2, 2]
Moreover, the output has the same length as the in_2 list, e.g.:
if input lists are:
in_1 = [2, 2]
in_2 = [1, 1, 2, 2]
Output should be:
out = [1, 2, 1, 2]
Any ideas are welcome!
AI: The answer may depend on the significance of the length of the input vector or how it originates.
However, the simplest solution is usually to know the largest input size and use that as the fixed input length. If a given input is shorter, you can pad it with zeros or appropriate symbols. So instead of having vectors $[1, 2, 3]$ and $[1, 2, 2, 3]$ you can have vectors $[1, 2, 3, 0]$ and $[1, 2, 2, 3]$.
The same can apply for output. If output expected is $[1, 2, 1]$ and $[1, 3, 4, 1]$ you can treat the first output as $[1, 2, 1, 0]$
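A minimal sketch of this padding using a Keras utility (the sequences here are just the examples above):
from keras.preprocessing.sequence import pad_sequences

inputs = [[1, 2, 3], [1, 2, 2, 3]]
padded = pad_sequences(inputs, padding='post')  # pads shorter sequences with 0 at the end
# padded -> [[1, 2, 3, 0],
#            [1, 2, 2, 3]]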
Isn't this just a hack?
Typically, neural networks do function approximation. Ideally, a network maps vectors (or matrices) as input to vectors (or matrices) as output. This is why it is always desirable for the size of your input vector to be fixed. |
H: Keras: visualizing the output of an intermediate layer
I have read the docs here and I understand the general idea. I am able to visualize the weights of the intermediate layers. However, I'm having trouble visualizing the activations. Here's what I have:
I trained my model and saved the weights in a file called weights_file.
Thanks to this jupyter notebook, I got the values of the weights. First I defined my model:
def mlp_model(hid_dim=10):
    model = Sequential()
    model.add(Dense(units=hid_dim, input_dim=X.shape[1], activation='relu'))
    model.add(Dense(Y.shape[1], activation='softmax'))
    model.load_weights(weights_file)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
model_created = mlp_model(hid_dim=15)
To get the weights, I did this:
W = model_created.layers[0].kernel.get_value(borrow=True)
W = np.squeeze(W)
print("W shape : ", W.shape) #(153, 15)
W_out = model_created.layers[1].kernel.get_value(borrow=True)
W_out = np.squeeze(W_out)
print('W_out shape : ', W_out.shape) #(15, 8)
From there I could create Hinton diagrams using this. However, when I try to work with the activations:
get_first_output = theano.function([model_created.layers[0].input], [model_created.layers[1].output])
layer_out = get_first_output([X[0,:]])[0]
I get this error:
TypeError: ('Bad input argument to theano function with name "mlp1_visualize_weights.py:131" at index 0 (0-based). \nBacktrace when that variable is created:\n\n File "mlp1_visualize_weights.py", line 213, in <module>\n mlp_repeat(X, Y, Xtest, Ytest, params_to_use, weights_file)\n File "mlp1_visualize_weights.py", line 125, in mlp_repeat\n model_created = mlp_model(hid_dim=hid_val, lr=lrate, reg_val=reg, momentum=moment, nest=nestval, optimizer=optim)\n File "mlp1_visualize_weights.py", line 105, in mlp_model\n model.add(Dense(units=hid_dim, input_dim=X.shape[1], kernel_initializer=\'he_uniform\', activation=\'relu\', W_regularizer=l2(reg_val), b_regularizer=l2(reg_val)))\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/models.py", line 426, in add\n dtype=layer.dtype, name=layer.name + \'_input\')\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1392, in Input\n input_tensor=tensor)\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 1303, in __init__\n name=self.name)\n File "/mnt/data/caugusta/pkgs/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 184, in placeholder\n x = T.TensorType(dtype, broadcast)(name)\n', 'TensorType(float32, matrix) cannot store accurately value [array([ 0. , 0.2037037 , 0.20138889, 0.21100917, 0.62962963,\n 0.6875 , 0.61206897, 0.44660194, 0.31168831, 0.17391304,\n 0. ...
I would like to look at just one input example, and find the activation and the weights from just that input example. Essentially I'm trying to figure out which features of the data each hidden unit is picking up.
Can anyone explain how to get the activations of intermediate layers in Keras?
AI: Consider this network
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D

model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(1,28,28)))
convout1 = Activation('relu')
model.add(convout1)
convout2 = MaxPooling2D()
model.add(convout2)
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
Now you can visualize the activation using this function.
import numpy as np
import matplotlib.pyplot as plt
from keras import backend as K

def layer_to_visualize(layer):
    inputs = [K.learning_phase()] + model.inputs

    _convout1_f = K.function(inputs, [layer.output])
    def convout1_f(X):
        # The [0] is to disable the training phase flag
        return _convout1_f([0] + [X])

    # img_to_visualize should be a single input image with a batch dimension,
    # e.g. shape (1, 1, 28, 28) for the network above
    convolutions = convout1_f(img_to_visualize)
    convolutions = np.squeeze(convolutions)

    print('Shape of conv:', convolutions.shape)

    n = convolutions.shape[0]
    n = int(np.ceil(np.sqrt(n)))

    # Visualization of each filter of the layer
    fig = plt.figure(figsize=(12, 8))
    for i in range(len(convolutions)):
        ax = fig.add_subplot(n, n, i + 1)
        ax.imshow(convolutions[i], cmap='gray')

# Specify the layer you want to visualize
layer_to_visualize(convout1)
If you want you can refer my jupyter notebook. |
H: Recurrent neural network producing same predictions
I am trying to train a recurrent neural network that I built in keras on timeseries data to predict number of sales for next 10 days. For this, I've created my dataset as -
var(t) -> var(t+1)
var(t+1) -> var(t+2)
var(t+2) -> var(t+3)
var(t+3) -> var(t+4) and so on
I did Min-Max scaling on this data and the RNN code is as follows -
model = Sequential()
model.add(LSTM(20, input_shape=(1, look_back),activation='tanh',bias_initializer='ones'))
model.add(Dense(1, activation='linear',bias_initializer='ones'))
opt=adam(lr=0.1)
model.compile(loss='mean_squared_error',optimizer=opt)
model.fit(xtrain, ytrain, epochs=100, batch_size=1, verbose=2)
But the plot I am getting is when I did predictions on xtrain (green = ytrain, blue = ypred) -
The RNN isn't learning anything at all. It's producing the same results for each input. I've tried adding hidden layers, increasing the number of neurons, changing parameters (learning rate, momentum), optimizers (SGD, Adam, Adagrad, RMSprop), and LSTM activation functions (tanh, softsign). I got little fluctuations in some cases in the graph, but the output is mostly constant.
Also, I've only 200 datasets.
Can someone please guide me what I am doing wrong here. What else I can try. Will small sized data not work using RNN at all ? If so, is there any other way to solve this problem (except ARIMA model) ?
EDIT - Increased batch size to 100 and epochs to 1000. Received some better results. Also, I did mean normalization [(x-mean)/std_dev] instead of MinMax scaling.
AI: Try increasing your batch size. If your batch size is small, the gradients are a rough approximation of the true gradients. |
H: Notion of cluster centers and cluster comparison in Density Based Algorithms
I have done some research on clustering algorithms, since my goal is to cluster noisy data and identify outliers or small clusters as anomalies. I consider my data noisy because my main features can have quite varying values. Therefore, my focus has been on density-based algorithms, with quite some success.
However, I am unable to grasp the idea of cluster comparison in such algorithms since the notion of cluster centers cannot be properly defined.
My dataset consists of network flows, and I split the dataset into subsets based on an identifier. After applying clustering on each subset, I want to be able to compare the clusters that are created for each subset so that I can compare the subsets themselves in some context.
Would appreciate some help from data scientist gurus on how to approach the concept of cluster comparison or cluster center in such algorithms.
Thanks all!
AI: You can either use the medoid, you can sometimes compute a centroid (and just ignore that it may lie outside the cluster), or you can do pairwise comparisons between the clusters' members and take the average of those distances rather than comparing centers.
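A minimal sketch of computing a medoid for one cluster (the member with the smallest total distance to all other members), assuming cluster_points is an (n_samples, n_features) numpy array:
import numpy as np
from sklearn.metrics import pairwise_distances

cluster_points = np.random.rand(50, 4)  # stand-in for the members of one cluster
dist = pairwise_distances(cluster_points)            # all pairwise distances within the cluster
medoid = cluster_points[dist.sum(axis=1).argmin()]   # member minimizing total distance to the rest
The medoid is always a real member of the cluster, which makes it a safer "center" for density-based clusters of arbitrary shape. |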
H: Multidimensional regression in Keras
I'm trying to implement the One Hidden Layer Model presented in this article using Keras.
This is my code:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras import optimizers
model = Sequential()
model.add(Dense(100, input_dim=9216))
model.add(Activation('relu'))
model.add(Dense(30))
sgd = optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
hist = model.fit(X_train, y_train, epochs=10, verbose=0, validation_split=0.2)
y_pred = model.predict_classes(X_valid)
X_train shape is (2140, 9216)
y_train shape is (2140, 30)
X_valid shape is (1783, 9216)
But y_pred's shape is (1783,). I'm trying to understand why I'm not getting a (1783, 30) output. Am I missing something?
AI: The problem is using predict_classes in:
model.predict_classes(X_valid)
this is designed to select the argmax (index of the maximum output) and choose it as the predicted class, for a classifier.
You have a regression problem, and just want the raw output from the network. So instead you should call:
y_pred = model.predict(X_valid) |
H: TensorFlow - Resume training in middle of an epoch?
I have a general question regarding TensorFlow's saver function.
The saver class allows us to save a session via:
saver.save(sess, "checkpoints.ckpt")
And allows us to restore the session:
saver.restore(sess, tf.train.latest_checkpoint("checkpoints.ckpt"))
Inside the TensorFlow documentation, there is an example code (with an added epoch loop, and restore):
# Create a saver.
saver = tf.train.Saver(...variables...)
# Launch the graph and train, saving the model every 1,000 steps.
sess = tf.Session()
saver.restore(sess, tf.train.latest_checkpoint("checkpoints.ckpt"))
for epoch in xrange(25):
    for step in xrange(1000000):
        sess.run(..training_op..)
        if step % 1000 == 0:
            # Append the step number to the checkpoint name:
            saver.save(sess, 'my-model', global_step=step)
The problem is that if we stop the training loop at epoch=15 and execute the script again, we would start at epoch=0 again, even though the model has already been trained up to epoch=15.
Is there a way to resume from epoch=15?
AI: The network doesn't store its training progress with respect to training data - this is not part of its state, because at any point you could decide to change what data set to feed it. You could maybe modify it so that it knew about the training data and progress, stored in some tensor somewhere, but that would be unusual. So, in order to do this, you will need to save and make use of additional data outside of the TensorFlow framework.
Probably the simplest thing to do is add the epoch number to the filename. You are already adding the current step within the epoch, so just add in the epoch multiplied:
saver.save(sess, 'my-model', global_step=epoch*1000000+step)
When you load the file, you can parse the filename to discover what epoch and step you were on and use those as the start point for the xrange functions. To make this easier to re-start from any given checkpoint, you could use argparse to allow your script to take the name of the checkpoint file you want to use.
In brief, it might look like this:
# Near top of script
import argparse
import re

# Before main logic
parser = argparse.ArgumentParser()
parser.add_argument('checkpoint')
args = parser.parse_args()

start_epoch = 0
start_step = 0

if args.checkpoint:
    saver.restore(sess, tf.train.latest_checkpoint(args.checkpoint))
    found_num = re.search(r'\d+', args.checkpoint)
    if found_num:
        checkpoint_id = int(found_num.group(0))
        start_epoch = checkpoint_id // 1000000
        start_step = checkpoint_id % 1000000

# Change to xrange:
for epoch in xrange(start_epoch, 25):
    for step in xrange(start_step, 1000000):
        sess.run(..training_op..) # etc
    # At end of epoch loop, you need to re-set steps:
    start_step = 0
You may want to reduce the number of checkpoints you are creating - as it stands you would have 25,000 checkpoint files generated by your code.
Another option would be use a single checkpoint file, and to save and restore a Python pickle of a simple dict containing the state at the time you made the checkpoint, with a similar name. |
H: How do I forecast sales data down to the individual item?
I have a dataset that looks like,
order datetime, customer id, product name, type of product, quantity sold
I want a model to forecast sales for each individual item. I'm thinking of using one of the following but would like some advice, literature, or other options.
One time series forecast (ARIMA) for each product
RNN
HMM
What's the best method to predict sales of each individual item into the future?
AI: Do you want to forecast sales by day over a series of days? That would be more like ARIMA. Do you want to forecast "How many widgets will we sell in the next month"? That would be more like a regression problem.
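If you go the per-product ARIMA route, a minimal sketch with statsmodels (the column names and the (1, 1, 1) order are placeholder assumptions, not tuned values) might look like:
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# df assumed to have columns: 'order datetime', 'product name', 'quantity sold'
df['order datetime'] = pd.to_datetime(df['order datetime'])
daily = (df.groupby(['product name', pd.Grouper(key='order datetime', freq='D')])
           ['quantity sold'].sum())

forecasts = {}
for product, series in daily.groupby(level='product name'):
    series = series.droplevel('product name')
    fitted = ARIMA(series, order=(1, 1, 1)).fit()
    forecasts[product] = fitted.forecast(steps=10)  # next 10 days per product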
As an aside, if the retailer you're working with has a large assortment of products that changes over time (e.g. seasonal clothing) then you should also look into forecasting by product category and not an individual product. It will be much more accurate. |
H: Identfying spikes in data
I am a little new at this - I am used to just querying data and not so much analysis of the data so any help would be greatly appreciated.
I have some data that is trending month over month. Some months with an increase in volume and others with a decrease. The goal is to identify a "spike" in the volume. Basically, is the increase abnormally large. I was trying to figure out the best way to do this. I thought the best thing to do was take the standard deviation and anything outside of two deviations - BAM!, that's a spike. Then I thought about it some and I got concerned about what happens next month. That spike is still in that data, it happened, but doesn't it shift the standard deviation so that the next month may not show a "spike" when without the prior "spike" it could have? For continuous data maybe this is less an issue but my data is rolling, I only have the last 6 months of data. If there is a "spike" this month and every one is put on notice, and the level never changes, then in six months wouldn't it seem like there is no spike?
I guess my question is, how would you quantify a measure for a "spike" given this limitation?
AI: Your intuition about the algorithm is about right, and it is a known technique from six-sigma.
As you may expect from six-sigma, take three times the standard deviation as the threshold. If you expect trends, you could also compare the new value to a running (rolling) average and deviation.
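A minimal pandas sketch of the rolling version (the window length and the 3-sigma threshold are assumptions to tune, and the current month is excluded from its own baseline):
import pandas as pd

# volume assumed to be a pandas Series of monthly volumes, indexed by month
baseline_mean = volume.shift(1).rolling(window=6, min_periods=3).mean()
baseline_std = volume.shift(1).rolling(window=6, min_periods=3).std()
is_spike = volume > baseline_mean + 3 * baseline_std
Excluding the current month (the shift(1)) keeps a spike from inflating its own baseline; you can also drop flagged months from the history so one spike does not mask the next.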
You still need to inspect each spike, but you can consider removing it from the history afterwards, which will bring the variation back down. |
H: Using GridSearchCV for custom kernel SVM in scikit-learn
I would like to use scikit-learn's GridSearchCV() to do a grid search on custom parameters in a kernel I have specified. Specifically, the kernel is of the form
SeqKernel(x, y, orig_kernel, cut_off, order)
Here, orig_kernel is a kernel typically used in SVM learning (such as linear, polynomial, RBF, or sigmoid). I wish to perform a grid search over values of cut_off and order, with the additional caveat that only the pairs such that order $\le$ cut_off are considered. What would be the best way to implement this?
AI: Look at the source code and modify it to meet your conditions.
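Before diving into the source (linked below), a lighter-weight alternative (my own suggestion, not part of the original answer): if cut_off and order are exposed as parameters of your estimator, you can pre-filter the combinations yourself and pass GridSearchCV a list of one-point grids, so invalid pairs are never evaluated:
from sklearn.model_selection import GridSearchCV, ParameterGrid

full_grid = ParameterGrid({'cut_off': [2, 3, 4, 5], 'order': [1, 2, 3, 4, 5]})
valid = [p for p in full_grid if p['order'] <= p['cut_off']]          # enforce order <= cut_off
param_grid = [{k: [v] for k, v in p.items()} for p in valid]          # list of single-combination grids

# estimator assumed to be a wrapper exposing cut_off and order via get_params/set_params
search = GridSearchCV(estimator, param_grid=param_grid, cv=5)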
https://github.com/scikit-learn/scikit-learn/blob/ab93d65/sklearn/model_selection/_search.py#L685 |
H: Why Decision Tree boundary forms a square shape and SVM a circular/oval one?
I was going through a Udacity tutorial wherein a few data points were given and the exercise was to test which of the following models best fits the data: linear regression, decision tree, or SVM. Using sklearn, I was able to determine that SVM is the best fit, followed by decision tree. I got a very distinct decision boundary when these two algorithms were applied:
Is there any specific reason for the said shapes or does it just depend on the data sets?
The code was quite straightforward; just reading the CSV, separating the features and then applying the algorithms as shown below:
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
import pandas
import numpy
# Read the data
data = pandas.read_csv('data.csv')
# Split the data into X and y
X = numpy.array(data[['x1', 'x2']])
y = numpy.array(data['y'])
# import statements for the classification algorithms
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
# Logistic Regression Classifier
classifier = LogisticRegression()
classifier.fit(X,y)
# Decision Tree Classifier
classifier = GradientBoostingClassifier()
classifier.fit(X,y)
# Support Vector Machine Classifier
classifier = SVC()
classifier.fit(X,y)
AI: Shape of the SVM decision boundary depends on the kernel (similarity function) used. The "standard" version of SVM has linear decision boundary. The one displayed could be using Gaussian kernel.
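A quick way to see this (a toy sketch on generated data, not the Udacity data): train the same SVC with a linear kernel and with the default RBF kernel and compare how well each fits a non-linear pattern:
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X_toy, y_toy = make_circles(n_samples=200, factor=0.5, noise=0.1)
svc_linear = SVC(kernel='linear').fit(X_toy, y_toy)  # straight-line boundary, fits circles poorly
svc_rbf = SVC(kernel='rbf').fit(X_toy, y_toy)        # curved/oval boundary (the sklearn default)
print(svc_linear.score(X_toy, y_toy), svc_rbf.score(X_toy, y_toy))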
Decision boundary of a decision tree is determined by overlapping orthogonal half-planes (representing the result of each subsequent decision) and can end up as displayed on the pictures.
See more here:
https://shapeofdata.wordpress.com/2013/07/02/decision-trees/
https://www.quora.com/What-are-Kernels-in-Machine-Learning-and-SVM |
H: What is deconvolution operation used in Fully Convolutional Neural Networks?
When I was reading this paper, Fully Convolutional Networks for Semantic Segmentation, I found that they use an up-sampling layer to classify each pixel into a class. I have two questions:
How do you understand the mathematics behind the de-convolution operation?
Why do we use an upsampling layer? Is it to extract more global context?
AI: Upsampling layer is used to increase the resolution of the image. In segmentation, we first downsample the image to get the features and then upsample the image to generate the segments.
For the deconvolution operation, we pad the image with zeroes and then do a convolution on it; hence it is upsampled.
For example, if after downsampling the image becomes:
[[1, 1]
[1, 1]]
Then if we pad it with zeroes,
[[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 1, 0, 0]
[0, 0, 1, 1, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]]
Now, if we do convolution on the above image with a 3 x 3 filter, we will get an image of shape (4, 4). Thus we upsampled an image of shape (2, 2) to (4, 4); a short Keras sketch of the same idea is shown below.
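In Keras these operations are available directly as layers; a minimal sketch (layer sizes are illustrative only, channels-last format assumed):
from keras.models import Sequential
from keras.layers import UpSampling2D, Conv2DTranspose

model = Sequential()
# simple upsampling: repeats rows/columns, (2, 2) -> (4, 4)
model.add(UpSampling2D(size=(2, 2), input_shape=(2, 2, 1)))
# learnable "deconvolution": a transposed convolution with stride 2 doubles the resolution again
model.add(Conv2DTranspose(1, kernel_size=(3, 3), strides=(2, 2), padding='same'))
print(model.output_shape)  # (None, 8, 8, 1)
The transposed convolution is the trainable version of upsampling used in the FCN paper. |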
H: Sorting bar chart in Tableau by one category of bars
I have a bar chart that looks like this (truncated for space, orange label obscured for privacy):
Right now, it is ordered by US state. The color distinction is for something called "Entity".
Here are the set ups for Marks, Columns, and Rows, and the legend:
What I'd like to do is sort the order in which these appear, not by state, but rather by the volume measure for the blue Entity ("Industry"). So, in the example above, the states would be re-ordered to NY, NJ, NV, NM.
Is there a way to do this in Tableau? Thanks.
AI: To my knowledge you have to create an auxiliary variable
IF [Entity] = "Industry" THEN [Volume] ELSE 0 END
Then you can sort by this: right-click on State, select Sort, and choose to sort by the auxiliary variable. |
H: Handling large imbalanced data set
I have an imbalanced data set consisting of some tens of millions of text strings, each with thousands of features created from uni- and bigrams; additionally, I have the string length and the entropy of each string as features.
It is a multiclass data set (40-50 classes), but it is imbalanced. Some classes can be 1000x smaller compared to the largest class. I have restricted the data to 1 million strings per class as maximum, otherwise the imbalance could be even larger.
Because of this I want to use over-sampling to improve the data for the underrepresented classes. I have looked into ADASYN and SMOTE from the python imblearn package. But when I run it the process eats up all my RAM in the swap memory, and soon after the process gets killed. I assume because the memory is not enough.
My question is now how to best proceed. Obviously my data is too large to be over-sampled as it is. I have thought of two options, but I cannot make out which is the most "correct".
I send in only one underrepresented class together with the largest class, and repeat this for each underrepresented class. I am not sure whether this could make the classes start to overlap, though.
I instead under-sample the data, maybe down to 100k samples per class. This might reduce the data enough such that I can run oversampling on the less represented classes (with 1k-10k samples).
Any other options that are more appropriate that I have missed?
AI: There are multiple options, depending on your problem and the algorithms you want to use. The most promising (or closest to your original plan) is to use a generator to prepare batches of training data. This is only useful for models that allow for partial fits, like neural networks. Your generator can stratify examples by, for example, generating a batch that includes exactly one sample of each target class. One epoch would be when you have served all the samples from the biggest class.
Downsampling is not a bad idea, but it depends on the difficulty of your task, because you do end up throwing away information. You could look at learning curves as a function of the number of samples for your model; if performance looks relatively capped, this wouldn't be a big issue.
A lot of models allow for weighting classes in your loss function. If we have 10,000 of class A and 1,000 of class B, we could weight class B 10x, which means mistakes that way count much harder and it will focus relatively more on samples from class B. You could try this but I could see this going wrong with extreme imbalances.
You can even combine these methods, downsample your biggest classes, upsample your smaller classes and use weights to balance them perfectly.
EDIT: Example of the batch options:
We have 4x A, 2x B and 1x C, so our set is:
A1 A2 A3 A4 B1 B2 C1
Regular upsampling would go to:
A1 A2 A3 A4 B1 B2 B1 B2 C1 C1 C1 C1
But this will not fit in our memory in a big data setting. What we do instead is only store our original data in memory (could even be on disk) and keep track where we are for each class (so they are seperated on target).
A: A1 A2 A3 A4
B: B1 B2
C: C1
Our first batch takes one of each class:
A1 B1 C1
Now our C class is empty, which means we reinitialize it, shuffle them (in this case it's only one example).
A: A2 A3 A4
B: B2
C: C1
Next batch:
A2 B2 C1
B and C are empty, reinitialize them and shuffle:
A: A3 A4
B: B2 B1
C: C1
Next batch is:
A3 B2 C1
And our last one of the epoch would be A4 B1 C1
As you can see, we have the same distribution as the full in-memory option, but we never keep more in memory than the original data, and the model always gets balanced, stratified batches. A minimal generator sketch of this idea follows.
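This is a rough sketch of such a stratified batch generator, assuming X is a numpy array of features and y an array of class labels (written for a model that supports partial fits, e.g. a Keras model via fit_generator):
import numpy as np

def stratified_batches(X, y):
    classes = np.unique(y)
    idx = {c: np.random.permutation(np.where(y == c)[0]).tolist() for c in classes}
    while True:
        batch = []
        for c in classes:
            if not idx[c]:  # class exhausted: reshuffle its indices
                idx[c] = np.random.permutation(np.where(y == c)[0]).tolist()
            batch.append(idx[c].pop())
        yield X[batch], y[batch]
One epoch would then be roughly as many batches as there are samples in the largest class, matching the walk-through above. |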
H: Concatenate dataframes Pandas
I have three dataframes. Their shapes are (2656, 246), (2656, 2412) and (2656, 7025). I want to merge dataframes as above:
So it will result in a (2656, 9683) dataframe. Thanks for any help.
Typo on the image: for Dataframe 3, it should be 7025, not 5668.
AI: Assuming that the rows are in same order that you wish to merge all of the dataframes, you can use the concat command specificying axis=1.
new_df = pd.concat([df1, df2, df3], axis=1)
If the row indexes of the data frames differ but you want to merge the rows in their current positional order, reset the indexes first so that concat aligns them by position instead of by label:
new_df = pd.concat([df.reset_index(drop=True) for df in [df1, df2, df3]], axis=1)
For details on the merge, join & concatenation operations, please refer to the pandas docs. |
H: SGD data should be randomly selected or sequentially feed?
When training a neural network, is it better to randomly choose the data for every batch or to feed the data sequentially? Why?
AI: If you feed the data randomly, there is less chance of overfitting than if you feed the data sequentially.
If the data is fed sequentially, the network will not learn to generalize as well, because the weights keep being updated on the same ordered batches again and again.
It is always recommended to shuffle the data and feed it randomly.
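A minimal sketch of shuffling before each epoch with numpy (Keras's fit also does this by default via shuffle=True):
import numpy as np

num_epochs = 10
# X, y assumed to be numpy arrays of inputs and targets
for epoch in range(num_epochs):
    perm = np.random.permutation(len(X))     # new random order each epoch
    X_shuffled, y_shuffled = X[perm], y[perm]
    # iterate over X_shuffled / y_shuffled in mini-batches and train
This way every batch is a fresh random sample of the data rather than a fixed sequence. |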
H: Why is duplicating inputs bad?
I am trying to predict an output value based on several continuously-valued inputs using a regression model.
I am not sure what approach is appropriate to scale/transform the input data for the regression. Let's just pretend that it is unlabeled data.
My most naive approach would be to add each input multiple times:
input
log(input)
sqrt(input)
and then let the regression model worry about finding which flavor of each input (if any) is significant.
What are the risks with using this approach?
AI: The issue with building a regression model on all 3 of these is that you are potentially introducing multicollinearity into the model. Although log(input) and sqrt(input) are not linear functions of the input, a quick test (using Matlab) shows they are still highly correlated (depending on the range):
input=rand(1,100);
input_log=log(input);
input_rt=sqrt(input);
corrcoef(input, input_log) %->0.8549
corrcoef(input, input_rt) %->0.9779
corrcoef(input_log, input_rt) %->0.9407
Multicollinearity will not reduce the predictive power of the model, but it will make the regression coefficients of these variables difficult to interpret since small adjustments in the data may cause the regression model to switch which of the 3 input variations it finds significant. It also adds unnecessary complexity to the model. I would train the models with the 3 separately if you want to observe the effects.
Additionally, applying things like sqrt and log to your output can be useful if the data currently can't be described by a linear relationship. |
H: Importing Common Constants to R and Python
I use Python for data munging, R for data analysis, and I combine both by running the munging/analysis sequentially in a Makefile. My Python and R scripts depend on various constants. I typically store my Python constants in a settings.py file, and I import this file across several Python scripts which depend on those constants. My R constants are typically not used in more than one script, so I store those constants in the script which uses them.
Some constants are shared by both the Python scripts and R scripts. Right now, I am defining them separately in R/Python files. However, it would be better if there was some way of defining constants in common across Python and R files, since this redundancy can lead to mistakes.
What would be the best way for me to define constant in common across files? Obviously, there is the problem that these are two different languages; so, for example, a list in Python is not quite the same as a list in R. However, there might be some solution which is more elegant than simply repeating constants across files.
I believe this would be the best place for this question, but if it belongs in the Python or R StackExchanges, please let me know.
AI: You should consider storing this sort of data in a format that is more universally accessible. I would suggest either JSON or YAML; JSON can be readily parsed by both R and Python.
Your data would look something like:
{
"FIRST_YEAR": 1952,
"IV_TYPES": [
["p", 6],
["n", 6],
["w", 4]
]
}
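A minimal sketch of loading these shared constants in Python (assuming the JSON above is saved as settings.json):
import json

with open('settings.json') as f:
    constants = json.load(f)

first_year = constants['FIRST_YEAR']   # 1952
iv_types = constants['IV_TYPES']       # list of [name, value] pairs
In R, the same file can be read with jsonlite::fromJSON("settings.json"), so both languages share a single source of truth. |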
H: Image Feature Vectors
I have downloaded a dataset from Amazon. http://jmcauley.ucsd.edu/data/amazon/ Dataset involves feature vectors of images. There are around 1.5 M feature vectors.
Dataset consists of 10 characters (the product ID), followed by 4096 floats (repeated for every product).
Every product image involves feature vectors with (4096x1) size. Feature vectors involve float numbers.
What do these float numbers mean?
What I understood is that there are 4096 features in total, each index of the feature vector indicates a specific feature, and the values in the feature vector indicate the frequency of the corresponding feature in the specific image.
Is it so? Or, if it is not, what might be the right explanation?
Thanks,
AI: The same link shows how these features are extracted, with a deep look into the cited article "Image-based recommendations on styles and substitutes":
Features are calculated from the original images using the
Caffe deep learning framework [11]. In particular, we used a
Caffe reference model with 5 convolutional layers followed
by 3 fully-connected layers, which has been pre-trained on
1.2 million ImageNet (ILSVRC2010) images. We use the
output of FC7, the second fully-connected layer, which results
in a feature vector of length F = 4096.
The reference neural network that was mentioned is the BAIR Reference CaffeNet at the Caffe Model Zoo, which is a slightly modified version of AlexNet.
Since the model was trained over ImageNet, which contains a wide variety of photographs of various categories (1000 of them, if I recall correctly), retrieving the neural codes of one of the layers (obtained just by forward propagation) will give you visual features with a fair representation of the images, even if the network was not specifically trained for Amazon's tasks (such as product recommendation). What these values actually mean is not something very tangible: it is the outcome of multiple 2D convolutions and other normalization and regularization functions, the parameters of which were adjusted specifically for classifying photographs from ImageNet.
The FC7 layer has a rectified linear unit activation (ReLU), which means that they are all non-negative numbers (potentially with several zeros). And since it's a fully connected layer that follows several convolutions, there is no intuitive mapping between a feature index and a certain characteristic of the image. You may picture the network as a highly complex function that yields a high-level representation of the image, under the form of a vector of numbers.
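If you need to read the raw file, a minimal parsing sketch based on the layout described in the question (a 10-character product ID followed by 4096 4-byte floats per product; treat this layout as an assumption about the binary format):
import array

def read_image_features(path):
    with open(path, 'rb') as f:
        while True:
            product_id = f.read(10)
            if not product_id:
                break
            feats = array.array('f')
            feats.fromfile(f, 4096)   # the 4096-dimensional FC7 feature vector
            yield product_id.decode('ascii'), feats.tolist()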
See also the paper Neural Codes for Image Retrieval, where the authors retrieve features from a pre-trained neural network in this fashion, for retrieving images in a different image domain. |
H: Prepending Input layer to pre-trained model
I'm trying to input numpy arrays of shape (1036800,) - originally images of shape (480, 720, 3) - into a pre-trained VGG16 model to predict continuous values.
I've tried several variations of the code below:
input = Input(shape=(1036800,), name='image_input')
initial_model = VGG16(weights='imagenet', include_top=False)
x = Flatten()(initial_model(input).output)
x = Dense(200, activation='relu')(x)
x = Dense(1)(x)
model = Model(inputs=input, outputs=x)
Previous variations of the above code yielded errors related to the input being the wrong dimensions, input_shape needing to have 3 channels (when using (1036800,) for that parameter in the initialization of VGG16), and the most recent error that results from running the above code is this:
Traceback (most recent call last):
File "model_alex.py", line 57, in <module>
model = initialize_model()
File "model_alex.py", line 20, in initialize_model
x = Flatten()(initial_model(input).output)
File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 596, in __call__
output = self.call(inputs, **kwargs)
File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2061, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "/home/aicg2/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 2212, in run_internal_graph
output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
File "/home/aicg2/.local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 164, in call
dilation_rate=self.dilation_rate)
File "/home/aicg2/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3156, in conv2d
data_format='NHWC')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 639, in convolution
input_channels_dim = input.get_shape()[num_spatial_dims + 1]
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 500, in __getitem__
return self._dims[key]
IndexError: list index out of range
Here is the full code. Here is the sample data file used in the script.
One possible approach to fixing this might be to resize the raw image files to 224x224 and turn them into numpy arrays of shape (224, 224, 3) so they can be plugged into the pre-trained model's first layer. However, I don't want to warp the images or waste another night pre-processing data when I should already be training.
Besides that, all I can think to do is Google my problem and try to adapt the found solutions, or aimlessly tweak various shape-related parameters and functions -- neither of which has gotten me very far over the past 4 hours.
AI: The issue is that you shouldn't flatten the images into 1-dimensional vector because the VGG16 contains 2D convolution layers (e.g. spatial convolution over images), which require the input to have the shape of (number_of_images, image_height, image_width, image_channels), given that keras.backend.image_data_format() returns 'channels_last'. If your image_data_format is 'channels_first', change the input data shape to (number_of_images, image_channels, image_height, image_width).
Here is your fixed code (tested with Keras 2.0.4):
x_train = x_train.reshape((x_train.shape[0], 480, 720, 3))
x_test = x_test.reshape((x_test.shape[0], 480, 720, 3))
initial_model = VGG16(weights='imagenet', include_top=False)
input = Input(shape=(480, 720, 3), name='image_input')
x = Flatten()(initial_model(input))
x = Dense(200, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(1)(x)
model = Model(inputs=input, outputs=x)
model.compile(loss='mse', optimizer='adam')
model.fit(x_train, y_train, epochs=20, batch_size=16)
score = model.evaluate(x_test, y_test, batch_size=16) |
H: Why do convolutional networks work so well for images?
Convolutional artificial neural networks work well in particular with images. Why?
AI: The convolution neural networks take into consideration that an image already has a two-dimensional structure. This is a domain knowledge provided by humans and not something the neural network can pick up.
For example, consider if I would give you the raw pixels shuffled in a flattened array (or any other signal for that matter). Would you be able to reconstruct its 2D structure? Probably this would be almost impossible (unless someone builds an algorithm to solve jigsaw puzzles at the pixel level!)
By applying convolutions with various strides, we look for patterns in patches of the image rather than in individual pixels. Using different strides, we explicitly express the resolution at which we investigate the patterns of the image.
So given that these convolution filters are trainable we are able to have an architecture that works well with images. In fact, it would work well with any signal that would have a 2D structure. |
H: What does a Word2Vec represents?
Does a single word vector, trained using Word2Vec or a similar approach, carry information or meaning?
AI: So we have that a word to vector model has been trained on a certain corpus to be able given words as inputs (one hot encoding is one way for the input) to represent the word as a vector of a usually high dimensionality of numbers.
Does it carry information? Yes, we could say that it carries information about the words. Similar words have similar vectors. The point is that we as humans do not define the similarity; the neural network defines it for us based on the corpus that we provide.
Does it carry meaning? No, because this representation does not include grammatical or syntactical information, and therefore the answer is that it does not carry any meaning.
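A minimal gensim sketch of the "similar words have similar vectors" point (the tiny corpus is made up; parameter names follow gensim 4, where the size argument is called vector_size):
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "sat", "on", "the", "rug"]]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1)
vec = model.wv["cat"]                 # the 50-dimensional vector for "cat"
print(model.wv.most_similar("cat"))   # neighbours ranked by cosine similarity
With a real corpus, the nearest neighbours end up being semantically related words. |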
H: What are the disadvantages of using deep neural networks compared to a linear model?
We have heard a lot about the advantages that artificial neural networks have over other models but what are the disadvantages of them in comparison to the simplest case of a linear model?
AI: First of all, we should say that a single affine layer of a neural network without any non-linearities/activations is practically the same as a linear model. Here we are referring to deep neural networks that have multiple layers and activation functions (non-linearities as relu, elu, tanh, sigmoid etc.)
Second, nonlinearities and multiple layers introduce a nonconvex and usually rather complex error surface, which means that there are many local minima to which the training of the deep neural network can converge. This means that a lot of hyperparameters have to be tuned in order to get to a place in the error surface where the error is small enough for the model to be useful.
The many hyperparameters, which could number from 10 up to 40 or 50, are often handled with Bayesian optimization using Gaussian processes, which still does not guarantee good performance.
Training is very slow, and adding hyperparameter tuning on top makes it even slower, whereas a linear model would be much faster to train. This introduces a serious cost-benefit tradeoff.
A trained linear model has weights which are interpretable and give useful information to the data scientist onto how various features play a role for the task at hand. |
H: What do the hidden layers of a neural network try to learn?
Typically all the layers of an artificial neural network are trainable. But what are the hidden layers trying to learn?
AI: Each hidden layer applies a nonlinear function to the outputs of the previous layer.
The first layers operate on our raw data inputs, and the last layer is responsible for either applying softmax for classification purposes or being an affine layer for regression.
The in-between layers are necessary in order to stack complexity and therefore avoid having to create a layer of very high complexity.
In fact, in theory, a two-layer-deep neural network can represent any function. But how complex would this function need to be?
A layer of a very high complexity would be able to map our raw inputs into our final classification or regression task.
However what would be simpler to do is to let the intermediate hidden layers learn useful features regarding the given task and then the next hidden layer could exploit these features in its own way. And then the next hidden layer could use these newly transformed features in its own way and so on. So stacking nonlinear functions one on top of the other is more clever than requiring a single nonlinear function to do all the hard work.
There is a limit on how many layers you can stack together, mainly because these hidden layers get trained by backpropagation, which uses the chain rule of derivation to pass the deltas of the error back to the weights of the layers and tune them. A long series of these operations can end up with exploding or vanishing gradients, which means really poor performance or a model that cannot be trained. |
H: What are the advantages of the rectified linear (relu) over the sigmoid activation function in networks with many layers?
The state of the art of nonlinearity is to use rectified linear units (ReLU) instead of a sigmoid function in deep neural networks. What are the advantages?
AI: The sigmoid function becomes asymptotically either zero or one which means that the gradients are near zero for inputs with a large absolute value.
This makes the sigmoid function prone to vanishing gradient issues which the ReLU does not suffer as much.
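A quick numeric illustration of that claim (plain numpy, nothing network-specific):
import numpy as np

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
sig = 1.0 / (1.0 + np.exp(-x))
sig_grad = sig * (1.0 - sig)          # derivative of the sigmoid
relu_grad = (x > 0).astype(float)     # derivative of ReLU (0 for negative, 1 for positive inputs)
print(sig_grad)   # ~4.5e-05 at |x| = 10, i.e. the gradient has almost vanished
print(relu_grad)  # stays exactly 1 for any positive input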
In addition, ReLU has an attribute which can be seen both as positive and negative depending on which angle you are approaching it. The fact that ReLU is effectively a function that is zero for negative inputs and identity for positive inputs means that it is easy to have zeros as outputs and this leads to dead neurons. However, dead neurons might sound bad but in many cases, it is not because it allows for sparsity. In a way, the ReLU does a similar job of what an L1 regularization would do which would bring some weights to zero which in turn means a sparse solution.
Sparsity is something that, lots of times, leads to a better generalization of the model but there are times which has a negative impact on performance so it depends.
A good practice when using ReLU is to initialize the bias to a small number rather than zero so that you avoid dead neurons at the beginning of the training of the neural network which might prevent training in general. |
H: Data not consistent
I have a data set of client profiles and mutual funds. Now the problem is that there is a huge number of different mutual funds available, and based on history I can see that for a certain type of client profile more than 20 different funds have been suggested.
Now I have no idea which algorithm I should use to come up with the most recommended fund for a certain client profile.
AI: I assume you want to build a Recommendation Engine which will recommend mutual funds to the clients based on their profile. Recommender System is a special class of Machine Learning and make use of Supervised/Unsupervised Machine Learning techniques.
According to Wikipedia,
A recommender system or a recommendation system (sometimes replacing
"system" with a synonym such as platform or engine) is a subclass of
information filtering system that seeks to predict the "rating" or
"preference" that a user would give to an item.
So essentially, the item here in your problem statement is the mutual fund of which you need to predict the preference a client would give. Higher rating/preference of a Mutual Fund, let's call it X for a client A will mean X should be recommended to A.
Now, how do you give rating/preference to a list of items? Recommender Systems uses a number of approaches listed below:
Popularity Based Approach
Content Based Approach
Collaborative Filtering Based Approach which composes of Nearest Neighbor and Matrix Factorization
Hybrid Approach
Popularity Based Approach - Popularity Based Approach will recommend items, in your case, mutual funds, based on the total count of purchases. It will rank the mutual fund with maximum purchase in the past at the top. The item at the top will be recommended to every user/client.
The advantage of the Popularity Based Approach is that it is very easy to
implement. The approach is used to list the most popular items on
e-commerce websites and in online news (e.g. Google News). It can use context (for example, the most popular products at this time of day). It can also use user and item features (for example, the most popular products with feature F1, the most popular products among users with feature UF1).
The disadvantage of the Popularity Based Approach: no personalized
recommendation. Everyone is recommended the same items.
Content Based Approach - In Content Based Approach, features of both the item as well as the user/client is used to offer personalized recommendation.
According to Wikipedia,
Content-based filtering methods are based on a description of the item
and a profile of the user’s preferences. In a content-based
recommender system, keywords are used to describe the items and a user
profile is built to indicate the type of item this user likes. In
other words, these algorithms try to recommend items that are similar
to those that a user liked in the past.
So, let's say one of the clients with features UF1, UF2 purchased a mutual fund X with a set of features F1, F2 and F3 in the past. New mutual fund products, let's say Y and Z, will only be recommended to this client if their features match F1, F2 and F3.
It is also called Classification Based Approach because you are feeding in list of features of products and user into a classifier to predict what item should be recommended. To build a Content Based Approach model, you can use Machine Learning models like Naive Bayes Classifier or a Decision Tree.
The advantage of the Content Based Approach over the Popularity Based Approach
is that it gives personalized recommendations.
Disadvantage: it is not easily scalable, and it is difficult to collect
high-quality information on users and products.
Collaborative Filtering Approach -
According to Wikipedia,
Collaborative filtering methods are based on collecting and analyzing
a large amount of information on users’ behaviors, activities or
preferences and predicting what users will like based on their
similarity to other users.
Without going into technical jargon, the Collaborative Filtering Approach is of two types -- User-based Collaborative Filtering & Item-based Collaborative Filtering.
In User-based Collaborative Filtering, let's say user/client A bought two items X and Y and user/client B bought the same items X and Y. So these two users are similar to each other based on their purchase history. Now if the user/client B purchases a third item Z, Z will also be recommended to user/client A.
In Item-based Collaborative Filtering, let's say a user/client A bought items W, Y and Z. User/client B bought W and Y. User/client C bought only Y.
What will be recommended to user/client C? Answer is item W because item W was co-purchased with item Y by user/client A and B.
These come under the Nearest Neighbor techniques of Collaborative Filtering, because we are trying to classify a user or an item by determining its nearest neighbors. An algorithm known as kNN (k-Nearest Neighbors) is used to do the task.
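As a rough illustration (not production code), the item-based variant can be sketched with scikit-learn's NearestNeighbors; the tiny purchase matrix and fund names below are made up:

import pandas as pd
from sklearn.neighbors import NearestNeighbors

purchases = pd.DataFrame(
    {"fund_W": [1, 1, 0], "fund_Y": [1, 1, 1], "fund_Z": [1, 0, 0]},
    index=["client_A", "client_B", "client_C"],
)

# Each fund is described by the clients who bought it (transpose), and kNN
# finds the funds most often co-purchased with a given fund.
item_vectors = purchases.T
knn = NearestNeighbors(metric="cosine").fit(item_vectors)
distances, indices = knn.kneighbors(item_vectors.loc[["fund_Y"]], n_neighbors=2)
print(item_vectors.index[indices[0]])  # fund_Y itself plus its closest neighbour (fund_W)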
There is another technique, known as Matrix Factorization, as well.
Hybrid Based Approach - The Hybrid Based Approach, as the name says, is a combination of the Content Based Approach and the Collaborative Filtering Approach. It is largely used by Netflix and Amazon to give the best recommendations.
H: Why Orange "Predictions" and "Test & Score" produce different results on the same data?
It is not very clear what the difference is between the following two schemes:
From help docs:
Test & Score widget: tests learning algorithms on data.
Predictions widget: shows models’ predictions on the data. Which model? Pretrained one? Where is it taken from?
Obviously the results in the Confusion Matrix are different, so it is not clear how I should train and test my model.
If I can easily do this:
What's the point of a separate single Predictions widget connected to some learner? What does it actually test?
AI: Perhaps I can elaborate a bit on this.
Test&Score is used for evaluating a model. You provide T&S with the training data and the learner (e.g. Random Forest), and the widget then performs 10-fold cross-validation on the training data, each time leaving out 1/10 of the data for testing. The results of all 10 folds are then combined into a single evaluation result.
Predictions, however, does not perform cross-validation; it doesn't test the data provided. Instead, Random Forest passes a model to Predictions, not a learner. The model is built on the training data. The Predictions widget does not consider the real class; it takes the model from RF and applies it to the (new) data.
These are two different procedures, thus you are getting different results. |
H: What is the purpose of the discriminator in an adversarial autoencoder?
This is specific to the generative adversarial network (GAN) proposed in A. Makhzani et al. "Adversarial Autoencoders". In a traditional GAN, the discriminator is trained to distinguish real samples in $p(x)$ from fake generated samples output by the generator. The adversarial autoencoder, on the other hand, uses a discriminator over the latent codes $z$ instead of the original samples.
What I cannot understand is how the discriminator is able to distinguish between the latent code's prior distribution $p(z)$ and the posterior distribution $q(z)$. What is the reasoning behind the discriminator model attempting to discriminate something that is not a sample $x$, as in the traditional GAN? And should this discriminator be trained before the rest of the autoencoder?
AI: The purpose of having a prior distribution $p(z)$ in any generative adversarial network is to be able to smoothly match a latent code $z$ in a known distribution to an input $x$ in the domain and vice versa. The encoder of a simple autoencoder, without any additional measures other than the typical pipeline
$$x \rightarrow E \rightarrow z \rightarrow D \rightarrow x'$$
would only require $x$ to approach $x' = D(E(x))$, and for that purpose the decoder may simply learn to reconstruct $x$ regardless of the distribution obtained from $E$. This means that $p(z)$ can be very irregular, making generation of new samples less feasible. Even with slight changes to the bottleneck vector, we cannot be sure that the encoder would ever be able to produce that code with any $x$.
In an adversarial autoencoder (AAE) however, the encoder's job is two-fold: it encodes inputs in $p(x)$ to the respective code in $q(z)$ so that:
it minimizes the reconstruction cost $f(x, D(E(x)))$ (where $f$ is a distance metric between samples, such as the mean squared error);
while it learns to adapt $q(z)$ to the prior distribution $p(z)$.
The latter task is effectively enforced because the discriminator receives:
positive feedback from codes in $p(z)$;
and negative feedback from codes in $q(z)$.
Even if the discriminator might not know anything about any of the two distributions at the beginning, it is only a matter of making enough iterations before it does. The ideal encoder will manage to trick the discriminator to the point of having approximately 50% accuracy in the discrimination process.
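For concreteness, here is a minimal PyTorch-style sketch of the discriminator update described above; the network sizes, the optimizer handling and the Gaussian prior are placeholder choices of mine, not the paper's exact setup:

import torch
import torch.nn as nn

latent_dim = 8  # placeholder size of the code z
discriminator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()

def discriminator_step(encoder, x, optimizer):
    z_fake = encoder(x).detach()         # codes from q(z), labelled "fake"
    z_real = torch.randn_like(z_fake)    # samples from the prior p(z), here a Gaussian
    optimizer.zero_grad()
    loss = bce(discriminator(z_real), torch.ones(len(z_real), 1)) \
         + bce(discriminator(z_fake), torch.zeros(len(z_fake), 1))
    loss.backward()
    optimizer.step()
    return loss.item()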
Also note that $p(z)$ may not be just a Gaussian or uniform distribution (as in, some sort of noise).
Quoting from Goodfellow's Deep Learning book (chapter 20):
When developing generative models, we often wish to extend neural networks to implement stochastic transformations of $x$. One straightforward way to do this is to augment the neural network with extra inputs $z$ that are sampled from some simple probability distribution, such as a uniform or Gaussian distribution. The neural network can then continue to perform deterministic computation internally, but the function $f(x, z)$ will appear stochastic to an observer who does not have access to $z$.
Although denoising autoencoders rely on this aspect to learn a model that ignores noise from a sample, the same paper on AAEs (section 2.3) shows that combining noise with a one-hot encoded vector of classes can be used to incorporate label information about the sample. This information is only provided to the discriminator, but it still influences how the encoder produces $q(z)$. |
H: How to use boolean data in DecisionTreeClassifier in sklearn?
I am trying to build a decision tree using python and sklearn DecisionTreeClassifier.
One of the columns used for splitting the tree is Boolean (let it be x). However, the tree that is generated contains comparisons like x <= 0.5.
This does not make sense to me.
Can anybody suggest how to use Boolean values in decision trees?
AI: Why would this not make sense? The tree tries to create a split with maximal separation between signal and background, and for a 0/1 feature any threshold greater than 0 and less than 1 results in exactly the same split.
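A tiny scikit-learn sketch (with made-up data) showing that the x <= 0.5 split simply separates False from True:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[0], [0], [1], [1]])   # boolean feature cast to 0/1
y = np.array([0, 0, 1, 1])
tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["x"]))
# prints a tree whose only split is "x <= 0.50", i.e. False vs. True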
H: Dataset - Sample pdfs for text processing?
I'm looking for a rather large number of pdf files for testing my text processing program. I tried looking for an open site to get a few thousand pdfs, but wasn't able to find anything. I don't really know if this is the right place to ask (probably not), but maybe someone has a good tip for me.
Thanks in advance.
AI: A GitHub repo with ~1,000 pdfs is here.
Another GitHub repo with a corpus of pdf examples, including edge cases, is here.
H: How can I do tree_method ='exact' in XGBoost classifier?
I am doing XGBoost classification on a huge data set and it shows:
Tree method is automatically selected to be 'approx' for faster speed. to use old behavior(exact greedy algorithm on single machine), set tree_method to 'exact'
How can I switch it to 'exact'?
I have tried clf = xgb(tree_method='exact') and it returned an error!
AI: Indeed tree_method is a parameter for Tree Booster. There are 4 choices, namely, auto, exact, approx and hist. The default is set to auto which heuristically chooses a faster algorithm based on the size of your dataset. Approximate algorithm (approx) is chosen in your case since your dataset is huge.
To change the tree construction algorithm, you have to pass tree_method in the parameter dictionary given to the train function as follows:
import xgboost as xgb
...
dtrain = xgb.DMatrix(x_train, label=y_train)  # xgb.train expects DMatrix inputs
dtest = xgb.DMatrix(x_test)
parameters = {'tree_method': 'exact'} #You can add other parameters as well
model = xgb.train(parameters, dtrain, num_round) #num_round is the number of rounds for boosting
prediction = model.predict(dtest)
...
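Alternatively, if you want to stay with the scikit-learn style interface you were originally trying (clf = ...), the same parameter can be passed to the classifier's constructor; a minimal sketch:

from xgboost import XGBClassifier

clf = XGBClassifier(tree_method='exact')
clf.fit(x_train, y_train)
prediction = clf.predict(x_test)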
I hope this helps. |
H: What are the inputs to a logistic regression? Probability or trial result?
It is a very basic question, but I cannot find a satisfactory answer to it. When we do logistic regression, what are the inputs? Suppose we have a dataset of students giving the number of hours each student spent studying, and the end result of whether they passed or failed. The Wikipedia article on LR goes on to do a fit with the logistic curve based on probability as a function of the number of hours. But how do I get the probabilities if all I have is whether they passed or failed?
AI: From Wiki: The graph shows the probability of passing the exam versus the number of hours studying, with the logistic regression curve fitted to the data.
The probability curve is the output of the LR, not the input. During training, the targets are discrete class labels, 1 or 0. At inference time, the output is a continuous value between 0 and 1. To generate the probability curve, just feed different values of "hours studying" into the trained model.
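A minimal scikit-learn sketch (with made-up hours/pass data) of exactly that: the inputs are the raw pass/fail outcomes, and the probability curve comes out of the fitted model:

import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5], [4.0]])
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # trial results, not probabilities

model = LogisticRegression().fit(hours, passed)
grid = np.linspace(0, 5, 6).reshape(-1, 1)      # new "hours studied" values
print(model.predict_proba(grid)[:, 1])          # estimated probability of passing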
H: ML project ideas for dataset
Not sure if this is the right forum, but currently I have a dataset which contains a list of TV shows. Each record contains pricing across competitors (the price at provider 1, for example iTunes), the TV show cover image, synopsis, country of origin, language, etc. I am looking for ideas on what project I could prototype with it; I want to learn ML and this may be a useful dataset.
AI: If this is your first ML project, you should try to predict one feature with the other features; i.e., see if highest bidder can be predicted based on cover image, synopsis, country of origin, etc. For this you can use a basic NN as a baseline, and keep adding layers until it's powerful enough. To choose the feature to be predicted, you can make a correlation matrix or use random forests to make sure there are some relationships within the data and you're not simply trying to predict an uncorrelated variable.
A little bit more difficult would be to predict the precise pricing a provider would give a movie based on the price it's given other movies (of course, without looking at the other competitors' pricing).
The hardest, I think, would be to use the cover image as input to a CNN in order to predict the genre of movie, country of origin, etc. |
H: python print values seasonal_decomposition
I am a total beginner in Python. After using seasonal_decompose for time series decomposition, result=seasonal_decompose(series, model='additive', freq=365), I got the plotted results with the commands result.plot() and pyplot.show(), but I cannot understand how to print the resulting values on screen or how to see the decomposed time series values.
I have the plot, but I need to have the values in the console or in some data file.
Complete code:
from pandas import read_csv
series=read_csv('B.csv', header=0, parse_dates=[0], index_col=0, squeeze=True)
from random import randrange
from matplotlib import pyplot
from statsmodels.tsa.seasonal import seasonal_decompose
result=seasonal_decompose(series, model='additive', freq=365)
result.plot()
pyplot.show()
AI: seasonal_decompose returns an object with seasonal, trend, and resid attributes. We can access the data through these attributes:
res = seasonal_decompose(series, model='additive', freq=365)
residual = res.resid
seasonal = res.seasonal
trend = res.trend
print(trend)
etc...
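If you also need the values in a data file, one possible way (the column names are my own choice) is to collect the attributes into a DataFrame and write it to CSV:

import pandas as pd

decomposed = pd.DataFrame({
    'observed': res.observed,
    'trend': res.trend,
    'seasonal': res.seasonal,
    'residual': res.resid,
})
print(decomposed.head())
decomposed.to_csv('decomposition.csv')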
H: Difference between segmentation and effect of explanatory variables
In the context of discrete choice models, what difference does it make if I segment my sample based on a particular "criterion" and study the effects of the explanatory variables on each segment, versus just adding the "criterion" as another explanatory variable for the unsegmented population?
Say I want to build a commuter mode choice model using MNL: what difference does it make if I split my population by gender and study the effects of the various explanatory variables per segment, versus adding gender as another explanatory variable for the entire unsegmented population?
AI: If you choose to build a model where one categorical feature, such as gender, plays a big role in the outcome, then the overall model will give a large weight to gender and the other features will have smaller weights. While your model will be applicable to the unsegmented population, its predictive power might not be very good.
Now, suppose that you wanted to identify if the weights of the other features differed for males vs. females and if the segmented models had better predictive power, then by segmenting the population and creating models for each gender, you could explore changes in the magnitude or direction of the weights of each feature & evaluate the goodness of fit of your models to see if there is an improvement in the segmented population models vs. the overall model.
If performing logistic regression, you could scale the coefficients or Z-values returned in the model parameters for each model and check if the relative magnitude & direction of the overall vs. the segmented models gave you similar or different results. You could also look at the confusion matrix and use various accuracy measures to assess if the segmented models performed better, worse or the same as the overall model. In this way, you could determine whether for the outcome independent gender-based models are more appropriate for your question than a single overall model.
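As a rough sketch of that comparison in Python (the data frame df, its column names and the 0/1-encoded gender and outcome are all made-up placeholders):

from sklearn.linear_model import LogisticRegression

features = ["travel_time", "cost", "income"]

# Pooled model: gender enters as one more explanatory variable
pooled = LogisticRegression().fit(df[features + ["gender"]], df["chose_transit"])

# Segmented models: one per gender, so every coefficient may differ by segment
segmented = {
    g: LogisticRegression().fit(part[features], part["chose_transit"])
    for g, part in df.groupby("gender")
}
print(pooled.coef_)
for g, m in segmented.items():
    print(g, m.coef_)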
But, this leads to the question of if I have "n" features, should I create multiple models by segmenting the "x" categories in each of the "n" features? This really depends on the data question being asked. In some cases, it is valuable to run multiple models where each feature is dropped one-by-one to assess feature importance. In some cases, just evaluating the weight & relevance of each feature from the overall model is sufficient. Going beyond, one could look at optimizing models by choosing which of all "n" features should be segmented to yield the model with the best predictive power. This paper from FICO describes one methodology. This article details an approach and comparison of various models. |
H: How to evaluate the performance based on rate data
I have the following data:
Goal Achieved
100 90
150 130
200 175
...
The first column "Goal" is the number which should be done that day, and the column "Achieved" is the number that was actually done that day. Each row stands for a day.
I want to use this data to assess the performance of a worker. Here below I have two ideas:
Method 1
Method 2
Which one of the methods is reasonable and mathematically right?
Or if neither is good, what should I do to evaluate the performance of a worker?
AI: It depends on what you want to achieve, and how you'd define performance.
Suppose it is a running exercise program with 3 exercises in a week with increasing targets.
The first formula calculates the average exercise success, the second formula defines the weekly success, where the exercises are weighted with the goals.
There is no right or wrong, just interpretation of the numbers.
Just to confuse you a bit more, suppose:
Case 1: Goal 8, Achieved 1; Goal 2, Achieved 1
Case 2: Goal 2, Achieved 0; Goal 8, Achieved 3
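Working these out, assuming formula 1 averages the daily Achieved/Goal ratios and formula 2 divides total Achieved by total Goal: Case 1 gives (1/8 + 1/2)/2 ≈ 0.31 under formula 1 and (1 + 1)/(8 + 2) = 0.20 under formula 2, while Case 2 gives (0/2 + 3/8)/2 ≈ 0.19 and (0 + 3)/(2 + 8) = 0.30.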
Case 1 beats Case 2 on both individual days and therefore under formula 1, but loses under formula 2.
H: How to handle missing data for machine learning
I'm trying to come up with a data structure to predict water visibility in a lake. I have some measured samples but would like to take other features into account.
As an example, I would like to get weather data such as rain and temperature for the past 7 days before an event. I got this data from BigQuery, which seems to work fine. But sometimes it does not find data for all of the past 7 days. How would I handle such a case? What would be a good way to structure my features? I thought of something like:
dayofyear,temp,water_temp,temp-1,rain-1,dayofyear-1,temp-2,rain-2,dayofyear-2,....,temp-7,rain-7,dayofyear-7,visibility
While the last one would be the field I like to predict at some point.
AI: There are three main approaches to handling missing data.
Impute - use some method to fill in the missing values with reasonable guesses. You could interpolate between two time points, take the average value over all time points, or use a variety of other techniques leveraging co-occurrence of other variables to get a reasonable estimate.
Ignore - some methods can just ignore missing data, and not use it in the model at all
Utilize - for cases where data is not missing-at-random, missingness itself can be an informative feature. You could include missing values as another data point to model your output. |
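A small pandas sketch (with made-up weather values) of options 1 and 3 combined:

import numpy as np
import pandas as pd

weather = pd.DataFrame({"temp": [21.0, np.nan, 19.5, np.nan, 18.0],
                        "rain": [0.0, 2.5, np.nan, 0.0, 1.0]})

weather["temp_missing"] = weather["temp"].isna().astype(int)      # option 3: keep missingness as a feature
weather["temp"] = weather["temp"].interpolate()                   # option 1: interpolate between time points
weather["rain"] = weather["rain"].fillna(weather["rain"].mean())  # option 1: or fill with the average
print(weather)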
H: Categorize observations with inconsistent text descriptions
Given a data table with inconsistent item descriptions, how could I most effectively assign an item category using R (i.e. dplyr), MySQL, or Python? An R based solution is preferred.
MySQL is the data source. As is, case-when logic assigns an item category based on an item description. There is no common key or category to aggregate the items, hence the need to create one. However, not all like items are named the same. These are the same items, but sold across different locations, and there is little to no consistency in item descriptions between locations. Real-world data provides plenty of unstructured learning opportunities.
For example, consider an item like 'Whole Cheese Pizza'. This item exists with multiple descriptions such as 'Whl Cheese Pizza', 'Pizza Chs Whole', 'Cheese Pizzza Whole', 'Whole Cheese', etc. Ideally these all roll into one category named 'Pizza'. Case-when logic finds item descriptions like '%Pizza%' or 'Whl Cheese%' or 'Whole Cheese%' to assign the 'Pizza' category.
Clearly not ideal nor really scalable. More pragmatic than programmatic.
Without seeing the catalog, are there any suggestions on how to apply a more programmatic method to catalog items with inconsistent text descriptions for aggregation?
Can provide additional context or details as needed. Thank you for time and expertise!
edit: Sample file here with item descriptions and rollups attached. Goal is to recreate the rollup category without case-when logic. Note the missing rollup category is the default case.
edit: fixed link to file
AI: Thanks for the real-world problem! Interesting challenge there. Some thoughts:
regex (regular expressions) may not work because "Whole Cheese" (the description) doesn't even contain "pizza" (the category)!
the "case-when logic" you mentioned is SQL's equivalent of handling conditional (if/else) logic using SQL's CASE statement. This can be helpful but super-tedious due to the huge number of rules required. (And then what happens when you have misspelt descriptions e.g. pizzza?)
wrt the comment by @Spaceman, my thought is that using stringdist to calculate string distance to then perform clustering, which is unsupervised, wastes all the useful labelled data which can be used!
My suggestion:
Use a combination of "case-when logic" and classification algorithms to generate a set of labelled data (the labels being the rollups), on which you can then perform supervised classification for the unlabelled data.
In your sample file, some categories have already been labelled. You can use that as training data for your supervised learning (e.g. SVM).
This may require some text mining knowledge (e.g. DocumentTermMatrix, tokenizers). I refer you to the wonderful tm and RTextTools packages in R.
H: validation/test set uniqueness question
Hopefully a simple question, but it's a little unclear to me on how best to separate train/validate/test sets.
I have say 100 examples of class A. I'm classifying text into either class A, which I care about, or class B, which could be any text in the world (negative class). I have, obviously, far more examples of class B.
When I split the data into train/validate/test sets, is it imperative that the test set, which is not at all used in training/tuning, NOT have any examples of class A that were used in training? In the real world (and given my limited samples), the text it will classify against will have some exact examples of class A, but not always (there could be variations - of which I do not have all of them).
I can ensure that the test set have unique class B text, but it is unclear if I have to also maintain completely unique class A examples in the test set, since the real world won't necessarily be like this. Would it make sense to also have x% of class A examples from training in the test set, or should it always be 0% in the test set?
AI: Let's take your example: you have 2 classes, A and B, and the percentage of class A present in the data is smaller than that of B, so basically you have an imbalanced dataset. You have to ensure (and this goes for both balanced and imbalanced datasets) that both the train and test sets contain both classes A and B.
You ask whether it is OK to not have any data of class B in the test set, and I would say that is wrong. You have to have both classes in the test set and in the train/validation set. To achieve this, you can use stratify = target in the train_test_split function you use when splitting the data. Also, you should use nested cross-validation to ensure all your data is used.
Keep in mind that stratify = target won't solve your class imbalance problem; you have to deal with that separately. It only ensures that both classes are distributed across the train/validation and test sets, so that you get a fair distribution. How you deal with the imbalance is then up to you.
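A minimal sketch of such a stratified split (X and y stand for your own features and A/B labels):

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)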
EDIT 1: Based on the comment, the OP is asking whether they can use the samples that were in the train set in the test set as well. The answer is pretty simple: a big NO! You cannot use data that was in your train set in your test set, as it will lead to data leakage!! This is a big mistake and would result in overly optimistic results. Google "data leakage" and you'll see how big of a problem it is. Basically, you cannot use data that is present in the train set in your test set.
Keep your test data completely and absolutely separate from your train/valid set. Only then can you get a generalizable result that will benefit you in a real-world application.
As I mentioned before, if you have a small dataset, use nested cross-validation. It will select the best model and tune your hyperparameters at the same time, and it ensures that all your data gets utilized.
H: How to precompute one sequence in a sequence-pair task when using BERT?
BERT uses separator tokens ([SEP]) to input two sequences for a sequence-pair task. If I understand the BERT architecture correctly, attention is applied to all inputs thus coupling the two sequences right from the start.
Now, consider a sequence-pair task in which one of the sequences is constant and known from the start. E.g. Answering multiple unknown questions about a known context. To me it seems that there could be a computational advantage if one would precompute (part of) the model with the context only. However, if my assumption is correct that the two sequences are coupled from the start, precomputation is infeasible.
Therefore my question is:
How to precompute one sequence in a sequence-pair task while still using (pre-trained) BERT? Can we combine BERT with some other type of architecture to achieve this? And does it even make sense to do it in terms of speed and accuracy?
AI: Each token position at each of the attention layers of BERT is computed taking into account all tokens of both sequences. This way, there is not a single element that depends on just the first sequence and therefore it is not possible to precompute anything to be reused for different second sequences.
As you can see, the very nature of BERT's network architecture prevents you from factoring out the computations involving the first sequence.
In other similar architectures like ALBERT, there are some parts that could be reused, as the embedding computation (because ALBERT's embeddings are factorized, making the embedding matrix smaller but adding a multiplication at runtime), but I am not sure that reusing this computation would save a lot.
I don't know of any architecture made for sequence pairs that would let you do what you described, as most sequence pair approaches derive from BERT, which itself relies on computing attention between every token pair.
One option would be to use a network that gives you a fixed-size representation (i.e. a vector) of a sentence: you would use it on each of the sentences in a pair, and then you would feed both vectors to a second network (e.g. a multilayer perceptron receiving the concatenation of both vectors) to compute the final output. Depending on the task, this may give you good results and allow you to do the mentioned precomputing. To obtain the sentence representation vector, you may use BERT itself (the output at the [CLS] position) or some other architecture like LASER. |
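A hedged sketch of that two-tower idea, using BERT's [CLS] vector as the fixed-size representation; the model name and pooling choice are assumptions, and the small scoring head on top would still need to be trained for your task:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        return encoder(**inputs).last_hidden_state[:, 0]        # [CLS] vector

context_vec = embed("the known, constant context")              # computed once and cached
question_vec = embed("an unknown question about that context")  # computed per query
pair = torch.cat([context_vec, question_vec], dim=-1)           # input to a small trainable head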
H: Visualizing effect of regularization for linear regression problem
I wanted to put together an example notebook to demonstrate how regularization makes an impact for such a simple model as a simple linear regression. When executing the below script though, I notice that the LinearRegression() and Ridge() models both return the same coef_ and intercept_ and hence, the same regression plots, no matter how I change the regularization strength.
The data is a small dataset containing GDP per capita values in U.S. dollars for 2021 and life satisfaction scores, for a set of countries (n=40). The dataset plotted with the regression line(s) looks like this:
In Aurélien Géron's Hands-On Machine Learning with Scikit-Learn and TensorFlow book (page 27), I read the following:
Constraining a model to make it simpler and reduce the risk of overfitting is called regularization. For example, the linear model we defined earlier has two parameters, θ0 and θ1. This gives the learning algorithm two degrees of freedom to adapt the model to the training data: it can tweak both the height (θ0) and the slope (θ1) of the line. (...) If we allow the algorithm to modify θ1 but we force it to keep it small, then the learning algorithm will produce a simpler model than with two degrees of freedom, but more complex than with just one.
Based on this, I had the understanding that by including the alpha hyperparameter (i.e., the regularization lambda), I am essentially putting an additional constraint on the squared sum of coef_ and intercept_, thereby forcing the model to adjust them during training, since the new cost function now includes the lambda-term. In my current setting, coef_ (slope) is in the range of 1e-5 while intercept_ is around 5.75. Their squared sum is around 33. I would expect the regularized model to decrease the intercept_ because that has a much higher impact on the squared sum, than coef_ does. Yet, when running Ridge(alpha=...) with any alpha value, the learned parameters (and hence the regression plot) are identical to those of the unregularized simple linear regression.
Do I misunderstand something fundamental here? How could I see the effects of regularization on such a model?
My full code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge

# df is assumed to be loaded from the CSV listed at the end of the question,
# e.g. df = pd.read_csv('gdp_lifesat.csv')
X = np.c_[df['GDP_2021']]
y = np.c_[df['LifeSatisf']]
model1 = LinearRegression()
model1.fit(X, y)
model2 = Ridge(alpha=100)
model2.fit(X, y)
print("Coefs model 1: ", model1.coef_, model1.intercept_)
print("Coefs model 2: ", model2.coef_, model2.intercept_)
x_new = np.linspace(0, 140000, 10000)[:, np.newaxis]
y_1 = model1.predict(x_new).flatten().tolist()
y_2 = model2.predict(x_new).flatten().tolist()
plt.figure(figsize=(10, 3))
ax = plt.axes()
ax.scatter(df['GDP_2021'], df['LifeSatisf']);
ax.plot(x_new, y_1, c='r', linestyle='--');
ax.plot(x_new, y_2, c='b', linestyle='--');
plt.title("Life satisfaction vs. GDP per capita regression models");
plt.xlabel("GDP per capita (2021) [US$]");
plt.ylabel("Life satisfaction [-]");
Dataset as a comma separated file:
Country,LifeSatisf,GDP_2021
Australia,7.3,62618.59
Austria,7.1,53793.37
Belgium,6.9,50412.71
Canada,7.4,52791.23
Czech Republic,6.7,25806.38
Denmark,7.6,67919.59
Finland,7.6,53522.57
France,6.5,45028.27
Germany,7.0,50787.86
Greece,5.4,19827.16
Hungary,5.6,18527.59
Iceland,7.5,68843.65
Ireland,7.0,102394.02
Italy,6.0,35584.88
Japan,5.9,40704.3
Korea,5.9,35195.52
Luxembourg,6.9,131301.6
Mexico,6.5,9967.39
Netherlands,7.4,57714.88
New Zealand,7.3,48348.99
Norway,7.6,82244.23
Poland,6.1,17318.5
Portugal,5.4,24457.14
Slovak Republic,6.2,21383.29
Spain,6.3,30536.86
Sweden,7.3,58639.19
Switzerland,7.5,93515.48
Turkey,5.5,9406.58
United Kingdom,6.8,46200.26
United States,6.9,69375.38
Brazil,6.4,7741.15
Chile,6.5,16799.37
Estonia,5.7,27100.74
Israel,7.2,49840.25
Latvia,5.9,19538.9
Russia,5.8,11273.24
Slovenia,5.9,28939.27
South Africa,4.7,6861.17
Colombia,6.3,5892.14
Lithuania,5.9,22411.65
AI: Your understanding of regularization is completely correct; it just seems that the value for alpha you are using is too low for this example to have any meaningful impact. Increasing the value for alpha to something much larger (e.g. 10e9) does show the impact of regularization on the parameters of the model:
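A quick way to see this, reusing X, y and Ridge from the question's code; the coefficient barely moves for small alpha and only shrinks appreciably for very large values:

for alpha in [1, 100, 1e8, 1e10]:
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, model.coef_, model.intercept_)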
H: How to count the number of rows by variable in R
Context: I am trying to determine a way to create an extra step in between my dataset and the code below, or to optimise the code altogether. Currently, the data frame "df_b" looks as follows. In column 4, the repetitions can exceed 1 (as they denote the number of times a word appears within +/- 5 words of a term).
Problem: I am trying to count the number of lemmas that appear within each year and create a tibble. This is what I have tried to do below, but I realise now that I am not doing it correctly because some rows in the repetition column exceed 1.
Possible solution 1: create another column where each row holds the value 1, then feed that into the code below
Possible solution 2: create a new tibble by grouping by year and counting the number of lemmas in each year (not sure how to code this one up)
# sum the repetitions of words by year
sum_repeat_b <- aggregate(df_b[, 4], list(df_b$year), sum)
sum_repeat_b <- dplyr::rename(sum_repeat_b, "year"="Group.1")
sum_repeat_b <- dplyr::rename(sum_repeat_b, "sum_repeat_b"="x")
show(sum_repeat_b)
AI: Would the following work for you? The following code should count the number of unique values in the lemma column within each group based on the year column.
library(dplyr)
library(tidyr)
df %>%
group_by(year) %>%
summarise(count = n_distinct(lemma)) |
H: How to improve the learning rate of an MLP for regression when tanh is used with the Adam solver as an activation function?
I'm trying to use an MLP to approximate a smooth function f : R^3 -> R, that takes a point in space as an argument, and returns a scalar value.
The MLP architecture has a 3-dimensional (for 3 point coordinates) input layer, N hidden layers and a single linear scalar output layer, since the output should be the function value:
x x x
x x x x
x x x ... x x
x x x x
x x x
I'm using the tanh activation function because I want the model (MLP) to be continuously differentiable.
I'm playing with different hidden layer architectures, using the Adam solver, and I get this behavior for the MSE loss
The maximal validation error that I get with this mean MSE loss is 99.982942% - is this generally considered accurate for regression?
For the network with hidden layers (16, 16, 16, 16, 16), the error stagnates but then drops and oscillates. I suspect the oscillation is due to the diminishing gradients when using the tanh activation function for learning, is this true?
How to set/improve the learning rate, are there techniques that prevent oscillations when the solver (Adam, SGD, ...) approaches the optimum?
AI: If your goal is to find the optimum value of hyperparameters (whether it be the learning rate, the number of layers, the activation function, etc.) you should look into Keras Tuner.
The reason why the learning curve is oscillating is not clear to me without seeing your code/data. But if you want to set an optimum value for the learning rate, then look into Keras Tuner, provided you ARE using Keras + TensorFlow.
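A hedged sketch of what that could look like with Keras Tuner; the layer sizes, search ranges and tuner settings below are placeholder choices, and x_train/x_val stand for your own data:

import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    lr = hp.Float("learning_rate", 1e-4, 1e-1, sampling="log")
    model = keras.Sequential([
        keras.layers.Dense(16, activation="tanh", input_shape=(3,)),
        keras.layers.Dense(16, activation="tanh"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=20)
tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=50)
best_hp = tuner.get_best_hyperparameters(1)[0]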
If you are using PyTorch as the framework then you could use any of the following HP tuning methods/packages:
1.)Ray Tune
2.) Optuna
3.) Auto Pytorch
4.) BoTorch
There are many more packages. You could do a google search.
Cheers! |
H: How to control a decision tree?
This is my R script for a decision tree:
library(caret)
library(rpart.plot)
library(plyr)
library(dplyr)
library(rpart)
data("iris")
names(iris) = tolower(names(iris))
table(iris$species)
suppressMessages(library(caret))
index = createDataPartition(y=iris$species, p=0.7, list=FALSE)
train = iris[index,]
test = iris[-index,]
trainctrl <- trainControl(method = "cv", number = 5, verboseIter = FALSE)
dt.model <- train(species~., data=train, method = "rpart",
tuneLength = 10,
preProcess = c("center", "scale"),
trControl = trainctrl,
metric="Kappa")
dt.predict <-predict(dt.model, test)
confusionMatrix(dt.predict, test$species)
rpart.plot(dt.model$finalModel)
varImp(dt.model)
my feature importance are:
> varImp(dt.model)
rpart variable importance
Overall
petal.width 100.00
petal.length 96.95
sepal.length 45.08
sepal.width 0.00
Is there a way to make the tree rely less on petal.width? For example, I want my tree to use petal.length and sepal.length more than petal.width. Is it possible?
My problem is that in the dataset I am using the decision tree on, one variable x is used more than the others. This x variable, however, affects the value of another variable y, which is the one that actually characterizes my classes. Using an example with Iris: it's like saying that petal.length is proportionally related to petal.width, i.e.:
petal.width petal.length class
1 4 Virginica
1 6 Setosa
2 6 Virginica
2 7 Setosa
my model discriminates between the classes based more on "petal.width" than on "petal.length". For this reason I would like to give different weights to the features.
AI: In the comments you wrote:
I can't exclude it because it is related on an other feature
If you mean related as in a function of two other features, you could add columns to explicitly show those relations, then delete the original column. E.g. add a column called petal.width.plus.length and then remove petal.width.
Taking a step back, I think your problem could be rephrased as: decision trees end up in a local minimum and so fail to find the best solution? If so, consider another algorithm, e.g. random forest, GBMs (e.g. XGBoost) or even deep learning.
If other algorithms also don't do what you expect, you should also consider that the data is not saying what you think it is. Maybe you need to collect more data, or oversample some rows, or look for bugs (e.g. some data has turned into NAs).
Finally (and I suppose this is the answer to the question you are asking) you can manually partition the data on the field you believe is most important, and then build a decision tree for each partition. As you need to do that each time you want to do predict you definitely want to write your own wrapper class to store the partitions. |
H: Beginning my data science journey
I am a Master's student in physics. Lately I have been intrigued by the field of data science. I have beginner level knowledge of python, undergraduate level knowledge of mathematics and master's level knowledge of physics. I now want to learn data science. I scoured the internet but could not get a good answer. I am willing to give around 1 to 1.5 hours daily. What steps should I take to become proficient in data science i.e what books should I read, what courses should I do, etc.?
AI: I too have a masters degree in physics so maybe you can relate!
The first thing to do would be to get your fundamentals in python strong. Pick up a python beginners tutorial from Youtube and learn all the fundamentals. This should take you less than a week.
A good starting point for all newbies is Kaggle. It is a platform for budding and experienced data scientists where you can start learning data science from scratch if you are a newbie (there are Kaggle tutorials for this purpose) or you can take part in Kaggle competitions where you can gauge where you stand against other people.
Another great option is Youtube tutorials (as mentioned above). There are literally hundreds of tutorials on virtually every topic you could imagine in data science/data analysis. Also they will provide you with proper guidelines and best practices to follow in data science.
One channel I would recommend following is Krish Naik (I personally follow his channel too). He has complete tutorials on virtually almost all topics in data science. His channel is a good place to build your fundamentals in data science. Basically the strategy to follow is (and he recommends the same strategy!) follow his tutorials to get a good grasp on basics and once you get them, you can move on to advance stuff. I personally followed this same strategy and trust me it helps a lot!
Another channel you could follow is Codebasics. He has some good tutorials too!
I did not read any books because I am much more comfortable learning from videos where each and every small and silly things are explained. But it is a matter of personal preference. If you learn easily from books then go ahead.
There is plenty of material on the internet for data science be it articles, videos or competitions. Just select one or two platform and start without wasting too much time on gathering the materials or choosing which platform to start!
Cheers! |
H: Why do I get different results at inference time even with fixed seed?
I am a complete beginner in deep learning and am playing with a voice cloning project. I trained my dataset and used the trained model to synthesize some sentences, and was surprised to get a very different output each time I ran the synthesis (outputs ranging from very good quality to very poor, with unintelligible content).
I understood that this was due to the initial state of the model being set up randomly via a random seed, but in the project I am using, the seed is fixed to 1234 and used to initialize the random generators.
Can it be that with the same initial seed the outputs differ at inference time for the same input ? Does it have to do with a bad dataset ? What are the reasons for that ?
Thank you
AI: In PyTorch a typical gotcha that leads to this behavior is forgetting to set the model in evaluation mode when doing inference. You can do this by invoking .eval() on the model.
Evaluation mode changes the behavior of some stochastic elements that can lead to non-deterministic results, like batch normalization and dropout.
Apart from that, unless you have explicitly stochastic elements in your model (e.g. explicit sampling from distributions), you should get deterministic results. |
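For reference, the minimal PyTorch pattern for inference (model and input_batch being your own objects) is:

import torch

model.eval()                  # switch off dropout, use running batch-norm statistics
with torch.no_grad():         # no gradient tracking needed at inference time
    output = model(input_batch)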
H: Persistence and stationarity together
I am trying to analyse a time series. I want to get only quantitative results (so, I'm excluding things like "looking at this plot we can note..." or "as you can see in the chart ...").
In my job, I analyse stationarity and persistence. First, I run ADF test and get "stationary" or "non-stationary" as results. Then, I need to work on persistence. To do so, I use ACF.
My question is: suppose I get a "non-stationary" time series. Is it right to run the ACF on it (without differencing)? I would like to comment on stationarity and persistence without having to difference the series (so, just run tests on the original data and get "answers" like "strong positive persistence", "weak negative persistence", ...).
Thanks to who will even just read my question
AI: Assuming by persistence you mean degree of autocorrelation.
The ACF is simply a function that can be derived for any stochastic process, whether stationary or not. But the estimation of the ACF for a non-stationary process, which is what you are doing, is the problem.
Generally, ACF is a function of both time, $t$ and lag, $h$:
$$\gamma(t,t+h) \equiv Cov[X_t,X_{t+h}]$$
For stationary processes, this value is the same for all $t$ and therefore the ACF is estimated only as a function of the lag $h$. If your process is non-stationary, then you cannot estimate the ACF from a given time series (a single sample of the stochastic process), as you will only have one value for each time point.
On the question of differencing, understand that it works for unit-root processes. Non-stationarity can be due to other reasons as well (such as a time trend, seasonality, etc.), which can even invalidate your ADF results. Assuming you are using the ADF test correctly and have identified a unit root (not just non-stationarity), then taking the difference, testing for a unit root again and then estimating the ACF would be the right way to go.
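A minimal statsmodels sketch of that workflow (series being your pandas Series; the 0.05 threshold is just a common convention):

from statsmodels.tsa.stattools import adfuller, acf

pvalue = adfuller(series)[1]
if pvalue > 0.05:                    # cannot reject a unit root
    series = series.diff().dropna()  # difference once and re-test in practice
autocorr = acf(series, nlags=20)     # persistence of the (now stationary) series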
H: How to improve language model ex: BERT on unseen text in training?
I am using a pre-trained language model for binary classification. I fine-tune the model by training on data from my downstream task. The results are good, almost 98% F-measure.
However, when I remove specific similar sentences from the training data and add them to my test data, the classifier fails to predict the class of those sentences. For example, in the sentiment analysis task
"I love the movie more specifically the acting was great"
I removed from training all sentences containing the words " more specifically" and surprisingly in the test set they were all misclassified, so the precision decreased by a huge amount.
Any ideas on how I can further fine-tune/improve my model to work better on text unseen during training, to avoid the problem I described above? (Of course, without feeding the model sentences containing the words "more specifically".)
Note: I observed the same performance regardless of the language model in use (BERT, RoBERTa etc).
AI: In order to make your model more robust to different wordings, you may try with data augmentation techniques, that is, creating variations of your sentences and adding them to the training set with the same label as the original sentence.
There are frameworks like TextAttack that offer several text augmentation techniques. Another option is using back-translation (i.e. translating your sentence into a second language and then translating that again into English), either locally with publicly available machine translation models or via some API like google translate.
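A hedged sketch of back-translation with the transformers library; the Helsinki-NLP model names are one possible choice among many, and any language pair would do:

from transformers import pipeline

en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

sentence = "I love the movie more specifically the acting was great"
french = en_fr(sentence)[0]["translation_text"]
augmented = fr_en(french)[0]["translation_text"]   # a paraphrase that keeps the same label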
Note that making fine-tuned language models resistant to this kind of (common) problems is an active area of research. For the latest advances, you can check this NeurIPS'21 article. |
H: Error: `raise ValueError( ValueError: Missing column provided to 'parse_dates': 'Date'
I am using a .csv with two columns. The first has dates and the second has temperatures. I would like to plot it with dates on the x-axis and temperatures on the y-axis.
I used this command:
dataset = pandas.read_csv('/home/UbuntuUser/Desktop/mesurements.csv',
usecols=[1], engine='python', skipfooter=3, index_col=['Date'],
parse_dates=['Date'])
but I got the error:
Error: raise ValueError(
ValueError: Missing column provided to 'parse_dates': 'Date'
Any ideas why? From searching, I found this suggestion, that does not help me.
Update:
Part of the code is from here:
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft, ifft
import pandas as pd
# Import csv file
df = pd.read_csv('rsam_2016-17_fft_test.csv', index_col=['DateTime'], parse_dates=['DateTime'])
print(df.head())
#plot data
plt.figure(figsize=(12,4))
df.plot(linestyle = '', marker = '*', color='r')
plt.savefig('rsam_2016_2017_snippetforfft.jpg')
plt.show()
Source
AI: The problem is that usecols=[1] tells pandas to keep only the second column (the temperatures), so the Date column is not among the parsed columns and parse_dates cannot find any column named Date. Drop usecols so that both columns are read, and set the index afterwards:
Try this:
dataset = pandas.read_csv('/home/UbuntuUser/Desktop/mesurements.csv',
engine='python', skipfooter=3,
parse_dates=['Date'])
dataset.set_index('Date', inplace = True) |
H: Calculating statistical ranks between datasets with unpaired observations
The problem is the following:
I have multiple datasets for which I want to calculate a ranking for each. All observations contained in the datasets can be arbitrarily permuted, so they are unpaired, to speak in the words of statisticians.
Example datasets are:
dataset1 = [0.6487500071525574, 0.6499999761581421, 0.6412500143051147, 0.6662499904632568, 0.6225000023841858, 0.6324999928474426, 0.637499988079071, 0.6287500262260437, 0.6412500143051147, 0.6212499737739563]
dataset2 = [0.6075000166893005, 0.6287500262260437, 0.6312500238418579, 0.6162499785423279, 0.6012499928474426, 0.6150000095367432, 0.6387500166893005, 0.6200000047683716, 0.5950000286102295, 0.5849999785423279]
dataset3 =[0.6237499713897705, 0.612500011920929, 0.6075000166893005, 0.6162499785423279, 0.6187499761581421, 0.6287500262260437, 0.6200000047683716, 0.6237499713897705, 0.5824999809265137, 0.5787500143051147]
I understand that for datasets with paired observations, one would simply rank the observations column-wise (i.e., across the datasets at each position) and then average the ranks that each dataset's observations receive. Example:
ranks_dataset2 = [3, 2, 2, 2.5, 3, 3, 1, 3, 2, 3]
=> Avg.Rank: 2.45
But how would I do this for unpaired observations?
AI: I think I figured out a nice way to rank the datasets:
The rank r(D) of a dataset D is obtained as follows: for each observation i in D, count its number of wins W_i against the complete set of observations outside D (there are N - |D| of them), average these win counts over the |D| observations in D, normalize by N - |D|, multiply by the number of remaining datasets N_D - 1, and subtract the result from the total number of datasets N_D:

$$r(D) = N_D - \frac{\frac{1}{|D|}\sum_{i \in D} W_i}{N - |D|}\,(N_D - 1)$$

So if every observation in D wins against all N - |D| outside observations, the rank of D is 1; if no observation has any wins, the rank of D is simply N_D.
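A small Python sketch of my reading of this procedure (the function and variable names are mine), applied to the three example datasets above:

def dataset_ranks(datasets):
    n_d = len(datasets)
    ranks = []
    for d in datasets:
        # all observations outside the current dataset (N - |D| of them)
        others = [x for other in datasets if other is not d for x in other]
        avg_wins = sum(sum(v > x for x in others) for v in d) / len(d)
        ranks.append(n_d - (avg_wins / len(others)) * (n_d - 1))
    return ranks

print(dataset_ranks([dataset1, dataset2, dataset3]))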
H: Applied and view jobs ratio
I have the following data set where the column "kind" can be V (view) or A (apply). Given a particular job id, how can I find how many applicants applied (A) to that job and how many applicants viewed (V) it? So I want a table with one row per job and two columns, one labelled V and the other labelled A, holding the counts.
I am working in a Jupyter notebook with Python and pandas; if someone can get me started or show me similar code it would be really helpful.
event_datetime resume_id job_id event_platform kind Time Date
0 2021-05-01 08:42:05 158655 38820886 IOS_APP V 08:42:05 2021-05-01
1 2021-05-01 08:42:05 158655 38901970 IOS_APP V 08:42:05 2021-05-01
2 2021-05-01 08:42:05 158655 38919645 IOS_APP V 08:42:05 2021-05-01
3 2021-05-01 08:42:05 158655 38928402 IOS_APP V 08:42:05 2021-05-01
4 2021-05-01 08:42:05 158655 38847632 IOS_APP V 08:42:05 2021-05-01
5 2021-05-01 08:42:05 158655 38892709 IOS_APP V 08:42:05 2021-05-01
6 2021-05-01 08:42:05 158655 38987246 IOS_APP V 08:42:05 2021-05-01
7 2021-05-01 08:42:05 158655 38983852 IOS_APP V 08:42:05 2021-05-01
8 2021-05-01 08:42:05 158655 38970661 IOS_APP V 08:42:05 2021-05-01
9 2021-05-01 08:42:05 158655 38982545 IOS_APP V 08:42:05 2021-05-01
10 2021-05-01 08:42:05 158655 38815059 IOS_APP V 08:42:05 2021-05-01
11 2021-05-01 08:42:05 158655 38939806 IOS_APP V 08:42:05 2021-05-01
12 2021-05-01 08:42:05 158655 38957174 IOS_APP V 08:42:05 2021-05-01
13 2021-05-01 08:42:05 158655 39002021 IOS_APP V 08:42:05 2021-05-01
14 2021-05-01 08:42:05 158655 38396095 IOS_APP V 08:42:05 2021-05-01
15 2021-05-01 08:42:05 158655 38969243 IOS_APP V 08:42:05 2021-05-01
16 2021-05-01 08:42:05 158655 38880479 IOS_APP V 08:42:05 2021-05-01
17 2021-05-01 08:42:05 158655 38973440 IOS_APP V 08:42:05 2021-05-01
18 2021-05-01 08:42:05 158655 38908077 IOS_APP V 08:42:05 2021-05-01
19 2021-05-01 08:42:05 158655 38899054 IOS_APP V 08:42:05 2021-05-01
AI: This should be possible by first aggregating the data to get the number of records for each job identifier and kind of action, and then pivoting the rows to columns:
(
    df
    .groupby(["job_id", "kind"])["resume_id"]
    .count()
    .reset_index()
    .pivot(index="job_id", columns="kind", values="resume_id")
    .fillna(0)
)
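A shorter alternative that gives the same counts (one column per kind) is pd.crosstab:

import pandas as pd

counts = pd.crosstab(df["job_id"], df["kind"])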
H: How can I change shape of the input image array as per my trained TensorFlow model input?
I have a Tensorflow model weight file that I am using to make the prediction on test images. These test images are in NumPy array format and the shapes of the images are (720, 1280, 3).
I am getting the following error while making the prediction-
ValueError: Input 0 is incompatible with layer model: expected shape=(None, 416, 416, 3), found shape=(1, 720, 1280, 3)
When I tried to change the shape like below-
image_np.shape=(416,416,3)
It is giving me the following error-
ValueError: cannot reshape array of size 2764800 into shape (416,416,3)
I am using Tensorflow 2.x with Python 3.7.
Please help me to resolve this issue.
AI: The issue is that the model expects images of 416 by 416 pixels, whereas you are using larger images. Simply using reshape doesn't work, since the overall number of pixels is still too high for a 416x416 image (720 * 1280 > 416 * 416). Therefore you have to resize your image to 416x416 before passing it to your model. You can either resize directly to 416x416 (which gives an image with a different aspect ratio) or resize while retaining the aspect ratio and then pad the image to get to 416x416. You could use the resize function from the cv2 library to resize an image.
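A minimal sketch of that preprocessing; note that cv2.resize takes (width, height), and the model also expects a leading batch dimension (model stands for your loaded model):

import cv2
import numpy as np

resized = cv2.resize(image_np, (416, 416))   # shape: (416, 416, 3)
batch = np.expand_dims(resized, axis=0)      # shape: (1, 416, 416, 3)
prediction = model.predict(batch)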
H: Can I leave natural outliers in a dataset in training?
Can I leave unedited natural outliers in a dataset (outliers that have not appeared just because of mistyping or mistakes in the data)? Or should I remove them or change them?
AI: Yes you should keep the natural outliers in a dataset. They represent an extreme end of the data you have and contain useful info. They also help you with anomaly detection if you wish.
But it also depends on the type of problem at hand. For example, in the case of the Titanic dataset, where we are classifying who survived and who didn't, it is OK to remove the outliers, as removing them won't be detrimental to the result. The passengers are already dead, so removing the outliers won't lead to any serious loss.
On the other hand, in the case of classifying whether a patient has a tumor or not, removing the outliers would be a bad idea, as it would lead to misclassification and ultimately to an incorrect diagnosis/treatment.
If you are certain that these outliers are because of mistyping then you can safely remove them, but only if you are certain that they are because of mistyping. Else for a real world problem, it is always wise to keep the outliers.
Cheers! |
H: Print histogram for each of the columns in my table with one single command
I would like to draw a histogram for each of the columns in my data.frame without having to write the names of all of them, similar to what I did to inspect their unique values with:
sapply(data, unique)
So I tried
sapply(data, hist)
This command draws the histogram correctly, but the title for each of them is "Histogram of X[[i]]":
How can I draw the histograms but with the correct title?
AI: I finally got it with the following code:
for (col in 2:ncol(data)) {
hist(data[,col], main=colnames(data)[col], xlab=colnames(data)[col])
} |
H: Should I merge multiple target bins into one for better results?
I have a multiclass classification task where the target has 11 different classes. The target to classify is the Length of Stay in a hospital, and the target classes are different bins, for example 1-10, 11-20, 21-30 and so on. So far I have tried a neural net for my task, but I am not getting good performance.
Now I read a statement on Kaggle that decreasing the number of bins of the target, i.e. merging 2 or more of the 11 classes into 1 so that I have fewer classes, might help improve the model performance; basically, the more classes there are, the worse the model performs. My question is: will decreasing the number of classes improve my model performance? Is it advisable to do so on real-world datasets?
AI: In general, yes: decreasing the number of classes mechanically increases the probability that the classifier finds the right one. Even in the worst case scenario where the class is picked randomly, the probability of every remaining class increases when there's one less class. Another way to look at it: all other things equal, the number of errors can only decrease when two classes are merged.
Whether it's advisable is a different matter. It's completely artificial: the performance may be higher but only because the problem is made easier. So it might look better on paper, but it's not in any way a real improvement. Additionally it would rarely cause a large performance increase unless a large number of classes are removed. Normally the decision should be made only because it makes sense for the task.
Finally, I think that in this case this is not the right question: this task should probably be designed as a regression problem, not a classification one. The target ranges are ordinal, so a regression model should be able to make better use of the information with a continuous target variable. For example, each range could be represented by a single central value, e.g. 5 for 1-10, 15 for 11-20, etc. If needed, the predicted value can be mapped back to a range at the end of the process.
Also my usual advice is to start with a simple model first, for instance decision tree or SVM regression (SVR). |
H: Clustering text data based on sentiment?
I am scraping reviews off Amazon with the intent to perform sentiment analysis and classify them into positive, negative and neutral. The data I get will be text and unlabeled.
My approach to this problem would be as follows:
1.) Label the data using clustering algorithms like DBSCAN, HDBSCAN or KMeans. The number of clusters would obviously be 3.
2.) Train a Classification algorithm on the labelled data.
Now I have never performed clustering on text data but I am familiar with the basics of clustering. So my question is:
1. Is my approach correct?
2. Any articles/blogs/tutorials I can follow for text based clustering since I am kinda new to this?
PS: I am familiar with both NLP and Clustering algo's but I have never performed Clustering on text data.
AI: In my opinion there are two main problems with your approach:
The clustering is extremely unlikely to correspond to sentiment, unless the features that you use for clustering are specifically engineered to represent sentiment. In general, text clustering tends to group documents by common words, i.e. by similar topic. This might lead to different categories of reviews by type of product, for example.
The second and I think most important issue is that without any labelled data, you can't evaluate the system. A common mistake would be to use the classes obtained from the clustering in order to evaluate the classification model: this doesn't evaluate the full task of sentiment analysis since there's no way to know how well the clustering represents sentiment. The proper method is to manually annotate a random subset of documents for the purpose of evaluation.
Also in general the second part with the classification model is not needed because the unsupervised clustering model can directly be applied to new instances. |
H: Why not rule-based semantic role labelling?
I have recently found some interest in automatic semantic role labelling. Most introductory texts (e.g. Jurafsky and Martin, 2008) present approaches based on supervised machine learning, often using FrameNet (Baker et al. 1998) and PropBank (Kingsbury & Palmer, 2002). Intuitively however, I would imagine that the same problem could be tackled with a grammar-based parser.
Why is this not the case? Or rather, why would these supervised solutions be preferred?
Thanks in advance.
References
Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall.
Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley FrameNet Project. 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, 86–90. https://doi.org/10.3115/980845.980860
Kingsbury, P., & Palmer, M. (2002). From treebank to propbank. In Language Resources and Evaluation.
AI: There is a subtle but important difference between "semantic role" and "grammatical role" (I think there's a specific term for the latter but I forgot it).
Grammatical role is strictly about syntax. For example in the sentence "John sent a letter to Mary":
"John" is subject
"a letter" is object
"Mary" is an indirect object
This is what a syntactic parser (typically dependency parser) would normally identify.
By contrast, semantic role is mostly about the semantics. We could describe the semantics of the above sentence like this:
The predicate is "to send"
This predicate can have 3 arguments:
the sender is "John"
the object is "a letter"
the receiver is "Mary".
Typically one needs a specific resource like PropBank in order to know what are the expected and optional arguments specific to a predicate.
So far the difference might look thin, but it will become clearer with this new example: "A letter was sent to Mary by John". The grammatical roles become:
"a letter" is subject
"Mary" is indirect object
"by John" is some kind of dependent clause (I forgot what this is called but it's definitely not subject).
Whereas the semantic roles are exactly the same as previously, because the semantics didn't change.
So in general semantic role labelling is harder than simple identification of syntactic roles. This is why specific resources and methods are developed for it.
Disclaimer: my knowledge on the topic is from 10 years ago, and I'm not aware of the recent developments in this domain. I'm under the impression that DL methods bypass this step for many applications by producing end-to-end systems, but I'm not sure at all about this. |
H: Change line type in gpplot in R
Context: I have two variables under emotion_dict that I am graphing in the same line graph.
Problem: However when I change the linetype in geom_line, it changes the appearance of both variables.
Question: Does anyone know how to alter the code below to keep the line types separate and display a key? I have done this before only with the ggline function, but that is not appropriate here.
See attached file and code below for more context:
posneg_plot2 <- d_posneg %>%
ggplot(mapping = aes(x=year, y=rel_freq, group=emotion_dict, colour=emotion_dict)) +
geom_line(alpha = 1, size=0.7, linetype=2, colour="black") +
theme_light() +
labs(x="Year", y="Positive and Negative Sentiment (%)") +
scale_x_continuous(breaks=seq(1970,2017,2)) +
theme(axis.text.x=element_text(angle=45, hjust=1)) +
scale_color_hue(labels = c("Negative Sentiment", "Positive Sentiment")) +
labs(colour = "LIWC Dictionaries") + theme(legend.position = "bottom") +
theme(text=element_text(family="Times New Roman", size=17))
#stat_cor(, method = "pearson", p.accuracy = 0.001, r.accuracy = 00.01, size = 4.5, colour = "black")
EDIT: Answer below and now attempting to change the labels within the group accordingly:
AI: You should be able to simply specify the field to be used for the linetype for the linetype argument within an aes mapping as follows:
posneg_plot2 <- d_posneg %>%
ggplot(mapping = aes(x=year, y=rel_freq, group=emotion_dict, colour=emotion_dict)) +
geom_line(aes(linetype=emotion_dict), alpha = 1, size=0.7, colour="black") |
H: How to plot the sum of something over an interval of time?
Say I have a dataframe with date as index. I would like to plot a line plot of some values in a column A over a given time frame, say for the month of August. In column A I have several entries per date, for example some for 02/08/2020 and four different values on 03/08/2020, and so I would like to plot the sum of those values.
Is there an easy one line that would do it?
AI: You could aggregate and sum values in a single line by pandas groupby() function. Here's an example:
Making a simple dummy dataset:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
values = np.random.randint(1, high=100, size=50)
df = pd.DataFrame(data=values, columns=['values'])
Adding some dates as indices:
def get_days():
return sorted(np.random.choice(range(1, 31), size=25, replace=True))
year = '2021'
months = ['08', '09']
dates = []
for month in months:
days = get_days()
for day in days:
dates.append(f'{day}/{month}/{year}')
df.index = dates
Dataframe looks like the following:
Aggregate and sum the data:
answers = df.groupby(df.index, sort=False).sum() # since index already sorted
Printing answers:
Simply plot it
dates = answers.index
sums = answers.values
fig, ax = plt.subplots()
ax.plot(range(len(dates)), sums, 'ro-')
ax.set_xticks(range(0, len(dates), 3)) # need to adjust to ensure plot is readable
ax.set_xticklabels(dates[::3], rotation=90)
ax.set_xlabel("Dates")
ax.set_ylabel("Sums")
ax.set_title("Sum Over Dates")
plt.show();
The plot to the above code looks like the following:
You can now control how you want to aggregate in several ways. You can add temporary columns such as day, month and year and aggregate them hierarchically (and in a single line) if that is what you desire, as shown here.
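A hedged sketch of that idea, assuming the 'day/month/year' string index format used above:
df['day'] = df.index.str.split('/').str[0]
df['month'] = df.index.str.split('/').str[1]
# hierarchical aggregation: sums per day within each month, in one line
monthly_daily_sums = df.groupby(['month', 'day'])['values'].sum()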
Plotting it in a single line will not only be difficult, but less readable too, so I'd suggest avoiding that. |
H: Choose ROC/AUC vs. precision/recall curve?
I am trying to get a clear understanding on various classification metrics, including knowing when to choose ROC/AUC as opposed to opting for the Precision/Recall curve.
I am reading Aurélien Géron's Hands-On Machine Learning with Scikit-Learn and TensorFlow book (page 92), where the following is stated:
Since the ROC curve is so similar to the precision/recall (or PR) curve, you may wonder how to decide which one to use. As a rule of thumb, you should prefer the PR curve whenever the positive class is rare or when you care more about the false positives than the false negatives.
The book demonstrates the ROC and PR curve for an imbalanced binary classification problem where target class is roughly 90% zero and 10% one. It shows the PR curve and the ROC curve where the above referenced ROC bias against imbalanced datasets is clearly reflected: ROC has an overly optimistic view of model performance.
Yet, I don't understand it thoroughly, why exactly is
the ROC curve overly optimistic for imbalanced binary classification problems, and
the PR curve preferred when we care more about false positives than false negatives.
Generally speaking, I understand why precision and recall are useful for classification problems with an imbalance. For such problems, accuracy is highly biased. From precision we can infer the presence of false positives (the more FPs there are, the lower the precision) and similarly, from recall we can infer the presence of false negatives (the more FNs there are, the lower the recall).
However, when looking at the axes of the ROC curve, there true positive rate (TPR, recall) is plotted against false positive rate. The higher the x-axis (FPR) value, the more FPs there are. The lower the y-axis (TPR) value, the more FNs there are. This seems to be nearly analogous to the precision-recall curve though, where the lower the y-axis (precision) value, the more FPs there are and the lower the y-axis (recall) value, the more FNs there are. In other words, both ROC and PR curve seem to provide information on both FPs and FNs.
AI: Although this is not a demonstration by an experimental exercise (we can actually try it out), you can get an intuitive understanding as follows: PR-AUC uses Precision and Recall as indicators, whereas ROC-AUC uses Recall and FPR (False Positive Rate), which makes use of the TN (True Negatives) value. For datasets with a high imbalance, TN is likely to be high, and this contributes to the overly optimistic ROC-AUC.
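For reference, the standard definitions are $Precision = \frac{TP}{TP+FP}$, $Recall = TPR = \frac{TP}{TP+FN}$ and $FPR = \frac{FP}{FP+TN}$; only the FPR denominator contains TN.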
So, TN (a value less challenging for imbalanced datasets) is used in ROC-AUC, but not considered in PR-AUC.
Some references:
Imbalanced Learning: Foundations, Algorithms, and Applications, 2013
Learning from Imbalanced Data Sets, 2018
A Survey of Predictive Modelling under Imbalanced Distributions, 2015
https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-imbalanced-classification/ |
H: Document Content
I have a set of .pdf/.docx documents with content. I need to search for the most suitable document according to a particular sentence. For instance:
Sentence: "Security in the work environment"
The system should return the most appropriate document which contains at least the content expressed in the sentence. It should be a sort of search bar with advanced capabilities; I have a constraint: I can not have an apriori classification since the number of documents and the related category could vary on time.
How should I address this kind of task?
AI: If you are asking how to integrate this, I would leverage existing search technologies, such as storing documents in a MongoDB database or using Solr indices, just to name a few.
If you are asking about the implementation details, take a look at topic modeling, tf-idf, cosine similarity, synonym replacement and k-nearest neighbors to get you started. A lot of these techniques can be used either at index time on incoming documents or at query time, doing minimal work to reduce the search space even if your documents and queries change dynamically. You'll probably want to allocate a test set of ranked documents expected to be returned by sample queries so you can benchmark your improvements. |
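A minimal sketch of the tf-idf + cosine similarity part, assuming the documents have already been extracted to plain text (all names here are illustrative):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
documents = ["...text of document 1...", "...text of document 2..."]   # extracted from the .pdf/.docx files
vectorizer = TfidfVectorizer(stop_words='english')
doc_matrix = vectorizer.fit_transform(documents)
query = "Security in the work environment"
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
best_doc_index = scores.argmax()   # index of the most suitable document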
H: Massive difference in accuracy of KNN depending on random_state
pardon the noob question but I am baffled by the following behavior. My model has MASSIVELY different results based on the random seed. I want to train a KNN classifier on the famous Kaggle Titanic problem where we attempt to predict survival or not. I focus only on the "Sex" feature, to make things easier.
The problem is that by changing the random seed, the accuracy changes dramatically. For example, one random seed gives me a score of 0.78 and another random seed gives me a score of 0.17, and different random seeds give more or less everything in between. How can this significant change in score be explained? Also, why does this change in accuracy become less significant when n_neighbors = 2 or above? Thanks in advance! Here is the code in question.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
df_sex = df[["Sex"]]
y = df["Survived"]
def training(state):
X_train, X_test, y_train, y_test = train_test_split(df_sex, y, random_state=state )
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
print("Train set score: {:.2f}".format(knn.score(X_train, y_train)))
training(0)
training(3)
training(10)
training(21)
gives output
Test set score: 0.78
Train set score: 0.79
Test set score: 0.22
Train set score: 0.21
Test set score: 0.17
Train set score: 0.23
Test set score: 0.77
Train set score: 0.79
AI: $k$NN is the simplest supervised learning algorithm: given an input instance $x$, it finds the $k$ instances most similar to $x$ in the training set, and then predicts for $x$ the majority class among these $k$ instances.
You use a single boolean feature to represent an instance, which means that an instance can only be true or false. The concept of "most similar" has very little sense in this case: a true instance is "equally most similar" to all the training instances which also have true for gender. This means that for every prediction $k$NN has to pick the $k$ most similar instances randomly among all the instances with the same gender value, and consequently the target class is practically random as well... And this is why the performance is almost entirely determined by the random state.
This would be less likely to happen when using multiple and/or more complex features, since strict equality of all the features would not happen in general. |
H: How to feed a Knowledge Base into Language Models?
I’m a CS undergrad trying to make my way into NLP Research. For some time, I have been wanting to incorporate "everyday commonsense reasoning" within the existing state-of-the-art Language Models; i.e. to make their generated output more reasonable and in coherence with our practical world. Although there do exist some commonsense knowledge bases like ConceptNet (2018), ATOMIC (2019), OpenMind CommonSense (MIT), Cyc (1984), etc., they exist in form of knowledge graphs, ontology, and taxonomies.
My question is, how can I go about leveraging the power of these knowledge bases into current transformer language models like BERT and GPT-2? How can we fine-tune these models (or maybe train new ones from scratch) using these knowledge bases, such that they retain their language modeling capabilities but also get enhanced through a new commonsense understanding of our physical world?
If any better possibilities exist other than fine-tuning, I'm open to ideas.
AI: In my opinion this is a very difficult question, and it's not certain that this can be done.
Symbolic methods and statistical methods are hard to combine. In fact, statistical ML methods became mainstream because they could solve most problems better than symbolic methods. This is especially true in NLP: the multiple attempts at rule-based representations of languages (in the 80s and 90s) were not only expensive to build but also they never proved capable of covering the full diversity of natural language.
There have been various attempts at hybrid models in specific tasks, but to my knowledge none of these hybrid methods proved good enough compared to pure statistical methods. What can work however is to introduce knowledge represented by resources as some of the features used by a statistical model. In this case the model is not symbolic at all, but it uses information coming from symbolic resources.
also get enhanced through a new commonsense understanding of our physical world
Be careful not to assume that any of these models understands anything at all. Their result can be extremely convincing, but these are not strong AI. Natural Language understanding is far from achieved (and it may never be). You might be able to somehow use symbolic resources in order to enhance the output of a model, but making such a model perform some actual reasoning about what it's talking about is a whole other story (a sci-fi one, for now at least). |
H: Extracting the metadata form Json file making it columns
I have the following json data file which I have converted to pandas dataframe. The columns are as follows
Index(['id', 'title', 'abstract', 'content', 'metadata'], dtype='object')
I am particularly interested in the column 'metadata' an element of the column looks like
df_json.loc[78, 'metadata']
"{'classification': {'name': 'Manufacturing, Transport & Logistics'}, 'subClassification': {'name': 'Warehousing, Storage & Distribution'}, 'area': {'name': 'Southern Suburbs & Logan'}, 'location': {'name': 'Brisbane'}, 'suburb': {'name': 'Milton'}, 'workType': {'name': 'Casual/Vacation'}}"
So I want to create new columns extracting the information out of the 'metadata' column, for example location. I am not sure how to extract it and add it to the same dataframe as additional columns such as location, etc.
id title abstract content metadata clean_content
0 38915469 Recruitment Consultant We are looking for someone to focus purely on ... <HTML><p>Are you looking to join a thriving bu... {'standout': {'bullet1': 'Join a Sector that i... Are you looking to join a thriving business th...
1 38934839 Computers Salesperson - Coburg Passionate about exceptional customer service?... <HTML><p>· Casual hours as r... {'additionalSalaryText': 'Attractive Commissio... middotnbspnbspCasual hours as required transit...
2 38946054 Senior Developer | SA Readifarians are known for discovering the lat... <HTML><p>Readify helps organizations
AI: If the column contains actual dictionaries you can use the .str accessor to access the data in the dictionary by using the keys. For example, if you want to get the area from the dictionary you would use df_json["metadata"].str["area"].str["name"] to get the value 'Southern Suburbs & Logan'. |
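If, as the double quotes around the printed value suggest, the column actually holds strings rather than dictionaries, here is a hedged sketch that parses them first and then flattens everything into columns (names as in the question):
import ast
import pandas as pd
parsed = df_json['metadata'].apply(ast.literal_eval)      # turn each string back into a dict
meta_cols = pd.json_normalize(parsed.tolist())            # columns like 'location.name', 'area.name'
meta_cols.index = df_json.index
df_json = df_json.join(meta_cols)                         # metadata fields become separate columns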
H: should text augmentation take place before or after splitting the dataset?
I've a text dataset with ~20000 samples (which is not enough).
I used text augmentation to "invent" more samples so essentially I've multiplied each sample by 10 - ending up with ~200000 samples (each of the 10 is a different kind of augmentation method).
I did that for the whole dataset before splitting it to train and test datasets.
Should I've done it only for the training dataset?
UPDATE:
based on the answer given, I've a follow up question:
What should the order of operations should be?
I understand by now that augmentation should be done only on the train dataset, but what about tokenization and stemming?
Is the below the correct order?
splitting data set into 2 datasets: train and test
perform augmentation only on train dataset
stemming and tokenize both datasets
text encoding
create model
fit data on model
evaluate
I guess my question is regarding step #3.
Is it correctly placed in the above order?
AI: There are at least two reasons why the split should be made first:
In theory at least, there is a true distribution of the data for the target task. Any model should always be evaluated on the true distribution of the data, because the goal is to predict on this distribution. Since data augmentation modifies this distribution, it's as if the model is evaluated on a different task instead of the target task.
Augmentation techniques create artificial instances which are usually easier for a model to classify, since they follow the pattern used to generate them. If these easier instances are used in the test set, the performance is very likely to be overestimated. This is a case of data leakage: some information from the training set "leaks" in the test set.
Final remark: I think that augmentation techniques should be used with caution on text data. In general text is very difficult to simulate. |
H: The behavior of the cross validation error and training error in underfitting case is not clear
I currently study the "Machine Learning" course on Coursera.org by Andrew Ng, it comes to a topic that discusses the performance of learning algorithms under different conditions.
Here, we discuss the case when the algorithm underfits the training data (high bias).
I cannot understand why the cross-validation error (Jcv(theta)) decreases as we increase the training set size(m), and why it then flattens out. And I cannot understand why the training error (Jtrain(theta)) flattens out at some region, I understand why it increases as we increase m.
I asked about it in the coursera discussion forum and studied it twice, but it is still doesn't make sense to me.
Notice that :
The cross-validation error(Jcv(theta)) is a measure of how well the algorithm fits the cross validation set, the lower Jcv(theta) the better the algorithm fits the cross validation set.
The training error (Jtrain(theta)) is a measure of how well the algorithm fits the training set, the lower Jtrain(theta) the better the algorithm fits the training set.
m is the number of examples in the training set only.
AI: One possible reason is the algorithm has very small learning capacity.
When m is very small, the model memorizes the data hence the very small training error. As m increases, the model is unable to learn the properties of the training data so the error goes up.
For the validation data, when m is small the model fails to generalize from the memorized data. The error goes down as m goes up because the model is learning the more general properties of the data.
This theoretical diagram is confusing because in practice most people would never use a low capacity model on such small data, thus would never see learning curves like this. |
H: For a student who is a beginner in quantitative research and statistics, which is the better statistical tool to start: R or IBM SPSS? Why?
Currently, I am writing my research design. However, I am still indecisive on what statistical tool should I use for the data analysis. I tried looking up on the internet and there are disparate answers to my question.
I have noticed that R (Programming Language) and IBM Statistical Package for the Social Sciences are two of the recurring tools that are mentioned when it comes to this question. So, which is better? I need some insights so I can settle on an answer to this dilemma.
Thanks in advance!
AI: I would go for R over SPSS any day. Anything you can do with SPSS can be done with R; the inverse is not necessarily true. The advantage of using R over SPSS, in my opinion, is that R is a full-fledged programming language, whereas SPSS is more of a graphical user interface for doing statistics. In R you can never accidentally click the wrong option and still get (wrong) output. I guess it comes down to what your goals are. If you really want to become adept at data analysis, maybe also in your professional career, R might be the better option. However, it probably has a steeper learning curve than SPSS. So if you just want to finish your research and never look back, SPSS might be the go-to option for you. None of the above is set in stone, it is just the way I see it. Both are powerful tools for doing statistical analyses. Hope that helps. |
H: How could I estimate slope of lines on a scatter plot?
I have a list of coordinate pairs. To the human eye, they form lines with a constant slope:
This is how I generated that image above:
import numpy as np
np.random.seed(42)
slope = 1.2 # all lines have the same slope
offsets = np.arange(10) # we will have 10 lines, each with different y-intercept
xslist=[]
yslist=[]
for offset in offsets:
# each line will be described by a variable number of points:
size = np.random.randint(low=50,high=100)
# eachline starts from somewhere -5 and -2 and ends between 2 and 5
xs = np.random.uniform(low=np.random.uniform(-5,-2), high=np.random.uniform(2,5),size=size)
# add some random offset and some random noise
ys = slope * xs + offset + np.random.normal(loc=0, scale=0.1, size=1) + np.random.normal(loc=0, scale=0.01, size=size)
xslist.append(xs)
yslist.append(ys)
# bring all x and y points together to single arrays
xs = np.concatenate(xslist) # xs: array([-0.37261674, 0.58267626, -3.72592914 ...
ys = np.concatenate(yslist) # ys: array([-0.53638699, 0.61729781, -4.52132114,
# plot results
import matplotlib.pyplot as plt
plt.scatter(xs,ys)
I can generate lots of xs and ys. In my real world scenario, I won't know which point belongs to which line, so cannot simply separate the points to different groups and just apply least squares fitting to each group.
How could I, using machine learning or otherwise, build a function which takes xs and ys as input, and returns a slope estimate of the lines on an image like above?
Why simple least squares fitting doesn't seem to work
Let's generate new data where the failure of least squares fitting is more obvious. Let's have a slope of 2.4 and y-intercepts between 0 and a few hundred.
Data generation:
import numpy as np
np.random.seed(42)
slope = 2.4
offsets = np.arange(0,500,100)
xslist=[]
yslist=[]
for offset in offsets:
size = np.random.randint(low=50,high=100)
xs = np.random.uniform(low=np.random.uniform(-5,-2), high=np.random.uniform(2,5),size=size)
ys = slope * xs + offset + np.random.normal(loc=0, scale=0.1, size=1) + np.random.normal(loc=0, scale=0.01, size=size)
xslist.append(xs)
yslist.append(ys)
xs = np.concatenate(xslist)
ys = np.concatenate(yslist)
Least squares fitting of a line using np.polyfit():
a, b = np.polyfit(xs, ys, deg=1)
Note that I cannot fit to just one line, as I don't know which points belong to one line.
Plot results:
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
plt.scatter(xs,ys)
line_x = np.arange(-5,5,0.01)
line_y = a*line_x + b
plt.plot(line_x,line_y,c='r',linewidth=10)
plt.gca().set_aspect(1/8)
ie:
The obtained slope using least squares fitting (ie the slope of the red line) is very much different than the slope of the lines formed by black dots. (Note that the scale is different on the x and y axis.)
Printing both a (our slope estimate) and the real slope, slope:
print(a)
print(slope)
get:
4.295790412452058
2.4
This error is too much for my real world application.
Function to generate mock data
As requested in the comments, here is a function to generate data similar to the above examples:
def get_data(number_of_examples):
np.random.seed(42)
list_of_xs = []
list_of_ys = []
true_slopes = []
for _ in range(number_of_examples):
slope = np.random.uniform(low=-10, high=10)
offsets = np.arange(0,
np.random.randint(low=20, high=200),
np.random.randint(low=1, high=10))
xslist=[]
yslist=[]
for offset in offsets:
size = np.random.randint(low=np.random.randint(low=40, high=60),
high=np.random.randint(low=80, high=100))
xs = np.random.uniform(low=np.random.uniform(-5,-2),
high=np.random.uniform(2,5),size=size)
ys = slope * xs + offset + \
np.random.normal(loc=0, scale=0.1, size=1) + \
np.random.normal(loc=0, scale=0.01, size=size)
xslist.append(xs)
yslist.append(ys)
xs = np.concatenate(xslist)
ys = np.concatenate(yslist)
list_of_xs.append(xs)
list_of_ys.append(ys)
true_slopes.append(slope)
return list_of_xs, list_of_ys, true_slopes
Try it, get 10 examples:
list_of_xs, list_of_ys, true_slopes = data = get_data(10)
Plot results (the slope of the red line is what I am trying to predict using the coordinates of the blue dots):
for xs, ys, true_slope in zip(list_of_xs, list_of_ys, true_slopes):
plt.figure()
plt.scatter(xs, ys)
plt.plot(xs, xs*true_slope, c='r')
and so on.
AI: The procedure that you can use is the following. First cluster your data with gaussian mixture models. This method should also work with multiple lines with different slopes. It should be able to deal with intersections as points near an intersection can belong to both clusters and a wrong classification will not lead to huge differences in the results of the regression.
I will post the complete code.
# Your code for generating the data
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
slope = 2.4
offsets = np.arange(0,500,100)
xslist=[]
yslist=[]
for offset in offsets:
size = np.random.randint(low=50,high=100)
xs = np.random.uniform(low=np.random.uniform(-5,-2), high=np.random.uniform(2,5),size=size)
ys = slope * xs + offset + np.random.normal(loc=0, scale=0.1, size=1) + np.random.normal(loc=0, scale=0.01, size=size)
xslist.append(xs)
yslist.append(ys)
xs = np.concatenate(xslist)
ys = np.concatenate(yslist)
We will use your data points to generate multiple gaussian mixture models. We will fix the number of components by using the number of components with the minimal value of the Bayesian Information Criterion (BIC).
# Create multiple Gaussian Mixture models
from sklearn.mixture import GaussianMixture
X = np.vstack((xs, ys)).T
n_components = np.arange(1, 21)
models = [GaussianMixture(n, covariance_type='full', random_state=0).fit(X) for n in n_components]
# Get optimal number of components by using the index of the components with the minimal value for the Bayesian Information Criterion (BIC)
n_components_optimal = np.argmin(np.array([model.bic(X) for model in models])) + 1
Plot the results and see how well the clustering with the optimal number of clusters works.
# Code for plotting
gaussian_mixture_model_optimal = GaussianMixture(n_components_optimal, covariance_type='full', random_state=0).fit(X)
labels = gaussian_mixture_model_optimal.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis')
Now, use the clustered data and create subdataframes from them and fit your linear regressions.
import pandas as pd
df = pd.DataFrame({
"x": xs,
"y": ys,
"cluster": labels,
})
#
cluster_number = 1
X_sub = df.query('cluster == @cluster_number').values |
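A minimal continuation sketch, assuming each cluster corresponds to exactly one line: fit a least-squares line per cluster and combine the slope estimates, for example with the median.
slopes = []
for cluster_number in range(n_components_optimal):
    sub = df[df['cluster'] == cluster_number]
    a, b = np.polyfit(sub['x'], sub['y'], deg=1)   # least squares within a single cluster
    slopes.append(a)
slope_estimate = np.median(slopes)   # robust to an occasional badly clustered group
print(slope_estimate)                # should be close to the true slope of 2.4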
H: Could I use some elements of my target variable to predict it?
I'm trying to predict if a company will go bankrupt. I use a dataset from 2020, and I manually created my target variable from the status of the company, the status date and the status reason.
Could I also use these variables in my model, or is that totally forbidden because I built my target from them? (My opinion is that I cannot, but I'm curious to hear what the community will say.)
Thanks.
AI: If you would already have those data points before a company actually goes into bankruptcy, then you can use them in your model, since when predicting into the future you would have access to that data. However, if you would only know the data once the bankruptcy event happens (e.g. the date of bankruptcy), then you cannot use this variable in your model, since you would be leaking data (using data that is in the future and that the model would not have access to when it is actually deployed and used). |
H: Trouble understanding regression line learned by SGDRegressor
I am working on a demonstration notebook to better understand online (incremental) learning. I read in the sklearn documentation that the number of regression models that support online learning via the partial_fit() method is fairly limited: only SGDRegressor and PassiveAggressiveRegressor are available. Additionally, XGBoost also supports the same functionality via the xgb_model argument. For now, I chose SGDRegressor to experiment with.
I created a sample dataset (dataset generation code below). The dataset looks like this:
Even though this dataset is clearly not a good candidate for a linear regression model like SGDRegressor, my point with this snippet is merely to demonstrate how the learned parameters (coef_, intercept_) and regression line change as more and more data points are seen by the model.
My approach:
collecting the first 100 data points after sorting the data
training an initial model on those first 100 observations and retrieving the learned parameters
plotting the learned regression line
iteration: take N "new" observations, use partial_fit(), retrieve the updated parameters, and plot the updated regression line
The problem is, the learned parameters and the regression line don't seem correct at all after training on the first 100 observations. I tried tinkering with the max_iter and eta0 arguments of SGDRegressor() as I thought SGD merely couldn't converge to the optimal solution because the learning rate was too low and/or the maximum number of iterations was too small. However, this didn't seem to help.
Here are my plots:
My full code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
random_state = 1
# generating first section
x1, y1 = datasets.make_regression(n_samples=1000, n_features=1, noise=20, random_state=random_state)
x1 = np.interp(x1, (x1.min(), x1.max()), (0, 20))
y1 = np.interp(y1, (y1.min(), y1.max()), (100, 300))
# generating second section
x2, y2 = datasets.make_regression(n_samples=1000, n_features=1, noise=20, random_state=random_state)
x2 = np.interp(x2, (x2.min(), x2.max()), (15, 25))
y2 = np.interp(y2, (y2.min(), y2.max()), (275, 550))
# generating third section
x3, y3 = datasets.make_regression(n_samples=1000, n_features=1, noise=20, random_state=random_state)
x3 = np.interp(x3, (x3.min(), x3.max()), (24, 50))
y3 = np.interp(y3, (y3.min(), y3.max()), (500, 600))
# combining three sections into X and y
X = np.concatenate([x1, x2, x3])
y = np.concatenate([y1, y2, y3])
# plotting the combined dataset
plt.figure(figsize=(15,5))
plt.plot(X, y, '.');
plt.show();
# organizing and sorting data in dataframe
df = pd.DataFrame([])
df['X'] = X.flatten()
df['y'] = y.flatten()
df = df.sort_values(by='X')
df = df.reset_index(drop=True)
# train model on first 100 observations
model = linear_model.SGDRegressor()
model.partial_fit(df.X[:100].to_numpy().reshape(-1,1), df.y[:100])
print(f"model coef: {model.coef_[0]:.2f}, intercept: {model.intercept_[0]:.2f}")
regression_line = model.predict(df.X[:100].to_numpy().reshape(-1,1))
plt.figure(figsize=(15,5));
plt.plot(X,y,'.');
plt.plot(df.X[:100], regression_line, linestyle='-', color='r');
plt.title("SGDRegressor on first 100 observations with default arguments");
What am I misunderstanding or overseeing here?
AI: A single call to partial_fit is very unlikely to get you a good fit, as it only performs one iteration of stochastic gradient descent. As stated in the docs:
Internally, this method uses max_iter = 1. Therefore, it is not guaranteed that a minimum of the cost function is reached after calling it once. Matters such as objective convergence and early stopping should be handled by the user.
source
I'm not very familiar with online learning and partial fits, but it seems you need to apply some kind of looping function if you want this to work. After playing around for a bit I found this simple modification already greatly improves the results:
# train model on first 100 observations
model = linear_model.SGDRegressor()
amount = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300]
for a in amount:
model.partial_fit(df.X[:a].to_numpy().reshape(-1, 1), df.y[:a])
print(f"amount: {a}, model coef: {model.coef_[0]:.2f}, intercept: {model.intercept_[0]:.2f}")
regression_line = model.predict(df.X[:800].to_numpy().reshape(-1, 1))
plt.figure(figsize=(15, 15))
plt.plot(X, y, '.')
plt.plot(df.X[:800], regression_line, linestyle='-', color='r')
plt.title("SGDRegressor on first 100 observations with default arguments")
plt.show()
Here you can see in the output that the intercept is increasing, whereas the coefficient is decreasing, which is what we would expect a good fit to look like.
I hope this is enough to get your project moving again! |
H: Labels in SVM algorithm
I am reading some ML books (Burkov's and Raschka's), and I have seen there that for a binary classification problem using SVM, my "positive" label needs to be equal to +1 and my "negative" label needs to be -1.
My dataset has labels equal to +1 and 0. Should I change all the 0s to -1? Or, let's say I'm using scikit-learn's SVC, will it automatically perform this conversion?
AI: You do not have to change your y labels; sklearn will do this for you. SVC accepts arbitrary class labels (such as 0 and 1) and handles the mapping to the -1/+1 convention of the SVM formulation internally, so no manual conversion is needed. |
H: Difference between prototype and centroid
Are these two terms "prototype" and "centroid" exchangeable? I know prototypes can be calculated using the mean of the features. Is it the same for centroid?
AI: No, they are not exchangeable.
Centroid refers to "the arithmetic mean position of all the points in the figure". Sometimes, it does not make sense to take the mean of the data feature vectors. For instance, when clustering time series, normally you cannot apply the k-Means algorithm because it computes the cluster prototypes with the arithmetic mean; instead, for time series you may use the k-medoids algorithm, which uses actual data points as cluster prototypes. |
H: How to combine preprocessor/estimator selection with hyperparameter tuning using sklearn pipelines?
I'm aware of how to use sklearn.pipeline.Pipeline() for simple and slightly more complicated use cases alike. I know how to set up pipelines for homogeneous as well as heterogeneous data, in the latter case making use of sklearn.compose.ColumnTransformer().
Yet, in practical ML one must oftentimes not only experiment with a large set of model hyperparameters, but also with a large set of potential preprocessor classes and different estimators/models.
My question is a dual one:
What would be the preferred way to set up a pipeline where the selection of text vectorizers is treated as an additional hyperparameter for grid or randomized search?
Additionally, what would be the preferred way to set up a pipeline where the selection of multiple models can also be treated as an additional hyperparameter? What about optimizing the model-specific hyperparameters in this case?
In the first case a common use case is text vectorization: treating the choice of CountVectorizer() or TfidfVectorizer() as a hyperparameter to be optimized.
In the second case a practical use case could be selecting between various algorithms or in the case of multiclass classifications, whether to use OneVsOneClassifier() or OneVsRestClassifier().
I understand that this might exactly be what AutoML solutions have been developed for. I heard of out-of-the-box AutoML solutions in the past that can do automatic model selection with hyperparameter tuning but I have no experience in any of them, thus I don't know if they indeed provide an answer for the general topics I described in this post.
AI: Some pure scikit approaches:
When pre-processing relates to data balancing & sampling strategies, consider using imbalanced-learn components (e.g. RandomUnderSampler) that you embed right into your pipelines. This lets you hyper-tune their parameters.
Rely on the passthrough functionality of grid search when deciding if certain pre-processing steps are needed at all. This however cannot express when a step is a required pre-condition for another step (e.g. StandardScaler + MLPClassifier).
Consider using scikit-optimize's BayesSearchCV strategy to walk through the parameter space based on previous runs rather than in a fixed or random fashion like GridSearchCV or RandomizedSearchCV do. For tuning many parameters this may converge faster.
⚠️ In practice I find it's often extremely time- and computationally-expensive to have full end-to-end pipelines that try to learn everything (e.g. parameters, metrics, model types, normalization stages, features, architectures, etc.), so hyper-tune what matters most. |
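As a minimal pure-scikit sketch of both parts of the question (all names are illustrative, not a definitive recipe): an entire pipeline step can itself be listed as a searchable parameter in the grid, which covers both the vectorizer choice and the model choice.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV
pipe = Pipeline([('vect', CountVectorizer()), ('clf', LogisticRegression())])
param_grid = [
    {'vect': [CountVectorizer(), TfidfVectorizer()],   # vectorizer type as a hyperparameter
     'vect__ngram_range': [(1, 1), (1, 2)],
     'clf': [LogisticRegression(max_iter=1000)],
     'clf__C': [0.1, 1, 10]},
    {'vect': [TfidfVectorizer()],
     'clf': [LinearSVC()],                              # a different estimator with its own sub-grid
     'clf__C': [0.1, 1]},
]
search = GridSearchCV(pipe, param_grid, cv=3, scoring='f1')
# search.fit(texts_train, y_train)   # texts_train: an iterable of raw documents (hypothetical names)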
H: Mapping values in Logistic Regression
When mapping probabilities obtained in logistic regression to 0s & 1s using the sigmoid function, we use a threshold value of 0.5. If the predicted probability lies above 0.5, then it gets mapped to 1, if the predicted probability lies below 0.5, it gets mapped to 0. What if the predicted probability is exactly 0.5? What does 0.5 get mapped to?
AI: Despite the interesting comments on setting appropriate threshold values, I think the main question was about what the threshold value actually means for the prediction.
There are different ways to implement a thresholding function. Your proposed way says that for a predicted probability p:
if p > threshold, it is predicted to be 1
AND
if p < threshold it is predicted to be 0.
This would indeed leave a gap at p == threshold, and in order to prevent this, most implementations will use a one-sided test:
If p > threshold, then it is predicted to be 1, and all other values are 0.
OR
If p < threshold, then it is predicted to be 0, and all other values are 1.
Not only is this computationally cheaper, it also prevents the aforementioned problem for p == threshold from ever occurring. |
H: Categorical to One hot encoding - Big data
I have a sales dataset which consists of binary label as output - "Business win" and "Business loss" of our products.
We have a set of 1st-level customers (let's call that group jacks) with whom we do business. These jacks then sell our products to end customers (let's call that group roses).
These sales data contain fields such as sales id, product id, product name, product type, market segment (like APAC, EMEA etc), jack id, jack category, jack region, rose id, rose category, rose region, project id, project name etc.
A single jack can sell the same product across multiple different projects (to the same or different roses).
As you can see that most of my input variables are categorical in nature.
I would like to find out what are the features that influence the business outcome? that is win or loss?
If it's a business win or loss, I would like to find out why it is so (using Lime or SHAP, etc)
My question
a) Since there are more than 100 unique products, should I create one hot encoding variable for all my 100 products? We would like to find out whether the product is one of the features that can help us predict whether the business is likely to lose or win business etc? ex: Product A when ordered has an 80% chance that this business is a loss. This is one of the features. Similarly, I would like to get this sort of detailed insight. I don't wish to know simply variable product is an important factor. I would like to know which product leads to business loss or win. Hope this helps
b) I understand we can create one-hot encoding variables for region variables because it has only 4 values like APAC, EMEA, GC, EUROPE, etc.
c) My total number of rows in the dataset is 300K. But as you see most the categorical variable has 100 unique values. How should I decide whether it has one hot encoded or not?
d) Is there any other better or alternative method to do this?
AI: Let's answer your questions one by one.
a) Since there are more than 100 unique products, should I create one
hot encoding variables for all my 100 products?
There are many ways to encode a categorical variable, a list of which you can find here. Which one you should use depends on your data. Categorical variables can be of many types: ordinal, nominal, high cardinality or low cardinality. Not all encoders work with all types of categorical variables. A simple Google search will lead you to articles where you can find all the necessary info regarding which encoder to use when. Here are a few articles I found: article 1, article 2, article 3.
Since your cardinality is more than 100, using OneHotEncoder will lead to an increase in dimensionality, which is not a good thing. So you should go for other encoders like OrdinalEncoder, TargetEncoder or others, depending on your data type.
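As a hedged sketch of target encoding with the category_encoders package (the column names below are illustrative stand-ins for the high-cardinality fields in the question):
import category_encoders as ce
high_card_cols = ['product_id', 'product_name', 'jack_id', 'rose_id', 'project_id']
encoder = ce.TargetEncoder(cols=high_card_cols)
X_train_enc = encoder.fit_transform(X_train, y_train)   # fit on the training split only, to avoid leakage
X_test_enc = encoder.transform(X_test)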
I would like to know which product leads to business loss or win.
You can get these types of insights using Shap and/or Lime easily.
b) I understand we can create one-hot encoding variables for region
variable because it has only 4 values like APAC, EMEA, GC, EUROPE etc.
Yes, you can one-hot encode them, provided they do not have a sense of order between them.
c) My total number of rows in dataset is 300K. But as you see most of
categorical variable has 100 unique values. How should I decide
whether it has one hot encoded or not?
As said above, what kind of encoder to use depends on your data type and your problem type, i.e. whether it is a classification or regression problem.
d) Is there any other better or alternative method to do this?
Yes there are. Check out the links I mentioned above!:D
Cheers! |
H: How to make a model suffer from underfitting
I would like to show an example of my model when it is overfitting, and when it is underfitting. Now overfitting is pretty straightforward: just train on little data, and the model will memorize it. But how do I show an example of underfitting? I have a couple of suggestions:
lowering the learning rate, but I do not think this is really underfitting. This just makes the model learn slowly.
add a dropout layer with 0.99, so that the network could not learn well enough.
maybe change the architecture somehow that it will be harder to learn.
I think (2) is the best option, what do you guys think?
My model:
conv2d (kernel=5,feature_maps=20)
batch_normalization
ReLU
Flatten()
FullyConnected(11,500->10)
Softmax
and the task it to label MNIST hand written.
AI: One way to underfit is to reduce the complexity of the model so the model is unable to learn patterns in the training data.
Here are a couple of options:
Reduce the number of layers
Reduce the size of the layers
Use a linear activation function |
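A minimal sketch combining the options above for the MNIST setup described in the question (assuming tf.keras; the exact sizes are illustrative): a tiny linear bottleneck removes most of the model's capacity, so it should visibly underfit.
import tensorflow as tf
underfit_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(4, activation='linear'),    # very small layer, linear activation
    tf.keras.layers.Dense(10, activation='softmax'),
])
underfit_model.compile(optimizer='adam',
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])
# train and test accuracy should both stay low and close together, the signature of underfitting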
H: Logistic Regression mapping formula
Sigmoid function predicts the probability value which is between 0 & 1. What is the formula in logistic regression that maps the predicted probabilities to either 1 or 0?
AI: You get the output of the logistic regression $\sigma$, which is between $0$ and $1$.
Default option (what most packages output): in order to get class labels you simply map all values $\sigma \leq 0.5$ to $0$ and all values $\sigma > 0.5$ to $1$. Which class the boundary case $\sigma = 0.5$ is assigned to can differ between implementations (practically irrelevant), but it must be deterministic in order to get reproducible results. This implies that random assignment at the threshold $0.5$ should not be done.
Depending on your application you might change this rule. For example, suppose the negative effect of a wrong label $1$ is associated with a high cost (e.g. label 1 means that a person gets good conditions for life insurance) and you only want to give good conditions when your model predicts a probability larger than 0.95. Then we would have the following rule: map all values $\sigma \leq 0.95$ to $0$ and all values $\sigma > 0.95$ to $1$. You have to implement this on your own from the probabilities that the logistic regression fit gives you. |
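Written as a formula with a threshold $t$ ($t = 0.5$ by default), the mapping is $\hat{y} = \mathbf{1}[\sigma(\mathbf{x}) > t]$, i.e. $\hat{y} = 1$ if the predicted probability exceeds $t$ and $\hat{y} = 0$ otherwise.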
H: How to perform regression on image data using Tensorflow?
Overview
I understand the surface of the mathematics* of simple neural networks. I went through single-label image classification problems (ie using MNIST & fashion-MNIST datasets) using the native tensorflow, performed multi-label image classification using Vertex AI's AutoML, and I am aware of regression tutorials on tabular data (ie this). I can perform classification on tabular data using the same principles as used for image data.
In this question, I am asking about how to perform regression on image data.
|                | Tabular Data | Image Data |
|----------------|--------------|------------|
| Regression     | Basic regression: Predict fuel efficiency | Relatively little literature; the theme of this question |
| Classification | Very similar NN architecture works as for image classification | See MNIST digit classification for example |
Earlier threads (ie this, this, this, this, this or this) related to this topic are either unanswered or don't provide a fully reproducible setup. This question aims to do so: in the following section, I present how I create mock data, how I train my model, and what the problem is with it.
A regression problem
What the model should estimate
I can create a large number of images with a tilted elongated rectangle on them, with some other points on the image as noise:
Given an image, I am trying to build a Tensorflow model that estimates the slope of this rectangle.
Reproducible data generation
# Imports for this and the following sections:
import PIL
import glob
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
A simple function that creates this_many pngs as above, each 5 by 5 inches, saved with 20 dpi (so, matching the code below, the result will be 100 by 100 pixels):
def generate_data(this_many,lim = 10,prefix=''):
for i in tqdm(range(this_many)):
# create plot with limits
plt.figure(figsize=(5,5))
plt.xlim([-lim,lim])
plt.ylim([-lim,lim])
# add tilted rectangle
angle = np.random.uniform(low=0, high=180)
plt.gca().add_patch(Rectangle((0,0),lim-1,1,angle=angle,facecolor='k'))
plt.gca().add_patch(Rectangle((0,0),-lim+1,1,angle=angle,facecolor='k'))
# add scatter plot as noise
xs = np.random.uniform(low=-lim, high=lim, size=50)
ys = np.random.uniform(low=-lim, high=lim, size=50)
plt.scatter(xs, ys, s=100, c='k')
# tidy up
plt.gca().set_aspect('equal')
plt.gca().get_yaxis().set_visible(False)
plt.gca().get_xaxis().set_visible(False)
plt.savefig(f'{prefix}sample{i:04}_angle_{int(angle*100):05}.png',dpi=20)
plt.close()
Each filename will also contain the angle at which the rectangle is rotated on the image.
#Generate 10000 such pngs:
!mkdir pngs
generate_data(10000,prefix='pngs/')
Read in image data
#List all `png`s we have generated:
pngs = glob.glob('pngs/sample*png')
Create numpy arrays from it (as here):
ims = {}
for png in pngs:
ims[png]=np.array(PIL.Image.open(png))
Let's call the arrays created from pngs questions, as the neural net will be questioned on the slope of rectangles appearing on these:
#Let's call the arrays created from pngs `questions`
questions = np.array([each for each in ims.values()]).astype(np.float32)
#Check the first color channel of the first image:
plt.imshow(questions[0][:,:,0])
It could be improved, but decent enough.
#Read in the slopes to an array:
solutions = np.array([float(each.split('_')[-1].split('.')[0])/100 for each in ims]).astype(np.float32)
#Check the slope on the image above:
solutions[0] # outputs: 100.88
Seems correct.
Where Tensorflow comes in
#Define our model:
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(4, 4, activation='sigmoid'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=100/4*100/4, activation='sigmoid'),
tf.keras.layers.Dense(units=1000, activation='sigmoid'),
tf.keras.layers.Dense(units=50, activation='sigmoid'),
tf.keras.layers.Dense(units=1)
])
#Compile:
model.compile(loss='mean_squared_error',
optimizer=tf.keras.optimizers.SGD(0.01))
#Fit:
history = model.fit(questions, solutions, epochs=10, batch_size = 200, verbose=1)
#Output:
Epoch 1/10
50/50 [==============================] - 22s 442ms/step - loss: 2698.5503
Epoch 2/10
50/50 [==============================] - 21s 426ms/step - loss: 2699.3318
Epoch 3/10
50/50 [==============================] - 22s 436ms/step - loss: 2699.2109
Epoch 4/10
50/50 [==============================] - 22s 445ms/step - loss: 2701.0398
Epoch 5/10
50/50 [==============================] - 21s 422ms/step - loss: 2700.9006
Epoch 6/10
50/50 [==============================] - 22s 437ms/step - loss: 2701.7229
Epoch 7/10
50/50 [==============================] - 22s 439ms/step - loss: 2698.1746
Epoch 8/10
50/50 [==============================] - 22s 431ms/step - loss: 2695.8655
Epoch 9/10
50/50 [==============================] - 22s 444ms/step - loss: 2700.7979
Epoch 10/10
50/50 [==============================] - 22s 435ms/step - loss: 2697.8386
The fact that loss is not decreasing while being thousands high is alarming.
Test model
#Generate test data:
generate_data(30,prefix='pngs/test_')
#Read in pngs:
test_pngs = glob.glob('pngs/test_*png')
test_ims = {}
for png in test_pngs:
test_ims[png]=np.array(PIL.Image.open(png))
#Prepare test questions and solutions as before:
test_questions = np.array([each for each in test_ims.values()]).astype(np.float32)
test_solutions = np.array([float(each.split('_')[-1].split('.')[0])/100
for each in test_ims]).astype(np.float32)
#Apply model:
test_answers = model.predict(test_questions)
test_answers is:
print(test_answers)
#array([[90.65718 ],
# [90.65722 ],
# [90.65722 ],
# [90.65722 ],
# .
# .
# .
# [90.65722 ],
# [90.657196],
# [90.65721 ],
# [90.65723 ]], dtype=float32)
i.e., all of them are almost the same. Correct estimates would've been close to test_solutions:
print(test_solutions)
#array([ 21.56, 128.17, 126.59, 104.89, ... 168.03, 68.59, 124.97, 155.32], dtype=float32)
i.e. the model is completely wrong. It does not seem to be the case that simply tweaking epoch numbers or batch size is going to help. (I did try some other numbers though, but it indeed did not help.)
Question
What tensorflow architecture would allow a model to be capable of estimating the slope of these above rectangles after training?
In other words: what's wrong with the approach above?
* Excellent backpropagation explanations here and here.
AI: I think the issue is mostly with your network architecture.
You are using only one convolutional layer, and you are using all sigmoid activations!
Adding more convolutional layers, changing the activations from sigmoid to relu, and changing the optimizer to Adam gives me a loss below 5 after 30 epochs:
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(2),
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(2),
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.Conv2D(3, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=512, activation='relu'),
tf.keras.layers.Dense(units=256, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
model.compile(loss='mean_squared_error', optimizer="adam")
history = model.fit(questions, solutions, epochs=30, batch_size=200, verbose=1)
Which gives the following training output for the last 5 epochs:
Epoch 25/30
50/50 [==============================] - 3s 54ms/step - loss: 4.8650
Epoch 26/30
50/50 [==============================] - 3s 57ms/step - loss: 5.5044
Epoch 27/30
50/50 [==============================] - 3s 56ms/step - loss: 6.0381
Epoch 28/30
50/50 [==============================] - 3s 55ms/step - loss: 4.7235
Epoch 29/30
50/50 [==============================] - 3s 54ms/step - loss: 4.4355
Epoch 30/30
50/50 [==============================] - 3s 55ms/step - loss: 4.1494 |
H: How does Keras Tokenizer choose tokens given a sentence?
I tried to find the answer to this question but I can't find anything, so I ask here: how does Keras Tokenizer choose tokens given a sentence of words?
To be more precise with what I want to know, given this simple example:
#Import module
from keras.preprocessing.text import Tokenizer
# define a document
doc = ['The cat sat on the mat']
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the document
tokenizer.fit_on_texts(doc)
encoded_doc=tokenizer.texts_to_sequences(doc)
print('word_index : ',tokenizer.word_index)
This method creates the vocabulary index based on word frequency and then it basically takes each word in the text and replaces it with its corresponding integer value from the word_index dictionary.
Therefore, this means that in the step in which the tokenizer is fit on the document (I think in this step), it decides that the tokens are the words of the sentence. Why? Is it possible to change this choice and instead choose the letters of the sentence as tokens?
Thank you in advance.
AI: This is simply how the tokenizer works given the defaults that are defined; see also the documentation. By default the value for the split argument is ' ', meaning that it splits the sentences on every space character to get the tokens for that sentence. You can change this to get other multi-character tokens from a sentence. In addition, there is the char_level keyword, which creates tokens from each individual character instead of from whole words. |
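A minimal sketch of the character-level option, reusing the example from the question:
from keras.preprocessing.text import Tokenizer
doc = ['The cat sat on the mat']
char_tokenizer = Tokenizer(char_level=True)
char_tokenizer.fit_on_texts(doc)
print(char_tokenizer.word_index)               # keys are now single characters such as 't', 'a', 'c'
print(char_tokenizer.texts_to_sequences(doc))  # the sentence encoded character by character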
H: Custom vectorizer transformer in sklearn with cross validation
I created a custom transformer class called Vectorizer() that inherits from sklearn's BaseEstimator and TransformerMixin classes. The purpose of this class is to provide vectorizer-specific hyperparameters (e.g.: ngram_range, vectorizer type: CountVectorizer or TfidfVectorizer) for the GridSearchCV or RandomizedSearchCV, to avoid having to manually rewrite the pipeline every time we believe a vectorizer of a different type or settings could work better.
The custom transformer class looks like this:
class Vectorizer(BaseEstimator, TransformerMixin):
def __init__(self, vectorizer:Callable=CountVectorizer(), ngram_range:tuple=(1,1)) -> None:
super().__init__()
self.vectorizer = vectorizer
self.ngram_range = ngram_range
def fit(self, X, y=None):
print(f">>> Vectorizer.fit() called with vectorizer={self.vectorizer} and ngram_range={self.ngram_range}.")
return self
def transform(self, X, y=None):
print(f">>> Vectorizer.transform() called with vectorizer={self.vectorizer} and ngram_range={self.ngram_range}.")
X_ = X.copy()
X_vect_ = self.vectorizer.fit_transform(X_) # problem is in this line!
X_vect_ = X_vect_.toarray()
# print(X_vect_.shape)
# print(self.vectorizer.vocabulary_)
# time.sleep(5)
return X_vect_
(Side note: time.sleep(5) was merely added to make debugging easier by preventing debug info overflowing on one another.)
I intend to use the custom vectorizer in the following way, with a pipeline and a hyperparameter tuning step:
pipe = Pipeline([
('column_transformer', ColumnTransformer([
('ltype_encode', OneHotEncoder(handle_unknown='ignore'), ['Type']),
('text_vectorizer', Vectorizer(), 'Text')],
remainder='drop')
),
('model', LogisticRegression())
])
param_dict = {
'column_transformer__text_vectorizer__vectorizer': [CountVectorizer(), TfidfVectorizer()]
}
randsearch = GridSearchCV(pipe, param_dict, cv=2, scoring='f1').fit(X_train, y_train)
Now as I was debugging, I have a guess of the problem in my code: the above GridSearchCV uses a 2-fold cross validation. First, it takes half of the data to train the model and reserves the other half for evaluation. However, the Vectorizer() class's transform() method will try to call fit_transform() on the evaluation dataset again, even though when testing/evaluating, we would want to use the previously fit vectorizer without a refit.
Question is: how could I rectify this problem?
Imports:
import time
import pandas as pd
from typing import Callable
import sklearn
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import GridSearchCV
and sample data in semicolon-separated format:
Src;Text;Type;Target
A99;hi i love python very much;c;1
B07;which programming language should i learn;b;0
A12;what is the difference between python django flask;b;1
A21;i want to be a programmer one day;c;0
B11;should i learn java or python;b;1
C01;how much can i earn as a programmer with python;a;0
c01;hello <FLAG> <FLAG> I m from france i enjoyed this lecture thank u very much HEAVY BLACK HEART HEAVY BLACK HEART HEAVY BLACK HEART HEAVY BLACK HEART;b;1
ssa;hi hola salut <FOREIGN> <FOREIGN> <FOREIGN> <FOREIGN> SMILING FACE WITH HALO HEAVY BLACK HEART CLINKING GLASSES <FLAG> <FLAG>;a;1
AI: This can be solved by calling the vectorizer's transform method (instead of fit_transform) inside transform. In addition, you also have to add a call to the vectorizer's fit within the fit method, to make sure that the vectorizer is actually fitted on the training data before being used to transform anything:
class Vectorizer(BaseEstimator, TransformerMixin):
def __init__(self, vectorizer:Callable=CountVectorizer(), ngram_range:tuple=(1,1)) -> None:
super().__init__()
self.vectorizer = vectorizer
self.ngram_range = ngram_range
def fit(self, X, y=None):
print(f">>> Vectorizer.fit() called with vectorizer={self.vectorizer} and ngram_range={self.ngram_range}.")
self.vectorizer.fit(X)
return self
def transform(self, X, y=None):
print(f">>> Vectorizer.transform() called with vectorizer={self.vectorizer} and ngram_range={self.ngram_range}.")
X_ = X.copy()
X_vect_ = self.vectorizer.transform(X_)
X_vect_ = X_vect_.toarray()
return X_vect_ |
H: Data snooping and information leakage?
I need help in deciding whether my below implementation imposes data snooping bias and information leakage from the test/evaluation set to the train set.
I have a text corpus of 10k+ short online comments. Many have special symbols and emojis. These might theoretically be handled by vectorizers (by transforming them to their Unicode representation for instance) but I believe it's more valuable to transform them to their descriptions because that would provide (at least with a higher likelihood) actual information on whether the user is happy, sad, frustrated, etc. Thus my function I have written for this purpose applies the following transformation:
original = "hello this is a sample comment "
transformed = "hello this is a sample comment SMILING FACE WITH HALO EMOJI"
My plan is to train and evaluate multiple models using cross validation and hyperparameter tuning, for binary classification. At first, I just loaded my entire dataset (i.e. before the train/test split) and used df.apply(lambda x: transform_emojis(x)) where transform_emojis() is the function that performs the above string transformation.
However, I am now wondering if this might be a mistake, and whether I'd better incorporate this function into the pipeline, right before the vectorization step. Let's consider an example: a very rare emoji (e.g. squid: ) might be present in only one comment out of the 10k+. If in one iteration of the K-fold cross validation the squid happens to show up in the test set, the count vectorizer (or tf-idf) fitted on the training set would know nothing about this symbol and drop it during the .transform() of the test set.
In contrast, my current implementation transforms the entire corpus before modelling. Continuing with the squid example this means that irrespective of whether the squid shows up in the training or test set, it is already converted to string representation: "SQUID". If it shows up in the training set, the vectorizer will then create a separate feature named 'squid'. If it shows up only in the test set, the vectorizer will again not know anything about it and hence the word 'squid' will be thrown away during the .transform() step.
It seems to me that there is no risk of data leakage and data snooping bias in this case but I am somewhat uncertain whether my reasoning is correct.
AI: While technically this may be a situation of information leakage, since you're applying a pre-processing step on the whole dataset before performing the dataset split, I don't think it matters too much in this specific case. Normally these types of steps would have to be done after the data is split, to make sure that no information from outside the training dataset (e.g. keywords that only appear in the test data) leaks into the training process. However, since the mapping from emoji to words is known beforehand (i.e. it is static and not derived from the data) and the set of emojis isn't likely to change (to my basic understanding at least), I wouldn't expect it to have any noticeable impact. I would still include this step in the pipeline though, because it makes the overall process cleaner and it shouldn't affect the results given that the mapping is static.
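If you do move it into the pipeline, a minimal sketch (assuming your existing transform_emojis function takes a single string and returns the transformed string) could look like this:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
def map_emojis(texts):
    # apply your transform_emojis to every comment; the mapping is static,
    # so there is nothing to fit and a stateless FunctionTransformer is enough
    return [transform_emojis(t) for t in texts]
pipe = Pipeline([
    ('emoji_to_text', FunctionTransformer(map_emojis)),
    ('vectorizer', TfidfVectorizer()),
    ('model', LogisticRegression())
])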
H: No module named 'model'
I am trying to use the CoAtNet class in the following link
CoAtNet Class from Github
but I always get an error while I am running the following lines:
from torch import nn, sqrt
import torch
import sys
from math import sqrt
#sys.path.append('.')
from model.conv.MBConv import MBConvBlock
from model.attention.SelfAttention import ScaledDotProductAttention
and I get this error:
ModuleNotFoundError: No module named 'model'
from this line:
from model.conv.MBConv import MBConvBlock
Can anyone help me? Regards in advance!
AI: If you are just using the CoAtNet.py file, it gives this error since it is missing relative imports from other folders in the repository (i.e. the MBConvBlock and ScaledDotProductAttention classes). Make sure to either clone the full repository or copy just the two python files (SelfAttention.py and MBConv.py) in which the two classes are defined.
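If you clone the full repository and your own script lives outside of it, a minimal sketch would be the following (the path is a placeholder you would adapt to where you cloned it):
import sys
# point Python at the repository root, i.e. the folder that contains the model/ package
sys.path.append('/path/to/cloned/repository')
from model.conv.MBConv import MBConvBlock
from model.attention.SelfAttention import ScaledDotProductAttention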
H: Recommended number of features for regression problem
In the following link the answer recommends a feauture amount of N/3 for regression (or it is quoted).
Where N corresponds to the sample size:
How many features to sample using Random Forests
Is there any paper which quotes this?
AI: Not sure what is meant by a paper. Are you asking if there is mathematical proof that this is the best setting all of the time?
There are some experiments pointed to from here, a textbook quote in that answer, and more discussion in the link you posted.
The answer is "it depends". You can tune this parameter for your data and problem. Perhaps n/3 is a good place to start and maybe close enough to the optimum that it does not need to be further tuned, compared to the other parameters and the time you have. |
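For example, a minimal sketch of tuning this parameter with scikit-learn (assuming X_train and y_train hold your training data; the candidate values are just placeholders to adapt):
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
# candidate settings for the number of features considered at each split;
# roughly one third of the features is a common starting point for regression
param_grid = {'max_features': [0.33, 'sqrt', None]}
search = GridSearchCV(RandomForestRegressor(n_estimators=200, random_state=0),
                      param_grid, cv=5, scoring='neg_mean_squared_error')
search.fit(X_train, y_train)
print(search.best_params_)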
H: Error in assign a numeric value
I would like to transform a string into a number. This is my code:
test <- starwars
home <- test$homeworld
home <- as.numeric(home)
but I have this error:
Error: 'list' object cannot be coerced to type 'double'
My goal is to assign a number to every homeworld. Obviously the lines with the same homeworld will have the same number assigned.
How can I fix my code?
AI: The way I read it, you want to transform home to contain numbers instead of text, with similar numbers representing similar homeworlds.
As R correctly states, you cannot coerce a list directly into numeric (double) values. Instead, you first have to convert it into a factor.
To do this in R, you run:
data(starwars, package = "dplyr")
df <- starwars
home <- df$homeworld
home_numeric <- as.numeric(as.factor(home))
Hope this helps! |
H: Knit with R markdown
I have tried running some code in RStudio Desktop. Within the chunk the code runs smoothly, but when knitting to HTML I get a message that something is wrong with a line of code. What can I do, please?
(Screenshot: the code I wrote, which ran fine interactively.)
(Screenshot: the error message I got while trying to knit to HTML, saying the object mcq_grade could not be found.)
AI: The message tells you that there is no variable mcq_grade. Most likely when you ran the code yourself you had already loaded some data from a file and assigned it to mcq_grade, it could look like this:
mcq_grade <- read.table(filename)
All the code that you put in your R Markdown source document must be self-contained, because it's executed in a fresh environment. By the way, this is intentional: it's what makes the code reproducible independently of your interactive session.
This means that if some dataset must be loaded first, it must be loaded from the R markdown document itself. Mind that the R Markdown doc is executed in the directory where the source is located, so you might have to adjust the path to the file. |
H: Changing the predicted variable from price to price/km due to better visual correlation
I'm working on a dataset of Uber Rides from Kaggle. Of the important variables there are pickup and drop-off coordinates, passenger count, datetime of pickup, distance and the final price. I'm currently in the exploration phase and just about to begin feature engineering. When I'm plotting the different potential correlations, some of them just feel odd to plot fare against something. For example, fare vs passenger count or fare vs hour doesn't make much sense to me, as the average fare also depends on how far people usually go. Price/km on the other hand seems to have more sense when plotting. Price/km vs year can show yearly price increase, price/km vs hour shows how the hour affects the pricing of the trip and so on.
(Example plot: price/km vs hour, which makes sense.)
My question is, would it make sense to change the predicted variable from fare to price/km and then multiply by the trip distance? Obviously if I predict one I have to remove the other from the dataset, but does it matter which one I predict?
AI: The target you will want to use will depend entirely on your goal with this analysis. In your case you are asking whether you should set your target to either fare or fare/km.
As you stated, fare/km does make more sense if you want to predict the price per kilometer of your ride (i.e. was there surge pricing? how does price change by time?).
However, consider the question "How much do I expect to earn from a customer X?". Then price on Uber does not depend only on the distance. It also depends on the current supply and demand, the pick-up location (region), etc and predicting a fare/km might obfuscate the total cost of the ride.
For example, we want to predict ride cost. Our features are
pick-up location
day of week
time of day
We can expect that the average ride cost at 3am from a clubbing district will be higher (people heading home from drinking, more kilometers) let's say 100\$ rides, 25km. In comparison at 8pm (when people might be going from restaurants to night clubs, less kilometers) 20\$ rides, 5km. However, if you used the fare/km, then the two would have the same target. |
H: Hidden Markov Models: Best practices in selecting observable variables
I am just getting started with Hidden Markov Models. In selecting my observable variables, there are some where I believe the recent change in the variable is potentially more predictive than its level. For example, in finance, the level of an interest rate may not be as important as how much it has recently changed.
Given that HMM presumes that only the present state matters, am I violating a best practice if I used the delta of variables in this manner? Are there potential pitfalls of which I should be aware?
AI: Your question is whether you violate best practice if you use the delta in your HMM, and whether there are any pitfalls.
I think to answer the first, and prevent the second, you could do what is often done in the case where people want to use a HMM whilst using information from more than one previous state: rewrite the latent chain, such that it becomes a 'standard' Markov chain again. This will probably require some wrangling, but prevents violating the Markov property and any potential pitfalls you're worried about.
Say your original latent chain is $X_t$. You can rewrite this to $Y_t=(X_t, X_{t-1})$.
This $Y_t$ then fulfills the Limited Horizon (Markov) assumption in the sense that it only depends on $Y_{t-1}$.
Depending on your specific use-case, make sure to also adjust the definition of the original state $Y_0$.
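If you only want the deltas as observed variables (rather than augmenting the latent chain), a minimal sketch using the hmmlearn package could look like this (assuming rates is a 1-D numpy array of interest-rate levels; the number of states is a placeholder):
import numpy as np
from hmmlearn.hmm import GaussianHMM
# model day-over-day changes instead of the raw level
deltas = np.diff(rates).reshape(-1, 1)
# fit a 2-state Gaussian HMM on the deltas and decode the hidden regimes
model = GaussianHMM(n_components=2, covariance_type='diag', n_iter=100)
model.fit(deltas)
hidden_states = model.predict(deltas)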
H: Binary classification with seperate training and testing datasets
I have two datasets (train.csv) and (test.csv) revolving around predicting the death outcome for a disease. Both sets include 20 independent variables (age, weight, etc), but only the train.csv dataset contains the true death outcome (0 for alive, 1 for death). The training dataset has a shape of (650, 21), whereas the test dataset has a shape of (200, 20). I'm also using python, and the dataset is imbalanced with only around 30 deaths in the training set.
I have to train the model using the training set, and predict the outcome using the test dataset, as well as display stuff such as performance tables. I then have to put the predicted outcomes into a new .csv file for submission which gets tested against a different dataset and gives me an ROC/AUC score. This submission has to have the same amount of observations as the test dataset, which is 200.
I just have a few questions revolving around this.
Do I have to use train_test_split on the training dataset? Or is it unnecessary because I already have separate training/testing datasets?
I've created 2 different models using XGBoost and a Linear regression model. However, the accuracy for the training model tends to be really high (0.8-0.98), but when submitting my predictions to test on kaggle, it has a really low AUC/ROC score (0.5 or lower). Is this due to overfitting or is there another issue?
Is there any way to get an accurate AUC/ROC score for my predictions before I submit them to kaggle? I only have a limited amount of submissions, and I would like to know I'm atleast submitting something good rather than wasting all my submissions. Sorry if that is worded confusingly.
AI: Alright, here's what I think:
I would always recommend splitting the training set if the amount of data allows it. Your train/test split within the training data will then be a train/validation split. The validation set will allow you to test your model for things such as overfitting on the training data.
Even though you can get the ROC/AUC score for the test set predictions, you can't use this for tuning your model, as this would be data leakage.
This question is a big part of the whole data science process, and too broad to be answered in this context. Try your best to try different configurations and see what works best! :)
Also, high train set performance combined with low test set performance is not merely "due to" overfitting: it is overfitting.
Yes, see the train/validation split suggestion in part 1. |
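A minimal sketch of that train/validation split (assuming X and y hold the features and labels from train.csv; stratify keeps the roughly 30 deaths proportionally represented in both parts):
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
clf = XGBClassifier()
clf.fit(X_tr, y_tr)
# ROC/AUC should be computed from predicted probabilities, not hard labels
val_auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(val_auc)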
H: How to draw a density plot?
This is my R script to draw the density plot:
df <- iris
plot(density(df$Sepal.Length), main="Density Plot", ylab="Frequency", sub=paste("Skewness:", round(e1071::skewness(df$Sepal.Length), 2)))
Is there a way to plot three density plots (one for every species: setosa, virginica and versicolor) in the same graph?
AI: For plotting I prefer to use the ggplot2 package since I find it easier to use and more versatile. Using this package you can easily plot multiple density plots:
library(ggplot2)
ggplot(iris, aes(x = Sepal.Length, colour = Species)) +
  geom_density()
H: ValueError: Negative dimension size caused by subtracting 5 from 3
I get this error
ValueError: Negative dimension size caused by subtracting 5 from 3 for '{{node conv2d_77/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](conv2d_76/Relu, conv2d_77/Conv2D/ReadVariableOp)' with input shapes: [?,32,8,3], [5,5,32,32].
when I don't have padding = 'same' in the second layer of my model. When I do include it, I get this error instead:
TypeError: add() got an unexpected keyword argument 'padding'
Model:
model_cnn.add(Conv2D(filters = 32, kernel_size = (5, 5), activation = 'relu', input_shape = (SIZE, SIZE, 3), padding = 'same'))
model_cnn.add(Conv2D(filters = 32, kernel_size = (5, 5), activation = 'relu'), padding = 'same')
model_cnn.add(MaxPool2D(pool_size = (2, 2)))
model_cnn.add(Dropout(rate = 0.3))
model_cnn.add(Conv2D(filters = 64, kernel_size = (3, 3), activation = 'relu'))
model_cnn.add(Conv2D(filters = 64, kernel_size = (3, 3), activation = 'relu'))
model_cnn.add(MaxPool2D(pool_size = (2, 2)))
model_cnn.add(Dropout(rate = 0.3))
model_cnn.add(Flatten())
model_cnn.add(Dense(256, activation = 'relu'))
model_cnn.add(Dropout(rate = 0.3))
model_cnn.add(Dense(43, activation = 'softmax'))
model_cnn.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
AI: The first error you're getting is likely because the input becomes too small for the network to perform a 5 by 5 convolution on. The second error is caused by the fact that you are placing the padding argument in the wrong place. You are currently using it for the model.add call, whereas you should use it with the Conv2D classs:
model_cnn.add(Conv2D(filters = 32, kernel_size = (5, 5), activation = 'relu', padding = 'same')) |
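For completeness, a minimal sketch of the first block with everything in place (SIZE is whatever image size you are using; the imports assume the tf.keras API):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout
model_cnn = Sequential()
model_cnn.add(Conv2D(filters = 32, kernel_size = (5, 5), activation = 'relu', input_shape = (SIZE, SIZE, 3), padding = 'same'))
# padding belongs inside Conv2D, not in model.add()
model_cnn.add(Conv2D(filters = 32, kernel_size = (5, 5), activation = 'relu', padding = 'same'))
model_cnn.add(MaxPool2D(pool_size = (2, 2)))
model_cnn.add(Dropout(rate = 0.3))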
H: Specificity over 100
I am constructing a deep neural network for a classification task. When I look at the metrics, I have a specificity of 1.04. Is it possible for this metric to be over 100% (i.e. above 1)? How do you interpret it?
Thank you.
AI: The specificity is defined as
$\text{Specificity} = \frac{\sum{\text{True Negative}}}{\sum{\text{True Negative}} + \sum{\text{False Positive}}}$.
These counts are non-negative, and since the numerator can never exceed the denominator, specificity is always bounded between 0 and 1 (i.e. 0% and 100%). A value of 1.04 is therefore not a valid specificity; it points to a bug in how the metric (or the confusion-matrix counts behind it) is being computed.
For more details on sensitivity and specificity you can check this answer: Usage of Precision Recall on an unbalanced dataset |
H: Logistic regression - Odds ratio vs Probability
In Logistic regression, the final values we achieve are associated with Probability. Then why do we need Logit/Log of odds? We can directly use probability.
Is Logit used to get the equation of a best fit line?
AI: The Log of Odds is used for interpretation purposes if we want to compare Logistic Regression to Linear Regression. Unlike linear regression, $\beta_0 + \beta_1X$ does not directly give you the estimated value of your response variable. It gives the estimated log of odds; here's a short derivation that you already may have seen:
$$p = \frac{e^{\beta_0+\beta_1X}}{1+e^{\beta_0+\beta_1X}}$$
$$\frac{p}{1-p}=e^{\beta_0+\beta_1X}$$
$$ ln(\frac{p}{1-p}) = \beta_0+\beta_1X$$
This is different from linear regression which takes the following form:
$$ \hat y = \beta_0 + \beta_1X$$
If $\beta_0+\beta_1X$ doubles, $\hat y$ doubles in the case of linear regression, but the probability does not double; the log of odds does.
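A quick numeric illustration with made-up coefficients (each unit increase in $X$ adds $\beta_1$ to the log of odds and multiplies the odds by $e^{\beta_1}$, while the change in probability depends on where you start):
import numpy as np
beta0, beta1 = -1.0, 0.7   # made-up coefficients for illustration only
def prob(x):
    return 1 / (1 + np.exp(-(beta0 + beta1 * x)))
for x in [0, 1, 2]:
    p = prob(x)
    # the odds are multiplied by exp(0.7) at every step; the probability is not
    print(x, round(p, 3), round(p / (1 - p), 3))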
H: Keras: How to restore initial weights when using EarlyStopping
Using Keras, I setup EarlyStoping like this:
EarlyStopping(monitor='val_loss', min_delta=0, patience=100, verbose=0, mode='min', restore_best_weights=True)
When I train it behaves almost as advertised. However, I am initializing my model weights before training using weights I know are a good baseline.
The problem is when I train, although EarlyStopping kicks in, it ignores my initial model and often picks the best model since training started (excluding initial model). The model that it picks is often worse than my initial one.
Is there a way to force it to consider the initial model?
AI: Based on how the EarlyStopping callback is implemented, there doesn't seem to be a way to accomplish this. After an epoch ends (in your case more specifically the end of the first epoch) it checks whether the value at the end of the epoch is an improvement over the current best value (see this function), where the current best is stored in self.best. When the training of the model starts this variable is initialized to numpy.Inf or -numpy.Inf, depending on the mode that is used (see this function). This means that the value at the end of the first epoch is always treated as an improvement over the value at the start of training, therefore the callback can only restore weights back to the first epoch at the earliest.
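A minimal workaround sketch (assuming your model already has the good initial weights loaded, (x_val, y_val) is your validation data, and the model was compiled with only a loss so that evaluate returns a single scalar): evaluate the initial model yourself, keep a copy of its weights, and roll back after training if nothing better was found.
initial_weights = model.get_weights()
initial_val_loss = model.evaluate(x_val, y_val, verbose=0)
early_stopping = EarlyStopping(monitor='val_loss', patience=100,
                               mode='min', restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=1000, callbacks=[early_stopping])
# compare the best model found during training against the initial model
best_val_loss = model.evaluate(x_val, y_val, verbose=0)
if initial_val_loss < best_val_loss:
    model.set_weights(initial_weights)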
H: Logistic Regression - Odds & log of odds
$\ln(\frac{p}{1-p}) = \beta_0 + \beta_1X$
The equation above states that the log of odds is linearly related to the predictor variables.
Why is the log of odds linearly related to the predictor variables, but not the plain odds?
AI: To understand this, we first have to look at the maths of logistic regression.
The model equation of logistic regression is given by:
$$P(y|x;w) = \text{Sigmoid}(w^Tx + b)$$
Now if we compute the odds and take the log on both sides, following the derivation below, it becomes clear why the log of odds is linearly related to the predictor variables.
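The image originally referenced here may not be visible, so a standard reconstruction of the steps is:
$$p = \frac{1}{1+e^{-(w^Tx+b)}}$$
$$1 - p = \frac{e^{-(w^Tx+b)}}{1+e^{-(w^Tx+b)}}$$
$$\frac{p}{1-p} = e^{w^Tx+b}$$
$$\ln\left(\frac{p}{1-p}\right) = w^Tx+b$$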
Taking the log on both sides in the last step gives the log of odds,
i.e. $\ln(\frac{p}{1-p})$ is linearly related to $w^Tx + b$, while the plain odds $\frac{p}{1-p} = e^{w^Tx+b}$ depend on the predictors exponentially.
Image Ref : https://ayearofai.com/rohan-6-follow-up-statistical-interpretation-of-logistic-regression-e78de3b4d938 |