H: In k-fold-cross-validation, why do we compute the mean of the metric of each fold
In k-fold cross-validation, the "correct" scheme seems to be to compute the metric (say, the accuracy) for each fold, and then return the mean as the final metric.
Source: https://scikit-learn.org/stable/modules/cross_validation.html
However, why can't we just compute the metric directly on all the predictions, since we have an estimate for every data point?
Thanks
AI: It's OK to compute the global performance on the concatenation of the predictions from all the K folds after running the cross-validation process; it depends on the goal and on the metric (for instance, the mean accuracy over the folds gives the same result as the global accuracy, but that's not true for every evaluation measure).
But very often the goal involves not only measuring performance accurately but also measuring the variance of the performance across the folds, in order to detect instability. This can't be done from the concatenation of the predictions, so it's often more convenient to keep the folds results separated.
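For a concrete illustration, here is a minimal sketch (assuming scikit-learn; the dataset and model are arbitrary) where the mean of per-fold F1 scores differs from the F1 computed on the concatenated out-of-fold predictions, and the per-fold scores also give a variance estimate:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
clf = LogisticRegression(max_iter=1000)

# per-fold scores: mean measures performance, std measures stability across folds
fold_scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean of per-fold F1:", fold_scores.mean(), "std:", fold_scores.std())

# global score on the concatenated out-of-fold predictions
global_preds = cross_val_predict(clf, X, y, cv=5)
print("F1 on concatenated predictions:", f1_score(y, global_preds))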
(this is my interpretation, there might be other reasons) |
H: Sequence prediction with unlimited predictions
I have a special kind of prediction problem.
I have observed $M$ sequences $X_m = [x_1, x_2, ..., x_N]$ where the distance $d$ between $x_n$ and $x_{n+1}$ is drawn from the same normal distribution, eg $d \sim N(\mu, \sigma^2)$. I can learn the parameters $\mu, \sigma$.
Now I need to predict/generate a whole sequence at once where a prediction $\hat{x}_n$ is considered correct if it falls within some absolute tolerance of the true data point $x_n$. There is one caveat: I can make as many predictions as I want without penalty but there must always be a minimum distance $\epsilon$ between predictions $\hat{x}_n$ and $\hat{x}_{n+1}$, where we can safely assume $\epsilon << d$. Intuitively, this makes me want to predict a pattern rather than trying to predict each point individually.
To re-iterate: a prediction that is outside the tolerance of a true data point is not penalized. We only need to maximize the number of correct predictions (that fall within the tolerance of a true data point).
Example 1
Prediction: [10, 20, 30]
True observation: [11, 21, 31]
------------------------------
3 correct predictions if tolerance >= 1, else 0 correct predictions
Example 2
Epsilon = 4 (i.e. consecutive predictions must be at least 4 apart)
Prediction: [6, 10, 15, 22, 30, 35]
True observation: [11, 21, 31]
-----------------------------------
tolerance = 1 => 2 correct predictions (10, 30)
tolerance = 2 => 3 correct predictions (10, 22, 30)
What would be a good way to approach this problem? Are there problems that are similar?
Edited for clarity.
AI: The biggest issue with your evaluation scheme ("success" means within tolerance, "failure" means outside tolerance), plus your constraint that model outputs need to vary per time step, is that it will be hard to extract gradients in order to train the prediction model directly. This rules out many simple and direct regression models, at least if you want to use "maximise the number of scores within tolerance" as your objective function. The constraints on sequential predictions and allowing re-tries are also non-differentiable if taken as-is.
I think you have two top level choices:
1. Soften the loss function, and add the hard function as a metric
Use a differentiable loss function that has best score when predictions are accurate and constraints are met. For example your loss function for a single predicted value could be
$$L(\hat{x}_n, \hat{x}_{n+1}, x_{n+1}) = (\hat{x}_{n+1} - x_{n+1})^2 + \frac{a}{1+e^{s(|\hat{x}_n - \hat{x}_{n+1}| - \epsilon)}}$$
The second (constraint) part is essentially a sigmoid, with $a$ controlling the relative weight of meeting the constraint versus the accuracy of the prediction, and $s$ controlling the steepness of the cutoff around the constraint (a small code sketch of this loss is given after points a-c below).
a. The weighting between prediction loss and constraint loss will be a hyper-parameter of the model. So you would need to include $a$ and $s$ amongst parameters to search if you used my suggested loss function.
b. You can use your scoring system, not as an objective function, but as a metric to select the best model on a hyper-parameter search.
c. With this approach you can use many standard sequence learning models, such as an LSTM (if you have enough data). Or you could just use a single-step prediction model that you feed the current prediction plus any other features of the sequence that it is allowed to know, and generate sequences from it by calling it repeatedly.
This system should encourage re-tries that get closer to the true value.
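A minimal numpy sketch of this softened loss (the function and parameter names are my own, for illustration only):
import numpy as np

def soft_loss(x_hat_n, x_hat_next, x_true_next, eps, a=1.0, s=10.0):
    # squared prediction error plus a sigmoid penalty that grows when two
    # consecutive predictions are closer than the minimum distance eps
    prediction_term = (x_hat_next - x_true_next) ** 2
    constraint_term = a / (1.0 + np.exp(s * (abs(x_hat_n - x_hat_next) - eps)))
    return prediction_term + constraint_term

# a pair of predictions that violates the minimum-distance constraint...
print(soft_loss(x_hat_n=10.0, x_hat_next=11.0, x_true_next=12.0, eps=4.0))
# ...versus an equally accurate pair that respects it
print(soft_loss(x_hat_n=10.0, x_hat_next=15.0, x_true_next=12.0, eps=4.0))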
2. Use your scoring system directly as a learning goal
This will require some alternative optimising framework to gradient descent around the prediction model (although some frameworks can generate gradients internally). Genetic algorithms or other optimisers could be used to manage parameters of your model, and can attempt to change model parameters to improve results.
For this second case, assuming you have some good reason to want to avoid constructing a differentiable loss function at all, then this problem can be framed as Reinforcement Learning (RL):
State: the current sequence item prediction (or a null entry), as well as any known information such as the tolerance, the length of the sequence, and the current sequence item value (which may be different from the current prediction); $\epsilon$, $d$, $\mu$ or $\sigma$ can also be part of the current state.
The action is to select the next sequence value prediction, or probably more usefully, the offset to the next sequence item value. Using offsets allows you to easily add the constraint for the minimum distance $\epsilon$.
The reward is +1 for being within tolerance or 0 otherwise.
Time steps match the time steps within a current sequence.
You can use this to build a RL environment and train an agent that will include your prediction/generator model inside it. There are a lot of options within RL for how to manage that. But what RL gives you here is a way to define your goal formally using non-differentiable rewards, whilst internally the model can still be trained using gradient based methods.
The main reason to not use RL here is if the prediction model must be assessed at the end of generating the sequence. In which case the "action" might as well be the whole sequence, and becomes much harder to optimise. It is not 100% clear to me from the question whether this is the case.
Caveat: RL is a large and complex field of study. If you don't already know at least some RL, you can expect to spend several weeks getting to grips with it before starting to make progress on your original problem.
There are alternatives to RL that could equally apply, such as NEAT - deciding which could be best involves knowing far more about the project (e.g. the complexity of the sequences you wish to predict) and practical aspects such as how much time you have available to devote to learning, testing and implementing new techniques.
Have you forgotten something?
If you allow infinite re-tries, then an obvious strategy is to generate a very large sequence moving up and down using different step sizes (all greater than $\epsilon$). This doesn't require any learning model, just a bit of smart coding to cover all integers eventually. Chances are this model is only a few lines of code in most languages.
If this is to be ruled out, then some other rule or constraint is required:
Perhaps only positive increments are allowed in the predicted sequence (so we cannot re-try by subtracting and trying again)? This conflicts with your "unlimited predictions" statement.
Perhaps a sub-goal here is to make the guessing efficient? In which case RL could be useful, as you can add a discount factor to the reward processing in order to make the model prefer to get predictions correct sooner. |
H: How to predict based on multiple samples?
I am relatively new to ML, so I apologize in advance if my question shows a lack of understanding of the field.
The problem
A particular study course has a high drop-out rate and we want to reduce it.
The dataset
We have two data sets:
Data about the students themselves (e.g. scores of initial aptitude tests, demographics). This dataset includes the feature to predict. E.g.
ID, AGE, RESULT_TEST1, RESULT_TEST2,DROPPED_OUT
AAAA, 21, 0.6, 0.4, TRUE
BBBB, 20, 0.3, 0.9, FALSE
Exams taken by the students during the course. This dataset contains data about the type of the exam (e.g. subject, level of complexity) and the results obtained by the student. Some of the exams are mandatory and some can be taken on a voluntary basis. That is, each student can take an arbitrary list of exams. E.g.
ID, EXAM_ID, EXAM_COMPLEXITY, EXAM_SCORE
AAAA, XXXXX, 0.8, 0.4
BBBB, YYYYY, 0.2, 0.8
The goal
The idea is to use ML to calculate the likelihood of a particular student dropping out during the course using historical data. The system should be able to predict, based on the two datasets, how likely a student is to drop out so that we can give him/her more support.
The challenge
How do I combine the two datasets to train a model? I could create a superset joining the two but what would I then use to predict a result?
E.g. If I train the model on:
ID, AGE, RESULT_TEST1, RESULT_TEST2, EXAM_ID, EXAM_COMPLEXITY, EXAM_SCORE
AAAA, 21, 0.6, 0.4,XXXXX, 0.8, 0.4
BBBB, 20, 0.3, 0.9,YYYYY, 0.2, 0.8
How do I predict the likelihood of DROPPED_OUT passing all the data I have for a new student (multiple samples)?
Any explanation or pointers to documentation would be greatly appreciated.
Thank you in advance.
Alex
AI: This is a reasonably standard problem for supervised ML:
The class is the variable "dropped_out"
Given the goal to predict a variable which is specific to a particular student, an instance must represent a student, not an exam.
This definition of what an instance should consist of seems to be the part that you haven't reached yet: you correctly saw that you need to join the two datasets, but in your example you join them by exam id. As a result you obtain "instances" which each represent a particular exam taken by a particular student, and of course the same student might appear several times in the data. The solution is to join your datasets by student id so that a single instance contains all the information for one student, i.e. something like this:
AGE, RESULT_TEST1, RESULT_TEST2, SCORE_EXAM1, SCORE_EXAM2, SCORE_EXAM3,...., DROPPED_OUT
However it seems that the exams are not normalized, so I see two options:
Simplification: for each student, give only some summary statistics about their performance on exams, for example min, max, avg and std dev for both the score and the complexity (see the sketch after these two options). This gives a fixed number of features (8 in my example), each with a specific role, so that the ML method can "make sense" of it.
Refactor the data: if possible, rearrange the exam data so that a column corresponds to the same exam for different students. This would mean that the exam complexity is not needed anymore, because the distribution of the grades is the only thing which matters. It's ok to have some missing/undefined values for the students who didn't take a particular exam, most ML methods can deal with that.
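A minimal pandas sketch of the first option (per-student summary statistics), assuming the two tables are loaded as DataFrames with the column names from the question:
import pandas as pd

students = pd.DataFrame({
    "ID": ["AAAA", "BBBB"],
    "AGE": [21, 20],
    "RESULT_TEST1": [0.6, 0.3],
    "RESULT_TEST2": [0.4, 0.9],
    "DROPPED_OUT": [True, False],
})
exams = pd.DataFrame({
    "ID": ["AAAA", "AAAA", "BBBB"],
    "EXAM_COMPLEXITY": [0.8, 0.5, 0.2],
    "EXAM_SCORE": [0.4, 0.7, 0.8],
})

# one row per student: summary statistics over all the exams they took
exam_stats = exams.groupby("ID")[["EXAM_SCORE", "EXAM_COMPLEXITY"]].agg(
    ["min", "max", "mean", "std"])
exam_stats.columns = ["_".join(c) for c in exam_stats.columns]

# join back onto the student-level table: one instance per student
training_table = students.merge(exam_stats, left_on="ID", right_index=True, how="left")
print(training_table)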
The second option is very likely to give better results than the first, but it might be impractical to transform the data this way. |
H: What exactly is a trained model for machine-learning?
I am curious about the deployment phase of a machine-learning model. So, after you run your Python script to train on your millions of data points and it works, what are some methods of deploying your model to the real world? I know libraries such as pickle exist to serialize a model, but what exactly is a model?
Is it some sort of blueprint which is then used to craft the predicted answer? If so, why is having a pickled object in the real world much faster at churning out predictions than re-training your model again?
AI: A machine learning model is an algorithm which learns features from the given data to produce labels, which may be continuous or categorical (regression and classification respectively). In other words, it tries to relate the given data to its labels, just as the human brain does.
Mathematical functions are used to map the features (produced as a result of feature extraction) to their labels.
Models trained on a large amount of data can generalise better. Generalisation is the ability of a model to give good predictions across varied or diverse data; such a model is not biased towards the data on which it was originally trained.
Production of models has these basic steps involved:
Collection of suitable data.
Preprocessing of data for training the model.
Training the model.
Evaluating the model.
Hosting the model for production.
Deploying a model:
This could be done in many ways. You can serve a model from a server or run it on an IoT device. You basically want to freeze all the trainable parameters so that they are constants. The model's learning capabilities are removed so that it can only make predictions. (There are some models which exhibit online learning.)
Different frameworks like TensorFlow, Keras, PyTorch etc. have their own methods of saving models. For example, in Keras we can save a NN model as an HDF5 serialized file. With TensorFlow Lite, we can run a model on an IoT device.
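A minimal Keras sketch of this idea (the architecture here is arbitrary, purely for illustration): the whole trained model is serialized to a single HDF5 file and reloaded later just to make predictions.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.save("model.h5")                          # serialize architecture + weights
restored = keras.models.load_model("model.h5")  # in production: load and predict
# restored.predict(new_data)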
Firebase ML Kit can host a model created with TensorFlow in the cloud.
The best deployment platform is determined by its usability, scalability and developer friendliness. |
H: Why do we use a softmax activation function in Convolutional Autoencoders?
I have been working on an image segmentation project where I have created a convolutional autoencoder. I saw this image and implemented it using Keras.
At the output layer, the author has used the softmax activation function. Shouldn't it be ReLU?
To me, this seems to be a regression problem, where we need to predict continuous values for the segmented image pixels. If so, why are we using a softmax function instead of ReLU or a linear function?
AI: Looking at the images, it seems to be an image segmentation problem. As you can see, the autoencoder is not returning a more or less accurate reproduction of the original input image, but a segmentation of it. Each of the output colors is a segment of the image that has been classified. The classes are probably something such as "road", "tree", "building", "sky", etc.
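For intuition, here is a tiny Keras sketch (my own toy architecture, not the author's) where the final decoder layer is a per-pixel softmax over the class channels:
from tensorflow import keras
from tensorflow.keras import layers

n_classes = 5  # e.g. road, tree, building, sky, other (assumed labels)
inputs = keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)                      # encoder
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                      # decoder
# per-pixel class probabilities: softmax across the channel (class) axis
outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")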
In other words, it's an Autoencoder performing a classification task. That explains the softmax layer at the end. |
H: Which function should I use for a nonlinear regression in panel data in R?
I have a data set with different years, so it is panel data. I know what features I should use in my model for a nonlinear regression. I have already used the nls() function for each year to find coefficients and then taken an average of these coefficients.
I would like to know if it is rational to do this, or should I follow another strategy for a nonlinear regression for a panel data?
AI: I don't quite get what your problem is (what is non-linear in your data/model?). One way of modelling a panel data set is to use a dummy-variable FE model and "control" for non-linearity in continuous variables/features by applying generalised additive models (GAM) with regression splines.
I don't understand why you split up your data for the regression. Why not control for year via indicator/dummy variables? |
H: Why such a big difference in number between training error and validation error?
Question
Why is there such a big difference between my 'Train loss' and 'Validation loss', as shown in the picture below? Is it a signal that my code is wrong and my trained network is wrong as well?
Some of my codes are as follows:
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense
from keras import optimizers
import matplotlib.pyplot as plt

DATA_SPLIT_PCT = 0.2
timesteps = 5
n_features = 20
epochs = 100
batch = 32
lr = 0.0001
lstm_autoencoder = Sequential([
# Encoder
LSTM(8, activation='relu', input_shape=(timesteps, n_features), return_sequences=True),
LSTM(4, activation='relu', return_sequences=False),
RepeatVector(timesteps),
# Decoder
LSTM(4, activation='relu', return_sequences=True),
LSTM(8, activation='relu', return_sequences=True),
TimeDistributed(Dense(n_features)),
])
adam = optimizers.Adam(lr)
lstm_autoencoder.compile(loss='mse', optimizer=adam)
for ticker in stock_list:  # 500 stocks in stock_list
    lstm_autoencoder_history = lstm_autoencoder.fit(X_train_dict[ticker], X_train_dict[ticker],
                                                    epochs=epochs,
                                                    batch_size=batch,
                                                    validation_data=(X_valid_dict[ticker], X_valid_dict[ticker]),
                                                    verbose=False).history
plt.plot(lstm_autoencoder_history['loss'], linewidth=2, label='Train')
plt.plot(lstm_autoencoder_history['val_loss'], linewidth=2, label='Valid')
plt.show()
I used the for loop to feed my data into the lstm_autoencoder network. In stock_list there are 500 stock names such as 'AAPL'.
I plotted lstm_autoencoder_history['loss'] and lstm_autoencoder_history['val_loss'], and it looks weird because usually the validation loss is higher than the training loss.
I am curious to know why my plot shows a smaller validation loss. For your information, I used Keras as my deep learning framework, and I assumed the library would handle the different sizes of the training and validation sets by averaging the errors.
AI: I think this is because your training set is much larger than your validation set. What this means is that during training, losses are accumulated over a comparatively larger number of examples than during validation. Hence your training error is larger than your validation error.
I don't think it's a problem with your code. If you look closely, the trend line has almost the same slope for both training loss and validation loss; this suggests that you are okay as far as the code is concerned. |
H: identify similarities in a dataset
I have a dataset of customers:
age height weight eye_colour
30 174 74 Nan
20 191 71 Nan
28 165 56 Brown
...
I would like to identify similarities between customers where the eye_colour is NaN, in order to help identify the cause of this missing value.
Is there any method to help identify similarities?
AI: In general, identifying similarities is done with clustering, but in this case what you're looking for is a potential pattern in the data leading to a missing value in a specific column, right? So I would try to train a decision tree using all the columns except the target one as features, and using a binary class indicating whether the target column has a missing value or not. Visualizing the decision tree obtained after training should show the pattern if there is one.
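A minimal sketch of that idea with scikit-learn (the toy data below just mirrors the question's columns):
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({
    "age": [30, 20, 28, 45, 33],
    "height": [174, 191, 165, 180, 170],
    "weight": [74, 71, 56, 90, 60],
    "eye_colour": [None, None, "Brown", "Blue", None],
})

# binary target: is eye_colour missing?
y = df["eye_colour"].isna().astype(int)
X = df.drop(columns=["eye_colour"])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
Splits that cleanly separate the missing-eye_colour rows point at the pattern, if one exists. |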
H: k fold cross validation
I want to use KFold from model_selection instead of cross_validation, but it didn't work for the KFold object.
from sklearn.model_selection import KFold
import xgboost as xgb
# Some useful parameters which will come in handy later on
ntrain = X_train.shape[0]
ntest = X_test.shape[0]
SEED = 123 # for reproducibility
NFOLDS = 10 # set folds for out-of-fold prediction
kf = KFold(shuffle=False, n_splits= NFOLDS, random_state=SEED)
def get_oof(clf, x_train, y_train, x_test):
    oof_train = np.zeros((ntrain,))
    oof_test = np.zeros((ntest,))
    oof_test_skf = np.empty((NFOLDS, ntest))
    for i, (train_index, test_index) in enumerate(kf):
        x_tr = x_train[train_index]
        y_tr = y_train[train_index]
        x_te = x_train[test_index]
        clf.train(x_tr, y_tr)
        oof_train[test_index] = clf.predict(x_te)
        oof_test_skf[i, :] = clf.predict(x_test)
    oof_test[:] = oof_test_skf.mean(axis=0)
    return oof_train.reshape(-1, 1), oof_test.reshape(-1, 1)
xgb_oof_train, xgb_oof_test = get_oof(xgb,x_train, y_train, x_test)
I got this error: TypeError: 'KFold' object is not iterable
AI: Try to use the split method as the enumerate argument, instead of kf itself, e.g. for i, (train_index, test_index) in enumerate(kf.split(x_train)):
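A small self-contained sketch of the corrected pattern (toy data, just to show the API):
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)
kf = KFold(n_splits=5, shuffle=False)

# KFold itself is not iterable; kf.split(X) yields (train_index, test_index) pairs
for i, (train_index, test_index) in enumerate(kf.split(X)):
    print(i, train_index, test_index)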
Hope it helps! |
H: Can word embedding be used for text classification on a mix of English and non-English text?
I'm doing text classification on text messages generated by consumers and just realized that, even though most of the replies provided by consumers are in English, some are in French. I've used Keras word embeddings, Conv1D and max pooling to learn the structure in the text, and didn't use any other text preprocessing techniques such as stop word removal.
In this case, I think it should be fine to use word embedding on both languages since word embedding learns the meaning of individual words regardless of languages...Is this reasonable? Or maybe I do need to separate the languages and build different models for each language?
AI: In this case, I think it should be fine to use word embedding on both languages since word embedding learns the meaning of individual words regardless of languages...Is this reasonable? Or maybe I do need to separate the languages and build different models for each language?
If I think about it logically, you are correct. A word embedding is merely a collection of tokens, each of which derives its features on the basis of nearby words in a sentence. So if you have sufficient raw data (a mix of both languages), I think you are good to go, though the results will tell you more :).
However, it would be good to see how such models behave when we have a mix of left-to-right (LTR) and right-to-left (RTL) languages. |
H: How do GD, Batch GD, SGD, and Mini-Batch SGD differ?
How do these four types of gradient descent functions differ from each other?
GD
Batch GD
SGD
Mini-Batch SGD
AI: Gradient descent is an optimization method used to optimize the parameters of a model using the gradient of an objective function (the loss function in a NN). It optimizes the parameters until the value of the loss function is at a minimum (or we've reached a minimum of the loss function). In neural networks it is applied together with backpropagation, which computes the gradients.
All the below methods are variants of Gradient Descent. You can learn more from this video.
Batch Gradient Descent:
All samples from the whole dataset are used to compute the gradients for a single update of the parameters. For a dataset of 100 samples, one update occurs per epoch.
Stochastic Gradient Descent:
Stochastic GD computes the gradient for each and every sample in the dataset and hence makes an update for every sample. For a dataset of 100 samples, 100 updates occur per epoch.
Mini Batch Gradient Descent:
This is meant to capture the good aspects of both batch and stochastic GD. Instead of a single sample (stochastic GD) or the whole dataset (batch GD), we take small batches or chunks of the dataset and update the parameters accordingly. For a dataset of 100 samples, if the batch size is 5 we have 20 batches, hence 20 updates occur per epoch.
All the above methods use gradient descent for optimization. The main difference is how many samples the gradients are calculated on. Gradients are averaged over the batch in mini-batch and batch GD.
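A small numpy-only sketch (toy linear-regression data, names are my own) that makes the difference concrete by counting parameter updates in one epoch:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def one_epoch(batch_size, lr=0.01):
    w = np.zeros(3)
    updates = 0
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # gradient averaged over the batch
        w -= lr * grad
        updates += 1
    return updates

print("Batch GD updates per epoch:     ", one_epoch(batch_size=100))  # 1
print("Mini-batch GD updates per epoch:", one_epoch(batch_size=5))    # 20
print("Stochastic GD updates per epoch:", one_epoch(batch_size=1))    # 100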
You can refer to these blogs/posts:
Batch gradient descent versus stochastic gradient descent
Gradient Descent Algorithm and Its Variants |
H: Converting pandas series object to int in pandas
I started my machine learning journey by deciding to explore recommender systems so that I can apply them in some of the projects for my company. Now, I am using pandas for data analysis. The data set is the IMDb movies data set. Here, I am trying to convert a pandas Series object to int, but it converts the Series to float64.
Here is the screenshot:
'clean_ids' is the method that I am using to do this and you can see that 'id' changes to float64. Please help me with this.
AI: Your code is correct. The problem is that you have NaN values in your id column, and Python represents NaN as a float. Because of the NaN values the column stays float64, so the integer values it contains are represented as $x.0$. See here for more.
As for a solution to your problem, you can either drop the NaN values or use the nullable IntegerArray dtype from pandas. It is currently experimental but suits your problem.
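A minimal sketch of both options (toy series, pandas assumed):
import numpy as np
import pandas as pd

s = pd.Series([101.0, 102.0, np.nan, 104.0])

# Option 1: drop the NaN rows, then cast to int
print(s.dropna().astype(int))

# Option 2: the nullable integer dtype keeps the missing value as <NA>
print(s.astype("Int64"))
Either way the remaining ids are shown as plain integers rather than $x.0$. |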
H: Improve the loss reduction in a neural network model
The following code is to train a neural network model of a given dataset (50,000 samples, 64 dim).
from keras import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
X, y = process_dataset()
model = Sequential([
Dense(16, input_dim=X.shape[1], activation='relu'),
Dense(16, activation='relu'),
Dense(1, activation='sigmoid')
])
'''
Compile the Model
'''
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
'''
Fit the Model
'''
model.fit(X, y, shuffle=True, epochs=1000, batch_size=200, validation_split=0.2, verbose=2)
In the beginning, you can see below that the val_loss gets reduced from one epoch to another very well.
Epoch 82/1000
- 0s - loss: 0.2036 - acc: 0.9144 - val_loss: 0.2400 - val_acc: 0.8885
Epoch 83/1000
- 0s - loss: 0.2036 - acc: 0.9146 - val_loss: 0.2375 - val_acc: 0.8901
When the model takes many epochs, the loss change becomes so small, especially when the number of epochs increases.
Epoch 455/1000
- 0s - loss: 0.0903 - acc: 0.9630 - val_loss: 0.1317 - val_acc: 0.9417
Epoch 456/1000
- 0s - loss: 0.0913 - acc: 0.9628 - val_loss: 0.1329 - val_acc: 0.9443
Kindly, I have two questions:
What does this phenomenon mean? I.e., the loss decreases quickly at the beginning, but there is little further reduction after the training has run for many epochs.
What is the possible solution for this?
Thank you,
AI: Generally, the decrease in loss gets smaller the longer you train your model. You can think about it this way: the model first makes good progress in learning, but later any further improvement becomes harder (and thus slower). At some point the model stops learning. This comes from the logic of gradient descent, the numerical optimization process behind most ML models. If the model has learned what it is able to learn, the loss does not decrease any more.
What can you do about this? You can try to make your model "better" in terms of learning capacity: increase the capacity of the model (more neurons) or add more layers. You can also adjust the learning rate during training via callbacks (ReduceLROnPlateau); in this case, you lower the LR automatically if learning progress becomes small. By doing so you can try to make the model learn more detailed patterns. See callbacks for Keras: https://keras.io/callbacks/.
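A minimal sketch of that callback idea (the parameter values are arbitrary starting points, not recommendations):
from keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # halve the learning rate when val_loss has not improved for 10 epochs
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=10, min_lr=1e-5),
    # stop once val_loss has stopped improving for 30 epochs
    EarlyStopping(monitor='val_loss', patience=30, restore_best_weights=True),
]
# then pass callbacks=callbacks to the model.fit(...) call shown above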
Here is a nice blogpost about how to train NN: http://karpathy.github.io/2019/04/25/recipe/ |
H: What arguments should I pass to input_shape parameter of LSTM function in Keras?
My dataset has 2944424 rows and 6 columns. I am using an LSTM in Keras to forecast taxi demand. I am having a problem with the input_shape parameter of the LSTM.
It gives the error:
ValueError: Error when checking input: expected lstm_15_input to have 3 dimension
AI: Input should be of shape (batch size, number of timesteps, features); the input_shape argument itself takes (number of timesteps, features), with the batch size left out.
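A small sketch of the reshaping step (numpy; the window length and sizes here are assumptions for illustration):
import numpy as np

data = np.random.rand(1000, 6)   # stand-in for your 2-D table of rows x columns
timesteps = 24                   # assumed window length

n_windows = data.shape[0] // timesteps
X = data[:n_windows * timesteps].reshape(n_windows, timesteps, data.shape[1])
print(X.shape)                   # (samples, timesteps, features)

# matching layer definition (the batch size is not part of input_shape):
# LSTM(32, input_shape=(timesteps, data.shape[1]))
See the following for details: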
https://machinelearningmastery.com/reshape-input-data-long-short-term-memory-networks-keras/ |
H: What is whole word masking in the recent BERT model?
I was checking BERT GitHub page and noticed that there are new models built from a new training technique called "whole word masking". Here is a snippet describing it:
In the original pre-processing code, we randomly select WordPiece tokens to mask. For example:
Input Text: the man jumped up , put his basket on phil ##am ##mon ' s head
Original Masked Input: [MASK] man [MASK] up , put his [MASK] on phil [MASK] ##mon ' s head
The new technique is called Whole Word Masking. In this case, we always mask all of the the tokens corresponding to a word at once. The overall masking rate remains the same.
Whole Word Masked Input: the man [MASK] up , put his basket on [MASK] [MASK] [MASK] ' s head
I can't understand "we always mask all of the the tokens corresponding to a word at once". "jumped", "phil", "##am", and "##mon" are masked and I am not sure how these tokens are related.
AI: phil ##am ##mon is a subword encoding of the single word "philammon" into 3 tokens. The comment just means that they mask whole words as opposed to individual tokens, taking the subword encoding into account.
For more on subword encodings, take a look at the slides from CS224n, especially Byte Pair Encoding, from the Feb 14 subwords lecture at http://web.stanford.edu/class/cs224n/index.html#schedule. |
H: drop columns and rows in one line in pandas
I want to drop a range of rows and columns of a dataframe. I did it as follows:
df.drop(df.columns[3:], axis=1, inplace=True)
df.drop(df.index[3:], axis=0, inplace=True)
Can I do the two operations in one method instead of two?
Or is there a more efficient way to accomplish this?
AI: Why don't you do this:
df = df.iloc[:3, :3]
It returns the same thing as your code. |
H: Are mini batches sampled randomly in Keras' Sequential.fit method()
When you .fit a Keras Sequential() model, you can specify a batch_size parameter. I have noticed it is sometimes defined independently from the actual dataset size. Does it mean that mini-batches are sampled randomly, instead of scrolling down the dataset from top to bottom, slice by slice?
AI: If you set shuffle=True as an argument of the model.fit method, Keras will shuffle the dataset before splitting it into batches (source), otherwise the dataset will be processed sequentially. |
H: Is there a way to put a separate line between clusters for k-Means Clustering?
k-Means Clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining.
Here is a piece of code to perform a 2-d k-means clustering:
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_blobs
X, y_true = make_blobs(n_samples=300, centers=3,
cluster_std=1.1, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=50);
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
y_kmeans = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200);
outputs a figure (fig_1)
this figure (fig_2) comes from wiki
Is there a way to draw separating lines like those shown in fig_2 on fig_1?
AI: There are two answers to this question.
The first one is yes, you can do it with Python code. Following a scikit-learn tutorial, you can plot the decision boundary by using a meshgrid:
import numpy as np
import matplotlib.pyplot as plt

# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(X[:, 0], X[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
And there is a second answer, the analytical one:
The decision boundary is the set of points at which we can't decide the label. For k-means it is the set of points equidistant from two centroids. A small calculation shows that this is a line. To find its equation, you only need the coordinates of the two centroids.
After that you find the middle of the segment between those two centroids, find a vector orthogonal to this segment going through the middle, and you have your equation. You just need to plot it and voilà! Coding this doesn't take long; I'll try to update the answer if you want. |
H: Around how much difference between two classes can be accounted as an imbalanced dataset?
I am building a binary classification model which has 17K samples of class A and 10K samples of class B. I want to know when a dataset faces the issues of an "imbalanced dataset".
AI: In general there is no hard rule about this, but this dataset looks fairly balanced.
The point about imbalanced data is that you have to keep in mind that the accuracy of your model has a different starting point. For this, scikit-learn has a DummyClassifier to run a baseline: https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html. This gives you a good indication of your baseline (if you choose the strategy most_frequent). In your case, any model which always forecasts class A would have an accuracy of 17/(10+17) ≈ 0.63, so any binary classifier with a score under this value would be really bad.
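A minimal sketch of that baseline (class sizes taken from the question; the features are placeholders):
import numpy as np
from sklearn.dummy import DummyClassifier

y = np.array([0] * 17000 + [1] * 10000)   # 17K samples of class A, 10K of class B
X = np.zeros((len(y), 1))                  # features are irrelevant for the baseline

baseline = DummyClassifier(strategy='most_frequent').fit(X, y)
print(baseline.score(X, y))                # about 17000 / 27000 = 0.63
Any real classifier should beat this score before it is considered useful. |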
H: Distinguish Multi-Task vs Single-incremental Task in Continual Learning
I read through the Internet and found this:
Most of Continuous Learning studies focus on a Multi-Task scenario, where the same model is required to learn incrementally a number of isolated tasks without forgetting the previous ones.
And they explain PermutedMNIST is Multi-Task. I have no idea why?
In other words,
Single-Incremental-Task considers a single task which is incremental in nature. In other words, we still add new classes sequentially but the classification problem is unique and when computing accuracy we need to distinguish among all the classes encountered so far.
However, I still cannot tell these two definitions apart. Could someone help me out and give an example of a Multi-Task scenario and a Single-Incremental-Task scenario?
AI: TL;DR/Summary: The classes ($y1$, $y2$, $y3$ below) in multi-task can be anything (it may be that $y1 \cap y2 = \emptyset$, $y2 \cap y3 = \emptyset$, and so on). In the incremental case we take the labels (and the data) from a common set, i.e. $y[:2] \subseteq y[:4] \subseteq y$ by the definition of subsetting.
It is just a question of the interpretation of the definition. The definitions look very similar to each other but the devil's in the details.
Assuming that the model can distinguish the upper-bound number of classes: for example, it is an ANN with $N$ neurons in the output layer and the number of classes ($k$) in the task with the most classes satisfies $k \le N$ (multi-task), or the total number of classes ($k$) satisfies $k \le N$ (single-incremental-task); we can say that:
Multi Task
Here we will train the model on different tasks over time; this is often called reinforcement learning. In semi-Python pseudocode (where .train already includes things like cross validation):
model = Whatever(...)
X1 = [[1, 0],
[2, 2],
[3, 0]]
y1 = [0, 1, 0]
model.train(X1, y1)
X2 = [[4, 4],
[5, 5],
[6, 0]]
y2 = [1, 2, 0]
model.train(X2, y2)
X3 = [[7, 0],
[8, 8],
[9, 9]]
y3 = [0, 1, 1]
model.train(X3, y3)
score = model.score(X3, y3)
Here the tasks may or may not be related, though often they are slightly related (e.g. identifying different types of objects in each training).
Single Incremental Task
This is also training the model several times, but here we have a single task in X and do not feed the entire dataset at once. In semi-Python pseudocode:
model = Whatever(...)
X = [[1, 0],
[2, 2],
[3, 0],
[4, 4],
[5, 5],
[6, 0]]
y = [0, 1, 0, 3, 2, 0]
model.train(X[:2, :], y[:2])
score1 = model.score(X[:2, :], y[:2])
model.train(X[:4, :], y[:4])
score2 = model.score(X[:4, :], y[:4])
model.train(X, y)
score3 = model.score(X, y)
Here the task is one, but it may be a big one. One place where this technique is used is to build a learning curve, which is one way of evaluating whether we have enough data to capture the variation of the task.
Extra note: in the multi-task case we said that $y1 \cap y2 \cap y3 = \emptyset$ could be (and most likely is) possible. One example would be: $y1$ are different models of cars and $y2$ are different models of ships. And the question is: does understanding different models of cars help with differentiating different models of ships?
(P.S. the ys will always be enumerated from 0 up to the number of classes, i.e. the numeric values of y will always be the same, but their class meanings do not need to be). |
H: How does Byte Pair Encoding work?
I am using this to do some Byte Pair Encoding (BPE). My corpus looks like this.
When I run the learn_bpe, I get a vocabulary that looks like this.
e r
r e
o n
o r
t i
) ;</w>
a c
n t
' ,</w>
er r
a l
r o
h e
m e
When I try to apply it again to see if it worked, with subword-nmt apply-bpe -c data/jsvocab.txt < data/javascript.txt > tst.txt, the resulting file has a lot of strange @ characters.
const p@@ re@@ F@@ or@@ m@@ at@@ t@@ e@@ d@@ B@@ l@@ o@@ c@@ k@@ N@@ a@@ me@@ s = {
'@@ ap@@ i@@ -@@ p@@ ro@@ j@@ ect@@ s@@ '@@ : '@@ A@@ P@@ I P@@ ro@@ j@@ ect@@ s@@ ',
'@@ b@@ a@@ s@@ i@@ c@@ -@@ c@@ ss@@ '@@ : '@@ B@@ a@@ s@@ i@@ c C@@ S@@ S@@ ',
'@@ b@@ a@@ s@@ i@@ c@@ -@@ h@@ t@@ m@@ l@@ -@@ and@@ -@@ h@@ t@@ m@@ l@@ 5@@ '@@ : '@@ B@@ a@@ s@@ i@@ c H@@ T@@ M@@ L an@@ d H@@ T@@ M@@ L@@ 5@@ ',
'@@ c@@ ss@@ -@@ f@@ le@@ x@@ b@@ o@@ x@@ '@@ : '@@ C@@ S@@ S F@@ le@@ x@@ b@@ o@@ x@@ ',
'@@ c@@ ss@@ -@@ g@@ r@@ i@@ d@@ '@@ : '@@ C@@ S@@ S G@@ r@@ i@@ d@@ ',
de@@ v@@ o@@ p@@ s@@ : '@@ D@@ e@@ v@@ O@@ p@@ s@@ ',
e@@ s@@ 6@@ : '@@ E@@ S@@ 6@@ ',
'@@ in@@ f@@ or@@ m@@ ation@@ -@@ se@@ c@@ ur@@ i@@ t@@ y@@ -@@ w@@ i@@ th@@ -@@ he@@ l@@ me@@ t@@ j@@ s@@ '@@ : '@@ I@@ n@@ f@@ or@@ m@@ a@@ ti@@ o@@ n S@@ ec@@ ur@@ i@@ t@@ y w@@ i@@ t@@ h H@@ e@@ l@@ me@@ t@@ J@@ S@@ ',
j@@ q@@ u@@ er@@ y@@ : '@@ j@@ Q@@ u@@ er@@ y@@ ',
'@@ j@@ s@@ on@@ -@@ ap@@ i@@ s@@ -@@ and@@ -@@ a@@ j@@ a@@ x@@ '@@ : '@@ J@@ S@@ O@@ N A@@ P@@ I@@ s an@@ d A@@ j@@ a@@ x@@ ',
'@@ m@@ on@@ g@@ o@@ d@@ b@@ -@@ and@@ -@@ m@@ on@@ g@@ o@@ o@@ se@@ '@@ : 'M@@ on@@ g@@ o@@ D@@ B an@@ d M@@ on@@ g@@ o@@ o@@ se@@ ',
'@@ t@@ he@@ -@@ d@@ o@@ m'@@ : '@@ T@@ h@@ e D@@ O@@ M@@ ',
'@@ ap@@ i@@ s@@ -@@ and@@ -@@ m@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ '@@ : '@@ A@@ P@@ I@@ s an@@ d M@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ ',
'@@ ap@@ i@@ s@@ -@@ and@@ -@@ m@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s@@ -@@ p@@ ro@@ j@@ ect@@ s@@ '@@ : '@@ A@@ P@@ I@@ s an@@ d M@@ i@@ c@@ ro@@ serv@@ i@@ c@@ e@@ s P@@ ro@@ j@@ ect@@ s@@ '
}@@ ;
And so on. I'm not sure what I'm missing, but it seems that it didn't fully reconstruct the text from the vocabulary?
AI: Your BPE vocabulary is quite small given what the strings you want to segment look like. The biggest problem here is that BPE expects tokenized sentences with tokens separated by spaces. The BPE model is not aware of JS syntax and you feed it JS code, so it is no wonder that it does not learn anything syntactically plausible.
You can interpret the BPE file as a log of which subwords got merged, and you can see that the longest strings you get are something like:
m').valueOf
H:mm').valueOf
I would recommend some preprocessing, perhaps a JS lexer so that the syntax is respected, and perhaps a bigger vocabulary. |
H: Confusion regarding the Working mechanism of Activation function
For binary classification, irrespective of the model used, the sigmoid function is a good choice for the output layer because the actual output value 'Y' is either 0 or 1, so it makes sense for the predicted output value to be a number between 0 and 1.
My confusion is: is there a binary step function in the output layer which squashes the values of the linear combination of weights and inputs to 0 or 1? Does classification always mean applying a thresholding function on top of a linear or non-linear function in the hidden layer?
Say the predicted output value is 0.75 and actual Y is 0. Then, how is 0.75 converted to 1? The loss function would calculate the error as actual - predicted = 0-0.75 = -0.75
Can somebody please explain the math or point out some links where the working steps are shown? Thank you.
AI: I think the following picture by Rubens Zimbres expresses how this process is done, except for the loss function part:
Is there a binary step function in the output layer which squashes the values of the linear combination of weights and and inputs to 0 or 1?
Answer: Yes. Consider the following example, which shows how the sigmoid squashes values to be between 0 and 1:
Does classification means always applying a thresholding function on top of a linear or non-linear function which is in the hidden layer?
Answer: It does.
Say the predicted output value is 0.75 and actual Y is 0. Then, how is 0.75 converted to 1? The loss function would calculate the error as actual - predicted = 0-0.75 = -0.75
Answer: The loss function and the calculation of the error for a classification NN is not simply $Y_{actual} - Y_{predicted}$;
it is normally computed using a combination of the softmax activation function and cross-entropy, as follows:
See the whole training process for multinomial logistic classification: $D(S(wx+b), L)$ |
H: What is the difference between an autoencoder and an encoder-decoder?
I want to know if there is a difference between an autoencoder and an encoder-decoder.
AI: Autoencoders are a special case of encoder-decoder models. In the case of autoencoders, the input and the output domains are (typically) the same. The Wikipedia page for Autoencoder mentions:
The simplest way to perform the copying task perfectly would be to duplicate the signal. Instead, autoencoders are typically forced to reconstruct the input approximately, preserving only the most relevant aspects of the data in the copy.
For instance, they are used for denoising data. The inputs for such a model are the noisy inputs and the outputs are the denoised inputs. You can find an example here.
Encoder-Decoder models can also have different input and output domains, like in the case of neural machine translation. The inputs and the outputs have different domains, as they belong to two different languages and can have variable lengths as well. |
H: Auto.arima with xreg in R, restriction on forecast periods
I am using the forecast package and implementing auto.arima with xreg. I want to forecast only 1 year ahead, but I am unable to use the h parameter in the forecast function. Below is the reason:
The definition given in the manual (F1 check):
h = "Number of period of forecast but if xreg is used 'h' is ignored and the forecast period will be number of rows"
Please suggest an alternative way to use h for a specific forecast period.
AI: Using xreg suggests that you have external (exogenous) variables. In this, a regression model is fitted to the external variables with ARIMA errors.
When forecasting you need to provide future values of these external variables. In practice, these are often forecasts or could be known. For example, if you're trying to predict Sales and you use Advertising spend as an external variable, you may know the advertising spend for the upcoming year.
auto.arima then produces forecasts for the length of xreg, therefore disregarding h.
Based on your comments below, I've provided an example script demonstrating this based on the Sales example above.
library(forecast)
# Generate sample data
sales <- sample(100:170, 4*10, replace = TRUE)
advertising <- sample(50:70, 4*10, replace = TRUE)
# Create time series objects.
sales_ts <- ts(sales, frequency = 4, end = c(2017, 4))
fit <- auto.arima(sales_ts, xreg = advertising)
# If we pass external_regressor into the forecast, h will be disregarded and we will
# get a forecast for length(external_regressor)
wrong_forecast = forecast(fit, h = 4, xreg = advertising)
length(wrong_forecast$mean) # Will be 40
# To forecast four quarters in advance, we must provide forecasted external regressor data
# for the upcoming four quarters, so that length(new_regressor) == 4.
# In reality, this data is either forecasted from another forecast, or is known. We'll randomly generate it.
upcoming_advertising <- sample(50:70, 4, replace = TRUE)
correct_forecast <- forecast(fit, xreg = upcoming_advertising)
length(correct_forecast$mean) # Will be 4
The key things to note are:
If we forecast with the same regressors as we used when fitting the model, h will be disregarded and a forecast will be generated for the length of xreg (in your case, 10 years).
As such, we must provide new data for xreg for the length of time we wish to forecast - in your case, 4 quarters. |
H: Is text recognition by definition a part of image recognition?
I'm referring to more advanced text recognition systems that use neural networks to find and extract text from images, like the ones Google and Microsoft offer on their ML platforms.
If not, what is the technical difference from other branches of image recognition like object or face recognition?
AI: Yes. Please take a look at how to do text recognition in images using the OpenCV and Tesseract open-source libraries. |
H: How to apply a trained Random Forest model to a new data set in R?
So I have a data set that is essentially football players statistics in 2017 and 2018. I have trained my model to use the 2017 data to predict the 2018 number of touchdowns. My code is below:
set.seed(1)
data.rf <- randomForest(2018_td ~ ., data = data, proximity = TRUE)
In my data set, I had the actual number of touchdowns in 2018, and trained a random forest algorithm to predict that value. Now, I want to apply the trained random forest to the same 2018 data set, but to predict the 2019 number of TDs, which I don't have.
I'm not sure if I'm missing something or if I have a fundamentally wrong understanding of how RF works. How would I go about predicting those 2019 values from my data.rf model?
AI: You need to setup another data frame that has the unlabeled 2019 observations. Assuming you have multiple predictors, your new data would have the exact same columns as your 2018 data but with no 2019_td column (since you presumably don't know anything about the 2019 season). That is, the players in 2019 that you wish to predict on, along with their predictors, but no target column.
Once you have the data frame, you can use predict(trained_model, newdata) to score your new predictions.
Edit: As an aside, you have data that is influenced by time. What happens if a player gets injured next season? What about trends in the players performance? |
H: Anomaly detection k-means in Time Series
I'm trying to use k-means to detect anomalies in the Amount column. I have the following part of my dataset:
8 2018-09-06 -2760.14
9 2018-10-04 -1149.73
10 2018-11-07 -1551.41
11 2018-12-06 -1208.17
12 2019-01-07 -244.02
13 2019-02-06 111.00
14 2019-03-06 139.24
15 2019-04-04 -27315.10
16 2019-05-07 -30326.58
17 2019-06-06 -1633.44
I used 2 as number of clusters and I get the following centroids:
[[ -237.045]
[-28820.84 ]]
In my case the anomalies are rows 15 and 16, but now they form their own cluster, so the distance to their centroid will be insignificant and thus they will not be flagged as anomalies.
My question is, how can I detect the anomalies if they form their own cluster?
AI: K-means is sensitive to extreme values, such as outliers.
This is because A) it assigns every point to a cluster, but outliers shouldn't be clustered, and B) it minimizes the sum-of-squares, which puts more weight on far instances such as outliers. Larger errors get even larger, and hence k-means tries to minimize the cost of outliers by making them cluster centers.
It's the wrong tool for this problem, you don't have a clustering problem. |
H: What does embedding mean in machine learning?
I just came across the term "embedding" in a paper on deep learning. The context is "multi-modal embedding".
My guess: an embedding of something extracts some features of it to form a vector.
I couldn't find an explicit meaning for this term, and that stops me from fully understanding the author's idea and the model mechanism.
I checked the dictionary and searched online, but the explanations are based more on the everyday meaning than on the machine learning meaning.
And that raises a more general and frequently encountered question: when you find some machine learning terminology you can't understand well, where can you get an answer? Is there a specific way to Google it? Join a machine learning group? Ask a question on Stack Exchange?
AI: In the context of machine learning, an embedding is a low-dimensional, learned continuous vector representation of discrete variables into which you can translate high-dimensional vectors. Generally, embeddings make ML models more efficient and easier to work with, and can be used with other models as well.
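As a tiny illustration (Keras assumed; the sizes are arbitrary), an embedding layer is just a learned lookup table from discrete ids to dense vectors:
import numpy as np
from tensorflow.keras import layers

embedding = layers.Embedding(input_dim=10000, output_dim=32)  # 10,000 ids -> 32-dim vectors
word_ids = np.array([[4, 25, 7]])     # a "sentence" of three token ids
vectors = embedding(word_ids)         # learned continuous representation
print(vectors.shape)                  # (1, 3, 32)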
Typically, when I stumble upon jargon I'm not familiar with I first turn to Google, and if it can't be found I ping my colleagues and data science forums. |
H: Dueling Network gradient with respect to Advantage stream
Looking at Dueling DQN:
$Q = V + A - mean(A)$
For simplicity, let's assume we are working with 4 neurons. Recall that Value stream only has 1 neuron $(v_0)$
Re-writing the above equation, we get:
$$
\left[ \begin{array}{c}
Q_0\\Q_1\\Q_2\\Q_3
\end{array}
\right] =
\left[ \begin{array}{c}
v_0\\v_0\\v_0\\v_0
\end{array}
\right] +
\left[ \begin{array}{c}
a_0\\a_1\\a_2\\a_3
\end{array}
\right] -
\left[ \begin{array}{c}
\frac{1}{N}(a_0 + a_1 + a_2 + a_3)
\\\frac{1}{N}(a_0 + a_1 + a_2 + a_3)
\\\frac{1}{N}(a_0 + a_1 + a_2 + a_3)
\\\frac{1}{N}(a_0 + a_1 + a_2 + a_3)
\end{array}
\right]
$$
Question:
V (the Value) is a usual Dense Layer which has 1 neuron at the output.
A (the Advantage) is a usual Dense Layer which has N neurons at the output - what is the correct gradient vector to be passed to the neurons of this Advantage stream?
Thoughts:
From this post:
Since Q is a simple sum of functions you have:
$$\nabla_{\theta} Q(s,a) = \nabla_{\theta}V(s) + \nabla_{\theta}A(s,a)
- \frac{1}{numActions} \sum_{a'}\nabla_{\theta}A(s, a')$$
You get the gradients of the V and A networks as usual by backprop.
To me, the correct gradient vector to be passed to the advantage stream is:
$$gradForA = \frac{dE}{dQ}\frac{dQ}{dA}$$
however, the second fraction of the equation is what makes me puzzled.
Is it as simple as:
$$\frac{dQ}{dA} = \left[ \begin{array}{c}
1-\frac{1}{N}\\1-\frac{1}{N}\\1-\frac{1}{N}\\1-\frac{1}{N}
\end{array}
\right] $$
It's probably not, especially if we look at the "mean" vector, in the example above.
We can see that its every entry contains contribution of all advantage neurons - because they are summed.
Am I doomed to perform 2 backpropagations for the advantage dense layer in parallel? One for $+A$ and one for $-mean(A)$; I would then add up (component-wise) the two gradient vectors. Sounds like a clumsy idea...
AI: I verified the following formulas with gradient checking, and it confirms they are correct (getting a discrepancy of less than 0.00002).
Let's draw the tree diagram of what's affecting what:
If we try to find gradient affecting $a_0$, we have to sum-up the pink chains. They make it evident that $a_0$ has affected all $Q$ variables, thus we sum these chains.
So the formula for entire gradient that has to enter $A$ layer is:
$$
gradForA = \left[ \begin{array}{c}
\frac{\partial E}{\partial a_0}
\\\frac{\partial E}{\partial a_1}
\\\frac{\partial E}{\partial a_2}
\\\frac{\partial E}{\partial a_3}
\end{array}
\right] = \left[ \begin{array}{c}
\frac{\partial E}{\partial Q_0}(1 - u) + \frac{\partial E}{\partial Q_1}(-u)+ \frac{\partial E}{\partial Q_2}(-u) + \frac{\partial E}{\partial Q_3}(-u)
\\\frac{\partial E}{\partial Q_0}(- u) + \frac{\partial E}{\partial Q_1}(1-u)+ \frac{\partial E}{\partial Q_2}(-u) + \frac{\partial E}{\partial Q_3}(-u)
\\\frac{\partial E}{\partial Q_0}(- u) + \frac{\partial E}{\partial Q_1}(-u)+ \frac{\partial E}{\partial Q_2}(1-u) + \frac{\partial E}{\partial Q_3}(-u)
\\\frac{\partial E}{\partial Q_0}(- u) + \frac{\partial E}{\partial Q_1}(-u)+ \frac{\partial E}{\partial Q_2}(-u) + \frac{\partial E}{\partial Q_3}(1-u)
\end{array}
\right]
$$
where $u$ is $\frac{1}{N}$
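A small numpy check of the formula above (N = 4 actions, u = 1/N; the upstream gradients are arbitrary example values):
import numpy as np

N = 4
u = 1.0 / N
dE_dQ = np.array([0.1, -0.3, 0.2, 0.05])   # example upstream gradients dE/dQ_i

# gradient entering the advantage stream: dE/dQ_i minus u times the sum of all dE/dQ
grad_for_A = dE_dQ - u * dE_dQ.sum()
# gradient entering the single value neuron: sum of all upstream gradients
grad_for_V = dE_dQ.sum()

print(grad_for_A, grad_for_V)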
Bonus:
By looking at the above image, we see that the $v_0$ has similarly affected all $Q$ variables, so its gradient is:
$$
gradForV = \frac{\partial E}{\partial Q_0} + \frac{\partial E}{\partial Q_1}+ \frac{\partial E}{\partial Q_2} + \frac{\partial E}{\partial Q_3}
$$
$gradForV$ is a scalar (not a vector) because value stream only has 1 neuron, always.
Bonus #2
To compute $gradForA$ the obvious approach is via 2 for-loops, nested in one another. However, you can get rid of the nesting and improve code performance by re-writing the formula as follows:
$$
gradForA = R + scalarB
$$
where
$scalarB = (-u)(\frac{\partial E}{\partial Q_0} + \frac{\partial E}{\partial Q_1}+ \frac{\partial E}{\partial Q_2} + \frac{\partial E}{\partial Q_3})$
and
$$ R = \left[ \begin{array}{c}
\frac{\partial E}{\partial Q_0}
\\\frac{\partial E}{\partial Q_1}
\\\frac{\partial E}{\partial Q_2}
\\\frac{\partial E}{\partial Q_3}
\end{array}
\right]
$$
The cool thing is that $scalarB$ can be computed once (it's a scalar), and re-used when computing the $gradForA$ vector.
Notice that $scalarB$ and $gradForV$ are almost the same thing, just with an extra $(-u)$ factor, so you can re-use this value in your code as well. |
H: Using doccano for Aspect Based Sentiment Analysis annotation
Currently looking for a good tool to annotate sentences regarding aspects and their respective sentiment polarities.
I'm using SemEval Task 4 as a reference. The following is an example in the training dataset:
<sentence id="2005">
<text>it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.</text>
<aspectTerms>
<aspectTerm term="quality" polarity="positive" from="14" to="21"/>
<aspectTerm term="GUI" polarity="positive" from="36" to="39"/>
<aspectTerm term="applications" polarity="positive" from="118" to="130"/>
<aspectTerm term="use" polarity="positive" from="143" to="146"/>
</aspectTerms>
</sentence>
Can I easily use doccano for such a task? Or would I be better off using some other tool, such as brat?
AI: You can use the sequence labeling feature to annotate the text: |
H: (SOLVED) Power BI, Page level filter not working with many to one relation
EDIT: Solution found.
Turns out the relations were set to Cross filter direction: Single on all relations (and the one from Month table.Month to A.Month was not set to active).
Setting Cross filter direction: Both fixed the problem.
I have four tables A, B, Month table and Quarter table
A has a column A.Month, and many other columns with irrelevant data
B has a column B.Quarter, and many other columns with irrelevant data
Month table has columns Month table.Month and Month table.Quarter
Quarter table has a column Quarter table.Quarter
One to many relations are shown as this: "<"
Quarter table.Quarter < Month table.Quarter as there are multiple (3) months in one quarter
Month table.Month < A.Month as there are several rows of data for each month
Quarter table.Quarter < B.Quarter as there are several rows of data for each quarter
Here is a visualisation of the relations
On my dashboard, I have a visual for A, and a visual for B
What I'm trying to do is to apply a page level filter with Month table.Month, thinking that therefore, A should only show data for whatever month/months are selected in the filter, and B should show the data for the corresponding quarter/quarters.
Filter to January
Expectation:
A shows data for January
B shows data for Q1
Reality:
A shows data for all months
B shows data for all quarters
Switch page level filter from Month table.Month to Quarter table.Quarter and set filter to Q1
Expectation:
A shows data for January, February and March
B shows data for Q1
Reality:
A shows data for all months
B shows data for Q1
What am I missing?
AI: Posting this as an answer.
Turns out the relations were set to Cross filter direction: Single on all relations (and the one from Month table.Month to A.Month was not set to active).
Setting Cross filter direction: Both fixed the problem. |
H: Obtaining consistent one-hot encoding of train / production data
I'm building an app that will require user input. Currently, on the training set, I run the following code, in which data is a pandas dataframe with a combination of categorical and numerical columns.
dummified_data = pd.get_dummies(data)
train_data = dummified_data[:10000]
test_data = dummified_data[10000:12000]
Currently, I have a hand-written function that takes user-inputted data and transforms it into a format like dummy data. This doesn't seem sustainable as the number of columns/the size of my categorical variables grows.
Is there a way to dummify training data and production data consistently?
AI: Use sklearn.preprocessing.OneHotEncoder and transfer the one-hot encoding to your web service (I'm guessing that's how you're using the model for inference) via sklearn.pipeline.Pipeline. The pipeline will save the state of the fit on your training data and apply the same transformation to your production data.
Example:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
import joblib

pipeline1 = Pipeline([
    ('OneHotEncoder', OneHotEncoder())
])
pipeline1.fit(trainingdata.column1.values.reshape(-1,1))
This is how you create a pipeline containing the OneHotEncoder and fit your training data on the pipeline. All that is left is dumping your pipeline to a file, loading it later in your production environment, and calling the transform method on your loaded pipeline:
joblib.dump(pipeline1,"pipeline1.joblib")
# Production environment
pipeline1 = joblib.load('pipeline1.joblib')
momo = pipeline1.transform(productiondata.column1.values.reshape(-1,1)).toarray()
And here , the variable momo contains your production data with the pipeline ( containing the one-hot encoding operation ) applied to it. |
H: conceptnet api documentation for /wn/ part of the URI
for url http://api.conceptnet.io/c/en/music/n/wn/communication
/c/en/music/n is explained the doc (https://github.com/commonsense/conceptnet5/wiki/URI-hierarchy) as:
"the initial /c to make it a concept, a part that indicates its language (usingthe BCP 47 language code for that language), and a part with the concept text. An optional fourth component gives the part of speech (as a single letter, following the convention of WordNet)."
But I couldn't find documentation for the /wn/communication part. Could you point to the documentation for this or explain this part of the URI? Thanks.
AI: This is the finer-grained word sense information that was added in ConceptNet 5.7. WordNet defines "music" in the topic area of "communication". We keep this information in the word sense. The idea is to distinguish WordNet senses, but not to make the extremely fine distinctions that WordNet makes.
I didn't describe it on the "URI Hierarchy" page yet, sorry about that. It's described in the changelog for now: https://github.com/commonsense/conceptnet5/wiki/Changelog
"music" is a tricky example, because WordNet has finely-distinguished senses of it that appear to be in different WordNet topic areas, even though they all seem to relate to the same meaning. Its different senses are listed in topic areas including "communication", "act", and even "music" itself. |
H: Are there any algorithms for a classification problem involving unlimited classes, and only a few instances per class
The Scenario: A group of people must summarize specific parts of speeches they hear. They hear a new speech every day, and it's possible that multiple members of group are listening to the same speech.
The Question: What ml algorithm or approach could you take to link summaries that refer to the exact same part of the exact same speech?
Why this question is interesting: most classification problems deal with a large amount of instances and only a few classes. This problem is about dealing with very few instances (1-5) for a growing number of classes (The specific part of the specific speech).
AI: Yes, there are a couple of methods. I recommend reading the Siamese Networks paper. From there you can check more modern approaches like Prototypical Networks and so on. In general, the keyword you are looking for is few-shot learning.
All these approaches basically depend on finding similarities between two or more inputs. One input is your unknown input and the others are your known samples. Your network will predict a class corresponding to maximum similarity.
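For illustration, here is a minimal sketch of the inference step of a prototype-style approach; embed (a trained embedding network) and support (a dict mapping each known class to its 1-5 example summaries) are hypothetical placeholders:
import numpy as np

def classify(query, support, embed):
    # embed the unknown summary
    q = embed(query)
    # prototype = mean embedding of the few known examples of each class
    prototypes = {c: np.mean([embed(s) for s in samples], axis=0)
                  for c, samples in support.items()}
    # predict the class whose prototype is closest to the query (maximum similarity)
    return min(prototypes, key=lambda c: np.linalg.norm(q - prototypes[c]))
Note that new classes can be added over time by simply adding their few examples to support, without retraining a classification head.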
H: Is it normal for F1 scores to be lower on a binary classification task as compared to a 3-class classification task?
I am trying to understand if the F1 scores are higher for a binary classification problem than for a multiclass classification problem.
AI: In general the lower the number of classes the easier it is for a classifier to assign the right category. However this completely depends on the characteristics of the data, in particular how well the features match the classes.
For example if one tries to somehow classify pictures of dogs, cats and rabbits into two classes, it's possible that the performance will be poorer than into 3 classes. This is an obvious example but sometimes the data might contain patterns which fit more easily into 3 groups than 2. |
H: CNNs - Hyperparameter tuning with different training sizes of the same data set
I would like to compare how much the classification performance (test accuracy) of CNNs changes depending on the size of the data set.
For this I would like to use a data set like MNIST or Fashion MNIST.
I would like to start by first training a CNN with only a subset of about 1000 images, then a CNN with a subset of about 5000 images, then with a subset of about 10000 images, etc.
For each of these data sets an optimized CNN should be generated.
My current plan would be the following:
Create a simple base model with only 1 conv layer (with 32 filters), MaxPooling2D, Dropout (0.5) and a Dense layer with about 100 neurons.
Using this base model, I would run GridSearchCV on the smallest dataset (with 1000 images) to search for the batch size and optimizer, and reuse both for all other datasets.
Then I would do the following for each data set:
Search for the optimal architecture of the network (number of conv layers, number of filters, number of dense layers, number of neurons in the dense layers) using GridSearchCV (RandomizedSearchCV will probably be less suitable at this point)
Search for learning rate, weight decay, dropout rate etc. (using GridSearch or RandomizedSearch).
Maybe I could just use the default setting for one or the other not so important parameter and exclude them from the search.
I am not sure if my approach is really recommendable.
I also tend to use RandomizedSearchCV to determine all hyperparameters simultaneously for each data set.
Many thanks in advance for your comments and suggestions.
AI: First suggestion: you should first find a CNN architecture that satisfies you, and then stick with it.
Second suggestion: be careful with cross validation. CNNs are extremely "heavy" models, they can easily take hundreds of thousands or millions of trainable parameters. Running cross validation on those could be computationally impossible.
In other words, keep the two problems separated.
I suggest you take a look at common CNN architectures. There are plenty of them in Kaggle competitions (for MNIST, Fashion MNIST or any other dataset). You can choose one that works fine, and then use it to show how much the classification performance (test accuracy) of CNNs changes depending on the size of the data set.
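For illustration, a minimal Keras sketch of one fixed architecture (roughly the base model you describe, assuming 28x28 grayscale inputs and 10 classes) that could be reused unchanged across all subset sizes:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_base_model():
    # the same architecture for every subset size; only the training data varies
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Dropout(0.5),
        Flatten(),
        Dense(100, activation='relu'),
        Dense(10, activation='softmax'),
    ])
    # 'adam' here is a placeholder; the optimizer and batch size are what you would tune
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model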
H: Generating Synthetic Image to improve the performance of classifier
I need some suggestion from experts. For my project work, I have been learning about Generative Adversarial Network.
I am trying to make a classifier (say CNN) to train with original CIFAR-10 dataset and then evaluate the performance.
After that, I will generate more data for all classes using a GAN, retrain the model and compare the performance.
But I don't know how feasible or how complex the work will be, and I also want to know if the idea is acceptable as a project.
AI: I think it could work well if you try to reduce the false positive error. Generating images for negative examples shouldn't be too difficult. Generating images for positive examples, on the other hand, would require a good classifier first.
H: How to interpret feature importance (XGBoost) in this case?
I found two dominant features from plot_importance. My dependent variable Y is customer retention (whether or not the customer will retain, 1=yes, 0=no). My problem is I know that features A and B are significant, but I don't know how to interpret and report them in words because I can't tell if they have a positive or negative effect on customer retention. Is there a way to find that out or anything that helps make it clear?
Thanks.
AI: Pictures usually tell a better story than words - have you considered using graphs to explain the effect?
Perhaps 2-way box plots or 2-way histogram/density plots of Feature A v Y and Feature B v Y might work well. |
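For example, a minimal sketch with seaborn, assuming your data is in a dataframe df with the retention label in a column 'retained' and one of the important features in 'feature_A' (both column names are placeholders):
import seaborn as sns
import matplotlib.pyplot as plt

# distribution of the feature for retained (1) vs. non-retained (0) customers
sns.boxplot(x='retained', y='feature_A', data=df)
plt.title('Feature A by retention outcome')
plt.show()
If the boxes for the two groups clearly separate, you can report in words that higher (or lower) values of feature A are associated with retention.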
H: How to predict next visit date based on this data
I have a dataset shown below. Here, status indicates whether the visit has been done or not, and scheduled_visit indicates whether a next action is scheduled.
df =
visit_date |status |scheduled_visit
2018-09-13|1|0000-00-00
2018-11-13|1|0000-00-00
2019-01-13|1|0000-00-00
2019-03-13|0|0000-00-00
2019-05-13|0|0000-00-00
So consider this case: the user is visiting every 2 months.
I would like to find this pattern for all users based on the historical visits with the customer. I would like to find the next predicted visit for that customer, and if we missed that visit, then I ask "Do you want to schedule a visit?" or "You missed a visit with this customer."
I'm new to ML. I tried to use an RNN for time series forecasting but I'm getting a really high error rate; the prediction is nowhere close. What model/algorithm should I use and learn about to make a model for this? I want to create a solution where, if I feed in the user_id, it will check all these values and send a notification within the app.
Thank you
AI: Assume a person in the df table has a visit_date and a scheduled_visit. Initially, calculate the
difference between them and use that as your target variable. Thereafter
you can create a model and predict the number of days; once you predict the number of days, add
them to the visit date to get the scheduled date. Below is the process in code:
# calculate the difference (in days) between the visit and the scheduled visit
import pandas as pd
df['visit_date'] = pd.to_datetime(df['visit_date'])
df['scheduled_visit'] = pd.to_datetime(df['scheduled_visit'])
df['n_days'] = (df['scheduled_visit'] - df['visit_date']).dt.days
# the table now looks like: visit_date | serial_no | n_days
# create a regression or ARIMA model on serial_no vs n_days
# predict the values for the next events to get the expected n_days values
# finally, add those predicted n_days to your visit_date to get the scheduled visit date
Start with a small model like ARIMA and check the results instead of deep learning models (RNN); choose your model based on the data size.
H: is there a way to customize my loss function to increase recall in one class only?
First, thank you.
I'm using Tensorflow 1.13 with Python 3. I'm trying to create a CNN classifier on data with 4 classes, one-hot encoded. However, the insights I need it to produce are really around spotting either classes 0 or 3. If it tells me a data row is of class 1 or 2, that's of little use to my goals. Output from sklearn classification_report is below and I want to increase recall on classes 0 and 3 during model training. Precision would be nice, too.
precision recall f1-score support
0 0.13 0.42 0.20 1484
1 0.15 0.74 0.24 696
2 0.81 0.34 0.48 12325
3 0.13 0.23 0.17 1495
avg / total 0.65 0.35 0.41 16000
I'm relatively new to both python and TF, having gone through several tutorials, and am just now starting to build my own models. I've not really been able to stretch myself out past capabilities discussed in any of the learning courses I've watched. I know that Keras, for example, has custom loss function capability, but I'm not sure I understand how to integrate it into what I've already done. Relevant bits from my existing project are below.
# my current loss function, what I want to
# replace because just plain accuracy isn't
# terribly helpful
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_true,logits=y_pred))
# setting up the trainer
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
train = optimizer.minimize(cross_entropy)
# run it
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(steps):
batch_x, batch_y = dataHelper.next_batch(100)
sess.run(train,feed_dict={x:batch_x,y_true:batch_y,hold_prob:0.5})
if (i + 1) % 100 == 0:
print('on step {}'.format(i))
preds_decoded = tf.argmax(y_pred,1)
trues_decoded = tf.argmax(y_true,1)
final_predictions, true_values = sess.run([preds_decoded, trues_decoded],
feed_dict={x:price_data_only_test,y_true:test_labels,hold_prob:1.0})
print(classification_report(y_true=true_values,y_pred=final_predictions))
```
AI: What you want to do to boost the performance of one of your classes is to add class weights to your loss function. This will essentially tell your network to pay more attention to some classes (note that this might deteriorate performance on the rest of the classes).
How to compute the weights.
To see what weight you'll give to each class you can either set it yourself or you can let scikit-learn help you. The problem is that your data is imbalanced. This means that you have many more samples in class 2 than you have in classes 0, 1 and 3. Through the use of class weights you can make it so that your model pays more attention to the under-represented classes, so that they appear balanced. Scikit-learn has a function that calculates what weights you should give each sample.
from sklearn.utils.class_weight import compute_class_weight
weight_array = compute_class_weight(class_weight='balanced', classes=np.unique(y), y=y) # y is the array containing your labels
weight_dict = dict(zip(np.unique(y), weight_array)) # dictionary containing your weights
Keras
In keras you can also add class weights in the fit method (which trains the model).
This is easier as you just need to pass a dictionary with the weight of each class (i.e. what we calculated previously)
model.fit(X_train, y_train, class_weight=weight_dict, ...)
TensorFlow
In TensorFlow you can do this simply by using this softmax_cross_entropy loss instead of the one you are currently using. This supports class weights (weights attribute).
This requires you to pass a Tensor with a shape of (batch_size,) where its elements are the weight you want to pass to each sample. What you need to do is to see what samples batch_y has and give them the respective weight!
batch_weights = [weight_dict[x] for x in batch_y] # assuming that your batch_y is a numpy array that is not one-hot encoded
# if it is one-hot encoded:
batch_weights = [weight_dict[x] for x in np.argmax(batch_y, axis=1)] # assuming batch_y is a numpy array
# if batch_y is not a numpy array you need to get it through
batch_y_arr = sess.run(batch_y)
Then pass the batch_weights to the weights attribute of the loss (through a placeholder using the feed_dict - like you do with batch_x and batch_y) |
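Putting it together, a minimal sketch (TF 1.x) of how the weighted loss could replace your current one, reusing the y_true, y_pred, x and hold_prob placeholders/tensors from your code:
# new placeholder for the per-sample weights of the current batch
sample_weights = tf.placeholder(tf.float32, shape=[None])
cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels=y_true,
                                                logits=y_pred,
                                                weights=sample_weights)
train = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cross_entropy)

# inside the training loop:
# sess.run(train, feed_dict={x: batch_x, y_true: batch_y,
#                            sample_weights: batch_weights, hold_prob: 0.5})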
H: Targeted information extraction / focused extractive summarization
I have a large collection of project manuals, each with a large number of pages. Each manual contains some form of summary paragraphs, although these are not necessarily similar in structure or format from one to the next. The rest of the manual generally contains a large amount of various information in relation to the project, and is not always relevant to the desired content to be extracted and summarized.
In theory-
paragraph 1 - Project Summary (Extract this)
paragraph 2 - Background info (ignore)
paragraph 3 - Background info (ignore)
paragraph 4 - Background info (ignore)
paragraph 5 - Project Requirements (extract this)
Is it possible to somehow selectively target paragraphs/ sentences within a document for extractive summarization, and if so, is possible to train a model with datasets containing full texts and their desired summarizations?
So far, I have tried general extractive summarization methods utilizing TF-IDF. However, due to the substantial amount of extraneous information within the document, critical text within the summary paragraphs is usually ignored. I've considered manually increasing word counts across various keywords, but I think this will still ignore relevant sentences and phrases within close proximity (same worry with binary classification of desired paragraphs/sentences).
Can this be done, and is there a better approach than what I've been trying so far?
AI: What you describe is a supervised problem: an unsupervised system cannot guess which parts of the documents are relevant for your purpose. In this option you need to annotate a sample of documents with a binary class, then train a system using features based on the context (for example the titles of the parts).
But imho this depends on how many documents you have: if it's fewer than a few hundred, semi-manual annotation is going to be faster and give better results.
H: Purpose of converting continuous data to categorical data
I was reading through a notebook tutorial working with the Titanic dataset, linked here, and noticed that they highly favored ordinal data to continuous data.
For example, they converted both the Age and Fare features into ordinal data bins.
I understand that categorizing data like this is helpful when doing data analytics manually, as fewer categories makes data easier to understand from a human perspective. But intuitively, I would think that doing this would cause our data to lose precision, thus leading to our model losing precision or accuracy.
Can someone explain when is converting numerical data to ordinal data is appropriate, and the underlying statistics of why it is effective?
AI: Your intuition is generally correct - in many cases, premature discretization of continuous variables is undesirable. Doing so throws away potentially meaningful data, and the result can be highly dependent on exactly how you bucket the continuous variables, which is usually done rather arbitrarily. Bucketing people by age decade, for example, implies that there is more similarity between a 50-year-old and a 59-year-old than there is between a 59-year-old and a 60-year-old. There can be some advantages in statistical power to doing this, but if your binning doesn't reflect natural cutpoints in the data, you may just be throwing away valuable information.
You can find a very similar question here:
https://stats.stackexchange.com/questions/68834/what-is-the-benefit-of-breaking-up-a-continuous-predictor-variable?noredirect=1&lq=1 |
H: Is it valid to shuffle time-series data for a prediction task?
I have a time-series dataset that records some participants' daily features from wearable sensors and their daily mood status.
The goal is to use one day's daily features and predict the next day's mood status for participants with machine learning models such as linear regression.
I think cross-validation could be a good way for me to evaluate the performances. However, would shuffling the data randomly be fine?
Someone told me that because I am using a time-series dataset and I am trying to do a prediction task, shuffling the data randomly will cause some mix-up of future and past, which makes my model meaningless.
However, I think I can still use the strategy of randomly shuffling the dataset because the learning model is not a time-series model and, for each step, the model only learns from exactly 1 label value instead of a series of labels.
AI: It depends on how you formulate the problem.
Let's say you have a time-series of measurements X and are trying to predict some derived series of values (mood) Y into the future:
X = [x0, x1, x2,.....]
Y = [y0, y1, y2,.....]
Now, if your model has no memory, that is, it is merely mapping xN -> yN, then the model does not care what order it sees x and y in. Feed-forward neural networks, linear regressors etc. are memory-less models.
But, if your model has memory, that is, it is mapping a sequence of features to the next days' moods xN-2, xN-1, xN... -> yN-2, yN-1, yN..., then the ordering matters. The model in this case is keeping track of what it has seen before. The model's internal parameters are changing and persisting with each new example it sees. The current prediction depends on the last prediction. Recurrent neural networks have memory, so order matters.
You can get around the memory requirement by restructuring your dataset. You can concatenate consecutive features and map them to next day's mood instantaneously (xN-2, xN-1, xN) -> yN. In this way, your input feature will incorporate information about the past, but the model won't care about the order since all the temporal information is encoded in the new feature, and not the model. Your new dataset will look like:
Xnew = [(x0, x1, x2), (x1, x2, x3), (x2, x3, x4),...]
Y = [ y2, y3, y4,...] |
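A minimal numpy sketch of this restructuring (the window size of 3 here is an arbitrary choice):
import numpy as np

def make_windows(x, y, window=3):
    # stack `window` consecutive feature values into one input row
    x_new = np.stack([x[i:i + window] for i in range(len(x) - window + 1)])
    # targets aligned as in the Xnew / Y example above: (x0, x1, x2) -> y2
    y_new = y[window - 1:]
    return x_new, y_new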
H: How can I know if my conv1D model is overfitted or underfitted from loss curve?
I am working on classification of time series multivariate data. By doing PCA, I converted multivariate to uni-variate and fed it into a conv1d in keras.
However, I am getting a very high accuracy and low loss both in validation and in training. How can I justify this?
I have tried cross validation, but the results are not much different.
I am using adam optimizer (learning rate:0.0001). With 0.001, my model fails to converge.
I have made sure that I am not mixing the training and validation datasets.
I have shuffled both datasets independently of each other. I trained on 3728 samples and validated on 610 samples.
Can we expect such a high accuracy with binary classification?
AI: From the curves you are showing, yes. Over-fitting would mean that your validation accuracy would be lower than your training accuracy, which is not the case here. Since you say that your training and validation sets are completely independent (i.e. no training samples are present in the validation set), you can consider the results reliable.
However, accuracy might not be the best indicator of the model's performance. Make sure your dataset is balanced (i.e. the number of samples in both classes is equal). If not, try another metric that better represents the performance of your model.
H: Strange binary classification result with a model that indicate it has been well-trained
The problem:
I am trying to build a model for binary classification of melanoma 'MEL' vs. nevus 'NV'. The dataset is from the ISIC archive (ISIC 2019), which covers 8 different types of skin lesion, but I am using just two types as mentioned (binary classification).
The two classes are imbalanced: 10000 images for 'NV' and 3000 for 'MEL'.
The dataset is split into train and validation.
I applied two solutions to the problem: downsampling and image augmentation.
The train folder now contains two subfolders, 'MEL' and 'NV', with 10000 images.
The validation folder contains 904 images for each class.
Using Keras, I fine-tuned DenseNet201 and used the preprocessing function for DenseNet:
from keras.applications.densenet import DenseNet201, preprocess_input
densenet_model = DenseNet201(input_shape=(224, 224, 3), include_top=False, weights="imagenet")
Then I add some layers at the end of the pretrained model:
#get the last layer shape
last_layer = densenet_model.get_layer('relu')
print('last layer output shape:', last_layer.output_shape)
last_output = last_layer.output
# Flatten the output layer to 1 dimension
x = layers.GlobalMaxPooling2D()(last_output)
# Add a fully connected layer with 512 hidden units and ReLU activation
x = layers.Dense(512, activation='relu')(x)
# Add a dropout rate of 0.5
x = layers.Dropout(0.5)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(2, activation='sigmoid')(x)
# Configure and compile the model
model = Model(densenet_model.input, x)
I didn't freeze any layers, then I compile the model:
optimizer = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None,
decay=0.0, amsgrad=True)
model.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy','binary_accuracy'])
filepath = "densenet.h5"
# Declare a checkpoint to save the best version of the model
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
save_best_only=True, mode='max')
# Reduce the learning rate as the learning stagnates
reduce_lr = ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=2,
verbose=1, mode='max', min_lr=0.00001)
callbacks_list = [checkpoint, reduce_lr]
# Fit the model
history = model.fit_generator(train_batches,
steps_per_epoch=train_steps,
validation_data=val_batches,
validation_steps=val_steps,
epochs=20,
verbose=1,
callbacks=callbacks_list)
The result:
Epoch 1/20
1701/1701 [==============================] - 793s 466ms/step - loss: 0.4436 - acc: 0.7890 - binary_accuracy: 0.7890 - val_loss: 0.3416 - val_acc: 0.8404 - val_binary_accuracy: 0.8404
Epoch 00001: val_acc improved from -inf to 0.84043, saving model to densenet.h5
Epoch 2/20
1701/1701 [==============================] - 720s 423ms/step - loss: 0.3447 - acc: 0.8450 - binary_accuracy: 0.8450 - val_loss: 0.3564 - val_acc: 0.8446 - val_binary_accuracy: 0.8446
Epoch 00002: val_acc improved from 0.84043 to 0.84458, saving model to densenet.h5
Epoch 3/20
1701/1701 [==============================] - 728s 428ms/step - loss: 0.2718 - acc: 0.8835 - binary_accuracy: 0.8835 - val_loss: 0.3785 - val_acc: 0.8487 - val_binary_accuracy: 0.8487
Epoch 00003: val_acc improved from 0.84458 to 0.84873, saving model to densenet.h5
Epoch 4/20
1701/1701 [==============================] - 726s 427ms/step - loss: 0.2051 - acc: 0.9172 - binary_accuracy: 0.9172 - val_loss: 0.3779 - val_acc: 0.8581 - val_binary_accuracy: 0.8581
Epoch 00004: val_acc improved from 0.84873 to 0.85813, saving model to densenet.h5
Epoch 5/20
1701/1701 [==============================] - 728s 428ms/step - loss: 0.1529 - acc: 0.9403 - binary_accuracy: 0.9403 - val_loss: 0.3923 - val_acc: 0.8581 - val_binary_accuracy: 0.8581
Epoch 00005: val_acc did not improve from 0.85813
Epoch 6/20
1701/1701 [==============================] - 728s 428ms/step - loss: 0.1163 - acc: 0.9553 - binary_accuracy: 0.9553 - val_loss: 0.4813 - val_acc: 0.8498 - val_binary_accuracy: 0.8498
Epoch 00006: val_acc did not improve from 0.85813
Epoch 00006: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.
Epoch 7/20
1701/1701 [==============================] - 727s 427ms/step - loss: 0.0407 - acc: 0.9864 - binary_accuracy: 0.9864 - val_loss: 0.5726 - val_acc: 0.8667 - val_binary_accuracy: 0.8667
Epoch 00007: val_acc improved from 0.85813 to 0.86670, saving model to densenet.h5
Epoch 8/20
1701/1701 [==============================] - 728s 428ms/step - loss: 0.0287 - acc: 0.9904 - binary_accuracy: 0.9904 - val_loss: 0.5919 - val_acc: 0.8711 - val_binary_accuracy: 0.8711
Epoch 00008: val_acc improved from 0.86670 to 0.87113, saving model to densenet.h5
Epoch 9/20
1701/1701 [==============================] - 728s 428ms/step - loss: 0.0253 - acc: 0.9909 - binary_accuracy: 0.9909 - val_loss: 0.5453 - val_acc: 0.8720 - val_binary_accuracy: 0.8720
Epoch 00009: val_acc improved from 0.87113 to 0.87196, saving model to densenet.h5
Epoch 10/20
1701/1701 [==============================] - 730s 429ms/step - loss: 0.0216 - acc: 0.9927 - binary_accuracy: 0.9927 - val_loss: 0.5498 - val_acc: 0.8706 - val_binary_accuracy: 0.8706
Epoch 00010: val_acc did not improve from 0.87196
Epoch 11/20
1701/1701 [==============================] - 729s 428ms/step - loss: 0.0145 - acc: 0.9954 - binary_accuracy: 0.9954 - val_loss: 0.6332 - val_acc: 0.8822 - val_binary_accuracy: 0.8822
Epoch 00011: val_acc improved from 0.87196 to 0.88219, saving model to densenet.h5
Epoch 12/20
1701/1701 [==============================] - 731s 430ms/step - loss: 0.0176 - acc: 0.9939 - binary_accuracy: 0.9939 - val_loss: 0.6256 - val_acc: 0.8756 - val_binary_accuracy: 0.8756
Epoch 00012: val_acc did not improve from 0.88219
Epoch 13/20
1701/1701 [==============================] - 734s 432ms/step - loss: 0.0117 - acc: 0.9965 - binary_accuracy: 0.9965 - val_loss: 0.5959 - val_acc: 0.8838 - val_binary_accuracy: 0.8838
Epoch 00013: val_acc improved from 0.88219 to 0.88385, saving model to densenet.h5
Epoch 14/20
1701/1701 [==============================] - 736s 433ms/step - loss: 0.0132 - acc: 0.9958 - binary_accuracy: 0.9958 - val_loss: 0.7139 - val_acc: 0.8598 - val_binary_accuracy: 0.8598
Epoch 00014: val_acc did not improve from 0.88385
Epoch 15/20
1701/1701 [==============================] - 735s 432ms/step - loss: 0.0109 - acc: 0.9963 - binary_accuracy: 0.9963 - val_loss: 0.6139 - val_acc: 0.8720 - val_binary_accuracy: 0.8720
Epoch 00015: val_acc did not improve from 0.88385
Epoch 00015: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.
Epoch 16/20
1701/1701 [==============================] - 734s 431ms/step - loss: 0.0048 - acc: 0.9980 - binary_accuracy: 0.9980 - val_loss: 0.6759 - val_acc: 0.8764 - val_binary_accuracy: 0.8764
Epoch 00016: val_acc did not improve from 0.88385
Epoch 17/20
1701/1701 [==============================] - 733s 431ms/step - loss: 0.0028 - acc: 0.9992 - binary_accuracy: 0.9992 - val_loss: 0.7179 - val_acc: 0.8805 - val_binary_accuracy: 0.8805
Epoch 00017: val_acc did not improve from 0.88385
Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.249999968422344e-05.
Epoch 18/20
1701/1701 [==============================] - 734s 432ms/step - loss: 0.0014 - acc: 0.9996 - binary_accuracy: 0.9996 - val_loss: 0.7525 - val_acc: 0.8816 - val_binary_accuracy: 0.8816
Epoch 00018: val_acc did not improve from 0.88385
Epoch 19/20
1701/1701 [==============================] - 734s 431ms/step - loss: 0.0011 - acc: 0.9997 - binary_accuracy: 0.9997 - val_loss: 0.7580 - val_acc: 0.8803 - val_binary_accuracy: 0.8803
Epoch 00019: val_acc did not improve from 0.88385
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 20/20
1701/1701 [==============================] - 733s 431ms/step - loss: 8.0918e-04 - acc: 0.9997 - binary_accuracy: 0.9997 - val_loss: 0.7667 - val_acc: 0.8800 - val_binary_accuracy: 0.8800
Epoch 00020: val_acc did not improve from 0.88385
The confusion matrix:
The classification report from sklearn.metrics:
precision recall f1-score support
MEL 0.48 0.47 0.48 904
NV 0.48 0.49 0.49 904
accuracy 0.48 1808
macro avg 0.48 0.48 0.48 1808
weighted avg 0.48 0.48 0.48 1808
The dilemma
As you can see, this is an utter trash model.
What do you think the problem is? Please give any suggestions; this has been happening for a month without any improvement.
If you want to take a look at other pieces of the code, I will post them.
This is the code for augmentation:
class_list = ['MEL','NV']
for item in class_list:
# Create a temporary directory for the augmented images
aug_dir = 'aug_dir'
os.mkdir(aug_dir)
# Create a directory within the base dir to store images of the same class
img_dir = os.path.join(aug_dir, 'img_dir')
os.mkdir(img_dir)
# Choose a class
img_class = item
# List all the images in the directory
img_list = os.listdir('base_dir/train_dir/' + img_class)
# Copy images from the class train dir to the img_dir
for fname in img_list:
# source path to image
src = os.path.join('base_dir/train_dir/' + img_class, fname)
# destination path to image
dst = os.path.join(img_dir, fname)
# copy the image from the source to the destination
shutil.copyfile(src, dst)
# point to a dir containing the images and not to the images themselves
path = aug_dir
save_path = 'base_dir/train_dir/' + img_class
# Create a data generator to augment the images in real time
datagen = ImageDataGenerator(
rotation_range=60,
width_shift_range=0.1,
height_shift_range=0.1,
#zoom_range=0.1,
shear_range= 0.2,
horizontal_flip=True,
vertical_flip=True,
brightness_range=(0.9,1.1),
fill_mode='nearest')
batch_size = 50
aug_datagen = datagen.flow_from_directory(path,
save_to_dir=save_path,
save_format='jpg',
target_size=(224, 224),
batch_size=batch_size)
# Generate the augmented images and add them to the training folders
num_aug_images_wanted = 10000 # total number of images we want to have in each class
num_files = len(os.listdir(img_dir))
num_batches = int(np.ceil((num_aug_images_wanted - num_files) / batch_size))
# run the generator and create about 6000 augmented images
for i in range(0, num_batches):
imgs, labels = next(aug_datagen)
# delete temporary directory with the raw image files
shutil.rmtree('aug_dir')
the preprocessing code
# Declare a few useful values
num_train_samples = train_len
num_val_samples = val_len
train_batch_size = 16
val_batch_size = 100
image_height = 224
image_width = 224
# Declare how many steps are needed in an iteration
train_steps = np.ceil(num_train_samples / train_batch_size)
val_steps = np.ceil(num_val_samples / val_batch_size)
# Set up generators
datagenr = ImageDataGenerator(
preprocessing_function= \
keras.applications.densenet.preprocess_input)
train_batches = datagenr.flow_from_directory(
train_path,
target_size=(image_height, image_width),
batch_size=train_batch_size)
val_batches = datagenr.flow_from_directory(
val_path,
target_size=(image_height, image_width),
batch_size=val_batch_size)
# Note: shuffle=False causes the test dataset to not be shuffled
test_batches = datagenr.flow_from_directory(
val_path,
target_size=(image_height, image_width),
batch_size=val_batch_size,
shuffle=False)
I am new to the field, so if you think this question is inappropriate, please don't just downvote it and go; leave a comment that may help me and others improve and avoid asking similar questions.
AI: There are many points that you should check/try:
- use a Flatten layer instead of GlobalMaxPooling2D (with optional Conv2D layers to reduce the spatial dimensions even more before the Flatten)
- your final Dense layer should have only one unit and not two, since you are doing binary classification (see the sketch after this list)
- no downsampling, but balance the classes in each batch with a training generator that you pass to fit_generator
- check that all your augmentations are not degrading any discriminative information in the image
- larger learning rate like 1e-3
- increase the patience of ReduceLROnPlateau
- use Area Under the ROC Curve (aka AUC) as an evaluation metric. This is a much better metric in case of an imbalanced dataset.
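A minimal sketch of the first two points, reusing last_output, densenet_model, layers and Model from your code; note that with a single sigmoid unit the generators would also need class_mode='binary':
x = layers.Flatten()(last_output)              # instead of GlobalMaxPooling2D
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(1, activation='sigmoid')(x)   # one unit for binary classification
model = Model(densenet_model.input, x)
# keep loss='binary_crossentropy'; set class_mode='binary' in flow_from_directory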
H: Pattern Recognition, Bishop - MLE and Least squares section 3.1
Please refer to an equation in Pattern recognition and Machine Learning by Bishop. My query is related to manipulation of an error term, equation 3.18:
$$E_{D}(\boldsymbol{w}) = \frac{1}{2}\sum_{n=1}^N\{t_{n} - w_{0} - \sum_{j=1}^{M-1}w_{j}\phi_{j}(x_{n})\}^{2} \tag{3.18}$$
the author goes on to get:
$$w_{0} = \overline{t} - \sum_{j=1}^{M-1}w_{j}\overline\phi_{j} \tag{3.19} $$
$$where \quad \overline{t} = \frac{1}{N}\sum_{n=1}^Nt_{n} \qquad \overline{\phi_j} = \frac{1}{N}\sum_{n=1}^{N}\phi_{j}(x_{n}) \tag{3.20}$$
If I take the derivate of 3.18, w.r.t $w_{0}$, set it to zero, and divide both sides by $N$, I get:
$$w_{0} = \frac{1}{N}\sum_{n=1}^{N}t_{n} - \frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{M-1}w_{j}\phi_{j}(x_{n})$$
Am I creating any error above? If so, please help. If not, my problem is related to the second term (R.H.S., above equation):
The right term,
$\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{M-1}w_{j}\phi_{j}(x_{n})$, is the
average of double sum. I can't understand how we can simply 'push'
$\frac{1}{N}$ inside to get
$\sum_{j=1}^{M-1}\frac{1}{N}\sum_{n=1}^{N}w_{j}\phi_{j}(x_{n})$ and
eventually equation for $\overline\phi_{j}$ (ref. 3.20), since:
$$\frac{1}{2}[(1+2) + (3+5)] \neq 1 + 3 + \frac{1}{2}(2 + 5)$$
LHS is average of sum of sums and RHS is average 'pushed' inside.
AI: We have
$$\frac12 [(1+2)+(3+5)]=\frac12(1+3)+\frac12(2+5)$$
We can swap the order of summation of the LHS since addition is commutative and associative.
After we decided how to pair them up, we just distribute $\frac12$ into them using distributive law.
\begin{align}
\frac1N \sum_{n=1}^N \sum_{j=1}^{M-1}w_j \phi_j (x_n) &=\frac1N \sum_{j=1}^{M-1}\sum_{n=1}^N w_j \phi_j (x_n) \\
&= \sum_{j=1}^{M-1}\frac1N\sum_{n=1}^N w_j \phi_j (x_n) \\
\end{align}
where the first equality holds because we are dealing with a finite sum, and hence we can swap the order of additions; that is, summing up the rows first or the columns first doesn't matter.
After which, we use distributive law to bring $\frac1N$ inside the first summation. |
H: A2C Continuous for Pendulum-v0 working implementation, negation for loss and entropy calculation
very good implementation of A2C continuous for Pendulum-v0
The code has a snippet to stop execution when the mean of the last 10 or 20 scores is higher than -20, but the results look like:
episode: 706 score: [-13.13392661]
episode: 707 score: [-12.91221984]
episode: 708 score: [-50.38036647]
episode: 709 score: [-74.58410041]
episode: 710 score: [-138.1596521]
episode: 711 score: [-87.3867222]
episode: 712 score: [-63.28444052]
episode: 713 score: [-0.37368592]
episode: 714 score: [-13.28473712]
episode: 715 score: [-117.78089523]
episode: 716 score: [-25.65207563]
episode: 717 score: [-0.36829411]
episode: 718 score: [-50.81750735]
episode: 719 score: [-0.33565775]
episode: 720 score: [-0.47168285]
episode: 721 score: [-0.35240929]
episode: 722 score: [-0.40577252]
episode: 723 score: [-0.37114168]
episode: 724 score: [-25.73963544]
episode: 725 score: [-37.70957794]
Even with the reward/10 line, still pretty good. However, I don't understand these lines regarding negation of loss and why the entropy equation looks different from what I saw in Packt Publishing Deep Reinforcement Learning Hands-On per picture below:
The code:
def actor_optimizer(self):
#placeholders for actions and advantages parameters coming in
action = K.placeholder(shape=(None, 1))
advantages = K.placeholder(shape=(None, 1))
# mu = K.placeholder(shape=(None, self.action_size))
# sigma_sq = K.placeholder(shape=(None, self.action_size))
mu, sigma_sq = self.actor.output
#defined a custom loss using PDF formula, K.exp is element-wise exponential
pdf = 1. / K.sqrt(2. * np.pi * sigma_sq) * K.exp(-K.square(action - mu) / (2. * sigma_sq))
#log pdf why?
log_pdf = K.log(pdf + K.epsilon())
#entropy looks different from log(sqrt(2 * pi * e * sigma_sq))
#Sum of the values in a tensor, alongside the specified axis.
entropy = K.sum(0.5 * (K.log(2. * np.pi * sigma_sq) + 1.))
exp_v = log_pdf * advantages
#entropy is made small before added to exp_v
exp_v = K.sum(exp_v + 0.01 * entropy)
#loss is a negation
actor_loss = -exp_v
#use custom loss to perform updates with Adam, ie. get gradients
optimizer = Adam(lr=self.actor_lr)
updates = optimizer.get_updates(self.actor.trainable_weights, [], actor_loss)
#adjust params with custom train function
train = K.function([self.actor.input, action, advantages], [], updates=updates)
#return custom train function
return train
Again, the entropy equation coded was this: entropy = K.sum(0.5 * (K.log(2. * np.pi * sigma_sq) + 1.)) which looks different from what's given in the textbook photo above.
Also, why is the loss a negation? actor_loss = -exp_v?
Is it negated because it is gradient ascent rather than gradient descent of the objective function for a policy gradient?
AI: Again, the entropy equation coded was this: entropy = K.sum(0.5 * (K.log(2. * np.pi * sigma_sq) + 1.)) which looks different from what's given in the textbook photo above.
They are the same after simple algebraic manipulations.
The entropy of a single variable Gaussian distribution with pdf $p(x|\mu, \sigma)$ is
\begin{align}
\mathcal{H}(p) & = \ln(\sqrt{2 \pi e \sigma^2}) \\
& = 0.5 \cdot \ln(2 \pi e \sigma^2) \\
& = 0.5 \cdot (\ln(2 \pi \sigma^2) + 1)
\end{align}
In policy gradient, we assume these $\sigma$s are isotropic. Thus the total entropy is the sum of the above $\mathcal{H}$s.
Also, why is the loss a negation? actor_loss = -exp_v? Is it negated because it is gradient ascent rather than gradient descent of the objective function for a policy gradient?
Yes. In policy gradient you would like to maximize exp_v, which weights the log-likelihood log_pdf of the taken actions by the advantages. At the same time we prefer a policy of high entropy. Deep learning frameworks usually only provide optimizers for minimizing a loss, therefore you negate exp_v and use it as the loss.
H: How are Lagrangian multipliers zero except for support vectors in dual representation of SVM?
How can we conclude that the Lagrangian multipliers are zero, except support vectors, in a dual problem? I cannot seem to see it.
$$L(\alpha)=-\frac{1}{2}\sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i' x_j + \sum_i{\alpha_i}
$$
AI: In optimization, we have something called complementary slackness condition, it is part of the KKT conditions.
Every constraint $g_i(x^*)\le 0$ in the primal corresponds to a dual variable $\mu_i$ (Lagrange multiplier). The condition states that $$g_i(x^*)\mu_i=0$$
For points that are not support vectors, we have $g_i(x^*)<0$, hence we must have $\mu_i=0$. |
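To make this concrete for the hard-margin SVM, the primal constraints are $y_i(w^\top x_i + b) \ge 1$, i.e. $g_i(w,b) = 1 - y_i(w^\top x_i + b) \le 0$, with multipliers $\alpha_i \ge 0$. Complementary slackness then reads
$$\alpha_i \left[\, y_i(w^\top x_i + b) - 1 \,\right] = 0 .$$
Any point lying strictly outside the margin satisfies $y_i(w^\top x_i + b) > 1$, so the bracket is non-zero and $\alpha_i$ must be $0$; only points on the margin (the support vectors) can have $\alpha_i > 0$.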
H: How to transform many columns of TRUE/FALSE/NA to just TRUE/FALSE?
I have a dataframe that consists of a few columns of text, and then a bunch of columns that are TRUE/FALSE or NA (they were TRUE/FALSE but I left-joined them with merge and that added NAs).
Eg:
issue # | title | body | x | y | z | lbl1 | lbl2 | lbl3 | lbl4 | lbl5
1 | blah | blah | blah | blah | blah | TRUE | FALSE | FALSE | TRUE | FALSE
2 | blah | blah | blah | blah | blah | TRUE | FALSE | FALSE | TRUE | FALSE
3 | blah | blah | blah | blah | blah | NA | NA | NA | NA | NA
4 | blah | blah | blah | blah | blah | NA | NA | NA | NA | NA
5 | blah | blah | blah | blah | blah | TRUE | FALSE | FALSE | TRUE | FALSE
I know how many columns need to not be converted (and also their names), though I don't know how many label columns there are (or their names - they don't share any prefix).
I tried doing:
data[,-7] <- as.logical(isTRUE(data[,-7]))
Since this seemed to work with -1 for the same elsewhere, however my first columns all ended up as TRUE/FALSE too.
I also tried:
data[8:ncol(data)] <- sapply(data[8:ncol(data)], isTRUE)
But that resulted in everything being FALSE!
I also tried:
data[data==NA] <- FALSE
But that didn't seem to do anything (still has NAs).
I'm completely new to ML and R so please bear that in mind when answering. I don't know hardly any of the functions (or even completely understand all the syntax for selecting/replacing subsets of the dataframe as I'm trying to do here!).
AI: I found that this works (though I'm interested if there's a better way - e.g. avoiding the loop, or being able to exclude the columns by name instead of just the first 8).
for (i in 8:ncol(data)){
data[,i] <- sapply(data[,i], isTRUE)
} |
H: Data-generating probability distribution, probability distribution of a dataset, in ML
In Goodfellow I, Bengio Y, Courville A. Deep learning. MIT press; 2016 Nov 10.
http://thuvien.thanglong.edu.vn:8081/dspace/bitstream/DHTL_123456789/4227/1/10.4-1.pdf
p. 102 (for example), it is said that with Unsupervised Learning, one usually wants to ''learn the entire probability distribution that generated a dataset'', $p(\vec{x})$.
My question is that I would like to have a better interpretation/understanding of the concept, i.e., can we say that $p(\vec{x})$, for a given example (represented as a vector) $\vec{x}$ in the dataset, means the ''probability to find this example a priori'' in the wild... or something like that? E.g. if it is an image of a cat of breed X, i.e., $\vec{x_i}$ = ''vectorized image of this cat of breed X'', does $p(\vec{x_i})$ mean the probability (which we estimate given our limited dataset, of course) to obtain this image of this cat of breed X if we draw a sample from this dataset - and, even if we want to generalize, the probability to obtain this cat if we sample the TEST SET?
There is a similar question here, but the answer is far from answering anything about the question:
What does it mean for the training data to be generated by a probability distribution over datasets
AI: It's more of a theoretical distribution, that a concrete one.
The main idea is this:
we consider all data to have an underlying distribution which generates the data. Through the procedure of creating a dataset we effectively sampled some instances from it. Now I like to think of this distribution as a theoretical notion of all possible data of this type that could ever exist.
Let me give you an example:
suppose we have the cats vs dogs dataset. This dataset contains $25000$ images of cats and dogs. Now we can consider the cat images as samples from a larger population. But what would this population include? All cat images on the web? All cat images in existence? Or all cat images that could ever conceivably exist? Let's refer to this population as $C$. This population follows a certain distribution (not every image is an image of a cat); this distribution essentially tells us what makes a cat, a cat. Now if I were to take a picture of a cat tomorrow (let's call this $c$), I would have effectively taken a sample out of this population (i.e. $c \sim C$).
Where does this come into play in Machine Learning?
Well, generative models essentially try to learn this distribution, and they try to do this through its samples (i.e. our dataset). They look at its samples and try to generalize to identify what distribution spawned them. Essentially they try to answer the question what makes $c$ a sample of $C$?
Furthermore, even discriminative models make several assumptions about the data (e.g. that the samples are independent and identically distributed, that the training and test set follow the same underlying distribution)
More formally
The training and test data are generated by a probability distribution over datasets called the data-generating process. We typically make a set of assumptions known collectively as the i.i.d. assumptions. These assumptions are that the examples in each dataset are independent from each other, and that the training set and test set are identically distributed, drawn from the same probability distribution as each other. This assumption enables us to describe the data-generating process with a probability distribution over a single example. The same distribution is then used to generate every train example and every test example. We call that shared underlying distribution the data-generating distribution, denoted $p_{data}$. This probabilistic framework and the i.i.d. assumptions enables us to mathematically study the relationship between training error and test error.
- I. Goodfellow et al. "The Deep Learning Book" sec 5.2
I'd suggest reading chapter 5 from this book because the authors explain a lot of well known ML concepts (bias, variance, overfitting, underfitting etc.) through the scope of this data-generating distribution.
Edit after comment suggestion:
The question is essentially,
how does the data-generating distribution fit into the training process of a neural network?
The answer isn't so obvious, mainly because neural network classifiers are discriminatve models (i.e. they don't try to identify the data-generating distribution; rather they try to find out what features separate the classes among one another). Also I'd like to add that, as stated previously, the data-generating distribution is a theoretical concept, not a concrete one employed during training.
There is a way, through, we can tie this into the whole training procedure. Initially, consider that NNs try to minimize the cross-entropy loss between its predictions $\hat y$ and the actual labels $y$:
$$
Loss(y, \hat y) = - \sum_i y_i \, log \, \hat y_i
$$
Now let's think of $y$ and $\hat y$ not as tensors but as probability distributions. The first represents the probability that a sample would belong to class $y$, while the second represents the probability with which the network thinks a sample belongs to that class.
We can take this one step further and compute the KL divergence between $y$ and $\hat y$. This metric essentially tells us the difference between two distributions (higher values mean distributions are more different).
$$
KL \left( y \|\| \hat y \right) = \sum_i y_i \, log \, \frac{y_i}{\hat y_i}
$$
Note that minimizing cross-entropy is equivalent to minimizing the KL divergence between these two distributions. If the two distributions are identical their KL divergence has a value on $0$; this value increases the more they differ.
Minimizing the KL divergence between two distributions is the same as minimizing the JS divergence between them. This is a metric derived from KL, which can be used as a distance function between distributions (i.e. how close $y$ is to $\hat y$).
So if you think of it this way, Neural Networks are trained to minimize the distance between the actual data-generating distribution $y$ and their perception of the data-generating distribution $\hat y$.
In order for this to be achievable, some assumptions must be held:
The samples we have must be representative of the distribution (i.e. $y_i^{train} \sim y$).
The test samples we'll use to evaluate our network on must follow the same distribution (i.e. $y_i^{test} \sim y$).
The network must have sufficient capacity to learn this distribution.
The right optimization strategy needs to be followed to minimize the distance.
etc. |
H: Is conditional GAN supervised learning?
I am trying to understand this paper about conditional GAN, it says that extra information y (class labels) is given to the network. However, I cannot understand its usage during training or its benefits. As far as I know, GAN is unsupervised learning, in this case of extra information usage, can we say that the architecture is supervised?
AI: I'm assuming you understand the original gan paper.
So there are 2 distributions at the start - the first distribution that the original images follow and a random distribution that the fake images follow.
The discriminator's task is to figure out which image came from what distribution, whereas the generator is trying to learn the real distribution and make the random distribution similar to the real one.
Now giving image labels as input is like giving an extra bit of information about the distribution. This doesn't change the game; it's the same unsupervised one.
There will be two consequences of adding this extra bit of information:
Even the random distribution that the fake images follow will have some pattern. Hence convergence will be faster.
You can control the output of the generator at test time by giving the label for the image you want to generate.
H: is there a way to normalize [-3,1] to ${\begin{bmatrix} \dfrac{-3}{\sqrt{10}}\\ \dfrac{1}{\sqrt{10}}\\ \end{bmatrix}}$ with python?
I am learning SVD by following this MIT course
The lecturer is trying to normalize a vector
$${\begin{bmatrix}
-3\\
1\\
\end{bmatrix}}$$
to
$${\begin{bmatrix}
\dfrac{-3}{\sqrt{10}}\\
\dfrac{1}{\sqrt{10}}\\
\end{bmatrix}}$$
I tried this with Python NumPy
np.linalg.norm(v1,ord=2,axis=1,keepdims=True)
and got
array([[3.],
[1.]])
I would like to get something like this
[[-0.9486833 ],
[ 0.31622777]]
is there a way with Python (for instance, NumPy) to do the job? any other 3rd party library is also appreciated.
AI: You have already computed that, but you've not bound the output to a variable (also called a name in Python). Try the following snippet:
result = np.linalg.norm(v1,ord=2,axis=1,keepdims=True)
print(result)
Based on the edit, I update the answer. A typical way to do what you need is something like the following function:
def normalize(v):
    norm = np.linalg.norm(v)  # Euclidean (L2) norm of the vector
    if norm == 0:             # avoid dividing by zero for the zero vector
        return v
    return v / norm
Equivalently, there is a function called normalize in sklearn.preprocessing which can be employed for your task. |
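For example, applied to the vector from the lecture:
v1 = np.array([-3.0, 1.0])
print(normalize(v1))                      # [-0.9486833   0.31622777]

# sklearn equivalent (expects a 2-D array, one vector per row):
from sklearn.preprocessing import normalize as sk_normalize
print(sk_normalize(v1.reshape(1, -1)))    # [[-0.9486833   0.31622777]]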
H: Should input images be normalized to -1 to 1 or 0 to 1
Many ML tutorials normalize input images to values in the range -1 to 1 before feeding them to the ML model. The ML model is most likely a few conv 2d layers followed by fully connected layers. Assume the activation function is ReLU.
My question is: would normalizing images to the [-1, 1] range be unfair to input pixels in the negative range, since through ReLU the output would be 0? Would normalizing images into the [0, 1] range instead be a better idea?
Thank you.
AI: In addition to just initialization (as the great answer of Djib2011 notes), many analyses of artificial neural networks utilize or rely on the normalization of inputs and outputs (e.g., the SELU activation). So normalizing the input is a good idea.
Often, however, this can be done with normalization layers (e.g., LayerNorm or BatchNorm), and furthermore, we may want to enforce that the pixels are in a particular fixed range (since real images are like this). This is especially important when the output is an image (e.g., for a VAE of images). Since we need to compare the input image $I$ to the output image $\widehat{I}$, it should be readily possible to enforce the pixel values of $\widehat{I}$ into a simple, known, hard range. Using sigmoid produces values in $[0,1]$, while using tanh does so in $[-1,1]$. However, it is often thought that tanh is better than sigmoid; e.g.,
https://stats.stackexchange.com/questions/142348/tanh-vs-sigmoid-in-neural-net
https://stats.stackexchange.com/questions/330559/why-is-tanh-almost-always-better-than-sigmoid-as-an-activation-function/369538
https://stats.stackexchange.com/questions/101560/tanh-activation-function-vs-sigmoid-activation-function
In other words, for cases where the output must match the input, using $[-1,1]$ may be a better choice. Furthermore, though not "standardized", the range $[-1,1]$ is still zero-centered (unlike $[0,1]$), which is easier for the network to learn to standardize (though I suspect this matters only rather early in training).
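For concreteness, the two scalings for a uint8 image array img (a placeholder variable) are simply:
x_01  = img.astype('float32') / 255.0        # values in [0, 1]
x_m11 = img.astype('float32') / 127.5 - 1.0  # values in [-1, 1]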
Also, for this phrase
would normalizing images to [-1, 1] range be unfair to input pixels in negative range since through ReLu, output would be 0
the answer is "no". Mainly because the non-linear activation happens after other layers first in nearly all cases. Usually those layers (e.g., fully connected or conv) have a bias term which can and will shift around the range anyway (after some additional, usually linear, transformation occurs).
It is true however that values below zero do "die" wrt to their contribution to the gradient. Again, this may be especially true early in training. This is one argument for using activations other than ReLU, like leaky ReLU, and it is a real danger. However, the hope is that these values should have more than one way to propagate down the network.
E.g., multiple outputs in the first feature map (after the first convolutional layer, before the activation) will depend on a given single input, so even if some are killed by the ReLU, others will propagate the value onwards.
This is thought to be one reason why ResNet is so effective: even if values die to ReLU, there is still the skip connection for them to propagate through.
Despite all this, it is still probably more common to normalize images with respect to the statistics of the whole dataset. One problem with per-image normalization is that images with very small pixel value ranges will be "expanded" in range (e.g., an all blue sky with a tiny cloud will immensely highlight that cloud). Yet, others may consider this a benefit in some cases (e.g., it may remove differences in brightness automatically).
Ultimately, the optimal approach is up for debate and likely depends on the problem, data, and model. For more, see e.g.
[1],
[2],
[3],
[4],
[5],
[6],
[7] |
H: Predict using a saved regression model
I have trained an ANN model for a regression problem which takes 10 parameters as input and gives 1 output. After training, I saved the model as JSON and the weights as a .h5 file using Keras. Now I want to make predictions on new data. I have loaded the model, and my question is how do I scale this single row of input values before feeding it to the model? Some machine learning algorithms did not require scaling (standardization/normalization), so I could load those models and use them for making predictions directly. How do I do this for an ANN model, since we scale the data for training?
AI: Fit your scaler (e.g. sklearn.preprocessing.StandardScaler, or whatever scaling you used for training) on the training data and transfer it to your production code (I'm guessing that's how you're using the model for inference) via sklearn.pipeline.Pipeline. The pipeline will save the state of your fit on your training data and apply the same transformation to your production data.
Example :
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import joblib

pipeline1 = Pipeline([
    ('scaler', StandardScaler())
])
pipeline1.fit(trainingdata)
This is how you create a pipeline containing the scaler and fit it on your training data. All that is left is dumping your pipeline to a file, loading it later in your production environment, and calling the transform method on your loaded pipeline:
joblib.dump(pipeline1, "pipeline1.joblib")
# Production environment
pipeline1 = joblib.load('pipeline1.joblib')
momo = pipeline1.transform(productiondata.values.reshape(1, -1))  # a single row of 10 values
And here, the variable momo contains your production data with the pipeline (containing the scaling operation) applied to it, ready to be fed to your loaded Keras model.
H: What is the difference between handcrafted and learned features
I am having difficulty understanding what the differences are between handcrafted and learned features.
Is it just the case that the handcrafted features are the input variables, and that the learned features would refer to the output variable? Or are they (handcrafted and learned) both types of input variables?
Can anyone help with this?
Thanks.
AI: In short
The classification is the following:
handcrafted are features that are manually engineered by the data scientist.
learned features are ones that are automatically obtained from a machine learning algorithm
Let me give you an example:
Suppose you are doing an image classification task, where you want to classify cats from dogs. You want to build a classifier but you are faced with the dilemma: how do I input my data to the classifier? You have two options:
Use the raw pixel data. The issue with this is that you have a vast feature space, which makes it hard for models to generalize.
Attempt to extract features from the image so that you can reduce your feature space.
Now if you choose the second you have two more options:
Manually define a set of features and extract them. Some examples include edge detection, corner detection, histograms, etc. The problem with this approach is that nothing guarantees that, say, the number of corners is a good descriptor for classifying cat and dog images.
The alternative is to train a ML model to identify and extract useful features for this specific classification task. This is exactly what a Convolutional Neural Network does: it learns which features are best for classifying the images.
Traditionally the first approach was used extensively in Machine Learning. That changed however with the arrival of Deep Learning.
An example of what features CNNs identify when classifying different types of images:
Note that these features aren't general, low-level features like edges or corners; instead they are tailored for each class. That is the power of training a model to extract features.
Comparison
Because learned features are extracted automatically to solve a specific task, they are extremely effective at it. In fact deep learning models that perform feature extraction and classification outperform models that classify manually extracted features by a large margin. This is one of the reasons why deep learning is so popular.
On the other hand, we have no control on what features the model will extract from the data. In many cases these features are only good for classifying the data and have no real-world interpretation. They are only good for the task that they were trained for. |
H: Removing # from header
I have a $(418,2)$ matrix and I want to convert it to csv. so I write:
np.savetxt('titanic1.csv', Sol, fmt='%.2f', delimiter=",",header="PassengerId,Survived")
But the header I get is "# PassengerId" instead of "PassengerId" which is what I want. How can I solve this?
AI: The documentation of numpy.savetxt says you can add the parameter comments='' to suppress the leading "# ", but warns that np.loadtxt() might not work on the resulting file.
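Applied to the call from the question, that looks like:
np.savetxt('titanic1.csv', Sol, fmt='%.2f', delimiter=",", header="PassengerId,Survived", comments='') |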
H: How is the "loss" calculated which is supplied by the callback log in Keras?
I.e. categorical cross entropy? binary cross entropy? Something else?
Or is it perhaps the loss function which you pass into the model.compile method?
AI: Yes, it is the loss function you pass to model.compile.
See here for loss metric usage examples. You can also define your own metric (see "Custom metrics" at the bottom of the page from the last link).
You can read about the Model class documentation here. The first method is the compile method with argument descriptions.
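As a concrete illustration (a minimal sketch; the model and data names are placeholders and the loss here is just an example), whatever you pass to compile is exactly what shows up as "loss" in the logs handed to callbacks:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=5)
# history.history['loss'] (and the logs dict passed to callbacks) holds the categorical
# cross entropy per epoch, i.e. the loss function given to compile.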
If you want to dig deeper into Keras callbacks and the built-in loss functions, check out these sources:
losses source code
Keras callbacks |
H: Where to store dataset when creating neural network models and using github?
I have been recommended to use GitHub for all my coding projects as it is a great way to demonstrate to employers what you have been working on as well as just sharing anything you have created that might be useful to others. So I am storing my code for Neural Network Models I am working in a GitHub repository but I am not sure when I download a data set to run a test with where I should download that too? Inside or outside the git repository?
Sorry if this is a dumb question but new to this. I can imagine arguments for either answer. Mainly size being too big or the results being easily reproducible.
AI: If the dataset isn't too large, you may add datasets on GitHub itself. Generally project datasets aren't large, but again, I can't be sure as you haven't mentioned any approximate size. If it is large, you may refer to the data source in your README or within your .py file as an external hyperlink.
For example, save the dataset on Google Drive and turn on the Viewable option by generating a shareable URL. That generated link can then be referred to in your README file with something like: "Kindly refer to this external link to access the dataset". Alternatively, you may also read the data directly within your .py file from that generated link, and any project reviewer would notice that external link being used to read the dataset.
Hope that answers your query! |
H: Why does np.linalg.eig produce an opposite-signed eigenvector?
I am learning SVD by following this MIT course. In this video, the lecturer is finding the SVD for
$$
\begin{pmatrix} 5 & 5 \\ -1 & 7 \end{pmatrix},
$$
which involves finding the eigenvalues for
$$
C^T C =
\begin{pmatrix}
26 & 18 \\ 18 & 74
\end{pmatrix}.
$$
In the example (at the time in the link above), the lecturer finds eigenvectors
$$\begin{pmatrix}-3/\sqrt{10} \\ 1/\sqrt{10} \end{pmatrix},
\begin{pmatrix} 1/\sqrt{10} \\ 3/\sqrt{10} \end{pmatrix}.$$
But np.linalg.eig produces the opposite vector to the second one:
w, v = np.linalg.eig(C.T*C)
v
matrix([[-0.9486833 , -0.31622777],
[ 0.31622777, -0.9486833 ]])
Why?
AI: Any scalar multiple of an eigenvector is also an eigenvector. LAPACK (which np.linalg.eig uses under the hood) chooses to return unit-length eigenvectors (good for SVD!), but this still leaves two choices, and there doesn't seem to be a convention for which one to return; it's up to the underlying algorithm (which in turn may depend on the input data).
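A quick way to convince yourself (a small numpy sketch, not from the lecture):
import numpy as np

A = np.array([[26., 18.], [18., 74.]])     # this is C^T C from the question
w, v = np.linalg.eig(A)
x = v[:, 1]                                 # an eigenvector (a column of v)
print(np.allclose(A @ x, w[1] * x))         # True
print(np.allclose(A @ (-x), w[1] * (-x)))   # also True: -x is an equally valid unit eigenvector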
https://stackoverflow.com/questions/17998228/sign-of-eigenvectors-change-depending-on-specification-of-the-symmetric-argument |
H: CASE statement for SQL query
I have a question abut MySQL
I would like to handle a column with characters without changing the original DB. Below is the levels of the column named 'Maker'.
기아, 현대, BMW, VolksWagen, Porsche
So there are about 10 기아s, 20 현대s, 30 BMWs and so on.
As you can see, some of them are written in Korean and others are in English.
I do not want to touch English-written words. I only want to change Korean words into English.
I tried to use CASE and IF statements like the one below.
SELECT
CASE
WHEN Maker='기아' THEN 'KIA'
WHEN Maker='현대' THEN 'HyunDai'
END AS new_Maker
FROM table_name
It successfully changes KIA and HyunDai, but BMW, VolksWagen, and Porsche show as NULL.
I would appreciate any advice. Thank you.
AI: You haven't specified what happens for values other than '기아' ('KIA') and '현대' ('HyunDai').
Try
SELECT
CASE
WHEN Maker='기아' THEN 'KIA'
WHEN Maker='현대' THEN 'HyunDai'
ELSE Maker
END AS new_Maker
FROM table_name |
H: conv net data retrieval on unseen class
I have built a conv net for image classification which works "well".
Now I extract features from the last fully connected layer and use them for image retrieval (finding the image most similar to my target image) using the Hamming distance. It's working pretty well, even if I can't predict how rotation invariant or how sensitive to noise it will be.
I have tried image retrieval only on classes that have been seen by my model during training (but not on the training data itself). Do you know if it could work on classes that have never been seen by the model during training?
e.g.
Let's say the model has been trained on cars and trucks. I want to find the image most similar to a small yellow car, and it works well: it returns small yellow cars.
Now let's imagine I apply it to a new data set with planes. I want to find the image most similar to a small yellow plane. Should I expect to find small yellow planes, or will the result be totally random?
Since the network has never seen a plane during training, is it possible to predict, or at least have an intuition about, the result?
AI: It depends. Let me explain what features are detected when you train a classification model.
Let's take the same example as yours - car and truck classification. The CNN will extract features from both types of images - tires, windows, shape, texture etc.
Then it'll separate the features - if it sees something like bigger tires, bigger windows, a rectangular shape, it'll give more weight to the truck class. Similarly for the car class.
Now coming to your image retrieval part. This is what you're doing - passing all the images through the CNN and extracting features for each image, then comparing your test image features with the other features to find the most similar one.
Now think: what happens if you pass a new image to the same CNN? What features will be extracted?
The CNN will extract the same features as it did for cars and trucks - tires, windows, shape etc., because that's all it was trained for. It won't start extracting wings as a feature just because you pass an airplane.
So if you pass an airplane as the test image, the same features as for cars and trucks will be extracted and you will get results based on the comparison of these features. Hence if you pass a small yellow airplane, you might get a small yellow car with the same tire size as your airplane, a small yellow truck with a shape similar to your plane, or in the best case a small yellow airplane.
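For concreteness, a minimal sketch of this kind of retrieval in Keras (assuming a trained model named model whose second-to-last layer holds the features; all names here are placeholders):
import numpy as np
from tensorflow.keras.models import Model

feature_extractor = Model(inputs=model.input, outputs=model.layers[-2].output)
db_features = feature_extractor.predict(database_images)    # features for every stored image
query = feature_extractor.predict(query_image[None, ...])   # features for the target image

# cosine similarity between the query and every stored image
sims = (db_features @ query.T).ravel() / (
    np.linalg.norm(db_features, axis=1) * np.linalg.norm(query) + 1e-9)
most_similar = np.argsort(-sims)[:5]                         # indices of the 5 closest images |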
H: How to interpret multiple histograms corresponding to each feature in multiple linear regression for relationships?
I used matplotlib to plot the histograms for each feature in the Boston dataset available in the scikit-learn library.
How do I interpret the histograms to determine the correlation or significance of each feature for model training?
AI: A histogram shows the distribution of values in a feature/variable. For instance in CHAS you see that there are two values (0,1), which means binary encoding (aka one-hot or dummy).
Histograms are helpful to understand the general structure of data and to detect things such as outlier or censoring. They are also good to get a quick idea of skewness in the data/distribution. Look at „age“ for example. You can see concentration of values at the top of the x-values. This likely is due to houses being older than 100 years are recorded as „100“ in the data. A similar pattern occurs for the price.
However, wrt „significance“ and correlation, histograms do not tell you a lot. For correlation, look at correlation measures. For significance (I guess you mean importance), you need to do sequential testing of features, or employ Lasso/Ridge regression for a more systematic approach.
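For the correlation part, a quick sketch with pandas (the Boston data loaded into a DataFrame with the target added as a column):
import pandas as pd
from sklearn.datasets import load_boston

boston = load_boston()
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['PRICE'] = boston.target
print(df.corr()['PRICE'].sort_values())   # linear correlation of each feature with the price |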
H: How do I find the count of a particular column, based on another column(date) using pandas?
I have a dataframe with 3 columns, such as SoldDate,Model and TotalSoldCount. How do I create a new column, 'CountSoldbyMonth' which will give the count of each of the many models sold monthly?
Date Model TotalSoldCount
Jan 19 A 4
Jan 19 A 4
Jan 19 A 4
Jan 19 B 6
Jan 19 C 2
Jan 19 C 2
Feb 19 A 4
Feb 19 B 6
Feb 19 B 6
Feb 19 B 6
Mar 19 B 6
Mar 19 B 6
The new df should look like this.
Date Model TotalSoldCount CountSoldbyMonth
Jan 19 A 4 3
Jan 19 A 4 3
Jan 19 A 4 3
Jan 19 B 6 1
Jan 19 C 2 2
Jan 19 C 2 2
Feb 19 A 4 1
Feb 19 B 6 3
Feb 19 B 6 3
Feb 19 B 6 3
Mar 19 B 6 2
Mar 19 B 6 2
I tried doing
df['CountSoldbyMonth'] = df.groupby(['date','model']).totalsoldcount.transform('sum')
but it is generating a different value.
AI: data['CountSoldbyMonth'] = data.groupby(['Date','Model']).TotalSoldCount.transform('count') works here. The difference from your attempt is 'count' vs 'sum': transform('count') returns, for every row, the number of rows in its (Date, Model) group - which is what you want - whereas transform('sum') adds up the TotalSoldCount values within each group. Also note that the column names are case sensitive (Date, Model, TotalSoldCount). |
H: Difference between LASSO penalty in neural network and just LASSO regression
I wonder whether those two have any significant differences.
I think in neural network, the lasso penalty put on the loss function makes the model simpler and introduces more sparsity by automatically reducing some of the features to 0. I assume that's basically the lasso regression is doing?
AI: In its „pure“ form, Lasso is just a penalty on top of the RSS of a normal (OLS) regression. The result is that your OLS parameters are shrunken.
See section 6.2.2 of „Introduction to Statistical Learning“.
In neural nets the weights are shrunken (there can be many of them and usually much more than in normal regression). In Keras/TF this happens layer-wise. See this post for some background.
So the concepts are quite different. Also the objective of using L1 can be quite different, e.g. when it is used to approach high dimensional problems in OLS. What they have in common is that you try to get rid of unnecessary „noise“ in your estimation to get a better fit (and to avoid overfitting).
Note that it is NOT the features that are shrunken but the parameters/weights.
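To make concrete where the penalty is applied in each case, a small sketch (regularization strengths and layer sizes are illustrative, and X_train/y_train are placeholders):
from sklearn.linear_model import Lasso
from tensorflow.keras import layers, models, regularizers

# Lasso regression: one L1 penalty on the coefficients of a linear model
lasso = Lasso(alpha=0.1)
lasso.fit(X_train, y_train)      # some coefficients may become exactly 0

# Neural network: an L1 penalty attached per layer to that layer's weights
model = models.Sequential([
    layers.Dense(32, activation='relu', kernel_regularizer=regularizers.l1(0.01),
                 input_shape=(X_train.shape[1],)),
    layers.Dense(1)
]) |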
H: ValueError: pos_label=1 is not a valid label: array(['N', 'Y'], dtype='
X = train_encoded_df.iloc[:, 1: ]
y = train_encoded_df["Loan_Status"]
print("Precision:",metrics.precision_score(y_test, y_pred))
My training data contains the categorical features encoded using get_dummies().
This is causing the error:
> ValueError: pos_label=1 is not a valid label: array(['N', 'Y'], dtype='U1')
How to fix this?
AI: pos_label is an argument of scikit-learn's precision_score (docs); its purpose is, well, to indicate which label is the positive one and, if not given explicitly (like in your case here), it assumes the default value of 1 (again, check the docs).
Since it seems that the positive label in your case is 'Y', replace the last line with:
print("Precision:",metrics.precision_score(y_test, y_pred, pos_label='Y')) |
H: Multiclass Regression for density prediction
This is my first question in the DS-community, so I'm happily willing to accept any kind of (meta-) advice :).
I have time-series data for a set of users (~100), whereas each 15 min it is logged to which antenna (~80) they were connected (similar to cell phone connections).
Based on this data, I created a density vector, which for some time t (i.e. 01.01.2016 at 06:00) counts how many users are connected to which antenna.
Such a density object (for same day and time as given above) might look like this:
100: 5;
101: 2;
102: 3;
103: 0;
whereas the first number refers to some ID of the antenna, and the 2nd number refers to the number of users connected to the antenna.
I'm planning to feed this time-series data to a recurrent neural network.
The results should be the predicted number of users connected to the antenna at the next time step (so every 15 minutes). So it might predict for 01.01.2016 at 06:15:
100: 7;
101: 0;
102: 1;
103: 2;
Now I'm wondering what should the output layer be like? Regarding number of neurons and activation function especially. I've been reading quite a lot about multinomial logistic regression but some confirmation would be nice.
If it should output the predicted number of users per antenna, it should probably have the same number of neurons, so 80, just as in a multi-class classification scenario with a softmax activation function.
So what I need is a different activation function, but even after reading quite a bit, I couldn't wrap my head around it yet.
i.e. Get multiple output from Keras proposes to use a linear activation function, but in their case, they tried to predict the next 3 values, by using regression; whereas I am trying to predict the next 1 value for a set of antennas.
PS: For constructing the Neural Network, I'm using (Tensorflow-) Keras.
PPS: For feeding the neural network, I would generate the density vector for all time steps, and then feed batch-wise with batch_size = 80 (number of antennas). Out of curiosity: I happen to have only 1 feature, so the input_shape probably be (batch_size, 1, 1*80); If I was to have 2 features, would it then be (batch_size, 1, 2*80)?
PPPS: Not even quite sure how to name this problem. I think it would probably be called a (time-series) multiclass regression problem but I couldn't find any example with the same name (left aside the multinomial logistic regression).
AI: If you want to predict the raw number of users, then this is a classical regression problem. Set an output layer with a node for each antenna, and no activation function.
If instead you need to predict a probability distribution / frequency over all antennas, use a softmax activation, so that each output vector sums up to 1.
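A minimal Keras sketch of the regression variant (80 antennas; the recurrent layer size is a placeholder):
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(64, input_shape=(None, 80)),  # (timesteps, features): None allows variable sequence length
    layers.Dense(80)                          # linear output: predicted user count per antenna
])
model.compile(optimizer='adam', loss='mse') |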
H: Neural network got a lucky guess. Can it be trusted?
Say you come across a loss curve as shown below. At which loss should you trust the model? The initial lucky guess or after it has stabilized?
And more importantly, why?
AI: The loss in the curve suggests that the training can be improved by tuning hyperparameters, especially the learning rate and/or the batch size. Therefore the optimal decision would be to keep refining the model instead of using the already trained model.
If tuning hyperparameters is not an option, you should at least re-split the training and validation data, or at the very least repeat with a different random seed.
If none of those is an option, you should take the best validation loss. The reason is that we are assuming that there is no leak of the validation data to the training data and that the validation data is representative of the data the model is going to be tested with. Without any other evidence, we should assume that the "sweet spot" you found would also lead to better results in a piece of test data the model has not seen before. |
H: How to cache GridSearchCV optimizer result in Google Colab?
I have to execute GridSearchCV() cell every time I reload the page and it takes a lot of time. Is there a caching mechanism which stores the GridSearchCV result so that I can use it without executing the cell on the reload?
Code:
neigh_clf = KNeighborsClassifier()
grid_search = GridSearchCV(neigh_clf, param_grid, cv=5,verbose=3,n_jobs=-1)
grid_search.fit(X_train, y_train)
grid_search.best_params_
I am trying to save grid_search object here so that I can retrieve best_params_ value after reload
AI: We can save the trained model or any other file via Google Colaboratory.
How am I using it?
I have mapped my Google Drive to a Google Colaboratory notebook and
saved the trained model as a pickle file in it. You can create a file and save your data in it.
How to integrate Google Drive with Google Colaboratory notebook?
#Add and execute below mentioned line of code in Google colaboratory notebook cell.
from google.colab import drive
drive.mount('/content/drive')
#Once you execute these two lines, it will ask you to authorize it. Please follow the steps as visible on the Google Colaboratory Notebook.
That's it :).
Few important points:
Now you can run the usual Linux commands to browse the directories.
By default it is mapped under the /content folder, and you can create sub-directories in it.
You can visit these sub-directories via the Google Drive application as well.
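Applied to the question, a small sketch (the Drive path is just an example; adjust it to wherever you mounted the drive):
import joblib

# after fitting once
joblib.dump(grid_search, '/content/drive/My Drive/grid_search.joblib')

# after a page reload: skip the expensive fit and just reload the result
grid_search = joblib.load('/content/drive/My Drive/grid_search.joblib')
print(grid_search.best_params_) |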
H: in TensorFlow 2.0, what is the different between LSTM and LSTMCell objects?
I am trying to implement an RNN in TensorFlow 2.0 (beta1). Looking at the layer functions (inherited from Keras) I found:
tf.keras.layers.LSTM
and
tf.keras.layers.LSTMCell
What is the different between the two? If you take a look at their arguments, they seem the same.
AI: You are right - the difference is minimal. The base LSTMCell class implements the main functionality required, such as the build method, whereas the LSTM class only contains an entry point (the call method) as well as a bunch of getters to retrieve attribute values. LSTMCell is the basic building block, and it is used as the cell inside the LSTM class.
All links point to relevant parts of tensorflow.keras source code.
My advice would be to use the standard LSTM class in your model as a normal layer. If you have a GPU at your disposal, you might want to use the version of the layer that is optimised with CUDA for execution on a GPU. As per the documentation:
Note that this cell is not optimized for performance on GPU. Please use
tf.keras.layers.CuDNNLSTM for better performance on GPU.
There is also a GRU layer as well as a CuDNNGRU layer.
If you want to tweak how things work under the hood, you might create a class and inherit from the LSTMCell, or even the base class:
from tensorflow.python.keras.engine.base_layer import Layer
class MyLSTM(Layer):
pass
But you would have to implement many things yourself.
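In practice the relationship looks like this (a small sketch; the two lines build an equivalent recurrent layer, the second one explicitly wrapping the cell):
import tensorflow as tf

layer_a = tf.keras.layers.LSTM(32)                           # convenient, ready-to-use layer
layer_b = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(32))  # same computation: a cell wrapped by the generic RNN layer |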
H: How to efficiently iterate a supervised model over the Cartesian product of very large number of records?
The problem:
Two large databases, with ~1M records each, "old customer data" and "new customer data". The data came from different sources and was ingested at different times, so there are many duplicates, but the duplicates might not be exact matches. For example In the old data, the customer was listed as "Michael Smith" but in the new data, they were listed as "Mike Smith" or "M. Smith", or the names match, but the address field is different: How do we now if it is different people with the same name or the same person who changed address?
Approach:
It seems possible to use a supervised learning approach to classify a pair of records as either being duplicates or not.
Question:
Assuming such a model is possible and we have trained it, how would we iterate over the entire data set to produce our predictions?
To apply a supervised approach (or any ML/Probabilistic approach), I can only think of naively examining each pair of records one-by-one, but this means our models would have to iterate through $10^{12}$ records, which doesn't seem feasible even with advanced compute capabilities?
How can one efficiently iterate/search through the combined data set in a situation like this?
AI: This problem is called record linkage and there are methods to avoid iterating the whole cartesian product. The main method I know was called "blocking" and consists in doing a first "rough" pass to create groups of matching candidates (the "blocks"). For example you can create groups which contain at least X n-grams in common. This can be done through one linear iteration through all the entities, storing them in every applicable bin based on their n-grams (an entity can be stored in several bins). I assume that some kind of clustering could also be used to generate the groups of similar entities. Then you end up with multiple groups of smaller size, and you run the cartesian product comparison on every group individually. This can greatly reduce the complexity.
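A minimal sketch of the blocking idea (character 3-grams on a name field; the records list, the field name and the use of id() as a stand-in for a real pairwise matcher are all assumptions):
from collections import defaultdict
from itertools import combinations

def ngrams(s, n=3):
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

# 1) one linear pass: put every record into a bin for each n-gram of its name
bins = defaultdict(list)
for rec in records:                      # records: list of dicts with a 'name' key
    for g in ngrams(rec['name']):
        bins[g].append(rec)

# 2) run the expensive pairwise comparison only inside each (much smaller) bin
candidate_pairs = set()
for group in bins.values():
    for a, b in combinations(group, 2):
        candidate_pairs.add((id(a), id(b)))   # placeholder for the real pairwise matcher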
note: I was working on this maybe 10 years ago so there might be more recent approaches. |
H: Python to sum values in a column
I've created a Python code that reads the data from an excel file using Pandas.
Code for your reference:-
import pandas as pd
def myFunc():
    file = r"C:\Documents\myFile.xlsx"
    new_dataframe = pd.read_excel(file, 'Sheet1')
    new_dataframe.fillna(value="No Data Found", inplace=True)
    print(new_dataframe)

myFunc()
Current Output:-
Name date amount_used
0 P1 2018-07-01 40.0
1 P1 2018-07-01 40.0
2 P1 2018-07-15 40.0
3 P2 2018-08-01 20.0
4 P2 2018-09-15 50.0
5 P2 2018-08-15 40.0
6 P3 2018-08-10 20.0
7 P3 2018-08-10 50.0
8 P3 2018-08-10 40.0
In the final output, I need to sum the amount_used column based on Name and date column.
Expected Output:-
Name date amount_used
0 P1 2018-07-01 80.0
1 P1 2018-07-15 40.0
2 P2 2018-07-01 20.0
3 P2 2018-08-15 90.0
4 P3 2018-08-10 110.0
How can I achieve this using pandas ?
AI: You can use groupby and then sum
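For the data in the question that looks like this (a short sketch; new_dataframe is the frame read from your Excel file):
summed = new_dataframe.groupby(['Name', 'date'], as_index=False)['amount_used'].sum()
print(summed)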
Take a look at https://stackoverflow.com/questions/39922986/pandas-group-by-and-sum |
H: r"""What does it mean?"""
I see r""" followed by a comment in quite a few of the source codes for PyTorch and the one I am looking at is here. What is the r for?
AI: An r in front of a string makes it a raw string literal. In short it means that any \ in that string will not be seen as an escape character, like when you write \n for newline.
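A quick illustration:
print('line1\nline2')    # \n is interpreted as a newline -> prints two lines
print(r'line1\nline2')   # raw string: prints the backslash and the n literally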
Some more info can be found here:
What exactly do “u” and “r” string flags do, and what are raw string literals? |
H: Implication of a dominant Principal Component in PCA analysis
I need help: are there any practical implications of a dominant principal component? For example, if, of three PCs, PC1 explains almost 100% of the variance in the dataset, what does this mean in practical terms, and what does it say about the dataset?
Any help is highly appreciated. Thanks!
AI: The principal components describe the amount of the total variance that can be explained by a single dimension of the data.
This is equivalent to the spread of the datapoints in a given dimension. The dimensions are (of course) direction that are orthogonal i.e. at 90 degrees to one another.
Have a look at this example of data points, where the red lines show the breadth of the data in two dimensions:
The dimensions don't have to be in X and Y - they could be pointing in any direction, but must be orthogonal. (more detail in this answer)
We can clearly see a bigger spread in the horizontal X-dimension, so I might expect it to account for 80% of the variance in the dataset. The vertical Y-dimension has less variance, less spread, so explains a smaller amount of the total variance. In this simple 2d example, it would explain the remaining 20% of the variance (it must sum to 100%).
In practical terms, if principal components have all very similar values, you might expect the data to form a circle (in 2d), and this means there is little directionality in the feature-space. You might like to think in terms of correlation between the features; a movement in one direction of the space does not guarantee a movement of a certain direction in the second feature. The opposite would be true if e.g. the first component had a normalised value of ~1 i.e. explained approximately 100% of the variance.
I say normalised, because the raw values that come out of PCA are not necessarily between 0 and 1 - so you can normalise them to help interpretation.
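In scikit-learn this normalisation is already available as explained_variance_ratio_; a short sketch (X is any feature matrix):
from sklearn.decomposition import PCA

pca = PCA().fit(X)
print(pca.explained_variance_ratio_)   # fraction of the total variance per component, sums to 1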
In a higher dimensional space, say with 10 variables (so a 10d feature space), PCA computes eigenvectors and eigenvalues, which look for orthogonal dimensions that explain the variance of the data points, but these are not at all constricted to the dimensions of your features themselves! This means that you cannot just say that the first component (e.g. with a value of 0.6) is there because of a feature X, i.e. it is not due to a single feature, but a mixture of the features.
About the dataset
If the first PC explains nearly all of the data's variance, it is likely that you can express you data a lot more succinctly. PCA is often used as a dimensionality reduction method - so in this extreme case, you can reduce a high-dimensional space and convert it to a lower dimensional space without (theoretically) losing much of the explanatory power i.e. a model should be able to learn as much about the feature space from the one predominant PC, compared to the rich high-dimensional space. This of course saves compute time for any model.
I said "theoretically" above, because using PCs to model will make the interpretation of the resulting model itself - in terms of the original input features - more difficult.
For a more thorough introduction, read through a blog post like this one.
For some practical examples, check out the Sci-kit learn documentation. |
H: Binary classification as a 2-class classification problem
I want to create a dog-classifier, which outputs the probability of an image containing a dog.
I have two approaches in mind -
Binary classifier (1-class), which just outputs the probability of the image containing a dog. This seems reasonable to me.
2-class classifier with two classes denoting "dog" and "not-dog". But my problem with this approach is that the neural network has to learn the "not-dog" class as well, which is impossible since it has no pattern and is different in each training example.
Would the second approach be less effective than the first? Or even work at all?
AI: tl;dr
The two approaches you mention are equally effective.
Why?
First of all, I'll assume you're referring to a neural-network-type model. You should note that Convolutional Neural Networks are discriminative models, meaning that they are trained to find the differences between two classes (e.g. dog and not-dog).
Would the second approach be less effective than the first?
Just to disprove this claim, consider the following. You have a 2-output classification network; let's name the outputs dog and not-dog. Now imagine, as you say, that the network can only identify dog patterns. All it would have to do is give a positive weight to those patterns for the dog class and a negative weight for the not-dog class. In this sense the not-dog class would be trained as the opposite of the dog class, which would not make it less effective than a single-output binary classifier.
Even if the not-dog class could not be trained and remained constant, then, due to the fact that we're using a softmax activation, predictions would be generated just by the relative difference between the dog output and the constant not-dog output. This is exactly like having a single-output binary classifier.
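In Keras terms the two set-ups would look like this (a minimal sketch showing only the output layers; the rest of the network and the matching losses are noted in the comments):
from tensorflow.keras import layers

# option 1: single output = probability of "dog"; compile with loss='binary_crossentropy'
binary_head = layers.Dense(1, activation='sigmoid')

# option 2: two outputs = probabilities of "dog" and "not-dog"; compile with loss='categorical_crossentropy'
two_class_head = layers.Dense(2, activation='softmax') |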
H: Python pandas index error
Encountred an index error when trying to run the code found here.
The error occurs in section In[10]:
for n in range(len(dominantTheta)):
shift = dominantTheta[n]
regressionDelta += dominantAmp[n] * np.cos(n * np.array(range(len(df))) + shift)
Error message:
Traceback (most recent call last):
File "C:/Users/xxx/Desktop/Python Workspace/DSP.py", line 62, in <module>
shift = dominantTheta[n]
File "C:\Users\xxx\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\series.py", line 868, in __getitem__
result = self.index.get_value(self, key)
File "C:\Users\xxx\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\indexes\base.py", line 4375, in get_value
tz=getattr(series.dtype, 'tz', None))
File "pandas\_libs\index.pyx", line 81, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 89, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 987, in pandas._libs.hashtable.Int64HashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 993, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 0
I have the latest versions of numpy and pandas installed, and using Python 3.6
AI: In that section, within the for-loop, n is an integer (you loop over a range()).
If you use the square brackets to access the Pandas DataFrame (or a Series, in this case), you are trying to access the index of the table. In this case, the index is a timestamp - have a look at the df in section Out [7].
So you need to get values by timestamp, or to use the row number itself, you must do as Vaalizaadeh said in a comment and use the .iloc accessor to the pandas object.
So just change shift = dominantTheta[n] to this: shift = dominantTheta.iloc[n]. |
H: When to use model.train() vs model.eval() in PyTorch?
I have a model that is used in a reinforcement learning algorithm for checkers, a la AlphaZero. Similar to that network, mine features batch normalization after each convolution layer. I am aware that this will cause different behavior/output when using .eval() vs .train()
However, I am unsure of when to use eval() vs train(). The model is used at two different points in the algorithm: First, the network is used to generate many games of self-play. Secondly, the network is trained using the positions of theses games, with the evaluation labels taken from the terminal value of the game (-1, 0, +1) and the 'improved policy' labels are taken to be the visit counts after the UCB-tree-search.
It seems to me that when the network is fully trained, I will use .eval(), as that should be 'what the network really thinks'. Therefore, for the games of self-play, I should also use .eval(). Ostensibly, this should result in stronger games of self-play and thus higher quality data. Finally, if I used .eval() in the self-play step I must also use it in the learning phase, otherwise if the network outputs are different the loss won't even be calculated using the actual outputs! I know that the network learns valuable information from .train(), as the batch norm layers learn about the mean/variance of the data universe.
As I type this, I am starting to suspect that I should be using .train() for both the self-play and learning phases. Still that seems wrong as the discrepancy in output between train() and eval() for a given position can be quite large.
AI: I can see arguments for having the self-play phase use either .train() or .eval(), so I had a look at the implementation of Facebook's ELF OpenGo and saw that they have the model in eval mode during self-play (see selfplay.py). I would do as they do because their software seems to work.
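Sketched out, the resulting pattern would be (model is your network; the two helper functions are hypothetical placeholders for your self-play and training code):
import torch

# self-play / data generation: eval mode, no gradients needed
model.eval()
with torch.no_grad():
    games = generate_self_play_games(model)   # hypothetical helper

# learning phase: back to train mode so batch norm updates its running statistics
model.train()
train_on(model, games)                        # hypothetical helper |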
H: fast ML algorithms for binary classification with (large+sparse) binary input data
I'm sorry that this is so very broad, but as a non-ML scientist it feels to be almost impossible to keep up with recent developments (esp. in deep learning etc.). Hence, I'm asking for guidance on how to handle this specific use case:
The goal is to predict a binary output from ~50,000 binary input variables (the input data being rather sparse with about 1,000 1s on average). The training dataset includes several thousand (fairly balanced) labeled samples. I already have a non-ML solution to this giving good results but it is computationally expensive. Thus, my questions:
Which ML algorithms work well (i.e. train reasonably fast on a small HPC-cluster) on binary data of that scale.
Do they allow to extract information about the inputs (i.e. the magnitude of loadings of the individual binary variables).
How large are the performance advantages of having binary data? As opposed to using the 50k binary input variables I could run a PCA and use the first couple hundred PCs (it takes about 500 to recover 90% of the variance) for training/prediction. What would the advantages/caveats be?
The order of the input variables is not really "random", but their importance might be. Hence I think CNNs would not be the best idea, but are non-convoluting NNs even feasible at this scale? Additionally, it is usually only a few input variables that mostly decide the output, if that makes any difference in model selection.
I have worked with ML in the past, but this is several years ago and my theoretical knowledge is more than rusty. Also, the variety of NN architectures / frameworks etc. has exploded since then, hence I wanted to ask for some input before blindly trying out everything.
AI: I mentioned in a comment that boosting might be an option. However, after second thoughts, I guess (as far as I can tell from the description of your data) that you might be better off starting with logistic regression with regularization (lasso, elastic net, ridge). Why?
You want to learn „fast“ (not much tuning etc)
Your data tends to be of high dimension
You don‘t know what feature(s) are good predictors
Taken together, this makes me think that 1) trying lasso would be the thing to start with. If this fails, 2) go on with ridge.
Lasso can shrink the impact of features to zero (good in high dimensional data). However, this may lead to a situation in which „too much“ regularization happens. So you can try ridge, in which case features are „shrunken“, but will never become zero.
I don‘t know if you work with R or Python, but lasso/ridge is available in both. Make sure you find the right lambda (tuning parameter for regularization) by cross validation (this is not too expensive).
Here is a good R tutorial: https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
For Python, there are also good tutorials around.
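In Python, a starting point could look like this (a sketch; X should be a scipy sparse matrix holding the 50,000 binary features and y the binary labels):
from sklearn.linear_model import LogisticRegressionCV

# L1-penalised logistic regression with the regularisation strength chosen by cross-validation
clf = LogisticRegressionCV(penalty='l1', solver='saga', cv=5, max_iter=1000)
clf.fit(X, y)
print((clf.coef_ != 0).sum(), "features kept")   # sparsity induced by the L1 penalty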
Some background can be found in „Introduction to statistical learning“, for which R and Python code is online. http://www-bcf.usc.edu/~gareth/ISL/ |
H: Would K-means be Appropriate to Use with Four or More Variables?
Just a general question that I'm trying to mentally visualize. I'm fairly new to using k-means clustering and have used it before on two variables, which creates a 2-D plot of points. I also know, although I haven't done it before, that you can plot a k-means cluster with three variables utilizing the x, y, and z axes. But now I'm currently in a position where I have four variables, normalized by their z-scores, in which I'm not sure how to use the k-means clustering appropriately. Should I be using a k-means cluster in this circumstance?
Thanks
AI: You can use k-Means clustering in all the dimensions you need. This technique is based on a k number of centroids that self-adjust to the data and "cluster" them. The k centroids can be defined in any number of dimensions.
If you want to find the optimal number of centroids, the elbow method is still the best. You iterate the algorithm, changing the value of k each time, and record how much error the clustering produces; once you're done you plot the error levels against each k value and check the optimal k visually. There are plenty of tutorials on it; you can start here for example.
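A short sketch of the elbow method with scikit-learn (X being the z-scored data with your four columns):
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertias = []
ks = range(1, 11)
for k in ks:
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    inertias.append(km.inertia_)        # within-cluster sum of squared distances

plt.plot(ks, inertias, marker='o')
plt.xlabel('k')
plt.ylabel('inertia')
plt.show()                              # look for the "elbow" in this curve |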
H: Why are there over 600 open pull-requests in Scikit-learn github repo?
I considered submitting a function that I deem missing to scikit-learn repo, but as of June 28th 2019 there are over 600 pending Pull Requests: https://github.com/scikit-learn/scikit-learn/pulls.
Can you explain the situation with maintainers? I mean, reviewing a pull-request is not that big of a deal compared to writing commits and also this normally has higher priority than writing code, so why I see new commits in the project, but PRs are still not reviewed?
By the way, I also found a scikit-learn-contrib project that seems to be contain many additional functions: https://github.com/scikit-learn-contrib.
AI: mean, reviewing a pull-request is not that big of a deal compared to writing commits and also this normally has higher priority than writing code, so why I see new commits in the project, but PRs are still not reviewed?
I assume that it's not that simple if they want to maintain the stability and the homogeneity of their code base. If they were just quickly accepting PRs the whole software would become an unmanageable mess in no time, causing problems for a large community of users. It's almost unavoidable for any significant software project to become harder and slower to maintain and develop when it reaches a certain size, especially if many users depend on it.
Also it's maintained by a community of volunteers so there might not be that many contributors who are both knowledgeable enough about the code and have enough free time to study all the PRs coming regularly. |
H: Dimension change of convolutional layer after applying the feature map
In this lecture I don't understand how the output has 1 layer after applying the feature map to the 3 layer input.
AI: In the video the speaker says, "center the filter on each pixel and perform a dot product which gives one value per position."
So it is a dot product between (L1, L2, L3) and (F1, F2, F3), where the L's are pixel values in the input and the F's are the values in the filter. The filter has the same depth as the input (3 channels here), and the element-wise products across all channels are summed into a single number per position - which is why one filter produces a single output channel.
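A tiny numeric sketch of one output position (values made up):
import numpy as np

patch  = np.random.rand(3, 3, 3)    # a 3x3 spatial patch of the input, with 3 channels
kernel = np.random.rand(3, 3, 3)    # one filter: same depth as the input
out_value = np.sum(patch * kernel)  # element-wise product summed over height, width AND channels
# -> a single number per position, so one filter yields one output channel |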
H: Creating new column in dataframe based on conditions in 2 other columns
I would like to create a new column in my dataframe based on values from both the gender and experimental_grouping columns.
As I have it written below, the column df['group_gender'] has 'control_m' in every row.
Any ideas on why the if statement isn't working/a better way to accomplish this goal?
def group_and_gender(group):
for i in group:
if i == 'Controls':
for i in df['gender']:
if i == 'M':
df['group_gender'] = 'control_m'
else:
df['group_gender'] = 'control_f'
else:
for i in df['gender']:
if i == 'M':
df['group_gender'] = 'dp_m'
else:
df['group_gender'] = 'dp_f'
group(df['experimental_grouping'])
AI: The problem with the inaccurate filling of the group_gender column is that df['group_gender'] = 'dp_m' in the following code assigns 'dp_m' to the whole column whenever i == 'M'. You could instead assign to individual rows with methods like iloc, but that is not really efficient, specifically when you have a large dataset. In the following I have provided a better way.
for i in df['gender']:
if i == 'M':
df['group_gender'] = 'dp_m'
else:
df['group_gender'] = 'dp_f'
The more appropriate way
#creating an instant dataframe
import pandas as pd
d = {'gender':['M','M','F', 'M', 'F','M']}
df = pd.DataFrame(d)
#filling the 'group_gender' column
df.loc[df['gender'] == 'M', 'group_gender'] = 'dp_m'
df.loc[df['gender'] == 'F', 'group_gender'] = 'dp_f'
-You are not filling in experimental_grouping and you are not returning the column:
group(df['experimental_grouping'])
-What is this group function? (Can you provide more details about the first part of your code?)
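To combine both columns, as the question asks, one option is numpy.select (a sketch assuming the experimental_grouping column contains the value 'Controls' for the control group and the gender column only contains 'M'/'F'):
import numpy as np

conditions = [
    (df['experimental_grouping'] == 'Controls') & (df['gender'] == 'M'),
    (df['experimental_grouping'] == 'Controls') & (df['gender'] == 'F'),
    (df['experimental_grouping'] != 'Controls') & (df['gender'] == 'M'),
    (df['experimental_grouping'] != 'Controls') & (df['gender'] == 'F'),
]
choices = ['control_m', 'control_f', 'dp_m', 'dp_f']
df['group_gender'] = np.select(conditions, choices, default='unknown') |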
H: What is the best way to replace NaN values for ranked columns
I have a column named BsmntQual that gives a ranking on the height of the basement per each house. These are all of the unique values in this column:
print(train['BsmtQual'].unique().tolist())
>>> ['Gd', 'TA', 'Ex', nan, 'Fa']
This is the legend of this particular column:
BsmtQual: Evaluates the height of the basement
Ex Excellent (100+ inches)
Gd Good (90-99 inches)
TA Typical (80-89 inches)
Fa Fair (70-79 inches)
Po Poor (<70 inches)
NA No Basement < Not to be confused with the nan value above
This is what I did for my other ranked columns but this one did not have NaN values:
train['ExterQual'] = train['ExterQual'].replace(['Ex', 'Gd', 'TA', 'Fa'], [4, 3, 2, 1]) # Exterior Quality
For numerical values, a common way is to fill all NaN values with the mean of the column. But what is a good way of replacing the NaN values for columns such as these?
Here is the full dataset
AI: Your legend clearly states that missing values mean that there is no basement. You could fill the missing values with 'NoBase' to make that point clearer (train['BsmtQual'].fillna('NoBase', inplace=True)).
When you then rank them, you just add another rank for 'NoBase', maybe 0, following your example rating that gives a higher value to higher quality.
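Putting it together, a short sketch in the style of the ExterQual line from the question:
train['BsmtQual'] = train['BsmtQual'].fillna('NoBase')
train['BsmtQual'] = train['BsmtQual'].replace(['Ex', 'Gd', 'TA', 'Fa', 'Po', 'NoBase'],
                                              [5, 4, 3, 2, 1, 0]) |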
H: Predicting similarity between nouns like university names and tech companies?
I am trying to extract entities such as the university studied at and tech companies from resumes. I have a list of popular universities and companies, and I want to find out which entry in the list best matches the extracted entity.
Example
1) University in the list : IIT Bombay
Extracted entity : Education : Indian Institute of technology Bombay Btech
2) Company in the list : Infosys
Extracted entity : Infosys India Ltd.
As you can see , there are extra unwanted words , short forms , expanded forms etc recognized in the extracted entity , is there any sentence similarity algorithms best suited for this purpose ?
Using SpaCy for entity extraction.
AI: This is a case of entity resolution, for which a standard off-the-shelf method is not available. You will have to write your own method, also using abbreviation resolution. The Python Dedupe package has some distance metrics which you could use to calculate the similarities.
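As a rough starting point (this is not the Dedupe API, just standard-library fuzzy matching; the candidate list is made up):
from difflib import SequenceMatcher

def best_match(extracted, candidates):
    return max(candidates, key=lambda c: SequenceMatcher(None, extracted.lower(), c.lower()).ratio())

print(best_match("Infosys India Ltd.", ["IIT Bombay", "Infosys", "BITS Pilani"]))   # -> Infosys
# Short forms such as "IIT" vs "Indian Institute of Technology" still need an
# abbreviation/alias table on top of this, as mentioned above. |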
H: KNN RandomizedSearchCV typerror
While trying to study a binary classification problem with KNN and trying to tune the parameters of the model, I'm getting a TypeError that I don't quite understand. Is a parameter missing or something?
TypeError: init() takes exactly 1 positional argument (0 given)
Here is my code:
import pandas as pd
import numpy as np
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
# generate example data
X = pd.DataFrame({
'a': np.linspace(0, 10, 500),
'b': np.random.randint(0, 10, size=500),
})
y = np.random.randint(0, 2, size=500)
# set search parameters
n_neighbors = [int(x) for x in np.linspace(start = 1, stop = 100, num = 50)]
weights = ['uniform','distance']
metric = ['euclidean','manhattan','chebyshev','seuclidean','minkowski']
random_grid = {
'n_neighbors': n_neighbors,
'weights': weights,
'metric': metric,
}
# run search
knn = KNeighborsClassifier()
knn_random = RandomizedSearchCV(estimator = knn, random_state = 42,n_jobs = -1,param_distributions = random_grid,n_iter = 100, cv=3,verbose = 2)
knn_random.fit(X,y)
knn_random.best_params_
Full error:
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 418, in _process_worker
r = call_item()
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 272, in __call__
return self.fn(*self.args, **self.kwargs)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 567, in __call__
return self.func(*args, **kwargs)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in __call__
for func, args, kwargs in self.items]
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in <listcomp>
for func, args, kwargs in self.items]
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 528, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\neighbors\base.py", line 916, in fit
return self._fit(X)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\neighbors\base.py", line 254, in _fit
**self.effective_metric_params_)
File "sklearn\neighbors\binary_tree.pxi", line 1071, in sklearn.neighbors.ball_tree.BinaryTree.__init__
File "sklearn\neighbors\dist_metrics.pyx", line 286, in sklearn.neighbors.dist_metrics.DistanceMetric.get_metric
File "sklearn\neighbors\dist_metrics.pyx", line 443, in sklearn.neighbors.dist_metrics.SEuclideanDistance.__init__
TypeError: __init__() takes exactly 1 positional argument (0 given)
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-2-b5a9f7ea82d0> in <module>
29 knn_random = RandomizedSearchCV(estimator = knn, random_state = 42,n_jobs = -1,param_distributions = random_grid,n_iter = 100, cv=3,verbose = 2)
30
---> 31 knn_random.fit(X,y)
32 knn_random.best_params_
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
720 return results_container[0]
721
--> 722 self._run_search(evaluate_candidates)
723
724 results = results_container[0]
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates)
1513 evaluate_candidates(ParameterSampler(
1514 self.param_distributions, self.n_iter,
-> 1515 random_state=self.random_state))
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params)
709 for parameters, (train, test)
710 in product(candidate_params,
--> 711 cv.split(X, y, groups)))
712
713 all_candidate_params.extend(candidate_params)
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self, iterable)
928
929 with self._backend.retrieval_context():
--> 930 self.retrieve()
931 # Make sure that we get a last message telling us we are done
932 elapsed_time = time.time() - self._start_time
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in retrieve(self)
831 try:
832 if getattr(self._backend, 'supports_timeout', False):
--> 833 self._output.extend(job.get(timeout=self.timeout))
834 else:
835 self._output.extend(job.get())
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py in wrap_future_result(future, timeout)
519 AsyncResults.get from multiprocessing."""
520 try:
--> 521 return future.result(timeout=timeout)
522 except LokyTimeoutError:
523 raise TimeoutError()
~\Anaconda3\lib\concurrent\futures\_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
~\Anaconda3\lib\concurrent\futures\_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
TypeError: __init__() takes exactly 1 positional argument (0 given)
AI: The problem is with the metric seuclidean. The SEuclideanDistance constructor requires a parameter (see the DistanceMetric documentation). This parameter is not given, hence the error about the missing argument.
It should in principle be possible to give the parameter in the searchgrid, but there are several known issues with RandomizedSearchCV that make this impossible (or at least harder than necessary).
So until these issues are fixed I would suggest removing seuclidean from the list of search parameters, or using GridSearchCV. Which option is the best choice depends on the details of what you are trying to achieve.
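Concretely, the simplest fix is to drop the metric that needs the extra argument:
metric = ['euclidean', 'manhattan', 'chebyshev', 'minkowski']   # 'seuclidean' removed: it requires a V (variance) parameter |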
H: VC dimension of half spaces over the real line
I'm studying VC dimension and I'm having a little difficulty understanding it. I read lots of explanations, but when I come across this simple exercise I did not get a good intuition. The problem is this:
Find the VC dimension for the following case,
$h (x) = \mathbb{1}_{{a < x}}$, with parameter $a, X \in R$.
The resolution is as follows:
VC-dimension = 1.
(a) It can shatter point $0$, by choosing a to be $2$ and $-2$.
(b) It can not shatter any two points ${x1, x2}, x1 < x2$, because the
labeling $x1 = 1$ and $x2=0$ can not be realized.
My doubts are:
1) In item (a), what is the parameter $a$? I looked for similar exercises, but there was no concrete explanation about this parameter - what is its function? In this item it is taken to be $2$ and $-2$; with $-2 < 0$ that is fine, but $2 < 0$ is not, so I did not understand why the conclusion is correct.
2) In item (b) I did not understand the inequality: he came to the conclusion that the labeling for $x1 < x2$ can not be realized, but why is the inequality $x1 > x2$ not considered?
AI: First you need to understand how to calculate the VC-dimension. There are two conditions for the VC-dimension to be $n$ (here $n=1$):
You need to find one set of $n$ points that can be shattered (i.e. classify all possible $|\{0,1\}|^n=2^n$ labelings correctly) and
It must be impossible to find any set of $n+1$ points that can be shattered.
Now to your questions:
1) $a$ defines the family of possible decision functions in your classification problem. In your case, $a$ defines all possible half spaces along the real axis. That means all points greater than $a$ a classified as $1$ and all points less than $a$ are classified as $0$.
First they show that one point can be shattered. And they choose that point to be $x=0$. They could choose another point but it is enough to show it for $x=0$.
To shatter $x=0$ you need to find an $a$ so that $h(0)=0$ and an $a$ such that $h(0)=1$. (Those are the $2^1=2$ possible labels.)
If the label is $0$ you can choose any $a>0$ to label $x$ correctly. $2$ is one example. With $a=2$ all $h(x)$ will return $0$ for all points less than $2$ (which includes $x=0$).
If the label is $1$ you can similarly choose any $a<0$. $-2$ is one example, but you could also choose $-0.01$ or $-1232132158932$. In the case $a=-2$, $h(x)$ will return 1 for all $x>-2$ (which again includes $x=0$).
So you fulfilled the first condition.
2) Now they show the second condition. But maybe they left out some of the details. Let me try to fill them in.
To show the second condition, you need to pick any two points and show that it is impossible to find $a$ to classify all possible labels correctly. Let's call the two points $x1$ and $x2$. The $2^2=4$ possible labels are: $\{0,0\},\{0,1\},\{1,0\},\{1,1\}$.
Clearly, if $x1=x2$ it follows that $h(x1)=h(x2)$ and therefore the labels must be the same, so only $\{0,0\}$ and $\{1,1\}$ are possible but not $\{0,1\},\{1,0\}$.
Therefore, we can move on to the case where $x1 \neq x2$. Since the naming of $x1$ and $x2$ is arbitrary we can assume that $x1 < x2$ ("without loss of generality" because if $x1 > x2$ we can just rename $x2$ as $x1$ and $x1$ as $x2$ and we would have $x1 <x2$ again).
Now, in this case it is also impossible to achieve all four possible labeling combinations. In particular they show that the labeling $\{1,0\}$ is impossible.
If the label of $x1$ is $1$ then it follows (similarly to 1) with $x=0$) that $a$ must be less than $x1$. But then $a < x1 < x2$ and therefore $h(x2)$ must also be $1$ and can't be $0$.
(You could now move on to the case $x1 > x2$ but as I tried to explain above that's basically the same idea, but now with the labeling $\{0,1\}$).
See this question for a very similar example with intervals instead of half lines. |
H: What algorithm is best suited to derive the best match between two people in a data set?
Say i have a large data set that contains the following data;
username,age,sex,music_genre,sports_genre,art_genre,rating
DanTheMan,25,male,rock,football,cubsim,50
LoopyLucy,23,female,pop,null,popart,76
MadDog,33,null,pop,football,cubsim,57
I would like to run through the whole data set and match two users based on genres and rating that are similar or closer than others, like a best match. The gender does not need to be considered. What would be the best algorithm to acquire this result? There is an equal number of users, and everyone will get a match.
I have taken a look into the stable marriage algorithm (Gale-Shapley) however I would appreciate someone else's suggestion and opinion.
Thanks!
AI: Given your data sample, unless you have a more subtle way to measure similarity between different values for a given genre (e.g. some other resource indicating that football is closer to basketball than to tennis for instance), it seems that the only similarity measure that you can use is to count how many tastes two users have in common.
The similarity score can only be 0, 1, 2 or 3, so I don't think you really need clustering. You can simply build a map where the key is the concatenation of the 3 "genre" columns, and the value is the set of users which have these tastes. Some users might not have an exact match (3 identical genres), so you then do the same process but for only 2 genres in common, and then for only one.
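A small sketch of that map with pandas (assuming the data is in a DataFrame df with the column names from the question):
key_cols = ['music_genre', 'sports_genre', 'art_genre']
exact_groups = df.groupby(key_cols)['username'].apply(list)   # users sharing all 3 genres
print(exact_groups)
# users left without a partner can then be grouped again on 2 of the 3 columns, then on 1 |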
H: np.loadtxt function showing error, could not convert string to float: '"Date"'
This is my stock market csv data:
Date,Open,High,Low,Close,Adj Close,Volume
43283,511,514.950012,503.5,512.599976,512.599976,261839
43284,512.599976,520,509.700012,512,512,332619
43285,512,515.950012,507.950012,514.299988,514.299988,173621
43286,515.549988,517.5,509.399994,510.899994,510.899994,117474
43287,510.049988,516.5,510.049988,514.25,514.25,82106
43290,514.200012,528.5,514.200012,523.650024,523.650024,322861
43291,530,534.900024,522.099976,532.549988,532.549988,404132
43292,533.400024,541.75,531,536.599976,536.599976,267510
43293,539.450012,545,535.25,537.25,537.25,254942
43294,540,540.799988,520.5,523.900024,523.900024,240378
43297,524,529.75,518.549988,523.099976,523.099976,191192
43298,523,540,519.799988,538.049988,538.049988,213308
43299,542.349976,542.799988,515.849976,524.200012,524.200012,557333
43300,528,536.900024,518.849976,527.299988,527.299988,201716
43301,527.599976,536.450012,524.950012,534.450012,534.450012,156703
43304,534.5,544.950012,531.049988,540.799988,540.799988,209083
43305,542.950012,549,538.450012,546,546,216217
43306,547,547.5,529.450012,531.849976,531.849976,145508
43307,537,543.900024,527,541.650024,541.650024,547093
43308,545,555,538,553.650024,553.650024,540695
43311,555,570,551.099976,568.450012,568.450012,564010
43312,582,584.950012,548,550.099976,550.099976,942588
43313,552.450012,555.549988,538.650024,544.900024,544.900024,440881
I am trying to load stock market data csv file in a jupyter note book using
import numpy as np
np.loadtxt(r"C:\Users\Souro\Downloads\Data.csv",delimiter=",")
but it shows the following error after compiling:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-54-6552d575b229> in <module>
----> 1 np.loadtxt(r"C:\Users\Souro\Downloads\Data.csv",delimiter=",")
c:\python3.7.2\lib\site-packages\numpy\lib\npyio.py in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack, ndmin, encoding, max_rows)
1139 # converting the data
1140 X = None
-> 1141 for x in read_data(_loadtxt_chunksize):
1142 if X is None:
1143 X = np.array(x, dtype)
c:\python3.7.2\lib\site-packages\numpy\lib\npyio.py in read_data(chunk_size)
1066
1067 # Convert each value according to its column and store
-> 1068 items = [conv(val) for (conv, val) in zip(converters, vals)]
1069
1070 # Then pack it according to the dtype's nesting
c:\python3.7.2\lib\site-packages\numpy\lib\npyio.py in <listcomp>(.0)
1066
1067 # Convert each value according to its column and store
-> 1068 items = [conv(val) for (conv, val) in zip(converters, vals)]
1069
1070 # Then pack it according to the dtype's nesting
c:\python3.7.2\lib\site-packages\numpy\lib\npyio.py in floatconv(x)
773 if '0x' in x:
774 return float.fromhex(x)
--> 775 return float(x)
776
777 typ = dtype.type
ValueError: could not convert string to float: '"Date"'
How can I get rid of this error?
AI: The error occurs because np.loadtxt tries to convert every field to float, and it fails on the header row (Date,Open,High,...) or on any other meta-text that ends up at the top of the .csv or .txt file when its content is exported from somewhere else.
When your data is not too large, one robust approach is to first read the file into a list of rows, inspect it, drop the unwanted lines and only then build the dataframe.
import csv
arrays = []
path = "C:\\Users\\Souro\\Downloads\\AXISBANK.csv"
with open(path, 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        row = str(row).replace('\\', '')  # deleting backslashes
        arrays.append(row)
Then take a look at arrays[:10] to find where the meta data ends and delete the unwanted data (meta data)
and then converting the 'arrays' array into the dataframe.
for instance:
arrays = arrays[9:]
df = pd.DataFrame(arrays[1:], columns=arrays[0]) #arrays[0] is the columns names
About your comments:
If you print each row you will see a backslash at the end of it, so replace('\\', '') substitutes every backslash with an empty string (''). Why two backslashes? The backslash is an escape character in Python string literals, so it has to be written as '\\' to match a literal backslash.
row = str(row).replace('\\', '')
and about
open('text.txt','r')
It opens the file 'text.txt' in reading mode (r).
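As a side note, since the only non-numeric line in the sample shown is the header, a shorter fix is to skip it directly. A minimal sketch, assuming the rest of the file is purely numeric:
import numpy as np
import pandas as pd

# skiprows=1 skips the header line so every remaining field can be parsed as float
data = np.loadtxt(r"C:\Users\Souro\Downloads\Data.csv", delimiter=",", skiprows=1)

# or let pandas handle the header and convert to a numpy array afterwards
data = pd.read_csv(r"C:\Users\Souro\Downloads\Data.csv").to_numpy()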
H: Classification targets with heterogenous meanings
I am training a classification model on a dataset of users on a website and each has 100 different measurements of their behaviour on the platform.
Most of these users are dormant but about 10% will reawken. We are interested in purchasing activity once awake.
We have separated the customers into two classes:
Those who reawaken and purchase products 5+ times (positive class)
Those who remain dormant and those who rewaken but purchase less than (or including) 4 times.
We also have a separate model that sorts customers into groups
Will reawaken.
Remain dormant
My question is:
If we output the probability of being in the positive class for both of these classifiers, what will the product of these two probabilities mean semantically?
Will it be P(5+ and reawaken), or will it be something else?
AI: Will it be P(5+ and reawaken), or will it be something else?
The events are not independent, so one cannot assume that p(A and B) = p(A) * p(B).
Let's denote the events as follows:
A = reawakens (not A = remains dormant)
B = purchases at least 5 times
The event "reawakens and purchases more than 5 times" is "A and B". In general we have:
p(A and B) = p(A|B) p(B)
p(A and B) = p(A|B) (p(A and B) + p(not A and B))
But it's impossible for a customer to stay dormant and buy anything, so:
p(not A and B) = 0
Which gives us:
p(A and B) = p(A|B) p(A and B)
p(A|B) = 1    (assuming p(A and B) > 0)
Then we obtain:
p(A and B) = p(B)
Note that this makes sense intuitively: the probability of buying at least 5 times is the same as awakening and buying at least 5 times.
Currently the labels of the two models overlap so I don't think you can infer much from combining their outputs. A way to make it usable would be to make the first model consider only awakening customers in order to avoid the overlap. But it might be more useful to train a single joint model to get a clear picture of your data. In general such a model would classify between 4 categories:
not A and not B
not A and B
A and not B
A and B
But since p(not A and B) = 0 there are actually only 3 labels corresponding to:
stays dormant
awakens and buys less than 5 times
awakens and buys at least 5 times
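For illustration, here is a minimal sketch of how such a 3-class target could be built from the two underlying pieces of information; the dataframe df and the column names 'reawakens' and 'purchases' are hypothetical placeholders:
def joint_label(row):
    if row['reawakens'] == 0:
        return 'dormant'            # not A
    elif row['purchases'] >= 5:
        return 'awake_5_plus'       # A and B
    else:
        return 'awake_below_5'      # A and not B

df['target'] = df.apply(joint_label, axis=1)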
H: Dataset columns throwing KeyError
For some reason, the columns in my dataset are throwing the following KeyError:
Traceback (most recent call last):
File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc
return self._engine.get_loc(key)
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'FirePlaceQu'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/security/Downloads/AP/Boston-Kaggle/Boston.py", line 29, in <module>
train['FirePlaceQu'] = train['FirePlaceQu'].fillna('NFp')
File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\frame.py", line 2927, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\pandas\core\indexes\base.py", line 2659, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'FirePlaceQu'
I couldn't decipher what the error was about. It gave the exact same error on other columns as well so it wasn't just FirePlaceQu. I double checked the spelling so it isn't that either.
This is my code:
import pandas as pd
train = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/train.csv")
test = pd.read_csv("https://raw.githubusercontent.com/oo92/Boston-Kaggle/master/test.csv")
categorical_columns = ['MSSubCLass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1',
'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd',
'Foundation', 'Heating', 'Electrical', 'Functional', 'Fireplaces', 'GarageType', 'GarageYrBlt',
'GarageCars', 'GarageArea', 'PavedDrive', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition']
ranked_columns = ['LotArea', 'Utilities', 'LandSlope', 'ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'BsmtExposure',
'BsmtFinType1', 'BsmtFinType2', 'HeatingQC', 'KitchenQual', 'FireplaceQu', 'GarageQual', 'GarageCond',
'PoolQC', 'OverallQual', 'OverallCond']
numerical_columns = ['LotFrontage', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF',
'TotalBsmtSF', '1stFlrSF', '2ndFlrSf', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath',
'FullBath', 'HalfBath', 'Bedroom', 'Kitchen', 'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars',
'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'SceenPorch', 'PoolArea',
'MiscVal', 'MoSold', 'YrSold']
binary_columns = ['Street', 'CentralAir']
train['Alley'] = train['Alley'].fillna('NoAy')
train['BsmtQual'] = train['BsmtQual'].fillna('NBs')
train['BsmtCond'] = train['BsmtCond'].fillna('NBs')
train['BsmtExposure'] = train['BsmtExposure'].fillna('NBs')
train['BsmtFinType1'] = train['BsmtFinType1'].fillna('NBs')
train['BsmtFinType2'] = train['BsmtFinType2'].fillna('NBs')
train['FirePlaceQu'] = train['FirePlaceQu'].fillna('NFp')
train['GarageType'] = train['GarageType'].fillna('NGr')
train['GarageFinish'] = train['GarageFinish'].fillna('NGr')
train['GarageQual'] = train['GarageQual'].fillna('NGr')
train['GarageCond'] = train['GarageCond'].fillna('NGr')
train['PoolQC'] = train['PoolQC'].fillna('NPo')
train['Fence'] = train['Fence'].fillna('NFn')
train['MiscFeature'] = train['MiscFeature'].fillna('NoN')
train['Utilities'] = train['Utilities'].replace(['AllPub', 'NoSeWa'], [2, 1]) # Utilities
train['ExterQual'] = train['ExterQual'].replace(['Ex', 'Gd', 'TA', 'Fa'], [4, 3, 2, 1]) # Exterior Quality
train['LandSlope'] = train['LandSlope'].replace(['Gtl', 'Mod', 'Sev'], [3, 2, 1]) # Land Slope
train['ExterCond'] = train['ExterCond'].replace(['Ex', 'Gd', 'TA', 'Fa', 'Po'], [4, 3, 2, 1, 0]) # Exterior Condition
train['HeatingQC'] = train['HeatingQC'].replace(['Ex', 'Gd', 'TA', 'Fa', 'Po'], [4, 3, 2, 1, 0]) # Heating Quality and Condition
train['KitchenQual'] = train['KitchenQual'].replace(['Ex', 'Gd', 'TA', 'Fa'], [3, 2, 1, 0]) # Kitchen Quality
AI: It's FireplaceQu, not FirePlaceQu.
A KeyError means pandas cannot find that column in the dataframe.
To check the spelling you can simply print all the column names:
for col_name in train.columns:
    print(col_name)
or test whether a given spelling exists:
"FirePlaceQu" in train.columns   # False
"FireplaceQu" in train.columns   # True
H: Keras Model question for Pre trained model extension
I want to add a few more layers to a Resnet50 model and my question is - do I need to compile it and train it on new data or can I just use it as it is? Will it just give me the Resnet50 results?
Here is what I am trying:
def base_model():
resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
x = resnet.output
x = Conv2D(128, (3, 3), activation='relu',padding='same')(x)
x = Conv2D(128, (3, 3), activation='relu',padding='same')(x)
x = MaxPooling2D((2, 2), strides=(2, 2))(x)
x = Conv2D(256, (3, 3), activation='relu',padding='same')(x)
x = Conv2D(256, (3, 3), activation='relu',padding='same')(x)
x = MaxPooling2D((2, 2), strides=(2, 2))(x)
x = Conv2D(512, (3, 3), activation='relu',padding='same')(x)
x = Conv2D(512, (3, 3), activation='relu',padding='same')(x)
x = MaxPooling2D((2, 2), strides=(2, 2))(x)
x = GlobalAveragePooling2D()(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.6)(x)
x = Dense(4096, activation='relu')(x)
x = Dropout(0.6)(x)
x = Lambda(lambda x_: K.l2_normalize(x,axis=1))(x)
return Model(inputs=resnet.input, outputs=x)
And then use it like this :
enhanced_resnet = base_model()
img = image.load_img(file, target_size=(224, 224))
img = image.img_to_array(img)
x = resnet50.preprocess_input(img)
x = np.array([x])
feature = enhanced_resnet.predict(x)
What I want as return value is the features from the image and not a prediction, as I am using a distance equation to tell the similarity between images.
AI: Short answer - yes, you need to compile and train it.
The pretrained ResNet model has trained parameters (weights and biases), but when you add extra layers on top of it, the parameters of those new layers are randomly initialized. Therefore the final model will produce essentially random features until it is trained.
You need to train the new model on your dataset after adding the extra layers. You don't need to train all the pretrained layers: freeze most of them and train only the new layers (optionally fine-tuning a few of the top ResNet layers).
For further reading, search for "transfer learning" - that's what you're trying to do.
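A minimal sketch of that workflow, reusing the imports from the question. The temporary classification head, num_classes and the training arrays are placeholders; you could instead train with a metric-learning loss (e.g. triplet loss) if you only care about similarity:
def trainable_model(num_classes):
    resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    for layer in resnet.layers:
        layer.trainable = False                  # keep the pretrained weights fixed
    x = GlobalAveragePooling2D()(resnet.output)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    features = Lambda(lambda t: K.l2_normalize(t, axis=1), name='features')(x)
    out = Dense(num_classes, activation='softmax')(features)   # temporary head for training
    return Model(inputs=resnet.input, outputs=out)

model = trainable_model(num_classes=10)          # num_classes is a placeholder
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5, batch_size=32)

# after training, use the layer just before the head as the feature extractor
feature_extractor = Model(model.input, model.get_layer('features').output)
feature = feature_extractor.predict(x)           # x preprocessed as in the question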
H: Improving Accuracy of the Deep Learning Model
In my current project, I have only 647 rows (500 for training and 147 for testing) and I have applied the Keras Sequential model using the following code:
from keras import models
from keras import layers
from keras import regularizers
model = models.Sequential()
model.add(layers.Dense(5,activation="relu",input_shape=(train_x.shape[1],)))
model.add(layers.Dense(1,activation="sigmoid"))
from keras import optimizers
#network = model.compile(optimizer=optimizers.RMSprop(lr=0.001),loss="binary_crossentropy",metrics=["accuracy"])
network = model.compile(optimizer=optimizers.Adam(lr=0.05, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False),loss="binary_crossentropy",metrics=["accuracy"])
result = model.fit(train_x,train_y,epochs=60, batch_size=32)
Following is the result of a few Epochs:
Epoch 1/60
500/500 [==============================] - 1s 3ms/step - loss: 0.7253 - acc: 0.5520
Epoch 2/60
500/500 [==============================] - 0s 137us/step - loss: 0.6379 - acc: 0.6640
Epoch 3/60
500/500 [==============================] - 0s 134us/step - loss: 0.6035 - acc: 0.6880
Epoch 4/60
500/500 [==============================] - 0s 158us/step - loss: 0.5852 - acc: 0.6980
Epoch 5/60
500/500 [==============================] - 0s 136us/step - loss: 0.5864 - acc: 0.7140
Epoch 6/60
500/500 [==============================] - 0s 134us/step - loss: 0.5552 - acc: 0.7240
Epoch 7/60
500/500 [==============================] - 0s 141us/step - loss: 0.5475 - acc: 0.7280
Epoch 8/60
500/500 [==============================] - 0s 164us/step - loss: 0.5340 - acc: 0.7460
Epoch 9/60
500/500 [==============================] - 0s 138us/step - loss: 0.5389 - acc: 0.7280
Epoch 10/60
500/500 [==============================] - 0s 139us/step - loss: 0.5374 - acc: 0.7540
===================For the simplicity I am sharing first and last few epochs result=
Epoch 55/60
500/500 [==============================] - 0s 161us/step - loss: 0.4947 - acc: 0.7800
Epoch 56/60
500/500 [==============================] - 0s 168us/step - loss: 0.5058 - acc: 0.7660
Epoch 57/60
500/500 [==============================] - 0s 158us/step - loss: 0.5011 - acc: 0.7700
Epoch 58/60
500/500 [==============================] - 0s 154us/step - loss: 0.5062 - acc: 0.7660
Epoch 59/60
500/500 [==============================] - 0s 156us/step - loss: 0.5040 - acc: 0.7600
Epoch 60/60
500/500 [==============================] - 0s 147us/step - loss: 0.4994 - acc: 0.7800
Using the above configuration (I have also tried different neural network architectures; the above one looks fine), I am able to achieve the best accuracy so far: train accuracy ~78% and test accuracy ~72%. I also tried logistic regression, but in that case the train accuracy was only ~65%.
It looks like some overfitting occurs, so I tried L2 regularization and dropout, but neither of them helps to achieve better accuracy. Unfortunately, I can't generate more data.
What should I do to achieve better model accuracy (given the limited amount of data)? How can I improve performance on both the train and test data?
AI: You can try a bunch of things:
Data augmentation.
Transfer learning from a model trained on a similar dataset.
Train multiple neural networks with different weight initialisations and then take a majority vote (or average the predicted probabilities) at inference time.
The last point matters when you're training from scratch, because different weight initialisations drive the model towards different minima. By averaging several networks you are effectively averaging models that ended up in different minima, which can increase overall accuracy.
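A minimal sketch of the ensemble idea for the binary model in the question, averaging the predicted probabilities of several independently initialised copies (test_x is a placeholder for your held-out features; note that the numpy seed alone does not fully control Keras initialisation):
import numpy as np

def build_model():
    m = models.Sequential()
    m.add(layers.Dense(5, activation="relu", input_shape=(train_x.shape[1],)))
    m.add(layers.Dense(1, activation="sigmoid"))
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return m

preds = []
for seed in range(5):                    # 5 differently initialised networks
    np.random.seed(seed)
    m = build_model()
    m.fit(train_x, train_y, epochs=60, batch_size=32, verbose=0)
    preds.append(m.predict(test_x))

ensemble_prob = np.mean(preds, axis=0)   # average the predicted probabilities
ensemble_label = (ensemble_prob > 0.5).astype(int)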
H: model.cuda() in pytorch
If I call model.cuda() in PyTorch where model is a subclass of nn.Module, and say I have four GPUs, how will it utilize the four GPUs, and how do I know which GPUs are being used?
AI: model.cuda() by default will send your model to the "current device", which can be set with torch.cuda.set_device(device).
An alternative way to send the model to a specific device is model.to(torch.device('cuda:0')).
This, of course, is subject to the device visibility specified in the environment variable CUDA_VISIBLE_DEVICES.
You can check GPU usage with nvidia-smi. Also, nvtop is very nice for this.
The standard way in PyTorch to train a model on multiple GPUs is to use nn.DataParallel, which copies the model to the GPUs and, during training, splits the batch among them and combines the individual outputs.
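A minimal sketch putting these pieces together (MyModel is a placeholder for your nn.Module subclass):
import torch
import torch.nn as nn

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

print(torch.cuda.device_count())     # e.g. 4
print(torch.cuda.current_device())   # index of the "current device"

model = MyModel()
model = nn.DataParallel(model)       # replicate across all visible GPUs
model = model.to(device)             # parameters live on cuda:0, batches get split across GPUs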
H: how to avoid tokenizing w/ sklearn feature extraction
I'm trying to analyze some machine log files and the column I'm looking at can have values like 'Part.C1.11.Reading Status'. I want to treat the complete string as one token and I don't want it to be split into 'Part', 'C1', '11', 'Reading' and 'Status'.
I've got the vague feeling that the token_pattern is the parameter I need to adjust so I tried to specify the beginning and the end of a string like so:
from sklearn.feature_extraction.text import CountVectorizer
cvo = CountVectorizer(token_pattern='^$',lowercase=False)
OriginCV = cvo.fit_transform(log['Message_Origin']).toarray()
However, the last line throws an error:
ValueError: empty vocabulary; perhaps the documents only contain stop words
I've also tried to explictly include dot and space in the token_pattern like so:
cvo = CountVectorizer(lowercase=False, token_pattern=r"(?u)\b\w\w+\b|\.|\s")
Throws no errors but does not do the trick (no change except for an additional token '.')
Not changing the default token_pattern does split the string at the spaces and colons though. I found this solution, which however modifies the string by removing e.g. the colons.
Any other idea how to solve this?
Thanks,
Mark
AI: The default for token_pattern is (?u)\b\w\w+\b where \w\w+ translates to [a-zA-Z0-9_][a-zA-Z0-9_]+ (which can be written as [a-zA-Z0-9_]{2,}). This matches 2 or more alphanumeric characters (as defined between the square brackets).
The \b matches word boundaries: anything that is not an alphanumeric character, next to something alphanumeric. This includes spaces and punctuation, so it also includes the dot and causes the separation.
The (?u) activates unicode matching, but it is not so important for your question.
If we also want to match the dot as part of the words, we can copy the default regexp, replace \w with its expanded version and add the dot to it: (?u)\b[a-zA-Z0-9_.]{2,}\b, as can be seen on this website. To also allow the word "Status" to be attached, you could add it as an optional, non-capturing group: (?u)\b[a-zA-Z0-9_.]{2,}(?: Status)?\b.
Alternatively, you can pass a function to the tokenizer parameter to define a more complex algorithm for tokenization.
Edit: If you want to avoid tokenization completely (as your own answer states), then CountVectorizer, which is a token counter, may not be the right preprocessing step: it would simply turn every document into a single token and return a count of 1. (Or maybe I misunderstood your question.)
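If the goal really is to keep each full string such as 'Part.C1.11.Reading Status' as a single token (including the space), one option is to bypass the built-in tokenizer entirely by passing a callable analyzer. A minimal sketch:
from sklearn.feature_extraction.text import CountVectorizer

# the analyzer receives the whole document; returning it inside a list makes it a single token
cvo = CountVectorizer(analyzer=lambda doc: [doc])
OriginCV = cvo.fit_transform(log['Message_Origin']).toarray()
print(cvo.get_feature_names_out())   # one feature per distinct message origin
                                     # (use get_feature_names in older scikit-learn versions)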
H: Making Inference from a Correlation Heatmap
I have constructed a heatmap of my dataset for visualization. I have searched various sites regarding what I can infer from the heatmap, but I am unable to get any clear understanding from them. I want to select useful features for Exploratory Data Analysis and Visualization to get some insights. If you can provide any suggestion or advice it would be extremely helpful for me. Thank you
AI: Assuming you're using df.corr(), the values shown in the heatmap are Pearson correlation coefficients, which can be thought of as a measure of the strength of the linear relationship between two variables.
A coefficient close to 0 means there is little or no linear relationship. A value close to 1 or -1 indicates a strong positive or negative relationship, and exactly 1 (or -1) is a perfect linear relationship.
Additionally, I would caution against acting on these values without checking the distribution of your data: Pearson correlation only captures linear relationships and is sensitive to outliers.
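A minimal sketch of how the correlation matrix can be used for feature selection, assuming a dataframe df of numeric features and a numeric target column named 'target' (both names are placeholders):
import seaborn as sns
import matplotlib.pyplot as plt

corr = df.corr()

# rank features by the absolute strength of their correlation with the target
print(corr['target'].abs().sort_values(ascending=False).head(10))

# inspect highly correlated feature pairs (candidates for dropping one of the two)
sns.heatmap(corr, cmap='coolwarm', center=0)
plt.show()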
H: LSTM get next output with Keras
So I'm learning RNN, and tried to do a prediction LSTM, but I do not understand how the output works. I have this LSTM RNN:
lstm_model = Sequential()
lstm_model.add(LSTM(100, batch_input_shape=(BATCH_SIZE, TIME_STEPS, x_t.shape[2]), dropout=0.0, recurrent_dropout=0.0, stateful=False, kernel_initializer='random_uniform'))
lstm_model.add(Dropout(0.5))
lstm_model.add(Dense(25,activation='relu'))
lstm_model.add(Dense(1,activation='sigmoid'))
optimizer = optimizers.RMSprop(lr=0.003)
lstm_model.compile(loss='mean_squared_error', optimizer=optimizer)
and fit with:
history = lstm_model.fit(x_t, y_t, epochs=300, verbose=2, batch_size=BATCH_SIZE,
shuffle=False, validation_data=(trim_dataset(x_val, BATCH_SIZE),
trim_dataset(y_val, BATCH_SIZE)), callbacks=[csv_logger])
When I then try to predict using
gotten = lstm_model.predict(x_test_t[-500:],batch_size=BATCH_SIZE)
Will I get only one new output? I'm very confused with the way you run and get predictions
Thanks for the help
AI: With your current model you get a single scalar output per input sequence, because the last layer is a Dense layer with just 1 unit. So predicting on the last 500 windows returns an array of shape (500, 1): one value per window, not one value in total.
In other words, with this model you map each sequence to a single value that may represent, e.g., the label of the series or its next value, depending on what target you train the network on.
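A minimal sketch that makes the shapes explicit, using the variables from the question:
gotten = lstm_model.predict(x_test_t[-500:], batch_size=BATCH_SIZE)
print(x_test_t[-500:].shape)   # (500, TIME_STEPS, n_features): 500 input windows
print(gotten.shape)            # (500, 1): one prediction per input window

# to forecast several steps ahead, append each prediction to the input window
# and call predict again (iterative / rolling forecasting)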
H: How to use the fillna method in a for loop
I am working on a housing dataset. In a list of columns (Garage, Fireplace, etc.), I have values called NA which just mean that the particular house in question does not have that feature (garage, fireplace). It doesn't mean that the value is missing/unknown. However, pandas interprets this as NaN, which is wrong. To work around this, I want to replace the value NA with XX to help Python distinguish it from genuinely missing values. Because there is a whole list of such columns, I want to use a for loop to accomplish this in a few lines of code:
na_data = ['Alley', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'FireplaceQu', 'GarageType',
'GarageFinish', 'GarageQual', 'GarageCond', 'PoolQC', 'Fence', 'MiscFeature']
for i in range(len(na_data)):
train[i] = train[i].fillna('XX')
I know this isn't the correct way of doing it as it is giving me a KeyError: 0. This is kinda like a pseudocode way of doing it to visualize what I'm trying to accomplish. What is the way to automate fillna('XX') on this list of columns?
AI: What you are looking for is replace().
And you don't need to write out all the columns: you can simply iterate over the column names.
for col in train:
    train[col] = train[col].replace("NA", "XX")
You can also do it on the whole dataset in one line:
train.replace("NA", "XX", inplace=True)
Or on specific columns only:
for col in na_data:
    train[col] = train[col].replace("NA", "XX")
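Note that this only works if the cells still contain the literal string "NA". If pandas has already converted them to NaN while reading the file (which is what the question describes), replace("NA", ...) will not match anything; in that case either keep the original fillna approach, looping over the column names instead of integer indices, or tell read_csv not to treat "NA" as missing. A sketch of both, assuming the same train dataframe and na_data list ('train.csv' is a placeholder path):
# option 1: fill the already-converted NaN values column by column
for col in na_data:
    train[col] = train[col].fillna('XX')

# option 2: prevent the conversion at read time
train = pd.read_csv('train.csv', keep_default_na=False, na_values=[''])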
H: Comparing a column value to list of values and if it contains the value then assigning that list value to new Column
I have a column of CI names (shown as an image in the original post) and wanted to check its values against this list:
list_CI = ['Application', 'Server & Storage', 'Active Directory',
'Please Select Value', 'SAP', 'WPS-Core Infra Services',
'Workplace', 'WPS-Telecoms', 'Citrix', 'Networks',
'WPS-Connectivity Support', 'WPS-Groupware', 'WPS-Smarthands',
'Networks & Telecomms']
I tried to apply a lambda but failed. How can I solve this with a regex or any other method?
Please give a hint and shed some light. Thanks.
AI: This is the solution; after 4-5 attempts, this code solves the problem:
def extractServiceName(x):
    if x in list_CI:
        return x
    else:
        return ''

Sample_data['Extracted_New_CI'] = Sample_data['CI Name'].apply(extractServiceName)
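A vectorised alternative that avoids the explicit Python function, assuming the same Sample_data dataframe and list_CI list:
# keep the value where it appears in list_CI, otherwise use an empty string
mask = Sample_data['CI Name'].isin(list_CI)
Sample_data['Extracted_New_CI'] = Sample_data['CI Name'].where(mask, '')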
H: What model should I use for bounding box detection?
I am working on building a cow detector for a local farm. I have a dataset of images with bounding boxes (not segmentation polygons) for every cow appearing in the images (different number of cows in every image). What is the best and easiest model to use to accomplish this task? Many cows stand behind others so there will be a lot of cases of occlusion. Is there a go-to model that I can get running quickly in Keras to perform well on this dataset? Any tips would be appreciated.
AI: There are 2 object detectors that are quite popular:
Haar Cascade Classifier, introduced by Viola and Jones: available in OpenCV (Python, Java and C++) and in the Matlab computer vision toolbox (and probably in many other languages); it is a great model for when deep learning is not an option.
YOLO, You Only Look Once: a real-time object detector and classifier based on a deep CNN; if I am not mistaken, its standard training data (COCO) includes a cow class, so you can use transfer learning. It is part of the Darknet library and you should totally check the site. Also check Tiny YOLO if you need more speed.
You should also take a look at:
MMOD (max-margin object detection): has a great implementation in dlib.
Fast R-CNN and R-CNN: deep-learning detectors; check the papers.
My advice would be to go straight to YOLO and try it out of the box on the cow class, then do some fine-tuning by retraining a few layers on your data. If you need more accuracy you can go for R-CNN (there are probably models already trained with cows).
If you lack computational power you can try MMOD or the Haar cascade classifier. But if you have the time, you should check BDNN (Binary convolutional neural networks for fast object detection); I don't know whether it is implemented in any library yet, since the paper is relatively fresh (March 2019).
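For a quick start without installing Darknet itself, here is a minimal sketch of running a pretrained YOLOv3 model through OpenCV's DNN module (recent OpenCV versions; the .cfg, .weights and .names file paths are placeholders you download from the Darknet site):
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
classes = open('coco.names').read().splitlines()    # COCO labels include 'cow'

img = cv2.imread('cows.jpg')
blob = cv2.dnn.blobFromImage(img, 1/255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# each detection row is [cx, cy, w, h, objectness, class scores...] in relative coordinates
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if classes[class_id] == 'cow' and scores[class_id] > 0.5:
            print('cow detected with confidence', scores[class_id])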
H: Find errors if any in tensorflow computation graph
Given a computational graph in tensorflow find errors if any in the underlying code.
import tensorflow as tf
import numpy as np
input = tf.placeholder(shape=(1, 224, 224, 3), dtype=tf.float32)
output = input + 5
with tf.Session() as sess:
network_input = np.random.randint(5, size=(700, 350, 3))
out = sess.run(output, feed_dict = {input : network_input})
The code throws an error because the shape of the placeholder and the shape of the given input do not match.
To fix it, I am changing the shape of the placeholder.
Is that the correct approach, or should the given input be reshaped to the placeholder's shape?
AI: In my opinion, reshaping the input is perfectly ok.
If you are changing the shape of the input just to satisfy the requirements of the placeholder, or for further processing, that is fine, as long as you know what you are feeding in and how it will be processed by the downstream code.
But in feed_dict the fed array and the placeholder must have matching shapes, otherwise you will get an error like the one above.
Note: in case you are not sure about the exact size of a dimension of your placeholder (typically the batch size), you can use None for that dimension when declaring the placeholder.
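Note that in this particular example the element counts do not even match (700·350·3 ≠ 1·224·224·3), so a plain np.reshape cannot work: you either resize the image and add a batch dimension, or relax the placeholder shape. A minimal sketch of both options (the cv2 resize is just one possible way to resize):
import numpy as np
import tensorflow as tf

# option 1: feed data that matches the placeholder
network_input = np.random.randint(5, size=(1, 224, 224, 3)).astype(np.float32)
out = sess.run(output, feed_dict={input: network_input})

# for real 700x350 images, resize first and then add the batch dimension, e.g.:
# resized = cv2.resize(img, (224, 224))      # -> (224, 224, 3)
# batched = resized[np.newaxis, ...]         # -> (1, 224, 224, 3)

# option 2: keep the batch dimension flexible when declaring the placeholder
input = tf.placeholder(shape=(None, 224, 224, 3), dtype=tf.float32)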
H: Filters in convolutional autoencoders
I have a question regarding the number of filters in a convolutional Autoencoder.
As far as I have understood, as the network gets deeper, the number of filters in the convolutional layers increases.
A typical pattern would be to $16, 32, 64, 128, 256, 512 ...$
Right now I am looking into Autoencoders and on the Keras Blog I noticed that they do it the other way around.
They start with $16$ filters in the first layer, then the number of filters is decreased:
https://blog.keras.io/building-autoencoders-in-keras.html
Now I am wondering if this is usual when working with Autoencoders or does it depend on what kind of features the network should learn?
Thanks in advance,
Cheers,
Michael
AI: Autoencoders are meant to compress (reduce the dimensionality of) your data, so the encoder's representation has to shrink towards the bottleneck. Steadily increasing the number of filters, as in a typical classification CNN, would work against that compression, which is why the Keras blog example reduces the filter count along with the spatial dimensions.
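For illustration, a minimal encoder sketch in the style of the Keras blog post (the filter counts 16/8/8 are just example values; both the spatial size and the filter count shrink towards the bottleneck):
from keras.layers import Input, Conv2D, MaxPooling2D
from keras.models import Model

inp = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)   # 28x28x16
x = MaxPooling2D((2, 2), padding='same')(x)                      # 14x14x16
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)      # 14x14x8
x = MaxPooling2D((2, 2), padding='same')(x)                      # 7x7x8
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)      # 7x7x8
encoded = MaxPooling2D((2, 2), padding='same')(x)                # 4x4x8 = 128-dim bottleneck

encoder = Model(inp, encoded)   # the decoder mirrors this structure with UpSampling2D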