H: Does running a Decision Tree classifier several times help?
To introduce myself, I am a novice in ML techniques. I recently had to write a scikit-learn based decision tree classifier to train on a real dataset. Someone suggested to me that I should run my model several thousand times and plot the accuracies on a graph. Here's the rub: I manually ran it around 20-30 times and every time it gave the same accuracy (for both the gini and entropy criteria). Is that wrong? Should it show slight variations every time?
AI: The scikit-learn DecisionTreeClassifier takes a parameter called random_state. If this is set to an integer, your model should produce the same results every time.
The person suggesting you run the model many times would be correct, assuming you allow for no set random state. This means the results should be slightly different every time, because there is some random selection going on in the algorithm. Here is an example from the splitter classes:
# Draw a feature at random
f_j = rand_int(n_drawn_constants, f_i - n_found_constants,
               random_state)
If you are not setting that random state (or any other kind of random seed), I am not sure off the top of my head why or how the results would always be identical.
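As an illustration (not part of the original answer), a minimal sketch of the suggested experiment; X_train, y_train, X_test and y_test are assumed to exist already:
from sklearn.tree import DecisionTreeClassifier

scores = []
for run in range(30):
    clf = DecisionTreeClassifier(criterion='gini', random_state=None)  # no fixed seed
    clf.fit(X_train, y_train)
    scores.append(clf.score(X_test, y_test))
print(scores)
With random_state=None the scores may vary slightly between runs; with random_state=42 they will be identical, and plotting the collected scores gives the graph that was suggested to you. |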
H: Are Decision Trees Robust to Outliers
I read that decision trees (I am using scikit-learn's classifier) are robust to outliers. Does that mean that I will not have any side effects if I choose not to remove my outliers?
AI: Yes.
Decision trees split the data using threshold comparisons, so it does not matter how far a point is from a split boundary; only which side of the threshold the point falls on matters. |
H: Validation loss
I am having trouble wrapping my brain around validation loss. It's my understanding that loss is calculated at the end of the feed forward in a NeuralNet and is used in back propagation to update the weights. But I also see validation loss as a metric and don't understand fully what it is for other than observation, is validation loss used in updating weights? Or is it simply a measurement of how far off your observations were at the current epoch?
AI: Validation loss is the same metric as training loss, but it is not used to update the weights. It is calculated in the same way - by running the network forward over inputs $\mathbf{x}_i$ and comparing the network outputs $\mathbf{\hat{y}}_i$ with the ground truth values $\mathbf{y}_i$ using a loss function e.g. $J = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(\mathbf{\hat{y}}_i, \mathbf{y}_i)$ where $\mathcal{L}$ is the individual loss function based somehow on the difference between predicted value and target.
is validation loss used in updating weights?
No. In fact that would be counter to the purpose of it.
Or is it simply a measurement of how far off your observations were at the current epoch?
Yes. The point of using a data set that you did not train on, is to measure how well your model is generalising to unseen records. Very often when building a predictive model, that is the main goal, and is actually more important than fitting to your training data (the fitting to data part is necessary for learning, but is not the goal). This is the case whenever you are building a model to use to make decisions on its predictions against new, previously unseen and unlabelled, data.
It is not safe to use the training data in the same role to check for generalisation. It is possible for many model types to learn the training data perfectly, yet be terrible at predicting from new values that are not from the training set. This is something you will want to avoid; it is often caused by overfitting, and neural networks will often overfit. Using a validation set is a way to monitor and help control against overfitting.
Neural networks (and other model types) typically use a validation set on every epoch, because training too long can cause over-fitting, and models don't recover from that, they just get worse from that point on. So it can save a lot of wasted effort to monitor validation loss, and stop training when it has not improved for a long while, or starts to get worse.
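As a minimal Keras sketch (not part of the original answer), assuming model is an already-compiled network and x_val/y_val is the held-out validation split:
from keras.callbacks import EarlyStopping

history = model.fit(x_train, y_train,
                    epochs=100,
                    validation_data=(x_val, y_val),   # forward pass + loss only, no weight updates
                    callbacks=[EarlyStopping(monitor='val_loss', patience=5)])
print(history.history['loss'], history.history['val_loss'])   # per-epoch training and validation loss
The EarlyStopping callback implements exactly the stop-when-validation-loss-stops-improving idea described above. |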
H: Why running the same code on the same data gives a different result every time?
I am using Keras in Jupyter Notebook.
I understood that for the same results, the random numbers should be produced from the same seed each time.
So, in the first cell of my code, I set the numpy random seed to 1234:
np.random.seed(1234)
Then other cells are the code for my model and the fit and evaluate code. But each time that I run the model cells, the loss values are different!
Why does it happen? How can I solve it?
AI: It's not quite enough to set only the numpy random seed, as you've seen - the Keras documentation also notes that it's necessary to set:
the python hash seed
the core python rng
the random seed of the backend Keras uses
and force your backend to use a single thread (a sketch of these steps is shown below).
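For reference (not from the original answer), a sketch of those steps for a TensorFlow 1.x backend; the exact names differ in newer versions (e.g. tf.random.set_seed in TF 2.x):
import os
import random
import numpy as np
import tensorflow as tf
from keras import backend as K

os.environ['PYTHONHASHSEED'] = '0'   # python hash seed (ideally set before the interpreter starts)
random.seed(1234)                    # core python rng
np.random.seed(1234)                 # numpy rng
tf.set_random_seed(1234)             # backend rng

# optionally force a single thread for bit-for-bit reproducibility (slow)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
K.set_session(tf.Session(graph=tf.get_default_graph(), config=session_conf))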
The interaction of all of these will generally result in different loss values due mainly to changes in random weight initialisation, which data ends up in your train/val/test splits, and the order that data is passed to your model for training.
As others have mentioned, a small amount of variance is to be expected and it's certainly not worth hamstringing your performance over (by limiting yourself to a single thread); setting the seeds and rng should be enough to satisfy people your results are reproducible.
If your results still have unacceptable variance after taking those steps then it might be an indication that your model is brittle and may not generalise well to new data so you'll want to address that.
Final thought - when you say the values are different each time you run the model cell, are you deleting/overwriting the model you already trained? If you are running the model cell on existing model/weight values then that's the same as training for more epochs and would usually have a large impact on your results. |
H: What database should I use?
I am a high-school student who is learning about data science in his free time. I have gotten a neural network to work which is able to solve xor problems. My neural network uses sigmoid as the activation function for both the hidden and output layers. It also has only one hidden layer. I am wondering what would be the best simple problem I could solve with my neural net. I would like a database in which there is a probability output or something similar, since I've had problems converting sigmoid output to normal values. I have looked on the UCI machine learning repository but have found nothing which has caught my eye. I would appreciate any help! :)
AI: Search "dataset" instead of "database".
see these:
A Neural Network from Scratch in just a few Lines of Python code
Multi Layer Neural Networks with Sigmoid Function Deep Learning for Rookies |
H: Can I create a pretrained model with TensorFlow?
It takes a long time to train a neural network model, and it has to be retrained every time I run my code. If I get high accuracy from training, can I use the same trained model in other code without training it again? How can I do that?
AI: You don't need to train your model from the beginning each time. After training, you can store your model; next time you can load it, continue training from that point, or use it for inference. You can find good tips here and here, which discuss storing and loading models in TensorFlow.
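As a concrete sketch (not from the original answer), one simple option is the Keras save/load API on top of TensorFlow; the low-level tf.train.Saver works in a similar spirit:
# after training once:
model.save('my_model.h5')            # architecture + weights + optimizer state

# in another script, no retraining needed:
from keras.models import load_model
model = load_model('my_model.h5')
predictions = model.predict(x_new)   # x_new is whatever data you want predictions for
Here model is assumed to be a compiled and trained Keras model. |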
H: How to upload a saved file from google colab to a site such as kaggle or github?
We are able to load a file from a site such as Kaggle or GitHub into Google Colab; we use the code below to download from other sites into Google Colab:
!wget nlp.stanford.edu/data/glove.6B.zip
but I am very curious to know if I am able to upload a saved file (such as an output from google colab) to kaggle or github with a command like above?
AI: In your directory /miryam/myproject:
git init .
git add myfile
git commit -m "First commit"
git remote add origin https://github.com/yours_project
git push origin master
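For the Kaggle part of the question (not covered in the original answer), the official kaggle command-line tool can push files from Colab as well; this is a hedged sketch, so check kaggle --help for the exact flags of your version:
!pip install kaggle
# upload your kaggle.json API token to the working directory first, then move it where the tool expects it:
!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/
!kaggle competitions submit -c <competition-name> -f my_output.csv -m "submitted from Colab"
Here my_output.csv stands for whatever file you saved in Colab. |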
H: Policy-based RL method - what do continuous actions look like?
I've read several times that Policy-based RL methods can work with continuous action space (move left 5 meters, move right 5.5312 meters), rather than with discrete actions, like Value-based methods (Q-learning)
If Policy-based methods produce the probability of taking an action from a current state $S_t$, how can such an action be continuous? We might get 4 probabilities for our actions, but we still have to choose a single action from the four:
$A_1: 10\%$
$A_2: 35\%$
$A_3: 5\%$
$A_4: 50\%$
Thus, it's not obvious how can my action be something continuous like: "turn +19.2345 angles clockwise". Such an action must have already been pre-defined to the "19.2345" value, right?
AI: The main requirement of on-policy policy gradient methods is that they use a parametric policy $\pi(a|s, \theta)$ that is differentiable with respect to the parameters $\theta$.
This is not restricted to only describing discrete probability distribution functions (e.g. softmax output layer of neural network). A description of any probability distribution function that is differentiable and possible to sample from is all that is required. This is true for the Normal distribution for instance, so one relatively common solution in continuous spaces is for a neural network to output the mean and standard deviation for the distribution of each component of the action vector that accepts continuous values.
Typically the neural network does not perform the sampling to choose an action. This is also true for a softmax output - it is only additional code, outside of the NN, that interprets the values and selects the action. In addition, and unlike softmax, the NN does not need to directly represent the probability distribution function, just enough data to drive the sampling process. However, the nature of the distribution function does need to be taken into account when calculating the gradient in policy gradient methods.
Thus, it's not obvious how can my action be something continuous like: "turn +19.2345 angles clockwise". Such an action must have already been pre-defined to the "19.2345" value, right?
What the policy might output here is the two parameters of the distribution $\mathcal{N}(\mu, \sigma)$, which you then must sample to get an action like "turn x degrees clockwise". So for example, the neural network could output $(25, 7)$ and then additional code will interpret those values as describing the distribution and take a sample. If you got a mean 25, standard deviation 7, then at the point you select the action you could get "turn +19.2345 degrees clockwise" amongst a range of other values. The value 19.2345 does not need to be pre-defined or represented in the neural network in order to do that.
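As a minimal sketch of that final step (not part of the original answer), assuming a hypothetical policy_net that returns the two distribution parameters for a 1-D action:
import numpy as np

mu, sigma = policy_net(state)          # e.g. (25.0, 7.0)
action = np.random.normal(mu, sigma)   # e.g. 19.2345 -> "turn +19.2345 degrees clockwise"
The same sampled action is what later enters the policy-gradient update through the log-probability of the Normal distribution. |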
H: How to perform a regression on 3 functions using a Neural Network
I am currently building a neural network using Keras to perform a regression.
I have 4 independent variables W,X,Y,Z. They are used to predict 3 different functions f1(W,X,Y,Z), f2(W,X,Y,Z), f3(W,X,Y,Z).
Should my output layer have 1 or 3 neurons? Also, should I be using a relu or linear activation function for the output layer? I'm currently using MSE for my loss function and adam for my optimizer.
For my metrics, should I use 'accuracy' or 'r2'?
Any suggestions? I'm sorry, I'm new to deep learning...
AI: Should my output layer have 1 or 3 neurons?
The easiest thing to do is to create 3 separate networks, one for each function you want to approximate.
While it may certainly be beneficial if you combine all 3 outputs into the same model via a multi-task framework, I'd suggest starting off with separate networks, which is more intuitive. Then, if you want to see if combining them under one network improves your performance, you could check out how to create models with multiple outputs in keras.
Also, should I be using a relu or linear activation function for the output layer?
Linear, definitely.
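For reference (not from the original answer), a minimal Keras sketch of the single-network variant with 3 linear outputs; the hidden-layer size is an arbitrary assumption:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(4,)))  # inputs W, X, Y, Z
model.add(Dense(3, activation='linear'))                   # outputs f1, f2, f3
model.compile(loss='mse', optimizer='adam', metrics=['mae'])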
For my metrics, should I use 'accuracy' or 'r2'?
Accuracy is a metric for classification, not regression. $r^2$ can be used, but for certain problems it isn't very reliable. You could try $r^2$, as well as MSE, MAE, MSLE (MSE of the $log$s of the predicted/actual values), etc. |
H: word2vec - log in the objective softmax function
I'm reading a TensorFlow tutorial on Word2Vec models and got confused with the objective function. The base softmax function is the following:
$P(w_t|h) = \text{softmax}(score(w_t, h)) = \frac{\exp[score(w_t, h)]}{\sum_{w' \in V} \exp[score(w', h)]}$, where $score$ computes the compatibility of word $w_t$ with the context $h$ (a dot product is commonly used). We train this model by maximizing its log-likelihood on the training set, i.e. by maximizing $ J_{ML} = \log P(w_t|h) = score(w_t,h) - \log \bigl(\sum_{w' \in V} \exp[score(w',h)]\bigr)$
But why $\log$ disappeared from the $score(w_t,h)$ term?
AI: No, the logarithm doesn't disappear. Starting from the equation
$$J_{ML} = \log P(w_t|h) = \log \frac{\exp[score(w_t,h)]}{\sum_{w' \in V} \exp[score(w',h)]},$$
and using $\log\frac{A}{B} = \log A - \log B$, we get
$$J_{ML} = \log\bigl(\exp[score(w_t,h)]\bigr) - \log\Bigl(\sum_{w' \in V} \exp[score(w',h)]\Bigr).$$
Since $\log(\exp(x)) = x$, the first term simplifies to $score(w_t,h)$, giving
$$J_{ML} = score(w_t,h) - \log\Bigl(\sum_{w' \in V} \exp[score(w',h)]\Bigr).$$
So the $\log$ is still there; it has simply been cancelled by the $\exp$ in the numerator. |
H: How to implement LSTM using Doc2Vec vectors?
I would like to build an ANN for text classification which has an LSTM layer, using weights obtained from a Doc2Vec model trained beforehand:
model_doc2vec = Sequential()
model_doc2vec.add(Embedding(vocabulary_dim, 100, input_length=longest_document, weights=[training_weights], trainable=False))
model_doc2vec.add(LSTM(units=10, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_doc2vec.add(Flatten())
model_doc2vec.add(Dense(3, activation='softmax'))
model_doc2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
At the moment I am not able to get the weights in the Embedding() layer mentioned above. I would like to know which is the easiest way to get these weights.
AI: The weights are nothing but the pretrained word vectors. You can use any word2vec or GloVe embedding and build an embedding matrix from it: essentially, each row of the matrix is the vector of one word in the word2vec/GloVe vocabulary. Please have a look here
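A minimal sketch of building such a matrix (not from the original answer), assuming a trained gensim word2vec model w2v and a word_index dict mapping each word to its integer id:
import numpy as np

embedding_dim = 100
embedding_matrix = np.zeros((vocabulary_dim, embedding_dim))
for word, i in word_index.items():
    if word in w2v.wv:                      # out-of-vocabulary words keep an all-zero row
        embedding_matrix[i] = w2v.wv[word]

# Embedding(vocabulary_dim, embedding_dim, input_length=longest_document,
#           weights=[embedding_matrix], trainable=False)
Here vocabulary_dim and longest_document are the same quantities as in the question's code. |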
H: Dataset with disproportionately more of a single label than any other
I'm using the data analysis software Orange to analyze rows of data with labels $\{H, T, L\}$. $T$ is the neutral state of the system I'm trying to model, so the data is almost always labeled with $T$. This leads to an extremely high $97\%$ classification accuracy, which is misleading. Almost no datapoint with an $H$ or $L$ actually gets labeled with an $H$ or $L$ respectively by the neural net. The neural net is doing what's logical: flatten everything to $T$, since that occurs most of the time. How can I repair this situation?
AI: There are a few ways:
Oversample the under-represented class.
issue: Leads to very constrained boundaries (not smooth) around the under-represented class
Weight the loss coming from the under-represented class more heavily than that from the over-represented class. If you think about it, this is mathematically doing the same thing as 1.
Use SMOTE (here) (I recommend this). It basically samples additional points randomly along lines joining nearest neighbours within the under-represented class and increases the dataset size accordingly; see the scikit-learn-contrib implementation of SMOTE.
There are other ways, but they offer only marginal improvements over SMOTE; a minimal usage sketch follows.
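This is a hedged sketch using the imbalanced-learn (scikit-learn-contrib) package; the method is called fit_sample in older versions:
from imblearn.over_sampling import SMOTE

# X, y are the original imbalanced features and labels
X_resampled, y_resampled = SMOTE().fit_resample(X, y)
Only resample the training split; evaluate on an untouched hold-out set. |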
H: Problem in Recommendation for categorical data?
I have been building a recommendation model to recommend certain questions in an interaction platform to users to help each other.
I have calculated an affinity score between categories to find which top categories should be recommended. But each category itself contains questions posted by users, and the number of questions grows with every new post a user makes in a certain category. Now, how can I choose which of these questions to recommend once I have chosen the category through my affinity score? Do I make it random? Do I display the questions which come first in the database? Or is there a better alternative?
AI: Welcome to the site, I'll propose few alternatives below in increasing order of complexities.
Simple sorting criteria: Think what the user wants to see and create a score for sorting questions. Ordering of question based on some criteria like number of answers or number of views it has already received. Show top N based on score.
Derived sorting criteria: Use a combination of factors like number of answers, freshness of question, popularity of question based on number of views to create a derived score. Show top N based on score.
Adding Discovery to (1-2): The options above will penalize new questions and run into the typical explore vs. exploit trade-off of any recommendation system. You can alleviate that by adding a random bump to the score of new questions so that they get a chance to participate.
Jointly learning categories and questions: You can setup your recommendation algorithm to actually work at question ID level and use categories as side information. Check this worked out example of LightFM library in Python. Coincidentally, it uses stackoverflow questions and categories as examples.
The first 3 options are easy to implement but biased. The fourth one will need some data transformations, and I'm not sure if your objective is to model directly at question ID level. |
H: doc2vec - How does the inference step work in PV-DBOW
I am quite confused about how we generate new paragraph vectors in PV-DBOW?
If I want to use the embeddings to classify some text how would I generate a vector for a new paragraph?
In the original paper the authors wrote:
At prediction time, one needs to perform an inference step to compute
the paragraph vector for a new paragraph. This is also obtained by
gradient descent.
It is unclear to me how they would compute the paragraph vector for a new paragraph using gradient descent.
AI: The original paper does a lot of hand-waving on the implementation of the inference step and is not clear, so your confusion is justified. I'll explain it at a high level below, assuming only the PV-DBOW model.
Training Phase
In this model, we forget word ordering information and setup a very simple neural network.
Represent all input document tags in a vocabulary to get unique ID for each document tag. Do the same for words.
For a given document ID, the input is one-hot encoded representation of document tag IDs. Output is one hot encoded representation of a randomly selected word.
We want to setup training such that for a given tag ID, a randomly selected word from that document will be predicted with high probability.
So the neural network being trained transforms the one-hot encoded representation into a document/tag vector. The tag vector is passed through another layer with a softmax at the output. Both sets of weights are adjusted during the training phase.
What training achieves: It represents all document tags in a new space such that probabilities for randomly selected words in each document are maximized starting from that vector space representation to softmax output.
It is important to note that there are two sets of weights: input to hidden layer, and hidden layer to softmax output.
Inference Stage
Words are unique across documents, so a word like 'India' gets mapped to the same vocabulary ID in both the training and inference stages.
But there is no such concept for document IDs, since documents are assumed to be unique and there is no shared ID between training and inference.
The inference stage runs a (sort of) reverse calculation: what vector-space representation is most appropriate for this document, if I use the same set of weights from hidden layer to output layer? The weights from the hidden layer to the softmax output are kept constant!
Select a word from the new document at random. Start with a random representation for the document vector (the input-to-hidden weights). Pass it through the network from hidden layer to softmax output with constant weights (learnt during training). Adjust the randomly initialized weights such that the softmax probability is maximized for the selected word. Repeat this process many times.
This is the reason why you need stochastic gradient descent and need to specify number of steps.
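For reference (not part of the original answer), this is what the inference step looks like with gensim's Doc2Vec; the keyword controlling the number of SGD steps is epochs in recent versions (steps in older ones):
# 'model' is assumed to be an already trained PV-DBOW Doc2Vec model
new_doc_words = "solar power in india is growing fast".split()
doc_vector = model.infer_vector(new_doc_words, epochs=100)  # more epochs = more gradient steps on the new vector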
Summary
Document IDs are not shared between the training and inference stages (unlike word IDs in word2vec).
There are two sets of weights; keep one constant during the inference stage, based on what was learnt in training, and adjust the other using SGD.
You map the new document into the document vector space such that documents which had similar words during the training phase end up close together in that space.
This is a very good tutorial on the implementation by Richard Berendsen. I have ignored additional complexities like negative sampling, context window from the explanation. |
H: Early stopping and final Loss or weights of models
In a deep model, I used the Early stopping technique as below in Keras:
from keras.callbacks import EarlyStopping
early_stopping = [EarlyStopping(monitor='val_loss',
min_delta=0,
patience=2,
verbose=2, mode='auto')]
model.fit(train_x, train_y, batch_size=batch_size,
epochs=epochs, verbose=1,
callbacks=early_stopping,
validation_data=(val_x, val_y))
Now, when I run this code, in the output it prints the loss value for training and validation of each epoch.
I set patience=2 in the early stopping, so it continues the training process for two more epochs after the validation loss increases instead of decreasing.
Some things like this:
Epoch 1/10
- 198s - loss: 99.7160 - val_loss: 123.0397
Epoch 2/10
- 204s - loss: 78.7000 - val_loss: 109.0344
Epoch 3/10
- 208s - loss: 65.4412 - val_loss: 78.0097
Epoch 4/10
- 268s - loss: 61.9812 - val_loss: 79.0312
Epoch 5/10
- 298s - loss: 59.1124 - val_loss: 79.3397
Epoch 6/10
- 308s - loss: 57.2200 - val_loss: 218.0397
Epoch 00007: early stopping
In the end, what will the final weights and loss values of the model be? Those from the final epoch of training, or from two epochs before it?
If it keeps the final epoch, would it be better to set the patience as low as possible to overcome the overfitting?
Thank you
AI: EarlyStopping
The final weights will be saved, not the weights where your patience parameter is triggered.
Looking at the documentation for EarlyStopping, it seems not to be involved with saving weights at any point - it isn't mentioned.
EDIT
Upon further investigation (reading the source code), it seems you can indeed save the best weights, using the EarlyStopping callback.
The class can be initialised with the parameter restore_best_weights, as seen here. Then at the end of training, when your waiting period has overshot the patience parameter, the model's weights are set back to the best weights (the weights of the model at the time of lowest validation loss):
if self.restore_best_weights:
if self.verbose > 0:
print("Restoring model weights from the end of the best epoch")
self.model.set_weights(self.best_weights)
It does this by tracking a chosen metric and comparing it to the recorded best value. By default this will be the validation loss. Check out the source code and its description of the class there.
If these parameters don't work with your code, you will need to upgrade to the latest version (master branch from GitHub). This new parameter was only added 10 days ago!
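For completeness (not from the original answer), with such a version the question's callback would simply become:
early_stopping = [EarlyStopping(monitor='val_loss',
                                patience=2,
                                restore_best_weights=True,  # roll back to the best-val_loss weights
                                verbose=2, mode='auto')]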
ModelCheckpoint
If you want to use a callback to prevent overfitting, have a look at the ModelCheckpoint callback class. This has options to save the model weights at given times during training, and will allow you to keep the weights of the model at the end of the epoch specifically where the validation loss was at its minimum. This is selected using the save_best_only parameter; use it like this:
from keras.callbacks import EarlyStopping, ModelCheckpoint
checkpointer = ModelCheckpoint(filepath, monitor='val_loss', verbose=0,
save_best_only=False, save_weights_only=False,
mode='auto', period=1)
# your early stopper as before
early_stopper = ...
All callbacks must be placed in a list:
my_callbacks = [checkpointer, early_stopper]
# pass to model along with other parameters as you did already
model.fit(..., callbacks=my_callbacks, ...)
If you don't use save_best_only, the default behaviour is to save the model at the end of every epoch.
You can set the filepath by using certain dynamic variables available during training, so that the filenames have some useful information:
E.g. filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5
the model checkpoints will be saved with the epoch number and the validation loss in the filename.
Have a look at the linked documentation for how to use the other parameters of ModelCheckpoint. |
H: when to use dot product and when to use the common product In neural networks?
I wanted to know when to use the dot product and when not to. I also don't know when we must transpose an array and why we should. Could someone help me to understand this?
If you could give me the derivative of the cost function with respect to each weight (W1, W2, W3) using the notation below, so I can compare it with the results I found, I'll appreciate that!
AI: Neuron-level operations:
First, let's describe the output of each neuron in the network individually.
For the first hidden layer:
$$ A_2^1 = f_2(Z_2^1) = f_2(X \cdot W_1^1)$$
$$ A_2^2 = f_2(Z_2^2) = f_2(X \cdot W_1^2)$$
$$ A_2^3 = f_2(Z_2^3) = f_2(X \cdot W_1^3)$$
Let me explain the notation a bit: $A_2^1$ is the output of the first neuron of the second layer. $f_2$ is the activation function of the second layer. $W_1^2$ is the weight on the connection leading into the second neuron, among the weights you denote $W1$ in the figure.
Second hidden layer:
$$ A_3^1 = f_3(Z_3^1) = f_3(A_2^1 \cdot W_2^{11} + A_2^2 \cdot W_2^{21} + A_2^3 \cdot W_2^{31})$$
$$ A_3^2 = f_3(Z_3^2) = f_3(A_2^1 \cdot W_2^{12} + A_2^2 \cdot W_2^{22} + A_2^3 \cdot W_2^{32})$$
$$ A_3^3 = f_3(Z_3^3) = f_3(A_2^1 \cdot W_2^{13} + A_2^2 \cdot W_2^{23} + A_2^3 \cdot W_2^{33})$$
Again $A_3^1$ is the output of the first neuron of the third layer and $f_3$ is the activation function of the third layer. This time $W_2^{12}$ corresponds to the weight in $W2$ that connects the first neuron of the previous layer to the second neuron of the current layer.
For the output layer:
$$ \hat y = A_4 = f_4(Z_4) = f_4(A_3^1 \cdot W_3^1 + A_3^2 \cdot W_3^2 + A_3^3 \cdot W_3^3)$$
The notation follows the rules above.
Note that all the above operations are between simple numbers, meaning that all $\cdot$ symbols you saw above are common multiplications! Because this way of representing Neural Networks isn't effective, instead of describing each neuron individually, we usually describe the operations of layers with vectors or arrays. This is where your question lies (about the dot product and the transposing).
Layer-level operations:
First hidden layer:
$$ A_2 = f_2(Z_2) = f_2(X \cdot W_1) $$
In the above equation $W_1$, $Z_2$ and $A_2$ are all $1 \times 3$ arrays. While $X$ remains a simple number (you can think of it as a $1 \times 1$ array).
Second hidden layer:
$$ A_3 = f_3(Z_3) = f_3(A_2 \cdot W_2) $$
Again $A_3$, $Z_3$ and $A_2$ are $1 \times 3$ arrays. This time $W_2$ is a $3 \times 3$ array.
Output layer:
$$ A_4 = f_4(Z_4) = f_4(A_3 \cdot W_3) $$
Here, $Z_4$ and $A_4$ are numbers and $W_3$ is a $3 \times 1$ array.
Every $\cdot$ in the above is a basic matrix multiplication. Because this example wasn't the best to answer your question, I'm going to change it a bit. Imagine if the second hidden layer had $4$ neurons instead of $3$. In order for us to get from a $1 \times 3$ array to a $1 \times 4$ one we would need a weight array with a shape of $3 \times 4$.
$$
\begin{pmatrix}
A_2^1 & A_2^2 & A_2^3
\end{pmatrix}
\cdot
\begin{pmatrix}
W_2^{11} & W_2^{12} & W_2^{13} & W_2^{14}\\
W_2^{21} & W_2^{22} & W_2^{23} & W_2^{24}\\
W_2^{31} & W_2^{32} & W_2^{33} & W_2^{34}
\end{pmatrix}
=
\begin{pmatrix}
Z_3^1 & Z_3^2 & Z_3^3 & Z_3^4
\end{pmatrix}
$$
This is summarized as $A_2 \cdot W_2 = Z_3$. Because in matrix multiplication the order of the two matrices counts, we need to make sure that the operation is possible. If they are in the reverse order (with the activations written as a column vector), we need to transpose the weight array. This would result in the following equation: $W_2^T \cdot A_2 = Z_3$.
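As a quick numerical check (not part of the original answer), the same shapes in numpy, where np.dot is the matrix product:
import numpy as np

A2 = np.array([[0.5, 0.1, 0.9]])   # shape (1, 3): row vector of activations
W2 = np.random.rand(3, 4)          # shape (3, 4): weights into a 4-neuron layer

Z3 = np.dot(A2, W2)                # shape (1, 4)
Z3_col = np.dot(W2.T, A2.T)        # the same numbers as a (4, 1) column vector
Element-wise (the "common") multiplication with * only works when the shapes already match, which is why the layer-level equations above use the matrix product instead. |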
H: Prepare data : many products per row
I would like to find how to change my data structure to make it compatible with an ML model, with the following constraints:
A Product is defined by {id, price, continuousVariable1, ..., continuousVariable10}
The training set is train.csv. Each line contains a date, a Product, and a list of Products (competing products)
For each row of train.csv, the size of the list of Products may change.
It would be easy if I had only one competing product: I could put all the columns in the same file. But I have many competing products.
The output variable is the product price.
AI: In my opinion, the best way is to create a column for every(!) product. Yes, you will get a high-dimensional sparse matrix, but it should solve your problem quickly.
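As a hedged illustration of that encoding (not from the original answer), assuming each row carries a Python list of competing product ids in a column called competitors:
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({'price': [10.5, 7.2],
                   'competitors': [[101, 102], [102, 105, 107]]})

mlb = MultiLabelBinarizer()
indicators = pd.DataFrame(mlb.fit_transform(df['competitors']),
                          columns=mlb.classes_, index=df.index)
train = pd.concat([df.drop(columns='competitors'), indicators], axis=1)
# one 0/1 column per product id, so rows with different list lengths share one fixed schema
The target column (the product price) stays as it is. |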
H: How do I predict continuous value from time series data?
I have a glove that has 2 IMUs (Inertial Measurement Units) attached to it. It gives the rotation value as a quaternion (x, y, z, w) and the acceleration of the hand (x, y, z).
I put it on my left hand and I want to predict the position of the hand relative to some fixed point on my body (e.g. my head).
When I collect the data, I just use the Vive controller to track my hand.
Right now I just feed the value into a simple model like SVM to predict (x,y,z) of the hand relative to my head.
The problem is that the output is jumpy, because the sensors are not stable enough and because the model doesn't take into account data from previous timesteps.
I want to utilize time series data as my data is time series in nature.
Which algorithms I should use if I'm treating my data as time series?
You can suggest me things I should learn about too.
AI: I will first list some examples of models you could look into.
You can suggest me things I should learn about too.
You can look at more classical regression-type models, or go deeper into deep neural networks utilising Long Short Term Memory cells, which model relationships over time.
Classical
The main term that will help you search for related tutorials/documentation will be autoregressive, which is a fancy name for saying a target value is predicted from its recent history. How recent, is a parameter you can tune your model for, called lags.
Have a look at the options in the statsmodels module for Python. More specifically, you might want to try out the ARIMA model class. For more details on ARIMA, check out this thread.
There are many traditional models that you could use for a time-series problem. Terms you might consider searching for include:
Generalised Linear Models (GLM) - here is an introductory article about them
General Additive Models (GAM) - a variant, with a good walkthrough using R
Deep Learning
This is a much newer topic and is overall a fair bit more complex than the classical models discussed above. If you go this route, you would probably make progress quickest by using a Deep Learning framework such as Keras, which allows you to build complex models without too much time investment. It is a wrapper library that uses either Tensorflow, Theano or CNTK in the background (these are the available backends).
In a deep learning approach involving time-series analysis, you almost certainly want to start with an LSTM model. This stands for Long Short Term Memory, and uses a complex cell to monitor various states as you pass in your time-series data. It is a really big topic, so I will just provide:
an introduction to the logic behind LSTM, and
a nice tutorial on how to get things working
If you decide to go this way (as opposed to classical models), then yes, you should understand backprop.
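As a hedged starting point (not from the original answer), a minimal Keras sketch for this kind of sequence regression; the window length, layer sizes and the 7-feature input (quaternion plus acceleration) are assumptions:
from keras.models import Sequential
from keras.layers import LSTM, Dense

window, n_features = 50, 7                 # 50 timesteps of quaternion (x,y,z,w) + acceleration (x,y,z)
model = Sequential()
model.add(LSTM(64, input_shape=(window, n_features)))
model.add(Dense(3, activation='linear'))   # predicted hand position (x, y, z) relative to the head
model.compile(loss='mse', optimizer='adam')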
This is a really broad problem that you have described and there are many, many approaches. It is also a likely very active research topic within the fields of (self-)localisation. |
H: RL agent, how to forbid actions?
In Q-learning, how to tell the agent that action $a_7$ is unavailable from within state $s_{t}$?
Is supplying a very large negative reward good, or might throw it off-track?
From what I read (link), it's best to work with small rewards (between -1.0 and 1.0)
AI: In Q-learning, how to tell the agent that action $a_7$ is unavailable from within state $s_{t}$?
Partially, it depends on what you mean by "unavailable". If the environment is such that it is possible to take the action, but that the consequences are very bad, then a negative reward due to the consequences is probably the best option. You might do this for a real-time system where the action can actually be taken, cannot be automatically blocked in any way, and has unwanted consequences.
Perhaps an action is unavailable, because it is not possible to even attempt it in the supplied environment. This could occur in a board game where certain moves are not allowed by the rules of the game. There you have a few options, depending on how you have constructed the agent.
A simple approach is to only present the agent with actions that it is allowed to take in the first place. The code for the environment should already know this list, so if you were considering penalising the choice, then you can also choose not to present it. This works well for value-based methods such as Q-learning, when you have implemented $Q(s,a)$. If the set of allowed actions is $\mathcal{A}(s)$, then the greedy action choice from state $s_t$ is $a_t = \text{argmax}_{a \in \mathcal{A}(s_t)} Q(s_t, a)$. All you need to do is implement $\mathcal{A}(s)$ in code and loop over different calls to Q(s,a) to find the best value.
If you have implemented the Q function in parallel over the whole action space (a good choice for efficiency with neural networks), then the network takes only the state $s$ as input, and outputs a vector e.g. $[Q(s, a_0), Q(s, a_1), Q(s, a_2), Q(s, a_3), Q(s, a_4), \ldots, Q(s, a_N)]$. In that case you would use your implementation of $\mathcal{A}(s)$ to mask out unwanted actions before choosing the best action. It is a little bit wasteful to calculate the non-required actions, but if it was a rare enough exception, then probably still more efficient overall.
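A minimal sketch of that masking step (not part of the original answer):
import numpy as np

q_values = np.array([0.2, 1.5, -0.3, 0.7])     # network output: Q(s, a) for every action
allowed = np.array([True, False, True, True])  # A(s): the second action is unavailable in this state
masked = np.where(allowed, q_values, -np.inf)  # barred actions can never win the argmax
best_action = int(np.argmax(masked))           # -> 3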
Beyond Q-learning, you could also use a similar action mask when you had a policy network that output softmax probabilities of actions. In that case you would also want to re-normalise the probabilities before generating the selected action.
Unless you have a specific goal or benefit in mind for the agent to learn about actions that are blocked by the environment, then it is simplest to query the environment to support the blocking, as described above. This is definitely the case for game-playing bots covering moves which are illegal according to the game rules. The justification is that the agent is learning to play the game, and the goal of it learning what the rules are for the game is a less interesting distraction.
Alternatively, it should work to allow the incorrect action selection, then give a negative reward (can be a small one, in same scale as other rewards in the system). Note this should also increment the time step and return the unaltered state. An agent will eventually learn not to take such a pointless, barred action. Maybe sometimes you are interested in whether the agent can learn the rules. |
H: Can we use a neural network to perform arithmetic operation between 2 numbers?
How to develop a neural network which can perform subtraction?
AI: It is an easy task. Make some training data with two inputs and one output; this is a regression task. You can train a simple MLP, or you can employ RNNs such as LSTM for this task. Take a look here.
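For illustration (not from the original answer), a minimal MLP sketch in Keras that learns a - b; the layer size, input range and epoch count are arbitrary:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.uniform(-10, 10, size=(10000, 2))
y = X[:, 0] - X[:, 1]                          # target: first input minus second

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(2,)))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=20, verbose=0)

print(model.predict(np.array([[7.0, 2.5]])))   # should be close to 4.5
The approximation only holds well inside the range the network was trained on. |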
H: Anomaly detection on time series
I've just started working on an anomaly detection development in Python.
My data sets are a collection of time series. More specifically, the data come from sensors/meters which record and collect data on boilers or other equipment.
As I said before, the data which I have to work with, are timeseries, so a timestamp and the relative value detected by sensor; a value is anomalous when it's bigger or smaller than the others near it; basically a peak.
I need to develop an unsupervised classification model, because I don't have labels for all the data.
Another important aspect is that this data is "season dependent": for example, a boiler has higher consumption in winter than in summer, and those values must not be considered anomalies.
Since I have no experience on this topic, I'm here to ask you: what is the best algorithm/approach to solve this problem?
Furthermore, do you know some books or links to suggest?
AI: For understanding the seasonality of time series data, I would start with the Holt-Winters method or ARIMA. Understanding these algorithms will help you understand how time series forecasting works.
Time series forecasting
For unsupervised classification, I would start with something like k-means clustering for anomaly detection.
Anomaly Detection with K-Means Clustering
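As a hedged first step for the seasonality issue (not from the original answer), statsmodels can split a series into trend, seasonal and residual parts; the keyword for the season length is period in recent versions (freq in older ones):
from statsmodels.tsa.seasonal import seasonal_decompose

# 'series' is assumed to be a pandas Series of sensor readings indexed by timestamp (hourly here)
result = seasonal_decompose(series, model='additive', period=24)
residual = result.resid   # peaks in the residual are anomaly candidates once seasonality is removed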
These links should be a good starting point, I hope this helps. |
H: Given data that is labeled as outliers, how can I classify data as outliers?
I have a dataset that is a mixture of sparse binary features and quantitative features. I only have definite outliers labeled. How should I approach trying to classify unlabeled data?
I considered using OSVM or other methods of one-class classification.
However, in my data the normal data points are clustered close to the mean. The outliers are generally points that deviate from the mean in any direction. My problem is that the outliers form a sort of high dimensional doughnut around the normal data.
Considering that the deviations occur in all directions, what algorithms would be best suited to the task? Keep in mind that I have significantly less normal labeled data points for training although the normal points will outnumber the outliers in the unlabeled data.
PS I posted this question on Cross Validated as well. Which site should this question be posted on?
EDIT:
Mahalanobis distance is able to work fairly well. However, I have the labeled outliers. Is there some way I could use them to improve accuracy?
AI: If you are sure that your data are actually normally distributed and that your outliers actually form a high-dimensional ring around your "good" data, you simply require a distance metric; Mahalanobis distance is suitable for normal data. Then determine a threshold value for the distance from the mean, and consider all points beyond the resulting high-dimensional ball (or ellipsoid, depending on the variance structure) to be outliers, in line with your description.
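A minimal sketch of that approach (not part of the original answer), using scikit-learn's covariance estimator; the threshold is an assumption you would tune, for example from a chi-squared quantile or from your labeled outliers:
from sklearn.covariance import EmpiricalCovariance

# X_normal: rows known to be normal; X_new: rows to score
cov = EmpiricalCovariance().fit(X_normal)
d2 = cov.mahalanobis(X_new)     # squared Mahalanobis distance to the fitted mean
outliers = d2 > threshold       # everything outside the ellipsoid
The labeled outliers you already have can be used to pick a threshold that separates them from the normal points. |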
H: Efficient dimensionality reduction for large dataset
I have a dataset with ~1M rows and ~500K sparse features. I want to reduce the dimensionality to somewhere in the order of 1K-5K dense features.
sklearn.decomposition.PCA doesn't work on sparse data, and I've tried using sklearn.decomposition.TruncatedSVD but get a memory error pretty quickly. What are my options for efficient dimensionality reduction on this scale?
AI: Have you heard of Uniform Manifold Approximation and Projection (UMAP)?
UMAP (Uniform Manifold Approximation and Projection) is a novel
manifold learning technique for non-linear dimension reduction. UMAP is
constructed from a theoretical framework based in Riemannian geometry
and algebraic topology. The result is a practical scalable algorithm
that applies to real world data. The UMAP algorithm is competitive
with t-SNE for visualization quality, and arguably preserves more of
the global structure with superior run time performance. Furthermore,
UMAP as described has no computational restrictions on embedding
dimension, making it viable as a general purpose dimension reduction
technique for machine learning.
Check their code and original paper for list of pros and cons, it is super easy to use.
Quick Facts: UMAP can handle large datasets, is faster than t-SNE, and supports fitting to sparse matrix data. Contrary to t-SNE, it is a general purpose dimension reduction technique, meaning that it can be used not only for visualisation but also for reducing the feature space before feeding it into other machine learning models.
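A minimal usage sketch (not from the original answer), using the umap-learn package; the target dimensionality below is an assumption matching the question:
import umap

reducer = umap.UMAP(n_components=1000)       # e.g. 500K sparse features -> 1K dense features
X_dense = reducer.fit_transform(X_sparse)    # X_sparse can be a scipy.sparse matrix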
Concrete Examples: I have benchmarked the method and compared it against some other dimensionality reduction techniques benchmark notebook, if interested to have a quick look and a jump start. |
H: Why does Bagging or Boosting algorithm give better accuracy than basic Algorithms in small datasets?
I was working with a small dataset of 392 rows, and it was somewhat imbalanced, with 262 rows belonging to class 1 and the remaining 130 to class 0. So I upsampled it using sklearn's resampling utilities.
However, the total dataset was now around 520 rows. I applied basic algorithms first, like Logistic Regression and an SVM classifier, and since precision alone is not a good metric for an imbalanced dataset, I used the f1-score and recall. Logistic Regression gave a 78% f1-score for class 1 and 80% for class 0, while the SVM gave an almost 99% recall for class 0 but only 72% for class 1, which shows that it is overfitting.
But to my surprise, I found that Random Forest gave me better results, with around an 83% f1-score for class 0 and 82% for class 1. Yet everywhere I have looked so far, it is said that bagging and boosting algorithms need a lot of data to work well, which is not the case in this scenario.
I've searched Google a lot, but unfortunately I haven't been able to find any specific answer, and I need to know the fundamentals: why does this happen?
Logistic Regression:
precision recall f1-score support
0 0.80 0.80 0.80 91
1 0.78 0.78 0.78 82
avg / total 0.79 0.79 0.79 173
[[73 18]
[18 64]] (confusion matrix)
SVM with rbf-kernel:
precision recall f1-score support
0 0.80 0.99 0.88 91 (kind of overfitting for class 0)
1 0.98 0.72 0.83 82
avg / total 0.89 0.86 0.86 173
[[90 1]
[23 59]]
Random Forest Classifier:
precision recall f1-score support
0 0.82 0.86 0.84 87
1 0.85 0.81 0.83 86
avg / total 0.84 0.84 0.84 173
[[75 12]
[16 70]]
AI: An SVM finds a boundary to divide the data between the two classes, and the rule of thumb is to use SVMs for binary classification problems. Typically, when you have linear dependencies, SVMs perform better than random forests, since there is a distance to measure. But random forests perform better with categorical values / non-linear dependencies. Random forests do benefit from more data, but the nature of the data matters even in small datasets.
Logistic regression follows similar logic: if the underlying relationship is linear in nature, then it will probably perform better than a random forest. |
H: Can Reinforcement learning be applied for time series forecasting?
Can Reinforcement learning be applied for time series forecasting?
AI: Yes, but in general it is not a good tool for the task, unless there is significant feedback between predictions and ongoing behaviour of the system.
To construct a reinforcement learning (RL) problem where it is worth using an RL prediction or control algorithm, then you need to identify some components:
An environment that can be in one of many states, which can be measured/observed in a sequence.
An agent that can observe current state and take actions in the same sequence.
The evolution of state in the sequence should depend on some combination of the current state and the action taken, and may also be stochastic.
There should be a reward signal that the RL agent can observe or measure. The value of reward should depend on the same factors as the evolution of the state, but can depend on them in a different way.
The general case of time series forecasting can be made to fit with this by treating the prediction as the action, having the state evolution depend on only the current state (plus randomness) and the reward based on state and action. This will allow RL to be applied, but causality only flows one way - from the environment into your predictive model. As such, the best you can do for rewards for instance is to use some metric about the correctness of the predictions. Consequences for good or bad predictions do not affect the original environment. Essentially you will end up wrapping some predictive model for the sequence (such as a neural network) in a RL layer which could easily be replaced by basic data set handling for a supervised learning problem.
One way you could meaningfully extend series forecasting problems into RL problems is to increase the scope of the environment to include the decisions made based on the predictions, and the state of the systems that are affected by those decisions. For instance, if you are predicting stock prices, then include your portfolio and funds in the state. Likewise the actions stop being the predictions, becoming buy and sell commands. This will not improve the price prediction component (and you are likely better off treating that as a separate problem, using more appropriate tools - e.g. LSTM), but it will frame the issue overall as a RL problem. |
H: Image classification if rotated version same
I asked this question on stackoverflow but was advised to come here.
I have some images to classify. I see that a Convolutional neural network may be best for this, e.g. here. However, for my images, rotated versions (to any degree) are effectively the same image. What should I do so that this aspect is also taken into account?
One method I thought is to create many rotated versions (say, every 30 degree rotation) of each training image and add them to train set with same label. Is there any other way? Also, will it help if I get each image in a circular area?
Two example images are shown below:
AI: Just encode your images first and do the classification on the encoded data (wavelet transformation). Similar image regions will result in very similar encoded matrices. Further denoising the encoded matrix, by zeroing entries whose absolute value is less than or equal to some small constant, should reduce overfitting and improve your accuracy.
Here an example for a simple Haar-Wavelet transform encoder:
First we take the R values of the RGB-image and divide it into 8x8 matrices such as this one:
$\begin{pmatrix}97 & 95 & 99 & 94 & 97 & 88 & 90 & 95\\88 & 86 & 89 & 90 & 95 & 96 & 99 & 98\\95 & 99 & 94 & 91 & 88 & 85 & 90 & 94\\78 & 75 & 84 & 86 & 80 & 74 & 77 & 99\\71 & 75 & 78 & 70 & 71 & 84 & 91 & 90\\67 & 74 & 69 & 70 & 65 & 68 & 74 & 70\\59 & 55 & 64 & 68 & 70 & 84 & 86 & 90\\79 & 82 & 88 & 90 & 85 & 82 & 88 & 90\end{pmatrix}$
Now we begin with the first row: $\begin{pmatrix}{97,95,99,94,97,88,90,95}\end{pmatrix}$
Grouping two adjacent values we obtain the 4 groups:
{97,95},{99,94},{97,88},{90,95}
Next step is to calculate the average of each group:
(96, 96.5, 92.5, 92.5)
And the difference of the second value of each group and the average:
95-96 = -1, 94-96.5 = -2.5, 88-92.5 = -4.5, and 95-92.5 = 2.5
Putting them together: (-1, -2.5, -4.5, 2.5)
Attaching them to the list of our average values we get:
(96, 96.5, 92.5, 92.5, -1, -2.5, -4.5, 2.5)
Now we repeat that process, but only using the first 4 values:
Grouping: {96, 96.5}, {92.5, 92.5}
Averages and differences: (96.25, 92.5, 0.25, 0)
One last time the same process but now only using the first 2 values:
Grouping: {96.25, 92.5}
Average and difference: (94.375, -1.875)
Putting it all together we obtain the new values of the first row, where we can see, that one value was already zeroed out:
(94.375, -1.875, 0.25, 0, -1, -2.5, -4.5, 2.5)
The next steps would be to apply the above process to the other 7 rows, then do the same for every column of the resulting 8x8 matrix, then for every other 8x8 block of our RGB image's R matrix, and finally the same for the G and B matrices. The resulting RGB matrices will be very sparse because of all the zeroed values. We get rid of the low-entropy detail; what is left are the core features of the objects in the image. Using some simple similarity metric will then result in a high-accuracy classifier.
The whole process can be very easily implemented using a simple recursive function, or by using linear algebra and matrix multiplications.
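As an illustration (not from the original answer), a short iterative version of the 1-D pass described above; it reproduces the worked example:
import numpy as np

def haar_1d(row):
    """Repeatedly replace the first half with pairwise averages and
    store the pairwise differences (second value minus average) behind them."""
    out = np.asarray(row, dtype=float).copy()
    n = len(out)
    while n > 1:
        pairs = out[:n].reshape(-1, 2)
        avgs = pairs.mean(axis=1)
        diffs = pairs[:, 1] - avgs
        out[:n // 2] = avgs
        out[n // 2:n] = diffs
        n //= 2
    return out

print(haar_1d([97, 95, 99, 94, 97, 88, 90, 95]))
# -> 94.375, -1.875, 0.25, 0, -1, -2.5, -4.5, 2.5 (the values derived above)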
This is a lossless transformation, meaning the resulting matrix still has all the information needed to decode it and recover the original image.
However, it is possible to apply further denoising routines to get more zeroes, but doing so loses information. |
H: MDP - RL, Multiple rewards for the same state possible?
This question is from An introduction to RL Pages 48 and 49. This question may also be related to below question, although I am not sure:
Cannot see what the "notation abuse" is, mentioned by author of book
On page 48, it is mentioned that p:S * R * S * A -> [0,1] is a deterministic function:
The dynamics function $p : \mathcal{S} \times \mathcal{R} \times \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$ is an ordinary deterministic function of four arguments.
However, on page 49, in equation 3.4, there is summation over r:
$$\sum_{s' \in \mathcal{S}}\sum_{r \in \mathcal{R}} p(s',r|s,a) = 1 ,\text{for all } s \in \mathcal{S}, a \in \mathcal{A}(s)$$
My question is, does this mean, it is possible that performing an action $a$ that takes us to state $s'$, could result in multiple rewards?
AI: it is possible that performing an action $a$ that takes us to state $s′$, could result in multiple rewards?
Yes, in the general case any $(s,a)$ pair can result in a range of results for both $s'$ and $r$. Also, $s'$ and $r$ can vary independently, provided each one's distribution depends only on $(s,a)$. In practice $r$ often depends strongly on one or more of $s$, $a$ or $s'$ (if it depends on the latter, then it still depends in absolute terms on just $s$ and $a$, because $s'$ does; it is just that the values of $s'$ and $r$ are allowed to correlate).
This does not affect the statement about $p(s',r|s,a)$ being deterministic. It is the probability of specific $s',r$ results occurring that should be deterministic and depend on $(s,a)$ in a Markov Decision Process. |
H: Data Augmentation recommended pipeline
I want to perform image classification using Keras and a dataset made of 50 classes. At the moment, I have only 7 images per class and I need to perform data augmentation in order to train the model and obtain acceptable accuracy values.
I am using the ImageDataGenerator class from keras which is recommended for image augmentation on the fly (during training). Since the classification is performing badly, I was wondering if it would be necessary to perform offline augmentation, i.e, enlarge the dataset before the training, because I honestly think that 7 is far from being a reasonable number of images per class.
Is it a common practice to perform both types of augmentation (before and during the training)? I am planning to use some 3rd party software or tools like imgaug to enlarge the dataset first and save the augmented images to disk and only then perform real-time augmentation with ImageDataGenerator class.
In conclusion, the flow would be similar to this:
Image pre-processing and offline data augmentation => enlarge the original dataset
Training with real-time augmentation => Load the dataset and use ImageDataGenerator
What do you think?
Thank you.
AI: Out of the two pipelines you mentioned, I'd recommend the second (i.e. real-time augmentation). This is better than the first, because by performing random augmentations the network sees different images at each epoch.
I'd recommend imgaug, which is a Python library for performing data augmentation. I've found it very helpful, as it works very well with Keras' ImageDataGenerator. The way you can do this is:
from imgaug import augmenters as iaa
seq = iaa.Sequential([...]) # list of desired augmentors
ig = ImageDataGenerator(preprocessing_function=seq.augment_image) # pass this as the preprocessing function
gen = ig.flow_from_directory(data_dir) # nothing else changes with the generator
One final note I'd like to make is that $7 \cdot 50 = 350$ images are very few for deep learning, even with augmentation. Make sure you use a pre-trained network or else you will have a serious overfitting problem. |
H: Python SkLearn Gradient Boost Classifier Sample_Weight Clarification
Using Python SkLearn Gradient Boost Classifier - is it true that sample_weight is modifying how the algorithm penalizes errors made on that particular class, rather than feeding more data into the trees by oversampling from that class. If you have any links to code that confirms this that would be ideal. Thanks
AI: The sample_weight parameter is used both to weigh the splits as well as to weigh the errors.
Errors
For example, using the LeastSquaresError, the implementation uses the sample weights thusly:
def __call__(self, y, pred, sample_weight=None):
if sample_weight is None:
return np.mean((y - pred.ravel()) ** 2.0)
else:
return (1.0 / sample_weight.sum() *
np.sum(sample_weight * ((y - pred.ravel()) ** 2.0)))
So if we provide large weights to certain samples, their individual error will have a bigger influence on the overall error for that sample set. The squared prediction error for each sample is scaled by the corresponding weight, and the sum is normalised by the sum of the sample weights.
If no weights are provided, (leaving the default sample_weight=None), then they are initialised to a vector of ones, example here.
Splits
In the case that you set the parameter min_weight_fraction_leaf upon class instantiation, requiring each leaf to obtain a minimum fraction of the total sum of weight to indeed become a leaf, then sample_weight is also used to scale that value. You can see it here:
# Set min_weight_leaf from min_weight_fraction_leaf
if self.min_weight_fraction_leaf != 0. and sample_weight is not None:
min_weight_leaf = (self.min_weight_fraction_leaf *
np.sum(sample_weight)) |
H: In natural language processing, why each feature requires an extra dimension?
I am reading Machine Learning by Example. I am trying to understand natural language processing. The book used Scikit-learn's fetch_20newsgroups data as an example.
The book mentioned that the text data in the 20 newsgroups dataset that we downloaded from fetch_20newsgroups data is highly dimensional. I do not understand this statement.
It is my understanding that dimension is used to describe the axes that an array has.
For example,
import numpy as np
np.random.seed(0)
x1 = np.random.randint(10, size=6)
print("x1",x1) # 1 dimensions
np.random.seed(0)
x2 = np.random.randint(10, size=(3,4))
print("x2",x2) # 2 dimensions
np.random.seed(0)
x3 = np.random.randint(10, size=(3,4,5))
print("x3",x3) #3 dimensions
How does the number of axes relate to features in NLP? Why does one feature equal one dimension? Please explain. Thanks.
Below is the code from the book that used to download the data for your reference.
from sklearn.datasets import fetch_20newsgroups
groups = fetch_20newsgroups()
AI: You can think of each feature running along its own axis on a graph. Just because we store all features, e.g. in a single DataFrame with one feature per column, it doesn't mean the data's structure is just 2d (rows and columns). This is the case not only in NLP, but in most contexts involving statistics and modelling.
We can see this with your example data. There are text blocks, which you should match to a category (as far as I can tell).
The initial dataset contains other meta-data, such as a description of the dataset, the names of the target categories and also the location of each sample's file. We don't really care about these for the pure modelling part. So there are only text blocks, called data, and the target categories, called target. Your input is then 1d - the text blocks.
I will show how to put that into a dataframe, being very verbose about dimensions and features:
from sklearn.datasets import fetch_20newsgroups
groups = fetch_20newsgroups()
import pandas as pd # needed to use a dataframe
# Get the desired parts from "groups"
desired = ['data', 'target'] # we don't care about the 'filenames' and so on
# make a new dictionary with only desired key-value pairs
only_data = {k: v for k, v in groups.items() if k in desired}
Now we put this into a dataframe
df = pd.DataFrame.from_dict(only_data)
# Check the shape of the dataframe
df.shape
(11314, 2)
So there are 11314 samples of 1 feature, to 1 target variable. This is therefore 1-dimensional input data (we don't count the target variable).
When we have e.g. 50 features, explaining some target variable, it may be referred to as a 50-dimensional input space. People then may use dimensionality reduction techniques, such as Principal Components Analysis, which will attempt to squeeze the 50 dimensions into a lesser number (you can choose how many to use).
In your data, you will likely pre-process the text samples to create more features. These will just be new columns in the dataframe, whose shape could become e.g. (11314, 40) if you add another 38 features, by doing things like counting words or constructing some word-embeddings.
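As a small illustration of how features become dimensions (not from the original answer), a bag-of-words encoding turns each chosen vocabulary word into one input dimension; the cap of 40 words is arbitrary:
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=40)   # keep the 40 most frequent words
X = vectorizer.fit_transform(df['data'])
print(X.shape)                                  # (11314, 40): a 40-dimensional input space
Each column counts one word, and each column is one axis of the input space. |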
H: Is the neural network in DQN used to learn like a supervised model?
Is the neural network in DQN used to learn like a supervised model?
AI: Is the neural network in DQN used to learn like a supervised model?
Yes.
In DQN, the neural network is used as a function approximator to learn the action value function $Q(s,a)$ - the neural network approximation to it is sometimes noted $\hat{q}(s,a,\theta)$ to show that this is an ongoing, approximate estimate, and that the values that it outputs depend on the NN's learnable parameters (weights and biases).
Why is this "supervised learning"? That is because in order to improve an estimate for Q, the network is presented with labelled data. In DQN this is specifically a mini-batch drawn from experience replay memory, but in all cases in RL, the format of each row of data is the same: A representation of the current state (or state+action sometimes), and a "ground truth" target value to learn to associate with that input. This learning by correct example is exactly supervised learning.
Now, there are some complications:
The "ground truth" is built by the RL algorithm on-demand, and equals the underlying real value $q_{\pi}(s,a)$ only in expectation over time. However, that is fine for any statistics-based learning algorithm that will learn a mean value.
The expected "ground truth" changes over time as RL improves the policy, so higher Q values will be expected, and the old lower ones need to be forgotten. This is why RL algorithms build the "ground truth" on demand - for single-step Q-learning, as used in basic DQN that is the $r + \gamma \text{max}_{a'}\hat{q}(s', a')$ formula that you may see in explanations, and is called the TD Target. That means that RL algorithms prefer to work with online algorithms that are biased towards most-recent data and forget older data. Neural networks and linear regression trained by gradient descent both fit this requirement.
An aside: In RL, to avoid using the term model twice, the term "function approximator" is used, where supervised learning might call the same thing a "model". That is because RL already uses the term model to refer to models of the environment (e.g. the rules of a game, or all the known transition probabilities and rewards). |
H: What should be 'y_train' in Keras LSTM?
I refer to the example given at the Keras website here:
from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
data_dim = 16
timesteps = 8
num_classes = 10
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))
# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))
model.fit(x_train, y_train, batch_size=64, epochs=5, validation_data=(x_val, y_val))
For a real-world example, what should be y_train and y_val? Should they be the same as x_train and x_val respectively, since they come from the same sequence?
Also, how should I understand data_dim and num_classes?
AI: This sort of answers my question. |
H: RL Advantage function why A = Q-V instead of A=V-Q?
In RL Course by David Silver - Lecture 7: Policy Gradient Methods, David explains what an Advantage function is, and how it's the difference between Q(s,a) and the V(s)
Preliminary, from this post:
First recall that a policy $\pi$ is a mapping from each state, $s$, action $a$, to the probability $\pi(a \mid s)$ of taking action $a$ when in state $s$.
The state value function, $V^\pi(s)$, is the expected return when starting in state $s$ and following $\pi$ thereafter.
Similarly, the state-action value function, $Q^\pi(s, a)$, is the expected return of when starting in state $s$, taking action $a$, and following policy $\pi$ thereafter.
In my understanding, $V(s)$ is always larger than $Q(s, a)$, because the function $V$ includes the reward for the current state $s$, unlike $Q$. So, why is the advantage function defined as $A = Q - V$ rather than $A = V - Q$ (at minute 1:12:29 in the video)?
Actually, V might not be larger than Q, because $s$ might actually contain a negative reward. In such a case how can we be certain what to subtract from what, such that our Advantage is always positive?
$Q(s, a)$ returns a value of entire total reward that's expected ultimately, after we pick an action $a$. $V(s)$ is the same, just with an extra reward from current state $s$ as well.
I don't see why a value of $Q - V$ would be useful. On the other hand, $V - Q$ would be useful because it would tell us the reward we would get on $s_{t+1}$ if we took the action $a$.
AI: In my understanding, $V(s)$ is always larger than $Q(s,a)$, because the function $V$ includes the reward for the current state $s$, unlike $Q$
This is incorrect. There is not really such a thing as "the reward for current state" in the general case of a MDP. If you mean the $V(S_t)$ should include the value of $R_t$, then this is still wrong, given David Silver's use of the conventions for time step indexing. It is possible to associate immediate reward with either the current time step, leading to sequence $S_0, A_0, R_0, S_1, A_1, R_1$ etc or you can use the convention of immediate reward being on next time step $S_0, A_0, R_1, S_1, A_1, R_2$ etc. David Silver (and Sutton & Barto's book) uses the latter convention.
Under that convention:
$$V(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^{k}R_{t+k+1}|S_t=s]$$
$$Q(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^{k}R_{t+k+1}|S_t=s, A_t=a]$$
You can see that the first term in the expansion of the sum for both Q(s,a) and V(s) is $R_{t+1}$. If you changed the convention, then both would include the equivalent value, but would be labelled $R_{t}$ in any formula.
Q and V do not differ in which time steps they sum reward over. They may differ in the value of $R_{t+1}$ because $V(s)$ assumes following the policy $\pi$ when selecting $A_t$ whilst $Q(s,a)$ uses the value $a$ supplied as a parameter for $A_t$, which can be different.
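For reference, the advantage asked about in the title is defined as
$$A(s,a) = Q(s,a) - V(s)$$
i.e. how much better (or worse) taking action $a$ in state $s$ is, compared to acting according to the current policy.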
how can we be certain what to subtract from what, such that our Advantage is always positive?
Advantage can be negative, that is fine. It means that the action $a$ in $A(s,a)$ is a worse choice than the current policy's.
I don't see why a value of $(Q−V)$ would be useful. On the other hand, $(V−Q)$ would be useful
Both would be equally useful, it is mainly convention that we work with finding maximum Advantage representing the benefit of choosing a specific action instead of following the current policy, as opposed to finding the minimum "Disadvantage". However, the concept of Advantage in this context is arguably the more natural view.
because it would tell us the reward we would get on $s_{t+1}$ if we took the action $a$
As explained above, this is wrong. The value in $A(s,a)$ expresses the potential benefit we would get for changing the policy $\pi(s)$. That might include changes to $R_{t+1}$, but is not limited to a single time step.
Some RL approaches do create a predictive function for expected immediate reward $\hat{r}(s,a)$ - typically this is a secondary component, used to help refine parameters for other function approximators. |
H: Complex HTMLs Data Extraction with Python
Does anybody know a way of extracting data with python from more convoluted website structures? For example, I'm trying to extract data from the players in the ATP profiles, but it's just so complicated I quit. I think they're pulling data from some database in the script and I suspect that even if I tried I wouldn't be able to get it.
I then started using a specialized software called ParseHub, which pulls the data somewhat visually. It's a pretty good software, but they make it slow on purpose just so you buy it, and it is particularly not cheap.
AI: Been there, done that, it is still hard. For complex HTML sources, using shallow feature analysis proved to be best -- so a package like Dragnet is a good place to start.
Our final result was a process chain (luigi), where we could mix, match and reorder the following text extraction tools by HTML source:
Shallow feature extraction (Dragnet)
Tag Stripping (Python stdlib)
Preg Replace (re Module)
Html2Text
BeautifulSoup
Pass Through
The key is what you are planning to do with it in the next step. For some things, you can just carry the tags in the text (boolean find), others you cannot (classification between sites). Configure, test, repeat.
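As an illustration of the tag-stripping step with BeautifulSoup from the list above (a minimal sketch; the URL is a placeholder):
import requests
from bs4 import BeautifulSoup
html = requests.get("https://example.com/some-article").text   # placeholder URL
soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style"]):   # drop non-content blocks
    tag.decompose()
text = " ".join(soup.get_text(separator=" ").split())
print(text[:500])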
But a one size fits all, that is really hard -- I am also not convinced that some of the services can do much beyond Dragnet. |
H: Feature Extraction Using A CNN
I want to use a CNN to extract features from a dataset. My questions are:
What is the stopping criteria when training a CNN to extract features?
What is the difference between extracting features after training for 50 epochs or 100 epochs?
How should I choose the best layer for feature extraction?
Does it depend on the accuracy of the model?
AI: my question is what the stopping criteria of train CNN to extract features?
Generally speaking, the best features are the ones that, when classified, should have the best performance. So train your model, while monitoring a metric of your choice on a validation set. Stop the training phase when that metric stops improving.
What is the difference between extracting features after training for 50 epochs or 100 epochs?
Assuming no overfitting has occurred between epochs 50 and 100, the features of the latter will be more sophisticated and better equipped for classification on the given dataset.
If the model has started to overfit between these two epochs, then the latter model will have begun to memorize the training set. The features of this model will have begun to extract features tailored to the training set images.
How to choose the best layer for extraction?
You should get the highest-level features available from the CNN. The most common choice is the layer just before the first fully connected layer. However, if that layer is a pooling one, you might want to take the one before it (which contains more information).
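As a sketch of how to pull features from a chosen layer in Keras (the layer name 'feature_layer' is hypothetical; substitute the layer you picked from your own trained model):
from keras.models import Model
feature_extractor = Model(inputs=model.input,
                          outputs=model.get_layer('feature_layer').output)
features = feature_extractor.predict(x_data)   # one feature vector per input sample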
Does it depend on the accuracy of the model?
This comes back to the first question. If you think that accuracy is a good enough metric to represent your model's performance, then yes. There are many cases, however, where this is not the case (e.g. imbalanced dataset). Whichever metric you choose, it should be measured on the validation set. |
H: A Neural Network That Learns Bitwise XOR
I am trying to build a deep neural network that learns the coordinate-coordinate bitwise XOR of two matrices, but it performs poorly.
For example, in the 2 bits case, its accuracy stays around 0.5. Here is the code snippet:
from keras.layers import Dense, Activation
from keras.layers import Input
import numpy as np
from keras.layers.merge import concatenate
from keras.models import Model
size=1
data1 = np.random.choice([0, 1], size=(50000,size,size))
data2 = np.random.choice([0, 1], size=(50000,size,size))
labels = np.bitwise_xor(data1, data2)
a = Input(shape=(size,size))
b = Input(shape=(size,size))
a1 = Dense(size, activation='sigmoid')(a)
b1 = Dense(size, activation='sigmoid')(b)
merged = concatenate([a1, b1])
hidden = Dense(1, activation='sigmoid')(merged)
hidden = Dense(3, activation='sigmoid')(hidden)
hidden = Dense(5, activation='relu')(hidden)
hidden = Dense(4, activation='sigmoid')(hidden)
hidden = Dense(3, activation='sigmoid')(hidden)
outputs = Dense(1, activation='relu')(hidden)
model = Model(inputs=[a, b], outputs=outputs)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit([data1, data2], np.array(labels), epochs=15, batch_size=32)
What's going on here?
Epoch 1/15
50000/50000 [==============================] - 7s 130us/step - loss: 0.7118 - acc: 0.5044
Epoch 2/15
50000/50000 [==============================] - 4s 78us/step - loss: 0.6933 - acc: 0.5023
Epoch 3/15
50000/50000 [==============================] - 4s 74us/step - loss: 0.6934 - acc: 0.5030
Epoch 4/15
50000/50000 [==============================] - 4s 86us/step - loss: 0.6935 - acc: 0.5002
Epoch 5/15
50000/50000 [==============================] - 4s 79us/step - loss: 0.6934 - acc: 0.5015
Epoch 6/15
50000/50000 [==============================] - 5s 96us/step - loss: 0.6935 - acc: 0.5030
Epoch 7/15
50000/50000 [==============================] - 5s 105us/step - loss: 0.6934 - acc: 0.5026
AI: I think there might be a few things going on.
You might have a reason but I don't know why you have shaped your input data into three dimensions: size=(50000,size,size).
Also, you might have a reason but I don't know why you ran each feature separately through a different layer (each with a single hidden unit), and then merged the outputs before running the merged output through another series of layers:
a = Input(shape=(size,size))
b = Input(shape=(size,size))
a1 = Dense(size, activation='sigmoid')(a)
b1 = Dense(size, activation='sigmoid')(b)
merged = concatenate([a1, b1])
Also, I suspect that running the features through a single hidden unit reduces the information sent through the rest of the network, so the network cannot learn the XOR function.
Here is some code that works for me:
from keras import models
from keras.layers import Dense
import numpy as np
Simulate data:
X_1 = np.random.choice([0, 1], size = (50000, 1))
X_2 = np.random.choice([0, 1], size = (50000, 1))
X = np.concatenate((X_1, X_2), axis = 1)
Y = np.bitwise_xor(X[:, 0], X[:, 1])
FNN Model:
# Define model.
network_fnn = models.Sequential()
network_fnn.add(Dense(4, activation = 'relu', input_shape = (X.shape[1],)))
network_fnn.add(Dense(4, activation = 'relu'))
network_fnn.add(Dense(1, activation = 'sigmoid'))
# Compile model.
network_fnn.compile(optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = ['acc'])
# Fit model.
history_fnn = network_fnn.fit(X, Y, epochs = 5, batch_size = 32, verbose = True) |
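As a quick sanity check (a small follow-up, not part of the original snippet), you can predict the four possible input combinations:
test_inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(np.round(network_fnn.predict(test_inputs)))   # expected: [[0.], [1.], [1.], [0.]]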
H: Why does the naive bayes algorithm make the naive assumption that features are independent to each other?
Naive Bayes is called naive because it makes the naive assumption that features have zero correlation with each other. They are independent of each other. Why does naive Bayes want to make such an assumption?
AI: By doing so, the joint distribution can be computed easily by just multiplying the probability of each feature. In the real world the features may not be independent, in which case you would have to model the full joint distribution. The assumption is called naive because of this simplification. |
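Concretely, for features $x_1, \dots, x_n$ and class $y$, the independence assumption lets Naive Bayes factor the class-conditional joint distribution as
$$P(x_1, \dots, x_n \mid y) = \prod_{i=1}^{n} P(x_i \mid y)$$
so each one-dimensional distribution $P(x_i \mid y)$ can be estimated separately from the data.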
H: Calculate whether datapoints are part of a larger distribution
I have some normally distributed variables (~800) and some variables that are in some way special (~30). I need to find out whether the special ones can be considered normal members of the distribution.
I drew a histogram of the full distribution with the special members as scatter. In addition to that, I plotted the fitted normal distribution of all variables and that of the special members. The plot shows these curves for the strongest difference between the calculated distributions. Green is the normal distribution of all members, orange that of the special ones.
How do I calculate the likelihood of whether the special members are part of the larger ensemble, i.e. they don't stand out?
AI: One way addressing your question is to ask if your two datasets(regular points and irregular points) come from the same distribution?
The Kolmogorov–Smirnov test is a common tool for putting a number on the 'likelihood' of two datasets being from the same distribution.
Python's SciPy has an implementation of this test.
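A minimal sketch of how that could look (the two arrays below are hypothetical stand-ins for your ~800 regular values and ~30 special values):
from scipy import stats
import numpy as np
regular = np.random.normal(loc=0.0, scale=1.0, size=800)   # stand-in for the normal members
special = np.random.normal(loc=0.2, scale=1.0, size=30)    # stand-in for the special members
statistic, p_value = stats.ks_2samp(regular, special)
print(statistic, p_value)   # a small p-value suggests the two samples differ in distribution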
Be aware that this test is more sensitive to differences around the means of each distribution than to differences around the edges. From the looks of the graph in the question, it's not an issue in your case, so the test should work.
This is a very concise explanation of the test |
H: Imputation missing values other than using Mean, Median in python
I heard that Mean, Median isn't the best way to impute the missing values, why would that be?
In my scenario, I have data like this
Brand|Value
A|2,
A|NaN,
A|4,
B|8,
B|NaN,
B|10,
C|9,
C|11
if using mean imputation the data would be
Brand|Value
A|2,
A|7.3,
A|4,
B|8,
B|7.3,
B|10,
C|9,
C|11
which makes sense for Brand B to be 7.3, but doesn't make sense for Brand A to be 7.3, because the values of Brand A tend to lie somewhere around 2 and 4
is there any other way to fill the missing values based on the Brand?
This is an example with only 2 features, where 1 feature may carry a pattern for the missing values. What if there are, say, 20 features, and more than one of them carries a pattern that could better define the missing values?
How to apply this in Python?
AI: So if you want to impute some missing values, based on the group that they belong to (in your case A, B, ...), you can use the groupby method of a Pandas DataFrame. So make sure your data is in one of those first.
import pandas as pd
df = pd.DataFrame(your_data) # read documentation to achieve this
Then, it is just a case of chaining a few steps together:
df["Value"] = df.groupby("Brand")["Value"].transform(lambda x: x.fillna(x.mean()))
df.groupby simply groups the dataframe into sub-dataframes (groups), such that each group only contains one Brand
transform() will apply a function to a dataframe - so to each of the individual groups created in groupby
the nameless function (a lambda function) calls the DataFrame's fillna() method on each dataframe, using just the mean() to fill the gaps
You can simply substitute the mean() method for anything you like. You could also create a more complicated function, if you need it, and replace that lambda function. It would simply need to take a dataframe as input and return a dataframe with a comparable index. |
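For example, a variant that imputes the group median instead, and falls back to the overall median if an entire group is missing (just a sketch of the idea):
overall_median = df["Value"].median()
df["Value"] = (df.groupby("Brand")["Value"]
                 .transform(lambda x: x.fillna(x.median() if x.notna().any() else overall_median)))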
H: how to interpret predictions from model?
I'm working on a multi-classification problem - Recognizing flowers.
I trained the model and achieved an accuracy of 0.99.
To predict, I did:
a = model.predict(train[:6])
output:
array([[5.12799371e-18, 2.08305119e-05, 1.14476855e-07, 1.28556788e-02,
1.46144101e-08, 1.85072349e-05],
[7.72907813e-32, 7.86612819e-09, 8.08554124e-13, 1.87227300e-08,
4.61950422e-10, 6.42609745e-02],
[0.00000000e+00, 1.34313246e-02, 9.67072342e-13, 2.82699081e-12,
1.10958222e-10, 4.68058548e-14],
[7.75535319e-27, 6.51194032e-09, 2.49026186e-07, 1.88803018e-08,
3.77964647e-03, 7.01414028e-05],
[7.24011743e-22, 5.85804628e-07, 1.61177505e-09, 2.27746829e-01,
5.44432410e-09, 3.94427252e-06],
[1.81492225e-15, 3.36600904e-04, 4.39262622e-05, 8.63518100e-04,
9.29966700e-06, 9.75337625e-02]], dtype=float32)
How do I interpret this? How do I get the label it predicted? I have five labels 0-4, which are assigned to 5 types of flowers.
My notebook is here.
What am I doing wrong here?
AI: Alright so I rewrote some parts of your model such that it makes more sense for a classification problem. The first and most obvious reason your network was not working is due to the number of output nodes you selected. For a classification task the number of output nodes should be the same as the number of classes in your data. In this case we have 5 kinds of flowers, thus 5 labels which I reassigned to $y \in \{0, 1, 2, 3, 4\}$, thus we will have 5 output nodes.
So let's go through the code. First we bring the data into the notebook using the code you wrote.
from os import listdir
import cv2
daisy_path = "flowers/daisy/"
dandelion_path = "flowers/dandelion/"
rose_path = "flowers/rose/"
sunflower_path = "flowers/sunflower/"
tulip_path = "flowers/tulip/"
def iter_images(images,directory,size,label):
try:
for i in range(len(images)):
img = cv2.imread(directory + images[i])
img = cv2.resize(img,size)
img_data.append(img)
labels.append(label)
except:
pass
img_data = []
labels = []
size = 64,64
iter_images(listdir(daisy_path),daisy_path,size,0)
iter_images(listdir(dandelion_path),dandelion_path,size,1)
iter_images(listdir(rose_path),rose_path,size,2)
iter_images(listdir(sunflower_path),sunflower_path,size,3)
iter_images(listdir(tulip_path),tulip_path,size,4)
We can visualize the data to get a better idea of the distribution of the classes (note: this uses y_train and y_test from the train/test split created further below).
import matplotlib.pyplot as plt
%matplotlib inline
n_classes = 5
training_counts = [None] * n_classes
testing_counts = [None] * n_classes
for i in range(n_classes):
training_counts[i] = len(y_train[y_train == i])/len(y_train)
testing_counts[i] = len(y_test[y_test == i])/len(y_test)
# the histogram of the data
train_bar = plt.bar(np.arange(n_classes)-0.2, training_counts, align='center', color = 'r', alpha=0.75, width = 0.41, label='Training')
test_bar = plt.bar(np.arange(n_classes)+0.2, testing_counts, align='center', color = 'b', alpha=0.75, width = 0.41, label = 'Testing')
plt.xlabel('Labels')
plt.xticks((0,1,2,3,4))
plt.ylabel('Count (%)')
plt.title('Label distribution in the training and test set')
plt.legend(bbox_to_anchor=(1.05, 1), handles=[train_bar, test_bar], loc=2)
plt.grid(True)
plt.show()
We will now transform the data and the labels to matrices.
import numpy as np
data = np.array(img_data)
data.shape
data = data.astype('float32') / 255.0
labels = np.asarray(labels)
Then we will split the data. Notice that you do not need to shuffle the data yourself since sklearn can do it for you.
from sklearn.model_selection import train_test_split
# Split the data
x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.33, shuffle= True)
Let's construct our model. I changed the last layer to use the softmax activation function. This will allow the outputs of the network to sum up to a total probability of 1. This is the usual activation function to use for classification tasks.
from keras.models import Sequential
from keras.layers import Dense,Flatten,Convolution2D,MaxPool2D
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras.models import model_from_json
from keras import backend as K
model = Sequential()
model.add(Convolution2D(32, (3,3),input_shape=(64, 64, 3),activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dense(5,activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
Then we can train our network. This will result in about 60% accuracy on the test set. This is pretty good considering the baseline for this task is 20%.
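One detail the code assumes but does not show: the integer labels need to be one-hot encoded so they match the 5 softmax outputs. Something along these lines (reusing the keras import and n_classes from above):
y_train_binary = keras.utils.to_categorical(y_train, n_classes)
y_test_binary = keras.utils.to_categorical(y_test, n_classes)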
batch_size = 128
epochs = 10
model.fit(x_train, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test_binary))
After the model is trained you can predict instances as follows. Don't forget that the network expects input of the same shape it was trained on, so we must keep the batch dimension of the matrix; that's why I use the [0:1] slice.
print('Predict the classes: ')
prediction = model.predict_classes(x_test[0:1])
print('Predicted class: ', prediction)
print('Real class: ', y_test[0:1])
This gives
Predict the classes:
1/1 [==============================] - 0s 6ms/step
Predicted class:  [4]
Real class:  [4]
Some suggestions
The model you are currently using is the one that is most common for MNIST. However, that data only has a single channel thus we don't need as many layers. You can increase the performance by increasing the complexity of your model. Or by reducing the complexity of your data, for example you can train using the grayscale equivalent of the images, thus reducing the problem to a single channel. |
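For example, a hypothetical grayscale variant of the loading step above would read each image with a single channel (the model's input_shape would then become (64, 64, 1)):
img = cv2.imread(directory + images[i], cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, size).reshape(size[0], size[1], 1)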
H: In batch normalization, shouldn't using DropConnect harm test accuracy?
In my understanding of batch normalization, mean and variance are calculated over the entire batch and then added to the population average. This average is then applied to the test set to estimate the mean and variance of the overall test set.
However, DropConnect removes connections during training as a form of network regularization. The model calculates batch statistics through a network that is missing usually half its connections. At test time, all connections are used. Shouldn't this affect the mean and variance of the population and/or the test set, throwing the estimated mean and variance off, or will the network stabilize itself over time?
AI: TL;DR
This is an interesting idea and probably best to be tested with your specific problem; however it is generally understood that you will get better results by not using both Dropout (including DropConnect) and Batch-Norm together, given their overall effects during training. Recent evidence/tests. Having said that, I think it would balance out in the end anyway, as you propose.
More considerations
Looking at the explanation of the base implementation, you could be correct in saying that the batch normalisation computations of mean and variance could be influenced - but surely it depends on the order in which you compute your batch statistics?
There is an ongoing debate (see e.g. this question) as to which order to apply layers such as batch-norm, dropout and activations themselves to the weights. There is the argument (pointed out above) that using both together isn't a good idea. There are some more great points made here.
If you have a look at the three implementations summarised here, you can see that there isn't a big difference between Dropout and DropConnect... it is just about scaling the values correctly, so that the (expected) sum remains consistent.
To each their own!
It almost gets philosophical! I believe it depends on your own personal perspective on why we use each of these layers. If you see Dropout as a means of restricting the network and preventing the co-adaptation of neurons, we are trying to block flow of information through certain paths of the network, so it might make sense to also remove that information from the batch-norm computations.
If you see batch-norm as being a surgically precise method to man-handle the input distribution to a layer, you might want to compute batch-norm stats before the DropConnect sets any weights to zero and use those on the entire outbound batch. |
H: How to use scikit-learn normalize data to [-1, 1]?
I am using scikit-learn MinMaxScaler() to normalize to $[0, 1]$, but I want to normalize to $[-1, 1].$ What function do I have to use to normalize this way?
AI: Scaling between 0 and 1 is simply written for an array of values arr = $[x_{1}, x_{2}, ...., x_{n}]$ as scaled_array = (arr-arr.min())/(arr.max()-arr.min())
Scaling between two arbitrary values follows the same idea as image normalisation, where values are (normally) scaled between 0 and 255.
You can write scaling as scaled_values = ((val - min)*(new_max - new_min)/(max - min)) + new_min
where new_min and new_max are the minimum and maximum values you want to scale to, and min and max are the minimum and maximum values in your array.
For example if you want to scale values between -1 and 1 for an array [2, 5, 0], it reduces to ((arr - 0)*(1-(-1))/(5-0)) - 1 = [-0.2, 1, -1]
Check this wiki page.
Edit:
As rightly pointed out in the comments by @AnkitSeth, MinMaxScaler does have a feature_range argument you can pass. Internally, feature_range does the same job as the formula mentioned above. |
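A minimal example of that built-in route, equivalent to the manual formula above:
from sklearn.preprocessing import MinMaxScaler
import numpy as np
arr = np.array([[2.0], [5.0], [0.0]])
scaler = MinMaxScaler(feature_range=(-1, 1))
print(scaler.fit_transform(arr).ravel())   # [-0.2  1.  -1. ]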
H: What are some situations when normalizing input data to zero mean, unit variance is not appropriate or not beneficial?
I have seen normalization of input data to zero mean, unit variance many times in machine learning. Is this a good practice to be done all the time or are there times when it is not appropriate or not beneficial?
AI: A detailed answer to the question can be found here.
[...]are there times when it is not appropriate or not beneficial?
Short answer: Yes and No. Yes in the terms, that it can significantly change your output of e.g. clustering algorithms. No, on the other hand, if these changes are what you want to achieve. Or to put it in the words of the author of the mentioned source:
Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the the x-axis. instead of four little quadrant-clusters, you're going to get the long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!)
The take-home-message of this is: always think carefully about what you want to achieve and what kind of data your algorithms prefer - it does matter! |
H: How to choose dimensionality of the Dense layer in LSTM?
I have a task of multi-label text classification. My dataset has 1369 classes:
# data shape
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(54629, 500)
(23413, 500)
(54629, 1369)
(23413, 1369)
For this task, I've decided to use LSTM NN with the next parameters:
# define model
maxlen = 400
inp = Input(shape=(maxlen, ))
embed_size = 128
x = Embedding(max_features, embed_size)(inp)
x = LSTM(60, return_sequences=True,name='lstm_layer')(x)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(2000, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(1369, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
batch_size = 32
epochs = 2
model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
Question: Are there any scientific methods for determining the Dense and LSTM dimensionality (in my example, the LSTM dimension = 60, the first Dense dimension = 2000, and the second Dense dimension = 1369)?
If there are no scientific methods, maybe there are some heuristics or tips on how to do this with data with similar dimension.
I randomly chose these parameters. I would like to improve the accuracy of the model and correctly approach to solving similar problems.
AI: This is a very broad and difficult question, because these parameters depend completely on the complexity of your dataset. There is (currently) no "scientific method" to determine these parameters. The best solution is to apply a form of hyperparameter optimization.
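A minimal search sketch of that idea (build_model(lstm_units, dense_units) is a hypothetical helper that rebuilds the model above with the given layer sizes):
import itertools
best_acc, best_cfg = 0.0, None
for lstm_units, dense_units in itertools.product([30, 60, 120], [500, 1000, 2000]):
    model = build_model(lstm_units, dense_units)
    history = model.fit(X_train, Y_train, batch_size=32, epochs=2,
                        validation_split=0.1, verbose=0)
    val_acc = history.history['val_acc'][-1]   # key may be 'val_accuracy' in newer Keras
    if val_acc > best_acc:
        best_acc, best_cfg = val_acc, (lstm_units, dense_units)
print(best_cfg, best_acc)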
See this similar post for more answers. |
H: How to know for sure if we can learn from a given data or not?
I want to know that given a set of data and a target, how we can know for sure whether we can learn from that data to make any inference or not?
AI: how can we know for sure
We can't.
A toy example to show why even humans can not do this for sure:
Assume you get the numbers 2, 4, 8, 16, 32, ?? and want to extrapolate to the next number ??. A natural extension of the series would be 64, but we can not take this for granted. The next number can just as well be 0. You can not be sure.
Only given the data and without additional assumptions about what you would expect to see, you can not learn a correct model per-se. You always have to be critical about your data. |
H: overfit a Random Forest
I am trying to overfit a random forest classifier as much as possible using scikit-learn, in order to run some tests.
Does somebody know what hyperparameters I can tune to do that? Or does somebody know which other non-linear model I could use to achieve maximum overfitting?
AI: Decision Trees are definitely easier to overfit than Random Forests. The averaging effect (see bagging) is meant to combat overfitting.
Other than that I think the default parameters will overfit.
Example:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
# Create a dataset
x = np.linspace(0, 10 * np.pi, 50).reshape(-1,1)
y = x + 3 * np.sin(x)
noise = np.random.random(50).reshape(-1,1)
noise -= noise.mean() # center noise at 0
noisy = y + noise * 2
# Define a Decision Tree (with default parameters)
dtr = DecisionTreeRegressor()
dtr.fit(x, noisy)
y_dtr = dtr.predict(x)
# Draw the two plots
plt.figure(figsize=(14, 4))
ax1 = plt.subplot(121)
ax1.plot(np.linspace(0, 10 * np.pi, 100),
np.linspace(0, 10 * np.pi, 100) + 3 * np.sin(np.linspace(0, 10 * np.pi, 100)),
color='gray', label='desired fit', zorder=-1, alpha=0.5)
ax1.plot(x, y_dtr, color='#ff7f0e', label='decision tree', zorder=-1)
ax1.scatter(x, noisy, label='data')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Model Overfit')
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax1.yaxis.set_ticks_position('left')
ax1.xaxis.set_ticks_position('bottom')
ax1.legend()
ax2 = plt.subplot(122)
ax2.plot(np.linspace(0, 10 * np.pi, 100),
np.linspace(0, 10 * np.pi, 100) + 3 * np.sin(np.linspace(0, 10 * np.pi, 100)),
color='gray', label='desired fit', zorder=-1, alpha=0.5)
ax2.plot(x, y_dtr, color='#ff7f0e', label='decision tree', zorder=-1)
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_title('Same graph')
ax2.spines['right'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax2.yaxis.set_ticks_position('left')
ax2.xaxis.set_ticks_position('bottom')
ax2.legend()
Running the code above will produce a figure in which the decision tree fits the noisy training points exactly rather than following the desired smooth fit, i.e. it overfits. |
H: How can generative models be used in machine learning classification applications?
My understanding of generative models is that they generate data to match certain statistical properties. Intuitively, I find it hard how generative models can be used for classification purposes. On the other hand, discriminative models being used for classification is self-explanatory.
AI: You will find an explanation on Wikipedia. But let us sum up:
Given your Data $D$ you are interested in target values $Y$, your classifications. Discriminative models are, like you said, the straightforward way of modelling the correspondence of your target given your data, $P(Y|D)$. The generative model, on the other hand, calculates $P(D|Y)$.
Let us consider the example of e-mail spam filtering. We have a set of reference e-mails and a label for each mail, which indicates whether it is spam. If we now, for example, look at Naive Bayes, we can see it utilizes the Bayes formula to calculate the posterior estimate $P(Y|X) = \frac{P(X|Y)P(Y)}{P(X)}$. Opposed to other Bayesian inference methods, Naive Bayes does not hold $P(X)$ as a model, but $P(X|Y)$, which can be modelled by our reference e-mail set. In this domain, all e-mails can be modelled as equally distributed, thus making $P(X)$ uninteresting. $P(Y)$ can also be approximated by the reference set (only looking at the labels).
This way, we can generate probabilities for our label $Y$, which is binary in our case. Now we can apply the maximum a posteriori approach: We calculate $P(Y=true|X)$ and $P(Y=false|X)$ and see which of both is more likely. The most likely option will be selected as our classification. |
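In other words, the generative model is turned into a classifier through the decision rule
$$\hat{y} = \arg\max_{y} P(X \mid y)\,P(y)$$
where both factors are estimated from the labelled reference set.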
H: NER: Extracting entities from an article
Description : I have dataset of categorised articles and to extract specific values from respective categorised article I have regex created for each category.
Aiming for:
Nlp techniques which learns the context of the content and avoids/minimizes the use of regex
If some new (similar) article comes up, depending on the learning (from 1) it tries to give the specific values.
Steps taken:
Created a dataframe with various features like : 'Name of the author', 'published date', etc. and got the values from the dataset by using regex
Research_work:
I was considering these options ahead this stage :
Using CNN: it will classify new articles depending on the feature values it learnt on and then use regexes for entity extraction. (It won't achieve the first aim.)
Using CRF (medium_article): making use of POS+IOB tagging
Is there any other way around ?
What are the cons of the above stated methods?
AI: CRF is a standard approach for such cases, but bi-LSTM + CRF is said to be even better (e.g. https://arxiv.org/pdf/1508.01991.pdf). Not sure if you need POS, as this is usually solved using the same techniques - it may not help in the main task of entity extraction.
Depending on how different the articles in specific categories are and how much data you have for each of them, you may need to train each category separately and have a classifier at the beginning to decide the category - like in your point 1, but a word2vec-like classifier may be more robust here. |
H: Why Extra-trees should only be used within ensemble methods?
I was reading scikit-learn documentation for Extremely Randomized Trees and I found this warning:
Warning: Extra-trees should only be used within ensemble methods.
Why is that?
AI: In a random forest tree, a random subset of features is available for consideration at each split. Extra-trees takes this a step further by using a random threshold at each split. The idea is that a forest (an ensemble of trees) with a large number of trees that have learned something different from each other (due to the random nature of the features and thresholds at each split) will result in a better prediction than any single tree.
If one were to use a random threshold for each split (what extra-trees do) in a single tree, one would likely end up with a suboptimal tree that predicts poorly. |
H: Reinforcement learning: decreasing loss without increasing reward
I'm trying to solve OpenAI Gym's LunarLander-v2.
I'm using the Deep Q-Learning algorithm. I have tried various hyperparameters, but I can't get a good score.
Generally the loss decreases over many episodes but the reward doesn't improve much.
How should I interpret this? If a lower loss means more accurate predictions of value, naively I would have expected the agent to take more high-reward actions.
Could this be a sign of the agent not having explored enough, of being stuck in a local minimum?
AI: How should I interpret this? If a lower loss means more accurate predictions of value, naively I would have expected the agent to take more high-reward actions.
A lower loss means more accurate predictions of value for the current policy (technically it is more complicated for Q-learning off-policy estimates, but the convergence will still be limited by experience reachable in the current policy). Unfortunately a loss metric in RL cannot capture how good that policy is.
So what it means is that your policy has settled into a pattern where values can be estimated well by the neural network that you are using for Q. For some reason it is not finding improvements to that policy - typically it should be doing that before the loss metric drops, as each improvement in value estimates should reveal better possible actions, and once those start being taken by a new policy, the value estimates become out of date, and the loss increases again.
Could this be a sign of the agent not having explored enough, of being stuck in a local minimum?
Exploration could be an issue. The "local minimum" in that case is probably not an issue with the neural network, but that small variations in policy are all worse than the current policy. As you are learning off-policy, then increasing the exploration rate may help find the better states, at the expense of slower overall learning. Also, methods that explore more widely than randomly on each action could be better - e.g. action selection methods that consistently pick unexplored state/action pairs such as Upper Confidence Bound.
Also a possibility is that the structure of your network generalises well under the current policy, but is not able to cover better policies. In that case, whenever the exploration suggests a better policy, the network will also increase estimates of unrelated action choices - so it would try them, notice they are better, then back off as the new values also cause unwanted policy changes in other situations.
If you know a better policy than the one that is being found, then you could plot a learning curve with the policy fixed, see if the network can learn it. However, usually you will not know this, so you may be stuck with trying some variations of neural network architecture or other hyperparameters.
There are other methods than DQN (e.g. A3C, DDPG), as well as many add-ons and adjustments to DQN that you could try (e.g. eligibility traces, double learning). |
H: What to do if training loss decreases but validation loss does not decrease?
I am training a LSTM model to do question answering, i.e. so given an explanation/context and a question, it is supposed to predict the correct answer out of 4 options.
My model architecture is as follows (if not relevant please ignore): I pass the explanation (encoded) and question each through the same LSTM to get a vector representation of the explanation/question, and add these representations together to get a combined representation for the explanation and question. I then pass the answers through an LSTM to get a representation (50 units) of the same length for answers. In one example, I use 2 answers, one correct answer and one wrong answer. From this I calculate 2 cosine similarities, one for the correct answer and one for the wrong answer, and define my loss to be a hinge loss: the correct answer representation should have a high similarity with the question/explanation representation, while the wrong answer should have a low similarity. I try to maximize the difference between the two cosine similarities and minimize this loss.
The problem I find is that, for the various hyperparameters I try (e.g. number of hidden units, LSTM or GRU), the training loss decreases but the validation loss stays quite high (I use dropout with a rate of 0.5).
My dataset contains about 1000+ examples. Any advice on what to do, or what is wrong?
AI: Welcome to DataScience. This looks like a typical scenario of overfitting: in this case your RNN is memorizing the correct answers, instead of understanding the semantics and the logic needed to choose the correct answers.
A typical trick to verify that is to manually mutate some labels. For instance, you can generate a fake dataset by using the same documents (or explanations, in your words) and questions, but for half of the questions, label a wrong answer as correct. If you re-train your RNN on this fake dataset and achieve similar performance as on the real dataset, then we can say that your RNN is memorizing.
Note that it is not uncommon that when training a RNN, reducing model complexity (by hidden_size, number of layers or word embedding dimension) does not improve overfitting.
If it is indeed memorizing, the best practice is to collect a larger dataset. I understand that it might not be feasible, but very often data size is the key to success. If you haven't done so, you may consider working with a benchmark dataset like SQuAD or bAbI. |
H: Purpose of backpropagation in neural networks
I've just finished conceptually studying linear and logistic regression functions and their optimization as preparation for neural networks.
For example, say we are performing binary classification with logistic regression, let's define variables:
$x$ - vector containing all inputs.
$y$ - vector containing all outputs.
$w_{0}$ - bias weight variable.
$W=(w_1,\dots,w_n)$ - vector containing all weight variables.
$f(x_i)=w_{0}+\sum_{j=1}^{n}x_{ij}w_{j}=w_{0}+x_i^{T}W$ - weighted sum of the inputs for sample $x_i$.
$p(x_{i})=\frac{1}{1+e^{-f(x_i)}}$ - logistic activation function (sigmoid), representing conditional probability that $y_i$ will be 1 given observed values in $x_i$.
$L=-\frac{1}{N} \sum^{N}_{i=1} \left[ y_i \ln(p(x_i))+(1-y_i) \ln(1-p(x_i)) \right]$ - binary cross entropy loss function (Kullback-Leibler divergence of Bernoulli random variables plus entropy of activation function representing probability)
$L$ is multi-dimensional function, so it must be differentiated with partial derivative, being:
$$\frac{\partial{L}}{\partial{w}}$$
Then, the chain rule gives:
$$\frac{\partial{L}}{\partial{w_1}}=\frac{\partial{L}}{\partial{p_i}} \frac{\partial{p_i}}{\partial{w_1}}$$
After doing few calculations, derivative of the loss function is:
$$(y_i-p_i)*x_i$$
So we got derivative of the loss function, and all weights are trained separately with gradient descent.
What does backpropagation have to do with this? To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate gradient on every step, correct?
In short
We already have derivative calculated, so what's the point of calculating them on every step when we can just plug in the variables?
Is backpropagation just a fancy term for weights being optimized on every iteration?
AI: Is backpropagation just a fancy term for weights being optimized on every iteration?
Almost. Backpropagation is a fancy term for using the chain rule.
It becomes more useful to think of it as a separate thing when you have multiple layers, as unlike your example where you apply the chain rule once, you do need to apply it multiple times, and it is most convenient to apply it layer-by-layer in reverse order to the feed forward steps.
For instance, if you have two layers, $l$ and $l-1$ with weight matrix $W^{(l)}$ linking them, non-activated sum for a neuron in each layer $z_i^{(l)}$ and activation function $f()$, then you can link the gradients at the sums (often called logits as they may be passed to logistic activation function) between layers with a general equation:
$$ \frac{\partial L}{\partial z^{(l-1)}_j} = f'(z^{(l-1)}_j) \sum_{i=1}^{N^{(l)}} W_{ij}^{(l)} \frac{\partial L}{\partial z^{(l)}_i}$$
This is just two steps of the chain rule applied to generic equations of the feed-forward network. It does not provide the gradients of the weights, which is what you eventually need - there is a separate step for that - but it does link together layers, and is a necessary step to eventually obtain the weights. This equation can be turned into an algorithm that progressively works back through layers - that is back propagation.
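A minimal NumPy sketch of that recursion, assuming $W^{(l)}$ is stored with shape $(N^{(l)}, N^{(l-1)})$ so that row $i$ holds the weights into neuron $i$ of layer $l$:
import numpy as np
def backprop_step(W_l, dL_dz_l, z_lminus1, f_prime):
    # gradient at layer l-1's pre-activations, given the gradient at layer l's
    return f_prime(z_lminus1) * (W_l.T @ dL_dz_l)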
To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate gradient on every step, correct?
That is exactly what automatic differentiation is doing. Essentially "automatic differentiation" = "the chain rule", applied to function labels in a directed graph of functions. |
H: Extracting specific data from unstructured text - NER
I have a reasonably simple problem to solve. I need to extract reservations numbers from unstructured text. Based on my research, it seems to be an NER problem. Based on a visual analysis of the dataset, I could notice that the reservation number is frequently near specific keywords, such as 'confirmation', 'reservation', 'confirmation number', 'reservations number', etc.
First, I decided to try a Regex rule to extract the data, but some minimum variations might render this solution inefficient. The reservation number can have very different variations, such as:
ZXC51657856,
EA5FFD4,
45615177413515,
QT454545EF,
At this moment, I don't have a dataset available to train a classifier to solve this issue.
I would like to receive some ideas from the community to guide me towards an elegant solution to this problem, as I'm pretty new to ML in general and time is limited.
AI: From your question, I too feel it's a NER problem. And about the dataset, unless there is a data set which tags the reservation numbers and is similar to your application, you WILL have to create your own data set.
I worked on a similar problem before and my dataset looked something like this:
<TEAM>Northern</TEAM> NNP
<TEAM>Ireland</TEAM> NNP
man NN
<PLAYER>James</PLAYER> NNP
<PLAYER>McIlroy</PLAYER> NNP
is VBZ
confident JJ
he PRP
can MD
win VB
his PRP$
first JJ
major JJ
title NN
at IN
this DT
weekend NN
's POS
<COMPETITION>Spar</COMPETITION> JJ
<COMPETITION>European</COMPETITION> JJ
<COMPETITION>Indoor</COMPETITION> NNP
<COMPETITION>Championships</COMPETITION> NNP
in IN
<LOCATION>Madrid</LOCATION> NNP
You can see that that I have the entity tag and the part of speech tag in the word. When I parse this dataset for training, I also add the IOB tags (Inside, Outside, and Beginning)
[(('Claxton', 'NNP\n'), 'B-PLAYER'),
(('hunting', 'VBG\n'), 'O'),
(('first', 'RB\n'), 'O'),
(('major', 'JJ\n'), 'O'),
(('medal', 'NNS\n'), 'O'),
(('.', '.\n'), 'O'),
(('British', 'JJ\n'), 'O'),
(('hurdler', 'NN\n'), 'O'),
(('Sarah', 'NNP\n'), 'B-PLAYER'),
(('Claxton', 'NNP\n'), 'I-PLAYER')......]
Then I just used the ClassifierBasedTagger(There are other taggers too). I can't find the source but I used this code:
# imports needed for the snippet below (assuming NLTK is used, as in the original)
from collections.abc import Iterable
from nltk.chunk import ChunkParserI, conlltags2tree
from nltk.tag import ClassifierBasedTagger
class NamedEntityChunker(ChunkParserI):
def __init__(self, train_sents, **kwargs):
assert isinstance(train_sents, Iterable), 'The training set should be an Iterable'
self.feature_detector = features
self.tagger = ClassifierBasedTagger(
train = train_sents,
feature_detector = features,
**kwargs)
def parse(self, tagged_sents):
chunks = self.tagger.tag(tagged_sents)
iob_triplets = [(w, t, c) for ((w, t), c) in chunks]
return conlltags2tree(iob_triplets)
Here features is a function which returns a dictionary of the features to be used such as the previous word, previous word's pos tag etc. Just features to train the model on.
{
'word' : word,
'lemma' : stemmer.stem(word),
'pos' : pos,
'allascii' : allascii,
'next-word' : nextword,
'next-lemma' : stemmer.stem(nextword),
'next-pos' : nextpos,
'prev-word' : prevword,
'prev-lemma': stemmer.stem(prevword),
'prev-pos' : prevpos
}
You can find useful theory here
I hope this helps. |
H: Meaning of dropout
What does model.add(Dropout(0.4)) mean in Keras?
Does it mean ignoring 40% of the neurons in the Neural Network? OR Does it mean ignoring the neurons that give probability = 0.4?
AI: It means that you randomly select 40% of the neurons and set their outputs to zero for the forward and backward passes, i.e. for one iteration.
Have a look here for some of the reasons and benefits.
Have a look here for all the details of standard Dropout. It is important to notice that the remaining activations are commonly rescaled (by a factor depending on the rate p), so as to keep their expected value roughly consistent over many iterations. Different deep learning frameworks apply this scaling at different points (during training or at test time), but the reason is the same.
From the relevant Keras documentation:
Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. |
H: Difference between isna() and isnull() in pandas
I have been using pandas for quite some time. But, I don't understand what's the difference between isna() and isnull(). And, more importantly, which one to use when identifying missing values in a dataframe.
What is the basic underlying difference of how a value is detected as either na or null?
AI: Pandas isna() vs isnull().
I'm assuming you are referring to pandas.DataFrame.isna() vs pandas.DataFrame.isnull(). Not to confuse with pandas.isnull(), which in contrast to the two above isn't a method of the DataFrame class.
These two DataFrame methods do exactly the same thing! Even their docs are identical. You can even confirm this in pandas' code.
But why have two methods with different names do the same thing?
This is because pandas' DataFrames are based on R's DataFrames. In R na and null are two separate things. Read this post for more information.
However, in python, pandas is built on top of numpy, which has neither na nor null values. Instead numpy has NaN values (which stands for "Not a Number"). Consequently, pandas also uses NaN values.
In short
To detect NaN values numpy uses np.isnan().
To detect NaN values pandas uses either .isna() or .isnull().
The NaN values are inherited from the fact that pandas is built on top of numpy, while the two functions' names originate from R's DataFrames, whose structure and functionality pandas tried to mimic. |
H: Plotting in PySpark?
I have been searching for methods to plot in PySpark.
I couldn't find any resource on plotting data residing in DataFrame in PySpark. The only methods which are listed are:
through method collect() which brings data into 'local' Python session and plot
through method toPandas() which converts data to 'local' Pandas Dataframe.
The problem is that these both are very time-consuming functions. Is there any method by which we can plot data residing in Spark session directly (not importing it into the local session)?
AI: No, there is no such method that I have found.
The reason is that plotting libraries run on a single machine and expect a relatively small (sampled) dataset. Data in Spark is distributed among its clusters and hence needs to be brought to a local session first, from where it can be plotted.
That is why methods such as collect(), toPandas() are needed. |
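A common workaround, shown as a rough sketch (spark_df and the column name are placeholders): sample the Spark DataFrame first, so only a small subset is brought to the driver for plotting.
import matplotlib.pyplot as plt
sample_pdf = spark_df.sample(False, 0.01, seed=42).toPandas()   # ~1% sample, without replacement
sample_pdf["some_numeric_column"].hist(bins=50)
plt.show()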
H: Sklearn SVM - how to get a list of the wrong predictions?
I am not an expert user. I know that I can obtain the confusion matrix, but I would like to obtain a list of the rows that have been classified in a wrong way in order to study them after classification.
On stackoverflow I found this Can I get a list of wrong predictions in SVM score function in scikit-learn but I am not sure to have understood everything.
This is an example code.
# importing necessary libraries
from sklearn import datasets
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
# loading the iris dataset
iris = datasets.load_iris()
# X -> features, y -> label
X = iris.data
y = iris.target
# dividing X, y into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# training a linear SVM classifier
from sklearn.svm import SVC
svm_model_linear = SVC(kernel = 'linear', C = 1).fit(X_train, y_train)
svm_predictions = svm_model_linear.predict(X_test)
# model accuracy for X_test
accuracy = svm_model_linear.score(X_test, y_test)
# creating a confusion matrix
cm = confusion_matrix(y_test, svm_predictions)
To iterate through the rows and to find the wrong ones, the proposed solution is:
predictions = clf.predict(inputs)
for input, prediction, label in zip(inputs, predictions, labels):
if prediction != label:
print(input, 'has been classified as ', prediction, 'and should be ', label)
I didn't understand what is "input"/"inputs". If I adapt this code to my code, like this:
for input, prediction, label in zip (X_test, svm_predictions, y_test):
if prediction != label:
print(input, 'has been classified as ', prediction, 'and should be ', label)
I obtain:
[6. 2.7 5.1 1.6] has been classified as 2 and should be 1
Is the row 6 the wrong row? What are the numbers after the 6.? I am asking this because I am using the same code on a dataset that is bigger than this one, so I would like to be sure that I am doing the right things.
I am not posting the other dataset because unfortunately I can't, but the problem there is that I obtained something like this:
(0, 253) 0.5339655767137572
(0, 601) 0.27665553856928027
(0, 1107) 0.7989633757962163 has been classified as 7 and should be 3
(0, 885) 0.3034934766501018
(0, 1295) 0.6432561790864061
(0, 1871) 0.7029318585026516 has been classified as 7 and should be 6
(0, 1020) 1.0 has been classified as 3 and should be 8
When I count every line of this last output, I obtain twice the number of lines of the test set... So I am not sure that I am analysing exactly the list of wrongly predicted results…
I hope I have been clear enough.
AI: Welcome to SE:DataScience.
Here [6. 2.7 5.1 1.6] is the feature of the input instance which is classified wrongly. It is one row from your input feature X = iris.data.
The message means: your SVM use the input feature [6. 2.7 5.1 1.6] to predict a label, and it predicts label=2. The ground truth is label=1.
If you want to print the indices of rows that are classified wrongly, you can use
for row_index, (input, prediction, label) in enumerate(zip (X_test, svm_predictions, y_test)):
if prediction != label:
print('Row', row_index, 'has been classified as ', prediction, 'and should be ', label) |
H: Coding Problem - Extracting values from a column and forming a new dataframe [edited]
The problem statement requires extracting certain weather parameters for every hour in a particular date as denoted in the dataframe. The column 'hourly' consists of 24 lists in each entry, denoting weather parameters for each hour on that particular date. Is there a way that I can extract the parameter 'CloudCover' for all those 24 hours and form a new dataframe whose columns denote the hours in the day and the corresponding CloudCover value for a single date ?
Edit: Based on the suggestions given below, I tinkered with my code and a new problem has come up. While the code suggested by @jahKnows works perfectly fine, it gives only the first value of any parameter in the hourly column entries. For example, for the date 01-01-2016, the corresponding hourly column entry has 24 values of cloudCover, but the code suggested below gives only the first cloudCover value and moves on to the next date, leaving out the other 23 cloudCover values for that particular date. Can you suggest changes to tackle this problem? I have attached the updated notebook link and the original dataset link below.
Notebook link(edited): https://anaconda.org/vishwa989796/okayishtrial/notebook
data set link: https://drive.google.com/drive/folders/1wMNOZapHib9AyYFaLdA1jdEdx9DdLnXx
AI: I wrote you two small functions which you can use to unpack a dataframe.
The original dataframe looks like this
import pandas as pd
df = pd.DataFrame(data = temp['data']['weather'])
df.head()
The first one is simple, it takes a dataframe and the name of a column, and it will extract the column into a new dataframe.
def extract_col_as_df(df, column_name):
data = [datum[0] for datum in df[column_name]]
df = pd.DataFrame(data = data)
return df
df_astronomy = extract_col_as_df(df, 'astronomy')
df_astronomy.head()
df_astronomy = extract_col_as_df(df, 'hourly')
df_astronomy.head()
The fact that the extracted tables sometimes have a useless list of dictionaries with a single value bothered me. Of course you can use the same function as above to extract that column as another dataframe, but then you would have a dataframe with a single column, why not just unpack it in place. So I wrote another function which cleans up the extracted dataframe by unpacking a list of dictionaries with a single value.
A better version
This version also takes a dataframe and a column name to extract a dataframe from it. But from the extracted dataframe if a column contains a list of dictionaries with only a single value it unpacks it.
def extract_col_as_df(df, column_name):
data = []
for datum in df[column_name]:
record = {}
for i in datum[0]:
# If the entry in the record is comprised of a list with a
# dictionary containing a single value then unpack it
if type(datum[0][i]) is list:
if len(datum[0][i]) == 1:
key_name = list(datum[0][i][0].keys())[0]
record.update({i: datum[0][i][0][key_name]})
else:
record.update({i: datum[0][i]})
else:
record.update({i: datum[0][i]})
data.append(record)
df = pd.DataFrame(data = data)
return df
df_astronomy = extract_col_as_df(df, 'hourly')
df_astronomy.head() |
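Regarding the edit in the question: the reason only the first hourly value per date comes through is the datum[0] in the extraction, which keeps only the first element of each per-date list. A rough sketch of a variant that keeps every hourly record, one output row per hour (extract_all_rows_as_df is just a hypothetical name), could look like this:
def extract_all_rows_as_df(df, column_name):
    # Same idea as above, but append every element of the per-date list
    # (e.g. all 24 hourly records) instead of only the first one.
    data = []
    for datum in df[column_name]:
        for record in datum:
            data.append(record)
    return pd.DataFrame(data=data)

df_hourly = extract_all_rows_as_df(df, 'hourly')
df_hourly.head()
From df_hourly you could then keep only the cloudCover column (plus whatever date/time columns you need) to build the per-hour table described in the question.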
H: Finding clusters in multidimensional data
I have a set of data from 3,000 records. There are 5 attributes per individual (labelled A - E). I can use Kendall's W (coefficient of concordance) to determine the concordance between any two records.
What I require is a way to discern any clusters which exist across the entire data, and simply which of the attributes are within each cluster. e.g. for the data below I would see 2 clusters:
Cluster 1: C, D / Cluster 2: A, B
ID | A | B | C | D | E |
001 | 55 | 80 | 125 | 114 | 75 |
002 | 75 | 78 | 110 | 105 | 95 |
003 | 135 | 105 | 95 | 93 | 92 |
004 | 120 | 115 | 101 | 55 | 44 |
What techniques or approaches could I use to help with this?
AI: In general it does not make much sense to cluster features. In an ideal world, for your features to be the best they can be, they should actually be independent, so there should be no relationship between them. Typically when we talk about clustering we mean clustering the instances: attributing some associative labels to a subset of the instances based on the similarity of their feature values.
Many clustering algorithms exist; I would say that the most popular is K-means, though spectral clustering and Gaussian mixtures are also frequently used. As always, each algorithm is best suited to a specific type of dataset, so it is up to you to choose which one fits best, or you can just try all of them and see which works best.
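If you do go with a library implementation, a minimal scikit-learn sketch for a 3000 x 5 matrix like yours could look like the following (the data here is random, purely to illustrate the calls):
import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(3000, 5)      # one row per record, one column per attribute (A-E)
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
print(kmeans.labels_[:10])        # cluster assignment of the first 10 records
print(kmeans.cluster_centers_)    # the centroid of each cluster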
Here you can find a list of clustering algorithms with their respective usecases. Always use the libraries when you want to implement standard algorithms, they are highly optimized. But for education sake it is good to look at what is happening.
I will describe a homebrew version of the K-means algorithm such that you can understand what is happening under the hood and perhaps you will see why we cluster instances and not features.
The K-means algorithm
import numpy as np
import matplotlib.pyplot as plt
First we will make some artificial data. These will consist of $n$ 2D Gaussian clusters with a given mean and variance. Here $n=5$ and we will have 300 instances per Gaussian distribution.
params = [[[ 0,1], [ 0,1]],
[[ 5,1], [ 5,1]],
[[-2,5], [ 2,5]],
[[ 2,1], [ 2,1]],
[[-5,1], [-5,1]]]
n = 300
dims = len(params[0])
data = []
y = []
for ix, i in enumerate(params):
inst = np.random.randn(n, dims)
for dim in range(dims):
inst[:,dim] = params[ix][dim][0]+params[ix][dim][1]*inst[:,dim]
label = ix + np.zeros(n)
if len(data) == 0: data = inst
else: data = np.append( data, inst, axis= 0)
if len(y) == 0: y = label
else: y = np.append(y, label)
num_clusters = len(params)
print(y.shape)
print(data.shape)
(1500,) (1500, 2)
So now we have 1500 instances (records) in 2D space. This can be extended to any number dimensions. 2 is easiest to plot.
plt.scatter(data[:,0], data[:,1])
plt.show()
I will paste the entire algorithm at the bottom of the answer for a quick copy and paste but here I will go through its different parts so you can see how it works. The algorithm goes as follows: first we initialize some centroids within the range of our data. These are the red dots in the image below
def train(self, data, verbose=1):
shape = data.shape
ranges = np.zeros((shape[1], 2))
centroids = np.zeros((shape[1], 2))
for dim in range(shape[1]):
ranges[dim, 0] = np.min(data[:,dim])
ranges[dim, 1] = np.max(data[:,dim])
if verbose == 1:
print('Ranges: ')
print(ranges)
centroids = np.zeros((self.k, shape[1]))
for i in range(self.k):
for dim in range(shape[1]):
centroids[i, dim] = np.random.uniform(ranges[dim, 0], ranges[dim, 1], 1)
if verbose == 1:
print('Centroids: ')
print(centroids)
plt.scatter(data[:,0], data[:,1])
plt.scatter(centroids[:,0], centroids[:,1], c = 'r')
plt.show()
Then we will calculate the distance from each instance (record) to each centroid.
distances = np.zeros((shape[0],self.k))
for ix, i in enumerate(data):
for ic, c in enumerate(centroids):
distances[ix, ic] = np.sqrt(np.sum((i-c)**2))
Then we will attribute each instance as belonging to the centroid it is closest to.
labels = np.argmin(distances, axis = 1)
Now we will update the position of the new centroids by finding the mean position of all instances which are closest to the given centroid in each dimension.
new_centroids = np.zeros((self.k, shape[1]))
for centroid in range(self.k):
temp = data[labels == centroid]
if len(temp) == 0:
return 0
for dim in range(shape[1]):
new_centroids[centroid, dim] = np.mean(temp[:,dim])
We then repeat this process until the centroids no longer move significantly. Usually if the difference in position for all the centroids is less than machine epsilon we consider the algorithm to have converged.
if np.linalg.norm(new_centroids - centroids) < np.finfo(float).eps:
print("DONE!")
break
For this dataset it took 16 iterations for convergence, this is the final result.
What we see here is that each instance is grouped into a cluster with similar attributes as itself. For example if the dimensions of our data represented height (x-axis) and weight (y-axis), then we can group people into 5 different BMI indices.
Purple: short and skinny
Green: short and fat
Blue: mid height and mid weight
Yellow: Mid height and fat
Turquoise: Tall and mid weight.
Each member in our population belongs to a specific BMI index which is clustered based on two measurable attributes he possesses (features), his height and his weight.
The full K-means algorithm
This is the algorithm
class Kmeans(object):
def __init__(self, k=1):
self.k = k
def train(self, data, verbose=1):
shape = data.shape
ranges = np.zeros((shape[1], 2))
centroids = np.zeros((shape[1], 2))
for dim in range(shape[1]):
ranges[dim, 0] = np.min(data[:,dim])
ranges[dim, 1] = np.max(data[:,dim])
if verbose == 1:
print('Ranges: ')
print(ranges)
centroids = np.zeros((self.k, shape[1]))
for i in range(self.k):
for dim in range(shape[1]):
centroids[i, dim] = np.random.uniform(ranges[dim, 0], ranges[dim, 1], 1)
if verbose == 1:
print('Centroids: ')
print(centroids)
plt.scatter(data[:,0], data[:,1])
plt.scatter(centroids[:,0], centroids[:,1], c = 'r')
plt.show()
count = 0
while count < 100:
count += 1
if verbose == 1:
print('-----------------------------------------------')
print('Iteration: ', count)
distances = np.zeros((shape[0],self.k))
for ix, i in enumerate(data):
for ic, c in enumerate(centroids):
distances[ix, ic] = np.sqrt(np.sum((i-c)**2))
labels = np.argmin(distances, axis = 1)
new_centroids = np.zeros((self.k, shape[1]))
for centroid in range(self.k):
temp = data[labels == centroid]
if len(temp) == 0:
return 0
for dim in range(shape[1]):
new_centroids[centroid, dim] = np.mean(temp[:,dim])
if verbose == 1:
plt.scatter(data[:,0], data[:,1], c = labels)
plt.scatter(new_centroids[:,0], new_centroids[:,1], c = 'r')
plt.show()
if np.linalg.norm(new_centroids - centroids) < np.finfo(float).eps:
print("DONE!")
break
centroids = new_centroids
self.centroids = centroids
self.labels = labels
if verbose == 1:
print(labels)
print(centroids)
return 1
def getAverageDistance(self, data):
dists = np.zeros((len(self.centroids),))
for ix, centroid in enumerate(self.centroids):
temp = data[self.labels == ix]
dist = 0
for i in temp:
dist += np.linalg.norm(i - centroid)
dists[ix] = dist/len(temp)
return dists
def getLabels(self):
return self.labels
To use the algorithm use the following, where data is artificial data we made above but can also be any numpy matrix where the records are the rows and the features are the columns.
kmeans = Kmeans(5)
kmeans.train(data) |
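If you do not know how many clusters to expect, one rough way to choose k with the class above is to train it for several values of k and compare the average within-cluster distances (a simple elbow-method style sketch; note that train returns 0 when a cluster ends up empty):
for k in range(2, 8):
    km = Kmeans(k)
    if km.train(data, verbose=0) == 1:
        print(k, km.getAverageDistance(data).mean())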
H: How to interpret a drastic accuracy loss while training a neural net (CNN)?
How can one interpret a drastic accuracy loss after ~38 epochs? Maybe more dropout should be added to the CNN network?
(x-axis shows the number of epochs)
AI: There could be several reasons:
Numeric stability issues + overfitting. Some neural activation, or some weights, continue to increase/decrease monotonically until some function (e.g. softmax) encounters a numerical stability issue. This could happen when your network is memorizing instead of properly learning.
Improper learning rate. A well defined learning algorithm usually gradually decrease its learning rate as training proceeds. If that didn't work properly, or you are using a too large constant learning rate, some activation or weights could be exploding.
For the above two reasons, you may want to print the min and max values for the activations and weights in each layer during your training. If something looks like exploding (i.e. reaching something like 10E+38) you need to check on that. Also monitor if any values become NaN or infinity.
A bug in your code for calculating accuracy or plotting. You may want to manually inspect the predicted labels and the ground truth labels to confirm whether your algorithm is performing poorly after epoch 38. |
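To make the monitoring suggestion above concrete: if you happen to be training with Keras, a minimal sketch of such a monitor could be a callback like the one below (WeightMonitor is a hypothetical name; model is assumed to be your compiled model):
import numpy as np
from keras.callbacks import Callback

class WeightMonitor(Callback):
    def on_epoch_end(self, epoch, logs=None):
        # print the min/max of every weight array in every layer after each epoch
        for layer in self.model.layers:
            for w in layer.get_weights():
                print(epoch, layer.name, 'min:', np.min(w), 'max:', np.max(w),
                      'contains NaN!' if np.isnan(w).any() else '')

# model.fit(x_train, y_train, epochs=50, callbacks=[WeightMonitor()])
If the printed values grow without bound around epoch 38, that points to the first two causes; if they stay bounded, the accuracy/plotting code is the more likely suspect.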
H: What is the difference between SGD classifier and the Logisitc regression?
To my understanding, the SGD classifier, and Logistic regression seems similar.
An SGD classifier with loss = 'log' implements Logistic regression and loss = 'hinge' implements Linear SVM. I also understand that logistic regression uses gradient descent as the optimization function and SGD uses Stochastic gradient descent which converges much faster. But which of the two algorithms to use in which scenarios? Also, how are SGD and Logistic regression similar and how are they different?
AI: Welcome to SE:Data Science.
SGD is an optimization method, while Logistic Regression (LR) is a machine learning algorithm/model. You can think of it this way: a machine learning model defines a loss function, and the optimization method minimizes/maximizes it.
Some machine learning libraries can make users confused about the two concepts. For instance, in scikit-learn there is a model called SGDClassifier, which might mislead some users into thinking that SGD is a classifier. But no, that's a linear classifier optimized by SGD.
In general, SGD can be used for a wide range of machine learning algorithms, not only LR or linear models. And LR can use other optimizers like L-BFGS, conjugate gradient or Newton-like methods. |
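A minimal sketch in scikit-learn to make the distinction concrete (X_train and y_train are placeholders for your data):
from sklearn.linear_model import LogisticRegression, SGDClassifier

lr = LogisticRegression()          # logistic regression, fitted by one of sklearn's batch optimizers
sgd = SGDClassifier(loss='log')    # the same linear/logistic model, fitted with stochastic gradient descent
# SGDClassifier(loss='hinge') would instead fit a linear SVM

# lr.fit(X_train, y_train)
# sgd.fit(X_train, y_train)
Both end up with a linear decision function; what differs is how the coefficients are found.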
H: What is the difference between multi-layer perceptron and generalized feed forward neural network?
I'm reading this paper: An artificial neural network model for rainfall forecasting in Bangkok, Thailand. The author created 6 models, 2 of which have the following architecture:
model B: Simple multilayer perceptron with Sigmoid activation function and 4 layers in which the number of nodes are: 5-10-10-1, respectively.
model C: Generalized feedforward with Sigmoid activation function and 4 layers in which the number of nodes are: 5-10-10-1, respectively.
In the Results and discussion section of the paper, the author concludes that :
Model C enhanced the performance compared to Model A and B. This suggests that the generalized feedforward network performed better than the simple multilayer perceptron network in this study
Is there a difference between these 2 architectures?
AI: Well you missed the diagram they provided for the GFNN. Here is the diagram from their page:
Clearly you can see what the GFNN does: unlike an MLP, the inputs are also fed directly to the hidden layers. While in an MLP the only way information can travel to the hidden layers is through the previous layers, in a GFNN the input information is directly available to the hidden layers.
I might add that this type of connection is used in ResNet CNNs, which increased their performance dramatically compared to other CNN architectures.
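To make the idea concrete, here is a rough sketch (not the exact network from the paper) of such input-to-hidden connections using the Keras functional API, with the same 5-10-10-1 layer sizes:
from keras.layers import Input, Dense, concatenate
from keras.models import Model

inp = Input(shape=(5,))
h1 = Dense(10, activation='sigmoid')(inp)
h2 = Dense(10, activation='sigmoid')(concatenate([inp, h1]))   # raw inputs reach this layer directly
out = Dense(1, activation='sigmoid')(concatenate([inp, h2]))
model = Model(inputs=inp, outputs=out)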
H: How to save and test CNN model on test set after training
My CNN model is trained on the training set and validated on the validation set; now I want to test it on the test set. Here is my code:
x_img = tf.placeholder(tf.float32, name='x_img')
y_label = tf.placeholder(tf.float32, name='y_label')
reshape = tf.reshape(x_img, shape=[-1, img_x, img_y, img_z, 1], name='reshape')
def CNN_Model(input):
conv1 = conv_layer(reshape, num_channels, n_f_conv1, name="conv1")
max_pool1 = maxpool_layer(conv1, name="max_pool1")
conv2 = conv_layer(max_pool1, n_f_conv1, n_f_conv2, name="conv2")
max_pool2 = maxpool_layer(conv2, name="max_pool2")
shape = 4*4*4*64
flattened = tf.reshape(max_pool2,shape=[-1, shape], name='flattened')
fc = fc_layer(flattened, shape, n_node_fc, name="fc")
dropout1 = dropout(fc, keep_rate, name="dropout1")
output_layer = output(dropout1, n_node_fc, num_classes, name="output_layer")
return output_layer
def train_CNN(input):
train_predict = CNN_Model(x_img)
with tf.variable_scope("cross_entropy", reuse=tf.AUTO_REUSE):
lose = tf.nn.softmax_cross_entropy_with_logits_v2(logits=train_predict, labels=y_label, name='cross_entropy')
cost = tf.reduce_mean(lose, name='reduce_mean_cost')
tf.summary.scalar("cost", cost)
with tf.variable_scope("optimization", reuse=tf.AUTO_REUSE):
optimizer = tf.train.AdamOptimizer(learning_rate, name='AdamOptimizer').minimize(cost)
init = tf.global_variables_initializer()
print("Starting session...")
with tf.Session() as sess:
sess.run(init)
all_time = 0
batch_size = 120
batch = 0
print("Starting training...")
for epoch in range(num_epochs):
train_batch = train_data[batch:batch_size]
batch += batch_size
batch_size += batch_size
start_time = time.time()
ep_loss = 0
for data in train_batch:
X = data[0]
Y = data[1]
_, c = sess.run([optimizer, cost], feed_dict={x_img: X, y_label: Y})
ep_loss += c
end_time = time.time()
all_time += int(end_time-start_time)
print('Epoch', epoch+1, 'completed out of',num_epochs,'loss:',ep_loss,
'time usage: '+str(int(end_time-start_time))+' seconds')
correct_predict = tf.equal(tf.argmax(train_predict, 1), tf.argmax(y_label, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32), name='reduce_mean_acc')
print("Validation accuracy:", accuracy.eval({x_img:[i[0] for i in validate_data],
y_label:[i[1] for i in validate_data]}))
print("Test accuracy:", accuracy.eval({x_img:[i[0] for i in test_data],
y_label:[i[1] for i in test_data]}))
I have a test dataset stored as test_data, just like train_data in the code above. I have tried to do it in more than one way, but I did not succeed. Can anyone share testing code with me, of course based on my code?
AI: I don't know exactly where your problem is, but based on the comments, take a look at the following line.
_, c = sess.run([optimizer, cost], feed_dict={x_img: X, y_label: Y})
feed_dict is used for passing data to your network. As you can see, X is the training data. You can replace it with the test data. You should also change the y_labels to the labels of the test data. |
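For completeness, a minimal sketch of evaluating the trained model on the test set inside the same tf.Session, mirroring the validation block you already have (for a large test set you would feed it in batches instead of all at once):
# run after training, with `accuracy` defined as in your validation block
test_x = [i[0] for i in test_data]
test_y = [i[1] for i in test_data]
print("Test accuracy:", sess.run(accuracy, feed_dict={x_img: test_x, y_label: test_y}))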
H: Navigating the jungle of choices for scalable ML deployment
I have prototyped a machine learning (ML) model on my local machine and would like to scale it to both train and serve on much larger datasets than could be feasible on a single machine. (The model was built with Python and Keras. It takes in a CSV table of inputs and spits out the corresponding CSV table of predicted outputs.)
My naive "vision" is that I'd have the model reside on a single (master) machine, whereas the data would be equally distributed among several units (whatever unit means: nodes in a "cluster", CPUs, GPUs, ... ?) The model would be projected onto these units and the learned parameters would somehow all synchronize back to the master unit. Similarly, in the case of serving, the same model would be applied to the data that resides on the different units. Does this "vision" sound reasonable? (I have had some experience with parallel computing with MPI and I vaguely remember that's how things work.)
If I were to start from a blank slate, what architecture/infrastructure should I choose to deploy my model in a scalable fashion? Below are some of the confusingly many options I have read about. (I hardly master what each of these things do, so please forgive me if it looks like a laundry list of disparate technologies.)
Kubeflow (i.e., Kubernetes)
PySpark
Amazon SageMaker
Distributed TensorFlow
Dask
Apache Mesos
REST API
Hadoop
... and dozens more
As a pure ML guy (read: Python, Keras, pandas guy coding on a laptop), I'm out of my depth with all the infrastructure jargon that comes with the above links. It's therefore overwhelming to find a starting point, or some kind of "Hello World" example I could relate to. All I want is an architecture from which I can transpose my code and have it run in an efficient and scalable manner. Which one does the job? Despite all the hype around ML, there does not seem to be any comprehensive map or comparative review of all these solutions. I did find some comparisons, for example between Spark and SageMaker, or Spark and Dask, but given my illiteracy in these subjects, they only add to the confusion.
AI: If you're looking for distributed training, you're probably looking for Apache Spark. It's not itself a training library (although it comes with Spark MLlib which implements most common algorithms in a distributed way), but Spark core is something other implementations often build on.
Pyspark is just the Python API for Spark. Mesos is a cluster manager, like YARN, that Spark can use. Hadoop is for our purposes here a storage system (HDFS) and YARN, so might have the data you're reading. Spark builds on that too.
Kubeflow is, AFAIK, a toolkit for running many relevant data science things on the Kubernetes resource manager. If you're using K8S this will help you get some common training and notebook tools running. I don't know how much it specifically assists distributing training.
Dask is a framework for distributing Python code but I haven't used it; I don't think it's a training library per se. Sagemaker is a hosted notebook and serving tool, mostly. (Plug: if you're interested in Sagemaker, but also using Spark, you'd probably like our Databricks platform even more.)
Many libraries have distributed implementations now. TensorFlow has its own distributed mode. Horovod from Uber also helps run distributed TensorFlow more efficiently. There's TensorFlowOnSpark too. (Plug again, because this is near and dear to my heart: HorovodEstimator = Horovod + Spark)
MXnet, xgboost also have distributed implementations on Spark. There's BigDL and deeplearning4j, also distributed deep learning for Spark.
And there's more. You're probably going to run into Spark no matter what as its even more commonly used for ETL and other processing related to data science. Then pick your preferred platform. Then pick some distributed training tools to try on your problem. |
H: Can I treat text review analysis as a regression problem?
I am playing with a dataset that contains tripadvisor restaurant reviews and their labels (either 1, 2, 3, 4 or 5 stars).
Initially I was thinking of treating it as a classification problem, applying softmax, cross-entropy and so on; however, upon second thought, the idea came to me to actually treat it as a regression problem, since generating a continuous output (such as 3.897) is perfectly fine (assuming of course that the original value would have been 4).
Does this make sense? or should I just hot-encode the answers in a vector of 5 elements (on per star?).
Also, assuming that regression is a correct way to treat this, linear activation in the last layer would be the way to go, and the metrics to look at would be the mean squared error and the mean absolute error.
UPDATE:
After browsing a bit more, I found something named Ordinal Regression. Actually, if one thinks about it a bit, everything makes sense. If I just treat the problem as a classification one, and I have a prediction saying that a given review has 1 star when in reality it was 4 stars, how would we penalize that?
If we use a classification setup with cross entropy, that is simply a misclassification error, and labeling something as 1 star when it should have been 4 stars is considered as incorrect as labeling something with 3 stars when it should have had 4 stars. As you can see this is not what we want: both predictions (1 and 3 stars) are incorrect, but one is way more incorrect than the other.
However, if we treat it as a regression problem, our cost function will clearly penalize a prediction of 1 star more than one of 3 stars (again, assuming that the correct label would have been 4 stars).
So given all that, I would say that regression makes sense in this case, as the categories are related to each other.
AI: Yes, regression makes sense.
Many MLers working on similar tasks (e.g. the Yelp challenge) use classification instead of regression because they collapse the label space to 2 (positive or negative) or 3 (adding neutral). For predicting numerical scores from 1 to 5, regression makes more sense.
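A minimal Keras sketch of the two output choices (the layer sizes, input_dim and optimizer here are placeholders; everything before the last layer is assumed to be whatever text-encoding network you already use):
from keras.models import Sequential
from keras.layers import Dense

def build_model(regression=True, input_dim=300):
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=input_dim))
    if regression:
        # one continuous output: being 3 stars off costs more than being 1 star off
        model.add(Dense(1, activation='linear'))
        model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    else:
        # 5-way softmax: every misclassification costs the same
        model.add(Dense(5, activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model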
H: What Does the Normalization Factor Mean in the AdaBoost Algorithm?
I am studying the AdaBoost algorithm. The update rule for a weak hypothesis is:
$D_{t+1}(i) = D_t(i)\exp(-\alpha_t y_i h_t(x_i)) / Z_t$
where $Z_t$ is a normalization factor chosen so that $D_{t+1}$ is a distribution.
What does the 'normalization factor' mean? Could I have an explanation with an example, please?
AI: The normalization factor is used to reduce any probability function to a probability density function with total probability of one. See Wikipedia.
Say your unnormalized values are [0.1, 0.2, 0.3, 0.2]. You normalize them by dividing by $Z_t = 0.1+0.2+0.3+0.2 = 0.8$, which gives the normalized values [0.125, 0.25, 0.375, 0.25] that sum up to 1.
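The same calculation in numpy, just to make it explicit:
import numpy as np

d = np.array([0.1, 0.2, 0.3, 0.2])   # unnormalized weights
z = d.sum()                          # normalization factor Z_t = 0.8
d_norm = d / z
print(d_norm)         # [0.125 0.25  0.375 0.25 ]
print(d_norm.sum())   # 1.0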
H: MinMaxScaler returned values greater than one
Basically I was looking for a normalization function part of sklearn, which is useful later for logistic regression.
Since I have negative values, I chose MinMaxScaler with: feature_range=(0, 1) as a parameter.
x = MinMaxScaler(feature_range=(0, 1)).fit_transform(x)
Then using sm.Logit trainer I got and error,
import statsmodels.api as sm
logit_model=sm.Logit(train_data_numeric_final,target)
result=logit_model.fit()
print(result.summary())
ValueError: endog must be in the unit interval.
I presume my values are out of (0,1) range, which is the case:
np.unique(np.less_equal(train_data_numeric_final.values, 1))
array([False, True])
How come? then how can I proceed.
AI: I am not sure why your MinMaxScaler didn't work, but here is a function that should scale your data into the desired range:
def rescale(data, new_min=0, new_max=1):
"""Rescale the data to be within the range [new_min, new_max]"""
return (data - data.min()) / (data.max() - data.min()) * (new_max - new_min) + new_min
Looking at the documentation of the MinMaxScaler, it seems my function above is the same as their method.
You could break your code down a little to explicitly compute each step on its own line. This might help find the origin of your problem. I tried it out and got the expected results:
In [1]: import numpy as np
In [2]: from sklearn.preprocessing import MinMaxScaler
In [3]: x = np.random.randint(0, 10, (10, 10)).astype(np.float)
In [4]: x # generate random data in range [0, 9]
Out[4]:
array([[ 1., 4., 5., 4., 6., 1., 8., 1., 8., 9.],
[ 3., 1., 4., 4., 6., 2., 5., 1., 0., 8.],
[ 2., 0., 6., 1., 5., 2., 5., 8., 8., 4.],
[ 8., 9., 2., 8., 5., 6., 0., 5., 0., 5.],
[ 1., 3., 2., 2., 3., 2., 4., 1., 7., 5.],
[ 7., 0., 8., 8., 3., 6., 6., 6., 4., 3.],
[ 4., 3., 4., 4., 7., 6., 4., 5., 6., 7.],
[ 9., 0., 8., 9., 7., 1., 2., 2., 4., 6.],
[ 7., 4., 2., 8., 6., 5., 2., 9., 9., 9.],
[ 7., 6., 9., 2., 9., 0., 1., 5., 7., 3.]])
In [5]: scaler = MinMaxScaler() # defaults to range [0, 1]
In [6]: scaler.fit(x) # compute the scaling factors
Out[6]: MinMaxScaler(copy=True, feature_range=(0, 1))
In [7]: scaled_data = scaler.transform(x) # scale the data
In [8]: scaled_data.shape # still the same shape
Out[8]: (10, 10)
In [9]: scaled_data.min() # min and max are 0 and 1 as expected
Out[9]: 0.0
In [10]: scaled_data.max()
Out[10]: 1.0 |
H: How to compute unseen bi-grams in a corpus (for Good-Turing Smoothing)
Consider a (somewhat nonsensical) sentence - "I see saw a see saw"
The observed bi-grams would be:
"I see""see saw""saw a"and,"a see".
My aim is to smoothen out the probability mass of the bi-gram probabilities by using Good-Turing smoothing. For this, I need to find the count of unseen bi-grams, i.e., bi-grams with a frequency count of 0.
How do I do this?
1) Would this be a list of all bi-grams formed by using 2 non-consecutive words? For example, "I saw", "saw saw", "a I", etc.?
2) Would repetitions of the same word be included as bi-grams? Eg. "I I", "see see", etc.?
AI: I just remembered that we create a table with all possible words as the header of each row and of each column. As a result, the list of all bi-grams would be all possible bi-grams formed by pairing any 2 vocabulary words (which includes pairs of the same word).
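A small sketch of counting seen and unseen bi-grams for the example sentence (any ordered pair of vocabulary words, including repeated words such as "see see", counts as a possible bi-gram):
from collections import Counter

tokens = "I see saw a see saw".split()
vocab = set(tokens)

observed = Counter(zip(tokens, tokens[1:]))   # seen bi-grams with their counts
possible = len(vocab) ** 2                    # all V*V ordered pairs: 16 here
unseen = possible - len(observed)             # bi-grams with a count of 0: 12 here

print(observed)
print(possible, unseen)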
H: Super basic logistic regression example
I am new to ML and I created a super basic logistic regression example with 4 points on the $x$ line that belong to two classes:
import numpy as np
from sklearn.linear_model import LogisticRegression

points = [[1, 1]]
points = points + [[2, 0]]
points = points + [[1.5, 1]]
points = points + [[2.5, 0]]
data = np.array([x[:-1] for x in points])
target = [int(x[-1]) for x in points]
alg = LogisticRegression()
alg.fit(data, target)
print (alg.coef_)
print (alg.intercept_)
print (alg.predict_proba([[1]]))
print (alg.predict_proba([[1.5]]))
print (alg.predict_proba([[2]]))
print (alg.predict_proba([[2.5]]))
I expected the model to understand that anything greater than or equal to 2 belongs to class 0, and anything smaller than or equal to 1.5 belongs to class 1. However, everything is predicted to be in class 0. Also, in this case, what is the meaning of the coef and intercept? More precisely, is there a way to deduce from the coef and intercept where the model thinks the points switch from class 0 to class 1?
AI: What is the meaning for the coef and intercept
A (binary) logistic regression algorithm tries to determine whether the data $x$ belongs to class 0 or class 1 by the value
$f(x)=\omega x+b$. If the value $f(x)>0$, the algorithm believe $x$ is more likely to be in class 1; while if $f(x)<0$, then $x$ is more likely to be in class 0.
In your code alg.coef_ is the $\omega$ above, and alg.intercept_ is the $b$ above.
Why everything is predicted to be in class 0
If you run your code, you should see the result alg.coef_=-0.354, alg.intercept_=0.307. Therefore your alg is calculating $f(x)=-0.354x+0.307$. Now if you plug in your data, i.e. $x=$1, 2, 1.5 and 2.5 respectively, you should get the value $f(x)=$-0.047,-0.401,-0.224 and -0.579 respectively.
As you see, all the four $f(x)$ values are smaller than 0, therefore your alg determines that they all belong be class 0.
But still why? Why the result is not what I expected?
Here is where regularization comes in. In the context of logistic regression, the learning algorithm adds a penalty so that the learned parameter, $w$, does not become too large. To be specific, by default it penalizes the $l_2$ norm of $w$. In your case $w$ is a scalar, so its $l_2$ norm is simply its absolute value.
In general, regularization is used to prevent overfitting. But that's another broad topic.
If you want your algorithm to behave as you expected (classify 1 and 1.5 to class 1, and classify 2 and 2.5 to class 0) , there are 2 ways.
You tell you program to use extremely weak regularization.
That can be achieved by passing a large value to the C parameter in alg = LogisticRegression() (default value is 1.0, see detail), e.g.
alg = LogisticRegression(C=1000)
In this case your algorithm should return alg.coef_=-9.894, alg.intercept_=17.156, and it can classify your four data points correctly. Notice that the absolute value of alg.coef_ is now much larger than before.
Provide your algorithm with more data.
For example, simply repeat your four data points 100 times:
data = [x[:-1] for x in points]*100
target = [int(x[-1]) for x in points]*100
Then you can get alg.coef_=-4.438, alg.intercept_=7.594 without setting C. In this case the same default regularization strength is still applied, but the additional data provide stronger evidence for your algorithm, allowing alg.coef_ to grow larger (in absolute value).
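Finally, to answer the last part of your question directly: the model switches classes where $f(x)=\omega x+b=0$, i.e. at $x=-b/\omega$. A small sketch:
boundary = -alg.intercept_[0] / alg.coef_[0][0]
print(boundary)   # with C=1000 this is about 17.156 / 9.894 = 1.73,
                  # between your class-1 points (1, 1.5) and class-0 points (2, 2.5)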
H: What is parts of speech technique in sentiment analysis?
In an article, I saw sentiment analysis using the Parts Of Speech (POS) technique. When I searched, I found some papers on POS but I couldn't understand what POS basically is. As I am new to sentiment analysis, please help me understand POS.
AI: Parts of Speech (POS)
This is what it is called when you label each of the words (often called tokens) of a sentence or many sentences. Usually they are labelled with grammatical descriptions, such as Noun, Adjective, Adverb. They can often get quite specific, also distinguishing e.g. between types of nouns (proper nouns etc).
You can then use these descriptions of the tokens as input to a model or to filter the tokens to extract only the parts you are interested in.
POS are usually parts of the output when we parse a block of text using an NLP toolkit, such as spaCy. Have a look here for their available POS.
Here is a snippet of the parse tree of the sentence: Apple is looking at buying a UK startup for $1 billion.
Apple has been recognised as a proper noun (NNP) as well as being the subject of the first verb (shown by the arrow labelled nsubj).
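A minimal sketch of how such POS tags and dependency labels can be obtained with spaCy (assuming the small English model en_core_web_sm is installed):
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Apple is looking at buying a UK startup for $1 billion.")

for token in doc:
    print(token.text, token.pos_, token.tag_, token.dep_)
# e.g. "Apple PROPN NNP nsubj" for the first token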
For a nice introduction to POS among many other terms within NLP, check out this article.
Sentiment Analysis Perspective
There are many many reasons to include POS in a sentiment model (some examples below), but they really all boil down to one overarching reason: polysemy. The definition of which is:
the coexistence of many possible meanings for a word or phrase.
This essentially says that words in different contexts can have different meanings. This is of course a massive gain in information that we can pass to a model!
The word duck can be a noun (the bird) or a verb (the motion, to crouch down). If we can tell a model which one of these it is in a given sentence, the model can learn to make a lot more sense out of the sentence.
Beyond distinguishing between meanings of single words, we can also simply use the tags based on their usage or placement. One example would be the adverb: however.
Our parser may be good enough to tell us that it is used in a particular sentence as a contrasting conjunction (which, technically, would be grammatically incorrect!). An example sentence could be:
I really love muffins, however, I hate strawberries.
We have two clauses: one before however and one after. The first clause is positive, the latter negative. If we have a scale of -5 to +5 for sentiment for each clause (perhaps the mean of each word in that clause) we could imagine scores such as +3 for the positive clause and -3 for the negative.
This is where I have seen some models (Vader, SentiStrength, etc.) using POS to scale those base scores. In our example, perhaps however would be used to increase the magnitude of the negative clause's score by 10%, giving it a final score of -3.3. Whether or not that makes sense depends on the use case, the data and probably the developers general experiences.
Summary
There are many uses for POS, you can imagine quite a few, whether to hand-tailor a sentiment model of just to produce more features. In any case, it is a process that extracts more information from the original raw text, applying langage models (like grammar!) that have been tested and are known to be robust for any official form of writing. |
H: Problem using Anaconda
I have installed Anaconda, but every time I open Terminal I have to give the command:
export PATH=~/anaconda3/bin:$PATH
How can I fix this issue?
AI: You can make sure that command is executed for every terminal (meaning Anaconda will be found) by adding it to your user's bash profile.
Open a terminal and follow these steps:
open the terminal profile: gedit ~/.bashrc
at the end of the file, add: export PATH=~/anaconda3/bin:$PATH
save the file (control+s) and close it
load the changed profile: source ~/.bashrc
Now this terminal window (and any new ones) should find Anaconda! See which version of Python is now the default, using which -a python. First in the list should be something like /home/username/anaconda3/bin/python
The Anaconda setup/installation usually asks you if you want to prepend Anaconda to start of your path, so whoever installed it must have said no or skipped that step. |
H: Replacing null with average in pyspark
I have a problem during upsampling operation in PySpark. My dataframe is:
df_upsampled.show()
+-------------------+------------------+
| et| average|
+-------------------+------------------+
|2018-08-15 00:10:00| 4.165999948978424|
|2018-08-15 00:15:00| null|
|2018-08-15 00:20:00|3.6580000072717667|
|2018-08-15 00:25:00| null|
|2018-08-15 00:30:00|0.9999999925494194|
What I want to do is, using Spark functions, replace the nulls in the "average" column with the mean of the previous and next values in that same column.
Wherever there is a null in the "average" column, it should be replaced with the mean of the previous and next value in the same column.
In this case, the first null should be replaced by (4.16599 + 3.658)/2 = 3.91, and so on for the remaining nulls.
What would be a good way to do this?
AI: That way I found is to add two columns to the same dataframe; one lagging and one leading. The idea is that the two variables of which average is to be computed can this way be placed in one row. Like this:
+-------------------+-------------------+-------------------+-------------------+
| et| average| prev_value| next_value|
+-------------------+-------------------+-------------------+-------------------+
|2018-08-14 04:10:00|0.11070000156760215| null| null|
|2018-08-14 04:15:00| null|0.11070000156760215|0.08800000175833703|
|2018-08-14 04:20:00|0.08800000175833703| null| null|
|2018-08-14 04:25:00| null|0.08800000175833703|0.10970000103116036|
|2018-08-14 04:30:00|0.10970000103116036| null| null|
Now, we can create a new dataframe from this such as wherever there is a null in column "average", it should take the average of the values from the same row of the next two columns.
After this, output will be like:
+-------------------+-------------------+-------------------+-------------------+
| et| average| prev_value| next_value|
+-------------------+-------------------+-------------------+-------------------+
|2018-08-14 04:10:00|0.11070000156760215| null| null|
|2018-08-14 04:15:00|0.09935000166296959|0.11070000156760215|0.08800000175833703|
|2018-08-14 04:20:00|0.08800000175833703| null| null|
|2018-08-14 04:25:00| 0.0988500013947487|0.08800000175833703|0.10970000103116036|
|2018-08-14 04:30:00|0.10970000103116036| null| null|
|2018-08-14 04:35:00|0.11205000076442957|0.10970000103116036|0.11440000049769879|
|2018-08-14 04:40:00|0.11440000049769879| null| null|
You can now .drop() the columns prev_value and next_value to get clean output dataframe.
The code to this is:
from pyspark.sql import functions as F
from pyspark.sql.window import Window
my_window = Window.partitionBy().orderBy("et")
df = df.withColumn("prev_value", F.lag(df.average).over(my_window)).withColumn("next_value",F.lead(df.average).over(my_window))
df = df.withColumn("average", F.when(F.isnull(df.average),((F.col('prev_value')+F.col('next_value'))/2)).otherwise(df.average))
df.show() |
H: What is the best way to visualize 10 Fold Cross Validation Scores?
I have trained a CNN model and I have applied 10-fold cross-validation because I don't have much data to train the classifier. Now I am unsure about how to visualize fold-wise results. Please suggest some visualization charts or techniques to display fold-wise results.
AI: Do you only have one single model? If you were only mixing the data up (and not trying different parameter values) then any plot might not be very useful. If performance is on the y-axis, what would be on the x-axis? The fold number itself doesn't really give any insight into the results (assuming random batch selection).
Are there some statistics you could compute for each of the 10 folds? If there is any metric you can compute for the x-axis, then using a simple line chart would help see how performance varies with that metric. For example, if you have images, you could compute the average pixel value, or the number of true positives/negatives in that batch (if that idea applies to your problem?). An example from Scikit Learn is this, where the gamma parameter of an SVM is altered on the x-axis:
Additionally, if you have several models (e.g. CNNs with varying numbers of layers / depth), you could use a nice box-whisker plot. Seaborn offers some nice variations. Here is one example that would show four different models:
You could make a box plot for a single model too, which wouldn't be so appealing, but might look like this:
Again, I'm not really sure what you would be putting on the axes. |
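For completeness, a minimal sketch of such a box plot for a single model's fold scores with seaborn (the scores below are made-up values, standing in for the 10 accuracies from your cross-validation):
import seaborn as sns
import matplotlib.pyplot as plt

scores = [0.81, 0.79, 0.84, 0.80, 0.83, 0.78, 0.82, 0.85, 0.80, 0.79]

sns.boxplot(y=scores)
plt.ylabel('Fold accuracy')
plt.show()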
H: Using K-fold cross-validation in Keras on the data of my model
I would like to use K-fold cross-validation on the data for my model.
My code in Keras is:
a = np.array(
[[283, 95, 72, 65],
[290, 100, 80, 72],
[120,170,130,122],
[100,230,110,200],
[300,100,200,500]]
)
X = a[:,0:2]
Y = a[:,3]
from sklearn.model_selection import KFold, cross_val_score
k_fold = KFold(n_splits=3)
model = models.Sequential()
model.add(Dense(12, input_shape=(3,)))
model.add(LeakyReLU())
model.summary()
cross_val_score(model,X,Y)
But it gives this error:
If no scoring is specified, the estimator passed should have a 'score' method. The estimator does not.
And when I select a scoring parameter as:
cross_val_score(model,X,Y, scoring= 'accuracy')
It makes another error:
TypeError: Cannot clone object '' (type ): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.
How can I use K-fold cross-validation on this model?
Thank you
AI: The cross_val_score seems to be dependent on the model being from sk-learn and having a get_params method. Since your Keras implementation does not have this, it can't provide the necessary information to do the cross_val_score. Try the manual k-fold cross validation found here: https://machinelearningmastery.com/evaluate-performance-deep-learning-models-keras/ |
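A rough sketch of that manual approach adapted to your arrays (the model is rebuilt and compiled inside the loop so every fold starts from scratch; the layer sizes, loss and metric are placeholders, not a recommendation):
import numpy as np
from sklearn.model_selection import KFold
from keras.models import Sequential
from keras.layers import Dense

scores = []
for train_idx, test_idx in KFold(n_splits=3).split(X):
    model = Sequential()
    model.add(Dense(12, input_dim=X.shape[1], activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    model.fit(X[train_idx], Y[train_idx], epochs=100, verbose=0)
    scores.append(model.evaluate(X[test_idx], Y[test_idx], verbose=0))

print(np.mean(scores, axis=0))   # average loss and metric over the folds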
H: Is there any consensus on choosing an appropriate ML approach?
I am studying data science at the moment and we are taught a dizzying variety of basic regression/classification techniques (linear, logistic, trees, splines, ANN, SVM, MARS, and so on....), along with a variety of extra tools (bootstrapping, boosting, bagging, ensemble, ridge/lasso, CV, etc etc). Sometimes the techniques are given context (eg. suitable for small/large datasets, suitable for a small/large number of predictors, etc) but for the most part, it seems like for any regression or classification problem there exist a dizzying array of options to choose from.
If I started a job in data science right now and was given a modelling problem, I don't think I could do any better than just try all the techniques I know with basic configurations, evaluate them using cross-validation and pick the best. But there must be more to it than this.
I imagine an experienced data scientist knows the catalogue of techniques well and follows some mental flowchart to decide which techniques to try, instead of mindlessly trying them all. I imagine this flowchart is a function of a) number of predictors; b) variable types; c) domain knowledge about possible relationships (linear/non-linear); d) size of the dataset; e) constraints around computation time and so on.
Is there any such agreed on, conventional flowchart to follow, to choose the techniques? Or does it really boil down to "try lots of things and see what works best on the desired measure eg. cross-validation"?
AI: My data science studies started as a Masters in Applied Statistics. One of the courses was in machine learning and it had a similar approach to what you are describing. So, I can empathize a little with your current view. But, just like other things you might have learned in life, the way you do things in an academic setting and the way you do things in a business environment (i.e. for a client) are completely different. Here's what I've learned since my initial studies:
1 - Learn Python
Sure, there's other tools out there and they're fine (I used to write R code with the best of them) but Python is where the future is at, period. Plus, very few tools scale as well as Python and that's important if you want to work on some really cool stuff.
2 - It all comes down to implementation
Guess what? All those things you're learning now (confusion matrices, factor reduction, etc) don't mean a thing to your clients. They're going to just look at you and say, "What's the product? When are you going to deploy something to my phone? Where's my webapp to click on?". A large part of your job will be to turn all your work into a product and you will find yourself wearing a quasi-software-developer hat. This is also another good reason to learn Python.
3 - Data pipelines take time
A LOT of your work will be on data manipulation and just making sure that the data pipelines you need are there. Sure, you have a database - but how are you going to update it? What pre-processing do you need? Where are you results stored? You will spend A LOT of time figuring out this stuff. You'll miss your school days when datasets were given to you in a nice and clean fashion :)
4 - Neural networks kick ass
Once you take a bite of this apple, it's hard to go back :). Learn Keras and enjoy the ride. After a while, you'll have to remind yourself what decisions trees are :)
5 - Model searches are much easier now
To be 100% clear, the "model search" approach you are doing now is VERY valuable experience. You should definitely work hard at those classes. However, if you have the time, look at either (1) Data Robot or (2) Watson Analytics. Both of those packages do, essentially, the same thing. They will take your dataset and find you the best model for it. All of the items you described above are done for you in a matter of seconds. It's almost scary how fast they are and they're very effective in helping you reduce your work. However, be warned that these packages only support supervised data. You will still have to do it the old-fashioned way for unsupervised data (or label some and use a neural network).
6 - I still use the theory behind other models
Even if I use neural networks a lot, the other models are still useful. You'll still use linear regression or decision trees for basic problems. It's also helpful when I decide to read some research papers on arXiv or wherever. So, I'll still use them for my own study and understanding, but that's about it.
Have fun! |
H: How far can one go with excel?
In my business we handle all analytics through Excel. This includes mostly scheduling, production planning and accounting operations. We are currently looking into adding a bit of predictive modelling; Excel does suffice up to a point, but it doesn't have support for complex models.
As I see it, the main benefits of Excel are its ease of use and the ease with which you can find and train people to get accustomed to it. On the other hand more sophisticated environments (e.g. R, python) can handle a wider variety of analytics tasks, but require better trained individuals.
I have also read this question on if Excel is sufficient for Data Science and while it is a bit out of scope for my purpose, the conclusion is that tools like R and Python are much better than Excel.
My question is (in the context of data analytics): "How much far can we get with Excel, without needing to change to a more sophisticated tool?" or "At what point do we need to migrate from Excel to -let's say- R?"
Thank you very much!
AI: TL;DR
If you have unlimited time and use a 64-bit version of Excel, you can get as far with Excel as any other data analysis tool.
Time
I mention time as my first factor, because Excel only has basic functionality built in, such as summing, random number generation, lookups etc. These correspond to a kind of standard library, which Python and R also have. Using these basic functions, with enough time, you can build up pretty much any analysis tool out there. Don't expect good runtime performance. In Python and R, however, there are many many packages that people have already created, which perform well and have been tested by lots of people and so are trusted.
Memory
My second point about 64-bit Excel is because that allows a lot more memory to be used by a single instance of Excel. It allows many more cells to be filled. Using 32-bit excel will limit you to projects of around 2Gb. That is a fair amount of data, but it is a hard limit.
Then steps in 64-bit Excel, which basically means no more memory limits - only those that come from your hardware, and that means Python and R will also be stopped in their tracks.
To provide some numbers, we can simply compute the number of bytes addressable in each version. Here in Python's interactive prompt:
In [1]: (2**32) / 10**9 # dividing by 10^9 gives the result in gigabytes: ~4.3 Gb
Out[1]: 4.294967296
In [9]: (2**64) / 10**18 # dividing by 10^18 gives the result in exabytes: ~18.4 Eb
Out[9]: 18.446744073709553
Eb means exabytes. This means roughly 18.4 billion gigabytes.
I notice the computation shows 4Gb for 32-bit, while I said 2 Gb above. I read there is a kind of hard limit on the 32-bit version. I don't know or care why that is... I use Python and R ;-)
In any case, I hope that is enough to convince you that memory is not an issue, if you are a brave person willing to invest all your time building tools from the ground up!
Summary
If you have complicated business logic, where the actual analysis is mathematically simple, stick to Excel. Business people will love you for it.
If you want to do more than linear regression, use Python or R.
Caveats
As far as I know you cannot run remote or distributed tasks using Excel, whereas that is relatively easy using Python and (a little less so in my opinion) R. So at that point, I would give up on Excel. You'd likely have to implement your own tools in C# or C++ using the .Net framework. |
H: LOOKUP using 2 dataframes in Python
I have 2 dataframes:
df1
Id CategoryId
1 A
1 B
2 A
2 E
2 F
df2:
Id A B C D E F
1
2
I want to do a lookup which will help me fill up the values in df2 based on the values of df1
If df1 has id = 1 and CategoryId = A then I want df2 row 1 Column A to say Y else N
Final df2 should look like:
Id A B C D E F
1 Y Y N N N N
2 Y N N N Y Y
I am not sure how to do this in Python.
I would really appreciate any help on this!
Thanks
AI: To get the exact answer you provided, included entries for negative cases, you will have to create a dataframe in advance that is all possible Id and Categories values. No pain, no gain!
Let's walk through my solution, starting with imports:
In [1]: from itertools import product # will compute Id/Category possibilities
In [2]: import pandas as pd
Create your example dataframe
In [3]: df1 = pd.DataFrame(data={'Id': [1, 1, 2, 2, 2], 'CategoryId': ['A', 'B', 'A', 'E', 'F']})[['Id', 'CategoryId']]
In [4]: df1
Out[4]:
Id CategoryId
0 1 A
1 1 B
2 2 A
3 2 E
4 2 F
Here you must provide the possible values for the "Id" and "CategoryId" columns
# the ids you showed
In [5]: ids = range(1, 3) # gives [1, 2]
# Either manually create the category values...
In [6]: cats = ['A', 'B', 'C', 'D', 'E', 'F']
# Or get jiggy with some Python to be more flexible:
In [7]: cats = [chr(c) for c in range(ord('A'), ord('F') + 1)]
Knowing the possible values for each column, we can now compute all possible combinations of those:
In [8]: possibilities = list(product(ids, cats))
In [9]: possibilities
Out[9]:
[(1, 'A'),
(1, 'B'),
(1, 'C'),
(1, 'D'),
(1, 'E'),
(1, 'F'),
(2, 'A'),
(2, 'B'),
(2, 'C'),
(2, 'D'),
(2, 'E'),
(2, 'F')]
Next we can pre-allocate a results table using the possible Id and Category values:
In [8]: results = pd.DataFrame(index=ids, columns=cats).fillna(0)
In [9]: results
Out[9]:
A B C D E F
1 0 0 0 0 0 0
2 0 0 0 0 0 0
I pre-filled the dataframe with 0 values – you could use 'N'.
Now it is a simple matter of checking to see if each possible combination appears or not and filling the coresponding cell in the results dataframe with your desired value (I use a 1 – you could make it 'Y')
In[10]: for i in list(df1.itertuples()):
if (i.Id, i.CategoryId) in possibilities:
results.loc[i.Id, i.CategoryId] = 1
Check the output:
In [11]: results
Out[11]:
A B C D E F
1 1 1 0 0 0 0
2 1 0 0 0 1 1
That's a bingo! |
H: Minkowski distance with Missing Values
Im currently doing a subject for data science, and have the following point that im trying to understand.
We are looking to calculate distance in data sets where values may not be present. Now i know that R does this by default, but we are learning the "how" behind the what.
Literature we are give states.
"The idea is to normalise the inner sum by the number of valid (non-missing) terms, so distances computed from different amounts of terms are commensurable. Otherwise, distances computed with fewer missing values tend to be artificially larger."
Given the following data set.
We have the following sample formulas for Euclidean and Manhattan
Euclidean distance: $d(x_1, x_2) = \sqrt{\frac{4}{3}\left((2-7)^2 + (1-(-4))^2 + (0-8)^2\right)} = \sqrt{\frac{4}{3}\cdot 114} = 12.328$
Manhattan: $d(x_3, x_4) = \frac{4}{2}\left(|3-10| + |2-5|\right) = \frac{4}{2}\cdot 10 = 20$
Am I right in assuming that the normalization term for Euclidean is based on the number of non-missing terms in each row?
How do you derive the normalization term in the Manhattan formula?
AI: The way you typed the formulas is a bit confusing, but here's a go at interpreting them.
Euclidean distance is defined by: $d(x, y) = \sqrt{\sum_{i=1}^{p} (x_i - y_i)^2}$
Manhattan distance is defined by: $d(x, y) = \sum_{i=1}^{p} |x_i - y_i|$
Handling missing terms is an independent issue. The way it is handled in the example is by taking the average of the present features (dividing by their number) and multiplying by the total number of features as a way to bring the data to a scale comparable to datapoints without missing features.
Now let's have a closer look at the example. The total number of features is 4. To calculate a distance between datapoints you can only use features that are present in both:
The first (Euclidean) formula finds 3 features present in both x1 and x2. Thus your coefficient is 4/3.
The second (Manhattan) formula finds two features present in both x3 and x4. Thus the coefficient is 4/2. |
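A small numpy sketch of the same calculation, with the feature values read off the worked Euclidean example above and np.nan standing in for the missing fourth feature:
import numpy as np

x1 = np.array([2, 1, 0, np.nan])
x2 = np.array([7, -4, 8, np.nan])

mask = ~np.isnan(x1) & ~np.isnan(x2)   # features present in both points (3 of them)
p = len(x1)                            # total number of features (4)

euclidean = np.sqrt(p / mask.sum() * np.sum((x1[mask] - x2[mask]) ** 2))
manhattan = p / mask.sum() * np.sum(np.abs(x1[mask] - x2[mask]))
print(euclidean)   # 12.328..., matching the example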
H: Display Images (url) Inside Pandas Dataframe
I would like to display images (mostly jpg and png formats) directly from their url link inside a pandas dataframe. Imagine I already have the following dataframe:
id image_url
1 http://www.nstravel.ro/wp-content/uploads/2015/02/germany-profile_6023_600x450.jpg
2 https://www.nh-hotels.com/multimedia/images/cityscape-of-berlin_retoque_600x458-tcm41-132903-32.jpg
3 https://www.scandichotels.com/imagevault/publishedmedia/8q33xlshjl4o9d5ftp3r/germany-berlin-berliner-dom-cathedral-in-the-eveni.jpg
I would like to have the actual image displayed in another column (properly resized to fit).
I am aware that I could do it differently, e.g. not using pandas and go with grids using matplotlib, but for ease of illustration and presentation I would prefer to show it within a table (dataframe).
I have found these solutions: solution1, solution2. These suggested solutions, while seemingly very relevant, did not work for me. I even tried with images from my local disk, but I get this using solution2:
Python: 3.6.5, Pandas: 0.23.0, Jupyter: 4.4.0
Thanks!
AI: Actually solution2 worked; I just had to be a bit more patient. I am posting it here in case someone else has difficulties, like me, getting this to work:
import pandas as pd
from IPython.display import Image, HTML
def path_to_image_html(path):
'''
This function essentially converts the image url to
'<img src="'+ path + '"/>' format. And one can put any
formatting adjustments to control the height, aspect ratio, size etc.
within as in the below example.
'''
return '<img src="' + path + '" style="max-height:124px;"/>'
HTML(df.to_html(escape=False, formatters=dict(column_name_with_image_links=path_to_image_html)))
Note that df is your actual dataframe name, column_name_with_image_links is the column name that contains all image urls, and path_to_image_html is the above function. |
H: How to correctly pass Word2Vec vectors as input to an LSTM
I am trying to build a text classifier using an LSTM which, in its first layer, has weights obtained from a Word2Vec model.
In order to build a matrix containing the indexes of each word for each sentence, I have tried: (as mentioned here)
X_tr_word2vec = np.array(X_tr_word2vec)
y_tr_word2vec = np.array(y_tr_word2vec)
train_x = np.zeros([X_tr_word2vec.shape[0], max_sentence_length], dtype=np.int32)
train_y = np.zeros([y_tr_word2vec.shape[0]], dtype=np.int32)
for i, sentence in enumerate(X_tr_word2vec):
for j, word in enumerate(sentence[:-1]):
train_x[i,j] = model_word2vec.wv.vocab[word].index
but, when I run the code, I get this error: KeyError: 'enquiringly'. What does it mean?
I suppose that it is about a wrong train_x dimension.
Update:
I have trained the Word2Vec model beforehand, on the entire training set:
model_word2vec = models.Word2Vec(X_tr_word2vec, size=150, window=9)
AI: That means the word 'enquiringly' is not in your word embedding vocabulary vocab. This can happen even though you trained on the same sentences, because gensim's Word2Vec drops words that occur fewer than min_count times (the default is 5).
For out-of-vocabulary (OOV) words, there is usually an embedding vector dedicated to them. Try to find that special symbol in the vocab and use its corresponding embedding vector.
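A minimal sketch of one way to guard against this while building train_x (here 0 is used as a hypothetical index reserved for unknown/padding words; alternatively you could simply skip words that are not in the vocabulary):
for i, sentence in enumerate(X_tr_word2vec):
    for j, word in enumerate(sentence[:-1]):
        if word in model_word2vec.wv.vocab:
            train_x[i, j] = model_word2vec.wv.vocab[word].index
        else:
            train_x[i, j] = 0   # assumed index for out-of-vocabulary words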
H: Pandas vs Linux Datascience
I joined a data science learning community in my college and we are using Linux terminal commands and awk commands to practice gathering information from big datasets stored in CSV files: about 7,140,596 rows by 29 columns in a single file.
A sample question would be: "What was the average arrival delay (in minutes) for flights in 2005?", where we have to sum the value of the delay for every row and divide by the total number of rows.
I know that similar data manipulation can be done in Pandas in a Jupyter notebook and was wondering what are the advantages and disadvantages of each method.
Thank you!
AI: Pandas dataframes have many many more high level functions integrated right into the base classes that store the data for you.
Some of the commandline tools can be pretty powerful for manipulating text efficiently (Perl in particular), but I would argue that the learning curve is quite steep and the interactive experience is not as friendly. For one thing, it isn't easy to simply get a glimpse of your data or create an attractive plot.
While I admit that I am not a pro awk/sed or Perl user, I am pretty sure it will be a little less intuitive in those tools/languages to do something like this hypothetical computation, which involves numerical data and text:
In [1]: import pandas as pd
In [2]: import numpy as np
# Create a DataFrame holding some data over a time range
In [3]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo']*4,
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three']*4,
'C' : np.random.randn(32)},
index=pd.date_range('01.01.2018', periods=32))
In [4]: df.head()
Out[4]:
A B C
2018-01-01 foo one 0.965554
2018-01-02 bar one 0.053814
2018-01-03 foo two 1.075539
2018-01-04 bar three -0.999941
2018-01-05 foo two -1.940361
Now imagine we want to group the rows so that we have just the rows where column A contains foo in one table, and another table with just the rows containing bar.
From those two tables, we only care about column C. We want to compute the moving average over a 5 day time-frame. The moving average will leave some NaN values at the beginning, so we want to drop those time-steps.
Oh, and we want to visualise that!
In[5]: df.groupby('A')['C'].rolling(5).mean().dropna().plot(grid=True, legend=True)
From that one line of code, we get this:
The above also highlights the abundance of other powerful and specialised packages available within the Python environment - here I used numpy in conjunction with Pandas.
For manipulating text files, perhaps cleaning up scraped text and parsing large amounts of text using regular expressions, it might be faster to use one of the commandline options, but as soon as you want to do any data science, I would really recommend using some specialised tools, like Pandas. |
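To tie this back to the sample question above, a minimal pandas sketch might look like the following; the column names 'Year' and 'ArrDelay' are assumptions about the CSV schema and may differ in your files:
import pandas as pd

# Only load the columns we need; the names are assumed, check your header row
df = pd.read_csv('flights.csv', usecols=['Year', 'ArrDelay'])
mean_delay_2005 = df.loc[df['Year'] == 2005, 'ArrDelay'].mean()
print(mean_delay_2005)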
H: What is the difference between upsampling and bi-linear upsampling in a CNN?
I am trying to understand this paper and am unsure of what bi-linear upsampling is. Can anyone explain this at a high-level?
AI: In the context of image processing, upsampling is a technique for increasing the size of an image.
For example, say you have an image with a height and width of $64$ pixels each (totaling $64 \times 64 = 4096$ pixels). You want to resize this image to a height and width of 256 pixels (totaling $256 \times 256 = 65536$ pixels). In the new, larger image you only know the value of $1$ out of every $16$ pixels. How are you going to calculate the values for the rest?
Well, the methods that do that for you are called upsampling techniques. The most common are:
Nearest-Neighbor: Copies the value from the nearest pixel.
Bilinear: Uses all nearby pixels to calculate the pixel's value, using linear interpolations.
Bicubic: Again uses all nearby pixels to calculate the pixel's values, through polynomial interpolations. Usually produces a smoother surface than the previous techniques, but it's harder to compute.
Other more complex resampling algorithms, e.g. Lanczos.
An article explaining the differences among image resampling techniques can be found here. |
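A minimal sketch of the three options using SciPy's ndimage.zoom on a toy array (order=0 is nearest-neighbor, order=1 is bilinear, order=3 is a cubic spline, comparable to bicubic); the random 64x64 "image" is just a placeholder:
import numpy as np
from scipy.ndimage import zoom

img = np.random.rand(64, 64)          # toy 64x64 image

nearest  = zoom(img, 4, order=0)      # 256x256, nearest-neighbor
bilinear = zoom(img, 4, order=1)      # 256x256, bilinear
bicubic  = zoom(img, 4, order=3)      # 256x256, cubic spline

print(nearest.shape, bilinear.shape, bicubic.shape)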
H: Spatial Transformer Networks: how is theta differentiable?
In the paper Spatial Transformer Networks, the localization network's output, theta, is differentiable, given the current input feature map. How is this theta differentiable?
AI: In spatial transformer networks, the idea of the localization network is to learn a transformation that brings the input into a canonical form. Think of the network's output $\theta$ as an activation which is passed to another layer. The key point is that the sampling sequence of operations is differentiable: $\theta$ is just an output which specifies how the sampling should be performed. The sampling operation usually used is bilinear interpolation, which, although not differentiable at every point due to the floor and ceiling functions, can still backpropagate the error and is differentiable with respect to most of its inputs. So consider $\theta$ simply as an activation that is passed to the bilinear sampler to transform the input of the next network; bilinear sampling is considered differentiable.
To understand it better, consider the following figure which illustrates the process inside a spatial transformer easier than the one in the original paper.
As the figure shows, the output of the localization network, $\theta$, is passed to the sampling grid. The sampling grid is multiplied by $\theta$ to find the appropriate regions in the original image. Note that $\theta$ is not multiplied by the original image directly: if you multiplied it by the original image there would be multiple choices for a single pixel, whereas multiplying the output of the localization network by the sampling grid gives exactly one choice for each entry. Next, the sampled grid and the original image are used in the interpolation to find the transformed image. In the end, $\theta$ behaves just like any other activation.
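To make this concrete, here is a minimal, illustrative sketch (not the paper's code) of how $\theta$ feeds a differentiable sampler, using PyTorch's affine_grid and grid_sample; the input sizes and the identity transform are arbitrary:
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)                 # input feature map (N, C, H, W)
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])      # 2x3 affine parameters (identity here)

grid = F.affine_grid(theta, x.size())          # sampling grid computed from theta
out = F.grid_sample(x, grid, mode='bilinear')  # bilinear sampling of x at the grid

# Because affine_grid and grid_sample are differentiable (almost everywhere),
# gradients flow back through them into theta and the localization network.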
H: How to use build_analyzer in sklearn feature extraction
I'm trying to get a list of n-gram tokens for a text.
Ex: 'How to use build_analyzer in sklearn feature extraction '
output: ['How', 'use', 'build_analyzer', 'sklearn', 'feature', 'extraction', 'How use', 'use build_analyzer', 'build_analyzer sklearn', 'sklearn feature', 'feature extraction']
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
vectorizer = TfidfVectorizer(stop_words = 'english',ngram_range=(1, 2), token_pattern=r'\b\w+\b', min_df=1)
df['Text'].apply(lambda x : vectorizer.build_analyzer(x))
TypeError: build_analyzer() takes 1 positional argument but 2 were given
AI: build_analyzer() returns a callable that lets you extract the tokenizing step from the transformation pipeline wrapped in the CountVectorizer or TfidfVectorizer. You can do something like this:
analyze = vectorizer.build_analyzer()
df['Text'].apply(lambda x: analyze(x)) #or df['Text'].apply(analyze) |
H: How to add numbers to the axes of a graph?
I am trying to make a graph of k (as in k-means) vs error, and I can't get it to show the actual number of k on the x-axis (1-15, not just the even numbers), nor any numbers at all if I add a title to X.
I have looked at the data frame documentation and the matplotlib axes documentation, and I couldn't find anything telling me how to get it to show the numbers? Even if I have to manually specify them?
Please see images below:
rdf.plot.line(title='Number of clusters Vs mean accuracy')
rdf.plot.line(title='Number of clusters Vs mean accuracy',
x='parameters',
y='mean_validation_score')
I don't understand how to get the x axis to show all 15 numbers (1-15).
AI: Use xticks. e.g.
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(1, 16)
y = -60000 * (3 + np.log(1 / x))
plt.plot(x, y, 'b')
plt.xticks(x)
plt.show()
H: data analysis EDA issues, indent or type
I am a newbie in DS world, right now i am working on some EDA practice, and run into an issue here.
Here is my code:
## Convert some columns to numeric values
for column in cols:
    auto[column] = pd.to_numeric(auto[column])
return auto
auto = read_auto_data()
error 1:
File "", line 4
return auto
^
SyntaxError: 'return' outside function
I tried to change return's indent:
## Convert some columns to numeric values
for column in cols:
    auto[column] = pd.to_numeric(auto[column])
    return auto
auto = read_auto_data()
error 2:
TypeError Traceback (most recent call last)
in ()
1 ## Convert some columns to numeric values
2 for column in cols:
----> 3 auto[column] = pd.to_numeric(auto[column])
4 return auto
5
TypeError: 'function' object is not subscriptable
Can anyone help me understand what went wrong here?
Thanks
AI: Try something like this (the function also needs to know what cols is, so pass that in as another parameter):
def read_auto_data(auto, cols):
    for column in cols:
        auto[column] = pd.to_numeric(auto[column])
    return auto

auto = read_auto_data(auto, cols)
H: Help with creating dimensions/features
It is quite hard to name the title properly as I just started to learn ML, will try to explain here. I want to practice ML by creating Movie suggestion algorithm. I came up with the following list of dimensions/features:
Rating
Number of votes
Genre
Actors
Directors
Writers
Year
Combination of Actors/Directors/Writers working in the team
Combination of Year and Actor or Director or Writer
Actor or Director or Writer working in some genre
There is no problem for me with numeric fields, but in the case of actors I have multiple values. How do I create a feature for the actors?
AI: You could create a "flag" variable for particular actors. Each actor would have their own column in your data and the column would have a 1 if the actor was in that film and a 0 if the actor was not in that film.
Of course, this creates lots of columns and can be tedious to code, but binary indicator variables can be really useful if you want to build a predictive model - worth it in my opinion.
Example:
actor1 actor2 actor3
movie1 1 0 0
movie2 0 1 1
movie3 0 1 0
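If you happen to be building this in Python/pandas, scikit-learn's MultiLabelBinarizer produces exactly this kind of indicator table without typing out every column. This is only a sketch; the column name 'actors' and the sample values are hypothetical:
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical data: each movie has a list of actors
movies = pd.DataFrame({'actors': [['actor1'], ['actor2', 'actor3'], ['actor2']]})

mlb = MultiLabelBinarizer()
flags = pd.DataFrame(mlb.fit_transform(movies['actors']),
                     columns=mlb.classes_, index=movies.index)
print(flags)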
Edit Re: Director/Actor
It is probably unrealistic to create a variable for every permutation of director/actor combinations. If you want to see how the variables interact you can code interactions into your model when you build it. If you were to do that in R it would look like this:
movie.predictor <- lm(suggestion ~ actor1 + actor2 + director1 + director2 +
actor1:director1 + actor2:director1 + actor1:director2 +
actor2:director2,
data = movies.data)
The ":" in R asks the lm(...) function to consider the interaction of those two variables in the regression problem, assuming you are doing regression. Don't worry, the same ":" technique works across several model functions. Just read the documentation for your preferred language and package and look for how to code "variable interactions".
Of course, now you are typing out all of the permutations of variable interactions. You might also want to consider a clustering algorithm to group actors and directors together. I am not a movie expert, but it seems like directors have favorite actors (and vice versa), so I would expect there to be clusters of directors and actors with a little overlap here and there.
H: Reinforcement learning: Discounting rewards in the REINFORCE algorithm
I am looking into the REINFORCE algorithm for reinforcement learning. I am having trouble understanding how rewards should be computed.
The algorithm from Sutton & Barto:
What does G, 'return from step t' mean here?
Return from step t to step T-1, i.e. R_t + R_(t+1) + ... + R_(T-1)?
Return from step 0 to step t?, i.e. R_0 + R_1 + ... + R_(t)?
AI: What does G, 'return from step t' mean here?
Return from step t to step T-1, i.e. R_t + R_(t+1) + ... + R_(T-1)?
Return from step 0 to step t?, i.e. R_0 + R_1 + ... + R_(t)?
Neither, but (1) is closest.
$$G_t = \sum_{i=t+1}^T R_i$$
i.e. the sum of all rewards from step $t+1$ to step $T$.
You are possibly confused because the loop for REINFORCE goes from $0$ to $T-1$. However, that makes sense due to the one step offset from return to the sum of rewards. So $G_{T-1} = R_T$ and $G_{T} = 0$ always (there is no future reward possible at the end of the episode). |
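For the discounted case mentioned in the title, a common implementation pattern (a sketch, not quoted from Sutton & Barto) is to compute all the returns in one backward pass over the episode's rewards:
def compute_returns(rewards, gamma=0.99):
    """Return [G_0, ..., G_{T-1}] where G_t = sum_{i=t+1}^{T} gamma^(i-t-1) * R_i."""
    returns = []
    G = 0.0
    for r in reversed(rewards):   # rewards = [R_1, R_2, ..., R_T]
        G = r + gamma * G
        returns.append(G)
    return list(reversed(returns))

print(compute_returns([0.0, 0.0, 1.0], gamma=0.9))  # G_0, G_1, G_2 ~ [0.81, 0.9, 1.0]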
H: What is new population in genetic algorithm?
Here is my (mis?)understanding of genetic algorithm:
Create n individuals. This is initial population
Calculate fitness of each individual in this population
for i in range(n):
select two individuals randomly with replacement from population with
probability of selection of each individual proportional to its
fitness value
cross over the two selected individuals to create two offsprings
introduce mutation
Calculate fitness value for each offspring
Now you have the initial population of n and n offspring.
What comes next?
Is my new population the n offspring, or the n fittest individuals of the 2n (n population + n offspring) available?
AI: In the simplest implementations, you simply discard the old population and maintain a population of size $n$ in each generation.
There is lots of variation on this though, any of which could be useful in practice. As a start to show the range of possibilities, you can do some or all of the following:
Keep a top number (say $m$) of the original population, and only generate $n-m$ new offspring in each generation.
Generate offspring in small groups and assess them. They must beat a member of the current population on the fitness measure in order to find a place (this can work nicely in round-robin tournaments, where the fitness measure is winner of a game).
Keep snapshots every so many generations (e.g. every 10 generations) of the whole population or just the best performing individuals. You may use these snapshots to assess long-term improvement or as a "gene pool" to keep good performers around in some form for a while longer than a single generation.
Variations like these are hyperparameters for your GA. They might help in certain circumstances, or be a hindrance in others. So starting with the simple "everything is replaced" scheme is recommended initially, especially if you are learning genetic algorithms.
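To make the simple "everything is replaced" scheme concrete, here is a minimal Python sketch; select, crossover and mutate stand in for whatever operators you are using and are not part of the original answer:
def next_generation(population, fitness, select, crossover, mutate):
    """One generational step: the n offspring fully replace the old population."""
    n = len(population)
    offspring = []
    while len(offspring) < n:
        parent_a = select(population, fitness)
        parent_b = select(population, fitness)
        child_a, child_b = crossover(parent_a, parent_b)
        offspring.append(mutate(child_a))
        if len(offspring) < n:
            offspring.append(mutate(child_b))
    return offspring   # the old population is discarded entirely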
H: What's the best metric for evaluate an estimator for a multi class problem with class imbalance dataset?
Accuracy, precision, F1 and ROC are good for a binary, single-class problem,
but for a more complex problem (an imbalanced multi-class problem), what should I use?
Do you have any recommendation?
AI: One standard metric is the top-1 or top-5 test error rate. For instance, for top-5, your model predicts the 5 most likely labels, and if none of the 5 labels is the ground truth label, you mark this instance as an error. This is a standard metric when people work with the ImageNet data. See example usage here. This metric does not explicitly account for the classes being imbalanced.
Another powerful metric is the Mean Average Precision (mAP), where you calculate the Average Precision (AP) for each class and then average over all classes. In this metric the smaller classes get equal weight compared to the larger classes. The idea comes from the information retrieval community. This metric is also suitable for multi-label classification. See example usage here.
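As a small sketch with scikit-learn, macro-averaged scores weight each class equally regardless of its size; the toy labels below are just for illustration:
from sklearn.metrics import f1_score, precision_recall_fscore_support

y_true = [0, 0, 0, 1, 1, 2]          # toy imbalanced 3-class labels
y_pred = [0, 0, 1, 1, 1, 2]

print(f1_score(y_true, y_pred, average='macro'))          # macro-averaged F1
print(precision_recall_fscore_support(y_true, y_pred))    # per-class precision/recall/F1/support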
H: Appending DataFrames to lists in a dictionary - why does it seem like the list is being referenced by each new DataFrame?
I have a DataFrame that pairs one or more labels to a sample group and id, for a given sample stored in a database at SampleGroup/SampleID:
There are ~100 labels. I want to create binary models to do classification on each label, and then run these models in parallel to do multi-class classification. To store these models, I am creating a dictionary of form
{label_1:[df_1, model_object_1],
label_2:[df_2, model_object_2],
...,
label_n:[df_n, model_object_n]
}
Where each df is a DataFrame of the form above, except that the value of the 'Labels' column is replaced with a 1 or 0, depending on whether dictionary key 'label_i' is in the original label list for that row. Here's the code that should do that, which has been giving me some trouble:
models = dict.fromkeys(target_labels, [])

for label in target_labels:
    label_list = []
    for multi_label_list in df['Labels']:
        if label in multi_label_list:
            label_list.append(1)
        else:
            label_list.append(0)
    data = {
        'SampleGroup': df['SampleGroup'].copy(),
        'SampleID': df['SampleID'].copy(),
        'Labels': label_list
    }
    models[label].append(pd.DataFrame(data=data, index=df.index))
    print(len(models[label]))
When I run this, each new binary label_list that is created for a label gets appended to every model in the dictionary, as if I'm creating a reference to the same label_list (similar to how df2 = df would create a reference to df, instead of a copy). The output of the above code tells the story clearly:
I managed to hack a fix for this by assigning each new DataFrame to the key instead of appending it to the key's value list:
models[label] = (pd.DataFrame(data=data, index=df.index))
What property of DataFrames (or perhaps native Python) am I invoking that would cause this to work fine, but appending to a list to act strangely?
AI: The problem is that when you create the dictionary models using models = dict.fromkeys(target_labels, []), you actually only create one empty list, and all keys point to that list. Everything you append goes into that same list.
For example,
models = dict.fromkeys('abcd', [])
print(models)
print(models['a'] is models['b'])
models['a'].append(3)
print(models)
will return
{'d': [], 'a': [], 'b': [], 'c': []}
True
{'d': [3], 'a': [3], 'b': [3], 'c': [3]}
Notice that models['a'] is models['b'] is True.
You can create the dictionary instead using
models = dict([(key, []) for key in 'abcd'])
print(models)
print(models['a'] is models['b'])
models['a'].append(3)
print(models)
Now it returns
{'d': [], 'a': [], 'b': [], 'c': []}
False
{'d': [], 'a': [3], 'b': [], 'c': []}
Notice that models['a'] is not models['b']. |
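If you would rather not pre-build the dictionary at all, collections.defaultdict is a convenient alternative: it creates a fresh, independent list the first time each key is touched.
from collections import defaultdict

models = defaultdict(list)
models['a'].append(3)                # a new list is created just for 'a'
print(models['a'] is models['b'])    # False: 'b' gets its own new list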
H: Which is better for a beginner: R or Python?
From what I can tell, R and Python are the two most popular languages for data science.
My question is which one would you recommend for someone just starting out in data science? Does any one have any clear advantages on the other? Is any one easier to learn or does any one have more potential?
Thanks a lot!
AI: It's a perennial debate. Python is more readable and quicker to learn, no question. It's also a bit more of a general-purpose language than R. R, on the other hand, has special statistical packages for just about anything you could ever dream of doing. There are some stats it can do that Python doesn't have a library for (though I suspect people are working on that).
What's more important in the early stages of learning data science is the more fundamental theory: mathematics, linear algebra, calculus, and statistics. A firm grasp of those areas is a considerably larger share of the learning path than a particular language.
Having said that, I do happen to prefer Python because of the readability. The fact is, that very often other people will need to come in after you and read what you wrote. For that matter, you sometimes have to read your own code! Readability is more important than the ability to do fantastic things with one-liners. As for the advanced stats, that comes into play most often in the medical fields; there you definitely find a preponderance of R over Python. |
H: How to interpret my neural network with high accuracy but low probability in test results
I have built a classical ANN using Keras which provides the probability (using a sigmoid function) of the outcomes (0 or 1). While the accuracy of the model during fitting is high (~90%), the outcome probabilities on the test set are very poor. How can I interpret this?
Build ANN
classifier = Sequential()
classifier.add(Dense(activation="relu",input_dim=7,kernel_initializer="uniform", units = 4))
classifier.add(Dense(activation="relu",kernel_initializer="uniform", units = 4))
classifier.add(Dense(activation="sigmoid", kernel_initializer="uniform", units = 1))
classifier.compile(optimizer="adam", loss="binary_crossentropy",metrics=['accuracy'])
classifier.fit(X_train,y_train, batch_size=10,epochs=100)
Predict results:
y_pred = classifier.predict(X_test)
I have attached the test set results, where the predicted probabilities can be seen plotted against frequency. Blue shows probabilities for 1 and orange for 0.
Is this a case of overfitting? How can I tweak an ann to avoid overfitting?
AI: First possibility:
How well is your dataset balanced? Are there mostly 0s? If I understand your graph well, it would seem that is the case.
If you in fact have 90% 0s and 10% 1s in your dataset, then a high accuracy like this would indicate the model is effectively just predicting the majority class. This means it hasn't really learnt anything about the data.
Second possibility:
If your training accuracy is high, but your test accuracy is low, this generally does point to overfitting, as you suspect.
Your model is essentially memorising the training data set, making it able to simply recall the correct result quite reliably, but without learning the underlying function that describes the dataset.
Here are a few things that might be behind this:
training for too many epochs
using a model that is too big and complex
not using enough regularisation
I would recommend first checking the class balance of your dataset. If it is highly imbalanced, you could look into performing stratified sampling to create your train/test split. This ensures that the train and test sets contain roughly the same proportion of 0 and 1 samples.
Secondly, you might consider using a slightly larger network. In the first instance, this could be as simple as increasing the number of neurons in each of your Dense layers. I would suggest something like (100, 50, 1) to begin with. That will immediately tell you if the size of the model is restricting learning. |
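A sketch of both suggestions (a stratified split plus a larger network), assuming X and y hold your full feature matrix and labels; the layer sizes are just a starting point rather than tuned values:
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

# Stratified split: keeps the 0/1 ratio the same in the train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

classifier = Sequential()
classifier.add(Dense(100, activation="relu", input_dim=7))
classifier.add(Dense(50, activation="relu"))
classifier.add(Dense(1, activation="sigmoid"))
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=['accuracy'])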
H: What are the differences between logistic and linear regression?
I know that linear regression does "regression" and logistic regression does "classification". When we implement these two methods, the only difference I could notice is the loss function: linear regression uses a loss function like mean square error and logistic uses cross-entropy. Is there any other difference that I am not aware of?
AI: As you have mentioned, the output of linear regression is a real value while logistic regression's output represents classes (classification). This is their main difference.
The loss function of linear regression (mean squared error) is convex, which means you can always find the optimal point using standard optimisation methods, whereas if you use that same loss for logistic regression you may get stuck in a non-global, non-optimal minimum. Consequently, people use the negative log-likelihood instead and call it cross-entropy; for simple logistic regression tasks it is convex.
Another difference worth taking care about is the non-linearity that is usually applied for each task. For logistic regression, it is customary to use non-linearities like tanh or sigmoid after the linear part (the inner product of weights and inputs) to express the similarity to class 1 or 0 in typical binary classification. For linear regression, people usually use a linear activation function. There is a point here: the idea of using the linear activation is not due to a need for a linear function; it is used because its output is not limited. Consequently, you could use other functions that are one-to-one and unbounded. Consider $y = x^3$ as an example.
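As a tiny illustration of the input/output difference (scikit-learn used purely for convenience; the toy data is made up), the same kind of feature matrix feeds both models, but one returns real values and the other class probabilities:
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y_real = np.array([1.1, 1.9, 3.2, 3.9])   # continuous targets -> regression
y_class = np.array([0, 0, 1, 1])          # binary labels -> classification

print(LinearRegression().fit(X, y_real).predict([[2.5]]))           # a real value
print(LogisticRegression().fit(X, y_class).predict_proba([[2.5]]))  # class probabilities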
H: Binary text classification with TfidfVectorizer gives ValueError: setting an array element with a sequence
I am using pandas and scikit-learn to do binary text classification using text features encoded using TfidfVectorizer on a DataFrame. Here is some dummy code that illustrates what I'm doing:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
data_dict = {'tid': [0,1,2,3,4,5,6,7,8,9],
'text':['This is the first.', 'This is the second.', 'This is the third.', 'This is the fourth.', 'This is the fourth.', 'This is the fourth.', 'This is the nintieth.', 'This is the fourth.', 'This is the fourth.', 'This is the first.'],
'cat':[0,0,1,1,1,1,1,0,0,0]}
df = pd.DataFrame(data_dict)
tfidf = TfidfVectorizer(analyzer='word')
df['text'] = tfidf.fit_transform(df['text'])
X_train, X_test, y_train, y_test = train_test_split(df[['tid', 'text']], df[['cat']])
clf = LinearSVC()
clf.fit(X_train, y_train)
This gives the following error:
Traceback (most recent call last):
File "<ipython-input-151-b0953fbb1d6e>", line 1, in <module>
clf.fit(X, y)
File "C:\Users\Me\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\svm\classes.py", line 227, in fit
dtype=np.float64, order="C")
File "C:\Users\Me\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 573, in check_X_y
ensure_min_features, warn_on_dtype, estimator)
File "C:\Users\Me\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 433, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
I have found numerous posts (e.g. here, here) mentioning that this error can indicate non-uniformity of the data. This post for the same error suggests it can also be due to a data typing issue. However, I can't see how my very simple example could be due to either of these. There is surely something simple I am missing. Help!
AI: TfidfVectorizer returns a (sparse) 2-D array or a matrix. You can't set the column df['text'] to a matrix without messing up the dimensions. Instead, you need to concatenate the result from TfidfVectorizer with the remaining features in the dataframe.
df_text = pd.DataFrame(tfidf.fit_transform(df['text']).toarray())
X_train, X_test, y_train, y_test = train_test_split(pd.concat([df[['tid']],df_text],axis=1), df[['cat']]) |
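If the vocabulary is large, converting the TF-IDF matrix to a dense array can be memory-hungry; an alternative sketch (reusing the objects defined above) keeps everything sparse with scipy.sparse.hstack:
from scipy.sparse import csr_matrix, hstack

X_text = tfidf.fit_transform(df['text'])                    # sparse TF-IDF matrix
X_all = hstack([csr_matrix(df[['tid']].values), X_text])    # stays sparse
X_train, X_test, y_train, y_test = train_test_split(X_all, df['cat'])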
H: what is the one hot encoding for cancer data classification
I am working on a project to classify a lung CT dataset using a CNN and TensorFlow. I know that the order of the categories is cancer/no-cancer (only 2 classes). In more than one GitHub repository I see that they did one-hot encoding like the code below:
if label == 1:
    label = np.array([0, 1])
elif label == 0:
    label = np.array([1, 0])
What confuses me is that 1 means cancer and 0 means no-cancer, so as I understand it, it should be:
if label == 1:
    label = np.array([1, 0])
elif label == 0:
    label = np.array([0, 1])
But why did they do one-hot encoding like this? I don't know whether I am wrong or there is something else I did not understand. Can anyone explain it to me, or give me a better way to do the encoding for my data, with code?
AI: Both ways would work equally well, but the way you see in the GitHub repo is more standard.
The standard way of converting an integer label $y_i$ (from 0 to K-1) into a one-hot encoding is to create an all-zero vector of length K and set the element indexed by $y_i$ to 1, i.e.
label_one_hot = np.zeros(K)
label_one_hot[label] = 1
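If you are using Keras on top of TensorFlow anyway, the same encoding can be produced with its utility function (a convenience, not a requirement); note how label 1 maps to [0, 1] and label 0 to [1, 0], matching the repositories you mention:
import numpy as np
from keras.utils import to_categorical

labels = np.array([1, 0, 1])
print(to_categorical(labels, num_classes=2))
# [[0. 1.]
#  [1. 0.]
#  [0. 1.]]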
H: Why is the logistic regression decision boundary linear in X?
The logistic regression model,
\begin{equation}
\operatorname{p}(X) = \frac{\operatorname{e}^{\beta_0 + \beta_1 X}}{1 + \operatorname{e}^{\beta_0 + \beta_1 X}}
\end{equation}
is said to create a decision boundary linear in $X$.
As far as I understand, only the logit is linear in $X$. Is this the reason the decision boundary is linear in X? If so, why? And if this is not the case, what is the reason for this phenomenon?
I am confused about this because the decision boundary can be expressed as:
\begin{equation}
\operatorname{p}(X) = a, a \in [0,\, 1]
\end{equation}
And $\operatorname{p}(X)$ is not linear in $X$.
AI: The decision boundary $p(X)=a, a\in(0,1)$ is
$$\beta_0+\beta_1X=\log\frac{a}{1-a}$$
which is linear in $X$. |
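Spelling out the algebra (assuming $a \in (0,1)$ so the logarithm is defined):
$$\frac{e^{\beta_0+\beta_1X}}{1+e^{\beta_0+\beta_1X}}=a \;\Longrightarrow\; (1-a)\,e^{\beta_0+\beta_1X}=a \;\Longrightarrow\; \beta_0+\beta_1X=\log\frac{a}{1-a}$$
So while $p(X)$ itself is not linear in $X$, the set of points where it equals any fixed threshold $a$ is.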
H: Meaning of axes in a clustering plot
If you have n time series of rainfall measurements every hour (x=time, y=amount of rain), and compute the distance matrix between each pair of time series based on Dynamic Time Warping, and then plot the clusters, what do the X and Y axes on the cluster plot represent?
I have figured out that the values in the distance matrix would have the same measurement unit as the amount of rain in the time series (mm or inches of rain).
In the k-medoids or k-means plot, is X still time, and Y still the amount of rain?
AI: If you have $n$ time series of rainfall measurements every hour ($x$=time, $y$=amount of rain), and compute the distance matrix between each pair of time series based on Dynamic Time Warping, and then plot the clusters, what do the X and Y axes on the cluster plot represent?
They remain the same, $x$=time, $y$=amount of rain - dynamic time warping (DTW) shifts the times so the curves of the amount of rain align their peaks and valleys.
A different example might be easier to understand. In the paper by Berndt and Clifford (see below) the population of the Snowshoe Hare is plotted and subjected to DTW. You can see as the population of hares increased so did the ability to hunt Lynx. As the population of Lynx increased the population of hares declined. That affected the hunting of Lynx, which again allowed the hare population to increase. This creates a double-top peak in the data, using DTW these can be aligned.
In the k-medoids or k-means plot, is X still time, and Y still the amount of rain?
Having aligned the time/population (similar to time/rainfall) the locations on a map can be plotted with a k-medoids or k-means plot.
K-medoids (also: Partitioning Around Medoids, PAM) uses the medoid instead of the mean, and this way minimizes the sum of distances for arbitrary distance functions. K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster.
The time variable would be plotted on the map for a period of time, for example a month, season, or year; so while the time is not 'lost' it's also not $x$ or $y$. Instead a color on the map could represent the amount at a location for the entire duration of the period of time.
See how when k-medoids or k-means is applied "time" does not sit on $x$ or $y$, in this example location is on the $x$ and $y$:
References:
"Alignment of curves by dynamic time warping", by Theo Gasser and Kongming Wang.
"Using Dynamic Time Warping to Find Patterns in Time Series", by Donald J. Berndt and James Clifford.
"Clustering analysis applied to NDVI/NOAA multitemporal images to improve the monitoring process of sugarcane crops", by Luciana Alvim Santos Romani, Renata Ribeiro do Valle Gonçalves, Bruno Ferraz Amaral and Agma Traina. |
H: Why aren't Genetic Algorithms used for optimizing neural networks?
From my understanding, Genetic Algorithms are powerful tools for multi-objective optimization.
Furthermore, training Neural Networks (especially deep ones) is hard and has many issues (non-convex cost functions - local minima, vanishing and exploding gradients etc.).
Also, I am aware that conceptually training a NN with a GA is feasible. I was wondering, why aren't they used in practice? Is it a matter of performance?
AI: Training Neural Networks (NNs) with Genetic Algorithms (GAs) is not only feasible, there are some niche areas where the performance is good enough for it to be used frequently. A good example of this is Neuroevolution of augmenting topologies, or NEAT, which is a successful approach to generating controllers in simple environments, such as games.
In the more general case though, the approach does not scale well to large, deep networks with many parameters to tune.
Genetic algorithms and other global searches for optimal parameters are robust in ways that gradient-based algorithms are not. For instance, you could train a NN with step function activations, or any other non-differentiable activation functions. They have weaknesses elsewhere. One thing relevant in the case of GAs used for NNs, is that weight parameters are interchangeable in some combinations but heavily co-dependent in other combinations. Merging two equally good neural networks with different parameters - which you would do in cross-over in a GA - will usually result in a third network with poor performance. NEAT's success is partially in finding a way to address that issue by "growing" the NN's connections and matching them up between similar neural networks.
Gradient-based approaches are much more efficient. In general, and not just in the domain of NNs, if you can calculate the gradient of a function with respect to parameters, then you can find optimal parameters faster than with most other optimising techniques. An accurate gradient guarantees at least a small improvement from a single evaluation, and most other optimisers fall into a generate-and-retry paradigm which cannot make that kind of guarantee. The weakness of tending to find local optima has turned out not to be a major hindrance for the loss functions in NNs, and has been tackled with some degree of success using extensions to basic gradient descent such as momentum, RPROP, Adam etc.
In practice on a large multi-layer network, gradient methods are likely orders of magnitude faster than GA searches such as NEAT for finding network parameters. You won't find any GA-trained CNNs that solve ImageNet, or even MNIST, where the GA has found the network weights unaided. However, GAs, or at least some variants of them, are not 100% ruled out. For instance this 2017 blog reviews recent papers including Large-Scale Evolution of Image Classifiers which explores using GAs to discover NN hyperparameters which is an important task in machine learning, and not very tractable using gradient-based methods. |
H: Optimizer for Convolutional neural network
What is the best optimizer for Convolutional neural network (CNN)?
Can I use RMSProp for a CNN, or only for an RNN?
AI: Yes, you can use the same optimizers you are familiar with for CNNs.
I don't think that there is a best optimizer for CNNs. The most popular in my opinion is Adam. However some people like to use a plain SGD optimizer with custom parameters.
An excellent article explaining the differences between most popular gradient descent based optimizers can be found here. |
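As a sketch of how little changes in practice, switching optimizers in Keras is a one-argument change; model here is assumed to be a CNN you have already built, and the learning rates are only example values:
from keras.optimizers import RMSprop, Adam

# Either optimizer works for a CNN; only the optimizer argument changes.
model.compile(optimizer=RMSprop(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])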