H: To remove Chinese characters as features -
I have created a document-term matrix using TfidfVectorizer, but I just noticed that the features contain Chinese characters. Is it possible to remove them using Python's regex?
I believe these characters are one of the reasons for the lower prediction accuracy of my model.
Currently I use the below for pre-processing my data-
# Pre-processing the data
import re

def text_preprocess(data):
    # Changing to lower case
    data = data.lower()
    # Removing special characters
    data = re.sub("(\\d|\\W)+", " ", data)
    return data
Also, please note I used stop_words='english' in my TfidfVectorizer.
Please let me know if any information required. (New here, still learning)
AI: If you want to remove non-English characters then this regex will work, by selecting characters outside a given ASCII range (here 0 to 122, i.e. \x00-\x7A; you can adjust the upper bound, since this range still allows some special characters):
([^\x00-\x7A])+
So, to remove everything outside the full ASCII range (0 to 127):
data = re.sub("([^\x00-\x7F])+", " ", data)
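For context, a minimal sketch of how this filter could slot into the text_preprocess function from the question (the step ordering is just one reasonable choice):
import re

def text_preprocess(data):
    # Changing to lower case
    data = data.lower()
    # Removing non-ASCII characters (e.g. Chinese characters)
    data = re.sub("([^\x00-\x7F])+", " ", data)
    # Removing digits and special characters
    data = re.sub("(\\d|\\W)+", " ", data)
    return data |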
H: What to do after training a classifier?
In terms of application, what happens after we train a classifier? What can we learn from it?
For example, if I trained a classifier to predict the success of a Kickstarter campaign with 80% accuracy, how can I apply this information to benefit my own Kickstarter campaign? Is there a way to know which attributes influence the outcome 'success' or 'failed' the most?
AI: How to see which attributes influence your model depends on the type of model you are using. For decision trees, random forests, and tree-based gradient boosting you can plot feature importance metrics.
In linear models such as logistic regression, the coefficients indicate how much each feature influences the model's output.
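For example, a minimal scikit-learn sketch (X_train, y_train and feature_names are illustrative placeholders, not from your data):
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Tree-based model: feature importances
rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
for name, imp in sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1]):
    print(name, round(imp, 3))

# Linear model: coefficients (features should be on comparable scales)
lr = LogisticRegression().fit(X_train, y_train)
for name, coef in zip(feature_names, lr.coef_[0]):
    print(name, round(coef, 3))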
Are you using Python? Let me know which models you are using so I can help you more. |
H: Can this be a case of multi-class skewness?
I have been working on an email data set, and trying to predict the owner team for it. But my prediction accuracy is just 58%. I have implemented data cleansing, null value removals, duplicate removal, excluding stopwords, and then calculating tf-idf value to get my final document term matrix.
I tried various classifiers (but with default settings), and the maximum accuracy I achieved is 58%. I might give hyperparameter tuning a try.
Could you please give me some pointers, based on your previous experience what possibly could be going wrong? I'm simultaneously doing my own research and going through various blogs and articles.
One thought - Could it be a case of data skewness?
My output classes are distributed like this -
Your thoughts please?
AI: Reporting accuracy alone does not mean much in classification problems. The first thing you must do is calculate your baseline, that is, what is the percentage of your majority class? From the bar plot above it is difficult to tell; can you give us percentages instead of counts? That way we can assess your results better.
Also, have you plotted a confusion matrix? With it you can see where your model is getting things wrong most often and try to infer why this is happening.
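For example, a minimal scikit-learn sketch (assuming y_test holds the true teams and y_pred your predictions; both names are placeholders):
import numpy as np
from sklearn.metrics import confusion_matrix

# Baseline: accuracy of always predicting the majority class
values, counts = np.unique(y_test, return_counts=True)
print("Majority-class baseline:", counts.max() / counts.sum())

# Confusion matrix: rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred, labels=values))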
And yes, since you have many classes to predict and most of them have low representation, this will be difficult to overcome. Maybe you can try things such as oversampling or undersampling techniques in a one-vs-all approach. This is just an idea; I haven't yet encountered a problem with so many classes to predict. |
H: Derivation of CNN math equations in Matrix format
I've gone through jefkine's website and Jae Seo's articles to get a hold of the math behind the famous CNN architecture. Although I understand it in theory, I'm unable to implement it in matrix format or, to put it straight, in numpy format. After digging through the internet like a broken web scraper, I could only understand FC ANN after Andrew Ng's tutorial and this: sudeepraja.github.io/Neural/. Can someone please help me understand the CNN architecture in a similar way?
I'm expecting downvotes and closed as off topic by admins for being naive and dumb. If that's the price I gotta pay to learn... So be it!
AI: I can't explain exactly what you want. However, if you go to this link and work on Q4, maybe that is what you are looking for. |
H: SGD vs SGD in mini batches
So I recently finished a mini-batch algorithm for a library I'm building in Java (an artificial neural network library). I then went on to train my network on an XOR problem with mini-batch sizes of 2 or 3, and for both I got worse accuracy than with a batch size of 1 (which is basically just SGD). Now I understand that I need to train it for more epochs, but I'm not noticing any speed-up in runtime, which from what I read should happen. Why is this?
Here is my code(Java)
public void SGD(double[][] inputs, double[][] expected_outputs, int mini_batch_size, int epochs, boolean verbose){
    //Set verbose
    setVerbose(verbose);
    //Create training set
    TrainingSet trainingSet = new TrainingSet(inputs, expected_outputs);
    //Loop through Epochs
    for(int i = 0; i < epochs; i++){
        //Print Progress
        print("\rTrained: " + i + "/" + epochs);
        //Shuffle training set
        trainingSet.shuffle();
        //Create the mini batches
        TrainingSet.Data[][] mini_batches = createMiniBatches(trainingSet, mini_batch_size);
        //Loop through mini batches
        for(int j = 0; j < mini_batches.length; j++){
            update_mini_batch(mini_batches[j]);
        }
    }
    //Print Progress
    print("\rTrained: " + epochs + "/" + epochs);
    print("\nDone!");
}

private Pair backprop(double[] inputs, double[] target_outputs){
    //Create Expected output column matrix
    Matrix EO = Matrix.fromArray(new double[][]{target_outputs});
    //Forward Propagate inputs
    feedForward(inputs);
    //Get the Errors which is also the Bias Delta
    Matrix[] Errors = calculateError(EO);
    //Weight Delta Matrix
    Matrix[] dCdW = new Matrix[Errors.length];
    //Calculate the Deltas
    //Calculating the first Layers Delta
    dCdW[0] = Matrix.dot(Matrix.transpose(I), Errors[0]);
    //Rest of network
    for (int i = 1; i < Errors.length; i++) {
        dCdW[i] = Matrix.dot(Matrix.transpose(H[i - 1]), Errors[i]);
    }
    return new Pair(dCdW, Errors);
}

private void update_mini_batch(TrainingSet.Data[] mini_batch){
    //Get first deltas
    Pair deltas = backprop(mini_batch[0].input, mini_batch[0].output);
    //Loop through mini batch and sum the deltas
    for(int i = 1; i < mini_batch.length; i++){
        deltas.add(backprop(mini_batch[i].input, mini_batch[i].output));
    }
    //Multiply deltas by the learning rate
    //and divide by the mini batch size to get
    //the mean of the deltas
    deltas.multiply(learningRate / mini_batch.length);
    //Update Weights and Biases
    for(int i = 0; i < W.length; i++){
        W[i].subtract(deltas.dCdW[i]);
        B[i].subtract(deltas.dCdB[i]);
    }
}
AI: My understanding is that mini-batches are not really about speeding up the calculations, but about making training on large datasets feasible at all.
If you have 1,000,000 examples, it would be tricky for a computer to compute forward and backward passes over all of them at once, but passing batches of 5,000 examples is perfectly feasible.
For your case, I recommend two things:
Try different batch sizes.
Make sure you shuffle your batches! That will certainly help you a bit. |
H: How will Occam's Razor principle work in Machine learning
The following question, displayed in the image, was asked during one of the exams recently. I am not sure if I have correctly understood the Occam's Razor principle or not. According to the distributions and decision boundaries given in the question, and following Occam's Razor, decision boundary B should be the answer in both cases, because as per Occam's Razor we choose the simpler classifier which does a decent job rather than the complex one.
Can someone please verify whether my understanding is correct and the chosen answer is appropriate or not?
Please help, as I am just a beginner in machine learning.
AI: Occam’s razor principle:
Having two hypotheses (here, decision boundaries) that have the same empirical risk (here, training error), a short explanation (here, a boundary with fewer parameters) tends to be more valid than a long explanation.
In your example, both A and B have zero training error, thus B (shorter explanation) is preferred.
What if training error is not the same?
If boundary A had a smaller training error than B, selecting one becomes tricky. We need to quantify "explanation size" in the same way as "empirical risk" and combine the two into one scoring function, then proceed to compare A and B. An example is the Akaike Information Criterion (AIC), which combines empirical risk (measured with negative log-likelihood) and explanation size (measured with the number of parameters) in one score.
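For reference, the standard form of this score (stated here as an aside) is
$$\mbox{AIC} = 2k - 2\,\mbox{log}\hat{L},$$
where $k$ is the number of parameters (the explanation size) and $\hat{L}$ is the maximized likelihood, so $-\mbox{log}\hat{L}$ is the empirical risk term; lower AIC is better.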
As a side note, AIC cannot be used for all models, and there are also many alternatives to AIC.
Relation to validation set
In many practical cases, when the model progresses toward more complexity (a larger explanation) to reach a lower training error, AIC and the like can be replaced with a validation set (a set on which the model is not trained). We stop the progress when the validation error (the error of the model on the validation set) starts to increase. This way, we strike a balance between low training error and short explanation. |
H: How to handle continuous values and a binary target?
This is going to be a very beginner's question.
I have a dataset of continuous features like LoanAmount, LoanDuration (multiclass?), ... ClientIncome, ClientFreeSources, etc., and a binary target: whether a contract was sued or not.
I'm not sure how to approach the problem as I'm fairly new to DS.
I can reformulate the target into a limited number of values like 5, 11, 18, ... (which I think would probably be perceived as multiclass by a model) or into a continuous value expressed as DebtAmount after some work on the SQL Server end.
However, the simple flag Sued 1/0 would be the preferred option at the beginning, if possible.
I also wonder how to treat high-cardinality categorical features like post code, because the number of new dummy variables seems too high to me.
Thanks for your answers in advance.
AI: I expect this is a supervised learning problem where you have a mixture of continuous and categorical features as independent features, and your dependent (target) feature is a binary class. If that is so, please follow the advice below.
There might be much better solutions, but this worked for me. I had more than 200 features and a categorical feature like pin code.
Target Feature as 1/0
I am assuming this is a supervised learning problem, and the best way to go about it will be by starting with a basic logistic regression model.
If you are not impressed by the performance, then you can try some other classification algorithms like Naive Bayes, KNN, SVM, etc.
Categorical Feature
You can use pandas one hot encoding for this
I kept a threshold, and used one-hot encoding only on those unique categories which occurred more often than this threshold.
The ones which occurred less often than the threshold could be dropped, but I encoded them with a special value so as not to lose them.
You can follow here to know how to perform one-hot encoding
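A minimal pandas sketch of this thresholding idea (the column name post_code, the threshold of 50, and the 'OTHER' bucket are all assumptions for illustration):
import pandas as pd

threshold = 50
counts = df['post_code'].value_counts()
frequent = counts[counts >= threshold].index
# Rare categories are lumped into a single 'OTHER' bucket instead of being dropped
grouped = df['post_code'].where(df['post_code'].isin(frequent), other='OTHER')
dummies = pd.get_dummies(grouped, prefix='post_code')
df = pd.concat([df.drop(columns=['post_code']), dummies], axis=1)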
I hope I answered at least to a certain extent. Again, I used this solution for my issue and it worked pretty well and hope it works for you too. |
H: Deriving new continuous variable out of logistic regression coefficients
I have a set of independent variables X and set of values of dependent variable Y. The task at hand is binary classification, i.e. predict whether debtor will default on his debt (1) or not (0). After filtering out statistically insignificant variables and variables that bring about multicollinearity I am left with following summary of logistic regression model:
Accuracy ~0.87
Confusion matrix [[1038 254]
[72 1182]]
Parameters Coefficients
intercept -4.210
A 5.119
B 0.873
C -1.414
D 3.757
Now, I convert these coefficients into a new continuous variable "default_probability" via the log-odds ratio, i.e.
import math
e = math.e
power = (-4.210*1) + (A*5.119) + (B*0.873) + (C*-1.414) + (D*3.757)
default_probability = (e**power)/(1+(e**power))
When I divide my original dataset into quartiles according to this new continuos variable "default_probability", then:
1st quartile contains 65% of defaulted debts (577 out of 884 incidents)
2nd quartile contains 23% of defaulted debts (206 out of 884 incidents)
3rd quartile contains 9% of defaulted debts (77 out of 884 incidents)
4th quartile contains 3% of defaulted debts (24 out of 884 incidents)
At the same time:
overall quantity of debtors in 1st quartile - 1145
overall quantity of debtors in 2nd quartile - 516
overall quantity of debtors in 3rd quartile - 255
overall quantity of debtors in 4th quartile - 3043
I wanted to use "default probability" to surgically remove the most problematic credits by imposing the business-rule "no credit to the 1st quartile", but now I wonder whether it is "surgical" at all (by this rule I will lose (1145 - 577 = 568 "good" clients) and overall is it mathematically/logically correct to derive new continuous variables for the dataset out of the coefficients of logistic regression by the line of reasoning described above?
AI: Deriving a probability value (the continuous variable you are talking about) from the logistic model is a perfectly sound thing to do. The probability value is actually the main output from the model.
Getting from the probability value to a decision rule (e.g. from default probability to credit granting decision) is an another step that will also need to incorporate a number of business decisions concerning risk appetite - i.e. how risky clients are you willing to approve not to miss a potentially large amount of business, what the interest is going to be for different risk scores etc. The tradeoff this refers to is the well known sensitivity-specificity tradeoff which is, apart from the confusion matrix you are using, probably best visualised by the ROC curve.
From the confusion matrix it is also apparent that you were training the model on a balanced sample (default rate around 50%). This is quite unusual in credit risk modeling; usually the default rate is well below that. If that is the case, you will probably need to calibrate the probabilities, for example by fitting yet another logistic regression model on your probabilities as a single variable (this works best when the probabilities are normally distributed). The calibrated probabilities that this second model predicts will accurately reflect the actual real-world probabilities of default.
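A minimal sketch of that calibration step (probs is assumed to hold the first model's predicted probabilities on a representative, imbalanced sample and y the observed defaults; both names are illustrative):
import numpy as np
from sklearn.linear_model import LogisticRegression

scores = np.asarray(probs).reshape(-1, 1)                 # the probabilities as a single input variable
calibrator = LogisticRegression().fit(scores, y)
calibrated_probs = calibrator.predict_proba(scores)[:, 1] |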
H: Evaluating the test set
Please find attached a part of the code which explains what I'm trying to do. Essentially I'm trying to predict the sales of supermarket stores. I'm using RandomForestRegressor for this and have predicted the results on the test set. The cross validation is done on the training set with a mean accuracy of 0.52, and I have tried to calculate RMSE, which comes to 1180. Now I want to know how to make sense of all this, how well the model performed on the test set, and how to evaluate my model. It'll be extremely helpful if you could help me understand. Thanks
Ignore the np.exp(rmse) part
Best,
Pranveer
AI: Welcome to the forum; I will try to clarify some things.
First of all, when talking about regression you do not calculate accuracy. Accuracy is a metric used in classification tasks; it basically measures how many labels your model got right divided by the total number of labels.
You are working on a regression task, that is, you are trying to predict not labels but continuous values as your target. By default, scikit-learn regressors report the R-squared metric as their score. Intuitively, R-squared measures how much of the variance your model explains. It is calculated by the equation below:
(Formula image taken from the scikit-learn r2_score page: https://scikit-learn.org/stable/modules/model_evaluation.html#r2-score)
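For reference, the definition shown on that page is
$$R^2 = 1 - \frac{\sum_{n}(y_n - \hat{y}_n)^2}{\sum_{n}(y_n - \bar{y})^2},$$
where $\hat{y}_n$ are the predictions and $\bar{y}$ is the mean of the observed targets.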
Basically, I like to think that R2 is testing your model against the simplest possible one in a regression task, that is, just predicting the mean of your target variable. It makes sense to talk about accuracy as an analogy; however, I wanted to make clear that they are different things.
Second, talking about results, RMSE and how you can measure the effectiveness of your model: in a regression context this is highly dependent on the problem you are trying to solve. Maybe you are a social scientist trying to predict the number of voters for a certain candidate from some data, and an R2 of 0.4-0.5 would be amazing. Maybe you are sure that this is not sufficient because your variables are strongly related to your target variable (as in some physics problems), and an R2 of 0.80 is unacceptable. Another thing is RMSE: it tries to measure the spread of your model's errors, that is, how far off you usually are when predicting your target. I had a problem where I was predicting the number of days until a certain event, my predictions were typically 15 days off, and this was not helpful at all.
I highly recommend you read Chapter 3 of the Introduction to Statistical Learning book, which will help you a lot (actually, the whole book will). They have a free version here: http://www-bcf.usc.edu/~gareth/ISL/
I hope this helps. If you have any problem understanding what I wrote, please leave a comment. And if anyone spots something wrong in my answer, please tell me! |
H: how to reshape xtrain array and what about input shape?
from keras.datasets import mnist
from keras.layers import Activation,Dense,Convolution2D
from keras.models import save_model,load_model,Sequential
from keras.callbacks import TensorBoard
import matplotlib.pyplot as pl
(xtrain,ytrain),(xtest,ytest)=mnist.load_data()
model = Sequential()
model.add(Convolution2D(32,3,activation='relu',input_shape=(60000,28,28)))
model.add(Dense(10, activation='relu'))
model.summary()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(xtrain, ytrain, epochs=10, batch_size=60000)
AI: Keras requires you to set the input_shape of the network. This is the shape of a single instance of your data which would be (28,28). However, Keras also needs a channel dimension thus the input shape for the MNIST dataset would be (28,28,1).
First we load the data as you did,
from keras.datasets import mnist
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print('Training data shape: ', x_train.shape)
print('Testing data shape : ', x_test.shape)
Training data shape: (60000, 28, 28)
Testing data shape : (10000, 28, 28)
Then we reshape the examples in the MNIST dataset to have the additional channel dimension
# Input image dimensions
img_rows, img_cols = 28, 28
# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test_reshaped = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
And now we can define our model as you did. Do note the necessary Flatten layer between the Convolutional layers and the Dense layer.
import keras
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

num_classes = 10  # MNIST has 10 digit classes

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train_reshaped, keras.utils.to_categorical(y_train, num_classes),
          epochs=10, batch_size=128)  # one-hot encode labels for categorical_crossentropy |
H: Python & Pandas : (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' (timed out)")
I'm writing a Python Script to store JSON data into MySQL Database. I used pandas to store into MySQL Database.
I used two different modules (MySQLdb and sqlalchemy) to connect to the MySQL database.
Python Code:
user@sys:~$ cat pandas_to_mysql.py
import pandas
from pandas.io import sql
from sqlalchemy import create_engine
import MySQLdb
json_string_final='[{"id": 772, "name": "abcd"}]'
df=pandas.DataFrame(eval(json_string_final))
print(df)
#----Using MySQLdb---
con=MySQLdb.connect(host='localhost',user='user1',passwd='pwd',db='myDB')
try:
df.to_sql(con=con, name='myTable1', if_exists='replace')
except Exception as error1:
print("Error with connection 1 ------> ")
print(error1)
#----Using Engine----
engine = create_engine("mysql+pymysql://{user}:{pw}@localhost/{db}".format(user="user1",pw="pwd",db="myDB"))
try:
df.to_sql(con=engine, name='myTable2', if_exists='replace')
except Exception as error1:
print("Error with connection 2 ----->")
print(error1)
I used two different connections to store the data into the MySQL database, but the connection was not established. Below is the error message.
Output:
user@sys:~$ python3 pandas_to_mysql.py
id name
0 772 abcd
Error with connection 1 ------>
Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during bytes formatting
Error with connection 2 ----->
(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' (timed out)")
(Background on this error at: http://sqlalche.me/e/e3q8)
But I'm able to connect to the MySQL database manually from the same terminal using the following command.
MySQL Connection:
user@sys:~$ mysql -u user1 -h localhost -D myDB -p
It means there are no issues with the host, user, password, or database name. Then what is the issue?
AI: The following works for me, passing a connection URI string directly as con:
df.to_sql(con="mysql://user:password@localhost/database_name", name='myTable2', if_exists='replace') |
H: How to Convert a Pandas Column having duration details in string format (ex:1hr 50m) into a integer column with value in minutes
Lets say i have the following data like below:
import pandas as pd
import numpy as np
df = pd.DataFrame({'Duration': ['1h 50m', '50m', '3h', '2h 30m', '5h', '60m']
})
df
Generated Output:
The column created in the pandas data frame contains duration details in string format, i.e. (1h 50m, 50m, 3h, etc.). I need to know how to convert this column into an integer column with the value expressed in minutes (110, 50, 180, ...).
AI: One liner:
df['Duration']= df['Duration'].str.replace("h", '*60').str.replace(' ','+').str.replace('m','*1').apply(eval)
Basically, it converts your string into an equation in string format and evaluates it.
Hope it helps! |
H: Keras exception: Error when checking input: expected dense_input to have shape (2,) but got array with shape (1,)
I have an understanding of this error, it means that the input that I'm passing to the model is of a different dimension that what was expected. The error also states that the input that I'm passing is of the dimension (1,) while it was expecting (2,)
I have tested the input value's dimension using x.shape and it prints out (2,), yet the error persists. As a counter-intuitive move I picked one of the items from the training data, printed the shape of its zeroth element with x1[0].shape, and also used that as an input; the error still exists.
model.fit works well, having error with model.predict (tried passing one of the training data hardcoded, still doesn't work)
CODE:
import tensorflow as tf
import numpy as np
from tensorflow import keras
import csv
x1, ys = [], []
with open('./house.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line = 0
    for row in csv_reader:
        if line > 0:
            x1.append([row[1], row[3]])
            ys.append(row[5])
        line += 1

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[2])])
model.compile(optimizer='sgd', loss='mean_squared_error')

x1 = np.asarray(x1, dtype=float)
ys = np.asarray(ys, dtype=float)

model.fit(x1, ys, epochs=500)
print(x1[0].shape)

while True:
    house_size = float(input('Enter the house size: '))
    house_size = house_size/3000
    bhks = float(input('Enter the BHK: '))
    bhks = bhks/3
    x = np.array([house_size, bhks])
    try:
        value = model.predict(x)
    except Exception as e:
        print(e)
        print(x)
        print(x.shape)
    else:
        value = value[0][0] * 500
        print(value)
AI: You always need to pass the data for prediction in batches, even if the batch is of size one (a single sample). Try changing this line:
x = np.array([house_size, bhks])
into this:
x = np.array([[house_size, bhks]])
This should work. |
H: How to set hyperparameters in SVM classification
I am studying image classification using SVMs and it is generally defined as so...
N = number of training examples
W = the weights
f(x, W) = dot product
λ is explained to be set through cross-validation; however, no mention is made as to how Δ is set.
I understand that the SVM loss function wants the score of the correct class to be larger than the incorrect class scores by at least by Δ, but they don't explain how Δ is derived.
In most of the examples it is defined to be Δ = 1.0, with no mention as to how 1.0 was calculated. Is this value determined through trial and error (cross-validation)? How does one determine what the value should be?
AI: Intuitively, SVM wants the score of the correct class $y_i$, namely $x_i w_{y_i}$, to be greater than the score of any other class $j$, namely $x_i w_j$, by at least $\Delta$, so that the loss becomes zero (clamped with the max operation).
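For reference, the loss being paraphrased is the usual multiclass hinge (SVM) loss for example $i$ (a standard form, not quoted from the linked blog):
$$L_i = \sum_{j \neq y_i} \max(0,\; x_i w_j - x_i w_{y_i} + \Delta)$$
A common justification for fixing $\Delta = 1.0$ is that $\Delta$ and the regularization strength $\lambda$ control the same tradeoff: the weights can always be scaled up or down to meet any particular margin, so the exact value of $\Delta$ is arbitrary and only $\lambda$ needs to be tuned by cross-validation.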
Find the full blog here. Hope it helps |
H: Why do we choose principal components based on maximum variance explained?
I've seen many people choose # of principal components for PCA based on maximum variance explained. So my question is do we always have to choose principal components based on maximum variance explained? Is it applicable for all scenarios i.e text count vectors(BoW, tfidf..) where number of dimensions are really high.
Does maximum variance means most information about my data in higher dimension is captured into lower dimension?
Usually I'd plot something like this to see the variance explained.
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Principal Components')
plt.ylabel('Variance ratio')
plt.show()
AI: do we always have to choose principal components based on maximum
variance explained?
Yes. "Maximum variance explained" is closely related to the main objective as follows.
Our main objective is: for a limited budget K dimensions, what information $\mbox{a}=(a_1,...,a_K)$ to keep from original data $\mbox{x}=(x_1,...,x_D)$ ($D \gg K$) in order to be able to reconstruct $\mbox{x}$ from $\mbox{a}$ as close as possible?
If we only allow rotation and scaling of original data, i.e. $a_k := \mbox{x}.\mbox{v}_k$ for unknown set of vectors $V_K=\{\mbox{v}_k|\mbox{v}_k \in \mathbb{R}^D, 1 \leq k \leq K\}$, and define the reconstruction error as
$$loss(\mbox{x},V_K):=\left \| \mbox{x}-\underbrace{\sum_{k=1}^{K}\overbrace{(\mbox{x}.\mbox{v}_k)}^{a_k}\mbox{v}_k}_{\hat{\mbox{x}}} \right \|^2,$$
the solution $V^*_K$ that minimizes this error is PCA. For first dimension, PCA keeps the projection of data on vector $\mbox{v}^*_1$ in the direction of largest data variance, namely $a^*_1$. For second dimension, it keeps the projection on vector $\mbox{v}^*_2$ in the direction of second largest data variance, namely $a^*_2$, and so on.
In other words, when we try to find a K-vector set $V_K$ that minimizes $loss(X,V_K)=\frac{1}{N}\sum_{n=1}^{N}loss(\mbox{x}_n,V_K)$, the solution
$V^*_K$ includes $\mbox{v}^*_k$ that is in the direction of $k\mbox{-th}$ largest data variance.
Note that "ratio of variance explained" is a measure from statistics. Using the previous notations, it is defined as:
$$\mbox{R}(X,V_K):=1 - \frac{loss(X,V_K)}{Var(X)}$$
Since variance of original data $Var(X)$ is independent of solution, minimum of $loss(X,V_K)$ is equivalent to maximum of $\mbox{R}(X,V_K)$. For example, if $K=2$, then $V^*_2=\{\mbox{v}^*_1, \mbox{v}^*_2\}$ minimizes $loss(X,V_2)$ and equivalently maximizes $\mbox{R}(X,V_2)$. Ideally, if original data $X$ can be perfectly reconstructed from $V_K$, then $R(X, V_K)$ would be $1$.
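As a quick numerical sanity check of this equivalence, here is a small scikit-learn sketch on random toy data (purely illustrative):
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(500, 10) @ rng.randn(10, 10)        # toy correlated data

K = 3
pca = PCA(n_components=K).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))   # reconstruction from K components

loss = ((X - X_hat) ** 2).sum(axis=1).mean()      # loss(X, V_K) as defined above
total_var = X.var(axis=0).sum()                   # Var(X), same 1/N normalization

print(1 - loss / total_var)                       # R(X, V_K)
print(pca.explained_variance_ratio_.sum())        # ratio of variance explained: the same value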
Does maximum variance means most information about my data in higher
dimension is captured into lower dimension?
Yes. If we agree that "keep as much information as possible" is equivalent to "be able to reconstruct the data as close as possible", then our objective $min_{V_K}loss(X,V_K)$ formalizes "keep as much information as possible", and its solution is "maximum variance". |
H: Identify credit card shape using machine learning
I have to approach this task: identify credit card from an image. I am attaching example image below:
I have to identify and localize the credit card from this image. The real challenge is that the card can be placed on any background and the color of the card can change based on the company it belongs to.
In order to solve this task I tried using the tensor flow object detection api. The downside of this api is that it fails to recognize cards which are not in its training data set.
My problem here is I am not concerned with the color of card or what information the card has. I am only concerned about finding the outline of card in an image and isolating the outline of card from rest of the image.
Is there a way using ML/CNN to do this? I tried OpenCV approaches to detect contours, but even this approach fails when there is a lot of text or other noise in the card's background.
AI: I am only concerned about finding the outline of card in an image and isolating the outline of card from rest of the image.
This can be efficiently solved by semantic segmentation (aka dense prediction) - a problem in which every pixel must be labeled with a class.
In your case, you will have 2 classes: credit card and background. And you will need an annotated dataset of this form: for every image, every pixel should have a class label. If you will be annotating it manually, I guess that credit cards (because of their simple shape) can be annotated easily.
Here is a good solution in Keras for semantic segmentation models. It offers a lot of different architectures and backbones and it will be straightforward to apply it to your problem.
There are also implementations in other frameworks on the web. |
H: Application of Deep Reinforcement Learning
I'm new to deep learning, and especially to reinforcement learning. I would like to know if it's possible to predict which combination of hashtags (from a subset of chosen hashtags) would produce the most likes for a certain image.
Is it possible to have a convolutional neural network with each hashtag as a label, and take something like reward = likes / followers as a reward in a reinforcement learning like scenario?
In what other way could I face this problem? My goal isn't to predict the amount of likes, but to maximize the probability to get the most likes.
I chose this title because I think the answer could actually be question agnostic: I could use the same knowledge to define which combination of stocks would maximize my investment.
Thanks in advance.
AI: This honestly sounds more like a supervised learning problem.
For reinforcement learning to work, you would need a model that can constantly return values for a given input.
With social media, that would mean
a) posting the same image with all kinds of different hashtags and expecting people to give likes to the most appropriate hashtags in a short amount of time. This will not happen.
b) searching for all occurrences of an image with different hashtags. This is basically supervised learning.
I recommend finding a social media dataset first. Try to group all occurrences of an image and find the median of likes for each hashtag. Don't forget to compensate for follower count, as more popular posters will get more likes on average regardless of hashtag. Store the best hashtag as label for the given image.
You now have Y different hashtags for X different images.
From here on, you can treat it as a classification problem. X is your sample count, Y is the number of possible outputs. If you don't want to write your own loss function and only want to predict the single best hashtag, use cross entropy as the loss function.
Of course you still have to choose your social media dataset, appropriate images for training and NN structure, but I hope this helps as a general approach. |
H: Text Generation
I want to generate „human like“ text/posts based on a dataset of posts from a forum. The dataset contains roughly 25k words.
I currently have a Markov chain implemented, but I want to improve the generated text by using recurrent neural networks. My problem is that most of the available solutions are written in Python, but I need to implement it in either JavaScript or PHP.
Does anyone know how to do that or where to find a good solution?
Thanks in advance
AI: Text generation can be done in JavaScript with an RNN/LSTM. For example, TensorFlow.js is a JavaScript implementation of TensorFlow. Since the dataset is very small (25k words), the model can be run in JS as well.
Following is an example of text generation in JS : https://github.com/reiinakano/tfjs-lstm-text-generation |
H: Apache Spark alternatives for local compute
I am creating a relatively large hobby project in Scala that needs a few ml algorithms for text classification into topics. My dataset is not huge, it is < 500,000 items with dimensionality 5 (2 dimensions are free form text).
From what I've started with on Spark, it is heavily geared toward distributed computing and other production related concerns but it has a nice ml library. Is it worth using Spark if I only plan to run my project on a local dev machine level or is there a more appropriate library out there for me in Scala? Something like scikit-learn, but in Scala.
AI: You're right, Spark is intended to scale in a distributed computing environment, but it absolutely performs well locally.
When running locally Spark will still 'distribute' processing across local Executors, you have options to control the number of CPU cores the Spark job can use, and how many cores each Executor can use. You can get a lot out of using Spark locally, and Spark makes it very easy to scale up to a cluster of machines (e.g. on AWS or Google Cloud) when your single machine can't handle the tasks and it becomes necessary.
I'd also check out SMILE as an alternative to Spark MLlib in Scala. |
H: Should I connect my two GPUs with SLI or not? (for Keras + TensorFlow)
I have two GPUs, NVIDIA GTX 1070 Ti. For using Keras with TensorFlow back-end, should I connect them with SLI or not? If not, then they will be treated separately, and one model will be trained on one card. These are the two options from what I understood so far. Thank you.
AI: You need not connect GPUs via SLI. Keras and TensorFlow will take care of distributing batches across GPUs
https://keras.io/utils/#multi_gpu_model
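For example, a minimal sketch of that utility (Keras 2.x with the TensorFlow backend; the model definition is just a placeholder):
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

model = Sequential([Dense(64, activation='relu', input_shape=(100,)),
                    Dense(1)])
parallel_model = multi_gpu_model(model, gpus=2)   # one replica per GTX 1070 Ti
parallel_model.compile(optimizer='adam', loss='mse')
# parallel_model.fit(x_train, y_train, epochs=10, batch_size=256)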
Instead of SLI, if you use NVLink, Keras can use the GPU for the merge step as well.
https://www.nvidia.com/en-us/data-center/nvlink/
From the documentation :
Specifically, this function implements single-machine multi-GPU data
parallelism. It works in the following way:
Divide the model's input(s) into multiple sub-batches. Apply a model
copy on each sub-batch. Every model copy is executed on a dedicated
GPU. Concatenate the results (on CPU) into one big batch. E.g. if your
batch_size is 64 and you use gpus=2, then we will divide the input
into 2 sub-batches of 32 samples, process each sub-batch on one GPU,
then return the full batch of 64 processed samples.
This induces quasi-linear speedup on up to 8 GPUs.
This function is only available with the TensorFlow backend for the
time being. |
H: Keras : switch backend in Notebook
If a script is launched from the command line, I use the "KERAS_BACKEND" env var to switch between Theano and TensorFlow. What can be done to switch the backend if the script is running in a notebook?
AI: You can set the environment variable via python's built in os module.
import os
os.environ["KERAS_BACKEND"] = "tf" # or theano
Making changes in one notebook does not seem to carry over to other notebooks that are started in the same Jupyter session (run from a single terminal).
Here I try to get the environment variable in notebook 2, and we see it doesn't exist:
Create environment variable in Notebook 1:
Now make the changes in notebook 1 (after also confirming that the environment variable didn't exist - we catch the exception)
And now go back to notebook 2 to see if it picks up on the changed value:
There are also settings that should be read by Keras on import. Have a look at the documentation. If you are on windows, there are some separate steps you should check out there. |
H: Difference between output of probabilistic and ordinary least squares regressions
If I execute the commands
my_reg = LinearRegression()
my_reg.fit(X, Y)
I train my model. To my understanding training a model is calculating coefficient estimators.
I do not really understand the difference between this and e.g.
scipy.stats.linregress(X,Y)
calculating a 'normal' regression that also gives me the coefficient estimators and all the other statistics connected with it.
Could anyone tell me what is the difference here?
AI: They both solve the exact same objective, which is minimizing the mean squared error. However the second method can answer "how confident it is that slope is not zero, i.e. $Y$ is correlated with $X$?" via p-value.
In detail
Lets denote the data as $(X, Y) = \{(x_n, y_n)|x_n \in \mathbb{R}^D, y_n \in \mathbb{R}\}$. And the regression as $\hat{y} = Ax+B$.
Extra quantities returned by scipy.stats.linregress(X,Y) are: rvalue ($r$), and pvalue ($p$).
In statistics, $r^2$ (known as r-squared) measures the "goodness-of-fit" . That is, as regression $\hat{y}=Ax+B$ gets closer to observation $y$, $r^2$ gets closer to $1$. Since it is a function of $y$ and $\hat{y}$, it can be calculated for the first method too. So no difference here.
However, $p$ is specific to second method. scipy.stats.linregress(X,Y) adds a normality assumption to noise, i.e. assumes $\epsilon \sim N(0, \sigma^2)$ where $$\epsilon = y - \overbrace{Ax+B}^{\hat{y}}$$
On the basis of this assumption, it can answer an additional question: "how confident it is that the slope is not zero?". The first method cannot answer this question.
For example, suppose the estimated slope is $2.1$ for both methods, we still cannot tell whether this slope is significant or $Y$ is actually independent of $X$. Unless we look at the value of $p$. For example, for $p < 0.01$ we are confident (at significance level $0.01$) that $Y$ is correlated with $X$, but for $p > 0.1$ we cannot be confident, i.e. slope $2.1$ could be due to chance and $Y$ might be independent of $X$.
This link gives more details on how p-value is actually calculated in second method. |
H: How should I tackle this real-life hypermarket problem?
I registered myself in the payback program of the hypermarket I am going to.
For every 2$ I get 1 point.
I buy the same products every week (Feta 2.19$, Milk 0.99$, ...).
I visit only in weekdays.
I would like to maximize the amount of points I gather, while I also maximize the times I visit that hypermarket (so buying all the stuff at once is an awful solution).
How should I go about modeling this real-life situation in terms of a Data Science problem in the first place? If you feel like it, suggest an approach of tackling it as well...
AI: Question
I feel that the question needs to be reformulated for clarity to:
Given a fixed set of purchased products $S=\{s_i\}$ with associated prices $p_i$, find the greatest number of disjoint subsets $S_1, S_2$, etc. such that the total number of reward points attained across the subsets equals the number of points attained for the whole.
A reward point is obtained for every whole number of 200 cents divisible into the total price of the set.
In other words, you have a set of products that you will definitely buy, so the maximal number of points you can acquire from those purchases is $floor(\frac{1}{200}\sum_i p_i)$. Since you cannot acquire more points than this, any subdivisions you make, representing different trips to the hypermarket, need to reach the same amount of points.
Answer
The brute force way of considering every possible combination will be an exponential time algorithm. So you probably need to develop an algorithm which uses greedy approaches.
For example, one such approach would be to identify subsets. Say you had four products:
{milk, cheese, bread, olives} with prices {189c, 200c, 211c, 205c}, possible 4 reward points,
then a preliminary scan (over a single item) to detect prices with zero residuals will return two disjoint sets;
{cheese} and {milk, bread, olives}.
Now you can recursively treat the 3-set as the same problem; in this case no single product is a multiple of 200, but milk and bread do sum to 400, so that is a perfect split:
{cheese} and {milk, bread} and {olives}
And you have arrived at the result.
In general you cannot know how many products might be needed in a combination to make a perfect multiple of 200, but you can use some heuristics derived from your initial set. In this case $\sum_i p_i \bmod 200 = 805 \bmod 200 = 5$, so you know that the sum of the residuals of your subgroups cannot total more than 5, otherwise your solution returns fewer reward points.
For example if you grouped {cheese, bread} you get 2 reward points with a residual of 11 cents. There is no combination of the remaining {milk, olives} that can collect 2 more reward points. In that case the residual is 194 cents, which actually suggests the two disjoint sets can be combined to produce an additional reward point since 11 + 194 > 200.
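A tiny brute-force sketch of that preliminary scan (exponential in the number of products, so only practical for a small basket; the prices are the ones from the example above):
from itertools import combinations

prices = [189, 200, 211, 205]   # milk, cheese, bread, olives (in cents)

# List every subset whose total price is an exact multiple of 200 cents
for r in range(1, len(prices) + 1):
    for combo in combinations(range(len(prices)), r):
        total = sum(prices[i] for i in combo)
        if total % 200 == 0:
            print(combo, total, total // 200, "points")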
PS
I have no idea why you mention weekdays without giving any information about their significance to the problem. Unless you want me to assume that you only visit the hypermarket once per day in which case the total number of disjoint subsets you can have is 5. |
H: How does Naive Bayes classifier work for continuous variables?
I know that for categorical features we just calculate the prior and likelihood probability assuming conditional independence between the features.
How does it work for continuous variables? How can we calculate likelihood probability for continuous variable?
AI: The difference boils down to "how we define $P(x_i|C_k)$?", where $x_i$ is a single feature, and $C_k$ is a class from a total of $K$ classes.
Discrete
In discrete case, $P(x_i|C_k)$ is represented by a table as follows:
x_i P(x_i|C_k)
a 0.5
b 0.2
c 0.3
We have one of these tables for each feature-class pair $(i, k)$.
Lets denote $i$-th feature of data point $n$ as $x_{n,i}$. Each row of this table can be estimated using
$$\hat{P}(x_i=a|C_k) = \frac{\sum_{n:n \in C_k}{\mathbb{1}_{x_{n,i}=a}}}{N_k}$$
which divides the number of samples that have i-th feature equal to $a$ by total number of samples in class $C_k$ (of course from the training set).
Also, check out pseudocount that avoids zero estimations.
Continuous
In continuous case, we either discretize the continuous interval of $x_i$ into bins $\{b_1,..,b_m\}$ and proceed the same as discrete case, or we assume a function like Gaussian (or any other one), as follows:
$$P(x_i|C_k)=\frac{1}{\sqrt{2\pi}\sigma_{i,k}}e^{-(x_i-\mu_{i,k})^2/2\sigma_{i,k}^2}$$
This way, for each feature-class pair $(i, k)$, $P(x_i|C_k)$ is represented with two parameters $\{\mu_{i,k}, \sigma_{i, k}\}$ instead of a table in discrete case. The estimation of the parameters is the same as fitting a Gaussian distribution to one dimensional data, that is:
$$\hat{\mu}_{i,k} = \frac{\sum_{n:n \in C_k}{x_{n,i}}}{N_k}, \hat{\sigma}^2_{i, k} = \frac{\sum_{n:n \in C_k}{(x_{n,i} - \hat{\mu}_{i,k})^2}}{N_k-1}$$
Instead of Gaussian, we can opt for a more complex function, even a neural network. In that case, we should look for a technique to fit the function to data just like what we did with Gaussian.
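As a side illustration, scikit-learn's GaussianNB implements exactly this per-feature Gaussian estimate of $P(x_i|C_k)$; a minimal sketch on synthetic data:
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)),    # class 0 samples
               rng.normal(3, 1, size=(50, 2))])   # class 1 samples
y = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X, y)
print(clf.theta_)                                 # estimated mu_{i,k}: one mean per class and feature
print(clf.predict([[0.1, -0.2], [2.9, 3.3]]))     # predicted classes for two new points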
The rest is independent of feature type
Representing $P(C_k)$ is the same for both discrete and continuous cases and is estimated as $\hat{P}(C_k)=N_k/N$.
Finally, the classifier is $$C(x_n) = \underset{k \in \{1,..,K\}}{\mbox{argmax }}\hat{P}(C_k)\prod_{i}\hat{P}(x_i=x_{n,i}|C_k)$$
Or equivalently using log-probabilities,
$$C(x_n) = \underset{k \in \{1,..,K\}}{\mbox{argmax }}\mbox{log}\hat{P}(C_k)+\sum_{i}\mbox{log}\hat{P}(x_i=x_{n,i}|C_k)$$ |
H: understanding linear algebra of a forget gate
This blog covers the basics of LSTMs.
A forget gate is defined as :
$$f_t = \sigma(W_f \cdot [h_{t-1}, x_t]+ b_f)$$
At this point the linear algebra confuses me more than it should. The syntax of $W\cdot [h,x]$ is confusing in this context. I think a vector should go into the activation function since the output $f$ is a vector, but the syntax of the forget gate above implies that the input has $2$ columns because $[h,x]$ will be an $n\times 2$ matrix
For the sake of example lets say ...
\begin{align} W &= \begin{bmatrix} 0 & 1 \\
2 &3 \end{bmatrix}\\
h &= \begin{bmatrix} -1 \\
2 \end{bmatrix}\\
x &= \begin{bmatrix} 3 \\
0 \end{bmatrix}\\
b &= \begin{bmatrix} 1 \\
-2 \end{bmatrix}\end{align}
Can anyone give the final vector that goes into the sigmoid function ?
I think the math is
$$ \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} -3 & 3 \\ 2 & 0 \end{bmatrix} + \begin{bmatrix} 1 \\ -2\end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 4 & 6\end{bmatrix}+ \begin{bmatrix} 1 \\ -2\end{bmatrix} = \text{ Something wrong}$$
AI: Note that $$[h_{t-1}, x_t]$$
is the concatenation of two vectors.
In your example, it would be: $$[h_{t-1}, x_t] = [-1, 2 , 3, 0]$$
and then the dimensions of $W_f$ would be $2 \times 4$, where $2$ is the dimension of the output of the LSTM cell, i.e. the activation $h_t$, that you defined to be of dimension $2$.
Hence, $$W_f \cdot [h_{t-1}, x_t] $$ is a multiplication of a matrix of dimension $2\times4$ by a vector of dimension $4$, which will return a vector of dimension $2$. And then the sigmoid function will be applied pointwise to each of the two elements of the result.
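To make it concrete, here is a small numeric sketch in numpy. Note that the $2\times 2$ $W$ from the question cannot be used directly; the $2\times 4$ matrix $W_f$ below is an assumed example, not a value implied by the question:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h_prev = np.array([-1.0, 2.0])
x_t    = np.array([3.0, 0.0])
b_f    = np.array([1.0, -2.0])
W_f    = np.array([[0.0, 1.0, 2.0, 3.0],          # assumed values, shape (2, 4)
                   [1.0, 0.0, -1.0, 2.0]])

concat = np.concatenate([h_prev, x_t])            # [-1, 2, 3, 0]
f_t = sigmoid(W_f @ concat + b_f)                 # the 2-dimensional forget gate vector
print(f_t)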
Hope it makes sense. |
H: How to set limits of Y-axes in countplot?
df in my program happens to be a dataframe with these columns :
df.columns
'''output : Index(['lat', 'lng', 'desc', 'zip', 'title', 'timeStamp', 'twp', 'addr', 'e',
'reason'],
dtype='object')'''
When I execute this piece of code:
sns.countplot(x = df['reason'], data=df)
# output is the plot below
but if i slightly tweak my code like this :
p = df['reason'].value_counts()
k = pd.DataFrame({'causes':p.index,'freq':p.values})
sns.countplot(x = k['causes'], data = k)
So essentially I just stored the 'reasons' column values and its frequencies as a series in p and then converted them to another dataframe k but this new countplot doesn't have the right range of Y-axis for the given values.
My doubts happen to be :
Can we set the Y-axis of the second countplot to its appropriate limits?
Why does the second countplot differ from the first one when I just separated out the specific column I wanted to graph and plotted it separately?
AI: Countplot from seaborn will not work as you expect. When you calculate the frequencies, you want to plot the values in p.values as they appear. Countplot will take a dataframe where labels are not aggregated and then count each one of them, as it did in the first case.
So countplot will be appropriate for the case where your dataframe looks like:
index | reason |
0 EMS
1 EMS
2 Traffic
3 Fire
4 Fire
5 EMS
6 Traffic
...
In the second case you already have your frequencies:
reason | count |
EMS 10
Traffic 21
Fire 15
Then countplot will just count the rows, and since there is only one row per reason, every bar has height one; that is why your plot looks like that.
To solve your problem you could just plot using .plot from pandas:
df['reason'].value_counts(normalize=True).plot(kind='bar')
Where the parameter normalize=True will show normalized frequencies instead of raw count values. |
H: Grid search model isn't recognized as fitted for Graphviz
I find this really weird, and the code is really straightforward.
What am I doing wrong?
from sklearn.model_selection import GridSearchCV
scoring_type="accuracy"
preprocess_data(X,y,0)
p_grid = {'min_samples_split':np.arange(2,10),'min_samples_leaf': np.arange(1,10)}
Tree_opt = GridSearchCV(estimator=DecisionTreeClassifier(random_state=42), param_grid=p_grid, scoring=scoring_type, cv=10)
Tree_opt.fit(X_train,y_train)
print("Best training params: {}".format(Tree_opt.best_params_))
print("Best training Score: {}".format(Tree_opt.best_score_))
dot_data = tree.export_graphviz(Tree_opt, out_file=None,feature_names=labels,class_names=class_names,filled=True, rounded=True,special_characters=True)
graph = graphviz.Source(dot_data)
graph
print(Tree_opt.score(X_test, y_test))
And I'm getting the following error:
NotFittedError Traceback (most recent call last)
<ipython-input-14-1c7fa906f99b> in <module>
8 print("Best training params: {}".format(Tree_opt.best_params_))
9 print("Best training Score: {}".format(Tree_opt.best_score_))
---> 10 dot_data = tree.export_graphviz(Tree_opt, out_file=None,feature_names=labels,class_names=class_names,filled=True, rounded=True,special_characters=True)
11 graph = graphviz.Source(dot_data)
12
~\Anaconda3\lib\site-packages\sklearn\tree\export.py in export_graphviz(decision_tree, out_file, max_depth, feature_names, class_names, label, filled, leaves_parallel, impurity, node_ids, proportion, rotate, rounded, special_characters, precision)
394 out_file.write('%d -> %d ;\n' % (parent, node_id))
395
--> 396 check_is_fitted(decision_tree, 'tree_')
397 own_file = False
398 return_string = False
~\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_is_fitted(estimator, attributes, msg, all_or_any)
949
950 if not all_or_any([hasattr(estimator, attr) for attr in attributes]):
--> 951 raise NotFittedError(msg % {'name': type(estimator).__name__})
952
953
NotFittedError: This GridSearchCV instance is not fitted yet. Call 'fit' with appropriate arguments before using this method.
I also have this warning, which I don't think is important:
C:\Users\Flow\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal.
DeprecationWarning)
I think the fit works because I can get the first prints as follows:
Best training params: {'min_samples_leaf': 3, 'min_samples_split': 2}
Best training Score: 0.8809523809523809
AI: Try using Tree_opt.best_estimator_. The grid search object passes the fit and score functions through to the best estimator when refit=True, but other (especially more specific) methods need to be called directly on the best estimator object.
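For example, re-using the export call from the question:
dot_data = tree.export_graphviz(Tree_opt.best_estimator_, out_file=None,
                                feature_names=labels, class_names=class_names,
                                filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data) |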
H: LDA as a dimensionality reducer
I know how to use LDA as a classifier.
But how to use Linear Discriminant Analysis as a dimensionality reducer to reduce the number of features and apply logistic regression on top of it.
I am using R language.
AI: We can do LDA via the lda function from the MASS package. The reduced dimension data can be computed as follows:
# Model Discriminant Analysis
library( MASS )
model = lda( class ~ ., data = X_train )
# Ploting LDA Model
projected_data = as.matrix( X_train[, 1:18] ) %*% model$scaling
You can then feed the projected_data into another supervised learning method.
Credit: The code is obtained from here. |
H: Transform an Autoencoder to a Variational Autoencoder?
I would like to compare training an autoencoder and a variational autoencoder. I have already run the training using the AE. I would like to know if it's possible to transform this AE into a VAE and keep the same inputs and outputs.
Thank you.
AI: Yes. Two changes are required to convert an AE to VAE, which shed light on their differences too. Note that if an already-trained AE is converted to VAE, it requires re-training, because of the following changes in the structure and loss function.
Network of AE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{f}{\rightarrow}}^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$
where
$x$ denotes the input (vector, matrix, etc.) to the network, $\hat{x}$ denotes the output (reconstruction of $x$),
$z$ denotes the latent output that is calculated from its previous layer $y$ as $z=f(y)$.
And $f$, $g$, and $h$ denote non-linear functions such as $f(y) = \mbox{sigmoid}(Wy+B)$, $\mbox{ReLU}$, $\mbox{tanh}$, etc.
These two changes are:
Structure: we need to add a layer between $y$ and $z$. This new layer represents mean $\mu=g(y)$ and standard deviation $\sigma=h(y)$ of Gaussian distributions. Both $\mu$ and $\sigma$ must have the same dimension as $z$. Every dimension $d$ of these vectors corresponds to a Gaussian distribution $N(\mu_d, \sigma_d^2)$, from which $z_d$ is sampled. That is, for each input $x$ to the network, we take the corresponding $\mu$ and $\sigma$, then pick a random $\epsilon_d$ from $N(0, 1)$ for every dimension $d$, and finally compute $z=\mu+\sigma \odot \epsilon$, where $\odot$ is element-wise product. As a comparison, $z$ in AE was computed deterministically as $z=f(y)$, now it is computed probabilistically as $z=g(y)+h(y)\odot \epsilon$, i.e. $z$ would be different if $x$ is tried again. The rest of network remains unchanged. Network of VAE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{g,h}{\rightarrow}(\mu, \sigma) \overset{\mu+\sigma\odot \epsilon}{\rightarrow} }^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$
Objective function: we want to enforce our assumption (prior) that the distribution of factor $z_d$ is centered around $0$ and has a constant variance (this assumption is equivalent to parameter regularization). To this end, we add a penalty per dimension $d$ that punishes any deviation of latent distribution $q(z_d|x) = N(\mu_d, \sigma_d^2)$$= N(g_d(y), h_d(y)^2)$ from unit Gaussian $p(z_d)=N(0, 1)$. In practice, KL-divergence is used for this penalty. At the end, the loss function of VAE becomes: $$L_{VAE}(x,\hat{x},\mu,\sigma) = L_{AE}(x, \hat{x}) + \overbrace{\frac{1}{2} \sum_{d=1}^{D}(\mu_d^2 + \sigma_d^2 - 2\mbox{log}\sigma_d - 1)}^{KL(q \parallel p)}$$
where $D$ is the dimension of $z$.
Side notes
In practice, since $\sigma_d$ can get very close to $0$, $\mbox{log}\sigma_d$ in objective function can explode to large values, so we let the network generate $\sigma'_d = \mbox{log}\sigma_d = h_d(y)$ instead, and then use $\sigma_d = exp(h_d(y))$. This way, both $\sigma_d=exp(h_d(y))$ and $\mbox{log}\sigma_d=h_d(y)$ would be numerically stable.
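As a framework-agnostic sketch, the two VAE-specific pieces (the sampling step and the KL penalty above) can be written in plain numpy as follows, using the $\mbox{log}\sigma$ parameterization just described:
import numpy as np

def sample_z(mu, log_sigma):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1)
    eps = np.random.randn(*mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_penalty(mu, log_sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) summed over the latent dimensions
    return 0.5 * np.sum(mu**2 + np.exp(2 * log_sigma) - 2 * log_sigma - 1)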
The name "variational" comes from the fact that we assumed (1) each latent factor $z_d$ is independent of other factors, i.e. we ignore other $(\mu_{d'}, \sigma_{d'})_{d' \neq d}$ when we sample $z_d$, and (2) $z_d$ follows a Gaussian distribution. In other words, $q(z|x)$ is a simplified variation to the true (and probably a more complex) distribution $p(z|x)$. |
H: What's the right way to setup an image classifier by multiple params?
I'm very new to the data science and machine learning, so apologies for my ignorance.
What I'm trying to understand is how to setup an image classifier system (maybe based on CNN) which will classify my image by multiple params. Most of the examples I found are about classifying by single class, i.e. "cat", "dog", "horse", etc, but what I'd like to have is, for example, {"red", "dog", "tongue"}. Is there a simple way to do it? The best option would be to have a ready setup, so I can just change their test dataset with mine and see the right formatting. Thanks!
UPDATE:
Also please help me understand if it's a complicated task for an experienced machine learning engineer? What'd be the timing and cost given a dataset?
AI: You normally one-hot encode your labels so that every possible attribute gets its own binary representation. So if you have 10 attributes they would be represented as [attr1, attr2, attr3, ..., attr10], with values either 1 if the attribute is present, or 0 if it is not.
Then you train a network with the same number of output neurons as possible categories and use a sigmoid activation function. |
H: Do recommendation systems necessarily use machine learning algorithms?
I am studying about evaluation of both recommendation systems and machine learning algorithms in recent times, trying to define a scope for my masters research. After some reading time I'm starting to understand several concepts, but one thing was not clear to me:
Do recommendation systems necessarily use machine learning algorithms?
I mean, I know these two can be used combined, but in most of the papers I read about recommender systems evaluation, they do not even mention anything about Machine Learning.
Also, if you can suggest some papers that I can read, I would be very grateful
AI: There's nothing about a recommendation system that absolutely necessitates some kind of machine learning. Indeed, I've seen decision systems in use that were essentially just someone's idea about what the customer's preferences ought to be.
A recommender can be based on anything from a few ad-hoc 'common sense' rules, to a logistic regression someone did on some data a few years ago and whose parameters are hardcoded into the system, to a complicated ensemble of machine-learning algorithms that are regularly and constantly trained on new data.
The use of machine learning for recommender systems is partly driven by necessity, partly by fad (at least from what I have seen). If a simple recommender works well, and accurately predicts what the user wants, there's no need for a machine to learn anything. If there's a huge amount of data, hiding some very deep relationships that humans are unable to pick out, that's where machine learning becomes useful. |
H: Sequence extraction in a dataset
I am looking for a way to extract sequences/patterns from a dataset such as this one:
dataset = ['sample1', 'sample2', 'sample3', 'sample1', 'sample2', 'sample3', 'sample3', 'sample2'...]
And my goal is to know that the sequence ['sample1', 'sample2', 'sample3'] occurs 2 times in this dataset.
Ideally, I would also like to know all sequences that occur more than once in my dataset.
Is there a library (sklearn...) that could help me do that or do I just have to iterate over my dataset and test each and every possible combination? I assume there must be a more intelligent way to do that.
Thanks for your help!
AI: You can use nltk.util.ngrams for ngram extraction.
See an example below:
To extract bigrams:
dataset = ['sample1', 'sample2', 'sample3', 'sample1', 'sample2', 'sample3',\
'sample3', 'sample2', 'sample2', 'sample3', 'sample1', 'sample2']
from nltk.util import ngrams
import collections
bigrams = ngrams(dataset, 2)
result = collections.Counter(bigrams)
result.most_common()
Out[1]:
[(('sample1', 'sample2'), 3),
(('sample2', 'sample3'), 3),
(('sample3', 'sample1'), 2),
(('sample3', 'sample3'), 1),
(('sample3', 'sample2'), 1),
(('sample2', 'sample2'), 1)]
To extract trigrams:
trigrams = ngrams(dataset, 3)
result = collections.Counter(trigrams)
result.most_common()
Out[2]:
[(('sample1', 'sample2', 'sample3'), 2),
(('sample2', 'sample3', 'sample1'), 2),
(('sample3', 'sample1', 'sample2'), 2),
(('sample2', 'sample3', 'sample3'), 1),
(('sample3', 'sample3', 'sample2'), 1),
(('sample3', 'sample2', 'sample2'), 1),
(('sample2', 'sample2', 'sample3'), 1)]
Fourgrams:
fourgrams = ngrams(dataset, 4)
result = collections.Counter(fourgrams)
result.most_common()
Out[3]:
[(('sample2', 'sample3', 'sample1', 'sample2'), 2),
(('sample1', 'sample2', 'sample3', 'sample1'), 1),
(('sample3', 'sample1', 'sample2', 'sample3'), 1),
(('sample1', 'sample2', 'sample3', 'sample3'), 1),
(('sample2', 'sample3', 'sample3', 'sample2'), 1),
(('sample3', 'sample3', 'sample2', 'sample2'), 1),
(('sample3', 'sample2', 'sample2', 'sample3'), 1),
(('sample2', 'sample2', 'sample3', 'sample1'), 1)]
...
You only need to specify the length of your n-grams and what number of repetitions is representative in your case.
Hope this helps! |
H: Using pandas get_dummies() on real world unseen data
I made a ML model, trained and tested it with my data containing categorical variables.
To create dummy variables I used pd.get_dummies() before the split.
I now want to use my model on previously unseen data where, of course, I need to re create my dummies. Should I do it still with pd.get_dummies()? In this way isn't the encoding lost? Any suggestion on how to do it?
Thanks
AI: Yes, the encoding would be lost. You should instead use sklearn OneHotEncoder and save the corresponding encoder instance so that you can re-load it on unseen data.
One can do something along these lines:
import pandas as pd
import pickle
from sklearn.preprocessing import OneHotEncoder
def get_encoder_inst(feature_col, file_name):
"""
returns: an instance of sklearn OneHotEncoder fit against a (training) column feature;
such instance is saved and can then be loaded to transform unseen data
"""
assert isinstance(feature_col, pd.Series)
feature_vec = feature_col.sort_values().values.reshape(-1, 1)
enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(feature_vec)
with open(file_name, 'wb') as output_file:
pickle.dump(enc, output_file)
return enc
which can then be loaded and applied as
def get_one_hot_enc(feature_col, enc):
"""
maps an unseen column feature using one-hot-encoding previously fit against training data
returns: a pd.DataFrame of newly one-hot-encoded feature
"""
assert isinstance(feature_col, pd.Series)
assert isinstance(enc, OneHotEncoder)
unseen_vec = feature_col.values.reshape(-1, 1)
encoded_vec = enc.transform(unseen_vec).toarray()
encoded_df = pd.DataFrame(encoded_vec)
return encoded_df
where the argument enc in the latter function is an instance of OneHotEncoder that you load through pickle.load. Of course the above is just a pseudo-code example, take care that all the objects that you are using keep the initial shapes and so on.
The problem with using pd.get_dummies is that it has no memory of the previously mapped encoding: it basically turns a column into factors, whereas OneHotEncoder actually maps categorical variables to a fixed-length vector representation that is stored and kept. |
H: Proper Understanding of Condensed Nearest Neighbor
I have a question regarding the Condensed Nearest Neighbors algorithm:
Why am I returning Z, which if I understand correctly, is the array of all of the misclassified points? Wouldn't I want to return the points that were classified correctly? What benefit does this give me in returning all the points I got wrong?
AI: Condensed Nearest Neighbors algorithm helps to reduce the dataset X for k-NN classification. It constructs a subset of examples which are able to correctly classify the original data set using a 1-NN algorithm.
It is returning not the array of misclassified points, but a subset Z of the data set X.
CNN works like that:
1) Scan all elements of X, looking for an element x whose nearest prototype from Z has a different label than x
2) Remove x from X and add it to Z
3) Repeat the scan until no more prototypes are added to Z
Z is then used instead of X for kNN classification.
An advantage of this method is reduced execution time and lower space complexity, since classification runs against the much smaller set Z.
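If you just want to apply the algorithm rather than implement it yourself, here is a small sketch using the imbalanced-learn implementation (the synthetic data below is only for illustration):
from sklearn.datasets import make_classification
from imblearn.under_sampling import CondensedNearestNeighbour

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

cnn = CondensedNearestNeighbour(random_state=0)
X_condensed, y_condensed = cnn.fit_resample(X, y)  # this condensed set plays the role of Z

print(X.shape, X_condensed.shape)  # the condensed set is much smaller than X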
H: Is Gradient Descent central to every optimizer?
I want to know whether Gradient descent is the main algorithm used in optimizers like Adam, Adagrad, RMSProp and several other optimizers.
AI: No. Gradient descent is used in optimization algorithms that use the gradient as the basis of their step movement. Adam, Adagrad, and RMSProp all use some form of gradient descent; however, they do not make up every optimizer. Evolutionary algorithms such as Particle Swarm Optimization and Genetic Algorithms are inspired by natural phenomena and do not use gradients. Other algorithms, such as Bayesian Optimization, draw inspiration from statistics.
Check out this visualization of Bayesian Optimization in action:
There are also a few algorithms that combine concepts from evolutionary and gradient-based optimization.
Non-derivative based optimization algorithms can be especially useful in irregular non-convex cost functions, non-differentiable cost functions, or cost functions that have a different left or right derivative.
To understand why one may choose a non-derivative based optimization algorithm, take a look at the Rastrigin benchmark function. Gradient-based optimization is not well suited for optimizing functions with so many local minima.
H: How to transform entire pandas data frame in one hot representation?
I want all the columns one hot encoded without the need of listing out the columns or apply one hot encode one by one. I know how to do it one column then another.
AI: You can use pandas.get_dummies.
get_dummies will only convert string/object columns and will keep numerical columns as they are. You can first convert categorical numeric columns into string type and then apply get_dummies.
concated_dataset['1stFlrSF'] = concated_dataset['1stFlrSF'].astype(str)
pd.get_dummies(concated_dataset)
H: How do Bayesian methods do automatic feature selection?
Someone asked me this question and I do not know I answered it correctly.
I answered the question in the following way: One type of Bayesian method is Bayesian inference and feature selection has to do with ${L}^{1}$ regularization because it is used extensively for this purpose. So, for ${L}^{1}$ regularization, the penalty $\alpha \Omega (\boldsymbol{w}) = \alpha \sum_{i} |w_{i}|$ used to regularize a cost function is equivalent to the log-prior term that is maximized by MAP Bayesian inference when the prior is an isotropic Laplace distribution.
But my question is this an automatic feature selection? ${L}^{1}$ regularization finds the specific subset of the available features to be used. Also is my answer correct to this question? I am just curious to know if my line of thinking makes sense or if it does not. If it does not, please let me know why. Thanks
AI: Your interpretation of L1 regularization sounds right. Is it used to perform feature selection? Yes, in the broad sense that this 'encourages' coefficients in the linear model to be 0, and those features with 0 coefficients are not used and can be removed.
Of course, this assumption about the prior distribution of coefficients is just an assumption you're adding. You are asserting that you need a lot of evidence to believe the coefficients are nonzero. I'm not sure it's the L1 regularization that's doing the feature selection, but your assumption, implemented by L1 regularization, that's leading you to conclude that some features do not contribute to the model. |
H: How to train more models on 2 GPUs with Keras?
I got 2 GPUs of type NVIDIA GTX 1070 Ti. I would like to train more models on them in such a way that half of the models are trained on one GPU only, and half on the other, at the same time. So as training goes, one model goes to GPU1, the next model goes to GPU2, ... I don't want to train one model on the two GPUs. I use Keras - Python with TensorFlow back-end. Can you please recommend resources where I can see how to do this? Most examples/articles online cover the case only if you want to distribute one model on the two GPUs. Thank you.
AI: I would just create two separate scripts with one set of models that target one gpu and the other set of models target the other gpu. Then run the scripts as separate processes.That would easily get around Python's GIL. |
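A minimal sketch of that idea (the script names and the model-building code are placeholders):
# train_gpu0.py
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before importing TensorFlow/Keras

# import keras, then build and fit the first half of the models as usual ...

# train_gpu1.py is identical except for:
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# then run both scripts as separate processes, e.g.:
#   python train_gpu0.py & python train_gpu1.py &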
H: How to calculate mean and standard deviation of all features in a class identified by k-nearest neighbors?
I have classified my data into several neighborhoods using k nearest neighbors. I need to efficiently calculate the mean and standard deviation for all features of data points belonging to a particular neighborhood.
I am using sklearn.kneighbors.
AI: If you append the predicted neighbourhood onto your data df (let's call this neighbourhood), then using groupby and transform within a loop should do the trick.
As an example:
features = ['var_1', 'var_2', ...]  # a list of the feature column names to run over
for col in features:
    df[col+'_mean'] = df.groupby('neighbourhood')[col].transform('mean')
    df[col+'_std'] = df.groupby('neighbourhood')[col].transform('std')
H: Doc2vec for text classification task
Can I use doc2vec for classification big documents (500-2000 words, 20000 total documents, classication for three classes)? Is it a problem that the documents are large enough and contain many common words? Can I train my data together with Wikipedia articles (using unique tag for every article) for a more accurate calculation word-embedings, or it can't give positive effect?
AI: Can I use doc2vec for classification big documents (500-2000 words,
20000 total documents, classification for three classes)?
Yes. This amount of documents is considered small to medium. It is worth noting that the original Doc2Vec paper experimented with 75K documents.
Is it a problem that the documents are large enough and contain many common words?
No, it is OK. Consider the fact that both common and distinctive words increase with the size of documents, and the distinctive words are what matter. That is, larger texts are easier to distinguish.
Can I train my data together with Wikipedia articles (using unique tag
for every article) for a more accurate calculation word-embeddings, or
it can't give positive effect?
It depends and is worth the try. If Wikipedia documents are close in format and content to your documents, they could definitely help. In the Doc2Vec paper, authors used 25K labeled documents (IMDb reviews with positive and negative labels) and 50K unlabeled ones for training.
Also, this is a nice tutorial on using doc2vec. It reports 1-2 hours for 100K documents. |
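As a rough gensim sketch (the hyperparameters are only illustrative, and docs is assumed to be your list of (tokenized_text, label) pairs):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

documents = [TaggedDocument(words=tokens, tags=[str(i)])
             for i, (tokens, _) in enumerate(docs)]

model = Doc2Vec(documents, vector_size=100, window=5, min_count=2, epochs=20)

# infer a fixed-length vector for any document, then feed these vectors
# into a standard classifier (e.g. logistic regression) for your three classes
vec = model.infer_vector(docs[0][0])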
H: Using SMOTE for Synthetic Data generation to improve performance on unbalanced data
I presently have a dataset with 21392 samples, of which, 16948 belong to the majority class (class A) and the remaining 4444 belong to the minority class (class B). I am presently using SMOTE (Synthetic Minority Over-Sampling Technique) to generate synthetic data, but am confused as to what percentage of synthetic samples should be generated ideally for ensuring good classification performance of Machine Learning/Deep Learning models.
I have a few options in mind:
1. The first option is to generate 21392 new samples, with 16904 majority samples of class A and the remaining 4488 minority samples of class B. Then, merge the original and synthetically generated new samples. However, the key drawback I believe is that the percentage of minority samples in my overall dataset (original+new) would remain more or less the same, which I think defeats the purpose of oversampling the minority samples.
2. The second option is to generate 21392 new samples, with 16904 majority and the remaining 4488 minority samples. Then, only merge the original data with the newly generated minority samples of the new data. This way, the percentage of minority (class B) samples in my overall data would increase (from 4444/21392 = 20.774 % to (4444+4488)/(21392+4488) = 34.513 %). This I believe is the purpose of SMOTE (to increase the number of minority samples and reduce the imbalance in the overall dataset).
I am fairly new to using SMOTE, and would highly appreciate any suggestions/comments on which of these 2 options do you find better, or any other option which I may consider alongside.
AI: First of all, you have to split your data set into train/test sets before doing any over/under sampling. If you apply either of your strategies and then split the data, you will bias your model: synthetic points that do not exist in reality will leak into your future test set, and your score estimates will be unreliable.
After splitting your data, you apply SMOTE only on the train set. If you use SMOTE from imblearn, it will automatically balance the classes for you. You can also use the sampling_strategy parameter to change that if you don't want perfect balancing, or try different strategies.
https://imbalanced-learn.readthedocs.io/en/stable/over_sampling.html#smote-adasyn
So, basically, you would have something like this:
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_resampled, y_resampled = SMOTE().fit_resample(X_train, y_train)
Then you continue fitting your model on X_resampled, y_resampled. Above, X is your features matrix and y is your target labels.
H: Does tensorflow pick samples from a dataset randomly or sequentially when training?
I have a dataset which consists of more than 10000 images, but similar images are grouped together. I mean the first 50 images are very alike, then the next 50 images are different from the first 50 (not as similar to each other as the first ones; I am talking about guns specifically), but they are still similar among themselves. If I choose a batch size of 50, will it lead to worse results? Or does it pick random subsets of the dataset to train on at a time?
I am new at deep learning, so sorry if the answer is obvious.
AI: Welcome to the site! My understanding is that it does it sequentially. It's usually a good idea to shuffle your dataframe as a pre-processing step and then train on that. |
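Two common ways to shuffle (just sketches, not tied to your exact pipeline):
# 1) shuffle a pandas dataframe before training
df = df.sample(frac=1, random_state=42).reset_index(drop=True)

# 2) shuffle inside a tf.data pipeline; buffer_size should ideally be large
#    enough to cover the grouped structure (images and labels are placeholders here)
import tensorflow as tf
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
dataset = dataset.shuffle(buffer_size=10000).batch(50)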
H: Do I need to encode the target variable for sklearn logistic regression
I'm trying to get familiar with the sklearn library, and now I'm trying to implement logistic regression for a dataframe containing numerical and categorical values to predict a binary target variable.
While reading some documentation I found the logistic regression should be used to predict binary variables presented by 0 and 1.
My target variable is "YES" and "NO", should I code it to 0 and 1 for the algorithm to work properly, or there is no difference?
Maybe I just didn't get the idea but can someone confirm this to me.
AI: The string labels work just fine, here is an example:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import numpy
X, y = load_iris(return_X_y=True)
y_string = numpy.array(['YES' if label == 1 else 'NO' for label in y])
clf = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(X, y_string)
y_pred = clf.predict(X[50:100, :])
print(y_pred)
Output:
['NO' 'NO' 'NO' 'YES' 'NO' 'YES' 'NO' 'YES' 'NO' 'NO' 'YES' 'NO' 'YES'
'NO' 'NO' 'NO' 'NO' 'YES' 'YES' 'YES' 'NO' 'NO' 'YES' 'YES' 'NO' 'NO'
'YES' 'NO' 'NO' 'YES' 'YES' 'YES' 'YES' 'YES' 'NO' 'NO' 'NO' 'YES' 'NO'
'YES' 'YES' 'NO' 'YES' 'YES' 'YES' 'NO' 'NO' 'NO' 'YES' 'NO']
Yo can replace y_string to y for the numerical example. |
H: Positive semidefinite kernel matrix from Gower distance
I have a dataframe with continuous and categorical variables and I want to obtain a kernel matrix for classification. The kernel matrix must be symmetric and positive semidefinite, so that no eigenvalue is negative. I started with Gower distance matrix for mixed data, which is not positive semidefinite. I tried to transform the Gower distance matrix into a positive semidefinite and symmetric kernel with the function D2Ksof MiRVpackage in R, with no success. I tried also to apply the approach of page 799 in Zhao, Ni et al. “Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test” American journal of human genetics vol. 96,5 (2015): 797-807. with no success, as well. I always obtain a indefinite kernel matrix with positive and negative eigenvalues. Any suggestion?
AI: You may use nearPD in R to convert a matrix to its Nearest Positive Definite counterpart. |
H: Validation vs. test vs. training accuracy. Which one should I compare for claiming overfit?
I have read on the several answers here and on the Internet that cross-validation helps to indicate that if the model will generalize well or not and about overfitting.
But I am confused that which two accuracies/errors amoung test/training/validation should I compare to be able to see if the model is overfitting or not?
For example:
I divide my data for 70% training and 30% test.
When I get to run 10 fold cross-validation, I get 10 accuracies that I can take the average/mean of. should I call this mean as validation accuracy?
Afterward, I test the model on 30% test data and get Test Accuracy.
In this case, what will be training accuracy? And which
two accuracies should I compare to see if the model is overfitting or not?
AI: Which two accuracies I compare to see if the model is overfitting or not?
You should compare the training and test accuracies to identify over-fitting. A training accuracy that is subjectively far higher than test accuracy indicates over-fitting.
Here, "accuracy" is used in a broad sense, it can be replaced with F1, AUC, error (increase becomes decrease, higher becomes lower), etc.
I suggest "Bias and Variance" and "Learning curves" parts of "Machine Learning Yearning - Andrew Ng". It presents plots and interpretations for all the cases with a clear narration.
When I get to run 10 fold cross-validation, I get 10 accuracies that I
can take the average/mean of. should I call this mean as validation
accuracy?
No. It is a [estimate of] test accuracy.
The difference between validation and test sets (and their corresponding accuracies) is that validation set is used to build/select a better model, meaning it affects the final model. However, since 10-fold CV always tests an already-built model on its 10% held-out, and it is not used here to select between models, its 10% held-out is a test set not a validation set.
Afterward, I test the model on 30% test data and get Test Accuracy.
If you don't use the K-fold to select between multiple models, this part is not needed, run K-fold on 100% of data to get the test accuracy. Otherwise, you should keep this test set, since the result of K-fold would be a validation accuracy.
In this case, what will be training accuracy?
From each of 10 folds you can get a test accuracy on 10% of data, and a training accuracy on 90% of data. In python, method cross_val_score only calculates the test accuracies. Here is how to calculate both:
from sklearn import model_selection
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1)
scores = model_selection.cross_validate(clf, iris.data, iris.target, cv=5, return_train_score=True)
print('Train scores:')
print(scores['train_score'])
print('Test scores:')
print(scores['test_score'])
Set return_estimator = True to get the trained models too.
More on validation set
Validation set shows up in two general cases: (1) building a model, and (2) selecting between multiple models,
Two examples for building a model: we (a) stop training a neural network, or (b) stop pruning a decision tree when accuracy of model on validation set starts to decrease. Then, we test the final model on a held-out set, to get the test accuracy.
Two examples for selecting between multiple models:
a. We do K-fold CV on one neural network with 3 layers, and one with 5 layers (to get K models for each), then we select the NN with the highest validation accuracy averaged over K models; suppose the 5 layer NN. Finally, we train the 5 layer NN on a 80% train, 20% validation split of combined K folds, and then test it on a held out set to get the test accuracy.
b. We apply two already-built SVM and decision tree models on a validation set, then we select the one with the highest validation accuracy. Finally, we test the selected model on a held-out set to get the test accuracy. |
H: Partial least squares (PLS)
I am relatively new to Orange, trying to utilise it for linear regression, in particular partial least squares (PLS). My statistics knowledge is in the moment not good enough to know whether I could compose an equivalent by combinding PCA with ordinary linear regression, but I would anyhow have expected a dedicated PLS widget - or as an option in the Linear regression widget.
I even found a description of PLS functionality in Orange2 here, but this is not available (or moved to where I don't find it) in Orange3.
Well, I have a python/numpy PLS implementation that I could probably adapt into a python script block, but is this necessary? Please tell me that it's just a mouse-click/import away.
AI: Well, I have a python/numpy PLS implementation that I could probably
adapt into a python script block, but is this necessary? Please tell
me that it's just a mouse-click/import away.
Welcome to the site! I wish I could tell you that what you want is a simple point-and-click but I don't think that it will be.
Generally speaking, most people use Orange as a tool for the visualization layer while you are looking for a tool for your modeling layer. Python has tons and tons of tools that you can use for modeling and should fit what you need. So, while that will be more steps for you, it will ultimately provide you more flexibility for other projects you may do in the future; just use Orange for the visualizations. |
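For instance, scikit-learn already ships a PLS implementation, so a minimal sketch (assuming X and y are the numpy arrays you load from Orange or elsewhere) would be:
from sklearn.cross_decomposition import PLSRegression

pls = PLSRegression(n_components=2)  # number of latent components to keep
pls.fit(X, y)
y_pred = pls.predict(X)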
H: Looking for help calculating a probability formula
How do I put this into a calculator or a excel spreadsheet formula. I have never done this math before, but I want to figure it out.
AI: Ok, I am not sure if I understood it correctly. But if it is only the right-side figure equation it would be:
=1/(1+EXP(3.1058))
(I have tested on LibreOffice only, but it should work on excel as well)
Now, if you want to do something more complex using the prior knowledge of event A, it should be more clear what is exactly this event and how it is related to the equation.
(I am really not sure if this is what you want, so if it is not, I apologize)
H: Why do I get an OOM error although my model is not that large?
I am a newbie in GPU based training and deep learning models. I am running cDCGAN (Conditional DCGAN) in TensorFlow on my 2 Nvidia GTX 1080 GPUs. My data set consists of around 320,000 images with size 64*64 and 2,350 class labels. If I set my batch size 32 or larger I get an OOM error like below. So I am using 10 batch size for now.
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,64,64,2351] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: discriminator/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_Placeholder_0_0/_41, _arg_Placeholder_3_0_3/_43, discriminator/concat/axis)]]
Caused by op 'discriminator/concat', defined at:
File "cdcgan.py", line 221, in <module>
D_real, D_real_logits = discriminator(x, y_fill, isTrain)
File "cdcgan.py", line 48, in discriminator
cat1 = tf.concat([x, y_fill], 3)
The training is very slow, which I understand is down to the batch size (correct me if I am wrong). If I run watch -n 1 nvidia-smi, I get the following output:
The GPU:0 is mainly used, as the Volatile GPU-Util gives me around 0%-65% whereas GPU:1 is always 0%-3% max. Performance for GPU:0 is always in P2 whereas GPU:1 is mostly P8 or sometimes P2. I have the following questions.
Why am I getting an OOM error with a large batch size although my dataset and model are not that big?
How can I utilize both GPUs equally in TensorFlow so that the performance is fast? (From the above error, it looks like GPU:0 gets full immediately whereas GPU:1 is not fully utilized. It is my understanding only).
Model details are as follows:
Generator:
I have 4 layers (fully connected, UpSampling2d-conv2d, UpSampling2d-conv2d, conv2d).
W1 is of the shape [X+Y, 16*16*128] i.e. (2450, 32768), w2 [3, 3, 128, 64], w3 [3, 3, 64, 32], w4 [3, 3, 32, 1] respectively
Discriminator
It has five layers (conv2d, conv2d, conv2d, conv2d, fully connected).
w1 [5, 5, X+Y, 64] i.e. (5, 5, 2351, 64), w2 [3, 3, 64, 64], w3 [3, 3, 64, 128], w4 [2, 2, 128, 256], w5 [16*16*256, 1] respectively.
Session Configuration
I am also allocating memory in advance via
gpu_options = tf.GPUOptions(allow_growth=True)
session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
AI: Why I am getting OOM error on the large batch size although my dataset and model are not that big?
Yes, the batch size is probably the reason.
Also, another reason is that you don't use the second GPU at all (otherwise, both GPUs would split the batch computation and you could use larger batches).
How can I utilize both GPUs equally in TensorFlow so that the performance is fast? (From the above error, it looks like GPU:0 gets full immediately whereas GPU:1 is not fully utilized. This is my understanding only.)
By default, Tensorflow occupies all available GPUs (that's way you see it with nvidia-smi - you have a process 34589 which took both GPUs), but, unless you specify in the code to actually use multi GPUs, it will use only one by default.
Here is an official TF docs about how to use multi GPUs: https://www.tensorflow.org/guide/using_gpu#using_multiple_gpus
https://www.tensorflow.org/guide/gpu#using_multiple_gpus
Here is some tutorial for multi-gpu usage with more examples: https://jhui.github.io/2017/03/07/TensorFlow-GPU/ |
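A rough TF1-style sketch of pinning parts of the graph to specific GPUs (the helper functions below are placeholders, not your actual code):
import tensorflow as tf

with tf.device('/gpu:0'):
    d_loss = build_discriminator_loss()   # hypothetical helper building the discriminator graph

with tf.device('/gpu:1'):
    g_loss = build_generator_loss()       # hypothetical helper building the generator graph

config = tf.ConfigProto(allow_soft_placement=True,
                        gpu_options=tf.GPUOptions(allow_growth=True))
sess = tf.Session(config=config)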
H: Stacked barchart, bottom parameter triggers Error: Shape mismatch: objects cannot be broadcast to a single shape
I am working in python3 and I want to obtain a stacked barchart plot, showing three different variables on 5 different columns.
My code works fine if I do not add the 'bottom parameter' in plt.bar (but I need to, in order for the stacks to appear in the correct order):
import numpy as np
import matplotlib as plt
columns=['a','b','c','d','e']
pos = np.arange(5)
var_one=[40348,53544,144895,34778,14322,53546,33623,76290,53546]
var_two=[15790,20409,87224,22085,6940,27099,17575,41862,27099]
var_three=[692,3254,6645,1237,469,872,569,3172,872]
plt.bar(pos,var_one,color='green',edgecolor='green')
plt.bar(pos,var_two,color='purple',edgecolor='purple')
plt.bar(pos,var_three,color='yellow',edgecolor='yellow')
plt.xticks(pos, columns)
plt.show()
However, once I add the bottom parameter in bar.plot (as shown below):
import numpy as np
import matplotlib as plt
columns=['a','b','c','d','e']
pos = np.arange(5)
var_one=[40348,53544,144895,34778,14322,53546,33623,76290,53546]
var_two=[15790,20409,87224,22085,6940,27099,17575,41862,27099]
var_three=[692,3254,6645,1237,469,872,569,3172,872]
plt.bar(pos,var_one,color='green',edgecolor='green')
plt.bar(pos,var_two,color='purple',edgecolor='purple',bottom=var_one)
plt.bar(pos,var_three,color='yellow',edgecolor='yellow',bottom=var_one+var_two)
plt.xticks(pos, columns)
plt.show()
the code triggers the error
ValueError: shape mismatch: objects cannot be broadcast to a single shape
How could I fix it?
AI: The error comes from bottom=var_one+var_two: with Python lists, + concatenates them (giving 18 values) instead of adding them element-wise, so the shapes no longer match. Note also that in your snippet pos = np.arange(5) does not match the 9 values per list, and the import should be matplotlib.pyplot. If you convert the lists to NumPy arrays (which support element-wise addition) and change your code to the following, it works:
import numpy as np
import matplotlib.pyplot as plt
columns = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'j']
pos = np.arange(9)
var_one = np.array([40348, 53544, 144895, 34778, 14322, 53546, 33623, 76290, 53546])
var_two = np.array([15790, 20409, 87224, 22085, 6940, 27099, 17575, 41862, 27099])
var_three = np.array([692, 3254, 6645, 1237, 469, 872, 569, 3172, 872])
plt.bar(pos, np.add(np.add(var_three, var_two), var_one), color='yellow', edgecolor='yellow')
plt.bar(pos, np.add(var_two, var_one), color='purple', edgecolor='purple')
plt.bar(pos, var_one, color='green', edgecolor='green')
plt.xticks(pos, columns)
plt.show()
The result will be like this: |
H: what is the best approach to detect small objects with similar shape?
I'm working a model which detect different products in supermarket shelf. In the training data, there are a lot of objects with similar shape placed very close to or stacked to each others.(eg: milks with different brands are stacked, placed on the same shelf, the model should be able to detect milk1, milk2). What is the best approach to this problem. I've tried to train a Faster RCNN, but the RPN isn't working well. I've also tried feature matching, but it cannot detect partially visible objects. Any help will be appreciated!
The training images look like this
Link to FRCNN result when detect 2 type of milk and 1 type of yogurt
faster r-cnn detection result
AI: If all objects are observed from roughly the same distance and angle, the relative height and width can be helpful features for recognizing objects with similar shape but different size. Given such features, methods like GAN-based algorithms, such as CoGAN and BiGAN, may help you with this problem.
It should be noted that for recognizing the size of the objects, the features play a more important role than the algorithms.
H: Can I use an array as a model feature?
Problem
I have data that includes multiple different text inputs as well as floats, categories, etc. Therefore I need to pass several different data types as features, including text which is an int array when tokenized.
Question
Say I tokenize the several text inputs; can I pass the tokenized text array as a feature alongside my floats and categories? If not, how is this done?
Background
When I've done NLP models, my code looks similar to this:
...
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(df['Stem'])
list_tokenized_train = tokenizer.texts_to_sequences(df['Stem'])
X_train = pad_sequences(list_tokenized_train, maxlen=10)
y_train = df['TotalPValue']
...
So, the text input becomes an array of int tokens padded with zeroes, e.g. [0 0 0 0 0 0 0 1 12 52].
This is not enough to solve my problem. I want to instead use multiple tokenized string inputs and floats as features. I want to first tokenize and pad each text input like above and put them in the same input array, like this: X_train = [[0 0 0 0 0 0 0 1 12 52], [0 0 0 0 0 0 0 42 12 23], 0.0425672].
I want to then start my model like this:
model = Sequential()
model.add(Embedding(max_features, embedding_vector_length, input_length=3))
Will it work if implemented like this?
My attempts
I searched for a while but couldn't find anyone else doing it like this. Surprising to me that I couldn't find anything since it seems like a basic problem.
Just wanted to know if I have the right idea, since - as a beginner - implementation will cost a lot of time if this isn't the right way of doing it. Thanks so much for the insight!
AI: Found a fantastic answer here that avoids high-level black box libraries.
Essentially, numerical floats are categorized into bins based on boundary values which are determined by their distribution. Text tokens are hashed with column ID, and then concatenated with other columns' hashed tokens via an interaction array. All hashed tokens, whether or not they have in interaction, are fed as inputs into the model.
That's the gist of it. |
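As a very rough sketch of the "hash tokens with their column ID" part, using scikit-learn's FeatureHasher (this is my own illustration, not the code from the linked answer; the column names and values are made up):
from sklearn.feature_extraction import FeatureHasher

rows = [
    {'text1': 'some stemmed words', 'text2': 'other words', 'pvalue': 0.0425672},
]

def to_tokens(row):
    # prefix each token with its column name so identical words in
    # different columns hash to different features
    tokens = ['text1=' + w for w in row['text1'].split()]
    tokens += ['text2=' + w for w in row['text2'].split()]
    return tokens

hasher = FeatureHasher(n_features=2**10, input_type='string')
X_text = hasher.transform(to_tokens(r) for r in rows)  # sparse feature matrix

# numeric columns (e.g. the float) can then be binned into categories and
# hashed the same way, or appended as extra dense columns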
H: Compile See5 / C50 GPL Edition
See5 / C5.0 is a data mining tool available from RuleQuest.
I want to compile C50 for Linux, preferably for CentOS 6.x, but I am unable to compile. I have also tried on Ubuntu, but not success there as well.
I have downloaded C50.tgz from C5.0 Release 2.07 GPL Edition
After extracting when I run ./Makefile it gives below error on Ubuntu18
./Makefile: line 9: CC: command not found
./Makefile: line 10: CFLAGS: command not found
./Makefile: line 11: S: command not found
./Makefile: line 11: LFLAGS: command not found
./Makefile: line 12: SHELL: command not found
./Makefile: line 19: src: command not found
./Makefile: line 48: obj: command not found
./Makefile: line 59: all:: command not found
cat defns.i global.c c50.c construct.c formtree.c info.c discr.c contin.c subset.c prune.c p-thresh.c trees.c siftrules.c ruletree.c rules.c getdata.c implicitatt.c mcost.c confmat.c sort.c update.c attwinnow.c classify.c formrules.c getnames.c modelfiles.c utility.c xval.c\
| egrep -v 'defns.i|extern.i' >c50gt.c
make: /bin/csh: Command not found
Makefile:75: recipe for target 'c5.0' failed
make: *** [c5.0] Error 127
./Makefile: line 61: CC: command not found
./Makefile: line 61: LFLAGS: command not found
./Makefile: line 61: -o: command not found
./Makefile: line 67: obj: command not found
./Makefile: line 67: c5.0dbg:: command not found
./Makefile: line 68: CC: command not found
./Makefile: line 68: obj: command not found
./Makefile: line 68: -g: command not found
./Makefile: line 74: src: command not found
./Makefile: line 74: c5.0:: command not found
./Makefile: line 75: src: command not found
./Makefile: line 77: CC: command not found
./Makefile: line 77: LFLAGS: command not found
./Makefile: line 77: -O3: command not found
strip: 'c5.0': No such file
./Makefile: line 82: obj: command not found
./Makefile: line 85: .c.o:: command not found
./Makefile: line 86: syntax error near unexpected token `newline'
./Makefile: line 86: ` $(CC) $(CFLAGS) -c $<'
If I run make command, it gives below error.
make c5.0
make: /bin/csh: Command not found Makefile:60: recipe for
target 'all' failed make: *** [all] Error 127
gcc version is:
gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
AI: After extracting C50.tgz file, give execution rights to Makefile.
chmod +x Makefile
Then install csh
sudo apt install csh
Run below command to check if csh is install and check csh version (if installed)
dpkg -l csh
It should show output like below.
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============================-====================-====================-==================================================================
ii csh 20110502-3 amd64 Shell with C-like syntax
Now run below command.
./Makefile
It should give output like below. (Ignore the warning messages).
./Makefile: line 9: CC: command not found
./Makefile: line 10: CFLAGS: command not found
./Makefile: line 11: S: command not found
./Makefile: line 11: LFLAGS: command not found
./Makefile: line 12: SHELL: command not found
./Makefile: line 19: src: command not found
./Makefile: line 48: obj: command not found
./Makefile: line 59: all:: command not found
cat defns.i global.c c50.c construct.c formtree.c info.c discr.c contin.c subset.c prune.c p-thresh.c trees.c siftrules.c ruletree.c rules.c g etdata.c implicitatt.c mcost.c confmat.c sort.c update.c attwinnow.c classify.c formrules.c getnames.c modelfiles.c utility.c xval.c\
| egrep -v 'defns.i|extern.i' >c50gt.c
gcc -ffloat-store -O3 -o c5.0 c50gt.c -lm
c50gt.c: In function ‘ListAttsUsed’:
c50gt.c:14025:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
Att = (Attribute) DefSVal(D[e]);
^
c50gt.c: In function ‘Error’:
c50gt.c:15561:17: warning: format not a string literal and no format arguments [-Wformat-security]
fprintf(Of, Buffer);
^~~~~~
strip c5.0
rm c50gt.c
./Makefile: line 61: CC: command not found
./Makefile: line 61: LFLAGS: command not found
./Makefile: line 61: -o: command not found
./Makefile: line 67: obj: command not found
./Makefile: line 67: c5.0dbg:: command not found
./Makefile: line 68: CC: command not found
./Makefile: line 68: obj: command not found
./Makefile: line 68: -g: command not found
./Makefile: line 74: src: command not found
./Makefile: line 74: c5.0:: command not found
./Makefile: line 75: src: command not found
./Makefile: line 77: CC: command not found
./Makefile: line 77: LFLAGS: command not found
./Makefile: line 77: -O3: command not found
./Makefile: line 82: obj: command not found
./Makefile: line 85: .c.o:: command not found
./Makefile: line 86: syntax error near unexpected token `newline'
./Makefile: line 86: ` $(CC) $(CFLAGS) -c $<'
You will see a new file c5.0*
Check command line options with below command.
./c5.0 -h
it should show below output.
C5.0 [Release 2.07 GPL Edition] Fri Mar 15 17:43:39 2019
-------------------------------
Options:
-f <filestem> application filestem
-r use rule-based classifiers
-u <bands> order rules by utility in bands
-w invoke attribute winnowing
-b invoke boosting
-t <trials> number of boosting trials
-p use soft thresholds
-e focus on errors (ignore costs file)
-s find subset tests for discrete atts
-g do not use global tree pruning
-m <cases> restrict allowable splits
-c <percent> confidence level (CF) for pruning
-S <percent> training sample percentage
-X <folds> cross-validate
-I <integer> random seed for sampling and cross-validation
-h print this message
Now C50 is ready to be used. Here is a sample command(assuming sampledata.data and sampledata.names files exists in the same folder).
./c5.0 -r -f sampledata
After successful execution of the command, sampledata.rules will be generated in the same folder.
See full tutorial from here |
H: Can we use ReLU activation function as the output layer's non-linearity?
I have trained a model with linear activation function for the last dense layer, but I have a constraint that forbids negative values for the target which is a continuous positive value.
Can I use ReLU as the activation of the output layer? I am afraid of trying, since it is generally used in hidden layers as a rectifier. I'm using Keras.
AI: Yes, you can. Basically, for regression tasks, it is customary to use the identity (linear) function as the output activation, because it is differentiable and does not limit the range of the output; this means the model can produce any output value from its inputs. People do not use tanh or sigmoid as the activation function of the last layer for regression tasks because they are bounded and cannot generate all the values that may be needed. The role of non-linearities in hidden layers is to add non-linear boundaries, while the last layer in regression should be able to produce the whole range of required outputs. Since your target is constrained to be non-negative, in your case ReLU is the best.
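A minimal Keras sketch (the layer sizes and n_features are arbitrary placeholders; only the output activation matters here):
from tensorflow.keras import layers, models

n_features = 10  # placeholder for your number of input features

model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(n_features,)),
    # single output neuron with ReLU: predictions are constrained to be >= 0
    layers.Dense(1, activation='relu'),
])
model.compile(optimizer='adam', loss='mse')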
H: What models do Create ML and Turi Create use
I'm taking a course on Apple's machine learning technologies. I just came across this paragraph:
Turi Create and Create ML are task-specific, rather than
model-specific. This means that you specify the type of problem you
want to solve, rather than choosing the type of model you want to use.
You select the task that matches the type of problem you want to
solve, then Turi Create analyzes your data and chooses the right model
for the job.
My question is when you select a task like binary image classification, and Create ML / Turi Create selects an appropriate model for you, what models does it have at its disposal. Aren't there MANY models in the market that do this? Did Apple simply pick the one they thought was best?
Thanks!
AI: TuriCreate has several different models implemented. You can either use them specifically or you can use a create method which will analyze your data and pick one of them (not always the same one).
For instance in binary classification (not image) there is support for random forest, decision tree, boosted trees, logistic regression, SVMs and nearest neighbor.
I don't know exactly how it makes the selection, but I imagine it has a lot to do with the dimensions of your data. |
H: Can I use Linear Regression to model a nonlinear function?
I have recently started studying the basics about regression, and as a beginner I started by Linear Regression.
I read this article that says that for this particular type of regression the relationship between independent and dependent variables has to be linear, which to me implies that I can only predict "lines" with Linear regression:
https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/
But then I started wondering about how to model functions like "y = log(x)" or "y= sqrt(x)" or "y=exp(x)" or "y=tan(x)" or other nonlinear functions by definition which are not "lines" but "curves".
Then I carried on doing research until I found this article that says that it is not the relationship between the independent and dependent variables that should be linear, but the final functional form passed to the model:
https://medium.freecodecamp.org/learn-how-to-improve-your-linear-models-8294bfa8a731
I want to know if that is really the case, and is it always possible to do this "change" in the functional form? Also if it is possible to use linear regression for nonlinear functions, is it still correct to measure the performance of the model using R_square metric?
Thank you.
AI: You are asking two different questions:
What is linear regression?
Linear regression means that, given a response variable $y$ and a set of predictors $x_i$, you are assuming (whether or not this is true is another matter) to model your response variable as
$$
y^{(j)}=\sum_{i=1}^N x_i^{(j)}\beta^i + \epsilon^{(j)}
$$
for each observation $y^{(j)}$, where $\epsilon^{(j)}$ is an error term with vanishing expectation value. The purpose of the algorithm is to find the set of $\beta^i$ to minimise the errors between the above formula and the actual values of the response.
May I use linear regression to model non-linear functions?
You may use linear regression to model anything you want; this does not necessarily mean that the results will be a good fit. The mere decision to use a model makes no assumptions on whether the underlying equation is in fact reflected by the model you choose. In case of linear regression you are essentially approximating an $N$-dimensional manifold (where all true points belong) with their projections onto a plane. Whether or not this is a good idea depends on the data.
I want to know if that is really the case, and is it always possible to do this "change" in the functional form?
By using this or that other model you are not changing the functional form of the underlying variables. You are just dictating that the original relation (that you do not know) can be approximated by the model you choose.
Is it still correct to measure the performance of the model using R_square metric?
The $R^2$ is defined as the ratio between the residual sum of squares of your model over the residual sum of squares of the average. Basically it tells how much of the variance of the data is explained by your model compared to just taking a straight line (in correspondence of the average) passing through all your data points. |
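As a small sketch of that "change in the functional form", you can fit $y=\log(x)$ with ordinary linear regression by feeding $\log(x)$ itself as the feature (the data below is synthetic):
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.linspace(1, 100, 200)
y = np.log(x)

X_transformed = np.log(x).reshape(-1, 1)       # the model is linear in log(x)
reg = LinearRegression().fit(X_transformed, y)
print(reg.score(X_transformed, y))             # R^2, essentially 1 here
The same $R^2$ metric still applies: it measures how much of the variance your (transformed) linear model explains, regardless of how the features were constructed.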
H: Manipulating multi-indices for a pandas dataframe
I have a pandas dataframe with multi-index. I have couple of questions on this.
The indices are week numbers (38 to 42) and for each week, day of the week (DOW). So it looks like
The problem is the 2nd level index, that is, DofWeek, is automatically sorted in lexicographic ordering. But I want to have the usual weekdays ordering, i,e for week 39, I want it to appear as Monday, Tuesday, Wednesday, Thoursday, Friday, Saturday and Sunday and NOT in the order they appear. So when I graph it, it appears this way:
I want it to appear in the usual Monday, Tuesday, Wednesday etc. I have tried reindexing, but it didn't work. This is my first question.
My second question is regarding querying such dataframes. For example when I do dataframe.query('DofWeek == 'Saturday'), it is throwing an error saying Saturday is not defined. However, if I swap levels and then query, it works. For example dataframe.swaplevel().query('weekNum == 39'), it works perfectly. As far as I know, the multi-indices should be sorted in order for certain methods to work. But in my case, they are. Then why isn't it working.
Any help would be hugely appreciated.
Thanks.
AI: Ordering
You can achieve this by changing the datatype to an ordered categorical (if DofWeek is currently an index level rather than a column, call reset_index() first and set the index back afterwards):
df['DofWeek'] = pd.Categorical(
df['DofWeek'],
categories=['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'],
ordered=True)
And then sorting the index:
df = df.sort_index()
Error
The issue with dataframe.query('DofWeek == 'Saturday') is bad quote encapsulation. You could solve it with e.g. 'DofWeek == "Saturday"' |
H: How does back propagation works through layers like maxpooling and padding?
I know back propagation takes derivatives (changing one quantity wrt other). But how this is applied when there is maxpooling layer in between two Conv2D layers? How it gains its original shape when there is padding added?
AI: Max pooling routes the gradient only to the input value that produced the maximum in each pooling window; the values that were not pooled receive zero gradient.
Padded values likewise have no effect on the gradients, since they are constants that do not depend on the inputs or weights.
A nice thing about convolution is that it is basically reducible to a matrix multiplication, and the backpropagation through it is simply the transpose of that matrix. So you already have your backward pass stored in the forward pass.
H: How to favour a particular class during classification using XGBoost?
I am using a simple XGBoost model to classify 2 classes (0 and 1) in a binary context. In case of the original data, the 0 is the majority class and 1 the minority class. The thing which is happening is that in case of classification, most 0s are being classified correctly, with many going into 1s, but most 1s are being misclassified into 0s.
I am fairly new to this, and having looked at various documentations and questions on SE, am really confused as to how I can specify my XGBoost model to favour class 1 (to be precise, if most 0s are misclassified into 1s, that is not a problem, but I want that most 1s are correctly classified as 1s (to increase the true positives, if there are false positives that is something which isn't much of a problem). The segment of code I am presently using to train and test the XGBoost are as follows (afterwards I use the confusion matrix in which the true positives (1s) are highly misclassified into 0s).
from xgboost import XGBClassifier
# fit model on training data
model = XGBClassifier()
model.fit(X_train, labels) # where labels are either 1s or 0s
# make predictions for test data
y_pred = model.predict(X_test)
y_pred = y_pred > 0.70 # account for > 0.70 probability
y_pred = y_pred.astype(int)
print(y_pred)
I just want to know if there is a simple way to specify to the XGBoost model any parameter in my code, so that the true positive rate can be increased? I can compromise of false positives being high, but I want the number of 1s to be correctly classified as 1s, instead of most of them going into 0s. Any help in this regard is appreciated.
UPDATE:
I have now tried to use scale_pos_weight in the XGBoost, with its value set to 0.70 (a random figure), but it is still landing most samples to 0, instead of 1.
AI: XGBoost has the scale_pos_weight parameter to help with this, depending on how you want to evaluate it (see tuning notes). It should be the ratio of negative count to positive count (or inverse based on how you indexed your classes).
An example in Python is here. |
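A hedged sketch of both ideas, reusing the variable names from your snippet (the 0.3 threshold is just illustrative):
from xgboost import XGBClassifier

# weight the positive class by the negative/positive ratio
ratio = float((labels == 0).sum()) / (labels == 1).sum()

model = XGBClassifier(scale_pos_weight=ratio)
model.fit(X_train, labels)

# lowering the decision threshold (instead of raising it to 0.70)
# also favours class 1, at the cost of more false positives
proba = model.predict_proba(X_test)[:, 1]
y_pred = (proba > 0.3).astype(int)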
H: Partial derviative of prediction (sigmoid applied) with respect to weight
I am very confused as to where a seemingly "extra" term is included in the above mentioned calculation in my Udacity course.
The above is taking the derivative of a sigmoid so why isn't it just
$$=\sigma(Wx+b)(1-\sigma(Wx+b)$$
but rather has $\frac{\partial}{\partial w_j}(Wx+b)$ tacked on the tail?
AI: Recall that for chain rule, we have $$\frac{d}{dw}h(g(w))=h'(g(w))g'(w)$$
For the context of your question, $h(t)=\sigma(t)$ and $g(w)=Wx+b$,
hence that is why we have one more term. |
H: Google Sheets: how to find max value in column B, corresponding values in column A, and max among these
I have two numeric columns, A and B. I want to find the max value in column B, which will return multiple rows. Then I want to find the max value in column A from among these rows.
A B
5 315
7 315
10 275
The function should return:
7 315
It would be fine if this were a combination of two functions - one that finds the max value in B (315), and another that finds the corresponding max value in A (7). That's how I'm trying it right now using:
=index(A2:B10,match(max(B2:B10),B2:B10,0))
But this solution only finds the latest row that has column B's max value:
5 315
Someone suggested I use the query function, but I couldn't get that to work either.
AI: Suppose you have data in the columns A, B from row 2 to 4. Put MAX(B2:B4) somewhere (in this example, I put it in B6). Then for the corresponding max in A you can use the formula MAX(ARRAYFORMULA(IF(B2:B4 = $B$6,A2:A4))). |
H: Does the index of my data, which is of type "Date time index", play a part in regression?
I'm new to data science and I'm working on a regression problem. My question is: does the index of my data, which is of type "Date time index", play a part in regression? I mean, is it okay if I drop the index?
AI: You can take the date into account, but you should convert it into numeric value.
Suggestions:
1) Consider the "datetime" package in Python to convert the dates to numeric values, or other techniques such as extracting numeric features from the date-time index (see the sketch after this list). I don't have your exact case, so I cannot say specifically.
or
2) If you have time series data, then in many cases ARMAX, ARIMAX, Seasonal ARIMAX models can do a really good job. |
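A small sketch of option 1), assuming df is your dataframe and has a DatetimeIndex:
import pandas as pd

df['year'] = df.index.year
df['month'] = df.index.month
df['dayofweek'] = df.index.dayofweek
# these numeric columns can be used as regression features, after which
# the original datetime index can be dropped or kept for reference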
H: Calculating derivative of error at point x with respects to weight w_j
I don't know how the equation below goes from line 2 to 3 after the derivative term is moved inside the brackets. Specifically, how is it calculating the derivative of log(y_hat)? Also, if anyone can point to a good textbook or website to learn this stuff. I've just started on this free course (option to pay for extras) at edx that is so far pretty good because it has easy-to-understand lectures and jupyter notebook assignments that get you coding: https://courses.edx.org/courses/course-v1:Microsoft+DAT256x+1T2019/course/
AI: Based on chain rule we have:
$$\frac{\partial \mbox{log}(f(x))}{\partial x}=\frac{\partial \mbox{log}(z)}{\partial z}\Bigr|_{z=f(x)}\frac{\partial f(x)}{\partial x}=\frac{1}{z}\Bigr|_{z=f(x)}\frac{\partial f(x)}{\partial x}=\frac{1}{f(x)}\frac{\partial f(x)}{\partial x}$$
More close to the notations used in the question:
$$\frac{\partial \mbox{log}(f_{\mathbf{w}}(\mathbf{x}))}{\partial w_j}=\frac{\partial \mbox{log}(z)}{\partial z}\Bigr|_{z=f_{\mathbf{w}}(\mathbf{x})}\frac{\partial f_{\mathbf{w}}(\mathbf{x})}{\partial w_j}=\frac{1}{z}\Bigr|_{z=f_{\mathbf{w}}(\mathbf{x})}\frac{\partial f_{\mathbf{w}}(\mathbf{x})}{\partial w_j}=\frac{1}{f_{\mathbf{w}}(\mathbf{x})}\frac{\partial f_{\mathbf{w}}(\mathbf{x})}{\partial w_j}$$
Now by setting $f_{\mathbf{w}}(\mathbf{x})=\hat{y}$, or $f_\mathbf{w}(\mathbf{x})=1-\hat{y}$, line 2 to 3 follows. For example for $f_\mathbf{w}(\mathbf{x})=1-\hat{y}$:
$$\frac{\partial \mbox{log}(1-\hat{y})}{\partial w_j}
=\frac{\partial \mbox{log}(z)}{\partial z}\Bigr|_{z=1-\hat{y}}
\frac{\partial (1-\hat{y})}{\partial w_j}=\frac{1}{z}\Bigr|_{z=1-\hat{y}}\frac{\partial (1-\hat{y})}{\partial w_j}=\frac{1}{1-\hat{y}}\frac{\partial (1-\hat{y})}{\partial w_j}$$
Note that input $\mathbf{x}$ to the network is constant w.r.t. changes in $\mathbf{w}$. Also $x|_{x=y}$ means replace $x$ with $y$. |
H: One-Dimensional Convolutional Neural Network
Can someone explain how 'One-Dimensional Convolutional Neural Network' works. I do understand the 2-D for image but for 1-D how is the filer created. is it fixed 1-D filter within a specific time interval or the operation is the same as we convolve a signal with a filter in signal processing y = f*x
AI: is it fixed 1-D filter within a specific time interval?
Yes. The same as filters in 2D. Adjacent filters may even have no overlap with each other.
1D CNN is almost the same as 2D CNN both mathematically and visually by setting the second dimension (either the horizontal or vertical one in visualizations) to 1. This way, 1D filters are placed (possibly with some overlap) in one dimension instead of 2D filters being spread in two dimensions.
The below image shows a filter set with shared parameter $W$ covering the overlapping regions of the input.
By shared parameter we mean $f_i=\mbox{ReLU}(\mbox{sum}(W \odot \mbox{region}_i))$, where $\odot$ is a point-wise product between a region of input and $W$. |
H: How to write out the definition of the value function for continous action and state space
In the book of Sutton and Barto (2018) Reinforcement Learning: An Introduction. The author defines the value function as.
$$v_{\pi}(\boldsymbol{s})=\mathbb{E}_{\boldsymbol{a}\,\sim\, \pi}\left[\sum_{k=0}^{\infty}\gamma^kR_{t+k+1}\,\bigg|\,\boldsymbol{s}_t=\boldsymbol{s} \right]$$
If $\boldsymbol{a}\in \mathcal{A}$ and $\boldsymbol{s}\in \mathcal{S}$ are continuous I would think by using Bellman's equation for the state-value function that this can be written as the integral
$$v_{\pi}(\boldsymbol{s})=\int_{\boldsymbol{a}\in\mathcal{A}}\pi\left(\boldsymbol{a}|\boldsymbol{s} \right)\int_{\boldsymbol{s}'\in \mathcal{S}}p(\boldsymbol{s}'|\boldsymbol{s},\boldsymbol{a})\left[R_{t+1}+\gamma v_{\pi}(\boldsymbol{s}')\right]d\boldsymbol{s'}d\boldsymbol{a}.$$
Is this correct?
Also without using Bellman's equation does the integral definition of the state-value function look like this?
$$v_{\pi}(\boldsymbol{s})=\int_{\boldsymbol{a}\in\mathcal{A}}\pi\left(\boldsymbol{a}|\boldsymbol{s} \right)\int_{\boldsymbol{s}'\in \mathcal{S}}p(\boldsymbol{s}'|\boldsymbol{s},\boldsymbol{a})\left[R_{t+1}+\gamma \left[\int_{\boldsymbol{a}'\in\mathcal{A}}\pi\left(\boldsymbol{a}'|\boldsymbol{s}' \right)\int_{\boldsymbol{s}''\in \mathcal{S}}p(\boldsymbol{s}''|\boldsymbol{s}',\boldsymbol{a}')\left[R_{t+2}+\gamma\left[\cdots\right] \right]d\boldsymbol{s''}d\boldsymbol{a}'\right] \right]d\boldsymbol{s'}d\boldsymbol{a}$$
Are my integral versions correct?
AI: this can be written as the integral, is this correct?
Yes. Your derivations imply that we have assumed a deterministic reward given current state-action $(\boldsymbol{s},\boldsymbol{a})$. An stochastic reward model would be $p(\boldsymbol{s}', r|\boldsymbol{s},\boldsymbol{a})$ which requires an additional integral over reward $r$ (for example, equation (3.14) page 47)
Are my integral versions correct?
Yes. You are unfolding the recursive definition. An illustration would be the recursive definition for factorial:
$$f(n) = nf(n-1);f(0)=1$$
Which is unfolded as:
$$f(n) = n [(n-1) [(n-2)[...]]]$$
However, the difference is that the index in Bellman equation is going forward since current value depends on future values not the previous ones. |
H: CNN - imbalanced classes, class weights vs data augmentation
I have a dataset with a few strongly imbalanced classes, eg. the smallest class is about 54 times smaller than the largest. Therefore, data augmentation in order to equalize the size of classes seems like a bad idea to me (in the example above each image would have to be augmented 54 times on average). So I thought that I could do less augmentation of minority classes and then use class weights in the loss function. Is this approach better than the mere augmentation or just the use of class weights?
AI: Is this approach better than the mere
augmentation or just the use of class weights ?
Note that data augmentation is the process of changing the training samples (e.g. for images, flipping them, changing their luminosity, adding noise, etc.) and adding them back into the set. It is used for enriching the diversity of training samples, thus, in this aspect it cannot be replaced with class weighting. However, it is related to over-sampling since we can both enrich and increase the size of a class via data-augmentation. For now we are only focusing on the size increasing part, i.e. data over-sampling.
Data-oversampling and class weighting are equivalent. Copying the samples of a class 3X is equivalent to assigning a 3X weight to the class. However, the weighting is better from storage and computational point of view since it avoids working with a larger data-set.
Note that based on this equivalency we can mix and match. Increasing the size of a class 6X $\equiv$ increasing the weight 6X $\equiv$ increasing the size 3X and increasing the weight 2X.
Class weights vs over-sampling in more detail
I copied my answer from here, since the solutions are almost the same but questions are the reverse of each other.
This challenge can be tackled in two places:
Data: as you mentioned, this is done by artificially increasing the number of samples for under-represented class $uc$. This produces the same effect as data-sets that are naturally balanced,
Model: this is generally done by over-penalizing the miss-classification of $uc$ compared to other classes. One place for this modification is the loss function. A frequently used loss function in classification is cross entropy. It can be modified for this purpose as follows. Let $y_{ik}$ be $1$ if $k$ is the true class of data point $i$, and be $0$ otherwise, and $y'_{ik} \in (0, 1]$ be the corresponding model estimation. The original cross-entropy can be written as:
$$H_y(y')=-\sum_{i}\sum_{k=1}^{K}y_{ik}log(y'_{ik})$$
which can be weighted as
$$H_y(y')=-\sum_{i}\sum_{k=1}^{K}\color{blue}{w_{k}}y_{ik}log(y'_{ik})$$
For example, by setting $w_{uc} = 10$ and $w_{k \neq uc}=1$, we are essentially telling the model that miss-classifying $1$ member from $uc$ is as punishable as miss-classifying $10$ members from other classes. This is roughly equivalent to increasing the ratio of class $uc$ $10$ times in the training set using method (1).
As a concrete example, suppose ratio of three classes are $c_1=10\%$, $c_2=60\%$, and $c_3=30\%$, thus, we can set $w_{c_1}=6$, $w_{c_2}=1$, and $w_{c_3}=2$, which implies $w_{c_i} \times P(c_i)=w_{c_j}\times P(c_j)$ for all class pairs $c_i$ and $c_j$. |
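A minimal sketch of computing such weights for the example above, assuming integer class indices 0–2; the commented fit call only illustrates how Keras-style APIs accept them.

# Weights chosen so that w_k * P(c_k) is the same for every class,
# reproducing the w = (6, 1, 2) example above.
class_ratios = {0: 0.10, 1: 0.60, 2: 0.30}     # P(c_1), P(c_2), P(c_3)
majority = max(class_ratios.values())
class_weight = {k: majority / p for k, p in class_ratios.items()}
print(class_weight)                            # {0: 6.0, 1: 1.0, 2: 2.0}

# Keras-style APIs accept such a dictionary directly, e.g.:
# model.fit(X_train, y_train, class_weight=class_weight, ...)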
H: Why a Random Reward in One-step Dynamics MDP?
I am reading the 2018 book by Sutton & Barto on Reinforcement Learning and I am wondering the benefit of defining the one-step dynamics of an MDP as
$$
p(s',r|s,a) = Pr(S_{t+1}=s',R_{t+1}=r\,|\,S_t=s, A_t=a)
$$
where $S_t$ is the state and $A_t$ the action at time $t$. $R_t$ is the reward.
This formulation would be useful if we were to allow different rewards when transitioning from $s$ to $s'$ by taking an action $a$, but this does not make sense. I am used to the definition based on $p(s'|s,a)$ and $r(s,a,s')$, which of course can be derived from the one-step dynamics above.
Clearly, I am missing something. Any enlightenment would be really helpful. Thx!
AI: In general, $R_{t+1}$ is a random variable with conditional probability distribution $Pr(R_{t+1}=r|S_t=s,A_t=a)$. So it can potentially take on a different value each time action $a$ is taken in state $s$.
Some problems don't require any randomness in their reward function. Using the expected reward $r(s,a,s')$ is simpler in this case, since we don't have to worry about the reward's distribution. However, some problems do require randomness in their reward function. Consider the classic multi-armed bandit problem, for example. The payoff from a machine isn't generally deterministic.
As the basis for RL, we want the MDP to be as general as possible. We model reward in MDPs as a random variable because it gives us that generality. And because it is useful to do so. |
H: How could I go about finding the weights or importance of inputs based on outputs?
I have a table whose inputs (sfm, fr, and doc) all affect the outputs (mmr and ra). How could I go about finding the input importance on the outputs? Basically, I'd like to be able to have a goal output in mmr and ra and have a good idea of starting parameters for sfm, fr, and doc. Does anyone have insight into something like this? Below is a sample of the data.
sfm fr doc mmr ra
60 0.15 0.1 449.6 1.85
60 0.15 0.2 896.78 0.86
60 0.15 0.25 1116.34 1.28
60 0.2 0.1 593.46 1.42
60 0.2 0.2 1183.62 0.91
60 0.2 0.25 1473.34 1.91
60 0.25 0.1 734.26 1.59
60 0.25 0.2 1464.41 1.52
60 0.25 0.25 1822.79 1.07
70 0.15 0.1 503.3 1.42
70 0.15 0.2 1003.74 0.89
70 0.15 0.25 750.31 0.99
70 0.2 0.1 665.35 1.12
70 0.2 0.2 1326.9 1.96
70 0.2 0.25 1651.5 1.73
70 0.25 0.1 822.97 0.99
70 0.25 0.2 1641.19 1.17
70 0.25 0.25 2042.57 0.85
AI: Pearson correlation can be used for this purpose. The Pearson correlation between two variables shows how much their values are linearly related to each other.
According to the Cauchy–Schwarz inequality it has a value between +1
and −1, where 1 is total positive linear correlation, 0 is no linear
correlation, and −1 is total negative linear correlation.
According to the values you reported here, there is a strong correlation between doc and mmr, so doc plays a more important role in predicting mmr than the other features.
On the other hand, none of the features has any considerable linear correlation with ra. In this case you can test some other correlation methods.
For further information visit here. It can be helpful to you.
Conclusion: the most important feature for predicting an output is the one most correlated with it, provided that correlation is considerable.
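For example, with pandas (using a few rows of the table above; the full table would be handled the same way):

import pandas as pd

# A few rows of the table above; the full table would be loaded the same way.
df = pd.DataFrame({'sfm': [60, 60, 60, 60, 70, 70],
                   'fr':  [0.15, 0.15, 0.20, 0.25, 0.15, 0.25],
                   'doc': [0.10, 0.20, 0.20, 0.25, 0.10, 0.25],
                   'mmr': [449.6, 896.78, 1183.62, 1822.79, 503.3, 2042.57],
                   'ra':  [1.85, 0.86, 0.91, 1.07, 1.42, 0.85]})

corr = df.corr(method='pearson')               # pairwise Pearson correlations
print(corr[['mmr', 'ra']].loc[['sfm', 'fr', 'doc']])

# Other correlation methods can be tried the same way:
# df.corr(method='spearman') or df.corr(method='kendall')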
H: matplotlib subplots_adjust - meaning of parameters
What are the meaning of values in subplots_adjust ?
left = 0.125 # the left side of the subplots of the figure
The documentation has number 0.125, etc but there is no explanation.
AI: These values represent the position of the subplot area relative to the figure boundary, expressed as a fraction between 0 and 1. For example, if you set top=1 the upper boundary of your subplots will coincide with the upper boundary of the figure, and if you set top=0 it will coincide with the lower boundary of the figure and the plots will be squashed down entirely.
In other words, left and right are fractions of the figure width, while bottom and top are fractions of the figure height.
A good way to see how this works is to plot a graph and then use the Configure Subplots option, in the window that pops up, to adjust these values interactively. Try this and you will be able to appreciate their effect much better.
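For reference, a minimal example that uses the default values given in the documentation:

import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2)
for ax in axes.flat:
    ax.plot(np.random.rand(10))

# left/right are fractions of the figure width, bottom/top of the figure height;
# wspace/hspace control the padding between the subplots.
plt.subplots_adjust(left=0.125, right=0.9, bottom=0.1, top=0.9,
                    wspace=0.2, hspace=0.2)
plt.show()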
H: What are features for state-action pairs in RL?
I read this answer: What are features in the context of reinforcement learning?
But it only describes features for the state only in the context of cartpole, ie. Cart Position, Cart Velocity, Pole Angle, Pole Velocity At Tip
On slide 18 here: http://www.cs.cmu.edu/~rsalakhu/10703/Lecture_VFA.pdf
It states:
But does not give examples. I started reading from p. 198 in Sutton's book for Value Function Approximation but also did not see examples for "features of state-action pairs" .
My best guess is for example in Cartpole-V1 (discrete action space) would be to add one more number to the tuple describing the state-action pair, ie. (Cart Position, Cart Velocity, Pole Angle, Pole Velocity At Tip, push_right) .
In the case of Cartpole I guess each state action pair could be described with a feature vector of length 3 where the final input for the tuple is either "push_left", "do_nothing", "push_right".
Would the immediate reward from taking one of the actions also be included in the tuples that form the state-action feature vector?
AI: In the cartpole example, a state-action feature could be
$$\begin{bmatrix}
\text{Cart Position}\\
\text{Cart Velocity}\\
\text{Pole Angle}\\
\text{Pole Tip Velocity}\\
\text{Action}
\end{bmatrix}$$
where Action is either left, right, or do nothing. The reward is not part of the feature vector because reward does not describe the state of the agent; it is not an input. It is a (possibly stochastic) signal received from the environment that the agent is trying to predict/control with the use of feature vectors. |
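A minimal NumPy sketch of such a vector, with hypothetical state values and a one-hot encoded action:

import numpy as np

state = np.array([0.02, -0.3, 0.05, 0.4])   # position, velocity, angle, tip velocity
actions = {'left': [1, 0, 0], 'nothing': [0, 1, 0], 'right': [0, 0, 1]}

def state_action_feature(state, action):
    # state features followed by a one-hot encoding of the action; no reward included
    return np.concatenate([state, actions[action]])

print(state_action_feature(state, 'right'))  # length 4 + 3 = 7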
H: What is the difference between ImageNet and ImageNet1k? How to download it?
Some papers mention just ImageNet and some papers mention ImageNet 1k database?
What is the difference between these 2? Are they same or is the latter one subset of the former one?
I'm working on Generative Adversarial Nets. I wanted to train it on ImageNet Database. How to download ImageNet 1k? I went to ImageNet site and created & verified my account. Then there were several links. Which one to select?
Thanks!
AI: The ImageNet dataset consists of more than 14M images, divided into approximately 22k different labels/classes. However the ImageNet challenge is conducted on just 1k high-level categories (probably because 22k is just too much).
ImageNet Stats
When people mention results on ImageNet, they almost always mean the 1k labels (if a paper uses the original 22k labels, it would surely mention it). So basically ImageNet = ImageNet-1k.
Regarding downloading the dataset. Since you are downloading ImageNet for your personal usage (GAN training) and not to participate in one of the challenges, it doesn't really matter, so just download the latest dataset labeled "Download links to ILSVRC2017 image data". |
H: Unnormalized Log Probability - RNN
I am going through the deep learning book by Goodfellow. In the RNN section I am stuck with the following:
RNN is defined like following:
And the equations are :
Now the $O^{(t)}$ above is considered as unnormalized log probability. But if this is true, then the value of $O^{(t)}$ must be negative because,
Probability is always defined as a number between 0 and 1, i.e, $P\in[0,1]$, where brackets denote closed interval. And $log(P) \le 0$ on this interval. But in the equations above, nowhere this condition that $O^{(t)} \le 0$ is explicitly enforced.
What am I missing!
AI: You are right in a sense that it is better to be called log of unnormalized probability. This way, the quantity could be positive or negative. For example, $\text{log}(0.5) < 0$ and $\text{log}(12) > 0$ are both valid log of unnormalized probabilities. Here, in more detail:
Probability: $P(i) = e^{o_i}/\sum_{k=1}^{K}e^{o_k}$ (using softmax as mentioned in Figure 10.3 caption, and assuming $\mathbf{o}=(o_1,..,o_K)$ is the output of layer before softmax),
Unnormalized probability: $\tilde{P}(i) = e^{o_i}$, which can be larger than 1,
Log of unnormalized probability: $\text{log}\tilde{P}(i) = o_i$, which can be positive or negative. |
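A small NumPy illustration of these three quantities:

import numpy as np

o = np.array([2.0, -1.0, 0.5])           # log of unnormalized probabilities (any sign)
unnormalized = np.exp(o)                  # unnormalized probabilities (can exceed 1)
p = unnormalized / unnormalized.sum()     # softmax: proper probabilities in (0, 1)
print(unnormalized, p, p.sum())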
H: Understanding minimizing cost correctly
I cannot wrap my head around this simple concept.
Suppose we have a linear regression, and there is a single parameter theta to be optimized (for simplicity purposes):
$h(x) = \theta \cdot x$
The error cost function could be defined as $J(\theta) = \frac1m \cdot \sum (h(x) - y(x)) ^ 2$, for each $x$.
Then, theta would be updated as:
$\theta = \theta - \alpha\cdot \frac1m \cdot \sum (h(x) - y(x)) \cdot x$, for each $x$.
From my understanding the multiplier after the alpha term is the derivative of the error cost function $J$. This term tells us the direction to head in, in order to arrive at the minimum making a small step at a time. I understand the concept of "hill climbing" correctly, at least I think.
Here is where I don't seem to wrap my head around:
If the form of the error function is known (like in our case: we could visually plot the function if we take enough values of theta and plug them into the model), why can't we take the first derivative and set it to zero (partial derivatives if the function has multiple thetas)? This way we would have all the minima of the function. Then, with the second derivative, we could determine whether each is a min or a max.
I've seen this done in calculus for simple functions like $y = x^2 + 5x + 2$ (many years ago, maybe I am wrong), so what is stopping us from doing the same thing here?
Sorry for asking such a silly question.
Thank you.
AI: Consider differentiating this $$\nabla_\theta\|X\theta -y\|^2=2X^T(X\theta -y)=0$$
Hence solving this, would give us $$X^TX\theta =X^Ty$$
Solving this gives us the optimal solution in theory. However, numerical stability is an issue, and the computational cost should not be forgotten: solving a linear system is cubic in the number of parameters.
Also, sometimes we do not even have a closed form, in which case a gradient-based approach is more applicable.
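A minimal NumPy sketch contrasting the closed-form (normal equation) solution with gradient descent on a small synthetic problem:

import numpy as np

np.random.seed(0)
X = np.random.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * np.random.randn(100)

# Closed form: solve the normal equations X^T X theta = X^T y directly.
theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the same least-squares objective.
theta = np.zeros(3)
alpha = 0.1
for _ in range(5000):
    theta -= alpha * (2 / len(y)) * X.T @ (X @ theta - y)

print(theta_closed, theta)   # both converge to roughly the same solution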
H: Why Gaussian latent variable (noise) for GAN?
When I was reading about GAN, the thing I don't understand is why people often choose the input to a GAN (z) to be samples from a Gaussian? - and then are there also potential problems associated with this?
AI: Why people often choose the input to a GAN (z)
to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to uniform distribution. Gaussian distribution is not as simple as uniform distribution but it is not that far off either. It adds "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.
The least known. Use of Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $\epsilon$ or latent factor $z$. "The least known" could be formalized as "distribution that maximizes entropy for a given variance". The answer to this optimization is $N(\mu, \sigma^2)$ for arbitrary mean $\mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than "the least known" assumption, as will be illustrated in the following examples.
Central limit theorem. Another commonly used justification is that since many observations are the result (average) of large number of [almost] independent processes, therefore CLT justifies the choice of Gaussian. This is not a good justification because there are also many real-world phenomena that do not obey Normality (e.g. Power-law distribution), and since the variable is the least known to us, we cannot decide which of these real-world analogies are more preferable.
This would be the answer to "why we assume a Gaussian noise in probabilistic regression or Kalman filter?" too.
Are there also potential problems associated with this?
Yes. When we assume Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At this point, we should search for an alternative assumption. In practice, when we make a new assumption about the least known quantity (based on acquired knowledge or speculation), we could extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption. Here are two examples:
Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A \sim N(\mu, \sigma^2)$. After fitting the model, we may observe that the estimated variance $\hat{\sigma}^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = \color{blue}{b_1B +c} + \epsilon_1$, where $\epsilon_1 \sim N(0, \sigma_1^2)$ is the new "the least known". Later, we may find out that our linearity assumption is also weak since, after fitting the model, the observed $\hat{\epsilon}_1 = A - \hat{b}_1B -\hat{c}$ also has a high $\hat{\sigma}_1^2$. Then, we may extract a new assumption as $A = b_1B + \color{blue}{b_2B^2} + c + \epsilon_2$, where $\epsilon_2 \sim N(0, \sigma_2^2)$ is the new "the least known", and so on.
Example in GAN (latent factor). Upon seeing unrealistic outputs from GAN (knowledge) we may add $\color{blue}{\text{more layers}}$ between $z$ and the output (extract assumption), in the hope that the new network (or function) with the new $z_2 \sim N(0, \sigma_2^2)$ would lead to more realistic outputs, and so on. |
H: Books on time series and sequence classification
Though I have been using traditional machine learning algorithms (regression and classification), I have no experience with time series and would like to understand what a time series is and the different approaches (e.g. ARIMA, SARIMA, SARIMAX, LSTM, etc.) used for time series analysis. I also see that people use LSTMs/RNNs for time series/sequence classification. Can you recommend a book that discusses the above algorithms and how time series analysis differs from sequence classification?
AI: There are numerous topics that you've mentioned but I will suggest those which I've read and are helpful.
For hidden Markov models and Markov processes I suggest reading Pattern Classification by Richard O. Duda. You can also take a look at Pattern Recognition and Machine Learning by Christopher Bishop. For a better understanding of Markov processes and their behaviour, you can also take a course on stochastic processes (also known as random processes). I suggest taking that course before reading any book, because you may need some help if you're not very familiar with the concepts.
For LSTMs, I highly suggest taking a look at the fifth course of the Deep Learning specialization on Coursera by Prof. Andrew Ng. If you do the homework you will see exactly how the inner operations work and you'll gain a deep understanding of time series and their nature. After that, you can take a look at the Deep Learning book by Ian Goodfellow.
H: tensorflow: is there a way to specify XLA_GPU with tensorflow?
following code is used to specify device on which tf node is running on
with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
i have already known:
this post, tensorflow doc and xla demo
what i want to know is:
is there a way to specify XLA_GPU as the device on which tf node is running on
with tf.device('/XLA_GPU:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
executing the code above gives ValueError: Unknown attribute: 'device' in '/job:localhost/replica:0/task:0/device:/XLA_GPU:0'
this is 100% reproducible on google colab.
AI: try 'device:XLA_GPU:0'
with tf.device('device:XLA_GPU:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') |
H: Is there any augmentation tool for images and bounding boxes?
I don't have a lot of training data and I'm looking for some tools in python or executable program like labelimg that do some heavy augmentation on images, even better if they also change bounding boxes coordinate accordingly. Any help will be appreciated!
AI: I think you should look into imgaug. It supports most image augmentation and does have support for bounding boxes.
Docs: https://imgaug.readthedocs.io/en/latest/ |
H: Feasibility study of machine learning
How can I know whether machine learning is possible for a given data set? I have been given a data set and I should check whether machine learning is possible for it or not. How can I do that? How do you come to the conclusion that machine learning can be performed on a given data set or not?
AI: For this feasibility study, the following are the high-level steps:
For each feature, fit a simple predictive model (e.g. a regression) with the rest of the features as train_x and that feature as train_y. If you find a feature that can be predicted from the others, there is learnable structure and ML can be applied to the dataset; a rough sketch follows this list.
Can a human solve it? As a person, can you find patterns for a given feature based on the other features?
Exploratory data analysis with Weka, DataFrame + Matplotlib or similar tools: https://datascienceguide.github.io/exploratory-data-analysis
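A rough sketch of step 1 with scikit-learn (linear regression stands in for whatever simple model you prefer; X is assumed to be a numeric feature matrix):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def predictability_scores(X):
    """Cross-validated R^2 of predicting each feature from the others."""
    scores = {}
    for j in range(X.shape[1]):
        train_x = np.delete(X, j, axis=1)   # all other features
        train_y = X[:, j]                   # the feature we try to predict
        scores[j] = cross_val_score(LinearRegression(), train_x, train_y, cv=5).mean()
    return scores                           # high R^2 => learnable structure

print(predictability_scores(np.random.rand(100, 4)))   # close to 0 for pure noise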
H: What are the cases in which Isomap fails to do a good job?
As above, what is a possible scenario/ dataset/ case in which Isomap fails to do a decent dimensionality reduction?
AI: Here is the visualization of COIL-20 data set in t-SNE paper:
The data-set consists of images of 20 objects (clusters). In all cases provided in the paper and some cases I found on the Internet (6000 MNIST data set, slide no. 31), the quality of IsoMap (2001) is not comparable to t-SNE (2008). |
H: neural network to find a very simple linear model (scikit-learn)
I'm trying to test different machine learning algorithm to try to find correlation between various data on MRI scans.
Since I'm dealing with medical data, I don't have access to many events, but I'm still trying to see what a simple fully connected NN provides me.
By debugging, it looks like even if I enter the output almost as is in the input, a simple NN using the default scikit-learn NN regressor is unable to find a satisfactory model.
I tried to reproduce the effect and I'd like to share with you the following code. I guess I'm just doing something wrong, because, basically, I'd like the NN to produce a simple model where label = X2 - X1, and it doesn't work for me, which really surprises me.
here is the code:
import sklearn.neural_network as NN
import matplotlib.pyplot as plt
import numpy as np
N_events=100000 # number of rows of dataset
N_features = 2 # number of features
X = np.random.rand(N_events,N_features) # chose the data randomly
labels = X[:,1]-X[:,0] # label = X2-X1
C = NN.MLPRegressor(hidden_layer_sizes=(2,2),max_iter=500000,random_state=1) # very simple scikit-learn NN regressor
n = N_events // 2 # half data for training, half for test
print(X[0:10,1]-X[0:10,0]) # check that indeed label=X2-X1
print(labels[0:10])
C.fit( X[0:n,:] , labels[0:n]) # train the model
# plot y_predicted vs y_true for the test set...
plt.figure()
y_real = labels[n:] #
y_pred = C.predict(X[n:,:])
plt.plot(y_real,y_pred,'.')
# plot y_predicted vs y_true for the training set... even this doesn't work
plt.figure()
y_real = labels[0:n] #
y_pred = C.predict(X[0:n,:])
plt.plot(y_real,y_pred,'.')
I was expecting the NN with 2x2 = 4 degrees of freedom to easily find such a simple model (note that I even don't add unused features which could disturb the fit, since N_features is 2 here), so I guess I'm just not using the code correctly.
Can anyone help me?
Many thanks
AI: The problem is relu, which is the default activation function: it zeros out inputs smaller than 0. The problem will be solved by using other activation functions. For example:
C = NN.MLPRegressor(activation="identity", hidden_layer_sizes=(1,1),max_iter=25,random_state=1)
Note that even 1 hidden layer with 1 neuron is enough to fit the linear relationship $y = X_2 - X_1$.
H: What kind of data visualization should I use?
I'm going to program a customized phone keyboard where some letters are larger than others, depending on how often I mistype them. For example, if I often pressed "w" instead of "e", I'd make the "e" button take up some of the space of the "w" button:
(screenshot from here)
In order to figure out how often I make specific typos, I'll need to collect data and store it in some kind of graphic organizer to help me visualize it. Right now, I'm thinking of something like a cluster map. Does that sound like a good plan, or do you have any other suggestions?
Here's an example cluster map (from Microsoft PowerBI). Instead of pictures, they would say things like "KI", to show how often I type "K" instead of "I", and the circle would be larger or smaller depending on how often I make this mistake:
Thanks!
AI: First, I think you'll need to measure when you've made a typing mistake. For example, you might log each key press and then in an analysis after, look at when you press the backspace key. If you press it only once, you might consider the key you pressed to be incorrect and the one you type after to be the correct key.
This supplies you with a truth value. It would be difficult to measure anything if you don't know what would ideally happen.
In terms of visualizing this, I would opt for a confusion matrix. There are some nice visuals provided by Seaborn, but it might look like what's in this SO answer. As you can see, each letter has a high value for itself, and maybe a couple mistakes for other letters. Looking at this plot, you might say "F" is often typed when "E" is desired. The y-axis would be the letter you intended to type, the x-axis might be the letter you actually typed. This could help you see which letters are frequently mistyped. Additionally, it would be intuitive to compute ratios off of this.
If you're not interested in which keys are mistyped as other keys, you could easily do a bar chart of key frequencies. Or a bar chart where each x-tick is a letter with proportion typed (in)correctly. |
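A minimal sketch with pandas and Seaborn, using a hypothetical log of (intended, typed) key pairs:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical log of (intended, typed) key pairs recovered from backspace events
log = pd.DataFrame({'intended': ['e', 'e', 'e', 'k', 'i'],
                    'typed':    ['e', 'w', 'e', 'k', 'k']})

# Row-normalized confusion matrix: fraction of times each intended key was typed as X
matrix = pd.crosstab(log['intended'], log['typed'], normalize='index')
sns.heatmap(matrix, annot=True, cmap='Blues')
plt.show()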
H: Train, test and submission files - what am I supposed to do with all of them?
This might be a very beginner's question.
I'm working on Kaggle's HomeCredit Default Risk problem, which has, among others, train, test and submission files, as can be seen in the link provided. The test dataset does not contain the TARGET feature, and the submission file has ID and TARGET columns where TARGET has the fixed value 0.5.
I'd normally split the train dataset, use it to train and score a model, and then create a submission file based on the test dataset.
Thus, I don't understand what I am supposed to do with the already existing submission file.
Further, when I score a model based on the split train data, I'm getting unexpectedly high values which can't be right. Beforehand I did OHE-encoding, standardization and PCA, which might have some impact on the model accuracy result.
Insights would be appreciated.
AI: The submission file is just there as a reference. You are not supposed to do anything with it, but when you make your own submission it should be in the same format.
Regarding your high accuracy: try making your own submission file and submitting it to Kaggle, and it will show you if your score is real or not. :)
link for submission |
H: How to reach continue training in xgboost
I read the paper but found nothing talking about how to implement incremental learning.
Can someone share some basic or deep knowledge? not in coding way.
I know how to write code snippet to train incrementally.
When new data comes in, how to train incrementally if I use XGBRegressor? Reserve the old trees and train new data with new trees?
I found nothing talking about this in detail
AI: In the fit method, parameter xgb_model can be specified to continue training an old model.
https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBRegressor.fit
Edit: Yes, as I understand it, the old trees are preserved in this method. The new trees will fit the residuals of whatever data you pass: you could continue with only the new data, which will first be run through the existing trees and have their residuals fit by new trees; or continue with all the data, which will do the above but also use the old data when splitting (probably better, so that the model doesn't lose its fit to the old data, but maybe your use case cares about the new data substantially more?). In some cases, it may be better to just retrain from scratch (if the existing trees do a poor job on the new data). |
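A rough sketch of continued training via that parameter (the data here is synthetic and only stands in for your old and new batches):

import numpy as np
import xgboost as xgb

# Synthetic stand-ins for the old and new batches of data.
rng = np.random.RandomState(0)
X_old, y_old = rng.rand(200, 5), rng.rand(200)
X_new, y_new = rng.rand(50, 5), rng.rand(50)

# Initial model: 100 trees fit on the old data.
model = xgb.XGBRegressor(n_estimators=100)
model.fit(X_old, y_old)

# Continued training: the old trees are preserved and 50 new trees are added,
# fit to the residuals of whatever data is passed here (new only, or old + new).
model_continued = xgb.XGBRegressor(n_estimators=50)
model_continued.fit(X_new, y_new, xgb_model=model.get_booster())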
H: Logitic Regression cost function - what if ln(0)?
I am building logistic regression from scratch.
The simplified cost function I am using is (from machine learning course on coursera):
In a specific case during learning,
one observation in the training set has y = 0, but the specific choice of betas in:
makes g(z) = h(x) = 1, because e.g. z > 50.
In this case the right side of J is (1 - 0) * log(1 - 1), which is -inf (I am doing my calculations in Python).
I understand that in this case the value of the cost function should be high, because the predicted probability of y = 1 is very large while the true label is actually 0.
Is the problem the approximation of g(50) being 1 instead of something like 0.999999? Or is there some more fundamental error in my logic?
In this specific example, the summation of the cost over all observations becomes nan (not a number) in my code.
AI: In practice, an offset is used to avoid log explosion due to values close to zero. For example $\hat{\text{log}}(x)=\text{log}(x + \text{1e-6})$. |
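For instance, a minimal sketch of a clipped log-loss in NumPy:

import numpy as np

def safe_log_loss(y, h, eps=1e-6):
    h = np.clip(h, eps, 1 - eps)   # keep h away from exactly 0 and 1
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

print(safe_log_loss(np.array([0.0]), np.array([1.0])))   # large but finite, not nan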
H: How to install XGBoost or LightGBM on Windows?
I'm a Windows user and would like to use those mentioned algorithms in the title with my Jupyter notebook which is a part of Anaconda installation.
I've tried, in the Anaconda prompt window:
pip install xgboost
which returned:
Could not find a version that satisfies the requirement xgboost (from
versions: ) No matching distribution found for xgboost
Likewise, I've tried:
conda install lightgbm
which returned:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url
https://repo.anaconda.com/pkgs/r/noarch/repodata.json.bz2 Elapsed: -
Help would be appreciated.
AI: Follow this XGBoost installation guide: https://xgboost.readthedocs.io/en/latest/build.html
If you are using python3 then make sure that you run: pip3 install xgboost
To fix the problem with lightgbm on Windows, try installing OpenSSL first; refer to this: https://www.cloudinsidr.com/content/how-to-install-the-most-recent-version-of-openssl-on-windows-10-in-64-bit/n
H: Is it wrong if I cluster numerical attributes and categorical attributes separately?
I have a dataset of credit customers containing mixed data types (numerical and categorical with several levels). I am trying to perform segmentation so that I can end up with k groups and then build definitions (based on attributes I have).
While there are solutions for clustering data with mixed data types (K-prototypes, hierarchical clustering with Gower's distance), why would it be wrong to cluster numerical attributes and categorical attributes separately and come up with definitions individually?
AI: There is nothing wrong with not using all attributes. In fact there are subspace clustering approaches that attempt to identify (partially) informative attributes along with clusters (but mostly for continuous variables).
On your data, you will have big data preparation issues that would need careful weighting and nonlinear transformations. So it probably is a good idea to first try to understand each attribute before you go into any combinations.
Also bear in mind that a clustering never is correct or "optimal". A successful clustering is one that gave you a new insight. Any means that lead to verifiable insights is okay! Just don't assume that you could automate this. |
H: Can a Neural Network Measure the Random Error in a Linear Series?
I have been trying to develop a neural network to measure the error in a linear series. What I would like the model to do is infer a linear regression line and then measure the mean absolute error around that line.
I have tried a number of neural network model configurations, including recurrent configurations, but the network learns a weak relationship and then overfits. I have also tried L1 and L2 regularization but neither work.
Any thoughts? Thanks!
Below is the code I am using to simulate the data and a fit sample model:
import numpy as np, matplotlib.pyplot as plt
from keras import layers
from keras.models import Sequential
from keras.optimizers import Adam
from keras.backend import clear_session
## Simulate the data:
np.random.seed(20190318)
X = np.array(()).reshape(0, 50)
Y = np.array(()).reshape(0, 1)
for _ in range(500):
i = np.random.randint(100, 110) # Intercept.
s = np.random.randint(1, 10) # Slope.
e = np.random.normal(0, 25, 50) # Error.
X_i = np.round(i + (s * np.arange(0, 50)) + e, 2).reshape(1, 50)
Y_i = np.sum(np.abs(e)).reshape(1, 1)
X = np.concatenate((X, X_i), axis = 0)
Y = np.concatenate((Y, Y_i), axis = 0)
## Training and validation data:
split = 400
X_train = X[:split, :-1]
Y_train = Y[:split, -1:]
X_valid = X[split:, :-1]
Y_valid = Y[split:, -1:]
print(X_train.shape)
print(Y_train.shape)
print()
print(X_valid.shape)
print(Y_valid.shape)
## Graph of one of the series:
plt.plot(X_train[0])
## Sample model (takes about a minute to run):
clear_session()
model_fnn = Sequential()
model_fnn.add(layers.Dense(512, activation = 'relu', input_shape = (X_train.shape[1],)))
model_fnn.add(layers.Dense(512, activation = 'relu'))
model_fnn.add(layers.Dense( 1, activation = None))
# Compile model.
model_fnn.compile(optimizer = Adam(lr = 1e-4), loss = 'mse')
# Fit model.
history_fnn = model_fnn.fit(X_train, Y_train, batch_size = 32, epochs = 100, verbose = False,
validation_data = (X_valid, Y_valid))
## Sample model learning curves:
loss_fnn = history_fnn.history['loss']
val_loss_fnn = history_fnn.history['val_loss']
epochs_fnn = range(1, len(loss_fnn) + 1)
plt.plot(epochs_fnn, loss_fnn, 'black', label = 'Training Loss')
plt.plot(epochs_fnn, val_loss_fnn, 'red', label = 'Validation Loss')
plt.title('FNN: Training and Validation Loss')
plt.legend()
plt.show()
UPDATE:
## Predict.
Y_train_fnn = model_fnn.predict(X_train)
Y_valid_fnn = model_fnn.predict(X_valid)
## Evaluate predictions with training data.
plt.scatter(Y_train, Y_train_fnn)
plt.xlabel("Actual")
plt.ylabel("Predicted")
## Evaluate predictions with training data.
plt.scatter(Y_valid, Y_valid_fnn)
plt.xlabel("Actual")
plt.ylabel("Predicted")
AI: This problem is naturally hard. The underlying function that we try to learn is
$$\mathbf{X}=i+s\,\mathbf{t}+\mathbf{e} \rightarrow Y=\left \| \mathbf{X} - i - s\,\mathbf{t} \right \|_1 = \left \| \mathbf{e} \right \|_1=\sum_d|e_d|$$
where $i$ and $s$ are unknown random variables and $\mathbf{t}=(0,1,\dots,49)$ is the index vector. For large $i$ and $s$, $\left \| \mathbf{e} \right \|_1$ is naturally hard to recover from $\mathbf{X}$. I found a working example (training error almost zero) by setting the intercept $i$ and slope $s$ to zero, drastically shrinking the network size to work better with a small sample size (800), and increasing the number of epochs to 800, which was crucial. Also, (true value, error) is plotted at the end for the training data.
You can work up from this point to see the effect of increasing $i$ and $s$ on performance.
import numpy as np, matplotlib.pyplot as plt
from keras import layers
from keras.models import Sequential
from keras.optimizers import Adam
from keras.backend import clear_session
## Simulate the data:
np.random.seed(20190318)
dimension = 50
X = np.array(()).reshape(0, dimension)
Y = np.array(()).reshape(0, 1)
for _ in range(1000):
i = 0 # np.random.randint(100, 110) # Intercept.
s = 0 # np.random.randint(1, 10) # Slope.
e = np.random.normal(0, 25, dimension) # Error.
X_i = np.round(i + (s * np.arange(0, dimension)) + e, 2).reshape(1, dimension)
Y_i = np.sum(np.abs(e)).reshape(1, 1)
X = np.concatenate((X, X_i), axis = 0)
Y = np.concatenate((Y, Y_i), axis = 0)
## Training and validation data:
split = 800
X_train = X[:split, :-1]
Y_train = Y[:split, -1:]
X_valid = X[split:, :-1]
Y_valid = Y[split:, -1:]
print(X_train.shape)
print(Y_train.shape)
print()
print(X_valid.shape)
print(Y_valid.shape)
## Graph of one of the series:
plt.plot(X_train[0])
## Sample model (takes about a minute to run):
clear_session()
model_fnn = Sequential()
model_fnn.add(layers.Dense(dimension, activation = 'relu', input_shape = (X_train.shape[1],)))
model_fnn.add(layers.Dense(dimension, activation = 'relu'))
model_fnn.add(layers.Dense( 1, activation = 'linear'))
# Compile model.
model_fnn.compile(optimizer = Adam(lr = 1e-4), loss = 'mse')
# Fit model.
history_fnn = model_fnn.fit(X_train, Y_train, batch_size = 32, epochs = 800, verbose = True,
validation_data = (X_valid, Y_valid))
# Sample model learning curves:
loss_fnn = history_fnn.history['loss']
val_loss_fnn = history_fnn.history['val_loss']
epochs_fnn = range(1, len(loss_fnn) + 1)
plt.figure(1)
offset = 5
plt.plot(epochs_fnn[offset:], loss_fnn[offset:], 'black', label = 'Training Loss')
plt.plot(epochs_fnn[offset:], val_loss_fnn[offset:], 'red', label = 'Validation Loss')
plt.title('FNN: Training and Validation Loss')
plt.legend()
## Predict.
plt.figure(2)
Y_train_fnn = model_fnn.predict(X_train)
## Evaluate predictions with training data.
sorted_index = Y_train.argsort(axis=0)
Y_train_sorted = np.reshape(Y_train[sorted_index], (-1, 1))
Y_train_fnn_sorted = np.reshape(Y_train_fnn[sorted_index], (-1, 1))
plt.plot(Y_train_sorted, Y_train_sorted - Y_train_fnn_sorted)
plt.xlabel("Y(true) train")
plt.ylabel("Y(true) - Y(predicted) train")
plt.show() |
H: How important is the input data for a ML model?
For the last 4-6 weeks, I have been learning and working on ML for the first time: reading blogs, articles, documentation, etc., and practising. I have asked a lot of questions here on Stack Overflow as well.
While I have gained some hands-on experience, I still have a very basic doubt (confusion) --
When I take my input data set with 1000 records, the model prediction accuracy is say 75%. When I keep 50000 records, the model accuracy is 65%.
1) Does that mean the model's predictions depend completely on the input data being fed into it?
2) If #1 is true, then in the real world, where we don't have control over the input data, how will the model work?
Ex. For suggesting products to a customer, the input data to the model would be the past customer buying experiences. As the quantity of input data increases, will the prediction accuracy increase or decrease?
Please let me know if I need to add further details to my question.
Thanks.
Edit - 1 - Below added frequency distribution of my input data:
Edit - 2 - Adding Confusion matrix and Classification report:
AI: To answer your first question, the accuracy of the model highly depends on the "quality" of the input data. Basically, your training data should represent the same scenario as that of the final model deployment environment.
There are two possible reasons why the scenario you mentioned is happening,
When you added more data, maybe there is no good relationship between input features and label for the new examples. It is always said that less and clean data is better than large and messy data.
If the 49000 records added afterward come from the same distribution (i.e. have a good relationship between label and features) as the 1000 before, there are again two possible reasons:
A. If the accuracy on the train dataset is low along with the test dataset, e.g. training accuracy is 70% and test accuracy is 65%, then you are underfitting the data: the model is too simple for the problem, or the dataset does not carry enough signal for it.
B. If your training accuracy is near 100% and test accuracy is 65%, you are overfitting the data: the model is too complex, so you should go with a simpler algorithm.
NOTE: Since you haven't mentioned the training accuracy, it is difficult to say which of the two above is happening.
Now coming to your second question about real-world deployment: there is something called model staleness, which is basically the problem of model accuracy degrading over time. This is an article by a product manager at Google explaining the staleness problem and how it can be solved; it will answer your second question.
Let me know if something is not clear. |
H: In calculating policy gradients, wouldn't longer trajectories have more weight according to the policy gradient formula?
In Sergey Levine's lecture on policy gradients (Berkeley deep RL course), he shows that the policy gradient can be evaluated according to the formula
In this formula, wouldn't longer trajectories get more weight (in finite horizon situations), since the middle term, the sum over log pi, would involve more terms? (Why would it work like that?)
The specific example I have in mind is pacman, longer trajectories would contribute more to the gradient. Should it work like that?
AI: wouldn't longer trajectories get more weight?
Not necessarily. Each gradient term $\triangledown_{\theta}\log\pi$ could be negative or positive (in a 1D analogy), therefore a larger number of gradient terms could still sum to a smaller weight, which makes sense: a consistent short trajectory is more informative (has more weight) than an inconsistent long trajectory with sign-alternating policy gradients.
Why would it work like that?
If we are comparing two consistent trajectories, where most gradients are in the same direction, this formula makes sense again. A long consistent trajectory contains more useful information (more steps that confirm each other) than a short one. In real life, compare the informativeness of a successful week to a successful year for your policy learning. |
H: Using the Stanford Named Entity Tagger in R
I am experimenting with the Stanford Named Entity Tagger here http://nlp.stanford.edu:8080/ner/process and I feel it would be useful in my research. Does anyone know of a example that I could follow so that I could do the analysis in R? Ideally I'd want to provide a string and get back a count (as a list) of the number of organisations, persons, etc recognised in the string. Thanks.
AI: https://github.com/statsmaths/coreNLP can be used as a wrapper for this library in R. The documentation has good examples for most use cases like NER and POS.
H: Manual feature engineering based on the output
So, I'm working on an ML model that has as potential predictors: age, a code for the person's city, their social status (married/single and so on), the number of their children, and the output signed, which is binary (0 or 1). That's the initial dataset I have.
My prediction would be based on those features to predict the value of signed for that person.
I already generated predictions on unseen data. Upon validating the predicted results against the real data, I get 25% accuracy, while cross-validation gives me 65% accuracy. So I thought: over-fitting.
Here is my question: I went back to the early stages of the whole process and started creating new features. Example: instead of the raw code for the city, which makes no sense as input to an ML model, I created classes based on the percentage of signed; a city with a higher percentage of 'signed' (the output) gets assigned a higher value of class_city, which greatly improved the signed-class_city relationship in my correlation matrix, which makes sense. Is what I'm doing correct, or shouldn't I create features based on the output?
Here is my CM:
After re-modelling with only 3 features (department_class, age and situation), I tested my model on unseen data of 148 rows, compared to 60k rows in the training file.
The first model with the old feature (the ID of the department) gave 25% accuracy, while the second model with the new feature class_department gave 71% (again on unseen data).
Note: the first model with 25% accuracy also has some other ID features (they might be causing the model to have such weak accuracy together with the department_ID).
AI: You can create features based on output values, but you should be careful in doing this.
When you use the value of class_city (based on percentage of signed for that city) for a given data point, note that this calculation cannot include the current data point, since you will not have the value of ‘signed’ during prediction.
One way to handle this is to split the total data you have into three parts - estimation, train, test. The estimation set is used only to estimate the class_city values for each city. These values can then be used in the train and test data. This way, you have the label values without your model doing anything ‘unfair’. For testing, you can infact use the data from estimation+train sets to estimate the class_city values for use in the test set. The same holds true for any unseen data. You can use the class_city values estimated from all the previous data points.
In the context of time series data, for example, the class_city value for any data point can potentially use information from all previous data points, and should not use any information from future data points! |
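A minimal sketch of this estimation/train/test scheme with pandas, using a tiny synthetic stand-in for your dataset (columns 'city' and 'signed' are assumed):

import pandas as pd
from sklearn.model_selection import train_test_split

# Tiny synthetic stand-in for the real dataset.
df = pd.DataFrame({'city':   ['A', 'A', 'B', 'B', 'C', 'C', 'A', 'B', 'C', 'A'] * 10,
                   'signed': [1,   0,   0,   0,   1,   1,   1,   0,   1,   0] * 10})

# class_city is estimated ONLY on the estimation split...
estimation, rest = train_test_split(df, test_size=0.7, random_state=0)
train, test = train_test_split(rest, test_size=0.3, random_state=0)
class_city = estimation.groupby('city')['signed'].mean()
global_rate = estimation['signed'].mean()            # fallback for unseen cities

# ...and merely looked up for train/test (and any future unseen data), so the
# label of the row being encoded never leaks into its own feature.
train = train.assign(class_city=train['city'].map(class_city).fillna(global_rate))
test = test.assign(class_city=test['city'].map(class_city).fillna(global_rate))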
H: Clustering based on distance between points
I am trying to cluster geographical locations in such a way that all the locations inside each cluster are at max within 25 miles of each other. For this, I am using Agglomerative clustering. I am using a custom distance function to calculate the distances between each location. I do not want to specify the number of clusters. Instead, I want the model to cluster until all the locations within each cluster are within 25 miles of each other. I have tried doing this in both Scipy and Sklearn but haven't made any progress. Below is the approach that I have tried. It only gives me one cluster. Please help. Thanks in advance.
from scipy.cluster.hierarchy import fclusterdata
max_dist = 25
# dist is a custom function that calculates the distance (in miles) between two locations using the geographical coordinates
fclusterdata(locations_in_RI[['Latitude', 'Longitude']].values, t=max_dist, metric=dist, criterion='distance')
AI: I think for HAC (Hierachical Aglomeritive Clustering) it's always helpful to obtain the linkage matrix first which can give you some insight on how the clusters are formed iteratively. Besides that scipy also provides a dendrogram method for you to visualize the cluster formation, which can help you avoid treating the clustering process as a "black box".
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
# generate the linkage matrix
X = locations_in_RI[['Latitude', 'Longitude']].values
Z = linkage(X,
method='complete', # dissimilarity metric: max distance across all pairs of
# records between two clusters
metric='euclidean'
) # you can peek into the Z matrix to see how clusters are
# merged at each iteration of the algorithm
# calculate full dendrogram and visualize it
plt.figure(figsize=(30, 10))
dendrogram(Z)
plt.show()
# retrive clusters with `max_d`
from scipy.cluster.hierarchy import fcluster
max_d = 25 # I assume that your `Latitude` and `Longitude` columns are both in
# units of miles
clusters = fcluster(Z, max_d, criterion='distance')
The clusters variable is an array of cluster ids, which is what you want.
There is a very helpful (yet kinda long) post on HAC worth reading. |
H: How to visualise GIST features of an image
I am currently working on a image classification application using deep learning algorithms (either by using GIST features or CNN). I need help in understanding the below queries.
I have extracted the GIST features of an image (Reference Link). These extracted features will be given as input to deep learning algorithm to classify the images.
Is there a way to visualize the extracted features on top of the image?
CNN or GIST, Which is better for image classification? Is GIST outdated when compared to CNN?
Thank you,
KK
AI: Given that code is trivial for both (GIST + Network and Raw Pixel + Network), you can try three approaches for a given project.
GIST + Dense layers (GIST is not spatially distributed)
Raw Pixels + CNN + Dense Layers
Raw pixels + CNN + Dense + input layer 2 (GIST) + Dense
For some projects, GIST can help since it is an abstract feature that CNN might or might not learn.
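As a rough sketch of approach 3 — assuming the Keras functional API, hypothetical input shapes (64x64 grayscale images, a 512-dimensional GIST vector) and 10 classes:

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

image_in = Input(shape=(64, 64, 1))                  # raw pixels branch (CNN)
x = Conv2D(16, (3, 3), activation='relu')(image_in)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)

gist_in = Input(shape=(512,))                        # GIST feature branch
merged = concatenate([x, gist_in])                   # dense layers merge both branches
out = Dense(10, activation='softmax')(Dense(64, activation='relu')(merged))

model = Model(inputs=[image_in, gist_in], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')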
EDIT: This paper compares GIST and CNN
Regarding:
Is there a way to visualize the extracted features on top of the image?
This can be done with an attention layer in approach 3 (CNN + GIST).
The CNN provides spatial distribution (required for visualization), and the dense layer that merges the CNN's output with GIST can be used with an attention layer.
Paper for visualization |
H: Meaning of this notion in 0-1 loss?
I am reading a paper and encountered this notion:
$$1_{\{Y=1\}}$$
To me it seems to be the expression as below, but I am not entirely sure and I don't think the author explictly explained it:
if Y==1:
return 1
else:
return 0
Can someone help me to clarify this notion? Much thanks for your time (:
It appears in:
https://papers.nips.cc/paper/5073-learning-with-noisy-labels.pdf
AI: Your understanding is correct.
This is known as the indicator function.
The indicator function of a subset $A$ of a set $X$ is a function
$$1_A(x)= \begin{cases}1, & x \in A \\ 0, & x \notin A \end{cases}$$ |
H: How to Build Mobile Application for Image Recognition?
I want to write an application on (Android) phone for image recognition. The (Keras) model itself is written and trained on a desktop machine and works satisfactorily with standard images. However, I have no experience with app programing so I have no clue how to write an app itself which could utilize the model.
The purpose is to use the mobile camera for seeking the object; have a big button to shot an image; have the image classified and display the output label as text. No need for any sophisticated outlook. I can write programs in Python.
Is there any handy framework that could be used for this purpose?
AI: I would advise you to use Kivy for Python. It has an active community and there is also a book on this topic: Practical Computer Vision Applications Using Deep Learning with CNNs: With Detailed Examples in Python Using TensorFlow and Kivy. Kivy is very easy for newbies and you can develop multiplatform applications (Windows, iOS, Android).
H: Newton method and Vanishing Gradient
I read the article on Vanishing Gradient problem, which states that the problem can be rectified by using ReLu based activation function.
Now I am not able to understand that if using ReLu based activation function solves the problem, then why there are so many research papers suggesting the use of Newton's method based optimization algorithms for deep learning instead of Gradient Descent?
While reading research papers, I was having the strong impression that vanishing gradient problem was the core reason for such suggestions but now I am confused whether Newton's method is really needed if Gradient Descent can be modified to rectify all the problems faced during machine learning.
AI: Why there are so many research papers suggesting the use of Newton's
method based optimization algorithms for deep learning instead of
Gradient Descent?
Newton method has a faster convergence rate than gradient descent, and this is the main reason why it may be suggested as a replacement for gradient descent.
Is Newton's method really needed if Gradient Descent can be modified
to rectify all the problems faced during machine learning?
Existence of vanishing gradient problem depends on the choice of "activation function" and the "depth" of network. Newton method and gradient descent would both face this problem for a function like Sigmoid, since in the flat extremes of Sigmoid both first and second order derivatives are small and exponentially vanishing by depth. In other words, the problem is solved for both methods by the choice of function.
As a side note, 1st- and 2nd-order derivatives of Sigmoid go to zero at the same rate. Here is a graph of Sigmoid and its derivatives; zoom into the extremes.
Historical note. Newton method predates the vanishing gradient problem (which was faced after the introduction of Backpropagation in 60s) by centuries. |
H: Can you apply PCA to part of your dataset?
I am working with kaggle dataset that has over 130 features composed of 116 categorical and 14 continuous features. I plotted the heatmap for the 14 continuous variables and found that most of them are weakly correlated with the response variable but highly correlated with each other. I am trying to apply PCA to this part of the data and glue them back together as columns with the categorical variables. Is it ok to do so? Or should I one-hot-encoding / label encoding the categorical variables and do pca to the entire dataset?
AI: I plotted the heatmap for the 14 continuous variables and found that most of them are weakly correlated with the response variable but highly correlated with each other
You absolutely can select specific columns [continuous data] from your original data and apply PCA on them; the PCA1, PCA2 eigenvectors will show you the amount of correlation between the features. However, you should use all of the data points (rows) when applying PCA, as PCA captures the maximal variance across data points and it's best to use all of them for accurate results.
So in short, you should subset column-wise (features) but not row-wise (data points).
Is it ok to do so? Or should I one-hot-encoding / label encoding the categorical variables and do PCA to the entire dataset?
No need to do this, and it doesn't make sense in case of PCA as it only works on continuous data points. |
H: How to get probability of classification
I have the binary classification, I tried several models KNN, SVM, decision tree, and random forest. I have 50 000 samples, X_train has 50 000 rows and 2300 columns. Everything works well, but I want to build some semi-supervised model because I have some unlabeled samples. In this case, I need to get the probability of classification that I tried, but it doesn't work.
At first, I tried it for KNN
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 1, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
print(classifier.predict_proba(X_test[0]))
I get [[1. 0.]]. I don't understand why it is 1? (as first I thought it is 100%, but I get it for all test samples)
Then I tried it for the decision tree
classifier = DecisionTreeClassifier(random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
print(classifier.predict_proba(X_test[0]))
I get [[1. 0.]] too. Why it is an integer?
AI: It is indeed a probability of exactly 1, and this follows from the parameters being used (n_neighbors=1 and an unlimited tree depth).
The probability for KNN is the average of all the neighbors. If there is only one neighbor n_neighbor=1 it can only be 1 or 0.
The DecisionTreeClassifier expands until all the training data is classified perfectly if you don't control the depth. Again, this likely led to overfitting and to extreme probability predictions as a result. You should try different values for max_depth and see what works best. You can do so by performing cross-validation. (If you are unfamiliar with these, I recommend reading up on them first.)
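For example, with synthetic stand-in data, fractional probabilities appear as soon as those parameters are changed:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the real data (binary labels, noisy features).
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(500, 10), rng.randint(0, 2, 500)
X_test = rng.rand(5, 10)

# More neighbors -> probabilities are averages over 15 neighbors, not just 0/1.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
print(knn.predict_proba(X_test[:1]))

# Limited depth -> leaves contain mixed classes, so class fractions are fractional.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(tree.predict_proba(X_test[:1]))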
H: Why does my minimal CNN example show strongly fluctuating validation loss?
I'm fairly new at working with neural networks and think I am making some basic mistake. I am trying to assign simulated images to 5 classes to test what (if any) networks are helpful for a large data problem we have in our group. I am training the ResNet50 CNN included in Keras on 40000 (256 by 256 pixel greyscale) images. While training loss improves quickly within the first epoch, validation loss fluctuates wildly in a (to me) fairly random manner.
I am trying to use as many high level functions as possible and have ended up with the following code:
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import GlobalAveragePooling2D, Dense
inputShape = (256, 256,1)
targetSize = (256, 256 )
batchSize = 128
# First, load resnet
base_model = ResNet50(include_top =False,
weights =None,
input_shape=inputShape)
x = base_model.output
# modified based on the article https://github.com/priya-dwivedi/
# Deep-Learning/blob/master/resnet_keras/Residual_Networks_yourself.ipynb
x = GlobalAveragePooling2D()(x)
predictions = Dense(5, activation= 'softmax')(x)
model = Model(inputs = base_model.input, outputs = predictions)
# Compile model, might want to change loss and metrics
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics = ["acc"]
)
# Define the generators to read the data. Training generator performs data argumentation as well
train_datagen = ImageDataGenerator(
samplewise_center =True,
samplewise_std_normalization = True,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(
samplewise_center =True,
samplewise_std_normalization = True,
)
train_generator = train_datagen.flow_from_directory(
'trainingData/FolderBasedTrain256',
target_size=targetSize,
color_mode = 'grayscale',
batch_size=batchSize,
save_to_dir = 'trainingData/argumentedImages',
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
'trainingData/FolderBasedValidation256',
target_size=targetSize,
batch_size=batchSize,
color_mode = 'grayscale',
class_mode='categorical')
# Fit the Modell, saving the history for later plotting
history = model.fit_generator(
train_generator,
epochs=20,
validation_data=validation_generator,
#steps_per_epoch = 62,
steps_per_epoch = 31,
#callbacks=[checkpointer],
validation_steps=9)
I can always create more images or train longer, but to me, this looks as if something fundamental went wrong somewhere. I would be very gratefull for any and all ideas. Thank you!
EDIT: I was told to stress that validation and training set are both created by exaclty the same simulation routine, so they should be relativly easy to classify
EDIT2: Found the error! My batch size did not match the amount of data and the steps per epoch, so the CNN never saw all of the training data. Now everything converges nicely and I can evaluate the choice of model. Thanks to all contributors. A sketch of the fix follows.
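For reference, a minimal sketch of that relationship, using the generators and batchSize defined above (the iterators returned by flow_from_directory expose a samples attribute):
import math

# Each epoch should cover the whole dataset: with 40000 training images and a
# batch size of 128, steps_per_epoch must be ceil(40000 / 128) = 313, not 31.
steps_per_epoch = math.ceil(train_generator.samples / batchSize)
validation_steps = math.ceil(validation_generator.samples / batchSize)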
AI: There is nothing fundamentally wrong with your code, but maybe your model is not right for your current toy-problem.
In general, this is typical behavior when training in deep learning. Think about it: the loss you optimize is the training loss, so it is directly affected by the training process and, as you said, it improves quickly. The validation loss is only affected indirectly, so naturally it is more volatile in comparison.
When you are training, the model is attempting to estimate the real distribution of the data, but all it has to rely on is the distribution of the training dataset (which is similar, but not the same).
Suggestions:
I think your model is overkill for a 256x256 grayscale dataset with just 5 classes (ResNet was designed for ImageNet, which contains RGB images from 1000 categories). In its current state, the model finds it very easy to memorize the training set and overfit. You should look for models that are meant to be used for MNIST, or at most CIFAR-10.
You can insist on this model and attempt to increase regularization, but I'm not sure that will be enough to prevent the model from overfitting in this case.
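As an illustration of the first suggestion, a much smaller CIFAR-style network might look like this (a sketch only; the layer sizes and dropout rate are arbitrary choices, not a tuned architecture):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dense, Dropout

small_model = Sequential([
    Conv2D(16, 3, activation='relu', input_shape=(256, 256, 1)),
    MaxPooling2D(2),
    Conv2D(32, 3, activation='relu'),
    MaxPooling2D(2),
    Conv2D(64, 3, activation='relu'),
    GlobalAveragePooling2D(),
    Dropout(0.5),
    Dense(5, activation='softmax'),
])
small_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])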
H: What's the difference between feature importance from Random Forest and Pearson correlation coefficient
I have the following business domain: a product with three outputs/labels. The outputs are affected by 1000 procedures, and each procedure is digitized and measured. The customer wants to know which procedures are the most influential on the outputs.
1. From the Pearson correlation coefficient we can learn the relationship between two variables: 1 means a positive (proportional) relationship, -1 a negative one, and 0 no linear relationship. So I could take the largest coefficients to find the most influential procedures.
2. From the Random Forest algorithm, I could get the top feature importances and likewise identify the most influential procedures.
Which one is better?
AI: Pearson correlations capture linear relationships between the input and target variables. Therefore this only makes sense for continuous inputs and a continuous target variable, and not continuous inputs with a binary/categorical output. Correlations essentially measure the positive/negative 'change' in one feature as you increase/decrease the other.
So it doesn't make much sense to compare the relationship between your input features and the categorical outputs this way. You may as well calculate the mean input for each feature and each label, and calculate the differences between those. I found this answer on Cross-Validated which explains this much better than I can.
Feature importance in tree-based models is more likely to actually identify which features are most influential when differentiating your classes, provided that the model performs well. How this feature importance is calculated depends on the implementation; this article gives a good overview of how different tree-based models calculate importance for features.
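To make the comparison concrete, a minimal sketch with pandas and scikit-learn (the file, DataFrame and column names are hypothetical stand-ins for the 1000 procedure measurements):
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('procedures.csv')            # hypothetical file: procedure columns plus a 'label' column
X, y = df.drop(columns='label'), df['label']

# Pearson correlation only makes sense against a numeric target,
# e.g. a one-vs-rest indicator for a single label value
target = (y == y.unique()[0]).astype(int)
correlations = X.corrwith(target).abs().sort_values(ascending=False)
print(correlations.head(10))

# Tree-based importance handles the categorical target directly
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))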
H: What activation function should I use for a specific regression problem?
Which is better for regression problems: a neural net with tanh/sigmoid and exp-like activations, or ReLU with a linear output? The standard is ReLU, but it feels like a brute-force solution that requires a certain network size, and I would like to avoid creating a very big net. Sigmoid is often preferred, but in my case the regression output takes values in the range (0, 1e7)... maybe a sigmoid net with a linear head would also work? I am curious about your take on the subject.
AI: There are two points that have to be considered.
Take care of the output of your network. If the target is a real number that can take any value, you have to use a linear activation at the output.
The inner activations highly depend on your task and the size of the network that you use. What I'm going to tell you is based on experience. If you don't have a very deep network, $tanh$ and $ReLU$ may not differ very much in convergence time. If you're using very deep networks, don't use $tanh$ at all. $ReLU$ is also not recommended in some contexts; you can employ $PReLU$ in very deep networks, which does not add too many parameters to learn. You can also use $leaky-ReLU$ in order to avoid the dying-ReLU problem, which may occur.
Finally, about the other nonlinearity you referred to: try not to use $sigmoid$, because its expected output is not zero but one half. Loosely speaking, activations with a non-zero mean shift the inputs of the following layer, which makes it harder for the network to learn the weights.
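As a concrete illustration (my own sketch, not part of the answer above): with a target spanning (0, 1e7), scaling the target, for example with a log transform, usually matters more than the choice of hidden activation, and a linear head handles the unbounded range. The layer sizes and data below are arbitrary placeholders:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(1000, 20)                    # hypothetical features
y = np.random.uniform(0, 1e7, size=(1000, 1))   # hypothetical targets in (0, 1e7)
y_scaled = np.log1p(y)                          # compress the huge range

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dense(64, activation='relu'),
    Dense(1, activation='linear'),              # linear head for regression
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y_scaled, epochs=10, batch_size=32, verbose=0)

predictions = np.expm1(model.predict(X[:5]))    # undo the log scaling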