H: Loops in R programming,
I want to update this Remaining column in this table below:
How do I do this in R programming?
I tried
while (n.Remaining>0)
{ n.remaining <- n.total-n.expense}
Desired Output:
AI: In R you just need to do 10 - cumsum(expense), where 10 is the total. This will give you the Remaining column. |
H: What is the possible range of SVR parameters range?
I'm working on a regression problem. While tuning the parameters of SVR I got the following values: C = 100, gamma = 10 and epsilon = 100, for which I got a 95 percent R-squared. My question is: what is the theoretical range of these parameter values?
AI: In support vector regression the inverse regularization parameter $C$ can be selected from the interval $[0,\infty)$, where $C=0$ means that we are regularizing very heavily and $C\to \infty$ means no regularization.
The parameter $\varepsilon$ is also from the interval $[0,\infty)$. In which $\varepsilon=0$ forces the regression to penalize every point that is not exactly on the regression line. Whereas $\varepsilon > 0$ allows an indifference margin around the regression in which a deviation will not be counted as an error.
Additionally, there are slack variables $\xi\geq 0$ and $\hat{\xi}\geq 0$. These are zero if a point is inside the indifference margin. If a data point lies above and outside the indifference margin we will have $\xi>0$, and if a data point lies below and outside the indifference margin we will have $\hat{\xi}>0$.
I think you mean the form parameter of your radial basis function when you talk about $\gamma$. If we have
$$\varphi(\boldsymbol{x}_i,\boldsymbol{x}_j|\gamma)=\exp\left[-\gamma||\boldsymbol{x}_i-\boldsymbol{x}_j||^2\right]$$
then $\gamma \in (0,\infty)$ (note the minus sign in front of $\gamma$). For $\gamma \to 0$ we make the kernel flatter, as $\varphi \approx 1$. If $\gamma \to \infty$ we will get a very peaked kernel, which will be 1 when $\boldsymbol{x}_i\approx \boldsymbol{x}_j$ and almost zero everywhere else.
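In practice you do not scan these unbounded intervals directly; a common approach is a logarithmic grid spanning a few orders of magnitude. Below is a minimal sketch with scikit-learn's SVR and GridSearchCV (the grid values and the toy data are just assumptions for illustration):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Toy regression data so the sketch is runnable
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Logarithmic grids over a few orders of magnitude for each parameter
param_grid = {
    "C": np.logspace(-2, 3, 6),        # 0.01 ... 1000
    "gamma": np.logspace(-3, 2, 6),    # 0.001 ... 100
    "epsilon": np.logspace(-3, 1, 5),  # 0.001 ... 10
}

search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, search.best_score_)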
You should also have a look at the documentation for the implementation of these parameters. It might happen that these parameters are not implemented as you think (see note on $\gamma$). |
H: How to label overlapping objects for deep learning model training
I am training yolov3 to detect a custom object (chickens). In a lot of my training images I have overlapping chickens (can only see a partial chicken etc). Is there a common practice for how to label the data (bounding box) in these cases? Should you only label the portion of the image which you can see?
AI: There is no common practice in labeling the bounding boxes. It is always problem dependent. For example, if you want to count the chickens then you should also label the whole chicken as one instance of a chicken.
If you simply want to detect whether there is a chicken in the picture you should label the unoccluded part.
You have to think about your problem. What is the goal of the algorithm? Could a human do the task without imagining where the rest of the object is? You should also consider the pixel imbalance for your problem. In general, the first method is a harder task than the second method because even humans have problems in labeling the bounding box for occluded objects. Hence, you will have a lot of variance due to this factor. If you label only what you see the bounding box labeling will be more reliable. As far as I know, the PASCAL Visual Object Classes data set which was used in the YOLO publication did only label what you can see and not what is occluded.
BTW I hope your task aims to improve the quality of life of the chickens. It would be a shame if machine learning were used to harm them. |
H: The principle of LM deep model
Language modeling (LM) is the task of predicting the next word.
Does the deep model need an encoder? From the ptb code of tensor2tensor, I find that the deep model does not contain an encoder.
Or can both with-encoder and without-encoder models do the LM task?
AI: The goal of LM is to learn a probability distribution over sequences
of symbols pertaining to a language.
That is, to learn $P(w_1,...,w_N)$ (resource).
This modeling can be accomplished by
Predicting the next word given the previous words: $P(w_i | w_1,...,w_{i-1})$, or
Predicting the neighbor words given the center word (Skip-gram): $P(w_{i+k}| w_i), k \in \{-2, -1, 1, 2\}$, or
Predicting the center word given the neighbor words (CBOW or Continuous Bag-of-Words): $P(w_i| w_{i-2},w_{i-1},w_{i+1}, w_{i+2})$, or other designs.
Does the deep model need an encoder? From the ptb code of
tensor2tensor, I find that the deep model does not contain an encoder.
Yes. Modern LM solutions (all deep ones) try to find an encoding (embedding) that helps them to predict the next, neighbor, or center words as close as possible. However, a word encoding can be used as a constant input to other models. The ptb.py code calls text_encoder.TokenTextEncoder to receive such word encodings.
Can both with-encoder and without-encoder models do the LM task?
The LM task can be tackled without encoders too. For example, we can use frequency tables of adjacent words to build a model (n-gram modeling); e.g. if all pairs (We, ?) appeared 10K times and the pair (We, can) appeared 100 times, then P(can | We) = 0.01. However, encoders are the core of modern LM solutions.
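A minimal sketch of such an encoder-free bigram model (pure counting, no neural network), using a made-up toy corpus:

from collections import Counter

corpus = "we can do it . we can try . we will see .".split()

# Count bigrams and the contexts they condition on
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def bigram_prob(prev_word, word):
    """P(word | prev_word) estimated from relative frequencies."""
    return bigrams[(prev_word, word)] / contexts[prev_word]

print(bigram_prob("we", "can"))  # 2/3 in this toy corpus

Modern neural LMs replace these sparse counts with learned embeddings and deep networks. |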
H: Can a neural network compute $y = x^2$?
In the spirit of the famous TensorFlow Fizz Buzz joke and the XOR problem, I started to wonder: is it possible to design a neural network that implements the $y = x^2$ function?
Given some representation of a number (e.g. as a vector in binary form, so that number 5 is represented as [1,0,1,0,0,0,0,...]), the neural network should learn to return its square - 25 in this case.
If I could implement $y=x^2$, I could probably implement $y=x^3$ and generally any polynomial of x, and then with a Taylor series I could approximate $y=\sin(x)$, which would solve the Fizz Buzz problem - a neural network that can find the remainder of a division.
Clearly, just the linear part of NNs won't be able to perform this task, so if we could do the multiplication, it would be happening thanks to activation function.
Can you suggest any ideas or reading on subject?
AI: Neural networks are also called universal function approximators, which is based on the universal approximation theorem. It states that:
In the mathematical theory of artificial neural networks, the universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function.
Meaning an ANN with a non-linear activation function can approximate the function that relates the input to the output. The function $y = x^2$ can easily be approximated using a regression ANN.
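As a quick sanity check, here is a minimal sketch (assuming scikit-learn's MLPRegressor) that fits $y = x^2$ on a bounded interval; note the approximation only holds on the compact range it was trained on:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train on x in [-5, 5]; outside this compact range the fit degrades quickly
X = np.linspace(-5, 5, 2000).reshape(-1, 1)
y = (X ** 2).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0)
net.fit(X, y)

print(net.predict([[2.0], [3.0]]))  # should be close to [4, 9]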
You can find an excellent lesson here with a notebook example.
Also, because of this ability an ANN can map complex relationships, for example between an image and its labels. |
H: Using Cross Validation technique for a CNN model
I am working on a CNN model. As always, I used batches with epochs to train my model. When it completed training and validation, finally I used a test set to measure the model performance and generate confusion matrix. Now I want to use Cross Validation to train my model. I can implement it but there are some questions in my mind:
1- Why most CNN models don't use Cross Validation?
2- If I use Cross Validation, how can I generate the confusion matrix? Can I split dataset to train/test and then do cross validation on train set as train/validation (i.e., doing cross validation as train/validation except for the usual train/test) and at last use test set the same way? Or how?
AI: Question 1: Why do most CNN models not apply the cross-validation technique?
$k$-fold cross-validation is often used for simple models with few parameters and simple hyperparameters that are additionally easy to optimize. Typical examples are linear regression, logistic regression, small neural networks and support vector machines.
For a convolutional neural network with many parameters (e.g. more than one million) we just have too many possible changes in the architecture. What you can do is run some experiments with the learning rate, batch size, dropout (amount and position) and batch normalization (position). Training a convolutional neural network with a huge dataset takes quite a long time, so full hyperparameter optimization would just be total overkill. Often in papers, the authors try to improve on the results of other research papers. The goal is not to get better results by improving the chosen hyperparameters but rather to come up with new ideas that solve the given task with better accuracy or less computational effort.
Question 2: If I use cross-validation how can I generate confusion
matrix? can I split dataset to train/test then do cross-validation on
train set as train/validation (i.e. doing cross-validation as
train/validation except for the usual train/test) and at last use test
set the same way? or how?
In order to do $k$-fold cross validation you will need to split your initial data set into two parts. One dataset for doing the hyperparameter optimization and one for the final validation. Then we take the dataset for the hyperparameter optimization and split it into $k$ (hopefully) equally sized data sets $\mathcal{D}_1,\mathcal{D}_2,\ldots,\mathcal{D}_k$. For the sake of clarity let us set $k=3$. Then for each possible hyperparameter combination that we want to test we use $\mathcal{D}_1$ and $\mathcal{D}_2$ to fit our model and we use $\mathcal{D}_3$ to validate our model. Then we do the same with $\mathcal{D}_2$ and $\mathcal{D}_3$ and use $\mathcal{D}_1$ for validation. Then we do the same with $\mathcal{D}_1$ and $\mathcal{D}_3$ and use $\mathcal{D}_2$ for validation. We will get $3$ confusion matrices for every possible hyperparameter configuration. In order to derive a metric from these three results, we take the mean of these confusion matrices. Then we can scan through all averaged confusion matrices to select the hyperparameter configuration that was the best (you have to define what parts of the confusion matrix are important for your problem). Finally, we pick the 'best' hyperparameters and calculate the prediction performance on the final validation set. These performance metrics are the ones that you report.
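A minimal sketch of that inner loop (assuming scikit-learn and a hypothetical build_model function that constructs your CNN for one hyperparameter configuration and exposes fit/predict):

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

def cross_validated_confusion(X, y, build_model, k=3):
    """Average the confusion matrices over k folds for one hyperparameter configuration."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    matrices = []
    for train_idx, val_idx in skf.split(X, y):
        model = build_model()                 # fresh model for every fold
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[val_idx])
        matrices.append(confusion_matrix(y[val_idx], preds))
    return np.mean(matrices, axis=0)

You would repeat this for every hyperparameter configuration, compare the averaged matrices, retrain with the chosen configuration, and only then evaluate once on the held-out test set to report the final confusion matrix. |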
H: Supervised multiclass classification : is ANN a good idea ? or use other classifiers?
I have a problem deciding what to use since i'm just beginning to creating predictive models.
Let's say I have a training dataset with 5 or 6 features and a testing dataset. (With around 50k rows in training / 5k in testing). My samples are people that I would like to assign to types of products they would buy. (more than 2 classes).
I'd like to know the whole process of what to use, and based on what exactly. Also, is there a serious difference between the results rendered by an ANN and other classifiers on this type of classification?
Note: I have 10 possible classes in the output
Thanks in advance.
AI: In general, if you only have $5$ to $6$ scalar features, I would simply start with easy methods like logistic regression and discriminant analysis. I would guess that you should be able to get good results.
You should also look at the distribution of the scalar features. Maybe you can derive new features that help in separating the classes. A simple visual way to see if it is possible to separate the classes by a linear hyperplane is to use a principal component analysis (PCA for short) and extract 2 or 3 components. Then use these components to visualize your data points (maybe use a random sample from the training data set and repeat this three or more times to see if the trend is there in all random samples that you looked at). You should see whether the classes are well separable.
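A minimal sketch of that PCA check (assuming scikit-learn and matplotlib, with X being your feature matrix and y your class labels):

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize first so that no single feature dominates the components
X_scaled = StandardScaler().fit_transform(X)
components = PCA(n_components=2).fit_transform(X_scaled)

plt.scatter(components[:, 0], components[:, 1], c=y, s=5, cmap="tab10")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()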
If you see that your model performance is not good enough I would try out decision trees (these are very interesting as they allow you to see how the decisions of your classifier are made).
Depending on the model performance you could also use neural networks. I personally would rather try it with simpler models first. Neural networks are very powerful function approximators, but you will most likely not be able to extract some useful information about the relationship between the features and the classes of products. |
H: When to question output of model
I'm unsure of how to ask a question without making it seem like a code review question. At what point does one question whether they've actually implemented the algorithm and-or model correctly? Getting spot-on results is great and all, but seems highly suspect. Also, what checks can be done to ensure that the algorithm and-or model is being implemented correctly? The reason I'm asking is because I'm getting perfect classification and subsequently accuracy, precision, etc. w/ the implementation of SVM.
I am including the code, but feel free to ignore.
# Required imports (iris_df is assumed to be the Iris dataframe loaded elsewhere)
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# Make a copy of the df
iris_df_copy = iris_df.copy()
# Create a new column, labeled 'T/F', whose value will be based on the value in the 'Class' column. If the value in the
# 'Class' column is 'Iris-setosa', then set the value of the 'T/F' column to 1. If the value in the 'Class' column is
# not 'Iris-setosa', then set the value of the 'T/F' column to 0.
iris_df_copy.loc[iris_df_copy.Class == 'Iris-setosa', 'T/F'] = 1
iris_df_copy.loc[iris_df_copy.Class != 'Iris-setosa', 'T/F'] = 0
X_svm = np.array(iris_df_copy[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
y_svm = np.ravel(iris_df_copy[['T/F']])
# Split the samples into two subsets, use one for training and the other for testing
X_train_svm, X_test_svm, y_train_svm, y_test_svm = train_test_split(X_svm, y_svm, test_size=0.25, random_state=4)
# Instantiate the learning model - Linear SVM
linear_svm = svm.SVC(kernel='linear')
# Fit the model - Linear SVM
linear_svm.fit(X_train_svm, y_train_svm)
# Predict the response - Linear SVM
linear_svm_pred = linear_svm.predict(X_test_svm)
# Confusion matrix and quantitative metrics - Linear SVM
print("The confusion matrix is: " + np.str(confusion_matrix(y_test_svm, linear_svm_pred)))
print("The accuracy score is: " + np.str(accuracy_score(y_test_svm, linear_svm_pred)))
print("The precision is: " + np.str(precision_score(y_test_svm, linear_svm_pred, average="macro")))
print("The recall is: " + np.str(recall_score(y_test_svm, linear_svm_pred, average="macro")))
AI: You need to know what the outcome should be of a given test on a dataset before you try to test a new method on them. Ask yourself, 'What do I expect from this?'
Linear SVM finds a plane to cut through the data to best represent the difference between two sets.
If you have a look at what you are separating (Iris_setosa from Iris_virginica and iris_versicolor), you'll find that the clumps themselves are perfectly separated. You can draw a line easily on each graph you care to use, and that is what I have done in the picture below. If the clumps are perfectly separated, then the SVM will return a perfectly separated result.
By Nicoguaro - Own work, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=46257808
Test the SVM on separating virginica and versicolor to see how it does in a more difficult context. Or alternatively, just generate a dataset of your own from randomly placed Gaussian points.
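A minimal sketch of that harder test, reusing the dataframe and imports from the question (the two remaining classes overlap, so a perfect result is much less likely):

# Keep only versicolor vs. virginica, which overlap and are harder to separate
mask = iris_df_copy.Class != 'Iris-setosa'
X_hard = np.array(iris_df_copy.loc[mask, ['Sepal_Length', 'Sepal_Width',
                                          'Petal_Length', 'Petal_Width']])
y_hard = np.ravel((iris_df_copy.loc[mask, 'Class'] == 'Iris-virginica').astype(int))

X_tr, X_te, y_tr, y_te = train_test_split(X_hard, y_hard, test_size=0.25, random_state=4)
clf = svm.SVC(kernel='linear').fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te)))
print(accuracy_score(y_te, clf.predict(X_te)))

If the scores drop noticeably here, your pipeline is most likely implemented correctly, and the perfect setosa result simply reflects how well separated that class is. |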
H: Action Recognition for multiple objects and localization
I want to ask questions regarding the action detection on the video with proposed frames. I've used Temporal 3D ConvNet for the action recognition on video. Successfully trained it and can recognize action on videos.
When I do inference, I just collect 20 frames from the video, feed them to the model and it gives me the result. The point is that events on different videos are not similar in size. Some of them cover 90% of the frame, but some may cover only 10%. Let's take as an example that two objects collided; it can happen on a different scale, and I want to detect this action.
How do I provide the model with the exact position for action recognition, if it can happen on a different scale with different objects? What comes to mind is to use YOLO to collect regions of interest and feed the collected frames to the 3D ConvNet every time. But if there are a lot of objects, the speed will be very slow. How do I handle that?
Are there any end-to-end solutions for the action recognition with the object location proposal for the action recognition network?
I've already looked at papers and blogs for what people suggest, but couldn't find a solution for the localization issue, i.e. how to make sure the action recognition model gets the correct frames.
AI: Finding actions in videos happens to be a tricky task. I have no idea about the temporal 3D ConvNet, but in order to tackle a problem like this, I would apply a CNN to individual frames of the video and then feed the frame timeline into an LSTM layer in order to find the context of the video.
As the action being performed in the video covers anywhere from 10% to 90% of the frame, you can perform test-time augmentation on the video in order to detect the action with higher confidence. A similar approach can be found in this video by Google. |
H: Data Visualization with multiple dimension, and linear separability
I have a dataset of two classes with several features. How can I visualise such data using MATLAB to decide whether it is linearly separable or not?
AI: You basically need a t-SNE plot. t-SNE will convert the high-dimensional feature vectors (several features in your case) to 2D points, which you can then plot with matplotlib. While plotting, pass in the class of each data point to get a different colour for each class. Once you do this, you will be able to judge whether they are linearly separable.
The code goes as follows:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# features: array of shape (n_samples, n_features); class_labels: one class id per sample
tsne = TSNE(n_components=2, random_state=0)
transformed_data = tsne.fit_transform(features)
k = np.array(transformed_data)
plt.scatter(k[:, 0], k[:, 1], c=class_labels, zorder=10, s=0.4)
and the above code should produce an image as shown below
Since you're looking for MATLAB code, you can get the t-SNE embedding with the function tsne(x) (click on it for the documentation). A scatter plot is also pretty easy to do in MATLAB; the documentation is extensive!
vote me if i was able to help ;) |
H: What skills do I need to become a data scientist? And how to show them?
I have familiarized myself with the most important recommended concepts (linear algebra, analysis, Python, NumPy, Pandas, a bit of statistics, linear regression). For the last two, I don't know how deep it should go. I know what things mean and how to get them working in Python.
But the question is: what now? I guess I could argue that this is a starting point and I can apply to a bad data analysis or visualisation position if I learn Tableau and present myself well. But what would I do to even prove what I can do before an interview? Putting a notebook on GitHub where I imported a dataset, cleaned it a bit, did a .describe(), .plot() and a linear regression isn't very impressive nor interesting to anyone. So what should I do instead?
Also, this clearly isn't data science area yet. If I look at kaggle challenges, I either don't know what to do or think to myself "Clean data, LinRegression". So what should I take a look at next?
Note that I'm taking classes, but not in Data Science but in Chemistry right now.
AI: So you're still on the basics. William's answer is pretty good; I will list here a bit of stuff to learn, and where to learn it.
1 - You need the basics, that is already much more than you expected it to be:
Linear Algebra: knowing the best way of inverting a matrix might be useful for a computer scientist, but you're not aiming for that. You need to understand concepts and their meaning and effects such as:
Matrix Rank (For example, this could tell you, by an Autocorrelation matrix, that your data is still not enough for things like least squares.)
Meaning of Vector Spaces and basic linear transformations such as base change
Meaning of eigenvalues and eigenvectors
Calculus: also, focus on the meaning and understanding, computers can do most of the operations, even analytically
Derivatives and Integrals
Optimization
Signals and Systems: that might be a little bit biased of me (since I am a signal processing researcher), but learning how to model certain phenomena and how they behave may help you to solve problems; that's basically applied linear algebra and calculus (most things are). Aim for the really basic material. Signal processing is one of the research areas most affected by Data Science/Machine Learning, to the point that people were making surveys to change the name of the community in IEEE.
The above can be found on many, many books and intros and won't be difficult for you to search for them.
Statistics: Machine learning derives from statistics and this is essential. Actually, you can learn calculus from a statistical point of view instead of function optimization. The links in the subject names are free courses on Udacity or Udemy.
You will need to learn Descriptive Statistics, which will allow you to understand data.
And learning Inferential Statistics.
It is important to learn what you can do with classic statistics to avoid wasting computers on problems that could be solved easily. It is a good practice to model things in the simplest way possible and escalate to more complex models if needed.
Machine Learning: Machine learning models are your everyday tools as a Data Scientist:
You can start with a crash course on Google Developers and Intro to Machine Learning in Udacity.
Then, you can go deeper... NO! Not Deep Learning, deeper into Machine Learning, also on Udacity.
But you might want to take a more formal course such as Stanfords at some point, just to fill some voids.
Machine Learning Tasklist: You stopped at linear regression? So here are the most popular models waiting for you:
Linear and Logistic Regression: pretty basic
Decision Tree: basic but highly interpretable
Support Vector Machines: simple but powerful (and I love Kernel methods, shoot me for that)
Naive Bayes
k Nearest Neighbors
K-Means Clustering
Random Forests and Gradient Boosting (Ensembles): these are really powerful and might be interpretable if you don't let them grow without care
Also, you might need some dimensionality reduction tools such as:
Principal Component Analysis
Linear Discriminant Analysis
Then you can go for more complex Neural Networks:
Self-organizing-maps, Feed Forward, RNNs, CNN's and so on...
Note: CNN's are usual in Computer Vision applications but the only thing it asks of data is that it has been organized in a way that allows for meaningful correlations between data that is near each other. Example: a process with multiple sensors in a time series might benefit from a CNN.
Data Analysis: a Data Scientist must have a personal relationship with data. For any good relation, you will need to understand your loved one (but the relationship with data is usually toxic, hahaha). Udacity has a nice Intro to Data Analysis, also for free.
Learn to express yourself:
Make a YouTube channel and present small tutorials and classes to the community.
Try to answer Stack Exchange questions and help others, that will build your respect in the community and goodwill for when it is your time to ask. Also, this is a good way to practice expressing your ideas in a text.
Write a blog; it is a good way to have a notebook and also gain attention from the community.
You can check out hands-on books such as Data Smart and Data Science from Scratch. Data Smart is about getting insight from information, and that's mostly your job as a data scientist.
2 - Build Respect
Try creating packages and libraries and making them available on GitHub and sharing your relevant solutions.
Win Kaggle competitions, many companies take Kaggle seriously as .... and having a good score will get you nice positions. You don't need to win in first place by the way. Also, competitions are usually good examples of real-world problems that will get you that so needed experience that you can't get while you don't have a Data Scientist role.
Also, some competitions pay really well.
Explore Kaggle, share algorithms, read and try to improve on others and search for datasets that might be of your interest.
Making datasets is a bit exhausting but might be a way of making money while you're not ready.
3 - Get programming skills
Not only should you learn frameworks; understanding how things work and the basics of problem-solving must be put to the test every day. Also, making everything from scratch is fun and good for learning, but when you do your work you will need code with high maturity, checked hundreds of times all over the world.
You'll need some tools. Python is a great language for data science (since the community is active and it is free; MATLAB has a lot of nice tools and marvelous documentation but it is really expensive and quite slow).
Some top libraries:
NumPy is the most fundamental package, understand it well
Pandas for wrangling with data
Seaborn, Bokeh, Plotly, and Matplotlib for plotting stuff and helping you make good reports
SciKit-Learn this is usually the fastest way to test a machine learning algorithm
Theano is similar to NumPy but was constructed with Machine Learning in mind
Keras, this is a library for building Neural Networks really fast, it uses Theano or TensorFlow as backends
TensorFlow, PyTorch, and other deep learning related stuff.
Also, you might want to get some knowledge of JavaScript and libraries for acquiring data on the web.
4 - Go Deeper
You may never need deep learning, depending on the area you're going to apply but this is that good nuclear weapon you hope to never have to use but someday you may:
To understand it, start with Udemy
There is a nanodegree on Udacity
Also, remember DL is computationally intensive and you want to avoid needing it (since these are expensive)
5 - Finally: Career
Learning never stops and you will never stop learning new concepts, every single day.
Courses are long, take them at your own pace. Try getting the basics and learning how to use things, then go back and learn them for real.
Try to get a few certificates and posting them on your LinkedIn. Make a few projects and create online articles on your blog and on LinkedIn and Kaggle.
Try choosing something you can relate to while searching for work. DS covers a wide range of subjects and trying to get insight from things you understand is easier than trying to get insight from things that sound like random noise for you
Build a network of collaborators, help your colleagues and try building a vast network ranging from medicine to linguistics, they might tell you what you're doing wrong while looking at data not related to your field of expertise.
Finally, and this took a bit longer than I anticipated but: don't give up. This is a long journey but is absurdly rewarding, both financially and personally. And try not working alone, create a small group of people to work with and make some projects.
This is my longest answer to a Stack Exchange question. |
H: Creating new columns based on 3 column and create new data frame
Consider my data frame
rs123 T C 0 0 1 1 0 0 1 0 0 1 0 0
rs124 T C 0 0 1 0 0 1 0 0 1 0 0 1
rs125 A A 1 0 0 1 0 0 1 0 0 1 0 0
Similarly, I have 93 columns in total, excluding the first three.
I want to create my data as
And then transform into new data frame as below
For first row if 1 is present in column 1 then output should be TT
For first row if 1 is present in column 2 then output should be TC
For first row if 1 is present in column 3 then output should be CC
For more detail you can refer to the snip below.
Kindly help me find a solution using Python; it's very urgent.
Thanks in Advance.
AI: The question could have been framed better. Check out the code below; your final dataframe will be in output.
import pandas as pd
input_array = [["rs123", "T", "C", 0, 0, 1, 1, 0, 0, 1, 0, 0], ["rs124", "A", "G", 0, 0, 1, 0, 1, 0, 0, 0, 1]]
raw_pd = pd.DataFrame(input_array).astype(str)
def change(a):
    if list(a)[2] + list(a)[3] + list(a)[4] == "100":
        return list(a)[0] + list(a)[0]
    elif list(a)[2] + list(a)[3] + list(a)[4] == "010":
        return list(a)[0] + list(a)[1]
    else:
        return list(a)[1] + list(a)[1]
output = pd.DataFrame()
output['S1'] = raw_pd[[1, 2, 3, 4, 5]].apply(lambda x: change(x), axis = 1)
output['S2'] = raw_pd[[1, 2, 6, 7, 8]].apply(lambda x: change(x), axis = 1)
output['S3'] = raw_pd[[1, 2, 9, 10, 11]].apply(lambda x: change(x), axis = 1)
output['SNP'] = raw_pd[0]
Hope this helps ;) Mark this as the correct answer if you have no other doubts. |
H: How do I create a data set that has a set of features for multiple options, with one option being the expected outcome?
Most datasets I see are:
feature 1, feature 2, feature 3, outcome
Where outcome is binary e.g. if they are cancer positive outcome will be 1 and 0 if they don't have cancer.
How do I create a dataset where there are multiple outcomes and each possible outcome has a set of features for it?
e.g. I have a question with 3 possible answers:
"What organ pumps blood around the human body?"
A. Heart
B. Liver
C. Church Organ
And each answer has a set of features with one answer being correct. How would I display this in a csv file? I want to read it into an xgboost algorithm for training.
question, option1 and features, option2 and features, option3 and features, correct option
Many thanks for your help!
AI: The final feature vector would be a concatenation like (for multi-class prediction):
Question google count | option A google count | option B google count | option C google count | option C no. words | option A no. words | other features | label (1, 2, 3)
There is no need to put features related to option A close to each other (or in any particular order); they just need to be in the same columns for all rows, regardless of the label.
XGBoost parameters for multi-class classification are:
'objective': 'multi:softprob',
'num_class': 3
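A minimal training sketch with those parameters (assuming the xgboost Python package, with X being the concatenated feature matrix described above and y in {0, 1, 2} encoding which option is correct):

import xgboost as xgb

# X: one row per question, columns = concatenated features of all options
# y: 0, 1 or 2 -> index of the correct option
dtrain = xgb.DMatrix(X, label=y)

params = {
    'objective': 'multi:softprob',  # outputs one probability per class
    'num_class': 3,
}
booster = xgb.train(params, dtrain, num_boost_round=100)

probs = booster.predict(dtrain)  # shape (n_questions, 3)

The option with the highest predicted probability is the model's chosen answer. |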
H: multipying negated gradients by actions for the loss in actor nn of DDPG
In this Udacity project code that I have been combing through line by line to understand the implementation, I have stumbled on a part in class Actor where this appears on line 55 here: https://github.com/nyck33/autonomous_quadcopter/blob/master/actorSolution.py
# Define loss function using action value (Q value) gradients
action_gradients = layers.Input(shape=(self.action_size,))
loss = K.mean(-action_gradients * actions)
The above snippet seems to be creating an Input layer for the action gradients to calculate the loss for the Adam optimizer (in the following snippet), but where and how does anything get passed to this action_gradients layer? (https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam) The optimizer's get_updates function where the above loss is used:
# Define optimizer and training function
optimizer = optimizers.Adam(lr=self.lr)
updates_op = optimizer.get_updates(params=self.model.trainable_weights, loss=loss)
Next we get to train_fn, a K.function-type function (also in class Actor):
self.train_fn = K.function(
inputs=[self.model.input, action_gradients, K.learning_phase()],
outputs=[],
updates=updates_op)
where action_gradients is the gradient of the Q-value w.r.t. the actions, coming from the local critic network (not the target critic network).
The following are the arguments when train_fn is called:
"""
inputs = [states, action_gradients from critic, K.learning_phase = 1 (training mode)],
(note: test mode = 0)
outputs = [] are blank because the gradients are given to us from critic and don't need to be calculated using a loss function for predicted and target actions.
update_op = Adam optimizer(lr=0.001) using the action gradients from critic to update actor_local weights
"""
So now I began to think that the formula for deterministic policy gradient here: https://pemami4911.github.io/blog/2016/08/21/ddpg-rl.html
would be realized once the gradient of the Q-values w.r.t. actions were passed from the critic to the actor.
I think that the input layer for action_gradients mentioned at the top is trying to find the gradient of the output of actor w.r.t. to parameters so that it can do the multiplication pictured in the photo. However, to reiterate, how is anything passed to this layer and why is the loss calculated this way?
Edit: I missed a comment on line 55
# Define loss function using action value (Q value) gradients
So now I know that the action_gradients input layer receives the action gradients from the critic.
Apparently it is a trick used by some implementations such as Openai Baselines: https://stats.stackexchange.com/questions/258472/computing-the-actor-gradient-update-in-the-deep-deterministic-policy-gradient-d
But still, why is the loss calculated as -action_gradients * actions ?
AI: This answer is not mine but a reply I received on Quora here: https://www.quora.com/Why-is-the-loss-for-DDPG-Actor-the-product-of-gradients-of-Q-values-actions/answer/James-MacGlashan?filter=&nsrc=2&snid3=4129828389
Hopefully it helps someone rather than me just deleting the question and I am sorry for my lack of math jax knowledge.
Let’s forget about DDPG, Q-functions, actors, and actions for a minute. I know, know, blasphemy, but bear with me for a moment and we’ll get back to that.
Let’s just talk about loss functions. We’re all good with loss functions, right? They’re the backbone of our standard supervised deep learning literature. That is, in supervised learning, what we do is we define a loss function that defines how bad a prediction is from what it should be.
As a grounding example, let’s revisit the L2 loss we use for a regression problem. We define this loss as:
$$L_2(p, l) = \frac{1}{2}(p - l)^2$$
Where $p$ is a prediction from our neural net and $l$ is the "label": the value the prediction should have been. If we can tune our neural net parameters so that this loss is always zero, we win! In other words, we want to minimize this loss function.
And how do we minimize it? The standard way these days is to use some flavor of stochastic gradient descent (SGD). And to do that, what we want to do is differentiate our loss in terms of our neural-net parameters $w$. If we used vanilla SGD, we'd do:
$$w_{t+1} = w_t - \alpha \nabla_w L_2$$
Okay, but there’s a catch here, our loss function was in terms of
p
, not
w
. However, we describe our neural net predictions as a function of some input and the neural-net parameters w:
p
(
x
,
w
)
for which we know how to compute
∇
w
p
. Since we know that, we can use the chain rule of differentiation, which says:
$$\nabla_w L_2 = \nabla_w p \, \nabla_p L_2$$
Where, since $w$ is a vector, $\nabla_w p$ will refer to its (transposed) Jacobian matrix.
In our modern era of deep learning libraries, you typically don't do that yourself. Instead, your library will use the autograd/backpropagation algorithm to compute $\nabla_w L_2$ for you, and it will do it by decomposing it slightly differently than we did above.
Despite that fact that modern autograd libraries will do things slightly differently, it will be important to see that we can express the gradient of our loss in terms of neural net parameters in that way.
Back to Actor Critic
Okay, now that we’ve refreshed our memory about loss functions and what applying the chain rule to a loss function looks like, let’s get back to Actor Critic and DDPG.
Let’s ask our selves: What do we really want actor critic to do?
Suppose someone handed you the actual Q-values for some policy. How can you improve that policy? Simple: by making a new policy that selects actions that maximize the Q-values you were handed. This is the foundation of RL. It’s why in Q-learning you set your policy to greedily select the action with the max Q-value (and maybe add some noise for exploration reasons).
Okay, lets assume I have a Q-function: how do I find a policy that maximizes the Q-function if the actions are continuous?? I can’t do what Q-learning does by simply evaluating each action and choosing the max, because there are an infinite number of actions to evaluate!
Wait a second though. We know how to maximize a function over continuous values. We’ve been doing it for ages in supervised learning! Note that maximizing a function is the same as minimizing the negative value of that function. That is:
$$\arg\max_x f(x) = \arg\min_x -f(x)$$
Okay cool, so if I want to maximize some function, I can just use regular SGD on the negative value of that function.
From that, we now have an insight: what if we think of $-Q(s, a)$ as a loss function, where the actions $a$ are "predictions" from some neural net: the actor. In that case, we can use good ol' SGD to train our actor to choose actions that maximize the Q-values.
To do so of course requires that we have a representation of Q that we can differentiate. But if we simultaneously train a neural net to estimate Q, and we believe its estimates are good**, then we can easily use it as the “loss” function for our actor!
Indeed, if $-Q$ is our loss function, with actions $a$ acting as our "predictions" from our neural-net actor model $\mu$, let's substitute that back into the chain rule expression for loss functions that we wrote earlier and with which we're familiar:
$$\nabla_w -Q = \nabla_w \mu \, \nabla_a -Q$$
And there it is!
(You may note that the DDPG paper reverses the order of the multiplication from the above. The order really just depends on what convention the paper takes the gradients and Jacobian matrices to mean. If you go back to the original deterministic policy gradients paper, you’ll see they write it out as we did above.)
So why doesn’t DDPG just say to use autograd on -Q like we do with loss functions ?
Okay, so now you’re probably asking yourself:
Why didn’t the paper just say “use autograd on -Q”???
After all, in supervised learning papers we never write out the chain rule in that way, we just assume practitioners have an autograd library and only write the forward version of the loss.
The reason is if you’re not careful, running pure autograd will burn you!
In DDPG, recall that we're training two neural networks: Q and $\mu$. If you simply feed $\mu$ into Q and run autograd, it's going to produce gradients on the parameters of Q in addition to the parameters of $\mu$! But when optimizing the actor, we don't want to change the Q function! In fact, if we did, it would be really easy to make any actor maximize Q by making Q always output huge values!
To run autograd through Q would be like letting a supervised learning algorithm change the values of labels, which clearly is not right!
You can’t simply “block gradients” on Q either, because then no gradients will flow back onto the actor.
So instead, you simply have to make sure you apply the chain rule through Q to $\mu$, without adjusting any of the parameters of Q. One way to do that is to compute the gradients of each manually and multiply them, as written in the paper.
There are other ways to do it in autograd libraries too, but now you’re starting to get really specific about which library you’re using. So for an academic paper, it’s better to simply write out precisely what gradient should be computed for the actor, and then let the practitioner decide the best way to compute it in their library of choice.
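For illustration only, here is a minimal sketch of one such library-specific way (assuming TensorFlow 2 and tf.GradientTape, not the Keras 1-style code from the question): the "loss" is $-Q(s, \mu(s))$, but the gradient is taken and applied only with respect to the actor's variables, so the critic's parameters stay untouched:

import tensorflow as tf

# actor and critic are assumed to be tf.keras models built elsewhere:
# actor(states) -> actions, critic([states, actions]) -> Q-values
def actor_update(actor, critic, actor_optimizer, states):
    with tf.GradientTape() as tape:
        actions = actor(states, training=True)
        # Negative Q as the "loss"; the chain rule flows through the critic...
        actor_loss = -tf.reduce_mean(critic([states, actions], training=True))
    # ...but gradients are taken (and applied) only w.r.t. the actor's weights,
    # so the critic's parameters are never modified here.
    grads = tape.gradient(actor_loss, actor.trainable_variables)
    actor_optimizer.apply_gradients(zip(grads, actor.trainable_variables))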
** This assumption that a neural net’s estimates of the Q-function are good is really important. Because it’s an estimate, it will have errors, and a limitation of the DDPG algorithm is that your actor will exploit whatever errors exist in your neural net’s estimate of Q. Consequently, finding ways to ensure the Q-estimate is good is a very important area of work. |
H: getting the weights of intermediate layer in keras
I have an image dataset with 376 classes; each class has 15 pictures and corresponds to a person. I would like to get the feature vector that corresponds to each person.
What I have done is, after I compiled the model I then used this link
as a reference to get the weights of the last convolutional layer. However, when I do this, I get the error:
InvalidArgumentError: You must feed a value for placeholder tensor 'conv_layer' with dtype float and shape [?,19,19,360]
How can I resolve this issue?
Here is the code that I have done so far:
train_data = np.array(train_data, dtype=np.float32)
test_data = np.array(test_data, dtype=np.float32)
train_data = train_data / 180 # to make the array values between 0-1
test_data = test_data / 180
train_label = keras.utils.to_categorical(train_label, 376)
test_label = keras.utils.to_categorical(test_label, 376)
# CNN MODEL
model = Sequential()
model.add(Conv2D(180, (3, 3), padding='same', input_shape=(180, 180, 3),
activation="relu")) #180 is the number of filters
model.add(Conv2D(180, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Conv2D(360, (3, 3), padding='same', activation="relu"))
model.add(Conv2D(360, (3, 3), activation="relu"))
conv_layer = model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
flatten_layer = model.add(Flatten())
model.add(Dense(496, activation="relu"))
model.add(Dropout(0.5))
dense_layer = model.add(Dense(376, activation="softmax"))
#compiling the model
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
model.fit(
train_data,
train_label,
batch_size=32,
epochs=40,
verbose = 2 ,
validation_split=0.1,
shuffle=True)
# getting intermediate layer weights
get_layer_output = K.function([model.layers[0].input],
[model.layers[11].output])
layer_output = get_layer_output([conv_layer])[0]
AI: The easiest way to create a truncated output from a network is to create a sub-network of it and apply the weights of your trained network. The following example is a modification of what you have shown up there, but it will guide you.
Network you want to train originally
model = Sequential()
model.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
activation="relu"))
model.add(Conv2D(10, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(5, activation="softmax"))
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(
train_data,
train_label)
Now create a subnetwork from which you want the outputs, like from above example
model_new = Sequential()
model_new.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
activation="relu"))
model_new.add(Conv2D(10, (3, 3), activation="relu"))
model_new.add(MaxPooling2D(pool_size=(3, 3)))
model_new.add(Dropout(0.25))
model_new.add(Flatten())
model_new.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['mse'])
# You need to call fit on randomly created arrays, just to initialise the
# weights. Anyway, you will be replacing them with the original ones from above.
model_new.fit(train_data, y=np.random.rand(40, 3610))
Now take weights from the first trained network and replace the weights of the second network like
model_new.set_weights(weights=model.get_weights())
You can check whether the weights are changed in the above step by actually adding these check statements like
print("Are arrays equal before fit - ",
any([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))
model_new.set_weights(weights=model.get_weights())
print("Are arrays equal after applying weights - ",
all([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))
This should yield
Are arrays equal before fit - False
Are arrays equal after applying weights - True
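As an alternative sketch (assuming the Keras functional API is available in your setup), you can also build a truncated model directly on top of the trained one, without re-fitting or copying weights; adjust the layer index to the output you want:

from keras.models import Model

# Reuse the trained model's input and cut it at the layer you want,
# e.g. index 8 is the Flatten layer in the model from the question
feature_extractor = Model(inputs=model.input,
                          outputs=model.layers[8].output)

features = feature_extractor.predict(test_data)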
Hope this helps. |
H: What is Coarse-to-Fine in the context of neural networks?
I have read many papers that mention coarse-to-fine as a technique in deep learning, but I could never figure out what exactly they mean. Is it related to multiscale inference, where they use coarse and fine input images?
AI: "Coarse to Fine" usually refers to the hyperparameter optimization of a neural network during which you would like to try out different combinations of the hyperparameters and evaluate the performance of the network.
However, due to the large number of parameters AND the big range of their values, it is almost impossible to check all the available combinations. For that reason, you usually discretize the available value range of each parameter into a "coarse" grid of values (e.g. val = 5, 6, 7, 8, 9) to estimate the effect of increasing or decreasing the value of that parameter. After selecting the value that seems most promising/meaningful (e.g. val = 6), you perform a "finer" search around it (e.g. val = 5.8, 5.9, 6.0, 6.1, 6.2) to optimize even further.
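A minimal sketch of that two-stage search for a single hyperparameter (plain Python, with a hypothetical evaluate function that trains the network and returns a validation score):

import numpy as np

def coarse_to_fine(evaluate, coarse_values, fine_radius=0.5, fine_steps=5):
    """Pick the best coarse value, then search a finer grid around it."""
    best_coarse = max(coarse_values, key=evaluate)
    fine_values = np.linspace(best_coarse - fine_radius,
                              best_coarse + fine_radius, fine_steps)
    return max(fine_values, key=evaluate)

# e.g. coarse_to_fine(evaluate, coarse_values=[5, 6, 7, 8, 9])

In practice the same idea is applied to several hyperparameters at once, often with random sampling instead of a regular grid. |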
H: How to choose between classification Vs regression approach?
I need to predict the profitability of the products of a retailer. I can either predict the absolute value of the profit the products will make (continuous outcome) or predict whether the products will make a profit or not (categorical outcome). Is there any advantage of approaching this as a classification problem rather than a regression problem or vice versa? Why?
AI: I would say that the main thing is what information you think is the most valuable. There are some differences.
The benefit with a regressor is that it can give you a number of how much profit you can expect. That is obviously useful when considering a new product. The drawback is that it cannot tell you how likely you are to see this profit. It will not tell you if a product is risky.
Many classifiers can tell you with what probability it thinks a product will be profitable. This can alert you to potentially risky products. But the drawback is that you won't get to know how big the potential profit is.
As I said, it depends what you value most. But nothing is stopping you from training two models and using information from both of them to make your decisions. |
H: Game theory in Reinforcement Learning
In one of the recent blog posts by DeepMind, they have used game theory in the AlphaStar algorithm.
Deep Mind Alpha-Star:
Mastering this problem requires breakthroughs in several AI research challenges including:
Game theory: StarCraft is a game where, just like rock-paper-scissors, there is no single best strategy. As such, an AI training process needs to continually explore and expand the frontiers of strategic knowledge.
Where the game theory is applied when it comes to reinforcement learning?
AI: Where the game theory is applied when it comes to reinforcement learning?
It is not used directly in this case, and AlphaStar makes no breakthroughs in game theory. The blog's wording here is not super precise.
The point of the quote was to explain the extra challenge, which occurs in many games where opponents can react to each other's choices and there is often a counter-strategy to any given policy. Rock-paper-scissors is the simplest game that has this challenge, but it is common in many strategy games, as the game designers typically don't want a single best strategy to dominate the game, often going to some lengths to balance options in the game so that more of their game content is used and to keep a level of uncertainty and excitement in the game-playing community.
The actual breakthroughs in regards to the quote in your question, are in finding ways to perform the kinds of long-term exploration that allow for different high-level strategies. Many RL algorithms perform relatively local exploration which would be too weak to keep track of entirely different strategies and decide when to use them.
The way that Deep Mind team approached this is explained in their blog:
To encourage diversity in the league, each agent has its own learning objective: for example, which competitors should this agent aim to beat, and any additional internal motivations that bias how the agent plays. One agent may have an objective to beat one specific competitor, while another agent may have to beat a whole distribution of competitors [ . . . ]
So Deep Mind have not resolved any of that at a theoretical level, and have not used game theory in any direct sense. However, they have identified the kind of game theory scenario that applies, and have used that in the design, making steps in an engineering sense towards practical solutions.
Other solutions in RL might also apply, such as hierarchical RL for capturing high-level actions as strategies to inform lower-level decisions, or using slow changing noise functions to drive exploration (as opposed to something which changes faster such as epsilon-greedy).
In general, game theory is related to reinforcement learning, in that both construct a formal view of optimising utility:
Game theory is useful for analysing multi-agent scenarios, but generally analyses optimal policies for relatively simple single-step or repeated games.
Reinforcement learning is well-described for single agents, and deals well with sequential decision making, but does not have quite as much material for dealing with competitive and co-operative multi-agent environments - typically treating other agents as "part of the environment".
There is enough cross-over between the two theories that they can be used to inform each other in an intuitive way, as Deep Mind have done here.
In more tractable game environments, game theory is able to determine stable and effective policies - for instance in rock-paper-scissors, the Nash equilibrium policy (one which players will be punished for moving away from) is randomly selecting each action with 1/3 probability. Note this is not necessarily the optimal policy - that depends on the opponent's behaviour - but it is an expected stable outcome for two rational and capable opponents to arrive at.
If you develop a rock-paper-scissors learning bot using RL, and it learns this strategy through self play, then you can be relatively happy that your learning algorithm worked. That would be one way of using RL and game theory together.
Deep Mind don't know the Nash equilibrium of StarCraft strategies, and in fact the strategies are only loosely defined in terms of low-level actions, so it is not clear whether finding it is even possible. The analysis of strategies supplied in the blog (e.g. a "rushing" strategy) is based on observations of the game, adding a human narrative to help understand what is going on. In practice, it is the sampling of opponents, each preferring a different strategy or set a particular goal in the game, that trains a single neural-network-based bot that has experience of countering multiple strategies and can express actions that optimally beat any strategy that matches patterns it has learned in self-play and observes an opponent using. |
H: Not able to interpret decision tree when using class_weights
I'm working with an imbalanced dataset. I'm using a decision tree (scikit-learn) to build a model. For explaining my problem I've taken iris dataset.
When I'm setting class_weight=None, I understood how the tree is assigning the probability scores when I use predict_proba.
When I'm setting class_weight='balanced', I know its using target value to calculate class weights but I'm not able to understand how the tree is assigning the probability scores.
import sklearn.datasets as datasets
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.externals.six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
iris=datasets.load_iris()
df=pd.DataFrame(iris.data, columns=iris.feature_names)
y=iris.target
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.33, random_state=1)
# class_weight=None
dtree=DecisionTreeClassifier(max_depth=2)
dtree.fit(X_train,y_train)
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png()) # I use jupyter-notebook for visualizing the image
# printing unique probabilities in each class
probas = dtree.predict_proba(X_train)
print(np.unique(probas[:,0]))
print(np.unique(probas[:,1]))
print(np.unique(probas[:,2]))
# ratio for calculating probabilities
print(0/33, 0/34, 33/33)
print(0/33, 1/34, 30/33)
print(0/33, 3/33, 33/34)
The probabilities assigned by the tree and my ratios (determined by looking at tree image) are matching.
When I use the option class_weights='balanced'. I get the below tree.
# class_weight='balanced'
dtree_balanced=DecisionTreeClassifier(max_depth=2, class_weight='balanced')
dtree_balanced.fit(X_train,y_train)
dot_data = StringIO()
export_graphviz(dtree_balanced, out_file=dot_data,filled=True, rounded=True, special_characters=True, feature_names=X_train.columns)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
I'm printing unique probabilities using below code
probas = dtree_balanced.predict_proba(X_train)
print(np.unique(probas[:,0]))
print(np.unique(probas[:,1]))
print(np.unique(probas[:,2]))
I'm not able to understand (come-up with a formula) how the tree is assigning these probabilities.
AI: We should consider two points. First, class_weight='balanced' does not change the actual number of samples in a class, only the weight of class $w_{c_i}$ is changed. Second, the [un-normalized] probability of class $c_i$ in each node is calculated as
$w_{c_i} \times (\text{number of samples from } c_i \text{ in that node} \; / \; \text{size of } c_i)$
For example, in balanced mode, the [un-normalized] probability of $c_3$ in the green leaf is calculated as
$33.\bar{3}\% \times (3 / 36) ≈ 2.778\%$
compared to $36\% \times (3 / 36) = 3\%$ in unbalanced mode.
The probability (normalized) in balanced mode would be:
$100 \times 2.778/(2.778+32.258) \% = 7.9289\%$
Remark. The word "probability" is not applicable to each isolated node except for the root node. This is the un-normalized version of the probability used to classify a data point inside a leaf, though the normalization is not required for comparison. However, the notion is applicable to the aggregate of nodes at the same level and the leaves from upper levels (i.e. set of all samples). |
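If you want to inspect the weights themselves, scikit-learn's 'balanced' mode computes them as n_samples / (n_classes * np.bincount(y)); a minimal sketch using the training labels from the question:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
print(dict(zip(classes, weights)))  # the per-class weights the balanced tree applies

These are the per-class weights applied to the sample counts in each node. |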
H: How to approach a machine learning problem?
I'm a beginner in machine learning, with no real statistical background (just basic knowledge). I comprehend half of what is said on forums about statistical methods and techniques for normalizing data and putting up plots to see data distributions. Anyway, I managed to create multiple predictive models following steps from people on Kaggle. However, I would like to receive some knowledge on the steps that must be taken while building a model. I'll use an example I worked on to demonstrate my steps, and if you spot a beginner's mistake (and you will), just point it out.
I started by viewing my data ( combine is the test set + trainset minus the target column to predict )
print(combine.isnull().sum())
age 0
cmp 42855
code_dept 654
id_opportunite 0
mca 4539
nb_enfants 1624
nom_organisme 0
situation 58
Okay, some potential predictors have quite a few nulls!
NOTE :
Situation : type = string , indicating the social situation of the person.
Nb_Enfants : type = int , indicating number of children.
cmp : type = string, indicating the name of the company this person was contracted with ( which is not a common information to get from a person over a phone call, which explains the huge number of nulls on that column )
nom_organisme : type=string , name of the call center that referred this person to us ( might be a good predictor; some centers statistically send people more likely to sign than other centers, according to my plot )
Age : type=int, no explanation needed I suppose
code_dept : type=int, refers to id's of departments , a department is a slice of a big town so these are id's to locations. ( it should logically be a good predictor since the chance of signature is higher in some departments than others )
Question 1: is there a conclusion to draw from this ? or a predictor to eliminate like 'cmp' ? or a must-do like replace the nulls or predict them?
These are most of the features I'll be using. I want to predict whether a person will sign with the company or not. I omitted the sign column, but it's a binary value.
This is my output distribution. just below
Signature: 11674 (17.7 percent), Non Signature: 54250 (82.3 percent), Total: 65924
So I have 2 output classes, 0 and 1, and they are imbalanced 17% to 82%!
Question 2: Should I use SMOTE in this case or not? And are there other conclusions to draw from this?
I started by removing what you guys call outliers, after of course making sure they were random noise and mistakes in the data capture. (Yes, I made sure it was noise rather than a pattern that needed to be taught to the model.)
Then, i visualized the columns one by one and noted the behavior against the sign column. These are some examples :
This shows the percentage of people who signed in Yellow and those who have not in Blue per number of children ( from 0 to 5 ).
Also i figured a value_counts was necessary!
df3.nb_enfants.value_counts()
Question 3: Are there other conclusions to draw from this plot, or other ways of extracting information from the possible feature? And can I decide just by looking at this plot whether nb_enfants is a good predictor or not?
Then I went on and 'manually' picked features I liked (that's how I roll).
Just kidding, I picked features that would logically have an impact on the prediction. (I know there are ways to spot features unseen to the naked eye, but I think I'm not at that statistical level of displaying data and extracting information from it; any advice on how to do that is welcome.)
Question 4 : What's a better indicator to a good predictor in my case?
I also manually encoded age into 7 classes, situation (which contains strings and needed encoding anyway) into 6 classes, and created 2 features based on code_dept (where the person lives) and nom_organisme (which company referred this person to our company), where I assign a class to each code_dept or nom_organisme based on the percentage of people who signed.
I didn't like the dummy variables since I'm deploying this model later on a server, to be used as a webservice, and I need to encode the "people" that get sent over requests the same way I encoded my train dataset; the only way I managed to do that is static encoding via IFs in a function that I apply to the "person" that comes through.
Question 5: I know static encoding is bad for maintenance or updating in general. Is there a way to do that webservice transformation without using static encoding? Also, is it recommended at all to use static encoding on every feature?
Question 6: Any remarks on the data?
I set up a correlation matrix for some features by doing this :
plt.figure(figsize=(16,14))
foo=sns.heatmap(train.drop(['id_opportunite','mca','code_dept','nom_organisme'],axis=1).corr(), vmax=0.6,square=True, annot=True)
Question7 : What conclusions one should draw from this matrix?
I went with :
X = train[['age','classe_dep','situation','nb_enfants','classe_organisme']]
y = train.loc[:,'signer']
Result on decision tree :
The maximum score of the Decision Tree algorithm is: 84.11% for max_leaf_nodes = 52
Result on the test set : 75% accuracy.
Question8 : Any general thoughts or remarks? Thanks!
An answer should verify my steps and give some remarks on one or more of my questions. Thanks!
AI: Plenty of questions there. I will answer about the accuracy one:
75% is better than random chance and might be useful, but you need to consider what is relevant for your application.
For example, suppose you are dealing with a security issue. Denying access to someone who is entitled to it is less damaging than allowing access to someone who is not.
If you want to reduce the number of phone calls needed to sell a product, you want a model that tells you to call the maximum number of potential clients; even a 10% reduction in useless calls is a good model with good profit, as long as you don't make it absurdly expensive to keep operating.
H: sklearn and pandas in AWS Lambda
I made a front end where I would like to make REST calls to an AWS Lambda interfaced with AWS API Gateway.
I dumped my model as a pickle file (and so my encoders) which I initially trained locally. I then stored these files in a S3 bucket.
The problem is that I cannot import libraries such as pandas and sklearn to make model predictions because the lambda console is unable to find them.
Does anyone have any suggestions to help solve this issue?
AI: You need to create a deployment package which includes the packages you want to use in Lambda (sklearn and pandas).
You can then either upload that deployment package to S3 and import it in the Lambda function, or upload it within the Lambda function itself.
The Lambda function code will have to be written outside of AWS Lambda and be included in the deployment package. Here's a guide on how to do it. |
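For illustration, here is a hedged sketch of what the Lambda function code might look like once the package is in place; the bucket name, object key, and expected JSON fields are placeholders, and sklearn/pandas are assumed to be available through the deployment package or a layer:

import json
import pickle

import boto3
import pandas as pd

s3 = boto3.client('s3')

def _load_model():
    # the bucket/key below are placeholders for wherever you stored the pickle
    obj = s3.get_object(Bucket='my-model-bucket', Key='model.pkl')
    return pickle.loads(obj['Body'].read())

MODEL = _load_model()   # loaded once per container and reused across invocations

def lambda_handler(event, context):
    # assumes API Gateway passes a JSON body with one record of named features
    features = pd.DataFrame([json.loads(event['body'])])
    prediction = MODEL.predict(features)
    return {'statusCode': 200,
            'body': json.dumps({'prediction': prediction.tolist()})}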
H: On the choice of LSTM input/output dimension for a spatio-temporal problem
I am using LSTM neural networks from (R)Keras for a matter of spatio-temporal interpolation. I manage to get the network to output predictions but the results are not outstanding (very little improvement on validation loss). I am wondering about the shapes of training data and labels.
Say I have 50 dates of measurements of the variable of interest $y$, accompanied by about 100 covariates $x$ (spatial coordinates, temperatures...). Each date has 24 measurements of $y$, so nsamples=50*24=1200. If I set the timestep hyperparameter of LSTM to e.g. 3, and use a moving window of step 1, I have therefore an input table $X$ of shape (1200, 3, 100).
On the other hand, should the labels table $Y$ be of dimension (1200, 3) or (1200, 1) ? More precisely, which of the following describes the problem the best:
$$
(X_{n,t-2} ; X_{n,t-1} ; X_{n,t}) \rightarrow (Y_{n,t-2} ; Y_{n,t-1} ; Y_{n,t})
$$
$$
(X_{n,t-2} ; X_{n,t-1} ; X_{n,t}) \rightarrow (Y_{n,t} ; Y_{n,t} ; Y_{n,t})
$$
$$
(X_{n,t-2} ; X_{n,t-1} ; X_{n,t}) \rightarrow Y_{n,t}
$$
$$
(X_{n,t-2} ; X_{n,t-1} ; X_{n,t}) \rightarrow Y_{n,t+1}
$$
Or are they all plausible ways of addressing slightly different problems? As I said, I'm trying to spatially interpolate $Y$ for the 50 dates of measurements, as well as predicting $Y$ for the year(s) to come. So I expect one is more relevant than the others but I have no clue on which one.
I hope this is understandable as I clearly miss some technical vocabulary here.
AI: On time-series models
All models that you have mentioned are correct and practical depending on the problem (the index $n$ is not required). The second one, however, produces redundant results, which is a waste of computation. Even
$$
(X_{t} ; X_{t+1} ; X_{t+2}) \rightarrow (Y_{t-1})
$$
is correct, if you are fitting on an archive and want to predict a year given the covariates from the next three years.
But only the last model
$$
(X_{t-2} ; X_{t-1} ; X_{t}) \rightarrow (Y_{t+1})
$$
is a forecasting model. So in general, if you want to interpolate into the next $k$-th year from now $t$, you should use:
$$
(X_{t-2} ; X_{t-1} ; X_{t}) \rightarrow (Y_{t+k})
$$
or
$$
(X_{t-2} ; X_{t-1} ; X_{t}) \rightarrow (Y_{t+1},...,Y_{t+k})
$$
Even a better model that takes advantage of known $Y$'s in the past would be:
$$
(X_{t-2}|Y_{t-2} ; X_{t-1}|Y_{t-1} ; X_{t}|Y_{t}) \rightarrow (Y_{t+k})
$$
where $|$ denotes vector concatenation to produce a 100 + 1 dimensional vector for each known year.
As a personal opinion, for the time-series prediction task, 24 data points per year is very small compared to the dimension of $X$, which is 100. Treating this as 1200 samples for an $X \rightarrow Y$ regression (ignoring time) is more practical; if selecting 10 of the 100 covariates is possible, even better.
Because of the small data set, I would suggest:
$(Y_{t-m} ;...; Y_{t-1} ; Y_{t}) \rightarrow (Y_{t+k})$ for time series prediction, and
$X \rightarrow Y$ regression for estimating the relation between X and Y.
Relation to LSTM and RNN
If we use LSTM/RNN to model time-series, they would be stateful. That is, when input $X_{t-2}$ is fed to an LSTM, it keeps an internal state (hidden state) to be combined with the next input $X_{t-1}$ and so on. Regarding the input/output dimension, here is an RNN animation from a post on medium by Raimi Karim that shows an arbitrary step among 3 steps of feeding $(X_{t-2} ; X_{t-1} ; X_{t})$ to the network:
As you see, dimension and number of inputs are independent of output. We can feed 5 inputs $X_{t-4}$ to $X_{t}$, each 100 dimension (100d) and receive a 1d output by setting the dimension of hidden states to 1d, or setting it to 10d and use an extra dense layer at the end to convert 10d to 1d, or receive a 50d output, or a 150d (three 50d) output, etc.
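As a concrete sketch of these shapes, in Python Keras (the R interface takes the same arguments), using the numbers from the question with an arbitrary hidden size of 32:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# input shape is (timesteps, features) = (3, 100); the samples axis is implicit
model.add(LSTM(32, input_shape=(3, 100)))   # returns only the last hidden state
model.add(Dense(1))                         # 1-d regression output, e.g. Y_{t+1}
model.compile(loss='mse', optimizer='adam')
model.summary()                             # X: (None, 3, 100) -> Y: (None, 1)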
Word "stateful" in Keras (source)
LSTM and RNN are stateful by definition, this [badly named] variable in Keras means
If stateful=True, the last state for each sample at index i in a batch will be
used as initial state for the sample of index i in the following
batch. (François Chollet)
For example, if each batch has 24 samples indexed from 0 to 23 (each sample could have the form $(X_{t-2}, X_{t-1}, X_{t}, Y_{t+1})$), then the last hidden state $h$ from the 8th sample will be used as the initial hidden state for the 8th sample in the next batch. Except for special cases where there is a temporal order between batches and their samples, this must be set to False.
H: Which is better: GPT or RelGAN for text generation?
Based on my understanding, gpt or gpt-2 are using language model loss to train and generate text, which do not contains GAN.
So which is better: GPT vs RelGAN/LeakGAN/SeqGAN/TextGAN
I am so confused about this question.
AI: According to [Caccia et al., 2018], in general textual GANs are no rival for LMs regarding several quality measures. These are the conclusions of the paper:
This research demonstrates that well-adjusted language models are a remarkably strong baseline
and that temperature sweeping can provide a very clear characterization of model performance. A
well-adjusted language model outperforms the considered GAN variants as evaluated on both local,
and more surprisingly, global metrics of quality and diversity. Our temperature sweeping framework
shares characteristics with a Receiver Operating Curve. Analogously, if one needed a single scalar to
compare NLG models, one could compute area under the curve and seek the model with the smallest
value (lower is better for our considered metrics).
GAN-based generative models have been proven effective on real-valued data, however, but there
exist many difficult pernicious issues of moving to discrete data. These issues must be overcome
before they will improve over the strong MLE baselines. On the datasets and tasks considered,
potential issues caused by exposure bias were less than the issues of training GANs in discrete data.
GAN training may prove fruitful eventually, but this research lays forth clear boundaries that it must
first surpass.
This way, OpenAI's GPT and GPT-2 may be considered superior in text generation quality to current textual GANs. |
H: How to calculate which word fits the best given a context and possible words?
I have this task for research purposes and searched a while for a framework or a paper which already took care of this problem.
Unfortunately I don't find anything which helps me with my problem.
I have a sentence like
if the age of the applicant is **higher** than 18, then ...
and a list of words like
higher, bigger, greater, wider ...
which are all possible candidates for that position.
Now I want to find out which of the given words fits best at the predefined position in the sentence.
The best fitting word in this example would be 'greater', but 'higher', for example, would also be fine.
In my specific case, I want to show an error message if someone writes 'wider', because it doesn't make sense in this semantic context.
I hope that I explained my problem well enough.
AI: There are two options:
CBOW. Modify the Word2Vec CBOW code to save the whole trained model (current implementations only persist the embedding layer).
CBOW Model: This method takes the context of each word as the input and tries to predict the word corresponding to the context.
Intro : https://towardsdatascience.com/introduction-to-word-embedding-and-word2vec-652d0c2060fa
Example : https://www.tensorflow.org/tutorials/representation/word2vec
Train an LSTM / GRU to predict next word (given previous N words)
Karpathy's article is probably the best introduction to text generation with RNN (this works at character level, you will have to modify it to work at word level [Word-Vector level])
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Example :
https://medium.com/phrasee/neural-text-generation-generating-text-using-conditional-language-models-a37b69c7cd4b |
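As a rough sketch of the first option using gensim (the toy corpus below is a placeholder for your own data, and the Word2Vec parameter names vary slightly between gensim versions, e.g. vector_size vs. size), a CBOW model trained with negative sampling can score candidate words for a given context via predict_output_word:

from gensim.models import Word2Vec

sentences = [["if", "the", "age", "of", "the", "applicant", "is", "greater",
              "than", "18", "then", "accept"],
             ["if", "the", "amount", "is", "higher", "than", "100",
              "then", "reject"]]          # replace with a real corpus

model = Word2Vec(sentences, sg=0, min_count=1, vector_size=50, window=3)

context = ["applicant", "is", "than", "18"]            # words around the blank
candidates = {"higher", "bigger", "greater", "wider"}
scores = model.predict_output_word(context, topn=50) or []
ranked = [(word, p) for word, p in scores if word in candidates]
print(ranked)   # best-fitting candidates first; flag 'wider' if it ranks poorly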
H: is it bad to have many different measurements for the same target variable?
I'm working on a dataset that has repeated measurements for the same target variable.
When I don't change anything and create a model, cross-validation overfits with a 0.99 score, but on the test set it gives around 0.39.
When I use the mean, std, skew, and quartiles for each measurement, so that there is only one aggregated row per id, it gives a much better score.
Can anyone explain why? And when is it good to use the second method?
the original dataset looks like this (all numbers are fake):
id /measurement1/measurement2/.../target/
0-1/0.18283 /0.12855 /.../ 1 /
0-2/0.1141 /0.38484 /.../ 1 /
0-3/0.4475 /0.18374 /.../ 1 /
and transformed dataset looks like this:
id /meas1_avg/meas1_std/meas1_skew/meas2_avg/meas2_std/.../target/
0 /0.28747 /0.183848/ 0.198384 /0.18484 /0.28474 /.../ 1 /
AI: Note that you are solving two different problems here.
In the first problem, you want to predict the target variable given one noisy measurement.
In the second problem, you want to predict the target variable given some statistics from a group of noisy measurements.
Your results show that the second problem is easier to solve, which is intuitive, since the amount of noise (variance) in the average of multiple measurements is less than in a single measurement (closely related to the Law of Large Numbers); thus the relation in the second problem is easier for the model to find.
Therefore, if both problems are equivalent to you, go with the second problem which is easier to solve. |
H: How to show percentage (text) next to the horizontal bars in matplotlib?
I am using seaborn's countplot to show the count distribution of 2 categorical variables. It works fine, but I want the percentages to show on top of the bars for each plot. How do I do it?
fig, ax = plt.subplots(1, 2)
sns.countplot(y = df['current_status'], ax=ax[0]).set_title('Current Occupation')
sns.countplot(df['gender'], ax=ax[1]).set_title('Gender distribution')
I have made edits based on the comments made but I can't get the percentages to the right of horizontal bars. This is what I have done.
total = len(df['current_status'])*1.
ax = sns.countplot(y="current_status", data=df)
plt.title('Distribution of Configurations')
plt.xlabel('Number of Axles')
for p in ax.patches:
ax.annotate('{:.1f}%'.format(100*p.get_height()/total), (p.get_y()+0.1, p.get_height()+5))
_ = ax.set_xticklabels(map('{:.1f}%'.format, 100*ax.xaxis.get_majorticklocs()/total))
AI: Here is a working example to add a text to the right of horizontal bars:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
df = pd.DataFrame(np.array([['a'], ['a'], ['b']]), columns=['current_status'])
ax = sns.countplot(y="current_status", data=df)
plt.title('Distribution of Configurations')
plt.xlabel('Number of Axles')
total = len(df['current_status'])
for p in ax.patches:
    # for a horizontal bar, its width is the count for that category
    percentage = '{:.1f}%'.format(100 * p.get_width()/total)
    # place the label just to the right of the bar, vertically centered
    x = p.get_x() + p.get_width() + 0.02
    y = p.get_y() + p.get_height()/2
    ax.annotate(percentage, (x, y))
plt.show()
Output:
You can move the texts around by changing the formula for x and y. For example, to place them on top:
x = p.get_x() + p.get_width() / 2
y = p.get_y() - 0.02
Here is a general guideline for annotating axes. |
H: What does it mean for a prior to be improper
Does it mean that its use will never result in a posterior distribution which integrates (or sums) to 1?
AI: An improper prior doesn't integrate/sum to 1, hence it is not a proper probability distribution on its own.
Depending on the likelihood, the posterior distribution may or may not integrate to one.
An example would be a constant function on the infinite line, e.g. $p: \mathbb{R} \rightarrow \mathbb{R}, x\mapsto 1$.
It is not normalisable (since its integral is infinite), hence improper, yet it may serve as an uninformative prior.
H: What are the ways to partition a large file that does not fit into memory so it can later be fed as training data?
Is there any other way to partition a large file that does not fit into memory, so it can be fed as training data, other than using Spark or Hadoop?
AI: Yes, of course, although it may not be worth it, because Spark and Hadoop are better suited for this.
Here is the idea. Suppose your memory can hold 100,000 examples. Then split your data set into files with fewer than 100,000 examples each.
The key and most complex step is how to train the classifier with those chunks. For gradient-descent-style optimization algorithms (GD, SGD and so on), most models (SVM, GBDT, Bayes, logistic regression, deep learning and so on) support this. You can
load one file into RAM at a time and feed it to the classifier until you find the best parameters.
My code below is very simple. Before each iteration, re-shuffling the order of samples and re-splitting the data set will boost the classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.random((100, 2))
y = np.array([1 if x[0] > x[1] else 0 for x in X])

lr_cly = LogisticRegression()

def stop_train(cly, X_s, y_s, threshold):
    # stop once the mean score over all chunks exceeds the threshold
    scores = [cly.score(X_c, y_c) for X_c, y_c in zip(X_s, y_s)]
    return np.mean(scores) > threshold

def iter_train(cly, X, y, threshold=0.99, max_iter=10):
    # split the data into chunks that fit into memory (two chunks here);
    # note that fit() retrains from scratch on each chunk -- for true
    # incremental learning use an estimator with partial_fit (e.g. SGDClassifier)
    X_s = [X[:50, :], X[50:, :]]
    y_s = [y[:50], y[50:]]
    iter_times = 0
    while iter_times <= max_iter:
        print("--------------")
        for X_c, y_c in zip(X_s, y_s):
            cly.fit(X_c, y_c)
            print(cly.score(X_c, y_c))
        if stop_train(cly, X_s, y_s, threshold):
            break
        iter_times += 1

iter_train(lr_cly, X, y)
H: Normal distribution instead of Logistic distribution for classification
Logistic regression, based on the logistic function $\sigma(x) = \frac{1}{1 + \exp(-x)}$, can be seen as a hypothesis testing problem, where the reference distribution is the standard logistic distribution whose p.d.f. is
$f(x) = \frac{\exp(-x)}{[1 + \exp(-x)]^2}$
and the c.d.f is
$F(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}$
The hypothesis to test is
$H_0: x \text{ isn't positive} \hspace{2.0cm} H_1: x \text{ is positive}$
The test statistic is $F(x)$. We reject $H_0$ if $F(x) \geq \alpha$ where $\alpha$ is the level of significance (in terms of hypothesis testing) or classification threshold (in terms of classification problem)
My question is: why don't they use the standard normal distribution, which truly reflects the "distribution of nature", instead of the logistic distribution?
AI: Nice comparison.
Generally, we are allowed to experiment with as many distributions as we want, and find the one that suits our purpose. However, the normality assumption leads to an intractable derivation consisting of the notorious erf function.
Let's first pinpoint what is $x$ in the context of logistic regression. Logistic regression model can be written as:
$$P(y=1|\boldsymbol{x})=\frac{1}{1+e^{-\boldsymbol{w}^t\boldsymbol{x}}}=F(\boldsymbol{w}^t\boldsymbol{x})$$
So your $x$ is actually $z=\boldsymbol{w}^t\boldsymbol{x}$. This means that, although it is reasonable to assume that the predictor $\boldsymbol{x}$ comes from a normal distribution, the same argument does not hold for a linear combination of its dimensions, i.e. $z$. In other words, the normal assumption is not as natural for $z$ as it is for $\boldsymbol{x}$.
But still, let's see what happens with normal assumption. The problem that we face here is analytical intractability. More specifically, to fit a similar model to observations using Maximum Likelihood, we need (1) derivative of cumulative distribution function (CDF) with respect to each parameter $w_i$, and (2) value of CDF for a given $z$ (see this lecture section 12.2.1 for more details).
For logistic distribution, the required gradient would be:
$$\begin{align*}
\frac{\partial F(\boldsymbol{x};\boldsymbol{w})}{\partial w_i}&=\frac{\partial (1+e^{-\boldsymbol{w}^t\boldsymbol{x}})^{-1}}{\partial w_i}= x_i e^{-\boldsymbol{w}^t\boldsymbol{x}}(1+e^{-\boldsymbol{w}^t\boldsymbol{x}})^{-2} =x_if(\boldsymbol{x};\boldsymbol{w})
\end{align*}$$
However for normal distribution, CDF is the erf function which does not have an exact formula, though, its gradient is tractable. Assuming $z \sim \mathcal{N}(0, 1)$, the gradient would be:
$$\begin{align*}
\frac{\partial F(\boldsymbol{x};\boldsymbol{w})}{\partial w_i}&=\frac{\partial \left(\frac{1}{2}+\frac{1}{2}\text{erf}\left(\frac{z}{\sqrt{2}}\right)\right)}{\partial w_i}=\frac{x_i}{\sqrt{2 \pi}} e^{-\frac{(\boldsymbol{w}^t\boldsymbol{x})^2}{2}}=x_if(\boldsymbol{x};\boldsymbol{w})
\end{align*}$$
In summary, the normality assumption is not as justified for $z=\boldsymbol{w}^t\boldsymbol{x}$ as for $\boldsymbol{x}$, and it leads to an intractable CDF. Therefore, we continue using the good old logistic regression!
Here is a visual comparison of normal and logistic CDFs:
taken from a post by Enrique Pinzon, which implies a large analytical cost for a small difference! |
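The comparison is easy to reproduce; here is a small sketch plotting the standard normal CDF against the standard logistic CDF:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.special import expit   # expit(z) is the logistic CDF 1/(1+exp(-z))

z = np.linspace(-6, 6, 500)
plt.plot(z, norm.cdf(z), label='standard normal CDF')
plt.plot(z, expit(z), label='standard logistic CDF')
plt.xlabel('z')
plt.legend()
plt.show()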
H: VC dimension of hypothesis space of finite union of intervals
I have the following concept:
$$C = \left\{\bigcup_{i=1}^{k}(a_i, b_i): a_i, b_i \in {\Bbb R}, a_i < b_i, i=1,2,..,k\right\}
$$
and was wondering how to determine the VC dimension of C?
AI: VC dimension is defined for a hypothesis space $H$, e.g. a set of binary classifiers $C \rightarrow \{0, 1\}$. For example, hypothesis space
$$H=\{{\Bbb 1}_{x \le \theta}: \theta \in {\Bbb R}\}$$
has VC dimension $1$, because for any $C=\{a<b\}$, it does not contain a classifier that gives $\{a \rightarrow 0, b\rightarrow 1\}$.
For example, a classifier from $H$ would be $f(x)={\Bbb 1}_{x \le a}$ that gives $\{a \rightarrow 1, b \rightarrow 0\}$.
From C to H
As you have illustrated in the comments, we can build a hypothesis space $H$ from $C$ as follows:
$$H=\left\{{\Bbb 1}_{x \in C}: C = \left\{\bigcup_{i=1}^{k}(a_i, b_i): a_i, b_i \in {\Bbb R}, a_i < b_i, i=1,2,..,k\right\}\right\}$$
Meaning, each classifier in $H$ is a union of $k$ intervals that labels a point inside the union as $1$ and outside as $0$.
VC dimension of this $H$ is $2k$:
For VC $\geq 2k$: Let $A$ be an arbitrary set of at most $2k$ points, and $A \rightarrow \{0, 1\}$ an arbitrary labeling. Going from the minimum to the maximum member of $A$, we can cover every run of adjacent $1$s with one interval, and only need another interval when there is a $0$ barrier. Therefore, we need $k$ intervals to cover $k$ isolated regions of $1$s. Furthermore, a set with $2k$ members has at most $k$ isolated runs of $1$s (since $k+1$ isolated runs would require $k$ $0$ barriers in-between), and thus needs at most $k$ intervals.
For VC $< 2k+1$, by contradiction: for any ordered set $A_{2k+1}=\{a_1<...<a_{2k+1}\}$, there is a labeling $a_i \rightarrow 1_{i \text{ odd}}$, i.e. $\{a_1 \rightarrow 1, a_2 \rightarrow 0,...,a_{2k+1} \rightarrow 1\}$, with $k+1$ isolated $1$s, which cannot be covered with $k$ intervals.
H: When does fitting happen in KNN?
During the training session, model fitting happens to reduce the error. But does KNN do this?
The error only changes by changing the K value and the number of features, doesn't it?
So the training set and the test set are only for doing the things below, right?
Train model with training set
Given test sample, model finds K-nearest neighbors in training set
Do classification or regression for test sample
Find accuracy with MSE or RMSE
AI: Short version: this is just terminology, but arguably K-NN doesn't actually fit a model.
Conceptually, K-NN is a lazy learning model. This means that, in a sense, there's no fitting until a new instance arrives to be classified or a value has to be predicted (depending if you're using K-NN for classifying or regression, both are possible). Even then, using the term "fitting the model" is a bit out-of-place, in my opinion.
I think your confusion is in the sentence "model fitting happens to reduce error". No, it does not. Model fitting is simply getting a model (for example, a family of data distributions) and fitting the model by finding the parameters that better describe the data (thus choosing a member from that family).
Maybe you're thinking of neural network's epochs as "fitting"? What happens there is that there's a family of possible models (all the possible values for each weight in the network), and fitting the model is just finding the best possible values. The fact that neural networks do so in an iterative manner does not mean that fitting is an iterative process, fitting is the end result.
So training set and test set is only for do things below, right?
Well, that's true for every model, yes. Training and test set separation serves the only purpose of evaluating the model on the chosen metrics. At the end of the day, the final model you choose to deploy, be it lazy or eager, will use all the available data (or a portion/transformation of it, in some cases), not just the training set. |
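A small, illustrative way to see this laziness with scikit-learn is to time fit versus predict for KNeighborsClassifier: fit mostly just stores or indexes the training data, while the neighbour search happens at prediction time. The data below is random and only serves as a demonstration:

import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(50000, 10)
y = (X[:, 0] > 0.5).astype(int)

knn = KNeighborsClassifier(n_neighbors=5)

t0 = time.time(); knn.fit(X, y);          print('fit     :', time.time() - t0)
t0 = time.time(); knn.predict(X[:1000]);  print('predict :', time.time() - t0)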
H: Can I accurately call sklearn.model_selection.train_test_split multiple times when data doesn't fit into memory?
Consider a very large data set that doesn't fit into memory. Would I be able to get (nearly) the same behavior from multiple calls to train_test_split when calling train_test_split by passing batches of a source data set as opposed to the whole thing at once?
This code is just hypothetical to illustrate my question.
# X, y is the entire dataset.
x_train, x_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=.2)
# compared to
for x_bat, y_bat in stream_next_batch_from_file():
    x_train, x_test, y_train, y_test = train_test_split(x_bat, y_bat, stratify=y_bat, test_size=.2)
    # Append the splits to their respective files.
    append_data(x_train, y_train, "train_set_filename")
    append_data(x_test, y_test, "test_set_filename")
    # etc.
AI: This should be fine. If the data is independent of its position in the list, this should give basically identical results.
If the data depends on order, then grabbing some data for training and testing from each batch will have a smoothing effect, making the training and test sets more representative than the results you would get with a raw train_test_split. |
H: Building Image Dataset In a Studio
I'm currently working on an object detection problem; more specifically, we want to count and differentiate similar species of moths.
We are already testing some detection algorithms:
YOLOv3
RetinaNet
Mask R-CNN
For the datasets, first we'll scrape the web for those images to have a starting model.
Then we'll get plenty of pictures from our devices that will look like this:
Unfortunately it will take 6 months to get those pictures.
Currently we have some samples of moths, and we have the resources to take pictures of them in a studio. We want to simulate the field conditions as well as taking close ups of the moths in different angles and lighting conditions. This experiment would also allow for us to get the accuracy of classification from an expert entomologist.
Is the effort of taking studio photos worth it?
Would these pictures significantly increase the models accuracy?
Is there any opportunity for publication?
Is there any paper that uses studio photos as well as field photos for object detection?
Also, there's plenty of opportunities for publication here in the company, so they would go the extra mile if we publish something, or if these pictures would improve the final accuracy of the models.
AI: Disclaimer:
This is a question that is probably going to be flagged since it is too broad and answers will be mainly based on opinion.
Is this effort worth it?
That is subjective, what commercial/social use does this have? Is this kind of detection really relevant? What can you do with those detections and classifications? That is something only an entomologist could answer properly, you will get better answers for this one with them.
If you can find this kind of motivations for your work, you probably get a good dataset with good use. But if you don't even know what kind of applications you're aiming to you should start doing more research on motivation.
Would these pictures significantly increase the model's accuracy?
It is highly probable that it will, but you can't be sure until you try. Also, train your models with recently acquired images during the acquisition process to make sure that those images are having an impact on your models.
Is there any opportunity for publication?
Surely there is. Whether it will be cited multiple times or be well received by the Machine Learning community? Probably. If not, it would likely still get you a top publication in entomology.
Is there any paper that uses studio photos as well as field photos for object detection?
Well, there are many. These controlled datasets are pretty easy to come up with and have been used extensively; see BioID, for example, which many publications still use for benchmarking eye pupil detection and facial landmark detection. In-the-wild datasets (usually collected from the web) are usually more challenging and also better suited to real-world applications, but they are not always easy to acquire.
Last year CEFET-RJ and UFRJ published papers in IEEE with created datasets for Aedes Aegypti breeding sites detection. This dataset was created using drones and manually annotated by CEFET-RJ and UFRJ undergrad students. |
H: Calculating Feature Importance of Time Series Data
I am new to time-series modeling, and I was wondering what the standard way of quantifying feature importances are in a time-series setting? What types of models allow for the greatest interpretation of the feature space?
I am looking for something, which does not necessarily function like the Random Forest Regressor's feature importance call, but provides a similar insight.
AI: For time series data,
Sensitivity analysis can help with the overall importance of a feature; for example, whether "day of the week" is a good feature for stock price forecasting. LIME is one approach that can help. Details: https://arxiv.org/abs/1606.05386 . One simple way is to mask each feature and check the impact on the model's performance.
Auto-correlation and seasonality removal (details in the tutorial at the end of the answer)
SHAP (SHapley Additive exPlanations) is good at identifying features that impact the output with a lag (https://medium.com/datadriveninvestor/time-step-wise-feature-importance-in-deep-learning-using-shap-e1c46a655455)
End to end example :
https://machinelearningmastery.com/feature-selection-time-series-forecasting-python/ |
H: Crop all written letters from an image to form a dataset
The image above has handwritten Hindi/Devanagari letters, all on a single
image.
The task is to prepare a dataset from the given assignment.jpg image using image
processing tools.
I don't have clarity on whether I can use photo editing tools like Adobe Photoshop, etc.
If "image processing tools" means OpenCV, is it really possible to create an EMNIST-like dataset from a single image?
I am not able to come up with any way to separate the letters from each other.
Finally, I did it by combining adaptive thresholding, blurring, and morphological operations. It is helpful to check out findContours for two-part letters, as suggested by @Salman and @Jeru Luke; it really helped me.
AI: This can be done with OpenCV.
Code needs to:
Pre-process the image to enhance contrast
Use OpenCV to identify a 'bounding box' for each character
Save each character's image as a new file
Code example : http://cvisiondemy.com/extract-roi-from-image-with-python-and-opencv/ |
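Here is a hedged OpenCV sketch of those three steps (file names, blur kernel, and the minimum-area threshold are assumptions; note that cv2.findContours returns two values in OpenCV 4 and three in OpenCV 3):

import cv2

img = cv2.imread('assignment.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
# adaptive thresholding gives white characters on a black background
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    if w * h < 100:                 # skip tiny specks of noise
        continue
    cv2.imwrite('char_%03d.png' % i, img[y:y + h, x:x + w])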
H: What are CRF (Conditional Random Field)
While reading about language modeling, I keep running into CRFs in a lot of places, but looking them up online isn't actually helping me much. I referred to Edwin Chen's blog and Ravish Chawala's Medium article, but rather than solving my problem, they raise more questions.
Could you please point me to some interesting blog posts, GitHub repos, or research papers that would further help me with this?
AI: Stanford CoreNLP is a very good implementation of CRF (In Natural Language Processing domain).
https://nlp.stanford.edu/software/CRF-NER.html . CRF specific implementation is : https://github.com/stanfordnlp/CoreNLP/blob/master/src/edu/stanford/nlp/ie/crf/CRFClassifier.java
Few other resources :
https://www.analyticsvidhya.com/blog/2018/08/nlp-guide-conditional-random-fields-text-classification/
https://github.com/lancifollia/crf
https://towardsdatascience.com/conditional-random-field-tutorial-in-pytorch-ca0d04499463 |
H: Optimizing vector values for maximum correlation
I'm new to ML, linear algebra, statistics, etc. so bear with me on the terminology...
I’m looking to find a vector that produces the maximum correlation for the relationship between 1) all relationships among dimensions of the vector (determined by subtraction) and 2) some output value produced by said relationships. I'm specifically using this to create a sports ranking system that takes a number of matches and the resulting scores and attempts to assign a value to the teams that can be used to predict future scores. In other words, the difference between any two team's ratings should be predictive of the score differential for the next match between the two.
So for example, if I have 3 teams, A, B, and C, each start with unknown ratings:
$$
\begin{array}{c|c}
A&?\\
B&?\\
C&?\\
\end{array}
$$
If each team played each other team once, the left table would be used to calculate their rating differences (column team's rating minus row team's rating). The right table would be the difference in scores in the respective matchups.
$$
\begin{equation}
\begin{array}{c|c}
&A&B&C \\
\hline
A&*& B - A& C - A\\
B&A - B& *& C - B\\
C&A - C& B - C& *\\
\end{array}
\Rightarrow
\begin{array}{c|c}
&A&B&C \\
\hline
A&0&3&6\\
B&-3&0&3\\
C&-6&-3&0\\
\end{array}
\end{equation}
$$
Here is a possible solution that, for this example, would result in a perfect correlation between team rating differentials and score differentials.
$$
\begin{array}{c|c}
A&1\\
B&2\\
C&3\\
\end{array}
$$
This would be the regression line where x2 is the column team's rating and x1 is the row team's rating.
$$y = 3 * (x_2 - x_1)$$
It’s worth noting that what matters is the relationship between the various values (not their nominal values) since this would be another possible solution:
$$
\begin{array}{c|c}
A&2\\
B&4\\
C&6\\
\end{array}
$$
Which would result in a linear equation that looks like this, which would also have a correlation of 1:
$$y = {3\over 2} * (x_2 - x_1)$$
What I want to do is find a method to determine values for A, B, and C that maximizes the correlation between the pairwise differences and resulting output values. The one additional catch for the teams example is that not every team will play every other team so any resulting matrices will be asymmetrical (assuming that matters).
Are there any existing techniques to address this problem?
AI: This is one approach you can follow:
Setup a linear regression system, with each match as a row, and each feature corresponding to a team.
The unknown feature coefficient for each team is the team 'strength' that we try to determine.
The feature values will be one of 0, 1, or -1, depending on whether the team did not play, was the column team, or the row team respectively in that match.
The regression target will be the score differential (column team score - row team score) in that match.
Eg: For the result matrix above, the system would be:
$$
(-1) * x_1 + (1) * x_2 + (0) * x_3 = 3\\
(-1) * x_1 + (0) * x_2 + (1) * x_3 = 6\\
(0) * x_1 + (-1) * x_2 + (1) * x_3 = 3
$$
One solution to the above system is: $x_1=-6; x_2=-3; x_3=0$
Multiple solutions are possible, which can be viewed as translations of each other (adding a constant to all team strengths).
If there are 'n' teams, then there are only 'n-1' linearly independent columns in the regression. (In R, one of the coefficients comes out as NA. This can be treated as 0, or dropped from the regression, effectively making it 0). |
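Here is a small numpy sketch of this setup, using the three matches from the example above; least squares returns one of the translation-equivalent strength vectors:

import numpy as np

teams = ['A', 'B', 'C']
# (row_team, column_team, column_score - row_score)
matches = [('A', 'B', 3), ('A', 'C', 6), ('B', 'C', 3)]

X = np.zeros((len(matches), len(teams)))
y = np.zeros(len(matches))
for i, (row_t, col_t, diff) in enumerate(matches):
    X[i, teams.index(row_t)] = -1   # row team
    X[i, teams.index(col_t)] = 1    # column team
    y[i] = diff

strengths, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(teams, strengths)))  # e.g. {'A': -3.0, 'B': 0.0, 'C': 3.0}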
H: Naive Bayes Classifier - Discriminant Function
To classify my samples, I decided to use Naive Bayes classifier, but I coded it, not used built-in library functions.
If I use this decision rule, I obtain nice classification accuracy: p1(x) > p2(x) => x belongs to C1.
However, I could not understand why discriminant functions produce negative values. If they are probability functions, I think they must generate a value between 0 and 1.
Is there anyone who can explain the reason ?
AI: In Naive Bayes, for the case of two classes, a discriminant function could be $$D(\boldsymbol{x}) = \frac{P(\boldsymbol{x}, c=1)}{P(\boldsymbol{x}, c=0)}$$ which can be anywhere in $[0, +\infty)$, and decides $c=1$ if $D(\boldsymbol{x})>1$, $c=0$ otherwise, or it could be the logarithm of that value
$$d(\boldsymbol{x}) = \text{log}\frac{P(\boldsymbol{x}, c=1)}{P(\boldsymbol{x}, c=0)}=\text{log}P(\boldsymbol{x}, c=1)-\text{log}P(\boldsymbol{x}, c=0)$$
which can be anywhere in $(-\infty, +\infty)$ (handling zero probability as a special case), and decides $c=1$ if $d(\boldsymbol{x})>0$, $c=0$ otherwise.
As a side note, $P(\boldsymbol{x}, c=k)$ in Naive Bayes is calculated as
$$P(\boldsymbol{x}, c=k)=P(c=k)\prod_{i=1}^{d}P(x_i|c=k)$$
or equivalently for log probabilities as
$$\text{log}P(\boldsymbol{x}, c=k)=\text{log}P(c=k) + \sum_{i=1}^{d}\text{log}P(x_i|c=k)$$ |
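A tiny numeric sketch of the log-space discriminant above, with made-up priors and per-feature likelihoods for two features and two classes:

import numpy as np

log_prior = {0: np.log(0.6), 1: np.log(0.4)}
log_likelihood = {                     # log P(x_i | c) for the observed x
    0: np.log([0.2, 0.7]),
    1: np.log([0.5, 0.1]),
}

def log_joint(c):
    return log_prior[c] + log_likelihood[c].sum()

d = log_joint(1) - log_joint(0)        # can be any real number, not in [0, 1]
print(d, '-> class 1' if d > 0 else '-> class 0')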
H: How do I identify clusters that match on categorical data?
I am seeking some directions for a proper path to research the solve for this problem:
My company made all our employees take a "StrengthFinders" test, which results in every employee being assigned their top five (ordered) "strengths" from a possible list of 34 strengths. We have 500 employees. I am supposed to identify all the employees that match each other for the same 5 strengths (order not important), and also for employees that match each other for 4 out of 5 strengths (again, order doesn't matter). I could potentially have multiple groups matching on different sets of strengths, e.g.:
Group 1: Billy, Sally, Michael have strengths A, H, I, K, Z
Group 2: Bobby and Suzy have strengths A, B, L, S, W
For the case where strengths match for 4 out of 5, I might have the same people from Group 1 above, plus Joe, whose strengths are A, H, M, K, Z; and
Seth, whose strengths are A, H, G, K, Z. I would expect more groupings for the case of 4 out of 5 than the 5 out of 5 case.
The strengths are categorical in nature, so what I've read so far has largely revolved around clustering of continuous numerical variables.
I am looking for an algorithmic way to identify clusters and the members of those clusters for this situation. I think I could do this brute force by repeatedly sorting data in Excel, but I'm confident that a better way must exist, and I ask you to point me in that direction. Thank you.
AI: You have just 500 data points...
Excel of course is the worst possible tool though.
Anyway, build a dictionary. Put everybody in there 6 times: once with all five strengths, and 5 times with one strength omitted. Then you can easily identify the largest groups, and you can also perform various completion operations easily: if you have identified a group with strengths A B C D E, you can add all employees that have A B C D, etc., using the dictionary.
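A minimal Python sketch of that dictionary idea (names and strengths below are made up): key each employee once by the frozenset of all five strengths and five more times with one strength left out, and exact and 4-of-5 groups fall out of the same lookups:

from collections import defaultdict
from itertools import combinations

employees = {
    'Billy':   {'A', 'H', 'I', 'K', 'Z'},
    'Sally':   {'A', 'H', 'I', 'K', 'Z'},
    'Michael': {'A', 'H', 'I', 'K', 'Z'},
    'Joe':     {'A', 'H', 'M', 'K', 'Z'},
}

exact, four_of_five = defaultdict(list), defaultdict(list)
for name, strengths in employees.items():
    exact[frozenset(strengths)].append(name)
    for subset in combinations(sorted(strengths), 4):
        four_of_five[frozenset(subset)].append(name)

print([g for g in exact.values() if len(g) > 1])         # 5-of-5 groups
print([g for g in four_of_five.values() if len(g) > 1])  # 4-of-5 groups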
H: Why can decision trees have a high amount of variance?
I have heard that decision trees can have a high amount of variance, and that for a data set $D$, split into test/train, the decision tree could be quite different depending on how the data was split. Apparently, this provides motivation for algorithms such as random forest.
Is this correct? Why does a decision tree suffer from high variability?
Edit:
Just a note - I do not really follow the current answer and have not been able to solve that in the comments.
AI: It is relatively simple if you understand what variance refers to in this context. A model has high variance if it is very sensitive to (small) changes in the training data.
A decision tree has high variance because, if you imagine a very large tree, it can basically adjust its predictions to every single input.
Consider you wanted to predict the outcome of a soccer game. A decision tree could make decisions like:
IF
player X is on the field AND
team A has a home game AND
the weather is sunny AND
the number of attending fans >= 26000 AND
it is past 3pm
THEN team A wins.
If the tree is very deep, it will get very specific and you may only have one such game in your training data. It probably would not be appropriate to base your predictions on just one example.
Now, if you make a small change e.g. set the number of attending fans to 25999, a decision tree might give you a completely different answer (because the game now doesn't meet the 4th condition).
Linear regression, for example, would not be so sensitive to a small change because it is limited ("biased" -> see bias-variance tradeoff) to linear relationships and cannot represent sudden changes from 25999 to 26000 fans.
That's why it is important to not make decision trees arbitrary large/deep. This limits its variance.
(See e.g. here for more on how random forests can help with this further.) |
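A rough, self-contained illustration of this sensitivity (toy data and arbitrary seed, not from the answer above): refitting a fully-grown tree after nudging a handful of training targets can move its prediction for a point by a lot, while linear regression barely moves:

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(200, 1)
y = 3 * X[:, 0] + rng.normal(scale=0.3, size=200)
X2, y2 = X.copy(), y.copy()
y2[:5] += 1.0                                  # small change to a few targets

x_new = X[:1]                                  # query point seen in training
for Model in (DecisionTreeRegressor, LinearRegression):
    p1 = Model().fit(X, y).predict(x_new)[0]
    p2 = Model().fit(X2, y2).predict(x_new)[0]
    print(Model.__name__, 'prediction shift:', abs(p1 - p2))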
H: Error: building keras model using LSTM
I am trying to build a simple LSTM-based model, but I am getting a "can't set attribute" error on the line that adds the LSTM layer to the model. I cannot figure out why this error appears. This is the code I am using.
left = Sequential()
left.add(LSTM(64,activation='sigmoid',batch_input_shape=(10,look_back,dim)))
left.add(Dense(dim, activation='linear'))
left.compile(loss='mean_squared_error', optimizer='rmsprop')
I do not think there is an issue with the code. The version of Keras is 2.0.7 with the TensorFlow backend. It is difficult to trace the cause of the error.
AI: Install the latest Keras version: https://github.com/keras-team/keras/issues/7736#issuecomment-324989522 |
H: Integrate remote sensing & GIS data in a CNN
I am trying to develop a CNN model that takes as input a set of parameters derived from satellite imagery.
What are the necessary steps to go through in order to make the Geotiff data acceptable as a valid input for a keras model?
AI: The Keras functional API can be used to create models with multiple inputs (e.g. image, categorical and numerical).
From the GeoTIFF, you can extract:
a TIFF raster
a CSV with the other features
The TIFF will be the input to a stack of CNN layers, and the other features will feed dense layers. The outputs of both branches are combined in a dense layer.
Example : https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/ |
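For illustration, here is a hedged Keras functional-API sketch of that layout (patch size, feature count, and layer widths are placeholders, not values from the question):

from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                          concatenate)
from keras.models import Model

image_in = Input(shape=(64, 64, 3))        # raster patch extracted from the GeoTIFF
x = Conv2D(16, (3, 3), activation='relu')(image_in)
x = MaxPooling2D()(x)
x = Flatten()(x)

tabular_in = Input(shape=(10,))            # other numeric/categorical covariates
t = Dense(16, activation='relu')(tabular_in)

merged = concatenate([x, t])
merged = Dense(32, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[image_in, tabular_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()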
H: Can I create a good Speech Recognition Engine while having millions of recorded conversations?
I have at my disposal millions of WAV files containing recorded conversations between employees and clients, and I'm doing some research on the possibility of creating a good speech recognition engine. I've tested Google's Speech-to-Text and it's great. Is it possible to create something similar? (Of course, no one can beat the quality and quantity of the data Google has, but how close can one get?) And of course, what are the technical limits (like the hardware needed for this kind of learning), and how much time should it take to achieve?
Note: I'm a beginner in ML. So far, I've done some binary and multiclass classification, and I have an idea of neural networks but have done no work with them. The simpler the answer, the easier for me to understand. Thanks!
AI: Yes, having lots of recorded conversations is great for building a speech recognition system. You will still have to create training samples (each sample maps part of a WAV file to its text), but you will need a smaller number of samples.
High level steps are :
Train a generative model on raw audio
Train a language model on raw text data (it need not be from these conversations, but it has to be from the same domain). For example, if the conversations are medical, train the language model on medical text.
Merge these models and train on labeled samples
For step 1, Google WaveNet is a good example (it is eventually used for Text-to-Speech, it is a component in Speech-to-Text as well)
https://deepmind.com/blog/wavenet-generative-model-raw-audio/
Papers that cover design and overall approach :
https://arxiv.org/abs/1711.01567
https://arxiv.org/abs/1803.10132 |
H: Why does all of NLP literature use Noise contrastive estimation loss for negative sampling instead of sampled softmax loss?
A sampled softmax function is like a regular softmax but randomly selects a given number of 'negative' samples.
This is different from NCE loss, which doesn't use a softmax at all; it uses a logistic binary classifier for the context/labels. In NLP, 'negative sampling' basically refers to the NCE-based approach.
More details here
I have tested both and they both give pretty much the same results. But in word embedding literature, they always use NCE loss, and never sampled softmax.
Is there any reason why this is? The sampled softmax seems like the more obvious solution to prevent applying a softmax to all the classes, so I imagine there must be some good reason for the NCE loss.
AI: Both negative sampling (derived from NCE) and sampled SoftMax use a few samples to bypass the calculation of full SoftMax.
The main problem comes from this comment in the linked pdf:
Sampled Softmax
(A faster way to train a softmax classifier)
which is only used for sampled SoftMax, although negative sampling is just as fast, for the same reason: it works with only a few samples. If their performances are at the same level, this could be the reason why researchers are not convinced to switch over to sampled SoftMax. In academia, it is almost always the case that older methods are preferred over new but equally competent methods, for the sake of credibility.
Negative sampling is NCE minus the logistic classifier. Roughly speaking, it only borrows the term "F(target) + sum of F(negative sample)s". Negative sampling is most prominently introduced in the Word2Vec paper in 2013 (as of now with 11K citations), and is backed by the mathematically rigorous NCE paper (2012). On the other hand, sampled SoftMax is introduced in this paper (2015) for a task-specific (Machine Translation) and biased approximation:
In this paper, we propose an approximate training algorithm based on
(biased) importance sampling that allows us to train an NMT model with
a much larger target vocabulary
Noting that negative sampling also allows us to train "with a much larger target vocabulary". |
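For reference, both losses are a single call in TensorFlow 1.x with nearly identical signatures; the shapes below are illustrative assumptions, not values from any particular paper:

import tensorflow as tf

vocab_size, embed_dim, num_sampled, batch = 10000, 128, 64, 32
weights = tf.Variable(tf.random_normal([vocab_size, embed_dim]))
biases = tf.Variable(tf.zeros([vocab_size]))
inputs = tf.placeholder(tf.float32, [batch, embed_dim])  # context/hidden vectors
labels = tf.placeholder(tf.int64, [batch, 1])            # target word ids

nce = tf.reduce_mean(tf.nn.nce_loss(
    weights, biases, labels, inputs, num_sampled, vocab_size))
sampled_sm = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights, biases, labels, inputs, num_sampled, vocab_size))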
H: What should I observe when choosing which optimizer suits my Deep Neural Network model?
I have trained my neural network model with optimizers such as RMSProp, AdaGrad, Momentum, and Adam.
Currently, after running the code, I have printed out the Train and Test Accuracy of every epoch (50 in my case). However, I would like to know how should I determine which of these optimizers performs the best?
Does a higher train accuracy at the last epoch determine which is best, or would a higher test accuracy do so? Also, I observed that when using the Momentum optimizer, the model's train accuracy reached its highest, around 0.91, at the 16th epoch, earlier than with the other optimizers.
Hence, would that conclude that the Momentum optimizer performs best in this case?
AI: High training score is not an indication of model performance, high test score is. Also, a faster convergence to the same, or better test score is an indication of optimizer performance.
Therefore, if Momentum optimizer reaches a better test score faster, it definitely means that it is the best out of those tested optimizers.
As a side note, be careful about the choice of "score"; for example, using "accuracy" for imbalanced classes is not a good choice, since it equates a 1% error in a class with 100 members to a 10% error in a class with 10 members. If classes are equally important, i.e. 1% error is equally bad for all classes, macro-F1 and AUC would be better replacements.
An important note: when we use a score to select a hyper-parameter or an optimizer, final model will be affected, thus, that score would be called validation score, not test score. |
H: CNN output shape explanation
I have the following sequential model:
model = models.Sequential()
model.add(Reshape(([1]+in_shp), input_shape=in_shp))
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(256, (1, 3),padding='valid', activation="relu", name="conv1",data_format="channels_first", kernel_initializer='glorot_uniform'))
model.add(Dropout(dr))
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(80, (2, 3), padding="valid", activation="relu", name="conv2",data_format="channels_first", kernel_initializer='glorot_uniform'))
model.add(Dropout(dr))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1"))
model.add(Dropout(dr))
model.add(Dense( len(classes), kernel_initializer='he_normal', name="dense2" ))
model.add(Activation('softmax'))
model.add(Reshape([len(classes)]))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
and I got the following summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape_1 (Reshape) (None, 1, 2, 128) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 1, 6, 128) 0
_________________________________________________________________
conv1 (Conv2D) (None, 256, 6, 126) 1024
_________________________________________________________________
dropout_1 (Dropout) (None, 256, 6, 126) 0
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 256, 10, 126) 0
_________________________________________________________________
conv2 (Conv2D) (None, 80, 9, 124) 122960
_________________________________________________________________
dropout_2 (Dropout) (None, 80, 9, 124) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 89280) 0
_________________________________________________________________
dense1 (Dense) (None, 256) 22855936
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
dense2 (Dense) (None, 8) 2056
_________________________________________________________________
activation_1 (Activation) (None, 8) 0
_________________________________________________________________
reshape_2 (Reshape) (None, 8) 0
=================================================================
Total params: 22,981,976
Trainable params: 22,981,976
Non-trainable params: 0
The model works fine, but I want to understand something regarding the conv1 layer. Why has the width been reduced from 128 to 126? I am really confused about that; shouldn't it be the same value as in the previous layer?
The same thing happens for the conv2 layer, where the height and width decrease from (10, 126) to (9, 124).
Could someone explain why?
AI: In the convolution layer, the filter ((1, 3) and (2, 3) in your case) is applied to the input and slid to the right and bottom by a parameter called stride (not set in your case, so the default of 1 is used) in order to produce the output (feature map). With padding='valid', no border padding is added, so the output dimensions shrink; if you change it to padding='same', the output dimensions will match the input, because the borders are padded with zeros (zero padding).
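For reference, with padding='valid' (no padding, $P=0$) and stride $S=1$, the output size along each spatial axis follows the standard formula
$$\text{out} = \left\lfloor \frac{\text{in} - \text{kernel} + 2P}{S} \right\rfloor + 1,$$
so for conv1 the width is $(128 - 3)/1 + 1 = 126$ (the height stays $(6 - 1) + 1 = 6$), and for conv2 the height and width are $(10 - 2) + 1 = 9$ and $(126 - 3) + 1 = 124$.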
H: How to checkpoint by minibatch in Keras
I know that I can use ModelCheckpoint in Keras for checkpointing a model every epoch (or every few epochs, depending on what I want).
I am getting my data for each minibatch from a fit_generator, and it takes a very long time to evaluate each minibatch. I'd like to be able to checkpoint by minibatch instead of by epoch. How can I do this in Keras?
AI: You have to write a custom callback for this. Steps are :
Subclass ModelCheckpoint (https://github.com/keras-team/keras/blob/master/keras/callbacks.py), or create a new one if you do not need the filename pattern, etc.
Add a method that will be called at the end of each batch:
class BatchModelCheckpoint(keras.callbacks.Callback):
    def __init__(self, filepath):
        super(BatchModelCheckpoint, self).__init__()
        self.filepath = filepath  # where to save the model after every batch
    def on_batch_end(self, batch, logs=None):
        self.model.save(self.filepath, overwrite=True)
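A hypothetical usage would then look like this (the generator, step counts, and file name are placeholders):

model.fit_generator(train_gen, steps_per_epoch=100, epochs=5,
                    callbacks=[BatchModelCheckpoint('weights_latest.h5')])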
H: Gradient descent with infinite gradient value
Given a function $f(x)$ and $\frac{\partial f(x)}{\partial x_i}=\frac{f^2(x1,...,x_i+\pi/2,...,x_n)-f^2(x1,...,x_i-\pi/2,...,x_n)}{f(x)}$. When $f(x)\to0$, $\frac{\partial f(x)}{\partial x_i}$ could be infinitely large. ($f^2(x1,...,x_i+\pi/2,...,x_n)-f^2(x1,...,x_i-\pi/2,...,x_n)$ is always non-zero)
I have very little experience in deal with this situation in gradient descent process...In my code, $f(x)$ is in continuous domain but for purpose to simulate some real world process, $f(x)$ is sampled to be discrete and would return values uniformly distributed over $[0,1]$. Assume discrete $f(x)$ has $N$ identity values, at the beginning there is a training set of size $M$ ($M$ is very large), $\{x_i, f(x_i)=\frac{k_i}{N}\}_{i=1..M} (k_i \in 1, 2, ..., N)$.
I found that setting $1/f(x)$ to some value like $0.01$ when $f(x)=0$ reaches the optimum easily, although slightly more slowly than the ideal process, while setting it to a much smaller value like $0.00001$ lets $f(x)=0$ have a large impact on the process, and the loss fails to form a descending curve.
Is the method replacing infinitely large values to some large but finite values correct? Or there are any better ways to deal with the infinite gradient problem?
Thanks in advance!
AI: Is the method replacing infinitely large values to some large but
finite values correct?
Yes. For example, the same problem happens for the logarithm in cross-entropy loss function, i.e. $p_i \text{log}(p'_i)$ when $p'_i \rightarrow 0$. This is avoided by replacing $\text{log}(x)$ with $\hat{\text{log}}(x) = \text{log}(x+\epsilon)$ for some small $\epsilon$.
Similarly, you are changing $f(x)$ in the denominator to $\hat{f}(x) = max(\epsilon, f(x))$.
However, I would suggest $\hat{f}(x) = f(x) + \epsilon$ instead of a cut-off threshold. This way, the difference in $f(x_1) < f(x_2) < \epsilon$ would not be ignored unlike the max cut-off. |
H: Generalization bound (single hypothesis) in "Foundations of Machine Learning"
I have a question about Corollary $2.2$: Generalization bound--single hypothesis in the book "Foundations of Machine Learning" Mohri et al. $2012$.
Equation $2.17$ seems to only hold when $\hat{R}_S(h)<R(h)$ in equation $2.16$ because of the absolute operator. Why is this not written in the corollary? Am I missing something important?
Thank you very much for reading this question.
AI: You are right. The relaxed inequality
$$R(h) \le \hat{R}_S(h)+ \epsilon.$$
can be replaced with the complete inequality
$$\left |\hat{R}_S(h) - R(h) \right| \le \epsilon.$$
Actually, authors use this complete inequality for the follow up examples in the book. Again in Theorem $2.13$, they write the relaxed inequality, but prove for the complete inequality.
We could say that the relaxed inequality is written for the sake of readability and/or convention.
On the relation of inequalities
Let us denote:
$$A:=\hat{R}_S(h) - R(h) \le \epsilon$$
$$B:=\hat{R}_S(h) - R(h) \ge -\epsilon$$
thus,
$$\left| \hat{R}_S(h) - R(h) \right| \le \epsilon = A \text{ and } B$$
Equation $(2.16)$ states:
$$\begin{align*}
& {\Bbb P}(\left| \hat{R}_S(h) - R(h) \right| \ge \epsilon) \le \delta \\
& \Rightarrow {\Bbb P}(\left| \hat{R}_S(h) - R(h) \right| \le \epsilon) \ge 1 - \delta \\
& \Rightarrow {\Bbb P}(A \text{ and } B) \ge 1 - \delta \\
\end{align*}$$
knowing that ${\Bbb P}(B) \ge {\Bbb P}(A \text{ and } B)$,
$$\begin{align*}
& {\Bbb P}(B) \ge {\Bbb P}(A \text{ and } B) \ge 1 - \delta \\
& \Rightarrow {\Bbb P}(\hat{R}_S(h) - R(h) \ge -\epsilon) \ge 1 - \delta
\end{align*}$$
which is equivalent to
$$R(h) \le \hat{R}_S(h) + \epsilon$$
with probability at least $1-\delta$, i.e. equation $(2.17)$. |
H: Why does TensorFlow convert my decoded image to float32 instead of uint8/16?
I am using the following code to import a bunch of .png images and decode them using TensorFlow:
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np
import os
tf.enable_eager_execution()
NUM_TRAINING_SAMPLES = 333
NUM_CLASSES = 3
BATCH_SIZE = 5
NUM_EPOCHS = 6
INPUT_SIZE = (256, 256, 3)
random_indices = np.random.choice(range(13000), NUM_TRAINING_SAMPLES)
directory = "/home/local/CYCLOMEDIA001/ebos/Downloads/SYNTHIA_RAND_CVPR16"
directory_images = "/home/Downloads/SYNTHIA_RAND_CVPR16/RGB"
directory_labels = "/home/Downloads/SYNTHIA_RAND_CVPR16/GT"
train_images = np.array(os.listdir(directory_images))
train_labels = np.array(os.listdir(directory_images))
train_images = train_images[random_indices]
train_labels = train_labels[random_indices]
train_images = [tf.read_file(os.path.join(directory_images, img)) for img in train_images]
train_labels = [tf.read_file(os.path.join(directory_labels, img)) for img in train_labels]
train_images = [tf.io.decode_image(img, channels=3) for img in train_images]
train_labels = [tf.io.decode_image(img, channels=3) for img in train_labels]
train_images = tf.image.resize_images(train_images, INPUT_SIZE[:2])
train_labels = tf.image.resize_images(train_labels, INPUT_SIZE[:2])
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
train_dataset = train_dataset.batch(3)
print(train_dataset.output_types)
This returns:
(tf.float32, tf.float32)
However, according to the documentation it should return a tensor of uint8's or uint16's. Why and where does the conversion take place?
I checked all intermediate steps with print statements, which doesn't tell me much as most intermediate lists are of class 'tensorflow.python.framework.ops.EagerTensor'.
AI: The types change to float32 because of tf.image.resize_images, which returns float images after interpolation.
Convert them back to uint as follows:
train_images = tf.cast(train_images, dtype=tf.uint8)
train_labels = tf.cast(train_labels, dtype=tf.uint8)
Output:
(tf.uint8, tf.uint8)
Versions of my code:
tensorflow version: 1.14.1-dev20190330
numpy version: 1.16.2 |
H: Compute specificity and sensitivity at certain thresholds
I have the following table with predictive probabilities and true class labels:
\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
P(T=1) &0.54& 0.23 & 0.78 & 0.88 & 0.26 & 0.41 & 0.90 & 0.45&0.19&0.36 \\ \hline
T&1&0 &0 &1 &0 &0& 1& 1& 0& 0\\ \hline
\end{array}
The question is to compute the specificity & sensitivity at the threshold of 0.5.
My attempt at answering this question:
Sensitivity = true positive rate[P(T=1) > 0.5]
= (0.54 + 0.88 + 0.9)/4 = 0.58
Specificity = 1-false positive rate[P(T=1) > 0.5]
= 1- [(0.78)/6]
= 0.87
Not sure if my working above is correct. I would appreciate if someone can guide me to the correct solution. Thanks.
AI: For threshold = $0.5$ we have:
Sensitivity = True Positive Rate
= (number of points with label $1$ and $P(T = 1)\geq 0.5$) divided by (number of points with label $1$)
= $\left|\{(1, 0.54), (1, 0.88), (1, 0.90)\}\right| / 4$ = $3/4$ = $0.75$
Specificity = 1 - False Positive Rate
= 1 - (number of points with label $0$ and $P(T = 1)\geq 0.5$) divided by (number of points with label $0$)
= $1 - \left|\{(0, 0.78)\}\right|/6$ = $1 - 1/6$ = $0.833$ |
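For reference, a short NumPy sketch that reproduces these numbers from the table above:
import numpy as np

p = np.array([0.54, 0.23, 0.78, 0.88, 0.26, 0.41, 0.90, 0.45, 0.19, 0.36])
t = np.array([1, 0, 0, 1, 0, 0, 1, 1, 0, 0])

pred = (p >= 0.5).astype(int)                                  # apply the 0.5 threshold
sensitivity = ((pred == 1) & (t == 1)).sum() / (t == 1).sum()  # 0.75
specificity = ((pred == 0) & (t == 0)).sum() / (t == 0).sum()  # 0.833...
print(sensitivity, specificity)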
H: How long would it take to become proficient in machine learning for someone with a non-statistical mathematical background?
I am currently a postdoc and my PhD was in applied mathematics in the area of numerical analysis and electromagnetic/acoustic wave propagation. There was no statistical element to my PhD, it was completely deterministic. I took several probability/statistics and one machine learning module 5-6 years ago during my BSc, and a stochastic ODE module during my MSc but that's about it..its been all applied mathematics since then.
I am considering leaving academia and entering industry and it seems like there are far more jobs in the area of data science/machine learning than there are for my skillset.
If I left academia and began 'studying up', how long do you think it could take me to gain the skills required for a data science/machine learning position in industry?
It seems like there is a very wide variety of science/machine learning techniques and obviously there isn't time to learn all or even most of them. So what approaches are absolutely essential for data science/machine learning in industry these days and what is the most efficient route to gaining these skills?
AI: As the market is in desperate need of people, and there are plenty of people with absolutely no formal training and no background in statistics, you are already perfectly qualified to spin this hype wheel and call yourself a "data scientist", too.
I'm not kidding. Just do some free online courses and you'll likely see that you can do all they ask for. Data science is about buzzword bingo, not about being smart at statistics or good at coding (unfortunately).
If you don't want to feel like an impostor, I suggest the following: find some important algorithm still missing from the big toolkits such as sklearn, R, Weka, ELKI. Implement it, and contribute it to some open-source toolkit. Then you can call yourself an "sklearn contributor" in your resume, which puts you ahead of 90% of self-proclaimed data scientists. What could make you a more proven data scientist / machine learner than having written code used by other data scientists / machine learners? |
H: Choosing a suitable learning rate based on validation or testing accuracy?
I have trained a neural network with different learning rates, ranging from 0.00001 to 0.1, and recorded the test and validation accuracy for each. There are 50 epochs for each learning rate; I note down the validation accuracy at the last epoch, while the training accuracy is computed throughout the process.
Learning rate: 0.00001
Testing accuracy: 0.5850
Validation accuracy at final epoch: 0.5950
Learning rate: 0.0001
Testing accuracy:0.6550
Validation accuracy at final epoch: 0.6400
Learning rate: 0.001
Testing accuracy: 0.6350
Validation accuracy at final epoch: 0.6900
Learning rate: 0.01
Testing accuracy: 0.6650
Validation accuracy at final epoch: 0.6700
Learning rate: 0.1
Testing accuracy: 0.2500
Validation accuracy at final epoch: 0.2100
How does testing and validation accuracy influence which learning rate is better? Would a higher validation accuracy determine the most suitable learning rate for the model?
Hence, is it correct that 0.001 is the most suitable learning parameter since it has the highest validation accuracy at the last epoch?
AI: You cannot select a parameter based on test accuracy, because the moment you do that, it becomes a validation accuracy as it has affected the final model. Therefore, you are always choosing based on validation accuracy.
As a result, the best result comes from learning rate 0.001, with the highest validation accuracy 0.6900. We have ignored Testing accuracy. If we select based on Testing accuracy, it becomes a validation accuracy.
Generally, a learning rate that is a loser at epoch 50 might be a winner at epoch 200. In other words, slower convergence may lead to higher accuracy in the end. Therefore, this issue is worth considering too.
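As a trivial sketch of the selection step (validation accuracies copied from your runs):
# validation accuracy at the final epoch, keyed by learning rate
val_acc = {1e-5: 0.5950, 1e-4: 0.6400, 1e-3: 0.6900, 1e-2: 0.6700, 1e-1: 0.2100}
best_lr = max(val_acc, key=val_acc.get)
print(best_lr)  # 0.001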
H: How do I fit a curve into non linear data?
I did an experiment in my Uni and I collected data $(ω,υ(ω))$ modeled by the equation:
$$ v(ω)=\frac{C}{\sqrt{(ω^2-ω_0^2 )^2 +γ^2 ω^2}} $$
where $ω_0$ is known. Do you know how I can fit a curve to my data $(ω,υ(ω))$, and how to find the parameter $γ$ through this process?
AI: I used mycurvefit.com for your problem. After creating an account (or maybe without one, if the number of parameters is 2 or less) it lets you fit your function with at most 20 data points, which was enough. Here is an example
That correctly finds the parameter (g) close to 6.
Here are 20 data points that I have generated for $C=10$, $\omega_0=10$, and $\gamma=6$:
w v(w)
5.4881 0.1294
7.1519 0.1538
6.0276 0.1366
5.4488 0.1290
4.2365 0.1164
6.4589 0.1429
4.3759 0.1176
8.9177 0.1746
9.6366 0.1716
3.8344 0.1132
7.9173 0.1655
5.2889 0.1271
5.6804 0.1319
9.2560 0.1744
0.7104 0.1004
0.8713 0.1006
0.2022 0.1000
8.3262 0.1706
7.7816 0.1636
8.7001 0.1737
Copy and paste them into the data sheet at the bottom.
P.S.: a closed-form answer cannot be derived, since the equation obtained by setting the derivative of the mean squared error with respect to $\gamma$ to zero is intractable; therefore a numerical optimizer such as gradient descent must be used, with the help of a computer (similar to what this site does).
EDIT:
I forgot to add noise to $v(\omega)$; here is a noisy ($\tilde{v}(\omega) = v(\omega)+\mathcal{N}(\mu=0, \sigma=0.01)$) version with the same parameters:
w v(w)
7.7132 0.1512
0.2075 0.1014
6.3365 0.1559
7.488 0.1483
4.9851 0.1039
2.248 0.0868
1.9806 0.106
7.6053 0.1848
1.6911 0.1136
0.8834 0.1174
6.8536 0.15
9.5339 0.1866
0.0395 0.0973
5.1219 0.1313
8.1262 0.1656
6.1253 0.1325
7.2176 0.1562
2.9188 0.1026
9.1777 0.1877
7.1458 0.1556
which gives $g=5.7$, meaning 20 data points are not enough for this level of noise or higher.
If you are more interested, you can learn a framework like TensorFlow to build the function and fit it to an arbitrarily large amount of data.
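If you prefer to stay in Python, here is a minimal sketch with scipy.optimize.curve_fit; the synthetic data and the initial guesses are illustrative, replace w and v with your measured arrays:
import numpy as np
from scipy.optimize import curve_fit

omega0 = 10.0                                   # known from the experiment

def v_model(w, C, gamma):
    return C / np.sqrt((w**2 - omega0**2)**2 + gamma**2 * w**2)

# synthetic data with the same parameters as above (C=10, gamma=6)
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 10, 50)
v = v_model(w, 10.0, 6.0) + rng.normal(0, 0.001, 50)

popt, pcov = curve_fit(v_model, w, v, p0=[5.0, 5.0], bounds=(0, np.inf))
print(popt)   # should come out close to [10, 6]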
H: An ambiguity in SVM equations about misclassified data
I have encountered an ambiguity in SVM equations.
As is stated in Chris Bishop's machine learning book, the optimization goal in SVM is to minimize this function:
$$C\sum\limits_{n = 1}^N {{\xi _n}} + {1 \over 2}{\left\| w \right\|^2}$$
Subject to these constraints (*):
$${\xi _n} \ge 0$$
$${t_n}y({x_n}) \ge 1 - {\xi _n}$$
where:
$$y({x_n}) = {w^T}{x_n} + b$$
so the corresponding Lagrangian function for this problem is:
$$L(w,b,a) = C\sum\limits_{n = 1}^N {{\xi _n}} + {1 \over 2}{\left\| w \right\|^2} - \sum\limits_{n = 1}^N {{a_n}\{ {t_n}y({x_n}) - 1 + {\xi _n}\} - } \sum\limits_{n = 1}^N {{\mu _n}{\xi _n}} $$
and the corresponding KKT conditions are given by (**):
$${a _n} \ge 0$$
$${{t_n}y({x_n}) - 1 + {\xi _n}} \ge 0$$
$${a_n}({t_n}y({x_n}) - 1 + {\xi _n}) = 0$$
$${\xi _n} \ge 0$$
$${\mu _n} \ge 0$$
$${\mu _n}{\xi _n} = 0$$
And if we set
$${{\partial L} \over {\partial {\xi _n}}} = 0$$
we get (***)
$${a_n} = C - {\mu _n}$$
As we know, that subset of data points that have
$${a_n} = 0$$
are not support vectors. But for this data points we have (from ***):
$${\mu_n} = C$$
and therefore (from **)
$${\xi _n} = 0$$
So here lies the problem. If a data point from this subset is in the wrong side of the decision boundary, then
$${t_n}y({x_n}) \le 0$$
and we will have (from *)
$${\xi _n} \ge 1$$
which is in an obvious conflict with
$${\xi _n} = 0$$
AI: Good point! Interesting consequence!
The problem is the $a_n=0$ assumption, i.e. assuming that misclassified points are not support vectors.
Here is the flow. Slack variable $\xi_n$ is defined as
$$\xi_n := |t_n - y(\boldsymbol{x}_n)|$$
where $t_n \in \{+1, -1\}$ is the true label, and $y(\boldsymbol{x}_n)$ is the prediction. Therefore, for a misclassified point (on the wrong side) we have $$\xi_n > 1$$ by definition. Given $\mu_n \xi_n = 0$, therefore $$\mu_n=0$$
and given $a_n=C - \mu_n$, therefore $$a_n = C > 0,$$ which means (since $a_n > 0$ holds only for support vectors) that
Every misclassified point is a support vector.
This is a nice consequence and should have been stated in the book.
Although, a remotely(!) related point has been stated in the book:
Points with $a_n = C$ can lie inside the margin and can either be
correctly classified if $\xi_n \leq 1$ or misclassified if $\xi_n > 1$. |
H: Aggregate NumPy array with condition as mask
I have a matrix $b$ with elements:
$$b =
\begin{pmatrix}
0.01 & 0.02 & \cdots & 1 \\
0.01 & 0.02 & \cdots & 1 \\
\vdots& \vdots & \ddots & \vdots \\
0.01 & 0.02 & \cdots & 1 \\
\end{pmatrix}
$$Through a series of vectorised calculations, $b$ is used to calculate $a$, another matrix with the same dimensions/shape as $b$.
$$a =
\begin{pmatrix}
3 & 5 & \cdots & 17 \\
2 & 6 & \cdots & 23 \\
\vdots& \vdots & \ddots & \vdots \\
4 & 3 & \cdots & 19 \\
\end{pmatrix}
$$
At this point it is important to note that the elements of $a$ and $b$ have a one to one correspondence. The different row values(let's call it $\sigma$) $0.01, 0.02...$ are different parameters for a series of simulations that I'm running. Hence for a fixed value of say $\sigma = 0.01$, the length of its column values correspond to the total number of "simulations" I'm running for that particular parameter. If you know python vectorisation then you'll start to understand what I'm doing.
It is known that higher the $\sigma$, the more the number of simulations for that particular sigma will have a value higher than 5 i.e. more of the matrix element along a column will have value bigger than 5. Essentially what I'm doing is vectorising $N$(columns) different simulations for $M$(rows) different parameters. Now I wish to find out the value of $\sigma$ for which the total number simulation that's bigger than 5, is bigger than 95% of the total simulation.
To put it more concisely, for a $\sigma$ of 0.02, each simulation would have a result of $$5, 6, ..., 3$$ with say a total of $N$ simulations. So let $$\kappa = \text{(the number of simulations that have values bigger than 5)},$$I wish to find out the FIRST $\sigma$ for which
$$\frac{\kappa}{N} > 0.95$$
i.e. the FIRST $\sigma$ for which the proportion of simulations whose value is $>5$ is bigger than 95%.
The code that I have written is:
# say 10000 simulations for a particular sigma
SIMULATION = 10000
# say 100 different values of sigma ranging from 0.01 to 1
# this is equivalent to matrix b in mathjax above
SIGMA = np.ones((EXPERIMENTS,100))*np.linspace(0.01, 1, 100)
def return_sigma(matrix, simulation, sigma):
"""
My idea here is I put in sigma and matrix and total number of simulation.
Each time using np.ndenumerate looping over i and j to compare if the
element values are greater than 5. If yes then I add 1 to counter, if no
then continue. If the number of experiments with result bigger than 5 is
bigger than 95% of total number of experiment then I return that particular
sigma.
"""
counter = 0
for (i, j), value in np.ndenumerate(matrix):
if value[i, j] > 5:
counter+=1
if counter/experiments > 0.95*simulation:
break
return sigma[0, j] # sigma[:, j] should all be the same anyway
"""Now this can be ran by:"""
print(return_sigma(a, SIMULATION, SIGMA))
which doesn't seem to quite work as I'm not well-versed with 2D slicing comprehension so this is quite a challenging problem for me. Thanks in advance.
EDIT
I apologise on not giving away my calculation as it's sort of a coursework of mine. I have generated a for 15 different values of $\sigma$ with 15 simulations each, and here they are:
array([[ 6, 2, 12, 12, 14, 14, 11, 11, 9, 23, 15, 3, 10, 12, 10],
[ 7, 7, 6, 9, 13, 8, 11, 17, 13, 8, 10, 16, 11, 16, 8],
[14, 6, 4, 8, 10, 9, 11, 14, 12, 14, 5, 8, 18, 29, 22],
[ 4, 12, 12, 3, 7, 8, 5, 13, 13, 10, 14, 16, 22, 15, 22],
[ 9, 8, 7, 12, 12, 6, 4, 13, 12, 12, 18, 20, 18, 14, 23],
[ 8, 6, 8, 6, 12, 11, 11, 4, 9, 9, 13, 19, 13, 11, 20],
[12, 8, 7, 17, 3, 9, 11, 5, 12, 24, 11, 12, 17, 9, 16],
[ 4, 8, 7, 5, 6, 10, 9, 6, 4, 13, 13, 14, 18, 20, 23],
[ 5, 10, 5, 6, 8, 4, 7, 7, 10, 11, 9, 22, 14, 30, 17],
[ 6, 4, 5, 9, 8, 8, 4, 21, 14, 18, 21, 13, 14, 22, 10],
[ 6, 2, 7, 7, 8, 3, 7, 19, 14, 7, 13, 12, 18, 8, 12],
[ 5, 7, 6, 4, 13, 9, 4, 3, 20, 11, 11, 8, 12, 29, 14],
[ 6, 3, 13, 6, 12, 10, 17, 6, 9, 15, 12, 12, 16, 12, 15],
[ 2, 9, 8, 15, 5, 4, 5, 7, 16, 13, 20, 18, 14, 18, 14],
[14, 10, 7, 11, 8, 13, 14, 13, 12, 19, 9, 10, 11, 17, 13]])
As you can see as $\sigma$ gets higher the number of matrix elements in each column for which it is bigger than 5 is higher.
EDIT 2
So now condition is giving me the right thing, which is an array of booleans.
array([[False, False, False, False, False, False, False, False, True, True],
....................................................................,
[False, False, False, False, False, False, False, True, True, True]])
So now the last row is the important thing here as it corresponds to the parameters, in this case,
array([[0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.5],
...........................................................,
[0.05, 0.1 , 0.15, 0.2 , 0.25, 0.3 , 0.35, 0.4 , 0.45, 0.5]])
Now the last row of condition is telling me that first True happens at $\sigma$=0.4 i.e. for which all the > 95% of the total simulations for that $\sigma$ have simulation result of > 5. So now I need to return the index of condition where the first True in the last row appeared i.e. [i, j]. Now doing b[i, j] should give me the parameter I want.(which I'm not sure if your next few line of codes are doing that.)
AI: I think I have understood your problem (mostly from the comments added in your function).
I'll show step by step what the logic is, building upon each previous step to get the final solution.
First we want to find all position where the matrix is larger than 5:
a > 5 # returns a boolean array with true/false in each position
Now we want to check each column (each $\sigma$) to see whether the proportion of matches (> 5) has reached your threshold of 0.95. We can divide by the number of simulations (the number of rows) to normalise:
(a > 5) / SIMULATION # returns the contribution of one match
These values need to sum to your threshold for a parameter to be valid.
Now we cumulatively sum down each column (i.e. over the simulations). As the True/False array is ones and zeros, we now have a running total of the proportion of matches for each parameter (each column).
np.cumsum((a > 5) / SIMULATION, axis=0) # still the same shape as a
Now we just need to find out where (in each column) the running proportion reaches your threshold. We can use np.where:
## EDIT: we only need to check that the cumsum is greater than 0.95 and not (0.95 * SIMULATION)
## because we already "normalised" the values within the cumsum.
condition = np.cumsum((a > 5) / SIMULATION, axis=0) > 0.95
mask = np.where(condition)
I broke it down now as the expressions are getting long.
That gave us the i and j coordinates of places where the condition was True. We just want to find the place where we first breached the threshold, so we want to find the indices for the first time in each row:
valid_rows = np.unique(mask[0], return_index=True)[1] # [1] gets the indices themselves
Now we can simply use these indices to get the first index in each valid row, where the threshold was breached:
valid_cols = mask[1][valid_rows]
So now you can get the corresponding values from the parameter matrix using these valid rows/columns:
params = b[valid_rows, valid_cols]
If this is correct, it should be significantly faster than your solution because it avoids looping over the 2d array and instead utilises NumPy's vectorised method and ufuncs. |
H: Any suitable way to describe the distributions of 2 Pandas Dataframes visually/graphically?
I am aware that a Pandas DataFrame's statistical description can easily be obtained using df.describe(). I have 2 dataframes of the same dimensions (i.e. 102 columns and 800000 rows for both). I would like to show visually that the 2 dataframes are very similar/have a statistically similar distribution.
I am not sure how this can be done visually/graphically in Python. The df.describe method provides count, mean, std, min, 25 %, 50 %, 75 % and max values for the dataframes, but it is difficult to infer similarity from these at first glance. I would prefer a visual/graphical method of any type (I am not sure if a boxplot can help?). Any suggested method along with a Minimum Workable Example would be highly appreciated. Cheers!
Presently,
# df1 is dataframe 1
# df2 is dataframe 2
df1.describe() # this gives statistical values of df1 dataframe
df2.describe() # this gives statistical values of df2 dataframe
AI: You can use the Kolmogorov-Smirnov Test. From Wikipedia
In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a
nonparametric test of the equality of continuous, one-dimensional
probability distributions that can be used to compare a sample with a
reference probability distribution (one-sample K–S test), or to
compare two samples (two-sample K–S test). It is named after Andrey
Kolmogorov and Nikolai Smirnov.
Good for us, there is already an implementation for this in Scipy
You can use this dummy code to test it
from scipy import stats
p_value = 0.05
rejected = 0
for col in range(df1.shape[1]):  # compare column i of df1 with column i of df2
    test = stats.ks_2samp(df1.iloc[:, col], df2.iloc[:, col])
if test[1] < p_value:
rejected += 1
print("We rejected",rejected,"columns in total")
I made the assumption that the columns that you want to compare are on the same index in both dataframes. If this is not true, you need to find another way. Maybe if they have the same name? You can do it like that.
from scipy import stats
p_value = 0.05
rejected = 0
for col in df1:
test = stats.ks_2samp(df1[col], df2[col])
if test[1] < p_value:
rejected += 1
print("We rejected",rejected,"columns in total")
If the K-S statistic is small or the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same.
Edit: I haven't tried it before, but maybe you can do something like that
Boxplot
from scipy import stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#create 2 dataframes with random integers. I don't have data to simulate your case.
df1 = pd.DataFrame(np.random.randint(0,100,size=(10000, 102)), columns=range(1,103))
df2 = pd.DataFrame(np.random.randint(0,100,size=(10000, 102)), columns=range(1,103))
#apply the Kolmogorov-Smirnov Test
p_value = 0.05
p_values = []
for col in df1.columns:
    test = stats.ks_2samp(df1[col], df2[col])
p_values.append(test[1])
#create the box plot
plt.boxplot(p_values)
plt.title('Boxplot of p-values')
plt.ylabel("p_values")
plt.show()
Heatmap
Another way is the heatmap.
import matplotlib.patches as mpatches
plt.rcParams["figure.figsize"] = 5,2
x = range(1,103)
y = np.array(p_values)
fig, (ax,ax2) = plt.subplots(nrows=2, sharex=True)
extent = [x[0]-(x[1]-x[0])/2., x[-1]+(x[1]-x[0])/2.,0,1]
im = ax.imshow(y[np.newaxis,:], cmap="plasma", aspect="auto", extent=extent)
ax.set_yticks([])
ax.set_xlim(extent[0], extent[1])
values = np.unique(np.round(y.ravel(),2))
colors = [ im.cmap(im.norm(value)) for value in values]
# create a patch (proxy artist) for every color
patches = [ mpatches.Patch(color=colors[i], label="Level {l}".format(l=values[i]) ) for i in range(len(values)) ]
# put those patched as legend-handles into the legend
plt.legend(handles=patches, bbox_to_anchor=(1.05, 2.2), loc=2, borderaxespad=0. )
ax2.plot(x,p_values)
plt.tight_layout()
plt.show()
Code for heatmap: stackoverflow |
H: Prove two equations are equivalent
I'm reading Neural Networks and Deep Learning and running into trouble with the math. One of the exercises says:
Write out $a'=\sigma (wa + b)$ in component form, and verify that it gives the same result as the rule
$$\frac{1}{1 + \exp(-\sum_{j}w_jx_j - b)}$$for computing the output of a sigmoid neuron.
Not even sure where to start here. Can anyone help me out? I'd really appreciate a detailed explanation.
AI: In your first equation, $\textbf{w}$ is the weight vector $(w_1, w_2, \ldots, w_j)$ and $\textbf{a}$ is the input vector $(x_1, x_2, \ldots, x_j)$. Writing $\textbf{wa}$ in component form means taking the dot product of these two vectors, which is $\sum_{j}w_jx_j$. Now, you just apply the sigmoid function to get your final answer:$$\sigma (z) = \frac{1}{1 + \exp(-z)}$$ $$\Rightarrow\sigma (\textbf{wa} + b) = \frac{1}{1 + \exp(-\sum_{j}w_jx_j - b)}$$
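A quick numerical check (with made-up values) that the vector form and the component form agree:
import numpy as np

w = np.array([0.2, -0.5, 1.0])
x = np.array([1.0, 2.0, 3.0])
b = 0.1

z_vector = w @ x + b                                          # vector form
z_component = sum(w[j] * x[j] for j in range(len(w))) + b     # component form
print(1 / (1 + np.exp(-z_vector)), 1 / (1 + np.exp(-z_component)))  # identical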
H: How to add date column in python pandas dataframe
I want to add a date column (starting from 1/1/1979 and running to the end of the data) to a pandas data frame.
Currently, my data frame looks like this:
0 1 2 3 4
0 1 654 31.457899 76.930397 0.2
1 1 711 31.507900 76.880402 0.2
2 1 712 31.507900 76.930397 0.2
3 1 713 31.507900 76.980400 0.2
4 1 714 31.507900 77.030403 0.2
5 1 715 31.507900 77.080399 0.2
6 1 716 31.507900 77.130402 0.2
7 1 717 31.507900 77.180397 0.2
8 1 718 31.507900 77.230400 0.2
Now I want to add one more column of data starting from 1/1/1979. How do I do that?
AI: You can use the built-in date_range function from pandas library to generate dates and then add them to your dataframe. You can use it in the following way:
In [9]: import pandas as pd
In [10]: df = pd.DataFrame({'column1':[34,54,32,23,26]})
In [11]: df
Out[11]:
column1
0 34
1 54
2 32
3 23
4 26
In [12]: df['date'] = pd.date_range(start='1/1/1979', periods=len(df), freq='D')
In [13]: df
Out[13]:
column1 date
0 34 1979-01-01
1 54 1979-01-02
2 32 1979-01-03
3 23 1979-01-04
4 26 1979-01-05
Here I have incremented the dates day-wise per row. If you want to generate dates in some other way, check out the official documentation here. |
H: Why do I need pre-trained weights in transfer learning?
I am using a Mask-RCNN. I first chose the resnet50 backbone then downloaded COCO pre-trained weights.
Why do I need pre-trained weights for transfer learning?
The transfer learning approach is to train a base network and then copy its first layers to the target network. My base network is resnet50 and I copied the first layers to Mask RCNN. So, why do I need pre-trained weights, for example, coco pre-trained weights?
I understand that I copy layers and pre-trained weights from resnet.
Thanks in advance.
AI: When you are calling a pre-trained model (resnet50 in your case) for performing transfer learning, only the model's architecture is actually called.
On its own, this architecture cannot perform any meaningful computation on the data or predict any output unless it is trained. For that, you will need the weights (the variables the model learned on some previous dataset).
Weights are the mathematical coefficients the model applies to the input to produce the required output, much like the coefficients in a linear algebra problem. This is the actual reason why the weights need to be loaded along with the model.
Even if you do not load any weights into your model, you can make the model trainable and train it from scratch to learn your own weights that fit your data properly, rather than fine-tuning the pre-trained ones. Read more about the same here
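As an illustration (this is a Keras sketch, not the exact Mask-RCNN setup from the question), the same ResNet50 architecture can be created with or without pre-trained weights:
import tensorflow as tf

# architecture only: random initial weights, must be trained from scratch
backbone_random = tf.keras.applications.ResNet50(weights=None, include_top=False)

# architecture plus weights learned on ImageNet: a useful starting point for transfer learning
backbone_pretrained = tf.keras.applications.ResNet50(weights="imagenet", include_top=False)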
H: Multi-output regression problem with Keras
number of features: 12 , -15 < each feature < 15
number of targets: 6 , 0 < each target < 360
number of examples: 262144
my normalization: I normalized the features so that they are between 0 and 1. I normalized the targets so that they are between 1 and 10.
This is the model that I am using:
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(6, activation='linear')
])
model.compile(optimizer="rmsprop", loss='mean_squared_error', metrics=['accuracy'])
model.fit(training_x, training_y, epochs=10, batch_size=100)
This is the best result that I have got (training):
235929/235929 [==============================] - 8s 33us/step - loss: 8.9393e-04 - acc: 0.6436
testing:
loss: 0.00427692719418488
acc: 0.033187106618348276
I get almost 0% accuracy on the test set! I need a model to solve this ML problem.
AI: Accuracy is a metric for classification, not regression.
$$Accuracy = \frac{\text{Correct classification}}{\text{Number of classifications}}$$
So when you use accuracy for regression, only predictions where actual_label == predicted_label holds exactly are counted as correct classifications. That will happen quite rarely when you are doing regression, resulting in an accuracy that is close to zero.
Instead, you should use something like mean absolute error or mean squared error as the validation metric for regression.
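For instance, a sketch based on the model in the question, compiled with regression metrics instead of accuracy:
model.compile(optimizer="rmsprop",
              loss="mean_squared_error",
              metrics=["mae", "mse"])   # mean absolute / mean squared error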
H: How to "reshape" into square matrix for numpy.linalg.solve()?
I'm trying to find the intersection of lines $y=a_1x+b_1$ and $y=a_2x+b_2$ using numpy.linalg.solve(). What I can't get my head around is how to correctly make $A$ a square matrix for solve() to work. I'm familiar with solving linear equation systems, but there's something here I don't get.
What I'd like to do is:
def meeting_lines(a1, b1, a2, b2):
a = np.array([[a1], [a2]])
b = np.array([b1, b2])
return np.linalg.solve(a, b)
def main():
a1=1
b1=4
a2=3
b2=2
y, x = meeting_lines(a1, b1, a2, b2)
Where I expect $y=-3$ and $x=1$. However, this fails with numpy.linalg.LinAlgError: Last 2 dimensions of the array must be square.
Thank you very much for your help, trying to figure this out has messed up my day already!
AI: You should formulate your lines as follows to have $(x, y)$ as unknowns:
$$\begin{align}
\left.\begin{matrix}
a_1x-y=-b_1\\
a_2x-y=-b_2
\end{matrix}\right\}
\rightarrow
\overbrace{
\begin{bmatrix}
a_1& -1\\
a_2& -1
\end{bmatrix}
}^{\boldsymbol{a}}
\overbrace{
\begin{bmatrix}
x\\
y
\end{bmatrix}
}^{\boldsymbol{x}}
=
\overbrace{
\begin{bmatrix}
-b_1\\
-b_2
\end{bmatrix}
}^{\boldsymbol{b}}
\end{align}$$
Therefore, the code should be:
import numpy as np
def meeting_lines(a1, b1, a2, b2):
a = np.array([[a1, -1], [a2, -1]])
b = np.array([-b1, -b2])
return np.linalg.solve(a, b)
a1=1
b1=4
a2=3
b2=2
x, y = meeting_lines(a1, b1, a2, b2)
print(x, y)
which outputs:
1.0 5.0 |
H: Why margin-based ranking loss is reversed in these two papers?
For knowledge graph completion, it is very common to use a margin-based ranking loss.
In the paper, the margin-based ranking loss is defined as
$$ \min \sum_{(h,l,t)\in S} \sum_{(h',l,t')\in S'}[\gamma + d(h,l,t) - d(h',l,t')]_+$$
Here $d(\cdot)$ is the predictive model, $(h,l,t)$ means a positive training instance, and $(h',l,t')$ means a negative training instance corresponding to $(h,l,t)$.
However, Andrew's paper defines it as
$$ \min \sum_{(h,l,t)\in S} \sum_{(h',l,t')\in S'}[\gamma + d(h',l,t') - d(h,l,t)]_+$$
It seems that they switch the terms $d(h',l,t')$ and $d(h,l,t)$.
My question is:
does it matter to switch $d(h',l,t')$ and $d(h,l,t)$? It seems like a strange difference in definition. Thanks
AI: In this paper, $d$ denotes "dissimilarity" which should be minimized for positive samples.
In this paper, $d$ ($g$ in the paper) denotes "similarity" which should be maximized for positive samples (or equivalently $-g\left(T^{(i)}\right)$ should be minimized) |
H: What loss function to use for imbalanced classes (using PyTorch)?
I have a dataset with 3 classes with the following items:
Class 1: 900 elements
Class 2: 15000 elements
Class 3: 800 elements
I need to predict class 1 and class 3, which signal important deviations from the norm. Class 2 is the default “normal” case which I don’t care about.
What kind of loss function would I use here? I was thinking of using CrossEntropyLoss, but since there is a class imbalance, this would need to be weighted I suppose? How does that work in practice? Like this (using PyTorch)?
summed = 900 + 15000 + 800
weight = torch.tensor([900, 15000, 800]) / summed
crit = nn.CrossEntropyLoss(weight=weight)
Or should the weight be inverted? i.e. 1 / weight?
Is this the right approach to begin with or are there other / better methods I could use?
Thanks
AI: What kind of loss function would I use here?
Cross-entropy is the go-to loss function for classification tasks, either balanced or imbalanced. It is the first choice when no preference is built from domain knowledge yet.
This would need to be weighted I suppose? How does that work in practice?
Yes. Weight of class $c$ is the size of largest class divided by the size of class $c$.
For example, If class 1 has 900, class 2 has 15000, and class 3 has 800 samples, then their weights would be 16.67, 1.0, and 18.75 respectively.
You can also use the smallest class as the numerator, which gives 0.889, 0.053, and 1.0 respectively. This is only a re-scaling, the relative weights are the same.
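In code, a sketch along the lines of your snippet, but with the inverted weights:
import torch
import torch.nn as nn

counts = torch.tensor([900., 15000., 800.])
weight = counts.max() / counts           # tensor([16.67,  1.00, 18.75])
crit = nn.CrossEntropyLoss(weight=weight)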
Is this the right approach to begin with or are there other / better
methods I could use?
Yes, this is the right approach.
EDIT:
Thanks to @Muppet, we can also use class over-sampling, which is equivalent to using class weights. This is accomplished by WeightedRandomSampler in PyTorch, using the same aforementioned weights. |
H: Predicting discrete value problem in regression or classification
In machine learning, regression algorithms attempt to estimate the mapping function (f) from the input variables (x) to numerical or continuous output variables (y).
I have one use case where I predict shift_id. Shift_id is an ID value given to different city locations.
As per my understanding, this is a regression problem because it predicts a numerical value. Is this right?
Also, can precision, recall and F1 measure be calculated for a regression problem?
AI: IDs are categorical, not numeric. You should be treating this as a multi-class classification problem. Your IDs are locations, and a location is a class. The ID is just an identifier for the class.
Since you have a classification problem you should be using precision, recall and f1. However, if it was regression you would have been using mean squared error, mean absolute error and possibly something else. |
H: What is the range of values of the expected percentile ranking?
I'm currently reading
Hu, Koren, Volinsky: Collaborative Filtering for Implicit Feedback Datasets
One thing that confuses me is the "expected percentile ranking", an function the authors define to evaluate the goodness of their recommendations. They define it in the Evaluation methodology on page 6 as:
$$\overline{\text{rank}} = \frac{\sum_{u,i} r^t_{ui} \text{rank}_{ui}}{\sum_{u,i} r^t_{ui}}$$
where $u$ is a user, $i$ is an item (e.g. a TV show), $r_{ui} \in [0, \infty)$ is the amount how much user $u$ did watch show $i$. $\text{rank}_{ui} \in [0, 1]$ is the percentage rank of item $i$ for user $u$. For example, it is 0 if for user $u$ the item $i$ has the highest $r$ value and 1 if the item $i$ for user $u$ has the lowest $r$ value.
I'm not super sure if I understood it correctly.
The authors write that lower values of $\overline{\text{rank}}$ are more desirable and for random predictions would lead to an expected value of $\overline{\text{rank}}$ of 0.5.
Examples
Assume there is only one item. In this case $\text{rank} = 0$. Makes sense, as there cannot be any predictions.
Assume there is only one user and two items with $r_{1,1} = 1$ and $r_{1,2} = 2$. Then:
$$\overline{\text{rank}} = \frac{1 \cdot \text{rank}_{1, 1} + 2 \cdot \text{rank}_{1, 2}}{1+2}$$
This means $\overline{\text{rank}} \in \{2/3, 1/3\}$.
If there is only a single user and all $|I|$ values of $r_{ui}$ are the same, then $\overline{\text{rank}} = \sum_{ui} \text{rank}_{ui} = \frac{|I|}{2}$
Questions
Is my understanding of the metric correct? Especially my last example and the statement by the authors that $\overline{\text{rank}} \geq 50\%$ indicated an algorithm is no better than random seem off.
What is $t$?
AI: What is $t$?
It means observed $r_{ui}$ in the one-week test set (page 6-left).
Is my understanding of the metric correct?
First two examples are correct. Assuming user-item relation $r_{ui}^t$ is constant $a$ for all items in the test set, and predicted ranks are uniform across $[0, 1]$, then, the third one would be:
$$\overline{\text{rank}} = \frac{\sum_{u,i} r^t_{ui} \text{rank}_{ui}}{\sum_{u,i} r^t_{ui}}=\frac{\sum_{u,i} a \text{ rank}_{ui}}{\sum_{u,i} a}=\frac{1}{|I|}\sum_{u,i} \text{ rank}_{ui}=\frac{1}{|I|}\frac{|I|}{2}=\frac{1}{2}$$
This makes sense. Items are identical to the user, therefore no model can do better than random guessing, since there is no observed preference to help the model favor one item over the other. Of course, another assumption here is that training (4 weeks) and test (next week) sets are from the same distribution. |
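For completeness, once you have the observed $r^t_{ui}$ and the predicted percentile ranks, the metric is a one-liner; here it is checked against the two-item example from the question:
import numpy as np

r = np.array([1.0, 2.0])       # observed r_ui in the test set
rank = np.array([1.0, 0.0])    # model ranks item 2 first (rank 0) and item 1 last (rank 1)
rank_bar = (r * rank).sum() / r.sum()
print(rank_bar)                # 1/3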
H: Connect Orange 3.20 to postgresql database
I installed Orange 3.20 on Windows 7. It works so far; the problem is connecting it to a server-based Postgres database. While the connection can be made at the moment, when you try to load a table the message "missing extension quantile" comes up. A few problems come up with this message. It seems like it is not possible to install this extension on a Windows server without a lot of stress. The extension seems not to be up to date (last release 2015) and is not compatible with current PostgreSQL versions (which will cause trouble for future updates). Does anyone know if Orange will support this in the future, and a convenient way to use Orange on Windows 7?
AI: To summarize my linked answer. While you may be able to connect Orange to Postgresql, Orange does not handle large datasets very well (it will cause it to crash). Generally, it does not support datasets the size that you would be storing in SQL sized databases. Personally, I think Orange works, and is most effective, with smaller sized datasets for illustrative tasks and exploratory data analysis. |
H: neural networks error function: is global minimum desirable?
In "Elements of statistical learning" page 395 the authors state that, relative to R(θ), the regression/classification error function in a neural network such as a multi layer perceptron:
Typically we don’t want the global minimizer of R(θ), as this is
likely to be an overfit solution. Instead some regularization is
needed: this is achieved directly through a penalty term, or
indirectly by early stopping. Details are given in the next section.
However, in the backpropagation section, when momentum is described, Figure 9 shows how the backpropagation process should steer the error function towards its global minimum, avoiding any local minima found along the way.
How are these two statements compatible? The only thing I can think of is that we want to get the global minimum for each coefficient error (as computed during the backprop process) while not adjusting for the "global" error function. Am I interpreting this correctly?
AI: The figure describing momentum is a bit misleading because it only considers a very simple case. Usually momentum will help you not to get stuck too early in a local minimum, so you end up in a better but still local minimum.
Momentum can help you converge faster, but there are no guarantees to end up in a global minimum. In popular use cases of neural networks like image classification etc. the error function will be extremely complicated, so it will always be infeasible to reach a global minimum using gradient based methods. But practice has shown that driving down the error to a local minimum still leads to good results.
Also, momentum is compatible with other forms of regularization. You can apply momentum to any regularized cost function. The statement in Elements of Statistical Learning is more of general nature, meaning that even if you could achieve the global minimum it wouldn't be desirable because it is most likely not the solution that will give you the best generalization (even though it has the best training(!) error). |
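For reference, the standard (heavy-ball) momentum update that such figures illustrate is (notation may differ slightly from the book):
$$v_{t+1} = \mu v_t - \lambda \nabla R(\theta_t), \qquad \theta_{t+1} = \theta_t + v_{t+1}$$
where $\mu \in [0, 1)$ is the momentum coefficient and $\lambda$ the learning rate. The accumulated velocity $v$ can carry the iterate through small bumps and shallow local minima, but it gives no guarantee of reaching the global minimum.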
H: Effects of L2 loss and smooth L1 loss
Can any one tell me what the effects of $L_2$ loss and smooth $L_1$ loss (i.e. Huber loss with $\alpha = 1$) are, and when to use each of them ?
AI: First, Huber loss only works in one dimension, as it requires $$\left\|\boldsymbol{a}\right\|_2=\left\|\boldsymbol{a}\right\|_1=\delta$$at the intersection of the two functions, which only holds in one dimension. Norms $L_2$ and $L_1$ are defined for vectors. Therefore, in my opinion, Huber loss is better compared with squared loss rather than $L_2$ loss, since "$L_2$" presumes a multi-dimensional input whereas "squared" does not.
Huber loss is the same as squared loss for differences less than $\delta$, and the same as absolute loss for differences larger than $\delta$, i.e.
$$\begin{align*}
L_{\delta}(y_n, f_{\theta}(\boldsymbol{x}_n))
=\left\{
\begin{matrix}
\frac{1}{2}\left(y_n - f_{\theta}(\boldsymbol{x}_n)\right)^2 & \left|y_n - f(\boldsymbol{x}_n)\right| \leq \delta,\\
\delta\left|y_n - f_{\theta}(\boldsymbol{x}_n)\right| - \frac{1}{2}\delta^2, & \text{otherwise.}
\end{matrix}
\right.
\end{align*}$$
where $y_n$ is the target of data point $n$, and $f_{\theta}(\boldsymbol{x}_n)$ is model's prediction. Note that $L_{\delta}$ has nothing to do with $L_p$ norm, despite the similar notations.
Because of this definition, for large differences due to outliers, gradient of loss function remains constant $\pm \delta$, the same as absolute loss, i.e.
$$\frac{\partial \delta\left|y_n - f_{\theta}(\boldsymbol{x}_n)\right|}{\partial \theta_i} = \pm \delta \frac{\partial f_{\theta}(\boldsymbol{x}_n)}{\partial \theta_i}$$
compared to squared loss, where gradient increases with the difference, i.e.
$$\frac{\partial \frac{1}{2}\left(y_n - f_{\theta}(\boldsymbol{x}_n)\right)^2}{\partial \theta_i} = -\left(y_n - f_{\theta}(\boldsymbol{x}_n)\right)\frac{\partial f_{\theta}(\boldsymbol{x}_n)}{\partial \theta_i}$$
which leads to large contributions from outliers when we update a parameter solely based on squared loss as follows:
$$\begin{align*}
{\theta}'_i &=\theta_i + \lambda \sum_n \frac{\partial f_{\theta}(\boldsymbol{x}_n)}{\partial \theta_i}\left(y_n - f_{\theta}(\boldsymbol{x}_n)\right) \\
&= \theta_i + \lambda\sum_{n \notin \text{outliers}} \frac{\partial f_{\theta}(\boldsymbol{x}_n)}{\partial \theta_i}(\text{small}) +\lambda\sum_{n \in \text{outliers}} \frac{\partial f_{\theta}(\boldsymbol{x}_n)}{\partial \theta_i}(\text{large})
\end{align*}$$
It is worth noting that, here, outliers are irregularities in the joint input-output space $(\boldsymbol{x}_n, y_n)$, not necessarily just in the input space $\boldsymbol{x}_n$ as we usually visualize in unsupervised tasks. For example, in a linear trend, none of $(x, y)=\{(1, 2)$, $(5, 10)$, $(10, 20)\}$ are outliers, but $(1, 10)$ is, which leads to large difference $(10 - 2)$ when model expects (predicts) $f_{\theta}(1)=2$.
When to use each of them?
Reminding that we are only talking about one-dimensional targets, Huber loss is a complete replacement for squared loss to deal with outliers. However, the challenge is the choice of $\delta$, which makes it a less favorable "first choice" when we are not yet familiar with the problem at hand. Therefore, we may start with squared loss (or other losses), and after a while try to experiment with Huber loss for different values of $\delta$. |
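A minimal NumPy sketch of the loss above, next to squared loss (the residuals and δ are arbitrary):
import numpy as np

def huber(residual, delta=1.0):
    # Huber loss of the residual y_n - f_theta(x_n)
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * residual**2, delta * a - 0.5 * delta**2)

r = np.array([0.1, 0.5, 3.0, 30.0])   # the last value plays the outlier
print(0.5 * r**2)                     # squared loss explodes for the outlier
print(huber(r, delta=1.0))            # Huber loss grows only linearly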
H: Is it possible to achieve activity recognition on a single image?
If so, can you give me resources?
AI: The short answer: yes, and even with good performance, although not with the same accuracy as with multiple frames.
There are actually many different architectures that rely on classifying each single frame as part of their process (you can treat the overall model as an ensemble-model of multiple frames).
There is a great article by "Schindler et al." about exactly this question:
Action Snippets: How many frames does human action recognition require? |
H: Sentence similarity using Doc2vec
I have a list of 50k sentences such as : 'bone is making noise', 'nose is leaking' ,'eyelid is down' etc..
I'm trying to use Doc2Vec to find the most similar sentence from the 50k given a new sentence.
tagged_data = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i)]) for i, _d in enumerate(data)]
max_epochs = 100
vec_size = 20
alpha = 0.025
model = Doc2Vec(size=vec_size,
alpha=alpha,
min_alpha=0.025,
min_count=1,
dm =0)
model.build_vocab(tagged_data)
for epoch in range(max_epochs):
print('iteration {0}'.format(epoch))
model.train(tagged_data,
total_examples=model.corpus_count,
epochs=model.iter)
# decrease the learning rate
model.alpha -= 0.0002
# fix the learning rate, no decay
model.min_alpha = model.alpha
test_data = word_tokenize("The nose is leaking blood after head injury".lower())
v1 = model.infer_vector(test_data)
#print("V1_infer", v1)
similar_doc = model.docvecs.most_similar(positive=[model.infer_vector(test_data)],topn=3)
for i in range(0,len(similar_doc)):
print(tagged_data[int(similar_doc[i][0])],similar_doc[i][1])
For the sentence "The nose is leaking blood after head injury" I would like to get the sentence with the highest similarity score (I guess it will return sentences with words like leak, or even synonyms like dripping?). But the sentences I get back are unrelated and change on each call to model.infer_vector(test_data).
Any idea about what is wrong?
AI: Doc2Vec (and word vectors) needs a significant amount of data to learn a useful vector representation. 50k sentences is not sufficient for this. To overcome this, you can feed word vectors as initial weights into the Embedding layer of the network.
For example, code from following question :
How to implement LSTM using Doc2Vec vectors?
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Flatten, Dense

model_doc2vec = Sequential()
# vocabulary_dim: vocabulary size, longest_document: maximum sequence length,
# training_weights: matrix of word vectors with shape (vocabulary_dim, 100)
model_doc2vec.add(Embedding(vocabulary_dim, 100, input_length=longest_document, weights=[training_weights], trainable=False))
model_doc2vec.add(LSTM(units=10, dropout=0.25, recurrent_dropout=0.25, return_sequences=True))
model_doc2vec.add(Flatten())
model_doc2vec.add(Dense(3, activation='softmax'))
model_doc2vec.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Output of "Flatten" layer will be vector representation of a sentence / document.
Article with example code. |
H: Is image sharpening a good idea for data augmentation?
I'm training segmentation networks and while the dataset is somehow decent (~5k images) I wanted to augment it, so far I'm trying:
RandomFlip
RandomRotate
RandomBrightness changes
RandomShadows
Due to constraints of the problem I can't do random crops or shifts. Other than those augmentations I was looking into image sharpening, and was wondering if it could be a good candidate for dataset augmentation. I could find it in some web articles and many augmentation projects on Github, but I can't find any solid papers that refer to it as a possible augmentation technique. Does anyone have some experience/tips on the matter?
AI: First, I have to mention that $5k$ images cannot be considered a large dataset for training a deep neural network. Anyway, about the question: in general, yes you can, but you have to be aware of some points. Data augmentation can be helpful, or it can damage your predictions entirely. The reason is that whenever you utilize data augmentation, you are somehow changing and manipulating the distribution of the data at hand. Neural networks are usually trained under the assumption that samples are i.i.d., so your data samples should not become dependent on each other. There is also another perspective: your training data should have the same distribution as your test data. If you are sure that the images you are going to face at test time can be sharpened ones, then go ahead and apply sharpening; but if you are sure that your sensors are placed in a noisy situation, something you can encounter in self-driving cars, you can be pretty sure that your raw data is blurry, and it is almost impossible to have sharp images due to velocity, unless you take a preprocessing step before feeding the data to your network.
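If you do decide to try it, a random-sharpness augmentation is a small sketch with Pillow (the factor range is an arbitrary choice, not a recommendation):
import random
from PIL import Image, ImageEnhance

def random_sharpen(img: Image.Image) -> Image.Image:
    factor = random.uniform(0.8, 2.0)   # 1.0 = original, >1 sharper, <1 slightly blurrier
    return ImageEnhance.Sharpness(img).enhance(factor)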
H: How to measaure the similarity between two series?
I'm confused about how to measure the similarity between two time series of the same length. For example, both time series are 2 hours long, with a point every 5 minutes. I really want to know which distance algorithm I should use. I have tried the Euclidean distance, but it didn't work well on this type of data. Should I look for some point-to-point distance algorithm?
AI: A classical approach for time series similarity computation is Dynamic Time Warping (DTW).
From your description, it may suit your use case:
In general, DTW is a method that calculates an optimal match between two given sequences (e.g. time series) with certain restriction and rules:
Every index from the first sequence must be matched with one or more indices from the other sequence, and vice versa
The first index from the first sequence must be matched with the first index from the other sequence (but it does not have to be its only match)
The last index from the first sequence must be matched with the last index from the other sequence (but it does not have to be its only match)
The mapping of the indices from the first sequence to indices from the other sequence must be monotonically increasing, and vice versa, i.e. if $j>i$ are indices from the first sequence, then there must not be two indices $l>k$ in the other sequence, such that index $i$ is matched with index $l$ and index $j$ is matched with index $k$, and vice versa
Some alternatives to DTW can be found in this literature review on time series dissimilarity measures. |
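If you want to try it quickly without extra dependencies, here is a minimal (unoptimized) NumPy sketch of the classic DTW dynamic program for two 1-D series:
import numpy as np

def dtw_distance(s, t):
    # classic O(n*m) dynamic-programming DTW between two 1-D sequences
    n, m = len(s), len(t)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(s[i - 1] - t[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# two series of equal length (e.g. 24 points = 2 hours at 5-minute steps)
a = np.sin(np.linspace(0, 3, 24))
b = np.sin(np.linspace(0.3, 3.3, 24))   # a slightly shifted version of a
print(dtw_distance(a, b))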
H: Training an acoustic model for a speech-to-text engine
What are the steps for training an acoustic model? The format of the data (the audio) includes its length and other characteristics. If anyone could provide a simple example of how to train an acoustic model, it would be greatly appreciated.
AI: This paper has details on how to prepare Audio data (and merge it with Language Models) for speech-to-text :
http://slazebni.cs.illinois.edu/spring17/lec26_audio.pdf
Slide 16 has very high level description of feature engineering.
Following papers provide more detailed analysis of audio processing for speech to text :
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.701.6802&rep=rep1&type=pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.689.4627&rep=rep1&type=pdf
https://www.ijariit.com/manuscripts/v4i3/V4I3-1646.pdf |
H: Rearranging data frame from column names to key value pairs
If I have data like this in R:
> a = data.frame(a = c(1, 2),
b = c(3, 4))
> a
a b
1 1 3
2 2 4
but I would like to have it like this:
> b = data.frame(k = c("a", "a", "b", "b"),
+ v = c(1, 2, 3, 4))
> b
k v
1 a 1
2 a 2
3 b 3
4 b 4
How can I convert the data this way?
AI: This is a simple gather() using package tidyr:
> library(tidyr)
>
> gather(a, key = "k", value = "v")
k v
1 a 1
2 a 2
3 b 3
4 b 4 |
H: Certainity of a classifier
How do I build a classifier that by default predicts class 1, but if the classifier believes with 80% certainty that an instance belongs to class 0, classifies it as 0? How can I check how certain a classifier is of its prediction?
AI: Many classifiers will give the option to get predicted probability. Then you can just put a threshold. Here is how it can be done in with sklearn:
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
# Make a dataset
X, y = make_classification(n_samples=1000, n_features=4,
n_informative=2, n_redundant=0,
random_state=0, shuffle=False)
clf = RandomForestClassifier(n_estimators=100, max_depth=2,
random_state=0)
clf.fit(X, y)
# predict 1 unless the probability of class 0 exceeds 0.8, in which case predict 0
predictions = 1 - (clf.predict_proba(X)[:, 0] > 0.80) |
H: Column With Many Missing Values (36%)
Hello this is my first machine learning project, I got a dataset with 18.000 rows and I have a column with 4244 values missing.
I don't know why the values are missing since when it's appropriate there's a 0 value in it.
The dtype of the column is int64 I consider this column usable and would like to implement it to the model.
Could you please help me with how to deal with this problem, or lead my to a resource to teach me how to deal with this ?
AI: I don't know why the values are missing since when it's appropriate there's a 0 value in it.
The first step is to check with some SME (Subject Matter Expert) or the Data Custodian. I can't tell you how many times I've built a model/started analysis just to figure out that the data was wrong. Try to figure out the reason behind the Nulls/0.
Besides that there are many ways to handle missing data a few are below:
Remove records with a missing value in this column. If this is an important column for your model, it may be best to get rid of those records, depending on the shape (rows x cols/features) of your dataset. Don't throw off the results of your model because of some questionable data (even if you use some of the methods below).
Mean/Median/Mode Impute - A common method of handling missing data is to fill the missing values with the column's mean or median (rarely do you use the Mode).
Fill the values that creates a normal distribution - it depends on your data, but filling the values so you get normally distributed column data can be beneficial
Try all these methods and more - When you start modeling you'll learn to "throw stuff at the wall" and see what sticks. Look at your model results, talk with SMEs, and think about what makes sense. Some ways of handling missing data will work better with different models/datasets. Experiment and have fun! |
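A few of these options in pandas (a sketch with toy data; "col" stands for your column name):
import numpy as np
import pandas as pd

df = pd.DataFrame({"col": [1, 0, np.nan, 4, np.nan, 7]})

dropped     = df.dropna(subset=["col"])                # 1) drop rows with a missing value
mean_fill   = df["col"].fillna(df["col"].mean())       # 2) mean impute
median_fill = df["col"].fillna(df["col"].median())     # 2) median impute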
H: Efficient self study plan
I am hoping for a bit of guidance from experienced practitioners / academics.
I want to work through the Bishop ML book, but have minimal background.
What is the fastest way to get the pre-requisites (specific books would be appreciated)?
From searching around I found this potential self-study path:
Statistical Inference - Casella / Berger
Probability Theory and Examples - Durrett
Linear Algebra - Hoffman / Kunze
I checked these books out from the library, but they will take me over a year to work through thoroughly, so it does not seem to be practical.
I have searched around on the internet, but most of the advice doesn't list any specific books, just what subjects I should learn.
About me
Graduated in an unrelated discipline many years ago
Willing to dedicate many hours to this (I am doing this to build a background for a degree in machine learning)
I can code pretty well due to my job
AI: I'm not super familiar with Bishop, but it looks like Durrett, which deals with measure-theoretic probability, would be overkill. Plus you would probably have to add real analysis to your prerequisites. I would just work through the first 4-5 Casella chapters on probability and random variables.
You probably also don't need to go through the whole linear algebra book to get started. Just make sure you understand the basic matrix operations well enough to deal with multivariate calculus (ex. taking multivariate derivatives and computing multivariate integrals).
The second half of Casella would likely be useful, but I suspect not strictly necessary.
I think if you can handle those things you'll have most of Bishop covered; you can always pick up more as needed. |
H: How to measure the similarity between two images?
I have two group images for cat and dog. And each group contain 2000 images for cat and dog respectively.
My goal is try to cluster the images by using k-means.
Assume image1 is x, and image2 is y.Here we need to measure the similarity between any two images. what is the common way to measure between two images?
AI: Check this handout!
Well, there a few so... lets go:
Given two images $J[x,y]$ and $I[x,y]$ with $(x,y) \in N^{N \times M}$...
A - Used in template matching:
Template Matching is linear and is not invariant to rotation (actually not even robust to it) but it is pretty simple and robust to noise such as the ones in photography taken with low illumination.
You can easily implement these using OpenCV Template Matching. Bellow there are mathematical equations defining some of the similarity measures (adapted for comparing 2 equal sized images) used by cv2.matchTemplate:
1 - Sum Square Difference
$$ S_{sq} = \sum_{(n,m) \in N^{M \times N}} \bigr(J[n,m] - I[n,m]\bigl)^2$$
This can be normalized as
$$ \frac{S_{sq}}{\sqrt{\sum J[n,m]^2 \times \sum I[n,m]^2}} $$
2 - Cross-Correlation
$$ C_{crr} = \sum_{(n,m) \in N^{M \times N}} J[n,m] \times I[n,m]$$
This can be normalized as
$$ \frac{C_{crr}}{\sqrt{\sum J[n,m]^2 \times \sum I[n,m]^2}} $$
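A tiny NumPy sketch of the normalized cross-correlation between two equal-sized grayscale images (random arrays stand in for real images here):
import numpy as np

def ncc(J, I):
    # normalized cross-correlation of two equal-sized grayscale images
    a = J.astype(np.float64).ravel()
    b = I.astype(np.float64).ravel()
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

img1 = np.random.randint(0, 256, (64, 64))
img2 = np.random.randint(0, 256, (64, 64))
print(ncc(img1, img1))   # 1.0 for identical images
print(ncc(img1, img2))   # smaller for dissimilar images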
B - Image descriptors/feature detectors:
Many descriptors were developed for images, their main use is to register images/objects and search for them in other scenes. But, still they offer a lot of information about the image and were used in pupil detection (A joint cascaded framework for simultaneous eye detection and eye state estimation) and even seem it used for lip reading (can't direct you to it since I am not sure it was already published)
They detect points that can be considered as features in images (relevant points) the local texture of these points or even their geometrical position to each other can be used as features.
You can learn more about it in Stanford's Image Processing classes (check the handouts for classes 12, 13 and 14). If you want to keep researching computer vision, I recommend you check the whole course, and maybe Rich Radke's classes on Digital Image Processing and Computer Vision for Visual Effects; there is a lot of information there that can be useful for the hard-working computer vision approach you're trying to take.
1 - SIFT and SURF:
These are Scale Invariant methods, SURF is a speed-up and open version of SIFT, SIFT is proprietary.
2 - BRIEF, BRISK and FAST:
These are binary descriptors and are really fast (mainly on processors with a pop_count instruction) and can be used in a similar way to SIFT and SURF. Also, I've used BRIEF features as substitutes on template matching for Facial Landmark Detection with high gain on speed and no loss on accuracy for both the IPD and the KIPD classifiers, although I didn't publish any of it yet (and this is just an incremental observation on the future articles so I don't think there is harm in sharing).
3 - Histogram of Oriented Gradients (HoG):
This is robust to illumination changes and small deformations (though not to rotation) and is used for face and pedestrian detection...
C - Convolutional Neural Networks:
I know you don't want to use NNs, but I think it is fair to point out that they are REALLY POWERFUL; training a CNN with triplet loss can be really nice for learning a representative feature space for clustering (and classification).
Check Wesley's GitHub for a example of it's power in facial recognition using Triplet Loss to get features and then SVM to classify.
Also, if your problem with Deep Learning is computational cost, you can easily find pre-trained layers with cats and dogs around.
D - Check on previous work:
This cats-and-dogs fight has been going on for a long time... you can check solutions on Kaggle Competitions (forums and kernels); there were two on cats and dogs: This One and That One
E - Famous Measures:
SSIM Structural similarity Index
L2 Norm (Or Euclidean Distance)
Mahalanobis Distance
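As a sketch for the SSIM and L2 options (toy arrays stand in for two equal-sized grayscale images; the import path may differ between scikit-image versions):
import numpy as np
from skimage.metrics import structural_similarity
img1 = np.random.rand(64, 64)                 # replace with your real images
img2 = img1 + 0.05 * np.random.rand(64, 64)
ssim_score = structural_similarity(img1, img2, data_range=1.0)  # 1.0 means identical
l2_distance = np.linalg.norm(img1 - img2)
print(ssim_score, l2_distance)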
F - Check on other kinds of features
Cats and dogs can be easy to identify by their ears and nose... size too, but I have had cats as big as dogs... so size is not really that safe to use.
But you can try segmenting the images into animal and background and then do some region-property analysis...
Also, check this image similarity metrics toolkit page; it is in C but...
Check this paper on image similarity
Take a look at this Stack Overflow question and this ResearchGate one
If you have the time, this book, Feature Extraction & Image Processing for Computer Vision by Mark S. Nixon, has much information on this kind of procedure
You can try Fisher Discriminant Analysis and PCA to create a mapping and then evaluate with the Mahalanobis distance or the L2 norm
H: Working of LSTM with multiple units - NER
I am trying to understand the working of LSTM networks and am not quite clear about how the different neurons in a cell interact with each other. I had a look at a similar question, but am still not clear about a few things.
Assume an NER task where each time step gets the embedding of one word as input and there are 10 time steps. I defined the LSTM cell with 100 units. As per my understanding, each unit will have 3 separate gates and a memory cell as shown in the second figure (300 gates and 100 memory cells in total). From the theory of LSTMs, each of these memory states stores different information; for example, one might store information about singularity, another might store information about gender, and so on (please correct me if I am wrong). With this setup, I have a few doubts about the working of the LSTM and the interaction between the units/neurons
1) In the figure, each cell has multiple neurons, let's say 100. Will the first input X1 (the first word vector) be fed into the first neuron of the cell along with the previous cell's activation (say h0), with its output going to the second neuron, and so on? Or will the same input X1 and h0 be fed to all 100 neurons in parallel (independently), with the respective outputs going to the next time step (I am not sure whether any aggregation happens)?
2) How would different units be able to store different features (singularity, gender, etc.)? Is it because they are initialized with random, different weights?
3) If the neurons work in parallel, is there any interaction between them to share information? Is it possible that multiple cell states store the same information (say, gender)?
4) If LSTM cell 1 has 100 neurons/units and input X1, will the output h1 also have 100 values (one for each neuron)?
AI: Welcome to StackExchange DataScience!
1) The same input $x_1$ and $h_0$ will be fed to all 100 neurons in parallel (independently), and the respective outputs will go to the next time step. Notice that there are two 100-dimensional vectors going to the next time step: $c_1$ and $h_1$, which are denoted by the arrows on the right side in your second figure.
2) This is a complicated question. Firstly, there is usually no one-to-one mapping between a neuron and a meaning like neuron 1 -> gender.
3) Those neurons do share information by the matrix multiplication operations. For example, the cell value at time step $t$ is usually calculated as
$$c_t = f_t c_{t-1} + i_t \tanh(W_{xc}x_t+W_{hc}h_{t-1}+b_c)$$
Notice that the hidden value in the previous time step $h_{t-1}$ is multiplied by a matrix $W_{hc}$, therefore each value in $W_{hc}h_{t-1}$ is a linear combination of all the values in $h_{t-1}$.
4) The number of neurons in the LSTM cell, the dimension of hidden vector $h_t$, the dimension of the cell vector $c_t$, and the dimension of the output vector $o_t$, are all the same (in your case, 100). This is known as the "hidden size", "output size" or "number of units/neurons". This is not the same as the dimension of input vector $x_t$, which is usually known as the "input size".
If you have further questions about the dimension of the LSTM variables, I suggest you to look at the first answer here where the author listed the dimension of all variables and parameters. |
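If it helps to see the shapes concretely, here is a small Keras sketch (the embedding size of 300 and the sentence length of 10 are just assumed numbers matching the example above):
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model
inp = Input(shape=(10, 300))                 # 10 time steps, 300-dimensional word vectors
out = LSTM(100, return_sequences=True)(inp)  # 100 units, so h_t has 100 values at every time step
Model(inp, out).summary()                    # output shape: (None, 10, 100)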
H: How can I access the loss value in the Keras LSTM implementation?
I use the Keras library and its LSTM model. When I train my network I can see the loss value in my program execution console. I would like to know how I can access this value in my code.
AI: You can access it by assigning a variable when calling fit
hist = model.fit(X, y)
Here hist is a History object; its history attribute is a dictionary containing the values of various quantities recorded during training. To get your training loss (one value per epoch) you would use hist.history['loss'].
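If you also need the loss of every batch (not just per epoch), one option (sketched here, reusing the model, X and y from above; not the only way) is a simple callback:
from keras.callbacks import LambdaCallback
batch_losses = []
log_batch_loss = LambdaCallback(on_batch_end=lambda batch, logs: batch_losses.append(logs['loss']))
hist = model.fit(X, y, epochs=5, callbacks=[log_batch_loss])
print(hist.history['loss'])   # per-epoch losses
print(batch_losses[:10])      # per-batch losses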
H: What are the disadvantages of having a left skewed distribution?
I'm currently working on a classification problem and I have a numerical column which is left-skewed. I've read many posts where people recommend taking a log transformation or a Box-Cox transformation to fix the left skewness.
So I was wondering what would happen if I left the skewness as it is and continued with my model building? Are there any advantages to fixing skewness for a classification problem (kNN, logistic regression)?
AI: There are issues that will depend on specific features of your data and analytic approach, but in general skewed data (in either direction) will degrade some of your model's ability to describe more "typical" cases in order to deal with much rarer cases which happen to take extreme values.
Since "typical" cases are more common than extreme ones in a skewed data set, you are losing some precision with the cases you'll see most often in order to accommodate cases that you'll see only rarely. Determining a coefficient for a thousand observations which are all between [0,10] is likely to be more precise than for 990 observations between [0,10] and 10 observations between [1,000, 1,000,000]. This can lead to your model being less useful overall.
"Fixing" skewness can provide a variety of benefits, including making analysis which depends on the data being approximately Normally distributed possible/more informative. It can also produce results which are reported on a sensible scale (this is very situation-dependent), and prevent extreme values (relative to other predictors) from over- or underestimating the influence of the skewed predictor on the predicted classification.
You can test this somewhat (in a non-definitive way, to be sure) by training models with varying subsets of your data: everything you've got, just as it is, your data without that skewed variable, your data with that variable but excluding values outside of the "typical" range (though you'll have to be careful in defining that), your data with the skewed variable distribution transformed or re-scaled, etc.
As for fixing it, transformations and re-scaling often make sense. But I cannot emphasize enough:
Fiddling with variables and their distributions should follow from properties of those variables, not your convenience in modelling.
Log-transforming skewed variables is a prime example of this:
If you really think that a variable operates on a geometric scale,
and you want your model to operate on an arithmetic scale, then log
transformation can make a lot of sense.
If you think that variable operates on an arithmetic scale, but you
find its distribution inconvenient and think a log transformation
would produce a more convenient distribution, it may make sense to
transform. It will change how the model is used and interpreted,
usually making it more dense and harder to interpret clearly, but
that may or may not be worthwhile. For example, if you take the log of a numeric outcome and the log of a numeric predictor, the result has to be interpreted as an elasticity between them, which can be awkward to work with and is often not what is desired.
If you think that a log transformation would be desirable for a
variable, but it has a lot of observations with a value of 0, then
log transformation isn't really an option for you, whether it would
be convenient or not. (Adding a "small value" to the 0 observations
causes lots of problems-- take the logs of 1-10, and then 0.0 to
1.0). |
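As a small illustration of that zero-value issue (purely a sketch; whether log1p is appropriate still depends on what the variable actually means):
import numpy as np
x = np.array([0.0, 0.5, 1.0, 10.0, 1000.0])
# np.log(x) would produce -inf for the zero entry
print(np.log1p(x))   # log(1 + x) is defined at 0, but it changes the interpretation of the scale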
H: In CycleGAN are there two different generators and two different discriminators?
I am trying to assimilate the contents of this paper. I am confused about how many different networks there are in the architecture of CycleGAN. To my understanding, the concept of a cycle means that there is only one generator and one discriminator which change roles (discriminator acting as generator, and generator as discriminator) depending on which loss we want to calculate (whether Y -> X or X -> Y). However, my friend and I have been debating this without reaching a consensus. He says that there are four different networks in total: two generators, G1 and G2, and two discriminators, D1 and D2. What is the truth?
AI: There are two generators and two discriminators.
Check out the code here |
H: How to see/change the learning rate in a Keras LSTM?
I have seen some questions/answers that recommend decreasing the learning rate. But I don't know how I can see and change the learning rate of an LSTM model in the Keras library.
AI: In Keras, you can set the learning rate as a parameter for the optimization method, the piece of code below is an example from Keras documentation:
from keras import optimizers
model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
In the above code, lr in the fifth line is the learning rate, which in this code is set to 0.01. You can change it to whatever you want.
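To inspect or change the learning rate of an already compiled model (the one defined above), a common approach is to go through the Keras backend; this is just a sketch:
from keras import backend as K
print(K.get_value(model.optimizer.lr))   # current learning rate
K.set_value(model.optimizer.lr, 0.001)   # set a new learning rate before further training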
For further detail, see this link.
Please tell me if this helped to solve your problem. |
H: Generating Similar Words (or Synonyms) with Word Embeddings (Word2Vec)
We have a search engine, and when users type in Tacos, we also want to search for similar words, such as Chilis or Burritos.
However, it is also possible that the user searches with multiple keywords, such as Tacos Mexican Restaurants, and we still want to find similar words such as Chilis or Burritos.
What we do is add together the vectors of all the words. This sometimes works, but with more keywords the summed vector tends to end up in a region where there are no neighbors.
Is there an approach that can use not only one word but multiple words and still give us similar results? We are using pre-trained GloVe vectors from Stanford; would it help if we trained on food-related articles and used that type of word embedding specifically for this task?
AI: Word Mover’s Distance (WMD) is an algorithm for finding the minimum distance between multiple embedded words.
The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document.
For example:
Source: "From Word Embeddings To Document Distances" Paper
In your problem, it will allow you to find that "Tacos Mexican Restaurants" is similar to "Burritos Taqueria" even though they share no common string literals. |
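A rough sketch of this with gensim (the vector file path is a placeholder, and wmdistance may require the pyemd package in older gensim versions):
from gensim.models import KeyedVectors
vectors = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)  # placeholder path
query = "tacos mexican restaurants".split()
candidate = "burritos taqueria".split()
print(vectors.wmdistance(query, candidate))   # smaller distance = more similar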
H: logistic regression : highly sensitive model
I am a newbie to data science and ML. I am working on a classification problem where the task is to predict loan status (granted/not granted).
I am running a logistic regression model on the data. The accuracy of my model is 82%. However, my model is more sensitive (sensitivity = 97%) and less specific (specificity = 53%).
I want to increase the model's specificity. At this stage, after referring to a bunch of internet resources, I am confused about how to proceed.
Below is my observation :
In Testing data,
the percentage of 1's in the class label is 73.17073170731707.
The testing data has more 1's than 0's in the class label. Is this the reason behind the model being highly sensitive?
I am attaching my data file and code file. Please take a look at it.
Data sample :
Process :
Data --> missing value imputation -->distribution analysis-->log transformation for normal distribution ---> one hot encoding --> feature selection --> splitting data --> model selection and evaluation
Code snippets :
Here I have selected "3 best features": Credit History, Property Area
How should I proceed? Any help (even if it's just a kick in the right direction) would be appreciated.
AI: Actually, what is happening is natural. There is a trade-off between sensitivity and specificity. If you want to increase the specificity, you should increase the threshold of your decision function but note that it comes at a price and the price is reducing the sensitivity. |
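A sketch of what moving the threshold looks like with scikit-learn (model and X_test are assumed to be your fitted logistic regression and test features; the 0.7 value is just an example you would tune on validation data):
import numpy as np
proba = model.predict_proba(X_test)[:, 1]   # probability of class 1
threshold = 0.7                             # higher threshold -> fewer 1s -> higher specificity, lower sensitivity
y_pred = (proba >= threshold).astype(int)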
H: Training a LSTM on a time serie containing multiple inputs for each timestep
I am trying to train a LSTM in order to use it for forecasting. The problem is basically a multivariate multi-step time series problem.
It is simply an experiment to see how statistical models (ARIMA, Holts-Winters, ...) and neural networks compare for a given problem.
While my dataset is well suited for a statistical model, I am having trouble formatting it to train the LSTM, since I have multiple entries for one timestep (corresponding to different entities) and I don't really know how to deal with that: the sequence is no longer tied together by the time of observation. Let's say my dataset looks like the following example:
time | ent | obs
1 --- 1 ------ 5
2 --- 1 ------ 6
2 --- 5 ------ 1
3 --- 2 ------ 7
3 --- 5 ------ 4
As you can see, not every entity have an entry for any given time, and one timestep can have multiple entries.
I thought of training the LSTM for each entity but I would have too few data for most of them. Some threads gave me the idea to separate each entity into batches but the number of observations is not constant so it wouldn't work for me.
How do you think I am supposed to tackle this problem?
AI: The answer to this question highly depends on what relationship between the variables you are interested in.
If you are interested in the relationship between time and observation-value, treating the entities as different batches could make sense, under the assumption that the role of individual entities doesn't really matter to you. In this case, you would, for example, add the mean of each entity (or the overall mean) to all entities with missing values to get a constant number of observations per entity. But you could also simply average all values in each timestamp and include other features as min & max. This would most probably deliver better results.
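For that first option, here is a pandas sketch of collapsing all entities into one multivariate series per timestamp (column names follow the toy example above):
import pandas as pd
df = pd.DataFrame({'time': [1, 2, 2, 3, 3],
                   'ent':  [1, 1, 5, 2, 5],
                   'obs':  [5, 6, 1, 7, 4]})
agg = df.groupby('time')['obs'].agg(['mean', 'min', 'max', 'count'])
print(agg)   # one row per timestep, usable as multivariate input to the LSTM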
If you are interested in the relationship between entities and observation-value, this is a matter of missing data in time series. There are a lot of techniques that can help you with that from simply imputing the mean to more sophisticated methods like a Kalman filter. However, in the end, you will have to ask yourself why these observations are missing and choose the appropriate method. But since you are using time-dependent models in your experiment, I assume, this is not of interest to you.
If you are interested in the interrelationship of all three variables, you are dealing with panel data. In this case, I don't see a reasonable way to model this with an LSTM. Maybe another RNN architecture could work; however, the only paper I found was Tensorial Recurrent Neural Networks for Longitudinal Data Analysis by Mingyuan et al. In the end it would not matter, since an ARIMA model also isn't appropriate for panel data. Usually, you use a Difference-in-Differences approach for that kind of data. In this case, I would suggest changing the dataset for your experiment.
H: Deconvolution function
I have an image (for example 7x7x3) and a filter (3x3x3). I convolved the image with the filter and got a 3x3 output. Now I want to do the inverse operation: recover the image from the output and the filter. How can I implement this operation in Python with NumPy?
I don't know which operation I should use with the filter (inverse or transpose).
Here is my code for the Deconvolution:
import numpy as np

def relu(x):
    # ReLU used at the end of the reconstruction (called but not defined in the original snippet)
    return np.maximum(0, x)

def deConv(Z, cache):
    '''
    deConv calculates the transposed convolution between the output of the ConvNet layer and the filter

    Arguments:
    Z -- output of the ConvNet layer, an array of shape (m, n_H, n_W, n_C)
    cache -- tuple (X_prev, W, b, s, p) saved during the forward convolution
    '''
    # Retrieve information from "cache"
    (X_prev, W, b, s, p) = cache
    # Retrieve dimensions from X_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = X_prev.shape
    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape
    # Retrieve dimensions from Z's shape
    (m, n_H, n_W, n_C) = Z.shape
    # Create the initial array for the output of the deconvolution
    X_curr = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
    # Loop over the training examples
    for i in range(m):
        # Loop over the vertical axis of the output
        for h in range(n_H):
            # Loop over the horizontal axis of the output
            for w in range(n_W):
                # Loop over the output channels
                for c in range(n_C):
                    # Loop over the input (color) channels
                    for x in range(n_C_prev):
                        # inverse_W = np.linalg.pinv(W[:, :, x, c])
                        transpose_W = np.transpose(W[:, :, x, c])
                        # X_curr[i, h*s:h*s+f, w*s:w*s+f, x] += Z[i, h, w, c] * inverse_W
                        X_curr[i, h*s:h*s+f, w*s:w*s+f, x] += Z[i, h, w, c] * transpose_W
                    X_curr[i, h*s:h*s+f, w*s:w*s+f, :] += b[:, :, :, c]
    X_curr = relu(X_curr)
    return X_curr
AI: This is probably irreversible operation unless the pre-convolution data was not full rank. But note that you reduced the dimensions of your signal, so the convolution was probably cropped to "valid" information (which doesn't need padding)
If you have at least the image or the filter, recovery might be possible, since convolution can be inverted using deconvolution but note that:
Convolution is defined as
$$ f(x) \circledast g(x) = h(x) = \int_{-\infty}^{\infty}f(x-t)g(t)dt $$
Some integral transforms (such as Fourier and Laplace) have the property that:
$$ T\{f(x) \circledast g(x)\}(s) = T\{h(x)\}(s) = T\{f(x)\}(s) \times T\{g(x)\}(s) $$
This is true for the Discrete Fourier Transform and discrete convolution as well. So, to find your image $i$ from the filtered image $j$ and a filter $h$, let $I$, $J$ and $H$ be the discrete Fourier transforms of $i$, $j$ and $h$, respectively, let $F\{.\}$ denote the discrete Fourier transform and $F^{-1}\{.\}$ its inverse:
$$ i = F^{-1}\{I\} = F^{-1}\{\frac{J}{H}\} $$
The discrete Fourier transform is implemented efficiently by SciPy (fftpack/fft modules), FFTW and NumPy (fft module), and probably Theano, with a lot of wrappers around them.
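A bare-bones NumPy sketch of this frequency-domain division (it assumes a known filter zero-padded to the same shape as the image, periodic boundary handling, and essentially no noise; see the notes below):
import numpy as np
def fft_deconvolve(j, h, eps=1e-8):
    # j: filtered image, h: filter zero-padded to the same shape as j
    J = np.fft.fft2(j)
    H = np.fft.fft2(h)
    I = J / (H + eps)              # eps avoids division by near-zero frequencies
    return np.real(np.fft.ifft2(I))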
Note 1: Is important to notice that you will need at least an estimation of your filter, there are a lot of algorithms to do so.
Note 2: Deconvolution is very sensitive to noise; you can check this class on Digital Image Processing to understand image filtering, mainly the part on Wiener filters.
Note 3: Image deconvolution is implemented in scikit-image (e.g. unsupervised Wiener) and in OpenCV using several algorithms (also in MATLAB's Image Processing Toolbox).
H: Why real-world output of my classifier has similar label ratio to training data?
I trained a neural network on a balanced dataset, and it has good accuracy, ~85%. But in the real world positives appear in about 10% of the cases or less. When I test the network on a set with the real-world distribution, it seems to assign more positive labels than needed, tending towards the balanced proportion seen in the training set.
What can be the reason for such behavior and what should I do to solve it?
I'm using Keras and a combination of LSTM and CNN layers.
AI: What can be the reason of such behavior?
A classifier only tries to capture the features-label relationship as accurate as it can, it does not learn nor does it guarantee that the ratio of predicted labels to be close to the true ratio. However, if sampled classes (even balanced) are good representatives of true class regions and classifier finds good decision boundaries, the closeness of ratios will happen naturally. Therefore, the explanation is that the sampled negative class is not a good representative of its true occupied regions, meaning some regions are not sampled well, and/or classifier is finding bad decision boundaries. Here is a visual illustration (drawn by myself):
In the case of good training set, predictions resemble the real-world ratio (2:1), even though model is trained on a balanced set (1:1). In the case of bad training set and/or bad decision boundary, predictions are completely incompatible (1:2) with the real-world ratio (2:1).
What should I do to solve it?
If the problem is related to bad negative representatives
Use more negative samples in the training set, i.e. imbalanced training (you can nullify the effect of imbalanced samples with class weights), or
Train with balanced classes but change the decision threshold (a post-training solution). That is, instead of assigning instances with output > 0.5 to positive class, use a harder-to-pass threshold like output > 0.8 to decrease the number of positive predictions.
If the problem is related to classifier performance, we should come up with a better classifier which is an open-ended endeavor.
However, in my opinion, you should not select models based on the ratio of positive predictions. You should decide based on a metric like macro-F1 (or any other one). Therefore, using a validation set, a model that produces more positive samples but has a higher macro-F1 should be preferred over a model that produces fewer positives but has a lower macro-F1.
EDIT:
As @BenReiniger pointed out in another post, a hidden assumption here (specially in the sketch) is that classes are "clearly separable". This assumption becomes more justified in higher dimensions. As an example, dogs and cats are clearly separable based on their images (high dimensions) compared to their length (one dimension). |
H: How does GlobalMaxPooling work on the output of Conv1D?
In the field of text classification, it is common to use Conv1D filters running over word embeddings and then getting a single value on the output for each filter using GlobalMaxPooling1D.
As I understand the process, the convolutional filter is a matrix of the same size as the $$\text{size of filter matrix} = \text{embedding dim}\cdot\text{width of the filter}$$ The filter matrix is then applied to the input embeddings (multiplied element by element) which produces a matrix of the same size for each filter position. Not a single number.
So how does the global max pooling get a single number on the output? Does it simply take a maximum over all the values in all the output matrices, or is there any other processing?
Please correct me if I'm wrong.
AI: Apparently I forgot how the convolution works. The input is multiplied element-wise with the filter weights and the products are then summed; that is how a single value is obtained for each filter position. GlobalMaxPooling1D then simply takes the maximum of these values over all positions, giving one number per filter.
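A tiny Keras sketch that makes the shapes explicit (all numbers are arbitrary):
from tensorflow.keras.layers import Input, Conv1D, GlobalMaxPooling1D
from tensorflow.keras.models import Model
inp = Input(shape=(50, 300))                     # 50 words, 300-dimensional embeddings
conv = Conv1D(filters=128, kernel_size=3)(inp)   # -> (None, 48, 128): one value per position per filter
pool = GlobalMaxPooling1D()(conv)                # -> (None, 128): max over positions, one number per filter
Model(inp, pool).summary()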
H: Keras + Tensorflow CNN with multiple image inputs
I have a CNN that needs to take in 68 images that are all 59x59 pixels. The CNN should output 136 values on the output layer
My training data has shape (-1, 68, 59, 59, 1).
My current approach is to use concatenate to join multiple networks like so:
input_layer = [None] * 68
x = [None] * 68
for i in range(68):
input_layer[i] = tf.keras.layers.Input(shape=training_data.shape[1:][1:])
x[i] = Conv2D(64, (5,5))(input_layer[i])
x[i] = LeakyReLU(alpha=0.3)(x[i])
x[i] = MaxPooling2D(pool_size=(2,2))(x[i])
x[i] = Model(inputs=input_layer[i], outputs=x[i])
combined = concatenate(x)
However, this always gives the error:
ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs
Is this approach a suitable approach or am I doing this completely wrong?
AI: Yes, it is wrong; each (68, 59, 59) input should go through one model, not an array of them.
You can treat each of the 68 images as a channel. For this, you need to squeeze your data axes from (-1, 68, 59, 59, 1) to (-1, 68, 59, 59) to have a 59x59 image with 68 channels, corresponding to Input((68, 59, 59)), and set data_format='channels_first' in Conv2D to let the layer know that the channels are in the first dimension (it expects them in the last dimension by default). This is similar to an RGB image that has 3 channels, corresponding to Input((59, 59, 3)). The rest is the same.
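A minimal sketch of this channels-first variant (training_data is the array from your question; the dense head is only an assumption to reach the 136 outputs, not the only valid architecture):
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, LeakyReLU, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model
training_data = np.squeeze(training_data, axis=-1)    # (-1, 68, 59, 59, 1) -> (-1, 68, 59, 59)
inp = Input(shape=(68, 59, 59))
x = Conv2D(64, (5, 5), data_format='channels_first')(inp)
x = LeakyReLU(alpha=0.3)(x)
x = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(x)
x = Flatten()(x)
out = Dense(136)(x)                                   # 136 output values
model = Model(inp, out)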
If 68 images are consecutive frames from a movie, you can use conv3D to extract motion patterns across neighbor frames too; this is done by 3D kernels instead of 2D kernels. It requires (-1, 68, 59, 59, 1) data shape corresponding to Input((68, 59, 59, 1)). Also, we should use the default data_format='channels_last' since now there is only one channel as the last dimension. Commonly, temporal axis is placed third, i.e. (-1, 59, 59, 68, 1), which can be accomplished by moving the axes. |
H: I received this error message: Found input variables with inconsistent numbers of samples: [15573, 15600]. How do I resolve it please?
Data source
I received that error and when running my code, I realised that the issue lies with my X_train and y_train.
The link below is the source of my data (test.csv and train.csv)
My X_train.shape gave me (15573,) and my y_train gave me (15600,)
How do I reshape this?
AI: You likely get this problem because you remove rows containing nulls in X_train and y_train independently of each other. y_train probably has few or no nulls, while X_train probably has some. So when you remove a row in X_train and the same row is not removed in y_train, your data becomes unsynced and the two have different lengths. Instead, you should remove nulls before you separate X and y.
Before this:
X = df['text']
y = df['label']
Do this:
df.dropna(inplace=True)
And remove this:
X_train.dropna(inplace=True)
X_test.dropna(inplace=True)
y_train.dropna(inplace=True)
y_test.dropna(inplace=True) |
H: How to handle columns with categorical data and many unique values
I have a column of categorical data with 3349 unique values (nunique) in an 18000k-row dataset; the values represent cities of the world.
I also have another column with 145 unique values that I could also use in my model; it represents the product category.
Can I use one-hot encoding for these columns, or is there a problem with that solution?
What is the maximum number of unique values for which one-hot encoding is still unproblematic?
Can you also point me in the right direction if I should use another encoding?
AI: For categorical columns, you have two options :
Entity Embeddings
One Hot Vector
For a column with 145 values, I would use one hot encoding and Embedding for ~3k values. This decision might change depending on overall number of features.
Embeddings map each category to a dense vector so that the model knows NYC, Paris and London are similar cities in one aspect (size) and very different in other aspects. So, instead of using ~3k one-hot columns, the model will work with, say, a ~50-dimensional vector representation, as sketched below.
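A sketch of such an entity-embedding input for the city column (the 50-dimensional embedding size is just an assumption, and the city values must first be integer-encoded, e.g. with LabelEncoder):
from tensorflow.keras.layers import Input, Embedding, Flatten, Dense, concatenate
from tensorflow.keras.models import Model
city_in = Input(shape=(1,))                                        # integer-encoded city id, 0..3348
city_emb = Flatten()(Embedding(input_dim=3349, output_dim=50)(city_in))
other_in = Input(shape=(145,))                                     # e.g. one-hot encoded product category
x = concatenate([city_emb, other_in])
out = Dense(1, activation='sigmoid')(x)
model = Model([city_in, other_in], out)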
Articles that explain Embeddings :
An Overview of Categorical Input Handling for Neural Networks
On learning embeddings for categorical data using Keras
Google Developers > Machine Learning > Embeddings: Categorical Input Data
Exploring Embeddings for Categorical Variables with Keras by Florian Teschner |
H: Choosing k value in KNN classifier?
I'm working on a classification problem and decided to use a KNN classifier for it.
If k=131 gave me an AUC of 0.689 and k=71 gave me an AUC of 0.682, what should my ideal k be?
Does choosing a higher k mean more usage of computational resources? If that's the case, can I go with k=71? Or should I always use the k with the maximum score, no matter what?
AI: Because kNN is a non-parametric method, the computational cost of choosing k depends highly on the size of the training data. If the training data is small, you can freely choose the k for which the best AUC on the validation dataset is achieved. If you have a large training dataset, choosing a large k can lead to huge computational complexity, which is reflected in slow prediction on test data.
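If the dataset allows it, you can also let cross-validation pick k instead of comparing two hand-picked values; a sketch with scikit-learn (X_train and y_train are your training arrays, and the candidate grid is arbitrary):
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={'n_neighbors': [11, 31, 51, 71, 101, 131]},
                    scoring='roc_auc', cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)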
H: Looking for other opinions on approach to classification problem
I'm looking to implement an "opt-out" filter for my company. The input is short, text-message style messages. A few examples of opt-out messages are:
"remove me from your list"
"remove from list"
"please unsubscribe from list"
etc.
All other messages are "good", and should not be removed.
My thoughts on approaches:
I was thinking of using a Bayesian classifier here, but not really knowing the solution space (or having much of a background in ML), I want to be sure I'm not wasting time on a sub-optimal solution.
I'm fine with not having the most cutting-edge solution, but want to be sure I'm not missing an approach that might be equally as straightforward but more effective.
AI: You should use text classification techniques. The most basic one is a multinomial naive Bayes classifier with tf-idf features. For this method, take a look at this:
https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
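A minimal sketch of that baseline (the example messages are made up):
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
texts = ["remove me from your list", "please unsubscribe", "see you at lunch", "great, thanks!"]
labels = [1, 1, 0, 0]   # 1 = opt-out, 0 = good
clf = Pipeline([('tfidf', TfidfVectorizer(ngram_range=(1, 2))),
                ('nb', MultinomialNB())])
clf.fit(texts, labels)
print(clf.predict(["take me off this list"]))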
If you don't get good enough accuracy (or precision, recall or F-score), you could try more complex techniques, e.g. deep LSTM networks with word embeddings. For this method, take a look at this:
https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/ |
H: A trick used in Rademacher complexity related Theorem
I am currently working on the proof of Theorem 3.1 in the book "Foundations of Machine Learning" (page 35, First edition), and there is a key trick used in the proof (equation 3.10 and 3.11):
$$\begin{align*}
&E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} g(z'_i)-g(z_i)\right]=E_{\boldsymbol{\sigma},S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} \sigma_i(g(z'_i)-g(z_i))\right] \\
&\text{where } {\Bbb P}(\sigma_i=1)={\Bbb P}(\sigma_i=-1)=\frac{1}{2}
\end{align*}$$
It is also shown in the lecture pdf page 8 in this link:
https://cs.nyu.edu/~mohri/mls/lecture_3.pdf
This is possible because $z_i$ and $z'_i$ can be swapped. My question is, why can we swap $z_i$ and $z'_i$? In the book, it says this is possible because "we are taking the expectation over all possible $S$ and $S'$, it will not affect the overall expectation." Unfortunately, I don't understand the meaning or intuition of this part.
Thank you very much for reading the question and for your time!
AI: This requires a hell of a derivation, but I liked the question :)
My question is, why can we swap $z_i$ and $z'_i$?
The key insight is that notation $S \sim \mathcal{D}^m$ is equivalent to $Z_1 \sim \mathcal{D}, \cdots, Z_m \sim \mathcal{D}$. This translates to
$$E_{S,S'}[.]=E_{Z_1,\cdots,Z_m,Z'_1,\cdots,Z'_m}[.]$$
which remains the same by reordering the $Z$s.
Therefore the swap argument can be shown by
Singling out an arbitrary term $k$ from $\sum_{i=1}^{m}$,
Switching $E_{Z_k,Z'_k}$ to $E_{Z'_k,Z_k}$, and
Renaming the variables.
That is,
$$\begin{align*}
&E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} g(z'_i)-g(z_i)\right] \\
&\overset{\text{expand}}{=}
E_{\color{blue}{Z_1,\cdots,Z_m,Z'_1,\cdots,Z'_m}}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} g(z'_i)-g(z_i)\right]\\
&\overset{\text{separate}}{=}
E_{\cdots,Z_k,\cdots,Z'_k,\cdots}\left[\underset{g \in \mathcal{G}}{\text{sup}} \frac{1}{m}\left(\color{blue}{g(z'_k)-g(z_k)}+\sum_{i=1; \neq k}^{m} g(z'_i)-g(z_i)\right)\right] \\
&\overset{\text{switch}}{=}
E_{\cdots,\color{blue}{Z'_k},\cdots,\color{blue}{Z_k},\cdots}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\left(g(z'_k)-g(z_k)+\sum_{i=1; \neq k}^{m} g(z'_i)-g(z_i)\right)\right] \\
&\overset{\text{reorder}}{=}
E_{\cdots,Z'_k,\cdots,Z_k,\cdots}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\left(\color{blue}{-(g(z_k)-g(z'_k))}+\sum_{i=1; \neq k}^{m} g(z'_i)-g(z_i)\right)\right] (*)
\end{align*}$$
Now by renaming $Z_k \leftrightarrow Z'_k$, and thus $z_k \leftrightarrow z'_k$ in $(*)$ we have:
$$\begin{align*}
&E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} g(z'_i)-g(z_i)\right] \\
&\overset{\text{rename}}{=} E_{\cdots,\color{blue}{Z_k},\cdots,\color{blue}{Z'_k},\cdots}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\left(-(g(\color{blue}{z'_k})-g(\color{blue}{z_k}))+\sum_{i=1; \neq k}^{m} g(z'_i)-g(z_i)\right)\right]\\
&\overset{\text{collapse}}{=} E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\left(-(g(z'_k)-g(z_k))+\sum_{i=1; \neq k}^{m} g(z'_i)-g(z_i)\right)\right]\\
\end{align*}$$
which corresponds to $\boldsymbol{\sigma}=(\sigma_1=1,\cdots,\sigma_k = -1,\cdots,\sigma_m=1)$.
We have proved this equality by switching the sign of arbitrary index $k$. Therefore, by defining
$$f(\boldsymbol{\sigma}):=E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} \sigma_i(g(z'_i)-g(z_i))\right]$$
we have shown that
$$\forall \boldsymbol{\sigma_1}, \boldsymbol{\sigma_2},f(\boldsymbol{\sigma_1})=f(\boldsymbol{\sigma_2}).$$
For example:
$$\begin{align*}
f(\boldsymbol{1})&=f(\sigma_1=1,\cdots,\sigma_k=1, \cdots,\sigma_m=1)\\
&=f(\sigma_1=1,\cdots,\sigma_k=-1, \cdots,\sigma_m=1)
\end{align*}$$
Knowing that there are $2^m$ equi-probable vectors $\boldsymbol{\sigma_i}$, we finally have
$$\begin{align*}
E_{S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} g(z'_i)-g(z_i)\right]&=f(\boldsymbol{1}) \\
&=\frac{1}{2^m} f(\boldsymbol{\sigma_1})+\cdots+\frac{1}{2^m}f(\boldsymbol{\sigma_{2^m}})\\
&=E_{\boldsymbol{\sigma}}[f(\boldsymbol{\sigma})] \\
&=E_{\boldsymbol{\sigma},S,S'}\left[\underset{g \in \mathcal{G}}{\text{sup}}\frac{1}{m}\sum_{i=1}^{m} \sigma_i(g(z'_i)-g(z_i))\right]
\end{align*}$$
$\square$ |
H: what could this mean if your "elbow curve" looks like this?
This is from running kmeans clustering with k on the x-axis (ranging from 2 to 10) and the silhouette distance on the y-axis.
Clearly there are peaks at k=3 and k=4, and it seems to decline from there. It doesn't resemble an elbow, and I thought it should rise as k gets larger (due to overfitting on the training set). Do I just lack data?
I'm computing the silhouette distance using an 80-20 train-test split.
AI: First of all, you do have two elbows: one at $k=4$ and a large one at $k=8$. The second isn't very apparent because you haven't drawn out the plot for larger values of $k$. If you do you might get a figure like this:
Secondly, you aren't meant to look for an elbow when computing the silhouette score! The silhouette score accounts for both inter- and intra-cluster distance, as such it can be used for selecting $k$ on its own (i.e. select the $k$ that produces the best silhouette score).
Note: I'm not familiar with the "silhouette distance", I assume it is somewhat related to the silhouette score (maybe its inverse).
The "elbow" criterion should be used when dealing with metrics that tend to improve as $k$ increases (e.g. inertia). |
H: What are the criteria for updating bias values in back propagation?
During back propagation, the algorithm can modify the weight values or bias values to reduce the loss.
How does the algorithm decide whether it has to modify the weight values or bias values to reduce the loss?
Does it modify the weight values in one pass and bias values in another pass?
Thanks!
AI: Actually, weight values and bias values are updated simultaneously in each pass of backpropagation. That's because the direction of the loss gradient vector is determined by the partial derivatives of the loss function with respect to all the weights and biases. So if in each pass you want to move in the correct direction towards the minimum of the loss function, you must update both weights and biases at the same time and in the correct direction.
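A one-layer sketch of what a single pass looks like (plain batch gradient descent on a linear layer with squared loss; everything here is illustrative):
import numpy as np
X = np.random.rand(32, 5)            # batch of 32 samples, 5 features
y = np.random.rand(32, 1)
W = np.random.randn(5, 1)
b = np.zeros((1,))
lr = 0.1
y_hat = X @ W + b                    # forward pass
err = y_hat - y
dW = X.T @ err / len(X)              # dLoss/dW
db = err.mean(axis=0)                # dLoss/db
W -= lr * dW                         # both parameters are updated
b -= lr * db                         # in the same pass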
H: What is "posterior collapse" phenomenon?
I was going through this paper on Towards Text Generation with Adversarially Learned
Neural Outlines and it states why the VAEs are hard to train for text generation due to this problem. The paper states
the model ends up
relying solely on the auto-regressive properties of the decoder while ignoring the latent variables,
which become uninformative.
Please simplify and explain the problem in a lucid way.
AI: With the help of better explanations provided in Z-Forcing: Training Stochastic Recurrent Networks:
When posterior is not collapsed, $z_d$ (d-th dimension of latent variable $z$) is sampled from $q_{\phi}(z_d|x)=\mathcal{N}(\mu_d, \sigma^2_d)$, where $\mu_d$ and $\sigma_d$ are stable functions of input $x$. In other words, encoder distills useful information from $x$ into $\mu_d$ and $\sigma_d$.
We say a posterior is collapsing, when signal from input $x$ to posterior parameters is either too weak or too noisy, and as a result, decoder starts ignoring $z$ samples drawn from the posterior $q_{\phi}(z|x)$.
The too noisy signal means $\mu_d$ and $\sigma_d$ are unstable and thus sampled $z$'s are also unstable, which forces the decoder to ignore them. By "ignore" I mean: output of decoder $\hat{x}$ becomes almost independent of $z$, which in practice translates to producing some generic outputs $\hat{x}$ that are crude representatives of all seen $x$'s.
The too weak signal translates to
$$q_{\phi}(z|x)\simeq q_{\phi}(z)=\mathcal{N}(a,b)$$
which means $\mu$ and $\sigma$ of posterior become almost disconnected from input $x$. In other words, $\mu$ and $\sigma$ collapse to constant values $a$, and $b$ channeling a weak (constant) signal from different inputs to decoder. As a result, decoder tries to reconstruct $x$ by ignoring useless $z$'s which are sampled from $\mathcal{N}(a,b)$.
Here are some explanations from Z-Forcing: Training Stochastic Recurrent Networks:
In these cases, the posterior approximation tends to provide a too
weak or noisy signal, due to the variance induced by the stochastic
gradient approximation. As a result, the decoder may learn to ignore z
and instead to rely solely on the autoregressive properties of x,
causing x and z to be independent, i.e. the KL term in Eq. 2 vanishes.
and
In various domains, such as text and images, it has been empirically
observed that it is difficult to make use of latent variables when
coupled with a strong autoregressive decoder.
where the simplest form of KL term, for the sake of clarity, is
$$D_{KL}(q_{\phi}(z|x) \parallel p(z|x)) = D_{KL}(q_{\phi}(z|x) \parallel \mathcal{N}(0,1))$$
The paper uses a more complicated Gaussian prior for $p(z|x)$. |
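For reference, the KL term that "vanishes" under posterior collapse is, in the common diagonal-Gaussian VAE setup, computed like this (a PyTorch sketch; mu and logvar are the encoder outputs):
import torch
def kl_term(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
# Under posterior collapse the posterior for every input drifts towards the prior N(0, 1),
# so this term shrinks towards zero and z carries no information about x.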
H: What do we visualize in showing a VAE latent space?
I am trying to wrap my head around VAE's and have trouble understanding what is being visualized when people make scatter plots of the latent space. I think I understand the bottleneck concept; we go from $N$ input dimensions to $H$ hidden dimensions to a $Z$ dimensional Gaussian with $Z$ mean values, and $Z$ variance values. For example here (which is based off the official PyTorch VAE example), $N=784, H=400$ and $Z=20$.
When people make 2D scatter plots, what do they actually plot? In the above example the bottleneck layer is 20-dimensional, which means there are 40 features (counting both $\mu$ and $\sigma$). Do people do PCA or t-SNE or something on this? Even if $Z=2$ there are still four features, so I don't understand how the scatter plot showing clustering, say in MNIST, is being made.
AI: When people make 2D scatter plots what do they actually plot?
First case: when we want to get an embedding for specific inputs:
We either
Feed a hand-written character "9" to VAE, receive a 20 dimensional "mean" vector, then embed it into 2D dimension using t-SNE, and finally plot it with label "9" or the actual image next to the point, or
We use 2D mean vectors and plot directly without using t-SNE.
Note that "variance" vector is not used for embedding. However, its size can be used to show the degree of uncertainty. For example a clear "9" would have less variance than a hastily written "9" which is close to "0".
Second case: when we want to plot a random sample of z space:
We select random values of z, which effectively bypasses sampling from mean and variance vectors,
sample = Variable(torch.randn(64, ZDIMS))
Then, we feed those z's to decoder, and receive images,
sample = model.decode(sample).cpu()
Finally, we embed z's into 2D dimension using t-SNE, or use 2D dimension for z and plot directly.
Here is an illustration for the second case (drawn by the one and only paint):
As you see, the mean and variances are completely bypassed, we directly give the random z's to decoder.
The referenced article says the same thing, but less obvious:
Below you see 64 random samples of a two-dimensional latent space of MNIST digits that I made with the example below, with ZDIMS=2
and
VAE has learned a 20-dimensional normal distribution for any input digit
ZDIMS = 20
...
self.fc21 = nn.Linear(400, ZDIMS) # mu layer
self.fc22 = nn.Linear(400, ZDIMS) # logvariance layer
which means it only refers to the z vector, bypassing mean and variance vectors. |