H: How are per-layer-detected-patterns in a trained CNN plotted?
In the case my question is not clear, I am talking about the patterns that are detected in each of the layers of an image-trained Convolutional Neural Network (CNN). Take the following image as an example (by Honglak Lee). I think I understand the concept: different layers start coding for different features with increasing complexity. The first layers code for edges, the middle layers code for simple features (e.g. nose, eyes), and the later layers code for whole faces.
However, I do not see the correspondence between each of the patterns in the picture and the network elements. Does each pattern inside a pattern class correspond to one neuron of that layer? How are these cool pattern-detection pictures even plotted?
AI: The image you posted is derived from figures which depict the basis functions of a Convolutional Restricted Boltzmann Machine.
I found them in this ICML paper (behind a paywall, sorry) and this PhD thesis, both are by Honglak Lee, who was advised by Andrew Ng.
Strictly speaking they are not visualizations of a feed-forward Convolutional Neural Network, which might be why you are having a hard time interpreting what they depict. I don't really know much about RBMs myself so I'm afraid I can't help with their interpretation.
As for visualizing CNNs, there are several common methods. You can visualize filter weights directly, find input images which maximally activate a filter, plot activation as a function of image occlusion, etc.
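For instance, visualizing the first-layer filter weights only takes a few lines. This is a minimal sketch assuming a trained Keras model object named model whose first layer is a convolution over grayscale or RGB images (the names here are placeholders, not part of the original question):
import matplotlib.pyplot as plt
# first-layer convolution kernels; the exact array layout depends on the backend,
# but the last axis is typically the number of filters
weights = model.layers[0].get_weights()[0]
n_filters = weights.shape[-1]
fig, axes = plt.subplots(1, n_filters, figsize=(2 * n_filters, 2))
for i, ax in enumerate(axes):
    f = weights[..., i]
    f = (f - f.min()) / (f.max() - f.min() + 1e-8)   # rescale to [0, 1] for display
    ax.imshow(f.squeeze(), cmap='gray')
    ax.axis('off')
plt.show()
Higher-layer filters are harder to interpret this way, which is why the activation-maximization and occlusion methods above are usually used for them.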
This page has a good summary of techniques, also this paper may be useful. |
H: Intuition in Backpropagation (gradient descent)
I am reading about backpropagation, one of many processes involved in gradient descent, and I couldn't quite grasp the intuition in one of the equations.
http://neuralnetworksanddeeplearning.com/chap2.html#MathJax-Element-185-Frame
An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular
$$
\delta^l=((w^{l+1})^T\delta^{l+1})\circ\sigma'(z^l),
$$
where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{th}$ layer.
...
When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{th}$ layer.
I know how each term is derived algebraically (from further down the text), but I can't grasp the intuitive part. In particular, I don't understand how applying the transpose of the weight matrix is like moving the error backwards.
The mathematics is quite straightforward. The equation is the simplification of the error in layer $l$ expressed in terms of error in layer $l+1$. But, the whole intuition bit doesn't come so naturally.
Could someone please explain, in what sense applying the transpose is like "moving the error backwards"?
AI: in what sense applying the transpose is like "moving the error backwards"?
It isn't. Or at least you shouldn't think too hard about the analogy.
What you are actually doing is calculating the gradient - or partial derivative - of the error/cost term with respect to the activations and weights in the network. This is done by repeatedly applying the chain rule to terms that depend on each other in the network, until you have the gradient for all the variables that you control in the network (usually you are interested in changing the weights between neurons, but you can alter other things if they are under your control - this is exactly how deep dreaming works too, instead of treating the weights as a variable, you treat the input image as one).
The analogy is useful in that it explains the progression and goal of the repeated calculations across layers. It is also correct in that values of the gradient in one layer depend critically on values in "higher" layers. This can lead to intuitions about managing problems with gradient values in deep networks.
There is no relation between taking the matrix transpose and "moving the error backwards". Actually you are taking sums over terms connected by the weights in the NN model, and that happens to be the same operation as the matrix multiplication given. The matrix multiplication is a concise way of writing the relationship (and is useful when looking at optimisations), but is not fundamental to understanding neural networks.
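A small NumPy sketch (with made-up sizes and random numbers) makes this concrete: the matrix form and the explicit per-neuron sums give the same $\delta^l$.
import numpy as np

np.random.seed(0)
W_next = np.random.randn(4, 3)      # weights from layer l (3 units) to layer l+1 (4 units)
delta_next = np.random.randn(4)     # error delta at layer l+1
sigma_prime_z = np.random.rand(3)   # sigma'(z^l) at layer l

# matrix form: (W^{l+1})^T delta^{l+1}, elementwise-multiplied by sigma'(z^l)
delta_matrix = W_next.T.dot(delta_next) * sigma_prime_z

# explicit form: for each unit j in layer l, sum over the units k it feeds into
delta_sums = np.array([
    sum(W_next[k, j] * delta_next[k] for k in range(4)) * sigma_prime_z[j]
    for j in range(3)
])

print(np.allclose(delta_matrix, delta_sums))   # True
The transpose appears simply because unit j's outgoing weights form the j-th column of $w^{l+1}$, i.e. the j-th row of its transpose.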
The statement "moving the error backwards" is in my opinion a common cause of problems when developers who have not studied the theory carefully try to implement it. I have seen quite a few attempts at implementing back propagation that cargo cult in error terms and multiplications without comprehension. It does not help here that some combinations of cost functions and activation functions in the output layer have been deliberately chosen to have very simple derivatives, so that it looks like the neural network is copying error values around with almost magical multiplications by activation value etc. Actually deriving those simple terms takes a reasonably competent knowledge of basic calculus. |
H: Comparing Non-deterministic Binary Classifiers
I have two classifiers which I am implementing, and they are both non-deterministic in the sense that they can each give different results (FPR and TPR) when you run them multiple times. I would like to compare these two algorithms to evaluate their performance. How do I go about this? What people usually do is run the classifier until they get the best FPR and TPR values, and then publish those results. But the problem with this approach is that it might not be a good representation of the performance of such a classifier. This is what I planned on doing so far, but I don't know if it is correct:
Split my evaluation data into train and test, and after training, predict using the test data to get the FPR and TPR, then repeat this prediction 99 more times to form 100 FPR and TPR readings, then take an average of this. To get an ROC, use the mean FPR and TPR. OR
Use k fold cross validation of say k=3 or 10 on the data, and this will return 3 or 10 different values for TPR and FPR, then I will take the mean to get the mean FPR and TPR, and also use this mean for plotting the mean ROC.
Which of the two methods I stated above is ok? And if they are both wrong, what do you suggest I do. Thanks.
AI: A good strategy is to do n times k-fold cross validation, which should give a pretty good estimation of the average performance of both algorithms. What this means is you perform your k-fold cross validation n times, with different random folds each time, and average the results. People commonly use k = n = 10, but the higher the values of k and n the better.
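A minimal sketch with scikit-learn (recent versions provide RepeatedStratifiedKFold); the data and classifier below are placeholders for your own:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, random_state=0)   # replace with your data
clf = LogisticRegression()                                   # replace with your classifier

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(clf, X, y, scoring='roc_auc', cv=cv)
print(scores.mean(), scores.std())   # average and spread over 100 train/test splits
Running both classifiers through the same repeated splits keeps the comparison fair, and the spread of the scores tells you how stable each one is. |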
H: What is the One Max Problem in detail?
I am looking at a Python lib named deap, but I am stuck at the beginning.
The first paragraph says:
The problem is very simple, we search for a 1 filled list individual.
What does "1 filled list" mean? Search for a 1-filled list from where? Is the individual the list, or the 1?
Googling "One Max Problem" only gives some information which seems to be useful:
There is a Max One Problem
I can understand this, but is it the same as the One Max Problem? If so, I have a question: why do we need an evolutionary algorithm to "evolve" our population until eventually the target emerges? If I were a medical researcher, I would already have the entire DNA (gene list); what I need to do is just search in that list, not evolve a random list.
There is a Maximum_satisfiability_problem
This is understandable too, and it seems related to the One Max Problem. Is it just another way of saying the same thing?
AI: deap is an evolutionary algorithm library. In an evolutionary algorithm you usually want to optimize a function. For this, you define individuals as a collection of genes (e.g. a string of numbers) that condense a possible solution, you create a population of such individuals, and define a fitness function to evaluate how good they are; then you apply evolutionary operators (e.g. mutation, pairing) to evolve the population, effectively searching the solution space.
The problem referred to as "max one problem" in the linked page can be reworded as:
let's create a toy evolutionary algorithm where we want to evolve a population of individuals (where each individual is a list of N integer numbers) until one of them is exactly comprised of N ones (i.e. 1,1,....,1).
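A minimal sketch of that toy problem without deap (the list length, population size and mutation rate below are arbitrary choices, not taken from the tutorial):
import random

N, POP_SIZE, GENERATIONS = 20, 50, 200
random.seed(0)

def fitness(ind):                 # number of ones; the maximum (N) is the target
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)          # keep the fitter half
    parents = pop[:POP_SIZE // 2]
    children = [[bit if random.random() > 0.05 else 1 - bit for bit in p]   # bit-flip mutation
                for p in parents]
    pop = parents + children
    if fitness(pop[0]) == N:
        break

print(gen, pop[0])   # the all-ones individual typically emerges after some generations
Nobody actually needs an evolutionary algorithm to find a list of ones; the point is that the problem is so transparent that it makes a convenient benchmark for exercising the evolutionary machinery (selection, mutation, fitness evaluation).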
The "maximum satifiability problem" is not related to this. |
H: How to use binary relevance for multi-label text classification?
I'm trying to use binary relevance for multi-label text classification.
Here is the data I have:
a training set with 6000 short texts (around 500-800 words each) and some labels attached to them (around 4-6 for each text); there are almost 500 different labels in the entire set.
a test set with 6000 shorter texts (around 100-200 words each).
The difference in size between my two sets exists because the sources are different.
So, I want to use binary relevance to find the labels of the texts in the test set. To do it, I created a dictionary with all the different words in the entire training set and removed stop words, words that appear only once and words that appear in more than 10% of the texts. I got 14714 different words in my dictionary.
My idea was to create a matrix where each row represents a document, each column a word, and each value is the number of occurrences of a word in a document. But with 14714 words and 6000 documents, I will get a matrix of 88 million integers! I tried, just to see, to create it and my laptop didn't support it. :)
I didn't even have time to create my Y matrix and generate a model (I wanted to use logistic regression) for just one label...
So, my questions are:
Is this a good way to do multi-label classification, or is there a better method?
Is it a problem to have training data from one source and to use it to build a model that predicts data from another source? Is the different size of the documents a problem?
Did you use logistic regression for this kind of problem?
Thank you!
Edit: I also want to add that the most frequent words in my dictionary (after the cleaning part) are common words and totally useless in the field of my research (biology): used, much, two, use, possible, example, ... How can I deal with this?
AI: Just some views/suggestions. After removing the stop words, did you stem/lemmatize the text? That would probably reduce the number of unique words in your corpus and bring some forms of words down to the same level. But be cautious while using stemming, as it sometimes creates noise.
Try POS tagging and see which tags are important to keep, eliminating the ones that you feel give less relevance to the text.
Did you try to find some important terms from each document by using tf-idf or the chi-square method? They might be helpful for seeing the relevance of terms for each class/document.
See how far you can reduce the dimensions of the matrix after using the above (if you have not already), and then apply logistic regression or whatever classifier you wanted to.
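A sketch of the binary relevance setup with scikit-learn, assuming a list of training documents texts, a list of label lists labels and a list of unseen documents test_texts (these names are placeholders, not from the question):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

vectorizer = TfidfVectorizer(stop_words='english', min_df=2, max_df=0.1)
X = vectorizer.fit_transform(texts)        # a sparse matrix, so 6000 x 14714 fits in memory easily

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)              # one indicator column per label

# binary relevance = one independent binary classifier per label
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, Y)

X_test = vectorizer.transform(test_texts)
predicted = mlb.inverse_transform(clf.predict(X_test))
Two side effects help with the problems in the question: the sparse tf-idf matrix avoids the memory blow-up, and tf-idf weighting already downweights very common domain words such as "used" or "possible".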
On your second question.
I don't think having data from different sources should be a problem, since you are creating a model based on keywords; hopefully it will generalize. I am not confident about this part.
I have worked on text classification before with bigger documents and a larger corpus, with not-so-fruitful results across various models.
Hope this helps. |
H: ValueError:invalid literal for int() with base 10: 'No''
I am trying to do a binary classification and have the class names as strings, but it gives me the error mentioned in the title. When I give integer labels as 0 and 1, the code works perfectly fine. I tried all the suggestions from Stack Overflow regarding the same error but nothing is working. Here is the code snippet:
labels = ["No", "priorlocaltx"]
#labels = [0, 1]
keyword_identifiers_no = ['surgery', 'radiotherapy', 'brachytherapy', 'Newadjuvant', 'neo', 'Adjuvant', 'Mets at first diagnosis', 'M1HSPC']
keyword_identifiers_yes = ['no prior tx', 'BCR', 'Neoadjuvant', 'neo', 'Adjuvant', 'Hormone refractory', 'Mets', 'No mets']
cancer_df.insert(2, 'Result', 0, allow_duplicates = False)
#print(cancer_df.head())
for i in range(1, len(cancer_df)):
    for word in keyword_identifiers_no:
        if word in cancer_df.iloc[i]['Patient Segment(s)']:
            cancer_df.at[i,'Result'] = labels[0]
    #print(cancer_df['Patient Segment(s)'].dtype)
    for word in keyword_identifiers_yes:
        if word in cancer_df.iloc[i]['Patient Segment(s)']:
            cancer_df.at[i,'Result']= labels[0]
cancer_df.to_csv('Result1.csv')
AI: It seems that your 'Result' column was created with an int dtype (it was inserted with the default value 0), and you are then trying to set some rows to str, which triggers the failed int() conversion. Stick to int or category (docs).
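For example, either of these would avoid the mixed types (note also that the second loop in your snippet assigns labels[0] where labels[1] was probably intended):
# option 1: create the column with a string default so string labels fit
cancer_df.insert(2, 'Result', '', allow_duplicates=False)

# option 2: keep integer labels and map them to names only when needed
label_names = {0: 'No', 1: 'priorlocaltx'}
cancer_df['Result_name'] = cancer_df['Result'].map(label_names)
Either way, avoid writing strings into a column that pandas created as an integer column. |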
H: POC - Get an idea to create a Predictive Model
I'm trying to look for an idea to create a predictive model having the following data:
Customer_ID - Integer
Catalog_ID - Integer
Country_Code - Integer
Year - Integer
Month - Integer
Day - Integer
Quantity_Purchased - Integer
Product_Purchased - Double
I'm trying to look for a use case that creates a predictive model that can give me the ability to propose a product to a customer the next time he comes to my website.
Is this a collaborative filtering use case? If yes, I can only use the last two fields of my dataset, right?
Thanks!
AI: Recommender systems are usually either collaborative ("other people who bought x also bought y") or content-based ("item y has similar properties as item x"). Given your very few data features, collaborative filtering seems appropriate. In the basic setting it could use only the purchases of your users. Ideally, you would also have some sort of rating of their purchases.
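A minimal item-based collaborative filtering sketch, assuming your rows are in a pandas DataFrame named purchases with the columns listed in the question (the function and variable names are placeholders):
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# customer x product matrix of purchase quantities
matrix = purchases.pivot_table(index='Customer_ID',
                               columns='Product_Purchased',
                               values='Quantity_Purchased',
                               aggfunc='sum').fillna(0)

item_sim = pd.DataFrame(cosine_similarity(matrix.T),
                        index=matrix.columns, columns=matrix.columns)

def recommend(customer_id, n=3):
    bought = matrix.loc[customer_id]
    scores = item_sim.dot(bought)                   # weight items by similarity to past purchases
    scores = scores.drop(bought[bought > 0].index)  # do not recommend what was already bought
    return scores.nlargest(n).index.tolist()
Here the purchase quantity plays the role of an implicit rating; an explicit rating column would slot into the same place.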
Also, nothing stops you from using a hybrid approach with additional data if available. |
H: Handling time series data with gaps
I am working on a dataset with physical measurements taken daily (weight, bmi, etc...) and I am working through the process to graphically represent it. I think it is worth noting that every day has a corresponding row, but if no measurements were taken, the values are the same as the day before.
Here is an example of the trend I am trying to manage:
Date, Weight, BMI
1/1/2016, 155.1, 21.9
1/2/2016, 155.1, 21.9
1/3/2016, 155.1, 21.9
--continued for several weeks--
3/1/2016, 170.2, 25.0
3/2/2016, 170.1, 25.0
edit: I should clarify that these repeated values are how the data is put together for missing values. The days that are repeated are days when no measurement was taken
If the value stays the same for an extended period, should there be a gap in any graphical representation? Should I keep the numbers as they were (155.1 & 21.9) until the next measurement was taken, or should the numbers increase over that time to "bridge the gap" - meaning they would increase by the difference in measurements divided by the number of days?
It feels like I should be increasing the value over time to account for what would happen in reality, but I don't know if this would have negative implications on the data.
AI: I'd represent actual repeated values differently from missing data. The former is straightforward; the latter I'd interpolate with Gaussian process regression. This way you'll be able to get error bars like so:
Note how the sample functions (and thus error bars) expand and contract as you move away from and back toward the measurements, as you would intuitively expect.
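A sketch with scikit-learn, keeping only the days on which a measurement was actually taken (the day indices and kernel settings below are illustrative):
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

days = np.array([0, 1, 2, 60, 61]).reshape(-1, 1)       # days with real measurements
weight = np.array([155.1, 155.1, 155.1, 170.2, 170.1])  # the measured values

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(days, weight)

grid = np.arange(0, 62).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)   # std gives the error bars between measurements
Plotting mean with a band of plus or minus two standard deviations gives exactly the kind of widening and narrowing uncertainty described above. |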
H: How to apply Content-based image retrieval for scanned images?
Is there a way to differentiate scanned images (containing only text) that are well lit and have good contrast from the ones that are not (that have poor quality), making use of content-based image retrieval? More exactly, is there a feature extraction method that can help to query a database to retrieve the well-lit, good-contrast scanned images?
Thank you!
AI: Your problem is studied under the rubric of "image (Quality) assessment" and, more specifically, "document image assessment". Here are links to and abstracts from some relevant surveys:
Document Image Quality Assessment: A Brief Survey
To maintain, control and enhance the quality of document images and minimize the negative impact of degradations on various analysis and processing systems, it is critical to understand the types and sources of degradations and develop reliable methods for estimating the levels of degradations. This paper provides a brief survey of research on the topic of document image quality assessment. We first present a detailed analysis of the types and sources of document degradations. We then review techniques for document image degradation modeling. Finally, we discuss objective measures and subjective experiments that are used to characterize document image quality.
Subjective and Objective Quality Assessment of Image: A Survey
With the increasing demand for image-based applications, the efficient and reliable evaluation of image quality has increased in importance. Measuring the image quality is of fundamental importance for numerous image processing applications, where the goal of image quality assessment (IQA) methods is to automatically evaluate the quality of images in agreement with human quality judgments. Numerous IQA methods have been proposed over the past years to fulfill this goal. In this paper, a survey of the quality assessment methods for conventional image signals, as well as the newly emerged ones, which includes the high dynamic range (HDR) and 3-D images, is presented. A comprehensive explanation of the subjective and objective IQA and their classification is provided. Six widely used subjective quality datasets, and performance measures are reviewed. Emphasis is given to the full-reference image quality assessment (FR-IQA) methods, and 9 often-used quality measures (including mean squared error (MSE), structural similarity index (SSIM), multi-scale structural similarity index (MS-SSIM), visual information fidelity (VIF), most apparent distortion (MAD), feature similarity measure (FSIM), feature similarity measure for color images (FSIMC), dynamic range independent measure (DRIM), and tone-mapped images quality index (TMQI)) are carefully described, and their performance and computation time on four subjective quality datasets are evaluated. Furthermore, a brief introduction to 3-D IQA is provided and the issues related to this area of research are reviewed.
No-reference image quality assessment algorithms: A survey
Evaluation of noise content or distortions present in an image is same as assessing the quality of an image. Measurement of such quality index is challenging in the absence of reference image. In this paper, a survey of existing algorithms for no-reference image quality assessment is presented. This survey includes type of noise and distortions covered, techniques and parameters used by these algorithms, databases on which the algorithms are validated and benchmarking of their performance with each other and also with human visual system.
Objective image quality assessment: a survey
Image quality assessment (IQA) is critically important for the image-processing field. IQA aims to build a computational model to predict human perceived image quality, accurately and automatically. Until now, great efforts have been employed to design IQA metrics. In this paper, we systematically and comprehensively review the fundamental, brief history, and state-of-the-art developments of IQA, with emphasis on natural image quality assessment (NIQA). First, the definition of image quality is discussed, which contains three aspects and lead to different philosophies of designing IQA metrics. Afterwards, classic NIQA metrics are presented with some further discussions. Widely used databases and the performances of classic NIQA metrics on them are also listed. We highlight the most significant works and some open issues about the developments of IQA, and provide the benchmarks for the researchers and scholars who work on IQA.
Research papers specifically on document image assessment
Document Image Quality Assessment Using Discriminative Sparse Representation
The goal of document image quality assessment (DIQA) is to build a computational model which can predict the degree of degradation for document images. Based on the estimated quality scores, the immediate feedback can be provided by document processing and analysis systems, which helps to maintain, organize, recognize and retrieve the information from document images. Recently, the bag-of-visual-words (BoV) based approaches have gained increasing attention from researchers to fulfill the task of quality assessment, but how to use BoV to represent images more accurately is still a challenging problem. In this paper, we propose to utilize a sparse representation based method to estimate document image’s quality with respect to the OCR capability. Unlike the conventional sparse representation approaches, we introduce the target quality scores into the training phase of sparse representation. The proposed method improves the discriminability of the system and ensures the obtained codebook is more suitable for our assessment task. The experimental results on a public dataset show that the proposed method outperforms other hand-crafted and BoV based DIQA approaches.
Discrete Orthogonal Moments Based Framework for Assessing Blurriness of Camera Captured Document Images
One of the most widely used tasks in the area of image processing is automated processing of documents, which is done using Optical Character Readers (OCR) from document images. The most common form of distortion in document images is blur which can be caused by defocus, motion, camera shake etc. In this paper we propose a no reference image sharpness measure framework using discrete orthogonal moments and image gradients for assessing quality of document images and validated the results against state of the art image sharpness measures and accuracy of three well known Optical Character Readers.
Document Image Quality Assessment Based on Texture Similarity Index
In this paper, a full reference document image quality assessment (FR DIQA) method using texture features is proposed. Local binary patterns (LBP) as texture features are extracted at the local and global levels for each image. For each extracted LBP feature set, a similarity measure called the LBP similarity index (LBPSI) is computed. A weighting strategy is further proposed to improve the LBPSI obtained based on local LBP features. The LBPSIs computed for both local and global features are then combined to get the final LBPSI, which also provides the best performance for DIQA. To evaluate the proposed method, two different datasets were used. The first dataset is composed of document images, whereas the second one includes natural scene images. The mean human opinion scores (MHOS) were considered as ground truth for performance evaluation. The results obtained from the proposed LBPSI method indicate a significant improvement in automatically/accurately predicting image quality, especially on the document image-based dataset.
No-reference document image quality assessment based on high order image statistics
Document image quality assessment (DIQA) aims to predict the visual quality of degraded document images. Although the definition of “visual quality” can change based on the specific applications, in this paper, we use OCR accuracy as a metric for quality and develop a novel no-reference DIQA method based on high order image statistics for OCR accuracy prediction. The proposed method consists of three steps. First, normalized local image patches are extracted with regular grid and a comprehensive document image codebook is constructed by K-means clustering. Second, local features are softly assigned to several nearest codewords, and the direct differences between high order statistics of local features and codewords are calculated as global quality aware features. Finally, support vector regression (SVR) is utilized to learn the mapping between extracted image features and OCR accuracies. Experimental results on two document image databases show that the proposed method can accurately predict OCR accuracy and outperforms previous algorithms.
A deep learning approach to document image quality assessment
This paper proposes a deep learning approach for document image quality assessment. Given a noise corrupted document image, we estimate its quality score as a prediction of OCR accuracy. First the document image is divided into patches and non-informative patches are sifted out using Otsu’s binarization technique. Second, quality scores are obtained for all selected patches using a Convolutional Neural Network (CNN), and the patch scores are averaged over the image to obtain the document score. The proposed CNN contains two layers of convolution, location blind max-min pooling, and Rectified Linear Units in the fully connected layers. Experiments on two document quality datasets show our method achieved the state of the art performance.
Metric-based no-reference quality assessment of heterogeneous document images
No-reference image quality assessment (NR-IQA) aims at computing an image quality score that best correlates with either human perceived image quality or an objective quality measure, without any prior knowledge of reference images. Although learning-based NR-IQA methods have achieved the best state-of-the-art results so far, those methods perform well only on the datasets on which they were trained. The datasets usually contain homogeneous documents, whereas in reality, document images come from different sources. It is unrealistic to collect training samples of images from every possible capturing device and every document type. Hence, we argue that a metric-based IQA method is more suitable for heterogeneous documents. We propose a NR-IQA method with the objective quality measure of OCR accuracy. The method combines distortion-specific quality metrics. The final quality score is calculated taking into account the proportions of, and the dependency among different distortions. Experimental results show that the method achieves competitive results with learning-based NR-IQA methods on standard datasets, and performs better on heterogeneous documents. |
H: NLP - why is "not" a stop word?
I am trying to remove stop words before performing topic modeling. I noticed that some negation words (not, nor, never, none etc..) are usually considered to be stop words. For example, NLTK, spacy and sklearn include "not" on their stop word lists. However, if we remove "not" from these sentences below they lose the significant meaning and that would not be accurate for topic modeling or sentiment analysis.
1). StackOverflow is helpful => StackOverflow helpful
2). StackOverflow is not helpful => StackOverflow helpful
Can anyone please explain why these negation words are typically considered to be stop words?
AI: Stop words are usually thought of as "the most common words in a language". However, other definitions based on different tasks are possible.
It clearly makes sense to consider 'not' as a stop word if your task is based on word frequencies (e.g. tf–idf analysis for document classification).
If you're concerned with the context (e.g. sentiment analysis) of the text it might make sense to treat negation words differently. Negation changes the so-called valence of a text. This needs to be treated carefully and is usually not trivial. One example would be the Twitter negation corpus. An explanation of the approach is given in this paper. |
H: Performance and architecture of neural network for increased dimensions
I posted this question on Cross Validated before I realized that this existed. I think it is better suited here and got no answers over there so I have deleted other post. I have reproduced the question below:
I have been playing around with the neural network toolbox in MATLAB to develop an intuition for how the architectural requirements scale with feature dimension.
I put together a simple example, and the results have surprised me. I am hoping someone can point to either (a) an unrealistic expectation of mine, or (b) a mistake/misuse of the neural network toolbox.
The example is as follows: I have a simple un-normalized one-dimensional Gaussian that I am trying to learn. I do the following:
x = -5:0.2:5;
y = exp(-x.^2/2);
net = feedforwardnet(2);
net = configure(net, x, y);
net = train(net, x, y);
y2 = net(x);
plot(x, y, 'o', x, y2);
legend('Data', 'NN');
This gives me good results. I get the plot below.
Now, I try to extend this to 2 dimensions and this is where I run into trouble. I don't think I'm asking too much. My data is not noisy, nor is it sparse. I figure if I double the number of neurons that should be sufficient for an increase in dimensionality. Here's my code:
x1 = -5:0.2:5;
x2 = -5:0.2:5;
[x1g, x2g] = meshgrid(x1, x2);
xv = [x1g(:)'; x2g(:)'];
yv = exp(-dot(xv,xv)/2);
net = feedforwardnet(4);
net = configure(net, xv, yv);
net = train(net, xv, yv);
y2v = net(xv);
plot3(xv(1,:), xv(2,:), yv, xv(1,:), xv(2,:), y2v, 'o');
legend('Data', 'NN');
The plot I get is this:
This is pretty poor. Perhaps I need more neurons? Maybe if I double the number of dimensions, I need to quadruple the number of neurons. I get this for 8 neurons:
Maybe with 8 neurons I have a lot of weights to fit, so let me try training with regularization. I get the plot below with trainbr:
It's only at around 16 neurons that I start getting something I would consider reasonable.
However, there are still oscillations which I don't like. Now I know I'm using it out of the box in a naïve manner, or perhaps I'm expecting too much. But this simple example resembles the real problem I want to tackle. I have the following questions:
Why is it that an increase from 1 to 2 dimensions increases the number of neurons required to get a decent fit considerably?
Even when I go to a larger number of neurons, I get oscillations that are going to be a problem in my real world application. How can I get rid of that?
Most resources on NN that I've read indicate a substantially lower number of neurons. They usually state something like "equal to or less than the number of input variables". Why is that? Is a multidimensional Gaussian a pathological case?
If I need to be more intelligent with how I treat my network for a given number of neurons, what do I need to do? I tried retraining the network to see if it was a local minima issue, but I generally get a similar fit.
Anything else that may be remotely useful to this issue is appreciated!
AI: Some things to take into account:
Try to apply appropriate input space transformations, e.g. convert to polar coordinates.
Despite the fact that a single hidden layer feedforward network can be a universal approximator, there are no guarantees about the number of neurons needed to approximate an arbitrarily complex function. Instead of having a single hidden layer and making it increasingly wide, try stacking more layers. This enables the network to model nonlinearities in an easier way.
Do not verify the performance of your model based on visual inspection (at least not only), but based on error measurements (e.g. MSE). |
H: Loss function for sparse tagging
I am writing a musical transcription system with a RNN (LSTM).
Input: 1 vector of features per timestep (about 40 timesteps in a second)
Output: 1 binary vector of notes per timestep (dimension=36) (1 if on, 0 if off).
Model : LSTM(512) + LSTM(256) + Dense(36, activation='sigmoid')
I am currently training on monophonic data (i.e. my output has at most one 1 per timestep).
The problem is that, because of the sparsity, the best strategy for the network is to always return 0.
I tried the loss functions 'mean_squared_error', 'mean_absolute_error', 'binary_crossentropy', 'cosine_proximity' and one custom I wrote with keras :
K.sum(K.abs(y_pred - y_true), axis=-1) * K.mean(y_pred, axis=-1) / K.maximum(K.mean(y_pred * y_true, axis=-1), K.epsilon())
All those loss functions led, with sufficient training, to the always-zero output.
I can either change my loss function or my encoding, but the problem is that I need to support polyphonic data, i.e. when there is more than one class to select.
Or I could train as many LSTMs as there are notes (88 for a piano keyboard), each one detecting whether its note is played, but I think this is pretty much equivalent, isn't it?
I think my data is pretty much equivalent to what keras calls categorical data (binary matrix).
Currently I have only 36 classes, and 24 are always zero.
AI: In the case of 1 note at a time maximum you could use categorical_crossentropy as loss and add a class for when no note is played. This is a log loss on a last softmax layer. This would turn it into a 37 one-hot-encoding representation.
In the case of having the possibility of multiple notes I would keep the 36 (or 88 for the piano) dimensional representation and have a sigmoid activation on each of the notes at the end. Then sum the binary log loss of each of the nodes and use that as the loss. Then you can threshold the nodes individually at prediction time to see if a note is on or off.
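A sketch of that polyphonic setup in Keras, reusing the layer sizes from the question (n_features stands in for whatever your per-timestep feature dimension is):
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

n_features = 40   # placeholder: the size of your per-timestep feature vector

model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(None, n_features)))
model.add(LSTM(256, return_sequences=True))
model.add(TimeDistributed(Dense(36, activation='sigmoid')))   # one independent sigmoid per note

# binary_crossentropy on sigmoid outputs is the per-note binary log loss
# (Keras averages over the 36 notes instead of summing, which only rescales the loss)
model.compile(loss='binary_crossentropy', optimizer='adam')

# at prediction time, threshold each unit individually:
# notes_on = model.predict(x) > 0.5
Thresholds other than 0.5 can also be tuned per note on a validation set. |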
H: Machine Learning: Writing Poems
I'm a student of machine learning, and these days I am trying to learn how to use the TensorFlow library. I've gone through various tutorials and trial and error with TensorFlow, and I thought the best way to learn it for real would be to make use of it in a little project of my own.
I've decided that I should attempt to make a program that writes poems. I'm not aiming for a top-end quality program; for my first model, I'd be happy with just a string of nonsense words grouped together in poem format. The problem is that I'm having trouble looking up books or videos about machine learning programs that deal with writing sentence structures.
Can you make any suggestions on what I could look for (even google keywords are fine) to get the sample programs and basic knowledge that I need?
Thank you.
AI: This is just the comment from Emre expanded, but yes you should look into recurrent neural networks for generating text in the style of a given corpus. RNNs and LSTM work really quite well for this.
This writeup is widely cited, and to your question, shows how it's pretty easy to generate something like this, given the text of Shakespeare's plays:
PANDARUS: Alas, I think he shall be come approached and the day When
little srain would be attain'd into being never fed, And who is but a
chain and subjects of his death, I should not sleep.
Second Senator: They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish The earth and
thoughts of many states.
If you follow that writeup you can easily run it on your local GPU to generate text from whatever input poetry you like. I have had pretty good results with 0.1-0.5 dropout, 2 layers, and layer sizes of 512-1024.
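A compact character-level sketch in Keras along those lines (the corpus file name, sequence length and hyper-parameters are placeholders you would tune):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

text = open('poems.txt').read().lower()            # your poetry corpus
chars = sorted(set(text))
char_idx = {c: i for i, c in enumerate(chars)}

seq_len, step = 40, 3
starts = range(0, len(text) - seq_len, step)
X = np.zeros((len(starts), seq_len, len(chars)), dtype=np.bool_)
y = np.zeros((len(starts), len(chars)), dtype=np.bool_)
for i, s in enumerate(starts):
    for t, c in enumerate(text[s:s + seq_len]):
        X[i, t, char_idx[c]] = 1
    y[i, char_idx[text[s + seq_len]]] = 1

model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=(seq_len, len(chars))))
model.add(Dropout(0.3))
model.add(LSTM(512))
model.add(Dropout(0.3))
model.add(Dense(len(chars), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X, y, batch_size=128, epochs=20)
Generation is then a loop that feeds the last seq_len characters through the model, samples the next character from the softmax output, appends it and repeats. |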
H: An abstract idea for the performance diffs between SLP and MLP
Recently I have been working on some predictive analytics based on neural networks.
When I tried some tests on an MLP with one hidden layer versus multiple hidden layers, the results showed that:
the prediction performance with one hidden layer is always better than with multiple hidden layers
the tests were executed on both KNIME and R models, and they gave me the same trends
Based on my knowledge, shouldn't more hidden layers with more neurons perform better?
Is there a principle about which kinds of dataset produce this kind of result?
Or I may need some book / article / paper to read?
Do you have any abstract idea (not mathematical algorithms) for me, thanks!
UPDATE
I am working on a fraud detection dataset which includes 1000 observations, and all the dimensions are numeric and normalized...
But I am actually asking for a general idea of how to choose an algorithm for different datasets.
AI: More layers/more neurons does not necessarily mean you will get better performance. If your data is too simple or the number of observations is not that high, then adding more parameters (more layers/neurons) may result in overfitting the data. During training, the network will try to represent the training data as closely as possible. When there are a good number of units in the network, then the representation that is learned by the network will mainly model the general trend of the data faithfully. But, if there are too many neurons, then it is possible that some of the neurons will simply model noise in the training data which does not generalise to unseen data, resulting in worse performance.
Also if you are using a saturating nonlinearity as an activation function (i.e. sigmoids or tanh) then adding too many layers may result in vanishing gradients which will cause your network to train very slowly, or perhaps not at all.
Designing neural networks that work well is not very easy, and the best way to get an intuition for what will work is to experiment. One thing to try is to evaluate a range of sizes for your hidden layer(s). If you are using sigmoid or tanh as your activation function, I suggest you try using rectified linear units as well. |
H: shape of theano tensor variable out of keras Conv2D
Being new to theano, please bear with me. I thought the shape of the tensor variable was already well defined out of the Conv2D layer since the input is specified, as follows:
from keras.layers import Input, Convolution2D
import theano
input_img = Input(shape=(1, 28, 28))
x = Convolution2D(16, 3, 3, activation='relu', border_mode='same') (input_img)
print type(x)
print theano.tensor.shape(x)
But the output is,
<class 'theano.tensor.var.TensorVariable'>
Shape.0
I'm using the default stride of 1, and 'same' border mode here means that padding is added so the output is the same size as the input. Using this information I could calculate by hand what the output shape should be.
Did I miss something here? The question is how to get the shape of the output of a convolution layer?
AI: You can't get the shape of a theano tensor, because it is not fixed. The output of the convolutional layer is just a symbolic variable and its shape depends on whatever you put into the layer as input.
You can get the shape of the output for a specific input by making a theano function for the output of the layer, and feeding a numpy array through the function:
import numpy as np
input = np.ones(28*28).reshape(1, 1, 28, 28).astype('float32')
fn = theano.function([input_img], x)
print fn(input).shape
>>> (1, 16, 28, 28) |
H: How to compute the Jaccard Similarity in this example? (Jaccard vs. Cosine)
I am trying to understand the difference between Jaccard and Cosine. However, there seems to be a disagreement in the answers provided in Applications and differences for Jaccard similarity and Cosine Similarity.
I am asking if anyone could step me through the calculation of the Jaccard similarity in this cosine similarity example from https://bioinformatics.oxfordjournals.org/content/suppl/2009/10/24/btp613.DC1/bioinf-2008-1835-File004.pdf
Given:
Question: How do we compute the Jaccard Similarity index between t1 and t2?
Thank you.
AI: Cosine similarity is for comparing two real-valued vectors, but Jaccard similarity is for comparing two binary vectors (sets). So you cannot compute the standard Jaccard similarity index between your two vectors, but there is a generalized version of the Jaccard index for real valued vectors which you can use in this case:
$J_g(\Bbb{a}, \Bbb{b}) = \frac{\sum_i min(\Bbb{a}_i, \Bbb{b}_i)}{\sum_i max(\Bbb{a}_i, \Bbb{b}_i)}$
So for your examples of $t_1 = (1, 1, 0, 1), t_2 = (2, 0, 1, 1)$, the generalized Jaccard similarity index can be computed as follows:
$J(t_1, t_2) = \frac{1+0+0+1}{2+1+1+1} = 0.4$
Alternatively you can treat your bag-of-words vector as a binary vector, where a value $1$ indicates a word's presence and $0$ indicates a word's absence, i.e. $t_1 = (1, 1, 0, 1), t_2 = (1, 0, 1, 1)$. From there, you can compute the original Jaccard similarity index:
$J(t_1, t_2) = \frac{2}{2+1+1} = 0.5$
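A quick NumPy check of both numbers:
import numpy as np

t1 = np.array([1, 1, 0, 1])
t2 = np.array([2, 0, 1, 1])

generalized = np.minimum(t1, t2).sum() / float(np.maximum(t1, t2).sum())   # 0.4

b1, b2 = t1 > 0, t2 > 0
binary = float((b1 & b2).sum()) / (b1 | b2).sum()                          # 0.5
Which of the two you report depends on whether the word counts themselves carry meaning for your application. |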
H: Pickled machine learning models
Is there a website where people store their pickled models for others to try? E.g. different people might try different ML approaches on the iris dataset, is there a place where I can find/download models others have constructed?
AI: On Kaggle.com, there is a datasets section. In this section, you can post or find datasets. For each dataset, you have the data, the possibility to discuss it and, of course, a list of kernels posted by users. For instance, for the iris dataset, you have 592 kernels available. |
H: Rank feature selection over multiple datasets
Through backward elimination I get a ranking of features over multiple datasets. For example, in dataset 1 I have the following ranking, with the feature at the top being the most important:
feat. 1
feat. 2
feat. 3
feat. 4.
...
, whereas for dataset 2 I have for example the following ranking:
feat. 3
feat. 1
feat. 2
feat. 4.
I want to pick out those features which end up at the top of the ranking most often (taking into account that finishing at the top is better than finishing in 3rd place). Which kind of ranking metric can I use for this problem?
AI: An easy one to try would be average ranks, where you take the mean of the ranks for each feature. For your example,
$
\begin{array}{cc}
\textbf{Feature} & \textbf{Avg. Rank} \\
1 & 1.5 \\
3 & 2 \\
2 & 2.5 \\
4 & 4 \\
\end{array}
$
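A quick way to compute this, assuming one list of ranked feature names per dataset (best first):
import pandas as pd

rankings = [['feat1', 'feat2', 'feat3', 'feat4'],    # dataset 1
            ['feat3', 'feat1', 'feat2', 'feat4']]    # dataset 2

ranks = pd.DataFrame([{f: pos + 1 for pos, f in enumerate(r)} for r in rankings])
print(ranks.mean().sort_values())   # feat1 1.5, feat3 2.0, feat2 2.5, feat4 4.0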
You could also weight the ranks by the size of the dataset, if the datasets that you are testing on are not the same size. |
H: Reinforcement learning: understanding this derivation of n-step Tree Backup algorithm
I think I get the main idea, and I almost understand the derivation except for this one line, see picture below:
I understand what we're doing by using the policy probability to weight the rewards from time t + 2 (because getting here depends on the prob of taking an action that gets here). But I don't understand why we similarly subtract the value function from the return...
It also doesn't seem to match the example target return (G) implied for 2 step backup on slide 15 of this lecture's slides:
https://www.dropbox.com/sh/3xowt7qvyadvejn/AABpWQMKWX3KVbeqVlBcxNYra/slides%20(pdf%20and%20keynote)?dl=0&preview=13-multistep.pdf
Thanks for any insight. I could be missing something simple/obvious as I dive into these details.
EDIT - for more context, see pg. 160 of this pdf which is where the picture comes from: http://incompleteideas.net/sutton/book/bookdraft2016sep.pdf
AI: The slides and the book are consistent. Notice how in the slides there is a restriction in the summation, i.e. $a \neq A_{t+1}$. For $G^{(2)}$, you need to "remove" from $V_{t+1}$ the term that should not be there, i.e. the one for $A_{t+1}$.
Now, why is this term removed?
If you keep this term you will be counting $A_{t+1}$ twice. In the 1-step backup, it is part of the expectation at step $S_{t+1}$.
When you calculate the 2-step backup you want to replace $(S_{t+1}, A_{t+1})$ in the 1-step expectation with the discounted expected value of $S_{t+2}$.
So you subtract the term and add the discounted expectation for $S_{t+2}$ |
H: Principal Component Analysis, Eigenvectors lying in the span of the observed data points?
I have been reading several papers and articles related to Principal Component Analysis (PCA) and in some of them, there is one step which is quite unclear to me (in particular (3) in [Schölkopf 1996]).
Let me reproduce their reasoning below.
Consider the centered data set $D = \{\textbf{x}_k\}_{k=1}^M$ with $\textbf{x}_k \in \textbf{R}^N$ and $ \sum_{k=1}^M \textbf{x}_k = 0$. PCA diagonalizes the (sample) covariance matrix
$$
C = \frac{1}{M} \sum_{j=1}^M \textbf{x}_j \textbf{x}_j^T. \tag{1}
$$
To do this we find the solution to the eigenvector equation
$$
\lambda \textbf{v} = C \textbf{v} \tag{2}
$$
for eigenvalues $\lambda \geq 0$ and eigenvectors $\textbf{v} \in \textbf{R}^N\backslash \{{0}\}$. As
$$
\lambda \textbf{v} = C \textbf{v} = \frac{1}{M} \sum_{j=1}^M (\textbf{x}_j^T \textbf{v}) \textbf{x}_j, \tag{3}
$$
all solutions $\textbf{v}$ with $\lambda \neq 0$ must lie in the span of $\textbf{x}_1, \dots, \textbf{x}_M$, hence (2) is equivalent to
$$
\lambda(\textbf{x}_k^T \textbf{v}) = \textbf{x}_k^T C \textbf{v}, \qquad \text{for } k = 1, \dots, M \tag{4}
$$
In (4), doesn't $\lambda(\textbf{x}^T \textbf{v}) = \textbf{x}^T C \textbf{v}$ hold for $\textbf{any}$ value of $\textbf{x}$? Why does (4) only hold when $\textbf{x} \in D$? I do not understand how they end up with (4).
Thanks.
AI: The statement says that (2) and (4) are equivalent. That means (2)$\Rightarrow$(4) and (4)$\Rightarrow$(2). The first implication is trivial, as you correctly pointed out.
$$\lambda v=Cv$$
implies
$$\lambda x^Tv=x^TCv$$
for all $x$, not just those from $D$. The second implication is a bit more tricky, and that is what the proof is about. It tells you that if you want to check whether a vector is an eigenvector of $C$, you don't have to check whether (2) is satisfied: when (4) is satisfied, (2) is satisfied as well.
Imagine you have 2 points in 3D space. Those 2 points are
$$x_1=(-1,-1,0)$$
$$x_2=(1,1,0)$$
Note that this toy "dataset" is centered and that both points lie in the $xy$ plane. Now the covariance matrix is
$$C=\begin{bmatrix}
1 & 1 & 0 \\[0.3em]
1 & 1 & 0 \\[0.3em]
0 & 0 & 0
\end{bmatrix}$$
Now you want to know whether $v=[1\ 1\ 0]^T$ is an eigenvector with eigenvalue 2. The statement tells you that you can either just check whether (2) is satisfied (3 equations) or whether
$$2\,x_1^Tv=x_1^TCv=-4$$
and
$$2\,x_2^Tv=x_2^TCv=4$$
which are only two equations. |
H: Algorithm for generating rules for classifying documents
I'm looking for an algorithm that can deduce a set of rules, based on a dataset of "training documents", that can be applied to classify a new unseen document. The problem is that I need these rules to be viewable by the user in the form of some string representation. For example, the algorithm found that documents have a minimum word count of 1000 and that there are 4 citations in each document. The key is that these rules must be deduced by an algorithm. An example of this in practice would be:
Document 1 contains 890 words and only 2 citations
I need it to return something like:
- You should add more words to make it better
- Add more citations to prove your point
AI: It sounds like you have two issues. The first one is preprocessing and feature extraction. The second one is how to learn classification rules.
The second issue is the easier one to approach. There are a number of algorithms for learning classification rules. You could use a decision tree algorithm such as CART or C4.5, but there are also rule induction algorithms like the CN2 algorithm. Both these types of algorithms can learn the types of rules you mention; however, rule induction based systems can usually be supplemented with hand-crafted rules in a more straightforward way than decision tree based systems, while, unless my memory fails me, decision tree algorithms generally perform better on classification tasks.
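A sketch of the decision tree route with scikit-learn, assuming you have already extracted numeric features such as word count and citation count (the toy data below is made up):
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: word_count, citation_count; labels: 1 = acceptable document, 0 = not
X = np.array([[1200, 5], [950, 4], [890, 2], [700, 1], [1500, 6], [400, 0]])
y = np.array([1, 1, 0, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=['word_count', 'citation_count']))
The printed thresholds are exactly the kind of string representation you can turn into user-facing suggestions like "add more words" or "add more citations".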
The first issue is a bit hairier. To recommend the types of changes you suggest you first need to extract the relevant features. There are pre-processors which perform part-of-speech tagging, syntactic parsing, named entity recognition etc., and if the citations follow a strict format, I guess a regular expression could perhaps solve the problem, but otherwise you have to first train a system to recognize and count the number of citations in a text (and the same for any other non-trivial feature). Then you can pass the output of this feature extraction system into the classification system. However, on reading your question again I'm unsure whether this problem might already be solved in your case? |
H: How to get summary statistics in Orange?
I just started using Orange, and am having trouble finding how to get basic summary statistics, like the n (count), average, and standard deviation.
Is there a widget that does this and I'm simply overlooking it?
AI: I found 3 ways to do it:
you have to use the Data Info widget, which gives the number of rows of your dataset, and the Box Plot widget, which prints the average and the standard deviation of each feature in your dataset. Pro: easy to use. Con: you are unable to use this data.
you can create a Python script that will compute the average and the standard deviation using the statistics package. Pro: you can reuse your data. Con: you have to write a script and format the output data correctly.
there is an add-on called "Orange3-Timeseries" with a Moving Transform widget that can compute the average and the standard deviation of your series. Pro: easy to use. Cons: the package is still a beta version and has some bugs, and the widget only works with time series. |
H: Finding out the scale when using MinMaxScaler()
I am using the MinMaxScaler() of sklearn to scale my features before using k-means. I need to find the scale used.
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler()
scaler= scaler.fit_transform(finance_features)
print scaler.scale_
However on using the line
print scaler.scale_
It shows an error saying no such attribute
AI: This is because you rebound the name scaler to the array returned by fit_transform, so it no longer refers to the fitted MinMaxScaler object. Just change it to:
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler()
scaled = scaler.fit_transform(finance_features)
print scaler.scale_
This should work. |
H: How can I fill NaN values in a Pandas DataFrame in Python?
I am trying to learn data analysis and machine learning by trying out some problems.
I found a competition "House prices", which is actually a playground competition. Since I am very new to this field, I got confused after exploring the data. The data has 81 columns, out of which 1 is the target column, which is the house value. This data contains multiple columns where the majority of values are "NaN". When I ran:
nulls = data.isnull().sum()
nulls[nulls > 0]
This shows the columns with missing values:
LotFrontage 259
Alley 1369
MasVnrType 8
MasVnrArea 8
BsmtQual 37
BsmtCond 37
BsmtExposure 38
BsmtFinType1 37
BsmtFinType2 38
Electrical 1
FireplaceQu 690
GarageType 81
GarageYrBlt 81
GarageFinish 81
GarageQual 81
GarageCond 81
PoolQC 1453
Fence 1179
MiscFeature 1406
At this point I am totally lost and I don't know how to get rid of these "NaN" values.
Any help would be appreciated.
AI: You can use the DataFrame.fillna function to fill the NaN values in your data. For example, assuming your data is in a DataFrame called df,
df.fillna(0, inplace=True)
will replace the missing values with the constant value 0. You can also do more clever things, such as replacing the missing values with the mean of that column:
df.fillna(df.mean(), inplace=True)
or take the last value seen for a column:
df.fillna(method='ffill', inplace=True)
Filling the NaN values is called imputation. Try a range of different imputation methods and see which ones work best for your data. |
H: adjust output from normalization?
I've trained a neural network that, given a min-max normalized input, provides a min-max normalized output.
This might be late, but is it possible to recover the actual output from a min-max normalized output, given that you know the actual min and max values?
So, an unnormalized output?
AI: Yes, just rearrange the formula for finding the normalised value. Where $x_i$ is the original attribute, and $z_i$ is the normalised value:
$$
\begin{align}
z_i &= \frac{x_i - min(x)}{max(x) - min(x)} \\
x_i &= z_i(max(x) - min(x)) + min(x)
\end{align}
$$
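If the scaling was done with scikit-learn, the fitted scaler can undo it directly; a minimal sketch with made-up numbers:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([[3.0], [7.0], [10.0]])       # original target values
scaler = MinMaxScaler().fit(y)             # learns min(x) and max(x)
y_norm = scaler.transform(y)               # normalised targets used for training
y_back = scaler.inverse_transform(y_norm)  # back to roughly [3., 7., 10.]
print(y_back.ravel())
The network's normalised predictions can be passed through inverse_transform in the same way. |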
H: What is "noise" in observed data?
I am reading Pattern Recognition and Machine Learning by Bishop, and in the chapter about probability, "noise in the observed data" is mentioned many times. I have read on the internet that noise refers to inaccuracy while reading data, but I am not sure whether that is correct. So, what actually is noise in observed data? And what are additive noise and Gaussian noise?
AI: When you have sensors, the values you receive change even if the signal that was recorded didn't change. This is one example of noise.
When you have a model of the world, it abstracts from the real relationships by simplifying things which are not too important. To take into account for the simplification, you model the error as noise (e.g. in a Kalman filter).
But noise sources can be anything. For example, in an image classification problem, data compression can distort an image. Images can have different resolution; low resolution signals are harder to classify than high resolution figures. Aliasing effects can also distort images.
And what is additive noise?
Suppose your system equation is
$$z = H \cdot x$$
where $z \in \mathbb{R}^{n_m}$ is the observation, $x \in \mathbb{R}^{n_x}$ is the state you're interested in and $H \in \mathbb{R}^{n_m \cdot n_x}$ is a transformation matrix. Then the noise could interact with your system in any way. But most of the time it is logical and practical that the noise is additive, meaning your model is
$$z = H \cdot x + r$$
where $r$ is sampled from a random variable of any distribution.
What is (additive) Gaussian noise?
$$r \sim \mathcal{N}(\mu, \sigma^2)$$
See normal distribution |
H: Reducing a page of content to a short paragraph
I remember years ago, Yahoo detailed how they were able to reduce a webpage down to a short paragraph of text succinctly summarising the content in sentences, as opposed to a list of keywords. What is this called? Is there any open/free code to do this?
AI: Look into the TextRank algorithm; here is the paper. You can find a neat Python implementation here: textrank implementation. If I am not wrong, gensim also offers an implementation.
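If memory serves, gensim 3.x shipped a TextRank-based summarizer in its summarization module (it was dropped in later releases), so a quick try looks roughly like this:
from gensim.summarization import summarize

text = open('webpage.txt').read()          # the extracted page text
print(summarize(text, ratio=0.1))          # keep roughly 10% of the sentences
The output is a handful of the original sentences ranked as most central, which matches the "short paragraph" behaviour you describe. |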
H: Measuring Difference Between Two Sets of Likert Values
I asked users to complete a Likert survey (1-5) at the start and at the conclusion of an activity. What is the best way to show the rate of change/difference between these two result sets? I was considering a Pearson correlation, but after thinking about things more I don't believe that it is the most appropriate correlation metric.
My data looks similar to below:
Before  After
1       2
1       2
2       2
3       3
3       4
4       2
Result: XXX?
Thanks all
AI: If you are trying to see whether the activity had any positive effect on the people, modeling a relationship between before and after with correlation is a fair approach, while keeping some edge cases in mind. Correlation looks at whether the $X$'s and $Y$'s are linearly correlated, or how one variable is influenced by the other. But I guess what you might be looking for is something simpler, like the average increase in rating after the activity. For example
import numpy as np
from scipy import stats
ratings_before = np.array([1,2,1,3,4,5])
ratings_after = np.array([5,5,5,5,5,5])
print "P.Corr: ",stats.pearsonr(ratings_before, ratings_after)[0]
print "Avg.Increase: ",np.mean(ratings_after-ratings_before)
which gives
Nan
2.333
Now assume that your activity went so well that everyone rated you 5 stars. Then the correlation turns out as $NaN$, since the denominator goes to zero and no variation in $Y$ can be explained by $X$.
Hope this helps. |
H: Reducing sample size
I have a large dataset (around $ 10^6 $ samples) and an algorithm that will surely choke on that much data.
Suppose that I have removed duplicates and near-duplicates. What are the well-known techniques for reducing sample size without losing too much of the information possibly encoded in the initial dataset?
I thought about using some clustering algorithm (which scales well with respect to number of clusters, possibly BIRCH) and use the resulting clusters to find $ N $ nearest points to cluster centroid. However this feels somehow wrong.
AI: One method would be to take many subsets of your dataset, i.e.
bootstrapping, build your models, perform cross-validation and calculate the average performance. This is a good explanation of how the amount of data affects the model outcomes: https://stackoverflow.com/questions/25665017/does-the-dataset-size-influence-a-machine-learning-algorithm
Play around with the size of your subsets until you start getting stable results. |
H: When is something a Deep Neural Network (DNN) and not NN?
When would a neural network be defined as a Deep Neural Network (DNN) and not a NN?
A DNN, as I understand it, is a neural network with many layers, while simple neural networks usually have fewer layers... but what are "many" and "few" in numbers? Or is there some other definition?
What about networks trained using TensorFlow or Caffe? I haven't (as far as I know) seen anybody manually design a network with many, many layers.
They seem to promote their tools for creating DNNs, but is it actually a DNN if you only make a network with two layers?
AI: You are right. Broadly, any network with more than two layers between the input and the output is considered a deep neural network. Libraries like TensorFlow provide efficient architectures for deep learning applications such as image recognition or language modelling, using convolutional neural networks and recurrent neural networks. Another thing to keep in mind is that the power of the network also has to do with the number of units used in each layer, not only its depth. Mainly, as your non-linear hypotheses get more complex, you will need deeper neural networks. |
H: Why is learning rate causing my neural network's weights to skyrocket?
I am using tensorflow to write simple neural networks for a bit of research and I have had many problems with 'nan' weights while training. I tried many different solutions like changing the optimizer, changing the loss, the data size, etc., but to no avail. Finally, I noticed that a change in the learning rate made an unbelievable difference in my weights.
Using a learning rate of .001 (which I thought was pretty conservative), the minimize function would actually exponentially raise the loss. After one epoch the loss could jump from a number in the thousands to a trillion and then to infinity ('nan'). When I lowered the learning rate to .0001, everything worked fine.
1) Why does a single order of magnitude have such an effect?
2) Why does the minimize function literally perform the opposite of its function and maximize the loss? It seems to me that that shouldn't occur, no matter the learning rate.
AI: You might find Chapter 8 of Deep Learning helpful. In it, the authors discuss training of neural network models. It's very intricate, so I'm not surprised you're having difficulties.
One possibility (besides user error) is that your problem is highly ill-conditioned. Gradient descent methods use only the first derivative (gradient) information when computing an update. This can cause problems when the second derivative (the Hessian) is ill-conditioned.
Quoting from the authors:
Some challenges arise even when optimizing convex functions. Of these, the most prominent is ill-conditioning of the Hessian matrix $H$. This is a very general problem in most numerical optimization, convex or otherwise, and is described in more detail in section 4.3.1.
The ill-conditioning problem is generally believed to be present in neural network training problems. Ill-conditioning can manifest by causing SGD to get “stuck” in the sense that even very small steps increase the cost function. [my emphasis added]
The authors provide a simple derivation to show that this can be the case. Using gradient descent, the cost function should change (to second order) by
\begin{equation}
\frac{\varepsilon^2}{2} g^{T} H g - \varepsilon g^{T} g
\end{equation}
where $g$ is the gradient, $H$ is the Hessian, and $\varepsilon$ is the learning rate. Clearly, if the second derivatives are large, then the first term can swamp the second, and the cost function will increase, not decrease. Since the first and second terms scale differently with $\varepsilon$, one way to alleviate this problem is to reduce $\varepsilon$ (although, of course, this can result in learning too slowly). |
H: analogy for pearson r statistics for binary classification task
I am trying to get idea how variables of my data correspond to target variable (binary class).
In regression, the Pearson r statistic is quite good for getting a sense of the variable relationships. I could also use it for classification, treating the classes 0 and 1 as real values, but that is a risky trick.
My question: is there any equivalent statistic saying which variable is good for a classification task? Thanks
AI: The Pearson Chi-Squared test may give you what you need:
https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
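For example, a rough scipy/pandas sketch (df, 'feature', 'cont_feature' and 'target' are placeholder names); the point-biserial correlation is the direct analogue of Pearson r for a numeric variable against a binary class:
import pandas as pd
from scipy import stats
table = pd.crosstab(df['feature'], df['target'])        # contingency table: categorical feature vs. binary target
chi2, p, dof, expected = stats.chi2_contingency(table)  # Pearson chi-squared test of independence
print(chi2, p)
r_pb, p_pb = stats.pointbiserialr(df['target'], df['cont_feature'])  # continuous feature vs. binary target
print(r_pb, p_pb)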
This is a good resource for analyzing categorical data: https://en.wikipedia.org/wiki/List_of_analyses_of_categorical_data
Also, confusion matrices can be very helpful: https://en.wikipedia.org/wiki/Confusion_matrix |
H: CSR scipy matrix does not update after updating its values
I have the following code in python:
import numpy as np
from scipy.sparse import csr_matrix
M = csr_matrix(np.ones([2, 2],dtype=np.int32))
print(M)
print(M.data.shape)
for i in range(np.shape(M)[0]):
for j in range(np.shape(M)[1]):
if i==j:
M[i,j] = 0
print(M)
print(M.data.shape)
The output of the first 2 prints is:
(0, 0) 1
(0, 1) 1
(1, 0) 1
(1, 1) 1
(4,)
The code is changing the value of the same index (i==j) and setting the value to zero.
After executing the loops then the output of the last 2 prints is:
(0, 0) 0
(0, 1) 1
(1, 0) 1
(1, 1) 0
(4,)
If I understand the concept of sparse matrices correctly, it should not be the case. It should not show me the zero values and the output of last 2 prints should be like this:
(0, 1) 1
(1, 0) 1
(2,)
Does anyone have explanation for this? Am I doing something wrong?
AI: It appears that CSR does not remove the zeros by default. You will first have to call eliminate_zeros() on your object. Once you do this you will see that the data contains only the non zero elements of your matrix.
after running the loop in your code:
print(M)
gives
(0, 0) 0
(0, 1) 1
(1, 0) 1
(1, 1) 0
let us inspect the data
print(M.data)
we see that the zero elements are stored,
[0 1 1 0]
let us remove the zero elements to make our matrix sparse
M.eliminate_zeros()
and inspect whether the matrix is sparse,
print(M.data)
this confirms that the zero elements are removed and hence the matrix is sparse,
[1 1]
I hope this helps. |
H: What are recommended ways\tools for processing large data from Excel Files?
A Very Happy New Year! I'm currently working on an analytics project with large volumes of data stored in excel files (about 50GB in 1000 files). The files use a custom formatting to store date-time data to the millisecond. The processing also has to be efficient in view of the large data volume. What is the recommended methodology and tool to handle this?
I've seen others convert excel to CSV, and then confining their analysis to the CSV itself. Is this method advantageous in the sense that CSV is faster to read in, and can be processed by a larger toolset? Is there any powerful tool\library that can do batch conversion, and even extract custom formatting data?
Thanks and Regards
AI: pandas loads a CSV up to 10x faster than an Excel file. So, if you can, please convert these files to CSV, at least whichever ones are being loaded several times.
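For example, a minimal batch-conversion sketch with pandas (the file pattern, the 'timestamp' column name and its format string are assumptions you would adapt to your custom formatting):
import glob
import pandas as pd
for path in glob.glob('data/*.xlsx'):          # adjust the pattern to your files
    df = pd.read_excel(path)                   # requires xlrd/openpyxl
    # parse the custom date-time format explicitly, down to the millisecond
    df['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S.%f')
    df.to_csv(path.replace('.xlsx', '.csv'), index=False)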
I have been using Cython, which seems to speed up read functions and processing. Please follow this link for more details. It says that using Cython speeds up the processing 10x.
Suggestion: If you can, I would recommend storing the data in a database and using an ORM like Django's or peewee for the data processing.
Please let me know if you have any other queries. |
H: How to determine feature importance while using xgboost in pipeline?
How to determine feature importance while using xgboost (XGBclassifier or XGBregressor) in pipeline?
AttributeError: 'Pipeline' object has no attribute 'get_fscore'
The answer provided here is similar but I couldn't get the idea.
AI: As I found, there are a few ways to determine feature importance when the XGBoost model is inside a pipeline:
First:
print(grid_search.best_estimator_.named_steps["clf"].feature_importances_)
result:
[ 0.14582562 0.08367272 0.06409663 0.07631433 0.08705109 0.03827286
0.0592836 0.05025916 0.07076083 0.0699278 0.04993521 0.07756387
0.05095335 0.07608293]
Second:
print(grid_search.best_estimator_.named_steps["clf"].booster().get_fscore())
result:
{'f2': 1385, 'f11': 1676, 'f12': 1101, 'f6': 1281, 'f9': 1511, 'f7': 1086, 'f5': 827, 'f0': 3151, 'f10': 1079, 'f1': 1808, 'f3': 1649, 'f13': 1644, 'f8': 1529, 'f4': 1881}
Third:
print(grid_search.best_estimator_.named_steps["clf"].get_booster().get_fscore()) |
H: DTW (Dynamic Time Warping) requires prior normalization?
I'm trying DTW from mlpy, to check similarity between time series.
Should I normalize the series before processing them with DTW? Or is it somewhat tolerant and I can use the series as they are?
All time series stored in a Pandas Dataframe, each in one column. Size is less than 10k points.
AI: DTW often uses a distance between symbols, e.g. a Manhattan distance ($d(x, y) = |x-y|$). Whether the symbols are samples or features, they might require amplitude (or at least some) normalization. Should they? I wish I could answer such a question in all cases. However, you can find some hints in:
Dynamic Time Warping and normalization
Section 1.2.1 of Searching and Mining Trillions of Time Series Subsequences under Dynamic Time Warping
Normalization for Dynamic Time Warping |
H: Problem when loading a XGBoost model in a different computer
I'm working on a project and we are using XGBoost to make predictions. My colleague sent me the model file, but when I load it on my computer it doesn't run as expected.
When I changed one variable in the model input from 0 to 1 it didn't change the result (in 200 different lines), so I started to investigate. We compared a lot of different results and they were all different.
When I run xgb_tree it shows that max_depth is 0, but it is supposed to be 4. When I run xgb_tree$results it says max_depth = 4.
We also tried a lot of different save methods (.rda, .rds, .model) but none of them worked.
Any suggestion would be welcome, thanks.
EDIT: Posting the sessionInfo()
His:
R version 3.2.5 (2016-04-14)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] plyr_1.8.4 car_2.1-4 acepack_1.4.1 Ckmeans.1d.dp_3.4.6-4
[5] Hmisc_4.0-0 Formula_1.2-1 survival_2.40-1 memisc_0.99.7-1
[9] MASS_7.3-45 Information_0.0.9 minerva_1.4.5 randomForest_4.6-12
[13] pROC_1.8 xgboost_0.4-4 caret_6.0-73 lattice_0.20-33
[17] ggplot2_2.2.0 miscTools_0.6-22 reshape_0.8.6 data.table_1.9.8
[21] dplyr_0.5.0 e1071_1.6-7 lubridate_1.6.0 psych_1.6.9
[25] readr_1.0.0 stringr_1.1.0 stringi_1.1.2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.8 class_7.3-14 assertthat_0.1 digest_0.6.10
[5] foreach_1.4.3 R6_2.2.0 MatrixModels_0.4-1 stats4_3.2.5
[9] lazyeval_0.2.0 minqa_1.2.4 SparseM_1.74 nloptr_1.0.4
[13] rpart_4.1-10 Matrix_1.2-4 labeling_0.3 splines_3.2.5
[17] lme4_1.1-12 foreign_0.8-66 munsell_0.4.3 compiler_3.2.5
[21] mnormt_1.5-5 mgcv_1.8-12 htmltools_0.3.5 nnet_7.3-12
[25] tibble_1.2 gridExtra_2.2.1 htmlTable_1.7 codetools_0.2-14
[29] ModelMetrics_1.1.0 grid_3.2.5 nlme_3.1-125 gtable_0.2.0
[33] DBI_0.5-1 magrittr_1.5 scales_0.4.1 reshape2_1.4.2
[37] doParallel_1.0.10 latticeExtra_0.6-28 RColorBrewer_1.1-2 iterators_1.0.8
[41] tools_3.2.5 parallel_3.2.5 pbkrtest_0.4-6 colorspace_1.3-1
[45] cluster_2.0.3 knitr_1.15.1 quantreg_5.29
Mine:
R version 3.3.1 (2016-06-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
locale:
[1] LC_COLLATE=Portuguese_Brazil.1252 LC_CTYPE=Portuguese_Brazil.1252 LC_MONETARY=Portuguese_Brazil.1252
[4] LC_NUMERIC=C LC_TIME=Portuguese_Brazil.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] caret_6.0-73 ggplot2_2.1.0 lattice_0.20-33 plyr_1.8.4 xgboost_0.4-4
[6] shinydashboard_0.5.3 shiny_0.14.2
loaded via a namespace (and not attached):
[1] Rcpp_0.12.8 nloptr_1.0.4 iterators_1.0.8 tools_3.3.1 digest_0.6.10 lme4_1.1-12
[7] jsonlite_1.1 nlme_3.1-128 gtable_0.2.0 mgcv_1.8-12 Matrix_1.2-6 foreach_1.4.3
[13] parallel_3.3.1 SparseM_1.72 stringr_1.1.0 MatrixModels_0.4-1 stats4_3.3.1 grid_3.3.1
[19] nnet_7.3-12 data.table_1.9.6 R6_2.1.3 minqa_1.2.4 reshape2_1.4.2 car_2.1-3
[25] magrittr_1.5 scales_0.4.0 codetools_0.2-14 ModelMetrics_1.1.0 htmltools_0.3.5 MASS_7.3-45
[31] splines_3.3.1 rsconnect_0.4.3 pbkrtest_0.4-6 mime_0.5 xtable_1.8-2 colorspace_1.2-6
[37] httpuv_1.3.3 quantreg_5.29 stringi_1.1.1 munsell_0.4.3 chron_2.3-47
AI: The error was due to the R version. My colleague was running 3.2.5 and I was running 3.3.1; thanks to Stereo I noticed this and tested on the same versions. |
H: LSTM for capturing multiple patterns
I am trying to use an LSTM to predict daily usage for users. I have data for (say) 90 days of usage for a large number of users. Based on business knowledge (and initial analysis) we know users fall roughly into different categories. E.g. daily users would have a non-zero usage almost every day, weekly users would have one or two days of non-zero usage every 7 days and monthly users would have a couple of days with non-zero usage per 30 days.
Sample data where each column is one day starting from October 1st and each row is data for one user. (The usage 'cycle' of each user might start on any day).
User 1: 10, 8, 10, 9, 0, 0, 11, ...
User 2: 0, 0, 0, 20, 0, 0, 0, 0, 0, 18, 0, 0, 0, ...
User 3: 40, 0, 0, 0, .....
where User 1 might be a "daily" user, User 2 is a "weekly" user and User 3 is a monthly user.
My first question is that can a single LSTM/deep learning model capture these different types of patterns?
The goal is to predict the daily usage (next couple of days based on past 90 days) for individual users.
Currently I am using a really simple LSTM (in Keras):
model = Sequential()
model.add(LSTM(1, input_shape=(90, 1), unroll=True))
model.compile(optimizer='rmsprop', loss='mse')
To help the model 'capture' the fact that different users might have different levels of usage I added the average usage (of non-zero values) for each user as the first data point for each row. The remaining 90 data points for each user remain as shown in the table above.
My second question is if I really need to add the average of the values to 'help' the model?
The problem is that even after 100 epochs the error remains unchanged.
And finally what can I change to make this to work?
Thanks in advance.
AI: Modeling is a process where you invent hypotheses and then try to reject them. Without seeing your data, there is no guarantee any of these would go far. But here are some thoughts.
1. 90 days of daily observations seem too few, which won't lead to much confidence no matter what model you use.
1.1 If you treat each user as one sequential observation, yes, your dataset is sufficient for models like RNNs. But ask yourself: are there users out there who are unique and different, and you don't know how they differ from the rest?
2. You seem to start with the belief that there are three types of users. I would suggest you don't. Let the data tell you how many types there are. And for the purpose of predicting future usage, you don't care about this type information.
3. A recurrent neural net is a good choice if you believe the user behaviors are temporally dependent individually and have cross-sectional dependence. Otherwise, a time series model like ARIMA may do a better job and require fewer resources to fit.
4. Lastly, if you want to normalize the data, you should do so by subtracting the row mean from each row and dividing by the row standard deviation (see the short sketch right after this list). By adding the mean as the first observation, you are going to confuse the neural net; it is not smart enough to treat this data point differently. Unless there is a huge variance across users, I would start without normalization, fit a model, figure out some sort of out-of-sample MSE, then repeat on normalized data.
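A minimal NumPy sketch of that row-wise standardization (here data is assumed to be an array of shape (n_users, 90) holding the daily usage):
import numpy as np
row_mean = data.mean(axis=1, keepdims=True)
row_std = data.std(axis=1, keepdims=True)
data_norm = (data - row_mean) / (row_std + 1e-8)   # the small epsilon guards against all-zero rows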
EDIT : There is no reason to train multiple RNNs on this set of data if you do not have a good segmentation argument. |
H: Is feature selection necessary?
I would like to run some machine learning model like random forest, gradient boosting, or SVM on my dataset. There are more than 200 predictor variables in my dataset and my target classes are a binary variable.
Do I need to run feature selection before the model fitting? Does it affect the model performance significantly or is there not much difference if I directly fit the model using all predictor variables?
AI: Feature selection might be considered a stage to avoid.
You have to spend computation time in order to remove features, you actually lose data, and the methods available for feature selection are not optimal since the problem is NP-complete.
Using it doesn't sound like an offer that you cannot refuse.
So, what are the benefits of using it?
Many features and a low samples/features ratio will introduce noise into your dataset. In such a case your classification algorithm is likely to overfit and give you a false feeling of good performance.
Reducing the number of features will reduce the running time in the later stages. That in turn will enable you to use algorithms of higher complexity, search over more hyperparameters or do more evaluations.
A smaller set of features is more comprehensible to humans. That will enable you to focus on the main sources of predictability and do more exact feature engineering. If you have to explain your model to a client, you are better off presenting a model with 5 features than a model with 200 features.
Now for your specific case:
I recommend that you begin by computing the correlations between the features and the concept (the target). Computing the correlations among all the features is also informative.
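A quick pandas sketch of both computations (df holding the 200+ predictors and y the 0/1 target are placeholders):
import pandas as pd
target = pd.Series(y, index=df.index)
corr_with_target = df.corrwith(target)             # correlation of each feature with the class
print(corr_with_target.abs().sort_values(ascending=False).head(20))
corr_between_features = df.corr()                  # correlations among the features themselves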
Note that there are many types of useful correlations (e.g., Pearson, mutual information) and many attributes that might affect them (e.g., sparseness, concept imbalance). Examining them instead of blindly going with a feature selection algorithm might save you plenty of time in the future.
I don't think that you will have a lot of running time problems with your dataset. However, your samples/features ratio isn't too high, so you might benefit from feature selection.
Choose a classifier of low complexity (e.g., linear regression, a small decision tree) and use it as a benchmark. Try it on the full dataset and on some datasets with a subset of the features. Such a benchmark will guide you in the use of feature selection. You will need such guidance since there are many options (e.g., the number of features to select, the feature selection algorithm), and since the goal is usually the prediction and not the feature selection itself, the feedback is at least one step away. |
H: What knowledge do I need in order to write a simple AI program to play a game?
I'm a B.Sc graduate. One of my courses was 'Introduction to Machine Learning', and I always wanted to do a personal project in this subject.
I recently heard about different AI training to play games such as Mario, Go, etc.
What knowledge do I need to acquire in order to train a simple AI program to play a game? And what game do you recommend for a beginner?
This is what I know in Machine Learning so far -
Introduction to the course and to machine learning. K-Nearest Neighbor algorithm, and K-means algorithm
Statistical Inference
Gaussian Mixture Model (GMM) and Expectation Maximization (EM)
Probably Approximately Correct (PAC) model, including generalization bounds and model selection
Basic hyperplane algorithms: Perceptron and Winnow.
Support Vector Machines (SVM)
Kernels
Boosting weak learners to strong learners: AdaBoost
Margin-Perceptron
Regression
PCA
Decision Trees
Decision Trees pruning and random forests
AI: There are multiple ways to approach solving game playing problems. Some games can be solved by search algorithms for example. This works well for card and board games up to some level of complexity. For instance, IBM's Deep Blue was essentially a fast heuristic-driven search for optimal moves.
However, probably the most generic machine learning algorithm for training an agent to perform a task optimally is reinforcement learning. Technically it is not one algorithm, but an extended family of related algorithms that all solve a specific formalisation of the learning problem.
Informally, Reinforcement Learning (RL) is about finding optimal solutions to problems defined in terms of an agent that can observe the state of an environment, take actions in that environment and experience rewards which are somehow related to the state and action. RL solvers need to be designed to cope with situations where rewards are received later than when important actions were taken, and this is usually achieved by the algorithm learning an internal expectation of later rewards associated with state and/or state-action pairs.
Here are some resources for studying Reinforcement Learning:
Reinforcement Learning: An Introduction (Second Edition)
Algorithms for Reinforcement Learning (PDF)
Udacity Reinforcement Learning course
David Silver UCL lectures on Reinforcement Learning
You will find the subject itself is quite large as more and more sophisticated variations of the algorithms are necessary as the problem to solve becomes harder.
Starting games for studying reinforcement learning might include:
Tic-tac-toe (aka Noughts and Crosses) - this can be solved easily using search, but it makes for a simple toy problem to solve using basic RL techniques.
Mazes - in the reinforcement learning literature, there are many examples of "grid world" games where an agent moves in single N,E,S,W steps on a small board that can be populated with hazards and goals.
Blackjack (aka 21)
If you want to work with agents for playing video games, you will also want to learn about neural networks and probably in some detail - you will need deep, convolutional neural networks to process screen graphics.
A relatively new resource for RL is OpenAI Universe. They have done a lot of work to package up environments ready to train agents against, meaning you can concentrate on studying the learning algorithms, as opposed to the effort of setting up the environment.
Regarding your list of current skills: None of them are directly relevant to reinforcement learning. However:
If you can understand the maths and theory from your previous course, then you should also be able to understand reinforcement learning theory.
If you have studied any online or batch supervised learning techniques, then these can be used as components inside a RL framework. Typically they can be used to approximate a value function of the game state, based on feedback from successes and failures so far. |
H: Computing weights in batch gradient descent
I have a reasonably large set of images that I want to classify using a neural network. I can't fit them all into memory at once, so I decided to process them in batches of 200. I'm using a cross-entropy cost function with a minimization algorithm from numpy.
My question is: is it correct to pass learned weights between batches and use them as a starting point of the minimization? Would this eventually cause my hypothesis to fit all the data, or will each iteration simply re-fit the weights for itself? What is the general approach to such a problem?
AI: What you need to use is mini-batch gradient descent or stochastic gradient descent, and yes, carrying the learned weights over from one batch to the next as the starting point is exactly how they work. You will need to shuffle your samples and draw batches of the size you are aiming for; you will also have to make sure all of the samples in your data are included, which constitutes one epoch. Train for a few epochs depending on the data. Here is a good blog post on mini-batch gradient descent. |
H: Instead of one-hot encoding, can I store the same information in one column using a single value?
Rather than creating 15 additional columns full of sparse binary data, could I:
1) use the first 15 prime numbers as indexes for the 15 categories
2) store data by multiplying the prime numbers of the categories that otherwise would have a value of 1 in one-hot encoding
3) retrieve data by factorizing the value generated by multiplying unique prime numbers
Ex: 1914 would yield the list [2, 3, 11, 29] which would let you know that the user with the 1914 value has property 2, 3, 11, and 29 but nothing else.
I understand this is limited because BIGINTs can only hold the product of the first 15 prime numbers, but would it not still be useful in some situations and save time when searching the database? The entire table would be 14 columns smaller. I guess this is less about machine learning algorithms and more about storing and retrieving data.
AI: I suppose you could do this, but if your goal is simply to store 15 boolean values in a single column you are complicating things unnecessarily. Instead of going to all the trouble to compute the prime factors of the stored value, why don't you just store the flags as a bit string? Your example of 15 different possible values could be stored in a single SMALLINT (2-byte) SQL column. After retrieving the value, you would just need to extract the bits of interest for your record with some basic bitwise arithmetic. |
H: Which book is a standard for introduction to genetic algorithms?
I have heard of genetic algorithms, but I have never seen practical examples and I've never got a systematic introduction to them.
I am now looking for a textbook which introduces genetic algorithms in detail and gives practical examples how they are used, what their strengths are compared to other solution methods and what their weaknesses are.
Is there any standard textbook for this?
AI: Most of the "standard textbooks" (e.g., Goldberg, Mitchell, etc.) are pretty dated now. If you just want to have some confidence that you understand how the basic algorithms work, they're fine, but they tend to emphasize material that doesn't necessarily match the more modern way of understanding and talking about things like theoretical issues.
I've used the Eiben and Smith (https://www.amazon.com/Introduction-Evolutionary-Computing-Natural/dp/3642072852) book to teach evolutionary algorithms to my students over the past few years. It's a bit more concise than some of the others, but if you're OK with that, I think it does a better job of covering the stuff that's relevant about evolutionary computation today. |
H: Is the probabilistic cutoff in random forest flexible?
I fit the random forest to my dataset with a binary target class. I reset the probabilistic cutoff to a much lower value rather than the default 0.5 according to the ROC curve. Then I can improve the sensitivity (recall) but meanwhile sacrificed the precision.
Just want to confirm that the default 0.5 is not very meaningful and that a practical probability cutoff is often derived from the ROC curve in practice. Am I on the right track regarding the application of random forests and other tree-based models?
AI: Yes, you are exactly right. 0.5 is just a heuristic; the ROC curve and the precision-recall curve give a much better idea of what the cut-off should be. You can then use predict_proba, extract the probabilities and do the classification based on the cut-off you have inferred from the ROC and precision-recall curves. |
H: Mapping sequences of different lengths to fixed vector - Python
I am trying to make a chatbot using a deep neural network in python using keras. The problem I am having is that for the deep neural network to work, the input dimension has to be fixed. So my question is, how can I map a sequence of words (sentence of different lengths) to a fixed vector so I am able to feed this through a deep neural network?
AI: A recurrent neural network (RNN) is probably what you want to look into. It has exactly that capability of dealing with variable length input sequence. In fact, currently some of the most successful applications of RNN are in the NLP area, where the inputs are sentences of varying size. Just google recurrent neural network LSTM and you will find lots of tutorials. |
H: which machine learning technique can be used?
I want to understand the intent of the customer using his search queries, let's say if a customer is interested in yoga pants, he can either search for yoga pants or exercise pants or workout tights etc. Is there a model that I can use to find out all the search keywords that can be related to yoga pants?
AI: I think these are the methods that you can try out (Please feel free to add more to this list):
A highly precise but somewhat lower-recall option is to use a dictionary with almost all possibilities (manual effort, but it can be worth it).
Using Word2Vec. Mikolov has already trained on text data and released word vectors. Using this vector space, you can figure out which words are similar. You can experiment and find a threshold above which you say that two words are similar (for example, yoga and exercise would have decent similarity).
Train a custom W2V model if you have enough data (this is an unsupervised model, so you don't need to worry about tagging the data, only about finding large amounts of data relevant to the working domain).
You can use an RNN to find the most similar words in a corpus and use it for queries. This gives a bit more flexibility than W2V. |
H: The model performance vary between different train-test split?
I fit my dataset to the random forest classifier and found that the model performance varies among different train and test data splits. From what I have observed, it can jump from 0.67 to 0.75 in AUC under the ROC curve (fitted by the same model under the same parameter settings), and the underlying range may be wider than that. So what is the issue behind this phenomenon and how do I deal with this problem? As I understand it, cross validation is used for a specific split of train and test data.
AI: While training, your model will not have the same output when you train with different parts of the dataset. Cross validation is used to help negate this, by rotating the training and validation sets and training more.
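For example, a minimal scikit-learn sketch (X, y and the forest settings are placeholders) that makes this split-to-split variation explicit:
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring='roc_auc')
print(scores.mean(), scores.std())                 # the std shows how much AUC varies across splits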
Your dataset most likely has high variance, given the large jump in accuracy based on different validation sets. This means that the data is spread out, and can result in overfitting the model. You can imagine an overfitted model like this:
The green line represents the overfitted model.
A common technique to reduce overfitting in random forests is k-fold cross validation, with k between 5 and 10, together with growing a larger forest. |
H: How to calculate VC-dimension?
I'm studying machine learning, and I would like to know how to calculate the VC-dimension.
For example:
$h(x)=\begin{cases} 1 &\mbox{if } a\leq x \leq b \\
0 & \mbox{else } \end{cases} $, with parameters $(a,b) ∈ R^2$.
What is the VC-dimension of it?
AI: The VC dimension is an estimate for the capability of a binary classifier. If you can find a set of $n$ points, so that it can be shattered by the classifier (i.e. classify all possible $2^n$ labelings correctly) and you cannot find any set of $n+1$ points that can be shattered (i.e. for any set of $n+1$ points there is at least one labeling order so that the classifier can not seperate all points correctly), then the VC dimension is $n$.
In your case, first consider two points $x_1$ and $x_2$, such that $x_1 < x_2$. Then there are are $2^2=4$ possible labelings
$x_1:1$, $x_2:1$
$x_1:0$, $x_2:0$
$x_1:1$, $x_2:0$
$x_1:0$, $x_2:1$
All labelings can be achieved through the classifier $h$ by setting the parameters $a<b \in R$ such that
$a<x_1<x_2<b$
$x_1<x_2<a<b$
$a<x_1<b<x_2$
$x_1<a<x_2<b$
respectively. (Actually, $x_1 < x_2$ can be assumed w.l.o.g. but it is enough to find one set that can be shattered.)
Now, consider three arbitrary(!) points $x_1$, $x_2$, $x_3$ and w.l.o.g. assume $x_1<x_2<x_3$, then you can't achieve the labeling (1,0,1). As in case 3 above, the labels $x_1$:1 and $x_2$:0 imply $a<x_1<b<x_2$. Which implies $x_3$ > b and therefore the label of $x_3$ has to be 0. Thus, the classifier cannot shatter any set of three points and therefore the VC dimension is 2.
-
Maybe it becomes clearer with a more useful classifier. Let's consider hyperplanes (i.e. lines in 2D).
It is easy to find a set of three points that can be classified correctly no matter how they are labeled:
For all $2^3=8$ possible labelings we can find a hyperplane that separates them perfectly.
However, we cannot find any set of 4 points so that we could classify all $2^4=16$ possible labelings correctly. Instead of a formal proof, I try to present a visual argument:
Assume for now, that the 4 points form a figure with 4 sides. Then it is impossible to find a hyperplane that can separate the points correctly if we label the opposite corners with the same label:
If they don't form a figure with 4 sides, there are two "boundary cases": The "outer" points must either form a triangle or all form a straight line. In the case of the triangle, it is easy to see that the labeling where the "inner" point (or the point between two corners) is labeled different from the others can't be achieved:
In the case of a line segment, the same idea applies. If the end points are labeled differently than one of the other points, they cannot be separated by a hyperplane.
Since we covered all possible formations of 4 points in 2D, we can conclude that there are no 4 points that can be shattered. Hence, the VC dimension must be 3. |
H: Is the graphic of deep residual networks wrong?
I am currently wondering if the following graphic of deep residual networks is wrong:
I would say the graphic describes
$$\varphi \left (W_2 \varphi(W_1 x) + x \right ) \qquad \text{ with } \varphi = ReLU$$
The $\mathcal{F}(x)$ does not make sense to me. Assuming both weight layers are simple MLPs without bias, where the first one has a weight matrix $W_1$ and the second one has a weight matrix $W_2$, what is $\mathcal{F}$?
In the text, they define
$$\mathcal{F}(x) := \mathcal{H}(x) - x$$
where $\mathcal{H}(x)$ is "the desired underlying mapping" (whatever that exactly means).
Also, equation (1) seems strange to me:
$$y = \mathcal{F}(x, \{W_i\}) + x$$
In figure 5 they have two weight layers and call this a building block. Why is there only one weight matrix in this equation?
My thoughts
I think the authors could mean
$$\mathcal{F}_i = \varphi(W_i x)$$
In that case, in the image where $\mathcal{F}(x)$ is it should be
$$\mathcal{F}_1(x) = \varphi(W_1 x)$$
and where $\mathcal{F}(x) + x$ is should be
$$\mathcal{F}_2(\mathcal{F}_1(x)) + x = \varphi \left (W_2 \varphi(W_1 x) + x \right )$$
AI: (Per the diagram), $F(x)$ here is simply the entire two-layer non-linear chain that is operating on the input $x$. Then, the final output is simply $F(x) + x = H(x)$. That's it!
The thing that may be confusing you is the notation $F(\cdot)$. In this case, they do not mean for $F$ to encompass just one operation. Instead, it encompasses any set of operations processing $x$, up until you add $x$ back. Hope that helps!
PS: It is also common to see this type of nomenclature in a lot of DNN literature, whereby one refers to an entire deep non-linear chain as $D(x)$. For example in Generative Adversarial Networks, (GAN)s, $D(x)$ refers to the entire deep net devoted to the discrimination process, while $G(x)$ refers to the entire net devoted to the noise shaping. In both cases, they are composed of entire functions/nets, and do not signify simply one operation. |
H: What is the procedure to create a bag of visual words model with SIFT?
I have more than 1500 black and white classified images in a training set and I want to create a probabilistic model to classified new images. To be more explicit, given a new black and white image, my model has to predict:
animal: 84%
vegetal: 12%
mineral: 4%
I watched this lecture and I still have questions about the procedure to create this model.
1. Extraction of the keypoints of each image
If I correctly understood the video, the first step is to extract all the keypoints with SIFT from all the images to create a kind of dictionary of visual words. Each word is described by a multidimensional vector.
Next, I have to use the $k$-means method to create groups of visual words.
Question 1: how many groups should I create? Is there a rule of thumb to determine the number $k$ I have to use?
2. Creation of histograms
Now, for each image, I have to create an histogram/a vector where each features corresponds to one group defined in part 1. The value associated to each feature corresponds to the frequency of the word in the image.
Question 2: how can I create this vector? Indeed, each image is unique and will never match perfectly with the words in my dictionary (words which are, by the way, means of different descriptors). How can I bypass this problem?
3. Creation of the model
Finally, I have to create my model. In the video, the lecturer used SVM and had to create one classifier per category (binary classifiers). In my case, I have 100 different categories (my introduction was a simplification) and I would prefer to have only one classifier. Also, I want to get the probabilities, given an image, of it belonging to each category.
Question 3: is it possible to create only one classifier that gives probabilistic output?
To finish, don't hesitate to make suggestions or corrections about the procedure I described if you know a better way to classify my data.
AI: Question 1: how many groups should I create?
There is no rule of thumb, but typically if you have n categories of data you may want to set k to around 10*n. This has been known to work well; however, you can always change this number by looking at the accuracy on your validation data set.
Question 2: Creation of histograms
Let us say you extract m features (keypoint descriptors) from each image and your dictionary contains k words. For each of these m features you would find the closest match among the k words and assign that word to the feature. This way you will have m visual words, each taking one of the k values. Now you count the occurrences of these visual words and create a normalized histogram out of the counts. So each image will be represented by a normalized histogram over the k visual words.
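A rough sketch of the dictionary and histogram steps with OpenCV and scikit-learn (all_train_descriptors, k and the SIFT constructor name are assumptions; depending on your OpenCV build the constructor may be cv2.SIFT_create() instead):
import cv2
import numpy as np
from sklearn.cluster import KMeans
sift = cv2.xfeatures2d.SIFT_create()
kmeans = KMeans(n_clusters=k).fit(all_train_descriptors)   # k visual words, e.g. 10 * number of categories
def bovw_histogram(image):
    _, desc = sift.detectAndCompute(image, None)           # SIFT descriptors of the image
    words = kmeans.predict(desc)                           # closest visual word for each keypoint
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist.astype(float) / hist.sum()                 # normalized histogram over the k visual words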
Question 3: Classification
Now that we know how to extract a histogram for a given image, we compute the histograms on the training data, and given a test image we extract its histogram. How we classify it is entirely up to you. You could just find the closest match among the histograms of the training data using Euclidean distance, do a weighted sum, etc.
A simple way of obtaining probabilistic data is to train a neural network with softmax at the output layer and given a test image pass its histogram through the network and obtain the probabilities. |
H: the feasibility of image processing techniques for physics based images
When building deep learning models for image analytics-related applications, we sometimes apply various types of operations to enhance the image, such as an image denoising operation.
In my study, we have images generated by physical simulations. In other words, the physical simulations generate matrices of dimensions such as 256*256, which can be visualized as an image as well.
I am trying to apply a deep learning model to perform some analysis over these physics-based images. In the pre-processing steps, I can always apply those image-analytics techniques to pre-process my images, but I am not sure whether it makes sense. For example, denoising or other contrast enhancement operations can be used to improve the quality of images like photos. But would it make sense to use them to process the images generated by physics simulations?
AI: I think your intuition is correct, that it doesn't make sense to "clean" the matrices, if they are generated perfectly. I agree with @SyedAliHamza that noise should be eliminated as much as possible in the simulation.
On the other hand, depending on what analysis you are trying to do, you may want your model to generalize better. In that case, it would be good to apply some data augmentation, meaning adding some noise or other types of augmentation to your matrices. |
H: R data frame create a new variable which corresponds to one of the existing one
Currently this is what I get. "community_area_clean" is the new variable that I added by extracting a variable that lists all the communities from a data frame. "CommArea_Name" is the original "unclean" variable. But I find that the names in "community_area_clean" do not correspond to the names in "CommArea_Name". How can I fix it? Thanks.
AI: If my assumptions about CommArea_Num are correct, then:
data$community_area_clean = data$community_area_clean[data$CommAreaNum]
should remap that column. Note that the AREA_KM fields will remain in the order currently shown. |
H: In text classification, how can I use a neural network on word embeddings?
I have read some papers on text classification but they are pretty abstract. I fail at understanding how to train a multi layer perceptron with data made of sentences -> label.
A perceptron takes a vector input of size m and outputs a vector.
As a first step I reduced the dimensions and I map each word to a vector of n features, where n is a constant.
Now how do I mix the words in a sentence to make an input of size m ? Each word maps to a vector of size n but then what ? Knowing that every sentence has a different number of words.
Thanks
AI: Okay. These are the steps to follow.
Pad your sentences to a fixed length. Use the maximum length. If you are comfortable with creating buckets, you can pad the sentences to maximum length for individual bucket. To pad, add PAD tokens to each sentence till they become the size of the fixed sequence length you want to use.
Extract the unique words out of the padded corpus and assign a numeric ID to every word. You can use their index in the unique list as IDs. Filtering out rare words and replacing them with an unknown token will prove useful.
Now if you had a sentence like "I am John" and your maximum sequence length was 5, the sentence will become "I am John PAD PAD". Once you extract the unique words, you can represent the sentence as IDs, like [1, 2, 3, 4, 4]. Now if you had 100 sentences, you will have a matrix of size 100 x 5.
Now you can go one of two ways: represent each word as a one-hot vector, i.e. a vector of zeros with a one at the index it represents, or, more simply, train a word embedding. (A tiny sketch of these steps is given below.)
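A tiny pure-Python sketch of these steps (the example sentences and max_len are placeholders):
PAD, UNK = 'PAD', 'UNK'
max_len = 5                                                # fixed sequence length
sentences = [s.split() for s in ["I am John", "hello there"]]
vocab = [PAD, UNK] + sorted({w for s in sentences for w in s})
word2id = {w: i for i, w in enumerate(vocab)}              # word -> numeric ID
def encode(tokens):
    tokens = tokens[:max_len] + [PAD] * (max_len - len(tokens))   # truncate or pad to max_len
    return [word2id.get(w, word2id[UNK]) for w in tokens]
ids = [encode(s) for s in sentences]                       # matrix of shape (n_sentences, max_len)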
For details look into this RNN tutorial by Denny Britz, this is from scratch.
If you are of the deep learning persuasion then check out Denny Britz's CNN text classification tutorial, where he uses TensorFlow and trains his own word embedding. The first blog will give you all the information you need on how to prepare your text dataset. |
H: In XGBoost would we evaluate results with a Precision Recall curve vs ROC?
I am using XGBoost for payment fraud detection. The objective is binary classification, and the data is very unbalanced. One out of every 3-4k transactions is fraud.
I would expect the best way to evaluate the results is a Precision-Recall (PR) curve, not a ROC curve, since the data is so unbalanced.
However in the eval_metric options I see only area under the ROC curve (AUC), and there is no PR option. https://github.com/dmlc/xgboost/blob/master/doc/parameter.md
Also the documentation recommends AUC http://xgboost.readthedocs.io/en/latest/how_to/param_tuning.html
Does it make sense to not use a Precision-Recall (PR) curve?
AI: David, you can use mean average precision ('map') or, even better, logloss ('logloss'). Yes, for unbalanced data precision and recall are very important. I would suggest individually examining these metrics after optimizing with whatever eval_metric you choose. Additionally, there is a parameter called scale_pos_weight, which helps tell the model about the class distribution of your data. I have found this to greatly improve the performance in "rare event" cases. The following markdown doc has a list of all the parameters and their options. https://github.com/dmlc/xgboost/blob/master/doc/parameter.md |
H: Binary classification problem
Im new to ML, I have a data set for Music sales info for Vinyls, the data set contains:
Author
Album Title
Genre
Country
RevenueGenerated
AverageRevenueGenerated
My goal is to create a Model which I can help me understand which Music may generate a lot of revenue (Boolean). I created a field AverageRevenueGenerated which is the average of all Revenue generated for all artists.
I'm looking for a tool that can help me associate or generate insights based on the input signals above. This could be automatic, or a specific guide that allows me to say, for example, whether:
UK + Industrial
IT + Opera
Daft Punk + Electro
Will be potential high revenues.
I found a house prices example: https://yalantis.com/blog/predictive-algorithm-for-house-price/. Is it the same type of problem? I'm looking for which input signals may lead to the highest revenue. Any insights or pointers will be helpful.
AI: Yes, you can run a regression using the features you have and predict the revenue, but you will need more features to run an effective analysis, maybe adding features for the genre and whether the music is trending or not. To turn it into a binary classification problem you will have to decide on a cut-off and label everything above it as 1 and below it as 0; the median might be a good cut-off, but that solely depends on your data. This entire problem will boil down to feature engineering, which in my view is one of the hardest things to do. Also, you will need a lot of data for this. You can create features such as whether the song was featured on the Billboard charts. Try to think of it like a human and add such features to your model. As a person, I think the music that will generate more revenue will depend on:
Popularity and reputation of the artist.
Whether the genre is trending these days.
Whether the song has won any awards.
Stuff like this. So you will have to find features that quantify these things. Try to find some open source dataset for this kind of problem and work from there. |
H: Extending a trained neural network for a larger input
I have a seq2seq conversational model (based on this implementation) trained on the Cornell movie dialogs.
Now I want to fine-tune it on a much smaller dataset. The new data comes with the new words, and I want UNKs for as few new words as possible. So I'm going to create a new network with respect to the new input/output sizes, and I'm going to initialize its submatrices with learned weights I have at hand.
Could you say if this method can cause problems with the resulting model's performance? E.g. are the softmaxes likely to be affected significantly with these new initially untrained weights?
And if it's OK, do you have some examples on how to do it with the least pain in tensorflow's seq2seq setup?
AI: It's okay as long as the network you are planning to create has the same number of layers and units, i.e. the dimensions of your network must be compatible with the weights that you are borrowing from the trained model. Also, it would be better if you follow suriyadeepan's second blog post (practical seq2seq), where he trains a conversation model on Twitter chat. The code is much simpler and easier to understand, it is on a smaller dataset, and he mentioned that the bot trained on the Cornell movie dialog corpus wasn't performing so well. Mainly, to use the pre-trained weights all you have to do is load the model, create placeholders for the weights, assign the weights from the loaded model to the placeholders and run a forward pass. This blog and this question might help you with this task. |
H: Difference between subgradient SVM and kernel SVM?
What is the difference between subgradient svm and kernel svm?
From my understanding, subgradient SVM is a linear classifier that uses the hinge loss, and kernel SVM uses some kernel function for non-linear classification. I was wondering, if subgradient SVM is only a linear classifier, could I use a tree of non-linear SVMs to separate non-linear data? I would essentially do binary classification, separating one class vs. the rest, then the child node of the tree would work on the rest and separate the next class from the rest, and so on. Any general idea or feedback would be great.
AI: You are gravely misunderstanding SVM. The sub-gradient descent algorithm for SVM is a method to solve the underlying optimization problem of SVM.
An SVM is always a linear classifier, which can nevertheless, by using kernels, operate in a higher-dimensional space. Therefore, in input space, the separating hyperplane (linear!) computed in feature space (via the kernel!) appears non-linear.
Effectively you are thereby solving a non-linear classification task, but you are projecting into a higher dimensional feature space where the classification task is solved by a linear classifier.
Please read:
Wikipedia on svm
Please watch:
great lecture about SVM |
H: How to use ensemble of models in FM or FFM?
I am using Factorization Machines ( libfm) and also the Field Aware Factorization Machines (libffm) for a kaggle competition. I am currently using the single models of each respectively for prediction.
I came to know that we can use ensemble of models for both FM and FFM.
Can someone explain how this is done ?
AI: Most ensembling methods are independent of the internals of the individual models; e.g. something like majority voting, bagging or model stacking is directly applicable to any model.
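For example, the simplest ensemble is to average the prediction files the two tools write out (the file names are placeholders; the weights would normally be tuned on a validation set):
import numpy as np
p_fm = np.loadtxt('libfm_predictions.txt')     # predictions produced by libFM
p_ffm = np.loadtxt('libffm_predictions.txt')   # predictions produced by libFFM
blend = 0.5 * p_fm + 0.5 * p_ffm               # simple weighted average of the two models
np.savetxt('blend_predictions.txt', blend)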
Note: this was originally a comment to the OP. |
H: Normal equation result simplification
The derivation of the normal equation can be noted
$\theta = (X^TX)^{-1}(X^T)y$, where $X^{-1}$ is the inverse of $X$ and can also be written $inv(X)$.
But why can't we write $inv(X^T X)$ as $inv(X)inv(X^T)$, and also use the definition of inverse $inv(X^T)(X^T) = I$?
Then $\theta = inv(X^T X)(X^T)y = inv(X)inv(X^T)(X^T)y = inv(X)Iy = inv(X)y$
Why isn't the final result of normal equation in regression in machine learning written as $\theta = inv(X) y$ after simplifying?
AI: Only square matrices have an inverse, and neither $X$ nor $X^T$ here are (necessarily) square. I guess that's the most direct answer. $X$ may have a left-inverse $A$, such that $AX = I$. In that case, if $X\theta = y$ then $AX\theta = Ay$ so $\theta = Ay$.
The direct way to construct that $A$ is just where you started though; it's $(X^TX)^{-1}X^T$. The Gramian matrix $X^TX$ is (square, and) invertible if the features (columns) of $X$ are linearly independent, and, $X^TX$ is more importantly relatively quite small and so it's feasible to invert.
You normally wouldn't compute the inverse anyway, but try to solve the system $X^TX\theta = X^Ty$ with QR decomposition or something, but that's not what you're asking. |
H: Neural Network accuracy and loss guarantees?
This question is part of a sample exam that I'm working on to prepare for the real one. I've been stuck on this one for quite a while and can't really motivate my answers since it keeps referring to guarantees, can someone explain what guarantees we can have in a neural network such as the one described?
Network description:
Assume a deep neural network (DNN) that takes as input images of handwritten digits (e.g., MNIST). The network consists of 28x28 = 784 input units, a hidden layer of logistic units, and a softmax group of 10 units as the output layer. The loss function is the cross-entropy. We have many training cases and we always compute our weight update based on the entire training set, using the error backpropagation algorithm. We use a learning rate that's small enough for all practical purposes, but not so small that the network doesn't learn. We stop when the weight update becomes zero. For each of the following questions, answer yes or no, and explain very briefly.
Questions:
Will this DNN configuration enable the the weights to minimize the loss value (there may be multiple global optima)?
Does this DNN configuration guarantee that the loss reduces on every step? We use our network to classify images by simply seeing which of the 10 output units gets the largest probability when the network is presented with the image, and declaring the number of that output unit to be the network's guessed label.
Does this DNN configuration guarantee that the number of mistakes that the network makes on training data (by following this classification strategy) never increases?
AI: For my answers, I assume you are talking about batch (not mini-batch or stochastic) gradient descent.
No. Assume you initialize all weights with the same value. Then all gradients (in the same layer) will be the same. Always. Hence the network effectively only learns one parameter per layer. It is possible (and likely) that this is neither a global nor a local minimum of the network (which has more parameters).
Yes, as the learning rate is "small enough for all practical purposes". (No, if you use SGD or mini-batch gradient descent)
Unsure. I think the correct answer is "No, the network can make more mistakes in between with cross entropy." It is certainly possible to improve the CE loss while at the same time getting worse accuracy (see the example below). However, I'm not sure if the gradient would ever lead to such a result.
Example for 3
#!/usr/bin/env python
from math import log
def ce(vec):
"""index 0 is the true class."""
return -(log(vec[0]) + sum([log(1-el) for el in vec[1:]]))
a = [0.1001, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.0999]
print(ce(a))
b = [0.49, 0.51, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(ce(b))
gives:
ce(a) = 3.24971912864
ce(b) = 1.42669977575
Hence the cross entropy loss dropped (as expected), the probability for the correct class increased (as expecte) but it makes a mistake (if you simply take the argmax). |
H: Classification problem approach with Python
I am a Python beginner, just getting into machine learning, and I need advice on the approach I should use for my problem.
Here is an example of my data-set.
The RESULT is a corresponding INDEX in each VALUES array, and every row is a separate array. I need to find the probability of each index of an array being the RESULT based on its configuration, where the overall configuration and distribution matter more than any individual value.
AI: You are looking at a Classification problem.
Logistic regression, decision trees, SVM: any of the above can do the job for you. But selecting the best model depends upon how well it is able to predict the test set. Use cross-validation and see which model gives you better accuracy, and split your train and test sets accordingly. You can't expect the model to predict well on data it hasn't been trained for. See stratified sampling for starters.
Though your predictors might be categorical or ordinal, they can be treated as numerical.
You have built-in functions like predict_proba to find class probabilities. The references for these can be found in the links above. But go through how they work and what the criterion is for selecting the best separating plane in each classifier, so that you don't feel like you are using a black box. Since you say that you are a beginner, pandas will be a very useful module for reading data into dataframes and reshaping it as you like.
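For instance, a minimal end-to-end sketch with scikit-learn (the file name and the 'result' label column are assumptions):
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
df = pd.read_csv('data.csv')
X, y = df.drop('result', axis=1), df['result']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # accuracy on the held-out set
print(clf.predict_proba(X_te)[:5])        # class probabilities for the first few rows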
Hope this clears things up. |
H: My ADALINE model using Gradient Descent is increasing error on each iteration
I have used the Iris Dataset's 1st and 3rd Column for the features. and the labels of Iris Setosa (-1) and Iris Versicolor (1). I am using ADALINE as a simple classification model for my dataset. I am using gradient descent as the cost minimizing function. But on every iteration the error increases. What am I doing wrong in the python code?
import numpy as np
import pandas as pd
class AdalineGD(object):
def __init__(self, eta = 0.01, n_iter = 50):
self.eta = eta
self.n_iter = n_iter
def fit (self, X, y):
"""Fit training data."""
self.w_ = np.random.random(X.shape[1])
self.cost_ = []
print ('Initial weights are: %r' %self.w_)
for i in range(self.n_iter):
output = self.net_input(X)
print ("On iteration %d, output is: %r" %(i, output))
errors = output - y
print("On iteration %d, Error is: %r" %(i, errors))
self.w_ += self.eta * X.T.dot(errors)
print ('Weights on iteration %d: %r' %(i, self.w_))
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
print ("On iteration %d, Cost is: %r" %(i, cost))
prediction = self.predict(X)
print ("Prediction after iteration %d is: %r" %(i, prediction))
input()
return self
def net_input(self, X):
"""Calculate net input"""
return X.dot(self.w_)
def activation(self, X):
"""Computer Linear Activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
####### END OF THE CLASS ########
#importing the Iris Dataset
df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", header = None)
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values
#Adding the ones column to the X matrix
X = np.insert(X, 0, np.ones(X.shape[0]), axis = 1)
ada = AdalineGD(n_iter = 20, eta = 0.001).fit(X, y)
AI: I think something is wrong here.
self.w_ += self.eta * X.T.dot(errors)
You are moving in the positive direction of the gradient, while what you should be doing is moving in the negative direction of it:
self.w_ -= self.eta * X.T.dot(errors)
or
self.w_ += -self.eta * X.T.dot(errors)
see this for more clarification. |
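For intuition, a quick sketch of the maths behind the fix: the cost used in the code is
$$J(w)=\frac{1}{2}\sum_i\big(x_i^\top w-y_i\big)^2,$$
whose gradient is $\nabla_w J = X^\top(Xw-y) = X^\top\cdot\text{errors}$. Gradient descent moves against the gradient, so the update must be $w \leftarrow w - \eta\,X^\top\cdot\text{errors}$; adding the gradient instead pushes the weights uphill, which is why the cost grows on every iteration.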
H: Re-bucket weekly sales data and calculate descriptive statistics
I have sales data in weekly buckets like this:
weekID product SoldQty
1 1 10
2 1 20
3 1 30
4 1 40
5 1 50
6 1 60
7 1 70
1 2 10
2 2 20
Calculating the standard deviation of weekly sales per product is pretty straightforward.
Now, I am asking the question: how do you calculate the standard deviation for the same data, but bi-weekly bucketed? x-weekly bucketed?
Question 2: is there an efficient algorithm for calculating it on weekly data instead of materializing the x-weekly combinations?
From the business side it means that I have various forecast horizons (1wk, 4 wks, 6wks...) per product. And I would like to build the confidence intervals for predictions of SoldQty within the forecast horizon.
It all seems very similar to the Safety Stock calculations from logistics, but I would like to be sure.
UPDATE:
Please consider that there are multiple ways to re-bucket (re-combine) this data. I think that if the new bucket contains W weeks, there are W ways to re-bucket the data. For example, with 2-weekly buckets:
Way 1:
bucket wkid prod sold
1 1 1 10
1 2 1 20
2 3 1 30
2 4 1 40
3 5 1 50
3 6 1 60
4 7 1 70
...
Way 2:
bucket wkid prod sold
1 1 1 10
2 2 1 20
2 3 1 30
3 4 1 40
3 5 1 50
4 6 1 60
4 7 1 70
...
Both options make sense business-wise. Essentially you need to calculate the standard deviation of sales across fortnights, and the standard deviation in way 2 should be different from that in way 1.
Thanks in advance for your suggestions!
AI: If the week ID is given as you state, calculate a bucket variable $w_x=int(weekID/x)$. Then use a SQL statement to summarize the volume at each level of $w_x$.
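If you prefer to do it in Python rather than SQL, a rough pandas sketch of the same idea, including an offset loop to cover the W different re-bucketing ways from the update (column names follow the sample data; the details are an assumption, not a definitive recipe):
import pandas as pd

# sample data from the question, product 1 only
df = pd.DataFrame({'weekID': [1, 2, 3, 4, 5, 6, 7],
                   'product': [1, 1, 1, 1, 1, 1, 1],
                   'SoldQty': [10, 20, 30, 40, 50, 60, 70]})

W = 2  # bucket width in weeks
for offset in range(W):  # the W possible ways to re-bucket
    bucket = (df['weekID'] - 1 + offset) // W
    bucketed = df.assign(bucket=bucket).groupby(['product', 'bucket'])['SoldQty'].sum()
    print('offset', offset)
    print(bucketed.groupby(level='product').std())  # std of x-weekly sales per product
Note that the first or last bucket can be partial (e.g. week 7 alone); you may want to drop such buckets before taking the standard deviation.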
H: Words to numbers faster lookup
I'm training an LSTM for sentiment analysis on a review dataset downloaded from here. The music review dataset contains about 150K data points (reviews of varying length labelled pos or neg). After creating a dictionary, I'm running a script in Python to replace strings (words) with numbers that keras/theano will embed later.
The problem is that such a large dataset requires a lot of time for the lookup. I would appreciate it if anyone had a suggestion for a tool for faster lookup, or similar. Currently I just loop through every word in the corpus and replace it with the corresponding number from the dictionary (one-hot encoding, essentially).
EDIT:
I'm doing roughly the following: each Python list is a sentence (before tokenization here):
['noble', 'interesting_superlatives',...,'the_idea']
which I want to convert to a list of integers, like:
[143599, 12387,...,7582]
I referred to it (probably incorrectly) as one-hot encoding because for each word there is exactly one number in the dictionary.
AI: I'd like to extend the great @Emre's answer with another example - we are going to replace all tokenized words from the "1984" (c) George Orwell (120K words):
In [163]: %paste
import requests
import nltk
import pandas as pd
# source: https://github.com/dwyl/english-words
fn = r'D:\temp\.data\words.txt'
url = 'http://gutenberg.net.au/ebooks01/0100021.txt'
r = requests.get(url)
# read words into Pandas DataFrame
df = pd.read_csv(fn, header=None, names=['word'])
# shuffle DF, so we will have random indexes
df = df.sample(frac=1)
# convert Pandas DF into dictionary: {'word1': unique_number1, 'word2': unique_number2, ...}
lkp = df.reset_index().set_index('word')['index'].to_dict()
# tokenize "1984" (c) George Orwell
words = nltk.tokenize.word_tokenize(r.text)
print('Word Dictionary size: {}'.format(len(lkp)))
print('We have tokenized {} words...'.format(len(words)))
## -- End pasted text --
Word Dictionary size: 354983
We have tokenized 120251 words...
In [164]: %timeit [lkp.get(w, 0) for w in words]
10 loops, best of 3: 66.3 ms per loop
Conclusion: it took 66 ms to build a list of numbers for the list with 120K words from the dictionary containing 354,983 entries.
H: What is a tower?
In many tensorflow tutorials (example) "towers" are mentioned without a definition. What is meant by that?
AI: According to tensorflow documentation about CNN,
The first abstraction we require is a function for computing inference and gradients for a single model replica. In the code we term this abstraction a "tower".
To get the relevant context and more, check this. |
H: Using a K-NN Classification Approach for Time Series Data?
I have a dataset which contains time-series data of water flow over time. I have a flow meter connected to a kitchen faucet, and I am trying to cluster or classify specific water usage events.
The data is collected every second, and in each row I am given a value for the amount of gallons which are flowing through my flow meter.
For example, I am trying to classify someone washing their hands, filling a teapot, cleaning dishes, etc...
Is this something that I can use a k-NN Classification Approach to cluster these events? If a clustering based approach isn't good, what other method of classified would be good for this type of data?
If I run some experiments, I can classify each event and turn it into a supervised learning problem. But at the moment, none of the water events are classified.
A very abridged version of my dataset looks like the following:
EDIT
water = pd.DataFrame(shower1)
rng = pd.date_range('2016-09-01 00:00:00', '2016-09-30 23:59:58', freq='S')
water = water.reindex(rng,fill_value=0.0)
water = water['shower1']
df = pd.DataFrame({'time_stamp':rng,'water_amount':water})
starts = (df['water_amount']>0)&(df['water_amount'].shift(1)==0) #find all starts of events
n_events = sum(starts) #total number of events
df.loc[starts,'event_number'] = range(1,n_events+1) #numerate starts from 1 to n
df['event_number'] = df['event_number'].fillna(method='pad').fillna(-1) #forward fill all the values
df.loc[df['water_amount']==0,'event_number']=-1 #set all event numbers to -1 where the water amount is 0
df.groupby('event_number').agg({'time_stamp':'first',
'water_amount':'sum'}) #feature matrix
AI: It seems pretty clear from looking at the data when an event starts and ends (basically whenever there is a sequence of positive values). So, instead of starting with some complicated models, I'd suggest calculating a few simple features (like length of the event, total amount of water, amount/seconds, time to previous event, time of day in seconds from start of recording) for every event and then trying some clustering algorithm on that new data. k-NN might even produce something meaningful. But a statistical summary of the features can probably already give you a better idea of how to further approach this.
EDIT1
import pandas as pd
import numpy as np
rng = pd.date_range('2017-01-01 14:00:00', '2017-01-01 14:01:00', freq='S')
water = [0,0,0.2,0.3,0.4,0,0,0.3,0.2,0.5]*6+[0]
df = pd.DataFrame({'time_stamp':rng,'water_amount':water,'event_number':np.zeros(len(water))})
j = 1
for k in range(len(df)):
    if df.loc[k, 'water_amount'] == 0:
        df.loc[k, 'event_number'] = -1
    else:
        if df.loc[k - 1, 'water_amount'] > 0:
            df.loc[k, 'event_number'] = df.loc[k - 1, 'event_number']
        else:
            df.loc[k, 'event_number'] = j
            j = j + 1
df.groupby('event_number').agg({'time_stamp':'first',
'water_amount':'sum'}) #feature matrix
EDIT2
rng = pd.date_range('2017-01-01 14:00:00', '2017-01-01 14:01:00', freq='S')
water = [0,0,0.2,0.3,0.4,0,0,0.3,0.2,0.5]*6+[0]
df = pd.DataFrame({'time_stamp':rng,'water_amount':water})
starts = (df['water_amount']>0)&(df['water_amount'].shift(1)==0) #find all starts of events
n_events = sum(starts) #total number of events
df.loc[starts,'event_number'] = range(1,n_events+1) #numerate starts from 1 to n
df['event_number'] = df['event_number'].fillna(method='pad').fillna(-1) #forward fill all the values
df.loc[df['water_amount']==0,'event_number']=-1 #set all event numbers to -1 where the water amount is 0
df.groupby('event_number').agg({'time_stamp':'first',
'water_amount':'sum'}) #feature matrix |
H: Python SVM rgb cluster
This is the distribution of my data.
I want to use SVM with only one 'circle' to cluster most of the 0.
I tried to run it with the code
clf = svm.SVC(max_iter=1, kernel='rbf')
However, it gives a strange result
How should I do it correctly?
Data:
X
array([[ 5.46787217e+00, 2.09073426e-02],
[ -2.57653443e+00, 9.77145456e+00],
[ 1.09476747e+02, -1.71182599e+00],
[ 4.94810319e+01, -2.77826146e+00],
[ -1.15407498e+01, 1.94848276e+00],
[ 7.47153419e+00, -7.67879236e-01],
[ -3.45243619e+01, -1.75370697e+00],
[ 2.46902913e+00, -1.87289298e-01],
[ 3.04749853e+01, -1.46262345e+00],
[ 1.24751661e+01, -1.54323119e+00],
[ -1.85219673e+01, -2.28503662e+00],
[ -6.53200731e+00, -3.28311541e-02],
[ 3.44650256e+01, 7.53756355e-01],
[ 2.44812829e+01, -2.89021667e+00],
[ 5.24685269e+01, 3.66211052e-02],
[ 3.94686575e+01, -2.15955996e-02],
[ -2.65369170e+01, 1.08356664e+00],
[ -3.15280741e+01, -9.42529350e-01],
[ -2.35494219e+01, 3.89844921e+00],
[ -2.85181847e+01, -3.12756169e+00],
[ -3.95243116e+01, -1.77609801e+00],
[ -8.52462300e+00, -1.63727356e+00],
[ -1.15245929e+01, -1.65070818e+00],
[ -4.25492311e+01, 3.81336326e+00],
[ -2.25182450e+01, -3.10069245e+00],
[ -2.05293114e+01, -6.98507045e-01],
[ 1.14627013e+01, 1.25373855e+00],
[ -4.65242413e+01, -1.80744547e+00],
[ 4.14811122e+01, -2.81408713e+00],
[ -5.15256196e+01, -1.42161754e+00],
[ -2.45182249e+01, -3.10964886e+00],
[ 4.48148382e+00, -2.97978083e+00],
[ -2.75218769e+01, -2.32534049e+00],
[ 4.94623197e+01, 1.42391045e+00],
[ -2.95181747e+01, -3.13203990e+00],
[ 2.04251866e+01, 9.69838626e+00],
[ 5.45907950e+00, 2.02461229e+00],
[ -1.75219773e+01, -2.28055841e+00],
[ -4.05430138e+01, 2.42159570e+00],
[ -4.52466317e+00, -1.61936073e+00],
[ 3.64525308e+01, 3.56416072e+00],
[ -1.45367359e+01, 9.23848192e-01],
[ -4.95216559e+01, -2.42386107e+00],
[ -5.45377626e+01, 1.15293883e+00],
[ -2.55743514e+01, 9.49238869e+00],
[ 9.24618878e+01, 1.61647340e+00],
[ 8.54806703e+01, -2.61704597e+00]])
Y
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0]
AI: After some research and trial, it turns out the dimensions need to be normalised.
In this case, I consider 0 as normal and 1 as outliers.
I normalise each dimension by taking the 10th and 90th percentiles of the samples.
clf = svm.SVC(max_iter=1, kernel='rbf') |
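A rough sketch of that normalisation step, rescaling each dimension by its 10th/90th percentiles before fitting (X and Y are the arrays from the question; dropping max_iter=1 in favour of the default is my own assumption):
import numpy as np
from sklearn import svm

lo, hi = np.percentile(X, [10, 90], axis=0)   # per-dimension 10th and 90th percentiles
X_scaled = (X - lo) / (hi - lo)               # bring both dimensions to a comparable scale

clf = svm.SVC(kernel='rbf')                   # default max_iter, unlike the snippet above
clf.fit(X_scaled, Y)
print(clf.predict(X_scaled))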
H: Pattern Recognition on Financial Market
Which machine learning or deep learning model (it has to be supervised learning) is best suited for recognizing patterns in financial markets?
What I mean by pattern recognition in financial market :
The following image shows what a sample pattern (i.e. Head and Shoulders) looks like:
Image 1:
And the following image shows how it actually forms in real chart events:
Image 2:
What I'm trying to do is:
Any pattern similar to Image 1 can be defined as a Head and Shoulders pattern, but in a chart (price chart) it will not form as clearly as in Image 1. Image 2 is a sample of the Head and Shoulders pattern forming in a chart (price chart).
As Image 2 shows, the pattern cannot be identified by ordinary algorithms or analysis (because there are a lot of highs and lows forming a lot of structure, which can easily be misread as extra shoulders, heads or other structures).
I'm expecting to train a machine to recognize the Head and Shoulders pattern when a similar pattern (as in Image 2) is formed.
Thank you for your time.
Let me know if I'm taking it to wrong way.
I only have beginners knowledge on Machine Learning.
AI: These are some suggestions that might be useful.
The data on the curve are bumpier than the roads in my country, so I think you should start by smoothing the curve. There are many smoothing filters, ranging from simple median smoothing to local regression models like LOESS. There are some parameters to tweak; take a look at the example.
Find the local maxima. SciPy has an implementation for this (scipy.signal.argrelextrema), and this should help.
My idea is basically to smooth until you get your head and shoulders, i.e., three maxima.
Warning: although smoothing reduces the amount of noise (not in the literal sense) on the curve, it tends to shift the curve away from its original position.
A sample Python implementation will be like
from statsmodels.nonparametric.smoothers_lowess import lowess
import numpy as np
from scipy.signal import argrelextrema
import matplotlib.pyplot as plt
sample_points = np.array([1,2.3,3.5,3,4.5,5,2.25,33.3,5,6.7,7.3,56.0,70.1,4.2,5.4,6.2,4.4,100,2.9,45,10,3.4,4.8,50,2.3,3.45,5.5,6.7,7.9,8.7,6.1])
for i in np.arange(0.05, 0.5, 0.05):
    # i is the fraction of data points used for each local regression (the lowess 'frac' parameter); the Wikipedia article explains it, I guess
filtered = lowess(sample_points,range(len(sample_points)), is_sorted=True, frac=i, it=0)
maxima = argrelextrema(filtered[:,1], np.greater)
if len(maxima[0]) == 3:
plt.plot(filtered[:,1])
plt.show()
I hope this gives you a rough direction of where to look and what to check.
H: KMeans clustering to help label Multi-class Supervised model
EDITED:
Is it accepted practice to be able to use a KMeans clustering algorithm to help label data fed into a supervised model? (Unsupervised --feeds-> supervised)?
The reason is that relabeling millions of records is not possible, and it is a class-imbalance problem where the historical minority class is very valuable.
I feel like this is a reinforcement learning problem, but do not know enough about it to say so.
If none of the above, what is a good approach for an imbalanced fraud detection model, where precision and recall are better measures than accuracy?
AI: k-means will not 'label' points for you.
Clustering is not classification.
It's a much harder problem. Most of the time, you get bad results!
So rather than trying to automate this, use clustering to understand your data. Try to derive some rules to identify e.g. different kind of fraud. But never assume the clusters are all good (because they never are all good). |
H: Why the number of neurons or convolutions chosen equal powers of two?
In an overwhelming number of works devoted to neural networks, the authors suggest architectures in which the number of neurons in each layer is a power of 2.
What are the theoretical reasons (prerequisites) for this choice?
AI: Deep Neural Networks are usually trained on GPUs to speed up training time. Using powers of two for the network topology follows the same logic as using powers of two for image textures in computer games.
The GPU can take advantage of optimizations related to efficiencies in working with powers of two. (see https://gamedev.stackexchange.com/questions/26187/why-are-textures-always-square-powers-of-two-what-if-they-arent) |
H: How is a single element of the training set called?
This question is only about the vocabulary.
Do / can you say
data item
data sample
recording
sample
data point
something else
when you talk about elements of the training / test set? For example:
The figure shows 100 data items of the training set.
Database A contains the same data items as database B, but in another format.
The remaining data items were removed from the dataset.
Those 10 classes have 123456 data items.
Please provide papers with examples.
According to Google n-grams:
AI: The term you are looking for is "Example".
Source: Martin Zinkevich, Research Scientist at Google (http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf)
Instance: The thing about which you want to make a prediction. For example, the instance might be a web page that you want to classify as either "about cats" or "not about cats".
Label: An answer for a prediction task either the answer produced by a machine learning system, or the right answer supplied in training data. For example, the label for a web page might be "about cats".
Feature: A property of an instance used in a prediction task. For example, a web page might have a feature "contains the word 'cat'".
Example: An instance (with its features) and a label. |
H: What is context window size?
I am trying to implement a recurrent neural network machine translation system, and I am just learning these things.
I am creating a word embedding matrix. In order to do that, I should know my vocabulary size, the dimension of the embedding space, and the context window size.
What is a context window?
AI: The context window is the number of words you use to determine the context of each word. For example, if your sentence is "the quick brown fox", a context window of two means your samples look like (the, quick) and (the, brown). Then you slide by one word and your samples become (quick, the), (quick, brown) and (quick, fox), and so on. I would suggest reading this word2vec tutorial to understand the training method and terminology.
H: Machine Learning vs Deep Learning
I am a bit confused by the difference between the terms "Machine Learning" and "Deep Learning". I have Googled it and read many articles, but it is still not very clear to me.
A known definition of Machine Learning by Tom Mitchell is:
A computer program is said to learn from experience E with respect to
some class of tasks T and performance measure P, if its performance at
tasks in T, as measured by P, improves with experience E.
If I take an image classification problem of classifying dogs and cats as my task T, from this definition I understand that if I would give a ML algorithm a bunch of images of dogs and cats (experience E), the ML algorithm could learn how to distinguish a new image as being either a dog or cat (provided the performance measure P is well defined).
Then comes Deep Learning. I understand that Deep Learning is part of Machine Learning, and that the above definition holds. The performance at task T improves with experience E. All fine till now.
This blog states that there is a difference between Machine Learning and Deep Learning. The difference according to Adil is that in (Traditional) Machine Learning the features have to be hand-crafted, whereas in Deep Learning the features are learned. The following figures clarify his statement.
I am confused by the fact that in (Traditional) Machine Learning the features have to be hand-crafted. From the above definition by Tom Mitchell, I would think that these features would be learned from experience E and performance P. What could otherwise be learned in Machine Learning?
In Deep Learning I understand that from experience you learn the features and how they relate to each other to improve the performance. Could I conclude that in Machine Learning features have to be hand-crafted and what is learned is the combination of features? Or am I missing something else?
AI: In addition to what Himanshu Rai said, Deep learning is a subfield which involves the use of neural networks.These neural networks try to learn the underlying distribution by modifying the weights between the layers.
Now, consider the case of image recognition using deep learning:
A neural network model is organized into layers, and these layers are connected by links called weights. As the training process begins, the layers adjust their weights so that each layer learns to detect some feature and helps the next layer with its processing. The key point to note is that we don't explicitly tell a layer to learn to detect edges, or eyes, noses or faces; the model learns to do that itself, unlike classical machine learning models.
H: Does TensorFlow use vectorization in its operators?
I'm kinda new to tensorflow and just wanted to know if it already performs vectorization in its operators like multiplying matrices and so on.
And if not, whether it's doable.
As an example, let's say we have 2 matrices and we want to multiply them. Instead of using for loops to do the operation, I'm looking for a vectorized way to do this.
Thanks in advance.
AI: Yes, TensorFlow works with tensors, which are like vectors but more general, so don't worry about loops. Just make sure the matrices you are multiplying have compatible dimensions. It's all tensor arithmetic, which in your case reduces to ordinary matrix/vector algebra.
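For instance, a matrix multiplication is a single vectorized op with no explicit loops (a minimal sketch using the graph/session API of TensorFlow 1.x; in newer versions tf.matmul runs eagerly without a session):
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # 2 x 2
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])   # 2 x 2
c = tf.matmul(a, b)                         # vectorized matrix multiply

with tf.Session() as sess:
    print(sess.run(c))                      # [[19. 22.], [43. 50.]]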
H: How to calculate accuracy on keras model with multiple outputs?
I have a keras model that takes in an image with (up to) 5 MNIST digits and outputs a length and then (up to) 5 digits. I see that model.evaluate() reports accuracies for each of the outputs but how do I determine how good the model is at predicting the numbers? Do I need to write that myself?
AI: It's going to take a bit of engineering - since you have a variable-size output, you need to encode the length into the output in order to evaluate the accuracy of the model overall. If instead of outputting "up to 5 digits", you output an array of 5 predictions, where some non-digit (such as -1) indicates that there is no digit present, you can better evaluate your network. If you retrain your network as such (where $X$ is the array of images and $Y$ is an array containing arrays of the form $[1,4,3,-1,-1]$, for example), then model.evaluate($X_{test}$,$Y_{test}$) will work as expected.
If you don't want to re-train your network, you can write a simple function to take the output from model.predict($X_{test}$) and encode it into the corresponding format. This encode function will simply go from $[1,4,3]$ to $[1,4,3,-1,-1]$. You can then calculate the accuracy by sklearn.metrics.accuracy_score($encode$(model.predict($X_{test}$)),$Y_{test}$), where $encode$ is the aforementioned function. |
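A hedged sketch of such an encode step, padding each digit sequence to a fixed length of 5 with -1 (the function name and padding value are only illustrative):
import numpy as np

def encode(digit_sequences, max_len=5, pad_value=-1):
    """Pad variable-length digit predictions, e.g. [1, 4, 3] -> [1, 4, 3, -1, -1]."""
    out = np.full((len(digit_sequences), max_len), pad_value)
    for i, seq in enumerate(digit_sequences):
        out[i, :len(seq)] = seq[:max_len]
    return out

print(encode([[1, 4, 3], [7]]))
# [[ 1  4  3 -1 -1]
#  [ 7 -1 -1 -1 -1]]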
H: Recommender System: how to treat different events
I'm trying to build recommender based on user history from e-commerce. There are two(potentially more) types of events: purchase and view.
Is it okay to sum up the number of purchases and views for a given item (with purchases and views having different weights)? Or will I just mix up user intent this way?
AI: I assume you're using an implicit feedback recommender approach like ALS. Otherwise, summing data points generally won't make sense, such as if you're feeding it to a recommender that expects ratings.
The input to implicit ALS is, conceptually, weighted user-item pairs. Therefore it makes sense to perhaps use a sum of user-item clicks as the weight. Summing makes sense.
However, do a purchase and a view seem to carry the same weight? Obviously not. A purchase is a much stronger association and should be weighted accordingly.
As to how much, I'd suggest weighting purchases simply by price, to start. Then weight clicks by price times the purchase-to-click ratio. A $10 item purchase has weight 10; if 1 in 200 clicks results in a purchase, then weight a click at 10 × (1/200) = 0.05.
This is crude but probably about as close as anything for capturing this info in the context of ALS. |
H: Machine Learning - Range of Hypothesis space and choiceof Hypothesis function type
I am new to machine learning and seek your help in clarifying my elementary doubts. I did a fair amount of googling, but found that most of the literature jumps directly into the math.
What I know is that, given labelled training data, an ML algorithm chooses a hypothesis function h from a hypothesis space H.
As an example, assume that a feature vector in training data contains 3 features (x1 through x3)
Now the data from the training set is taken and plugged into a formula (function type). If x is the feature vector and w represents the coefficients of the formula, then the output is y = f(w,x) for a pre-determined function f.
My questions are:
1. Who decides the range of each of the coefficients?
2. Who decides the formula? Is the formula fixed for a ML algorithm?
3. What exactly is the hypothesis space? Is it range of w, or is it different formulas or both?
I acknowledge I have asked more than one question and against the rules, but it was convenient to logically group them in a single post.
AI: I will answer your questions as I understand them; the clarity you are looking for comes from the maths behind ML.
There is no fixed range of values for the coefficients or weights; finding the values of these weights is the work of the ML algorithm. Using regularization you can shrink the weights, but the range depends on the algorithm. The number of weights is something you can decide.
Yes, the formula is loosely fixed for a given ML algorithm.
e.g. for linear regression in your case
h(w,x) = w0*x0 + w1*x1 + w2*x2 +w3*x3 (where x0 =1)
The weights will change after each iteration, though. For a neural network it is the multiplication of a weight matrix with an input matrix for each layer.
The output of h(w,x) is the ML prediction, which you can compare against y.
All the available ML model forms together are your hypothesis space (H),
e.g., a quadratic,
a linear equation (the one above),
a high-degree polynomial,
a complex neural network.
H: Organize TSNE data into grid
I have some data reduced by TSNE into a 2D representation, which shows clear spatial features.
However, I'd like to format this into a grid – not just snapping data to the nearest grid square but spreading everything out to fill up a grid, preserving (as much as possible) the existing spatial relationships.
So far, I've only found this article, which might be close to what I need. This process might already have a name and I'm just one step from an easy Google solution, but at the moment I'm stuck!
AI: There seem to be a few options, but I found rasterfairy which is very easy to install and use. Has the added bonus of being able to fit to a rectangular grid, but also circular and other arbitrary shapes.
A very nice IPython notebook example: https://github.com/Quasimondo/RasterFairy/blob/master/examples/Raster%20Fairy%20Demo%201.ipynb
And some example results: |
H: Feature extraction from web browsing history of one website
I have a dataset of web browsing histories for users visiting a particular website over a period of time (say the last 90 days). Each user has a unique ID and several records showing when he/she visited a particular page on the website.
It looks like follow:
UserID,Timestamp,Path
U_1,2017-01-24 12:05:43,/sport/rugby/article_title
U_1,2017-01-24 12:06:56,/sport/football/article_title
U_1,2017-01-24 15:26:12,/finance/local/article_title
......
I do not have access to the content of the articles, I just know the path to the article.
My goal is to build a classifier to predict if a user will take an action or not. So I need to extract features from each user data.
Suppose that I have ground-truth information associated with each user, indicating when a user did the action.
My first guess is to aggregate all the records of each user and extract frequency features (hashing TF) from each level of the path.
So for demonstration, a particular user might visit the /sport category 5 times (first level category), and the /sport/football category 3 times (second level category), and the /sport/rugby 2 times (second level category).
So for each user I will have a feature vector representing the frequency of the first level categories, and another one for the second level categories and so on.
I can now train a classifier for each feature vector and do a late fusion of the results, or I can concatenate (early fusion) the different features and train a single classifier.
I can also extract the terms from the article titles and build a TFIDF feature.
What I am trying now is to extract the features from the N days preceding the action for the positive samples, and to randomly select N consecutive days for the negative users.
What other features could I extract, and are there better ML techniques to use in order to model and learn the users' web browsing behaviour?
AI: What you describe uses the data that the paths can offer. You can easily generate features from the date and time. For instance, given the date, you can generate a categorical variable denoting the weekday (Monday, Tuesday, etc.). Given the timestamp, you can generate binary variables to partition the day into four or more parts: is_morning, is_afternoon, etc. Somebody may only read in the morning or at night, and the aim of these features is to capture this.
Further, you can get the interactions between weekdays and day partitions. Such features may help to distinguish users who read about sports on Sunday mornings, while on Monday mornings they are at work and read financial news. Be careful of overfitting though. Note that trees have been shown to capture such complex interactions; giving them explicitly is still beneficial though.
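As a small illustration with pandas, using the sample rows from the question (the bin edges for the day parts are an arbitrary choice):
import pandas as pd

log = pd.DataFrame({
    'UserID': ['U_1', 'U_1', 'U_1'],
    'Timestamp': pd.to_datetime(['2017-01-24 12:05:43',
                                 '2017-01-24 12:06:56',
                                 '2017-01-24 15:26:12']),
    'Path': ['/sport/rugby/article_title',
             '/sport/football/article_title',
             '/finance/local/article_title']})

log['weekday'] = log['Timestamp'].dt.dayofweek      # 0 = Monday, ..., 6 = Sunday
hour = log['Timestamp'].dt.hour
log['day_part'] = pd.cut(hour, bins=[-1, 5, 11, 17, 23],
                         labels=['night', 'morning', 'afternoon', 'evening'])
log['weekday_part'] = log['weekday'].astype(str) + '_' + log['day_part'].astype(str)
print(log[['Timestamp', 'weekday', 'day_part', 'weekday_part']])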
H: Is Keras useful for professionals?
I know that Keras is developed for quick deployment. Is it just for beginners or also useful in industry for professionals?
AI: Keras is used in academia (see google scholar citations of Keras as a proxy for academy adoption) and hobbyists (see github stars or google results for keras in www.kaggle.com as proxies for hobbyist adoption).
It was recently bundled together with Google's tensorflow.
It is also used in industry, at least for prototyping (I know because I use it!), but I have found no source to back this statement. |
H: Clustering high dimensional data
TL;DR: Given a big image dataset (around 36 GiB of raw pixels) of unlabeled data, how can I cluster the images (based on the pixel values) without knowing the number of clusters K to begin with?
I am currently working on an unsupervised learning project to cluster images; think of it as clustering MNIST with 16x16x3 RGB pixel values, only that I have about 48 million examples that I need to cluster.
Without knowing their identities, I do know that some of the images are definitely related because they come from the same source, but - say - I also don't know an appropriate K in order to "just" run K-means on the set yet.
I was thinking of doing some manual 2D embedding using t-SNE and then clustering manually in the embedded space (a simpler task than doing it manually in 16x16x3-d), but all t-SNE implementations I could find required loading the data into memory. I also thought about first running t-SNE, then K-means on the t-SNE embedded data, but if you look at the results of t-SNE from MNIST, it's very obvious that these clusters might and probably will be distorted and skewed in nonlinear ways. So even if I were to know a K, the clusters would probably be split up. Using Mahalanobis distances for K-means might be an interesting thing, but since I don't know covariances to begin with, that appears to be a dead end as well.
Currently I'm checking whether I can run PCA compression on the examples to at least gain some memory back for t-SNE, but that might or might not work ... can't say for now.
Can somebody give me a pointer in the right direction to do this (ideally, but definitely not necessary in a Python, TensorFlow or Apache Beam/Dataflow context)? I was working on porting a Streaming/Ball K-means a while ago which does have the nice property of creating new clusters "on demand", but before I start implementing that again in Python/TensorFlow/Dataflow, I was hoping that somebody could give me some ideas where to start or what to avoid.
AI: I don't think any of the clustering techniques "just" work at such scale. The most scalable supposedly is k-means (just do not use Spark/Mahout, they are really bad) and DBSCAN (there are some good distributed versions available).
But you will be facing many other challenges besides scale because clustering is difficult. It's not as if it's just enough to run the algorithm and then you have clusters. Clustering is an explorative technique. There is no "correct" clustering. But rather you will need to run clustering again and again, and look at every cluster. Because there will not be a single parameter setting that gets everything right. Instead, different clusters may appear only at different parameters.
But the main challenge in your case will likely be the distance function. Except for idealized settings like MNIST, Euclidean distance will not work at all. Nor will anything working on the raw pixels. So first you need to do feature extraction, then define a similarity function.
When it comes to clustering, work with a sample. Cluster the sample, identify interesting clusters, then think of a way to generalize the label to your entire data set. For example by classification (your labeled data points are your training set, predict the labels of unlabeled points). |
H: The effect of a linear layer?
For the last couple of months I have worked on a regression problem: turning a framed audio file into a set of MFCC features for a speech recognition application.
I tried a lot of different network structures: CNNs, different normalisation techniques, different optimizers, adding more layers, and so on.
Finally I've got some decent results, but I don't understand why.
What I did was add a linear layer as the output, and somehow that reduced the error tremendously. I am a bit puzzled why a linear layer would have that much effect.
I mean, I am still trying to fit the actual output to the desired output.
Why would the activation function matter here? The weights are being adjusted based on the error, so why is the neural network better at adjusting for the error when the output is linear rather than non-linear (such as tanh or ReLU)?
AI: If you are performing regression, you would usually have a final layer as linear.
Most likely in your case - although you do not say - your target variable has a range outside of (-1.0, +1.0). Many standard activation functions have restricted output values. For example a sigmoid activation can only output values in range (0.0, 1.0) and a ReLU activation can only output positive values. If the target value is outside of their range, they will never be able to match it and loss values will be high.
The lesson to learn here is that it is important which activation function you use in the output layer. The function must be able to output the full range of values of the target variable. |
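For instance, in Keras a regression head for MFCC targets would typically end in a linear Dense layer (a sketch; the layer sizes are illustrative, not taken from your network):
from keras.models import Sequential
from keras.layers import Dense

n_features, n_mfcc = 40, 13   # illustrative sizes only

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(n_features,)))
model.add(Dense(n_mfcc, activation='linear'))   # linear output: can match any target range
model.compile(optimizer='adam', loss='mse')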
H: handling missing data in pandas python
I have to impute missing values in the column named <Age> with the mean of the nearest available values above and below in the column <Age>.
If the Age column had values in order
NA,7,6,NA,7,8,NA,NA,NA,10,5,NA,NA,5,9,9,12,8,6,NA,NA
After imputation, column should look like
7,7,6,6.5,7,8,9,9,9,10,5,5,5,5,9,9,12,8,6,6,6
I know about filling values with the mean, mode, or median, forward filling and backward filling, but not this one. I am new to this field and a student. Any help would be appreciated.
Thanks
AI: IIUC you can simply use Pandas Series.interpolate() method:
Data:
In [8]: NA = np.nan
In [9]: s = pd.Series([NA,7,6,NA,7,8,NA,NA,NA,10,5,NA,NA,5,9,9,12,8,6,NA,NA])
In [10]: s
Out[10]:
0 NaN
1 7.0
2 6.0
3 NaN
4 7.0
5 8.0
6 NaN
7 NaN
8 NaN
9 10.0
10 5.0
11 NaN
12 NaN
13 5.0
14 9.0
15 9.0
16 12.0
17 8.0
18 6.0
19 NaN
20 NaN
dtype: float64
Solution:
In [11]: s.interpolate().bfill()
Out[11]:
0 7.0
1 7.0
2 6.0
3 6.5
4 7.0
5 8.0
6 8.5
7 9.0
8 9.5
9 10.0
10 5.0
11 5.0
12 5.0
13 5.0
14 9.0
15 9.0
16 12.0
17 8.0
18 6.0
19 6.0
20 6.0
dtype: float64
if you need rounded integers:
In [13]: s.interpolate().round().bfill().astype(int)
Out[13]:
0 7
1 7
2 6
3 6
4 7
5 8
6 8
7 9
8 10
9 10
10 5
11 5
12 5
13 5
14 9
15 9
16 12
17 8
18 6
19 6
20 6
dtype: int32 |
H: Feed forward neural network, output as list of targets and associated probabilities
I am working through an FNN tutorial; right now it outputs a sigmoid probability from 0-1 (0.8956 for example). My own data has 3+ possible targets, so I need the output to be a list of the targets and the associated probability for testing on new samples. (In this instance it would be: 0 (probability), 1 (probability).) How would I go about this? Here is the code. Thanks.
import numpy as np
# sigmoid function
def nonlin(x,deriv=False):
if(deriv==True):
return x*(1-x)
return 1/(1+np.exp(-x))
# input dataset
X = np.array([ [0,0,1],
[0,1,1],
[1,0,1],
[1,1,1] ])
# output dataset
y = np.array([[0,0,1,1]]).T
# seed random numbers to make calculation
# deterministic (just a good practice)
np.random.seed(1)
# initialize weights randomly with mean 0
syn0 = 2*np.random.random((3,1)) - 1
for iter in xrange(10000):
# forward propagation
l0 = X
l1 = nonlin(np.dot(l0,syn0))
# how much did we miss?
l1_error = y - l1
# multiply how much we missed by the
# slope of the sigmoid at the values in l1
l1_delta = l1_error * nonlin(l1,True)
# update weights
syn0 += np.dot(l0.T,l1_delta)
print "Output After Training:"
print l1
AI: If you have more than two targets, then you should use softmax activation instead of sigmoid. Softmax activation gives you the probability associated with each class in the output. The only thing you need to do before applying softmax is to convert your targets into one-hot encoded vectors. If you want to know more about softmax in detail, you can read about it here |
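A brief numpy sketch of both ideas, softmax probabilities and one-hot targets (illustrative only, not a drop-in change to the network above):
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z, axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

y = np.array([0, 2, 1, 1])           # integer targets for 3 classes
one_hot = np.eye(3)[y]               # one-hot encoded targets

logits = np.array([[2.0, 1.0, 0.1],  # fake output-layer scores for 4 samples
                   [0.5, 0.5, 3.0],
                   [1.0, 2.0, 1.0],
                   [0.2, 2.5, 0.3]])
probs = softmax(logits)              # each row sums to 1: a probability per class
print(one_hot)
print(probs)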
H: Modern Feature Selection Review/Resources
I found this review paper by Guyon and Elisseeff in a 2003 JMLR publication but, although not outdated, it is quite old. Is there a more recent review or resource on the topic of feature selection?
JMLR Review 2003
AI: I looked into it quite recently and found these papers:
Saeys et al. (2007) (also already a bit dated, but a good overview, and I think the 'key' paper in this area).
Dougherty et al. (2009) (interesting paper discussing various difficulties such as classifier dependence, label distribution and sample size).
Bolón-Canedo et al. (2014).
Dessì & Pes (2015).
However, I do agree with you that there is a need for an updated review regarding this topic. |
H: What is the difference between data-driven methods and machine learning?
I was wondering (about a more semantic question), is there a difference between data-driven methods and machine learning? Or is it more correct to state that machine learning is a category of data-driven methods (and what then are other categories)?
AI: Based on the quotation you have added in your comments, data-driven approaches are approaches where you use data that describes past states ("historical data") to get a (not defined) system to give a desired output.
To understand whether this definition includes machine learning or not we will have to define "machine learning", and while there could be plenty of ways to define it I expect that it will be quite difficult to come up with a definition that does not include within it "Using a system that, based on given states will give a desired output".
Note that in this last definition I use "given states" and not "past states" as to include approaches such as online learning.
Bottom line is that unless you really want to hold to a narrow definition of "past states" it seems that machine learning approaches are a subset of data-driven approaches. |
H: Reason for square images in deep learning
Most of the advanced deep learning models like VGG, ResNet, etc. require square images as input, usually with a pixel size of $224x224$.
Is there a reason why the input has to be of equal shape, or can I build a convnet model with say $100x200$ as well (if I want to do facial recognition, for example, and I have portrait images)?
Is there increased benefit with a larger pixel size, say $512x512$?
AI: There is no requirement for specific pixel dimensions for convolutional neural networks to function normally. It is likely the values have been chosen for pragmatic reasons - such as a compromise between using image details vs number of parameters and training set size required.
In addition, if source data has a range of different aspect ratios, some portrait, some landscape, with the target object usually in the centre, then taking a square crop from the middle could be a reasonable compromise.
When you increase the input image size, you will also increase the amount of noise and variance that the network will need to deal with in order to process that input. That could mean more layers - both convolutional and pooling. It could also mean that you need more training examples, and of course each training example will be larger. Together, these increase the computation resources you need to complete training. However, if you can overcome this requirement, it is possible that you will end up with a more accurate model, for any task where the extra pixels could make a difference.
One possible rule of thumb for whether you would want higher resolution is if, for goal of your network, a human expert could make use of the extra resolution and perform better at the task. This might be the case in regression systems, where the network is deriving some numerical quantities from the image - e.g. for face recognition extracting biometrics such as distance between facial features. It might also be desirable for image-processing tasks such as automated masking - state of the art results for these tasks may still be lower resolution than the commercial images where we would like to apply them in practice. |
H: Benefits of stochastic gradient descent besides speed/overhead and their optimization
Say I am training a neural network and can fit all my data into memory. Are there any benefits to using mini batches with SGD in this case? Or is batch training with the full gradient always superior when possible?
Also, it seems like many of the more modern optimization algorithms (RMSProp, Adam, etc.) were designed with SGD in mind. Are these methods still superior to standard gradient descent (with momentum) with the full gradient available?
AI: On large datasets, SGD can converge faster than batch training because it performs updates more frequently. We can get away with this because the data often contains redundant information, so the gradient can be reasonably approximated without using the full dataset. Minibatch training can be faster than training on single data points because it can take advantage of vectorized operations to process the entire minibatch at once. The stochastic nature of online/minibatch training can also make it possible to hop out of local minima that might otherwise trap batch training.
One reason to use batch training is cases where the gradient can't be approximated using individual points/minibatches (e.g. where the loss function can't be decomposed as a sum of errors for each data point). This isn't an issue for standard classification/regression problems.
I don't recall seeing RMSprop/Adam/etc. compared to batch gradient descent. But, given their potential advantages over vanilla SGD, and the potential advantages of vanilla SGD over batch gradient descent, I imagine they'd compare favorably.
Of course, we have to keep the no free lunch theorem in mind; there must exist objective functions for which each of these optimization algorithms performs better than the others. But, there's no guarantee whether or not these functions pertain to the set of practically useful, real-world learning problems. |
H: Do convolutions "flatten images"?
I'm looking for a good explanation of how convolutions in deep learning work when applied to multi-channel images. For example, let's say I have a 100 x 100 pixel image with three channels, RGB. The input tensor would then have dimensions 100 x 100 x 3.
If I apply a convolution with N filters and a stride of one, will the output dimension be:
100 x 100 x 3 x N ?
or
100 x 100 x N ?
In other words, does the convolution that is applied "flatten" the image, or is the convolution applied on a channel by channel basis?
AI: In all the implementations for CNNs processing images that I have seen, the output in any layer is
Width x Height x Channels
or some permutation. This is the same number of dimensions as the input, no additional dimensions are added by the convolutional layers. Each feature map channel in the output of a CNN layer is a "flattened" 2D array created by adding the results of multiple 2D kernels (one for each channel in the input layer).
Usually even greyscale input images are expected to be represented as Width x Height x 1 so that they fit the same pattern and the same layer model can be used.
It is entirely feasible to build a layer design which converts a standard 2D+channels input layer into a 3D+channels layer. It is not something I have seen done before, but you can never rule out that it could be useful in a specific problem.
You may also see 3D+channels convolutions in CNNs applied to video, but in that case, the structure will be some variation of
Width x Height x Frames x Channels |
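You can verify the 2D+channels output shape directly, for example in Keras (a quick sketch; the filter count and padding are arbitrary choices):
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same',
                 input_shape=(100, 100, 3)))
print(model.output_shape)   # (None, 100, 100, 32): Width x Height x N, not x 3 x N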
H: Can the output of convolution on image be higher than 255?
I probably have a very simple question. When I convolve a (grayscale) image with some kernel I get some output. The original pixels can only be between 0 and 255. Is it possible for the output of a convolution to be higher? Because we are creating a feature map, which I understand is another image. Is the output only up to 255, do we scale it down, or does it not matter?
Thanks.
AI: It is possible. You may have values less than 0 or greater than 255. It will depend on the values in the kernel. If you want to display the convolution output properly you will need to scale it first. This process is referred to as "normalization".
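A quick illustration with scipy (the image values and kernel are arbitrary):
import numpy as np
from scipy.signal import convolve2d

img = np.arange(25, dtype=np.float64).reshape(5, 5) * 10   # grayscale values 0..240
kernel = np.array([[0, 1, 0],
                   [1, 1, 1],
                   [0, 1, 0]])                             # sums five neighbours
out = convolve2d(img, kernel, mode='valid')
print(out.max())                                           # 900.0, well above 255

# one possible normalization back to [0, 255] for display
out_scaled = 255 * (out - out.min()) / (out.max() - out.min())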
H: Ad click prediction: what are the negative examples?
I am analysing the log of a website and I would like to build a classifier to predict the users that are likely to click on an Ad.
The Ad can be displayed to the visitor several times.
To build any classifier I need positive and negative examples:
The positives are the visitors who clicked on the Ad (easy).
The negatives are the visitors who saw the Ad but didn't click (not very obvious).
Question
Is there a convention about how/when to consider a user as a negative example?
I presume that I should define a threshold of impressions (views) per user, if the user reaches this threshold without clicking on the Ad, I consider him/her as as negative example?
Any reference or guidance is highly appreciated.
AI: You're overthinking it. You might not need a threshold. Start with the simplest approach you possibly can: If you showed the ad to a visitor, then that's a negative example. Each time you show an ad to a visitor, you end up with an instance, whether positive or negative. If you've showed the ad three times to the same visitor, you end up with three examples. (Maybe all negative; or maybe one is positive and two are negative; but that's fine.)
Bonus tip: do some research on methods for handling class imbalance.
H: Why isn't leaky ReLU always preferable to ReLU given the zero gradient for x<0?
It looks to me like the leaky ReLU should have much better performance since the standard ReLU can’t use half of its space (x < 0 where the gradient is zero). But this doesn't happen and in practice most people use standard ReLU.
AI: One reason that ReL units were introduced is to circumvent the vanishing-gradient problem of sigmoidal units when they saturate (e.g. tanh at -1 and 1).
Another advantage of ReL units is that they saturate at exactly 0, allowing for sparse representations, which can be helpful when hidden units are used as input for a classifier. The zero gradient can be problematic, however: in gradient-based training, a unit that is initially not activated may never activate, because no gradient flows through it.
This problem can be alleviated by using leaky ReL Units. On the other hand, leaky ReL Units don't have the ability to create a hard-zero sparse representation which can be useful in certain cases. So, there is a bit of a trade-off and, as in general with NN, it depends on the use cases when which unit performs better. In most cases, if the initial settings can make sure that the ReL Unit is activated (e.g. by setting the biases to small positive values) one would expect ReL and leaky Rel Units to perform very similarly.
Also, leaky RelU (if parametric) introduces another parameter (the slope for $x<0$) that needs to be learned during training and therefore adds more complexity/training time. |
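For reference, the two units differ only for $x<0$ (a small numpy sketch; the slope 0.01 is a typical but arbitrary choice):
import numpy as np

def relu(x):
    return np.maximum(0.0, x)            # hard zero below 0: sparse, but zero gradient

def leaky_relu(x, alpha=0.01):
    return np.maximum(alpha * x, x)      # small slope below 0: gradient never exactly zero

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))          # [0.    0.    0.    1.5]
print(leaky_relu(x))    # [-0.02  -0.005  0.     1.5]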
H: Unable to open .json file in pandas
I want to convert a json file into a dataframe in pandas (Python). I tried with read_json() but got the error:
UnicodeDecodeError:'charmap' codec can't decode byte 0x81 in position 21596351:character maps to <undefined>
I think I have some unwanted data in the json file like noise. The data is server generated.
This is a collection from the json file:
{"_id":{"$oid":"57a30ce368fd0809ec4d1b41"},"session":{"start_timestamp":{"$numberLong":"1470151881189"},"session_id":"8356bd90-20160802-153121189"},"metrics":{},"arrival_timestamp":{"$numberLong":"1470152028294"},"event_type":"OfferViewed","event_timestamp":{"$numberLong":"1470151943271"},"event_version":"3.0","application":{"package_name":"com.think.vito","title":"Vito","version_code":"5","app_id":"7ffa58dab3c646cea642e961ff8a8070","cognito_identity_pool_id":"us-east-1:4d9cf803-0487-44ec-be27-1e160d15df74","version_name":"2.0.0.0","sdk":{"version":"2.2.2","name":"aws-sdk-android"}},"client":{"cognito_id":"us-east-1:1d507b8f-857c-42a4-a705-8db07d46bc8f","client_id":"aa092911-b9a7-498a-82da-76318356bd90"},"device":{"locale":{"country":"US","code":"en_US","language":"en"},"platform":{"version":"5.1.1","name":"ANDROID"},"make":"Xiaomi","model":"Redmi Note 3"},"attributes":{"Category":"90000","CustomerID":"4077","OfferID":"11846"}}
AI: You have to read the file line by line; you can find a detailed answer in this question on Stack Overflow.
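A minimal Python 3 sketch of that approach, forcing UTF-8 instead of the platform default codec that raises the 'charmap' error (the file name is a placeholder):
import json
import pandas as pd

records = []
with open('data.json', encoding='utf-8') as f:   # explicit utf-8 instead of the OS default
    for line in f:
        try:
            records.append(json.loads(line))     # one JSON document per line
        except ValueError:
            pass                                 # skip blank or malformed/noise lines

df = pd.DataFrame(records)                       # nested fields stay as dicts
print(df.head())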
H: Found input variables with inconsistent numbers of samples
I would appreciate if you could let me know how to resolve this error:
Code:
X = np.array(pd.read_csv('my_X_table1-1c.csv',header=None).values)
y = np.array(pd.read_csv('my_y_table1-1c.csv',header=None).values.ravel())
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
def Ridgecv(alpha):
return cross_val_score(Ridge(alpha=float(alpha), random_state=2),
X_train, y_train, 'mae', cv=5).mean()
The error is related to X_train, y_train:
ValueError: Found input variables with inconsistent numbers of samples: [1052, 1052, 3]
AI: It seems that I missed the word "scoring". In fact, the extra 3 was related to the number of characters of 'mae'.
def Ridgecv(alpha):
return cross_val_score(Ridge(alpha=float(alpha), random_state=2),
X_train, y_train, scoring='mae', cv=5).mean() |
H: Gamma random variable , need to find the approximate 90th percentile of X?
A colleague defines a random variable $X = \frac{Z}{Y^2}$, where $Z$ is a known normal random variable, $Y$ is a known gamma random variable, and $Z$ and $Y$ are independent of each other.
You are not able to get an analytical form for the cumulative distribution function for $X$, but you need to find the approximate 90th percentile of $X$. How do you go about doing this?
AI: In R:
> Z = function(n){rnorm(n)} # normal(0,1) here
> Y = function(n){rgamma(n,10,10)} # some gamma variable
> quantile(Z(100000)/Y(100000)^2,.9) # 100000 from your target dist
90%
1.635033
So 90% of the values are less than 1.635 |
H: Is standardization needed before using scikit-learn SVM?
I am using the SVM function provided by scikit-learn. I would like to know whether I need to perform standardization before fitting the model. As far as I know, LibSVM tends to require pre-processing of the data. I am not sure whether scikit-learn automatically normalizes the data or expects us to handle it ourselves.
AI: scikit learn does not standardize data, but it does offer utilities for you to standardize your input data yourself:
http://scikit-learn.org/stable/modules/preprocessing.html
The rule of thumb is to standardize if your data aren't related. That is, if channel X is not a function of channel Y, you should standardize.
Qualitatively, think about it this way: SVM 'creates a hyperplane' to separate data into categories; if the data are skewed too far along one axis, that will make it harder to draw a plane to separate them.
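A minimal sketch of doing the standardization yourself with a scikit-learn pipeline, so the scaling fitted on the training data is automatically reused at prediction time (the iris data is just a stand-in for your own):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
model.fit(X_train, y_train)          # scaler statistics are learned on the training set only
print(model.score(X_test, y_test))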
H: Confused about how to apply KMeans on my a dataset with features extracted
I am trying to apply a basic use of the scikit-learn KMeans clustering package, to create different clusters that I could use to identify a certain activity.
Based on the Wattage, Duration, and timeOfDay, I would like to cluster these into different groups to see if I can create clusters and hand-classify the individual activities of each cluster.
I was having trouble with the KMeans package because I think my values needed to be in integer form. And then, how would I plot the clusters on a scatter plot? I know I need to put the original datapoints onto the plot, and then maybe I can separate them by color from the cluster?
km = KMeans(n_clusters = 5)
myFit = km.fit(activity_dataset)
Wattage time_stamp timeOfDay Duration (s)
0 100 2015-02-24 10:00:00 Morning 30
1 120 2015-02-24 11:00:00 Morning 27
2 104 2015-02-24 12:00:00 Morning 25
3 105 2015-02-24 13:00:00 Afternoon 15
4 109 2015-02-24 14:00:00 Afternoon 35
5 120 2015-02-24 15:00:00 Afternoon 49
6 450 2015-02-24 16:00:00 Afternoon 120
7 200 2015-02-24 17:00:00 Evening 145
8 300 2015-02-24 18:00:00 Evening 65
9 190 2015-02-24 19:00:00 Evening 35
10 100 2015-02-24 20:00:00 Evening 45
11 110 2015-02-24 21:00:00 Evening 100
Edit:
Here is the output from one of my runs of K-Means Clustering. How do I interpret the means that are zero? What does this mean in terms of the cluster and the math?
print (waterUsage[clmns].groupby(['clusters']).mean())
water_volume duration timeOfDay_Afternoon timeOfDay_Evening \
clusters
0 0.119370 8.689516 0.000000 0.000000
1 0.164174 11.114241 0.474178 0.525822
timeOfDay_Morning outdoorTemp
clusters
0 1.0 20.821613
1 0.0 25.636901
AI: For clustering, your data must indeed be numeric. Moreover, since k-means uses Euclidean distance, having a categorical column is not a good idea; therefore you should also encode the column timeOfDay into three dummy variables. Lastly, don't forget to standardize your data. This might not be important in your case, but in general you risk that the algorithm will be pulled in the direction of the features with the largest values, which is not what you want.
So I downloaded your data, put it into a .csv and made a very simple example. You can see that I am using a different dataframe for the clustering itself, and once I retrieve the cluster labels, I add them to the previous one.
Note that I omit the variable timestamp - since the value is unique for every record, it will only confuse the algorithm.
import pandas as pd
from scipy import stats
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('C:/.../Dataset.csv',sep=';')
#Make a copy of DF
df_tr = df.copy()
#Transform the timeOfDay to dummies
df_tr = pd.get_dummies(df_tr, columns=['timeOfDay'])
#Standardize
clmns = ['Wattage', 'Duration','timeOfDay_Afternoon', 'timeOfDay_Evening',
'timeOfDay_Morning']
df_tr_std = stats.zscore(df_tr[clmns])
#Cluster the data
kmeans = KMeans(n_clusters=2, random_state=0).fit(df_tr_std)
labels = kmeans.labels_
#Glue back to originaal data
df_tr['clusters'] = labels
#Add the column into our list
clmns.extend(['clusters'])
#Lets analyze the clusters
print(df_tr[clmns].groupby(['clusters']).mean())
This can tell us what the differences between the clusters are: it shows the mean value of each attribute per cluster. It looks like cluster 0 contains evening people with high consumption, whilst cluster 1 contains morning people with small consumption.
clusters Wattage Duration timeOfDay_Afternoon timeOfDay_Evening timeOfDay_Morning
0 225.000000 85.000000 0.166667 0.833333 0.0
1 109.666667 30.166667 0.500000 0.000000 0.5
You asked for visualization as well. This is tricky, because everything above two dimensions is difficult to read. So i put on scatter plot Duration against Wattage and colored the dots based on cluster.
You can see that it looks quite reasonable, except the one blue dot there.
#Scatter plot of Wattage and Duration
sns.lmplot('Wattage', 'Duration',
data=df_tr,
fit_reg=False,
hue="clusters",
scatter_kws={"marker": "D",
"s": 100})
plt.title('Clusters Wattage vs Duration')
plt.xlabel('Wattage')
plt.ylabel('Duration') |
H: Using Pandas to_numeric() in Azure Machine Learning Studio
I am facing an issue that Azure Machine Learning Studio fails to find the to_numeric method in pandas.
After reading a .csv in AMLS I try to process it in a python script. The line that is throwing me an error is:
dataframe1['Monthly Debt'] = pd.to_numeric(dataframe1['Monthly Debt'])
pd of course is pandas, dataframe1 is my working dataframe. The error thrown is:
AttributeError: 'module' object has no attribute 'to_numeric'
Of course everything works on my local python. Do you have any idea what AMLS might be talking about?
AI: What’s new in Pandas v0.17.0
DataFrame.convert_objects has been deprecated in favor of
type-specific functions pd.to_datetime, pd.to_timestamp and
pd.to_numeric (new in 0.17.0) (GH11133).
So for Pandas versions < 0.17.0 you can and should use: df.convert_objects(convert_numeric=True)
Demo:
In [213]: x = pd.DataFrame({'a':['11', 'aaa', '0', np.nan, '123']})
In [214]: x
Out[214]:
a
0 11
1 aaa
2 0
3 NaN
4 123
In [215]: x.dtypes
Out[215]:
a object
dtype: object
In [216]: x = x.convert_objects(convert_numeric=True)
In [217]: x
Out[217]:
a
0 11.0
1 NaN
2 0.0
3 NaN
4 123.0
In [218]: x.dtypes
Out[218]:
a float64
dtype: object |
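If the same script also has to run locally on a newer pandas, one option is to branch on what the installed pandas provides. This is only a sketch and assumes the 'Monthly Debt' column holds numeric strings:
import pandas as pd

if hasattr(pd, 'to_numeric'):
    # pandas >= 0.17.0 (e.g. a current local install);
    # errors='coerce' turns unparseable values into NaN, like convert_objects does
    dataframe1['Monthly Debt'] = pd.to_numeric(dataframe1['Monthly Debt'], errors='coerce')
else:
    # older pandas, e.g. the one available in Azure ML Studio
    dataframe1['Monthly Debt'] = dataframe1['Monthly Debt'].convert_objects(convert_numeric=True)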
H: Text processing
I am completely new to analyzing and clustering texts. I'm using the Goodreads API to get book synopses.
My goal is to group similar books, for example:
Politics
Music
Biographies
etc...
While Goodreads provides a genre, I would like to use the synopsis text for this.
Let's say I get N book synopses like this:
<description>
<![CDATA[
<b>Alternate cover edition can be found <a href="https://www.goodreads.com/book/show/10249685-dune" rel="nofollow">here</a>. </b> and <a href="https://www.goodreads.com/book/show/11273438-dune" rel="nofollow">here</a><br /><br />Here is the novel that will be forever considered a triumph of the imagination. Set on the desert planet Arrakis, <b>Dune</b> is the story of the boy Paul Atreides, who would become the mysterious man known as Muad'Dib. He would avenge the traitorous plot against his noble family--and would bring to fruition humankind's most ancient and unattainable dream.<br />A stunning blend of adventure and mysticism, environmentalism and politics, Dune won the first Nebula Award, shared the Hugo Award, and formed the basis of what it undoubtedly the grandest epic in science fiction.
]]>
</description>
I have read about cosine similarity and the new Google NLP tools, but I want to start with this:
Represent each book's description as features (usually a bag of words with TF-IDF)
Calculate similarity between two books (cosine similarity)
Questions:
What's the most efficient algorithm to create a matrix of cosine similarities between all N books?
How to cluster books together based on the above?
Any other ideas would be great.
AI: Since you are going to use TF-IDF representations, you already have a feature matrix. To calculate the cosine similarity between all vectors, you can use:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# descriptions is assumed to be a list of the raw synopsis strings, one per book
tfidfmat = TfidfVectorizer(stop_words='english').fit_transform(descriptions)

# N x N matrix of pairwise cosine similarities between all books
similarity = cosine_similarity(tfidfmat)
To begin clustering, you can start with the K-means algorithm. Note that scikit-learn's KMeans uses euclidean distance rather than cosine similarity, but on the l2-normalized rows that TfidfVectorizer produces by default, euclidean K-means behaves essentially like clustering by cosine similarity. Here's an example from scikit-learn itself on clustering documents.
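A minimal sketch of that K-means step, reusing the tfidfmat matrix built in the snippet above (the number of clusters is just an assumption you would tune):
from sklearn.cluster import KMeans

n_clusters = 5  # assumption: roughly the number of genres/groups you expect
km = KMeans(n_clusters=n_clusters, random_state=0).fit(tfidfmat)

# km.labels_[i] is the cluster id assigned to book i
print(km.labels_)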
Further things to try:
If the above methods do not work to your expectations, look into word2vec and doc2vec: instead of TF-IDF, which is a bag-of-words approach, use word vector representations. Here is a good blog explaining the concept.
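If you go down that route, a rough Doc2Vec sketch with gensim could look like the following; the descriptions list and all parameter values here are assumptions for illustration, not something taken from the question:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# descriptions: assumed list of raw synopsis strings, one per book
docs = [TaggedDocument(words=text.lower().split(), tags=[i])
        for i, text in enumerate(descriptions)]

model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=40)

# The learned vector for book i; these vectors can replace the TF-IDF rows
# in the cosine-similarity / K-means code above.
# (older gensim versions expose this as model.docvecs[i] instead of model.dv[i])
vec_0 = model.dv[0]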
H: What does "batch" and "batch_size" mean in word2vec skip-gram model?
I am reading the TensorFlow documentation and I cannot understand what "batch" and "batch_size" mean in the explanation of the skip-gram model. Please, can someone explain?
Here is the paragraph:
Recall that skip-gram inverts contexts and targets, and tries to predict each context word from its target word, so the task becomes to predict 'the' and 'brown' from 'quick', 'quick' and 'fox' from 'brown', etc. Therefore our dataset becomes
(quick, the), (quick, brown), (brown, quick), (brown, fox), ...
of (input, output) pairs. The objective function is defined over the entire dataset, but we typically optimize this with stochastic gradient descent (SGD) using one example at a time (or a 'minibatch' of batch_size examples, where typically 16 <= batch_size <= 512).
AI: A minibatch is a group of (input, output) pairs that you present to your neural net in one pass, without computing a stochastic gradient descent update between individual pairs (you only compute the update at the end of the minibatch, summing or averaging the errors over its pairs; this is not to be confused with an epoch, which is a full pass over the whole training set). This improves speed and prevents over-fitting to single examples, thus improving learning. It also doesn't need too much memory, since your batches are "mini".
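As a toy illustration in plain numpy (the word ids below are made up), the (input, output) pairs from the quoted paragraph are simply sliced into groups of batch_size, and one update is computed per group:
import numpy as np

# skip-gram (target, context) pairs encoded as word ids,
# e.g. (quick, the), (quick, brown), (brown, quick), ...
pairs = np.array([[2, 0], [2, 3], [3, 2], [3, 4]])

batch_size = 2  # typically 16 <= batch_size <= 512
for start in range(0, len(pairs), batch_size):
    batch = pairs[start:start + batch_size]
    inputs, labels = batch[:, 0], batch[:, 1]
    # one SGD step uses the loss summed/averaged over just this minibatch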
Some people call it stochastic training when your minibatch has a size of 1, and batch training when the minibatch is the whole training dataset.
You can get some nice explanations here. |