H: Why do we need to handle data imbalance? I would like to know why we need to deal with data imbalance. I know how to deal with it and the different methods to solve the issue - by up-sampling, down-sampling, or by using SMOTE. For example, if I have a rare disease affecting 1 person in 100, and let's say I decide to have a balanced training set (a 50/50 sample), won't that make the machine think 50% of patients will have the disease, even though the true ratio is 1 in 100? So why do we need to deal with data imbalance? What is the recommended ratio for a balanced set? AI: You need to deal with an imbalanced data set when the value of finding the minority class is much higher than that of finding the majority. Let's say that 1% of the population have that rare disease. Suppose that you assign the same cost to saying that a healthy person is sick as to saying that a sick person is healthy. Then you can provide a model that says everybody is healthy, get 99% accuracy and go home early. The problem with such a model is that though it has high accuracy, it is probably not what you are looking for. Most of the time you want to find the people with the disease (giving a high weight to false negatives) much more than you are afraid of sending a healthy person to an unneeded test (a low weight to false positives). In a real-world health problem the ratio between these weights can easily be 1 to 1,000. The imbalance in the distribution prevents most algorithms from finding a proper solution. You are correct that just balancing the distribution isn't the optimal solution. Indeed, an algorithm that is trained on a balanced distribution is not fitted to the natural distribution on which it will be evaluated. My favorite method is adapting it back, as you can see here. For a discussion, see here. Just setting the ratio to some other value won't work, since you will have the same problem. SMOTE works in a different way; it didn't work as well when I tried it, but it might fit your problem.
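To make the accuracy-versus-recall point above concrete, here is a minimal sketch (assuming scikit-learn is available; the labels are synthetic) showing that a classifier that always predicts "healthy" reaches 99% accuracy while never finding a single sick patient:

    import numpy as np
    from sklearn.metrics import accuracy_score, recall_score

    # synthetic labels: 1% sick (1), 99% healthy (0)
    y_true = np.array([1] * 10 + [0] * 990)
    y_pred = np.zeros_like(y_true)          # "everybody is healthy"

    print(accuracy_score(y_true, y_pred))   # 0.99
    print(recall_score(y_true, y_pred))     # 0.0 -- no sick patient is ever found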
H: Learning Algorithm that decides which model gives better results for each testing instance Is there any existing ensemble technique which uses a subset of the training data to predict which algorithm is better for predicting each instance of the testing data? Let's say we have an N-sized training set and a K-sized testing set in which a particular attribute needs to be predicted using the training set. But there are hundreds of algorithms and ways we can use. We can divide the training set into two parts, train each model with the first half and test on the second half. Based on the characteristics, we can decide which algorithm to use for the real test cases (the K-sized set). As an example, let's say the dataset has an attribute named "temperature". A particular algorithm may work well when the temperature is higher than 100 Celsius. We can then assign all the 100-degree-or-above instances to that particular model class, and the final prediction will be done by that model class trained with all N samples. What I am asking is: is there any existing method similar to that? AI: Applying any non-linear model in a model stacking approach should do what you want. In brief, the approach is to take predictions from other models as new features, plus the original data and labels, then use them to train a meta-model. Read the link; it offers practical advice on how to do this within a k-fold validation framework, which will give it a much better chance of doing well. Non-linear models that combine simpler units - e.g. neural networks and models that use multiple decision trees (e.g. xgboost) - already perform this kind of internal split during training. So if you are already using those, you might not gain such a big improvement over simpler ensemble techniques, such as taking a mean or weighted mean over models.
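A minimal sketch of the stacking idea described above, assuming scikit-learn is available (X, y and X_test are hypothetical arrays):

    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    # base models whose out-of-fold predictions become features for the meta-model
    base_models = [('rf', RandomForestClassifier()), ('svc', SVC(probability=True))]

    # the meta-model learns which base model to trust for which kind of instance
    stack = StackingClassifier(estimators=base_models,
                               final_estimator=LogisticRegression(),
                               cv=5,
                               passthrough=True)   # also feed the original features to the meta-model
    stack.fit(X, y)
    preds = stack.predict(X_test)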
H: Use of TfidfVectorizer on dataframe I have a dataframe which has two columns (Reviews and Label): Reviews Label 0 [cameron, diaz, woman, marri, judg, play] 1 1 [turgid, dialogu, feebl, character, harvey] 0 2 [misfortun, watch, rubbish, sky, cinema] 1 I want to apply the TfidfVectorizer on the DF. I have written the following code. from sklearn.feature_extraction.text import TfidfVectorizer df_x=train_df["Reviews"] df_y=train_df["Label"] cv = TfidfVectorizer() df_xcv = cv.fit_transform(df_x) a=df_xcv.toarray() cv.get_feature_names() which is giving an error: AttributeError: 'list' object has no attribute 'lower' Why is this throwing an error? AI: Your Reviews column is a column of lists, not text, and TfidfVectorizer works on raw text strings. I see that your Reviews column is just a list of relevant polarity-defining adjectives. A simple workaround is to join each list back into a single string: train_df['Reviews'] = [" ".join(review) for review in train_df['Reviews'].values] And then run the vectorizer again. That will fix the problem.
H: Goodhart's law applied to data science I recently learned about Goodhart's Law. Simply put, When a measure becomes a target, it ceases to be a good measure. However, in Data Science, we really do aim at improving our performance by increasing or decreasing a metric, and improve our models based on that. For instance, in Kaggle competitions. Is Goodhart's Law applicable to Data Science? Why or why not? AI: The fundamental problem which Goodhart's Law describes is a change in the underlying model from which the data is generated. One way to look at this is that the data-generating process observed in the past is biased. Say the true model is a product of two variables, $z = xy$. In the past, $y$ has always been constant at $a$, so you come to the conclusion that $z$ must equal $ax$. But this is only true when $y=a$. In Goodhart's setting (economic policy, and social science in general), it is usually acting on the predicted value of $z$ that causes the change in $y$. It is a dynamic system, of which we only have a biased, static slice of data.
H: How can I find contextually related words and classify into custom tags/labels? PROBLEM: Suppose if I have a small dataset containing some words and their tags/labels. The main task is to provide tags to other words(which are not in the dataset) based on their contextual relationship with the words already in the dataset. Let's say, for example, my custom dataset includes Soap --> label__(cleaning_agent) pencil--> label__(stationary_item) mobile--> label__(electronics) washingmachine--> label(electronics) and so on. I want my program to be able to correctly predict the label of an unknown word, e.g. washing powder to its correct category label__(cleaning_agent) radio to label__(electronics) etc. ACTIONS: Now the main problem is to find the relation between two words based on context, but I am not able to decide what can be the parameters for finding that. I have tried a naive approach using datamuse API and fastText library. Naive approach is as follows-> step 1-> find all the related words of the given word(let's say W) e.g. pencil using datamuse API. step 2-> combine them into a string(let's say S) with spaces in between them step 3-> use the label name, W, S as the training dataset for fastText. NOTE: fastText requires label name, word, sentence(can be from news articles, blogs, Wikipedia etc ) as a context for that word. RESULTS: fastText is not providing any reliable results. I am thinking of building a neural network kind of thing for this purpose. But I am not able to decide what can be the input parameters for our data. The main problem is about custom word tagging. Our program should be able to tag unknown(not in training dataset) words to their most probable classes based on some score. As I am new to NLP, I want to know what can be the next move forward. AI: You are working with word classification so you really don't have any contextual information that you can leverage. So the best thing would be to use freely available contextual information called as word2vec. I would suggest using the pretrained GLOVE Word2Vec as they are much cleaner than the Google ones. I am going to suggest two approaches for doing this: Naive Approach Load the pretrained word vectors. For every new word calculate cosine similarity with every item in your training list. Predict the label for the word which is closest to your new word. Alternatively you can calculate cosine similarity with the labels and just predict the label which comes the closest. Better Approach Load the pretrained word vectors. Extract the vectors for every item in the training set. One hot encode the labels. Train a neural net or any multiclass classification algorithm using the vectors as features and save the model. For every new word extract the vector and run the model on this vector. The predicted label is your output. You can use the first approach to run quick tests and check the efficacy of the approach and then move on to the second one if you found it fit to your use.
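A rough sketch of the "better approach" above, using pretrained GloVe vectors via gensim and a simple logistic regression in place of a neural net (the model name, tiny training set and test word are placeholders):

    import numpy as np
    import gensim.downloader as api
    from sklearn.linear_model import LogisticRegression

    vectors = api.load("glove-wiki-gigaword-100")      # pretrained GloVe word vectors

    train_words = ["soap", "pencil", "mobile", "radio"]
    train_labels = ["cleaning_agent", "stationary_item", "electronics", "electronics"]

    X = np.array([vectors[w] for w in train_words])    # one 100-d vector per known word
    clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

    print(clf.predict([vectors["television"]]))        # predict the label of an unseen word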
H: MLP incremental learning In my project I use 2 MLP ANNs: one for classification and one for prediction. The basic training dataset is stored in a database (MS SQL Server). But we need to make future incremental learning possible. This means the ANN is used in an industrial real-time system; an operator marks observations in the database as objects of a particular class, and the ANN must take this into account. How can I do it? AI: If your old data is representative of the underlying population and there is no radical shift of the underlying true model over time, then you don't need to update your model as new data comes in. You might want to retrain your model with the new dataset periodically, but you don't need to do it in real time on your production machine. If the underlying truth does change radically over time and you don't have any existing data that captures those changes, then no matter what algorithm you use, it's not going to work. The information is simply not there.
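If periodic incremental updates are wanted anyway, scikit-learn's MLP supports them out of the box. A minimal sketch, assuming X_initial, y_initial stand for the data already in the database and new_X, new_y for a freshly labelled batch (all hypothetical arrays):

    from sklearn.neural_network import MLPClassifier

    clf = MLPClassifier(hidden_layer_sizes=(50,))
    clf.fit(X_initial, y_initial)        # initial training on the stored dataset

    # later, when the operator has labelled a new batch of observations:
    clf.partial_fit(new_X, new_y)        # one extra pass over just the new rows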
H: Finding similar articles in realtime I want to build a large, searchable database of documents (news articles), such that when adding a new article I will be able to quickly find the X most similar articles to it. What is the right tech/algorithm/Python framework to approach this? AI: Elasticsearch is the right tool to use if you don't want to code this yourself. Indeed, you need an indexing algorithm that is able to efficiently retrieve pieces of text in a big database, and SQL isn't particularly good at it. Moreover, Elasticsearch is quite user friendly, so it won't be overkill to actually install it and use it. You might discover in the process that finding the most similar articles isn't that easy, and that Elasticsearch is of great help. There is also a Python client with good documentation.
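A rough sketch of the "find similar articles" lookup using the official Elasticsearch Python client and its more_like_this query (index name, field names and document content are made up; the exact call signature depends on the client version):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # index a new article
    es.index(index="articles", document={"title": "New tax bill", "body": "Full article text ..."})

    # retrieve the 10 most similar articles to a given piece of text
    response = es.search(index="articles", size=10, query={
        "more_like_this": {
            "fields": ["title", "body"],
            "like": "Full article text ...",
            "min_term_freq": 1,
            "min_doc_freq": 1,
        }
    })
    hits = response["hits"]["hits"]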
H: In supervised learning, why is it bad to have correlated features? I read somewhere that if we have features that are too correlated, we have to remove one, as this may worsen the model. It is clear that correlated features bring the same information, so it is logical to remove one of them. But I cannot understand why this can worsen the model. AI: Correlated features in general don't improve models (although it depends on the specifics of the problem, like the number of variables and the degree of correlation), but they affect specific models in different ways and to varying extents: For linear models (e.g., linear regression or logistic regression), multicollinearity can yield solutions that are wildly varying and possibly numerically unstable. Random forests can be good at detecting interactions between different features, but highly correlated features can mask these interactions. More generally, this can be viewed as a special case of Occam's razor. A simpler model is preferable, and, in some sense, a model with fewer features is simpler. The concept of minimum description length makes this more precise.
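A small sketch of the "wildly varying solutions" effect, assuming numpy and scikit-learn: two nearly identical features fit to the same target can receive huge coefficients of opposite sign, even though their combined effect is modest:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.RandomState(0)
    x1 = rng.randn(100)
    x2 = x1 + 1e-6 * rng.randn(100)      # almost perfectly correlated with x1
    y = 3 * x1 + 0.1 * rng.randn(100)

    model = LinearRegression().fit(np.column_stack([x1, x2]), y)
    print(model.coef_)   # typically two very large coefficients that nearly cancel each other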
H: Regex remove 1-2 character sequences: hyphens literal, not word boundaries I need a regex in R to exclude 1 or 2-character words, but which does not treat hyphens as word boundaries. Here is an example: z <- c("regulatory protein SR-B1 na na na na", "Dr Foo is na Editor-in-chief", "na P3 protein is popular na na") The intended 'gsub' would produce the result [1] "regulatory protein SR-B1" " Foo Editor-in-chief" "P3 protein popular" This is not produced from the gsub statements below. Instead: A - 2-character segments unintentionally dropped from hyphenated words (and hyphens kept). B - 2-character segments unintentionally dropped within hyphenated words (hyphens also dropped): C - Produces: Error: '-' is an unrecognized escape in character string starting "" *\b[[-" D - Nothing dropped or changed: gsub(" *\\b[[:alpha:]]{1,2}\\b *", " ", z) # A gsub(" *\\b[-[:alpha:]]{1,2}\\b *", " ", z) # B gsub(" *\\b[[\-][:alpha:]]{1,2}\\b *", " ", z) # C gsub(" *\\b[[\\-][:alpha:]]{1,2}\\b *", " ", z) # D Input would be much appreciated. AI: Try this gsub(" *\\b(?<!-)\\w{1,2}(?!-)\\b *", " ", z, perl=T)
H: How to give CIFAR-10 as an input to MLP I am new to machine learning and am trying to build a classifier using Keras by following this machine learning mastery tutorial. I want to build an MLP-based classifier for the CIFAR-10 data set, like the MLP classification of MNIST. In the MNIST data set, the 28*28 images are flattened to a 784 vector and given as input to the neural network. Likewise, how should I frame the data in CIFAR-10: should the 32*32-pixel RGB values be changed into an array, or what type of array should I build to give as input to the MLP network? AI: MNIST has 28x28 pixel greyscale images, so there are $28\times28 = 784$ features per image. CIFAR10 contains colour images that are 32x32 pixels. At first glance it may seem like there are only $32\times32 = 1024$ features per image, but there are red, blue and green channels in the image, which means each image is actually $3\times32\times32 = 3072$ features. Usually, these channels are arranged as three separate 32x32 pixel images, rather than a single 32x32 image with RGB pixels. In keras, you can load the CIFAR10 dataset like so: from keras.datasets import cifar10 (x_train, y_train), (x_test, y_test) = cifar10.load_data() However, this will load the train and test datasets in the shape (num_samples, 3, 32, 32) or (num_samples, 32, 32, 3), depending on your Keras image data format (channels-first or channels-last). In order to input these into an MLP, we need to flatten the channel and pixel arrays to form an array of shape (num_samples, 3072), just like with MNIST. We can do this in python like so: x_train = x_train.reshape(-1, 3072) x_test = x_test.reshape(-1, 3072)
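A minimal sketch of the MLP itself on the flattened data (a plain Keras example with arbitrary layer sizes, not tuned for CIFAR-10):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import to_categorical

    x_train = x_train.reshape(-1, 3072).astype('float32') / 255.0   # scale pixels to [0, 1]
    y_train = to_categorical(y_train, 10)                           # 10 CIFAR-10 classes

    model = Sequential()
    model.add(Dense(512, activation='relu', input_shape=(3072,)))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=10, batch_size=128)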
H: Does torch.cat work with backpropagation? I was wondering if it was okay to use torch.cat within my forward function. I am doing so because I want the first two columns of my input to skip the middle hidden layers and go directly to the final layer. Here is my code: you can see that I use torch.cat at the last moment to make xcat. Does the gradient propagate back? or does the torch.cat cover up what happened to my hidden variables? class LinearRegressionForce(nn.Module): def __init__(self, focus_input_size, rest_input_size, hidden_size_1, hidden_size_2, output_size): super(LinearRegressionForce, self).__init__() self.in1 = nn.Linear(rest_input_size, hidden_size_1) self.middle1 = nn.Linear(hidden_size_1,hidden_size_2) self.out4 = nn.Linear(focus_input_size + hidden_size_2,output_size) def forward(self, inputs): focus_inputs = inputs[:,0:focus_input_size] rest_inputs = inputs[:,focus_input_size:(rest_input_size+focus_input_size)] x = self.in1(rest_inputs).clamp(min=0) x = self.middle1(x).clamp(min=0) xcat = torch.cat((focus_inputs,x),1) out = self.out4(xcat).clamp(min=0) return out I call it like so: rest_inputs = Variable(torch.from_numpy(rest_x_train)) focus_x_train_ones = np.concatenate((focus_x_train, np.ones((n,1))), axis=1) focus_inputs = Variable(torch.from_numpy(focus_x_train_ones)).float() inputs = torch.cat((focus_inputs,rest_inputs),1) predicted = model(inputs).data.numpy() AI: Yes, torch.cat works with backward operation.
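torch.cat is differentiable, so autograd propagates gradients back to both of its inputs rather than "covering up" the hidden layers. A tiny standalone check (not tied to the model above):

    import torch

    a = torch.randn(3, 2, requires_grad=True)
    b = torch.randn(3, 4, requires_grad=True)

    out = torch.cat((a, b), dim=1).sum()
    out.backward()

    print(a.grad.shape)   # torch.Size([3, 2]) -- gradients reach the first input
    print(b.grad.shape)   # torch.Size([3, 4]) -- and the second; cat does not block backprop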
H: Should the bias value be added after convolution operation in CNNs? Should we add bias to each entry of the convolution then sum, or add bias once at the end of calculating the convolution in CNNs? AI: Short answer: the bias is added once after the convolution has been calculated. Long answer: discrete convolution that you see in CNNs is a linear function applied to pixel values in a small region of an image. The output of this linear function is then jammed through some nonlinearity (like ReLU). For a region $\mathbf{x}$ of size $i \times j$ of an image and a convolutional filter $\mathbf{k}$, and no bias term, this linear function $f$ would be defined as: $$ f(\mathbf{x}, \mathbf{k}) = \mathbf{x}*\mathbf{k} = \sum_{i,j} k_{i,j} x_{i,j} $$ Without a bias term, this linear function $f$ must go through the origin. In other words, if $\mathbf{x}$ or $\mathbf{k}$ is all zeroes, the output of $f$ will be zero as well. This may not be desirable, so we add a bias term $b$. This gives the model more flexibility by providing a value that is always added to the output of the convolution, regardless of the values of $\mathbf{x}$ and $\mathbf{k}$ -- in other words, it's the intercept value. $$ f(\mathbf{x}, \mathbf{k}, b) = b + (\mathbf{x}*\mathbf{k}) = b + \sum_{i,j} k_{i,j} x_{i,j} $$ If a bias were instead added to each term before summing, the result would be $\sum_{i,j} (k_{i,j} x_{i,j} + b) = n\,b + \sum_{i,j} k_{i,j} x_{i,j}$, where $n$ is the number of terms -- i.e. the same thing as a single, rescaled bias added at the end. Nothing is gained, so the convention is to add one bias per filter after the convolution.
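As a quick sanity check of the "one bias per filter" convention, a small PyTorch example (the layer sizes are arbitrary):

    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)
    print(conv.weight.shape)  # torch.Size([16, 3, 5, 5]) -- one 3x5x5 kernel per output channel
    print(conv.bias.shape)    # torch.Size([16]) -- a single scalar bias per filter, added after the sum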
H: What is average unpooling? I'm working on an implementation of a paper and I haven't been able to find a description of average unpooling. I tried using max unpooling instead and my results were not great. Thanks! AI: When you go backward for max-pooling you keep track of the position of the maximum: $$ X = \begin{bmatrix} 1 && 2 \\ 3 && 4 \end{bmatrix} \quad \rightarrow \quad dZ *\begin{bmatrix} 0 && 0 \\ 0 && 1 \end{bmatrix}$$ This gives you the position of the input value that ultimately influenced the cost/output. The gradient will then be "propagated" back to this value. For average pooling, all values influence the cost equally and therefore it should look like this: $$ dZ = z \quad \rightarrow \quad \begin{bmatrix} z/n && z/n \\ z/n && z/n \end{bmatrix}$$ where $n$ depends on the filter size, here $n=4$. The gradient gets distributed evenly back to all input values.
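A minimal numpy sketch of average unpooling as described above: each value is spread evenly over the block it came from, scaled by 1/n (here a 2x2 block, so n = 4):

    import numpy as np

    def average_unpool(z, size=2):
        # distribute each entry of z uniformly over a size x size block
        return np.kron(z, np.ones((size, size))) / (size * size)

    z = np.array([[4.0, 8.0],
                  [12.0, 16.0]])
    print(average_unpool(z))   # every 2x2 block holds z_ij / 4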
H: Character-based word representation using bi-lstm In this blog, it teaches us how to get a word embedding using bi-lstm in character level like the image below: I am wondering how to optimize the word vector from character-based approach. Is there any loss function or any strategy to update or optimize the word vector trained by a word's characters? AI: This would generally only be a part of a network, used for some task like classification, entity recognition or machine translation. The loss used for this task will guide the learning of the embedding space. This is different from approaches like word2vec or GLoVe where you use context windows to learn an embedding, which is an unsupervised (or maybe self-supervised) approach. The approach that you mention here could still be used in these scenarios, by structuring it with a similar loss as word2vec or GLoVe. One advantage would be that you can leverage word similarity (words that end with -ly have a similar function). However these models are usually trained on incredibly large corpora where these character based similarities are not that relevant, and the computational cost increases quite a bit by incorporating bigger models like LSTMs.
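A rough PyTorch sketch of the character-based word encoder itself (sizes are arbitrary); in practice this module would be plugged into a tagger or classifier whose task loss trains the character embeddings, as described above:

    import torch
    import torch.nn as nn

    class CharWordEncoder(nn.Module):
        def __init__(self, n_chars=100, char_dim=25, hidden=50):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.bilstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

        def forward(self, char_ids):                 # char_ids: (batch, max_word_len)
            x = self.char_emb(char_ids)              # (batch, max_word_len, char_dim)
            _, (h, _) = self.bilstm(x)               # h: (2, batch, hidden) -- final states of both directions
            return torch.cat([h[0], h[1]], dim=-1)   # (batch, 2 * hidden) word representation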
H: Does gradient descent always converge to an optimum? I am wondering whether there is any scenario in which gradient descent does not converge to a minimum. I am aware that gradient descent is not always guaranteed to converge to a global optimum. I am also aware that it might diverge from an optimum if, say, the step size is too big. However, it seems to me that, if it diverges from some optimum, then it will eventually go to another optimum. Hence, gradient descent would be guaranteed to converge to a local or global optimum. Is that right? If not, could you please provide a rough counterexample? AI: Gradient descent is an algorithm designed to find optimal points, but these optimal points are not necessarily global. And yes, if it happens to move away from a local optimum it may converge to another one, but this is not guaranteed: if the step size is too large, it can keep overshooting optima, and oscillation or outright divergence is then more likely than convergence. There are two main perspectives on gradient descent, from the classical machine learning era and from the deep learning era. In the classical era it was assumed that gradient descent would find a local or global optimum. In the deep learning era, where the dimensionality of the problem is very high, it turns out in practice that points where every dimension is at a local optimum simultaneously are rare; instead of true local optima, the cost function mostly exhibits saddle points. This is one of the reasons why training with lots of data and many training epochs lets deep learning models outperform other algorithms: given enough updates, the model usually finds a way to keep descending past saddle points rather than getting stuck - but you have to use appropriate step sizes. For more intuition, I suggest referring here and here.
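A tiny counterexample sketch in plain Python: on f(x) = x^2, gradient descent converges for a small step size but diverges (never settling at any optimum) when the step size is too large:

    def gradient_descent(lr, steps=20, x0=1.0):
        # f(x) = x^2, so f'(x) = 2x
        x = x0
        for _ in range(steps):
            x -= lr * 2 * x
        return x

    print(gradient_descent(lr=0.1))  # approaches 0, the minimum
    print(gradient_descent(lr=1.1))  # |x| grows by a factor of 1.2 each step: divergence, not another optimum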
H: Independence/correlation test between features (not feature-label) I'm building a Naive Bayes prediction model where I have 666 features to select from. I've tried the SelectKBest chi2 test, but it's a feature-label test, whereas what I'm trying to do is find the independence and correlation between the features themselves. As the assumption behind Naive Bayes is that the variables are all independent of each other, I have to remove features that are dependent on/correlated with each other. What other independence tests should I look into apart from $\chi^2$? (I have continuous features.) AI: The Boruta package is very useful: it does the feature selection for you and gives you the important variables. Sample code would be something like this. library(Boruta) set.seed(123) boruta.input <- Boruta(target_variable ~ ., data = training_data, doTrace = 2) print(boruta.input) # plot a graph for better understanding plot(boruta.input, xlab = "", xaxt = "n") lz <- lapply(1:ncol(boruta.input$ImpHistory), function(i) boruta.input$ImpHistory[is.finite(boruta.input$ImpHistory[,i]), i]) names(lz) <- colnames(boruta.input$ImpHistory) Labels <- sort(sapply(lz, median)) axis(side = 1, las = 2, labels = names(Labels), at = 1:ncol(boruta.input$ImpHistory), cex.axis = 0.7) By using this you can get all the important features, but the downside is that it takes time if you have a lot of data. In my case my data consisted of 40 features and 200,000 records and it took almost 2 hours, but the results were good. For better understanding you can go through this link -- I think you can use this test too; the description says it can be applied to numerical data as well. If your data is nominal, you can use the G-test. Attaching one more link along with this, which covers tests for different kinds of features and might be helpful in the future: Link. I have a question: why aren't you applying PCA (or any DR technique) on the data set? (Assuming that you didn't try any DR techniques.) If you apply DR techniques, I think you might get better components, which would aid you in getting better results and a better understanding of the data. You mentioned that you were using an NB classifier; why did you choose that, is there any specific reason? As you know, there are many other classifiers which might outperform an NB classifier. In my case I used an NB classifier for Twitter sentiment mining, as those are short sentences and an NB classifier was best suited for my analysis. Hope my answer is helpful! NB: Naive Bayes, DR: Dimensionality Reduction
H: Contemporary alternatives to SIFT for image feature extraction? I've been learning about SIFT and all the ways its descriptors can be used to do different tasks. I am particularly interested in the way SIFT can be used for image classification. (e.g. A 2006 paper by Niester & Stewenius relies on SIFT descriptors to build a vocabulary tree). However, as of 2017, 11 years later, Deep Learning has been replacing the classical approaches in many ways. What are some alternatives to find feature descriptors for images that, as of 2017, have shown more promising results? AI: The Capsule Networks by Geoff Hinton et al. are based on a similar idea to SIFT: in fact, if you look at the 2011 paper, Transforming Auto-Encoders by Hinton et al. they explicitly cite SIFT as an inspiration for Capsules. The main idea is that they construct a network with a new type of unit called a Capsule which outputs a vector (rather than neurons outputting scalars, as we know in traditional neural networks). You can think of these vectors as an analogue to the SIFT keypoints and feature descriptors, but with the benefit that they can be learned via back-propagation. There is a recent article, Dynamic Routing Between Capsules by Sabour, Frosst and Hinton elaborating on this which shows very promising results. I have written a short summary of Capsule Networks on the AI Stack Exchange, see the question What's the main concept behind Capsule Networks?.
H: Neural network predicted value I am starting to learn about machine learning as a whole and have found a big interest in neural networks. My question today is: given a simple cost function $J$ with weights $w_1, w_2$, inputs $m_1, m_2$ and a target prediction/error $b$, so that $J = w_1 m_1 + w_2 m_2 + b$. Let's say we want to develop an algorithm that can detect, from the height and length of the flower petal, which color the flower was. To save time I have already seen clustering in the graph, and the red flowers are close to 1 while the blue are close to 0. Most cost function outputs go from 0 to 1 (I think due to the sigmoid function), 1 being yes or some desired output = red and 0 being no or another desired output = blue. Now, what if instead of looking at two different colors (red and blue) our machine wants to analyze 10 colors given the height and length of the petals? Would we say that if the output was 2.1 (2 being green) it would be green, for example, or if it's 5.2 (5 being yellow) it's yellow? I am not sure how to analyze more than two things if we can only go from 0 to 1 due to the sigmoid function. AI: With binary classification (your first example) you use a sigmoid function at the end, where the value indicates the probability P(Red), and 1 - value equals the probability P(Blue). Because there are only two classes, we can use one value, since together the probabilities sum to one. By using binary crossentropy loss we can fit our model. When you have to deal with more than two classes, having one value is not enough anymore. What you do then is to have n output values that are mapped to a probability distribution using the softmax. This comes down to taking the e-power of all of them and then normalizing them to sum up to 1. By using the multiclass crossentropy loss, which is the generalized version of the binary crossentropy loss, we can still fit this. EDIT: With binary classification we map using the sigmoid: $$P(Red) = \frac{1}{1+e^{-Wx}}$$ Now we have a probability for all colors individually: $$P(Color = y) = \frac{e^{W_yx}}{\sum_{c\in C}e^{W_cx}}$$
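A tiny numpy illustration of the softmax mapping described above, for 10 colour classes (the logit values are made up):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())      # subtract the max for numerical stability
        return e / e.sum()

    logits = np.array([1.2, 0.3, 4.0, -1.0, 0.0, 2.5, 0.7, -0.5, 1.1, 0.2])  # one raw score per colour
    probs = softmax(logits)
    print(probs.sum())       # 1.0 -- a proper probability distribution
    print(probs.argmax())    # 2  -- the predicted colour is the class with the highest probability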
H: Is training one epoch using mini-batch gradient descent slower than using batch gradient descent? I wonder whether one epoch using mini-batch gradient descent is slower than one epoch using just batch gradient descent. At least I understand that one iteration of mini-batch gradient descent should be faster than one iteration of batch gradient descent. However, if I understand it correctly, since the mini-batch gradient descent must update the weights by the number of the batch size in one epoch, the training would be slower than the batch gradient descent, which computes and updates the weights only once in one epoch. Is this correct? In that case, is it worth worrying about the loss of the overall training time? AI: You are correct, there is more overhead to process the same amount of data because you do more weight updates and maybe preprocessing the batches in your generator will take more time as well. However, since you are doing more updates and if your batch size is big enough, the gradients of your mini-batches will approximate the gradients of your full batch fairly well and you will have gotten much closer to the optimum of your full dataset by splitting your full batch into mini batches than you would have gotten by doing one non-stochastic weight update.
H: Cost Function on non-linear noisy data I was curious to see whether one can use a cost function on a set of data points to find the "optimal minimum" solution for any given set of data. I know that for a regular set of data, clustered symmetrically around a straight regression line, it is easy to find the proper cost function. Example: but what if the data makes a funky shape like that of x^2 or 3*x^3, etc.? As you can see, the second graph is a lot more noisy than the first. Then what do you do? Is it the same process, or is it different? Can you find an optimal minimum for any set of data points? I know the cost function uses the residuals to form its best fit, but I was just curious whether it can do so with any set of data. AI: I don't think the second picture necessarily has more noise; it's just less linear than the first one. If you are doing regression (which is the case in both your problems) you can use the same loss function to optimize whichever family of functions you are using to fit your data. The only difference is that with certain function families gradient descent might not get to a global optimum but will get stuck in a local one. The loss function only determines how to penalize certain errors (based on the residuals).
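A short sketch of the "same loss, different function family" point, assuming scikit-learn: fitting a noisy cubic shape with the usual squared-error loss by simply expanding the features to polynomial terms:

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.RandomState(0)
    x = np.linspace(-2, 2, 100).reshape(-1, 1)
    y = 3 * x.ravel() ** 3 + 2 * rng.randn(100)       # noisy cubic data

    # same squared-error loss as plain linear regression, just a richer function family
    model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
    model.fit(x, y)
    print(model.predict([[1.5]]))                     # roughly 3 * 1.5**3 = 10.1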
H: Convolution Neural Network Loss and performance I have a set of about ~100,000 training examples. Ratio of positive to negative example is roughly 1:2. The true ratio is more like 1:100 so this represents a major downsampling of the negative class. It is also a very noisy dataset - the examples were automatically generated through distant supervision. Each example represents a set of sentences and has 700 columns. Number of rows may vary from 10 to 100 (maybe even more). I used a Convolution Neural Network in Tensorflow to train my model (model architecture similar to the one described here) with only 2 epochs and stored the loss, f-score , precision and recall every 10 steps. I evaluated the model on a validation set (which too was generated automatically through distant supervision with negative class downsampling resulting in pos:neg ratio of ~1:2) every 100 steps. Here are the hyperparameters: batch size: 60 for train, 100 for validation epochs: 2 convolution filter sizes: 700x5, 700x6, 700x7, 700x8, 700x9, 700x10 number of convolution filters per filter size: 125 (so total of 750 filters) dropout: 0.5 l2reg: 0.001 lr: 0.001 I'm seeing some strange behavior with the model and I don't understand why. My training precision, recall and f-score go over 0.95 in about a 100 steps (6000 examples) and then plateaus. The loss falls down from 0.8 to 0.2 in about 200 steps and then fluctuates between 0.1 and 0.4. On the validation set my precision, recall and f-score are over 0.95 starting from the first time I evaluate it on the 100th step. Loss fall slightly from 0.3 to 0.2. When I evaluated on a real-world test set (without downsampling negative class so it has the true ratio of pos:neg), the actual precision and recall were 0.37 and 0.85. My results are not making any sense to me. I use tensorflow metrics for calculating training precision, recall and fscore and scikit-learn metrics for calculation validation precision, recall and fscore. I can't find anything wrong in the code but I don't understand why I should have such results unless there is a bug. I would have understood having low precision and recall all through - the class imbalance favors the negative class and my set is noisy. However, I am very confused about why I'm having such misleadingly high scores all through.. Given that my dev dataset is also noisy and generated in the same manner as the train set, the dev results might just be useless and it is possible that the model is overfitting the noisy set. But I still don't understand why the scores are so high so soon. Also, if overfitting is the issue, do you think I should make the dropout even higher? I've attached a screenshot of the graphs and would really appreciate your thoughts on this. Blue is train and red is dev. Thanks a lot! AI: Precision Your step change in precision looks to be almost entirely explained by the change in positive class frequency. It is reasonable to expect the proportion of false positives to increase when increasing the proportion of negative examples. Even if you assume your cv results were perfect, then you would see some increase. As an example, assume you have cv results representative of test results - which means same distribution before random under-sampling, and no over-fit to the cv set. Say you measured precision at 0.97 with a t:f ratio of 1:2, and for the sake of simplicity that this was due to the following confusion table: Predicted: T F Real T 97 3 Real F 3 197 What precision should you expect when going to the real distribution? 
That is the same as multiplying the bottom row of the confusion table by 50. Precision is $\frac{TP}{TP+FP}$, so your expected precision would be $\frac{97}{97+150} \approx 0.39$ Recall The same effect does not impact recall, because it is about the ratio between true positive and false negative. So when you change the ratio of positive to negative classes, in theory recall should be unaffected. In your case, recall has been affected, but a lot less than precision. that is promising. A drop from 0.95 to 0.85 between cv and test is not great perhaps, but it doesn't point to a really major problem, just room for improvement. There are a few possible causes. The ones that I can think of are: Your test set might be too small, so estimates of precision and recall have large error. So in fact there is no problem . . . Test distribution might be different to train and cv set. Train/CV set split might allow some data leakage (e.g. they share some common features such as data about the same person, and should be split by that common feature). In which case CV estimates could be too high. Your mechanism for under-sampling the negative class may be biased. What to do? First of all, these results are unlikely to be anything directly do with faults in the model, and are not informed much by the training curves. They are also not that bad out of context (i.e. they are much better than simply guessing which items are in the positive class) - the question is more whether you could improve on them, and what the costs are to you for the different types of error. It might be worth you actually assigning real-world comparable costs to each type of error, to help decide whether your model is successful/useful and to pick the best model later on. One thing from the training curves is that your cv and training loss look pretty close. It implies you are not over-fitting to the training data (or you should check to a train/cv data leak). You may have room to add more parameters and improve the model in general. It is possible you could make the model even better with different hyper-parameter choices, feature engineering etc, and that would improve the scores. There is no general advice for that though, it depends on what you have available. It might be worth experimenting with training on the unbalanced training set (take the raw data without undersampling) and instead weighting the loss function, so that costs are larger for inaccurate classification of positive class. This is not guaranteed to fix your problem, but will increase the amount of data you use for training. Otherwise, you should investigate whether any of the possible causes listed above is likely and try to apply fixes. Finally, in this situation, it is not unheard of to have a four-way data split: A ratio-adjusted set split two ways: Training data CV or "Dev" set A A same as production set split two ways: CV or "Dev" set B Test set CV set A is used to perform early stopping and low-level model selection. CV set B is used to perform high-level model selection against production metric. Test set is used to assess the chosen "best" model without bias.
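One hedged sketch of the "weight the loss function instead of undersampling" suggestion above, assuming a Keras model (with raw TensorFlow you would instead multiply each example's cross-entropy term by a per-class weight); the weight of 50 just mirrors the 1:100 vs 1:2 ratio discussed and should be tuned:

    # train on the full, unbalanced data and make positive-class errors cost more
    model.fit(x_train, y_train,
              epochs=5, batch_size=64,
              class_weight={0: 1.0, 1: 50.0})   # 1 = positive (rare) class, 0 = negative class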
H: ReLU activation function outputs HUGE numbers I have FINALLY been able to implement backpropagation, but there are still some bugs I need to fix. The main is issue the following: My ReLU activation function produces really big dJdW values (derivative of error function wrt weights). When this gets subtracted from the weights, my output becomes a matrix of -int or inf. How do I stop this? As of now, the only solution I have is to make my learning rate scalar variable REALLY small. import numpy as np class Neural_Network(object): def __init__(self, input_, hidden_, output_, numHiddenLayer_, numExamples_): # Define Hyperparameters self.inputLayerSize = input_ self.outputLayerSize = output_ self.hiddenLayerSize = hidden_ self.numHiddenLayer = numHiddenLayer_ self.numExamples = numExamples_ self.scalar = 0.0000000001 # LEARNING RATE: Why does ReLU produce such large dJdW values? # in -> out self.weights = [] # stores matrices of each layer of weights self.z = [] # stores matrices of each layer of weighted sums self.a = [] # stores matrices of each layer of activity self.biases = [] # stores all biases # Biases are matrices that are added to activity matrix # Dimensions -> numExamples_*hiddenLayerSize or numExamples_*outputLayerSize for i in range(self.numHiddenLayer): # Biases for hidden layer b = [np.random.random() for x in range(self.hiddenLayerSize)]; B = [b for x in range(self.numExamples)]; self.biases.append(np.mat(B)) # Biases for output layer b = [np.random.random() for x in range(self.outputLayerSize)] B = [b for x in range(self.numExamples)]; self.biases.append(np.mat(B)) # Weights (Parameters) # Weight matrix between input and first layer W = np.random.rand(self.inputLayerSize, self.hiddenLayerSize) self.weights.append(W) for i in range(self.numHiddenLayer-1): # Weight matrices between hidden layers W = np.random.rand(self.hiddenLayerSize, self.hiddenLayerSize) self.weights.append(W) # Weight matric between hiddenlayer and outputlayer self.weights.append(np.random.rand(self.hiddenLayerSize, self.outputLayerSize)) def setBatchSize(self, numExamples): # Changes the number of rows (examples) for biases if (self.numExamples > numExamples): self.biases = [b[:numExamples] for b in self.biases] def sigmoid(self, z): # Apply sigmoid activation function return 1/(1+np.exp(-z)) def sigmoidPrime(self, z): # Derivative of sigmoid function return 1-self.sigmoid(z) def ReLU(self, z): # Apply activation function for (i, j), item in np.ndenumerate(z): if (item < 0): item *= 0.01 else: item = item return z def ReLUPrime(self, z): # Derivative of ReLU activation function for (i, j), item in np.ndenumerate(z): if (item < 0): item = 0.01 else: item = 1 return z def forward(self, X): # Propagate outputs through network self.z.append(np.dot(X, self.weights[0]) + self.biases[0]) self.a.append(self.ReLU(self.z[0])) for i in range(1, self.numHiddenLayer): self.z.append(np.dot(self.a[-1], self.weights[i]) + self.biases[i]) self.a.append(self.ReLU(self.z[-1])) self.z.append(np.dot(self.z[-1], self.weights[-1]) + self.biases[-1]) self.a.append(self.ReLU(self.z[-1])) yHat = self.ReLU(self.z[-1]) return yHat def backProp(self, X, y): # Compute derivative wrt W # out -> in dJdW = [] # stores matrices of each dJdW (equal in size to self.weights[]) delta = [] # stores matrices of each backpropagating error self.yHat = self.forward(X) delta.insert(0,np.multiply(-(y-self.yHat), self.ReLUPrime(self.z[-1]))) # delta = (y-yHat)(sigmoidPrime(final layer unactivated)) dJdW.insert(0, np.dot(self.a[-2].T, delta[0])) # 
dJdW for i in range(len(self.weights)-1, 1, -1): # Iterate from self.weights[-1] -> self.weights[1] delta.insert(0, np.multiply(np.dot(delta[0], self.weights[i].T), self.ReLUPrime(self.z[i-1]))) dJdW.insert(0, np.dot(self.a[i-2].T, delta[0])) delta.insert(0, np.multiply(np.dot(delta[0], self.weights[1].T), self.ReLUPrime(self.z[0]))) dJdW.insert(0, np.dot(X.T, delta[0])) return dJdW def train(self, X, y): for t in range(60000): dJdW = self.backProp(X, y) for i in range(len(dJdW)): self.weights[i] -= self.scalar*dJdW[i] # Instantiating Neural Network inputs = [int(np.random.randint(0,100)) for x in range(100)] x = np.mat([x for x in inputs]).reshape(100,1) y = np.mat([x+1 for x in inputs]).reshape(100,1) NN = Neural_Network(1,3,1,1,100) # Training print("INPUT: ", end = '\n') print(x, end = '\n\n') print("BEFORE TRAINING", NN.forward(x), sep = '\n', end = '\n\n') NN.train(x,y) print("AFTER TRAINING", NN.forward(x), sep = '\n', end = '\n\n') # Testing test = np.mat([int(np.random.randint(0,100)) for x in range(100)]).reshape(100,1) print("TEST INPUT:", test, sep = '\n', end = '\n\n') print(NN.forward(test), end = '\n\n') NN.setBatchSize(1) # changing settings to receive one input at a time while True: # Give numbers between 0-100 (I need to fix overfitting) and it will get next value inputs = input() x = np.mat([int(i) for i in inputs.split(" ")]) print(NN.forward(x)) I first made the ANN using sigmoid but Leaky ReLU is faster. The code is a bit much so here is a summary: Neural Network Class define hyperparameter and stuff (include really small learning rate scalar) activation functions and their derivatives (ReLU and sigmoid) Member functions: forward propagation, backpropagation, setBatchSize etc. Instantiating ANN setting hyperparameters (topology of ANN) creating data (one array has values x and the output array has values x+1) Training using inputs generated in step 2 to train ANN Testing Testing using randomly generated inputs User can give inputs Hope that helps you help me. Thanks! AI: I suspect the problem is the fact that your input data values are very high. You're trying to map the input variable $x \in (0,100)$ to $y = x+1$ in your code, but neural networks work best when the data has much lower values. A good strategy is to normalise the data before training so that each feature has zero mean and unit variance. Try scaling your data down like so (I've also changed the code that originally generates the inputs to make it more efficient in numpy): # Instantiating Neural Network x = np.random.randint(0, 100, size=100).reshape(100,1) y = x + 1 # normalize data to have zero mean and unit variance x_normalized = (x - x.mean()) / x.std() y_normalized = (y - y.mean()) / y.std() Train your network with x_normalized and y_normalized instead of x and y. Then, during testing, you normalize your input data like above, and you can scale your predictions back up to the original scale by rearranging the above formula. # generate test data & normalize using train data mean&std test = np.random.randint(0, 100, size=100).reshape(100,1) test_x_normalized = (test - x.mean()) / x.std() # input to network to get normalized outputs test_y_normalized = NN.forward(test_x_normalized) # rescale normalized outputs to original 0-100 scale test_y = (test_y_normalized * y.std()) + y.mean() print("TEST INPUT:", test, sep = '\n', end = '\n\n') print(test_y, end = '\n\n')
H: Do Convolution Layers in a CNN Treat the Previous Layer Outputs as Channels? Lets say you have a max pooling layer that gives 10 downsampled feature maps. Do you stack those feature maps, treat them as channels and convolve that 'single image' of depth 10 with a 3d kernel of depth 10? That is how I have generally thought about it. Is that correct? This visualization confused me: http://scs.ryerson.ca/~aharley/vis/conv/flat.html On the second convolution layer in the above visualization most of the feature maps only connect to 3 or 4 of the previous layers maps. Can anyone help me understand this better? Related side question: If our input is a color image our first convolution kernel will be 3D. This means we learn different weights for each color channel (I assume we aren't learning a single 2D kernel that is duplicated on each channel, correct)? AI: Lets say you have a max pooling layer that gives 10 downsampled feature maps. Do you stack those feature maps, treat them as channels and convolve that 'single image' of depth 10 with a 3d kernel of depth 10? That is how I have generally thought about it. Is that correct? Yes. The usual convention in a CNN is that each kernel is always the same depth as the input, so you can also think of this as a "stack" of 2D kernels that are associated with the input channels and summed to make one output channel - because under the convention that $N_{in\_channels} = N_{kernel\_depth}$ this is mathematically the same. Expressing as a 3D convolution allows for simpler notation and code. On the second convolution layer in the above visualization most of the feature maps only connect to 3 or 4 of the previous layers maps. Can anyone help me understand this better? The diagram is non-standard in that respect, although it seems to show pooling and fully-connected layers as normal. It might be a mistake in the diagram, or something unconventional about that specific CNN. If our input is a color image our first convolution kernel will be 3D. This means we learn different weights for each color channel (I assume we aren't learning a single 2D kernel that is duplicated on each channel, correct)? Correct. You can see this in the visualised filters for AlexNet (do note that for computational reasons, AlexNet specialised one half of its filters to work in greyscale, and had other clever optimisations that we don't use nowadays because available GPU power is high enough to not need them). Most implementations will also treat a greyscale image as a 1-channel 3D shape for consistency.
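A quick PyTorch check of the "kernel depth equals the number of input channels" convention (layer sizes arbitrary):

    import torch.nn as nn

    # 10 input feature maps in, 16 output feature maps out, 3x3 spatial kernels
    conv = nn.Conv2d(in_channels=10, out_channels=16, kernel_size=3)
    print(conv.weight.shape)  # torch.Size([16, 10, 3, 3]) -- each of the 16 kernels spans all 10 input maps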
H: Clamping Q function to its theoretical maximum, yes or no? I'm implementing the DQN algorithm from scratch on the MountainCar simulation. I'm using a setup of $reward = 1.0$ when the car hits the flag, and $0$ otherwise. The reward decay factor is set to $\gamma=0.99$. The algorithm starts with an exploration factor of $\epsilon = 1.0$ and decreases it over time to $\epsilon = 0.1$. If I understood correctly, the $Q$ function for some state and action pair is defined as: $Q(s_t, a_t) = r_t + \gamma \max_{a} Q(s_{t+1}, a)$ So, $Q_{max}$ would satisfy the condition: $Q_{max} = r_{max} + \gamma \times Q_{max}$ Which means: $Q_{max} = \frac{r_{max}}{1 - \gamma}$ However, since my network only approximates the $Q$ function, it is possible for it to sometimes produce a value greater than $Q_{max}$. When that happens, further training causes the values to start growing exponentially, and the entire thing blows up. When I clamp the error between the expected value and the current predicted value to some small number, it still blows up, just a bit slower. The only solution I can think of is to clamp the predicted value $\max_{a} Q(s_{t+1}, a)$ to $Q_{max}$, forcing it to never go above that. I have done that, and got OK results with it. Does this make sense? Is this a situation that happens in DQN? Or maybe I missed something and my implementation is a bit buggy? AI: As MountainCar is often solved with $\gamma = 1$ and a negative reward per timestep, you would immediately hit a problem with your ability to calculate a maximum action value in that case. However, I don't think that is your problem here: the discounted return with a positive reward at the end should still encourage the desired behaviour for this problem. It is likely that you are experiencing known problems with RL, and the "deadly triad": Function approximation (neural network) on a bootstrap method (Q-learning or any TD-learning approach) off policy (learning the optimal policy from non-optimal behaviour*, which is a feature of Q-learning) This combination is often unstable and difficult to train. Your Q value clamping is one way to help stabilise values. Some features of the DQN approach are also designed to deal with this issue: Experience replay. The agent does not learn online, but instead puts each sample (S, A, R, S') into a memory table and trains the neural network on mini-batches sampled from this memory. Commonly this mini-batch update is run on every step (after enough experience has been collected) and might be e.g. of size 32, so learning updates happen faster than experience is collected. Frozen bootstrap target network. The network used to calculate $\max_{a'} Q(s', a')$ when setting the target values to learn (in the TD target $R + \gamma \max_{a'} Q(s', a')$) is kept stable, and refreshed with a copy of the current weights after a certain number of steps (e.g. 100 or 1000, or once every 10 episodes). * Technically, off-policy learning is learning the value function for any policy $\pi$ from a different behaviour policy $b$, but the common case is control systems attempting to find an optimal policy from an exploratory policy.
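A small sketch of the clamped TD target described in the question, combined with the frozen-target idea from the answer (plain numpy; target_net.predict is a placeholder for whatever interface the frozen copy of your network exposes):

    import numpy as np

    GAMMA = 0.99
    R_MAX = 1.0
    Q_MAX = R_MAX / (1.0 - GAMMA)     # theoretical ceiling, = 100 here

    def td_target(reward, next_state, done, target_net):
        if done:
            return reward
        q_next = target_net.predict(next_state)               # Q(s', a) for all actions, from the frozen copy
        return min(reward + GAMMA * np.max(q_next), Q_MAX)    # clamp the bootstrap target at Q_max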
H: Is it possible to use Recurrent NN (LSTM) for classification? I have a dataset C of 50,000 (binary) samples, each of 128 features. The class label is also binary, either 1 or -1. For instance, a sample would look like this: [1,0,0,0,1,0, .... , 0,1] [-1]. My goal is to classify the samples based on the binary classes (i.e., 1 or -1). I thought I would try using a recurrent NN (LSTM) to generate a good model for classification. So my question is: is it possible to use a recurrent NN (LSTM) for such a classification problem? tr_C, ts_C, tr_r, ts_r = train_test_split(C, r, train_size=.8) batch_size = 1000 print('>>> Build STATEFUL model... (%s, %s)' % (tr_C.shape[1], tr_C.shape[2])) model = Sequential() model.add(LSTM(129, batch_input_shape=(batch_size, C.shape[1], C.shape[2]), return_sequences=False, stateful=True)) model.add(Dense(1, activation='relu')) print('>>> Training...') model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(tr_C, tr_r, batch_size=batch_size, epochs=1, shuffle=True, validation_data=(ts_C, ts_r)) Thank you AI: Yes, it is possible. You first use an LSTM layer and then a Dense/linear layer on top of it to get the final output. For a multi-class problem, that final layer should have one output per class with a softmax activation. For your binary case with binary_crossentropy, a single output unit with a sigmoid activation is the usual choice; a relu on the output, as in your code, will not produce valid probabilities.
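A hedged sketch of the corrected model tail for the binary case (the labels also need to be mapped from {-1, 1} to {0, 1} for binary_crossentropy; everything else follows the code above):

    # ... LSTM layer as before ...
    model.add(Dense(1, activation='sigmoid'))   # single probability output for the binary label
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    tr_r01 = (tr_r + 1) // 2                    # map labels -1/1 -> 0/1
    model.fit(tr_C, tr_r01, batch_size=batch_size, epochs=1, shuffle=True)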
H: Imbalanced class with same rows? In my dataset I have 3 classes -> 0, 1, 2, with counts 0 (72k), 1 (13k) and 2 (13k). Whenever I try to predict them with any algorithm, I observe that almost all the "2"s are predicted as "0". On a little exploration I found that there are some rows where the attributes of 0 and 2 are exactly the same. Is there any technique to tackle this issue? AI: Further exploration of your data would help. Are there, for example, some clusters where the relative frequencies of your classes are much different from the average? For now, I see two ways to increase the sensitivity of your algorithm toward class 2: Use probabilistic prediction. Maybe what you really need is to estimate the probability that the current example belongs to class 2. If identical observations are sometimes marked as 0 and sometimes as 2, you cannot do much better than to say "with probability X this is the second class". Use class weights and increase the weight for class 2. This will lead to more predictions of 2 and fewer predictions of 0 and 1. With imbalanced datasets it sometimes helps.
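A minimal scikit-learn sketch of both suggestions (X, y and X_new are hypothetical; the weights of 3 and 5 are arbitrary starting points to tune):

    from sklearn.ensemble import RandomForestClassifier

    # up-weight the minority classes so errors on class 2 cost more during training
    clf = RandomForestClassifier(class_weight={0: 1, 1: 3, 2: 5})
    clf.fit(X, y)

    proba = clf.predict_proba(X_new)    # probabilistic prediction: one column per class
    # e.g. flag an example whenever P(class 2) exceeds a chosen threshold
    likely_class_2 = proba[:, 2] > 0.3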
H: Using a decision tree with 3 dimensional input points I implemented a normal classification tree (that uses the Gini index to look for a split). I am using it to predict the age of people. My input data was a series of points on 1 axis (only the X coordinate), and the output was an age (9 years old, 10 years old....). Example: p1 = 1.3, p2 = 3.4, p3 = 2.1 ........ => Output = 9 years old p1 = 1.4, p2 = 2.4, p3 = 2.6 ........ => Output = 10 years old My tree looks something like (just an example): if p1 > 1 if p2 < 3 output = 10 years old else output = 9 years old else ... This is working fine. Now I want to modify it to work with 3D points, i.e. points that have X, Y, Z and not just X. So my input data would be like: p1 = [1.2, 1.5, 4.3], p2 = [4.2, 1.3, 5.2] .... => output = 9 years old How do I approach this? Should my tree use each coordinate as a separate input (p1x, p1y, p1z, p2x, p2y....), or is there another approach that can be taken for my case? Note: I am using classification trees, but if you think there's another way of predicting in my case, I would love to know about it. Also I'm using Python code, but I'm looking for the logic, not the programming part. Thanks AI: Using a single flattened vector (p1x, p1y, p1z, p2x, p2y....) per person would be the easiest and most obvious way. Your prediction might also improve by including some other transformations of these features, but usually trees are eager to use all the information you feed them efficiently. So just a flattened vector (e.g. 9-dimensional for three 3D points) would do.
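A tiny sketch of the flattening approach with scikit-learn (the point arrays are made up; three 3D points per person flatten to 9 features):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # shape (n_people, n_points, 3): three (x, y, z) points per person
    points = np.array([[[1.2, 1.5, 4.3], [4.2, 1.3, 5.2], [2.0, 0.7, 3.1]],
                       [[1.4, 1.1, 4.0], [3.9, 1.8, 5.5], [2.2, 0.9, 2.8]]])
    ages = np.array([9, 10])

    X = points.reshape(len(points), -1)      # flatten to shape (n_people, 9)
    tree = DecisionTreeClassifier(criterion='gini').fit(X, ages)
    print(tree.predict(X))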
H: Create a Predictive model to find users that would book an exam I'm currently working for a health company in Brazil, and I want to create a predictive model that would find users that are one step away from booking an appointment. We currently have 1 million visits per month, and 60% of these visits do not book anything (exam, consultation, checkups, etc.). I believe that with ML we could find which users would book if we make a call or send an email. I have been a professional web developer for the past 7 years, but I don't have the skills to create a predictive model, so I'm looking for directions, a way to start. We do have Google Analytics, but I'm thinking of creating simple software that would track: Age Gender Which pages the user visited How much time the user spent on each page Name Then I could segregate the users into two groups: the ones that booked, and the users that didn't book, and then... I'm stuck. Could someone give me a direction? Should I create software to track this information or use GA? I want to create a demo to get financing, so I don't have much time :/ Thank you. AI: Your problem is a kind of binary classification, which predicts the probability of a target variable being Yes (1) or No (0). But first of all you need some ground-truth data, because most of the state-of-the-art solutions to this problem are supervised. If you're the admin of this system or have access to its server, you can collect this data by logging everything you need from every user and labelling each trace with 1 or 0 according to whether they booked or not. After a while you will have a big labelled dataset, and you can normalize and shape your inputs so that they fit, for example, a deep neural network for prediction.
H: Cleaning time series data I have time series data about the daily usage of a computer program; here is an example: 2017-11-10: 0 2017-11-09: 14 2017-11-08: 0 2017-11-07: 6 2017-11-06: 102 2017-11-05: 0 2017-11-04: 0 As you can see, 11-06 has a spike at 102. Due to our way of gathering this data, we know that value is probably erroneous and we are sure that 102 is not correct compared to the other values. So we need to clean these dirty values. Is there a mathematical way to do this? Is there a Python lib to help us? AI: I think you have a few options: If you have a pre-set rule to exclude outliers, such as a hard threshold at 100 which you know the data shouldn't exceed, then something as simple as x = [e for e in x if e < 100] will do. If you have a parametric belief, such as that any observation falling beyond so many standard deviations from the mean, or outside the quartiles, is an outlier, then you can implement that directly. Else, you can go for a clustering approach. Here I believe your first shot should be k-means clustering. This is super easy to build and interpret. See my code below. import numpy as np from sklearn.cluster import KMeans x = [0,14,0,6,102,0,0] kmeans = KMeans(n_clusters=2).fit(np.array(x).reshape(-1, 1)) # First cluster: np.array(x)[np.where(kmeans.labels_ == 0)] # Second cluster (outliers): np.array(x)[np.where(kmeans.labels_ == 1)] K-means is known to be sensitive to outliers, hence a more robust method such as MeanShift is a good rival to k-means. I would run both, and stick with the result that makes better sense to me. Hope this helps!
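A sketch of the parametric (standard-deviation) option mentioned above, assuming numpy; the 2-sigma cutoff is an arbitrary choice to tune:

    import numpy as np

    x = np.array([0, 14, 0, 6, 102, 0, 0])
    z_scores = (x - x.mean()) / x.std()
    cleaned = x[np.abs(z_scores) < 2]   # drop points more than 2 standard deviations from the mean
    print(cleaned)                      # the 102 spike is removed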
H: Conv1D to predict winner of two feature sets I'm trying to predict the winner of a race, when given 2 sets of features. The data looks like this: array([[[ 1. ], [ 1. ], [ 0. ], [ 0.10447761], [ 0.18892762], [ 0.76190472], [ 1. ], [ 0.0338983 ], [ 1. ], [ 0.41789967]], [[ 1. ], [ 1. ], [ 0. ], [ 0.10447761], [ 0.18892762], [ 0. ], [ 0.74712646], [ 0. ], [ 0.44444448], [ 0.53915548]], .... The first 5 entries are features of racer #1 and entries 6-10 are the same features for racer #2. In addition, I have the results of the output neuron (0 or 1) which tells me which winner is expected to win. I suggest the following model: A 1D convolutional neural network with a kernel size of 5 and s stride size of 5. This is then followed by some flattening and fully connected layers. My results are very bad and I would like to know if there is conceptionally something wrong or if it's just the data. My thoughts behind this is that it's like a convolution on an image, with a window size equivalent to the amount of features of each racer. On each of the two convolution, a filter set is then applied, which is equivalent for both of the two. The fully connected layer will then determine if the first or the second feature set is the one to be selected. Any views if a convolutional layer can be used in that way are appreciated. Here the code snipped: model = Sequential() model.add(Conv1D(filters=64, kernel_size=5, strides=5, input_shape=input_shape, activation='relu')) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(256, activation='relu', kernel_constraint=maxnorm(3))) model.add(Dropout(0.2)) model.add(Dense(128, activation='relu', kernel_constraint=maxnorm(3))) model.add(Dropout(0.2)) model.add(Dense(1, activation='softmax')) model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(), metrics=['accuracy']) model.fit(self.trainX, self.trainY, epochs=epochs, batch_size=1, verbose=1, callbacks=[tbCallBack, early_stop]) AI: This methodology seems a bit strange and potentially overkill for the problem. I would try having your input just be a 5 dimensional vector that is the difference between racer 1 and racer 2 and have the output just be the result for racer 1 (or what every racer's features are being subtracted from the other racer's features.). From here you can just use a few fully connected layers (which is essentially what you are doing be having kernel's of size 5 anyway. I would also try some more traditional machine learning algorithms with this type of 1D input. EDIT: since signs are important for the difference between your two racer features, do not use a relu as your activation function for this approach.
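A minimal sketch of the suggested difference-feature alternative (trainX is assumed to be reshaped to (n_samples, 10), with racer 1's five features followed by racer 2's, and trainY is the 0/1 winner label). A tanh hidden layer is used instead of relu so that the sign of the differences is preserved, as noted in the edit above:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    diff_X = trainX[:, :5] - trainX[:, 5:]                  # 5-dimensional difference per race

    model = Sequential()
    model.add(Dense(16, activation='tanh', input_shape=(5,)))
    model.add(Dense(1, activation='sigmoid'))               # P(racer 1 wins)
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(diff_X, trainY, epochs=20, batch_size=32)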
H: Bias in Naive Bayes classifier
I am building a document classifier using Naive Bayes. There are 10 classes. My questions are:
1. Should each class contain the same number of documents for training? What if the number of training examples in each class is different?
2. Do the number of classes and the choice of classification algorithm have any relation? Say, is there any rule of thumb like: if there are 100 classes, algorithm 'X' will perform better than 'Y'?
AI: Unbalanced class distributions
First, unbalanced datasets will cause your model to have a bias towards the over-represented classes. If the class distribution is not very skewed then this should not cause a significant problem with any algorithm you employ. However, as the imbalance becomes more severe you should expect to get more false negatives for the under-represented classes. Consider this: you are trying to have the model adequately identify what it means for a specific example to belong to a class. If you do not provide sufficient examples, then the model will not be able to capture the extent of the variation which exists among the examples. If the class distribution is very different, then I would suggest anomaly detection techniques. These techniques allow you to learn the distribution of a single class and then identify whether novel examples fall within this distribution or not.
Choosing an algorithm
More classes result in a higher dimensional output, thus contributing to the complexity of your model. For example, if you have a model which discriminates between 2 classes on a fixed dataset size, then further discrimination (increasing the number of output classes) will cause the model to have higher bias, so you should expect greater test error if you do not increase the size of your dataset. Given a fixed dataset X, you need to find the correct balance between bias and model complexity to get optimal results. For example, a neural-network-based technique (highly complex) is not a good algorithm to use for a limited dataset with many output classes; Naive Bayes or Random Forest would be.
H: How to retrain the neural network when new data comes in?
I am new to deep learning. Can anybody help me with an online learning implementation for deep learning models? As per my understanding, I can save a Keras/TensorFlow model after training and, when new data comes in, reload the network and retrain it using the new data. I haven't seen this method documented anywhere. Is my understanding incorrect? If yes, let me know what can be done so that the model keeps getting retrained when new data comes in.
AI: It's extremely simple, and there are a lot of ways of doing it. I am assuming you are familiar with stochastic gradient descent (SGD). Here is one naive way of doing it: reload the model into RAM, then write an SGD function like SGD(X, y) that takes the new sample and label, runs one step of SGD on it, and saves the updated model. As you can see this is highly inefficient; a better way is to collect a number of samples and then run a step of stochastic batch gradient descent on them, so that you don't have to reload the updated model every time you get a new sample. I hope this gives you a rough idea of how the implementation can be done; you can easily find much more efficient and scalable ways of doing it. If you are not familiar with algorithms like SGD, I would recommend getting familiar with them, because online learning is just mini-batch gradient descent with a batch size of one.
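For example, a minimal Keras sketch of the save / reload / continue-training loop described above (the file name and the new_X, new_y arrays are placeholders):

from keras.models import load_model

# after the initial training run
model.save('my_model.h5')

# later, when new data arrives
model = load_model('my_model.h5')
model.fit(new_X, new_y, epochs=1, batch_size=32)   # one pass of gradient descent over the new samples
model.save('my_model.h5')                          # persist the updated weights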
H: Python inserts newline by writing to csv I am trying to scrape http://www.the-numbers.com/movie/budgets/all but when I write the table into a csv file, there is an additional line with the counter index written in between each movie row... how can I get rid of this? I dont understand how that counter line is being written to the csv... import csv,os from bs4 import BeautifulSoup from urllib.request import Request, urlopen, URLError from selenium import webdriver counter = 0 currentDir=os.getcwd() filename = currentDir + "\\theNumbersScraper.csv" pagecount = 1 headers=['ID', 'Release Date', 'Movie', 'Production Budget', 'Domestic Gross', 'Worldwide Gross'] with open(filename, 'w' ,newline='\n',encoding='utf-8') as csvfile: #writer = csv.DictWriter(csvfile, fieldnames=dictionary)#write headers #writer.writeheader() #csvfile = open(filename, 'w', newline='',encoding='utf-8') writer = csv.writer(csvfile,delimiter='|') writer.writerow(headers) #with open(filename, 'a', newline='',encoding='utf-8') as csvfile: #writer = csv.DictWriter(csvfile, fieldnames=dictionary)#write headers #writer.writeheader() #csvfile = open(filename, 'w', newline='',encoding='utf-8') # writer = csv.writer(csvfile,delimiter='|') while pagecount <5401: """ #movie-entries go from http://www.the-numbers.com/movie/budgets/all/1 #to http://www.the-numbers.com/movie/budgets/all/5401 #so there are 5400 entries """ request = Request("http://www.the-numbers.com/movie/budgets/all/"+str(pagecount)) request.add_header('User-agent', 'wswp') website = urlopen(request).read().strip() soup = BeautifulSoup(website,'lxml') """#obsolete headertags = soup.find("table").find_next("tr").find_all("th") headers= [] for line in headertags: headers.append(line.string) headers[0] = 'ID' """ #movie-entries go from http://www.the-numbers.com/movie/budgets/all/1 #to http://www.the-numbers.com/movie/budgets/all/5401 #so there are 5400 entries all_tr = soup.find_all("tr") for movie in range(1, len(all_tr)): row=[] counter+=1 row.append(counter) td = all_tr[movie].find_all("td") for colIndex in range(1, len(td)): row.append(td[colIndex].string) writer.writerow(row) """ for tr in all_tr: row = [] td = tr.find_all("td") for i in range(1, 4): row.append(str(counter)) row.append(td[i].string) writer.writerow(row) counter+=1 """ pagecount +=100 csvfile.close() The part of interest is actually this block only: all_tr = soup.find_all("tr") for movie in range(1, len(all_tr)): row=[] counter+=1 row.append(counter) td = all_tr[movie].find_all("td") for colIndex in range(1, len(td)): row.append(td[colIndex].string) writer.writerow(row) The output is: ID|Release Date|Movie|Production Budget|Domestic Gross|Worldwide Gross 1|12/18/2009|Avatar|$425,000,000|$760,507,625|$2,783,918,982 2 3|5/20/2011|Pirates of the Caribbean: On Stranger Tides|$410,600,000|$241,063,875|$1,045,663,875 4 5|5/1/2015|Avengers: Age of Ultron|$330,600,000|$459,005,868|$1,408,218,722 6 and so on, with additional counter lines between the output that I don't want, how can I get rid of that? (and where does it come from?) AI: Check the source code of http://www.the-numbers.com/movie/budgets/all/5401 In that table, every second row is an empty <tr> tag.
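Given that, one way to handle it is to skip rows that contain no <td> cells (a sketch of the loop from the question, reusing the soup, counter and writer objects defined above; untested against the live site):

all_tr = soup.find_all("tr")
for tr in all_tr[1:]:              # skip the header row
    td = tr.find_all("td")
    if not td:                     # every second <tr> on the page is empty
        continue
    counter += 1
    row = [counter]
    for col in td[1:]:             # first <td> is the site's own index column
        row.append(col.string)
    writer.writerow(row)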
H: Applications of Neural networks?
I am currently learning about neural networks in machine learning and find them very interesting, but I don't really know any applications NNs have other than in machine learning and AI. I have tried researching this but have come up short. Could anyone tell me if you could use them in "Inventory management" or "Finance"? If you can, would you have any examples?
AI: Yes, neural networks are applied in most fields. A simple search on Google Scholar will provide a list that includes fraud detection, stock market trading decisions, inventory classification and many more, including the link below: neural network inventory management
As for finance, we can look at credit scoring, bankruptcy forecasting, financial forecasting and many more. Link from Google Scholar included below: neural network finance
Also included is a working example of simple credit scoring with a NN: Using neural networks for credit scoring: a simple example
H: Confusion with cosine similarity
In information retrieval, when we calculate the cosine similarity between the query feature vector and the document feature vector, we penalize the words that are unseen in the query. For example, if we have two documents with feature vectors
d1 = [1,1,1,0,0]
d2 = [0,1,1,1,0]
we can see that the two documents share the second feature, so if we want to search for the second feature with query vector
q = [0,1,0,0,0]
then the cosine similarity between q and d1, d2 will be $1/\sqrt{3}$, and not 1, because we penalize the other features that we have not mentioned in the query. From this, I don't understand why penalizing is a good idea. Is penalizing unseen features good? Is there another similarity measure that does not penalize them?
AI: Cosine similarity is not the right tool for searching for specific features in documents. For that, you can simply use the dot product, since it ignores the features that are zero in the query vector. Cosine similarity, in the current context, is meant for measuring the similarity between two whole documents, so all features matter: if a feature is present in the query vector but not in a document, or vice versa, the two are not 100 percent similar. Hence, the penalty makes sense.
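To make the difference concrete, a small numeric check with plain NumPy, using the vectors from the question:

import numpy as np

d1 = np.array([1, 1, 1, 0, 0])
d2 = np.array([0, 1, 1, 1, 0])
q  = np.array([0, 1, 0, 0, 0])

def cosine(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(q.dot(d1), q.dot(d2))          # dot product: 1 and 1 - features unseen in the query are ignored
print(cosine(q, d1), cosine(q, d2))  # cosine: ~0.577 (= 1/sqrt(3)) for both - the mismatch is penalized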
H: Is Neural Style Transfer considered supervised or unsupervised learning? As far as I understand it, the Neural Style Transfer uses a content image and a style image, and generate a new image based on the two images. It tries to find a set of pixel values such that the cost function J(C, S) is minimized. It does not have any labels associated in advance, but it has an output (generated image) that should be the target of the learning. However, I'm not sure if this is considered supervised or unsupervised learning. Which does it belong to? AI: Neural style transfer is not really machine learning, but an interesting side effect/output of machine learning on image tasks. When performing neural style transfer using a pre-trained model, then a significant amount of supervised machine learning has already occurred to enable it. The style transfer algorithm is still an example of gradient-based cost function optimisation, which it shares with many supervised and unsupervised learning algorithms. The output of style transfer is partially a probe into what a network has learned about different levels of structure in the problem domain it has been trained on. However, its main use has been to generate images with altered/mixed aesthetics for art and entertainment.
H: New to data Science. Which techniques best to use Large data set in insurance company? Dataset Features Insurance underwriting dataset for 8 years. age location amount insured some other features...(medical evidence) Not all feature will be available to all applicants. Target Variable Decision on whether the applicant can be insured Question What techniques can be used and which ones would work best? Outline a high-level overview - I do not think I have to go into too much detail as I do not have any data. Things I have considered I am thinking to first slice the data and analyse it in parts and see can I find a pattern. Regression analysis could be carried out. Possible Logistic regression? I could take a sample and perform hypothesis tests? I know that this is an ideal machine learning situation. I don't have any experience in this field and I think I am better to stick with methods I have some knowledge of. I know this is very ambiguous, but a little nod in the right direction and I would be very appreciative. AI: I think you need to do couple of tests to see what all variables are important with respect to your Target Variable(client can be insured: Yes/No) - this kind of test is called Predictor Importance test. As you you have mentioned this is sector is new for you, I would suggest you to take all the variable you think are useful. Convert categorical variables to factors using as.factor() numeric variable to numeric as as.numeric(). The reason for explicit transformation is sometimes algorithms cannot understand like: test1 <- c(1,2,2,4,4,1,2) summary(test1) Min. 1st Qu. Median Mean 3rd Qu. Max. 1.000 1.500 2.000 2.286 3.000 4.000 test1 <- as.factor(test1) summary(test1) 1 2 4 2 3 2 Now once the data is ready, you can give them as an input to Boruta, by using this you get the predictor importance graph. For better understanding you can go through this Link or else if you want to learn why, how and when you can go through this Link for different tests for different variables. Boruta does all the above by itself and the outcome is set of important features, With Respect to that you can feed those respective features to your model for getting better results/accuracy. As your problem is Binary Classification, you can use the following Algorithms: Decision Trees Random Forest Bayesian Network Support Vector Machines(SVM) Neural Network For all the above algorithms I've attached a link for your reference in which you can find at-least one example for each. Hope my answer is helpful, mark as answer if you got what you needed.
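As an illustration, a minimal Boruta run in R (the data frame df and the yes/no target column insured are placeholders to adapt to your own schema):

library(Boruta)

df$insured <- as.factor(df$insured)            # binary target: can the applicant be insured?
set.seed(123)
fs <- Boruta(insured ~ ., data = df, doTrace = 1)
print(fs)                                      # confirmed / tentative / rejected attributes
getSelectedAttributes(fs, withTentative = FALSE)
plot(fs, las = 2)                              # predictor importance plot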
H: Error when running RGFClassifier I recently tried to run Regularized Greedy Forest algorithm (rgf Classifier) from this package https://pypi.python.org/pypi/rgf_python. I did run pip install rgf_python, but when I tried to run the example this error pops out in spyder: File "C:\Users\me\Anaconda3\lib\site-packages\rgf\sklearn.py", line 111, in <module> "config flag 'exe_location' to RGF execution file.".format(_EXE_PATH)) Exception: C:\Users\me\rgf.exe is not executable file. Please set config flag 'exe_location' to RGF execution file. What should I change in order for config the flag 'exe_location'? May sound beginner but can't find it anywhere. AI: I had the same problem, and I'm not proud of the very "hacky" solution, but the bottom line is it's working now. I'm working on a Mac, so I'm sure your solution will be a bit different. The normal pip install wasn't working for me, so to install I did... git clone https://github.com/fukatani/rgf_python.git cd rgf_python python setup.py install I was then able to find the executable file your error message is complaining about in the new rgf_python directory, specifically... rgf_python/include/rgf/bin/rgf From there, I tried to change the exe_location in my .rgfrc file to point to the above file, but it didn't work so I went to the /site-packages/rgf/sklearn.py file mentioned at the top of your error message and changed a line in the "_get_paths" function. For me it was line 64/66. I suspect yours may be 54/56 if you're on Windows. I commented out the old declaration of "exe" and changed it to point to the rgf executable file I found earlier. I realize how messy this is, so I hope someone can find a better solution, but in the meantime, this did the trick for me. Good luck, and let me know if you have any questions.
H: Feature selection by overfitting a small sample size
I am using a CNN-based model to do sequence classification. Since training on the entire dataset is very expensive, and I have a large set of features I need to try, it's impossible for me to select features by running full trainings. A good sanity check before training is to let the model overfit a small set of training samples, to make sure the model is at least capable of memorizing a small sample. Borrowing from this idea, my question is: can I train on a small subset of the training data and use its loss curve as a metric to select the best features? Each training run would test how quickly the training loss converges given a subset of selected features.
AI: Congratulations! You have independently suggested the wrapper method for feature selection. Yes, you can use this method. However, consider that the wrapper method is slow, since you have to train a model for each iteration. More than that, feature selection is an NP-complete problem, so don't expect the optimal subset.
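A rough sketch of that wrapper loop with scikit-learn, where a cheap model stands in for the expensive CNN (the candidate feature subsets, the subsample size and the scoring model are all assumptions to adapt):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
idx = rng.choice(len(X_train), size=5000, replace=False)    # small subsample of the training data
X_small, y_small = X_train[idx], y_train[idx]

results = {}
for name, cols in candidate_subsets.items():                # e.g. {'set_a': [0, 3, 7], ...}
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X_small[:, cols], y_small, cv=3).mean()
    results[name] = score
print(sorted(results.items(), key=lambda kv: -kv[1]))       # best-scoring subsets first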
H: What is the rationale to use weights and biases in a neural network?
I have searched a lot of research papers, blogs and videos but couldn't find an acceptable answer for choosing the weights and biases in a neural network. A few people have mentioned initializing the weights randomly, but if that is the case, isn't that uncertain (fuzzy)?
AI: Weight Initialization
Weights can be initialized either by setting them all to zero, or by setting them randomly using a Gaussian distribution centered at 0 with a variance similar to the scale of your input features.
Is every initialization going to give you the same result? No. Neural networks (NNs) converge to a local minimum, so different initializations of the weights will cause the loss function to be minimized to a different value. If you are unhappy with your results you can always do random restarts with different weight initializations.
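For example, a small NumPy sketch of the random (Gaussian) initialization described above, for one layer with n_in inputs and n_out units (the 0.01 scale is just a common, illustrative choice):

import numpy as np

n_in, n_out = 784, 128
W = np.random.randn(n_in, n_out) * 0.01   # small random weights, centered at 0
b = np.zeros(n_out)                       # biases are commonly started at zero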
H: How to correctly interpret the table function to determine logistic regression accuracy?
I have a set of data that gives the length of a species of abalone and its corresponding type (male, M, female, F, or infant, I). (https://archive.ics.uci.edu/ml/datasets/abalone)
I have constructed a logistic regression to create a model that will determine whether the abalone is M/F or I, given the length. (M and F are classed as the same.) So I write the following in R to generate and test the model on data points:
g <- glm(sex ~ length, family="binomial")
pairs <- paste(round(predict(g, type="response")), sex)
table(pairs)
The output table is:
pairs
 0 F   0 I   0 M   1 F   1 I   1 M
 218     6   210  1089  1336  1318
How can I correctly interpret this? These are the options I have come up with:
Option 1: 1089 females correctly identified, 218 females incorrectly identified; 1318 males correctly identified, 210 males incorrectly identified; 1336 infants correctly identified, 6 incorrectly identified.
Option 2: 218 females correctly identified, 6 infants and 210 males incorrectly identified; 1089 females incorrectly identified, 1336 infants and 1318 males incorrectly identified.
AI: The outcome of a binomial (or multinomial) classifier is a confusion matrix (2x2 for binomial, n x n for multinomial). The entries of the confusion matrix are interpreted as follows:
Accuracy of the model: ((TP + TN) / (TP + FN + FP + TN)) * 100
It tells you what percentage of the data was classified correctly.
Precision: (TP / (TP + FP)) * 100
A measure of the correctness achieved in positive prediction, i.e. of the observations labelled as positive, how many are actually positive.
Recall: (TP / (TP + FN)) * 100
A measure of how many of the actual positive observations are labelled (predicted) correctly. It is also known as 'sensitivity'.
F-Measure: ((1 + β²) × Precision × Recall) / (β² × Precision + Recall)
It combines precision and recall as a measure of the effectiveness of classification, with the relative weight on recall versus precision determined by the coefficient β. Generally β is 1.
To read them in plain words with an example:
                Predicted
                M      F
Actual   M      100    150
         F      100    300
It means your model predicted 100 + 300 cases correctly out of 100 + 150 + 100 + 300. The actual number of males was 250, but you classified 100 correctly and 150 wrongly. Similarly, the actual number of females was 400, but you classified 300 correctly and 100 wrongly.
If you need more detail, let me know. Go through this Link; you will get a better idea. This Link is an interactive chart which will give you a better understanding.
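For the concrete table in the question, one possible way to get a conventional confusion matrix in R is to tabulate the rounded prediction against a 0/1 recoding of sex (here 1 = adult M/F, 0 = infant; this recoding is an assumption about how the model was fitted):

adult <- as.numeric(sex != "I")                  # 1 = M or F, 0 = infant
pred  <- round(predict(g, type = "response"))
cm    <- table(Predicted = pred, Actual = adult)
cm
sum(diag(cm)) / sum(cm)                          # overall accuracy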
H: One hot encoding alternatives for large categorical values I have a data frame with large categorical values over 1600 categories. Is there any way I can find alternatives so that I don't have over 1600 columns? I found this interesting link. But they are converting to class/object which I don't want. I want my final output as a data frame so that I can test with different machine learning models? Or, is there any way I can use the generated matrix to train the other machine learning models other than Logistic regression or XGBoost? Is there anyway I can implement it? AI: One option is to map rare values to 'other'. This is commonly done in e.g. natural language processing - the intuition being that very rare labels don't carry much statistical power. I have also seen people map 1-hot categorical values to lower-dimensional vectors, where each 1-hot vector is re-represented as a draw from a multivariate Gaussian. See e.g. the paper Deep Knowledge Tracing, which says this approach is motivated by the idea of compressed sensing: BARANIUK, R. Compressive sensing. IEEE signal processing magazine 24, 4 (2007). Specifically, they map each vector of length N to a shorter vector of length log2(N). I have not done this myself but I think it would be worth trying.
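A small pandas sketch of the "map rare values to 'other'" idea (the column name and the frequency cutoff are assumptions to adapt):

import pandas as pd

counts = df['category'].value_counts()
keep = counts[counts >= 50].index             # keep levels seen at least 50 times
df['category_grouped'] = df['category'].where(df['category'].isin(keep), 'other')

# now one-hot encode the much smaller set of levels
X = pd.get_dummies(df['category_grouped'])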
H: Classifying with certainty I'm trying to classify a binary sample with Keras and I would like to classify as many correctly as possible, while ignore the ones where the model is not sure. The fully connected Nerual network currencly achieves around 65% but I would like to get a higher result of correctly classified ones, while ignoring the ones where the model is uncertain. Is there a way to tell Keras to simply ignore the ones where the model is uncertain and achieve a higher accuracy that way? Or is there a network design that could achieve this, for example feeding the result of the network striaght into a second part of it which then decides whether the prediction is likely accurate or not? One way I was thinking of achieving this is by building a second neural network on top of it that decides based on the result of the first network and all the input data of it, whether the classification will be correct or not. Would that work, and if yes, is there no more elegant way of achieving this in one go, such as directly having the results feeding into a second part of the network that then decides if the prediction is likley accurate or not? AI: Softmax output in neural networks can be misleading - often the confidence provided is higher than is intuitive. See e.g. here: A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks https://pdfs.semanticscholar.org/c26e/1beaeaa55acae7336882de5df48716afb8bb.pdf which suggests that in practice, softmax is not helpfully interpretable as a probability but should instead be used for ranking among class options. If you want to have an accurate probability estimate, you might consider using a Bayesian approach in which you explicitly model your estimate of each of the input variances, the output variance, etc. Failing that, having a second phase neural network that takes the input and predict correct or incorrect classification by the first network is an interesting idea - where incorrect classification is a proxy for 'low confidence' classification. If you try it I'd be curious to know how it works. Edit: As @Emre said the input to the softmax would be more informative than the softmax itself because it's pre-scaled (i.e. not forced to sum to 1). So it should reflect confidence better, with values further away from 0 indicating higher confidence.
H: Wrong Graph Plot using K-Means in Python This is my first time implementing a Machine Learning Algorithm in Python. I tried implementing K-Means using Python and Sklearn for this dataset. from sklearn.cluster import KMeans import numpy as np import pandas as pd from matplotlib import pyplot as plt # Importing the dataset data = pd.read_csv('dataset.csv') print("Input Data and Shape") print(data.shape) data.head() # Getting the values and plotting it f1 = data['Area'].values f2 = data['perimeter'].values f3 = data['Compactness'].values f4 = data['length_kernel'].values f5 = data['width_kernel'].values f6 = data['asymmetry'].values f7 = data['length_kernel_groove'].values X = np.array(list(zip(f1,f2,f3,f4,f5,f6,f7))) # Number of clusters kmeans = KMeans(n_clusters=7) kmeans = kmeans.fit(X) # Getting the cluster labels labels = kmeans.predict(X) # Centroid values centroids = kmeans.cluster_centers_ plt.scatter(X[:,0], X[:,1],cmap='rainbow') plt.scatter(centroids[:,0], centroids[:1], color="black", marker='*') plt.show() The graph doesn't seem to plot the data correctly. How can I debug this issue? AI: Well, there are some issues: Dimension vs K: Before talking about visualization I would like to address some clustering concept. Your data is in 7 dimensions but it does not mean that you have 7 clusters! Be careful here. For instance I have two features of people let's say salary and number of years they have working experience. Here I have two features but does it mean that there necessary two categories inside the data? sure not! Visualization: Your data is in 7 dimension which is not visualizable. So you decided to reduce this to two which is a correct approach but you did a wrong thing for this correct approach. You can not take the first two features to visualize 7 dimensions, you need to REDUCE it to two features using Dimensionality Reduction algorithms like PCA, NMF, etc. What you did is actually IGNORING 5 dimensions of the points which are extremely informative for placing them in a 7-dimensional space. Solution Everything is right. Just add a PCA to your code like this: From sklearn.decomposition import PCA Model = PCA(n_components=2) X_new = Model.fit_transform(X) ... Use X_new instead of X for K-means procedure Please note that I wrote this relying on my memory so better to check the documentation if I had a typo or smth. In case you have more question you can comment here. Good Luck!
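Putting the suggestion together, a runnable sketch: it reduces the 7 scaled features (the X array from the question) to 2 principal components, clusters in that reduced space, and plots the result. The choice of 3 clusters is only an example - pick k from the data, e.g. with the elbow method.

from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from matplotlib import pyplot as plt

pca = PCA(n_components=2)
X_new = pca.fit_transform(X)                   # 7 features -> 2 components for visualization

k = 3                                          # example only - choose k from the data, not the feature count
kmeans = KMeans(n_clusters=k).fit(X_new)
centroids = kmeans.cluster_centers_

plt.scatter(X_new[:, 0], X_new[:, 1], c=kmeans.labels_, cmap='rainbow')
plt.scatter(centroids[:, 0], centroids[:, 1], color='black', marker='*', s=200)
plt.show()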
H: How to add bias consideration into logistic regression code? I am a complete beginner in coding and machine learning, and I've been tasked with learning what's under the hood of logistic regression (so I have pieced together the python code below) but I've been asked to figure out how to add bias into this code. I'm completely stuck on at what point it would need to be added into, I think at the point I am defining the hypothesis function - but if anyone would be able to point me in the right direction to figure this out I would really appreciate it. If it helps, this logistic regression is being used to classify if a tumour is benign of malignant with the wisconsin breast cancer dataset (https://www.kaggle.com/uciml/breast-cancer-wisconsin-data) X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.3) X = data["diagnosis"].map(lambda x: float(x)) X = data[['texture_mean','perimeter_mean','smoothness_mean','compactness_mean','symmetry_mean', 'diagnosis']] X = np.array(X) X = min_max_scaler.fit_transform(X) Y = data["diagnosis"].map(lambda x: float(x)) Y = np.array(Y) def Sigmoid(z): if z < 0: return 1 - 1/(1 + math.exp(z)) else: return 1/(1 + math.exp(-z)) def Hypothesis(theta, x): z = 0 for i in range(len(theta)): z += x[i]*theta[i] return Sigmoid(z)enter preformatted text here def Cost_Function(X,Y,theta,m): sumOfErrors = 0 for i in range(m): xi = X[i] hi = Hypothesis(theta,xi) error = Y[i] * math.log(hi if hi >0 else 1) if Y[i] == 1: error = Y[i] * math.log(hi if hi >0 else 1) elif Y[i] == 0: error = (1-Y[i]) * math.log(1-hi if 1-hi >0 else 1) sumOfErrors += error const = -1/m J = const * sumOfErrors print ('cost is: ', J ) return J def Cost_Function_Derivative(X,Y,theta,j,m,alpha): sumErrors = 0 for i in range(m): xi = X[i] xij = xi[j] hi = Hypothesis(theta,X[i]) error = (hi - Y[i])*xij sumErrors += error m = len(Y) constant = float(alpha)/float(m) J = constant * sumErrors return J def Gradient_Descent(X,Y,theta,m,alpha): new_theta = [] constant = alpha/m for j in range(len(theta)): CFDerivative = Cost_Function_Derivative(X,Y,theta,j,m,alpha) new_theta_value = theta[j] - CFDerivative new_theta.append(new_theta_value) return new_theta initial_theta = [0,1] alpha = 0.01 iterations = 1000 Logistic_Regression(X,Y,alpha,initial_theta,iterations) AI: If I understood correctly, by bias you mean the intercept term in your model, that is, $\alpha$ in the equation $$ p(y=1|x) = \frac{1}{1+e^{-(\alpha + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_k x_x)}} $$ If it is the case, you can easily incorporate intercept by adding a colum of ones into your X: X = np.hstack([np.ones([X.shape[0],1]), X]) So the first colum of $X$ is always one. Then, your model will look like $$ p(y=1|x) = \frac{1}{1+e^{-(\beta_0 x_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_k x_x)}} $$ where $\beta_0$ is your bias term and $x_0$ always equals $1$. Thus, your model would be equivalent to the one with explicit intercept.
H: How to add more theta parameters into my logistic regression? I am a complete beginner in machine learning and coding in python. I have been tasked with coding logistic regression from scratch in comparison with using sklearn. My question is, with my code below I believe I have set the number of thetas I want with: "X = data[['texture_mean','perimeter_mean','smoothness_mean','compactness_mean','symmetry_mean', 'diagnosis']]" but I am unsure how to prove this is true with my code below and its definition of theta, if I added more parameters (e.g. all 31 variables of this dataset [https://www.kaggle.com/uciml/breast-cancer-wisconsin-data] which is for classifying tumours) would I just need to add them into this list above? Any help pointing me towards the right direction just to understand this better would be appreciated. X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.3) X = data["diagnosis"].map(lambda x: float(x)) X = data[['texture_mean','perimeter_mean','smoothness_mean','compactness_mean','symmetry_mean', 'diagnosis']] X = np.array(X) X = min_max_scaler.fit_transform(X) Y = data["diagnosis"].map(lambda x: float(x)) Y = np.array(Y) def Sigmoid(z): if z < 0: return 1 - 1/(1 + math.exp(z)) else: return 1/(1 + math.exp(-z)) def Hypothesis(theta, x): z = 0 for i in range(len(theta)): z += x[i]*theta[i] return Sigmoid(z)enter preformatted text here def Cost_Function(X,Y,theta,m): sumOfErrors = 0 for i in range(m): xi = X[i] hi = Hypothesis(theta,xi) error = Y[i] * math.log(hi if hi >0 else 1) if Y[i] == 1: error = Y[i] * math.log(hi if hi >0 else 1) elif Y[i] == 0: error = (1-Y[i]) * math.log(1-hi if 1-hi >0 else 1) sumOfErrors += error const = -1/m J = const * sumOfErrors print ('cost is: ', J ) return J def Cost_Function_Derivative(X,Y,theta,j,m,alpha): sumErrors = 0 for i in range(m): xi = X[i] xij = xi[j] hi = Hypothesis(theta,X[i]) error = (hi - Y[i])*xij sumErrors += error m = len(Y) constant = float(alpha)/float(m) J = constant * sumErrors return J def Gradient_Descent(X,Y,theta,m,alpha): new_theta = [] constant = alpha/m for j in range(len(theta)): CFDerivative = Cost_Function_Derivative(X,Y,theta,j,m,alpha) new_theta_value = theta[j] - CFDerivative new_theta.append(new_theta_value) return new_theta def Logistic_Regression(X,Y,alpha,theta,num_iters): m = len(Y) for x in range(num_iters): new_theta = Gradient_Descent(X,Y,theta,m,alpha) theta = new_theta if x % 100 == 0: Cost_Function(X,Y,theta,m) print ('theta: ', theta) print ('cost is: ', Cost_Function(X,Y,theta,m)) initial_theta = [0,1] alpha = 0.01 iterations = 1000 Logistic_Regression(X,Y,alpha,initial_theta,iterations) AI: Your pass your initial_theta into Logistic_Regression where it defines how the cost function and its derivative are evaluated. Just make initial_theta the same width as X. If you want your code to be fool-proof, check variable sizes within your function like def Logistic_Regression(X,Y,alpha,theta,num_iters): assert len(theta) == X.shape[1], 'theta should have one coefficient per each column of X' .... A more convenient solution would be to define initial_theta inside the function, based on X: def Logistic_Regression(X,Y,alpha,num_iters): theta = np.zeros(X.shape[1]) .... In this case, you can be sure that the shape of initial $\theta$ is correct.
H: Features standardization - Multilayer perceptron I have serious doubts concerning the features standardization done before the learning process of a multilayer perceptron. I'm using python-3 and the scikit-learn package for the learning process and for the features normalization. As suggested from the scikit-learn wiki (Tips on pratical use), I'm doing a features standardization with the Preprocessing module, which means that all my features will appear as standard normal specifications. The problem is that in the showed example: from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # Don't cheat - fit only on training data scaler.fit(X_train) X_train = scaler.transform(X_train) # apply same transformation to test data X_test = scaler.transform(X_test) The standardization is done first on the training set, and then, after that, it's applied on the test set. Which means, that the distributions parameters (mean, st.deviation) will be estimated only from the training set. Now, because I need to save my multiple datasets in a database, on which I will perform the hold-out sampling for the training-test folds, I have done a preprocessing on all my instances at once, because I don't want to do that at run time execution. It's clear that in this way the distributions parameters will change and the results will be different. Now, Someone knows the differences between the two approaches? Are there issues doing the preprocessing at once on all the dataset? AI: Generally, preprocessing parameters are fit only on the training subset, because otherwise you could overfit your data and overestimate quality of your model on the test subset. With feature standardization, however, overfitting is not so dangerous, so I assume you can preprocess all your dataset at once safely. The best practice, however, is to do all the processing in one pipeline and apply cross-validation to the whole pipeline: from sklearn.pipeline import make_pipeline from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import StandardScaler from sklearn.model_selection import cross_val_score, GridSearchCV scaler = StandardScaler() model = MLPClassifier() pipeline = make_pipeline(scaler, model) scores = cross_val_score(pipeline, X, y) If you use scikit-learn cross-validation, incorporating pipelines would be simple.
H: Handling data imbalance and class number for classification 1 Is there a way to handle data imbalance? ie if data in each class for training is not balanced, say some classes have 50 documents some other have 200 documents. How to handle this? 2 How to handle the classification problem with a large number of classes? I have around 50+ classes (may increase once more data is available) to learn. I am trying out different algorithms and features. Is there any way to handle classification problem with a large number of classes? AI: People talk a lot about data imbalance, but in general I think you don't need to worry about it unless your data is really imbalanced (like <1% of one label). 50/200 is fine. If you build a logistic regression model on that dataset, the model will be biased towards the majority class - but if you gave me no information about an input to classify, the prior probability is that the new input is a member of the majority class anyway. The question you want to be able to answer is whether you are differentiating classes fine - so if you do have a minority class, do NOT use 'accuracy' as a metric. Use something like area under the ROC curve (commonly called AUC) instead. If your data is really super imbalanced, you can either over-sample the minority class or use something called 'SMOTE', for "Synthetic Minority Over-Sampling Technique", which is a more advanced version of the same thing. Some algorithms also let you set higher weights on minority classes, which essentially incentivizes the model to pay attention to the minority class by making minority-class errors cost more. To learn to differentiate between lots of classes, I think (a) you will need to have a ton of examples to learn from and (b) a model that's expressive enough to capture class differences (like deep neural network, or boosted decision tree), and (c) use softmax output. If those still don't work, you might try a 'model-free' approach like K-nearest-neighbors, which matches each input to the most similar labeled data. For kNN to work however, you need to have a very reasonable distance metric.
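If you do decide to compensate for the imbalance, many scikit-learn models let you set class weights directly, which is the "higher weights on minority classes" option mentioned above. A sketch (the model is just an example; X and y are the feature matrix and the 50+ class labels):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression(class_weight='balanced', max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring='f1_macro')   # macro-F1 treats all classes equally
print(scores.mean())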
H: Classification based on a Clustering Result
Here is the sample data:
Values  Attribute1  Attribute2  Attribute3  Attribute4
2.5     1980        A           1.5         C
1.8     2000        A           0.4         B
12.4    2017        S           18.5        D
0.4     1987        R           10          P
......
In my real data, I have more than 60 attributes. What I need to do is: categorize Values into different groups, then classify these groups using Attribute1 - Attribute4.
Some of the difficulties are:
Values mean something in my application. For example, Values > 10 will be put into one group. So it might not be ideal to use a clustering algorithm such as a density-based one to divide Values into groups.
When I use the groups based on Values and all the attributes to run a classification, I might need to choose some important attributes.
What I want is to give Values (meaningful data) to a clustering algorithm, so that I can get the best results/insights from that clustering. So, here are my questions: How to cluster? How to choose attributes?
AI: I think you need to do some feature engineering, i.e., as you explained in the question, those values mean something to your application. For example: 1-3 : Bad, 4-6 : Average, 7-10 : Good
V1   new_V1
3    Bad
5    Average
7    Good
Something like this, so that the clustering algorithm can make sense out of it.
Assuming that you don't have any constraint on using R: if you have both numeric and categorical variables (as above) then you can use the package clustMixType; it can understand and do the clustering based on the data fed to it, whereas the traditional K-means algorithm is not applicable here as it works only for numeric data. If there is any discrepancy in the data you can convert columns explicitly with new_V1 <- as.factor(new_V1) for a categorical variable, or V2 <- as.numeric(V2) for a numeric variable, before feeding the data to the clustering algorithm.
Once you have that, you can use the package Boruta to get predictor importance (which variables are important). This can be done only when you know the target variable; most likely your target variable would be the clustering outcome. The implementation is over here, please follow this Link. Most likely you will end up getting your desired result. Do let me know if you have any additional questions.
H: Reading a CSV in TensorFlow RNN I am just starting off with TensorFlow and trying to implement an RNN for a dataset which consists of 11 Features (all numeric). These features will be used to predict the output of another column. I am currently lost on where to start and tho I am able to understand how a RNN functions all the tutorials I could find were mainly related to image and text datasets. AI: There is no fundamental difference between using RNNs for text vs. for numeric values. In fact, text is more difficult because you have to preprocess the text to convert into numeric values first. In any case you might have a look at this tutorial: https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ I find this site to be pretty useful in general for examples of working with Keras.
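A minimal sketch of getting a numeric CSV into the (samples, timesteps, features) shape an LSTM expects (the file name, column names, window length and target column are assumptions):

import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM, Dense

df = pd.read_csv('data.csv')
features = df.drop(columns=['target']).values      # the 11 numeric feature columns
target = df['target'].values

window = 10                                         # timesteps per sample
X, y = [], []
for i in range(len(df) - window):
    X.append(features[i:i + window])
    y.append(target[i + window])
X, y = np.array(X), np.array(y)                     # X.shape == (samples, window, 11)

model = Sequential()
model.add(LSTM(32, input_shape=(window, X.shape[2])))
model.add(Dense(1))                                 # regression on the target column
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=10, batch_size=32)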
H: Python Pandas - Interpolation with fixed value Say we have a pandas series with the following values [np.nan, np.nan, 1, np.nan, 2, np.nan] What is the most efficient way fill the nan value with 0 in the middle. so we have [np.nan, np.nan, 1, 0, 2, np.nan] In other word, how to we do interpolation with a fixed value, or a .fillna operation but ignore the nan at the beginning and end of the array. AI: Current solution, I am using def interpolate_with_fixed(s, value=0): i = s.first_valid_index() j = s.last_valid_index() s.loc[i:j].fillna(value, inplace=True) return s
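One caveat with the snippet above: calling fillna(..., inplace=True) on a .loc slice may silently operate on a copy in some pandas versions. Assigning the filled slice back avoids that ambiguity:

import numpy as np
import pandas as pd

def interpolate_with_fixed(s, value=0):
    i, j = s.first_valid_index(), s.last_valid_index()
    s.loc[i:j] = s.loc[i:j].fillna(value)    # only fill between the first and last valid points
    return s

s = pd.Series([np.nan, np.nan, 1, np.nan, 2, np.nan])
print(interpolate_with_fixed(s).tolist())    # [nan, nan, 1.0, 0.0, 2.0, nan]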
H: Non-convex feasible region in a linear programming problem Problem: I am working on a linear programming problem, i.e. a linear objective function to minimize: $\mathbf{c}\cdot\mathbf{x}$, where $\mathbf{c},\mathbf{x}\in\mathbb{R}^{N}$ Subject to constraints: $\mathbf{x}\geq 0$ and $\mathbf{A}\mathbf{x} \leq \mathbf{b}$ where $\mathbf{A}\in\mathbb{R}^{MxN}$ and $\mathbf{b}\in\mathbb{R}^{M}$ I am using CyLP and the COIN-OR simplex solver to perform the optimization. Question: A feature I would like to be able to build into the problem is the ability to handle some atypical constraints: Suppose $x_{i}$ is a component of $\mathbf{x}$. I want to apply the following constraints: $x_{i} = 0$ or $\mathrm{lb} \leq x_{i} \leq \mathrm{ub}$. In geometric terms, I want $x_{i}\in [0]\cup [\mathrm{lb}, \mathrm{ub}]$. Unlike the original problem, where all $x_{i}$ are bounded within convex sets (i.e. non-piecewise intervals). Is there a good way to modify an LP for the above atypical constraint? AI: Sorry, this cannot be done with a continuous LP solver. As you observed, this construct introduces a non-convexity. However, it can be handled by a Mixed Integer Programming (MIP) solver. $x=0$ or $\ell\le x\le u$ is what is known as a semi-continuous variable. Many advanced MIP solvers support this variable type directly. If a MIP solver does not support this, we can introduce a binary variable $\delta \in \{0,1\}$ and use the constraint: $$ \ell \cdot \delta \le x \le u \cdot \delta $$ to simulate a semi-continuous variable. COIN-OR has a capable MIP solver called CBC (I believe it supports semi-continuous variables directly).
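A tiny illustration of the binary-variable formulation with PuLP (which also drives the COIN-OR CBC solver); this is a stand-alone toy model, not CyLP syntax:

import pulp

lb, ub = 2.0, 10.0
prob = pulp.LpProblem("semi_continuous_demo", pulp.LpMinimize)

x = pulp.LpVariable("x", lowBound=0)
d = pulp.LpVariable("d", cat="Binary")

prob += 3 * x                        # some linear objective c.x
prob += x >= lb * d                  # together these two force x = 0 or lb <= x <= ub
prob += x <= ub * d
prob += x >= 1                       # an extra constraint that happens to rule out x = 0

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(x), pulp.value(d))  # x = 2.0, d = 1.0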
H: Modeling Grocery Store Transactions I'm fiddling around with some data that represent grocery store transactions. The data are in the following form: Each row represents a final transaction by a customer, with a column for user ID, timestamp of purchase, and basket contents. For example: | ID | Timestamp | Basket | | 12 | 2016-04-02 | ['Celery', 'Beets', 'Cheese'] | The question I'm trying to answer is "How do I suggest an ideal basket to an individual customer?" Problems I'm dealing with: I can't seem to think past just suggesting the most frequently ordered items to a customer. Is there some technique that I'm missing here? Since we have a customer ID, we can have multiple baskets for a customer over time. Because of this, I can't use apriori or eclat as those assume each transaction is independent. How can I take into account seasonality with whatever technique I choose? I'd really appreciate some general direction here, I'm having some serious analysis paralysis. AI: I think that you can approach this problem in a better way. To do so you need some Customer Demographics. Even if you try doing some recommendation using customer ID it won't give you ideal Basket/Recommendation, because from Customer ID alone you can't decide anything. Is there any possibility to get Transaction Id, so that you can unique baskets to apply Aprori Algo/ any association mining techniques by which you can give good recommendation using Lift and Support. In the time stamp do you have time? As your sample record doesn't show time in it(just for confirmation), Let us consider a scenario you have it. Now ['Celery', 'Beets', 'Cheese'] - transform as Basket-1, ['Celery', 'Bread', 'Cheese'] as Basket-2 and so on... Now you see which Baskets are sold the most WRT timestamp. This is to see if there is any seasonality in the data and you can even find some trends, if there are any. This is just one way of looking at the problem but this is not the concrete solution.
H: Decision tree ordering
I am interested in finding out how decision trees choose the order in which they split. I understand that splitting is based on information gain, and that the attribute with the lowest information gain is chosen as the root node. If I had a data set with the columns credit standing, age, income and marriage status, and I was interested in finding out what determines a person to have a good or bad credit status, am I correct in saying: I calculate entropy and information gain for each of these categorical attributes against the independent attribute, i.e. what I am investigating (credit standing), and the calculation with the lowest information gain is chosen as the root node?
For example, if this root node (first split) was Age, is the entropy and information gain of Age (the new independent attribute) against the remaining attributes (marriage status and income) calculated, and the calculation with the lowest information gain then chosen as the second split node, and so on? i.e.
information gain:
credit standing vs age = 0.01
credit standing vs status = 0.1
credit standing vs income = 0.2
Age is chosen as root (first split) node.
then, information gain:
age vs status = 0.2
age vs income = 0.1
income is chosen as second split node.
Am I understanding this correctly?
AI: Here in your scenario you need to select the attribute with the highest information gain rather than the lowest one, and the process goes on till you reach the last feature/node. Go through these links. I think you're doing it vice versa; I agree with Emre. In the links it is explained with an example about deciding whether to play tennis or not.
H: User based recommendation factoring in user data The question is: what algorithms (and libraries) should i use if i want to build a recommender system with the following data in mind representation: USER_ID ZIP Movie1 Movie2 Movie3 1 2483 5 0 3 2 2483 4 1 5 3 2345 3 1 5 Basically i want to factor in user data into a recommendation of a movie. (zips can be transformed to long/lat but thats another question and out of scope now) i was searching the internet for hours with no success. So i will be grateful if someone can point me in the right direction. AI: Collaborative Filtering: Match users to people with similar tastes –recommend what they like Commonly used in e-retail Avoids the issue of users only being recommended more of what they already like (allows serendipity) Example: Your Scenario. Methods: Euclidean Distance, Cosine Distance, Pearson Correlation Coefficient (most common). Content-Based Recommendation: Match users directly to products and content Recommend based on what you have bought or viewed in the past Commonly used for document recommendation: webpages, news articles, blogs etc. Examples: Which Movie(s) might our user like? Option1: The user selects preferences for the various features using pull-down menus etc. We match against the movies using Vector-Space methods (described next) Methods: Vector-Space Method, K Nearest Neighbour(kNN) Option2: The user rates a sample of the movies (explicitly or implicitly) as like/dislike. we then build a user profile model for that user using machine learning. Method: Decision Trees
H: Time Series prediction using LSTMs: Importance of making time series stationary In this link on Stationarity and differencing, it has been mentioned that models like ARIMA require a stationarized time series for forecasting as it's statistical properties like mean, variance, autocorrelation etc are constant over time. Since RNNs have a better capacity to learn non-linear relationships (as per given here: The Promise of Recurrent Neural Networks for Time Series Forecasting) and perform better than traditional time series models when the data is large, it is essential to understand how stationarized data would affect its results. The questions I need to know the answer of are as follows: In case of traditional time series forecasting models, stationarity in time series data makes it easier to predict, why and how? While building a time series prediction model using LSTMs, is it important to make the time series data stationary? If so, then why? AI: In general time series are not really different from other machine learning problems - you want your test set to 'look like' your training set, because you want the model you learned on your training set to still be appropriate for your test set. That's the important underlying concept regarding stationarity. Time series have the additional complexity that there may be long term structure in your data that your model may not be sophisticated enough to learn. For example, when using an autoregressive lag of N, we can't learn dependencies over intervals longer than N. Hence, when using simple models like ARIMA, we want data to also be locally stationary. As you said, stationary just means the model's statistics don't change over time ('locally' stationary). ARIMA models are essentially regression models where you use the past N values as input to linear regression to prediction the N+1st value. (At least, that's what the AR part does). When you learn the model you're learning the regression coefficients. If you have a time series where you learn the relationship between the past N points and the next point, and then you apply that to a different set of N points to predict the next value, you are implicitly assuming that the same relationship holds between the N predictor points and the following N+1st point you're trying to predict. That's stationarity. If you separated your training set into two intervals and trained on them separately, and got two very different models - what would you conclude from that? Do you think you would feel confident applying those models to predict new data? Which one would you use? These issues arise if the data is 'non-stationary'. My take on RNNs is this - you are still learning a pattern from one segment of a time series, and you still want to apply it to another part of the time series to get predictions. The model learns a simplified representation of the time series - and if that representation applies on the training set but not in the test set, it won't perform well. However, unlike ARIMA, RNNs are capable of learning nonlinearities, and specialized nodes like LSTM nodes are even better at this. In particular, LSTMs and GRUs are very good at learning long-term dependencies. See for example this blog post. Effectively this means that what is meant by 'stationarity' is less brittle with RNNs, so it's somewhat less of a concern. To be able to learn long term dependencies, however, you need LOTS of data to train on. Ultimately the proof is in the pudding. 
That is, do model validation like you would with any other machine learning project. If your model predicts well for hold-out data, you can feel somewhat confident in using it. But like any other ML project - if your test data is ever significantly different than your training data, your model will not perform well.
H: Comparing 2 data frames, if value is present replace with 1 or else 0 nrow(df1$v1) = 63849 nrow(df2$v2) = 3244 ifelse(df1$v2 == df$v1, 1, 0) I know this is an easy question but I tried different procedures but none of them are useful, for(i in 1:nrow(df2)){ for(j in 1:nrow(df1)){ if(df2$v1[i] == df1$v2[j]){ df1$v2<- 1 } df1$v2 <- 0 } } This does the job but it take quiet sometime to get the job done. Other Methods: Method 1: df1$v2 <- ifelse(df2$v1 %in% df1$v1, 1,0 ) Error in `$<-.data.frame`(`*tmp*`, v2 , value = c(1, 1, 1, 1, 1, 1, : replacement has 3244 rows, data has 63849 Method 2: df1$v2 <- ifelse(do.call(paste0, df2$v1) %in% do.call(paste0, df1$v1), 1,0 ) summary(df1$v2) Min. 1st Qu. Median Mean 3rd Qu. Max. 0 0 0 0 0 0 Do suggest me if you have any better solution. AI: Here is a way using dplyr: require(dplyr) X = data_frame(A=c(1,2,3,4)) Y = data_frame(B=c(5,4,6,7)) X <- X %>% mutate(A = A %in% Y$B, A = A*1.0) #converts boolean to numeric or using base R: X$A <- 1.0*(X$A %in% Y$B)
H: Can I use euclidean distance for Latent Dirichlet Allocation document similarity?
I have a Latent Dirichlet Allocation (LDA) model with $K$ topics trained on a corpus with $M$ documents. Due to my hyperparameter configuration, the output topic distribution for each document is heavily concentrated on only 3-6 topics and all the rest are close to zero ($K$~$\mathcal{O}(100)$). What I mean by this is that the 3-6 highest contributing topics for all documents are orders of magnitude (about 6 orders) greater than the rest of the topic contributions.
If I use the Jensen-Shannon distance to compute the similarity between documents, I need to store all values of the topic distribution as non-zero, even the very small values of the non-contributing topics, because Jensen-Shannon divides by each discrete value in the distribution. This requires a lot of storage and is inefficient. If, however, I store the topic distributions of each document as a sparse matrix (the 3-6 highest contributing topics are non-zero and the rest are zero) where each row is a unique document and each column is a topic, then this uses far less space. But I can no longer use the Jensen-Shannon metric, because we would be dividing by 0.
In this case: Can I use the euclidean distance between documents' topic distributions to compare similarity between documents? Using the euclidean distance would require far less storage and is extremely fast to compute.
I appreciate that Jensen-Shannon is one of the "correct" metrics to compare discrete probability distributions, as well as the Bhattacharyya distance and Hellinger distance. But ultimately, the output of LDA is a discrete topic distribution for each document - each document is a vector (or point) in a $K$-dimensional space. By this argument, is it valid to use the euclidean distance to calculate document similarities? Is there something blatantly wrong with this method?
I have tested the euclidean distance to compare documents, and it yielded good results, which works well for my industrial application. But I want to know the academics behind such a method. Thanks in advance!
AI: Euclidean distance - by which, in this application, I assume you mean the Euclidean distance in an $n$-dimensional space defined by the distribution of document contents among the $n$ topics considered - is a valid measure to use in comparing the topics represented within two documents. What you're doing by applying this method is quantifying a topic-frequency difference within this newly defined space, and so interpretation of these quantities will require analysis of the space. For example, what Euclidean distance indicates that documents are relatively similar? In distinction, the normalized result of something like the Hellinger distance provides an easily interpretable framework by which to evaluate the results: a score of 0 indicates that the two documents have identical distributions over the topics in question (perfect overlap), and a 1 indicates no overlap at all. For the efficiency concerns, it's not clear to me why you couldn't truncate the topics considered to the crucial topics and then calculate any of the metrics on the distributions over only those topics, rather than the entire universe of considered topics.
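If you do keep a probabilistic metric, note that the Hellinger distance handles zero entries without any smoothing, so it works directly on the sparse representation. A small sketch (p and q are two rows of the document-topic matrix, densified):

import numpy as np

def hellinger(p, q):
    # p and q are topic distributions that sum to 1; zero entries are fine
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.7, 0.3, 0.0, 0.0])
q = np.array([0.0, 0.4, 0.6, 0.0])
print(hellinger(p, q))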
H: Measure information gain / loss after a data Transformation operation Suppose that we have a dataset of 2 samples : [{1,2,0}, {2,0,0}, {3,1,1}, {4,0,1}, {5,1,1}] (the last element of each row is the class variable) If we want to reduce the variety of the column "1" we can put all values that are less or equal to "2" to a new value "1" and values that are greater than "2" to the value 2, so we would have a new dataset like : [{1,2,0}, {1,0,0}, {2,1,1}, {2,0,1}, {2,1,1}] In this example we can easly deduct that there is a information gain after this transfromation, but what if we have million of sample, is there any way to measure the information gain / loss after such a transformation. AI: It will help to define what you mean by information. The usual definition is 'Shannon information', which is equivalent to the amount of uncertainty an observation resolves. If you mean something else, nothing I am about to write will apply. Perhaps what you are looking for is the idea of mutual information. It is a way to measure the shared information between two variables - literally how much your uncertainty about one variable is reduced by knowing the value of another variable. Closely related conditional entropy, which measures the amount of uncertainty we still have about a variable given that we know the value of a second variable. Those references might be worth reading to get a sense of how information between two variables is defined in a technical sense. In your case, your operation actually does not increase the information in your dataset. Your logical operation does not bring in any outside observation to the system, but is merely a rule-baesd transformation. It is therefore subject to the data processing inequality. Edit: Re-reading your question, I think you are asking about the difference between I(X;Y) and I(X';Y), where I is the mutual information function, X is your input data, and Y is your target label. So basically the answer to your question is Info gain = I(X';Y) - I(X;Y). You can look up mutual information approximation methods - there are packages for most languages.
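If the class labels and the feature are both discrete, you can estimate those two quantities directly, e.g. with scikit-learn (a sketch using the toy data from the question; X_col is the original column "1" and X_binned is the transformed version):

import numpy as np
from sklearn.metrics import mutual_info_score

y        = np.array([0, 0, 1, 1, 1])           # class variable of the 5 samples
X_col    = np.array([1, 2, 3, 4, 5])           # original column "1"
X_binned = np.array([1, 1, 2, 2, 2])           # values <= 2 -> 1, values > 2 -> 2

print(mutual_info_score(X_col, y))             # I(X; Y)
print(mutual_info_score(X_binned, y))          # I(X'; Y)

On this toy data the two values are equal, which is exactly what the data processing inequality predicts: the rule-based transformation did not add information about the class.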
H: Weighted sum of word vectors for document similarity I have trained a word2vec model on a corpus of documents. I then compute the term frequency (the same Tf in TfIDF) of each word in each document, multiply each words Tf by its corresponding word vector (this is the weighted part), and sum each of these weighted vectors together element wise to obtain a single vector for a document. Is this method valid? An example, to make it clearer. Take the document: "The car drives on the road" The Tf for each word would be {'the':0.3333, 'car':0.1666, 'drives':0.1666, 'on':0.1666, 'road':0.16666} (obtained by taking the word count of a word and dividing by the total number of words in the document). If we have a trained word2vec model, we can do the following $$0.333*\begin{bmatrix} the_0 \\ the_1 \\ \vdots \\ the_n \end{bmatrix} + 0.1666*\begin{bmatrix} car_0 \\ car_1 \\ \vdots \\ car_n \end{bmatrix} + ...$$ where each of the column vectors is the word vector for that word. The final result is an $n\times 1$ dimensional vector representing the document. $$ \begin{bmatrix} 0.333*the_0 + 0.1666*car_0 + \dots \\ 0.333*the_1 + 0.1666*car_1 + \dots \\ \vdots \\ 0.333*the_n + 0.1666*car_n + \dots \end{bmatrix} $$ I appreciate there are other methods such as doc2vec that aim to do very much the same, but in a much more sophisticated way. But is my method valid / is there anything blatantly wrong here? I have tested this method and even used some document similarity metrics (yes, after normalizing the vectors of course), and yielded some good results for my industrial application. But I want to know more about the academics of this method. The nice thing about this way, is that by using the word2vec vectors, similarity queries between documents yield very good results due to semantic similarities (euclidean closeness) between word vectors, even if different words are used across documents; this is something TfIDF cannot do as each word is treated differently. Thanks in advance! AI: Yes, your method is valid and it has been studied before it is known as Mean of Word Embeddings (MOWE) or Sum of Word Embeddings (SOWE), although your method is more a weighted mean of vectors. I think that a good starting point for knowing more about the academics of the method is this paper: How Well Sentence Embeddings Capture Meaning. It discusses your method and give some pointers to other papers that also discuss the validity of the method.
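For reference, a compact NumPy sketch of the weighting described above (it assumes word_vectors is a dict-like mapping from word to vector, e.g. a trained gensim model's wv attribute, and that every token is in the vocabulary):

import numpy as np
from collections import Counter

def doc_vector(tokens, word_vectors, dim):
    counts = Counter(tokens)
    total = len(tokens)
    vec = np.zeros(dim)
    for word, count in counts.items():
        tf = count / total                  # term-frequency weight
        vec += tf * word_vectors[word]      # weighted sum of word vectors
    return vec / np.linalg.norm(vec)        # normalize before similarity queries

doc = "the car drives on the road".split()
v = doc_vector(doc, word_vectors, dim=100)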
H: Extracting NER from a Spanish language text file I am trying to extract various Named Entities from a Spanish language text file. I tried using nltk but with no success. I am using Python 2 with nltk 3.x. AI: I suggest you take a look at the Python library spaCy; it ships a Spanish-language model that includes NER.
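A minimal sketch of how that could look. The model name es_core_news_sm is the small Spanish model in current spaCy releases (older versions used different names), it has to be downloaded first with python -m spacy download es_core_news_sm, recent spaCy requires Python 3, and documento.txt stands in for your own file:

    import spacy

    # Load the small Spanish pipeline (downloaded beforehand)
    nlp = spacy.load("es_core_news_sm")

    with open("documento.txt", encoding="utf-8") as f:
        texto = f.read()

    doc = nlp(texto)

    # Each entity comes with its text span and a label such as PER, LOC, ORG, MISC
    for ent in doc.ents:
        print(ent.text, ent.label_)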
H: Vanishing Gradient in a shallow network I created an ANN in Python 3. My backpropagation algorithm seems to work up to a point where the gradient becomes very small. I am familiar with the vanishing gradient problem, but I found that it only applies to really deep network; my simple test network is no such network. It consists of an input layer (1 input node and bias), no hidden layers, and an output layer (1 output node). How do I stop the gradient from vanishing? Here is the code: import numpy as np from random import random class Neural_Network(object): def __init__(self): # Create a simple deterministic network for testing # Define Hyperparameters self.inputLayerSize = 1 self.outputLayerSize = 1 self.hiddenLayerSize = 0 self.numHiddenLayer = 0 self.numExamples = 20 self.learningRate = 0.07 # LEARNING RATE self.weightDecay = 0 # in -> out self.weights = [] # stores matrices of each layer of weights self.z = [] # stores matrices of each layer of weighted sums self.a = [] # stores matrices of each layer of activity self.biases = [] # stores all biases self.biasNodes = [] # Biases are matrices that are added to activity matrix # Dimensions -> numExamples_*hiddenLayerSize or numExamples_*outputLayerSize # Biases for output layer b = [0.5 for x in range(self.outputLayerSize)] B = [b for x in range(self.numExamples)]; self.biases.append(np.mat(B)) # Bias nodes b= [1 for x in range(self.numExamples)] for i in range(self.numHiddenLayer+1): self.biasNodes.append(np.mat(b).reshape([self.numExamples,1])) # Weights (Parameters) # Weight matrix between input and output layer W = np.matrix("0.5"); self.weights.append(W) def setBatchSize(self, numExamples): # Changes the number of rows (examples) for biases if (self.numExamples > numExamples): self.biases = [b[:numExamples] for b in self.biases] def hypTan(self, z): # Apply hyperbolic tangent function return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z)) def hypTanPrime(self, z): # Apply derivative hyperbolic tangent function return 4/np.multiply((np.exp(z) + np.exp(-z)), (np.exp(z) + np.exp(-z))) def forward(self, X): # Propagate outputs through network self.z = [] self.a = [] self.z.append(np.dot(X, self.weights[0]) + self.biases[0]) self.a.append(self.hypTan(self.z[0])) yHat = self.a[-1] return yHat def backProp(self, X, y): # Compute derivative wrt W # out -> in dJdWb = [] # stores matrices of each dJdWb value dJdW = [] # stores matrices of each dJdW (equal in size to self.weights[]) delta = [] # stores matrices of each backpropagating error result = () # stores dJdW and dJdWb self.yHat = self.forward(X) # Quantifying Error print(np.linalg.norm(y-self.yHat)/np.linalg.norm(y+self.yHat)) delta.insert(0,np.multiply(-(y-self.yHat), self.hypTanPrime(self.z[-1]))) # delta = (y-yHat)(sigmoidPrime(final layer unactivated)) dJdW.insert(0, np.dot(X.T, delta[0]) + (self.weightDecay*self.weights[-1])) dJdWb.insert(0, np.dot(self.biasNodes[-1].T, delta[0]) + (self.weightDecay*self.biases[-1])) # you need to backpropagate to bias nodes result = (dJdW, dJdWb) return result def train(self, X, y): for t in range(10000): dJ = self.backProp(X, y) dJdW = dJ[0] dJdWb = dJ[1] for i in range(len(dJdW)): print("dJdW:", dJdW[i], sep = " ", end = "\n") print("dJdWb:", dJdWb[i], sep = " ", end = "\n\n") #print("Weights:", self.weights[i]); self.weights[i] -= self.learningRate*dJdW[i] self.biases[i] -= self.learningRate*dJdWb[i] # Instantiating Neural Network # Instantiating Neural Network NN = Neural_Network() # create a deterministic NN for testing x = 
np.matrix("0.025; 0.05; 0.075; 0.1; 0.125; 0.15; 0.175; 0.2; 0.225; 0.25; 0.275; 0.3; 0.325; 0.35; 0.375; 0.4; 0.425; 0.45; 0.475; 0.5") y = np.matrix("0.05; 0.1; 0.15; 0.2; 0.25; 0.3; 0.35; 0.4; 0.45; 0.5; 0.55; 0.6; 0.65; 0.7; 0.75; 0.8; 0.85; 0.9; 0.95; 1.0") # Training print("INPUT: ", end = '\n') print(x, end = '\n\n') print("BEFORE TRAINING", NN.forward(x), sep = '\n', end = '\n\n') print("ERROR: ") NN.train(x,y) print("\nAFTER TRAINING", NN.forward(x), sep = '\n', end = '\n\n') NN.setBatchSize(1) # changing settings to receive one input at a time while True: inputs = input() x = np.mat([float(i) for i in inputs.split(" ")]) print(NN.forward(x)) When you run the program it will show the dJdW value (gradient values w.r.t weights) and dJWb values (gradient values w.r.t bias weights). Then it will test the inputs on the newly trained network and print the outputs. After that, you can give the network your own inputs (between 0 and 0.5 since i trained the network to multiply inputs by 2) and it will return outputs in console. Please note that this is a highly simplified version of my real network. I want to fix the problem here before adressing it in the full version. AI: This is not a "vanishing gradient" problem, it is just your network converging as designed. It is normal for gradients to become low as you approach convergence. In a really simple problem, like your linear regression, it is relatively easy to get a gradient of zero (within combined rounding errors). That is because, near a stationary point, gradients do approach zero. This can also be a weakness of gradient descent in general - learning will slow or halt near any stationary point, hence concerns about finding local minima as opposed to global minima. The way to check your gradients are correct is to test them against small weight deltas. Pick a set of weight parameters for the network. Calculate the gradients using your code. Then to test, take each weight in turn, change it by +/- $\epsilon$ and use both variants to generate cost values, use them to estimate the gradient for that weight $\frac{J_{+\epsilon} - J_{-\epsilon}}{2\epsilon}$. Use this alternative (and much slower) gradient calculation to build up a second opinion of what dJdW should be. Then compare the two values. This process is often called gradient checking.
H: GANs (generative adversarial networks) possible for text as well? Are GANs (generative adversarial networks) good just for images or can they be used for text as well? Like training a network to generate meaningful text from a summary. UPD - quotes from the GAN inventor Ian Goodfellow. GANs have not been applied to NLP because GANs are only defined for real-valued data. (2016) source It is not a fundamentally flawed idea. It should be possible to do at least one of the following... (2017) source AI: Yes, GANs can be used for text. However, there is a problem in the combination of how GANs work and how text is normally generated by neural networks: GANs work by propagating gradients through the composition of Generator and Discriminator. Text is normally generated by having a final softmax layer over the token space, that is, the output of the network is normally the probabilities of generating each token (i.e. a discrete stochastic unit). These 2 things do not work well together on their own, because you cannot propagate gradients through discrete stochastic units. There are 2 main approaches to deal with this: the REINFORCE algorithm and the Gumbel-Softmax reparameterization (also known as the Concrete distribution). Take into account that REINFORCE is known to have high variance so you need large amounts of data to get good gradient estimations. As an example of REINFORCE for textual GANs you can check the SeqGAN article. An example of Gumbel-Softmax you can check this article. Another completely different option is not having a discrete stochastic unit as output of the generator (e.g. generating tokens deterministically in embedded space), hence eliminating the original problem of backpropagating through them.
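To make the Gumbel-Softmax idea concrete, here is a minimal hand-rolled sketch in PyTorch (recent PyTorch versions also ship a built-in torch.nn.functional.gumbel_softmax); logits stands in for the generator's unnormalised scores over the token vocabulary:

    import torch

    def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
        """Differentiable (relaxed) sample from a categorical distribution over tokens."""
        u = torch.rand_like(logits)                       # Uniform(0, 1) noise
        gumbel = -torch.log(-torch.log(u + eps) + eps)    # Gumbel(0, 1) noise
        return torch.softmax((logits + gumbel) / tau, dim=-1)

    # Example: a "vocabulary" of 5 tokens; lower tau -> closer to a one-hot sample
    logits = torch.randn(2, 5, requires_grad=True)        # batch of 2 positions
    soft_tokens = gumbel_softmax_sample(logits, tau=0.5)
    soft_tokens.sum().backward()                          # gradients flow back to the logits
    print(soft_tokens)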
H: Kmodes for Mixed Data K-modes is for categorical data, but I have sometimes read that it is applicable to mixed data, i.e. numeric and categorical. So which is right? AI: As far as I remember, k-modes is only for categorical data; even in the documentation I couldn't find anything related to mixed data types (if you have a reference, do share). For mixed data types I have used k-prototypes instead, available in the R package clustMixType; it is a combination of k-means and k-modes. Do have a look and let me know if you have any questions.
H: When to use Linear Regression and when to use Logistic Regression - use cases Linear regression is used to find the relation between a dependent variable and independent variables. Logistic regression is used when the dependent variable is categorical. But this doesn't exactly suffice: in what use cases can you use either of these, or can we use a combination of them? AI: Logistic regression is used when the data is (roughly) linearly separable and the outcome is binary or dichotomous, though it can be extended to a dependent variable with more than 2 categories (multinomial logistic regression). Linear regression is used to find the relation between the variables, and based on that relation you can predict the outcome; here the dependent variable should be numeric. What kind of use cases are you expecting? Give an example so that we can extend the answer further. A minimal sketch of the two estimators in scikit-learn follows below.
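The sketch below uses made-up synthetic data purely to show which estimator goes with which kind of target:

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.RandomState(0)
    X = rng.rand(100, 2)                       # two made-up features

    # Numeric target -> linear regression
    y_numeric = 3 * X[:, 0] + 2 * X[:, 1] + rng.randn(100) * 0.1
    reg = LinearRegression().fit(X, y_numeric)
    print(reg.predict(X[:3]))                  # continuous predictions

    # Binary target -> logistic regression
    y_binary = (X[:, 0] + X[:, 1] > 1).astype(int)
    clf = LogisticRegression().fit(X, y_binary)
    print(clf.predict(X[:3]), clf.predict_proba(X[:3]))  # classes and probabilities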
H: Recombining rare values I have a column called item_colour which describes the colour of products in my dataset. There are 85 colours and some of the colours are unique or they represent a small part of the data ( I have 100,000 observations). For example, there is only one "almond" colour or only two "sky blue". I want to recombine rare values and put them together into a group called "other colour". How can I find the threshold to recombine them? For example, combine the colours together which have values less 50 or so. P.S.I am working with R AI: This is going to be a situation where there will be no fixed rule. One important factor is how meaningful colour differences are to the other parts of your problem. If colour has low correlation/impact in a supervised learning/prediction scenario for example, and the dataset is noisy, then you will want to merge more colours (at a higher fraction of total number) to reduce sampling bias effects that might otherwise assign importance to the colour and increase error rates in test and production. The safest approach is to treat the colour combination threshold as a hyper-parameter to the model building process, and test to see what differences it makes. If there is little or no impact to model effectiveness, then a higher threshold could be useful purely to reduce number of parameters in the model - decreasing resources used to train and run it. If that seems time-consuming, then picking something by feel (e.g. your idea of picking count less than 50 in the dataset) is not usually too bad, at least to start with. You can go back and re-evaluate your choice if you have problems with the model. One other possibility for feature engineering is to use the rareness of the specific colour as an additional feature. So in addition to categories for the popular colours and an "other colour" category, add a real value "colour frequency" = the observed ratio of that colour in the training set. Whether or not this is useful will depend on the problem, but it may help address some of the lost information when merging categories with a wider range of rareness values, assuming that unusual colours indicate anything at all (they may not).
H: What optimization algorithms are used in scikit-learn? Gradient descent is one of the well-known optimisation algorithms. However, are the regression algorithms in scikit-learn implemented with gradient descent or some other techniques? AI: There are multiple approaches to optimization in scikit-learn. I will focus on generalized linear models, where a vector of coefficients needs to be estimated: LinearRegression and Ridge use closed-form solution $\beta=(X^TX+I\lambda)^{-1}X^TY$, but Ridge can also use stochastic gradient descent or method of conjugate gradients Lasso and ElasticNet use coordinate descent OrthogonalMatchingPursuit uses a greedy algorithm with the same name, that has $L_0$ penalty on coefficients ARDRegression and BayesianRidge use something like EM algorithm SGDRegressor and PassiveAggressiveRegressor use guess what! Stochastic gradient descent. HuberRegressor uses BFGS (a second-order optimization method)
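To see this in code, the same ridge problem can be solved with a direct (closed-form style) solver or an iterative one, Lasso uses coordinate descent internally, and SGDRegressor exposes plain stochastic gradient descent. The solver names below are accepted by current scikit-learn versions:

    import numpy as np
    from sklearn.linear_model import Ridge, Lasso, SGDRegressor

    rng = np.random.RandomState(0)
    X = rng.randn(200, 5)
    y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.randn(200) * 0.1

    ridge_direct = Ridge(alpha=1.0, solver="cholesky").fit(X, y)    # direct solve
    ridge_iter   = Ridge(alpha=1.0, solver="sag").fit(X, y)         # stochastic average gradient
    lasso        = Lasso(alpha=0.01).fit(X, y)                      # coordinate descent
    sgd          = SGDRegressor(max_iter=1000, tol=1e-4).fit(X, y)  # stochastic gradient descent

    for name, model in [("cholesky", ridge_direct), ("sag", ridge_iter),
                        ("lasso", lasso), ("sgd", sgd)]:
        print(name, np.round(model.coef_, 2))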
H: How is the number of grid cells in YOLO determined? I'm now learning YOLO but I don't understand how the number of grid cells is determined. Some article like this one uses 13 x 13 while others use 19 x 19 (e.g. Andrew Ng's course in Coursera). It seems that the height and width should always be the same, but how is it determined? Is there any general guideline regarding how many grid cells should be picked up over the others? AI: As far as I can tell, there is no specific rule. It will depend in part on how crowded your scene will become with items that you want to detect and locate separately. Creating a high granularity grid increases computational cost for training, and there is no reason to do so if it would only cover additional cases that are much rarer than the detection accuracy that the base algorithm achieves. The choice can be driven by the ground truth data. The ground truth for YOLO needs to be expressed in the form of grid locations with classes and bounding rectangle sizes. If you don't find any training examples where you want to label two items with their centre inside the same grid square, then this is a good indication that your grid size is fine-grained enough. Even if there are one or two examples with a clash like this, you may be able justify labelling just one item in the ground truth and be OK with a resulting model that is not able to cope with close overlap between two separate objects. And even with smaller grid squares, YOLO may not be able to learn to separate the objects, if such an overlap only occurs rarely. I would expect a simple rule of diminishing returns applies. As datasets grow larger, and object detection can be trained on more powerful computers, we may see state of the art models still using YOLO but with more grid points.
H: Clustering mixed data I have 90000 records for customer segmentation consisting of numeric nominal and ordinal variables: *houselhold_ID* ID for household *quantity*: integer, number of products bought *price*: values 1 to 3 with 1 for 0-5, 2 for 5-8 and 3 for 8-10 *product name*: nominal values 1 to 9 representing 9 different products like different kind of cheese... *duration*: days from the current buy to the next buy. *age*: catgeorical age groups: 1: 0-24 years, 2: 25-39 years, 3: 40-59 etc. *salary*: 1: to 1499 2 : 1500 bis 2499 3 : 2500 bis 3499 4 : 3500 or more *number_of_persons_household*: integer *price_con*: price conscious: 1 to 4 for not price conscious to very price conscious *education*: 1 to 3 for lower education to high school/university. I already did outliers detection and removed them having small frequencies. Most household have 40 buys. I want to cluster analysis in R for to find patterns in this data. I currently run kmodes and and want to test kproto too. But my question: is there any possibility to things do like Hopkin statistics for clusterness like with numeric values? Is there also a kind of visualization if I have more than 2 classes? So that one can visualize the data points in to groups. Maybe someone has experience with such kind of data. I am open for hints and tips to analyse this data. EDIT: Here is the elbow plot for kmodes with 15 iterations and 20000 records from the full 90000: So 2 or 3 clusters are good? EDIT2: I did now a Multi Factor Analysis in R. Additionally I did some engineering for the duration variable. I categorized the variable in three groups splitted according to the mean and quantiles in low, mid, high duration. library("FactoMineR") library("factoextra") res.mfa <- MFA(dat[1:20000,], group = c(1, 1, 1, 1,1,1,1,1,1), type = c("c", "n", "n", "n", "n","c","n","n","n"), name.group = c("quantity","price","product", "age","salary","persn","price_con","edu", "duration"), graph = FALSE) I used a group for each variable. The result seem poor. Only 30% of variance can be explained with 6 dimensions?! So bad? Any Ideas? AI: Try HAC with Gower's similarity. It is a very heuristic approach - there is nothing going to save you from weighting variables - but worth a try.
H: Which process step in KDD or CRISP-DM includes labeling of the data? KDD and CRISP-DM are both processes to structure your data mining procedure. Is data labeling not also an important part of data mining? In unsupervised learning, for example, the labeling is the target of the data mining process. So if I want to classify a data set that was labelled by me before, do I just run the process twice? In my opinion the labeling is sometimes quite trivial, so doing the process twice would be unnecessary. Is it possible to include the labeling in the data exploration or preprocessing phase? E.g. in CRISP-DM preprocessing there is something like generating a new parameter. Can this parameter also be a new target/label? I know this question is quite process-oriented and in data mining you are quite free, but just assume that in this case you have to follow the process. AI: Data labeling can indeed be a fairly simple step, as you mention. As far as I know, working out what the label should be falls under Data Understanding (exploratory analysis): when you don't know anything about the data, you do exploratory analysis to understand it and derive some insights. If you don't know the target variable, the problem falls under unsupervised learning, as you say in the question. When you then derive a new feature or dimension to capture that insight, it falls under Data Preparation (any newly derived variable does), regardless of whether that derived variable ends up being your target/label or just another feature you consider important for the analysis.
H: Cost Function for evaluating a Regression Model There are several "classical" ways to quantify the quality of (any!) regression models such as the RMSE, MSE, explained variance, r2, etc... These metrics however do not take "costs" into account, for example, for me it is worse to under-predict a value (Real: 0.5, Predicted: 0.4) than to over-predict it (Real: 0.5, Predicted: 0.6). How can I model such costs into an evaluation function? I just need a first idea to start with and will welcome any suggestions. AI: A loss function and cost function are the same thing. As you intuit, classical regression treats loss/cost as symmetric, which is not always what you want. In classification tasks, you can make an asymmetric loss matrix. You can do a similar thing with regression if you solve it with gradient descent, but the ordinary least squares has symmetric loss baked in. So I would consider either (1) using a numeric optimization library like sklearn or tensorflow to explicitly define the regression parameters you want to estimate, write your own custom loss function, and then do parameter estimation via gradient descent, or (2) finding a software package that allows for asymmetric loss, for example see this discussion.
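A small sketch of option (1): fitting a linear model by gradient descent with an asymmetric squared loss. Here under-prediction is penalised 4x more than over-prediction; the weights, data and learning rate are arbitrary illustration values:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.rand(500, 2)
    y = X @ np.array([0.7, 0.3]) + rng.randn(500) * 0.05

    w_under, w_over = 4.0, 1.0          # cost weight for under- vs over-prediction
    w, b, lr = np.zeros(2), 0.0, 0.1

    for _ in range(2000):
        pred = X @ w + b
        resid = pred - y                                  # >0: over-prediction, <0: under-prediction
        weight = np.where(resid < 0, w_under, w_over)     # asymmetric per-sample weight
        grad = weight * resid                             # d(0.5*weight*resid^2)/d(pred)
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()

    print("weights:", w, "bias:", b)
    print("mean residual:", (X @ w + b - y).mean())       # positive: model learns to over-predict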
H: Correlation and feature selection I'm trying to learn from examples on the internet. I chose the Iris dataset. When I checked for correlation, I saw that there was a correlation between 'petal length' and 'petal width'. But when I ran RFE to select only 3 features, it took both of these features. So, my question is: is that normal? Doesn't correlation matter much to your model? PS: the correlation value is 0.96 AI: Yes, correlation is a very important factor to consider when selecting features that are useful for explaining the target variable. RFE works like this: it keeps the features that are significant in explaining the target variable, treating each feature on its own merits. If you check these 2 features separately, each of them will obviously show up as significant, so the outcome of RFE is not surprising. However, since you have already done the correlation analysis, you should eliminate one of the two strongly correlated variables before giving the data to RFE. Strongly correlated features are recommended to be removed beforehand, because if two features explain the same thing the model will be biased towards them and you might end up drawing the wrong insights. A sketch of this two-step approach is given below.
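A minimal sketch on Iris, dropping one member of any feature pair whose absolute correlation exceeds 0.9 (an arbitrary example threshold) before running RFE:

    import pandas as pd
    from sklearn.datasets import load_iris
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression

    iris = load_iris()
    X = pd.DataFrame(iris.data, columns=iris.feature_names)
    y = iris.target

    # Greedily drop one member of each highly correlated pair (|r| > 0.9)
    corr = X.corr().abs()
    to_drop = set()
    for i, col_i in enumerate(corr.columns):
        for col_j in corr.columns[i + 1:]:
            if corr.loc[col_i, col_j] > 0.9 and col_i not in to_drop:
                to_drop.add(col_j)
    X_reduced = X.drop(columns=to_drop)
    print("dropped:", to_drop)

    # Run RFE on the remaining features
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2).fit(X_reduced, y)
    print(dict(zip(X_reduced.columns, rfe.support_)))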
H: Clustering mixed data: Find optimal number of clusters What do you say about this plot for finding the number of clusters for k-means or k-prototypes on mixed data? Where is the elbow to identify? I would say 5? I have 11 features. AI: The bumps at 8 and 11 are likely just due to random initialization; if you rerun with a different random seed, they will be at a different k. The elbow argument would probably suggest 3, but it is far from clear. I don't think there is a clear cut-off; the values only drop as they would on uniform data. So most likely, (a) your distance function is not good enough, (b) the algorithm does not work on this data, and/or (c) this evaluation does not work on this data.
H: Error in f1(x) : argument "b" is missing, with no default f1 <- function(a,b,c,d,e,f) { -111.605*a-208.39+(14.882-b)^2+35.29813*c-.001251205/d-1.050695*e+11.63420*f } x = matrix(c(1:66), byrow = T, ncol = 6) f1(x) An error of the following occurs: Error in f1(x) : argument "b" is missing, with no default AI: This should give you the answer: f1 <- function(a,b,c,d,e,f) { -111.605*a-208.39+(14.882-b)^2+35.29813*c-.001251205/d-1.050695*e+11.63420*f } x = matrix(c(1:66), byrow = T, ncol = 6) new_out <- numeric(nrow(x)) # initialise the output vector before filling it in the loop for(i in 1:nrow(x)){ new_out[i] <- f1(x[i,1], x[i,2], x[i,3], x[i,4], x[i,5], x[i,6]) } new_out <- as.data.frame(new_out) Or, instead of that for loop, you can use the fact that the arithmetic is vectorised: new_out2 <- as.data.frame(f1(x[,1],x[,2],x[,3],x[,4],x[,5],x[,6])) colnames(new_out2) <- "new_out" The reason for the error is that f1(x) passes the whole matrix as the single argument a, so b through f are missing. You have to pass each column as its own argument (or rewrite f1 to take a single matrix). Do let me know if you have any issues.
H: accumarray function not working in matlab I have an 2D array like the one shown below. 35046, 0.17 1963, 0.34 1135, 0.51 405, 0.68 99, 0.85 1, 0.85 15, 1.02 2, 1.02 I tried using accumarray function in matlab on this data to get the results like this below. 35046, 0.17 1963, 0.34 1135, 0.51 405, 0.68 100, 0.85 17, 1.02 The following is the code snippet for the same. [num_dist, txt_dist]=xlsread(fname); num_dist_final(:,1) = accumarray(num_dist(:,2),num_dist(:,1)); num_dist_final(:,2) = unique(num_dist(:,2)); But accumarray throws an error 'Error using accumarray: First input SUBS must contain positive integer subscripts.'. Is there way where I can use accumarray for float subscripts like the one in this problem? Any other alternate methods are also welcome. AI: You can use the third output of unique to convert your floats to a format suitable for accumarray: [C,~,ib] = unique(num_dist(:,2)) num_dist_final(:,1) = accumarray(ib,num_dist(:,1)) num_dist_final(:,2) = C
H: Irregular Precision-Recall Curve I'd expect that for a precision-recall curve, precision decreases while recall increases monotonically. I have a plot that is not smooth and looks funny. I used scikit-learn to compute the values for plotting the curve. Is the curve below abnormal? If yes, why, and how can I correct it, considering scikit-learn automatically sorts the true and predicted labels? If the plot is OK, how best do I explain this behaviour? AI: This is definitely possible. When you are reducing the threshold, you will never decrease the recall (you can only flag more of the positive examples as positive). Precision is looking at all the examples that you flag positively, and of those the fraction that are truly positive. This means when you are reducing the threshold, you might not add any true positives but only false positives, thereby lowering your precision. Let's take a look at a fabricated example, where P is positive and N is negative. The samples are ranked by score/probability. Everything before the threshold is flagged as positive: PPNPNNPNNN If we put the threshold between items 2 and 3, we get a precision of $1$ and a recall of $0.5$: PP - NPNNPNNN When we put it between items 3 and 4, we keep the same recall but our precision drops to $0.6667$: PPN - PNNPNNN
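You can reproduce this with scikit-learn's precision_recall_curve on the fabricated example; the scores below are made up, only their ordering matters:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    # Same ranking as PPNPNNPNNN: 1 = positive, 0 = negative, highest score first
    y_true  = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
    y_score = np.linspace(1.0, 0.1, 10)   # decreasing scores matching the ranking

    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    for p, r in zip(precision, recall):
        print(f"precision={p:.3f}  recall={r:.3f}")
    # Precision is not monotonic: at recall 0.5 it takes both 1.000 and 0.667,
    # which is exactly the sawtooth pattern in the plotted curve.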
H: What is the difference between observation and variable? I have a matrix with size m×n that is built from n number of individuals for person identification. So, n is the number of person and m is the number of feature's value for the person. It makes me confused about observation and variables. What will I call n and m? Which one represents observation and which one represents variable? My confusion will be cleared to you if you visit the following link: How to do SVD and PCA with big data? AI: In your case, 'n' is the number of observations, and 'm' is the number of variables. Think of a table in a database - each row is a 'record', and each record has various properties. A record corresponds to an observation. Each property is a variable or property of that observation. In some contexts people make a distinction between data (i.e. observations, things observed) and variables (parameters to learn - things that vary in a mathematical sense). But in the context of the link you sent, a variable is a feature.
H: What are suitable predictive analytics models for data from multiple sensors? I am a newbie in the field of AI/ML. I am trying to implement predictive analytics model on the data generated every minute from a device with sensors. I have two questions: What are various ML algorithms I can use to predict the number of people in a room given temperature, humidity, luminosity, and motion { 0 | 1 }? What other things can we predict using the above data from the sensor that is deployed in a closed room? Context: The device sends temperature, humidity, luminosity, and motion(yes/no) in real-time. I deployed this device in a closed room and started collecting data. Now I want to use this data to predict the number of people in the room using the data collected. I believe a multiple (linear/poly) regression model will help me in achieving this but, wanted to know if there are any other algorithms or any other use cases I can look into. Sensor Specifications: click for more details LoRa Alliance Certified Temperature (Accuracy: ± 0.5°C, Resolution: 0.1°C) Humidity (Accuracy: ±2%rh, Resolution: 0.1%rh) Light Motion (PIR) NFC for easy configuration Size : 86x86x26mm US902-928, EU863-870, AS923, AU915-928, KR920-923 2 x 3.6V AA lithium battery AI: First, to answer your questions directly 1) What M/L algorithms to try: Since number of people can be treated as a continuous variable, if you are using python, you can try everything from multiple linear regression from sklearn.linear_model import LinearRegression polynominal regression, first create polynomial features using sklearn.preprocessing library, then fit the model using linear regression again from sklearn.preprocessing import PolynomialFeatures support vector machine from sklearn.svm import SVR decision tree regressor from sklearn.tree import DecisionTreeRegressor random forest regressor from sklearn.ensemble import RandomForestRegressor AdaBoost regressor from sklearn.ensemble import AdaBoostRegressor XGBoost, this one needs to be installed separately, following the instructions on this link: http://xgboost.readthedocs.io/en/latest/build.html# from xgboost import XGBClassifier Deep neural network, I wouldn't recommend trying this in the beginning since it can be constructed into very flexible and complex models with way too many choices on the network layout. But if you feel adventurous, this one can be quite fun. Simplest way to get started on this is probably using the keras wrapper for TensorFlow. Here're just some of the most commonly used models and layers import keras from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout Recurrent neural network, since you do have time series sensor data, you can potentially use RNN to predict future occupancy based on the past sensor data. You would combine the above keras models, then add LSTM layers to get the 'memory' effect from keras.layers import LSTM 2) This one is very difficult to answer as it depends on the configuration of the sensors, setup of the room, how the sensor is placed, is the motion detector a PIR, if PIR, is it dual or quad sensor, what is the shape of the PIR lens, is there an ultrasound sensor, how many channels on the luminosity sensor module, does it read RGB ... etc. This one would require experience, tons of experimentation and ingenuity. You can maybe consult with someone with physics, electronics or electrical engineering background. 
Some wild ideas: if you know how many people are in the room (through your prediction), and you know the room setup, size and thermal characteristics, using temperature and humidity sensor, you can probably estimate the average mass of the individuals in the room. This one is interesting in that it combines a physics model and a machine learning model the luminosity sensor can probably tell you if the room has natural light or artificial light (looking at rate of illuminance change, does it follow diurnal patterns) motion sensor can probably tell you (other than yes or no motion), whether the motion is large, small, fast or slow, depending on the sensor setup Finally, some thoughts on training this model. Before starting to play with all the ML algorithms, it's probably good to spend a good week or two figuring out the 'preprocessing' of the sensor time series. The model accuracy is highly dependent on the preprocessing steps and feature selection / engineering, probably even more so than the choice of the algorithms. If you feed most algorithms with enough data and do a good job on cross-validation, you can get similar accuracy results (at least for simple regression-type applications). Some preprocessing issues to consider What is your window frame size? Will you simply take moving average, or combine some other statistics within each data window? How do you remove or reduce the noise from the sensors? e.g., how do you know the signal you are looking at results from actual physical phenomenon v.s. electronic noise? Does it help to do fourier transform and filtering? Is it better to use frequency-domain features or time-domain features? Or wavelets? What about dimensionality reduction techniques like PCA, LDA? Lots of things to consider, which is also why machine learning is both challenging and really fun.
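To tie the preprocessing and modelling parts together, here is a rough sketch of one possible pipeline: window the minute-level sensor readings with pandas rolling statistics and feed them to a random forest regressor. The column names, window length and the randomly generated data are made-up placeholders (so the reported score will be near zero); with real data the occupancy labels would come from your own ground truth:

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Fake minute-level sensor data standing in for the real device feed
    idx = pd.date_range("2018-01-01", periods=2000, freq="min")
    df = pd.DataFrame({
        "temperature": 22 + np.random.randn(2000).cumsum() * 0.01,
        "humidity": 45 + np.random.randn(2000),
        "lux": np.random.rand(2000) * 300,
        "motion": np.random.randint(0, 2, 2000),
        "occupancy": np.random.randint(0, 6, 2000),   # ground-truth head count
    }, index=idx)

    # Simple 15-minute rolling-window features
    feats = pd.DataFrame(index=df.index)
    for col in ["temperature", "humidity", "lux"]:
        feats[f"{col}_mean"] = df[col].rolling("15min").mean()
        feats[f"{col}_std"] = df[col].rolling("15min").std()
    feats["motion_rate"] = df["motion"].rolling("15min").mean()
    feats = feats.dropna()

    X_train, X_test, y_train, y_test = train_test_split(
        feats, df.loc[feats.index, "occupancy"], shuffle=False)
    model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))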
H: Difference usage of document level, sentence level and aspect level in sentiment analysis I'm studying sentiment analysis. What is the purpose of using document level, sentence level and aspect level? I read these definitions in the Web Data Mining book by Bing Liu. In this book it was stated that sentence-level sentiment analysis is to classify a sentence into a negative, positive or neutral class; document-level sentiment analysis is to classify a document; and aspect-level sentiment analysis is to classify each aspect of an entity mentioned in a review. Thanks. AI: I am building on Toros91's answer. He has said the use cases of these 3 levels are: Sentence level: used to classify sentences, for instance whether a tweet is positive, negative or neutral. Document level: used to classify a whole document and say whether the document's sentiment is positive, negative or neutral. Aspect level: used to select an aspect and give the outcome for that respective aspect; for instance, take the iPhone X as an aspect, mine tweets (this is an example, you can use documents and so on) and carry out sentiment analysis on it. Now, to answer your question further about how it can be beneficial, take an example review. I am just making up a review as an example, I do not know the specifics, so please do not quote me here. There is some business logic implied as well. "I feel the latest laptop from Mac is really good overall. It has amazing resolution. The computer is really very sleek and can slide into bags easily. However, I feel the weight is a let down. The price is a bit expensive given the configurations. I expect SSD storage. However, the processor seems really good." At a document level, I can say the entire paragraph looks more positive than negative, so the review, if it has to be labeled as one thing, is positive. At a sentence level, I can break the document into 7 sentences and classify each of them as positive, negative or neutral; I get a bit more specific here. Finally, at an aspect level, I get information about price, storage, processor and so on. Given N customer reviews, you can know how well different aspects of a particular product are received. Say, 10 customers feel the price is fine, but 4 feel it is expensive; 13 of your 14 customers really love the resolution while 1 feels it is just okay. A lot of products in day-to-day life have multiple aspects to consider, and this is where fine-grained labeling helps analysts and scientists.
H: Extract key phrases from a single document I need to extract relevant key phrases from a single document. Since I don't have a lot of documents, TF-IDF doesn't really work. Currently I'm using TextRank. It produces okay-ish result - some really good phrases along with a lot of garbage. Is there a better algorithm to use for this? Can anyone give me a rundown of available options? Real-world use case: I'm developing a help desk app that comes with Knowledge Base (a bunch of articles, think of it as FAQ). When a user writes a new support ticket I want to extract key phrases and find the most relevant KB articles. Overall there is not enough data to train a model. I need to compare sets of key phrases I think. AI: A related keyword to your case can be Single Document Keyword Extraction. A good paper about this is: We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of cooccurrence between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. Co-occurrence distribution shows importance of a term in the document as follows. If probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of biases of distribution is measured by the $\chi^2$-measure. Our algorithm shows comparable performance to tfidf without using a corpus. You can find the paper here. In sum, this paper gives a rank on keywords based on the defined $\chi^2$-measure.
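A rough, stripped-down sketch of the idea in that paper (take the frequent terms, count sentence-level co-occurrences, and score every other term by a chi-square-style deviation from the expected co-occurrence distribution). This is only an illustration of the mechanics, not a faithful reimplementation, and the tokenisation is deliberately crude:

    import re
    from collections import Counter, defaultdict

    def keyword_scores(text, n_frequent=10):
        sentences = [re.findall(r"[a-z]+", s.lower()) for s in re.split(r"[.!?]", text)]
        freq = Counter(w for s in sentences for w in s)
        frequent = [w for w, _ in freq.most_common(n_frequent)]
        total_freq = sum(freq[g] for g in frequent)

        # Co-occurrence counts between every term and the frequent terms (same sentence)
        cooc = defaultdict(Counter)
        for s in sentences:
            for w in set(s):
                for g in frequent:
                    if g in s and g != w:
                        cooc[w][g] += 1

        # Chi-square-style deviation from the expected co-occurrence distribution
        scores = {}
        for w, counts in cooc.items():
            n_w = sum(counts.values())
            if n_w == 0:
                continue
            score = 0.0
            for g in frequent:
                expected = n_w * freq[g] / total_freq
                score += (counts[g] - expected) ** 2 / expected
            scores[w] = score
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # print(keyword_scores(open("ticket.txt").read())[:10])   # "ticket.txt" is a placeholder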
H: How to deal with large data sets So i'm very new to this, forgive my silly questions. I've got some data I need to analyse, I would usually just use excel it doesnt seem to be able to do the job. I have .csv files containing data for every seconds over a year period. The data consists simlpy of timestamp and valueX. I need to analyse these values to see if values X goes below a certain value, and if it does I have various calculations to carry out. Value A is proportional to X and just instantaneous, then value B is a counter proportional to A. Could you suggest a method/ language / software that would be the best and most accessible to do this? I don't have much experience in data analytics/big data apart from using excel and a bit of matlab and python. thanks in advance EDIT: Thanks for the replies, my data is like this... I have a text file for January values. 01/02/2016 00:00:00,49.972332 01/02/2016 00:00:01,49.9690056 01/02/2016 00:00:02,49.9600029 01/02/2016 00:00:03,49.9490013 01/02/2016 00:00:04,49.9430046 Different text files for each month, some files have text or notes at the top which I'd need to clean up. For some context the second value is grid frequency, If it goes below a certain threshold, it constitutes an grid fault. I need to know how long these faults occur for, how often, how far the f value falls etc. EDIT 2: Thanks for the tips folks, I've been able to read in my data, clean it up and combine it into one dataframe. I'm now try to add some extra column to the dataframe that are dependent of the col 1 value but having some trouble. "Read in month csv files and create dataframes" df_jan_short = pd.read_csv('2016_01_short.csv', header = None ,parse_dates = [0], index_col = 0, names = ['timestamp','Freq'],squeeze= True) df_feb_short = pd.read_csv('2016_02_short.csv', header = None ,parse_dates = [0], index_col = 0, names = ['timestamp','Freq'],squeeze= True) "Combine dataframes into one dataframe for year" months_short = [df_jan_short, df_feb_short] year_short = pd.concat(months_short) "Repalce NaN values in df with 50.00 Hz" year_short.fillna(50.00,inplace = True) "Check what range freq is in" year_short['Case'] = 'No Case' year_short['Case'](year_short['Freq'] > Fc & year_short['Freq'] < Ff ) = 'A' year_short['Case'](year_short['Freq'] > Fa & year_short['Freq'] < Fc ) = 'B' year_short['Case'](year_short['Freq'] < Fa ) = 'C' year_short['Case'](year_short['Freq'] > Ff & year_short['Freq'] < Fh ) = 'G' year_short['Case'](year_short['Freq'] > Fh ) = 'H' But I get the following error... 
runfile('C:/Users/ShaneOKeeffe/Documents/Grid Freq/Py/4.py', wdir='C:/Users/ShaneOKeeffe/Documents/Grid Freq/Py') Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-74-3a0fbc1f23f6>", line 1, in <module> runfile('C:/Users/ShaneOKeeffe/Documents/Grid Freq/Py/4.py', wdir='C:/Users/ShaneOKeeffe/Documents/Grid Freq/Py') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/ShaneOKeeffe/Documents/Grid Freq/Py/4.py", line 48 year_short['Case'](year_short['Freq'] > Fc & year_short['Freq'] < Ff ) = 'A' ^ SyntaxError: can't assign to function call AI: As mentioned python's pandas library is good start. They have a lot of time series functionality, see e.g. the documentation here. You can load your data like so: import pandas as pd s = pd.read_csv("your_file.csv",header=None,index_col=0,names=['timestamp','value'],squeeze=True,parse_dates=[0]) This gives you a timeseries with the timestamps as index: s Out[1]: timestamp 2016-01-02 00:00:00 49.972332 2016-01-02 00:00:01 49.969006 2016-01-02 00:00:02 49.960003 2016-01-02 00:00:03 49.949001 2016-01-02 00:00:04 49.943005 Name: value, dtype: float64 s<49.95 Out[2]: timestamp 2016-01-02 00:00:00 False 2016-01-02 00:00:01 False 2016-01-02 00:00:02 False 2016-01-02 00:00:03 True 2016-01-02 00:00:04 True Name: value, dtype: bool If your data is too big to process as a whole, you can iterate over chunks using the chunksize option in read_csv.
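For example, to count how many seconds in the year the frequency dropped below a fault threshold without ever holding the whole year in memory, you can aggregate chunk by chunk (the file name and threshold below are placeholders):

    import pandas as pd

    THRESHOLD = 49.8   # example fault threshold in Hz
    fault_seconds = 0

    for chunk in pd.read_csv("2016_full_year.csv", header=None,
                             names=["timestamp", "freq"], parse_dates=[0],
                             chunksize=1_000_000):
        fault_seconds += (chunk["freq"] < THRESHOLD).sum()

    print("seconds below threshold:", fault_seconds)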
H: How to transpose rows into single column based on time-stamp index in python? Sample input dataset is: Sample output dataset what i want: 0 2017-11-17 10:23:28.691 788 0 2017-11-17 10:23:28.691 756 0 2017-11-17 10:23:28.691 789 0 2017-11-17 10:23:28.691 780 1 2017-11-17 10:23:29.731 788 1 2017-11-17 10:23:29.731 783 2 2017-11-17 10:23:30.655 747 3 2017-11-17 10:23:31.627 766 4 2017-11-17 10:23:32.606 807 How can I do this by pandas? or is there any other technique to do this? AI: Assuming you have the following source DF: In [21]: df Out[21]: Time val1 val2 val3 0 2017-11-17 11 12 13 1 2017-11-18 24 25 0 2 2017-11-19 37 0 0 Solution: In [22]: (df.replace(0, np.nan) .set_index('Time', append=True) .stack() .reset_index() .rename(columns={0:'val'}) .drop('level_2',1)) Out[22]: level_0 Time val 0 0 2017-11-17 11.0 1 0 2017-11-17 12.0 2 0 2017-11-17 13.0 3 1 2017-11-18 24.0 4 1 2017-11-18 25.0 5 2 2017-11-19 37.0
H: Strange behavior with Adam optimizer when training for too long I'm trying to train a single perceptron (1000 input units, 1 output, no hidden layers) on 64 randomly generated data points. I'm using Pytorch using the Adam optimizer: import torch from torch.autograd import Variable torch.manual_seed(545345) N, D_in, D_out = 64, 1000, 1 x = Variable(torch.randn(N, D_in)) y = Variable(torch.randn(N, D_out)) model = torch.nn.Linear(D_in, D_out) loss_fn = torch.nn.MSELoss(size_average=False) optimizer = torch.optim.Adam(model.parameters()) for t in xrange(5000): y_pred = model(x) loss = loss_fn(y_pred, y) print(t, loss.data[0]) optimizer.zero_grad() loss.backward() optimizer.step() Initially, the loss quickly decreases, as expected: (0, 91.74887084960938) (1, 76.85824584960938) (2, 63.434078216552734) (3, 51.46927261352539) (4, 40.942893981933594) (5, 31.819372177124023) Around 300 iterations, the error reaches near zero: (300, 2.1734419819452455e-12) (301, 1.90354676465887e-12) (302, 2.3347573874232808e-12) This goes on for a few thousand iterations. However, after training for too long, the error starts to increase again: (4997, 0.002102422062307596) (4998, 0.0020302983466535807) (4999, 0.0017039275262504816) Why is this happening? AI: This small instability at the end of convergence is a feature of Adam (and RMSProp) due to how it estimates mean gradient magnitudes over recent steps and divides by them. One thing Adam does is maintain a rolling geometric mean of recent gradients and squares of the gradients. The squares of the gradients are used to divide (another rolling mean of) the current gradient to decide the current step. However, when your gradient becomes and stays very close to zero, this will make the squares of the gradient become so low that they either have large rounding errors or are effectively zero, which can introduce instability (for instance a long-term stable gradient in one dimension makes a relatively small step from $10^{-10}$ to $10^{-5}$ due to changes in other params), and the step size will start to jump around, before settling again. This actually makes Adam less stable and worse for your problem than more basic gradient descent, assuming you want to get as numerically close to zero loss as calculations allow for your problem. In practice on deep learning problems, you don't get this close to convergence (and for some regularisation techniques such as early stopping, you don't want to anyway), so it is usually not a practical concern on the types of problem that Adam was designed for. You can actually see this occurring for RMSProp in a comparison of different optimisers (RMSProp is the black line - watch the very last steps just as it reaches the target): You can make Adam more stable and able to get closer to true convergence by reducing the learning rate. E.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) It will take longer to optimise. Using lr=1e-5 you need to train for 20,000+ iterations before you see the instability and the instability is less dramatic, values hover around $10^{-7}$.
H: Do you know a dataset for regression where deep learning outperforms svm and random forests? I am currently trying to learn about deep learning. I asked myself: where does deep learning outperform SVM and Random Forest on regression? Do you have any dataset for regression where deep learning outperforms SVM and Random Forest? My main objective for asking this question is to see on which kind of datasets deep learning outperforms SVM and Random Forests. Since I do not know what kind of datasets these are, I do not want to be too restrictive when asking about these datasets. Update: From the answer given by tom, it has become clear to me what kind of dataset I am looking for. So to be more specific: if you know a dataset with spatial/temporal structure at different levels of granularity, for which it is known that, for example, convolutional deep networks outperform SVM and Random Forest, then please share this dataset or a link to it. I don't know why the question is still marked as on hold for being too broad, as I updated the question. Thank you for your help. AI: As a general rule, convolutional deep networks will perform better than SVM and Random Forest models on data that has hierarchical spatial/temporal structure, for example sound, images and video. Deep networks are very good at learning this spatial/temporal structure at different levels of granularity. For classic datasets like Iris used for teaching ML, however, I seriously doubt deep networks would perform better than SVM or Random Forest.
H: The best w_j confusion in xgboost from XGBoost tutorial, it described: In this equation $w_j$ are independent with respect to each other, the form $G_j w_j + \frac{1}{2}(H_j+λ)w_j^2$ is quadratic and the best $w_j$ for a given structure $q(x)$ and the best objective reduction we can get is: $w^∗_j = \frac{−G_j}{H_j+λ}$ $obj^∗=\frac{−1}{2}\sum_{j=1}^T\frac{G^2_j}{H_j+λ} + γT$ So my confusion is: if we would like to minimize the $obj$ function, clearly, the best way is to set: $G_j w_j + \frac{1}{2}(H_j + λ) w_j^2 = 0$ thus we would have the best $w_j^*$ is: $w_j^* = \frac{-2G_j}{H_j + λ}$ which is different from the official explanation. Can anyone please point out where I am wrong? AI: To find an optimum you set the derivative of the function equal to zero, not the function itself: setting the quadratic expression itself to zero only finds its roots, not its minimum. In your case that gives $$ \frac{d}{dw_j}\left[G_j w_j + \frac{1}{2}(H_j+\lambda)w_j^2\right] = G_j + (H_j+\lambda)w_j = 0 $$ leading to $$ w_j^* = \frac{-G_j}{H_j+\lambda}$$
H: Error in FUN(newX[, i], ...) : argument "Iclo" is missing, with no default pmv1=function(H,pa,Ta,fcl,Tr,Iclo,M){ Tcl = (35.7 - 0.0275*H + 0.155*Iclo* (H - 0.31*(57.4 - 0.07*H - pa) - 0.42*(H - 58) - 0.0017*M*(58.7 - pa) - 0.0014*M*(34 - Ta))) hc=(2.4*(Tcl - Ta)) dT= (Tr - 22) PMV= (4 + (0.303 *exp(-.036*H) + 0.0275)* (6.57 + 0.46*H +.31*pa + 0.0017*H*pa + 0.0014*H*Ta - 4.13 *fcl *(1 + 0.01*dT)*(Tcl - Tr)-hc*fcl *(Tcl - Ta))) return((PMV-min(PMV))*3/(max(PMV)-min(PMV))-0) } ---------- d=7 n <- 250 set.seed(0) X1 <- optimumLHS(n, d) X1 <- data.frame(X1) colnames(X1) <- c("H","pa","Ta","fcl","Tr","Iclo","M") apply(X1, 1,pmv1) Error in FUN(newX[, i], ...) : argument "Iclo" is missing, with no default AI: This is the answer (the function body is unchanged from your question): library(lhs) pmv1 = function(H,pa,Ta,fcl,Tr,Iclo,M){ Tcl = (35.7 - 0.0275*H + 0.155*Iclo* (H - 0.31*(57.4 - 0.07*H - pa) - 0.42*(H - 58) - 0.0017*M*(58.7 - pa) - 0.0014*M*(34 - Ta))) hc=(2.4*(Tcl - Ta)) dT= (Tr - 22) PMV= (4 + (0.303 *exp(-.036*H) + 0.0275)* (6.57 + 0.46*H +.31*pa + 0.0017*H*pa + 0.0014*H*Ta - 4.13 *fcl *(1 + 0.01*dT)*(Tcl - Tr)-hc*fcl *(Tcl - Ta))) return((PMV-min(PMV))*3/(max(PMV)-min(PMV))-0) } d <- 7 n <- 250 set.seed(0) X1 <- optimumLHS(n, d) X1 <- data.frame(X1) colnames(X1) <- c("H","pa","Ta","fcl","Tr","Iclo","M") new <- pmv1(X1[,1],X1[,2],X1[,3],X1[,4],X1[,5],X1[,6],X1[,7]) This would do the job. The error occurred because apply(X1, 1, pmv1) passes each whole row as the single argument H, leaving the remaining arguments missing; pass the columns explicitly instead (the arithmetic is vectorised, so no loop is needed).
H: What are some popular process models for solving problems utilizing data? What are some standard process models which people or companies use to solve their problems utilizing data? I'm familiar with current process models such as CRISP-DM and SEMMA, but I wondered if there are any other approaches or guidelines? AI: Since you already know about CRISP-DM and SEMMA, you can also look into KDD (Knowledge Discovery in Databases).
H: How can I create a space in IBM Cloud? I want to try the IBM speech to text API. I created an IBM cloud account and went to https://console.bluemix.net/catalog/services/speech-to-text. I see the error message: Please create a space in the selected org or choose another org. Where can I create such a space? I wasn't able to find it yet. AI: It seems that Watson doesn't allow some accounts to access some regions. Just choose another region, like "US South", when creating the resource, and the problem will be gone.
H: What is the difference between Linear SVM and SVM with linear kernel? I'm wondering whether there is a difference between Linear SVM and SVM with a linear kernel. Or is a linear SVM just a SVM with a linear kernel? If so, what is the difference between the two variables linear_svm and linear_kernel in the following code. from sklearn import svm linear_svm = svm.LinearSVC(C=1).fit(X_train, y_train) linear_kernel_svm=svm.SVC(kernel='linear', C=1).fit(X_train, y_train) AI: As you can read in the documentation (http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) it's conceptually similar, but implemented in another way to add flexibility: Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. So if you only want to use linear support vectors use this one because it scales better and you get more freedom with your loss functions, but if you want to more easily try different kernels in a grid search use the more generic one.
H: How can I get 50% examples in the training set and 50% in the test set for each class when splitting data? I have a dataset of 200 examples with 10 classes. I would like to split the dataset into a training set (50%) and a test set (50%). For each class, I have 20 examples. Hence, I would like to get, for each class, 10 training examples and 10 test examples. Here are my classes: classes=['BenchPress', 'ApplyLipstick', 'BabyCrawling', 'BandMarching', 'Archery', 'Basketball', 'ApplyEyeMakeup', 'BalanceBeam', 'BaseballPitch', 'BasketballDunk'] I tried the following: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(final_data, true_label, test_size=0.50, random_state=42) However it returns a 50% training set and a 50% test set without respecting the proportion for each class (I would like to get 10 examples in the test set and 10 examples in the training set for each class); the resulting split does not have equal per-class counts. AI: From sklearn version 0.17 onwards, train_test_split gives you stratified results by using the stratify parameter. Example code: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(final_data, true_label, test_size=0.50, random_state=42, stratify=true_label) From the documentation about the parameter stratify: stratify: array-like or None (default is None) If not None, data is split in a stratified fashion, using this as the labels array. New in version 0.17: stratify splitting Hope this helps!
H: How to update bias in CNN? How do we find the gradient and the backpropagation error if we have a bias which just adds a scalar value after the convolution calculation? AI: Like the update rule for bias terms in dense layers, in convolutional nets the bias gradient is calculated using the sum of derivatives of the Z terms: $$ dJ / db = \sum_h \sum_w dZ_{hw} $$ where J is the cost function, w indexes the width of the activation after convolution and h indexes its height. db is computed by summing the dZs. It means you are summing over all the gradients of the conv output (Z) with respect to the cost. Calculating the error of the net depends on the cost function that you have used. Depending on whether you use cross entropy, mean squared error or another justified cost function, you may have different update rules for the other parameters of your net. But if you use cross entropy, which is common for variants of classification tasks, the above update rule is used for updating the bias terms.
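In vectorised code this is typically a single sum over the batch and spatial dimensions. A minimal numpy sketch, assuming the upstream gradient of the cost with respect to the conv output, dZ, has shape (batch, height, width, channels); some implementations also divide by the batch size:

    import numpy as np

    # Pretend upstream gradient dJ/dZ for a conv layer with 8 output channels
    dZ = np.random.randn(32, 14, 14, 8)   # (batch, height, width, channels)

    # One bias per output channel, so sum over the batch and spatial axes
    db = dZ.sum(axis=(0, 1, 2))           # shape (8,)

    # Gradient-descent style update (learning_rate is an arbitrary example value)
    learning_rate = 0.01
    b = np.zeros(8)
    b -= learning_rate * db
    print(db.shape, b.shape)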
H: What is the difference between 8 filters twice and one 16 filters in convolutional layers at CNN? Why would we use two convolution layers in a row with same spec? E.g. in VGG16 first two layers, they used 3 x 3 filter with 64 depth twice. What is the difference if we use 3 x 3 filter with 128 depth once. AI: Each layer has a limited amount that it can transform the layer below it. There is one linear component (weighted sum of output of layer beneath it), and one non-linear component (typically ReLU). It is in theory possible to approximate any function with a large enough single layer in a fully-connected network. However, a stack of similar smaller layers is more expressive using less resources. That means for the same number of parameters you have access to a more flexible function approximator. At some level of complexity for your target function, the cost (in terms of CPU time, data required and effort in training) of making a single layer wider is higher than the cost of stacking more, similar layers. In addition, for a CNN, you have to worry about receptive field. Any feature map can only express values that the filter can "see" due to width of the kernel. As you add more layers, each kernel applied extends the width and height of the base image that the features in the last layer effectively calculate over. If you also have a fully-connected layer after the convolutional layer, then you can in theory compensate for a poor receptive field with a very large fully-connected layer - but then you are back to the first problem of wide network with more parameters than strictly necessary to learn the function.
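A quick back-of-the-envelope comparison makes the VGG-style argument concrete: two stacked 3x3 convolutions see the same 5x5 receptive field as a single 5x5 convolution, but with fewer weights and an extra non-linearity in between (the channel count 64 is just a VGG-like example value):

    # Weights (ignoring biases) for C input channels and C output channels
    C = 64

    two_3x3 = 2 * (3 * 3 * C * C)   # two stacked 3x3 conv layers: 18*C^2
    one_5x5 = 5 * 5 * C * C         # one 5x5 conv layer with the same receptive field: 25*C^2

    print("two 3x3 layers:", two_3x3)   # 73728
    print("one 5x5 layer :", one_5x5)   # 102400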
H: What is the efficient way to generate a similarity score when comparing two face images? I am working on a face recognition application using deep learning. To plot the ROC curves and do performance evaluation, I extracted the features from the last layer of the deep neural network and I have tried Euclidian and Manhattan distances between high dimensional feature representation, but this won't give me accurate measure in terms of comparison as faces have intra-personal and interpersonal differences and other challenges such as pose, lighting condition, and expression. I came across techniques such as joint bayesian, sparse projection, KL-divergence based regularizer/prior(EM algorithm) etc. Are there any other approaches that I'm missing here? What would be the efficient approach to generate similarity scores? AI: A common method is defining "landmarks", a set of specific points that exist on every face . For example, the outside edge of each eye or the bottom of the chin are landmarks on most faces. Train the deep learning network to learn the relationship between these landmarks which will form the embedding space. Then define a distance metric in this embedding space. Finally, create a threshold for similarity between sets of points in this embedding space. That threshold can be learned through "one-class" classification techniques. One gotcha is pose. Faces turned different directions look totally different to a computer. Assuming you have isolated the faces in an image, it is best practice to warp each face so that the eyes and lips are always in the place in the image. This (and other normalization steps) should be done early in the processing pipeline to make subsequent analysis much easier.
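Once you have embeddings (whether from landmarks or from the last layer of your network), a common similarity score is simply cosine similarity with a learned threshold. A minimal sketch; the threshold value is a placeholder that you would tune on a validation set of known same/different pairs:

    import numpy as np

    def cosine_similarity(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_person(emb1, emb2, threshold=0.6):
        """threshold is an illustration value; tune it on validation pairs."""
        return cosine_similarity(emb1, emb2) >= threshold

    # Example with random stand-in embeddings
    e1, e2 = np.random.randn(128), np.random.randn(128)
    print(cosine_similarity(e1, e2), same_person(e1, e2))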
H: Why an eigenvector might be reasonable notion of centrality Using adjacency matrix to represent a connection between node i and node j, 1 means connected and 0 means not connected. Using eigenvector to represent the centrality means the more nodes with high centrality values a node is connected to, the higher that node's centrality value is. Eigenvector itself is just a vector after applying a transformation, the result vector has the same direction or totally opposite direction as the original one. I really can't see the relationship between those two properties. AI: Let the adjacency matrix of our network be $A∈\{0,1\}^{n×n}$ with an empty diagonal ($A_{ii} = 0 ∀i$). Direct approach Let’s start with the approach that a node’s centrality ($C_i$) shall be proportional to the sum of the centralities of its neighbours with a proportionality constant $\frac{1}{λ}$ (chosen thusly with some foresight): $$ C_i = \frac{1}{λ} \sum_{j=1}^{n} A_{ij} C_j.$$ This is nothing but a line-wise formulation of the matrix–vector multiplication: $$ λ \vec{C} = A·\vec{C},$$ which is exactly the definition of an eigenvector. Now, to see why the largest eigenvector is chosen, we can turn to the Perron–Frobenius theorem, which tells us that for this eigenvalue (and in case of a connected network only for this eigenvalue), we can find an eigenvector all of whose components, i.e., the eigenvector centralities are positive. Iterative approach Alternatively, we can interpret the above ansatz iteratively: Assign random positive values to the components of $\vec{C}$. Update these values according to: $$\vec{C} ← \frac{A·\vec{C}}{\left| A·\vec{C} \right|}.$$ This means that each component is updated according to: $$C_i ← \frac{1}{\left| A·\vec{C} \right|} \sum_{j=1}^{n} A_{ij} C_j,$$ i.e., you say the new centrality of a node is the sum of the centralities of its neighbours – times some normalisation to avoid values getting very big. Repeat Step 2 until the centralities converge. The idea is that if this converges to a unique result, this result does not only fulfil $C_i \propto \sum_{j=1}^{n} A_{ij} C_j$ but is also robust in that respect. For almost all initial choices of $\vec{C}$, this will converge to the positive, length-1 eigenvector to the largest eigenvalue (which exists and is unique for a connected network, see above). The reason for this is that components along the eigenvector to the largest eigenvalue will be most enlarged by the multiplication and thus dominate the others over the iterations. (Having no components along the eigenvector to the largest eigenvalue does not happen in reality and is the reason why it’s almost all above.) Note that such an iteration can also be used to numerically determine the largest eigenvalue.
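The iterative approach above translates almost line-for-line into numpy; the example graph is an arbitrary small undirected network:

    import numpy as np

    # Adjacency matrix of a small undirected graph (empty diagonal)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)

    # Step 1: random positive start; Steps 2-3: iterate C <- A.C / |A.C| until convergence
    C = np.random.rand(A.shape[0])
    for _ in range(1000):
        C_new = A @ C
        C_new /= np.linalg.norm(C_new)
        if np.allclose(C_new, C, atol=1e-12):
            break
        C = C_new

    print("eigenvector centralities:", C)
    print("check against numpy's eigendecomposition:",
          np.abs(np.linalg.eigh(A)[1][:, -1]))   # eigenvector of the largest eigenvalue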
H: XGBoost Predictions I am working on a multi-class classification task for 24 classes using XGBoost. I am training the model as follows:
param = {'max_depth': max_depth, 'eta': learning_rate, 'silent': 1, 'objective': 'multi:softmax', 'num_class': 24}
bst = xgb.train(param, dtrain, num_round)
bst = xgb.Booster({'nthread': 4})
predictions = bst.predict(dval)
I am getting the predictions as this array: [ 0.5 0.5 0.5 ..., 0.5 0.5 0.5] I don't know why it is predicting 0.5 for every instance instead of one of the class indices 0 to 23. Do I need to use a different objective parameter? AI: I do not know the Python API well, but I suspect that with the following line you overwrite the trained model with a freshly created booster object (0.5 is the default prediction): bst = xgb.Booster({'nthread': 4})
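A sketch of the corrected flow, simply dropping the line that re-creates the booster; it assumes max_depth, learning_rate, num_round, dtrain and dval are defined as in the question, with dtrain and dval being xgboost DMatrix objects:

```python
import xgboost as xgb

param = {'max_depth': max_depth, 'eta': learning_rate, 'silent': 1,
         'objective': 'multi:softmax', 'num_class': 24,
         'nthread': 4}                      # nthread can go here instead of a new Booster

bst = xgb.train(param, dtrain, num_round)   # keep the trained booster
predictions = bst.predict(dval)             # with multi:softmax: class indices in 0..23
```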
H: Function Callers vs Data Scientists In our organization, there are many people working in analytics and data science who are OK at their work in the sense that they know which packages in R/Python etc. to use and which algorithms to call for a particular type of problem. The problem is that they have very little knowledge of the underlying mathematics and are not able to think beyond calling ready-made functions, i.e., they can never build a custom solution to a specific problem. Let us call them Function Callers, as opposed to genuine Data Scientists who have a fair knowledge of the underlying mathematics. We want to conduct a training course to teach the mathematical foundations of machine learning to the Function Callers and help them become Data Scientists. For this we have shortlisted ten topics, namely:
• Probability
• Various probability distributions and data
• Maximum likelihood methods
• Linear algebra (advanced)
• Single-variable calculus, multi-variable calculus
• Vector calculus
• Graphical models
• Bayesian networks
• Optimization techniques
• Statistical models
This is going to be an 80-hour course, so it is not possible to cover everything in as much detail as a college/university course. Question: Assume that the course will have enough time to teach and implement only 3 or 4 things in each of the topics mentioned above. Given this background, what are the best 3 or 4 things that can be covered under each topic? Note: The target audience has either a science or a college-level mathematics background, but no deep knowledge of the mathematics used in machine learning. AI: I've done my Masters in Analytics at the National University of Singapore, and the answer I'm going to give is based on my experience. Of course all the topics in the above list are important, but if you need to prioritize among them, I think the following are very important for any person working in the data science field:
• Various probability distributions and data: here you can give a basic introduction to probability (you don't need a separate probability class) and spend a bit more time on its applications.
• Single-variable/multi-variable calculus: these are very important; they play an important role when you try to forecast something.
• Graphical models: these are helpful for implementing all kinds of community detection algorithms, social network analysis, and much more.
• Bayesian networks: this is one of the basic techniques that is important and should be known by all statisticians.
• Statistical models: the trainees need to be exposed to different statistical models; as you know, different models perform better in different scenarios, and that knowledge comes only with practice.
The rest are not so important at the start; you can conduct another session if necessary, based on the outcome of the first session, in which you cover the remaining topics such as linear algebra (advanced), vector calculus, optimization techniques, and maximum likelihood methods. These techniques come into play after the basic models are ready, when you want to improve the accuracy of a model or tweak it with respect to your business problem. I hope this answer helps.
H: What is the difference between fully observed and partially observed state features in reinforcement learning? When we use a function to approximate Q-values or a policy over state-action pairs, we hear the two terms above: fully and partially observed features. What is the difference? Normally, in Atari games, the Q-values are generated from the pixel distribution of the current frame. AI: A fully observed state means that there is no hidden information. Clear examples of this are chess and Go, because both players have all the information. The fact that both these games are deterministic doesn't matter; a game where the state changes are stochastic can still be fully observable. Games like poker, where both players can observe their own hand but not their opponents', are called partially observable. Other examples are real-time strategy games like StarCraft, where you can only see within the line of sight of your units.
H: Multi-class classification metrics in R and Python I would like to ask whether there are metrics in R and Python that work for more than binary classification. I've found that the Matthews Correlation Coefficient works well in Python, but in R there is only a binary version. Of course, I'm talking about "real" multi-class metrics, not something like "it's binary, but you can use one-vs-one or one-vs-all." AI: You mean this? mlr's performance measures; the measures suitable for the iris classification task:
listMeasures(iris.task)
 [1] "kappa"            "multiclass.brier" "multiclass.aunp"
 [4] "multiclass.aunu"  "qsr"              "ber"
 [7] "logloss"          "wkappa"           "timeboth"
[10] "timepredict"      "acc"              "lsr"
[13] "featperc"         "multiclass.au1p"  "multiclass.au1u"
[16] "ssr"              "timetrain"        "mmce"
caret offers some too, but fewer.
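On the Python side, scikit-learn provides several genuinely multi-class metrics (including a multi-class MCC). A short sketch; the label and probability arrays below are made-up examples:

```python
from sklearn.metrics import (matthews_corrcoef, cohen_kappa_score,
                             log_loss, accuracy_score)

y_true = [0, 2, 1, 1, 0, 2]   # made-up ground-truth labels for 3 classes
y_pred = [0, 2, 1, 0, 0, 1]   # made-up hard predictions

print(matthews_corrcoef(y_true, y_pred))   # multi-class MCC
print(cohen_kappa_score(y_true, y_pred))   # Cohen's kappa
print(accuracy_score(y_true, y_pred))

# log_loss needs predicted class probabilities, one column per class.
y_proba = [[0.8, 0.1, 0.1], [0.1, 0.2, 0.7], [0.2, 0.6, 0.2],
           [0.5, 0.3, 0.2], [0.7, 0.2, 0.1], [0.2, 0.5, 0.3]]
print(log_loss(y_true, y_proba))
```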
H: RNN unable to classify time series I have 400 time series of length 50. 200 of them have values between 1 and 10 and are considered of type A. The other 200 also have values between 1 and 10, except that 3 of the 50 data points have the value 20; these are considered of type B. I am trying to use an RNN to learn these differences using TensorFlow. I use a GRU cell whose input size is 50 (the whole time series), the internal state size is 100, there are 3 layers, the feature length is 1, and dropout is 0.8. The batch size is 1 (one whole time series of 50 values). 80% of the series are used for training and 20% for testing, evenly distributed between type A and type B. I use softmax as the activation function, and for gradient descent optimization I use RMSPropOptimizer. I am not changing the data at all before feeding it into the neural network; I tried normalizing it using (x - min) / (max - min), but it didn't help. When I try to train it, I always get 50% error. The maximum number of epochs I tried is 10, and the error was still 50%. The same network can classify correctly if I use it on data that have far more differences than just these 3 points, even on a small dataset of about 50 runs of each type. Is it impossible for an RNN to learn such small differences on just 3 data points out of 50? What would you suggest I change in my configuration? I also tried with 2000 time series and the error is still the same. Also note that these 3 data points occur at the beginning of the time series (in case this has something to do with my issue). AI: I let it run for more epochs, as suggested, and noticed that after 23 epochs the RNN did reach 0% error. I then tried changing the activation function of the GRUCell from the default tanh to relu, and it greatly sped up training: it took only 1 to 4 epochs to reach 0% error.
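For reference, a minimal sketch of that configuration using the Keras API rather than the original low-level TensorFlow GRUCell code; the shapes assume the 400×50 setup from the question with one feature per time step, it is simplified to a single GRU layer, and dropout=0.2 is an assumed translation of the question's keep probability of 0.8:

```python
import tensorflow as tf

# Binary A/B classification of 50-step, single-feature series.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(100, activation='relu',       # relu instead of the default tanh
                        dropout=0.2,
                        input_shape=(50, 1)),          # 50 time steps, 1 feature
    tf.keras.layers.Dense(2, activation='softmax')     # type A vs type B
])
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(x_train, y_train, epochs=25, batch_size=1,
#           validation_data=(x_test, y_test))
```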
H: In a recommender system, how can you normalise the similarity between two arbitrary users? Consider the following problem: There are 1000 users, 100 items (movies, for example), and 10000 ratings. The probability of a user, $u$, rating a movie, $i$, is $\mathbb{P}(R_{u_i}=\text{yes})=\frac{1}{10}$, and the probability of any two users rating the same movie is $\mathbb{P}(R_{u_i}=R_{v_i}=\text{yes})=\frac{1}{100}$. If we let a random variable $x_i=1$ if $i$ is rated by both $u$ and $v$, and $0$ otherwise, then we have $$E\Big[\sum_{i=1}^{100}x_i\Big]=100\times\Big(1\cdot\frac{1}{100} + 0\cdot\frac{99}{100}\Big)=1.$$ The similarity between $u$ and $v$ is $$PC(u, v)=\frac{\sum_{i\in T_{uv}}(R_{u_i}-\bar{R_u})(R_{v_i}-\bar{R_v})}{\sqrt{\sum_{i\in T_{uv}}(R_{u_i}-\bar{R_u})^2}\sqrt{\sum_{i\in T_{uv}}(R_{v_i}-\bar{R_v})^2}}=1,$$ where $T_{uv}$ is the set of all items rated by both $u$ and $v$, and I used the fact that $R_{u_i}$ and $R_{v_i}$ become irrelevant in comparison to $\bar{R_u}$ and $\bar{R_v}$. This is an obvious problem, because it means that in situations like this you will always have a perfect similarity between $u$ and $v$. I believe the normalisation solution is something like this: $$PC'(u, v)=\frac{\text{min}(|T_{uv}|, \beta)}{\beta}PC(u, v),$$ where $\beta\in[25, 50]$. I don't understand the logic behind this normalisation. Could you please explain? AI: This normalization says that if the number of items co-rated by $u$ and $v$ is small, i.e., only a tiny fraction of the total number of ratings (here between $\frac{25}{10000} = 0.0025$ and $\frac{50}{10000} = 0.005$), you should reflect that fact in the similarity in some way, because the overlap is too small to let you talk about the similarity with any confidence. In other words, $\frac{\min(|T_{uv}|, \beta)}{\beta}$ acts as a confidence factor applied to the similarity measure: once $|T_{uv}| \ge \beta$ the similarity is used as it is, and when $|T_{uv}| < \beta$ it is shrunk proportionally towards zero.
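A small sketch of this significance weighting on top of an ordinary Pearson similarity; the rating dictionaries and the choice β = 50 are illustrative, and note that here the user means are computed over the co-rated items only, which is one common variant of the formula above:

```python
import numpy as np

def significance_weighted_similarity(ratings_u, ratings_v, beta=50):
    """Pearson similarity between two users, shrunk towards 0 when they
    share only a few co-rated items.

    ratings_u, ratings_v: dicts mapping item id -> rating for each user.
    """
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0                       # not enough overlap for a correlation
    u = np.array([ratings_u[i] for i in common], dtype=float)
    v = np.array([ratings_v[i] for i in common], dtype=float)
    du, dv = u - u.mean(), v - v.mean()
    denom = np.sqrt((du ** 2).sum()) * np.sqrt((dv ** 2).sum())
    if denom == 0:
        return 0.0                       # constant ratings: correlation undefined
    pc = float((du * dv).sum() / denom)
    return min(len(common), beta) / beta * pc   # the confidence factor
```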