H: How to self-learn data science? I am a self-taught web developer and am interested in teaching myself data science, but I'm unsure of how to begin. In particular, I'm wondering: What fields are there within data science? (e.g., Artificial Intelligence, machine learning, data analysis, etc.) Are there online classes people can recommend? Are there projects available out there that I can practice on (e.g., open datasets)? Are there certifications I can apply for or complete? AI: Welcome to the site, Martin! That's a pretty broad question, so you're probably going to get a variety of answers. Here's my take. Data Science is an interdisciplinary field generally thought to combine classical statistics, machine learning, and computer science (again, this depends on who you ask, but others might include business intelligence here, and possibly information visualization or knowledge discovery as well; see, for example, the Wikipedia article on data science). A good data scientist is also skilled at picking up on the specific characteristics of the domain in which they are working. For example, a data scientist working on analytics for hospital records is much more effective if they have a background in Biomedical Informatics. There are many options here, depending on the type of analytics you're interested in. Andrew Ng's Coursera course is the first resource mentioned by most, and rightly so. If you're interested in machine learning, that's a great starting place. If you want an in-depth exploration of the mathematics involved, Tibshirani's The Elements of Statistical Learning is an excellent, but fairly advanced, text. There are many online courses available on Coursera in addition to Ng's, but you should select them with a mind for the type of analytics you want to focus on, and/or the domain in which you plan on working. Kaggle. Start with Kaggle if you want to dive in on some real-world analytics problems. Depending on your level of expertise, it might be good to start off simpler, though. Project Euler is a great resource for one-off practice problems that I still use as warm-up work. Again, this probably depends on the domain you wish to work in. However, I know Coursera offers a data science certificate if you complete a series of data science-related courses. This is probably a good place to start. Good luck! If you have any other specific questions, feel free to ask me in the comments, and I'll do my best to help!
H: Code Vectorization of gsub in R How can I vectorize this code in R? data <- data.frame(A = rep(5, 5), B = rep(0, 5)) data$abstract <- c("no abstract available", "A", "A", "B", "no abstract available") for (row in (1:nrow(data))){ data [row,"abstract"] <- gsub("no abstract available"," ",data[row,"abstract"]) } AI: You have plenty of alternatives for this problem. Using sapply data$abstract <- sapply(data$abstract, function(x){gsub(pattern = "no abstract available", replacement = " ", x)}) Using mapply data$abstract <- mapply(gsub, pattern = "no abstract available", replacement = " ", data$abstract) Using the stringr package library(stringr) data$abstract <- str_replace(data$abstract, "no abstract available", " ") Also, check this question on Stack Overflow for more information, like solutions with match and the qdap package.
H: When to use One Hot Encoding vs LabelEncoder vs DictVectorizer? I have been building models with categorical data for a while now and when in this situation I basically default to using scikit-learn's LabelEncoder function to transform this data prior to building a model. I understand the difference between OHE, LabelEncoder and DictVectorizer in terms of what they are doing to the data, but what is not clear to me is when you might choose to employ one technique over another. Are there certain algorithms or situations in which one has advantages/disadvantages with respect to the others? AI: There are some cases where LabelEncoder or DictVectorizer are useful, but these are quite limited in my opinion due to ordinality. LabelEncoder can turn [dog,cat,dog,mouse,cat] into [1,2,1,3,2], but then the imposed ordinality means that the average of dog and mouse is cat. Still there are algorithms like decision trees and random forests that can work with categorical variables just fine, and LabelEncoder can be used to store values using less disk space. One-Hot-Encoding has the advantage that the result is binary rather than ordinal and that everything sits in an orthogonal vector space. The disadvantage is that for high cardinality, the feature space can really blow up quickly and you start fighting with the curse of dimensionality. In these cases, I typically employ one-hot-encoding followed by PCA for dimensionality reduction. I find that the judicious combination of one-hot plus PCA can seldom be beaten by other encoding schemes. PCA finds the linear overlap, so it will naturally tend to group similar features into the same feature.
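To make the contrast concrete, here is a minimal Python sketch (a toy example using scikit-learn and pandas with made-up animal data) showing the ordinal integers produced by LabelEncoder versus the binary columns produced by one-hot encoding via pd.get_dummies:

from sklearn.preprocessing import LabelEncoder
import pandas as pd

animals = ["dog", "cat", "dog", "mouse", "cat"]

# Label encoding: one integer per category (imposes an artificial ordering)
le = LabelEncoder()
print(le.fit_transform(animals))        # e.g. [1 0 1 2 0]

# One-hot encoding: one binary column per category, no ordering implied
print(pd.get_dummies(pd.Series(animals)))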
H: Pandas: how can I create multi-level columns I have a pandas DataFrame which has the following columns: n_0 n_1 p_0 p_1 e_0 e_1 I want to transform it to have columns and sub-columns: 0 n p e 1 n p e I've searched in the documentation, and I'm completely lost on how to implement this. Does anyone have any suggestions? AI: Finally, I found a solution. You can find the example script below. #!/usr/bin/env python3 import pandas as pd import itertools import numpy as np data = pd.DataFrame(np.random.randn(10, 5), columns=('0_n', '1_n', '0_p', '1_p', 'x')) indices = set() groups = set() others = set() for c in data.columns: if '_' in c: (i, g) = c.split('_') indices.add(int(i)) groups.add(g) else: others.add(c) columns = list(itertools.product(groups, indices)) columns = pd.MultiIndex.from_tuples(columns) ret = pd.DataFrame(columns=columns) for c in columns: ret[c] = data['%d_%s' % (int(c[1]), c[0])] for c in others: ret[c] = data['%s' % c] print("Before:") print(data) print("") print("After:") print(ret) Sorry for this...
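For reference, a shorter route to the same result (a sketch that assumes every column name follows the number_letter pattern) is to build the MultiIndex directly from the split column names:

import pandas as pd
import numpy as np

data = pd.DataFrame(np.random.randn(10, 4), columns=('0_n', '1_n', '0_p', '1_p'))

# Split each name like '0_n' into the tuple ('0', 'n') and use these as a column MultiIndex
data.columns = pd.MultiIndex.from_tuples([tuple(c.split('_')) for c in data.columns])
data = data.sort_index(axis=1)   # group the sub-columns under each top-level label
print(data)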
H: R: lattice equivalent of density2d in ggplot? What would be the equivalent of geom_density2d in lattice? In essence I'm trying to create this graph with lattice: I don't think contourplot or levelplot is what I want, and when I tried them I got a blank plot. AI: Use a custom panel function in which the density is estimated (e.g. using MASS::kde2d()): library("lattice") library("MASS") set.seed(1); dat <- data.frame(x=rnorm(100), y=rt(100, 5)) xyplot(y~x, data=dat, panel=function(x, y, ...) { dens <- kde2d(x=dat$x, y=dat$y, n=50) tmp <- data.frame(x=dens$x, y=rep(dens$y, each=length(dens$x)), z=as.vector(dens$z)) panel.levelplot(tmp$x, tmp$y, tmp$z, contour=TRUE, subscripts=1:nrow(tmp), region=FALSE, ...) panel.xyplot(x, y, ...) })
H: NLP - opinion mining vs sentiment analysis I have been told that NLP possibly holds the key for allowing researchers to infer the affective state of a person when writing. For instance, by using NLP analysis on online note taking you could infer whether a student is stressed. I work in educational cognitive science, so if this were the case it would be a valuable resource for me; however, I am struggling to find evidence that this is indeed the case. I have begun preliminary research and am trying to learn a bit about the technical aspects of NLP - I'm taking an online course by Jurafsky and Manning, another by Michael Collins - and I have been reading about what can be inferred using NLP, specifically around opinion mining and sentiment analysis. My question is two-part: Firstly, most resources I have come across say something along the lines of 'NLP can be used for opinion mining and sentiment analysis, and we will talk about the implications for opinion mining'. Can someone point me in the direction of more sentiment analysis oriented resources? Secondly, as I understand it, sentiment analysis is deriving the stated sentiment within text, e.g. I was happy with, I loved, was tasty, enjoyed, hated, frustrated, etc. Can NLP, or something else, be used to derive unstated affect? And is this the same thing as sentiment analysis? [Apologies if this is asked in the wrong exchange. I've been trying to find the best fit, but there were a few candidates and I wasn't sure which would be most appropriate.] AI: I think the key is that most sentiment analysis problems (often tackled with recurrent neural networks) are formulated in terms of either a regression (with low values indicating negative sentiment, and high values positive) or a binary classification (is this text positive?). What you seem to be interested in is a much more nuanced definition of sentiment. This doesn't present any inherent problem, as the same algorithms might well work to predict more complex sentiments. The issue is simply labeled data. Because this kind of classification is difficult even for humans, it isn't easy to reliably gather data on, say, how stressed a writer is. However, if you're interested in assembling a dataset of that nature, you'd be able to apply the same methods (recurrent neural networks are a popular option) to do the classification. Many researchers in the field use Amazon Mechanical Turk or something similar to gather labeled data at a reasonable cost.
H: XGBoost Linear Regression output incorrect I am a newbie to XGBoost so pardon my ignorance. Here is the python code : import pandas as pd import xgboost as xgb df = pd.DataFrame({'x':[1,2,3], 'y':[10,20,30]}) X_train = df.drop('y',axis=1) Y_train = df['y'] T_train_xgb = xgb.DMatrix(X_train, Y_train) params = {"objective": "reg:linear"} gbm = xgb.train(dtrain=T_train_xgb,params=params) Y_pred = gbm.predict(xgb.DMatrix(pd.DataFrame({'x':[4,5]}))) print Y_pred Output is : [ 24.126194 24.126194] As you can see the input data is simply a straight line. So the output I expect is [40,50]. What am I doing wrong here? AI: It seems that XGBoost uses regression trees as base learners by default. XGBoost (or Gradient boosting in general) work by combining multiple of these base learners. Regression trees can not extrapolate the patterns in the training data, so any input above 3 or below 1 will not be predicted correctly in your case. Your model is trained to predict outputs for inputs in the interval [1,3], an input higher than 3 will be given the same output as 3, and an input less than 1 will be given the same output as 1. Additionally, regression trees do not really see your data as a straight line as they are non-parametric models, which means they can theoretically fit any shape that is more complicated than a straight line. Roughly, a regression tree works by assigning your new input data to some of the training data points it have seen during training, and produce the output based on that. This is in contrast to parametric regressors (like linear regression) which actually look for the best parameters of a hyperplane (straight line in your case) to fit your data. Linear regression does see your data as a straight line with a slope and an intercept. You can change the base learner of your XGBoost model to a GLM (generalized linear model) by adding "booster":"gblinear" to your model params : import pandas as pd import xgboost as xgb df = pd.DataFrame({'x':[1,2,3], 'y':[10,20,30]}) X_train = df.drop('y',axis=1) Y_train = df['y'] T_train_xgb = xgb.DMatrix(X_train, Y_train) params = {"objective": "reg:linear", "booster":"gblinear"} gbm = xgb.train(dtrain=T_train_xgb,params=params) Y_pred = gbm.predict(xgb.DMatrix(pd.DataFrame({'x':[4,5]}))) print Y_pred In general, to debug why your XGBoost model is behaving in a particular way, see the model parameters : gbm.get_dump() If your base learner is linear model, the get_dump output is : ['bias:\n4.49469\nweight:\n7.85942\n'] In your code above, since you tree base learners, the output will be : ['0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=2.85\n\t\t4:leaf=5.85\n\t2:leaf=8.85\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=1.995\n\t\t4:leaf=4.095\n\t2:leaf=6.195\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=1.3965\n\t\t4:leaf=2.8665\n\t2:leaf=4.3365\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.97755\n\t\t4:leaf=2.00655\n\t2:leaf=3.03555\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.684285\n\t\t4:leaf=1.40458\n\t2:leaf=2.12489\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.478999\n\t\t4:leaf=0.983209\n\t2:leaf=1.48742\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.3353\n\t\t4:leaf=0.688247\n\t2:leaf=1.04119\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] 
yes=3,no=4,missing=3\n\t\t3:leaf=0.23471\n\t\t4:leaf=0.481773\n\t2:leaf=0.728836\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.164297\n\t\t4:leaf=0.337241\n\t2:leaf=0.510185\n', '0:[x<2] yes=1,no=2,missing=1\n\t1:leaf=0.115008\n\t2:[x<3] yes=3,no=4,missing=3\n\t\t3:leaf=0.236069\n\t\t4:leaf=0.357129\n'] Tip : I actually prefer to use xgb.XGBRegressor or xgb.XGBClassifier classes, since they follow the sci-kit learn API. And because sci-kit learn has so many machine learning algorithm implementations, using XGB as an additional library does not disturb my workflow only when I use the sci-kit interface of XGBoost.
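As a follow-up to that tip, here is a minimal sketch of the same toy problem using the scikit-learn style wrapper with a linear base learner (the objective string is version-dependent: newer XGBoost releases prefer 'reg:squarederror' over the older 'reg:linear'):

import pandas as pd
import xgboost as xgb

df = pd.DataFrame({'x': [1, 2, 3], 'y': [10, 20, 30]})

# gblinear base learner, so the model can extrapolate beyond the training range
model = xgb.XGBRegressor(booster='gblinear', objective='reg:squarederror', n_estimators=100)
model.fit(df[['x']], df['y'])

print(model.predict(pd.DataFrame({'x': [4, 5]})))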
H: xgboost: give more importance to recent samples Is there a way to add more importance to points which are more recent when analyzing data with xgboost? AI: You could try building multiple xgboost models, with some of them being limited to more recent data, then weighting those results together. Another idea would be to make a customized evaluation metric that penalizes errors on recent points more heavily, which would give them more importance.
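A rough sketch of the first suggestion (one model trained on all data, one on the most recent slice, blended with hand-picked weights; the 200-sample window and the 0.3/0.7 split are purely illustrative assumptions):

import numpy as np
import xgboost as xgb

# X, y assumed to be ordered chronologically (oldest first); random data as a stand-in
X, y = np.random.rand(1000, 5), np.random.rand(1000)

model_all = xgb.XGBRegressor().fit(X, y)
model_recent = xgb.XGBRegressor().fit(X[-200:], y[-200:])   # last 200 samples only

X_new = np.random.rand(10, 5)
# Blend the two predictions, leaning on the model trained on recent data
pred = 0.3 * model_all.predict(X_new) + 0.7 * model_recent.predict(X_new)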
H: Binary Neural Network Classification or Multiclass Neural Network Classification? I am confused about the difference between a binary and multiclass neural network classification. If I am writing an algorithm that has 2 output classes (Obama or Romney), but not yes or no (so not like Obama or not Obama), then is it a binary neural network or a multi class (2 class) neural network classification? What I do know: A binary neural network classification outputs 1 unit. If a multi class classification neural network has k classes, then it has k outputs. What I think: I think it is a binary neural network classification because I am really only trying to output one thing, whether a county votes for Romney or Obama. I am confused because I thought binary was more like Romney or not Romney classification and I am not sure if that is the same as Romney or Obama. Just wanted to double check and clarify my understanding. AI: I think you are making things more confusing than they are. Binary In this case you have two possible outputs: Obama = 1. Not-Obama (who in this case can only be Romney) = 0. Multi-Class In this case you have k possible outputs, for example when k = 4: k = 0: Obama k = 1: Romney k = 2: Clinton k = 3: Bush There are approaches to tackle multi-class classification as binary classification, which are called One-vs-rest classification and One-vs-one classification; other classifiers, such as Random Forests, are able to deal with a multiclass setting in a natural way. For a brief report, see here.
H: Properties for building a Multilayer Perceptron Neural Network using Keras? I am trying to build and train a multilayer perceptron neural network that correctly predicts what president won in what county for the first time. I have the following information for training data. Total population Median age % BachelorsDeg or higher Unemployment rate Per capita income Total households Average household size % Owner occupied housing % Renter occupied housing % Vacant housing Median home value Population growth House hold growth Per capita income growth Winner That's 14 columns of training data and the 15th column is what the output should be. I am trying to use Keras to build a multilayer perceptron neural network, but I need some help understanding a few properties and the pros and cons of choosing different options for these properties. ACTIVATION FUNCTION I know my first step is to come up with an activation function. The neural networks I studied always used sigmoid activation functions. Is a sigmoid activation function the best? How do you know which one to use? Keras additionally gives the options of using a softmax, softplus, relu, tanh, linear, or hard_sigmoid activation function. I'm okay with using whatever, but I just want to be able to understand why and the pros and cons. PROBABILITY INITIALIZATIONS I know initializations define the probability distribution used to set the initial random weights of Keras layers. The options Keras gives are uniform, lecun_uniform, normal, identity, orthogonal, zero, glorot_normal, glorot_uniform, he_normal, and he_uniform. How does my selection here impact my end result or model? Shouldn't it not matter because we are "training" whatever random model we start with and come up with a more optimal weighting of the layers anyways? AI: 1) Activation is an architecture choice, which boils down to a hyperparameter choice. You can make a theoretical argument for using any function, but the best way to determine this is to try several and evaluate on a validation set. It's also important to remember you can mix and match activations of various layers. 2) In theory yes, many random initializations would be the same if your data was extremely well behaved and your network ideal. But in practice initializations seek to ensure the gradient starts off reasonable and the signal can be backpropagated correctly. Likely in this case any of those initializations would perform similarly, but the best approach is to try them out, switching if you get undesirable results.
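To make those choices concrete, here is a minimal sketch of such a network in Keras (the layer sizes, the relu/sigmoid activations, and the he_uniform initializer are illustrative assumptions, not recommendations; depending on your setup the import may be from keras instead of tensorflow.keras):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    # 14 input features -> two hidden layers -> a single output unit
    Dense(32, activation='relu', kernel_initializer='he_uniform', input_shape=(14,)),
    Dense(16, activation='relu', kernel_initializer='he_uniform'),
    Dense(1, activation='sigmoid'),   # probability that the county went to one candidate
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)

Swapping the activation or kernel_initializer strings is all it takes to compare the options discussed above on a validation set.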
H: Where does the name 'LSTM' come from? Long short-term memory is a recurrent neural network architecture introduced in the paper Long short-term memory. Can you please tell me where the name comes from? ("Memory", as the network can store information because of the recurrence - but where does the "Long short-term" come from?) AI: In Sepp Hochreiter's original paper on the LSTM where he introduces the algorithm and method to the scientific community, he explains that the long term memory refers to the learned weights and the short term memory refers to the gated cell state values that change with each step through time t. edit: quote from paper "Recurrent networks can in principle use their feedback connections to store representations of recent input events in the form of activations ('short-term memory'), as opposed to 'long-term memory' embodied by slowly changing weights."
H: Are there other large margin classifiers than SVMs? When reading about SVMs (e.g. on the German Wikipedia) there is a sentence like "an SVM is a large-margin classifier". Are there other large margin classifiers than SVMs? AI: Yes, one famous example is boosting techniques like AdaBoost. It uses small classifiers to create a big one. Here you can find more info about margin classifiers.
H: Recognising humans in images through HOG descriptor and SVM classifier performs poorly I'm using a HOG descriptor, coupled with an SVM classifier, to recognise humans in pictures. I'm using the Python wrappers for OpenCV. I've used the excellent tutorial at PyImageSearch, which explains what the algorithm does and furnishes hints on how to set the parameters of the detectMultiScale method. Specifically, I do # initialize the HOG descriptor hog = cv2.HOGDescriptor() # Set the support vector machine to be pre-trained for people detection hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector()) # Detect people in the image (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4), padding=(8, 8), scale=1.05) The parameters are chosen according to a fine tuning on both accuracy and performance, following the explanations in the tutorial itself. My problem is that this method, which seems like the currently best method to recognise humans in a picture according to the literature (the original paper is dated 2005), seems to perform pretty poorly on my images. I have images containing clothes, both with a model and without one, and I'm trying this approach to recognise those with the model. On a subset of 300 images which I manually scanned to tag them for containing the model or not, the method fails 30% of the time. These are some images as examples. Here it detected a missing human: Here it did not get the full human: Here it did not recognise it at all: I understand that the detector works for upright humans. Should they also be full-figure? My images encompass half-figure, figures with no head or no feet. Before this, I had tried a Haar-feature based cascade classifier to recognise the face in an image and the accuracy on the same set of images was 90%, so I was trying to improve on this. Also, I'm interested in understanding why things are not working here. AI: You are using the training set that OpenCV gives you, which doesn't correspond to the kind of images you are using. The data you are using comes from getDefaultPeopleDetector, and the kind of images that the default detector was trained on are pictures of many people, not a female model from a fashion e-commerce site. If you want to distinguish between models and garments you can try to train your own classifier with HOG or other features. Another path you can take is to detect if there is a face or not. You could use Haar cascades for that.
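A minimal sketch of that second path, using OpenCV's bundled frontal-face Haar cascade (the cv2.data.haarcascades path assumes a reasonably recent opencv-python build; the detectMultiScale parameters are the usual defaults and the input image is hypothetical):

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

image = cv2.imread('product_photo.jpg')            # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
has_model = len(faces) > 0                          # treat "face found" as "model present"
print(has_model, faces)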
H: Where exactly does $\geq 1$ come from in SVMs optimization problem constraint? I've understood that SVMs are binary, linear classifiers (without the kernel trick). They have training data $(x_i, y_i)$ where $x_i$ is a vector and $y_i \in \{-1, 1\}$ is the class. As they are binary, linear classifiers, the task is to find a hyperplane which separates the data points with the label $-1$ from the data points with the label $+1$. Assume for now, that the data points are linearly separable and we don't need slack variables. Now I've read that the training problem is now the following optimization problem: ${\min_{w, b} \frac{1}{2} \|w\|^2}$ s.t. $y_i ( \langle w, x_i \rangle + b) \geq 1$ I think I got that minimizing $\|w\|^2$ means maximizing the margin (however, I don't understand why it is the square here. Would anything change if one would try to minimize $\|w\|$?). I also understood that $y_i ( \langle w, x_i \rangle + b) \geq 0$ means that the model has to be correct on the training data. However, there is a $1$ and not a $0$. Why? AI: First problem: Minimizing $\|w\|$ or $\|w\|^2$: It is correct that one wants to maximize the margin. This is actually done by maximizing $\frac{2}{\|w\|}$. This would be the "correct" way of doing it, but it is rather inconvenient. Let's first drop the $2$, as it is just a constant. Now if $\frac{1}{\|w\|}$ is maximal, $\|w\|$ will have to be as small as possible. We can thus find the identical solution by minimizing $\|w\|$. $\|w\|$ can be calculated by $\sqrt{w^T w}$. As the square root is a monotonic function, any point $x$ which maximizes $\sqrt{f(x)}$ will also maximize $f(x)$. To find this point $x$ we thus don't have to calculate the square root and can minimize $w^T w = \|w\|^2$. Finally, as we often have to calculate derivatives, we multiply the whole expression by a factor $\frac{1}{2}$. This is done very often, because if we derive $\frac{d}{dx} x^2 = 2 x$ and thus $\frac{d}{dx} \frac{1}{2} x^2 = x$. This is how we end up with the problem: minimize $\frac{1}{2} \|w\|^2$. tl;dr: yes, minimizing $\|w\|$ instead of $\frac{1}{2} \|w\|^2$ would work. Second problem: $\geq 0$ or $\geq 1$: As already stated in the question, $y_i \left( \langle w,x_i \rangle + b \right) \geq 0$ means that the point has to be on the correct side of the hyperplane. However this isn't enough: we want the point to be at least as far away as the margin (then the point is a support vector), or even further away. Remember the definition of the hyperplane, $\mathcal{H} = \{ x \mid \langle w,x \rangle + b = 0\}$. This description however is not unique: if we scale $w$ and $b$ by a constant $c$, then we get an equivalent description of this hyperplane. To make sure our optimization algorithm doesn't just scale $w$ and $b$ by constant factors to get a higher margin, we define that the distance of a support vector from the hyperplane is always $1$, i.e. the margin is $\frac{1}{\|w\|}$. A support vector is thus characterized by $y_i \left( \langle w,x_i \rangle + b \right) = 1$. As already mentioned earlier, we want all points to be either a support vector, or even further away from the hyperplane. In training, we thus add the constraint $y_i \left( \langle w,x_i \rangle + b \right) \geq 1$, which ensures exactly that. tl;dr: Training points don't only need to be correct, they have to be on the margin or further away.
H: How to select regression algorithm for noisy (scattered) data? I am going to do regression analysis with multiple variables. In my data I have n = 23 features and m = 13000 training examples. Here is the plot of my training data (area of houses against price): There are 13000 training examples on the plot. As you can see it is relatively noisy data. My question is which regression algorithm is more appropriate and reasonable to use in my case. I mean is it more logical to use simple linear regression or some nonlinear regression algorithm. To be more clear I provide some examples. Here is some unrelated example of linear regression fit: And some unrelated example of nonlinear regression fit: And now I provide some hypothetic regression lines for my data: AFAIK primitive linear regression for my data will generate very high error cost because it is very noisy and scattered data. On the other hand, there is no apparent nonlinear pattern (for example sinusoidal). What regression algorithm will be more reasonable to use in my case (house prices data) in order to get more or less appropriate houses' price prediction and why this algorithm (linear or nonlinear) is more reasonable? AI: The model I would use is the one that minimizes the accumulated quadratic error. Both models you are using, linear and quadratic, look good. You can compute which one has the lowest error. If you want to use an advanced method you can use RANSAC. It is an iterative method for regression that assumes that there are outliers and removes them from the optimization. So your model should be more accurate than just using the first approach I told you.
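A sketch of that comparison in Python with scikit-learn (synthetic data stands in for the house prices; RANSACRegressor is included as the outlier-robust option mentioned above):

import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
area = rng.uniform(30, 250, size=(500, 1))
price = 1500 * area[:, 0] + rng.normal(0, 40000, size=500)   # noisy synthetic prices

# Linear fit
linear = LinearRegression().fit(area, price)
mse_linear = mean_squared_error(price, linear.predict(area))

# Quadratic fit: the same linear model on [area, area^2]
area_quad = PolynomialFeatures(degree=2, include_bias=False).fit_transform(area)
quadratic = LinearRegression().fit(area_quad, price)
mse_quad = mean_squared_error(price, quadratic.predict(area_quad))

# Outlier-robust alternative
ransac = RANSACRegressor().fit(area, price)

print(mse_linear, mse_quad)   # keep the model with the lower accumulated error

Ideally the comparison is made on held-out data rather than on the training set, to avoid simply rewarding the more flexible model.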
H: How is it possible to process an image with a few neurons? An 1024*1024 pixel image has around one million pixels. If I would like to connect each pixel to an R,G,B input neuron, then more than 3 million neurons are needed. It would be really hard, to train a neural network, which has millions of inputs. How is it possible, to reduce the number of neurons? AI: There are several ways to make this big number trainable: Use CNNs Auto-Encoders (see Reducing the Dimensionality of Data with Neural Networks) Dimensionality reduction of the input Scale the image down PCA / LDA Troll-Answer If you really meant "only a few neurons" then you might want to have a look at Spiking neural networks. Those are incredibly computationally intensive, need a lot of hand-crafting and still get worse performance than normal neural networks for most tasks ... but you only need very little of them.
H: Stacked features not helping I am wondering why my stacked features do not help me to improve against my loss metric. Here's what I'm doing: I am adding new features, which are simply the predictions obtained from train/predict of other models, to the original train/test features. Every time I have tried this method, it has failed. I am curious what the issue could be with this. Can anyone give me some advice? AI: As far as I understood, stacking does not add features to the original data set. The point is to train several models on the training data and use their predictions on training data as input features to another model. The first such construction used logistic regression as the final ensemble and class probabilities from each base learner as input features. Now, what I have described is a technical layout; the intuition behind it is the following: considering that there are no models which are good over the whole joint probability space of features, one can combine their results in order to get the best from each one. In other words, we can state that we explore the richness of the models (seen as function spaces) to get a combined result. This strategy does not always work, but it often does. I think you are doing something wrong. I think it is better to use the original features only for the base learners. Be careful to use scores or probabilities if possible, instead of final classifications from the base learners; it gives more room for improvement. It is often better to stack learners from different families, not the same model with different parameters (better to use a gradient boost and a random forest than two gradient boosts). None of these pieces of advice are rules which cannot be broken, and even if you take them all there is no guarantee that there will be improvements.
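A minimal sketch of that layout with scikit-learn's StackingClassifier (available from scikit-learn 0.22; by default it feeds the base learners' predicted probabilities into the final logistic regression when the base learners support predict_proba, and the synthetic data is only a placeholder):

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Base learners from different families, logistic regression as the final model
stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=200, random_state=0)),
                ('gb', GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
)

print(cross_val_score(stack, X, y, cv=5).mean())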
H: Research in random forest algorithms able to switch data sets I'm curious as to whether research has been done into random forests that combine unsupervised with supervised learning in a way allowing a single algorithm to find patterns in, and work with, multiple different data sets. I have googled every possible way to find research on this, and have come up empty. Can anyone point me in the right direction? AI: Semi-Supervised Learning The combination of unsupervised learning and supervised learning is referred to as semi-supervised learning, which is the concept that I believe you are searching for. Label propagation is often cited when outlining the heuristics of semi-supervised learning. The essence is to employ clustering, but to use a tiny set of known cases in order to derive (or propagate) the labels of the clusters. Hence one is able to use a small set of labeled cases to classify a much larger set of unlabeled data. Here are some references: Wikipedia has an entry on semi-supervised learning. The scikit-learn User Guide is often a useful starting point and has a label propagation routine. There are, in fact, papers treating semi-supervised random forest models. Another one here Hope this helps!
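A small sketch of the label-propagation idea with scikit-learn (unlabeled points are marked with -1; the digits data and the choice of 50 known labels are just a stand-in):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelPropagation

X, y = load_digits(return_X_y=True)

# Pretend only 50 labels are known; mark everything else as unlabeled with -1
y_partial = np.full_like(y, -1)
y_partial[:50] = y[:50]

model = LabelPropagation().fit(X, y_partial)
print((model.transduction_ == y).mean())   # accuracy of the propagated labels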
H: Using R and Python together I'm new in this field and I started working with data by using R. Because of that, I find R much easier to approach a data project. However, apparently an employer wants you to know an object-oriented programming language (a language like Python). So would it be smart to think I can use Python just when I need to deal with a complex programming process, like replacing the NAs in the Titanic/Kaggle project with an average based on name, and to use R for anything else? So use them both interchangeably? Besides the fact that Python is more programming-oriented, I don't see why somebody would use it over R... AI: Several clarifications: you can program with object-oriented (OOP) concepts in R, even though OOP in R has slightly different syntax from other languages. Methods do not bind to objects. In R, different method versions will be invoked based on the input argument classes (and types). (Ref: Advanced R) you can also replace NAs with the mean / any statistic / value in R using a mask if you store the data in a dataframe (See SO post) There is no problem using them interchangeably. I use R from Python using the package RPy2. I assume it is equally easy to do it the other way round. At the end of the day, any language is only as good as how much the users know about it. Use one that you are more familiar with and try to learn it properly using the vast resources available online.
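For completeness, a tiny sketch of calling R from Python with rpy2 (this assumes rpy2 and a working R installation are available; the R snippet is arbitrary):

import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

# Run arbitrary R code and pull the result back into Python
mean_in_r = robjects.r('mean(c(1, 2, NA), na.rm = TRUE)')[0]
print(mean_in_r)   # 1.5

# Or import an R package and call its functions directly
stats = importr('stats')
print(stats.rnorm(3))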
H: Neural Networks for Predictive typing I don't have a background in neural networks. But various studies have shown that neural networks (feed-forward / recurrent) outperform n-gram language modeling for predicting words in a sequence. But in an application to text messaging or any text-based conversation, the language used will most likely be more informal or colloquial. Can a neural network still perform better than an n-gram LM? Consider that the data to be fed in are text messages (colloquial phrases). If so, please enlighten me, thanks. AI: A neural network is in principle a good choice when you have A LOT of similar data and classification tasks. Predicting the next character (or word... which is just multiple characters) is such a scenario. I don't think it really matters which kind of language you have, as long as you have enough training data of the same kind. See The Unreasonable Effectiveness of Recurrent Neural Networks for a nice article where a recurrent neural network (RNN) was used as a character predictor to write complete texts. They also have code on github.com/karpathy/char-rnn ready to train / go. You can feed it with a start string and ask for the next characters / words.
H: supervised learning and labels In this wiki page, I came across with the following phrase. When data is not labeled, a supervised learning is not possible, and an unsupervised learning is required I cannot figure out why supervised learning is not possible? Appreciate any help to resolve this ambiguity. AI: The main difference between supervised and unsupervised learning is the following: In supervised learning you have a set of labelled data, meaning that you have the values of the inputs and the outputs. What you try to achieve with machine learning is to find the true relationship between them, what we usually call the model in math. There are many different algorithms in machine learning that allow you to obtain a model of the data. The objective that you seek, and how you can use machine learning, is to predict the output given a new input, once you know the model. In unsupervised learning you don't have the data labelled. You can say that you have the inputs but not the outputs. And the objective is to find some kind of pattern in your data. You can find groups or clusters that you think that belong to the same group or output. Here you also have to obtain a model. And again, the objective you seek is to be able to predict the output given a new input. Finally, going back to your question, if you don't have labels you can not use supervised learning, you have to use unsupervised learning.
H: Software Testing for Data Science in R I often use Nose, Tox or Unittest when testing my Python code, especially when it has to be integrated with other modules or other pieces of code. However, now that I've found myself using R more than Python for ML modelling and development, I realized that I don't really test my R code (and more importantly I really don't know how to do it well). So my question is, what are good packages that allow you to test R code in a similar manner as Nose, Tox or Unittest do in Python? Additional references such as tutorials will be greatly appreciated as well. Bonus points for packages in R similar to Hypothesis or Feature Forge. Related talk: Trey Causey: Testing for Data Scientists AI: Packages for unit testing and assertive testing that are actively maintained: Packages for unit testing testthat: more information on how to use it can be found here or on GitHub RUnit: CRAN page Packages for assertions: assertthat: info on GitHub assertive: Assertive has a lot of subpackages available in case you do not need all of them; check on CRAN assertr: info on GitHub ensurer: info on GitHub tester: info on GitHub It is a matter of preference what you want to use for assertions. Read this Bioconductor page for more info on the difference between RUnit and testthat.
H: Tokenizing words of length 1, what would happen if I do topic modeling? Suppose my dataset contains some very small documents (about 20 words each). And each of them may have words in at least two languages (a combination of Malay and English, for instance). Also there are some numbers inside each of them. Just out of curiosity: while this is usually customizable, why do some tokenizers choose by default to ignore tokens that are just numbers, or anything that doesn't meet a certain length? For example, the CountVectorizer in scikit-learn ignores words that do not have more than one alphanumeric character. And the tokenizer utility in gensim ignores words with digits. I used CountVectorizer in the end and made it accept words containing digits and words with length 1 as well. This is because I needed exact matches, as slight differences in those words of length 1 may point to a different document. I am currently trying topic modeling (gensim's LSI) to perform topic analysis, but the main intention of doing that is to be able to reduce the dimension so I can feed it to spotify's annoy library for quick approximate searching (from 58k features to 500 topics). Also I expect it to reduce the time and memory taken to compute classifier models. So the question is really: if I tokenize words of length 1, would it make sense to perform topic modeling? Even if I do a brute force search by just comparing cosine similarity (no fancy ANN), would it affect the precision or accuracy (i.e. would it be able to recognize the slight change in words of length 1 in the query and retrieve the right document)? AI: The libraries usually exclude 1-length tokens and tokens with no alphanumeric characters because typically they are noise and do not have any descriptive power. That is, these tokens are usually not helpful, say, in distinguishing between relevant vs not relevant documents. However, if in your domain you feel like 1-length tokens can be helpful, feel free to use them as well. For example, if all the documents that contain 1 belong to the same topic, it may be a good idea to preserve this token. 1 has descriptive power in this case: it can help distinguish between one particular topic and the rest. Now, your next question is about LSI. For LSI there is no difference if a column in the document-term matrix corresponds to a 1-char token or to a 5-char token. So you can use LSI in your analysis.
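For reference, this is roughly how that default is relaxed in scikit-learn: the stock token_pattern r'(?u)\b\w\w+\b' requires at least two word characters, and dropping one \w keeps single-character and numeric tokens as well (a sketch with toy documents; get_feature_names_out needs a recent scikit-learn, older versions use get_feature_names):

from sklearn.feature_extraction.text import CountVectorizer

docs = ["model A 1", "model B 2"]   # toy documents where 1-char tokens matter

default_vec = CountVectorizer()
custom_vec = CountVectorizer(token_pattern=r'(?u)\b\w+\b')

print(default_vec.fit(docs).get_feature_names_out())   # ['model'] only
print(custom_vec.fit(docs).get_feature_names_out())    # also includes '1', '2', 'a', 'b'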
H: Building a machine learning model to predict crop yields based on environmental data I have a dataset containing data on temperature, precipitation and soybean yields for a farm for 10 years (2005 - 2014). I would like to predict yields for 2015 based on this data. Please note that the dataset has DAILY values for temperature and precipitation, but only 1 value per year for the yield, since harvesting of crop happens at end of growing season of crop. I want to build a regression or some other machine learning based model to predict 2015 yields, based on a regression/some other model derived by studying the relation between yields and temperature and precipitation in previous years. I am familiar with performing machine learning using scikit-learn. However, not sure how to represent this problem. The tricky part here is that temperature and precipitation are daily but yield is just 1 value per year. How do I approach this? AI: For starters, you can predict the yield for the upcoming year based on the daily data for the previous year. You can estimate the model parameters by considering each year's worth of data as one "point", then validate the model using cross-validation. You can extend this model by considering more than the past year, but look back too far and you'll have trouble validating your model and overfit.
H: Machine learning algorithm to classify blog posts So I have a large collection of blog posts containing title, content, category, tags and geo-location fields and I'm looking to achieve three things: Assign a category (or multiple categories) to all the posts and any new ones. I have a strict vocabulary of categories. Add new tags to the posts that might be relevant to the post. Mark the post if it contains information about a place. For example: Lorem ipsum dolor sit amet San Francisco, consectetur adipiscing elit. I've been looking into different machine learning algorithms, most recently decision trees, but I don't feel that is the best algorithm to work out the problems above (or that I haven't understood them enough). Many of these posts already contain categories, tags and geo-location data. Some do not contain any information and some have only a few details. What would be the best machine learning algorithm to look into to solve each of the three areas? AI: Question 1: Category prediction To predict the category of a new blog post, you could do the following: Build an MLP (multilayer perceptron, a very simple neural network). Each category gets an output node, each tag is a binary input node. However, this will only work if the number of tags is very small. As soon as you add new tags, you will have to retrain the network. Build an MLP with "important words" as features. If you have internal links, you might want to have a look at "On Node Classification in Dynamic Content-based Networks". In case you're German, you might also like Über die Klassifizierung von Knoten in dynamischen Netzwerken mit Inhalt You could take all words you currently have and see those as a vector space. Fix that vocabulary (and probably remove some meaningless words like "with", "a", "an" - these are commonly called "stopwords"). For each text, you count the different words you have in your vocabulary. A new blog post is a point in this space. Use $k$ nearest neighbors for classification. Use combinations of different predictors by letting each predictor give a vote for a classification. See also Yiming Yang, Jan O. Pedersen: A Comparative Study on Feature Selection in Text Categorization, 1997. Scikit-learn: Working With Text Data Question 2: Tagging texts This can be treated the same way as question 1. Question 3: Finding locations Download a database of countries / cities (e.g. MaxMind) and just search for a match.
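A compact sketch of the bag-of-words route with scikit-learn (hypothetical post texts, your fixed category vocabulary as labels; a linear model is used here instead of an MLP just to keep it short):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["Lorem ipsum about travel in San Francisco",
         "A recipe post about pasta and tomatoes",
         "Another travel diary from Italy"]
categories = ["travel", "food", "travel"]   # labels from your strict category vocabulary

# TF-IDF turns each post into a point in word-vector space; the classifier votes on a category
clf = make_pipeline(TfidfVectorizer(stop_words='english'), LogisticRegression())
clf.fit(posts, categories)

print(clf.predict(["lorem ipsum pasta with tomatoes"]))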
H: Open cv and computer vision I'm new to computer vision, and I'm looking for a good place to start from, what's better between open cv in python or open cv in c++ AI: Depends on your programming skills. Here is the summary: OpenCV is a great tool originally developed in C++ and after a while a Python interface was added. In industry (and also many academic research group) C++ is the popular language for Computer Vision as the nature of data needs efficiency in computation. So starting with python is an easy way to learn Computer Vision and OpenCV capabilities but when you are applying for a job in industry or academia, they most probably need you to get your hands dirty with C++.
H: Spike Slab in r, bad output I've successfully used Spike Slab in the past, but with this data it seems like something is going wrong. My code is: >require(spikeslab) >set.seed(2) >model1_ss<-with(data_use,spikeslab(gdeaths_per_100_thousand~gdealers_per_100_thousand+Physically.Unhealthy.Days+Mentally.Unhealthy.Days, na.rm=TRUE)) >summary(model1_ss) the first 10 lines of data_use is Physically.Unhealthy.Days Mentally.Unhealthy.Days gdeaths_per_100_thousand gdealers_per_100_thousand 5.2 4.1 19.70210418 30.11828855 4.9 4.1 21.6723146 32.83851989 5.1 4.3 23.52217417 39.4076506 5.1 3.6 NA 38.03625359 3.4 4.1 17.78983533 27.28191219 3.4 4.2 19.4576324 32.2906761 3.3 3.8 13.9248167 40.75855495 3.3 3.8 12.05513916 36.2194827 5.9 3.9 NA 31.52340694 And it returns this, which is totally unlike what I expect from from Spike Slab: Length Class Mode summary 4 data.frame list verbose 10 -none- list terms 3 terms call sigma.hat 1 -none- numeric y 2132 -none- numeric xnew 12792 -none- numeric x 12792 -none- numeric y.center 1 -none- numeric x.center 6 -none- numeric x.scale 6 -none- numeric names 6 -none- character bma 6 -none- numeric bma.scale 6 -none- numeric gnet 6 -none- numeric gnet.scale 6 -none- numeric gnet.path 2 -none- list gnet.obj 16 lars list gnet.obj.vars 6 -none- numeric gnet.parms 6 -none- numeric phat 1 -none- numeric complexity 500 -none- numeric ridge 500 -none- list model 500 -none- list Is there a way to interpret this that I don't know about? Or is there something wrong with how I'm running it? Thanks so much for any help, I have no idea what to try. AI: There seems to be no summary method for objects of class spikeslab, so the default summary method is used which prints the internal list elements. Just print the object to get a nice summary - maybe that's what you want? Is this the output you expected? > model1_ss ------------------------------------------------------------------- Variable selection method : AIC Big p small n : FALSE Screen variables : FALSE Fast processing : TRUE Sample size : 7 No. predictors : 3 No. burn-in values : 500 No. sampled values : 500 Estimated mse : 7.5765 Model size : 2 ---> Top variables: bma gnet bma.scale gnet.scale Mentally.Unhealthy.Days 2.290 2.331 13.003 13.236 Physically.Unhealthy.Days 0.926 0.951 1.084 1.113 -------------------------------------------------------------------
H: Assigning values to missing target vector values in scikit-learn I have a dataset containing data on temperature, precipitation, and soybean yields for a farm for 10 years (2005 - 2014). I would like to predict yields for 2015 based on this data. Please note that the dataset has DAILY values for temperature and precipitation, but only 1 value per year for the yield (since harvesting of crop happens at end of growing season of crop). I would like to build a regression or some other machine learning based model to predict 2015 yields, based on a regression/some other model derived by studying the relation between yields and temperature and precipitation in previous years. As per, Building a machine learning model to predict crop yields based on environmental data, I am using sklearn.cross_validation.LabelKFold to assign each year the same label. The question is that since I have a single target value per year, do I need to interpolate to fill in target values for all the other days of the year? Should I just use the same target value for each day of the year? AI: The model likely won't have much predictive power if the input is a single day. No weather patterns longer than one day can be captured that way. Instead you should aggregate the days together. You can come up with different features that describe your larger, aggregated unit of time (months, year). For example mean precipitation is a very simple one. Binning the data and using counts within those bins would also work. More advanced options would roll the time all the way up to a full year and learn a feature set at that level.
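A sketch of that aggregation step with pandas (the file and column names are hypothetical; each year of daily weather collapses into one row of features that pairs with that year's single yield value):

import pandas as pd

daily = pd.read_csv('daily_weather.csv', parse_dates=['date'])   # hypothetical file
daily['year'] = daily['date'].dt.year

# One row of aggregate features per year
yearly_features = daily.groupby('year').agg(
    mean_temp=('temperature', 'mean'),
    max_temp=('temperature', 'max'),
    total_precip=('precipitation', 'sum'),
    rainy_days=('precipitation', lambda p: (p > 0).sum()),
)

yields = pd.read_csv('yearly_yield.csv', index_col='year')       # hypothetical file
training_table = yearly_features.join(yields)                    # feature columns + target column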
H: How can I use data on customer interactions to drive communication strategy? I have data on my customers (age, location, gender) and number of interactions with each customer by channel (#calls, #sms, #letters) and data on whether they have bought a product (yes/no). I would like to predict what combination of interactions (e.g. 3 calls, 2 letters) is the best approach when contacting different types of customers. What would be an appropriate statistical/machine learning technique to use to calculate this? AI: I would do the following: First, cluster your customers into several groups based on age, location, gender etc. Second, for each group, use the different combinations of interactions as features to predict 'yes/no', and select the combination of interactions that performs best.
H: How to prepare colored images for neural networks? I have seen many examples online regarding the MNIST dataset, but it's all in black and white. In that case, a 2D array can be constructed where the values at each array element represent the intensity of the corresponding pixel. However, what if I want to do colored images? What's the best way to represent the RGB data? There's a very brief discussion of it here, which I quote below. However, I still don't get how the RGB data should be organized. Additionally, is there some OpenCV library/command we should use to preprocess the colored images? the feature detectors in the second convolutional-pooling layer have access to all the features from the previous layer, but only within their particular local receptive field* *This issue would have arisen in the first layer if the input images were in color. In that case we'd have 3 input features for each pixel, corresponding to red, green and blue channels in the input image. So we'd allow the feature detectors to have access to all color information, but only within a given local receptive field. AI: Your R,G, and B pixel values can be broken into 3 separate channels (and in most cases this is done for you). These channels are treated no differently than feature maps in higher levels of the network. Convolution extends naturally to more than 2 dimensions. Imagine the greyscale, single-channel example. Say you have N feature maps to learn in the first layer. Then the output of this layer (and therefore the input to the second layer) will be comprised of N channels, each of which is the result of convolving a feature map with each window in your image. Having 3 channels in your first layer is no different. This tutorial does a nice job on convolution in general. http://deeplearning.net/tutorial/lenet.html
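A small sketch of what the input actually looks like in practice (OpenCV is used to load the image; note that cv2.imread returns the channels in BGR order, the file name is hypothetical, and Keras-style networks typically expect a (height, width, 3) array scaled to [0, 1]):

import cv2
import numpy as np

img = cv2.imread('photo.jpg')             # hypothetical file; array of shape (H, W, 3), BGR order
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

print(img.shape)                          # e.g. (1024, 1024, 3)

x = img.astype(np.float32) / 255.0        # scale pixel values
x = np.expand_dims(x, axis=0)             # add a batch dimension -> (1, H, W, 3)
# x can now be fed to a first convolutional layer whose filters span all 3 channels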
H: Mine webshop history for clusters I've no experience in data science so this will be one of those questions... I have data from >100k purchases made via a webshop regarding a catalogue of around >100 items. The history of purchases flattened out looks like Item1 Item2 ... ItemN Sex State 5 0 0 M NY 25 15 0 F IL 0 1 1 ? NY By playing around with the data, I can deduce simple facts like "90% of all purchases include at least 3 Item1", "If there are at least 4 of Item2, it is likely that Item3 is 0" or "60% of all customers from NY are male, but only 40% of those from IL are". Given the amount of combinations and data there is, the most obvious question: How can I approach wringing out more information from a data set like the above? I'm mostly interested in how one item does or does not entail inclusion of another... AI: Frequent Item-Set Mining is what you are looking for. You can see the tree structure of your frequent itemsets and the association rules afterwards. For your data I'd suggest looking at the whole dataset for a while to get a sense of what you have in hand. Playing with concepts like probability distributions, entropy, etc. would be really helpful, in case you can reduce the size of your features. PCA also gives you the opportunity of projecting your data into a low-dimensional space, and you can also look at plots showing the first several PCs in 2-D or 3-D and get an impression of your data. Before all of the above, I strongly suggest checking whether you have missing values and, if so, trying to cope with them.
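A small sketch of frequent-itemset mining and association rules in Python with the mlxtend package (this assumes mlxtend is installed, the purchase counts are first converted to a boolean "item was bought" table, and the support/confidence thresholds are purely illustrative):

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

purchases = pd.DataFrame({'Item1': [5, 25, 0], 'Item2': [0, 15, 1], 'Item3': [0, 0, 1]})
baskets = purchases > 0                      # boolean: did the purchase contain the item at all

frequent = apriori(baskets, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric='confidence', min_threshold=0.6)

# Each rule says: purchases containing the antecedents tend to contain the consequents too
print(rules[['antecedents', 'consequents', 'support', 'confidence']])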
H: Simple example of genetic alg minimization I have been looking for a while for examples of how I could find the points at which a function achieves its minimum using a genetic algorithm approach in Python. I looked at DEAP documentation, but the examples there were pretty hard for me to follow. For example: def function(x,y): return x*y+3*x-x**2 I am looking for some references on how I can make a genetic algorithm in which I can feed some initial random values for both x and y (not coming from the same dimensions). Can someone with experience creating and using genetic algorithms give me some guidance on this? AI: Here is a trivial example, which captures the essence of genetic algorithms more meaningfully than the polynomial you provided. The polynomial you provided is solvable via stochastic gradient descent, which is a simpler minimimization technique. For this reason, I am instead suggesting this excellent article and example by Will Larson. Quoted from the original article: Defining a Problem to Optimize Now we're going to put together a simple example of using a genetic algorithm in Python. We're going to optimize a very simple problem: trying to create a list of N numbers that equal X when summed together. If we set N = 5 and X = 200, then these would all be appropriate solutions. lst = [40,40,40,40,40] lst = [50,50,50,25,25] lst = [200,0,0,0,0] Take a look at the entire article, but here is the complete code: # Example usage from genetic import * target = 371 p_count = 100 i_length = 6 i_min = 0 i_max = 100 p = population(p_count, i_length, i_min, i_max) fitness_history = [grade(p, target),] for i in xrange(100): p = evolve(p, target) fitness_history.append(grade(p, target)) for datum in fitness_history: print datum """ from random import randint, random from operator import add def individual(length, min, max): 'Create a member of the population.' return [ randint(min,max) for x in xrange(length) ] def population(count, length, min, max): """ Create a number of individuals (i.e. a population). count: the number of individuals in the population length: the number of values per individual min: the minimum possible value in an individual's list of values max: the maximum possible value in an individual's list of values """ return [ individual(length, min, max) for x in xrange(count) ] def fitness(individual, target): """ Determine the fitness of an individual. Higher is better. individual: the individual to evaluate target: the target number individuals are aiming for """ sum = reduce(add, individual, 0) return abs(target-sum) def grade(pop, target): 'Find average fitness for a population.' 
summed = reduce(add, (fitness(x, target) for x in pop)) return summed / (len(pop) * 1.0) def evolve(pop, target, retain=0.2, random_select=0.05, mutate=0.01): graded = [ (fitness(x, target), x) for x in pop] graded = [ x[1] for x in sorted(graded)] retain_length = int(len(graded)*retain) parents = graded[:retain_length] # randomly add other individuals to # promote genetic diversity for individual in graded[retain_length:]: if random_select > random(): parents.append(individual) # mutate some individuals for individual in parents: if mutate > random(): pos_to_mutate = randint(0, len(individual)-1) # this mutation is not ideal, because it # restricts the range of possible values, # but the function is unaware of the min/max # values used to create the individuals, individual[pos_to_mutate] = randint( min(individual), max(individual)) # crossover parents to create children parents_length = len(parents) desired_length = len(pop) - parents_length children = [] while len(children) < desired_length: male = randint(0, parents_length-1) female = randint(0, parents_length-1) if male != female: male = parents[male] female = parents[female] half = len(male) / 2 child = male[:half] + female[half:] children.append(child) parents.extend(children) return parents I think it could be quite pedagogically useful to also solve your original problem using this algorithm and then also construct a solution using stochastic grid search or stochastic gradient descent and you will gain a deep understanding of the juxtaposition of those three algorithms. Hope this helps!
H: Writing custom data analysis program I have a number of large datasets (10GBs) each with data fetched from a NoSQL database that I have remotely downloaded on my desktop. I would like to write a Python program to run some custom data analysis (plots - preferably interactive) and export custom reports in html or pdf. I was wondering how people do the following: 1) Store the data. For the moment I have plain text files (each file has rows of a fixed number of columns - most of the data are categorical). Would it make sense to save those in some database (SQL) or hdf5? Any hints on which is preferable? 2) Which plotting library would you propose for the graphs? I have seen Bokeh, and matplotlib supports interactive widgets, but I don't know what people normally use. 3) Could I export the analysis results in an IPython notebook and then in html programmatically? AI: 1) Store the data. For the moment I have plain text files (each file has rows of a fixed number of columns - most of the data are categorical). Would it make sense to save those in some database (SQL) or hdf5? Any hints on which is preferable? Yes, it would make sense to store the data in a local database, rather than using large csv/text files. As you say that the data is derived from a NoSQL source, I assume unstructured data. So, using a SQL/relational store is out of the question. As you say you are using Python, I would suggest you use TinyDB, which is both light-weight and easy to handle. 2) Which plotting library would you propose for the graphs? I have seen Bokeh, and matplotlib supports interactive widgets, but I don't know what people normally use. Matplotlib would be good enough. Actually, this question is more opinion-based than anything else. There are a lot of visualization libraries you can use, like Bokeh, Seaborn, etc. 3) Could I export the analysis results in an IPython notebook and then in html programmatically? Yes, you can do the analytics directly in an IPython notebook (Jupyter), which also supports Markdown and HTML cells. In addition, you can also use widgets and interactive visualization with Jupyter IPython notebooks and Matplotlib. Tutorials for the same
H: Pandas: access fields within field in a DataFrame Suppose I have such a JSON file: [ { "id": "0", "name": "name0", "first_sent": "date0", "analytics": [ { "a": 1, ... }, { "a": 2, ... } ] } ] and I want to parse it with Pandas. So I load it with df = pd.read_json('file.son') It's all good until I try to access and count the number of dictionaries in the "analytics" field for each item, for which task I haven't found any better way than for i in range(df.shape[0]): num = len(df[i:i+1]['analytics'][i]) But this looks totally non-elegant and it's missing the point of using Pandas in the first place. I need to be able to access the fields within "analytics" for each item. The question is how to use Pandas to access fields within a field (which maps to a Series object), without reverting to non-Pandas approaches. A head of the DataFrame looks like this (only fields 'id' and 'analytics' reported): 0 [{u'a': 0.0, u'b... 1 [{u'a': 0.01, u'b... 2 [{u'a': 0.4, u'b... 3 [{u'a': 0.2, u'b... Name: analytics, dtype: object 0 '0' 1 '1' 2 '2' 3 '3' The first number is obviously the index, the string is the 'id', and it is clear that 'analytics' appears as a Series. AI: Multi-indexing might be helpful. See this. But the below was the immediate solution that came to mind. I think it's a little more elegant than what you came up with (fewer obscure numbers, more interpretable natural language): import pandas as pd df = pd.read_json('test_file.json') df = df.append(df) # just to give us an extra row to loop through below df.reset_index(inplace=True) # not really important except to distinguish your rows for _ , row in df.iterrows(): currNumbDict = len(row['analytics']) print(currNumbDict)
H: Python: validating the existence of NLTK data with database search I need to pull the names of companies out of resumes. Thousands of them. I was thinking of using NLTK to create a list of possible companies, and then cross-referencing the list of strings with something like SEC.gov. I've already been able to successfully pull the candidate's name, and contact info off of the resumes with some RegEx, but this one has me quite stumped. What I'm thinking is that I could use NLTK to create a list of strings of proper nouns from the resume's, and then search SEC.gov, or some other database. This is a link to the SEC page I would be searching: SEC company search page Read Resume1 Get all potential company names as list of strings potentialCompanies IF searching for string1 in SEC gets result, THEN add to candidateCompanies ELSE remove from potentialCompanies, go to next string My Questions To people that have used NLTK, would there be a better way of getting the potential companies from the text besides using proper nouns? Would there be a better place to search for companies than the SEC site? I have never done any web scraping before, and don't really know where to start if it is needed. (I had posted this on Stack Overflow but they told me that it might be better suited for here...) AI: NLTK has a built-in NER model that would extract potential Organizations from text, you can read about it here (and see examples) NLTK book (look for section "5 Named Entity Recognition"). However, if your input text has organizations in a very specific context that wasn't seen by NLTK NER model, performance might be quite low. In that case you should be looking into training your own NER model, what would extract company names. For that you would require to manually markup a small amount of your dataset.
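As a rough sketch of that built-in NER route, assuming the standard NLTK models (punkt, averaged_perceptron_tagger, maxent_ne_chunker, words) have been downloaded, organization candidates can be pulled out like this; the sample sentence is made up:
import nltk

text = "Jane Doe worked at Acme Corporation before joining the World Bank."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))

orgs = [" ".join(token for token, pos in subtree.leaves())
        for subtree in tree.subtrees()
        if subtree.label() == 'ORGANIZATION']
print(orgs)  # candidate organization chunks; quality depends on the pretrained model
The resulting strings would then be the potentialCompanies list you cross-reference against SEC.gov or another registry.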
H: Rules by which RStudio sets Headings RStudio automatically recognizes headers in an R script that are set via comments: I would like to exploit that feature, but I don't quite understand what the rules are for RStudio to recognize them as headers. Can someone explain? AI: Check out Code Folding and Sections: Code sections allow you to break a larger source file into a set of discrete regions for easy navigation between them. Code sections are automatically foldable—for example, the following source file has three sections (one expanded and the other two folded): To insert a new code section you can use the Code -> Insert Section command. Alternatively, any comment line which includes at least four trailing dashes (-), equal signs (=), or pound signs (#) automatically creates a code section. For example, all of the following lines create code sections: # Section One --------------------------------- # Section Two ================================= ### Section Three ############################# Note that as illustrated above the line can start with any number of pound signs (#) so long as it ends with four or more -, =, or # characters. (highlights by myself)
H: Mathematics major for data science So I'm a 2nd-year student who recently transferred from a Computer Science major to a Mathematics major. Though I do have a bit of an issue here. I can choose between the applied mathematics, pure mathematics and statistics concentrations. Along with this major, I'm doing a minor in Data Science with courses focused on Economics and Statistics. In the future, I'm interested in doing a master's degree in Data Science and a career in data analysis for businesses. Although I know any degree can be used to get into graduate school, I still want to know which would be most beneficial for education in the future as well as for opportunities for internships, research, and jobs. Thank you! AI: If you know that you want to become a data scientist, you can pretty much rule out pure mathematics. Note, I'm not saying that pure mathematicians cannot become data scientists, but it's not the most natural transition. Between the other two branches, stats is probably the most natural path. Both will have you thinking about applying math to answer real-world problems, but stats is very specifically geared towards larger-scale data analysis. EDIT: @rocinante mentions the need for software skills, and suggests CS over econ as a minor. I would say this really depends; if you are in a bigger data science team, you'll probably work alongside dedicated programmers, who will be able to implement your analytics more efficiently than you would be expected to as an analyst. If you know that you want to apply D.S. techniques to either finance or business, domain-specific knowledge is helpful. Additionally, if you know that it is your desire to end up in data science, you'll be able to look for ways to use programming along the way, and keep those skills sharp.
H: How data representation affects neural networks? Suppose A's possible values are ON or OFF. Suppose I represent it as: if A ON then feature f=1 else f=0 Or, suppose I represent it with 2 features, where: -if A is ON then f1=1 and f2=0 -if A is OFF then f1=0 and f2=1 How this kind of representation affects neural networks? AI: It will have very little effect The answer most will give is that it will have no effect, but adding one more feature will decrease the ratio of records to features so will slightly increase the bias and will hence make your model slightly less accurate. Unless, of course, you have overfit your model , in which case it will make your model slightly more accurate (a good data scientist would never do this because they understand the importance of cross-validation :-). If you normalize your data and then attempt some sort of dimensionality reduction, your algorithm will immediately eliminate the feature that you added since it is perfectly negatively (linearly) correlated with the first feature. In this case it will have no effect. Please also consider the following: I always see big red flags when someone asks a very fundamental data science question with the words neural network included. Neural networks are very powerful and receive a great deal of attention in the media and on Kaggle, but they take more data to train, are difficult to configure, and require much more computing power. If you are just starting out, I suggest getting a foundation in linear regression, logistic regression, clustering, SVMs, decision trees, random forests, and naive Bayes before delving into artificial neural networks. Just some food for thought. Hope this helps!
H: Please list some well tested APIs for ARIMA models I am looking for a good Python API for time-series models such as ARIMA. Please list some well-tested APIs and a few more advanced models suitable for financial time-series analysis. AI: Statsmodels: Statsmodels is your best bet for a Python library that includes ARIMA. I have used it fairly extensively and am quite happy with it. But it's certainly not as well tested as R-based ARIMA models. R: If you want something "well-tested" then your best bet is likely to use Rpy2 to call an R-based ARIMA library from Python. Rpy2 can be a bit tricky because of version reconciliation between Python, R and Rpy2. Here's a tutorial on calling R from Python using Rpy2. Hope this helps!
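A minimal statsmodels sketch follows; the import path shown is the one used in recent statsmodels releases (older releases expose a similar class under statsmodels.tsa.arima_model), and the series here is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic daily series just to show the API
idx = pd.date_range('2015-01-01', periods=200, freq='D')
y = pd.Series(np.cumsum(np.random.randn(200)), index=idx)

model = ARIMA(y, order=(1, 1, 1))   # (p, d, q)
result = model.fit()
print(result.summary())
print(result.forecast(steps=10))    # 10-step-ahead forecast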
H: Approximating density of test set in train I am looking for a method to approximate how similar a test set (i.e., the test set features) is to a train set. For example, something like: for each row in the test set, is there a similar enough data point in the train set? I've been thinking about using a mixture model approach, but I haven't been able to find a good reference on this. Can anyone suggest a good approach, or provide good references for how to use mixture models for this application? AI: The approach that comes to mind is to calculate the Kullback-Leibler divergence between the kernel density estimations of your train dataset and of your test dataset. The kernel density estimation of each of your datasets will give you an approximation to the pdfs of your datasets. The Kullback-Leibler divergence will give you a number that represents the divergence in bits from one distribution to another (if you use base 2 for your logarithm). Below are some references I think you would find useful.
https://en.wikipedia.org/wiki/Kernel_density_estimation
https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
If you would like me to show the math behind this method, feel free to ask. EDIT: Added math as asked by author of question. Let $\hat x_1, \hat x_2,\hat x_3,\dots,\hat x_n$ be your training dataset while $x_1,x_2,x_3,\dots,x_m$ is your testing dataset, where both $\hat x_i$ and $x_i$ belong to $\mathbb{R}^d$. $$\hat f(x;H)=\frac{1}{n} \sum_{i=1}^{n}K(x-\hat x_i;H)$$ $$f(x;H)=\frac{1}{m} \sum_{i=1}^{m}K(x-x_i;H)$$ $\hat f$ and $f$ represent the kernel density estimation for the training set and testing set respectively. The parameter $H$ is the bandwidth parameter and is a symmetric positive definite $d \times d$ matrix. $K(u;H)$ can be rewritten as $$|H|^{-\frac{1}{2}} K(H^{-\frac{1}{2}} u)$$ where $K$ can be any kernel function. I would recommend, for simplicity, the standard multivariate normal kernel. Okay, so now that we have the kernel density estimations of both our training and our testing dataset, we can use the Kullback-Leibler divergence to estimate the difference between the two. Optimally we would like to calculate the Kullback-Leibler divergence over the whole space. Mathematically speaking: $$\int_X f(x;H) \, \log_2\!\left(\frac{f(x;H)}{\hat f(x;H)}\right)dx$$ But this is computationally impractical. We can approximate the integral by drawing a set of points $x^{(1)},\dots,x^{(N)}$ from $f(x;H)$ and computing the Monte Carlo average (the $f(x;H)$ weighting is already accounted for by sampling from $f$): $$\frac{1}{N}\sum_{j=1}^{N} \log_2\!\left(\frac{f(x^{(j)};H)}{\hat f(x^{(j)};H)}\right)$$ Quick note: to sample from a kernel density estimator, uniformly randomly select a point from the respective dataset, then sample from the kernel of choice centered at the chosen point.
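A rough scikit-learn sketch of this recipe, with an arbitrarily chosen bandwidth that you would want to tune (for example by cross-validation) and synthetic data standing in for your train and test features:
import numpy as np
from sklearn.neighbors import KernelDensity

# X_train, X_test: arrays of shape (n_samples, n_features)
X_train = np.random.randn(500, 5)
X_test = np.random.randn(300, 5) + 0.5

kde_train = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X_train)
kde_test = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X_test)

# Monte Carlo estimate of KL(test || train), in bits
samples = kde_test.sample(2000)
log_f = kde_test.score_samples(samples)      # natural-log densities under the test KDE
log_f_hat = kde_train.score_samples(samples) # natural-log densities under the train KDE
kl_bits = np.mean(log_f - log_f_hat) / np.log(2)
print(kl_bits)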
H: Is a neural network an online algorithm by nature? I have been doing machine learning for a while, but bits and pieces come together even after some time of practicing. In neural networks, you adjust the weights by doing one pass (forward pass), then computing the partial derivatives of the cost with respect to the weights (backward pass) after each training example, and subtracting those partial derivatives from the current weights. In turn, the calculation of the new weights is mathematically complex (you need to compute the partial derivatives of the weights, for which you compute the error at every layer of the neural net except the input layer). Is that not by definition an online algorithm, where the cost and new weights are calculated after each training example? Thanks! AI: Not by nature. You can train after each example, or after each epoch (a full pass over the training set). This is the difference between stochastic gradient descent and batch gradient descent, so whether the training is online depends on how you schedule the weight updates, not on the network itself. See pg. 84 of Sebastian Raschka's Python Machine Learning book for more.
H: what is the difference between "fully developed decision trees" and "shallow decision trees"? As reading Ensemble methods on scikit-learn docs, it says that bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees). But search on google it always return information about Decision Tree. I'd like to know the detail of the two trees mentioned in the doc, what's the fully developed and shallow meanings. Update: About why bagging work best with fully developed and why boosting work best with shallow. First, I think complex models (e.g., fully developed decision trees) means such a data set has a complex format be called as fully developed decision trees. After I read above quote over 20 times and rapaio's answer, I think my poor English lead me to the wrong road(misunderstand). I also mistake shallow as shadow , which make me confusing a long time....Now I understand the meanings of fully developed and shallow . I think the quote is saying bagging work best with a models(already trained) which algorithm is complex. And boosting only need simple model. Both bagging and boosting need many estimators, as n_estimators=100 in scikit-learn examples. If n_estimators=100 : bagging need 100 fully developed decision trees estimators(models) boosting need 100 shallow decision trees estimators(models) Does my thoughts is right? Hope my update can help non-native speakers. e.g. means for example, so there are other models can use for bagging and boosting. How about changing the model to svm or something else? Or both of they need a tree base model? AI: [Later edit - Rephrase everything] Types of trees A shallow tree is a small tree (most of the cases it has a small depth). A full grown tree is a big tree (most of the cases it has a large depth). Suppose you have a training set of data which looks like a non-linear structure. Bias variance decomposition as a way to see the learning error Considering bias variance decomposition we know that the learning error has 3 components: $$Err = \text{Bias}^2 + \text{Var} + \epsilon$$ Bias is the error produced by the fit model when it is not capable to represent the true function; it is in general associated with underfitting. Var is the error produced by the fit model due to sampled data, it describes how unstable a model is if the training data changes; it is in general associated with overfitting. $\epsilon$ is the irreducible error which envelops the true function; this can't be learned Considering our shallow tree we can say that the model has low variance since changing the sample does not change too much the model. It needs too many changed data points to be considered unstable. At the same time we can say that has a high bias, since it really can't represent the sine function which is the true model. We can say also that it has a low complexity. It can be described by 3 constants and 3 regions. Consequently, the full grown tree has low bias. It is very complex since it can be described only using many regions and many constants on those regions. This is why it has low bias. The complexity of the model impacts also variance which is high. In some regions at least, a single point sampled differently can change the shape of the fitted tree. As a general rule of thumb when a model has low bias it has also high variance and when it has low variance it has high bias. This is not true always, but it happens very often. 
And intuitively that is a correct idea. The reason is that when you get close to the points in the sample you learn the patterns but also learn the errors from the sample; when you stay far away from the sample you are instead very stable, since you do not incorporate the errors from the sample. How can we build ensembles using those kinds of trees?
Bagging The statistical idea behind bagging is bootstrapping, which is a statistical procedure to evaluate the error of a statistic. Assuming you have a sample and you want to evaluate the error of a statistical estimation, the bootstrapping procedure allows you to approximate the distribution of the estimator. But a tree is only a simple function of the sample: "split the space into regions and predict with the average", which is a statistic. Thus if one builds multiple trees from bootstrap samples and averages them, the trees can be treated as roughly i.i.d. and the same principle works to reduce variance. Because of that, bagging allows one to reduce the variance without affecting the bias too much. Why does it need full-depth trees? For a simple reason, perhaps not so obvious at first sight: it needs classifiers with high variance in order to reduce it. This procedure does not affect the bias. If the underlying models have low variance and high bias, bagging will slightly reduce the already small variance and do nothing for the bias.
Boosting How does boosting work? Many compare boosting with model averaging, but the comparison is flawed. The idea of boosting is that the ensemble is an iterative procedure. It is true that the final classification rule looks like a weighted average of some weak models, but the point is that those models were built iteratively. It has nothing to do with bagging and how bagging works. The $k$-th tree is built using information learned from all previous $k-1$ trees. So we start with a weak classifier which we fit to the data, where all the points have the same importance. This importance can be changed by weights as in AdaBoost or by residuals as in gradient boosting; it really does not matter. The next weak classifier does not treat all the points in the same way: those previously classified correctly have smaller importance than those classified incorrectly. The consequence is that the model enriches its complexity, its ability to reproduce more complex surfaces. This translates into a reduction in bias, since the model can get closer to the data. A similar intuition applies in reverse: if the classifier already has a low bias, what will happen when I boost it? Probably an unbearable overfit, that's all.
Which one is better? There is no clear winner. It depends too much on the data set and on other parameters. For example, bagging can't hurt. It may be useless, but it usually does not hurt performance. Boosting can lead to overfitting, because you can eventually get too close to the data. There is a lot of literature which says that when the irreducible error is high, bagging is much better and boosting does not progress too much.
Can we decrease both variance and bias? Pure bootstrap and bagging approaches serve a single purpose: either reduce variance or reduce bias. However, modern implementations changed various things in how those approaches work. Sampling can be used in boosting and it seems to work towards reducing the variance as well. There are bagging procedures which take some ideas from boosting, for example iterative bagging (or adaptive bagging) published by Breiman. So, the answer is yes, it is possible.
Can we use learners other than trees? Of course. Often you will see boosting use this approach. I have read some papers on bagging SVMs or other learners as well. I tried bagging some SVMs myself, but without much success. Trees are the preferred way, however, for a very simple reason: they are simple to build, simple to adapt and simple to control. My personal opinion is that the last word has not yet been said regarding ensembles of trees. PS: a last note on the number of weak classifiers: this depends entirely on the complexity of the data set combined with the complexity of the learner. There is no recipe. Often 20 of them are enough to capture most of the information, and additional ones provide only tiny tuning gains.
H: Question about train example code for TensorFlow I am trying to learn TensorFlow, and I could understand how it uses the batch in this example: cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv)) train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) sess.run(tf.initialize_all_variables()) for i in range(20000): batch = mnist.train.next_batch(50) if i%100 == 0: train_accuracy = accuracy.eval(feed_dict={ x:batch[0], y_: batch[1], keep_prob: 1.0}) print("step %d, training accuracy %g"%(i, train_accuracy)) train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) print("test accuracy %g"%accuracy.eval(feed_dict={ x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})) My question is, why it get a batch of 50 training data, but only use the first one for training. Maybe I did not understand the code correctly. AI: If I understood you correctly, you are asking about this line of code: train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) Here you only specify which part of batch is used for features and which for your predicted class.
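To make the indexing concrete: next_batch(50) returns an (images, labels) pair for all 50 examples, so batch[0] is not the first example but the full block of images. A quick check (the shapes assume the standard flattened-MNIST, one-hot-label setup from that tutorial):
batch = mnist.train.next_batch(50)
images, labels = batch     # tuple unpacking makes the structure explicit
print(images.shape)        # (50, 784): all 50 flattened 28x28 images
print(labels.shape)        # (50, 10):  all 50 one-hot label vectors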
H: Predicted features combined with original ones I am curious how good such a procedure could be: I get predictions from some 10 learners trained on the train set and also predicted on the train set. Then I column-bind those predictions to the original train set. Could this be a valid procedure for improving the learning process? AI: This procedure exists and is called stacked generalization, or simply stacking. See the stacking section of the Wikipedia page on ensemble learning. Starting from there you can read more by following the references from the page. The first paper on the subject was published by Wolpert in 1992. [Later edit] Do not combine the predictions with the original features; keep only the predictions, combined with the target.
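A minimal sketch of the out-of-fold version of this with scikit-learn (two base learners instead of ten, and a synthetic dataset, just to show the mechanics; out-of-fold predictions avoid the leakage you would get by predicting on the same data the base learners were trained on):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base_learners = [RandomForestClassifier(n_estimators=200, random_state=0),
                 LogisticRegression(max_iter=1000)]

# out-of-fold class probabilities become the meta-features
meta_features = np.column_stack([
    cross_val_predict(clf, X, y, cv=5, method='predict_proba')[:, 1]
    for clf in base_learners])

meta_learner = LogisticRegression()
print(cross_val_score(meta_learner, meta_features, y, cv=5).mean())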
H: What kinds of learning problems are suitable for Support Vector Machines? What are the hallmarks or properties that indicate that a certain learning problem can be tackled using support vector machines? In other words, what is it that, when you see a learning problem, makes you go "oh I should definitely use SVMs for this" rather than neural networks or decision trees or anything else? AI: SVMs can be used for classification (distinguishing between several groups or classes) and regression (obtaining a mathematical model to predict something). They can be applied to both linear and non-linear problems. Until around 2006 they were arguably the best general-purpose algorithm for machine learning. I was trying to find a paper that compared many implementations of the best-known algorithms: SVMs, neural nets, trees, etc. I couldn't find it, sorry (you will have to take my word for it). In that paper, the algorithm that got the best performance was the SVM, using the libsvm library. In 2006 Hinton's work on deep learning revived neural nets, improving the state of the art considerably on several benchmarks, which was a huge advance. However, deep learning only achieves good performance with huge training sets. If you have a small training set I would suggest using SVMs. Furthermore, you can find here a useful infographic by scikit-learn about when to use different machine learning algorithms. However, to the best of my knowledge there is no agreement in the scientific community that if a problem has features X, Y and Z then it's better to use an SVM. I would suggest trying different methods. Also, please don't forget that an SVM or a neural net is just a method to compute a model; the features you use are just as important.
H: Overfitting/Underfitting with Data set size In the graph below: x-axis => data set size; y-axis => cross-validation score. The red line is for the training data and the green line is for the testing data. In a tutorial that I'm referring to, the author says that the point where the red line and the green line overlap means: "Collecting more data is unlikely to increase the generalization performance and we're in a region where we are likely to underfit the data. Therefore it makes sense to try out a model with more capacity." I cannot quite understand the meaning of the bold phrase and how it happens. Appreciate any help. AI: Underfitting means that you still have capacity to improve your learning, while overfitting means that you have used more capacity than the learning problem needs. In the region where the green curve (test score) is still rising, you should continue providing capacity (either data points or model complexity) to gain better results. The further the green line goes, the flatter it becomes, i.e. you are reaching the point where the provided capacity (data) is enough, and it is better to try providing the other type of capacity, which is model complexity. If that does not improve your test score, or even reduces it, it means the data/complexity combination was already close to optimal and you can stop.
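Curves like the ones in the question can be generated directly with scikit-learn's learning_curve; a rough sketch with a placeholder model and synthetic data:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

# if the two mean curves have converged, more data alone is unlikely to help;
# try a higher-capacity model instead
print(train_sizes)
print(train_scores.mean(axis=1))
print(test_scores.mean(axis=1))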
H: Feauture selection for clustering regarding zero-correlated feature I want to cluster a 5 feature data-set. Firstly to explore the data I did a correlation matrix to see if some features where highly correlated so I could reduce them. Then I saw a feature that have close to zero correlation against all the other features. This got me wondering if I should exclude this parameter since it acts as a kind of "noise" relatively to all the other features. What's your opinion? AI: Lack of correlation with other features is not a reason to omit a feature. On the contrary, it is usually a reason to keep the feature because it may provide unique information. Typically, highly correlated features provide redundant information and feature reduction techniques (e.g., Principal Components Analysis) are used to remove the redundancy. While it is possible that the uncorrelated feature is noise, you should not make that assumption. It could be that the uncorrelated feature is the only one containing information and the other 4 features are all correlated noise.
H: What is a Recurrent Heavy Subgraph? I recently came across this term recurrent heavy subgraph in a talk. I don't seem to understand what it means and Google doesn't seem to show any good results. Can someone explain what this means in detail. AI: The term may best be expressed as a Recurrent, Heavy Subgraph. That is, a subgraph which is both Recurrent and Heavy. Heaviness of a subgraph refers to heavily connected vertices- that is, nodes which are connected many times ("many" being relative to the network in question). Recurrent refers to the propensity of a subgraph to occur more than once. Thus, a Recurrent Heavy Subgraph is a densely connected set of vertices which occurs several times in the overall network. These subgraphs are often used to determine properties of a network. For example: In a network of emails interactions within a company organized into 4-person teams with one member acting as the lead, each team's email activity (if they email between themselves sufficiently to be considered "heavy") could be described as a Heavy Subgraph. The fact that these subgraphs occur many times in the network make them Recurrent Heavy Subgraphs. If one was searching for structure in the network, noticing that these recurrent, heavy subgraphs exist would go a long way toward determining the organization of the network as a whole.
H: Best regression model to use for sales prediction I have the following variables along with sales data going back a few years: date # simple date, can be split in year, month etc shipping_time (0-6 weeks) # 0 weeks means in stock, more weeks means the product is out of stock but a shipment is on the way to the warehouse. Longer shipping times have a siginificant impact on sales. sales # amount of products sold I need to predict the sales (which vary seasonally) while taking into account the shipping time. What would be a simple regression model that would produce reasonable results? I tried linear regression with only date and sales, but this does not account for seasonality, so the prediction is rather weak. Edit: As a measure of accuracy, I will withold a random sample of data from the input and compare against the result. Extra points if it can be easily done in python/scipy Data can look like this -------------------------------------------------- | date | delivery_time| sales | -------------------------------------------------- | 2015-01-01 | 0 |10 | -------------------------------------------------- | 2015-01-01 | 7 |2 | -------------------------------------------------- | 2015-01-02 | 7 |3 | ... AI: This is a pretty classic ARIMA dataset. ARIMA is implemented in the StatsModels package for Python, the documentation for which is available here. An ARIMA model with seasonal adjustment may be the simplest reasonably successful forecast for a complex time series such as sales forecasting. It may (probably will) be that you need to combine the method with an additional model layer to detect additional fluctuation beyond the auto-regressive function of your sales trend. Unfortunately, simple linear regression models tend to fare quite poorly on time series data.
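A rough statsmodels sketch combining seasonality with shipping time as an exogenous regressor; the file layout, column names and (seasonal) orders are placeholders to be tuned on your actual data:
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv('sales.csv', parse_dates=['date'])        # assumed file layout
daily = df.groupby('date').agg({'sales': 'sum',
                                'delivery_time': 'mean'})

model = SARIMAX(daily['sales'],
                exog=daily[['delivery_time']],
                order=(1, 1, 1),                 # non-seasonal ARIMA terms
                seasonal_order=(1, 1, 1, 7))     # weekly seasonality; use 12 for monthly data
result = model.fit(disp=False)

future_exog = [[0]] * 14                         # assume items are in stock for the next 14 days
print(result.forecast(steps=14, exog=future_exog))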
H: Using machine learning specifically for feature analysis, not predictions I'm new to machine learning and have spent the last couple months having a blast using Sci-Kit Learn to try to understand the basics of building feature sets and predictive models. Now I'm trying to use ML on a data set not to predict future values but to understand the importance and direction (positive or negative) of each feature. My features (X) are boolean and integer values that describe a product. My target (y) is the sales of the product. I have ~15,000 observations with 16 features a piece. With my limited ML knowledge to this point, I'm confident that I can predict (with some level of accuracy) a new y based on a new set of features X. However I'm struggling to coherently identify, report on and present the importance and direction of each feature that makes up X. Thus far, I've taken a two-step approach: Use a linear regression to observe coefficients Use a random forest to observe feature importance The code First, I try to get the directional impact of each feature: from sklearn import linear_model linreg = linear_model.LinearRegression() linreg.fit(X, y) coef = linreg.coef_ ... Second, I try to get the importance of each feature: from sklearn import ensemble forest = ensemble.RandomForestRegressor() forest.fit(X, y) importance = forest.feature_importances_ ... Then I multiply the two derived values together for each feature and end up with some value that maybe perhaps could be the information I'm looking for! I'd love to know if I'm on the right track with any of this. Is this a common use case for ML? Are there tools, ideas, packages I should focus on to help guide me? Thank you very much. AI: You don't need the linear regression to understand the effect of features in your random forest, you're better off looking at the partial dependence plots directly, this what you get when you hold all the variables fixed, and you vary one at a time. You can plot these using sklearn.ensemble.partial_depence.plot_partial_dependence. Take a look at the documentation for an example of how to use it. Another type of model that can be useful for exploratory data analysis is a DecisionTreeClassifier, you can produce a graphical representation of this using export_graphviz
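A minimal sketch of the partial dependence route on placeholder data; note that in recent scikit-learn versions the plotting helper lives in sklearn.inspection rather than sklearn.ensemble.partial_dependence:
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=1000, n_features=16, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(forest.feature_importances_)          # rank the features first

# partial dependence for two features of interest (indices are placeholders)
PartialDependenceDisplay.from_estimator(forest, X, features=[0, 1])
plt.show()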
H: How to define person's gender from the fullname? Given person's name, e.g. 'Adjutor Ferguson'. How to define is it a male or female? One solution came to my mind: I have found Person NLP training dataset here mbejda.github.io. And via a machine learning software like Apache Mahout, train it and provide real data. But I am not sure about the accuracy of the results. May be another approach exist? (e.g. scikit-learn.org) AI: That dataset looks like a good starting point. Keep in mind that when you make your own dataset from those datasets you'll want to keep the male to female ratio balanced if you want it to predict both well. It should not matter what machine learning software you use (Apache Mahout, scikit-learn, weka, etc.). Pick one that fits your language of choice since speed will probably not be too much of a concern with the smallish dataset size. As for features, you'd generally use ngrams as your baseline for NLP classification tasks. If you use ngrams here you won't end up with anything very interesting because the model won't generalize to any unseen names. I'd suggest as a feature baseline that you try character ngrams, and maybe something like syllable ngrams for something slightly more advanced (for syllable tokenization see https://stackoverflow.com/questions/405161/detecting-syllables-in-a-word).
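A rough baseline along these lines, using character n-grams from first names; the tiny toy dataset here is made up, and in practice you would load the balanced dataset mentioned above:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

names = ['anna', 'maria', 'sophia', 'julia', 'peter', 'john', 'marcus', 'david']
labels = ['f', 'f', 'f', 'f', 'm', 'm', 'm', 'm']

model = make_pipeline(
    CountVectorizer(analyzer='char_wb', ngram_range=(2, 4)),  # character n-grams
    MultinomialNB())
model.fit(names, labels)

print(model.predict(['adjutor', 'fergusona']))  # generalizes to unseen names via sub-strings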
H: Face Recognition using Eigenfaces and SVM I am new to machine learning. I want to develop a face recognition system using scikit-learn. This is the example given in the tutorials of scikit-learn. I do not understand how the input is provided to the program. How should I load a particular image and make my program predict the label for it? AI: Take a look at the code that you linked to:
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
fetch_lfw_people is a routine that loads the data and is detailed here. Hope this helps!
H: Is there any domain where Bayesian Networks outperform neural networks? Neural networks get top results in Computer Vision tasks (see MNIST, ILSVRC, Kaggle Galaxy Challenge). They seem to outperform every other approach in Computer Vision. But there are also other tasks: Kaggle Molecular Activity Challenge Regression: Kaggle Rain prediction, also the 2nd place Grasp and Lift 2nd also third place - Identify hand motions from EEG recordings I'm not too sure about ASR (automatic speech recognition) and machine translation, but I think I've also heard that (recurrent) neural networks (start to) outperform other approaches. I am currently learning about Bayesian Networks and I wonder in which cases those models are usually applied. So my question is: Is there any challenge / (Kaggle) competition, where the state of the art are Bayesian Networks or at least very similar models? (Side note: I've also seen decision trees, 2, 3, 4, 5, 6, 7 win in several recent Kaggle challenges) AI: One of the areas where Bayesian approaches are often used, is where one needs interpretability of the prediction system. You don't want to give doctors a Neural net and say that it's 95% accurate. You rather want to explain the assumptions your method makes, as well as the decision process the method uses. Similar area is when you have a strong prior domain knowledge and want to use it in the system.
H: Ensemble Model vs Normal model If I get 95+ % accuracy with normal models, should I still consider ensemble models? Why should I choose ensemble models over normal models? AI: Firstly, welcome to the site! When do we use an ensemble model? When there are two (or more) models which perform moderately well, we combine their results to get a model which performs better. In your scenario you already have a model which gives you good results, so what is the point of implementing ensemble models? As @Tagoma said, it depends on your data and your goal. For example, if you are trying to predict stock rates, every 0.01% matters. In such scenarios you need complex algorithms and careful tuning to walk that fine line: do not over-fit, do not over-train, just predict. One way to check whether your model is over-trained is to give it some unseen or perturbed data (for example, add some noise to the training data) and see how it performs. Another important thing to do is to check the predictor importance and see whether any feature is very highly correlated with the target variable. For example, if you are predicting age and you have date of birth as a feature, then of course you would predict with 99.99% accuracy, but that is not what we use ML for. If all of these checks pass and you still achieve that accuracy, it means your model's performance is genuinely good. Finally, whether to implement an ensemble depends on your business problem and your understanding of the business.
H: Warning message in randomForest I want to applicate the randomForest to my data for predicting target variable, but I have got a warnings message saying: Warning message: In randomForest.default(m, y, ...) : The response has five or fewer unique values. Are you sure you want to do regression? I didn't know what was going on? this is my code: library(randomForest) M=data.frame(Type_peau,PEAU_CORPS,SENSIBILITE,IMPERFECTIONS,BRILLANCE ,GRAIN_PEAU,RIDES_VISAGE,ALLERGIES,MAINS, INTERET_ALIM_NATURELLE,INTERET_ORIGINE_GEO,INTERET_VACANCES,INTERET_COMPOSITION,DataQuest1,Priorite2, Priorite1,DataQuest4,Age,Nbre_gift,w,Achat_client) factor_vars <- c("Type_peau","PEAU_CORPS","SENSIBILITE","IMPERFECTIONS","BRILLANCE","GRAIN_PEAU", "RIDES_VISAGE","ALLERGIES","MAINS","INTERET_ALIM_NATURELLE","INTERET_ORIGINE_GEO", "INTERET_VACANCES","INTERET_COMPOSITION","DataQuest1","Priorite2","Priorite1","DataQuest4") head(M) str(M) sample.ind <- sample(2, nrow(M), replace = T,prob = c(0.6,0.4)) cross.sell.dev <- M[sample.ind==1,] cross.sell.val <- M[sample.ind==2,] table(cross.sell.dev$Achat_client)/nrow(cross.sell.dev) table(cross.sell.val$Achat_client)/nrow(cross.sell.val) varNames <- names(M) # Exclude ID or Response variable varNames <- varNames[!varNames %in% c("Achat_client")] # add + sign between exploratory variables varNames1 <- paste(varNames, collapse = "+") # Add response variable and convert to a formula object rf.form <- as.formula(paste("Achat_client", varNames1, sep = " ~ ")) test$Variable1 <- factor(test$Variable1,levels=levels(train$Varialbe1)) # Buiding cross.sell.rf <- randomForest(rf.form,cross.sell.dev,ntree=500,importance=T) plot(cross.sell.rf) Thanks for your help! AI: In fact, the problem was in the type of target variable. It should be assumed as factor and not numeric !! So the solution is just to did like that after reading table: Achat_client=as.factor(Achat_client) Good Luck!
H: Text classification problem using Python or R I am a novice in machine learning and new to NLP. I am looking for ideas on how to solve the below two problems. I have a dataset with two columns, "Titles" and "Description". Titles column has names of clinical lab tests and description column has description about results of corresponding laboratory test( can be seen below). There many titles specific to a particular lab test. Title Description Complete blood test RBC: Normocytic and Normochromic COMPLETE Blood test\ Platelets: Adequate on the smear Blood glucose COLOUR - COLOURLESS Complete blood picture WBC: Total and Differential counts are within normal limits I have only shared a small part of the data frame. Problem 1: I have manually grouped the similar looking titles. For example, I have grouped the titles in the above data frame as "Blood Test". Is there a way to use NLP technique to group similar looking titles ( As shown below). Problem 2: Based on the description, i have manually labeled a particular outcome for a lab test as normal or abnormal. Again i am looking for a way to do this without having to manually label the outcome (As shown below). Title Description Outcome Blood Test RBC: Normocytic and Normochromic Normal Blood Test Platelets: Adequate on the smear Normal Blood Test COLOUR - COLOURLESS Normal Blood Test WBC: Total and Differential counts Normal Any suggestions or resources to help me get started would be appreciated. AI: Take a supervised approach: For each group "Blood test", "Stool test" etc, Take a subset of your rows as a training set, say 200 rows. Create a new numeric or logical column, "IsBloodTest" or similar for this new subset of your dataset For each row in these 200 Classify them manually: if it is a blood test, assign 1, if not assign 0 to "IsBloodTest" Split the "Description" column into word vectors. This creates a Document-Term matrix with only 1 or 0 values in the cells (Word is present/not present) Classify the rows of the sparse new matrix (omitting your own newly manually created attribute) with the Multinomial NaiveBayes algorithm. This assigns another column with "0/1" prediction, say "PredictedBloodTest" to your training set. With these 0/1 values in the two new columns you can build a confusion matrix. Perform Cross validation experiments to check your confusion matrix, or use an extra validation set. Label more rows manually if necessary. If the classification result is good enough, use it for the remaining 1000s of rows from your data set. This procedure is simple, but it might get you results faster than learning more sophisticated approaches first.
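A minimal scikit-learn sketch of steps 4-6, assuming you have already built the small labelled subset; the description strings and 0/1 labels below are placeholders standing in for your roughly 200 manually labelled rows:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = ['RBC: Normocytic and Normochromic',
                'Platelets: Adequate on the smear',
                'Stool occult blood: negative',
                'Urine colour: pale yellow']
is_blood_test = [1, 1, 0, 0]

model = make_pipeline(
    CountVectorizer(binary=True),   # 0/1 document-term matrix, as described above
    MultinomialNB())

# cross-validated check before applying the model to the unlabelled rows
print(cross_val_score(model, descriptions, is_blood_test, cv=2))

model.fit(descriptions, is_blood_test)
print(model.predict(['WBC: Total and Differential counts within normal limits']))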
H: Some questions about feature hashing in the context of document classification I'm trying to understand feature hashing, specifically in the context of document classification. I'm under the impression that it is useful because: (1) it allows us to easily deal with 'new' words/features/predictors that we haven't seen before, and (2) it is rather efficient because it allows us to exploit the sparsity which is common in document/word data. Some questions: (1) Say my word/feature set consists of 50,000 unique words and I use 16 hash bits $(2^{16}=65,536)$. In this setting, is it likely each word gets mapped to one of the 65,536 indices? How does this aid us in the situation where I build a model and try to make a prediction on a document that contains a word we haven't seen before? (2) Say my feature set is 100,000 words and I still use 16 hash bits. Does this effectively mean I have (on average?) $100,000-65,536 = 34,464$ words mapped randomly onto other words? It seems I'm missing a crucial point or idea here. Can anyone recommend some good introductory material? AI: Throughout the hashing process some words may map to the same indices, and for the example you gave, I'll say it's extremely likely you'll have collisions. How the hash table chooses to handle that is up to the implementation. Feature hashing's main purpose is for when you have a large amount of data and that data tends to be sparse. Words tend to be a good example because matrices of word counts tend to be rather sparse. The hashing trick is useful for large datasets because the more traditional ways of handling text data essentially require two passes over the dataset: you have to go over the dataset once to create a dictionary and then again to make your transformation. For large data this can be a rather expensive process. Feature hashing requires only a single pass over the data. This also enables online learning, which brings me to this point. I wouldn't say that feature hashing helps with new words so much as it simply handles them for you. In traditional approaches you have to ignore the new word or create a new dictionary. Feature hashing will simply map a new word to an existing index, or to an unused one if there happens to be space. The last point is that feature hashing rarely, if ever, improves the performance of a model. It also hurts the ability to be introspective. You are going to lose information, and in fact, you can view feature hashing as a way to introduce noise. However, research indicates that in large sparse data, the noise does not impact performance in major ways. References: Hashing Function, Large Scale Hashing, Short idea behind hashing
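In scikit-learn the trick is available out of the box; a minimal sketch on a couple of toy documents, using the same 16-bit setup as in the question:
from sklearn.feature_extraction.text import HashingVectorizer

docs = ['the cat sat on the mat',
        'a completely new word like floccinaucinihilipilification is no problem']

vectorizer = HashingVectorizer(n_features=2**16)  # 65,536 hashed columns, no dictionary kept
X = vectorizer.transform(docs)                    # single pass, works the same on unseen words

print(X.shape)        # (2, 65536)
print(X.nnz)          # only the non-zero (hashed) entries are stored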
H: Input normalization for ReLU? Let's assume a vanilla MLP for classification with a given activation function for the hidden layers. I know it is a known best practice to normalize the input of the network between 0 and 1 if sigmoid is the activation function, and between -0.5 and 0.5 if tanh is the activation function. What about ReLU? Should I normalise the network input between 0 and 1, -0.5 and 0.5, or -1 and 1? Are there any known best practices? I am not talking about normalisation of the input of the ReLU, like using Batch Normalisation just before or just after the ReLU: https://arxiv.org/pdf/1508.00330 I am talking about normalising the input of the whole network. AI: You have to normalize your data to accelerate the learning process, and based on experience it's better to normalize your data in the standard manner: mean zero and standard deviation one. Mapping to other small intervals near zero may also be fine, but that usually takes more time to train than standardization. If you use ReLU, again based on experience, you have to normalize your data and use standard initialization techniques for your weights, like the He or Glorot methods. The reason is that you should avoid any single activation becoming too large, because your net would then depend too heavily on that activation and you may run into overfitting problems. Because ReLU has no upper limit on its output, you have to normalize the input data and also use initialization techniques that avoid producing large weight values. For more information I encourage you to take a look here and here.
H: What is the BLEU score used in Google Brain's "Attention Is All You Need" paper? Google Brain's Attention Is All You Need paper on sequence-to-sequence translation reports: Our model achieves 28.4 BLEU on the WMT 2014 Englishto-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU but the Wikipedia entry on BLEU says: BLEU’s output is always a number between 0 and 1 What definition of BLEU is the Google Brain paper using? I could not find a separate definition in the paper itself. AI: BLEU (Bi Lingual Evaluation Understudy) is an algorithm for evaluating the quality of text which has been machine-translated (MT) from one natural language to another. BLEU is typically measured on a 0 to 1 scale, with 1 as the hypothetical “perfect” translation. Google uses the same definition of BLEU but multiplies the typical score by 100.
H: Features selection/combination for random forest I am working on using random forest to predict 1 or 0. I have about 20 variables available for modeling. I realized that if I put different variables will have different accuracy/sensitivity/specificity. I am wondering if there is a test or method can tell me which variables combination has the highest accuracy? Or which variable combination has the highest sensitivity and specificity respectively? Thanks in advance! AI: The Random Forest model in sklearn has a feature_importances_ attribute to tell you which features are most important. Here is a helpful example. There are a few other algorithms for selecting the best features that generalize to other models such as sequential backward selection and sequential forward selection. In the case of sequential forward selection, you begin by finding the single feature that provides you with the best accuracy. Then, you find the next feature in combination with the first that gives you the best accuracy. This pattern continues until you find $k$ features, where $k$ is the number of features you want to use. Sequential backward selection is just the opposite, where you start with all of the features and remove those which inhibit your accuracy the most. You can find more information on these algorithms here.
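A rough sketch of both ideas with scikit-learn on placeholder data: ranking by feature_importances_, followed by a simple greedy forward selection scored with cross-validation:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranked = np.argsort(forest.feature_importances_)[::-1]
print(ranked[:5])                      # the five most important features

# greedy forward selection of k features
k, selected = 5, []
remaining = list(range(X.shape[1]))
while len(selected) < k:
    scores = [(cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                               X[:, selected + [f]], y, cv=3).mean(), f)
              for f in remaining]
    best_score, best_f = max(scores)
    selected.append(best_f)
    remaining.remove(best_f)
    print(selected, round(best_score, 3))
You can score with accuracy as above, or swap in a sensitivity/specificity-oriented metric via the scoring argument of cross_val_score.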
H: XGBoost Classification Probabilities higher than RF or SVM? I am using Random Forests, XGBoost and SVMs to classify whether the home team wins or the away team wins their bowl game (in college football). I trained the models on all the games during the season. I've come across something that is a bit weird and can't explain. I calculated a prediction confidence by subtracting the class probabilities. The XGBoost confidence values are consistency higher than both Random Forests and SVM's. I've attached the image below. I did some hyper-parameter tuning for all of my models and used the best parameters based on testing accuracy. Random Forest: 700 trees 15 variables randomly sampled (mtries) minimum split criteria of 5 rows. XGBoost: 0.5, Learn rate gbtree as my booster max depth of 6 SVM: RBF kernel C (slack) of 1 0.01, Sigma I wasn't clear with my question: Why exactly does XGBoost prefer one class greatly to the other? In comparison to these other methods. I'm trying to figure out why my prediction confidences of a class are so high for XGboost. AI: Rather than answering why XGBoost give very confident predictions, I will answer why random forest and SVM give not-so-confident predictions. Random forest probability estimates are given by the percentage of the forest that predicted a particular class. For example, if you have $100$ trees in your forest and $81$ of them predict some class for some example, the probability estimate for that example belonging to that class is calculated to be $\frac{81}{100} = 0.81$. Because of the random nature of the ensemble members, it's very unlikely that each individual tree will end up with the correct prediction, even if the majority do. This makes probability estimates from random forests shy away from the extreme ends of the scale. SVM is a slightly different case, because they are unable to produce probability estimates directly. Typically, Platt scaling (essentially logistic regression) is used to scale the SVM output to a probability estimate. This has the added benefit of calibrating the probability estimates, meaning the predicted probability is quite accurate - in other words, if a probability of $0.8$ is given for a prediction, it actually has approximately an $80\%$ chance of being correct. For a problem like this where there's a lot of noise (underdog teams do win sometimes, and there's a lot of evenly matched games that are hard to predict), these predictions will tend not to be overconfident. I don't have a good reason as to why XGBoost is possibly overconfident, but it has been observed in the past that additive boosting models tend to provide distorted probability estimates without applying post-training calibration e.g. here, here & here.
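If the overconfident probabilities are a practical concern, post-hoc calibration of the XGBoost model is straightforward; a rough sketch, assuming the xgboost scikit-learn wrapper (XGBClassifier) is available and using synthetic data in place of the game features:
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = XGBClassifier(max_depth=6, learning_rate=0.5, n_estimators=200)
calibrated = CalibratedClassifierCV(raw, method='sigmoid', cv=5)  # Platt-style scaling
calibrated.fit(X_train, y_train)

print(calibrated.predict_proba(X_test)[:5])  # probabilities pulled away from the 0/1 extremes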
H: What are similarity and distance metrics in classification? I have an assignment to train a model to classify text data. The brief for the assignment mentions that for any learning model used, I have to provide reasoning for the similarity or distance metric used. What does this refer to? My initial thought was that, for example, in logistic regression, the normalisation used would be L1 (Minkowski) or L2 (Euclidean). Would this be correct? AI: The choice of the metric is somewhat dependent on how you are representing the text. A common metric for vector space models would be cosine similarity. Cosine similarity: K(X, Y) = <X, Y> / (||X|| * ||Y||) Quoting the question: "My initial thought was that, for example, in logistic regression, the normalisation used would be L1 (Minkowski) or L2 (Euclidean). Would this be correct?" In vector space models for text representation, the vectors are high-dimensional and sparse. When using the L1 or L2 norm, there is a contribution from every dimension that is non-zero in either vector. However, for cosine similarity, since it is based on a dot product, a dimension only contributes when the corresponding elements of both vectors are non-zero. This sparsity makes cosine similarity a better choice for such comparisons.
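As an illustration of the cosine choice on sparse text vectors, a small sketch with made-up documents:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances

docs = ['the cat sat on the mat',
        'the dog sat on the log',
        'stock prices fell sharply today']

X = TfidfVectorizer().fit_transform(docs)   # sparse, high-dimensional vectors

print(cosine_similarity(X))                 # high for the two similar sentences
print(euclidean_distances(X))               # L2 distances on the same vectors, for comparison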
H: Does keras categorical_cross_entropy loss take incorrect classification into account I was looking at keras source here which calculates cross entropy loss using: output /= tf.reduce_sum(output, reduction_indices=len(output.get_shape()) - 1, keep_dims=True) # manual computation of crossentropy epsilon = _to_tensor(_EPSILON, output.dtype.base_dtype) output = tf.clip_by_value(output, epsilon, 1. - epsilon) return - tf.reduce_sum(target * tf.log(output), reduction_indices=len(output.get_shape()) - 1) target is the truth data, which is 0 or 1, and output is the output of the neural net. So it looks like the loss is of the form $$J_{y'} (y) = - \sum_{i} y_{i}' \log (y_i)$$ where $y_i$ is the model output for class $i$, and $y_i'$ is the truth data. Does this mean the errors for $y_i' = 0$ do not contribute to the loss? Why isn't the formula $$J_{y'}(y) = - \sum_{i} ({y_i' \log(y_i) + (1-y_i') \log (1-y_i)})$$ used? AI: Does this mean the errors for $y_i=0$ do not contribute to the loss? That is correct. However, the respective weights that connect to wrong neurons will still have gradients due to the error, and those gradients will be influenced by the size of each incorrect classification. That is due to how softmax works: $$\hat{y}_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$ (where $z_i$ is the pre-softmax value of each neuron, a.k.a. the logit) . . . weights that affect one neuron's pre-transform value affect the post-transform value of all neurons. So those weights will still be adjusted to produce a lower $z_j$ value for the incorrect neurons during weight updates. Why isn't the formula $$J_{y'}(y) = - \sum_{i} ({y_i' \log(y_i) + (1-y_i') \log (1-y_i)})$$ used? It is not clear why when selecting a single class, that you would care how probability estimates were distributed amongst incorrect classes, or what the benefit would be to drive the incorrect values to be equal. For instance if $y' = [1, 0, 0, 0]$ then using the suggested formula for $J_{y'}(y)$ gives ~ 0.67 for $y = [0.7, 0.1, 0.1, 0.1]$ and ~0.72 for $y = [0.73, 0.26, 0.05, 0.05]$, yet arguably the second result is better. However, you would use this loss when dealing with non-exclusive classes (where the outputs would use sigmoid as opposed to softmax activation).
H: ANN on Pattern Recognition I have been trying to apply a simple neural network using keras to predict a sequence of numbers and the rule is if the input integer is odd it should be 4 and if its even it should be 2. Yet the neural network gets stuck at a 60% accuracy rate. Anyone know a solution to this? from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers.normalization import BatchNormalization from sklearn.model_selection import cross_val_score from keras.wrappers.scikit_learn import KerasClassifier import numpy as np def gen(x): if (x%2==0): return 2; else: return 4; a = [] for i in range(1,100001): a.append([i,gen(i)]) a = np.array(a) x = a[:,0:1] y = a[:,1:2] def MakeClassifier(): network_classifier = Sequential() network_classifier.add(Dense(units=2,kernel_initializer="uniform",activation="relu",input_dim=1)) #Hidden Layer1 taking into account number of inputs(independant variables(x) network_classifier.add(BatchNormalization()) network_classifier.add(Dense(units=1,kernel_initializer="uniform",activation="sigmoid"))#OutPutLayer network_classifier.compile(optimizer="adam",loss="binary_crossentropy",metrics=["accuracy"])#If multicategorical then categorical_crossentropy return network_classifier classifier = KerasClassifier(build_fn= MakeClassifier , batch_size = 10 , epochs = 1000) classifier.fit(x,y,epochs=100,batch_size=1000) print(classifier.predict([[6],[7]])) #Should Predict 2 and 4 AI: There are two possible reasons for this result: Low number of training examples Using dense layers without batch normalization You have a relatively deep network and your training set size is small. In such cases if you run your model too many times you will certainly overfit the training data. The reason is that whenever you have a powerful model that can learn complicated functions and if you provide low number of training examples, it has capability to fit the data and not learn it. Suppose that you have a fitting problem in calculus and you have 4 points and you also have polynomials with degree of four. In such cases you can fit the data exactly without any error but the point is that you are fitting it, not learning it. In your case your model is powerful and it tries to fit the data, not to learn it. The cure is to provide enough training data. The reason that your model can not learn is that you are stacking dense layers without batch normalization. each time you update the weights, the output of the deep layers change and this cause the covariat shift. To avoid this problem use batch normalization. As a solution for you I recommend you to putting just two hidden layers and manipulate the number of units in those layers and provide more training examples. Your task can easily be learned by a two hidden-layer-network. After struggling for about one day finally I want to express my opinion about the problem. Depending on the problems, they may be solved using machine-learning or other techniques. Machine learning problems are those which you can not define an appropriate function to map the input to output or it may be so much hard to do so. Whenever you have the function, it can be used as the hypothesis, the final goal of machine learning algorithms. I tried hard and put so much time on this code and I get the same result. Nothing is going to be learned at all. I have been trying for about one day and still no progress has been seen. To explain the reason, the data is so much hard to be learned! 
for imagining how difficult it is, I recommend you to write numbers from one to ten in a straight line and put a line between consecutive numbers. The numbers are endless, so you will have no generalization because the boundaries that are going to be found will work just for the two neighbor numbers. This means that if you use the current features, you can not separate, learn, your data. I tried to do, somehow, feature engineering and used the following code to solve the problem: import keras import numpy as np from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers.normalization import BatchNormalization from keras import regularizers from keras.optimizers import Adam def gen(x): if (x % 2 == 0): return 0; # represents 2 else: return 1; # represents 4 a = [] for i in range(1,100001): temp = np.random.randint(0, 10000000) a.append([temp, temp ** 2, temp ** 3, temp ** 4, gen(temp)]) a = np.array(a) x = a[:, 0: a.shape[1] - 1] y = a[:, a.shape[1] - 1:] mean_of_x = np.mean(x, axis = 0, keepdims = True) std_of_x = np.std(np.float64(x), axis = 0, keepdims = True) x = (x - mean_of_x) / std_of_x n_classes = 2 y = keras.utils.to_categorical(y, 2) percentage = 95 / 100 limit = int(percentage * x.shape[0]) x_train = x[: limit, :] y_train = y[: limit, :] x_test = x[limit: , :] y_test = y[limit: , :] x_train.shape model = Sequential() model.add(Dense(1000, activation='relu', input_shape=(x_train.shape[1],))) model.add(BatchNormalization()) # model.add(Dropout(0.5)) model.add(Dense(1000, activation='relu')) model.add(BatchNormalization()) # model.add(Dropout(0.5)) model.add(Dense(n_classes, activation='softmax')) model.summary() model.compile(loss = 'categorical_crossentropy', optimizer = keras.optimizers.Adam(lr = 0.0001, decay = 1.5), metrics=['accuracy']) model.fit(x_train, y_train, batch_size = 128, epochs = 2000, verbose = 1, validation_data=(x_test, y_test), shuffle = True, class_weight = {0: 10, 1: 1}) As you can see, the above code uses high order polynomial. Surprisingly, no progress was seen here too. this model has the mentioned problem for the previous feature in higher dimensions. That's why the learning does not happen here too. the point here is that although you can not learn using the current feature, number itself, or its high order polynomials, you already have a solution for the problem. Instead of passing the number itself to the learning problem, pass the modulus two of the current number. This feature is so much easy to be learned. you may need just one unit.
H: Small amount of training data set for naive Bayes classifier for binary classification I'm implementing a prediction system for young cricketers in the ODI format using a Naive Bayes classifier. The output of the system is a prediction of whether a young player is a rising star or not. I have collected data from the Statsguru API of ESPNcricinfo, but I'm getting only about 300 player records from ODIs. Is that too small for a training dataset? AI: In machine learning, more data generally means more accuracy, and as you mentioned you only have about 300 samples. With that little data the classifier has limited room to learn whom it should select, but if you have a small number of classes and features you may still get reasonable results. In a project of mine based on sensor data I used only about 100 records from a single sensor; it actually predicted quite well, reaching an accuracy of about 77%, and the accuracy grew once I increased the dataset. Just don't train it on noise: outliers and irrelevant features will hurt your accuracy a lot.
H: Multiclass classification with Neural Networks Let's suppose I wanted to classify some input as one of three categories using a simple neural network. The output of my network consists of three columns (one for each possible category, I assume) with values between 0 and 1. Moreover, the single rows add up to precisely one when adding the three columns together. Is it possible to interpret the output as the probability of my input belonging to each single category? AI: Indeed, this is the standard interpretation of continuous classifier outputs, not only for neural networks, but for the more general case called Softmax Regression. Thus, provided that you have used softmax activation on the final layer (in order, among other things, to ensure that your outputs indeed sum up to 1), you can interpret the continuous outputs as the respective probabilities of a particular data sample belonging to each one of your classes. See also this (rather unfortunately titled) discussion at SO: How to convert the output of an artificial neural network into probabilities?
H: Difference between interpolate() and fillna() in pandas Since interpolate and fillna method does the same work of filling na values. What is the basic difference between the two. What is the significance of having these two different methods?? Can anyone explain me in layman terms. I already visited through the official documentation and wanted to know the difference AI: fillna fills the NaN values with a given number with which you want to substitute. It gives you an option to fill according to the index of rows of a pd.DataFrame or on the name of the columns in the form of a python dict. But interpolate is a god in filling. It gives you the flexibility to fill the missing values with many kinds of interpolations between the values like linear (which fillna does not provide) in the example provided below and many more interpolations possible. For example >> import pandas as pd, numpy as np >> df = pd.Series([1, np.nan, np.nan, 3]) >> df.interpolate() 0 1.000000 1 1.666667 2 2.333333 3 3.000000 dtype: float64 Pandas documentation on fillna and interpolate is very clear on this.
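To make the contrast concrete, here is a small sketch showing the same Series filled with fillna (a constant, or a simple forward fill) versus interpolate (values computed from the neighbours):
import pandas as pd
import numpy as np

s = pd.Series([1, np.nan, np.nan, 3])
print(s.fillna(0))                # 1, 0, 0, 3      -> every NaN replaced by the given value
print(s.fillna(method='ffill'))   # 1, 1, 1, 3      -> NaNs copied from the previous value
print(s.interpolate())            # 1, 1.67, 2.33, 3 -> NaNs estimated between the neighbours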
H: Is there a way to set a different activation function for each hidden unit in one layer in keras? I'm trying to set a different activation function for each hidden unit in a layer. Is this possible in Keras with 'Concatenate'? AI: If I understand the point correctly, you can use code similar to the following: from keras.layers import merge, Dense, Input input = Input(shape=(256, 256, 3)) seq1 = Dense(1, activation = 'relu')(input) seq2 = Dense(1, activation = 'sigmoid')(input) seq3 = Dense(1, activation = 'tanh')(input) acum = merge([seq1, seq2, seq3], mode='concat', concat_axis=1) Depending on your task, specify concat_axis accordingly. (In Keras 2 the merge function has been replaced by the Concatenate layer, i.e. keras.layers.concatenate, which plays the same role.)
H: CUDA_ERROR_OUT_OF_MEMORY I have a large network that is somewhat similar to Wavenet. Although (it seems) that my GPU has enough memory, I get an out of memory error on fitting (see logs below). Any idea? How can I troubleshoot these kind of CUDA driver issues? 2017-12-22 23:32:05.288986: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\core\common_runtime\gpu\gpu_device.cc:955] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate (GHz) 1.6575 pciBusID 0000:01:00.0 Total memory: 11.00GiB Free memory: 10.71GiB 2017-12-22 23:32:05.288986: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0 2017-12-22 23:32:05.288986: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\core\common_runtime\gpu\gpu_device.cc:986] 0: Y 2017-12-22 23:32:05.429386: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device : 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0) 2017-12-22 23:32:06.131386: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\stream_executor\cuda\cuda_driver.cc:924] failed to allocate 10.17G (10922166272 bytes) fro m device: CUDA_ERROR_OUT_OF_MEMORY 2017-12-22 23:32:06.599386: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\stream_executor\cuda\cuda_driver.cc:924] failed to allocate 9.15G (9829949440 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2017-12-22 23:32:07.332586: E C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorf low\stream_executor\cuda\cuda_driver.cc:924] failed to allocate 8.24G (8846954496 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY built graph AI: Did you try to start your training with a smaller data set and that worked? How about using generators to gradually input your data? For me that worked quite well (in Keras : model.fit_generator(...))
H: Should the input data be normalized when using keras pre-trained models I want to use a pre-trained VGG16 in Keras. My question is simple: should I normalize the input image before predicting its label? AI: According to Very Deep Convolutional Networks for Large-Scale Image Recognition, which is the paper that first presented VGG: "...The only pre-processing we do is subtracting the mean RGB value, computed on the training set, from each pixel." (Karen Simonyan, Andrew Zisserman). So yes, the input should be preprocessed in the same way. Keras provides a preprocess function based on the above-mentioned principle. Check out: tensorflow.keras.applications.vgg16.preprocess_input. If it is in the TF version, it's certainly in the standalone Keras version as well.
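For completeness, a minimal usage sketch (the image path is only a placeholder); preprocess_input applies the VGG-style mean subtraction described in the paper before the image is fed to the network:
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

model = VGG16(weights='imagenet')
img = image.load_img('some_image.jpg', target_size=(224, 224))  # placeholder path
x = image.img_to_array(img)       # shape (224, 224, 3), raw pixel values
x = np.expand_dims(x, axis=0)     # add the batch dimension
x = preprocess_input(x)           # subtract the ImageNet mean values, as in the paper
preds = model.predict(x)
print(decode_predictions(preds, top=3))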
H: Backpropagation in other fields Powerful techniques are sometimes rediscovered by various disciplines at different points in time, because the particular scientific fields do not overlap or interact. A couple of ML researchers have pointed out, that the back-propagation (Rumelhart et al, 1986) algorithm (a solution to credit-assignment) has been rediscovered by many fields. In what other fields is backprop used? Under which names and in which context? AI: Beyond its use in deep learning, backpropagation has been used in many other areas, ranging from weather forecasting to analyzing numerical stability. In fact, the algorithm has been reinvented several times in different fields. The general, application independent, name is "reverse-mode differentiation" [3]. The modern version of backpropagation, also known as automatic differentiation, was first published by Seppo Linnainmaa [1] in 1970. He used it as a tool for estimating the effects of arithmetic rounding errors on the results of complex expressions [2]. Gerardi Ostrowski discovered and used it some five years earlier in the context of certain process models in chemical engineering [2]. In the sixties Hachtel et al. considered the optimization of electronic circuits using the costate equation of initial value problems and its discretizations to compute gradients in the reverse mode for explicitly time-dependent problems [2]. Others researchers that discovered it include Bernt Speelpenning. He arrived at the reverse mode via compiler optimization when asked to automatically generate efficient codes for Jacobians of stiff ODEs [2]. More info can be found in: Deep Learning in Neural Networks: An Overview and in Who invented backpropagation? and references therein. [1] Who invented backpropagation? [2] Who Invented the Reverse Mode of Differentiation? [3] Calculus on Computational Graphs: Backpropagation
H: Feature agglomeration: Is it testing interactions? I have been looking at feature agglomeration in Python's scikit-learn. According to the user guide, feature agglomeration "applies Hierarchical clustering to group together features that behave similarly". Does this mean it is testing for interactions between features and groups features that do interact together? What does "behave similarly" mean in this context? AI: From the documentation: Similar to AgglomerativeClustering, but recursively merges features instead of samples. In standard agglomerative clustering you receive a matrix $M^{n \times m}$ representing $n$ samples of dimension $m$ that you want to cluster. In feature agglomeration the algorithm clusters the transpose of the matrix, i.e. $M^T$ so it clusters $m$ samples of dimension $n$, these samples represent the features. The default distance used to cluster the features (samples in the transpose matrix) is the euclidean distance, but you can also use l1, cosine and others. For example suppose you have 3 samples of dimension 3 (a matrix 3x3 matrix):: ---------------------------------- | feature1 | feature2 | feature3 | ---------------------------------- | 1 | 1.05 | 10 | ---------------------------------- | 1 | 1.05 | 10 | ---------------------------------- | 2 | 2.05 | 20 | ---------------------------------- If you want to reduce the dimension of your dataset to 2 dimensions, the algorithm clusters together feature1 and feature2, and leaves feature3 unchanged, the new matrix is: ---------------------------------- | feature1 - feature2 | feature3 | ---------------------------------- | 1.025 | 10 | ---------------------------------- | 1.025 | 10 | ---------------------------------- | 2.025 | 20 | ---------------------------------- The resulting feature is determined by the pooling function in the case above it computes the arithmetic mean. See this for a more in depth usage example.
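A minimal scikit-learn sketch of this behaviour, reproducing the toy matrix above (the exact pooled values depend on the pooling function, which defaults to the mean):
import numpy as np
from sklearn.cluster import FeatureAgglomeration

X = np.array([[1.0, 1.05, 10.0],
              [1.0, 1.05, 10.0],
              [2.0, 2.05, 20.0]])

agglo = FeatureAgglomeration(n_clusters=2)   # euclidean distance and ward linkage by default
X_reduced = agglo.fit_transform(X)

print(agglo.labels_)   # which cluster each original feature was assigned to
print(X_reduced)       # feature1 and feature2 pooled together, feature3 kept apart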
H: Evaluating machine learning model with missing features I am working on a credit risk binary classification problem. The classes are GoodPayers and BadPayers. The training set has variables/features that contains: DemoGraphics Data such as - Age, Education, Loan Amount, Interest Rate Behavioral Data such as - Payment in Month1, Payment in Month2, Payment in Month3, Payment delay in Month1, Payment delay in Month2. The 10-fold cross-validation has 0.82 AUC on this set. However, the unseen data just contains the 'Demographics Data' and does not have 'Behavioral Data' of Payment. How do we deploy/test the model based on DemoGraphics Dataset only? AI: If less than 20-25% of behavioral data is missing, maybe you could try to impute missing data using one of the following solutions : Impute missing behavioral data using some business rule or by training a machine learning model with demographics data as input and behavioral data as output variable. Impute missing data with feature mean/median. Impute missing data by picking up random value in the feature distribution (hot-deck). In case you have more than 20-25% missing data, it will be really hard to impute values. I think in this situation you should consider creating a new model such as : The new model doesn't use behavioral data anymore. The new model is based on a different train-val-test split in order to have behavioral data in each dataset. If you can't create a new model neither impute missing data, I guess hot-deck would be the best option you have to avoid bad performance on unseen data.
H: Error 'Expected 2D array, got 1D array instead:' I get this error while performing a simple fitting operation on the Titanic dataset. The following is my code: data = pd.read_csv(r'.\Desktop\DS\Titanic\train.csv') sex_train = data['Sex'].map({'male':0,'female':1}) survived_train = data['Survived'] sex_survivor_tree = GaussianNB() sex_survivor_tree.fit(sex_train,survived_train) AI: This is a bit tricky. scikit-learn expects the feature input X to be 2-dimensional (for example a pandas.DataFrame or a 2D numpy array), but in your code the variable sex_train is a 1-dimensional pandas.Series. Try the following code, which converts the Series into a single-column DataFrame: sex_train = data['Sex'].map({'male':0,'female':1}).to_frame()
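Alternatively (a small sketch of the same idea), you can keep the Series and just reshape the underlying values into a single-column 2D array, which is exactly what the error message asks for:
import pandas as pd
from sklearn.naive_bayes import GaussianNB

data = pd.read_csv(r'.\Desktop\DS\Titanic\train.csv')
X = data['Sex'].map({'male': 0, 'female': 1}).values.reshape(-1, 1)  # shape (n_samples, 1)
y = data['Survived']

clf = GaussianNB()
clf.fit(X, y)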
H: Is there an intuitive explanation why some neural networks have more than one fully connected layer? I have searched online, but am still not satisfied with answers like this and this. My intuition is that fully connected layers are completely linear. That means no matter how many FC layers are used, the expressiveness is always limited to linear combinations of the previous layer. But mathematically, one FC layer should already be able to learn the weights to produce exactly the same behavior. Then why do we need more? Did I miss something here? AI: There is a nonlinear activation function in between these fully connected layers. Thus the resulting function is not simply a linear combination of the nodes in the previous layer.
H: Set for building ROC curve and choosing logistic regression cut-off I'm building a logistic regression classifier for binary classification. I have trained it and am going to choose a cut-off value using the ROC curve. But which set should I use for it: training or validation? AI: Usually you generate the ROC curve and choose the threshold on the training data. Then, with the selected threshold, you can report the accuracy, sensitivity and recall results you reach on your validation set.
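A minimal sketch of how this could look with scikit-learn (the variable names clf, X_train, y_train, X_val, y_val are placeholders for your fitted model and data splits; here the threshold is picked on the training set by maximising Youden's J statistic, which is just one possible criterion):
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# probabilities predicted by the fitted logistic regression on the training set
p_train = clf.predict_proba(X_train)[:, 1]
fpr, tpr, thresholds = roc_curve(y_train, p_train)

best = np.argmax(tpr - fpr)    # Youden's J = TPR - FPR
cutoff = thresholds[best]

# apply the chosen cut-off to the validation set and report metrics there
p_val = clf.predict_proba(X_val)[:, 1]
y_pred_val = (p_val >= cutoff).astype(int)
print(cutoff, roc_auc_score(y_val, p_val))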
H: Questions about CNN: weights and biases I have a question regarding CNNs. I do understand how they work on the surface. Very simply put, they are deep NNs for images. I'll use this example as a reference for this question. In the example, they are not initializing weights and biases anywhere. They are using the tf.layers.conv2d() function in the example. Let's focus only on 2 of the function's arguments: filters: specifies the number of filters to apply kernel_size: specifies the size of each filter Questions: We are not defining these filters. Does TensorFlow define them for us, and in general, if I were to use this for a different set of images, how does this work? Does this filter include both weights and biases? If not, then what exactly does the GradientDescentOptimizer() defined in the above example update after each training step? I understood the code and understand how the entire process works. I also understand convolution and how it works. But I'm trying to apply these concepts and implement my own code using TensorFlow and CNNs, and I'm kind of stuck here. AI: If you do not specify them, as is clear from the signature of the function you are referring to, the function will use the default value for them. For instance, you can see that the default value for the stride is (1, 1), which means that if you don't define the stride the method will use that default value. So the fact that you don't specify something does not mean it isn't there; in programming, default arguments save the library authors from having to define many different overloads of the same function. As for the second question: again, if you don't specify the initial weights, TensorFlow itself will use the Glorot method for initialization. So the filters definitely do have values, called weights (plus biases, unless you disable them), so that operations like convolution (actually cross-correlation) can be applied, and these are exactly what the optimizer updates after each training step. As a recommendation, I highly suggest taking a look here and here.
H: python sklearn decision tree classifier feature_importances_ with feature names when using continuous values I'm using the sklearn Decision Tree Classifier with some continuous features. When I run export_graphviz I see the same features in more than one node and with different values. Example: I would like to take the top important ones and want to use feature_importances_ for that. The problem is that feature_importances_ is an array without reference to the tree nodes. I have the original features, but as each one can appear more than once in the tree, I'm not sure how to relate importance to a node. AI: I think you are mixing two different things here. feature_importances_ - this is an array which reflects how much each of the model's original features contributes to overall classification quality. The feature positions in the tree - these are a mere representation of the decision rules made at each step in the tree. Relating a feature's position(s) in the tree to its importance is not trivial. There are some potential heuristics for understanding the relation between the two. If a feature doesn't appear in the tree it has 0 importance, and generally the higher the feature is in the tree, the more important it is (assuming it is being compared to another feature on the same branch).
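A small sketch of how to tie the importances back to column names and take the top ones (this assumes your model clf was trained on a DataFrame X with named columns):
import pandas as pd

importances = pd.Series(clf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))  # top 10 features by importance

# note: a feature appearing in several nodes is fine; its importance is the total
# (impurity-weighted) contribution summed over all the splits in which it is used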
H: When using numerical duplicates for categorical data, should new columns be added or values be converted? If this is the dataframe, in pandas +--------+--------+-------+ | Col1 | Col2 | Sex | +--------+--------+-------+ | Value | Value | F | | Value | Value | M | | Value | Value | M | | Value | Value | Other | | Value | Value | F | | Value | Value | M | +--------+--------+-------+ Should it be converted to +--------+--------+-------+-------+ | Col1 | Col2 | Sex_M | Sex_F | +--------+--------+-------+-------+ | Value | Value | 0 | 1 | | Value | Value | 1 | 0 | | Value | Value | 1 | 0 | | Value | Value | 0 | 0 | | Value | Value | 0 | 1 | | Value | Value | 1 | 0 | +--------+--------+-------+-------+ or to this +--------+--------+-------+ | Col1 | Col2 | Sex | +--------+--------+-------+ | Value | Value | 1 | | Value | Value | 0 | | Value | Value | 0 | | Value | Value | 2 | | Value | Value | 1 | | Value | Value | 0 | +--------+--------+-------+ AI: It depends on the algorithm you are using. For a linear model (linear / logistic regression, SVM...), you need to create dummy variables, meaning the features "Sex_M" and "Sex_F" as you noticed. However, if you are using tree-based techniques, creating an integer-typed column with Sex in [0, 1, 2] should be sufficient for these algorithms. The reason is that, contrary to linear models, tree-based techniques are non-linear and will evaluate all possible splits to partition your observations. However, the way you map your categorical variable into integers can lead to different tree structures. Below is an example. Suppose you want to predict the y variable using the indexed feature "Sex". On the left chart, there is only a slight difference between category 2 and categories 0 and 1 because the categories are not ordered. This will result in two consecutive splits, which should happen late in the tree building. However, on the right chart, the difference in means between categories is larger than on the left chart, so the split should happen earlier in the tree building. Moreover, you will only need 1 split to make the distinction. To conclude, I think simple indexing can be used with tree-based techniques. I would also recommend sorting your categorical features in an orderly way to ease tree learning.
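In pandas both options are one-liners; a small sketch (the column names follow the example above):
import pandas as pd

df = pd.DataFrame({'Sex': ['F', 'M', 'M', 'Other', 'F', 'M']})

# dummy / one-hot columns for linear models
dummies = pd.get_dummies(df['Sex'], prefix='Sex')          # Sex_F, Sex_M, Sex_Other
df_linear = pd.concat([df.drop('Sex', axis=1), dummies], axis=1)

# single integer column for tree-based models
df['Sex_code'] = df['Sex'].astype('category').cat.codes    # e.g. F=0, M=1, Other=2

print(df_linear.head())
print(df[['Sex', 'Sex_code']].head())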
H: Negative Rewards and Activation Functions I have a question regarding appropriate activation functions with environments that have both positive and negative rewards. In reinforcement learning, our output, I believe, should be the expected reward for all possible actions. Since some options have a negative reward, we would want an output range that includes negative numbers. This would lead me to believe that the only appropriate activation functions would either be linear or tanh. However, I see any many RL papers the use of Relu. So two questions: If you do want to have both negative and positive outputs, are you limited to just tanh and linear? Is it a better strategy (if possible) to scale rewards up so that they are all in the positive domain (i.e. instead of [-1,0,1], [0, 1, 2]) in order for the model to leverage alternative activation functions? AI: This would lead me to believe that the only appropriate activation functions would either be linear or tanh. However, I see any many RL papers the use of Relu. Generally you want a linear output, unless you can guarantee scaling total possible reward to within a limited range such as $[-1,1]$ for $\text{tanh}$. Reminder this is not for estimating individual rewards but for total expected reward when following the policy you want to predict (typically the optimal policy eventually, but you will want the function to be able to estimate returns for other policies visited during optimisation). Check carefully in the papers you mention, whether the activation function is applied in the output. If all rewards are positive, there should be no problem using ReLU for regression, and it may in fact help stabilise the network in one direction if the output is capped at a realistic minimum. However, you should not find in the literature a network with ReLU on output layer that needs to predict a negative return. If you do want to have both negative and positive outputs, are you limited to just tanh and linear? There are likely others, but linear will be far the most common. Is it a better strategy (if possible) to scale rewards up so that they are all in the positive domain (i.e. instead of [-1,0,1], [0, 1, 2]) in order for the model to leverage alternative activation functions? It may sometimes be worth considering scaling rewards by a factor, or normalising them, to limit gradients, so that learning is stable. This was used in the Atari-games-playing DQN network to help the same algorithm tackle multiple games with different ranges of scoring. In continuous problems, the absolute value of reward is usually flexible, you are generally interested in getting the best mean reward per time step. So in that case you could scale so that minimum reward is 0, and use ReLU or other range limited transform in output - as above that might help with numeric stability. In episodic problems without a fixed length, you typically don't have a such a free choice, because the agent is encouraged to end the episode quickly when rewards are negative. This is something you might want for instance if the goal is to complete a task as quickly or energy-efficiently as possible. A good example of this is "Mountain Car" - granting only positive rewards in that scenario would be counter-productive, although you might still get acceptable results with positive reward only at the end and discounting. 
The general case is that rewards can be arbitrarily scaled and centred for continuous problems without changing the agent's goal meaningfully, but only arbitrarily scaled for episodic problems.
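To make the output-layer point concrete, here is a minimal Keras sketch of a Q-value head (the state size and layer widths are arbitrary placeholders): the hidden layers can use ReLU freely, but the final layer is linear so that the predicted returns can be negative:
from keras.models import Sequential
from keras.layers import Dense

n_state_features, n_actions = 4, 2   # arbitrary sizes for the sketch

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(n_state_features,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(n_actions, activation='linear'))  # linear output: returns may be negative
model.compile(optimizer='adam', loss='mse')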
H: Hadoop and input information divided into splits Hadoop divides the input to a MapReduce job into fixed-size pieces called input splits, or just splits. Hadoop creates one map task for each split, which runs the user-defined map function for each record in the split. Having many splits means the time taken to process each split is small compared to the time to process the whole input. So if we are processing the splits in parallel, the processing is better load balanced when the splits are small. Why? AI: The whole big-data ecosystem works on parallel processing. Say we have to process a 100 GB file. If we didn't split the file, all 100 GB would have to be processed by a single JVM (a single map task). If we split the file into 1000 parts of 100 MB each, we can process each part with a different JVM and apply the map function in much less time. This is MPP: Massively Parallel Processing.
H: What is the tag mapping for entity recognition in nltk? When doing entity recognition using NLTK, one gets as a result a Tree with a bunch of words mapped to tags (e.g. Mark -> NNP, first -> JJ, ...). It's not at all clear what all the tags stand for at first glance, and I was unable to find any documentation about these tags in the NLTK docs. >>> from nltk import word_tokenize, pos_tag, ne_chunk >>> sentence = "Mark and John are the first to work at Google from one years old in 39 years." >>> print ne_chunk(pos_tag(word_tokenize(sentence))) (S (PERSON Mark/NNP) and/CC (PERSON John/NNP) are/VBP the/DT first/JJ to/TO work/VB at/IN (ORGANIZATION Google/NNP) from/IN one/CD years/NNS old/JJ in/IN 39/CD years/NNS ./.) I ended up looking into the source code to get the mapping. Posting in case anyone else runs into the same problem. AI: Tag mapping according to the nltk source: { 'CC': 'Coordinating conjunction', 'PRP$': 'Possessive pronoun', 'CD': 'Cardinal number', 'RB': 'Adverb', 'DT': 'Determiner', 'RBR': 'Adverb, comparative', 'EX': 'Existential there', 'RBS': 'Adverb, superlative', 'FW': 'Foreign word', 'RP': 'Particle', 'JJ': 'Adjective', 'TO': 'to', 'JJR': 'Adjective, comparative', 'UH': 'Interjection', 'JJS': 'Adjective, superlative', 'VB': 'Verb, base form', 'LS': 'List item marker', 'VBD': 'Verb, past tense', 'MD': 'Modal', 'NNS': 'Noun, plural', 'NN': 'Noun, singular or mass', 'VBN': 'Verb, past participle', 'VBZ': 'Verb, 3rd ps. sing. present', 'NNP': 'Proper noun, singular', 'NNPS': 'Proper noun, plural', 'WDT': 'wh-determiner', 'PDT': 'Predeterminer', 'WP': 'wh-pronoun', 'POS': 'Possessive ending', 'WP$': 'Possessive wh-pronoun', 'PRP': 'Personal pronoun', 'WRB': 'wh-adverb', '(': 'open parenthesis', ')': 'close parenthesis', '``': 'open quote', ',': 'comma', "''": 'close quote', '.': 'period', '#': 'pound sign (currency marker)', '$': 'dollar sign (currency marker)', 'IN': 'Preposition/subord. conjunction', 'SYM': 'Symbol (mathematical or scientific)', 'VBG': 'Verb, gerund/present participle', 'VBP': 'Verb, non-3rd ps. sing. present', ':': 'colon' }
H: Unable to print in Jupyter Notebook using Pandas I am doing basic data analysis on a csv file in a Jupyter notebook def answer_two(): return (df['Gold']-df['Gold.1']).argmax() answer_two The above code snippet is meant to subtract two columns of the dataframe. I am expecting an answer in the form of a country name, but I am getting the following output. <function __main__.answer_two> I am unable to figure out why this is happening. Occasionally the required output (a country name) does appear, but not every time I run it. AI: Instead of writing answer_two, call answer_two(). Writing just answer_two refers to the function object; you have to call the function to get its return value.
H: Non trainable problems Before facing this question, I always thought non-learnable problems are those where the provided data has a high amount of outliers, those which don't have sufficient features, or those for which the Bayes error is large because the same features appear with different labels. As you can see, the data here seems fine, because the learning should be comparable with human-level inference. A human can distinguish between even and odd numbers just by looking at them. I know that we as human beings do a modulus-two operation in our mind to decide whether a number is even or odd (the feature extraction part), but we do that with just the number itself. It is clear that we cannot find a decision boundary that generalizes, because the inputs have alternating behavior: 1 is odd, 2 is even, 3 is odd, 4 is even, and all the other numbers continue in this manner. I want to know whether this kind of problem, which does not have the problems mentioned above that may cause an algorithm not to learn, has any special name. AI: In this discussion, it is described how a neural network that distinguishes odd and even numbers can be constructed. Your question can be rephrased more generally: is there a function that cannot be learned by a machine learning algorithm? This question was discussed here and here. Both discussions refer to the Universal Approximation Theorem, which basically states that any computable function on a given finite range can be approximated by a neural network. So, what is left is uncomputable functions or undecidable problems. These cannot be learned by a machine-learning algorithm.
H: Testing fit of probability distribution If I have fitted training data to a probability distribution, e.g. a poisson distribution, how can I test this fit on some test data? To fit the poisson distribution I am using R's fitdistrplus package that using MLE for determining the optimal coefficients of a given distribution. Therefore, I have the estimated $\lambda$ for a poisson distribution based on my training data but I am not sure how to test this on some unseen test data. AI: Use chi-square test to check the goodness of fit to a specific distribution http://courses.wcupa.edu/rbove/Berenson/10th%20ed%20CD-ROM%20topics/section12_5.pdf
H: Which classification algorithms can handle 24000 features Which classification algorithms can handle 24000 features? What are their pros and cons? AI: Deep learning algorithms and graphical-model algorithms can handle that scale of features. For example, a typical parsing algorithm using CRF++ computes millions of features. In the case of deep learning, a typical image of 256*256*3 already amounts to 196608 features, where each pixel value in the image is a feature.
H: Is it legal to scrape YouTube videos for training data? Is it legal, at least as long as I am not "selling" the video under my name, to scrape YouTube videos to train a neural network? If it is not, is there a procedure to get permission for the above? I am in academia and require a massive amount of video data that YouTube would be the perfect source in. Note: I couldn't figure out where to post this questions, as it is not technically technical. I am posting it here only because I believe someone here would know a bit or two. Would appreciate any suggestions regarding community this question might have a better chance in. What about Law? AI: It will depend on the rights of the videos themselves, although probably the terms of service of youtube wouldn't agree with it anyway. But you have the YouTube 8M dataset, released by Google for research purposes. YouTube-8M is a large-scale labeled video dataset that consists of millions of YouTube video IDs and associated labels from a diverse vocabulary of 4700+ visual entities. It comes with precomputed state-of-the-art audio-visual features from billions of frames and audio segments, designed to fit on a single hard disk. This makes it possible to get started on this dataset by training a baseline video model in less than a day on a single machine! At the same time, the dataset's scale and diversity can enable deep exploration of complex audio-visual models that can take weeks to train even in a distributed fashion. Edit: And Facebook and MIT just released SLAC (dataset to be released soon, as well as pretrained models for transfer learning). This project presents a novel video dataset, named SLAC (Sparsely Labeled ACtions), for action recognition and localization. It consists of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories.
H: how does minibatch for LSTM look like? Minibatch is a collection of examples that are fed into the network, (example after example), and back-prop is done after every single example. We then take average of these gradients and update our weights. This completes processing 1 minibatch. I read these posts [1] [2], about padding entries in a minibatch so they have same length and about preserving the cell state but the following is still unclear to me: Question part a: How a minibatch entity would look like for LSTM? Say, I want it to reproduce Shakespeare, letter by letter (30 characters to choose from). I launch LSTM, let it predict for 200 characters of a poem, then perform back propagation. (hence, my LSTM works with 200 timesteps). Does this mean my minibatch consist of 1 example whose length is 200? Question part b: If I wanted to launch 63 other minibatches in parallel, would I just pick 63 extra poems? (Edit: Original answer doesn't mention this explicitly, but we don't train minibatches in parallel. We train on 1 minibatch, but train its examples in parallel) Question part C: If I wanted each minibatch to consist of 10 different examples, what would such examples be, and how would they be different from 'what I perceive as a minibatch'? AI: I think you need to distinguish between training and execution of the model. During training, you can use batches, which in your case will be different fragments from Shakespeare. So, a batch will be a list of fragments, and the language model will start from the first character on each element of the batch and do the backward and forward pass. When you execute the model, once it is trained, you would like to see one single example, in which case you can set the batch size to one. I believe this answers your three questions.
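In terms of array shapes, here is a Keras-style sketch of what such a batch looks like (the numbers are the ones from the question: 64 fragments of 200 characters, one-hot encoded over 30 symbols; these sizes are only illustrative):
import numpy as np

batch_size, timesteps, vocab_size = 64, 200, 30
X_batch = np.zeros((batch_size, timesteps, vocab_size))  # one-hot encoded input characters
y_batch = np.zeros((batch_size, timesteps, vocab_size))  # next character at every time step

# each of the 64 rows is a different 200-character fragment (e.g. a different poem),
# and a Keras LSTM layer expects exactly this (batch, timesteps, features) layout
print(X_batch.shape)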
H: state-action-reward-new state: confusion of terms My question may sound like a duplicate of, for example, How is that possible that a reward function depends both on the next state and an action from current state? but I still feel confused. In neural network approximation of the Q function, I follow the experience replay routine. Papers and manuals suggest storing state, action, received reward, and next state information in the experience replay. However when one is about to calculate a maximum of Q value based on the next action, which state do they use: state or next state? If the next state is used to calculate the max Q, then what is the whole purpose of storing the previous state and action information? An example (2D world): An agent is at cell A1 (state). His goal is to get to C3 to get a positive reward. He then moves to B2 (action); a received reward is, let's say, 0. Next state is B2 (after the action was done). Questions: Should one rely on the B2 state to iterate over possible actions from this state (next state) to get an approximation of highest reward (max Q)? Then, why do we store the A1 and move-to-B2 information at all in the replay buffer? Or I am wrong and we just use the A1 and iterate over possible actions (including that to B2) to get the max Q? Edit: I think I have found an answer ). We need to store previous state (A1) and action (move to B2) in order to create the state-action distribution, which will be met with the expected long-term reward distribution, that we get after the next state routine. Right? AI: The TD Target (for learning update) for using $\hat{q}(s,a)$ neural network in Q-learning is: $$r + \text{max}_{a'} \hat{q}(s',a')$$ In order to calculate this, you need a starting state $s$, the action taken form that state $a$, and the resulting reward $r$ and state $s'$. You need the $s, a$ to generate the input to the neural network (what you might call train_X for supervised learning). You need $r, s'$ to generate the TD Target shown above as an output to learn for regression (in supervised learning, that would be train_y). And you need to work through all possible $a'$ based on $s'$ in order to find the maximum value for the TD Target equation used in Q learning - other RL algorithms may use variations of this for calculating the TD Target. This means your suggestions are all close but not quite right. Should one rely on the B2 state to iterate over possible actions from this state (next state) to get an approximation of highest reward (max Q)? Sort of. The B2 state is $s'$ from the equation, so is responsible for calculating the TD target for state-action value $q(s,a)$. Then, why do we store the A1 and move-to-B2 information at all in the replay buffer? You still need to know $s$ is A1, because the representation of A1 (and whichever action is taken) will be the input to your network. Or I am wrong and we just use the A1 and iterate over possible actions (including that to B2) to get the max Q? Iterate over actions from B2 using your neural network to pick the highest estimate. I think I have found an answer ). We need to store previous state (A1) and action (move to B2) in order to create the state-action distribution, which will be met with the expected long-term reward distribution, that we get after the next state routine. Right? I'm not sure I fully understand this, but it does not seem quite right. One thing that might be confusing you is having your actions as "move to state". 
Whilst this is quite normal in many deterministic environments, especially board games, most RL formula and tutorials are written using separate state/action pairs, so whilst getting this straight in your mind, best to find another way to represent actions - e.g. move piece from X to Y, or place new piece at (I,J) . . . you can return to "change state from A1 to B2" as the representation later. This is actually more efficient and is called the after-state representation, but most of the literature will show state-action value functions.
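Putting the pieces together, a minimal sketch of how one replay tuple is turned into a training pair (q_model is a placeholder for a network that maps a state to one Q-value per action, and s, s_next are 1-D numpy state vectors; the done flag marking terminal transitions is an extra detail not discussed above):
import numpy as np

gamma = 0.99

def td_training_pair(q_model, s, a, r, s_next, done):
    # predicted Q-values for the current state; only the entry for action a is changed
    q_values = q_model.predict(s[np.newaxis])[0]
    if done:
        q_values[a] = r
    else:
        q_values[a] = r + gamma * np.max(q_model.predict(s_next[np.newaxis])[0])
    return s, q_values   # (train_X, train_y) for one replayed sample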
H: Merging two different models in Keras I am trying to merge two Keras models into a single model and I am unable to accomplish this. For example in the attached Figure, I would like to fetch the middle layer $A2$ of dimension 8, and use this as input to the layer $B1$ (of dimension 8 again) in Model $B$ and then combine both Model $A$ and Model $B$ as a single model. I am using the functional module to create Model $A$ and Model $B$ independently. How can I accomplish this task? Note: $A1$ is the input layer to model $A$ and $B1$ is the input layer to model $B$. AI: I figured out the answer to my question and here is the code that builds on the above answer. from keras.layers import Input, Dense from keras.models import Model from keras.utils import plot_model A1 = Input(shape=(30,),name='A1') A2 = Dense(8, activation='relu',name='A2')(A1) A3 = Dense(30, activation='relu',name='A3')(A2) B2 = Dense(40, activation='relu',name='B2')(A2) B3 = Dense(30, activation='relu',name='B3')(B2) merged = Model(inputs=[A1],outputs=[A3,B3]) plot_model(merged,to_file='demo.png',show_shapes=True) and here is the output structure that I wanted:
H: Initialize perceptron weights with zero I'm new to data science, so please don't blast me. In a textbook I found: "Now, the reason we don't initialize the weights to zero is that the learning rate (eta) only has an effect on the classification outcome if the weights are initialized to non-zero values. If all the weights are initialized to zero, the learning rate parameter eta affects only the scale of the weight vector, not the direction." Now, why? Can you explain this to me? AI: If you initialize all weights with zeros, then every hidden unit will compute zero, independent of the input. So when all the hidden neurons start with zero weights, all of them will follow the same gradient, and for this reason "it affects only the scale of the weight vector, not the direction". Also, having zero (or equal) weights to start with will prevent the network from learning. The errors backpropagated through the network are proportional to the values of the weights. If all the weights are the same, then the backpropagated errors will be the same, and consequently all of the weights will be updated by the same amount. To avoid this symmetry problem, the initial weights of the network should be unequal. Look at these links for more detail: 1) https://www.quora.com/Why-does-it-work-to-initialize-weights-of-a-deep-Neural-Network-to-zero-plus-some-noise-N-0-epsilon-and-not-anything-else 2) http://staff.itee.uq.edu.au/janetw/cmc/chapters/BackProp/index2.html 3) https://stackoverflow.com/questions/20027598/why-should-weights-of-neural-networks-be-initialized-to-random-numbers 4) https://stats.stackexchange.com/questions/27112/danger-of-setting-all-initial-weights-to-zero-in-backpropagation
H: Classify text labels in to a similar category I'm trying to classify same kind of text labels in to one category. For example, if I have labels like qty, quantity, qty_no all of them should direct to Quantity. Since I'm new to data science, what's the best way to start this kind of thing? AI: One way to do it could be with fuzzy string search. Levenshtein distance algorithm is what you may use for it. ... the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. From wikipedia More on Levenshtein distance - https://stackoverflow.com/a/5859823/5406448
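A small sketch using only the standard library (difflib's ratio is related in spirit to edit distance, though it is not exactly Levenshtein; the category list and the cutoff value are made up for the example and would need tuning on real data):
import difflib

categories = ['Quantity', 'Price', 'Description']   # hypothetical target categories
categories_lower = [c.lower() for c in categories]

def map_label(label, cutoff=0.4):
    # strip separators so 'qty_no' compares as 'qty no'
    cleaned = label.replace('_', ' ').lower()
    match = difflib.get_close_matches(cleaned, categories_lower, n=1, cutoff=cutoff)
    return categories[categories_lower.index(match[0])] if match else None

for label in ['qty', 'quantity', 'qty_no']:
    print(label, '->', map_label(label))
A dedicated library such as python-Levenshtein or fuzzywuzzy gives you the actual edit distance and usually better control over the matching score.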
H: Neural network without matrices Have you ever seen a neural network without matrices? I'm asking, because I'm currently building one for educational purposes. AI: Matrix multiplication is just a simplified notation for a particular set of addition and multiplication operations. You can absolutely represent a neural network without invoking matrix notation, it'll just be really tedious (and it will run slower). Start with a single perceptron and build your way up from there.
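As an illustration of that last suggestion, here is a minimal perceptron written with plain Python loops and lists only, no matrices anywhere (trained on the logical AND function as a toy example):
# toy data: logical AND
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
eta = 0.1   # learning rate

for epoch in range(20):
    for inputs, target in data:
        # weighted sum computed with an explicit loop instead of a dot product
        total = bias
        for w, x in zip(weights, inputs):
            total += w * x
        output = 1 if total > 0 else 0
        error = target - output
        # perceptron learning rule, again applied element by element
        for i in range(len(weights)):
            weights[i] += eta * error * inputs[i]
        bias += eta * error

print(weights, bias)   # a separating line for AND, learned without any matrix operations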
H: Cluster titles or ingredients of food into n-categories I have a dataset which has information about food recipes (in german), that looks like this: Here is a link to a small .csv file (first 1000 rows of my data) https://drive.google.com/file/d/1C7thFlOnDn-oTc6AaDWA3CXXcX8m9NRu The idea is to cluster the recipe names into n-categories so that afterwards I can assign every recipe to a category. Of note, there are tags and ingredients for every recipe, maybe this information helps to refine the clusters? For example the algorithm (maybe a semantic analysis?) should output: categorised recipes into 200 biggest found categories: (hamburger, soup, pizza, ...) Is there a way to do this? Note: I have for every recipe min. 1 image. The idea is to label my images with n-categories, afterwards to train a convolutional neural network with my data. The input would be an food image, the output would be a category. AI: This sounds like a job for latent dirichlet alocation (i.e. topic modeling). https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation https://radimrehurek.com/gensim/wiki.html
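A minimal gensim sketch of that idea (the documents here are tiny made-up ingredient lists; on the real data you would build each document from the recipe title, tags and ingredients, and use a much larger number of topics, e.g. the 200 categories mentioned in the question):
from gensim import corpora, models

docs = [['tomate', 'mozzarella', 'teig', 'basilikum'],   # pizza-like
        ['teig', 'tomate', 'salami', 'kaese'],
        ['karotte', 'kartoffel', 'bruehe', 'lauch'],     # soup-like
        ['bruehe', 'nudeln', 'karotte', 'sellerie']]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)

# assign every recipe to its most probable topic (= category label for the images)
for doc, bow in zip(docs, corpus):
    topic = max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
    print(doc, '-> topic', topic)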
H: Facing a difficult regular expression issue in cleaning text data I am trying to substitute a sequence of words with some symbols from a long string appearing in multiple documents. As an example, suppose I want to remove: Decision and analysis and comments from a long string. Let the string be: s = Management's decision and analysis and comments is to be removed. I want to remove Decision and analysis and comments from s. The catch is, between Decision, and, analysis, and, comments, in s there could be 0, 1 or multiple spaces and newline characters (\n) showing up with no pattern in different documents, for example, one document shows: Management's decision \n \n and analysis\n and \n comments is to be removed while another has a different pattern. How do I account for this and still remove it from the string? I tried the following, of course unsuccessfully: st = 'Management's decision \n \n and analysis\n and \n comments is to be removed' re.sub(r'Decision[\s\n]and[\s\n]analysis[\s\n]and[\s\n]comments','',s) AI: To match runs of whitespace you will need [\s\n]+; note the inclusion of the + (match one or more). Code: Here is a function which builds the regex automatically from a text snippet (note that flags must be passed to re.sub as a keyword argument, otherwise it would be interpreted as the count argument): import re def remove_words(to_clean, words, flags=re.IGNORECASE): regex = r'[\s\n]+'.join([''] + words.split() + ['']) return re.sub(regex, ' ', to_clean, flags=flags) Test Code: st = "Management's decision \n \n and analysis\n " \ "and \n comments is to be removed" print(remove_words(st, 'decision and analysis and comments')) Results: Management's is to be removed
H: How does a decision tree for regression make a prediction? For classification, it is obvious how a decision tree is used to make a prediction: you just have to find the final leaf. However, for regression problems, how can you find the prediction, considering the continuous nature of the variable to predict? AI: It depends on the implementation, but a commonly used approach is a recursive partitioning method called CART. The algorithm works as follows: it searches over every distinct value of your predictors and chooses the split that minimizes the SSE of the two resulting groups of the dependent variable. The error inside the SSE is usually the difference between the actual value and the average of the group's samples, or the difference between the actual value and the output of a linear regression. For each group, the method then recursively splits the predictor values within the group. In practice, the method stops when a certain sample-size threshold is met. The node with the lowest SSE becomes the root node. The prediction for a new observation is then the value stored in the leaf it falls into, typically the mean of the training targets in that leaf. Reference: You can read more about tree-based regressions here: Tree-based regressions
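A quick scikit-learn sketch that makes the leaf behaviour visible: the predictions are piecewise constant, each constant being the mean target of the training points in that leaf:
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.arange(0, 10, 0.5).reshape(-1, 1)
y = np.sin(X).ravel()

tree = DecisionTreeRegressor(max_depth=2)   # only a few leaves, to keep it readable
tree.fit(X, y)

X_new = np.array([[1.0], [4.0], [8.0]])
print(tree.predict(X_new))   # each value is the mean of y over one leaf's training samples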
H: Clustering Data to learned cluster I am new to data science, I have clustered some data using Scipy agglomerative clustering. how can I fit new data into the learned clusters? dm = pdist ( dataset ,lambda u,v: mlpy.dtw_std ( pd.Series(u).dropna().values.tolist(),pd.Series(v).dropna().values.tolist(),dist_only=True )) z = hac.linkage(dm, method='average') cluster = hac.fcluster(z, t=100, criterion='distance') leader = scipy.cluster.hierarchy.fcluster(z, t=100, criterion='distance') I would like to cluster new data into the same clusters, How can I do it? AI: The word "prediction" does not belong to any specific type of machine learning. There is nothing wrong with "predicting" new data to the cluster it belongs to; (e.g. there are many applications that place new customers into pre-discovered market segments). A conditional probability, like that used in classification, is not "stronger" than an unsupervised approach, as it rests its assumption on properly labelled classes; something that is not guaranteed. This is why there are packages that provide a predict function to clustering algorithms. Here is an example using the flexclust package with the kcaa function. That being said, the prediction step is usually handled by a supervised classifier, so the approach would be to sit a classifier on top of your learned clusters (treating cluster assignments as "labels"). You just have to reason about your weaknesses. As stated above, the weakness in classification is the assumption that labelled data is tagged correctly, whereas the weakness in clustering is that your discovered clusters are assumed to be valid. Unsupervised approached cannot be validated the same way it is done with classification. Clustering requires a variety of cluster validity techniques along with domain experience (e.g. show campaign managers your market segments to validate customer types). Ultimately, you are just matching an incoming vector (new data) to the cluster most similar. For example, in k-means this could be accomplished by finding the smallest distance between the incoming vector and all the centroids of your clusters. This kind of pattern matching depends on the data you are using. This works best for clustering techniques that have well-defined cluster objects with exemplars in the center, like k-means. Using hierarchical techniques means you would need to cut the tree to obtain flat clusters, then use the "label" assignment to run a classifier on top. This comes baked with a lot of assumptions, so you need to make sure you understand your data very well, and validate any clusters with non-technical users that have deep domain experience. POSSIBLE APPROACH If you're bent on using hierarchical clustering, then here is the general approach. Note I am not suggesting this is the best way. Every approach comes baked with a number of assumptions. You will need to work to understand your data, attempt many models, validate with stakeholders, etc. 
Readers can use the tutorial by Jörn Hees to get started in hierarchical clustering if needed: Create some example data: from matplotlib import pyplot as plt from scipy.cluster.hierarchy import dendrogram, linkage import numpy as np np.random.seed(42) a = np.random.multivariate_normal([10, 0], [[3, 1], [1, 4]], size=[100,]) b = np.random.multivariate_normal([0, 20], [[3, 1], [1, 4]], size=[50,]) X = np.concatenate((a, b),) Confirm clusters exist in synthetic data: plt.scatter(X[:,0], X[:,1]) plt.show() Generate the linkage matrix using the Ward variance minimization algorithm: (This assumes your data should be be clustered to minimize the overall intra-cluster variance in euclidean space. If not, try Manhattan, cosine or hamming. You can also try different linking options). Z = linkage(X, 'ward') Check the Cophenetic Correlation Coefficient to assess quality of clusters: from scipy.cluster.hierarchy import cophenet from scipy.spatial.distance import pdist c, coph_dists = cophenet(Z, pdist(X)) 0.98001483875742679 Calculate full dendrogram: plt.figure(figsize=(25, 10)) plt.title('Hierarchical Clustering Dendrogram') plt.xlabel('sample index') plt.ylabel('distance') dendrogram( Z, leaf_rotation=90., # rotates the x axis labels leaf_font_size=8., # font size for the x axis labels ) plt.show() Determine the number of clusters (e.g. can be done manually by looking for any large jumps in the dendrogram...see Jörn's blog for plotting function): Retrieve clusters: (using our max distance determined from reading dendrogram) from scipy.cluster.hierarchy import fcluster max_d = 50 clusters = fcluster(Z, max_d, criterion='distance') Map cluster assignments back to original frame: import pandas as pd def add_clusters_to_frame(or_data, clusters): or_frame = pd.DataFrame(data=or_data) or_frame_labelled = pd.concat([or_frame, pd.DataFrame(clusters)], axis=1) return(or_frame_labelled) df = add_clusters_to_frame(X, clusters) df.columns = ['A', 'B', 'cluster'] df.head() Build a classifier using this "labelled" data: Here, I'll just use the original data and the assigned clusters along with a knn classifier: np.random.seed(42) indices = np.random.permutation(len(X)) X_train = X[indices[:-10]] y_train = clusters[indices[:-10]] X_test = X[indices[-10:]] y_test = clusters[indices[-10:]] # Create and fit a nearest-neighbor classifier from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() knn.fit(X_train, y_train) res = knn.predict(X_test) print(res) print(y_test) predicted labels: [2 2 1 1 2 2 1 2 2 1] test labels: [2 2 1 1 2 2 1 2 2 1] As with any classifier, your incoming data needs to be in the same representation as your training data. As new data arrives you run it against the predict function provided by your classifier (here we use sci-kit learn's knn.predict). This effectively assign new data to the cluster it belongs. Ongoing cluster validation would be required in the model monitoring step of the machine learning workflow. New data can change the distribution and results of your approach. BUT, this isn't unique to unsupervised as all machine learning approaches will suffer from this (all models eventually go stale). As argued by Jörn in the reference above, manual inspection typically trumps automated approaches, so regular visual/manual inspection of the flat clusters is recommended.
H: Parzen and k nearest neighbor I have this formula for density estimation: $$p_n(x) = \frac{k_n / n}{V_n}$$ I have been told that with a Parzen window approach you specify $V_n$ as a function of $n$. So $V_n$ decreases as $n$ increases, and it is clear that the volume is fixed (it does not depend on the data). I have also been told that with a kNN approach you specify $k_n$ as a function of $n$. So $k_n$ increases as $n$ is raised, and it is clear that the volume depends on the data. Can anyone explain the above statements to me? I think it is somewhat clear to me how kNN and Parzen work (kNN counts the $k$ nearest neighbors, and the new sample is assigned to the class which has the most votes; in Parzen the volume is fixed). I also do not understand the two formulas in the figure. The figure illustrates two methods for estimating the density at a point $x$ at the center of each square: the top is kNN, the bottom Parzen. AI: The probability that a vector $x$ drawn from $p(x)$ falls in some region $R$ of the sample space is given by $P = \int_{R} p(x')dx'$. Given a set of $N$ vectors drawn from the distribution, the probability that $k$ of these $N$ vectors fall in $R$ is given by $P(k) = \binom{N}{k} P^{k} (1-P)^{N-k}$. From the properties of a binomial p.m.f., the mean and variance of the ratio $\frac{k}{N}$ are $E[\frac{k}{N}] = P$ and $\operatorname{var}[\frac{k}{N}] = \frac{P(1-P)}{N}$. Therefore, as $N \rightarrow \infty$ the distribution becomes more sharply peaked and the variance smaller. Hence we can expect a decent estimate of the probability $P$ to be obtained from the fraction of points that fall within the region $R$, that is, $P \cong \frac{k}{N}$. Now, if the region $R$ is small enough that $p(x)$ does not vary considerably within it, then $\int_{R} p(x')dx' \cong p(x)V$, where $V$ is the volume of $R$. Combining this result with the one above, we see that $p(x) \cong \frac{k}{NV}$. That is where the formula you found basically comes from. Therefore, if we want to improve the estimate of $p(x)$ we should let $V$ approach 0. However, then $R$ would become so small that it would contain no examples. Thus we really only have two choices in practice: we have to let $V$ be large enough to find examples in $R$, yet small enough that $p(x)$ is roughly constant within $R$. The basic approaches are KDE (the Parzen window) and kNN. KDE fixes $V$ while kNN fixes $k$. Either way, it can be shown that both methods converge to the true probability density as $N$ increases, provided that $V$ shrinks with $N$ and that $k$ grows with $N$. The formulas used in the picture are just arbitrary examples that fulfill this requirement.
H: calculate distance between each data point of a cluster to their respective cluster centroids I have a dataset of some keywords in some text files. Using the append feature I have access each text file and I append all of the keywords to token_dict like this token_dict="wrist. overlapping. direction. receptacles. comprising. portion. adjacent. side. hand. receive. adapted. finger. comprising. thumb. ..............................." By using k-means clustering, I clustered this data by using k=3. Now, I want to calculate the distance between each data point in a cluster to its respective cluster centroid. I have tried to calculate euclidean distance between each data point and centroid but somehow I am failed at it. My code is as follows: from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer from sklearn.cluster import KMeans from sklearn.decomposition import PCA import matplotlib.pyplot as plt from matplotlib import style import numpy as np style.use('ggplot') token_dict = [] import glob path = 'E:\\Project\\*.txt' files=glob.glob(path) for file in files: f=open(file, 'r') text = f.read() token_dict.append(text) vectorizer = TfidfVectorizer(max_df=0.8, max_features=10000, min_df=2, use_idf=True) #print(X) km = KMeans(n_clusters=3) #labels = km.fit_predict(vectorizer) #print(labels) X = vectorizer.fit_transform(token_dict).todense() km.fit(X) pca = PCA(n_components=2).fit(X) data2D = pca.transform(X) # ============================================================================= # cluster_0=np.where(X==0) # print(cluster_0) # # X_cluster_0 = data2D[cluster_0] # print (X_cluster_0) # ============================================================================= # ============================================================================= # def euclidean(X1, X2): # return(X1-X2) # ============================================================================= # ============================================================================= # distance = euclidean(X_cluster_0[0], km.cluster_centers_[0]) # print(distance) # ============================================================================= # ============================================================================= # # km.predict() # ============================================================================= order_centroids = km.cluster_centers_ centers2D = pca.transform(order_centroids) labels = km.labels_ colors = ["y.", "b.","g."] for i in range(len(X)): plt.plot(data2D[i][0], data2D[i][1], colors[labels[i]], markersize=10) plt.scatter(centers2D[:, 0], centers2D[:, 1], marker='x', s=200, linewidths=3, c='r') plt.show() Can someone see where I went wrong? AI: def k_mean_distance(data, cx, cy, i_centroid, cluster_labels): distances = [np.sqrt((x-cx)**2+(y-cy)**2) for (x, y) in data[cluster_labels == i_centroid]] return distances clusters=km.fit_predict(data2D) centroids = km.cluster_centers_ distances = [] for i, (cx, cy) in enumerate(centroids): mean_distance = k_mean_distance(data2D, cx, cy, i, clusters) distances.append(mean_distance) print(distances) Using this function I have solved my problem
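As a side note, scikit-learn can do this directly: KMeans.transform() returns, for every sample, its distance to each cluster centre, so the per-cluster distances can be read off without a hand-written loop (a small sketch continuing from the km, data2D and clusters variables defined above):
import numpy as np

# distances of every point to every centroid, shape (n_samples, n_clusters)
all_distances = km.transform(data2D)

# for each point, the distance to its own (assigned) centroid
own_distance = all_distances[np.arange(len(data2D)), clusters]
print(own_distance[:10])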
H: Why is training take so long on my GPU? Details: GPU: GTX 1080 Training: ~1.1 Million images belonging to 10 classes Validation: ~150 Thousand images belonging to 10 classes Time per Epoch: ~10 hours I've setup CUDA, cuDNN and Tensorflow( Tensorflow GPU as well). I don't think my model is that complicated that is takes 10 hours per epoch. I even checked if my GPU was the problem but it wasn't. Is the training time due to the Fully connected layers? My model: model = Sequential() model.add() model.add(Conv2D(64, (3, 3), padding="same", strides=2)) model.add(Activation('relu')) model.add(Dropout(0.25)) model.add(Conv2D(64, (3, 3), padding="same", strides=2)) model.add(Activation('relu')) model.add(Dropout(0.25)) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(3, 3), strides=2)) model.add(Flatten()) model.add(Dense(256)) model.add(Activation('relu')) model.add(Dense(4096)) model.add(Activation('relu')) model.add(Dense(10)) model.add(Activation('softmax')) model.summary() opt = keras.optimizers.rmsprop(lr=0.0001, decay=1e-6) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'] ) Because there is a lot of data I used the ImageDataGenerator. gen = ImageDataGenerator( horizontal_flip=True ) train_gen = gen.flow_from_directory( 'train/', target_size=(512, 512), batch_size=5, class_mode="categorical" ) valid_gen = gen.flow_from_directory( 'validation/', target_size=(512, 512), batch_size=5, class_mode="categorical" ) AI: That's about expected. If you divide the number of seconds by the number of images you processed, you get 33 milliseconds per image, which seems about right for such a small network. Larger networks usually take in the ballpark of 50 to 200 milliseconds per image. Yes, a large dense layer is likely to hurt your performance, since that's a huge matrix (256 by 4096) and a large matrix multiplication to go along with it every time you run the network.