H: problem with regular expression I have a dataframe in R containing a variable for programming languages. This is extracted from a multiple choice questionnaire in a survey. As a result, any programming language may fall into the beginning, middle or end of the whole string. I added a binary variable for any programming language. Here is a sample of my data: language <- c( "Ruby, Assembly, R, Go, Rust", "Java, Javascript, Ruby, SQL", "Java, Ruby", "C, C++", "PHP, Javascript, Ruby, Assembly, Swift, R, Matlab, Go, Haskell", "R", "Perl, Javascript, R", "Javascript, Ruby, Bash", "Python, PHP, Javascript", "Java", "Java, C" ) df <-as.data.frame(language,stringsAsFactors = FALSE) df$python <- 0 df$java <- 0 df$c <- 0 df$r <- 0 df$d <- 0 for(trow in (1:nrow(data))) {if (length (grep(pattern="Python",x=df[trow,"language"])) > 0) {df[trow,"python"] <- 1}} I have problem with one-word languages such as C, R, D and Java . I made the variables for C#, C++ , Javascript, Ruby and others using the for loop but I have problems with these four. What regular expression can I use so that it covers Java but not Javascript (in the beginning, middle and end of the string) and covers R but not Ruby and so on? Any other solution for making the binary variables are also welcome. AI: You could do reshape2::recast( data = setNames(strsplit(language, ", ", T), language), formula = L1~value, fun.aggregate = length ) # L1 Assembly Go R Ruby Rust Java Javascript SQL C C++ Haskell Matlab PHP Swift Perl Bash Python # 1 C, C++ 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 # 2 Java 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 # 3 Java, C 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 # 4 Java, Javascript, Ruby, SQL 0 0 0 1 0 1 1 1 0 0 0 0 0 0 0 0 0 # 5 Java, Ruby 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 # 6 Javascript, Ruby, Bash 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 1 0 # 7 Perl, Javascript, R 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 # 8 PHP, Javascript, Ruby, Assembly, Swift, R, Matlab, Go, Haskell 1 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 # 9 Python, PHP, Javascript 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 # 10 R 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 # 11 Ruby, Assembly, R, Go, Rust 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
H: Is Super Learning actually different from stacking, or are they essentially the same thing? Articles which use the terms 'stacking' and 'Super Learner' often seem to use them interchangeably. Is the Super Learner algorithm a specific form of the more generic stacking concept, or is Super Learner essentially the same thing, so that at some stage one of these terms is likely to become redundant? AI: Ensemble learning essentially means using multiple learning algorithms and combining them to get the best predictive performance from all of them; more detailed descriptions are available elsewhere. Ensemble learning can be broadly divided into several types, such as boosting, bagging, and stacking / Super Learning. Stacking combines a group of base learners with a meta-learner that ensembles their predictions. Going deeper, the term stacking was used well before Super Learning. Later, when the algorithm was developed theoretically and made popular in 2007, it was given the name 'Super Learner'.
H: How do I determine the best statistical way to transform data for standardization (like log, square root) to remove bias between different datasets? I'm currently working on applying data science to a High Performance Computing cluster, by analyzing the log files generated and trying to see if there is a pattern that leads to a system failure (specifically STALE FILE HANDLEs for now, in the GPFS file system). I am categorizing the log files and clustering based on their instances per time interval. Since some messages are more predominant than others in any given time frame, I don't want the clustering to be biased towards the one with maximum variance. AI: It's unclear exactly what the OP is asking (so this response is somewhat general), but the list below illustrates common contexts and the transformations that are typical:
sales, revenue, income, price --> log(x)
distance --> 1/x, 1/x^2, log(x)
market share, preference share --> (e^x)/(1+e^x)
right-tailed distribution --> sqrt(x), log(x) (caution: log(x) is undefined for x <= 0)
left-tailed distribution --> x^2
You can also use John Tukey's three-point method as discussed in this post. When specific transformations don't work, use the Box-Cox transformation. With the R package car, compute lambda with lambda <- coef(powerTransform(x)) and then call bcPower(x, lambda) to transform. Consider Box-Cox transformations on all variables with skewed distributions before computing correlations or creating scatterplots.
H: Does batch_size in Keras have any effect on the quality of the results? I am about to train a big LSTM network with 2-3 million articles and am struggling with memory errors (I use AWS EC2 g2x2large). I found out that one solution is to reduce the batch_size. However, I am not sure if this parameter is only related to memory efficiency or if it will affect my results. As a matter of fact, I also noticed that the batch_size used in examples is usually a power of two, which I don't understand either. I don't mind if my network takes longer to train, but I would like to know if reducing the batch_size will decrease the quality of my predictions. Thanks. AI: After one and a half years, I come back to my answer because my previous answer was wrong. Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The concept is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima. When it is too low, your network weights can just jump around if your data is noisy, and it might be unable to learn, or it converges very slowly, thus negatively impacting total computation time. Another advantage of batching is for GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks if part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the time of a batch size of 8. In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, save the gradient estimates, feed one or more further batches, and then do a weight update. This way you get a more stable gradient because you have increased your virtual batch size.
H: How to optimize cohort sizes to reduce pair-wise comparisons? I am making all pairwise comparisons in a dataset. The use-case is collapsing records into a unique ID based on fuzzy names and dates of birth. The size of the database is around 57,000 individuals. So this is a total of 57,000 choose 2 pairwise combinations. (This is a tiny example I know, but I have other databases with the same problem that are much larger.) Based on other analysis, I concluded that I do not want to consider people with birthdates of more than three years apart. So I can subset the database into overlapping cohorts, and then only do all pair-wise comparisons within each cohort. It is easy to show examples of where this cohort approach will reduce the number of pairs I need to compare. Here is one example with my data just based on the quintiles of the year of birth (and those with missing birthdays go into all cohorts). (Min,1968] : 13,453 [1962,1980] : 17,335 [1974,1988] : 21,188 [1982,1993] : 21,993 [1987,Max) : 17,449 Which saves me around 0.7 billion comparisons. So this brings up two questions: are the choosing the bins based on quantiles a good strategy, or is there anther strategy that works better? how many bins should I make? AI: There is a better way if I understand your question correctly. Here is the algorithm I propose: Initialize a 'window' list and a 'pairs' list Sort your data on birthday from old to young (or the other way around) Loop over your rows and keep track of all the rows that are still in the last three years since your current row. When you get to a new row, throw out rows that are now more than three years apart, add the current row together with the rows in your 'window' to your pairs set and add current row to your 'window'. This means you only iterate over your main list once, and if you implement your 'window' list properly (like a linked list for example) you don't need to do a lot of looping in that regard either. You also get all the pairs only once as a by product. Plus you actually get all the pairs, while with your binning approach you get missing pairs around the bin edges (if you don't overlap). If you use the overlapping binning approach I think you should have bins of 6 years width and the borders shift 3 years each time, that way all true pairs are in at least 1 bin together, and there is the least amount of unwanted pairs.
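To make the window idea concrete, here is a minimal Python sketch, assuming each record is just a (person_id, birth_date) tuple; the identifiers and dates below are made up, the three-year cut-off is approximated as 3 x 365.25 days, and records with missing birthdays would still need separate handling (e.g. compared against everyone).

from datetime import date

def candidate_pairs(records, max_gap_years=3):
    # records: list of (person_id, birth_date) tuples
    records = sorted(records, key=lambda r: r[1])            # sort by birth date
    window, pairs = [], []                                   # window = rows within the allowed gap of the current row
    max_gap_days = max_gap_years * 365.25
    for rec in records:
        # drop rows that are now more than max_gap_years older than the current row
        window = [w for w in window if (rec[1] - w[1]).days <= max_gap_days]
        pairs.extend((w[0], rec[0]) for w in window)          # each candidate pair is emitted exactly once
        window.append(rec)
    return pairs

people = [("a", date(1980, 5, 1)), ("b", date(1982, 7, 9)),
          ("c", date(1990, 1, 1)), ("d", date(1991, 3, 2))]
print(candidate_pairs(people))   # [('a', 'b'), ('c', 'd')]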
H: Gaussian Mixture Models EM algorithm use average log likelihood to test convergence I was investigating scikit-learn's implementation of the EM algorithm for fitting Gaussian Mixture Models and I was wondering how they did come up with using the average log likelihood instead of the sum of the log likelihoods to test convergence. I see that it should cause the algorithm to converge faster (given their default parameters), but where does that idea come from ? Does anyone know if they based this part of the implementation on a specific paper or if they just came up with it and used it ? In most explanations of the EM algorithm I have come across, they would have used log_likelihoods.sum() instead of log_likelihoods.mean(). AI: It makes unit testing easier; invariant to the size of the sample. Reference: the github discussion that led to the change.
H: Multiclass Classification with large number of categories I am making a recommendation system (kind of) and I have to recommend the item a user is most likely to buy in his next purchase. Doesn't matter if he already bought this item. Given this, I'm treating this problem as a multiclass-classification problem with 4000 categories (number of different items users can buy). Searching in Wikipedia I found this link and decided to use the One vs -rest method. So I decided to train one random forest for each item using as covariates flags if the user bought each item before (so I have around 4000 covariates). Then I will decide a rule to decide the recommended item (something like the one which has the largest probability to be bought or the largest lift.) My problem is that it's taking too long to train (5 to 10 min per item): > 5*4000 [1] 20000 > 20000/60 [1] 333.3333 > 333.3333/24 [1] 13.88889 So in the best case it would take 2 weeks to train. I would like to know if the method i'm using is right, and if there's another faster method to achieve this. AI: You might have more luck with a Naive Bayes Classifier. It can handle a large number of target classes, and is relatively fast to train, since you largely just calculate a bunch of univariate stats to plug into at prediction time. It won't capture fancy interactions as much as a random forest though, so if you are concerned about "they only buy shoelaces if they bought shoes but NOT shoeshine" vs "they often buy shoelaces if they've bought shoes" then it may disappoint. You may also want to incorporate a time component, but I'm not sure what you're doing. https://en.wikipedia.org/wiki/Association_rule_learning may also be relevant.
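As a rough illustration of the Naive Bayes suggestion, here is a sketch with a purely synthetic purchase matrix (all sizes and data below are made up); BernoulliNB suits the 0/1 "bought this item before" flags, and a single multiclass model replaces the 4000 separate one-vs-rest forests.

import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 4000))   # 1000 users x 4000 "bought before" flags
y = rng.integers(0, 4000, size=1000)        # label: the item bought next

clf = BernoulliNB().fit(X, y)
probs = clf.predict_proba(X[:1])            # class probabilities for one user
top5 = clf.classes_[np.argsort(probs[0])[-5:][::-1]]   # recommend the 5 most likely items
print(top5)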
H: Beginner in programming and data science with 100 hours to spend learning the basics I started using the below link to teach myself data science, with some mathematical knowledge (including calculus, linear algebra, stats/probability) but very little programming experience: Quora - Roman Trusov's answer on how to learn Data Science in 100 hours. The above link has the following starting advice: You will need an RDBMS to handle the data, so the first day would look like this: Install and configure MySQL. Import the dump into the database Read SQL basics. Spend some time doing simple exercises to get the hang of manipulating the data... After a few days, I'm still stuck at this stage. I'd like to learn how to do data science, and was hoping that following the plan in the above link would achieve that quickly. But I'm starting to feel it is not working. Is debugging MySQL installations an important part of becoming a data scientist - so I should just get advice on that and try harder? If not, what can I do differently to achieve my goals of learning data science in similar time frame? AI: Being good at a subject does not automatically make someone a good teacher, and it looks like Roman's answer on Quora has fallen into a trap of thinking everything he knows is simple and could be picked up quickly. Also making the potential student attempt things outside of a beginner level - effectively just by research on the web following a 2 paragraph pointer - is going to make progress slow and frustrating. Despite no doubt good intentions, the advice there is likely to give a very poor learning experience. There are full structured tutorials available on free Massive Online Open Courses (MOOCs) available for learning data science topics. These have been put together by professionals who know how to teach, and the effort put into designing any of these courses, providing materials etc, totally dwarfs the effort that went into the advice on Quora. Ignore the Quora answer, and sign up for one or more of these MOOCs. You can find them at Coursera, Udacity, edX and other similar places. If you want to cram a lot of study into a short time, look for "at your own pace" courses where all the materials are available to you immediately. The almost canonical start point to test the waters would be Andrew Ng's Machine Learning course on Coursera. Machine Learning is one of the more immediately accessible and fun parts of data science, and the course goes into theory and practice with additional sections for beginners at the start covering necessary maths and programming. Following that entire course is probably around 100 hours total effort. You won't come out of it knowing data science, but you will gain useful practical skills, and get a real taster for the machine learning side of the subject.
H: Measuring Value in Data Science? Why does it seem that it's difficult to find out how people in data science create measurable value? All I find on the internet are buzzwords like data cleaning, visualization, and writing about the data. This is equivalent to describing a landscaper as trimming trees and grass, or saying an investment banker spreads M&A. That is insufficient. Let me type out an insufficient answer: A data scientist uses database querying languages like SQL as well as statistical programming languages like R to design experiments with data and visualize the experimental conclusions for decision-makers, in the hope of making better decisions and thus creating value for a company/cause. How is this insufficient? Well, how do we know that data scientists' recommendations actually create value? If we can't measure the value of our analyses, then how can we determine if hundreds of hours of learning/working in the subject is meaningful? Extreme example: George is a data scientist working for Company X. After hundreds of hours of data cleaning/experimentation, he concludes Decision A will benefit Company X. George convinces a Product Manager to apply Decision A, which ultimately increases the revenues of Company X by $0.01. How do we know that several years of work may not amount to basically nothing? Let me give you a depressing and real/plausible example: Instead of becoming a data scientist, George becomes a portfolio manager, managing investment portfolios for clients. After 20 years of management, George has a track record of -1% per annum, compared to an S&P 500 return of 8%. In the above example, George is a useless person and has lost value for people. How do data scientists know that they aren't destroying value, and if we can't figure that out, what are the steps to avoid value destruction and create the most value for companies? AI: I read your question as: tl;dr. How do you know that a data scientist adds value to a company? The job of a data scientist is to pick out the best data set, scrub it, and fit a statistical model. And most importantly, a data scientist needs to know the problem statement and why the analysis is being done. For example, consider a DS team in an e-commerce company. Their primary goal (let's assume) is to build a highly effective recommender system, which needs to adapt itself to a lot of variables, like user behaviour, session-based behaviour, user segments, etc. So, if the recommender system is still not able to get the company a considerable CTR or upsells/cross-sells, then maybe the team is not adding proper value to the company. So, a data scientist needs to have a keen knowledge of the business value which an experiment is expected to add to the company. The decisions made from the experiments should ultimately add value to the company (monetary or otherwise). If not, then, just like with any other employee, it means the team has failed at its job.
H: "Results do not have equal lengths" using ldply in R package plyr I've found a few similar questions, but I am new to R and can't figure out how it applies to my specific problem. Here is my code: library(rvest) library(plyr) library(stringr) #function passes in letter and extracts bold text from each page fetch_current_players<-function(letter){ url<-paste0("http://www.baseball-reference.com/players/", letter, "/") urlHTML<-read_html(url) playerData<-html_nodes(urlHTML, "b a") player<-html_text(playerData) player } #list of letters to pass into function atoz<-c("a","b","c","d","e","f","g","h", "i","j","k","l","m","n","o","p","q","r", "s","t","u","v","w","x","y","z") player_list<-ldply(atoz, fetch_current_players, .progress="text") So what this code is trying to do is use the URL structure of this website to pass a list of the letter A through Z into my function to produce a list of names that are in bold. I think the problem is that each list of players it returns is of different lengths and that is producing an error as when I manually type in each letter into the function the function appears to work. Any help is appreciated, thanks! AI: Here's a slightly modified version using some newer "tidyverse" packages: library(rvest) library(purrr) # flatten/map/safely library(dplyr) # progress bar # just in case there isn't a valid page safe_read <- safely(read_html) fetch_current_players <- function(letter){ URL <- sprintf("http://www.baseball-reference.com/players/%s/", letter) pg <- safe_read(URL) if (is.null(pg$result)) return(NULL) player_data <- html_nodes(pg$result, "b a") html_text(player_data) } pb <- progress_estimated(length(letters)) player_list <- flatten_chr(map(letters, function(x) { pb$tick()$print() fetch_current_players(x) }))
H: Similarity between two words I'm looking for a Python library that helps me identify the similarity between two words or sentences. I will be doing audio-to-text conversion, which will result in English dictionary or non-dictionary word(s) (this could be a person or company name). After that, I need to compare it to a known word or words. Example: 1) Audio-to-text result: 'Thanks for calling America Expansion', which will be compared to 'American Express'. Both sentences are somehow similar but not the same. It looks like I may need to look into how many characters they share. Any ideas will be great. I'm looking for functionality like the Google search 'did you mean' feature. AI: The closest, as Jan has mentioned in his answer, would be the Levenshtein distance (also popularly called the edit distance). In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other. It is a very commonly used metric for identifying similar words. NLTK already has an implementation of the edit distance metric, which can be invoked in the following way:
import nltk
nltk.edit_distance("humpty", "dumpty")
The above code would return 1, as only one letter is different between the two words.
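If you want an overall similarity score rather than a raw edit distance, the standard library's difflib is one option; the candidate list below is invented purely for illustration.

import difflib

known = ["American Express", "Bank of America", "America Expansion Corp"]
heard = "Thanks for calling America Expansion"

for candidate in known:
    # ratio() returns a similarity score between 0 and 1
    score = difflib.SequenceMatcher(None, heard.lower(), candidate.lower()).ratio()
    print(candidate, round(score, 2))

# get_close_matches gives a "did you mean" style shortlist
print(difflib.get_close_matches("America Expansion", known, n=2, cutoff=0.5))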
H: Can we use a model that overfits? I am on a binary classification problem with the AUC metrics. I did a random split 70%, 30% for training and test sets. My first attempts using random forest with default hyper-parameters gave me auc 0.85 on test set and 0.96 on training set. So, the model overfits. But the score of 0.85 is good enough for my business. I also did a 5-folds cross validation with the same model and same hyper-parameters and the test set results were consistently something between 0.84 and 0.86 My question is: can I believe on the score 0.85 and use this model in production? AI: Yes, if your 0.85 AUC is good enough for your use case this is a good enough model. The performance on the training set indicates how well your model knows the training set. This we don't really care about, it's just what the model tries to optimize. The performance on the test set is an indication on how well your model generalizes. This is what we care about, and your model gets to around 0.85 as an estimate for your generalization. Differences between training and testing are the norm and in this case it could be that you might get a better performance by adding stronger regularization but if 0.85 is good enough, go for it!
H: Understanding Bernoulli Trials, Bayesian Setting I am required to complete a project on ML applications. I guess there is a lot of statistics in ML, not helpful for a non-maths background. I am getting too bogged down by notations. There are too many notations. I am trying to read about what Bernoulli trials are and I can't relate to it. What is a Bayesian setting and why is Bayesian thing everywhere? What makes it such an omnipotent distribution? Are there any theorems/theories that I must know ? Any book/notes/resources where I could learn about these stuff in a relatable way (I have very little maths background, but I can learn)? AI: What is a Bayesian setting and why is Bayesian thing everywhere? In very simple terms: Bayesian is a statistical setting, where the likelihood of an event happening (called the posterior) depends on the prior trials or observations (called the prior(s)). Bayesian networks is an extension of the above, forming a chain or a network of inferencing. Are there any theorems/theories that I must know? For understanding the Bayesian paradigm, you need to know the Bayesian theorem/relation, which is basically: $$P(\theta|d) = \dfrac{P(d|\theta)P(\theta)}{P(d)}.$$ Any book/notes/resources where I could learn about these stuff in a relatable way (I have very little maths background, but I can learn)? I would highly recommend "Doing Bayesian Analysis" by John Krushke
H: Smith-Waterman-Gotoh Algorithm - how to determine an overall similarity percentage Using the Smith-Waterman-Gotoh algorithm I want to get an overall similarity percentage between two sequences. What would be the best way to do this? eg. comparing strings COELACANTH and PELICAN in this example gives a score of 4 with alignment: ELACAN ELICAN How would I then go an determine the overall similarity percentage between COELACANTH and PELICAN based on this? AI: I don't know how to write math algebra like on the Wikipedia page for Smith-Waterman, so i'll use pseudo code. I found the logic at SimMetrics in the SmithWatermanGotoh java code. str1 = PELICAN str2 = COELACANTH matchValue = 1 #in the comparisons below, when characters are equal, assign this value mismatchValue = -2 #in the comparisons below, when characters are not equal, assign this value gapValue = -0.5 #the gap penalty used in smith-waterman # get the maxDistance which is the smallest number of characters between str1 and str2 multiplied by # the largest of matchValue and gapValue maxDistance = min(length(str1), length(str2)) x max(matchValue, gapValue); # function to compare character at index aIndex of string a with character at index bIndex of string b function compareCharacters(a, aIndex, b, bIndex, matchValue, mismatchValue) { if a[aIndex] === b[bIndex] return matchValue else return mismatchValue } v0 = an array v1 = an array lengthOfStr1 = number of characters in str1 lengthOfStr2 = number of characters in str2 # do the smith waterman similarity measure (currentMax) currentMax = v0[0] = max(0, gapValue, compareCharacters(str1, 0, str2, 0, matchValue, mismatchValue)) for (j = 1; j < lengthOfStr2; j++) { v0[j] = max(0, v0[j - 1] + gapValue, compareCharacters(str1, 0, str2, j, matchValue, mismatchValue)) currentMax = max(currentMax, v0[j]) } for (i = 1; i < lengthOfStr1; i++) { v1[0] = max(0, v0[0] + gapValue, compareCharacters(str1, i, str2, 0, matchValue, mismatchValue)) currentMax = max(currentMax, v1[0]) for (j = 1; j < lengthOfStr2; j++) { v1[j] = max(0, v0[j] + gapValue, v1[j - 1] + gapValue, v0[j - 1] + compareCharacters(str1, i, str2, j, matchValue, mismatchValue)) currentMax = max(currentMax, v1[j]) } for (j = 0; j < lengthOfStr2; j++) { v0[j] = v1[j] } } # calculate the overallSimilarity between the strings overallSimilarity = currentMax / maxDistance #<- 0.4767 for COELACANTH vs PELICAN
H: How to count the number of missing values in each row in a Pandas dataframe? How can I get the number of missing values in each row of a Pandas dataframe? I would like to split the dataframe into different dataframes which have the same number of missing values in each row. Any suggestion? AI: You can apply a count over the rows like this:
test_df.apply(lambda x: x.count(), axis=1)
test_df:
   A    B    C
0: 1    1    3
1: 2    nan  nan
2: nan  nan  nan
output:
0: 3
1: 1
2: 0
You can add the result as a column like this:
test_df['full_count'] = test_df.apply(lambda x: x.count(), axis=1)
Result:
   A    B    C    full_count
0: 1    1    3    3
1: 2    nan  nan  1
2: nan  nan  nan  0
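Note that x.count() above returns the number of non-missing values per row; to count the missing values directly and split the dataframe as asked, something like the following sketch should work.

import numpy as np
import pandas as pd

test_df = pd.DataFrame({"A": [1, 2, np.nan], "B": [1, np.nan, np.nan], "C": [3, np.nan, np.nan]})

test_df["n_missing"] = test_df.isnull().sum(axis=1)       # missing values per row
frames = {k: g.drop(columns="n_missing")                  # one dataframe per missing-value count
          for k, g in test_df.groupby("n_missing")}
print(frames[2])                                          # rows with exactly two missing values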
H: How does SelectKBest() perform feature selection? SelectKBest(f_classif, k), where k is the number of features to select, is often used for feature selection, however, I am having trouble finding descriptive documentation on how it works. A sample of how this works is below: model = SelectKBest(f_classif, k) model.fit_transform(X_train, Target_train) The ANOVA F-value, as I understand it, does not require a categorical response. (see scipy.stats.f_oneway) It is computing the value between the features. Why does f_classif require the response? How does SelectKBest actually achieve a ranking of features based on the F-value when there should only be one F-value for a set of data? AI: Your question is really more about f_classif than SelectKBest. It's to drop duplicate labels; note the np.unique(y): X, y = check_X_y(X, y, ['csr', 'csc', 'coo']) args = [X[safe_mask(X, y == k)] for k in np.unique(y)] return f_oneway(*args) f_oneway still only gets passed the feature matrix, but a subset of it.
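A small example (using the built-in iris data) makes the role of y visible: f_classif computes one F-value per feature by grouping that feature's values according to the class labels, and get_support() shows which k features are kept.

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(f_classif, k=2).fit(X, y)

print(selector.scores_)        # one ANOVA F-value per feature, computed against y
print(selector.get_support())  # boolean mask of the k highest-scoring features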
H: How to take advantage of variables whose values are available in the past but not in the future? Example: weather data. You know the location data, but you don't know the previous days'/weeks' temperatures and other weather conditions. How can you exploit these variables in your past data when you build a predictive model that attempts to forecast pretty far out into the future? AI: My generic answer to the title is to use the extra data for regularization in representation learning; a transformation of your features into a space conducive to your main task: regression (prediction, forecasting). Here's a survey [PDF]. For your example, you could build a model that takes the delay of the target time from the present as an input, so you can predict arbitrarily far into the future, though it probably would not predict as well as a simple regressor that has a fixed horizon since it is trying to learn a more complex function.
H: word2vec -storing a word and its vector yet be able to efficiently run k-nearest neighbour After training a model using word2vec I'd now like to store the trained model with the word serving as a key and the vector as its value. However I'm not sure how I'll be able to implement in this way a k-nearest neighbour search. What would be the correct way to get around this? AI: A most popular way of obtaining the approximate nearest neighbors is the locality sensitive hash. And here are some practical results. Then once you have the neighboring keys, it's straightforward to use a key-value store to retrieve the corresponding words.
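If an exact (rather than LSH-approximate) search is acceptable for your vocabulary size, a simple sketch is to keep the word-to-vector dictionary and build a k-NN index over the stacked vectors; the vocabulary and random vectors below are placeholders for your trained word2vec output.

import numpy as np
from sklearn.neighbors import NearestNeighbors

vocab = ["king", "queen", "man", "woman", "car"]
vectors = {w: np.random.rand(100) for w in vocab}   # stand-in for word2vec vectors

words = list(vectors)                               # index -> word mapping
matrix = np.vstack([vectors[w] for w in words])

nn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(matrix)
dist, idx = nn.kneighbors(matrix[words.index("king")].reshape(1, -1))
print([words[i] for i in idx[0]])                   # the query word plus its 2 nearest neighbours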
H: Date prediction - periodic recurrence If I have some data regarding the occurence of an event on a certain date and some other variables regarding it (think fe.: I have data on which dates it rained, and some addtitional data like temperature, atmospheric pressure etc.), which is the most appropriate model for predicting on which day the event is going to happen again? Or, to be more precise, I'd like to predict the frequency of said event, to know in how many days it's going to occur again. I mostly use Python with the numpy, sklearn libraries, and I'm interested which of its models fits my use case best. Thank you! AI: You should read that : http://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/ and take a look at that : http://statsmodels.sourceforge.net/0.6.0/generated/statsmodels.tsa.arima_model.ARIMA.html The endog argument is your time serie and the exog is your other data
H: How to decide power of independent variables in case of non-linear polynomial regression? Consider one dependent variable 'Y' and 10 independent variables or features- X1, X2, X3, ... X10. I want to create a non-linear polynomial regression model such that- Y ~ a1.X1^b1 + a2.X2^b2 + .... + a10.X10^b10 I was wondering is there any algorithm that will determine best possible values for powers of independent variables that is values of b1, b2, ... b10 from data. AI: If all you care about is the quality of predictions (as opposed to explanatory power), skip linear models altogether and use gradient boosted trees instead. Gradient boosting can generally learn polynomial splines with ease, and you don't have to manually make a bunch of polynomial predictors yourself. By the way, gradient boosting is implemented in Python's scikit-learn library, R's caret library, and Java/Scala's Weka library.
H: Where does the random in Random Forests come from? As the title says: Where does the random in Random Forests come from? AI: The randomness comes from two places. Each tree is usually grown on a bootstrap sample of the rows (bagging), and at each split only a random subset of the variables is considered as candidates for splitting that node (commonly about one third of the variables for regression, or the square root of the number of variables for classification).
H: What is the definition of precision for no positive classifications? The precision is defined as $$\text{precision} = \frac{\text{true positive}}{\text{true positive} + \text{false positive}}$$ Is there any definition what this value should be if there is no positive classification (but of course positive elements)? AI: I just found that sklearn.metrics.precision_score handles it like this: >>> sklearn.metrics.precision_score(y_true, y_pred) /home/moose/.local/lib/python2.7/site-packages/sklearn/metrics/classification.py:1074: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. 'precision', 'predicted', average, warn_for) 0.0 So they give a warning and set it to 0.
H: Tools for ML on csv files and jsons So much of what we export is in CSV and JSON files. Is there any useful tools you know of that can automatically perform data analysis on flat file formats. For example Basic statistics if numeric types: avg, stdev, mode, median Column type and Cardinality detection Find relationships between columns, if column A is X, then column B is always Y Any of these things would be useful, even if there'd be an intermediary step of just loading the flat file into some software... AI: In R you can load the csv file easily by using the method read.csv and get the summary of the data using the method summary, you will get the most of the basic statistics of each columns like Min, 1st Qu, Median, Mean, 3rd Qu, Max and NA counts for numeric/integer rows and for string columns you will also get the some statics of count of different type of string if it has repeated multiple i.e factors. Sample code: Loading the csv file and check the few datasets: > df <- read.csv("SampleData.csv") > head(df) name age Type 1 A 34 Active 2 B 56 Cancelled 3 C 12 Active 4 D 32 Cancelled 5 Z 34 Active Basic statistics if numeric types: avg, stdev, mode, median > summary(df) name age Type A :1 Min. :12.0 Active :3 B :1 1st Qu.:32.0 Cancelled :2 C :1 Median :34.0 D :1 Mean :33.6 Z :1 3rd Qu.:34.0 Max. :56.0 > Column type and Cardinality detection > sapply(df, class) name age Type "factor" "integer" "factor" > > nrow(df) [1] 5 > ncol(df) [1] 3 > Find relationships between columns, if column A is X, then column B is always Y For this you have to do some Basic calculation and you can go through this cookbook: http://www.cookbook-r.com/ I have attached Sample CSV Image here
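For readers working in Python rather than R, pandas covers the same ground; here is a rough equivalent using the same toy data as above.

import pandas as pd

df = pd.DataFrame({"name": ["A", "B", "C", "D", "Z"],
                   "age": [34, 56, 12, 32, 34],
                   "Type": ["Active", "Cancelled", "Active", "Cancelled", "Active"]})

print(df.describe(include="all"))         # count/mean/std/quartiles, plus unique/top/freq for strings
print(df.dtypes)                          # column types
print(df.nunique())                       # cardinality per column
print(df.select_dtypes("number").corr())  # pairwise correlation of the numeric columns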
H: How to architect a ConvNet to ignore the top half of an image I'm building a convolutional neural network to teach a toy car, powered by a Raspberry Pi, how to drive based on incoming streams of frames from a webcam mounted on top of the car. The top half of each image is irrelevant. What matters is the curvature of the road, and this is in the bottom half. I've generated a substantial amount of data (about 40k records) by driving the car around myself and recording what I do (commands are left, right, and straight) and what the frames are. However my trained ConvNets aren't giving me the performance I'd hoped for. My experimentation with one of the deployed models confirms that it is indeed tricked by changes in the top half of the streaming images. A simple solution is to programmatically cut the frames in half so that the neural net only receives relevant portions. However, deep neural nets are praised for their ability to learn features with (ideally) zero human feature transformations, so I want to avoid this approach. This project is meant to be a learning experience for me, so I want to learn how to architect the ConvNet more effectively. Each training session takes several hours to run, so rather than try everything under the sun, I figured I'd reach out to the community here to narrow my focus of exploration. One thought I have is to put a fully connected (FC) layer in front so that subsequent layers effectively only convolve over the relevant portion of the image. I think this could potentially work if this up-front FC layer learned to assign very small weights to pixels in the top half of the image. Could this work? Are there better architectures? AI: You could be right that ignoring the top part of the image would benefit the CNN. However, there is very little point in trying to architect this - if your premise that the CNN will ignore irrelevant details in the top half is correct, then that will occur anyway, and there is no standard NN architecture that will help other than disconnecting the top half of the network, which is going to be logically exactly the same as programmatically slicing the image, with the disadvantage of storing and calculating with twice as many parameters. You should either programmatically cut the image in half, or do nothing to the image and rely on the CNN's inherent ability to give low weights to irrelevant details. If you do the latter, you may be able to get around the learning of incorrect details in the top half by augmenting your data - e.g. add some noise to images*, especially in the irrelevant top half. Perhaps horizontally flipping a few images (and reversing the corresponding left/right targets) might be another useful augmentation. Some augmentations could also be useful if you just take the lower half of the image. * Noise should be something close to variations that could be seen when in use. E.g. smearing pixels left or right might be reasonable. Inserting "static" probably is not.
H: Should we convert independent continuous variables (features) to categorical variables before using a decision-tree-like classifier? Consider one dependent variable to predict, 'Attitude', which can take three values: 'Positive/Negative/Neutral'. I have the following independent variables or features: Age, Height, Gender, Income, etc. I am trying to predict Attitude using a decision tree classifier. Attitude ~ Age + Height + Gender + Income (Decision Tree) I am getting >90% accuracy when the tree depth is 15, since the tree splits on the continuous variables (i.e. Age, Income and Height) again and again to get leaves with pure classes. Is this a case of overfitting? Should I convert the continuous variables into categorical variables (like range classes)? AI: There is no need to split continuous variables because the tree already does that automatically. The only way you can test for overfitting is by either using a holdout set or by doing cross validation. If you are overfitting, changing a continuous variable to a categorical variable likely won't make a difference. If you get the sense that you're overfitting, you should reduce the depth of your tree.
H: Why are sigmoid/tanh activation functions still used for deep NNs when we have ReLU? It looks like ReLU is better than sigmoid or tanh for deep neural networks in every respect: it is simpler, more biologically plausible, has no vanishing gradient, performs better, and induces sparsity. And I see only one advantage of sigmoid/tanh: they are bounded. It means that your activations won't blow up as you keep training, and your network parameters won't take off to the sky. Why shouldn't we forget about sigmoid/tanh for deep neural networks? AI: In certain network structures having symmetric activation layers has advantages (certain autoencoders for example). In certain scenarios having an activation with mean 0 is important (so tanh makes sense). Sigmoid activation in the output layer is still important for classification. In 95% of the cases, though, ReLU is much better.
H: DIfferent learning rates converging to same minima I am optimizing some loss function using Gradient Descent method. I am trying it with different learning rates, but the objective function's value is converging to same exact point. Does this means that I am getting stuck in a local minima?, because the loss function is non-convex so it is less likely that I would converge to a global minima. AI: This is the expected behavior. Different learning rates should converge to the same minimum if you are starting at the same location. If you're optimizing a neural network and you want to explore the loss surface, randomize the starting parameters. If you always start your optimization algorithm from the same initial value, you will reach the same local extremum unless you really increase the step size and overshoot.
H: Machine Learning Identification and Classification, based on string contents: General advice I have just very recently started to develop an interest in machine learning, and I have a particular problem in mind that I would like to start to explore. I would like to train a system to automatically classify various attributes of an item, based on what's in a string. Let's say I have a long list of various mutual funds, like: Ticker Fund Name ------ --------- ABNAX ABC Bond Fund, Inc: Bond Inflation Strategy ALYSX ABC Bond Fund, Inc: Credit Long/Short Portfolio; Advisor Class AGRXX DEF Bond Fund, Inc: Government Reserves Portfolio; Class 1 Shares HIYYX FGH Bond Fund, Inc: High Yield Portfolio; Advisor Class Shares HIYAX FGH Bond Fund, Inc: High Yield Portfolio; Class A Shares ... … And so on. I have a large data set that contains "complete" classifications, which have Fund Names similar to the ones above, and – in addition – a human has already given the training set items certain attributes. For example: AIISX Allianz Funds Multi-Strategy Trust: AllianzGI International Small-Cap Fund; Class R6 Shares Which will have the associated attributes: Strategy: Multi-Strategy Geography: International Capitalization: Small-Cap Share class: R6 The challenge for the machine learning system will be to assign the right value to an attribute, when there are values "competing" on the same attribute. Let's say that a certain fund can have Strategy: Long-Short and Strategy: High Yield at the same time – and both terms are present in the Fund name. The system should select the right one, based on exposure to historical bias present in the training data set. Question I am interested in getting a grasp of which machine learning methods and algorithms that would be able to "learn" how to classify an item, based on a large set of examples with human-classified attributes, as indicated above. I am a complete beginner to machine learning, except for some basic knowledge of statistics, so I would just like to be pointed in a general direction. Can/should this be accomplished with something like multiple regression, or are we looking at something else? Is some sort of natural language processing needed – or is basic keyword pattern recognition enough? Lastly, which terminology or labeled area of expertise would summarize this problem description? AI: If the content/information is lengthy, I'd suggest you to use some NLP tasks for starters. I would suggest you to use some basic NLP based preprocessing because it makes our model perform better. So, the basic feature extraction can be used for this. Example, using Porter Stemmer, Lemmatizer to clean the data or removing stop words and then using ngrams for features seem to be a basic idea and a good start. There are various vectorizers which can be used to extract features the documents. For example, TfidfVectorizer calculates the frequency of a word in a document and also frequency across documents. This can be more useful than a naive Bag of words approach. Then, on top of this there are various classifiers which can be used like OneVsRestClassifier or others. A simple approach could be selecting the input and target first. Select the parameters which are to be passed as input and the desired output. Then, decide to clean the input or not based on some NLP APIs(you can use nltk). Then decide on a classifier. You can then predict the values. Test on validation set and try various classifiers for starters. 
As for terminology, I can think of Multiclass Classification only now.
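A minimal sketch of the pipeline described in the answer above, with invented fund names and labels; in practice each attribute (strategy, geography, capitalization, share class) would get its own classifier trained on the human-labelled examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline

names = ["ABC Trust: International Small-Cap Fund; Class R6",
         "XYZ Fund: High Yield Portfolio; Class A",
         "DEF Fund: Credit Long/Short Portfolio; Advisor Class"]
strategy = ["small-cap", "high yield", "long/short"]        # human-assigned labels

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
])
model.fit(names, strategy)
print(model.predict(["GHI Fund: High Yield Bond Portfolio"]))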
H: Minimize absolute values of errors instead of squares Calculating absolute values is much more efficient than calculating squares. Is there any advantage, then, to using the latter as a cost function over the former? Squares are easier to treat analytically, but in practice that doesn't matter. AI: In what sense is "calculating absolute values much more efficient than calculating squares"? Compared to the complexity of any estimator/model used, I don't think it is significant - but I would be interested if anyone proves me wrong. And why do you think it doesn't matter in practice? Working with a smooth and convex function is more convenient (in terms of time and results) than working with a non-smooth or non-convex one. Actually, you can choose whatever function to minimize you'd like; it is just a trade-off between: which kind of errors you want to penalize; the complexity of the resulting optimization problem (mathematically speaking, a local or global solution); and the time it takes (related to the previous point). 1. Minimizing absolute values: With the absolute value, you penalize the distance between y and f(x) linearly. Roughly speaking, you might end up with a lot of data that looks like outliers, as long as enough points are well explained by your estimator f. Then, to minimize a function, one generally looks for the root(s) of its derivative. However, |x| is not differentiable at 0. You can work with subgradients and other more complex mathematical objects, which may result in a longer process due to more computation. 2. Minimizing squared values: In this case, the distance between y and f(x) is penalized more heavily. You'll tend to have fewer outliers (relative to f(x)). What is interesting is that $x^2$ is a smooth function (i.e. it has a well-defined derivative) and convex (with a global minimum). So I guess people believe that the square of errors is a good trade-off.
H: Backpropagation: In second-order methods, would the ReLU derivative be 0? And what is its effect on training? ReLU is an activation function defined as $h = \max(0, a)$ where $a = Wx + b$. Normally, we train neural networks with first-order methods such as SGD, Adam, RMSprop, Adadelta, or Adagrad. Backpropagation in first-order methods requires the first-order derivative, so $x$ differentiates to $1$. But if we use second-order methods, would ReLU's second derivative be $0$? Because $x$ differentiates to $1$, which differentiates again to $0$. Would it be an error? For example, with Newton's method, you'll be dividing by $0$. (I don't really understand Hessian-free optimization yet. IIRC, it's a matter of using an approximate Hessian instead of the real one.) What is the effect of this $h''=0$? Can we still train the neural network with ReLU using second-order methods? Or would it be non-trainable/error (nan/infinity)? For clarity, this is ReLU as $f(x)$: $$f(x) = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } x \ge 0 \end{cases} \qquad f'(x) = \begin{cases} 0 & \text{for } x < 0 \\ 1 & \text{for } x \ge 0 \end{cases} \qquad f''(x) = 0$$ AI: Yes, the ReLU second-order derivative is 0. Technically, neither $\frac{dy}{dx}$ nor $\frac{d^2y}{dx^2}$ is defined at $x=0$, but we ignore that - in practice an exact $x=0$ is rare and not especially meaningful, so this is not a problem. Newton's method does not work on the ReLU transfer function because it has no stationary points. It also doesn't work meaningfully on most other common transfer functions though - they cannot be minimised or maximised for finite inputs. When you combine multiple ReLU functions with layers of matrix multiplications in a structure such as a neural network, and wish to minimise an objective function, the picture is more complicated. This combination does have stationary points. Even a single ReLU neuron and a mean square error objective will have different enough behaviour such that the second-order derivative of a single weight will vary and is not guaranteed to be 0. Nonlinearities when multiple layers combine are what create a more interesting optimisation surface. This also means that it is harder to calculate useful second-order partial derivatives (or the Hessian matrix); it is not just a matter of taking second-order derivatives of the transfer functions. The fact that $\frac{d^2y}{dx^2} = 0$ for the transfer function will make some terms zero in the matrix (for the second-order effect from the same neuron activation), but the majority of terms in the Hessian are of the form $\frac{\partial^2 E}{\partial x_i \partial x_j}$ where $E$ is the objective and $x_i$, $x_j$ are different parameters of the neural network. A fully-realised Hessian matrix will have $N^2$ terms where $N$ is the number of parameters - with large neural networks having upwards of 1 million parameters, even with a simple calculation process and many terms being 0 (e.g. w.r.t. 2 weights in the same layer) this may not be feasible to compute. There are techniques to estimate the effects of second-order derivatives used in some neural network optimisers. RMSProp can be viewed as roughly estimating second-order effects, for example. The "Hessian-free" optimisers more explicitly calculate the impact of this matrix.
H: r - How to determine the correlation between unordered categorical variables and individuals? I have a matrix with several unordered categorical variables. Each row represents a type of individual. Each column represents the number of times each type of individual was found to be in that particular condition. Type coal cobalt concrete copper gold A 12 0 0 19 5 B 5 0 0 11 0 C 4 2 0 14 1 D 1 3 15 0 1 E 0 20 2 1 9 My question is very simple: I want to know if there is a correlation between the type of the individual (A, B or C) with a particular condition (copper, gold, etc). Which test should I use? If possible, I would like to get the answer by using R. Thanks! AI: If you have data sets $X_1,\cdots,X_n$ and $Y_1,\cdots,Y_n$, then you can compute their correlation with the following formula: $$Cor(X,Y) = \frac{\sum (X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum (X_i-\bar{X})^2\sum(Y_i-\bar{Y})^2}}$$ (where $\bar{X}$ denotes the average value of the $X_i$'s). This is accomplished in $R$ with the following command: cor(x,y) That being said, it is unclear what two data sets you are trying to find the correlation for. Finding the correlation between a type (A,B,C) and a condition (copper, gold, etc.) would not make any sense. You could, however, find the correlation between two different types (A and B, for example), or between conditions (copper and gold). Edit: I think you might want to do a test for independence between categorical variables...if this is the case then this is what you are looking for.
H: LSTMs: what are $Wx$ and $Uz$ in $\phi(Wx + Uz + b)$? Reading On Multiplicative Integration with Recurrent Neural Networks: "Despite their varying characteristics, most of them (RNNs) share a common computational building block, described by the following equation: $\phi(Wx + Uz + b)$, where $x \in \mathbb{R}^n$ and $z \in \mathbb{R}^m$ are state vectors coming from different information sources, $W \in \mathbb{R}^{d \times n}$ and $U \in \mathbb{R}^{d \times m}$ are state-to-state transition matrices, and $b$ is a bias vector." I don't get the meaning of $Wx$ and $Uz$. I know that $W$ typically denotes weights... what does this equation mean? AI: If you look deeper into LSTMs or GRUs, we observe that the gates (input, output, cell or forget, depending on the RNN) are calculated using an equation of exactly this form. For example, according to the deep learning tutorial on LSTMs, the input gate is $$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i).$$ Here $h_{t-1}$ is the hidden state vector and $x_t$ is the input vector, and $W_i$ and $U_i$ are the corresponding weight matrices for the input gate $i_t$, so $W_i x_t$ and $U_i h_{t-1}$ are simply matrix-vector products. Similarly, there are gates for output and forget. So in the paper, they recall the gist of RNNs and sum it up as a general equation: it is a common computational block in RNNs despite their minor differences. Refer to Colah's blog or WildML; I think they are among the best resources for understanding RNNs.
H: Should a model be re-trained if new observations are available? So, I have not been able to find any literature on this subject but it seems like something worth giving a thought: What are the best practices in model training and optimization if new observations are available? Is there any way to determine the period/frequency of re-training a model before the predictions begin to degrade? Is it over-fitting if the parameters are re-optimised for the aggregated data? Note that the learning may not necessarily be online. One may wish to upgrade an existing model after observing significant variance in more recent predictions. AI: Once a model is trained and you get new data which can be used for training, you can load the previous model and train onto it. For example, you can save your model as a .pickle file and load it and train further onto it when new data is available. Do note that for the model to predict correctly, the new training data should have a similar distribution as the past data. Predictions tend to degrade based on the dataset you are using. For example, if you are trying to train using twitter data and you have collected data regarding a product which is widely tweeted that day. But if you use use tweets after some days when that product is not even discussed, it might be biased. The frequency will be dependent on dataset and there is no specific time to state as such. If you observe that your new incoming data is deviating vastly, then it is a good practise to retrain the model. Optimizing parameters on the aggregated data is not overfitting. Large data doesn't imply overfitting. Use cross validation to check for over-fitting.
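A minimal sketch of the save/load-and-keep-training idea, using an estimator that supports incremental learning (the file name and synthetic data are just for illustration); models without a partial_fit method would instead be refit on the aggregated data.

import pickle
import numpy as np
from sklearn.linear_model import SGDClassifier

X_old, y_old = np.random.rand(500, 10), np.random.randint(0, 2, 500)
X_new, y_new = np.random.rand(100, 10), np.random.randint(0, 2, 100)

clf = SGDClassifier()
clf.partial_fit(X_old, y_old, classes=[0, 1])    # initial training

with open("model.pickle", "wb") as f:            # save today's model
    pickle.dump(clf, f)

with open("model.pickle", "rb") as f:            # later: load it again...
    clf = pickle.load(f)
clf.partial_fit(X_new, y_new)                    # ...and train further on the new observations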
H: location of the resampled data from SMOTE I am using SMOTE in Python to perform oversampling of the minor class in an unbalanced dataset. I would like to know the way SMOTE formats its output, that is, whether SMOTE concatenates the newly generated samples to the end of the input data and returns that as the output or whether the new synthetic data points are positioned randomly among the input data points. I'd appreciate your help. AI: There is not that much package managing the under-/over-sampling in python. So if you are using imbalanced-learn, it will return a numpy array which concatenate the original imbalanced set with the generated new samples in the minority class.
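A quick way to check this for yourself with imbalanced-learn (the dataset below is synthetic, and older versions of the package name the method fit_sample instead of fit_resample): in current versions the original rows come first, unchanged, with the synthetic minority samples appended after them, which the last line verifies.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

print(Counter(y))                      # imbalanced, roughly 900 vs 100
print(Counter(y_res))                  # balanced after oversampling
print((X_res[:len(X)] == X).all())     # True if the originals are kept first, unchanged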
H: Where can I find a software library for pairwise matching (ideally, Python, R, Java)? I am looking for a library that implements a pairwise ranking algorithm. For example, if I have 200 writing samples from 100 people (two samples from each individual) and I want to identify which samples belong together (i.e., were written by the same person), what library could I use? AI: If you can transform those sentences into number vectors (e.g. into a bag of words or tf-idf representation), I guess you could use k-Means or hierarchical clustering functionality from Orange, a GUI and machine learning library written in Python. It also has an add-on for text mining specifically, but I cannot attest to it as I haven't tried it yet.
H: Is there an R package which uses neural networks to explicitly model count data? Ripley's nnet package, for example, allows you to model count data using a multi nomial setting but is there a package which preserves the complete information relating to a count? For example, whereas an ordinal multinomial model preserves the ordering of the integers that make up the count, a fully developed model of count data as a GLM such as Poisson or Negative Binomial Regression includes how large the integer counts are in relation to each other. Another phrasing might be, 'What kind of models come closest to combining the advantages of neural networks, in terms of, as an example, easily modelling non-linearity in the predictors, and count data GLMs, which are good at taking into account that the data is in fact a count?' AI: I skimmed over a paper recently that aims to use Neural Networks as Poisson regression. The method they propose is basically a standard Multi-Layer Perceptron where they use a different loss function, namely: $$E = -\sum_{n=1}^N[-t_n + y_nlog(t_n)]$$ This is a version without regularization to prevent the overfitting, they use regular weight decay. They mention that they wrote it in R and Matlab but I don't have a clue if it's available online somewhere, but any neural network package where you can pass your own loss function should suffice. http://www.mathstat.dal.ca/~hgu/Neural%20Comput%20&%20Applic.pdf
H: Correlating company entities between different data sources I have two datasets with information about companies and my task is to correlate (match) companies from dataset A to companies in dataset B. Datasets are from different sources. The columns in both datasets include fields such as company_name, country, state, city, address, zip. All companies are in the US. The problem is that even though we have the company_name on both sides - the names in A aren't equal to the names in B. So for example on A you might have Google and on B you might have Google Inc. Another example is Amazon and Amazon LLC. etc, there are many different variations to that. These aren't typos, but just different representations of the same entity, one is more common and the other is more formal. The addresses themselves aren't always the same as well. Probably b/c a company might have more than one address. (at least large companies do) What is the best approach to correlate (match) these entities b/w these two data sources? There are about 500k companies in each dataset. A few ideas come to mind: Soundex function on the company name (tried it, not great) Levenshtein distance b/w names of each two potential matches (didn't try it yet, but it is O(n*m)) Levenshtein b/w the concatenated values of company names state, city, etc (also O(n*m)) Geocode the address and build a function that takes into account the Levenshtein distance as well as geographical distance. Clean up the "INC", "LLC" and all other extensions and run any of 1-4. What's your take? Any other suggestions? Thanks! AI: First of all I would clean up INC, LLC, BV etcetera from both the sources. After this there are a few options. Since Levenshtein is a metric you can use metric trees to search your space more efficiently (about O(n*log(m))). This will still be very slow so there are approximations available, for example the cosine similarity on bi-grams of the names. You can do this using matrix multiplication which is both very efficient and easily distributable. Instead of taking the highest similarity you could take the top-n and do further analysis on these, for example the real Levenshtein distance. The fact that you have additional information could be useful, you could add this to your similarity function in some way but this will be guess work. Most of these ideas I got from a PyData meetup that was recorded, a speaker from ING (a big bank) discusses the exact problem you have albeit on a bigger set with less additional information: https://www.youtube.com/watch?v=4ohTsblxOJs
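A small sketch of the cleanup-plus-character-bigram idea with invented names; for 500k records per side you would keep everything sparse and take only the top-n matches per row (or block on state/zip first) rather than building a full similarity matrix.

import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def clean(name):
    # strip common legal suffixes before comparing
    return re.sub(r"\b(inc|llc|corp|co|ltd)\b\.?", "", name.lower()).strip()

a = ["Google", "Amazon", "International Business Machines"]
b = ["Google Inc", "Amazon LLC", "IBM Corp"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 2))
vec.fit([clean(x) for x in a + b])
sim = cosine_similarity(vec.transform([clean(x) for x in a]),
                        vec.transform([clean(x) for x in b]))
print(pd.DataFrame(sim, index=a, columns=b).round(2))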
H: Sklearn feature selection stopping criterion (SelectFromModel) Sklearn has several functions for feature selection that lets the user determine the size of the chosen subset. An example of this is SelectKBest where the user determines the value of "k", which is the number of top performing features. Does anyone know what stopping criterion SelectFromModel uses when it selects a feature subset? The documentation mentiones a "threshold"-parameter that determines which features are important enough, and that this parameter is set to "median" OR "mean" by default. AI: Some regression/classification models can also calculate feature importances - for example RandomForestClassifier models have a property feature_importances_ and LogisticRegression models have a coef_ property. There are many more models that can provide feature importances - but all of them either have a coef_ or feature_importances_ property. What SelectFromModel does is to check if the object you pass as "estimator" has one of these two properties and to return features where this property is higher than the specified threshold (or the mean/median). For example if you pass a RandomForestClassifier to SelectFromModel, it will return all features where the random forest's feature_importances_ property is higher than the specified threshold. The same happens if you pass a LogisticRegression model, except that it'll compare the coef_ property with the threshold instead. Selecting the best value for the threshold can be done using a grid- or randomized search.
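A minimal sketch of that behaviour (made-up data, with the threshold passed explicitly as 'mean'):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# keep only features whose feature_importances_ value exceeds the mean importance
selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0),
                           threshold='mean')
selector.fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)   # fewer columns than X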
H: Using RNN (LSTM) for predicting one future value of a time series I have been reading several papers, articles and blog posts about RNNs (LSTM specifically) and how we can use them to do time series prediction. In almost all examples and codes I have found, the problem is defined as finding the next x values of a time series based on previous data. What I am trying to solve is the following: Assuming We have t values of a time series, what would be its value at time t+1? So using different LSTM packages (deeplearning4j, keras, ...) that are out there, here is what I am doing right now: Create a LSTM network and fit it to t samples. My network has one input and one output. So as for input I will have the following patterns and I call them train data: t_1,t_2 t_2,t_3 t_3,t_4 The next step is to use for example t_4 as input and expect t_5 as output then use t_5 as input and expect t_6 as output and so on. When done with prediction, I use t_5,t_6 to update my model. My question: Is this the correct way of doing it? If yes, then I have no idea what does batch_size mean and why it is useful. Note: An alternative that comes to my mind is something similar to examples which generate a sequence of characters, one character at a time. In that case, batch_size would be a series of numbers and I am expecting the next series with the same size and the one value that I'm looking for would be the last number in that series. I am not sure which of the above mentioned approaches are correct and would really appreciate any help in this regard. Thanks AI: The way you are doing it is just fine. The idea in time series prediction is to do regression basically. Probably what you have seen other places in case of vector, it is about the size of the input or basically it means feature vector. Now, assuming that you have t timesteps and you want to predict time t+1, the best way of doing it using either time series analysis methods or RNN models like LSTM, is to train your model on data up to time t to predict t+1. Then t+1 would be the input for the next prediction and so on. There is a good example here. It is based on LSTM using the pybrain framework. Regarding your question on batch_size, at first you need to understand the difference between batch learning versus online learning. Batch size basically indicates a subset of samples that your algorithms is going to use in gradient descent optimization and it has nothing to do with the way you input the data or what you expect your output to be. For more information on that I suggest you read this kaggle post.
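To make the windowing you describe concrete, here is a small sketch (plain NumPy, a lookback of 1, and a placeholder series); the final reshape gives the (samples, timesteps, features) layout most LSTM layers expect:
import numpy as np

series = np.sin(np.linspace(0, 20, 200))   # placeholder time series t_1 ... t_n
lookback = 1

X, y = [], []
for i in range(len(series) - lookback):
    X.append(series[i:i + lookback])   # e.g. t_1
    y.append(series[i + lookback])     # e.g. t_2
X = np.array(X).reshape(-1, lookback, 1)   # (samples, timesteps, features) for an LSTM
y = np.array(y)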
H: What does normalizing and mean centering data do? Are there any concerns to normalizing data to be within the range 0 - 1 and mean centering the data as well? Does it matter which comes first? If you do one, is the other not required? AI: If you don't center before you normalize, you don't take advantage of the full [-1,1] range if your input is non-negative. The combination of centering and normalization is called standardization. Sometimes one normalizes by the standard deviation, and other times by just the range (max-min). The latter is called feature scaling. The effect is much the same. Normalizing by the range is easier computationally. Normalizing by the standard deviation fixes the sample variance, which is nice from a statistical perspective. When using the standard deviation, the subtraction is usually against the sample mean rather than the minimum. There are several reasons for performing standardization. Sometimes we are interested in relative rather than absolute values. Standardization achieves invariance to these irrelevant differences. By explicitly preprocessing the data to reflect this disinterest, we relieve the model from having to learn it, allowing us to use a simpler one. Another reason is computational; it reduces the condition number -- you can think of this as the skewness or niceness of the loss surface -- making optimization easier and faster.
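For what it's worth, a small scikit-learn sketch of the two operations discussed above (assuming a NumPy feature matrix X):
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 500.0]])   # placeholder data

X_std = StandardScaler().fit_transform(X)     # center to mean 0, scale to unit variance (standardization)
X_scaled = MinMaxScaler().fit_transform(X)    # scale by the range to [0, 1] (feature scaling)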
H: Deep learning - rule generation I wanted to know if there is any methodology in deep/machine learning where, given a set of input/output values, it can derive rules for the same. Let's say I generate training input and output by $y=x^2$:
i/p | o/p
0 | 0
2 | 4
. | .
1000 | 1000000
It should sort of generate a rule like $y=x*x$. AI: One way of stating what you are looking for is to find a simple mathematical model to explain your data. One thing about neural networks is that (once they have more than 2 layers, and enough neurons total) they can in theory emulate any function, no matter how complex. This is useful for machine learning as often the function we want to predict is complex and cannot be expressed simply with a few operators. However, it is kind of the opposite of what you want - the neural network behaves like a "black box" and you don't get a simple function out, even if there is one driving the data. You can try to fit a model (any model) to your data using very simple forms of regression, such as linear regression. So if you are reasonably sure that your system is a cubic equation $y = ax^3 + bx^2 + cx + d$ then you could create a table like this:
bias | x | x*x | x*x*x | y
1 | 0 | 0 | 0 | 0
1 | 2 | 4 | 8 | 4
1 | 3 | 9 | 27 | 9
1 | . | . | . | .
1 | 100 | 10000 | 1000000 | 10000
and then use a linear regression optimiser (scikit-learn's SGD optimiser linked). With the above data this should quickly tell you $b=1, a,c,d=0$. But what it won't tell you is whether your model is the best possible or somehow "correct". You can scan for more possible formulae by creating more columns - any function of any combination of inputs (if there is more than one) that could be feasible. However, the more columns you add in this way, the more likely it is you will find an incorrect overfit solution that matches all your data using a clever combination of parameters, but which is not a good general predictor. To address this, you will need to add regularisation - a simple L1 or L2 regularisation of the parameters will do (in the link I gave to scikit-learn, the penalty argument can control this), which will penalise large parameters and help you home in on a simple formula if there is one.
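As a hedged sketch of that recipe in scikit-learn (I use Lasso for the L1 penalty rather than the SGD optimiser mentioned above; the data is generated from y = x^2):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

x = np.linspace(-10, 10, 100).reshape(-1, 1)
y = x.ravel() ** 2                               # the hidden rule we want to recover

X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x)   # columns: x, x^2, x^3
model = Lasso(alpha=0.1).fit(X_poly, y)
print(model.coef_, model.intercept_)             # the coefficient on x^2 should be close to 1, the others near 0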
H: Represent outlier days I have hourly power consumption data for 30 days. On representing each day's data as a separate line, I get a plot like the one shown, and I want to highlight the days with abnormally high consumption (in other words, the outlier days). I think the current plot is too congested. Is there any better representation to show the outlier days? AI: One idea would be to plot the daily average power consumption in a bar plot. For a finer visualization of day-hour peaks, you can plot the data in a matrix (heatmap) format, with days on one axis and hours of the day on the other.
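A small matplotlib sketch of that matrix view, assuming the data is already arranged as a 30 x 24 day-by-hour array:
import numpy as np
import matplotlib.pyplot as plt

consumption = np.random.rand(30, 24) * 5   # placeholder: rows = days, columns = hours

plt.imshow(consumption, aspect='auto', cmap='viridis')
plt.colorbar(label='power consumption')
plt.xlabel('hour of day')
plt.ylabel('day')
plt.show()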
H: how to generate sample dataset for classification problem I am a newbie to data science. I have a 'short text' categorization problem where input variables are either unstructured texts (names, definition, description etc) or categorical. There is not much semantic to the fields as they are product names, territory name, sales order type etc. Issue is I do not have any sample data set from which I can derive training, test, validation set or divided it into k-fold for cross validation. So how should I generate sample data? I have about 20 target classes. I can classify some dataset using regex or lucene rule based matches and manually verify them and make sure each class have equal amount of samples. But I am open to other suggestion. AI: I know this isn't answering the question that you actually asked, but I suggest that you NOT generate data for your 'short text' categorization problem. Generated data can work for certain cases when data scientists who are very familiar with an algorithm want to demonstrate a specific feature, but there is a hokeyness that may lead you astray as someone new to data science and machine learning. I suggest you avail yourself of some free and open data here, or an entire stack exchange dedicated to open data. Real data will give you experience with less contrived problems and you can often try out similar ML models on very different data sets in order to gain experience with the variations and pitfalls of data science. Be creative... you might not find exactly what you want, but you can subset the data by projecting out a few text fields and using them to classify another field. Addendum update: With your recent edit, it is now apparent that you are seeking to turn your unsupervised training data into supervised training data in order to train a supervised learning classification model. The method that you suggested, "classify some dataset using regex or lucene rule based matches and manually verify them", is a deterministic unsupervised method without the human verification step. Without the human verification of the target data that you have created, I would not consider feeding the derived targets back into the classification algortihm as your results will only be as strong as the derived target data and predictions will show similar errors. Instead, you should think about a semi-supervised learning method where you perhaps employ a clustering algorithm and then label the clusters with the target variable. With human verification of the target data that you have created, this will work just fine as training data for a classification model. The only issue is that this can become tedious. There are mechanical turks (humans paid to perform repetitive tasks) that you can hire to perhaps perform the labeling for you, which may be a more scalable option. Hope this helps!
H: How are 1x1 convolutions the same as a fully connected layer? I recently read Yan LeCuns comment on 1x1 convolutions: In Convolutional Nets, there is no such thing as "fully-connected layers". There are only convolution layers with 1x1 convolution kernels and a full connection table. It's a too-rarely-understood fact that ConvNets don't need to have a fixed-size input. You can train them on inputs that happen to produce a single output vector (with no spatial extent), and then apply them to larger images. Instead of a single output vector, you then get a spatial map of output vectors. Each vector sees input windows at different locations on the input. In that scenario, the "fully connected layers" really act as 1x1 convolutions. I would like to see a simple example for this. Example Assume you have a fully connected network. It has only an input layer and an output layer. The input layer has 3 nodes, the output layer has 2 nodes. This network has $3 \cdot 2 = 6$ parameters. To make it even more concrete, lets say you have a ReLU activation function in the output layer and the weight matrix $$ \begin{align} W &= \begin{pmatrix} 0 & 1 & 1\\ 2 & 3 & 5\\ \end{pmatrix} \in \mathbb{R}^{2 \times 3}\\ b &= \begin{pmatrix}8\\ 13\end{pmatrix} \in \mathbb{R}^2 \end{align} $$ So the network is $f(x) = ReLU(W \cdot x + b)$ with $x \in \mathbb{R}^3$. How would the convolutional layer have to look like to be the same? What does LeCun mean with "full connection table"? I guess to get an equivalent CNN it would have to have exactly the same number of parameters. The MLP from above has $2 \cdot 3 + 2 = 8$ parameters. AI: Your Example In your example we have 3 input and 2 output units. To apply convolutions, think of those units having shape: [1,1,3] and [1,1,2], respectively. In CNN terms, we have 3 input and 2 output feature maps, each having spatial dimensions 1 x 1. Applying an n x n convolution to a layer with k feature maps, requires you to have a kernel of shape [n,n,k]. Hence the kernel of your 1x1 convolutions have shape [1, 1, 3]. You need 2 of those kernels (or filters) to produce the 2 output feature maps. Please Note: $1 \times 1$ convolutions really are $1 \times 1 \times \text{number of channels of the input}$ convolutions. The last one is only rarely mentioned. Indeed if you choose as kernels and bias: $$ \begin{align} w_1 &= \begin{pmatrix} 0 & 1 & 1\\ \end{pmatrix} \in \mathbb{R}^{3}\\ w_2 &= \begin{pmatrix} 2 & 3 & 5\\ \end{pmatrix} \in \mathbb{R}^{3}\\ b &= \begin{pmatrix}8\\ 13\end{pmatrix} \in \mathbb{R}^2 \end{align} $$ The conv-layer will then compute $f(x) = ReLU\left(\begin{pmatrix}w_1 \cdot x\\ w_2 \cdot x\end{pmatrix} + \begin{pmatrix}b_1\\ b_2\end{pmatrix}\right)$ with $x \in \mathbb{R}^3$. Transformation in real Code For a real-life example, also have a look at my vgg-fcn implementation. The Code provided in this file takes the VGG weights, but transforms every fully-connected layer into a convolutional layers. The resulting network yields the same output as vgg when applied to input image of shape [244,244,3]. (When applying both networks without padding). The transformed convolutional layers are introduced in the function _fc_layer (line 145). They have kernel size 7x7 for FC6 (which is maximal, as pool5 of VGG outputs a feature map of shape [7,7, 512]. Layer FC7 and FC8 are implemented as 1x1 convolution. "Full Connection Table" He might refer to a filter/kernel which has the same dimension as the input feature map. 
In both cases (the code and your example) the spatial dimensions are maximal in the sense that the spatial dimension of the filter is the same as the spatial dimension of the input.
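A tiny NumPy sketch of the equivalence for the example above: applying the two 1x1x3 kernels at every spatial position of a larger feature map is exactly the dense-layer computation repeated at each location.
import numpy as np

W = np.array([[0, 1, 1], [2, 3, 5]])    # the two 1x1x3 kernels, stacked
b = np.array([8, 13])

feature_map = np.random.rand(4, 4, 3)   # a 4x4 input with 3 channels

# 1x1 convolution: the dense layer applied independently at every spatial position
out = np.maximum(0, feature_map @ W.T + b)   # shape (4, 4, 2), ReLU applied

x = feature_map[2, 1]                                      # any single position...
print(np.allclose(out[2, 1], np.maximum(0, W @ x + b)))    # ...matches the fully connected layer: True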
H: Predicting Age of Birth I have a pet project to figure out the birth year of a significant person in history. I'm collecting a lot of data on other people with similar status during that time period. I have data such as education length, year married, year of child bearing, information about siblings and age difference between each child, marriages, etc... The age of this person is disputed between two years, one making the person very old, and the other making the person young. I want to regress the persons age. My first idea was to draw gaussians over each variable and see if one is more likely to be an outlier than the other. What way would you tackle this problem? AI: It seems like you have a classical bayesian problem. You have some sort of prior distribution, a distribution over years of birth, your prior distribution is bimodal with peaks at the two years, you can probably use a convolution of two normal distributions to model this variable. Then have it spit out a posterior distribution after you feed in some data. The real problem that I have with this analysis is it seems your features aren't particularly good. It is true these vars might have information about birth year, for example for the 20th century the average age of first marriage has steadily been increasing. But I suspect that the signal is going to be fairly weak. Essentially, if I tell you that I got married at age 24, had my first child at 26, and that my older brother is 3 years older than me and my younger sister is 2 years younger than me, can you tell me in what year was I born, 1956 or 1989? I suspect that without additional data this information that I provided would be completely useless, mostly because it is a very noisy signal. That information could apply equally to someone born in 1956 or 1989. It isn't very helpful. Essentially, what I am saying is that when you update your prior, it isn't going to change very much. (Your posterior distribution would look very similar to the prior distribution.) Instead of doing some mustache twirling over what is the right algorithm to crack this problem, I think a much more fruitful exercise would be to think up some better features.
H: Prediction with non-scalar output (label) I have recently confronted with a (at least for me) new kind of ML problem, where the output of the model should be a vector/matrix (depending on the interpretation, but there is no difference actually), not a scalar as usual. This is totally unknown for me. What kind of approach should one apply here? Are the "usual" (scalar-based) models applicable on this problem? (Just for the sake of completeness, the problem is an image segmentation task where the model should decide first: if there a given pattern on the picture?, second: if so, where is it? - In latter case, it should define the borders of the subset pixels). AI: Neural Networks can have a vector or matrix as output layer, image segmentation is a well researched topic and deep learning (as most things concerning images) are the state-of-the-art. You will need (a lot of) training examples where the pattern is found, and where. This could be a bounding box, or per pixel if it is part of the pattern or not (this will generate a matrix equally sized to your input). To see if the pattern is found you could construct a second network that is just a binary classifier, or you could try to see if your pixel-based network will output almost only zeros in case of no pattern. In this case you will need negative examples as well.
H: What is the simplest neural network for the simplest nonlinear function $f(x,y) = xy$? How do I capture $y = x_1 x_2$ using a simple neural network with commonly used activation functions? I assume that I need at least one hidden layer. What mix of commonly used activation functions should I use? So far, I have used $\max(0,x)$ and $\tanh$ activations for the hidden layer, but the gradient descent diverges very quickly. Some thoughts which may be useful: $$ (x_1 x_2)^2 = \exp(\log(x_1^2) + \log(x_2^2)) $$ AI: Probably you need to do one or more of: Decrease the learning rate. Diverging loss is often a symptom of a learning rate that is too high. Increase the number of hidden neurons. The output function will be the combination of many "patches", each created by a neuron that has learnt a different bias. The shape and quality of each patch is determined by the activation functions, but almost any nonlinear activation function used in an NN library should work to make a universal function approximator. Normalise inputs. If you are training with high values for $x_1$ or $x_2$, this could make it harder to train unless you normalise your training data. For your purposes, it might be an idea to skip the need for normalisation by training with $x_1$ and $x_2$ in the range -2.0 to 2.0. It doesn't change your goal much to do this, and removes one potential problem. You should note that a network trained against an open-ended function like this (where there are no logical bounds on the input params) will not learn to extrapolate. It never "learns" the function itself, but an approximation of it close to the supplied examples. When you supply a $(x_1, x_2)$ input far from the training examples, the output is likely to completely mismatch your original function.
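Putting those suggestions together, a rough Keras sketch (untested against your exact setup; nb_epoch is the Keras 1.x argument name, newer versions use epochs) might look like:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

X = np.random.uniform(-2.0, 2.0, size=(5000, 2))   # x1, x2 already kept in a small range
y = X[:, 0] * X[:, 1]

model = Sequential()
model.add(Dense(32, input_dim=2, activation='tanh'))   # wider hidden layer than strictly necessary
model.add(Dense(1, activation='linear'))               # regression output, no squashing
model.compile(optimizer=Adam(lr=0.01), loss='mse')     # modest learning rate
model.fit(X, y, nb_epoch=50, batch_size=32, verbose=0)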
H: How to handle the CEO expectations from a company that's new to data science? I'm new here. I'm about to have a final interview for a data scientist position at a company (it's in the e-commerce field) that is new to data science. It's a pretty new position for the company, and from the interviews I had so far, I noticed that they don't fully understand what they want from a data scientist. They barely know what data science is. I explained to them the "standard" data science workflow (Ask a Question, Get the Data, Explore the Data, Model the Data and Communicate the Results). But I don't think they got it. I want to set the proper expectations, so the company and I can agree on my job description, and on what they can expect from me in one month, three months, six and twelve months. So, how do you handle the expectations (especially the CEO's) from a company that's new to data science? AI: I'll tell you what has worked for me: practical examples. They have probably already read about what data science is in general and what the standard procedures are. What they have not seen is someone in front of them explaining how innovative (and useful for business!) data science really is. Follow the "say what you would say to your granny to tell her how cool your work is" advice. People want to know what you have actually done, independently of the algorithms or procedures you have used. Good luck!
H: Recommendation system for an eCommerce healthcare portal I am trying to build a recommendation system. My system is basically an e-commerce application where our customers answer a bunch of questions related to healthcare (their basic health-related questions). Based on their answers, we recommend some products. This recommendation process is based on a conventional rule-based approach; think of it as a bunch of if-else conditions. Now I am playing around with some machine learning techniques and want to see if this approach will add any value to our healthcare system. I am at the very starting point and can use any suggestions. The suggestions could aim towards the following: Any product that you feel leverages ML techniques with respect to health (considering HIPAA constraints) Any product that you feel leverages ML techniques with respect to health What could be the first step towards building such a system? AI: Your recommendation system will be designed to tell the customer what product they should choose; however, this doesn't account for what products the customer likes. An ML method could take all the input parameters from the recommendation system and provide a recommended product based on what similar users liked. There's no specific ML technique for considering HIPAA constraints. This would come into play more in the pre-processing stage. For example, you might not be able to acquire specific birth dates and addresses, but maybe you can get an age range and a zip code (first 3 digits only). Logistic regression is frequently used in healthcare for binary classification. There are lots of tutorials online for building ML models. Kaggle has a good tutorial for Python and R using Titanic survival data.
H: Is TensorFlow a complete Machine Learning Library? I am new to TensorFlow and I need to understand the capabilities and shortcomings of TensorFlow before I can use it. I know that it is a deep learning framework, but apart from that which other machine learning algorithms can we use with tensor flow. For example can we use SVMs or random forests using TensorFlow? (I know this sounds crazy) In short, I want to know which Machine Learning Algorithms are supported by TensorFlow. Is it just deep learning or something more? AI: This is a big oversimplification, but there are essentially two types of machine learning libraries available today: Deep learning (CNN,RNN, fully connected nets, linear models) Everything else (SVM, GBMs, Random Forests, Naive Bayes, K-NN, etc) The reason for this is that deep learning is much more computationally intensive than other more traditional training methods, and therefore requires intense specialization of the library (e.g., using a GPU and distributed capabilities). If you're using Python and are looking for a package with the greatest breadth of algorithms, try scikit-learn. In reality, if you want to use deep learning and more traditional methods you'll need to use more than one library. There is no "complete" package.
H: Choosing the right parameters to train a Tf-Idf vectoriser I'm very new to the DS world, so please bear with my ignorance. I'm trying to analyse user comments in Spanish. I have a somewhat small dataset (in the 100k's -- is that small?), and when I run the algorithm in a, let's say, naïve way (scikit-learn's default options + remove accents and no vocabulary / stop words) I get very high values for very common and low-value words (such as the Spanish equivalents of "to", "at", etc.). What would be the most effective way to train the vectoriser on a 100k-long corpus of ~200-char-long docs? I was thinking of using a larger corpus of Spanish text or looking for a stop-word-removal implementation for Spanish, but would love to have some expert advice before jumping into it. Thanks! AI: There are no silver bullets. But here are some suggestions: Use a better stopwords vocabulary. If you still have words like "to" and "at", then you are either not removing stopwords or using a lousy vocabulary. Try using the Spanish stopwords from nltk: from nltk.corpus import stopwords stopwords.words('spanish') Use max_df < 1. This will truncate words that appear in more than that percentage of documents. The TF-IDF part that punishes common words (transversal to all documents) is the IDF part of TF-IDF, which means inverse document frequency. Several functions may be used as your IDF function. Usually, IDF=$\log\frac{\text{#documents}}{\text{#documents where word appears}}$ is used. You could try a more punitive IDF function. sklearn does not seem to allow you to specify it, but you can use nltk or gensim or easily implement your own TF-IDF vectorization. It needs no more than five lines of code. I would try each of these suggestions in this order, and stop when it is good enough. It sounds like using a better stopwords vocabulary will be good enough for you. EDIT: I forgot to mention, but of course you can add more uninteresting words to the stopwords list, if you have the need to do so. You may also want to first evaluate the impact TF-IDF is having on your counts, for example by inspecting the learned IDF weights. Something like this should work:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
m = TfidfVectorizer().fit(docs)
terms = np.array(m.get_feature_names())
idf = m.idf_
for i in np.argsort(idf)[:20]:   # terms with the lowest IDF, i.e. the most common ones
    print(terms[i], idf[i])
You may have to adjust the code. Try it with IDF disabled and enabled, and other tweaks.
H: Scikit Learn OneHotEncoded Features causing error in classifier I’m trying to prepare data for input to a Decision Tree and Multinomial Naïve Bayes Classifier. This is what my data looks like (pandas dataframe): Label Feat1 Feat2 Feat3 Feat4 0 1 3 2 1 1 0 1 1 2 2 2 2 1 1 3 3 3 2 3 I have split the data into dataLabel and dataFeatures. Prepared dataLabel using dataLabel.ravel() I need to discretize features so the classifiers treat them as being categorical not numerical. I’m trying to do this using OneHotEncoder: enc = OneHotEncoder() enc.fit(dataFeatures) chk = enc.transform(dataFeatures) from sklearn.naive_bayes import MultinomialNB mnb = MultinomialNB() from sklearn import metrics from sklearn.cross_validation import cross_val_score scores = cross_val_score(mnb, Y, chk, cv=10, scoring='accuracy') I get this error: bad input shape (64, 16) This is the shape of label and input: dataLabel.shape = 72 chk.shape = 72,16 Why won't the classifier accept the onehotencoded features? EDIT: Adding how I got dataFeatures dataFeatures = data[['Accpred', 'Gyrpred', 'Barpred', 'altpred']] Y = dataLabel.ravel() AI: scores = cross_val_score(mnb, Y, chk, cv=10, scoring='accuracy') You have your Y and chk switched. That's it. :) The signature of cross_val_score is sklearn.cross_validation.cross_val_score(estimator, X, y). X is a matrix and y is a 1D vector with your class labels. unlike in R, most (or all?) sklearn models do not support categorical variables. Most of the time, encoding your feature matrix X into what is called one-hot encoding is good enough. Notice that, in some models, this hack is not the same as true native categorical support, and the performance of the model will be worse. Invert One-Hot Encoding Sklearn does not seem to have an easy method to invert the one-hot encoding. It is not trivial how to do this. I found this suggestion: def inverse(enc, out, shape): return np.array([enc.active_features_[col] for col in out.sorted_indices().indices]).reshape(shape) - enc.feature_indices_[:-1] Example: import numpy as np from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder() X = np.array([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])) Z = enc.fit_transform(X) print(inverse(enc, Z, X.shape)) # [[0 0 3] # [1 1 0] # [0 2 1] # [1 0 2]] print(X) # [[0 0 3] # [1 1 0] # [0 2 1] # [1 0 2]] Notice: This only works when HotOneEncoding(sparse=True) (default) because it uses scipy sparse matrix methods (this could be changed by making the code only use numpy methods), but this is probably what you want since working with a dense matrix will kill your memory anyhow I think this will only work if your variables are within the range [0,something] because you lose that information in the transformation (no work-around for this other than you using something like DictVectorizer which offers you more control over the transformation.
H: Is it possible to (de)activate a specific set of cells in jupyter? I have a jupyter notebook and I would like to perform several runs, the code I want to run depending on the results of the previous runs. I divided my notebook into several cells, and for each run I would like to select which cell should be executed and which one should not. Is there a functionality which looks like "run all cells except those I explicitly deactivate"? AI: Welcome to DataScience.SE! This is not currently possible. You could change the cells to Raw.
H: statistics or robust statistics for identifying multivariate outliers For the single variate data sets, we can use some straightforward methods, such as box plot or [5%, 95%] quantile to identify outliers. For multivariate data sets, are there any statistics that can be used to identify outliers? AI: Multivariate outlier detection can be quite tricky and even 2D data can be difficult to visually decipher at times. You are spot-on in looking for robust statistical treatments analogous to 95% quantiles. Where as normally distributed data naturally aligns with the chi square distribution, the gold standard for robust statistics in n dimensions would be to use Mahalanobis distances and then eliminate data beyond 95% or 99% quantiles in Mahalanobis space. Plug and play capabilities are available in scikit-learn and in R. Here is an excellent theoretical and practical treatment of the methodology: And here is a big picture viewpoint with some heuristics. Additionally there is a very sophisticated treatments called PCOUT for outlier detection that instead rely on principal component decomposition. There is a corresponding R package, but the theoretical treatment is behind a paywall: P. Filzmoser, R. Maronna, M. Werner. Outlier identification in high dimensions, Computational Statistics and Data Analysis, 52, 1694-1711, 2008 Hope this helps!
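For a concrete starting point, here is a minimal scikit-learn sketch of the robust Mahalanobis approach (MinCovDet gives a robust covariance estimate; the 97.5% chi-square quantile is one common cutoff):
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

X = np.random.randn(500, 4)             # placeholder multivariate data

mcd = MinCovDet().fit(X)                # robust location and scatter estimate
d2 = mcd.mahalanobis(X)                 # squared robust Mahalanobis distances

cutoff = chi2.ppf(0.975, df=X.shape[1])
outliers = np.where(d2 > cutoff)[0]     # indices of points flagged as multivariate outliers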
H: Should I use regularization every time? I have learned regularization for linear and logistic regression, but when I apply it in my code my estimates generally do not change. I mean, it looks ineffective. I know it's meant for overfitting. So if I use it in my code every time, could it be a problem? Or is this a good thing? AI: Normally you use regularization. The exception is if you know the data generating process and can model it exactly. Then you merely estimate the model parameters. In general you will not know the process, so you will have to approximate with a flexible enough model. If the model is not flexible enough you will not need to regularize, but you won't approximate well anyway. If you use a more flexible model, you will get closer on average (low bias) but you will have more variance, thus the need for increased regularization. In other words, the amount of regularization you need depends on the model. This is related to the bias-variance trade-off. Welcome to DataScience.SE.
H: Is there an open source implementation for bag-of-visual-words? I'm not quite sure I understand the bag-of-visual-words representation, so I may misformulate my question. What I'm currently looking for is an open source library (possibly with a Python API). I give it pictures as input, and its output is a set of (sparse) features, so that I can do my own processing based on these features. Ideally, I would like this piece of software to work without an internet connection (so that I can work with it while on a plane). EDIT: I just learnt that Facebook recently (summer 2016) released some of its image recognition code (namely multipathnet, deepmask and sharpmask). AI: There is an implementation of BoVW in OpenCV. You can find the documentation here: http://docs.opencv.org/2.4/modules/features2d/doc/object_categorization.html
H: From developer to data scientist I code a lot for the web, games and some basic ML scripts. Now I would like to learn about data science. This post is a good starting point, but I would like some readings. I would like advice on books for a beginner (maths, tools, whatever). I've found these from O'Reilly: Machine Learning for Hackers, by Drew Conway and John Myles White; Agile Data Science, by Russell Jurney; R Cookbook, by Paul Teetor. (Sorry, my reputation is too low to let me post more than 1 link, haha.) Do you recommend these readings? Are there other must-read books? Thanks. AI: I suggest this guide for a good introduction to the resources available to you to learn to do data science. You are starting at a good time; there are loads of excellent (and mostly free) resources to teach you this interesting field.
H: Merging large CSV files in pandas I have two CSV files (each of the file size is in GBs) which I am trying to merge, but every time I do that, my computer hangs. Is there no way to merge them in chunks in pandas itself? AI: No, there is not. You will have to use an alternative tool like dask, drill, spark, or a good old fashioned relational database.
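For example, a hedged dask sketch (file names and the join key are placeholders); dask reads the CSVs lazily in partitions, so the merge does not have to fit in memory at once:
import dask.dataframe as dd

a = dd.read_csv('file_a.csv')   # lazily loaded in partitions
b = dd.read_csv('file_b.csv')

merged = a.merge(b, on='key_column', how='inner')   # same API as pandas.merge
merged.to_csv('merged-*.csv', index=False)          # writes one CSV per partition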
H: Replacing column values in pandas I have a data frame which has three columns as shown below. There are about 10,000 entries in the data frame and there are duplicates as well.
Hospital_ID District_ID Employee
Hospital 1 District 19 5
Hospital 1 District 19 10
Hospital 1 District 19 6
Hospital 2 District 10 50
Hospital 2 District 10 51
Now I want to remove the duplicates, but I want to replace the values in my original data frame by their mean, so that it looks like this:
Hospital 1 District 19 7.0000
Hospital 2 District 10 50.5000
AI: As Emre already mentioned, you may use the groupby function. After that, you should apply reset_index to move the MultiIndex back into the columns:
import pandas as pd
df = pd.DataFrame(
    [['Hospital 1', 'District 19', 5],
     ['Hospital 1', 'District 19', 10],
     ['Hospital 1', 'District 19', 6],
     ['Hospital 2', 'District 10', 50],
     ['Hospital 2', 'District 10', 51]],
    columns=['Hospital_ID', 'District_ID', 'Employee']
)
df = df.groupby(['Hospital_ID', 'District_ID']).mean().reset_index()
which gives you:
  Hospital_ID District_ID  Employee
0  Hospital 1 District 19       7.0
1  Hospital 2 District 10      50.5
H: To learn machine learning which one is good? Until now I have implemented linear and logistic regression myself. I have not used any library other than numpy and matplotlib. But in the internet every example is solved using libraries such as sklearn, pandas and etc. My question is, which one is good to learn for machine learning, implementing algorithm yourself or using libraries (sklearn or ...)? Thanks. AI: I can think of the following pros and cons for each. As for learning to code your own machine learning algorithms such as logistic regression: You will definitely learn more about specific algorithms, their details, role of different parameters and etc. That is also a good practice of coding itself. Then you can validate your implementation by benchmarking it against other packages and implementations. You will have more freedom in controlling different aspects of your method. You can add functions and modules as you wish and do not necessary have to deal with predefined variables, methods and etc. On the other hand, implementing algorithms when it is not necessary and you can just use existing packages is like reinventing the wheel. It normally takes a lot of time and you have to verify your results for each one of them. Packages like sklearn are popular because of the following: A group of people are working on those, constantly making them up to date, testing the methods in different situations for different needs. That makes packages like sklearn very dependable and usually scalable. If you have a question about them, there are tons of resources out there; documentation, forums, source code, communities like StackOverflow where thousands of people are eager to help you literally for any error you face while running your code. Another important feature is automated hyperparameters tuning. Most of machine learning algorithms have a series of hyperparameters that need to be optimized in order to achieve the best performance. Packages like sklearn efficiently search for the optimal tuning parameters (or "hyperparameters") for your machine learning model in order to maximize its performance. Still if you are interested in implementing machine learning algorithms and like the coding, you can always contribute to the existing packages. They usually have Github repositories where you can raise an issue, ask for a new feature or provide help improving them. All in all, if you have enough time and you are keen to learn low level details about those models, go ahead and give implementation them a shot. That is certainly fun. However, if you need to get to the results as soon as possible and looking for a reliable package where a huge group of people both in industry and academia are already using, sklearn, pandas and others are you options. Hope this is helpful and good luck.
H: Is t-SNE just for visualization? I have used the t-SNE algorithm to visualize my high dimensional data. However, I was wondering if this is a practical method for inference? AI: It's a dimensionality reduction algorithm. Inference is the problem of determining the parameters, or labels, that best fit the model for a given input once the model parameters have been learned, or estimated.
H: String Values in a data frame in Pandas Suppose I have a data frame like this:
Hospital_name State Employees ......
Fortis Delhi 5000 ......
AIIMS Delhi 1000000 ......
SuperSpeciality Chennai 1000 ......
Now I want to use this data frame to build a machine learning model for predictive analysis. For that, I must convert the strings to float values. Also, some of the values in the Hospital_name and State columns are 'NAN'. In such a case, how should I prepare my data for building a model in Keras? AI: To convert from string to float in pandas (assuming you want to convert Employees and you loaded the data frame as df), you can use:
df['Employees'] = df['Employees'].apply(lambda x: float(x))
You have not given enough information about your input and expected output. So let us assume that the hospital name, or anything else that is an input to your model, is NaN. You would like to remove it from the dataset because extracting features from 'NaN' wouldn't make sense. Apart from that, if they are just other peripheral features, then it might be alright. In that case, if you wish to convert them into blanks, then use:
df = df.replace(np.nan, ' ', regex=True)
Else, if you wish to remove those rows, you can check which values are NaN and drop them.
H: Looking for an algorithm that correctly clusters visually separable clusters I have visualized a dataset in 2D after employing PCA. As the 2D visualization in the figure shows, there is a good separation between the points (A, B). Now, I want to use a metric which can separate these points (between these 2 PC components, not in the main dataset) too; I mean, have separation between these PCA components without visualization. I used some clustering methods but they raise false positives; I mean they mis-cluster many points. Also, as shown in the histogram, there is a gap between the points of A and B. Does this help in devising any metric? I will be grateful if you can introduce me to any method or algorithm that can separate A and B. AI: With appropriate parameters, DBSCAN and single linkage hierarchical agglomerative clustering should work very well. Epsilon=0.2 or so. But why? You know the data, just use a threshold. If you just want an algorithm to "confirm" your desired outcome then you are using it wrong. Be honest: if you want your result to be "if factor 1 > 1.5 then cluster 1 else cluster 2", then just say so, instead of attempting to find a clustering algorithm to fit to your desired solution!
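For completeness, the DBSCAN suggestion in scikit-learn is a one-liner (X being your 2D PCA coordinates; eps needs tuning to your scale):
from sklearn.cluster import DBSCAN

labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)   # -1 marks points treated as noise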
H: How to further improve the Kaggle Titanic submission accuracy? I am working on the Titanic dataset. So far my submission has a 0.78 score using soft majority voting with logistic regression and random forest. As for the features, I used Pclass, Age, SibSp, Parch, Fare, Sex, Embarked. My question is how to further boost the score for this classification problem? One thing I tried is to add more classifiers to the majority voting, but it does not help; it even worsens the result. How do I understand this worsening effect? Thanks for your insight. AI: Big question. OK, so here are a few things I'd look at if I were you. Have you tried any feature engineering? (It sounds like you've just used the features in the training set, but I can't be 100% sure.) Random Forests should do pretty well, but maybe try xgboost too? It's quite good at everything on Kaggle. SVMs could be worth a go also if you're thinking of stacking/ensembling. Check out some of the tutorials around this competition. There are hundreds of them and most of them are great. Links: R #1 (my favourite) R #2 Python #1 Python #2 ...Hopefully this helps
H: Depending upon how I download, I get two different files I am downloading the data set for the Kaggle competition on the titanic. If I use the following code : if (!file.exists("data")){ dir.create("data") } fileUrl <- 'https://www.kaggle.com/c/titanic/download/train.csv' download.file(fileUrl, destfile='./data/train.csv') I get a 14kb file, however, Paste this Url directly in your browser and you will download the correct file about 60kb. AI: This code works for sites where you don't need to be logged on. The Kaggle link only gives you the file when you are logged on to Kaggle. The file that is created with the code only contains html / javascript code of the Kaggle page.
H: MLP on Iris Data not working but it does fine on MNIST - Keras So I'm a little bit baffled. I have just started working with the Keras framework for Python (which is awesome by the way!). However just trying a few simple test of neural networks has got me a bit confused. I initially tried to classify the Iris data as it was a small, quick and simple dataset. However when I constructed a neural network for it (4 input dimensions, 8 node hidden layer, 3 node output layer for binary classification of the 3 classes). This however didn't produce any predictive powers at all (100 samples in the training set, a separate 50 in the test set. Both had been shuffled so as to include a good distribution of classes in each). Now I thought I was doing something wrong, but I thought I'd give the network a quick test on the MNIST dataset just in case. So I used basically the exact same network (other than changing the input dimensions to 784, hidden nodes to 30, and output nodes to 10 for the binary encoded 0-9 output). And this worked perfectly! With a 97% accuracy rate and a 5% loss. So now I'm not sure why the Iris dataset isn't playing ball, does anyone have any clues? I've tried changing the number of hidden layer nodes and also normalized the X input. Here's an output I've produced when I've manually run some of the test set through the trained Iris model. As you can see it's basically producing a uniform random guess. Target: [ 1. 0. 0.] | Predicted: [[ 0.44635904 0.43874186 0.45729554]] Target: [ 0. 0. 1.] | Predicted: [[ 0.44618103 0.43869928 0.45735642]] Target: [ 0. 1. 0.] | Predicted: [[ 0.44612524 0.43863046 0.45729461]] Target: [ 0. 0. 1.] | Predicted: [[ 0.44617626 0.43870446 0.45736298]] Target: [ 0. 0. 1.] | Predicted: [[ 0.44613886 0.43865535 0.45731983]] And here's my full code for the Iris MLP (the MNIST one is essentially the same) import numpy as np import random from keras.models import Sequential from keras.layers import Dense, Activation from sklearn.datasets import load_iris from sklearn import preprocessing # Model Layers Defnition # Input layer (12 neurons), Output layer (1 neuron) model = Sequential() model.add(Dense(8, input_dim=4, init='uniform', activation='sigmoid')) model.add(Dense(3, init='uniform', activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # Iris has 150 samples, each sample has 4 dimensions iris = load_iris() iris_xy = zip(iris.data, iris.target) random.shuffle(iris_xy) # Iris data is sequential in it's labels iris_x, iris_y = zip(*iris_xy) iris_x = preprocessing.normalize(np.array(iris_x)) iris_y = np.array(iris_y) # Encode decimal numbers to array iris_y_enc = np.zeros(shape=(len(iris_y),3)) for i, y in enumerate(iris_y): iris_y_enc[i][y] = 1 train_data = np.array(iris_x[:100]) # 100 samples for training test_data = np.array(iris_x[100:]) # 50 samples for testing train_targets = np.array(iris_y_enc[:100]) test_targets = np.array(iris_y_enc[100:]) model.fit(train_data, train_targets, nb_epoch=10) #score = model.evaluate(test_data, test_targets) for test in zip(test_data, test_targets): prediction = model.predict(np.array(test[0:1])) print "Target:", test[1], " | Predicted:", prediction AI: By default sklearn.preprocessing.normalize normalizes samples, not features. Replace sklearn.preprocessing.normalize with sklearn.preprocessing.scale. This will center and scale (to unit variance) every feature. Also give it more than 10 epochs. 
Here are the learning curves (log loss) over 5000 epochs (figure omitted): this should end up with an accuracy of about 96%.
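Concretely, the two changes to your script would look something like this (a sketch; the rest of the code stays as it is):
from sklearn import preprocessing

# scale each feature to zero mean and unit variance (instead of normalizing each sample)
iris_x = preprocessing.scale(np.array(iris_x))

# and train for (much) longer than 10 epochs
model.fit(train_data, train_targets, nb_epoch=5000)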
H: How do I factor in features which are IDs? I am an absolute beginner in data science and I had this (possibly stupid) question on my mind, while reading a problem in Kaggle: Say I'm given IDs of some clients, IDs of products that they sell, and quantity of the product sold, and I'm asked to predict the quantity of a product, given the client ID, and the product ID. Now, say the client IDs are in the range 10000 - 50000 and the product IDs are in the range 1-10. Suppose, for a moment that the client IDs were random integers from 1-1000000 and the product IDs were random integers from 1000-2000. This isn't supposed to make the slightest change in the results, is it? After all, IDs are mere tags. But thinking data-wise, I've just bloated up two entire columns in my data to a higher scale, and these are two features as well. So how do I think about this? How do I factor in features which are IDs? How do I 'normalize' them? I hope I'm not being vague here. I just don't know a better way of phrasing this question. AI: These IDs should not be represented as numerical values to your model. If you would, your model thinks 2 and 3 are closer together than 2 and 2000, while it's just an ID, the number is just a name. Some models can deal with them but then they need to be factors or categories (like decision trees). However most models cannot deal with categories at all, there are however numerous ways to solve this problem. The most used one is probably one-hot encoding, which means for every category in your feature you add a column, and you put a 1 if it's that category and a 0 otherwise. Example: ID | target 1 | 0 1 | 1 2 | 3 3 | 2 To: ID_1 | ID_2 | ID_3 | target 1 | 0 | 0 | 0 1 | 0 | 0 | 1 0 | 1 | 0 | 3 0 | 0 | 1 | 2 This work very well if you have few categories, however in the case of thousands of IDs this will increase your dimensionality too much. What you can do is collect statistics about the target and other features per group and join these onto your set and then remove your categories. This is what is usually done with a high number of categories. You have to be careful not to leak any information about your target into your features though (problem called label leaking).
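In pandas, the one-hot step from the small example above is a single call (a sketch with a made-up frame):
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 2, 3], 'target': [0, 1, 3, 2]})
encoded = pd.get_dummies(df, columns=['ID'], prefix='ID')   # adds ID_1, ID_2, ID_3 indicator columns
print(encoded)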
H: What is the difference between word-based and char-based text generation RNNs? While reading about text generation with Recurrent Neural Networks I noticed that some examples were implemented to generate text word by word and others character by character without actually stating why. So, what is the difference between RNN models that predict text per-word basis and the ones that predict text per-char basis? Do word-based RNN require a bigger corpus size? Do char-based RNN generalize better? Maybe the only difference is input representation (one-hot encoding, word embeddings)? Which ones to choose for text generation? AI: Here is what I learnt recently. Obviously, when talking about text generation RNNs we are talking about RNN language models. When asking about word/char-based text generation RNNs, we are asking about word/char-based RNN language models (LM). Word-based LMs display higher accuracy and lower computational cost than char-based LMs. This drop of performance is unlikely due to the difficulty for character level model to capture longer short term memory, since also the Longer Short Term Memory (LSTM) recurrent networks work better with word-based input. This is because char-based RNN LMs require much bigger hidden layer to successfully model long-term dependencies which means higher computational costs. Therefore, we can say that one of the fundamental differences between the word level and character level models is in the number of parameters the RNN has to access during the training and test. The smaller is the input and output layer of RNN, the larger needs to be the fully connected hidden layer, which makes the training of the model expensive. However, char-based RNN LMs better model languages with a rich morphology such as Finish, Turkish, Russian etc. Using word-based RNN LMs to model such languages is difficult if possible at all and is not advised. The above analysis makes sense especially when you look at the output text, generated by char-based RNNs: The surprised in investors weren’t going to raise money. I’m not the company with the time there are all interesting quickly, don’t have to get off the same programmers. While simple char-based Maximum Likelihood LM with a 13-character window delivers this: And when she made many solid bricks. He stacked them in piles and stomped her feet. The doctor diagnosed him with a bat. The girl and her boyfriend asked her out. Of course I cherry-picked the example (actually most ML LM examples looked better than any RNN generated text I've read so far) and this tiny ML LM was trained on a simpler corpus but you get the idea: straightforward conditional probability generates better texts than far more complex char-based RNN. Char-based RNN LMs can mimic grammatically correct sequences for a wide range of languages, require bigger hidden layer and computationally more expensive while word-based RNN LMs train faster and generate more coherent texts and yet even these generated texts are far from making actual sense.
H: K-means Clustering algorithm problems I am trying to implement k-means clustering algorithm, but I am confused about calculating the distance and update(move) cluster centroids. For example, let's say that I have 2 features. One of them is weight={2,4,6,8,11,14,21} and the other one is height={4,6,7,8,9,12,14}. So, in the coordinate system my points are x1={2,4},x2={4,6},x3={6,7} and so on. Then, I initialize the cluster centroids randomly, doesn't matter how many there are for now, but they have coordinates too. Let's say μ1={4,2}. At this point, I understand how do I calculate distance with Euclidean distance. My code for calculating distance: def get_distance(x1,x2,s1,s2): return np.sqrt(np.power(s1-x1,2)+np.power(s2-x2,2)) Now I get a distance. My first question is how cluster assignment step(first step in loop) will know which centroid assign to c(i).I mean,am I supposed to look at each centroid for understand which sample(x(i)) is close to it and then I should assign centroid to c(i), right? My second question, let's say I got distances and I have c(1,2..,n) array now. Second step in algorithm which is called move(update) centroid step, we are calculate μ. According to formula, this μ is the average of points assigned to clusters so for example μ1=[x(3) + x(4) + x(6)] / 3. However, here our μ was a point in coordinate system, right? I mean, μ1 was {4,2}. How can this be possible? It's a point not a variable. It has coordinates. Well, if it will become a variable let's say μ1=5, how can I subtract ||x(i)-μ|| then? x is a coordinate. My last question is very simple. For this example, I have two features weight and height. What is the maximum number of features that we can use in k-mean? Is it possible to use k-mean algorithms for many many features? For instance, my first feature is height and the second is weight and the third is width fourth and so on. I hope, I explained my problem clearly. If not, sorry for the bad English. I think these three questions are independent questions, so you can answer one of them. Thanks. AI: Wikipedia says: "Assign each observation to the cluster whose mean yields the least within-cluster sum of squares (WCSS)" I think in your case, this is translatable to: $c_i$ is assigned to the closest centroid by euclidean distance. For your second question, the centroid should $\mu$ should have the same number of dimensions as each training point $x_i$. They are both points in the co-ordinate system. You can use a high number of features with K-means, for example, text analytics might reduce a corpora of news articles to 10,000+ dimensions. Depending on the package you use these might be represented as a sparse matrix.
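To make the two steps concrete, here is a compact NumPy sketch (each row of X is one sample such as [weight, height], and each centroid has the same number of coordinates; for brevity it does not handle the rare empty-cluster case):
import numpy as np

X = np.array([[2, 4], [4, 6], [6, 7], [8, 8], [11, 9], [14, 12], [21, 14]], dtype=float)
k = 2
centroids = X[np.random.choice(len(X), k, replace=False)]   # random initial centroids

for _ in range(10):
    # assignment step: each sample goes to its nearest centroid (Euclidean distance)
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # shape (n_samples, k)
    labels = distances.argmin(axis=1)

    # update step: each centroid moves to the mean of the points assigned to it
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
This works unchanged with any number of features: adding width, age and so on just adds columns to X and coordinates to each centroid.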
H: One-hot encoding I am going through tensor-flow tutorial and noticed that they use one-hot encoding in regression tensorflow. I don't fully understand how it works. Let us take oversimplified case of ordinary least square regression. Assume we have y = [1,2,3] and x = [cat, dog, mouse]. Converting to one hot vector we get cat = [0,0,1] dog = [0,1,0] mouse = [1,0,0] how does regression equation looks now? Is it multivariate regression now? y = alpha + beta*x_1 + beta*x_2 + beta*x_3, where x_1, x_2, x_3 are coordinates of one-hot vector? P.S. I am interested more in mechanics of this set up, not so much meaning. AI: Yes you turn it into three different variables, however this is not called multivariate regression, that indicates multiple output variables, not inputs. (Thanks to Simon for correcting me)
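Written out, with the one-hot columns the model is just y = alpha + beta_1*[cat] + beta_2*[dog] + beta_3*[mouse]; a tiny sketch (note that with an intercept one indicator column is redundant, the so-called dummy-variable trap, but the least-squares fit still goes through):
import numpy as np
from sklearn.linear_model import LinearRegression

# rows: cat, dog, mouse in one-hot form, as in the question
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
y = np.array([1, 2, 3])

reg = LinearRegression().fit(X, y)
print(reg.intercept_, reg.coef_)   # alpha plus one beta per indicator column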
H: Different runs of feature selection algorithm giving different sets of selected features. How to choose the best set among them? I am using the forward feature selection algorithm from MATLAB. The code is as follows:
X=combine_6_non;
y=target;
c = cvpartition(y,'k',10);
opts = statset('display','iter');
[fs,history] = sequentialfs(@fun,X,y,'cv',c,'options',opts)
The function fun is as follows:
function err = fun(XT,yT,Xt,yt)
model = svmtrain(XT,yT, 'Kernel_Function', 'rbf', 'boxconstraint', 1);
err = sum(svmclassify(model, Xt) ~= yt);
end
Now for different runs of the selection algorithm I am getting different feature sets. How should I narrow down to the best feature set? AI: You can set a seed or a random state for the splitting process. This makes the random number generation reproducible, which means you get the same data every time you do the CV split. This can be done with set.seed(...) in R, and by adding random_state = ... to the relevant function parameters in Python (scikit-learn). In MATLAB this is typically done by calling rng(seed) before creating the cvpartition, so that your CV splits are reproducible.
H: Are there pitfalls in using the output of a machine learning model, such as a neural net, as the input to a traditional GLM or similar? To be more specific, loss reserving models in actuarial science, such as the chain ladder method, can be expressed as GLMs. I have developed a predictive model using neural nets which takes into account some aspects of the insured (it is an individual risk model). Can the output of this model be safely used as an input to the insurance company's existing loss reserve model? AI: Theoretically there is no problem. I've seen tree models put as predictors in logistic models. An NN as input into a GLM model makes sense. The ultimate decision should be made based on the predictive power of the NN. You have to mind a few issues: Model maintenance and deployment. The NN model would probably have more parameters than your vanilla GLM, so deployment might be more involved. You could leave it frozen and never update it, and let the GLM model use the NN score as long as it adds to the GLM prediction. Interpretability. There might be managers whose accountability is to review risk, and they may not be comfortable with an NN. In that case feeding the NN results into a GLM might make it more acceptable, much as credit scores are used in some risk models.
H: Which features can help to differentiate these two densities? I'm wondering whether there are any features that can help in differentiating the following two images, i.e. differentiating them in terms of numbers. AI: For two probability distributions $A,B$ on the line that have densities, the following are equivalent:
$A = B$ (i.e. given any Borel set $\mathscr S$, the probability assigned to $\mathscr S$ by $A$ equals the probability assigned to $\mathscr S$ by $B$);
$A(-\infty,a] = B(-\infty,a]$ for any $a \in \mathbb R$, i.e. $A,B$ have the same cumulative distribution function;
$A,B$ have the same densities.
In the case of your question, the two densities are clearly different, so the two probabilities are different. From your question, it seems you want something you can measure (perhaps on samples from these distributions). There are infinitely many quantities you can measure that, when applied to $A$ and $B$, produce different results. The statement above gives some hints for a possible solution. From your graphs, calling $A$ the first pdf and $B$ the second: $$ F_A(0.35) = A(-\infty, 0.35] = \text{ probability that an observation from the first distribution is < 0.35 } = 0 $$ since the density is 0 to the left of 0.35. On the other hand, $$ F_B(0.35) = B(-\infty, 0.35] = \text{ probability that an observation from the second distribution is < 0.35 } > 0 $$ since the density is positive from -0.2 to 0.35. We can move a little to the right of 0.35 and find a number $a_0$ such that $0 < F_A(a_0) < F_B(a_0)$. Being less than $a_0$ is something one can measure on samples. Let $A_1, .., A_n$ be an iid sample from the first distribution, and let $B_1,...,B_m$ be an iid sample from the second distribution. Then $x_j = I(A_j \le a_0)$ is something one can measure (Note: I am using the notation $I(S)$ = indicator function of a set $S$, or, in more typical notation in the context of probability, $I(S)$ = indicator function of the event $S$ = 1 if $S$ is true, 0 if $S$ is false). Likewise, one can measure $y_j = I(B_j \le a_0)$. $\{x_j\}$ are iid Bernoulli, with $P(x_j=1) = P(A_j \le a_0) = F_A(a_0)$, so $\frac{1}{n} \sum_1^n x_j$ is an estimator of $F_A(a_0)$. $\{y_j\}$ are iid Bernoulli, with $P(y_j=1) = P(B_j \le a_0) = F_B(a_0)$, so $\frac{1}{m} \sum_1^m y_j$ is an estimator of $F_B(a_0)$. Now one can run a test of the hypothesis that the two means are the same. Before proceeding, does this construction answer your question?
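A small sketch of the comparison described above (the threshold and the simulated samples are illustrative assumptions): estimate $F_A(a_0)$ and $F_B(a_0)$ from the indicator variables and run a two-proportion test.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
A = rng.uniform(0.35, 1.0, size=500)    # stand-in sample from the first density
B = rng.uniform(-0.2, 1.0, size=400)    # stand-in sample from the second density

a0 = 0.40                               # a point where the CDFs should differ
x = (A <= a0).astype(int)               # indicator observations for A
y = (B <= a0).astype(int)               # indicator observations for B

# Test H0: F_A(a0) == F_B(a0) using the two sample proportions.
stat, pval = proportions_ztest(count=[x.sum(), y.sum()], nobs=[len(x), len(y)])
print(x.mean(), y.mean(), pval)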
H: How do companies like Amazon track what products are most frequently bought together? What methods could someone use to find out what products are most frequently grouped with each other per order? Are there applications that can make achieving this goal easier? AI: This is called association rule learning. Quoting Wikipedia: "Association rule learning is a method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness." Your question asks how companies like Amazon and Walmart know which products are frequently bought together; that is done with the Apriori algorithm, which falls under the class of association rule learning algorithms. Again, quoting Wikipedia: "Apriori is an algorithm for frequent item set mining and association rule learning over transactional databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database." This algorithm generates rules which show what items are bought along with which items, depending on the support and confidence thresholds. (As you didn't ask about the particulars, I will skip the details; please check Wikipedia for the finer points and the algorithm itself.) The arules package in R implements the Apriori algorithm. Unfortunately, there is no fast implementation of the same in Python.
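As a bare-bones illustration of the idea (not the full Apriori algorithm, and with made-up order data), simply counting which pairs of items co-occur across orders already answers the "frequently bought together" question for pairs:
from collections import Counter
from itertools import combinations

orders = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs", "coffee"},
    {"bread", "milk", "coffee"},
]

pair_counts = Counter()
for order in orders:
    # every unordered pair of items appearing in the same order
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

# support = fraction of orders containing the pair
for pair, n in pair_counts.most_common(3):
    print(pair, n / len(orders))
Apriori generalizes this by pruning itemsets whose support is already below the threshold before extending them to larger sets.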
H: Using Decision Tree methodology to identify Independent Variables for Multiple Regression Have access to a dataset with hundreds of variables and millions of cases (American Community Survey). Need to identify a small, manageable set of Independent Variables (IVs) to use for Multiple Regression. One way to do this, of course, would be to use applicable theories to identify the IVs. Was wondering how I could use a data-driven (data-mining?) approach as follows: Use a Decision Tree to identify impactful (candidate? relevant?) IVs? And then use these as the IVs in the Multiple Regression? (Seem to remember reading once, in passing, that this approach to variable reduction is permitted.) Tried searching on Google for articles that clarify the above, but the search terms are such that I keep getting hits to articles that compare Decision Trees and Multiple Regression. So, if you know of articles and research papers that describe how to do the above, please leave links below. Also, I would welcome your own original suggestions on how to proceed. AI: Decision trees are useful for determining nested/interactive relationships between combinations of IVs and a DV. The model you specified, a multiple regression, presupposes a relationship between the IVs and the DV (e.g. linear). As you are aware, these models are different. So using a decision tree coupled with some importance measure to find predictive variables won't necessarily provide you with an optimal set of IVs in a regression model. That being said, it can be a helpful exercise to inform you of non-linear relationships or interaction terms that could be predictive, and which may not be captured by specifying a model such as a multiple regression. If I were you, I wouldn't solely rely on using decision trees to determine a set of IVs for a regression model. I would investigate penalized regression methods such as LASSO or ridge regression to help take you from a reduced candidate set of IVs to your final IVs. In addition, you might want to explore associative metrics related to your model specification that might be useful in exploring the relationships in your data, such as information values, chi-square tests, correlations, etc. This may be helpful: https://stats.stackexchange.com/questions/47367/decision-tree-as-variable-selection-for-logistic-regression
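A small sketch of the penalized-regression route suggested above (synthetic data and illustrative names): LASSO shrinks uninformative coefficients to exactly zero, which yields a reduced IV set directly.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 100))                       # candidate IVs
y = X[:, 0] * 2 - X[:, 3] * 1.5 + rng.normal(size=5000)

# Standardize so the penalty treats all IVs on the same scale.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)

selected = np.flatnonzero(lasso.coef_ != 0)
print(selected)        # indices of IVs kept by the L1 penalty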
H: Is there a problem of over fitting in my dataset? I have applied the sequential forward selection to my dataset having 214 samples and 515 features (2 class problem). The feature selection algorithm has selected 8 features. Now I have applied the svm (MATLAB) on these 8 features. I have also tried to see the performance after adding more features. The table given below gives the correct rate of the algorithm (training data set) along with the feature set used. The result obtained is: 8 features = 0.9392 10 features = 0.9439 12 features = 0.9672 14 features = 0.9672 16 features = 0.9626 18 features = 0.9766 20 features = 0.9672 As visible, the accuracy seems to increase. Is it because of over fitting? Should I use the default feature set as given by the sequentialfs function of Matlab or should I force it to deliver more features to get more accuracy? I have uploaded the validation training and testing performance (70-15-15). Now can you tell me if my data is being over-fitted or not? AI: It is not possible to tell whether a machine learning algorithm is overfitting based purely on the training set accuracy. You could be right, that using more features with a small data set increases sampling error and reduces the generalisation of the SVM model you are building. It is a valid concern, but you cannot say that for sure with only this worry and the training accuracy to look at. The usual solution to this is to keep some data aside to test your model. When you see a high training accuracy, but a low test accuracy, that is a classic sign of over-fitting. Often you are searching for the best hyper-parameters to your model. In your case you are trying to discover the best number of features to use. When you start to do that, you will need to make multiple tests in order to pick the best hyper-parameter values. At that point, a single test set becomes weaker measure of true generalisation (because you have had several attempts and picked best value - just by selection process you will tend to over-estimate the generalisation). So it is common practice to split the data three ways - training set, cross-validation set and test set. The cross-validation set is used to check accuracy as you change the parameters of your model, you pick the best results and then finally use the test set to measure accuracy of your best model. A common split ratio for this purpose is 60/20/20. Taking a pragmatic approach when using the train/cv/test split, it matters less that you are over or under fitting than simply getting the best result you can with your data and model class. You can use the feedback on whether you are over-fitting (high training accuracy, low cv accuracy) in order to change model parameters - increase regularisation when you are over-fitting for example. When there are a small number of examples, as in your case, then the cv accuracy measure is going to vary a lot depending on which items are in the cv set. This makes it hard to pick best hyper-params, because it may just be noise in the data that makes one choice better than another. To reduce the impact of this, you can use k-fold cross-validation - splitting your train/cv data multiple times and taking an average measure of the accuracy (or whatever metric you want to maximise). In your confusion matrices, there is no evidence of over-fitting. A training accuracy of 100%* and testing accuracy of 93.8% are suggestive of some degree of over-fit, but the sample size is too low to read anything into it. 
You should bear in mind that the balance between over- and under-fitting is very narrow, and most models will do one or the other to some degree. * A training accuracy of 100% is nearly always suggestive of overfitting. Matched with e.g. 99% test accuracy, you may not be too concerned; the question is at worst "could I do better by increasing regularisation a little?". However, matched with ~60% test accuracy it is clear you have actually overfit; even then you might be forced to accept the situation if that's the best you could achieve after trying many different hyperparameter values (including some attempts with increased regularisation).
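To make the train/CV/test idea concrete, a hedged sketch (scikit-learn rather than MATLAB, with made-up data of the same shape as the question's 214 x 515 set): hold out a test set once, and use k-fold cross-validation on the rest to compare feature-set sizes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=214, n_features=515, n_informative=15, random_state=0)
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.15, random_state=0)

# Compare candidate feature counts on CV accuracy only; never touch X_test here.
for n_feat in (8, 12, 16, 20):
    cols = np.arange(n_feat)        # stand-in for the features sequentialfs picked
    scores = cross_val_score(SVC(kernel="rbf", C=1.0), X_trainval[:, cols], y_trainval, cv=5)
    print(n_feat, scores.mean())

# Only the winning feature count gets a single, final evaluation on X_test.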
H: What open-source books (or other materials) provide a relatively thorough overview of data science? As a researcher and instructor, I'm looking for open-source books (or similar materials) that provide a relatively thorough overview of data science from an applied perspective. To be clear, I'm especially interested in a thorough overview that provides material suitable for a college-level course, not particular pieces or papers. AI: One book that's freely available is "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman (published by Springer): see Tibshirani's website. Another fantastic source, although it isn't a book, is Andrew Ng's Machine Learning course on Coursera. This has a much more applied focus than the above book, and Prof. Ng does a great job of explaining the thinking behind several different machine learning algorithms/situations.
H: K-Means clustering for mixed numeric and categorical data My data set contains a number of numeric attributes and one categorical. Say, NumericAttr1, NumericAttr2, ..., NumericAttrN, CategoricalAttr, where CategoricalAttr takes one of three possible values: CategoricalAttrValue1, CategoricalAttrValue2 or CategoricalAttrValue3. I'm using default k-means clustering algorithm implementation for Octave. It works with numeric data only. So my question: is it correct to split the categorical attribute CategoricalAttr into three numeric (binary) variables, like IsCategoricalAttrValue1, IsCategoricalAttrValue2, IsCategoricalAttrValue3 ? AI: The standard k-means algorithm isn't directly applicable to categorical data, for various reasons. The sample space for categorical data is discrete, and doesn't have a natural origin. A Euclidean distance function on such a space isn't really meaningful. As someone put it, "The fact a snake possesses neither wheels nor legs allows us to say nothing about the relative value of wheels and legs." (from here) There's a variation of k-means known as k-modes, introduced in this paper by Zhexue Huang, which is suitable for categorical data. Note that the solutions you get are sensitive to initial conditions, as discussed here (PDF), for instance. Huang's paper (linked above) also has a section on "k-prototypes" which applies to data with a mix of categorical and numeric features. It uses a distance measure which mixes the Hamming distance for categorical features and the Euclidean distance for numeric features. A Google search for "k-means mix of categorical data" turns up quite a few more recent papers on various algorithms for k-means-like clustering with a mix of categorical and numeric data. (I haven't yet read them, so I can't comment on their merits.) Actually, what you suggest (converting categorical attributes to binary values, and then doing k-means as if these were numeric values) is another approach that has been tried before (predating k-modes). (See Ralambondrainy, H. 1995. A conceptual version of the k-means algorithm. Pattern Recognition Letters, 16:1147–1157.) But I believe the k-modes approach is preferred for the reasons I indicated above.
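To illustrate the k-prototypes idea without relying on a specific package, here is a small sketch of the mixed dissimilarity it uses (squared Euclidean distance on the numeric part plus a weighted mismatch count on the categorical part); the weight gamma and the example records are assumptions for illustration.
import numpy as np

def mixed_distance(a_num, a_cat, b_num, b_cat, gamma=1.0):
    """k-prototypes-style dissimilarity between two records."""
    numeric_part = np.sum((np.asarray(a_num) - np.asarray(b_num)) ** 2)
    categorical_part = sum(x != y for x, y in zip(a_cat, b_cat))
    return numeric_part + gamma * categorical_part

# Two records: numeric attributes first, the categorical attribute last.
r1 = ([0.2, 1.5], ["CategoricalAttrValue1"])
r2 = ([0.3, 1.1], ["CategoricalAttrValue3"])
print(mixed_distance(r1[0], r1[1], r2[0], r2[1], gamma=0.5))
In a k-prototypes run, each cluster centre keeps a mean for every numeric attribute and a mode for every categorical one, and gamma controls how much weight categorical mismatches get relative to the numeric distances.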
H: The data in our relational DBMS is getting big, is it time to move to NoSQL? We created a social network application for eLearning purposes. It's an experimental project that we are researching in our lab. It has been used in some case studies for a while and the data in our relational DBMS (SQL Server 2008) is getting big. It's a few gigabytes now and the tables are highly connected to each other. The performance is still fine, but when should we consider other options? Is it a matter of performance? AI: A few gigabytes is not very "big"; it's more like the normal size of an enterprise DB. As long as you join tables over primary keys it should work out really well, even in the future (as long as you don't get TBs of data a day). Most professionals working in a big data environment consider roughly >5 TB to be the beginning of "big data". But even then, installing the next-best NoSQL database is not always the right move. You should always think about the task you want to achieve with the data (aggregate, read, search, mine, ...) to find the best tool for your problem. For example, if you do a lot of searches in your database, it would probably be better to run a Solr instance/cluster and periodically denormalize your data from a DBMS like Postgres or your SQL Server into Solr, rather than just moving the data from SQL to NoSQL for persistence and performance.
H: Is Data Science the Same as Data Mining? I am sure data science as will be discussed in this forum has several synonyms or at least related fields where large data is analyzed. My particular question is in regards to Data Mining. I took a graduate class in Data Mining a few years back. What are the differences between Data Science and Data Mining and in particular what more would I need to look at to become proficient in Data Mining? AI: @statsRus starts to lay the groundwork for your answer in another question What characterises the difference between data science and statistics?: Data collection: web scraping and online surveys Data manipulation: recoding messy data and extracting meaning from linguistic and social network data Data scale: working with extremely large data sets Data mining: finding patterns in large, complex data sets, with an emphasis on algorithmic techniques Data communication: helping turn "machine-readable" data into "human-readable" information via visualization Definition data-mining can be seen as one item (or set of skills and applications) in the toolkit of the data scientist. I like how he separates the definition of mining from collection in a sort of trade-specific jargon. However, I think that data-mining would be synonymous with data-collection in a US-English colloquial definition. As to where to go to become proficient? I think that question is too broad as it is currently stated and would receive answers that are primarily opinion based. Perhaps if you could refine your question, it might be easier to see what you are asking.
H: How big is big data? Lots of people use the term big data in a rather commercial way, as a means of indicating that large datasets are involved in the computation, and therefore potential solutions must have good performance. Of course, big data always carry associated terms, like scalability and efficiency, but what exactly defines a problem as a big data problem? Does the computation have to be related to some set of specific purposes, like data mining/information retrieval, or could an algorithm for general graph problems be labeled big data if the dataset was big enough? Also, how big is big enough (if this is possible to define)? AI: To me (coming from a relational database background), "Big Data" is not primarily about the data size (which is the bulk of what the other answers are so far). "Big Data" and "Bad Data" are closely related. Relational Databases require 'pristine data'. If the data is in the database, it is accurate, clean, and 100% reliable. Relational Databases require "Great Data" and a huge amount of time, money, and accountability is put on to making sure the data is well prepared before loading it in to the database. If the data is in the database, it is 'gospel', and it defines the system understanding of reality. "Big Data" tackles this problem from the other direction. The data is poorly defined, much of it may be inaccurate, and much of it may in fact be missing. The structure and layout of the data is linear as opposed to relational. Big Data has to have enough volume so that the amount of bad data, or missing data becomes statistically insignificant. When the errors in your data are common enough to cancel each other out, when the missing data is proportionally small enough to be negligible and when your data access requirements and algorithms are functional even with incomplete and inaccurate data, then you have "Big Data". "Big Data" is not really about the volume, it is about the characteristics of the data.
H: What is the difference between Hadoop and noSQL I have heard about many tools/frameworks that help people process their data (in a big data environment). One is called Hadoop and the other is the NoSQL concept. What is the difference in terms of processing? Are they complementary? AI: Hadoop is not a database; Hadoop is an entire ecosystem. Most people refer to MapReduce jobs when talking about Hadoop. A MapReduce job splits big datasets into small chunks of data and spreads them over a cluster of nodes to be processed. In the end the results from each node are put together again as one dataset. Let's assume you load into Hadoop a set of <String, Integer> pairs with the population of some neighborhoods within a city, and you want the average population over the neighborhoods of each city (figure 1).
figure 1
[new york, 40394]
[new york, 134]
[la, 44]
[la, 647]
...
Now Hadoop will first map each value by its key (figure 2)
figure 2
[new york, [40394,134]]
[la, [44,647]]
...
After the mapping it will reduce the values of each key to a new value (in this example, the average over the value set of each key) (figure 3)
figure 3
[new york, [20264]]
[la, [346]]
...
Now Hadoop is done with everything. You can load the result into HDFS (the Hadoop distributed file system), into any DBMS, or into a file. That is just one very basic and simple example of what Hadoop can do; you can run much more complicated tasks in Hadoop. As you already mentioned in your question, Hadoop and NoSQL are complementary. I know a few setups where, for example, billions of datasets from sensors are stored in HBase, processed through Hadoop, and finally stored in a DBMS.
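A tiny pure-Python sketch of the same map/shuffle/reduce flow (no Hadoop involved, just the logic of the example above):
from collections import defaultdict

records = [("new york", 40394), ("new york", 134), ("la", 44), ("la", 647)]

# Map + shuffle: group every value under its key.
grouped = defaultdict(list)
for city, population in records:
    grouped[city].append(population)

# Reduce: collapse each key's value list to a single value (the average).
averages = {city: sum(values) / len(values) for city, values in grouped.items()}
print(averages)   # {'new york': 20264.0, 'la': 345.5}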
H: Is the R language suitable for Big Data R has many libraries which are aimed at Data Analysis (e.g. JAGS, BUGS, ARULES etc.), and is mentioned in popular textbooks such as: J. Kruschke, Doing Bayesian Data Analysis; B. Lantz, "Machine Learning with R". I've seen a guideline of 5TB for a dataset to be considered Big Data. My question is: Is R suitable for the amount of Data typically seen in Big Data problems? Are there strategies to be employed when using R with this size of dataset? AI: Actually this is coming around. In the book R in a Nutshell there is even a section on using R with Hadoop for big data processing. There are some workarounds that need to be done because R does all its work in memory, so you are basically limited to the amount of RAM you have available to you. A mature project for R and Hadoop is RHadoop. RHadoop has been divided into several sub-projects: rhdfs, rhbase, rmr2, plyrmr, and quickcheck (wiki).
H: Use liblinear on big data for semantic analysis I use Libsvm to train data and predict classification on semantic analysis problem. But it has a performance issue on large-scale data, because semantic analysis concerns n-dimension problem. Last year, Liblinear was release, and it can solve performance bottleneck. But it cost too much memory. Is MapReduce the only way to solve semantic analysis problem on big data? Or are there any other methods that can improve memory bottleneck on Liblinear? AI: Note that there is an early version of LIBLINEAR ported to Apache Spark. See mailing list comments for some early details, and the project site.
H: Running an R script programmatically I have an R script that generates a report based on the current contents of a database. This database is constantly in flux with records being added/deleted many times each day. How can I ask my computer to run this every night at 4 am so that I have an up-to-date report waiting for me in the morning? Or perhaps I want it to re-run once a certain number of new records have been added to the database. How might I go about automating this? I should mention I'm on Windows, but I could easily put this script on my Linux machine if that would simplify the process. AI: For Windows, use the Task Scheduler to set the task to run, for example, daily at 4:00 AM. It gives you many other options regarding frequency, etc. http://en.wikipedia.org/wiki/Windows_Task_Scheduler
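For the Linux option mentioned in the question, the usual route is a crontab entry that invokes the script with Rscript; on Windows the same Task Scheduler job can also be created from the command line with schtasks. A sketch of both follows, with all paths being placeholders you would replace with your own:
# Linux: run the report every night at 4:00 AM (add with `crontab -e`)
0 4 * * * Rscript /path/to/nightly_report.R

# Windows: create the equivalent scheduled task from the command line
schtasks /create /tn "NightlyReport" /sc daily /st 04:00 ^
         /tr "\"C:\path\to\Rscript.exe\" C:\path\to\nightly_report.R"
For the "re-run after N new records" variant, a small wrapper script that polls the record count and only calls the report when the threshold is passed, itself scheduled at a short interval, is one common workaround.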
H: Why Is Overfitting Bad in Machine Learning? Logic often states that by overfitting a model, its capacity to generalize is limited, though this might only mean that overfitting stops a model from improving after a certain complexity. Does overfitting cause models to become worse regardless of the complexity of data, and if so, why is this the case? Related: Followup to the question above, "When is a Model Underfitted?" AI: Overfitting is empirically bad. Suppose you have a data set which you split into two parts, training and test. An overfitted model is one that performs much worse on the test dataset than on the training dataset. Such models are also generally observed to perform worse on additional (new) test datasets than models which are not overfitted. One way to understand that intuitively is that a model may use some relevant parts of the data (signal) and some irrelevant parts (noise). An overfitted model uses more of the noise, which increases its performance in the case of known noise (training data) and decreases its performance in the case of novel noise (test data). The difference in performance between training and test data indicates how much noise the model picks up; and picking up noise directly translates into worse performance on test data (including future data). Summary: overfitting is bad by definition; this has not much to do with either complexity or ability to generalize, but rather has to do with mistaking noise for signal. P.S. On the "ability to generalize" part of the question, it is very possible to have a model which has inherently limited ability to generalize due to the structure of the model (for example linear SVM, ...) but is still prone to overfitting. In a sense overfitting is just one way that generalization may fail.
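A quick synthetic illustration of the train/test gap described above (illustrative data and model choice): a high-degree polynomial fits the training noise, so its training error keeps shrinking while its test error climbs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=120)   # signal + noise
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(X_tr)),   # keeps shrinking with degree
          mean_squared_error(y_te, model.predict(X_te)))   # rises again once noise is fitted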
H: Clustering customer data stored in ElasticSearch I have a bunch of customer profiles stored in an Elasticsearch cluster. These profiles are now used for creating target groups for our email subscriptions. Target groups are currently formed manually using Elasticsearch faceted search capabilities (like get all male customers of age 23 with one car and 3 children). How could I search for interesting groups automatically - using data science, machine learning, clustering or something else? The R programming language seems to be a good tool for this task, but I can't come up with a methodology for such a group search. One solution is to somehow find the largest clusters of customers and use them as target groups, so the question is: How can I automatically choose the largest clusters of similar customers (similar by parameters that I don't know at this moment)? For example: my program will connect to Elasticsearch, offload customer data to CSV, and an R script will find that a large portion of customers are male with no children and another large portion of customers have a car and brown eyes. AI: One algorithm that can be used for this is the k-means clustering algorithm. Basically:
1. Randomly choose k data points from your set as initial means $m_1, ..., m_k$.
2. Until convergence: assign your data points to k clusters, where cluster $i$ is the set of points for which $m_i$ is the closest of the current means; then replace each $m_i$ by the mean of all points assigned to cluster $i$.
It is good practice to repeat this algorithm several times and then choose the outcome that minimizes the distances between the points of each cluster $i$ and its center $m_i$. Of course, you have to know k to start; you can use cross-validation to choose this parameter, though.
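Since the question mentions exporting profiles to CSV, a hedged sketch of that pipeline in Python (the file name, column names, and k are assumptions; scikit-learn's KMeans already handles the repeated restarts via n_init):
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("customers.csv")                       # exported from Elasticsearch
features = pd.get_dummies(df[["age", "gender", "cars", "children", "eye_color"]])

X = StandardScaler().fit_transform(features)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

df["cluster"] = km.labels_
print(df["cluster"].value_counts())   # the biggest clusters are candidate target groups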
H: Is there a replacement for small p-values in big data? If small p-values are plentiful in big data, what is a comparable replacement for p-values in data with millions of samples? AI: There is no replacement in the strict sense of the word. Instead you should look at other measures. The other measures you look at depend on what type of problem you are solving. In general, if you have a small p-value, also consider the magnitude of the effect size. It may be highly statistically significant but in practice meaningless. It is also helpful to report the confidence interval of the effect size. I would consider this paper, as mentioned in DanC's answer to this question.
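A small sketch of the point about effect sizes (made-up data): with a million samples the p-value collapses to essentially zero even when the standardized effect is tiny, so report the effect size and its confidence interval alongside it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.00, 1.0, size=1_000_000)
b = rng.normal(0.01, 1.0, size=1_000_000)     # a practically negligible shift

t, p = stats.ttest_ind(a, b)
diff = b.mean() - a.mean()
d = diff / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)                  # Cohen's d
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
ci = (diff - 1.96 * se, diff + 1.96 * se)                                # 95% CI for the raw difference
print(p, d, ci)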
H: Parallel and distributed computing What is (are) the difference(s) between parallel and distributed computing? When it comes to scalability and efficiency, it is very common to see solutions dealing with computations in clusters of machines, and sometimes it is referred to as parallel processing, or as distributed processing. In a certain way, the computation always seems to be parallel, since there are things running concurrently. But is distributed computation simply related to the use of more than one machine, or are there further specificities that distinguish these two kinds of processing? Wouldn't it be redundant to say, for example, that a computation is parallel AND distributed? AI: Simply put, 'parallel' means running concurrently on distinct resources (CPUs), while 'distributed' means running across distinct computers, involving issues related to networks. Parallel computing using, for instance, OpenMP is not distributed, while parallel computing with message passing is often distributed. Being in a 'distributed but not parallel' setting would mean under-using resources, so it is seldom encountered, but it is conceptually possible.
H: When are p-values deceptive? What are the data conditions that we should watch out for, where p-values may not be the best way of deciding statistical significance? Are there specific problem types that fall into this category? AI: You are asking about Data Dredging, which is what happens when testing a very large number of hypotheses against a data set, or testing hypotheses against a data set that were suggested by the same data. In particular, check out Multiple hypothesis hazard, and Testing hypotheses suggested by the data. The solution is to use some kind of correction for False discovery rate or Familywise error rate, such as Scheffé's method or the (very old-school) Bonferroni correction. In a somewhat less rigorous way, it may help to filter your discoveries by the confidence interval for the odds ratio (OR) for each statistical result. If the 99% confidence interval for the odds ratio is 10-12, then the OR is <= 1 with some extremely small probability, especially if the sample size is also large. If you find something like this, it is probably a strong effect even if it came out of a test of millions of hypotheses.
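A short sketch of applying the corrections mentioned above to a batch of p-values (the p-values are simulated; statsmodels' multipletests implements Bonferroni and FDR-style procedures):
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
pvals = rng.uniform(size=10_000)            # p-values from 10,000 null hypotheses
pvals[:20] = rng.uniform(0, 1e-5, size=20)  # plus a handful of real effects

reject_bonf, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
reject_fdr,  _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(reject_bonf.sum(), reject_fdr.sum())  # discoveries surviving each correction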
H: Is this Neo4j comparison to RDBMS execution time correct? Background: Following is from the book Graph Databases, which covers a performance test mentioned in the book Neo4j in Action: Relationships in a graph naturally form paths. Querying, or traversing, the graph involves following paths. Because of the fundamentally path-oriented nature of the data model, the majority of path-based graph database operations are highly aligned with the way in which the data is laid out, making them extremely efficient. In their book Neo4j in Action, Partner and Vukotic perform an experiment using a relational store and Neo4j. The comparison shows that the graph database is substantially quicker for connected data than a relational store. Partner and Vukotic's experiment seeks to find friends-of-friends in a social network, to a maximum depth of five. Given any two persons chosen at random, is there a path that connects them which is at most five relationships long? For a social network containing 1,000,000 people, each with approximately 50 friends, the results strongly suggest that graph databases are the best choice for connected data, as we see in Table 2-1.
Table 2-1. Finding extended friends in a relational database versus efficient finding in Neo4j
Depth   RDBMS execution time (s)   Neo4j execution time (s)   Records returned
2       0.016                      0.01                       ~2,500
3       30.267                     0.168                      ~110,000
4       1543.505                   1.359                      ~600,000
5       Unfinished                 2.132                      ~800,000
At depth two (friends-of-friends) both the relational database and the graph database perform well enough for us to consider using them in an online system. While the Neo4j query runs in two-thirds the time of the relational one, an end-user would barely notice the difference in milliseconds between the two. By the time we reach depth three (friend-of-friend-of-friend), however, it's clear that the relational database can no longer deal with the query in a reasonable timeframe: the thirty seconds it takes to complete would be completely unacceptable for an online system. In contrast, Neo4j's response time remains relatively flat: just a fraction of a second to perform the query—definitely quick enough for an online system. At depth four the relational database exhibits crippling latency, making it practically useless for an online system. Neo4j's timings have deteriorated a little too, but the latency here is at the periphery of being acceptable for a responsive online system. Finally, at depth five, the relational database simply takes too long to complete the query. Neo4j, in contrast, returns a result in around two seconds. At depth five, it transpires almost the entire network is our friend: for many real-world use cases, we'd likely trim the results, and the timings. Questions are: Is this a reasonable test to emulate what one might expect to find in a social network? (Meaning, do real social networks normally have nodes with approximately 50 friends, for example? It seems like the "rich get richer" model would be more natural for social networks, though I might be wrong.) Regardless of the naturalness of the emulation, is there any reason to believe the results are off, or unreproducible? AI: Looking at this document called Anatomy of Facebook I note that the median is 100. Looking at the cumulative function plot I can bet that the average is higher, near 200. So 50 does not seem to be the best number here. However, I think that this is not the main issue. The main issue is the lack of information on how the database was used.
It seems reasonable that a data store designed specifically for graph structures would be more efficient than a traditional RDBMS. However, even if RDBMSs are no longer the trendiest choice of data store, these systems have evolved continuously in a race with growing data set sizes: there are various possible designs, various ways of indexing data, improvements related to concurrency, and so on. To conclude, I think that regarding reproducibility the study lacks a proper description of how the database schema was designed. I would not expect a relational database to dominate on this kind of query, but I would expect that with a well-tuned design the differences would not be so massive.
H: How does a query into a huge database return with negligible latency? For example, when searching something in Google, results return nigh-instantly. I understand that Google sorts and indexes pages with algorithms etc., but I imagine it infeasible for the results of every single possible query to be indexed (and results are personalized, which renders this even more infeasible)? Moreover, wouldn't the hardware latency in Google's hardware be huge? Even if the data in Google were all stored in TB/s SSDs, I imagine the hardware latency to be huge, given the sheer amount of data to process. Does MapReduce help solve this problem? EDIT: Okay, so I understand that popular searches can be cached in memory. But what about unpopular searches? Even for the most obscure search I have conducted, I don't think the search has ever been reported to take longer than 5 seconds. How is this possible? AI: Well, I'm not sure whether it is MapReduce that solves the problem, but it surely wouldn't be MapReduce alone solving all the questions you raised. Here are the important things to take into account, which make it feasible to have such low latency on queries over all these TBs of data on different machines:
distributed computing: being distributed does not mean that the indexes are simply spread over different machines; they are actually replicated across different clusters, which allows lots of users to perform different queries with low retrieval time (yes, huge companies can afford that many machines);
caching: caches tremendously reduce execution time, be it for the crawling step, for the retrieval of pages, or for the ranking and exhibition of results;
lots of tweaking: all the above and very efficient algorithms/solutions can only be effective if the implementation is also efficient. There are tons of (hard-coded) optimizations, such as locality of reference, compression, caching; all of them usually applicable to different parts of the processing.
Considering that, let's try to address your questions:
but I imagine it infeasible for the results of every single possible query to be indexed
Yes, it would be, and it actually is infeasible to have results for every single possible query. There is an infinite number of terms in the world (even if you assume that only properly spelled terms will be entered), and there is an exponential number of queries over these n -> inf terms (2^n). So what is done? Caching. But if there are so many queries/results, which ones to cache? Caching policies. The most frequent/popular/relevant-for-the-user queries are the ones cached.
wouldn't the hardware latency in Google's hardware be huge? Even if the data in Google were all stored in TB/s SSDs
Nowadays, with such highly developed processors, people tend to think that every task that must finish within a second (or less) and that deals with so much data must be processed by extremely powerful processors with multiple cores and lots of memory. However, the one thing ruling the market is money, and the investors are not interested in wasting it. So what is done? The preference is actually for having lots of machines, each using simple/cheap processors, which lowers the cost of building up the multitude of clusters. And yes, it does work. The main bottleneck always boils down to disk, if you consider simple measurements of performance. But once there are so many machines, one can afford to load things into main memory instead of working on hard disks.
Memory is expensive for us, mere human beings, but it is very cheap for enterprises that buy lots of it at once. Since it's not costly, having as much memory as needed to load indexes and keep caches at hand is not a problem. And since there are so many machines, there is no need for super-fast processors, as you can direct queries to different places and have clusters of machines responsible for attending to specific geographical regions, which allows for more specialized data caching and even better response times.
Does MapReduce help solve this problem?
Although I don't think that whether or not Google uses MapReduce is restricted information, I'm not conversant on this point. However, Google's implementation of MapReduce (which is surely not Hadoop) must have lots of optimizations, many involving the aspects discussed above. So, the architecture of MapReduce probably helps guide how the computations are physically distributed, but there are many other points to be considered to justify such speed in querying time.
Okay, so I understand that popular searches can be cached in memory. But what about unpopular searches?
The graph below presents a curve of how the kinds of queries occur. You can see that there are three main kinds of searches, each of them holding approximately 1/3 of the volume of queries (area below the curve). The plot shows a power law, and reinforces the fact that shorter queries are the most popular. The second third of queries are still feasible to process, since they hold few words. But the set of so-called obscure queries, which usually consist of inexperienced users' queries, is not a negligible part of the total, and there lies space for novel solutions. Since it's not just one or two queries (but one third of them), they must have relevant results. If you type something much too obscure into a Google search, it shan't take long to return a list of results, but it will most probably show you something it inferred you'd like to say. Or it may simply state that there was no document with such terms -- or even cut down your search to 32 words (which just happened to me in a random test here). There are dozens of applicable heuristics, which may be either to ignore some words or to break the query into smaller ones and gather the most popular results. And all these solutions can be tailored and tweaked to respect feasible waiting times of, say, less than a second? :D
H: Does click frequency account for relevance? While building a rank, say for a search engine or a recommendation system, is it valid to rely on click frequency to determine the relevance of an entry? AI: It depends on the user's intent, for starters. Users normally only view the first set of links, which means that unless a link is viewable, it's not getting clicks; meaning you'd have to be positive those are the best links, otherwise the clicks are most likely going to reflect placement, not relevance. For example, see the click and attention distribution heat-maps published for Google search results. Further, using click frequency to account for relevance is not a direct measure of the resource's relevance. Also, using clicks is problematic, since issues like click inflation, click fraud, etc. will pop up and are hard to counter. That said, if you're interested in using user interaction to model relevance, I would suggest you attempt to measure post-click engagement, not how users respond to search results; see "YouTube's head of engineering speaking about clicks vs engagement" for more information, though note that the length of the content itself is a factor too. It might be worth noting that historically Google was known for the PageRank algorithm, though it's possible your intent is only to review click-streams, so I won't delve into Google ranking factors; if you are interested in Google's approach, you might find Google's Search Quality Rating Guidelines worth a review.
H: Clustering unique visitors by useragent, ip, session_id Given website access data in the form session_id, ip, user_agent, and optionally timestamp, following the conditions below, how would you best cluster the sessions into unique visitors? session_id: is an id given to every new visitor. It does not expire, however if the user doesn't accept cookies/clears cookies/changes browser/changes device, he will not be recognised anymore IP can be shared between different users (Imagine a free wi-fi cafe, or your ISP reassigning IPs), and they will often have at least 2, home and work. User_agent is the browser+OS version, allowing to distinguish between devices. For example a user is likely to use both phone and laptop, but is unlikely to use windows+apple laptops. It is unlikely that the same session id has multiple useragents. Data might look as the fiddle here: http://sqlfiddle.com/#!2/c4de40/1 Of course, we are talking about assumptions, but it's about getting as close to reality as possible. For example, if we encounter the same ip and useragent in a limited time frame with a different session_id, it would be a fair assumption that it's the same user, with some edge case exceptions. Edit: Language in which the problem is solved is irellevant, it's mostly about logic and not implementation. Pseudocode is fine. Edit: due to the slow nature of the fiddle, you can alternatively read/run the mysql: select session_id, floor(rand()*256*256*256*256) as ip_num , floor(rand()*1000) as user_agent_id from (select 1+a.nr+10*b.nr as session_id, ceil(rand()*3) as nr from (select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 0)a join (select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 union all select 0)b order by 1 )d inner join (select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9 )e on d.nr>=e.nr AI: One possibility here (and this is really an extension of what Sean Owen posted) is to define a "stable user." For the given info you have you can imagine making a user_id that is a hash of ip and some user agent info (pseudo code): uid = MD5Hash(ip + UA.device + UA.model) Then you flag these ids with "stable" or "unstable" based on usage heuristics you observe for your users. This can be a threshold of # of visits in a given time window, length of time their cookies persist, some end action on your site (I realize this wasn't stated in your original log), etc... The idea here is to separate the users that don't drop cookies from those that do. From here you can attribute session_ids to stable uids from your logs. You will then have "left over" session_ids for unstable users that you are relatively unsure about. You may be over or under counting sessions, attributing behavior to multiple people when there is only one, etc... But this is at least limited to the users you are now "less certain" about. You then perform analytics on your stable group and project that to the unstable group. Take a user count for example, you know the total # of sessions, but you are unsure of how many users generated those sessions. 
You can find the # sessions / unique stable user and use this to project the "estimated" number of unique users in the unstable group since you know the number of sessions attributed to that group. projected_num_unstable_users = num_sess_unstable / num_sess_per_stable_uid This doesn't help with per user level investigation on unstable users but you can at least get some mileage out of a cohort of stable users that persist for some time. You can, by various methods, project behavior and counts into the unstable group. The above is a simple example of something you might want to know. The general idea is again to define a set of users you are confident persist, measure what you want to measure, and use certain ground truths (num searches, visits, clicks, etc...) to project into the unknown user space and estimate counts for them. This is a longstanding problem in unique user counting, logging, etc... for services that don't require log in.
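A small sketch of the two pieces above: building the uid hash from the log fields and projecting unique users for the unstable group (the field names, counts, and thresholds are all illustrative assumptions).
import hashlib

def make_uid(ip, ua_device, ua_model):
    """Stable pseudo-identifier built from IP + user-agent details."""
    return hashlib.md5(f"{ip}|{ua_device}|{ua_model}".encode()).hexdigest()

# Suppose sessions have already been attributed to uids and split into
# "stable" (enough repeat visits) and "unstable" groups.
num_sess_stable = 120_000
num_stable_uids = 15_000
num_sess_unstable = 40_000

num_sess_per_stable_uid = num_sess_stable / num_stable_uids
projected_num_unstable_users = num_sess_unstable / num_sess_per_stable_uid
print(round(projected_num_unstable_users))   # estimated uniques in the unstable group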
H: What is the Best NoSQL backend for a mobile game What is the best noSQL backend to use for a mobile game? Users can make a lot of servers requests, it needs also to retrieve users' historical records (like app purchasing) and analytics of usage behavior. AI: Some factors you might consider: Developer familiarity: go with whatever you or your developers are familiar with. Mongo, Couch, Riak, DynamoDB etc all have their strengths but all should do ok here, so rather than going for an unfamiliar solution that might be slightly better go for familiar and save a bunch of development time. Ease of cloud deployment: for example, if you are using Amazon AWS, then DynamoDB is likely an excellent choice. Sure, you could use Mongo on AWS, but why bother? Other cloud providers have their own preferred db, for example if you are using Google AppEngine, it makes sense to use BigTable or Cloud Datastore. Your use case seems both well suited to NoSQL and not very challenging since your data has a natural partition by user. I think you'd be technically ok with anything, which is why I'm mainly covering other factors.
H: Machine learning techniques for estimating users' age based on Facebook sites they like I have a database from my Facebook application and I am trying to use machine learning to estimate users' age based on what Facebook sites they like. There are three crucial characteristics of my database: the age distribution in my training set (12k users in total) is skewed towards younger users (i.e. I have 1157 users aged 27, and 23 users aged 65); many sites have no more than 5 likers (I filtered out the FB sites with fewer than 5 likers); there are many more features than samples. So, my questions are: what strategy would you suggest to prepare the data for further analysis? Should I perform some sort of dimensionality reduction? Which ML method would be most appropriate to use in this case? I mainly use Python, so Python-specific hints would be greatly appreciated. AI: One thing to start off with would be k-NN. The idea here is that you have a user/item matrix and for some of the users you have a reported age. The age for a person in the user/item matrix might be well determined by something like the mean or median age of some nearest neighbors in the item space. So you have each user expressed as a vector in item space, find the k nearest neighbors, and assign the vector in question some summary statistic of the nearest neighbors' ages. You can choose k on a distance cutoff or, more realistically, by iteratively assigning ages to a training hold-out and choosing the k that minimizes the error in that assignment. If the dimensionality is a problem, you can easily perform reduction in this setup by singular value decomposition, choosing the m vectors that capture the most variance across the group. In all cases, since each feature is binary, it seems that cosine similarity would be your go-to distance metric. I need to think a bit more about other approaches (regression, rf, etc.) given the narrow focus of your feature space (all variants of the same action, liking), but I think the user/item approach might be the best. One note of caution: if the ages you have for training are self-reported, you might need to correct some of them. People on Facebook tend to report ages in the decade they were born. Plot a histogram of the birth dates (derived from ages) and see if you have spikes at decades like the 70s, 80s, 90s.
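A hedged scikit-learn sketch of that pipeline (the like-matrix is simulated; TruncatedSVD stands in for the SVD step, and brute-force k-NN allows the cosine metric):
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
likes = sparse_random(12000, 50000, density=0.001, random_state=0, format="csr")  # user x page likes
ages = rng.integers(16, 70, size=12000).astype(float)

# Reduce the sparse binary feature space, then predict age from neighbours.
svd = TruncatedSVD(n_components=200, random_state=0)
X = svd.fit_transform(likes)

knn = KNeighborsRegressor(n_neighbors=25, metric="cosine", algorithm="brute", weights="distance")
knn.fit(X[:10000], ages[:10000])
print(knn.predict(X[10000:10005]))        # predicted ages for held-out users
Sweeping n_neighbors on the hold-out error, as the answer suggests, is how you would actually pick k.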
H: When a relational database has better performance than a no relational When a relational database, like MySQL, has better performance than a no relational, like MongoDB? I saw a question on Quora other day, about why Quora still uses MySQL as their backend, and that their performance is still good. AI: It depends on your data and what you're doing with it. For example, if the processing you have to do requires transactions to synchronize across nodes, it will likely be faster to use transactions implemented in an RDBMS rather than implementing it yourself on top of NoSQL databases which don't support it natively.
H: How to learn noSQL databases and how to know when SQL or noSQL is better I want to learn about NoSQL and when it is better to use SQL or NoSQL. I know that the answer depends on the case, but I'm asking for good documentation on NoSQL, and some explanation of when it is better to use SQL or NoSQL (use cases, etc.). Also, your opinions on NoSQL databases and any recommendations for learning about this topic are welcome. AI: Please have a look at my answer here: Motivations for using relational database / ORM or document database / ODM Short version: Use NoSQL when data size and the number of transactions per second force it, which typically happens above a few tens of TB and millions of transactions per second (db in memory, running on a cluster), or at hundreds of TB and thousands of transactions per second (traditional db on disk; transactions per second is highly dependent on the usage pattern). Traditional SQL scales up to that point just fine. NoSQL is well suited for some problems (data has a natural sharding, schema is flexible, eventual consistency is ok). You can use it for those even if scaling doesn't force you to. Developer familiarity with the tools and operational ease of deployment are major factors; don't overlook them. A solution may be technically better but you may have a hard time using it; make sure you need it and budget for the learning curve. As to how to learn it: fire up a MongoDB image on AWS, or DynamoDB, and have fun! MongoDB on AWS tutorial DynamoDB tutorial
H: What is dimensionality reduction? What is the difference between feature selection and extraction? From wikipedia: dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration, and can be divided into feature selection and feature extraction. What is the difference between feature selection and feature extraction? What is an example of dimensionality reduction in a Natural Language Processing task? AI: Simply put: feature selection: you select a subset of the original feature set; while feature extraction: you build a new set of features from the original feature set. Examples of feature extraction: extraction of contours in images, extraction of digrams from a text, extraction of phonemes from recording of spoken text, etc. Feature extraction involves a transformation of the features, which often is not reversible because some information is lost in the process of dimensionality reduction.
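For the NLP part of the question, a common concrete example is latent semantic analysis: extract bag-of-words/TF-IDF features and then project them down to a small number of dimensions. A hedged scikit-learn sketch with made-up documents:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today"]

tfidf = TfidfVectorizer().fit_transform(docs)             # feature extraction: text -> sparse term space
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)   # dimensionality reduction (LSA)
print(tfidf.shape, "->", lsa.shape)
The feature-selection counterpart would be something like SelectKBest with a chi-squared score, which keeps a subset of the original terms instead of building new combined features.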
H: Publicly Available Datasets One of the common problems in data science is gathering data from various sources in a somewhat cleaned (semi-structured) format and combining metrics from various sources for higher-level analysis. Looking at other people's efforts, especially other questions on this site, it appears that many people in this field are doing somewhat repetitive work. For example, analyzing tweets, Facebook posts, Wikipedia articles, etc. is part of a lot of big data problems. Some of these data sets are accessible using public APIs provided by the provider site, but usually some valuable information or metrics are missing from these APIs and everyone has to do the same analyses again and again. For example, although clustering users may depend on different use cases and feature selection, having a base clustering of Twitter/Facebook users can be useful in many Big Data applications, and it is neither provided by the API nor available publicly in independent data sets. Is there any index or publicly available data set hosting site containing valuable data sets that can be reused in solving other big data problems? I mean something like GitHub (or a group of sites/public datasets or at least a comprehensive listing) for data science. If not, what are the reasons for not having such a platform for data science? The commercial value of data, the need to frequently update data sets, ...? Can we not have an open-source model for sharing data sets devised for data scientists? AI: There is, in fact, a very reasonable list of publicly available datasets, supported by different enterprises/sources. Some of them are below:
Public Datasets on Amazon Web Services;
Frequent Itemset Mining Implementation Repository;
UCI Machine Learning Repository;
KDnuggets -- a big list of lots of public repositories.
Now, two considerations on your question. The first concerns policies on database sharing. From personal experience, there are some databases that can't be made publicly available, either because of privacy constraints (as with some social network information) or because they concern government information (like health system databases). The second concerns the usage/application of the dataset. Although some datasets can be reprocessed to suit the needs of an application, it would be great to have some nice organization of the datasets by purpose. The taxonomy should involve social graph analysis, itemset mining, classification, and many other research areas.