H: Which approach for user classification on chat text (classifier, representation, features)? I'm trying to train a classifier to classify text from a chat between 2 users, so that later on I can predict which of the two users is more likely to say X sentence/word. To get there I mined the text from the chat log and ended up with two arrays of words, UserA_words and UserB_words. Which classifier should I use for this purpose, and what structure should the training data have? I've researched the bag-of-words structure but don't know exactly how to train a classifier with data in that format. To clarify this last point: for now I have the data in a dict like {"hello":34, "how":12} and so on, where the entries are word:frequency pairs for each user. As far as I know, there is no way to use these two dicts as input to a classifier's fit method. So, how do I transform these two dicts into an array that I can use to train a classifier (let's say I want to use a Gaussian Naive Bayes, just for the sake of the example)? AI: You're asking what ML representation you should use for user-classification of chat text. User-classification is not the usual text-processing task. It's not strictly necessary to semantically understand what the user is saying, only how they're saying it; so we look for telltale features indicative of a specific user. And we don't necessarily need to use, or solely rely on, the usual text-processing representations like bag-of-words, word counts, TF-IDF and word vectors. Here are some features which are predictive of the user: character length, word length and sentence length of each comment; typing speed (esp. if you have timestamps in seconds); ratio of punctuation (e.g. 17 punctuation symbols in 80 chars = 17/80); ratio of capitalization; ratio of numerals; ratio of whitespace; character n-grams (and notice these can pick up e.g. l0ser, f##k, :-) ); use of Unicode (emojis, symbols e.g. stars); ratio of specific punctuation (e.g. how many '.', '!', '?', '*', '#'); word counts, esp. anything statistically anomalous, foreign, slang, insults; anything else you can think of that seems predictive for these two users, e.g. number of misspelled words per sentence (may be actual typos, or come from predictive swiping on a cellphone).
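To make the feature idea above concrete, here is a minimal sketch (not part of the original answer; the chat messages and the particular feature choices are illustrative) that computes a few of the listed per-message features and feeds them to scikit-learn's GaussianNB:

import string
import numpy as np
from sklearn.naive_bayes import GaussianNB

def stylometric_features(msg):
    n = max(len(msg), 1)
    return [
        len(msg),                                          # character length
        sum(c in string.punctuation for c in msg) / n,     # punctuation ratio
        sum(c.isupper() for c in msg) / n,                 # capitalization ratio
        sum(c.isdigit() for c in msg) / n,                 # numeral ratio
        sum(c.isspace() for c in msg) / n,                 # whitespace ratio
    ]

messages_a = ["hey how are you??", "lol ok :-)"]           # user A (toy data)
messages_b = ["Hello. I am fine, thank you.", "Indeed."]   # user B (toy data)

X = np.array([stylometric_features(m) for m in messages_a + messages_b])
y = np.array([0] * len(messages_a) + [1] * len(messages_b))

clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([stylometric_features("lol what??")])))

Each chat message becomes one row of numeric features, which is the array shape a classifier's fit method expects; the two aggregate frequency dicts alone can't be used directly because they throw away the per-message information the classifier needs.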
H: Could someone help me understand what autoencoders are? Could someone help me understand what an autoencoder means? We expect the outputs to be equal to the inputs, so why do we need to do that? It doesn't make any sense to me. I found an interpretation saying that it learns how to reconstruct the input data; does that mean we could just pick some of the pixels from the original picture and then reconstruct the whole original picture? If so, it still makes no sense to me, because the reconstructing part of the model is from the hidden layer to the output layer, and we cannot just put the selected data into the hidden layer, because the inputs of the hidden layer are combinations of the whole raw data from the input layer. Thanks in advance. AI: Autoencoders are a neural network solution to the problem of dimensionality reduction. The point of dimensionality reduction is to find a lower-dimensional representation of your data. For example, if your data includes people's height, weight, trouser leg measurement and shoe size, we'd expect there to be some underlying size dimension which would capture much of the variance of these variables. If you're familiar with Principal Component Analysis (PCA), this is another example of a dimensionality reduction technique. Autoencoders attempt to capture a lower-dimensional representation of their data by having a hidden "bottleneck" layer which is much smaller than the dimensionality of the data. The idea is to train a neural network which throws away as much of its dimensionality as possible and can still reconstruct the original data. Once you have an autoencoder which performs well, by observing the activations at the bottleneck layer, it's possible to see how an individual example scores in each of the reduced dimensions. This may allow us to begin to make sense of what each of the dimensions represents. One can then use these activations to score new examples on this set of reduced dimensions.
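As a concrete illustration, here is a minimal sketch using the Keras API (an addition, not part of the original answer; random data stands in for flattened images):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
bottleneck = layers.Dense(32, activation="relu")(inputs)        # compressed representation
outputs = layers.Dense(784, activation="sigmoid")(bottleneck)   # reconstruction

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(1000, 784)                                   # stand-in for flattened images
autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)       # target equals input

# The reduced representation of new examples comes from the bottleneck:
encoder = keras.Model(inputs, bottleneck)
codes = encoder.predict(x[:5])
print(codes.shape)  # (5, 32)

The network is trained to reproduce its input through the 32-unit bottleneck; the separate encoder model then exposes the learned low-dimensional representation for new examples.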
H: Heatmap on a map in Python Mode Analytics has a nice heatmap feature, but it is not conducive to comparing maps (only one per report). What they do allow is data to be pulled easily into a wrapped Python notebook. And then any image in Python can easily be added to a report. So my question is: how do I recreate a heatmap on an actual map in Python? I've checked out folium and plotly, but neither seems to have similar functionality. AI: Looks like the gmaps package is what you're looking for. You can do things like the sketch below with it:
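A rough sketch of how gmaps is typically used (the exact API can vary between versions, you need your own Google Maps API key, and the coordinates below are made up):

import gmaps

gmaps.configure(api_key="YOUR_API_KEY")          # placeholder; use your own key

locations = [(51.50, -0.10), (51.51, -0.12), (51.52, -0.11)]   # (lat, lon) pairs

fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))    # heatmap drawn over a real map
fig                                              # renders inside a Jupyter notebook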
H: Torch on Gentoo Has anyone tried to install Torch on top of Gentoo Linux Distribution? I'm very familiar with this distribution, but I'm totally new at torch. Googling a little, I found that Gentoo is not officially supported by torch (take a look at the install-deps script, line 142: https://github.com/torch/distro/blob/master/install-deps), but some users have successfully installed it, even if the corresponding tutorial is a little bit aged (http://spotofdata.com/installing-torch-gentoo/). I'm wondering if someone has tried the tandem gentoo/torch and, in that case, if her/he would suggest it in spite of the more traditional ones (arch/torch or ubuntu/torch) and if there are some issues/recommendations in using it. Thanks! AI: I recently wanted to do the same thing, like yesterday, which is how I came across your post. You might also try this link that has a modified installation script for gentoo. I haven't gotten very far and know almost nothing about lua, but I have verified that after running the installation script I can load and run some basic torch scripts. [edit: fixed typo]
H: Why would you split your train data to compute a value on half of the data to then fill the Nan values on the other half? I was checking a kernel written in python from the Bosch kaggle competition (kaggle link to python kernel) and I came across with a weird (at least to me) way to fill Nan values. The train data is split into two halves and then some kind of average is computed in one half by using the non-Nan values of a field along with the target value and then fill Nan values on the other half with these computed values. Then, when training the model after filling Nan values, the model is only trained on the half data where the Nan values have been replaced. The question is, why would you split the data into two halves to compute on one and then fill the other? Are you introducing some kind of leakage when mean values are related to the target value and that's the reason why just the half part with filled values (the one in which you haven't computed anything, just filled Nans) is used to train? Is this procedure prone to overfitting if you perform this operation on all the train data? Thanks in advanced. AI: You basically answered your own question: Are you introducing some kind of leakage when mean values are related to the target value and that's the reason why just the half part with filled values (the one in which you haven't computed anything, just filled Nans) is used to train? Is this procedure prone to overfitting if you perform this operation on all the train data? Yes. You have already used the target values and the first half of the training set. In other words, the filled-out NaN values have information about the target values in them, so if you were to perform any cross-validation for performance estimation, you would end up with an over-estimation of your performance metric. It's best practice to leave that used part out of any further training steps to avoid information leakage.
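Here is one plausible reading of that filling scheme as a small pandas sketch (the actual kernel isn't reproduced here and the column names are made up); it makes the leakage mechanism visible:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "feature": [1.0, 5.0, np.nan, 7.0, 2.0, np.nan, 4.0, np.nan],
    "target":  [0,   1,   0,      1,   0,   1,      1,   0],
})

half_a, half_b = df.iloc[:4].copy(), df.iloc[4:].copy()

# Fill values estimated on half A only, conditioned on the target class; this is
# where information about the target leaks into the feature.
fill_by_target = half_a.groupby("target")["feature"].mean()

mask = half_b["feature"].isna()
half_b.loc[mask, "feature"] = half_b.loc[mask, "target"].map(fill_by_target)
print(half_b)

# A model would then be trained on half_b only; half_a, whose rows and labels were
# used to build the fill values, is kept out of any further training to limit leakage.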
H: Association Rules - Data Mining - Train and Test approach? Does it make sense to use the train, test, and validation model using the Association Rules Technique? AI: Finding association rules is an unsupervised learning task (or exploratory task). You don't actually know which rules you want to find before you actually find them, so there's nothing to test against. Validating against a separated instance set is usually done on supervised learning tasks such as classification and regression.
H: Matrix based Visualization Meaning - Association Rules I couldn't find any good resource that explains what type of information I can extract from a matrix-based visualization like this: I don't understand what RHS and LHS represent. Can anyone explain the meaning of this chart? Many thanks! AI: An association rule is (usually) of the format X -> Y, meaning that if X happens then Y is likely to happen. In the traditional supermarket example, a possible rule would be something like Bread -> Butter. In this case, X is called the antecedent, or LHS (left-hand side), and Y is the consequent, or RHS (right-hand side). This specific chart has the number of items in the LHS as its horizontal axis, and the number of items in the RHS as the vertical axis. The colors represent the metrics of support and confidence, as can be seen in the legend. Example information that can be read from this chart is that the support is higher for a lower number of items in the LHS, because the left side of the chart looks red-ish and the right side looks blue-ish, matching our intuition (more items in the LHS is more restrictive and thus less likely to happen). The confidence doesn't seem to be related to either of the axes in particular.
H: Pandas - Get feature values which appear in two distinct dataframes I have a Pandas DataFrame structured like this: user_id movie_id rating 0 1 1193 5 1 2 1193 5 2 12 1193 4 3 15 1193 4 4 17 1193 5 5 18 1193 4 6 19 1193 5 7 24 1193 5 8 28 1193 3 Each row corresponds to a rating event performed by the user user_id for the movie movie_id. For instance, the first row says that user 1 rated the movie 1193 with a rating of 5. This data comes from the MovieLens project. My goal is to find all the users who satisfy these two conditions: rated movie 588 with a rating of 5 rated movie 3578 with a rating of 3 I came up with two filtered DataFrame objects for each of the above conditions: ratings_588_5 = data[(data.movie_id == 588) & (data.rating == 5] ratings_3578_3 = data[(data.movie_id == 3578) & (data.rating == 3)] Which result in, respectively: >>> ratings_588_5 user_id movie_id rating 438 588 5 758 588 5 913 588 5 1024 588 5 1214 588 5 >>> ratings_3578_3 user_id movie_id rating 45 3578 3 321 3578 3 467 3578 3 758 3578 3 1024 3578 3 1381 3578 3 In Pandas, how can I compute a list of all user_id which appear in both DataFrames? In this example, the result I want to obtain is: [758, 1024] AI: you can use numpy.intersect1d() method: In [277]: np.intersect1d(a.user_id, b.user_id).tolist() Out[277]: [758, 1024] or pd.core.common.intersection() method, but it seems to be slow (at least on my notebook for aa and bb DataFrames [see setup below...]): In [307]: pd.core.common.intersection(a.user_id, b.user_id).tolist() Out[307]: [1024, 758] Timing for aa DF (50K rows) and bb DF (60K rows): In [294]: aa = pd.concat([a] * 10**4, ignore_index=True) In [295]: bb = pd.concat([b] * 10**4, ignore_index=True) In [296]: aa.shape Out[296]: (50000, 3) In [297]: bb.shape Out[297]: (60000, 3) In [298]: %timeit aa.ix[aa.user_id.isin(bb.user_id),'user_id'].tolist() 10 loops, best of 3: 41.8 ms per loop In [299]: %timeit np.intersect1d(aa.user_id, bb.user_id).tolist() 100 loops, best of 3: 5.36 ms per loop In [300]: %timeit pd.merge(aa, bb, on='user_id').user_id.tolist() ... skipped ... MemoryError: In [308]: %timeit pd.core.common.intersection(aa.user_id, bb.user_id).tolist() 10 loops, best of 3: 52.8 ms per loop PS original answer
H: Checking for skewness in data I have a data frame consisting of some continuous data features. I did a kde plot of the features using seaborn kdeplot functionality which gave me a plot as shown below : How do I interpret this visualization in order to check for things like skew in the data points, etc.? AI: IIUC you can use [DataFrame.hist()] method: import matplotlib import matplotlib.pyplot as plt import pandas as pd matplotlib.style.use('ggplot') df = pd.DataFrame(np.random.randint(0,10,(20,4)),columns=list('abcd')) df.hist(alpha=0.5, figsize=(16, 10)) Result: Data: In [44]: df Out[44]: a b c d 0 3 0 2 5 1 8 7 6 6 2 6 4 5 7 3 4 4 0 6 4 5 6 0 2 5 0 0 4 8 6 7 6 7 4 7 7 6 6 2 8 6 5 9 4 9 6 3 6 9 10 7 9 7 6 11 9 3 5 6 12 9 4 7 0 13 2 8 8 8 14 0 8 4 7 15 1 5 2 4 16 2 6 6 4 17 0 3 8 1 18 4 1 0 4 19 4 4 6 8 In [45]: df.skew() Out[45]: a -0.154849 b -0.239881 c -0.660912 d -0.376480 dtype: float64 In [46]: df.describe() Out[46]: a b c d count 20.000000 20.000000 20.000000 20.000000 mean 4.500000 4.600000 4.900000 5.050000 std 2.964705 2.521487 2.770142 2.502105 min 0.000000 0.000000 0.000000 0.000000 25% 2.000000 3.000000 3.500000 4.000000 50% 4.500000 4.500000 6.000000 5.500000 75% 7.000000 6.000000 7.000000 7.000000 max 9.000000 9.000000 9.000000 9.000000
H: SPARK RDD - Clustering - K-Means Imagine that I've this dataset (just a sample) A B C 1 23 1000 2 52 5000 3 12 500 4 10 450 I'm trying to assign each row to a clustering based on C value. Like this: A B C CLUSTER 1 23 1000 2 2 52 5000 1 3 12 500 3 4 10 450 3 For that I'm using K-Means algorithm using Spark: import org.apache.spark.mllib.clustering.{KMeans, KMeansModel} import org.apache.spark.mllib.linalg.Vectors val data = sc.textFile("/user/cloudera/TESTE1") val parsedData = data.map(s => Vectors.dense(s.split(',').map(_.toDouble))).cache() val numClusters = 4 val numIterations = 20 val clusters = KMeans.train(parsedData, numClusters, numIterations) val WSSSE = clusters.computeCost(parsedData) println("Within Set Sum of Squared Errors = " + WSSSE) clusters.save(sc, "/user/cloudera/KMeansModel") val sameModel = KMeansModel.load(sc, "/user/cloudera/KMeansModel") But this script extracts me a .gz.parquet file and when I try to see what type of information this file contains, using: sqlContext.read.parquet("/user/cloudera/KMeansModel/data/part-r-00000-f551ea29-54db-45be-8cba-d06a97d6d9f2.gz.parquet").show I'm getting this: +---+--------------------+ | id| point| +---+--------------------+ | 0|[9.39601519885208...| | 1|[9.80112351958380...| | 2|[9.63822872186722...| | 3|[9.44194658832542...| +---+--------------------+ How can I get the table that I put above? Basically I just want to extract the same fields and add a column with the cluster calculated by K-Mwans to each row... Many thanks! AI: This is well answered in this earlier question: https://stackoverflow.com/q/31447141/1060350 Beware that Spark k-means is slow. If your data fits into main memory (i.e. a few gigabyte, which means billions of vectors!) then other tools such as ELKI that don't have the cluster overhead will be much faster. Use spark only for preprocessing the data, if you e.g. have several TB of jsons, and you need to first extract the numbers out of the JSONs, then here is where Spark shines. Once your data is then vectors, use ELKI instead, it's much faster.
H: Is a data warehouse considered a data lake in a big data environment? Suppose I have a data warehouse (DWH) and now I would like to add many other big data sources of information, most of them unstructured. I still keep the DWH with no architectural change. The only thing I do is enrich the big data with the data that resides in the DWH using connectivity. Can the DWH be defined as a data lake in this case? AI: Check out this article: http://www.kdnuggets.com/2015/09/data-lake-vs-data-warehouse-key-differences.html Looks like the answer is yes, your DWH can be defined as a data lake because you are adding unstructured data. Here is an excerpt: “A data lake is a storage repository that holds a vast amount of raw data in its native format, including structured, semi-structured, and unstructured data. The data structure and requirements are not defined until the data is needed.”
H: Data set size versus data dimension, is there a rule of thumb? I am trying to collect some data for ML, specifically for training a neural network model, and I don't know how big the data set needs to be. So is there a rule of thumb on how many samples of dimension DIM one should collect for training an NN model? For example, does it depend on the number of features, the kind of NN model, or something else? Any help will be appreciated. AI: In this video by Caltech prof. Yaser Abu-Mostafa, he explains the relationship between the dimension of a dataset and the size required for any learning model to work. As a general rule of thumb, the size of the dataset should be at least about 10x its dimension, and this should be independent of the model used. Also, this link has summaries from some of the relevant papers, viz. For a finite sized data with little or no a priori information, the ratio of the sample size to dimensionality must be as large as possible to suppress optimistically biased evaluations of the performance of the classifier. This says that the ratio of the size of the dataset (sample) to its dimension should be as large as possible to reduce classifier bias towards a particular class. The ratio of the sample size to dimensionality should vary inversely proportional to the amount of available knowledge about the class conditional densities. In a classifier setting, the more knowledge we have about each class's probability density, the smaller the sample-size-to-dimension ratio can be. In simpler terms, we should include as much data as possible, and if we cannot do that, include as much information as possible in the small dataset itself, because for any model to work we need to feed it a high-variance dataset.
H: Spark MLLib - how to re-use TF-IDF model I am using the Spark ML IDF estimator/model (TF-IDF) to convert text features into vectors before passing them to the classification algorithm. Here's the process. Datasets: full sample data (labeled); training (labeled); test (labeled); unseen (non-labeled). This is my current workflow: fit the IDF model (idf-1) on the full sample data; apply (transform) idf-1 on the full sample data; split the data set into training and test data; fit the ML model on the training data; apply (transform) the model on the test data; apply (transform) idf-1 on the unseen data; apply (transform) the model on the unseen data. I read somewhere that I should split my data into training and test BEFORE fitting the IDF model, i.e. fit IDF only on the training data and then use the same transformer to transform the training and test data. Why would you do that? What exactly does IDF learn during the fitting process that it can reuse to transform any new dataset? Perhaps the idea is to keep the same values for |D| and DF|t, D| while using new TF|t, D|? Also, how often should I fit (not transform) the IDF model against new unseen data? Let's say my model is ready for prediction. I made n predictions using the same IDF and classifier model. After that I want to retrain the model as I have new data now. Should I also retrain IDF then? AI: tf-idf will learn a vocabulary, idf, and some implementations will also learn stop words (based on min_df, max_df, max_features). Read over sklearn's TfidfVectorizer and you can see the attributes that the fit method will set. When you expose a trained tf-idf to new data it will transform that data into a vector of the same size as your original data, using the vocabulary to construct term counts which are then converted into your tf vector. The value in this is that you can use another model to predict an outcome based on the tf-idf, as each new document will have the same size tf-idf vector as the documents you used to train the model. Otherwise you couldn't use it to make a prediction! For example with a naive Bayes classifier on tf-idf: tfidf = TfidfVectorizer() X = tfidf.fit_transform(X_train) nb = MultinomialNB() nb.fit(X, y_train) # When you receive a new document X = tfidf.transform(new_doc) prediction = nb.predict_proba(X) And I don't think you would want to refit the model. If you want some kind of continuous real-time update, consider implementing a Bayesian update.
H: Contrasting logistic regression vs decision tree performance in specific example I have a set of 10,000 integers, and another set of 100. The integers in the first set are mapped to integers in the second set according to some rules (not mathematical rules, think of these values as codes for naming certain items, it is some categorical mapping). The mapping is not necessarily 100 to 1, in some case I may have just 30 or so integers from the first set mapped to an integer in the second set, in other cases 300, but on average of course it is 100 to 1. Using sklearn, I created a decision tree that was able to get over 99% accuracy, as I would expect. When I tried logistic regression, though, accuracy was just 45%. The training sample is about 100,000 example, so, it should be enough to learn. What is going on? Is there something inherently different in the logistic regression method that I am missing? AI: A decision tree is designed to make many branches leading to any number of categorical outcomes. Logistic regression in it's simplest form, however, takes a continuous variable and decides where to apply a threshold in order to model a binary response. In your case, a decision tree makes sense because you are working with data that has no overall mathematical model, if I understand you correctly. Logistic regression is going to struggle with deciding between 100 classes with no underlying pattern. I would suggest reviewing the math behind logistic regression in depth in order to understand the limitations.
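A scaled-down sketch of the situation described above (toy data, not the poster's dataset): a random lookup table maps integer codes to classes, and the two models are compared on the raw codes.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
codes = rng.randint(0, 1000, size=20_000)      # the "first set" of integers
mapping = rng.randint(0, 20, size=1000)        # arbitrary code -> class table
labels = mapping[codes]                        # the "second set" of integers

X = codes.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

tree = DecisionTreeClassifier().fit(X_tr, y_tr)
logit = LogisticRegression(max_iter=500).fit(X_tr, y_tr)

print("decision tree accuracy:      ", tree.score(X_te, y_te))
print("logistic regression accuracy:", logit.score(X_te, y_te))
# One-hot encoding the codes would let logistic regression memorize the table
# as well, at the cost of a 1000-column design matrix.

The tree simply memorizes the lookup table by branching on the code, while a linear model on the raw integer code has no monotone relationship to exploit, which mirrors the 99% vs 45% gap reported in the question.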
H: Help reducing a set of features I am trying to do some clustering. I have a dataset that is very sparse - with the majority of features only occurring in a single vector. Here is a list of our features: https://gist.github.com/scrooloose/5963725dc88e5d15d74dcae522bebf82 I am looking for any suggestions/hints/pointers as to how we can merge some of these isolated features together. This should hopefully make my clustering experiments more successful. For example, from a manual inspection of the data, I can see this group of features that could be all merged into a feature like "health" or perhaps "mental health" + "general health" or similar. 618: Mental Health Research 619: Mental disorder 1616: mental health 1617: mental illness 1618: men’s health 410: Genital wart 402: Genomic Medicine 476: Hygiene Another example is this set of features that could be merged into something like "education": 536: Kiir Primary School 591: Makonzi Boarding School 609: Mathematics 670: New York University 300: Education 301: Educational psychology 349: Female education Any thoughts would be very welcome, thanks :) Side note: These features are keywords as returned from alchemy (http://www.alchemyapi.com/). Resulting from keyword searches for a set of URLS. The intention is to cluster the URLs (and hence they companies they represent) by these keywords. AI: If I understand correctly you want to cluster urls by using the keywords extracted as features. As these features are really sparse, you can try to use dimensionality reductions methods to help you. One way is to treat each URL keywords as a document. Then you can use document embeddings algorithms such as LDA or doc2vec that learn denser representations of your documents. If you want to group keywords, you can try to use word embeddings methods that learn representations of words. Using this you can then measure the similarity between words and groups of words. An example is the well-known word2vec. Recent methods like FastText can be alternative that take the morphology into account.
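As a starting point, here is a minimal scikit-learn sketch of the "URL keywords as a document" idea using LDA (the answer names LDA and doc2vec but gives no code; the toy URLs and keywords below are made up):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

url_keywords = {
    "url1": ["mental health", "hygiene", "education"],
    "url2": ["mathematics", "primary school", "education"],
    "url3": ["genomic medicine", "mental illness"],
}

# Treat each URL's keyword list as one document.
docs = [" ".join(kws) for kws in url_keywords.values()]
counts = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)   # dense, low-dimensional representation per URL
print(topic_mix)

The dense topic mixtures can then be fed to the clustering step instead of the sparse one-hot keyword features.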
H: Difference between summation and integration I understand calculus and maths, but when I apply statistics and add up numbers they both look kind of the same. Can anybody explain the difference in a little detail and in a simple manner, please? AI: In the simplest words: summation is the sum of a small number of large quantities; integration is the sum of a large number of small quantities. Another simple difference: summation is a discrete sum, whereas integration is a continuous sum. Example: in an integral of f(x) dx, dx is an infinitesimal width, so the integral's "summation" is continuous (see the limit written out below). Hope it helps, cheers! :)
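To spell out the "continuous sum" point, the definite integral is literally the limit of a summation (this equation is an addition, written in LaTeX, not part of the original answer):

\int_a^b f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x,
\qquad \Delta x = \frac{b-a}{n}, \quad x_i = a + i\,\Delta x .

A summation adds finitely many terms of noticeable size; the integral is what that sum becomes when the number of terms grows without bound and each term's width Delta x shrinks toward zero.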
H: Extract information from sentence I'm creating a simple chatbot. I want to obtain the information from the user response. An example scenario: Bot : Hi, what is your name? User: My name is Edwin. I wish to extract the name Edwin from the sentence. However, the user can response in different ways such as User: Edwin is my name. User: I am Edwin. User: Edwin. I'm tried to rely on the dependency relations between words but the result does not do well. Any idea on what technique I could use to tackle this problem? [UPDATED] I tested with named entity recognition together with part of speech tagger and parser. I found out that most model is trained in a way that the first character of the entity for the person name or the proper noun must be upper case. This may be true for normal document, but it is irrelevant for a chatbot. E.g. User: my name is edwin. Most NER failed to recognize this. AI: You can possibly use a combination of Named Entity Recognition and Syntactical Analysis - while the word Edwin is certainly propping up, imagine a situation where the name is Edward Philip Martel. NER detects each word as a separate entities (hence 3 different entities) - thus, you will anyways have to string them together based on some logic. Further, in the case of multiple names being present, it can get harder to disambiguate (e.g. John & Ramsey dined at Winterfell). This is where the analysis of the sentence syntax would also help (assuming that the end user enters a relatively coherent and proper sentence - if slang and short forms of text are used, even the Stanford NLP can help upto a certain extent only). One way of leveraging on syntax analysis / parsing and NER is in the following examples - 1. User: Edwin is my name. 2. User: I am Edwin. 3. User: My name is Edwin. In each of the cases (as is generically the case as well), the Entity Name (Proper Noun / Noun) is associated in close proximity to a Verb. Hence, if you first parse the sentence to determine verbs and then apply NER to surrounding (+/- 1 or 2) words, you may have a relatively decent way to resolve the problem. This solution would depend primarily on the syntax rules you create to identify NERs as well as the window around the verbs.
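A small spaCy sketch (assuming the en_core_web_sm model is installed) showing the building blocks such a rule would combine: PERSON entities, verbs, and the proper nouns near them. As the question's update notes, a lower-cased name like "edwin" may still slip past the pretrained tagger.

import spacy

nlp = spacy.load("en_core_web_sm")

sentences = ["My name is Edwin.", "Edwin is my name.", "I am Edwin.", "my name is edwin."]
for s in sentences:
    doc = nlp(s)
    people = [ent.text for ent in doc.ents if ent.label_ == "PERSON"]   # NER candidates
    verbs = [t.text for t in doc if t.pos_ in ("VERB", "AUX")]          # anchors for the +/- window
    propn = [t.text for t in doc if t.pos_ == "PROPN"]                  # proper-noun candidates
    print(s, "| PERSON:", people, "| verbs:", verbs, "| proper nouns:", propn)

A practical extractor would first trust a PERSON entity if one is found, and otherwise fall back to proper nouns (or unknown tokens) within a small window around the verb, as described above.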
H: After the training phase, is it better to run neural networks on a GPU or CPU? My understanding is that GPUs are more efficient for running neural nets, but someone recently suggested to me that GPUs are only needed for the training phase. Once trained, it's actually more efficient to run them on CPUs. Is this true? AI: This depends on many factors, such as the neural network architecture (CNNs tend to be better optimized than RNN on GPU) as well as how many test samples you give as input to the neural network (GPUs can be even faster when given a batch of samples instead of a single sample). As an example, here is a benchmark comparing CPU with GPU on different CNN-based architectures. The forward pass is much slower on a CPU in that case: FYI: Benchmarks based on neural networks libraries to compare the performance between different GPUs
H: Use of Correlation Map in Machine Learning I would like to know the use of correaltion map in machine leraning. For example, if there are 2 features with high correaltion, should either of the features be removed before appying the algorithm or it depends on every data set. Any explanation would be highly helpful. Thanks in advance. AI: It depends. A high correlation between two features suggests that they represents almost the same information. For some problems like clustering, it is always useful to remove redundant features while some algorithm like Gradient Boosting in xgboost is not affected at all by such features. So, it all depends on what you want to do with your data set. As per my opinion, if your dataset has too many features, then I would suggest to check the correlation between those features and apply PCA to reduce the dimensionality of your dataset especially if you are doing tasks like clustering or regression.
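A minimal pandas/scikit-learn sketch (toy data) of the two options discussed: inspect the correlation matrix, then either drop one column of each highly correlated pair or let PCA compress the redundancy.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
a = rng.rand(200)
df = pd.DataFrame({"a": a, "b": a * 2 + rng.rand(200) * 0.01, "c": rng.rand(200)})

corr = df.corr().abs()
print(corr)                                    # 'a' and 'b' are almost perfectly correlated

# Option 1: drop one column of each pair above a chosen threshold (0.95 here).
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
print("dropping:", to_drop)

# Option 2: PCA keeps the shared variance in fewer components instead.
reduced = PCA(n_components=2).fit_transform(df)
print(reduced.shape)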
H: Difference between business and financial and data analyst Can anyone please tell me whats the difference between Business analyst Financial analyst Data analyst And what should one need to learn in order to achieve either of these profile. AI: Business Analyst A business analyst is one who understands the specific domain of the project (ex. retail, merchandising to be specific, supply chain etc.). His/her role is to understand the business problem, analyze the current state and capture requirements using various tools like surveys, interviews, group discussions and then provide recommendations and create a Requirements document for sign off. Business Analyst Skillset- Communication, Analytical thinking, Domain Knowledge, Generic technical Knowledge, Problem solving skills,Decision solving skills, Managerial Skills,Negotiation and Persuasion Skills. Financial Analyst Financial analysts use financial data to spot trends and extrapolate into the future, helping their employers and clients make the best investing decisions. Businesses rely on financial analysts to determine when it is an auspicious time to buy or sell specific securities and, in some cases, companies use reports put together by financial analysts to determine if the entire business should be sold. Financial Analyst Skillset- Financial analysts should be great problem-solvers, excel at the use of logic and possess strong skills in quantitative analysis. In addition, successful financial analysts have an in-depth understanding of various financial markets and investment products. Data Analyst A data analyst's role is one that works with lots of data to derive meaningful insights to either address business problems or discover hidden trends and patterns that can be leveraged to meet the business objectives. Data Analyst Skillset- Data analyst also needs similar skills with some additional skills like to analyze data like SQL, DATA mining, OLAP, Reports etc. Note: Strong people skills, leadership ability, and teamwork are beneficial for each type career. For more info, you can visit these resources- Career Advice: Financial Analyst Vs. Data Analyst What is the difference between data analyst and business analyst? The Differences Between a Business Analyst & a Data Analyst Hope it helps, cheers! :)
H: Network Plot - R - Modify Col Having this R statement: itemsets = apriori(data, parameter=list(support=0.05, confidence=0.5)) plot(itemsets, method="graph", control=list(type="items")) How can I change the colour of my plot? I tried to insert "col", but it doesn't allow me to do that; I'm getting the following error: Error in i.parse.plot.params(graph, list(...)) : Unknown plot parameters: col Thanks! AI: Use edgeCol and nodeCol: library("arulesViz") data(Groceries) rules <- apriori(Groceries, parameter=list(support=0.005, confidence=0.5)) plot(rules, method="graph", control=list(nodeCol="yellow", edgeCol="blue", type="items"))
H: Series prediction for any given time I have a time series of data points. Then I am given a future timestamp and I have to predict the value for the data point. For simplicity, you can assume that the timestamp is bounded, e.g. a query can be at most 1 hr in the future. This is different from traditional train-and-predict setups: here you will be given the time as query input in addition to the past data. Currently I am training a different model for each minute (yes, 60 models: lots of waiting time). I am wondering if there is something available for this specific task? EDIT: To give a view of the data, you can assume a simple time series of real numbers, and I have to use the history to predict the value at any general time (within the next 1 hr). AI: Considering Occam's razor, I would recommend using the simplest model first and increasing the complexity if the simple models fail: Exponentially Weighted Moving Average, which allows only for autocorrelation with lag one; ARIMAX, which allows for several lagged autocorrelations, seasonal adjustments and external regressors; Fourier transformation, which can fit more shapes of time series but is often more complex to explain to users. All these models can predict one step ahead, and you can repeat this up to 60 times. Of course, the confidence intervals become much wider that many steps ahead. Base R has many time series datasets available. The first two models work well for many time series. I would only start to use neural nets when these are not working or if you are looking for, say, edge detection. Neural nets: if neural nets are the only option to go, you can check this post or, if you have MATLAB available, this post. I recently found this very interesting article on Medium that explains how to fit a neural net to time series data. A more detailed implementation guide can be found here. I have not tried either approach, but I thought they may be useful for future reference. Time series modus operandi: note that in any case, before starting to model your time series it is highly recommended to take some preparatory steps: make your series stationary, as explained here; understand what elements you can decompose your series into, as explained here. Hope this helps.
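A quick Python sketch of the first two suggestions (the answer's references are R-based; statsmodels is assumed here, and the sine-plus-noise series is made up), forecasting the next 60 steps from one fitted model instead of training 60 separate ones:

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.RandomState(0)
y = pd.Series(np.sin(np.arange(500) / 20.0) + rng.normal(scale=0.1, size=500))

# 1. Exponentially weighted moving average: a simple smoother/level estimate.
ewma_level = y.ewm(span=10).mean().iloc[-1]
print("EWMA level:", ewma_level)

# 2. ARIMA: fit once, then ask for the next 60 minutes in a single call.
model = ARIMA(y, order=(2, 0, 0)).fit()
forecast = model.forecast(steps=60)     # values 1..60 steps ahead
print(forecast.head())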
H: Plotting relationship between 2 data points where one data point is a boolean I am working with the titanic survivors data set. I have the data as a DataFrame and I can create 1D visualizations such as histograms, and also see the correlations by calling data.corr(). I would like to create a scatter plot to represent the correlation between 'age' and 'survived'. I can't figure out how to plot this data because 'survived' is effectively an integer of 0 or 1 (died or lived, respectively) If I do something like: titanic_data.plot(x='Age', y='Survived', style='o') I get a plot that looks like this: What I would like is a plot that somehow takes the average survival rate by age and created something more like this: AI: You can precalculate the survival rate (probability) and plot a bar plot: import seaborn as sns x = sns.load_dataset('titanic') bins = np.linspace(0, 100, 11) labels = bins[1:] # let's group all ages by bins (10, 20, 30, ..., 100) rpt = (x.groupby(pd.cut(x.age, bins, labels=labels)) .survived.mean()*100 ).dropna().to_frame('survival_rate') rpt.plot.bar(rot=0, width=0.8, alpha=0.5, figsize=(12, 10)) calculated data: In [84]: bins Out[84]: array([ 0., 10., 20., 30., 40., 50., 60., 70., 80., 90., 100.]) In [85]: labels Out[85]: array([ 10., 20., 30., 40., 50., 60., 70., 80., 90., 100.]) In [86]: rpt Out[86]: survival_rate age 10.0 59.375000 20.0 38.260870 30.0 36.521739 40.0 44.516129 50.0 38.372093 60.0 40.476190 70.0 23.529412 80.0 20.000000
H: Pros and Cons of Python and R for Data Science So, let's start out by saying I am NOT asking which is better. I like both of these languages for data science and I think this is a matter of and rather than or since there is no need to choose one of them. My general opinion is that R has more data science depth while Python has more breadth in terms of diversity of application. I will start this out with a few of my own scattered opinions. Python I tend to use Python as the overarching language (sometimes using it to execute scripts in R) since: Installed standard on Macs and many *nix setups Easy to install/set up in environments with security restrictions Has a great suite of tools whether I want to: Set up a website Interact with the Apache suite of tools Build out pipelines Build and score predictive models Develop an application / front-end Has a larger audience if I get stuck and need to ask questions on here Debug / Unit test Auto Document A good example of this was in building a Robot some facial recognition stuff over the summer. Raspbery Pi comes with Python by default, the sensors had Python API's, and a few of the web API's I bumped up against had Python API's available. This would have been a difficult application to take on using R. R I like to use R for early data explanation since data table tends to load larger sets into memory more quickly than Pandas and the general environment seems more mature for exploratory stuff. If I need mixed models or something similar that isn't Machine Learning "bread and butter" R is more likely to have what I need. sklearn is pretty good but doesn't feel quite as polished as what I have in R. On the data viz side, ggplot2 is obviously better than matplotlib (the html based stuff is tough to share in an environment with heavy IP and security restrictions so I tend to lean towards the simple stuff.) How do you use them? How do you interact with these languages? Do you mix them together? Specialize in one? What's your rationale? Strengths/weaknesses? AI: Interaction - Random Facts Both are good stable languages with interesting complementary qualities. You can get much better packages in one and then stitch them with some data from the other. An example is using time series forecasting and decision trees in R and doing data munging in Python. Both languages borrow from each other. Even seasoned package developers like Hadley Wickham (Rstudio) borrows from Beautiful Soup (python) to make rvest for web scraping. In addition to that, Yhat borrows from sqldf to make pandasql and many other. Rather than reinvent the wheel in the other language developers can focus on innovation because, in the end, the customer does not care which language the code was written, the customer cares for insights. Mixing Them Up AM mentioning few approaches to mix them together- Use a Python package rpy2 to use R within Python . [Demo] Use Python from within R using the rPython package. [Demo] Use Jupyter with the IR Kernel. Python and R and makes the interactivity of iPython available to other languages. Use Beaker Notebook. It allows you to switch from one language in one code block to another language in another code block in a streamlined way to pass shared objects. Python vs R Python vs R - This section will answer: Which will be better? How to choose one over other? Specialization See as I said earlier both are stable and you can choose any or work with both. 
But when it comes to master one I'll suggest keep these 3-4 guidelines in mind- Personal Preference Choose the language to begin with based on your personal preference, on which comes more naturally to you, which is easier to grasp from the get-go. To give you a sense of what to expect, mathematicians and statisticians tend to prefer R, whereas computer scientists and software engineers tend to favor Python. Project selection You can also make the Python vs. R call based on a project you know you’ll be working on in your data studies. If you’re working with data that’s been gathered and cleaned for you, and your main focus is the analysis of that data, go with R. If you have to work with dirty or jumbled data, or to scrape data from websites, files, or other data sources, you should start learning, or advancing your studies in, Python. Collaboration Once you have the basics of data analysis under your belt, another criterion for evaluating which language to further your skills in is what language your teammates are using. If you’re all literally speaking the same language, it’ll make collaboration—as well as learning from each other—much easier. Job market Jobs calling for skill in Python compared to R have increased similarly over the last few years. Note: Have a look at this infographic by DataCamp. For a better view on it. My Rationale In my case am doing both and using them interactively and Customizing them as per my use. You can get something really interesting in one (as I mentioned above) which will be hardly available in other, so it's better to use both together. This is the best way to bridge the gap between these two. But in the last, it's your call keep the guidelines, your interest, and scenarios in mind and make a clear view on that. Strength & Weaknesses R Strength R is great for prototyping and for statistical analysis. It has a huge set of libraries available for different statistical type analysis. Check The Comprehensive R Archive. RStudio IDE is a definitely a big plus. It eases most of the tedious tasks and fastens your workflow. Weaknesses The syntax could be obscure sometimes. It is harder to integrate to a production workflow. In my opinion, it is better suited for consultancy-type tasks. The libraries documentation isn't always user friendly. Python Strength Python is great for scripting and automating your different data mining pipelines. It is the de facto scripting language nowadays. It also integrates easily in a production workflow. Besides, it can be used across different parts of your software engineering team (back-end, cloud architecture etc.). The scikit-learn library is awesome for machine-learning tasks. Ipython (and its notebook) is also a powerful tool for exploratory analysis and presentations. Weaknesses It isn't as thorough for statistical analysis as R, but it has come a long way these recent years In my opinion, the learning curve is steeper than R, since you can do much more with Python. The Conclusion Use R and Python. Learn how they inter-operate together. Start with one and then add the other to your workflow. As I like to remind myself- "choosing the tools should never be the primary problem". When in doubt, use the one that is available and that gets the work done quickly. Hope it helps! Ref- Udacity, Quora, Letustweak, kD, DataCamp
H: Backpropagation derivation problem I read a few tutorials on neural network backpropagation and decided to implement one from scratch. I tried to find this single error for the past few days I have in my code with no success. I followed this tutorial in hopes of being able to implement a sine function approximator. This is a simple network: 1 input neuron, 10 hidden neurons and 1 output neuron. The activation function is sigmoid in the second layer. The exact same model easily works in Tensorflow. def sigmoid(x): return 1 / (1 + np.math.e ** -x) def sigmoid_deriv(x): return sigmoid(x) * (1 - sigmoid(x)) x_data = np.random.rand(500) * 15.0 y_data = [sin(x) for x in x_data] ETA = .01 layer1 = 0 layer1_weights = np.random.rand(10) * 2. - 1. layer2 = np.zeros(10) layer2_weights = np.random.rand(10) * 2. - 1. layer3 = 0 for loop_iter in range(500000): # data init index = np.random.randint(0, 500) x = x_data[index] y = y_data[index] # forward propagation # layer 1 layer1 = x # layer 2 layer2 = layer1_weights * layer1 # layer 3 layer3 = sum(sigmoid(layer2) * layer2_weights) # error error = .5 * (layer3 - y) ** 2 # L2 loss # backpropagation # error_wrt_layer3 * layer3_wrt_weights_layer2 error_wrt_layer2_weights = (y - layer3) * sigmoid(layer2) # error_wrt_layer3 * layer3_wrt_out_layer2 * out_layer2_wrt_in_layer2 * in_layer2_wrt_weights_layer1 error_wrt_layer1_weights = (y - layer3) * layer2_weights * sigmoid_deriv(sigmoid(layer2)) * layer1 # update the weights layer2_weights -= ETA * error_wrt_layer2_weights layer1_weights -= ETA * error_wrt_layer1_weights if loop_iter % 10000 == 0: print(error) The unexpected behavior is simply that the network doesn't converge. Please, review my error_wrt_... derivatives. The problem should be there. Here's the Tensorflow code it works flawlessly with: x_data = np.array(np.random.rand(500)).reshape(500, 1) y_data = np.array([sin(x) for x in x_data]).reshape(500, 1) x = tf.placeholder(tf.float32, shape=[None, 1]) y_true = tf.placeholder(tf.float32, shape=[None, 1]) W = tf.Variable(tf.random_uniform([1, 10], -1.0, 1.0)) hidden1 = tf.nn.sigmoid(tf.matmul(x, W)) W_hidden = tf.Variable(tf.random_uniform([10, 1], -1.0, 1.0)) output = tf.matmul(hidden1, W_hidden) loss = tf.square(output - y_true) / 2. optimizer = tf.train.GradientDescentOptimizer(.01) train = optimizer.minimize(loss) init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) for i in range(500000): rand_index = np.random.randint(0, 500) _, error = sess.run([train, loss], feed_dict={x: [x_data[rand_index]], y_true: [y_data[rand_index]]}) if i % 10000 == 0: print(error) sess.close() AI: I think your biggest problem is the lack of biases. Between the input layer and the hidden layer, you should not only transform by the weights but should also add a bias. This bias will shift your sigmoid function to the left or right. Take a look at this code (I made some adaptations). What is important: Added biases. Altered your error_w such that they are correct. Made some good random starting points for biases (np.random.rand(width) * 15. - 7.5) such that all biases are random points on the desired x-scale. Made a plot that shows the initial guess and final. 
Let me know if some parts are not clear: import numpy as np import matplotlib.pyplot as plt def sigmoid(x): return 1 / (1 + np.math.e ** -x) def sigmoid_deriv(x): return sigmoid(x) * (1 - sigmoid(x)) def guess(x): layer1 = x z_2 = layer1_weights * layer1 + layer1_biases a_2 =sigmoid(z_2) z_3 = np.dot(a_2, layer2_weights) + layer2_biases # a_3 = sigmoid(z_3) a_3 = z_3 return a_3 x_data = np.random.rand(500) * 15.0 - 7.5 y_data = [np.sin(x) for x in x_data] ETA = 0.05 width = 10 layer1_weights = np.random.rand(width) * 2. - 1. layer1_biases = np.random.rand(width) * 15. - 7.5 layer2_weights = np.random.rand(width) * 2. - 1. layer2_biases = np.random.rand(1)* 2. - 1. error_all = [] x_all = x_data y_all = [guess(x_i) for x_i in x_all] plt.plot(x_all,y_all, '.') plt.plot(x_data, y_data, '.') plt.show() epochs = 500000 for loop_iter in range(epochs): # data init index = np.random.randint(0, 500) x = x_data[index] y = y_data[index] # forward propagation # layer 1 layer1 = x # layer 2 #TODO add the sigmoid function here z_2 = layer1_weights * layer1 + layer1_biases a_2 =sigmoid(z_2) # layer 3 #TODO remove simgmoid here (not that is really matters, but values at each layer are after sigmoid z_3 = np.dot(a_2, layer2_weights) + layer2_biases # a_3 = sigmoid(z_3) a_3 = z_3 # error error = .5 * (a_3 - y) ** 2 # L2 loss # backpropagation # error_wrt_layer3 * layer3_wrt_weights_layer2 # error_wrt_layer2_weights = (y - layer3) * sigmoid(layer2) delta = (a_3 - y) error_wrt_layer2_weights = delta * a_2 error_wrt_layer2_biases = delta # error_wrt_layer3 * layer3_wrt_out_layer2 * out_layer2_wrt_in_layer2 * in_layer2_wrt_weights_layer1 # error_wrt_layer1_weights = (y - layer3) * layer2_weights * sigmoid_deriv(sigmoid(layer2)) * layer1 error_wrt_layer1_weights = delta * np.dot(sigmoid_deriv(z_2), layer2_weights) * layer1 # error_wrt_layer1_weights = 0 error_wrt_layer1_biases = delta * np.dot(sigmoid_deriv(z_2), layer2_weights) # a = 0 # while a ==0: # a*0 # update the weights layer2_weights -= ETA * error_wrt_layer2_weights layer1_weights -= ETA * error_wrt_layer1_weights layer2_biases -= ETA * error_wrt_layer2_biases layer1_biases -= ETA * error_wrt_layer1_biases error_all.append(error) if loop_iter % 10000 == 0: print(error) # plt.plot(error_all) # plt.show() x_all = x_data y_all = [guess(x_i) for x_i in x_all] plt.plot(x_all,y_all, '.') plt.plot(x_data, y_data, '.') plt.show()
H: Logistic regression with high cardinality categorical variable I have a logistic regression model where I care about predictive power solely over comprehensibility. I'm interested in predicting win rates in a video game. There are 133 characters. Each team picks 5 of them (no repeats). Each of these characters is assigned to one of five positions (again no repeats). Currently I have each of these characters as a dummy variable. In addition I have an interaction variable between each of these variables. The position of a character is not included in the model at present. I know I can trim down the size of the model by excluding low-playrate characters, however my concern is that the required sample size is still far too small for the complexity of the model. Any advice would be appreciated. Sample Size: Aprox. Two million AI: So I believe you're building a model on the binary outcome {lose, win}:= {0, 1}, correct? I'd recommend just using a one-hot-encoding or a sparse matrix to store these inputs, then the model should run just fine. This is very straightforward in R (sparse.model.matrix) or Python (pd.get_dummies(sparse=True)). Here's a quick demo of how to build a sparse matrix in R out of sampled categories and select a subset of them with at least 5 observations. library(MASS) require(glmnet) n <- 1000 x1 <- sample(paste(letters,1), n, replace=T) x2 <- sample(paste(letters,2), n, replace=T) x3 <- paste(x1,x2,sep='-') xdf <- data.frame(x1,x2,x3) xs <- sparse.model.matrix(~.-1, data=xdf) vars <- colnames(xs) colsmry <- colSums(xs) colsubset <- colsmry > 4 xs_ss <- xs[,vars[colsubset]] dim(xs) dim(xs_ss)
H: Counting indexes in pandas I feel like this is a rudimentary question but I'm very new to this and just haven't been able to crack it / find the answer. Ultimately what I'm trying to do here is to count unique values on a certain column and then determine which of those unique values have more than one unique value in a matching column. So for this data, what I am trying to determine is "who" has "more than one receipt" for all purchases, then determine the same information based on each product category. My approach so far: We have a dataset like this: receipt,name,etc,category 1,george,xxx,fish 1,george,xxx,cat 2,george,xxx,fish 3,bill,xxx,fish 3,bill,xxx,dog 4,jill,xxx,cat 5,bill,xxx,cat 5,bill,xxx,cat 5,bill,xxx,dog 6,george,xxx,fish So then I can do this: df.set_index(['name','receipt']) And get the more interesting etc category name receipt george 1 xxx fish 1 xxx cat 2 xxx fish bill 3 xxx fish 3 xxx dog jill 4 xxx cat bill 5 xxx cat 5 xxx cat 5 xxx dog george 6 xxx fish At this point it feels to me like the data is easy to work with, but I haven't figured it out. One thing that is interesting to me is that if I sort the data by name before indexing it, the data displays grouped by name. In both cases the index is the same, so I don't know how to play with the representation of the data after indexing. It is easy to find the data by category using >>> orders.loc[orders['category'] == 'fish'] etc category name receipt george 1 xxx fish 2 xxx fish bill 3 xxx fish george 6 xxx fish But what I can't figure out is how to tell pandas "Find me the list of names that have more than one receipt". Smaller questions: What is the "pandas way" to get the length of the names part of the index? I'm supposing I could just turn the name column into a set and get the length of that. But I'm curious about indexes. Edit / Update Thanks for those answers! Here is clarification on what I am looking for: I'm trying to find "repeat customers": people with more than one receipt. So my set of all customers would be: names: ['george','bill','jill'], ratio: 1.0 My repeat customers: names: ['george','bill'], ratio 0.66 All 'fish' customers: names: ['george','bill'], ratio: 0.666 My repeat 'fish' customers: names: ['george'], ratio: 0.333 I think the examples given look helpful, but feel free to add anything. AI: I think maybe you are looking for: receipts_by_name_x_cat = df.groupby(['name','category']).count() Or, if you just want the total across all categories: receipts_by_name = df.groupby(['name']).count() Then, you can search those who have more than one: receipts_by_name[receipts_by_name['receipt']>1] And, you can find the length of an index by typing: len(df.index.get_level_values(0)) Assuming the name was the first index column (otherwise substitute 1, 2, etc.)
H: What is 'parameter convergence'? I'm trying to teach myself data science, with my particular interest being decision trees. A few steps in, I've come across a term, 'parameter convergence' that I can't find a definition for (because, after all, I'm learning on my own and have no access to teachers or peers): However, even in studies with much lower numbers of predictor variables, the combination of all main and interaction effects of interest – especially in the case of categorical predictor variables – may well lead to cell counts too sparse for parameter convergence. (from Strobl et al., 2009) A web search isn't overly helpful because convergence is such a common term, and I'm not sure which results specifically apply in the context of decision trees. And also, the results don't provide an entry-level definition. So, while a definition or explanation of parameter convergence (in the context of recursive partitioning) would be great, it would also be handy to be directed to a resource (academic or otherwise) that might have a 'glossary' of this and similar terms... AI: A naive definition of Parameter convergence is when the weights or the values of the parameters reach a point asymptotically. What I mean is that, when your model training is not altering the parameter values(maybe less than epsilon-small values) it might be a good fit. For decision trees, I found this paper which explains rate of convergence and more. It might be a good read if you want to get more details.
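As a toy illustration of that informal definition (this snippet is an addition, not from the cited paper): iteration stops once no parameter moves by more than a small epsilon between updates.

import numpy as np

rng = np.random.RandomState(0)
w = rng.rand(5)                      # parameter vector
eps, max_iter = 1e-6, 10_000

for step in range(max_iter):
    w_new = 0.9 * w + 0.1            # stand-in for one update step of a real fitting procedure
    if np.max(np.abs(w_new - w)) < eps:
        print("parameters converged after", step, "iterations")
        break
    w = w_new

When the data are too sparse (as in the quoted passage), the estimates keep shifting or diverge, and this stopping criterion is never met.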
H: Median function in R I tried mtcars <- data.table( mtcars, keep.rownames=T) mtcars[median(qsec),] mtcars[order(qsec),] This gives me 15th value though it should have given me the 16 value as the dataset has 32 values in it. Please help me if I am doing something wrong... I know it's quite the basics AI: I don't know what your qsec is, so I'll assume you've attached mtcars and its the column from there. PLEASE edit your question and show ALL your working so anyone else can get your results. Let's proceed: > attach(mtcars) # WARNING: Attaching data frames is a BAD IDEA > median(qsec) [1] 17.71 > mtcars[median(qsec),] mpg cyl disp hp drat wt qsec vs am gear carb Chrysler Imperial 14.7 8 440 230 3.23 5.345 17.42 0 0 3 4 Because the median of qsec is 17.7, mtcars[median(qsec),] will get row 17.7, which is rounded down to 17: > mtcars[17,] mpg cyl disp hp drat wt qsec vs am gear carb Chrysler Imperial 14.7 8 440 230 3.23 5.345 17.42 0 0 3 4 [I don't know why you say 16, maybe you've filtered mtcars or something, I can't reproduce your work because you've not shown your working.] So what were you trying to do? Get the row where qsec is the median value of its values? Well you can do that, but its pretty pointless because there's no guarantee that the median of a set of values will be one of the values. So if you try this you get nothing: > mtcars[qsec==median(qsec),] [1] mpg cyl disp hp drat wt qsec vs am gear carb <0 rows> (or 0-length row.names) i.e. there are no rows where qsec equals its median value.
H: How to do batch inner product in Tensorflow? I have two tensor a:[batch_size, dim] b:[batch_size, dim]. I want to do inner product for every pair in the batch, generating c:[batch_size, 1], where c[i,0]=a[i,:].T*b[i,:]. How? AI: There is no native .dot_product method. However, a dot product between two vectors is just element-wise multiply summed, so the following example works: import tensorflow as tf # Arbitrarity, we'll use placeholders and allow batch size to vary, # but fix vector dimensions. # You can change this as you see fit a = tf.placeholder(tf.float32, shape=(None, 3)) b = tf.placeholder(tf.float32, shape=(None, 3)) c = tf.reduce_sum( tf.multiply( a, b ), 1, keep_dims=True ) with tf.Session() as session: print( c.eval( feed_dict={ a: [[1,2,3],[4,5,6]], b: [[2,3,4],[5,6,7]] } ) ) The output is: [[ 20.] [ 92.]]
H: How to check if data follows a Gaussian distribution in R or Excel? I know about the fitdist() function from the fitdistrplus package in R; however, I am not able to use it to fit a Gaussian distribution. I can fit normal, logistic, Weibull, etc. How can I use it for Gaussian? Are there any other ways to do this? AI: You can try the following. The minimum syntax you can use is: fit.norm <- fitdist(x, "norm") to fit the normal density function to the data x. Use the parameters "gamma", "weibull", "lnorm" for fitting gamma, Weibull and lognormal distributions respectively. After doing that, you can use the plot() function on your object fit.norm to visualize the fitted distribution, the Q-Q plot, the P-P plot, and the empirical and theoretical CDFs. The normal distribution and the Gaussian distribution are one and the same. (The original answer showed the plot() output for an object fitted to 1000 standard normal variates.)
H: What is the difference between dcast and recast in R? I am working with a dataframe in R that is formatted like this sample: Countries <- c('USA','USA','Australia','Australia') Type <- c('a','b','a','b') X2014 <- c(10, -20, 30, -40) X2015 <- c(20, -40, 50, -10) X2016 <- c(15, -10, 10, -100) X2017 <- c(5, -5, 5, -10) df_sample <- data.frame(Countries, Type, X2014, X2015, X2016, X2017) The dataframe looks like this: Countries Type X2014 X2015 X2016 X2017 1 USA a 10 20 15 5 2 USA b -20 -40 -10 -5 3 Australia a 30 50 10 5 4 Australia b -40 -10 -100 -10 I want to be able to create columns of year values for each type by each country, yielding something that looks like this: Countries Year a b 1 USA X2014 10 -20 2 USA X2015 20 -40 3 USA X2016 15 -10 4 USA X2017 5 -5 ... With recast I get this: recast(df_sample, Countries ~ Type) Countries a b 1 Australia 4 4 2 USA 4 4 With dcast I get this: dcast(df_sample, Countries ~ Type) Countries a b 1 Australia 5 -10 2 USA 5 -5 The dataset I'm working with has 44 years of data, so I'd like to be able to indicate all columns of yearly data without having to enter each column id manually into a cast formula. What is the difference between dcast and recast (i.e. what situations might they be best suited to), and is it possible to shape my data with them? AI: See ?reshape2::recast: The function conveniently wraps melting and (d)casting a data frame into a single step. library(reshape2) recast(df_sample, Countries+variable~Type, id.var=1:2) # Countries variable a b # 1 Australia X2014 30 -40 # 2 Australia X2015 50 -10 # 3 Australia X2016 10 -100 # 4 Australia X2017 5 -10 # 5 USA X2014 10 -20 # 6 USA X2015 20 -40 # 7 USA X2016 15 -10 # 8 USA X2017 5 -5 So, it's just a shortcut for these two steps: (tmp <- melt(df_sample, id.vars=1:2)) # Countries Type variable value # 1 USA a X2014 10 # 2 USA b X2014 -20 # 3 Australia a X2014 30 # 4 Australia b X2014 -40 # 5 USA a X2015 20 # ... dcast(tmp, Countries+variable~Type) # Countries variable a b # 1 Australia X2014 30 -40 # 2 Australia X2015 50 -10 # 3 Australia X2016 10 -100 # 4 Australia X2017 5 -10 # 5 USA X2014 10 -20 # 6 USA X2015 20 -40 # 7 USA X2016 15 -10 # 8 USA X2017 5 -5
H: Convolutional Neural Network: learning capacity and image coverage I was looking through a CNN tutorial and towards the end they refer to learning capacity and image coverage during network learning diagnostics What do those 2 terms mean in the context of a convolutional neural network? AI: You can look at nolearn/lasagne/util.py to see how learning capacity and image coverage are computed for each layer: real_filters = get_real_filter(layers, img_size) receptive_fields = get_receptive_field(layers, img_size) capacity = 100. * real_filters / receptive_fields capacity[np.logical_not(np.isfinite(capacity))] = 1 img_coverage = 100. * receptive_fields / img_size Xudong Cao. "A practical theory for designing very deep convolutional neural networks". 2015. https://www.kaggle.com/c/datasciencebowl/forums/t/13166/happy-lantern-festival-report-and-code explains how to compute the capacity of a layer: To quantitatively measure the learning capacity of a convolutional layer we define the c-value of a convolutional layer as follows. c-value = Real Filter Size / Receptive Field Size where the real filter size of a k-by-k convolutional layer is k if there is no down sampling, it doubles after each down sampling i.e. 2k after one down sampling and 4k after two down sampling etc. The receptive field size is defined as the maximum size of a neuron can see on the raw image. It grows proportionally as the convolutional neural network goes deep. Figure 3 shows how the receptive fields grows in an exemplar convolutional neural network. This is why the coverage increases as you go deeper in the network, e.g.: # Neural Network with 122154 learnable parameters ## Layer information name size total cap.Y cap.X cov.Y cov.X ---------- -------- ------- ------- ------- ------- ------- input0 1x28x28 784 100.00 100.00 100.00 100.00 conv2d1 32x26x26 21632 100.00 100.00 10.71 10.71 maxpool2d2 32x13x13 5408 100.00 100.00 10.71 10.71 conv2d3 64x11x11 7744 85.71 85.71 25.00 25.00 conv2d4 64x9x9 5184 54.55 54.55 39.29 39.29 maxpool2d5 64x4x4 1024 54.55 54.55 39.29 39.29 conv2d6 96x2x2 384 63.16 63.16 67.86 67.86 maxpool2d7 96x1x1 96 63.16 63.16 67.86 67.86 dense8 64 64 100.00 100.00 100.00 100.00 dropout9 64 64 100.00 100.00 100.00 100.00 dense10 64 64 100.00 100.00 100.00 100.00 dense11 10 10 100.00 100.00 100.00 100.00 Explanation X, Y: image dimensions cap.: learning capacity cov.: coverage of image magenta: capacity too low (<1/6) cyan: image coverage too high (>100%) red: capacity too low and coverage too high
H: Did anybody ever use mean pooling and publish it? I found a couple of sources which mention mean pooling for convolutional neural networks (CNNs) - including all lectures I had about CNNs so far - but I could not find any paper with at least 10 citations which uses mean pooling. Do you know of a paper which uses mean pooling? AI: Sum-pooling, which is of course just a scaled version of mean pooling, has been proposed for the task of content-based image retrieval (CBIR). To my best knowledge, the first paper for this was the following, and has (according to Google Scholar) gathered 35 citations over its first year of being published: 1A. Babenko and V. Lempitsky: "Aggregating Deep Convolutional Features for Image Retrieval". In IEEE International Conference on Computer Vision (ICCV), Dec. 2015, pp. 1269-1277. DOI: 10.1109/ICCV.2015.150. arXiv: 1510.07493. A very short explanation of image retrieval: For large-scale image retrieval, a new query image has to be compared to a database of thousands of images to find the most similar image. While previous work has achieved this by matching SIFT descriptors, recently CNN-based descriptors have become the state-of-the-art2. The basic idea when using CNNs for image retrieval is, to use a network which is pre-trained (usually on a classification task, such as ILSVRC) and use the output of a layer as the descriptor vector3,4. Different papers propose using either one of the FC layers or one of the Conv layers as the descriptor. As the outputs of these layers are very large - too large to use as a "compact" descriptor of an image - the usual approach is to reduce the dimensionality to something in the range of 32 to 1024. This is usually done by L2-normalization → PCA Whitening → L2-normalization 1,3,4. Now finally, I get to the sum-pooling part: Babenko and Lempitsky1 show that sum-pooling leads to a better retrieval performance than max-pooling, when the dimensionality of the resulting descriptor is reduced using PCA and Whitening. Their SPoC (Sum-Pooling of Convolutions) method outperforms other pooling/embedding methods, such as max-pooling, Fisher vectors and triangular embedding. Final words: more recent work (ICLR 2016) proposes the so-called R-MAC descriptor5, which is the regional maximum activations of convolutions, i.e. they use a set of regions at different scales, calculate the feature vector for each region using max-pooling, and finally sum up all these regional feature vectors. This again improves over the sum-pooling proposed in 1. Still it has a kind of sum-pooling over the different regions in it. Footnotes & References 2If required, I can add a few citations to show this, just comment below. 3A.S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN Features Off-the-Shelf: An Astounding Baseline for Recognition", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Jun. 2014, pp. 512–519. DOI: 10.1109/CVPRW.2014.131. arXiv: 1403.6382. 4A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, "Neural Codes for Image Retrieval", in European Conference on Computer Vision (ECCV), Sep. 2014, pp. 584–599. DOI: 10.1007/978-3-319-10590-1_38. arXiv: 1404.1777. 5G. Tolias, R. Sicre, and H. Jégou, "Particular object retrieval with integral max-pooling of CNN activations", in International Conference on Learning Representations (ICLR), Oct. 2016, pp. 1–11. arXiv: 1511.05879.
H: Comparing SMOTE to down sampling the majority class in imbalanced binary classification I have a binary classification task with imbalance between the two classes. I want to compare SMOTE vs downsizing the majority class to the size of the minority class. I trained the classifier with 3-fold validation using the two methodologies:

SMOTE to increase the size of the minority class to the majority class size
Downsizing the majority class to the minority class size with random subsampling

To test which methodology works better I trained my classifier (Random Forests) with 3-fold Cross-Validation. The confusion matrices I get from 3-fold CV seem to promote the use of SMOTE (better classification performance for the two classes). I assume that this CV can be used to choose the best methodology. However, when I test the classifier on a real testing set (which was kept out and not used for training or validation) I don't see a real superiority of the SMOTE algorithm w.r.t. random subsampling of the majority class. The minority class is better classified, but at the expense of the majority class performance. Is this a limitation of the SMOTE algorithm, or does my model selection methodology (using 3-fold CV) have some flaws? AI: It's difficult to say without the actual data. However, I can tell you that SMOTE creates artificial instances; hence, when used too heavily it can "deviate" from the actual minority class data. It's difficult to determine how much is too much: many factors play a role, firstly the data itself, then the neighbourhood settings used by SMOTE. P.S. You could try boosting over many random undersamples: instead of Random Forest, try AdaBoost first, for instance, where each classifier is trained on a different subsample.
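One practical check for the model-selection part of the question, sketched below with made-up helper names (not from the original answer): whichever resampling scheme you compare, apply it only to the training folds, and score on folds or a test set kept at the natural class ratio. The sketch assumes binary labels coded as 0/1.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def random_undersample(X, y, seed=0):
    # shrink every class to the size of the smallest one
    rng = np.random.RandomState(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                          for c in classes])
    return X[idx], y[idx]

def cv_score(X, y, resampler, n_splits=3):
    scores = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X, y):
        # resample the training fold only; the evaluation fold keeps its natural imbalance
        X_tr, y_tr = resampler(X[train_idx], y[train_idx])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        scores.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
    return np.mean(scores)

# plug a SMOTE-based resampler (e.g. from imbalanced-learn) into cv_score to compare the two schemes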
H: Remove Outliers - Market Basket Analysis I'm having some thoughts on whether I should remove the outliers. I'm trying to find the tags that are commonly used together. Imagine that I have the following dataset. The first column is the Tag_ID and the second column is the Number of People that used that Tag. 1 3472034 2 1277918 3 1249839 4 1010770 5 915099 6 898292 7 636792 8 604352 9 555673 10 298495 11 291511 12 211074 13 200868 ... (This was copied from my actual dataset). My question is: Should I remove a Tag instance when it is much more frequent than the other? Is that regarded as a good practice? Many thanks! AI: Since I cannot comment to ask for clarification, I am asking it here. What is your reason to think about removing the most frequent value in your dataset? If the second column actually represent frequency of usage, you probably should not remove it and I think it would be illogical to throw away that piece of information. Having said that, you may consider removing that tag if it is a "less meaningful" word (e.g. a, an etc). Can you give a little bit more context on what you are trying to achieve? In general, one way to find outlier is to look at points that lie beyond 1.5 times of inter-quartile range of the distribution i.e. for the frequency count in your data. Just a quick thought, did you try clustering for finding similar tags? What are the ways you are considering to find similar tags?
H: Practical use of oop in R R supports a wide range of OOP designs like s3,s4,RC and others via packages,and it's a bit overwhelming to decide on which to use and a more basic question that I have is when and where do you use OOP while doing machine learning or data analytics ,can someone answer this from a data scientist/ML practitioner point of view . I'm aware of how OOP works in R at a superficial level ,but should I invest time in learning OOP in R ? how practical is it from a data science point of view ? AI: I have used S3, S4 and R6 (you forgot that one in your overview ;)). I would agree with Hadley Wickham that S3 is sufficient for most tasks. However, this is only required if you start building advanced functions that operate on objects. Say for example you build a model with one function and you want to create a summary and print function for the object returned by your model building function. For general Data Science purposes I would say that it helps to know the systems but none of them are very good examples of real OOP. For that I would recommend to work in Python, Ruby or Java. All have been build with OOP in mind. In regards to knowing OOP, I think it is vital for someone involved in ML. Not when you are prototyping in R or Python but definitely when you start working on production code. I think this Quora thread gives a good run down on when OOP becomes important in ML. If your focus is more on statistics in R it may be of less importance.
H: Train/Test/Validation Set Splitting in Sklearn How could I randomly split a data matrix and the corresponding label vector into a X_train, X_test, X_val, y_train, y_test, y_val with scikit-learn? As far as I know, sklearn.model_selection.train_test_split is only capable of splitting into two not into three... AI: You could just use sklearn.model_selection.train_test_split twice. First to split to train, test and then split train again into validation and train. Something like this: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2
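A small convenience wrapper around the same idea (my own sketch, not part of the original answer; the stratify option keeps class proportions consistent across all three sets):

from sklearn.model_selection import train_test_split

def train_val_test_split(X, y, val_size=0.2, test_size=0.2, random_state=1, stratify=True):
    # first carve off the test set, then split the remainder into train/validation
    strat = y if stratify else None
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=strat)
    strat = y_rest if stratify else None
    rel_val = val_size / (1.0 - test_size)      # e.g. 0.2 / 0.8 = 0.25
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=rel_val, random_state=random_state, stratify=strat)
    return X_train, X_val, X_test, y_train, y_val, y_test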
H: Autoregressive (AR) models constants - Time Series Analysis I'm currently struggling with different Model like AR or MA. If AR(1) is expressed as: $y_t = \beta + \beta_t \times y_{t-1} + \epsilon_t $ How do I know what the $\beta$ 's would be? What are the dependencies? I think a simple example would help a lot. AI: You have a slight typo in the notation of a AR(1) model. The correct signature is, $y_t = \beta_0 + \beta_1 \times y_{t-1}+\epsilon_t$ or $y(t) = \beta_0 + \beta_1 \times y(t-1) + \epsilon(t)$, where $y(t)$ and $\epsilon(t)$ are random variables. If $y(t)$ is standard Gaussian you can estimate $\beta_{0,1}$ with a maximum likelihood estimator (MLE). If not you will need a more complex method. You can read more about it in this article on estimating at ARMA Process.
H: What's the best way to tune the regularization parameter in neural nets I'm tuning the regularization parameter of a neural net (L2 regularization) using a grid, starting with values 0.0005, 0.005, 0.05, 0.5, 5. Then if 0.005 brings the best validation result, I continue using a grid like this: 0.0015, 0.0025, 0.0035, 0.0045, 0.0055, 0.0065, 0.0075, 0.0085, 0.0095. And so it continues. I would like to know if there's a more intelligent way to tune this parameter. AI: Wikipedia lists some well-known approaches to hyper-parameter searches. The brute-force scan/search, or a grid search across multiple parameters, is still a very common and workable approach. As is random search, just trying some variations of parameters automatically and picking the best result from cross validation. A guided search by intuitive feel of what might best work is also still a quite common approach, despite being characterised as "optimisation by graduate descent" (a pun based on the fact that a senior researcher will hand off the tuning work to their smart junior researchers who make educated guesses on hyperparameter values) - for regularisation parameters you can look at training curves and the difference between training loss and CV loss to help decide where to look. You generally want to increase regularisation if there is a large difference and decrease it if the values become close but not very good. What counts as "large difference" and "not very good" is subjective though, and typically you only get a feel for what works after a few random attempts and immersing yourself in the problem so that you recognise good vs bad behaviour of a model. There are some claimed "smart" schemes that apply Bayesian optimisation or similar, and even some automated services or paid tools that attempt to make these available for tuning models automatically. However, I don't see these used much in practice. Unless you are trying to win a Kaggle competition, you quickly hit diminishing returns attempting to fine-tune your parameter. I've rarely found tuning the second significant digit of a meta-param like L2 loss to be worth much attention. Maybe with the exception of testing between e.g. 1.0 and 1.5 - often useful to think in terms of geometric progression, not linear, even when fine-tuning. You also have to be careful that you are not simply tuning to your cross-validation set, with no meaningful or justifiable improvement for test cases or in production. You can mitigate this in part by using k-fold cross validation (which can reduce error on metrics estimates effectively by using more data to estimate them), although of course that increases time/CPU costs.
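As a small illustration of the random-search idea (my own sketch, not part of the original answer): sample the regularization strength log-uniformly rather than on a linear grid, since the useful values typically span several orders of magnitude.

import numpy as np

def sample_l2(n_trials, low=1e-5, high=10.0, seed=0):
    # log-uniform sampling: each order of magnitude is equally likely
    rng = np.random.RandomState(seed)
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size=n_trials)

# hypothetical usage: train_and_validate is whatever returns a validation score
# for a given L2 strength in your framework
# results = {l2: train_and_validate(l2) for l2 in sample_l2(20)}
# best_l2 = max(results, key=results.get)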
H: Loss function for classifying when more than one output can be 1 at a time My desired output is not a one-hot encoding, but a 10-D vector like: [1, 0, 1, 0, 1, 0, 0, 1, 1, 1], and the input is like the normal MNIST data set. I want to use TensorFlow to build a model to learn this, so which loss function should I choose? AI: If your classes are not mutually exclusive, then you just have multiple sigmoid outputs (instead of the softmax function seen in example MNIST classifiers). Each output will be a separate probability that the network assigns to membership in that class. For a matching loss function in TensorFlow, you could use the built-in tf.nn.sigmoid_cross_entropy_with_logits - note that it works on the logits - the inputs to the sigmoid function - for efficiency. The link explains the maths involved. You will still want a sigmoid function on the output layer too, for when you read off the predictions, but you apply the loss function above to the input of the sigmoid function. Note this is not a requirement of your problem; you could easily write a loss function that works from the sigmoid outputs, it is just that the TensorFlow built-in has been written differently to get a small speed boost.
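A minimal single-layer sketch of that setup, assuming the TF 1.x API used elsewhere in the question (a real model would of course use a deeper network; the shapes assume flattened MNIST-style inputs and 10 non-exclusive labels):

import tensorflow as tf

# x: flattened images, y: 10-dim multi-hot targets (several 1s allowed per example)
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b          # pre-sigmoid scores

# the loss is applied to the logits; the sigmoid is only used when reading off predictions
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

probs = tf.sigmoid(logits)            # independent per-class probabilities
preds = tf.cast(probs > 0.5, tf.float32)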
H: Efficiently Sending Two Series to a Function For Strings with an application to String Matching (Dice Coefficient) I am using a Dice Coefficient based function to calculate the similarity of two strings: def dice_coefficient(a,b): try: if not len(a) or not len(b): return 0.0 except: return 0.0 if a == b: return 1.0 if len(a) == 1 or len(b) == 1: return 0.0 a_bigram_list = [a[i:i+2] for i in range(len(a)-1)] b_bigram_list = [b[i:i+2] for i in range(len(b)-1)] a_bigram_list.sort() b_bigram_list.sort() lena = len(a_bigram_list) lenb = len(b_bigram_list) matches = i = j = 0 while (i < lena and j < lenb): if a_bigram_list[i] == b_bigram_list[j]: matches += 2 i += 1 j += 1 elif a_bigram_list[i] < b_bigram_list[j]: i += 1 else: j += 1 score = float(matches)/float(lena + lenb) return score However, I am trying to evaluate the best match out of a large possible list, and i want to use list comprehension/map/vectorize the function calls for a whole series of strings to be matched to make this computationally efficient. However, I am having difficult getting the run time into a reasonable ballpark for even medium sized series (10K-100K elements). I want to send two input series into/through the function, and then get the best possible match from all candidates on dflist1 against a second series: dflist2 . Ideally, but not necessarily, the return would be another series in the dflist1 dataframe return the best possible score also. I have an implementation of this working (below), but it's incredibly slow. Is it also possible to parrelelize this? I think this would be a hugely valueable problem to solve as it would perform the same function that reconcile csv currently does. dflist1 = pd.read_csv('\\list1.csv', header = 0,encoding = "ISO-8859-1") dflist2 = pd.read_csv('\\list2.csv', header = 0,error_bad_lines=False) dflist1['Best Match'] = 'NA' dflist1['Best Score'] = '0' d = [] start = time.time() for index, row in dflist1.iterrows(): d=[dice_coefficient(dflist1['MasterList'][index],dflist2['TargetList'][indexx]) for indexx,rows in dflist2.itertuples()] dflist1['Best Match'][index]=dflist2['TargetList'][d.index(max(d))] dflist1['Best Score'][index]=max(d) print('Finished '+str(index)+' out of '+str(len(dflist1.index))+' matches after '+str(round(time.time() - start))+' seconds.') Any help would be appreciated very much! AI: Your function does a lot of pythonic data crunching. In these cases numba can be useful. In the below code I split your function into two: sorting and scoring. I then converted your bigrams from strings to integers (to comply with numba datatypes) and decorated the scoring subfunction with numba's @autojit. 
from numba import autojit import numpy as np def dice_coefficient(a,b): try: if not len(a) or not len(b): return 0.0 except: return 0.0 if a == b: return 1.0 if len(a) == 1 or len(b) == 1: return 0.0 a_bigram_list = [a[i:i+2] for i in range(len(a)-1)] b_bigram_list = [b[i:i+2] for i in range(len(b)-1)] a_bigram_list.sort() b_bigram_list.sort() lena = len(a_bigram_list) lenb = len(b_bigram_list) matches = i = j = 0 while (i < lena and j < lenb): if a_bigram_list[i] == b_bigram_list[j]: matches += 2 i += 1 j += 1 elif a_bigram_list[i] < b_bigram_list[j]: i += 1 else: j += 1 score = float(matches)/float(lena + lenb) return score def dice_coefficient_new(a,b): try: if not len(a) or not len(b): return 0.0 except: return 0.0 if a == b: return 1.0 if len(a) == 1 or len(b) == 1: return 0.0 a_bigram_list, b_bigram_list = dice_coefficient_sorting(a,b) score = dice_coefficient_scoring(a_bigram_list,b_bigram_list) return score def dice_coefficient_sorting(a,b): a = np.array([ord(i) for i in a]) b = np.array([ord(i) for i in b]) a_bigram_list = 256*a[:-1]+a[1:] b_bigram_list = 256*b[:-1]+b[1:] a_bigram_list.sort() b_bigram_list.sort() return a_bigram_list,b_bigram_list @autojit(nopython=True) def dice_coefficient_scoring(a_bigram_list,b_bigram_list): lena = len(a_bigram_list) lenb = len(b_bigram_list) matches = i = j = 0 while (i < lena and j < lenb): if a_bigram_list[i] == b_bigram_list[j]: matches += 2 i += 1 j += 1 elif a_bigram_list[i] < b_bigram_list[j]: i += 1 else: j += 1 score = float(matches)/float(lena + lenb) return score Let's time it: N = np.power(10,5) a = ''.join([str(unichr(i)) for i in np.random.randint(97,123,N)]) b = ''.join([str(unichr(i)) for i in np.random.randint(97,123,N)]) %timeit dice_coefficient(a,b) %timeit dice_coefficient_new(a,b) Output: 1 loop, best of 3: 204 ms per loop 10 loops, best of 3: 52.9 ms per loop So for 100K elements you get a speedup of 4x! For further optimisation you could parallelise your global for loop (for example also using numba or multiprocessing). (Note: I edited my rushed first answer which didn't work)
H: How is dimensionality reduction achieved in Deep Belief Networks with Restricted Boltzmann Machines? In neural networks and old classification methods, we usually construct an objective function to achieve dimensionality reduction. But Deep Belief Networks (DBN) with Restricted Boltzmann Machines (RBM) learn the data structure through unsupervised learning. How does it achieve dimensionality reduction without knowing the ground truth and constructing an objective function? AI: As you know, a deep belief network (DBN) is a stack of restricted Boltzmann machines (RBM), so let's look at the RBM: a restricted Boltzmann machines is a generative model, which means it is able to generate samples from the learned probability distribution at the visible units (the input). While training the RBM, you teach it how your input samples are distributed, and the RBM learns how it could generate such samples. It can do so by adjusting the visible and hidden biases, and the weights in between. The choice of the number of hidden units is completely up to you: if you choose to give it less hidden than visible units, the RBM will try to recreate the probability distribution at the input with only the number of hidden units it has. An that is already the objective: $p(\mathbf{v})$, the probability distribution at the visible units, should be as close as possible to the probability distribution of your data $p(\text{data})$. To do that, we assign an energy function (both equations taken from A Practical Guide to Training RBMs by G. Hinton) $$E(\mathbf{v},\mathbf{h}) = -\sum_{i \in \text{visible}} a_i v_i - \sum_{j \in \text{hidden}} b_j h_j - \sum_{i,j} v_i h_j w_{ij}$$ to each configuration of visible units $\mathbf{v}$ and hidden units $\mathbf{h}$. Here, $a_i$ and $b_j$ are the biases, and $w_{ij}$ are the weights. Given this energy function, the probability of a visible vector $\mathbf{v}$ is $$p(\mathbf{v}) = \frac 1Z \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$$ With that, we know that to increase the probability of the RBM generating a training sample $\mathbf{v}^{(k)}$ (denotes the $k$-th training sample), we need to change $a_i$, $b_j$ and $w_{ij}$ so that the energy $E$ for our given $\mathbf{v}^{(k)}$ and the corresponding $\mathbf{h}$ gets lower.
H: How many vectors does paragraph vector generate for each paragraph? For example,if I have a corpus with two paragraphs, does paragraph vector generate two vectors?Additionally, on Distributed Representations of Sentences and Documents (Q. Le, T. Mikolov) paper I do not understand why paragraph vectors are unique among paragraphs but the word vectors are shared. Why? AI: It would make no sense of the word embeddings to change within a document. It would be as if the spellings of the words changed; how would that help? When you use a document embedding, you find its numerical representation in a way that loosely captures its meaning. If you want to capture the meaning of each paragraph separately, find their embeddings separately. If you want to capture meaning of the entire document, feed the entire document. For example, if your document covers a range of topics and you want to allow users to pinpoint where a particular topic is covered, you can find embeddings of each section (paragraph, page, etc.), then find the section that's nearest to your query's embedding. One use case for dynamic word embeddings is to identify temporal dynamics; how the meaning of a word changes over time, as in this paper: The Visualization of Change in Word Meaning over Time using Temporal Word Embeddings.
H: Why can't TensorFlow fit a simple linear model if I minimize the mean absolute error instead of the mean squared error? In the TensorFlow introduction example I have just changed

loss = tf.reduce_mean(tf.square(y - y_data))

to

loss = tf.reduce_mean(tf.abs(y - y_data))

and the model is unable to learn: the loss just gets bigger with time. Why? AI: I tried this and got the same result. It is because the gradient of .abs is harder for a simple optimiser to follow to the minimum. With the squared difference, the gradient approaches zero slowly near the optimum; the gradient of the absolute difference has a fixed magnitude which abruptly reverses sign, which tends to make the optimiser oscillate around the minimum point. Basic gradient descent is very sensitive to the magnitude of the gradient, and to the learning rate, which is essentially just a multiplier of the gradient for step sizes. The simplest fix is to reduce the learning rate, e.g. change the line

optimizer = tf.train.GradientDescentOptimizer(0.5)

to

optimizer = tf.train.GradientDescentOptimizer(0.05)

Also, have a play with different optimisers. Some will be able to cope with .abs-based loss better.
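A tiny numpy illustration of that point (my own sketch, not from the original answer): with a fixed learning rate, the constant-magnitude gradient of |e| ends up stepping back and forth across the minimum, while the gradient of e² shrinks as it approaches it.

import numpy as np

target = 3.14159   # the value we want the parameter to reach
lr = 0.4

w_sq, w_abs = 0.0, 0.0
for step in range(15):
    w_sq -= lr * 2 * (w_sq - target)        # gradient of (w - target)**2: step shrinks near the minimum
    w_abs -= lr * np.sign(w_abs - target)   # gradient of |w - target|: constant-size step, overshoots
    print(step, round(w_sq, 4), round(w_abs, 4))
# w_sq settles at ~3.14159; w_abs ends up bouncing between ~2.8 and ~3.2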
H: Recommendation/personalization algorithm conflict I'm trying to build a recommendation engine for an e-commerce site. By using the common recommendation approach, I'm assuming that each product I recommend has the same value, so all I need to do is optimize the conversion rate probably using a common recommendation algorithm, but when the product's price varies a lot, what I really need to optimize is the following formula for each user: Value of recomendation = (probability to convert) * (product price) The bigger problem than choosing the right algorithm and approach is choosing the right metric, so I could compare the different algorithms. For example, if you would like to only optimize the conversion rate, I would use the precision and recall or false/positive metrics. What metrics and approaches/algorithms are recommended in this case? Thanks AI: This is actually slightly similar to the problem that insurance companies face except that it seems like your loss costs are known. Insurers have some probability of loss and then, given loss, the magnitude of the loss follows some distribution. The cost to the insurer is dependent on both and they tend to be inversely related (lower losses are more likely than higher ones.) In your case, the value is known so you don't need to predict it the way insurers need to predict losses so you could simply: Model the probability (phat) Multiply the predicted probability by the known value (score = phat * value) Recommend based on the resulting score Insurance companies typically do the same thing in calculating premiums except that they also need to model value. They sometimes model the two components jointly but typically they have separate models for frequency and severity and then just multiply them together to determine how much premium they should charge somebody.
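As a concrete sketch of that scoring step (illustrative numbers and names only, not from the original answer): take the conversion probabilities from whatever classifier you already have, multiply by price, and rank by the result.

import numpy as np

def rank_items(conversion_probs, prices, top_k=5):
    # expected value of recommending each item = P(convert) * price
    expected_value = np.asarray(conversion_probs) * np.asarray(prices)
    order = np.argsort(expected_value)[::-1]      # highest expected value first
    return order[:top_k], expected_value[order[:top_k]]

# hypothetical usage with probabilities from any classifier's predict_proba
items, values = rank_items([0.10, 0.02, 0.30], [20.0, 500.0, 15.0])
print(items, values)   # item 1 (0.02 * 500 = 10.0) ranks first here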
H: Is there any other probability distribution model than Gaussian for multivariate data? Whenever we talk about the probability distribution of data having more than one feature, we seem to have only one option, i.e. the multivariate normal distribution. Does any other probability distribution model exist for multivariate data? If yes, how can we find its parameters using MLE in MATLAB? AI: There are quite a number of multivariate distributions, in theory. At the outset, we categorize them as multivariate discrete distributions and multivariate continuous distributions. To start with, you may explore the linked references on multivariate discrete distributions and on multivariate continuous distributions. There are entire volumes printed on the subject, for example the books by N. Balakrishnan, Norman L. Johnson, and Samuel Kotz, published by John Wiley and Sons, Inc.
H: Validation during fitting in Keras How does Keras' fit function behave when a validation set/validation split is NOT defined (I understand that the default values are None/0.0 respectively, so being not defined is practically the default)? One always needs a reference set to evaluate model performance... AI: If you don't define a validation set / validation split for your model, then it has no way to check its performance, because you have not provided anything to the model on which it can validate. In this case, the model will run through the training examples, learn a way to minimize the cost function as per your code, and that's it. You get a model which has learned a hypothesis from your data, but how good the model is can only be checked by making predictions on a test set.
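For reference, a minimal sketch of the two ways to hand Keras a reference set (model, X, y etc. are placeholders for whatever you already have; argument names follow the Keras 2 API, older versions used nb_epoch):

# option 1: let Keras carve off the last 20% of the arrays for validation
history = model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)

# option 2: pass an explicit held-out set
history = model.fit(X_train, y_train, epochs=10, batch_size=32,
                    validation_data=(X_val, y_val))

# history.history then contains per-epoch 'loss'/'val_loss' (plus any metrics)
print(history.history.keys())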
H: Methods / Algorithms for rank scales based on cumulative scoring Say you have an organization that requires employees to participate in a Q&A site similar to StackOverflow - questions and answers are voted upon, selected answers get extra points, certain behaviors boost your score etc. What we need to do is assign a rating from 1-100 to these users with even distribution. The behaviors that add points: Ask a question [fixed] Answer a question [fixed] Receive an upvote on a question [determined by relative ranking] Receive an upvote on an answer [determined by relative ranking] Have your answer selected [determined by relative ranking] Responding to a comment, etc [fixed] Likewise, there are behaviors that subtract points. If a user with a high ranking upvotes a question asked by a lower-ranking user, more points should be awarded than the inverse situation. Likewise if a lower-ranking user downvotes a higher-ranking user's question, the impact should be minimal compared to the inverse. There should be a limit to this impact though so that a high-ranking user doesn't unintentionally destroy any momentum of a low-ranking user by issuing a powerful downvote. We have a few challenges here: How do we determine how many points to assign to each type of behavior, with actor/recipient relative rank taken into account? I'm thinking we just assign a flat number to each behavior, that number decided relative to the importance of the other behaviors, and then have a variable score that can alter the score if there is a wide variance between the users. The mechanics of this - does the score double at most? - are unclear. How to we assign this rank? This one is a little easier - I'm thinking we just order the users according to score and then split the dataset into 100 sections, assigning each "chunk" a number 1-100. Should we be worried about these numbers getting "very big"? The scenario described above has been trivialized; actions taken by these users may happen hundreds of times per day so the scores can become very high, very quickly. Is there a way we can keep this under control while avoiding a large number of duplicate scores? How do we define the "fixed" scores as the total scores become very big? Over time we may have users with hundreds of thousands of points - but the fixed-score behaviors should still reward them. They should reward lower-ranking users more than higher-ranking users. I don't know if there are some standard practices, algorithms, or terminologies that I should be aware of when facing a problem like this - any input would be appreciated. AI: To solve challenges #3 and #4, let's limit the overall available rank volume. For example, sum of this rank for all the users will be 1 (100%). From challenge #2 I understood, that you accept 2 different ranks: (1) place from 1 to 100, and (2) simple sum of all earned points (fixed and relative). Did I got it right? I so, there is no need to worry about unlimited growth, or fixed scores inflation. Let's just use percentages, not 1-100 ranks. These percentage ranks could be calculated based on interaction behaviors (vote/selecting answer/etc), using PageRank-like algorithm. Such algorithm will consider all previous reactions (and ranks of acted users), obtained by an exact user. Unfortunately, you cannot use PageRank algorithm "as is", because it supports only "positive" links, but you can look for it's extensions. For example, look at this paper with PageRank extension for both positive and negative links (as users can down vote). 
You can iteratively estimate the percentage rank (TrustRank, TR) using this algorithm. The second task is to calculate the reward/penalty rate in points for each single action. Let's determine (predefine) a maximal reward/penalty rate (X) for each type of action, and use a coefficient to discount it, based on the TrustRanks of the acting users (e.g., author and voter). A slightly modified sigmoid will map this ratio from the [-Inf,+Inf] range to [0,1]. For peer users you will have ~0.5 of the predefined maximal rate. If the "voter" has a TR twice that of the "author", the "author" will receive ~0.75 of the predefined value, and so on. You can tune the steepness with an additional parameter, or try to find another mapping function. Anyway, now simply multiply the maximal penalty/reward by this coefficient, and you'll get the number of points you need to deduct or add. The only issue I see is a user with zero TR - such a user as a voter will "give" nothing, and as the object of voting will receive the maximal amount of points regardless of the voter's rank. To avoid this, you can predefine a minimal TR (like 1e-10), and not let a user's TR fall below this value.
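One possible reading of that mapping, as a sketch (the exact transformation is a design choice the answer leaves open): feed the log-ratio of the two TrustRanks through a sigmoid, so equal ranks give 0.5 and a 2:1 ratio gives roughly 0.73-0.75 depending on the steepness.

import math

def reward_coefficient(tr_actor, tr_target, steepness=1.0, min_tr=1e-10):
    # clamp TrustRanks so a zero-rank user neither breaks log() nor dominates the outcome
    tr_actor = max(tr_actor, min_tr)
    tr_target = max(tr_target, min_tr)
    log_ratio = math.log2(tr_actor / tr_target)
    return 1.0 / (1.0 + math.exp(-steepness * log_ratio))  # sigmoid, always in (0, 1)

def points_for_action(max_points, tr_actor, tr_target):
    return max_points * reward_coefficient(tr_actor, tr_target)

print(reward_coefficient(0.01, 0.01))  # peers        -> 0.5
print(reward_coefficient(0.02, 0.01))  # actor 2x TR  -> ~0.73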
H: Replacing values in multiple columns of a data frame in R for example consider this dataframe : dam <- data.frame(name = letters[1:5], re1 = factor(c("yes","no","yes","no","yes")), re2 = factor(c("yes","no","yes","no","yes")), re3 = factor(c("yes","no","yes","no","yes"))) #>the dataframe looks like this name re1 re2 re3 a yes yes yes b no no no c yes yes yes d no no yes e yes no yes I would like to replace the yes & no's in re1,re3 with 1 & 2. to get something like this - #>the output should look like this name re1 re2 re3 a 1 1 yes b 2 2 no c 1 1 yes d 2 2 no e 1 1 yes I know i could do something like : replace_re <- c("yes","no","yes","no","yes") with_this <- rep(1:2,2,5) dam$re1 %>% mapvalues(replace_re,with_this) dam$re2 %>% mapvalues(replace_re,with_this) and I know I could use a for loop for many columns. but what I want is how do you accomplish this in a functional way ,say with pacakage "purrr" (map,invoke_map functions)and does the job,in an elegant way for n such columns i.e replacing my categorical variables with some other values. also, how would I do that with base R apply functions ? this might be a very trivial question but I'm unable to come up with a more tidy way of doing this. Recoding the factors seems to be a bane for me,any help would be much appreciated . AI: Given: > dam name re1 re2 re3 1 a yes yes yes 2 b no no no 3 c yes yes yes 4 d no no no 5 e yes yes yes do this: dam2 = reshape2::dcast( dplyr::mutate( reshape2::melt(dam,id.var="name"), value=plyr::mapvalues( value, c("yes","no"),c("OK","notOK")) ),name~variable) get that: > dam2 name re1 re2 re3 1 a OK OK OK 2 b notOK notOK notOK 3 c OK OK OK 4 d notOK notOK notOK 5 e OK OK OK I've recoded it to "OK" and "notOK" because your remapping doesn't make sense. The "from values" should be unique, not have repeated "yes" and "no" in them. Note how this is done. Make a tidy data set by melting. Mutate it. Cast it back into untidy format. Yes you could use pipes.
H: What are interleaved layers of convolutions? In the FaceNet paper, they describe the Zeiler&Fergus model like this: [...] the Zeiler&Fergus model which consists of multiple interleaved layers of convolutions, [...] What do they mean by interleaved? How does that compare to the inception layers? Especially, as the Zeiler&Fergus paper states We use standard fully supervised convnet models throughout the paper, as defined by (LeCun et al., 1989) and (Krizhevsky et al., 2012). [...] The top few layers of the network are conventional fully-connected networks and the final layer is a softmax classifier. AI: The Oxford dictionary explains "interleave" as Place something between the layers of (something) or in the context of telecommunication, as Mix (digital signals) by alternating between them. In the FaceNet paper, Schroff et. al describe their first architecture as multiple interleaved layers of convolutions, non-linear activations, local response normalizations, and max pooling layers which means they create a CNN by "altering between" these layer types (e.g. Conv -> Pool -> Conv -> Pool -> ..). Or with the wording of the first definition, you "place one layer" of each type "between" the other existing layers (i.e. start with a couple of Conv layers, then place Pool layer between two Conv layers, and so on). The final architecture would look somewhat like this: Conv -> ReLu -> LRN -> Pool -> Conv -> ReLu -> LRN -> Pool -> Conv -> ... So, this "interleaved" CNN is a network where the different layer types are applied in series, layer-by-layer. As a comparison, in "inception" type CNN, you apply the different layer types in parallel (the graphic below is 1 layer): +---> Conv 3x3 ---+ | | prev layer ---+---> Conv 5x5 ---+---> next layer | | +---> Pool 3x3 ---+
H: How is the evaluation setup for YouTube faces of FaceNet? The YouTube Faces database (YTF) consists of 3,425 videos of 1,595 different people. Given two videos, the task for YTF is to decide if they contain the same person or not. Having $n$ comparisons, the classifier might get $c \leq n$ right. Then the accuracy would be $\frac{c}{n}$. FaceNet is a CNN which maps an image of a face on a unit sphere of $\mathbb{R}^{128}$. It was evaluated on YTF. How did they decide which person is in the video? (I can imagine several procedures how this could be done, but I couldn't find it in the paper. One example, how it could be done, is by evaluating all images $x_i^{(k)}$ with $i = 1, \dots, \text{length of video }k$ and averaging the results - but I would like to know what they did / how this is usually done.) AI: The objective function they use to train the CNN minimizes the squared L2 distance (i.e. the squared Euclidean distance) between two similar (positive) images and simultaneously maximizes the distance between two different (negative) images. That means, the (squared) Euclidean distance between two representations is a measure of their similarity. Then, recognizing a face in a new image is as simple as 1) running it through the CNN and 2) finding its nearest neighbors with a KNN algorithm. The last paragraph was only about images - in the Youtube Faces DB, we are handling videos of different persons. In section 5.7 of the paper, they describe how they evaluate performance: We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. So, you were partially right: they just average the independent results over video frames. Probably for performance reasons, they chose to average the first 100 frames. They do describe that increasing this to the first 1000 frames increases performance from 95.12% to 95.18%, which is not significantly more.
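A sketch of what that averaging could look like in code (my own illustration; embedding extraction and the decision threshold are assumed to come from elsewhere): embed the first 100 detected frames of each video, average the squared L2 distance over all frame pairs, and compare against a threshold tuned on held-out pairs.

import numpy as np

def video_distance(emb_a, emb_b, n_frames=100):
    # emb_*: (num_frames, 128) arrays of L2-normalised embeddings, one row per detected face
    a, b = emb_a[:n_frames], emb_b[:n_frames]
    # squared Euclidean distance between every frame of video A and every frame of video B
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return d2.mean()

def same_person(emb_a, emb_b, threshold):
    # threshold is chosen on a validation split, as in the standard verification protocol
    return video_distance(emb_a, emb_b) < threshold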
H: How do CNNs use a model and find the object(s) desired? Background: I'm studying CNN's outside of my undergraduate CS course on ML. I have a few questions related to CNNs. 1) When training a CNN, we desire tightly bounded/cropped images of the desired classes, correct? I.e. if we were trying to recognize dogs, we would use thousands of images of tightly cropped dogs. We would also feed images of non-dogs, correct? These images are scaled to a specific size, i.e. 255x255. 2) Let's say training is complete. Our model's accuracy seems sufficient, with no problems. From here, let's have a large, HD image of a non-occluded dog running through a field with various obstacles. With a typical NN and some data, we just take the model, cross it with some input, and bam it's going to output some class. How will the CNN view this large image, and then 'find' the dog? Do we run some type of preprocessing on the image to partition it, and feed the partitions? AI: Though there can be a very detailed explanation for this question but I will try to make you understand much minimal words. 1) Cropping the images to a particular size isn't a necessary condition and neither is scaling. But put this way, it doesn't matter whether a dog is represented in a B&W image or RGB image because a convolution network learns features in the images which are independent of colors. Scaling and resizing help to limit the value of pixels between 0 and 1. 2) Once you have trained your CNN model, it has learned all the features like edges,etc. to recognize a dog in the image. Because the model has learned the features, it acquires certain properties like translation invariance which means that no matter where you position a dog in the image, it's still a dog and have the same features. How the model recognize it? It checks for the features of a dog, learned during training, no matter what the size of the new image is or where the dog is in the image or what the dog is doing. For getting a in-depth understanding you can refer to the following resources: http://neuralnetworksanddeeplearning.com/chap6.html http://cs231n.github.io/convolutional-networks/
H: Using the trainbr function for classification in Matlab I am training a neural network for classification using Matlab, and I don't understand if I can use the trainbr training function (Bayesian Regularization Backpropagation). It uses the MSE performance measure, but I want to use the crossentropy. If I set crossentropy as the performance function, the algorithm sets it back to MSE. On the other way, I can't use a validation set with this training and I don't find how to change it. The code is: x = A'; t = y'; % Choose a Training Function % For a list of all training functions type: help nntrain % 'trainlm' is usually fastest. % 'trainbr' takes longer but may be better for challenging problems. % 'trainscg' uses less memory. Suitable in low memory situations. trainFcn = 'trainbr'; % Scaled conjugate gradient backpropagation. % Create a Pattern Recognition Network net = patternnet(hiddenLayerSize,trainFcn); % Choose Input and Output Pre/Post-Processing Functions % For a list of all processing functions type: help nnprocess net.input.processFcns = {'removeconstantrows','mapminmax'}; net.output.processFcns = {'removeconstantrows','mapminmax'}; % Setup Division of Data for Training, Validation, Testing % For a list of all data division functions type: help nndivide net.divideFcn = 'dividerand'; % Divide data randomly net.divideMode = 'sample'; % Divide up every sample net.divideParam.trainRatio = 60/100; net.divideParam.valRatio = 20/100; net.divideParam.testRatio = 20/100; % Choose a Performance Function % For a list of all performance functions type: help nnperformance net.performFcn = 'crossentropy'; % Cross-Entropy net.trainParam.epochs = 5000; % Choose Plot Functions % For a list of all plot functions type: help nnplot net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ... 'plotconfusion', 'plotroc'}; % Train the Network [net,tr] = train(net,x,t); % Test the Network y = net(x); e = gsubtract(t,y); net.performParam.regularization = 0; performance = perform(net,t,y); tind = vec2ind(t); yind = vec2ind(y); percentErrors = sum(tind ~= yind)/numel(tind); % Recalculate Training, Validation and Test Performance trainTargets = t .* tr.trainMask{1}; valTargets = t .* tr.valMask{1}; testTargets = t .* tr.testMask{1}; trainPerformance = perform(net,trainTargets,y); valPerformance = perform(net,valTargets,y); testPerformance = perform(net,testTargets,y); Thanks AI: The trainbr mode uses the Bayesian regularization backpropagation. This method was presented in 1, which presents a regression problem with the loss function $$ E_D = \sum_{i=1}^n (t_i - a_i)^2 $$ where $t_i$ is the target and $a_i$ is the network's response. The paper proposes to add a regularization term, leading to a loss function $F$ of the form $$ F = \beta E_D + \alpha E_W $$ where $E_W$ is the square of the sums of all network weights, i.e. $E_W = \sum_{i,j} \| w_{ij} \|^2$. The two parameters $\alpha$ and $\beta$ control the weighing of the two parts $E_D$ and $E_W$: For $\alpha \ll \beta$, the network will minimize the loss, without really trying to keep weights low. For $\alpha \gg \beta$, the network will minimize the weights, allowing for some more error. In reality, this means a large $\alpha$ will stop the network from overfitting, which leads to a better generalization at the cost of a larger training error. The key to find a train a model which generalizes well, but still has a low error rate, is the right setting of $\alpha$ and $\beta$. 
This is achieved by treating them as random variables and finding an optimal setting, using the Bayesian methods presented in 2. (I won't talk about the details on that here, you can find that in the two linked papers.) Finally, the paper presents an algorithm, which calculates the optimal $\alpha$ and $\beta$ in each training iteration. This makes this algorithm generalize really well, especially in the presence of noisy input signals. However, as described, the loss function is a weighted sum between the MSE ($E_D$) and the regularization term ($E_W$). So, in short: you can only use it with the MSE, and not with cross-entropy. You'll need a different training algorithm, you can find a list in the MATLAB documentation, here. References: 1 F. D. Foresee and M. T. Hagan: "Gauss-Newton Approximation to Bayesian Learning", in Proceedings of the 1997 International Joint Conference on Neural Networks, June 1997. DOI: 10.1109/ICNN.1997.614194. [Link to PDF]. 2 D. J. C. McKay: "Bayesian Interpolation", Neural Computation, May 1992, Vol. 4, No. 3, pp. 415-447. DOI: 10.1162/neco.1992.4.3.415. [Link to PDF].
H: How does the Nearest Centroid method work? I have read this Wikipedia article. But, the idea is still very fuzzy to me. Suppose, k=5. Then, we have, $X_5 = \{A, B, C, D, E\}$ $Y_2 = \{Triangle, Square\}$ $R_5 = \{9, 8, 5, 1, 4 \}$ (just assumed) Now, $\mu_{Triangle} = \frac{5}{2} = 2.50$ and, $\mu_{Square} = \frac{22}{3} = 7.33$ Since, $\mu_{Triangle} < \mu_{Square}$, $class(?) == Triangle$. Am I correct? AI: You are making a mistake regarding what is given: during training, you don't have a radius $R$. You have the coordinates $\vec{x}$ and the label $y$ for each point: $$ \vec{x}_1 = [-2;1] \quad \vec{x}_2 = [1;2] \quad \vec{x}_3 = [0;-1] \quad \vec{x}_4 = [1;0] \quad \vec{x}_5 = [1;1] $$ and $$ y_1 = T \quad y_2=T \quad y_3=T \quad y_4=S \quad y_5=S $$ with that, your "trained" centroids are $$ \mu_T = \frac 13 [-2 + 1 + 0; 1 + 2 - 1] = \left[-\frac 13, \frac 23\right] $$ $$ \mu_S = \frac 12 [1 + 1; 0 + 1] =\left[1, \frac 12\right]$$ You calculate these centroids before you get any test values. Then, for testing, for your observed point $\vec{x} = [1,1]$, you calculate the Euclidean distance between the point $\vec{x}$ and the centroids $\mu_T$ and $\mu_S$: $$ \| \vec{x} - \mu_T \| = \left\| [1;1] - \left[-\frac 13; \frac 23\right] \right\| = \left\| \left[\frac 43; \frac 13\right] \right\| = 1.37 $$ $$ \| \vec{x} - \mu_S \| = \left\| [1;1] - \left[1; \frac 12\right] \right\| = \left\| \left[0; \frac 12 \right] \right\| = 0.5$$ Finally, the term $\hat{y} = \arg\min_{l \in \mathbf{Y}} \|\vec{x}-\mu_l\|$ is used to find the estimated class $\hat{y}$ (the hat symbol is to denote that this is an estimated $y$, not one we knew before.). $\arg\min$, means that you find the minimum value - which is 0.5 in our case, and chose which "argument", i.e. which class, leads to that minimum value. In our case, the class which leads to the minimal distance is $S$, so the result is $\hat{y} = S$, and our test point is a square.
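The same numbers can be reproduced with scikit-learn's NearestCentroid, shown here as a quick check (not part of the original answer):

import numpy as np
from sklearn.neighbors import NearestCentroid

X = np.array([[-2, 1], [1, 2], [0, -1], [1, 0], [1, 1]], dtype=float)
y = np.array(['T', 'T', 'T', 'S', 'S'])

clf = NearestCentroid().fit(X, y)
print(clf.classes_)                # ['S' 'T']
print(clf.centroids_)              # [[1, 0.5], [-1/3, 2/3]]
print(clf.predict([[1.0, 1.0]]))   # ['S'], since 0.5 < 1.37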
H: Can overfitting occur even with validation loss still dropping? I have a convolutional + LSTM model in Keras, similar to this (ref 1), that I am using for a Kaggle contest. Architecture is shown below. I have trained it on my labeled set of 11000 samples (two classes, initial prevalence is ~9:1, so I upsampled the 1's to about a 1/1 ratio) for 50 epochs with 20% validation split.I was getting blatant overfitting for a while but I thought it got it under control with noise and dropout layers. Model looked like it was training wonderfully, at the end scored 91% on the entirety of the training set, but upon testing on the test data set, absolute garbage. Notice: the validation accuracy is higher than the training accuracy. This is the opposite of "typical" overfitting. My intuition is, given the small-ish validation split, the model is still managing to fit too strongly to the input set and losing generalization. The other clue is that val_acc is greater than acc, that seems fishy. Is that the most likely scenario here? If this is overfitting, would increasing the validation split mitigate this at all, or am I going to run into the same issue, since on average, each sample will see half the total epochs still? The model: Layer (type) Output Shape Param # Connected to ==================================================================================================== convolution1d_19 (Convolution1D) (None, None, 64) 8256 convolution1d_input_16[0][0] ____________________________________________________________________________________________________ maxpooling1d_18 (MaxPooling1D) (None, None, 64) 0 convolution1d_19[0][0] ____________________________________________________________________________________________________ batchnormalization_8 (BatchNormal(None, None, 64) 128 maxpooling1d_18[0][0] ____________________________________________________________________________________________________ gaussiannoise_5 (GaussianNoise) (None, None, 64) 0 batchnormalization_8[0][0] ____________________________________________________________________________________________________ lstm_16 (LSTM) (None, 64) 33024 gaussiannoise_5[0][0] ____________________________________________________________________________________________________ dropout_9 (Dropout) (None, 64) 0 lstm_16[0][0] ____________________________________________________________________________________________________ batchnormalization_9 (BatchNormal(None, 64) 128 dropout_9[0][0] ____________________________________________________________________________________________________ dense_23 (Dense) (None, 64) 4160 batchnormalization_9[0][0] ____________________________________________________________________________________________________ dropout_10 (Dropout) (None, 64) 0 dense_23[0][0] ____________________________________________________________________________________________________ dense_24 (Dense) (None, 2) 130 dropout_10[0][0] ==================================================================================================== Total params: 45826 Here is the call to fit the model (class weight is typically around 1:1 since I upsampled the input): class_weight= {0:1./(1-ones_rate), 1:1./ones_rate} # automatically balance based on class occurence m2.fit(X_train, y_train, nb_epoch=50, batch_size=64, shuffle=True, class_weight=class_weight, validation_split=0.2 ) SE has some silly rule that I can post no more than 2 links until my score is higher, so here is the example in case you are interested: Ref 1: machinelearningmastery DOT com SLASH 
sequence-classification-lstm-recurrent-neural-networks-python-keras AI: I am not sure whether the validation set is balanced or not. You have a severe data imbalance problem. If you sample equally and randomly from each class to train your network, and a percentage of what you sampled is then used to validate it, this means that you train and validate on a balanced data set, while the testing used an imbalanced one. This means that your validation and testing sets are not equivalent, and in such a case you may see high validation accuracy and low testing accuracy. Please find this reference that deals mainly with the data imbalance problem for DNNs; you can check how they sample for training, validation and testing.
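A minimal sketch of the fix this implies (my own code; X and y stand in for the full labelled arrays): hold out the validation/test split at the natural ~9:1 class ratio first, and only oversample the portion the model actually trains on.

import numpy as np
from sklearn.model_selection import train_test_split

def oversample_minority(X, y, seed=0):
    # duplicate minority-class rows (with replacement) until all classes match the largest one
    rng = np.random.RandomState(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        np.where(y == c)[0] if n == n_max
        else rng.choice(np.where(y == c)[0], n_max, replace=True)
        for c, n in zip(classes, counts)
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# split first, preserving the natural class ratio in the held-out data ...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)
# ... then balance only the data the network trains on
X_train_bal, y_train_bal = oversample_minority(X_train, y_train)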
H: Big data analytics references I'm looking for a good introductory book or course to big data analytics. For the practical part, I'm particularly interested into using big data tools in R. I would prefer a book, but other references are welcome. Thanks! AI: I tried to explore some of the best available resources, which includes online courses (Free/Paid), Books etc. Books Big Data and Analytics (WIND) Hadoop for Dummies Big Data for Dummies Hadoop: The Definitive Guide Learning Spark: Lightning-Fast Big Data Analysis MapReduce Design Patterns Online Courses These are some best platforms that provide lots of courses with rich content and hands-on labs. You can go from beginner to expert level, followed by intermediate. (courses will be - Free/Paid ) Udemy - Big Data Analytics Courses Coursera - Big Data Analytics Courses edX - Big Data Analytics Courses Udacity - Data Analysis Other References For more thoughts on that you might like to explore these websites- Quora Big Data - Made Simple From Dev KDnuggets Update [BigData in R] You can go through these references- Wikipedia - Programming with Big Data in R RStudio - Working with BigData in R InfoWorld - Learn To crunch BigData with R That's all from my side. Hope it helps. Cheers!
H: Predefined Neural Networks instead of fine tuning? I usually try to form my ANNs with classic fine-tuning approach but I recently learned that there are different "predefined" networks specially for certain tasks. Is there a good summary about these? Are they really perform better than home-made ones? AI: It depends. If you are doing a task for which the weights of a very large network, trained on the same type of problem, are present, then it's better to use the weights of that pre-trained network. You can also fine tune the layers later on. One such example is the VGG16 Net. This NN was used in the ImageNet challenge in 2014, hence if you are trying to do image classification tasks where your images are the subset of ImageNet, then you should use the pre-trained VGG16 weights. This is a great tutorial if you want to go in details. But the above situation might not be true in every case. For example, the above case is not true if you reverse the situation.
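A short sketch of that transfer-learning pattern, assuming Keras 2 and its bundled ImageNet weights (the dense head and the 5-class output are made-up illustrations, not something from the original answer):

from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

# load the convolutional base pre-trained on ImageNet, drop the original classifier
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False          # freeze pre-trained features; unfreeze later to fine-tune

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(5, activation='softmax')(x)   # hypothetical 5-class problem

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])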
H: Visualizing results of a classification problem, excluding confusion matrices? A classification system I built is going to go into production soon (it'll be part of a larger dashboard), and I'm looking for ways to better visualize and convey to business folks the results of a classification. Basically, given "old" data on which the model was trained, I predict the classes of "new" data, with the goal being to show whether the class distributions for the new data are statistically indistinguishable from the class distributions observed in the old data. So, if there are three classes A, B, and C in the old data, with proportions of 50%, 30%, and 20%, respectively, I compare the classification distribution for the new data with those original, observed proportions. Outside of a confusion matrix (which I think is probably inappropriate for most dashboard users), how else can I effectively present these results? I was thinking of a bar chart comparing the old and new proportions (example image not reproduced here). AI: Here are a couple of options (charts not reproduced here). I prefer the stacked column chart, which could also be annotated with the actual proportions if needed.
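A minimal matplotlib sketch of the stacked-column idea, with made-up proportions (an illustration, not the answerer's original chart):

import matplotlib.pyplot as plt

classes = ['A', 'B', 'C']
old_props = [0.50, 0.30, 0.20]        # proportions observed in the training ("old") data
new_props = [0.47, 0.33, 0.20]        # predicted class shares on the new data
colors = ['#4c72b0', '#dd8452', '#55a868']

fig, ax = plt.subplots()
bottoms = [0.0, 0.0]                  # running totals for the two stacked columns
for cls, old_p, new_p, color in zip(classes, old_props, new_props, colors):
    ax.bar([0, 1], [old_p, new_p], bottom=bottoms, color=color, label=cls)
    bottoms = [bottoms[0] + old_p, bottoms[1] + new_p]

ax.set_xticks([0, 1])
ax.set_xticklabels(['Old (observed)', 'New (predicted)'])
ax.set_ylabel('Share of records')
ax.legend(title='Class')
plt.show()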
H: Image feature extraction Python skimage blob_dog I am trying to extract features from images using:

from skimage.io import imread
from skimage.transform import resize
from skimage.exposure import equalize_hist
from skimage.color import rgb2gray
import skimage.feature

def process_image(image_fp):
    image_ = imread(image_fp)
    image_ = resize(image_, (300, 200, 3))
    image = equalize_hist(rgb2gray(image_))
    edges = skimage.feature.blob_dog(image)
    return edges.reshape(edges.size).tolist()

where image_fp is an image path. I am having a problem due to the different sizes of the return value. In general, the reshape guarantees the same size in the other algorithms. Is there a way to always get the same size? The only way I see is to truncate the lists (the crude way). AI: Please read through the scikit-image documentation for blob_dog; I am assuming that you have gone through the method by which it detects blobs in an image. It returns a 2D array with three values per row, giving the coordinates and the standard deviation of the Gaussian for each blob found. The number of rows is the number of blobs found in the image, which varies from image to image. So forcing a fixed size by truncating or resizing that array is the wrong approach and may eliminate the very features you might find important.
H: Choice of replacing missing values based on the data distribution I am building a classification model based on a relatively small dataset. I have some missing values on the different attributes that I have. I cannot afford to delete any of the records that have missing values, so I want to replace them. I made some general calculations to get some understanding of the distribution of the data and to help me choose the value which would replace the missing values. Assume that I have attribute A with the following:

mean = 121.68676278
std = 30.51562426
median = 117
mode =
min = 44
max = 199

[in all the calculations, I ignored the missing values] If I were to choose between mean, median, or mode, which one would be most suitable? And there is something else which was very confusing for me: the std is very large, and when I asked about it I was told that this could be normal based on the range of my data, but I did not understand what that means. AI: I would not necessarily recommend substituting missing values with the mean, the median or the mode. If you want to go through some techniques and get a glance at them, the linked overview is a good start, and for imputation techniques the wiki page gives a brief introduction. Do you think that there is a way to predict the missing values from the other attributes? If yes, apply a regression model on those variables and estimate the missing value. But remember this lacks variability, as the imputed values fall on the regression line itself. There are methods like stochastic regression imputation which can add this variability component to the estimated value. If you are unable to go anywhere with the previous step, then see how the observed values of the variable with missing entries are distributed, and substitute the missing ones by drawing from that distribution with a random function. And if you are unable to perform any of the above and want to go with the mean or the median, I can't really give my opinion, as in this case they are close to each other. See which gives you the best predictions and decide between them. Coming to your final question: the standard deviation just shows how far your values fall from the mean. If your data has a large range with a good number of points distributed at the extremes, you would be expected to have a high standard deviation.
H: Which recommender system approach allows for inclusion of user profile? I wanted to enhance a recommendation engine with information relying not only on past purchases or ratings but also on behavioral and demographical variables like sex, age, location, service usage frequency or hours. This information may be sparse (e.g. the user may not have provided me with his age). Can you suggest an approach that would allow for it? My first approach: After screening the general approaches to recommender systems, I think the one that is able to implement my idea is custom(?) item-based collaborative filtering. To be more precise: I would include user profile information in the same way item ratings are introduced, probably with two alterations to raw data: Normalize scale to be compliant with the rating scale. Add a parameter in my algorithm that would put a weight (e.g. 20% or 50%) on user profile rows, as there may be 10 user-profile variables and a million product items. AI: Try matrix/tensor factorization methods and data fusion.
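To make the idea concrete, here is a hedged numpy sketch (not from the original answer) of the simplest factorization variant: append the scaled, down-weighted profile variables as extra columns of the rating matrix and factorize the augmented matrix, which is essentially the weighting scheme proposed in the question. All sizes, the 0.3 weight and the random data are assumptions for illustration, and missing profile entries would need masking or imputation, which is not handled here.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_profile = 100, 500, 10

R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)  # fake ratings on a 0-5 scale
P = rng.random((n_users, n_profile))                           # fake profile features in [0, 1]

weight = 0.3                        # relative weight of the profile block (tunable)
P_scaled = weight * (P * 5.0)       # rescale profile features to the rating scale

X = np.hstack([R, P_scaled])        # augmented user x (items + profile) matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 20
user_factors = U[:, :k] * s[:k]     # latent users, informed by ratings and profile
item_factors = Vt[:k, :n_items].T   # latent items (profile columns are dropped here)

predicted_ratings = user_factors @ item_factors.T
print(predicted_ratings.shape)      # (100, 500)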
H: How to equalize the pairwise affinity perplexities when implementing t-SNE? I'm trying to implement the t-SNE algorithm: I found that to compute the pairwise affinities, I have to follow this: $$p_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}$$ My problem is computing $\sigma_i$. In the Wikipedia article I found: The bandwidth of the Gaussian kernels $\sigma_{i}$, is set in such a way that the perplexity of the conditional distribution equals a predefined perplexity using a binary search. As a result, the bandwidth is adapted to the density of the data: smaller values of $\sigma_{i}$ are used in denser parts of the data space. I don't understand what this really means. How can I calculate $\sigma_i$? AI: It simply means that you should set the bandwidths through binary search. The way it works is that you start with a preset target perplexity (Mark's link suggests values from 5 to 50 as reasonable values) and bounds for the bandwidth. If the target perplexity is inside the interval defined by the boundary perplexities, you iteratively halve the search space until you converge to the target: $$2^{H(p; \sigma_L)} < PP_\mathrm{target} < 2^{H(p; \sigma_U)}$$ If the target was not in the initial interval, you expand the interval and try again.
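A minimal numpy sketch of that binary search (an illustration, not the reference t-SNE implementation; real implementations usually search over the precision beta = 1/(2 sigma^2) instead, but the idea is the same): for one point i, evaluate the perplexity of p_{j|i} at a candidate sigma and bisect until it matches the target.

import numpy as np

def perplexity(distances_sq, sigma):
    # Perplexity 2^H of the conditional distribution p_{j|i} for one point,
    # given its squared distances to every other point.
    p = np.exp(-distances_sq / (2.0 * sigma ** 2))
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return 2.0 ** entropy

def find_sigma(distances_sq, target_perplexity=30.0, tol=1e-5, max_iter=100):
    lo, hi = 1e-10, 1e10          # generous initial bounds for sigma_i
    sigma = 1.0
    for _ in range(max_iter):
        pp = perplexity(distances_sq, sigma)
        if abs(pp - target_perplexity) < tol:
            break
        if pp > target_perplexity:   # distribution too flat -> shrink sigma
            hi = sigma
        else:                        # distribution too peaked -> grow sigma
            lo = sigma
        sigma = (lo + hi) / 2.0
    return sigma

d2 = np.random.default_rng(0).random(50) * 10.0   # fake squared distances from point i
print(find_sigma(d2, target_perplexity=20.0))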
H: How does Xgboost learn what are the inputs for missing values? So from Algorithm 3 of https://arxiv.org/pdf/1603.02754v3.pdf, it says that an optimum default direction is determined and the missing values will go in that direction. However, or perhaps I have misunderstood/missed the explanation from the article, it doesn't say what exactly the input is. For example, I have (parent) node A (with 50 inputs) splitting into node B and node C. Now, say of the 50 inputs there are 7 missing values. The other 43 inputs are split into B and C accordingly. What I seem to understand is that it will allocate the remaining 7 into B and C and determine which one gives a higher gain score; that will be the optimal direction. However, given the 7 values are missing (which means I don't know what these 7 values are), how does allocating missing values into any of the child nodes change the gain score, or rather minimize the loss function? This seems to suggest that Xgboost is inputting something for the missing values. I can't seem to find out what XGBoost is inputting for these missing values. I hope this question isn't too vague/general and easy. Edit: I think "Missing values" may be a vague term. What I meant here is (From wiki) "In statistics, missing data, or missing values, occur when no data value is stored for the variable in an observation." From the author himself (https://github.com/dmlc/xgboost/issues/21), he said " tqchen commented on Aug 13, 2014 xgboost naturally accepts sparse feature format, you can directly feed data in as sparse matrix, and only contains non-missing value. i.e. features that are not presented in the sparse feature matrix are treated as 'missing'. XGBoost will handle it internally and you do not need to do anything on it." And, " tqchen commented on Aug 13, 2014 Internally, XGBoost will automatically learn what is the best direction to go when a value is missing. Equivalently, this can be viewed as automatically "learn" what is the best imputation value for missing values based on reduction on training loss." AI: The procedure is described in their paper, section 3.4: Sparsity aware split-finding. Assume you're at your node with 50 observations and, for the sake of simplicity, that there's only one split point possible. For example, you have only one binary feature $x$, and your data can be split into three groups: Group $B$: 20 observations such that $x=B$, Group $C$: 20 observations such that $x=C$, Group $M$: 10 observations such that $x=?$ The algorithm will split based on $x$, but does not know where to send the group $M$. It will try both assignments, $$(B, M), C \qquad \text{and} \qquad B, (C, M),$$ compute the value to be assigned to the prediction at each node using all the data, and choose the assignment that minimizes the loss. For example, if the split $(B, M), C$ is chosen, the value of the left node will have been computed from all the $B$ and $M$ samples. This is what is meant by Automatically "learn" what is the best imputation value for missing values based on reduction on training loss. Comment question: how can group M be calculated if I don't know what the values for it are? The value we compute is not based on the feature $x$, which we don't know, but based on the known label/target of the sample. Let's say we are doing a regression and trying to minimize the mean square error, and let's say we have the following: For group $B$, the mean of the target is $5$. For group $C$, the mean of the target is $10$. 
For the missing value group $M$, the mean of the target is $0$. Note that even if the value of $x$ is missing, we know the value of target of the sample, otherwise we could not use it to train our model. In this case, the split would be made on $(B,M), C$, the value assigned to the right node containing samples $C$ would be $10$, and the value assigned to the left node containing samples in $(B, M)$ would be the mean of the target for the whole group. In our example, the mean would be $$\frac{|M|}{|B| + |M|}\text{mean}(M) + \frac{|B|}{|M|+|B|}\text{mean}(B) = \frac{10}{30}0 + \frac{20}{30}5 = 3.\overline{3}$$
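As a small hedged illustration (not from the original thread) of how this looks in practice, you can feed np.nan values straight to XGBoost and let it learn the default directions; the data here are synthetic, and the objective name may differ between XGBoost versions (older releases use "reg:linear").

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] > 0, 5.0, 10.0) + rng.normal(scale=0.1, size=200)
X[rng.random(200) < 0.1, 0] = np.nan        # ~10% of the first feature goes missing

dtrain = xgb.DMatrix(X, label=y, missing=np.nan)   # declare, don't impute, the missing values
params = {"objective": "reg:squarederror", "max_depth": 2, "eta": 0.3}
booster = xgb.train(params, dtrain, num_boost_round=20)

# Rows with np.nan simply follow the default direction learned for each split.
print(booster.predict(xgb.DMatrix(X[:5], missing=np.nan)))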
H: Convolutional autoencoders not learning I'm trying to implement convolutional autoencoders in tensorflow, on the mnist dataset. The problem is that the autoencoder does not seem to learn properly: it will always learn to reproduce the 0 shape, but no other shapes; in fact I usually get an average loss of about 0.09, which is 1/10 of the classes that it should learn. I am using 2x2 kernels with stride 2 for the input and output convolutions, but the filters seem to be learned properly. When I visualize the data, the input image is passed through 16 filters (1st conv) and 32 filters (2nd conv), and by image inspection it seems to run fine (i.e. features like curves, crosses, etc. are apparently detected). The problem seems to arise in the fully connected part of the network: no matter what the input image is, its encoding will always be the same. My first thought is "I'm probably just feeding it with zeroes while training", but I don't think I made this mistake (see code below). Edit: I realized the dataset was not shuffled, which introduced a bias and could be the cause of the problem. After introducing shuffling, the average loss is lower (0.06 instead of 0.09), and in fact the output image looks like a blurry 8, but the conclusions are the same: the encoded input will be the same no matter what the input image is. Here is a sample input with the corresponding output. Here are the activations for the image above, with the two fully connected layers at the bottom (the encoding is the bottommost). Finally, here are the activations of the fully connected layers for different inputs. Each input image corresponds to a line in the activation images. As you can see, they always yield the same output. If I use transposed weights instead of initializing different ones, the first FC layer (image in the middle) looks a bit more randomized, but the underlying pattern is still evident. In the encoding layer (image at the bottom), the output will always be the same no matter what the input is (of course, the pattern varies from one training run to the next). 
Here's the relevant code # A placeholder for the input data x = tf.placeholder('float', shape=(None, mnist.data.shape[1])) # conv2d_transpose cannot use -1 in output size so we read the value # directly in the graph batch_size = tf.shape(x)[0] # Variables for weights and biases with tf.variable_scope('encoding'): # After converting the input to a square image, we apply the first convolution, using 2x2 kernels with tf.variable_scope('conv1'): wec1 = tf.get_variable('w', shape=(2, 2, 1, m_c1), initializer=tf.truncated_normal_initializer()) bec1 = tf.get_variable('b', shape=(m_c1,), initializer=tf.constant_initializer(0)) # Second convolution with tf.variable_scope('conv2'): wec2 = tf.get_variable('w', shape=(2, 2, m_c1, m_c2), initializer=tf.truncated_normal_initializer()) bec2 = tf.get_variable('b', shape=(m_c2,), initializer=tf.constant_initializer(0)) # First fully connected layer with tf.variable_scope('fc1'): wef1 = tf.get_variable('w', shape=(7*7*m_c2, n_h1), initializer=tf.contrib.layers.xavier_initializer()) bef1 = tf.get_variable('b', shape=(n_h1,), initializer=tf.constant_initializer(0)) # Second fully connected layer with tf.variable_scope('fc2'): wef2 = tf.get_variable('w', shape=(n_h1, n_h2), initializer=tf.contrib.layers.xavier_initializer()) bef2 = tf.get_variable('b', shape=(n_h2,), initializer=tf.constant_initializer(0)) reshaped_x = tf.reshape(x, (-1, 28, 28, 1)) y1 = tf.nn.conv2d(reshaped_x, wec1, strides=(1, 2, 2, 1), padding='VALID') y2 = tf.nn.sigmoid(y1 + bec1) y3 = tf.nn.conv2d(y2, wec2, strides=(1, 2, 2, 1), padding='VALID') y4 = tf.nn.sigmoid(y3 + bec2) y5 = tf.reshape(y4, (-1, 7*7*m_c2)) y6 = tf.nn.sigmoid(tf.matmul(y5, wef1) + bef1) encode = tf.nn.sigmoid(tf.matmul(y6, wef2) + bef2) with tf.variable_scope('decoding'): # for the transposed convolutions, we use the same weights defined above with tf.variable_scope('fc1'): #wdf1 = tf.transpose(wef2) wdf1 = tf.get_variable('w', shape=(n_h2, n_h1), initializer=tf.contrib.layers.xavier_initializer()) bdf1 = tf.get_variable('b', shape=(n_h1,), initializer=tf.constant_initializer(0)) with tf.variable_scope('fc2'): #wdf2 = tf.transpose(wef1) wdf2 = tf.get_variable('w', shape=(n_h1, 7*7*m_c2), initializer=tf.contrib.layers.xavier_initializer()) bdf2 = tf.get_variable('b', shape=(7*7*m_c2,), initializer=tf.constant_initializer(0)) with tf.variable_scope('deconv1'): wdd1 = tf.get_variable('w', shape=(2, 2, m_c1, m_c2), initializer=tf.contrib.layers.xavier_initializer()) bdd1 = tf.get_variable('b', shape=(m_c1,), initializer=tf.constant_initializer(0)) with tf.variable_scope('deconv2'): wdd2 = tf.get_variable('w', shape=(2, 2, 1, m_c1), initializer=tf.contrib.layers.xavier_initializer()) bdd2 = tf.get_variable('b', shape=(1,), initializer=tf.constant_initializer(0)) u1 = tf.nn.sigmoid(tf.matmul(encode, wdf1) + bdf1) u2 = tf.nn.sigmoid(tf.matmul(u1, wdf2) + bdf2) u3 = tf.reshape(u2, (-1, 7, 7, m_c2)) u4 = tf.nn.conv2d_transpose(u3, wdd1, output_shape=(batch_size, 14, 14, m_c1), strides=(1, 2, 2, 1), padding='VALID') u5 = tf.nn.sigmoid(u4 + bdd1) u6 = tf.nn.conv2d_transpose(u5, wdd2, output_shape=(batch_size, 28, 28, 1), strides=(1, 2, 2, 1), padding='VALID') u7 = tf.nn.sigmoid(u6 + bdd2) decode = tf.reshape(u7, (-1, 784)) loss = tf.reduce_mean(tf.square(x - decode)) opt = tf.train.AdamOptimizer(0.0001).minimize(loss) try: tf.global_variables_initializer().run() except AttributeError: tf.initialize_all_variables().run() # Deprecated after r0.11 print('Starting training...') bs = 1000 # Batch size for i in range(501): # 
Reasonable results around this epoch # Apply permutation of data at each epoch, should improve convergence time train_data = np.random.permutation(mnist.data) if i % 100 == 0: print('Iteration:', i, 'Loss:', loss.eval(feed_dict={x: train_data})) for j in range(0, train_data.shape[0], bs): batch = train_data[j:j+bs] sess.run(opt, feed_dict={x: batch}) # TODO introduce noise print('Training done') AI: Well, the problem was mainly related to the kernel size. Using 2x2 convolution with a stride of (2,2) turned out to be a bad idea. Using 5x5 and 3x3 sizes yielded decent results.
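For concreteness, here is a hedged sketch (TF 1.x style, to match the question's code) of the kernel-size change the answer describes. Switching to 'SAME' padding is my own assumption so that the spatial size still shrinks 28 -> 14 -> 7 and the 7*7*m_c2 reshape keeps working; the sigmoid activations are kept as in the original code.

import tensorflow as tf

m_c1, m_c2 = 16, 32
x = tf.placeholder(tf.float32, shape=(None, 784))
reshaped_x = tf.reshape(x, (-1, 28, 28, 1))

with tf.variable_scope('encoding_fixed'):
    wec1 = tf.get_variable('w1', shape=(3, 3, 1, m_c1),
                           initializer=tf.contrib.layers.xavier_initializer())
    bec1 = tf.get_variable('b1', shape=(m_c1,), initializer=tf.constant_initializer(0))
    wec2 = tf.get_variable('w2', shape=(3, 3, m_c1, m_c2),
                           initializer=tf.contrib.layers.xavier_initializer())
    bec2 = tf.get_variable('b2', shape=(m_c2,), initializer=tf.constant_initializer(0))

y1 = tf.nn.sigmoid(tf.nn.conv2d(reshaped_x, wec1, strides=(1, 2, 2, 1), padding='SAME') + bec1)
y2 = tf.nn.sigmoid(tf.nn.conv2d(y1, wec2, strides=(1, 2, 2, 1), padding='SAME') + bec2)
print(y1.shape, y2.shape)   # (?, 14, 14, 16) (?, 7, 7, 32)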
H: Dataframe request with groupBy I'm a beginner in Spark and I want to calculate the average of number per name. I have a JSON file with this information df = spark.read.json("myjson.json") df.select(avg(df["number"]), df["name"]).groupBy("name").show() But I'm doing it wrong.. How can I solve my problem? Thanks a lot AI: You are probably thinking in terms of regular SQL, but Spark SQL is a bit different. You'll need to group by the field before performing your aggregation. Thus you can write your query as follows: df.groupBy('name').agg({'number': 'mean'}).show() You can also write it using a SQL dialect by registering the DataFrame as a temp table and then querying it with the SQLContext or HiveContext: df.registerTempTable("people") df2 = sqlContext.sql("select name, avg(number) as average from people group by name") I hope that this answers your question.
H: Topics to cover for software developer interested in data analytics First of all I don't know if this is the appropriate place to post this question. If it's not, I apologize in advance. It seems like the most relevant Stack Exchange sub. A little about myself: I'm a software developer and have been working in the field for about 5 years. I have a two year degree from a community college that didn't involve much math above pre-calc high school level stuff. I've never taken a statistics course. Most of the software I write is business software. It's typically back end server or database stuff, never user facing. I do a lot of data aggregation and normalization. Lately I've been thinking about expanding my skillset. Data and statistics have always interested me, and I'd like to learn ways to measure and interpret data. Ideally this would lead to thinking of useful metrics and uses for data that would help the company I work at. The problem is I'm not sure what I should be looking into. There are so many data science topics and I don't know what would be useful for me to learn. I prefer structured learning environments such as courses or video series. I know this isn't a great question... I'm just not sure where to ask and I'm not sure how to ask for what I'm looking for... or even sure what I am looking for. Any guidance or suggestions would be very much appreciated. Even if the suggestion is a better place to ask the question or a better question for me to ask. Thanks much. AI: Welcome to the community!! There can be a lot of answers to this question, but I would suggest the approach I took when I shifted from software development to the data science field. 1) Refresh your statistics and probability concepts. You should not go into too much detail, but you must understand basic things like the Gaussian distribution, mean, variance, probability, etc. 2) Go through the basics of machine learning concepts. I prefer Andrew Ng's machine learning course on Coursera. That will help you build a strong foundation and will give you a great start in the field. 3) Choose a particular language, Python or R, for building models. It's totally up to you, but I prefer Python as it has great libraries for machine learning as well as deep learning. 4) Take part in competitions. We learn by doing, not by only taking lectures. I suggest you join Kaggle and the Slack community there, namely 'KaggleNoobs'. It's a great community; I learn something new there every day. P.S: Data science is a vast field. It demands various skill sets like data analysis, data visualisation, machine learning, etc. So sometimes it can become frustrating too. But once you start enjoying it, you will become a master eventually.
H: Spark Mllib - FP-Growth - Machine Learning Is FP-Growth a machine learning algorithm? Because I'm looking at this code: import org.apache.spark.mllib.fpm.FPGrowth import org.apache.spark.rdd.RDD val data = sc.textFile("data/mllib/sample_fpgrowth.txt") val transactions: RDD[Array[String]] = data.map(s => s.trim.split(' ')) val fpg = new FPGrowth() .setMinSupport(0.2) .setNumPartitions(10) val model = fpg.run(transactions) model.freqItemsets.collect().foreach { itemset => println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq) } val minConfidence = 0.8 model.generateAssociationRules(minConfidence).collect().foreach { rule => println( rule.antecedent.mkString("[", ",", "]") + " => " + rule.consequent .mkString("[", ",", "]") + ", " + rule.confidence) } And I don't see how the algorithm can learn, because it doesn't have a train, test or validation set... Thanks AI: FP-growth is a frequent pattern association rule learning algorithm. Thus it's a rule-based machine learning algorithm. When you call the following: val model = fpg.run(transactions) you are actually creating a frequent-pattern model without generating candidates. So you'll need to generate the association rules afterwards if you want to use them, with: model.generateAssociationRules(minConfidence) Now, concerning the usual flow of building and validating such models, your code doesn't deal with that. It's usually done through variations of quality measures. It can also be done through feature extraction techniques. With such techniques, you should consider the following: for each association rule, you'll need to measure the improvement in accuracy that a commonly used predictor can obtain from an additional feature, constructed according to the exceptions to the rule. In other words, you'll have to select a reference set of rules that should help your model perform better. I strongly advise you to read this paper about the topic. So what does that mean in practice? It means that you'll need to implement that pipeline yourself, because it's not implemented in Spark yet. I hope that this answers your question.
H: Methods to reduce dimensionality within a feature? Suppose that I am interested in predicting an outcome (say, the arrival delay [in seconds] of a flight) based upon a set of features. One of these features is a nominal variable - carrier - that specifies the airline carrier of the flight. This feature has 16 different values. After investigation of how arrival delay is distributed across each carrier, it appears that some carriers could be collapsed into one value (e.g., "AS" and "HA" or "WN" and "B6"). install.packages("nycflights13") library(nycflights13) boxplot( formula = arr_delay ~ with(flights, reorder(carrier, -arr_delay, median, na.rm = TRUE)), data = flights, horizontal = TRUE, las = 2, plot = TRUE ) In general, are there well-known methods for reducing the dimensionality within a feature? AI: I think you are looking for a method that groups up the 16 nominal categorical values. If you fit the regression problem with a tree-based algorithm, say rpart, it will give you various splits, and you could consider aggregating the resulting groups to reduce the number of categorical values. For example, the tree-based algorithm may suggest a split of carrier IN (AS, HA, VC) vs. NOT IN (AS, HA, VC). This effectively would reduce the number of distinct values to 2. You might want to consider more than 1 split to take into account interactions. Overall, this approach would reduce the number of distinct values in a categorical variable.
H: Variance in cross validation score / model selection Between cross-validation runs of an xgboost classification model, I gather different validation scores. This is normal: the train/validation split and model state are different each time. flds = self.gsk.Splits(X, cv_folds=cv_folds) cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=xgb_param['n_estimators'], nfold=cv_folds, folds=flds, metrics='auc', early_stopping_rounds=50, verbose_eval=True) self.model.set_params(n_estimators=cvresult.shape[0]) To select the parameters, I run this CV multiple times and average the results in order to attenuate those differences. Once my model parameters have been "found", what is the correct way to train the model, which seems to have some inner random states? Do I: train on the full train set and hope for the best? keep the model with the best validation score in my CV loop (I am concerned this will overfit)? bag all of them? bag only the good ones? AI: Since you want your model to be a general solution, you want to include all your data when building the final model. You are correct in saying that keeping the model with the best validation score in the CV is overfitting. Including these inner random states helps generalize your model, and since you have already tuned your model parameters using CV, you can apply these parameters to the final model. As for feature selection, you want to separate the data used to perform feature selection and the data used in cross-validation, so feature selection is performed on independent data in the cross-validation fold. This prevents biasing the model. If you were to select your features on the same data that you then use to cross-validate, you will likely overestimate your accuracy. Here are some other great posts that help: https://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation https://stats.stackexchange.com/questions/27750/feature-selection-and-cross-validation Check out Dikran Marsupial's answers to both, they are really good.
H: PCA on matrix with large M and N Based on this answer, we know that we can build the covariance matrix incrementally when there are too many observations, whereas we can perform randomised SVD when there are too many variables. The answers provided are clear and helpful. However, what if we have a large number of observations AND variables? e.g. 500,000 samples with 600,000 variables. In this case, the covariance matrix will be huge (e.g. 2,000 GB, assuming 8-byte floats, if my calculation is correct) and it will be impossible for us to fit it into memory. In such a scenario, is there anything that we can do to calculate the PCA, assuming we only want the top PCs (e.g. 15 PCs)? AI: There are a couple of things you can do. Sample a representative but small set of your data, which will allow you to compute PCA in memory. But seeing as you have 600,000 variables, this will most likely not give meaningful results. Use incremental PCA, here is a link: http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.IncrementalPCA.html#sklearn.decomposition.IncrementalPCA But the main problem you have is that the number of samples is less than the number of variables. I would recommend a different approach to dimensionality reduction. Autoencoders would be my recommendation to you. Autoencoders can be trained in an iterative fashion, circumventing your memory issue, and can learn more complicated projections than PCA (which is a linear transform). In case you want a linear projection, you can use an autoencoder with a single linear hidden layer, and the solution found by the neural network will span the same subspace as the PCA solution. Here are a couple of links you will find helpful: http://ai.stanford.edu/~quocle/tutorial2.pdf https://www.quora.com/How-is-autoencoder-compared-with-other-dimensionality-reduction-algorithms https://www.cs.toronto.edu/~hinton/science.pdf
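For the incremental PCA route, a hedged sketch (not from the original answer) of how the streaming fit looks with scikit-learn; load_chunk is a made-up placeholder for however you read blocks of rows (HDF5, parquet, a memory-mapped array, ...), and the sizes are scaled down for illustration.

import numpy as np
from sklearn.decomposition import IncrementalPCA

n_components = 15
ipca = IncrementalPCA(n_components=n_components)

def load_chunk(i, chunk_size=1000, n_features=5000):
    # Placeholder: pretend each call reads the next block of rows from disk.
    rng = np.random.default_rng(i)
    return rng.normal(size=(chunk_size, n_features))

for i in range(10):                       # stream the rows through partial_fit
    ipca.partial_fit(load_chunk(i))

top_pcs = ipca.transform(load_chunk(99))  # project new rows onto the top 15 PCs
print(top_pcs.shape)                      # (1000, 15)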
H: What is the reward function in the 10 armed test bed? The Sutton & Barto book on reinforcement learning mentions the 10 armed test bed in chapter 2, Bandit Problems: To roughly assess the relative effectiveness of the greedy and ε-greedy methods, we compared them numerically on a suite of test problems. This was a set of 2000 randomly generated n-armed bandit tasks with n = 10. For each bandit, the action values, $q_∗(a), a = 1, . . . , 10,$ were selected according to a normal (Gaussian) distribution with mean 0 and variance 1. On the $t$th time step with a given bandit, the actual reward $R_t$ was the $q_∗(A_t)$ for the bandit (where $A_t$ was the action selected) plus a normally distributed noise term that was mean 0 and variance 1 [. . . .] We call this suite of test tasks the 10-armed testbed. What is the reward function in the 10 armed test bed? I interpreted it as something like q*(a) + some normally distributed random value, where q*(a) is the true value of action a. Why is the reward function chosen this way in the test bed, and how does the reward function affect the value estimations and the graphs? I guess the reason I'm asking this question is that I'm not completely clear on what a reward looks like in the real world, where I don't know anything about q*(a). AI: The reward function in the Chapter 2 test bed is simply the "true" mean value for the chosen action, plus a "noise term" which is normally distributed with mean 0 and standard deviation 1. The noise has the same distribution as the initial setting of the "true" values. The difference is you set the true values at the start and do not change them, then add noise on evaluation of each reward. The goal for the learner is then to find the best "true" value whilst only seeing the reward. This matches your understanding as I read it from the question. You could write it like this: Initialisation: $\forall a \in A: q_*(a) \leftarrow N(0,1)$ Evaluation: $R_t = r(A_t) = q_*(A_t) + N(0,1)$ Where $N(\mu,\sigma)$ is a sample from the normal distribution, mean $\mu$, standard deviation $\sigma$ Why is the reward function chosen this way in the test bed and how does the reward function affect the value estimations and the graphs? For a bandit problem to be non-trivial, the reward function needs to be stochastic, such that it is not possible to immediately discover the best action; there should be some uncertainty about what the best action to take is, even after taking many samples. So the noise is there to provide at least some difficulty - without it, finding the best action would be a trivial $argmax$ over the 10 possible actions. The noise does not represent uncertainty in the sensing (although that could also be a real world issue), but variability of the environment in response to an action. The test examples could have almost any distribution (e.g. $p(-1.0|a=1) = 0.9, p(9.0|a=1) = 0.1$ for $q_*(a=1) = 0.0$); the authors made a choice that was concise to describe and useful for exploring the different techniques in the chapter. The specific reward function will affect the learning graphs. The test bed has been chosen so that the ratio of noise to the magnitude of the "true" values is high. In turn, this means that value estimates will converge relatively slowly (as a ratio to the true values), and this exposes differences between different sampling and estimation techniques when they are plotted by time step. 
To answer your concern: I guess the reason I'm asking this question is that I'm not completely clear on what a reward looks like in the real world, where I don't know anything about q*(a). In the real world you may need to sense or receive the reward from the environment. Obviously that complicates the test scenario, and doesn't add anything to the understanding of the maths, so the test bed just generates some imaginary distribution for the environment inside the problem. The "sensing" in the test is just assumed, with a reward amount defined by the test. To qualify as a simple (and static) bandit problem, the reward has to be immediately apparent on taking the action, and have no dependency on current state or history. That constrains the problem somewhat - it is not the full reinforcement learning problem. So real-world examples tend to be about gambling with limited choice on independent repeatable events.
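A small numpy sketch of the testbed as described above (my own illustration, not code from the book): draw the true values once, then return noisy samples around the chosen arm's true value.

import numpy as np

rng = np.random.default_rng(0)
n_arms = 10

# Initialisation: draw the "true" action values once and keep them fixed.
q_star = rng.normal(loc=0.0, scale=1.0, size=n_arms)

def reward(action):
    # Evaluation: the true value of the chosen arm plus unit-variance noise.
    return q_star[action] + rng.normal(loc=0.0, scale=1.0)

# The learner only ever sees the noisy samples, e.g. five pulls of arm 3:
print(q_star[3], [round(reward(3), 2) for _ in range(5)])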
H: What are some good error metrics for multi-label (not multi-class) problems in industry? What are some good error metrics for multi-label (not multi-class) problems in industry? http://scikit-learn.org/dev/modules/multiclass.html AI: A common example is the Jaccard similarity coefficient: $J(Y, P) = \frac{|Y~\cap~P|}{|Y~\cup~P|}$ where $P$ is the set of predicted labels for an instance and $Y$ is the true set of labels. This gives a value between $0$ and $1$ for each instance, which you can average over the whole test set to give a score. If $P = Y$, then $J(Y, P) = 1$. This is implemented in scikit-learn as sklearn.metrics.jaccard_similarity_score.
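A tiny worked sketch of that formula (the label sets are made up); on binary indicator matrices, the sklearn.metrics.jaccard_similarity_score function mentioned above should give the same per-instance averaging, though its name has changed in more recent scikit-learn releases.

import numpy as np

def mean_jaccard(y_true_sets, y_pred_sets):
    scores = []
    for Y, P in zip(y_true_sets, y_pred_sets):
        union = Y | P
        scores.append(1.0 if not union else len(Y & P) / len(union))  # J(Y, P)
    return np.mean(scores)

y_true = [{"sports", "news"}, {"finance"}, {"news"}]
y_pred = [{"sports"}, {"finance", "news"}, {"news"}]
print(mean_jaccard(y_true, y_pred))   # (1/2 + 1/2 + 1) / 3 = 0.666...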
H: What is a better approach for cross-validation with time-related predictors I was given a data set with different predictors about a store and the idea is to forecast the number of daily shoppers. The predictors are the weekday, time of the day (morning, afternoon, evening), week number, month, weather (humidity, dew point, temperature), holidays. The outcome variable is the number of visitors. I want to build a regression model to predict the number of visitors using traditional machine learning algorithms such as random forests, SVM, and the like. My main concern is how to validate this model using CV since some of the predictors are time-related. Plain vanilla CV cannot be performed here. In this question, they suggest a way to perform this, but my problem is that I only have data from June 2015 to present. My initial idea was the following: a. train with data from June 2015-December 2015, test with January 2016; b. train with June 2015-January 2016, test with February 2016. Each time, one month of data is added to the training data after having assessed the error for that month. Then compute the average performance. My questions: Is this approach reasonable or not? If so, should I get rid of the month variable? Note that in a., for instance, I am testing with data that belongs to a different month than the ones used for training. I mean, for the training I used data from June to December 2015, but I am testing on January 2016. Seasonality can be something I am missing. How to validate such models in general? AI: One such way to handle time series cross-validation is to take a look at the below Python code from here:

import numpy as np

def performTimeSeriesCV(X_train, y_train, number_folds, algorithm, parameters):
    """
    Given X_train and y_train (the test set is excluded from the Cross Validation),
    number of folds, the ML algorithm to implement and the parameters to test,
    the function acts based on the following logic: it splits X_train and y_train
    in a number of folds equal to number_folds. Then it trains on the earlier folds
    and tests accuracy on the following one:
    - Train on fold 1, test on 2
    - Train on fold 1-2, test on 3
    - Train on fold 1-2-3, test on 4
    ....
    Returns mean of test accuracies.
    """
    print 'Parameters --------------------------------> ', parameters
    print 'Size train set: ', X_train.shape

    # k is the size of each fold. It is computed dividing the number of
    # rows in X_train by number_folds. This number is floored and coerced to int
    k = int(np.floor(float(X_train.shape[0]) / number_folds))
    print 'Size of each fold: ', k

    # initialize to zero the accuracies array. It is important to stress that
    # in the CV of Time Series if I have n folds I test n-1 folds as the first
    # one is always needed to train
    accuracies = np.zeros(number_folds - 1)

    # loop from the first 2 folds to the total number of folds
    for i in range(2, number_folds + 1):
        print ''

        # the split is the percentage at which to split the folds into train
        # and test. For example, when i = 2 we are taking the first 2 folds out
        # of the total available. In this specific case, we have to split the
        # two of them in half (train on the first, test on the second),
        # so split = 1/2 = 0.5 = 50%.
        # When i = 3 we are taking the first 3 folds out of the total available,
        # meaning that we have to split the three of them in two at
        # split = 2/3 = 0.66 = 66% (train on the first 2 and test on the following).
        split = float(i - 1) / i

        # example with i = 4 (first 4 folds):
        # Splitting the first 4 chunks at 3 / 4
        print 'Splitting the first ' + str(i) + ' chunks at ' + str(i - 1) + '/' + str(i)

        # as we loop over the folds X and y are updated and increase in size.
        # This is the data that is going to be split and it increases in size
        # in the loop as we account for more folds. If k = 300, with i starting from 2
        # the result is the following in the loop
        # i = 2
        # X = X_train[:(600)]
        # y = y_train[:(600)]
        #
        # i = 3
        # X = X_train[:(900)]
        # y = y_train[:(900)]
        # ....
        X = X_train[:(k * i)]
        y = y_train[:(k * i)]
        print 'Size of train + test: ', X.shape   # the size of the dataframe is going to be k*i

        # X and y contain both the folds to train and the fold to test.
        # index is the integer telling us where to split, according to the
        # split percentage we have set above
        index = int(np.floor(X.shape[0] * split))

        # folds used to train the model
        X_trainFolds = X[:index]
        y_trainFolds = y[:index]

        # fold used to test the model
        X_testFold = X[(index + 1):]
        y_testFold = y[(index + 1):]

        # i starts from 2, so the zeroth element of the accuracies array is i-2.
        # performClassification() is a function which takes care of a classification
        # problem. This is only an example and you can replace this function with
        # whatever ML approach you need.
        accuracies[i - 2] = performClassification(X_trainFolds, y_trainFolds, X_testFold, y_testFold, algorithm, parameters)

        # example with i = 4:
        # Accuracy on fold 4 : 0.85423
        print 'Accuracy on fold ' + str(i) + ': ', accuracies[i - 2]

    # the function returns the mean of the accuracy on the n-1 folds
    return accuracies.mean()

If, on the other hand, you prefer R, you can explore the timeslice method in the caret package and make use of the following code:

library(caret)
library(ggplot2)
data(economics)

myTimeControl <- trainControl(method = "timeslice",
                              initialWindow = 36,
                              horizon = 12,
                              fixedWindow = TRUE)

plsFitTime <- train(unemploy ~ pce + pop + psavert,
                    data = economics,
                    method = "pls",
                    preProc = c("center", "scale"),
                    trControl = myTimeControl)
H: Check Accuracy of Model Provided by Consultant My company has recently engaged a consultant firm to develop a predictive model to detect defective works. I understand that there are many ways to validate the model, for example using k-fold cross-validation, and I believe that the consultant firm will carry out the validation before submitting the model to us. However, on the employer's side, how can I check the accuracy of the model developed by the consultant firm? Someone suggested that I give the 2000-2015 data to the consultant firm and keep the 2016 data for our own checking. However, a model with good accuracy on 2016 data does not imply that it will have good predictive power in the future. In my view, keeping the 2016 data for checking seems like adding one more test set for validation, which is unnecessary since I already have "k-fold" cross-validation. Could someone advise what the employer can do to check the consultant's model? AI: Cross-validation can be used in parameter tuning or model selection, but it does not replace a final evaluation of the model on data that was never used to build it. When developing a model, you divide your data between train, validation and testing. In the best case scenario, the test set is only used once, at the end, to score the model. You should definitely keep the 2016 data. If you hand over all your data, it is easy to end up with a model that has learned your expected outputs "by heart" but will not generalize well to future years. This is overfitting. The only way to know is by testing it on data the consultant never saw: here, the 2016 data. When used to measure model performance, cross-validation can report more than just the average accuracy, and you can use it to select the features that give the best accuracy score.
H: What is stored in heap structure in the following example? I am planning to use a heap structure to find the minimum distance between a set of 2D points and form a cluster, and after spending a couple of hours surfing the internet, I still have not found a clear example. Imagine you have a set of 2D points (x1,y1; x2,y2;....;xn,yn) and then you compute all possible distances between these pairs of points to finally get a matrix with all the distance values. After that, I want to use a min-heap structure to sort the data and match the pairs with the minimum distance... and here is my question... Which value/data is really stored at the root and leaves of the heap structure? I know that the root of a min-heap is the shortest distance, but does the data stored in every node of the heap diagram represent the index of each pairwise distance, or the pair of points itself? In the first case, if you store the index value at the top of the heap, do you then need to perform a new search in the distance matrix to obtain which pair of points forms this shortest distance? Is it optimal to do that? I hope my question was clear and thanks for your support. AI: You would put triplets into your heap, storing (distance, i, j) where distance is the distance, i and j are the indices. When you "pop" the topmost (smallest distance) element from the heap using the appropriate algorithm, the second smallest will now become the top. Python has a module heapq for building min-heaps. Simply add tuples with heapq.heappush, and remove the smallest with heapq.heappop. To convert an array into a heap, use heapq.heapify (faster than calling heappush for every element). Note that putting n^2 values into a heap yields runtime O(n^2 log n^2), which is fairly expensive.
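A small self-contained sketch of that triplet scheme with heapq (the points are made up): the distance sits first in the tuple so the heap orders by it, and i, j travel with it, so no second lookup in the distance matrix is needed.

import heapq
from itertools import combinations
from math import dist   # Python 3.8+

points = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (6.0, 0.0)]

# (distance, i, j) triplets for every pair of points
heap = [(dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)]
heapq.heapify(heap)                 # build the min-heap in one pass

d, i, j = heapq.heappop(heap)       # closest pair
print("closest pair:", i, j, round(d, 3))

d2, i2, j2 = heapq.heappop(heap)    # next closest pair
print("next closest:", i2, j2, round(d2, 3))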
H: SVD for recommendation engine I'm trying to build a toy recommendation engine to wrap my mind around Singular Value Decomposition (SVD). I've read enough content to understand the motivations and intuition behind the actual decomposition of the matrix A (a user x movie matrix). I need to know more about what goes on after that. from numpy.linalg import svd import numpy as np A = np.matrix([ [0, 0, 0, 4, 5], [0, 4, 3, 0, 0], ... ]) U, S, V = svd(A) k = 5 #dimension reduction A_k = U[:, :k] * np.diag(S[:k]) * V[:k, :] Three Questions: Do the values of matrix A_k represent the predicted/approximate ratings? What role / what steps does cosine similarity play in the recommendation? And finally, I'm using Mean Absolute Error (MAE) to calculate my error. But what values am I comparing? Something like MAE(A, A_k) or something else? AI: You can use SVD to build a recommendation engine, but I don't think it's the best way to get intuition around what's going on under the hood. Regardless, here's a presentation with more details, I'd recommend reviewing slide 9. And to answer your questions: A_k represents an embedding dimension (i.e. the low-rank approximation) that is used to predict the user-rating matrix. The cosine similarity is just the dot product for user $i$ and item $j$, which maps to the predicted rating for user $i$ and item $j$. The dot product is what defines the users and items as being similar. Yes, you should use the MAE on A and A_k. You may prefer to use MSE instead. This measures the quality of your predictions for user $i$ and item $j$. Note, this is obviously the MSE of a matrix, which is the Frobenius Norm. I think an easier way to understand SVD is to see it applied to image compression for different components. See this presentation here.
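A hedged sketch of point 3 (not part of the original answer): compute the low-rank reconstruction and measure the MAE only on the observed (non-zero) ratings, since the zeros in A stand for movies the user never rated; the third row of the toy matrix replaces the "..." in the question and is made up.

import numpy as np

A = np.array([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    [5, 0, 0, 0, 1],
], dtype=float)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]   # predicted/approximate ratings

observed = A > 0                              # mask of ratings that actually exist
mae = np.abs(A[observed] - A_k[observed]).mean()
print(np.round(A_k, 2))
print("MAE on observed ratings:", round(mae, 3))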
H: roc_auc score GridSearch I am experimenting with xgboost. I ran GridSearchCV with scoring='roc_auc' on xgboost. The best classifier scored ~0.935 (this is what I read from the GS output). But now when I run the best classifier on the same data: roc_auc_score(Y, clf_best_xgb.predict(X)) it gives me a score of ~0.878. Could you tell me how the score is evaluated in both cases? Thanks AI: Try using predict_proba instead of predict as below. It should give you the same number. roc_auc_score(Y, clf_best_xgb.predict_proba(X)[:,1]) When we compute AUC, most of the time people use the probability instead of the actual class.
H: Sigmoid vs Relu function in Convnets The question is simple: is there any advantage in using the sigmoid function in a convolutional neural network? Because every website that talks about CNNs uses the ReLU function. AI: The reason that sigmoid functions are being replaced by rectified linear units is because of the properties of their derivatives. Let's take a quick look at the sigmoid function $\sigma$ which is defined as $\frac{1}{1+e^{-x}}$. The derivative of the sigmoid function is $$\sigma '(x) = \sigma(x)*(1-\sigma(x))$$ The range of the $\sigma$ function is between 0 and 1. The maximum of the $\sigma'$ derivative function is equal to $\frac{1}{4}$. Therefore when we have multiple stacked sigmoid layers, by the backprop derivative rules we get multiple multiplications of $\sigma'$, and as we stack more and more layers the maximum gradient decreases exponentially. This is commonly known as the vanishing gradient problem. The opposite problem is when the gradient is greater than 1, in which case the gradients explode toward infinity (exploding gradient problem). Now let's check out the ReLU activation function, which is defined as: $$R(x) = max(0,x)$$ Its graph is flat at zero for negative inputs and the identity line for positive inputs. If you look at the derivative of the function (the slope of that graph), the gradient is either 1 or 0. In this case we do not have the vanishing gradient problem or the exploding gradient problem. And since the general trend in neural networks has been deeper and deeper architectures, ReLU became the choice of activation. Hope this helps
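A toy numerical illustration of that bound (my own sketch, ignoring the weight matrices): the gradient factor contributed by n stacked sigmoid activations is at most 0.25 per layer, while ReLU contributes exactly 1 on its active side.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

for n_layers in (1, 5, 10, 20):
    sig_bound = sigmoid_grad(0.0) ** n_layers   # best case: 0.25 ** n_layers
    relu_factor = 1.0 ** n_layers               # ReLU on its active side
    print(f"{n_layers:2d} layers: sigmoid factor <= {sig_bound:.2e}, relu factor = {relu_factor:.0f}")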
H: K-Means Algorithm - Feature Selection Suppose I have this dataset: Employee_ID Store_ID Company_ID Stock_ID Product_Value 1 1 1 2 3.7 4 1 4 2 8 ... Where: Employee_ID: the unique number for the employee that sold the product; Store_ID: unique number for the store chain; Company_ID: unique number for the product supplier; Stock_ID: unique number for the product purchased; Product_Value: product value. And I want to make a segmentation over my dataset using K-Means... Does it make sense to use all the variables in my dataset? AI: So long as you want to use those variables to define closeness then yes, you just have to encode them differently. You should use a one-hot encoding for the discrete variables (e.g., Employee_ID, Store_ID, etc.) and just the Product_Value as is. I answered a similar question about K-means for just categorical variables (like Employee_ID), so I've copied the code below; it gives a quick demo of using the clusters as a feature for predicting something after you use K-means. As I said in my old answer, in general this framework isn't optimal, but it's okay for a simulation.

library(glmnet)
library(Matrix)

n <- 1e5
nclusters <- 5
set.seed(420)
ls <- data.frame(sample(letters, n, replace=TRUE))
xs <- sparse.model.matrix(~., data=ls)
print(head(xs))

# Now let's run k-means
out <- kmeans(xs, centers=nclusters)
bs <- rep(1, dim(xs)[2])

# Let's run k-means on the different categories
clusterpred <- data.frame(out[[1]])
ys <- xs %*% bs + rnorm(n)
print(table(clusterpred))

# Now let's use a clustered data set to predict some outcome
cxs <- sparse.model.matrix(~., data=clusterpred)
model <- glmnet(y=ys, x=xs, alpha=0)
cmodel <- glmnet(y=ys, x=cxs, alpha=0)

# Predictions
yhat <- predict(model, xs)
yhatc <- predict(cmodel, cxs)

# Looking at the different RMSEs
print(sqrt( sum( (ys-yhat)**2 )))
print(sqrt( sum( (ys-yhatc)**2 )))
H: Recommendation System to integrate with an android app I need to build a recommendation system that takes certain parameters as input, computes a score and orders suggestions to users based on this score. Well, this is what I need to do, loosely speaking. I am new to the scene of data science and haven't come across anything that could help me out. This guy asked a similar question 5 years ago. I basically need something like this: https://stackoverflow.com/questions/6828013/recommendation-systems-for-a-mobile-market-and-algorithm-suggestion Apologies for the open-ended and vague question. I just need to be pointed in the right direction. Edit: I do not intend to place the system on the android device. It will be running on the back end. AI: I don't want to sound harsh, but building a recommender system for an Android app isn't really data science. In any recommender system context, the science on the data is usually done in an ad-hoc manner. As for the algorithm implementation, validation and scaling, data science can play a part in that. But this is still not related to serving the data in a mobile application or a website; that's mainly computer engineering. Recommender system algorithms are, in any case, quite expensive to compute on a mobile device. They are best run on a dedicated server, which can also play a role in serving the information when needed. I can't get into the details of this here, as it's too broad to fit in a single answer on the site. I advise you to read my answer on the time/space complexity challenge in building these types of applications. That may help you get some insights about the architectural design of a recommender engine in the real world. I hope that this answers your question.
H: Why is this Binning by Median code wrong? I was working on Binning by Mean, Median and Boundary in R.

# R CODE
a=c(20.5, 52.5, 62.6, 72.4, 104.8, 63.9, 35.3, 83.9, 37.4, 71.6, 74.6, 44.5, 66.6, 56.1, 45.3, 37.2)
a=sort(a)
binsize=4
median(a[1:4])

# BINS ARE
for(i in 1:length(a)) {
  if(i%%binsize==0) {
    print("HI")
    print(a[i-3])
    print(a[i-2])
    print(a[i-1])
    print(a[i])
  }
}

# BINNING BY MEAN
sum=0
for(i in 1:length(a)) {
  sum=sum+a[i]
  if(i%%binsize==0){
    avg=sum/binsize
    sum=0
    print(rep.int(avg,binsize))
  }
}

# BINNING BY MEDIAN
i=1
for(i in 1:length(a)){
  if(i%%binsize==0) {
    print(rep.int(median(a[i-3:i]),binsize))
  }
}

Can anyone tell me why binning by median is giving me the wrong output? The median(a[i-3:i]) for the 1st bin returns a value which is not the same as median(a[1:4]) for the 1st bin. Why? AI: The error is due to the well-known R gotcha that the : (colon operator, which calls seq()) takes higher precedence than arithmetic. Always parenthesize arguments to : if they involve arithmetic or are expressions: a[(i-3):i] Your code a[i-3:i] doesn't do what you want (which is a[(i-3):i]); it does a[i - (3:i)]. So the medians you are printing are computed over these index vectors:

4 - (3:4)    # i.e. c(1, 0); the 0 is dropped, so only a[1] is used
8 - (3:8)    # i.e. 5, 4, 3, 2, 1, 0 -> a[1:5]
12 - (3:12)  # i.e. 9, 8, ..., 1, 0 -> a[1:9]
16 - (3:16)  # i.e. 13, 12, ..., 1, 0 -> a[1:13]

PS some coding-style tips. You don't need to iterate over all possible values of i and check them modulo binsize, just do:

for(i in seq(binsize, length(a), by=binsize)) {
  ...
}

so in your median case:

for(i in seq(binsize, length(a), by=binsize)) {
  print(rep.int(median(a[(i-3):i]),binsize))
}
[1] 36.25 36.25 36.25 36.25
[1] 48.9 48.9 48.9 48.9
[1] 65.25 65.25 65.25 65.25
[1] 79.25 79.25 79.25 79.25

But in fact you can replace even that with:

split(a, ceiling(seq_along(a)/binsize))

as per the "Split a vector into chunks in R" question. To make it even clearer, you could define a helper function

chunk <- function(x, binsize) {
  split(x, ceiling(seq_along(x)/binsize))
}

Then you can replace the for-loop with sapply:

sapply(split(a, ceiling(seq_along(a)/binsize)), mean)
sapply(chunk(a,binsize), mean)
     1      2      3      4
32.600 49.600 66.175 83.925

sapply(split(a, ceiling(seq_along(a)/binsize)), median)
sapply(chunk(a,binsize), median)
    1     2     3     4
36.25 48.90 65.25 79.25

Much cleaner, easier to read, and prevents errors, right? The generalization of the colon operator is the seq() function, give it a read, it's pretty useful.
H: How to handle a zero factor in Naive Bayes Classifier calculation? If I have a training data set and I train a Naive Bayes Classifier on it and I have an attribute value which has probability zero. How do I handle this if I later want to predict the classification on new data? The problem is, if there is a zero in the calculation the whole product becomes zero, no matter how many other values I got which maybe would find another solution. Example: $P(x|spam=yes) = P(TimeZone = US | spam=yes) \cdot P(GeoLocation = EU | spam = yes) \cdot ~ ... ~ = 0.004 $ $P(x|spam=no) = P(TimeZone = US | spam=no) \cdot P(GeoLocation = EU | spam = no) \cdot ~ ... ~ = 0 $ The whole product becomes $0$ because in the training data the attribute TimeZone US is always Yes in our small training data set. How can I handle this? Should I use a bigger set of training data or is there another possibility to overcome this problem? AI: An approach to overcome this 'zero frequency problem' in a Bayesian setting is to add one to the count for every attribute value-class combination when an attribute value doesn’t occur with every class value. So, for example, say your training data looked like this: $$\begin{array}{c|c|c|} & \text{Spam} = yes & \text{Spam} = no \\ \hline \text{TimeZone} = US & 10 & 5 \\ \hline \text{TimeZone} = EU & 0 & 0 \\ \hline \end{array}$$ $ P(\text{TimeZone} = US | \text{Spam} = yes) = \frac{10}{10} = 1$ $P(\text{TimeZone} = EU | \text{Spam} = yes) = \frac{0}{10} = 0$ Then you should add one to every value in this table when you're using it to calculate probabilities: $$\begin{array}{c|c|c|} & \text{Spam} = yes & \text{Spam} = no \\ \hline \text{TimeZone} = US & 11 & 6 \\ \hline \text{TimeZone} = EU & 1 & 1 \\ \hline \end{array}$$ $ P(\text{TimeZone} = US | \text{Spam} = yes) = \frac{11}{12}$ $P(\text{TimeZone} = EU | \text{Spam} = yes) = \frac{1}{12}$
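A small numeric sketch of that add-one (Laplace) smoothing using the table above; alpha is the smoothing constant, and alpha = 1 reproduces the 11/12 and 1/12 figures.

import numpy as np

# Rows = TimeZone (US, EU), columns = Spam (yes, no), counts from the example table.
counts = np.array([[10, 5],
                   [0,  0]])

def conditional_probs(counts, alpha):
    # P(TimeZone | Spam), column-wise, with add-alpha smoothing.
    smoothed = counts + alpha
    return smoothed / smoothed.sum(axis=0, keepdims=True)

print(conditional_probs(counts, alpha=0))   # unsmoothed: P(EU | yes) = 0
print(conditional_probs(counts, alpha=1))   # smoothed:   P(US | yes) = 11/12, P(EU | yes) = 1/12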
H: Example of Logistic Regression using a proportion as the dependent variable? I am trying to run a logistic regression on a data set where my dependent variable is a proportion of a binary variable, rather than the binary variable itself. I have seen a ton of documentation that says this is possible, but I am having trouble finding an example of how to actually do it. I am open to using scikit learn, statsmodels, or any other library that will do it. I have added a photo showing a simplified version of my data. successes here is just a count of a binary (1/0) outcome. instead of having the individual observations, I only have them rolled up, but my understanding is that it is still a logistic regression problem. I want to predict the dependent variable 'proportion' based on the features. I understand this conceptually, but am just trying to find an example of this in python. all of the examples I have seen assume a binary dependent variable. your help is appreciated! AI: As the link by @Spacedman shows, binomial regression works well. If you don't have the attempts and successes but, instead, have just the proportion, then you'll want to use beta-regression. After a little digging, it doesn't seem like this is available in Python. Here's a blog post demoing it in R.
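Since the question keeps the success counts and not just the proportion, one concrete option (a hedged sketch, not from the original answer) is a binomial GLM in statsmodels, whose Binomial family accepts a two-column (successes, failures) response; the data frame and column names below are invented to mimic the rolled-up table in the question.

import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "successes": [8, 3, 14, 6, 10, 2],
    "attempts":  [10, 9, 20, 12, 15, 8],
    "feature_1": [1.2, -0.3, 0.8, 0.1, 0.9, -1.0],
    "feature_2": [0, 1, 0, 1, 0, 1],
})

endog = df[["successes"]].copy()
endog["failures"] = df["attempts"] - df["successes"]   # two-column response
exog = sm.add_constant(df[["feature_1", "feature_2"]])

model = sm.GLM(endog, exog, family=sm.families.Binomial())
result = model.fit()
print(result.params)   # coefficients on the log-odds scale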
H: How do you evaluate ML model already deployed in production? So, to be more clear, let's consider the problem of loan default prediction. Let's say I have trained and tested multiple classifiers offline and ensembled them. Then I deployed this model to production. But because people change, the data and many other factors change as well, and the performance of our model will eventually decrease. So then it needs to be replaced with a new, better model. What are the common techniques, model stability tests, model performance tests, and metrics after deployment? How do you decide when to replace the current model with a newer one? AI: What you should consider more in a production scenario is the revenue for your model, and an A/B test is a must. In your case, you can measure exactly how much money your model for loan default prediction brings you, or how much loss your model saves for you. Besides that, you can check whether the distribution of your predictions is consistent with that of the ground truth, to track both the accuracy and the stability of your model. Hope this helps you; good luck.
H: Understanding autoencoder loss function I've never understood how to calculate an autoencoder loss function because the prediction has many dimensions, and I always thought that a loss function had to output a single number / scalar estimate for a given record. However, on GitHub I recently came across a repository that has implemented an example autoencoder in TensorFlow, and the squared error implementation is a lot simpler than I thought: cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2)) The TensorFlow documentation on reduce_mean says, among other things: If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned. Can I conclude from all of this that the squared error of an autoencoder prediction is just the average across all of the record's dimensions? AI: Yes, you are correct in thinking that the squared error of an autoencoder prediction for a single example is the average of the squared error of the prediction for all dimensions. Similarly, the squared error for a whole batch of examples will be the average of the error of each example in the batch.
H: Deep Learning Project to Predict Stock Prices So I have a background in computer programming and a little in machine learning in general. What I would like to do is create a fun project in A.I. with deep learning. I have a dataset that has a whole bunch of stock prices at a certain date, with a bunch of features for each entry to go with it. I also have some "experts" who made predictions on whether the stock will go up or down. As my dataset grows I can evolve the game to make selections from multiple stocks...etc Essentially what I would love to do is create an A.I. app that will be fed the same data that the "experts" had and see if I can create something more accurate and beat them at it. Is this a viable approach? AI: Essentially what I would love to do is create an A.I. app that will be fed the same data that the "experts" had and see if I can create something more accurate and beat them at it. Is this a viable approach? Sure, you can use one or more supervised learning techniques to train a model here. You have features, a target variable and ground truth for that variable. In addition to applying ML you have learned, all you need to do to test your application fairly is reserve some of the data you have with expert predictions for comparison as test data (i.e. do not train using it). I would caveat that with some additional thoughts: You haven't really outlined an "approach" here, other than mentioning use of ML. Be careful not to leak future data back into the predictive model when building a test version. Predicting stock and markets is hard, because they react to their own predictability and many professional organisations trade on the slightest advantage they can calculate, with experienced and highly competent staff both gathering and analysing data. Not directly part of the answer, but to anyone just starting out and discovering machine learning, and finding this Q&A: Please don't imagine rich rewards from predicting markets using stats at home, it doesn't happen. If you think that this is a route to "beating the market" be aware that you are far from the first to think of doing this, and such a plan can be summarised like this: Market Data + ML ??? Profit You can fill in the ??? by learning loads about financial markets - i.e. essentially by becoming one of the experts. ML is not a short-cut, but it might be a useful tool if you are, or plan to be, a market analyst.
H: Filtering outliers in Apache Spark based on calculations of previous values I'm processing geospatial data using Spark 2.0 Dataframes with the following schema: root |-- date: timestamp (nullable = true) |-- lat: double (nullable = true) |-- lon: double (nullable = true) |-- accuracy: double (nullable = true) |-- track_id: long (nullable = true) I have seen that there are jumps of the location signal to a completely different place. The strange thing is that the signal then remains at the remote location for a certain time, say around 25 seconds or 5 samples, and then jumps back to where I stand. I'd like to remove these outliers by calculating the speed between the current record and the "last valid record". If the speed is above a given threshold, the current record should be dropped and the "last valid record" remains the same. If the speed is below the threshold, the current record is added to the result data frame and becomes the new "last valid record". I'm using Spark 2.0 with Dataframes. Any suggestions on how to implement this strategy, or any better strategy, are highly appreciated. Thanks. PS: I asked the same question on Stack Overflow, with a concrete implementation. But since I'm not sure if this is the right approach, and I do not want to bias the answers towards a certain Spark method, I am asking here for any suggestions. https://stackoverflow.com/questions/41002844/how-to-filter-outlier-rows-from-spark-dataframe-based-on-distance-to-previous-va
Since you're already in a Spark Dataset, here's the strategy:
1. Calculate a speed column: $p_t - p_{t-1}$ where $p$ is the position
2. Calculate a "jump" column: 1 if the speed is over a certain threshold, -1 if it is below the negative threshold, 0 otherwise
3. Calculate a "jumpsum" column: the cumulative sum of the jump column
4. Bad data will have a jumpsum of 1; filter them out
Here's how you do it:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession

val ss: SparkSession = SparkSession.builder.getOrCreate()
import ss.implicits._  // needed for the $"column" syntax below

// note the file must be on each executor in the same directory
val ds = ss.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("file:///home/peter/data.csv")

val w = Window.partitionBy().orderBy("datetime")
val threshold = 10

def jump(v: Double): Int = if (v > threshold) 1 else if (v < -threshold) -1 else 0
val sqlJump = udf(jump _)

val cleanDS = ds
  .withColumn("speed", $"position" - lag($"position", 1).over(w.rowsBetween(-1, -1)))
  .withColumn("jump", sqlJump($"speed"))
  .withColumn("jumpsum", sum($"jump").over(w.rowsBetween(Long.MinValue, 0)))
Here's what the output Dataset looks like (I didn't remove the bad rows so you can see the calculation):
+--------+--------+-----+----+-------+
|datetime|position|speed|jump|jumpsum|
+--------+--------+-----+----+-------+
|       1|       1| null|null|   null|
|       2|       1|    0|   0|      0|
|       3|       1|    0|   0|      0|
|       4|       1|    0|   0|      0|
|       5|       1|    0|   0|      0|
|       6|       2|    1|   0|      0|
|       7|       1|   -1|   0|      0|
|       8|       1|    0|   0|      0|
|       9|      46|   45|   1|      1|
|      10|      45|   -1|   0|      1|
|      11|      48|    3|   0|      1|
|      12|      45|   -3|   0|      1|
|      13|       1|  -44|  -1|      0|
|      14|       2|    1|   0|      0|
|      15|       1|   -1|   0|      0|
+--------+--------+-----+----+-------+
The "data.csv" is just the first two columns of that Dataset:
datetime,position
1,1
2,1
3,1
4,1
...etc.
All that's left to do is filter out the rows with jumpsum === 1, e.g. cleanDS.filter($"jumpsum" =!= 1) (note that this also drops the very first row, whose jumpsum is null).
H: Can you interpret probabilistically the output of a Support Vector Machine? I am trying to build a binary classification system using different classification algorithms like random forests, support vector machines, AdaBoost. I want to use the output of these classifiers to visualize a score. For example, when using random forests, I would like to use the probability of a sample belonging to class A to build a score from 0 to 100. Given that random forests output a probability (from 0 to 1), using it as the score is intuitive (I would just multiply it by 100). However, given that SVMs output a classification but not a probabilistic output (i.e. a distance to the hyperplane, but not a probability), would it be legitimate to use the distance to the hyperplane as some sort of "pseudo probability"? I would, for example, apply min-max scaling to the distance to the hyperplane for all samples so that all distances are scaled from 0 to 1. I want to be sure that I can use the distance to the hyperplane as a pseudo-probability and that this pseudo-probability is comparable to the probability of belonging to a given class output by the random forest. For example, that a sample with a probability of .80 of belonging to class A (from the random forest) means the same thing as a sample with a (min-max transformed) score of .80 of belonging to class A according to the SVM. AI: One standard way to obtain a "probability" out of an SVM is to use Platt scaling. See, e.g., this Wikipedia page and this question on Stats.SE. Platt scaling involves fitting a logistic regression model to predict the "probability", based on the distance to the hyperplane.
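For reference, a minimal scikit-learn sketch of Platt scaling (X_train, y_train, X_test are placeholders): SVC(probability=True) fits a sigmoid on the decision values internally, and CalibratedClassifierCV(method="sigmoid") does the same thing explicitly for any SVM.

from sklearn.svm import SVC, LinearSVC
from sklearn.calibration import CalibratedClassifierCV

# Option 1: let SVC run Platt scaling internally
svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
proba_svm = svm.predict_proba(X_test)[:, 1]      # calibrated probability of class A

# Option 2: wrap any SVM in an explicit sigmoid (Platt) calibration
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5).fit(X_train, y_train)
proba_platt = platt.predict_proba(X_test)[:, 1]

# A 0-100 score roughly comparable to the random forest's probability
score = 100 * proba_platt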
H: Which machine learning algorithm should I apply to differentiate question difficulty levels from users' results Here's the scenario: there's a database with thousands of single-option questions for testing a specific skill, and a large number of users (either professional or amateur in this skill), each of whom answers 10 random questions from the database. The only thing that I can think of is to differentiate question difficulty level according to the correct rate of each question. But how can I make full use of other information, such as:
- the correct rate from each user's perspective, fed back into the difficulty estimate of the questions (if user A answers 9 out of 10 questions correctly, then the one question A got wrong (question_10) is more likely to be hard than if user B, who answered only 1 out of 10 correctly, also got question_10 wrong)
- the answer time for each question by each user
Could anyone give me some ideas on this model, e.g. where I should dig deeper to make the question difficulty levels more accurate? Great thanks! AI: I suggest you look at item response theory and the Rasch model. They construct a model that attempts to identify both (a) the proficiency/knowledge/ability of the user and (b) the level of difficulty of the question.
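To make the Rasch-model idea concrete, here is a minimal numpy sketch on toy simulated data (a real analysis would use a dedicated IRT library; everything below is a hedged illustration). Each user u gets an ability theta_u, each question q a difficulty b_q, and P(correct) = sigmoid(theta_u - b_q), so a question only counts as hard if strong users also miss it:

import numpy as np

rng = np.random.default_rng(0)
n_users, n_questions = 200, 50

# Toy response matrix: 1 = correct, 0 = wrong (stands in for your real answer log)
true_theta = rng.normal(0, 1, n_users)
true_b = rng.normal(0, 1, n_questions)
responses = rng.binomial(1, 1 / (1 + np.exp(-(true_theta[:, None] - true_b[None, :]))))

# Fit the Rasch model by gradient ascent on the Bernoulli log-likelihood
theta = np.zeros(n_users)       # user abilities
b = np.zeros(n_questions)       # question difficulties
lr = 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = responses - p
    theta += lr * resid.mean(axis=1)   # d logL / d theta_u (averaged)
    b -= lr * resid.mean(axis=0)       # d logL / d b_q (note the sign)
    shift = theta.mean()               # the model is only identified up to a shift
    theta -= shift
    b -= shift

# Higher b = harder question; higher theta = stronger user.

Answer times could be layered on top, e.g. as a covariate or a separate response-time model, but that goes beyond the basic Rasch model.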
H: Intuition Behind Restricted Boltzmann Machine (RBM) I went through Geoff Hinton's Neural Networks course on Coursera and also through an introduction to restricted Boltzmann machines, but I still didn't understand the intuition behind RBMs. Why do we need to compute energy in this machine? And what is the use of the probability in this machine? I also saw this video. In the video, he just wrote the probability and energy equations before the computation steps and didn't appear to use them anywhere. Adding to the above, I am not sure what the likelihood function is for. AI: RBMs are an interesting beast. To answer your question, and to jog my memory on them, I'll derive RBMs and talk through the derivation. You mentioned that you're confused about the likelihood, so my derivation will be from the perspective of trying to maximize the likelihood. So let's begin. RBMs contain two different sets of neurons, visible and hidden; I'll denote them $v$ and $h$ respectively. Given a specific configuration of $v$ and $h$, we map it to the probability space. $$p(v,h) = \frac{e^{-E(v,h)}}{Z}$$ There are a couple more things to define. The surrogate function we use to map from a specific configuration to the probability space is called the energy function $E(v,h)$. The $Z$ constant is a normalization factor to ensure that we actually map to the probability space. Now let's get to what we're really looking for: the probability of a set of visible neurons, in other words, the probability of our data. $$Z = \sum_{v \in V}\sum_{h \in H}e^{-E(v,h)}$$ $$p(v)=\sum_{h \in H}p(v,h)=\frac{\sum_{h \in H}e^{-E(v,h)}}{\sum_{v \in V}\sum_{h \in H}e^{-E(v,h)}}$$ Although there are a lot of terms in this equation, it simply comes down to writing the correct probability equations. Hopefully, so far, this has helped you realize why we need the energy function to calculate the probability, or, as is done more usually, the unnormalized probability $p(v)*Z$. The unnormalized probability is used because the partition function $Z$ is very expensive to compute. Now let's get to the actual learning phase of RBMs. To maximize the likelihood, for every data point, we have to take a gradient step to make $p(v)=1$. Getting the gradient expressions takes some mathematical acrobatics. The first thing we do is take the log of $p(v)$. We will be operating in the log probability space from now on in order to make the math feasible. $$\log(p(v))=\log[\sum_{h \in H}e^{-E(v,h)}]-\log[\sum_{v \in V}\sum_{h \in H}e^{-E(v,h)}]$$ Let's take the gradient with respect to the parameters in $p(v)$: \begin{align} \frac{\partial \log(p(v))}{\partial \theta}=& -\frac{1}{\sum_{h' \in H}e^{-E(v,h')}}\sum_{h' \in H}e^{-E(v,h')}\frac{\partial E(v,h')}{\partial \theta}\\ & + \frac{1}{\sum_{v' \in V}\sum_{h' \in H}e^{-E(v',h')}}\sum_{v' \in V}\sum_{h' \in H}e^{-E(v',h')}\frac{\partial E(v',h')}{\partial \theta} \end{align} Now I did this on paper and only wrote down the semi-final equation so as not to waste a lot of space on this site. I recommend you derive these equations yourself. Now I'll write some equations down that will help out in continuing our derivation.
Note that $Zp(v,h)=e^{-E(v,h)}$, $p(v)=\sum_{h \in H}p(v,h)$, and that $p(h|v) = \frac{p(v,h)}{p(v)}$. \begin{align} \frac{\partial \log(p(v))}{\partial \theta}&= -\frac{1}{p(v)}\sum_{h' \in H}p(v,h')\frac{\partial E(v,h')}{\partial \theta}+\sum_{v' \in V}\sum_{h' \in H}p(v',h')\frac{\partial E(v',h')}{\partial \theta}\\ \frac{\partial \log(p(v))}{\partial \theta}&= -\sum_{h' \in H}p(h'|v)\frac{\partial E(v,h')}{\partial \theta}+\sum_{v' \in V}\sum_{h' \in H}p(v',h')\frac{\partial E(v',h')}{\partial \theta} \end{align} And there we go: we have derived maximum likelihood estimation for RBMs. If you want, you can write the last two terms as expectations over their respective distributions (the conditional and the joint probability). Notes on the energy function and the stochasticity of neurons. As you can see above in my derivation, I left the definition of the energy function rather vague. The reason for doing that is that many different versions of RBM implement various energy functions. The one that Hinton describes in the lecture linked above, and shown by @Laurens-Meeus, is: $$E(v,h)=-a^Tv-b^Th-v^TWh.$$ It might be easier to reason about the gradient terms above via the expectation form. $$\frac{\partial \log(p(v))}{\partial \theta}= -\mathop{\mathbb{E}}_{p(h'|v)}\frac{\partial E(v,h')}{\partial \theta}+\mathop{\mathbb{E}}_{p(v',h')}\frac{\partial E(v',h')}{\partial \theta}$$ The expectation of the first term is actually really easy to calculate, and that was the genius behind RBMs. By restricting the connections (there are no hidden-to-hidden links), the conditional expectation simply becomes a forward propagation of the RBM with the visible units clamped. This is the so-called wake phase in Boltzmann machines. Now, calculating the second term is much harder, and usually Monte Carlo methods are utilized to do so. Writing the gradient as an average of Monte Carlo runs: $$\frac{\partial \log(p(v))}{\partial \theta}\approx -\langle \frac{\partial E(v,h')}{\partial \theta}\rangle_{p(h'|v)}+\langle\frac{\partial E(v',h')}{\partial \theta}\rangle_{p(v',h')}$$ Calculating the first term is not hard, as stated above, therefore Monte Carlo is done over the second term. Monte Carlo methods use successive random sampling of the distribution to calculate the expectation (sum or integral). This random sampling in classical RBMs is defined as setting a unit to be either 0 or 1 based on its probability, stochastically: in other words, get a uniform random number; if it is less than the neuron's probability, set the unit to 1, and if it is greater, set it to 0.
H: How to find the filename associated with a prediction in Keras? My question is really simple: how do I find the filename associated with a prediction in Keras? That is, if I have a set of 100 named test samples and I get a numpy array which contains the estimated class probabilities, how do I map the filenames to the probabilities?
import cv2
import os
import glob
import numpy as np


def load_test():
    X_test = []
    y_test = []
    os.chdir(testing_path)  # testing_path is defined elsewhere
    file_list = glob.glob('*.png')
    for test_image in file_list:
        img = cv2.imread(test_image, 1)
        X_test.append(img)
        y_test.append(1)
    return X_test, y_test


if __name__ == '__main__':
    X_test, y_test = load_test()
    X_test = np.array(X_test, dtype=np.uint8)
    X_test = X_test.reshape(X_test.shape[0], 3, 100, 100)
    X_test = X_test.astype('float32')
    X_test /= 255
AI: The order of the files that populate file_list is the same order that X_test appears in, by row. So just match the indices to correlate each filename with its prediction: X_test[0] ~ prediction[0] ~ file_list[0]
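A minimal sketch of that index matching (it assumes a trained Keras model named model, and that file_list is kept available, e.g. by returning it from load_test alongside X_test):

predictions = model.predict(X_test)              # shape: (n_samples, n_classes)

# file_list[i] produced X_test[i], so zipping keeps filenames and predictions aligned
for filename, probs in zip(file_list, predictions):
    print(filename, probs)

# or build a lookup table from filename to predicted probabilities
results = dict(zip(file_list, predictions))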
H: Train/Test Split after performing SMOTE I am dealing with a highly unbalanced dataset so I used SMOTE to resample it. After SMOTE resampling, I split the resampled dataset into training/test sets using the training set to build a model and the test set to evaluate it. However, I am worried that some data points in the test set might actually be jittered from data points in the training set (i.e. the information is leaking from the training set into the test set) so the test set is not really a clean set for testing. Does anyone have any similar experience? Does the information really leak from the training set into the test set? Or does SMOTE actually take care of this and we do not need to worry about it? AI: When you use any sampling technique (specifically synthetic) you divide your data first and then apply synthetic sampling on the training data only. After you do the training, you use the test set (which contains only original samples) to evaluate. The risk if you use your strategy is having the original sample in training (testing) and the synthetic sample (that was created based on this original sample) in the test (training) set.
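A short sketch of that order of operations with scikit-learn and imbalanced-learn (X, y and the parameter values are placeholders): split the original data first, then oversample only the training fold.

from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# 1. Split the ORIGINAL data first, stratified so the test set keeps the true class ratio
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# 2. Oversample the training fold only -- the test set stays untouched and fully original
X_train_res, y_train_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# 3. Train on (X_train_res, y_train_res), evaluate on (X_test, y_test)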
H: Failure tolerant factor coding There are a lot of ML algorithms which cannot directly deal with categorical variables. A very common solution is to apply binary (dummy) coding to still properly handle the categorical nature of the data. Very often, e.g. in scikit-learn or Apache Spark, the actual dummy-coder can only handle numeric values, so label-encoding needs to be performed beforehand. In a real-life ML scenario, the fitted model will encounter new and formerly unknown data. Usually, such a label-encoder (string-indexer) for Spark has the option to either skip (ignore) a row of data which contains any unknown value or to throw an error. If multiple values require coding, this can lead to a big loss of "new" data. Are there any approaches which "tolerate" up to x new values per row and still properly evaluate the fitted pipeline? An example of Spark string-indexing + dummy-coding is shown below.
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}

val df = spark.createDataFrame(Seq(
  (0, "a"),
  (1, "b"),
  (2, "c"),
  (3, "a"),
  (4, "a"),
  (5, "c")
)).toDF("id", "category")

val indexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  .fit(df)
val indexed = indexer.transform(df)

val encoder = new OneHotEncoder()
  .setInputCol("categoryIndex")
  .setOutputCol("categoryVec")
val encoded = encoder.transform(indexed)

encoded.select("id", "categoryVec").show()
http://spark.apache.org/docs/latest/ml-features.html#onehotencoder
AI: For a categorical variable, if the fitted model encounters a previously "unseen" category, i.e. one that did not exist in the training set when the model was trained, then you should skip that record. You could also opt to throw an error so that you're notified of the existence of new categories and can re-train the model based on that trigger. If you skip the records with new categories, you would still be able to evaluate the pipeline successfully. That may be the preferred option for a fully automated production setup. The one-hot encoding creates a new "column" of data for a new category, and if this hasn't been used for training, the ML algorithm has no way of knowing how to use the new dummy variable. In your sample code, each of the categories is encoded into a new variable, e.g. is_category_a = 0/1, is_category_b = 0/1, is_category_c = 0/1, etc. If such a model is sent data with a new category "d", then it would be encoded in another column called is_category_d = 0/1, but the model would ignore this column (or throw an error) since it doesn't expect its input matrix to contain is_category_d. Let's assume you've fit a linear regression model with the coefficients as: $$ y = 1 + 2 \cdot is\_category\_a + 3 \cdot is\_category\_b + 4 \cdot is\_category\_c $$ Now, when you try to evaluate a record with the new category "d", the model is not able to use the new dummy variable since it doesn't have a coefficient for is_category_d. Hence, you should not "tolerate" new categories by processing such records.
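For Spark specifically, this behaviour is exposed through the StringIndexer's handleInvalid option; a hedged PySpark sketch, using the same column names as the Scala example and assuming train_df/new_df DataFrames:

from pyspark.ml.feature import StringIndexer

indexer = StringIndexer(
    inputCol="category",
    outputCol="categoryIndex",
    handleInvalid="skip")   # drop rows with unseen categories; "error" fails loudly instead

indexer_model = indexer.fit(train_df)            # fit on the training data only
indexed_new = indexer_model.transform(new_df)    # rows with unknown categories are skipped

Newer Spark releases also offer handleInvalid="keep", which buckets all unseen labels into one extra index instead of dropping the rows.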
H: # of iterations in Restricted Boltzmann Machine (RBM) I have a training set, and I provide a data point from it to the visible layer. Then the normal process is followed, i.e. positive phase -> negative phase -> reconstruction, and the updates of the weights and bias units take place. Does it end here, or do I have to iterate again with the present visible unit values (i.e. repeat the process for a few fixed iterations)? It is not properly stated anywhere whether, for each data point of the training set, this has to be iterated again and again, or whether it is done just once per data point. AI: When you are unsure about how something (e.g. RBMs) should really be implemented, it is often useful to look at the code of others. G. Hinton himself has published a MATLAB script (here) which demonstrates the training of an RBM. There, you can see that for each mini-batch, he does the positive phase, then the negative phase, and finally updates the weights - and that's it. So he doesn't iterate between the visible and hidden states. However, this is not the full truth: for the weight updates we need to know the probability $p(v,h)$. This is very complicated to calculate, as it would contain a sum over all possible states of the RBM. There is a "mathematical trick" called Gibbs sampling: it allows us to iterate back and forth between visible and hidden units to calculate this probability $p(v,h)$. But: for the result to be correct, we have to iterate forever, which is not really practical. So what Hinton proposed is to iterate for only 1 step instead (this is $CD_1$), so he only goes back and forth once. But you can also iterate any number of times $k$, which is denoted by $CD_k$. While for Hinton's $CD_1$, you would do
visible --> hidden --> visible
for a $CD_3$, you would iterate from visible to hidden and back, three times:
visible --> hidden --> visible --> hidden --> visible --> hidden --> visible
And just to be clear: you run this iteration for every data point in the training set. Actually, you usually make so-called mini-batches of maybe 10 data points, which you run at the same time. Then you calculate the average weight update from this batch. But then, you do this iteration again, and again, and again, until you have finished your training.
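To make the $CD_k$ loop structure explicit, here is a hedged numpy sketch of a single parameter update for one mini-batch of binary data (sigmoid units; it follows the common convention of using hidden probabilities, rather than samples, in the final statistics, and is not Hinton's original MATLAB code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k(v0, W, b_vis, b_hid, k=1, rng=np.random.default_rng()):
    """One CD-k gradient estimate for a mini-batch v0 of binary visible vectors (batch x V)."""
    # Positive phase: clamp the data and sample the hidden units once
    ph0 = sigmoid(v0 @ W + b_hid)
    h = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: k steps of alternating Gibbs sampling
    for _ in range(k):
        pv = sigmoid(h @ W.T + b_vis)
        v = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(v @ W + b_hid)
        h = (rng.random(ph.shape) < ph).astype(float)
    # Gradient estimates: <v h>_data - <v h>_model
    dW = v0.T @ ph0 - v.T @ ph
    db_vis = (v0 - v).sum(axis=0)
    db_hid = (ph0 - ph).sum(axis=0)
    return dW, db_vis, db_hid

# Training loop (sketch): for each mini-batch, add learning_rate * gradient to W, b_vis, b_hid,
# and repeat over the whole training set for many epochs.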
H: Outliers Approach I have a schema in which the majority of the values are IDs. Like this example (this isn't my real data):
ID  SCHOOL_ID  CLASSE_ID  STUDENT_ID  GRADE
1   1          1          1           17
2   1          1          2           10
3   1          1          3           4
4   1          2          19          11
5   1          2          21          8
... ...        ...        ...         ...
Which of these would be a better approach to detect outliers using SQL:
- Standard Deviation + Average
- Try to implement a clustering algorithm
I'm a little bit confused about this... Thanks AI: Student ID (and ID) doesn't make sense as a column to cluster on because it's not continuous, and is unique and high cardinality too so isn't even usable as a categorical value. Clustering school and class ID could make a little sense if converted to a one-hot encoded value, but it's also probably high cardinality. I think you may need to question whether those are even meaningful dimensions to cluster on. You might just drop them.
H: How to add a new category to a deep learning model? Say I have done transfer learning on a pre-trained network to recognize 10 objects. How can I add an $11^{th}$ item that the network can classify without losing all the 10 categories I already trained and the information from the original pre-trained model? A friend told me that active research is going on in this field, but I cannot find any relevant papers or names to search for. AI: If this is just a one-time case, you can simply re-train the neural network. If you frequently have to add new classes, then this is a bad idea. What you want to do in such cases is called content-based image retrieval (CBIR), or simply image retrieval or visual search. I will explain both cases in my answer below. One-time case: If this just happens once - you forgot the 11th class, or your customer changed his/her mind - but it won't happen again, then you can simply add an 11th output node to the last layer (see the sketch after this answer). Initialize the weights to this node randomly, but use the weights you already have for the other outputs. Then, just train it as usual. It might be helpful to fix some weights, i.e. don't train these. An extreme case would be to only train the new weights, and leave all others fixed. But I am not sure whether this will work that well - might be worth a try. Content-based image retrieval: Consider the following example: you are working for a CD store that wants its customers to be able to take a picture of an album cover, and the application shows them the CD they scanned in their online store. In that case, you would have to re-train the network for every new CD they have in the store. That might be 5 new CDs each day, so re-training the network that way is not suitable. The solution is to train a network which maps the image into a feature space. Each image will be represented by a descriptor, which is e.g. a 256-dimensional vector. You can "classify" an image by calculating this descriptor, and comparing it to your database of descriptors (i.e. the descriptors of all CDs you have in your store). The closest descriptor in the database wins. How do you train a neural network to learn such a descriptor vector? That is an active field of research. You can find recent work by searching for keywords like "image retrieval" or "metric learning". Right now, people usually take a pre-trained network, e.g. VGG-16, cut off the FC layers, and use the output of the final convolutional layer as the descriptor vector. You can further train this network, e.g. by using a Siamese network with triplet loss.
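For the one-time case, here is a hedged Keras sketch of "add an 11th output node and keep the old weights". It assumes the old model's last layer is a 10-way softmax Dense layer and uses hypothetical file and variable names:

import numpy as np
from tensorflow.keras import layers, models

# Hypothetical: 'old_model' is the trained 10-class network
old_model = models.load_model("ten_class_model.h5")       # assumed file name
W, b = old_model.layers[-1].get_weights()                 # W: (features, 10), b: (10,)

# Rebuild the head: everything except the old output layer, plus a new 11-way softmax
feature_extractor = models.Model(old_model.input, old_model.layers[-2].output)
new_output = layers.Dense(11, activation="softmax")(feature_extractor.output)
new_model = models.Model(feature_extractor.input, new_output)

# Reuse the 10 learned columns, initialise the 11th column randomly
W_new = np.concatenate([W, np.random.normal(0, 0.01, (W.shape[0], 1))], axis=1)
b_new = np.concatenate([b, [0.0]])
new_model.layers[-1].set_weights([W_new, b_new])

# Optionally freeze the earlier layers so only the new head is trained at first,
# then compile and fine-tune as usual.
for layer in new_model.layers[:-1]:
    layer.trainable = False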
H: How is the hidden layer made binary in a Restricted Boltzmann Machine (RBM)? In an RBM, in the positive phase for updating the hidden layer (which should also be binary) [actually, consider a node h1 ∈ H, the hidden layer vector], to make h1 a binary number we compute the probability of turning on a hidden unit by applying the activation function to its total input (after the activation function operation, we get values in the range between 0 and 1, since the activation function I am using is the sigmoid). My doubt is: how do we make it binary using the computed probability? I don't think "if P >= 0.5, make it 1, else 0" is a proper method. From a bit of literature review, I found this document (by Hinton); in section 3.1 he states "the hidden unit turns on if this probability is greater than a random number uniformly distributed between 0 and 1". What does this actually mean? And also in this link, they say "Then the jth unit is on if upon choosing s uniformly distributed random number between 0 and 1 we find that its value is less than sig[j]. Otherwise it is off." I actually didn't get this. Is the random number generated the same for all h ∈ H? Another query: what about the random number in the next sampling iteration? I saw this video. Just watch the video from that point as per the link. How do you get that sampled number? Do we just have to run rand() in MATLAB and obtain it? Should it be different for each h(i) (oh no! I don't think the machine will learn properly then)? And should the random number be different for each iteration, or can the same random number be used for all iterations to compare against? AI: As you correctly say, we calculate the probability of a hidden unit $h_j$ being one and then make it binary. That probability is given by $$p(h_j=1) = \sigma\left(b_j + \sum_{i=1}^V w_{ij}v_i \right)$$ where $\sigma$ is the sigmoid function, $b_j$ is the bias of hidden unit $h_j$, $V$ is the number of visible units, $v_i$ is the (binary!) state of visible unit $i$, and $w_{ij}$ are the weights. So, your MATLAB code for obtaining the probabilities hidden_probs is something like this (we write the sum implicitly by making a matrix multiplication): hidden_probs = sigmoid(hidden_bias + data * weights) Now, we have the probability $p(h_j=1)$ for each hidden unit $j \in [1,H]$. Now, this is only a probability. And we need a binary number, either 0 or 1. So the only thing we can do is pick a random sample from the probability distribution of $h_j$, which is a Bernoulli distribution. As all hidden units are independent, we need to get one sample for each hidden unit independently. And also, in each training step, we need to draw new samples. To draw these samples from the Bernoulli distribution, you can use the built-in functions of e.g. MATLAB (binornd) or Python (numpy.random.binomial). Note that these functions are to sample from a binomial distribution, but the Bernoulli distribution is just a special case of the binomial distribution with N=1. In MATLAB, that would be something like hidden_states = binornd(1, hidden_probs) which would create a vector hidden_states that contains either 0 or 1, drawn randomly for each probability in hidden_probs. As you probably have noticed, nobody does that! Hinton, for example, describes it in his Practical Guide to Training RBMs as: the hidden unit turns on if this probability is greater than a random number uniformly distributed between 0 and 1.
That is exactly what Hinton does in his RBM code: he gets a random number for each hidden unit using rand, i.e. randomly sampled from the uniform distribution between [0,1]. He then does the comparison: hidden_states = hidden_probs > rand(1, H) This is equivalent to using binornd, but is probably faster. For example to generate a random number that is 1 with p=0.9, you get a random number from [0,1]. Now, in 90% of the cases, this random number is smaller than 0.9, and in 10% of the cases it is larger than 0.9. So to get a random number that is 1 with p=0.9, you can call 0.9 > rand(1) - which is exactly what they do. tl;dr: Pick a new random number from the range [0,1] for each hidden unit in each iteration. Compare it to your probability with hidden_probs > rand(1,H) to make it binary.
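The same equivalence in numpy, for anyone not working in MATLAB (hidden_probs is assumed to be the array of $p(h_j=1)$ values for the current mini-batch):

import numpy as np

# Explicit Bernoulli sampling ...
hidden_states = np.random.binomial(1, hidden_probs)

# ... is equivalent to comparing against fresh uniform random numbers (drawn anew every iteration)
hidden_states = (np.random.rand(*hidden_probs.shape) < hidden_probs).astype(int)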
H: Feature engineering using XGBoost I am participating in a Kaggle competition. I am planning to use the XGBoost package (in R). I read the XGBoost documentation and understood the basics. Can someone explain how feature engineering is done when using XGBoost? An example would be of great help. AI: It turns out that the question I asked is incorrect. Feature engineering is done first, then XGBoost is used to build a model from the engineered features. If we are not satisfied with the model's performance, we can go back to feature engineering. Thanks Dan Levin for the explanation.
H: How to test People similarity measure? I am doing a project on finding famous people who are similar to each other. For this, I am extracting a bunch of features and applying a distance function on them to evaluate who is closer to whom. Is there a way to test this out quantitatively? Take the example of actors for easy reference. Say one model predicts Robert Downey Jr. to be most similar to Christian Bale, while after applying more weight to the 'heights' feature, George Clooney is found to be most similar to Christian Bale. Now how do I quantitatively assert that one is better than the other? One might say that Robert Downey is a better result because both Christian Bale and Robert Downey have starred in cult superhero movies (albeit different ones), while someone else might say that since George Clooney and Christian Bale were both Batmans at one point, George Clooney is the better result. Is there a way to test this other than intuition? AI: Not unless you have an idea of what makes two people similar to each other. By choosing a distance function you are defining what it means for two people to be similar to one another. The question of whether one metric is quantitatively better than another may not be answerable without a defined objective. For example, compare the Manhattan distance (L1 norm) to the Euclidean distance (L2 norm). The former may be more effective for a pedestrian trying to navigate a city on foot, whereas the latter may be more effective for an athlete running across a sports field. Now how do I quantitatively assert that one is better than the other? You would need some examples which embody what it means for two people to be similar (or not) to one another. These would have to be generated by a person or group of people. Rather than generate a "similarity value" directly between two people, one could instead generate a ranked list of the people who are most similar to a given person of interest: for person A, the most similar other person is B, the second most similar person is C, etc. Better yet, presented with a person of interest and a list of other people, someone would just have to rank each person in the list by their similarity to the person of interest. If more than one person generates ranks for a given list of people and a given person of interest, you could agglomerate the results by summing the ranks for each person and reordering the list by summed rank. If you have some data in this form, then you may evaluate your similarity model by how well it recreates the ranked lists. Some possible metrics for evaluation are:
- Recall (at rank n): What fraction of the true top n people are returned in the model's top n results?
- Precision (at rank n): What fraction of the model's top n results are in the true top n results?
Look here for other methods of evaluating ranking methods. I hope this helps. Let me know if I'm not understanding your question.
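A small illustration of those two metrics (the names and rankings below are made up; human_top stands in for the human-ranked list for the person of interest):

def precision_at_n(relevant, model_ranking, n):
    """Fraction of the model's top-n results that appear in the human-ranked top list."""
    return len(set(model_ranking[:n]) & set(relevant)) / n

def recall_at_n(relevant, model_ranking, n):
    """Fraction of the human-ranked top list that the model recovers in its top n."""
    return len(set(model_ranking[:n]) & set(relevant)) / len(relevant)

# Person of interest: Christian Bale (toy example)
human_top = ["Robert Downey Jr.", "Hugh Jackman", "George Clooney"]
model_rank = ["George Clooney", "Robert Downey Jr.", "Tom Hardy", "Hugh Jackman"]

print(precision_at_n(human_top, model_rank, 3))  # 2/3
print(recall_at_n(human_top, model_rank, 3))     # 2/3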