H: Conditional attributes in a prediction problem I am trying to implement an algorithm to predict a player's fantasy points from past events. However, I have some doubts about some attributes. Imagine that a player is going to play away in the next match: I believe it is useful to have the points/goals he scored in past away matches to evaluate how well he performs away from his home stadium. However, that same attribute would not be useful if the next match is at home. The same applies to the stadium occupancy percentage - it is important when a player plays at home but not relevant when he plays away. What is the best approach to this kind of scenario? Do I include the away goals anyway, even when the match is at home? Regards AI: Decision trees can handle conditions based on several features by design: if the model determines that the feature "at home" is important for the prediction, it will create a node based on this condition close to the root of the tree. By construction, other features will appear as conjuncts, e.g. "if at-home==true and featureX==valueX and ... then predict ..." (that's just how a decision tree works). If you consider this particular feature so important that everything else should depend on it, another option is to train two distinct models: one for "at home" and one for "away". The disadvantage is that each model can only use the instances relevant to its case, so there is less training data for each of them.
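A minimal scikit-learn sketch of the "two distinct models" option; the dataframe, column names and values below are invented purely for illustration, not taken from the question:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# hypothetical toy data
df = pd.DataFrame({
    'at_home':           [1, 0, 1, 0, 1, 0],
    'avg_points_home':   [5.2, 4.8, 6.1, 3.9, 5.5, 4.2],
    'avg_points_away':   [3.1, 2.9, 4.0, 2.2, 3.3, 2.7],
    'stadium_occupancy': [0.9, 0.4, 0.8, 0.5, 0.95, 0.3],
    'points':            [7, 3, 9, 2, 8, 4],
})
features = ['avg_points_home', 'avg_points_away', 'stadium_occupancy']

models = {}
for flag, group in df.groupby('at_home'):
    m = RandomForestRegressor(random_state=0)
    m.fit(group[features], group['points'])
    models[flag] = m  # models[1] handles home fixtures, models[0] away fixtures

At prediction time each upcoming fixture is routed to models[1] or models[0] depending on its at-home flag.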
H: Control which features are used for every task in multioutput classification? I would like to perform a multiclass-multioutput classification task on vectorized textual data. I started by using a random forest classifier in a multioutput strategy:

forest = RandomForestClassifier(random_state=1)
multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1)
multi_target_forest.fit(X_train, y_train)
y_pred_test = multi_target_forest.predict(X_test)

When looking at the feature importances for the individual estimators (multi_target_forest.estimators_) I've noticed that some features in my dataset are very relevant and useful for some tasks, but are disruptive for another task. Example: Task 1: classify documents for Date (q1, q2, q3, q4). Task 2: classify documents for Version (preliminary, final, amendment). For task 1, features related to dates, such as 'April', are very useful. However, for the second task, the feature 'April' gets a high importance, but that is a consequence of overfitting to a small dataset. Knowing this, I would like to actively remove such features. Is there a way to control which features are used for every task? I could just explicitly train separate classifiers for every task, but is that equivalent to multioutput-multiclass, or is there some joint probability calculation going on that I'd be missing? Thank you! AI: I don't think it's possible. First, let's see what the difference is between explicitly modeling separate trees for different tasks and modeling them in a joint manner. Let's suppose we have 2 tasks, each with n classes. In the latter case, to be able to jointly model the correlations, one must create new combined classes drawn from the Cartesian product of the two tasks' label sets (up to n x n combinations). Now, if it is true that one feature (say feature A) is beneficial for task 1 but not for task 2, how would one decide whether to use feature A while predicting these combined classes? Task 1 wants the feature but task 2 doesn't, so this creates a conflict that prevents us from biasing the tree model against using feature A. So, if you are certain that a particular feature is not beneficial for a particular task, then the simpler and more effective way is to model the tasks separately, as you mentioned.
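A hedged sketch of the "model them separately" route with a different feature subset per task; the matrix, labels and dropped column indices are invented for illustration. Note that scikit-learn's MultiOutputClassifier itself fits one independent estimator per target, so training per-task classifiers does not give up any joint probability calculation in that particular setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# stand-ins for the vectorized text matrix and the two label columns
X = np.random.rand(200, 50)
y_date = np.random.randint(0, 4, 200)      # task 1: q1..q4
y_version = np.random.randint(0, 3, 200)   # task 2: preliminary/final/amendment

date_features = np.arange(50)                        # task 1 may use every column
version_features = np.delete(np.arange(50), [7, 8])  # e.g. drop the 'April'-like columns for task 2

clf_date = RandomForestClassifier(random_state=1).fit(X[:, date_features], y_date)
clf_version = RandomForestClassifier(random_state=1).fit(X[:, version_features], y_version)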
H: Which input to use when generating a new sequence I want to use a sequence-to-sequence architecture to generate sequences. My data has the following structure: [0, 0, 1, 0, ..., 0, 1] --> [12.34, 0.78, 1.54, 6.90, ..., 5.32] I follow this tutorial to achieve it. After forwarding through the Encoder network, encoder_hidden is used as decoder_hidden. But what should I use as the first decoder_input to the Decoder network? The original tutorial uses a Start Of Sequence token, but I can't use it because it is encoded as 0, and 0 as a number would probably give some unintended information to the decoder. AI: As you have seen, normally you need a "special token" to be given to the decoder as the first element in its input to start the autoregressive generation. However, given that your outputs are real (floating point) numbers, it is a bit trickier, as you are not dealing with a discrete token vocabulary where you could simply reserve a token for that. I would suggest using a specific value, like $0.0$. Your model should be able to learn that the $0.0$ in the first position carries no information. Another option would be to learn the value to be used as the first token. You would have an extra trainable parameter that you use as the value for the first position.
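A small PyTorch sketch of the second option (a learned first-token value); the sizes and module names here are arbitrary assumptions, not taken from the linked tutorial, and in a real model the parameter would be registered on the decoder module:

import torch
import torch.nn as nn

hidden_size = 50
sos = nn.Parameter(torch.zeros(1, 1, 1))   # learned scalar used as the first decoder input

decoder_rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
out_proj = nn.Linear(hidden_size, 1)

encoder_hidden = torch.randn(1, 1, hidden_size)  # stand-in for the real encoder hidden state
output, hidden = decoder_rnn(sos, encoder_hidden)
next_input = out_proj(output)                    # first prediction, fed back in at step 2

During training the sos parameter receives gradients like any other weight, so the network can pick whatever starting value works best.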
H: xgboost classifier predicted negative probabilities I'm using XGBoost for a binary classification problem. There is no negative label, only 1 and 0. I tuned the hyperparameters using Bayesian optimization, then tried to train the final model with the optimized hyperparameters.

Mdl_XGB = xgb.train(OptimizedParams, dtrain)
scores_train = Mdl_XGB.predict(dtrain)
scores_test = Mdl_XGB.predict(dtest)

My problem is that the predicted scores for both train and test sets include both negative values and numbers greater than one. The scores are between -0.23 and 1.13. Shouldn't these scores represent the probability of belonging to class 1 (positive class)? AI: You have to set the option objective = binary:logistic to get probabilities between 0 and 1, otherwise you only get relative scores.
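A minimal sketch of the fix, with synthetic data standing in for dtrain; the key change is adding the objective to the tuned parameter dict:

import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "binary:logistic", "eta": 0.1}  # merge this with your tuned params
model = xgb.train(params, dtrain, num_boost_round=50)
probs = model.predict(dtrain)
print(probs.min(), probs.max())  # now bounded in [0, 1]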
H: Generating text using NLP based on parameters I want to generate some text based on the value of certain parameters. For instance, let's say I want to generate descriptions of video games. So, besides real descriptions as training data, I would like that the model takes in account the following parameters (for example) about the game: Violent: yes Multiplatform: yes Drugs: no So that if the game has drugs content, the output text has some phrase referring to it. Is this possible? If so, how could I do it in Python? I was going to use LSTM neural networks in Tensorflow. AI: A direct way would be to encode any binary input features as embedded vectors and add them together as the initial hidden state for the LSTM, and then you train it as a normal language model. The "little manual introduction" could be supplied to the language model (together with the initial hidden state created from the binary features) at inference time and then use it autoregressively to generate the description. Given that you are going to deal with proper nouns (i.e. the name of the game), you should use a vocabulary that does not lead to out-of-vocabulary words. I would suggest using a subword-level vocabulary (e.g. byte-pair encoding). For the implementation, you could do it in Pytorch, using as a starting point their language model example.
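A rough PyTorch sketch of that first suggestion (sizes, names and the flag order are assumptions, not a reference implementation): each binary attribute gets a tiny embedding table, the embeddings are summed into the LSTM's initial hidden state, and the rest is a standard language model over the description tokens.

import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 10000, 128, 128
n_flags = 3  # violent, multiplatform, drugs

word_emb = nn.Embedding(vocab_size, emb_dim)
flag_embs = nn.ModuleList([nn.Embedding(2, hidden) for _ in range(n_flags)])  # one table per yes/no attribute
lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
to_vocab = nn.Linear(hidden, vocab_size)

flags = torch.tensor([[1, 1, 0]])  # violent=yes, multiplatform=yes, drugs=no
h0 = sum(emb(flags[:, i]) for i, emb in enumerate(flag_embs)).unsqueeze(0)  # (1, batch, hidden)
c0 = torch.zeros_like(h0)

tokens = torch.randint(0, vocab_size, (1, 12))  # stand-in for a tokenized description
out, _ = lstm(word_emb(tokens), (h0, c0))
logits = to_vocab(out)  # next-token scores for language-model training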
H: Is it good practice to convert columns with a number to a range between 0 and 1? Relatively new to data science. I heard something about converting columns which contain integers into a range between 0 and 1. I think the reasoning was that all the columns would then be more similar in their range. I think along with that there might have been a step of removing outliers (very high integers) so that they wouldn't cause all the other values to be squeezed into a small fraction of the range. Is this accurate? If yes, is there an easy command to make it happen with a Pandas dataset? AI: This transformation is called min-max scaling and is also often referred to as normalization. Scikit-learn provides the MinMaxScaler() for this (see here). Here is an example adapted from "Introduction to Machine Learning with Python" by Mueller and Guido:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=1)

scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

(Side note: keep in mind to fit the scaler only on the training and not the test data!) In the book "Python Machine Learning" by Raschka the author provides a brief pragmatic comparison of min-max scaling/normalization to standardization (the latter meaning subtracting the mean and dividing by the standard deviation): Although normalization via min-max scaling is a commonly used technique that is useful when we need values in a bounded interval, standardization can be more practical for many machine learning algorithms. The reason is that many linear models, such as the logistic regression and SVM, [...] initialize the weights to 0 or small random values close to 0. Using standardization, we center the feature columns at mean 0 with standard deviation 1 so that the feature columns take the form of a normal distribution, which makes it easier to learn the weights. Furthermore, standardization maintains useful information about outliers and makes the algorithm less sensitive to them in contrast to min-max scaling, which scales the data to a limited range of values.
H: Python: forecast unevenly spaced time-series? My data has timestamps corresponding to the failure occurrences of a specific component in machinery. The timestamps are not uniformly distributed. My questions are: 1) What methods can I use to (almost) accurately forecast future occurrences (timestamps) of failure? 2) What other features can I derive? What I've tried so far: Since the timestamp sequence is unevenly spaced, I've derived a feature datediff = difference between sequential fault occurrences. Since it is now a univariate time-series, I have tried classical time-series forecasting methods like ARIMA and SARIMA (hasn't worked out well). I am posting the seasonal decompositions of the time-series (freq=7, weekly, and freq=30) and the acf/pacf plots. AI: I recommend you use Prophet: Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well. In the documentation you can see that it is really easy to implement in Python. Also, you could convert the problem to a supervised learning one. You can read this blog where they give an introduction to how to approach the problem.
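A minimal Prophet sketch along these lines; the failure dates below are invented, the irregular timestamps are first resampled into a regular weekly count series, and depending on the installed version the import is prophet or fbprophet:

import pandas as pd
from prophet import Prophet  # or: from fbprophet import Prophet

# hypothetical failure timestamps
events = pd.to_datetime(["2019-01-03", "2019-01-20", "2019-02-02", "2019-03-01",
                         "2019-03-28", "2019-04-30"])
# one option: model the number of failures per week; Prophet expects columns 'ds' and 'y'
df = (pd.Series(1, index=events).resample("W").sum()
        .rename_axis("ds").reset_index(name="y"))

m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=8, freq="W")
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())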
H: Cross-entropy loss colloquially I have just made an MNIST classifier from a tutorial. It has 1 layer, no hidden layers. It uses PyTorch's nn.CrossEntropyLoss as a loss function. Plotting my losses after training, it starts out at around 2.25 and goes down to 0.5. What does this number mean, in a colloquial sense? Does a loss of 0.5 mean that it's right 50% of the time? AI: The cross entropy is equivalent (up to a constant) to the Kullback Leibler Divergence which has an interpretation based on information theory: (very crudely) it is the amount of information "lost" by representing the true labels via the distribution of their predicted values (measured in "nats" (e) or "bits" (2) depending on the base of the logarithm). I don't find it very useful as a "colloquial" interpretation or to think in units of bits. Typically, you would calculate more interpretable metrics like accuracy and inspect them along with the loss to get a picture of the performance. However, you can use the loss and compare it to a simple baseline (e.g. the model only predicting the average of your labels or only the majority class) and thus get a relative performance estimate.
H: Understanding these probabilities, and reading the pdf and cdf UPDATE: this is a math question, but as I would like to codify it, I thought the DS SE might be appropriate. I am trying to sketch out this problem statement. Where am I going wrong, since I don't see how they get "6% chance of losing any money at all", "41% chance of making more than \$100M" and "75% of earning at least \$40M". My attempt at a solution is in the second image: Row 48 (CASES) enumerates the number of ways the sum can take a given value, while row 49 (PROB) is that number in row 48 divided by the total number of outcomes. This gives me the pdf, and row 50 is the cdf. Yet I don't get the answers in the problem statement. Are they wrong, or am I reading the pdf and cdf the wrong way? The first image is the text below. AI: The payoffs aren't +30 and -10. They're +20 and -10. Successful projects return \$30M after investing \$10M.
H: Degree of Profanity in a Sentence Given a comment or a sentence and a list of profane words, how do I write a program to print the degree of profanity in that sentence? AI: One way to approach it is to split the sentence into tokens and count the number of tokens that are profanities.

import re

def tokenize(text):
    return re.findall(r'\w+', text.lower())

profane_tokens = {"nerfherder"}
sentence = "Why you stuck-up, half-witted, scruffy-looking nerfherder!"
tokens = tokenize(sentence)
# Rate: number of profane occurrences normalized by the total number of tokens
degree_of_profanity = sum(1 for t in tokens if t in profane_tokens) / len(tokens)

Note that this code will not handle multi-token profanities, and many profanities span multiple tokens.
H: Does ridge regression always reduce coefficients by equal proportions? Below is an excerpt from the book Introduction to Statistical Learning in R (chapter on linear model selection and regularization): "In ridge regression, each least squares coefficient estimate is shrunken by the same proportion" On a simple dataset, I obtained 2 non-intercept coefficients b1=-0.03036156 and b2=-0.02481822 using OLS. With l2 shrinkage and lambda=1, the new coefficients were b1=-0.01227141 and b2=-0.01887098. They haven't been reduced by equal proportions. What am I missing here? Note: the assumption made in the Introduction to Statistical Learning book for the quoted statement is n=p; the scale of both variables in my dataset is the same. AI: As far as I know, ridge regression minimizes the following objective: \begin{equation} RSS_{Ridge} = \sum_{i=1}^{n} (\hat{y}_{i} - y_{i})^2 + \lambda \sum_{j=1}^{p} \beta_{j}^2 \end{equation} First of all, it seems to me that a higher lambda does not mean the coefficients shrink in inverse proportion to lambda, because the beta term is squared while lambda enters only linearly. I think you are referring to page 226 of the "An Introduction to Statistical Learning" book. In the footnote of the figure, the writer says: "The ridge regression and lasso coefficient estimates for a simple setting with n = p and X a diagonal matrix with 1’s on the diagonal. Left: The ridge regression coefficient estimates are shrunken proportionally towards zero, relative to the least-squares estimates. Right: The lasso coefficient estimates are soft-thresholded towards zero." That figure shows that if we got +2 and -2 coefficients in the OLS model and ridge shrinks +2 to +1.8, then we can be sure ridge will shrink -2 to -1.8. In both cases the coefficients move towards zero by the same proportion (here 10%).
H: Would averaging two vectors in word embeddings make sense? I'm currently using the GloVe embedding matrix which is pre-trained on a large corpus. For my purpose it works fine; however, there are a few words which it does not know (for example, the word 'eSignature'). This spoils my results a bit. I do not have the time or data to retrain on a different (more domain-specific) corpus, so I wondered if I could add vectors based on existing vectors. By E(word) I denote the embedding of a word. Would the following work? E(eSignature) = 1/2 * ( E(electronic) + E(signature) ) If not, what are other ideas that I could use to add just a few words to a word embedding? AI: Averaging embedding vectors could make sense if your aim is to represent a sentence or document with a unique vector. For words out of vocabulary it makes more sense to just use a random initialisation and allow training of the embedding parameters during the training of the model. In this way the model will learn the representation for the out-of-vocabulary words by itself. Alternatively, you could use external resources like WordNet [1] to extract a set of synonyms and other words closely related to a specific term, and then leverage the vectors of those close words (averaging them might make sense, but it's always a matter of testing and seeing what happens; as far as I know there are no grounded rules established yet). [1] https://wordnet.princeton.edu
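A tiny numpy illustration of both ideas; the vectors are 3-dimensional toy stand-ins for real 50- or 300-dimensional GloVe vectors:

import numpy as np

# toy stand-in for a loaded GloVe dictionary
embeddings_dict = {
    "electronic": np.array([0.2, -0.1, 0.7]),
    "signature":  np.array([0.5,  0.3, -0.2]),
}

# averaging the parts of the compound word
embeddings_dict["esignature"] = 0.5 * (embeddings_dict["electronic"] + embeddings_dict["signature"])

# random initialisation, to be fine-tuned if the embedding layer is kept trainable
rng = np.random.default_rng(0)
embeddings_dict["esignature_random"] = rng.normal(scale=0.1, size=3)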
H: Can we add other features along with Text in sentiment analysis Can we add other features along with text to an ML model? That is, can we give the text and other features as one combined input and predict the output value? The model might learn better if given some extra features along with the text from the tf-idf vector matrix. Can we input other features along with the tf-idf vector matrix to an ML model or DL model? AI: Yes, you definitely can. Here's an example: Using Convolutional Neural Networks to Classify Hate-Speech The authors used classic embeddings concatenated with a vector of size 28 representing the presence or not (in a tweet) of each letter of the alphabet (26) plus any digit (1) plus any other symbol (1). So basically for a tweet like 'I love NLP!' the representation would be: (embeddings)(00001000100110110000000)(0)(1) where you have the classic embeddings for each word, then a one-hot vector for the alphabet letters, another one-hot vector (just a binary variable actually) for digits and one last that checks for the presence of any other symbol (! in this case). The resulting concatenated vector is then passed to an LSTM, convolution layer or whatever architecture you want to use. Consider that there are infinite other variables that you can concatenate to the embeddings. For example you could concatenate counters for each alphabet letter or any other symbol in a sentence (in this case I suggest separately normalising these variables between 0 and 1 before concatenating them to the embeddings), one-hot representations of n-grams or dependencies, the total number of words, presence or not and counters of capital letters, presence of entities, sentiment scores obtained through external resources like Vader (see nltk) and so on.
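A short scikit-learn sketch of concatenating extra features to a tf-idf matrix; the example texts and the two hand-crafted features are invented just to show the wiring:

import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["I love NLP!", "this is awful...", "great work team"]
y = [1, 0, 1]

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts)

# extra features: normalized text length and an exclamation-mark flag (illustrative only)
extras = np.array([[len(t) / 100, int("!" in t)] for t in texts])

X = hstack([X_text, csr_matrix(extras)])
clf = LogisticRegression().fit(X, y)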
H: No module named "" error when loading the pickle file I created a model and I saved it in a pickle file using the Algorithm SVR(Support Vector Regression) import pickle pickle.dump(model,open('carb patients data/Pickles/svr.pickle', 'wb')) In jupyter notebook it gives an error can't pickle _thread.RLock objects So I converted that jupyter file in to a .py file and downloaded it and executed using the Python Idle. Then it got saved in that particular location. But when I try to load my pickle file from another Jupyter Notebook it gives an error called, No module named 'sklearn.svm._classes' The code how i tried to load the pickle file, from flask import Flask, request, redirect, url_for, flash, jsonify import numpy as np import pickle as p import json from sklearn import ensemble app = Flask(__name__) @app.route('/api/', methods=['POST']) def makecalc(): data = request.get_json() prediction = np.array2string(model.predict(data)) return jsonify(prediction) if __name__ == '__main__': modelfile = 'Clustering Patients/carb patients data/Pickles/svr.pickle' model = p.load(open(modelfile, 'rb')) app.run(debug=True, host='0.0.0.0') Any thing I have done wrong or is it an issue in the Jupyter Notebook? How can I load my saved pickle file? AI: According to the scikit-learn model persistence docs, it may be better to use joblib instead: Save model from joblib import dump dump(model, 'filename.joblib') Load model from joblib import load model = load('filename.joblib')
H: Do I need to convert strings before using LSTM? I have a dataset which includes one column with a URL and another column with a value 0 or 1 indicating whether it is a phishing link. I want to process this dataset using an LSTM. Do I need to first convert the string column into some other data type? The model I'm hoping to use would be something like below:

# Model building using the Sequential API
model3 = Sequential()
model3.add(Dense(10, activation='relu', kernel_initializer='uniform', input_dim=x3.shape[1]))
# Add a LSTM layer with 128 internal units.
model3.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model3.add(layers.Dense(10))

AI: Do I need to first convert the string column into some other data type? Yes, it's very important. Neural networks don't take raw words and/or letters as inputs. Textual information must be processed numerically in order to be fed into the model. I thought about it, and came up with three things you could do:
1. Use one-hot encoding, as it was suggested. I don't prefer this option: one-hot encoding generates very sparse matrices, meaning very little variation in almost every dimension at each step. Moreover, the number of websites is absurdly high; it can be unmanageable through one-hot encoding. Additionally, it's not robust: your model has to be tested on unseen data, which by definition couldn't be operationalized with that technique. Also, some websites might be more or less similar to others, and they play more or less similar roles for your model; but one-hot encoding fails at representing their differences: they will all look equally different if you compute any distance measure between them.
2. Assuming you are working with sequences, my main suggestion is to represent web URLs with embedding vectors. Just as words can be translated into embedding vectors in NLP (as in word2vec or GloVe), you can apply the same technique to represent the "relative meaning" of websites. In this way you can: a) keep the number of dimensions at bay, and b) have the non-sparse matrix that your models need. You would end up with a web2vec model, which fits perfectly with RNNs. Moreover, the meaning of new, unseen websites could be learned effortlessly by the model. This can be done with libraries such as gensim, or using Keras Embedding() layers.
3. A more extreme, time-consuming option is to use character embeddings. This is going to be useful only if you think the name of the website itself contains the relevant information for your task; I'm not sure that's the case. Character-level embeddings produce very sophisticated models, but they are more computationally expensive than the others, and I'm not sure they would be useful in your case.
If I had to choose, I'd go for option 2. Finally, let me close with a hint: do not put Dense() layers before LSTM() layers. First, recurrent layers process the sequential data, then their output is sent to dense layers that execute the prediction. The output layer should have either one node + sigmoid activation + binary crossentropy loss, or two nodes + softmax activation + categorical crossentropy loss. As you prefer.
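For completeness, a rough Keras sketch of the Embedding -> LSTM -> Dense wiring described above, using simple character-level integer codes as a stand-in for a proper web2vec or character-embedding pipeline; the URLs, labels and layer sizes are made up:

import numpy as np
from tensorflow.keras import layers, Sequential

urls = ["http://example.com/login", "http://paypa1-verify.xyz/update"]  # toy data
labels = np.array([0, 1])

# integer-encode characters, padding with 0 up to a fixed length
vocab = {c: i + 1 for i, c in enumerate(sorted({c for u in urls for c in u}))}
max_len = 60
X = np.zeros((len(urls), max_len), dtype="int32")
for i, u in enumerate(urls):
    for j, c in enumerate(u[:max_len]):
        X[i, j] = vocab[c]

model = Sequential([
    layers.Embedding(input_dim=len(vocab) + 1, output_dim=32, mask_zero=True),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)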
H: Dimensionality reduction without selecting components I would like to use a dimensionality reduction algorithm in my pipeline. I have 2k features and I'm using xgboost. My model is rebuilt each day (there are new records that should be added to the training set). I'm looking for a method for dimensionality reduction without setting n_components. I know that in PCA it shouldn't be set. But I'm looking for a method that finds something like clusters in my data, and then I will use it to train my model. Of course I'll be using the same flow for prediction. Do you have an idea how I should do my data processing for this case? AI: It would be helpful to know a bit better what you're trying to achieve and why the selection of a specific number of eigenvalues bothers you. From the generic information you gave, it seems you're aiming at training a model on a compressed/dense representation of several features, in which case I would suggest training an autoencoder (or something similar), on top of which you could then train whatever classifier you need. Otherwise, if the problem is only the number of features you have, you could try some feature selection strategies.
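A minimal Keras autoencoder sketch along those lines; the 64-unit bottleneck, the layer sizes and the random data are assumptions you would tune for the real 2k features:

import numpy as np
from tensorflow.keras import layers, Model

X = np.random.rand(1000, 2000).astype("float32")  # stand-in for the 2k raw features

inp = layers.Input(shape=(2000,))
code = layers.Dense(64, activation="relu")(inp)    # dense representation
out = layers.Dense(2000, activation="linear")(code)

autoencoder = Model(inp, out)
encoder = Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=3, batch_size=64, verbose=0)

X_compressed = encoder.predict(X)  # feed this to xgboost instead of the raw features

The bottleneck width still has to be chosen once, but it can stay fixed while the model is rebuilt each day, and the same fitted encoder is reused at prediction time.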
H: Why does the normalize function give a different result on a matrix vs a single value? I have a matrix like: B=[ 1.5035; 1.5728; 1.6485; 1.5369; 1.5467; 1.572; 1.5374; 1.787; 1.5825; 1.6905]; Using the normalize function like normalize(B,'range') gives this result: ans = 0 0.24444 0.51146 0.11781 0.15238 0.24162 0.11958 1 0.27866 0.65961 But when I use it for a single value like normalize(B(2,:),'range'), the result is 0, while the result for row number 2 in ans is 0.24444. Why is it different and how can I fix it? AI: With the 'range' option, the normalize function rescales the data to the interval [0, 1] using min-max scaling: $$z_{i} = \frac{x_{i} - \min(X)}{\max(X) - \min(X)}$$ where $X$ is your original vector/matrix. When you pass the whole vector as input, it first finds the minimum and maximum over that vector and then rescales every element; that is how position 2 gets the value $0.24444$. But when you pass a single value, that value is simultaneously the minimum and the maximum, so the range is zero and the result is defined to be 0. In other words, the normalized value is only meaningful relative to the rest of the data: to reproduce the 0.24444 you have to normalize the full vector and then pick out element 2. For more on how it works and how to use it, take a look at the docs.
H: How to trust the labels generated using ML models? I have a dataset of patient records, but I do not know whether each patient is +ve for cancer or not. So, I do not have the labels in my dataset. Now I can run a machine learning model like clustering to generate labels. For example, I can run clustering to group the records into two classes based on similarity and find out who belongs to the +ve and -ve class. Of course, we cannot sit and manually review the patients' data to know whether a patient is actually +ve for cancer or not. So when we generate labels via machine learning models like clustering above, is it a recommended approach? Is it used in industries/real time where people don't have ground truth and only rely on labels based on ML models? How can we trust these generated labels? If it's a human, I know the labels can be trusted. But how do we trust these labels? Are things like this being used in industries, and how do they tackle the trust issue? AI: So when we generate labels via machine learning models like clustering above, is it a recommended approach? Only if you can really produce 2 highly distinct clusters/groups. This is highly unlikely, especially for complicated and high dimensional datasets. One of the reasons is that clustering algorithms are just weaker than supervised algorithms. If you can find a good representation (have a look at representation learning from Bengio), i.e. highly discriminative embeddings, then it might work. Is it used in industries/real time where people don't have ground truth and only rely on labels based on ML models? It's an option, one can definitely try it, but not rely on it. How can we trust these labels generated? As long as you can validate it on an out-of-fold set with ground truths, or with humans looking at the clusters, there is no problem. Are things like this being used in industries and how do they tackle the trust issue? It's one of the possible solutions; personally, I always try transfer learning first. Especially for problems like yours, chances are there is already some pretrained model. The only thing you need is some labeling tool, for about 1000 samples (it takes a couple of hours to do but it's worth it). Have a look at this tool.
H: Trying to understand the Fowlkes-Mallows Score I recently bought Chris Albon's ML flashcards and I'm working my way through them. But this one on the Fowlkes-Mallows score has me stumped, as his definitions of false negatives and false positives seem reversed: If y_hat are my predictions, and I've said that a pair of observations are in the same cluster when they're not, how is that a false negative and not a false positive? Of course I read the Wikipedia article and did some Googling, but haven't turned up a clear answer yet. Am I misunderstanding? Did he get this wrong? AI: Yes, it's a mistake. FN: they should be together but you said they are not - you false-negatively assigned them to different clusters. FP: they should not be together, but you said they are - you false-positively assigned them to the same cluster.
H: Probabilistic gold standard vs Deterministic gold standard I understand that we call something a gold standard when it involves human intervention/judgement/review. But can someone help me understand the difference between a probabilistic gold standard and a deterministic gold standard? For example: patient has cancer or not - a binary response - is a deterministic gold standard which can be provided by humans. Whereas "patient has a 60% chance of having cancer and a 40% chance of not having cancer" - am I right to understand that this is called a probabilistic gold standard, and that it can't be produced by humans? Can any human/doctor, for example, say this patient has a 60% chance of having cancer and a 40% chance of not having cancer? AI: No, a deterministic gold standard is when you know for certain. If a person does not have a diagnosis, then he doesn't have the disease/condition. Doctors are not supposed to give a probability, but we as human beings always like to know the likelihood. For example, person A, who is 27 years old and has the coronavirus, is highly unlikely to die of the virus, but it's not impossible.
H: Why does averaging over vector values cause errors? As a way to improve my model, I want to average GloVe vectors over a sentence. However, I can't get np.mean to work. The following code works when not averaging over words. (copied from other code)

embeddings_dict = {}
with open("glove.6B.50d.txt", 'r') as f:
    for line in f:
        values = line.split()
        word = values[0]
        vector = np.asarray(values[1:], "float32")
        embeddings_dict[word] = vector

g = open("input.txt", 'r')
vector_lines = []
for line in g:
    clean_line = line.translate(str.maketrans('', '', string.punctuation))
    array_line = clean_line.lower().split()
    vec_line = []
    i = 0
    for word in array_line:
        i += 1
        try:
            vec_line.append(embeddings_dict[word])
        except:
            vec_line.append(embeddings_dict["unk"])
    while i < 30:  # pad up to thirty words with zero vectors
        vec_line.append(np.zeros(50))
        i += 1
    vector_lines.append(np.asarray(vec_line))
X = np.asarray(vector_lines)

To average over words, I'm modifying a small part of the code:

        try:
            vec_line.append(embeddings_dict[word])
        except:
            vec_line.append(embeddings_dict["unk"])
    # padding is not necessary anymore
    # while i < 30:
    #     vec_line.append(np.zeros(50))
    #     i += 1
    vec_mean = np.mean(vec_line, axis=0, keepdims=True)[:,None]
    vector_lines.append(vec_mean)
X = np.asarray(vector_lines)

This gives me the error "ValueError: could not broadcast input array from shape (1,50) into shape (1,1)". It feels as if I have tried every possible modification to this code, but I keep on getting shape issues. What is causing all of these issues? AI: Change this:

import numpy as np

a = np.array([1,2,3])
b = np.array([4,5,6])
vec_line = [a,b]
print(np.mean(vec_line, axis=0, keepdims=True)[:,None])
>> [[[2.5 3.5 4.5]]]
np.mean(vec_line, axis=0, keepdims=True)[:,None].shape
>> (1,1,3)

To this:

print(np.mean(vec_line, axis=0))
>> [2.5 3.5 4.5]
np.mean(vec_line, axis=0).shape
>> (3,)
H: Gradient descent parameter estimation Package for R I am looking for a package that does gradient descent parameter estimation in R, maybe with some bootstrapping to get confidence intervals. I wonder if people call it something different here, as I get almost nothing in my searches, and the one article I found was from someone who rolled their own. It is not that hard to implement, but I would prefer to use something standard. AI: Ok, after a lot of looking I found the "optim" routine, which is in "stats", one of the packages that is always loaded. It has quite a few methods, including conjugate gradients and BFGS and a few others, and it worked well on the first few examples I tried. Strangely, it doesn't seem to get a lot of attention. I guess optimization people tend to use Matlab. I knew there had to be something.
H: K-means: What are some good ways to choose an efficient set of initial centroids? When a random initialization of centroids is used, different runs of K-means produce different total SSEs. And it is crucial in the performance of the algorithm. What are some effective approaches toward solving this problem? Recent approaches are appreciated. AI: An approach that yields more consistent results is K-means++. This approach acknowledges that there is probably a better choice of initial centroid locations than simple random assignment. Specifically, K-means tends to perform better when centroids are seeded in such a way that doesn't clump them together in space. In short, the method is as follows: Choose one of your data points at random as an initial centroid. Calculate $D(x)$, the distance between your initial centroid and all other data points, $x$. Choose your next centroid from the remaining datapoints with probability proportional to $D(x)^2$ Repeat until all centroids have been assigned. Note: $D(x)$ should be updated as more centroids are added. It should be set to be the distance between a data point and the nearest centroid. You may also be interested to read this paper that proposes the method and describes its overall expected performance.
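A compact numpy sketch of this seeding procedure (for practical use, scikit-learn's KMeans already applies it by default via init='k-means++'); the data here is random just for illustration:

import numpy as np

def kmeans_pp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]                 # step 1: first centroid at random
    for _ in range(k - 1):
        # D(x)^2: squared distance of each point to its nearest centroid so far
        d2 = ((X[:, None, :] - np.array(centroids)[None, :, :]) ** 2).sum(-1).min(axis=1)
        probs = d2 / d2.sum()                             # step 3: sample proportional to D(x)^2
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)

X = np.random.rand(500, 2)
init_centroids = kmeans_pp_init(X, k=5)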
H: Analyzing survey data for predictions I've got survey data that resembles:

|------------| Q1a | Q1b | Q1c | Q2a | Q2b | Q2c | Classification |
| Respondent |  1  |  0  |  0  |  1  |  0  |  0  | Red            |
| Respondent |  0  |  0  |  1  |  1  |  0  |  0  | Green          |
| Respondent |  0  |  1  |  0  |  0  |  0  |  1  | Yellow         |

I am trying to predict the classification for new respondents. Currently I'm using a Naive Bayes, and getting pretty bad accuracy (~20%). I don't have much training data, and the training data is hand scraped from non-standard sources (internal company procedures are a mess here). I'm looking for other ways to predict the classification. I'm thinking about assigning weights to each question, and magically predicting the result based on those, somehow. Although I don't really know where to start learning about how to do that, and whether it's appropriate for this data. I have very little background in this :( Any ideas or tips on predicting the classification column with so little training data? AI: Can you give a bit more information on the size of the data you're training on (and if it's really 6 parameters you're basing the predictions off of)? If it's really 6 questions with binary answers (1, 0 as you suggest), then there are 2^6 (i.e. 64) unique answer combinations, and to determine a probability for them you'll want multiple entries per combination. Standard error scales like 1/sqrt(n), so for 10% accuracy you'll need roughly 6,400 inputs, which, given your description, sounds like more data than you may have. You may want to invest time into automating data collection. If, on the other hand, you have a reasonably large data set and are hoping for some alternative models, both boosted decision trees and random forest models sound like good candidates for this problem.
H: Reference about social network data-mining I am not in the data science field, but I would like to examine in depth this field and, particularly, I would like to start from the analysis of the social networks data. I am trying to find some good references, both paper, websites and books, in order to start learning about the topic. Browsing on the internet, one can find a lot of sites, forum, papers about the topic, but I'm not able to discriminate among good and bad readings. I am an R, Matlab, SAS user and I know a little bit of python language. Could you suggest any references from which I could start studying and deepen the industry? AI: My favorite place to find information about social network analysis is from SNAP, the Stanford Network Analysis Project. Led by Jure Leskovec, this team of students and professors has built software tools, gathered data sets, and published papers on social network analysis. http://snap.stanford.edu/ The collection of research papers there is outstanding. They also have a Python tool you could try. http://snap.stanford.edu/snappy/index.html The focus is on graph analysis, because social networks fit this model well. If you are new to graph analysis, I suggest you take a undergraduate level discrete mathematics course, or check out my favorite book on the topic "Graph Theory with Algorithms and its Applications" by Santanu Ray. For a hands-on approach to social network analysis, check out "Mining the Social Web" by Matthew A Russell. It has examples which cover how to collect and analyze data from the major social networks like Twitter, Facebook, and LinkedIn. It was Jure Leskovec who initially excited me about this field. He has many great talks on YouTube, for example: https://www.youtube.com/watch?v=LmQ_3nijMCs
H: How should I create a single score with two values as input? I have two series of values, a and b as inputs and I want to create a score, c, which reflects both of them equally. The distribution of a and b are below In both cases, the x-axis is just an index. How should I go about creating an equation c = f(a,b) such that a and b are (on average) represented equally in c? Edit: c = (a+b)/2 or c = ab will not work because c will be too heavily weighted by a or b. I need a function, f, where c = f(a,b) and c' = f(a + stdev(a),b) = f(a, b + stdev(b)) AI: If you're looking for something where A and B are equally represented, consider trying something like Z score normalization (or standard score): c = (a-u_a)/sigma_a + (b-u_b)/sigma_b That score equally represents the two, but would be on a smaller scale. It really shouldn't matter since the numbers are arbitrary, however, if you need to scale it up, you could do something like: c2 = (sigma_a+sigma_b)*(c) + u_a + u_b
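A small numpy check that the z-score sum behaves as required: shifting a by one standard deviation of a changes c by exactly the same amount as shifting b by one standard deviation of b. The two series here are arbitrary stand-ins for the real data:

import numpy as np

a = np.random.gamma(2.0, 3.0, size=1000)
b = np.random.normal(50.0, 5.0, size=1000)

c = (a - a.mean()) / a.std() + (b - b.mean()) / b.std()

c_shift_a = ((a + a.std()) - a.mean()) / a.std() + (b - b.mean()) / b.std()
c_shift_b = (a - a.mean()) / a.std() + ((b + b.std()) - b.mean()) / b.std()
print(np.allclose(c_shift_a, c + 1), np.allclose(c_shift_b, c + 1))  # True True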
H: What types of features are used in a large-scale click-through rate prediction problem? Something that I often see in papers (example) about large-scale learning is that click-through rate (CTR) problems can have up to a billion of features for each example. In this Google paper the authors mention: The features used in our system are drawn from a variety of sources, including the query, the text of the ad creative, and various ad-related metadata. I can imagine a few thousands of features coming from this type of source, I guess through some form of feature hashing. My question is: how does one get to a billion features? How do companies translate user behavior into features in order to reach that scale of features? AI: That really is a nice question, although once you're Facebook or Google etc., you have the opposite problem: how to reduce the number of features from many billions, to let's say, a billion or so. There really are billions of features out there. Imagine, that in your feature vector you have billions of possible phrases that the user could type in into search engine. Or, that you have billions of web sites a user could visit. Or millions of locations from which a user could log in to the system. Or billions of mail accounts a user could send mails to or receive mails from. Or, to swich a bit to social networking site-like problem. Imagine that in your feature vector you have billions of users which a particular user could either know or be in some degree of separation from. You can add billions of links that user could post in his SNS feed, or millions of pages a user could 'like' (or do whatever the SNS allows him to do). Similar problems may be found in many domains from voice and image recognition, to various branches of biology, chemistry etc. I like your question, because it's a good starting point to dive into the problems of dealing with the abundance of features. Good luck in exploring this area! UPDATE due to your comment: Using features other than binary is just one step further in imagining things. You could somehow cluster the searches, and count frequencies of searches for a particular cluster. In a SNS setting you could build a vector of relations between users defined as degree of separation instead of a mere binary feature of being or not being friends. Imagine logs that global corporations are holding on millions of their users. There's a whole lot of stuff that can be measured in a more detailed way than binary. Things become even more complicated once we're considering an online setting. In such a case you do not have time for complicated computations and you're often left with binary features since they are cheaper. And no, I am not saying, that the problem becomes tractable once it's reduced to a magical number of billion features. I am only saying that a billion of features is something you may end up after a lot of effort in reducing the number of dimensions.
H: How to approach automated text writing? What are the tools, practices and algorithms used in automated text writing? For example, lets assume that I have access to wikipedia/wikinews and similar websites API and I would like to produce article about "Data Science with Python". I believe that this task should be divided into two segments. First would be text mining and second would be text building. I'm more or less aware how text mining is performed and there are lots of materials about it in Internet. However, amount of materials related to automated text building seems to be lower. There are plenty of articles which says that some companies are using it, but there is lack of details. Are there any common ideas about such text building? AI: You should probably do some reading in the field of "Natural Language Generation", since this seems to relate most directly to your question. But the way you have described the process -- "text mining...text building" -- leads me to wonder if you are aiming for something much more ambitious. It seems as though you aim to automate the process of 1) reading natural language texts, 2) understanding the meaning, and then 3) generate new texts based on that semantic knowledge. I'm not aware of any general-purpose end-to-end systems that can do that, not even specialized systems by the likes of Palantir. What you are aiming for would probably pass the Turing Test for fully capable Artificial Intelligence.
H: Dimensionality and Manifold A commonly heard sentence in unsupervised machine learning is "High dimensional inputs typically live on or near a low dimensional manifold". What is a dimension? What is a manifold? What is the difference? Can you give an example to describe both? Manifold from Wikipedia: In mathematics, a manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. Dimension from Wikipedia: In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. What does the Wikipedia text even mean in layman's terms? It sounds like some bizarre definition, like most machine learning definitions. They are both spaces, so what's the difference between a Euclidean space (i.e. manifold) and a dimension space (i.e. feature-based)? AI: What is a dimension? To put it simply, if you have a tabular data set with m rows and n columns, then the dimensionality of your data is n. What is a manifold? The simplest example is our planet Earth. For us it looks flat, but it really is a sphere. So it's sort of a 2d manifold embedded in 3d space. What is the difference? To answer this question, consider another example of a manifold: the so-called "swiss roll". The data points are in 3d, but they all lie on a 2d manifold, so the dimensionality of the manifold is 2, while the dimensionality of the input space is 3. There are many techniques to "unwrap" these manifolds. One of them is called Locally Linear Embedding. Here's a scikit-learn snippet for doing that:

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding
import matplotlib.pyplot as plt

data, color = make_swiss_roll(n_samples=1500)
# n_neighbors is the k used to build the local neighbourhoods
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
X_lle = lle.fit_transform(data)
plt.scatter(X_lle[:, 0], X_lle[:, 1], c=color)
plt.show()
H: Lightweight data provenance tool One of the problems I often encounter is that of poor data provenance. When I do research I continuously make modifications to my code and rerun experiments. Each time I'm faced with a number of questions, such as: do I save the old results somewhere, just in case? Should I include the parameter settings in the output filenames or perhaps save them in a different file? How do I know which version of the script was used to produce the results? I've recently stumbled upon Sumatra, a pretty lightweight Python package that can capture Code, Data, Environment (CDE) information that can be used to track data provenance. I like the fact that it can be used both from the command line and from within my Python scripts and requiring no GUI. The downside is that the project seems inactive and perhaps there's something better out there. My question is: what is a good lightweight data provenance solution for my research? I'm coding small projects mostly in Python in the terminal on a remote server over SSH, so a command line solution would be perfect for me. EDIT 2019: The Sumatra project seems inactive and other, more mature project have come along. DVC looks very promising and I've been in contact some of its authors, who have proven very helpful and supportive. AI: Yes, you should save result files before you make major mods to the code. Disk space is cheap, so you're unlikely to run into issues unless your results sets are prolific. I would suggest storing old results sets with folder names that include a time stamp of when they were generated. As far as time shots of your code, using github (or some other code repository tool) is as easy as can be, will save version information, allows for collaboration, and is an all around great way to backup and version your code. The combination of these two things, you'll effectively have an easy way of mapping a result set to a specific version of the code.
H: Why does logistic regression in Spark and R return different models for the same data? I've compared the logistic regression models on R (glm) and on Spark (LogisticRegressionWithLBFGS) on a dataset of 390 obs. of 14 variables. The results are completely different in the intercept and the weights. How to explain this? Here is the results of Spark (LogisticRegressionWithLBFGS) : model.intercept : 1.119830027739959 model.weights : GEST 0.30798496002530473 DILATE 0.28121771009716895 EFFACE 0.01780105068588628 CONSIS -0.22782058111362183 CONTR -0.8094592237248102 MEMBRAN-1.788173534959893 AGE -0.05285751197750732 STRAT -1.6650305527536942 GRAVID 0.38324952943210994 PARIT -0.9463956993328745 DIAB 0.18151162744507293 TRANSF -0.7413500749909346 GEMEL 1.5953124037323745 Here is the result of R : Estimate Std. Error z value Pr(>|z|) (Intercept) 3.0682091 3.3944407 0.904 0.366052 GEST 0.0086545 0.1494487 0.058 0.953821 DILATE 0.4898586 0.2049361 2.390 0.016835 * EFFACE 0.0131834 0.0059331 2.222 0.026283 * CONSIS 0.1598426 0.2332670 0.685 0.493196 CONTR 0.0008504 0.5788959 0.001 0.998828 MEMBRAN -1.5497870 0.4215416 -3.676 0.000236 *** AGE -0.0420145 0.0326184 -1.288 0.197725 STRAT -0.3781365 0.5860476 -0.645 0.518777 GRAVID 0.1866430 0.1522925 1.226 0.220366 PARIT -0.6493312 0.2357530 -2.754 0.005882 ** DIAB 0.0335458 0.2163165 0.155 0.876760 TRANSF -0.6239330 0.3396592 -1.837 0.066219 . GEMEL 2.2767331 1.0995245 2.071 0.038391 * Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 AI: A quick glance at the docs for LogisticRegressionWithLBFGS indicates that it uses feature scaling and L2-Regularization by default. I suspect that R's glm is returning a maximum likelihood estimate of the model while Spark's LogisticRegressionWithLBFGS is returning a regularized model estimate. Note how the estimated model weights of the Spark model are all smaller in magnitude than those in the R model. I'm not sure whether or not glm in R is implementing feature scaling, but this would also contribute to different model values.
H: Creating Strings corresponding to Location Co-ordinates What are some Python libraries which can convert a (X,Y) tuple to strings? (1.23,4.56) yields strings “1_4”, “12_45”, “123_456”. AI: Your question has nothing to do with NLP or text-mining (as you claim in the attached tags), or data science in general. It's a pure programming question best suited for StackOverflow. Moreover, you don't really need any libraries to do what you want to do. A simple function will do. NOTE: I am using map and reduce functions on purpose to include at least a little bit of data science-related stuff (WINK):

from functools import reduce

coord = (1.23, 4.56)

def get_coord(coord):
    coord_str = list(map(lambda x: str(x).replace('.', ''), coord))
    for level in range(1, len(coord_str[0]) + 1):
        yield reduce(lambda x, y: x[:level] + '_' + y[:level], coord_str)

print(list(get_coord(coord)))

Running this code will result in printing: ['1_4', '12_45', '123_456'] You're not discussing any corner cases in your question, so I assume there are none.
H: What is the "dying ReLU" problem in neural networks? Referring to the Stanford course notes on Convolutional Neural Networks for Visual Recognition, a paragraph says: "Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be "dead" (i.e. neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue." What does dying of neurons here mean? Could you please provide an intuitive explanation in simpler terms. AI: A "dead" ReLU always outputs the same value (zero as it happens, but that is not important) for any input. Probably this is arrived at by learning a large negative bias term for its weights. In turn, that means that it takes no role in discriminating between inputs. For classification, you could visualise this as a decision plane outside of all possible input data. Once a ReLU ends up in this state, it is unlikely to recover, because the function gradient at 0 is also 0, so gradient descent learning will not alter the weights. "Leaky" ReLUs with a small positive gradient for negative inputs (y=0.01x when x < 0 say) are one attempt to address this issue and give a chance to recover. The sigmoid and tanh neurons can suffer from similar problems as their values saturate, but there is always at least a small gradient allowing them to recover in the long term.
H: Library/package/tool for geographical data visualizations I am looking for a good free existing tool which visualizes geographical data (let's say in the form of coordinates) by plotting them on a map. It can be a library (see this question on StackOverflow, which suggests a Python library called basemap, which is interesting but not dynamic enough, namely it does not allow for interactivity) or a complete toolkit. Existing things I found are oriented towards realizing web pages, which are not my ultimate goal (see Exhibit or Modest Maps). I'd like something to feed with data which spits out an interactive map where you can click on places and it displays the related data. AI: Assuming you want your visualizations to be available on the internet, there are many options for this: Google Maps (and Google Fusion Tables) Mapbox (and the related LeafletJS) CartoDB D3 It really comes down to exactly what you want to do and how much control you want to have over the map. Additionally you should consider what sort of interactivity you need. Google Maps is probably the most user friendly for very basic map functions but is somewhat limited in what you can do stylistically. Mapbox and CartoDB are both user friendly and offer good options for styling and displaying different varieties of data. However, they also both tend to require subscription fees for anything other than small maps. Also, the last time I checked, CartoDB explicitly handles animation and time-series data where Mapbox does not. D3 will probably give you the most control over display, animation, and interactivity but also has a long learning curve. Even if you don't want the map to be available on the internet, this is still a very good tool for making interactive visualizations that run in the browser. If you don't care as much about the visualizations being online, you can get a lot of work done in open GIS software like QGIS or GrassGIS, though I don't know if user interactivity is really an option there. As I said though, it really comes down to the specifics of exactly what you're trying to do and how comfortable you are with various aspects of mapping and coding.
H: Add Custom Labels to NLTK Information Extractor I am working on an information extractor specifically purposed with parsing relationships between entities such as movies, directors, and actors. NLTK appears to provide the necessary tools to construct such a system. However, it is not clear how one would go about adding custom labels (e.g. actor, director, movie title). Similarly, Chapter 7 of the NLTK Book discusses information extraction using a named entity recognizer, but it glosses over labeling details. So, I have two questions: How would I add custom labels? If I have bare lists of relevant named entities (e.g. movies, actors, etc.), how can I include them as features? It appears that I would need to use IOB format, but I am unsure about how to do this when I only have lists of named entities. AI: Once you have your own lists of named entities, and you're only interested in extracting the relations, I believe there are simpler solutions (although I never tried relation extraction in NLTK, so I might be wrong): ReVerb - a tool written in Java. Once it produces the results, you can simply keep the rows, where your labels are present as objects of the relation. OpenIE - the successor of ReVerb (also written in Java). The authors claim better perfomance, and the output might be more informative. IEPY - a relation extraction tool in Python. You should be able to provide your own labels/named entities using gazetees. MITIE - this library has bindings in Python, and it offers relation extraction functionality.
H: In Weka, how to draw learning curve evaluated on both test and training set? This is just for finding overfitting gap. After initial research, I can only find method to draw learning curve using evaluation of test set. However, I could not evaluate on training set and over the two learning curves. AI: This is only possible with KnowledgeFlow. In WekaManual.pdf (which is included in Weka package) for version 3.7.12 there is an example in Chapter 7.4.2 "Plotting multiple ROC curves" with picture and step-by-step instructions. For other Weka versions it is the same, just find the appropriate chapter. To give an impression on how it goes, I extracted the picture from the manual. It will draw two curves for two classifiers. For your question it is very similar. You use one classifier and then connect trainingSet to one ClassifierPerformanceEvaluator and testSet to another.
H: What kind of research can be done with an email data set? I found a data set called Enron Email Dataset. It is possibly the only substantial collection of "real" email that is public. I found some prior analysis of this work: A paper describing the Enron data was presented at the 2004 CEAS conference. Some experiments associated with this data are described on Ron Bekkerman's home page Parakweet has released an open source set of Enron sentence data, labeled for speech acts. Work at the University of Pennsylvania includes a query dataset for email search as well as a tool for generating spelling errors based on the Enron corpus. I'm looking for some interesting current trend topics to work with.please give me some suggestions. AI: You're learning, are you? Try to find something easy and interesting to start. Why don't you start off something easy like building a Bayesian model to predict which email will get deleted. You should glance over those deleted emails, are they spams? are they just garbage? Here, you have a simply supervised model where the data-set already labels the emails for you (deleted or not). Think of something easy like words, titles, length of the email etc, see if you can build a model that predicts email deletion.
H: Ranking Bias in Learning to Rank Users tend to click on results ranked highly by search engines much more often than those ranked lower. How do you train a search engine using click data / search logs without this bias? I.e. you don't want to teach the search engine that the results that are currently ranked highly should necessarily continue to be ranked highly just because they were frequently clicked. AI: Have you seen this paper? Optimizing Search Engines using Clickthrough Data I stumbled upon this the other day, and I'm still reading through it, but the author attempts to deal with the problem you describe. You may also find Improving Web Search Ranking by Incorporating User Behavior Information useful.
H: Best or recommended R package for logit and probit regression Could somebody please recommend a good R package for doing logit and probit regression? I have tried to find an answer by searching on Google but all the links I find go into lengthy explanations about what logit regression is, which I already know, but nobody seems to recommend an R package. Thanks in advance. Jerome Smith AI: Unless you have some very specific or exotic requirements, in order to perform logistic (logit and probit) regression analysis in R, you can use standard (built-in and loaded by default) stats package. In particular, you can use glm() function, as shown in the following nice tutorials from UCLA: logit in R tutorial and probit in R tutorial. If you are interested in multinomial logistic regression, this UCLA tutorial might be helpful (you can use glm() or packages, such as glmnet or mlogit). For the above-mentioned very specific or exotic requirements, many other R packages are available, for example logistf (http://cran.r-project.org/web/packages/logistf) or elrm (http://cran.r-project.org/web/packages/elrm). I also recommend another nice tutorial on GLMs from Princeton University (by Germán Rodríguez), which discusses some modeling aspects, not addressed in the UCLA materials, in particular updating models and model selection.
H: Fast k-means like algorithm for $10^{10}$ points? I am looking to do k-means clustering on a set of 10-dimensional points. The catch: there are $10^{10}$ points. I am looking for just the center and size of the largest clusters (let's say 10 to 100 clusters); I don't care about what cluster each point ends up in. Using k-means specifically is not important; I am just looking for a similar effect, any approximate k-means or related algorithm would be great (minibatch-SGD means, ...). Since GMM is in a sense the same problem as k-means, doing GMM on the same size data is also interesting. At this scale, subsampling the data probably doesn't change the result significantly: the odds of finding the same top 10 clusters using a 1/10000th sample of the data are very good. But even then, that is a $10^6$ point problem which is on/beyond the edge of tractable. AI: k-means is based on averages. It models clusters using means, and thus the improvement by adding more data is marginal. The error of the average estimation reduces with 1/sqrt(n); so adding more data pays off less and less... Strategies for such large data always revolve around sampling: If you want sublinear runtime, you have to do sampling! In fact, Mini-Batch-Kmeans etc. do exactly this: repeatedly sample from the data set. However, sampling (in particular unbiased sampling) isn't exactly free either... usually, you will have to read your data linearly to sample, because you don't get random access to individual records. I'd go with MacQueen's algorithm. It's online; by default it does a single pass over your data (although it is popular to iterate this). It's not easy to distribute, but I guess you can afford to linearly read your data say 10 times from a SSD?
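If you end up doing this in Python, a minimal scikit-learn sketch of the mini-batch / single-pass idea might look like the following. The load_chunks generator is a stand-in for whatever streams your 10-dimensional points from disk, and the cluster count and batch size are arbitrary placeholders, not recommendations:
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Hypothetical generator: replace with code that streams chunks of your
# real 10-dimensional points from disk or from a database cursor.
def load_chunks(n_chunks=100, chunk_size=100000, dim=10):
    rng = np.random.RandomState(0)
    for _ in range(n_chunks):
        yield rng.rand(chunk_size, dim)

kmeans = MiniBatchKMeans(n_clusters=50, batch_size=10000, random_state=0)

# A single linear pass over the data, MacQueen-style.
for chunk in load_chunks():
    kmeans.partial_fit(chunk)

# Cluster centres, plus cluster sizes estimated on one sampled chunk.
sample = next(load_chunks(n_chunks=1))
sizes = np.bincount(kmeans.predict(sample), minlength=50)
print(kmeans.cluster_centers_.shape, sorted(sizes, reverse=True)[:10])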
H: Preparation for Career in Data Analysis Without College I dropped out of college but am interested in a career in data analysis. Now I am self-studying approximately 10 hours per day. Browsing through job postings on Linkedin has allowed me to compose a rough curriculum. It would be of great help to me if you would either add a subject I have omitted or eliminate a subject that is not necessary for success in the market place. Curriculum (in 3-subject groupings): Group 1 Single-variable calculus Intro to python SQL Group 2 Multi-variable calculus/linear algebra Discrete math Data structures and algorithms Group 3 Calculus-based statistics and probability Hadoop stack Differential equations Group 4 Statistical learning/predictive modelling Python data analysis techniques/Statistical programming in R Fundamentals of machine learning All the while I plan to practice using any data sets I can find online. Will this be sufficient to land a job in data analysis? Of course I plan to learn far more than just this, but is this foundation solid enough to land an entry level data engineering/science position? AI: At least based on what I and other data analysts/scientists do in my company, your technical topics list seems sufficient. But I would also add: Visualization (ggplot2 in R, matplotlib in Python, d3.js for really cool stuff) Design of experiments Communication skills are also quite important. For more inspiration, here's a good "curriculum" represented as a metro map: http://nirvacana.com/thoughts/becoming-a-data-scientist/ Let me also recommend building up a portfolio of data science projects. This could consist of your analyses of data sets you find online (e.g. on UCI), Kaggle competitions, or class projects (e.g. via Udacity or Coursera). That way, you can give direct proof of your technical skills, your communication in the form of reports or graphics, and your ability to extract insight.
H: LinkedIn web scraping I recently discovered a new R package for connecting to the LinkedIn API. Unfortunately the LinkedIn API seems pretty limited to begin with; for example, you can only get basic data on companies, and this is detached from data on individuals. I'd like to get data on all employees of a given company, which you can do manually on the site but is not possible through the API. import.io would be perfect if it recognised the LinkedIn pagination (see end of page). Does anyone know any web scraping tools or techniques applicable to the current format of the LinkedIn site, or ways of bending the API to carry out more flexible analysis? Preferably in R or web based, but certainly open to other approaches. AI: Beautiful Soup is designed for parsing and scraping HTML pages (you would pair it with an HTTP library to actually fetch them), but it is a Python library rather than an R one.
H: Learning time of arrival (ETA) from historical location data of vehicle I have location data of taxis moving around the city sourced from: Microsoft Research Overall it has around 17million data points. I have converted the data to JSON and filled up mongo. A sample looks like this: {'lat': 39.84349, 'timestamp': '2008-02-08 17:38:10', 'lon': 116.33986, 'ID': 1131} {'lat': 39.84441, 'timestamp': '2008-02-08 17:38:15', 'lon': 116.33995, 'ID': 1131} {'lat': 39.8453, 'timestamp': '2008-02-08 17:38:20', 'lon': 116.34004, 'ID': 1131} {'lat': 39.84615, 'timestamp': '2008-02-08 17:38:25', 'lon': 116.34012, 'ID': 1131} {'lat': 39.84705, 'timestamp': '2008-02-08 17:38:30', 'lon': 116.34022, 'ID': 1131} {'lat': 39.84891, 'timestamp': '2008-02-08 17:38:40', 'lon': 116.34039, 'ID': 1131} {'lat': 39.85083, 'timestamp': '2008-02-08 17:38:50', 'lon': 116.3406, 'ID': 1131} It consists of a taxi ID (the ID field) and a timestamp for each latitude and longitude combination. My question is: I want to use this data to calculate the estimated time of arrival (ETA). So far, I am doing it in a crude way by querying MongoDB with aggregation. It is totally inefficient. I am looking at some sort of learning algorithm where the historical data can be used to train it. In the end, given two points, the algorithm should traverse the possible route by referring to historical data and give an estimate of time. Calculating the time estimate is not a problem at all if I get the array of JSON documents between the points. But getting those right arrays is. Any pointers in this direction will be very helpful. AI: Based on what I figured out from your problem: 1. You can easily convert your data to a graph using Networkx, igraph or any other tool/library/software. What you need then is a Shortest Path Algorithm (Dijkstra is widely used and implemented in all graph/network analysis software). Once you have created the graph you can simply calculate the average estimated time. For turning the problem into a Learning Problem, you can use historical time estimations for different paths and assign a weight to an edge proportional to the properties of that edge (e.g. traffic jam probability, time conditions) and try to predict the ETA for a new query. A small sketch of this graph construction is given below. 2. You can also turn it into a Network Science Problem and use graph-theoretic approaches to attack the question. You can start with statistical analysis of node and edge attributes, e.g. passing time distribution, shortest path length distribution, probabilistic modeling of traffic jams and so on, to see if some meaningful insight leads you to the next step. Another idea is to use graph clustering algorithms to extract the most connected parts of the town and go through the analysis of those, i.e. calculate the ETA for different parts instead of the whole data and assign the estimated time to the members of the corresponding cluster, reducing the computational complexity of your algorithm. 3. Last but not least, have a look at ArangoDB. It's a new database model which is based on graphs and you can run queries on millions of edges at amazing speed! All you need is a bit of JavaScript knowledge, and even if you don't have it you can use the AQL query language designed for ArangoDB. The interesting point is that it uses JSON files as the standard data format, so you are already halfway there ;) Hope I could help :) Good luck!
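Here is the small networkx sketch mentioned in point 1. It is only a sketch under assumptions I am making up: the data has been exported to a CSV with columns ID, timestamp, lat, lon; GPS points are snapped to a coarse lat/lon grid so repeated traversals map onto the same nodes; and the example source/target coordinates are placeholders, not taken from the real data:
import networkx as nx
import pandas as pd

# Assumed export of the Mongo collection: columns ID, timestamp, lat, lon.
df = pd.read_csv("taxi_points.csv")
df = df.sort_values(["ID", "timestamp"])

# Snap GPS points to a coarse grid so repeated traversals share the same nodes.
df["node"] = list(zip(df["lat"].round(3), df["lon"].round(3)))

G = nx.DiGraph()
for _, trip in df.groupby("ID"):
    t = pd.to_datetime(trip["timestamp"])
    seconds = t.diff().dt.total_seconds().iloc[1:]
    pairs = zip(trip["node"].iloc[:-1], trip["node"].iloc[1:])
    for (u, v), dt in zip(pairs, seconds):
        if u == v:
            continue
        if G.has_edge(u, v):
            # Keep a running mean travel time per edge as the weight.
            d = G[u][v]
            d["time"] = (d["time"] * d["n"] + dt) / (d["n"] + 1)
            d["n"] += 1
        else:
            G.add_edge(u, v, time=dt, n=1)

# ETA between two snapped points = length of the fastest historical path.
# The coordinates below are only placeholders.
eta = nx.shortest_path_length(G, source=(39.843, 116.340),
                              target=(39.851, 116.341), weight="time")
print("estimated travel time in seconds:", eta)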
H: Attributing causality to single quasi-independent variable Apologies if this isn't the correct place to ask - I'm not sure if this fits best with Stats or Data Science. I'm using analytics to help marketers identify which attributes of their users correspond to successful conversions (such as someone buying a product, signing up for a newsletter, or subscribing to a service). Attributes could be things like which site they came from (referrer), their location, time/day of week, device type, browser, etc. What I'd like to say (although I'm not certain it's possible) is to isolate differences in conversion rate to an individual attribute, something like, '11% of your users from Facebook converted whereas only 3% of non-Facebook users converted', which would mean that the attribute 'referrer' and the level of the attribute 'Facebook' are responsible for driving conversions. Given that I may have 100s of quasi-independent variables, is it even possible to isolate the effect to one variable and one level of that variable? As opposed to a combination of them that is more likely to be driving the difference? If so, what technique or conceptual paradigm do I use to identify which variable-level is responsible for the greatest lift in my dependent variable, conversion rate? AI: I would suggest considering a direct dimensionality reduction approach. Check my relevant answer on this site. Another valid option is to use latent variable modeling, for example, structural equation modeling. You can start with the relevant articles on Wikipedia (this and this, correspondingly) and then, as needed, read more specialized or more practical articles, papers and books.
H: How to subset rows from a data frame with comparison operators in R I have a data frame (a csv file) with dimensions 100x6 and I need only the columns c("X1", "X2", "X4") and the rows in which the value of "X1" is greater than 30. So I did: data_frame <- read.csv("data_frame") data_frame [c("X1", "X2", "X4")] The column subset problem is solved but now I need to subset rows from data_frame [c("X1", "X2", "X4")] where the value of "X1" is greater than 30. I tried: data_frame [c("X1" > 30), c("X1", "X2", "X4")] But it returned the same data frame as data_frame [c("X1", "X2", "X4")]. I also tried using the function subset() with the same approach but got the same results. AI: You want data_frame[data_frame$X1 > 30, c("X1","X2","X4")] That will just print it; you probably want to update data_frame or store the result in something else: data_frame = data_frame[data_frame$X1 > 30, c("X1","X2","X4")] Also, you probably want to try asking this on StackOverflow or reading a bit more basic R documentation, because it should be well covered there. It's a bit simple to be "data science".
H: Convolutional neural network for sparse one-hot representation I have some basic features which I encoded in a one-hot vector. Length of the feature vector equals to 400. It is sparse. I saw that conv nets is applied to a dense feature vectors. Is there any problems to apply conv nets to a sparse feature vectors? AI: I would not apply convolutional neural networks to your problem (at least from what I can gather from the description). Convolutional nets' strengths and weaknesses are related to a core assumption in the model class: Translating patterns of features in a regular way either has a minor impact on the outcome, or has a specific useful meaning. So a pattern 1 0 1 seen in features 9,10,11 is similar in some way to the same pattern seen in features 15,16,17. Having this assumption built in to the model allows you to train a network with far fewer free parameters when dealing with e.g. image data, where this is a key property of data captured by scanners and cameras. With one-hot encoding of features, you assign a feature vector index from a value or category essentially at random (via some hashing function). There is no meaning to translations between indices of the feature vectors. The patterns 0 0 1 0 1 0 0 and 0 0 0 1 0 1 0 can represent entirely different things, and any associations between them are purely by chance. You can treat a sparse one-hot encoding as an image if you wish, but there is no good reason to do so, and models that assume translations can be made whilst preserving meaning will not do well. For such a small sparse feature vector, assuming you want to try a neural network model, use a simple fully-connected network.
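For what it's worth, a minimal fully-connected sketch in Keras (assuming a binary target; the layer sizes, epochs and the fake data are arbitrary placeholders, not recommendations) could look like this:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Fake data: 1000 sparse one-hot feature vectors of length 400, binary labels.
rng = np.random.RandomState(0)
X = (rng.rand(1000, 400) < 0.02).astype("float32")
y = rng.randint(0, 2, size=1000)

model = Sequential([
    Dense(64, activation="relu", input_dim=400),  # fully connected, no convolution
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)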
H: Using attributes to classify/cluster user profiles I have a dataset of users purchasing products from a website. The attributes I have are user id, region(state) of the user, the categories id of product, keywords id of product, keywords id of website, and sales amount spent of the product. The goal is to use the information of a product and website to identity who the users are, such as "male young gamer" or "stay at home mom". I attached a sample picture as below: There are all together 1940 unique categories and 13845 unique keywords for products. For the website, there are 13063 unique keywords. The whole dataset is huge as that's the daily logging data. I am thinking of clustering, as those are unsupervised, but those id are ordered number having no numeric meaning. Then I don't know how to apply the algorithm. I am also thinking of classification. If I add a column of class based on the sales amount of product purchased. I think clustering is more preferred. I don't know what algorithm I should use for this case as the dimensions of the keywords id could be more than 10000 (each product could have many keywords, so does website). I need to use Spark for this project. Can anyone help me out with some ideas or suggestions? Thank you so much! AI: Right now, I only have time for a very brief answer, but I'll try to expand on it later on. What you want to do is a clustering, since you want to discover some labels for your data. (As opposed to a classification, where you would have labels for at least some of the data and you would like to label the rest). In order to perform a clustering on your users, you need to have them as some kind of points in an abstract space. Then you will measure distances between points, and say that points that are "near" are "similar", and label them according to their place in that space. You need to transform your data into something that looks like a user profile, i.e.: a user ID, followed by a vector of numbers that represent the features of this user. In your case, each feature could be a "category of website" or a "category of product", and the number could be the amount of dollars spent in that feature. Or a feature could be a combination of web and product, of course. As an example, let us imagine the user profile with just three features: dollars spent in "techy" webs, dollars spent on "fashion" products, and dollars spent on "aggressive" video games on "family-oriented" webs (who knows). In order to build those profiles, you need to map the "categories" and "keywords" that you have, which are too plentiful, into the features you think are relevant. Look into topic modeling or semantic similarity to do so. Once that map is built, it will state that all dollars spent on webs with keywords "gadget", "electronics", "programming", and X others, should all be aggregated into our first feature; and so on. Do not be afraid of "imposing" the features! You will need to refine them and maybe completely change them once you have clustered the users. Once you have user profiles, proceed to cluster them using k-means or whatever else you think is interesting. Whatever technique you use, you will be interested in getting the "representative" point for each cluster. This is usually the geometric "center" of the points in that cluster. Plot those "representative" points, and also plot how they compare to other clusters. Using a radar chart is very useful here. 
Any salient feature (something in the representative point that is very marked, and is also very prominent in comparison to the other clusters) is a good candidate to help you label the cluster with some catchy phrase ("nerds", "fashionistas", "aggressive moms" ...). Remember that clustering is an open problem, so there is no single "right" solution! And I think my answer is quite long already; also look into normalization of the profiles and filtering of outliers. A small sketch of the profile-building and clustering step follows below.
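To make the profile-building step a bit more concrete, here is a minimal sketch in Python/pandas with an invented purchase log and invented feature names; in Spark you would do the equivalent aggregation with DataFrame groupBy/pivot and MLlib's KMeans:
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Invented long-format log: one row per purchase, already mapped to coarse features.
log = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3],
    "feature": ["techy_web", "fashion_product", "techy_web",
                "techy_web", "fashion_product", "family_games"],
    "dollars": [20.0, 5.0, 100.0, 30.0, 60.0, 15.0],
})

# One row per user, one column per feature, dollars summed -> the user profile.
profiles = log.pivot_table(index="user_id", columns="feature",
                           values="dollars", aggfunc="sum", fill_value=0)

# Normalise each profile so clustering compares spending patterns, not totals.
X = normalize(profiles.values, norm="l1")

km = KMeans(n_clusters=2, random_state=0).fit(X)
profiles["cluster"] = km.labels_
print(km.cluster_centers_)  # the "representative" point of each cluster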
H: How to scale an array of signed integers to range from 0 to 1? I'm using Brain to train a neural network on a feature set that includes both positive and negative values. But Brain requires input values between 0 and 1. What's the best way to normalize my data? AI: This is called unity-based normalization. If you have a vector $X$, you can obtain a normalized version of it, say $Z$, by doing: $$Z = \frac{X - \min(X)}{\max(X) - \min(X)}$$
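If you happen to prepare the features in Python before handing them to Brain, a tiny numpy version of the formula is below; the same one-liner translates directly to JavaScript:
import numpy as np

x = np.array([-3.0, -1.5, 0.0, 2.0, 5.0])   # feature values, including negatives
z = (x - x.min()) / (x.max() - x.min())     # unity-based normalization to [0, 1]
print(z)                                    # -> [0. 0.1875 0.375 0.625 1.]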
H: What techniques are used to understand call patterns? I have customer data since 2013 and there is a file which has the customer unique id, a timestamp, and the reason for the call (a drop-down from the person who handled the call). I did some cumulative counts based on customer ID and the timestamp and I saw that one customer alone called in over 1000 times. What's the best way to make sense of the call driver data when I'm looking at millions of rows and around 200 categories of call types? Is there a broader topic which looks into 'downstream' issues or predicting the probability of future calls or events? The end goal would be to visualize these calling patterns and focus on reducing the call backs. This is a specific problem but it seems like it should be common and I can learn about addressing it in a bigger picture manner. AI: When you said "a broader topic", did you mean what algorithms to use to examine the event log with the goal of reducing future calls from the same customer on the same topic? In other words, a customer may call for help for a different topic. If a customer calls in for the same topic repetitively, something can be improved. You may get ideas from the Coursera class Process Mining, since the issue you're solving is similar to the example of a spaghetti process in lecture 6.7: "Spaghetti process describing the diagnosis and treatment of 2765 patients in a Dutch hospital. The process model was constructed based on an event log containing 114,592 events. There are 619 different activities (taking event types into account) executed by 266 different individuals (doctors, nurses, etc.)." By the way, you can click the drop down menu under "Sessions", choose "April 1, 2015 to May 19, 2015" and then you can register and view the lectures. On the right of each lecture, there are some icons. The first icon is to download the slides for the lecture. You may find that reading the slides is faster than listening to the lecture. Suppose a customer called for installation of software release 1.5 and then called a day later about running the new features of software release 1.5. Are these two issues logged as two of the 200 call categories? If so, we can use a time period (say one week) to judge whether it is about the same topic. Within a short time period, a customer is likely to work on the same topic, especially with the same key words such as "software release 1.5". We can coach the call center employees on solving possible follow up questions by saying something like "now that we finished installation, let me show you a couple of new features. It'll save you time." This will reduce the number of calls on the same topic from the same customer.
H: How deep should one's linear algebra knowledge be before starting data science? How important is linear algebra to being a data scientist? Are we talking college postgraduate level? AI: I think it truly depends on what you decide to specialize in. Data science is a very broad field, and you can actually work with data without knowing what eigenvalues and eigenvectors are. However, if you want to acquire an intermediate/advanced understanding of statistics or machine learning, you need at least an intermediate/advanced knowledge of linear algebra. I suggest taking an introductory linear algebra class on a MOOC - just to have a more precise idea of what linear algebra is - and then studying some other topics that you are interested in. Linear algebra is a useful tool, but it can be very boring, especially if you are an "applied" kind of guy. Moreover, I think that some concepts like eigenvalues or eigenvectors are easier to understand when seen in an applied context, e.g. principal component analysis.
H: How to create a good list of stopwords I am looking for some hints on how to curate a list of stopwords. Does someone know / can someone recommend a good method to extract stopword lists from the dataset itself for preprocessing and filtering? The data: a huge amount of human text input of variable length (search terms and whole sentences (up to 200 characters)) over several years. The text contains a lot of spam (like machine input from bots, single words, stupid searches, product searches ...) and only a few percent seem to be useful. I realised that sometimes (only very rarely) people search my site by asking really cool questions. These questions are so cool that I think it is worth having a deeper look into them, to see how people search over time and what topics people have been interested in on my website. My problem is that I am really struggling with the preprocessing (i.e. dropping the spam). I already tried some stopword lists from the web (NLTK etc.), but these don't really help my needs regarding this dataset. Thanks for your ideas and discussion folks! AI: One approach would be to use the tf-idf score. The words which occur in most of the queries will be of little help in differentiating the good search queries from bad ones. But words which occur very frequently (high tf, or term frequency) in only a few queries (high idf, or inverse document frequency) are likely to be more important in distinguishing the good queries from the bad ones. A small sketch of this idea follows below.
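A plain-Python sketch of this idea, using document frequency as a rough stand-in for the idf part. The toy queries and the 40% threshold are made up; on a real query log of millions of entries you would tune the cut-off and probably use a proper tokenizer instead of split():
from collections import Counter

queries = [
    "cheap flights to rome",
    "how to learn python",
    "cheap hotels in rome",
    "python pandas tutorial",
]  # stand-in for your real query log

# Document frequency: in how many queries does each token appear?
df = Counter()
for q in queries:
    df.update(set(q.lower().split()))

n = len(queries)
print(df.most_common(10))

# Tokens that show up in a large fraction of queries carry little information
# and are candidates for a corpus-specific stopword list.
stopword_candidates = sorted(w for w, c in df.items() if c / n > 0.4)
print(stopword_candidates)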
H: How to cluster a link traversal dataset I'm using Google Analytics on my mobile app to see how different users use the app. I draw a path based on the pages they move to. Given a list of paths for, say, 100 users, how do I go about clustering the users? Which algorithm should I use? By the way, I'm thinking of using the scikit-learn package for the implementation. My dataset (in csv) would look like this:
DeviceID,Pageid,Time_spent_on_Page,Transition
ABC,Page1, 3s, 1->2
ABC,Page2, 2s, 2->4
ABC,Page4,1s,4->1
So the path here is 1->2->4->1, where 1, 2, 4 are Pageids. AI: @Shagun's answer is right actually. I'll just expand on it! There are 2 different approaches to your problem: Graph Approach As stated in @Shagun's answer you have a weighted directed graph and you want to cluster the paths. I mention this again because it's important to know that your problem is not a Graph Clustering or Community Detection problem, where vertices are clustered! Constructing a graph in networkx using the last two columns of the data, you can add time spent as a weight and the users who passed that link as an edge attribute. After all this you'll have different features for clustering: the set of all vertices an individual ever met in the graph; total, mean and std of time spent; shortest path distribution parameters, ... which can be used for clustering the user behaviors. Standard Data All of the above can be done by reading the data efficiently into a matrix. If you consider each edge for a specified user as a single row (i.e. you'll have MxN rows where M is the number of users and N the number of edges, in case you stick with the 100-user case!) and add properties as columns, you'll probably be able to cluster behaviors. If a user passed an edge n times, in the row corresponding to that user and that edge add a count column with value n, and the same for time spent, etc. Starting and ending edges are also informative. Be careful that node names are categorical variables. A small sketch of this matrix construction follows below. Regarding clustering algorithms you can find enough if you have a quick look at scikit-learn. Hope it helped. Good Luck :)
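A small pandas sketch of the "Standard Data" matrix described above; the column names come from the sample in the question, and the "3s"-style time format conversion is an assumption about how the time column is stored:
import pandas as pd

# Columns assumed: DeviceID, Pageid, Time_spent_on_Page (like "3s"), Transition (like "1->2").
df = pd.read_csv("ga_paths.csv")
df["seconds"] = df["Time_spent_on_Page"].str.strip().str.rstrip("s").astype(float)

# One row per user, one column per edge: how often the user traversed that edge...
counts = pd.crosstab(df["DeviceID"], df["Transition"])

# ...and how much time they spent on it in total.
times = df.pivot_table(index="DeviceID", columns="Transition",
                       values="seconds", aggfunc="sum", fill_value=0)

features = counts.join(times, lsuffix="_count", rsuffix="_time")
# 'features' is now a purely numeric user x edge matrix that can be fed to
# any of the clustering algorithms in scikit-learn.
print(features.head())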
H: How to install rattle in CentOS While running rattle on my system I am getting this error: rattle() Error: attempt to apply non-function In addition: Warning message: In method(obj, ...) : Unknown internal child: selection I am using R version 3.1.0 (2014-04-10) AI: I found the answer: some packages had to be installed from the terminal first. After installing them, rattle works. sudo yum install gtk+-devel gtk2-devel
H: change in variable importance I have a multi-year dataset. Each time frame of the data has a different predictor importance. Say, for example, I am slicing the data into two partitions as follows: a dataset for the year 2014 (the whole year) and one for January 2015. When I look at the predictor importance, the important predictor variables are different for the two partitions. (1) Hence I am not able to arrive at one unique decision tree which can explain the model well. (2) I am not able to train a model which can predict new data correctly. Is there anything I am doing wrong here? AI: You can probably use multiple decision trees - one for each slice of the dataset - and then combine the results from each decision tree. You can also weight the results from each decision tree, e.g. the dataset for 2015 can be given more weight than the dataset for 2014. I do not see much harm in splitting the dataset and training different trees, as it helps to account for the predictor values in a more accurate manner. A tiny sketch of this weighting idea follows below. If you could provide more details about the data or the "variable" you mentioned, people would be able to help more.
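A tiny sketch of the weighting idea in scikit-learn; the fake data, the tree depth and the 0.7/0.3 weights are placeholders you would replace with your own slices and a weighting you trust:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_2014, y_2014 = rng.rand(200, 5), rng.randint(0, 2, 200)   # stand-in for the 2014 slice
X_2015, y_2015 = rng.rand(100, 5), rng.randint(0, 2, 100)   # stand-in for the Jan 2015 slice
X_new = rng.rand(10, 5)                                     # data to predict

tree_2014 = DecisionTreeClassifier(max_depth=4).fit(X_2014, y_2014)
tree_2015 = DecisionTreeClassifier(max_depth=4).fit(X_2015, y_2015)

# Give the more recent slice a higher weight (0.7 is arbitrary).
w_2014, w_2015 = 0.3, 0.7
proba = w_2014 * tree_2014.predict_proba(X_new) + w_2015 * tree_2015.predict_proba(X_new)
print(proba.argmax(axis=1))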
H: Wordnet Lemmatization I tried to find out about exception lists in WordNet lemmatizers. "Morphy() uses inflectional ending rules and exception lists to handle different possibilities", which I read from http://www.nltk.org/howto/wordnet.html . Can you explain what an exception list is? Thank you. AI: The exception list files are used to help the processor find base forms from 'irregular inflections' according to the man page. They mean that some words, when plural or in a different tense, can't be algorithmically processed to find the base/root word. More details can be found in the morphy man page. I'm not a language processing expert, but this is likely a result of English words that 'break the rules'. If you think about the code like a human trying to learn English: the student learns rules to use (algorithm) and then has to memorize exceptions to the rules (exception lists). An over-simplified analogy that does not involve endings/conjugation would be a spell-checking program. An algorithm might check for 'i before e, except after c' but would first have to check the word against an exception list to make sure it isn't 'weird' or 'caffeine' - please don't start a linguistics fight about this rule, I am not commenting on the validity of it/that's not the point I'd like to make.
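A quick way to see the rules vs. exception lists in action from Python (assuming NLTK and its WordNet corpus are installed):
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet') once

# Regular inflections are handled by the detachment rules...
print(wn.morphy("dogs"))    # dog
# ...while irregular forms are looked up in the exception lists (e.g. noun.exc):
print(wn.morphy("geese"))   # goose
print(wn.morphy("women"))   # woman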
H: getting error:-Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/io/Writable I am trying to connect to hive from java but getting error. I searched in google but not got any helpfull solution. I have added all jars also. The code is:- package mypackage; import java.sql.SQLException; import java.sql.Connection; import java.sql.ResultSet; import java.sql.Statement; import java.sql.DriverManager; public class HiveJdbcClient { private static String driver = "org.apache.hadoop.hive.jdbc.HiveDriver"; public static void main(String[] args) throws SQLException, ClassNotFoundException { Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver"); try { Class.forName(driver); } catch (ClassNotFoundException e) { e.printStackTrace(); System.exit(1); } Connection connect = DriverManager.getConnection("jdbc:hive://master:10000 /default", "", ""); Statement state = connect.createStatement(); String tableName = "mytable"; state.executeQuery("drop table " + tableName); ResultSet res=state.executeQuery("ADD JAR /home/hadoop_home/hive/lib /hive-serdes-1.0-SNAPSHOT.jar"); res = state.executeQuery("create table tweets (id BIGINT,created_at STRING,source STRING,favorited BOOLEAN,retweet_count INT,retweeted_status STRUCT<text:STRING,user:STRUCT<screen_name:STRING,name:STRING>>,entities STRUCT<urls:ARRAY<STRUCT<expanded_url:STRING>>,user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,hashtags:ARRAY<STRUCT<text:STRING>>>,text STRING,user STRUCT<screen_name:STRING,name:STRING,friends_count:INT,followers_count:INT,statuses_count:INT,verified:BOOLEAN,utc_offset:INT,time_zone:STRING>,in_reply_to_screen_name STRING) ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe' LOCATION '/user/flume/tweets'"); String show = "show tables"; System.out.println("Running show"); res = state.executeQuery(show); if (res.next()) { System.out.println(res.getString(1)); } String describe = "describe " + tableName; System.out.println("Running describe"); res = state.executeQuery(describe); while (res.next()) { System.out.println(res.getString(1) + "\t" + res.getString(2)); } } } I am getting these errors:- run: SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/home/hadoop/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/GlassFish_Server/glassfish/modules/weld-osgi-bundle.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/io/Writable at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:198) at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:132) at org.apache.hadoop.hive.jdbc.HiveConnection.configureConnection(HiveConnection.java:133) at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:122) at org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:106) at java.sql.DriverManager.getConnection(DriverManager.java:571) at java.sql.DriverManager.getConnection(DriverManager.java:215) at dp.HiveJdbcClient.main(HiveJdbcClient.java:35) Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.Writable at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 8 more Java Result: 1 AI: I got the answer: one jar file was missing from the classpath - hadoop-common-2.1.0-beta.jar. Adding it solved the problem.
H: Support vector regression and parameters I am doing load forecasting using SVR(kernel='rbf'). How can I find the best values for the parameters C, epsilon and gamma? Thanks. AI: It looks like you are using scikit-learn. In this case use Grid Search Cross Validation or Randomized Search Cross Validation to find the best parameters. sklearn.grid_search.GridSearchCV(estimator, param_grid, scoring=None, cv=None, ...) In these approaches you basically loop over possible sets of your parameters, specified via param_grid. For each set you perform a cross-validation (the default is 3-fold, but you can specify it via the cv parameter). Cross-validation gives you the mean value and deviation of your 'scoring' parameter (e.g. 'accuracy' for classification, 'r2' for regression). The set of parameters with the best 'scoring' is the winner. See the example here. It also shows how to output not only the best set of parameters, but also the "top-N". I find it very useful in building intuition about my model.
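As a concrete sketch, the search could look like the code below; the grid values are arbitrary starting points, and in recent scikit-learn versions GridSearchCV lives in sklearn.model_selection rather than sklearn.grid_search:
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Stand-in for your load-forecasting features X and target y.
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = rng.rand(200)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 0.1, 1],
    "epsilon": [0.01, 0.1, 0.5],
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, search.best_score_)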
H: Are there any unsupervised learning algorithms for time sequenced data? Each observation in my data was collected with a difference of 0.1 seconds. I don't call it a time series because it doesn't have a date and time stamp. In the examples of clustering algorithms (I found online) and PCA the sample data have 1 observation per case and are not timed. But my data have hundreds of observations collected every 0.1 seconds per vehicle and there are many vehicles. Note: I have asked this question on quora as well. AI: What you have is a sequence of events ordered by time, so do not hesitate to call it a time series! Clustering of time series has 2 different meanings: Segmentation of time series, i.e. you want to segment an individual time series into different time intervals according to internal similarities. Time series clustering, i.e. you have several time series and you want to find different clusters according to similarities between them. I assume you mean the second one and here is my suggestion: You have many vehicles and many observations per vehicle, i.e. you have many matrices. So you have several matrices (each vehicle is a matrix) and each matrix contains N rows (number of observations) and T columns (time points). One suggestion could be applying PCA to each matrix to reduce the dimensionality and observing the data in PC space, to see if there are meaningful relations between different observations within a matrix (vehicle). Then you can stack the corresponding observation from all vehicles into one matrix and apply PCA to that, to see relations of a single observation between different vehicles. If you do not have negative values, non-negative Matrix Factorization is strongly recommended for dimension reduction of matrix-form data. Another suggestion could be putting all matrices on top of each other and building an NxMxT tensor, where N is the number of vehicles, M is the number of observations and T is the time sequence, and applying Tensor Decomposition to see relations globally. A very nice approach to Time Series Clustering is shown in this paper, where the implementation is quite straightforward. I hope it helped! Good Luck :) EDIT: As you mentioned that you mean Time Series Segmentation, I add this to the answer. Time series segmentation is the only clustering problem that has a ground truth for evaluation; indeed you consider the generating distribution behind the time series and analyze it. I strongly recommend this, this, this, this, this and this, where your problem is comprehensively studied. Especially the last one and the PhD thesis. Good Luck!
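Going back to the matrix/PCA suggestion above, a minimal sketch (with made-up shapes: 20 vehicles, 8 observation channels, 500 time points) could be:
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# One N x T matrix per vehicle: 8 observation channels, 500 time points each.
vehicles = [rng.rand(8, 500) for _ in range(20)]

# Stack every observation row from every vehicle and project onto 2 components.
X = np.vstack(vehicles)                   # shape (20 * 8, 500)
scores = PCA(n_components=2).fit_transform(X)
print(scores.shape)                       # one 2-d point per observation, ready to plot or cluster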
H: Algorithm for deriving multiple clusters Suppose I have a set of data (with a 2-dimensional feature space), and I want to obtain clusters from it. But I do not know how many clusters will be formed. Yet, I want separate clusters (the number of clusters is more than 2). I figured that k-means or k-medoids cannot be used in this case. Nor can I use hierarchical clustering. Also, since there is no training set, I cannot use a KNN classifier or any other supervised learning method. I cannot use the OPTICS algorithm as I do not want to specify the radius (I don't know the radius). Is there any machine learning technique that would give me multiple clusters (distance-based clustering) and that deals well with outlier points too? This should be the output: AI: I don't think that EM clustering algorithms like k-means and Gaussian mixture models are quite what you're looking for. There are definitely other algorithms that don't require one to pick a number of clusters. My personal favorite (most of the time) is called mean-shift clustering. You can find a great little blog post about it here, and it has a good implementation in python's scikit-learn library.
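A small mean-shift sketch in scikit-learn, with made-up 2-dimensional blobs plus a few outliers; the quantile used to estimate the bandwidth is an arbitrary choice:
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.RandomState(0)
blobs = [rng.randn(100, 2) + c for c in [(0, 0), (6, 6), (0, 8)]]
outliers = rng.uniform(-5, 12, size=(10, 2))
X = np.vstack(blobs + [outliers])

bandwidth = estimate_bandwidth(X, quantile=0.2)           # no need to choose k
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)

print("clusters found:", len(ms.cluster_centers_))
print(np.bincount(ms.labels_))   # tiny clusters will often correspond to outliers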
H: How to paste string and int from map to an array in hive? I am trying to paste a string and an int from a map in Hive into an array. For now, the record looks like this: {"string1":1,"string2":1,"string3":15} Is there a way to convert it to an array like this: ["string1:1","string2:1","string3:15"] AI: Assuming your map is called "M" and you want your array field to be called "A". Since the map is keyed by the strings themselves, index it with the string keys (not a numeric position), and cast the int values to strings before handing them to concat_ws:
SELECT ...
  array(concat_ws(":", "string1", cast(M["string1"] as string)),
        concat_ws(":", "string2", cast(M["string2"] as string)),
        concat_ws(":", "string3", cast(M["string3"] as string))) as A
  ....
FROM table;
H: How to learn spam email detection? I want to learn how a spam email detector is done. I'm not trying to build a commercial product, it'll be a serious learning exercise for me. Therefore, I'm looking for resources, such as existing projects, source code, articles, papers etc that I can follow. I want to learn by examples, I don't think I am good enough to do it from scratch. Ideally, I'd like to get my hands dirty with Bayesian methods. Is there anything like that? Programming language isn't a problem for me. AI: First of all check this carefully. You'll find a simple dataset and some papers to review. BUT as you want to start with a simple learning project, I recommend not going through the papers (which are obviously not basic) but trying to build your own Bayesian learner, which is not so difficult. I personally suggest Andrew Moore's lecture slides on Probabilistic Graphical Models, which are freely available, and you can learn from them simply and step by step. A tiny scikit-learn sketch of the idea is given below. If you need more detailed help just comment on this answer and I'll be glad to help :) Enjoy Bayesian learning!
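If you later want to check your own implementation against a ready-made one, a tiny scikit-learn sketch (with an invented four-email corpus standing in for a real labelled dataset) is:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hand-made corpus; in practice use a real labelled spam/ham dataset.
emails = [
    "win a free prize now, click here",
    "cheap meds, limited offer, buy now",
    "meeting rescheduled to friday afternoon",
    "please review the attached project report",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free offer, click now"]))          # likely 'spam'
print(model.predict_proba(["see you at the meeting"]))   # class probabilities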
H: Making a labelled training data set We are developing a classification system where the categories are fixed, but many of them are inter-related. For example, we have a category called "roads" and another one called "traffic". We believe that the model will be confused by text samples which could be in the roads category and also in traffic. Some of our text samples are suitable for multi-class labelling too. For example, "There is a garbage dump near the footpath. The footpath is broken completely". This text could be categorized into the garbage bucket or the footpath bucket. We are going to build a training set for this classifier by manually annotating the text. So, can we put multiple labels on one issue? How should we deal with text that has multiple labels? Should it be added as a training sample to all the categories it is tagged with? For example, "There is a garbage dump near the footpath. The footpath is broken completely". This text could be categorized into the garbage bucket or the footpath bucket. So, should this text be added as a training sample for both garbage and footpath? How should we consider the labels? Can you please give your insights? AI: Generally with multiple classes you have to make a distinction between exclusive and inclusive groups. The simplest cases are "all classes are exclusive" (predict only one class), and "all classes are compatible" (predict the list of classes that apply). Either way, label the classes as you would want your trained model to predict them. If you expect your classifier to predict that an example is in both garbage and footpath, then you should label such an example with both. If you want it to disambiguate between them, then label with a single correct class. To train a classifier to predict multiple target classes at once, it is usually just a matter of picking the correct objective function and a classifier with an architecture that can support it. For example, with a neural network, you would avoid using a "softmax" output which is geared towards predicting a single class - instead you might use a regular "sigmoid" function and predict class membership with a simple threshold on each output. You can also get more sophisticated, perhaps with a pipeline model, if your data can be split into several exclusive groups - predict the group in the first stage, and have multiple group-specific models predicting the combination of classes in each group in a second stage. This may be overkill for your problem, although it may still be handy if it keeps your individual models simple (e.g. they could all be logistic regression, and the first stage may gain some accuracy if the groups are easier to separate). A small multi-label sketch in scikit-learn follows below.
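For reference, here is a small multi-label sketch in scikit-learn; the three complaints and their label lists are invented, and the choice of TF-IDF plus one-vs-rest logistic regression is just one simple way to implement the "predict a list of classes" setup described above:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

texts = [
    "There is a garbage dump near the footpath. The footpath is broken completely.",
    "The road has potholes and traffic is backed up every morning.",
    "Garbage is not collected in our street for a week.",
]
labels = [["garbage", "footpath"], ["roads", "traffic"], ["garbage"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)            # one 0/1 column per category

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
clf.fit(texts, Y)

pred = clf.predict(["broken footpath near the garbage dump"])
print(mlb.inverse_transform(pred))       # may return several labels for one complaint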
H: Random Forest, Type - Regression, Calculation of Importance Example I am trying to use a Random Forest Model (Regression Type) as a substitute for a logistic regression model. I am using R - the randomForest Package. I want to understand the meaning of the importance of variables (%IncMSE and IncNodePurity) by example. Suppose I have a population of 100 employees out of which 30 left the company. Suppose in a particular decision tree, the population is split by an attribute (say location) into two nodes. One node contains 50 employees of which 10 left the company and the other contains 50 employees of which 20 left the company. Can someone demonstrate to me a calculation of %IncMSE and IncNodePurity? (If required for averages etc., please consider another decision tree.) This may look like a repeated question but I could not find a worked out example. AI: MSE is a measure of error of the overall regression model, $\frac{1}{n}\sum\|y_i-\hat y_i\|^2$. For an important variable, if it is replaced with random noise, you would expect the MSE computed on the corrupted data to increase. %IncMSE (the percentage increase in MSE) for a particular variable is how much the MSE increases when the values of that variable are randomly permuted. This is usually computed on the out-of-bag data. Node purity is a measure of how homogeneous a node is. A common impurity measure is information entropy, i.e. $-p_1\log p_1-p_0\log p_0$ if there are two classes (the lower the entropy, the purer the node). For regression models, node impurity is usually taken as the variance in a node. Every time you split a node, you do it to make the new nodes more homogeneous, hence the purity increases. IncNodePurity of a variable is the total increase in purity (i.e. decrease in impurity) over all the splits made on that variable, with the node population as the weight, averaged over the trees. A tiny permutation sketch of the %IncMSE idea follows below.
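Since R's randomForest computes %IncMSE by permuting a variable (on the out-of-bag samples, tree by tree), here is a rough Python sketch of the same idea on made-up data; it permutes on the training set for brevity, so the numbers are only illustrative:
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = 3 * X[:, 0] + 0.1 * rng.randn(500)           # only feature 0 really matters

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
base_mse = mean_squared_error(y, rf.predict(X))

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy the information in feature j
    mse = mean_squared_error(y, rf.predict(X_perm))
    print("feature", j, "% increase in MSE:", round(100 * (mse - base_mse) / base_mse, 1))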
H: How to analyze which site has most numbers I am trying to determine which site in our organization is in greater need of upgrades to SEP 12, so when I run a query to count, I get these types of numbers:
Group       Windows_SEP_11  Mac_SEP_11  Windows_SEP_12  Mac_SEP_12
Arizona\A   417             29          219             6
Arizona\B   380             20          282             15
Arizona\C   340             30          383             507
Arizona\D   310             104         186             857
Arizona\E   307             74          403             243
Arizona\F   285             171         522             14
Arizona\G   269             1           559             41
However, when I find percentages, I get these numbers:
Group          Win_Sep_11_%  Mac_SEP_11_%  Windows_SEP_12_%  Mac_SEP_12_%
Boston/Site 1  100           0             0                 0
Boston/Site 2  100           0             0                 0
Boston/Site 3  94            0             0                 5
And obviously, percentage isn't a good indicator because Boston/Site 1 has only 3 computers, Boston/Site 2 only has 4 computers, etc. What is the best way to analyze the data? I ultimately need a visual of sites that have many computers and a great need for upgrades to SEP 12, i.e. if there are more computers with SEP 11 than SEP 12. Please point me in the right direction. AI: The most obvious way of visualizing this is to have the number of computers on the Y-axis and the size of the dots represent the percentages. The categories (or sites in your case) can be represented on the X-axis. The image below shows an example where the Y-axis represents a continuous value (can be mapped to number of computers in your case), the X-axis represents a discrete value (can be mapped to sites in your case), and the size of the dots represents another attribute (like percentage in your case). I have used the R package ggplot2 for this.
H: R Programming, how to replicate for districts in a city I am new to R programming and just learned the basics through codeschool.com. Our network spans the city, and it is divided into districts. I would like to create a map that assigns a value (based on the ratio of outdated software to new software) to each district. This website has samples of 3-D maps that were created with R, and I see one I am very interested in replicating, but for our city only. But when I look at the source code, I don't see any mention of latitude or longitude. My head is spinning trying to figure out how I will input this, i.e. the latitude and longitude of a district in our city versus an assigned ratio, which I believe will be read from a spreadsheet. Thanks for any guidance. AI: The latitude and longitude are stored in a text file that is read into a table, Dat. It has the form: lat long climate-group. For your particular case, you would supply your districts' latitudes and longitudes in the first two columns and compute the third column (Dat[, 3]) yourself - the upgrade ratio for each district.
H: How to select features from text data? I have a data set of questions belonging to ten different categories namely (definitions, factoids, abbreviations, fill in the blanks, verbs, numerals, dates, puzzle, etymology and category relation). The categories are briefly described as follows: Definition – A question that contains a definition of the answer. Category Relation – The answer has a semantic relation to the question where the relation is specified in the category. FITB – These are generic fill in the blank questions – some of them ask for the completion of a phrase. Abbreviation – The answer is an expansion of an abbreviation in the question. Puzzle – These require derivation or synthesis for the answer. Etymology – The answer is an English word derived from a foreign word. Verb – The answer is a verb. Number – The answer is a numeral. Date – The question asks for a date or a year. Factoid – A question is a factoid if its answer can be found on Wikipedia. I used the Stanford core NLP package called shiftreducer to find out the Part-Of-Speech (POS) values for each question in a category. I thought of using this POS pattern as a discriminant among the classes but it turned out to be generalized since: All the classes follow a similar pattern Nouns top the POS count followed by Determinants, Prepositions, Adjectives, Plural nouns and finally verbs. What could be the other ways in which I could differentiate among the question categories? Or as my question was in its first place, "What kind of features do I select for efficient categorization?" AI: If I understand you correctly, you're looking to take the text of these questions and train classifiers to identify which of 10 categories they belong to. And you'd like to come up with a decent feature representation in order to do this. I think your finding about part-of-speech is intuitive. It makes sense that in grammatical English (assuming your question data is written in English), most questions would follow similar part-of-speech sequences since grammatically correct questions follow a particular syntactic form (at least when posed interrogatively as in the case of "When was George Washington born?") So, you've ruled something out - which you should actually view as progress. If you haven't tried it already, one simple thing you might do is use the actual words within the questions as features. You could use any order n-gram you like, but unigrams stick out as an immediate linguistic feature to try. It seems likely to me that while the POS-tags are similar across classes, making them difficult to distinguish between, the actual words being used in the questions may vary from class to class, giving your model a better shot of differentiating between classes. That is, maybe words like "time", "year", and "when" co-occur more highly with the Date class while words like "numerical" and "quantity" co-occur more with the Number class (obviously, this is speculation - I haven't seen your data). You might also look at bigrams, trigrams, or any other number n-gram for this feature set as well. Finally, there may be other features you could generate using NLP methods that may be useful. I'm not familiar with the Shiftreducer software, but Named-Entity Recognition could be helpful in generating features for the Factoid class if there are many questions about proper nouns. Other really simple features such as length of the question (counted in number of tokens). 
A final thought would be to use only the tagged verbs from your POS-tagger, tabulate them, and see whether they differ between classes. This may be a useful feature for identifying questions in your Verb class. Hopefully, those are some ideas to get you started.
H: Comparing accuracy of models in ordinal regression / classification I am looking into creating a model to predict whether an item is "Very Good", "Good", "Bad" or "Very Bad". After I fit the training data to the models, comparing the accuracy of the models during testing stumps me: should it matter if one model misclassified a G as VG while the other misclassified a G as VB? What about a model that has two misclassifications one level away versus another model with only one misclassification but three levels away (e.g. VG to VB)? Any guideline on what the common approach is? Also, my thinking at the moment is that this should be a regression problem, but I'm happy to be corrected if I should approach this labeling of datasets more as a classification problem. AI: Your classes express a certain order. You can classify apples as, say, "green", "red" or "yellow", and then every disagreement with a reference set is equal. After all, colours express no order. So as you already suggested, I would certainly use regression. Assume that the classes could be distributed as something like this: Very bad = 0 - 0.25 Bad = 0.25 - 0.50 Good = 0.50 - 0.75 Very good = 0.75 - 1.00 Now, the mismatch of Very good vs. Bad is at least 0.25, where it must be at least 0.50 for Very good vs. Very bad, which gives a better and more honest impression of the performance of your model. A tiny numeric illustration follows below.
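A tiny numeric illustration of why the ordinal view matters: encode the ordered classes as integers and score models with the mean absolute error, so a three-level miss costs more than two one-level misses. The four test items below are invented:
import numpy as np

# Encode the ordered classes as integers so the size of a mistake matters.
order = {"Very Bad": 0, "Bad": 1, "Good": 2, "Very Good": 3}

y_true = np.array([order[c] for c in ["Good", "Good", "Very Good", "Bad"]])
model_a = np.array([order[c] for c in ["Bad", "Bad", "Very Good", "Bad"]])    # two 1-level misses
model_b = np.array([order[c] for c in ["Good", "Good", "Very Bad", "Bad"]])   # one 3-level miss

print("model A mean absolute error:", np.abs(y_true - model_a).mean())   # 0.5
print("model B mean absolute error:", np.abs(y_true - model_b).mean())   # 0.75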
H: Create a prediction formula from data input I have an algorithm which has as input about 20-25 numbers. Then in every step it uses some of these numbers with a random function to calculate the local result which will lead to the final output of A, B or C. Since every step has a random function, the formula is not deterministic. This means that with the same input, I could have either A, B or C. My first thought was to go through the algorithm step by step and calculate mathematically the probability of each output. However, this is really difficult due to the size of the core. My next thought was to use machine learning with a supervised algorithm. I can have as many labeled entries as I want. So, I have the following questions: How many labeled inputs would I need for a decent approximation of the probabilities? Yes, I can have as many as I want, but it takes time to run the algorithm and I want to estimate the cost of the simulations to gather the labeled data. Which technique do you suggest that works with so many inputs and can give the probability of the three possible outputs? As an extra question, the algorithm runs in 10 steps and there is a possibility that some of the inputs will change in one of the steps. My simple approach is to not include this option in the prediction formula, since I would have to set different inputs for some of the steps. If I try the advanced methods, is there any other technique I could use? AI: I'm not sure if I understood your question! It would probably be better to include a diagram at least. But according to what I guess from your question: Q2- You probably need a simple MLP (Multilayer Perceptron)! It's a traditional architecture for neural networks where you have $n$ input neurons (here 20-25), one or more hidden layers with several neurons, and 3 neurons as the output layer. If you use a sigmoid activation function with range 0 to 1, the output for each class can be read as $P(Y=1|X=x)$. A tiny scikit-learn sketch is given below. Q1- So your question probably is: how much training data do you need to learn a model? To the best of my knowledge the answer is: as much as possible! And about the last question, I really could not figure out what you mean. You apparently have a very specific task, so I suggest sharing more details for the sake of clarification. I hope I could help a little!
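A minimal scikit-learn sketch of the MLP idea; it uses MLPClassifier, whose predict_proba returns the estimated probability of each outcome, and the fake inputs, hidden layer size and iteration count are placeholders:
import numpy as np
from sklearn.neural_network import MLPClassifier

# Fake simulation output: 25 numeric inputs, outcome A, B or C.
rng = np.random.RandomState(0)
X = rng.rand(2000, 25)
y = rng.choice(["A", "B", "C"], size=2000)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X, y)

# Estimated probability of each outcome for a new input vector.
print(mlp.classes_)
print(mlp.predict_proba(X[:1]))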
H: What is an XML dataset? What are XML datasets? Is it possible to convert them to csv files? I'm working on a Java program and I sometimes download datasets which are in a binary format; are those XML? Thank you. AI: XML is a markup language similar to HTML. One uses tags with attributes to build data structures. For example,
<sampleXML>
  <Menu>
    <Food>
      <item1>Spaghetti Bolognese</item1>
      <item2>Spaghetti Carbonara</item2>
    </Food>
    <Drinks>
      <item1 class = 'drinks'>Sprite</item1>
    </Drinks>
  </Menu>
</sampleXML>
As you can see, XML employs tags such as <Food></Food> and attributes such as class = 'drinks', which is exactly what HTML has. To access the XML data in Java you have a couple of choices. You can read it in as a string and parse it using the built-in DOM parser. Or you can use JAXB to map XML directly to Java objects. Surely you can convert XML to a csv file. There are free websites online for that. Just google "XML to csv converter." Binary files are not necessarily XML, but they can be. This needs a little explanation. The letters and words you see here are ASCII characters. This is human-readable text. Each ASCII character has a binary representation. For example, j in binary is 1101010. A binary file is any file in computer language (0s and 1s). A binary file can also be a combination of text and binary. You can convert binary to ASCII, and those files you download may indeed be the kind of XML I described above.
H: visualize a horizontal box plot in R I have a dataset like this. The data has been collected through a questionnaire and I am going to do some exploratory data analysis.
windows <- c("yes", "no","yes","yes","no")
sql <- c("no","yes","no","no","no")
excel <- c("yes","yes","yes","no","yes")
salary <- c(100,200,300,400,500)
test <- as.data.frame(cbind(windows,sql,excel,salary), stringsAsFactors=TRUE)
test[,"salary"] <- as.numeric(as.character(test[,"salary"]))
I have an outcome variable (salary) in my dataset and a couple of input variables (tools). How can I visualize a horizontal box plot like this: AI: Let's start by creating a fake dataset.
software = sample(c("Windows","Linux","Mac"), size=100, replace=TRUE)
salary = runif(n=100, min=1, max=100)
test = data.frame(software, salary)
This should create a dataframe test that will look somewhat like:
   software    salary
1   Windows 96.697217
2     Linux 29.770905
3   Windows 94.249612
4       Mac 71.188701
5     Linux 94.028326
6     Linux  7.482632
7       Mac 98.841689
8       Mac 81.152623
9   Windows 54.073761
10  Windows  1.707829
EDIT based on comment Note that if the data does not already exist in the above format, it can be changed to this format. Let's take the data frame provided in the original question and let's assume the dataframe is called raw_test.
  windows sql excel salary
1     yes  no   yes    100
2      no yes   yes    200
3     yes  no   yes    300
4     yes  no    no    400
5      no  no   yes    500
Now, using the melt function/method from the reshape package in R, first create the dataframe test (that will be used for the final plotting) as follows:
# use melt to convert from wide to long format
test = melt(raw_test, id.vars=c("salary"))
# subset to only select where value is "yes"
test = subset(test, value == 'yes')
# replace column name from "variable" to "software"
names(test)[2] = "software"
Now, you will get a dataframe test that looks like:
   salary software value
1     100  windows   yes
3     300  windows   yes
4     400  windows   yes
7     200      sql   yes
11    100    excel   yes
12    200    excel   yes
13    300    excel   yes
15    500    excel   yes
Having created the dataset, we will now generate the plot. First, create the bar plot on the left based on the counts of software, which represents the usage rate.
p1 <- ggplot(test, aes(factor(software))) + geom_bar() + coord_flip()
Next, create the boxplot on the right.
p2 <- ggplot(test, aes(factor(software), salary)) + geom_boxplot() + coord_flip()
Finally, place both these plots next to each other.
require('gridExtra')
grid.arrange(p1,p2,nrow=1)
This should create a plot like:
H: How to fit an odd relationship with a function? Let's say there is a function $f$ such that $y = f(x)$. However, if $f$ is a piecewise function such that: $$y = \begin{cases} 0 \quad x \leq 0 \\ 1 \quad x >0\end{cases} $$ How do I fit $f$ in that case? Many thanks, guys. AI: The definition you gave is the definition of the function. This is called the Heaviside Step Function. There is not a simple analytic way to express it (like as a ratio, product, or composition of trigonometric functions, exponentials, or polynomials). Note that it is neither continuous nor differentiable at x = 0. There are a couple of cool ways to represent it. The coolest and most intuitive way is as an integral of a Dirac Delta Function: $$ H(x) = \int_{-\infty}^x { \delta(s)} \, \mathrm{d}s $$ Note, though, that a Dirac Delta Function is itself not an "official" function, since it is not well-defined at x = 0. Check out Distribution Theory for some cool info on weird "functions" like this. Now, I think you may be trying to approximate this function, because you asked how to "fit" it. Taken straight from Wikipedia: For a smooth approximation to the step function, one can use the logistic function $$ H(x) \approx \frac{1}{2} + \frac{1}{2}\tanh(kx) = \frac{1}{1+\mathrm{e}^{-2kx}}, $$ where a larger k corresponds to a sharper transition at x = 0.
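If the goal is literally to fit that smooth approximation to noisy 0/1 observations, here is a minimal sketch using SciPy's curve_fit; the data generation and the initial guess for k are made up for illustration:

import numpy as np
from scipy.optimize import curve_fit

def smooth_step(x, k):
    # logistic approximation to the Heaviside step; larger k means a sharper transition
    return 1.0 / (1.0 + np.exp(-2.0 * k * x))

# fake noisy observations of the step function
x = np.linspace(-3, 3, 200)
y = (x > 0).astype(float) + np.random.normal(0, 0.05, size=x.shape)

k_fit, _ = curve_fit(smooth_step, x, y, p0=[1.0])
print("fitted sharpness k =", k_fit[0])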
H: Price optimization for tiered and seasonal products Assuming I can collect the demand of the purchase of a certain product that are of different market tiers. Example: Product A is low end goods. Product B is another low end goods. Product C and D are middle-tier goods and product E and F are high-tier goods. We have collected data the last year on the following 1. Which time period (season - festive? non-festive?) does the different tier product reacts based on the price set? Reacts refer to how many % of the product is sold at certain price range 2. How fast the reaction from the market after marketing is done? Marketing is done on 10 June and the products are all sold by 18 June for festive season that slated to happened in July (took 8 days at that price to finish selling) How can data science benefit in terms of recommending 1. If we should push the marketing earlier or later? 2. If we can higher or lower the price? (Based on demand and sealing rate?) Am I understanding it right that data science can help a marketer in this aspect? Which direction should I be looking into if I am interested to learn about it. AI: You should be able to use linear regression to find correlation between the factors which cause your products to sell better (or worse). There are many correlations you can test against in this data set. Some examples are: If a product has been marketed aggressively, does it sell more quickly? If a low tier item is available, do fewer high-tier items sell? If multiple high-tier items are available, are fewer sold of each item? Keep in mind that correlation does not necessarily imply causation. Always think about other factors which may cause sales to go up and down. For example, you may sell more high tier items in a season one year than another year. But, this could be due to changes in the overall economy, rather than changes in your pricing. The second thing you can do is perform A/B tests on your product sales pages. This gives you clear feedback right away. Some example tests could be: Show the user one high-tier product and one low-tier product (A). Show the user two high-tier products and no low-tier products(B). Which page generates more revenue? Send out marketing emails for a seasonal sale 5 days in advance to one group of users (A). Send the same email to a different set of users 1 day in advance (B). There are many possibilities. Use your intuition and think about previous knowledge you have about your products.
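As a concrete starting point for the regression idea, a sketch like the following (Python, scikit-learn; the column names and numbers are invented) regresses days-to-sell-out on price and marketing lead time, which is the kind of correlation described above:

import pandas as pd
from sklearn.linear_model import LinearRegression

# hypothetical history: one row per product per season
sales = pd.DataFrame({
    "price": [90, 100, 110, 95, 120, 105],
    "marketing_lead_days": [30, 20, 10, 25, 5, 15],
    "days_to_sell_out": [6, 8, 14, 7, 20, 11],
})

model = LinearRegression().fit(
    sales[["price", "marketing_lead_days"]], sales["days_to_sell_out"])
print(dict(zip(["price", "marketing_lead_days"], model.coef_)))

The fitted coefficients give a rough, first-pass answer to "what happens to selling speed if we raise the price or market earlier", keeping in mind the caveat about correlation versus causation.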
H: Relationship between VC dimension and degrees of freedom I'm studying machine learning and I feel there is a strong relationship between the concept of VC dimension and the more classical (statistical) concept of degrees of freedom. Can anyone explain such a connection? AI: As stated by Prof. Yaser Abu-Mostafa: Degrees of freedom are an abstraction of the effective number of parameters. The effective number is based on how many dichotomies one can get, rather than how many real-valued parameters are used. In the case of a 2-dimensional perceptron, one can think of slope and intercept (plus a binary degree of freedom for which region goes to +1), or one can think of 3 parameters w_0,w_1,w_2 (though the weights can be simultaneously scaled up or down without affecting the resulting hypothesis). The degrees of freedom, however, are 3 because we have the flexibility to shatter 3 points, not because of one way or another of counting the number of parameters.
H: Algorithm for segmentation of sequence data I have a large sequence of vectors of length N. I need some unsupervised learning algorithm to divide these vectors into M segments. For example: K-means is not suitable, because it puts similar elements from different locations into a single cluster. Update: The real data looks like this: Here, I see 3 clusters: [0..50], [50..200], [200..250] Update 2: I used modified k-means and got this acceptable result: Borders of clusters: [0, 38, 195, 246] AI: Please see my comment above; this is my answer based on what I understood from your question. As you correctly stated, you do not need clustering but segmentation. Indeed, you are looking for change points in your time series. The answer really depends on the complexity of your data. If the data is as simple as the above example, you can use the difference of vectors, which overshoots at changing points, and set a threshold to detect those points, like below: As you see, for instance a threshold of 20 (i.e. $dx<-20$ or $dx>20$) will detect the points. Of course, for real data you need to investigate more to find the thresholds. Pre-processing Please note that there is a trade-off between the accurate location of the change points and the accurate number of segments, i.e. if you use the original data you'll find the exact change points but the whole method is too sensitive to noise, whereas if you smooth your signals first you may not find the exact changes but the noise effect will be much less, as shown in the figures below. Conclusion My suggestion is to smooth your signals first and go for a simple clustering method (e.g. using GMMs) to find an accurate estimation of the number of segments in the signals. Given this information you can start finding change points constrained by the number of segments you found in the previous part. I hope it all helped :) Good Luck! UPDATE Luckily your data is pretty straightforward and clean. I strongly recommend dimensionality reduction algorithms (e.g. simple PCA). I guess it reveals the internal structure of your clusters. Once you apply PCA to the data you can use k-means much more easily and accurately. A Serious(!) Solution According to your data, I see the generative distributions of the different segments are different, which is a great chance for you to segment your time series. See this (original, archive, other source), which is probably the best and most state-of-the-art solution to your problem. The main idea behind this paper is that if different segments of a time series are generated by different underlying distributions, you can find those distributions, set them as ground truth for your clustering approach and find clusters. For example, assume a long video in which for the first 10 minutes somebody is biking, in the second 10 minutes he is running and in the third he is sitting. You can cluster these three different segments (activities) using this approach.
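A minimal version of the difference-and-threshold idea above, in Python; the threshold of 20 comes from the example and the toy signal is invented, so both would need adjusting on real data:

import numpy as np

# toy signal: three flat segments with small noise, roughly like the example above
x = np.concatenate([np.full(50, 0.0), np.full(150, 40.0), np.full(50, 10.0)])
x += np.random.normal(0, 1.0, size=x.shape)

dx = np.diff(x)
threshold = 20
change_points = np.where(np.abs(dx) > threshold)[0] + 1
print(change_points)   # expected near 50 and 200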
H: How to convert a text to lower case using tm package? I am using the below R code to convert text to lower case: movie_Clean <- tm_map(movie_Clean, content_transformer(tolower)) However I end up getting the below error: Error in FUN(content(x), ...) : invalid input 'I just wanna watch Jurassic World í ½í¸«' in 'utf8towcs'. Please help how to overcome this error. AI: This seems like an encoding error. Try adding the line Encoding(movie_Clean) <- "UTF-8" before you lowercase the data. Check out this answer for a little context: https://stackoverflow.com/a/28340080/4539807
H: Sampling from a multivariate von Mises-Fisher distribution in Python I am looking for a simple way to sample from a multivariate von Mises-Fisher distribution in Python. I have looked in the stats module in scipy and the numpy module but only found the univariate von Mises distribution. Is there any code available? I have not found any yet. -- edit. Apparently, Wood (1994) has designed an algorithm for sampling from the vMF distribution according to this link, but I can't find the paper. AI: Thanks to your help, I finally got my code working, plus some bibliography. I put my hands on Directional Statistics (Mardia and Jupp, 1999) and on the Ulrich-Wood algorithm for sampling. I post here what I understood from it, i.e. my code (in Python), with a 'movMF' flavour. The rejection sampling scheme: def rW(n,kappa,m): dim = m-1 b = dim / (np.sqrt(4*kappa*kappa + dim*dim) + 2*kappa) x = (1-b) / (1+b) c = kappa*x + dim*np.log(1-x*x) y = [] for i in range(0,n): done = False while not done: z = sc.stats.beta.rvs(dim/2,dim/2) w = (1 - (1+b)*z) / (1 - (1-b)*z) u = sc.stats.uniform.rvs() if kappa*w + dim*np.log(1-x*w) - c >= np.log(u): done = True y.append(w) return y Then, the desired sampling is $v \sqrt{1-w^2} + w \mu$, where $w$ is the result from the rejection sampling scheme, and $v$ is uniformly sampled over the hypersphere. def rvMF(n,theta): dim = len(theta) kappa = np.linalg.norm(theta) mu = theta / kappa result = [] for sample in range(0,n): w = rW(1,kappa,dim)[0] v = np.random.randn(dim) v = v / np.linalg.norm(v) result.append(np.sqrt(1-w**2)*v + w*mu) return result And, for effectively sampling with this code, here is an example: import numpy as np import scipy as sc import scipy.stats n = 10 kappa = 100000 direction = np.array([1,-1,1]) direction = direction / np.linalg.norm(direction) res_sampling = rvMF(n, kappa * direction)
H: Working with text files in Excel I have an excel file containing a long text in column A. I am looking for the words starting by "popul" such as popular and populate . I can find these cells by the formula: =SEARCH("popul",A1,1) I want a function that returns the whole words starting by popul such as popular and populate. AI: I'm no Excel expert, as I generally use Python or R instead, but this might get you started until an Excel expert comes along. In the meantime, it would help if you clarified your question. And you should be aware that search will only find you the index of the first match, not all matches in the string. If you only need the first hit, you can use =MID(A1,SEARCH("popul",A1,1),IFERROR(FIND(" ",A1,SEARCH("popul",A1,1)),LEN(A1)+1)-SEARCH("popul",A1,1)) although I cannot claim this is the best way to do this. You really didn't specify where you want the results to appear, how they should look, or if you only have one cell you need to search in. It would also help to know the version of Excel you have. I'll also present a crude way to return all the hits in the string: Cell A1 contains the string, B1 has no formula, and if you run out of "n/a"s you can extend columns B, C, and D by filling down. The formulas are as follows: B3 and below use =IF(C2+1<LEN($A$1),C2+1,"n/a") C2 and below use =IFERROR(FIND(" ",$A$1,SEARCH("popul",$A$1,B2)),LEN($A$1)+1) D2 and below use =IFERROR(MID($A$1,SEARCH("popul",$A$1,B2),C2-SEARCH("popul",$A$1,B2)),"") As you can see, there's little to no error checking except to deal with the match at the end of the string. In the end though, if you're going to use Excel for this you should probably create a user defined function or utilize VBA instead of in-cell formulas.
H: Implementing Complementary Naive Bayes in Python? Problem I have tried using Naive Bayes on a labeled data set of crime data but got really poor results (7% accuracy). Naive Bayes runs much faster than other algorithms I've been using so I wanted to try finding out why the score was so low. Research After reading I found that Naive Bayes should be used with balanced datasets because it has a bias for classes with higher frequency. Since my data is unbalanced I wanted to try using the Complementary Naive Bayes since it is specifically made for dealing with data skews. In the paper that describes the process, the application is for text classification but I don't see why the technique wouldn't work in other situations. You can find the paper I'm referring to here. In short, the idea is to estimate each class's weights from the feature occurrences in all the other classes (the complement of that class). After doing some research I was able to find an implementation in Java but unfortunately I don't know any Java and I just don't understand the algorithm well enough to implement it myself. Question Where can I find an implementation in Python? If one doesn't exist, how should I go about implementing it myself? AI: Naive Bayes should be able to handle imbalanced datasets. Recall that the Bayes formula is $$P(y \mid x) = \cfrac{P(x \mid y) \, P(y)}{P(x)} \propto P(x \mid y) \, P(y)$$ So $P(x \mid y) \, P(y)$ takes the prior $P(y)$ into account. In your case maybe you overfit and need some smoothing? You can start with +1 smoothing and see if it gives any improvements. In python, when using numpy, I'd implement the smoothing this way: table = # counts for each feature PT = (table + 1) / (table + 1).sum(axis=1, keepdims=1) Note that this gives you Multinomial Naive Bayes - which applies only to categorical data. I can also suggest the following link: http://www.itshared.org/2015/03/naive-bayes-on-apache-flink.html. It's about implementing Naive Bayes on Apache Flink. While it's Java, maybe it'll give you some theory you need to understand the algorithm better.
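For what it's worth, more recent scikit-learn releases (0.20 and later, if I recall correctly) ship a ComplementNB estimator implementing the Rennie et al. variant, so if upgrading is an option you may not need to write it yourself. A minimal sketch with made-up count features (the data and alpha value are only illustrative):

import numpy as np
from sklearn.naive_bayes import ComplementNB

# toy count features (e.g. word or event counts); labels deliberately imbalanced
X = np.array([[2, 0, 1], [3, 1, 0], [0, 2, 3], [1, 0, 4], [0, 1, 5], [2, 1, 0]])
y = np.array([0, 0, 1, 1, 1, 0])

clf = ComplementNB(alpha=1.0)   # alpha plays the role of the +1 smoothing discussed above
clf.fit(X, y)
print(clf.predict(np.array([[0, 1, 4]])))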
H: What is the right algorithm to detect segmentations of a line chart? To be concrete, given 2D numerical data as is shown as line plots below. There are peaks on a background average movement (with small vibrations). We want to find the values of pairs (x1, x2) if those peaks drops down to average; or (x1) only if the line doesn't back to the average. There are thousands of such 2D data. What is the right statistic or machine learning algorithm to find x1 and x2 above without plotting? AI: One way to do what you're talking about is called "change point analysis." There is an R package for this called changepoint that you might want to check out. In Python, you could try changefinder.
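If you want to stay in plain Python without either package, a rough sketch of the underlying idea (compare a short rolling mean against a running baseline and flag large departures) might look like this; the window size, z-threshold and toy data are all arbitrary choices:

import numpy as np

def simple_change_points(x, window=20, z_thresh=3.0):
    # flag indices where the short-window mean departs strongly from the current baseline
    x = np.asarray(x, dtype=float)
    baseline_mean, baseline_std = x[:window].mean(), x[:window].std() + 1e-9
    points = []
    for i in range(window, len(x) - window):
        local_mean = x[i:i + window].mean()
        if abs(local_mean - baseline_mean) > z_thresh * baseline_std:
            points.append(i)
            # reset the baseline to the new regime
            baseline_mean, baseline_std = local_mean, x[i:i + window].std() + 1e-9
    return points

# toy example: mean shift around index 100
x = np.r_[np.random.normal(0, 1, 100), np.random.normal(5, 1, 100)]
print(simple_change_points(x))

The dedicated packages mentioned above do essentially this with proper statistical criteria (x1 is where the flagged region starts, x2 where the signal returns to baseline, if it does).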
H: What is an alternative name for "Unstructured Data"? I'm writing my thesis at the moment, and for some time - due to a lack of a proper alternative - I've stuck with "unstructured data" for referring to natural, free flowing text, e.g. Wikipedia articles. This nomenclature has bothered me from the very beginning, since it opens a debate that I don't want to get into. Namely, that "unstructured" implies that natural language lacks structure, which it does not - the most obvious being syntax. It also gives a negative impression, since it is the opposite of "structured", which is accepted as being positive. This is not the focus of my thesis, though the "unstructured" part itself plays an important role. I completely agree with the writer of this article, but he proposes no alternative except for "rich data", which doesn't cover my point. The point I'm trying to make that the text lacks a traditional database-like (e.g. tabular) structure of the data, with every piece of data having a clear data type and semantics that is easy to interpret using computer programs. Of course I'd like to condense this definition into a term, but so far I've been unsuccessful coming up with, or discovering an acceptable taxonomy in literature. AI: It is a bad idea to counterpose "unstructure data" to, say, tabular data (as in "non-tabular data"), as you will have to elliminate other alternatives as well (e.g., "non-tabular and non-graph and ... data"). "Plain text" (-- my choice) or "raw text" or "raw data" sound fine.
H: How many observations in a neural networks dataset? I started to study and programming in neural networks for a little while now, but I never read about the minimum number of observations one must collect in a dataset to get robust results. Of course, more observations better results, but, Does exist an empirical or theoretical relationship between variables and observations number? I mean, neither in econometrics you can compute the minimum number of observations, but it does exist some rule of thumbs that relies the number of exogenous variables to the target variable. I wonder if there is something similar to that in neural networks too, but, till now, browsing on the internet, I did not find anything of useful. Any ideas, advises or hint will be appreciated. AI: A neural network is nothing but a set of equations. And the basic rule of any set of equations is that you must have as many data points as the number of parameters. The parameters of any neural network are its weights and biases. So that means that as the neural network gets deeper and wider, the number of parameters increase a lot, and so must the data points. This being said, the more proper and detailed way to know whether the model is overfitting is to check if the validation error is close to the training error. If yes, then the model is working fine. If no, then the model is most likely overfitting and that means that you need to reduce the size of your model or introduce regularization techniques.
H: How can I show the relations between travel destinations? I'm trying to do a project about email marketing. I'm working at a tourism company and I want to suggest the best destinations to clients, but I need to see the relations between destinations. Example: How many people visited Dublin and then visited London? My question: How can I best analyse this relation between the cities, given data about traveler itineraries? I want to send email offers to clients who went to London and didn't go to Dublin (assuming a strong relation between London and Dublin). AI: You can try graph databases (Neo4j, OrientDB, etc.). Store locations as nodes and the connections between them as edges, then do your analysis over the graph data. Depending on your needs, you can add attributes (like a visit count) and assign weights to edges. Neo4j also supports collaborative filtering.
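Before reaching for a graph database, the counting part of the question can be sketched directly in Python/pandas; the table layout and column names below are assumptions about what an itinerary export might look like:

import pandas as pd
from itertools import combinations
from collections import Counter

# hypothetical itineraries: one row per visit, already ordered by date within each client
trips = pd.DataFrame({
    "client": [1, 1, 2, 2, 3, 3, 3],
    "city":   ["Dublin", "London", "Dublin", "Paris", "Dublin", "London", "Rome"],
})

pair_counts = Counter()
for _, visits in trips.groupby("client"):
    cities = list(visits["city"])
    for a, b in combinations(cities, 2):   # (a, b) means a was visited before b
        pair_counts[(a, b)] += 1

print(pair_counts[("Dublin", "London")])   # travellers who went to Dublin and later London

The same grouped sets also give you the target list for the email campaign: clients whose set of cities contains London but not Dublin.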
H: How to combine two different random forest models into one in R? I have two different dataset with same variables. I have built training models separately using random forest. Now i want to combine both these models. Could anyone tell me how can i this be achieved? Do we have something called combine() function in R? Regards, Arun AI: This question has already been answered here. It addresses the problem in hand.
H: How to find similarity between different factors in a dataset Introduction Let's say I have a dataset of different observation of different people and I want to group people together to know which person is closest to the other one. I also want to have a measure to know how close they are to each others and know the statistical significance. Data eat_rate drink_rate sleep_rate play_rate name game 1 0.0542192259 0.13041721 5.013682e-03 1.023533e-06 Paul Rayman 4 0.0688171511 0.01050611 6.178833e-03 3.238838e-07 Paul Mario 6 0.0928997660 0.01828468 9.321211e-03 3.525951e-07 Jenn Mario 7 0.0001631273 0.02212345 7.061524e-05 1.531270e-07 Jean FIFA 8 0.0028735509 0.05414688 1.341689e-03 4.533366e-07 Mark FIFA 10 0.0034844717 0.09152440 4.589990e-04 5.802708e-07 Mark Rayman 11 0.0340738956 0.03384180 1.636508e-02 1.354973e-07 Mark FIFA 12 0.0266112679 0.20002020 3.380704e-02 4.533366e-07 Mark Sonic 14 0.0046597056 0.01848672 5.472681e-04 4.034696e-07 Paul FIFA 15 0.0202715299 0.16365289 2.994086e-02 4.044770e-07 Lucas SSBM Reproduce it: structure(list(eat_rate = c(0.0542192259374624, 0.0688171511010916, 0.0928997659570807, 0.000163127341146237, 0.00287355085557602, 0.00348447171120939, 0.0340738956099744, 0.0266112679045701, 0.00465970561072008, 0.0202715299408583), drink_rate = c(0.130417213859986, 0.0105061117284574, 0.0182846752197192, 0.0221234468128094, 0.0541468835235882, 0.0915243964036772, 0.0338418022022427, 0.200020204061016, 0.0184867158298818, 0.163652894231741), sleep_rate = c(0.00501368170182717, 0.00617883308323771, 0.00932121105128431, 7.06152352370024e-05, 0.00134168946950305, 0.000458999029040516, 0.0163650807661753, 0.0338070438697149, 0.000547268073086768, 0.029940859740489), play_rate = c(1.02353325645595e-06, 3.23883801132467e-07, 3.52595117873603e-07, 1.53127022619393e-07, 4.53336580123204e-07, 5.80270822557701e-07, 1.35497266725713e-07, 4.53336580123204e-07, 4.03469556309652e-07, 4.04476970932148e-07 ), name = structure(c(5L, 5L, 2L, 1L, 4L, 4L, 4L, 4L, 5L, 3L), .Label = c("Jean", "Jenn", "Lucas", "Mark", "Paul"), class = "factor"), game = structure(c(3L, 2L, 2L, 1L, 1L, 3L, 1L, 4L, 1L, 5L), .Label = c("FIFA", "Mario", "Rayman", "Sonic", "SSBM"), class = "factor")), .Names = c("eat_rate", "drink_rate", "sleep_rate", "play_rate", "name", "game"), row.names = c(1L, 4L, 6L, 7L, 8L, 10L, 11L, 12L, 14L, 15L), class = "data.frame") Question Given a dataset as fellow (with continuous and categorical feature), how can I know if a person (a categorical answer) identified by a name is more correlated to another person? AI: One way is to normalize your quantitative values (play, eat, drink, sleep rates) so they all have the same range (say, 0 -> 1), then assign each game to its own "dimension", that takes value 0 or 1. Turn each row into a vector and normalize the length to 1. Now, you can compare the inner product of any two people's normalized vectors as a measure of similarity. 
Something like this is used in text mining quite often R Code for Similarity Matrix Assumes you've saved your dataframe to the variable "D" #Get normalization factors for quantitative measures maxvect<-apply(D[,1:4],MARGIN=2,FUN=max) minvect<-apply(D[,1:4],MARGIN=2,FUN=min) rangevect<-maxvect-minvect #Normalize quantative factors D_matrix <- as.matrix(D[,1:4]) NormDMatrix<-matrix(nrow=10,ncol=4) colnames(NormDMatrix)<-colnames(D_matrix) for (i in 1:4) NormDMatrix[,i]<-(D_matrix[,i]-minvect[i]*rep(1,10))/rangevect[i] gamenames<-unique(D[,"game"]) #Create dimension matrix for games Ngames<-length(gamenames) GameMatrix<-matrix(nrow=10,ncol=Ngames) for (i in 1:Ngames) GameMatrix[,i]<-as.numeric(D[,"game"]==gamenames[i]) colnames(GameMatrix)<-gamenames #combine game matrix with normalized quantative matrix People<-D[,"name"] RowVectors<-cbind(GameMatrix,NormDMatrix) #normalize each row vector to length of 1 and then store as a data frame with person names NormRowVectors<-t(apply(RowVectors,MARGIN=1,FUN=function(x) x/sqrt(sum(x*x)))) dfNorm<-data.frame(People,NormRowVectors) #create person vectors via addition of appropriate row vectors PersonMatrix<-array(dim=c(length(unique(People)),ncol(RowVectors))) rownames(PersonMatrix)<-unique(People) for (p in unique(People)){ print(p) MatchIndex<-(dfNorm[,1]==p)*seq(1,nrow(NormRowVectors)) MatchIndex<-MatchIndex[MatchIndex>0] nclm<-length(MatchIndex) SubMatrix<-matrix(NormRowVectors[MatchIndex,],nrow=length(MatchIndex),ncol=dim(NormRowVectors)[2]) CSUMS<-colSums(SubMatrix) NormSum<-sqrt(sum(CSUMS*CSUMS)) PersonMatrix[p,]<-CSUMS/NormSum } colnames(PersonMatrix)<-colnames(NormRowVectors) #Calculate matrix of dot products Similarity<-(PersonMatrix)%*%t(PersonMatrix)
H: Data repositories like UCI Are there any other data repositories like UCI and mlData for biological data? I am mostly interested in biological datasets. AI: There are lots of repositories for biological data. GenBank Overview IGSR: The International Genome Sample Resource Bioinformation and DDBJ Center List of biological databases
H: Can you use clustering to pick out signals in noisy data? As my first project into data science, I would like to pick out the main clusters in noisy data. I think a good example would be trying to pick out certain links on a given StackExchange question that has a number of answers. The most common type of link is a link to a question on the SE network. The next common is either tag links, or links to user profiles. The remaining links might be random links included in posts, which is considered noise. Ideally, I'm looking for a solution where I don't know how many clusters of links there will be ahead of time. I've implemented my first attempt using scikit-learn and KMeans. However, it's not ideal because I appear to have to specify the number of clusters ahead of time, and I think the random, noisy links get grouped improperly. I also think it's more effective on a larger corpus compared to the relatively small one of URL tokens (though that's just a guess). Is there a way to do this type of clustering, where the number of clusters is unknown or where one of the clusters is a sort of miscellaneous cluster containing objects that don't closely match the other clusters? AI: Have you looked at DBSCAN? It is a density-based spatial clustering of data with noise that can define non-linear clusters (unlike k-means). It doesn't require knowing the number of clusters. However, it does require two parameters (minimum cluster size and neighborhood size) that measure density. But you may be able to estimate them in your particular domain.
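For reference, a minimal scikit-learn sketch of the DBSCAN behaviour described above: the label -1 marks points that did not fall into any dense region, which is effectively the "miscellaneous" noise bucket you asked about. The data and parameter values are synthetic and would need tuning on real link features:

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
# two dense blobs plus a handful of scattered noise points
X = np.vstack([
    rng.normal(0, 0.3, size=(50, 2)),
    rng.normal(5, 0.3, size=(50, 2)),
    rng.uniform(-2, 7, size=(10, 2)),
])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(set(labels))   # e.g. {0, 1, -1}: two clusters plus the noise label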
H: Predict set elements based on other elements Sorry if this has been answered before, but could someone help me with solving the following problem: Each symbol in a dataset has a set of labels. Given a set of labels, how can we predict more labels for that set? Or to attempt a more formal wording: let $S = \{ s_1, s_2, s_3, ... \} $ be a set of sets where $ s_i \subseteq L $ and $ L = \{ l_1, l_2, l_3,...,l_m\} $ is the finite set of labels. Given a set of labels $Y = \{ y_1 ,..., y_n \} $ where $Y \subseteq L $, what is the probability $ P(r = l_i)$, for each $l_i \in L$, that $Y \cup \{r\} \in S $? AI: Maybe you could use some algorithm that solves "Market Basket Analysis". The problem is explained here: Market Basket Analysis is a modelling technique based upon the theory that if you buy a certain group of items, you are more (or less) likely to buy another group of items. For example, if you are in an English pub and you buy a pint of beer and don't buy a bar meal, you are more likely to buy crisps (US: chips) at the same time than somebody who didn't buy beer. http://albionresearch.com/data_mining/market_basket.php One example of such an algorithm is: https://en.wikipedia.org/wiki/Association_rule_learning
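If you want a direct, assumption-light estimate of that probability from data, one sketch is to count, among the observed sets that contain Y, how often each remaining label also appears (plain Python; S here is a toy example):

from collections import Counter

S = [
    {"rock", "guitar", "vocals"},
    {"rock", "guitar", "drums"},
    {"jazz", "piano"},
    {"rock", "drums"},
]

def label_probabilities(Y, S):
    # P(r | Y) estimated as the fraction of sets containing Y that also contain r
    matching = [s for s in S if Y <= s]
    counts = Counter(label for s in matching for label in s - Y)
    total = len(matching)
    return {label: c / total for label, c in counts.items()} if total else {}

print(label_probabilities({"rock"}, S))
# e.g. {'guitar': 0.67, 'drums': 0.67, 'vocals': 0.33}

Association-rule algorithms such as Apriori do essentially this counting at scale, pruning rules by support and confidence.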
H: Root Mean Squared Error (RMSE) - significance of square root What is the significance of the square root in root-mean-square-error? Essentially, my question is: what is the difference between (rms error) and (rms error)$^2$? AI: It depends on what you are using the RMSE for. If you are merely trying to compare two models/estimators, then there is no significance to the square root. However, if you are trying to plot the error in terms of the same units as you made the measurements/estimates, then you need to take the square root to transform the squared units to the original units (much like variance vs standard deviation)
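A tiny numerical illustration of the units point: if the targets are measured in metres, MSE is in square metres while RMSE is back in metres, so only the RMSE is directly comparable to the original measurements.

import numpy as np

y_true = np.array([2.0, 4.0, 6.0])   # metres
y_pred = np.array([2.5, 3.0, 7.0])

mse = np.mean((y_true - y_pred) ** 2)   # 0.75 (square metres)
rmse = np.sqrt(mse)                     # ~0.87 (metres)
print(mse, rmse)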
H: What is the best way to propose an item from a set based on previous choices? The goal of this question is to be able to propose further choices to a user based on their past experience, like Amazon's book recommendations. From a set of mp3 files, I assume that a set of mp3 tag data is already filled in, based on the music he/she has already listened to: what is the easiest way to implement a machine learning approach that is able to propose a list of music choices based on the user's set? NB: I'm a machine learning novice. I'd appreciate it if the answer could be based on Orange, Weka or these kinds of tools. Update: removed the classification tag as recommended. For newcomers like me: - the book Predictive Analytics For Dummies is a nice general introduction to this subject - a next step would be the paper Finding Clusters of Similar Artists which is really interesting especially for the K-means approach - the Million Song Dataset which is a gold mine with its huge dataset as well as tutorials with Python code to use with it - special thanks to sheldonkreger for his answer with the Neo4j graph usage idea, which is really interesting AI: There are many great ways to handle this problem. It is a recommendation problem, not a classification problem, as pointed out by others. There are many ways to do recommendation with a data set like this. I'll point out a few methods and you can choose one or try them all. The first method is called user-based collaborative filtering. The basic idea is to give users recommendations based on the tastes of like-minded users. So, you'd be trying to recommend music based on the listening history of users who have listened to the same songs. Such data can be modeled as a graph or sparse matrix. Then, you choose the exact algorithm depending on how you want to model your data. The second method is called item-based collaborative filtering. Rather than associating users together, this strategy looks at the set of items a user has 'rated' (the songs a user has listened to) and calculates how similar they are to a specific target item (song), or even to all the songs in your data set. It grabs the set of most-similar items and uses various methods to predict how much a user will like the song. In this case, you only have binary data (user listened to it or they did not). These calculations tend to work best with actual rating scores (like a 5 star system) because this gives more detailed variation amongst items in the data set. The third option is to model your data in a graph database like Neo4J and write graph traversal queries in order to find similar items. If you like graph theory, this can be a lot of fun. The sky is the limit in regards to what kinds of traversals will return good results. To get started, think of the users and songs as nodes in the graph, and 'listened' as the edge. $user->listened->$song Because of ratings and item-based filtering, and because there are probably many songs in your data set, and each user only listens to a very small portion of them, I'd first try a user-based collaborative filtering method which uses sparse matrix operations to calculate recommendations. If your data set is large, these computations scale horizontally so you can leverage parallel processing if you run into performance issues. You can find more detail about collaborative filtering in this paper.
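To make the item-based idea concrete, here is a very small sketch (Python/scikit-learn) that builds a binary user-song matrix from listening history and ranks unheard songs by cosine similarity to the songs a user already played. Everything here, including the matrix, is invented for illustration:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# rows = users, columns = songs, 1 = listened
listens = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
])

song_sim = cosine_similarity(listens.T)   # song-to-song similarity
user = listens[0]                         # recommend for the first user
scores = song_sim.dot(user)               # closeness of each song to their history
scores[user == 1] = -np.inf               # don't recommend what they already heard
print(np.argsort(scores)[::-1][:2])       # indices of the top suggestions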
H: Error in R - Unexpected '}' in " }" I am getting the error: Error: Unexpected '}' in " } I have put closing brackets for every open bracket, but am not sure why I am getting this error. The error arises when I run the following code: #find number of Columns & Rows numcol <- ncol(training_main) numrow <- nrow(training_main) attach(training_main) NA_count <- as.data.frame(sapply(training_main, function(x) sum(is.na(x)))) for(i in 1:numcol) { if (NA_count > 0) { for(j in 1:numrow) { if(is.na(training_main[j,i]) { training_main[j,i] <- as.character(training_main[j,i]) training_main[j,i] <- "Empty" training_main[j,i] <- as.factor(training_main[j,i]) print("empty printed") } } } } Is the nested for -if - for - if as mentioned below allowed in R? That is the logic that I have to use. Is there any other way I can run this code? AI: Error is due to missing ')' in the second if statement in your code. if(is.na(training_main[j,i]) change to if(is.na(training_main[j,i]))
H: How to speed up optimization using Differential Evolution? My application is high frequency trading. My data are time series of the bid and ask prices of a stock recorded on every tick (change in price). For each data point I also have certain indicators that predict the future movement of the price. The indicators have different prediction horizons, some being optimal at few-second intervals and others at a few minutes. I need to assign these predictors weights, and based on whether the linear combination crosses a threshold, the decision will be taken to buy or sell the stock. So far I have tried the Differential Evolution (DE) method to figure out the weights. I use a black box model with the weights vector $w_i$ and threshold as inputs. For each data point I have a vector of indicators $\alpha _i$. $$ total\_alpha = \sum\alpha _i*w_i $$ If $$ total\_alpha > threshold, BUY $$ Else If $$ total\_alpha < -threshold, SELL $$ The output of the model is the sum of the differences between the prices of each consecutive buy and sell. This output is being optimised by the DE algorithm. I am having trouble with the computational aspects. My data is very large (~$7 \times 10^8$ rows by 20 columns), and thus the execution time for the DE algorithm is unacceptable. My question: Is there a better and a faster way to solve this problem? AI: Do you run your analysis algorithms in batch or live? Which programming language and environment do you use? At a first, naive look, I would recommend parallelizing your code, as each indicator calculation seems to be independent of the others.
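One hedged, concrete option on the parallelization point: recent SciPy versions let scipy.optimize.differential_evolution evaluate population members in parallel via the workers argument, so if the bottleneck is the objective evaluation you can get a near-linear speed-up across cores. The objective below is a stand-in; your real one would replay the tick data (possibly on a subsample, which is another common speed-up):

import numpy as np
from scipy.optimize import differential_evolution

def objective(params):
    # stand-in for the backtest: replace with -PnL of the weight/threshold combination
    return np.sum((params - 0.3) ** 2)

bounds = [(-1, 1)] * 5   # e.g. four indicator weights plus a threshold
result = differential_evolution(objective, bounds,
                                workers=-1, updating='deferred',  # use all cores
                                maxiter=50, polish=False)
print(result.x, result.fun)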
H: python - Will this data mining approach work? Is it a good idea? I need to extract fields like the document number, date, and invoice amount from a bunch of .csv files, which I believe are referred to as "unstructured text." I have some labeled input files and will use the NLTK and Python to design a data extraction algorithm. For the first round of classification, I plan to use tf-idf weighting with a classifier to identify the document type - there are multiple files that use the same format. At this point, I need I way to extract the field from the document, given that it is X type of document. I thought about using features like the "most common numbers" or "largest number with a comma" to find the invoice amount, for example, but since the invoice amount can any numerical value I believe the sample size would be smaller than the number of possible features? (I have no training here, bear with me.) Is there a better way to do the second part? I think the first part should be okay, but I'm not sure that second part will work or if I even really understand the problem. How is my approach in general? I'm new to this kind of thing and this was the best I could come up with. AI: I am not sure if using a classifier is the best way to approach this problem. If it is something which can be easily extracted using regex, then that is the best way to do it. If however, you want to use classifiers, here are two questions you need to ask yourself. One, what does the unlabelled data look like and can you design good features from it? Depending on the kind of feature vector you design, the complexity of the classification task may range from very easy, to impossible. (A perceptron cannot solve XOR usually, except when you provide it with specific linear combinations of the input variable). Two, what does the labelled data look like? Is it representative of the entire dataset or does it only contain very specific types of format? If it is the former, then your classifier will not work well on files which are not represented in the labelled data. If you just want to test run a classifier first, you can solve the problem of having more features than training samples by using Regularization. Regularization forces the training algorithm of the classifier to accept the simplest possible solution (think occam's razor). Almost all Machine Learning related packages in Python will have regularization options you can use, so enjoy.
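To illustrate the regex route for the extraction step, here is a sketch that pulls candidate invoice amounts, dates and document numbers out of raw text. The patterns are naive placeholders and would need adapting to your actual documents:

import re

text = "Invoice No. 12345 Date: 03/15/2015 Total Due: $1,234.56"

amounts = re.findall(r"\$\s?\d{1,3}(?:,\d{3})*(?:\.\d{2})?", text)
dates = re.findall(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", text)
doc_numbers = re.findall(r"(?:Invoice|Document)\s*(?:No\.?|#)\s*(\d+)", text,
                         flags=re.IGNORECASE)

print(amounts, dates, doc_numbers)   # ['$1,234.56'] ['03/15/2015'] ['12345']

If regex alone is too brittle, these extracted candidates can also become features for the second-stage classifier instead of raw token counts.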
H: Storing Sensor Data for Analysis of the Office I have currently been tasked with designing an application that tracks several different measurements around the office, e.g. the temperature, light, presence of people, etc. Having never really worked on data analysis before, I would like some guidance on how to store this data (which database design to use). What we're looking at currently are around 50 sensors that only send data when an event of interest occurs: if the temperature changes by 0.5 degrees or if the light turns on/off or if a room becomes occupied/vacant. So, the data will only be updated every few seconds. Also, in the future, I'd like to analyse some of the data. Hence, the data must be persistent in the database. What kind of technologies would you suggest to carry out this task? AI: I have been doing a similar project at my college: I have a classroom in which I'm supposed to collect data like temperature, humidity, light, occupancy, etc. Assuming that you have already worked out which sensors and motes to use, I'm going to explain the rest of the structure. You need a sensor network set up, and as you said, you have done that. These sensor networks generally do not send data directly over the internet, so you need a gateway that can collect data from the sensors and send it over the internet to a local server. On the server side you need a REST API, and you could use any language to develop it; I use PHP because I find it very easy to develop with. This REST API receives data from the gateway and stores it in a database. I use a MySQL database because the amount of data is not that big for us, but if your data is big enough you can use a big-data NoSQL tool like MongoDB. Whatever type of database you use, the structure remains the same. For sending data from the gateway to the server you can use a protocol like HTTP or MQTT, whichever you are comfortable with. What I do is have a WSN controller that sends data over USB to the gateway, and the gateway then sends the data to the server over Ethernet, so I had to develop a USB-to-Ethernet gateway. If you can take two UART terminals out of your controller, you can build a UART-to-Ethernet gateway using any microcontroller; even an Arduino Ethernet shield would work in that case. In my case data is sensed periodically, but since you are sensing data only when an event of interest occurs, you can fit a Poisson distribution to the collected data to estimate the average number of events per day and then decide whether your data is big or not.
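For the gateway-to-server hop, a minimal hedged sketch of what the HTTP side could look like from a Python-capable gateway; the endpoint URL and JSON fields are made up, since your REST API defines the real contract:

import time
import requests

reading = {
    "sensor_id": "room1-temp",
    "type": "temperature",
    "value": 22.5,
    "timestamp": int(time.time()),
}

# POST one event-driven reading to the (hypothetical) ingestion endpoint
resp = requests.post("http://example.local/api/readings", json=reading, timeout=5)
print(resp.status_code)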
H: How are clusters from DBSCAN sometimes non-convex? I've been using clustering in my bag of ML techniques for quite some time now, and I've never found a satisfying answer to this question. In DBSCAN, we define a maximum radius with which to form clusters. The algorithm will scan the space and group together points that are ALL reachable from one another. However, we can sometimes end up with a non-convex cluster. My confusion is around how the notion of a "radius", which describes a convex object, can be an input to an algorithm which results in a non-convex object? AI: A cluster in DBSCAN consists of multiple core points. The radius is the area covered by a single core point, but together with neighbor core points the shape will be much more complex. In particular, they can be much larger than epsilon, so you should choose a small value, and rely on this "cover" functionality. Wikipedia has an example of a non-convex cluster
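A quick way to see this in practice is the classic two-moons example: each moon is a non-convex shape, yet DBSCAN recovers them because it chains together many small epsilon-neighbourhoods of core points. A scikit-learn sketch (the parameter values are just reasonable guesses for this synthetic data):

from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))   # typically {0, 1}: one label per (non-convex) moon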
H: What are the performance measures in the neural networks field? I constructed a neural networks in R using neuralnet package. I want to test that using cross-validation, that is a technique based on using 4/5 of the dataset to train the network and the fifth one as the test set. I wonder about what measures I should use to measure the neural networks performance in terms of predictability. Could you suggest what measures are commonly used in the field and explain me why? Any hint and ideas about that will be appreciated. AI: Typical predictive performance measures used to compare accuracy of ANN models are: RMSE - (root mean squared error) measures the distance between estimated and actual outcomes. Other metrics that measure the same concept are MSE, MAE or MPE. R square - (R^2 or coefficient of determination) measures the reduction of variance when using the model. When comparing two different ANN models for performance, metrics that take into account the complexity of the model may be used, such as AIC or BIC.
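For reference, the usual definitions, with $y_i$ the observed values in the held-out fold, $\hat{y}_i$ the network's predictions and $\bar{y}$ the mean of the observed values (this notation is mine, not from a specific package):

$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$

In 5-fold cross-validation you would compute the chosen metric on each held-out fifth and report the average (and ideally the spread) over the five folds.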
H: Pivoting a two-column feature table in Pandas How can I transform the following DataFrame into one with cities as rows and each cuisine as a column, and 1 or 0 as values (1 if the city has that kind of cuisine)? I think this turns out to be a very common problem in transforming data into features for machine learning. I am aware of the Pandas pivot_table functionality, but it asks for a value column, and in this case we don't have any. import pandas as pd data = { 'city': ['NY','NY', 'SF','SF','SF'], 'cuisine': ['Japanese', 'Chinese', 'French', 'Japanse', 'German'] } df = pd.DataFrame(data) AI: If there is no value column - introduce it yourself! df["value"]=1 pd.pivot_table(df, values="value", index=["city"], columns="cuisine", fill_value=0) For your example I got (after fixing the misprint in 'Japanse' to 'Japanese') cuisine Chinese French German Japanese city NY 1 0 0 1 SF 0 1 1 1
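As an aside, pandas also has crosstab, which builds this kind of occurrence table directly without the dummy value column; a sketch on the same data (with the 'Japanse' typo fixed):

import pandas as pd

data = {
    'city': ['NY', 'NY', 'SF', 'SF', 'SF'],
    'cuisine': ['Japanese', 'Chinese', 'French', 'Japanese', 'German'],
}
df = pd.DataFrame(data)

table = pd.crosstab(df['city'], df['cuisine'])
print((table > 0).astype(int))   # 1/0 indicator instead of raw counts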
H: What is the actual output of Principal Component Analysis? I'm trying to understand PCA, but I don't have a machine learning background. I come from software engineering, but the literature I've tried to read so far is hard for me to digest. As far as I understand PCA, it will take a set of datapoints from an N dimensional space and translate them to an M dimensional space, where N > M. I don't yet understand what the actual output of PCA is. For example, take this 5 dimensional input data with values in the range [0,10): // dimensions: // a b c d e [[ 4, 1, 2, 8, 8], // component 1 [ 3, 0, 2, 9, 8], [ 4, 0, 0, 9, 1], ... [ 7, 9, 1, 2, 3], // component 2 [ 9, 9, 0, 2, 7], [ 7, 8, 1, 0, 0]] My assumption is that PCA could be used to reduce the data from 5 dimensions to, say, 1 dimension. Data details: There are two "components" in the data. One component has mid a levels, low b and c levels, high d, and nondeterministic e levels. The other component has high a and b levels, low c and d levels, and nondeterministic e levels. This means that the two components are most differentiated by b and d, somewhat differentiated by a, and negligibly differentiated by c and e. Outputs? I'm making this up, but say the (non-normalized) linear combination with the highest differentiating power is something like 5*a + 10*b + 0*c + 10*d + 0*e The above input data translated along that single axis is: [[110], [105], [110], ...etc Is that linear combination (or a vector describing it) the output of PCA? Or is the output the actual reduced dataset? Or something else entirely? AI: I agree with dpmcmlxxvi's answer that the common "output" of PCA is computing and finding the eigenvectors for the principal components and the eigenvalues for the variances, but I can't add comments yet and would still like to contribute. Once you hit this step of calculating the eigenvectors and eigenvalues of the principal components, you can do many types of analyses depending on your needs. I believe the "output" you are specifically asking about in your question is the resultant data set of applying a transformation or projection of the original data set into the desired linear subspace (of n-dimensions). This is taking the output of PCA and applying it on your original data set. This PCA step by step example may help. The ultimate output of this 6 step analysis was the projection of a 3 dimensional data set into 2 dimensions. Here are the high level steps: Taking the whole dataset ignoring the class labels Compute the d-dimensional mean vector Computing the scatter matrix (alternatively, the covariance matrix) Computing eigenvectors and corresponding eigenvalues Ranking and choosing k eigenvectors Transforming the samples onto the new subspace Ultimately, step 4 is the "output" since that is where the common requirements for performing PCA are fulfilled. We can make different decisions at steps 5 and 6 and produce alternative output there. A few more possibilities: You could decide to project the observations with outliers removed Another possible outcome here would be to calculate the proportion of variance explained by one or any combination of principal components. For example, the proportion of variance explained by the first two principal components of K components is (λ1+λ2)/(λ1+λ2+. . .+λK). 
After plotting the projected observations into the first two principal components (as in the given example), you can impose a plot of the loadings of each of the original dimensions into the subspace (scaled by the standard deviation of the principal components). This way, we can see the contribution of the original dimensions (in your case a - e) to principal component 1 and 2. The biplot is another common product of PCA.
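To tie the steps above back to code, here is a minimal scikit-learn sketch on the 5-dimensional example from the question: components_ holds the linear combinations (eigenvectors, i.e. the weights on a through e), explained_variance_ratio_ the share of variance each captures, and transform/fit_transform produces the projected (reduced) dataset:

import numpy as np
from sklearn.decomposition import PCA

X = np.array([
    [4, 1, 2, 8, 8],
    [3, 0, 2, 9, 8],
    [4, 0, 0, 9, 1],
    [7, 9, 1, 2, 3],
    [9, 9, 0, 2, 7],
    [7, 8, 1, 0, 0],
])

pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)       # the data expressed along the first principal axis
print(pca.components_)                 # weights on a, b, c, d, e (the direction itself)
print(pca.explained_variance_ratio_)   # fraction of total variance that axis captures
print(X_reduced.ravel())               # the reduced 1-D dataset

So both of the things you guessed at are outputs: the direction (a linear combination of the original dimensions, though centred and scaled rather than the made-up 5a + 10b + 10d) and, once you apply it, the projected data.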
H: What should I care about while stacking as an ensemble method? I'm using SMO, Logistic Regression, Bayesian Network and Simple CART algorithms for classification. Results form WEKA: Algorithm Sensitivity (%) Specificity (%) Overall accuracy (%) Bayesian Network 57.49 76.09 65.24 Logistic Regression 64.73 69.86 66.87 SMO 54.32 79.20 64.69 Simple CART 71.88 61.51 67.56 SMO gives the best result for my classification problem, since it correctly classify the 79.20% of the class which is important for me. I want to increase this accuracy by stacking. I tried to combine some of them. In most of the cases I couldn't increase the accuracy but stacking SMO with Logistic Regression made a little increment in accuracy. How can I explain why stacking SMO with Logistic Regression is better than others? Is there any generalization such as combining tree classifiers gives good result in stacking? What should I care about while stacking? EDIT: Bayesian Network Logistic Reg. SMO CART Kappa statistic 0.3196 0.3367 0.3158 0.3335 Mean absolute error 0.3517 0.4164 0.3531 0.4107 Root mean squared error 0.5488 0.4548 0.5942 0.4547 Relative absolute error (%) 72.3389 85.65 72.6299 84.477 Root relative squared error (%) 111.3076 92.2452 120.5239 92.2318 Weighted Avg. of F-Measure 0.653 0.671 0.676 92.2318 ROC Area 0.725 0.727 0.668 0.721 Total number of instance is 25106. 14641 of them is class a, and 10465 of them belong to class b. === Confusion Matrix of Simple CART === a b <-- classified as 10524 4117 | a = 0 4028 6437 | b = 1 === Confusion Matrix of SMO === a b <-- classified as 7953 6688 | a = 0 2177 8288 | b = 1 === Confusion Matrix of Logistic Regression === a b <-- classified as 9477 5164 | a = 0 3154 7311 | b = 1 Since SMO is successful at class b and CART is successful at class a, I tried to ensemble these two algorithms. But I couldn't increase the accuracy. Then I tried to combine SMO with Logistic Regression, the accuracy is increased a little bit. Why ensembling SMO with Logistic Regression is better than ensebling SMO with CART, is there any explanation? AI: To directly answer your question about stacking: you should care about minimizing 1) bias, and 2) variance. This is obvious, but in practice this often comes down to simply having models which are "diverse". (I apologize that link is behind a paywall, but there are a few others like it and you may well find it other ways) You don't want ensembles of like-minded models - they will make the same mistakes and reinforce each other. In the case of stacking, what is happening? You are letting the outputs of the probabilistic classifiers on the actual feature input become the new features. A diverse set of classifiers which can in any way give signals about edge cases is desirable. If classifier 1 is terrible at classes A, B, and C but fantastic at class D, or a certain edge case, it is still a good contribution to the ensemble. This is why neural nets are so good at what they do in image recognition - deep nets are in fact recursive logistic regression stacking ensembles! Nowadays people don't always use the sigmoid activation and there are many layer architectures, but it's the same general idea. What I would recommend is trying to maximize the diversity of your ensemble by using some of the similarity metrics on the classifiers' prediction output vectors (ie, Diettrich's Kappa statistic) in training. Here is another good reference. Hope that helps.
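If you ever want to reproduce the experiment outside WEKA, scikit-learn has roughly equivalent pieces (an SVM standing in for SMO, plus a decision tree and logistic regression), and recent releases include a StackingClassifier. A hedged sketch on synthetic, imbalanced data, not your actual dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.6, 0.4], random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),          # SMO-like base learner
                ("cart", DecisionTreeClassifier(max_depth=5))],
    final_estimator=LogisticRegression())                # meta-learner, as in your best combo
print(cross_val_score(stack, X, y, cv=5).mean())

Comparing the per-class confusion matrices of the base learners (as you did) against a diversity measure such as Dietterich's kappa is a good way to decide which combinations are worth stacking.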
H: Does the network learn based on previous training or does it restart? Matlab, neuralnetworks In Matlab, if you build a simple network and train it: OP = feedforwardnet(5, 'traingdm'); inputsVals = [0,1,2,3,4]; targetVals = [3,2,5,1,9]; OP = train(OP,inputsVals,targetVals); then you train it again so another OP = train(OP,inputsVals,targetVals); What is happens to the network? Does it train again based on what it learned the first time you did OP = train(OP,inputsVals,targetVals); or does it train as if it were the first time training the network. AI: It trains again based on what it learned the first time you did OP = train(OP,inputsVals,targetVals). More generally, train uses your network's weights, i.e. it does not initialize the weights. The weight initialization happens in feedforwardnet. Example: % To generate reproducible results % http://stackoverflow.com/a/7797635/395857 rng(1234,'twister') % Prepare input and target vectors [x,t] = simplefit_dataset; % Create ANN net = feedforwardnet(10); % Loop to see where train() initializes the weights for i = 1:10 % Learn net.trainParam.epochs = 1; net = train(net,x,t); % Score y = net(x); perf = perform(net,y,t) end yields perf = 0.4825 perf = 0.0093 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0028
H: Scalable open source machine learning library written in Python I believe scikit-learn is written in Python; however, it is not scalable. Spark MLlib (and the newer ML API) is scalable but written in Scala. I am looking for an ongoing effort where a machine learning library is being built in Python (available on GitHub or similar) so that I can contribute to it. Is anyone aware of such an effort? AI: Is there a specific reason besides the fact that you would like to contribute? I am asking because there is always pyspark, the Spark Python API that exposes the Spark programming model to Python. For deep learning specifically, there are a lot of frameworks, like Lasagne, built on top of Theano (a Python library for mathematical expressions involving multi-dimensional arrays), so they are able to use the GPU for intensive training. Getting an EC2 instance with a GPU on AWS is always an option.
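For completeness, a small hedged sketch of what the pyspark route looks like with the spark.ml API (a toy DataFrame is created inline; a real job would read distributed data, and Spark handles the scaling):

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0, 0.0), (2.0, 1.0, 0.0), (5.0, 6.0, 1.0), (6.0, 5.0, 1.0)],
    ["x1", "x2", "label"])

assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
model = LogisticRegression(featuresCol="features", labelCol="label") \
    .fit(assembler.transform(df))
print(model.coefficients)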