H: Convert date into number - Apache PIG
Imagine that I've a field called date in this format: "yyyy-mm-dd" and I want to convert it to a number like "yyyymmdd". For that I'm trying to use this:
Data_ID = FOREACH File GENERATE CONCAT((chararray)SUBSTRING(Date,0,4),(chararray)SUBSTRING(Date,6,2),(chararray)SUBSTRING(Date,9,2));
But I'm getting a list of nulls...
Anyone knows what I'm doing wrong?
Thanks!
AI: I don't use PIG, but from looking online I think you should be able to use the Replace function built into PIG.
Assuming your Date field is already a string:
Data_ID = FOREACH File GENERATE(int) REPLACE(Date,'-','');
Here's the reference if it's helpful: https://pig.apache.org/docs/r0.9.1/func.html#replace |
H: value saveAsTextFile is not a member of org.apache.spark.sql.DataFrame
How Can I save a DataFrame as a CSV in HDFS?
I'm trying with this:
df.groupBy("Product_ID").agg(collect_list($"Stock")).saveAsTextFile("PATH/results.csv")
AI: This is not a standard part of the DataFrame API. You can either map it to an RDD, join the row entries into a string and save that, or the more flexible way is to use the Databricks spark-csv package that can be found here.
If it's just one column you can map it to an RDD and just call .saveAsTextFile(filename) |
H: What does Negative Log Likelihood mean?
I have a data set which has continuous independent variables and a continuous dependent variable. To predict the dependent variable using the independent variables, I've run an ensemble of regression models and tried to compare them against each other. Here are the results for reference:
I can interpret what the R-squared value / Coefficient of determination for each of those models means. However, I can't understand what the Negative Log Likelihood means. Especially, why is it Infinity for Linear Regression and Boosted Decision Tree, and a finite value for a Decision Forest Regression?
Edit:
Data Description: The data that went into these three models is all continuous independent variables and a continuous dependent variable. There are a total of 542 observations and 26 variables.
These 542 variables are split 70 - 30 to get training and testing datasets. Therefore, the training dataset has 379 observations and 26 variables; the testing dataset has 163 observations and 26 variables.
No missing data.
Edit 2 Possible Explanation - (click here): Apparently, Linear Regression and Boosted Trees in Azure ML don't calculate the Negative Log-Likelihood metric - and that could be the reason that NLL is infinity or undefined in both cases.
AI: The likelihood function is the product of the probability density functions of the observations, assuming each observation is independent. However, we usually work on a logarithmic scale, because the log-density terms are then additive. If you don't understand what I've said, just remember: the higher the value, the better your model fits the data. Google maximum likelihood estimation if you're interested.
Obviously, your input data is bad. You should give your model a proper data set. While I don't have your data set, we can take a look at the likelihood function for linear regression:
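(The equation was shown as an image in the original post; for reference, the standard Gaussian log-likelihood it refers to is $\log L(\beta,\sigma) = -\frac{n}{2}\log(2\pi) - n\log\sigma - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i^\top\beta\right)^2$, where $\sigma$ is the residual standard deviation.)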
You will get infinity if the likelihood function is zero or undefined (because log(0) is invalid). Look at the equation: most likely your sample standard deviation is zero. If it is zero, the last term will be undefined. Did you provide a data set in which the same data was copied and pasted across rows?
Boosted trees should also be undefined if your sample deviation is zero. However, decision tree is estimated based on impurity and won't crash here.
Summary: please check and double check your data.
EDIT: I think you just had a bug. Linear regression will always give you something here. Have you fitted the models in R with the same dataset? |
H: Do I need a strong programming background to become a data analyst?
I'm thinking about becoming a data analyst, and I'm wondering if programming knowledge is a must for this. I'm fairly strong with maths, but I have very little programming experience. Do all data analysts have strong programming skills (R, SAS, SQL, Python, etc.), or can it vary depending on the type of data analyst you are?
AI: I would say you don't need to already have lots of programming experience, but being generally mathematically and computer-literate is important.
If you've literally never programmed a computer before, then dig up a basic online R or Python tutorial. Which one to pick depends on the industry you'll be working in: tech companies tend to use Python, elsewhere R might be slightly more prevalent.
There are various "drag-and-drop" software "solutions" that don't in theory need programming, but in most real-life applications you'll find that there's functionality you need that they don't have, or you need to pre- or post-process your data in some way, and you'll have to resort to R/python/SAS/... to get that sorted out.
You'll find that you are most likely able to learn the coding skills you need on the job, as long as you have just a little experience with writing code. |
H: Does dropout require multiple passes of the same data set, as a sort of ensemble method?
I'm a bit confused about dropout -- on one tutorial, it was described as basically an 'ensemble method' of sorts. This implies that you might need to create an ensemble of networks. Is this the case, where you would need to get a consensus from the ensemble at the end? Or is dropout run over one single network?
AI: Dropout is applied over one network. Sometimes (like with non-dropout networks) you will run your data through it multiple times before it converges, and this number will be a bit higher on average with dropout, but it is one network. Per layer you have a dropout probability during training, and during testing/prediction you use the full network without dropout by multiplying the weights by (1-p), where p is the dropout probability. This is because during training only a fraction (1-p) of the nodes are actually used.
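As a rough illustration of that train/test treatment (a minimal NumPy sketch, not part of the original answer):
import numpy as np
p = 0.5                                     # dropout probability for this layer
activations = np.random.rand(4, 8)          # activations flowing into the layer
# training: randomly zero out roughly a fraction p of the units
mask = (np.random.rand(*activations.shape) > p).astype(float)
train_out = activations * mask
# testing/prediction: keep every unit but scale by (1 - p), so the expected
# magnitude matches what the next layer saw during training
test_out = activations * (1 - p)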
Why this is related to ensembles is that every training instance is basically trained on a different network, by randomly dropping out nodes, forcing it to learn specific things using different nodes because it will not have all the nodes available at all times. It is not a traditional ensemble in that you combine multiple networks, just during training it acts a bit like it. |
H: I need to measure Performance : AUC for this code of NLTK and skLearn
The code below measures precision and recall and F-measure (source). How can I measure AUC?
import collections
import nltk.metrics
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
def word_feats(words):
return dict([(word, True) for word in words])
negids = movie_reviews.fileids('neg')
posids = movie_reviews.fileids('pos')
negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in negids]
posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in posids]
negcutoff = len(negfeats)*3/4
poscutoff = len(posfeats)*3/4
trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]
testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]
print 'train on %d instances, test on %d instances' % (len(trainfeats), len(testfeats))
classifier = NaiveBayesClassifier.train(trainfeats)
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
for i, (feats, label) in enumerate(testfeats):
refsets[label].add(i)
observed = classifier.classify(feats)
testsets[observed].add(i)
print 'pos precision:', nltk.metrics.precision(refsets['pos'], testsets['pos'])
print 'pos recall:', nltk.metrics.recall(refsets['pos'], testsets['pos'])
print 'pos F-measure:', nltk.metrics.f_measure(refsets['pos'], testsets['pos'])
print 'neg precision:', nltk.metrics.precision(refsets['neg'], testsets['neg'])
print 'neg recall:', nltk.metrics.recall(refsets['neg'], testsets['neg'])
print 'neg F-measure:', nltk.metrics.f_measure(refsets['neg'], testsets['neg'])
AI: It is unclear if you are requesting AUC of ROC or Precision-Recall curve. However, instead of storing the indices of examples in sets, you can store the labels in lists and use sklearn's auc function after running precision_recall_curve or roc_curve:
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
def label2int(label):
if label == 'pos':
return 1
else:
return 0
y_true, y_score = [], []
for i, (feats, label_true) in enumerate(testfeats):
label_predicted = classifier.classify(feats)
y_true.append(label2int(label_true))
y_score.append(label2int(label_predicted))
# Precision-Recall AUC
precision, recall, _ = precision_recall_curve(y_true, y_score, pos_label=1)
pr_auc = auc(recall, precision)
print "Precision-Recall AUC: %.2f" % pr_auc
# ROC AUC
fpr, tpr, _ = roc_curve(y_true, y_score, pos_label=1)
roc_auc = auc(fpr, tpr)
print "ROC AUC: %.2f" % roc_auc
Precision-Recall AUC: 0.82
ROC AUC: 0.73 |
H: What do these classification evaluation results mean to you? Are they suspicious or not?
I have collected a dataset with two class labels and used the SVM method to classify it, and these are the results. Do they appear suspicious or not?
scikit-learn classifiers with SVM SVC train on 114859 instances, test on 49227 instances
Excution (Training) Time: 9.82799983025
Excution (Testing) Time: 3.75
accuracy: 0.999837487558
Precision-Recall AUC: 1.00
ROC AUC: 1.00
pos precision: 0.999822253822
pos recall: 1.0
pos F-measure: 0.999911119012
neg precision: 1.0
neg recall: 0.998107404779
neg F-measure: 0.999052806062
AI: The statistics here are obviously very good, in fact too good for any practical data set. Your model is almost perfect... Unfortunately, it's practically useless and I'll explain.
In machine learning, if you see something like this you know you are in trouble. That can happen if there are problems with your data workflow. For example, you might have removed outliers that you shouldn't have, or you actually used a subset of your training data for the test set.
It's fine if you're just toying with SVM, but you'll never encounter something like this in real life. |
H: Word vectors as input
I have a corpus on which I want to perform sentiment analysis using LSTM and word embeddings. I have converted the words in the documents to word vectors using Word2Vec. My question is how to input these word vectors as input to Keras? I don't want to use the embeddings provided by Keras.
AI: You can just skip the Embedding layer and use a normal input layer with n input nodes, where n is the dimensionality of your word2vec embeddings. The rest is the same as with an embedding layer: just pass a sequence of n-dimensional vectors as the input, potentially padded or truncated depending on your model.
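For example, a minimal Keras sketch (illustrative only; max_len, X and y are hypothetical names, and the sequences are assumed to be pre-converted to 300-dimensional word2vec vectors and padded to max_len):
from keras.models import Sequential
from keras.layers import LSTM, Dense
model = Sequential()
# input is a (max_len, 300) sequence of precomputed word vectors -- no Embedding layer
model.add(LSTM(128, input_shape=(max_len, 300)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
# X has shape (num_documents, max_len, 300); y holds the sentiment labels
model.fit(X, y, epochs=5, batch_size=32) |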
H: Machine Learning Best Practices for Big Dataset
I am about to graduate from my Master and had learnt about machine learning as well as performed research projects with it. I wonder about the best practices in the industry when performing machine learning tasks with Big Datasets (like 100s GB or TB). Appreciate if fellow data scientists can share their experience. Here are my questions:
Obviously, very large datasets take longer time to train (can be days or weeks). Many times we need to train various models (SVM, Neural Network, etc.) to compare and find better performance model. I suspect, in industry projects, we want the results as quick as possible but produce the best performance. Are there any tips for reducing the training & testing time? If you recommend subsetting the dataset, I will be interested to learn how best to subset the dataset to cover all or majority of scenarios from the dataset.
We know that performing cross validation is better as it may reduce over-fitting. However, cross validation also takes time to train and the model trained with cross validation may not be implemented straight (speaking from python sklearn experience: I need to train the model with dataset again after cross validation testing for it to be implemented). Do you normally do cross validation in your big data projects or getting by with the train-test split?
Appreciate the feedback.
AI: I'll list some practices I've found useful, hope this helps:
Irrespective of whether the data is huge or not, cross validation is a must when building any model. If this takes more time than an end consumer is willing to wait, you may need to reset their expectations, or get faster hardware/software to build the model; but do not skip cross validation. Plotting learning curves and cross-validation are effective steps to help guide us so we recognize and correct mistakes earlier in the process. I've experienced instances when a simple train-test set does not reveal any problems until I run cross-fold validations and find a large variance in the performance of the algorithm on different folds.
Before sizing up a dataset, eliminate the records with missing values of key variables and outliers, columns of highly correlated variables, and near zero variance variables. This will give you a much better estimate of the real usable dataset. Sometimes you may end up with only a fraction of the available dataset that can actually be used to build a model.
When sizing up a dataset for building a model, it is easier to estimate the computing resources if you enumerate the dataset in rows and columns and memory size of the final numeric matrix. Since every machine learning algorithm is ultimately going to convert the dataset into a numeric matrix, enumerating the dataset size in terms of GBs/TBs of raw input data (which may be mostly strings/textual nominal variables/etc.) is often misleading and the dataset may appear to be more daunting and gigantic to work with than it is.
Once you know (or estimate) the final usable size of your dataset, check if you have a suitable machine to be able to load that into memory and train the model. If your dataset size is smaller than memory available/usable by the software, then you need not worry about the size any longer.
If the dataset size is larger than the memory available to train a model, then you could try these approaches (starting from the simplest ones first):
Use a machine with more memory: If you're using a cloud service provider then the simplest approach could be just to provision more memory and continue building the model as usual. For physical machines, try to procure additional RAM, its price continues to reduce and if your dataset is going to remain this big or grow bigger over time, then it is a good investment.
Add nodes to the cluster: For Hadoop and Spark based cluster computing deployments, training on a larger data-set is as easy as adding more machines to the cluster.
Quite often classification tasks require training on data with highly imbalanced classes, the ratio of positive to negative classes could sometimes be as large as 1:1000 or more. A straightforward method to improve accuracy in these cases is to either over-sample the minority class or under-sample the majority class, or do both together. If you have a large dataset, under-sampling the majority class is a very good option which will improve your algorithm's accuracy as well as reduce training time.
Build an ensemble: Split the dataset randomly and train several base learners on each part, then combine these to get the final prediction. This would most effectively make use of the large dataset and produce a more accurate model. But you need to spend more time to carefully build the ensemble and keep clear of the usual pitfalls of ensemble building.
If you're using an ensemble, train many single-thread models in parallel. Almost all ML software provide features to train multiple models on different cores or separate nodes altogether.
Evaluate multiple different algorithms on the time taken to train them for your specific dataset vs. their accuracy. While there is no universal answer, I've found that when using noisy data, SVMs take a much longer time to train than a carefully built ensemble of regularized regression models, but may be only slightly more accurate; and a well-built neural network may take a very long time to train as compared to a CART tree, but perform significantly more accurately than the tree.
To reduce time taken to build the model, try to automate as much of the process as you can. A few hours spent automating a complex error-prone manual task may save your team a hundred hours later in the project.
If available, use those algorithm implementations which use parallel processing, sparse matrices and cache aware computing, these reduce processing time significantly. For example, use xgboost instead of a single-core implementation of GBM.
If nothing else works, train the model on a smaller dataset; as Emre has suggested in his answer, use learning curves to fix the smallest sample size required for training the model, adding more training records than this size does not improve model accuracy noticeably. Here is a good article which explores this situation - largetrain.pdf |
H: Getting uniform distribution over topics from gensim's LDA?
I am trying to learn topics distribution for each document in a corpus.
I have term-document matrix (sparse matrix of dim: num_terms * no_docs) as input to the LDA model (with num_topics=100) and when I try to infer vectors for each document I am getting uniform distribution over them. This is highly unlikely since documents are of different topics.
The relevant code snippet is:
#input : scipy sparse term-doc matrix (no_terms * no_docs)
corpus = gensim.matutils.Sparse2Corpus(term_doc)
lda = gensim.models.LdaModel(corpus, 100)
vec_gen = lda[corpus]
vecs = [vec for vec in vec_gen]
Now for each vector in vecs I am getting same probability for each topic.
Can anyone point out where I am going wrong?
AI: I solved this issue. There is a parameter for minimum probability in gensim's LDA which is set to 0.01 by default. So topics with prob. < 0.01 are pruned from output.
Once I set min. prob to a very low value the results had all topics and their corresponding probability. |
H: Features of word vectors in Word2Vec
I am trying to do sentiment analysis. In order to convert the words to word vectors, I am using Word2Vec model. Suppose I have all the sentences in a list named 'sentences' and I am passing these sentences to word2vec as follows:
model = word2vec.Word2Vec(sentences, workers=4 , min_count=40, size=300, window=5, sample=1e-3)
Since I am noob to word vectors, I have two doubts:
1- Setting the number of features to 300 defines the dimensionality of a word vector. But what do these features signify? If each word in this model is represented by a 1x300 numpy array, then what do these 300 features signify for that word?
2- What does downsampling as represented by 'sample' parameter in the above model do in actual?
AI: 1- The number of features: In terms of the neural network model, it represents the number of neurons in the projection (hidden) layer. As the projection layer is built upon the distributional hypothesis, the numerical vector for each word signifies its relation to its context words.
These features are learnt by the neural network as this is unsupervised method. Each vector has several set of semantic characteristics.
For instance, let's take the classical example, V(King) - V(Man) + V(Woman) ~ V(Queen), with each word represented by a 300-d vector. V(King) will have semantic characteristics of royalty, kingdom, masculinity and human encoded in the vector in a certain order. V(Man) will have masculinity, human and work in a certain order. Thus when V(King) - V(Man) is computed, the masculinity and human characteristics cancel out, and when V(Woman), which has femininity and human characteristics, is added, the result is a vector much more similar to V(Queen). The interesting thing is that these characteristics are encoded in the vector in a certain order, so that numerical computations such as addition and subtraction work nicely. This is due to the nature of the unsupervised learning method in the neural network.
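As a concrete (hypothetical) illustration with gensim, assuming a model trained on a large general corpus that actually contains these words:
# analogy query: King - Man + Woman; on a well-trained model the top hit is usually 'queen'
print(model.wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))
# (older gensim versions expose the same call directly as model.most_similar)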
2- The sample parameter controls downsampling of very frequent words: with sample=1e-3, words whose frequency in the corpus is above this threshold (typically stop-words such as "the") are randomly skipped during training. This speeds up training and usually improves vector quality, because such words carry little information about their neighbours. It is separate from the choice of training approximation -- hierarchical softmax versus negative sampling, controlled by the hs and negative parameters -- where negative sampling updates only a small sampled set of "negative" words per step instead of the whole vocabulary, which avoids the cost of a full softmax over the vocabulary. |
H: Classifier on top of LDA topic vectors?
I have training data in form of pair of documents with an associated label - {doc1, doc2, label}. Label is defined as function of pair of documents.
Now I want to build a model which can predict the label given two new documents.
I want to try different representation of document (instead of common ones say TF-IDF). Can I use vectors (topic distribution) from LDA as features for a classifier?
AI: Yes, that is a reasonable approach. Also try neural network based representations such as doc2vec. I suppose you know how to do the classification part? |
H: Network Analysis using R
I've the following dataset:
**Strength Movie1 Movie2**
23 2 3
80 1 2
10 4 3
And I want to create a graph with the relationships between movies, having the first column as the strength of the relationship. How can I do this using R?
Many thanks!
AI: Try this R code:
library(igraph)
dfr <- data.frame(idMovie1=c(2,1,4), idMovie2=c(3, 2, 3), strength=c(23,80,10))
igr <- igraph::graph.data.frame(dfr)
plot(x = igr,
edge.curved=FALSE, edge.width=log(edge_attr(igr)$strength), edge.label=edge_attr(igr)$strength,
main="Graph of Movie Strengths")
This works only for small datasets; visualizations get ugly quickly. |
H: Do categorical features always need to be encoded?
I'm using Spark's Machine Learning Library, and features are categorical. The features are strings, and Spark's MLlib (like many other machine learning libraries) does not accept Strings as inputs.
The normal procedure for overcoming this is to convert Strings to integers, and then encode these integers (using a onehotencoder for example), because converting to integers implies that there is an ordering between features.
My question is - do categorical features always need to be encoded? In what situation could integers be used instead of encoding?
I'm using Logistic Regression and Naive Bayes.
When using integers as features I get an 84% accuracy, when these integers are encoded I get an 82% accuracy.
Is it necessary to encode?
AI: You have partly answered this question yourself ("because converting to integers implies that there is an ordering between features").
I will just clarify the terminology a bit more.
Categorical data: information has categories, but no natural ordering defined between them (gender, name of user's cat)
Ordinal data: information has categories with a natural ordering defined between them (e.g. an annual income scale defined in terms of categories such as under \$40000, \$40000 - \$80000, and so on).
If the variable is of type ordinal, you can replace it with integers and proceed with the algorithm. If it is categorical, it should be converted as well as encoded.
Hope this helps. |
H: Standardization/Normalization test data in R
I understand that one should standardize and normalize the test data (or any "unlabeled" data) with the training mean and sd. How can I implement this in R language? Is there a kind of "fitting" to the training set and a kind of applying to the test data?
AI: Check out the preProcess function from the caret library. You can choose the parameters you want to scale/center the training data, and it also saves the transformations it makes so then you can normalize the test set with the same specifications that you normalized the training set with. Could go something like this:
library(caret)
trainData <- data.frame(v1 = rnorm(15,3,1), v2 = rnorm(15,2,2))
testData <- data.frame(v1 = rnorm(5,3,1), v2 = rnorm(5,2,2))
normParam <- preProcess(trainData)
norm.testData <- predict(normParam, testData)
now your norm.testData is scaled and centered according to the training data set parameters.
Another way to do this without using caret:
## set up data
trainData <- data.frame(v1 = rnorm(15,3,1), v2 = rnorm(15,2,2))
testData <- data.frame(v1 = rnorm(5,3,1), v2 = rnorm(5,2,2))
## find mean and sd column-wise of training data
trainMean <- apply(trainData,2,mean)
trainSd <- apply(trainData,2,sd)
## centered
sweep(trainData, 2L, trainMean) # using the default "-" to subtract mean column-wise
## centered AND scaled
norm2.testData <- sweep(sweep(testData, 2L, trainMean), 2, trainSd, "/") |
H: Can overfitting occur in Advanced Optimization algorithms?
While taking an online course on machine learning by Andrew Ng on Coursera, I came across a topic called overfitting. I know it can occur when gradient descent is used in linear or logistic regression, but can it occur when advanced optimization algorithms such as "Conjugate gradient", "BFGS", and "L-BFGS" are used?
AI: There is no technique that will eliminate the risk of overfitting entirely. The methods you've listed are all just different ways of fitting a linear model. A linear model will have a global minimum, and that minimum shouldn't change regardless of the flavor of gradient descent that you're using (unless you're using regularization), so all of the methods you've listed would overfit (or underfit) equally.
Moving from linear models to more complex models, like deep learning, you're even more at risk of seeing overfitting. I've had plenty of convolutional neural networks that badly overfit, even though convolution is supposed to reduce the chance of overfitting substantially by sharing weights. In summary, there is no silver bullet for overfitting, regardless of model family or optimization technique. |
H: Any guidance for new beginners interested in data science
I am a student with a master's degree in biostatistics. I am interested in data science. I know SAS and R, but have no experience with Python. May I ask for your expert advice on how to teach myself data science from zero, please?
Any advice is much appreciated.
AI: In order to learn data science you should know about the main tools and ideas in the data scientist's toolbox.
Two steps which most newbies follow are:
The first is a conceptual introduction to the ideas behind turning data into actionable knowledge, which involves the following steps:
Getting and Cleaning Data, Data Analysis, Reproducible Research
Statistical Inference, using Regression Models and Machine Learning
The second is a practical introduction to the tools that will be used in the program, like version control, markdown, git, GitHub, R, and RStudio.
I would advise you to complete a general course on data science from any website or tutorial you find easy; for example, I personally like this course: https://www.coursera.org/specializations/jhu-data-science
And then start building some projects of your own. For example, in my free time I build a lot of small data science projects playing with any available data, like US census or petroleum data. See the following link for one of my many data science projects:
https://arjun-chaudhary.shinyapps.io/Data_Analytics/ |
H: Transformation from Datawarehouse into Big Data structure
What are the required stages for transferring data from a data warehouse into a big data structure? Are there any tools and methods that support it?
How should the schema be used for such a transformation, and how should different data types, such as facts and dimensions, be handled? What are the criteria for separating data across machines, indexes or unique keys?
AI: Based on your comments, transitioning big data sets from on-premises systems to a cloud-based system is cumbersome and fraught with challenges. However, you can use Amazon Redshift:
Amazon Redshift is a fully managed, petabyte-scale data warehouse
service in the cloud. You can start with just a few hundred gigabytes
of data and scale to a petabyte or more.
The first step in creating a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster.
After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.
OR
Use a Hadoop environment as the landing zone to pull in data from
various sources, process it, and transfer the processed data to the
existing data warehouse or other repositories.
Explore scenarios of different ways to implement a landing zone. Learn about the architecture of the zone and the tools and techniques for integrating it with various environments. |
H: Calculate feature weight vector for one-hot-encoded data frame in R
I have the following data frame with one categorical and two numerical columns:
V1 V2 V3
1 A 1 3
2 A 3 5
3 B 3 3
4 C 2 3
I have turned this into the following dummy variables:
V1.1 V1.2 V1.3 V2 V3
1 1 0 0 1 3
2 1 0 0 3 5
3 0 1 0 3 3
4 0 0 1 2 3
Now, I want to apply clustering to this latter set. I guess that I could get better results if I downweight the dummy variable columns (proportionally to their number) because with equal weights the distance based clusters will be distorted.
My question is, how could I get the following weight vector from the new data set:
0.33, 0.33, 0.33, 1, 1
AI: The information you need is in the assign attribute on the matrix returned by model.matrix. Something like this seems to work for me:
data <- data.frame(V1 = factor(c("A", "A", "B", "C")), V2 = c(1, 3, 3, 2), V3 = c(3, 5, 3, 3))
model <- model.matrix(~ . + 0, data = data)
assignments <- attr(model, "assign")
counts <- table(assignments)
weights <- sapply(assignments, function (x) { 1 / counts[[x]] })
weights is now (0.3333333, 0.3333333, 0.3333333, 1.0000000, 1.0000000). |
H: importing csv data in python
I have a CSV file with around 130 columns and 6000 rows.
What is the best way to import it into Python, so that I can later use it in a classification algorithm (columns are the labels and rows are individual samples)?
AI: Use pandas library:
import pandas as pd
pd.read_csv('foo.csv')
Pandas identifies the headers automatically and is a great tool for data wrangling.
10 Minutes intro to pandas |
H: What is the purpose of multiple neurons in a hidden layer?
On the surface, this sounds like a pretty stupid question. However, I've spent the day poking around various sources and can't find an answer.
Let me make the question more clear.
Take this classic image:
Clearly, the input layer is a vector with 3 components. Each of the three components is propagated to the hidden layer. Each neuron, in the hidden layer, sees the same vector with 3 components -- all neurons see the same data.
So we are at the hidden layer now. From what I read, this layer is normally just ReLus or sigmoids.
Correct me if I'm wrong, but a ReLu is a ReLu. Why would you need 4 of the exact same function, all seeing the exact same data?
What makes the red neurons in the hidden layer different from each other? Are they supposed to be different? I haven't read anything about tuning or setting parameters or perturbing different neurons to have them be different. But if they aren't different...then what's the point?
Text under the image above says, "A neural network is really just a composition of perceptrons, connected in different ways." They all look connected in the exact same way to me.
AI: To explain using the sample neural network you have provided:
Purpose of the multiple inputs: Each input represents a feature of the input dataset.
Purpose of the hidden layer: Each neuron learns a different set of weights to represent different functions over the input data.
Purpose of the output layer: Each neuron represents a given class of the output (label/predicted variable).
If you used only a single neuron and no hidden layer, this network would only be able to learn linear decision boundaries. To learn non-linear decision boundaries when classifying the output, multiple neurons are required. By learning different functions approximating the output dataset, the hidden layers are able to reduce the dimensionality of the data as well as identify more complex representations of the input data. If they all learned the same weights, they would be redundant and not useful.
The way they will learn different "weights" and hence different functions when fed the same data, is that when backpropagation is used to train the network, the errors represented by the output are different for each neuron. These errors are worked backwards to the hidden layer and then to the input layer to determine the most optimum value of weights that would minimize these errors.
This is why when implementing backpropagation algorithm, one of the most important steps is to randomly initialize the weights before starting the learning. If this is not done, then you would observe a large no. of neurons learning the exact same weights and give sub-optimal results.
Edited to answer additional questions:
The only reason the neurons aren't redundant is because they've all been "trained" with different set of weights, hence, give a different output when presented with the same data. This is achieved by random initialization and back-propagation of errors.
The outputs from the Orange neurons (use your diagram as an example), are "squashed" by each Blue neuron by applying the sigmoid or Relu function with the trained weights and the output of the orange neurons. |
H: Counting the number of layers in a neural network
I am going over the Udacity tutorial on Neural Networks.
Here's a diagram from the tutorial:
What makes this a '2 layer neural network'?
I was under the impression that the first layer, the actual input, should be considered a layer and included in the count.
This screenshot shows 2 matrix multiplies and 1 layer of ReLu's. To me this looks like 3 layers. There are arrows pointing from one to another, indicating they are separate. Include the input layer, and this looks like a 4 layer NN.
AI: Input layer is a layer, it's not wrong to say that.
However, when calculating the depth of a deep neural network, we only consider the layers that have tunable weights. |
H: How do multiple linear neurons together allow for nonlinearity in a neural network?
As I understand it, the point of architecting multiple layers in a neural network is so that you can have non-linearity represented in your deep network.
For example, this answer says: "To learn non-linear decision boundaries when classifying the output, multiple neurons are required."
When I watch online tutorials and whatnot, I see networks described as in the screenshot below. In cases like this, I see a series of linear classifiers:
We have a multiply, add, ReLu, multiply and add, all in series.
From studying math, I know that a composite function made out of linear functions is itself linear.
So how do you coax non-linearity out of multiple linear functions?
AI: The phrase
"To learn non-linear decision boundaries when classifying the output, multiple neurons are required."
is NOT correct. More precisely, it should be:
"To learn non-linear decision boundaries when classifying the output, we need a non-linear activation function."
To understand why, imagine you have a network with many layers and nodes (the multiple neurons in your question). If you don't have a non-linear activation function such as ReLU or sigmoid, your network is just a linear combination of biases and weights, and it won't be useful for classifying data with a non-linear decision boundary. (And if your inputs are linearly separable, you don't need a neural network in the first place...)
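A quick NumPy sketch of why stacking layers without a non-linear activation buys you nothing (illustrative only):
import numpy as np
np.random.seed(0)
x = np.random.rand(3)
W1, b1 = np.random.rand(4, 3), np.random.rand(4)
W2, b2 = np.random.rand(2, 4), np.random.rand(2)
# two "layers" with no activation function...
two_layers = W2 @ (W1 @ x + b1) + b2
# ...collapse into a single equivalent linear layer
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(two_layers, one_layer))  # True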
That's why neural networks almost always have a non-linear activation function. ReLU is the most popular, but there are other possibilities. When you chain together a dozen such non-linear outputs, as in a neural network, the network becomes able to classify a non-linear decision boundary. The more you have, the better it can perform (but it also becomes easier to overfit). |
H: Ordinal feature in decision tree
I am curious if ordinal features are treated differently from categorical features in decision tree, I am interested in both cases where target is categorical or continuous.
If there is a difference, could you anybody point to good source with explanation and any packages (R or Python) supporting it?
AI: As per my knowledge, it doesn't matter for a decision tree model whether the features are ordinal or categorical. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. Decision trees describe patterns by using a list of attributes.
For a more detailed explanation, I am providing here some links which you will find helpful for such queries.
http://www.ibm.com/support/knowledgecenter/SS3RA7_17.0.0/clementine/nodes_treebuilding.html
http://scikit-learn.org/stable/modules/tree.html
http://www.ryerson.ca/~rmichon/mkt700/SPSS/Creating%20Decision%20Trees.htm |
H: Tool to label images for classification
Can anyone recommend a tool to quickly label several hundred images as an input for classification?
I have ~500 microscopy images of cells. I would like to assign categories such as 'healthy', 'dead', 'sick' manually for a training set and save those to a csv file.
Basically, the same as described in this question, except I do not have proprietary images, so maybe that opens up additional possibilities?
AI: I just hacked together a very basic helper in python
It requires that all images are stored in a Python list allImages.
import matplotlib.pyplot as plt
category=[]
plt.ion()
for i,image in enumerate(allImages):
plt.imshow(image)
plt.pause(0.05)
category.append(raw_input('category: '))
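To also write the labels to a CSV file afterwards (a small, untested addition; allImageNames is a hypothetical parallel list holding each image's file name):
with open('labels.csv', 'w') as f:
    for name, cat in zip(allImageNames, category):
        f.write('%s,%s\n' % (name, cat))
This writes one "filename,category" row per image. |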
H: sklearn.cross_validation.cross_val_score "cv" parameter question
I was working through a tutorial on the titanic disaster from Kaggle and I'm getting different results depending on the details of how I use cross_validation.cross_val_score.
If I call it like:
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
print(scores.mean())
0.801346801347
I get a different set of scores than if I call it like:
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
print(scores.mean())
0.785634118967
These numbers are close, but different enough to be significant. As far as I understand, both code snippets are asking for a 3-fold cross validation strategy. Can anyone explain what is going on under the hood of the second example which is leading to the slightly lower score?
AI: From the sklearn docs for cross_val_score's cv argument :
For int/None inputs, if the estimator is a classifier and y is either
binary or multiclass, StratifiedKFold is used. In all other cases,
KFold is used.
I believe that in the first case, StratifiedKFold is being used as the default. In the second case, you are explicitly passing a KFold generator.
The difference between the two is also documented in the docs.
KFold divides all the samples in $k$ groups of samples, called folds (if
$k = n$, this is equivalent to the Leave One Out strategy), of equal
sizes (if possible).
[...]
StratifiedKFold is a variation of k-fold which returns
stratified folds: each set contains approximately the same
percentage of samples of each target class as the complete set.
This difference in folds is what is causing the difference in scores.
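For example, you could make the two snippets behave the same by passing a stratified splitter explicitly (a sketch using the same old-style API as the question; alg, titanic and predictors are the objects from the question, and newer scikit-learn versions moved these classes to sklearn.model_selection with n_splits instead of n_folds):
from sklearn.cross_validation import StratifiedKFold
skf = StratifiedKFold(titanic["Survived"], n_folds=3)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=skf)
print(scores.mean())  # should reproduce the score from the first snippet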
As a side note, I noticed that you are passing a random_state argument to the KFold object. However, you should note that this seed is only used if you also set KFold's shuffle parameter to True, which by default is False. |
H: Predictive Modeling of Multiple Items
I have a dataset of social media posts and want to predict the number of "thumbs up" each post will receive over time.
+---------+----------------+-----------+----------------+-----+-------+
| Post_id | Timestamp | Follows | Comments_count | ... | Likes |
+---------+----------------+-----------+----------------+-----+-------+
| 01 | 12-04-16 14:00 | 34 | 4 | | 23 |
+---------+----------------+-----------+----------------+-----+-------+
| 01 | 12-04-16 14:35 | 35 | 7 | | 34 |
+---------+----------------+-----------+----------------+-----+-------+
| | ... | | | | |
+---------+----------------+-----------+----------------+-----+-------+
| 02 | 12-04-16 14:02 | 134 | 5 | | 36 |
+---------+----------------+-----------+----------------+-----+-------+
| 02 | 12-04-16 14:45 | 136 | 23 | | 123 |
+---------+----------------+-----------+----------------+-----+-------+
The number of likes over time looks like f(x) = sqrt(x).
My approach is to create a multivariable polynomial regression for each post and somehow ensemble/average them.
Is this a good approach? Which ensemble technique is appropriate?
AI: Overall classification is generally better when the decision rules of each component classifier differ and provide complementary information.
So the question becomes: can you set up your component classifiers so that their decision rules are different and complement one another based on the feature space? e.g. Does Post 1 have a significantly different feature space than Post 2? etc. If so, the ensemble approach should be beneficial.
Which technique? If you can highly train each classifier and make it an expert in different regions of the feature space, try models:
mixture model
mixture distribution
gating subsystem
winner take all. |
H: Imputing missing values by mean by id column in R
This is fairly straightforward but I am unable to do it. My data frame has an id variable which is repeating. For the same id, I want to replace the NAs in the other continuous variables (rating and Sur) with their corresponding means. Can anyone please suggest how?
ID rating Sur
101 60 0.7687
101 78 NA
101 NA 0.765
102 60 NA
102 NA 0.654
102 75 0.435
103 NA 0.576
103 68 0.875
103 70 NA
AI: If you want the mean, you could use dplyr syntax:
df = structure(list(ID = c(101L, 101L, 101L, 102L, 102L, 102L, 103L,103L, 103L),
rating = c(60L, 78L, NA, 60L, NA, 75L, NA, 68L, 70L),
Sur = c(0.7687, NA, 0.765, NA, 0.654, 0.435, 0.576, 0.875, NA)),
.Names = c("ID", "rating", "Sur"), class = "data.frame", row.names = c(NA,-9L))
library(dplyr)
df %>%
group_by(ID) %>%
mutate(rating = ifelse(is.na(rating), mean(rating, na.rm = TRUE), rating),
Sur = ifelse(is.na(Sur), mean(Sur, na.rm = TRUE), Sur))
ID rating Sur
<int> <dbl> <dbl>
1 101 60.0 0.76870
2 101 78.0 0.76685
3 101 69.0 0.76500
4 102 60.0 0.54450
5 102 67.5 0.65400
6 102 75.0 0.43500
7 103 69.0 0.57600
8 103 68.0 0.87500
9 103 70.0 0.72550
for other imputations you can look at packages like mi, mice, caret, amelia or simputation. And I'm probably forgetting a lot of others. Personally I like the simplicity of simputation.
For example median imputation is just one line:
impute_median(df, rating + Sur ~ ID) |
H: Join two dataframes - Spark Mllib
I've two dataframes. The first has some details about all the students, and the second has only the students that have a positive grade.
How can I return only the details of the students that have a positive grade (make the join), but without using SQLContext?
I've this code:
val all_students = sc.textFile("/user/cloudera/Data");
case class Students(Customer_ID:String,Name:String,Age:String);
def MyClass(line: String) = {
val split = line.split(',');
Students(split(0),split(1),split(2))
}
val df = all_students.map(MyClass).toDF("Customer_ID","Name", "Age").select("Customer_ID","Name", "Age");
val students_positive_grande = sc.textFile("/user/cloudera/Data");
How can I make the join between these datasets? I want to join "Customer_ID" with the first column of the second dataset...
AI: Use this syntax:
val joinedDF = students_positive_grande.as('a).join(
df.as('b),
$"a.Customer_ID" === $"b.Customer_ID")
joinedDF.select($"a.Customer_ID", $"b.Customer_ID") |
H: Is there any technique, which can decide no. of bags for Bagged Logistic regression?
In the case of bagged logistic regression, people suggest that the more bags, the better the results will be.
There should be some threshold for "more". Is there any technique available which can suggest the number of bags for the type of data available?
For ex.
Details about data
No. of features - 6
No. of records - 1 M
Bag size - 20%
No. of Bags - ?
For above mentioned details, Is there any way to devise no. of bags?
AI: From the start I have to state that I am not aware of any paper regarding the number of bags for bagged logistic regression. Therefore my assertions apply generally, to any bagged ensemble.
The short answer is no, and I do not think such a technique can be constructed. There are a couple of reasons, which I will describe below.
The first reason is that it depends on the complexity of the joint probability you want to estimate. Technically, any model you build for prediction purposes aims to estimate a conditional probability of the output variable given the joint input variables. Bagging raises the question of whether the estimates from the selected samples cover that relation everywhere. To exemplify, you can have one categorical input variable with 2 levels or with 10 levels; I think more estimates are needed for the latter case.
The second reason is whether the sample itself is representative. Often in practice the sample is not purely a random sample. It contains various interdependencies which can lead to problems. For example, you may collect data from multiple countries and have more samples from some countries simply because of costs, not for reasons related to the phenomenon you study.
The third reason is that 20% bag size. A typical bootstrap sample has the same size as the original. Reducing the size is interesting because of the performance gain, or because of the artificially induced variance which makes the samples "more independent". However, estimating the effect of decreasing the sample size is again hard.
A fourth factor could be the signal-to-noise ratio. In other words, if your data is noisy enough, you need more samples to surface the interesting signal in a stable way.
What can you do?
Since finding the answer to your question is hard (it is close to knowing the true model), there are things you can do in order to find a proper number of bags. What I recommend is to repeatedly build models with an increasing number of bags and look at the error estimates. Usually, when those estimates have low variance and adding bags no longer improves the performance significantly, you can assume you need no more bags and that bagging with your chosen model is not able to capture any other structure in your data.
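A minimal scikit-learn sketch of that procedure (illustrative only, on synthetic data; BaggingClassifier with a logistic regression base estimator and 20% bootstrap samples):
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
X, y = make_classification(n_samples=10000, n_features=6, random_state=0)
for n_bags in [5, 10, 20, 50, 100]:
    model = BaggingClassifier(LogisticRegression(), n_estimators=n_bags, max_samples=0.2, random_state=0)
    print(n_bags, cross_val_score(model, X, y, cv=5).mean())
Once the cross-validated score levels off and its variance is low, adding more bags is unlikely to help. |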
H: Are ROC and AUC the only criteria for choosing a model?
If not, what are the other criteria? Please elaborate.
What should be the minimum value of AUC to select a model?
AI: Not at all, and AUC is not a particularly well-respected measure of model performance. It performs particularly poorly for rare events, since the usual criterion for "best fit" is obtained for sensitivities between .7 and .9, where the specificities will typically also be .7 to .9 even for a good model. The choice of a proper "best" criterion will very much depend on the type of outcome, the frequency of the outcomes of interest and any weighting or costs associated with them. Naive application of the AUC takes no account of the cost of false positives or false negatives.
You should read up on calibration and cross-validation. This is quite a broad topic, but the statisticians have been at it for a while and there is a lot to be learned by searching an affiliated site, Cross Validated: https://stats.stackexchange.com/search?q=best+fit+cross-validation
The minimum AUC would of course be 0.5, which is what you would get from pure chance. There is no established minimum. Even the p < 0.05 threshold is an arbitrary boundary, not at all established by a theory. You need to decide how much to value good versus bad decisions for the task at hand. |
H: Music corpus sentence level clustering
I provide an offline library of music to my users.
My goal is to understand what my users are looking for, which means translating raw user searches into music artists, songs and albums, and then adding the music to the company library.
What are the suggested clustering algorithms to group common short sentences into a single entity? Example:
Taylor Swift Shake it Off
Taylor Swift Shaek it Off
Shake it off
Twylor Swift Shake it Off
I tried this example and it works fine for a specific number of clusters K (K <= N). But since searches are unpredictable, I need to find a way to automate the number of clusters: my goal is to cluster 2 or more similar items and leave single searches alone in independent clusters, for example:
Cluster 1:
Taylor Swift Shake it Off
Taylor Swift Shaek it Off
Shake it off
Twylor Swift Shake it Off
Cluster 2
Avicci
Avicii Ibiza
Avicci Electro House
Cluster 3
Juan Gabriel
Cluster 4
Adele
Cluster 5
Britney Spears
Britney Spears VMA 2016
AI: I would suggest hierarchical clustering. It's unsupervised, and you don't need to predefine the number of clusters. How it works (for the bottom-up version) is that each sentence (or object) is initialized as its own cluster. At each iteration of the algorithm, the two clusters with the smallest inter-cluster distance are joined, all the way until there's a 'root' where everything is one cluster. The result of this is a big dendrogram, and you can cut the dendrogram at whatever point you want to define clusters. Or you can just inspect it. Here's a nice numerical example of the algorithm in action.
It should detect the same types of clusters you're already finding, and you wouldn't need to redefine a distance metric. And I anticipate you'll get the same amount of class separation you seem to already be getting; in fact, the nice thing about the dendrogram from hierarchical clustering is that it illustrates class separation nicely. It's 'hclust' in R; for Python, a rough sketch follows.
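Here is such a sketch with scipy (hypothetical; it uses a simple edit-ratio distance between the raw search strings and cuts the dendrogram at a distance threshold instead of fixing K):
import numpy as np
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
searches = ["Taylor Swift Shake it Off", "Taylor Swift Shaek it Off", "Shake it off", "Avicii Ibiza", "Adele"]
n = len(searches)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # distance = 1 - string similarity ratio
        d = 1 - SequenceMatcher(None, searches[i].lower(), searches[j].lower()).ratio()
        dist[i, j] = dist[j, i] = d
Z = linkage(squareform(dist), method='average')
labels = fcluster(Z, t=0.5, criterion='distance')
print(labels)
Singleton searches simply end up as their own clusters, which matches what you described. |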
H: Score for Ranked plots using scatter plot widget
I'm using orange 3 and when I Score plots using the scatter plot widget how do I view the numerical score associated with the ordered list of plots. Did that feature get removed with the upgrade from 2.7 to 3?
Thanks,
AI: Hm, what was the meaning of this score?
See? This is why it was removed.
(As I recall, it was the average probability assigned to the correct class by the k-nearest neighbour classifier on the projection -- or something similar. The number is useful for ranking projections, but doesn't have any meaningful interpretation or absolute scale, like >0.70 is good, >0.90 is excellent. Hence there's no point in showing it.) |
H: Do logistic regression and softmax regression do the same thing?
If they both do the same thing, then which gives us better accuracy?
AI: There is a key difference:
Softmax regression provides class probabilities for mutually exclusive classes.
Logistic regression treats class membership for each class separately. Classes do not need to be mutually exclusive.
The two are equivalent for a scenario with two mutually exclusive classes - e.g. a "positive" and "negative" class - where softmax would have two outputs summing to 1, and logistic regression would have one output giving probability of the "positive" class.
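A quick NumPy check of that equivalence (illustrative only):
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()
z = 1.7  # logit for the "positive" class
# two-class softmax over logits [z, 0] gives the same positive-class probability as the sigmoid
print(softmax(np.array([z, 0.0]))[0], sigmoid(z))  # both ~0.845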
I am not certain of the results comparing logistic regression one-vs-all (taking max output) with softmax regression on the same multi-class problem. I would expect the performance to be quite similar. Neither model copes well with non-linear relationships between input and target classes. |
H: Linear regression - LMS with gradient descent vs normal equations
I wonder when to use linear regression with stochastic or batch gradient descent to minimize the cost function vs when to use normal equations? The algorithms using gradient descent are iterative, so they might take more time to run, as opposed to the normal equation solution, which is a closed form equation. But it does use matrices to store the training data. Does this mean gradient solutions require more processing power, but using the normal equation method requires more memory because of the matrices? Which method is optimal in what scenario?
AI: Andrew Ng answers this question succinctly in his Coursera lecture about the normal equation. I will summarize.
You have m training examples and n features.
Disadvantages of gradient descent:
you need to choose the learning rate, so you may need to run the algorithm at least a few times to figure that out.
it needs many more iterations, so, that could make it slower
Compared to the normal equation:
you don't need to choose any learning rate
you don't need to iterate
Disadvantages of the normal equation:
The Normal Equation is computationally expensive when you have a very large number of features (n features), because you will ultimately need to take the inverse of an n x n matrix in order to solve for the parameters.
Compared to gradient descent:
it will be reasonably efficient and will do something acceptable when you have a very large number ( millions ) of features.
So if n is large, then use gradient descent.
If n is relatively small (on the order of a hundred to ten thousand), then use the normal equation.
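For reference, a minimal NumPy sketch of the normal-equation solution on synthetic data (illustrative only):
import numpy as np
np.random.seed(0)
m, n = 200, 3
X = np.hstack([np.ones((m, 1)), np.random.rand(m, n)])   # add an intercept column
true_theta = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_theta + 0.1 * np.random.randn(m)
# theta = (X^T X)^(-1) X^T y, solved without forming the inverse explicitly
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)
The n x n matrix being solved against (here 4 x 4) is what becomes expensive as the number of features grows. |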
H: Naive Bayes: Divide by Zero error
OK, this is my first time in ML and as a starter I am implementing Naive Bayes. I have cricket (sports) data in which I have to check whether the team will win or lose based on Toss (Won|Lost) and Bat (First|Second). Below is my code:
from sklearn.naive_bayes import GaussianNB
import numpy as np
"""
Labels : Lost, Draw, Won [-1,0,1]
Features
==========
Toss(Lost,Won) = [-1,1]
Bat(First, Second) = [-1,1]
"""
#Based on Existing Data Features are:
features = np.array([[-1, 1],[-1, 1]])
labels = np.array([0,1])
# Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(features, labels)
# Predict Output
predicted = model.predict([[1,0]])
print(predicted)
On running this I get error:
/anaconda3/anaconda/lib/python3.5/site-packages/sklearn/naive_bayes.py:393: RuntimeWarning: divide by zero encountered in log
[0]
n_ij = - 0.5 * np.sum(np.log(2. * np.pi * self.sigma_[i, :]))
/anaconda3/anaconda/lib/python3.5/site-packages/sklearn/naive_bayes.py:395: RuntimeWarning: divide by zero encountered in true_divide
(self.sigma_[i, :]), 1)
/anaconda3/anaconda/lib/python3.5/site-packages/sklearn/naive_bayes.py:395: RuntimeWarning: invalid value encountered in subtract
(self.sigma_[i, :]), 1)
Update
Code given here
AI: Although I haven't verified it, a first glance at the features and training set you used shows an obvious problem. You have just two data samples with the exact same features while giving them different labels.
Although other types of models might not break and just give both samples an equal probability of being 0 or 1, something in the Naive Bayes classifier internal calculations probably requires there to be at least some difference in differently labeled samples (not an unreasonable assumption).
I can't confirm this for myself as I'd have to go deeper into the source code to actually check the scikit implementation so perhaps someone else more familiar with it can check.
EDIT: I just tested this out by changing one line in your code to
features = np.array([[-1, 1],[-1, 0]])
It worked. |
H: Online Variational Autoencoder
when training a VAE, typically one samples from the latent distribution using the reparametrization trick using a fairly large minibatch size (>100) in the decoder/generator half of the VAE. I'm assuming this minibatch size allows the network to 'smooth' out the error and allows us to avoid having to repeatedly sample from the latent space.
However, I'm interested in online scenarios where you are training the VAE on streaming data as it arrives, so the batch size would be 1. In this case, it can take the VAE a long time to converge because the error is highly volatile.
Is there any way to avoid this issue in practice? I am unsure what will happen if I have to repeatedly sample from the latent distribution and then take the mean of those samples (or something) - aside from obvious performance concerns. The other alternative is to wait for enough samples to arrive that I can train them in a larger batch, but even in this case I wouldn't be able to wait for 100+ samples to arrive.
AI: The fact that pieces of data arrive one by one does not dictate the minibatch size. You can keep a buffer of size N (with either FIFO or any other eviction policy that suits the statistical properties needed) and sample minibatches of size M out of it every time you want to update the autoencoder.
Depending on the buffer eviction policy and the sampling strategy, this may also help to avoid local autocorrelation.
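As a rough sketch of that idea (the commented-out training call is a placeholder for however you update your VAE):
import random
from collections import deque

def stream_of_data():
    # stand-in for your real stream: yields one sample at a time
    while True:
        yield [random.random() for _ in range(8)]

class ReplayBuffer(object):
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)        # FIFO eviction once the buffer is full

    def add(self, sample):
        self.buffer.append(sample)

    def sample_minibatch(self, batch_size=32):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=500)
for i, x in enumerate(stream_of_data()):
    buf.add(x)
    if len(buf.buffer) >= 32:
        minibatch = buf.sample_minibatch(32)
        # vae.train_on_batch(minibatch)             # placeholder: plug in your VAE update here
    if i > 200:
        break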
Note: this answer was originally a comment to the OP's question. |
H: Can I use categorical data and Decision Trees to regress a continuous variable?
Is there a way to take a set of data that consists of discrete values and predict a continuous value? Take for instance data that looks like:
sample matrix of jewel data
color | size | shape
['red' ,'large','square']
['blue','small','circle']
['blue','small','square']
sample array of price labels
[9.99, 7.00, 6.37]
Can I do Decision Tree Regression on this to predict the price of a jewel with a given set of features? What if some of the data is continuous? Also is there any way I can/should pre-process the categorical data other than onehot encoding?
AI: Yes, most software implementations of trees will allow you to predict a continuous target variable with all binary predictors. This is because the predictors are only used as splits, and the prediction comes from the average value at a given terminal node. The predictions will not be truly continuous across all terminal nodes in the same way that linear regression is continuous, but in practice, this is generally not a problem. If your tree is under-fitting (not continuous enough) you can always add more terminal nodes. Also, one-hot encoding should be sufficient. |
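For example, a minimal sketch with your jewel data, using pandas.get_dummies for the one-hot encoding and a scikit-learn regression tree:
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

X_raw = pd.DataFrame({'color': ['red', 'blue', 'blue'],
                      'size':  ['large', 'small', 'small'],
                      'shape': ['square', 'circle', 'square']})
y = [9.99, 7.00, 6.37]

X = pd.get_dummies(X_raw)                 # one-hot encode the categorical columns
model = DecisionTreeRegressor().fit(X, y)
print(model.predict(X))                   # continuous price predictions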
H: Pivot DataFrame while calculating new values
I have reduced the data set to only the columns I need:
| yearID | POS | PO | A | E |
|--------|:---:|:--:|:-:|:-:|
| 1871 | SS | 0.0|3.0|1.0|
| 1871 | 2B |30.0|1.0|0.0|
| ... | .. | ...|...|...|
source: Sean Lahmans 2015 Baseball Data set Using the Fielding.csv file.
I am trying to calculate the Fielding Percentage:
values = df['E'] / (df['PO'] + df['A'] + df['E'])
There are multiple records for each 'yearID'. I am not sure if I need to transpose, apply a function, or map one. Additionally, in what order should I be moving the pieces around?
data.loc[:,('C')] = middle_infielders.PO + middle_infielders.A + data.E
data.loc[:,('FP')] = 1 - (data.E / data.C)
| yearID | POS | PO | A | E | C | FP |
|--------|:---:|:--:|:-:|:-:| | |
| 1871 | SS | 0.0|3.0|1.0| 123 | .960 |
| 1871 | 2B |30.0|1.0|0.0| 12 | .452 |
| ... | .. | ...|...|...| ... | ... |
I would like it in this form to plot a line graph:
| yearID | SS | 2B |
|--------|:----:|:----:|
| 1871 |0.3745|0.1245|
| 1872 |0.8940|0.3366|
| ... | ... | ... |
End result: One mean for each 'POS' (SS and 2B) each year.
UPDATE
Expecting pivoting to aggregate the values='FP' from the statement:
data.pivot(index='yearID', columns='POS', values='FP')
But, I get the error:
"ValueError: Index contains duplicate entries, cannot reshape."
Should I apply a Lambda to calculate the Fielding Percentage ('FP') I want in the values instead of pre-calculating it?
AI: The magic word is "pivoting":
import pandas

records = [(1871, "SS", 0.0, 3.0, 1.0), (1871, "2B", 30.0, 1.0, 0.0)]
df = pandas.DataFrame.from_records(records, columns=("yearID", "POS", "PO", "A", "E"))
df.assign(result=df.apply(lambda x: x['E'] / (x['PO'] + x['A'] + x['E']), axis=1)
  ).pivot(index='yearID', columns='POS', values='result')
Result:
| POS | 2B | SS |
|--------|----|------|
| yearID | | |
| 1871 | 0 | 0.25 |
I went to the trouble of looking at your file now that you linked to it, and the problem is that you have removed too much information, so your index is not unique. Indices always have to be unique, so either you add that information back, or you preprocess the data before pivoting such that the duplication is resolved. I chose to retain the extra columns; I don't know if this is what you want, but it should help you understand what it takes to make pivoting work:
from pandas import read_csv
field_cols = ("playerID", "yearID", "teamID", "POS", "PO", "A", "E")
df = read_csv('Fielding.csv', usecols=field_cols).dropna(
).query('not PO == A == E == 0')
df.assign(ID = df[['playerID', 'yearID', 'teamID']].apply(tuple, 1),
FP = df.apply(lambda x: 1-x['E']/(x['PO']+x['A']+x['E']), 1)
).drop(set(df.columns) - {'POS'}, 1).drop_duplicates('ID'
).pivot(index='ID', values='FP', columns='POS')
Result:
| POS | 1B | 2B | 3B | C | CF | LF | OF | P | RF | SS |
|------------------------|-----|-----|-----|-----|-----|-----|-----|----------|-----|-----|
| ID | | | | | | | | | | |
| (aardsda01, 2006, CHN) | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1.000000 | NaN | NaN |
| (aardsda01, 2007, CHA) | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.857143 | NaN | NaN |
| (aardsda01, 2008, BOS) | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1.000000 | NaN | NaN |
| (aardsda01, 2009, SEA) | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1.000000 | NaN | NaN |
| (aardsda01, 2010, SEA) | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.833333 | NaN | NaN |
... |
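If all you actually want is one mean FP per POS per year (as in your desired output table), an alternative sketch, assuming the same Fielding.csv columns, is to let pivot_table do the aggregation, since it averages duplicate (yearID, POS) pairs instead of raising the duplicate-index error:
import pandas as pd

df = pd.read_csv('Fielding.csv', usecols=["yearID", "POS", "PO", "A", "E"]).dropna()
df = df[df[['PO', 'A', 'E']].sum(axis=1) > 0]          # drop rows where FP would divide by zero
df['FP'] = 1 - df['E'] / (df['PO'] + df['A'] + df['E'])

# pivot_table aggregates duplicate (yearID, POS) pairs with the given aggfunc
fp_by_year = pd.pivot_table(df, index='yearID', columns='POS', values='FP', aggfunc='mean')
print(fp_by_year[['SS', '2B']].head())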
H: What is the difference between model hyperparameters and model parameters?
I have noticed that such terms as model hyperparameter and model parameter have been used interchangeably on the web without prior clarification. I think this is incorrect and needs explanation. Consider a machine learning model, an SVM/NN/NB based classifier or image recognizer, just anything that first springs to mind.
What are the hyperparameters and parameters of the model?
Give your examples please.
AI: Hyperparameters and parameters are often used interchangeably but there is a difference between them. You can call something a 'hyperparameter' if it cannot be learned within the estimator directly. However, 'parameters' is a more general term. When you say 'passing the parameters to the model', it generally means a combination of hyperparameters along with some other parameters that are not directly related to your estimator but are required for your model.
For example, suppose you are building a SVM classifier in sklearn:
from sklearn import svm
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC(C =0.01, kernel ='rbf', random_state=33)
clf.fit(X, y)
In the above code an instance of SVM is your estimator for your model for which the hyperparameters, in this case, are C and kernel. But your model has another parameter which is not a hyperparameter and that is random_state. |
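As an illustration in practice (this is my addition, not part of the answer above): because C and kernel cannot be learned by SVC itself, they are usually chosen by searching over candidate values, for example with GridSearchCV. A rough sketch using the built-in iris data:
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV   # in older scikit-learn versions: sklearn.grid_search

X, y = datasets.load_iris(return_X_y=True)

# C and kernel are hyperparameters: they are not fitted by SVC itself, so we search over candidates
param_grid = {'C': [0.01, 0.1, 1, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(svm.SVC(random_state=33), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)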
H: Is a neural network suitable for this application
Apologies if this question is not a suitable format. I am a novice in data science.
I have a database of species observation data consisting of ~16 million records.
Each record consists of:
latitude
longitude
date
time
species observed (that's species singular, not plural)
This data has been manually vetted by experts, so there is an additional field for each species in a record that classifies the observation as either valid or invalid (or more accurately speaking, likely correct/likely incorrect)
I am exploring the idea of training a neural network on this data to automatically classify new records as being valid or invalid ("invalid" data will be flagged for manual expert review.)
The vast majority of records are classified as 'valid', so my worry is that there isn't much information to train the model on what constitutes 'invalid'.
However, a good predictor of whether a record is valid is, informally speaking, "are there other records of this species close by (spatially and/or temporally)"
I'm not sure where to start with formulating a neural network for this problem. E.g.
Inputs: latitude, longitude, date, time, species
Output: validity
OR
Inputs: latitude, longitude, date, time
Outputs: one output for each known species indicating validity
I like the idea of this second model as I can input a time and location and get out a list of likely species.
So my concrete questions are:
Does this sounds like an application suitable for a neural network?
If so, where might I start with formulating a model for my problem? Or can someone point me in a good direction to learn more about this topic.
AI: Before deciding on the model, I would recommend to re-formulate the dataset to best suit your problem. You could approach this problem as follows:
Since the output you're trying to predict is validity of the observation, keep "validity" = True/False, or, 1/0 as the target variable.
One of the parameters is a categorical variable "species", and I'm expecting this to have a high cardinality. Since there are approximately 8.7 million species on earth, if you used this variable in a model it could possibly expand into 8.7 million individual columns (in one-hot encoded form). Even a conservative estimate of 100,000 species makes it nonviable to be used as is. So you need a way to convert this species information into fewer features.
One approach you could try out is to create geographical clusters for each species (using only valid marked records), then identify the nearest center and max/avg./quartile measures of distance from their cluster center for each species. Do this for each quarter of the year separately to account for seasonal changes. Next, add this information back to the main dataset to indicate for each record - all the geographical centers of that species cluster. In the next step, for each record find the nearest cluster center and calculate this particular observation's distance from its cluster center. Then calculate the ratio of its distance from cluster center vs. max distance and vs. avg distance from that cluster's center. Use this metric instead of the geospatial coordinates and species identifier.
Another approach could be to add additional features such as the climate of each location and average historic temperature at that location during the time of the year when the observation was taken. This is because some animals may migrate north/south based on the seasons and so if a species' location was found valid in the summer, it may be impossible to find it in the same location in winter due to it being unable to survive the cold weather. If you combine this with #3 above, it would enrich the observations significantly.
After doing this extensive hard work, you should do some exploratory analysis and plot subsets of this data to better understand it. By visualizing the data, sometimes we're able to figure out best course of action more quickly than without visualizing the data.
Next, you may explore different machine-learning algorithms to fit a model to this refined data. I would recommend trying out other algorithms such as logistic regression, SVM, ridge-regression, random forests and gradient boosting machines in addition to neural networks and then select the best performing one. Most machine learning suites/frameworks implement these, so it should not be difficult to find out how to apply these to your dataset.
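To make that comparison concrete, here is a minimal sketch (with made-up feature and label arrays standing in for your engineered features) that cross-validates a few of the suggested algorithms with an F1 score:
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# placeholder data: replace with your engineered features and valid/invalid labels
X = np.random.rand(500, 10)
y = np.random.binomial(1, 0.9, size=500)     # imbalanced: roughly 90% "valid"

models = {'logistic regression': LogisticRegression(),
          'random forest': RandomForestClassifier(n_estimators=100),
          'gradient boosting': GradientBoostingClassifier()}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring='f1')   # F1 instead of plain accuracy
    print(name, scores.mean())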
Neural networks are fine to try out, but as with all algorithms you need to be careful about the usual pitfalls such as:
Avoid over-fitting the model to training data: to avoid this use regularization and keep cross checking accuracy with an independent held-out validation set.
Use cross-validation (10-fold) and repeat several times to get good estimates of the model's performance metrics on new data.
Since the data is highly class imbalanced (many valid records but few invalid records by proportion), use a performance metric other than simple true positive accuracy. Try using F1 score, precision (of identifying invalid records), Kappa metric, etc.
Due to high class imbalance, it would help if you either over-sampled the minority class (invalid) or under-sampled the majority class (valid), or did both together. This will improve the model's ability to classify more precisely.
Adjust the hyper-parameters such as learning rate and hidden layers/no. of units for best model performance. |
H: xgboost speed difference per API
How can it be that a xgboost.cv cross-validation operation where n-folds are evaluated is quicker than a single XGBoostClassifier.fit(X,y) of the xgboost.sklearn API?
AI: I believe this is the answer: https://github.com/dmlc/xgboost/issues/651
the sklearn api uses n_estimators= 100 as default whereas xgb.train is using n_boost_rounds=10
As both refer to the same parameter this could explain the huge difference. |
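To check this yourself, you can pin the number of boosting rounds to the same value in both APIs. A rough sketch (the exact defaults depend on your xgboost version):
import numpy as np
import xgboost as xgb

X = np.random.rand(1000, 5)
y = (X[:, 0] > 0.5).astype(int)

# sklearn wrapper: n_estimators is the number of boosting rounds
clf = xgb.XGBClassifier(n_estimators=10)
clf.fit(X, y)

# native API: num_boost_round is the equivalent knob
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'objective': 'binary:logistic'}, dtrain, num_boost_round=10)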
H: What does "zero-meaned vector" mean
I'm trying to reproduce an algorithm designed in a paper. And everything is going well except one thing:
It says we considered the lengths zero-meaned accelerometer vectors and created a feature for the mean and standard deviation of this value, and I do not understand what zero-meaned vectors are.
Example dataset:
-0.6946377 12.680544 0.50395286
5.012288 11.264028 0.95342433
4.903325 10.882658 -0.08172209
-0.61291564 18.496431 3.0237172
-1.1849703 12.108489 7.205164
1.3756552 -2.4925237 -6.510526
-0.61291564 10.56939 5.706926
-0.50395286 13.947236 7.0553403
Can any body help me?
I found only this information https://www.quora.com/What-does-it-mean-when-a-vector-is-zero-mean but I'm not sure about it.
Thank you.
AI: "Zero-meaned" means the vector has been transformed so that its mean is 0.
Typically, you would do this by subtracting the mean of each column from that column. (This is for dimensional as well as algorithmic reasons; you don't want to subtract a person's weight from their height.)
It sounds like here they're actually talking about the row mean--that is, $(-0.6946377, 12.680544, 0.50395286)$ would be transformed to $(-4.857924, 8.5172577, -3.65933344, 4.1632863, 7.40047)$, where the first three are the original features minus the row mean, the fourth is the row mean, and the fifth is the standard deviation of the original features.
This would make sense if the three have the same units (if they're all accelerations at the same scale, this works), and so you want a separate measure of how much it's being accelerated at all and how much it's being accelerated in a particular direction. |
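To reproduce the worked example above in numpy (only a sketch of this interpretation):
import numpy as np

row = np.array([-0.6946377, 12.680544, 0.50395286])

row_mean = row.mean()
zero_meaned = row - row_mean                     # subtract the row mean from each component
features = np.concatenate([zero_meaned, [row_mean, row.std(ddof=1)]])
print(features)   # approximately [-4.858, 8.517, -3.659, 4.163, 7.400]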
H: How to train neural network that has different kind of layers
If we have an MLP, then we can easily compute the gradient for each parameter by computing the gradients recursively, beginning with the last layer of the network. But suppose I have a neural network that consists of different types of layers, for instance Input -> convolution layer -> ReLU -> max pooling -> fully connected layer -> softmax layer. How do I compute the gradient for each parameter?
AI: The different layers you describe can all have gradients calculated using the same back propagation equations as for a simpler MLP. It is still the same recursive process, but it is altered by the parameters of each layer in turn.
There are some details worth noting:
If you want to understand the correct formula to use, you will need to study the equations of back propagation using the chain rule (note I have picked one example worked through, there are plenty to choose from - including some notes I made myself for a now defunct software project).
When feed-forward values overlap (e.g. convolutional) or are selected (e.g. dropout, max pooling), then the combinations are usually logically simple and easy to understand:
For overlapped and combined weights, such as with convolution, the gradients simply add. When you back propagate the gradients from each feature "pixel" in a higher layer, they add into the gradients for the shared weights in the kernel, and also add into the gradients for the feature map "pixels" in the layer below (in each case, before starting the calculation you might create an all-zero matrix to sum the final gradients into).
For a selection mechanism, such as the max pooling layer, you only backprop the gradient to the selected output neuron in the previous layer. The others do not affect the output, so by definition increasing or decreasing their value has no effect; they have a gradient of 0 for the example being calculated. (A minimal sketch of this selection rule is given at the end of this answer.)
In the case of a feed-forward network, each layer's processing is independent from the next, so you only have a complex rule to follow if you have a complex layer. You can write the back propagation equations down so that they relate gradients in one layer to the already-calculated gradients in the layer above (and ultimately to the loss function evaluated in the output layer). It doesn't directly matter what the activation function was in the output layer after you backpropagate the gradient from it - at that point the only difference is numeric, the equations relating deeper layer gradients to each other do not depend on the output at all.
Finally, if you want to just use a neural network library, you don't need to worry much about this, it is usually just done for you. All the standard activation functions and layer architectures are covered by existing code. It is only when creating your own implementations from scratch, or when making use of unusual functions or structure, that you might need to go as far as deriving the values directly. |
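A minimal numpy sketch of the max-pooling selection rule mentioned above, i.e. routing each upstream gradient only to the position(s) that produced the maximum (ties send the gradient to every tied maximum here):
import numpy as np

def maxpool_forward(x, size=2):
    # x: (H, W) feature map with H and W divisible by size
    H, W = x.shape
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

def maxpool_backward(x, grad_out, size=2):
    # route each upstream gradient only to the position(s) that were the max of their window
    grad_in = np.zeros_like(x)
    H, W = x.shape
    for i in range(H // size):
        for j in range(W // size):
            window = x[i*size:(i+1)*size, j*size:(j+1)*size]
            mask = (window == window.max())
            grad_in[i*size:(i+1)*size, j*size:(j+1)*size] = mask * grad_out[i, j]
    return grad_in

x = np.arange(16, dtype=float).reshape(4, 4)
print(maxpool_forward(x))                      # [[ 5.  7.] [13. 15.]]
print(maxpool_backward(x, np.ones((2, 2))))    # ones only at the max positions, zeros elsewhere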
H: Convolution operator yields negative index of matrix
When I read about convolutional neural network from the internet, like this one, mostly I found that discrete convolution operator is defined as follow:
$$C=I*F$$
$$C(x,y)={\sum_{a=0}^{k-1} }{\sum_{b=0}^{k-1}}I(x-a,y-b)F(a,b)$$
where the size of $F$ is $k\times k$. Suppose that the size of $F$ is $3\times 3$ and the size of $I$ is $9\times 9$; then $C(1,1)=\dots +F(2,2)I(1-2,1-2)+\dots$, which does not make sense because $I$ gets a negative index. How do I compute the matrix $C$? Do we change the way the matrix is indexed?
AI: There are two approaches that can be taken:
Only use valid indices. Matrix C will then be smaller than matrix I, in your example it would be a 7x7 matrix (9 - 3 + 1 = 7). You may see this in neural network libraries as a convolution working with "valid" border.
Use synthetic values, usually just 0, for out-of-bounds indices in matrix I in order to calculate C. This produces the same size output as input, so some neural network libraries note this with "same" border.
Your formula suggests "same" border is being used*, because it would be unusual to work with a matrix C starting from an offset corner. If you do use "same" borders, it is important to normalise your input image data pixels to mean 0, otherwise the synthetic border will appear as a strong edge to the kernels.
If you are developing your own CNN code, I would suggest using "valid" border mode, because it is simpler, and a more common approach. It will work just fine for image classification tasks.
* Actually I think the formula you have posted is over-simplified or even wrong. I would expect to see the term $I(x+a,y+b)$ so that the top left index of C stays as (1,1) for a "valid" border. For a "same" border, I would expect to see a centered kernel. |
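Both border modes are easy to compare with scipy; a quick sketch, independent of any particular neural network library:
import numpy as np
from scipy.signal import convolve2d

I = np.random.rand(9, 9)
F = np.random.rand(3, 3)

C_valid = convolve2d(I, F, mode='valid')   # only in-bounds indices -> shape (7, 7)
C_same = convolve2d(I, F, mode='same')     # zero-padded border     -> shape (9, 9)
print(C_valid.shape, C_same.shape)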
H: How to replace NA values with another value in factors in R?
I have a factor variable in my data frame with values where in the original CSV "NA" was intended to mean simply "None", not missing data. Hence I want replace every value in the given column with "None" factor value. I tried this:
DF$col[is.na(DF$col)] <- "None"
but this throws the following error:
Warning message:
In `[<-.factor`(`*tmp*`, is.na(DF$col), value = c(NA, NA, :
invalid factor level, NA generated
I guess this is because originally there is no "None" factor level in the column, but is it the true reason? If so, how could I add a new "None" level to the factor?
(In case you would ask why didn't I convert NAs into "None" in the read.csv phase: in other columns NA really does mean missing data).
AI: You need to add "None" to the factor level and refactor the column DF$col. I added an example script using the iris dataset.
df <- iris
# set 20 Species to NA
set.seed(1234)
s <- sample(nrow(df), 20)
df$Species[s] <- NA
# Get levels and add "None"
levels <- levels(df$Species)
levels[length(levels) + 1] <- "None"
# refactor Species to include "None" as a factor level
# and replace NA with "None"
df$Species <- factor(df$Species, levels = levels)
df$Species[is.na(df$Species)] <- "None" |
H: Cut-Off using Frequent Pattern Mining - Spark Mllib
I am using the Association Rules algorithm using this:
http://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html
I have 83945 transactions in my dataset, and I want to filter out products that only appear X times in it. Basically, I want to set my cut-off. My question is: how can I define my cut-off, that is, the minimum number of occurrences that my products need to have?
Many thanks!
AI: As @SeanOwen pointed out, it's called support.
spark.mllib’s FP-growth implementation takes it as a hyper-parameter under minSupport.
It is the minimum support for an itemset to be identified as frequent, e.g. if an item appears in 4 out of 5 transactions, it has a support of 4/5 = 0.8.
Usage:
import org.apache.spark.mllib.fpm.FPGrowth
val transactions: RDD[Array[String]] = ???
val fpg = new FPGrowth().setMinSupport(0.2)
val model = fpg.run(transactions)
I hope this helps. |
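In other words, to turn a minimum number of occurrences into minSupport, divide by your total number of transactions (83945). A rough sketch of the same idea in the Python API (the toy transactions here are made up):
from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowth

sc = SparkContext(appName="fpgrowth-cutoff")
# made-up transactions; in your case this is the RDD built from your 83945 baskets
transactions = sc.parallelize([["a", "b"], ["a", "c"], ["a", "b", "c"], ["b"]])

min_count = 2                                           # product must appear at least this often
min_support = float(min_count) / transactions.count()   # with your data: min_count / 83945.0

model = FPGrowth.train(transactions, minSupport=min_support)
print(model.freqItemsets().collect())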
H: Approaching data science
I'm totally naive to Data Science - that is, the relatively new, somewhat hyped-field that is so popular at the moment. But I'm not naive to data ... as a scientist and researcher I've worked with all sorts in different roles in the past.
Now I'm in the lamentable position of having dug a lot of shallow holes, using different software systems and different types of data, and not really being professionally competent in anything.
My question is, if I want to "get up to speed" with data science, and perhaps leverage the different experiences I've had, how do I approach it? Ideally I'd like to make my research skills marketable - that is, become a data scientist of sorts, but with a greater emphasis on the research/reporting side.
Assuming I'm coming from scratch but have demonstrated capacity - I say that because, for example, I've used R before for some projects but after a break of a year or so I need to relearn it every time... Where do I start; how do I unify all these bits and pieces?
And what claim can I make to work in this field? (I've worked on all sorts of data, from gigabytes of climate data and earth science, to health registers to longitudinal surveys ... but none of it under the moniker of a data scientist).
Specifically, what tool(s) do I learn and what theory do I need to grasp? (Keeping in mind that all my coding and statistical competencies are mostly self-taught.)
Unlike this (fascinating) question, I don't have a business background and don't necessarily want to move towards the business analyst path - I still want to play with physical (earth science) or social data. Neither do I want to work on the data management side so much - I want databases and coding to be a means, not an end. And finally, I'm not inclined much towards the theory and mathematics. Perhaps the best way to summarise my inclination and position is that I don't want to become a data science expert, but want to be able to become an expert in given subjects through data science.
My inclination is perhaps to concentrate on something like Python, and use it to exploit R and other functionality?
Tools I've used in the past (in order of exposure) -
SAS (for statistics and research, not the warehouse side)
VBA/VB6/Excel/Access (data manipulation, reporting)
GIS (ArcGIS for analysis/research, not database management)
R (stats...)
Some HTML/JS
Some Python
One thing I find is that my existing competencies don't provide me with a useful tool for bringing together different data and getting it into the state I want for analysis (ETL I suppose?), hence the inclination to re-learn Python.
Thanks for your thoughts!
AI: The Udacity Data Analyst Nanodegree gives a very gentle introduction using Python and R for mostly exploratory data analysis, which I think is what you are looking for. (It's not tailored towards business analysis. The courses are free and you can just take the ones you think are interesting. In your case, I would skip the ones about data visualization with javascript and data wrangling with MongoDB.) They provide a bunch of resources to go from there. Of course there are plenty of other online courses.
If you are more into books you could check out Tukey's Exploratory Data Analysis. A classic that is still highly relevant.
In terms of tools, the most commonly used are R and Python and I would focus on them for now. |
H: Handling a feature containing multiple values
I have a dataset in following format:
Movie ID | Actors | Director | language | ReleaseYear | Genre
1 | Anil Kapoor;Manisha Koirala;Jackie Shroff;Anupam Kher;Danny Denzongpa;Pran | Vidhu Vinod Chopra | hin | 1994 | Drama;Romance;Patriotic
As you can see, the columns Actors and Genre have multiple values. I need to perform cluster analysis on this dataset. I don't know what the best way to deal with such data would be. I am thinking of two possible solutions (I don't know if either is a correct approach for this type of problem):
Split a column, say 'Actors', into multiple columns, e.g. Actor1, Actor2, ..., and then perform the cluster analysis.
Row split i.e. convert a single row of data into multiple rows each row for one value of columns like 'Actors' and 'Genre'.
Please suggest me the best way to deal with such data for cluster analysis.
AI: You need to rethink your approach. Rather than "what can I code to make things work", you need to ask "what is the right thing to do, and how can I implement this".
Clustering is hard (easy to run, hard to get good results).
Clustering non-continuous data is even harder.
The reason people create a lot of false results here is that you have infinitely many ways to weight different values and attributes, and essentially you can get almost any result you want just by playing around with parameters. Don't let yourself be lured by the common hack of "one hot encoding" everything; it just shows that people don't (want to) know what they are doing.
To get a reliable result, you need to be very clear on your assumptions, such as "I assume all actors are equally representative, and the overlap as measured by Jaccard is a good indicator of similarity" (I would disagree here, actors should be weighted).
Then you need to do the same thing with genres. Here it is even more questionable how to compute the similarity of genres.
And after that, you need to combine all these different similarities into one. That is probably the hardest step, and will involve even more weighting parameters.
All in all, I'd say: whatever you do, the clustering is not statistically sound. There are too many parameters chosen without even a good reason (let alone a proof) that this way is better than another way of choosing them.
In particular, you will never know if there is an even better clustering with different parameters. |
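For what it's worth, the Jaccard overlap mentioned above is simple to compute, but note that it treats every actor as equally important, which is exactly the kind of implicit weighting decision discussed here. A quick sketch:
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / float(len(a | b)) if (a | b) else 0.0

movie1 = ["Anil Kapoor", "Manisha Koirala", "Jackie Shroff"]
movie2 = ["Anil Kapoor", "Anupam Kher"]
print(jaccard(movie1, movie2))   # 0.25: one shared actor out of four distinct actors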
H: Clustering tendency using hopkin's statistics
As per the references
1. http://www.sthda.com/english/wiki/assessing-clustering-tendency-a-vital-issue-unsupervised-machine-learning
2. http://www.listendata.com/2016/01/cluster-analysis-with-r.html
According to these references, a Hopkins statistic value close to '0' also indicates clusterable data. Is this correct? Or should the value always be close to '1' for the data to be clusterable?
AI: I don't think the Hopkins statistic is useful for this purpose at all.
It is essentially a test for a uniform distribution.
But not having a uniform distribution does not mean the data is suitable for cluster analysis.
For example a single Gaussian distribution (unimodal!) will score high on this test, but the data doesn't have multiple clusters but all points are from the same distribution. |
H: Postitive event but unsure when it occurred in time
Morning,
I have a lot of data of which I am positive an event (target data / the event I want to predict going forward) occurred within a two week time frame, however I am unsure when it happened within this time frame.
I can get daily or more frequent feature data, but the target data only appears in 2 week gaps or in some cases 4 week gaps.
Currently I use an average of any feature during the timeframe or gather the feature data on the same day as when the target data is available but is there a more appropriate / better way?
I am using the data for machine learning purposes.
AI: This is referred to as "interval censoring": that is to say, you know the event occurred within an interval, but not exactly where in that interval.
I don't think there has been a lot of attention in the ML community for interval censored data (or any, really). However, there has been a reasonable amount in the statistical community, myself included. As such, I've written the R package icenReg for interval censored regression models. Regression models certainly can be used as ML tool, although these don't yet have all the bells and whistles that are more typical for ML problems (i.e. no penalized regression a la elastic net etc.).
However, icenReg at least contains a tool for generalized cross validation, although it is hidden from the public. It can be extracted by icenReg:::icenReg_cv. |
H: Difference of Activation Functions in Neural Networks in general
I have studied the activation function types for neural networks. The functions themselves are quite straightforward, but the application difference is not entirely clear.
It's reasonable that one differentiates between logical and linear type functions depending on the desired binary/continuous output, but what is the advantage of the sigmoid function over a simple linear one?
ReLU is especially difficult for me to understand, for instance: what is the point of using a function that is linear for positive inputs but "flat" for negative ones? What is the intuition behind this? Or is it just a simple trial-and-error thing, nothing more?
AI: A similar question was asked on CV: Comprehensive list of activation functions in neural networks with pros/cons.
I copy below one of the answers:
One such a list, though not much exhaustive:
http://cs231n.github.io/neural-networks-1/
Commonly used activation functions
Every activation function (or non-linearity) takes a single number
and performs a certain fixed mathematical operation on it. There are
several activation functions you may encounter in practice:
Left: Sigmoid non-linearity
squashes real numbers to range between [0,1] Right: The tanh
non-linearity squashes real numbers to range between [-1,1].
Sigmoid. The sigmoid non-linearity has the mathematical form $\sigma(x) = 1 / (1 + e^{-x})$ and is shown in the image above on
the left. As alluded to in the previous section, it takes a
real-valued number and "squashes" it into range between 0 and 1. In
particular, large negative numbers become 0 and large positive numbers
become 1. The sigmoid function has seen frequent use historically
since it has a nice interpretation as the firing rate of a neuron:
from not firing at all (0) to fully-saturated firing at an assumed
maximum frequency (1). In practice, the sigmoid non-linearity has
recently fallen out of favor and it is rarely ever used. It has two
major drawbacks:
Sigmoids saturate and kill gradients. A very undesirable property of the sigmoid neuron is that when the neuron's activation
saturates at either tail of 0 or 1, the gradient at these regions is
almost zero. Recall that during backpropagation, this (local) gradient
will be multiplied to the gradient of this gate's output for the whole
objective. Therefore, if the local gradient is very small, it will
effectively "kill" the gradient and almost no signal will flow through
the neuron to its weights and recursively to its data. Additionally,
one must pay extra caution when initializing the weights of sigmoid
neurons to prevent saturation. For example, if the initial weights are
too large then most neurons would become saturated and the network
will barely learn.
Sigmoid outputs are not zero-centered. This is undesirable since neurons in later layers of processing in a Neural Network (more on
this soon) would be receiving data that is not zero-centered. This has
implications on the dynamics during gradient descent, because if the
data coming into a neuron is always positive (e.g. $x > 0$
elementwise in $f = w^Tx + b$)), then the gradient on the weights
$w$ will during backpropagation become either all be positive, or
all negative (depending on the gradient of the whole expression
$f$). This could introduce undesirable zig-zagging dynamics in the
gradient updates for the weights. However, notice that once these
gradients are added up across a batch of data the final update for the
weights can have variable signs, somewhat mitigating this issue.
Therefore, this is an inconvenience but it has less severe
consequences compared to the saturated activation problem above.
Tanh. The tanh non-linearity is shown on the image above on the right. It squashes a real-valued number to the range [-1, 1]. Like the
sigmoid neuron, its activations saturate, but unlike the sigmoid
neuron its output is zero-centered. Therefore, in practice the tanh
non-linearity is always preferred to the sigmoid nonlinearity. Also
note that the tanh neuron is simply a scaled sigmoid neuron, in
particular the following holds: $ \tanh(x) = 2 \sigma(2x) -1 $.
Left: Rectified Linear
Unit (ReLU) activation function, which is zero when x < 0 and then
linear with slope 1 when x > 0. Right: A plot from Krizhevsky
et al. (pdf) paper indicating the 6x improvement in convergence
with the ReLU unit compared to the tanh unit.
ReLU. The Rectified Linear Unit has become very popular in the last few years. It computes the function $f(x) = \max(0, x)$. In
other words, the activation is simply thresholded at zero (see image
above on the left). There are several pros and cons to using the
ReLUs:
(+) It was found to greatly accelerate (e.g. a factor of 6 in Krizhevsky et
al.) the
convergence of stochastic gradient descent compared to the
sigmoid/tanh functions. It is argued that this is due to its linear,
non-saturating form.
(+) Compared to tanh/sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply
thresholding a matrix of activations at zero.
(-) Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron
could cause the weights to update in such a way that the neuron will
never activate on any datapoint again. If this happens, then the
gradient flowing through the unit will forever be zero from that point
on. That is, the ReLU units can irreversibly die during training since
they can get knocked off the data manifold. For example, you may find
that as much as 40% of your network can be "dead" (i.e. neurons that
never activate across the entire training dataset) if the learning
rate is set too high. With a proper setting of the learning rate this
is less frequently an issue.
Leaky ReLU. Leaky ReLUs are one attempt to fix the "dying ReLU" problem. Instead of the function being zero when x < 0, a leaky ReLU will instead have a small negative slope (of 0.01, or so). That is, the function computes $f(x) = \mathbb{1}(x < 0) (\alpha x) + \mathbb{1}(x>=0) (x) $ where $\alpha$ is a small constant. Some people report success with this form of activation function, but the results are not always consistent. The slope in the negative region can also be made into a parameter of each neuron, as seen in PReLU neurons, introduced in Delving Deep into Rectifiers, by Kaiming He et al., 2015. However, the consistency of the benefit across tasks is presently unclear.
Maxout. Other types of units have been proposed that do not have the functional form $f(w^Tx + b)$ where a non-linearity is applied
on the dot product between the weights and the data. One relatively
popular choice is the Maxout neuron (introduced recently by
Goodfellow et
al.) that
generalizes the ReLU and its leaky version. The Maxout neuron computes
the function $\max(w_1^Tx+b_1, w_2^Tx + b_2)$. Notice that both
ReLU and Leaky ReLU are a special case of this form (for example, for
ReLU we have $w_1, b_1 = 0$). The Maxout neuron therefore enjoys
all the benefits of a ReLU unit (linear regime of operation, no
saturation) and does not have its drawbacks (dying ReLU). However,
unlike the ReLU neurons it doubles the number of parameters for every
single neuron, leading to a high total number of parameters.
This concludes our discussion of the most common types of neurons and
their activation functions. As a last comment, it is very rare to mix
and match different types of neurons in the same network, even though
there is no fundamental problem with doing so.
TLDR: "What neuron type should I use?" Use the ReLU non-linearity, be careful with your learning rates and possibly
monitor the fraction of "dead" units in a network. If this concerns
you, give Leaky ReLU or Maxout a try. Never use sigmoid. Try tanh, but
expect it to work worse than ReLU/Maxout.
License:
The MIT License (MIT)
Copyright (c) 2015 Andrej Karpathy
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.* |
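For reference (this is my addition, separate from the quoted notes above): the activation functions discussed take only a few lines of numpy, which can help when experimenting with their shapes:
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)             # zero for negative inputs, identity for positive ones

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)  # small slope instead of a hard zero

x = np.linspace(-5, 5, 11)
print(relu(x))
print(leaky_relu(x))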
H: Is a correlation matrix meaningful for a binary classification task?
When examining my dataset with a binary target (y) variable I wonder if a correlation matrix is useful to determine predictive power of each variable.
My predictors (X) contain some numeric and some factor variables.
AI: Well, correlation, namely the Pearson coefficient, is built for continuous data. Thus, when applied to binary/categorical data, you will obtain a measure of a relationship that is not necessarily correct or precise.
There are quite a few answers on stats exchange covering this topic - this or this for example. |
H: Quick start using python and sklearn kmeans?
I started tinkering with sklearn kmeans last night out of curiosity with the goal of clustering users into groups to see what kind of user groups I can derive. I am lost when it comes to plotting the results as most examples have nice (x,y) coordinates. For example, the iris data set has pedal width and pedal length. From my experimentation, I don't seem to have anything that displays very nice. Is this assumption correct / does anyone have tips, pointers, learning resources that they could offer?
import pandas as pd
import pprint
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
I normalized the data as it had a wide variance...again, not sure if this is a correct assumption to make
X = np.array(normalize(data, axis=0, copy=False))
kmeans = KMeans(n_clusters=3)
pred = kmeans.fit_predict(X)
labels = kmeans.labels_
cent = kmeans.cluster_centers_
plt.scatter(X[:, [4]], X[:, [6]])
plt.scatter(cent[:, [4]], cent[:, [6]], marker="x", s=150, linewidths=5, zorder=10)
plt.ylabel('Count')
plt.xlabel('Department')
plt.show()
Any pointers are appreciated, I will include sample data below. Thanks!
Sample Data:
emp_type,title,work_country,director_userid,dept_name,business_unit_name,UserCNT
0,9,7,29,20,2,2
0,13,7,8,14,6,5
0,4,3,56,29,8,3
0,15,3,36,32,2,3
0,4,3,32,16,2,0
0,4,1,40,13,6,0
0,4,3,62,12,4,1
0,13,7,61,5,13,4
2,1,3,70,35,15,2
0,4,3,64,4,13,0
2,1,3,43,43,2,0
0,13,7,50,17,16,0
2,1,3,31,26,2,1
2,1,3,65,58,17,0
0,4,3,57,63,12,0
2,1,6,7,45,18,2
2,1,3,43,42,2,0
1,1,7,65,58,17,0
2,1,3,32,16,2,0
2,1,3,29,20,2,0
0,4,0,50,17,16,2
0,5,3,20,23,9,0
0,9,3,32,16,2,2
0,4,3,5,51,12,0
2,1,7,51,53,7,0
0,13,7,37,55,12,0
2,1,4,19,62,13,0
Example clustering using entire data set:
AI: This has been answered in other places e.g. here.
You could run Principal Component Analysis (or other dimensionality reduction techniques) and plot the clusters for the first two principal components (a minimal sketch of this is given at the end of this answer).
You could plot the results for two variables at a time.
You could encode third or fourth variables using standard visualization techniques like color coding, symbols or facetting.
There are ways to visualize the quality of the fit, e.g. silhouette analysis or the elbow test for determining the number of clusters.
Have a quick look at this link |
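Following the first suggestion, a minimal sketch (with a random stand-in for your feature matrix) that projects the clustered data onto two principal components for plotting:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 7)                     # stand-in for your 7-column user data

X_std = StandardScaler().fit_transform(X)      # scale before k-means and PCA
labels = KMeans(n_clusters=3).fit_predict(X_std)

X_2d = PCA(n_components=2).fit_transform(X_std)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()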
H: Benchmarks based on neural networks libraries to compare the performance between different GPUs
I am looking for benchmarks based on neural networks libraries (Theano/TensorFlow/Torch/Caffe/…) to compare the performance between different GPUs.
I am aware of:
https://github.com/jcjohnson/cnn-benchmarks (CNN in Torch)
https://github.com/jcjohnson/neural-style#speed (CNN in Torch)
https://github.com/glample/rnn-benchmarks (vanilla RNNs and LSTM in Theano, TensorFlow, and Torch)
What are some other benchmark codes?
AI: I found a few more benchmarks with the help of reddit:
https://github.com/soumith/convnet-benchmarks: feedforward and convolution layers in many different libraries. (overview)
https://github.com/DeepMark/deepmark : LSTM and CNN, torch only so far.
https://github.com/baidu-research/DeepBench : "DeepBench attempts to answer the question, "Which hardware provides the best performance on the basic operations used for training deep neural networks?" (another page on it: https://svail.github.io/DeepBench) |
H: Sentence similarity
Every week I get a group of sentences (~1000) each of them may be similar. Example:
metallica hard wired
metallica hardwire
metallica hardwired
metallica hard wire
hardwired metallica
hardwire metallica
hardwire
I'm using Cosine similarity to find common documents and group them.
I have realized that similar docs:
metallica hardwire and metallica hardwired
return ~0.5 similarity.
hardwired metallica and metallica hardwire
return ~0.433
Other docs with more words return higher values. (I'm using cosine_similarity from sklearn.metrics.pairwise.)
I iterate over each document and get the similarity among all docs, after that I extract the highest values. (cosine similarity > 0.55)
So far it is working fine, but there are cases in which I can't find similar sentences unless I reduce my coefficient; doing so may associate non-related items with each other.
I want to know what is the best technique to group common sentences from a list of sentences. Not sure if that would be semantic similarity.
AI: Cosine is only good for long documents. example document and exampl docunemt have 0 cosine similarity. Similarly, hard wired and hardwired are completely dissimilar for cosine. Because it is based on counting the number of identical words.
If you want letter-based similarity, consider Levenshtein distance. But you will likely need to go to something more complex and n-gram based to also detect wordA wordB and wordB wordA. |
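One way to get that n-gram behaviour with the tools you are already using is character n-gram TF-IDF; this is only a sketch, and the n-gram range should be tuned to your data:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["metallica hard wired", "metallica hardwired", "hardwired metallica", "hardwire"]

# character n-grams let "hardwired" and "hard wired" share most of their features
vec = TfidfVectorizer(analyzer='char_wb', ngram_range=(3, 5))
X = vec.fit_transform(docs)
print(cosine_similarity(X).round(2))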
H: How are deep-learning NNs different now (2016) from the ones I studied just 4 years ago (2012)?
It is said in Wikipedia and deeplearning4j that Deep-learning NN (DLNN) are NN that have >1 hidden layer.
These kind of NN were standard at university for me, while DLNN are very hyped right now. Been there, done that - what's the big deal?
I heard also that stacked NN are considered deep-learning. How is deep-learning really defined?
My background of NN is mostly from university, not from jobs:
studied applications of NN in industry
had about 5 courses on artif. intel. & mach. learn. - though maybe 2 of them on NN
used NN for small, simple project on image recognition - used 3 layer feed-forward NN
did not do real research (as in doctor thesis) on them
AI: You are right in that the basic concept of a deep NN hasn't changed since 2012. But there have been a variety of improvements to the ways in which deep NNs are trained that have made them qualitatively more powerful. There are also a wider variety of architectures available today. I've listed some developments since 2012, grouped by training improvements and architecture improvements:
Improvements to training deep NNs
Hardware: The most obvious change is just the inexorable progression of Moore's law. There is more computing power available today. Cloud computing also makes it easy for people to train large NNs without needing to buy a huge rig.
Software: The open source software for deep learning is really enormously improved from 2012. Back in 2012 there was Theano, maybe Caffe as well. I'm sure there are some others, too. But today we also have TensorFlow, Torch, Paddle, and CNTK, all of which are supported by large tech companies. This is closely related to the hardware bullet point since many of these platforms make it easy to train on GPUs, which drastically speeds up training time.
Activation functions: The use of ReLU activation functions is probably more widespread these days, which makes training very deep networks easier. On the research side, there is a wider variety of activation functions being studied, including leaky ReLU, parametric ReLU, and maxout units.
Optimization algorithms: There are more optimization algorithms around today. Adagrad and Adadelta just been introduced in 2011 and 2012, respectively. But we now also have the Adam optimizer and it's become a very popular choice.
Dropout: In the past few years, dropout has become a standard tool for regularization when training neural networks. Dropout is a computationally inexpensive form of ensembling for NNs. In general, a set of models trained on random samples of the dataset will outperform a single model trained on the entire dataset. This is difficult to do explicitly for NNs because they are so expensive to train. But a similar effect can be approximated just by randomly "turning off" neurons on each step. Different subgraphs in the NN end up getting trained on different data sets, and thereby learn different things. Like ensembling, this tends to make the overall NN more robust to overfitting. Dropout is a simple technique that seems to improve performance in almost every case, so it's now used de rigueur. (A minimal sketch of this random masking is given at the end of this list.)
Batch normalization: It's been known for a while that NNs train best on data that is normalized --- i.e., there is zero mean and unit variance. In a very deep network, as the data passes through each layer, the inputs will be transformed and will generally drift to a distribution that lacks this nice, normalized property. This makes learning in these deeper layers more difficult because, from its perspective, its inputs do not have zero mean and unit variance. The mean could be very large and the variance could be very small. Batch normalization addresses this by transforming the inputs to a layer to have zero mean and unit variance. This seems to be enormously effective in training very deep NNs.
Theory: Up until very recently, it was thought that the reason deep NNs are hard to train is that the optimization algorithms get stuck in local minima and have trouble getting out and finding global minima. In the last four years there have been a number of studies that seem to indicate that this intuition was wrong (e.g., Goodfellow et al. 2014). In the very high dimensional parameter space of a deep NN, local minima tend not to be that much worse than global minima. The problem is actually that when training, the NN can find itself on a long, wide plateau. Furthermore, these plateaus can end abruptly in a steep cliff. If the NN takes small steps, it takes a very long time to learn. But if the steps are too large, it meets a huge gradient when it runs into the cliff, which undoes all the earlier work. (This can be avoided with gradient clipping, another post-2012 innovation.)
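A minimal sketch of the dropout masking described above (written as inverted dropout, which is one common convention: survivors are rescaled during training so nothing needs rescaling at test time):
import numpy as np

def dropout_forward(activations, p_drop=0.5, training=True):
    # randomly zero units during training; scale the survivors so the expected value is unchanged
    if not training:
        return activations
    mask = (np.random.rand(*activations.shape) > p_drop) / (1.0 - p_drop)
    return activations * mask

h = np.random.rand(4, 8)     # stand-in for a hidden layer's activations
print(dropout_forward(h, p_drop=0.5))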
New architectures
Residual networks: Researchers have been able to train incredibly deep networks (more than 1000 layers!) using residual networks. The idea here is that each layer receives not only the output from the previous layer, but also the original input as well. If trained properly, this encourages each layer to learn something different from the previous layers, so that each additional layer adds information.
Wide and deep networks: Wide, shallow networks have a tendency to simply memorize the mapping between their inputs and their outputs. Deep networks generalize much better. Usually you want good generalization, but there are some situations, like recommendation systems, in which simple memorization without generalization is important, too. In these cases you want to provide good, substantive solutions when a user makes a general query, but very precise solutions when the user makes a very specific query. Wide and deep networks are able to fulfill this task nicely.
Neural turing machine: A shortcoming of traditional recurrent NNs (whether they be the standard RNN or something more sophisticated like an LSTM) is that their memory is somewhat "intuitive". They manage to remember past inputs by saving the hidden layer activations they produce into the future. However, sometimes it makes more sense to explicitly store some data. (This might be the difference between writing a phone number down on a piece of paper vs. remembering that the number had around 7 digits and there were a couple of 3s in there and maybe a dash somewhere in the middle.) The neural Turing machine is a way to try to address this issue. The idea is that the network can learn to explicitly commit certain facts to a memory bank. This is not straightforward to do because backprop algorithms require differentiable functions, but committing a datum to a memory address is an inherently discrete operation. Consequently, neural Turing machines get around this by committing a little bit of data to a distribution of different memory addresses. These architectures don't seem to work super well yet, but the idea is very important. Some variant of these will probably become widespread in the future.
Generative adversarial networks: GANs are a very exciting idea that seems to be seeing a lot of practical use already. The idea here is to train two NNs simultaneously: one that tries to generate samples from the underlying probability distribution (a generator), and one that tries to distinguish between real data points and the fake data points generated by the generator (a discriminator). So, for example, if your dataset is a collection of pictures of bedrooms, the generator will try to make its own pictures of bedrooms, and the discriminator will try to figure out if it's looking at real pictures of bedrooms or fake pictures of bedrooms. In the end, you have two very useful NNs: one that is really good at classifying images as being bedrooms or not bedrooms, and one that is really good at generating realistic images of bedrooms. |
H: Pearson correlation method using absolute values and relative values
I have a dataset with election results and crime rates per city. For each variable I have an absolute value (i.e. Total votes, Total crimes) and a relative value (i.e. Percentage shares of votes).
I want to calculate the correlation coefficient for some variables, but in the process I had a question about what value I need to use, if relative values or absolute values.
First I calculated z score for absolute values and then I calculated the correlation using excel. I also used pandas.DataFrame.corr() and pearsonr from scipy.stats.stats in python, in order to corroborate results.
For example, if I use absolute values I will get a positive correlation between candidate 1 and candidate 2.
x = df['Abs Cand 1'].tolist()
y = df['Abs Cand 2'].tolist()
print (pearsonr(x,y))
(0.95209664861187004, 0.0)
However, if I use relative ones I will get a negative correlation:
x = df['Rel Cand 1'].tolist()
y = df['Rel Cand 2'].tolist()
print (pearsonr(x,y))
(-0.99704737036262991, 0.0)
I was confused when I saw both results, and now I need some orientation to understand those differences.
Thanks in advance!
AI: In general, the correlation coefficient is "invariant to separate changes in location and scale in the two variables". In particular, you can mix relative with absolute values.
However, that only works if you scale the variables globally. You can't scale every individual data point (here on a city level). If this was a county wide election, you could scale the city values by the county population.
But it sounds like your crime rates are on a per city level. In this case you should scale the votes on a city level as well to make them comparable. This will change the correlation coefficient and give a different result than with absolute values. I think using percentages is more intuitive in your case. |
H: How to interpret silouette coefficient?
I'm trying to determine number of clusters for k-means using sklearn.metrics.silhouette_score. I have computed it for range(2,50) clusters. How to interpret this? What number of clusters should I choose?
AI: They are all bad. A good Silhouette would be 0.7
Try other clustering algorithms instead. |
H: Visualizing items frequently purchased together
I have a dataset in following structure inserted in a CSV file:
Banana Water Rice
Rice Water
Bread Banana Juice
Each row indicates a collection of items that were purchased together. For example, the first row denotes that the items Banana, Water, and Rice were purchased together.
I want to create a visualization like the following:
This is basically a grid chart but I need some tool (maybe Python or R) that can read the input structure and produce a chart like the above as output.
AI: I think what you probably want is a discrete version of a heat map. For example, see below. The red colors indicate the most commonly purchased together, while green cells are never purchased together.
This is actually fairly easy to put together with Pandas DataFrames and matplotlib.
import numpy as np
from pandas import DataFrame
import matplotlib
matplotlib.use('agg') # Write figure to disk instead of displaying (for Windows Subsystem for Linux)
import matplotlib.pyplot as plt
####
# Get data into a data frame
####
data = [
['Banana', 'Water', 'Rice'],
['Rice', 'Water'],
['Bread', 'Banana', 'Juice'],
]
# Convert the input into a 2D dictionary
freqMap = {}
for line in data:
for item in line:
if not item in freqMap:
freqMap[item] = {}
for other_item in line:
if not other_item in freqMap:
freqMap[other_item] = {}
freqMap[item][other_item] = freqMap[item].get(other_item, 0) + 1
freqMap[other_item][item] = freqMap[other_item].get(item, 0) + 1
df = DataFrame(freqMap).T.fillna(0)
print (df)
#####
# Create the plot
#####
plt.pcolormesh(df, edgecolors='black')
plt.yticks(np.arange(0.5, len(df.index), 1), df.index)
plt.xticks(np.arange(0.5, len(df.columns), 1), df.columns)
plt.savefig('plot.png') |
H: How to get the probability of belonging to clusters for k-means?
I need to get the probability for each point in my data set. The idea is to compute a distance matrix (the first column contains distances to the first cluster, the second column contains distances to the second cluster, etc.). The closest point has probability = 1, the most distant has probability = 0.
The problem is that a linear function (like MinMaxScaler) produces output where almost all points have almost the same probability.
How do I choose a nonlinearity for this task? How do I automate this process in Python? For example, for the closest point p=1, for the most distant point that still belongs to the cluster p=0.5, and for the most distant point overall p is almost 0.
Or you can propose another methods for computing this probability.
AI: Let us briefly talk about a probabilistic generalisation of k-means: the Gaussian Mixture Model (GMM).
In k-means, you carry out the following procedure:
- specify k centroids, initialising their coordinates randomly
- calculate the distance of each data point to each centroid
- assign each data point to its nearest centroid
- update the coordinates of the centroid to the mean of all points assigned to it
- iterate until convergence.
In a GMM, you carry out the following procedure:
- specify k multivariate Gaussians (termed components), initialising their mean and variance randomly
- calculate the probability of each data point being produced by each component (sometimes termed the responsibility each component takes for the data point)
- assign each data point to the component it belongs to with the highest probability
- update the mean and variance of the component to the mean and variance of all data points assigned to it
- iterate until convergence
You may notice the similarity between these two procedures. In fact, k-means is a GMM with fixed-variance components. Under a GMM, the probabilities (I think) you're looking for are the responsibilities each component takes for each data point.
There is a scikit-learn implementation of GMM available if you wanted to look into that, but I'm guessing you just want a quick way to amend your existing code, in which case, if you're happy to assume your clusters are fixed-variance Gaussians, you could transform your distance matrix element-wise as $y = e^{-x}$ (giving you an exponential fall-off), and then calculating the softmax over your columns (normalising your distribution so $P(Y=1) + P(Y=2) + ... + P(Y=k) = 1$).
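For illustration, a minimal numpy sketch of that transform (the distance matrix here is a made-up placeholder):
import numpy as np
# D: distances of each point to each of the k cluster centres, shape (n_samples, k)
D = np.array([[0.2, 1.5, 3.0],
              [2.1, 0.3, 1.8]])
# Exponential fall-off followed by normalisation over the k clusters,
# i.e. a softmax over -D, so each row sums to 1
logits = -D
logits -= logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(logits)
probs /= probs.sum(axis=1, keepdims=True)
print(probs)  # one probability distribution over the clusters per data point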
It's worth pointing out that the assumption that your clusters are fixed-variance Gaussians isn't necessarily valid. If your dimensions have wildly different scales, this may produce strange results, as dimensions with smaller-magnitude units will appear more "probable". Standardising your data before running your clustering procedure should remedy this.
H: Period Predictive Model
I am not sure how to formulate this problem clearly into a machine learning task yet. So hope you guys can chime in and give me some help.
Problem : To predict whether someone will pick up their phone during office hours in week n+2 by looking at customer's behaviour in week n.
Data : I have calling records for about 3 months, which is aggregated on customer level. The various attributes include, num of calls, duration of calls, time of calls, amount of data traffic. But of course, these main attributes are further split into at about 20 attributes.
Current Approach (Very Manual) : I look at data at week n+2 and get the group of guys who picked up the phone during office hours (duration of calls > 5s and time of call). This is the target group, T.
I look at data at week n and manually try all possible combinations of the attributes to get as close to T as possible. But trying manually seems tiring after some time. The baseline is of course using the same conditions as at week n+2. But the whole idea will be to increase this number.
Question: Is there any way I can transform this dataset so that I would be able to accomplish this as a machine learning task?
AI: You can try to build some kind of "sliding window table". Let's say you have following attributes:
call duration (x1)
time of call (x2)
picked the phone (x3)
Let's further assume that you have data from the past 3 weeks, which allows us to set up the following table. The rows contain individual calls, the columns the attributes. The suffix _1 tells us the time. So for example x1_1 is the call duration in the previous week, x2_2 is the time of call two weeks before, etc.
client | x1_1 | x2_1 | x3_1 | x1_2 | x2_2 | x3_2 | ... | x3_3
You can train your model using historical data, where x3_3 is last week. Then, you will feed the model with current data (_3 is the current week*) and try to predict x3_3 - whether the customer will pick up the phone.
*I am assuming that you know who you are going to call, hence you have the _3 attributes, but you don't yet know whether they will respond or not.
The aim is to give the model the opportunity to learn the time dependencies - maybe time of call week before together with call duration strongly correlate with future chance of picking up the phone again.
What can also help is to perform feature selection. The assumption is that some attributes are strongly correlated with others, whereas some are not. You can simply use x1_1, x2_1 and see the relationship to x3_1. But I'd suggest recalculating these often, as the preferences might change over time.
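A rough pandas sketch of building such a sliding-window table (the column and variable names are only illustrative):
import pandas as pd
# Long format: one row per client per week (toy data)
df = pd.DataFrame({
    'client': [1, 1, 1, 2, 2, 2],
    'week':   [1, 2, 3, 1, 2, 3],
    'x1': [120, 95, 130, 40, 55, 60],   # call duration
    'x2': [9, 14, 10, 18, 19, 17],      # time of call (hour)
    'x3': [1, 0, 1, 0, 0, 1],           # picked up the phone
})
# Wide format: one row per client, columns x1_1, x1_2, ..., x3_3
wide = df.set_index(['client', 'week'])[['x1', 'x2', 'x3']].unstack('week')
wide.columns = [f'{var}_{week}' for var, week in wide.columns]
# x3_3 (picked up the phone in the most recent week) becomes the target
y = wide.pop('x3_3')
X = wide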
H: Why we use information gain over accuracy as splitting criterion in decision tree?
In decision tree classifiers most of the algorithms use information gain as the splitting criterion. We select the feature with maximum information gain to split on.
I think that using accuracy instead of information gain is a simpler approach. Is there any scenario where accuracy doesn't work and information gain does?
Can anyone explain what are the advantages of using Information gain over accuracy as splitting criterion?
AI: Decision trees are generally prone to over-fitting, and accuracy doesn't generalize well to unseen data. One advantage of information gain is that -- due to the factor $-p*log(p)$ in the entropy definition -- leaves with a small number of instances are assigned less weight ($lim_{p \rightarrow 0^{+} } p*log(p) = 0$) and it favors dividing data into bigger but homogeneous groups. This approach is usually more stable and also chooses the most impactful features close to the root of the tree.
EDIT: Accuracy is usually problematic with unbalanced data. Consider this toy example:
Weather Wind Outcome
Sunny Weak YES
Sunny Weak YES
Rainy Weak YES
Cloudy Medium YES
Rainy Medium NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Rainy Strong NO
Weather and Wind both produce only one incorrect label and hence have the same accuracy of 16/17. However, given this data, we would assume that weak winds (covering 75% of the YES outcomes, 3 of 4) are more predictive for a positive outcome than sunny weather (50% of the YES outcomes, 2 of 4). That is, wind teaches us more about both outcomes. Since there are only a few data points for positive outcomes we favor wind over weather, because wind is more predictive on the smaller label set, which we would hope gives us a rule that is more robust to new data.
The entropy of the outcome is $-4/17*log_2(4/17)-13/17*log_2(13/17) \approx 0.79$. The conditional entropy of the outcome given weather is $14/17*(-1/14*log_2(1/14)-13/14*log_2(13/14)) \approx 0.31$, which leads to an information gain of about $0.48$. Similarly, wind leaves a conditional entropy of only $2/17 \approx 0.12$ and therefore gives a higher information gain of about $0.67$.
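A small Python sketch that reproduces these numbers for the toy table (outputs are approximate):
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, outcome):
    n = len(outcome)
    groups = {}
    for f, o in zip(feature, outcome):
        groups.setdefault(f, []).append(o)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(outcome) - conditional

weather = ['Sunny'] * 2 + ['Rainy'] + ['Cloudy'] + ['Rainy'] * 13
wind    = ['Weak'] * 3 + ['Medium'] * 2 + ['Strong'] * 12
outcome = ['YES'] * 4 + ['NO'] * 13

print(entropy(outcome))                    # ~0.79
print(information_gain(weather, outcome))  # ~0.48
print(information_gain(wind, outcome))     # ~0.67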
H: Regression in Predicting Tenancy Lengths
I'm currently working on a project involving the prediction of tenancy lengths. I've so far managed to get to a point where I've processed the data and pruned my Random Forest model (via sklearn in Python) to the following accuracy levels (in days):
Train MAE: 131
Train R^2: 0.906
Test MAE: 259 (using cross-validation)
Test R^2: 0.651
While the model is decent for the industry, there's more performance to squeeze out of it. It currently overestimates results and has poor accuracy on the test data imo.
I'd like to further develop a Neural Network approach, as my initial implementation of an MLP Regressor seems promising:
Train MAE: 301
Train R^2: 0.582
Test MAE: 338 (using cross-validation)
Test R^2: 0.522
My question is how can I improve my results for the prediction (using Python) other than using GridSearch to play around with the MLPRegression function in sklearn? Are there any other models that could be useful in this situation? (I have tried also decision trees, gradient boosting)
In case it is relevant, my dataset contains ~5000 entries since 2008 onwards of individual tenancies, containing: tenancy dates, rent, repair costs, property information and replacements, client information, etc., currently at 41 variables.
AI: You may have done this already, but if your target values are positive integers, it could be worth transforming your output layer in such a way that constrains it to take an appropriate range of values.
Tenancy length values presumably have some recognisable distribution that might fruitfully be modelled by a known statistical process (Poisson process, survival analysis, etc.), and so instead of using your NN to predict the tenancy length directly, you could use the NN to parametrise a relevant probability distribution, and make a point estimate of that distribution in your output layer.
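As a rough sketch of that second idea (not a tuned model; the layer sizes are arbitrary and the 41 inputs only mirror the variable count mentioned in the question), a Keras network with a positive output and a Poisson loss could look like this:
from tensorflow import keras
from tensorflow.keras import layers

n_features = 41  # number of input variables

model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(n_features,)),
    layers.Dense(32, activation='relu'),
    # 'exponential' keeps the predicted Poisson rate strictly positive
    layers.Dense(1, activation='exponential'),
])
# The Poisson loss treats the output as the rate of a Poisson distribution,
# a natural fit for a non-negative, count-like target such as days
model.compile(optimizer='adam', loss='poisson')
# model.fit(X_train, y_train, epochs=50, validation_split=0.2)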
H: How evaluate text clustering?
What metrics can be used for evaluating text clustering models? I used tf-idf + k-means, tf-idf + hierarchical clustering, doc2vec + k-means (metric is cosine similarity), doc2vec + hierarchical clustering (metric is cosine similarity).
How to decide which model is the best?
AI: Check out this paper. It also addresses the question of how many clusters to use. The R package mclust has a routine which will try different cluster models/numbers of clusters and plot the Bayesian information criterion (BIC). (great vignette here). It's a general method, meaning something you can do without being domain/data specific. (It's always good to be domain-specific if you have the time and data.)
The chart is from the vignette by Luca Scrucca. mclust tries 14 different clustering models (represented by the different symbols), increasing the number of clusters from 1 to some default value. It finds the BIC each time. Highest BIC is usually the best choice. You could apply this methodology to your own stable of clustering algorithms.
H: What does Theano dimension ordering mean?
In this code, line 13 is commented as "Theano dimension ordering mode". What does this mean?
AI: Let's say you're working with 128x128 pixel RGB images (that's 128x128 pixels with 3 color channels).
When you put such an image into a numpy array you can either store it with a shape of (128, 128, 3) or with a shape of (3, 128, 128).
The dimension ordering specifies if the color channel comes first (as with theano / "th") or if it comes last (as with tensorflow / "tf").
The code you've posted contains the following line:
inputs = Input((1, img_rows, img_cols))
It specifies the shape of the input as (1, img_rows, img_cols) - i.e. there's one color channel (gray scale) and it comes first. That's why it requires Theano's dimension ordering "th".
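A quick numpy illustration of the two orderings (the array contents are random placeholders):
import numpy as np

img_tf = np.random.rand(128, 128, 3)  # "tf" ordering: channels last
img_th = img_tf.transpose(2, 0, 1)    # "th" ordering: channels first
print(img_tf.shape)  # (128, 128, 3)
print(img_th.shape)  # (3, 128, 128)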
H: Difference between MDS and other manifold learning algorithms
From sklearn docs:
Note that the purpose of the MDS is to find a low-dimensional representation of the data (here 2D) in which the distances respect well the distances in the original high-dimensional space, unlike other manifold-learning algorithms, it does not seeks an isotropic representation of the data in the low-dimensional space.
Can someone elaborate, in layman's terms, what the distinction is?
AI: The images in the link you provide, of the severed sphere and its lower-dimensional representations, go some way towards explaining the difference.
The severed sphere is a set of points in a three-dimensional space, but we want a two-dimensional representation of it. The objective of manifold-learning is (shockingly) to find a manifold: a subset of that three-dimensional space which (a) closely fits all the points that make up the severed sphere, and (b) can be described with a two-dimensional coordinate system.
If you look at some of the other lower-dimensional representations of the severed sphere, it's like they're taking it and flattening it out into a rectangle so it'll fit in two dimensions. It's taking the severed sphere and figuring out a new coordinate system that maps as closely as possible onto all the points that make up the severed sphere.
The MDS lower-dimensional representation, though, is more like a shadow that the severed sphere casts on a wall. Rather than finding a new coordinate system that closely fits the sphere, it's just "forgetting" whichever of the dimensions it thinks it can most afford to lose while maintaining the same distance to and from all the points.
A good analogy would be maps of the earth. A good map of the earth makes a new coordinate system that fits a sphere onto a 2D surface. To do this it has to distort the relative distances between places, but you end up with effective 2D coordinates that relate well to places on the globe.
Instead of doing this, you could just take two photos of the earth from above the north and south pole and glue them back to back. You'd still have a 2D representation of the earth, but it doesn't work so well as a coordinate system.
This isn't to say that MDS is "bad". It's just doing something different. You probably wouldn't use MDS for dimensionality reduction prior to carrying out some sort of statistical procedure, but if you're trying to produce a graphic that gives some idea of how close multidimensional points are to one another, it might be a good choice.
H: LSTM input in Keras
I am confused about the input vector in LSTM model, the data I am using is the text data, e.g. 1,000 sentences. I have two questions about the LSTM input layer:
1. If I tokenize those sentences into vectors (we could call them sentence vectors), is there a way in Keras to make sentence vectors given a document? Or should this be done at word level?
2. The second question is about the 3D tensor shape in LSTM. I have 1,000 sentences (samples), and time_step would be 1 if I want the LSTM to read one document at each time step; is that correct? The last one is the input dimension: is this the word-vector dimension (100) for each sentence, or the number of words observed at each time step (10)?
Thus, should the LSTM tensor be (1000, 1, 10) or (1000, 1, 100)?
AI: For an LSTM, the documents should be represented at word level. Hence, sentence vectors are not that useful for a document, but word vectors are. You can use an embedding layer if you want to do it that way, though.
In the 3D tensor, the first dimension is the number of sentences, so 1000 is correct. The second one is the number of time steps, which is the number of words in each sentence. The third one is the word-vector dimension. Hence, taking your numerical example, the input dimension of the LSTM will be (1000, 10, 100).
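A minimal Keras sketch with exactly that (10, 100) input shape per sample (the layer sizes and the binary target are just illustrative):
from tensorflow import keras
from tensorflow.keras import layers

# 1,000 documents, 10 words each, 100-dimensional word vectors
model = keras.Sequential([
    layers.LSTM(64, input_shape=(10, 100)),   # (time_steps, word_vector_dim)
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# X has shape (1000, 10, 100), y has shape (1000,)
# model.fit(X, y, epochs=5)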
H: Is it possible using tensorflow to create a neural network that maps a certain input to a certain output?
I am currently playing with TensorFlow, but can't seem to figure out whether it is useful for my problem.
I need to create a neural network, that is capable of mapping input to output.
The way things are progressing now, I haven't found a single example in which this has been done; all the types of problem TensorFlow seems to solve are classification tasks, not this kind of mapping problem. Is TensorFlow able to do this? Do I have to use a different neural network framework? And if it is possible, could somebody show some code (not the MNIST example)?
My task:
I am currently trying to make a neural network that takes in samples of audio files and generates MFCC features from the samples. The MFCC features can be calculated manually, which I have done, so I know what output I seek. An MFCC feature is a feature vector of different real-valued numbers. It cannot be classified as class A or class B; doing so would greatly reduce the accuracy of the output, as you would be fitting the output to predetermined "bins"...
AI: From your description, it seems that you are facing a regression problem, because you want your output to be certain values. This is different from classification problems, that have as output the probability of the input belonging to certain class.
The key to using neural networks for regression is that the output layer should have no activation, that is, it should be a linear layer. As pointed out by @JanvanderVegt, a common loss function for regression problems is the mean square error (MSE) between the current output and the MFCC features you computed.
If you google "tensorflow regression example" you can find dozens of complete examples, like this or this.
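As a minimal hedged sketch (the window length and MFCC dimension are assumptions, not values from your data), a regression network in Keras with a linear output layer and MSE loss looks like this:
from tensorflow import keras
from tensorflow.keras import layers

n_samples_in = 512  # raw audio samples per window (illustrative)
n_mfcc = 13         # MFCC coefficients per window (illustrative)

model = keras.Sequential([
    layers.Dense(256, activation='relu', input_shape=(n_samples_in,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(n_mfcc),   # linear output layer: no activation
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X_windows, Y_mfcc, epochs=20)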
H: Supervised learning with tagged images
I am new to ML and looking to learn with some project. I have a medical imaging dataset where an image (image is a time series of an object so multiple images) has been looked at by radiologist and they have graded it on a scale of 1-5 for some pathology.
Now, I would like to basically use this to predict the pathology on new images. I am guessing there are multiple approaches one could take to do so. Could someone point me to some methods I could try (simple to more advanced) as I would like to also learn about them.
Another issue is that different images are of different sizes. Is this usually a problem for these approaches? I could try something where I can register them so that they are of the same size.
AI: I have worked on similar projects (using medical images such as PET to predict outcomes). A method being used more and more for predicting cancer treatment outcomes is texture analysis: https://www.ncbi.nlm.nih.gov/pubmed/21321270
Another method of texture analysis uses wavelet transforms: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3505569/
After deriving the texture features, you could then use those in a ML model for making predictions.
A benefit to texture analysis is that it is mostly independent of the size of the target. It is however, more dependent on resolution. If your images are produced on different scanners, you may need to resample in order to normalize the images.
There are fewer studies about deep learning for medical imaging, but it definitely has some exciting potential. The question is whether or not a CNN can pick up subtle differences. From what I've seen (I'll look for a reference), CNN hasn't outperformed any other method yet.
Maybe add a few more details so we can get a better idea of what your objective is?
H: Data that's not missing is called...?
Is there a standard term for data that are not missing? I.e. is it called non-missing, present, or something else?
AI: Depends on context, but I would probably go for "observed" (vs. "unobserved"). A suitable direct antonym of "missing" might be "extant".
H: Multitask multivariate regression?
I'm trying to solve a multivariate regression problem similar to PLS regression.
The problem can be described as a connectivity analysis problem where we have two regions with unknown unidirectional connections(many-to-many) and given a set of input region patterns and output region patterns, we want to infer the underlying connections.
Mathematically, the problem can be formulated as below
$Y = BX \qquad$ where $Y \in \mathbb{R}_+^{M\times N}$, $X \in \mathbb{R}_+^{L\times N}$, and $B \in \mathbb{R}_+^{M\times L}$ with $L > M >> N$
The column of $X$ and $Y$, will be a vectorized version of 2D image.
Although this would result in a highly underdetermined system, I do have some prior knowledge about the patterns in the input/output regions that I can incorporate in the model.
Is there a model/idea that I can use in situation like this?
AI: You may want to model your problem using bayesian regression, which may allow you to introduce your prior knowledge in the form of priors (a priori distributions) of the model parameters. They would also allow you to model latent variables that govern the dynamics of the interactions (and impose priors on them as well).
The specific approach may be based on sampling (e.g. Markov Chain Monte Carlo) or optimization (e.g. Variational Bayes).
One of the most popular Bayesian frameworks is Stan, which has bindings to R (rstan) and Python (pystan). In R there are other alternatives such as BUGS and JAGS. In the Python realm, other options are PyMC (which is also pretty popular) or Edward.
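For instance, a rough PyMC3 sketch of this problem with non-negativity enforced through half-normal priors on B (the shapes, prior scales and toy data are assumptions, not a tested model):
import numpy as np
import pymc3 as pm

# X: (L, N) input patterns, Y: (M, N) output patterns, B: (M, L) connections
L, M, N = 50, 20, 5
X = np.abs(np.random.randn(L, N))
Y = np.abs(np.random.randn(M, N))

with pm.Model():
    # Half-normal priors keep the connection weights non-negative;
    # replace them with something more informative if you have prior knowledge
    B = pm.HalfNormal('B', sigma=1.0, shape=(M, L))
    noise = pm.HalfNormal('noise', sigma=1.0)
    mu = pm.math.dot(B, X)
    pm.Normal('Y_obs', mu=mu, sigma=noise, observed=Y)
    trace = pm.sample(1000, tune=1000)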
H: Create data visualization for unstructured data - Basket Market Analysis
I have this dataset (just a sample):
product1,product2,product3
product1,product4
product1,product2
product4,product3,product1,product2
The products are grouped by transaction. I want to create some data visualization using this dataset, but I don't know of any tool or any type of visualization that allows creating a chart with this structure...
Anyone can suggest me an option?
Thanks!
I feel desperate because I cannot find anything that suits this data structure.
AI: Considering data stored in CSV format like below, without headers, you can use the R code below to plot a simple bar chart. It will plot the number of occurrences of each unique transaction.
product1,product2,product3
product1,product4
product1,product2
product4,product3,product1,product2
product1,product2,product3
product1,product4
product1,product4
R Code -
transactions <- read.csv("filepath/transactions.csv", header = FALSE)
transactions$V1
plot(transactions$V1)
Bar Chart-
H: How to transform an imbalanced attribute to make it more suitable for linear regression?
I'm new to data science but trying to get better
Here I have an attribute, and I'm plotting its histogram:
From what I know so far, such a distribution is imbalanced (skewed), and my goal is to even things out a little bit, right?
Again from what I know, I have to transform this attribute so it is more suitable for linear regression?
Is it obvious (to someone more experienced than me) which kind of transformation is applicable in this case?
Note that this is an attribute and not my target, this is not what I am trying to predict. This is one of the attributes to be used for predicting
AI: You have multiple options and you may choose the best by seeing the performance.
The first option is to use an educated guess. For positive-only values the log-transform is a hot candidate (maybe correct for small values here, e.g. by adding a small offset, to avoid exceedingly large negative transformed values).
The log-transform is natural if percentage increases have a particular real-world meaning. This is often the case for financial data. Why is log a hot candidate? You probably know that when there are additive real-world effects, the normal distribution often appears. Now, when you have multiplicative effects, you get the log-normal distribution (https://en.wikipedia.org/wiki/Log-normal_distribution).
Other common transformations are power transforms, where you take some power of the values. I don't think there are many more which are very common. Theoretically, your perfect transform would make the noise on the linear regression Gaussian, but no-one can tell what that would be and most likely reality isn't perfectly linear anyway.
A transform is more interesting when the transformed values follow a Gaussian distribution. But that is just a guess and in the end only final performance evaluation can tell more.
For a second option, be aware that you can force the transformed values to be any distribution you want. For example, if you take ranks of the values you get a uniform transformed distribution. You could even force it to be a Gaussian by a suitable mapping. However, in your case, this will lose the interesting bump on the right.
I think these are the most common options. In data science, nothing is ever obvious and most of the time you can only decide by performance evaluation (cross-validation with the whole model).
Conclusion:
Set up a performance test (cross-validation; not on final test set though if you like a fair, final evaluation) and try all of the following
Try untransformed. It might already be what has most information.
Try log-transform (while adding a small offset if you have some very small values)
Try power-transform if you like
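A short sketch of the log- and power-transforms from the list above (toy data, and the Yeo-Johnson choice is just one reasonable default):
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Toy skewed, positive-valued attribute
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 1))

# Log-transform with a +1 offset to protect against very small values
x_log = np.log1p(x)

# Power transform (Yeo-Johnson also handles zeros and negative values)
x_power = PowerTransformer(method='yeo-johnson').fit_transform(x)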
H: Feature extraction for sentiment analysis
I am working on a group project for my capstone course and we have been tasked with creating a sentiment analysis tool with Python business logic and (L/W)AMP everything else.
We have good feedback for every part of our project plan except for feature extraction. One of our advisors insists that we should have ~15 different kinds of features.
Currently we only use unigrams and are having a hard time finding others that are practical to implement with our small set of data (~50 items) and within our time limit (~2 weeks to fully implement).
What are feature extraction techniques that are useful for sentiment analysis and work on smaller datasets? They should be able to be implemented quickly or already exist in a Python library.
AI: Have a look at these papers-
Semantic Similarity - 1
Semantic Similarity - 2
They came up with a good solution. Have a look maybe you will find something of your use in it. Plus try exploring these too
vaderSentiment 0.5
(VADER Sentiment Analysis. VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media, and works well on texts from other domains.)
Demo- Sentiment Analysis with Python
(This is a demonstration of sentiment analysis using a NLTK 2.0.4 powered text classification process. It can tell you whether it thinks the text you enter below expresses positive sentiment, negative sentiment, or if it's neutral. Using hierarchical classification, neutrality is determined first, and sentiment polarity is determined second, but only if the text is not neutral.)
Hope it helps! :)
H: AttributeError: type object 'DataFrame' has no attribute 'read_csv'
I'm trying to create some charts using Python. I've this dataset in a CSV file:
Banana Water Rice
Rice Water
Bread Banana Juice
And I've this code:
import numpy as np
from pandas import DataFrame
import matplotlib
matplotlib.use('agg') # Write figure to disk instead of displaying (for Windows Subsystem for Linux)
import matplotlib.pyplot as plt
data = DataFrame.read_csv("test.csv", index_col=1, skiprows=1).T.to_dict()
But when I execute the code I'm getting the following error:
AttributeError: type object 'DataFrame' has no attribute 'read_csv'
How can I solve this problem?
Many thanks!
AI: read_csv() is not available on DataFrame; it is a top-level pandas function. To read CSVs using pandas:
import pandas as pd
data = pd.read_csv("file_name")
If you check type(data), it will be a pandas DataFrame.
H: Is there a Hyperopt equivalent for optimization in R?
I've used Hyperopt in Python, but I'm looking for a package with similar capabilities in R. Does a package like this exist?
AI: Give DEoptim() a try. This package might solve your problem. For the documentation and more information, visit:
Documentation
CRAN-R : DEoptim()
Hope it helps!
H: How to interpolate and check correlation of two time series with differing cardinality
I want to check how correlated two time series are, but they don't have the same cardinality. They have different numbers of data points because the timestamps at which the data are collected differ. The available libraries I have found require that the cardinality be the same. Therefore, I would like to ask if there is a library, an algorithm I can implement myself, or some advice you can give me to approach the problem. Thank you.
AI: I understand that your time series are unevenly spaced. In this case, why not simply use a library like traces and transform them into evenly spaced time series?
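If you prefer to stay within pandas instead (this is an alternative to traces, not part of it), you can resample both series onto a common grid, interpolate, and then correlate; the 5-minute grid here is an assumption:
import pandas as pd

# Two unevenly spaced series with datetime indices (toy data)
s1 = pd.Series([1.0, 2.0, 4.0],
               index=pd.to_datetime(['2017-01-01 00:00', '2017-01-01 00:07', '2017-01-01 00:20']))
s2 = pd.Series([0.5, 1.5, 3.5],
               index=pd.to_datetime(['2017-01-01 00:03', '2017-01-01 00:11', '2017-01-01 00:18']))

# Resample onto a common 5-minute grid and interpolate the gaps
g1 = s1.resample('5min').mean().interpolate(method='time')
g2 = s2.resample('5min').mean().interpolate(method='time')

# Align the two grids and compute the correlation
g1, g2 = g1.align(g2, join='inner')
print(g1.corr(g2))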
H: How to get the inertia at the begining when using sklearn.cluster.KMeans and MiniBatchKMeans
When I cluster a lot of data, it is hard to run KMeans and wait for it to stop once the centers no longer change, so I have to stop KMeans when it reaches the maximum number of iterations.
Here comes the problem: how can I evaluate the impact KMeans had on my data? I know I can get the inertia_ after KMeans has fitted my data to see the sum of distances of samples to their closest cluster center. But how can I get the inertia_ before KMeans fitting, so that I can compare it with the inertia_ after fitting and see the improvement KMeans made on my data?
AI: It sounds like you are grappling with large data set sizes, for which I first suggest switching to mini-batch k-means. Mini-batch scales better so will be less frustrating.
Regarding a priori estimates of the inertia_, I suggest using a sample data set to approximate the inertia_ with appropriate margins of error. But mini-batch may just preclude your need for an a priori inertia_.
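A rough sketch of both suggestions: a cheap "before clustering" baseline (the single-cluster inertia, i.e. the total sum of squared distances to the overall mean) plus mini-batch k-means (the data and k here are placeholders):
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(100000, 20)  # placeholder for your data

# Baseline "before" inertia: every point assigned to a single centre (the mean)
baseline_inertia = ((X - X.mean(axis=0)) ** 2).sum()

km = MiniBatchKMeans(n_clusters=10, max_iter=100, random_state=0).fit(X)
print(baseline_inertia, km.inertia_, km.inertia_ / baseline_inertia)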
Hope this helps!
H: Word2vec - KeyError
I trained a word2vec model from the gensim package. Even though I pass a word to the model.train() method, it doesn't appear in the model's vocab.
Can such a case arise?
Why does it happen so?
AI: The reason behind this is that the default value for min_count is 5 in word2vec. Since my words occur very infrequently (fewer than 5 times), they are not being added to the vocabulary.
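For example, a minimal gensim sketch; lowering min_count lets rare words into the vocabulary:
from gensim.models import Word2Vec

sentences = [['the', 'cat', 'sat'], ['a', 'rare', 'word']]
# min_count=1 keeps even words that appear only once
model = Word2Vec(sentences, min_count=1)
print(model.wv['rare'])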
H: High dimensional space is dense or sparse?
I read some blog articles recently. One mentions that you cannot imagine high-dimensional space as 2D or 3D, as the distance between any 2 points in high-dimensional space tends to be similar, which suggests "dense". However, the t-SNE paper says high-dimensional space tends to be sparse, such that you have to employ special dimensionality reduction techniques to visualize it in 2D or 3D space in a meaningful way. So how to reconcile these 2 different views?
AI: Data in a high dimensional space tends to be sparser than in lower dimensions. There are various ways to quantify this, but one way of thinking that may help your intuition is to start by imagining points spread uniformly at random in a three dimensional box. Now flatten the box into a square, pushing two opposite sides together so all the points lie on a single plane. Do you see that the average distance between a point and its nearest neighbor is now smaller? Now flatten the square into a line segment. Do you see that the average distance between a point and its neighbors is now smaller still?
There is no conflict between this and saying that the average distance between any 2 points in the high dimensional space tends to be similar. The latter statement doesn't imply density. The real number line is dense (it has no gaps), and yet the distance between points ranges from 0 to infinity. The point is that the higher the dimension of your space, the more likely the points are to lie near the edges of the space rather than the center.
Again, consider the dimensions we can actually see. Consider a circle with radius=1, inscribed in a square with sides of length=2. The circle occupies $\pi / 4$ of the square's area, about 78.5%. Now consider a sphere of radius=1 inscribed in a cube with sides of length=2. The sphere occupies $\pi / 6$ of the cube's volume, about 52.4%. As you see in this example, the odds of a randomly placed point lying close to the center (with close to the center in this case meaning within the circle or the sphere) are lower as the dimension increases. Points are more likely to be in the corners. This is why in high dimensions the distance between the points tends to be similar - because randomly placed points tend to be close to the edges of the region.
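A small Monte Carlo sketch of that last point: the fraction of uniformly placed points that land inside the inscribed unit ball matches the 78.5% and 52.4% figures above and collapses quickly in higher dimensions:
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 3, 5, 10):
    pts = rng.uniform(-1, 1, size=(100000, d))          # points in the cube
    inside = (np.linalg.norm(pts, axis=1) <= 1).mean()  # fraction in the ball
    print(d, round(inside, 3))
# roughly 0.785, 0.524, 0.164, 0.002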
H: How can I detect events on fuel tank
Hi guys.
I'm not a data analyst and I need some direction with this. I'm looking for a way to identify the fuel events during a range of time (could be a day or a month, etc.). If the consumption looked like the picture above, it would be easy.
The problem is that the data I have is like this picture.
I need to detect the refueling events and the possible thefts, considering that I have false data that could go up or down. I can see that those abrupt changes are fake because the signal always comes back to the original value; the main problem is that I don't know how long they are going to stay fake. It could be just one bad point or n bad points.
So, how can I detect these events (refueling and possible theft) without counting the errors as events?
What algorithms or formulas can I apply here?
AI: Welcome to SO! It looks like you have a time series problem. Typically the first step when dealing with time series is to consider the difference. Let us define $f(t)$ as the fuel level at time $t$. You would want to calculate $diff_{\text{fuel}}(t) = f(t) - f(t - 1)$.
After this step you will likely see that the spikes you identify as bad data are outliers. You could detect these by, for example, looking at all the data below and above your 2.5th or 5th percentile. Typically this requires some careful analysis work to ensure that you do not delete too much. Once you have identified the best workable percentile, you could Winsorize your data.
Lastly, you would look at the data points in the bottom of your resulting distribution. These will likely be the points you identify as theft.
I hope this helps.
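A rough pandas/scipy sketch of those steps (the column name, the toy data and the percentile limits are assumptions that need domain tuning):
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize

# 'fuel' sampled at regular intervals (placeholder data)
df = pd.DataFrame({'fuel': [80, 79.5, 79, 40, 79, 78.5, 90, 78, 60, 59.5]})

# Step 1: difference the series
df['diff_fuel'] = df['fuel'].diff().fillna(0)

# Step 2: clip the extreme spikes (likely sensor errors) at the 5th/95th percentiles
df['diff_clipped'] = np.asarray(winsorize(df['diff_fuel'].to_numpy(),
                                          limits=[0.05, 0.05]))

# Step 3: the most negative remaining differences are theft candidates,
# the largest positive ones are refuelling candidates
theft_candidates = df[df['diff_clipped'] < df['diff_clipped'].quantile(0.05)]
refuel_candidates = df[df['diff_clipped'] > df['diff_clipped'].quantile(0.95)]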
H: Multivariate linear regression accounting for threshold / data cleaning
I am trying to make a linear regression model for the sale price of a house based on many variables (based on the data from this Kaggle challenge https://www.kaggle.com/c/house-prices-advanced-regression-techniques)
The distribution above is the 2nd-floor size in square feet and the y-axis is the sale price. It shows a clear linearity, except for the fact that homes without a 2nd floor clearly are not for sale for 0 dollars.
I have many variables like this that have some threshold either upper or lower that has a large distribution of response for a single value. If I simply exclude these values then the intercept for this curve will be through the origin. Should I let that be the case and assume that the $0 price tag will be corrected by the other variables in my regression?
What is the best way to treat/fit data such as this?
Thanks!
AI: You could try including "Does not have a 2nd floor" as a separate categorical variable, which you could encode as 1 or 0 in your linear model. Something like:
price ~ no_second_floor + area_of_second_floor + [Other Variables]
Even without the other variables, it's already going to be able to fit the data better than price ~ area_of_second_floor because, instead of forcing the price of the single-floor houses to $0, it would be able to fit it to the average price of all the single-floor houses in your data set - which is the best you can do for those houses until you add other variables besides area_of_second_floor.
(Actually, it wouldn't be forced to $0 if you were to include a constant term in your model, but the constant term that best lets you fit the linear portion of the data still probably isn't what you want for the single-floor houses, some of which - as you see - are quite expensive).
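A hedged statsmodels sketch of exactly that model (the toy numbers are invented; with the Kaggle data you would use the real columns):
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    'area_of_second_floor': [0, 0, 600, 850, 0, 1200],
    'price': [120000, 150000, 210000, 260000, 180000, 320000],
})
# Binary indicator for "does not have a 2nd floor"
df['no_second_floor'] = (df['area_of_second_floor'] == 0).astype(int)

fit = smf.ols('price ~ no_second_floor + area_of_second_floor', data=df).fit()
print(fit.params)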
H: How to construct a Decision tree in R where the training data has a frequency associated with each class
Essentially, this is my data set
X Class Sex Age Survived Freq
1 1st Male Child No 0
2 2nd Male Child No 0
3 3rd Male Child No 35
4 Crew Male Child No 0
5 1st Female Child No 0
6 2nd Female Child No 0
7 3rd Female Child No 17
8 Crew Female Child No 0
9 1st Male Adult No 118
10 2nd Male Adult No 154
11 3rd Male Adult No 387
12 Crew Male Adult No 670
13 1st Female Adult No 4
14 2nd Female Adult No 13
15 3rd Female Adult No 89
16 Crew Female Adult No 3
17 1st Male Child Yes 5
18 2nd Male Child Yes 11
19 3rd Male Child Yes 13
20 Crew Male Child Yes 0
21 1st Female Child Yes 1
22 2nd Female Child Yes 13
23 3rd Female Child Yes 14
24 Crew Female Child Yes 0
25 1st Male Adult Yes 57
26 2nd Male Adult Yes 14
27 3rd Male Adult Yes 75
28 Crew Male Adult Yes 192
29 1st Female Adult Yes 140
30 2nd Female Adult Yes 80
31 3rd Female Adult Yes 76
32 Crew Female Adult Yes 20
If there were no frequency but only single valued data, then I know how to invoke rpart to construct a decision tree for me. How to do it considering the frequency of each class?
I am a beginner in R. Thanks
AI: To use the column named Freq as your case weights, you can call rpart with the argument weights=Freq.
H: sklearn random forest and fitting with continuous features
Does anyone know how the python sklearn random forest implementation handles continuous variables in the fitting process? I'm curious to know if it does any sort of binning (and if so, how it does the binning), or if a continuous variable is just treated as a categorical variable? I'm hoping it's not the latter...thanks! Also, I'd be open to using some R implementation if anyone knows about that.
AI: To understand how a random forest treats continuous data it is imperative to understand how a random forest works. At the base of the random forest algorithm lies the construction of decision trees. The default in sklearn is to split a node based on the Gini impurity (see the sklearn documentation); this type of tree algorithm is referred to as CART. You can change the criterion to entropy, which uses the information-gain measure associated with ID3 and C4.5, although scikit-learn still grows CART-style binary trees. Without going too deep into the maths, the tree algorithm seeks the cutoff that leads to the lowest impurity, so continuous variables are handled by searching over candidate thresholds on their sorted values rather than by binning them or treating them as categorical.
The random forest algorithm will build a large number of deep trees on your data and average over all the trained trees to give you the final prediction.
Depending on your requirements in terms of data size and the necessity for parallelization, I can highly recommend H2O. It is an open source machine learning software suite with APIs in Python and R. Their random forest implementation is very fast and leads to models with a higher AUC (see this page for a good comparison between different ML libraries).
H: Deploying the prediction model under missing values for test data
I have successfully built a logistic regression prediction model based on data set that is complete and clean, i.e., there is no missing values and the data is consistent.
Now, for deploying the model and testing it for online use, there are missing values in the inputs, i.e., not all inputs are available to predict the target value.
Is there a standard way to deal with this?
AI: I can think of three ways to deal with the problem:
Treat "missing value" as another feature: Imagine you have a feature like "date of graduation". One possible (likely?) reason why this value is missing might be that the person did not graduate. So you could build a model which as a binary feature "graduation date is available" and the actual graduation date as another feature.
Predict the missing values: If data is missing because of your lack of knowledge of it (in contrast to the first point), then you might think about trying to predict the missing value. You could also add a feature which encodes the certainty of the predicted value being correct.
Skip the feature: If it is missing very often and if it doesn't add much value to your prediction, you might simply want to remove it.
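For the first two points, recent scikit-learn versions can do a simple version of both at once: impute the missing value and append a binary "was missing" indicator (a sketch, assuming numeric inputs):
import numpy as np
from sklearn.impute import SimpleImputer

X_online = np.array([[1.0, np.nan, 3.0],
                     [4.0, 5.0, np.nan]])

# Mean-imputes missing entries and appends one indicator column per
# feature that contained missing values during fit
imputer = SimpleImputer(strategy='mean', add_indicator=True)
X_filled = imputer.fit_transform(X_online)
print(X_filled)
In practice you would fit the imputer on your training data and only call transform on the live inputs, so the deployed model always sees features prepared in the same way.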
H: Convert a pandas column of int to timestamp datatype
I have a dataframe that among other things, contains a column of the number of milliseconds passed since 1970-1-1. I need to convert this column of ints to timestamp data, so I can then ultimately convert it to a column of datetime data by adding the timestamp column series to a series that consists entirely of datetime values for 1970-1-1.
I know how to convert a series of strings to datetime data (pandas.to_datetime), but I can't find or come up with any solution to convert the entire column of ints to datetime data OR to timestamp data.
AI: You can specify the unit of a pandas.to_datetime call.
Stolen from here:
# assuming `df` is your data frame and `date` is your column of timestamps
df['date'] = pandas.to_datetime(df['date'], unit='ms')
Should work with integer datatypes; unit='ms' matches your column of milliseconds since the epoch (use unit='s' if the values were seconds instead).
H: How to combine two CART decision trees learned in same type of data?
We have distributed data centers and we build decision trees in each data center. Our problem is to combine our CART decision trees into one CART decision tree. The data in each data center relate to the same kind of event (data from a light sensor, for instance). I know about boosting methods, but they don't give the result we want.
Is there any known method to do this ?
AI: You mention two decision trees. Traversing a decision tree is very cheap, so running a feature instance through multiple trees is very fast: you could just take all the decision trees from the data centers and average the outcome, maybe weighting it by the (cross-validated) strength of the models. Random Forests are powerful models that also combine decision trees in this way, except that the trees are built on random subsets of the features (and in some cases also random subsets of the data).
H: Use regression instead of classification for hard labeled ranking datasets
Let's imagine I have a dataset of movie reviews with annotated sentiment:
-1 means negative
0 means neutral
+1 means positive
I see a lot of people trying to do classification to try to answer those types of problems, but shouldn't regression be used instead? To me using regression would allow the system to model that there is a transition between labels, e.g. 0 is in between. Any thoughts on this?
AI: This is Ordinal Regression https://en.wikipedia.org/wiki/Ordinal_regression
Quote from Wikipedia:
In statistics, ordinal regression (also called "ordinal
classification") is a type of regression analysis used for predicting
an ordinal variable, i.e. a variable whose value exists on an
arbitrary scale where only the relative ordering between different
values is significant.
Examples are the ranking system you describe or any question with categorical but ordered answers often seen in surveys ("always", "sometimes", "never").
H: Neural network with flexible number of inputs?
Is it possible to create a neural network which provides a consistent output given that the input can be in different length vectors?
I am currently in a situation where I have sampled a lot of audio files, which are of different lengths, and have to train a neural network that provides the desired output given a certain input. I am trying to create a regression network that can generate MFCC features given samples of an audio file; the files are of different lengths, which results in a different number of inputs.
AI: Yes this is possible by treating the audio as a sequence into a Recurrent Neural Network (RNN). You can train a RNN against a target that is correct at the end of a sequence, or even to predict another sequence offset from the input.
Do note however that there is a bit to learn about options that go into the construction and training of a RNN, that you will not already have studied whilst looking at simpler layered feed-forward networks. Modern RNNs make use of layer designs which include memory gates - the two most popular architectures are LSTM and GRU, and these add more trainable parameters into each layer as the memory gates need to learn weights in addition to the weights between and within the layer.
RNNs are used extensively to predict from audio sequences that have already been processed in MFCC or similar feature sets, because they can handle sequenced data as input and/or output, and this is a desirable feature when dealing with variable length data such as spoken word, music etc.
Some other things worth noting:
RNNs can work well for sequences of data that are variable length, and where there is a well-defined dimension over which the sequences evolve. But they are less well adapted for variable-sized sets of features where there is no clear order or sequence.
RNNs can get state-of-the-art results for signal processing, NLP and related tasks, but only when there is a very large amount of training data. Other, simpler, models can work just as well or better if there is less data.
For the specific problem of generating MFCCs from raw audio samples: whilst it should be possible to create an RNN that predicts MFCC features from raw audio, this might take some effort and experimentation to get right, and could take a lot of processing power to make an RNN powerful enough to cope with very long sequences at normal audio sample rates. Creating MFCCs from raw audio using the standard approach starting with the FFT will be a lot simpler, and is guaranteed to be accurate.
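A minimal Keras sketch of a sequence model whose time dimension is left unspecified, so each clip can have a different length (the feature and output sizes are illustrative):
from tensorflow import keras
from tensorflow.keras import layers

# None in the time dimension allows variable-length sequences;
# here each time step is a single raw audio sample (1 feature)
model = keras.Sequential([
    layers.LSTM(64, input_shape=(None, 1)),
    layers.Dense(13),   # e.g. 13 MFCC coefficients, linear output
])
model.compile(optimizer='adam', loss='mse')
# Clips of different lengths can be fed one at a time, or padded to a
# common length and combined with a Masking layer for batched training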
H: Why after adding categorical data the Linear Regression fails?
Based on a training set we applied a simple Linear Regression on some attributes that all were numeric.
Now we have more attributes in terms of categories and of course we applied one-hot-encoding to transform the categories to binary attributes
Take for example this simple python code:
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8, test_size=0.2)
model = LinearRegression(normalize=True).fit(X_train, y_train)
printErrorMetrics(trueTargets=y_test, predictions=model.predict(X_test))
When the table X has only the original numeric attributes the scores from the printErrorMetrics function (RMSE, etc.) are all good enough
We were expecting better results after adding the one-hot-encoded categories, but the results are so much worse that the method does not seem to work.
Are we missing anything?
Do we need to preprocess the data after adding the one-hot-encoded columns/attributes?
AI: One possible reason is that when you use one-hot-encoding for categorical data, you should set the intercept property in the function to be False:
model = LinearRegression(fit_intercept=False, normalize=True).fit(X_train, y_train)
This will avoid the dummy variable trap:
http://www.algosome.com/articles/dummy-variable-trap-regression.html
You could also use dummy encoding to avoid this problem:
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
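If you one-hot-encode with pandas, dropping one level per categorical variable is another simple way to avoid the trap while keeping the intercept (a sketch with an invented 'colour' column):
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    'size':   [50, 80, 120, 60, 95],
    'colour': ['red', 'blue', 'green', 'blue', 'red'],
    'price':  [100, 150, 220, 120, 180],
})

# drop_first=True removes one dummy column per category, which breaks
# the exact collinearity between the dummies and the intercept
X = pd.get_dummies(df[['size', 'colour']], drop_first=True)
y = df['price']
model = LinearRegression().fit(X, y)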
H: Does the Bishop book imply that a neuron feeds to itself in chapter 5.3?
I just read Bishop's book Pattern Recognition and Machine Learning. I read the chapter 5.3 about backpropagation, and it said that, in a general feed-forward network, each unit computes a weighted sum of its inputs of the form $$a_j=\sum\limits_{i}w_{ji}z_i$$
Then the book says that the sum in the above equation transformed by the non-linear activation function $h(.)$ to give the activation $z_j$ of unit $j$ in the form $$z_j=h(a_j)$$
I think the notation is somewhat awkward: suppose I want to compute $a_2$, then
$$a_2=w_{21}z_1+w_{22}z_2+\dots$$
Then does $$a_2=w_{21}z_1+w_{22}h(a_2)+\dots$$ mean that the neuron $a_2$ is connected to itself?
AI: The equations only hold within a given layer: the sum over $i$ runs over the units that send connections to unit $j$ (in a layered network, the previous layer), so unit $j$ never appears in its own sum and no neuron is connected to itself.
If you want to generalize, you need to rewrite them as, for example:
$$a^l_j=\sum\limits_{i}w^l_{ji}z^{l-1}_i + b^l_j$$
H: What is the difference of R-squared and adjusted R-squared?
I have in mind that R-squared is the variance of the response explained by the predictors. But I'd like to know how the adjusted value is computed, and whether the concept changes from the original.
AI: A google search for r-squared adjusted yielded several easy to follow explanations. I am going to paste a few directly from such results.
Meaning of Adjusted R2
Both R2 and the adjusted R2 give you an idea of how many data points fall within the line of the regression equation. However, there is one main difference between R2 and the adjusted R2: R2 assumes that every single variable explains the variation in the dependent variable. The adjusted R2 tells you the percentage of variation explained by only the independent variables that actually affect the dependent variable.
What Is the Adjusted R-squared?
The adjusted R-squared compares the explanatory power of regression models that contain different numbers of predictors.
Suppose you compare a five-predictor model with a higher R-squared to a one-predictor model. Does the five predictor model have a higher R-squared because it’s better? Or is the R-squared higher because it has more predictors? Simply compare the adjusted R-squared values to find out!
The adjusted R-squared is a modified version of R-squared that has been adjusted for the number of predictors in the model. The adjusted R-squared increases only if the new term improves the model more than would be expected by chance. It decreases when a predictor improves the model by less than expected by chance. The adjusted R-squared can be negative, but it’s usually not. It is always lower than the R-squared.
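For reference, the usual formula, with $n$ observations and $p$ predictors, is $$\bar{R}^2 = 1 - (1 - R^2)\frac{n - 1}{n - p - 1},$$ which makes explicit how the adjustment penalises every additional predictor that does not improve the fit enough.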
H: Is there a network analysis tool built into Orange?
I am interested in a canned or built-in network analysis tool. Wondering if this is possible with Orange.
AI: This Networks add-on for Orange data mining suite should help you. If you are open to using other solutions, I would recommend networkx Python library.
H: In Tensorflow, What kind of neural network should I use?
I am doing the TensorFlow tutorial and getting a sense of what TF is. But I am confused about which neural network I should use for my work. I am looking at a single-layer neural network, a CNN, an RNN, and an LSTM RNN.
-----------------------What I'm going to do is...-----------------------
There is a sensor which measures something and represents the result in 2 boolean ways. Here, they are Blue and Red, like this:
The sensor gives result values every 5 minutes. If we pile up the values for each color, we can see some patterns:
The number inside each circle represents the sequence of result values given by the sensor
(for example, 107 was given right after 106). When you look from 122 to 138, you can see a decalcomania-like pattern.
I want to predict the next result value, before the sensor imparts the result, with probability.
The machine has to know what the next value will be, based on patterns from past results.
I may do supervised learning using past results. But I'm not sure which neural network or method is suitable. Given that this task needs to recognise patterns in past results (it has to see context) and memorize past results, maybe an LSTM RNN (long short-term memory recurrent neural network) would be a suitable one.
Could you tell me which one is suitable for this work?
AI: Sure, you can use an RNN. I would create two features for the past $k$ run lengths, as well as the length of the current run; e.g., just before t=150, the current run would be length 2 (red), and the previous three runs would be (1,1,1) for red and (1,1,5) for blue. The intuition here is that the run lengths seem to follow some sort of exponential distribution, and you want to help the model estimate the scale parameter by feeding it samples of the length. You could additionally encode the past k events as a bit string, with 1 representing red and 0 representing blue. You have a classification problem, so you should use a classification loss like the cross-entropy, and a softmax output layer to get your probabilities.
H: software for workflow integrating network analysis, predictive analytics, and performance metrics
I am hoping that there is some existing software for what I want to accomplish, as I'm not a big fan of reinventing the wheel.
In general, I would like a software package that can serve as a workflow that integrates network analysis, predictive analytics, and performance evaluation.
More specifically, information about a criminal network along with other relevant data would be used to predict monetary flows across the network. Law enforcement case data combined with information about network monetary flows would direct resource use in disrupting flows across the network. These law enforcement performance metrics along with metrics related to criminal networks would be used for strategic decision making. It would also be ideal to have dashboards showing some of the metrics of interest.
I have identified some potential software candidates: Dataiku DSS, RapidMiner, KNIME, Orange Data Mining, Watson, and Statistica. I am not too familiar with them and I wonder if something else may immediately come to mind for those more familiar with these applications.
Any direction is greatly appreciated.
-Ted
AI: In Orange, you can do something like this:
This takes the network, which already contains the class you'd like to predict, then trains (or tests) the learner in Test & Score and evaluates it in Confusion Matrix. Then you can see misclassifications directly in the network graph.
There are a bunch of other learners and evaluation methods available. A big plus is also interactive data exploration (see how you can input wrongly classified data into Network Explorer?). However, there's no dashboard available yet; we make do with opening several windows side by side.
That's just my 2¢ on Orange. I suggest you at least try all of them and see which one works best for you. :)
H: Feature selection and classification accuracy relation
One of the methodology to select a subset of your available features for your classifier is to rank them according to a criterion (such as information gain) and then calculate the accuracy using your classifier and a subset of the ranked features.
For example, if your features are A, B, C, D, E, and they are ranked as follows: D, B, C, E, A, then you calculate the accuracy using D, then D, B, then D, B, C, then D, B, C, E... until your accuracy starts decreasing. Once it starts decreasing, you stop adding features.
In example1 (above), you would pick features F, C, D, A and drop the other features as they decrease your accuracy.
That methodology assumes that adding more features to your model increases the accuracy of your classifier until a certain point after which adding additional features decreases the accuracy (as seen in example 1)
However, my situation is different. I have applied the methodology described above and I found that adding more features decreased the accuracy up until a point after which it increases.
In a scenario such as this one, how do you pick your features? Do you only pick F and drop the rest? Do you have any idea why the accuracy would decrease and then increase?
AI: Feature selection involves several approaches, just like methods for machine learning. The idea is to keep the most relevant but non-redundant features for a predictive model that can yield optimal accuracy.
In your case, I cannot see which method you are using for feature selection, but I assume it does not take into account the multivariate nature of feature dependency. Say you have N features; the likely reason that your model accuracy drops after the top n features but improves when you add n+k (where n < k < N, with features in descending order of information gain) is the inter-dependency (more relevance and less redundancy) of the top n and k features. Univariate feature selection does not necessarily reach optimal model accuracy when features are inter-dependent rather than mutually exclusive. From a philosophical point of view, a set of optimal features is analogous to the quote by Aristotle: "The whole is greater than the sum of its parts"!
For optimal feature selection, I often use the caret package in R, where one may do feature selection using recursive feature elimination (RFE), among several other approaches. There is also a package called mRMRe to do feature selection based on maximum relevance, minimal redundancy.
Best,
Samir
H: Cost of greater than 1, is there an error?
I'm computing cost in the following way:
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(y, y_)
cost = tf.reduce_mean(cross_entropy);
For the first cost, I am getting 0.693147, which is to be expected on a binary classification when parameters/weights are initialized to 0.
I am using one_hot labels.
However, after completing a training epoch using stochastic gradient descent I am finding a cost of greater than 1.
Is this to be expected?
AI: The following piece of code does essentially what TF's softmax_cross_entropy_with_logits functions does (crossentropy on softmaxed y_ and y):
import scipy as sp
import numpy as np
def softmax(x):
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum(axis=0)
def crossentropy(true, pred):
    epsilon = 1e-15
    pred = sp.maximum(epsilon, pred)
    pred = sp.minimum(1-epsilon, pred)
    ll = -sum(
        true * sp.log(pred) + \
        sp.subtract(1,true) * \
        sp.log(sp.subtract(1, pred))
    ) / len(true)
    return ll
==
true = [1., 0.]
pred = [5.0, 0.5]
true = softmax(true)
pred = softmax(pred)
print true
print pred
print crossentropy(true, pred)
==
[ 0.73105858 0.26894142]
[ 0.98901306 0.01098694]
1.22128414101
As you can see, there is no reason why crossentropy on binary classification cannot be > 1, and it's not hard to come up with such an example.
** Crossentropy above is calculated as in https://www.kaggle.com/wiki/LogarithmicLoss, softmax as in https://en.wikipedia.org/wiki/Softmax_function
UPD: there is a great explanation of what it means when logloss is > 1 at SO: https://stackoverflow.com/a/35015188/1166478
H: How to find splits in data so that each split has equal weighting according to function f
I have a weight function f that outputs a numeric weighting for a sample s. I also have an ordered set of samples S where the weight of each sample s in set S varies greatly.
How can I create n splitting points so that each split is weighted approximately the same? What kind of methodology, algorithms or models could I use to achieve this?
AI: A classic optimization problem! You can use Linear Programming/Optimization to find a good split. Each of the n samples s $\in S$ has weight f(s) and we want to divide them into m folds. You can use a trick to linearize the L1 objective, or you can use Quadratic Programming for an L2 objective function. The Quadratic Programming model is easier to define in this case. Let's define $x_{ij}$ as the binary decision to put sample i in fold j, and $\mu$ as the ideal (mean) weight per fold. Then this is our objective function:
min $\sum_{j=1}^m(\sum_{i=1}^nx_{ij}f(i)-\mu)^2$
Under the following constraints:
$\sum_{j=1}^mx_{ij}=1$ for all $i\in \{1..n\}$ to ensure exactly one assignment per sample
$x_{ij} \in \{0, 1\}$ to turn it into binary decision variables
Depending on the size of your dataset and the solver you use this can be a heavy optimization, but there are a lot of greedy heuristics that will get you close fairly fast.
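One such greedy heuristic is easy to sketch in Python, following the assignment formulation above (it ignores any ordering of the samples and is not guaranteed to be optimal): sort the samples by weight and always give the next one to the currently lightest fold.
import heapq

def greedy_splits(weights, m):
    """Assign items (given by weight) to m folds, balancing total weight."""
    folds = [[] for _ in range(m)]
    heap = [(0.0, j) for j in range(m)]   # (current fold weight, fold id)
    heapq.heapify(heap)
    # Heaviest items first, each goes to the currently lightest fold
    for i in sorted(range(len(weights)), key=lambda i: -weights[i]):
        total, j = heapq.heappop(heap)
        folds[j].append(i)
        heapq.heappush(heap, (total + weights[i], j))
    return folds

print(greedy_splits([5, 3, 3, 2, 2, 1], m=2))  # [[0, 3, 5], [1, 2, 4]], both folds weigh 8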
H: Python TypeError: __init__() got an unexpected keyword argument 'decision_function_shape'
I tried creating a SVM Classifier, as:
# Create a SVM Classifier
model = SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma='auto', kernel='linear',
max_iter=-1, probability=True, random_state=None, shrinking=True,
tol=0.001, verbose=False
)
(Using Python 2.7)
But getting this error--
TypeError: __init__() got an unexpected keyword argument 'decision_function_shape'
Any thoughts on that? How to sort it out?
Update >>
My sklearn version is 0.16.1. I tried to install the update, but it kept on saying: No matching distribution found for the upgrade.
AI: Your snippet is almost exactly the same as scikit-learn's example (except for kernel='rbf' and probability=False) and works fine under version 0.18, provided the needed imports are present.
Update: the version of scikit learn used is 0.16.1, and in that version, SVC did not have as many arguments as in 0.18, as per the docs. You should therefore use something like this:
model = SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=0.0, kernel='linear', max_iter=-1, probability=True,
random_state=None, shrinking=True, tol=0.001, verbose=False)
For upgrading scikit-learn to 0.18, (assuming you use pip) do this:
pip install scikit-learn==0.18 --force-reinstall