H: What is the difference between SVM and GMM classifier
What is the difference between support vector machine and Gaussian mixture model classifiers?
AI: A Gaussian mixture model is a special case of a mixture distribution, which is a simple way of combining probability distributions.
However, the SVM does not make any such assumptions. It is just a function that depends on the distance of a data point from another point/plane. So there are absolutely no probabilistic assumptions. |
H: finding maximum depth of random forest given the number of features
How do we find maximum depth of Random Forest if we know the number of features ?
This is needed for regularizing random forest classifier.
AI: The maximum depth of a forest is a parameter you set yourself. If you're asking how to find the optimal depth of a tree given a set of features, then the answer is cross-validation. For example, create five random forests with five different tree depths and see which one performs best on the validation set. |
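A minimal sketch of that cross-validation approach with scikit-learn, where X and y are placeholders for your training features and labels:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {"max_depth": [3, 5, 10, 20, None]}   # candidate depths, None = unlimited
search = GridSearchCV(RandomForestClassifier(n_estimators=200), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)          # depth with the best cross-validated score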
H: How do scientists come up with the correct Hidden Markov Model parameters and topology to use?
I understand how a Hidden Markov Model is used in genomic sequences, such as finding a gene. But I don't understand how to come up with a particular Markov model. I mean, how many states should the model have? How many possible transitions? Should the model have a loop?
How would they know that their model is optimal?
Do they imagine, say 10 different models, benchmark those 10 models and publish the best one?
AI: I'm familiar with three main approaches:
A priori. You might know that there are four base pairs to pick from, and so allow the HMM to have four states. Or you might know that English has 44 phonemes, and so have 44 states for the hidden phoneme layer in a voice recognition model.
Estimation. The number of states can often be estimated beforehand, perhaps by simple clustering on the observed features of the HMM. If the HMM transition matrix is triangular (which is often the case in failure prediction), the number of states determines the shape of the distribution of total time from the start state to the end state.
Optimization. As you suggest, many models can be created, fit, and the best model selected. Alternatively, one could adapt the methodology that learns the HMM to allow the model to add or discard states as needed. |
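A minimal sketch of the third (optimization) approach, selecting the number of hidden states by held-out log-likelihood; it assumes the Python hmmlearn package, and the data and candidate state counts below are placeholders:

import numpy as np
from hmmlearn import hmm

rng = np.random.RandomState(0)
X = rng.randn(500, 3)               # placeholder observation sequence
X_train, X_val = X[:400], X[400:]   # hold out data to score candidate models

best_k, best_score = None, -np.inf
for k in range(2, 8):               # candidate numbers of hidden states
    model = hmm.GaussianHMM(n_components=k, covariance_type="diag", n_iter=50)
    model.fit(X_train)
    score = model.score(X_val)      # held-out log-likelihood
    if score > best_score:
        best_k, best_score = k, score
print(best_k)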
H: Can I consider this pattern of data as a linear and use parametric multiple linear regression?
In the data, there are 355 observations including one continuous dependent variable (Y: ranges from 15-55) and 12 independent variables (continuous, categorical, and ordinal). The X1 (2 levels) and X6 (3 levels) are considered as categorical variables. Here are some questions that I have:
Can I assume that all the predictors (except X1 and X6, which are categorical) have a linear relationship with Y?
Can I treat X5 as a continuous variable, even though it is ordinal and ranges from 1-7?
Can I treat X7 (year) as a continuous variable, even though it is ordinal and ranges from 2002-2006? (In fact, the year per se does not improve the response; it is other factors occurring in the same time period which result in improvements, and we don't know those factors.) Does this approach seem logical?
In general if I use different transformations on independent variables such as log, squared, square root, and inverse, do I need to standardize the data also?
Here is the scatter plot:
Any feedback and insights would be highly appreciated. Thank you
AI: I don't think "can" is the right question to ask; it's not going to give you a syntax error. The right question is "what could go wrong?". Any modeling technique will have assumptions that may be broken, and knowing how those assumptions impact the results will help you know what to look for (and how much to care when those assumptions are broken).
1. The best test of whether or not linearity is appropriate is whether the residuals are white or structured. For example, it looks like X9 might have a nonlinear relationship with Y. But that might be an artifact of the interaction between X9 and other variables, especially categorical variables. Fit your full model, then plot the residuals against X9 and see what it looks like (see the sketch after this answer).
2. Treating it as continuous won't cause serious problems, but you might want to think about what this implies. Is the relationship between 1 and 2 in the same direction and half the strength as the relationship between 2 and 4? If not, you might want to transform this to a scale where you do think the differences are linear.
3. Same as 2, except it's even more reasonable to see time as linear.
4. Standardization is not necessary for most linear regression techniques, as they contain their own standardization. The primary exception is techniques that use regularization, where the scale of the parameters is relevant.
It's also worth pointing out that multivariate linear relationships, while they can capture general trends well, are very poor at capturing logical constraints. For example, looking at X3 and X4, it could very well be that there are rules like Y>X3 and Y>X4 in place, which is hinted at but not captured by linear regression. |
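A minimal sketch of the residual check suggested in point 1, assuming a NumPy feature matrix X (with dummies already created for X1 and X6) and response y; the column index used for X9 is a placeholder:

import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(X, y)      # fit the full model
resid = y - model.predict(X)              # residuals

plt.scatter(X[:, 8], resid, s=10)         # column 8 assumed to hold X9 (placeholder)
plt.axhline(0, color="k", linewidth=1)
plt.xlabel("X9")
plt.ylabel("Residual")
plt.show()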
H: Examples of the Three V's of Big Data?
What are some examples of the Three V's of Big Data? The three V's stand for: volume, velocity, variety.
Reference:
Three V's of Big Data, provided by Norwegian University of Science and Technology.
https://www.ntnu.edu/ime/bigdata/what-is
AI: Volume:
Simply stated, big data is too big to work on one computer. This is a relative definition: what can't be handled on today's computers will easily work on the computers of the future.
- One Google search uses the computing power of the entire Apollo space mission.
- Excel used to hold up to 65k rows in a single spreadsheet. Now it holds over a million.
Velocity:
Data is coming in extremely fast. Traditional scientific research, based on a few hundred cases, could take weeks, months, or even years to analyze and publish.
- Iris flower data set
- Statistical Programming Language R
- Twitter Firehose (6,000 tweets per second)
Variety:
Big data is often not confined to one specific data type and does not fit well within the format of a relational database. It often comes in the form of unstructured text.
- Estimated 80% of all enterprise data is unstructured
- Open Data(Government)
- NoSQL databases
Iris Data Set: https://en.wikipedia.org/wiki/Iris_flower_data_set
Open Data: https://www.data.gov/open-gov/ |
H: Sentiment Analysis Tutorial
I am trying to understand sentiment analysis and how to apply it in any language (R, Python, etc.). I would like to know if there is a good place on the internet with a tutorial that I can follow. I googled, but I wasn't very satisfied with what I found because the results were more theory than tutorial. I want theory and practical examples.
AI: The Stanford NLP course on Coursera covers Sentiment Analysis in week 3:
- What is Sentiment Analysis?
- Sentiment Analysis: A baseline algorithm
- Sentiment Lexicons
- Learning Sentiment Lexicons
- Other Sentiment Tasks
For coding tutorials see:
Stream Hacker's NLP tutorials
Basic Sentiment Analysis with Python
Andy Bromberg's Sentiment Analysis tutorials
Laurent Luce's Sentiment Analysis tutorials
These are really basic, so their performance will not be great in all cases. |
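If you want something runnable right away before working through those tutorials, here is a minimal sketch using NLTK's VADER lexicon-based analyzer (it assumes nltk is installed and downloads the vader_lexicon resource on first use):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")    # one-time download of the lexicon
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("This tutorial was clear and very helpful."))
print(sia.polarity_scores("The examples were confusing and poorly explained."))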
H: Predict which user will buy with an offer - discount
I have historical transaction data from an e-shop. I want to build a prediction model that checks whether a specific user will buy with or without a discount, so I can make some targeted offers.
The idea is:
If a user will buy at the regular price, do not make an offer.
If a user will not buy at the regular price, check whether he/she will buy with an offer.
This way, I avoid making an offer to someone who would have bought at the regular price.
I am still brainstorming and trying to find a way to implement steps 1 and 2. Should I create two separate models, one to predict 1) and a second to predict 2)? Or should I join both in one prediction model?
AI: You can use decision trees to build a single model that covers both sets of users.
A good start would be to first read up on decision trees and their applications.
You can include the offer as a feature (as a boolean in this case):
buying with an offer and buying without an offer can then be part of the decision criterion.
You can, in fact, go ahead and put in the offer values as well. For example,
offer > 10% and offer < 10% |
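A minimal sketch of a single decision tree that includes the offer as a feature; the feature columns and toy values below are invented purely for illustration:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: past_purchases, avg_basket_value, received_offer (0/1), discount_fraction
X = np.array([[5, 40.0, 0, 0.00],
              [1, 10.0, 1, 0.15],
              [0,  5.0, 1, 0.05],
              [3, 25.0, 0, 0.00],
              [2, 15.0, 1, 0.20]])
y = np.array([1, 1, 0, 1, 1])     # 1 = bought, 0 = did not buy

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[2, 20.0, 1, 0.10]]))   # predict for a new user/offer combination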
H: Sampling for multi categorical variable
My hypothesis h depends on multiple categorical variables (a, b, c), each with its corresponding set of possible values (A, B, C). Each of my data points exists in this space, and I have no control over the values (observational data).
For example, a hypothesis to predict a user's shopping probability might depend
on (age, country, gender, device type, etc.).
How could I sample the above data set so that it gives me a good representation? The techniques I have learned from books apply well to one dimension, but that is a rare case in practice. If I sample across one dimension, my other dimensions will be heavily skewed towards some values. Is there any standard algorithm that gives good sampling?
AI: Let me give you some pointers (assuming that I'm right on this, which might not necessarily be true, so proceed with caution :-). First, I'd figure out the applicable terminology. It seems to me that your case can be categorized as multivariate sampling from a categorical distribution (see this section on categorical distribution sampling). Perhaps, the simplest approach to it is to use R ecosystem's rich functionality. In particular, standard stats package contains rmultinom function (link).
If you need more complex types of sampling, there are other packages that might be worth exploring, for example sampling (link), miscF (link), offering rMultinom function (link). If your complex sampling is focused on survey data, consider reading this interesting paper "Complex Sampling and R" by Thomas Lumley.
If you use languages other than R, check the multinomial function from Python's numpy package and, for Stata, this blog post. Finally, if you are interested in Bayesian statistics, the following two documents seem to be relevant: this blog post and this survey paper. Hope this helps. |
H: Scan-based operations Apache Spark
Looking at the first paper on RDDs/Apache Spark, I found a statement saying that "RDDs degrade gracefully when there is not enough memory to store them, as long as they are only being used in scan-based operations"
What are scan-based operations in the context of RDDs, and which of the transformations in Spark are scan-based operations?
AI: Scan-based operations are basically all the operations that require evaluating a predicate over an RDD.
In other terms, each time you create an RDD or a DataFrame on which you need to compute a predicate, such as performing a filter or a map over a case class (or even calling the explain method), the operation is considered a scan-based operation.
To be clearer, let's review the definition of a predicate.
A predicate, or functional predicate, is a logical symbol that may be applied to an object term to produce another object term.
Functional predicates are also sometimes called mappings, but that term can have other meanings as well.
Example :
// scan based transformation
rdd.filter(!_.contains("#")) // here the predicate is !_.contains("#")
// another scan based transformation
rdd.filter(myfunc) // myfunc is a boolean function
// a third also trivial scan based transformation followed by a non scan based one.
rdd.map(myfunc2)
.reduce(myfunc3)
If you want to understand how Spark internals work, I suggest that you watch the presentation made by Databricks about the topic. |
H: Cosine Distance > 1 in scipy
I am working on a recommendation engine, and I have chosen to use SciPy's cosine distance as a way of comparing items.
I have two vectors:
a = [2.7654870801855078, 0.35995355443076027, 0.016221679989074141, -0.012664358453398751, 0.0036888812311235068]
and
b = [-6.2588482809118942, -0.88952297609194686, 0.017336984676103874, -0.0054928004763216964, 0.011122959185936367]
Running the following code will produce an output of ~1.999:
from scipy.spatial import distance
print(distance.cosine(a,b))
Is there something wrong with my input values? Anyone know why I am getting a result of >1?
AI: The cosine similarity formula is:
$\cos(a, b) = \frac{a \cdot b}{\|a\|\,\|b\|}$
And the formula used by the cosine function of scipy's spatial module is the cosine distance:
$d(a, b) = 1 - \frac{a \cdot b}{\|a\|\,\|b\|}$
So the cosine distance ranges from 0 to 2, and a value above 1 is not an error. In your case the actual cosine similarity is about -0.9998, giving a distance of about 1.9998.
It signifies near-complete dissimilarity: the two vectors point in almost opposite directions. |
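You can verify this yourself with plain NumPy, using the two vectors from the question:

import numpy as np

a = np.array([2.7654870801855078, 0.35995355443076027, 0.016221679989074141,
              -0.012664358453398751, 0.0036888812311235068])
b = np.array([-6.2588482809118942, -0.88952297609194686, 0.017336984676103874,
              -0.0054928004763216964, 0.011122959185936367])

cos_sim = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_sim)        # roughly -0.9998 (near-opposite directions)
print(1 - cos_sim)    # roughly 1.9998, matching distance.cosine(a, b)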
H: Optimal projection for data visualization
I have 90k points in $\mathbb{R}^{32}$ (i.e., a 90k by 32 real matrix) which I want to visualize. I know I can cluster my points (k-means &c), but I want to select a few "interesting" 2-planes in $\mathbb{R}^{32}$, project the points there and scatter-plot them.
How do I select the interesting 2-planes to project to?
AI: There are a few interesting plots and transformations you could start with, each dependent upon the purpose of your analysis. Below are some first steps I might take.
If simply visually looking for clusters:
If clusters are what you are after, then I would recommend applying a Principal Component Analysis to the dataset and then plotting the dataset with the first 2 principal components as the axes. However, the major downside to PCA is that you will have to "unpack" the principal components to find the original variables. In other words, you may be able to identify cool clusters, but it will be a little harder to tie the findings back to your 32 variables.
If looking for quick relationships between variables to build on:
With "only" 32 variables you could do a pairwise plot. However, a smarter way may be to first identify the relationships mathematically (e.g. correlation) and then plotting those variables.
If looking for quick relationships in each variable to build on:
Or, look at 32 histograms to start off with. Look out for clearly bi-modal (or multi-modal) shapes to begin piecing together an understanding of how your variables may contribute to an unsupervised model. If you end up looking at 32 unimodal histograms, then you can conclude early that no matter how you cluster, you will simply end up with a blob.
Actually, in usual analytics workflows I would go 3 > 2 > 1. But if I was fishing for clusters and just wanted to see if clusters will appear or not, PCA would be a good shortcut.
Also, feel free to sample your dataset. 90k points on your screen will likely do more harm than good. |
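A minimal sketch of the PCA route, assuming X is your 90k by 32 NumPy array; the sample size is arbitrary:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
idx = rng.choice(len(X), size=5000, replace=False)   # sample to keep the plot readable
Z = PCA(n_components=2).fit_transform(X[idx])

plt.scatter(Z[:, 0], Z[:, 1], s=3, alpha=0.5)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()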
H: Extract the "path" of a data point through a decision tree in sklearn
I'm working with decision trees in python's scikit learn. Unlike many use cases for this, I'm not so much interested in the accuracy of the classifier at this point so much as I am extracting the specific path a data point takes through the tree when I call .predict() on it. Has anyone done this before? I'd like to build a data frame containing ($X_{i}$, path$_{i}$) pairs for use in a down-stream analysis.
AI: Looks like this is easier to do in R, using the rpart library in combination with the partykit library. I'd ideally like to find a way to do this in python, but here's the code, for anyone who is interested (taken from here):
pathpred <- function(object, ...){
## coerce to "party" object if necessary
if(!inherits(object, "party")) object <- as.party(object)
## get standard predictions (response/prob) and collect in data frame
rval <- data.frame(response = predict(object, type = "response", ...))
rval$prob <- predict(object, type = "prob", ...)
## get rules for each node
rls <- partykit:::.list.rules.party(object)
## get predicted node and select corresponding rule
rval$rule <- rls[as.character(predict(object, type = "node", ...))]
return(rval)
}
Illustration using the iris data and rpart():
library("rpart")
library("partykit")
rp <- rpart(Species ~ ., data = iris)
rp_pred <- pathpred(rp)
rp_pred[c(1, 51, 101), ]
Yielding,
response prob.setosa prob.versicolor prob.virginica
1 setosa 1.00000000 0.00000000 0.00000000
51 versicolor 0.00000000 0.90740741 0.09259259
101 virginica 0.00000000 0.02173913 0.97826087
rule
1 Petal.Length < 2.45
51 Petal.Length >= 2.45 & Petal.Width < 1.75
101 Petal.Length >= 2.45 & Petal.Width >= 1.75
Which looks to be something I could at least use to derive shared parent node information. |
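For the record, newer versions of scikit-learn (0.18 and later) expose this directly on the fitted estimator via decision_path; a minimal sketch, assuming a fitted DecisionTreeClassifier clf and a feature matrix X:

node_indicator = clf.decision_path(X)    # sparse (n_samples, n_nodes) indicator matrix
leaf_id = clf.apply(X)                   # id of the leaf each sample ends up in

i = 0                                    # index of the sample to inspect
path_nodes = node_indicator.indices[
    node_indicator.indptr[i]:node_indicator.indptr[i + 1]]
print(path_nodes)                        # node ids visited by sample i, root to leaf
print(leaf_id[i])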
H: How can I vectorize this code in R? Maybe with the apply() function?
I am really struggling to replicate the output of the dist() function in R code without using 1 or 2 for loops. (If you're wondering why I'm doing this, it's so that I can play around with the distance calculation, and also to improve my R skills - so please only solutions that involve R!)
Overview: matrix is passed to dist(), which calculates the Euclidean distance row-wise and outputs a full distance matrix of the distance between each row (e.g. distance between rows 1 and 50 will be in distancematrix[1, 50] and distancematrix[50, 1]). The fast code looks like this:
distancematrix <- as.matrix(dist(myMatrix, method="euclidean", diag = T, upper = T))
I have successfully produced the same output in R using the following code:
for (i in 1:nrow(myMatrix)) {
for (j in 1:nrow(myMatrix)) {
distancematrix[i, j] <- sum(abs(myMatrix[i,] - myMatrix[j,]))
}
}
However, using two nested for loops is much slower than using dist(). I have read a lot about using apply() to optimise slower for loops, but I haven't been able to get my head around it so far. I believe that at least one of the for loops is definitely avoidable by just outputting a vector, and dealing with it at the end. However, I cannot for the life of me work out how to remove both for loops.
Does anyone have any thoughts?
AI: First of all it should be noted that the code you posted does not actually replicate the output of the dist function, because the line:
distancematrix[i, j] <- sum(abs(myMatrix[i,] - myMatrix[j,]))
does not calculate the Euclidean distance; it should be:
distancematrix[i, j] <- sqrt(sum((myMatrix[i,] - myMatrix[j,]) ^ 2))
Here are two solutions that rely on apply. They are simplified, and in particular do not take advantage of the symmetry of the distance matrix (which, if considered, would lead to a 2-fold speedup). First, generate some test data:
# Number of data points
N <- 2000
# Dimensionality
d <- 10
# Generate data
myMatrix = matrix(rnorm(N * d), nrow = N)
For convenience, define:
# Wrapper for the distance function
d_fun <- function(x_1, x_2) sqrt(sum((x_1 - x_2) ^ 2))
The first approach is a combination of apply and sapply:
system.time(
D_1 <-
apply(myMatrix, 1, function(x_i)
sapply(1:nrow(myMatrix), function(j) d_fun(x_i, myMatrix[j, ]))
)
)
user system elapsed
14.041 0.100 14.001
while the second uses only apply (but going over the indices, which are paired using expand.grid):
system.time(
D_2 <-
matrix(apply(expand.grid(i = 1:nrow(myMatrix), j = 1:nrow(myMatrix)), 1, function(I)
d_fun(myMatrix[I[["i"]], ], myMatrix[I[["j"]], ])
)
)
)
user system elapsed
39.313 0.498 39.561
However, as expected both are much slower than dist:
system.time(
distancematrix <- as.matrix(
dist(myMatrix, method = "euclidean", diag = T, upper = T)
)
)
user system elapsed
0.337 0.054 0.388 |
H: Why does ada (adaboost) in R return different training error graphs and variable importance plots when running the same function multiple times?
Question says most of it. I created a matrix of descriptors, set the vectors of responses, and input a set number of iterations. Each time I run the function with the same exact inputs, I get the same confusion table, but I get a different training error graph and different variable importance plots.
AI: Boosting, together with bagging, falls into the realm of so-called ensemble models: you randomly draw a sample from the data, fit a model, adjust your predictions, and sample once again. Unless your random samples are fixed (e.g. by setting the random seed), every time you run the algorithm you'll get slightly different results. |
H: R: Revalue multiple special characters in a data.frame
R noob here..
I have the following data frame
>data
Value Multiplier
1 15 H
2 0 h
3 2 +
4 2 ?
5 2 k
where the multiplier is of class factor. The values of K & k are 3, + is 5, and ? is 2.
I have used
> data$Multiplier <- revalue(data$Multiplier, c("+"="5"))
> data$Multiplier <- revalue(data$Multiplier, c("?"="2"))
> data$Multiplier <- revalue(data$Multiplier, c("K"="3"))
> data$Multiplier <- revalue(data$Multiplier, c("k"="3"))
Is there a better way of doing it?
AI: That seems pretty straightforward to me. I'm pretty new too, but in general I'm not sure you can do better than one command. You could, though, have combined all of that:
> newValueVector <- c("+"="5", "?"="2", "K"="3", "k"="3")
> data$Multiplier <- revalue(data$Multiplier, newValueVector) |
H: Why would removing a variable in adaboost decrease error rate?
I was trying to classify an outcome on some data using adaboost (the ada package in R) and I was playing around with the training data set of descriptors when I realized that removing a column in the descriptor matrix increased the accuracy of the output on the training data. Specifically, the number of false negatives dropped/true positives increased.
Aside from removing a single column in the descriptors, I left everything else the same, including number of iterations.
AI: Imagine that one of the columns is just random data -- then it's not informative at all, so no classifier will be improved by including it.
However, ada's stochastic boosting implementations will always have some chance of including that variable in the classifier it generates. As a result, removing it has the potential to improve the classifiers generated.
(In your case, you might check whether that variable is part of the final model generated.) |
H: Reason behind choosing Neural Network for classification
Given a two class multi dimensional classification problem, what reason would you give to choose Artificial Neural Network for carrying out the classification instead of Support Vector Machine or other classification methods?
AI: SVM is parametric. Parametric models have a fixed, finite number of parameters independent of the dataset size; anything that is not a parametric model is a non-parametric model. ANNs are non-parametric in this sense. ANNs also have 'deep architectures', which can represent 'intelligent' behaviour/functions more efficiently than 'shallow architectures' like SVMs. An ANN may have any number of outputs, while a support vector machine has only one. |
H: On fitting a Poisson distribution to make sense of data
Hi guys, I am working with a regular network which has the shape of a square grid and contains 100x100=10000 nodes. The edges (links) between these nodes simply follow the shape of a chessboard: each node which is not placed in a corner or along the boundary has 4 connections only, all of them involving its nearest neighbors. Accordingly, the nodes on the boundary have 3 connections only, while the nodes in the four corners have 2.
Now, this brings me to my problem. If you plot such data you will figure out you have 4 nodes with links=2, 98*4 nodes with links=3, and 10000-(98*4)-4 with links=4. This translates into three tuples to plot: (x,y)=(2,4),(3,392),(4,9604). The resulting histogram is heavily skewed toward the right hand side:
My question is: what kind of distribution do you think would fit this dataset? I was thinking of a Poisson distribution (the x-axis values are discrete and not continuous) skewed to the right. I will appreciate any kind of help/guidance. Thank you!
AI: Unlike many random graph models, here the degree distribution is a deterministic function of $N$. Specifically, the number of nodes with $k=2$ is constant (4), the number of nodes with $k=3$ grows linearly with the length of one side, $4(l-2)$, while the number of nodes with $k=4$ grows quadratically with the length of a side, $(l-2)^2 = l^2-4l+4$. In the large $l$ (or $N$) limit, essentially all nodes have degree 4, so the distribution concentrates at $k=4$.
You can get the exact distribution by normalizing this piecewise-defined function (divide by $l^2=N$). |
H: Using Machine Learning to Predict Musical Scales
Is it possible to use machine learning techniques to cluster songs into musical-scale groups? I mean: "this song was written in C"... or "this song was written in Am", etc. I did a quick search on the subject and found no software that can do this. If you know of some software, or research (academic papers), related to the subject, could you link it here for me? I'm very interested in the subject but I'm not sure where to begin. I have a little experience with random forests and neural networks; maybe I can accomplish the classification task with one of those algorithms, but, again, I'm not sure what kind of features I should pass to the algorithm. Thanks in advance.
AI: From a very high level: you can convert the song to a spectrogram; there are a large number of implementations for doing this. From there you can analyze the frequency content. In the case of the key, for instance, the note A above middle C corresponds to 440 Hz. Look into the FFT as well. Hope this helps get you started. I know Spotify trains neural networks on spectrograms of songs to find similar songs based on "sound". |
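As a concrete starting point in Python, here is a rough sketch using the librosa library to extract a chromagram (energy per pitch class); note this only reports the most prominent pitch class, which is a crude proxy and not a real key detector, and the file path is a placeholder:

import numpy as np
import librosa

y, sr = librosa.load("song.mp3")                   # placeholder path
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)    # shape: (12 pitch classes, n_frames)

pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
print(pitch_classes[int(np.argmax(chroma.sum(axis=1)))])   # most prominent pitch class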
H: Reduce dimension, then apply SVM
Just out of curiosity, is it generally a good idea to reduce the dimension of the training set before using it to train an SVM classifier?
I have a collection of documents, each of which is represented by a vector of tf-idf weights calculated by scikit-learn's tfidf_transformer. The number of terms (features?) is close to 60k, and with my training set consisting of about 2.5 million documents, the training process goes on forever.
Besides taking forever to train, the classification also wasn't accurate, most probably due to the wrong model. Just to get an idea of what I am dealing with, I tried to find a way to visualize the data somehow, and I decomposed the document matrix into an (m, 2) matrix using SVD with scikit-learn (I wanted to try other methods, but they all crashed halfway).
So this is what the visualization looks like
So is it generally a good practice to reduce the dimension, and then only proceed with SVM? Also in this case what can I do to improve the accuracy of the classifier? I am trying to use sklearn.svm.SVC and kernel='poly', and degree=3 and it is taking a very long time to complete.
AI: I'd recommend spending more time thinking about feature selection and representation for your SVM than worrying about the number of dimensions in your model. Generally speaking, SVM tends to be very robust to uninformative features (e.g., see Joachims, 1997, or Joachims, 1999 for a nice overview). In my experience, SVM doesn't often benefit as much from spending time on feature selection as do other algorithms, such as Naïve Bayes. The best gains I've seen with SVM tend to come from trying to encode your own expert knowledge about the classification domain in a way that is computationally accessible. Say for example that you're classifying publications on whether they contain information on protein-protein interactions. Something that is lost in the bag of words and tfidf vectorization approaches is the concept of proximity—two protein-related words occurring close to each other in a document are more likely to be found in documents dealing with protein-protein interaction. This can sometimes be achieved using $n$-gram modeling, but there are better alternatives that you'll only be able to use if you think about the characteristics of the types of documents you're trying to identify.
If you still want to try doing feature selection, I'd recommend $\chi^{2}$ (chi-squared) feature selection. To do this, you rank your features with respect to the objective
\begin{equation}
\chi^{2}(\textbf{D},t,c) = \sum_{e_{t}\in\{0,1\}}\sum_{e_{c}\in\{0,1\}}\frac{(N_{e_{t}e_{c}}-E_{e_{t}e_{c}})^{2}}{E_{e_{t}e_{c}}},
\end{equation}
where $N$ is the observed frequency of a term in $\textbf{D}$, $E$ is its expected frequency, and $t$ and $c$ denote term and class, respectively. You can easily compute this in sklearn, unless you want the educational experience of coding it yourself $\ddot\smile$ |
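If you do try the $\chi^{2}$ route, scikit-learn already implements it; a minimal sketch, where X_tfidf and y are placeholders for your (non-negative) tf-idf matrix and labels:

from sklearn.feature_selection import SelectKBest, chi2

selector = SelectKBest(chi2, k=5000)       # keep the 5000 highest-scoring terms
X_reduced = selector.fit_transform(X_tfidf, y)
print(X_reduced.shape)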
H: Extracting list of locations from text using R
I have a string containing many words [not sentences]. I want to know how I can extract all the words that correspond to a location in that string, for example:
text<-c("China","Japan","perspective","United Kingdom","formatting","clear","India","Sudan","United States of America","Bagel","Mongolian",...)
The output should be:
> China, Japan, United Kingdom, Mongolian
something of the type. Basically I am looking at extracting locative information from random text.
This is a very general problem I am looking for guidance on how to model my solution, is there any dataset or something I can use to compare or extract information from. I don't want to carry out word by word comparison.
I have looked up OpenNLP but I am not sure how to use it's location-models for carrying out Named Entity Recognition in R. In the above example there are only countries but I would like to identify other places, such as provinces, states, counties, cities, etc. as well.
I am new to machine learning and R-programming, any guidance is greatly appreciated.
AI: This might be better suited to Open Data, but nonetheless, you have a few options. One would be to go to GeoHive, which has other pages, including this one. There is also the UN categorization, available on Wikipedia, which uses membership within the United Nations system to divide the 206 listed states into three categories: 193 member states,[1] two observer states, and 11 other states. The sovereignty dispute column indicates states whose sovereignty is undisputed (190 states) and states whose sovereignty is disputed (16 states).
You can read.table or rvest those sources and grab them at runtime. |
H: Appropriate algorithm for string (not document) classification?
I am trying to classify a large-ish number of small strings (millions) into about 10 disjoint categories. Examples of classes and strings for each class include:
email: "[email protected]"
phone: "55", "22334455"
personName: "John", "Q.", "Public"
organizationName: "Reuters", "IBM"
date: "Dec.", "22.10.2010"
nameAndEmail: "[email protected]" (a last name has been concatenated with an email address.)
phoneAndEmail: "[email protected]" (part of a phone number has been concatenated with an email address)
separator: ",", "and", "or"
other: stuff that does not fit in other categories.
Carrying out classification via a set of heuristics seems tedious, so I have been experimenting with an SVM. I'm not able to get more than about 80% accuracy without doing a lot of work manually encoding features like "has an ampersand", "is uppercase", "is mostly numeric", "has a colon in the middle", etc., which sort of defeats the purpose.
I'm wondering whether part of the challenge with using an SVM is that the "salient features" of a given string, like the '@' character, are not in a fixed position in the string, so it will not get the same feature index for each string.
Does anyone have suggestions for a more appropriate approach to this task, or can anyone recommend further reading?
AI: You might find it useful to treat n-grams of characters as your feature space. Then you could represent a string as a bag of substrings. With N = 4 or greater, you would capture things like ".com" in emails, "##.#" in dates, etc.
It might also help to encode all single digits as one reserved number-only-character.
An easy way to do this might be to create all the n-gram substrings for each string in your dataset, then simply treat that list of substrings as a document. Then you could use term frequency or tf-idf vectors in your supervised step.
For example, to create the substring uni-, bi-, and tri-grams for "[email protected]":
a = "[email protected]"
b = set([a[x:x+y] for y in range(0,4) for x in range(0,len(a))])
set(['', 'co', 've', 'ai', 'eve', 'r@', 'at', '.co', 'gm', 'ev', 'tev', 'er', '@gm', 'ver', '@g', 'r@g', 'ail', 'il.', 'gma', '.', 'te', 'hat', '@', 'wha', 'om', 'wh', 'er@', 'mai', 'ma', 'ha', 'l.c', 'a', 'c', 'ate', 'e', 'g', 'i', 'h', '.c', 'm', 'l', 'o', 'l.', 'r', 't', 'w', 'v', 'com', 'il']) |
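A minimal sketch of using such character n-grams with scikit-learn; the handful of training strings below come from the examples in the question and are far too few for a real model:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

strings = ["22334455", "22.10.2010", "John", "IBM", "and"]
labels  = ["phone", "date", "personName", "organizationName", "separator"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),   # character n-grams
    LinearSVC())
clf.fit(strings, labels)
print(clf.predict(["Dec.", "Reuters"]))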
H: Agglomerative Clustering Stopping Criteria
I am trying to implement section 3.4 of paper Predicting Important Objects for Egocentric Video Summarization where they have created a distance matrix of frame histograms.
In short, let's say Ω is the mean of the distances between all frames, and DV is the distance matrix.
I didn't understand what is meant by this:
We next perform complete-link agglomerative clustering
with DV , grouping frames until the smallest maximum interframe
distance is larger than two standard deviations beyond
Ω
Can this be achieved by setting the cutoff value to 2Ω in Matlab's clusterdata function ?
AI: Clearly, 2 standard deviations beyond Omega is not the same as twice the mean.
Apparently, their process is this (see the sketch after this answer):
- compute the distance matrix
- compute its mean (Ω)
- compute its standard deviation (σ)
- compute hierarchical clustering with complete (maximum) linkage
- cut the tree at Ω + 2σ
Because complete linkage is in O(n^3), this approach will not scale to longer videos or higher frame rates. |
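A sketch of those steps with SciPy, assuming frame_hists is a 2-D array with one histogram per frame (a placeholder name). This also answers the question: the MATLAB cutoff should be Ω + 2σ with complete linkage, not 2Ω.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

D = pdist(frame_hists)                  # condensed pairwise distances between frames
omega, sigma = D.mean(), D.std()        # mean and standard deviation of all distances

Z = linkage(D, method="complete")       # complete-link agglomerative clustering
labels = fcluster(Z, t=omega + 2 * sigma, criterion="distance")
print(len(np.unique(labels)))           # number of resulting frame groups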
H: One multilabel classifier or one for each type of label?
Let's say I need to classify addresses with scikit-learn. If I want my classifier to be able to classify addresses by street name and post/zip code, should I use a OneVsRest classifier, or separate them into two different classifiers (for the same training set)?
I have tried both, and it seems like having multiple classifiers might be a better choice, as it feels faster to train multiple smaller classifiers. Is this how it is supposed to be done?
AI: Both ways are valid and both are commonly used. Sometimes, a classifier that claims to be multilabel may just be separating the labels into multiple OneVsRest classifiers under-the-hood and conveniently joining the results together at the end.
However, there are cases where the methods are fundamentally different. For instance, in training a neural net with multiple targets (labels), you can setup the structure of the network such that there is shared structure. The shared nodes will end up learning features that are useful for all the targets, which could be very useful.
For example, if your classes (labels) are "cat-pet", "cat-big", and "dog", you may want an algorithm that first learns to distinguish between any cat and any dog, and then in a later step learns to separate cats that are pets from cats that are big (like a lion!). This is called hierarchy, and if your classifier can exploit hierarchy you may gain better accuracy. If your classes are completely independent, however, it may not make any difference.
I suggest you start with the method that is easiest (i.e. OneVsRest), and see if the performance is suitable to your needs, then move to more complicated methods (multilabel, hierarchical methods, etc) only once you need better performance. |
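A minimal sketch of that simple starting point (OneVsRest over binarized label sets) in scikit-learn; the toy features and label sets are placeholders:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

X = np.array([[0.1, 1.0], [0.9, 0.2]])                            # placeholder features, one row per address
label_sets = [{"street_A", "zip_123"}, {"street_B", "zip_456"}]   # placeholder label sets
Y = MultiLabelBinarizer().fit_transform(label_sets)               # binary indicator matrix

clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, Y)        # one binary classifier per label under the hood
print(clf.predict(X))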
H: How to visualize data of a multidimensional dataset (TIMIT)
I've built a neural network for a speech recognition task using the TIMIT dataset. I've extracted features using the perceptual linear prediction (PLP) method. My feature space has 39 dimensions (13 PLP values, 13 first-order derivatives, and 13 second-order derivatives).
I would like to improve my dataset. The only thing I've tried thus far is normalizing the dataset using a standard scaler (standardizing features with mean 0 and variance 1).
My questions are:
Since my dataset has high dimensionality, is there a way to visualize it? For now, I've just plotted the dataset values using a heat map.
Are there any methods for separating my sample even more, making it easier to differentiate between the classes?
My heat map is below, representing 20 samples. In this heatmap there are 5 different phonemes, related to vowels, in particular, uh, oy, aw, ix, and ey.
As you can see, each phoneme is not really distinguishable from the others. Does anyone know how could I improve it?
AI: Like I said in the comment, you'll need to perform dimension reduction, otherwise you won't be able to visualize the $\mathbb{R}^n$ vector space, and this is why:
Visualization of high-dimensional data sets is one of the traditional applications of dimensionality reduction methods such as PCA (Principal components analysis).
In high-dimensional data, such as experimental data where each dimension corresponds to a different measured variable, dependencies between different dimensions often restrict the data points to a manifold whose dimensionality is much lower than the dimensionality of the data space.
Many methods are designed for manifold learning, that is, to find and unfold the lower-dimensional manifold. There has been a research boom in manifold learning since 2000, and there now exist many methods that are known to unfold at least certain kinds of manifolds successfully.
One of the most used methods for dimension reduction is called PCA or Principal component analysis. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. You can read more on this topics here.
So once you reduce your high-dimensional space to an ${\mathbb{R}^3}$ or ${\mathbb{R}^2}$ space, you will be able to project it using an adequate visualization method.
References :
Information Retrieval Perspective to Nonlinear Dimensionality
Reduction for Data Visualization - Jarkko Venna
EDIT: To avoid confusion concerning PCA and dimension reduction, I add the following details:
PCA will allow you to compute the principal components of your vector model, so the information is not lost but "synthesized".
Unfortunately there is no other imaginable way to display 39 dimensions on a 2- or 3-dimensional screen. If you wish to analyze correlations between your 39 features, maybe you should consider another visualization technique.
I would recommend a scatter plot matrix in this case. |
H: Steps in exploratory methods for mild-sized data with mixed categorical and numerical values?
Experienced in signal/image analysis, and new to data science, I recently was challenged with a relatively simple dataset: 100 to 200 items, about 10-20 numerical variables (in the [0-1] or percentage range), with only one variable used at present time for ranking, and 5 to 10 categorical variables, each with few options. A categorical variable takes about 2 to 4 different values.
I would first like to get insight into potential structures in such data. I have browsed Agresti's Analysis of Ordinal Categorical Data, and some have advised me to invest in TDA (topological data analysis). Yet I do not know where to start.
Do you have guidelines, best practices on such REAL data to gradually address the aforementioned issues, from visualization to genuine processing/inference?
AI: You can get a reasonably good approximation of steps for exploratory data analysis (EDA) by reviewing the EDA section of the NIST Engineering Statistics Handbook. Additionally, you might find helpful parts of my related answer here on Data Science SE.
Methods related to EDA are so diverse that it is not feasible to discuss them in a single answer; I will just mention several approaches. If you are interested in applying classification to your data set, you might find the information mentioned in my other answer helpful. In order to detect structures in a data set, you can try applying principal component analysis (PCA). If, on the other hand, you are interested in exploring latent structures in the data, consider using exploratory factor analysis (EFA). |
H: Yarn timeline recovery not enabled error upgrading via ambari
Using the automated upgrade, when I try to upgrade I get:
“YARN Timeline state preserving restart should be enabled
Reason: YARN should have state preserving restart enabled for the Timeline server. The yarn-site.xml property yarn.timeline-service.recovery.enabled should be set to true”
However, using Ambari, whenever I go to change the setting in yarn-site.xml it gets me to create a new configuration group and changes it there. I have moved all of the servers into that config group but the check still fails.
How do I modify the default yarn-site.xml?
Alternatively, does anyone know the actual script (or how to find it) that is kicked off when the automated upgrade is invoked?
AI: Run the following from the ambari server's shell:
/var/lib/ambari-server/resources/scripts/configs.sh set mace gamma yarn-site "yarn.timeline-service.recovery.enabled" "true"
Where mace is the ambari server's host name and gamma is the cluster name |
H: how often should one sample a dataset?
I have some archive data that show the variation of a certain quantity X measured over 1-year intervals.
I'm now facing the problem of understanding how X correlates with another variable Y, that I can measure much more frequently, even daily.
So my question is, is it really necessary to collect data for Y so often? This seems such a waste of resources.
So what I would do is to collect data for Y only for 10% of the days of each year and I would select those days randomly. This way I'm sure that the observations are independent because the sample size is not too large and days have been selected randomly.
What do you think of that? Is that the approach you would follow as well?
AI: In order to show a correlation between X and Y, they should be aggregated over the same time period. Otherwise, there may be seasonality or other features in Y that have been averaged out in X. Is it possible to re-index X to get monthly aggregates? I would not sub-sample Y. Find a common time period for both X and Y to make your comparisons. |
H: Multivariate linear regression in Python
I'm looking for a Python package that implements multivariate linear regression.
(Terminological note: multivariate regression deals with the case where there are more than one dependent variables while multiple regression deals with the case where there is one dependent variable but more than one independent variables.)
AI: You can still use sklearn.linear_model.LinearRegression. Simply make the output y a matrix with as many columns as you have dependent variables. If you want something non-linear, you can try different basis functions, use polynomial features, or use a different method for regression (like a NN). |
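A minimal sketch with synthetic data, showing that LinearRegression handles a two-column y directly:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                           # 3 independent variables
B = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
Y = X @ B + 0.01 * rng.randn(100, 2)           # 2 dependent variables

model = LinearRegression().fit(X, Y)
print(model.coef_.shape)      # (2, 3): one row of coefficients per dependent variable
print(model.predict(X[:2]))   # predictions also have two columns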
H: Classification when one class is other
I am working on a litigation support application using the Enron corpus, which contains about 600,000 unique text documents.
In litigation, one is often concerned with whether a document is responsive or non-responsive. One produces responsive documents to the opposing side, unless they are privileged (e.g., attorney client communication).
Here, I have sample sets of over 200 responsive and non-responsive documents.
The challenge here is that the topic of the responsive documents is about one thing, whereas the non-responsive documents could be about any number of topics, ranging from spam, to soccer practice, to business documents, etc. The non-responsive is what I would call diluted.
I don't know what those non-responsive classes are up front, and there is no purpose or value to breaking them out up front. If a document is non-responsive, then it needs to be quickly (and cheaply) dismissed.
If the samples are random, then when the classification is applied to the corpus, my customers expect the split between responsive and non-responsive documents to be close to the split in the sample.
What is the best approach to classify responsive versus non-responsive in this situation?
Below I briefly describe what I have tried. If there is a better approach, please share.
Using tf-idf, create an average vector for each document class (responsive and non-responsive).
Take the first 1000 terms (sorted by weight), such that each vector is the same length.
Normalized the vectors.
Note: these two vectors are only a 0.03 cosine similarity to each other.
For each document in the responsive sample set, calculate the cosine similarity to the single average vector.
Note: the average is 0.06. It is a small number, as documents in the sample set have around 40 terms, as compared to an average vector of 1000.
In the non-responsive perform the same analysis (compare each non-responsive sample document to the non-responsive average vector).
In the non-responsive that same comparison is 0.03. Basically, the non-responsive average vector is diluted.
Based on this approach, I cannot really conclude that if a document has a higher cosine similarity to the responsive average than to the non-responsive average, then it is responsive. By that measure, 1/3 of the documents in the non-responsive sample set would be classified as responsive.
The non-responsive needs to be handicapped. In this approach, how do I handicap it? Is there another approach that would accomplish the same thing?
AI: As a first step you should try logistic regression on your tf-idf vectors. It is simple to implement and would provide a good baseline for comparison. You can find an implementation in whatever language you're using. You could also try some kind of (perhaps supervised) topic modeling to create a better feature space, but that would be more involved. |
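A minimal sketch of that baseline, assuming docs is a list of the sample documents' text, labels marks them responsive (1) or non-responsive (0), and corpus_docs is the full collection (all placeholder names); the predicted probabilities also give a ranking for review:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression())
print(cross_val_score(pipe, docs, labels, cv=5, scoring="f1"))   # baseline performance

pipe.fit(docs, labels)
probs = pipe.predict_proba(corpus_docs)[:, 1]   # responsiveness scores for the full corpus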
H: Memory efficient structure for membership checking without false positive
The initial task can be described like this:
I have a requirement to deduplicate a HUGE list (potentially billions of items) without storing the original items - it's simply unaffordable.
All I need to know is the answer to the question "Has my system ever seen this element before?"
The closest data structure I have been able to find so far is a Bloom filter, but it has false positives, which are better avoided in my task as they result in data loss.
For example, assuming I need to store at least 2^32 items, with a false positive rate of just 1% (which means 1% of all URLs won't be visited), I would need at least
n = 4,294,967,296, p = 0.01 (1 in 100) → m = 41,167,512,262 (4.79GB), k = 7
4.79GB of memory...
The task itself is a high-scale web crawler, so I need to keep track of already visited URLs (or SHA-1 hashes of those URLs).
Any help is welcome
Thanks!
AI: For web crawler scale, why not use a distributed database like Apache Cassandra? Lookups on indexes are efficient and there are no false positives. |
H: Unsupervised sequence identification
I am looking for the best method to go from a sequence of events such as
time event
1 a
2 b
3 a
4 b
5 c
6 d
7 c
8 d
9 e
Where each letter corresponds to a certain event that occurs at a time. I want to reduce the number of events by aggregating frequently occurring events into a new event. A possible solution data set would look like,
1 a'
2 a'
3 a'
4 a'
5 b'
6 b'
7 b'
8 b'
9 e
where the clusters are created because they occur in a sequence following each other.
I was looking at the text mining algorithms in R with tm, or RNA sequencing
with edgeR. But I have no experience in this, so I was hoping that someone could
shed some light on a common approach to this type of problem.
AI: You may use n-grams to get the frequent sequences of your events.
Here is a little example:
library(tau)
seq <- "ababcdcde"
textcnt(seq, method="ngram", n=3L, decreasing=TRUE)
_ a ab b c cd d _a _ab aba abc ba bab bc bcd cdc cde dc dcd de de_ e e_
2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Select the longest and most frequent n-grams, here ab and cd.
You may also trade off the length and frequency to get maximal compression. The n parameter limits the length of the ngram. |
H: crow probability problem
I am trying to solve the following problem, but am having some difficulty. Can anyone give me some guidance?
There are lots of tourists in Grandeville. The streets in Grandeville run
east to west and go from
..., S. 2nd St., S. 1st St., Broadway St., N. 1st St., N. 2nd St., ...
The avenues run north to south and go from
..., E. 2nd Ave., E. 1st Ave., Broadway Ave., W. 1st Ave., W. 2nd Ave., ...
These streets form a square block grid. For each of the questions
below, the tourist starts at the intersection of Broadway St. and
Broadway Avenue and moves one block in each of the four cardinal
directions with equal probability.
Q1) What is the probability that the tourist is at least 3 city blocks (as
the crow flies) from Broadway and Broadway after 10 moves?
Q2) What is the probability that the tourist is at least 10 city blocks
(as the crow flies) from Broadway and Broadway after 60 moves?
Q3) What is the probability that the tourist is ever at least 5 city
blocks (as the crow flies) from Broadway and Broadway within 10 moves?
Q4) What is the probability that the tourist is ever at least 10 city
blocks (as the crow flies) from Broadway and Broadway within 60 moves?
Q5) What is the probability that the tourist is ever east of East 1st
Avenue but ends up west of West 1st Avenue in 10 moves?
Q6) What is the probability that the tourist is ever east of East 1st
Avenue but ends up west of West 1st Avenue in 30 moves?
Q7) What is the average number of moves until the first time the tourist
is at least 10 city blocks (as the crow flies) from Broadway and
Broadway.
Q8) What is the average number of moves until the first time the tourist
is at least 60 city blocks (as the crow flies) from Broadway and
Broadway.
I am running this MATLAB code, but I am not sure how to find the probability even for the first question.
function random_walk_2d_simulation ( step_num, walk_num )
%*****************************************************************************80
%
%% RANDOM_WALK_2D_SIMULATION simulates a random walk in 2D.
%
% Discussion:
%
% The expectation should be that, the average distance squared D^2
% is equal to the time, or number of steps N.
%
% Or, equivalently
%
% average ( D ) = sqrt ( N )
%
% The program makes a plot of both the average and the maximum values
% of D^2 versus time. The maximum value grows much more quickly,
% and that curve is much more jagged.
%
% Licensing:
%
% This code is distributed under the GNU LGPL license.
%
% Modified:
%
% 03 November 2009
%
% Author:
%
% John Burkardt
%
% Parameters:
%
% Input, integer STEP_NUM, the number of steps to take in one test.
%
% Input, integer WALK_NUM, the number of walks to take.
%
%
% Set up arrays for plotting.
%
time = 0 : step_num;
d2_ave = zeros(step_num+1,1);
d2_max = zeros(step_num+1,1);
%
% Take the walk WALK_NUM times.
%
for walk = 1 : walk_num
x = zeros(step_num+1,1);
y = zeros(step_num+1,1);
for step = 2 : step_num + 1
%
% We are currently at ( X(STEP-1), Y(STEP-1) ).
% Consider the four possible points to step to.
%
destination = [ x(step-1) + 1.0, y(step-1); ...
x(step-1) - 1.0, y(step-1); ...
x(step-1), y(step-1) + 1.0; ...
x(step-1), y(step-1) - 1.0 ];
%
% Choose destination 1, 2, 3 or 4.
%
k = ceil ( 4.0 * rand );
%
% Move there.
%
x(step) = destination(k,1);
y(step) = destination(k,2);
%
% Update the sum of every particle's distance at step J.
%
d2 = x(step)^2 + y(step)^2;
d2_ave(step) = d2_ave(step) + d2;
d2_max(step) = max ( d2_max(step), d2 );
end
end
%
% Average the squared distance at each step over all walks.
%
d2_ave(:,1) = d2_ave(:,1) / walk_num;
%
% Make a plot.
%
clf
plot ( time, d2_ave, time, d2_max, 'LineWidth', 2 );
xlabel ( 'Time' )
ylabel ( 'Distance squared' )
title_string = sprintf ( '2D Random Walk Ave and Max - %d walks, %d steps', walk_num, step_num );
title ( title_string );
return
end
Here is the plot of running the following command
random_walk_2d_simulation(60,10000)
AI: This is a random walk problem. You could write a fairly simple random walk simulation which ends as soon as the tourist is at least 60 city blocks away from the starting location. At any given time you only need to know the tourist's x,y coordinates. Then each time-step, you pick a direction and increment or decrement either x or y. You then need to keep track of the distance from the origin (0,0). Presumably this should be Euclidean distance, since it's all as the crow flies.
- decide which direction to move
- update x, y and the distance
- check whether any of the distance requirements mentioned are met and store the timestep at which it occurred
- if the distance is at least 60 blocks, terminate |
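Since you already have a simulation, the probabilities just come from counting: run many independent walks and take the fraction that satisfy the condition. A minimal sketch in Python for Q1 (probability of being at least 3 blocks away after 10 moves); the other questions follow the same pattern, with the "ever within n moves" variants needing the per-step check described above:

import numpy as np

rng = np.random.RandomState(0)
n_walks, n_steps = 200000, 10

moves = rng.randint(0, 4, size=(n_walks, n_steps))    # 0=E, 1=W, 2=N, 3=S
x = (moves == 0).sum(axis=1) - (moves == 1).sum(axis=1)
y = (moves == 2).sum(axis=1) - (moves == 3).sum(axis=1)
dist = np.sqrt(x**2 + y**2)                           # distance as the crow flies

print((dist >= 3).mean())    # Monte Carlo estimate of the Q1 probability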
H: Predictive analysis of rare events
I'm trying to predict rare events, meaning less than 1% of positive cases. I basically try to predict if a subject will have 0, 1, 2 ... , 6, > 6 failures (there are cases in all those categories).
I've tried several algorithms:
decision trees
random forest
adaboost
grouping using k-means clustering and finding associations with failures (which group has most failure)
In any case, learning either collapses to predicting no failure or has too much variance (leading to poor results on the C.V. set).
Do you know any machine learning algorithms which are better suited for rare events?
Or is it surprising that I get those bad results using those algorithms, which means that my features list is not good?
Thanks a lot.
AI: When you have an unbalanced data set, the algorithm is going to weight its success on each data point equally, meaning the majority class comes out as much more important than the minority class. The typical solution is to sample down the majority class until it's the same size as the minority class, and an alternate (similar) solution is to adjust the cost function so that the minority class is weighted appropriately.
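A minimal sketch of the cost-adjustment option in scikit-learn (most of its classifiers accept a class_weight argument); X and y are placeholders for your features and failure labels:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
clf.fit(X, y)    # rare failure classes now receive proportionally larger weight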
See these similar questions for more:
Should I go for a 'balanced' dataset or a 'representative' dataset?
Quick Guide into training highly imbalanced data sets
What are the implications for training a tree ensemble with highly biased datasets?
Skewed multi-class data
Ratio of positive to negative sample in data set for best classification |
H: Python Seaborn: how are error bars computed in barplots?
I'm using seaborn library to generate bar plots in python. I'm wondering what statistics are used to compute the error bars, but can't find any reference to this in the seaborn's barplot documentation.
I know the bar values are computed based on mean in my case (the default option), and I assume the error bars are computed based on a Normal distribution 95% confidence interval, but I'd like to be sure.
AI: Looking at the source (seaborn/seaborn/categorical.py, line 2166), we find
def barplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None,
estimator=np.mean, ci=95, n_boot=1000, units=None,
orient=None, color=None, palette=None, saturation=.75,
errcolor=".26", ax=None, **kwargs):
so the default is indeed ci=95, i.e. a 95% interval, as you guessed. Note, however, that the interval is computed from bootstrap resamples of the estimator (n_boot=1000 in the signature above), not from a normal approximation.
EDIT: How the CI is calculated: barplot calls utils.ci(), which contains
seaborn/seaborn/utils.py
def ci(a, which=95, axis=None):
"""Return a percentile range from an array of values."""
p = 50 - which / 2, 50 + which / 2
return percentiles(a, p, axis)
and this call to percentiles() is calling:
def percentiles(a, pcts, axis=None):
"""Like scoreatpercentile but can take and return array of percentiles.
Parameters
----------
a : array
data
pcts : sequence of percentile values
percentile or percentiles to find score at
axis : int or None
if not None, computes scores over this axis
Returns
-------
scores: array
array of scores at requested percentiles
first dimension is length of object passed to ``pcts``
"""
scores = []
try:
n = len(pcts)
except TypeError:
pcts = [pcts]
n = 0
for i, p in enumerate(pcts):
if axis is None:
score = stats.scoreatpercentile(a.ravel(), p)
else:
score = np.apply_along_axis(stats.scoreatpercentile, axis, a, p)
scores.append(score)
scores = np.asarray(scores)
if not n:
scores = scores.squeeze()
return scores
axis=None so score = stats.scoreatpercentile(a.ravel(), p) which is
scipy.stats.scoreatpercentile(a, per, limit=(), interpolation_method='fraction', axis=None)[source]
Calculate the score at a given percentile of the input sequence.
For example, the score at per=50 is the median. If the desired quantile lies between two data points, we interpolate between them, according to the value of interpolation. If the parameter limit is provided, it should be a tuple (lower, upper) of two values.
Parameters:
a : array_like
A 1-D array of values from which to extract score.
per : array_like
Percentile(s) at which to extract score. Values should be in range [0,100].
limit : tuple, optional
Tuple of two scalars, the lower and upper limits within which to compute the percentile. Values of a outside this (closed) interval will be ignored.
interpolation_method : {‘fraction’, ‘lower’, ‘higher’}, optional
This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points i and j
fraction: i + (j - i) * fraction where fraction is the fractional part of the index surrounded by i and j.
lower: i.
higher: j.
axis : int, optional
Axis along which the percentiles are computed. Default is None. If None, compute over the whole array a.
Returns:
score : float or ndarray
Score at percentile(s).
and looking in the source for scipy.stats.stats.py we see the signature
def scoreatpercentile(a, per, limit=(), interpolation_method='fraction',
axis=None):
so since seaborn calls it with no parameter for interpolation, it is using fraction.
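Note that the signature above also shows n_boot=1000, which suggests the interval is a bootstrap percentile interval of the estimator rather than a normal-theory one. A rough numpy sketch (my own, assuming the estimator is the mean) of such an interval:
import numpy as np

def bootstrap_ci(a, n_boot=1000, which=95, seed=0):
    # resample with replacement, apply the estimator, then take percentiles
    rng = np.random.RandomState(seed)
    boots = [np.mean(rng.choice(a, size=len(a), replace=True)) for _ in range(n_boot)]
    return np.percentile(boots, [50 - which / 2., 50 + which / 2.])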
On a side note, there is a warning of future obsolescence in stats.scoreatpercentile(), namely
This function will become obsolete in the future. For Numpy 1.9 and higher, numpy.percentile provides all the functionality that scoreatpercentile provides. And it’s significantly faster. Therefore it’s recommended to use numpy.percentile for users that have numpy >= 1.9. |
H: Trying to understand Logistic Regression Implementation
I'm currently using the following code as a starting point to deepen my understanding of regularized logistic regression. As a first pass I'm just trying to do a binary classification on part of the iris data set.
One problem I think I have encountered is that the negative log-loss (computed with loss and stored in loss_vec) doesn't change much from one iteration to the next.
Another challenge I am facing is trying to figure out how to plot the decision boundary once I have learned the logistic regression coefficients. The 0.5 decision boundary I plot using the coefficients is way off, which makes me think I have made a mistake somewhere.
http://fa.bianp.net/blog/2013/numerical-optimizers-for-logistic-regression/
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
X = X[:100,:]
Y = Y[:100]
def phi(t):
# logistic function, returns 1 / (1 + exp(-t))
idx = t > 0
out = np.empty(t.size, dtype=np.float)
out[idx] = 1. / (1 + np.exp(-t[idx]))
exp_t = np.exp(t[~idx])
out[~idx] = exp_t / (1. + exp_t)
return out
def loss(x0, X, y, alpha):
# logistic loss function, returns Sum{-log(phi(t))}
#x0 holds the parameters: w is the weight vector, c is the bias term
w, c = x0[:X.shape[1]], x0[-1]
z = X.dot(w) + c
yz = y * z
idx = yz > 0
out = np.zeros_like(yz)
out[idx] = np.log(1 + np.exp(-yz[idx]))
out[~idx] = (-yz[~idx] + np.log(1 + np.exp(yz[~idx])))
out = out.sum() / X.shape[0] + .5 * alpha * w.dot(w)
return out
def gradient(x0, X, y, alpha):
# gradient of the logistic loss
w, c = x0[:X.shape[1]], x0[-1]
z = X.dot(w) + c
z = phi(y * z)
z0 = (z - 1) * y
grad_w = X.T.dot(z0) / X.shape[0] + alpha * w
grad_c = z0.sum() / X.shape[0]
return np.concatenate((grad_w, [grad_c]))
def bgd(X, y, alpha, max_iter):
step_sizes = np.array([100,10,1,.1,.01,.001,.0001,.00001])
iter_no = 0
x0 = np.random.random(X.shape[1] + 1) #initialize weight vector
#placeholder for coefficients to test against the loss function
next_thetas = np.zeros((step_sizes.shape[0],X.shape[1]+1) )
J = loss(x0,X,y,alpha)
running_loss = []
while iter_no < max_iter:
grad = gradient(x0,X,y,alpha)
next_thetas = -(np.outer(step_sizes,grad)-x0)
loss_vec = []
for i in range(np.shape(next_thetas)[0]):
loss_vec.append(loss(next_thetas[i],X,y,alpha))
ind = np.argmin(loss_vec)
x0 = next_thetas[ind]
if iter_no % 500 == 0:
running_loss.append(loss(x0,X,y,alpha))
iter_no += 1
return next_thetas
AI: There are several issues I see with the implementation. Some are just unnecessarily complicated ways of doing it, but some are genuine errors.
Primary takeaways
A: Try to start from the math behind the model. The logistic regression is a relatively simple one. Find the two equations you need and stick to them, replicate them letter by letter.
B: Vectorize. It will save you a lot of unnecessary steps and computations, if you step back for a bit and think of the best vectorized implementation.
C: Write more comments in the code. It will help those trying to help you. It will also help you understand each part better and maybe uncover errors yourself.
Now let's go over the code step by step.
1. The sigmoid function
Is there a reason for such a complicated implementation in phi(t)? Assuming that t is a vector (a numpy array), then all you really need is:
def phi(t):
    return 1. / (1. + np.exp(-t))
As np.exp() operates element-wise over arrays. Ideally, I'd implement it as a function that can also return its derivative (not necessary here, but might be handy if you try to implement a basic neural net with sigmoid activations):
def phi(t, dt = False):
    if dt:
        return phi(t) * (1. - phi(t))
    else:
        return 1. / (1. + np.exp(-t))
2. Cost function
Usually, the logistic cost function is defined as a log cost in the following way (vectorized): $ \frac{1}{m}\left(-y^T \log(\phi(X\theta)) - (1-y)^T \log(1 - \phi(X\theta))\right) + \frac{\lambda}{2m} \theta^{1T}\theta^{1} $ where $\phi(z)$ is the logistic (sigmoid) function, $\theta$ is the full parameter vector (including the bias weight), $\theta^1$ is the parameter vector with $\theta_1=0$ (by convention, the bias is not regularized) and $\lambda$ is the regularization parameter.
What I really don't understand is the part where you multiply y * z. Assuming y is your label vector $y$, why are you multiplying it with your z before applying the sigmoid function? And why do you need to split the cost function into zeros and ones and calculate losses for either sample separately?
I think the problem in your code really lies in this part: you must be erroneously multiplying $y$ with $X\theta$ before applying $\phi(.)$.
Also, this bit here: X.dot(w) + c. So c is your bias parameter, right? Rather than adding it separately to every element of $X\theta$, the usual convention is to prepend a column of ones to $X$, so that the bias is simply the first element of $\theta$ and is included in $X\theta$ automatically. Yes, you don't regularize it, but you do need to use it in the "prediction" part of the loss function.
In your code, I also see the cost function as being overly complicated. Here's what I would try:
def loss(X,y,w,lam):
#calculate "prediction"
Z = np.dot(X,w)
#calculate cost without regularization
#shape of X is (m,n), shape of w is (n,1)
J = (1./len(X)) * (-np.dot(y.T, np.log(phi(Z))) - np.dot((1-y.T), np.log(1 - phi(Z))))
#add regularization
#temporary weight vector
w1 = copy.copy(w) #import copy to create a true copy
w1[0] = 0
J += (lam/(2.*len(X))) * np.dot(w1.T, w)
return J
3. Gradient
Again, let's first go over the formula for the gradient of the logistic loss, again, vectorized: $\frac{1}{m} ((\phi(X\theta) - y)^TX)^T + \frac{\lambda}{m}\theta^1$.
This will return a vector of derivatives (i.e. gradient) for all parameters, regularized properly (without the bias term regularized).
Here again, you've multiplied by $y$ way too soon: phi(y * z). In fact, you shouldn't have multiplied by $y$ in gradient at all.
This is what I would do for the gradient:
def gradient(X, y, w, lam):
#calculate the prediction
Z = np.dot(X,w)
#temporary weight vector
w1 = copy.copy(w) #import copy to create a true copy
w1[0] = 0
#calc gradient
grad = (1./len(X)) * (np.dot((phi(Z) - y).T,X).T) + (lam/len(X)) * w1
return grad
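To tie these together, a minimal batch gradient-descent loop using the two functions above might look like this (my own sketch; it assumes X already includes a leading column of ones for the bias and y is an m x 1 array of 0/1 labels):
w = np.zeros((X.shape[1], 1))
lam, lr = 0.1, 0.1              # regularization strength and learning rate (arbitrary)
for it in range(1000):
    w = w - lr * gradient(X, y, w, lam)
    if it % 100 == 0:
        print(loss(X, y, w, lam))   # should decrease steadily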
The actual gradient descent implementation seems ok to me, but because there are errors in the gradient and cost function implementations, it fails to deliver :/
Hope this will help you get on track. |
H: Which algorithms or methods can be used to detect an outlier from this data set?
Suppose I have a data set : Amount of money (100, 50, 150, 200, 35, 60 ,50, 20, 500). I have Googled the web looking for techniques that can be used to find a possible outlier in this data set but I ended up confused.
My question is: Which algorithms, techniques or methods can be used to detect possible outlier in this data set?
PS:Consider that the data does not follow a normal distribution. Thanks.
AI: You can use BoxPlot for outlier analysis. I would show you how to do that in Python:
Consider your data as an array:
a = [100, 50, 150, 200, 35, 60 ,50, 20, 500]
Now, use seaborn to plot the boxplot:
import seaborn as sn
sn.boxplot(a)
So, you would get a plot which looks somewhat like this:
Seems like 500 is the only outlier to me. But, it all depends on the analysis and the tolerance level of the analyst or the statistician and also the problem statement.
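The rule the boxplot whiskers are based on (the 1.5 * IQR fences) can also be applied directly; a small numpy sketch for your data:
import numpy as np

a = np.array([100, 50, 150, 200, 35, 60, 50, 20, 500])
q1, q3 = np.percentile(a, [25, 75])
iqr = q3 - q1
outliers = a[(a < q1 - 1.5 * iqr) | (a > q3 + 1.5 * iqr)]
print(outliers)   # [500] for this data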
You can have a look at one of my answers on the CrossValidated SE for more tests.
And there are several nice questions on outliers and the algorithms and techniques for detecting them.
My personal favourite is the Mahalanobis distance technique. |
H: Is Vector in Cosine Similarity the same as vector in Physics?
I'm new to Data Science. I'm trying to understand cosine similarity and it seems like the equation is about finding the distance between two vectors. From what I've Googled, a vector needs to have magnitude and direction. But in CS, it seems like it's a 1-dimensional array. Is vector in CS the same as vector in Physics? If so, what is the direction of a vector. And if a vector is like this [1, 0, 1, 0] what is the magnitude of this vector?
AI: As you ask specifically about the cosine similarity technique: yes, the vectors involved have magnitude and direction, just like the vectors used in physics, since cosine similarity deals with vectors in an inner product space.
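For concreteness, a small numpy sketch (my own) of the magnitude and the cosine similarity of two such arrays:
import numpy as np

a = np.array([1, 0, 1, 0])
b = np.array([1, 1, 0, 0])
magnitude_a = np.linalg.norm(a)                                 # sqrt(1+0+1+0) = sqrt(2)
cos_sim = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))    # 0.5 for these two vectors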
So, the magnitude of a vector is computed exactly as in physics: the square root of the sum of the squares of its elements. For [1, 0, 1, 0] that is the square root of 1+0+1+0, i.e. the square root of 2. |
H: SPARK 1.5.1: Convert multi-labeled data into binary vector
I am using SPARK 1.5.1, and I have DataFrame that looks like follow:
labelsCol, featureCol
(Label1, Label2, Label 32), FeatureVector
(Label1, Label10, Label16, Label30, Label48), FeatureVector
...
(Label1, label 95), FeatureVector
The first column is the list of labels for that sample, and in total I have 100 label.
I would like to build a binary classifier for each label, so I want to transform the labels list column into a binary vector.
The binary vector will have a length of 100 and the value will be 0 or 1 depends on the existence of the label for sample.
Is there any strait forward solution for this?
AI: Spark only recently implemented CountVectorizer, which will take the labels (as strings) and encode them as your 100-dimensional vector (assuming all 100 labels show up somewhere in your dataset). Once you have those vectors, it should be a simple step to threshold them to make them 0/1 instead of a frequency. |
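If you prefer to build the 0/1 vector yourself, the core transformation is independent of the Spark API version; a plain Python sketch (with an assumed fixed vocabulary of 100 label names) that could be applied via a UDF or an RDD map:
labels = ["Label%d" % i for i in range(1, 101)]      # assumed fixed vocabulary of 100 labels
index = {l: i for i, l in enumerate(labels)}

def to_binary_vector(label_list):
    # 1.0 at the position of each label present in the sample, 0.0 elsewhere
    vec = [0.0] * len(labels)
    for l in label_list:
        vec[index[l]] = 1.0
    return vec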
H: Does it makes sense to apply feature scaling on timestamp
I was wondering if it makes sense to apply normal standardization on a feature like timestamp ?
The data that I process are network packets.
Thank you
AI: For time series analysis. Yes
But, turning data into a computable object for using in the ML computation? No
Using the data as a feature. Then, Yes
I would give you a general time series example:
Consider the number of days in months. The irregularity causes friction while analyzing the model.
Consider this:
So, transformations of this type are helpful when analyzing time series, and can noticeably reduce friction in the models.
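For example, with pandas you might derive calendar features from a raw packet timestamp and encode the cyclical ones, rather than standardizing the raw timestamp itself (a sketch; the column names and epoch-seconds format are assumptions):
import pandas as pd
import numpy as np

df['ts'] = pd.to_datetime(df['timestamp'], unit='s')
df['hour'] = df['ts'].dt.hour
# cyclical encoding avoids the artificial jump from hour 23 to hour 0
df['hour_sin'] = np.sin(2 * np.pi * df['hour'] / 24)
df['hour_cos'] = np.cos(2 * np.pi * df['hour'] / 24)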
Link of the explanation |
H: Typing error handling n-gram character index and vector space model
Suppose I apply tri-gram indexing to my document collection and am implementing a vector-space model to help retrieve documents. In the text it is mentioned that implementing a trigram index will introduce a new step in filtering the result. However, what are the problems that I need to be aware of if I implement a tfidf/vector-space model? The reason I am exploring this option is to try to handle basic spelling errors; does it really work in practice?
AI: Trigram models can be more powerful for document retrieval than unigram models, but if you want to handle spelling errors, they will not be of much help. You need some form of fuzzy matching for that.
For example the string, "I like dosg too" would fool a unigram model because "dosg" is likely "dogs" misspelled, and it will encode it as "dosg" : 1. But you have the same problem in a trigram model. It will encode "I like dosg" : 1, "like dosg too" : 1. Which is not really better, as it will still not match any trigrams with the word "dogs" in it. |
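To give a flavour of what such fuzzy matching can look like, here is a small sketch (my own, not part of the vector-space model itself) that compares character trigrams instead of word n-grams, so that "dogs" and "dosg" still share some trigrams:
def char_trigrams(s):
    s = "  " + s + "  "                     # pad so boundary characters form trigrams too
    return {s[i:i+3] for i in range(len(s) - 2)}

def jaccard(a, b):
    return len(a & b) / float(len(a | b))

print(jaccard(char_trigrams("dogs"), char_trigrams("dosg")))   # > 0, unlike an exact word match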
H: How to define a distance measure between two IP addresses?
I have IP addresses as feature and I would like to know how much two IP addresses are similar to each other to use the difference in an Euclidean distance measure (in order to quantify the similarities of my data points). What tactic can I use for this?
AI: If I understood them correctly, both Jeremy and Edmund's (first) solutions are the same, namely, plain euclidean distance in a 4-dimensional space of IP addresses. BTW, I think a very fast alternative to euclidean distance would be to calculate a Hamming distance bit-wise.
Edmund's first update would be better than his second. The reason is simple to state: his 2nd update tries to define a distance measure by considering a non-linear function of the coordinates of a 4D vector. That however will most likely destroy the key properties that it needs to satisfy in order to be a metric, namely
Injectivity: $d(IP_1,IP_2)=0 \iff IP_1=IP_2$,
Symmetry: $d(IP_1,IP_2)=d(IP_2,IP_1)$, and
Triangular inequality: $d(IP_1,IP_2)\leq d(IP_1,IP_3)+d(IP_3,IP_2)\,\forall IP_3$.
The latter is key for later interpreting small distances as close points in IP space. One would need a linear (in the coordinates) distance function. However, simple euclidean distance is not enough as you saw.
Physics (well, differential geometry actually) could lead to a nice solution to this problem: define a metric tensor $g$. In plain english, give weights to each pair of coordinates, take each pair difference, square it and multiply it by its weight, and then add those products. Take the square root of that sum and define it as your distance.
For the sake of simplicity, one could start trying with a diagonal metric tensor.
Example: Say you take $g=\begin{pmatrix}1000 &0 &0 &0 \\0 &100&0&0\\0&0&10&0\\0&0&0&1\end{pmatrix}$ $IP_1=(x_1,x_2,x_3,x_4)$ and
$IP_2=(y_1,y_2,y_3,y_4)$. Then the square of the distance is given by
$$d(IP_1,IP_2)^2=1000*(x_1-y_1)^2+100*(x_2-y_2)^2+\\ \,+10*(x_3-y_3)^2+1*(x_4-y_4)^2$$
For $IP_1=192.168.1.1,\,IP_2=192.168.1.2$ the distance is clearly 1.
However, for $192.168.1.1$ and $191.168.1.1$ the distance is
$\sqrt{1000}\approx 32$
Eventually you could play around with different weights and set a kind of normalization where you could fix the value of the maximal distance $d(0.0.0.0,FF.FF.FF.FF)$.
Furthermore, this set up allows for more complex descriptions of your data where the relevant distance would contain "cross-products" of coordinates like say $g_{13}*(x_1-y_1)*(x_3-y_3)$.
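For illustration, a small Python sketch (my own) of this weighted quadratic-form distance; the same weighting idea could equally be applied to the 32-bit representation discussed in the edit below:
import numpy as np

g = np.diag([1000., 100., 10., 1.])          # the diagonal metric tensor from the example

def ip_distance(ip1, ip2):
    d = np.array(ip1, dtype=float) - np.array(ip2, dtype=float)
    return np.sqrt(d.dot(g).dot(d))

print(ip_distance((192, 168, 1, 1), (192, 168, 1, 2)))   # 1.0
print(ip_distance((192, 168, 1, 1), (191, 168, 1, 1)))   # about 31.6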
EDIT: While this would be a better "weighting" method than the others I addressed, I realize now it is actually meaningless: as Anony-Mousse and Phillip mention, IPs are indeed 32-dimensional. This means in particular that giving the same weight to all bits in, say, the 2nd group is in general not sound: one bit could be part of the netmask while the other is not. See Anony-Mousse's answer for additional objections. |
H: Is there any difference between feature extraction and feature learning?
It appears to me that "feature extraction" and "feature learning" are equivalent concepts, however there are 2 separate wikipedia articles dedicated to them that are notably different. In particular, only in the Feature Learning article Neural Networks/Deep Learning are mentioned. However, it seems like they would be equally appropriate for either because autoencoders extract features from the raw (typically image) data and those now extracted features feed into the next layers.
So, what's the real difference between these terms?
AI: Yes I think so. Just by looking at Feature Learning and Feature extraction you can see it's a different problem.
Feature extraction is just transforming your raw data into a sequence of feature vectors (e.g. a dataframe) that you can work on.
In feature learning, you don't know in advance which features you can extract from your data. In fact, you will probably apply machine learning techniques just to discover which features are good to extract from your dataset. Then you can extract them and apply machine learning to the extracted features. Deep learning techniques are one example of this.
In the word2vec toolkit, for instance, you extract vectors from documents which can't be easily interpreted by a human, you can't look at it and tell what features have been extracted at all. It's just a mass of vectors which, for some reason, give good empirical results. |
H: Predicting New Data with Naive Bayes
Say I had the following training set for a Naive Bayes algorithm.
Outlook Person Play Golf?
------- ------ ----------
Sunny Joe Yes
Sunny Mary Yes
Raining Joe Yes
Raining Mary No
Raining Harry Yes
If try to predict whether Harry will play golf on a sunny day (which I have no data for).
Would it be correct to exclude the person attribute and use the remaining outlook attribute to calculate the probability of this happening? Or could that potentially cause problems with a larger data set that I'm unaware of?
AI: Among Naive Bayes assumptions the main one is that features are conditionally independent. For our problem we would have:
$$P(Play|Outlook,Person) \propto P(Play)P(Outlook|Play)P(Name|Play)$$
To address question is Harry going to play on a sunny day?, you have to compute the following:
$$P(Yes|Sunny,Harry) \propto P(Yes)P(Sunny|Yes)P(Harry|Yes)$$
$$P(No|Sunny,Harry) \propto P(No)P(Sunny|No)P(Harry|No)$$
and choose the probability with bigger value.
That is what theory says. To address your question I will rephrase the main assumption of Naive Bayes. The assumption that features are independent given the output basically means that the information given by the joint distribution can be obtained from the product of the marginals. In plain English: assume you can tell whether Harry plays on sunny days if you only know how much Harry plays in general and how much anybody plays on sunny days. As you can see, you simply would not use the fact that Harry plays on sunny days even if you had that record in your data, simply because Naive Bayes assumes there is no useful information in the interaction between the features; this is the precise meaning of conditional independence, which Naive Bayes relies upon.
That said, if you want to use the interaction of features, then you would have to either use a different model or simply add a new combined feature, like a concatenation of the name and outlook factors.
In conclusion, when you do not include names in your input features you will have a general-wisdom classifier along the lines of "everybody plays, no matter the outlook", since most of the instances have play=yes. If you include the name in your input variables you allow that general wisdom to be altered by something specific to the player. So your classifier's wisdom would look like "players generally prefer to play, no matter the outlook, but Mary is less likely to play when it is raining".
There is however a potential problem with Naive Bayes on your data set. This problem is related to the potentially large number of levels of the variable Name. To approximate the probabilities, the usual rule applies: more data, better estimates. This would probably work for the variable Outlook, since it has only two levels and adding more data would probably not increase the number of levels, so the estimates for Outlook would likely get better with more data. For Name, however, you will not have the same situation: adding more instances is often only possible by adding more names, which means that on average the number of instances per name stays relatively stable. And if you have a single instance, as is the case for Harry, you do not have enough data to estimate $P(Harry|No)$.
As it happens, this problem can be alleviated using smoothing. Laplace smoothing (or a more general form like Lidstone smoothing) is very helpful here. The reason is that estimates based on maximum likelihood have big problems with cases like this.
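For concreteness, here is a rough back-of-the-envelope computation (my own, using add-one smoothing, with 2 outlook levels and 3 names from the table above):
def smoothed(count, total, n_levels, alpha=1.0):
    # Laplace/Lidstone smoothing: (count + alpha) / (total + alpha * n_levels)
    return (count + alpha) / (total + alpha * n_levels)

p_yes, p_no = 4.0 / 5, 1.0 / 5                                # priors: 4 "Yes" rows, 1 "No" row
score_yes = p_yes * smoothed(2, 4, 2) * smoothed(1, 4, 3)     # P(Sunny|Yes) * P(Harry|Yes)
score_no  = p_no  * smoothed(0, 1, 2) * smoothed(0, 1, 3)     # P(Sunny|No)  * P(Harry|No)
print(score_yes, score_no)                                    # about 0.114 vs 0.017, so predict "Yes"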
I hope it answers at least partially your question. |
H: Events prediction with time series of continuous variables as features
We have the feeling that behavior of a device in terms of continuous variables (fans speeds, temperatures, voltages, ...) has influence on rare events happening (components failures).
I now have to build a predictive model for that, to proof the influence.
Those continuous features are given as time series, and events are punctual.
I've made a model based on descriptive statistics of those variable (see this question) with decision tree, random forest, adaboost, and clustering but it doesn't works. I will still improve by balancing classes, but I'm convinced it is not the best approach.
I'm pretty sure that there are nicer algorithms for such predictions (this is quite common problem), but I don't find anything.
Do you have ideas?
Thanks a lot
PS: I'm working with Python and cython
AI: First of all, you will not be able to prove anything with a model, you will have false positives/negatives. With a good model you may be able show what variables may be an indicator of component failure.
In problems like this feature generation can have the most important influence on the accuracy of the model. The time stamps can be used for aggregation. For example, you may aggregate metrics per device per hour. The metrics/features that you might create for input into the model might be average/max temperature or fan speed, rate of change in temperature or fan speed, number of seconds device was above some threshold temperature or fan speed, boolean indicator of voltage spike, etc. There could be any number of features you may create. You can then find which features are not strong predictors and remove these columns to reduce noise, if need be. |
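For instance, with pandas the hourly aggregation per device could look roughly like this (the column names are assumptions):
import pandas as pd

df['hour'] = pd.to_datetime(df['timestamp']).dt.floor('H')
grouped = df.groupby(['device_id', 'hour'])
features = pd.DataFrame({
    'temp_mean': grouped['temperature'].mean(),
    'temp_max': grouped['temperature'].max(),
    'fan_max': grouped['fan_speed'].max(),
    'frac_hot': grouped['temperature'].apply(lambda s: (s > 80).mean()),  # share of readings above a threshold
}).reset_index()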
H: What is a benchmark model?
I am working on a breast cancer dataset (http://kdd.org/kdd-cup/view/kdd-cup-2008). I need to perform classification on the data using C4.5 algorithm, after doing any necessary pre-processing.
A section of the report that I have to write is "benchmark models" and I have no idea what that means. I googled the term and it doesn't seem to be something well defined in data mining. Any idea what that means?
Thanks!
AI: Benchmarking is the process of comparing your result to existing methods. You may compare to published results using another paper, for example. If there is no other obvious methodology against which you can benchmark, you might compare to the best naive solution (guessing the mean, guessing the majority class, etc) or a very simple model (a simple regression, K Nearest Neighbors). If the field is well studied, you should probably benchmark against the current published state of the art (and possibly against human performance when relevant). |
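For example, a majority-class baseline is easy to set up with scikit-learn (a sketch; X and y are assumed to be your features and labels) and makes a sensible benchmark to report next to your C4.5 results:
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

baseline = DummyClassifier(strategy='most_frequent')   # always predicts the majority class
print(cross_val_score(baseline, X, y, cv=5, scoring='accuracy').mean())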
H: How to model compositional data?
What is the best way to model compositional data problems?
Compositional data is when each example or sample is a vector that sums to 1 (or 100%). In my case, I am interested in the composition of minerals in a rock and I have sensors that tell me the sum of the minerals but not the components that make up the sum.
For example, lets say I have two minerals, $m_1$ and $m_2$, that are made up of 3 elements (like copper and other elements from the periodic table) which form a vector of length 3:
m1 = [0.1, 0.3, 0.6]
m2 = [0.6, 0.2, 0.2]
If a rock has 25% of $m_1$ and 75% of $m_2$, the sensor reading produces the sum of the two minerals (shown in bottom-left subplot below):
$$
\begin{align}
&0.25*m_1 + 0.75*m_2 \\
=&0.25*[0.1, 0.3, 0.6] + 0.75*[0.6, 0.2, 0.2] \\
=&[0.475, 0.225, 0.3]
\end{align}
$$
I would like to know how to model and solve the problem of unmixing a composition into its underlying components, where the sum of the elements is normalized to 100% (e.g. $0.25m_1 + 0.75m_2$ has the same composition as $0.50m_1 + 1.50m_2$).
Furthermore, my example is simplistic; in reality a composition can have more than just 2 minerals (up to 3000) and each mineral is made up of 118 elements, not just 3 (all the elements of periodic table - though many elements will be zero). The elemental composition of a mineral is assumed to be known (definition of $m_1$ and $m_2$ in the example). Also, the sensor reading is noisy - each element of the observed composition is assumed to have Gaussian noise.
AI: First normalize the result vector. E.g. [.95, .45, .6] by dividing by 2 (sum of the members); giving [.475, .225, .3].
Let $x$ be the share of the first mineral, then $(1-x)$ is the share of the second mineral.
Solve the three linear equation which must give the same result.
$.1 * x + .6 * (1-x) = .475$
$.3 * x + .2 * (1-x) = .225$
$.6 * x + .2 * (1-x) = .3$
Result is $x = 1/4$ as expected.
UPDATE
The above proposed solution works, of course, only if the number of minerals is less than or equal to the number of elements.
The update of the question clearly states that this is not the case (3000 minerals and 118 elements).
Let's simulate this opposite case on a small example with 3 minerals and 2 elements.
m1 <- c(0.2, 0.8)
m2 <- c(0.4, 0.6)
m3 <- c(0.9, 0.1)
and with the mix of minerals
x <- c(.25, .65, .1)
which produce a measurement of
t(matrix(c(m1,m2,m3),3,2, byrow= T)) %*% x
[,1]
[1,] 0.4
[2,] 0.6
This gives following linear equations
$m_1 + m_2 + m_3 = 1$
$.2 m_1 + .4 m_2 + .9 m_3 = .4$
$.8 m_1 + .6 m_2 + .1 m_3 = .6$
The solution of the equations is not unique
$m_3 \in [0, 1/3.5]$
$m_1 = 2 - 2 * m_2 - 4.5 * m_3$
$m_2 = 1 - 3.5 * m_3$
Some alternative solutions are provided below
0 1 0
0.125 0.825 0.050
0.375 0.475 0.150
0.5 0.3 0.2
0.625 0.125 0.250
0.71428571 0.00000000 0.28571429
This is of course an oversimplified example, but it shows that you should carefully select the optimization goal, as there can be many "equally good" solutions. For example, promoting sparsity will find the solution [0 1 0], which is far from the mix we actually used. |
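For the noisy, underdetermined case, one common choice of optimization goal is non-negative least squares; a rough Python sketch (my own) using the 3-mineral example above:
import numpy as np
from scipy.optimize import nnls

A = np.array([[0.2, 0.4, 0.9],     # columns are the known mineral compositions
              [0.8, 0.6, 0.1]])
b = np.array([0.4, 0.6])           # observed (possibly noisy) element composition
x, residual = nnls(A, b)           # non-negative least squares fit
shares = x / x.sum()               # renormalize so the shares sum to 1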
H: Understanding convolutional pooling sizes (deep learning)
I'm dumb but still trying to understand the code provided from this e-book on deep learning, but it doesn't explain where the n_in=40*4*4 comes from. 40 is from the 40 previous feature maps, but what about the 4*4?
>>> net = Network([
ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
filter_shape=(20, 1, 5, 5),
poolsize=(2, 2),
activation_fn=ReLU),
ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
filter_shape=(40, 20, 5, 5),
poolsize=(2, 2),
activation_fn=ReLU),
FullyConnectedLayer(
n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
FullyConnectedLayer(
n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
mini_batch_size)
>>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,
validation_data, test_data)
For instance, what if I do a similar analysis in 1D as shown below, which should that n_in term be?
>>> net = Network([
ConvPoolLayer(image_shape=(mini_batch_size, 1, 81, 1),
filter_shape=(20, 1, 5, 1),
poolsize=(2, 1),
activation_fn=ReLU),
ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 1),
filter_shape=(40, 20, 5, 1),
poolsize=(2, 1),
activation_fn=ReLU),
FullyConnectedLayer(
n_in=40*???, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
FullyConnectedLayer(
n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],
mini_batch_size)
>>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,
validation_data, test_data)
Thanks!
AI: In the given example from the e-book, the number $4$ comes from $(12-5+1) \over 2$, where $12$ is the input image size $(12*12)$ of the second constitutional layer; 5 is the filter size (5*5) used in that layer; and $2$ is the poolsize.
This is similar to how you get the number $12$ from the first constitutional layer: $12= {(28-5+1) \over 2}$. It's well explained in your linked chapter.
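If it helps, the arithmetic can be wrapped in a tiny helper (my own) for a valid convolution followed by pooling:
def conv_pool_out(in_size, filter_size, pool_size):
    # output side length after a "valid" convolution and then pooling
    return (in_size - filter_size + 1) // pool_size

print(conv_pool_out(28, 5, 2))  # 12, the input size of the second layer
print(conv_pool_out(12, 5, 2))  # 4, hence n_in = 40*4*4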
Regarding your "For instance" code, your 6th line is not correct:
ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 1),
The number 12 should be replaced by $(81-5+1)\over2$ which unfortunately is not an integer. You may want to change the filter_shape in the first layer to (6,1) to make it work. In that case, your 6th line should be:
ConvPoolLayer(image_shape=(mini_batch_size, 20, 38, 1),
and your 11th line should be:
n_in=40*17*1, n_out=1000, activation_fn=ReLU, p_dropout=0.5), |
H: how to make new class from the test data
I have a list of accounts as data set and I need to group the accounts that refer to the same user using many features.
I'm thinking to use machine learning( but I'm new in this domain), because I know the group of each account for the training data set.
ex of training data:
account-id Feature1 Feature2 class(Group)
1 T1 P4 Gr1
2 T2 P4 Gr1
3 T3 P2 Gr2
The problem is in the testing of the data, when a new account arrives that belongs to a new group not learned before in the training set.
ex of testing data:
account-id Feature1 Feature2
4 T5 P5
5 T6 P5
6 T3 P2
The groups of the testing data should be as following:
account-id Feature1 Feature2 class(Group)
4 T5 P5 Gr3
5 T6 P5 Gr3
6 T3 P2 Gr2
The accounts 4 and 5 are in a new group (Gr3) which is not learned before in the training data.
My question is: how could I group the new data under a new class that was not defined before in the learning phase, and which algorithm can I use to solve this issue?
AI: I think you need to read about online learning; it refers to learning when new data is constantly being added. In these cases you need an algorithm that can update itself as new data arrives (i.e. it doesn't need to recalculate itself from scratch). In other words, it learns incrementally.
There are incremental versions for support vector machines (SVMs) and for neural networks. Also, bayesian networks can be made to work incrementally. |
H: On interpreting the statistical significance of R squared
I have performed a linear regression analysis to two series of data, each of which has 50 values. I did the analysis in SPSS and as a result got a table which says that my adjusted R squared is 0.145 and its significance is 0.004.
Being 0.004 < 0.05, I assume my adjusted R squared is significant.
1) Does it mean my adjusted R squared is credible?
2) What does happen if you get a significance which is > 0.05? Does it imply the adjusted R squared can be trusted with credibility but also that the two datasets are not or poorly correlated?
AI: The p-value is the strength of evidence against the null hypothesis. In this case the null is that the coefficient is equal to zero. Your p-value of 0.004 is strong evidence against that null, so your model is very likely describing a real relationship in the underlying data.
R-squared describes the percent of variation that is explained by the model. Your value is very low; 14.5%. Of all the "activity" in the data your model is only explaining 14.5% of it.
So you have a situation where the model is most likely explaining real variation in the data, but not very much of it. I would suggest altering the model and refitting. |
H: Examining a DocumentTermMatrix in RTextTools
I created a DocumentTermMatrix for text mining using RTextTools. The rows for this DocumentTermMatrix correspond to dataframe rows and matix columns correspond to words. My question is : How can I get the words (labels vector) for examining the DocumentTermMatrix ? In other words, How can I get the vector of these 904 words?
require(RTextTools,quietly=TRUE)
data(USCongress)
doc_matrix <- create_matrix(USCongress$text, language="english", removeNumbers=TRUE, stemWords=TRUE, removeSparseTerms=.998)
dim(USCongress)
[1] 4449 6
dim(doc_matrix)
[1] 4449 904
AI: A DocumentTermMatrix is a simple_triplet_matrix. You can turn it into an ordinary matrix with the as.matrix command and then use all matrix functions.
# turn into simple matrix
mat <- as.matrix(doc_matrix)
# vector of the words
word_vector <- colnames(mat)
# Dataframe containing words and their frequency
df_words <- data.frame(words = colnames(mat), frequency = colSums(mat), row.names = NULL) |
H: Why do we need to use sysfunc when we call a SAS function inside a SAS macro
I saw this piece of code in my project:
%let num = test;
%let x=%sysfunc(trim(&num));
Why could not I write:
%let x= %trim(&num);
Why did I need to use sysfunc?
Under what circumstances can I call a function inside a macro without using sysfunc?
AI: Without the sysfunc(), the expression will not be evaluated. You will not be assigning the value of the expression trim(&num) to the macrovariable, but rather the whole expression.
If you want to store the result of an expression, you need to execute that function with sysfunc()
http://support.sas.com/documentation/cdl/en/mcrolref/61885/HTML/default/viewer.htm#z3514sysfunc.htm |
H: Feature extraction of images in Python
In my class I have to create an application using two classifiers to decide whether an object in an image is an example of phylum porifera (seasponge) or some other object.
However, I am completely lost when it comes to feature extraction techniques in python. My advisor convinced me to use images which haven't been covered in class.
Can anyone direct me towards meaningful documentation or reading or suggest methods to consider?
AI: In images, some frequently used techniques for feature extraction are binarizing and blurring
Binarizing: converts the image array into 1s and 0s while flattening the image to a 2D matrix. Gray-scaling can also be used instead; it gives you a numerical matrix of the image, and grayscale images take much less space when stored on disk.
This is how you do it in Python:
from PIL import Image
%matplotlib inline
#Import an image
image = Image.open("xyz.jpg")
image
Example Image:
Now, convert into gray-scale:
im = image.convert('L')
im
will return you this image:
And the matrix can be seen by running this:
import numpy as np
np.array(im)
The array would look something like this:
array([[213, 213, 213, ..., 176, 176, 176],
[213, 213, 213, ..., 176, 176, 176],
[213, 213, 213, ..., 175, 175, 175],
...,
[173, 173, 173, ..., 204, 204, 204],
[173, 173, 173, ..., 205, 205, 204],
[173, 173, 173, ..., 205, 205, 205]], dtype=uint8)
Now, use a histogram plot and/or a contour plot to have a look at the image features:
from pylab import *
# create a new figure
figure()
gray()
# show contours with origin upper left corner
contour(im, origin='image')
axis('equal')
axis('off')
figure()
hist(np.array(im).flatten(), 128)
show()
This would return you a plot, which looks something like this:
Blurring: Blurring algorithm takes weighted average of neighbouring pixels to incorporate surroundings color into every pixel. It enhances the contours better and helps in understanding the features and their importance better.
And this is how you do it in Python:
from PIL import ImageFilter
figure()
p = image.convert("L").filter(ImageFilter.GaussianBlur(radius = 2))
p.show()
And the blurred image is:
So, these are some ways in which you can do feature engineering. And for advanced methods, you have to understand the basics of Computer Vision and neural networks, and also the different types of filters and their significance and the math behind them. |
H: Bagging vs Dropout in Deep Neural Networks
Bagging is the generation of multiple predictors that work together as an ensemble, acting as a single predictor. Dropout is a technique that teaches a neural network to average over all possible subnetworks. Looking at the most important Kaggle competitions, it seems that these two techniques are used together very often. I can't see any theoretical difference besides the actual implementation. Can anyone explain why we should use both of them in any real application, and why performance improves when we use both?
AI: Bagging and dropout do not achieve quite the same thing, though both are types of model averaging.
Bagging is an operation across your entire dataset which trains models on a subset of the training data. Thus some training examples are not shown to a given model.
Dropout, by contrast, is applied to features within each training example. It is true that the result is functionally equivalent to training exponentially many networks (with shared weights!) and then equally weighting their outputs. But dropout works on the feature space, causing certain features to be unavailable to the network, not full examples. Because each neuron cannot completely rely on one input, representations in these networks tend to be more distributed and the network is less likely to overfit. |
H: Finding the top K most similar sets
I have a database containing sets of words. So for example, I have a database that has:
{happy, birthday, to, you}
{how, are, you}
...
Given a query set, let's say {how, was, your, birthday}, I want to find the top K sets in the database that are most similar to my query. The similarity metric can be something like the Jaccard index. Right now, I go through the database one by one, calculate the Jaccard index, and keep track of the top K scores found so far. I was wondering if there are any data structures or methods that would allow me to find the top K scores more efficiently. Right now it's a linear search. Thanks
AI: Do you have any information about your data set? Is it sparse? Will most similarities be zero? Is the total dictionary small? You could consider an inverted index. For example
word query_id
W1 [1, 3, 6]
W2 [2, 5]
W3 [1, 3, 4]
W4 [2, 3, 4]
W5 [2, 3, 6]
query_id query
1 W1 W3
2 W2 W4 W5
3 W1 W3 W4 W5
4 W3 W4
5 W2
6 W1 W5
Here W_i is a word, e.g. birthday and query_id is the id of the query in the database. e.g. {how, are, you} might have id 22. Now you get a query {W1 W3 W5}. Aggregate counts on the inverted index. W1 was seen in queries 1, 3, and 6. W3 in 1, 3, and 4, etc.
query_id count
1 2
2 1
3 3
4 1
6 2
The count will be the number of words in common with the incoming query; this is the numerator of the Jaccard similarity. So, to find the top k you can start with the queries with the highest count. query_id 3 has the highest count and its similarity is 3/4.
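A rough Python sketch (my own, with hypothetical names) of this inverted-index counting:
from collections import Counter, defaultdict
import heapq

def build_index(queries):              # queries: {query_id: set_of_words}
    index = defaultdict(list)
    for qid, words in queries.items():
        for w in words:
            index[w].append(qid)
    return index

def top_k(query, queries, index, k):
    counts = Counter()
    for w in query:                    # query is a set of words
        for qid in index.get(w, []):
            counts[qid] += 1           # numerator of the Jaccard similarity
    scored = ((c / float(len(query | queries[qid])), qid) for qid, c in counts.items())
    return heapq.nlargest(k, scored)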
If you have a massive database there are techniques like locality sensitive hashing which will basically reduce the search space into a small bucket. The incoming query gets hashed and lands in a bucket. You can then do a linear search with all the queries in this bucket to find the nearest k. |
H: With unbalanced class, do I have to use under sampling on my validation/testing datasets?
I’m a beginner in machine learning and I’m facing a situation. I’m working on a Real Time Bidding problem, with the IPinYou dataset and I’m trying to do a click prediction.
The thing is that, as you may know, the dataset is very unbalanced : Around 1300 negative examples (non click) for 1 positive example (click).
This is what I do:
Load the data
Split the dataset into 3 datasets :
A = Training (60%)
B = Validating (20%)
C = Testing (20%)
For each dataset (A, B, C), do an under-sampling on each negative class in order to have a ratio of 5 (5 negative example for 1 positive example). This give me 3 new datasets which are more balanced:
A’
B’
C’
Then I train my model with the dataset A’ and logistic regression.
My question are:
Which dataset do I have to use for validation ? B or B’ ?
Which dataset do I have to use for testing ? C or C’
Which metrics are the most relevant to evaluate my model? F1Score seems to be a well used metric. But here due to the unbalanced class (if I use the datasets B and C), the precision is low (under 0.20) and the F1Score is very influenced by low recall/precision. Would that be more accurate to use aucPR or aucROC ?
If I want to plot the learning curve, which metrics should I use ? (knowing that the %error isn’t relevant if I use the B’ dataset for validating)
Thanks in advance for your time !
Regards.
AI: Great question... Here are some specific answers to your numbered questions:
1) You should cross validate on B not B`. Otherwise, you won't know how well your class balancing is working. It couldn't hurt to cross validate on both B and B` and will be useful based on the answer to 4 below.
2) You should test on both C and C` based on 4 below.
3) I would stick with F1 and it could be useful to use ROC-AUC and this provides a good sanity check. Both tend to be useful with unbalanced classes.
4) This gets really tricky. The problem with this is that the best method requires that you reinterpret what the learning curves should look like or use both the re-sampled and original data sets.
The classic interpretation of learning curves is:
Overfit - Lines don't quite come together;
Underfit - Lines come together but at too low an F1 score;
Just Right - Lines come together with a reasonable F1 score.
Now, if you are training on A` and testing on C, the lines will never completely come together. If you are training on A` and testing on C` the results won't be meaningful in the context of the original problem. So what do you do?
The answer is to train on A` and test on B`, but also test on B. Get the F1 score for B` where you want it to be, then check the F1 score for B. Then do your testing and generate learning curves for C. The curves won't ever come together, but you will have a sense of the acceptable bias... its the difference between F1(B) and F1(B`).
Now, the new interpretation of your learning curves is:
Overfit - Lines don't come together and are farther apart than F1(B`)-F1(B);
Underfit - Lines don't come together but the difference is less than F1(B`)-F1(B) and the F1(C) score is under F1(B);
Just right - Lines don't come together but the difference is less than F1(B`)-F1(B) with an F1(C) score similar to F1(B).
General: I strenuously suggest that for unbalanced classes you first try adjusting the class weights in your learning algorithm instead of over/under-sampling, as it avoids all of the rigmarole that we've outlined above. It's very easy in libraries like scikit-learn and pretty easy to hand code in anything that uses a sigmoid function or a majority vote.
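For example, in scikit-learn this is a one-line change (a sketch; the choice of estimator and the data names are assumptions):
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(class_weight='balanced')   # or an explicit dict, e.g. {0: 1, 1: 1300}
clf.fit(X_train, y_train)                           # X_train / y_train assumed to be defined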
Hope this helps! |
H: Best Julia library for neural networks
I have been using this library for basic neural network construction and analysis.
However, it does not have support for building multi-layered neural networks, etc.
So, I would like to know of any nice libraries for doing advanced neural networks and Deep Learning in Julia.
AI: Mocha.jl - Mocha is a Deep Learning framework for Julia, inspired by the C++ framework Caffe.
Project with good documentation and examples.
Can be run on CPU and GPU backend. |
H: Removing strings after a certain character in a given text
I have a dataset like the one below. I would like to remove all characters after the character ©. How can I do that in R?
data_clean_phrase <- c("Copyright © The Society of Geomagnetism and Earth",
"© 2013 Chinese National Committee ")
data_clean_df <- as.data.frame(data_clean_phrase)
AI: For instance:
rs<-c("copyright @ The Society of mo","I want you to meet me @ the coffeshop")
s<-gsub("@.*","",rs)
s
[1] "copyright " "I want you to meet me "
Or, if you want to keep the @ character:
s<-gsub("(@).*","\\1",rs)
s
[1] "copyright @" "I want you to meet me @"
EDIT: If what you want is to remove everything from the last @ on you just have to follow this previous example with the appropriate regex. Example:
rs<-c("copyright @ The Society of mo located @ my house","I want you to meet me @ the coffeshop")
s<-gsub("(.*)@.*","\\1",rs)
s
[1] "copyright @ The Society of mo located " "I want you to meet me "
Given the matching we are looking for, both sub and gsub will give you the same answer. |
H: Data Science conferences?
This is a similar question like the Statistics Conferences question at CrossValidated
What are the most significant annual Data Science conferences?
Rules:
Include a link to the conference
Please include links for the talks (be it youtube, the conference site or some other video streaming site)
AI: PyData - talks about Python Data tools
Link: http://pydata.org/events/
There is one PyData conference on the east coast and one on the west coast each year.
NIPS - Neural Information Processing Systems (NIPS)
Link: https://nips.cc/
This is one of the hardest / most prestigious academic Machine Learning conferences to get an abstract / poster accepted.
The 5th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (IEEE IPDPS 2016)
Link: http://parlearning.ecs.fullerton.edu/
This one is also an academic conference with paper submission.
Note:
I am not sure if you want academic or non-academic conferences (ones that have conference proceedings / papers associated with them).
Some conferences are not about new data science methodologies but the tools and libraries (e.g. PyData) that implement existing methodologies.
Also, data science is very broad and includes Stat, Machine Learning and data warehousing / mining etc. |
H: How to place XGBoost in a full stack for ML?
Is XGBoost complete by itself for prod-strength machine learning? If not, with which other tools or libs is it typically combined, and how?
(I recently read a description of a stack that included ca 5 pieces, including XGBoost and Keras.)
AI: Yes, it is a full-strength Machine Learning paradigm.
XGBoost is basically Extreme Gradient Boosting.
It only takes in numeric matrix data. So, you might want to convert your data such that it is compatible with XGBoost.
The wide range of parameters of the xgboost paradigm is what makes it so diverse. Boosting can be done on trees and linear models, and then more parameters can be defined depending on the model you have selected.
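For reference, a minimal usage sketch (numeric arrays X_train, y_train, X_test are assumed to exist already):
import xgboost as xgb

dtrain = xgb.DMatrix(X_train, label=y_train)
params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=100)
preds = booster.predict(xgb.DMatrix(X_test))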
So, yes, it is a complete paradigm in itself. But when you want to go beyond what xgboost offers (tree and linear boosters), you can use the concept of ensembling.
In the case of ensembles, the tools/libraries which can be used depends on the data scientist who is conducting the experiment. It can be Keras or Theano or TensorFlow, or anything which he/she is comfortable with. (opinion-based) |
H: Data Science Podcasts?
What are some podcasts which are related to data science?
This is a similar question to the reference request question on CrossValidated.
Details/rules:
The podcasts (the theme and the episodes) should be related to data science. (For example: A podcast which is about some other domain, with an episode which speaks about data science in that domain, is not a good reference/answer.)
Personal opinions/reviews (if any) would be very helpful too.
AI: I strongly suggest Talking Machines. It's a very well put together podcast from a professor at Harvard. They cater to both machine learning experts and enthusiasts.
Their interviews are often done from NIPS, and the guests are usually top tier practitioners. |
H: When do I have to use aucPR instead of auROC? (and vice versa)
I'm wondering if sometimes, to validate a model, it's not better to use aucPR instead of aucROC? Do these cases only depend on the "domain & business understanding" ?
Especially, I'm thinking about the "unbalanced class problem" where, it seems more logical to use the aucPR because recall and precision are well-used metrics for this problem.
AI: Yes, you are correct that the dominant difference between the area under a receiver operator characteristic curve (ROC-AUC) and the area under a Precision-Recall curve (PR-AUC) lies in their tractability for unbalanced classes. They are very similar and have been shown to contain essentially the same information; however, PR curves are slightly more finicky, but a well-drawn curve gives a more complete picture. The issue with PR-AUC is that it's difficult to interpolate between points on the PR curve, and thus numerical integration to obtain an area under the curve becomes more difficult.
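In practice, both summaries are easy to compute side by side, e.g. with scikit-learn (y_true and y_score are assumed to be your labels and predicted scores):
from sklearn.metrics import roc_auc_score, average_precision_score

print(roc_auc_score(y_true, y_score))            # ROC-AUC
print(average_precision_score(y_true, y_score))  # a common summary of the PR curve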
Check out this discussion of the differences and similarities.
Quoting Davis' 2006 abstract:
Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm’s performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve.
This was also discussed on Kaggle recently.
There is also some useful discussion on Cross Validated. |
H: Deconvolutional Network in Semantic Segmentation
I recently came across a paper about doing semantic segmentation using deconvolutional network: Learning Deconvolution Network for Semantic Segmentation.
The basic structure of the network is like this:
The goal is to generate a probability map in the end. I'm having trouble figuring out how to realize the deconvolution layer. In the paper, it says:
The output of an unpooling layer is an enlarged, yet sparse activation map. The deconvolution layers densify the sparse activations obtained by unpooling through convolution-like operations with multiple learned filters. However, contrary to convolutional layers, which connect multiple input activations within a filter window to a single activation, deconvolutional layers associate a single input activation with multiple outputs.
The output of the deconvolutional layer is an enlarged and dense activation map. We crop the boundary of the enlarged activation map to keep the size of the output map identical to the one from the preceding unpooling layer.
The learned filters in deconvolutional layers correspond to bases to reconstruct shape of an input object. Therefore, similar to the convolution network, a hierarchical structure of deconvolutional layers are used to capture different level of shape details. The filters in lower layers tend to capture overall shape of an object while the class-specific fine details are encoded in the filters in higher layers. In this way, the network directly takes class-specific shape information into account for semantic segmentation.
Can anyone explain how the deconvolution works? I'm guessing it's not a simple interpolation.
AI: There are two main functions they undo.
The pooling layers in the convolutional neural network downsample the image by (usually) taking the maximum value within the receptive field. Each rxr image region is downsampled to a single value. What this implementation does is store which unit had the maximum activation in each of these pooling steps. Then, at each "unpooling" layer in the deconvolutional network, they upsample back to a rxr image region, only propagating the activation to the location that produced the original max-pooled value.
Thus, "the output of an unpooling layer is an enlarged, yet sparse activation map."
Convolutional layers learn a filter for each image region that maps from a region of size r x r to a single value, where r is the receptive field size. The point of the deconvolutional layer is to learn the opposite filter. This filter is a set of weights that projects an rxr input into a space of size sxs, where s is the size of the next convolutional layer. These filters are learned in the same way as regular convolutional layers are.
As the mirror image of a deep CNN, the low-level features of the network are really high-level, class-specific features. Each layer in the network then localizes them, enhancing class-specific features while minimizing noise. |
H: Reading a CSV using R
I have a CSV file that, by all appearances, is totally normal except that each line ends with ,^M. Not sure if that has anything to do with my issue or not, but I try reading in the file in R with the usual command
df <- read.csv('file.csv')
and then when I try to inspect it by typing df it prints out this result that I don't know how to interpret:
function (x, df1, df2, ncp, log = FALSE)
{
if (missing(ncp))
.External(C_df, x, df1, df2, log)
else .External(C_dnf, x, df1, df2, ncp, log)
}
<bytecode: 0x000000000b083720>
<environment: namespace:stats>
Anybody know what's going on here? I've read in CSV files before and had no problem.
AI: Sorry, never mind. I just realized that I thought I was running the command to set my working directory to the correct location, but it turns out I wasn't, so I was importing the wrong file. All fixed. |
H: pandas count values for last 7 days from each date
There are two pd.DataFrame. First is like this:
print df1
id date month is_buy
0 17 2015-01-16 2015-01 1
1 17 2015-01-26 2015-01 1
2 17 2015-01-27 2015-01 1
3 17 2015-02-11 2015-02 1
4 17 2015-03-14 2015-03 1
5 18 2015-01-28 2015-01 1
6 18 2015-02-12 2015-02 1
7 18 2015-02-25 2015-02 1
8 18 2015-03-04 2015-03 1
In second data frame there are some aggregated data by month from the first one:
df2 = df1[df1['is_buy'] == 1].groupby(['id', 'month']).agg({'is_buy': np.sum})
print df2
id month buys
0 17 2015-01 3
1 17 2015-02 1
2 17 2015-03 1
3 18 2015-01 1
4 18 2015-02 2
5 18 2015-03 1
I'm trying to get new df2 column named 'last_week_buys' with aggregated buys by last 7 days from first day of each df1['month']. In other words, I want to get this:
id month buys last_week_buys
0 17 2015-01 3 NaN
1 17 2015-02 1 2
2 17 2015-03 1 0
3 18 2015-01 1 NaN
4 18 2015-02 2 1
5 18 2015-03 1 1
Are there any ideas to get this column?
AI: The main obstacle is figuring out whether a date is within the last 7 days of the month. I'd recommend something hacky like the following:
from datetime import datetime, date, timedelta
def last7(datestr):
orig = datetime.strptime(datestr,'%Y-%m-%d')
plus7 = orig+timedelta(7)
return plus7.month != orig.month
Once you have that, it's relatively simple to adapt your previous code:
df3 = df1[(df1['is_buy'] == 1) & (df1['date'].apply(last7))].groupby(['id', 'month']).agg({'is_buy': np.sum})
Now we just join together df2 and df3 (shifting df3's month forward by one so the last-week counts line up with the following month) and we're done. |
H: Theoretical treatment of unlabeled samples
In a typical supervised learning setting with a few positive and a few negative examples, it is clear that unlabeled data carries some information that can benefit learning and that is not captured in the labeled data. For example one can estimate mean values, bounds and some other geometrical characteristics of the data-set with much higher precision if you do not discard the (massive) unlabeled data.
On the other hand, the most common ML algorithms from Neural Networks to SVM do not take advantage of this information (at least in their standard, most common form). My question:
Is there any theoretical framework where unlabeled data is treated in the supervised setting?
I can think of semi-supervised ways to approach this (first cluster and then label the clusters). Are there any other?
AI: In a neural network model, you can use autoencoders.
The basic idea of an autoencoder is to learn a hidden layer of features by creating a network that simply copies the input vector at the output. So the training features and training "labels" are initially identical, no supervised labels are required. This can work using a classic triangular network architecture with progressively smaller layers that capture a compressed and hopefully useful set of derived features. The network's hidden layers learn representations based on the larger unsupervised data set. These layers can then be used to initialise a regular supervised learning network to be trained using the actual labels.
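A rough modern sketch of this idea in Keras (my own; the layer sizes and variable names are arbitrary assumptions):
import numpy as np
from tensorflow.keras import layers, models

input_dim, hidden_dim = 100, 16
inputs = layers.Input(shape=(input_dim,))
encoded = layers.Dense(hidden_dim, activation="relu")(inputs)     # compressed representation
decoded = layers.Dense(input_dim, activation="linear")(encoded)   # reconstruction of the input

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
X_unlabeled = np.random.rand(1000, input_dim)                     # stand-in for the unlabeled pool
autoencoder.fit(X_unlabeled, X_unlabeled, epochs=10, verbose=0)   # inputs are also the targets

# Reuse the learned encoder as the first layer of a supervised model.
clf_output = layers.Dense(1, activation="sigmoid")(encoded)
classifier = models.Model(inputs, clf_output)
classifier.compile(optimizer="adam", loss="binary_crossentropy")
# classifier.fit(X_labeled, y_labeled, ...)                       # then train on the labeled subset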
A similar idea is pre-training layers using a Restricted Boltzmann Machine, which can be used in a very similar way, although based on different principles. |
H: Dealing with big data
I am on a project dealing with a lot of data in the form of images and videos (Data related to wind engineering). My requirement is to build a predictive algorithm based on the data I have. I have found many tools with which I can analyse the data where each tool has its own advantages and disadvantages. Big data being really new to me, I find it very difficult to choose a platform to start with. There should be other people here who should have dealt with similar situations.
What criteria should I mainly take into account before selecting a
tool for analysing big data?
Some of the Criteria that I have taken into account : Visualization, Interaction, Security, Data Access and integration, Speed of response, Integrated Data Mining, Pattern Matching, Ease of use etc. As you can see the list that I have made for the criteria comes from the extensive reading of different articles on the topic. But I can't narrow down the list nor find the individual contribution of these criteria in the various tools available for analysis.
Let me also list some of the tools that I found after googling : Knime, Statistica 2, Rapidminer, Orange, WEKA, KEEL, R and RATTLE.
On what basis could I choose a tool from a list of tools that perform similar tasks?
UPDATE based on Comment
Aim : To develop a software that analyses the data coming from the wind mills and generate reports. The software should be able to predict when a wind mill can fail based on the analysis.
The project is still in the phase of gathering User Requirements. Maybe i am so early to come into conclusions about what tool should be used.
Someone else suggested that I should be finalise the requirements and then think about a tool that can help me get things done. So is it possible that I find what and how things should be analysed before finding a tool? And is it also possible that I find an algorithm for predictive analysis without knowing what would be the results of the tool after analysis.
AI: What criteria should I mainly take into account before selecting a
tool for analysing big data?
There are a lot of criteria to take into account when selecting a tool. They can be:
Structure of the data. (The data model Ex: Hierarchical, tabular, etc)
Type of data and what is the problem statement. (time series, or classification, etc)
Speed
Security
Aim : To develop a software that analyses the data coming from the
wind mills and generate reports. The software should be able to
predict when a wind mill can fail based on the analysis.
Almost all the existing analytics tools like Python, Julia, R, etc can do this.
And is it also possible that I find an algorithm for predictive
analysis without knowing what would be the results of the tool after
analysis.
Yes. The predictive algorithm or technique can be inferred by looking at the data and its contents. It is not dependent on the tool.
Some points which I would like to include which I believe would be useful to you:
Select the database depending on your data and its type. Given your data, a NoSQL database would be more relevant and suitable.
Select the algorithms and techniques only after you have a clear knowledge about the problem statement and takeaways and also after clearly looking at the data for an exploratory analysis.
If you want more flexibility, then use a tool/programming language like Python, R and Julia. Else, you can use a tool like Knime, Orange (it has a Python library too.), RapidMiner, etc. |
H: removing words based on a predefined vector
I have the dataset test_stopword and I want to remove some words from the dataset based on a vector. How can I do that in R?
texts <- c("This is the first document.",
"Is this a text?",
"This is the second file.",
"This is the third text.",
"File is not this.")
test_stopword <- as.data.frame(texts)
ordinal_stopwords <- c("first","primary","second","secondary","third")
AI: texts <- c("This is the first document.",
"Is this a text?",
"This is the second file.",
"This is the third text.",
"File is not this.")
test_stopword <- as.data.frame(texts)
ordinal_stopwords <- c("first","primary","second","secondary","third")
(newdata <- as.data.frame(sapply(texts, function(x) gsub(paste(ordinal_stopwords, collapse = '|'), '', x))))
The output is getting skewed when added in a code block (maybe a bug in SE). But, you would get the desired output. |
H: Modelling population changes with years on a network graph
This may be a silly question, apologies if it is. And further apologies if I've said something wrong.
Network graphs, or visualisations look somewhat like this, with each node representing a thing connected to another thing. So for my example, I'd like to model population changes for refugees from their origin country to their destination country.
I've been tasked to create a graph which shows a similar change over a period of years, but I am really not sure how years could be modelled? e.g. how would a graph like the example above, show a change in years?
Edit: preferably using the NodeXL plugin
AI: The Sankey diagram would be nicely suited for your problem statement.
The nodes can be the origin and destination countries of the refugees, and the flow width between them can represent the number of refugees who migrated, i.e. the magnitude of the migration.
If you want to model the graph on a geographical map, it can look something like this:
I am not sure whether it can be done in NodeXL or not. But, a google search has returned me this link. |
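If you are open to tools other than NodeXL, here is a minimal sketch with the networkD3 package in R (the node names and flow values are made up):
library(networkD3)

nodes <- data.frame(name = c("Origin A", "Origin B", "Destination C"))
links <- data.frame(source = c(0, 1),        # zero-based indices into nodes
                    target = c(2, 2),
                    value  = c(50000, 30000)) # made-up refugee counts

sankeyNetwork(Links = links, Nodes = nodes,
              Source = "source", Target = "target",
              Value = "value", NodeID = "name")
One diagram per year, or year-specific intermediate nodes, is a simple way to show the change over time.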
H: Correlation and Naive Bayes
I would like to ask if the Pearson correlation between fields (but not the class field) of a dataset affects somehow the performance of Naive Bayes when applying it to the dataset in order to predict the class field.
AI: As you probably know, "naive" here implies that the fields are independent. So your question boils down to: does correlation imply dependence? Yes, it does. See here.
https://stats.stackexchange.com/questions/113417/does-non-zero-correlation-imply-dependence
So, if your features show correlation then this will have an adverse effect on the naive assumption. Despite this fact, Naive Bayes has been shown to be robust against this assumption. If your model still suffers from this, however, you could consider transforming the space to be independent with methods such as PCA. |
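For illustration, a minimal scikit-learn sketch of that idea (X_train, y_train and the number of components are placeholders):
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# PCA yields uncorrelated components, which fit the naive independence assumption better
model = make_pipeline(PCA(n_components=10), GaussianNB())
model.fit(X_train, y_train)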
H: Hub removal from graphs
I have a graph with vertices that represent some entities and the edges are weighted as the correlation between two such entities.
I would like to break this graph into several subgraphs with high inner-correlation.
My problem is that I have a few 'hubs', with high correlation to a lot of different entities.
How can I detect such hubs in order to remove them?
AI: It depends on how you define "hubs". In Network Science, a hub is simply a node with high degree, i.e. one of the nodes that contribute most to the power-law nature of the degree distribution. But you can also find other definitions, for instance based on the information flow in the network, where hubs are defined as the nodes that are critical to the flow of information (also called central nodes).
My Suggestions
Degree Distribution: The simplest approach would be to choose high-degree nodes as hubs. To make the results a bit more robust, I recommend also calculating, for each node, the sum of the correlations on its incident edges (its weighted degree, or strength) and having a look at these numbers as well. In this case you are looking for nodes which have a high degree and a large total correlation.
Centrality Measure: betweenness centrality, introduced by Linton Freeman, measures the influence of a node on the flow of information over the network. Calculating the betweenness of a vertex $v$ in a graph has basically 3 steps (a short code sketch follows the list):
For each pair of vertices $(s,t)$, compute the shortest paths between them.
For each pair of vertices $(s,t)$, determine the fraction of shortest paths that pass through the vertex in question (here, vertex $v$).
Sum this fraction over all pairs of vertices $(s,t)$.
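A minimal sketch with networkx, assuming your graph G stores the correlation of each edge in a 'weight' attribute (note that betweenness treats weights as distances, so high correlations are first converted into short distances):
import networkx as nx

# weighted degree (strength): sum of the correlations incident to each node
strength = dict(G.degree(weight='weight'))

# build a distance version of the graph for the shortest-path computation
H = G.copy()
for u, v, d in H.edges(data=True):
    d['dist'] = 1.0 - d['weight']
betweenness = nx.betweenness_centrality(H, weight='dist')

# treat the top-k nodes by betweenness (or strength) as hub candidates and drop them
k = 5  # placeholder
hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:k]
G.remove_nodes_from(hubs)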
For more information read this carefully and in case you need any help (especially with the implementation) just drop me a line in the comments :)
Good Luck! |
H: How to cluster Houses on the basis of similarity of features+location?
I have a dataset of houses like this:
HouseID Latitude Longitude PriceIndex
1 1.4 103.120 1.21
2 1.42 103.112 2.01
I want to find houses which are similar to each other both on the basis of their position as well as their price index.[Also would need to Rank in order of similarities, given one house] I tried using hclust package in R and was able to extract 9 classes. However the groups don't seem to have any interpretable similarities (for example the points are spread all across the city etc). I haven't done clustering based projects before so any help in the right direction will be helpful. Thanks!
Edit: Removing the price index column from the clustering data-set actually clusters spatially. But adding the price shows only price-based clustering
AI: Check the ranges of your dimensions and consider scaling if you see a large difference.
I would attribute the behaviour you describe to the price index having a much larger range than the other two dimensions.
See also this related question.
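A small sketch in R, assuming a data frame houses with the columns from your example:
# put all dimensions on a comparable scale before computing distances
feats <- scale(houses[, c("Latitude", "Longitude", "PriceIndex")])
hc <- hclust(dist(feats))
houses$cluster <- cutree(hc, k = 9)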
H: Detecting boilerplate in text samples
I have a corpus of unstructured text that, due to a concatenation from different sources, has boilerplate metadata that I would like to remove. For example:
DESCRIPTION PROVIDED BY AUTHOR: The goal of my ...
Author provided: The goal of my ...
The goal of my ... END OF TRANSCRIPT
The goal of my ... END, SPONSORED BY COMPANY XYZ
The goal of my ... SPONSORED: COMPANY XYZ, All rights reserved, date: 10/21
This boilerplate can be assumed to occur in beginning or end of each sample. What are some robust methods for wrangling this out of the data?
AI: This might get you started. Phrase length is determined by the range() function. Basically this tokenizes the text and creates n-grams, then counts each token. Tokens with a high mean over all documents (i.e. tokens that occur in most documents) are printed out in the last line.
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
import nltk
text = """DESCRIPTION PROVIDED BY AUTHOR: The goal of my a...
Author provided: The goal of my b...
The goal of my c... END OF TRANSCRIPT
The goal of my d... END SPONSORED BY COMPANY XYZ
The goal of my e... SPONSORED: COMPANY XYZ All rights reserved date: 10/21
"""
def todocuments(lines):
for line in lines:
words = line.lower().split(' ')
doc = ""
for n in range(3, 6):
ts = nltk.ngrams(words, n)
for t in ts: doc = doc + " " + str.join('_', t)
yield doc
cv = CountVectorizer(min_df=.5)
fit = cv.fit_transform(todocuments(text.splitlines()))
vocab_idx = {b: a for a, b in cv.vocabulary_.items()}
means = fit.mean(axis=0)
arr = np.squeeze(np.asarray(means))
[vocab_idx[idx] for idx in np.where(arr > .95)[0]]
# ['goal_of_my', 'the_goal_of', 'the_goal_of_my'] |
H: Regression: how to interpret different linear relations?
I have three datasets, let's call them X and Y1 and Y2. A scatterplot is produced out of them, with Y1 and Y2 sharing them same X dataset (or support).
My question: if the two regression lines are different in both slope and intercept, is there a way to evaluate if the X dataset has more influence on Y1 or Y2?
Based on the image below, this is to say - which Y dataset is more influenced by the X dataset?
Blue slope (Y1): -112
Red slope (Y2): -90
EDIT
It is visible that an increase in X produces a decrease in both Y1 and Y2. My question could be interpreted as follows: which Y dataset decreases the most, given the increase in X? Is the slope everything I need?
Image:
AI: What you are looking for is the Analysis of Covariance (ANCOVA) analysis, which is used to compare two or more regression lines by testing the effect of a categorical factor on a dependent variable (y-var) while controlling for the effect of a continuous co-variable (x-var).
Here is an example for carrying out the ANCOVA analysis using R. |
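A minimal sketch of that comparison in R, assuming a stacked data frame dat with columns x, y and a factor group that marks whether a row belongs to Y1 or Y2:
# the x:group interaction term tests whether the two slopes differ
fit <- lm(y ~ x * group, data = dat)
summary(fit)
anova(fit)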
H: What are the best ways to tune multiple parameters?
When building a model in Machine Learning, it's more than common to have several "parameters" (I'm thinking of real parameter like the step of gradient descent, or things like features) to tune. We validate these parameters on a validating set.
My question is: what is the best way of tuning these multiple parameters? For example, let say we have 3 parameters A, B and C that take 3 values each:
A = [ A1, A2, A3 ]
B = [ B1, B2, B3 ]
C = [ C1, C2, C3 ]
Two methods come to my mind.
Method 1:
Vary all the parameters at the same time and test different combinations randomly, such as:
Test1 = [A1,B1,C1]
Test2 = [A2,B2,C2]
Test3 = [A3,B3,C3]
Test4 = [A1,B2,C3]
Test5= [A3,B1,C2]
etc..
Method 2:
Fix all the parameters except one:
- TestA1 = [A1,B1,C1]
- TestA2 = [A2,B1,C1]
- TestA3 = [A3,B1,C1]
In that way, we can find the best value for parameter A, then we fix this value and use it to find the best value for B, and finally the best for C.
It seems more logical to me to use the method 2 which seems more organized. But may be we will miss a combination which can be found only in method 1 that doesn’t appear in method 2, such as [A1,B2,C3] for example.
Which method is the best? Is there another method more accurate for tuning multiple parameters?
Thanks in advance.
Regards.
AI: Generally people perform a grid search, which in its simplest "exhaustive" form is similar to Method 1. However there are also more 'intelligent' ways to choose what to explore, which optimize in parameter space in a fashion similar to how each individual model is optimized. It can be tricky to do greedy optimization in this space, as it is often strongly non-convex.
This page describes the basics of optimizing model parameters. |
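As an illustration, an exhaustive grid search in scikit-learn (the estimator and parameter values are placeholders; in older scikit-learn versions the import is sklearn.grid_search instead of sklearn.model_selection):
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# every combination of the listed values is evaluated by cross-validation
param_grid = {'C': [0.1, 1.0, 10.0], 'gamma': [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)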
H: Can k-means clustering get shells as clusters?
Imagine you have $k$ classes. Every class $i$ has points which follow a probability distribution such that their distance to 0 is $i$ in the mean, with this distance normally distributed. The direction is uniformly distributed. So all classes lie in shells around the origin 0.
Can $k$-means get these shells when you choose the "right" distance metric?
(Obviously it can't find it if you take the euclidean metric, but I wonder if there is any metric at all or if this problem is inherently unsolvable by $k$-means, even if you know the number of clusters $k$)
AI: You cannot just use arbitrary distance functions with k-means, because the algorithm is not based on metric properties but on variance.
https://stats.stackexchange.com/q/81481/7828
Fact is that k-means minimizes the sum of squares. This does not even give you the "smallest distances" but only the smallest squared distances. This is not the same (see: difference between median and mean) - if you want to minimize Euclidean distances, use k-medians or if you want other distances PAM (k-medoids).
You can generalize k-means to use a few more distances known as "Bregman divergences", and you can do some variant of the kernel trick. But that is not very powerful, because you don't have labels for optimizing the kernel parameters! Still, that could be what this exercise question is up to... If your "shells" are indeed centered at 0, then you can transform your data (read: kernel trick done wrong) to angle + distance from origin, and k-means may be able to cluster the projected data (depending on the not-well-defined scaling of the axes).
Or the textbook did not realize that a kernel k-means has been proposed long ago. Then the argument is probably this: the mean of each shell is 0, and thus the shells cannot be distinguished. This clearly holds for unmodified k-means. |
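A quick sketch of that transformation (X is a placeholder for your data, assumed centered at the origin, and k is the number of shells):
import numpy as np
from sklearn.cluster import KMeans

# represent each point only by its distance to the origin;
# concentric shells then become compact one-dimensional clusters
radius = np.linalg.norm(X, axis=1).reshape(-1, 1)
labels = KMeans(n_clusters=k).fit_predict(radius)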
H: Cluster Similar Images into Folders
I wasn't sure where to ask this question so forgive me if the question seems out of place (and please guide me where to ask it !)
I have an archive of 9GAG images and I want to Cluster them based on their content and their similarity... 9GAG images are mostly memes so it's natural that you'd find many of them pretty similar to each other...
I couldn't find any application which does this out of the box (if there is, could you please refer me to it ?) and what I found was a vast number of Papers about Image Clustering but no real application based on them...
I was wondering if there is a Ruby, Python or Java Program which could simply get the directory of the Images and Cluster them into groups (folders) based on their Similarity to each other ?
Thank you very much...
AI: I would be impressed if there were already a dedicated 9GAG clusterer :P
However, you can read this blog post about Hierarchical clustering in Python for images, which is close to what you want. The problem is that the author uses the average color of the image as a feature, which could prove crude and inefficient. You may find something more interesting to use. But in the end, you need to experiment a lot with your own dataset.
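If you want a rough starting point in Python, here is a hedged sketch along those lines (average colour as the feature, k-means, then copying files into per-cluster folders; the directory name and number of clusters are placeholders):
import os
import shutil
import numpy as np
from PIL import Image
from scipy.cluster.vq import kmeans2

img_dir = 'images'  # placeholder
paths = [os.path.join(img_dir, f) for f in os.listdir(img_dir)]

# crude feature: mean RGB colour of a downscaled copy of each image
feats = np.array([
    np.asarray(Image.open(p).convert('RGB').resize((32, 32)), dtype=float).reshape(-1, 3).mean(axis=0)
    for p in paths
])

_, labels = kmeans2(feats, 10)  # 10 clusters, placeholder

for path, label in zip(paths, labels):
    out_dir = os.path.join(img_dir, 'cluster_%d' % label)
    os.makedirs(out_dir, exist_ok=True)
    shutil.copy(path, out_dir)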
H: Predicting future value with regression Model
I have a set of predictor variables and another target variable .
Now I am really confused on what method to use to forecast the target variable .
For example, my data set has customer profit (which is my target variable) and a set of predictor variables (balances of different accounts) for one year for each customer.
Now I need to predict the profit for the next 5 years. I am confused because I don't have the data (predictor variables) for the future.
What are my possible choices of modelling? Please assist.
AI: You should distinguish between time series prediction, where the future is predicted from the known history of some attribute, and model prediction, where the target variable is calculated from the predictor variables.
In your case you could combine both approaches, i.e. use time series prediction on the customer balances and apply the regression model to calculate the profit on the result. |
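A rough sketch of that combination in R with the forecast package (the object names, and the idea of a single balance predictor, are placeholders for your real data):
library(forecast)

# 1) forecast the predictor (a customer's balance) 5 periods ahead
bal_fc <- forecast(auto.arima(balance_ts), h = 5)

# 2) apply the fitted regression model to the forecasted predictor values
profit_fit <- lm(profit ~ balance, data = history)
predict(profit_fit, newdata = data.frame(balance = as.numeric(bal_fc$mean)))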
H: Is Maxout the same as max pooling?
I've recently read about maxout in slides of a lecture and in the paper.
Is maxout the same as max pooling?
AI: They are almost identical:
The second key reason that maxout performs well is
that it improves the bagging style training phase of
dropout. Note that the arguments in section 7 motivating
the use of maxout also apply equally to rectified
linear units (Salinas & Abbott, 1996; Hahnloser, 1998;
Glorot et al., 2011). The only difference between maxout
and max pooling over a set of rectified linear units
is that maxout does not include a 0 in the max.
Source: Maxout Networks. |
H: Predicting app usage on mobile phone
I'm currently building an app that strives to predict how the user uses different apps and gives the user a suggestion based on which apps it thinks the user will currently use (a ranked list based on the user's current conditions). I've been collecting some data over the past week now, and I'm not really sure which approach to take. I've been thinking about using multiple seasonalities (correct me if I'm using the wrong terminology) such as time of day, day of week, week of month, month and quarter. I also want to use location, and other sensor data (such as the user state "walking" or "sitting" later on).
I've summarised the usage over the last week on an hourly level for some apps.
The bars represents each time the app was opened during that time, and the green line is a weighted moving average which has a weight of 0.5 on its closest neighbours.
Now I see several challenges in front of me and would greatly appreciate some input from others, or some good resources to find further information.
Do you think my model is a good one for this problem
How do I account for ageing data?
How do add up the different seasonalities/states/location? Multiply them?
Does it make sense smoothening the curve as I do?
Here some data for the last week on an hourly basis:
AI: So you have collected data that shows which app is being used at any time, binned into hours in the day. And you have several apps. You mention other dimensions, like user state when using the app (walking, not-walking), (active, not-active - it appears to me that you are not collecting much usage 2-6. is that because the usage is from a self-directed ping from an app while the user is truly away?), location (is this going to be all possible values, or are you going to use something like the fact that this location has been seen often before?). Another interesting relationship could be pairing apps, i.e., mining for a relationship between App A being used after using App B or before App B.
Regardless, then you will definitely have many different dimensions upon which to measure the usage characteristics for any particular usage measurement, and so you are definitely going to have a multiple dimensional problem. You might try to visualize this as an N-space problem, with an axis of measurement for each of your characteristics. Each of your previous measurements represents vectors and you are producing a new vector with your next measurement.
From this, you want to predict future behavior based on measuring the input characteristics from your usage space. You could go for something that classifies as nearest neighbor, and you probably want to do this for your first stab at the problem. You might end up wanting to make the predictive model more sophisticated by adding probabilities to the classifier and acting on that. This means getting estimates of class membership probability rather than just simple classifications. But I would build the whole thing incrementally. Start simple and add complexity as you require it. The increased complexity will also have effects on performance, so why not baseline with something.
For the aging of data, are you wanting to reduce the predictive power of characteristics that are too long in the tooth? If so, be explicit with yourself about what that means, quantitively. Do I trust the usage data from last month less than yesterday's data? Perhaps so, but then why? is my usage different because I am different or because last month was special compared to yesterday, or vice-versa? Again, you might benefit from ignoring this at first, but then trying to search for "seasonal" or periodicity characteristics from the data. Once you determine if/how it changes, you can weight that contribution compared to your immediate usage in different ways. Perhaps you want to amplify the contribution of a similar period (same time of day && same location && same previous app usage). Perhaps you want to provide an exponential dampening on historical data because the usage is always adapting and changing, and recent usage seems to be a much better predictor than 3xcurrent.
For all of this, the proper data science perspective is to let the data lead you. |
H: Decision tree vs. KNN
In which cases is it better to use a Decision Tree, and in which cases a KNN?
Why use one of them in certain cases? And the other in different cases? (By looking at its functionality, not at the algorithm)
Anyone have some explanations or references about this?
AI: They serve different purposes.
KNN is a supervised learner just like a Decision Tree (DT); it should not be confused with k-means, which is unsupervised.
Both KNN and DT can be used for classification.
KNN determines neighborhoods, so there must be a distance metric. This implies that all the features must be numeric. Distance metrics may be affected by varying scales between attributes and also high-dimensional space.
DT, on the other hand, predicts a class for a given input vector. The attributes may be numeric or nominal.
So, if you want to find similar examples you could use KNN. If you want to classify examples you could use DT. |
H: Why do activation functions have to be monotonic?
I am currently preparing for an exam on neural networks. In several protocols from former exams I read that the activation functions of neurons (in multilayer perceptrons) have to be monotonic.
I understand that activation functions should be differentiable, have a derivative which is not 0 on most points, and be non-linear. I do not understand why being monotonic is important/helpful.
I know the following activation functions and that they are monotonic:
ReLU
Sigmoid
Tanh
Softmax: I'm not sure if the definition of monotonicity is applicable for functions $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $n, m > 1$
Softplus
(Identity)
However, I still can't see any reason why, for example, $\varphi(x) = x^2$ should not be used.
Why do activation functions have to be monotonic?
(Related side question: is there any reason why the logarithm/exponential function is not used as an activation function?)
AI: The monotonicity criterion helps the neural network converge more easily to an accurate classifier.
See this stackexchange answer and wikipedia article for further details and reasons.
However, the monotonicity criterion is not mandatory for an activation function - It is also possible to train neural nets with non-monotonic activation functions. It just gets harder to optimize the neural network.
See Yoshua Bengio's answer. |
H: Pylearn2 vs TensorFlow
I am about to dive into a long NN research project and wanted a push in the direction of Pylearn2 or TensorFlow? As of Dec 2015 has the community started to lean one direction or another?
This link has given me concern about getting tied to TensorFlow.
AI: You might want to take into consideration that Pylearn2 has no more developer, and now points to other Theano-based libraries:
There are other machine learning frameworks built on top of Theano that could interest you, such as: Blocks, Keras and Lasagne.
As Dawny33 says, TensorFlow is just getting started, but it is interesting to note that the number of questions on TensorFlow (244) on Stack Overflow already surpasses Torch (166) and will probably catch up with Theano (672) in 2016. |
H: Theano logistic regression example
I am trying to understand some simple neural net case using theano.
The deeplearning.net site gives the following simple code for implementing a logistic regression application to a simple case:
import numpy
import theano
import theano.tensor as T
rng = numpy.random
N = 400
feats = 784
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))
training_steps = 10000
# Declare Theano symbolic variables
x = T.matrix("x")
y = T.vector("y")
w = theano.shared(rng.randn(feats), name="w")
b = theano.shared(0., name="b")
print("Initial model:")
print(w.get_value())
print(b.get_value())
# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b)) # Probability that target = 1
prediction = p_1 > 0.5 # The prediction thresholded
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function
cost = xent.mean() + 0.01 * (w ** 2).sum()# The cost to minimize
gw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost
# (we shall return to this in a
# following section of this tutorial)
# Compile
train = theano.function(
inputs=[x,y],
outputs=[prediction, xent],
updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)))
predict = theano.function(inputs=[x], outputs=prediction)
# Train
for i in range(training_steps):
pred, err = train(D[0], D[1])
print("Final model:")
print(w.get_value())
print(b.get_value())
print("target values for D:")
print(D[1])
print("prediction on D:")
print(predict(D[0]))
I understand most of it, p_1 is the logistic regression function, the prediction is whether the value will be in the 0 class or 1 class, xent is the loss function, i.e. how far from correct is our prediction. I do not understand the next line, the cost. Shouldn't the cost be equal to the xent, i.e. the loss?
What is the cost function representing here?
Also, why is the bias initially set to 0 and not a random number like the weights?
AI: I do not understand the next line, the cost. Shouldn't the cost be equal to the xent, i.e. the loss? What is the cost function representing here?
The cost is the mean cross-entropy error (xent.mean()) plus an L2 regularization term (0.01 * (w ** 2).sum()), which penalizes large weights.
Why is the bias initially set to 0 and not a random number like the weights?
It is possible and common to initialize the biases to be zero, since the asymmetry breaking is provided by the small random numbers in the weights.
More details here. |
H: Calculating KL Divergence in Python
I am rather new to this and can't say I have a complete understanding of the theoretical concepts behind this. I am trying to calculate the KL Divergence between several lists of points in Python. I am using this to try and do this. The problem that I'm running into is that the value returned is the same for any 2 lists of numbers (its 1.3862943611198906). I have a feeling that I'm making some sort of theoretical mistake here but can't spot it.
values1 = [1.346112,1.337432,1.246655]
values2 = [1.033836,1.082015,1.117323]
metrics.mutual_info_score(values1,values2)
That is an example of what I'm running - just that I'm getting the same output for any 2 inputs. Any advice/help would be appreciated!
AI: First of all, sklearn.metrics.mutual_info_score implements mutual information for evaluating clustering results, not pure Kullback-Leibler divergence!
This is equal to the Kullback-Leibler divergence of the joint distribution with the product distribution of the marginals.
KL divergence (and any other such measure) expects the input data to sum to 1. Otherwise, they are not proper probability distributions. If your data does not sum to 1, it is usually not proper to use KL divergence! (In some cases, it may be admissible to have a sum of less than 1, e.g. in the case of missing data.)
Also note that it is common to use base 2 logarithms. This only changes the result by a constant scaling factor, but base 2 logarithms are easier to interpret and have a more intuitive scale (0 to 1 instead of 0 to ln 2 = 0.69314..., measuring the information in bits instead of nats).
> sklearn.metrics.mutual_info_score([0,1],[1,0])
0.69314718055994529
as we can clearly see, the MI result of sklearn is scaled using natural logarithms instead of log2. This is an unfortunate choice, as explained above.
Kullback-Leibler divergence is fragile, unfortunately. On above example it is not well-defined: KL([0,1],[1,0]) causes a division by zero, and tends to infinity. It is also asymmetric. |
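If you normalise the inputs to proper distributions first, scipy.stats.entropy computes the KL divergence directly (using the lists from your question):
import numpy as np
from scipy.stats import entropy

p = np.asarray(values1, dtype=float)
q = np.asarray(values2, dtype=float)

# normalise so that each vector sums to 1
p /= p.sum()
q /= q.sum()

# KL(p || q), in bits because of base=2
kl = entropy(p, q, base=2)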
H: How to draw a hyperplane using the weights calculated
In the simple example where I have n input neurons, I can consider this to be a point in a n-dimensional space.
If the output layer is just one neuron with value 0 or 1, if I get convergence, the neural net should define a hyperplane dividing the two classes of points - those mapped to 0 and those mapped to 1. How do I calculate this hyperplane using the (n,1) matrix of weights I calculated?
AI: If you have only an input layer, one set of weights, and an output layer, you can solve this directly with $$ X \cdot w = threshold $$
However if you add in hidden layers, you no longer necessarily have a hyperplane, as in order to be a hyperplane it must be able to be expressed as the "solution of a single algebraic equation of degree 1."
Even if you can't solve directly, you can still get a sense for the response surface by evaluating your network's output over a wide range of inputs. |
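For the single-layer case, here is a small sketch of turning the learned weights into a line in two dimensions (the weight and bias values are made up; with n inputs the same equation describes an (n-1)-dimensional hyperplane):
import numpy as np
import matplotlib.pyplot as plt

w = np.array([1.5, -2.0])  # learned weights (placeholder values)
b = 0.3                    # learned bias, i.e. the negative threshold (placeholder)

# points on the decision boundary satisfy w[0]*x1 + w[1]*x2 + b = 0
x1 = np.linspace(-3, 3, 100)
x2 = -(w[0] * x1 + b) / w[1]

plt.plot(x1, x2)
plt.show()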
H: Web services to mine the social web?
Are there any web services that can be used to analyse data in social networks with respect to a specific research question (e.g. mentioning of certain products in social media discussions)?
AI: Twitter's API is one of the best sources of social network data. You can extract off twitter pretty much everything you can imagine, you just need an account and a developer ID. The documentation is rather big so I will let you navigate it.
https://dev.twitter.com/overview/documentation
As usual there are wrappers that make your life easier.
python-twitter
twitteR
There are also companies who offer detailed twitter analytics and historic datasets for a fee.
Gnip
Datasift
Check them out! |
H: What is the appropriate evaluation metric for RandomForest with probability in R?
In order to build a predictive model with two categories (buy or not buy), I want to use RandomForest and predict with type='prob', so I can get the probability that someone buys. With this outcome I can then make groups, like this:
group A: customers with a [100 to 80]% probability of buying.
group B: customers with an [81 to 60]% probability of buying.
...
But I don't know the appropriate evaluation metric to measure the accuracy of this model. I guess that I can't use a confusion matrix.
Maybe I can use a ROC curve, and or measure KS between the buy group with the not buy group. But I'm not sure about this metrics.
AI: You should select some threshold, let's say 0.5, and treat customers with a probability below the threshold as "not buy" and above it as "buy". Based on this you can compute the accuracy of your model. You can also check the ROC curve of the model. You can find various metrics for binary classifiers here.
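For example, a sketch with the pROC package (rf_model, the test data frame and its column names are placeholders):
library(pROC)

# predicted probability of the "buy" class on a held-out set
probs <- predict(rf_model, newdata = test, type = "prob")[, "buy"]

roc_obj <- roc(test$bought, probs)  # test$bought: the true buy/not-buy labels
auc(roc_obj)
plot(roc_obj)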
H: Using predictive modelling for temperature data set
I am absolutely new to this area of predictive modelling in data science. I am not able to understand how and what modelling techniques do we use? Does it depend on the data type? Does it depend on size of data?
To be specific to the title, I have to predict missing values in a given temperature data set and I am unaware of anything that I can use. Could someone guide me through?
AI: I am not able to understand how and what modelling techniques do we
use?
Every data science workflow has the follwing steps:
Pre-processing (data cleaning and wrangling)
Exploratory analytics
Model selection
Prediction and testing. (And re-iteration)
(Optional) Reporting the workflow
Does it depend on the data type?
Yes, the entire workflow is dependent on the type and features of the data.
Does it depend on size of data?
Size of data makes a difference in the tools and sometimes (very rarely) the algorithms used.
I have to predict missing values in a given temperature data set and i
am unaware of anything that i can use
There is a lot of material on algorithms for imputing missing data, which you can refer to and apply depending on the type of data and the problem statement.
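As a tiny starting point, a simple interpolation sketch in Python/pandas (the series here is made up; real values would come from your temperature data set):
import numpy as np
import pandas as pd

temps = pd.Series([21.0, np.nan, 22.5, np.nan, np.nan, 23.1])

# fill each gap by linear interpolation between the surrounding observations
filled = temps.interpolate(method='linear')
print(filled)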
H: Multi-class logistic regression
I am trying to understand logistic regression for a multi-class example and I have the following code:
import numpy
import theano
import theano.tensor as T
rng = numpy.random
num_classes = 3
#N = number of examples
N = 100
#feats = number of input neurons
feats = 784
#training rate
tr_rate = 0.1
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=num_classes))
training_steps = 1000
# Declare Theano symbolic variables
x = T.matrix("x")
y = T.ivector("y")
w = theano.shared(rng.randn(feats), name="w")
b = theano.shared(0.01, name="b")
# Construct Theano expression graph
sigma = T.nnet.softmax(T.dot(x,w) + b)
prediction = T.argmax(sigma, axis=1) # The class with highest probability
# Cross-entropy loss function
xent = -T.mean(T.log(sigma)[T.arange(y.shape[0]), y])
cost = xent.mean() + 0.01 * (w ** 2).sum() # Regularisation
gw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost
# Compile
train = theano.function(
inputs=[x,y],
outputs=[xent],
updates=((w, w - tr_rate * gw), (b, b - tr_rate * gb)),
allow_input_downcast=True)
predict = theano.function(inputs=[x], outputs=prediction)
# Train
for i in range(training_steps):
train(D[0], D[1])
However at this point the code gives me the following error:
ValueError Traceback (most recent call last)
<ipython-input-246-c4753ce8ccc7> in <module>()
1 # Train
2 for i in range(training_steps):
----> 3 train(D[0], D[1])
/Users/vzocca/gitProjects/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
869 node=self.fn.nodes[self.fn.position_of_error],
870 thunk=thunk,
--> 871 storage_map=getattr(self.fn, 'storage_map', None))
872 else:
873 # old-style linkers raise their own exceptions
/Users/vzocca/gitProjects/Theano/theano/gof/link.pyc in raise_with_op(node, thunk, exc_info, storage_map)
312 # extra long error message in that case.
313 pass
--> 314 reraise(exc_type, exc_value, exc_trace)
315
316
/Users/vzocca/gitProjects/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
857 t0_fn = time.time()
858 try:
--> 859 outputs = self.fn()
860 except Exception:
861 if hasattr(self.fn, 'position_of_error'):
ValueError: number of rows in x (1) does not match length of y (100)
Apply node that caused the error: CrossentropySoftmaxArgmax1HotWithBias(Elemwise{Add}[(0, 0)].0, Alloc.0, y)
Toposort index: 12
Inputs types: [TensorType(float64, row), TensorType(float64, vector), TensorType(int32, vector)]
Inputs shapes: [(1, 100), (100,), (100,)]
Inputs strides: [(800, 8), (8,), (4,)]
Inputs values: ['not shown', 'not shown', 'not shown']
Outputs clients: [[Sum{acc_dtype=float64}(CrossentropySoftmaxArgmax1HotWithBias.0)], [], []]
Backtrace when the node is created:
File "<ipython-input-244-834f04b2fb36>", line 16, in <module>
xent = -T.mean(T.log(sigma)[T.arange(y.shape[0]), y])
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
Can anyone help understand what is wrong?
Thank you!
AI: I found my own answer. I had defined
w = theano.shared(rng.randn(feats), name="w")
and that was wrong.
The correct definition is:
w = theano.shared(rng.randn(feats, num_classes), name="w")
since the weights link 'feats'-number of input neuron to 'num_classes'-number of output neurons. |
H: Early-Stopping for logistic regression. Theano
I am trying to understand the code for the logistic regression on the official documentation, but I am struggling to understand the logic behind this code:
# early-stopping parameters
patience = 5000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience/2)
# go through this many
# minibatches before checking the network
# on the validation set; in this case we
# check every epoch
best_params = None
best_validation_loss = numpy.inf
test_score = 0.
start_time = time.clock()
done_looping = False
epoch = 0
while (epoch < n_epochs) and (not done_looping):
# Report "1" for first epoch, "n_epochs" for last epoch
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
d_loss_wrt_params = ... # compute gradient
params -= learning_rate * d_loss_wrt_params # gradient descent
# iteration number. We want it to start at 0.
iter = (epoch - 1) * n_train_batches + minibatch_index
# note that if we do `iter % validation_frequency` it will be
# true for iter = 0 which we do not want. We want it true for
# iter = validation_frequency - 1.
if (iter + 1) % validation_frequency == 0:
this_validation_loss = ... # compute zero-one loss on validation set
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * improvement_threshold:
patience = max(patience, iter * patience_increase)
best_params = copy.deepcopy(params)
best_validation_loss = this_validation_loss
if patience <= iter:
done_looping = True
break
Could any one, explain to me, what do the variables: patience, patience_increase, improvement_threshold, validation_frequency, iter, represent ?
What this condition do ?
if (iter + 1) % validation_frequency == 0:
AI: patience is the minimum number of training minibatches to process before stopping; iter counts the minibatches seen so far. Each time the validation loss is computed, it is compared with the best loss so far; whenever it is lower, the best loss and the corresponding parameters are stored. The patience is only extended when the improvement is significant, i.e. when the new loss is below improvement_threshold * best_validation_loss.
patience_increase is a multiplier: every time such a significantly better score is found, the training budget is raised to iter * patience_increase, but never below the current value of patience.
validation_frequency is the number of minibatches between validation checks. That is exactly what the condition if (iter + 1) % validation_frequency == 0 tests: it becomes true once every validation_frequency minibatches (and not at iter = 0).
H: Hypertuning XGBoost parameters
XGBoost have been doing a great job, when it comes to dealing with both categorical and continuous dependant variables. But, how do I select the optimized parameters for an XGBoost problem?
This is how I applied the parameters for a recent Kaggle problem:
param <- list( objective = "reg:linear",
booster = "gbtree",
eta = 0.02, # 0.06, #0.01,
max_depth = 10, #changed from default of 8
subsample = 0.5, # 0.7
colsample_bytree = 0.7, # 0.7
num_parallel_tree = 5
# alpha = 0.0001,
# lambda = 1
)
clf <- xgb.train( params = param,
data = dtrain,
nrounds = 3000, #300, #280, #125, #250, # changed from 300
verbose = 0,
early.stop.round = 100,
watchlist = watchlist,
maximize = FALSE,
feval=RMPSE
)
All I do to experiment is randomly select (with intuition) another set of parameters for improving on the result.
Is there anyway I automate the selection of optimized(best) set of parameters?
(Answers can be in any language. I'm just looking for the technique)
AI: Whenever I work with xgboost I often make my own homebrew parameter search but you can do it with the caret package as well like KrisP just mentioned.
Caret
See this answer on Cross Validated for a thorough explanation on how to use the caret package for hyperparameter search on xgboost.
How to tune hyperparameters of xgboost trees?
Custom Grid Search
I often begin with a few assumptions based on Owen Zhang's slides on tips for data science P. 14
Here you can see that you'll mostly need to tune row sampling, column sampling and maybe maximum tree depth. This is how I do a custom row sampling and column sampling search for a problem I am working on at the moment:
searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1),
colsample_bytree = c(0.6, 0.8, 1))
ntrees <- 100
#Build a xgb.DMatrix object
DMMatrixTrain <- xgb.DMatrix(data = yourMatrix, label = yourTarget)
rmseErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){
#Extract Parameters to test
currentSubsampleRate <- parameterList[["subsample"]]
currentColsampleRate <- parameterList[["colsample_bytree"]]
xgboostModelCV <- xgb.cv(data = DMMatrixTrain, nrounds = ntrees, nfold = 5, showsd = TRUE,
metrics = "rmse", verbose = TRUE, "eval_metric" = "rmse",
"objective" = "reg:linear", "max.depth" = 15, "eta" = 2/ntrees,
"subsample" = currentSubsampleRate, "colsample_bytree" = currentColsampleRate)
xvalidationScores <- as.data.frame(xgboostModelCV)
#Save rmse of the last iteration
rmse <- tail(xvalidationScores$test.rmse.mean, 1)
return(c(rmse, currentSubsampleRate, currentColsampleRate))
})
And combined with some ggplot2 magic using the results of that apply function you can plot a graphical representation of the search.
In this plot lighter colors represent lower error and each block represents a unique combination of column sampling and row sampling. So if you want to perform an additional search over, say, eta (or tree depth), you will end up with one of these plots for each eta value tested.
I see you have a different evaluation metric (RMPSE), just plug that in the cross validation function and you'll get the desired result. Besides that I wouldn't worry too much about fine tuning the other parameters because doing so won't improve performance too much, at least not so much compared to spending more time engineering features or cleaning the data.
Others
Random search and Bayesian parameter selection are also possible but I haven't made/found an implementation of them yet.
Here is a good primer on bayesian Optimization of hyperparameters by Max Kuhn creator of caret.
http://blog.revolutionanalytics.com/2016/06/bayesian-optimization-of-machine-learning-models.html |
H: Extremely dominant feature?
I'm new to datascience. I was wondering how one should treat an extremely dominant feature.
For example, one of the features is "on"/"off", and when it's "off", none of the other features matter and the output will just always be 0. So should I drop all rows where it's "off" in my train/test data sets? I feel like I would get a better fit that way.
If I delete those rows, I'm concerned about how I would handle those rows in the test set. For example, I'd have to write code to loop through the data and put a 0 in the prediction column for those rows, as well as make sure everything else lines up. (This is all Kaggle related, so the training set is several columns of features and a y_column, whereas the test set doesn't have the y_column and we're supposed to predict it.)
I'm using Python and Scikit Learn's random forest, if that matters.
AI: Actually, it shouldn't really matter what classification algorithm you use. The whole point of machine learning is that the algorithm learns how to combine the available features to achieve the desired result. If one feature has the ability to 'turn the others off,' the algorithm will learn that (It'll also learn lots of things that you probably aren't aware of).
So in short, no, modifying the data this way probably won't affect classification performance. Not needing to incorporate these kinds of things into the training set is part of what makes machine learning so cool! |
H: Handling a feature with multiple categorical values for the same instance value
I have data in the following form:
table 1
id, feature1, predict
1, xyz,yes
2, abc, yes
table2
id, feature2
1, class1
1, class2
1, class3
2, class2
I could perform a one many join and train on the resultant set- which is one way to go about it. But If I rather wanted to maintain the length of the resultant set equal length of table 1, what is the technique?
AI: One possible approach is to perform an encoding, where each level of feature2 corresponds to a new feature (column).
This way you can describe the 1:N relation between features 1 and 2.
Here a small example in R
> table1 <- data.frame(id = c(1,2), feature1 = c("xyz","abc"), predict = c(T,T))
> table2 <- data.frame(id = c(1,1,1,2), feature2 = c("class1", "class2", "class3", "class2"))
>
> ## encoding
> table(table2)
feature2
id class1 class2 class3
1 1 1 1
2 0 1 0
The new object contains the (now unique) id and the settings of feature2.
You only need to merge (join) the result with table1 (basically the same task as a DB join; which variant, inner, outer or full, depends on your requirements).
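Continuing the example, a hedged sketch of that join (it assumes the id values can be recovered from the row names of the encoded table):
encoded <- as.data.frame.matrix(table(table2))
encoded$id <- as.numeric(rownames(encoded))

# inner join back to table1, keeping one row per id
result <- merge(table1, encoded, by = "id")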
H: How should I analyze this data from reddit (sample text included)
I have downloaded some data to learn about machine learning and distributed computing. I used WinZip to uncompress a .bz2. Now I have text (opened in Notepad) which looks like this (this file is long, this is just a sample):
{"parent_id":"t3_5yba3","created_utc":"1192450635","ups":1,"controversiality":0,"distinguished":null,"subreddit_id":"t5_6","id":"c0299an","downs":0,"archived":true,"link_id":"t3_5yba3","score":1,"author":"bostich","score_hidden":false,"body":"test","gilded":0,"author_flair_text":null,"subreddit":"reddit.com","edited":false,"author_flair_css_class":null,"name":"t1_c0299an","retrieved_on":1427426409}
{"score_hidden":false,"body":"much smoother.\r\n\r\nIm just glad reddit is back, #reddit in mIRC was entertaining but I had no idea how addicted I had become. Thanks for making the detox somewhat short.","author_flair_text":null,"gilded":0,"link_id":"t3_5yba3","score":2,"author":"igiveyoumylife","author_flair_css_class":null,"name":"t1_c0299ao","retrieved_on":1427426409,"edited":false,"subreddit":"reddit.com","ups":2,"controversiality":0,"parent_id":"t3_5yba3","created_utc":"1192450639","downs":0,"archived":true,"distinguished":null,"subreddit_id":"t5_6","id":"c0299ao"}
{"author":"Arve","link_id":"t3_5yba3","score":0,"body":"Can we please deprecate the word \"Ajax\" now? \r\n\r\n(But yeah, this _is_ much nicer)","score_hidden":false,"author_flair_text":null,"gilded":0,"subreddit":"reddit.com","edited":false,"author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299ap","created_utc":"1192450643","parent_id":"t1_c02999p","controversiality":0,"ups":0,"distinguished":null,"id":"c0299ap","subreddit_id":"t5_6","downs":0,"archived":true}
{"author_flair_text":null,"gilded":0,"score_hidden":false,"body":"[deleted]","score":1,"link_id":"t3_5yba3","author":"[deleted]","name":"t1_c0299aq","retrieved_on":1427426409,"author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"ups":1,"controversiality":0,"created_utc":"1192450646","parent_id":"t3_5yba3","archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299aq","distinguished":null}
{"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"name":"t1_c0299ar","retrieved_on":1427426409,"author":"gigaquack","link_id":"t3_5yba3","score":3,"body":"Oh, I see. Fancy schmancy \"submitting....\"","score_hidden":false,"author_flair_text":null,"gilded":0,"distinguished":null,"subreddit_id":"t5_6","id":"c0299ar","downs":0,"archived":true,"created_utc":"1192450646","parent_id":"t1_c0299ah","controversiality":0,"ups":3}
{"author_flair_text":null,"gilded":0,"body":"testing ...","score_hidden":false,"author":"Percept","score":1,"link_id":"t3_5yba3","retrieved_on":1427426409,"name":"t1_c0299as","author_flair_css_class":null,"edited":false,"subreddit":"reddit.com","controversiality":0,"ups":1,"parent_id":"t3_5yba3","created_utc":"1192450656","archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299as","distinguished":null}
{"ups":3,"controversiality":0,"parent_id":"t1_c0299ar","created_utc":"1192450658","archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299at","distinguished":null,"author_flair_text":null,"gilded":0,"score_hidden":false,"body":"I like it. One more time...","score":3,"link_id":"t3_5yba3","author":"gigaquack","name":"t1_c0299at","retrieved_on":1427426409,"author_flair_css_class":null,"subreddit":"reddit.com","edited":false}
{"ups":2,"controversiality":0,"parent_id":"t1_c02999j","created_utc":"1192450662","archived":true,"downs":0,"id":"c0299au","subreddit_id":"t5_6","distinguished":null,"gilded":0,"author_flair_text":null,"score_hidden":false,"body":" try refreshing yor cache, that worked for me \n//edit: trying to edit","score":2,"link_id":"t3_5yba3","author":"mcm69","retrieved_on":1427426409,"name":"t1_c0299au","author_flair_css_class":null,"edited":false,"subreddit":"reddit.com"}
{"controversiality":0,"ups":3,"parent_id":"t1_c0299at","created_utc":"1192450670","downs":0,"archived":true,"distinguished":null,"subreddit_id":"t5_6","id":"c0299av","body":"K. I lied. Just one more...","score_hidden":false,"author_flair_text":null,"gilded":0,"author":"gigaquack","score":3,"link_id":"t3_5yba3","author_flair_css_class":null,"name":"t1_c0299av","retrieved_on":1427426409,"edited":false,"subreddit":"reddit.com"}
{"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299aw","author":"deki","score":2,"link_id":"t3_5yba3","body":"I also wonder what the differences are...","score_hidden":false,"gilded":0,"author_flair_text":null,"distinguished":null,"id":"c0299aw","subreddit_id":"t5_6","downs":0,"archived":true,"parent_id":"t3_5yba3","created_utc":"1192450681","controversiality":0,"ups":2}
{"created_utc":"1192450683","parent_id":"t1_c0299av","controversiality":0,"ups":3,"subreddit_id":"t5_6","id":"c0299ax","distinguished":null,"archived":true,"downs":0,"author":"gigaquack","score":3,"link_id":"t3_5yba3","author_flair_text":null,"gilded":0,"score_hidden":false,"body":"So addictive...","edited":false,"subreddit":"reddit.com","name":"t1_c0299ax","retrieved_on":1427426409,"author_flair_css_class":null}
{"downs":0,"archived":true,"distinguished":null,"subreddit_id":"t5_6","id":"c0299ay","controversiality":0,"ups":1,"created_utc":"1192450687","parent_id":"t3_5yba1","author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299ay","subreddit":"reddit.com","edited":false,"body":"I can't post a story to proggit - I got \"you are trying to submit too fast\" on my first submission.","score_hidden":false,"author_flair_text":null,"gilded":0,"author":"llimllib","score":1,"link_id":"t3_5yba1"}
{"gilded":0,"author_flair_text":null,"score_hidden":false,"body":"Alright I'm done.","author":"gigaquack","score":3,"link_id":"t3_5yba3","name":"t1_c0299az","retrieved_on":1427426409,"author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"controversiality":0,"ups":3,"parent_id":"t1_c0299ax","created_utc":"1192450691","archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299az","distinguished":null}
{"distinguished":null,"subreddit_id":"t5_6","id":"c0299b0","downs":0,"archived":true,"created_utc":"1192450696","parent_id":"t3_5yba3","ups":12,"controversiality":0,"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299b0","score":12,"link_id":"t3_5yba3","author":"igiveyoumylife","score_hidden":false,"body":"Is anyone else's \"recommended\" page completely empty? Mine is, and it is usually jam-packed.","author_flair_text":null,"gilded":0}
{"downs":0,"archived":true,"distinguished":null,"id":"c0299b1","subreddit_id":"t5_6","ups":1,"controversiality":0,"parent_id":"t1_c02999j","created_utc":"1192450700","author_flair_css_class":null,"name":"t1_c0299b1","retrieved_on":1427426409,"subreddit":"reddit.com","edited":false,"body":"[deleted]","score_hidden":false,"author_flair_text":null,"gilded":0,"link_id":"t3_5yba3","score":1,"author":"[deleted]"}
{"parent_id":"t3_5yba3","created_utc":"1192450705","controversiality":0,"ups":2,"id":"c0299b2","subreddit_id":"t5_6","distinguished":null,"archived":true,"downs":0,"author":"[deleted]","score":2,"link_id":"t3_5yba3","author_flair_text":null,"gilded":0,"score_hidden":false,"body":"Ok, I guess we need to submit comments to test? ","edited":false,"subreddit":"reddit.com","retrieved_on":1427426409,"name":"t1_c0299b2","author_flair_css_class":null}
{"name":"t1_c0299b3","retrieved_on":1427426409,"author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"author_flair_text":null,"gilded":0,"body":"I can't submit any stories -- even the first time, i get, \"you are trying to submit too fast\" .. anyone else seeing this?\n\nEdit: This appears to be fixed.","score_hidden":false,"author":"raldi","score":5,"link_id":"t3_5yba3","archived":true,"downs":0,"id":"c0299b3","subreddit_id":"t5_6","distinguished":null,"controversiality":0,"ups":5,"parent_id":"t3_5yba3","created_utc":"1192450709"}
{"archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299b4","distinguished":null,"controversiality":0,"ups":1,"created_utc":"1192450737","parent_id":"t1_c0299a5","retrieved_on":1427426409,"name":"t1_c0299b4","author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"author_flair_text":null,"gilded":0,"body":"Working fine with normal Adblock.","score_hidden":false,"author":"deki","score":1,"link_id":"t3_5yba3"}
{"subreddit_id":"t5_6","id":"c0299b5","distinguished":null,"archived":true,"downs":0,"parent_id":"t3_5yba3","created_utc":"1192450739","controversiality":0,"ups":12,"subreddit":"reddit.com","edited":false,"retrieved_on":1427426409,"name":"t1_c0299b5","author_flair_css_class":null,"author":"jezmck","link_id":"t3_5yba3","score":12,"author_flair_text":null,"gilded":0,"score_hidden":false,"body":"can't see beta.reddit.com from here..."}
{"downs":0,"archived":true,"distinguished":null,"id":"c0299b6","subreddit_id":"t5_6","ups":2,"controversiality":0,"parent_id":"t1_c02999j","created_utc":"1192450748","author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299b6","subreddit":"reddit.com","edited":false,"body":"I had a problem as well, with my comment being lost. Then I did a \"super-refresh\" (shift-Reload) and it seemed to work.","score_hidden":false,"gilded":0,"author_flair_text":null,"score":2,"link_id":"t3_5yba3","author":"sickofthisshit"}
{"distinguished":null,"id":"c0299b7","subreddit_id":"t5_6","downs":0,"archived":true,"parent_id":"t3_5yba3","created_utc":"1192450748","ups":1,"controversiality":0,"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"name":"t1_c0299b7","retrieved_on":1427426409,"score":1,"link_id":"t3_5yba3","author":"hfaber","body":"hm?","score_hidden":false,"author_flair_text":null,"gilded":0}
{"subreddit":"reddit.com","edited":false,"retrieved_on":1427426409,"name":"t1_c0299b8","author_flair_css_class":null,"score":1,"link_id":"t3_5yba3","author":"paternoster","author_flair_text":null,"gilded":0,"score_hidden":false,"body":"I think so...","id":"c0299b8","subreddit_id":"t5_6","distinguished":null,"archived":true,"downs":0,"parent_id":"t1_c0299b2","created_utc":"1192450770","ups":1,"controversiality":0}
{"distinguished":null,"id":"c0299b9","subreddit_id":"t5_6","downs":0,"archived":true,"parent_id":"t1_c0299ac","created_utc":"1192450771","ups":2,"controversiality":0,"subreddit":"reddit.com","edited":false,"author_flair_css_class":null,"name":"t1_c0299b9","retrieved_on":1427426409,"link_id":"t3_5yba3","score":2,"author":"igiveyoumylife","score_hidden":false,"body":"well... I read Ron Paul's website.","gilded":0,"author_flair_text":null}
{"parent_id":"t1_c0299ap","created_utc":"1192450772","controversiality":0,"ups":2,"distinguished":null,"subreddit_id":"t5_6","id":"c0299ba","downs":0,"archived":true,"author":"zoomzoom83","link_id":"t3_5yba3","score":2,"score_hidden":false,"body":"No","gilded":0,"author_flair_text":null,"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"name":"t1_c0299ba","retrieved_on":1427426409}
{"gilded":0,"author_flair_text":null,"body":"Cry me a river. Most patents and\r\ncopyrights these days are theft of\r\npublic domain under the color of\r\nlaw, to an absurd degree.\r\n\r\nMickey Mouse copyright extended to\r\n100 years as a favor to everyone's\r\nfavorite political donor, Will Eisner.\r\nAmazon's one click patent.\r\n\r\nIt's probably been 20 years \r\nsince the Constitutional authority\r\nto grant patents and copyrights\r\nactually served, on the whole, to\r\n\"advance the arts and commerce\",\r\nrather than to stifle them.\r\n","score_hidden":false,"link_id":"t3_2zxa6","score":1,"author":"donh","retrieved_on":1427426409,"name":"t1_c0299bb","author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"ups":1,"controversiality":0,"parent_id":"t3_2zxa6","created_utc":"1192450781","archived":true,"downs":0,"id":"c0299bb","subreddit_id":"t5_6","distinguished":null}
{"parent_id":"t1_c0299b2","created_utc":"1192450789","controversiality":0,"ups":1,"distinguished":null,"subreddit_id":"t5_6","id":"c0299bc","downs":0,"archived":true,"author":"tashbarg","link_id":"t3_5yba3","score":1,"body":"jup","score_hidden":false,"author_flair_text":null,"gilded":0,"edited":false,"subreddit":"reddit.com","author_flair_css_class":null,"name":"t1_c0299bc","retrieved_on":1427426409}
{"retrieved_on":1427426409,"name":"t1_c0299bd","author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"gilded":0,"author_flair_text":null,"body":"As long as we can still say \"Web 2.0\"...","score_hidden":false,"link_id":"t3_5yba3","score":2,"author":"newton_dave","archived":true,"downs":0,"subreddit_id":"t5_6","id":"c0299bd","distinguished":null,"ups":2,"controversiality":0,"parent_id":"t1_c0299ap","created_utc":"1192450790"}
{"ups":3,"controversiality":0,"parent_id":"t1_c0299b7","created_utc":"1192450797","downs":0,"archived":true,"distinguished":null,"id":"c0299be","subreddit_id":"t5_6","score_hidden":false,"body":"what's new except the asynchronous submit (which alone isn't really worth being dubbed 'new comment system')?","author_flair_text":null,"gilded":0,"link_id":"t3_5yba3","score":3,"author":"hfaber","author_flair_css_class":null,"name":"t1_c0299be","retrieved_on":1427426409,"edited":false,"subreddit":"reddit.com"}
{"subreddit":"reddit.com","edited":false,"author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299bf","score":6,"link_id":"t3_5yba3","author":"paternoster","score_hidden":false,"body":"Yep","gilded":0,"author_flair_text":null,"distinguished":null,"id":"c0299bf","subreddit_id":"t5_6","downs":0,"archived":true,"parent_id":"t1_c0299b0","created_utc":"1192450809","ups":6,"controversiality":0}
{"name":"t1_c0299bg","retrieved_on":1427426409,"author_flair_css_class":null,"subreddit":"reddit.com","edited":false,"author_flair_text":null,"gilded":0,"score_hidden":false,"body":"[deleted]","score":1,"link_id":"t3_5yba3","author":"[deleted]","archived":true,"downs":0,"id":"c0299bg","subreddit_id":"t5_6","distinguished":null,"ups":1,"controversiality":0,"parent_id":"t3_5yba3","created_utc":"1192450827"}
{"subreddit_id":"t5_6","id":"c0299bh","distinguished":null,"archived":true,"downs":0,"created_utc":"1192450831","parent_id":"t1_c0299az","controversiality":0,"ups":2,"subreddit":"reddit.com","edited":false,"retrieved_on":1427426409,"name":"t1_c0299bh","author_flair_css_class":null,"author":"newton_dave","link_id":"t3_5yba3","score":2,"author_flair_text":null,"gilded":0,"body":"You can stop anytime you want.\n\nReally.","score_hidden":false}
{"distinguished":null,"id":"c0299bi","subreddit_id":"t5_6","downs":0,"archived":true,"created_utc":"1192450849","parent_id":"t1_c02999p","controversiality":0,"ups":3,"subreddit":"reddit.com","edited":false,"author_flair_css_class":null,"retrieved_on":1427426409,"name":"t1_c0299bi","author":"lyon","link_id":"t3_5yba3","score":3,"body":"hmm only worked after refresh...","score_hidden":false,"gilded":0,"author_flair_text":null}
{"body":"okay, dokay, lets see how the comment work now ... PING !!!","score_hidden":false,"author_flair_text":null,"gilded":0,"link_id":"t3_5yba3","score":2,"author":"joe24pack","author_flair_css_class":null,"name":"t1_c0299bj","retrieved_on":1427426409,"subreddit":"reddit.com","edited":false,"ups":2,"controversiality":0,"parent_id":"t3_5yba3","created_utc":"1192450869","downs":0,"archived":true,"distinguished":null,"id":"c0299bj","subreddit_id":"t5_6"}
{"distinguished":null,"subreddit_id":"t5_6","id":"c0299bk","downs":0,"archived":true,"created_utc":"1192450870","parent_id":"t3_5yba3","controversiality"
AI: Data Science is not an algorithm you run on your data; it is a process that helps you answer a specific question. The key to being a data scientist is asking the right questions. So first, since you want to become familiar with machine learning, examine your data and try to understand what questions they can answer for you.
Examples:
Can you cluster the reddit submissions into categories?
Can you predict the score of a submission from its number of comments?
Can you predict which subreddit a user will post in next?
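If you want something concrete to start from, here is a minimal pandas sketch for loading the sample above; the file name comments.jsonl is an assumption, and the summaries are only meant as a starting point for the second example question.

    # Load one JSON object per line and compute simple per-submission summaries.
    import pandas as pd

    df = pd.read_json("comments.jsonl", lines=True)  # hypothetical file name

    # How many comments does each submission (link_id) receive, and what is
    # the mean comment score?  Useful raw material for a score-vs-comments model.
    per_link = df.groupby("link_id").agg(
        n_comments=("id", "count"),
        mean_score=("score", "mean"),
    )
    print(per_link.sort_values("n_comments", ascending=False).head())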
H: Algorithm to construct similarity structure from hash lookup table
I have constructed a lookup table using locality-sensitive hashing to find near-duplicate documents/records. If two records (columns) share the same hash value in a row (band), they are considered similar. For example, the structure is
     R1   R2   R3   R4   R5
b1   a1   a2   a3   a2   a5   ...
b2   a2   a4   a1   a4   a4   ...
b3   a3   a5   a3   a7   a4   ...
Since each band holds similar records, similar record sets will be
S1 = {R2,R4}
S2 = {R2, R4, R5}
S3 = {R1,R3}
And the merged similarity structures will be
S1' = {R2,R4,R5}
S2' = {R1,R3}
I want to scan the whole table and merge all similarity structures so that I end up with buckets of similar records. The hash table is high-dimensional, so a naive approach is not likely to work. What types of algorithms should I consider to achieve this efficiently?
Edit 1:
Question updated to clarify more on the objective
AI: The algorithm you are asking about is very straightforward.
In effect, you are looking for connected components in a graph whose edges are determined by matching hash values. You can compute them efficiently with the disjoint-set (union-find) data structure.
Your particular variation is that, apart from keeping track of the records in each component, you also keep track of the m (number of rows/bands) sets of hash values that have been seen for each component.
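Here is a minimal union-find sketch illustrating the idea on the example above. The input format (a list of bands, each mapping a hash value to the records that share it) and the helper names are assumptions made for illustration, not something prescribed by your table layout.

    # Merge records into buckets using union-find over per-band hash buckets.
    def find(parent, x):
        # Path halving: walk up to the root, flattening the chain as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(parent, a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra

    def merge_similar(bands, records):
        parent = {r: r for r in records}
        for band in bands:                   # each band: hash value -> records sharing it
            for bucket in band.values():
                for other in bucket[1:]:     # union every record with the bucket's first member
                    union(parent, bucket[0], other)
        groups = {}
        for r in records:                    # group records by their root
            groups.setdefault(find(parent, r), set()).add(r)
        return list(groups.values())

    bands = [
        {"a2": ["R2", "R4"]},                # band b1
        {"a4": ["R2", "R4", "R5"]},          # band b2
        {"a3": ["R1", "R3"]},                # band b3
    ]
    print(merge_similar(bands, ["R1", "R2", "R3", "R4", "R5"]))
    # [{'R2', 'R4', 'R5'}, {'R1', 'R3'}]

Each union/find operation is nearly constant time, so the total cost stays close to linear in the number of table entries.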
H: How to update weights in a neural network using gradient descent with mini-batches?
[I've cross-posted it to cross.validated because I'm not sure where it fits best]
How does gradient descent work for training a neural network if I use mini-batches (i.e., sample a subset of the training set at each step)? I have thought of three different possibilities:
Epoch starts. We sample and feedforward one minibatch only, get the error and backprop it, i.e. update the weights. Epoch over.
Epoch starts. We sample and feedforward a minibatch, get the error and backprop it, i.e. update the weights. We repeat this until we have sampled the full data set. Epoch over.
Epoch starts. We sample and feedforward a minibatch, get the error and store it. We repeat this until we have sampled the full data set. We somehow average the errors and backprop them by updating the weights. Epoch over.
AI: Let us say that the output of a neural network given its parameters is $$f(x;w)$$
Let us define the loss function as the squared L2 loss (in this case).
$$L(X,y;w) = \frac{1}{2n}\sum_{i=1}^{n}[f(X_i;w)-y_i]^2$$
Here the batch size is denoted by $n$. In other words, we iterate over the data set in mini-batches of size $n$; after each mini-batch we update the weights using the gradient averaged over that batch, and we repeat this until every data point has been used, at which point the epoch is over. This corresponds to your second option. The gradient in this case is:
$$\frac{\partial L(X,y;w)}{\partial w} = \frac{1}{n}\sum_{i=1}^{n}[f(X_i;w)-y_i]\frac{\partial f(X_i;w)}{\partial w}$$
Averaging over a mini-batch normalizes the gradient, so the updates are less noisy than with pure stochastic gradient descent (a batch size of one).
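As a concrete illustration, here is a minimal numpy sketch of that loop (your second option), using a plain linear model f(x; w) = Xw as a stand-in for the network's forward pass; the data, learning rate, and batch size are made up.

    # Mini-batch gradient descent: update the weights after every mini-batch,
    # and call the epoch over once the whole data set has been sampled.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(scale=0.1, size=1000)

    w = np.zeros(5)
    lr, batch_size, n_epochs = 0.1, 32, 5

    for epoch in range(n_epochs):
        order = rng.permutation(len(X))        # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            residual = Xb @ w - yb             # f(X_i; w) - y_i
            grad = Xb.T @ residual / len(idx)  # gradient averaged over the batch
            w -= lr * grad                     # weight update after every mini-batch
        loss = np.mean((X @ w - y) ** 2) / 2
        print(f"epoch {epoch}: loss = {loss:.4f}")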
H: How to analyse move kindness in python (Logistic regression, neural network, etc.)?
With a team of researchers we were given the assignment to build a scale for move kindness (how inviting a room or place is for exercise, e.g. a gym). In order to get objective results we were asked to build a measurement device. This device receives input from various sensors and possibly some true/false questions, and then analyses it against training data. After the analysis it returns a number from, say, 1 to 10, indicating the move kindness; so a gym might get an eight and a classroom a three. The problem is that move kindness is very subjective, so we have conducted some surveys. For example, one of the criteria is the temperature of the room/place. While the survey was being conducted we asked:
On a scale from 1 to 10, what is your opinion of the temperature?
At the same time we measured the actual temperature, and we put all this information into spreadsheets:
Rating (Move Kindness): 8
Temperature: 18 degrees Celsius
At the end of this survey we asked them to give the move kindness a rating.
So we have this, for example:
Temperature: 8, 18
Light: 7, 300
Humidity: 8, 50
....
Rating (Move Kindness): 8
So my question is, what's the best way to analyse these data for a reliable measurement device using python?
We were thinking of using neural networks, because they can be trained, but logistic regression or some other machine learning algorithm is also an option. Can anyone give me some direction on this?
AI: Okay, so from what I understand, you have a regression problem that takes into account a variety of physical features. The reason I call this a regression problem, versus a classification problem, is that the scale you are trying to predict is ordinal.
There are a couple of approaches. If your features are discriminative and sufficiently linear, a simple least-squares linear regression might work. If you believe the problem is too complicated for a linear regression, a simple vanilla neural network with a single output should do. I would recommend the scikit-learn library in Python for all models that are not neural networks; see its generalized linear models documentation.
That documentation has code samples and mathematical explanations. If you decide to use neural networks, and you don't have a large number of samples or a need for GPU acceleration, the pyBrain library works well.
I wouldn't recommend logistic regression (since you mentioned it in your question), simply because logistic regression is a classification model, and I believe you are better off approaching this as a regression problem.
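To make this concrete, here is a minimal scikit-learn sketch assuming you have collected your survey results into an array of sensor readings and an array of overall ratings; the numbers below are invented for illustration.

    # Fit a least-squares linear regression on sensor readings -> overall rating.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Columns: temperature (deg C), light (lux), humidity (%) -- hypothetical rows.
    X = np.array([
        [18, 300, 50],
        [22, 450, 40],
        [15, 200, 60],
        [25, 500, 45],
        [20, 350, 55],
    ])
    y = np.array([8, 9, 5, 7, 8])          # overall move-kindness ratings

    model = LinearRegression()
    # With the real data set, check generalisation via cross-validation first.
    print(cross_val_score(model, X, y, cv=2, scoring="neg_mean_absolute_error"))

    model.fit(X, y)
    print(model.predict([[19, 320, 52]]))  # predicted rating for a new room

If the linear model underfits, scikit-learn's MLPRegressor offers a small feed-forward neural network behind the same fit/predict interface.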
H: Why in deep learning or another AI technique python is used rather than other languages?
I want to know: why Python, instead of C#, Java, C/C++, and so on?
AI: Short answers
Tooling. Python has fantastic math, statistics, and linear algebra libraries.
Less Code, Same Result. Python provides quick and simple ways of achieving programming solutions compared to C#, Java, C++, etc. This means you'll write less code and achieve the same result. |
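As a toy illustration of the second point (not a benchmark, just a sketch of how concise numerical code becomes with numpy):

    # A whole matrix-vector product in one line, instead of an explicit loop.
    import numpy as np

    features = np.random.rand(1000, 10)   # 1000 samples, 10 features
    weights = np.random.rand(10)

    predictions = features @ weights      # dot product for every sample at once
    print(predictions.shape)              # (1000,)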
H: Handling categorical features in Factorization Machines algorithm - Feature Hashing vs. One-Hot encoding
For solving a prediction problem I'm planning to use Factorization Machines, a model that, in addition to learning linear weights for the features, learns a latent vector for each feature in order to model pairwise interactions between features in that latent space.
I was told that performing the hashing trick to convert categorical features into 1-of-k binary features (using sklearn's DictVectorizer, which returns a sparse matrix) can destroy feature interactions, and that I should try regular one-hot encoding instead.
Can anyone explain why?
AI: I decided to expand a bit on my comment and make a full answer.
The reason somebody may say that the hashing trick can destroy interactions is that it may map different features to the same bucket (a hash collision). In practice this is usually not a big deal.
Note that DictVectorizer doesn't perform the hashing trick:
When feature values are strings, this transformer will do a binary one-hot (aka one-of-K) coding: one boolean-valued feature is constructed for each of the possible string values that the feature can take on. For instance, a feature “f” that can take on the values “ham” and “spam” will become two features in the output, one signifying “f=ham”, the other “f=spam”.
To actually apply the hashing trick to dict-style features you need a different transformer: FeatureHasher (HashingVectorizer is the corresponding hashing transformer for raw text).
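Here is a small sketch contrasting the two on made-up dict-style records: DictVectorizer builds one column per (feature, value) pair, while FeatureHasher hashes each pair into a fixed number of columns, so distinct values may occasionally collide.

    # Exact one-hot encoding vs. the hashing trick on categorical dicts.
    from sklearn.feature_extraction import DictVectorizer, FeatureHasher

    records = [
        {"user": "u1", "item": "songA"},
        {"user": "u2", "item": "songB"},
        {"user": "u1", "item": "songB"},
    ]

    onehot = DictVectorizer(sparse=True)
    X_onehot = onehot.fit_transform(records)
    print(X_onehot.shape, onehot.get_feature_names_out())  # one column per feature=value

    hasher = FeatureHasher(n_features=8, input_type="dict")
    X_hashed = hasher.transform(records)
    print(X_hashed.shape)  # fixed width; unrelated values can share a column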