H: Dummy coding a column in R with multiple levels
I have a dependent variable measuring the net revenue. One of the major predictors affecting this is "product", i.e. the product sold to the customer. My randomly sampled dataset contains 1.4 million entries.
Products are assigned a specific categorical value.
I feel that using dummy variables to represent the products would be apt; however, there are 4,481 levels of products, and I do not know how to encode so many levels in R.
model.matrix(~ product, data=salesdata)
returns an error (it needs 38.4 GB of memory).
Can someone guide me a little on how to code these categorical variables?
Dependent: Net revenue (quantitative)
Independent: Product code (quantitative but treated as qualitative since values are nominal)
AI: You can use either sparse matrices or feature hashing.
Sparse Matrix
Using a sparse matrix is probably the most practical choice here. The following line of code should work; it uses the Matrix package.
sparseProducts <- sparse.model.matrix(~ product, data=salesdata)
Take my example:
sparseDiagonalMatrix <- sparse.model.matrix(~., data.frame(V1 = as.factor(seq(1, 10))))
Each column represents a different factor level; this will yield:
1 1 . . . . . . . . .
2 1 1 . . . . . . . .
3 1 . 1 . . . . . . .
4 1 . . 1 . . . . . .
5 1 . . . 1 . . . . .
6 1 . . . . 1 . . . .
7 1 . . . . . 1 . . .
8 1 . . . . . . 1 . .
9 1 . . . . . . . 1 .
10 1 . . . . . . . . 1
> class(sparseDiagonalMatrix)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"
Alternatively, you can remove the intercept and have a row of all zeros represent the first level:
sparseDiagonalMatrix <- sparse.model.matrix(~., data.frame(V1 = as.factor(seq(1, 10))))[, -1, drop=FALSE]
10 x 9 sparse Matrix of class "dgCMatrix"
V12 V13 V14 V15 V16 V17 V18 V19 V110
1 . . . . . . . . .
2 1 . . . . . . . .
3 . 1 . . . . . . .
4 . . 1 . . . . . .
5 . . . 1 . . . . .
6 . . . . 1 . . . .
7 . . . . . 1 . . .
8 . . . . . . 1 . .
9 . . . . . . . 1 .
10 . . . . . . . . 1
> class(sparseDiagonalMatrix)
[1] "dgCMatrix"
attr(,"package")
[1] "Matrix"
You will need a modelling package that supports sparse matrices to fit the net-revenue model, though. Fortunately, most modern mainstream packages do.
Feature Hashing
Here is a great explanation of feature hashing in R (among other techniques), which is an alternative that is especially useful when you have hundreds of thousands or millions of levels.
https://amunategui.github.io/feature-hashing/
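For illustration only, here is a minimal sketch of the same hashing idea in Python with scikit-learn's FeatureHasher; the product_codes list and the 1024-bucket size are made-up assumptions, not part of the question's data.
from sklearn.feature_extraction import FeatureHasher

# hypothetical high-cardinality product codes, one entry per sale
product_codes = ["P0001", "P4481", "P0763", "P0001"]

# hash each code into a fixed number of buckets instead of 4,481 dummy columns
hasher = FeatureHasher(n_features=2**10, input_type="string")
X = hasher.transform([[code] for code in product_codes])
print(X.shape)   # (4, 1024), stored as a scipy sparse matrix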
H: ROC curves/AUC values as a performance metric
I want to plot ROC curves using R. I have a prediction matrix, where each column shows the prediction values corresponding to different approaches. Also, I have a label vector. The column names of the prediction columns are ccs, badaI, badaII and the column name of the label vector is value. I am using the ROCR library for this as:
library(ROCR)
pred1 <- prediction(df$ccs,df$value)
roc <- performance(pred1,"tpr","fpr");
pred2 <- prediction(df$badaI,df$value)
roc2 <- performance(pred2,"tpr","fpr")
pred3 <- prediction(df$badaII,df$value)
roc3 <- performance(pred3,"tpr","fpr")
auc <- performance(pred1,"auc")
auc = round(unlist(auc@y.values),2)
auc2 <- performance(pred2,"auc")
auc2 = round(unlist(auc2@y.values),2)
auc3 <- performance(pred3,"auc")
auc3 = round(unlist(auc3@y.values),2)
plot(roc,col="black",lty=1, lwd=4, cex.lab=1.5, xaxt="n", yaxt="n")
axis(1,cex.axis=1.0);axis(2,cex.axis=1.0)
plot(roc2, add=TRUE,col="black",lty=3, lwd=4)
plot(roc3, add=TRUE,col="black",lty=2, lwd=4)
abline(0,1,col="gray60")
legend(0.3,0.30,c(paste0("CCS, ","AUC = ",auc),paste0("BADAI, ","AUC = ",auc2),paste0("BADAII, ","AUC = ",auc3)),
lty=c(1,3,2), col=c('black','black','black'), lwd=4,cex=1.4,bty="n")
While using the above code, I am getting the following plot:
My doubt is as follows:
While looking at the data, it is obvious that ccs and badaII should have higher AUC values than badaI, but the results are somehow the opposite. Can anyone help me understand why it is behaving like this?
The dput of the data used, df is:
structure(list(ccs = c(0.16, 0.04, 0.18, 0.09, 0.14, 0.14, 0.04,
0.04, 0.08, 0.76, 0.03, 0.03, 0.68, 0.06, 0.83, 0.15, 0.07, 0.02,
0.93, 0.22, 0.28, 0.11, 0.05, 0.01, 0.17, 0.15, 1, 0.13, 0.23,
0.44, 1), badaI = c(0.61, 0.11, 0.53, 0.79, 0.75, 0.82, 0.57,
0.67, 0.4, 0.95, 0.49, 0.61, 0.97, 0.52, 0.98, 0.7, 0.03, 0.18,
0.85, 0.94, 0.9, 0.77, 0, 0.37, 0.47, 0.88, 0.99, 0.55, 0.86,
0.96, 0.99), badaII = c(0.32, 0, 0.27, 0.12, 0.33, 0.12, 0.56,
0, 0.32, 0.18, 0.18, 0.11, 0.18, 0.54, 0.37, 0.33, 1, 0.39, 0.29,
0.11, 0.32, 0.53, 0.25, 0.21, 0.15, 0.16, 0.85, 0.31, 0.44, 1,
1), value = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1)), .Names = c("ccs",
"badaI", "badaII", "value"), row.names = c(NA, -31L), class = "data.frame")
UPDATE
I use the figure below to further explain my intuition. This figure is drawn using the same predictor values.
According to the value column, observations 15, 27, 30 and 31 have label 1 and the remaining observations have value 0. Looking at the figure, it is clear that CCS and badaII are best at discriminating between 0 and 1 compared to badaI, which always provides higher predictor values. In other words, with badaI it is difficult to predict 1, as its values are high for both 0 and 1.
I am not able to reconcile my intuition with the ROC plot. @TBSRounder, I understood what you have mentioned, but I need to support the above figure with the ROC plot, and I find the mismatch disappointing. Can anyone help me correlate the above figure with the ROC plot?
AI: The results you posted are correct; I did a quick check with library(pROC) and got the same thing. The important thing to understand about AUC/ROC is that the cut-point for calling a sample "1" or "0" is not fixed at 0.5; every possible threshold is considered. For each of your predictors, the range of the numbers does not matter at all. What does matter is how often higher numbers are associated with true positive labels. Example:
Method 1:[1, 3, 5, 6, 9, 14] (predictor)
Method 1:[0, 0, 1, 0 ,1 , 1] (true label)
AUC: 0.8889
Method 2:[0.01, 0.02, 200, 250, 300, 1000000] (predictor)
Method 2:[0, 0, 1, 0 ,1 , 1] (true label)
AUC: 0.8889
The AUC measure is just a measure of how often true "1" samples have a higher number than true "0" samples. The predictor does not even have to be bounded by [0,1]; it can have any range.
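You can verify this rank-invariance directly; here is a small sketch in Python with scikit-learn's roc_auc_score (the R result with pROC or ROCR is the same, the Python call is just for illustration):
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1]
print(roc_auc_score(y_true, [1, 3, 5, 6, 9, 14]))                   # 0.888...
print(roc_auc_score(y_true, [0.01, 0.02, 200, 250, 300, 1000000]))  # identical: only the ranking matters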
Your plot would be more informative if it were sorted. Consider the sorted plot below (crude, sorry) for each of the three methods, with the "1" labels highlighted in blue. badaI clearly has the closest grouping of "1"s towards the higher values.
H: group_by: How to Transform my Data
I'm coming from Stata and struggling to get used to group_by in dplyr. Perhaps using group_by is the wrong approach, but if you know Stata, you'll understand why I'm trying to use it (I think).
I'd like to transform the data as shown below:
Essentially, I'm trying to collapse each NAICS code into a single row, and to create columns for the 5 statistics under CONC and HHI for each NAICS code (the 4 values for CONC and one for HHI, ignoring the Xs)
The data is here. The code I'm trying is as follows:
ManufMktConc <- read.csv("Manufacturing.csv",
stringsAsFactors = FALSE)
ManufMktConc %<>%
tbl_df %>%
slice(-1) %>%
filter(CONCENFI.display.label!="All companies") %>%
select(-GEO.id,-GEO.id2,-GEO.display.label,-COMPANY,-RCPTOT,-YEAR.id)
ManufMktConc_byInd <- group_by(ManufMktConc,NAICS.id)
ManufMktConc_byInd %<>%
arrange(CONCENFI.id) %>%
mutate(FourFirm = ManufMktConc_byInd$CCORCPPCT[ManufMktConc_byInd$CONCENFI.id=="856"])
That gives me the following error:
> ManufMktConc_byInd %<>% arrange(CONCENFI.id) %>% mutate(FourFirm=ManufMktConc_byInd$CCORCPPCT[ManufMktConc_byInd$CONCENFI.id=="856"])
Error: incompatible size (651), expecting 4 (the group size) or 1
So I can tell that I am asking R to stick the 651 instances of ManufMktConc_byInd$CONCENFI.id == "856" into the group of 4. I guess this is where my understanding of group_by falls apart. Why isn't my logical statement only applied within each group?
Thank you.
AI: ManufMktConc <- read.csv("Downloads/Manufacturing.csv",
stringsAsFactors = FALSE)
ManufMktConc %<>%
tbl_df %>%
slice(-1) %>%
filter(CONCENFI.display.label!="All companies") %>%
select(-GEO.id,-GEO.id2,-GEO.display.label,-COMPANY,-RCPTOT,-YEAR.id)
## use melt and dcast from reshape2
require(reshape2)
# melt makes the data set tall and thin using id variables and measure variables
#
ManufMktConc_molten <- as.data.frame(ManufMktConc) %>%
melt(id.vars=1:4, measure.vars=5:6) %>%
filter(value!="X")
ManufMktConc_molten[1:5,]
NAICS.id NAICS.display.label CONCENFI.id CONCENFI.display.label variable value
1 311 Food manufacturing 856 4 largest companies CCORCPPCT 16.3
2 311 Food manufacturing 857 8 largest companies CCORCPPCT 24.2
3 311 Food manufacturing 858 20 largest companies CCORCPPCT 38.4
4 311 Food manufacturing 859 50 largest companies CCORCPPCT 50.9
5 3111 Animal food manufacturing 856 4 largest companies CCORCPPCT 30.2
# make a new column with the eventual column header. (Note this is different from your example.)
ManufMktConc_molten$label <- paste0(ManufMktConc_molten$variable,
trimws(substr(ManufMktConc_molten$CONCENFI.display.label,1,2)))
# cast it into multiple columns (something like a pivot in Excel).
ManufMktConc_result <- ManufMktConc_molten %>%
dcast(NAICS.id + NAICS.display.label ~ label) %>%
select(1,2,4,6,3,5,7) ## reorder columns
ManufMktConc_result[1:5,]
NAICS.id NAICS.display.label CCORCPPCT4 CCORCPPCT8 CCORCPPCT20 CCORCPPCT50 VSHERFI50
1 311 Food manufacturing 16.3 24.2 38.4 50.9 110.7
2 3111 Animal food manufacturing 30.2 40.7 57.8 71.5 368.6
3 31111 Animal food manufacturing 30.2 40.7 57.8 71.5 368.6
4 311111 Dog and cat food manufacturing 67.8 80.6 89.6 96.5 2019.4
5 311119 Other animal food manufacturing 24.3 36.2 51.5 68.2 228.3
H: Outlier detection for unbalanced classes
I have to make a predictive model for predicting a boolean Won/Lost variable based on some other numeric data, and further find out the features of observations that have 'Won'.
However, the proportion of 'Won's in my dataset is 0.05%. I've tried both oversampling and downsampling, but it hasn't worked. Even if I take an equal number of 'Won's and 'Lost's, the model is not accurate for the rest of the 'Lost' values. I've also tried weights, but that isn't working well either. Ideally I think I would have to put a very high weight on 'Won'.
PS: Using RandomForestClassifier, with a confusion matrix to verify.
I'm not keen on trying out SMOTE, as I've heard it's tough in Python.
So now I'm trying to look at it in a different way and do anomaly detection for the 'Won' case, as it is natural for the data to have so few 'Won' cases. So, two questions:
Is this a correct approach?
How to go about it using Python?
AI: You need to distinguish between these cases:
Data Imbalance
Data Imbalance + Very few number of samples (minority class)
Severe Data Imbalance + Very few number of samples (minority class)
20:60 vs. 10:20 vs. 100:1000 vs. 10:100
and these cases:
similarities between different classes.
wide variations within the same class.
You need to understand which of these cases your problem belongs to.
If you have very severe data imbalance, very few samples in the minority class, wide variation within the majority class, and similarities between different classes, then regular oversampling or downsampling techniques will not help you, nor will most of the synthetic oversampling techniques, which are designed specifically to deal with data imbalance but assume you have enough samples to begin with.
Try to focus more on ensemble techniques that are designed mainly to deal with data imbalance, such as those below (a Python sketch using two of them follows the list):
SMOTE-Boost
RUSBoost
SMOTEBagging
IIIVote
EasyEnsemble
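As a minimal sketch of that family in Python, assuming the imbalanced-learn package (my choice, not named above) is installed and using made-up toy data:
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.ensemble import RUSBoostClassifier, EasyEnsembleClassifier

# toy data with roughly 1% positives, standing in for the real Won/Lost set
X, y = make_classification(n_samples=20000, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for Model in (RUSBoostClassifier, EasyEnsembleClassifier):
    clf = Model(random_state=0).fit(X_tr, y_tr)
    print(Model.__name__)
    print(classification_report(y_te, clf.predict(X_te)))
The per-class recall in the report is usually more informative than plain accuracy for data this imbalanced.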
H: When to use Linear Discriminant Analysis or Logistic Regression
The Wikipedia article on Logistic Regression says:
Logistic regression is an alternative to Fisher's 1936 method, linear
discriminant analysis. If the assumptions of linear discriminant
analysis hold, application of Bayes' rule to reverse the conditioning
results in the logistic model, so if linear discriminant assumptions
are true, logistic regression assumptions must hold. The converse is
not true, so the logistic model has fewer assumptions than
discriminant analysis and makes no assumption on the distribution of
the independent variables.
Could someone help me to understand what the assumptions of linear discriminant analysis are, an example of where they hold, and how application of Bayes' rule results in the logistic model?
Would it be correct to say that Logistic Regression is always the preferred choice? Are there conditions where Linear Discriminant Analysis is to be preferred?
AI: This question was asked and answered on Cross Validated SE. The answer is a few years old, but it is still useful. If you want, you can click here to find their answer.
H: Overcome memory limitation when downloading from database into Orange
I would like to run the association rule mining algorithm of the Orange library on a dataset that is stored in a PostgreSQL database. The table 'buildingset' contains the itemsets for each user, thus each record is related to a user, and each field is related to an item. The values are either 1 (smallint) or missing. The table has about 14,000 records and 31 fields.
When I try to run the algorithm on this dataset, I get the following error:
ValueError Traceback (most recent call last):
File "/opt/orange/orange3/Orange/canvas/scheme/widgetsscheme.py", line 722, in process_signals_for_widget
handler(*args)
File "/home/bdukai/.local/lib/python3.4/site-packages/orangecontrib/associate/widgets/owassociate.py", line 444, in set_data
self.X = data.X
File "/opt/orange/orange3/Orange/data/sql/table.py", line 353, in X
self.download_data(AUTO_DL_LIMIT)
File "/opt/orange/orange3/Orange/data/sql/table.py", line 333, in download_data
raise ValueError("Too many rows to download the data into memory.")
ValueError: Too many rows to download the data into memory.
Thus, is there any way to overcome this limitation without upgrading the hardware?
AI: You can use the "download data to local memory" checkbox in the SQL Table widget; it should allow you to work with up to 1,000,000 rows.
H: RNN vs CNN at a high level
I've been thinking about the Recurrent Neural Networks (RNN) and their varieties and Convolutional Neural Networks (CNN) and their varieties.
Would these two points be fair to say:
Use CNNs to break a component (such as an image) into subcomponents (such as an object in an image, such as the outline of the object in the image, etc.)
Use RNNs to create combinations of subcomponents (image captioning, text generation, language translation, etc.)
I would appreciate it if anyone wants to point out any inaccuracies in these statements. My goal here is to get a clearer foundation on the uses of CNNs and RNNs.
AI: A CNN will learn to recognize patterns across space. So, as you say, a CNN will learn to recognize components of an image (e.g., lines, curves, etc.) and then learn to combine these components to recognize larger structures (e.g., faces, objects, etc.).
You could say, in a very general way, that a RNN will similarly learn to recognize patterns across time. So a RNN that is trained to translate text might learn that "dog" should be translated differently if preceded by the word "hot".
The mechanism by which the two kinds of NNs represent these patterns is different, however. In the case of a CNN, you are looking for the same patterns on all the different subfields of the image. In the case of a RNN you are (in the simplest case) feeding the hidden layers from the previous step as an additional input into the next step. While the RNN builds up memory in this process, it is not looking for the same patterns over different slices of time in the same way that a CNN is looking for the same patterns over different regions of space.
I should also note that when I say "time" and "space" here, it shouldn't be taken too literally. You could run a RNN on a single image for image captioning, for instance, and the meaning of "time" would simply be the order in which different parts of the image are processed. So objects processed early will inform the captioning of objects processed later.
H: Hidden neuron representation of weights
In an RBM, if we visualize the weights learned by the hidden units, they show that the neural net is learning basic shapes. For example, on the MNIST dataset, they learn features of the numbers they are trying to classify.
In a regular feed-forward net with one hidden layer, I can train the network to recognize digits, and it works, but when I try to visualize the hidden layer weights, I only see noise, no distinguishable feature. Why is that? Hasn't the network learned to recognize the digits?
AI: It has learnt to recognize the digits, but it might have put too much weight on single pixels. Try to add different amounts of L2 regularization or dropout and compare the visualizations of the weights. Adding some kind of regularization should make the net rely less on single / independent pixels and more on the inherent structure of the digits, giving you smoother weights / visualization.
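A rough sketch of that experiment in Python, assuming a current tf.keras install (the hidden-layer size, regularization strength, and epoch count are arbitrary choices; older Keras versions spell the regularizer argument differently):
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="sigmoid", input_shape=(784,),
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=2, batch_size=128, verbose=0)

# each column of the first weight matrix belongs to one hidden unit;
# reshape it back to 28x28 to see what that unit responds to
w = model.layers[0].get_weights()[0]   # shape (784, 16)
for i in range(16):
    plt.subplot(4, 4, i + 1)
    plt.imshow(w[:, i].reshape(28, 28), cmap="gray")
    plt.axis("off")
plt.show()
Rerun it with different l2 strengths (or a Dropout layer) and compare how smooth the weight images look.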
H: Does the count of items in a transaction matter to apriori?
When preparing my data for funneling into the Microsoft Association Rules algorithm, I was not sure if I should group my data by Transaction and Item, or have a record for every instance of an item in a transaction. Does the algorithm care and add weight if an item appears 3 times in a transaction? Or is it just looking for the existence of an item together with another item, regardless of how many are present?
AI: No, the count does not matter, and it is highly recommended to remove duplicate items and sort the items in lexicographical order in each transaction; this improves performance.
In association rule mining, an item is frequent iff it appears in many transactions, not many times within a single transaction. This is why you don't need duplicate items in each transaction.
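A tiny sketch of that preprocessing step in Python, with made-up transactions:
# drop duplicates and sort each transaction before mining
transactions = [["bread", "milk", "milk", "beer"], ["milk", "bread", "bread"]]
prepared = [sorted(set(t)) for t in transactions]
print(prepared)   # [['beer', 'bread', 'milk'], ['bread', 'milk']]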
H: When does cache get expired for a RDD in pyspark?
We use .cache() on an RDD for persistent caching of a dataset. My concern is: when will this cache expire?
dt = sc.parallelize([2, 3, 4, 5, 6])
dt.cache()
AI: It will not expire until Spark is out of memory, at which point it will remove RDDs from cache which are used least often. When you ask for something that has been uncached it will recalculate the pipeline and put it in cache again. If this would be too expensive, unpersist other RDDs, don't cache them in the first place or persist them on your file system.
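For example, a small sketch of the last two options, assuming the same SparkContext sc as in the question (MEMORY_AND_DISK is just one possible storage level):
from pyspark import StorageLevel

dt = sc.parallelize([2, 3, 4, 5, 6])
dt.persist(StorageLevel.MEMORY_AND_DISK)  # spill to disk rather than recompute when memory is tight
dt.sum()                                  # the first action actually materialises the cache
dt.unpersist()                            # explicitly release it once you no longer need it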
H: Backprop Through Max-Pooling Layers?
This is a small conceptual question that's been nagging me for a while: How can we back-propagate through a max-pooling layer in a neural network?
I came across max-pooling layers while going through this tutorial for Torch 7's nn library. The library abstracts the gradient calculation and forward passes for each layer of a deep network. I don't understand how the gradient calculation is done for a max-pooling layer.
I know that if you have an input ${z_i}^l$ going into neuron $i$ of layer $l$, then ${\delta_i}^l$ (defined as ${\delta_i}^l = \frac{\partial E}{\partial {z_i}^l}$) is given by:
$$
{\delta_i}^l = \theta^{'}({z_i}^l) \sum_{j} {\delta_j}^{l+1} w_{i,j}^{l,l+1}
$$
So, a max-pooling layer would receive the ${\delta_j}^{l+1}$'s of the next layer as usual; but since the activation function for the max-pooling neurons takes in a vector of values (over which it maxes) as input, ${\delta_i}^{l}$ isn't a single number anymore, but a vector ($\theta^{'}({z_j}^l)$ would have to be replaced by $\nabla \theta(\left\{{z_j}^l\right\})$). Furthermore, $\theta$, being the max function, isn't differentiable with respect to its inputs.
So....how should it work out exactly?
AI: There is no gradient with respect to non maximum values, since changing them slightly does not affect the output. Further the max is locally linear with slope 1, with respect to the input that actually achieves the max. Thus, the gradient from the next layer is passed back to only that neuron which achieved the max. All other neurons get zero gradient.
So in your example, $\delta_i^l$ would be a vector of all zeros, except that the $i^{*}$-th entry will get the value $\delta_j^{l+1}$, where $i^* = \operatorname{argmax}_{i} (z_i^l)$.
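A small NumPy sketch of exactly this routing rule, for a 2x2, stride-2 max-pool on a single-channel input (the shapes and helper names are my own, not from any particular framework):
import numpy as np

def maxpool_2x2_forward(x):
    # x: single-channel input of shape (H, W) with H and W even
    H, W = x.shape
    windows = (x.reshape(H // 2, 2, W // 2, 2)
                .transpose(0, 2, 1, 3)
                .reshape(H // 2, W // 2, 4))
    argmax = windows.argmax(axis=2)   # remember which input "won" each window
    return windows.max(axis=2), argmax

def maxpool_2x2_backward(dout, argmax, x_shape):
    # route each upstream gradient to the winning input; every other input gets zero
    H, W = x_shape
    dwindows = np.zeros((H // 2, W // 2, 4))
    i, j = np.indices(argmax.shape)
    dwindows[i, j, argmax] = dout
    return (dwindows.reshape(H // 2, W // 2, 2, 2)
                    .transpose(0, 2, 1, 3)
                    .reshape(H, W))

x = np.arange(16.0).reshape(4, 4)
out, argmax = maxpool_2x2_forward(x)
dx = maxpool_2x2_backward(np.ones_like(out), argmax, x.shape)
print(dx)   # ones exactly where the per-window maxima were, zeros elsewhere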
H: Reshaping of data for deep learning using Keras
I am a beginner to Keras and I have started with the MNIST example to understand how the library actually works. The code snippet of the MNIST problem in the Keras example folder is given as:
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
batch_size = 128
nb_classes = 10
nb_epoch = 12
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
..........
I am unable to understand the reshape function here. What is it doing, and why have we applied it?
AI: mnist.load_data() supplies the MNIST digits with structure (nb_samples, 28, 28) i.e. with 2 dimensions per example representing a greyscale image 28x28.
The Convolution2D layers in Keras however, are designed to work with 3 dimensions per example. They have 4-dimensional inputs and outputs. This covers colour images (nb_samples, nb_channels, width, height), but more importantly, it covers deeper layers of the network, where each example has become a set of feature maps i.e. (nb_samples, nb_features, width, height).
The greyscale images for MNIST digits input would either need a different CNN layer design (or a param to the layer constructor to accept a different shape), or the design could simply use a standard CNN and you must explicitly express the examples as 1-channel images. The Keras team chose the latter approach, which needs the reshape.
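To see the shape change on its own, here is a tiny NumPy sketch (the random array just stands in for the MNIST array; note that TensorFlow-backed Keras today defaults to channels-last ordering instead):
import numpy as np

X = np.random.rand(60000, 28, 28)                    # what mnist.load_data() returns: no channel axis
X_channels_first = X.reshape(X.shape[0], 1, 28, 28)  # as in the example: (samples, channels, rows, cols)
X_channels_last = np.expand_dims(X, -1)              # (60000, 28, 28, 1), the channels-last alternative
print(X.shape, X_channels_first.shape, X_channels_last.shape)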
H: Interpret User Interfaces with Machine Learning
I am currently working on a prototype of an application that should be able to interact with user interfaces.
Now every user interface has some common elements, like buttons, scrollbars, input fields etc.
I would like to use Machine Learning to "interpret" such user interfaces in such a way that I can later input a user interface as an image, for example, and let the prototype "try out" the interface, meaning clicking on buttons, using scrollbars, inputting some text into input fields, etc.
I know that this would have to be done using Image Recognition, since there are many different UIs.
I am specifically interested in Websites, Adobe Reader with an opened PDF (that in turn can be a form etc.), and Word with an opened Document (again this can contain forms etc.).
Now my main question is if there is already some research going on in this field that I can use, or even an existing tool for parts of the process.
Any help is appreciated :)
AI: I would try experimenting with recurrent neural networks: http://karpathy.github.io/2015/05/21/rnn-effectiveness/. Recurrent neural networks can output sequences of variable length given inputs of variable length. In your case a recurrent neural network might output a sequence like the following when given a user interface: click a button, select a field, type some text, hit enter. For another interface, the network might output only: click one button, click another button, and that's it. This would be useful for you because the sequence and length of actions from interface to interface might change a lot.
You could also experiment with reinforcement learning and build an algorithm that has an objective (reach some final page in as few actions as possible). The algorithm would start by doing random things (like clicking the same button a bunch of times), and then gradually learn over time to take appropriate actions. If you go that route you could use deep learning and Monte Carlo Tree Search (MCTS) like what Alpha Go did.
In either case you're going to need a framework that can train an algorithm quickly, because you're likely to have to go through a lot of iterations. TensorFlow (https://www.tensorflow.org/) is one option (I've started using it recently, and I like it a lot because of its ease of use). TensorFlow is capable of building both recurrent neural nets and deep neural nets.
H: Determining correlated product categories using store purchase history
I have a large dataset that contains product purchase history, like so:
userID productID category subcategory
123 ABC Kitchen Knives
123 BEA Kitchen Organization
233 ZZS Electronics Phones
For a first project, I'm looking to answer the question: "What discrete groups of categories/subcategories do shoppers tend to shop in?". For example, we may find that shoppers who buy Monitors are highly likely to buy Keyboards and Mice as well.
Any direction on getting started on a problem like this is appreciated!
AI: This is classic market basket analysis.
Clustering is the wrong tool; you want frequent itemset mining and association rules instead.
https://en.wikipedia.org/wiki/Association_rule_learning
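One way to get started in Python is sketched below, assuming the mlxtend package (my choice, not named in the answer) and a made-up purchase table shaped like the question's; exact function signatures can vary between mlxtend versions:
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# hypothetical purchase history in the same shape as the question
purchases = pd.DataFrame({
    "userID":      [123, 123, 233, 233, 345],
    "subcategory": ["Knives", "Organization", "Phones", "Keyboards", "Knives"],
})

# one row per user, one boolean column per subcategory bought
basket = pd.crosstab(purchases["userID"], purchases["subcategory"]) > 0

itemsets = apriori(basket, min_support=0.2, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=1.0)
print(rules)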
H: How would you teach multiplications to a neural network?
Let's say you train a neural network on Input = 1/Output = 2; Input = 4/Output = 8.
How would you train a machine to recognize that it needs to multiply the input by 2, from scratch?
AI: You need to limit the network model to one which naturally would generalise your problem.
For a simple multiplication, this would be a single linear neuron. You can also train bias so that the network represents y = Ax + B.
If you take that very simple network give it just 2 or 3 examples to learn from, and train it using least squares error, it will quickly converge to the function you want. A single layer linear network like this can generalise to y = Ax + B where x, y and B are vectors, and A is a matrix - i.e. it can solve simultaneous linear equations provided you supply enough examples to do so (usually number of examples equals number of dimensions of the vectors).
This is actually really trivial, and if you train such a simple network using any modern NN library, it will be very fast to learn your linear function. However, in the general case of having a specific non-linear function and wanting the network to learn it in a general sense, it is not possible. The network can be made to learn to generalise reasonably well near where you have provided example inputs and outputs, but will output incorrect results when you give it an x value far away from the training examples.
It is important to note that the network does not learn the math formula for your function. It finds the best approximation given its own internal model. But by picking a network whose internal model is a good match to the function you want to learn, it is going to generalise well to it, and may be able to get it exactly if there is no noise in the examples and there are enough of them.
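As a concrete sketch of the single linear neuron, here are a few lines of NumPy gradient descent on the two examples from the question (the learning rate and iteration count are arbitrary choices):
import numpy as np

# two training examples from the question: 1 -> 2 and 4 -> 8
x = np.array([1.0, 4.0])
y = np.array([2.0, 8.0])

a, b = 0.0, 0.0              # single linear neuron: y_hat = a*x + b
lr = 0.05
for _ in range(2000):
    err = a * x + b - y
    a -= lr * (err * x).mean()   # gradient of the mean squared error w.r.t. a
    b -= lr * err.mean()         # gradient w.r.t. the bias

print(a, b)                  # converges to roughly a=2, b=0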
H: NNDSVD to initialize Convex-NMF
I'm working with the Convex Nonnegative Matrix Factorization Algorithm described in Ding, Li, Jordan 2008 ("Convex and Semi-Nonnegative Matrix
Factorizations").
Good initialization strategies make all the difference and using the described k-means clustering to get started works very well.
But there is a paper describing NNDSVD (Boutsidis, Gallopoulos, 2007) to initialize "traditional" NMF Algos. I wanted to test this, to see if it improves my results.
The nonnegativity constraints for Convex-NMF are relaxed. X can have mixed sign data, where X ~ XWG', with factors W and G having only positive data.
I've implemented NNDSVD just like in the paper (in C++ w/ OpenCV), but since X has mixed sign data, the resulting W contains negative values as well.
Has anyone tried to adapt this initialization strategy? Are there any other recommended initialization strategies for Convex-NMF, aside from the ones mentioned in the Ding et al Paper (and random init)?
AI: I've played before with a library for ("classic") NMF from University of Vienna: libNMF
While it wasn't useful for me at first, since they don't implement Convex-NMF, I had a look at their NNDSVD implementation.
They simply replace all negative values and values smaller than the machine epsilon with zero.
I tried that, but it fails, since the update rules for Convex-NMF are such that I get a division by zero. Since they add 0.2 to all values in their proposed initialization scheme (for "smoothing"), I just did the same here.
It works; the precision is worse than expected (a bit worse than k-means, which is my best approach for now), but I'll tweak it a bit more.
H: Decision Stumps with same value leaf nodes
I'm doing some AdaBoost with decision stumps, and in inducing a binary classification decision stump, I'm finding both leaf nodes to have a positive value. Can this be the case? Is this possible?
AI: What is the overall response rate? If it's low (even 15-20%) it may be difficult to find decision stumps that contain one leaf with > 50% response!
You could consider oversampling or changing the cutoff probability, but I think if you're using only 2-leaf trees, your model is bound to struggle.
H: Why does an L2 penalty prefer smaller and more diffuse weight vectors?
For instance, why is a weight vector of [0.25, 0.25, 0.25, 0.25] (for which the L2 penalty is 0.25) favoured over simply [1, 0, 0, 0] (for which the L2 penalty is 1)?
In this case, both weights would give the same dot product when using W.T * X
AI: You answered this in your question. "Prefer" means "produces a smaller penalty", and you've identified that the penalty in the first case is smaller. Why would this be a good thing? It amounts to preferring an explanation based a bit on many features, rather than one based entirely on one feature. That's often a good bet to avoid overfitting.
These two weight vectors do not produce the same dot product with other vectors in general. If the vector X contained all identical values, they would.
If the 4 input features were regularly identical or nearly so, then it means they're redundant, and you may prefer to use just 1 of the features (your second case), instead of a bit of each. In this case, an L1 penalty would at least be indifferent between the two, rather than penalizing the second one more.
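A quick NumPy check of both points, with made-up input vectors:
import numpy as np

w1 = np.array([0.25, 0.25, 0.25, 0.25])
w2 = np.array([1.0, 0.0, 0.0, 0.0])
print((w1 ** 2).sum(), (w2 ** 2).sum())   # L2 penalties: 0.25 vs 1.0

x_same = np.array([2.0, 2.0, 2.0, 2.0])   # identical features: the dot products agree
x_diff = np.array([2.0, 0.0, 0.0, 0.0])   # otherwise they do not
print(w1 @ x_same, w2 @ x_same)           # 2.0  2.0
print(w1 @ x_diff, w2 @ x_diff)           # 0.5  2.0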
H: Predicting next action to take to reach a final state
Does anyone know of an algorithm that could be used to determine the next action to take to reach a desired state when trained on time-series data?
For example, a robot starts at a certain state, then takes an action to get to another state. This occurs continuously for many iterations (imagine the robot is randomly exploring a room). If the robot is at a specific starting state, and I desire the robot to end up in a different state, is there an algorithm that could recommend the best next action (or set of next actions) to take to reach that final desired state?
One approach I've tried is to use a neural network with the current state and the next state being the input and the action to get from the current state to the next state being the output. The network would know for a single state how to get to a next desired state that is one action away. The issue is, what if the desired state is many actions away?
AI: The problem you've described can be formalized as a Markov decision process, the foundational problem of reinforcement learning. In broad strokes, reinforcement learning is concerned with how agents (the robot) in a given environment (the room) ought to take actions (movements from one state to another) to maximize some notion of reward.
Formalizing your problem requires defining a few parts of an MDP model:
Set of states $S$
Set of actions $A$
A reward function $R(s)$, defining the reward of arriving in a given state. In your case, a simple scheme of $R(s) = \mathbb{1}(s = s_{goal})$ is one option.
A transition function $T(s,a,s')$ giving the probability of winding up in state $s'$ having taken action $a$ from state $s$.(Note that you can model deterministic transitions by returning a value of $1$ for a single $s'$.)
If the problem goes on infinitely, you'll also need a discount factor $\gamma$.
In reinforcement learning, the term optimal policy describes a function that returns the best action to take from a given state. I.e., this gives the recommendation you're looking for.
If you know all of the model components described above—or can at least derive ones that match your problem—you can use a variety of planning algorithms to find the optimal policy, e.g. value iteration or policy iteration.
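For a feel of what planning looks like, here is a toy value-iteration sketch in Python on a made-up one-dimensional corridor (the states, rewards, and discount factor are all invented for illustration):
import numpy as np

# a toy 1-D corridor: states 0..4, goal at state 4, actions: left / right
n_states, goal, gamma = 5, 4, 0.9
actions = [-1, +1]

def step(s, a):
    return min(max(s + a, 0), n_states - 1)   # deterministic transition

R = np.zeros(n_states); R[goal] = 1.0          # reward for arriving in a state
V = np.zeros(n_states)
for _ in range(100):                           # value iteration
    V = np.array([max(R[step(s, a)] + gamma * V[step(s, a)] for a in actions)
                  for s in range(n_states)])

policy = [max(actions, key=lambda a: R[step(s, a)] + gamma * V[step(s, a)])
          for s in range(n_states)]
print(V, policy)   # the greedy policy recommends moving right, toward the goal, from every state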
If you don't know the rewards and transitions—say, for example, that the robot is seeking a sensor attached to a charging station and you don't know where it is or the size of the enclosing room—you'll likely need to explore algorithms that observe action-outcome pairs and try to learn optimal policy from these learning episodes.
A full description of these is beyond the scope of your question, but a good place to start is Sutton & Barto's Reinforcement Learning: An Introduction. An HTML version is freely available. Another resource is the RL Udacity course produced by Georgia Tech.
In your example, you may also want to research potential-based reward functions. Very loosely, one potential might be the robot's distance to the goal state, and the reward would be based on changes to this potential value. (This is described in a paper by Ng, Harada and Russell, and in unit 6 of the GA Tech course mentioned above.)
H: Which features do I select from text?
Hello, I am very new to data science, machine learning, and stack overflow. Excuse me for being unclear or asking naive questions.
My question is as follows:
From any given document, I am trying to classify it according to the emotions it evokes in readers, using a neural network. However, I am having difficulties with feature selection. I'm thinking of using NLTK and RAKE to extract keywords, but I don't know how I can translate them into features. Should I hash the keywords for one feature? Or should I find a dictionary of English words (e.g. WordNet) and use every word in the dictionary as a feature?
AI: Using NLTK in Python, you should first tokenize the sentences into words; you can even use n-grams for 2-gram or 3-gram bags of words. The reason I am suggesting n-grams is this: suppose you have a sentence like "I am not happy with this product"; a 2-gram tokenization gives ['not happy', 'happy with', 'with this', 'this product'], where "I" and "am" are treated as stopwords. Using HashingTF you can then hash the sentence into a feature vector of the form {word position: frequency of word, ...}, i.e. a highly sparse vector. For hashing in PySpark, check this documentation.
The Python code below will help you tokenize into bags of words:
import string
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
PUNCTUATION = set(string.punctuation)
STOPWORDS = set(stopwords.words('english'))
STEMMER = PorterStemmer()
example = ['Hello Krishna Prasad, this is test file for spark testing',
'Another episode of star',
'There are far and away many stars'
'A galloping horse using two coconuts'
'My kingdom for a horse'
'A long time ago in a galaxy far']
def tokenize(text):
tokens = word_tokenize(text)
lowercased = [t.lower() for t in tokens]
no_punctuation = []
for word in lowercased:
punct_removed = ''.join([letter for letter in word if not letter in PUNCTUATION])
no_punctuation.append(punct_removed)
no_stopwords = [w for w in no_punctuation if not w in STOPWORDS]
stemmed = [STEMMER.stem(w) for w in no_stopwords]
return [w for w in stemmed if w]
tokenized_word = [tokenize(text) for text in example]
for word in tokenized_word:
print word
The output of the above code is:
$python WordFrequencyHash.py
[u'hello', u'krishna', u'prasad', u'test', u'file', u'spark', u'test']
[u'anoth', u'episod', u'star']
[u'far', u'away', u'mani', u'starsa', u'gallop', u'hors', u'use', u'two', u'coconutsmi', u'kingdom', u'horsea', u'long', u'time', u'ago', u'galaxi', u'far']
You can also use word2vec or a CountVectorizer to turn tokens into features.
H: What is the minimum size of the test set?
The mean of a population of binary values can be sampled with about 1000 samples at 95% confidence, and 3000 samples at 99% confidence.
Assuming a binary classification problem, why is the 80/20% rule always used, and not the fact that with a few thousand samples the mean accuracy can be estimated with > 95% confidence?
AI: This is a great question, since it is illuminating to examine the best practices of both traditional statistics and machine learning as they are brought together.
Two Separate Rules/Best Practices -
First, the two items that you mentioned should be examined separately and not conflated i.e. they should be carefully combined as you suggest in your question.
Significance: You have estimated that you want greater than 1000 cases to have statistical significance and would further benefit from greater than 3000 test cases.
Cross validation: Cross validation is performed by splitting the data (often 80% train-20% test) so that tests for bias (~underfitting) and variance (~overfitting) can be assessed.
Combining significance with cross validation: Now we know that we want to have significance in our tests so we want greater than 3000 records. We also want to perform cross validation, so in order for both the testing and training data to return significant results we want both to have a minimum of 3000 records. The best scenario for this would be to have 15,000 total records. This way, the data can be split 80/20 and the testing set is still significant.
Let's assume you only have 4,000 records in your data set. In this case, I would opt to make my training data significant while allowing the testing set to drop to lower significance.
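As a back-of-envelope sketch of where sample sizes like these come from, using the normal approximation for a proportion (the ±3% margin of error is my own assumption, which is why the numbers only roughly match the 1,000 and 3,000 quoted in the question):
from math import ceil

def n_required(z, margin, p=0.5):
    # sample size needed to estimate a proportion p to within +/- margin
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(n_required(1.96, 0.03))   # ~1068 test cases at 95% confidence
print(n_required(2.576, 0.03))  # ~1844 test cases at 99% confidence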
More rigor: Everything above has been quite hand-wavy and lacks statistical rigor. For this, I refer you to a couple of papers referenced in another Stack Exchange question -
Dietterich, "Approximate Statistical Tests for Comparing
Supervised Classication Learning Algorithms"
Salzberg, Data Mining and Knowledge Discovery, 1, 317–327 (1997), "On Comparing Classifiers: Pitfalls to Avoid and a
Recommended Approach".
Hope this helps!
H: When should we consider a dataset as imbalanced?
I'm facing a situation where the numbers of positive and negative examples in a dataset are imbalanced.
My question is, are there any rules of thumb that tell us when we should subsample the large category in order to force some kind of balancing in the dataset.
Examples:
If the number of positive examples is 1,000 and the number of negative examples is 10,000, should I go for training my classifier on the full dataset or I should subsample the negative examples?
The same question for 1,000 positive example and 100,000 negative.
The same question for 10,000 positive and 1,000 negative.
etc...
AI: I think subsampling (downsampling) is a popular method to control class imbalance at the base level, meaning it fixes the root of the problem. So for all of your examples, randomly selecting 1,000 of the majority of the class each time would work. You could even play around with making 10 models (10 folds of 1,000 majority vs the 1,000 minority) so you will use your whole data set. You can use this method, but again you're kind of throwing away 9,000 samples unless you try some ensemble methods. Easy fix, but tough to get an optimal model based on your data.
The degree to which you need to control for the class imbalance is based largely on your goal. If you care about pure classification, then imbalance would affect the 50% probability cutoff for most techniques, so I would consider downsampling. If you only care about the order of the classifications (you want positives generally ranked higher than negatives) and use a measure such as AUC, the class imbalance will only bias your probabilities, but the relative order should be decently stable for most techniques.
Logistic regression is nice for class imbalance because as long as you have >500 of the minority class, the estimates of the parameters will be accurate enough and the only impact will be on the intercept, which can be corrected for if that is something you might want. Logistic regression models the probabilities rather than just classes, so you can do more manual adjustments to suit your needs.
A lot of classification techniques also have a class weight argument that will help you focus on the minority class more. It will penalize a misclassification of a true minority class, so your overall accuracy will suffer a little bit, but you will start seeing more minority classes that are correctly classified.
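For example, a short sketch of that class-weight idea in Python/scikit-learn, on made-up data with roughly the 1,000 vs 10,000 split from the question:
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# roughly 10 negatives for every positive
X, y = make_classification(n_samples=11000, weights=[10 / 11], random_state=0)

# "balanced" up-weights errors on the minority class during fitting
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)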
H: Split a list of values into columns of a dataframe?
I am new to python and stuck at a particular problem involving dataframes.
The image shows a sample column; however, the data is not consistent. There are also some floats and NaN. I need these to be split across columns, i.e. each unique value becomes a column in the df.
Any insights?
AI: It looks like you're trying to "featurize" the genre column.
import pandas
df = pandas.Series([('Adventure', 'Drama', 'Fantasy'), ('Comedy', 'Family'), ('Drama', 'Comedy', 'Romance'), (['Drama']),
(['Documentary']), ('Adventure', 'Biography', 'Drama', 'Thriller')]).apply(frozenset).to_frame(name='genre')
for genre in frozenset.union(*df.genre):
df[genre] = df.apply(lambda _: int(genre in _.genre), axis=1)
The output:
| row | genre | Romance | Documentary | Thriller | Biography | Family | Drama | Comedy | Adventure | Fantasy |
|-----|-----------------------------------------|---------|-------------|----------|-----------|--------|-------|--------|-----------|---------|
| 0 | (Drama, Adventure, Fantasy) | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 |
| 1 | (Comedy, Family) | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 2 | (Drama, Comedy, Romance) | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
| 3 | (Drama) | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 4 | (Documentary) | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | (Drama, Biography, Adventure, Thriller) | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 |
H: Recommendation model that can recommend already bought item
Most recommendation algorithms recommend new products to users.
If you bought this you might like that
But sometimes the item a user is most likely to buy is an item that he bought some time ago.
Is there any algorithm appropriate for this use?
AI: Usually, recommendation algorithms provide the confidence that a user will like an item. Items that the user already bought should get high confidence - the user bought them because he liked them.
In most cases, filtering out items the user already bought is done at the application level, not at the algorithm level.
So, you can use regular recommendation algorithms in order to know if the user will like the item.
Please note that you might face a different problem - will the user buy the item a second or third time?
In order to cope with this problem, using some domain knowledge might be beneficial. If the products are usually bought many times (e.g., milk), just use the recommendation algorithm. If all the products have the same tendency to be bought only a few times, build a model for that (e.g., the probability of buying again given the item was already bought x times) and combine the models.
If your products differ a lot in this respect, you might need to go to a lower level. Different domains require different solutions, but you might split the products by buying behavior and train a few recommendation systems, add the number of purchases as a feature, etc.
H: Unable to load NLTK in spark using PySpark
I have installed NLTK and it's working fine with the following code, which I run in the pyspark shell:
>>> from nltk.tokenize import word_tokenize
>>> text = "Hello, this is testing of nltk in pyspark, mainly word_tokenize functions in nltk.tokenize, working fine with PySpark, please see the below example"
>>> text
//'Hello, this is testing of nltk in pyspark, mainly word_tokenize functions in nltk.tokenize, working fine with PySpark, please see the below example'
>>> word_token = word_tokenize(text)
>>> word_token
//['Hello', ',', 'this', 'is', 'testing', 'of', 'nltk', 'in', 'pyspark', ',', 'mainly', 'word_tokenize', 'functions', 'in', 'nltk.tokenize', ',', 'working', 'fine', 'with', 'PySpark', ',', 'please', 'see', 'the', 'below', 'example']
>>>
When I try to run it using Spark's built-in map method, it throws the error ImportError: No module named nltk.tokenize
>>> from nltk.tokenize import word_tokenize
>>> rdd = sc.parallelize(["This is first sentence for tokenization", "second line, we need to tokenize using word_tokenize method in spark", "similar sentence here"])
>> rdd_tokens = rdd.map(lambda sentence : word_tokenize(sentence))
>> rdd_tokens
// PythonRDD[2] at RDD at PythonRDD.scala:43
>>> rdd_tokens.collect()
I am using Spark version 1.6.1 and Python version 2.7.9.
Full stack trace:
>>> from nltk.tokenize import word_tokenize
>>> rdd = sc.parallelize(["This is first sentence for tokenization", "second line, we need to tokenize using word_tokenize method in spark", "similar sentence here"])
>> rdd_tokens = rdd.map(lambda sentence : word_tokenize(sentence))
>> rdd_tokens
// PythonRDD[2] at RDD at PythonRDD.scala:43
>>> rdd_tokens.collect()
16/05/17 17:06:48 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 16, spark-w-0.c.clean-feat-131014.internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/lib/spark/python/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/lib/spark/python/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
ImportError: No module named nltk.tokenize
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
16/05/17 17:06:49 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
16/05/17 17:06:49 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.3 in stage 2.0 (TID 23, spark-w-0.c.clean-feat-131014.internal): org.apache.spark.TaskKilledException
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:204)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/pyspark/rdd.py", line 771, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 45, in deco
return f(*a, **kw)
File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 22, spark-w-0.c.clean-feat-131014.internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/lib/spark/python/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/lib/spark/python/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
ImportError: No module named nltk.tokenize
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/lib/spark/python/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/usr/lib/spark/python/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/usr/lib/spark/python/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
ImportError: No module named nltk.tokenize
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
>>>
AI: It looks like you installed it only on the driver/gateway and not on the nodes/workers themselves. The test you ran in the shell runs locally; once you map a function via your SparkContext, it gets distributed to the workers, which don't have NLTK installed.
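A quick diagnostic sketch, reusing the sc from the question (the helper name is mine): run a small function on the executors and see whether they can import nltk.
def try_import_nltk(_):
    # runs on each executor, not on the driver
    try:
        import nltk
        return [nltk.__version__]
    except ImportError:
        return ["nltk missing"]

print(sc.parallelize(range(4), 4).mapPartitions(try_import_nltk).collect())
If the workers report it missing, installing NLTK (and its data) on every node, or pointing all nodes at the same Python environment, resolves the ImportError.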
H: How can we evaluate DBSCAN parameters?
Yes, DBSCAN parameters, and in particular the parameter eps (the size of the epsilon neighborhood).
In the documentation we have a "Look for the knee in the plot".
Fine, but it requires a visual analysis. And it doesn't really work if we want to make things automatic.
So, I was wondering if it was possible to find a good eps in a few lines of code.
Let's imagine something like :
evaluate kNN distance
sort these values
scale them (so that the values are always between 0 and 1)
evaluate the derivative
find the first point where the derivative is higher than a certain value, let's try with 1
In R, it would look like (using iris dataset as in the DBSCAN documentation) :
# evaluate kNN distance
dist <- dbscan::kNNdist(iris, 4)
# order result
dist <- dist[order(dist)]
# scale
dist <- dist / max(dist)
# derivative
ddist <- diff(dist) / ( 1 / length(dist))
# get first point where derivative is higher than 1
knee <- dist[length(ddist)- length(ddist[ddist > 1])]
and the result is 0.536 which looks quite good.
Is this approach relevant, or total nonsense?
AI: OPTICS gets rid of $\varepsilon$; you might want to have a look at it. In particular, the reachability plot is a way to visualize what good choices of $\varepsilon$ in DBSCAN might be.
Wikipedia (article) illustrates it pretty well. The image on the top left shows the data points, and the image on the bottom left is the reachability plot:
The $y$-axis shows different values of $\varepsilon$; the valleys are the clusters. Each "bar" is for a single point, and the height of the bar is the minimal distance to the already-printed points.
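If you want to experiment with a reachability plot in code, here is a small Python sketch using scikit-learn's OPTICS on made-up blob data (my choice of implementation; min_samples is an arbitrary setting):
import matplotlib.pyplot as plt
from sklearn.cluster import OPTICS
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
opt = OPTICS(min_samples=5).fit(X)

# reachability in cluster order: the valleys correspond to clusters
plt.plot(opt.reachability_[opt.ordering_])
plt.show()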
H: border_mode for convolutional layers in keras
Keras has two border_mode options for Convolution2D: same and valid.
Could anyone explain what "same" does or point out some documentation? I could not find any document on the net (except people asking that it be implemented in theano as well).
AI: With border mode "valid" you get an output that is smaller than the input because the convolution is only computed where the input and the filter fully overlap.
With border mode "same" you get an output that is the "same" size as the input. That means that the filter has to go outside the bounds of the input by "filter size / 2" - the area outside of the input is normally padded with zeros.
Note that some libraries also support the border mode "full" where the filter goes even further outside the bounds of the input - up to "filter size - 1". This results in an output shape larger than the input.
There's a short explanation in numpy's convolve documentation:
http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html
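A quick way to see the three output sizes, using that numpy function on a toy 1-D input:
import numpy as np

x = np.ones(10)   # input of length 10
f = np.ones(3)    # filter of length 3
print(np.convolve(x, f, mode="valid").shape)  # (8,)  = 10 - 3 + 1
print(np.convolve(x, f, mode="same").shape)   # (10,) = same size as the input
print(np.convolve(x, f, mode="full").shape)   # (12,) = 10 + 3 - 1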
H: Trying to figure out how to set weights for convolutional networks
I am working on CNN, and I have some doubts. Let's assume I only want one feature map, just to make things easier. And let's suppose my image is grayscale, to make things even easier. So, let's say my image is (32,32) --grayscale, hence just a channel and we don't need to write it explicitly, and my filter is (3,3) --again, one feature map, so I won't bother writing 1.
I understand this will map to a (30,30) layer.
How many parameters will I have? If I understand it correctly, I will have 9 weights and one bias, so a total of 10, because we map each (3,3) subregion using the same weights. Back-propagation will determine the best values for those weights and that will give me one feature map, or a filter.
So far, so good. What I don't understand is how does the training work? I need to keep the same weights and bias when moving across the image (that's why I only have 10 parameters), but won't those change when I do back-propagation? How can I apply back-propagation and keep the same values for the weights regardless of the subregion they are applied to?
AI: You are right there are just 10 params in your example.
For determining gradients, you just add up all the deltas from backpropagation in each location - i.e. you run backpropagation 30x30 = 900 times, once for each position the 3x3 kernel is used, for every example in your batch (or just for one example if you are running the simplest online stochastic gradient descent), and for each position you add those delta values into a suitably-sized buffer (10 values for weight deltas, or 9 values for previous layer activation deltas). You will end up with one set of summed deltas matching your single 3x3 filter (plus a delta bias term). You then apply the summed version to update the weights of your single filter + bias.
Note this is a general rule you can apply whenever multiple gradient sources from backpropagation can be applied to any parameter - they just add. This occurs in RNNs too, or in any structure where you can set an objective function for non-output neurons. |
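A rough NumPy sketch of that accumulation for a single shared 3x3 kernel with one input channel (the function name and shapes are just illustrative):
import numpy as np

def kernel_gradients(x, delta_out, k=3):
    """Sum the gradient contributions from every position the shared kernel visits."""
    h, w = delta_out.shape            # 30 x 30 for a 32 x 32 input and a 3 x 3 kernel
    dW = np.zeros((k, k))             # one buffer for the 9 shared weights
    db = 0.0                          # and one for the shared bias
    for i in range(h):
        for j in range(w):
            dW += delta_out[i, j] * x[i:i + k, j:j + k]   # same 9 weights, every position
            db += delta_out[i, j]
    return dW, db

x = np.random.rand(32, 32)            # input image
delta = np.random.rand(30, 30)        # backpropagated deltas for the feature map
dW, db = kernel_gradients(x, delta)   # 10 summed gradients -> one weight update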
H: Term for relative recall
In calculating success in information retrieval, precision and recall are fairly standard measurements, relating to the accuracy of the results and to what extent the results are comprehensive, respectively.
However, recall values typically require that you know how many correct results there are in total (in order to be able to state to what extent these results have been returned). This is, of course, problematic, if you do not know how many correct results there are in the first place.
But supposing you have x number of results. You do not know how they relate to the absolute number of correct documents that could have been returned, but you do know how accurate these results are. You can increase this accuracy, but you will lose some correct results in the process. You could exclude every result from x bar one, and ensure you have 100% accuracy, but clearly there has been a dramatic decrease in relative recall through this action. But what is the technically correct term to be associated with this "relative recall", when you do not have any idea what the absolute value of the recall in this environment is? Is there such a term?
EXAMPLE
For instance supposing you were developing an IR system to return articles relating to the Call of Duty video game franchise and it gives you the following results
http://www.theverge.com/2016/5/2/11564464/call-of-duty-infinite-warfare-release-date-first-trailer
https://www.theguardian.com/technology/2016/may/03/call-of-duty-infinite-warfare
http://www.asianage.com/editorial/beyond-call-duty-943
http://europe.newsweek.com/chinas-military-attempts-attract-millennials-call-duty-inspired-recruitment-455425?rm=eu
http://www.breitbart.com/video/2016/05/16/call-duty-black-ops-iii-eclipse-multiplayer-trailer/
http://www.journalgazette.net/news/local/courts/County-residents-getting-bogus-jury-duty-calls-13146090
http://www.straitstimes.com/singapore/health/healthcare-workers-who-went-beyond-call-of-duty
Now you don't know what the absolute number of correct articles are for the call of duty franchise: but you do know the number in the dataset your IR system has produced. Above represents a precision of 42.85% and a relative recall of 100%. A cut list of
http://www.theverge.com/2016/5/2/11564464/call-of-duty-infinite-warfare-release-date-first-trailer
https://www.theguardian.com/technology/2016/may/03/call-of-duty-infinite-warfare
http://www.asianage.com/editorial/beyond-call-duty-943
Has precision of 66.66%, but at the expense of one of the correct articles, thereby affecting "recall". So what is the correct term for this so-called recall?
AI: In the x results obtained from the IR system, let TP, FP, FN, TN be the True Positives, False Positives, False Negatives and True Negatives of the trial, respectively.
When you take a random smaller-sized sample s of x observations (cut list in your language), the values TP and FP decrease (by p and q) in their respective ratios. This is compensated by proportional increase (by p and q) in values FN and TN, conserving the actual number of binary classes.
TP + FN = (TP - p) + (FN + p)
FP + TN = (FP - q) + (TN + q)
x = TP + FP, s = (TP - p) + (FP - q)
For a good classifier, TP >>> FP. So decrease in already minimal FP (in new sample), results in improved precision. Now your concept of "relative recall" comes into picture, which in this case can be expressed as
(TP - p)/(TP+FN + p)
where TP acts as actual true observations (ground-truth) for the new sample.
Note: FN is not known because ground-truth labels are not available.
The decrease in recall is caused by type-2 error, which is the FN rate (Not reporting a link as Call of Duty when it actually is), also denoted β (beta). So this reduction in recall (relative recall) is attributed to rise in False Negative Rate (miss rate), since
Recall = 1 - FN Rate |
H: Question about bias in Convolutional Networks
I am trying to figure out how many weights and biases are needed for CNN.
Say I have a (3, 32, 32)-image and want to apply a (32, 5, 5)-filter.
For each feature map I have 5x5 weights, so I should have 3 x (5x5) x 32 parameters. Now I need to add the bias. I believe I only have (3 x (5x5) + 1) x 32 parameters, so is the bias the same across all colors (RGB)?
Is this correct? Do I keep the same bias for each image across its depth (in this case 3) while I use different weights? Why is that?
AI: Bias operates per virtual neuron, so there is no value in having multiple bias inputs where there is a single output - that would equivalent to just adding up the different bias weights into a single bias.
In the feature maps that are the output of the first hidden layer, the colours are no longer kept separate*. Effectively each feature map is a "channel" in the next layer, although they are usually visualised separately where the input is visualised with channels combined. Another way of thinking about this is that the separate RGB channels in the original image are 3 "feature maps" in the input.
It doesn't matter how many channels or features are in a previous layer, the output to each feature map in the next layer is a single value in that map. One output value corresponds to a single virtual neuron, needing one bias weight.
In a CNN, as you explain in the question, the same weights (including bias weight) are shared at each point in the output feature map. So each feature map has its own bias weight as well as previous_layer_num_features x kernel_width x kernel_height connection weights.
So yes, your example resulting in (3 x (5x5) + 1) x 32 weights total for the first layer is correct for a CNN with first hidden layer processing RGB input into 32 separate feature maps.
* You may be getting confused by seeing visualisation of CNN weights which can be separated into the colour channels that they operate on. |
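If you want to double-check the arithmetic, any deep learning library will report the same count; a minimal Keras sketch (assuming TensorFlow is installed):
import tensorflow as tf

model = tf.keras.Sequential([
    # 32 feature maps, 5x5 kernels, RGB input: (3*5*5 + 1) * 32 weights
    tf.keras.layers.Conv2D(32, (5, 5), input_shape=(32, 32, 3)),
])
print(model.count_params())  # 2432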
H: Which, if any, machine learning algorithms are accepted as being a good tradeoff between explainability and prediction?
Machine learning texts describing algorithms such as gradient boosting machines or neural networks often comment that these models are good at prediction, but this comes at the price of a loss of explainability or interpretability. Conversely, single decision trees and classical regression models are labelled as good at explanation, but giving a (relatively) poor prediction accuracy compared to more sophisticated models such as random forests or SVMs. Are there machine learning models commonly accepted as representing a good tradeoff between the two? Is there any literature enumerating the characteristics of algorithms which allow them to be explainable? (This question was previously asked on Cross Validated)
AI: Is there are any literature enumerating the characteristics of algorithms which allow them to be explainable?
The only literature I am aware of is the recent paper by Ribero, Singh and Guestrin. They first define explainability of a single prediction:
By “explaining a prediction”, we mean presenting
textual or visual artifacts that provide qualitative understanding
of the relationship between the instance’s
components (e.g. words in text, patches in an image)
and the model’s prediction.
The authors further elaborate on what this means for more concrete examples, and then use this notion to define the explainability of a model. Their objective is to try and, so to speak, add explainability artificially to otherwise intransparent models, rather than comparing the explainability of existing methods. The paper may be helpful anyway, as it tries to introduce a more precise terminology around the notion of "explainability".
Are there machine learning models commonly accepted as representing a good tradeoff between the two?
I agree with @Winter that elastic-net for (not only logistic) regression may be seen as an example for a good compromise between prediction accuracy and explainability.
For a different kind of application domain (time series), another class of methods also provides a good compromise: Bayesian Structural Time Series Modelling. It inherits explainability from classical structural time series modelling, and some flexibility from the Bayesian approach. Similar to logistic regression, the explainability is helped by regression equations used for the modelling. See this paper for a nice application in marketing and further references.
Related to the Bayesian context just mentioned, you may also want to look at probabilistic graphical models. Their explainability doesn't rely on regression equations, but on graphical ways of modelling; see "Probabilistic Graphical Models: Principles and Techniques" by Koller and Friedman for a great overview.
I'm not sure whether we can refer to the Bayesian methods above as a "generally accepted good trade-off" though. They may not be sufficiently well-known for that, especially compared to the elastic net example. |
H: Verification of trained system
AI: I have trained a system to detect some features from a set of scenarios. Now the system can detect and classify that set. How can I validate how that system works in the real world? What mathematical tools should I use?
AI: Did you also validate the data set with a separate test set so you can see how it performs with known data?
This answer has a good run down on how to do that:
https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set
It is also best practice to ensure that you haven't overfit the data. The easiest way to do that is to check the accuracy against the data that you used to train the model. If it is not substantially better than the accuracy on the test data (data which should never be used to train a model), then you have not overfit.
After that, if you eventually find out the correct classification in your system, I would compare the accuracy there to the accuracy you experienced in the training and test environments to determine if it is performing as intended.
H: Difference between Validation data and Testing data?
I am a bit confused about validation data. What is this data mainly for? In some tutorials there are training images (which I understand), validation images (which I do not), and testing images (which I understand). So what are the validation images mainly for?
AI: There are two uses for the validation set:
1) Knowing when to stop training
Some models are trained iteratively - like neural nets. Sooner or later the model might start to overfit the training data. That's why you repeatedly measure the model's score on the validation set (like after each epoch) and you stop training once the score on the validation set starts degrading again.
From Wikipedia on Overfitting:
"Training error is shown in blue, validation error in red, both as a function of the number of training cycles. If the validation error increases (positive slope) while the training error steadily decreases (negative slope) then a situation of overfitting may have occurred. The best predictive and fitted model would be where the validation error has its global minimum."
2) Parameter selection
Your model needs some hyper-parameters to be set, like the learning rate, what optimizer to use, number and type of layers / neurons, activation functions, or even different algorithms like neural net vs SVM... you'll have to fiddle with these parameters, trying to find the ones that work best.
To do that you train a model with each set of parameters and then evaluate each model using the validation set. Finally you select the model / the set of parameters that yielded the best score on the validation set.
In both of the above cases the model might have fit the data in the validation set, resulting in a biased (slightly too optimistic) score - which is why you evaluate the final model on the test-set before publishing its score. |
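For use 1, most libraries can automate the stopping rule; a minimal Keras sketch with toy data (assuming TensorFlow is available):
import numpy as np
import tensorflow as tf

# Toy data purely to make the sketch runnable.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Hold out 20% as the validation set and stop once its loss stops improving.
early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                         restore_best_weights=True)
model.fit(X, y, epochs=200, validation_split=0.2, callbacks=[early], verbose=0)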
H: Modeling and Predicting Co-Occuring Values
I have data for 100,000 people with different personality traits. Here is some sample data:
Person Trait-1 Trait-2 Trait-3 ..... Trait-N
John 1 0 1 1
I need a model where for a new user when I see Trait-X, I need a prediction of the likelihood of all other Traits i.e. how likely is he to have any of the other traits. Can someone point me to a possible model I could use? I am a novice so don't know much about this space.
AI: I would recommend a Restricted Boltzmann machine, which is a type of neural network that can model probability distributions. What you are trying to do is to estimate the marginal distribution of the missing features. In a probabilistic model, this is done through marginalization. The beauty of this method is that it provides the full distribution over the missing features rather than mere point estimates.
Here are two tutorials. |
H: Chose the right regression analysis
In R I have data where head(data) gives
day promotion profit new_users
1 105 45662 33
2 12 40662 13
3 44 46800 20
4 203 54102 46
Now day is simply the day (and is in order). promotion is simply the promotion-value for the day, the profit is the profit that day and new_users is the number of new users that day.
I want to investigate the relationships of promotion to profit and to new_users. We see a clear positive correlation between promotion and profit, and there is also a positive correlation between promotion and new_users.
In R I simply test correlation
cor.test(data$promotion, data$profit, method="kendall", alternative="greater" )
cor.test(data$promotion, data$new_users, method="kendall", alternative="greater")
which both give a low p-value, i.e. we have a positive correlation.
I want to find a point where an increase in promotion doesn't increase profit or new_users that much any more, i.e. a sweet spot.
Here are 2 plots and the R code for these
plot(data$promotion, data$profit, col="brown")
plot(data$promotion, data$new_users)
How should this be done?
My thoughts were to make a regression model.
For the first one, "promotion vs. new_users", one could use a Poisson model because it's a count process, so would a model like this be a good choice?
glm(formula= data$new_users ~ data$promotion, family="poisson", data=data)
Next, what regression model should one choose for the other one? Is it fair to say that this regression model is a good choice? (I use the sqrt command)
glm(formula=data$profit ~ sqrt(data$promotion) , data=data)
Or maybe it's not even necessary to use a regression model at all to find a sweet spot?
Thanks.
I have now looked at 'good' new users. For each day we have a promotion value and we have a count value which is the number of new good users. This plot shows us the number of good new users we get for a promotion for each day. For example for promotion value 90 we have a day where we got 8 new good users and a day where we got 14 new good users.
What would be the right approach to find a sweet spot for the use of promotion ?
AI: Since I cannot comment because I don't have enough reputation, I will post this as an answer.
If your goal is to "find a point where the increase of promotion doesn't increase profit or new_users", I wouldn't do a simple regression, since the regression will tell you that if you do more promotions, you will always increase the profits. I would say that, in reality, the relationship between promotions and profits or new users is not linear, because the number of new users is limited and the promotions are not.
A better model is to say that there is an optimal promotion that will give you the best increase in profits and new users.
(If you have a real business to optimise, I would introduce the Customer Lifetime Value of new users, because generally the new users you get when doing huge promotions will not come back...)
H: Time Series Forecasting with Neural Network (mixed data types)
I have a dataset with the following format:
TimeStamp | Action | UserId
2015-02-05 | Action1 | XXX
2015-02-06 | Action2 | YYY
2015-02-07 | Action2 | XXX
...
I try to forecast future Actions for specific users based on the Users history in the dataset. I started with ntstool in MATLAB, but it can't handle mixed data types or non-numerical values. Now I am looking for other methods to predict future Actions and find periodic patterns in the records.
Is it possible to convert the values for Actions to numeric values or is there a possibility to create a neural network with mixed datatypes on the input? Maybe in R?
AI: I guess I answered the question in the comments, so here goes.
Most ML models cannot deal with categorical values. A common way to solve this is to use one-hot encoding, also known as dummy variables. For every possible value of your categorical variable you create a column which is 0 unless this row has this category, in which case it is 1. It is possible to remove one of the categories since it is a linear combination of the other dummy variables (if all are 0, the last one must be 1).
The downside of this method is that it increases the dimensionality of your feature space. If you have enough data to support this or not that many categories that is not a problem. There are other alternatives, like taking the average feature of every category and adding that to your features as opposed to the categorical feature. |
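A small pandas sketch of the one-hot encoding for the Action column (toy data matching the question's format):
import pandas as pd

df = pd.DataFrame({
    "TimeStamp": pd.to_datetime(["2015-02-05", "2015-02-06", "2015-02-07"]),
    "Action":    ["Action1", "Action2", "Action2"],
    "UserId":    ["XXX", "YYY", "XXX"],
})

# One 0/1 indicator column per distinct Action value.
dummies = pd.get_dummies(df["Action"])
df = pd.concat([df.drop(columns="Action"), dummies], axis=1)
print(df)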
H: Choosing regularization method in neural networks
When training neural networks, there are at least 4 ways to regularize the network:
L1 Regularization
L2 Regularization
Dropout
Batch Normalization
plus of course other things like weight sharing and reducing the number of connections, which might not be regularization in the strictest sense.
But how would one choose which of those regularization methods to use? Is there a more principled way than "just try everything and see what works"?
AI: There are not any strong, well-documented principles to help you decide between types of regularisation in neural networks. You can even combine regularisation techniques, you don't have to choose just one.
A workable approach can be based on experience, and following literature and other people's results to see what gave good results in different problem domains. Bearing this in mind, dropout has proved very successful for a broad range of problems, and you can probably consider it a good first choice almost regardless of what you are attempting.
Also sometimes just picking an option you are familiar with can help - working with techniques you understand and have experience with may get you better results than trying a whole grab bag of different options where you are not sure what order of magnitude to try for a parameter. A key issue is that the techniques can interplay with other network parameters - for instance, you may want to increase the size of layers with dropout depending on the dropout percentage.
Finally, it may not matter hugely which regularisation techniques you are using, just that you understand your problem and model well enough to spot when it is overfitting and could do with more regularisation. Or vice-versa, spot when it is underfitting and that you should scale back the regularisation. |
H: Why are ensembles so unreasonably effective
It seems to have become axiomatic that an ensemble of learners leads to the best possible model results - and it is becoming far rarer, for example, for single models to win competitions such as Kaggle. Is there a theoretical explanation for why ensembles are so darn effective?
AI: For a specific model you feed it data, choose the features, choose hyperparameters etcetera. Compared to the reality it makes a three types of mistakes:
Bias (due to too low model complexity, a sampling bias in your data)
Variance (due to noise in your data, overfitting of your data)
Randomness of the reality you are trying to predict (or lack of predictive features in your dataset)
Ensembles average out a number of these models. The bias due to sampling bias will not be fixed, for obvious reasons, and ensembling can only fix some of the model-complexity bias; however, the variance errors that are made are very different across your different models. Low-correlated models in particular make very different mistakes in this area, because certain models perform well in certain parts of your feature space. By averaging out these models you reduce this variance quite a bit. This is why ensembles shine.
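The variance-reduction effect is easy to see numerically; a small NumPy simulation with independent (i.e. fully decorrelated) model errors:
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0
# 10 "models", each predicting the truth plus its own independent noise.
preds = truth + rng.normal(scale=0.5, size=(10, 100_000))

print(preds[0].var())            # a single model's error variance, ~0.25
print(preds.mean(axis=0).var())  # the ensemble average's variance, ~0.025
With perfectly correlated models the averaging would buy you nothing, which is why low correlation between the ensemble members matters so much.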
H: Convolutional Neural Networks in R
I don't see a package for doing Convolutional Neural Networks in R. Has anyone implemented this kind of algorithm in R?
AI: I guess there is no package for cnn but you can write your own convolutional layer. mxnet or h2o will be useful for it.
check this out:
http://dmlc.ml/rstats/2015/11/03/training-deep-net-with-R.html |
H: ValueError: Input contains NaN, infinity or a value too large for dtype('float32')
I got ValueError when predicting test data using a RandomForest model.
My code:
clf = RandomForestClassifier(n_estimators=10, max_depth=6, n_jobs=1, verbose=2)
clf.fit(X_fit, y_fit)
df_test.fillna(df_test.mean())
X_test = df_test.values
y_pred = clf.predict(X_test)
The error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
How do I find the bad values in the test dataset? Also, I do not want to drop these records, can I just replace them with the mean or median?
Thanks.
AI: With np.isnan(X) you get a boolean mask back with True for positions containing NaNs.
With np.where(np.isnan(X)) you get back a tuple with i, j coordinates of NaNs.
Finally, with np.nan_to_num(X) you "replace nan with zero and inf with finite numbers".
Alternatively, you can use:
sklearn.impute.SimpleImputer for mean / median imputation of missing values, or
pandas' pd.DataFrame(X).fillna(), if you need something other than filling it with zeros. |
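Note also that in the code above, df_test.fillna(df_test.mean()) returns a new DataFrame that is never assigned back, so the NaNs survive into X_test. A short sketch of locating and imputing the bad values (made-up array):
import numpy as np
from sklearn.impute import SimpleImputer

X_test = np.array([[1.0, np.nan],
                   [np.inf, 2.0],
                   [3.0, 4.0]])

# Locate the offending entries as (row, column) indices.
print(np.argwhere(~np.isfinite(X_test)))

# Treat infinities as missing too, then impute column means instead of dropping rows.
X_test[~np.isfinite(X_test)] = np.nan
X_test = SimpleImputer(strategy="mean").fit_transform(X_test)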
H: Dimension-Hopping in Machine Learning
What is the dimension hopping problem in machine learning (occurring in convolutional neural networks and image recognition)? I have googled it, but all I get is information on the physics of material shape deformation. It would be more helpful to me if someone explained it with an example related to machine learning. Can anyone help me out with this or point me toward resources that can?
AI: Welcome to DataScience.SE! I'd never heard of this problem so I looked it up. It is explained on the third slide of this presentation by Geoff Hinton:
More things that make it hard to recognize objects
• Changes in viewpoint cause changes in images that standard learning
methods cannot cope with.
– Information hops between input dimensions (i.e. pixels)
• Imagine a medical database in which the age of a patient sometimes
hops to the input dimension that normally codes for weight!
– To apply machine learning we would first want to eliminate this
dimension-hopping.
In other words, it is about conceptual features migrating or hopping from one input feature dimension to another while still representing the same thing. One would like to be able to capture or extract the essence of the feature while being invariant to which input dimension it is encoded on. |
H: Using Neural Networks To Predict Sets
I'm building a neural network for data analysis; however, I'm stuck on how many output neurons I need and what they should represent. The neural network tries to predict people's choices for certain objects. There are 75 possible objects, but here's the catch: they choose 6 of them at a time. These six can (theoretically, though this won't happen realistically) be any combination of the 75 objects; in rare cases there might even be duplicates of a certain object in this set of six.
I naturally considered creating an output neuron for each possible set, but since that would lead to 75^6 output neurons (or more if we consider duplicates) I imagine that would have disadvantages for the learning speed. Another option I considered is just taking the six highest-rated items as a set, but I'm unsure if this would work since the choice of the second, third, etc. objects depends on which earlier objects were chosen. I wonder if there are any better, faster or more accurate ways of doing this?
Edit, some more information about the trainingdata:
The neural network will be used to analyse the in-game item choices in the computer game called League of Legends. There, players can choose up to six items each game. These six items are most likely based on some factors that change each game, such as the in-game characters they and their enemies play, but also on things such as what the other 5 items they have are. However, the reason for this may be universal to all games. For instance, the (likely) biggest reason why items depend on each other is the multiplicativity of item stats: if you already have an item that makes your attacks hit harder, then getting an item that allows you to attack more often is going to do more for you than getting an item that makes your attacks hit even harder. There are also some links between item choices, though probably more limited. There is, for instance, an item that returns damage to the enemy when he attacks you; while it's not detrimental, it is probably not a great idea to combine that with an item that lowers the number of attacks an enemy can do to you.
AI: The 75^6 option is not only bad for speed, but it is a very difficult representation to train, because the NN doesn't "understand" that any of the output categories are related. You would need an immense amount of data to train such a network, because ideally you need at least a few examples in any category that you expect the network to predict. Unless you had literally billions of examples to train from, the chances are certain combinations will never occur in your training set, thus could never be predicted with any confidence.
Therefore I would probably use 75 outputs, one for each object representing the probability that it would be chosen. This is easy to create training data for, if you have training examples with the 6 favoured objects - just a 1 for the objects chosen and 0 for all others as a 75-wide label.
For prediction, select the 6 objects with the highest probabilities. If these choices are part of a recommender system (i.e. may be presented to same person as being predicted for), then you can select items randomly using the outputs as weights. You may even find that this weighted Monte Carlo selection works well for predicting bulk user behaviour as well (e.g. for predictions fed into stock purchases). In addition, this stochastic approach can be made to predict duplicates (but not accurately, except perhaps averaged over many predictions).
A sigmoid transfer function on the output layer is good for representing non-exclusive probability. The logloss objective function can be used to generate the error values and train the network.
If you want to accurately predict duplicate choices out of the 6 items chosen, then you will need plenty of examples where duplicates happened and have some way to represent that in the output layer. For example, you could have double the number of output neurons, with two assigned to each object. The first probability would then be probability of selecting the item once, and the second probability would be for selecting it twice.
The question has since been updated, and it appears there are strong relationships between items making the choice of a set of items potentially very recipe-like. That may reduce the effectiveness of the ideas outlined above in this answer.
However, using 75 outputs may still work better than other approaches, and is maybe the simplest setup, so I suggest still giving it a try, even if just to establish a benchmark for other ideas. This will work best when decisions are driven heavily by the feature data available, and when in practice there are lots of valid choices for combining items so there is a strong element of player preference. It will work less well if there is a large element of game mastery and logic in player decisions in order to combine items. |
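To establish that benchmark, here is a minimal sketch of the 75-output setup with made-up feature and label shapes (Keras assumed, purely illustrative):
import numpy as np
import tensorflow as tf

n_objects = 75
X = np.random.rand(500, 40)                                     # stand-in game features
y = (np.random.rand(500, n_objects) < 0.08).astype("float32")   # 0/1 label per object chosen

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(40,)),
    tf.keras.layers.Dense(n_objects, activation="sigmoid"),      # independent probabilities
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

probs = model.predict(X[:1])[0]
top6 = np.argsort(probs)[-6:][::-1]   # the 6 most probable objects for this player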
H: How to update bias and bias's weight using backpropagation algorithm
I'm writing my own training algorithm, but I don't know how to set the bias weight.
Do I have to set a bias in every layer?
Must the bias weight be updated in every layer?
AI: There should be a bias weight for each virtual neuron as it controls the threshold at which the neuron responds to combined input. So if your hidden layer has 100 neurons, that is 100 bias weights for that layer. Same applies to each layer.
There are usually two different approaches taken when implementing bias. You can do one or the other:
As a separate vector of bias weights for each layer, with different (slightly cut down) logic for calculating gradients.
As an additional column in the weights matrix, with a matching column of 1's added to input data (or previous layer outputs), so that the exact same code calculates bias weight gradients and updates as for connection weights.
In both cases, you only do backpropagation calculation from neuron activation deltas to the bias weight deltas, you don't need to calculate the "activation" delta for bias, because it is not something that can change, it is always 1.0. Also the bias does not contribute deltas back further to anything else. |
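Under the first approach (a separate bias vector per layer), the gradient logic for one dense layer reduces to something like this NumPy sketch:
import numpy as np

def layer_gradients(inputs, delta):
    """inputs: (batch, n_in) activations; delta: (batch, n_out) neuron deltas."""
    dW = inputs.T @ delta        # one gradient per connection weight
    db = delta.sum(axis=0)       # the bias "input" is always 1, so its gradient
    return dW, db                # is just the summed neuron deltas

inputs = np.random.rand(32, 100)   # batch of 32, 100 inputs to the layer
delta = np.random.rand(32, 50)     # deltas for 50 neurons -> 50 bias gradients
dW, db = layer_gradients(inputs, delta)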
H: LeNet for Convolution network?
I keep seeing LeNet used to refer to a convolutional network. I am wondering why LeNet is called LeNet? Is it an abbreviation of anything? Is there a difference between LeNet and a convolutional neural network? Thanks!
AI: LeNet is a family (LeNet-1, LeNet-4, LeNet-5) of convolutional neural network designed by Yann LeCun et al. The name is a play on his name and the French language, where "le" means "the", hence LeNet means "the network". I believe it was originally devised to recognize handwritten numbers on checks (cheques). LeNet is only one early instance of a convolutional neural network; many others exist today. |
H: Compute Baseline/Representative of Time-Series Data
I have time-series data for 10 days over the same time interval, as shown in the figure below. It shows one hour of power consumption for 10 days. Data is sampled at a 10-minute rate.
I need to represent this 10-day usage with a single baseline/representative curve. I can calculate the baseline curve simply by taking the mean/median of these 10 days' data, but before that I need to answer the following questions:
The baseline should not represent the outlier (abnormal) usage days. Here in the figure, we see that days 1 and 2 follow an unusual pattern compared to the rest of the days. I think the usage of these days should not be used in the baseline calculation. How should I exclude these days from the baseline calculation automatically?
How should I find the most similar usages out of these 10 days' usages for my baseline calculation? I think the most similar usage days represent the days that should be used for the baseline calculation.
AI: Ad 1. Assuming the measurements at any given time are normally distributed (they shape approximately a bell curve), you could use simple standard deviation to detect outliers. Specifically, for any given time, you can calculate the mean and standard error. Then you calculate the mean sans outliers by taking into account only the measurements that fall at most some pre-set distance from the mean (e.g. given normal distribution, 68% of measurements fall within one standard deviation from the mean).
Example (NumPy):
import numpy as np
# Measurements at time t0 for all 10 days
t0 = np.array([0.1, 0.1, 1.4, .9, 1.25, 1.25, 1.5, 0.1, 0.3, 1.75])
# Get mean and standard deviation
mean0, std0 = t0.mean(), t0.std()
# Inliers are within one sigma from the mean
inliers = np.logical_and(mean0 - std0 < t0,
mean0 + std0 > t0)
# ==> [0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
# And the baseline mean at time t0 is
baseline0 = t0[inliers].mean()
# ==> 1.02
Ad 2. You can find the most similar days to the baseline by using any appropriate distance measure (i.e. for time series: Euclidean or dynamic time warping). The result, then, consists of those days where distance is the least. |
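A minimal NumPy sketch of that step, using plain Euclidean distance and stand-in data:
import numpy as np

rng = np.random.default_rng(0)
days = rng.random((10, 6))              # 10 days x 6 ten-minute readings (stand-in data)
baseline = np.median(days, axis=0)      # or the trimmed mean from part 1

dist = np.linalg.norm(days - baseline, axis=1)   # Euclidean distance per day
most_similar = np.argsort(dist)[:5]              # e.g. the 5 days closest to the baseline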
H: How to read Several JSON files to a dataframe in R?
I have a folder with 30,000-plus JSON files. A sample file with its contents is posted below.
{
"name": null, "release_date_local": null, "title": "3 (2011)",
"opening_weekend_take": 1234, "year": 2011,
"release_date_wide": "2011-09-16", "gross": 59954
}
However, I need the data in a df in a structure as given below:
name relase_date_local title opening_weekend_take year release_date gross
NA NA 3 (2011) 1234 2011 2011-09-16 5994
Here is my code snippet to get all the files as a list:
path = "./Week1/jsonfiles"
temp = list.files(path, pattern = "*.json")
filename = paste(path, temp, sep = "/")
movies = c()
for (i in filename){
movie = fromJSON(file = i)
movies = c(movies, movie)
}
Please advise, how can I read all 30,000 files as rows of a df?
AI: First, you can use the full.names parameter to list.files() to get
the full path added to each file.
temp <- list.files(path, pattern="*.json", full.names=TRUE)
Next, there are issues with the data since they contain NULL values
which throws off a quick-and-dirty solution. So, we have to take each
list element and convert any NULL to NA.
Finally, we can use the handy purrr::map_df() to take the whole list
of lists and turn them into a data.frame:
movies <- purrr::map_df(temp, function(x) {
purrr::map(jsonlite::fromJSON(x), function(y) ifelse(is.null(y), NA, y))
}) |
H: Tips for a new data scientist
I am about to start a job in which I will be working with large datasets and will be expected to find trends, etc... I have found lots of resources on where to learn ML and other hard skills and feel that I am (semi) competent on this end.
I am interested in knowing if there are specific soft skills that are helpful as a data scientist. What are things you wish you knew starting out?
While Kaggle is very useful when learning, it also presents clear objectives. How do you handle being given a dataset, but no clear objective?
Let me know if this is too broad, I can think of more specific questions.
AI: I think there are a lot of important soft skills to consider in the Data Science domain.
Here are some of them:
Know for a fact what the goal is; spending a lot of time on data wrangling, models, visualization and reports that were not aimed at that specific goal is a waste. Communicating with less technical people is a skill in itself.
Iterate repeatedly with the product owner. Keep making sure you are on the right path.
If the data doesn't tell the story they expected or wanted, tell them that this is the case; be clear about why this is happening, what biases might be playing a role, etcetera. Do not apply all kinds of filters or keep changing parameters to get the desired results.
Regarding your second question:
The objective has to be either gotten from the product owner explicitly or derived from a less mathematical objective. An example could be where you need to predict train arrivals based on some features. They want the model to predict as many times as possible within a 10-minute error range. This is relatively explicit.
Sometimes it is less clear than that, they might say we need it as accurate as possible. Then you will have to decide what to optimize, in some cases, this will just be minimizing the MSE but in other cases, other things might make more sense for your case. Usually, this will be clear from the implicit objective and something that you will get better at with more experience. Both implicit and explicit objectives derive from clear communication with the product owner. |
H: generate graph from .eps file (preferably using R)
Using the R package 'Deducer', I saved a graphic (chart) as an .eps file.
I can open the .eps file, it's just a bunch of text.
How do I re-generate the graphic (chart) from the .eps file (preferably using Deducer or R)?
AI: EPS is "Encapsulated PostScript". It's meant for embedding, like an image, in documents, or for sending to printers. You can view it with a PostScript document viewer, and there are free PostScript document viewers for Linux, Windows, and Mac OSs: Ghostview, Evince, etc.
So although you can view the graphic once you've got a PostScript document viewer, you cannot load it into R as if you had just plotted it. |
H: When to choose character instead of factor in R?
I am currently working on a dataset which contains a name attribute, which stands for a person's first name. After reading the csv file with read.csv, the variable is a factor by default (stringsAsFactors=TRUE) with ~10k levels. Since name does not reflect any group membership, I am uncertain whether to leave it as a factor.
Is it necessary to convert name to character? Are there some advantages in doing (or not doing) this? Does it even matter?
AI: Factors are stored as numbers and a table of levels. If you have categorical data, storing it as a factor may save lots of memory.
For example, if you have a vector of length 1,000 stored as character and the strings are all 100 characters long, it will take about 100,000 bytes. If you store it as a factor, it will take about 8,000 bytes plus the sum of the lengths of the different factors.
Comparisons with factors should be quicker too because equality is tested by comparing the numbers, not the character values.
The advantage of keeping it as character comes when you want to add new items, since you are now changing the levels.
Store them as whatever makes the most sense for what the data represent. If name is not categorical, and it sounds like it isn't, then use character. |
H: Sum up counts in a data.frame grouped by multiple variables
This is a snippet of the dataset I am currently working on:
> sample
name sex count
1 Maria f 97
2 Thomas m 12
3 Maria m 5
4 Maria f 97
5 Thomas m 8
6 Maria m 4
I want to sum up the counts grouped by name and sex to finally get this data.frame:
> result
Maria Thomas
f 194 0
m 9 20
I wrote a simple loop to iterate over the rows and sum up the counts:
result <- matrix(0, nrow=2, ncol=2)
colnames(result) <- unique(sample$name)
rownames(result) <- unique(sample$sex)
for (i in 1:nrow(sample)) {
sex <- as.character(sample[i,"sex"])
name <- sample[i,"name"]
count <- sample[i,"count"]
result[sex, name] <- result[sex, name] + count
}
Is it suitable to do it this way? Are there any other ways to do it in a more elegant / shorter fashion?
Edit:
I already tried it with aggregate, but the output is in a different format:
> aggregate(sample$count,by=list(sample$name,sample$sex),sum)
Group.1 Group.2 x
1 Maria m 9
2 Thomas m 20
3 Maria w 194
AI: You can do this using the xtabs function! Here's how I did it using your example data:
# Create example data...
name <- c("Maria", "Thomas", "Maria", "Maria", "Thomas", "Maria")
sex <- c("f", "m", "m", "f", "m", "m")
count <- c(97, 12, 5, 97, 8, 4)
data <- data.frame("name"=name, "sex"=sex, "count"=count)
# Create table...
xtabs(formula=count~name + sex, data=data)
which gives the following output:
sex
name f m
Maria 194 9
Thomas 0 20 |
H: Why do we calculate partial derivative of Error w.r.t output of Neural Network during backpropagation?
As seen in this image, we calculate the partial derivative of the error w.r.t. the output of the output neuron. Shouldn't it be the normal derivative? Isn't that particular error determined by only that output?
AI: The partial derivative is used precisely because it separates concerns about how the value is calculated (from all the other parameters and outputs in the network) from how the value affects the output. This is purely by definition so you can do the calculation.
The "normal" derivative of the error function can be expressed as the sum of any "complete" set of independent partial derivatives. There are a few such sets possible - in a simple feed-forward network one for each layer plus one for all the weights combined.
So in essence you are calculating the "normal" derivative w.r.t. the network weights, but due to the nature of the problem this is done by calculating the partial derivatives in multiple steps.
Caveat: I am probably not using those maths terms 100% accurately. For instance, I am ignoring the role of the training data and treating it as a constant. |
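For reference, the standard chain-rule decomposition for a weight $w_{ij}$ feeding output neuron $j$ (generic notation, not tied to the particular image in the question) is:
$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial o_j} \cdot \frac{\partial o_j}{\partial net_j} \cdot \frac{\partial net_j}{\partial w_{ij}}$$
where $o_j$ is the neuron's output and $net_j$ its weighted input. The first factor is exactly the partial derivative of the error with respect to the output that the question asks about; the chain rule then multiplies it by the remaining factors to give the "normal" derivative with respect to the weight.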
H: Time-stamp for linear model
How can we extract information from a time-stamp variable for modelling? I have a variable with format mm-dd-yyyy hh:mm:ss and I want to predict an outcome variable using the time-stamp as an input variable. I do not think I can directly use this column for modelling and will need to do some transformation before I can use it in a model. I am not sure what type of transformation I need to do.
For example, if you have a date field, you can create dummy variables for day of week and use those as input variables in the model. I am not sure how to proceed in the case of a time-stamp.
Any help?
AI: Welcome to Datascience.SE!
Like you said, you can extract the day of the week. Also extract the hour of the day, then encode these two variables using sines and cosines with their respective periodicities (7 and 24). Also create a column for the UNIX/epoch time. If there are "special days", such as holidays or sales, create boolean columns for them too. |
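A small pandas/NumPy sketch of that encoding (the timestamps are made up; the format string matches the mm-dd-yyyy hh:mm:ss in the question):
import numpy as np
import pandas as pd

df = pd.DataFrame({"ts": pd.to_datetime(["05-01-2016 13:45:00", "05-02-2016 02:10:00"],
                                         format="%m-%d-%Y %H:%M:%S")})

hour = df["ts"].dt.hour
dow = df["ts"].dt.dayofweek

# Sine/cosine pairs so that 23:00 is close to 00:00 and Sunday is close to Monday.
df["hour_sin"], df["hour_cos"] = np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)
df["dow_sin"], df["dow_cos"] = np.sin(2 * np.pi * dow / 7), np.cos(2 * np.pi * dow / 7)

# UNIX/epoch time (nanoseconds -> seconds).
df["epoch"] = df["ts"].astype("int64") // 10**9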
H: Account for unknown error in time series data
Given:
Time series data collected from sensors.
There is an unexpected gradual drop in the initial data when sensors are idle.
However, this drop is not so visible when sensors are active because the drop is masked by the actual measurements.
Main question: How do I eliminate this drop from the sensor output?
Not sure if this is an accurate assumption, but let's assume that the drop is a convolution of an unknown system X (leak, sensor drift etc.) with the system being measured G.
I am feeding this into a neural network.
How do I get an approximation to the unknown influence X (from initial data)?
My goal: Apply (-X) to all other data collected from G before training the neural net and expect an increase in performance.
Should I expect an increase in performance?
AI: I would set up a sensor as idle, purposefully to determine the decay in the signal. Then try an ARIMA or similar model on the data collected to find out what an approximation of the decay function is.
Further to this you should determine if the model still fits if they decay is put into a "stop/start" mode.
Afterwards you can apply a corrective function to remove or take into account the decay on your other data. |
H: Redundancy - is it a big problem?
I am trying to create a sentiment analysis program which will classify some of the tweets which I have collected under a hashtag. There are 7750 tweets in the dataset and I am labeling them into the two classes now. Then I will use a neural network to classify them into positive and negative classes.
The problem with the dataset is that it contains a lot of redundant data (retweets, basically). Manually deleting them is not possible and I have tried to find a programmable solution via the Tweepy API but couldn't find any.
So my question is: do I need to get rid of these redundant records or should I leave them as is?
AI: Assuming you want to learn sentiment this is a problem. What happens when you feed this to a Machine Learning algorithm is that it will give more weight to the tweets that are in there multiple times, while not learning more information. It's unlikely that a tweet that has been retweeted 10 times carries more significant information about sentiment than one that hasn't been retweeted. What you could do is add the number of times the tweet was in the set as a feature to your sentiment model, it's possible something can be learned from that fact (maybe positive tweets are retweeted more often), but you should keep it at 1 row for every distinct tweet.
Getting rid of these redundant records should not be difficult programmatically though. I don't know what language you are using but if you only consider the body of the tweet (the content) you could iterate over your tweets, keep a list of all the unique bodies, combined with other meta information (like user, labeled sentiment) and if the content of the next tweet is already in there, just do not add it. Look for 'distinct' functionality as opposed to redundant, enough information out there. |
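For example, with pandas (toy data) you can keep one row per distinct body and carry the copy count along as a feature:
import pandas as pd

tweets = pd.DataFrame({
    "text": ["great game!", "great game!", "awful refereeing", "great game!"],
    "user": ["a", "b", "c", "d"],
})

# How often each distinct body appears (1 original + its retweets).
counts = tweets.groupby("text").size().reset_index(name="n_copies")

# One row per distinct tweet, with the count attached as an extra feature.
unique_tweets = tweets.drop_duplicates(subset="text").merge(counts, on="text")
print(unique_tweets)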
H: Apache Spark Question
I am trying to parse the files using Stanford NLP in Spark, in a mapper function. How do I set the number of mappers in Apache Spark? Please help me.
AI: It automatically determines the amount of mappers by the number of partitions your data is in. You can call getNumberPartitions on your data source (RDD/DataFrame) to see how much it is and use repartition for scaling this up or coalesce to scale this down (you can use repartition for this as well but this is slower). Repartitioning is expensive however and should be avoided when unneccesary. |
H: Reading a wide dataset in R
I originally had a wide CSV dataset of about 18000 columns (and about 80 rows) that I am trying to read in R. It was stored in an Excel sheet, which unfortunately has a limit of only 16384 columns. Hence, taking the dimension I obtain:
> dim(train_set)
[1] 83 16384
i.e. 1000+ columns are getting eaten up, and this would badly affect the accuracy of the predictions. How can I read all the columns in R?
Your suggestions are much appreciated. Thanks a lot!
AI: Refer to my comment, I believe what you need is
df <- read.csv("./yourpath/yourfile", sep = ";", header = TRUE) # play around with the arguments per your file. |
H: Choosing the correct learning algorithm
I am kind of new to the data mining subject but I need help to choose a learning algorithm for my application:
The problem: identifying that a certain curve or data set belongs to a certain fault in a component.
My training data should be like this:
Motor Current Values:
[0.5,0.6,...,0.4] -> Fault in Komponent 1
[0.2,0.3,...,0.4] -> Fault in Komponent 3
[1,0.7,...,0.4] -> Fault in Komponent 2
.
.
.
[0.8,0.7,...,0.3] -> Fault in Komponent 3
And I was wondering if I should use a cluster analysis (k-means: save the centroid centers, then compute the distance of each new entry and give a fuzzy estimate of which cluster my data might belong to).
Or a decision tree algorithm (entropy of the values and so on)? Or should I calculate a distance between the nominal data (the healthy, no-fault curve) and the faulty data and use a simpler, basic decision tree with thresholds?
Or proceed with a peak analysis, count the number of peaks within my data and add that to my learning algorithm?
And when should I do data pre-processing like normalization?
This is an example of my input parameters: [Speed (mm/s), Motor Current Values (Ampere), DistanceToNominalState (Ampere)]
Here is what my data looks like.
Any suggestions ?
AI: It seems you have a data set for one component where the component suffered a fixed number of failure modes. You want to find out which data (let's assume continuous in time, so, what time) correspond to what failure mode. In other words, you are doing "pattern recognition" in your failure data.
Have you thought of using Self-Organizing Maps (SOM)? They are a sub-branch of artificial neural networks and have great capability in such problems.
You should also consider that not all failure modes appear in the shape of a "peak" value. So, only looking at peaks is not a very smart approach. It most probably will cover most of the failures, but there will be moments that you miss. SOM could take care of this too.
Data pre-processing is done before the analysis. Be careful with normalization. You could easily miss the peak or valley points if you don't pay enough attention to the normalization. Don't just use any code or normalization method you find online; test and check it with your data. For instance, some normalization could make all negative values positive, while a negative value may have an important meaning in your work.
I assume you have another variable called "failure". Another approach I suggest is building a Neural Network (NN) model, which is very common. You have the three input variables that you mentioned; consider the "failure" variable as your target variable. Build the neural network and apply it to your data again. If the number of failures is small, the NN will be able to rebuild the normal behavior of your data (the NN here is called a normal behavior model). When you apply it to your input data, the NN model will detect any deviation which is not expected.
MATLAB has very good support for both of these approaches.
H: Technical name for this data wrangling process? Multiple columns into multi-factor single column
What is the technical name for the following data wrangling process? I want to collapse Table A into Table B. (To make the data suitable for ANOVA.)
Table A:
ArmyVet_ID Served_WW2 Served_KoreanWar Served_VietnamWar
110001 1 0 0
110002 1 0 0
110004 0 1 0
110005 0 1 0
110009 0 0 1
110010 0 0 1
Table B:
ArmyVet_ID Served
110001 WW2
110002 WW2
110004 KoreanWar
110005 KoreanWar
110009 VietnamWar
110010 VietnamWar
Also, the question of how to do the above conversion using R has been asked to death on SO. However, there seem to be way too many ways to do it. If anyone's figured out the absolutely best way to do it (quickest, easiest), I'd appreciate pointers.
Update after correct answer marked below: It turns out that Table A is called "wide format" and B is called "long format".
AI: It is usually called reshaping! For a great description of the process, see this walkthrough, or read up on Hadley Wickham's documentation for the reshape package! |
H: Is there a text on Apache Spark that attempts to be as comprehensive as White's Hadoop: The Definitive Guide'?
Tom White's 'Hadoop: the Definitive Guide' has become a popular guide to the entire Hadoop ecosystem and earned a reputation as providing both a broad survey, as well as covering individual aspects of Hadoop in decent depth. Has anyone thus far attempted to provide the Spark equivalent?
AI: Learning Spark: Lightning Fast Big Data Analytics is a fairly comprehensive book covering the core concepts as well as the higher level components involved in the Spark stack. This is the book recommended by Databricks for their Spark Developer Certification as well.
If you are interested more about the use cases built using Spark, I suggest Advanced Analytics with Spark |
H: Scala vs Java if you're NOT going to use Spark?
I'm facing some indecision when choosing how to allocate my scarce learning time for the next few months between Scala and Java.
I would like help objectively understanding the practical tradeoffs.
The reason I am interested in Java is that I think some of my production, frequently refreshed, forecasts and analyses at work would run much faster in Java (compared to R or Python) and by becoming more proficient in Java I would enable myself to work on interesting side projects, such as non-Data-Science apps I'd like to develop. Currently I've taken a couple of Java courses, but I need much more education and practice to master it.
The reason I started considering learning Scala is very similar -- I figured that the statistical/ML orientation would make it good for my work as a Data Scientist, and since it is based on Java I would be getting practice at work that helps me with my side-interest in Java, even though there are a few major differences such as functional vs. imperative and traits vs. interfaces.
It seems like a lot of the advantages of Scala revolve around integration with Spark. I am thinking that this should be the tipping point for my decision, because my team is not currently using Spark and I don't have a good enough reason to request it. However, I thought I should ask here so that I don't waste too much time if Scala is still a better choice.
For the purposes of this question please ignore alternatives such as Python, R, Julia, etc (I've eliminated those from consideration for other reasons, such as already being sufficiently familiar with them for my use cases).
AI: This is a bit off topic for this SE, or maybe opinion-based, but, I work in this field and I'd recommend Scala.
No I would not characterize Scala as a "stats-oriented" Java. I'd describe it as what you get if you asked 3 people to design "Java 11" and then used all of their ideas at once.
Java 8 remains great, but Scala fully embraces just about all the good ideas from languages that you'd want, like type safety and closures and functional paradigms, and a more elaborate types/generics system, down to more syntactic sugar conveniences like case classes and lazy vals. Plus you get to run everything in the same JVM and interoperate with any Java libraries.
The price is complexity: understanding all of Scala is a lot more difficult than all of Java. And some bits of Scala maybe were a bridge too far. And, the tooling ecosystem isn't great IMHO in comparison to the standard Java tools. But there again you can for example use Maven instead of SBT. But you can mostly avoid the parts of Scala that are complex if you don't need them. Designing Scala libraries takes a lot of skill and know-how; just developing run-of-the-mill code in Scala does not.
From a productivity perspective, once you're used to Scala, you'll be more productive. If you're an experienced Java dev, I actually think you'll appreciate Scala. I have not liked any other JVM language, which tend to be sort of one-issue languages that change lots of stuff for marginal gains.
Certainly, for analytics, the existence of Spark argues for Scala. You'll use it eventually. |
H: Machine learning technique to calculate weighted average weights?
I'm just starting to investigate machine learning concepts, so I'm sorry if this question is very naive, but I'm hoping that it will be an easy one to answer!
I have a document matching algorithm that individually calculates a match for each field (0-1, with 0 = no match, 1 = 100% match), and applies a separate weight to each field match to be used in calculating an overall weighted average relevance "score".
E.g., given a document of 3 fields (d1-3) and an input query against each of the fields (q1-3), field matches are calculated for each pair (m1-3) and then weights (w1-3) are applied using a weighted average for a final relevance score: s = sum(mi x wi)/sum(wi).
For this contrived example, perhaps we can simply say that a document is considered relevant if the score is above 0.5. I.e., there is either a "relevant" (0.5-1.0) or "not relevant" (0-0.5) outcome. But I don't want every field to have equal weight in determining the outcome.
So, my question is simply: What type of machine learning technique is "best" used to calculate the appropriate weights (w1-n), based on past, known results? Is this even an appropriate use of machine learning?
And secondly, if instead of a simple outcome of relevant and non-relevant, I actually want to rank the documents by relevancy, can this also be achieved using a machine learning technique?
AI: Yes it is definitely possible to calculate optimised weightings provided you have some training examples where you know the document fields, the query, and either the outcome (relevant/not-relevant) or the desired score.
I think your training feature set should be the query score in range [0.0,1.0] for each field of each example. The training label should be either relevance 0 or 1 for each example, or the relevance score that the example has.
If you have a target score for each example
You want to determine the weights $W_i$ to use for each field $i$. Your calculated relevance score would be $\hat{y} = \sum_{i=1}^{N_{fields}} W_i * X_i$ where the caret signifies this is the estimate from your function and $N_{fields}$ is the number of fields. Note I am ignoring your original idea of dividing by the sum of all $W_i$, because it makes things more complex. You can either add that term or force the sum to be equal to 1.0 if you wish (I am not going to show you how though, as this answer would get too long, and it probably won't help you much).
With a target score and training data, the simplest approach is to find the weights which cause the lowest error when used with the training data. This is a very common goal in supervised learning. You need a loss function. Having a target scalar value means you can use difference from target and a very common loss function for this kind of regression problem is the mean squared error:
$$E = \frac{1}{N_{examples}} \sum_{j=1}^{N_{examples}} (\hat{y}_j - y_j)^2$$
Where $\hat{y}_j$ is your calculated relevance score for example $j$ and $y_j$ is your training label for the same example.
There are a few different ways to solve for lowest $E$ with this loss function, and it is one of the simplest to solve. If you express your weights as a vector $W$ length $N_{fields}$ your example features as a matrix $X$ size $N_{examples} \times N_{fields}$ and the labels as a vector $Y$ length $N_{examples}$ then you can get an exact solution to minimise loss using the linear least squares equation
$$W = (X^TX)^{-1}X^TY$$
There are other approaches that work too - gradient descent or other function optimisers. You can look these up and see which you would prefer to use for your problem. Most programming languages will have a library with this already implemented.
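As a minimal sketch of the least-squares route using NumPy (the X and y arrays below are made-up placeholder values standing in for your field-match scores and target relevance scores):

import numpy as np

# X: one row per training example, one column per field match score (placeholder data)
X = np.array([[0.9, 0.2, 0.7],
              [0.1, 0.8, 0.3],
              [0.6, 0.5, 0.9],
              [0.2, 0.1, 0.4]])
# y: target relevance score for each example
y = np.array([0.8, 0.4, 0.9, 0.2])

# Solve min ||XW - y||^2 for W; lstsq is numerically safer than inverting X^T X directly
W, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("learned weights:", W)

# Score a new document/query pair
x_new = np.array([0.7, 0.3, 0.6])
print("predicted relevance:", x_new.dot(W))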
Note that you will likely get scores greater than 1.0 or less than 0.0 from some document/query pairs.
You will have to adjust the technique if you want to divide by the total of all weights, or want the sum of all weights equal to 1 in your scoring system.
If you have a relevance 0 or 1 for each example
You have a classification problem, relevant or not are your two classes. This can still be made to work, but you will want to change how you calculate your weighted score and use logistic regression.
Your weighted score under logistic regression would be:
$$\hat{y} = \frac{1}{1 + e^{-(b + \sum_{i=1}^{N_{fields}} W_i * X_i)}}$$
Where $b$ is a bias term. This looks complicated, but really it is just the same as before but mapped by a sigmoid function to better represent class probabilities - the result is always between 0 and 1.
You can look up solvers for logistic regression, and most stats or ML libraries will have functions ready to use.
Caveats
You have made a starting assumption that a simple combined relevance score will lead to a useful end result for your users performing search. This has led to simple linear models looking like a good solution. However, this may not be the case in practice, and you may need to re-visit that assumption. |
H: Looking for a rough explanation of additive hidden nodes and radial basis functions
I'm working on a neural networks project right now, and for that I'm reading a bunch of scientific papers. In a few of those, the terms additive hidden nodes and radial basis functions are thrown around, but I'm having trouble finding a clear explanation of the terms there or anywhere else on the internet. I gather that these are classifications of neuron types, but I would love a clearer, more intuitive explanation of the terms - preferably one that doesn't require me to be very mathy to understand. I'm fairly new to neural networks, so beyond sigmoid neurons and backpropagation algorithms I'm still a bit lost when it comes to the common terminology.
AI: Albeit not wrong, Huang seems to be the only person in the world to use the term "additive hidden nodes". By this, he means that the neuron computes the sum of weighted inputs. In other words, the kind of neural network you're already used to.
An RBF neuron, on the other hand, computes a distance (usually the Euclidean distance) from the input to some center (which can be thought of as the weights if you see them as a vector) and applies an exp(-dist²) function in order to obtain a Gaussian activation. Thus, RBF neurons have maximum activation when the center/weights are equal to the inputs.
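To make the contrast concrete, here is a tiny illustrative sketch in NumPy (the input, weights and centre below are arbitrary example values, not taken from any particular paper):

import numpy as np

x = np.array([1.0, 2.0, 3.0])          # input vector
w = np.array([0.5, -0.2, 0.1])         # weights / center (arbitrary example values)

# "Additive" neuron: weighted sum of the inputs followed by a sigmoid activation
additive_out = 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# RBF neuron: Gaussian of the Euclidean distance between input and center
dist_sq = np.sum((x - w) ** 2)
rbf_out = np.exp(-dist_sq)             # maximal (1.0) when x equals the center

print(additive_out, rbf_out)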
H: Match users based on the content of their articles
I have users in my database that I would like to match up or group together based on the content of their articles. I can't seem to find how this kind of problem is being solved today. Any advice will help.
Available Data:
1) Each user's posts (anything written by them, like a blog).
2) Tags for each post (tags that the user gave to their post when they created it)
Goal:
1) Match/Group users based on available data.
2) Produce match percentage.
Attempt:
I matched people based on the number of exact matches of their tags.
Example: user1 has [car,honda,sports], user2 has [car,food].
This will give a 33% match.
As you can imagine, this does not work very well. Most users have 20 tags but typically get a match percentage of 0%, even if they are talking about similar things.
Problem:
Tags that have a clear relationship like CAR and HONDA are NOT matched.
Question:
How can I match/group users based on tags or the content of their articles?
AI: One way could be to apply word embeddings for semantic similarity checking. A word2vec model generates feature vectors that capture semantic similarity; for example, the vectors closest to car will be words like honda, ferrari, vehicle, bike. Train a model using a large amount of data from Wikipedia dumps, or use the pre-trained vectors released by Google - they are of fine quality. Gensim has a nice implementation of word2vec.
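As a rough sketch of the gensim workflow (the toy corpus below is just a placeholder - in practice you would train on a much larger corpus or load pre-trained vectors; note that the vector_size argument is named size in older gensim versions):

from gensim.models import Word2Vec

# sentences: a list of tokenised documents (tiny placeholder corpus)
sentences = [["car", "honda", "race"],
             ["bike", "honda", "racer"],
             ["food", "recipe", "kitchen"]]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1)

# Words/tags that appear in similar contexts end up with similar vectors
print(model.wv.most_similar("car", topn=3))
print(model.wv.similarity("car", "honda"))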
For each blog article, pre-process the data by removing stop-words and stemming the remaining words. From the resulting words, collect the most frequent ones. Do this for all the articles and check the similarity among the frequent words of the other articles as well, so that an article with frequent words car, race, tournament, ferrari, F1 will have vectors close to those of an article with frequent words bike, honda, racer.
Another way is to look for similar vectors among the tags themselves. It is good to play with it for some time, so that you get to know which features work better for the dataset you have.
H: Newsgroup classification
Currently our company, has a special user forum.
The main forum is about specific topic: SIP protocol.
I'm trying to understand what would be a good approach for classifying the top 10 issues customers report in the forum.
Example:
Installation, Crash, Media, Database, etc.
What would be a good approach to start classifying the threads? Currently each thread is in a CSV file.
I thought of doing the following:
Tokenization
Remove stop words
Extract top terms per thread
Count all Top issues across all threads and display top words.
Any suggestion is appreciated.
AI: Your use case boils down to categorizing a news feed on an online forum and then finding the top-n categories. I would suggest you look at this Hacker News Categorizer developed by MonkeyLearn. This way you can understand how to get started with such projects.
PS : I am not affiliated with MonkeyLearn. |
H: Optimisation strategy webstores with shipping costs
I am scraping the prices of several products on different websites for the past couple of weeks. These prices are stored, plotted and e-mailed to me every day with an update about the price changes. Now I want to take it to the next level, and make a store selection based on several products.
What I want, is pick a number of products, and calculate the most optimal shops where to buy these products, including shipping costs.
For example, I want to buy products A, B and C, at stores X and Y, with the following prices (so product A costs 2 at store X, and 3 at store Y):
X Y
A 2 3
B 3 4
C 3 2
Not taking into account shipping costs, I would buy A and B at store X, and C at store Y. However, if the shipping costs are too large for store Y, it might be better to buy everything at store X. Making things more complicated, if stores X and Y have shipping costs of 5, but shipping at Y is free when buying more than a specific amount, it might be more profitable to buy everything at Y.
Can anyone point me in the right direction on how to solve this? The idea for now is to include ~10 products at three stores. I thought about linear optimisation, but I do not know how to include the shipping costs.
AI: I have done the opposite of your problem - I have written code to implement shipping costs for e-commerce sites which runs on their sites.
Shipping cost rules can be almost completely arbitrary logic. In the general case, you have no good choice but to implement them as logic that is assessed per order. That means your optimiser will have to run that logic for every combination of purchases it considers. Which makes this a planning/combinatorics problem.
This may not be so bad - e.g. for purchasing 10 items across 5 stores you have $5^{10}$ combinations which is 9765625. That is possible by brute force.
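As an illustration of the brute-force idea, here is a small sketch in Python (the prices and shipping rules below are made-up examples from the question - your real, arbitrary shipping logic would replace the shipping function):

from itertools import product

prices = {            # price of each product at each store (example figures)
    'A': {'X': 2, 'Y': 3},
    'B': {'X': 3, 'Y': 4},
    'C': {'X': 3, 'Y': 2},
}

def shipping(store, subtotal):
    # Arbitrary example rules: flat fee of 5, but store Y ships free above 6
    if store == 'Y' and subtotal >= 6:
        return 0
    return 5

products = list(prices)
stores = ['X', 'Y']

best = None
for assignment in product(stores, repeat=len(products)):   # every store combination
    per_store = {}
    for item, store in zip(products, assignment):
        per_store[store] = per_store.get(store, 0) + prices[item][store]
    total = sum(sub + shipping(store, sub) for store, sub in per_store.items())
    if best is None or total < best[0]:
        best = (total, dict(zip(products, assignment)))

print(best)   # cheapest total cost and the store chosen for each product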
For larger orders, or more choices of store, you may want to look at dynamic solutions to optimise cost, such as simulated annealing, genetic algorithms. Reinforcement learning run per order may also work. |
H: Using Neural Networks to extract multiple parameters from images
I want to extract parameters from an image using a neural network.
Example:
Given an image of a brick wall the NN should extract the width and height of the bricks, the color and the roughness.
I can generate images for given parameters to train the NN and want to use it to extract the parameters from an actual image.
I've looked into CNNs. Can I perform this task with them? Do I need special learning algorithms to extract multiple parameters instead of classification? Are there any NNs that are designed for such tasks?
AI: A CNN could be a good choice for this task if you expect variation in the original image scale, rotation, lighting, etc., and also have a lot of training data.
The usual CNN architecture is to have convolutional layers close to the input, and fully-connected layers in the output. Those fully-connected layers can have the output arranged for different classification or regression tasks as you see fit. Predicting the values of parameters describing the image is a regression task.
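As an illustration only, a minimal Keras sketch of such a regression CNN might look like the following; the 128x128 RGB input size and the 4 output parameters (brick width, height, colour, roughness) are assumptions for the example, and it uses strided convolutions rather than pooling for the reason discussed in the next paragraph:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, 3, strides=2, activation='relu', input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, strides=2, activation='relu'),   # strided conv instead of pooling
    layers.Conv2D(64, 3, strides=2, activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(4)                                        # linear output for regression
])
model.compile(optimizer='adam', loss='mse')
# model.fit(train_images, train_params, epochs=..., validation_split=0.1)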
If you want accurate measures of size, you may need to avoid using max pooling layers. Unfortunately, not using pooling will make your network larger and harder to train - you might get away with strided convolution instead if that is a problem for you.
If your input images are very simple and clear (because they are always computer generated), then other approaches may be more reliable. You may be able to reverse-engineer image production and derive simple rules such as identifying lines, corners, circles and other easy-to-filter image components, and make direct measurements. There may also be a middle ground in complexity where extracting this data as features and using it to train a simple NN (or other ML model) will have good performance. |
H: Master thesis topics
I am looking for a thesis to complete my master, I am interested in Predictive Analytics in marketing, HR, management or financial subject, using Data Mining Application.
I have found a very interesting subject: "Predicting customer churn using decision trees" or alternatively "Predicting employee turnover using decision trees". I looked around very hard but unfortunately couldn't find any relevant dataset to download (Telecommunication Customer Churn dataset).
I would like to work on a similar subject using "Decision Tree Technique".
Please suggest some topics or project that would make for a good masters thesis subject.
Thanks.
AI: This is the approach I took:
Find journals related to your field of studies
Skim through the proceedings, see if there are titles that catch your interest
Read the papers (carefully or globally) that seemed interesting
Carefully consider the approaches and whatever future suggestions they present in their papers
Think critically: What would you change? What do you want to find out? Don't limit yourself to data but rather orient from the perspective of research. Solutions for data might only become apparent when you know exactly what you want to examine.
I think this has advantages because these papers outline details regarding data as well -- perhaps you can use the same.
Present some papers and your idea to your prospective supervisor and he/she will make some suggestions. Researchers generally have a lot of knowledge about the possibilities and might even be curious about some things themselves.
Good luck! And enjoy. |
H: Deep Learning for Time series
Deep learning is an excellent approach for classification problems such as image recognition or object detection. Can we use deep learning for regression problems such as time series prediction? If so, how can we build the structure of the deep network - I mean, how do we build layers that extract features from a time series?
AI: Recurrent neural networks (RNNs) can work with series as input or output or both.
Even a simple one-layer RNN is effectively "deep" because it has to solve similar problems as multi-layer non-recursive networks. That is because backpropagation logic in an RNN has to account for the delay between input and target, which is solved by backpropagation through time - essentially adding a layer to the network for every time step of delay between first input and last output.
RNN architecture has become more sophisticated in recent years by using "gating" techniques such as Gated Recurrent Units (GRU) or Long Short Term Memory (LSTM). These have multiple gates - 3 or 4 per unit - each with its own trainable parameters, and the schematics are more complicated than those of feed-forward networks. They have been demonstrated as very effective in practice, so this extra complexity does seem to pay off.
Although you can research and implement RNNs yourself in a library like Theano or TensorFlow, several neural network libraries already implement RNN architectures (e.g. Keras, torch-rnn).
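For instance, a minimal Keras sketch of an LSTM regressor on a toy sine-wave series (the window size and layer sizes are arbitrary choices for the example, not a recommendation):

import numpy as np
from tensorflow.keras import layers, models

# Toy setup: predict the next value of a series from the previous 20 values.
# X has shape (n_samples, timesteps, n_features), y has shape (n_samples,)
series = np.sin(np.linspace(0, 100, 2000))
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = models.Sequential([
    layers.LSTM(32, input_shape=(window, 1)),
    layers.Dense(1)            # linear output for regression
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=5, batch_size=64, verbose=0)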
H: Ideas for prospect scoring model
I have to think about a model to identify prospects (companies) that have a high chance of being converted into clients, and I'm looking for advice on what kind of model could be of use.
The databases I will have are, as far as I know (I don't have them yet), the list of current clients (in other words, converted prospects) and their features (size, revenue, age, location, stuff like that), and a list of prospects (that I have to score) and their features. However, I don't think I'll have a list of the companies that used to be prospects but for which the conversion to clients failed (if I had, I think I could have opted for a random forest. Of course I could still use a random forest, but I feel it would be a bad idea to run a random forest on the union of my two databases, and treat the clients as converted and the prospects as non-converted...)
So I need to find, in the list of prospects, those who look like the already existing clients. What kind of model can I use to do that ?
(I'm also thinking about things such as "evaluating the value of the clients and apply this to the similar prospects", and "evaluating the chance each prospect has of going out of business" to further refine the value of my scoring, but it's kinda out of the scope of my question).
Thanks
AI: I faced almost exactly the same scenario a year and a half ago -- basically what you have is a variation of the one-class classification (OCC) problem, specifically PU-learning (learning from Positive and Unlabelled data). You have your known, labelled positive dataset (clients) and an un-labelled dataset of prospects (some of which are client-like and some of which are not client-like). Your task is to identify the most client like of the prospects and target them... this hinges on the assumption that prospects that look most like clients are more likely to convert than prospects that look less like clients.
The approach we settled upon used a procedure called the Spy technique. The basic idea is that you take a sample from your known positive class and inject them into your unlabelled set. You then train a classifier on this combined data and then run the unlabelled set back through the trained classifier, assigning each instance a probability of being a positive class member. The intuition is that the injected positives (so-called spies) should behave similarly to the positive instances (as reflected by their posterior probabilities). By setting a threshold, this allows you to extract reliable negative instances from the unlabelled set. Now, having both positive and negative labelled data, you can build a classifier using any standard classification algorithm you choose. In essence, with the spy technique, you bootstrap your data to provide you with the needed negative instances for proper training.
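A rough sketch of the spy procedure, using scikit-learn and random placeholder data in place of your real client/prospect features (the 15% spy fraction and the 5th-percentile threshold are common but arbitrary choices):

import numpy as np
from sklearn.linear_model import LogisticRegression

# P: feature matrix of known positives (clients), U: unlabelled prospects
# (placeholder random data - substitute your real feature matrices)
rng = np.random.RandomState(0)
P = rng.normal(1.0, 1.0, size=(200, 5))
U = rng.normal(0.0, 1.0, size=(1000, 5))

# 1. Pick ~15% of the positives as "spies" and hide them in the unlabelled set
n_spies = int(0.15 * len(P))
spies, P_rest = P[:n_spies], P[n_spies:]
U_plus_spies = np.vstack([U, spies])

# 2. Train a classifier: remaining positives vs. (unlabelled + spies)
X = np.vstack([P_rest, U_plus_spies])
y = np.concatenate([np.ones(len(P_rest)), np.zeros(len(U_plus_spies))])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 3. The spies' scores show what hidden positives look like; anything in U
#    scoring below (almost) all spies is treated as a reliable negative
threshold = np.percentile(clf.predict_proba(spies)[:, 1], 5)
reliable_negatives = U[clf.predict_proba(U)[:, 1] < threshold]
print(len(reliable_negatives), "reliable negatives extracted")

# 4. Retrain any standard classifier on P (positives) vs. reliable_negatives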
For starters you should look into the work of Li and Liu who have a number of papers exploring the topic of OCC and PU-learning. |
H: First steps when analyzing a company's data
I'm not sure if this question is appropriate for this forum, so excuse me if it's not (if not, any suggestions on where might be a better place would be very much appreciated). I'm currently an undergrad in a quantitative field, and for the summer, I've been given an opportunity to do a data project by the company I am working for. I'm not really sure where to start. I had a conversation with one of the business owners today, in order to get a better handle on how the business works, and what kind of data they have. We talked a little bit about what sorts of questions they have, and what sort of things would be nice to know. I guess that seems to be the main question: What questions to ask? My initial thoughts are to first just look at the data via traditional descriptive stats methods (histograms, scatter plots etc....), and maybe that creates some ideas. If anyone has some tips, or even some good links (yes, I have already Googled it quite a bit), I would be grateful. Thanks.
AI: The thing you need to understand as completely as possible is how they expect a data analysis to enable them to achieve their objective. They are a business, so their overall objective is likely related to maximising profit. However, there will be a more immediate objective underneath that heading. To maximise profit you can either reduce costs or increase sales. In turn, to increase sales you can increase the number of customers or increase the amount of sales to each customer etc.
The question then turns on how you can use data science to perform one those objectives.
For example, questions that can almost be answered with data science could be 'how do I better identify potential customers?' or 'how do increase existing custmers' spend?' These are still very high level questions, but they are the sort of questions that you need to have in mind as you start to do your descriptive stats etc.
Bear in mind that this is an iterative process and it is completely normal to start off in a fuzzy sort of area. At this stage it is almost the case that having a question in mind is a McGuffin - it will kick things off, but it may not be the question you end up answering.
The CRISP-DM process is a process that has been built for data mining that discusses how to iteratively use results from analyses and models to increase your understanding of the customer's situation, and hence drive the development of a better business objective for use in a data science project. |
H: Difference: Replicator Neural Network vs. Autoencoder
I'm currently studying papers about outlier detection using RNNs (Replicator Neural Networks) and wonder what the particular difference to autoencoders is. RNNs seem to be treated by many as the holy grail of outlier/anomaly detection, however the idea seems to be pretty old too, as autoencoders have been around for a long while.
AI: Both types of networks try to reconstruct the input after feeding it through some kind of compression / decompression mechanism. For outlier detection the reconstruction error between input and output is measured - outliers are expected to have a higher reconstruction error.
The main difference seems to be the way how the input is compressed:
Plain autoencoders squeeze the input through a hidden layer that has fewer neurons than the input/output layers; that way the network has to learn a compressed representation of the data.
Replicator neural networks squeeze the data through a hidden layer that uses a staircase-like activation function. The staircase-like activation function makes the network compress the data by assigning it to a certain number of clusters (depending on the number of neurons and number of steps).
From Replicator Neural Networks for Outlier Modeling in Segmental Speech Recognition:

RNNs were originally introduced in the field of data compression [5]. Hawkins et al. proposed it for outlier modeling [4]. In both papers a 5-layer structure is recommended, with a linear output layer and a special staircase-like activation function in the middle layer (see Fig. 2). The role of this activation function is to quantize the vector of middle hidden layer outputs into grid points and so arrange the data points into a number of clusters.
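For intuition, here is one possible parameterisation of such a staircase-like activation, built as a sum of tanh steps (this is only a rough sketch - the exact form used in the cited papers may differ):

import numpy as np

def staircase(z, n_steps=4, steepness=100):
    # Smooth staircase built from a sum of tanh terms; quantises the
    # activation into n_steps roughly discrete levels between 0 and 1.
    step_locations = np.arange(1, n_steps) / n_steps
    return 0.5 + (1.0 / (2 * (n_steps - 1))) * sum(
        np.tanh(steepness * (z - s)) for s in step_locations)

z = np.linspace(0, 1, 11)
print(np.round(staircase(z), 2))   # most values snap to the levels 0, 1/3, 2/3, 1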
H: Logistic regression on biased data
I am currently working on a dataset to predict customer attrition based on past data and transactions of the customers.
There are 240,000 customers in total, out of which around 177,000 are active (as of today), while the remaining ones are inactive (6300).
This is how sample headers look like :
custID|custAge|custGender|TQuantity|TVolume|TValue|TAmount|HolidayStatus|...
Overall, I have 40 predictors which include customer details, transaction details, item details etc.
The data obviously has more active customers than inactive customers, i.e. inactive customers form only 2.6% of the entire customer base. Due to this, there are more transactions conducted by active customers (25 million / 32 million) than by inactive (previously active) ones (6 million / 32 million).
Despite this, I created a logistic regression model using randomly sampled data (shuf -n 500000 data.csv). The model achieves 96.69% base accuracy when predicting on this randomly sampled data.
The problem: How to make the model predict with greater accuracy on such a biased dataset? or How do I sample the data more appropriately?
Model prediction: with 99.7% probability, it predicts that the customer will be active, whereas the customer is actually inactive.
PS: Changing threshold won't help much
AI: Background
I'll start with some background to help you research the solution yourself and then will add some specifics. What you refer to as "biased data" is more commonly known as unbalanced classes in the data science world. Also "customer turnover" is often referred to as churn.
Metrics
As hordes of Ng'ian devotees will undoubtedly point out, you need to start by designing a set of metrics that work better with unbalanced classes than accuracy. Accuracy does a poor job of testing the quality of predictions for unbalanced classes; e.g. a test for a cancer that occurs in 0.05% of the population is 99.95% accurate if it always predicts "no cancer". I suggest using the F1-score as the key metric in cross-validating your model. The F1-score is the harmonic mean of precision and recall and tends to work for both balanced and unbalanced classes. There are other weightings of this harmonic mean that could work in special cases, so be aware of these.
There are other metrics you should learn about also. ROC-AUC is likely at the top of the list for other metrics you should understand and know about.
Model Selection and Cross-Validation
Beginning a classification task with Logistic Regression is a fantastic strategy. I make a point to always use a linear regression for regression tasks and a logistic regression for classification tasks. The linear model provides significant insight into the feature importance and helps frame the problem.
But following this initial survey you should move on to other, more sophisticated models. Many will give you a litany of things to try. You should perhaps focus on one or two and develop the model while paying very careful attention to bias and variance as you cross-validate and test your model. A full bias-variance decomposition may be unnecessary, once you develop better intuition, but is a great place for newbs to start.
I suggest starting with an SVM and also eventually trying a random forest or naive Bayes model as this will traverse several regimes of model types (analogy, decision trees, bagging, Bayesian).
Finally... Unbalanced Classes
There are two typical methods for dealing with unbalanced classes. These include oversampling the minority class, and fixing the model by altering the hyperplane (SVM) or changing priors (Bayes).
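For example, with scikit-learn you could let the SVM re-weight the classes and evaluate with F1 in cross-validation; the synthetic data below is just a stand-in for your customer features, generated with roughly the 2.6% minority rate from the question:

from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for your data: ~2.6% positive (inactive) class
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.974],
                           random_state=0)

# class_weight='balanced' re-weights the loss inversely to class frequency
# (the "altering the hyperplane" option); score with F1 rather than accuracy
clf = SVC(kernel='rbf', class_weight='balanced')
print(cross_val_score(clf, X, y, cv=5, scoring='f1').mean())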
There are lots of summaries of this problem and solution if you search for "unbalanced classes". But, it can still be a tricky problem despite all the literature. Good luck...
Hope this helps! |
H: Feature Selection and PCA
I have a classification problem. I want to reduce the number of features to 4 (I have 30). I'm wondering why I get a better result in classification when I use correlation-based feature selection (CFS) first and then apply PCA, compared with just applying PCA (the latter is worse than the former). It should also be mentioned that the information loss in the second approach (just PCA) is 0.2 (variance covered: 0.8), while in the first one it is 0.4 (variance covered: 0.6)!
Thank you in advance
AI: PCA simply finds more compact ways of representing correlated data. PCA does not explicitly compact the data in order to better explain the target variable. In some cases, most of your inputs might be correlated with each other but have minimal relevance to your target variable. That's probably what is happening in your case.
Consider a toy example. Let's say I want to predict stock prices. Say I'm given four predictors:
Year-over-year earnings growth (relevant)
Percent chance of rain (irrelevant)
Humidity (irrelevant)
Temperature (irrelevant)
If I apply PCA to this data set, the first principal component would relate to weather since 75% of the predictors are weather related. Is this principal component relevant? It's not.
The two options you've highlighted boil down to using CFS or not using it. The option that uses CFS does better because it explicitly selects variables that have relevance to the target variable. |
H: feature redundancy
Why exactly does features being dependent on each other, i.e. having high correlation with one another, mean that they would be redundant? Also, does PCA help get rid of redundant/irrelevant features, or do we have to get rid of redundant/irrelevant features before running PCA on our dataset?
AI: For the sake of training, features that are highly correlated offer little training "value" as the presence/state of one value can always (or almost always) be used to determine the presence/state of the other. If this is the case there's no reason to add both features as having both will have little impact on the predictions - if A "on" = B "off", and A "off" = B "on", then all states can be represented by just learning off either A or B. This is greatly simplified, but the same is true for other highly correlated values.
PCA can help reduce features, but in any case, if you've identified redundant or highly correlated features that will be of little use in training, it probably makes sense to eliminate them right away and then use PCA, or other feature importance metrics that can be generated by training off your full dataset, to further optimize your training feature set. |
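As an example of eliminating one feature from each highly correlated pair before PCA, here is a common pandas sketch; the toy DataFrame and the 0.95 threshold are placeholders for your own data and cut-off:

import numpy as np
import pandas as pd

# Toy stand-in for your feature DataFrame: V2 is nearly a copy of V1
rng = np.random.RandomState(0)
df = pd.DataFrame({"V1": rng.normal(size=200)})
df["V2"] = df["V1"] + rng.normal(scale=0.01, size=200)
df["V3"] = rng.normal(size=200)

# Drop one feature from every pair whose absolute correlation exceeds a threshold
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
print(to_drop)                        # likely ['V2'] for this toy data
df_reduced = df.drop(columns=to_drop)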
H: how to choose classifer
Is the best way to create the most accurate classifier to train a bunch of classification algorithms like ANN, SVM, KNN, etc., test different parameters to get optimal parameters for each classifier, and see which classifier has the lowest testing error?
Or is it better to use ensemble method and choose the "majority" decision of different kinds of trained classifiers?
AI: It's usually not that clear cut; there's typically not one universally best approach.
Having said that, there are some prototyped ensemble approaches that are supposed to always be better than their underlying component algorithms, notably Erin LeDell's binary ensemble classifier for H2O. However, even in those cases you still need to optimize the first stage algorithms for the ensemble to be universally better.
Thus if you're willing to spend a lot of extra time, let's say 2 weeks for an ensemble instead of the 1 week it might take you for your single-stage algorithm, then it's possible (especially for binary classification) to find an ensemble that will definitely be better than your single-stage classifier.
However this is rarely the case and the way you framed the question implies there's a choice between
building 1 really good single-stage model, selected from many candidate models (and by the way be sure to avoid overfitting while making those selections) and
throwing an ensemble at the problem without completing #1 above for each component of the ensemble (or completing #1 but not also optimizing the 2nd stage of the ensemble)
If that's the decision then -- while there's no 1 universally right answer -- I'd say that in the vast majority of the cases it's better to stick with #1. |
H: Feature Extraction - calculate slope
Having a bit of a mind-blank at the moment and am looking for some advice.
I am extracting features from time series data for input into a classification algorithm; for example, I'm extracting the average and variance from input X.
For input Y, I have graphed the data and can see that for class A there is an upward slope, for class B there is a downward slope, and for class C there is no slope - the line is more or less straight.
For Feature Extraction, how can I best describe this? Would a calculation to get positive/negative slope be best?
AI: There are several ways to do this, here are a couple of options:
Calculate different lag values (difference between now and t time units)
Calculate a linear regression for different time windows and store the slope and the bias
You can also involve higher order models to describe what is happening, if you think for example that the acceleration also matters you could use a 2nd or 3rd degree polynomial over the past couple of observations. |
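As an example of the second option, a small NumPy sketch that fits a line to each window of the series and keeps the slope as a feature (the window size here is an arbitrary choice):

import numpy as np

def slope_features(y, window=20):
    # Fit a straight line to each window and return its slope.
    # Positive ~ class A, negative ~ class B, near zero ~ class C (per the question).
    t = np.arange(window)
    slopes = []
    for start in range(0, len(y) - window + 1, window):
        slope, intercept = np.polyfit(t, y[start:start + window], deg=1)
        slopes.append(slope)
    return np.array(slopes)

# Example: a noisy upward trend gives clearly positive slopes
y = np.linspace(0, 10, 100) + np.random.normal(0, 0.5, 100)
print(slope_features(y))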
H: What does "linear in parameters" mean?
The model of linear regression is linear in parameters.
What does this actually mean?
AI: Consider an equation of the form
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \epsilon$
where $x$'s are the variables and $\beta$'s are the parameters. Here, y is a linear function of $\beta$'s (linear in parameters) and also a linear function of $x$'s (linear in variables). If you change the equation to
$y = \beta_0 + \beta_1x_1 + \beta_2x_1^2 + \epsilon$
Then, it is no longer linear in variables (because of the squared term) but it is still linear in parameters. And for (multiple) linear regression, that's all that matters because in the end, you are trying to find a set of $\beta$'s that minimizes a loss function. For that, you need to solve a system of linear equations. Given its nice properties, it has a closed form solution that makes our lives easier. Things get harder when you deal with nonlinear equations.
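To make that concrete, here is a small NumPy sketch showing that fitting the quadratic model above is still an ordinary linear least-squares problem in the $\beta$'s (the synthetic data and true coefficients are made up for the example):

import numpy as np

# Fitting y = b0 + b1*x + b2*x^2 is still linear regression:
# the design matrix just contains the transformed variables.
rng = np.random.RandomState(0)
x = rng.uniform(-3, 3, 100)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.3, 100)

X = np.column_stack([np.ones_like(x), x, x**2])   # columns: 1, x, x^2
betas, *_ = np.linalg.lstsq(X, y, rcond=None)     # solve for b0, b1, b2
print(betas)    # should be close to [1.0, 2.0, 0.5]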
Assume you are not dealing with a regression model but instead you have a mathematical programming problem: You are trying to minimize an objective function of the form $c^Tx$ subject to a set of constraints: $Ax \geq b$ and $x\geq0$. This is a linear programming problem in the sense that it is linear in variables. Unlike the regression model, you are trying to find a set of $x$'s (variables) that satisfies the constraints and minimizes the objective function. This will also require you to solve systems of linear equations but here it will be linear in variables. Your parameters won't have any effect on that system of linear equations. |
H: How can give weight to feature before PCA
I wonder how I can give weight to my features before employing PCA - some kind of weighted PCA. I know that one of the features is better than the others and want to give it more importance in creating the components. (It is not possible to select only that feature; I need the others' impact too.)
AI: After standardizing your data you can multiply the features with weights to assign weights before the principal component analysis. Giving higher weights means the variance within the feature goes up, which makes it more important.
Standardizing (mean 0 and variance 1) is important for PCA because it is looking for a new orthogonal basis where the origin stays the same, so having your data centered around the origin is good. The first principal component is the direction with the most variance, by scaling a certain feature with a weight of more than 1 will increase the variance on this axis and thus give more weight to this axis, pulling the first principal component in it's direction. |
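A minimal scikit-learn sketch of this idea (the random X stands in for your feature matrix, and the weight of 2.0 on the first feature is an arbitrary example value):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random stand-in for your feature matrix
X = np.random.RandomState(0).normal(size=(100, 10))

# Standardise first, then up-weight the feature you believe is more important
weights = np.ones(X.shape[1])
weights[0] = 2.0                      # double the weight of the first feature

X_scaled = StandardScaler().fit_transform(X)
X_weighted = X_scaled * weights       # inflates that feature's variance

components = PCA(n_components=4).fit_transform(X_weighted)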
H: how to explain the behaviour: linear svm does better than non-linear RBF
I am working on a binary class classification problem.
Each sample is a 1x101 vector, and I have a lot of data - more than 150k samples.
I tried training a linear SVM and a non-linear SVM (RBF); z-score normalization is used in both cases. Surprisingly, the linear SVM does better than the RBF SVM.
I am trying to explain this by considering the following points:
I believe that the quality of my features is not very good.
I think the nonlinear case experiences a kind of overfitting.
My question is: how can I explain this behaviour? Does what I am thinking make sense? I am also considering using AdaBoost for the training - is that a good idea or not?
AI: The quality of your features might actually be better than you think. If they provide linear separability, nonlinear kernels will overfit more readily than a linear kernel, leading to your result. |
H: Approaches for implementing Domain specific Question answering System
Given several Wikipedia articles on different movies.
What are the different approaches to implementing a QA system to answer different queries related to movies?
Dataset : Wikipedia articles
Input : natural language query, eg : who directed terminator ?
Output : The Terminator is a 1984 American science fiction action film directed by James Cameron
Different approaches available are :
1. IR-based approaches
In IR-based approaches, the similarities between the query and candidate passages are considered, and the most suitable passages are selected as the answer (as Google does).
2. Ontology-based approaches
In ontology-based approaches, prior knowledge about the domain is required, and documents need to be mapped accordingly.
3. Machine-learning-based approaches
In ML-based approaches, a large training set of documents is needed for training.
What are the other pros and cons of each approach?
Which of these approaches suits this use case best? Also, are there any other methods?
AI: While the actual "best system" depends heavily on a number of factors including your goals and your resources, it is possible to discuss some general pros and cons to each system.
1. IR Based approaches: Information retrieval approaches allow the algorithm to make effective use of what we might call "explicit knowledge" or hardcoded facts. They can work very well when the answer to the query exists in some form already and where the algorithm simply needs to find this answer amongst a set of all the other possible answers. Such systems typically rely on either an explicit dataset of possible answers or facts, or by treating the web as such a dataset and querying it in some way. IR systems are often the only reasonable choice when the answer cannot be generated from abstract principles. For example, if we built a system to answer trivia questions about movie stars, we would need to have explicit knowledge of which features are associated with which movie stars (i.e. Brad Pitt has brown hair and starred in Fight Club.) No machine learning algorithm can be expected to generate this without the knowledge already present (although machine learning systems with access to such knowledge may serve as effective information retrieval systems.) Due to the external nature of the data with respect to the algorithm, IR systems tend to be very effective for large and rapidly changing data sets, which would otherwise require retraining of the machine learning model or manual reconfiguration of the ontology to handle. It is also worth noting that more complex information retrieval systems may include other systems such as NLP engines to translate the data (ex. natural language text) into a form usable by the IR system.
2. Ontology based approaches: Ontology methods take the most time to implement and are usually the most finicky. They require the iterative hypothesizing of a structured representation of the domain, testing of the representation, and modification of the representation based on discovered flaws. For complex real-world domains with many possible contingencies, this process can take a very long time and be quite frustrating. Because everything is hard-coded, even the best ontology-based systems display a relative degree of inflexibility compared to their machine learning and IR counterparts. Ontologies can be effective, however, when you do not have access to a data set for information retrieval or machine learning training, as the domain knowledge are hard-coded into ontology systems by hand. They can also be useful when the domain is relatively well-structured and where it is important to understand and plan every decision made by the system such as in automated attendant phone systems.
3. Machine learning based approaches: As machine learning approaches extract patterns from data, they are relatively robust to subtleties present in complex domains that were not explicitly built into an algorithm, model, or ontology. They work best when a prediction needs to be made about which "class" or category a given input (or query in your case) belongs to, or when a quantitative prediction needs to be produced from data. For example, if I have a large collection of animal pictures labeled with the type of animal present in these images, I can train a machine learning model to predict which animal type is present in future images, as long as the model has seen enough pictures of this animal that it can extract the patterns in the image that correspond to the image likely representing a lion or a tiger or a bear, etc. This works because there are properties in the image itself that predict which animal it represents and these patterns can be extracted from labeled data (there are also machine learning algorithms that pick out patterns in unlabeled data called unsupervised algorithms, but these are not relevant for your domain of interest.) Traditionally, machine learning systems are relatively poor at modeling domains where the answer depends not on recognizing a pattern, but on having access to knowledge like in the movie stars example above. Machine learning systems are also great because they require relatively little manpower to implement. As you mentioned, they also require access to a dataset, which may need to be quite large depending on the complexity of the domain.
We can roughly summarize the advantages and disadvantages by ranking the methods according to a few criteria:
All of the best systems (ex: IBM Watson) use a hybrid approach, taking advantage of each method for its relative strengths and substituting other methods to address their weaknesses. Depending on the performance you want, a QA system can be built by a single person with some knowledge of any of the above, or may require a team of upwards of 100 engineers. |
H: Why is the number of samples smaller than the number of values in my decision tree?
I'm using scikit-learn RandomForestClassifier for a classification problem. When taking a closer look at one of the trees I noticed that the number of samples at the root was 662, but there were 507 instances of the first class and 545 of the second. What's going on or did I understand something wrong? Is the number of samples actually the number of unique samples and since I used bootstrap aggregation there are many samples that were chosen multiple times?
AI: Yes, it seems to display unique samples, the others have been duplicated by the bootstrap sampling.
There's the 0.632 rule - when you have N items and you take a random sample of size N with replacement (as bootstrap does), you only get about 63.2% of the distinct samples from N; the rest of the draws are duplicates.
That roughly matches what you've seen: 0.632 * (507+545) = 665 unique samples.
You can also try it with some Python code:
import numpy as np

samples = np.arange(507 + 545)
bootstrap_sample = np.random.choice(samples, size=len(samples), replace=True)
print(len(np.unique(bootstrap_sample)))
This always prints values closely around 665. |
H: Which supervised learning algorithms are available for matching?
I'm working on a non-profit where we try to help potential university applicants by matching them with alumni that want to share their experience/wisdom and, at the moment, it is happening manually. So I'll have two tables, one with students and one with alumni (they may have some features in common, but not necessarily all of them)
$\begin{array}{|l|c|c|} \text{Name} & \text{Gender} & \text{Height} \\ \hline \text{Kathy} & F & 165 \\ \hline \text{Tommy} & M & 182 \\ \hline \text{Ruth} & F & 163 \\ \hline ... & ... & ... \\ \end{array}$ $\begin{array}{|l|c|c|} \text{Name} & \text{Gender} & \text{Weight} \\ \hline \text{Miss Lucy} & F & 65 \\ \hline \text{Miss Geraldine} & F & 70 \\ \hline \text{Miss Emily} & F & 60 \\ \hline ... & ... & ... \\ \end{array}$
Currently, we are manually matching the members of table 1 with those in table 2. We will also collect information after the match ("Was it a good match? Please rate it on a scale from 1 to 10"). So it will look something like this:
$$
\begin{array}{|l|l|c|}
\text{Person #1} & \text{Person #2} & \text{Match?} \\ \hline
\text{Ruth} & \text{Miss Lucy} & N \\ \hline
\text{Tommy} & \text{Miss Emily} & Y \\ \hline
\text{Kathy} & \text{Miss Geraldine} & N \\ \hline
\text{Ruth} & \text{Miss Emily} & N \\ \hline
... & ... & ... \\
\end{array}$$
I would like to use a learning algorithm for this process. I know a little bit of machine learning, but I am still very much a novice (so it's also an opportunity for me to learn more about it), but I can't wrap my head around how you would do this kind of supervised learning when you have two sets both of which have multiple features. What sort of matching algorithms are available to do this? (Also, I prefer to work in R)
(By the way, I would be grateful if you could just point me in the right direction and I'll try to read about it and solve it myself. Also, I know how deeply frustrating it is to see questions that have already been answered -- if this is case, please don't hesitate to let me know without answering the question. I have already tried to search for various strings on Google and StackExchange, but mostly find lecture slides on graph theory that don't seem to be what I'm looking for (although it may just be because it's a bit over my head). Many thanks!)
AI: You can try to frame this problem as a recommender systems situation. Where you have your users (prospective students) and items (alumni) and want to recommend to the users one item.
It's not a perfect fit as you want just one item for each user and you don't have previous match data for each user. However you could investigate this idea a bit further. I'm applying these techniques to the recruitment problem, I'm matching users with job offers and I'm having some success.
Try to read a bit about recommender systems; to start, I recommend chapter 9 of Mining Massive Datasets - it's really introductory, but gives a good overview of the most common techniques.
H: What is this formula, related to simple linear regression, called?
This is my first post here. I hope I can make myself clear. Right now I'm learning linear regression as part of an introduction class to machine learning.
After going over the steps in the simple regression formulae, I realized that I had been doing something similar in the past to construct lines. In Python:
data = [{'x': 1, 'y': 2}, {'x': 5, 'y': 7}, {'x': 6, 'y': 8}]
coeff = sum([d['x'] / d['y'] for d in data]) / len(data)
Here, we're calculating the mean ratio between the variables, which we can use as a coefficient for constructing a line. Does this method have a name, and how does it relate to simple linear regression?
AI: I do not know other terms than the average of inverse slope or the inverse of the harmonic mean of slopes. This is also the negative of the average slope of the perpendiculars.
It gives you the inverse of the average slope of lines passing through $(0,0)$ and $(x_i,y_i)$. If the $(x,y)$ are almost aligned with $(0,0)$, this is an estimate of the inverse of the slope of a line passing through the points.
Linear regression is quite different, as it involves cross-products of sums of $x$ or $y$. |
H: Machine Learning in Spark
I am using Apache Spark to perform sentiment analysis. I am using the Naive Bayes algorithm to classify the text. I don't know how to find out the probability of the labels. I would be grateful for a snippet in Python to find the probability of the labels.
AI: Probabilities can be found for the test dataset once you have trained the model and transformed the test dataset. For example, if your trained Naive Bayes model is model, then model.transform(test) contains a probability column. For more details please check the code below, which shows the probability column and other useful columns for the iris dataset.
Partition dataset randomly into Training and Test sets. Set seed for reproducibility
(trainingData, testData) = irisdf.randomSplit([0.7, 0.3], seed = 100)
trainingData.cache()
testData.cache()
print trainingData.count()
print testData.count()
Output:
103
47
Next, we will use the VectorAssembler() to merge our feature columns into a single vector column, which we will be passing into our Naive Bayes model. Again, we will not transform the dataset just yet as we will be passing the VectorAssembler into our ML Pipeline.
from pyspark.ml.feature import VectorAssembler
vecAssembler = VectorAssembler(inputCols=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"], outputCol="features")
The iris dataset has three classes, namely setosa, versicolor and virginica. So let's create a multiclass Naive Bayes classifier using the pyspark ml library.
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.feature import StringIndexer
from pyspark.ml import Pipeline

# Index the string label column "Species" into a numeric "label" column
labelIndexer = StringIndexer(inputCol="Species", outputCol="label")

# Train a NaiveBayes model
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")

# Chain labelIndexer, vecAssembler and the NB model in a pipeline
pipeline = Pipeline(stages=[labelIndexer, vecAssembler, nb])

# Run stages in pipeline and train model
model = pipeline.fit(trainingData)
Analyse the created model, from which we can make predictions.
predictions = model.transform(testData)
# Display what results we can view
predictions.printSchema()
Output
root
|-- SepalLength: double (nullable = true)
|-- SepalWidth: double (nullable = true)
|-- PetalLength: double (nullable = true)
|-- PetalWidth: double (nullable = true)
|-- Species: string (nullable = true)
|-- label: double (nullable = true)
|-- features: vector (nullable = true)
|-- rawPrediction: vector (nullable = true)
|-- probability: vector (nullable = true)
|-- prediction: double (nullable = true)
You can also select particular columns to view, for example:
# DISPLAY Selected nodes only
display(predictions.select("label", "prediction", "probability"))
The above will show you the results in tabular format.
Reference:
spark
Models using pipeline
https://mike.seddon.ca/natural-language-processing-with-apache-spark-ml-and-amazon-reviews-part-1/
https://stackoverflow.com/questions/31028806/how-to-create-correct-data-frame-for-classification-in-spark-ml |
H: What's the difference between fit and fit_transform in scikit-learn models?
I do not understand the difference between the fit and fit_transform methods in scikit-learn. Can anybody explain simply why we might need to transform data?
What does it mean, fitting a model on training data and transforming to test data? Does it mean, for example, converting categorical variables into numbers in training and transforming the new feature set onto test data?
AI: To center the data (make it have zero mean and unit standard error), you subtract the mean and then divide the result by the standard deviation:
$$x' = \frac{x-\mu}{\sigma}$$
You do that on the training set of the data. But then you have to apply the same transformation to your test set (e.g. in cross-validation), or to newly obtained examples before forecasting. But you have to use the exact same two parameters $\mu$ and $\sigma$ (values) that you used for centering the training set.
Hence, every scikit-learn's transform's fit() just calculates the parameters (e.g. $\mu$ and $\sigma$ in case of StandardScaler) and saves them as an internal object's state. Afterwards, you can call its transform() method to apply the transformation to any particular set of examples.
fit_transform() joins these two steps and is used for the initial fitting of parameters on the training set $x$, while also returning the transformed $x'$. Internally, the transformer object just calls first fit() and then transform() on the same data. |
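A small scikit-learn example with StandardScaler illustrates the difference between the two calls:

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[2.5], [10.0]])

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit (learn mu, sigma) + transform
X_test_scaled = scaler.transform(X_test)        # re-use the SAME mu and sigma

print(scaler.mean_, scaler.scale_)              # parameters learned from X_train only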
H: Characterisitcs of the data set for a binary classification problem
I want to build a classifier for my problem statement, but I don't have data for it yet. So while doing data acquisition, what should be the minimum sample size? And would it be good practice to label each observation myself to build a valid data set? (I cannot automate the process of labeling observations while doing data acquisition, and doing it manually takes up a lot of time.)
AI: Unfortunately you're not going to be able to do much without at least 200-300 records. You're going to be limited to simple (i.e. mostly linear) models until your dataset expands to at least 1,000. Anything less than 1,000 will require very thorough cross validation, and if you're not careful you'll be at risk of building a model that easily overfits.
@EricLecoutre makes a great point that you should use Amazon's Mechanical Turk. It usually costs just a penny or two per record and could save you a lot of time. |
H: Understanding ROCs in imbalanced data-sets
A response variable (label) $B$ can either be $0$ or $1$.
In the training set, $B_i = 1$ is an extremely rare event at only $0.26\%$ occurrences. Which makes the prediction of this label on a test data-set a difficult problem.
I used SMOTE to sample from the training-set of some $1.55 \times 10^5$ rows to obtain a completely balanced set of $620$ rows.
balanced.df <- SMOTE(B ~ ., df, perc.over = 100, perc.under = 200)
randomForest with $1000$ trees was used to fit the model, as shown:
randomForest.Fit <- randomForest(B ~ ., data = balanced.df, ntree = 1000)
For making a validation set, $2000$ rows were sampled at random without repetition from the data-set.
The actual frequencies of $B_i$ in the validation set are:
0 1
1998 2
And those in the predicted set are:
0 1
1836 164
The results seem promising, but perhaps a little too much. Also, it is essential that the percentage of False Positives are reduced.
My questions are:
How severely do you think the skew in data is affecting the validation results?
Is there a point in validating again by creating an arbitrary data-set with bias, for example by selecting more $B_i = 1$ in the validation set?
What other metrics/validation techniques which reflect the "accuracy*" of prediction?
*The term accuracy is used in a generic sense.
AI: So your data-set of 155000 records has 403 records where B=1, and B=0 for the remaining 154597 records.
You could try splitting your data-set into 2:1 training/test sets sampled by each class of B. After you've done this, then only for the training set use SMOTE to over-sample the records with B=1 along with under-sampling the B=0 training records to bring the class ratio to something like 4:1.
Over/under sampling for the test set is not required as it is supposed to mimic real world uncertainty to test your model's performance.
Your model's AUC will definitely get reduced since (as rightly pointed out by stmax), you've leaked test records into the training set by over sampling B=1 cases before splitting the train-test sets.
The answers to each of your questions are:
Yes, class imbalance does affect a random forest model's accuracy. How severely depends on the severity of the imbalance as well as the nature of the data-set itself.
Yes, you are biasing the training by over sampling the minority class, but if these are large enough samples in the original set, then hopefully they are close representations to the entire population.
I would recommend two metrics/techniques you could use here: Kappa Statistic (refer to this article) and the precision-recall curves to compare different models. |
H: Suggestions on what patterns/analysis to derive from Airlines Big Data
I recently started learning Hadoop.
I found this data set: http://stat-computing.org/dataexpo/2009/the-data.html (2009 data).
I want some suggestions as to what types of patterns or analysis I can do in Hadoop MapReduce - I just need something to get started with. If anyone has a better data set link which I can use for learning, please help me out here.
The attributes are as follows:
1 Year 1987-2008
2 Month 1-12
3 DayofMonth 1-31
4 DayOfWeek 1 (Monday) - 7 (Sunday)
5 DepTime actual departure time (local, hhmm)
6 CRSDepTime scheduled departure time (local, hhmm)
7 ArrTime actual arrival time (local, hhmm)
8 CRSArrTime scheduled arrival time (local, hhmm)
9 UniqueCarrier unique carrier code
10 FlightNum flight number
11 TailNum plane tail number
12 ActualElapsedTime in minutes
13 CRSElapsedTime in minutes
14 AirTime in minutes
15 ArrDelay arrival delay, in minutes
16 DepDelay departure delay, in minutes
17 Origin origin IATA airport code
18 Dest destination IATA airport code
19 Distance in miles
20 TaxiIn taxi in time, in minutes
21 TaxiOut taxi out time in minutes
22 Cancelled was the flight cancelled?
23 CancellationCode reason for cancellation (A = carrier, B = weather, C = NAS, D = security)
24 Diverted 1 = yes, 0 = no
25 CarrierDelay in minutes
26 WeatherDelay in minutes
27 NASDelay in minutes
28 SecurityDelay in minutes
29 LateAircraftDelay in minutes
Thanks
AI: There really is no wrong answer here, but I recommend predicting flight cancellations (#22) and/or delays (25-29), since this is how I often see this data set being used. It could also have practical significance to you if you should ever find yourself flying to or departing from one of the worst offending airports/airlines.
I'm not sure if you have a choice (perhaps your employer requires it), but don't use Map Reduce -- it's incredibly difficult to learn/maintain, it's slow, and on top of that it has become obsolete. Use something like Spark's ML lib (http://spark.apache.org/docs/latest/mllib-guide.html). It's much easier to use and is much more current. |
H: Support Vector Classification kernels ‘linear’, ‘poly’, ‘rbf’ has all same score
I built a classification model based on SVM and I am getting the same results after running different kernels. Can you please let me know if there is a mistake? Also, the recall for all of them is identical. Thank you for the help.
Adding the location for the notebook and data.
SVC repo with the notebook and data
AI: You will love the answer to this one...
Take a look at your code and notice that every time you call the scoring function you are passing in the exact same values, i.e. they are all spitting out lin_svc.score(). Try interleaving the four scoring calls below the four respective fit calls and you should see the desired variation in the results.
from sklearn import svm

# X_train, X_test, y_train, y_test are assumed to come from your existing train/test split
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
# rbf is the Gaussian (RBF) kernel
C = 1.0  # SVM regularization parameter

svc = svm.SVC(kernel='linear', C=C).fit(X_train, y_train)
print(svc.score(X_test, y_test))

rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X_train, y_train)
print(rbf_svc.score(X_test, y_test))

poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X_train, y_train)
print(poly_svc.score(X_test, y_test))

lin_svc = svm.LinearSVC(C=C).fit(X_train, y_train)
print(lin_svc.score(X_test, y_test))
You are doing the same thing further down in the notebook as well.
Hope this helps! |
H: problem loading data into R
I have a problem loading data into R:
fileUrl <- "http://jadi.net/files/iran_it_status_1394_detail_data_jadi_net.tsv"
download.file(fileUrl , destfile="iran_it_status_1394_detail_data_jadi_net.tsv")
dev <- read.delim("iran_it_status_1394_detail_data_jadi_net.tsv",
header=TRUE,sep="\t",blank.lines.skip = TRUE,
na.strings="",fileEncoding="UTF-8",
stringsAsFactors=FALSE,skipNul = TRUE)
I receive the following error:
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
no lines available in input
In addition: Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
invalid input found on input connection 'iran_it_status_1394_detail_data_jadi_net.tsv'
Edit: The dataset has 1217 rows and 33 variables.
names(data) <- c("timestamp","age","sex","birth_province","work_province","experience","education",
"certificate","learn","project","book","language","wish_language","db","desktop_os",
"wish_os","mobile","env","theme","src_ctrl","tab_space","drink","items","device","title",
"org_type","org_emp","income","perk","job_contract","job_type","hour_wage","happy")
for language variable I expect this output:
data[1:3,"language"]
C#, Javascript, R, SQL
Java, C#, Javascript, Objective C, Swift, SQL
C#, SQL
Python Solutions are also welcome
AI: I am able to load the data set like so:
dev <- read.table("iran_it_status_1394_detail_data_jadi_net.tsv",
header=TRUE, sep="\t", blank.lines.skip = TRUE,
na.strings="",
stringsAsFactors=FALSE, skipNul = TRUE, fill=T, quote="")
Note the removal of the encoding (so that the function "finds" lines in the file), the fill attribute (to allow for a ragged table with empty cells), and the elimination of quotes (apparently there is a misquoted line somewhere near line 585).
This results in a table full of encoded characters. You'll need to know more about the source data to figure out how to work with it, but if you open the file up in a raw text editor (e.g. Sublime Text) you might get some clues.
H: Pruning and parameter reduction for decision trees
I am trying to perform a classification using a decision tree classifier. I was wondering whether a feature reduction method is relevant for decision trees, since they already use pruning automatically.
My idea would be to loop over reducing the feature set to between 5 and 15 parameters, compare the classification accuracy of each resulting decision tree, and then conclude the optimal number of parameters for my classification.
Thank you.
AI: Pruning and feature reduction are different things.
Pruning: It basically compares the purity of the two leaves separately and together. If the leaves together are at least as pure, the two leaves are pruned, and the split decision over the parameter(s) at that node is wiped out.
Let's say you have N different parameters. Your tree might be deep enough that pruning has been applied to all of the parameters at different nodes, while at the same time those same parameters may still be used at other nodes. If a parameter is not useful, the decision tree will itself decide not to split on it, although that alone does not prevent overfitting.
Dimensionality Reduction:
If you reduce the number of parameters, the removed parameters will never appear in your tree at any node, even though they might have been relevant at some point.
The two are not incompatible, and performing a dimensionality reduction may increase the accuracy of your task for a further classifier (such as a decision tree).
However, decision trees are also used for dimensionality reduction: after training, one can inspect the feature importances within the decision tree, i.e. how much each feature is used to create splits at the different nodes. Based on this new knowledge, you can keep only the most important features to train another classifier.
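A rough sketch of that last idea in R, using rpart; the data frame and column names here are placeholders.
library(rpart)

# fit a decision tree on all available features
fit <- rpart(target ~ ., data = train_data, method = "class")

# variable importance: how much each feature contributes to splits
imp <- fit$variable.importance
imp

# keep, say, the 10 most important features for a second classifier
top_features <- names(sort(imp, decreasing = TRUE))[1:10]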
H: How to form Hessian matrix in BFGS Quasi-Newton Method
I came across this link. In the BFGS quasi-Newton method, a Hessian matrix is used in the weight update. Is there any resource where I can find how this Hessian matrix is obtained, along with a clear description of the process and why the Hessian matrix is used? I could not understand the wiki article.
AI: In optimization, you are looking for minima and maxima of a certain function f. Equivalently, under certain derivative conditions, you are looking for the zeros of g, the derivative of f.
The first equation of your link describes one step of the Newton method, not BFGS. It involves the Hessian of f, which is usually impractical to compute and store directly: it is inefficient in terms of both time and memory. Because of this, methods have been developed that avoid calculating the Hessian directly.
Thanks to Taylor expansion, you can write
g(x) = g(y) + g'(y)(x - y) + O(|x - y|²)
Replacing x by x(k+1), y by x(k), and g' by the Hessian of f gives you (with the following convention: g(k+1) = g( x(k+1) )):
g(k+1) ≈ g(k) + ∇²f(x(k+1)) ( x(k+1) - x(k) )
H(k+1) (x(k+1) - x(k)) ≈ g(k+1) - g(k)
where H(k) aims to approximate ∇²f(x(k)). But instead of calculating it at each step, we'll try to approximate it thanks to the previous H(k-1)
In 1970, Broyden, Fletcher, Goldfarb and Shanno proposed a rank-2 update of H(k), i.e.
H(k+1) = H(k) + Term1 + Term2
where Term1 does not depend on H and Term2 depends on H(k). You can find these terms here.
But this solves only the time issue, not the memory storage. As for that, some have developed Limited-memory BFGS (or L-BFGS) to limit the amount of memory used. |
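To make the rank-2 update above concrete, here is a small R sketch of the standard BFGS update of the Hessian approximation, with s and y as defined above. This is just the textbook formula written out, not production code.
# One BFGS update of the Hessian approximation H
# s = x(k+1) - x(k),  y = g(k+1) - g(k)
bfgs_update <- function(H, s, y) {
  ys  <- as.numeric(t(y) %*% s)   # curvature term y's
  Hs  <- H %*% s
  sHs <- as.numeric(t(s) %*% Hs)
  # Term1 = y y' / (y's) does not depend on H; Term2 = -H s s' H / (s'Hs) does
  H + (y %*% t(y)) / ys - (Hs %*% t(Hs)) / sHs
}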
H: What does it mean when people say a cost function is something you want to minimize?
I am having a lot of trouble understanding this. Does it mean you should not use the cost function very often?
AI: No. "Minimize" here refers to the value the cost function outputs, not to how often you use it. It means you are trying to find the inputs (for example, the model parameters) that make the output of the cost function as small as possible.
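A tiny R illustration of what "minimizing a cost function" means in practice, fitting a line by minimizing squared error; x and y here are made-up data.
set.seed(1)
x <- runif(100)
y <- 2 + 3 * x + rnorm(100, sd = 0.2)

# cost function: mean squared error of a line with intercept w[1] and slope w[2]
cost <- function(w) mean((y - (w[1] + w[2] * x))^2)

# "minimize the cost function" = find the w that makes cost(w) smallest
fit <- optim(par = c(0, 0), fn = cost)
fit$par   # should be close to c(2, 3)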
H: How do I find the minimum distance between zip codes in R?
I have a dataset that lists all zip codes in the U.S., their types (standard, po box, university, etc). I want to replace po box and university zip codes with the next closest standard zip code. I broke down the dataset by state so that R wouldn't have to make as many calculations. In theory, I would like to have standard zip codes in the first column and zip codes that need replacement in the first row, and have the distance between the two be the intersection value.
For example,
REP 1 REP 2 REP 3 REP 4
STD 1 0.215 0.152 0.025 0.124
STD 2 0.365 0.410 0.074 0.234
STD 3 0.234 0.201 1.322 0.683
STD 4 0.543 0.282 0.483 0.094
MINS STD 1 STD 1 STD 2 STD 4
where STD 1 is a standard zip code with its own latitude and longitude, and REP 1 is a zip code that needs to be replaced (is a university/po box zip) with its own latitude and longitude. I only have about 5 weeks of experience in R, so please bear with me if something doesn't quite make sense to me immediately. I have tried to do this in excel and having a sheet with close to 10,000 columns by 40,000 rows crashes every time that I try to calculate all of the distances because there are just too many calculations.
I have a feeling that either the apply() or mapply() functions are needed here. I want to calculate the distances using a formula that accounts for the curvature of the earth (great-circle distance rather than plain Euclidean dist()), for example via the geosphere package, to maintain accuracy and be reproducible.
If there is anything else that would be helpful to add on here, let me know and I'll upload it asap. Here is my R code for Alaska, the first state in alphabetical order.
AK<-subset(db,STAABBRV.x=="AK")
AKPO<-subset(AK,ZipCodeType!="STANDARD",select=c("ZIP_CODE","ZipCodeType","Long","Lat"))
AKPO<-within(AKPO,{IS_PO=ifelse(ZipCodeType!="STANDARD",1,0)})
AKSTANDARD<-subset(AK,ZipCodeType=="STANDARD",select=c("ZIP_CODE","ZipCodeType","Long","Lat"))
AKSTANDARD<-within(AKSTANDARD,{IS_PO=ifelse(ZipCodeType!="STANDARD",1,0)})
table<-rbind(AKSTANDARD,AKPO)
table$ZipCodeType<-NULL
rm(AK,AKPO,AKSTANDARD)
This sets up a table that has column names "ZIP_CODE", "Long", "Lat", and "IS_PO". "IS_PO" is a numerical indicator for whether or not the zip code is standard or po/university. 1 indicates that the zip code is a po/univ zip and 0 indicates a standard zip. I did this because some functions required that the data in the dataset be the same type (numerical).
Here are some of my failed attempts at writing code to calculate the minimum distances.
lapply(bit::chunk(1, nrow(zipcode), 1e2), function(ridx) {
merge(zipcode, zipcode[ridx[1]:ridx[2]], by = "dum", allow.cartesian = T)[
, dist := distGeo(matrix(c(longitude.x, latitude.x), ncol = 2),
matrix(c(longitude.y, latitude.y), ncol = 2))/1609.34 # meters to miles
][dist <= 5 # necessary distance treshold
][, dum := NULL]
}) %>% rbindlist -> zip_nearby_dt
DOESITWORK<-apply(db, 1, function(x) spDistsN1(matrix(x[3:4], nrow=1),
x[5:6],
longlat=TRUE))
mins<-apply(Lat,1,function(x)return(array(which.min(x))))
mins<-data.frame(row=names(mins),col=mins)
Lat$mins<-apply(mins,1,FUN=function(x)return(paste(x["row"],colnames(Lat[as.numeric(x["col"])]),Lat[x["row"],as.numeric(x["col"])],sep="/")))
AI: If I have read your question correctly, it looks like you need a nearest neighbor implementation. If you are unfamiliar with the concept, you can find the wiki article here: https://en.wikipedia.org/wiki/Nearest_neighbor_search.
I went ahead and wrote an example implementation you can use as a guide. Please note that this is a brute force method and not useful for big data sets. Once you have a grasp of the material, I suggest checking out some libraries like RANN that have "real" implementations.
Read in some random test data and clean
For this test let us assume that we want to find the
closest AMERICAN city for each location
coord_data = read.csv("~/Downloads/SalesJan2009.csv", stringsAsFactors = F)
coord_data$id = c(1:nrow(coord_data))
coord_data$is_usa = ifelse(coord_data$Country == "United States", 1, 0)
coord_data = coord_data[ , c("id", "Latitude", "Longitude", "is_usa")]
names(coord_data) = tolower(names(coord_data))
Define your distance function.
Here we have geo-coordinates over long distance so Euclidean will not do.
I am using the Law of Cosines to calculate great circle distance
but Haversine and Vincenty should be considered given your needs.
To learn more start here: https://en.wikipedia.org/wiki/Great-circle_distance.
greatCircleDistance = function(latAlpha, longAlpha, latBeta, longBeta, radius = 6371) {
## Function taken directly from Wikipedia
## Earth radius in km is default (6371)
## Long/Lats are in degrees so need helper function to convert to radians
degreeToRadian = function(degree) (degree * pi / 180)
deltaLong = degreeToRadian(longBeta) - degreeToRadian(longAlpha)
sinLat = sin(degreeToRadian(latAlpha)) * sin(degreeToRadian(latBeta))
cosLat = cos(degreeToRadian(latAlpha)) * cos(degreeToRadian(latBeta))
## acos is finicky with precision so we will assume if NA is thrown
## the argument was very close to 1 and therefore will return 0
## acos(1) == 0
acosRaw = suppressWarnings(acos(sinLat + cosLat * cos(deltaLong)))
acosSafe = ifelse(is.na(acosRaw), 0, acosRaw)
acosSafe * radius
}
Distance between Basildon, UK and Parkville, US
greatCircleDistance(coord_data$latitude[1],
coord_data$longitude[1],
coord_data$latitude[2],
coord_data$longitude[2])
Returns [1] 6929.351 km.
It matches Google's calc so we are good to go!
Brute Force Example:
As you noticed with your Excel sheet, this will blow up quickly as the data set gets larger. There are much more efficient ways of implementing the search. One idea is to start with the geo-data structure itself and write an R-tree, but I'll leave that for you.
bruteForceNearestNeighbor = function(geoData) {
makeCoordinate = function(idx) {
c("id" = idx, "latitude" = geoData$latitude[idx], "longitude" = geoData$longitude[idx])
}
singleCoordMinDistance = function(coordinate, locations) {
locationsUS = locations[locations$is_usa == 1 & locations$id != coordinate["id"], ]
distances = mapply(greatCircleDistance,
latAlpha = coordinate["latitude"],
longAlpha = coordinate["longitude"],
latBeta = locationsUS$latitude,
longBeta = locationsUS$longitude)
closestIndex = which.min(distances)
locationsUS[closestIndex, "id"]
}
nearestNeighbors = vector("numeric", nrow(geoData))
for ( i in 1:nrow(geoData) ) {
coord = makeCoordinate(i)
nearestNeighbors[i] = singleCoordMinDistance(coord, geoData)
}
nearestNeighbors
}
coord_data$nearest_neighbor = bruteForceNearestNeighbor(coord_data) |
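When you are ready to move past brute force, the RANN package mentioned above does the neighbor search very quickly. One caveat: nn2() uses Euclidean distance, so a common trick for geographic data is to convert lat/long to 3D Cartesian coordinates first. A hedged sketch follows; standard_zips and replace_zips stand for your data frames of standard and to-be-replaced zips, with ZIP_CODE, Lat and Long columns as in your code.
library(RANN)

# convert degrees to 3D unit-sphere coordinates so Euclidean distance is meaningful
toXYZ <- function(lat, lon) {
  latr <- lat * pi / 180
  lonr <- lon * pi / 180
  cbind(cos(latr) * cos(lonr), cos(latr) * sin(lonr), sin(latr))
}

# index of the nearest standard zip for every po/university zip
nn <- nn2(data  = toXYZ(standard_zips$Lat, standard_zips$Long),
          query = toXYZ(replace_zips$Lat,  replace_zips$Long), k = 1)
replace_zips$replacement_zip <- standard_zips$ZIP_CODE[nn$nn.idx[, 1]]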
H: feature selection techniques
Is it always a good idea to remove features that have high mutual information with each other and to remove features that have very low mutual information with the target variable? Why or why not?
AI: Doing that is a very good idea. The problem is that doing that is very hard.
Feature selection is an NP-complete problem.
The practical meaning is that we don't know of any fast algorithm that can select only the needed features.
In the other direction, omitting features that have no mutual information (MI) with the concept might cause you to throw away the very features you need most.
There are cases in which a single feature is useless but given more features it becomes important.
Consider a concept which is the XOR of some features. Given all the features, the concept is totally predictable. Given one of them, you have 0 MI.
A more real life example is of age at death. Birth date and death date give you the age. One of them will have very low correlation (due to increase in life expectancy).
In practice, omitting features with low MI is usually OK. Many learning algorithms rely on MI-like criteria themselves, so they would not have made much use of the omitted variables anyway.
As for the selection itself, there are many algorithms, usually heuristics or approximation algorithms, that are quite handy.
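As a concrete illustration of the XOR point above, here is a small base-R sketch that computes mutual information directly from a contingency table (no extra packages needed).
# mutual information (in bits) between two discrete vectors
mi <- function(a, b) {
  p <- table(a, b) / length(a)
  px <- rowSums(p); py <- colSums(p)
  sum(ifelse(p > 0, p * log2(p / outer(px, py)), 0))
}

set.seed(1)
x1 <- sample(0:1, 10000, replace = TRUE)
x2 <- sample(0:1, 10000, replace = TRUE)
target <- xor(x1, x2) * 1

mi(x1, target)             # ~0 bits: each feature alone looks useless
mi(paste(x1, x2), target)  # ~1 bit: together they fully determine the target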
H: How to reduce time R takes for model building
I am building machine learning algorithms on my laptop. It has an i3 processor and 16 GB of RAM. Despite using multiple cores (3 out of 4), it takes 2 days to run all the techniques that I am trying, and obviously the data is fairly large (close to 1.3 million rows and 20 variables).
Is there any way to reduce the time required for running these algorithms to a fraction of what it currently takes? Let's say some hours instead of days? I have heard from a friend with a computer science background that Spark takes less time than standalone R. I am not sure whether Spark can reduce the analysis time from multiple days to some hours. I am open to suggestions and solutions (preferably open source). Thoughts?
I am sure a solution for this must exist as R is pretty old and some genius would have found a way to solve this painful problem.
AI: It depends on the models you are trying to run. Your data isn't that big, but for example with a support vector machine from the kernlab package you would already run into problems at this size. Not every model is fast or has a fast implementation.
Without more information on what you are doing it is difficult to say what causes the bottleneck. But if you just want a speed boost when fitting models, have a look at the xgboost package, the h2o package (GLM, GBM, random forest, deep learning), and ranger for a faster implementation of a random forest model.
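For illustration, here is roughly what ranger and xgboost look like in R. The formula, data frame name and parameter values are placeholders; adjust them to your problem.
# fast, multi-threaded random forest
library(ranger)
rf_fit <- ranger(target ~ ., data = train_data,
                 num.trees = 200, num.threads = 3)

# gradient boosting with xgboost (expects a numeric matrix)
library(xgboost)
xgb_fit <- xgboost(data = as.matrix(train_data[, -1]),  # assuming the target is column 1
                   label = train_data$target,
                   nrounds = 100, nthread = 3)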
H: How does QUEST compare to other decision tree algorithms?
SPSS Modeler has an implementation of QUEST, along with C&RT, C5.0 and CHAID. QUEST is relatively rarely covered in textbooks - what are its pros and cons compared to other decision tree algorithms? How does it make splits? Why is it (apparently) not in as widespread use as C&RT or C5.0?
AI: QUEST stands for Quick, Unbiased and Efficient Statistical Tree.
It uses ANOVA F and contingency table chi-square tests to select variables for splitting. Variables with multiple classes are merged into two super-classes to get binary splits, which are determined using QDA (quadratic discriminant analysis). The tree can be pruned using the CART algorithm. QUEST is designed for classification tasks.
Quest first transforms categorical (symbolic) variables into continuous variables by assigning discriminant coordinates to categories of the predictor. Then it applies quadratic discriminant analysis (QDA) to determine the split point. Notice that QDA usually produces two cut-off points—choose the one that is closer to the sample mean of the first superclass.
An advantage of the QUEST tree algorithm is that it is not biased in split-variable selection, unlike CART which is biased towards selecting split-variables which allow more splits, and those which have more missing values.
Not sure why it is not used as widely as CART or C5.0. It might be due to greater coverage of CART/C5.0 in literature than others.
References:
Quest reference manual: http://www.stat.wisc.edu/~loh/treeprogs/guide/guideman.pdf
http://www.stat.wisc.edu/~loh/quest.html |
H: Convert Lat /Lon of User input to Lat/Lon of Open Data
I have data from a public data set in gridded form 2.5 degree x 2.5 degree(lat,lon).
Latitude goes from 90 N to -90 S. Longitude goes from 0 to 357.5. It is stored every 2.5 degrees and there are no intermediate values.
My user will want to download data from this data set. However they are not restricted from entering only the values that the public data set possesses. My goal then is to convert the data they enter to the "nearest" latitude and longitude of the public data set. For example while data may exist at 5 N and 60 E(and the next pont is 7.5 N and 62.5E) they may enter 6 N and 61.3 E.
How do I map the user input to the nearest latitude and longitude ? Note - user input will be in the form of a rectangle - lat_min,lat_max, lon_min and lon_max.
I took a shot at this and I hope I can find a better algorithm through DS SE.
lat_min = floor(lat_min/2.5) * 2.5
lat_max = ceiling(lat_max/2.5) * 2.5
lon_min = floor(lon_min/2.5) * 2.5
lon_max = ceiling(lon_max/2.5) * 2.5
AI: What's wrong with what you've suggested? It seems fine.
You've essentially supplied four candidate points:
(latmin, lonmin)
(latmin, lonmax)
(latmax, lonmin)
(latmax, lonmax)
Clearly one of them is the closest grid point to the user input. Then just use whatever distance calculation you like (e.g. Haversine distance, or better measurements if you have access to them) and pick which of the four points is closest to what the user specified. It'll be accurate and performant. |
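For completeness, if you ever want to snap a single user value to the nearest grid value rather than expanding the rectangle outward, rounding to the nearest multiple of 2.5 does it in R; the modulo keeps longitude in the 0 to 357.5 convention. This snaps each coordinate independently, which is what the gridded data expects.
snap_to_grid <- function(x, step = 2.5) round(x / step) * step

snap_to_grid(6)              # 5    (nearest latitude grid line to 6 N)
snap_to_grid(61.3) %% 360    # 62.5 (nearest longitude grid line to 61.3 E)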
H: Feature selection for gene expression dataset
I am searching for a feature selection algorithm which selects features that are:
relevant to discriminate groups of samples (for each sample a group label is provided)
endowed with high variance across all the samples
This should be applied to gene expression dataset, in which each sample has a group label, therefore it should be possible to select for each group a set of features to be checked against.
I have now two candidates:
selecting features by the feature importance result of a Random Forest classifier
using the Minimum Redundancy Maximum Relevance (mRMR) algorithm
However, I am unsure of which may be the best or if there are better candidates for this purpose.
If the algorithm is implemented in Python scikit-learn it would be a plus.
AI: It would be helpful if you described your dataset more. Gene expression datasets seem to often have very high dimensionality and Lasso regularized logistic regression is a popular method to approach this problem. This paper takes it a little further and might help you out:
http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-14-198
Random forest can generally certainly provide a meaningful importance ranking, but it also depends on what your dataset looks like.
mRMR sounds like it is specifically designed for identifying gene characteristics, so definitely give it a try.
There's also Principal Component Analysis, which is also used for gene expression data.
Lots of options, but your question is not detailed enough to go much further, and providing a complete code solution at this point isn't realistic. The documentation for Python scikit-learn has many good explanations and examples.
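If you want something concrete to start from, here is a hedged R sketch of L1-penalized (lasso) multinomial logistic regression with glmnet; X stands for your samples-by-genes expression matrix and groups for the vector of group labels. In Python, scikit-learn's LogisticRegression with an l1 penalty plays a similar role.
library(glmnet)

# cross-validated lasso; alpha = 1 is the pure L1 penalty
cv_fit <- cv.glmnet(x = X, y = as.factor(groups),
                    family = "multinomial", alpha = 1)

# non-zero coefficients at the best lambda = the selected genes, per group
# (each element also contains an "(Intercept)" row)
coefs <- coef(cv_fit, s = "lambda.min")
selected <- lapply(coefs, function(cc) rownames(cc)[as.matrix(cc)[, 1] != 0])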
H: Analytics term for turning row values into column names and count its assigned values
Do we have a data mining/analysis term for turning row values into column names and counting their assigned values?
AI: Aggregation.
See Aggregate function |
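For example, in R this kind of aggregation is a one-liner with table(); the result is often called a pivot table or cross-tabulation. The data frame and column names below are placeholders.
df <- data.frame(customer = c("a", "a", "b", "b", "b"),
                 product  = c("x", "y", "x", "x", "z"))

# product values become column names, cells hold the counts
table(df$customer, df$product)
#     x y z
#   a 1 1 0
#   b 2 0 1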