H: Identifying repeating sequences of data in byte array Given a sample of hexadecimal data, I would like to identify UNKNOWN sequences of bytes that are repeated throughout the sample. (Not searching for a known string or value) I am attempting to reverse engineer a network protocol, and I am working on determining data structures within the packet. As an example of what I'm trying to do (albeit on a smaller scale): (af:b6:ea:3d:83:02:00:00):{21:03:00:00}:[b3:49:96:23:01] {21:03:00:00}:(af:b6:ea:3d:83:02:00:00):01:42:00:00:00:00:01:57 And (38:64:88:6e:83:02:00:00):{26:03:00:00}:[b3:49:96:23:01] {26:03:00:00}:(38:64:88:6e:83:02:00:00):01:42:00:00:00:00:00:01 Obviously, these are easy to spot by eye, but patterns that are hundreds of chars into the data are not. I'm not expecting a magic bullet for the solution, just a nudge in the right direction, or even better, a premade tool. I'm currently needing this for a C# project, but I am open to any and all tools. AI: I believe the problem that you are referring to, is that of "Motif Discovery in Time Series Data". An appreciable amount of research literature already exists in this domain, so you can look through those. If the data that you handle is not very large, you can find some relatively easy to implement algorithms. If the data is large , you can look at more recent publications in this domain. As a starting point I would recommend taking a look at how Motif Discovery is done in SAX. SAX takes continuous signals as inputs and discretizes them. These discrete levels are then stored as alphabets. This resulting data looks very much like yours in my opinion. Take a look at what they do in "Mining Motifs in Massive Time Series Databases".
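A quick way to get a feel for the data before reaching for full motif-discovery machinery is to count how often every fixed-length byte window occurs and keep the repeated ones. A minimal sketch (Python here just for illustration; the window length n is an arbitrary choice you would tune):

from collections import Counter

def repeated_ngrams(data: bytes, n: int = 4, min_count: int = 2):
    # Count every n-byte window and keep the ones that occur more than once
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return {seq: c for seq, c in counts.items() if c >= min_count}

# Toy sample: a repeated 8-byte chunk plus some noise
sample = bytes.fromhex("21030000afb6ea3d") * 3 + bytes.fromhex("deadbeef")
for seq, c in sorted(repeated_ngrams(sample).items(), key=lambda kv: -kv[1]):
    print(seq.hex(), c)

The same counting idea ports directly to C#; the motif-discovery literature mentioned above then helps when the repeats are approximate rather than exact.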
H: How to replace levels with certain value? I have a categorical variable in my dataset. I want to replace the levels which are present in the test dataset but not present in the training set with a value called "Others". Here is how it looks: levels(training$var1) has levels "1" "2" "3" "Others" levels(testing$var1) has levels "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" I want to replace all the levels in the testing data which are not in the training data. To achieve this I take the difference between the levels first: a <- setdiff(levels(testing$var1), levels(training$var1)) and I get a as "4" "5" "6" "7" "8" "9" "10" Now I need to replace all the above difference values with "Others". Kindly note that I do not want to drop these levels; I want to replace them with "Others". For this I tried testing$var1[testing$var1 == "4"] <- "Others" testing$var1[testing$var1 == "5"] <- "Others" ==> these work. However I want to do it programmatically, something like this: testing$var1[testing$var1 == a[1,]] <- "Others" But this is not working. AI: I was able to achieve this with the following code: testing$var1 <- as.character(testing$var1) a <- data.frame(a) testing$var1[testing$var1 %in% a[1,]] <- "Others" testing$var1 <- as.factor(testing$var1) If there is any better or more effective solution/function to achieve this, please let me know. Thanks all.
H: How to avoid overfitting in random forest? I want to avoid overfitting in random forest. In this regard, I intend to use mtry, nodesize, and maxnodes etc. Could you please help me choose values for these parameters? I am using R. Also, if possible, please tell me how I can use k-fold cross validation for random forest (in R). AI: Relative to other models, Random Forests are less likely to overfit but it is still something that you want to make an explicit effort to avoid. Tuning model parameters is definitely one element of avoiding overfitting but it isn't the only one. In fact I would say that your training features are more likely to lead to overfitting than model parameters, especially with Random Forests. So I think the key is really having a reliable method to evaluate your model to check for overfitting more than anything else, which brings us to your second question. As alluded to above, running cross validation will allow you to avoid overfitting. Choosing your best model based on CV results will lead to a model that hasn't overfit, which isn't necessarily the case for something like out-of-bag error. The easiest way to run CV in R is with the caret package. A simple example is below:

> library(caret)
> data(iris)
> tr <- trainControl(method = "cv", number = 5)
> train(Species ~ ., data = iris, method = "rf", trControl = tr)

Random Forest
150 samples
  4 predictor
  3 classes: 'setosa', 'versicolor', 'virginica'

No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 120, 120, 120, 120, 120
Resampling results across tuning parameters:

  mtry  Accuracy  Kappa  Accuracy SD  Kappa SD
  2     0.96      0.94   0.04346135   0.06519202
  3     0.96      0.94   0.04346135   0.06519202
  4     0.96      0.94   0.04346135   0.06519202

Accuracy was used to select the optimal model using the largest value. The final value used for the model was mtry = 2.
H: Adjusting Probabilities When Using Alternative Cutoffs For Classification I am reading Applied Predictive Modeling by Max Kuhn. In chapter 16 he discusses using alternate cutoffs as a remedy for class imbalance. Suppose our model predicts the most likely outcome of 2 events, e1 and e2. We have e1 occurring with a predicted probability 0.52 and e2 with a predicted probability 0.48. Using the standard 0.5 cutoff for e1 we would predict e1, but using an alternative cutoff of 0.56 for e1 we would predict e2, because we only predict e1 when p(e1) > 0.56. My question is, does it make sense to also readjust the probabilities when using alternate cutoffs? For example, in my previous example using the 0.56 cutoff for e1: p(e1) = 0.52; p(e2) = 0.48 Then we apply an adjustment of 0.56 - 0.5 = 0.06. So p_adj(e1) = 0.52 - 0.06 = 0.46; p_adj(e2) = 0.48 + 0.06 = 0.54 Basically we shift the probabilities so that they predict e1 when p_adj(e1) > 0.5. I apologize if there is something obviously flawed with my logic but it feels intuitively wrong to me to predict e2 when p(e1) > p(e2). Which probabilities would be more in line with the real-world probabilities? AI: First of all, you cannot always consider what a machine learning algorithm outputs as a "probability". Logistic regression outputs a sigmoid activation on a (0, 1) scale, but that doesn't magically make it so! We simply often scale things to a (0, 1) scale in ML as a measure of confidence. Also in your example, if the events are mutually exclusive (like classification), just think of them as "event 1" and "NOT event 1". Something like p(e1) + p(~e1) = 1. So when your book tells you to lower the threshold, it is simply saying that you require a smaller level of confidence to choose e1 over e2. This doesn't mean you are choosing the one with smaller likelihood; you are simply making a conscious choice to adjust your precision-recall curve. There are other ways to combat class imbalance, but changing the threshold to be more sensitive to any indication of confidence of one class over another is certainly a way to do that.
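To make the mechanics concrete, here is a tiny sketch (Python, with a made-up probability) showing that moving the cutoff changes only the decision rule, not the reported probability:

p_e1 = 0.52  # model's predicted probability for event 1

def predict(p_e1, cutoff=0.5):
    # The probability is reported as-is; only the decision rule changes
    return "e1" if p_e1 > cutoff else "e2"

print(predict(p_e1, cutoff=0.50))  # e1
print(predict(p_e1, cutoff=0.56))  # e2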
H: Machine learning for state-based transforms? If I provide: A list of possible transforms, and, A list of input states, and, A corresponding list of output states for each input state, and, A fitness function to score each output state Which subset of machine learning can direct me towards an optimization algorithm that can map each input state to a dictionary of input states, and, failing to find a match, apply the necessary transforms to get me to the closest-related output state? An example involving polygon legalization: Any given "window" can contain N different polygons, where each polygon has lower-left and upper-right co-ordinates, as well as a polygon "type". The input state of the polygons may or may not be "illegal". A list of transforms includes: move, copy, rotate, resize If the input state maps directly to any output state, the input state is decided to be legal. Nothing more to be done; move on to the next window. If the input state matches any previously seen input state, transform to the matching (known-legal) output state. Nothing more to be done; move on to the next window. Attempt transforms in different sequences until a state is reached that satisfies a fitness function. Store this input:output state combination. Move on to the next window. Would this imply some combination of neural networking (for classification) and genetic/evolutionary algorithms? Or, does the presence of a fitness function negate the need to store combinations of input:output states? AI: If I get it correctly: You have an input polygon. As a first step you want to "match" that against a list of previously seen templates. If this is successful, you pick its corresponding output and move on. If not, you wish to find some optimal transformation, in order for it to satisfy some constraints that you have (your "objective function"). Then add the original+transformed shape to the templates list and move on. Is this correct? I'll risk an answer anyways: For the first part, I believe that there is a slew of literature out there. It's not my expertise, but the first thing that comes to mind is measuring the distance in feature space between your shape and each template, and picking the closest one, if the distance is below a threshold that you set. "Feature" here would be either some low-level polygon property, e.g. x and y coordinates of vertices, or an abstraction, e.g. perimeter, area, no. vertices, mean side length/side length variance, etc. For the second part, it really depends on the nature of your constraints/objective functions. Are they convex? Uni- or multi-modal? Single or multi-objective? Do you want to incorporate some domain knowledge (i.e. knowledge about what "good" transformations would be)? One can really not tell without further details. Evolutionary algorithms are quite versatile but expensive methods (although some argue on that). If you can spare the possibly large amount of function evaluations, you could try EAs as a first step, and then refine your approach. Finally, while not exactly related to what you describe in your process, I believe you may benefit by taking a look into auto-associative networks (and models in general); these are models that are able to perform constraint-satisfaction on their input, effectively enforcing learned relationships on input values. I could see this being used in your case by inputting a shape, and having a transformed shape as an output, which would be "legal", i.e. satisfying the constraints learned by the auto-associative model.
Thus, you would eliminate the need for a template matching + optimization altogether.
H: Deriving Confidences from Distribution of Class Probabilities for a Prediction I run into this problem from time to time and have always felt like there should be an obvious answer. I have probabilities for potential classes (from some classifier). I will offer the prediction of the class with the highest probability, however, I would also like to attach a confidence for that prediction. Example: If I have Classes [C1, C2, C3, C4, C5] and my Probabilities are {C1: 50, C2: 12, C3: 13, C4: 12, C5:13} my confidence in predicting C1 should be higher than if I had Probabilities {C1: 50, C2: 45, C3: 2, C4: 1, C5: 2}. Reporting that I predict class C1 with 60% probability isn't the whole story. I should be able to derive a confidence from the distribution of probabilities as well. I am certain there is a known method for solving this but I do not know what it is. EDIT: Taking this to the extreme for clarification: If I had a class C1 with 100% probability (and assuming the classifier had an accurate representation of each class) then I would be extremely confident that C1 was the correct classification. On the other hand if all 5 classes had almost equal probability (Say they are all roughly 20%) than I would be very uncertain claiming that any one was the correct classification. These two extreme cases are more obvious, the challenge is derive a confidence for intermediate examples like the one above. Any suggestions or references would be of great help. Thanks in advance. AI: As @David says, in your initial example, your confidence about C1 is the same in both cases. In your second example, you most certainly are less confident about the most-probable class in the second case, since the most-probable class is far less probable! You may have to unpack what you're getting at when you say 'confidence' then, since here you're not using it as a term of art but an English word. I suspect you may be looking for the idea of entropy, or uncertainty present in the distribution of all class probabilities. In your first example, it is indeed lower in the second case than the first. I don't think what you're getting at is just a function of the most-probable class, that is.
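For instance, a small sketch (Python, using the probabilities from the question and assuming they sum to 1) that computes Shannon entropy over the full class distribution:

import numpy as np

def entropy(probs):
    # Shannon entropy of the class distribution (natural log); higher = more overall uncertainty
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

print(entropy([0.50, 0.12, 0.13, 0.12, 0.13]))  # ~1.39
print(entropy([0.50, 0.45, 0.02, 0.01, 0.02]))  # ~0.91

Consistent with the point above, the second distribution has lower entropy even though its top two classes are closer together.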
H: What features from sound waves to use for an AI song composer? I am planning on making an AI song composer that would take in a bunch of songs of one instrument, extract musical notes (like ABCDEFG) and certain features from the sound wave, preform machine learning (most likely through recurrent neural networks), and output a sequence of ABCDEFG notes (aka generate its own songs / music). I think that this would be an unsupervised learning problem, but I am not really sure. I figured that I would use recurrent neural networks, but I have a few questions on how to approach this: - What features from the sound wave I should extract so that the output music is melodious? - Is it possible, with recurrent neural networks, to output a vector of sequenced musical notes (ABCDEF)? - Any smart way I can feed in the features of the soundwaves as well as sequence of musical notes? AI: First off, ignore the haters. I started working on ML in Music a long time ago and got several degrees using that work. When I started I was asking people the same kind of questions you are. It is a fascinating field and there is always room for someone new. We all have to start somewhere. The areas of study you are inquiring about are Music Information Retrieval (Wiki Link) and Computer Music (Wiki Link) . You have made a good choice in narrowing your problem to a single instrument (monophonic music) as polyphonic music increases the difficulty greatly. You're trying to solve two problems really: 1) Automatic Transcription of Monophonic Music (More Readings) which is the problem of extracting the notes from a single instrument musical piece. 2) Algorithmic Composition (More Readings) which is the problem of generating new music using a corpus of transcribed music. To answer your questions directly: I think that this would be an unsupervised learning problem, but I am not really sure. Since there are two learning problems here there are two answers. For the Automatic Transcription you will probably want to follow a supervised learning approach, where your classification are the notes you are trying to extract. For the Algorithmic Composition problem it can actually go either way. Some reading in both areas will clear this up a lot. What features from the sound wave I should extract so that the output music is melodious? There are a lot of features used commonly in MIR. @abhnj listed MFCC's in his answer but there are a lot more. Feature analysis in MIR takes place in several domains and there are features for each. Some Domains are: The Frequency Domain (these are the values we hear played through a speaker) The Spectral Domain (This domain is calculated via the Fourier function (Read about the Fast Fourier Transform) and can be transformed using several functions (Magnitude, Power, Log Magnitude, Log Power) The Peak Domain (A domain of amplitude and spectral peaks over the spectral domain) The Harmonic Domain One of the first problems you will face is how to segment or "cut up" your music signal so that you can extract features. This is the problem of Segmentation (Some Readings) which is complex in itself. Once you have cut your sound source up you can apply various functions to your segments before extracting features from them. Some of these functions (called window functions) are the: Rectangular, Hamming, Hann, Bartlett, Triangular, Bartlett_hann, Blackman, and Blackman_harris. Once you have your segments cut from your domain you can then extract features to represent those segments. 
Some of these will depend on the domain you selected. A few examples of features are: your normal statistical features (Mean, Variance, Skewness, etc.), ZCR, RMS, Spectral Centroid, Spectral Irregularity, Spectral Flatness, Spectral Tonality, Spectral Crest, Spectral Slope, Spectral Rolloff, Spectral Loudness, Spectral Pitch, Harmonic Odd Even Ratio, MFCCs and the Bark Scale. There are many more but these are some good basics. Is it possible, with recurrent neural networks, to output a vector of sequenced musical notes (ABCDEF)? Yes it is. There have been several works to do this already. (Here are several readings) Any smart way I can feed in the features of the soundwaves as well as sequence of musical notes? The standard method is to use the approach I described above (Domain, Segment, Feature Extract), etc. To save yourself some work I highly recommend starting with a MIR framework such as MARSYAS (Marsyas). They will provide you with all the basics of feature extraction. There are many frameworks, so just find one that uses a language you are comfortable in.
H: How to calculate most frequent value combinations I have the following CSV data: shot_id,round_id,hole,shotType,clubType,desiredShape,lineDirection,shotQuality,note 48,2,1,tee,driver,straight,straight,good, 49,2,1,approach,iron,straight,right,bad, 50,2,1,approach,wedge,straight,straight,bad, 51,2,1,approach,wedge,straight,straight,bad, 52,2,1,putt,putter,straight,straight,good, 53,2,1,putt,putter,straight,straight,good, 54,2,2,tee,driver,draw,straight,good, 55,2,2,approach,iron,draw,straight,good, 56,2,2,putt,putter,straight,straight,good, 57,2,2,putt,putter,straight,straight,good, 58,2,3,tee,driver,draw,straight,good, 59,2,3,approach,iron,straight,right,good, 60,2,3,chip,wedge,straight,straight,good, 61,2,3,putt,putter,straight,straight,good, 62,2,4,tee,iron,straight,straight,good, 63,2,4,putt,putter,straight,straight,good, 64,2,4,putt,putter,straight,straight,good, 65,2,5,tee,driver,straight,left,good, 66,2,5,approach,wedge,straight,straight,good, 67,2,5,putt,putter,straight,straight,bad, 68,2,5,putt,putter,straight,straight,good, 69,2,6,tee,driver,draw,straight,bad, 70,2,6,approach,hybrid,draw,straight,good, 71,2,6,putt,putter,straight,straight,good, 72,2,6,putt,putter,straight,straight,good, 73,2,7,tee,driver,straight,straight,good, 74,2,7,approach,wood,fade,straight,good, 75,2,7,approach,wedge,straight,straight,bad,long 76,2,7,putt,putter,straight,straight,good, 77,2,7,putt,putter,straight,straight,good, 78,2,8,tee,iron,straight,right,bad, 79,2,8,approach,wedge,straight,straight,good, 80,2,8,putt,putter,straight,straight,bad, 81,2,9,tee,driver,straight,straight,good, 82,2,9,approach,iron,straight,straight,good, 83,2,9,approach,wedge,straight,straight,bad, 84,2,9,putt,putter,straight,straight,good, 85,2,9,putt,putter,straight,straight,good, 86,2,10,tee,driver,straight,left,good, 87,2,10,approach,iron,straight,left,good, 88,2,10,chip,wedge,straight,straight,good, 89,2,10,putt,putter,straight,straight,good, 90,2,10,putt,putter,straight,straight,good, 91,2,11,tee,driver,draw,straight,good, 92,2,11,approach,iron,draw,straight,good, 93,2,11,putt,putter,straight,straight,good, 94,2,11,putt,putter,straight,straight,good, 95,2,12,tee,iron,draw,straight,good, 96,2,12,putt,putter,straight,straight,good, 97,2,12,putt,putter,straight,straight,good, 98,2,13,tee,driver,draw,straight,good, 99,2,13,approach,wood,straight,straight,bad,topped 100,2,13,putt,putter,straight,straight,good, 101,2,13,putt,putter,straight,straight,good, 102,2,14,tee,driver,draw,straight,good, 103,2,14,approach,wood,straight,straight,bad, 104,2,14,approach,iron,draw,straight,good, 105,2,14,approach,wedge,straight,straight,bad, 106,2,14,putt,putter,straight,straight,bad, 107,2,14,putt,putter,straight,straight,good, 108,2,15,tee,iron,draw,right,bad, 109,2,15,approach,wedge,straight,straight,good, 110,2,15,putt,putter,straight,straight,good, 111,2,15,putt,putter,straight,straight,good, 112,2,16,tee,driver,draw,right,good, 113,2,16,approach,iron,straight,left,bad, 114,2,16,approach,wedge,straight,left,bad, 115,2,16,putt,putter,straight,straight,good, 116,2,17,tee,driver,straight,straight,good, 117,2,17,approach,wood,straight,right,bad, 118,2,17,approach,wedge,straight,straight,good, 119,2,17,putt,putter,straight,straight,good, 120,2,17,putt,putter,straight,straight,good, 121,2,18,tee,driver,fade,right,bad, 122,2,18,approach,wedge,straight,straight,good, 123,2,18,approach,wedge,straight,straight,good, 124,2,18,putt,putter,straight,straight,good, 125,2,18,putt,putter,straight,straight,good, And I would like to be able to identify 
which combinations of values are the most frequently occurring. club types: driver, wood, iron, wedge, putter Shot types: tee, approach, chip, putt line directions: left, center, right shot qualities: good, bad, neutral Where ideally I'd be able to identify a sweet spot (no pun intended) combination: "driver" + "tee" + "straight" + "good" I intend only to measure this for a static dataset, not for any future values or prediction. So, my thought is that this is probably a clustering / k-means problem. Is that correct? If so, how would I begin doing a K-Mean analysis with these types of values in R? If it isn't a kmeans problem, then what is it? AI: If I understand your question you want to know which combination is most frequent or how frequent a combination is relative to others. This is a static method that will determine the unique combinations in total (i.e., combinations of all five columns). The plyr package has a nifty utility for grouping unique combinations of columns in a data.frame. We can specify the names of the columns we want to group by, and then specify a function to perform for each of those combinations. In this case, we specify the columns associated with your golf shot qualities and use the function nrow which will count the number of rows in every subset of the large data.frame for which the columns are the identical. # You need this library for the ddply() function require(plyr) # These are the columns that determine a unique situation (change this if you need) qualities <- c("shotType","clubType","desiredShape","lineDirection","shotQuality") # The call to ddply() actually gives us what we want, which is the number # of times that combination is present in the dataset countedCombos <- ddply(golf,qualities,nrow) # To be nice, let's give that newly added column a meaningful name names(countedCombos) <- c(qualities,"count") # Finally, you probably want to order it (decreasing, in this case) countedCombos <- countedCombos[with(countedCombos, order(-count)),] Now check out your product. The final column has the count associated with each unique combination of columns you provided to ddply: head(countedCombos) shotType clubType desiredShape lineDirection shotQuality count 16 putt putter straight straight good 30 10 approach wedge straight straight good 6 9 approach wedge straight straight bad 5 19 tee driver draw straight good 5 22 tee driver straight straight good 4 2 approach iron draw straight good 3 To see the results for a particular cross-section (say, for example, the driver clubType): countedCombos[which(countedCombos$clubType=="driver"),] shotType clubType desiredShape lineDirection shotQuality count 19 tee driver draw straight good 5 22 tee driver straight straight good 4 21 tee driver straight left good 2 17 tee driver draw right good 1 18 tee driver draw straight bad 1 20 tee driver fade right bad 1 As a bonus, you can dig into these results with ddply again. For example, if you wanted to look at the ratio of "good" to "bad" shotQuality based on shotType and clubType: shotPerformance <- ddply(countedCombos,c("shotType","clubType"), function(x){ total<- length(x$shotQuality) good <- length(which(x$shotQuality=="good")) bad <- length(which(x$shotQuality=="bad")) c(total,good,bad,good/(good+bad)) } ) names(shotPerformance)<-c("count","shotType","clubType","good","bad","goodPct") This gives you a new breakdown of some math performed on the counts of a character field (shotQuality) and shows you how you can build custom functions for ddply. 
Of course, you can still order these whichever way you want, too. head(shotPerformance) shotType clubType total good bad goodPct 1 approach hybrid 1 1 0 1.0000000 2 approach iron 6 4 2 0.6666667 3 approach wedge 3 1 2 0.3333333 4 approach wood 3 1 2 0.3333333 5 chip wedge 1 1 0 1.0000000 6 putt putter 2 1 1 0.5000000
H: Markov Chains: How many steps to conclude a transition matrix? I have just learned Markov Chains, which I am using to model a real world problem. The model comprises 3 states [a b c]. For now I am collecting data and calculating transitional probabilities: T[a][b] = #transitions from a to b / #total transitions to a However I am stuck at determining the correct transition matrix. As I am getting more data, the matrix is changing drastically. So when do I finalize the transition matrix? Does that mean that my data is too random and cannot be modelled, or am I making some mistake here? AI: I expect you have, or can make, a matrix of transition counts. Consider the data in each row to be draws from a multinomial distribution. Then you should be able to use sample size calculations for the multinomial to get off the ground. It is also possible that your data is not well described by a simple Markov chain. There are some available techniques for this, e.g. multistate modelling, but which may or may not fit your particular problem.
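One practical way to see whether you have enough data is to re-estimate the matrix as observations accumulate and stop once successive estimates stop moving. A rough sketch (Python, with a made-up transition list; note it row-normalizes by the number of transitions out of each state, which is the usual convention):

import numpy as np

states = ['a', 'b', 'c']
idx = {s: i for i, s in enumerate(states)}

def transition_matrix(transitions):
    # Row-normalized counts: P[i, j] = #(i -> j) / #(transitions out of i)
    counts = np.zeros((3, 3))
    for s, t in transitions:
        counts[idx[s], idx[t]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Compare the estimate on the first half of the data with the full-data estimate;
# a small maximum absolute difference suggests the estimate has stabilized.
data = [('a', 'b'), ('b', 'c'), ('c', 'a'), ('a', 'b'), ('b', 'a'), ('a', 'c')] * 50
half, full = transition_matrix(data[:len(data) // 2]), transition_matrix(data)
print(np.abs(half - full).max())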
H: How to extract a column that has the highest value within row in Hive? I have a table, more or less in the following format col1 col2 col3 ... col100 val1 val2 val3 ... val100 Where val* are doubles. Is there a way to extract for each row in which column is the highest value within row in Hive? For example, for table like col1 col2 col3 2 4 5 8 1 2 I would get col3 col1 AI: I can't test in hive, but a possible SQL query is as follows (greatest returns the maximum value from the list): select case when col1 = greatest(col1,col2,col3) then 'col1' when col2 = greatest(col1,col2,col3) then 'col2' when col3 = greatest(col1,col2,col3) then 'col3' end as c1 from test; Additional note: you should check, how ties are to be handled, in my solution I simple take the first column.
H: Are there any interesting application of linear regression Linear regression is a widely used ML algorithm. So far I have only encountered 'boring' applications of it. (e.g predict sales for next quarter, predict housing prices for next year , predict population of a country by 2020 etc.) What are some interesting/cool application of linear regression? Finding cool applications greatly helps in motivating myself to learn, hence the quest. Cool as in: Application to stock trading, application to video games, application to astronomy, application to sports betting, application to airfare prediction AI: The way that you have phrased this question makes it tough for people to answer without first offering you some background on linear regression (LR). Its great that you are interested in learning some ML and LR is a great place to start. Linear regression is really just finding a line (or plain or hyperplain) that maps a relationship between two or more variables or features. The important requirement is that the target variable be a continuous numerical value. So its less about finding a problem for which linear regression "works" and more about finding some data that interests you and playing with it using linear regression. I suggest you download some open data in the realms that you have mentioned. There is plenty of open data in every topic that you mentioned that contain continuous numerical values that you can predict using other features of the data. stock trading video games astronomy sports betting flight time prediction Also think about taking some sort of online MOOC as this will help you gain some footing in the subject. Andrew Ng's Coursera on Machine Learning is highly recommended as a starting point and include some linear regression during the first portion and scikit-learn is a great Python based library.
H: Shall I use the Euclidean Distance or the Cosine Similarity to compute the semantic similarity of two words? I want to compute the semantic similarity of two words using their vector representations (obtained using e.g. word2vec, GloVe, etc.). Shall I use the Euclidean Distance or the Cosine Similarity? The GloVe website mentions both measures without telling the pros and cons of each: The Euclidean distance (or cosine similarity) between two word vectors provides an effective method for measuring the linguistic or semantic similarity of the corresponding words. AI: First of all, if GloVe gives you normalized unit vectors, then the two calculations are equivalent. In general, I would use the cosine similarity since it removes the effect of document length. For example, a postcard and a full-length book may be about the same topic, but will likely be quite far apart in pure "term frequency" space using the Euclidean distance. They will be right on top of each other in cosine similarity.
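A quick numerical check of the first point (Python/numpy, with random stand-ins for the two word vectors):

import numpy as np

v1, v2 = np.random.randn(300), np.random.randn(300)       # stand-ins for two word vectors
u1, u2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)  # unit-normalize

cosine = u1 @ u2
euclidean = np.linalg.norm(u1 - u2)
# For unit vectors: ||u1 - u2||^2 = 2 - 2*cos, so the two measures agree on ranking
print(np.isclose(euclidean ** 2, 2 - 2 * cosine))  # True

So once the vectors are unit-normalized, ranking neighbours by cosine similarity or by Euclidean distance gives the same ordering.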
H: Combining Datasets with Different Features I have multiple datasets, with slightly differing features. What tools can I use to make this a homogeneous dataset? Dataset1: featureA,featureB,featureC 1,7,3 4,8,4 Dataset2: featureA,featureC,featureD,featureE 3,4,5,6 9,8,4,6 Homogeneous Dataset featureA,featureB,featureC,featureD,featureE 1,7,3,, 4,8,4,, 3,,4,5,6 9,,8,4,6 AI: You can use R to do that. The smartbind function is the perfect way to combine datasets in the way you are asking for: library(gtools) d1 <- as.data.frame(rbind(c(1,7,3), c(4,8,4))) names(d1) <- c("featureA","featureB","featureC") d2 <- as.data.frame(rbind(c(3,4,5,6), c(9,8,4,6))) names(d2) <- c("featureA","featureC","featureD","featureE") d3 <- smartbind(d1, d2)
H: Newton-Raphson or EM Algorithm in Python Is there any implementation of Newton-Raphson or EM Algorithm? Can I get the source code of it? I tried googling, but didn't come across any. So asking here. Thanks! AI: scikit learn has the EM algorithm here. Source code is available. And if you are an R fan the mclust package is available here.
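For reference, a minimal sketch of the scikit-learn route (in current versions the EM-based mixture model is sklearn.mixture.GaussianMixture; the toy data here is just for illustration):

import numpy as np
from sklearn.mixture import GaussianMixture

# Two Gaussian blobs as toy data
X = np.vstack([np.random.randn(100, 2) + 3, np.random.randn(100, 2) - 3])

gmm = GaussianMixture(n_components=2, max_iter=100)  # fitted via the EM algorithm
gmm.fit(X)
print(gmm.means_)
print(gmm.predict(X[:5]))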
H: Normalized Euclidean Distance versus cross correlation? Normalized Euclidean Distance and Normalized Cross - Correlation can both be used as a metric of distance between vectors. What is the difference between these metrics? It seems to me that they are the same, although I have not seen this explicitly stated in any textbook or literature. thank you. AI: These two metrics are not the same. The normalized Euclidean distance is the distance between two normalized vectors that have been normalized to length one. If the vectors are identical then the distance is 0, if the vectors point in opposite directions the distance is 2, and if the vectors are orthogonal (perpendicular) the distance is sqrt(2). It is a positive definite scalar value between 0 and 2. The normalized cross-correlation is the dot product between the two normalized vectors. If the vectors are identical, then the correlation is 1, if the vectors point in opposite directions the correlation is -1, and if the vectors are orthogonal (perpendicular) the correlation is 0. It is a scalar value between -1 and 1. This all comes with the understanding that in time-series analysis the cross-correlation is a measure of similarity of two series as a function of the lag of one relative to the other.
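The two are related, though: for unit-length vectors the squared distance and the correlation satisfy $d^2 = 2(1 - r)$, which a small sketch (Python/numpy) confirms:

import numpy as np

x, y = np.random.randn(64), np.random.randn(64)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)  # normalize to length one

r = x @ y                      # normalized cross-correlation (zero lag)
d = np.linalg.norm(x - y)      # normalized Euclidean distance
print(np.isclose(d ** 2, 2 * (1 - r)))  # True

So while the values live on different scales, the two measures order pairs of normalized vectors in exactly the same way (at zero lag).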
H: Non-parametric approach to healthcare dataset? I have a healthcare dataset. I have been told to look at a non-parametric approach to solve certain questions related to the dataset. I am a little bit confused about the non-parametric approach. Do they mean a density-plot-based approach (such as looking at the histogram)? I know this is a vague question to ask here. However, I don't have access to anybody else whom I can ask and hence I am asking for some input from others in this forum. Any response/thought would be appreciated. Thanks and regards. AI: They are not specifically referring to a plot based approach. They are referring to a class of methods that must be employed when the data is not normal enough or not well-powered enough to use regular statistics. Parametric and nonparametric are two broad classifications of statistical procedures with loose definitions separating them: Parametric tests usually assume that the data are approximately normally distributed. Nonparametric tests do not rely on a normally distributed data assumption. Using parametric statistics on non-normal data could lead to incorrect results. If you are not sure that your data is normal enough or that your sample size is big enough (n < 30), use nonparametric procedures rather than parametric procedures. Nonparametric procedures generally have less power for the same sample size than the corresponding parametric procedure if the data truly are normal. Take a look at some examples of parametric and analogous nonparametric tests from Tanya Hoskin's Demystifying summary. Here are some summary references: Another general table with some different information Nonparametric Statistics All of Nonparametric Statistics, by Larry Wasserman R tutorial Nonparametric Econometrics with Python
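As a concrete illustration, here is a hedged sketch (Python/scipy, on made-up skewed data) comparing a parametric t-test with its nonparametric analogue, the Mann-Whitney U test:

import numpy as np
from scipy import stats

# Skewed, non-normal toy data for two groups (e.g. length of stay in days)
group_a = np.random.exponential(scale=4.0, size=25)
group_b = np.random.exponential(scale=6.0, size=25)

t_stat, t_p = stats.ttest_ind(group_a, group_b)       # parametric: assumes normality
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)    # nonparametric: rank-based
print(t_p, u_p)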
H: How to perform step calculations in R I have a rate-step plan for consumption with which I'm trying to total up costs. I have a rate plan that looks like this: First 20 Kwh is Free; Second 5 Kwh is $.1 Next 100 Kwh is $.053 Next 875 Kwh is $.042 Over 1000Kwh is $.039 Can someone provide a worked example for me, e.g., with $\text{consumption}=625$? If possible, with R code for calculating the total cost. For my own approach, I was thinking of using a loop, but if it's a lot of accounts the loop will take too long. Can anyone think of a vector-based approach for calculating the result? AI: Try a simple equation like this: \begin{eqnarray} &&\max(x-20,0)*0.1 + \max(0,x-25)*(-.047) + \\ &&\max(0, x-125)*(-.011) + \max(0,x-1000)*(-.003) \end{eqnarray} This should take the usage $(x)$ and walk it through the steps, initially charging 0.1 for each Kwh over 20: $\max(x-20,0)*0.1$ Then, if $x$ is above 25, it takes away 0.047 (0.1-0.053) for every Kwh over 25: $\max(0, x-25)*(-0.047)$ Then, if $x$ is above 125, it takes away 0.011 (0.053-0.042) for every Kwh over 125: $\max(0, x-125)*(-0.011)$ Finally, if $x$ is above 1000, it takes away 0.003 (0.042-0.039) for every Kwh over 1000: $\max(0, x-1000)*(-0.003)$
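Worked through for the requested consumption of $x = 625$: $\max(625-20,0)*0.1 = 60.5$; $\max(0,625-25)*(-0.047) = -28.2$; $\max(0,625-125)*(-0.011) = -5.5$; $\max(0,625-1000)*(-0.003) = 0$; total $= 60.5 - 28.2 - 5.5 + 0 = 26.8$. Checking against the plan directly: 20 Kwh free, 5 Kwh at 0.1 = 0.5, 100 Kwh at 0.053 = 5.3, and the remaining 500 Kwh at 0.042 = 21.0, again 26.8 in total. Because the expression uses only max() and arithmetic, it can be applied element-wise to an entire vector of consumption values at once, so no loop over accounts is needed.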
H: What does the Ip mean in the Bayesian Ridge Regression formula? From http://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression, the Bayesian ridge prior is given as: $p(w|\lambda) = \mathcal{N}(w|0,\lambda^{-1}{I_{p}})$ There is a variable $I_p$, but it is not explained what $I_p$ refers to. Also, the variable $\mathcal{N}$ is unexplained; I'm not sure whether I've guessed correctly, but is that the Gaussian prior described in the Bayesian regression section above the Bayesian Ridge one? AI: $\mathcal{N}$ does indeed denote a (multivariate) normal / Gaussian distribution. $I_p$ is just an identity matrix of dimension $p$. So this is a matrix with $\lambda^{-1}$ along the diagonal. Read this as the covariance matrix, so this is a spherical Gaussian (0 covariance between different dimensions) where each variable has variance $\lambda^{-1}$.
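Concretely, for $p = 3$ the prior covariance is $$\lambda^{-1} I_3 = \begin{pmatrix} \lambda^{-1} & 0 & 0 \\ 0 & \lambda^{-1} & 0 \\ 0 & 0 & \lambda^{-1} \end{pmatrix},$$ i.e. each of the 3 weight dimensions gets the same prior variance $\lambda^{-1}$ and no correlation with the others.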
H: Identifying top predictors from a mix of categorical and ordinal data I have a dataset with 261 predictors scraped from a larger set of survey questions. 224 have values which are in a range of scale (some 1-10, some 1-4, some simply binary, all using 0 where no value is given), and the rest are unordered categories. I'm trying to perform classification using these predictors and identify the top n predictors. Am thinking of the following approach: convert the 224 ordered predictors into numeric, centered, and scaled. Run separate modeling (I use caret from R): one for using the numeric predictors, another using the remaining 37 categorical predictors (both cross-validated within each modeling exercise). Choose the respective best-fitting models modelN and modelC for the numeric and categorical predictors. Choose top n (say 10) predictors from model N and model C. Combine them in an ensemble model that can handle both numeric and categorical data (say, random forest). Choose top n predictors in the ensemble model. I am going through this a roundabout way rather than directly fitting all predictors into an ensemble model to try and reduce the complexity of the problem first (and because in R, I'm having a problem with too many levels from the predictors). Would this be a valid approach to identifying the n most salient predictors? Any possible issues to mitigate? AI: Ricky, Loose thoughts: Depending on the algorithm you intend to use, centering might not be a good idea (e.g. if you go for SVM, centering will destroy sparsity) I would suggest not to handle ordered / unordered separately, as you are likely to miss interactions that way. If the categorical ones don't have too many possible values, randomForest in R can handle factors. if that is an issue (as you seem to hint), I think you have two possibilities: binary indicators or response rates if it's feasible in terms of computational cost, i would convert all factors to binaries (use sparse matrices if necessary) and then try a greedy feature selection. caret, if memory serves, has rfe or somesuch. if that's too much trouble, try calculating response rates / average values per factor level (I don't see any info whether your problem is classification or regression): you split your set into folds, and then for each fold fit a mixed effects model (e.g. via lme4) on the remainder, using the factor of interest as the main variable. It's a bit of a pain to setup all the cv correctly, but it's the only way to avoid leaking information. Hope this helps, K
H: Is our data "Big Data" (Startup) I worked at a startup/medium sized company and I am concerned that we may be over-engineering one of our products. In essence, we will be consuming real-time coordinates from vehicles and users and performing analytics and machine learning on this incoming data. This processing can be rather intensive as we try predict the ETAs of this entities matched to historical data and static paths. The approach they want to take is using the latest and most powerful technology stack, that being Hadoop, Storm etc to process these coordinates. Problem is that no-one in the team has implemented such a system and only has had the last month or so to skill up on it. My belief is that a safer approach would be to use NoSQL storage such as "Azure Table Storage" in an event based system to achieve the same result in less time. To me it's the agile approach, as this is a system that we are familiar with. Then if the demand warrants it, we can look at implementing Hadoop in the future. I haven't done a significant amount of research in this field, so would appreciate your input. Questions: How many tracking entities (sending coordinates every 10 seconds) would warrant Hadoop? Would it be easy to initially start off with a simpler approach such as "Azure Table Storage" then onto Hadoop at a later point? If you had to estimate, how long would you say a team of 3 developers would take to implement a basic Hadoop/Storm system? Is Hadoop necessary to invest from the get go as we will quickly incur major costs? I know these are vague questions, but I want to make sure we aren't going to invest unnecessary resources with a deadline coming up. AI: Yes, this is a how-long-is-a-piece-of-string question. I think it's good to beware of over-engineering, while also making sure you engineer for where you think you'll be in a year. First I'd suggest you distinguish between processing and storage. Storm is a (stream) processing framework; NoSQL databases are a storage paradigm. These are not alternatives. The Hadoop ecosystem has HBase for NoSQL; I suspect Azure has some kind of stream processing story. The bigger difference in your two alternatives is consuming a cloud provider's ecosystem vs Hadoop. The upside to Azure, or AWS, or GCE, is that these services optimize for integrating with each other, with billing, machine management, etc. The downside is being locked in to the cloud provider; you can't run Azure stuff anywhere but Azure. Hadoop takes more work to integrate since it's really a confederation of sometimes loosely-related projects. You're investing in both a distribution, and a place to run that distribution. But, you get a lot less lock-in, and probably more easy access to talent, and a broader choice of tools. The Azure road is also a "big data" solution in that it has a lot of the scalability properties you want for big data, and the complexity as well. It does not strike me as an easier route. Do you need to invest in distributed/cloud anything at this scale? given your IoT-themed use case, I believe you will need to soon, if not now, so yes. You're not talking about gigabytes, but many terabytes in just the first year. I'd give a fresh team 6-12 months to fully productionize something based on either of these platforms. That can certainly be staged as a POC, followed by more elaborate engineering.
H: What makes a graph algorithm a good candidate for concurrency? GraphX is the Apache Spark library for handling graph data. I was able to find a list of 'graph-parallel' algorithms on these slides (see slide 23). However, I am curious what characteristics of these algorithms make them parallelizable. AI: Two words: associative and commutative In other words, the operations that the algorithm does need to be independent of how you order or group your data...this minimizes the need for cross-talk in the algorithm and leads to more efficiency.
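A tiny illustration of the idea (Python): with an associative and commutative operation like addition, per-partition results can be combined in any grouping and still match the global answer, while an order-dependent operation breaks:

from functools import reduce

data = [3, 1, 4, 1, 5, 9]
part1, part2 = data[:3], data[3:]

# Addition: combining per-partition results equals the global result
print(sum(part1) + sum(part2) == sum(data))  # True

# Subtraction is not associative, so the same trick breaks
sub = lambda a, b: a - b
print(reduce(sub, part1) + reduce(sub, part2) == reduce(sub, data))  # False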
H: Alignment of square nonorientable images/data Another post where I don't know enough terminology to describe things efficiently. For the comments, please suggest some tags and keywords I can add to this post to make it better. Say I have a 2D data structure where 'orientation' doesn't matter. The examples I ran into: The state of a 2048 game. In terms of symmetry groups this would be D4 / D8, except that an operation doesn't yield an identical state, it just yields another state that has the same solution. Images of plankton or galaxies (without background). Somewhat similar to above except that any rotation (not just 90o) yields an equally valid image (and one might take scale into account, but let's forget about that). In both cases I've wanted to transform all these equivalent states/images to remove all but one of the equivalent images. To illustrate with two that worked: I can use image moments M10 and M01 to transform horizontally and vertically mirrored equivalent data. E.g. apply horizontal mirroring iff it makes M10 bigger. This would transform a 2048 state and it's horizontal mirror image to the same state. I can use the eigenvector of the covariance matrix which has the largest eigenvalue as the orientation. Then I can rotate the image to align this eigenvector with some predetermined axis (e.g. horizontally). That still leaves a lot of operations though (diagonal mirroring, rotations around the center, inversion). And these operations do not commute (D8 is non-Abelian). Is there any comprehensive approach? The reason I want to do this is to help machine learning methods by removing variance that isn't actually meaningful. Hopefully that makes sure they don't have to learn these equivalences, so possibly need less train data (and time). AI: Fun with Group Theory! There are only 8 unique rotation-inversion operations for a square matrix. The four rotation operators are (0,90,180,270). Further rotation or rotation in the reverse direction is the same as these four. Two successive rotations just yields one of the rotation operators, so we will only consider these four rotations applied one time. The five inversion operators are (0,/,\,|,-). Two successive inversions just yields a rotation, so we will only allow for a single inversion. We can thus derive all operators by combining these two vectors, which yields 4*5=20 possible states. (0,90,180,270,0/,90/,180/,270/,0\,90\,180\,270\,0|,90|,180|,270|,0-,90-,180-,270-) But there is still symmetry to be exploited in the inversion operators. You can probably intuit that the 4x4 matrix only has 8 final states: inverted or not and rotated by (0,90,180,270). It turns out that you can arrive at any possible state involving an inversion using any of the other inversion operators followed by one of the rotations. So we only need to retain a single inversion operator! So the final set of 8 orthogonal operations are: (0,90,180,270,0|,90|,180|,270|) If there is any symmetry in the matrix's members then some of the resulting states may be degenerate. In terms of mapping possible states into a ground state, it makes sense to apply a set of successive deterministic rules to determine the ground state orientation. I suggest finding the largest corner square and locating it in the lower right corner. If there are multiple candidates with equally large values in the corner square then use the next closest square as a tie breaker. There are 16 squares, so you can eventually break all ties or declare degeneracy. 
There is one remaining \ operation that you can decide to apply in order to locate the larger of the two squares adjacent to the lower right corner at the bottom. Again, you can use squares adjacent to these as tie breakers.
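If it helps, here is a brute-force sketch of the same idea (Python/numpy, assuming the state is a square array): generate all 8 rotation/flip images and pick one representative by a fixed rule. For brevity the rule here is simply the lexicographically smallest image rather than the corner-value rule described above, but any deterministic, symmetry-invariant rule works:

import numpy as np

def dihedral_images(board):
    # The 8 symmetries of the square: 4 rotations, each optionally flipped
    rotations = [np.rot90(board, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

def canonical(board):
    # Deterministic choice: lexicographically smallest flattened image
    return min(dihedral_images(board), key=lambda b: tuple(b.flatten()))

b = np.array([[2, 0, 0, 0], [0, 4, 0, 0], [0, 0, 0, 8], [0, 0, 0, 0]])
# Any rotated/mirrored version maps to the same canonical state
print(np.array_equal(canonical(b), canonical(np.rot90(np.fliplr(b)))))  # True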
H: Extra output layer in a neural network (Decimal to binary) I'm working through a question from the online book. I can understand that if the additional output layer is of 5 output neurons, I could probably set bias at 0.5 and weight of 0.5 each for the previous layer. But the question now asks for a new layer of four output neurons - which is more than enough to represent 10 possible outputs at $2^{4}$. Can someone walk me through the steps involved in understanding and solving this problem? The exercise question: There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01. AI: The question is asking you to make the following mapping between the old representation and the new representation:

Digit  Old (one-hot)        New (binary)
0      1 0 0 0 0 0 0 0 0 0  0 0 0 0
1      0 1 0 0 0 0 0 0 0 0  0 0 0 1
2      0 0 1 0 0 0 0 0 0 0  0 0 1 0
3      0 0 0 1 0 0 0 0 0 0  0 0 1 1
4      0 0 0 0 1 0 0 0 0 0  0 1 0 0
5      0 0 0 0 0 1 0 0 0 0  0 1 0 1
6      0 0 0 0 0 0 1 0 0 0  0 1 1 0
7      0 0 0 0 0 0 0 1 0 0  0 1 1 1
8      0 0 0 0 0 0 0 0 1 0  1 0 0 0
9      0 0 0 0 0 0 0 0 0 1  1 0 0 1

Because the old output layer has a simple form, this is quite easy to achieve. Each output neuron should have a positive weight between itself and output neurons which should be on to represent it, and a negative weight between itself and output neurons that should be off. The values should combine to be large enough to cleanly switch on or off, so I would use largish weights, such as +10 and -10. If you have sigmoid activations here, the bias is not that relevant. You just want to simply saturate each neuron towards on or off. The question has allowed you to assume very clear signals in the old output layer. So taking the example of representing a 3 and using zero-indexing for the neurons in the order I am showing them (these options are not set in the question), I might have weights going from activation of old output $i=3$, $A_3^{Old}$, to the logit of new outputs $Z_j^{New}$, where $Z_j^{New} = \Sigma_{i=0}^{i=9} W_{ij} * A_i^{Old}$, as follows: $$W_{3,0} = -10$$ $$W_{3,1} = -10$$ $$W_{3,2} = +10$$ $$W_{3,3} = +10$$ This should clearly produce close to 0 0 1 1 output when only the old output layer's neuron representing a "3" is active. In the question, you can assume 0.99 activation of one neuron and <0.01 for competing ones in the old layer. So, if you use the same magnitude of weights throughout, then relatively small values coming from +-0.1 (0.01 * 10) from the other old layer activation values will not seriously affect the +-9.9 value, and the outputs in the new layer will be saturated at very close to either 0 or 1.
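A small numerical check of this weight scheme (Python/numpy, using the same +-10 weights and sigmoid outputs):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# W[i, j] = +10 if bit j is set in digit i, else -10 (10 old outputs, 4 new outputs)
W = np.array([[10 if (i >> (3 - j)) & 1 else -10 for j in range(4)] for i in range(10)])

old = np.full(10, 0.01)
old[3] = 0.99                      # old layer says "3"
print(np.round(sigmoid(old @ W)))  # [0. 0. 1. 1.]  -> binary 0011 = 3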
H: Do I need to buy a NVIDIA graphic card to run deep learning algorithm? I am new in deep learning. I am running a MacBook Pro yosemite (upgraded from Snowleopard). I don't have a CUDA-enabled card GPU, and running the code on the CPU is extremely slow. I heard that I can buy some instances on AWS, but it seems that they don't support macOS. My question is, to continue with the deep learning, do I need to purchase a graphic card? Or is there other solution? I don't want to spend too much on this... AI: I would recommend familiarizing yourself with AWS spot instances. It's the most practical solution I can think of for your problem, and it works your computer too. So, no you don't have to buy an Nvidia card, but as of today you will want to use one since almost all the solutions rely on them.
H: Contributions of each feature in classification? I have some features and I am using Weka to classify my instances. For example I have: Number of adj number of adverb number of punctuation in my feature set. However, I would like to know the contribution of each feature in the feature set. So what metrics or parameters are helpful to get the contribution of each feature? AI: This is called feature ranking, which is closely related to feature selection. feature ranking = determining the importance of any individual feature feature selection = selecting a subset of relevant features for use in model construction. So if you are able to rank features, you can use it to select features, and if you can select a subset of useful features, you've done at least a partial ranking by removing the useless ones. This Wikipedia page and this Quora post should give some ideas. The distinction filter methods vs. wrapper based methods vs. embedded methods is the most common one. One straightforward approximate way is to use feature importance with forests of trees (a sketch is shown below). Other common ways: recursive feature elimination. stepwise regression (or LARS Lasso). If you use scikit-learn, check out the sklearn.feature_selection module. I'd guess Weka has some similar functions.
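As a hedged sketch of that forest-based ranking in scikit-learn (toy data, with column names standing in for your adjective/adverb/punctuation counts):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                                     # toy columns: n_adj, n_adverb, n_punct
y = (X[:, 0] + 0.1 * rng.randn(200) > 0.5).astype(int)   # label driven mostly by n_adj

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["n_adj", "n_adverb", "n_punct"], clf.feature_importances_):
    print(name, round(imp, 3))                           # n_adj should rank highest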
H: Sequence of numbers as single feature Is it possible to use a sequence of numbers as one feature? For example, using libsvm data format: <label> <index1>:<value1> <index2>:<value2> +1 1:123.02 2:1.23 3:5.45,2.22,6.76 +1 1:120.12 2:2.23 3:4.98,2.55,4.45 -1 1:199.99 2:2.13 3:4.98,2.22,6.98 ... Is there any special machine learning algorithm for this kind of data? AI: 2 solutions: You aggregate each sequence of numbers into a single number, which you use as a feature. There exist plenty of aggregation functions, such as those derived from descriptive statistics: root-mean-square, kurtosis, skewness, max, min, duration, standard deviation, crest factor, mean, or more specific aggregations such as Fourier transforms or wavelet transforms. You use some model that accepts sequences as input. Sequences may be of variable length. Examples of such models: recurrent neural networks, dynamic Bayesian networks.
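A minimal sketch of the first option (Python), turning each variable-length sequence into a fixed set of aggregate features that can sit alongside the other columns:

import numpy as np

def aggregate(seq):
    # Fixed-length summary of a variable-length numeric sequence
    s = np.asarray(seq, dtype=float)
    return [s.mean(), s.std(), s.min(), s.max(), len(s)]

rows = [[5.45, 2.22, 6.76], [4.98, 2.55, 4.45], [4.98, 2.22, 6.98]]
X_extra = np.array([aggregate(r) for r in rows])  # use these columns alongside the other features
print(X_extra)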
H: How to keep a subsetted value for calculating mean I am currently learning R and I have to solve an issue where I have to extract values from a data set which are from a specific month, and from these values I should calculate the mean Temp. I did it like that: data[data$X..Month.. == 6,] mean(data$X..Temp.., na.rm=TRUE) It gave me the mean value but without taking my first statement into consideration. What do I need to do so that both statements are taken into consideration? AI: To get the mean for month 6 directly: mean(data$X..Temp..[data$X..Month.. == 6], na.rm=TRUE) You were nearly there but didn't assign your subset to a variable; if you had: x = data[data$X..Month.. == 6,] mean(x$X..Temp.., na.rm=TRUE) that should work.
H: When to stop calculating values of each cell in the grid in Reinforcement Learning (dynamic programming) applied on gridworld Considering the application of reinforcement learning (the dynamic programming method performing value iteration) on a grid world: in each iteration, I go through each cell of the grid and update its value depending on its present value and the present value of taking an action from that state. Now how long do I keep updating the value of each cell? Shall I keep updating until the change between the previous and the present value function is very small? I am not able to understand how to implement the stopping mechanism in the grid-world scenario (discount not considered). Is the value function the values of all the cells in the grid world? AI: 1 - You should set a threshold (a hyper-parameter) that will allow you to quit the loop. Let V be the values for all states s and V' the new values after one value-iteration sweep. If $\sum_s|V(s) - V'(s)| \le threshold$, quit. 2 - Yes, V is a function over every cell in the grid, because you need to update every cell. Hope it helps.
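A rough sketch of that stopping rule (Python, where the backup function standing in for one full sweep of Bellman updates over the grid is a placeholder):

import numpy as np

def value_iteration(backup, n_states, threshold=1e-4):
    # `backup` is assumed to map a value vector to the updated value vector
    # (one full sweep over every cell of the grid).
    V = np.zeros(n_states)
    while True:
        V_new = backup(V)
        if np.abs(V - V_new).sum() <= threshold:   # total change across all cells
            return V_new
        V = V_new

# Toy backup (a simple contraction) just to exercise the loop; in the grid world
# this would be the Bellman update applied to every cell.
print(value_iteration(lambda V: 0.9 * V + 1.0, n_states=4)[0])  # converges to ~10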
H: Confusion in Policy Iteration and Value iteration in Reinforcement learning in Dynamic Programming What I understood for value iteration while coding is that we need to have a policy fixed. According to that policy the value function of each state will be calculated. Right? But in policy iteration the policy will change from time to time. Am I right? AI: In policy iteration, you define a starting policy and iterate towards the best one, by estimating the state value associated with the policy, and making changes to action choices. So the policy is explicitly stored and tracked on each major step. After each iteration of the policy, you re-calculate the value function for that policy to within a certain precision. That means you also work with value functions that measure actual policies. If you halted the iteration just after the value estimate, you would have a non-optimal policy and the value function for that policy. In value iteration, you implicitly solve for the state values under an ideal policy. There is no need to define an actual policy during the iterations, you can derive it at the end from the values that you calculate. You could if you wish, after any iteration, use the state values to determine what "current" policy is predicted. The values will likely not approximate the value function for that predicted policy, although towards the end they will probably be close.
H: Is it necessary to standardize your data before clustering? Is it necessary to standardize your data before cluster? In the example from scikit learn about DBSCAN, here they do this in the line: X = StandardScaler().fit_transform(X) But I do not understand why it is necessary. After all, clustering does not assume any particular distribution of data - it is an unsupervised learning method so its objective is to explore the data. Why would it be necessary to transform the data? AI: Normalization is not always required, but it rarely hurts. Some examples: K-means: K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance. Example in Matlab: X = [randn(100,2)+ones(100,2);... randn(100,2)-ones(100,2)]; % Introduce denormalization % X(:, 2) = X(:, 2) * 1000 + 500; opts = statset('Display','final'); [idx,ctrs] = kmeans(X,2,... 'Distance','city',... 'Replicates',5,... 'Options',opts); plot(X(idx==1,1),X(idx==1,2),'r.','MarkerSize',12) hold on plot(X(idx==2,1),X(idx==2,2),'b.','MarkerSize',12) plot(ctrs(:,1),ctrs(:,2),'kx',... 'MarkerSize',12,'LineWidth',2) plot(ctrs(:,1),ctrs(:,2),'ko',... 'MarkerSize',12,'LineWidth',2) legend('Cluster 1','Cluster 2','Centroids',... 'Location','NW') title('K-means with normalization') (FYI: How can I detect if my dataset is clustered or unclustered (i.e. forming one single cluster) Distributed clustering: The comparative analysis shows that the distributed clustering results depend on the type of normalization procedure. Artificial neural network (inputs): If the input variables are combined linearly, as in an MLP, then it is rarely strictly necessary to standardize the inputs, at least in theory. The reason is that any rescaling of an input vector can be effectively undone by changing the corresponding weights and biases, leaving you with the exact same outputs as you had before. However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima. Also, weight decay and Bayesian estimation can be done more conveniently with standardized inputs. Artificial neural network (inputs/outputs) Should you do any of these things to your data? The answer is, it depends. Standardizing either input or target variables tends to make the training process better behaved by improving the numerical condition (see ftp://ftp.sas.com/pub/neural/illcond/illcond.html) of the optimization problem and ensuring that various default values involved in initialization and termination are appropriate. Standardizing targets can also affect the objective function. Standardization of cases should be approached with caution because it discards information. If that information is irrelevant, then standardizing cases can be quite helpful. If that information is important, then standardizing cases can be disastrous. Interestingly, changing the measurement units may even lead one to see a very different clustering structure: Kaufman, Leonard, and Peter J. Rousseeuw.. "Finding groups in data: An introduction to cluster analysis." (2005). In some applications, changing the measurement units may even lead one to see a very different clustering structure. For example, the age (in years) and height (in centimeters) of four imaginary people are given in Table 3 and plotted in Figure 3. 
It appears that {A, B} and {C, D} are two well-separated clusters. On the other hand, when height is expressed in feet one obtains Table 4 and Figure 4, where the obvious clusters are now {A, C} and {B, D}. This partition is completely different from the first because each subject has received another companion. (Figure 4 would have been flattened even more if age had been measured in days.) To avoid this dependence on the choice of measurement units, one has the option of standardizing the data. This converts the original measurements to unitless variables. Kaufman et al. continues with some interesting considerations (page 11): From a philosophical point of view, standardization does not really solve the problem. Indeed, the choice of measurement units gives rise to relative weights of the variables. Expressing a variable in smaller units will lead to a larger range for that variable, which will then have a large effect on the resulting structure. On the other hand, by standardizing one attempts to give all variables an equal weight, in the hope of achieving objectivity. As such, it may be used by a practitioner who possesses no prior knowledge. However, it may well be that some variables are intrinsically more important than others in a particular application, and then the assignment of weights should be based on subject-matter knowledge (see, e.g., Abrahamowicz, 1985). On the other hand, there have been attempts to devise clustering techniques that are independent of the scale of the variables (Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is to search for a partition that minimizes the total volume of the convex hulls of the clusters. In principle such a method is invariant with respect to linear transformations of the data, but unfortunately no algorithm exists for its implementation (except for an approximation that is restricted to two dimensions). Therefore, the dilemma of standardization appears unavoidable at present and the programs described in this book leave the choice up to the user.
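For the scikit-learn setting of the question, here is a rough Python analogue of the Matlab illustration above (my own sketch on synthetic data): blowing up the scale of one feature changes what k-means recovers, and standardizing undoes that.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) + 1, rng.randn(100, 2) - 1])
truth = np.repeat([0, 1], 100)

X_denorm = X.copy()
X_denorm[:, 1] = X_denorm[:, 1] * 1000 + 500   # blow up the scale of the second feature

labels_raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_denorm)
labels_std = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X_denorm))

def agreement(labels):
    # account for arbitrary label switching between runs
    return max(np.mean(labels == truth), np.mean(labels != truth))

# On the raw data the huge-variance feature dominates the distances, so the first
# feature is effectively ignored; the standardized run typically recovers the groups better.
print("raw:", agreement(labels_raw), "standardized:", agreement(labels_std))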
H: Value Updation Dynamic Programming Reinforcement learning Regarding Value Iteration of Dynamic Programming(reinforcement learning) in grid world, the value updation of each state is given by: Now Suppose i am in say box (3,2). I can go to (4,2)(up) (3,3)(right) and (1,3)(left) and none of these are my final state so i get a reward of -0.1 for going in each of the states. The present value of all states are 0. The probability of going north is 0.8, and going left/right is 0.1 each. So since going left/right gives me more reward(as reward*probability will be negative) i go left or right. Is this the mechanism. Am I correct? But In the formula there is a summation term given. So I basically cannot understand this formula. Can anyone explain me with an example? AI: The probabilities you describe refer only to the go-north action. It means that if you want to go north, you have 80% chance of actually going north and 20% of going left or right, making the problem more difficult (non-deterministic). This rule applies to every direction. Also, the formula does not tell which action to chose, just how to update the values. In order to select an action, assuming a greedy-policy, you'd select the one with the highest expected value ($V(s')$). The formula says to sum the values for all possible outcomes from the best action. So, supposing go-north is indeed the best action, you have: $$.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + 0) = -.1$$ But let us suppose that you still don't know which is the best action and want to select one greedily. Then you must compute the sum for each possible action (north, south, east, west). Your example has all values set to 0 and the same reward and so is not very interesting. Let's say you have a +1 reward to east (-0.1 for the remaining directions) and that south already has V(s) = 0.5 (0 for the remaining states). Then you compute the value for each action (let $\gamma = 1$, since it is a user-adjusted parameter): North: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (1 + 0) = -.08 - .01 + .1 = .01$ South: $.8 * (-.1 + .5) + .1 * (-.1 + 0) + .1 * (1 + 0) = 0.32 - .01 + .1 = .41$ East: $.8 * (1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = .8 - .01 + .04 = .83$ West: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = -.08 - .01 + .04 = -.05$ So you would update your policy to go East from the current state, and update the current state value to 0.83.
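For what it's worth, here is a short Python snippet (my own sketch) that just reproduces the arithmetic above, with each outcome given as a (probability, reward, value of the next state) triple and gamma = 1.
def action_value(outcomes):
    # outcomes: list of (probability, reward, value_of_next_state)
    return sum(p * (r + v) for p, r, v in outcomes)

north = action_value([(0.8, -0.1, 0.0), (0.1, -0.1, 0.0), (0.1, 1.0, 0.0)])
south = action_value([(0.8, -0.1, 0.5), (0.1, -0.1, 0.0), (0.1, 1.0, 0.0)])
east  = action_value([(0.8,  1.0, 0.0), (0.1, -0.1, 0.0), (0.1, -0.1, 0.5)])
west  = action_value([(0.8, -0.1, 0.0), (0.1, -0.1, 0.0), (0.1, -0.1, 0.5)])
print(round(north, 2), round(south, 2), round(east, 2), round(west, 2))  # 0.01 0.41 0.83 -0.05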
H: How to preprocess different kinds of data (continuous, discrete, categorical) before Decision Tree learning I want to use some Decision Tree learning, such as the Random Forest classifier. I have data of different types: continuous, discrete and categorical. How do I have to preprocess data in order to have consistent results? AI: One of the benefits of decision trees is that ordinal (continuous or discrete) input data does not require any significant preprocessing. In fact, the results should be consistent regardless of any scaling or translational normalization, since the trees can choose equivalent splitting points. The best preprocessing for decision trees is typically whatever is easiest or whatever is best for visualization, as long as it doesn't change the relative order of values within each data dimension. Categorical inputs, which have no sensible order, are a special case. If your random forest implementation doesn't have a built-in way to deal with categorical input, you should probably use a 1-hot encoding: If a categorical value has $n$ categories, you encode the value using $n$ dimensions, one corresponding to each category. For each data point, if it is in category $k$, the corresponding $k$th dimension is set to 1, while the rest are set to 0. This 1-hot encoding allows decision trees to perform category equality tests in one split since inequality splits on non-ordinal data doesn't make much sense.
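As a small illustration (toy data, hypothetical column names), 1-hot encoding a categorical column with pandas before fitting a random forest might look like this; the ordinal column is left untouched.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({"age": [23, 45, 31, 52],
                   "color": ["red", "blue", "red", "green"],
                   "label": [0, 1, 0, 1]})
X = pd.get_dummies(df[["age", "color"]], columns=["color"])  # adds color_blue, color_green, color_red
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, df["label"])
print(X.columns.tolist())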
H: fit model with sd square If I want to fit a nonlinear regression model with some parameters like $\sigma^2$(where $\sigma$ is the standard deviation, which is positive), how can I guarantee that $\hat{\sigma}$ is positive? I mean, if I use maximum likelihood(and the model only have square term of $\sigma$), how does the optimization method know $\sigma$ is positive? (well, it seems OK that the result estimation of $\sigma$ is negative, but it looks weird, and this is why I ask this question.) AI: I would suggest parametrizing with a logarithm of volatility so you don't have to care about positivity, run the estimation and then invert back to original scale. Alternatively, you can consider constrained optimization routine. Without knowing more about the problem (at least the language you're using), that's about it.
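A toy Python sketch of this suggestion (the model, data and starting values are made up purely for illustration): optimize over log sigma so positivity is automatic, then exponentiate back at the end.
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-3.0 * x) + rng.normal(scale=0.1, size=x.size)

def neg_log_lik(params):
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)                     # always positive by construction
    resid = y - a * np.exp(-b * x)
    return 0.5 * np.sum(resid**2) / sigma**2 + x.size * log_sigma

res = minimize(neg_log_lik, x0=[1.0, 1.0, 0.0])
print(res.x[:2], np.exp(res.x[2]))                # invert back to the original sigma scale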
H: Merging repeating data cells in csv I have a CSV file with around 1 million rows. Say it has details like Name | Age | Salary name 1 52 10000 name 2 55 10043 name 3 50 100054 name 2 55 10023 name 1 52 100322... and so on. I need to merge the redundant details and produce output like Name | Age | Salary name 1 52 110322* name 2 55 20066 * name 3 50 100054 You might notice that the repeating name 1 and name 2 details are merged and their Salary values are added. I'm looking for a way to apply this change to my original data set, so I need a Python script for it. AI: Pandas is a python library that you will find very useful for these types of tasks. Here is a stack overflow post that tells you how to do what you want to accomplish. It boils down to three very pythonic lines: a groupby with a sum transformation, followed by a drop_duplicates on the key columns (note that the result has to be assigned back, and recent pandas versions use keep='last' instead of the old take_last=True): import pandas df = pandas.read_csv('csvfile.csv', header=0) df['Total'] = df.groupby(['Name', 'Age'])['Salary'].transform('sum') df = df.drop_duplicates(subset=['Name', 'Age'], keep='last')
H: Do you have any real example of Data Science reports? I recently found these use cases on Kaggle for Data Science and Data Analytics. Data Science Use Cases However, I am curious to find examples and case studies of real reports from other professionals in some of those use cases on the link, including hypotheses, testing, reports, conclusions (and maybe also the datasets they have used). Do you have any kind of those reports to share? I am aware of the difficulty of sharing this kind of detail, and this is why it is not a Google-search-and-find answer. AI: I think you would do better to go through various university thesis reports and data-science journal papers to get more details on the hypotheses, testing, reports and conclusions behind the data science problems mentioned above. Check these links from Stanford University: http://cs229.stanford.edu/projects2014.html http://cs229.stanford.edu/projects2013.html http://cs229.stanford.edu/projects2012.html
H: Advice on making predictions given collection of dimensions and corresponding probabilities I am a CS graduate but am very new to data science. I could use some expert advice/insight on a problem I am trying to solve. I've been through the titanic tutorial on kaggle.com which I think was helpful but my problem is a bit different. I am trying to predict diabetes risk based upon Age, Sex...and other factors given this data: http://www.healthindicators.gov/Indicators/Diabetes-new-cases-per-1000_555/Profile/ClassicData The data gives new cases per 1,000 people for each dimension (Age, Sex...etc). What I would like to do is devise a way to predict, given a list of dimensions (Age, Sex...etc) a probability factor for a new diagnosis. So far my strategy is to load this data into R and use some package to create a decision tree, similar to what I saw in the titanic example on kaggle.com, then feed in a dimension list. However, I am a bit overwhelmed. Any direction on what I should be studying, packages/methods/examples would be helpful. AI: Aggregate Data Since you're only given aggregate data, and not individual examples, machine learning techniques like decision trees won't really help you much. Those algorithms gain a lot of traction by looking at correlations within a single example. For instance, the increase in risk from being both obese and over 40 might be much higher than the sum of the individual risks of being obese or over 40 (i.e. the effect is greater than the sum of its parts). Aggregate data loses this information. The Bayesian Approach On the bright side, though, using aggregate data like this is fairly straightforward, but requires some probability theory. If $D$ is whether the person has diabetes and $F_1,\ldots,F_n$ are the factors from that link you provided, and if I'm doing my math correctly, we can use the formula: $$ \text{Prob}(D\ |\ F_1,\ldots,F_n) \propto \frac{\prod_{k=1}^n \text{Prob}(D\ |\ F_k)}{\text{Prob}(D)^{n-1}} $$ (The proof for this is an extension of the one found here). This assumes that the factors $F_1,\ldots,F_n$ are conditionally independent given $D$, though that's usually reasonable. To calculate the probabilities, compute the outputs for $D=\text{Diabetes}$ and $\neg D=\text{No diabetes}$ and divide them both by their sum so that they add to 1. Example Suppose we had a married, 48-year-old male. Looking at the 2010-2012 data, 0.73% of all people get diabetes ($\text{Prob}(D) = 0.73\%$), 0.77% of married people get diabetes ($\text{Prob}(D\ |\ F_1) = 0.77\%$), 1.02% of people age 45-54 get diabetes ($\text{Prob}(D\ |\ F_2) = 1.02\%$), and 0.70% of males get diabetes ($\text{Prob}(D\ |\ F_3) = 0.70\%$). This gives us the unnormalized probabilities: $$ \begin{align*} P(D\ |\ F_1,F_2,F_3) &= \frac{(0.77\%)(1.02\%)(0.70\%)}{(0.73\%)^2} &= 0.0103 \\ P(\neg D\ |\ F_1,F_2,F_3) &= \frac{(99.23\%)(98.98\%)(99.30\%)}{(99.27\%)^2} &= 0.9897 \end{align*}$$ After normalizing these to add to one (which they already do in this case), we get a 1.03% chance of this person getting diabetes, and a 98.97% chance for them not getting diabetes.
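If it helps, here is a tiny Python sketch (my own, reusing the same numbers) of plugging the per-factor rates into the formula above and normalising the two outcomes:
p_d, factors = 0.0073, [0.0077, 0.0102, 0.0070]

def combine(p_event, per_factor_rates):
    out = 1.0
    for p in per_factor_rates:
        out *= p
    return out / p_event ** (len(per_factor_rates) - 1)

pos = combine(p_d, factors)
neg = combine(1 - p_d, [1 - p for p in factors])
print(pos / (pos + neg), neg / (pos + neg))   # roughly 0.0103 and 0.9897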
H: How to plot/visualize clusters in scikit-learn (sklearn)? I have done some clustering and I would like to visualize the results. Here is the function I have written to plot my clusters: import sklearn from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.preprocessing import StandardScaler from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.datasets.samples_generator import make_blobs from sklearn.preprocessing import StandardScaler def plot_cluster(cluster, sample_matrix): '''Input: "cluster", which is an object from DBSCAN, e.g. dbscan_object = DBSCAN(3.0,4) "sample_matrix" which is a data matrix: X = [ [0,5,1,2], [0,4,1,3], [0,5,1,3], [0,5,0,2], [5,5,5,5], ] Output: Plots the clusters nicely. ''' import matplotlib.pyplot as plt import numpy as np f = lambda row: [float(x) for x in row] sample_matrix = map(f,sample_matrix) print sample_matrix sample_matrix = StandardScaler().fit_transform(sample_matrix) core_samples_mask = np.zeros_like(cluster.labels_, dtype=bool) core_samples_mask[cluster.core_sample_indices_] = True labels = cluster.labels_ # Black removed and is used for noise instead. unique_labels = set(labels) colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels))) for k, col in zip(unique_labels, colors): if k == -1: # Black used for noise. col = 'k' class_member_mask = (labels == k) # generator comprehension # X is your data matrix X = np.array(sample_matrix) xy = X[class_member_mask & core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) xy = X[class_member_mask & ~core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=6) plt.ylim([0,10]) plt.xlim([0,10]) # plt.title('Estimated number of clusters: %d' % n_clusters_) plt.savefig('cluster.png') The function above is copied almost verbatim from the scikit-learn demo here. Yet, when I try it on the following: dbscan_object = DBSCAN(3.0,4) X = [ [0,5,1,2], [0,4,1,3], [0,5,1,3], [0,5,0,2], [5,5,5,5], ] result = dbscan_object.fit(X) print result.labels_ print 'plotting ' plot_cluster(result, X) ...It produces a single point. What is the best way to plot clusters in python? AI: When I run the code you posted, I get three points on my plot: The "point" at (0, 4) corresponds to X[1] and the "point" at (0, 5) is actually three points, corresponding to X[0], X[2], and X[3]. The point at (5, 5) is the last point in your X array. The data at (0, 4) and (0, 5) belong to one cluster, and the point at (5, 5) is considered noise (plotted in black). The issue here seems to be that you're trying to run the DBSCAN algorithm on a dataset containing 5 points, with at least 4 points required per cluster (the second argument to the DBSCAN constructor). In the sklearn example, the clustering algorithm is run on a dataset containing 750 points with three distinct centers. Try creating a larger X dataset and running this code again. You might also want to remove the plt.ylim([0,10]) and plt.xlim([0,10]) lines from the code; they're making it a bit difficult to see the points on the edge of the plot! If you omit the ylim and xlim then matplotlib will automatically determine the plot limits.
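A quick sketch of that suggestion (assuming the plot_cluster helper from the question is already defined): generate a larger dataset with make_blobs, as in the sklearn demo, and run DBSCAN on it.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

centers = [[1, 1], [-1, -1], [1, -1]]
X, _ = make_blobs(n_samples=750, centers=centers, cluster_std=0.4, random_state=0)

dbscan_object = DBSCAN(eps=0.3, min_samples=10)
result = dbscan_object.fit(X)
n_clusters = len(set(result.labels_)) - (1 if -1 in result.labels_ else 0)
print(n_clusters, "clusters found")
# plot_cluster(result, X.tolist())   # the helper from the question expects a list of rows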
H: How to convert vector values to fit k-means algorithm function? I have a set of user objects that I want to group using a $k$-means function from their quiz answers. Each quiz question had predefined answers with letter values "a", "b", "c", "d". If a user answers the question #1 with letter "b", I put this answer into vector $(0, 1, 0, 0)$. The $k$-means function I have to use takes a two-dimensional array of numbers as an input vector (in this case array[user][question]), and I can't figure out how to use it, because, instead of a number value representing a user's answer to question, I have a vector input. How can I convert my vector values to numbers so that I can use my $k$-means function? AI: You are 95% there, you just have one hangup... The vectorization that you are doing is alternatively known as binarization or one-hot encoding. The only thing you need to do now is break apart all of those vectors and think of them as individual features. So instead of thinking of the question one vector as $(0,0,1,0)$ and the question two vector as $(0,1,0,0)$, you can now think of them as individual features. So this: - q1, q2 - (a,b,c,d), (a,b,c,d) user1 (0,0,1,0), (0,1,0,0) user2 (1,0,0,0), (0,0,0,1) Becomes this: - q1a,q1b,q1c,q1d,q2a,q2b,q2c,q2d user1 0 0 1 0 0 1 0 0 user2 1 0 0 0 0 0 0 1 And you can think of each one of those binary features as an orthogonal dimension in your data that lies in a 8-dimensional space. Hope this helps!
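A compact way to build that table in Python (made-up answers, hypothetical column names) before handing it to the k-means function:
import pandas as pd
from sklearn.cluster import KMeans

answers = pd.DataFrame({"q1": ["c", "a", "c"], "q2": ["b", "d", "b"]})   # one row per user
X = pd.get_dummies(answers)              # binary columns like q1_a, q1_c, q2_b, q2_d
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(X.columns), labels)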
H: Developing an Empirical Model for Non-Linear Data I have collected Temp vs. Time data for a controlled environment. I performed three tests, with the thermocouple in the same location for all three tests. I now have three Temp. vs. Time curves for this location. The graphs are shown here: I am looking to generate an empirical model based on the Temp. vs. Time at this location that can predict the temperature at this location over time. How should I approach this? My initial approach was to identify the local minima and maxima, and average the three points together (for each maximum and minimum). And then correlate that average to a sequence of time, for example, 30C between 750 and 1250s, or something like that. I really don't think this is an efficient approach... What do you all think would be an efficient approach to this? AI: A couple of thoughts... First as a (former) physicist: As a scientist/physicist I would likely never be satisfied to analyze the data that you have presented because there are some obvious issues with it. I was initially thinking that it was way too erratic, but then I noticed that you referred to the time units as seconds, in which case this is a pretty slowly varying time series. Are the units really seconds? If they happen to be milliseconds or microseconds then you might want to think about a probe that is less noisy (i.e. a larger probe with more of a temperature sink) and check whether you are seeing the effects of ground loop feedback in your system or just EMF interference from your electrical system. Secondly, there are clear patterns in the data and these do not line up very well, so I would rerun the experiment with things timed much more accurately. For instance, the event that occurs midway through the experiment that drops the temperature (looks like an icicle or dagger) occurs 5-8 minutes apart in comparing the dark blue experiment to the purple experiment. It would be great if these could be synced better. The point being that if you can't sync them better, then the noise of the time series renders most phenomena, apart from a flat line, almost useless. Now as a data scientist: This data is really noisy or at least fluctuates significantly! You should stay away from averaging minima and maxima as you suggest in your post. Extrema are very sensitive to noise, whereas you are trying to reduce the noise in your system. You should perhaps think about applying some sort of smoothing method to reduce the jitter in your data. Perhaps a second-order exponentially weighted moving average like Holt-Winters. Finally, you could probably just average the three signals to produce a mean signal. Means are more susceptible to outliers than medians, so the median signal is also an option. Some other options include separating the trend from the general behavior using some type of de-seasoning or double ARIMA procedure. But, it seems like these are too extreme given that the dagger/icicle event looks significant and you don't want to transform it away. So I would only use these if there is some type of periodicity to your data. Hope this helps!
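As a rough sketch of the smoothing and averaging ideas above (synthetic data, hypothetical column names), in Python with pandas:
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
t = np.arange(0, 3000, 10)                                   # seconds
runs = pd.DataFrame({f"run{i}": 30 + 5*np.sin(t/500) + rng.normal(0, 2, t.size)
                     for i in range(1, 4)}, index=t)

smoothed = runs.ewm(span=20).mean()        # exponential smoothing to cut the jitter, per run
mean_signal = smoothed.mean(axis=1)        # average of the three experiments
median_signal = smoothed.median(axis=1)    # more robust to a single outlying run
print(mean_signal.head())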
H: Estimating destination according to previous data I need an advice. I can resume my problem like that : I have some travels in a database, for example : Person1 travelled from CityA to CityB on Date1 Person1 travelled from CityB to CityC on Date2 Person2 travelled from CityB to CityD on Date3 ... We can consider that these cities are in the complete graph. Now, according to all the travels in the database, I would like to know where a PersonX is likely to go. I can know when he come from (or not). I don't know if I should use machine learning, data-mining or graph theory. AI: This is a spatio-temporal clustering problem that is likely best solved with a Markov model. You could reasonable group this into machine learning or data mining. Develop your model using machine learning and then (the data mining part) leverage those pattern recognition techniques (that have been developed in machine learning). I think there are at least one or two threads on this over at Cross-Validated that go into more detail. Here are a couple of papers to look at if you are just getting started. Using GPS to learn significant locations and predict movement across multiple users Predicting Future Locations with Hidden Markov Models
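A tiny sketch (made-up trips) of the simplest version of that idea, a first-order Markov model estimated from past trips and used to rank likely destinations for a given origin:
from collections import Counter, defaultdict

trips = [("CityA", "CityB"), ("CityB", "CityC"), ("CityB", "CityD"), ("CityB", "CityC")]

counts = defaultdict(Counter)
for origin, destination in trips:
    counts[origin][destination] += 1

def predict(origin, k=2):
    total = sum(counts[origin].values())
    return [(city, n / total) for city, n in counts[origin].most_common(k)]

print(predict("CityB"))   # roughly [('CityC', 0.67), ('CityD', 0.33)]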
H: What's an efficient way to compare and group millions of store names? I'm a total amateur as far as data science goes, and I'm trying to figure out a way to do some string comparison on a large dataset. I've a Google BigQuery table storing merchant transactions, but the store names are all over the board. For example, there can be 'Wal-Mart Super Center' and 'Wal-Mart SC #1234', or 'McDonalds F2222' and 'McDonalds #321'. What I need to do is group ALL 'Wal-mart' and 'McDonalds' and whatever else. My first approach was doing a recursive reg-ex check, but that took forever and eventually timed-out. What's the best approach for doing that with a table of 20 million+ rows? I'm open to trying out any technology that would fit this job. AI: This is an entity resolution aka record linkage aka data matching problem. I would solve this by removing all of the non-alphabetical characters including numbers, casting into all uppercase and then employing a hierarchical match. First match up the exact cases and then move to a Levenshtein scoring between the fields. Make some sort of a decision about how large you will allow the Levenshtein or normalized Levenshtein score to get before you declare something a non-match. Assign every row an id and when you have a match, reassign the lower of the IDs to both members of the match. The Levenshtein distance algorithm is simple but brilliant (taken from here): def levenshtein(a,b): "Calculates the Levenshtein distance between a and b." n, m = len(a), len(b) if n > m: # Make sure n <= m, to use O(min(n,m)) space a,b = b,a n,m = m,n current = range(n+1) for i in range(1,m+1): previous, current = current, [i]+[0]*n for j in range(1,n+1): add, delete = previous[j]+1, current[j-1]+1 change = previous[j-1] if a[j-1] != b[i-1]: change = change + 1 current[j] = min(add, delete, change) return current[n] This Data Matching book is a good resource and is free for seven days on Amazon. Nominally, this is an $n^2$ algorithm without exploiting some sorting efficiencies, so I would expect to have to use multiple cores on $2\times10^7$ rows. But this should run just fine on an 8 core AWS instance. It will eventually finish on a single core, but might take several hours. Hope this helps!
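A short usage sketch of the hierarchical match described above (this reuses the levenshtein() function defined in the answer; the 0.25 threshold is an arbitrary placeholder you would tune on your data):
import re

def normalize(name):
    return re.sub(r"[^A-Z]", "", name.upper())       # drop digits, punctuation and spaces

def same_store(a, b, max_ratio=0.25):
    a, b = normalize(a), normalize(b)
    if a == b:                                       # step 1: exact match after normalisation
        return True
    return levenshtein(a, b) / max(len(a), len(b)) <= max_ratio   # step 2: fuzzy match

print(same_store("McDonalds F2222", "McDonalds #321"))   # True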
H: When to use Random Forest over SVM and vice versa? When would one use Random Forest over SVM and vice versa? I understand that cross-validation and model comparison is an important aspect of choosing a model, but here I would like to learn more about rules of thumb and heuristics of the two methods. Can someone please explain the subtleties, strengths, and weaknesses of the classifiers as well as problems, which are best suited to each of them? AI: I would say, the choice depends very much on what data you have and what is your purpose. A few "rules of thumb". Random Forest is intrinsically suited for multiclass problems, while SVM is intrinsically two-class. For multiclass problem you will need to reduce it into multiple binary classification problems. Random Forest works well with a mixture of numerical and categorical features. When features are on the various scales, it is also fine. Roughly speaking, with Random Forest you can use data as they are. SVM maximizes the "margin" and thus relies on the concept of "distance" between different points. It is up to you to decide if "distance" is meaningful. As a consequence, one-hot encoding for categorical features is a must-do. Further, min-max or other scaling is highly recommended at preprocessing step. If you have data with $n$ points and $m$ features, an intermediate step in SVM is constructing an $n\times n$ matrix (think about memory requirements for storage) by calculating $n^2$ dot products (computational complexity). Therefore, as a rule of thumb, SVM is hardly scalable beyond 10^5 points. Large number of features (homogeneous features with meaningful distance, pixel of image would be a perfect example) is generally not a problem. For a classification problem Random Forest gives you probability of belonging to class. SVM gives you distance to the boundary, you still need to convert it to probability somehow if you need probability. For those problems, where SVM applies, it generally performs better than Random Forest. SVM gives you "support vectors", that is points in each class closest to the boundary between classes. They may be of interest by themselves for interpretation.
H: Recommendations for storing time series data As part of my thesis I've done some experiments that have resulted in a reasonable amount of time-series data (motion-capture + eye movements). I have a way of storing and organizing all of this data, but it's made me wonder whether there are best practices out there for this sort of task. I'll describe what I've got, and maybe that will help provide some recommendations. So, I have an experiment that requires subjects to use their vision and move their body to complete a task. Each task is one trial, and each subject performs multiple trials to complete the experiment. During a trial I record the movement and the eye tracker (~200 channels) at regularly sampled time points (~100Hz). I store these in a CSV file (one file per trial), with one row per time point, and one column per variable (e.g., left-fingertip-x, left-fingertip-y, left-fingertip-z, etc. for the mocap, and left-eye-x, left-eye-y for the eyes). Associated with each trial is some metadata such as the experimental condition of the trial (e.g., how fast a target in the trial is moving, say). I store these values in the CSV filename itself, using a "key=value" sort of syntax. While this works well enough for my purposes, it's really ad-hoc! I'd like to get a sense of whether other people have solved problems like this, and, if so, how? AI: There are two solutions that are worth looking at: InfluxDB is an open source database platform specifically designed for time series data. The platform includes many optimized functions related to time and you can collect data on any interval and compute rollups/aggregations when reporting. The company recently launched a query app called Chronograf. I have not used this - but if its no good, you can also check out Grafana (which is very widely used and stable). The alternative strategy you may want to pursue is an elasticsearch index. Elasticsearch is great at running aggregations and other mathematical functions on data. Many use it to store server log data and then query said data using Kibana.
H: Logistic Regression in R models for 1 or 0? According to this link, it says SAS models for 0, To model 1s rather than 0s, we use the descending option. We do this because by default, proc logistic models 0s rather than 1s What happens in case of R's glm function? Does it model for 1 or 0? Is there a way to change it? Does it matter? AI: I always learned to model for 1. See below for the impact of switching the encoding, model <- glm(ans ~ x, data = simulated_data, family = binomial) summary(model) # # Call: # glm(formula = ans ~ x, family = binomial, data = simulated_data) # # Deviance Residuals: # 1 2 3 4 5 # 1.6388 -0.6249 -1.2146 -0.8083 0.7389 # # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) -2.4753 2.5006 -0.99 0.322 # x 0.5957 0.6543 0.91 0.363 # # (Dispersion parameter for binomial family taken to be 1) # # Null deviance: 6.7301 on 4 degrees of freedom # Residual deviance: 5.7505 on 3 degrees of freedom # AIC: 9.7505 # # Number of Fisher Scoring iterations: 4 # simulated_data$ans <- !simulated_data$ans model_opp <- glm(ans ~ x, data = simulated_data, family = binomial) summary(model_opp) # # Call: # glm(formula = ans ~ x, family = binomial, data = simulated_data) # # Deviance Residuals: # 1 2 3 4 5 # -1.6388 0.6249 1.2146 0.8083 -0.7389 # # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) 2.4753 2.5006 0.99 0.322 # x -0.5957 0.6543 -0.91 0.363 # # (Dispersion parameter for binomial family taken to be 1) # # Null deviance: 6.7301 on 4 degrees of freedom # Residual deviance: 5.7505 on 3 degrees of freedom # AIC: 9.7505 # # Number of Fisher Scoring iterations: 4 Hope this helps.
H: Data frame mutation in R I have a data frame of the following format: Symbol Date Time Profit $BANKNIFTY 4/1/2010 9:55:00 -1.18% $BANKNIFTY 4/1/2010 12:30:00 -2.84% $BANKNIFTY 4/1/2010 12:45:00 7.17% $BANKNIFTY 5/1/2010 11:40:00 -7.11% ZEEL 26/6/2012 13:50:00 24.75% ZEEL 27/6/2012 15:15:00 -1.90% ZEEL 28/6/2012 9:45:00 37.58% ZEEL 28/6/2012 14:55:00 23.95% ZEEL 29/6/2012 14:20:00 -4.65% ZEEL 29/6/2012 14:30:00 -6.01% ZEEL 29/6/2012 14:55:00 -12.23% ZEEL 29/6/2012 15:15:00 35.13% What I'd like to achieve is convert that data frame into a data frame which has dates for row names, symbol names for columns and sum of percentage profit for each day. Like in the following: Date BankNifty ZEEL 4/1/2010 3.15% 0 5/1/2010 -7.11% 0 26/6/2012 0 24.75% 27/6/2012 0 -1.90% 28/6/2012 0 61.53% 29/6/2012 0 12.24% How can I achieve that in R? dplyr mutation or some apply function? I'm a beginner in R programming. Thanks in advance. The data in R is structure(list(Symbol = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("BANKNIFTY", "ZEEL"), class = "factor"), Date = structure(c(5L, 5L, 5L, 6L, 1L, 2L, 3L, 3L, 4L, 4L, 4L, 4L), .Label = c("26/6/2012", "27/6/2012", "28/6/2012", "29/6/2012", "4/1/2010", "5/1/2010"), class = "factor"), Time = structure(c(10L, 2L, 3L, 1L, 4L, 8L, 9L, 7L, 5L, 6L, 7L, 8L), .Label = c("11:40:00", "12:30:00", "12:45:00", "13:50:00", "14:20:00", "14:30:00", "14:55:00", "15:15:00", "9:45:00", "9:55:00"), class = "factor"), Profit = structure(c(1L, 4L, 12L, 7L, 9L, 2L, 11L, 8L, 5L, 6L, 3L, 10L), .Label = c("-1.18%", "-1.90%", "-12.23%", "-2.84%", "-4.65%", "-6.01%", "-7.11%", "23.95%", "24.75%", "35.13%", "37.58%", "7.17%"), class = "factor")), .Names = c("Symbol", "Date", "Time", "Profit"), class = "data.frame", row.names = c(NA, -12L)) AI: The fastest way would be, require(data.table) data <- data.table(data) # Remove the percentage from your file and convert the field to numeric. data[, Profit := as.numeric(gsub("%", "", Profit))] data ## Symbol Date Time Profit ## 1: BANKNIFTY 4/1/2010 9:55:00 -1.18 ## 2: BANKNIFTY 4/1/2010 12:30:00 -2.84 ## 3: BANKNIFTY 4/1/2010 12:45:00 7.17 ## 4: BANKNIFTY 5/1/2010 11:40:00 -7.11 ## 5: ZEEL 26/6/2012 13:50:00 24.75 ## 6: ZEEL 27/6/2012 15:15:00 -1.90 ## 7: ZEEL 28/6/2012 9:45:00 37.58 ## 8: ZEEL 28/6/2012 14:55:00 23.95 ## 9: ZEEL 29/6/2012 14:20:00 -4.65 ## 10: ZEEL 29/6/2012 14:30:00 -6.01 ## 11: ZEEL 29/6/2012 14:55:00 -12.23 ## 12: ZEEL 29/6/2012 15:15:00 35.13 # Melt the data so that we can easily dcast afterwards. molten_data <- melt(data[, list(Symbol, Date, Profit)], id = c("Symbol", "Date")) # Create a summary by date and Symbol. dcast(molten_data, Date ~ variable + Symbol, fun = sum) ## Date Profit_BANKNIFTY Profit_ZEEL ## 1: 26/6/2012 0.00 24.75 ## 2: 27/6/2012 0.00 -1.90 ## 3: 28/6/2012 0.00 61.53 ## 4: 29/6/2012 0.00 12.24 ## 5: 4/1/2010 3.15 0.00 ## 6: 5/1/2010 -7.11 0.00
H: Find the column(s) name where the value of the variable matches a regex I am trying to filter a data frame where I need to search for a regular expression within the data frame. dim(df) [1] 10299 561 I wanted to know if there is a way, I could find the name(s) of the column(s) where a particular regex may be available. As an example, how do I find the name of the column where 'rsi' is present. I am referring to iris. AI: Would this work for you, names(iris)[apply(iris, 2, function(x) any(grepl("rsi", x)))]
H: For logistic regression, Predict.glm() outputs $p$ or $ln(p/1-p)$? I'm performing a logistic regression in R. I wanted to know if the function predict.glm outputs $p$ (probability of event occurring) or log odds i.e. $log(p/1-p)$? AI: It returns the log odds. You can see that with this basic example, # Create a perfect correlated data. data <- data.frame(x = c(1,0,1,0), y = c(T, F, T, F)) # Estimate the model. model <- glm(y ~ x, data) # Show predictions. predict(model) ## 1 2 3 4 ## 23.56607 -23.56607 23.56607 -23.56607 # Compute the inverse of the log odd to come to your prediction. boot::inv.logit(predict(model)) ## 1 2 3 4 ##1.000000e+00 5.826215e-11 1.000000e+00 5.826215e-11 If logit would return probabilities you could not take the inverse to come to your predicted values.
H: How is H2O faster than R or SAS? I am trying to understand the abstract details that explain how h2o is faster than R and SAS for data science computations. AI: I have used R, SAS Base and H2O. First, I do not think that H2O seeks to be either R or SAS. H2O provides data mining algorithms that are highly efficient. You can interface with H2O using several APIs such as their R API. The benefit of combining R and H2O is that H2O is very good at exploiting multiple cores or clusters with minimal effort from the user. It is much harder to achieve the same efficiency in R alone. The reason why H2O is much faster is that they have very good indexing of their data and their algorithms are written such that they exploit parallelism to the fullest. See http://h2o.ai/blog/2014/03/h2o-architecture/ R with the default matrix dynamic libraries can only use one CPU core. Revolution R community edition ships with the Intel Math Kernel Library. This allows for some matrix computations in parallel but is definitely not as efficient as H2O. For SAS it is a bit harder to say anything considering it's closed source, but based on my CPU utilization I would assume that they have a similar approach to Revolution R. Their matrix algebra exploits parallelism, but their algorithms are not as efficient as H2O's. Their data storage is also not as efficient as H2O's. Lastly, H2O with R comes at a very different price tag than SAS. Hope this clarifies a bit.
H: users' percentile similarity measure Having n vectors of percentile ranks for a list of common users between group #1 and groups #2:n e.g. vec1 = {0.25, 0.1, 0.8, 0.75, 0.5, 0.6} vec2 = {0.35, 0.2, 0.6, 0.45, 0.2, 0.9} The percentile ranks represent activity frequency within the group, for instance opening times. The goal is to find similarity between group #1 and groups #2..n in terms of these common users' ranks. The direction taken so far is to use the dot product in order to take the magnitude into account (due to rank). The problem is that the scalar answer can take any value and so a threshold cannot be drawn. Do I need to draw a threshold with respect to the dot product of vec #1 (group 1) with itself? Or is there another way to set a threshold (maybe dynamic)? AI: As I understand it, you don't want to use the dot product, because it is unbounded. Yet you also don't want to use the cosine similarity metric, $\cos(a,b)=\frac{a\cdot b}{\|a\|\,\|b\|}$, which is just the dot product normalized by the lengths of the two vectors being dotted, because it does not take magnitude into account, e.g. $\cos[(1,1),(1,1)]=1=\cos[(1,1),(2,2)]$. Perhaps you could therefore use the Sørensen-Dice coefficient, or possibly 1 - Sørensen: $$s_{\nu}(a,b)=\frac{2\,a\cdot b}{\|a\|^2+\|b\|^2}$$ This varies from 0 to 1 and accounts for the magnitudes of the constituent vectors, e.g.: $$s_{\nu}[(1,1),(1,1)]=1$$ whereas $$s_{\nu}[(1,1),(2,2)]=\frac{4}{5}$$ Hope this helps!
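For reference, both scores are a couple of lines in Python (my own sketch, using the vectors from the question):
import numpy as np

def cosine(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

def dice(a, b):
    return 2 * a.dot(b) / (a.dot(a) + b.dot(b))

vec1 = np.array([0.25, 0.1, 0.8, 0.75, 0.5, 0.6])
vec2 = np.array([0.35, 0.2, 0.6, 0.45, 0.2, 0.9])
print(cosine(vec1, vec2), dice(vec1, vec2))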
H: Which type of machine learning to use We are working with a complex application i.e. a physical measurement in a lab, that has approximately 230 different input parameters, many of which are ranges or multiple-value. The application produces a single output, which is then verified in an external (physical) process. At the end of the process the individual tests are marked as "success" or "fail". That is, despite the many input parameters, the output is assessed in a boolean manner. When tests fail, the parameters are 'loosened' slightly and re-tested. We have about 20,000 entries in our database, with both "success" and "fail", and we are considering a machine learning application to help in two areas: 1) Initial selection of optimum parameters 2) Suggestions for how to tune the parameters after a "fail" Many of the input parameters are strongly related to each other. I studied computer science in the mid-90s, when the focus was mostly expert systems and neural networks. We also have access to some free CPU hours of Microsoft Azure Machine Learning. What type of machine learning would fit these use-cases? AI: With using R, You could look at trees / randomforests. Since you have correlated variables, you could look into Projection pursuit classification trees (R package pptree). And there soon will be a ppforest package. But this is still under development. You could also combine randomforest with the package forestFloor to see the curvature of the randomforest and work from there.
H: Can Hadoop be beneficial when data is in database tables and not in a file system I work for a bank. Most of our data is in the form of database tables. Would we benefit by implementing Hadoop? I am of the impression that Hadoop is more for a Distributed File System (unstructured data) as opposed to OLAP databases (Netezza) AI: 'SQL' on Hadoop is very much a thing, though I use quotes since it's probably more accurate to say it's SQL-like. Some options for bringing SQL-like capabilities to Hadoop include Hue, Hive/bee (Heading towards Stinger? So punny Apache), Impala, SparkSQL (probably not a great solution for a bank given the possibility of concurrency issues), among others (Seems like everyone has their own version of it these days) To be honest though, if you're asking if it could be helpful, you probably don't need Hadoop (sorry in advance if that comes off harshly, it's not intended to). Lots and lots of places think they need Hadoop, but very few actually need Hadoop. There are lots of businesses that are down on the tech because they transitioned to it when the need didn't really exist. If you truly do need Hadoop or another distributed system, it'd be almost impossible to determine which setup would be beneficial to your org without an intimate understanding of your data and your specific business needs.
H: Can all statistical algorithms be parallelized using a Map Reduce framework Is it correct to say that any statistical learning algorithm (linear/logistic regression, SVM, neural network, random forest) can be implemented inside a Map Reduce framework? Or are there restrictions? I guess there may be some algorithms that are not possible to parallelize? AI: Indeed there are: Gradient Boosting is by construction sequential, so parallelization is not really possible Generalized Linear Models need all data at the same time, although technically you can parallelize some of the inner linear algebra nuts and bolts Support Vector Machines, in their usual kernelized form, need the full Gram matrix and are typically trained with essentially sequential solvers (e.g. SMO), so they do not decompose naturally into independent map tasks either
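For contrast, here is a small Python sketch (my own illustration, synthetic data) of a model that does fit the map-reduce pattern nicely: ordinary least squares, where each mapper emits its partition's X'X and X'y and the reducer simply sums them.
import numpy as np
from functools import reduce

rng = np.random.RandomState(0)
true_beta = np.array([1.0, -2.0, 0.5])
partitions = []
for _ in range(4):                                   # four "mappers", each with its own data chunk
    X = rng.randn(100, 3)
    y = X @ true_beta + rng.normal(0, 0.1, 100)
    partitions.append((X, y))

def mapper(chunk):                                   # map: local sufficient statistics
    X, y = chunk
    return X.T @ X, X.T @ y

def reducer(a, b):                                   # reduce: just add them up
    return a[0] + b[0], a[1] + b[1]

XtX, Xty = reduce(reducer, map(mapper, partitions))
print(np.linalg.solve(XtX, Xty))                     # close to true_beta, same as a pooled fit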
H: Understanding dropout and gradient descent I am looking at how to implement dropout on deep neural networks and found something counter intuitive. In the forward phase dropout mask activations with a random tensor of 1s and 0s to force net to learn the average of the weights. This help the net to generalize better. But during the update phase of the gradient descent the activations are not masked. This to me seems counter intuitive. If I mask connections activations with dropout, why I should not mask the gradient descent phase? AI: In dropout as described in here, weights are not masked. Instead, the neuron activations are masked, per example as it is presented for training (i.e. the mask is randomised for each run forward and gradient backprop, not ever repeated). The activations are masked during forward pass, and gradient calculations use the same mask during back-propagation of that example. This can be implemented as a modifier within a layer description, or as a separate dropout layer. During weight update phase, typically applied on a mini-batch (where each example would have had different mask applied) there is no further use of dropout masks. The gradient values used for update have already been affected by masks applied during back propagation. I found a useful reference for learning how dropout works, for maybe implementing yourself, is the Deep Learn Toolbox for Matlab/Octave.
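A bare-bones numpy sketch of that flow (my own illustration of inverted dropout on one layer): the same sampled mask scales the activations on the forward pass and gates the gradient on the backward pass, and the weight update itself applies no additional mask.
import numpy as np

rng = np.random.RandomState(0)
p_keep = 0.5

x = rng.randn(4, 5)                            # layer input (a mini-batch of 4 examples)
W = rng.randn(5, 8)
a = np.maximum(0, x @ W)                       # forward: ReLU activations
mask = (rng.rand(*a.shape) < p_keep) / p_keep  # sampled fresh for every forward pass
a_dropped = a * mask                           # forward: mask the activations

grad_out = rng.randn(4, 8)                     # pretend gradient arriving from the layer above
grad_a = grad_out * mask                       # backward: the *same* mask gates the gradient
grad_pre = grad_a * (a > 0)                    # backprop through the ReLU
dW = x.T @ grad_pre                            # the weight update uses this gradient, no extra mask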
H: TF-IDF not a strong measure in this senario? I am dealing with a data where I have only two documents and there are some words which are present in both. Now Term Frequencies (tf) of these words are very high for respective single document than the other. For e.g. Word1 is present in Documents D1 and D2, and tf(Word1,D1) = 1000 tf(Word1,D2) = 3 But since Word1 is present in both the documents IDF(Word1) = 0 TF-IDF(Word1,d) = 0 for all d belonging to {D1,D2} So inspite having a very powerful presence in a single document, TF-IDF score will always be 0. One solution I could think is to take word1 as absent if tf(Word1) < threshold. However I still don't feel this is good enough as the Word present in only one single document is given IDF score of only 0.5. I feel like TF-IDF is not a good measurement when number of documents are very low. Any suggestions here? AI: Weighting scheme 2 in table Recommended TF-IDF weighting schemes in tf–idf | Wikipedia should solve your problem.
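As a tiny illustration of the general idea (this uses one common smoothed variant, idf = log(1 + N/n_t), not necessarily the exact scheme from that table): the idf stays positive even when a term occurs in every document, so the strong presence in D1 is no longer wiped out.
import math

N = 2            # number of documents
n_t = 2          # documents containing Word1
tf_d1, tf_d2 = 1000, 3

idf = math.log(1 + N / n_t)
print(tf_d1 * idf, tf_d2 * idf)    # D1 now scores far higher than D2 instead of both being 0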
H: Classification problem where one attribute is a vector Hello I am a layman trying to analyze game data from League of Legends, specifically looking at predicting the win rate for a given champion given an item build. Outline A player can own up to 6 items at the end of a game. They could have purchased these items in different orders or adjusted their inventory position during the course of the game. In this fashion the dataset may contain the following rows with: champion id | items ids | win(1)/loss(0) ---------------------------------------------------------------------------- 45 | [3089, 3135, 3151, 3157, 3165, 3285] | 1 45 | [3151, 3285, 3135, 3089, 3157, 3165] | 1 45 | [3165, 3285, 3089, 3135, 3157, 3151] | 0 While the items are in a different order the build is the same, my initial thought would be to simply multiply the item ids as this would give me an integer value representing that combination of 6 items. While there are hundreds of items, in reality a champion draws off a small subset (~20) of those to form the core (3 items) of their build. A game may also finish before players have had time to purchase 6 items: items ids ------------------------------------------ [3089, XXXX, 3151, 3285, 3165, 0000] [XXXX, 3285, XXXX, 3165, 3151, 0000] [3165, 3285, 3089, XXXX, 0000, 0000] XXXX item from outside core subset 0000 empty inventory slot As item 3089 compliments champion 45 core builds that have item 3089 have a higher win rate than core builds which are missing item 3089. The size of the data set available for each champion varies between 10000 and 100000. The mean is probably around 35000. Questions Is this a suitable problem for supervised classification? How should I approach finding groups of core items and their win rates? AI: 1) If you want to build a model with: Input: Items bought Output: Win/Loss then you will probably want to learn a non-linear combination of the inputs to represent a build. For example item_X may have very different purpose when paired with item_Y than with item_Z. For the input format, you may consider creating a binary vector from the item list. For example if there were only ten items, a game in which the champion purchased items 1,4,5,9 (in any order) would look like row 1; a game where he also purchased item 2 and 7 would look like row 2: item_ID | 0 1 2 3 4 5 6 7 8 9 ________________________________________ champion_1| 0 1 0 0 1 1 0 0 0 1 champion_1| 0 1 1 0 1 1 0 1 0 1 There are a variety of models that might suit this task. You might use decision trees for interpretability. A simple neural net or SVM would likely also do a good job. These should all be found in most basic ML packages. 2) The win rates of various items are directly computable. Simply count the number of times a champion used the items in question and won and divide by the total number of times a champion used that item combination. You can do this for any given group size (1 to 6)
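A small pandas sketch (toy data, reusing the item ids from the question) of both points: an order-independent binary item matrix for a classifier, and a direct win-rate lookup for a chosen core item set.
import pandas as pd

games = pd.DataFrame({
    "items": [[3089, 3135, 3151, 3157, 3165, 3285],
              [3151, 3285, 3135, 3089, 3157, 3165],
              [3165, 3285, 3089, 3135, 3157, 3151],
              [3135, 3157, 3165, 3285, 0, 0]],
    "win": [1, 1, 0, 1],
})

# 1) order-independent binary encoding, one column per item id (0 = empty slot)
X = pd.DataFrame([{f"item_{i}": 1 for i in row if i != 0} for row in games["items"]]).fillna(0)

# 2) empirical win rate of a particular core (here: games where both 3089 and 3285 were owned)
core = games["items"].apply(lambda row: {3089, 3285}.issubset(row))
print(games.loc[core, "win"].mean())   # about 0.67 on this toy data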
H: Any usable libs to build and visualise SOM in python? I tried SOMpy, though it is very crude now and works only with oldest versions of matplotlib. Is there any fancy lib that can build SOM based on array and visualize it in Python? AI: You could have a try on this package. There is a working example on this page. If what you are interested is the Manifold learning, you could also apply many packages from sklearn.
H: How does Performance function classify predictions as positive or negative? Package:ROCR I'm performing a logistic regression on my training data. I used the glm function to get the model m. Now, using the code below from this link, I calculated AUC: test$score <- predict(m, type = 'response', test) pred <- prediction(test$score, test$good_bad) perf <- performance(pred, "tpr", "fpr") where score is the dependent variable (0 or 1). To score the tpr (true positive rate) and fpr (false positive rate), you have to classify the predicted probabilities into 1 or 0. What is the cutoff used for that? How can we change it? I could not find anything useful in the main documentation either. AI: I can't access an R console at the moment to check, but I'm quite certain the cutoff is 0.5: if your glm model does prediction, it first produces real values and then applies the link function on top. To the best of my knowledge, you can't change it inside the glm function, so your best bet is probably to check the ROC curve, find what the optimal threshold is and use that as the cutoff.
H: Newbie: What is the difference between hypothesis class and models? I am new to machine learning and I am confused with the terminology. Thus far, I used to view a hypothesis class as different instance of hypothesis function... Example: If we are talking about linear classification then different lines characterized by different weights would together form the hypothesis class. Is my understanding correct or can a hypothesis class represent anything which could approximate the target function? For instance, can a linear or quadratic function that approximates the target function together form a single hypothesis class or both are from different hypothesis classes? AI: Your hypothesis class consists of all possible hypotheses that you are searching over, regardless of their form. For convenience's sake, the hypothesis class is usually constrained to be only one type of function or model at a time, since learning methods typically only work on one type at a time. This doesn't have to be the case, though: Hypothesis classes don't have to consist of only one type of function. If you're searching over all linear, quadratic, and exponential functions, then those are what your combined hypothesis class contains. Hypothesis classes also don't have to consist of only simple functions. If you manage to search over all piecewise-$\tanh^2$ functions, then those functions are what your hypothesis class includes. The big tradeoff is that the larger your hypothesis class, the better the best hypothesis models the underlying true function, but the harder it is to find that best hypothesis. This is related to the bias–variance tradeoff.
H: Identify given patterns in unstructured data like text files I wasn't sure if I had to ask it here or on Stack Overflow, but since I am also seeking research papers/algorithms and not only code, I decided to do it here. When I have a text, I can manually write a regex to find all the possible outputs of what I want to extract from the file. What I want to do is find an algorithm or research which lets you highlight (set as input) different positions of the same (repeated) data you want to extract in the text file, train the algorithm, and then identify all the others that follow the same conventions as those you set. For example, let's say that I have a text with several titles which are followed by \n\n\n and start with \n\n. It is easy with regex, but I want to do it dynamically. An idea is to build an algorithm which will take examples and create regexes automatically. But I am not aware of any research like this, and maybe there are also other techniques with which you can achieve it. Any ideas? AI: That is exactly what the Trifacta product does (in addition to other features). It uses the Wrangle language, which is a DSL (domain specific language) designed for data manipulation. There is a much earlier research project called Wrangler from the same people. The Wrangler papers might give you ideas.
H: StackOverflow Tags Predictor...Suggest an Machine Learning Approach please? I am trying to predict tags for stackoverflow questions and I am not able to decide which Machine Learning algorithm will be a correct approach for this. Input: As a dataset I have mined stackoverflow questions, I have tokenized the data set and removed stopwords and punctuation from this data. Things i have tried: TF-IDF Trained Naive Bayes on the dataset and then gave user defined input to predict tags, but its not working correctly Linear SVM Which ML algorithm I should use Supervised or Unsupervised? If possible please, suggest a correct ML approach from the scratch. PS: I have the list of all tags present on StackOverflow so, will this help in anyway? Thanks AI: This exact problem was a kaggle competition sponsored by Facebook. The particular forum thread of interest for you is the one where many of the top competitors explained their methodology, this should provide you with more information than you were probably looking for: https://www.kaggle.com/c/facebook-recruiting-iii-keyword-extraction/forums/t/6650/share-your-approach In general, it appears that most people treated the problem as a supervised one. Their primary feature was a tf-idf, or unweighted BOW, representations of the text and they ensembled 1000s of single-tag models. Owen, the winner of the competition, noted that the title text was a more powerful feature than the content of the body of the post.
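To make the "supervised, one classifier per tag" idea concrete, here is a minimal sklearn sketch (toy posts and tags of my own; the real competition pipelines were far bigger):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.pipeline import make_pipeline

posts = ["how to merge two dataframes in pandas",
         "segfault when dereferencing a null pointer in c",
         "plotting a dataframe column with matplotlib"]
tags = [["python", "pandas"], ["c"], ["python", "matplotlib"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                       # one binary column per tag

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression()))
model.fit(posts, Y)

pred = model.predict(["pandas dataframe groupby question"])
print(mlb.inverse_transform(pred))                # predicted tag set for the new post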
H: Connection between Regularization and Gradient Descent I would like to understand regularization/shrinkage in the light of MLE/Gradient Descent. I know both concepts but I do not know/understand whether both are used to determine coefficients of a linear model. If so, what are the steps followed? To further elaborate, regularization is used to reduce variance which is accomplished through penalizing coefficients of a linear model. The tuning parameter, lambda, is determined through cross-validation. Once, lambda is determined the coefficients are automatically determined, right? Hence, why do we need to minimize (RSS + regularization term) to find coefficients? Are the steps the following: Find lambda through cross-validation Minimize (RSS + regularization) through MLE or GD Find coefficients Penalize coefficients to decrease variance We are left with a small subset of coefficients AI: The fitting procedure is the one that actually finds the coefficients of the model. The regularization term is used to indirectly find the coefficients by penalizing big coefficients during the fitting procedure. A simple (albeit somewhat biased/naive) example might help illustrate this difference between regularization and gradient descent: X, y <- read input data for different values of lambda L for each fold of cross-validation using X,y,L theta <- minimize (RSS + regularization using L) via MLE/GD score <- calculate performance of model using theta on the validation set if average score across folds for L is better than the current best average score L_best <- L As you can see, the fitting procedure (MLE or GD in our case) finds the best coefficients given the specific value of lambda. As a side note, I would look at this answer here about tuning the regularization parameter, because it tends a little bit murky in terms of bias.
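To make the pseudocode concrete, here is a small self-contained sketch (my own, using ridge regression on synthetic data): the inner gradient-descent fit minimises RSS plus the penalty for a fixed lambda, and the outer loop picks lambda by cross-validated error.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + rng.normal(0, 0.5, 200)

def fit_ridge_gd(X, y, lam, lr=0.01, n_iter=2000):
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = -2 * X.T @ (y - X @ theta) + 2 * lam * theta   # d/dtheta [RSS + lam*||theta||^2]
        theta -= lr * grad / len(y)
    return theta

best = None
for lam in [0.01, 0.1, 1.0, 10.0]:
    scores = []
    for k in range(5):                                        # simple 5-fold split
        val = np.arange(len(y)) % 5 == k
        theta = fit_ridge_gd(X[~val], y[~val], lam)
        scores.append(np.mean((y[val] - X[val] @ theta) ** 2))
    if best is None or np.mean(scores) < best[0]:
        best = (np.mean(scores), lam)
print("best lambda:", best[1])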
H: Python script/GUI to generate positive/negative images for CascadeClassifier? For the first time, I am playing around with a Cascade Classifier with the OpenCV package (also new to the latter). I realized that it would probably be faster to write my own GUI/script to generate the needed positive and negative images from the set of images I have than to open each file in Photoshop or Paint, but I also suspect this has been done many times before. In particular, I am looking for a GUI that lets users page through files in a directory and then use mouse clicks to draw rectangles on a particular image and have the coordinates of the rectangle recorded for later purposes. Any suggestions? If not, I'll be sure to post a link when/if I finish this. It seems like something of general enough utility I am surprised I can't find it in the OpenCV package itself. AI: So I wrote the script. It gave me an excuse to learn Tkinter. It's pasted below. Note this is a one-off, not a model of good programming practice! If anyone uses this and has bugs or suggestions, let me know. Here's the git link and model code is pasted below: import Tkinter import Image, ImageTk from Tkinter import Tk, BOTH from ttk import Frame, Button, Style import cv2 import os import time import itertools IMAGE_DIRECTORY = # Directory of existing files POSITIVE_DIRECTORY = # Where to store 'positive' cropped images NEGATIVE_DIRECTORY = # Where to store 'negative' cropped images (generated automatically based on 'positive' image cropping IMAGE_RESIZE_FACTOR = # How much to scale images for display purposes. Images are not scaled when saved. # Everything stuffed into one class, not exactly model programming but it works for now class Example(Frame): def __init__(self, parent, list_of_files, write_file): Frame.__init__(self, parent) self.parent = parent self.list_of_files = list_of_files self.write_file = write_file self.image = None self.canvas = None self.corners = [] self.index = -1 self.loadImage() self.initUI() self.resetCanvas() def loadImage(self): self.index += 1 img = cv2.imread(self.list_of_files[self.index]) print(self.list_of_files[self.index]) while not img.shape[0]: self.index += 1 img = cv2.imread(self.list_of_files[self.index]) self.cv_img = img img_small = cv2.resize(img, (0,0), fx = IMAGE_RESIZE_FACTOR, fy = IMAGE_RESIZE_FACTOR) b, g, r = cv2.split(img_small) img_small = cv2.merge((r,g,b)) im = Image.fromarray(img_small) self.image = ImageTk.PhotoImage(image=im) def resetCanvas(self): self.canvas.create_image(0, 0, image=self.image, anchor="nw") self.canvas.configure(height = self.image.height(), width = self.image.width()) self.canvas.place(x = 0, y = 0, height = self.image.height(), width = self.image.width()) def initUI(self): self.style = Style() self.style.theme_use("default") self.pack(fill=BOTH, expand=1) print "width and height of image should be ", self.image.width(), self.image.height() self.canvas = Tkinter.Canvas(self, width = self.image.width(), height = self.image.height()) self.canvas.bind("<Button-1>", self.OnMouseDown) self.canvas.pack() nextButton = Button(self, text="Next", command=self.nextButton) nextButton.place(x=0, y=0) restartButton = Button(self, text="Restart", command=self.restart) restartButton.place(x=0, y=22) def nextButton(self): new_img = self.cv_img[self.corners[0][1]/IMAGE_RESIZE_FACTOR:self.corners[1][1]/IMAGE_RESIZE_FACTOR, self.corners[0][0]/IMAGE_RESIZE_FACTOR:self.corners[1][0]/IMAGE_RESIZE_FACTOR] files = self.list_of_files[self.index].split("/") try: 
os.stat(POSITIVE_DIRECTORY+files[-2]) except: os.mkdir(POSITIVE_DIRECTORY+files[-2]) print("saving to ", "{}{}/{}".format(POSITIVE_DIRECTORY, files[-2], files[-1])) cv2.imwrite("{}{}/{}".format(POSITIVE_DIRECTORY, files[-2], files[-1]), new_img) self.saveNegatives(files) self.restart() self.loadImage() self.resetCanvas() def saveNegatives(self, files): low_x = min(self.corners[0][0], self.corners[1][0])/IMAGE_RESIZE_FACTOR high_x = max(self.corners[0][0], self.corners[1][0])/IMAGE_RESIZE_FACTOR low_y = min(self.corners[0][1], self.corners[1][1])/IMAGE_RESIZE_FACTOR high_y = max(self.corners[0][1], self.corners[1][1])/IMAGE_RESIZE_FACTOR try: os.stat(NEGATIVE_DIRECTORY+files[-2]) except: os.mkdir(NEGATIVE_DIRECTORY+files[-2]) new_img = self.cv_img[ :low_y, :] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "LY", files[-1]), new_img) new_img = self.cv_img[ high_y: , :] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "HY", files[-1]), new_img) new_img = self.cv_img[ :, :low_x ] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "LX", files[-1]), new_img) new_img = self.cv_img[:, high_x: ] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "HX", files[-1]), new_img) def restart(self): self.corners = [] self.index -=1 self.canvas.delete("all") self.loadImage() self.resetCanvas() def OnMouseDown(self, event): print(event.x, event.y) self.corners.append([event.x, event.y]) if len(self.corners) == 2: self.canvas.create_rectangle(self.corners[0][0], self.corners[0][1], self.corners[1][0], self.corners[1][1], outline ='cyan', width = 2) def main(): root = Tk() root.geometry("250x150+300+300") list_of_files = [] file_names = [] walker = iter(os.walk(IMAGE_DIRECTORY)) next(walker) for dir, _, _ in walker: files = [dir + "/" + file for file in os.listdir(dir)] list_of_files.extend(files) file_names.extend(os.listdir(dir)) list_of_processed_files = [] processed_file_names = [] walker = iter(os.walk(POSITIVE_DIRECTORY)) next(walker) for dir, _, _ in walker: files = [dir + "/" + file for file in os.listdir(dir)] list_of_processed_files.extend(files) processed_file_names.extend(os.listdir(dir)) good_names = set(file_names) - set(processed_file_names) list_of_files = [f for i, f in enumerate(list_of_files) if file_names[i] in good_names] app = Example(root, list_of_files, IMAGE_DIRECTORY+"positives") root.mainloop() if __name__ == '__main__': main()
H: Original Meaning of "Intelligence" in "Business Intelligence" What does the term "Intelligence" originally stand for in "Business Intelligence"? Is it meant in the sense used in "Artificial Intelligence" or in the sense used in "Intelligence Agency"? In other words, does "Business Intelligence" mean "Acting smart & intelligently in business" or "Gathering data and information about the business"? This question was the topic of a debate among some fellows in our data-science team, so I thought I would ask other experts about it. One might say that both meanings are applicable, but I'm asking for the original intended meaning of the word as proposed in the 1980s. An acceptable answer should definitely cite original references, and personal opinions are not what I'm seeking. AI: Howard Dresner, in 1989, is believed to have coined the term "business intelligence" to describe "concepts and methods to improve business decision making by using fact-based support systems", supposedly while he was at Gartner Group. This is a common mantra, spread over the Web. I have not been able to trace the exact source for this origin yet, and many insist that he was not at Gartner Group in 1989, which is confirmed in the following interview. In his 2008 book, Performance Management Revolution: Improving Results Through Visibility and Actionable Insight, the term is defined as: BI is knowledge gained through the access and analysis of business information. He says, at the beginning, that "In 1989, for example, I started - some might say incited - the BI revolution with the premise that all users have a fundamental right to access information without the help of IT." There is no apparent claim of the invention of the term on his side. Indeed, one can find older roots in H. P. Luhn, A Business Intelligence System, IBM Journal of Research and Development, 1958, Vol. 2, Issue 4, p. 314--319. Abstract: An automatic system is being developed to disseminate information to the various sections of any industrial, scientific or government organization. This intelligence system will utilize data-processing machines for auto-abstracting and auto-encoding of documents and for creating interest profiles for each of the "action points" in an organization. Both incoming and internally generated documents are automatically abstracted, characterized by a word pattern, and sent automatically to appropriate action points. This paper shows the flexibility of such a system in identifying known information, in finding who needs to know it and in disseminating it efficiently either in abstract form or as a complete document. The author claims that: The techniques proposed here to make these things possible are: Auto-abstracting of documents; Auto-encoding of documents; Automatic creation and updating of action-point profiles. All of these techniques are based on statistical procedures which can be performed on present-day data processing machines. Together with proper communication facilities and input-output equipment a comprehensive system may be assembled to accommodate all information problems of an organization. We call this a Business Intelligence System. He also gives the explanation of the terms "business" and "intelligence": In this paper, business is a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera. The communication facility serving the conduct of a business (in the broad sense) may be referred to as an intelligence system.
The notion of intelligence is also defined here, in a more general sense, as "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal." So the idea of "linking the facts" is already present in H. P. Luhn's paper. According to many sources, Howard Dresner re-invented "Business Intelligence" to re-brand decision support systems (DSS) and executive information systems (EIS) when at DEC, and the term became famous through the influence of the Gartner Group. Apparently, the term had already been used way before, as in the book Wholesale Business Intelligence and Southern and Western Merchants' Pocket Directory to the Principal Mercantile Houses in the City of Philadelphia, for the Year 1839. As I could not fetch this source, I will stick to the Luhn/Dresner sense. It relates to the etymology of intelligence: late 14c., "faculty of understanding," from Old French intelligence (12c.), from Latin intelligentia, intellegentia "understanding, power of discerning; art, skill, taste," from intelligentem (nominative intelligens) "discerning," present participle of intelligere "to understand, comprehend," from inter- "between" (see inter-) + legere "choose, pick out". In Business Intelligence for Dummies (Scheps, 2008), the definition chapter plays on Military Intelligence: Business Intelligence Defined: No CIA Experience Required. So what the heck is business intelligence, anyway? In essence, BI is any activity, tool, or process used to obtain the best information to support the process of making decisions. For our purposes, BI revolves around putting computing power (highly specialized software in concert with other more common technology assets) to work, to help make the best choices for your organization. Business intelligence is essentially timely, accurate, high-value, and actionable business insights, and the work processes and technologies used to obtain them. I would thus lean toward "Gathering data and information about the business", maybe more precisely "to better conduct business". Additional historical comments can be found in Father of BI? Is he having a laugh?!
H: Simple implementation of Apriori algorithm in R I am preparing a lecture on data mining algorithms in R and I want to demonstrate the famous Apriori algorithm in it. My question: Could anybody point me to a simple implementation of this algorithm in R? (I am not looking for a package, e.g. arules, but for comprehensible source code of an implementation from scratch. It also does not help to look at the source code of arules: it just calls a C implementation of the algorithm!) AI: Have you checked the following reference out? Link: http://www.borgelt.net/docs/apriori.pdf The above link has the explanation along with the code.
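For readers who mainly want to see the algorithm's skeleton before porting it to R, here is a rough, hedged sketch in Python of the classic two-step loop (candidate generation with join/prune, then support counting). It is not the arules implementation, just an illustration on a made-up basket list that can be translated line by line into R:

from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Return all itemsets whose relative support >= min_support."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    items = {item for t in transactions for item in t}
    current = [frozenset([i]) for i in items]   # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # support counting
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # candidate generation: join step (k -> k+1) followed by the prune step
        k += 1
        candidates = set()
        for a, b in combinations(list(survivors), 2):
            union = a | b
            if len(union) == k and all(frozenset(s) in survivors for s in combinations(union, k - 1)):
                candidates.add(union)
        current = list(candidates)
    return frequent

# toy market-basket data
baskets = [["milk", "bread"], ["milk", "diapers", "beer"],
           ["bread", "diapers"], ["milk", "bread", "diapers"]]
print(apriori(baskets, min_support=0.5))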
H: Online/incremental unsupervised dimensionality reduction for use with classification for event prediction Consider the application: We have a set of users and items. Users can perform different action types (think browsing, clicking, upvoting etc.) on different items. Users and items accumulate a "profile" for each action type. For users such a profile is a list of items on which they have performed a given action type; for items such a profile is a list of users who performed a given action type on them. We assume that accumulated profiles define future actions, and we want to predict the action a user will take using supervised learning (classification with probability estimation). Consider the following problem: these profiles can be very sparse (millions of items and 100 million users) and it is not feasible to use them directly as features. We would like to compute "compressed" profiles (eigenprofiles?:)) with dimensionality < 300 that can then be efficiently stored and fed to different classification algorithms. Before you say "Use TruncatedSVD/Collapsed Gibbs Sampling/Random Projections on historical data", bear with me for a second. Enter concept drift. New items and users are being introduced all the time to the system. Old users churn. Old items churn. At some point there are items with most of their users never seen in the historical data, and users with only fresh items. Before you say "retrain periodically", remember that we have a classifier in the pipeline that was taught on the "historic" decomposition, and the new decomposition could assign an entirely different "meaning" to the cells of the output vectors (abs(decompose_v1(sample)[0] - decompose_v2(sample)[0]) >> epsilon), rendering this classifier unusable. Some requirements: the prediction service has to be available 24/7; the prediction cannot take more than 15ms and should use a maximum of 4 CPU cores (preferably only one). Some ideas I had so far: 1. We could retrain the classifier on the new decomposition, but this would mean that we have to re-run the decomposition on the whole training dataset (with a snapshot of profiles at the time of the event we want to predict) and the whole database (all current profiles), plus store it. To make this work we would have to have a second database for storing the decomposed profiles that would be hot-swapped once the new retrained model is ready and all profiles have been decomposed. This approach is quite inefficient in both computational and storage resources (this is expensive storage because the retrieval has to be super-fast). 2. We could retrain the classifier as in solution 1, but do the decomposition ad hoc. This puts a lot of constraints on the speed of the decomposition (it has to have sub-millisecond computation times for a single sample). It does a lot of redundant computation (especially for item profiles) unless we add an extra caching layer. It avoids redundant storage and redundant computation of churned users/items at the cost of extra prediction latency and extra caching-layer complexity. <---- Please help me here 3. We could use one of the online learning algorithms such as VFT or Mondrian Forests for the classifier - so no more retraining, plus nice handling of concept drift. We would need an online algorithm for decomposition that satisfies strict requirements: a) at least a part of the output vectors should be stable between increments (batches);
b) it can introduce new features to account for new variance in the data, but should do so at a controllable rate; c) it should not break if it encounters new users/items. Questions/points of action: Please evaluate my proposed solutions and propose alternatives. Please provide algorithms suitable for online learning and online decomposition (if they exist) as described in alternative 3, preferably with efficient Python/Scala/Java implementations with a sufficient layer of abstraction to use them in a web service (Python scripts that take in a text file as a dataset would be much less valuable than scikit modules). Please provide links to relevant literature that dealt with similar problems/describes algorithms that could be suitable. Please share experiences/caveats/tips that you learned while dealing with similar problems. Some background reading that you may find useful: quora question on ad click prediction, google "view from the trenches", criteo paper on click prediction, facebook on predicting ad clicks. Disclaimer: Our application is not strictly ad conversion prediction and some problems such as rarity do not apply. The event we would like to predict has 8 classes and occurs circa 0.3%-3% of the times a user browses an item. AI: My take: I agree with the issues raised in 1., so not much to add here - retraining and storage is indeed inefficient. Vowpal Wabbit http://hunch.net/~vw/ would be my first choice. Stability of output between increments is really more of a data than an algorithm feature - if you have plenty of variation on input, you won't have that much stability on output (at least not by default). Hashing can take care of the variation - you can control it by a combination of three parameters: the size of the hashing table and the l1/l2 regularization. Same for the new features / users (I think - most of the applications I used it for had a record representing a user clicking or not, so new users / ads were sort of treated "the same"). Normally I use VW from the command line, but an example approach (not too elegant) for controlling it from Python is given here: http://fastml.com/how-to-run-external-programs-from-python-and-capture-their-output/ If you prefer something purely Python, then a version (without decomposition) of an online learner in the Criteo spirit can be found here: https://www.kaggle.com/c/tradeshift-text-classification/forums/t/10537/beat-the-benchmark-with-less-than-400mb-of-memory I am not sure how to handle the concept drift - I haven't paid that much attention to it so far beyond rolling statistics: for the relevant variables of interest, keep track of mean / count over the most recent N periods. It is a crude approach, but it does seem to get the job done in terms of capturing lack of "stationarity". Helpful trick 1: a single pass over the data before the first run to create a per-feature dictionary and flag certain values as rare (lump them into a single value). Helpful trick 2: ensembling predictions from more than one model (varying interaction order, learning rate).
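To make alternative 3 a bit more concrete, below is a rough sketch (not a drop-in solution) using scikit-learn primitives, assuming they fit the latency budget: feature hashing keeps the input dimensionality fixed as new users/items appear, IncrementalPCA gives an online decomposition via partial_fit, and SGDClassifier learns online on the compressed profiles. The minibatch generator, the token format ('click:item_123') and all sizes are made up for illustration.

import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

N_HASH, N_COMPONENTS, N_CLASSES = 1024, 64, 8      # illustrative sizes only

hasher = FeatureHasher(n_features=N_HASH, input_type="string")
ipca = IncrementalPCA(n_components=N_COMPONENTS)
clf = SGDClassifier(loss="log_loss")                # called "log" in older scikit-learn

def minibatches():
    """Hypothetical generator yielding (profiles, labels); a profile is a list
    of tokens such as 'click:item_12345'."""
    rng = np.random.default_rng(0)
    for _ in range(10):
        profiles = [[f"click:item_{rng.integers(100000)}" for _ in range(20)]
                    for _ in range(256)]
        labels = rng.integers(N_CLASSES, size=256)
        yield profiles, labels

for profiles, labels in minibatches():
    X = hasher.transform(profiles).toarray()        # dense; fine for a small hash space
    ipca.partial_fit(X)                             # update the decomposition online
    Z = ipca.transform(X)                           # compressed (<300-dim) profiles
    clf.partial_fit(Z, labels, classes=np.arange(N_CLASSES))

Note that the early IncrementalPCA components will still drift somewhat as data arrives; whether that drift is acceptable, or whether the decomposition should be frozen after a warm-up period, is something to validate on your own traffic.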
H: Passing TFIDF Feature Vector to a SGDClassifier from sklearn

import numpy as np
from sklearn import linear_model
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
Y = np.array(['C++', 'C#', 'java','python'])
clf = linear_model.SGDClassifier()
clf.fit(X, Y)
print (clf.predict([[1.7, 0.7]])) #python

I am trying to predict the values from arrays Y by giving a test case and training it on a training data which is X, Now my problem is that, I want to change the training set X to TF-IDF Feature Vectors, so how can that be possible? Vaguely, I want to do something like this:

import numpy as np
from sklearn import linear_model
X = np.array_str([['abcd', 'efgh'], ['qwert', 'yuiop'], ['xyz','abc'], ['opi', 'iop']])
Y = np.array(['C++', 'C#', 'java','python'])
clf = linear_model.SGDClassifier()
clf.fit(X, Y)

AI: It's useful to do this with a Pipeline:

import numpy as np
from sklearn import linear_model, pipeline, feature_extraction

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
Y = np.array(['C++', 'C#', 'java','python'])

clf = pipeline.make_pipeline(
    feature_extraction.text.TfidfTransformer(use_idf=True),
    linear_model.SGDClassifier())
clf.fit(X, Y)
print(clf.predict([[1.7, 0.7]]))
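If X really is raw text (as in the second snippet of the question) rather than a numeric matrix, a common variant of the same pipeline is TfidfVectorizer, which builds the vocabulary and the tf-idf weighting in one step; this sketch assumes each document is a single string:

import numpy as np
from sklearn import linear_model, pipeline, feature_extraction

X = ["abcd efgh", "qwert yuiop", "xyz abc", "opi iop"]
Y = np.array(['C++', 'C#', 'java', 'python'])

clf = pipeline.make_pipeline(
    feature_extraction.text.TfidfVectorizer(),
    linear_model.SGDClassifier())
clf.fit(X, Y)
print(clf.predict(["abcd xyz"]))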
H: Optimizing parameters for a closed (black-box) system I am working on a problem that involves finding optimal parameter values for a black-box system. This system consists of 6 inputs and produces a single output. I can compare the returned values to observed values to determine whether the system is well calibrated based on the parameter values I specified. The main problem that I am having is that I do not know how these parameters are used in the black-box system to produce the output. The parameters may be correlated, so changing the value of one parameter may affect the way the others behave in the system. Since I do not know the function, I am not sure how to optimize this problem efficiently. Question: what are some methods for optimization when the function is unknown? Edits: The variable types are each a vector of real numbers, but we can make some assumptions if it is helpful. The cost to run the black-box system is only time. Let's say that it takes 10 seconds to run the system, but I happen to have five black boxes -- so if I can run my algorithm in parallel then I can cut down the run time. If the algorithm runs sequentially then I don't gain anything by having the extra boxes (except perhaps choosing different starting positions). For this problem I am interested in learning about how to solve this type of problem -- not trying to brute force my way to a solution. That is to say, I am less interested in obtaining an answer and more in finding out how I would obtain the answer. I think it is easy to say that, given sufficient resources (time, processing power), simply generating many, many random combinations and choosing the one that gives the best result is the simplest (and probably worst) approach. AI: There are lots of function optimising routines that could be applied, based on the description so far. Random search, grid search, hill-climbing, gradient descent, genetic algorithms, simulated annealing, particle swarm optimisation are all possible contenders that I have heard of, and I am probably missing a few. The trouble is, starting with next to zero knowledge of the black box, it is almost impossible to guess a good candidate from these search options. All of them have strengths and weaknesses. To start with, you seem to have no indication of scale - should you be trying input parameters in any particular ranges? So you might want to try very crude searches through a range of magnitudes (positive and negative values) to find the area worth searching. Such a grid search is expensive - if you have $k$ dimensions and want to search $n$ different magnitudes, then you need to call your black box $n^k$ times. This can be done in parallel though, and given you are confident that the function is roughly unimodal, you can start with a relatively low value of $n$ (maybe check -10, -1, 0, +1, +10 for 15625 calls to your function, taking roughly 8 hours 40 mins using 5 boxes). You may need to repeat with other params once you know whether you have found a bounding box for the mode or need to try yet more values, so this process could take a while longer - potentially days if the optimal value for param 6 is more like 20,000. You could also refine more closely; once you have a potential mode you might want to define another grid of values to search, based around it. This basic grid search might be my first point of attack on a black-box system where I had no clue about parameter meaning, but some confidence that the black-box output had a rough unimodal form.
Given the speed of response, you should be storing all input and output values in a database for faster lookup and better model building later. There is no point repeating a call taking 10 seconds when a cache could look it up in 1 millisecond. Once you have some range of values you think that a mode might be in, then it is time to pick a suitable optimiser. Given the information so far, I would be tempted to run either more grid search (with a separate linear scaling between values of each param) and/or a random search, constrained roughly to the boxes defined by the set of $2^6$ corner points around the best result found in the initial order-of-magnitude search. At that point you could also consider graphing the data, to see if there is any intuition about which other algorithms could perform well. With the possibility of parallel calls, gradient descent might be a reasonable guess, because you can get approximate gradients by adding a small offset to each param and dividing the difference it causes in the output by that offset. In addition, gradient descent (or simple hill climbing) has some chance of optimising with fewer calls to evaluate the function than approaches that rely on many iterations (simulated annealing) or lots of work in parallel (particle swarm or genetic algorithms). Gradient descent optimisers as used in neural networks, with additions like Nesterov momentum or RMSProp, can cope with changes in function output "feature scale" such as different sizes and heights of peaks, ridges, saddle points. However, gradient descent and hill climbing algorithms are not robust against all function shapes. A graph or several of what your explorations are seeing may help you to decide on a different approach. So keep all the data and graph it in case you can get clues. Finally, don't rule out random brute-force search, and being able to just accept "best so far" under time constraints. With low knowledge of the internals of the black box, it is a reasonable strategy.
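As a rough illustration of the coarse search described above (not a recommendation of specific ranges), here is a sketch that assumes the black box is exposed as a Python function black_box(params) -> float (a hypothetical name) to be maximised, and caches every call so that no 10-second evaluation is ever repeated:

import itertools
import random

cache = {}

def evaluate(params):
    key = tuple(params)
    if key not in cache:
        cache[key] = black_box(params)      # the expensive 10-second call
    return cache[key]

# stage 1: order-of-magnitude grid over all 6 parameters (5**6 = 15625 calls)
grid_values = [-10, -1, 0, 1, 10]
best = max(itertools.product(grid_values, repeat=6), key=evaluate)

# stage 2: random search in a box around the best grid point
rng = random.Random(0)
for _ in range(500):
    candidate = [b + rng.uniform(-5, 5) for b in best]
    if evaluate(candidate) > evaluate(best):
        best = candidate

print(best, evaluate(best))

The two stages mirror the advice above: a crude scan to find the region worth searching, then a cheaper local refinement; both stages parallelise naturally across the five available boxes.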
H: What are the applications of Solr in Big Data? Why is Solr used in Big Data? What is its purpose? AI: Solr is a highly scalable, fault-tolerant search server. You can store files in the form of JSON, XML, CSV or binary over HTTP, and you can query it via GET and receive JSON, XML, CSV or binary results. It also has a distributed architecture for this purpose, which is the primary and most important concept in Big Data and its handling. So, as it is highly scalable and fault tolerant under high traffic and huge document collections, it is a choice for a reliable search engine over huge documents, or in short, Big Data.
H: Manage x-axis using ggplot() Source: https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip Here is my data prep.

NEI <- readRDS("summarySCC_PM25.rds")
Baltimore <- NEI[NEI$fips=="24510", ]
Baltimore$type <- as.factor(Baltimore$type)
Total_Emmisssions <- aggregate(Baltimore$Emissions, by=list(Baltimore$year, Baltimore$type), FUN=sum)
names(Total_Emmisssions) <- c("Year","Type","Emissions")

Plot code

library(ggplot2)
g <- ggplot(Total_Emmisssions,aes(x=Year, y=Emissions, colour=Type))
g1 <- g+geom_point()
g2 <- g1+facet_grid(. ~ Type)
g3 <- g2+geom_smooth(method="lm", se=FALSE)

However, in this plot, the year axis is always 2002-2008. How do I change the scale / ticks to show 1999-2008 at an interval of 3 years? Using scale_x_discrete does not work and it messes up the graph. Please help.

str(Total_Emmisssions)
'data.frame': 16 obs. of 3 variables:
 $ Year : int 1999 2002 2005 2008 1999 2002 2005 2008 1999 2002 ...
 $ Type : Factor w/ 4 levels "NON-ROAD","NONPOINT",..: 1 1 1 1 2 2 2 2 3 3 ...
 $ Emissions: num 522.9 240.8 248.9 55.8 2107.6 ...

AI: Change Year to a factor and add group=1:

g <- ggplot(Total_Emmisssions,aes(x=factor(Year), y=Emissions, colour=Type, group=1))

You can leave the rest the same (you'll also probably want to change the xlab).
H: Solve a pair of coupled nonlinear equations within certain limits The answer to this question works only for situations in which the desired solution to the coupled functions is not restricted to a certain range. But what if, for example, we wanted a solution such that 0 < x < 10 and 0 < y < 10? There are functions within scipy.optimize that find roots of a function within a given interval (e.g., brentq), but these work only for functions of one variable. Why does scipy fall short of providing a root solver that works for multi-variable functions within specific ranges? How might such a solver be implemented? AI: As a workaround, you could minimize another function that includes both the objective and the constraints, then check if sol.fun is (numerically) equal to zero.

from scipy.optimize import minimize
import numpy as np

f = lambda x: np.sin(x).sum()   # the function to find roots of
L = np.array([-1, -1])          # lower bound for each coordinate
U = np.array([1, 1])            # upper bound for each coordinate

g = lambda x: f(x) ** 2 + max(0, (L - x).max(), (x - U).max())
sol = minimize(g, [0.5, 0.5])

Also, scipy.optimize has some optimisers that support rectangular bounds directly, e.g. differential_evolution (since version 0.15.0).
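For completeness, here is a small sketch of the bounded global optimiser mentioned above (available in scipy since 0.15.0); it minimises the squared residual, so a result with sol.fun close to zero corresponds to a root of f inside the requested box 0 < x < 10, 0 < y < 10:

import numpy as np
from scipy.optimize import differential_evolution

f = lambda x: np.sin(x).sum()       # same toy function as above
g = lambda x: f(x) ** 2             # squared residual to minimise

bounds = [(0, 10), (0, 10)]         # rectangular bounds, one pair per coordinate
sol = differential_evolution(g, bounds, seed=0)
print(sol.x, sol.fun)               # check that sol.fun is (numerically) zero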
H: Classifying transactions as malicious I have a big data set of fake transactions for a company. Each row contains the username, credit card number, time, device used, and amount of money in the transaction. I need to classify each transaction as either malicious or not malicious and I am lost for ideas on where to start. Doing it by hand would be silly. I was thinking possibly checking how often a credit card is used, whether it is consistently used at a certain time, or whether it is used from lots of different devices (iOS AND Android, as an example) would be possible starting places. I'm still fairly new to all this and to ML. Would there be some ML algorithm optimal for this problem? Also, side question: what would be a good place to host the 600 or so GB of data for cheap? Thanks AI: This problem is popularly called "Credit Card Fraud Detection". There are several classification algorithms which aim to tackle this problem. With the knowledge of the dataset you possess, the Decision Trees algorithm can be employed for separating malicious transactions from non-malicious ones. This paper is a nice resource to learn and develop intuition about fraud detection and the usage of basic classification algorithms like Decision Trees and SVMs for solving the problem. There are several other papers which solve this problem employing algorithms like Neural Networks, Logistic Regression, Genetic Algorithms, etc. However, the paper which uses the decision trees algorithm is a nice place to start learning. What would be a good place to host the 600 or so GB of data for cheap? AWS S3 would be a nice, cheap way to do that. It also integrates nicely with Redshift, in case you want to do complex analytics on the data.
H: What commercial software should a data scientist purchase? I am setting up a work computer and have free rein here. My typical go-to software packages are all freely available, such as RStudio and Anaconda. I have thought about investing in commercial BI software such as Tableau or Spotfire, but nobody else in the group (or myself) is a proficient user. Are there other obvious choices I am missing? At the outset, much of what this group is working on will be exploratory. They need some justification to invest further into "data science". AI: If you are a data scientist, then there is very little use for pre-packaged, generally inflexible commercial tools. Part of the reason OSS is so prevalent and useful in data science is that you will often need to combine and/or modify a procedure to fit the needs at hand -- and then deploy it without a bunch of lawyers and sales reps getting involved at every step. Since data scientists are expected to be proficient programmers, you should be comfortable digging into the source code and adding functionality or making it more user friendly. I've come close to recommending the purchase of non-free (as in, not GPL) software a couple of times, only to find that some industrious person has set up a project in Git that provides most if not all of the functionality of the commercial software. In the cases where it doesn't, it at least addresses the core issue and I can modify and extend from it. It's much easier to modify a prototype than start from scratch. Bottom line: be wary of commercial software for data science unless you've done your due diligence in the OSS space and can honestly say that you could not find any projects that could be modified to suit your needs. Commercial software is not only less flexible, but you're effectively in a business partnership with these folks, and that means your fates are somewhat intertwined (at least for the projects that depend on this software).
H: Classifying text documents using linear/incremental topics I'm attempting to classify text documents using a few different dimensions. I'm trying to create arbitrary topics to classify such as size and relevance, which are linear or gradual in nature. For example: size: tiny, small, medium, large, huge. relevance: bad, ok, good, excellent, awesome I am training the classifier by hand. For example, this document represents a 'small' thing, this other document is discussing a 'large' thing. When I try multi-label or multi-class SVM for this it does not work well and it also logically doesn't make sense. Which model should I use that would help me predict this linear type of data? I use scikit-learn presently with a tfidf vector of the words. AI: If you want these output dimensions to be continuous, simply convert your size and relevance metrics to real-valued targets. Then you can perform regression instead of classification, using any of a variety of models. You could even attempt to train a multi target neural net to predict all of these outputs at once. Additionally, you might consider first using a topic model such as LDA as your feature space. Based on the values, it sounds like the "relevance" might be a variable best captured by techniques from sentiment analysis.
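As a hedged sketch of that suggestion (the documents and labels below are invented), one could map the ordinal labels to numbers and fit an ordinary regressor on tf-idf features; the resulting continuous prediction can be rounded back to the nearest level if a discrete label is needed:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

size_scale = {"tiny": 1, "small": 2, "medium": 3, "large": 4, "huge": 5}

docs = ["a compact pocket sized gadget", "an enormous industrial machine",
        "a mid sized family car", "a tiny grain of sand"]
labels = ["small", "huge", "medium", "tiny"]
y = [size_scale[l] for l in labels]

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(docs, y)
print(model.predict(["a gigantic cargo ship"]))   # a roughly continuous "size" score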
H: SKNN regression problem I am trying to learn scikit-learn neuralnetwork and am coming up against the same problem in regression where no matter the dataset I getting a horizontal straight line for my fit. here is an example using the Linear regression example from scikit-learn and then using the SKNN regressor , simple example code from the docs. # -*- coding: utf-8 -*- # Code source: Jaques Grobler # http://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html # License: BSD 3 clause import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model # Load the diabetes dataset diabetes = datasets.load_diabetes() # Use only one feature diabetes_X = diabetes.data[:, np.newaxis] diabetes_X_temp = diabetes_X[:, :, 2] # Split the data into training/testing sets diabetes_X_train = diabetes_X_temp[:-20] diabetes_X_test = diabetes_X_temp[-20:] # Split the targets into training/testing sets diabetes_y_train = diabetes.target[:-20] diabetes_y_test = diabetes.target[-20:] # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) print "Results of Linear Regression...." print "================================\n" # The coefficients print('Coefficients: ', regr.coef_) # The mean square error print("Residual sum of squares: %.2f" % np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() # Now using the sknn regressor # http://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html # from sknn.mlp import Regressor, Layer nn = Regressor( layers=[ Layer("Rectifier", units=200), Layer("Linear")], learning_rate=0.02, n_iter=10) nn.fit(diabetes_X_train, diabetes_y_train) print "Results of SKNN Regression...." print "==============================\n" # The coefficients print('Coefficients: ', regr.coef_) # The mean square error print("Residual sum of squares: %.2f" % np.mean((nn.predict(diabetes_X_test) - diabetes_y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % nn.score(diabetes_X_test, diabetes_y_test)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, nn.predict(diabetes_X_test), color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() Results of Linear Regression: ('Coefficients: ', array([ 938.23786125])) Residual sum of squares: 2548.07 Variance score: 0.47  Results of SKNN Regression: ('Coefficients: ', array([ 938.23786125])) Residual sum of squares: 5737.52 Variance score: -0.19 Changing the number of iterations to 1000 results in a score of -0.15 AI: My best guess here is that your learning rate is way too high for the problem. You also probably have far more neurons in your hidden network than you need, seeing as you're using just one feature. Recall that learning rate is controlling the "step size" in gradient descent and that for your dataset, it is likely far too high. I made some minor changes to your code and got better results than linear regression. Notice the use of 2 hidden neurons, a 0.001 learning rate, and 20 iterations. 
# Now using the sknn regressor # http://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html from sknn.mlp import Regressor, Layer nn = Regressor( layers=[ Layer("Rectifier", units=2), Layer("Linear")], learning_rate=0.001, n_iter=20) nn.fit(diabetes_X_train, diabetes_y_train) print("Results of SKNN Regression....") # The coefficients print('Coefficients: ', regr.coef_) # The mean square error print("Residual sum of squares: %.2f" % np.mean((nn.predict(diabetes_X_test) - diabetes_y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % nn.score(diabetes_X_test, diabetes_y_test)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, nn.predict(diabetes_X_test), color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() SKNN regression: Results of SKNN Regression.... Coefficients: [ 938.23786125] Residual sum of squares: 6123.67 Variance score: 0.50
H: Clustering large number of strings based on tags I have string representations of text written by users in the form of parts-of-speech tags like so: $NNDN,OVDANPN,PNVRV,^^V,^^!$^OV and ^,G,#,!,N,R,$ etc. They are separated into two classes (0 or 1). I want to be able to cluster these such that I will be able to predict (or try to) what class the user is in from their tags using the Damerau-Levenshtein distance. The problem is that even a few hundred strings is a huge calculation for any basic clustering that I am aware of (but I am very new to this). I've tried using the counts of each tag to form a vector, but applying SVM, a kNN classifier and Naive Bayes yielded poor results, even when using a KS test to get the best features. My gut feeling is that this seems like a problem that could be solved in the same way that scientists would compare and cluster genes. Should I be looking at different machine learning methods? Is there another way of representing the strings that would be more appropriate? Is there another way of looking at the problem? I'm using the scikit-learn library for Python. AI: Using the Levenshtein distance does not make a lot of sense in this context, as it is made for comparing distances between words. A commonly used representation for texts is the bag-of-words representation, where a text is converted to a vector where every element in the vector represents the count of the corresponding word. In your case you could represent a text as a bag-of-tags. The vector representation makes calculating distances a lot easier. However, I believe this is not necessary as you can classify the bags of words with Naive Bayes. Once you have tried bag-of-words you can try more complicated representations like LDA, word2vec, and the like.
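A minimal sketch of the bag-of-tags idea (the example strings and labels below are made up): treat each character as one POS tag, count them, and feed the counts to Naive Bayes.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tag_strings = ["$NNDN,OVDANPN,PNVRV,^^V,^^!$^OV", "^,G,#,!,N,R,$",
               "NNVDN,OVP,PN", "^,!,#,G,R"]
labels = [0, 1, 0, 1]

clean = [s.replace(",", "") for s in tag_strings]     # drop the separators
model = make_pipeline(
    CountVectorizer(analyzer="char", lowercase=False),  # one feature per tag character
    MultinomialNB())
model.fit(clean, labels)
print(model.predict(["^G#!NR$"]))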
H: Is supervised machine learning by definition predictive? I am trying to organize a cheat sheet of sorts for data science, and I am working with the basic distinction between description, inference, and prediction. As examples of the first I see unsupervised methods described, and for the last I see supervised methods. So my question is simply, do these two sets of categories align? Is unsupervised to supervised as description is to prediction? AI: A description is any statistic drawn from your sample data, say the sample mean, quantiles, etc. Inference is a conclusion drawn from your sample data about the population, e.g., rejecting or accepting some hypothesis or stating that a model is suitable or not for describing your data. Prediction is simply a guess about future observations, which hopefully uses your data and some function/model of the data in a way to formulate that guess. Both unsupervised and supervised learning methods aim to learn a function of the data that predicts another variable (typically called y), so both are drawing an inference (i.e., a model is well suited to describe your data, see the first sentence here). However, these two methods differ in what data is available. In supervised learning you are able to use an observed sample of y for training your model, and in unsupervised learning y is unobserved. Hope that helps!
H: Error Analysis for misclassification of text documents I am working on a text classification task. The purpose of this work is to classify whether a particular document belongs to Class A or Class B. I used the KNN algorithm and I am able to get some decent results. However I want to know two things. Why has a particular document been classified as Class A or Class B? What keyword or information made a document be classified as such? How do I perform misclassification analysis? Kindly help. AI: It seems to me that both of your questions could be answered by storing the retrieved neighbours on your test set and giving them a thorough analysis. Assuming you are using a unigram + tf-idf text representation and a cosine similarity distance metric for your K-NN retrieval, it would be trivial, once you have a classified document, to display the K neighbours and analyze their common unigrams and their respective tf-idf weights in order to see what influenced the classification. Moreover, doing it on your misclassified documents could help you understand which features caused the error. I'd be interested to know if there is a more systematized approach to those issues.
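A rough sketch of that inspection (the corpus, labels and test document below are placeholders): retrieve the K nearest training documents under cosine similarity and list the unigrams that contribute the most tf-idf overlap, which is usually enough to see what pulled the document toward one class.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

train_docs = ["contract law and litigation", "deep learning for images",
              "court ruling on patents", "neural network training tricks"]
train_labels = ["A", "B", "A", "B"]
test_doc = "patent litigation ruling"

vec = TfidfVectorizer()
X = vec.fit_transform(train_docs)
q = vec.transform([test_doc])

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
dist, idx = nn.kneighbors(q)
terms = np.array(vec.get_feature_names_out())     # requires scikit-learn >= 1.0
for d, i in zip(dist[0], idx[0]):
    overlap = q.multiply(X[i])                    # shared, tf-idf weighted terms
    top = overlap.toarray().ravel().argsort()[::-1][:3]
    print(train_labels[i], 1 - d, terms[top])     # neighbour label, cosine similarity, top shared unigrams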
H: Identifying templates with parameters in text fragments I have a data set with text fragments having a fixed structure that can contain parameters. Examples are:

Temperature today is 20 centigrades
Temperature today is 28 centigrades

or

Her eyes are blue and hair black.
Her eyes are green and hair brown.

The first example shows a template with one numerical parameter. The second one is a template with two factor parameters. The number of templates and the number of parameters are not known. The problem is to identify the templates and assign each text fragment to the corresponding template. The obvious first idea is to use clustering. The distance measure is defined as the number of non-matching words, i.e. the records in example one have distance 1, in example two the distance is 2, and the distance between a record in example one and a record in example two is 7. This approach works fine provided the number of clusters is known, which is not the case, so it is not useful. I can imagine a programmatic approach scanning the distance matrix searching for records with a lot of neighbors at distance 1 (or 2, 3, ...), but I'm curious if I can apply some unsupervised machine learning algorithm to solve the problem. R is preferred, but not required. AI: The basic rationale behind the following suggestion is to associate "eigenvectors" and "templates". In particular one could use LSA on the whole corpus based on a bag-of-words. The resulting eigenvectors would serve as surrogate templates; these eigenvectors should not be directly affected by the number of words in each template. Subsequently the scores could be used to cluster the documents together following a standard procedure (eg. $k$-means in conjunction with AIC). As an alternative to LSA one could use NNMF. Let me point out that the LSA (or NNMF) would probably need to be done on the transformed TF-IDF rather than the raw word-counts matrix.
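A short sketch of that pipeline on the question's own fragments, assuming scikit-learn is acceptable: bag-of-words, a couple of LSA components via TruncatedSVD, then k-means on the scores. Choosing the number of clusters would still need a criterion such as AIC/BIC or the silhouette score.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

fragments = ["Temperature today is 20 centigrades",
             "Temperature today is 28 centigrades",
             "Her eyes are blue and hair black.",
             "Her eyes are green and hair brown."]

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
scores = lsa.fit_transform(fragments)             # LSA "eigenprofile" scores
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(labels)   # fragments sharing a template should share a cluster label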
H: How to create clusters of position data? I am asking this question because the previous one wasn't very helpful and I asked about a different solution for the same problem. The Problem I have lateral positions, xcoord, of vehicles over time which were recorded as the distances from the right edge of the road. This can be seen for one vehicle in the following plot: Each point on the plot represents the position of the front center of the vehicle. When the vehicle changes the lane (lane numbers not shown) there is a drastic change in the position as seen after the 'Start of Lane Change' on the plot. The data behind this plot are like below:

  Vehicle.ID Frame.ID   xcoord Lane
1          2       13 16.46700    2
2          2       14 16.44669    2
3          2       15 16.42600    2
4          2       16 16.40540    2
5          2       17 16.38486    2
6          2       18 16.36433    2

I want to identify the start and end data points of a lane change by clustering the data as shown in the plot. The data points in the plot circled in red are more similar to each other because the variation between them is smaller compared to the data points in the middle, which see large variation in position (xcoord). My questions are: Is it possible to apply any clustering technique to segment these data so that I could identify the start and end point of a lane change? If yes, which technique would be most suitable? I use R. I have tried hierarchical clustering before but don't know how to apply it in this context. Please help. AI: I doubt any of the clustering algorithms will work well. Instead, you should look into:
- segmentation (yes, this is something different), specifically time series segmentation
- change detection (as you said, there is a rather constant distribution first, then a change, then a rather constant distribution again)
- segment-wise regression may also work: try to find the best fit that is constant, linearly changing, and constant again. It's essentially four parameters to optimize in this restricted model: average before and after + beginning and end of transition.
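To illustrate the change-detection option, here is a language-agnostic sketch (written in Python/NumPy but easily ported to R) on a synthetic trajectory; the window size and threshold are arbitrary choices, not tuned values.

import numpy as np

def lane_change_bounds(xcoord, window=10, k=8.0):
    """Return (start, end) indices where the smoothed local slope of xcoord
    departs from its typical (near-zero) level by more than k * MAD."""
    slope = np.convolve(np.gradient(xcoord), np.ones(window) / window, mode="same")
    centre = np.median(slope)
    mad = np.median(np.abs(slope - centre)) + 1e-9
    idx = np.where(np.abs(slope - centre) > k * mad)[0]
    return (idx[0], idx[-1]) if idx.size else (None, None)

# synthetic trajectory: constant lane, a drift of ~3.5 m over 40 frames, then constant again
x = np.concatenate([np.full(100, 16.4), np.linspace(16.4, 12.9, 40), np.full(100, 12.9)])
x += np.random.default_rng(0).normal(0, 0.02, x.size)
print(lane_change_bounds(x))   # start/end indices near the 100-140 transition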
H: How to start analysing and modelling data for an academic project, when not a statistician or data scientist I have collected data for a PhD thesis, and need help understanding how to build a road map for the analytical and statistical analysis. The PhD is not itself in statistics or machine learning, but I would like to understand what steps and types of analysis I have to follow for analysing data for an advanced degree. In general, how should I approach such a problem? In the data I have collected, there are 623 observations including one continuous dependent variable and 13 independent variables (continuous, categorical, and ordinal) that are defined based on the researcher's experience and a literature review. I am considering doing several regression analyses to predict the dependent variable and to study the effective factors on it (whether they are positive or negative, and their magnitude). I've tried multiple linear regression including different transformations of the independent variables. On the other hand, I'm not sure if I should study each independent variable through time and forecast their values over the time horizon. Here are the steps in my mind so far:
- Plotting scatter plots of the different independent variables vs the dependent variable to identify outliers and check whether the model is linear, also with respect to the coefficients
- Removing the potential outliers
- Splitting the data into two data sets to build the model and validate it afterwards
If the model is linear, then:
- Performing the multiple linear regression
- Performing the multiple linear regression including different transformations to enhance the model
- Validating the model
- Doing quantile regression
- Doing supervised machine learning, etc.
If the model is not linear, I may instead need to use non-linear statistical techniques. Any feedback would be highly appreciated. My goal is to build a clear and robust road map for this part of the work. AI: Typically, quantitative analysis is planned and performed based on the research study's goals. Focusing on the research goals and corresponding research questions, the researcher would propose a model (or several models) and a set of hypotheses associated with the model(s). The model(s) and the types of their elements usually dictate (suggest) quantitative approaches that would make sense in a particular situation. For example, if your model includes latent variables, you would have to use appropriate methods to perform data analysis (i.e., structural equation modeling). Otherwise, you can apply a variety of other methods, such as time series analysis or, as you mentioned, multiple regression and machine learning. For more details on research workflow with latent variables, also see section #3 in my relevant answer. One last note: whatever methods you use, pay enough attention to the following two very important aspects - performing full-scale exploratory data analysis (EDA) (see my relevant answer) and trying to design and perform your analysis in the reproducible research fashion (see my relevant answer).
H: Reference of SVM Using Spark Can somebody please give me some reference on implementing SVM using PySpark? AI: The documentation of Apache Spark's MLlib library has a neat and clear reference to the implementation of linear SVMs in Python. It supports two models: SVMWithSGD and SVMModel.
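A minimal sketch based on the RDD-based MLlib API referenced above (the file path and feature layout are placeholders):

from pyspark import SparkContext
from pyspark.mllib.classification import SVMWithSGD
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="svm-example")

def parse(line):
    values = [float(x) for x in line.split(" ")]
    return LabeledPoint(values[0], values[1:])   # first column = label, rest = features

data = sc.textFile("data/sample_svm_data.txt").map(parse)   # placeholder path
model = SVMWithSGD.train(data, iterations=100)

# training error
preds = data.map(lambda p: (p.label, float(model.predict(p.features))))
err = preds.filter(lambda lp: lp[0] != lp[1]).count() / float(data.count())
print("Training Error = " + str(err))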
H: Creating validation data for model comparison I am working on building a scoring algorithm for student data; say the attributes are: name, location, age, class, school_name, skill1, skill2, skill3. Based on these data I need to create a student score. I need to assign weights for age, class, school_name and skills, and come up with a score for each student. Say I have 2 scoring models like:

score_1 = x1*location_weight + x2*age_weight + x2*class_weight + x3*school_name_weight + x4*skill1_weight + x5*skill2_weight + x6*skill3_weight
score_2 = y1*location_weight + y2*age_weight + y2*class_weight + y3*school_name_weight + y4*skill1_weight + y5*skill2_weight + y6*skill3_weight

Now how can I compare these models and evaluate them? The problem is I don't have a test or validation set to prove or compare how accurate each of these models is, so in this case what is the best approach to compare and validate different models? Also, what are the best ways to build a validation set from scratch? AI: Predicting and scoring are two different tasks, and according to your answers and comments you are not solving a prediction problem. You just want to assign each student a number in the range [1,100] according to some rule. This is ranking (or scoring, whatever). Therefore, the terms #prediction_model, #accuracy, #validation, #training_set are out of this scope. You don't need to validate anything. You are not making predictions. What you want is to map ranks to students. But a problem is that you have mostly categorical data (school name, location etc) that cannot be 'ranked'. Some of them are not useful at all: how does the student's name relate to their school progress? :) If you change the data somehow to numerical form (e.g. 'Skill_1_level', 'Skill_2_level', 'remoteness_of_location', 'school_rank' etc) then you can do some ranking:
- Normalize the data (each of your factors).
- Multiply by 100, as you want a [0,100] range instead of [0,1].
- Set up weights based on your experience of each factor's importance, so that the sum of the weights is 1.
And finally build a rank (score):
Rank = 0.1 * skill_1_level + 0.2 * skill_2_level + 0.05 * remoteness_of_location + 0.5 * school_rank + ...
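A small sketch of that recipe with pandas (the column names, values and weights are purely illustrative): min-max normalise each numeric factor, weight, and scale to [0, 100].

import pandas as pd

students = pd.DataFrame({
    "skill_1_level": [3, 7, 5],
    "skill_2_level": [2, 9, 4],
    "school_rank":   [40, 85, 60],
})
weights = {"skill_1_level": 0.3, "skill_2_level": 0.2, "school_rank": 0.5}  # sums to 1

norm = (students - students.min()) / (students.max() - students.min())     # min-max scaling per column
students["score"] = 100 * sum(norm[c] * w for c, w in weights.items())
print(students)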
H: Multiple confusion matrix for multiple training instances. Which one to take? I am using the Matlab Neural Network toolbox for a classification problem. Now, considering a single set of data, if the inbuilt neural network is trained and evaluated on the same data multiple times, a different accuracy and a different confusion matrix are obtained each time. Now which result should I take? Should I take all the values obtained over all the training instances and average them to fix on one particular result? AI: I can't check at the moment (no Matlab at hand), but I suppose the differences come from the different random seeds used to initialize the neural networks (at least this is the only part which I can think of that has a random component). I would suggest predicting class probabilities, averaging those, and then viewing the resulting confusion matrix of the "averaged" prediction. This way you are - to a degree - mitigating the effect of randomness resulting from different initializations of the weights.
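A tiny illustration of that suggestion with made-up arrays: average the class probabilities from several training runs, then build a single confusion matrix from the averaged prediction.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 1, 1, 0, 1])

# predicted P(class = 1) from three independently trained networks
probs = np.array([[0.2, 0.9, 0.40, 0.10, 0.8],
                  [0.4, 0.7, 0.60, 0.20, 0.9],
                  [0.3, 0.8, 0.45, 0.15, 0.7]])

avg = probs.mean(axis=0)                 # averaged class probability per sample
y_pred = (avg >= 0.5).astype(int)        # single "averaged" prediction
print(confusion_matrix(y_true, y_pred))  # one confusion matrix instead of three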
H: What does nn.index mean in KNN output I am getting attr(, "nn.index") as part of my KNN output in R. What is meant by that and how is this value calculated?

knn.pred <- knn(tdm.stack.nl_train, tdm.stack.nl_Test, tdm.cand_train)
print(knn.pred)

> knn.pred
 [1] Silent Silent Silent Silent Silent Silent Silent
 [8] Silent Silent Silent
attr(,"nn.index")
      [,1]
 [1,]  292
 [2,]  292
 [3,]  343
 [4,]  444
 [5,]  250
 [6,]  445
 [7,]  270
 [8,]  228
 [9,]  302
[10,]  355

AI: I guess you are using the FNN package. attr is a list of attributes which can be used for both nn.index and nn.dist. In this case, you are using index. So, index returns an n x k matrix of the nearest neighbour indices. And the definition of the nearest neighbor index is: The nearest neighbor index is expressed as the ratio of the observed distance divided by the expected distance. Definition reference
H: PhD program in statistics I am a first year PhD student in statistics. During this year I have analyzed the scope of interests of my scientific advisor and found them unpromising. He specializes in mixture with varying concentrations models, for which I have not found any references to authoritative sources. Now I want to change my PhD theme; however, there are no other scientific advisors in my university who specialize in statistics. Therefore, I have 2 questions: Is it possible to write at least 5 articles together with a PhD thesis without a scientific adviser? If yes, what is a proper way to do this? Here I mean how to choose a theme, where to ask for help and so on. Is it possible to find a remote adviser to consult with? If yes, how and where can I find one? Also I do not have much time for the search. I am interested in statistics, especially in machine learning. I would like my PhD thesis to be of practical value, not a pure research one, which is popular in my department. Also I have commercial experience in programming (C/C++, R, Python) if that can help. Thanks in advance for any help! AI: Certain ingredients are needed to give you the best chance of a successful PhD. One of the important ones is that you and your supervisor have mutual interests. A second important ingredient, in my opinion, is that you immerse yourself in that environment. It's important to develop a network of colleagues. It helps to spread ideas, start collaborations, get help when needed, and to explore unthought of opportunities. From what you have said, I think you will be missing out on these two important ingredients if you continue in the same place or if you work remotely. What is also important is what you do after the PhD. A PhD is required for an academic position. But I think you will be in a weak position to get to the next step (fellowships, faculty positions, etc.) if you do what you proposed. In certain industrial positions it can be looked on favourably, not necessarily for the topic you pursued, but because it says something about you personally. Basically that you can get things done, rise to a challenge, work independently, work as a team, communicate difficult topics and can bring creativity to solving problems. My advice would be to find a machine learning research group and apply for PhDs. If this is not possible, why not consider following the topic of your supervisor and keeping machine learning as a hobby? You will become an expert in statistics and so you will find many concepts translate between the various statistical disciplines. But only do this if you get along with her/him, and you can see yourself studying this topic. Finally you could try a compromise? Are there applications for "mixing statistics" in machine learning? Can you find one? Is there an unexplored opportunity to do something new? As a side note I find it ridiculous that a PhD supervisor asks the student for topics. This always leads to problems because the student doesn't really have a clue about the research field. There is room for flexibility but often this hides supervisor laziness.
H: replicability / reproducibility in topic modeling (LDA) If I'm not wrong, topic modeling (LDA) is not replicable, i.e. it gives different results in different runs. Where does this come from (where does this randomness come from and why is it necessary?) and what can be done to solve this issue or gain more stability across runs? Thanks for the help. AI: LDA is a Bayesian model. This means the desired result is a posterior probability distribution over the random vectors of interest (probability of topics etc. having seen some data). Inference for many Bayesian models is done by Markov Chain Monte Carlo. Indeed the wiki on LDA suggests that Gibbs sampling is a popular inference technique for LDA. MCMC draws random samples to provide an approximation of the posterior distribution. Variational inference methods should typically be deterministic, but I'm not too familiar with the VB inference for this particular model. Also, one can typically replicate runs of random algorithms by setting random number generation seeds (if your purpose is scientific). In either case, if the results from a Bayesian model show huge variability in the parameters of interest, it may be telling you that the model is not a good fit for the dataset, or the dataset is not big enough for the model you are fitting. EDIT: I don't know which inference method (Gibbs, VB etc.) the backend of your software is using, so it's not possible to determine what type (if any) of randomisation is going on. For scientific purposes, you'll probably want to read up some more on Bayesian inference. Standard software (e.g. the LDA in scikit-learn) will give you a summary of the outputs of the inference (e.g. most coders just want the best assignment of docs to topics). There's more information hanging around behind the scenes which you might be able to get access to, and could be useful. E.g. (roughly) for scientific applications of Gibbs sampling methods we'd typically run multiple chains, drop the first N samples generated by each, and check if the resulting samples look like they came from the same distribution. If you're concerned about dependence on seeds etc. and your backend is the Gibbs sampler, you will want to check out MCMC convergence diagnostics for this model.
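As a small illustration of pinning the seed (scikit-learn shown; gensim's LdaModel similarly accepts a random_state argument): two runs with the same random_state give identical topic-word matrices, while different seeds generally will not.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["apples and oranges", "oranges and bananas", "dogs chase cats", "cats chase mice"]
X = CountVectorizer().fit_transform(docs)

lda_a = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
lda_b = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(np.allclose(lda_a.components_, lda_b.components_))   # True: same seed, same topics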
H: Beginning in machine learning I just want to know which books, courses, videos, links, etc. you would recommend for getting started in machine learning, neural networks, and the languages most commonly used. I want to start from zero, right at the beginning of it all, because I have no experience with these kinds of algorithms, but it is something that has caught my attention. Thank you! AI: Coursera is currently offering a course on Machine Learning from Stanford. Many say it is strongly recommended. https://www.coursera.org/learn/machine-learning But I found the below course from edX more interesting. https://www.coursera.org/learn/machine-learning It also provides hands-on experience with Microsoft's Machine Learning platform on Azure.
H: Equipment failure prediction I have a system that manages equipment. When a piece of equipment is faulty, it will be serviced. Imagine my dataset looks like this: ID, Type, # of times serviced. Example Data:

|ID| Type       | #serviced |
|1 | iphone     | 1         |
|2 | iphone     | 0         |
|3 | android    | 1         |
|4 | android    | 0         |
|5 | blackberry | 0         |

What I want to do is predict, of all the equipment that has not been serviced, which items are likely to be serviced, i.e. identify "at risk" equipment. The problem is my training data will be #serviced > 0. Any #serviced=0 will not be frozen and doesn't seem to be a valid candidate to include in training, i.e. when it fails, it will be serviced and hence the count will go up. Is this a supervised or unsupervised problem? (Supervised because I have serviced and not-serviced labels, unsupervised because I want to cluster not-serviced with serviced and thereby identify at-risk equipment.) What data should I include in training? Note: The example is obviously simplified. In reality I have more features that describe the equipment. AI: You should include data on when the phone was serviced to create a survival model. These models are commonly used in reliability engineering as well as treatment efficacy. For reliability engineering it is very common to fit your data to a Weibull distribution. Even aircraft manufacturers consider the model to be reliable after calibrating with three to five data points. I can highly recommend the R package 'flexsurv'. You cannot use typical linear or logistic regressions since some phones in your population will leave your observation period without ever being serviced. Survival models allow for this sort of missing information (this is called censoring). Typically you would have the following data:

|ID| Type       | serviced | # months_since_purchase
|1 | iphone     | 1        | 12
|2 | iphone     | 0        | 15
|3 | android    | 1        | 2
|4 | android    | 0        | 10
|5 | blackberry | 0        | 5.5

With that data you could create the following model in R:

require(survival)
model <- survfit(Surv(months_since_purchase, serviced) ~ strata(Type) + cluster(ID), data = phone_repairs)

The survfit.formula Surv(months_since_purchase, serviced) ~ strata(Type) + cluster(ID) indicates that months_since_purchase is the time at which an observation was made, serviced is 1 if the phone was serviced and 0 otherwise, strata(Type) will make sure that you create a different survival model for each phone type, and cluster(ID) will make sure that events relating to the same ID are considered as a cluster. You could extend this model with Joint Models such as JM.
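For readers working in Python rather than R, an analogous (non-parametric) model can be sketched with the lifelines package, assuming it is installed; same toy data as above.

import pandas as pd
from lifelines import KaplanMeierFitter

phone_repairs = pd.DataFrame({
    "Type": ["iphone", "iphone", "android", "android", "blackberry"],
    "serviced": [1, 0, 1, 0, 0],
    "months_since_purchase": [12, 15, 2, 10, 5.5],
})

kmf = KaplanMeierFitter()
for phone_type, grp in phone_repairs.groupby("Type"):
    # fit one survival curve per equipment type; serviced=0 rows are censored
    kmf.fit(grp["months_since_purchase"], event_observed=grp["serviced"], label=phone_type)
    print(phone_type)
    print(kmf.survival_function_)   # P(not yet serviced) over time, per type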
H: Which value of output should be taken in multiple sessions of training Neural Network Suppose I am using a Neural Network for a 2-class classification. After training the network with the training set, I want to predict the class labels of a dataset with no class labels. Now with retraining, the same dataset gives different results. For example, in one training session a sample gave the output of it belonging to class 1 while in another session it gave the output of it belonging to class 2. Then which value should be taken as the correct one? AI: This is normal behaviour for most classifiers. You are not guaranteed 100% accuracy in machine learning, and a direct consequence is that classifiers make mistakes. Different classifiers, even if trained on the same data, can make different mistakes. Neural networks with different starting weights will often converge to slightly different results each time. Also, perhaps in your problem the classification is an artificial construct over some spectrum (e.g. "car" vs "van" or "safe" vs "dangerous"), in which case the mistake in one case is entirely reasonable and expected? You should use the value from the classifier that you trust the most. To establish which one that is, use cross-validation on a hold-out set (where you know the true labels), and use the classifier with the best accuracy, or other metric, such as logloss or area under ROC. Which metric you should prefer depends on the nature of your problem, and the consequences of making a mistake. Alternatively, you could look at averaging the class probabilities to determine the best prediction - perhaps one classifier is really confident in the class assignment, and the other is not, so an average will go with the first classifier. Some kind of model aggregation will often boost accuracy, and is common in e.g. Kaggle competitions when you want the highest possible score and don't mind the extra effort and cost. However, if you want to use aggregation to solve your problem, again you should test your assumptions using validation and a suitable metric so you know whether or not it is really an improvement.
H: Looking for a 'CITY, STATE' within a body of text (from a CITY-STATE database)
I'm looking for an optimal way to search a large body of text for a combination of words that resembles any CITY, STATE combination I have in a separate CITY-STATE database. My only idea would be to do a separate search against the body of text for each CITY, STATE in the database, but that would require a lot of time considering the number of CITY, STATE combinations the database has in it. The desired result from this query would be to pull a single CITY, STATE for each body of text I am analyzing, to tell the geographical side of the story for this data subset. Does anyone know of an optimal way/process to do such a query?
AI: The only thing I can see would be to separate the city and state lists and treat the problem as an automaton: parse your text and run through the n-grams; whenever you detect a CITY token (meaning an n-gram present in your list of cities or close to it in a similarity sense, as there might be misspellings), look for a STATE token in its neighbourhood (similarly, by looking into a list of states and using an edit distance metric to allow for misspellings). If you find one, then you can tag your text with that geographical location. Of course, allowing for misspellings will bring some false positives, but you could easily bypass that by doing a quick lookup through your corpus to see that "SALAMI, OREGANO" is different from "SALEM, OREGON" (because the frequency of the latter will hopefully be higher than that of the former).
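A small sketch of that scan in Python, using the standard library's difflib for the fuzzy matching. The city/state lists and the neighbourhood window are made up for illustration, and real city names can span several tokens, so in practice you would scan bigrams and trigrams as well:

# Sketch: scan tokens, fuzzy-match against the city list, then look for a
# state in the following tokens. Lists and window size are illustrative only.
import difflib

cities = ["salem", "portland", "austin"]        # from your CITY-STATE database
states = ["oregon", "texas"]
text = "We visited Salme, Oregon last summer."  # note the misspelled city

tokens = text.lower().replace(",", " ").split()
for i, token in enumerate(tokens):
    city_hit = difflib.get_close_matches(token, cities, n=1, cutoff=0.8)
    if not city_hit:
        continue
    for neighbour in tokens[i + 1:i + 3]:       # small neighbourhood window
        state_hit = difflib.get_close_matches(neighbour, states, n=1, cutoff=0.8)
        if state_hit:
            print("Tagged location:", city_hit[0].title(), state_hit[0].title())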
H: What makes columnar databases suitable for data science?
What are some of the advantages of columnar data-stores which make them more suitable for data science and analytics?
AI: A column-oriented database (= columnar data-store) stores the data of a table column by column on disk, while a row-oriented database stores the data of a table row by row.
There are two main advantages of using a column-oriented database in comparison with a row-oriented database. The first advantage relates to the amount of data one needs to read when performing an operation on just a few features. Consider a simple query:

SELECT correlation(feature2, feature5) FROM records

A traditional executor would read the entire table (i.e. all the features), whereas with a column-based approach we only have to read the columns we are interested in.
The second advantage, which is also very important for large databases, is that column-based storage allows better compression, since the data in one specific column is more homogeneous than the data across all the columns.
The main drawback of a column-oriented approach is that manipulating (lookup, update or delete) an entire given row is inefficient. However, this situation should occur rarely in databases for analytics ("warehousing"), where most operations are read-only, rarely read many attributes in the same table, and writes are only appends.
Some RDBMS offer a column-oriented storage engine option. For example, PostgreSQL natively has no option to store tables in a column-based fashion, but Greenplum has created a closed-source one (DBMS2, 2009). Interestingly, Greenplum is also behind the open-source library for scalable in-database analytics, MADlib (Hellerstein et al., 2012), which is no coincidence. More recently, CitusDB, a startup working on a high-speed analytic database, released their own open-source columnar store extension for PostgreSQL, CSTORE (Miller, 2014). Google's system for large-scale machine learning, Sibyl, also uses a column-oriented data format (Chandra et al., 2010). This trend reflects the growing interest around column-oriented storage for large-scale analytics. Stonebraker et al. (2005) further discuss the advantages of column-oriented DBMS.
A concrete use case: How are most datasets for large-scale machine learning stored?
(most of the answer comes from Appendix C of: BeatDB: An end-to-end approach to unveil saliencies from massive signal data sets. Franck Dernoncourt, S.M. thesis, MIT Dept of EECS)
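To see the "read only the columns you need" advantage in practice, here is a small sketch with pandas comparing a row-oriented CSV with the columnar Parquet format (writing Parquet assumes you have pyarrow or fastparquet installed; file names are made up):

# Sketch: reading two columns from a columnar file versus a row-oriented file.
import pandas as pd

df = pd.DataFrame({f"feature{i}": range(100000) for i in range(1, 7)})
df.to_csv("records.csv", index=False)
df.to_parquet("records.parquet")

# Row-oriented: the whole file still has to be read from disk,
# even though we only keep two of the six columns.
csv_subset = pd.read_csv("records.csv", usecols=["feature2", "feature5"])

# Column-oriented: only the two requested column chunks are read from disk.
parquet_subset = pd.read_parquet("records.parquet", columns=["feature2", "feature5"])

print(csv_subset.shape, parquet_subset.shape)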
H: How to binary encode multi-valued categorical variable from Pandas dataframe?
Suppose we have the following dataframe with multiple values for a certain column:

    categories
0 - ["A", "B"]
1 - ["B", "C", "D"]
2 - ["B", "D"]

How can we get a table like this?

    "A" "B" "C" "D"
0 -  1   1   0   0
1 -  0   1   1   1
2 -  0   1   0   1

Note: I don't necessarily need a new dataframe, I'm wondering how to transform such DataFrames to a format more suitable for machine learning.
AI: If [0, 1, 2] are numerical labels and are not the index, then pandas.DataFrame.pivot_table works:

In []:
data = pd.DataFrame.from_records(
    [[0, 'A'], [0, 'B'], [1, 'B'], [1, 'C'], [1, 'D'], [2, 'B'], [2, 'D']],
    columns=['number_label', 'category'])
data.pivot_table(index=['number_label'], columns=['category'],
                 aggfunc=[len], fill_value=0)

Out[]:
              len
category        A  B  C  D
number_label
0               1  1  0  0
1               0  1  1  1
2               0  1  0  1

This blog post was helpful.
If [0, 1, 2] is the index, then collections.Counter is useful:

In []:
data2 = pd.DataFrame.from_dict(
    {'categories': {0: ['A', 'B'], 1: ['B', 'C', 'D'], 2: ['B', 'D']}})
data3 = data2['categories'].apply(collections.Counter)
pd.DataFrame.from_records(data3).fillna(value=0)

Out[]:
   A  B  C  D
0  1  1  0  0
1  0  1  1  1
2  0  1  0  1
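If you would rather keep the lists in a single column and encode them directly, scikit-learn's MultiLabelBinarizer gives the same table in a couple of lines. A sketch, assuming the column is called 'categories' as in the question:

# Sketch: one-hot encode a column of lists with MultiLabelBinarizer.
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({"categories": [["A", "B"], ["B", "C", "D"], ["B", "D"]]})

mlb = MultiLabelBinarizer()
encoded = pd.DataFrame(mlb.fit_transform(df["categories"]),
                       columns=mlb.classes_,
                       index=df.index)
print(encoded)
#    A  B  C  D
# 0  1  1  0  0
# 1  0  1  1  1
# 2  0  1  0  1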
H: Dividing data between test, learn and predict
I was posting on stats.stackexchange but perhaps I should be posting here.
Context: a subscription business that charges users a monthly fee for access to the service. Management would like to predict "churn" - subscriptions that are likely to cancel. Management would like to create an email sequence in an attempt to prevent high-risk accounts from churning, perhaps with a discount code of some sort. So I need to identify those accounts at risk of leaving us.
I have a dataset with say 50k records. Each line item is an account number along with some variables. One of the variables is "Churned" with a value of "Yes" (they cancelled) or "No" (they are active). The dataset I have is all data since the beginning of time for the business. About 20k records are active paying customers and about 30k are those who used to be paying customers but who have since cancelled. My task is to build a model to predict which of the 20k active customers are currently likely to churn.
Here is where I have tied my brain in a knot. I need to run the model (predict) on the 20k records of active customers. How do I split my data between training, test and predict? Does the predict data have to be exclusive of the train and test data? Can I split the entire dataset of 50k into 0.8 train and 0.2 test, build a model and then predict on the 20k active accounts? That would imply I'm training and testing on data that I'm also going to predict on. Seems "wrong". Is it?
AI: Supervised learning: do you have a saved time history of the data? For a supervised learning set you need some churned="No" cases and some churned="Yes" cases, but it sounds like you only have churned="Yes" and the unknown cases, e.g. current customers who may or may not churn. With some time history you can go back in time and definitively label the current customers as churned="No". Then it is very easy to split up the data. And no, you probably don't want to predict on any data that you trained on, since you can only train on it if you already know the answer, so it would be a waste of time and would throw off any metrics you might use to assess accuracy (precision/recall/F1) in the future.
Unsupervised learning: if you don't have a saved time history of the data, then this is an unsupervised learning set for which you have churned="Yes" and churned="maybe". You could then employ anomaly or outlier detection on this set.

novelty detection: the training data is not polluted by outliers, and we are interested in detecting anomalies in new observations.
outlier detection: the training data contains outliers, and we need to fit the central mode of the training data, ignoring the deviant observations.

You can do either one, but novelty detection is the more powerful option. This is kind of a flip-around, as the "novelty" here is churned="No", since all of your labelled data consists of confirmed churned="Yes" cases.
Hope this helps!
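As a rough illustration of the novelty-detection route above, the sketch below fits a one-class model on the confirmed churners and then scores the active accounts; accounts that resemble the churner population are the ones to flag. The features and numbers are entirely made up:

# Sketch: one-class model trained on churned accounts; active accounts that
# look like that population get flagged as at risk. Feature values are made up.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
churned_features = rng.normal(loc=[2.0, 10.0], scale=1.0, size=(300, 2))  # e.g. logins/month, support tickets
active_features = rng.normal(loc=[5.0, 3.0], scale=2.0, size=(200, 2))

model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.1, gamma="scale"))
model.fit(churned_features)

# predict() returns +1 for points that look like the churner population.
at_risk = model.predict(active_features) == 1
print("active accounts flagged as at risk:", int(at_risk.sum()))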
H: What is conjugate gradient descent?
What is conjugate gradient descent in the context of training a neural network? How is it different from the gradient descent technique?
I came across a resource, but was unable to understand the difference between the two methods. The procedure states that:

the next search direction is determined so that it is conjugate to previous search directions.

What does this mean? Also, what is the line search mentioned on the web page? Can anyone please explain it with the help of a diagram?
AI: What does this sentence mean?
It means that the next vector should be perpendicular to all the previous ones with respect to a matrix. It's like how the natural basis vectors are perpendicular to each other, with the added twist of a matrix:
$\mathrm{x^T A y} = 0$ instead of $\mathrm{x^T y} = 0$
And what is the line search mentioned on the web page?
Line search is an optimization method that involves guessing how far along a given direction (i.e., along a line) one should move to best reach the local minimum.
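To make "conjugate to previous search directions" concrete, here is a small sketch that runs two conjugate-gradient steps on a quadratic $f(x) = \frac{1}{2}\mathrm{x^T A x} - \mathrm{b^T x}$ and checks that the two search directions satisfy $\mathrm{d_0^T A d_1} \approx 0$ (the matrix and vector are arbitrary examples):

# Sketch: a couple of conjugate-gradient steps on a 2-D quadratic, checking
# that successive search directions are conjugate with respect to A.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - A @ x            # residual = negative gradient at x
d = r.copy()             # first search direction = steepest descent
directions = []

for _ in range(2):
    alpha = (r @ r) / (d @ A @ d)        # exact line search along d
    x = x + alpha * d
    r_new = r - alpha * (A @ d)
    beta = (r_new @ r_new) / (r @ r)
    directions.append(d.copy())
    d = r_new + beta * d                 # next direction, conjugate to the previous one
    r = r_new

d0, d1 = directions
print("d0^T A d1 =", d0 @ A @ d1)                        # ~ 0: the directions are A-conjugate
print("x =", x, " exact solution =", np.linalg.solve(A, b))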
H: Is there a library that would perform segmented linear regression in python?
There is a package named segmented in R. Is there a similar package in python?
AI: No, currently there isn't a package in Python that does segmented linear regression as thoroughly as those in R (e.g. the R packages listed in this blog post). Alternatively, you can use a Bayesian Markov Chain Monte Carlo algorithm in Python to create your segmented model.
Segmented linear regression, as implemented by all the R packages in the above link, doesn't permit extra parameter constraints (i.e. priors), and because these packages take a frequentist approach, the resulting model doesn't give you probability distributions for the model parameters (i.e. breakpoints, slopes, etc.). Defining a segmented model in statsmodels, which is frequentist, is even more restrictive because the model requires a fixed x-coordinate breakpoint.
You can design a segmented model in Python using the Bayesian Markov Chain Monte Carlo algorithm emcee. Jake Vanderplas wrote a useful blog post and paper on how to implement emcee, with comparisons to PyMC and PyStan.
Example (figures omitted): a segmented model fit to the data, and the probability distributions of the fit parameters.
Link to code for segmented model.
Link to (large) ipython notebook.
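If you only need a quick point estimate rather than the full posterior distributions described above, a rough frequentist sketch with scipy (one free breakpoint, with the slopes and the value at the breakpoint all fitted) can get you quite far:

# Sketch: single-breakpoint piecewise-linear fit with scipy.optimize.curve_fit.
# Point estimates only -- no posterior distributions as in the MCMC approach.
import numpy as np
from scipy.optimize import curve_fit

def piecewise_linear(x, x0, y0, k1, k2):
    # two line segments that meet at the breakpoint (x0, y0)
    return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = piecewise_linear(x, 4.0, 2.0, 1.5, -0.5) + rng.normal(scale=0.3, size=x.size)

params, _ = curve_fit(piecewise_linear, x, y, p0=[5.0, 1.0, 1.0, -1.0])
print("breakpoint, value at breakpoint, slope1, slope2:", params)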
H: Credit card fraud detection - anomaly detection based on amount of money to be withdrawn?
I am trying to figure out how the amount of money that a customer wants to withdraw at an ATM can tell us whether the transaction is fraudulent or not. There are other attributes, of course, but for now I want to hear your views on the amount of money the customer wants to withdraw.
Data may be of this form: let us assume that a customer, for ten consecutive transactions, withdrew the following amounts:

100.33, 384, 458, 77.90, 456, 213.55, 500, 500, 300, 304

Questions: How can we use this data to tell if the next transaction done on this account is fraudulent or not? Are there specific algorithms that can be used for this classification?
What I was thinking: I was thinking of calculating the average amount of money, say for the last ten transactions, and checking how far the next transaction amount is from the average. Too much deviation would signal an anomaly. But this does not sound like much, does it?
AI: "I was thinking of calculating the average amount of money, say for the last ten transactions, and checking how far the next transaction amount is from the average. Too much deviation would signal an anomaly. But this does not sound like much, does it?"
This is a typical outlier-detection approach, and it would work in most cases. But, as the problem statement deals with credit card fraud detection, the detection technique/algorithm/implementation should be more robust. You might want to have a look at the Mahalanobis distance metric for this type of outlier detection.
Coming to the algorithms for fraud detection, I would point you to the standards used in the industry (I have no hands-on experience with them, but these resources should be useful to you). Check my answer to this question. It contains the popular approaches and algorithms used in the domain of fraud detection. The genetic algorithm is the most popular amongst them.
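For completeness, here is a small sketch of the Mahalanobis check. The second feature (hour of day) and the values for it are invented purely to show the multi-feature case, where the Mahalanobis distance accounts for the correlation between features rather than looking at the amount alone:

# Sketch: Mahalanobis distance of a new transaction from the account's history.
# The hour-of-day feature and any threshold are made up for illustration.
import numpy as np
from scipy.spatial.distance import mahalanobis

history = np.array([
    [100.33, 10], [384, 14], [458, 15], [77.90, 9], [456, 16],
    [213.55, 11], [500, 17], [500, 18], [300, 12], [304, 13],
])

mean = history.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

new_transaction = np.array([2500.0, 3.0])        # large amount at 3 a.m.
d = mahalanobis(new_transaction, mean, cov_inv)
print("Mahalanobis distance:", d)                # flag the transaction if d exceeds a chosen threshold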
H: Correctly interpreting Cosine Angular Distance Similarity & Euclidean Distance Similarity
As an example, let's say I have a very simple data set. I am given a csv with three columns, user_id, book_id, rating. The rating can be any number 0-5, where 0 means the user has NOT rated the book. Let's say I randomly pick three users, and I get these feature/rating vectors:

Martin: $<3,3,5,1,2,3,2,2,5>$
Jacob: $<3,3,5,0,0,0,0,0,0>$
Grant: $<1,1,1,2,2,2,2,2,2>$

The similarity calculations:

+--------------+---------+---------+----------+
|              |  M & J  |  M & G  |  J & G   |
+--------------+---------+---------+----------+
| Euclidean    |  6.85   |  5.91   |  6.92    |
+--------------+---------+---------+----------+
| Cosine       |  .69    |  .83    |  .32     |
+--------------+---------+---------+----------+

Now, my expectation of similarity is that Martin and Jacob would be the most similar. I would expect this because they have EXACTLY the same ratings for the books that both of them have rated. But we end up finding that Martin and Grant are the most similar. I understand mathematically how we get to this conclusion, but I don't understand how I can rely on cosine angular distance or Euclidean distance as a means of calculating similarity, if this type of thing occurs. For what interpretation are Martin and Grant more similar than Martin and Jacob?
One thought I had was to just calculate Euclidean distance, but ignore all books for which one user hasn't rated the book. I then end up with this:

+--------------+---------+---------+----------+
|              |  M & J  |  M & G  |  J & G   |
+--------------+---------+---------+----------+
| Euclidean    |  0      |  5.91   |  6.92    |
+--------------+---------+---------+----------+
| Cosine       |  .69    |  .83    |  .32     |
+--------------+---------+---------+----------+

Of course now I have a Euclidean distance of 0, which fits what I would expect of the recommender system. I see many tutorials and lectures use cosine angular distance to ignore the unrated books, rather than use Euclidean distance and ignore them, so I believe this must not work in general.
EDIT: Just to experiment a little, I adjusted Jacob's feature vector to be much more similar:

Jacob: $<3,3,5,1,2,3,2,0,0>$

When I calculate cosine angular distance with Martin, I still only get .82! Still less similar than Martin and Grant, yet by inspection I would expect these two to be very similar.
Could somebody help explain where my thinking is wrong, and possibly suggest another similarity measure?
AI: If you look at the definitions of the two measures, the cosine similarity is the normalized dot product of the two vectors and the Euclidean distance is the square root of the sum of the squared elements of the difference vector.
The cosine similarity between M and J is smaller than between M and G because the normalization factor of M's vector still includes the numbers for which J didn't have any ratings. Even if you make J's vector more similar, like you did, the remaining numbers of M (2 and 5) get you the number you get. The number for M and G is this high because they both have non-zeroes for all the books. Even though they seem quite different, the normalization factors in the cosine are more "neutralized" by the non-zeroes for corresponding entries in the dot product. Maths don't lie.
The books J didn't rate will only be ignored if you also exclude their zeros from the computation of the normalization factor for M. Maybe the fault in your thinking is treating the books J didn't rate as zeros, when really they should not contribute any number at all.
Finally, for recommendation systems, I would like to refer to matrix factorization.
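For reference, here is a small numpy sketch of the "ignore books one user hasn't rated" computation from the question; ratings of 0 are treated as "not rated" and masked out before computing either measure:

# Sketch: cosine similarity and Euclidean distance over co-rated books only.
import numpy as np

martin = np.array([3, 3, 5, 1, 2, 3, 2, 2, 5])
jacob  = np.array([3, 3, 5, 0, 0, 0, 0, 0, 0])
grant  = np.array([1, 1, 1, 2, 2, 2, 2, 2, 2])

def co_rated_similarity(u, v):
    mask = (u > 0) & (v > 0)                 # books rated by both users
    u, v = u[mask], v[mask]
    cosine = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    euclidean = np.linalg.norm(u - v)
    return cosine, euclidean

print("M & J:", co_rated_similarity(martin, jacob))   # cosine 1.0, distance 0.0
print("M & G:", co_rated_similarity(martin, grant))
print("J & G:", co_rated_similarity(jacob, grant))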
H: Filling missing data with other than mean values
What are all the options available for filling in missing data? One obvious choice is the mean, but if the percentage of missing data is large, it will decrease the accuracy. So how do we deal with missing values if there are a lot of them?
AI: There are of course other choices to fill in for missing data. The median was already mentioned, and it may work better in certain cases. There may even be much better alternatives, which may be very specific to your problem. To find out whether this is the case, you must find out more about the nature of your missing data. When you understand in detail why data is missing, the probability of coming up with a good solution will be much higher.
You might want to start your investigation of missing data by finding out whether you have informative or non-informative missings. Non-informative missings are produced by random data loss; in this case, the observations with missing values are no different from the ones with complete data. Informative missing data, on the other hand, tells you something about your observation. A simple example is a customer record with a missing contract cancellation date, meaning that this customer's contract has not been cancelled so far. You usually don't want to fill in informative missings with a mean or a median, but you may want to generate a separate feature from them.
You may also find out that there are several kinds of missing data, produced by different mechanisms. In this case, you might want to produce default values in different ways.
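As a small illustration of "impute, but keep the information that the value was missing": scikit-learn's SimpleImputer can fill in the median and append an indicator column in one step. The numbers below are made up:

# Sketch: median imputation plus an explicit "was missing" indicator feature.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 20.0],
              [2.0, np.nan],
              [np.nan, 40.0],
              [4.0, 80.0]])

imputer = SimpleImputer(strategy="median", add_indicator=True)
X_filled = imputer.fit_transform(X)
# First two columns: medians filled in; remaining columns: 1 where a value was missing.
print(X_filled)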
H: Behaviour of Learning Algorithms on Random Data
Suppose we collect data for 100,000 tosses of a fair coin and record "Heads" or "Tails" as the value for the attribute outcome, and also record the time, temperature and other irrelevant attributes. We know that the outcome of each toss is random, so there should be no way of predicting future unlabeled data instances. My question is: how do learning algorithms (support vector machines, for example) behave when we apply them to random data such as this?
AI: They will of course still learn some best decision boundary. We know it will be meaningless, but there will still be better and best coefficients for the algorithm to learn when fitting to this particular instance of data from this random process. It may produce better than 50% accuracy on the training data, but of course this is purely due to overfitting whatever the data happens to be. It will not predict future outcomes with more than 50% accuracy.
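You can see this behaviour directly with a quick experiment: fit a kernel SVM to coin-flip labels and compare the training accuracy with the cross-validated accuracy. A sketch with synthetic "irrelevant" features:

# Sketch: an SVM fit to random labels overfits the training data but cannot
# generalise -- cross-validated accuracy stays near chance level.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))        # time, temperature, ... (irrelevant attributes)
y = rng.integers(0, 2, size=1000)     # the coin tosses

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
print("training accuracy:       ", clf.score(X, y))                          # typically well above 0.5
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # close to 0.5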
H: Voting combined results from different classifiers gave bad accuracy
I used the following classifiers along with their accuracies:

Random forest - 85%
SVM - 78%
Adaboost - 82%
Logistic regression - 80%

When I used voting over the above classifiers for the final classification, I got a lower accuracy than when I used random forest alone. How is this possible? All the classifiers give more or less the same accuracy when used individually, so how does random forest outperform their combined result?
AI: The approach you are considering is similar to a multi-class SVM or a one-vs-the-rest approach. And here is how I describe the problem.
The support vector machine, for example, is fundamentally a two-class classifier. In practice, however, we often have to tackle problems involving K > 2 classes. Various methods have therefore been proposed for combining multiple two-class SVMs in order to build a multi-class classifier.
One commonly used approach (Vapnik, 1998) is to construct K separate SVMs, in which the kth model y_k(x) is trained using the data from class C_k as the positive examples and the data from the remaining K − 1 classes as the negative examples. This is known as the one-versus-the-rest approach, where:

y(x) = max_k y_k(x)

Unfortunately, this heuristic approach suffers from the problem that the different classifiers were trained on different tasks, and there is no guarantee that the real-valued quantities y_k(x) for different classifiers will have appropriate scales. Another problem with the one-versus-the-rest approach is that the training sets are imbalanced. For instance, if we have ten classes, each with equal numbers of training data points, then the individual classifiers are trained on data sets comprising 90% negative examples and only 10% positive examples, and the symmetry of the original problem is lost. That is why you got your bad accuracy.
PS: Accuracy, in most cases, is not a good measure for evaluating a classifier model.
References:
Vapnik, V. - Statistical Learning Theory. Wiley-Interscience, New York.
Christopher M. Bishop - Pattern Recognition and Machine Learning.
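On the PS above: one way to make the comparison between the individual models and the vote more trustworthy is to evaluate everything with cross-validation and a ranking metric such as ROC AUC rather than a single accuracy figure. A sketch with scikit-learn, using synthetic data and default hyperparameters, not your actual setup:

# Sketch: cross-validated ROC AUC for each base classifier and a soft-voting
# ensemble, instead of comparing single accuracy numbers.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base_models = [
    ("random_forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("adaboost", AdaBoostClassifier(random_state=0)),
    ("logistic", LogisticRegression(max_iter=1000)),
]
all_models = base_models + [("soft_vote", VotingClassifier(estimators=base_models, voting="soft"))]

for name, model in all_models:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(name, round(scores.mean(), 3))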