H: Is Data Science just a trend or is it a long-term concept? I see a lot of courses in Data Science emerging in the last 2 years. Even big universities like Stanford and Columbia offer an MS specifically in Data Science. But as far as I can see, it looks like data science is just a mix of computer science and statistics techniques. So I always think about this: is it just a trend, and in 10 years from now, will someone still mention Data Science as an entire field, or just as a subject/topic inside CS or stats? What do you think? AI: The one thing that you can say for sure is: nobody can say this for sure. And it might indeed be opinion-based to some extent. The introduction of terms like "Big Data" that some people consider "hypes" or "buzzwords" doesn't make it easier to flesh out an appropriate answer here. But I'll try. In general, interdisciplinary fields often seem to have the problem of not being taken seriously by either of the fields they are spanning. However, the more research is invested into a particular field, the greater the urge to split this field into several sub-topics. And these sub-topics sooner or later have to be re-combined in new ways, in order to prevent overspecialization, and to increase and broaden the applicability of the techniques that are developed by the (over?)specialized experts in the different fields. And I consider "Data Science" as such an approach to combine the expertise and findings from different fields. You described it as ...a mix of computer science and statistics techniques And indeed, several questions here aim at the differentiation between data science and statistics. But a pure statistician will most likely not be able to set up a Hadoop cluster and show the results of his analysis in an interactive HTML5 dashboard. And someone who can implement a nice HTML5 dashboard might not be so familiar with the mathematical background of a Chi-Squared test. It is reasonable to assume that giving students enough knowledge to apply the most important techniques from the different fields that are covered by data science will lead to new applications of these techniques, and be beneficial - also for the "purists" in these fields. The combination of these techniques is not straightforward in many cases, and can justify a branch of research of its own. You also asked whether in 10 years, data science will be considered as "just a topic inside computer science". Again: nobody can say for sure. But I wonder at which point people stopped asking the question whether "Computer Science" will one day be considered only as a mix of (or a subject of) Electrical Engineering and Mathematics...
H: What is the most efficient data indexing technique As we all know, there are some data indexing techniques used by well-known indexing apps, like Lucene (for Java) or Lucene.NET (for .NET), MurMurHash, B+Tree etc. For a NoSQL / object-oriented database (which I am trying to write/play around with a little in C#), which technique do you suggest? I read about MurMurHash-2, and especially the v3 comments say Murmur is very fast. Also Lucene.NET has good comments on it. But what about their memory footprints in general? Is there any efficient solution which has a smaller footprint (and of course faster is preferable) than Lucene or Murmur? Or should I write a special index structure to get the best results? If I try to write my own, then is there any accepted scale for a good index, something like 1% of data-node, or 5% of data-node? Any useful hint will be appreciated. AI: I think you mixed up some things in your question. Lucene (I know nothing about Lucene.NET, but I suppose it is the same) is a library used to analyze, split into tokens, and store documents in order to be able to query and retrieve them later. Lucene has a pretty old but effective model: it uses inverted indexes to find and retrieve documents. Without going into further detail, all documents are split into tokens (terms), and for each term a data structure is maintained which stores all the documents that contain the given term. The data structure used could be a BTree or a hash table, and in the latest major revisions you can even plug in your own data structures. A BTree (see the Wikipedia page for further details) is a kind of tree data structure which is appropriate for working with big chunks of data and is often used for storing tree-like ordered structures on disk. For in-memory use, other trees perform better. Murmur hash (see the Wikipedia page for further details) is a family of hash functions used in hash tables. The implementation of the hash table is not important; it could be a standard chained implementation or a more advanced open-addressing scheme. The idea is that a hash table allows one to quickly retrieve a key from an unordered set of keys, and can answer questions like: is this key part of this set of keys? Which value is associated with this key? Now back to your main problem. You have one library (Lucene) and two data structures, and both data structures are used in Lucene. Now you see that it is not possible to answer your question in these terms, since they are not comparable. However, regarding the footprint and performance part of your question: first of all, you have to know which kinds of operations you need to implement. Do you need only to get a value for a key, or do you need to find all elements in a range? In other words, do you need order or not? If you do, then a tree can help. If you do not, then a hash table, which is faster, can be used instead. Do you have a lot of data which does not fit in memory? If yes, then a disk-based solution would help (like a BTree). If your data fits in memory, then use the fastest in-memory solution and use disk only as storage (with a different, much simpler structure).
H: What is generative and discriminative model? How are they used in Natural Language Processing? This question asks about generative vs. discriminative algorithm, but can someone give an example of the difference between these forms when applied to Natural Language Processing? How are generative and discriminative models used in NLP? AI: Let's say you are predicting the topic of a document given its words. A generative model describes how likely each topic is, and how likely words are given the topic. This is how it says documents are actually "generated" by the world -- a topic arises according to some distribution, words arise because of the topic, you have a document. Classifying documents of words W into topic T is a matter of maximizing the joint likelihood: P(T,W) = P(W|T)P(T) A discriminative model operates by only describing how likely a topic is given the words. It says nothing about how likely the words or topic are by themselves. The task is to model P(T|W) directly and find the T that maximizes this. These approaches do not care about P(T) or P(W) directly.
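To make the contrast above concrete, here is a minimal sketch (not part of the original answer) using scikit-learn: MultinomialNB is a generative classifier that models P(W|T) and P(T), while LogisticRegression is a discriminative one that models P(T|W) directly. The tiny corpus and topic labels are invented purely for illustration.
# Hypothetical toy corpus; both models predict a topic, but only the
# generative one explicitly learns class priors and per-topic word likelihoods.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB          # generative: P(W|T), P(T)
from sklearn.linear_model import LogisticRegression    # discriminative: P(T|W)

docs = ["stock market rally", "team wins final match",
        "central bank raises rates", "player scores late goal"]
topics = ["finance", "sports", "finance", "sports"]

X = CountVectorizer().fit_transform(docs)
generative = MultinomialNB().fit(X, topics)
discriminative = LogisticRegression().fit(X, topics)
print(generative.predict(X[:1]), discriminative.predict(X[:1]))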
H: How to specify important attributes? Assume a set of loosely structured data (e.g. Web tables/Linked Open Data), composed of many data sources. There is no common schema followed by the data and each source can use synonym attributes to describe the values (e.g. "nationality" vs "bornIn"). My goal is to find some "important" attributes that somehow "define" the entities that they describe. So, when I find the same value for such an attribute, I will know that the two descriptions are most likely about the same entity (e.g. the same person). For example, the attribute "lastName" is more discriminative than the attribute "nationality". How could I (statistically) find such attributes that are more important than others? A naive solution would be to take the average IDF of the values of each attribute and make this the "importance" factor of the attribute. A similar approach would be to count how many distinct values appear for each attribute. I have seen the term feature, or attribute, selection in machine learning, but I don't want to discard the remaining attributes, I just want to put higher weights on the most important ones. AI: A possible solution is to calculate the information gain associated with each attribute, based on the entropy: $$I_{E}(f) = - \sum \limits_{i = 1}^m f_i \log_2 f_i$$ Initially you have the whole dataset, and you compute the information gain of each item. The item with the best information gain is the one you should use to partition the dataset (considering the item's values). Then, perform the same computations for each item (except the ones already selected), and always choose the one which best describes/differentiates the entries of your dataset. There are implementations available for such computations. Decision trees usually base their feature selection on the features with the best information gain. You may use the resulting tree structure to find these important items.
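As a rough illustration of the entropy-based scoring described above (the records below are made up, and this is only a sketch rather than a full information-gain computation over a class variable), one can compare attributes by the entropy of their value distributions: attributes with many distinct, evenly spread values, like lastName, come out as more discriminative than ones with few values, like nationality.
import math
from collections import Counter

def entropy(values):
    # -sum(f_i * log2(f_i)) over the relative frequencies of the values
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

records = [
    {"lastName": "Smith", "nationality": "US"},
    {"lastName": "Garcia", "nationality": "US"},
    {"lastName": "Mueller", "nationality": "DE"},
    {"lastName": "Rossi", "nationality": "US"},
]

for attr in ("lastName", "nationality"):
    print(attr, round(entropy([r[attr] for r in records]), 3))  # higher = more discriminative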
H: Google prediction API: What training/prediction methods does the Google Prediction API employ? The details of the Google Prediction API are on this page, but I am not able to find any details about the prediction algorithms running behind the API. So far I have gathered that they let you provide your preprocessing steps in PMML format. AI: If you take a look at the specification of PMML, which you can find here, you can see in the left menu what options you have (like TreeModel, NaiveBayes, neural nets and so on).
H: Using SVM as a binary classifier, is the label for a data point chosen by consensus? I'm learning Support Vector Machines, and I'm unable to understand how a class label is chosen for a data point in a binary classifier. Is it chosen by consensus with respect to the classification in each dimension of the separating hyperplane? AI: The term consensus, as far as I'm concerned, is used rather for cases in which you have more than one source of metric/measure/choice from which to make a decision. And, in order to choose a possible result, you perform some average evaluation/consensus over the values available. This is not the case for SVM. The algorithm is based on a quadratic optimization that maximizes the distance from the closest data points of the two different classes to a hyperplane used to make the split. So, the only "consensus" here is the resulting hyperplane, computed from the closest data points of each class. In other words, the classes are attributed to each point by calculating the distance from the point to the derived hyperplane. If the distance is positive, the point belongs to one class; otherwise, it belongs to the other.
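A small scikit-learn sketch of this (toy data, assumptions mine, not part of the original answer): the sign of the signed distance to the learned hyperplane picks the class, with no voting or consensus step involved.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [6, 6], [7, 6], [6, 7]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

point = np.array([[5, 5]])
print(clf.decision_function(point))  # signed distance to the hyperplane
print(clf.predict(point))            # the sign of that distance decides the label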
H: How to animate growth of a social network? I am seeking a library/tool to visualize how a social network changes when new nodes/edges are added to it. One of the existing solutions is SoNIA: Social Network Image Animator. It lets you make movies like this one. SoNIA's documentation says that it's broken at the moment, and besides this I would prefer a JavaScript-based solution instead. So, my question is: are you familiar with any tools, or are you able to point me to some libraries which would make this task as easy as possible? Right after posting this question I'll dig into sigma.js, so please consider this library covered. In general, my input data would be something like this: time_elapsed; node1; node2 1; A; B 2; A; C 3; B; C So, here we have three points in time (1, 2, 3), three nodes (A, B, C), and three edges, which represent a triadic closure between the three considered nodes. Moreover, every node will have two attributes (age and gender), so I would like to be able to change the shape/colour of the nodes. Also, after adding a new node, it would be perfect to have some ForceAtlas2 or similar algorithm to adjust the layout of the graph. AI: It turned out that this task was quite easy to accomplish using vis.js. This was the best example code that I have found. The example of what I have built upon this is here (scroll to the bottom of this post). This graph represents the growth of a subnetwork of Facebook friends. Green dots are females, blue ones are males. The darker the colour, the older the user. By clicking "Dodaj węzły" ("Add nodes") you can add more nodes and edges to the graph. Anyway, I am still interested in other ways to accomplish this task, so I won't accept any answer for now. Thanks for your contributions!
H: Multi layer back propagation Neural network for classification Can someone explain to me how to classify data like MNIST with an MLBP neural network if I use more than one output (e.g. 8)? I mean, if I just use one output I can easily classify the data, but if I use more than one, which output should I choose? AI: Suppose that you need to classify something into K classes, where K > 2. In this case the setup I use most often is one-hot encoding. You will have K output columns, and in the training set you will set all values to 0, except the one corresponding to the category index, which has value 1. Thus, for each training set instance, all outputs have values 0 or 1, and the outputs sum to 1 for each instance. This looks like a probability, which suggests a technique often used to connect outputs that are modeled as probabilities: the softmax function (more details on Wikipedia). This allows you to put some constraints on the output values (it is basically a generalization of the logistic function) so that the output values are modeled as probabilities. Finally, with or without softmax, you can use the output as a discriminant function to select the proper category. Another final thought would be to avoid encoding your variables in a connected way. For example, you could use the binary representation of the category index, but this would induce in the learner an artificial connection between some outputs, which is arbitrary. One-hot encoding has the advantage that it is neutral to how labels are indexed.
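A minimal numpy sketch of the setup described above (shapes and values are illustrative only, not from the answer): one-hot encoded targets plus a softmax over the K raw outputs, whose argmax picks the predicted class.
import numpy as np

def one_hot(labels, num_classes):
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

targets = one_hot([3, 0, 7], num_classes=10)   # e.g. MNIST digits 3, 0 and 7
raw_outputs = np.random.randn(3, 10)           # pretend network outputs
probs = softmax(raw_outputs)                   # each row sums to 1
predicted = probs.argmax(axis=1)               # most probable class per instance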
H: Algorithm for generating classification rules So we have potential for a machine learning application that fits fairly neatly into the traditional problem domain solved by classifiers, i.e., we have a set of attributes describing an item and a "bucket" that they end up in. However, rather than create models of probabilities like in Naive Bayes or similar classifiers, we want our output to be a set of roughly human-readable rules that can be reviewed and modified by an end user. Association rule learning looks like the family of algorithms that solves this type of problem, but these algorithms seem to focus on identifying common combinations of features and don't include the concept of a final bucket that those features might point to. For example, our data set looks something like this: Item A { 4-door, small, steel } => { sedan } Item B { 2-door, big, steel } => { truck } Item C { 2-door, small, steel } => { coupe } I just want the rules that say "if it's big and a 2-door, it's a truck," not the rules that say "if it's a 4-door it's also small." One workaround I can think of is to simply use association rule learning algorithms and ignore the rules that don't involve an end bucket, but that seems a bit hacky. Have I missed some family of algorithms out there? Or perhaps I'm approaching the problem incorrectly to begin with? AI: C4.5, made by Quinlan, is able to produce rules for prediction. Check this Wikipedia page. I know that in Weka its name is J48. I have no idea what the implementations in R or Python are. Anyway, from this kind of decision tree you should be able to infer rules for prediction. Later edit: you might also be interested in algorithms for directly inferring rules for classification. RIPPER is one, which again received a different name in Weka: JRip. See the original paper on RIPPER: Fast Effective Rule Induction, W.W. Cohen 1995
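For what it's worth, scikit-learn has neither C4.5 nor RIPPER, but a similar "readable rules" effect can be sketched with its CART-based DecisionTreeClassifier plus export_text; the feature encoding below (doors, is_big, is_steel) is invented for illustration and is not from the original question.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: doors, is_big (0/1), is_steel (0/1)
X = [[4, 0, 1],   # sedan
     [2, 1, 1],   # truck
     [2, 0, 1]]   # coupe
y = ["sedan", "truck", "coupe"]

tree = DecisionTreeClassifier().fit(X, y)
# Prints nested if/else style rules, e.g. splits on "doors" and "is_big"
print(export_text(tree, feature_names=["doors", "is_big", "is_steel"]))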
H: What do the alpha and beta hyperparameters contribute to in Latent Dirichlet allocation? LDA has two hyperparameters; tuning them changes the induced topics. What do the alpha and beta hyperparameters contribute to LDA? How do the topics change if one or the other hyperparameter increases or decreases? Why are they hyperparameters and not just parameters? AI: The Dirichlet distribution is a multivariate distribution. We can write the density of the Dirichlet over a vector $x$ of size $K$ as $\frac{1}{B(a)} \cdot \prod\limits_{i} x_i^{a_i - 1}$, where $a$ is the vector of size $K$ of the parameters, and $\sum x_i = 1$. Now, LDA uses some constructs like these: a document can have multiple topics (because of this multiplicity, we need the Dirichlet distribution), and there is a Dirichlet distribution which models this relation; words can also belong to multiple topics, when you consider them outside of a document, so here we need another Dirichlet to model this relation. The previous two are distributions which you do not really see in the data; this is why they are called latent, or hidden. Now, in Bayesian inference you use Bayes' rule to infer the posterior probability. For simplicity, let's say you have data $x$ and you have a model for this data governed by some parameters $\theta$. In order to infer values for these parameters, in full Bayesian inference you will infer the posterior probability of these parameters using Bayes' rule: $$p(\theta|x) = \frac{p(x|\theta)p(\theta|\alpha)}{p(x|\alpha)} \iff \text{posterior probability} = \frac{\text{likelihood}\times \text{prior probability}}{\text{marginal likelihood}}$$ Note that an $\alpha$ appears here. This is your initial belief about this distribution, and it is the parameter of the prior distribution. Usually the prior is chosen in such a way that it is conjugate (so the distribution of the posterior is the same as the distribution of the prior), and often so as to encode some knowledge if you have it, or to have maximum entropy if you know nothing. The parameters of the prior are called hyperparameters. So, in LDA, both topic distributions, over documents and over words, also have corresponding priors, which are usually denoted with alpha and beta, and because they are the parameters of the prior distributions they are called hyperparameters. Now about choosing priors. If you plot some Dirichlet distributions you will note that if the individual parameters $\alpha_k$ all have the same value, the pdf is symmetric in the simplex defined by the $x$ values, and its minimum or maximum is at the center of the simplex. If all the $\alpha_k$ have values lower than one, the maximum is found at the corners; if all the values $\alpha_k$ are the same and greater than 1, the maximum is found at the center. It is easy to see that if the values of $\alpha_k$ are not equal, the symmetry is broken and the maximum will be found near the bigger values. Additionally, please note that values of the prior parameters near 1 produce smooth pdfs of the distribution. So if you have great confidence that something is clearly distributed in a way you know, with a high degree of confidence, then values far from 1 in absolute value are to be used; if you do not have such knowledge, then values near 1 encode this lack of knowledge. It is easy to see why 1 plays such a role in the Dirichlet distribution from the formula of the distribution itself. Another way to understand this is to see that the prior encodes prior knowledge.
At the same time, you might think of the prior as encoding some previously seen data. This data was not seen by the algorithm itself; it was seen by you, you learned something, and you can model the prior according to what you know (learned). So in the prior parameters (hyperparameters) you also encode how big this data set seen a priori was, because the sum of the $\alpha_k$ can also be seen as the size of this more or less imaginary data set. The bigger the prior data set, the bigger the confidence, the bigger the values of $\alpha_k$ you can choose, the sharper the surface near the maximum value, which also means fewer doubts. Hope it helped.
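A quick numerical illustration of this (my own sketch, not from the answer): sampling from a symmetric Dirichlet with numpy shows how alpha below 1 concentrates mass on a few topics while alpha above 1 spreads it almost evenly.
import numpy as np

K = 5  # number of topics
for alpha in (0.1, 1.0, 10.0):
    sample = np.random.dirichlet([alpha] * K)
    print(alpha, np.round(sample, 2))
# alpha = 0.1  -> most of the mass on one or two topics (sparse, "confident" prior)
# alpha = 10.0 -> mass spread almost uniformly across all topics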
H: Instances vs. cores when using EC2 Working on what could often be called "medium data" projects, I've been able to parallelize my code (mostly for modeling and prediction in Python) on a single system across anywhere from 4 to 32 cores. Now I'm looking at scaling up to clusters on EC2 (probably with StarCluster/IPython, but open to other suggestions as well), and have been puzzled by how to reconcile distributing work across cores on an instance vs. instances on a cluster. Is it even practical to parallelize across instances as well as across cores on each instance? If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance? Bandwidth and RAM are non-trivial concerns in my projects, but it's easy to spot when those are the bottlenecks and readjust. It's much harder, I'd imagine, to benchmark the right mix of cores to instances without repeated testing, and my projects vary too much for any single test to apply to all circumstances. Thanks in advance, and if I've just failed to google this one properly, feel free to point me to the right answer somewhere else! AI: When using IPython, you very nearly don't have to worry about it (at the expense of some loss of efficiency/greater communication overhead). The parallel IPython plugin in StarCluster will by default start one engine per physical core on each node (I believe this is configurable but not sure where). You just run whatever you want across all engines by using the DirectView api (map_sync, apply_sync, ...) or the %px magic commands. If you are already using IPython in parallel on one machine, using it on a cluster is no different. Addressing some of your specific questions: "how to reconcile distributing work across cores on an instance vs. instances on a cluster" - You get one engine per core (at least); work is automatically distributed across all cores and across all instances. "Is it even practical to parallelize across instances as well as across cores on each instance?" - Yes :) If the code you are running is embarrassingly parallel (exact same algo on multiple data sets) then you can mostly ignore where a particular engine is running. If the core requires a lot of communication between engines, then of course you need to structure it so that engines primarily communicate with other engines on the same physical machine; but that kind of problem is not ideally suited for IPython, I think. "If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?" - Use the largest c3 instances for compute-bound, and the smallest for memory-bandwidth-bound problems; for message-passing-bound problems, also use the largest instances but try to partition the problem so that each partition runs on one physical machine and most message passing is within the same partition. Problems which would run significantly slower on N quadruple c3 instances than on 2N double c3 are rare (an artificial example may be running multiple simple filters on a large number of images, where you go through all images for each filter rather than all filters for the same image). Using largest instances is a good rule of thumb.
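A rough sketch of the DirectView usage mentioned above, assuming the ipyparallel package and a cluster that is already running (e.g. started by StarCluster or ipcluster); the engine layout and the work function are placeholders.
from ipyparallel import Client

rc = Client()        # connects to all engines, across all instances
dview = rc[:]        # DirectView over every engine (one per core by default)

def heavy_task(x):
    return x ** 2    # stand-in for a real compute-bound job

# map_sync spreads the work across all engines and blocks until done
results = dview.map_sync(heavy_task, range(1000))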
H: How should one deal with implicit data in recommendation A recommendation system keeps a log of what recommendations have been made to a particular user and whether that user accepts the recommendation. It's like user_id item_id result 1 4 1 1 7 -1 5 19 1 5 80 1 where 1 means the user accepted the recommendation while -1 means the user did not respond to the recommendation. Question: If I am going to make recommendations to a bunch of users based on the kind of log described above, and I want to maximize MAP@3 scores, how should I deal with the implicit data (1 or -1)? My idea is to treat 1 and -1 as ratings, and predict the rating using factorization machine-type algorithms. But this does not seem right, given the asymmetry of the implicit data (-1 does not mean the user does not like the recommendation). Edit 1 Let us think about it in the context of a matrix factorization approach. If we treat -1 and 1 as ratings, there will be some problems. For example, user 1 likes movie A, which scores high in one factor (e.g. having glorious background music) in the latent factor space. The system recommends movie B, which also scores high in "glorious background music", but for some reason user 1 is too busy to look into the recommendation, and we have a -1 rating for movie B. If we just treat 1 and -1 equally, then the system might be discouraged from recommending movies with glorious BGM to user 1, while user 1 still loves movies with glorious BGM. I think this situation is to be avoided. AI: Your system isn't just trained on items that are recommended, right? If so, you have a big feedback loop here. You want to learn from all clicks/views, I hope. You suggest that not looking at an item is a negative signal. I strongly suggest you do not treat it that way. Not interacting with something is almost always best treated as no information. If you have an explicit signal that indicates a dislike, like a down vote (or, maybe, watched 10 seconds of a video and stopped), maybe that's valid. I would not construe this input as rating-like data. (Although in your case, you may get away with it.) Instead think of them as weights, which is exactly the treatment in the Hu, Koren and Volinsky paper on ALS that @Trey mentions in a comment. This lets you record the relative strength of positive/negative interactions. Finally I would note that this paper, while very likely to be what you're looking for, does not provide for negative weights. It is simple to extend in this way. If you get that far I can point you to the easy extension, which exists already in two implementations that I know of, in Spark and Oryx.
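To sketch the weighting idea (this is only the data-preparation step of the Hu/Koren/Volinsky approach, with made-up counts and an assumed scaling factor alpha, not a full ALS implementation): interactions become a binary preference plus a confidence weight, rather than a rating.
import numpy as np

# rows: users, columns: items; counts of positive interactions (0 = none observed)
interactions = np.array([[3, 0, 1],
                         [0, 0, 5],
                         [1, 2, 0]])

alpha = 40.0
preference = (interactions > 0).astype(float)    # p_ui: 1 if any interaction, else 0
confidence = 1.0 + alpha * interactions          # c_ui: trust in that preference

# A weighted factorization would then minimize
#   sum_ui c_ui * (p_ui - x_u . y_i)**2 + regularization,
# so "no interaction" only contributes with the low base confidence of 1.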
H: Human activity recognition using smartphone data set problem I'm new to this community and hopefully my question will fit in well here. As part of my undergraduate data analytics course I have chosen to do my project on human activity recognition using smartphone data sets. As far as I'm concerned this topic relates to Machine Learning and Support Vector Machines. I'm not very familiar with these technologies yet, so I will need some help. I have decided to follow this project idea (first project on the top). The project goal is to determine what activity a person is engaging in (e.g., WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) from data recorded by a smartphone (Samsung Galaxy S II) on the subject's waist. Using its embedded accelerometer and gyroscope, the data includes 3-axial linear acceleration and 3-axial angular velocity at a constant rate of 50Hz. The whole data set is given in one folder with some description and feature labels. The data is divided into 'test' and 'train' files, in which data is represented in this format: 2.5717778e-001 -2.3285230e-002 -1.4653762e-002 -9.3840400e-001 -9.2009078e-001 -6.6768331e-001 -9.5250112e-001 -9.2524867e-001 -6.7430222e-001 -8.9408755e-001 -5.5457721e-001 -4.6622295e-001 7.1720847e-001 6.3550240e-001 7.8949666e-001 -8.7776423e-001 -9.9776606e-001 -9.9841381e-001 -9.3434525e-001 -9.7566897e-001 -9.4982365e-001 -8.3047780e-001 -1.6808416e-001 -3.7899553e-001 2.4621698e-001 5.2120364e-001 -4.8779311e-001 4.8228047e-001 -4.5462113e-002 2.1195505e-001 -1.3489443e-001 1.3085848e-001 -1.4176313e-002 -1.0597085e-001 7.3544013e-002 -1.7151642e-001 4.0062978e-002 7.6988933e-002 -4.9054573e-001 -7.0900265e-001 And that's only a very small sample of what the file contains. I don't really know what this data represents and how it can be interpreted. Also, for analyzing, classification and clustering of the data, what tools will I need to use? Is there any way I can put this data into Excel with labels included and, for example, use R or Python to extract sample data and work on it? Any hints/tips would be much appreciated. AI: The data set definitions are on the page here (Attribute Information, at the bottom), or you can look inside the ZIP folder at the file named activity_labels, which has the activity labels inside it; make sure you read the README carefully, it has some good info in it. You can easily bring a .csv file into R using the read.csv command. For example, if you name your file samsungdata you can open R and run this command: data <- read.csv("directory/where/file/is/located/samsungdata.csv", header = TRUE) Or if you are already inside the working directory in R you can just run the following data <- read.csv("samsungdata.csv", header = TRUE) Where the name data can be changed to whatever you want to call your data set.
H: Are there any python based data visualization toolkits? Data visualization is an important sub-field of data science, and Python programmers need to have toolkits available to them. Is there a Python API to Tableau? Are there any Python based data visualization toolkits? AI: There is a Tableau API and you can use Python to use it, but maybe not in the sense that you think. There is a Data Extract API that you could use to import your data into Python and do your visualizations there, so I do not know if this is going to answer your question entirely. As mentioned in the first comment, you can use Matplotlib (from the Matplotlib website), or you could install Canopy from Enthought, which has it available; there is also Pandas, which you could also use for data analysis and some visualizations. There is also a package called ggplot, which is used in R a lot but is also made for Python; you can find it here: ggplot for python. The Tableau Data Extract API and some information about it can be found at this link. There are a few web sources that I found concerning it using DuckDuckGo at this link. Here are some samples: Link 1 Link 2 Link 3 As far as an API like matplotlib goes, I cannot say for certain that one exists. Hopefully this gives some sort of reference to help answer your question. Also, to help avoid closure flags and downvotes, you should try to show some of what you have tried to do or find; this makes for a better question and helps to elicit responses.
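For reference, a basic matplotlib chart takes only a few lines; the numbers here are arbitrary.
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]

plt.plot(x, y, marker="o")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Quick matplotlib line chart")
plt.show()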
H: Cascaded Error in Apache Storm Going through the presentation and material of Summingbird by Twitter, one of the reasons that is mentioned for using Storm and Hadoop clusters together in Summingbird is that processing through Storm results in cascading of error. In order to avoid this cascading of error and accumulation of it, Hadoop cluster is used to batch process the data and discard the Storm results after the same data is processed by Hadoop. What is the reasons for generation of this accumulation of error? and why is it not present in Hadoop? Since I have not worked with Storm, I do not know the reasons for it. Is it because Storm uses some approximate algorithm to process the data in order to process them in real time? or is the cause something else? AI: Twitter uses Storm for real-time processing of data. Problems can happen with real-time data. Systems might go down. Data might be inadvertently processed twice. Network connections can be lost. A lot can happen in a real-time system. They use hadoop to reliably process historical data. I don't know specifics, but for instance, getting solid information from aggregated logs is probably more reliable than attaching to the stream. If they simply relied on Storm for everything - Storm would have problems due to the nature of providing real-time information at scale. If they relied on hadoop for everything, there's a good deal of latency involved. Combining the two with Summingbird is the next logical step.
H: Do I need to learn Hadoop to be a Data Scientist? An aspiring data scientist here. I don't know anything about Hadoop, but as I have been reading about Data Science and Big Data, I see a lot of talk about Hadoop. Is it absolutely necessary to learn Hadoop to be a Data Scientist? AI: Different people use different tools for different things. Terms like Data Science are generic for a reason. A data scientist could spend an entire career without having to learn a particular tool like hadoop. Hadoop is widely used, but it is not the only platform that is capable of managing and manipulating data, even large scale data. I would say that a data scientist should be familiar with concepts like MapReduce, distributed systems, distributed file systems, and the like, but I wouldn't judge someone for not knowing about such things. It's a big field. There is a sea of knowledge and most people are capable of learning and being an expert in a single drop. The key to being a scientist is having the desire to learn and the motivation to know that which you don't already know. As an example: I could hand the right person a hundred structured CSV files containing information about classroom performance in one particular class over a decade. A data scientist would be able to spend a year gleaning insights from the data without ever needing to spread computation across multiple machines. You could apply machine learning algorithms, analyze it using visualizations, combine it with external data about the region, ethnic makeup, changes to environment over time, political information, weather patterns, etc. All of that would be "data science" in my opinion. It might take something like hadoop to test and apply anything you learned to data comprising an entire country of students rather than just a classroom, but that final step doesn't necessarily make someone a data scientist. And not taking that final step doesn't necessarily disqualify someone from being a data scientist.
H: What are some easy to learn machine-learning applications? Being new to machine-learning in general, I'd like to start playing around and see what the possibilities are. I'm curious as to what applications you might recommend that would offer the fastest time from installation to producing a meaningful result. Also, any recommendations for good getting-started materials on the subject of machine-learning in general would be appreciated. AI: I would recommend to start with some MOOC on machine learning. For example Andrew Ng's course at coursera. You should also take a look at Orange application. It has a graphical interface and probably it is easier to understand some ML techniques using it.
H: Can machine learning algorithms predict sports scores or plays? I have a variety of NFL datasets that I think might make a good side-project, but I haven't done anything with them just yet. Coming to this site made me think of machine learning algorithms and I wondering how good they might be at either predicting the outcome of football games or even the next play. It seems to me that there would be some trends that could be identified - on 3rd down and 1, a team with a strong running back theoretically should have a tendency to run the ball in that situation. Scoring might be more difficult to predict, but the winning team might be. My question is whether these are good questions to throw at a machine learning algorithm. It could be that a thousand people have tried it before, but the nature of sports makes it an unreliable topic. AI: There are a lot of good questions about Football (and sports, in general) that would be awesome to throw to an algorithm and see what comes out. The tricky part is to know what to throw to the algorithm. A team with a good RB could just pass on 3rd-and-short just because the opponents would probably expect run, for instance. So, in order to actually produce some worthy results, I'd break the problem in smaller pieces and analyse them statistically while throwing them to the machines. There are a few (good) websites that try to do the same, you should check'em out and use whatever they found to help you out: Football Outsiders Advanced Football Analytics And if you truly want to explore Sports Data Analysis, you should definitely check the Sloan Sports Conference videos. There's a lot of them spread on Youtube.
H: How to get an aggregate confusion matrix from n different classifications I want to test the accuracy of a methodology. I ran it ~400 times, and I got a different classification for each run. I also have the ground truth, i.e., the real classification to test against. For each classification I computed a confusion matrix. Now I want to aggregate these results in order to get the overall confusion matrix. How can I achieve it? May I sum all confusion matrices in order to obtain the overall one? AI: I do not know a standard answer to this, but I thought about it some time ago and I have some ideas to share. When you have one confusion matrix, you have more or less a picture of how your classification model confuses (mis-classifies) classes. When you repeat classification tests you will end up having multiple confusion matrices. The question is how to get a meaningful aggregate confusion matrix. The answer depends on what the meaning of meaningful is (pun intended). I think there is not a single version of meaningful. One way is to follow the rough idea of multiple testing. In general, you test something multiple times in order to get more accurate results. As a general principle one can reason that averaging over the results of the multiple tests reduces the variance of the estimates, and so, as a consequence, it increases the precision of the estimates. You can proceed in this way, of course, by summing position by position and then dividing by the number of tests. You can go further, and instead of estimating only a value for each cell of the confusion matrix, you can also compute some confidence intervals, t-values and so on. This is OK from my point of view. But it tells only one side of the story. The other side of the story which might be investigated is how stable the results are for the same instances. To exemplify that I will take an extreme example. Suppose you have a classification model for 3 classes. Suppose that these classes are in the same proportion. If your model is able to predict one class perfectly and the other 2 classes with random-like performance, you will end up having an overall accuracy of 0.33 + 0.166 + 0.166 = 0.66. This might seem good, but even if you take a look at a single confusion matrix you will not know that your performance on the last 2 classes varies wildly. Multiple tests can help. But would averaging the confusion matrices reveal this? My belief is that it would not. The averaging will give more or less the same result, and doing multiple tests will only decrease the variance of the estimation. However it says nothing about the wild instability of the predictions. So another way to compose the confusion matrices would involve a prediction density for each instance. One can build this density by counting, for each instance, the number of times a given class was predicted for it. After normalization, you will have for each instance a prediction density rather than a single prediction label. You can see that a single prediction label is similar to a degenerate density where you have a probability of 1 for the predicted class and 0 for the other classes, for each separate instance. Now, having these densities, one can build a confusion matrix by adding the probabilities from each instance and predicted class to the corresponding cell of the aggregated confusion matrix. One can argue that this would give results similar to the previous method.
However, I think that while this might sometimes be the case, often, when the model has low variance, the second method is less affected by how the samples for the tests are drawn, and thus it is more stable and closer to reality. Also, the second method might be altered in order to obtain a third method, where one assigns as the prediction the label with the highest density from the predictions for a given instance. I have not implemented those things, but I plan to study this further because I believe it might be worth spending some time on.
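A small numpy sketch of the two aggregation ideas discussed above (toy numbers, my own illustration): element-wise averaging of the per-run confusion matrices, and building per-instance prediction densities first and then a single matrix from them.
import numpy as np

# confusion matrices from 3 test runs, 2 classes (rows = true, columns = predicted)
runs = np.array([[[8, 2], [1, 9]],
                 [[7, 3], [2, 8]],
                 [[9, 1], [1, 9]]])
averaged = runs.mean(axis=0)              # method 1: average cell by cell

# method 2: per-instance prediction densities (fraction of runs predicting each class)
true_labels = np.array([0, 0, 1, 1])
densities = np.array([[0.9, 0.1],         # this instance got class 0 in 90% of runs
                      [0.6, 0.4],
                      [0.2, 0.8],
                      [0.5, 0.5]])
density_cm = np.zeros((2, 2))
for label, dens in zip(true_labels, densities):
    density_cm[label] += dens             # add the whole density to the true-label row

print(averaged)
print(density_cm)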
H: Qualifications for PhD Programs Yann LeCun mentioned in his AMA that he considers having a PhD very important in order to get a job at a top company. I have a masters in statistics and my undergrad was in economics and applied math, but I am now looking into ML PhD programs. Most programs say there are no absolutely necessary CS courses; however I tend to think most accepted students have at least a very strong CS background. I am currently working as a data scientist/statistician but my company will pay for courses. Should I take some intro software engineering courses at my local University to make myself a stronger candidate? What other advice you have for someone applying to PhD programs from outside the CS field? edit: I have taken a few MOOCs (Machine Learning, Recommender Systems, NLP) and code R/python on a daily basis. I have a lot of coding experience with statistical languages and implement ML algorithms daily. I am more concerned with things that I can put on applications. AI: If I were you I would take a MOOC or two (e.g., Algorithms, Part I, Algorithms, Part II, Functional Programming Principles in Scala), a good book on data structures and algorithms, then just code as much as possible. You could implement some statistics or ML algorithms, for example; that would be good practice for you and useful to the community. For a PhD program, however, I would also make sure I were familiar with the type of maths they use. If you want to see what it's like at the deep end, browse the papers at the JMLR. That will let you calibrate yourself in regards to theory; can you sort of follow the maths? Oh, and you don't need a PhD to work at top companies, unless you want to join research departments like his. But then you'll spend more time doing development, and you'll need good coding skills...
H: What are the advantages of HDF compared to alternative formats? What are the advantages of HDF compared to alternative formats? What are the main data science tasks where HDF is really suitable and useful? AI: Perhaps a good way to paraphrase the question is, what are the advantages compared to alternative formats? The main alternatives are, I think: a database, text files, or another packed/binary format. The database options to consider are probably a columnar store or NoSQL, or for small self-contained datasets SQLite. The main advantage of the database is the ability to work with data much larger than memory, to have random or indexed access, and to add/append/modify data quickly. The main *dis*advantage is that it is much slower than HDF, for problems in which the entire dataset needs to be read in and processed. Another disadvantage is that, with the exception of embedded-style databases like SQLite, a database is a system (requiring administration, setup, maintenance, etc) rather than a simple self-contained data store. The text file format options are XML/JSON/CSV. They are cross-platform/language/toolkit, and are a good archival format due to the ability to be self-describing (or obvious :). If uncompressed, they are huge (10x-100x HDF), but if compressed, they can be fairly space-efficient (compressed XML is about the same as HDF). The main disadvantage here is again speed: parsing text is much, much slower than HDF. The other binary formats (npy/npz numpy files, blz blaze files, protocol buffers, Avro, ...) have very similar properties to HDF, except they are less widely supported (may be limited to just one platform: numpy) and may have specific other limitations. They typically do not offer a compelling advantage. HDF is a good complement to databases, it may make sense to run a query to produce a roughly memory-sized dataset and then cache it in HDF if the same data would be used more than once. If you have a dataset which is fixed, and usually processed as a whole, storing it as a collection of appropriately sized HDF files is not a bad option. If you have a dataset which is updated often, staging some of it as HDF files periodically might still be helpful. To summarize, HDF is a good format for data which is read (or written) typically as a whole; it is the lingua franca or common/preferred interchange format for many applications due to wide support and compatibility, decent as an archival format, and very fast. P.S. To give this some practical context, in my most recent experience comparing HDF to alternatives, a certain small (much less than memory-sized) dataset took 2 seconds to read as HDF (and most of this is probably overhead from Pandas); ~1 minute to read from JSON; and 1 hour to write to database. Certainly the database write could be sped up, but you'd better have a good DBA! This is how it works out of the box.
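A hedged example of the HDF-as-cache pattern described above, using pandas (this assumes PyTables, pandas' HDF5 backend, is installed; file names and sizes are arbitrary):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000000, 4), columns=list("abcd"))

df.to_csv("data.csv", index=False)           # text: portable but large and slow to parse
df.to_hdf("data.h5", key="data", mode="w")   # HDF5: compact binary, fast whole-file reads

fast = pd.read_hdf("data.h5", "data")        # typically much faster than read_csv here
slow = pd.read_csv("data.csv")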
H: Does Amazon RedShift replace Hadoop for ~1XTB data? There is plenty of hype surrounding Hadoop and its eco-system. However, in practice, where many data sets are in the terabyte range, is it not more reasonable to use Amazon RedShift for querying large data sets, rather than spending time and effort building a Hadoop cluster? Also, how does Amazon Redshift compare with Hadoop with respect to setup complexity, cost, and performance? AI: tl;dr: They markedly differ in many aspects and I can't think Redshift will replace Hadoop. -Function You can't run anything other than SQL on Redshift. Perhaps most importantly, you can't run any type of custom functions on Redshift. In Hadoop you can, using many languages (Java, Python, Ruby.. you name it). For example, NLP in Hadoop is easy, while it's more or less impossible in Redshift. I.e. there are lots of things you can do in Hadoop but not on Redshift. This is probably the most important difference. -Performance Profile Query execution on Redshift is in most cases significantly more efficient than on Hadoop. However, this efficiency comes from the indexing that is done when the data is loaded into Redshift (I'm using the term indexing very loose here). Therefore, it's great if you load your data once and execute multiple queries, but if you want to execute only one query for example, you might actually lose out in performance overall. -Cost Profile Which solution wins out in cost depends on the situation (like performance), but you probably need quite a lot of queries in order to make it cheaper than Hadoop (more specifically Amazon's Elastic Map Reduce). For example, if you are doing OLAP, it's very likely that Redshift comes out cheaper. If you do daily batch ETLs, Hadoop is more likely to come out cheaper. Having said that, we've replaced part of our ETL that was done in Hive to Redshift, and it was a pretty great experience; mostly for the ease of development. Redshift's Query Engine is based on PostgreSQL and is very mature, compared to Hive's. Its ACID characteristics make it easier to reason about it, and the quicker response time allows more testing to be done. It's a great tool to have, but it won't replace Hadoop. EDIT: As for setup complexity, I'd even say it's easier with Hadoop if you use AWS's EMR. Their tools are so mature that it's ridiculously easy to have your Hadoop job running. Tools and mechanisms surrounding Redshift's operation aren't that mature yet. For example, Redshift can't handle trickle loading and thus you have to come up with something that turns that into a batched load, which can add some complexity to your ETL.
H: What do you think of Data Science certifications? I've now seen two data science certification programs - the John Hopkins one available at Coursera and the Cloudera one. I'm sure there are others out there. The John Hopkins set of classes is focused on R as a toolset, but covers a range of topics: R Programming cleaning and obtaining data Data Analysis Reproducible Research Statistical Inference Regression Models Machine Learning Developing Data Products And what looks to be a Project based completion task similar to Cloudera's Data Science Challenge The Cloudera program looks thin on the surface, but looks to answer the two important questions - "Do you know the tools", "Can you apply the tools in the real world". Their program consists of: Introduction to Data Science Data Science Essentials Exam Data Science Challenge (a real world data science project scenario) I am not looking for a recommendation on a program or a quality comparison. I am curious about other certifications out there, the topics they cover, and how seriously DS certifications are viewed at this point by the community. EDIT: These are all great answers. I'm choosing the correct answer by votes. AI: I did the first 2 courses and I'm planning to do all the others too. If you don't know R, it's a really good program. There are assignments and quizzes every week. Many people find some courses very difficult. You are going to have hard time if you don't have any programming experience (even if they say it's not required). Just remember.. it's not because you can drive a car that you are a F1 pilot ;)
H: Data Science as a Social Scientist? as I am very interested in programming and statistics, Data Science seems like a great career path to me - I like both fields and would like to combine them. Unfortunately, I have studied political science with a non-statistical sounding Master. I focused on statistics in this Master, visiting optional courses and writing a statistical thesis on a rather large dataset. Since almost all job adds are requiring a degree in informatics, physics or some other techy-field, I am wondering if there is a chance to become a data scientist or if I should drop that idea. I am lacking knowledge in machine learning, sql and hadoop, while having a rather strong informatics and statistics background. So can somebody tell me how feasible my goal of becoming a data scientist is? AI: The downvotes are because of the topic, but I'll attempt to answer your question as best I can since it's here. Data science is a term that is thrown around as loosely as Big Data. Everyone has a rough idea of what they mean by the term, but when you look at the actual work tasks, a data scientist's responsibilities will vary greatly from company to company. Statistical analysis could encompass the entirety of the workload in one job, and not even be a consideration for another. I wouldn't chase after a job title per se. If you are interested in the field, network (like you are doing now) and find a good fit. If you are perusing job ads, just look for the ones that stress statistical and informatics backgrounds. Hadoop and SQL are both easy to become familiar with given the time and motivation, but I would stick with the areas you are strongest in and go from there.
H: Example tasks of a data scientist and the necessary knowledge Could you give some examples of typical tasks that a data scientist does in his daily job, and the must-know minimum for each of the levels (like junior, senior, etc. if there are any)? If possible, something like a Programmer competency matrix. AI: Becoming a Data Scientist – Curriculum via Metromap is a popular reference for this kind of question.
H: Data Science oriented dataset/research question for Statistics MSc thesis I'd like to explore 'data science'. The term seems a little vague to me, but I expect it to require: machine learning (rather than traditional statistics); a large enough dataset that you have to run analyses on clusters. What are some good datasets and problems, accessible to a statistician with some programming background, that I can use to explore the field of data science? To keep this as narrow as possible, I'd ideally like links to open, well used datasets and example problems. AI: Just head to kaggle.com; it'll keep you busy for a long time. For open data there's the UC Irvine Machine Learning Repository. In fact, there's a whole Stackexchange site devoted to this; look there.
H: How to process natural language queries? I'm curious about natural language querying. Stanford has what looks to be a strong set of software for processing natural language. I've also seen the Apache OpenNLP library, and the General Architecture for Text Engineering. There are an incredible number of uses for natural language processing and that makes the documentation of these projects difficult to quickly absorb. Can you simplify things for me a bit and at a high level outline the tasks necessary for performing a basic translation of simple questions into SQL? The first rectangle on my flow chart is a bit of a mystery. For example, I might want to know: How many books were sold last month? And I'd want that translated into Select count(*) from sales where item_type='book' and sales_date >= '5/1/2014' and sales_date <= '5/31/2014' AI: Natural language querying poses very many intricacies which can be very difficult to generalize. From a high level, I would start by trying to think of things in terms of nouns and verbs. So for the sentence: How many books were sold last month? You would start by breaking the sentence down with a parser, which will return a parse tree for it. In that tree you can see that there is a subject, books, a compound verbal phrase indicating the past action of sell, and then a noun phrase where you have the time focus of a month. We can further break down the subject for modifiers: "how many" for books, and "last" for month. Once you have broken the sentence down you need to map those elements to SQL, e.g.: how many => count, books => book, sold => sales, month => sales_date (interval), and so on. Finally, once you have the elements of the language, you just need to come up with a set of rules for how different entities interact with each other, which leaves you with: Select count(*) from sales where item_type='book' and sales_date >= '5/1/2014' and sales_date <= '5/31/2014' This is, at a high level, how I would begin. While almost every step I have mentioned is non-trivial and the rabbit hole can really be endless, this should give you many of the dots to connect.
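To make the last mapping step tangible, here is a deliberately tiny rule-based sketch (all table/column names, keyword tables and the hard-coded date logic are assumptions for illustration; a real system would sit behind a proper parser and grammar):
NOUN_TO_TABLE = {"books": ("sales", "item_type='book'"),
                 "cars": ("sales", "item_type='car'")}
VERB_TO_ACTION = {"sold": "count(*)"}

def to_sql(question):
    tokens = question.lower().rstrip("?").split()
    action = next(VERB_TO_ACTION[t] for t in tokens if t in VERB_TO_ACTION)
    table, item_filter = next(NOUN_TO_TABLE[t] for t in tokens if t in NOUN_TO_TABLE)
    time_filter = ""
    if "last" in tokens and "month" in tokens:
        # a real system would resolve "last month" relative to today's date
        time_filter = " and sales_date >= '5/1/2014' and sales_date <= '5/31/2014'"
    return "select {} from {} where {}{}".format(action, table, item_filter, time_filter)

print(to_sql("How many books were sold last month?"))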
H: Filtering spam from retrieved data I once heard that filtering spam by using blacklists is not a good approach, since some users searching for entries in your dataset may be looking for particular information from the blocked sources. Also it'd become a burden to continuously validate the current state of each blocked spammer, checking whether the site/domain still disseminates spam data. Considering that any approach must be efficient and scalable, so as to support filtering on very large datasets, what strategies are available to get rid of spam in a non-biased manner? Edit: if possible, any example of a strategy, even if just the intuition behind it, would be very welcome along with the answer. AI: Spam filtering, especially in email, has been revolutionized by neural networks; here are a couple of papers that provide good reading on the subject: On Neural Networks And The Future Of Spam A. C. Cosoi, M. S. Vlad, V. Sgarciu http://ceai.srait.ro/index.php/ceai/article/viewFile/18/8 Intelligent Word-Based Spam Filter Detection Using Multi-Neural Networks Ann Nosseir, Khaled Nagati and Islam Taj-Eddin http://www.ijcsi.org/papers/IJCSI-10-2-1-17-21.pdf Spam Detection using Adaptive Neural Networks: Adaptive Resonance Theory David Ndumiyana, Richard Gotora, and Tarisai Mupamombe http://onlineresearchjournals.org/JPESR/pdf/2013/apr/Ndumiyana%20et%20al.pdf EDIT: The basic intuition behind using a neural network to help with spam filtering is that it provides a weight to terms based on how often they are associated with spam. Neural networks can be trained most quickly in a supervised environment -- one where you explicitly provide the classification of the sentences in the training set. Without going into the nitty gritty, the basic idea can be illustrated with these sentences: Text = "How is the loss of the Viagra patent going to affect Pfizer", Spam = false Text = "Cheap Viagra Buy Now", Spam = true Text = "Online pharmacy Viagra Cialis Lipitor", Spam = true For a two-stage neural network, the first stage will calculate the likelihood of spam based on whether the word exists in the sentence. So from our example: viagra => 66% buy => 100% Pfizer => 0% etc.. Then for the second stage the results of the first stage are used as variables in the second stage: viagra & buy => 100% Pfizer & viagra => 0% This basic idea is run for many of the permutations of all the words in your training data. The end result, once trained, is basically just an equation that, based on the context of the words in the sentence, can assign a probability of being spam. Set a spamminess threshold, and filter out any data scoring higher than that threshold.
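A compact, hedged illustration of this word-weighting idea with scikit-learn (a bag-of-words representation fed into a small neural network; the three training sentences are the ones from the answer, and a real filter would of course need far more data):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

texts = ["How is the loss of the Viagra patent going to affect Pfizer",
         "Cheap Viagra Buy Now",
         "Online pharmacy Viagra Cialis Lipitor"]
labels = [0, 1, 1]  # 0 = not spam, 1 = spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

model = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000).fit(X, labels)

new = vectorizer.transform(["Buy cheap Viagra online"])
print(model.predict_proba(new))  # threshold this "spamminess" score to filter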
H: Amazon S3 vs Google Drive The majority of people use S3. However, Google Drive seems a promising alternative solution for storing large amounts of data. Are there specific reasons why one is better than the other? AI: Personally, we use S3 on top of GCE and really love it. Depending on how much data you're dealing with, Google Drive just doesn't quite match the 5 TB max that S3 gives you. Also, if you're using Python, boto does a pretty fantastic job of making most AWS services pretty accessible regardless of what stack you're dealing with. Even if you're not using Python, they've got a pretty straightforward API that generally is more accessible than Google Drive's. Instead of Google Drive, though, Google did recently release a cloud storage service, apart from Drive, that lets you more closely integrate your storage with any GCE instance you've got: Google Cloud Storage. They've got an API which seems to be pretty comparable to S3's, but I can't profess to having really played around with it much. Pricing-wise the two are identical, but I think that the large community and experience with AWS in general still puts S3 squarely above both Google's cloud storage and Google Drive.
H: Choose binary classification algorithm I have a binary classification problem: Approximately 1000 samples in training set 10 attributes, including binary, numeric and categorical Which algorithm is the best choice for this type of problem? By default I'm going to start with SVM (after first converting nominal attribute values to binary features), as it is considered the best for relatively clean and not noisy data. AI: It's hard to say without knowing a little more about your dataset, and how separable your dataset is based on your feature vector, but I would probably suggest using extreme random forest over standard random forests because of your relatively small sample set. Extreme random forests are pretty similar to standard random forests with the one exception that instead of optimizing splits on trees, extreme random forest makes splits at random. Initially this would seem like a negative, but it generally means that you have significantly better generalization and speed, though the AUC on your training set is likely to be a little worse. Logistic regression is also a pretty solid bet for these kinds of tasks, though with your relatively low dimensionality and small sample size I would be worried about overfitting. You might want to check out using K-Nearest Neighbors since it often performs very well with low dimensionalities, but it doesn't usually handle categorical variables very well. If I had to pick one without knowing more about the problem I would certainly place my bets on extreme random forest, as it's very likely to give you good generalization on this kind of dataset, and it also handles a mix of numerical and categorical data better than most other methods.
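For reference, "extreme random forests" are available in scikit-learn as extremely randomized trees. A minimal sketch follows; the toy data simply stands in for your ~1000 samples, and get_dummies turns the categorical column into binary features.

# Sketch: extremely randomized trees on a small mixed-type dataset (made-up data).
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "num_feature": rng.normal(size=n),
    "bin_feature": rng.integers(0, 2, size=n),
    "cat_feature": rng.choice(["a", "b", "c"], size=n),
})
y = (df["num_feature"] + (df["cat_feature"] == "b") > 0.5).astype(int)

X = pd.get_dummies(df)   # one-hot encode the categorical column
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())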
H: What to consider before learning a new language for data analysis I'm currently in the very early stages of preparing a new research-project (still at the funding-application stage), and expect that data-analysis and especially visualisation tools will play a role in this project. In view of this I face the following dilemma: Should I learn Python to be able to use its extensive scientific libraries (Pandas, Numpy, Scipy, ...), or should I just dive into similar packages of a language I'm already acquainted with (Racket, or to a lesser extent Scala)? (Ideally I would learn Python in parallel with using statistical libraries in Racket, but I'm not sure I'll have time for both) I'm not looking for an answer to this dilemma, but rather for feedback on my different considerations: My current position is as follows: In favour of Python: Extensively used libraries Widely used (may be decisive in case of collaboration with others) A lot of online material to start learning it Conferences that are specifically dedicated to Scientific Computing with Python Learning Python won't be a waste of time anyway In favour of a language I already know: It's a way to deepen my knowledge of one language rather than getting superficial knowledge of one more language (under the motto: you should at least know one language really well) It is feasible. Both Racket and Scala have good mathematics and statistics libraries I can start right away with learning what I need to know rather than first having to learn the basics Two concrete questions: What am I forgetting? How big of a nuisance could the Python 2 vs 3 issue be? AI: Personally, I'm going to make a strong argument in favor of Python here. There are a large number of reasons for this, but I'm going to build on some of the points that other people have mentioned here: Picking a single language: It's definitely possible to mix and match languages, picking d3 for your visualization needs, FORTRAN for your fast matrix multiplies, and python for all of your networking and scripting. You can do this down the line, but keeping your stack as simple as possible is a good move, especially early on. Picking something bigger than you: You never want to be pushing up against the barriers of the language you want to use. This is a huge issue when it comes to languages like Julia and FORTRAN, which simply don't offer the full functionality of languages like python or R. Pick Community: The one most difficult thing to find in any language is community. Python is the clear winner here. If you get stuck, you ask something on SO, and someone will answer in a matter of minutes, which is simply not the case for most other languages. If you're learning something in a vacuum you will simply learn much slower. In terms of the minus points, I might actually push back on them. Deepening your knowledge of one language is a decent idea, but knowing only one language, without having practice generalizing that knowledge to other languages, is a good way to shoot yourself in the foot. I have changed my entire favored development stack three times over as many years, moving from MATLAB to Java to haskell to python. Learning to transfer your knowledge to another language is far more valuable than just knowing one. As far as feasibility, this is something you're going to see again and again in any programming career. Turing completeness means you could, in principle, do everything with nearly any general-purpose language, but you want to pick the right tool for the job.
If you see the ideal tool and decide to leave it by the roadside you're going to find yourself slowed down wishing you had some of the tools you left behind. A great example of that last point is trying to deploy R code. R's networking capabilities are hugely lacking compared to python, and if you want to deploy a service, or use slightly off-the-beaten-path packages, the fact that pip has an order of magnitude more packages than CRAN is a huge help.
H: Choosing a learning rate I'm currently working on implementing Stochastic Gradient Descent, SGD, for neural nets using back-propagation, and while I understand its purpose I have some questions about how to choose values for the learning rate. Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent? If so, how do you use this information to inform your decision about a value? If it's not, what sort of values should I choose, and how should I choose them? It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient? In short: How do I choose the learning rate for SGD? AI: Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent? In plain SGD, the answer is no. A global learning rate is used which is indifferent to the error gradient. However, the intuition you are getting at has inspired various modifications of the SGD update rule. If so, how do you use this information to inform your decision about a value? Adagrad is the most widely known of these; it scales a global learning rate $\eta$ on each dimension based on the L2 norm of the history of the error gradient on each dimension, so that the effective step size on dimension $i$ at step $t$ is roughly $\eta / \sqrt{\sum_{\tau \le t} g_{\tau,i}^2}$. Adadelta is another such training algorithm which uses both the error gradient history like adagrad and the weight update history, and has the advantage of not having to set a learning rate at all. If it's not, what sort of values should I choose, and how should I choose them? Setting learning rates for plain SGD in neural nets is usually a process of starting with a sane value such as 0.01 and then doing cross-validation to find an optimal value. Typical values range over a few orders of magnitude, from 0.0001 up to 1. It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient? Usually, the value that's best is near the highest stable learning rate, and learning rate decay/annealing (either linear or exponential) is used over the course of training. The reason behind this is that early on there is a clear learning signal, so aggressive updates encourage exploration, while later on the smaller learning rates allow for more delicate exploitation of the local error surface.
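As a rough illustration of learning rate decay, here is a sketch of per-sample SGD on a simple linear model. The data, decay schedule and constants are made up purely for illustration.

# Sketch: per-sample SGD with a simple 1/t learning-rate annealing schedule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
eta0, decay = 0.1, 0.001
for t, (x_i, y_i) in enumerate(zip(X, y), start=1):
    eta = eta0 / (1.0 + decay * t)        # annealed learning rate
    grad = (x_i @ w - y_i) * x_i          # gradient of squared error for one sample
    w -= eta * grad
print(w)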
H: Best languages for scientific computing It seems as though most languages have some number of scientific computing libraries available. Python has Scipy Rust has SciRust C++ has several including ViennaCL and Armadillo Java has Java Numerics and Colt as well as several others Not to mention languages like R and Julia designed explicitly for scientific computing. With so many options how do you choose the best language for a task? Additionally which languages will be the most performant? Python and R seem to have the most traction in the space, but logically a compiled language seems like it would be a better choice. And will anything ever outperform Fortran? Additionally compiled languages tend to have GPU acceleration, while interpreted languages like R and Python don't. What should I take into account when choosing a language, and which languages provide the best balance of utility and performance? Also are there any languages with significant scientific computing resources that I've missed? AI: This is a pretty massive question, so this is not intended to be a full answer, but hopefully this can help to inform general practice around determining the best tool for the job when it comes to data science. Generally, I have a relatively short list of qualifications I look for when it comes to any tool in this space. In no particular order they are: Performance: Basically boils down to how quickly the language does matrix multiplication, as that is more or less the most important task in data science. Scalability: At least for me personally, this comes down to ease of building a distributed system. This is an area where languages like Julia really shine. Community: With any language, you're really looking for an active community that can help you when you get stuck using whichever tool you're using. This is where python pulls very far ahead of most other languages. Flexibility: Nothing is worse than being limited by the language that you use. It doesn't happen very often, but trying to represent graph structures in haskell is a notorious pain, and Julia is filled with a lot of code architecture pains as a result of being such a young language. Ease of Use: If you want to use something in a larger environment, you want to make sure that setup is straightforward and can be automated. Nothing is worse than having to set up a finicky build on half a dozen machines. There are a ton of articles out there about performance and scalability, but in general you're going to be looking at a performance differential of maybe 5-10x between languages, which may or may not matter depending on your specific application. As far as GPU acceleration goes, cudamat is a really seamless way of getting it working with python, and the cuda library in general has made GPU acceleration far more accessible than it used to be. The two primary metrics I use for both community and flexibility are to look at the language's package manager, and the language questions on a site like SO. If there are a large number of high-quality questions and answers, it's a good sign that the community is active. Number of packages and the general activity on those packages can also be a good proxy for this metric. As far as ease of use goes, I am a firm believer that the only way to actually know is to actually set it up yourself.
There's a lot of superstition around a lot of Data Science tools, specifically things like databases and distributed computing architecture, but there's no way to really know if something is easy or hard to set up and deploy without just building it yourself.
H: Does anyone use Julia programming language? Is anyone using Julia (http://julialang.org/) for professional jobs? Or using it instead of R, Matlab, or Mathematica? Is it a good language? If you had to predict the next 5-10 years: do you think it will grow enough to become a standard in data science like R or similar? AI: I personally have used Julia for a good number of professional projects, and while, as Dirk mentioned, this is purely conjecture, I can give some insights on where Julia really stands out. The question of whether or not these reasons will prove enough to have Julia succeed as a language is anyone's guess. Distributed Systems: Julia is the easiest language I've ever dealt with in terms of building distributed systems. This is becoming more and more relevant in computing, and will potentially become a deciding factor, but the question of whether or not Julia's relative ease decides this is up for debate JIT Performance: Julia's JIT compiler is extremely fast, and while there is a lot of debate as to how accurate these benchmark numbers are, the Julia Website shows a series of relevant benchmarks Community: This is an area where Julia just isn't quite there. The community that is there is generally supportive, but not quite as knowledgeable as the R or python communities, which is a definite minus. Extensibility: This is another place where Julia is currently lacking; there is a large disconnect between the implied code patterns that Julia steers you toward and what it can actually support. The type system is currently overly bulky and difficult to use effectively. Again, can't say what this means for the future, but these are just a couple of relevant points when it comes to evaluating Julia in my opinion.
H: How to grow a list of related words based on initial keywords? I recently saw a cool feature that was once available in Google Sheets: you start by writing a few related keywords in consecutive cells, say: "blue", "green", "yellow", and it automatically generates similar keywords (in this case, other colors). See more examples in this YouTube video. I would like to reproduce this in my own program. I'm thinking of using Freebase, and it would work like this intuitively: Retrieve the list of given words in Freebase; Find their "common denominator(s)" and construct a distance metric based on this; Rank other concepts based on their "distance" to the original keywords; Display the next closest concepts. As I'm not familiar with this area, my questions are: Is there a better way to do this? What tools are available for each step? AI: The word2vec algorithm may be a good way to retrieve more elements for a list of similar words. It is an unsupervised "deep learning" algorithm that has previously been demonstrated with Wikipedia-based training data (helper scripts are provided on the Google code page). There are currently C and Python implementations. This tutorial by Radim Řehůřek, the author of the Gensim topic modelling library, is an excellent place to start. The "single topic" demonstration on the tutorial is a good example of retrieving similar words to a single term (try searching on 'red' or 'yellow'). It should be possible to extend this technique to find the words that have the greatest overall similarity to a set of input words.
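A minimal sketch of that extension with gensim is shown below. It assumes you already have pre-trained word2vec vectors on disk (the file name is hypothetical, e.g. vectors trained on a Wikipedia dump).

# Sketch: expanding a seed list of words using gensim's word2vec vectors.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("wiki_vectors.bin", binary=True)
seeds = ["blue", "green", "yellow"]

# Words closest to the combination of the seed words: candidate list extensions.
for word, score in vectors.most_similar(positive=seeds, topn=10):
    print(word, round(score, 3))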
H: What are good sources to learn about Bootstrap? I think that the Bootstrap can be useful in my work, where we have a lot of variables whose distribution we don't know. So, simulations could help. What are good sources to learn about the Bootstrap and other useful simulation methods? AI: A classic book is by B. Efron, who created the technique: Bradley Efron; Robert Tibshirani (1994). An Introduction to the Bootstrap. Chapman & Hall/CRC. ISBN 978-0-412-04231-7.
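For a feel of the basic idea before diving into the book, here is a minimal sketch of a nonparametric bootstrap confidence interval for the mean, using made-up data with an unknown (skewed) distribution.

# Sketch: bootstrap 95% confidence interval for the mean.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)   # stand-in for data with unknown distribution

boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(5000)
])
print(np.percentile(boot_means, [2.5, 97.5]))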
H: How can I transform names in a confidential data set to make it anonymous, but preserve some of the characteristics of the names? Motivation I work with datasets that contain personally identifiable information (PII) and sometimes need to share part of a dataset with third parties, in a way that doesn't expose PII and subject my employer to liability. Our usual approach here is to withhold data entirely, or in some cases to reduce its resolution; e.g., replacing an exact street address with the corresponding county or census tract. This means that certain types of analysis and processing must be done in-house, even when a third party has resources and expertise more suited to the task. Since the source data is not disclosed, the way we go about this analysis and processing lacks transparency. As a result, any third party's ability to perform QA/QC, adjust parameters or make refinements may be very limited. Anonymizing Confidential Data One task involves identifying individuals by their names, in user-submitted data, while taking into account errors and inconsistencies. A private individual might be recorded in one place as "Dave" and in another as "David," commercial entities can have many different abbreviations, and there are always some typos. I've developed scripts based on a number of criteria that determine when two records with non-identical names represent the same individual, and assign them a common ID. At this point we can make the dataset anonymous by withholding the names and replacing them with this personal ID number. But this means the recipient has almost no information about e.g. the strength of the match. We would prefer to be able to pass along as much information as possible without divulging identity. What Doesn't Work For instance, it would be great to be able to encrypt strings while preserving edit distance. This way, third parties could do some of their own QA/QC, or choose to do further processing on their own, without ever accessing (or being able to potentially reverse-engineer) PII. Perhaps we match strings in-house with edit distance <= 2, and the recipient wants to look at the implications of tightening that tolerance to edit distance <= 1. But the only method I am familiar with that does this is ROT13 (more generally, any shift cipher), which hardly even counts as encryption; it's like writing the names upside down and saying, "Promise you won't flip the paper over?" Another bad solution would be to abbreviate everything. "Ellen Roberts" becomes "ER" and so forth. This is a poor solution because in some cases the initials, in association with public data, will reveal a person's identity, and in other cases it's too ambiguous; "Benjamin Othello Ames" and "Bank of America" will have the same initials, but their names are otherwise dissimilar. So it doesn't do either of the things we want. 
An inelegant alternative is to introduce additional fields to track certain attributes of the name, e.g.: +-----+----+-------------------+-----------+--------+ | Row | ID | Name | WordChars | Origin | +-----+----+-------------------+-----------+--------+ | 1 | 17 | "AMELIA BEDELIA" | (6, 7) | Eng | +-----+----+-------------------+-----------+--------+ | 2 | 18 | "CHRISTOPH BAUER" | (9, 5) | Ger | +-----+----+-------------------+-----------+--------+ | 3 | 18 | "C J BAUER" | (1, 1, 5) | Ger | +-----+----+-------------------+-----------+--------+ | 4 | 19 | "FRANZ HELLER" | (5, 6) | Ger | +-----+----+-------------------+-----------+--------+ I call this "inelegant" because it requires anticipating which qualities might be interesting and it's relatively coarse. If the names are removed, there's not much you can reasonably conclude about the strength of the match between rows 2 & 3, or about the distance between rows 2 & 4 (i.e., how close they are to matching). Conclusion The goal is to transform strings in such a way that as many useful qualities of the original string are preserved as possible while obscuring the original string. Decryption should be impossible, or so impractical as to be effectively impossible, no matter the size of the data set. In particular, a method that preserves the edit distance between arbitrary strings would be very useful. I've found a couple papers that might be relevant, but they're a bit over my head: Privacy Preserving String Comparisons Based on Levenshtein Distance An Empirical Comparison of Approaches to Approximate String Matching in Private Record Linkage AI: One of the references I mentioned in the OP led me to a potential solution that seems quite powerful, described in "Privacy-preserving record linkage using Bloom filters" (doi:10.1186/1472-6947-9-41): A new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers. The article goes into detail about the method, which I will summarize here to the best of my ability. A Bloom filter is a fixed-length series of bits storing the results of a fixed set of independent hash functions, each computed on the same input value. The output of each hash function should be an index value from among the possible indexes in the filter; i.e., if you have a 0-indexed series of 10 bits, hash functions should return (or be mapped to) values from 0 to 9. The filter starts with each bit set to 0. After hashing the input value with each function from the set of hash functions, each bit corresponding to an index value returned by any hash function is set to 1. If the same index is returned by more than one hash function, the bit at that index is only set once. You could consider the Bloom filter to be a superposition of the set of hashes onto the fixed range of bits. The protocol described in the above-linked article divides strings into n-grams, which are in this case sets of characters. As an example, "hello" might yield the following set of 2-grams: ["_h", "he", "el", "ll", "lo", "o_"] Padding the front and back with spaces seems to be generally optional when constructing n-grams; the examples given in the paper that proposes this method use such padding. Each n-gram can be hashed to produce a Bloom filter, and this set of Bloom filters can be superimposed on itself (bitwise OR operation) to produce the Bloom filter for the string. 
If the filter contains many more bits than there are hash functions or n-grams, arbitrary strings are relatively unlikely to produce exactly the same filter. However, the more n-grams two strings have in common, the more bits their filters will ultimately share. You can then compare any two filters A, B by means of their Dice coefficient: $D_{A,B} = 2h / (a + b)$, where h is the number of bits that are set to 1 in both filters, a is the number of bits set to 1 in filter A, and b is the number of bits set to 1 in filter B. If the strings are exactly the same, the Dice coefficient will be 1; the more they differ, the closer the coefficient will be to 0. Because the hash functions are mapping an indeterminate number of unique inputs to a small number of possible bit indexes, different inputs may produce the same filter, so the coefficient indicates only a probability that the strings are the same or similar. The number of different hash functions and the number of bits in the filter are important parameters for determining the likelihood of false positives - pairs of inputs that are much less similar than the Dice coefficient produced by this method predicts. I found this tutorial to be very helpful for understanding the Bloom filter. There is some flexibility in the implementation of this method; see also this 2010 paper (also linked at the end of the question) for some indications of how performant it is in relation to other methods, and with various parameters.
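The following sketch illustrates the mechanics with 2-grams, a small filter and a few hash seeds. The parameters are illustrative rather than tuned, and a real privacy-preserving deployment would use keyed (HMAC) hashes rather than plain SHA-1.

# Sketch: Bloom-filter-of-q-grams string comparison with the Dice coefficient.
import hashlib

FILTER_BITS = 100
NUM_HASHES = 4

def qgrams(s, q=2):
    s = "_" + s.lower() + "_"
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def bloom_bits(s):
    bits = set()                      # indexes of bits set to 1
    for gram in qgrams(s):
        for seed in range(NUM_HASHES):
            h = hashlib.sha1(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(h, 16) % FILTER_BITS)
    return bits

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom_bits("CHRISTOPH BAUER"), bloom_bits("C J BAUER")))    # fairly similar
print(dice(bloom_bits("CHRISTOPH BAUER"), bloom_bits("FRANZ HELLER"))) # dissimilar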
H: What are the use cases for Apache Spark vs Hadoop With Hadoop 2.0 and YARN, Hadoop is supposedly no longer tied only to map-reduce solutions. With that advancement, what are the use cases for Apache Spark vs Hadoop considering both sit atop of HDFS? I've read through the introduction documentation for Spark, but I'm curious if anyone has encountered a problem that was more efficient and easier to solve with Spark compared to Hadoop. AI: Hadoop means HDFS, YARN, MapReduce, and a lot of other things. Do you mean Spark vs MapReduce? Because Spark runs on/with Hadoop, which is rather the point. The primary reason to use Spark is for speed, and this comes from the fact that its execution can keep data in memory between stages rather than always persist back to HDFS after a Map or Reduce. This advantage is very pronounced for iterative computations, which have tens of stages each of which is touching the same data. This is where things might be "100x" faster. For simple, one-pass ETL-like jobs for which MapReduce was designed, it's not in general faster. Another reason to use Spark is its nicer high-level language compared to MapReduce. It provides a functional programming-like view that mimics Scala, which is far nicer than writing MapReduce code. (Although you have to either use Scala, or adopt the slightly-less-developed Java or Python APIs for Spark). Crunch and Cascading already provide a similar abstraction on top of MapReduce, but this is still an area where Spark is nice. Finally Spark has as-yet-young but promising subprojects for ML, graph analysis, and streaming, which expose a similar, coherent API. With MapReduce, you would have to turn to several different other projects for this (Mahout, Giraph, Storm). It's nice to have it in one package, albeit not yet 'baked'. Why would you not use Spark? paraphrasing myself: Spark is primarily Scala, with ported Java APIs; MapReduce might be friendlier and more native for Java-based developers There is more MapReduce expertise out there now than Spark For the data-parallel, one-pass, ETL-like jobs MapReduce was designed for, MapReduce is lighter-weight compared to the Spark equivalent Spark is fairly mature, and so is YARN now, but Spark-on-YARN is still pretty new. The two may not be optimally integrated yet. For example until recently I don't think Spark could ask YARN for allocations based on number of cores? That is: MapReduce might be easier to understand, manage and tune
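For a flavor of the higher-level API mentioned above, here is a minimal PySpark word count sketch run in local mode; the input path is hypothetical, and a real job would read from HDFS and be submitted to the cluster.

# Sketch: word count with Spark's functional API.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")
counts = (sc.textFile("data/corpus.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

# cache() keeps the RDD in memory, which is where iterative jobs gain the most.
counts.cache()
print(counts.take(10))
sc.stop()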
H: How to speedup message passing between computing nodes I'm developing a distributed application, and as it's been designed, there'll be a great load of communication during the processing. Since the communication is already as much spread along the entire process as possible, I'm wondering if there are any standard solutions to improve the performance of the message passing layer of my application. What changes/improvements could I apply to my code to reduce the time spent sending messages? For what it's worth, I'm communicating up to 10GB between 9 computing nodes, and the framework I'm using is implemented with OpenMPI. AI: Firstly, I would generally agree with everything that AirThomas suggested. Caching things is generally good if you can, but I find it slightly brittle since that's very dependent on exactly what your application is. Data compression is another very solid suggestion, but my impression on both of these is that the speedups you're looking at are going to be relatively marginal. Maybe as high as 2-5x, but I would be very surprised if they were any faster than that. Under the assumption that pure I/O (writing to/reading from memory) is not your limiting factor (if it is, you're probably not going to get a lot faster), I would make a strong plug for zeromq. In the words of the creators: We took a normal TCP socket, injected it with a mix of radioactive isotopes stolen from a secret Soviet atomic research project, bombarded it with 1950-era cosmic rays, and put it into the hands of a drug-addled comic book author with a badly-disguised fetish for bulging muscles clad in spandex. Yes, ØMQ sockets are the world-saving superheroes of the networking world. While that may be a little dramatic, zeromq sockets in my opinion are one of the most amazing pieces of software that the world of computer networks has put together in several years. I'm not sure what you're using for your message-passing layer right now, but if you're using something traditional like rabbitmq, you're liable to see speedups of multiple orders of magnitude (personally noticed about 500x, but that depends a lot on the architecture). Check out some basic benchmarks here.
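A minimal pyzmq sketch of a PUSH/PULL pair follows. Both ends are shown in one process purely for illustration; on a real cluster the producer and workers run on different nodes and the address is the producer's host and port.

# Sketch: ZeroMQ PUSH/PULL messaging with pyzmq.
import zmq

context = zmq.Context()

sender = context.socket(zmq.PUSH)
sender.bind("tcp://127.0.0.1:5557")

receiver = context.socket(zmq.PULL)
receiver.connect("tcp://127.0.0.1:5557")

sender.send(b"chunk-of-work")      # in practice: large, possibly compressed blobs
print(receiver.recv())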
H: K-means vs. online K-means K-means is a well known algorithm for clustering, but there is also an online variation of such algorithm (online K-means). What are the pros and cons of these approaches, and when should each be preferred? AI: Online k-means (more commonly known as sequential k-means) and traditional k-means are very similar. The difference is that online k-means allows you to update the model as new data is received. Online k-means should be used when you expect the data to be received one by one (or maybe in chunks). This allows you to update your model as you get more information about it. The drawback of this method is that it is dependent on the order in which the data is received (ref).
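In scikit-learn, the closest practical tool is MiniBatchKMeans. It is not literally MacQueen's sequential k-means, but partial_fit gives the same "update the model as new data arrives" workflow; the data below is made up.

# Sketch: updating a clustering as data arrives in chunks.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
model = MiniBatchKMeans(n_clusters=3, random_state=0)

for _ in range(100):                  # pretend chunks of data arrive over time
    chunk = rng.normal(size=(50, 2))
    model.partial_fit(chunk)

print(model.cluster_centers_)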
H: Suggest text classifier training datasets Which freely available datasets can I use to train a text classifier? We are trying to enhance our users' engagement by recommending the most related content to each user, so we thought that if we classified our content based on a predefined bag of words, we could recommend engaging content by getting the user's feedback on a random number of posts that were already classified. We could then use this info to recommend pulses labeled with those classes. But we found that if we used a predefined bag of words not related to our content, the feature vector would be full of zeros, and the categories might not be relevant to our content. So for those reasons we tried another solution, which is clustering our content rather than classifying it. Thanks :) AI: Some standard datasets for text classification are 20 Newsgroups, Reuters (with 8 and 52 classes) and WebKB. You can find all of them here.
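20 Newsgroups in particular ships with scikit-learn, which makes it a convenient starting point for a baseline; a minimal sketch (downloads the data on first run):

# Sketch: a simple text classification baseline on 20 Newsgroups.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

vec = TfidfVectorizer(stop_words="english")
X_train, X_test = vec.fit_transform(train.data), vec.transform(test.data)

clf = MultinomialNB().fit(X_train, train.target)
print(clf.score(X_test, test.target))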
H: Network structure: k-cliques vs. p-cliques In network structure, what is the difference between k-cliques and p-cliques? Can anyone give a brief explanation with examples? Thanks in advance! ============================ EDIT: I found an online ppt while I was googling; please take a look at p.37 and p.39, can you comment on them? AI: In graph theory a clique indicates a fully connected set of nodes: as noted here, a p-clique simply indicates a clique comprised of p nodes. The k-clique problem, by contrast, takes as input an undirected graph and a number k, and the output is a clique of size k if one exists. Clique Problem
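To make the terms concrete, here is a small sketch with networkx on a made-up graph: a "p-clique" is just a clique on p nodes, and the k-clique problem asks whether a clique of size k exists.

# Sketch: enumerating cliques of a given size with networkx.
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5), (4, 6)])

k = 3
cliques_of_size_k = [c for c in nx.enumerate_all_cliques(G) if len(c) == k]
print(cliques_of_size_k)   # e.g. [[1, 2, 3], [3, 4, 5]]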
H: Is logistic regression actually a regression algorithm? The usual definition of regression (as far as I am aware) is predicting a continuous output variable from a given set of input variables. Logistic regression is a binary classification algorithm, so it produces a categorical output. Is it really a regression algorithm? If so, why? AI: Logistic regression is regression, first and foremost. It becomes a classifier by adding a decision rule. I will give an example that goes backwards. That is, instead of taking data and fitting a model, I'm going to start with the model in order to show how this is truly a regression problem. In logistic regression, we are modeling the log odds, or logit, that an event occurs, which is a continuous quantity. If the probability that event $A$ occurs is $P(A)$, the odds are: $$\frac{P(A)}{1 - P(A)}$$ The log odds, then, are: $$\log \left( \frac{P(A)}{1 - P(A)}\right)$$ As in linear regression, we model this with a linear combination of coefficients and predictors: $$\operatorname{logit} = b_0 + b_1x_1 + b_2x_2 + \cdots$$ Imagine we are given a model of whether a person has gray hair. Our model uses age as the only predictor. Here, our event A = a person has gray hair: log odds of gray hair = -10 + 0.25 * age ...Regression! Here is some Python code and a plot:

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

x = np.linspace(0, 100, 100)

def log_odds(x):
    return -10 + .25 * x

plt.plot(x, log_odds(x))
plt.xlabel("age")
plt.ylabel("log odds of gray hair")

Now, let's make it a classifier. First, we need to transform the log odds to get out our probability $P(A)$. We can use the sigmoid function: $$P(A) = \frac{1}{1 + \exp(-\text{log odds})}$$ Here's the code:

plt.plot(x, 1 / (1 + np.exp(-log_odds(x))))
plt.xlabel("age")
plt.ylabel("probability of gray hair")

The last thing we need to make this a classifier is to add a decision rule. One very common rule is to classify a success whenever $P(A) > 0.5$. We will adopt that rule, which implies that our classifier will predict gray hair whenever a person is older than 40 and will predict non-gray hair whenever a person is under 40. Logistic regression works great as a classifier in more realistic examples too, but before it can be a classifier, it must be a regression technique!
H: Is GLM a statistical or machine learning model? I thought that generalized linear model (GLM) would be considered a statistical model, but a friend told me that some papers classify it as a machine learning technique. Which one is true (or more precise)? Any explanation would be appreciated. AI: A GLM is absolutely a statistical model, but statistical models and machine learning techniques are not mutually exclusive. In general, statistics is more concerned with inferring parameters, whereas in machine learning, prediction is the ultimate goal.
H: What statistical model should I use to analyze the likelihood that a single event influenced longitudinal data I am trying to find a formula, method, or model to use to analyze the likelihood that a specific event influenced some longitudinal data. I am having difficulty figuring out what to search for on Google. Here is an example scenario: Imagine you own a business that has an average of 100 walk-in customers every day. One day, you decide you want to increase the number of walk-in customers arriving at your store each day, so you pull a crazy stunt outside your store to get attention. Over the next week, you see on average 125 customers a day. Over the next few months, you again decide that you want to get some more business, and perhaps sustain it a bit longer, so you try some other random things to get more customers in your store. Unfortunately, you are not the best marketer, and some of your tactics have little or no effect, and others even have a negative impact. What methodology could I use to determine the probability that any one individual event positively or negatively impacted the number of walk-in customers? I am fully aware that correlation does not necessarily equal causation, but what methods could I use to determine the likely increase or decrease in your business's daily walk-in clients following a specific event? I am not interested in analyzing whether or not there is a correlation between your attempts to increase the number of walk-in customers, but rather whether or not any one single event, independent of all others, was impactful. I realize that this example is rather contrived and simplistic, so I will also give you a brief description of the actual data that I am using: I am attempting to determine the impact that a particular marketing agency has on their clients' websites when they publish new content, perform social media campaigns, etc. For any one specific agency, they may have anywhere from 1 to 500 clients. Each client has websites ranging in size from 5 pages to well over 1 million. Over the course of the past 5 years, each agency has annotated all of their work for each client, including the type of work that was done, the number of webpages on a website that were influenced, the number of hours spent, etc. Using the above data, which I have assembled into a data warehouse (placed into a bunch of star/snowflake schemas), I need to determine how likely it was that any one piece of work (any one event in time) had an impact on the traffic hitting any/all pages influenced by a specific piece of work. I have created models for 40 different types of content that are found on a website that describe the typical traffic pattern a page with said content type might experience from launch date until present. Normalized relative to the appropriate model, I need to determine the highest and lowest number of increased or decreased visitors a specific page received as the result of a specific piece of work. While I have experience with basic data analysis (linear and multiple regression, correlation, etc), I am at a loss for how to approach solving this problem. Whereas in the past I have typically analyzed data with multiple measurements for a given axis (for example temperature vs thirst vs animal and determined the impact on thirst that increased temperature has across animals), I feel that above, I am attempting to analyze the impact of a single event at some point in time for a non-linear, but predictable (or at least model-able), longitudinal dataset.
I am stumped :( Any help, tips, pointers, recommendations, or directions would be extremely helpful and I would be eternally grateful! AI: For the record, I think this is the type of question that's perfect for the data science Stack Exchange. I hope we get a bunch of real world examples of data problems and several perspectives on how best to solve them. I would encourage you not to use p-values as they can be pretty misleading (1, 2). My approach hinges on you being able to summarize traffic on a given page before and after some intervention. What you care about is the difference in the rate before and after the intervention. That is, how does the number of hits per day change? Below, I explain a first stab approach with some simulated example data. I will then explain one potential pitfall (and what I would do about it). First, let's think about one page before and after an intervention. Pretend the intervention increases hits per day by roughly 15%:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

def simulate_data(true_diff=0):
    # First choose a number of days between [1, 1000] before the intervention
    num_before = np.random.randint(1, 1001)

    # Next choose a number of days between [1, 1000] after the intervention
    num_after = np.random.randint(1, 1001)

    # Next choose a rate for before the intervention. How many views per day on average?
    rate_before = np.random.randint(50, 151)

    # The intervention causes a `true_diff` increase on average (but is also random)
    rate_after = np.random.normal(1 + true_diff, .1) * rate_before

    # Simulate viewers per day:
    vpd_before = np.random.poisson(rate_before, size=num_before)
    vpd_after = np.random.poisson(rate_after, size=num_after)

    return vpd_before, vpd_after

vpd_before, vpd_after = simulate_data(.15)

plt.hist(vpd_before, histtype="step", bins=20, density=True, lw=2)
plt.hist(vpd_after, histtype="step", bins=20, density=True, lw=2)
plt.legend(("before", "after"))
plt.title("Views per day before and after intervention")
plt.xlabel("Views per day")
plt.ylabel("Frequency")
plt.show()

We can clearly see that the intervention increased the number of hits per day, on average. But in order to quantify the difference in rates, we should use one company's intervention for multiple pages. Since the underlying rate will be different for each page, we should compute the percent change in rate (again, the rate here is hits per day). Now, let's pretend we have data for n = 100 pages, each of which received an intervention from the same company. To get the percent difference we take (mean(hits per day after) - mean(hits per day before)) / mean(hits per day before):

n = 100
pct_diff = np.zeros(n)

for i in range(n):
    vpd_before, vpd_after = simulate_data(.15)

    # % difference. Note: this is the thing we want to infer
    pct_diff[i] = (vpd_after.mean() - vpd_before.mean()) / vpd_before.mean()

plt.hist(pct_diff)
plt.title("Distribution of percent change")
plt.xlabel("Percent change")
plt.ylabel("Frequency")
plt.show()

Now we have the distribution of our parameter of interest! We can query this result in different ways. For example, we might want to know the mode, or (approximation of) the most likely value for this percent change:

def mode_continuous(x, num_bins=None):
    if num_bins is None:
        counts, bins = np.histogram(x)
    else:
        counts, bins = np.histogram(x, bins=num_bins)
    ndx = np.argmax(counts)
    return bins[ndx:(ndx+1)].mean()

mode_continuous(pct_diff, 20)

When I ran this I got 0.126, which is not bad, considering our true percent change is 0.15.
We can also see the number of positive changes, which approximates the probability that a given company's intervention improves hits per day: (pct_diff > 0).mean() Here, my result is 0.93, so we could say there's a pretty good chance that this company is effective. Finally, a potential pitfall: Each page probably has some underlying trend that you should probably account for. That is, even without the intervention, hits per day may increase. To account for this, I would estimate a simple linear regression where the outcome variable is hits per day and the independent variable is day (start at day=0 and simply increment for all the days in your sample). Then subtract the estimate, y_hat, from each number of hits per day to de-trend your data. Then you can do the above procedure and be confident that a positive percent difference is not due to the underlying trend. Of course, the trend may not be linear, so use discretion! Good luck!
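A minimal sketch of that de-trending step is shown below; `hits` is a hypothetical array of one page's views per day, ordered by day.

# Sketch: remove a linear trend from daily hits before the before/after comparison.
import numpy as np

hits = np.array([102, 99, 110, 108, 115, 117, 121, 125])  # made-up daily views
days = np.arange(len(hits))

slope, intercept = np.polyfit(days, hits, deg=1)
y_hat = slope * days + intercept
detrended = hits - y_hat     # use this in place of the raw counts above
print(detrended.round(1))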
H: Cross-validation: K-fold vs Repeated random sub-sampling I wonder which type of model cross-validation to choose for classification problem: K-fold or random sub-sampling (bootstrap sampling)? My best guess is to use 2/3 of the data set (which is ~1000 items) for training and 1/3 for validation. In this case K-fold gives only three iterations(folds), which is not enough to see stable average error. On the other hand I don't like random sub-sampling feature: that some items won't be ever selected for training/validation, and some will be used more than once. Classification algorithms used: random forest & logistic regression. AI: If you have an adequate number of samples and want to use all the data, then k-fold cross-validation is the way to go. Having ~1,500 seems like a lot but whether it is adequate for k-fold cross-validation also depends on the dimensionality of the data (number of attributes and number of attribute values). For example, if each observation has 100 attributes, then 1,500 observations is low. Another potential downside to k-fold cross-validation is the possibility of a single, extreme outlier skewing the results. For example, if you have one extreme outlier that can heavily bias your classifier, then in a 10-fold cross-validation, 9 of the 10 partitions will be affected (though for random forests, I don't think you would have that problem). Random subsampling (e.g., bootstrap sampling) is preferable when you are either undersampled or when you have the situation above, where you don't want each observation to appear in k-1 folds.
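As a practical note, stratified k-fold cross-validation keeps the class balance roughly constant across folds, which helps with classification problems of this size. A minimal scikit-learn sketch with toy data standing in for the real dataset:

# Sketch: 10-fold stratified cross-validation of a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1500, n_features=10, weights=[0.8, 0.2],
                           random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean(), scores.std())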
H: How to select algorithms for ensemble methods? There is a general recommendation that algorithms in ensemble learning combinations should be different in nature. Is there a classification table, a scale or some rules that allow to evaluate how far away are the algorithms from each other? What are the best combinations? AI: In general in an ensemble you try to combine the opinions of multiple classifiers. The idea is like asking a bunch of experts on the same thing. You get multiple opinions and you later have to combine their answers (e.g. by a voting scheme). For this trick to work you want the classifiers to be different from each other, that is you don't want to ask the same "expert" twice for the same thing. In practice, the classifiers do not have to be different in the sense of a different algorithm. What you can do is train the same algorithm with different subset of the data or a different subset of features (or both). If you use different training sets you end up with different models and different "independent" classifiers. There is no golden rule on what works best in general. You have to try to see if there is an improvement for your specific problem.
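One simple sketch of that idea: train the same algorithm on different random feature subsets and combine the predictions by majority vote. The toy data and subset sizes are purely illustrative.

# Sketch: an ensemble of one algorithm trained on different feature subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

models, feature_sets = [], []
for _ in range(11):                            # odd number avoids tied votes
    cols = rng.choice(X.shape[1], size=10, replace=False)
    models.append(LogisticRegression(max_iter=1000).fit(X[:, cols], y))
    feature_sets.append(cols)

votes = np.array([m.predict(X[:, cols]) for m, cols in zip(models, feature_sets)])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote
print((ensemble_pred == y).mean())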
H: Binary classification model for unbalanced data I have a dataset with the following specifications: Training dataset with 193,176 samples with 2,821 positives Test Dataset with 82,887 samples with 673 positives There are 10 features. I want to perform a binary classification (0 or 1). The issue I am facing is that the data is very unbalanced. After normalization and scaling the data along with some feature engineering and using a couple of different algorithms, these are the best results I could achieve: mean square error : 0.00804710026904 Confusion matrix : [[82214 667] [ 0 6]] i.e only 6 correct positive hits. This is using logistic regression. Here are the various things I tried with this: Different algorithms like RandomForest, DecisionTree, SVM Changing parameters value to call the function Some intuition based feature engineering to include compounded features Now, my questions are: What can I do to improve the number of positive hits ? How can one determine if there is an overfit in such a case ? ( I have tried plotting etc. ) At what point could one conclude if maybe this is the best possible fit I could have? ( which seems sad considering only 6 hits out of 673 ) Is there a way I could make the positive sample instances weigh more so the pattern recognition improves leading to more hits ? Which graphical plots could help detect outliers or some intuition about which pattern would fit the best? I am using the scikit-learn library with Python and all implementations are library functions. edit: Here are the results with a few other algorithms: Random Forest Classifier(n_estimators=100) [[82211 667] [ 3 6]] Decision Trees: [[78611 635] [ 3603 38]] AI: Since you are doing binary classification, have you tried adjusting the classification threshold? Since your algorithm seems rather insensitive, I would try lowering it and check if there is an improvement. You can always use Learning Curves, or a plot of one model parameter vs. Training and Validation error to determine whether your model is overfitting. It seems it is under fitting in your case, but that's just intuition. Well, ultimately it depends on your dataset, and the different models you have tried. At this point, and without further testing, there can not be a definite answer. Without claiming to be an expert on the topic, there are a number of different techniques you may follow (hint: first link on google), but in my opinion you should first make sure you choose your cost function carefully, so that it represents what you are actually looking for. Not sure what you mean by pattern intuition, can you elaborate? By the way, what were your results with the different algorithms you tried? Were they any different?
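Regarding questions 1 and 4 and the threshold suggestion above, here is a minimal scikit-learn sketch of two levers: up-weighting the rare positive class during training and lowering the decision threshold at prediction time. Toy imbalanced data stands in for the real dataset, and the specific weights and threshold would need tuning.

# Sketch: class weighting plus a lowered classification threshold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=10, weights=[0.985, 0.015],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight="balanced" re-weights classes inversely to their frequency.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

# predict() uses a 0.5 threshold by default; lowering it trades precision for recall.
probs = clf.predict_proba(X_test)[:, 1]
print(confusion_matrix(y_test, (probs > 0.3).astype(int)))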
H: Score matrix string similarity I have a load of documents, which have a load of key value pairs in them. The key might not be unique so there might be multiple keys of the same type with different values. I want to compare the similarity of the keys between 2 documents. More specifically the string similarity of these values. I am thinking of using something like the Smith-Waterman Algorithm to compare the similarity. So I've drawn a picture of how I'm thinking about representing the data - The values in the cells are the result of the smith-waterman algorithm (or some other string similarity metric). Imagine that this matrix represents a key type of "things". I then need to add the "things" similarity score into a vector of 0 or 1. That's ok. What I can't figure out is how I determine if the matrix is similar or not similar - ideally I want to convert the matrix to a number between 0 and 1 and then I'll just set a threshold to score it as either 0 or 1. Any ideas how I can create a score of the matrix? Does anyone know any algorithms that do this type of thing (obviously things like how smith waterman works is kind of applicable). AI: As I understood, Document 1 and Document 2 may have different numbers of keys, and you want to get a final similarity evaluation between 0 and 1. If so, I would propose the following algorithm: 1. Set Sum of max. vals equal to 0. 2. Select the maximum value from the doc-doc matrix and add it to Sum of max. vals. 3. Remove the row and column with the maximum value from the matrix. 4. Repeat steps 2-3 until rows or columns are exhausted. 5. Divide Sum of max. vals by the average number of keys in the two documents. The final estimation would be equal to 1 if both documents have identical length and every word from Doc 1 has an equivalent in Doc 2. You haven't mentioned the software you are using, but here is an R example of a function computing such similarity (it takes an object of class matrix as input):

eval.sim <- function(sim.matrix){
  similarity <- 0
  denominator <- sum(dim(sim.matrix)) / 2
  for(i in 1:(min(c(nrow(sim.matrix), ncol(sim.matrix))) - 1)){
    extract <- which(sim.matrix == max(sim.matrix), arr.ind=T)[1, ]
    similarity <- similarity + sim.matrix[extract[1], extract[2]]
    sim.matrix <- sim.matrix[-extract[1], -extract[2]]
  }
  # add the best remaining value left over after the loop
  similarity <- similarity + max(sim.matrix)
  similarity <- similarity / denominator
  similarity
}

In python -

import numpy as np

def score_matrix(sim_matrix):
    similarity = 0
    denominator = sum(sim_matrix.shape) / 2
    for i in range(min(sim_matrix.shape)):
        x, y = np.where(sim_matrix == np.max(sim_matrix))[0][0], np.where(sim_matrix == np.max(sim_matrix))[1][0]
        similarity += sim_matrix[x, y]
        sim_matrix = np.delete(sim_matrix, (x), axis=0)
        sim_matrix = np.delete(sim_matrix, (y), axis=1)
    return similarity / denominator
H: Detecting cats visually by means of anomaly detection I have a hobby project which I am contemplating committing to as a way of increasing my so far limited experience of machine learning. I have taken and completed the Coursera MOOC on the topic. My question is with regards to the feasibility of the project. The task is the following: Neighboring cats are from time to time visiting my garden, which I dislike since they tend to defecate on my lawn. I would like to have a warning system that alerts me when there's a cat present so that I may go chase it off using my super soaker. For simplicity's sake, say that I only care about a cat with black and white coloring. I have setup a raspberry pi with camera module that can capture video and/or pictures of a part of the garden. Sample image: My first idea was to train a classifier to identify cat or cat-like objects, but after realizing that I will be unable to obtain a large enough number of positive samples, I have abandoned that in favor of anomaly detection. I estimate that if I captured a photo every second of the day, I would end up with maybe five photos containing cats (out of about 60,000 with sunlight) per day. Is this feasible using anomaly detection? If so, what features would you suggest? My ideas so far would be to simply count the number of pixels that have certain colors; do some kind of blob detection/image segmenting (which I do not know how to do, and would thus like to avoid) and perform the same color analysis on them. AI: You could simplify your problem significantly by using a motion/change detection approach. For example, you could compare each image/frame with one from an early time (e.g., a minute earlier), then only consider pixels that have changed since the earlier time. You could then extract the rectangular region of change and use that as the basis for your classification or anomaly detection. Taking this type of approach can significantly simplify your classifier and reduce your false target rate because you can ignore anything that is not roughly the size of a cat (e.g., a person or bird). You would then use the extracted change regions that were not filtered out to form the training set for your classifier (or anomaly detector). Just be sure to get your false target rate sufficiently low before mounting a laser turret to your feline intrusion detection system.
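A minimal sketch of that change-detection step with OpenCV is below. The file names are placeholders (on a Raspberry Pi the frames would come from the camera module), and the size thresholds are made up and would need tuning.

# Sketch: frame differencing and cat-sized blob extraction with OpenCV.
import cv2

earlier = cv2.imread("frame_earlier.jpg", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("frame_current.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(current, earlier)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Keep only blobs in a plausible cat-size range (OpenCV 4.x return signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours
              if 500 < cv2.contourArea(c) < 20000]
print(candidates)   # (x, y, w, h) regions to feed into the classifier / detector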
H: Standardize numbers for ranking ratios I'm trying to rank some percentages. I have numerators and denominators for each ratio. To give a concrete example, consider ratio as total graduates / total students in a school. But the issue is that total students vary over a long range (1000-20000). Smaller schools seem to have higher percentage of students graduating, but I want to standardize it, and not let the size of the school affect the ranking. Is there a way to do it? AI: This is relatively simple to do mathematically. First, fit a regression line to the scatter plot of "total graduates" (y) vs. "total students" (x). You will probably see a downward sloping line if your assertion is correct (smaller schools graduate a higher %). You can identify the slope and y-intercept for this line to convert it into an equation y = mx + b, and then do a little algebra to convert the equation into normalized form: "y / x = m + b / x" Then, with all the ratios in your data, you should subtract this RHS: normalized ratio = (total grads / total students) - (m + b / total students) If the result is positive, then the ratio is above normal for that size (i.e. above the regression line) and if it is negative it is below the regression line. If you want all positive numbers, you can add a positive constant to move all results above zero. This is how to do it mathematically, but I suggest that you consider whether it is wise, from a data analysis point of view, to normalize by school size. This depends on the purpose of your analysis and specifically how this ratio is being analyzed in relation to other data.
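A small numpy sketch of that normalization, with made-up school data standing in for the real numbers:

# Sketch: regression-based normalization of graduation ratios by school size.
import numpy as np

total_students = np.array([1200, 3500, 800, 15000, 20000])
total_grads = np.array([1100, 3000, 760, 11500, 15000])

m, b = np.polyfit(total_students, total_grads, deg=1)   # fit y = m*x + b
normalized_ratio = total_grads / total_students - (m + b / total_students)
print(normalized_ratio.round(3))   # > 0 means above the regression line for that size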
H: How to use neural networks with large and variable number of inputs? I'm new to machine learning, but I have an interesting problem. I have a large sample of people and visited sites. Some people have indicated gender, age, and other parameters. Now I want to restore these parameters for each user. Which way do I look for? Which algorithm is suitable to solve this problem? I'm familiar with Neural Networks (supervised learning), but it seems they don't fit. AI: I had almost the same problem: 'restoring' age, gender, location for social network users. But I used users' ego-networks, not visited sites statistics. And I was faced with two almost independent tasks: 'Restoring' or 'predicting' data. You can use a bunch of different techniques to complete this task, but my vote is for the simplest ones (KISS, yes). E.g., in my case, for age prediction, the mean of ego-network users' ages gave satisfactory results (for about 70% of users the error was less than +/-3 years, which in my case was enough). It's just an idea, but you can try using a weighted average for age prediction, defining the weight as a similarity measure between the visited-site sets of the current user and other users. Evaluating prediction quality. The algorithm from task 1 will produce a prediction in almost all cases. And the second task is to determine whether the prediction is reliable. E.g., in the case of ego networks and age prediction: can we trust the prediction if a user has only one 'friend' in his ego network? This task is more about machine learning: it's a binary classification problem. You need to compose a feature set and form training and test samples from your data with both right and wrong predictions. Creating an appropriate classifier will help you to filter out unpredictable users. But you need to determine what your feature set is. I used a number of network metrics, and summary statistics on the distribution of the feature of interest within the ego-network. This approach wouldn't populate all the gaps, but only the predictable ones.
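A tiny sketch of the weighted-average idea applied to visited sites: predict a user's age from other users, weighting by Jaccard similarity of their site sets. All data here is made up.

# Sketch: similarity-weighted average for attribute 'restoration'.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

target_sites = {"news.com", "cars.com", "python.org"}
known_users = [
    ({"news.com", "cars.com"}, 34),
    ({"python.org", "news.com", "games.io"}, 27),
    ({"knitting.net"}, 61),
]

weights = [jaccard(target_sites, sites) for sites, _ in known_users]
ages = [age for _, age in known_users]
predicted_age = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
print(round(predicted_age, 1))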
H: Any differences in regularisation in MLP between batch and individual updates? I have just learned about regularisation as an approach to control over-fitting, and I would like to incorporate the idea into a simple implementation of backpropagation and Multilayer perceptron (MLP) that I put together. Currently to avoid over-fitting, I cross-validate and keep the network with best score so far on the validation set. This works OK, but adding regularisation would benefit me in that correct choice of the regularisation algorithm and parameter would make my network converge on a non-overfit model more systematically. The formula I have for the update term (from Coursera ML course) is stated as a batch update e.g. for each weight, after summing all the applicable deltas for the whole training set from error propagation, an adjustment of lambda * current_weight is added as well before the combined delta is subtracted at the end of the batch, where lambda is the regularisation parameter. My implementation of backpropagation uses per-item weight updates. I am concerned that I cannot just copy the batch approach, although it looks OK intuitively to me. Does a smaller regularisation term per item work just as well? For instance lambda * current_weight / N where N is the size of the training set - at first glance this looks reasonable. I could not find anything on the subject though, and I wonder if that is because regularisation does not work as well with a per-item update, or even goes under a different name or altered formula. AI: Regularization is relevant in per-item learning as well. I would suggest starting with a basic validation approach for finding out lambda, whether you are doing batch or per-item learning. This is the easiest and safest approach. Try manually with a number of different values, e.g. 0.001, 0.003, 0.01, 0.03, 0.1, etc., and see how your validation set behaves. Later on you may automate this process by introducing a linear or local search method. As a side note, I believe the value of lambda should be considered in relation to the updates of the parameter vector, rather than the training set size. For batch training you have one parameter update per dataset pass, while for online one update per sample (regardless of the training set size). I recently stumbled upon this Crossvalidated Question, which seems quite similar to yours. There is a link to a paper about a new SGD algorithm, with some relevant content. It might be useful to take a look (especially pages 1742-1743).
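For concreteness, here is a sketch of the per-item update the question proposes: each sample's gradient step also shrinks the weights by a regularization term scaled by lambda / N. It is shown for a single linear unit with squared error, with made-up data, rather than a full MLP.

# Sketch: per-sample weight update with an L2 term scaled by lambda / N.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([0.5, -1.0, 2.0, 0.0, 0.3]) + rng.normal(scale=0.1, size=200)

w = np.zeros(5)
lr, lam, N = 0.01, 0.1, len(X)
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        grad = (x_i @ w - y_i) * x_i
        w -= lr * (grad + (lam / N) * w)   # per-item regularization term
print(w.round(2))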
H: Difference between Hadoop MapReduce (Java) and RHadoop MapReduce I understand Hadoop MapReduce and its features, but I am confused about R MapReduce. One difference I have read about is that R works heavily in RAM, so R is integrated with Hadoop in order to perform parallel processing. My question is: R can do all the stats, math and data-science-related work, so why R MapReduce? Is there any new task I can achieve by using R MapReduce instead of Hadoop MapReduce? If yes, please specify. We can achieve the task by using R with Hadoop (directly), but what is the importance of MapReduce in R and how is it different from normal MapReduce? AI: rhadoop (the part you are interested in is now called rmr2) is simply a client API for MapReduce written in R. You invoke MapReduce using R package APIs, and send an R function to the workers, where it is executed by an R interpreter locally. But it is otherwise exactly the same MapReduce. You can call anything you like in R this way, but no R functions are themselves parallelized to use MapReduce in this way. The point is simply that you can invoke M/R from R. I don't think it somehow lets you do anything more magical than that.
H: Do you need a virtual machine for data science? I am brand new to the field of data science, want to break into it, and there are so many tools out there. These VMs have a lot of software on them, but I haven't been able to find any side-by-side comparison. Here's a start from my research, but if someone could tell me that one is objectively more rich-featured, with a larger community of support, and useful to get started then that would help greatly: datasciencetoolKIT.org -> vm is on vagrant cloud (4 GB) and seems to be more "hip" with R, iPython notebook, and other useful command-line tools (html->txt, json->xml, etc). There is a book being released in August with detail. datasciencetoolBOX.org -> vm is a vagrant box (24 GB) downloadable from their website. There seems to be more features here, and more literature. AI: Do you need a VM? You need to keep in mind that a virtual machine is a software emulation of your own or another machine hardware configuration that can run an operating systems. In most basic terms, it acts as a layer interfacing between the virtual OS, and your own OS which then communicates with the lower level hardware to provide support to the virtual OS. What this means for you is: Cons Hardware Support A drawback of virtual machine technology is that it supports only the hardware that both the virtual machine hypervisor and the guest operating system support. Even if the guest operating system supports the physical hardware, it sees only the virtual hardware presented by the virtual machine. The second aspect of virtual machine hardware support is the hardware presented to the guest operating system. No matter the hardware in the host, the hardware presented to the guest environment is usually the same (with the exception of the CPU, which shows through). For example, VMware GSX Server presents an AMD PCnet32 Fast Ethernet card or an optimized VMware-proprietary network card, depending on which you choose. The network card in the host machine does not matter. VMware GSX Server performs the translation between the guest environment's network card and the host environment's network card. This is great for standardization, but it also means that host hardware that VMware does not understand will not be present in the guest environment. Performance Penalty Virtual machine technology imposes a performance penalty from running an additional layer above the physical hardware but beneath the guest operating system. The performance penalty varies based on the virtualization software used and the guest software being run. This is significant. Pros Isolation One of the key reasons to employ virtualization is to isolate applications from each other. Running everything on one machine would be great if it all worked, but many times it results in undesirable interactions or even outright conflicts. The cause often is software problems or business requirements, such as the need for isolated security. Virtual machines allow you to isolate each application (or group of applications) in its own sandbox environment. The virtual machines can run on the same physical machine (simplifying IT hardware management), yet appear as independent machines to the software you are running. For all intents and purposes—except performance, the virtual machines are independent machines. If one virtual machine goes down due to application or operating system error, the others continue running, providing services your business needs to function smoothly. 
Standardization Another key benefit virtual machines provide is standardization. The hardware that is presented to the guest operating system is uniform for the most part, usually with the CPU being the only component that is "pass-through" in the sense that the guest sees what is on the host. A standardized hardware platform reduces support costs and increases the share of IT resources that you can devote to accomplishing goals that give your business a competitive advantage. The host machines can be different (as indeed they often are when hardware is acquired at different times), but the virtual machines will appear to be the same across all of them. Ease of Testing Virtual machines let you test scenarios easily. Most virtual machine software today provides snapshot and rollback capabilities. This means you can stop a virtual machine, create a snapshot, perform more operations in the virtual machine, and then roll back again and again until you have finished your testing. This is very handy for software development, but it is also useful for system administration. Admins can snapshot a system and install some software or make some configuration changes that they suspect may destabilize the system. If the software installs or changes work, then the admin can commit the updates. If the updates damage or destroy the system, the admin can roll them back. Virtual machines also facilitate scenario testing by enabling virtual networks. In VMware Workstation, for example, you can set up multiple virtual machines on a virtual network with configurable parameters, such as packet loss from congestion and latency. You can thus test timing-sensitive or load-sensitive applications to see how they perform under the stress of a simulated heavy workload. Mobility Virtual machines are easy to move between physical machines. Most of the virtual machine software on the market today stores a whole disk in the guest environment as a single file in the host environment. Snapshot and rollback capabilities are implemented by storing the change in state in a separate file in the host information. Having a single file represent an entire guest environment disk promotes the mobility of virtual machines. Transferring the virtual machine to another physical machine is as easy as moving the virtual disk file and some configuration files to the other physical machine. Deploying another copy of a virtual machine is the same as transferring a virtual machine, except that instead of moving the files, you copy them. Which VM should I use if I am starting out? The Data Science Box or the Data Science Toolbox are your best bets if you just getting into data science. They have the basic software that you will need, with the primary difference being the virtual environment in which each of these can run. The DSB can run on AWS while the DST can run on Virtual Box (which is the most common tool used for VMs). Sources http://www.devx.com/vmspecialreport/Article/30383 http://jeroenjanssens.com/2013/12/07/lean-mean-data-science-machine.html
H: Handling a regularly increasing feature set I'm working on a fraud detection system. In this field, new frauds appear regularly, so new features have to be added to the model on an ongoing basis. I wonder what the best way to handle this is (from the development process perspective)? Just adding a new feature to the feature vector and re-training the classifier seems naive, because too much time would be spent re-learning the old features. I'm thinking along the lines of training a classifier for each feature (or a couple of related features), and then combining the results of those classifiers with an overall classifier. Are there any drawbacks to this approach? How can I choose an algorithm for the overall classifier? AI: In an ideal world, you retain all of your historical data, and do indeed run a new model with the new feature extracted retroactively from historical data. I'd argue that the computing resource spent on this is quite useful actually. Is it really a problem? Yes, it's a widely accepted technique to build an ensemble of classifiers and combine their results. You can build a new model in parallel just on the new features and average in its prediction. This should add value, but you will never capture interaction between the new and old features this way, since they will never appear together in a classifier.
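A minimal sketch of the "average in its prediction" idea, using scikit-learn. The data arrays, the 0.5/0.5 weights, and the choice of logistic regression are all illustrative assumptions:
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical arrays: X_old holds the original features, X_new the newly
# added ones; y is the fraud label.
rng = np.random.RandomState(0)
X_old = rng.rand(1000, 20)
X_new = rng.rand(1000, 3)
y = (rng.rand(1000) > 0.9).astype(int)

old_model = LogisticRegression().fit(X_old, y)   # trained once, kept as-is
new_model = LogisticRegression().fit(X_new, y)   # cheap model on new features only

# Combine by averaging predicted probabilities; note this cannot capture
# interactions between old and new features.
p = 0.5 * old_model.predict_proba(X_old)[:, 1] + 0.5 * new_model.predict_proba(X_new)[:, 1]
pred = (p > 0.5).astype(int)
The combining weights could themselves be tuned on a validation set, which is one simple way to choose the "overall classifier" asked about.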
H: Large Scale Personalization - Per User vs Global Models I'm currently working on a project that would benefit from personalized predictions. Given an input document, a set of output documents, and a history of user behavior, I'd like to predict which of the output documents are clicked. In short, I'm wondering what the typical approach to this kind of personalization problem is. Are models trained per user, or does a single global model take in summary statistics of past user behavior to help inform that decision? Per user models won't be accurate until the user has been active for a while, while most global models have to take in a fixed length feature vector (meaning we more or less have to compress a stream of past events into a smaller number of summary statistics). AI: The answer to this question is going to vary pretty wildly depending on the size and nature of your data. At a high level, you could think of it as a special case of multilevel models; you have the option of estimating a model with complete pooling (i.e., a universal model that doesn't distinguish between users), models with no pooling (a separate model for each user), and partially pooled models (a mixture of the two). You should really read Andrew Gelman on this topic if you're interested. You can also think of this as a learning-to-rank problem that either tries to produce point-wise estimates using a single function or instead tries to optimize on some list-wise loss function (e.g., NDCG). As with most machine learning problems, it all depends on what kind of data you have, the quality of it, the sparseness of it, and what kinds of features you are able to extract from it. If you have reason to believe that each and every user is going to be pretty unique in their behavior, you might want to build a per-user model, but that's going to be unwieldy fast -- and what do you do when you are faced with a new user?
H: Create most "average" cosine similarity observation For a recommendation system I'm using cosine similarity to compute similarities between items. However, for items with small amounts of data I'd like to bin them under a general "average" category (in the general not mathematical sense). To accomplish this I'm currently trying to create a synthetic observation to represent that middle of the road point. So for example if these were my observations (rows are observations, cols are features): [[0, 0, 0, 1, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0, 1, 0, 0], [1, 1, 1, 1, 0, 1, 0, 1, 1], [0, 0, 1, 0, 0, 1, 0, 1, 0]] A strategy where I'd simply take the actual average of all features across observations would generate a synthetic datapoint such as follows, which I'd then append to the matrix before doing the similarity calculation. [ 0.5 , 0.25, 0.75, 0.5 , 0.25, 0.75, 0.25, 0.75, 0.25] While this might work well with certain similarity metrics (e.g. L1 distance) I'm sure there are much better ways for cosine similarity. Though, at the moment, I'm having trouble reasoning my way through angles between lines in high dimensional space. Any ideas? AI: You are doing the correct thing. Technically, this averaging leads to computing the centroid in the Euclidean space of a set of N points. The centroid works pretty well with cosine similarities (cosine of the angles between normalized vectors), e.g. the Rocchio algorithm.
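To make the centroid idea concrete, here is a small sketch using the matrix from the question; the helper function and printout are just for illustration:
import numpy as np

X = np.array([[0, 0, 0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1, 0, 0],
              [1, 1, 1, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0, 1, 0, 1, 0]], dtype=float)

centroid = X.mean(axis=0)   # the synthetic "average" observation

def cosine_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of each real observation to the synthetic centroid
print([round(cosine_sim(row, centroid), 3) for row in X])
Appending the centroid row to the matrix before computing the full similarity matrix works the same way; because cosine similarity ignores magnitude, the un-normalized mean is fine as the synthetic point.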
H: Which non-training classification methods are available? I am trying to find out which classification methods that do not use a training phase are available. The scenario is gene-expression-based classification, in which you have a matrix of the expression of m genes (features) across n samples (observations). A signature for each class is also provided (that is, a list of the features to consider when deciding to which class a sample belongs). One application (non-training) is the Nearest Template Prediction method. In this case the cosine distance is computed between each sample and each signature (on the common set of features). Then each sample is assigned to the nearest class (the class whose comparison with the sample yields the smallest distance). No already-classified samples are needed in this case. A different application (training) is the kNN method, in which we have a set of already-labeled samples. Then, each new sample is labeled depending on how the k nearest samples are labeled. Are there any other non-training methods? Thanks AI: What you are asking about is Instance-Based Learning. k-Nearest Neighbors (kNN) appears to be the most popular of these methods and is applicable to a wide variety of problem domains. Another general type of instance-based learning is Analogical Modeling, which uses instances as exemplars for comparison with new data. You referred to kNN as an application that uses training, but that is not correct (the Wikipedia entry you linked is somewhat misleading in that regard). Yes, there are "training examples" (labeled instances), but the classifier doesn't learn/train from these data. Rather, they are only used whenever you actually want to classify a new instance, which is why it is considered a "lazy" learner. Note that the Nearest Template Prediction method you mention is effectively a form of kNN with k=1 and cosine distance as the distance measure.
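A small sketch of the lazy-learning point with scikit-learn; the expression matrix is synthetic and the parameter choices are illustrative only:
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical expression matrix: rows are samples, columns are genes.
rng = np.random.RandomState(0)
X_labeled = rng.rand(60, 100)              # already-labeled samples
y_labeled = rng.randint(0, 3, size=60)     # their class labels
X_new = rng.rand(5, 100)                   # samples to classify

# "Fitting" here essentially just stores the labeled instances (lazy learning);
# the real work happens at prediction time, when neighbors are looked up.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_labeled, y_labeled)
print(knn.predict(X_new))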
H: How are the features selected for a decision tree in CART? Suppose I want to use CART as a classification tree (I want a categorical response). I have the training set, and I split it using the observation labels. Now, to build the decision tree (classification tree), how are the features selected that decide which label to apply to test observations? Supposing we are working on a gene expression matrix, in which each element is a real number, is that done using the features that are most distant between classes? AI: At each split point, CART will choose the feature which "best" splits the observations. What qualifies as best varies, but generally the split is done so that the subsequent nodes are more homogeneous/pure with respect to the target. There are different ways of measuring homogeneity, for example Gini, entropy, and chi-square. If you are using software, it may allow you to choose the measure of homogeneity that the tree algorithm will use. Distance is not a factor with trees - what matters is whether the value is greater than or less than the split point, not the distance from the split point.
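For instance, in scikit-learn the homogeneity measure is exposed as the criterion argument; a small sketch on a synthetic expression matrix (all data and numbers are made up):
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical gene expression data: rows are samples, columns are genes.
rng = np.random.RandomState(0)
X = rng.rand(100, 50)
y = rng.randint(0, 2, size=100)

# criterion selects the homogeneity measure used when choosing each split.
tree_gini = DecisionTreeClassifier(criterion="gini").fit(X, y)
tree_entropy = DecisionTreeClassifier(criterion="entropy").fit(X, y)

# feature_importances_ shows which genes the chosen splits actually used.
print(tree_gini.feature_importances_.argsort()[-5:])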
H: Dealing with events that have not yet happened when building a model I was building a model that predicts user churn for a website, where I have data on all users, both past and present. I can build a model that only uses those users that have left, but then I'm leaving 2/3 of the total user population unused. Is there a good way to incorporate data from these users into a model from a conceptual standpoint? AI: This setting is common in reliability, health care, and mortality. The statistical analysis method is called Survival Analysis. All users are coded according to their start date (or week or month). You use the empirical data to estimate the survival function, which is the probability that the time of defection is later than some specified time t. Your baseline model will estimate survival function for all users. Then you can do more sophisticated modeling to estimate what factors or behaviors might predict defection (churn), given your baseline survival function. Basically, any model that is predictive will yield a survival probability that is significantly lower than the baseline. There's another approach which involves attempting to identify precursor events patterns or user behavior pattern that foreshadow defection. Any given event/behavior pattern might occur for users that defect, or for users that stay. For this analysis, you may need to censor your data to only include users that have been members for some minimum period of time. The minimum time period can be estimated using your estimate of survival function, or even simple histogram analysis of the distribution of membership period for users who have defected.
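A minimal sketch of the baseline survival estimate, using the lifelines library as one possible choice; the column names and numbers are made up. The key point is that still-active users enter as censored observations, so none of the data is wasted:
import pandas as pd
from lifelines import KaplanMeierFitter  # one possible survival-analysis library

# Hypothetical user table: tenure in days, and whether the user has churned.
# Users who are still active are censored (churned == 0) but still inform
# the estimate -- this is how the other 2/3 of the population gets used.
df = pd.DataFrame({
    "tenure_days": [10, 200, 35, 400, 90, 365, 15, 250],
    "churned":     [1,   0,  1,   0,  1,   0,  1,   0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["tenure_days"], event_observed=df["churned"])
print(kmf.survival_function_.tail())   # P(user still active at time t)
From there, covariate-based models (e.g. proportional-hazards regression) can be layered on to find which behaviors predict defection relative to this baseline.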
H: Best python library for neural networks I'm using Neural Networks to solve different Machine learning problems. I'm using Python and pybrain but this library is almost discontinued. Are there other good alternatives in Python? AI: UPDATE: the landscape has changed quite a bit since I answered this question in July '14, and some new players have entered the space. In particular, I would recommend checking out: TensorFlow Blocks Lasagne Keras Deepy Nolearn NeuPy They each have their strengths and weaknesses, so give them all a go and see which best suits your use case. Although I would have recommended using PyLearn2 a year ago, the community is no longer active so I would recommend looking elsewhere. My original response to the answer is included below but is largely irrelevant at this point. PyLearn2 is generally considered the library of choice for neural networks and deep learning in python. It's designed for easy scientific experimentation rather than ease of use, so the learning curve is rather steep, but if you take your time and follow the tutorials I think you'll be happy with the functionality it provides. Everything from standard Multilayer Perceptrons to Restricted Boltzmann Machines to Convolutional Nets to Autoencoders is provided. There's great GPU support and everything is built on top of Theano, so performance is typically quite good. The source for PyLearn2 is available on github. Be aware that PyLearn2 has the opposite problem of PyBrain at the moment -- rather than being abandoned, PyLearn2 is under active development and is subject to frequent changes.
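To give a feel for what a minimal model looks like in one of the libraries listed above (Keras), here is a sketch; import paths and argument names vary between Keras versions, and the data is synthetic, so treat this as a template rather than copy-paste code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

rng = np.random.RandomState(0)
X = rng.rand(500, 20)                     # hypothetical features
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # hypothetical binary target

model = Sequential()
model.add(Dense(32, input_dim=20, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))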
H: Clustering pair-wise distance dataset I have generated a dataset of pairwise distances as follows:
id_1 id_2 dist_12
id_2 id_3 dist_23
I want to cluster this data so as to identify the pattern. I have been looking at spectral clustering and DBSCAN, but I haven't been able to come to a conclusion and am unsure how to make use of the existing implementations of these algorithms. I have been looking at Python and Java implementations so far. Could anyone point me to a tutorial or demo on how to make use of these clustering algorithms to handle the situation at hand? AI: In the scikit-learn implementation of Spectral clustering and DBSCAN you do not need to precompute the distances; you should input the sample coordinates for all id_1 ... id_n. Here is a simplification of the documented example comparison of clustering algorithms:
import numpy as np
from sklearn import cluster
from sklearn.preprocessing import StandardScaler

## Prepare the data
X = np.random.rand(1500, 2)
# When reading from a file of the form: `id_n coord_x coord_y`
# you will need this call instead:
# X = np.loadtxt('coords.csv', usecols=(1, 2))
X = StandardScaler().fit_transform(X)

## Instantiate the algorithms
spectral = cluster.SpectralClustering(n_clusters=2, eigen_solver='arpack',
                                      affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=.2)

## Use the algorithms
spectral_labels = spectral.fit_predict(X)
dbscan_labels = dbscan.fit_predict(X)
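If, on the other hand, only the pairwise distances themselves are available (no underlying coordinates), DBSCAN can also be run directly on a precomputed square distance matrix. A hedged sketch with a tiny made-up matrix; how you assemble D from the id/dist file is up to you, and missing pairs would need to be handled first:
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical symmetric distance matrix D, where D[i, j] = dist_ij.
D = np.array([[0.0, 0.1, 0.9, 1.0],
              [0.1, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.2],
              [1.0, 0.9, 0.2, 0.0]])

labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)   # two clusters here: ids {0, 1} and {2, 3}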
H: Are Support Vector Machines still considered "state of the art" in their niche? This question is in response to a comment I saw on another question. The comment was regarding the Machine Learning course syllabus on Coursera, and along the lines of "SVMs are not used so much nowadays". I have only just finished the relevant lectures myself, and my understanding of SVMs is that they are a robust and efficient learning algorithm for classification, and that when using a kernel, they have a "niche" covering number of features perhaps 10 to 1000 and number of training samples perhaps 100 to 10,000. The limit on training samples is because the core algorithm revolves around optimising results generated from a square matrix with dimensions based on number of training samples, not number of original features. So does the comment I saw refer some real change since the course was made, and if so, what is that change: A new algorithm that covers SVM's "sweet spot" just as well, better CPUs meaning SVM's computational advantages are not worth as much? Or is it perhaps opinion or personal experience of the commenter? I tried a search for e.g. "are support vector machines out of fashion" and found nothing to imply they were being dropped in favour of anything else. And Wikipedia has this: http://en.wikipedia.org/wiki/Support_vector_machine#Issues . . . the main sticking point appears to be difficulty of interpreting the model. Which makes SVM fine for a black-box predicting engine, but not so good for generating insights. I don't see that as a major issue, just another minor thing to take into account when picking the right tool for the job (along with nature of the training data and learning task etc). AI: SVM is a powerful classifier. It has some nice advantages (which I guess were responsible for its popularity)... These are: Efficiency: Only the support vectors play a role in determining the classification boundary. All other points from the training set needn't be stored in memory. The so-called power of kernels: With appropriate kernels you can transform feature space into a higher dimension so that it becomes linearly separable. The notion of kernels work with arbitrary objects on which you can define some notion of similarity with the help of inner products... and hence SVMs can classify arbitrary objects such as trees, graphs etc. There are some significant disadvantages as well. Parameter sensitivity: The performance is highly sensitive to the choice of the regularization parameter C, which allows some variance in the model. Extra parameter for the Gaussian kernel: The radius of the Gaussian kernel can have a significant impact on classifier accuracy. Typically a grid search has to be conducted to find optimal parameters. LibSVM has a support for grid search. SVMs generally belong to the class of "Sparse Kernel Machines". The sparse vectors in the case of SVM are the support vectors which are chosen from the maximum margin criterion. Other sparse vector machines such as the Relevance Vector Machine (RVM) perform better than SVM. The following figure shows a comparative performance of the two. In the figure, the x-axis shows one dimensional data from two classes y={0,1}. The mixture model is defined as P(x|y=0)=Unif(0,1) and P(x|y=1)=Unif(.5,1.5) (Unif denotes uniform distribution). 1000 points were sampled from this mixture and an SVM and an RVM were used to estimate the posterior. The problem of SVM is that the predicted values are far off from the true log odds. 
A very effective classifier, which is very popular nowadays, is the Random Forest. The main advantages are: Only one parameter to tune (i.e. the number of trees in the forest) Not utterly parameter sensitive Can easily be extended to multiple classes Is based on probabilistic principles (maximizing mutual information gain with the help of decision trees)
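A small sketch of how such a side-by-side comparison might look in scikit-learn; the dataset is synthetic and the parameters are placeholders, so which model "wins" here says nothing general:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic two-class problem, split 50/50 into train and test.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = X[:1000], X[1000:], y[:1000], y[1000:]

# Random forest: essentially one knob (number of trees).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SVM: C and gamma would normally come from a grid search on scaled features.
svm = SVC(kernel="rbf", C=10, gamma=0.01).fit(X_tr, y_tr)

print("RF accuracy: ", rf.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))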
H: Running huge datasets with R I'm trying to run some analyses on some big datasets (e.g. 400k rows by 400 columns) with R (e.g. using neural networks and recommendation systems). But it's taking too long to process the data (with huge matrices, e.g. 400k rows by 400k columns). What are some free/cheap ways to improve R performance? I'm accepting package or web service suggestions (other options are welcome). AI: Your question is not very specific, so I'll try to give you some generic solutions. There are a couple of things you can do here: Check sparseMatrix from the Matrix package, as mentioned by @Sidhha. Try running your model in parallel using packages like snowfall or parallel. Check this list of packages on CRAN which can help you run your model in multicore parallel mode. You can also try the data.table package; it is quite phenomenal in speed. Good reads: 11 Tips on How to Handle Big Data in R (and 1 Bad Pun); Why R is slow & how to improve its Performance?
H: Cloudera QuickStart VM Error I have installed the Cloudera CDH5 QuickStart VM on VM Player. When I log in through Hue, on the first page I get the following error: “Potential misconfiguration detected. Fix and restart Hue.” How do I solve this issue? AI: Go into the other link from the home page - to Cloudera Manager. From there, you'll see Hue can be restarted, but there is probably an alert that needs to be resolved in there first. If I remember right, there's some initial configuration that needs to be done on the QuickStart VM that's spelled out as soon as you log into the manager application.
H: How to choose the features for a neural network? I know that there is no clear answer to this question, but let's suppose that I have a huge neural network with a lot of data and I want to add a new feature as input. The "best" way would be to test the network with the new feature and see the results, but is there a method to test whether the feature is UNLIKELY to be helpful? Like correlation measures etc.? AI: A very strong correlation between the new feature and an existing feature is a fairly good sign that the new feature provides little new information. A low correlation between the new feature and existing features is likely preferable. A strong linear correlation between the new feature and the predicted variable is a good sign that a new feature will be valuable, but the absence of a high correlation is not necessarily a sign of a poor feature, because neural networks are not restricted to linear combinations of variables. If the new feature was manually constructed from a combination of existing features, consider leaving it out. The beauty of neural networks is that little feature engineering and preprocessing is required -- features are instead learned by intermediate layers. Whenever possible, prefer learning features to engineering them.
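A quick screening sketch of the two correlation checks above; all arrays and the synthetic "nearly duplicated" feature are assumptions for illustration:
import numpy as np

# X: existing input features, new_f: candidate feature, y: target (numeric).
rng = np.random.RandomState(0)
X = rng.rand(500, 10)
new_f = X[:, 0] * 0.9 + rng.rand(500) * 0.1     # nearly a copy of feature 0
y = rng.randint(0, 2, size=500)

# Correlation of the new feature with each existing feature ...
corr_with_existing = np.array([np.corrcoef(new_f, X[:, j])[0, 1] for j in range(X.shape[1])])
# ... and with the target.
corr_with_target = np.corrcoef(new_f, y)[0, 1]

print("max |corr| with existing features:", np.abs(corr_with_existing).max())
print("|corr| with target:", abs(corr_with_target))
# A first number close to 1 suggests the new feature adds little; a low second
# number does not rule the feature out, for the non-linearity reason above.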
H: Commercial Text Summarization Tools I'm looking for commercial text summarization tools (APIs, Libraries,...) which are able to perform any of the following tasks: Extractive Multi-Document Summarization (Generic or query-based) Extractive Single-Document Summarization (Generic or query-based) Generative Single-Document Summarization (Generic or query-based) Generative Multi-Document Summarization (Generic or query-based) AI: There are a couple of open source options I know of - LibOTS - http://libots.sourceforge.net/ DocSum - http://docsum.sourceforge.net/docsum/web/about.php A couple of commercial solutions - Intellix Summarizer Pro - http://summarizer.intellexer.com/order_summarizer_pro.php Copernic Summarizer - http://www.copernic.com/en/products/summarizer/ And this one is a web service - TextTeaser - http://www.textteaser.com/ I'm sure there are plenty of others out there. I have used Copernic a good deal and it's pretty good, but I was hoping it could be automated easily, which it can't - at least it couldn't when I used it.
H: How to define a custom resampling methodology I'm using an experimental design to test the robustness of different classification methods, and now I'm searching for the correct definition of such design. I'm creating different subsets of the full dataset by cutting away some samples. Each subset is created independently with respect to the others. Then, I run each classification method on every subset. Finally, I estimate the accuracy of each method as how many classifications on subsets are in agreement with the classification on the full dataset. For example: Classification-full 1 2 3 2 1 1 2 Classification-subset1 1 2 2 3 1 Classification-subset2 2 3 1 1 2 ... Accuracy 1 1 1 1 0.5 1 1 Is there a correct name to this methodology? I thought it can fall under bootstrapping but I'm not sure about this. AI: Random subsampling seems appropriate, bootstrapping is a bit more generic, but also correct. Here are some references and synonyms: http://www.frank-dieterle.com/phd/2_4_3.html
H: How to fight underfitting in a deep neural net When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN past the 20% error rate barrier. I can't even beat my score with random forest! I'm seeking some very general, or not so general, advice on what one should do to make a NN start capturing trends in the data. For implementing the NN I use the Theano Stacked Auto Encoder with the code from the tutorial, which works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron with a softmax layer on top, with each hidden layer being pre-trained as an autoencoder (fully described in the tutorial, chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-training rates, etc. And the best I can get is a 20% error rate on the validation set and a 40% error rate on the test set. On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set. How can it be that my deep NN with pre-training behaves so badly? What should I try? AI: The problem with deep networks is that they have lots of hyperparameters to tune and a very small solution space. Thus, finding good ones is more of an art than an engineering task. I would start with the working example from the tutorial and play around with its parameters to see how the results change - this gives a good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate). I also found the following papers very useful: Visually Debugging Restricted Boltzmann Machine Training with a 3D Example; A Practical Guide to Training Restricted Boltzmann Machines. They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise - if a previous layer doesn't provide a good representation of the features, further layers have almost no chance to fix it.
H: Skewed multi-class data I have a dataset which contains ~100,000 samples of 50 classes. I have been using an SVM with an RBF kernel to train and predict new data. The problem, though, is that the dataset is skewed towards different classes. For example: Classes 1-30 (~3% each), Classes 31-45 (~0.6% each), Classes 46-50 (~0.2% each). I see that the model tends to very rarely predict the classes which occur less frequently in the training set, even though the test set has the same class distribution as the training set. I am aware that there are techniques such as 'undersampling' where the majority class is scaled down to the minority class. However, is this applicable here where there are so many different classes? Are there other methods to help handle this case? AI: I would suggest you use libsvm, which already has adjustable class weights implemented in it. Rather than replicating the training samples, one modifies the C parameter for different classes in the SVM optimization. For example, if your data has 2 classes, and the first class is only 10% of the data, you would choose class weights to be 10 and 1 for class 1 and 2 respectively. Therefore, margin violations of the first class would cost 10 times more than margin violations for the second class, and per-class accuracies would be more balanced.
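A sketch of the same idea through scikit-learn's SVC (which wraps libsvm); the data, class proportions, and kernel parameters are made up, and in practice you would grid-search C and gamma:
import numpy as np
from sklearn.svm import SVC

# Hypothetical skewed data: X (features) and y (labels 0..49) with roughly
# the class proportions described in the question.
rng = np.random.RandomState(0)
X = rng.rand(5000, 10)
p = np.r_[np.full(30, 0.03), np.full(15, 0.006), np.full(5, 0.002)]
y = rng.choice(50, size=5000, p=p)

# Weight each class inversely proportional to its frequency; this is the same
# per-class C adjustment libsvm applies internally.
counts = np.bincount(y, minlength=50).astype(float)
weights = {c: counts.max() / counts[c] for c in range(50) if counts[c] > 0}

clf = SVC(kernel="rbf", C=1.0, gamma=0.1, class_weight=weights).fit(X, y)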
H: Starting my career as Data Scientist, is Software Engineering experience required? I am an MSc student at the University of Edinburgh, specialized in machine learning and natural language processing. I had some practical courses focused on data mining, and others dealing with machine learning, bayesian statistics and graphical models. My background is a BSc in Computer Science. I did some software engineering and I learnt the basic concepts, such as design patterns, but I have never been involved in a large software development project. However, I had a data mining project in my MSc. My question is, if I want to go for a career as Data Scientist, should I apply for a graduate data scientist position first, or should I get a position as graduate software engineer first, maybe something related to data science, such as big data infrastructure or machine learning software development? My concern is that I might need good software engineering skills for data science, and I am not sure if these can be obtained by working as a graduate data scientist directly. Moreover, at the moment I like Data Mining, but what if I want to change my career to software engineering in the future? It might be difficult if I specialised so much in data science. I have not been employed yet, so my knowledge is still limited. Any clarification or advice are welcome, as I am about to finish my MSc and I want to start applying for graduate positions in early October. AI: 1) I think that there's no need to question whether your background is adequate for a career in data science. CS degree IMHO is more than enough for data scientist from software engineering point of view. Having said that, theoretical knowledge is not very helpful without matching practical experience, so I would definitely try to enrich my experience through participating in additional school projects, internships or open source projects (maybe ones, focused on data science / machine learning / artificial intelligence). 2) I believe your concern about focusing on data science too early is unfounded, as long as you will be practicing software engineering either as a part of your data science job, or additionally in your spare time. 3) I find the following definition of a data scientist rather accurate and hope it will be helpful in your future career success: A data scientist is someone who is better at statistics than any software engineer and better at software engineering than any statistician. P.S. Today's enormous number of various resources on data science topics is mind-blowing, but this open source curriculum for learning data science might fill some gaps between your BSc/MSc respective curricula and reality of the data science career (or, at least, provide some direction for further research and maybe answer some of your concerns): http://datasciencemasters.org, or on GitHub: https://github.com/datasciencemasters/go.
H: Tools and protocol for reproducible data science using Python I am working on a data science project using Python. The project has several stages. Each stage comprises of taking a data set, using Python scripts, auxiliary data, configuration and parameters, and creating another data set. I store the code in git, so that part is covered. I would like to hear about: Tools for data version control. Tools enabling to reproduce stages and experiments. Protocol and suggested directory structure for such a project. Automated build/run tools. AI: The topic of reproducible research (RR) is very popular today and, consequently, is huge, but I hope that my answer will be comprehensive enough as an answer and will provide enough information for further research, should you decide to do so. While Python-specific tools for RR certainly exist out there, I think it makes more sense to focus on more universal tools (you never know for sure what programming languages and computing environments you will be working with in the future). Having said that, let's take a look what tools are available per your list. 1) Tools for data version control. Unless you plan to work with (very) big data, I guess, it would make sense to use the same git, which you use for source code version control. The infrastructure is already there. Even if your files are binary and big, this advice might be helpful: https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git. 2) Tools for managing RR workflows and experiments. Here's a list of most popular tools in this category, to the best of my knowledge (in the descending order of popularity): Taverna Workflow Management System (http://www.taverna.org.uk) - very solid, if a little too complex, set of tools. The major tool is a Java-based desktop software. However, it is compatible with online workflow repository portal myExperiment (http://www.myexperiment.org), where user can store and share their RR workflows. Web-based RR portal, fully compatible with Taverna is called Taverna Online, but it is being developed and maintained by totally different organization in Russia (referred there to as OnlineHPC: http://onlinehpc.com). The Kepler Project (https://kepler-project.org) VisTrails (http://vistrails.org) Madagascar (http://www.reproducibility.org) EXAMPLE. Here's an interesting article on scientific workflows with an example of the real workflow design and data analysis, based on using Kepler and myExperiment projects: http://f1000research.com/articles/3-110/v1. There are many RR tools that implement literate programming paradigm, exemplified by LaTeX software family. Tools that help in report generation and presentation is also a large category, where Sweave and knitr are probably the most well-known ones. Sweave is a tool, focused on R, but it can be integrated with Python-based projects, albeit with some additional effort (https://stackoverflow.com/questions/2161152/sweave-for-python). I think that knitr might be a better option, as it's modern, has extensive support by popular tools (such as RStudio) and is language-neutral (http://yihui.name/knitr/demo/engines). 3) Protocol and suggested directory structure. If I understood correctly what you implied by using term protocol (workflow), generally I think that standard RR data analysis workflow consists of the following sequential phases: data collection => data preparation (cleaning, transformation, merging, sampling) => data analysis => presentation of results (generating reports and/or presentations). 
Nevertheless, every workflow is project-specific and, thus, some specific tasks might require adding additional steps. For sample directory structure, you may take a look at documentation for R package ProjectTemplate (http://projecttemplate.net), as an attempt to automate data analysis workflows and projects: 4) Automated build/run tools. Since my answer is focused on universal (language-neutral) RR tools, the most popular tools is make. Read the following article for some reasons to use make as the preferred RR workflow automation tool: http://bost.ocks.org/mike/make. Certainly, there are other similar tools, which either improve some aspects of make, or add some additional features. For example: ant (officially, Apache Ant: http://ant.apache.org), Maven ("next generation ant": http://maven.apache.org), rake (https://github.com/ruby/rake), Makepp (http://makepp.sourceforge.net). For a comprehensive list of such tools, see Wikipedia: http://en.wikipedia.org/wiki/List_of_build_automation_software.
H: Clustering geo location coordinates (lat,long pairs) What is the right approach and clustering algorithm for geolocation clustering? I'm using the following code to cluster geolocation coordinates:
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.vq import kmeans2, whiten

coordinates = np.array([
    [lat, long],
    [lat, long],
    ...
    [lat, long]
])
x, y = kmeans2(whiten(coordinates), 3, iter=20)
plt.scatter(coordinates[:, 0], coordinates[:, 1], c=y)
plt.show()
Is it right to use K-means for geolocation clustering, as it uses Euclidean distance rather than the Haversine formula as a distance function? AI: K-means should be right in this case. Since k-means tries to group based solely on the Euclidean distance between objects, you will get back clusters of locations that are close to each other. To find the optimal number of clusters you can try making an 'elbow' plot of the within-group sum of squared distances; a sketch is given below. This may be helpful.
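Here is a rough elbow-plot sketch using SciPy's kmeans, whose returned distortion (mean distance of points to their nearest centroid) serves as a stand-in for the within-group spread; the random coordinates are just placeholders for your (lat, long) array:
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.vq import kmeans, whiten

coordinates = np.random.rand(200, 2)   # replace with your real (lat, long) data
w = whiten(coordinates)

# kmeans returns (codebook, distortion); distortion drops as k grows.
ks = range(1, 11)
distortions = [kmeans(w, k)[1] for k in ks]

plt.plot(list(ks), distortions, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("distortion")
plt.show()
# Pick k at the "elbow", where adding more clusters stops helping much.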
H: t-SNE Python implementation: Kullback-Leibler divergence t-SNE, as in [1], works by progressively reducing the Kullback-Leibler (KL) divergence, until a certain condition is met. The creators of t-SNE suggests to use KL divergence as a performance criterion for the visualizations: you can compare the Kullback-Leibler divergences that t-SNE reports. It is perfectly fine to run t-SNE ten times, and select the solution with the lowest KL divergence [2] I tried two implementations of t-SNE: python: sklearn.manifold.TSNE(). R: tsne, from library(tsne). Both these implementations, when verbosity is set, print the error (Kullback-Leibler divergence) for each iteration. However, they don't allow the user to get this information, which looks a bit strange to me. For example, the code: import numpy as np from sklearn.manifold import TSNE X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) model = TSNE(n_components=2, verbose=2, n_iter=200) t = model.fit_transform(X) produces: [t-SNE] Computing pairwise distances... [t-SNE] Computed conditional probabilities for sample 4 / 4 [t-SNE] Mean sigma: 1125899906842624.000000 [t-SNE] Iteration 10: error = 6.7213750, gradient norm = 0.0012028 [t-SNE] Iteration 20: error = 6.7192064, gradient norm = 0.0012062 [t-SNE] Iteration 30: error = 6.7178683, gradient norm = 0.0012114 ... [t-SNE] Error after 200 iterations: 0.270186 Now, as far as I understand, 0.270186 should be the KL divergence. However I cannot get this information, neither from model nor from t (which is a simple numpy.ndarray). To solve this problem I could: Calculate KL divergence by my self, Do something nasty in python for capturing and parsing TSNE() function's output [3]. However: would be quite stupid to re-calculate KL divergence, when TSNE() has already computed it, would be a bit unusual in terms of code. Do you have any other suggestion? Is there a standard way to get this information using this library? I mentioned I tried R's tsne library, but I'd prefer the answers to focus on the python sklearn implementation. References [1] http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb [2] http://homepage.tudelft.nl/19j49/t-SNE.html [3] https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call AI: The TSNE source in scikit-learn is in pure Python. Fit fit_transform() method is actually calling a private _fit() function which then calls a private _tsne() function. That _tsne() function has a local variable error which is printed out at the end of the fit. Seems like you could pretty easily change one or two lines of source code to have that value returned to fit_transform().
H: How to increase accuracy of classifiers? I am using OpenCV letter_recog.cpp example to experiment on random trees and other classifiers. This example has implementations of six classifiers - random trees, boosting, MLP, kNN, naive Bayes and SVM. UCI letter recognition dataset with 20000 instances and 16 features is used, which I split in half for training and testing. I have experience with SVM so I quickly set its recognition error to 3.3%. After some experimentation what I got was: UCI letter recognition: RTrees - 5.3% Boost - 13% MLP - 7.9% kNN(k=3) - 6.5% Bayes - 11.5% SVM - 3.3% Parameters used: RTrees - max_num_of_trees_in_the_forrest=200, max_depth=20, min_sample_count=1 Boost - boost_type=REAL, weak_count=200, weight_trim_rate=0.95, max_depth=7 MLP - method=BACKPROP, param=0.001, max_iter=300 (default values - too slow to experiment) kNN(k=3) - k=3 Bayes - none SVM - RBF kernel, C=10, gamma=0.01 After that I used same parameters and tested on Digits and MNIST datasets by extracting gradient features first (vector size 200 elements): Digits: RTrees - 5.1% Boost - 23.4% MLP - 4.3% kNN(k=3) - 7.3% Bayes - 17.7% SVM - 4.2% MNIST: RTrees - 1.4% Boost - out of memory MLP - 1.0% kNN(k=3) - 1.2% Bayes - 34.33% SVM - 0.6% I am new to all classifiers except SVM and kNN, for these two I can say the results seem fine. What about others? I expected more from random trees, on MNIST kNN gives better accuracy, any ideas how to get it higher? Boost and Bayes give very low accuracy. In the end I'd like to use these classifiers to make a multiple classifier system. Any advice? AI: Dimensionality Reduction Another important procedure is to compare the error rates on training and test dataset to see if you are overfitting (due to the "curse of dimensionality"). E.g., if your error rate on the test dataset is much larger than the error on the training data set, this would be one indicator. In this case, you could try dimensionality reduction techniques, such as PCA or LDA. If you are interested, I have written about PCA, LDA and some other techniques here and in my GitHub repo here. Cross validation Also you may want to take a look at cross-validation techniques in order to evaluate the performance of your classifiers in a more objective manner
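As a rough template for the overfitting check and dimensionality-reduction step mentioned above (the random arrays stand in for your gradient-feature splits, and the SVM parameters are placeholders):
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# X_train, X_test, y_train, y_test: your feature splits (names assumed).
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(1000, 200), rng.randint(0, 26, 1000)
X_test, y_test = rng.rand(1000, 200), rng.randint(0, 26, 1000)

clf = SVC(kernel="rbf", C=10, gamma=0.01).fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train), "test acc:", clf.score(X_test, y_test))
# A large gap between the two suggests overfitting; one thing to try is
# reducing the dimensionality before training:

pca = PCA(n_components=50).fit(X_train)
clf_pca = SVC(kernel="rbf", C=10, gamma=0.01).fit(pca.transform(X_train), y_train)
print("test acc after PCA:", clf_pca.score(pca.transform(X_test), y_test))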
H: Cosine similarity versus dot product as distance metrics It looks like the cosine similarity of two features is just their dot product scaled by the product of their magnitudes. When does cosine similarity make a better distance metric than the dot product? I.e. do the dot product and cosine similarity have different strengths or weaknesses in different situations? AI: Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
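A tiny numeric illustration of the magnitude point:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 10 * a                      # same direction, 10x the magnitude

dot = np.dot(a, b)
cos = dot / (np.linalg.norm(a) * np.linalg.norm(b))

print(dot)   # 140.0 -- grows with the magnitude of b
print(cos)   # 1.0   -- ignores magnitude; only the angle matters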
H: Is Python suitable for big data I read in this post Is the R language suitable for Big Data that big data constitutes 5TB, and while it does a good job of providing information about the feasibility of working with this type of data in R, it provides very little information about Python. I was wondering if Python can work with this much data as well. AI: To clarify, I feel like the original question referenced by the OP probably isn't the best fit for an SO-type format, but I will certainly represent Python in this particular case. Let me just start by saying that regardless of your data size, Python shouldn't be your limiting factor. In fact, there are just a couple of main issues that you're going to run into dealing with large datasets: Reading data into memory - This is by far the most common issue faced in the world of big data. Basically, you can't read in more data than you have memory (RAM) for. The best way to fix this is by making atomic operations on your data instead of trying to read everything in at once. Storing data - This is actually just another form of the earlier issue; by the time you get up to about 1TB, you start having to look elsewhere for storage. AWS S3 is the most common resource, and Python has the fantastic boto library to facilitate dealing with large pieces of data. Network latency - Moving data around between different services is going to be your bottleneck. There's not a huge amount you can do to fix this, other than trying to pick co-located resources and plugging into the wall.
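One common way to do the "atomic operations instead of reading everything at once" part is chunked reading with pandas; the file name and column below are placeholders:
import pandas as pd

# Stream a file that is too big for RAM in fixed-size chunks and aggregate
# incrementally instead of loading everything at once.
total = 0.0
row_count = 0
for chunk in pd.read_csv("huge_file.csv", chunksize=1000000):
    total += chunk["amount"].sum()
    row_count += len(chunk)

print("mean amount:", total / row_count)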
H: Getting GitHub repository information by different criteria New to the Data Science forum, and first poster here! This may be kind of a specific question (hopefully not too much so), but one I'd imagine others might be interested in. I'm looking for a way to basically query GitHub with something like this: Give me a collection of all of the public repositories that have more than 10 stars, at least two forks, and more than three committers. The result could take any viable form: a JSON data dump, a URL to the web page, etc. It more than likely will consist of information from 10,000 repos or something large. Is this sort of thing possible using the API or some other pre-built way, or am I going to have to build out my own custom solution where I try to scrape every page? If so, how feasible is this and how might I approach it? AI: My limited understanding, based on brief browsing GitHub API documentation, is that currently there is NO single API request that supports all your listed criteria at once. However, I think that you could use the following sequence in order to achieve the goal from your example (at least, I would use this approach): 1) Request information on all public repositories (API returns summary representations only): https://developer.github.com/v3/repos/#list-all-public-repositories; 2) Loop through the list of all public repositories retrieved in step 1, requesting individual resources, and save it as new (detailed) list (this returns detailed representations, in other words, all attributes): https://developer.github.com/v3/repos/#get; 3) Loop through the detailed list of all repositories, filtering corresponding fields by your criteria. For your example request, you'd be interested in the following attributes of the parent object: stargazers_count, forks_count. In order to filter the repositories by number of committers, you could use a separate API: https://developer.github.com/v3/repos/#list-contributors. Updates or comments from people more familiar with GitHub API are welcome!
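A rough sketch of the three-step approach above using the requests library; it assumes the v3 endpoints described in the answer, only looks at a tiny slice of repositories for illustration, and omits authentication, pagination over the since parameter, and rate-limit handling, all of which a real run would need:
import requests

API = "https://api.github.com"

summaries = requests.get(API + "/repositories").json()          # step 1: summary list

matches = []
for s in summaries[:30]:                                        # small slice for illustration
    repo = requests.get(API + "/repos/" + s["full_name"]).json()   # step 2: full resource
    if repo.get("stargazers_count", 0) > 10 and repo.get("forks_count", 0) >= 2:
        resp = requests.get(API + "/repos/" + s["full_name"] + "/contributors")
        contributors = resp.json() if resp.status_code == 200 else []
        if len(contributors) > 3:                               # step 3: committer filter
            matches.append(s["full_name"])

print(matches)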
H: Uses of NoSQL database in data science How can NoSQL databases like MongoDB be used for data analysis? What are the features in them that can make data analysis faster and more powerful? AI: To be perfectly honest, most NoSQL databases are not very well suited to applications in big data. For the vast majority of all big data applications, the performance of MongoDB compared to a relational database like MySQL is poor enough to warrant staying away from something like MongoDB entirely. With that said, there are a couple of really useful properties of NoSQL databases that certainly work in your favor when you're working with large data sets, though the chance of those benefits outweighing the generally poor performance of NoSQL compared to SQL for read-intensive operations (most similar to typical big data use cases) is low. No Schema - If you're working with a lot of unstructured data, it might be hard to actually decide on and rigidly apply a schema. NoSQL databases in general are very supportive of this, and will allow you to insert schema-less documents on the fly, which is certainly not something an SQL database will support. JSON - If you happen to be working with JSON-style documents instead of with CSV files, then you'll see a lot of advantage in using something like MongoDB for a database layer. Generally the workflow savings don't outweigh the increased query times, though. Ease of Use - I'm not saying that SQL databases are always hard to use, or that Cassandra is the easiest thing in the world to set up, but in general NoSQL databases are easier to set up and use than SQL databases. MongoDB is a particularly strong example of this, known for being one of the easiest database layers to use (outside of SQLite). SQL also deals with a lot of normalization, and there's a large legacy of SQL best practices that just generally bogs down the development process. Personally, I might suggest you also check out graph databases such as Neo4j, which show really good performance for certain types of queries, if you're looking into picking out a backend for your data science applications.
H: What's the difference between data products and intelligent systems? Basically, both are software systems that are based on data and algorithms. AI: This is a very vague question. However, I will try to make sense of it. Considering rules of logic as well as your statement that both entities are "software systems that are based on data and algorithms", it appears that data products are intelligent systems and intelligent systems are, to some degree, data products. Therefore, it can be argued that the difference between the terms "data products" and "intelligent systems" is purely in the focus (source of information or purpose of system dimensions) of each type of systems (data vs. intelligence/algorithms).
H: Advantages of AUC vs standard accuracy I was starting to look into the area under the curve (AUC) and am a little confused about its usefulness. When first explained to me, AUC seemed to be a great measure of performance, but in my research I've found that some claim its advantage is mostly marginal in that it is best for catching 'lucky' models with high standard accuracy measurements and low AUC. So should I avoid relying on AUC for validating models, or would a combination be best? AI: Really great question, and one that I find that most people don't really understand on an intuitive level. AUC is in fact often preferred over accuracy for binary classification for a number of different reasons. First though, let's talk about exactly what AUC is. Honestly, for being one of the most widely used efficacy metrics, it's surprisingly obtuse to figure out exactly how AUC works. AUC stands for Area Under the Curve. Which curve, you ask? Well, that would be the ROC curve. ROC stands for Receiver Operating Characteristic, which is actually slightly non-intuitive. The implicit goal of AUC is to deal with situations where you have a very skewed sample distribution, and don't want to overfit to a single class. A great example is in spam detection. Generally, spam datasets are STRONGLY biased towards ham, or not-spam. If your data set is 90% ham, you can get a pretty damn good accuracy by just saying that every single email is ham, which is obviously something that indicates a non-ideal classifier. Let's start with a couple of metrics that are a little more useful for us, specifically the true positive rate (TPR) and the false positive rate (FPR): Now in this graph, TPR is specifically the ratio of true positives to all positives, and FPR is the ratio of false positives to all negatives. (Keep in mind, this is only for binary classification.) On a graph like this, it should be pretty straightforward to figure out that a prediction of all 0's or all 1's will result in the points of (0,0) and (1,1) respectively. If you draw a line through these points you get something like this: Which looks basically like a diagonal line (it is), and by some easy geometry, you can see that the AUC of such a model would be 0.5 (height and base are both 1). Similarly, if you predict a random assortment of 0's and 1's, let's say 90% 1's, you could get the point (0.9, 0.9), which again falls along that diagonal line. Now comes the interesting part. What if we weren't only predicting 0's and 1's? What if instead, we wanted to say that, theoretically, we were going to set a cutoff, above which every result was a 1, and below which every result was a 0. This would mean that at the extremes you get the original situation where you have all 0's and all 1's (at a cutoff of 0 and 1 respectively), but also a series of intermediate states that fall within the 1x1 graph that contains your ROC. In practice you get something like this: So basically, what you're actually getting when you use AUC over accuracy is something that will strongly discourage people going for models that are representative, but not discriminative, as this will only actually select for models whose true positive and false positive rates trade off significantly better than random chance, which is not guaranteed by accuracy.
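To see the ham/spam example above in code, here is a small sketch using scikit-learn's metrics; the labels and scores are made up purely for illustration:
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Heavily skewed toy labels: 90% ham (0), 10% spam (1).
y_true = np.array([0] * 90 + [1] * 10)

# A useless "always ham" classifier: hard labels and (constant) scores.
always_ham_labels = np.zeros(100)
always_ham_scores = np.zeros(100)

print(accuracy_score(y_true, always_ham_labels))   # 0.9 -- looks great
print(roc_auc_score(y_true, always_ham_scores))    # 0.5 -- no better than chance

# Scores that actually separate the classes get an AUC well above 0.5.
informative_scores = np.concatenate([np.random.RandomState(0).rand(90) * 0.6,
                                     0.4 + np.random.RandomState(1).rand(10) * 0.6])
print(roc_auc_score(y_true, informative_scores))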
H: Statistics + Computer Science = Data Science? i want to become a data scientist. I studied applied statistics (actuarial science), so i have a great statistical background (regression, stochastic process, time series, just for mention a few). But now, I am going to do a master degree in Computer Science focus in Intelligent Systems. Here is my study plan: Machine learning Advanced machine learning Data mining Fuzzy logic Recommendation Systems Distributed Data Systems Cloud Computing Knowledge discovery Business Intelligence Information retrieval Text mining At the end, with all my statistical and computer science knowledge, can i call myself a data scientist? , or am i wrong? Thanks for the answers. AI: I think that you're on the right track toward becoming an expert data scientist. Recently I have answered related question here on Data Science StackExchange (pay attention to the definition I mention there, as it essentially answers your question by itself, as well as to aspects of practicing software engineering and applying knowledge to solving real-world problems). I hope that you will find all that useful. Good luck in your career!
H: Should I go for a 'balanced' dataset or a 'representative' dataset? My machine learning task is to separate benign Internet traffic from malicious traffic. In the real-world scenario, most (say 90% or more) Internet traffic is benign. Thus I felt that I should choose a similar data setup for training my models as well. But I came across a research paper or two (in my area of work) which have used a "class balancing" data approach to training the models, implying an equal number of instances of benign and malicious traffic. In general, if I am building machine learning models, should I go for a dataset which is representative of the real-world problem, or is a balanced dataset better suited for building the models (since certain classifiers do not behave well with class imbalance, or due to other reasons not known to me)? Can someone shed more light on the pros and cons of both choices and how to decide which one to choose? AI: I would say the answer depends on your use case. Based on my experience: If you're trying to build a representative model -- one that describes the data rather than necessarily predicts -- then I would suggest using a representative sample of your data. If you want to build a predictive model, particularly one that performs well by measure of AUC or rank-order, and you plan to use a basic ML framework (e.g. decision tree, SVM, naive Bayes, etc.), then I would suggest you feed the framework a balanced dataset. Much of the literature on class imbalance finds that random undersampling (downsampling the majority class to the size of the minority class) can drive performance gains. If you're building a predictive model, but are using a more advanced framework (e.g. something that determines sampling parameters via a wrapper, or a modification of a bagging framework that samples to class equivalence), then I would suggest again feeding it the representative sample and letting the algorithm take care of balancing the data for training.
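As an illustration of the random undersampling mentioned in the answer, here is a small Python/pandas sketch; the column name "label" and the idea that benign/malicious is coded as 0/1 are assumptions made for the example, not something stated in the question.

import pandas as pd

def undersample_majority(df, label_col="label", random_state=0):
    # Downsample the majority class to the size of the minority class.
    counts = df[label_col].value_counts()
    minority_label = counts.idxmin()
    minority = df[df[label_col] == minority_label]
    majority = df[df[label_col] != minority_label].sample(n=len(minority), random_state=random_state)
    # Shuffle so the two classes are interleaved before training.
    return pd.concat([minority, majority]).sample(frac=1, random_state=random_state)

# flows = pd.read_csv("traffic_flows.csv")   # hypothetical file with a 0/1 "label" column
# balanced = undersample_majority(flows)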
H: Visualizing a graph with a million vertices What is the best tool to use to visualize (draw the vertices and edges) a graph with 1,000,000 vertices? There are about 50,000 edges in the graph, and I can compute the locations of individual vertices and edges. I am thinking about writing a program to generate an SVG. Any other suggestions? AI: I also suggest the Gephi software (https://gephi.github.io), which seems to be quite powerful. Some additional information on using Gephi with large networks can be found here and, more generally, here. Cytoscape (http://www.cytoscape.org) is an alternative to Gephi, being another popular platform for complex network analysis and visualization. If you'd like to work with networks programmatically (including visualization) in R, Python or C/C++, you can check the igraph collection of libraries. Speaking of R, you may find the following blog posts interesting: one on using R with Cytoscape (http://www.vesnam.com/Rblog/viznets1) and one on using R with Gephi (http://www.vesnam.com/Rblog/viznets2). For extensive lists of network analysis and visualization software, including some comparisons and reviews, you might want to check the following pages: 1) http://wiki.cytoscape.org/Network_analysis_links; 2) http://www.kdnuggets.com/software/social-network-analysis.html; 3) http://www.activatenetworks.net/social-network-analysis-sna-software-review.
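Since the question already mentions that vertex positions can be computed and that generating an SVG is an option, here is a minimal pure-Python sketch of that route; the coordinate scaling, sizes and colours are arbitrary choices for illustration only.

def write_graph_svg(path, positions, edges, width=2000, height=2000):
    # positions: dict vertex_id -> (x, y), with coordinates normalised to [0, 1]
    # edges: iterable of (vertex_id, vertex_id) pairs
    with open(path, "w") as f:
        f.write('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">\n' % (width, height))
        for a, b in edges:
            (x1, y1), (x2, y2) = positions[a], positions[b]
            f.write('<line x1="%.1f" y1="%.1f" x2="%.1f" y2="%.1f" stroke="#999" stroke-width="0.3"/>\n'
                    % (x1 * width, y1 * height, x2 * width, y2 * height))
        for x, y in positions.values():
            f.write('<circle cx="%.1f" cy="%.1f" r="0.5" fill="#333"/>\n' % (x * width, y * height))
        f.write('</svg>\n')

Be warned that a single SVG with a million circles will be large and slow to render in a browser, which is one more argument for a dedicated tool such as Gephi when you need interactive exploration.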
H: is this a good case for NOSQL? I'm currently facing a project that I could solve with a relational database in a relatively painful way. Having heard so much about NOSQL, I'm wondering whether there is a more appropriate way of tackling it: Suppose we are tracking a group of animals in a forest (n ~ 500) and would like to keep a record of a set of observations (this is a fictional scenario). We would like to store the following information in a database: a unique identifier for each animal; a description of the animal with structured fields: Species, Genus, Family, ...; a free text field with additional information; each time-point at which it was detected close to a reference point; a picture of the animal; an indication whether two given animals are siblings. And: there might be additional features appearing later as more data comes in. We would like to be able to execute the following types of queries: return all the animals spotted in a given time interval; return all the animals of a given Species or Family; perform a text search on the free text field. Which particular database system would you recommend? Are there any tutorials / examples that I could use as a starting point? AI: Three tables: animal, observation, and sibling. The observation table has an animal_id column which links to the animal table, and the sibling table has animal_1_id and animal_2_id columns indicating that the two animals in each row are siblings. Even with 5000 animals and 100000 observations I don't think query time will be a problem for something like PostgreSQL for most reasonable queries (obviously you can construct unreasonable queries, but you can do that in any system). So I don't see how this is "relatively painful". Relative to what? The only complexity is the sibling table. In NOSQL you might store the full list of siblings in the record for each animal, but then when you add a sibling relationship you have to add it to both siblings' animal records. With the relational table approach I've outlined, it only exists once, but at the expense of having to test against both columns to find an animal's siblings. I'd use PostgreSQL, and that gives you the option of using PostGIS if you have location data - this is a geospatial extension to PostgreSQL that lets you do spatial queries (point in polygon, points near a point, etc.) which might be something for you. I really don't think the properties of NOSQL databases are a problem here for you - you aren't changing your schema every ten minutes, you probably do care that your database is ACID-compliant, and you don't need something web-scale. http://www.mongodb-is-web-scale.com/ [warning: strong language]
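As a sketch of the three-table layout described in the answer, here is the schema expressed with Python's built-in sqlite3 module (the answer recommends PostgreSQL, but the DDL carries over almost unchanged); the column names beyond those mentioned in the question are illustrative assumptions.

import sqlite3

conn = sqlite3.connect("animals.db")
conn.executescript("""
CREATE TABLE animal (
    id          INTEGER PRIMARY KEY,
    species     TEXT,
    genus       TEXT,
    family      TEXT,
    notes       TEXT                          -- free text field
);
CREATE TABLE observation (
    id          INTEGER PRIMARY KEY,
    animal_id   INTEGER REFERENCES animal(id),
    seen_at     TIMESTAMP,                    -- time-point near the reference point
    photo_path  TEXT
);
CREATE TABLE sibling (
    animal_1_id INTEGER REFERENCES animal(id),
    animal_2_id INTEGER REFERENCES animal(id)
);
""")

# Example query: all animals spotted in a given time interval.
rows = conn.execute("""
    SELECT DISTINCT a.* FROM animal a
    JOIN observation o ON o.animal_id = a.id
    WHERE o.seen_at BETWEEN '2014-01-01' AND '2014-06-30'
""").fetchall()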
H: How can we effectively measure the impact of our data decisions Apologies if this is a very broad question; what I would like to know is how effective A/B testing (or other methods) is at measuring the effects of a design decision. For instance, we can analyse user interactions or click results, purchase/browse decisions, and then modify/tailor the results presented to the user. We could then test the effectiveness of this design change by randomly subjecting 10% of users to the alternative model, but how objective is this? How do we avoid influencing the user by the model change? For instance, we could decide that search queries for 'David Beckham' are probably about football, so search results become biased towards this; but we could equally say that his lifestyle is just as relevant, yet it never makes it into the top 10 results that are returned. I am curious how this is dealt with and how to measure it effectively. My thought is that you could be in danger of pushing a model that you think is correct, the user obliges, and this becomes a self-fulfilling prophecy. I've read an article on this: http://techcrunch.com/2014/06/29/ethics-in-a-data-driven-world/ and also the book: http://shop.oreilly.com/product/0636920028529.do which discussed this, so it piqued my interest. AI: In A/B testing, bias is handled very well by ensuring visitors are randomly assigned to either version A or version B of the site. This creates independent samples drawn from the same population. Because the groups are independent and, on average, differ only in the version of the site seen, the test measures the effect of the design decision. Slight aside: you might argue that the A group or B group may differ in some demographic; that can happen by random chance. To a certain degree this can be taken care of by covariate-adjusted randomization. It can also be taken care of by adding covariates to the model that tests the effect of the design decision. It should be noted that there is still some discussion about the proper way to do this within the statistics community. Essentially, A/B testing is an application of a randomized controlled trial to website design. Some people disagree with adding covariates to the test. Others, such as Frank Harrell (see Regression Modeling Strategies), argue for the use of covariates in such models. I would offer the following suggestions: Design the study in advance so as to take care of as many sources of bias and variation as possible. Let the data speak for themselves. As you get more data (like about searches for David Beckham), let it dominate your assumptions about how the data should look (just as the posterior dominates the prior in Bayesian analysis when the sample size becomes large). Make sure your data matches the assumptions of the model.
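A minimal Python sketch of the mechanics, assuming statsmodels is installed; the visitor and conversion counts are made-up numbers, and the two-proportion z-test stands in for whichever outcome metric the experiment actually uses.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Random assignment: each visitor independently lands in variant A or B.
rng = np.random.default_rng(0)
variant = rng.choice(["A", "B"], size=10000)

# Hypothetical outcome counts after the test period.
conversions = np.array([310, 355])    # conversions observed in A, B
visitors = np.array([5020, 4980])     # visitors assigned to A, B

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(p_value)  # a small p-value suggests the design change had a real effect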
H: Pig script code error? While running the Pig script below I am getting an error on line 4: if I use GROUP, I get an error; if I change 'GROUP' to 'group' on line 4, the script runs. What is the difference between group and GROUP? LINES = LOAD '/user/cloudera/datapeople.csv' USING PigStorage(',') AS ( firstname:chararray, lastname:chararray, address:chararray, city:chararray, state:chararray, zip:chararray ); WORDS = FOREACH LINES GENERATE FLATTEN(TOKENIZE(zip)) AS ZIPS; WORDSGROUPED = GROUP WORDS BY ZIPS; WORDBYCOUNT = FOREACH WORDSGROUPED GENERATE GROUP AS ZIPS, COUNT(WORDS); WORDSSORT = ORDER WORDBYCOUNT BY $1 DESC; DUMP WORDSSORT; AI: 'group' in strictly lower case in the FOREACH is the thing you are looping/grouping over. http://squarecog.wordpress.com/2010/05/11/group-operator-in-apache-pig/ says: When you group a relation, the result is a new relation with two columns: “group” and the name of the original relation. Column names are case sensitive, so you have to use lower-case 'group' in your FOREACH; line 4 should therefore read: WORDBYCOUNT = FOREACH WORDSGROUPED GENERATE group AS ZIPS, COUNT(WORDS); 'GROUP' in upper case is the grouping operator. You can't mix them. So don't do that.
H: Data Science Project Ideas I don't know if this is the right place to ask this question, but a community dedicated to Data Science should be the most appropriate place in my opinion. I have just started with Data Science and Machine Learning. I am looking for long-term project ideas which I can work on for around 8 months. A mix of Data Science and Machine Learning would be great. A project big enough to help me understand the core concepts and also implement them at the same time would be very beneficial. AI: I would try to analyze and solve one or more of the problems published on Kaggle Competitions. Note that the competitions are grouped by their expected complexity, from 101 (bottom of the list) to Research and Featured (top of the list). A color-coded vertical band is a visual guideline for the grouping. You can assess the time you could spend on a project by adjusting the expected length of the corresponding competition, based on your skills and experience. A number of data science project ideas can be found by browsing the Coursolve webpage. If you have the skills and desire to work on a real data science project focused on social impact, visit the DataKind projects page. More projects with a social impact focus can be found at the Data Science for Social Good webpage. The Science Project Ideas page at the My NASA Data site looks like another place to visit for inspiration. If you would like to use open data, this long list of applications on Data.gov can provide you with some interesting data science project ideas.
H: Unsupervised feature learning for NER I have implemented an NER system using the CRF algorithm with my handcrafted features, and it gave quite good results. The thing is that I used lots of different features, including POS tags and lemmas. Now I want to build the same NER for a different language. The problem here is that I can't use POS tags and lemmas. I started reading articles about deep learning and unsupervised feature learning. My question is: Is it possible to use methods for unsupervised feature learning with the CRF algorithm? Did anyone try this and get any good results? Is there any article or tutorial about this? I still don't completely understand this way of creating features, so I don't want to spend too much time on something that won't work. Any information would be really helpful. Creating a whole NER system based on deep learning is a bit too much for now. AI: Yes, it is entirely possible to combine unsupervised learning with the CRF model. In particular, I would recommend that you explore the possibility of using word2vec features as inputs to your CRF. Word2vec trains a model to distinguish between words that are appropriate for a given context and words that are randomly selected. Selected weights of the model can then be interpreted as a dense vector representation of a given word. These dense vectors have the appealing property that words that are semantically or syntactically similar have similar vector representations. Basic vector arithmetic even reveals some interesting learned relationships between words. For example, vector("Paris") - vector("France") + vector("Italy") yields a vector that is quite similar to vector("Rome"). At a high level, you can think of word2vec representations as being similar to LDA or LSA representations, in the sense that you can convert a sparse input vector into a dense output vector that contains word similarity information. For that matter, LDA and LSA are also valid options for unsupervised feature learning -- both attempt to represent words as combinations of "topics" and output dense word representations. For English text, Google distributes word2vec models pretrained on a huge 100-billion-word Google News dataset, but for other languages you'll have to train your own model.
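A minimal sketch of training word vectors with gensim and turning them into features a linear-chain CRF can consume, assuming gensim is installed and your tokenised corpus fits in memory; note that gensim's parameter names have changed between major versions (older releases use size instead of vector_size), and the toy sentences are placeholders.

from gensim.models import Word2Vec

# sentences: list of token lists in the target language
sentences = [["das", "ist", "ein", "Test"], ["noch", "ein", "Satz"]]  # placeholder corpus

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)

def word2vec_features(token):
    # One common trick: discretise each dimension (here by sign) so the dense
    # vector becomes a bag of binary features for the CRF.
    if token in model.wv:
        return {"w2v_%d_pos" % i: bool(v > 0) for i, v in enumerate(model.wv[token])}
    return {"w2v_oov": True}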
H: Neural Network parse string data? So, I'm just starting to learn how a neural network can operate to recognize patterns and categorize inputs, and I've seen how an artificial neural network can parse image data and categorize the images (demo with convnetjs); the key there is to downsample the image so that each pixel stimulates one input neuron into the network. However, I'm trying to wrap my head around whether this can be done with string inputs. The use case I've got is a "recommendation engine" for movies a user has watched. Movies have lots of string data (title, plot, tags), and I could imagine "downsampling" the text down to a few key words that describe that movie, but even if I parse out the top five words that describe this movie, I think I'd need input neurons for every English word in order to compare a set of movies. I could limit the input neurons just to the words used in the set, but then could it grow/learn by adding new movies (user watches a new movie, with new words)? Most of the libraries I've seen don't allow adding new neurons after the system has been trained. Is there a standard way to map string/word/character data to inputs into a neural network? Or is a neural network really not the right tool for the job of parsing string data like this (what's a better tool for pattern matching in string data)? AI: Using a neural network for prediction on natural language data can be a tricky task, but there are tried and true methods for making it possible. In the Natural Language Processing (NLP) field, text is often represented using the bag of words model. In other words, you have a vector of length n, where n is the number of words in your vocabulary, and each word corresponds to an element in the vector. In order to convert text to numeric data, you simply count the number of occurrences of each word and place that value at the index of the vector that corresponds to the word. Wikipedia does an excellent job of describing this conversion process. Because the length of the vector is fixed, it's difficult to deal with new words that don't map to an index, but there are ways to help mitigate this problem (look up feature hashing). This method of representation has many disadvantages -- it does not preserve the relationship between adjacent words, and results in very sparse vectors. Looking at n-grams helps to fix the problem of preserving word relationships, but for now let's focus on the second problem, sparsity. It's difficult to deal directly with these sparse vectors (many linear algebra libraries do a poor job of handling sparse inputs), so often the next step is dimensionality reduction. For that we can refer to the field of topic modeling: Techniques like Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) allow the compression of these sparse vectors into dense vectors by representing a document as a combination of topics. You can fix the number of topics used, and in doing so fix the size of the output vector produced by LDA or LSA. This dimensionality reduction process drastically reduces the size of the input vector while attempting to lose a minimal amount of information. Finally, after all of these conversions, you can feed the outputs of the topic modeling process into the inputs of your neural network.
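A rough scikit-learn sketch of the pipeline described in the answer (bag of words, then LSA-style dimensionality reduction, then a small neural network); the movie descriptions and like/dislike labels are placeholders, and a real recommender would need a far richer target than this toy classifier.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier

docs = ["a heist thriller set inside dreams", "animated family adventure with toys",
        "space opera with laser swords", "toys come to life in a kid's room"]
liked = [1, 0, 1, 0]  # placeholder per-user labels

model = make_pipeline(
    TfidfVectorizer(),                # sparse bag-of-words vectors over a fixed vocabulary
    TruncatedSVD(n_components=2),     # dense "topic" representation (LSA)
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(docs, liked)
print(model.predict(["another dream heist movie"]))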
H: R error using package tm (text-mining) I am attempting to use the tm package to convert a vector of text strings to a corpus element. My code looks something like this: Corpus(d1$Yes), where d1$Yes is a factor with 124 levels, each containing a text string. For example, d1$Yes[246] = "So we can get the boat out!" I'm receiving the following error: "Error: inherits(x, "Source") is not TRUE" I'm not sure how to remedy this. AI: You have to tell Corpus what kind of source you are using. Try: Corpus(VectorSource(d1$Yes))
H: How to build parse tree with BNF I need to build a parse tree for some source code (in Python, or any programming language that can be described by a CFG). So, I have source code in some programming language and a BNF grammar for this language. Can anybody give some advice on how I can build a parse tree in this case? Preferably with tools for Python. AI: I suggest you use ANTLR, which is a very powerful parser generator. It has a good GUI for entering your BNF, and it has a Python target capability.
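ANTLR is the general-purpose route when you start from an arbitrary BNF; for the specific case of Python source there is also a zero-dependency illustration worth knowing about, sketched below with the standard library's ast module (this is not ANTLR, just a quick way to see what a syntax tree for Python code looks like).

import ast

source = """
def greet(name):
    return "Hello, " + name
"""

tree = ast.parse(source)           # abstract syntax tree of the Python source
print(ast.dump(tree, indent=2))    # the indent argument needs Python 3.9+

# Walk the tree, e.g. to list every function definition.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print("function:", node.name)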
H: What is the use of user data collection besides serving ads? Well, this looks like the most suitable place for this question. Every website collects data about its users, some just for usability and personalization, but the majority, like social networks, track every move on the web, and some free apps on your phone scan text messages, call history and so on. Is all this data siphoning just for selling your profile to advertisers? AI: A couple of days ago, developers from one product company asked me how they could understand why new users were leaving their website. My first question to them was what these users' profiles looked like and how they were different from those who stayed. Advertising is only the tip of the iceberg. User profiles (either filled in by users themselves or computed from users' behaviour) hold information about: user categories, i.e. what kind of people tend to use your website/product; paying client portraits, i.e. who is more likely to use your paid services; UX component performance, e.g. how long it takes people to find the button they need; action performance comparison, e.g. what was more efficient - a lower price for a weekend, or proposing gifts with each purchase; etc. So it's more about improving the product and making a better user experience rather than selling this data to advertisers.
H: Predicting next medical condition from past conditions in claims data I am currently working with a large set of health insurance claims data that includes some laboratory and pharmacy claims. The most consistent information in the data set, however, is made up of diagnosis (ICD-9-CM) and procedure codes (CPT, HCPCS, ICD-9-CM). My goals are to: Identify the most influential precursor conditions (comorbidities) for a medical condition like chronic kidney disease; Identify the likelihood (or probability) that a patient will develop a medical condition based on the conditions they have had in the past; Do the same as 1 and 2, but with procedures and/or diagnoses. Preferably, the results would be interpretable by a doctor. I have looked at things like the Heritage Health Prize Milestone papers and have learned a lot from them, but they are focused on predicting hospitalizations. So here are my questions: What methods do you think work well for problems like this? And what resources would be most useful for learning about data science applications and methods relevant to healthcare and clinical medicine? EDIT #2 to add plaintext table: CKD is the target condition, "chronic kidney disease"; ".any" denotes that they have acquired that condition at any time; ".isbefore.ckd" means they had that condition before their first diagnosis of CKD. The other abbreviations correspond to other conditions identified by ICD-9-CM code groupings. This grouping occurs in SQL during the import process. Each variable, with the exception of patient_age, is binary. AI: I've never worked with medical data, but from general reasoning I'd say that relations between variables in healthcare are pretty complicated. Different models, such as random forests, regression, etc., can capture only some of these relations and ignore others. In such circumstances it makes sense to use general statistical exploration and modelling. For example, the very first thing I would do is find correlations between possible precursor conditions and diagnoses. E.g., in what percentage of cases was chronic kidney disease preceded by a long flu? If it is high, it doesn't always mean causality, but it gives pretty good food for thought and helps to better understand the relations between different conditions. Another important step is data visualisation. Does CKD happen in males more often than in females? What about their place of residence? What is the distribution of CKD cases by age? It's hard to grasp a large dataset as a set of numbers; plotting them out makes it much easier. When you have an idea of what's going on, perform hypothesis testing to check your assumptions. If you reject the null hypothesis (basic assumption) in favour of the alternative one, congratulations, you've found "something real". Finally, when you have a good understanding of your data, try to create a complete model. It may be something general like a PGM (e.g. a manually crafted Bayesian network), or something more specific like linear regression or an SVM, or anything else. But either way you will already know how the model corresponds to your data and how you can measure its efficiency. As a good starting resource for learning the statistical approach I would recommend the Intro to Statistics course by Sebastian Thrun. While it's pretty basic and doesn't include advanced topics, it describes the most important concepts and gives a systematic understanding of probability theory and statistics.
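For the "what percentage of CKD cases were preceded by condition X" style of exploration, here is a small pandas/scipy sketch; the column names follow the .any / .isbefore.ckd convention from the question but are otherwise hypothetical, as is the file name.

import pandas as pd
from scipy.stats import chi2_contingency

# df: one row per patient, binary indicator columns plus patient_age
# df = pd.read_csv("claims_flags.csv")   # hypothetical export from the SQL grouping step

def precursor_screen(df, target="ckd.any", suffix=".isbefore.ckd"):
    # Cross-tabulate each precursor flag against the target and run a chi-square test.
    results = []
    for col in (c for c in df.columns if c.endswith(suffix)):
        if df[col].nunique() < 2:
            continue  # skip flags that never vary
        table = pd.crosstab(df[col], df[target])
        chi2, p, dof, expected = chi2_contingency(table)
        rate = df.loc[df[col] == 1, target].mean()   # P(CKD | precursor present)
        results.append((col, rate, p))
    return pd.DataFrame(results, columns=["precursor", "ckd_rate_given_precursor", "p_value"]).sort_values("p_value")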
H: Hadoop Resource Manager Won't Start I am a relatively new Hadoop user (version 2.4.1). I installed Hadoop on my first node without a hitch, but I can't seem to get the Resource Manager to start on my second node. I cleared up some "shared library" problems by adding this to yarn-env.sh and hadoop-env.sh: export HADOOP_HOME="/usr/local/hadoop" export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib" I also added this to hadoop-env.sh: export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native based on the advice of this post at Hortonworks: http://hortonworks.com/community/forums/topic/hdfs-tmp-dir-issue/ That cleared up all of my error messages; when I run /sbin/start-yarn.sh I get this: starting yarn daemons starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-HdNode.out localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-HdNode.out The only problem is, jps says that the Resource Manager isn't running. What's going on here? AI: So I was never able to find the error by looking through my logs. I ended up reinstalling with CDH5 (which was MUCH easier than installing "plain" Hadoop). Now everything runs fine! I'm still having trouble getting things to save to HDFS, but that's a question for another day...
H: R aggregate() with dates I am working on a data set that has multiple traffic speed measurements per day. My data is from the city of Chicago, and it is taken every minute for about six months. I wanted to consolidate this data into days only, so this is what I did: traffic <- read.csv("path.csv", header=TRUE) traffic2 <- aggregate(SPEED~DATE, data=traffic, FUN=mean) This was perfect because it took all of my data and averaged it by date. For example, my original data looked something like this: DATE SPEED 12/31/2012 22 12/31/2012 25 12/31/2012 23 ... and the final looked like this: DATE SPEED 10/1/2012 22 10/2/2012 23 10/3/2012 22 ... The only problem is, my data is supposed to start at 9/1/2012. I plotted this data, and it turns out the data goes from 10/1/2012-12/31/2012 and then 9/1/2012-9/30/2012. What in the world is going on here? AI: I am going to agree with @user1683454's comment. After importing, your DATE column is of either character or factor class (depending on your settings for stringsAsFactors), so it sorts as text rather than as dates, which is why "10/..." and "12/..." come before "9/...". Therefore, I think you can solve this issue in at least several ways, as follows: 1) Convert the data to the correct type during import. To do this, just use the following options of read.csv(): stringsAsFactors (or as.is) and colClasses. By default, you can specify conversion to Date or POSIXct classes. If you need a non-standard format, you have two options. First, if you have a single Date column, you can use as.Date.character() to pass the desired format to colClasses. Second, if you have multiple Date columns, you can write a function for that and pass it to colClasses via setAs(). Both options are discussed here: https://stackoverflow.com/questions/13022299/specify-date-format-for-colclasses-argument-in-read-table-read-csv. 2) Convert the data to the correct format after import. Thus, after calling read.csv(), you would have to execute the following code: dateColumn <- as.Date(dateColumn, "%m/%d/%Y") or dateColumn <- strptime(dateColumn, "%m/%d/%Y") (adjust the format to whatever Date format you need).
H: Modelling on one Population and Evaluating on another Population I am currently on a project that will build a model (train and test) on client-side Web data, but evaluate this model on server-side Web data. Unfortunately building the model on server-side data is not an option, nor is it an option to evaluate this model on client-side data. This model will be based on metrics collected on specific visitors. This is a real-time system that will be calculating a likelihood based on metrics collected while visitors browse the website. I am looking for approaches to ensure the highest possible accuracy when the model is evaluated. So far I have the following ideas: Clean the server-side data by removing webpages that are never seen client-side. Collect additional server-side data to make the server-side data more closely resemble client-side data. Collect data on the client and send this data to the server. This is possible and may be the best solution, but is currently undesirable. Build one or more models that estimate client-side visitor metrics from server-side visitor metrics and use these estimates in the likelihood model. Any other thoughts on evaluating on one population while training (and testing) on another population? AI: The key question is whether the users you are getting client-side data from come from the same population of users you would get server-side data from. If that is true, then you aren't really training on one population and applying to another. The main difference is that the client-side data happened in the past (by necessity, unless you are constantly refitting your model) and the server-side data will come in the future. Let's reformulate the question in terms of models rather than web clients and servers. You are fitting a model on one dataset and applying it to another. That is the classic use of predictive modeling/machine learning. Models use features from the data to make estimates of some parameter or parameters. Once you have a fitted (and tested) model, all that you need is the same set of features to feed into the model to get your estimates. Just make sure to model on a set of features (aka variables) that are available on both the client side and the server side. If that isn't possible, ask that question separately.
H: Facebook's Huge Database I assume that each person on Facebook is represented as a node (of a graph) in Facebook, and that the relationship/friendship between two people (nodes) is represented as an edge between the involved nodes. Given that there are millions of people on Facebook, how is the graph stored? AI: Strange as it sounds, graphs and graph databases are typically implemented as linked lists. As alluded to here, even the most popular/performant graph database out there (Neo4j) secretly uses something akin to a doubly-linked list. Representing a graph this way has a number of significant benefits, but also a few drawbacks. Firstly, representing a graph this way means that you can do edge-based insertions in near-constant time. Secondly, this means that traversing the graph can happen extremely quickly, if we're only looking to step up or down a linked list. The biggest drawback, though, comes from something sometimes called the Justin Bieber effect, where nodes with a large number of connections tend to be extremely slow to evaluate. Imagine having to traverse a million semi-redundant links every time someone linked to Justin Bieber. I know that the awesome folks over at Neo4j are working on the second problem, but I'm not sure how they're going about it, or how much success they've had.
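A toy Python sketch of the adjacency-list idea (each node keeps a chain of its edges), just to make the trade-offs above concrete; real graph databases obviously persist this on disk with far more bookkeeping than a dict of lists.

from collections import defaultdict

class FriendGraph:
    def __init__(self):
        self.adj = defaultdict(list)   # node id -> list of neighbour ids (the per-node "chain")

    def add_friendship(self, a, b):
        # Edge insertion is effectively constant time: append to both chains.
        self.adj[a].append(b)
        self.adj[b].append(a)

    def friends_of_friends(self, node):
        # Traversal cost is proportional to the chains we touch: cheap for an
        # ordinary user, expensive for a celebrity hub with millions of edges.
        return {f2 for f1 in self.adj[node] for f2 in self.adj[f1] if f2 != node}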
H: When is there enough data for generalization? Are there any general rules that one can use to infer what can be learned/generalized from a particular data set? Suppose the dataset was taken from a sample of people. Can these rules be stated as functions of the sample or the total population? I understand the above may be vague, so here is a case scenario: Users participate in a search task, where the data are their queries, clicked results, and the HTML content (text only) of those results. Each of these is tagged with its user and timestamp. A user may generate a few pages - for a simple fact-finding task - or hundreds of pages - for a longer-term search task, like a class report. Edit: In addition to generalizing about a population given a sample, I'm interested in generalizing about an individual's overall search behavior given a time slice. Theory and paper references are a plus! AI: It is my understanding that random sampling is a mandatory condition for making any generalization statements. IMHO, other parameters, such as sample size, just affect the probability level (confidence) of the generalization. Furthermore, clarifying @ffriend's comment, I believe that you have to calculate the needed sample size based on the desired values of the confidence interval, effect size, statistical power and number of predictors (this is based on Cohen's work - see the References section at the following link). For multiple regression, you can use the following calculator: http://www.danielsoper.com/statcalc3/calc.aspx?id=1. More information on how to select, calculate and interpret effect sizes can be found in the following nice and comprehensive paper, which is freely available: http://jpepsy.oxfordjournals.org/content/34/9/917.full. If you're using R (and even if you don't), you may find the following Web page on confidence intervals and R interesting and useful: http://osc.centerforopenscience.org/static/CIs_in_r.html. Finally, the following comprehensive guide to survey sampling can be helpful, even if you're not using survey research designs. In my opinion, it contains a wealth of useful information on sampling methods, sample size determination (including a calculator) and much more: http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm.
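If you would rather compute a required sample size in code than with the linked calculators, here is a hedged Python sketch using statsmodels; it covers only the simple two-group comparison case rather than the multiple-regression calculator referenced above, and the effect size, alpha and power values are just conventional defaults.

from statsmodels.stats.power import TTestIndPower

# Cohen's d = 0.5 (a "medium" effect), 5% significance, 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 64 participants per group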