H: Can 2 dimensional input be applied to SVM?
When considering a Support Vector Machine, can it take in multiple inputs? Can each of these inputs be a vector?
What I am trying to say is, can the input be a 2-dimensional vector?
AI: If I understand your question correctly: yes, an SVM can take multiple inputs. My suggestion for handling a vector as a feature would be to expand it out. For example,
x0 = (1,2)               x0 = 1
x1 = .4      ----->      x1 = 2
x2 = 0                   x2 = .4
                         x3 = 0
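In scikit-learn terms, a rough sketch of that expansion might look like this (the second sample's values are made up just to have two classes):

import numpy as np
from sklearn.svm import SVC

# Each sample originally has a 2-d vector feature plus a scalar feature, e.g. ((1, 2), 0.4).
# Flatten the vector so every component becomes its own column.
X = np.array([
    [1.0, 2.0, 0.4],   # sample 1: vector (1, 2) expanded into two columns, then the scalar
    [0.5, 1.5, 0.1],   # sample 2 (made-up values)
])
y = np.array([0, 1])

clf = SVC()
clf.fit(X, y)
print(clf.predict([[1.0, 2.0, 0.4]]))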
If this does not capture all of the characteristics of the vector that are important, then you may want to add other features (like magnitude of the vector) as well. |
H: What is the best Big-Data framework for stream processing?
I found that Apache-Storm, Apache-Spark, Apache-Flink and TIBCO StreamBase are some powerful frameworks for stream processing, but I don't know which one of them has the best performance and capabilities.
I know Apache-Spark and Apache-Flink are two general-purpose frameworks that support stream processing, but Apache-Storm and TIBCO StreamBase are built specifically for stream processing. Is there any considerable advantage among these frameworks?
Thanks
AI: It really depends on what you are looking to do. I love Apache Spark, but Storm has some history. I am sure as the streaming capability in Spark is built out that it will become a competitive solution. However, until Spark has some heavy hitting users (for streaming) there will remain unknown bugs.
You can also consider the community. Spark has a great community. I am not sure the level of the Storm community as I am usually the one receiving the data not handling the ingest. I can say we have used Storm on projects and I have been impressed with the real-time analysis and volumes of streaming data. |
H: Can distribution values of a target variable be used as features in cross-validation?
I came across an SVM predictive model where the author used the probabilistic distribution value of the target variable as a feature in the feature set. For example:
The author built a model for each gesture of each player to guess which gesture would be played next. Calculated over 1000 games played, the distribution might look like (20%, 10%, 70%). These numbers were then used as feature variables to predict the target variable in cross-validation.
Is that legitimate? That seems like cheating. I would think you would have to exclude the target variables from your test set when calculating features in order to not "cheat".
AI: After speaking with some experienced statisticians, this is what I got.
As for technical issues regarding the paper, I'd be worried about data leakage or using future information in the current model. This can also occur in cross validation. You should make sure each model trains only on past data, and predicts on future data. I wasn't sure exactly how they conducted CV, but it definitely matters. It's also non-trivial to prevent all sources of leakage. They do claim unseen examples but it's not explicit exactly what code they wrote here. I'm not saying they are leaking for sure, but I'm saying it could happen. |
H: Can we access HDFS file system and YARN scheduler in Apache Spark?
We can access the HDFS file system and the YARN scheduler in Apache-Hadoop, but Spark works at a higher level of abstraction. Is it possible to access HDFS and YARN in Apache-Spark too?
Thanks
AI: Yes.
There are examples on spark official document: https://spark.apache.org/examples.html
Just put your HDFS file URI in your input file path as below (Scala syntax).
val file = spark.textFile("hdfs://train_data")
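Regarding YARN: Spark can also use YARN as its cluster manager. A rough PySpark sketch follows; the master string and the HDFS address are assumptions and depend on your Spark version and cluster setup.

from pyspark import SparkConf, SparkContext

# Typically you would submit this with: spark-submit --master yarn my_job.py
# Setting the master in code is shown only for illustration.
conf = SparkConf().setAppName("hdfs-yarn-example").setMaster("yarn-client")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs://namenode:8020/train_data")  # hypothetical namenode address
print(lines.count())
|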
H: Pre-processing (center, scale, impute) among training sets (different forms) and the test set - what is a good approach?
I am currently working on a multi-class classification problem with a large training set. However, it has some specific characteristics, which induced me to experiment with it, resulting in few versions of the training set (as a result of re-sampling, removing observations, etc).
I want to perform pre-processing of the data, that is to scale, center and impute (not much imputation though) values. This is the point where I've started to get confused.
I've been taught that you should always pre-process the test set in the same way you've pre-processed the training set, that is (for scaling and centering) to measure the mean and standard deviation on the training set and apply those values to the test set. This seems reasonably to me.
But what to do in case when you have shrinked/resampled training set? Should one focus on characteristics of the data that is actually feeding the model (that is what would 'train' function in R's caret package suggest, as you can put the pre-processing object in there directly) and apply these to the test set, or maybe one should capture the real characteristics of the data (from the whole untouched training set) and apply these? If the second option is better, maybe it would be worth it to capture the characteristics of the data by merging the training and test data together just for pre-processing step to get as accurate estimates as possible (I've actually never heard of anyone doing that though)?
I know I can simply test some of the approaches specified here, and I surely will, but are there any suggestions based on theory or your intuition/experience on how to tackle this problem?
I also have one additional and optional question. Does it make sense to center but NOT scale the data (or the other way around) in any case? Can anyone present any example where that approach would be reasonable?
Thank you very much in advance.
AI: I thought about it this way: the training and test sets are both a sample of the unknown population. We assume that the training set is representative of the population we're studying. That is, whatever transformations we make to the training set are what we would make to the overall population. In addition, whatever subset of the training data we use, we assume that this subset represents the training set, which represents the population.
So in response to your first question, it's fine to use that shrunk/resampled training set as long as you feel it's still representative of the population. That's assuming your untouched training set captures the "real characteristics" in the first place :)
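As a concrete illustration of fitting the pre-processing on the training data only and re-using it on the test data, a scikit-learn sketch (with stand-in data) could look like:

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_train = rng.normal(size=(100, 3))   # stand-in for your training features
X_test = rng.normal(size=(25, 3))     # stand-in for your test features

scaler = StandardScaler().fit(X_train)       # mean/sd estimated on the training set only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)     # the same mean/sd applied to the test set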
As for your second question, don't merge the training and testing sets. The testing set is there to act as future unknown observations. If you build these into the model then you won't know whether the model is wrong or not, because you will have used up the data you were going to test it with. |
H: How to compute F1 score?
Recently I read about path ranking algorithm in a paper (source: Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion).
In this paper was a table (Table 3) with facts and I tried to understand how they were calculated.
F1 (harmonic mean of precision and recall) = 0.04
P (precision) = 0.03
R (recall) = 0.33
W (weight given to this feature by logistic regression)
I found a formula for F1 via Google which is
$F1 = 2 * \frac{precision * recall}{precision + recall}$
The problem is that I get the result of 0.055 with this formula, but not the expected result of 0.04.
Can someone help me to get this part?
Also, does someone know how 'W' can be calculated?
Thanks.
AI: First you need to learn about logistic regression; it is an algorithm that will assign weights to different features given some training data. Read the wiki intro, it is quite helpful; basically the betas there are the same as the Ws in the paper.
The formula you have is correct, and those values do seem off. It also depends on the number of significant figures: perhaps they are making their calculations with more figures than the ones they are reporting.
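For reference, plugging the reported precision and recall into the formula gives the 0.055 figure:

precision, recall = 0.03, 0.33
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.055, not the 0.04 reported in the table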
But honestly, you can't understand much of the paper unless you understand LR |
H: Machine learning for Point Clouds Lidar data
Our main use case is object detection in 3d lidar point clouds i.e. data is not in RGB-D format. We are planning to use CNN for this purpose using theano. Hardware limitations are CPU: 32 GB RAM Intel 47XX 4th Gen core i7 and GPU: Nvidia quadro k1100M 2GB. Kindly help me with recommendation for architecture.
I am thinking in the lines of 27000 input neurons on basis of 30x30x30 voxel grid but can't tell in advance if this is a good option.
Additional Note: Dataset has 4500 points on average per view per point cloud
AI: First, CNNs are great for image recognition, where you usually take sub-sampled windows of about 80 by 80 pixels; 27,000 input neurons is too large, and it will take you forever to train a CNN on that.
Furthermore, why did you choose a CNN? Why don't you try some more down-to-earth algorithms first, like SVMs or logistic regression?
4500 Data points and 27000 features seems unrealistic to me, and very prone to over fitting.
Check this first.
http://scikit-learn.org/stable/tutorial/machine_learning_map/ |
H: Proper way of fighting negative outputs of a regression algorithms where output must be positive all the way
Maybe it is a bit general question. I am trying to solve various regression tasks and I try various algorithms for them. For example, multivariate linear regression or an SVR. I know that the output can't be negative and I never have negative output values in my training set, though I could have 0's in it (for example, I predict 'amount of cars on the road' - it can't be negative but can be 0). Rather often I face a problem that I am able to train relatively good algorithm (maybe fit a good regression line to my data) and I have relatively small average squared error on training set. But when I try to run my regression algorithm against new data I sometimes get a negative output. Obviously, I can't accept negative output since it is not a valid value. The question is - what is the proper way of working with such output? Should I think of negative output as a 0 output? Is there any general advice for such cases?
AI: The problem is your model choice, as you seem to recognize. In the case of linear regression, there is no restriction on your outputs. Often this is fine when predictions need to be non-negative so long as they are far enough away from zero. However, since many of your training examples are zero-valued, this isn't the case.
If your data is non-negative and discrete (as in the case with number of cars on the road), you could model using a generalized linear model (GLM) with a log link function. This is known as Poisson regression and is helpful for modeling discrete non-negative counts such as the problem you described. The Poisson distribution is parameterized by a single value $\lambda$, which describes both the expected value and the variance of the distribution.
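A minimal sketch of Poisson regression with statsmodels (synthetic data, purely illustrative):

import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 2))                       # stand-in features
lam = np.exp(0.5 + 1.0 * X[:, 0] - 0.5 * X[:, 1])    # true rate via the log link
y = rng.poisson(lam)                                 # non-negative counts

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
result = model.fit()
print(result.predict(sm.add_constant(X))[:5])        # predictions are always positive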
This results in an approach similar to the one described by Emre in that you are attempting to fit a linear model to the log of your observations. |
H: What is a discrimination threshold of binary classifier?
With respect to ROC can anyone please tell me what the phrase "discrimination threshold of binary classifier system" means? I know what a binary classifier is.
AI: Just to add a bit.
Like it was mentioned before, if you have a classifier (probabilistic) your output is a probability (a number between 0 and 1), ideally you want to say that everything larger than 0.5 is part of one class and anything less than 0.5 is the other class.
But if you are classifying cancer, you are deeply concerned with false negatives (telling someone he does not have cancer when he does), while a false positive (telling someone he does have cancer when he doesn't) is not as critical (IDK - being told you have cancer could be psychologically very costly). So you might artificially move that threshold from 0.5 to higher or lower values, to change the sensitivity of the model in general.
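A rough scikit-learn sketch of sweeping the threshold over predicted probabilities (made-up scores):

import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])  # predicted probabilities

# Each threshold trades false positives against false negatives.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for t, f, s in zip(thresholds, fpr, tpr):
    print("threshold=%.2f  FPR=%.2f  TPR=%.2f" % (t, f, s))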
By doing this, you can generate the ROC plot for different thresholds. |
H: How to determine whether a bad performance is caused by data quality?
I'm using a set of features, says $X_1, X_2, ..., X_m $, to predict a target value $Y$, which is a continuous value from zero to one.
At first, I try to use a linear regression model to do the prediction, but it does not perform well. The root-mean-squared error is about 0.35, which is quite high for prediction of a value from 0 to 1.
Then, I tried different models, e.g., decision-tree-based regression, random-forest-based regression, gradient boosting tree regression, etc. However, none of these models performs well either (RMSE $\approx$ 0.35; no significant difference from linear regression).
I understand there are many possible reasons for this problem, such as: feature selection or choice of model, but maybe more fundamentally, the quality of data set is not good.
My question is: how can I examine whether it is caused by bad data quality?
BTW, for the size of data set, there are more than 10K data points, each of which associated with 105 features.
I have also tried to investigate the importance of each feature by using decision-tree-based regression. It turns out that only one feature (which, to my knowledge of this problem, should not be the most outstanding one) has an importance of 0.2, while the rest have importances below 0.1.
AI: First, it sounds like your choice of model selection is a problem here. Your outputs are binary-valued, not continuous. Specifically you may have a classification problem on your hands rather than a traditional regression problem. My first recommendation would be to try a simple classification approach such as logistic regression or linear discriminant analysis.
Regarding your suspicions of bad data, what would bad data look like in this situation? Do you have reason to suspect that your $X$ values are noisy or that your $y$ values are mislabeled? It is also possible that there is not a strong relationship between any of your features and your targets. Since your targets are binary, you should look at histograms of each of your features to get a rough sense of the class conditional distributions, i.e. $p(X_1|y=1)$ vs $p(X_1|y=0)$. In general though, you will need to be more specific about what "bad data" means to you.
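A quick matplotlib sketch of such class-conditional histograms (with stand-in data; replace it with one of your features):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
y = rng.randint(0, 2, size=500)                      # stand-in binary target
x1 = rng.normal(loc=y * 0.5, scale=1.0, size=500)    # stand-in feature, mildly related to y

plt.hist(x1[y == 0], bins=30, alpha=0.5, label='y = 0')
plt.hist(x1[y == 1], bins=30, alpha=0.5, label='y = 1')
plt.xlabel('X1')
plt.legend()
plt.show()
# If the two histograms overlap almost completely for every feature,
# that feature carries little information about the target.
|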
H: Python for data analytics
What are some data analytics packages and features in Python which help with data analytics?
AI: You're looking for this answer: https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists |
H: Machine Learning on financial big data
Disclaimer: although I know some things about big data and am currently learning some other things about machine learning, the specific area that I wish to study is vague, or at least appears vague to me now. I'll do my best to describe it, but this question could still be categorised as too vague or not really a question. Hopefully, I'll be able to reword it more precisely once I get a reaction.
So,
I have some experience with Hadoop and the Hadoop stack (gained via using CDH), and I'm reading a book about Mahout, which is a collection of machine learning libraries. I also think I know enough statistics to be able to comprehend the math behind the machine learning algorithms, and I have some experience with R.
My ultimate goal is making a setup that would make trading predictions and deal with financial data in real time.
I wonder if there're any materials that I can further read to help me understand ways of managing that problem; books, video tutorials and exercises with example datasets are all welcome.
AI: There are tons of materials on financial (big) data analysis that you can read and peruse. I'm not an expert in finance, but am curious about the field, especially in the context of data science and R. Therefore, the following are selected relevant resource suggestions that I have for you. I hope that they will be useful.
Books: Financial analysis (general / non-R)
Statistics and Finance: An Introduction;
Statistical Models and Methods for Financial Markets.
Books: Machine Learning in Finance
Machine Learning for Financial Engineering (!) - seems to be an edited collection of papers;
Neural Networks in Finance: Gaining Predictive Edge in the Market.
Books: Financial analysis with R
Statistical Analysis of Financial Data in R;
Statistics and Data Analysis for Financial Engineering;
Financial Risk Modelling and Portfolio Optimization with R
Statistics of Financial Markets: An Introduction (code in R and MATLAB).
Academic Journals
Algorithmic Finance (open access)
Web sites
RMetrics
Quantitative Finance on StackExchange
R Packages
the above-mentioned RMetrics site (see this page for general description);
CRAN Task Views, including Finance, Econometrics and several other Task Views.
Competitions
MODELOFF (The Financial Modeling World Championships)
Educational Programs
MS in Financial Engineering - Columbia University;
Computational Finance - Hong Kong University.
Blogs (Finance/R)
Timely Portfolio;
Systematic Investor;
Money-making Mankind. |
H: Dividing percentage
A book I'm now reading, "Apache Mahout Cookbook" by Pierro Giacomelli, states that
To avoid [this], you need to divide the vector files into two sets called the 80-20 split <...>
A good dividing percentage is shown to be 80% and 20%.
Is there a strict statistical proof of this being the best percentage, or is it a heuristic result?
AI: If this is about splitting your data into training and testing data, then 80/20 is a common rule of thumb. An "optimal" split (which would need to be operationalized) would likely depend on your sample size, distributions and relationships between your variables.
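For example, a scikit-learn sketch of such an 80/20 split (stand-in data):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # stand-in features
y = np.arange(50)                   # stand-in target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # 80% train, 20% test
print(len(X_train), len(X_test))             # 40 10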
It is also common to split your data three ways (e.g., 60/20/20 - again rules of thumb), into a training set that you train your models on and a test set which you test your model on. You will iterate training and testing until you like the result. Then, and only then you apply the final model (trained on both the training and test set) on the third validation set. This avoids "overfitting on the test set".
However, cross-validation is much better than a simple data split. Your textbook should also cover cross-validation. If it doesn't, get a better textbook. |
H: Best Python library for statistical inference
I'm curious if anyone has Python library suggestions for inferential statistics. I'm currently reading An Introduction to Statistical Learning, which uses R for the example code, but ideally I'd like to use Python as well.
Most of my data experience is with Pandas, Matplotlib, and Sklearn doing predictive modeling.
So far I've found statsmodels. Is this what is recommended or is there something else?
Thanks!
AI: statsmodels is a good, and fairly standard, package to statistics.
For Bayesian inference you can go with PyMC - see, e.g., Cam Davidson-Pilon, Probabilistic Programming & Bayesian Methods for Hackers. |
H: Applications and differences for Jaccard similarity and Cosine Similarity
Jaccard similarity and cosine similarity are two very common measurements for comparing item similarities. However, I am not very clear on in which situations one should be preferred over the other.
Can somebody help clarify the differences of these two measurements (the difference in concept or principle, not the definition or computation) and their preferable applications?
AI: Jaccard Similarity is given by
$s_{ij} = \frac{p}{p+q+r}$
where,
p = # of attributes positive for both objects
q = # of attributes 1 for i and 0 for j
r = # of attributes 0 for i and 1 for j
Whereas, cosine similarity = $\frac{A \cdot B}{\|A\|\|B\|}$ where A and B are object vectors.
Simply put, in cases where the vectors A and B are composed of 0s and 1s only, cosine similarity divides the number of common attributes by the product of A's and B's distances from zero (their norms), whereas in Jaccard similarity the number of common attributes is divided by the number of attributes that exist in at least one of the two objects.
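A small numeric illustration of that difference on 0/1 vectors (numpy; the vectors are made up):

import numpy as np

a = np.array([1, 1, 0, 0, 0, 0])
b = np.array([1, 0, 1, 0, 0, 0])

p = np.sum((a == 1) & (b == 1))          # attributes positive for both objects
q = np.sum((a == 1) & (b == 0))
r = np.sum((a == 0) & (b == 1))
jaccard = p / (p + q + r)

cosine = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(jaccard, cosine)   # 1/3 vs 1/2; matching zeros contribute to neither measure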
And there are many other measures of similarity, each with its own eccentricities. When deciding which one to use, try to think of a few representative cases and work out which index would give the most usable results to achieve your objective.
The Cosine index could be used to identify plagiarism, but will not be a good index to identify mirror sites on the internet. Whereas the Jaccard index, will be a good index to identify mirror sites, but not so great at catching copy pasta plagiarism (within a larger document).
When applying these indices, you must think about your problem thoroughly and figure out how to define similarity. Once you have a definition in mind, you can go about shopping for an index.
Edit:
Earlier, I had an example included in this answer, which was ultimately incorrect. Thanks to the several users who have pointed that out, I have removed the erroneous example. |
H: Kaggle Titanic Survival Table an example of Naive Bayes?
Is the survival-table classification method on the Kaggle Titanic dataset an example of an implementation of Naive Bayes? I am asking because I am reading up on Naive Bayes and the basic idea is as follows:
"Find out the probability of the previously unseen instance
belonging to each class, then simply pick the most probable class"
The survival table
(http://www.markhneedham.com/blog/tag/kaggle/)
seems like an evaluation of the possibilities of survival given possible combinations of values of the chosen features and I'm wondering if it could be an example of Naive Bayes in another name. Can someone shed light on this ?
AI: Naive Bayes is just one of the several approaches that you may apply in order to solve the Titanic's problem. The aim of the Kaggle's Titanic problem is to build a classification system that is able to predict one outcome (whether one person survived or not) given some input data. The survival table is a training dataset, that is, a table containing a set of examples to train your system with.
As I mentioned before, you could apply Naive Bayes to build your classification system to solve the Titanic problem. Naive Bayes is one of the simplest classification algorithms out there. It assumes that the data in your dataset has a very specific structure. Sometimes Naive Bayes can provide you with results that are good enough. Even if that is not the case, Naive Bayes may be useful as a first step; the information you obtain by analyzing Naive Bayes' results, and by further data analysis, will help you to choose which classification algorithm you could try next. Other examples of classification methods are k-nearest neighbours, neural networks, and logistic regression, but this is just a short list.
If you are new to Machine Learning, I recommend you to take a look to this course from Stanford: https://www.coursera.org/course/ml |
H: R Programing beginner
Hi, I am new to data analytics. I am planning to learn R by doing some real-time projects. How should I streamline (set goals for) my time in learning R? Also, I have not learnt statistics to date; I am planning to learn both side by side. I am a mid-level data warehouse engineer who has experience in DBMS data integration. I am planning to learn R so that I can bring out useful analysis from the integrated data.
To be specific, I am a beginner in R, so what are the basic statistical concepts I should know and implement in R? If I want to be an expert or above-average person in R, how should I plan strategically to become one? Say I can spend 2 hrs a day for 1 year, what level should I reach? FYI, I am working for a SaaS company. What are the ways in which I can utilize R knowledge in a SaaS environment?
AI: I second saq7 and Gopinath, the R courses on Coursera are excellent. I really rate the Johns Hopkins ones: https://www.coursera.org/specialization/jhudatascience/1/courses. You should also keep an eye on the software carpentry site for courses they run in your area. If you can't wait, all the software carpentry learning material is online so you can follow it yourself. |
H: When it is time to use Hadoop?
Hadoop is a buzzword now. A lot of start-ups use it (or just say that they use it), and a lot of widely known companies use it. But where is the border? When can a person say: "Better to solve it without Hadoop"?
AI: It's an economic calculation, really. When you have a computing "problem" (in the most general possible sense) that you can't solve with one computer, it makes sense to use a cluster of commodity machines when doing so A. allows you to solve the problem, and B. is cheaper than a forklift upgrade to a bigger computer, or upgrading to specialized hardware.
When those things are true, and you are going the "commodity cluster" route, Hadoop makes a lot of sense, especially if the nature of the problem maps (no pun intended) well to MapReduce. If it doesn't, one shouldn't be scared to consider "older" cluster approaches like a Beowulf cluster using MPI or OpenMP.
That said, the newer YARN based Hadoop does support a form of MPI, so those worlds are starting to move closer together. |
H: What is a good hardware setup for using Python across multiple users
This question is likely somewhat naive. I know I (and my colleagues) can install and use Python on local machines. But is that really a best practice? I have no idea.
Is there value in setting up a Python "server"? A box on the network where we develop our data science related Python code. If so, what are the hardware requirements for such a box? Do I need to be concerned about any specific packages or conflicts between projects?
AI: Is installing Python locally a good practice? Yes, if you are going to develop in Python, it is always a good idea to have a local environment where you can break things safely.
Is there value in setting up a Python "server"? Yes, but before doing so, be sure to be able to share your code with your colleagues using a version control system. My reasoning would be that, before you move things to a server, you can move a great deal forward by being able to test several different versions in the local environment mentioned above. Examples of VCS are git, svn, and for the deep nerds, darcs.
Furthermore, a "Python server" where you can deploy your software once it is integrated into a releasable version is something usually called "staging server". There is a whole philosophy in software engineering — Continuous Integration — that advocates staging whatever you have in VCS daily or even on each change. In the end, this means that some automated program, running on the staging server, checks out your code, sees that it compiles, runs all defined tests and maybe outputs a package with a version number. Examples of such programs are Jenkins, Buildbot (this one is Python-specific), and Travis (for cloud-hosted projects).
What are the hardware requirements for such a box? None, as far as I can tell. Whenever it runs out of disk space, you will have to clean up. Having more CPU speed and memory will make concurrent builds easier, but there is no real minimum.
Do I need to be concerned about any specific packages or conflicts between projects? Yes, this has been identified as a problem, not only in Python, but in many other systems (see Dependency hell). The established practice is to keep projects isolated from each other as far as their dependencies are concerned. This means, avoid installing dependencies on the system Python interpreter, even locally; always define a virtual environment and install dependencies there. Many of the aforementioned CI servers will do that for you anyway. |
H: How to visualize multivariate regression results
Are there commonly accepted ways to visualize the results of a multivariate regression for a non-quantitative audience? In particular, I'm asking how one should present data on coefficients and T statistics (or p-values) for a regression with around 5 independent variables.
AI: I personally like dotcharts of standardized regression coefficients, possibly with standard error bars to denote uncertainty. Make sure to standardize coefficients (and SEs!) appropriately so they "mean" something to your non-quantitative audience: "As you see, an increase of 1 unit in Z is associated with an increase of 0.3 units in X."
In R (without standardization):
set.seed(1)
foo <- data.frame(X=rnorm(30),Y=rnorm(30),Z=rnorm(30))
model <- lm(X~Y+Z,foo)
coefs <- coefficients(model)
std.errs <- summary(model)$coefficients[,2]
dotchart(coefs,pch=19,xlim=range(c(coefs+std.errs,coefs-std.errs)))
lines(rbind(coefs+std.errs,coefs-std.errs,NA),rbind(1:3,1:3,NA))
abline(v=0,lty=2) |
H: Splitting binary classification into smaller susbsets
As an example: if you are trying to classify humans versus dogs, is it possible to approach this problem by classifying different kinds of animals (birds, fish, reptiles, mammals, ...) or even smaller subsets (dogs, cats, whales, lions, ...)?
Then when you try to classify a new data set, anything that did not fall into one of those classes can be considered a human.
If this is possible, are there any benefits into breaking a binary class problem into several classes (or perhaps labels)?
Benefits I am looking into are: accuracy/precision of the classifier, parallel learning.
AI: If you are trying to get the best accuracy, etc. for a given question, you should always learn on a training set that is labeled exactly according to your question. You shouldn't expect to get better results by using more granular class labels. The classifier would then try to pick up the differences between the classes and separate them apart. Since, in practice, the variables in your training set will not perfectly explain the more granular classification question, you shouldn't expect to get a better answer for your less granular classification problem.
If you are not happy with the accuracy of your model you try the following instead:
review the explanatory variables. Think about what might influence the classification problem. Maybe there is a clever way to construct new variables (from your existing ones) that helps. It's not possible to give general advice on that since you have to consider the properties of your classifier
if your class distribution is very skewed you might consider over/undersampling
you might run several different classifiers and then classify based on the majority vote. Note that you will most likely sacrifice the explainability of your model.
Also, you seem to have some misunderstanding when you write 'you would assign it to human if it doesn't fall into any of the granular classes'. Note that you should always try to pick class labels covering the whole universe (all possible classes); the 'human' class can always be defined as the complement of the other classes. Also, you will have to have instances of each class in your training set. |
H: Going from report to feature matrix
I am starting to play around in datamining / machine learning and I am stuck on a problem that's probably easy.
So I have a report that lists the URL and the number of visits a person made. A combination of IP and URL results in a number of visits.
Now I want to run the k-means clustering algorithm on this so I thought I could approach it like this:
This is my data:
url       ip    visits
abc.be    123   5
abc.be/a  123   2
abc.be/b  123   2
abc.be/b  321   4
And I would turn in into a feature vector/matrix like so:
abc.be  abc.be/a  abc.be/b  impressions
1       0         0         5
0       1         0         2
0       0         1         2
0       0         1         4
But I am stuck on how to transform my data set to a feature matrix. Any help would be appreciated.
AI: I don't understand what you mean by
So I have a report that lists the url and the number of visits a person did. So a combination of ip and url result in an amount of visits.
Assuming that you equate an IP with a user, and you wish to cluster users by their URL visitation frequencies, your matrix, M, would have
One row per IP (user)
One column for each URL that you are tracking (your features)
and the entries in M would be "visits" of a given URL by a particular IP
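A rough pandas sketch of building M from the report (column names taken from your example; illustrative only):

import pandas as pd

report = pd.DataFrame({
    'url':    ['abc.be', 'abc.be/a', 'abc.be/b', 'abc.be/b'],
    'ip':     ['123', '123', '123', '321'],
    'visits': [5, 2, 2, 4],
})

# One row per IP, one column per URL, visit counts as values (0 where never visited).
M = report.pivot_table(index='ip', columns='url', values='visits',
                       aggfunc='sum', fill_value=0)
print(M)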
Given these assumptions, and your report, M would be:
     abc.be  abc.be/a  abc.be/b
123  5       2         2
321  0       0         4 |
H: Big data and data mining for CRM?
We are currently developing customer relationship management software for SMEs. What I'd like to build for our future CRM is a social-based approach (Social CRM). Therefore we will allow our users (SMEs) to integrate their CRM with their social network accounts. The CRM will also enhance corporate communication within the owner company.
All these processes I've just indicated above will certainly generate lots of unstructured data.
I am wondering how we can integrate big data and data-mining concepts into our project, especially for the data generated by social networks. I am not an expert on these topics but I really want to start somewhere.
Basic capabilities of CRM (Modules)
-Contacts: People who you have a business relationship.
-Accounts: Clients who you've done a business before.
-Leads: Accounts who are your potential customers.
-Opportunities: Any business opportunity for an account or a lead.
-Sales Orders
-Calendar
-Tasks
What kind of unstructured data or the ways (ideas) could be useful for the modules I've just wrote above? If you need more specific information please write in comments.
AI: The two modules where you can really harness data mining and big data techniques are probably Leads and Opportunities. The reason is that, as you've written yourself, both contain 'potential' information that you can harness (through predictive algorithms) to get more customers. Taking Leads as an example, you can use a variety of machine learning algorithms to assign a probability to each account, based on that account's potential for becoming your customer in the near future. Since you already have an Accounts module which gives you information about your current customers, you can use this information to train your machine learning algorithms. This is all at a very high level but hopefully, you're getting the gist of what I'm saying. |
H: strings as features in decision tree/random forest
I am doing some problems on an application of decision tree/random forest. I am trying to fit a problem which has numbers as well as strings (such as country name) as features. Now the library, scikit-learn takes only numbers as parameters, but I want to inject the strings as well as they carry a significant amount of knowledge.
How do I handle such a scenario?
I can convert a string to numbers by some mechanism such as hashing in Python. But I would like to know the best practice on how strings are handled in decision tree problems.
AI: In most of the well-established machine learning systems, categorical variables are handled naturally. For example in R you would use factors, in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn use only numerical features, and these features are always interpreted as continuous numeric variables.
Thus, simply replacing the strings with a hash code should be avoided, because being considered as a continuous numerical feature any coding you will use will induce an order which simply does not exist in your data.
One example: coding ['red','green','blue'] as [1,2,3] would produce weird things like 'red' being lower than 'blue', and if you average a 'red' and a 'blue' you would get a 'green'. Another more subtle example might happen when you code ['low', 'medium', 'high'] with [1,2,3]. In the latter case there might happen to be an ordering which makes sense; however, some subtle inconsistencies might appear when 'medium' is not exactly in the middle of 'low' and 'high'.
Finally, the answer to your question lies in coding the categorical feature into multiple binary features. For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category matches and 0 otherwise. This is called one-hot-encoding, binary encoding, one-of-k-encoding or whatever. You can check documentation here for encoding categorical features and feature extraction - hashing and dicts. Obviously one-hot-encoding will expand your space requirements and sometimes it hurts the performance as well.
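A short pandas sketch of this encoding (the column and values are made up):

import pandas as pd

df = pd.DataFrame({'color': ['red', 'green', 'blue', 'green']})

# One binary column per category; no artificial ordering is introduced.
encoded = pd.get_dummies(df, columns=['color'], dtype=int)
print(encoded)   # columns color_blue, color_green, color_red with 0/1 entries
|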
H: Where can I find resources and papers regarding Data Science in the area of Public Health
I'm quite new to Data Science, but I would like to do a project to learn more about it.
My subject will be Data Understanding in Public Health.
So I want to do some introductory research to public health.
I would like to visualize some data with the use of a tool like Tableau.
Which path would you take to develop a good understanding of Data Science? I imagine taking some online courses, eg. Udacity courses on data science, but which courses would you recommend?
Where can I get real data (secondary Dummy Data) to work with?
And are there any good resources on research papers done in Data Science area with the subject of Public Health?
Any suggestions and comments are welcome.
AI: I don't think that you will learn much about data science (meaning, acquire understanding and skills) by using software tools like Tableau. Such tools are targeting mainly advanced users (not data scientists), for example analysts and other subject matter experts, who use graphical user interface (GUI) to analyze and (mostly) visualize data. Having said that, software tools like Tableau might be good enough to perform initial phase of data science workflow: exploratory data analysis (EDA).
In terms of data science self-education, there are several popular online courses (MOOCs) that you can choose from (most come in both free and paid versions). In addition to the one on Udacity that you've mentioned (https://www.udacity.com/course/ud359), there are two data science courses on Coursera: Introduction to Data Science by University of Washington (https://www.coursera.org/course/datasci) and a set of courses from Data Science specialization by Johns Hopkins University (https://www.coursera.org/specialization/jhudatascience/1). Note that you can take specialization's individual courses for free at your convenience. There are several other, albeit less popular, data science MOOCs.
In terms of data sources, I'm not sure what you mean by "Dummy Data", but there is a wealth of open data sets, including many in the area of public health. You can review corresponding resources, listed on KDnuggets (http://www.kdnuggets.com/datasets/index.html) and choose ones that you're interested in. For a country-level analysis, the fastest way to obtain data is finding and visiting corresponding open data government websites. For example, for public health data in US, I would go to http://www.healthdata.gov and http://www.data.gov (the latter - for corresponding non-medical data that you might want to include in your analysis).
In regard to research papers in the area of public health, I have two comments: 1) most empirical research in that (or any other) area IMHO can be considered a data science study/project; 2) you need to perform a literature review in the area or on the topic of your interest, so you're on your own in that sense.
Finally, a note on software tools. If you're serious about data science, I would suggest to invest some time in learning either R, or Python (if you don't know them already), as those are two most popular open source tools among data scientists nowadays. Both have a variety of feature-rich development environments as well as large ecosystems of packages/libraries and users/developers all over the world.
You might also find useful some of my other related answers here on Data Science StackExchange site. For example, I recommend you to read this answer, this answer and this answer. Good luck! |
H: problem of choosing right statistical method for scheduler prediction
I am struggling to choose a right data prediction method for the following problem.
Essentially I am trying to model a scheduler operation, trying to predict its scheduling without knowing the scheduling mechanism and having incomplete data.
(1) There are M available resource blocks that can carry data, N data channels that must be scheduled every time instance i
(2) Inputs into the scheduler:
Matrix $X_i$ of size M by N, consisting of N column vectors, one from each data source. Each of the M elements is an index from 1 to 32 carrying information about the quality of the data channel for a particular resource block: 1 - really bad quality, 32 - excellent quality.
Data which contains type of data to be carried (voice/internet etc)
Scheduler prioritizes number of resource blocks occupied by each channel every time instant i.
Given that
I CAN see resource allocation map every time instant
I DO have access to matrix $X_i$
I DON'T know the algorithm of scheduler and
I DON'T have access to the type of data to be scheduled.
I want to have a best guess (prediction) how the data will be scheduled based on this incomplete information i.e, which resource block will be occupied by which data channel. What is the best choice of prediction/modelling algorithm?
Any help appreciated!
AI: Do you know if the scheduler has a memory?
Let us assume for a moment that the scheduler has no memory. This is a straightforward classification (supervised learning) problem: the inputs are X, the outputs are the schedules (N->M maps). Actually, if every N gets scheduled and the only question is which M it gets, the outputs are lists which channel (or none) is scheduled to each block, and there is only a certain possible number of those, so you can model them as discrete outputs (classes) with their own probabilities. Use whatever you like (AdaBoost, Naive Bayes, RBF SVM, Random Forest...) as a classifier. I think you will quickly learn about the general behavior of the scheduler.
If the scheduler has a memory, then things get complicated. I think you might approach that as a hidden Markov model: but the number of individual states may be quite large, and so it may be essentially impossible to build a complete map of transition probabilities. |
H: Matrix factorization for like/dislike/unknown data
Most literature focuses on either explicit rating data or implicit (like/unknown) data. Are there any good publications that handle like/dislike/unknown data? That is, in the data matrix there are three values, and I'd like to recommend from the unknown entries.
And are there any good open source implementations on this?
Thanks.
AI: This is very similar to the Netflix problem; most matrix factorization methods can be adapted so that the error function is only evaluated at known points. For instance, you can take the gradient descent approach to SVD (minimizing the Frobenius norm) but only evaluate the error and calculate the gradient at known points. I believe you can easily find code for this.
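A rough numpy sketch of that masked-SGD idea (not tuned, purely illustrative):

import numpy as np

rng = np.random.RandomState(0)
R = np.array([[1., 0., np.nan],
              [np.nan, 1., 1.],
              [0., np.nan, 0.]])      # like/dislike matrix, unknowns as NaN
known = ~np.isnan(R)

k, lr, reg = 2, 0.05, 0.01
U = 0.1 * rng.randn(R.shape[0], k)
V = 0.1 * rng.randn(R.shape[1], k)

for epoch in range(200):
    for i, j in zip(*np.where(known)):
        u, v = U[i].copy(), V[j].copy()
        err = R[i, j] - u.dot(v)          # error only at observed entries
        U[i] += lr * (err * v - reg * u)
        V[j] += lr * (err * u - reg * v)

pred = U.dot(V.T)    # scores for the unknown entries can be ranked for recommendation
print(np.round(pred, 2))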
Another option would be exploiting the binary nature of your matrix and adapting binary matrix factorization tools in order to enforce binary factors (if you require them). I'm sure you can adapt one of the methods described here to work with unknown data using a similar trick as the one above. |
H: How to test/validate unlabeled data in association rules in R?
I produced association rules by using the arules package (apriori). I'm left with +/- 250 rules. I would like to test/validate the rules that I have, like answering the question: How do I know that these association rules are true? How can I validate them? What are common practice to test it?
I thought about cross validation (with training data and test data) as I read that it's not impossible to use it on unsupervised learning methods..but I'm not sure if it makes sense since I don't use labeled data.
If someone has a clue, even if it's not specifically about association rules (but testing other unsupervised learning methods), that would also be helpful to me.
I uploaded an example of the data that I use here in case it's relevant: https://www.mediafire.com/?4b1zqpkbjf15iuy
AI: You may want to consider using your own APparameter object to put "significance" constraints on the rules learned by Apriori. See page 13 of the arules documentation. This could reduce the number of uninteresting rules returned in your run.
In lieu of gold standard data for your domain, consider bootstrap resampling as a form of validation, as described in this article. |
H: What is an 'old name' of data scientist?
Terms like 'data science' and 'data scientist' are increasingly used these days.
Many companies are hiring 'data scientist'. But I don't think it's a completely new job.
Data have existed from the past and someone had to deal with data.
I guess the term 'data scientist' became more popular because it sounds fancier and more 'sexy'.
How were data scientists called in the past?
AI: In reverse chronological order: data miner, statistician, (applied) mathematician. |
H: Combining data sets without using row.name
I start with a data.frame (or a data_frame) containing my dependent Y variable for analysis, my independent X variables, and some "Z" variables -- extra columns that I don't need for my modeling exercise.
What I would like to do is:
Create an analysis data set without the Z variables;
Break this data set into random training and test sets;
Find my best model;
Predict on both the training and test sets using this model;
Recombine the training and test sets by rows; and finally
Recombine these data with the Z variables, by column.
It's the last step, of course, that presents the problem -- how do I make sure that the rows in the recombined training and test sets match the rows in the original data set? We might try to use the row.names variable from the original set, but I agree with Hadley that this is an error-prone kludge (my words, not his) -- why have a special column that's treated differently from all other data columns?
One alternative is to create an ID column that uniquely identifies each row, and then keep this column around when dividing into the train and test sets (but excluding it from all modeling formulas, of course). This seems clumsy as well, and would make all my formulas harder to read.
This must be a solved problem -- could people tell me how they deal with this? Especially using the plyr/dplyr/tidyr package framework?
AI: You need neither the row names nor an additional ID column. Here is an approach based on the indices of the training set.
An example data set:
set.seed(1)
dat <- data.frame(Y = rnorm(10),
X1 = rnorm(10),
X2 = rnorm(10),
Z1 = rnorm(10),
Z2 = rnorm(10))
Now, your steps:
Create an analysis data set without the Z variables
dat2 <- dat[grep("Z", names(dat), invert = TRUE)]
dat2
# Y X1 X2
# 1 -0.6264538 1.51178117 0.91897737
# 2 0.1836433 0.38984324 0.78213630
# 3 -0.8356286 -0.62124058 0.07456498
# 4 1.5952808 -2.21469989 -1.98935170
# 5 0.3295078 1.12493092 0.61982575
# 6 -0.8204684 -0.04493361 -0.05612874
# 7 0.4874291 -0.01619026 -0.15579551
# 8 0.7383247 0.94383621 -1.47075238
# 9 0.5757814 0.82122120 -0.47815006
# 10 -0.3053884 0.59390132 0.41794156
Break this data set into random training and test sets
train_idx <- sample(nrow(dat2), 0.8 * nrow(dat2))
train_idx
# [1] 7 4 3 10 9 2 1 5
train <- dat2[train_idx, ]
train
# Y X1 X2
# 7 0.4874291 -0.01619026 -0.15579551
# 4 1.5952808 -2.21469989 -1.98935170
# 3 -0.8356286 -0.62124058 0.07456498
# 10 -0.3053884 0.59390132 0.41794156
# 9 0.5757814 0.82122120 -0.47815006
# 2 0.1836433 0.38984324 0.78213630
# 1 -0.6264538 1.51178117 0.91897737
# 5 0.3295078 1.12493092 0.61982575
test_idx <- setdiff(seq(nrow(dat2)), train_idx)
test_idx
# [1] 6 8
test <- dat2[test_idx, ]
test
# Y X1 X2
# 6 -0.8204684 -0.04493361 -0.05612874
# 8 0.7383247 0.94383621 -1.47075238
Find my best model
...
Predict on both the training and test sets using this model
...
Recombine the training and test sets by rows
idx <- order(c(train_idx, test_idx))
dat3 <- rbind(train, test)[idx, ]
identical(dat3, dat2)
# [1] TRUE
Recombine these data with the Z variables, by column
dat4 <- cbind(dat3, dat[grep("Z", names(dat))])
identical(dat, dat4)
# [1] TRUE
In summary, we can use the indices of the training and test data to combine the data in the rows in the original order. |
H: Does reinforcement learning only work on grid world?
Does reinforcement learning always need a grid world problem to be applied to?
Can anyone give me any other example of how reinforcement learning can be applied to something which does not have a grid world scenario?
AI: The short answer is no! Reinforcement Learning is not limited to discrete spaces. But most of the introductory literature does deal with discrete spaces.
As you might know by now, there are three important components in any Reinforcement Learning problem: rewards, states and actions. The first is a scalar quantity, and theoretically the latter two can be either discrete or continuous. The convergence proofs and analyses of the various algorithms are easier to understand for the discrete case, and the corresponding algorithms are also easier to code. That is one of the reasons most introductory material focuses on them.
Having said that, it is interesting to note that the early research on Reinforcement Learning actually focused on continuous state representations. It was only in the 90s that the literature started presenting all the standard algorithms for discrete spaces, as we had a lot of proofs for them.
Finally, if you noticed carefully, I said continuous states only. Mapping continuous states and continuous actions is hard. Nevertheless, we do have some solutions for now. But it is an active area of Research in RL.
This paper by Sutton from '98 should be a good start for your exploration! |
H: Do you have to normalize data when building decision trees using R?
So, our data set this week has 14 attributes and each column has very different values. One column has values below 1 while another column has values that go from three to four whole digits.
We learned normalization last week and it seems like you're supposed to normalize data when they have very different values. For decision trees, is the case the same?
I'm not sure about this but would normalization affect the resulting decision tree from the same data set? It doesn't seem like it should but...
AI: The most common types of decision trees you encounter are not affected by any monotonic transformation. So, as long as you preserve order, the decision trees are the same (obviously, by the same tree I mean the same decision structure, not the same values for each test in each node of the tree).
The reason is how the usual impurity functions work. To find the best split, the algorithm searches each dimension (attribute) for a split point, which is basically an if clause that groups on one side the target values of instances whose test value is less than the split value, and on the other side those greater than or equal to it. This happens for numerical attributes (which I think is your case, because I do not know how to normalize a nominal attribute). Note that the criterion is only 'less than' versus 'greater than or equal', which means the only information from the attributes used to find the split (and the whole tree) is the order of the values. So as long as you transform your attributes in such a way that the original ordering is preserved, you will get the same tree.
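A quick scikit-learn sketch of this invariance (synthetic data; any strictly increasing transform would do):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.uniform(1, 1000, size=(200, 3))      # features on very different scales
y = (X[:, 0] > 500).astype(int)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_log = DecisionTreeClassifier(random_state=0).fit(np.log(X), y)   # monotonic transform

# The decision structure is the same, so predictions agree (expected: True).
print(np.array_equal(tree_raw.predict(X), tree_log.predict(np.log(X))))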
Not all models are insensitive to this kind of transformation. For example, linear regression models give the same results if you multiply an attribute by something different from zero: you will get different regression coefficients, but the predicted values will be the same. This is not the case when you take the log of an attribute. So for linear regression, for example, normalizing is useless since it will provide the same result.
However, this is not the case with penalized linear regression, like ridge regression. In penalized linear regressions a constraint is applied to the coefficients; the idea is that the constraint is applied to the sum of a function of the coefficients. Now if you inflate an attribute, its coefficient will be deflated, which means that in the end the penalization for that coefficient is artificially modified. In such situations, you normalize attributes so that each coefficient is constrained 'fairly'.
Hope it helps |
H: R in production
Many of us are very familiar with using R in reproducible, but very much targeted, ad-hoc analysis. Given that R is currently the best collection of cutting-edge scientific methods from world-class experts in each particular field, and given that plenty of libraries exist for data io in R, it seems very natural to extend its applications into production environments for live decision making.
Therefore my questions are:
did someone of you go into production with pure R (I know of shiny, yhat etc, but would be very interesting to hear of pure R);
is there a good book/guide/article on the topic of building R into some serious live decision-making pipelines (such as e.g. credit scoring);
I would like to hear also if you think it's not a good idea at all;
AI: Speed of code execution is rarely an issue. The important speed in business is almost always the speed of designing, deploying, and maintaining the application. An experienced programmer can optimize where necessary to get code execution fast enough. In these cases, R can make a lot of sense in production.
In cases where speed of execution IS an issue, you are already going to find an optimized C++ or some such real-time decision engine. So your choices are integrate an R process, or add the bits you need to the engine. The latter is probably the only option, not because of the speed of R, but because you don't have the time to incorporate any external process. If the company has nothing to start with, I can't imagine everyone saying "let's build our time critical real-time engine in R because of the great statistical libraries".
I'll give a few examples from my corporate experiences, where I use R in production:
Delivering Shiny applications dealing with data that is not/ not yet institutionalized. I will generally load already-processed data frames and use Shiny to display different graphs and charts. Computation is minimal.
Decision making analysis that requires heavy use of advanced libraries (mcclust, machine learning) but done on a daily or longer time-scale. In this case there is no reason to use any other language. I've already done the prototyping in R, so my fastest and best option is to keep things there.
I did not use R for production when integrating with a real-time C++ decision engine. Issues:
An additional layer of complication to spawn R processes and integrate the results
A suitable machine-learning library (Waffles) was available in C++
The caveat in the latter case: I still use R to generate the training files. |
H: Interested in Mathematical Statistics... where to start from?
I have been working in the last years with statistics and have gone pretty deep in programming with R. I have however always felt that I wasn't completely grasping what I was doing, still understanding all passages and procedures conceptually.
I wanted to get a bit deeper into the math behind it all. I've been looking online for texts and tips, but all texts start with a very high level. Any suggestions on where to start?
To be more precise, I'm not looking for an exhaustive list of statistical models and how they work; I kind of get those. I was looking for something like "Basics of statistical modelling".
AI: When looking for texts to learn advanced topics, I start with a web search for relevant grad courses and textbooks, or background tech/math books like those from Dover.
To wit, Theoretical Statistics by Keener looks relevant:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7
And this:
"Looking for a good Mathematical Statistics self-study book (I'm a physics student and my class & current book are useless to me)"
http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/ |
H: How are neural nets related to Fourier transforms?
This is an interview question
How are neural nets related to Fourier transforms?
I could find papers that talk about methods to process the Discrete Fourier Transform (DFT) by a single-layer neural network with a linear transfer function. Is there some other correlation that I'm missing?
AI: They are not related in any meaningful sense. Sure, you can use them both to extract features, or do any number of things, but the same can be said about many techniques. I would have asked "what kind of neural network?" to see if the interviewer had something specific in mind. |
H: How can the performance of a neural network vary considerably without changing any parameters?
I am training a neural network with 1 sigmoid hidden layer and a linear output layer. The network simply approximates a cosine function. The weights are initialized according to Nguyen-Widrow initialization and the biases are initialized to 1. I am using MATLAB as a platform.
Running the network a number of times without changing any parameters, I am getting results (mean squared error) which range from 0.5 to 0.5*10^-6. I cannot understand how the results can even vary that much, I'd imagine there would at least be a narrower and more consistent window of errors.
What could be causing such a large variance?
AI: In general, there is no guarantee that ANNs such as a multi-layer Perceptron network will converge to the global minimum squared error (MSE) solution. The final state of the network can be heavily dependent on how the network weights are initialized. Since most initialization schemes (including Nguyen-Widrow) use random numbers to generate the initial weights, it is quite possible that some initial states will result in convergence on local minima, whereas others will converge on the MSE solution. |
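For intuition, the effect is easy to reproduce in any toolkit. Here is a minimal sketch (using Python and scikit-learn rather than the MATLAB toolbox, purely as an illustration) showing how nothing but the random initialization changes the final error when fitting a cosine:

import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.cos(X).ravel()

for seed in range(5):
    # identical architecture and data; only the random initialization differs
    net = MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',
                       solver='lbfgs', max_iter=2000, random_state=seed)
    net.fit(X, y)
    mse = np.mean((net.predict(X) - y) ** 2)
    print("seed {}: MSE = {:.6f}".format(seed, mse))

Fixing the random seed, or running several restarts and keeping the best network, is the usual way to make such experiments repeatable.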
H: Implementation of Association Rules in Javascript
I have implemented an interactive visualization with d3.js, javascript to explore the frequency and various combinations of co-occurring item sets. I want to complement the interactive exploration with some automated options.
Does anyone know of an efficient JavaScript implementation of association rule mining?
My typical scenario will have just up to 30 different items.
There are some good websites with implementations of frequent item set mining (improvements over the initial apriori algorithm): http://www.borgelt.net/apriori.html
Any help is greatly appreciated.
AI: I don't know how efficient they are, but I did find some implementations:
https://github.com/dmarges/apriori
https://github.com/seratch/apriori.js |
H: Need help with python code as part of a data analysis project
I am new to python programming. As part of a data analysis project, I am trying to get a scatter plot of Salary vs Wins for each year of 4 consecutive years (so I am trying to get 4 different scatter plots, one for each year). I am using the following code:
import numpy as np
import matplotlib.pyplot as plt

teamName = 'OAK'
years = np.arange(2000, 2004)
for yr in years:
    df = joined[joined['yearID'] == yr]   # 'joined' is built earlier in the analysis
    plt.scatter(df['salary'] / 1e6, df['W'])
    plt.title('Wins vs Salaries in year ' + str(yr))
    plt.xlabel('Total Salary (in millions)')
    plt.ylabel('Wins')
    plt.xlim(0, 180)
    plt.ylim(30, 130)
    plt.grid()
plt.show()
However, I am only getting one plot corresponding to 2003.
Can anyone point out the mistake ?
Thanks
AI: All of your plots are appearing on top of each other. You need to invoke plt.subplot(xxx) before you create each plot. For info on how the xxx index works, see the matplotlib documentation for plt.subplot (its numbering follows MATLAB's subplot convention).
You might end up with multiple figures - see this page for info about that. |
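To make that concrete, here is a minimal sketch (assuming joined is the merged dataframe from the question) that gives each year its own panel; alternatively, calling plt.figure() at the top of the loop would produce four separate figures:

import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2004)
fig = plt.figure(figsize=(12, 8))
for i, yr in enumerate(years):
    ax = fig.add_subplot(2, 2, i + 1)       # 2x2 grid, one panel per year
    df = joined[joined['yearID'] == yr]     # 'joined' is the dataframe from the question
    ax.scatter(df['salary'] / 1e6, df['W'], color='k', alpha=0.3)
    ax.set_title('Wins vs Salaries in year ' + str(yr))
    ax.set_xlabel('Total Salary (in millions)')
    ax.set_ylabel('Wins')
    ax.set_xlim(0, 180)
    ax.set_ylim(30, 130)
    ax.grid(True)
plt.tight_layout()
plt.show()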
H: Does pruning a decision tree always make it more general?
If I prune a decision tree, does that make the resulting decision tree always more general than the original decision tree?
Are there examples where this is not the case?
AI: If you filter something out by choosing one branch over another branch in the tree, the observations you did not choose are forever lost.
But to directly answer your question - no, it does not always make it more general. If you construct a tree in which the decisions on both sides of a split are exactly the same, then pruning that split does not make the tree any more general.
H: Measuring similarity for sets with same cardinality
The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets.
I had 100 sets, all of the same cardinality. By mistake I calculated the similarity measures as the ratio of the intersection to the total number of elements in a set (i.e. 100).
This gives different similarity values than the original Jaccard formula.
I was wondering if the original formula considers the union of two sets in order to handle cases where the sets have different cardinalities.
I think that, although my values are numerically different, they represent the same idea.
Could anybody verify or refute what I am trying to do?
AI: Yes, the Jaccard similarity score is normalized by the union to deal with sets of different cardinality. Without this normalization (if you used just the intersection), very small sets would always have very low scores.
When the cardinalities of all your sets are the same, the union of any two sets will be a straightforward function of the intersection (this is easy to visualize - as the two sets intersect more and more, their unions get smaller and smaller). The formula is:
union = 2 * cardinality - intersection
So the Jaccard score in your case would be:
intersection / (200 - intersection)
If you plot this, you'll see it is a monotonic function of the measure you computed.
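A quick numeric check (a toy sketch, taking the common cardinality to be 100) shows that the two measures order any pair of sets the same way:

cardinality = 100
for intersection in (0, 10, 50, 90, 100):
    ratio_to_size = intersection / float(cardinality)                  # what you computed
    jaccard = intersection / float(2 * cardinality - intersection)     # intersection / union
    print("{:3d}  {:.3f}  {:.3f}".format(intersection, ratio_to_size, jaccard))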
H: Time series change rate calculations for displaying trend line chart
I'm struggling to find a solution to produce a line chart which shows a trend from a Time Series metric.
I try to make my line chart look similar to the following:
Basically, I want to make use of relative change rates, but I struggle to find a calculation which makes a visualization as seen above possible. The graph should always be relative to a specific time window, meaning that I can query my data dynamically with a start and end timestamp.
My (incomplete) sample data set is the following:
timestamp,value,change_rate,change_rate_absolute_sum,change_rate_delta,change_rate_sum
1426587900000,778,0.00129,0.00129,0.00129,0.00129
1426588200000,778,0,0.00129,-0.00129,0
1426588500000,1189,0.52828,0.52957,0.52828,0.52828
1426588800000,1195,0.00505,0.53462,-0.52323,0.00505
1426589100000,1195,0,0.53462,-0.00505,0
1426589400000,1196,0.00084,0.53546,0.00084,0.00084
1426589700000,1286,0.07525,0.61071,0.07441,0.07525
1426590000000,1290,0.00311,0.61382,-0.07214,0.00311
1426590300000,1294,0.0031,0.61692,-0.00001,0.0031
1426590600000,1296,0.00155,0.61847,-0.00155,0.00155
1426590900000,1356,0.0463,0.66477,0.04475,0.0463
1426591200000,1358,0.00147,0.66624,-0.04483,0.00147
1426591500000,1358,0,0.66624,-0.00147,0
1426591800000,1360,0.00147,0.66771,0.00147,0.00147
1426592100000,1408,0.03529,0.703,0.03382,0.03529
1426592400000,1390,-0.01278,0.69022,-0.04807,-0.01278
1426592700000,1391,0.00072,0.69094,0.0135,0.00072
1426593000000,1410,0.01366,0.7046,0.01294,0.01366
1426593300000,1414,0.00284,0.70744,-0.01082,0.00284
1426593600000,1410,-0.00283,0.70461,-0.00567,-0.00283
1426593900000,1414,0.00284,0.70745,0.00567,0.00284
1426594200000,1420,0.00424,0.71169,0.0014,0.00424
1426594500000,1417,-0.00211,0.70958,-0.00635,-0.00211
Any ideas warmly appreciated. Thanks a lot in advance!
AI: While it is not very clear to me which specific relationships in your data you want to display, I can give some general advice. I think that this time series visualization calls for a so-called diverging color scheme (as opposed to sequential, categorical and other ones). ColorBrewer is a nice online tool for selecting an appropriate color scheme and other parameters (note that ColorBrewer's color schemes have built-in support in major data analysis and visualization software, such as R, Python, d3.js, plot.ly and others).
If you were to use the R environment, it would be quite easy to produce the time series trend line chart that you want by using the ggplot2 package and its scale_color_gradient2() function:
... + scale_color_gradient2(midpoint=midValue, low="blue", mid="white", high="orange" )
A quick glance at your profile suggested to me that you would prefer a JavaScript solution. In that case, I would advise using the d3.js library or one of the multiple other JavaScript visualization libraries or packages. For more details, check this relevant answer of mine.
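If you end up prototyping in Python instead, the same diverging-color idea can be sketched with matplotlib (assuming a dataframe df with the timestamp, value and change_rate columns from your sample):

import pandas as pd
import matplotlib.pyplot as plt

# df has columns: timestamp (epoch ms), value, change_rate
df['time'] = pd.to_datetime(df['timestamp'], unit='ms')
limit = df['change_rate'].abs().max()

plt.plot(df['time'], df['value'], color='lightgrey', zorder=1)        # the trend line itself
plt.scatter(df['time'], df['value'], c=df['change_rate'],
            cmap='RdBu_r', vmin=-limit, vmax=limit, zorder=2)         # diverging color by change rate
plt.colorbar(label='change rate')
plt.show()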
H: Data Science in C (or C++)
I'm an R language programmer. I'm also in the group of people who are considered Data Scientists but who come from academic disciplines other than CS.
This works out well in my role as a Data Scientist, however, by starting my career in R and only having basic knowledge of other scripting/web languages, I've felt somewhat inadequate in 2 key areas:
Lack of a solid knowledge of programming theory.
Lack of a competitive level of skill in faster and more widely used languages like C, C++ and Java, which could be utilized to increase the speed of the pipeline and Big Data computations as well as to create DS/data products which can be more readily developed into fast back-end scripts or standalone applications.
The solution is simple of course -- go learn about programming, which is what I've been doing by enrolling in some classes (currently C programming).
However, now that I'm starting to address problems #1 and #2 above, I'm left asking myself "Just how viable are languages like C and C++ for Data Science?".
For instance, I can move data around very quickly and interact with users just fine, but what about advanced regression, Machine Learning, text mining and other more advanced statistical operations?
So, can C do the job -- what tools are available for advanced statistics, ML, AI, and other areas of Data Science? Or must I lose most of the efficiency gained by programming in C by calling on R scripts or other languages?
The best resource I've found thus far in C is a library called Shark, which gives C/C++ the ability to use Support Vector Machines, linear regression (but not non-linear or other advanced regression such as multinomial probit, etc.) and a shortlist of other (great but) statistical functions.
AI: Or must I lose most of the efficiency gained by programming in C by calling on R scripts or other languages?
Do the opposite: learn C/C++ to write R extensions. Use C/C++ only for the performance-critical sections of your new algorithms, use R to build your analysis, import data, make plots, etc.
If you want to go beyond R, I'd recommend learning Python. There are many libraries available such as scikit-learn for machine learning algorithms or PyBrain for building Neural Networks etc. (and use pylab/matplotlib for plotting and iPython notebooks to develop your analyses). Again, C/C++ is useful to implement time critical algorithms as Python extensions. |
H: Best way to format data for supervised machine learning ranking predictions
I'm fairly new to machine learning, but I'm doing my best to learn as much as possible.
I am curious about predicting athlete performance (runners in particular) for a race with a specific starting lineup. For instance, if RunnerA, RunnerB, RunnerC, and RunnerD are all racing a 400 meter race, I want to predict whether RunnerA will beat RunnerB based on past race result information (which I have at my disposal). However, I have many cases where RunnerA has never raced against RunnerB; yet I do have data showing RunnerA has beaten RunnerC in the past, and RunnerC has beaten RunnerB in the past. This logic extends deeper as well. So, it would seem that RunnerA should beat RunnerB, given this information. My real concern is when it gets more complicated than this as I add more features (multiple runners, different distances, etc.), so I'm turning to ML algorithms to help my predictions.
However, I am having difficulty figuring out how to include this in my row data that I can train (after all, correctly formatting data is 99% of proper machine learning), and I am hoping that someone here might have thought along the same lines in the past and might be able to shed some light.
Example:
I am currently trying to include RunnerX-RunnerY past race data by counting all the races that RunnerX and RunnerY have run together and normalizing them on a scale from -1 to 1: -1 indicating RunnerX lost all past races against RunnerY; +1 indicating that RunnerX has won all past races against RunnerY; and 0 indicating an equal number of wins and losses (or no past races against each other).
For instance, if RunnerA is racing RunnerB, and RunnerA has beat RunnerB in the past, then I want the algorithm to know that (denoted by a +1 on the RunnerB column of row RunnerA); same for vice versa. Taking it another step further, If RunnerA is racing RunnerC (but the two have never raced each other in the past), and RunnerA has beat RunnerD in a past race, and RunnerD has beat RunnerC in a past race, then I want the algorithm to learn that RunnerA should beat RunnerC. I say beat here, but I mean an "average beat" for any RunnerX-RunnerY combinations when data for more than 1 past race is available.
I have set my data up as:
name track surface distance age RunnerA RunnerB RunnerC RunnerD
RunnerA Home 2 400 11 0 1 0 1
RunnerC Away 2 400 12 0 0 0 -1
RunnerD Home 2 400 10 0 0 1 0
which shows that RunnerA has beat RunnerB and RunnerD in the past. RunnerC has lost to RunnerD. And RunnerD has beat RunnerC.
The problem:
The problem is that I don't really think this is a correct display of the information for an ML algorithm.
From what I understand, ML data should be row independent. And this data isn't because row 1 (RunnerA) has beat RunnerD, yet the data indicating RunnerD has beat RunnerC is in row 3.
Does anyone have any ideas how I might be able to incorporate this past win-percentage-for-runner-pair-combination data??? I'm totally stuck here. I've read a lot about some algorithms that estimate the win loss by simply totaling win statistics, but those don't say anything about the actual probability of a particular runner to beat another particular runner.
Any pointers would be super helpful.
Thanks!!!
AI: This problem has a lot in common with the problem of ranking college football teams. I have never worked on this ranking problem, but I believe you can borrow some of the tools used there to build your model.
Here goes a couple of references:
Colley Matrix Rankings - This was one of computer rankings used by the BCS. It is also the only one that shared their methodology.
An example of the Colley Matrix Rankings - An easy example to follow.
The Perron-Frobenius Theorem and the Ranking of Football Teams - A well known reference that presents some ranking methods. This paper also shows how to assess the probability of winning a game based on the rankings of the teams. |
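To make the Colley approach concrete, here is a minimal sketch in Python (the head-to-head results are made up for illustration): every race is treated like a "game", and solving the linear system gives one rating per runner, which you could then feed into your feature set.

import numpy as np

# Hypothetical head-to-head results: (winner, loser)
results = [("A", "B"), ("A", "D"), ("D", "C"), ("B", "C"), ("A", "C")]
runners = sorted({r for pair in results for r in pair})
idx = {name: i for i, name in enumerate(runners)}
n = len(runners)

C = 2.0 * np.eye(n)          # Colley matrix starts as 2*I
b = np.ones(n)               # right-hand side starts at 1
for winner, loser in results:
    w, l = idx[winner], idx[loser]
    C[w, w] += 1             # each race adds 1 to both runners' diagonals
    C[l, l] += 1
    C[w, l] -= 1             # ... and -1 to the off-diagonal pairing
    C[l, w] -= 1
    b[w] += 0.5              # b_i = 1 + (wins_i - losses_i) / 2
    b[l] -= 0.5

ratings = np.linalg.solve(C, b)
for name, r in sorted(zip(runners, ratings), key=lambda x: -x[1]):
    print("{}: {:.3f}".format(name, r))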
H: Which book is the best for introduction to analysis social network using python3
I am a beginner studying social network analysis.
I installed python 3 just 2 weeks ago.
There are a lot of books for python and social network analysis.
I couldn't choose one of them.
I found one book named "Mining the Social Web (Analyzing Data from Facebook, Twitter, LinkedIn, and Other Social Media Sites)" written by Matthew A. Russell.
This book looks very interesting and fits in my purpose, but it is based on python 2.
Are there any good books using Python 3? I usually use Twitter, Facebook, or blog data.
In addition, could you recommend any good book for nodeXL and UCINET?
AI: Russell's book is fine. You might also like Social Network Analysis for Startups. All the examples are in python. You can do all your analysis in that using packages like networkx. NodeXL is for the Excel crowd. Definitely not the ideal tool for the job; I would shy away from it.
The obvious book for NodeXL is Analyzing Social Media Networks with NodeXL, which is written by the authors of NodeXL. |
H: How does SQL Server Analysis Services compare to R?
This may be too broad of a question with heavy opinions, but I really am finding it hard to find information about running various algorithms using SQL Server Analysis Services Data Mining projects versus using R. This is mainly because all the data science guys I work with don't have any idea about SSAS because no one seems to use it. :)
The Database Guy
Before I start, let me clarify. I am a database guy and not a data scientist. I work with people who are data scientists and who mainly use R. I assist these guys with creating large data sets that they can analyze and crunch.
My objective here is to leverage a tool that came with SQL Server that no one is really leveraging because no one seems to have a clue about how it works in comparison to other methods and tools such as R, SAS, SSPS and so forth in my camp.
SSAS
I have never really used SQL Server Analysis Services (SSAS) outside of creating OLAP cubes. Those who know SSAS, you can also perform data mining tasks on cubes or directly on the data in SQL Server.
SSAS Data Mining comes with a range of algorithm types:
Classification algorithms predict one or more discrete variables, based on the other attributes in the dataset.
Regression algorithms predict one or more continuous variables, such as profit or loss, based on other attributes in the dataset.
Segmentation algorithms divide data into groups, or clusters, of items that have similar properties.
Association algorithms find correlations between different attributes in a dataset. The most common application of this kind of algorithm is for creating association rules, which can be used in a market basket analysis.
Sequence analysis algorithms summarize frequent sequences or episodes in data, such as a Web path flow.
Predicting Discrete Columns
With these different algorithm options, I can start making general predictions from the data such as finding out simply who is going to buy a bike based on a predictable column, Bike Buyers, against an input column, Age. The histogram shows that the age of a person helps distinguish whether that person will purchase a bicycle.
Predicting Continuous Columns
When the Microsoft Decision Trees algorithm builds a tree based on a continuous predictable column, each node contains a regression formula. A split occurs at a point of non-linearity in the regression formula. For example, consider the following diagram.
Comparison
With some of that said, it seems I can run a range of algorithms on the data and also have various functions available to me in SSAS to run against the data. It also seems I can develop my own algorithms in Visual Studio and deploy them to SSAS (if I'm not mistaken).
So, what am I missing here with regard to languages and tools such as R? Is it just that they offer more flexibility to deploy and edit complex algorithms compared to SSAS, etc.?
AI: In my opinion, it seems that SSAS makes more sense for someone who:
has significantly invested in Microsoft's technology stack and platform;
prefers a point-and-click interface (GUI) to the command line;
focuses on data warehousing (OLAP cubes, etc.);
has limited needs in terms of statistical methods and algorithms variety;
has limited needs in cross-language integration;
doesn't care much about openness, cross-platform integration and vendor lock-in.
You can find useful this blog post by Sami Badawi. However, note that the post is not recent, so some information might be outdated. Plus, the post contains an initial review, which might be not very accurate or comprehensive. If you're thinking about data science, while considering staying within Microsoft ecosystem, I suggest you to take a look at Microsoft's own machine learning platform Azure ML. This blog post presents a brief comparison of (early) Azure ML and SSAS. |
H: How to create and format an image dataset from scratch for machine learning?
I've only worked with ML on .csv formats. I've worked with image formats too, but only pre-made image sets (MNIST, etc.). If I were to create an image set from scratch, how are the class labels typically formatted? Would I have to manually title each jpeg image?
Best,
Jeremy
AI: Looking at existing challenges around and their data format (for example http://www.kaggle.com/c/datasciencebowl/data) I would say put the images in a folder per class. You can use the file names for the index. |
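As a minimal sketch of that layout (the folder names here are hypothetical), you can derive the class labels from the directory structure and write them to an index file, so the labels never have to be encoded in the jpeg file names themselves:

import os
import csv

# Hypothetical layout:  dataset/cats/img001.jpg, dataset/dogs/img042.jpg, ...
root = "dataset"
rows = []
for label in sorted(os.listdir(root)):
    class_dir = os.path.join(root, label)
    if not os.path.isdir(class_dir):
        continue
    for fname in sorted(os.listdir(class_dir)):
        rows.append((os.path.join(class_dir, fname), label))  # (filepath, class label)

# Write an index so downstream code can read (filepath, label) pairs directly
with open("labels.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["filepath", "label"])
    writer.writerows(rows)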
H: In a SVD with user/video bias, why is the UV contribute so small?
I'm testing a SVD-based collaborative filter on my data set, in which the label, $r_{ij}$, is a real value from 0 to 1.
As many papers suggest, to get better performance, instead of using $ \hat{R} = U \cdot V^T $ directly, I use $\hat{R} = \mu + B_u + B_v + U \cdot V^T $, where $\mu$ is the average rating, $B_u$ is the bias of the user, and $B_v$ is the bias of the item.
Thus, this model corresponds to a loss function:
$\min_{B_u, B_v, U, V} \; ||I\circ(R-\mu-B_u-B_v-U\cdot V^T)||_F^2 + \lambda \left(||B_u||_F^2 + ||B_v||_F^2 + ||U||_F^2 + ||V||_F^2\right)$
where I is the masking matrix in which $I_{ij} = 1$ if $R_{ij}$ is known, and $||\cdot||_F$ is the frobenius norm.
Then, I solve this by gradient descent, it seems to work fine, and the test RMSE is 0.25.
However, when I investigate the contribution of each part of the prediction function $\hat{R} = \mu + B_u + B_v + U \cdot V^T $, I notice that $\mu$ is about 0.5, $B_u$ and $B_v$ are about $\pm0.3$, but the $ U \cdot V^T $ part is quite small, normally about $\pm 0.01$.
Why does this part contribute so small? Since this part is the actual part where the collaborative filter works, I expect it to contribute more in prediction.
AI: It means that the global average, together with each user's average across all items and each item's average across all users, already predicts the ratings accurately -- you have an easy data set. The interaction term $U \cdot V^T$ only has to capture whatever structure is left after those averages are removed.
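For reference, here is a minimal SGD sketch of the biased model from the question (the learning rate and regularization strength are placeholders):

import numpy as np

def fit_biased_mf(R, mask, k=10, lr=0.01, lam=0.1, epochs=50):
    """R: ratings matrix; mask: 1 where R is observed. Returns mu, b_u, b_v, U, V."""
    n_users, n_items = R.shape
    mu = R[mask == 1].mean()
    b_u = np.zeros(n_users)
    b_v = np.zeros(n_items)
    U = 0.01 * np.random.randn(n_users, k)
    V = 0.01 * np.random.randn(n_items, k)
    rows, cols = np.where(mask == 1)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            pred = mu + b_u[i] + b_v[j] + U[i].dot(V[j])
            e = R[i, j] - pred
            # gradient steps with L2 regularization
            b_u[i] += lr * (e - lam * b_u[i])
            b_v[j] += lr * (e - lam * b_v[j])
            U_i_old = U[i].copy()
            U[i] += lr * (e * V[j] - lam * U[i])
            V[j] += lr * (e * U_i_old - lam * V[j])
    return mu, b_u, b_v, U, V

Inspecting mu, b_u[i], b_v[j] and U[i].dot(V[j]) for individual (i, j) pairs then gives exactly the per-term contribution breakdown described in the question.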
H: How to generate synthetic dataset using machine learning model learnt with original dataset?
Generally, a machine learning model is built on datasets. I'd like to know if there is any way to generate a synthetic dataset using such a trained machine learning model while preserving the original dataset's characteristics.
[original data --> build machine learning model --> use ml model to generate synthetic data....!!!]
Is it possible? Please point me to related resource if possible.
AI: The general approach is to do traditional statistical analysis on your data set to define a multidimensional random process that will generate data with the same statistical characteristics. The virtue of this approach is that your synthetic data is independent of your ML model, but statistically "close" to your data. (see below for discussion of your alternative)
In essence, you are estimating the multivariate probability distribution associated with the process. Once you have estimated the distribution, you can generate synthetic data through the Monte Carlo method or similar repeated sampling methods. If your data resembles some parametric distribution (e.g. lognormal) then this approach is straightforward and reliable. The tricky part is to estimate the dependence between variables. See: https://www.encyclopediaofmath.org/index.php/Multi-dimensional_statistical_analysis.
If your data is irregular, then non-parametric methods are easier and probably more robust. Multivariate kernel density estimation is a method that is accessible and appealing to people with an ML background. For a general introduction and links to specific methods, see: https://en.wikipedia.org/wiki/Nonparametric_statistics .
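As a minimal sketch of the non-parametric route (using SciPy's Gaussian KDE on a made-up two-dimensional dataset):

import numpy as np
from scipy.stats import gaussian_kde

# original_data: shape (n_samples, n_features); a made-up correlated dataset here
original_data = np.random.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)

# gaussian_kde expects variables in rows, observations in columns
kde = gaussian_kde(original_data.T)

# Draw synthetic observations from the estimated joint density
synthetic = kde.resample(size=500).T

# The synthetic sample should reproduce the means and correlations of the original
print(original_data.mean(axis=0), synthetic.mean(axis=0))
print(np.corrcoef(original_data.T), np.corrcoef(synthetic.T))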
To validate that this process worked for you, you go through the machine learning process again with the synthesized data, and you should end up with a model that is fairly close to your original. Likewise, if you put the synthesized data into your ML model, you should get outputs that have similar distribution as your original outputs.
In contrast, you are proposing this:
[original data --> build machine learning model --> use ml model to generate synthetic data....!!!]
This accomplishes something different from the method I just described. This would solve the inverse problem: "what inputs could generate any given set of model outputs". Unless your ML model is over-fitted to your original data, this synthesized data will not look like your original data in every respect, or even most.
Consider a linear regression model. The same linear regression model can have identical fit to data that have very different characteristics. A famous demonstration of this is through Anscombe's quartet.
Thought I don't have references, I believe this problem can also arise in logistic regression, generalized linear models, SVM, and K-means clustering.
There are some ML model types (e.g. decision tree) where it's possible to inverse them to generate synthetic data, though it takes some work. See: Generating Synthetic Data to Match Data Mining Patterns. |
H: how to modify sparse survey dataset with empty data points?
I am working on a data set where the categorical variables have lots of empty values (not "NA" but ""). For example, one variable has 14587 empty values out of 14644 observations. There are many such variables where most of the observations are empty. In fact, it is a survey dataset where the participants simply chose to ignore particular questions.
I have never handled a similar dataset. I am looking for advice on how best to handle such datasets before any modeling is done. Deleting the rows or the variables with lots of empty values doesn't seem feasible.
Thanks a lot.
AI: I would consider approaching this situation from the following two perspectives:
Missing data analysis. Despite formally the values in question are empty and not NA, I think that effectively incomplete data can (and should) be considered as missing. If that is the case, you need to automatically recode those values and then apply standard missing data handling approaches, such as multiple imputation. If you use R, you can use packages Amelia (if the data is multivariate normal), mice (supports non-normal data) or some others. For a nice overview of approaches, methods and software for multiple imputation of data with missing values, see the 2007 excellent article by Nicholas Horton and Ken Kleinman "Much ado about nothing: A comparison of missing data methods and software to fit incomplete data regression models".
Sparse data analysis, such as sparse regression. I'm not too sure how well this approach would work for variables with high levels of sparsity, but you can find a lot of corresponding information in my relevant answer. |
H: What is the term for when a model acts on the thing being modeled and thus changes the concept?
I'm trying to see if there is a conventional term for this concept to help me in my literature research and writing. When a machine learning model causes an action to be taken in the real world that affects future instances, what is that called?
I'm thinking about something like a recommender system that recommends one given product and doesn't recommend another given product. Then, you've increased the likelihood that someone is going to buy the first product and decreased the likelihood that someone is going to buy the second product. So then those sales numbers will eventually become training instances, creating a sort of feedback loop.
Is there a term for this?
AI: There are three terms from social science that apply to your situation:
Reflexivity - refers to circular relationships between cause and effect. In particular, you could use the definition of the term adopted by George Soros to refer to reverse causal loop between share prices (i.e. present value of fundamentals) and business fundamentals. In a way, the share price is a "model" of the fundamental business processes. Usually, people assume that causality is one-way, from fundamentals to share price.
Performativity - As used by Donald MacKenzie (e.g. here), many economic models are not "cameras" -- taking pictures of economic reality -- but in fact are "engines" -- an integral part of the construction of economic reality. He has a book of that title: An Engine, Not a Camera.
Self-fulfilling Prophecy - a prediction that directly or indirectly causes itself to become true, by the very terms of the prophecy itself, due to positive feedback between belief and behavior. This is the broadest term, and least specific to the situation you describe.
Of the three terms, I suggest that MacKenzie's "performativity" is the best fit to your situation. He claims, among other things, that the validity of the economic models (e.g. Black-Scholes option pricing) has been improved by its very use by market participants, and therefore how it reflects in options pricing and trading patterns. |
H: Do data scientists use Excel?
I would consider myself a journeyman data scientist. Like most (I think), I made my first charts and did my first aggregations in high school and college, using Excel. As I went through college, grad school and ~7 years of work experience, I quickly picked up what I consider to be more advanced tools, like SQL, R, Python, Hadoop, LaTeX, etc.
We are interviewing for a data scientist position and one candidate advertises himself as a "senior data scientist" (a very buzzy term these days) with 15+ years experience. When asked what his preferred toolset was, he responded that it was Excel.
I took this as evidence that he was not as experienced as his resume would claim, but wasn't sure. After all, just because it's not my preferred tool, doesn't mean it's not other people's. Do experienced data scientists use Excel? Can you assume a lack of experience from someone who does primarily use Excel?
AI: Most non-technical people often use Excel as a database replacement. I think that's wrong but tolerable. However, someone who is supposedly experienced in data analysis simply cannot use Excel as his main tool (excluding the obvious task of looking at the data for the first time). That's because Excel was never intended for that kind of analysis and, as a consequence, it is incredibly easy to make mistakes in Excel (that's not to say that it is not incredibly easy to make other types of mistakes when using other tools, but Excel aggravates the situation even more).
To summarize what Excel doesn't have and is a must for any analysis:
Reproducibility. A data analysis needs to be reproducible.
Version control. Good for collaboration and also good for reproducibility. Instead of using xls, use csv (still very complex and has lots of edge cases, but csv parsers are fairly good nowadays.)
Testing. If you don't have tests, your code is broken. If your code is broken, your analysis is worse than useless.
Maintainability.
Accuracy. Numerical accuracy, accurate date parsing, among others are really lacking in Excel.
More resources:
European Spreadsheet Risks Interest Group - Horror Stories
You shouldn’t use a spreadsheet for important work (I mean it)
Microsoft's Excel Might Be The Most Dangerous Software On The Planet
Destroy Your Data Using Excel With This One Weird Trick!
Excel spreadsheets are hard to get right |
H: SAS. How to write "OR"
How to write "OR" in this example?
DATA a1;
set a;
if var1=1 OR 2;
run;
P.S. va1 is the categorial (with categories: 1, 2, 3)
AI: I'm assuming this will be migrated to Stack Overflow, but instead of trying to do if var1=1 or 2, wouldn't it be better to use if var1 in (1, 2)?
...and somebody with enough reputation should probably create an SAS tag (and an SPSS tag while you're at it), unless data scientists only use open source languages like R and Python now...
H: Building a static local website using Rmarkdown: step by step procedure
I am trying to understand the procedure of building a static local website using R and Rmarkdown. I am aware of a Rmarkdown website where the procedure is outlined, but unfortunately I do not understand the steps.
Does anybody here have some experience in building a static local website and would be so kind as to describe the procedure in more detail?
AI: In most things, related to R, there are many approaches to solve a problem, sometimes too many, I would say. The task of building a static website, using RMarkdown, is not an exception.
One of the best, albeit somewhat brief, sets of workflows on the topic include the following one by Daniel Wollschlaeger, which includes this workflow, based on R, nanoc and Jekyll, as well as this workflow, based on R and WordPress. Another good workflow is this one by Jason Bryer, which is focused on R(Markdown), Jekyll and GitHub Pages.
Not everyone likes GitHub Pages, Jekyll, Octopress and Ruby, so some people came up with alternative solutions. For example, this workflow by Edward Borasky is based on R and, for a static website generator, on Python-based Nicola (instead of Ruby-based Jekyll or nanoc). Speaking about static website generators, there are tons of them, in various programming languages, so, if you want to experiment, check this amazing website, listing almost all of them. Almost, because some are missing - for example, Samantha and Ghost, listed here.
Some other interesting workflows include this one by Joshua Lande, which is based on Jekyll and GitHub Pages, but includes some nice examples of customization for integrating a website with Disqus, Google Analytics and Twitter as well as getting custom URL for the site and more.
Those who want a pure R-based static site solution, now have some options, including rsmith (https://github.com/hadley/rsmith), a static site generator by Hadley Wickham, and Poirot (https://github.com/ramnathv/poirot), a static site generator by Ramnath Vaidyanathan.
Finally, I would like to mention an interesting project (from an open science perspective) that I recently ran across - an open source software by Mark Madsen for a lab notebook static site, which is based on GitHub Pages and Jekyll, but also supports pandoc, R, RMarkdown and knitr. |
H: Possibility of working on KDDCup data in local system
I'm trying to apply classification algorithms to KDD Cup 2012 track2 data using R
http://www.kddcup2012.org/c/kddcup2012-track2
It seems not possible to work with this 10GB training data on my local system with 4GB RAM.
Is it possible to work with this data on such a local system? Or is using a cluster the norm?
It would be great if anyone could provide some guidance on how to get started working on a cluster, and on what type of cluster is normally used for such tasks.
AI: I think that you have, at least, the following major options for your data analysis scenario:
Use big data-enabling R packages on your local system. You can find most of them via the corresponding CRAN Task View that I reference in this answer (see point #3).
Use the same packages on a public cloud infrastructure, such as Amazon Web Services (AWS) EC2. If your analysis is non-critical and tolerant to potential restarts, consider using AWS Spot Instances, as their pricing allows for significant financial savings.
Use the above-mentioned public cloud option with the standard R platform, but on more powerful instances (for example, on AWS you can opt for memory-optimized EC2 instances or general purpose on-demand instances with more memory).
In some cases, it is possible to tune a local system (or a cloud on-demand instance) to enable R to work with big(ger) data sets. For some help in this regard, see my relevant answer.
For both above-mentioned cloud (AWS) options, you may find it more convenient to use R-focused pre-built VM images. See my relevant answer for details. You may also find useful this excellent comprehensive list of big data frameworks.
H: Correlations - Get values in the way we want
I have :
a matrix X with N lines
a vector Y
I've computed the Euclidean distance with Y for each line of X.
What I get is a vector of distances.
What I want is a vector of scores between 0 and 1, 1 meaning "very" high correlation, 0 meaning "no" correlation.
Here is what I did:
I divided the vector of distances by the max distance inside it.
I get vector D.
1 - D is the final result with values between 0 and 1.
The problem is that I get many values (75%) too close to 1.
Do you think what I did is correct ?
How would you get a better result ?
(Between 0 and 1 but not everything too close to 1)
For now, I tried to take the square of the result. (To stay between 0 and 1 but to minimize the values)
Here is a picture of the distance values I want to turn into a score.
AI: Several kernel functions can serve as similarity functions (=scores). See a list, for example, here. You can try several of them and see which suits you the best.
You need something that drops fast at low distances. You can try
$$ score = 1/(1+distance)^2$$
and adjust coefficient in front of distance so that the score fits between 0 and 1
About your picture: what are the axis labels? And what are the x-ticks?
H: SAS PROC means (two variants together)
PROC means data=d mean;
var a;
class b; var a;
run;
I want to perform the "PROC means" for continuous "var a":
1) in general and
2) by classes.
But it was performed by the classes only.
How can I make the procedure run for "var a" overall here as well?
P.S. SAS WARNING: Analysis variable "a" was defined in a previous statement, duplicate definition will be ignored.
AI: Just run
PROC means data=d mean; var a; run;
for overall data |
H: About predicting the class for a new data point
I have a new data point and want to classify it into the existing classes. I can calculate the pairwise distance from the new point to all existing points (in the existing classes). I know using KNN would be a straightforward way to classify this point. Is there a way I could randomly sample the existing classes and then relate the new point to a potential class without calculating all pairwise distances?
AI: I think you need to take a step back and figure out what you're trying to do at a higher level.
How were the existing classes built? If they were built by clustering unlabeled data, then with this new data point you're continuing with the clustering process.
If the existing classes are labeled data, then k-NN is one possible classification method, and there are plenty more (decision trees, naive bayes, neural networks, etc.).
If you're doing clustering, then there are several ways of assigning a point to a cluster, among which measuring the distances from the point to cluster centroids is one. There's also single-linkage (distance is the min of distances from the point to the points of a cluster) and complete-linkage (the max of those distances). These different methods will give clusters with different shapes and there's no universally best approach. You could test them with points that are already in clusters... but then, if you're certain of what classes they're in, you have a classification problem.
So... if it's classification, then you can use k-NN; that's similar to the idea of assigning a point to a cluster according to distance. But it's not defined as finding the nearest cluster, it's defined as finding the classes of the k nearest points, then applying a vote or something. 1-NN is basically like single-linkage clustering. kNN does require finding the most similar (training) data points to your new data point. Sampling is definitely sub-optimal, but it may be good enough if your classes are well separated. If the cost of calculating distances is high, then one way of reducing the cost of calculation is the idea of skyline clustering: use a cheap distance metric to determine a subset of points that are likely to be among the k nearest neighbours, then compute these neighbours using the more expensive distance metric.
Finally, if you will be classifying many points and not updating your model, it may be worth training a model (e.g. a decision tree) on the existing classes. |
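As a minimal sketch of that two-stage idea (the cheap and expensive distance functions below are stand-ins for whatever metrics you actually use):

import numpy as np

def cheap_distance(a, b):
    # stand-in: distance computed on only a few dimensions
    return np.abs(a[:2] - b[:2]).sum()

def expensive_distance(a, b):
    # stand-in: full Euclidean distance on all dimensions
    return np.linalg.norm(a - b)

def classify(new_point, X, y, k=5, candidate_pool=50):
    """X: numpy array of training points, y: numpy array of their class labels."""
    # 1) cheap pass: keep only the most promising candidates
    cheap = np.array([cheap_distance(new_point, x) for x in X])
    candidates = np.argsort(cheap)[:candidate_pool]
    # 2) expensive pass: exact k-NN on the candidates only
    exact = np.array([expensive_distance(new_point, X[i]) for i in candidates])
    neighbours = candidates[np.argsort(exact)[:k]]
    labels, counts = np.unique(y[neighbours], return_counts=True)
    return labels[np.argmax(counts)]    # majority vote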
H: Pandas time series optimization problem: add year
I have a pandas DataFrame containing a time series column. The years are shifted in the past, so that I have to add a constant number of years to every element of that column.
The best way I found is to iterate through all the records and use
x.replace(year=x.year + years) # x = current element, years = years to add
It is cythonized as below, but still very slow (proofing)
cdef list _addYearsToTimestamps(list elts, int years):
    cdef cpdatetime x
    cdef int i
    for (i, x) in enumerate(elts):
        try:
            elts[i] = x.replace(year=x.year + years)
        except Exception as e:
            logError(None, "Cannot replace year of %s - leaving value as this: %s" % (str(x), repr(e)))
    return elts

def fixYear(data):
    data.loc[:, 'timestamp'] = _addYearsToTimestamps(list(data.loc[:, 'timestamp']), REAL_YEAR-(list(data[-1:]['timestamp'])[0].year))
    return data
I'm pretty sure that there is a way to change the year without iterating, by using Pandas's Timestamp features. Unfortunately, I don't find how. Could someone elaborate?
AI: Make a pandas Timedelta object then add with the += operator:
x = pandas.Timedelta(days=365)
mydataframe.timestampcolumn += x
So the key is to store your time series as timestamps. To do that, use the pandas to_datetime function:
mydataframe['timestampcolumn'] = pandas.to_datetime(mydataframe['epoch'], unit='s')
assuming you have your timestamps as epoch seconds in an 'epoch' column. That's not a requirement of course; see the to_datetime documentation for converting other formats.
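One caveat worth noting: adding Timedelta(days=365) shifts by exactly 365 days, so the result drifts across leap years. If you need calendar-accurate year arithmetic, a DateOffset is closer to the intent (a sketch; it is applied element-wise, so it can be slower on very large frames):

import pandas as pd

years_to_add = 5  # placeholder for REAL_YEAR minus the last year in the data
mydataframe['timestampcolumn'] = mydataframe['timestampcolumn'] + pd.DateOffset(years=years_to_add)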
H: Finding predictions using biglm without finding errors
I'm using the biglm R package for linear regression.
click/impression is the output required. But the test data does not contain click and impression.
The predict function of biglm gives the error
Error in eval(expr, envir, enclos) : object 'click' not found
I assume this is because predict tries to compute the standard errors also.
Is there a method to just obtain the predictions ? I tried assigning values to se.fit and type attributes, but I get the same error.
AI: biglm calls model.frame which "all the variables in the formula are included in the data frame" see documentation for model.frame. This is the issue that comes up when predict is called on the biglm class. It looks for those values in the predict function. To get around this you can just create a variable and encode it with 0. See below...
library(biglm)
library(dplyr)   # provides select()

data(trees)
ff<-log(Volume)~log(Girth)+log(Height)
chunk1<-trees[1:10,]
chunk2<-trees[11:20,]
chunk3<-trees[21:31,]
a <- biglm(ff,chunk1)
summary(a)
#produces same error
chunk2 <- select(chunk2, -Volume)
predict(a, chunk2)
#Fixed
chunk2$Volume <- 0
predict(a, chunk2) |
H: What are some good sources to learn fraud/anomaly detection in normal/time-series data?
I would like to know more on fraud/anomaly detection. I am looking for good source or survey article/book etc out there which will give me some preliminary idea of the area.
Any suggestion is greatly appreciated.
Thanks
AI: Usually this is approached as outlier analysis (fraud is an outlier vs. normal usage). For this aspect, you can find more info in the book Data Mining: Concepts and Techniques, even though it is a general-purpose book.
I am convinced that learning this kind of foundation is necessary for understanding the domain-specific methods.
H: Percentage of confidence in decision tree results
I am looking for a solution where classification algorithms produce output with some confidence value, but I am confused about whether classification algorithms are able to produce results with a percentage of confidence.
Thanks
AI: Some classification algorithms can indeed return a probability distribution over the considered classes (see Wikipedia on probabilistic classification).
In the topic of your question you're asking about Decision Trees. Well, these have their limitations in terms of providing probability estimates (see this paper on probability estimates from decision trees).
In case you would like to play around with this, it's very easy to start with scikit-learn:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

dtc = DecisionTreeClassifier()

# training samples
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]])
# target values for training samples
y = np.array([0, 0, 0, 1])

dtc.fit(X, y)

print 'Class probabilities for training samples:'
print dtc.predict_proba(X)

print 'Probabilities for previously unseen samples:'
for sample in ((1, 0, 1), (0, 0, 1), (1, 1, 1), (0, 0, 0)):
    print 'Sample {}. Result: {}'.format(sample, dtc.predict_proba(sample))
This code returns the following results:
Class probabilities for training samples:
[[ 1. 0. ]
[ 1. 0. ]
[ 0.5 0.5]
[ 0.5 0.5]]
Probabilities for previously unseen samples:
Sample (1, 0, 1). Result: [[ 0.5 0.5]]
Sample (0, 0, 1). Result: [[ 0.5 0.5]]
Sample (1, 1, 1). Result: [[ 0.5 0.5]]
Sample (0, 0, 0). Result: [[ 1. 0.]]
At this scale the results are easily interpretable:
a sample of features (1, 1, 0) is classified as 0 with 100% probability
a sample of features (0, 0, 1) is classified 50/50 as 0 or 1. etc.
This brings us to an important question of how accurate is your model? |
H: Finding frequencies in a noisy, "uneven" dataset
I'm working on a problem where frequency analysis applies (decomposition of a signal into frequencies, that is), but it's noisy and the samples are unevenly spaced.
Specifically: given a list of items purchased at a bar/restaurant, try to estimate the number of guests on the check based on distinct "frequencies" of purchase. The logic is that if there are N guests on a check, then it's reasonable to see N frequencies of drinks being purchased, one person buying every 10 minutes, another every 15, etc. (Plenty of other properties of the check should be included, but here I'm focusing specifically on estimating distinct frequencies).
So more formally: given a noisy, unevenly spaced time series, find the smallest number of frequencies which reproduce the signal while minimizing the error (... for some sensible definition of how to minimize both the error and the number of frequencies simultaneously).
This is more a machine learning problem than signal processing. I realize it's also an open question, but can anyone point me in the right direction? Is there a particular method or algorithm that applies here?
AI: Just to point you in one possible direction: you could treat this problem as one of probabilistic mixture modeling.
Imagine that each person's drink ordering is governed by a probability distribution. That distribution may be characterized by the time since their last drink order. As time passes, the probability that they will order another drink increases until eventually they order another drink and the time resets.
One possible model for the time between drink orders for a single person is the exponential distribution. If you consider many people ordering, the resultant times would likely be a mixture of exponentials. The problem then comes down to fitting a mixture of exponentials to the table's drink data. You would likely have to supply some additional prior data as well to get a meaningful model (otherwise it's difficult to tell the difference between 5 people ordering 2 drinks a piece or 1 person ordering 10 drinks). |
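As a minimal sketch of fitting such a mixture (a hand-rolled EM for a mixture of exponentials; the number of components and the synthetic inter-order times are placeholders):

import numpy as np

def fit_exponential_mixture(x, n_components=2, n_iter=200):
    """EM for a mixture of exponentials; x holds inter-order times, e.g. in minutes."""
    x = np.asarray(x, dtype=float)
    rates = 1.0 / np.percentile(x, np.linspace(20, 80, n_components))  # crude initialization
    weights = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = weights[:, None] * rates[:, None] * np.exp(-rates[:, None] * x[None, :])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: update mixture weights and rate parameters
        weights = resp.mean(axis=1)
        rates = resp.sum(axis=1) / (resp * x[None, :]).sum(axis=1)
    return weights, rates

# Example: times that look like two people ordering at different paces
times = np.concatenate([np.random.exponential(10, 40), np.random.exponential(25, 40)])
print(fit_exponential_mixture(times))

The number of well-supported components (together with priors on plausible ordering rates) is then the estimate of how many distinct "ordering frequencies", and hence guests, are on the check.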
H: Can someone explain the following error in my python code?
I am analyzing a dataset in Python for strictly learning purpose.
In the code below that I wrote, I am getting some errors which I cannot get rid off. Here is the code first:
plt.plot(decade_mean.index, decade_mean.values, 'o-',color='r',lw=3,label = 'Decade Average')
plt.scatter(movieDF.year, movieDF.rating, color='k', alpha = 0.3, lw=2)
plt.xlabel('Year')
plt.ylabel('Rating')
remove_border()
I am getting the following errors:
1. TypeError: 'str' object is not callable
2. NameError: name 'remove_border' is not defined
Also, the label='Decade Average' is not showing up in the plot.
What confuses me most is the fact that in a separate code snippet for plots (see below), I didn't get the 1st error above, although remove_border was still a problem.
plt.hist(movieDF.rating, bins = 5, color = 'blue', alpha = 0.3)
plt.xlabel('Rating')
Any explanations of all or some of the errors would be greatly appreciated. Thanks
Following the comments, I am posting the data and the traceback below:
decade_mean is given below.
year
1970 8.925000
1980 8.650000
1990 8.615789
2000 8.378947
2010 8.233333
Name: rating, dtype: float64
traceback:
TypeError Traceback (most recent call last)
<ipython-input-361-a6efc7e46c45> in <module>()
1 plt.plot(decade_mean.index, decade_mean.values, 'o-',color='r',lw=3,label = 'Decade Average')
2 plt.scatter(movieDF.year, movieDF.rating, color='k', alpha = 0.3, lw=2)
----> 3 plt.xlabel('Year')
4 plt.ylabel('Rating')
5 remove_border()
TypeError: 'str' object is not callable
I have solved remove_border problem. It was a stupid mistake I made. But I couldn't figure out the problem with the 'str'.
AI: It seems that remove_border is not defined. You have to define the function before it is used.
As for the TypeError: a common cause of 'str' object is not callable on a line like plt.xlabel('Year') is that plt.xlabel was accidentally overwritten with a string earlier in the session (e.g. plt.xlabel = 'Year'); restarting the interpreter or re-importing matplotlib.pyplot usually fixes it.
Finally, your label is not shown because you have to call the method plt.legend().
H: Finding parameters with extreme values (classification with scikit-learn)
I am currently working with the forest cover type prediction from Kaggle, using classification models with scikit-learn. My main purpose is learning about the different models, so I don't pretend to discuss about which one is better.
When working with logistic regression, I wonder if I need the 'penalty' parameter (where I can choose L1 or L2 regularization). Based on what I found, these regularization terms are useful to avoid over-fitting, especially when the parameter values are extreme (by extreme I mean that the range of some parameter values is very large compared to that of other parameters -- correct me if I am wrong). In this case, wouldn't it be enough to apply a log-scale or normalization to these values?
The main questions are: as the number of parameters is large, are there visualization techniques and tools in scikit-learn which can help me to find parameters with extreme values? is there any statistical function/tool which returns how extreme the values of parameters are?
AI: If by "parameters" you mean features (called "Data Fields" at Kaggle), then, yes, you can log-scale those. To visualize them you can just use histograms.
To do it for all features in python, for example, you can put your data in pandas DataFrame (let us call it "data") and then use data.hist()
This has nothing to do with the regularization in any model.
If by "parameters" you mean the coefficients obtained after fitting the logistic regression, then one uses regularization. This has, however, is not directly related to log-transform. How you list/visualize your coefficients depends on the programming tool you use for logistic regression (or other model) |
H: What is the best way to scale a numerical dataset
I have a dataset with different attributes which don't have the same range of values, which is a problem when we need to compute distances between objects. After some research I found that I can do the regularisation job with this formula: (value-min)/(max-min), where min and max are respectively the minimum and maximum value in the domain of the attribute.
The question is: do other ways exist?
Thank you for your help.
AI: There is pretty much mess in terminology in your question :).
Data Regularization is used for model selection, it is not about data processing. Here it is described in more friendly manner.
What you mean is Feature Scaling. It can be done in several ways including Rescaling, the method you described. You may also use Standardization (normalization) and Scaling to unit length.
These answers may be helpful:
Normalization vs Scaling
Normalization vs Standardization |
H: How to classify whether text answer is relevant to an initial text question
I have a text classification problem in which i need to classify an answer to a message as either relevant or not.
In the first phase of my calculations, I have already used an SVM to determine whether the original message was relevant or not, i.e. deciding whether a message contains a hint or question about somebody's Twitter account having been hacked.
example:
"Hey @foobar, have you been hacked?" <-- relevant
"My bank account has just been hacked" <-- not relevant
However, when I want to classify whether the answer is relevant, I would want to have both the original message and the answer as input, right? An answer is relevant in my case if it, in any way, responds to the original message. Is this approach possible using a SVM or any other machine learning tool? I'm using python with the scikit-learn library.
example:
"Hey @foobar, have you been hacked?"
"@barfoo it seems so, thx for suggesting" <-- relevant
"Hey @foobar, have you been hacked?"
"Lose 20 pounds quickly! http://blabla.com" <-- not relevant
I'm not very experienced in this field, so any input would be very appreciated.
AI: Both message and answer are your input, so your feature vector should contain information about both.
Here's a simple structure of a possible solution using scikit-learn:
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_extraction import DictVectorizer

dataset = (("Hey @foobar, have you been hacked?",
            "@barfoo it seems so, thx for suggesting",
            True),   # True for relevant, False for not relevant
           ("Hey @foobar, have you been hacked?",
            "Lose 20 pounds quickly! http://blabla.com",
            False))

def extractMessageFeatures(message):
    # here comes your real feature extraction algorithm
    return { 'message_predicted_spam': False,
             'message_contains_valid_username': True }

def extractAnswerFeatures(answer):
    # here comes your real feature extraction algorithm
    return { 'answer_predicted_spam': False,
             'answer_contains_valid_username': True }

def extractFeatures(data):
    features = []
    for instance in data:
        instanceFeatures = extractMessageFeatures(instance[0])
        instanceFeatures.update(extractAnswerFeatures(instance[1]))
        features.append(instanceFeatures)
    return features

def trainClassifier(data):
    features = extractFeatures(data)
    vec = DictVectorizer()
    featureVector = vec.fit_transform(features)
    print vec.get_feature_names()
    print featureVector.toarray()
    svc = SVC()
    svc.fit(featureVector, np.array([i[2] for i in data]))
    return svc

clf = trainClassifier(dataset)
# now, you can clf.predict(...)
Now, the hardest part is to decide which features to extract from both messages and answers. It's up to you.
One of the simplest solutions would be to use n-gram features.
Other approach would be to use some spam detection to decide whether answer is spam or not and treat this information as a feature.
You can also use Twitter-specific information (for example, whether users are mentioning each other in their tweets, using the same hashtags etc.).
You can combine these features in whatever fashion you like, of course.
Besides creating the feature extraction functionality, you need a labeled dataset of messages, answers and relevant/non-relevant labels.
But once you have both (feature extraction functionality and a proper dataset), you're good to go with a task which clearly matches standard machine learning approaches. |
H: Optimizing Weka for large data sets
First of all, I hope I'm in the right StackExchange here. If not, apologies!
I'm currently working with huge amounts of feature-value vectors. There are millions of these vectors (up to 20 million possibly). They contain some linguistic/syntactic features and their values are all strings.
Because most classifiers do not handle string data as values, I convert them to binary frequency values, so an attribute looks like this:
@attribute 'feature#value' numeric
And per row, the value is either 1 or it is absent (so note it's a sparse ARFF file).
The thing is, with 250K rows, there are over 500K attributes and so, most algorithms have a hard time with this.
There are a lot of algorithms. I'm really curious as to what you would consider a suitable one (preferably unsupervised, but anything works), and if you even have some ideas how I could improve performance. I can train on small subsets of data, but the results only get better when using large amounts of data (at least 7 million events).
For now, I've been using NaiveBayes variations (Multinomial and also DMNBText) and those are really the only ones that are able to chew up data with acceptable speed.
Thanks a lot. If you need more information, please let me know.
Cheers.
AI: I would go for dimensionality reduction. You can start with SVD (should be available in Weka). If SVD is too slow / too memory consuming, then there are still some options:
CUR-decomposition: a variant of singular-value decomposition that keeps the matrices of the decomposition sparse if the original matrix is sparse (see: this chapter of Mining Massive Datasets book)
Random projections: projecting the data onto a random lower-dimensional subspace (see: the Random projection in dimensionality reduction: Applications to image and text data paper)
Coresets: Given a matrix A, a coreset C is defined as a weighted subset of rows of A such that the sum of squared distances from any given k-dimensional subspace to the rows of A is approximately the same as the sum of squared weighted distances to the rows in C (see the Dimensionality Reduction of Massive Sparse Datasets Using Coresets paper)
That's the tip of an iceberg. More approaches are there in the wild. The problem is that I doubt any of these solutions come with Weka (please, correct me if I am wrong on this). I would search for a usable Java implementation of any of these algorithms and try to port it to work with Weka's arff files. |
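If you are open to doing the reduction outside Weka (for example in Python) and exporting the reduced matrix back to CSV/ARFF, a minimal sketch could look like this (the matrix size and target dimensionality are placeholders):

from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.random_projection import SparseRandomProjection

# X: your sparse binary feature matrix (rows x features); here a random stand-in
X = sparse_random(10000, 50000, density=1e-4, format='csr')

# Option 1: truncated SVD (works directly on sparse matrices)
svd = TruncatedSVD(n_components=200)
X_svd = svd.fit_transform(X)

# Option 2: sparse random projection -- much cheaper, often good enough
rp = SparseRandomProjection(n_components=1000)
X_rp = rp.fit_transform(X)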
H: Question on decision tree in the book Programming Collective Intelligence
I'm currently studying Chapter 7 ("Modeling with Decision Trees") of the book "Programming Collective intelligence".
I find the output of the function mdclassify() p.157 confusing. The function deals with missing data. The explanation provided is:
In the basic decision tree, everything has an implied weight of 1,
meaning that the observations count fully for the probability that an
item fits into a certain category. If you are following multiple
branches instead, you can give each branch a weight equal to the
fraction of all the other rows that are on that side.
From what I understand, an instance is then split between branches.
Hence, I simply don't understand how we can obtain:
{'None': 0.125, 'Premium': 2.25, 'Basic': 0.125}
as 0.125 + 0.125 + 2.25 does not sum to 1, nor even to an integer. How was the new observation split?
The code is here:
https://github.com/arthur-e/Programming-Collective-Intelligence/blob/master/chapter7/treepredict.py
Using the original dataset, I obtain the tree shown here:
Can anyone please explain precisely what these numbers mean and how exactly they were obtained?
PS : The 1st example of the book is wrong as described on their errata page but just explaining the second example (mentioned above) would be nice.
AI: There are four features:
referer,
location,
FAQ,
pages.
In your case, you're trying to classify an instance where FAQ and pages are unknown: mdclassify(['google','France',None,None], tree).
Since the first known attribute is google, in your decision tree you're only interested in the edge that comes out of google node on the right-hand side.
There are five instances: three labeled Premium, one labeled Basic and one labeled None.
Instances with labels Basic and None split on the FAQ attribute. There are two of them, so the weight for each of them is 0.5.
Now, we split on the pages attribute. There are 3 instances with pages value larger than 20, and two with pages value no larger than 20.
Here's the trick: we already know that the weights for two of these were altered from 1 to 0.5 each. So, now we have three instances weighted 1 each, and 2 instances weighted 0.5 each. So the total value is 4.
Now, we can count the weights for pages attribute:
pages_larger_than_20 = 3/4
pages_not_larger_than_20 = 1/4 # the 1 is: 0.5 + 0.5
All weights are ascribed. Now we can multiply the weights by the "frequencies" of instances (remembering that for Basic and None the "frequency" is now 0.5):
Premium: 3 * 3/4 = 2.25 # because there are three Premium instances, each weighted 0.75;
Basic: 0.5 * 1/4 = 0.125 # because Basic is now 0.5, and the split on
pages_not_larger_than_20 is 1/4
None: 0.5 * 1/4 = 0.125 # analogously
That's at least where the numbers come from. I share your doubts about the maximum value of this metric and whether it should sum to 1, but now that you know how these numbers are obtained you can think about how to normalize them.
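A minimal sketch of the arithmetic above and one straightforward way to normalize the result into probabilities:
weights = {'Premium': 3 * 3/4, 'Basic': 0.5 * 1/4, 'None': 0.5 * 1/4}
total = sum(weights.values())                       # 2.5
probs = {k: v / total for k, v in weights.items()}
print(probs)  # {'Premium': 0.9, 'Basic': 0.05, 'None': 0.05}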
H: Training Dataset required for Classifier
I am currently trying to develop a classifier in python using Naive Bayes technique. I need a dataset so that I can train it. My classifier would classify a new document given to it into one of the four categories : Science and technology, Sports, politics, Entertainment. Can anybody please help me find a dataset for this. I've been stuck on this problem for quite some time now. Any help would be greatly appreciated.
AI: This should get you maximum datasets for your classification exercise. |
H: Predicting earthquakes using disturbances in DTH TV transmission
It is said that before an earthquake happens, a viewer experiences disturbances in DTH TV transmission in the form of distorted images on the screen which automatically correct after a few seconds.
Is it possible to identify patterns of such disturbances by continuously monitoring TV images so that earthquakes can potentially be predicted at least few minutes in advance and many lives could be saved?
AI: I think that if there are disturbances caused by something preceding earthquake, they might be used.
The problem is that you need to find which cause produces that effect and measure it. For example you can team up with geologists, geophysicists or someone similar and try to build as complete hypothesis as possible and then design experiment to gather data. If your experiment would bring some non noise data then you can start to thinking about machine learning algorithms which will work in this situation.
But you might also try to do it on "brute force" way by actually recording DTH TV images in seismically unstable regions and try to correlate those videos with seismic data. Then after some manual examination and categorization you could define (or not if this cause and effect hypothesis is wrong) some possible glitches that are observed and try to develop software that tries to detect them (OpenCV might be useful for example).
IMHO in either way, some domain knowledge related to earthquakes would be more useful on beginning stage, and experience related to machine learning would be more useful on latest stages of such ambitious project. |
H: Gradient Descent Step for word2vec negative sampling
For word2vec with negative sampling, the cost function for a single word is the following according to word2vec:
$$
E = - log(\sigma(v_{w_{O}}^{'}.u_{w_{I}})) - \sum_{k=1}^K log(\sigma(-v_{w_{k}}^{'}.u_{w_{I}}))
$$
$v_{w_{O}}^{'}$ = hidden->output word vector of the output word
$u_{w_{I}}$ = input->hidden word vector of the input word
$v_{w_{k}}^{'}$ = hidden->output word vector of the negative sampled word
$\sigma$ is the sigmoid function
And taking the derivative with respect to $v_{w_{O}}^{'}.u_{w_{j}}$ is:
$
\frac{\partial E}{\partial v_{w_{j}}^{'}.u_{w_{I}}} = \sigma(v_{w_{j}}^{'}.u_{w_{I}}) * (\sigma(v_{w_{j}}^{'}.u_{w_{I}}) - 1)
$ $ \text{if } w_j = w_O $
$
\frac{\partial E}{\partial v_{w_{j}}^{'}.u_{w_{I}}} = \sigma(v_{w_{j}}^{'}.u_{w_{I}}) * \sigma(-v_{w_{j}}^{'}.u_{w_{I}})
$ $ \text{if } w_j = w_k \text{ for } k = 1\ldots K$
Then we can use chain rule to get
$ \frac{\partial E}{\partial v_{w_{j}}^{'}} = \frac{\partial E}{\partial v_{w_{j}}^{'}.u_{w_{I}}} * \frac{\partial v_{w_{j}}^{'}.u_{w_{I}}}{\partial v_{w_{j}}^{'}} $
Is my reasoning and derivative correct? I am still new to ML so any help would be great!
AI: Your chain-rule step at the end is right, but the two partial derivatives each carry a spurious sigmoid factor: differentiating $-\log(\sigma(x))$ gives $\sigma(x) - 1$ (not $\sigma(x)(\sigma(x)-1)$), and differentiating $-\log(\sigma(-x))$ gives $\sigma(x)$. In other words, $\frac{\partial E}{\partial v_{w_{j}}^{'}.u_{w_{I}}} = \sigma(v_{w_{j}}^{'}.u_{w_{I}}) - t_j$, where $t_j = 1$ if $w_j = w_O$ and $t_j = 0$ for the negative samples. This is the derivative presented in the paper (equations 56-58).
The paper you're linking to is the most advanced attempt - at least to the best of my knowledge - to explain how word2vec works, but there are also a lot of other resources on the topic (just search for word2vec on arxiv.org). If you're interested in word2vec, you may find GloVe interesting too (see: Linking GloVe with word2vec).
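For concreteness, here is a rough numpy sketch of one SGD step implied by this loss; the names are placeholders (u_I for the input word's vector, v_out for the true context word's output vector, v_neg for the K negative samples' output vectors):
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(u_I, v_out, v_neg, lr=0.025):
    g_out = sigmoid(v_out @ u_I) - 1.0      # dE/d(v_out . u_I), target label 1
    g_neg = sigmoid(v_neg @ u_I)            # dE/d(v_k . u_I), target label 0, shape (K,)
    grad_u = g_out * v_out + g_neg @ v_neg  # chain rule w.r.t. u_I
    v_out -= lr * g_out * u_I               # chain rule w.r.t. v_out
    v_neg -= lr * np.outer(g_neg, u_I)      # chain rule w.r.t. each v_k
    u_I -= lr * grad_u
    return u_I, v_out, v_neg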
H: What are Hybrid Classifiers used in Sentiment Analysis?
What are Hybrid classifiers used for sentiment analysis? How are they built? Please suggest good tutorial/book/link for reference. Also how are they different from other classifiers like SVM and Naive Bayes?
AI: In sentiment analysis you may want to combine a number of classifiers. Let's say: a separate classifier for emoticons, another one for emotionally loaded terms, another one for some special linguistic patterns, and yet another one to detect and filter out spam messages. It's all up to you.
You can use either SVM, Naive Bayes, or anything else that best suits your problem. You may use majority voting, weights (for example based on cross validation results), or any other more advanced technique to decide which class is the most appropriate one.
Also, googling for hybrid sentiment returns tons of papers containing answers to the questions that you have stated. Please don't ask us to rewrite these papers here.
H: Downloading a large dataset on the web directly into AWS S3
Does anyone know if it's possible to import a large dataset into Amazon S3 from a URL?
Basically, I want to avoid downloading a huge file and then reuploading it to S3 through the web portal. I just want to supply the download URL to S3 and wait for them to download it to their filesystem. It seems like an easy thing to do, but I just can't find the documentation on it.
AI: Since you obviously posses an AWS account I'd recommend the following:
Create an EC2 instance (any size)
Use wget(or curl) to fetch the file(s) to that EC2 instance. For example: wget http://example.com/my_large_file.csv.
Install s3cmd
Use s3cmd to upload the file to S3. For example: s3cmd put my_large_file.csv s3://my.bucket/my_large_file.csv
Since connections made between various AWS services leverage AWS's internal network, uploading from an EC2 instance to S3 is pretty fast. Much faster than uploading it from your own computer. This way allows you to avoid downloading the file to your computer and saving potentially significant time uploading it through the web interface. |
H: How do I access data on my EBS Volume from R-Studio Server on Ubuntu EC2 Instance
I have setup R-Studio Server on an Ubuntu EC2 instance for the first time and successfully started r-studio server in my browser. I also have putty ssh client. How do I set path in r-studio server to my mounted EBS volume and why do I not see the contents of my EBS volume in the r-studio files area (bottom right side? ) . Also, I had a file in an s3 bucket. I passed this command to bring it from s3 to my ebs volume: s3cmd get s3://data-analysis/input-data/filename.csv . I assume this command downloads the file from s3 into the ebs volume. But I can't find it in RStudio Server! I have scoured the internet looking for help on this but not able to solve my problem.
AI: I see this question got up-voted so someone else is looking for the answer. Here it is:
I was logged in as the default user on Ubuntu instance when i did the
$s3cmd get s3://data-analysis/input-data/filename.csv
The data gets saved from the S3 bucket to the home directory of the user named ubuntu. This can be verified with:
$ df -h /home/ubuntu/
Since I had followed Randy Zwitch's tutorial on installing R on AWS, I had created a user named rstudio, and that was the user name I was using to log in to RStudio Server. Hence I had to move the file from the default 'ubuntu' user's home directory to the user 'rstudio'. This can be done with:
$ sudo mv /home/ubuntu/filename.csv /home/rstudio
Happy to answer follow up questions. |
H: Different definitions of Macro F1 score, which one is used in Scikit-learn?
In this article, two different definitions of the macro F1 score used in the literature are demonstrated.
The first F1 score is computed such as:
F1 scores are computed for each class and then averaged via
arithmetic mean
The second such as:
The harmonic mean is computed over the arithmetic means of precision
and recall
I was wondering which definition is actually implemented in Scikit-learn.
From the docs I cannot derive which definition is used:
Calculate metrics for each label, and find their unweighted mean. This
does not take label imbalance into account.
AI: The first variant is implemented:
$$F1_{macro} = \frac{1}{|\text{classes}|} \sum_{c \, \in \, \text{classes}} F1_c$$
You can find an example calculation in this answer.
Sometimes the scikit learn documentation does not include all the details. In these cases it is often helpful to look into the source code which is linked on all help-sites. Here you can find some more details on the f1 score calculation. |
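A quick way to confirm this yourself, with a small made-up label example:
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

per_class = f1_score(y_true, y_pred, average=None)  # F1 of each class
print(per_class.mean())                             # arithmetic mean of per-class F1 scores
print(f1_score(y_true, y_pred, average="macro"))    # same value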
H: keras CNN lstm add model depth
Would anyone have any advice on how to add model depth? This works below but I was hoping to experiment with adding in additional non-TimeDistributed layers.
# reshape from [samples, timesteps] into [samples, subsequences, timesteps, features]
n_features = 1
n_seq = 2
n_steps = 2
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=16, kernel_size=1, activation='relu'), input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=500, verbose=0)
For example, if I add this in:
# define model
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=16, kernel_size=1, activation='relu'), input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(LSTM(40, activation='relu'))
model.add(LSTM(30, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=500, verbose=0)
This will throw ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2
Any tips help! Sorry not a lot of Wisdom here but learning :)
AI: Try to change
model.add(LSTM(50, activation='relu'))
to
model.add(LSTM(50, activation='relu', return_sequences=True))
As explained in the documentation ( https://keras.io/layers/recurrent/#lstm ), return_sequences allows the LSTM to return the complete sequence of output vectors rather than a single vector. When stacking several LSTM layers, every LSTM except the last one needs return_sequences=True so that the following LSTM still receives 3-D input, as shown in the sketch below.
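A hedged sketch of the deeper variant, reusing the imports and the n_steps/n_features values from your code:
from keras.models import Sequential
from keras.layers import Dense, LSTM, Flatten, TimeDistributed, Conv1D, MaxPooling1D

model = Sequential()
model.add(TimeDistributed(Conv1D(filters=16, kernel_size=1, activation='relu'), input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu', return_sequences=True))
model.add(LSTM(40, activation='relu', return_sequences=True))
model.add(LSTM(30, activation='relu'))  # the last LSTM returns a single vector
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')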
H: Weights for unbalanced classification
I'm working with an unbalanced classification problem, in which the target variable contains:
np.bincount(y_train)
array([151953, 13273])
i.e. 151953 zeroes and 13273 ones.
To deal with this I'm using XGBoost's weight parameter when defining the DMatrix:
dtrain = xgb.DMatrix(data=x_train,
label=y_train,
weight=weights)
For the weights I've been using:
bc = np.bincount(y_train)
n_samples = bc.sum()
n_classes = len(bc)
weights = n_samples / (n_classes * bc)
w = weights[y_train.values]
Where weights is array([0.54367469, 6.22413923]), and with the last line of code I'm just indexing it using the binary values in y_train. This seems like the correct approach to define the weights, since it represents the actual ratio between the amount of values of one class vs the other. However this seems to be favoring the minoritary class, which can be seen by inspecting the confusion matrix:
array([[18881, 19195],
[ 657, 2574]])
So just by trying out different weight values, I've realized that with a fairly close weight ratio, specifically array([1, 7]), the results seem much more reasonable:
array([[23020, 15056],
[ 837, 2394]])
So my question is:
Why using the actual weights of each class yielding poor metrics?
Which is the right way to set the weights for an unbalanced problem?
AI: Depending on your choice of accuracy metric, you'll find that different balancing ratios give the optimum value of the metric. To see why this is true, consider optimizing precision alone vs. optimizing recall alone. Precision is optimized (=1.0) when there are no false positives. Upweighting negative data reduces the positive rate, and therefore the false positive weight. So if you just want to optimize precision, give the positive data zero weight! You'll always predict negative labels and the precision will be ideal. Likewise, for only optimizing recall, give the negative data zero weight - you'll always get the ideal value of recall. These extreme cases are silly for real-world applications, but they do show that your "best" balancing ratio depends on your metric.
As you're probably aware, metrics like AUC and F1 try to compromise between precision and recall. In the absence of prior information, people often try to choose "equal balance" between precision and recall, as implemented in AUC. Since AUC is relatively insensitive to data balance, 1:1 data balancing is generally appropriate. However, in real life you may care more about precision than recall, or vice versa. So, you do need to select your metric in advance, depending on the problem you're solving. Then keep your metric fixed, vary your data balance, and look at your trained model performance on realistic test datasets. Then you can see whether your model is making the optimum predictions, from the point of view of your chosen metric and your real-world dataset. |
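As a hedged sketch of that procedure (assuming x_train/y_train and a held-out x_valid/y_valid exist, and using F1 as the fixed metric):
import numpy as np
import xgboost as xgb
from sklearn.metrics import f1_score

best = (None, -np.inf)
for pos_weight in [1, 2, 4, 7, 11.4]:  # 11.4 is roughly 151953 / 13273
    w = np.where(y_train == 1, pos_weight, 1.0)
    dtrain = xgb.DMatrix(x_train, label=y_train, weight=w)
    booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=100)
    y_pred = (booster.predict(xgb.DMatrix(x_valid)) > 0.5).astype(int)
    score = f1_score(y_valid, y_pred)
    if score > best[1]:
        best = (pos_weight, score)
print(best)  # the weight ratio that maximises your chosen metric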
H: Select best answer from several existing ones for a question
After analyzing questions on a forum, a human support team has created a set of general answers, that can be used to provide basic answers on the forum.
I am trying to build a system that:
Selects best answer from this set of answers for a given question. How to do this?
Estimates acceptability of such an answer. Which metrics to use?
Using document embeddings, such as doc2vec to find similarity between question and answer does not solve the problem, I think. Other ideas?
Update 1
In my case I don't have labeled data set with good answers to train my model. My problem is unsupervised learning problem.
AI: This problem is a multiple-choice question-answering task. I can see you have already tried gensim, doc2vec, etc. You can try a PyTorch-based transformer solution. Here is the link: multiple-choice. You can create your data in SWAG format and remove --do_train in the code below to run prediction on your dataset.
It has been trained on the SWAG dataset and gives decent accuracy.
If it works well for you, good; otherwise you may want to fine-tune it. For fine-tuning, --do_train should be included, as below:
#training on 4 tesla V100(16GB) GPUS
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/run_multiple_choice.py \
--model_type roberta \
--task_name swag \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_gpu_eval_batch_size=16 \
--per_gpu_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output |
H: Why to exclude features used for label generation during modeling?
I have a dataset like below without labels
But with the help of experts opinion, we generate labels based on the below 3 rules (all 3 rules has to be met to label it as 1)
So now the dataset looks like below
As you can see that my final dataset has the labels.
Now I can run a ML model for classification. Am I right?
But I read that during model building process, features that were used to create the labels will have to be excluded because they might result in perfect separation of classes and model might fail. what does it mean by fail? Aren't we aiming for separation of classes through classification algorithms?
May I know why do we have to exclude these features (Ex: RG, FG and BP features which were used to derive labels)?
It's basically my model will be built on below dataset. But aren't we losing the predictive power? why do we have to build a model by excluding those features (that were used to derive labels)?
AI: You created the labels using the data. If you are able to label them with the data, then why do you need a machine learning model? It simply becomes a rule based classifier. What you would like to do, is to find a function that fits your data points.
For example, if you run a decision tree classifier, then it's going to find perfect splits based on your labelling rules. Hence, you're feeding the model information about the labelling technique itself. This is called data leakage. The model sees something really obvious and will usually reach an accuracy of 1.
H: Should bias updates be porportional to overfitting?
According to questions on the internet, the bias is a learnable parameter,
and there are different solutions to updating it, but I failed to find a concise methodology of
correctly updating biases during training.
When I tried to overfit a small network, it failed when the bias was introduced into the training:
When I tried to scale down bias updates it produced similar, but delayed results:
Next I tried to make bias updates proportional to the train set error, again only delaying the trend:
Next I tried to make bias updates inversely proportional to the error in the training set, but I suppose this would not have any shown benefits without a validation set. Alas the effect was the same:
According to @Noah Weber, Bias is something that would help in reducing overfitting during training, which is actually consistent with my previous experiments.
Based on this I would suppose the more
overfitting occurs, the more the bias term should be updated. This can be measured by the gap between the training and test set errors. Should the bias be updated according to that?
AI: I think you are mixing up the bias of a model as in here, with the bias terms of a neural network which are just the constant term of the linear model of each layer. Updating the biases for training will not reduce overfitting since each bias is an additional parameter of the model.
Remember that the weights (and the bias is also a weight) are updated proportionally to the negative gradient of the loss function. Therefore there must be an error in your implementation since the training error gets larger which is highly unlikely for gradient descent (unless your learning rate is far too high). |
H: Asynchronous Hyperparameter Optimization - Dependency between iterations
When using Asynchronous Hyperparameter Optimization packages such as scikit optimize or hyperopt with cross validation (e.g., cv = 2 or 4) and setting the number of iteration to N (e.g., N=100), should I expect:
Dependency between sequential iterations where the loss value improves sequentially (e.g., the optimized hyperparameters in iteration number 10 are better than the optimized hyperparameters generated in iteration number 9, etc.). In this case I should always select the hyperparameters generated in the last iteration.
or
Expect independency between iterations where after all 100 iterations are completed I should select the iteration with the smallest loss value.
If option a) is the right answer, then what does it mean if the best hyperparameters are associated with iteration 50? Does it mean that the data is not stable, or the loss function is ill-specified, and therefore the outcome of the hyperparameter optimization process should not be trusted?
AI: hyperopt proceeds sequentially, unless you let it use a parallel backend. Is N the max_evals parameter? Yes, you always want to select the hyperparams with the best validation loss. That's what it returns in the end. It may be that it found the best hyperparameters well before the final trial. It does not mean anything is wrong. It learns as it goes the distribution of the loss conditional on hyperparameters, and on purpose explores less-certain parts of the space that are most likely to yield improvement, but may not, especially towards the end of the search. This happens with entirely well-defined loss functions. |
H: Can one property name be used twice in the same branch of a DecisionTreeRegressor?
I am using this dataset for the analysis (Generated using make_regression of sklearn library)
I was trying to learn the DecisionTreeRegression algorithm of sklearn library. I used the following code to fit the regressor.
from sklearn.tree import DecisionTreeRegressor as DTR
regressor1 = DTR(max_depth=2)
regressor1.fit(X,y)
y_pred1 = regressor1.predict(X)
These are the leaf node values that I got,
It seems like, the decision tree first did a split on prop 2 at -1.0923644932716892 for the root node then on the right child of the root it again did another split on prop 2 at 0.0340153523120978.
But what I learnt about Decision Trees is that is a split is done at a property then in the same branch the property should not be used again. Then why the sklearn library is doing this thing?
AI: Good job looking at the tree and understanding what has happened.
There is no problem splitting on the same feature multiple times. A continuous feature has many split points available. The tree continues to subset and refine. The split criterion shows what the "best" greedy split is at this point. If a feature is income, perhaps the best split is \$100,000. Then on the high side, there is another split for \$10,000,000 since those people behave differently from the \$100,100 income people.
Even a categorical variable may split again. For example black and blonde hair go left, all other hair color go right. Later splitting black and blonde is the best split available.
I saw a research project where the scikit-learn code was adjusted to give split priority to features that were already used, in order to reduce the number of features in the model (greedy trees can otherwise pick up spurious interactions and pull in many features). It worked well.
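To see the repeated splits for yourself, here is a small self-contained sketch (synthetic data, not your dataset):
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2  # needs more than one threshold on the same feature

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["prop_0"]))  # both levels split on prop_0, at different thresholds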
H: redundancy or functional dependency of two columns
Being a beginner in Data Analysis I ask you not to be much judgemental about this question.
I was trying to find out if there is a standard way (a function?) in the Pandas module to identify redundancy of a column. For example, notably for categorical data types, if a column is a function of another it could be considered redundant and therefore could be disregarded without any loss of predictive power. I would prefer not to implement any heavy machinery myself, but to use common, well-known methods; hence my question.
I guess correlation is a good metric, but it has a different meaning and it is not a good fit for two categorical columns.
Any answers would be much appreciated, cheers.
AI: Well, it always depends, for example, on what model you might be training (i.e. some are robust to multicollinearity). I am pretty sure you are aware, but as a rule of thumb it always helps to know what you are looking for, rather than naively hoping that one function or method will give all the answers.
That said, there has been good progress; in fact there is a powerful Python package, pandas-profiling, that simply takes a pandas dataframe and returns lots of useful information in the blink of an eye (well, if it is not a super large dataset). And yes, for redundancy, correlation is a good start, and pandas-profiling highlights highly correlated variables based on Spearman, Pearson and Kendall matrices. Take a look at this post for a quick read about pandas-profiling.
For categorical columns, things often get trickier. In the past, pandas-profiling did not offer anything here. I just double-checked, and interestingly they implemented Cramér's V in this closed issue at their repo (see the nice post 'The Search for Categorical Correlation' if you want to learn about Cramér's V). I cannot confirm its functionality, so you will have to test it on your own, but I believe it should be reliable since many people contribute to this project. Unfortunately their documentation isn't thorough, but it seems it is there; see the correlations.html page.
Update: I just noticed that they have an advanced tutorial under Examples, i.e. a link to the Colab notebook Tutorial: report structure using Kaggle data (advanced), where the dataset is a combination of many data types including numerical and categorical, and they in fact show all the correlations elegantly.
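If you would rather compute it yourself than rely on pandas-profiling, here is a minimal sketch of Cramér's V between two categorical columns (values near 1 suggest one column is close to a function of the other):
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    table = pd.crosstab(x, y)          # contingency table of the two columns
    chi2 = chi2_contingency(table)[0]
    n = table.values.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))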
Good luck! |
H: What is neural structure learning in tensorflow?
What is neural structure learning?
what is the difference between neural network vs neural structure
learning?
AI: It seems that you actually mean "neural structured learning".
From the Tensorflow webpage on their neural structured learning framework, it seems it is just an umbrella term to two types of regularizations: Neural Graph Learning and Adversarial Learning.
Basically you add an extra term to your training loss where you force the internal representations for certain input to be close to the representation of its "neighbors".
The neighborhood is expressed either as a graph (hence leading to neural graph learning) or as a neighbor/no neighbor relation (hence leading to adversarial training).
Therefore, neural structured learning refers to normal neural networks that are trained with extra knowledge on what inputs are "close" to each other. |
H: Pre-processing mixed data prior to clustering
I am new to hierarchical clustering, and wish to perform clustering on mixed data. I am slightly confused on the necessary pre-processing steps. I understand how to pre-process purely continuous data, what I haven't been able to identify is what pre-processing steps are necessary for mixed data? Do I just scale my continuous variables, impute missing data, and leave the categorical variables alone? Or do I need to perform transformations across all of my variable types?
AI: This depends on many factors, including the data and data types, the distance metric, and the clustering method. You also need to bear in mind that different software packages may handle / not handle various steps and transformations differently.
Numerical data:
Normalise or Scale numerical features to ensure that these are on the same scale and or unit variance. For instance min max scale so that all values are in the 0-1 range.
Categorical data:
For nominal data such as gender or country, one can apply Dummy / One Hot Encoding to effectively treat each value as a binary feature. For cases where there is high cardinality (>15), for instance US states, it can be necessary to reduce these by applying feature engineering or other techniques.
Ordinal data is perhaps the hardest to handle. One needs to understand and account for the ordering and relative difference between each value. Take Olympic medals, where we can assign Bronze (1), Silver (2), and Gold (3), and then apply MinMax 0-1 scaling to treat these effectively as numerical features. What is key is that this approach implies that silver is double the value of bronze, and gold is three times the value of bronze. This may hold true but can become challenging when there is less clear order in the data. One I frequently have to deal with is company revenue bins of unequal size. Another approach, in the case of classification, is to use the fraction of each value with respect to a target.
I am writing this python notebook and blog post on clustering mixed datatype data here - it’s a work in progress but the key concepts are there. |
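As a minimal sketch of the steps above (the column names and ordinal categories are hypothetical, and df stands for your mixed-type dataframe):
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder, OrdinalEncoder

numeric = ["age", "income"]
nominal = ["gender", "country"]
ordinal = ["medal"]  # Bronze < Silver < Gold

pre = ColumnTransformer([
    ("num", MinMaxScaler(), numeric),
    ("nom", OneHotEncoder(handle_unknown="ignore"), nominal),
    ("ord", make_pipeline(OrdinalEncoder(categories=[["Bronze", "Silver", "Gold"]]), MinMaxScaler()), ordinal),
])
X = pre.fit_transform(df)  # ready for hierarchical clustering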
H: can I use t-sne or PCA to reduce number of classes?
I wanted to know if I can use t-sne or PCA to reduce the number of classes depending on the similarity between them. For example, if I have 100 classes of 100 different animals and would like to put all the cats in a group and all the dogs in a group etc. (to get few groups of these 100 classes).
AI: No. t-Distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA) are dimension reduction techniques, aka fewer columns of a tidy dataframe.
Clustering will reduce the number of observations, aka fewer rows of a tidy dataframe. In particular, you might be looking for hierarchical clustering. |
H: Would it be okay to stop training my neural network?
When the validation error of the Neural Network that I am trying to train is slowly decreasing but not by much, is it okay to stop training the network at that point, or do I need to increase the training time until the minimum validation error is reached?
For instance, in the last 5 epochs my validations errors are shown below:
| end of epoch 1 | time: 3782.50s | valid loss 6.7914 | valid ppl 890.1194
| end of epoch 2 | time: 3802.14s | valid loss 6.6084 | valid ppl 741.2616
| end of epoch 3 | time: 3791.33s | valid loss 6.5249 | valid ppl 681.8797
| end of epoch 4 | time: 3792.55s | valid loss 6.4513 | valid ppl 633.5318
| end of epoch 5 | time: 3804.15s | valid loss 6.3884 | valid ppl 594.8927
so like between the 4th epoch and the 5th epoch, the loss decreased by ~0.975% (= (6.4513-6.3884)/6.4513 * 100)? would it be okay to stop training the network at this point?
Thank you,
AI: You should keep training.
In many scenarios a ~1% decrease in validation loss is a big deal in itself. However, looking at the trend, it looks like your validation loss is set to decrease by more than ~1% if you let it train for, say, 20 more epochs. The decreases will get smaller and smaller, but they will accumulate.
Generally, you should continue training if your validation loss is decreasing. |
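One common way to make this decision automatic is a patience-based early-stopping rule; here is a rough sketch, where train_one_epoch, evaluate, save_checkpoint and max_epochs are placeholders for your existing training code:
best_loss, patience, stale = float("inf"), 5, 0
for epoch in range(max_epochs):
    train_one_epoch(model)            # your existing training step
    val_loss = evaluate(model)        # your existing validation step
    if val_loss < best_loss - 1e-4:   # "meaningful" improvement threshold
        best_loss, stale = val_loss, 0
        save_checkpoint(model)        # remember the best model so far
    else:
        stale += 1
        if stale >= patience:
            break                     # validation loss has plateaued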
H: Discrimination vs Calibration - Machine Learning Models
I came across a new term called Calibration while reading about prediction models.
Can you please help me understand how different it is from Discrimination.
We build ML models to discriminate two/more classes from one another
But what does calibration mean, and what does it mean to say that "the model has good discriminative power but is poorly calibrated"?
I thought we usually only look for separation only between 2 classes.
Can help me with this with a simple example please?
AI: Discrimination is the separation of the classes while calibration gives us scores based on risk of the population.
For example, there are 100 people that we’d like to predict a disease for, and we know that only 3 out of 100 people have this disease. We get their probabilities from our model. Due to good discriminative power, our model predicts probabilities between 0-0.05 for 70 people and 0.95-1 for 30 people. This is good discrimination between classes. We now know that 30 people are at high risk considering only discrimination. But we also know that only 3 out of 100 people get the condition, which is 3% prevalence. We use the 3% prevalence to calibrate our scores, which will give the actual risk based on the population of 100. That means 0.95 x 0.03 = 0.0285 is their actual risk of the disease.
This is a very crude approach; there are more advanced techniques such as kernel methods, Platt scaling, etc.
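As a hedged example of Platt scaling in practice (X_train, y_train and X_test are assumed to exist):
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

clf = LinearSVC()  # discriminates well but outputs no calibrated probabilities
calibrated = CalibratedClassifierCV(clf, method="sigmoid", cv=5)  # "sigmoid" = Platt scaling
calibrated.fit(X_train, y_train)
risk = calibrated.predict_proba(X_test)[:, 1]  # calibrated risk estimates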
H: When can you reorder log operations?
For example, you can reorder a softmax + nl (negative likelihood)
to log_softmax + nll (negative log-likelihood)
Essentially changing log(softmax(x)) to softmax(log(x))
However, what are the rules to reordering logging of things?
AI: In general, you cannot reorder like that. The example you give is a very special case that only works because softmax is based on the exponential function, which is the inverse function of the natural logarithm. |
H: Change date in linear regression model
I'm trying to run a linear regression model with year as the x variable and temperature as the y variable, but I keep getting errors. I managed to run a regression model using the code below and also to predict for a future date, 2040. What I'm struggling with is changing the data so it runs from a specific date, 1950, instead of 1880, which is the original start of the data. How do you change the date range in a linear regression model?
sns.regplot('Year','No_Smoothing',data=df)
x=df.Year
y=df.No_Smoothing
np.array(x).reshape((-1,1))
from sklearn.linear_model import LinearRegression
reg =LinearRegression()
reg.fit(df[['Year']],df.No_Smoothing)
reg.predict([[2040]])
AI: Slice your DataFrame accordingly, and then apply your previous routine (you may want to put this in a function) to this part of the data:
df2 = df.loc[df.Year >= 1950] |
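Then simply rerun your existing routine on the sliced frame, for example:
reg = LinearRegression()
reg.fit(df2[['Year']], df2.No_Smoothing)
reg.predict([[2040]])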
H: ImageDataGenerator - trained with model.fit instead of model.fit_generator
I am a beginner in using the ImageDataGenerator from Keras and I accidentally used model.fit instead of model.fit_generator.
def gen_Image_data():
gen = ImageDataGenerator(
width_shift_range=0.1,
horizontal_flip=True)
return gen
train_gen = gen_Image_data()
test_gen = ImageDataGenerator()
train_samples = train_gen.flow(X,y, batch_size=64)
test_samples = test_gen.flow(X_val, y_val, batch_size=64)
history = model.fit(train_samples, steps_per_epoch = np.ceil(len(X)/64),
validation_data=(test_samples),
validation_steps=np.ceil(len(X_val)/64),
epochs=300, verbose=1, callbacks=[es])
Many thanks for every hint
AI: The data augmentations are defined when you instantiate your data generators. An example is as such:
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
Other augmentation techniques can also be used by setting the correct parameters. Refer to this link: https://keras.io/preprocessing/image/ |
H: Multiclassification problem
I was wondering what happens when an image not in the training set is provided to the model in a multiclassification problem? Does it just classify something which is close to this image?
AI: As the model is not trained to recognize an image from this new specific class, the only thing it will do is give a probability (or similarity) measure for each of the classes the model has been trained on. Hence, in a classification problem, the class with the highest probability for this test image will be the classification output.
H: How to interpret Keras predict output?
I am new in Keras and would want to apply a neural network on this dataset:
https://www.drivendata.org/competitions/57/nepal-earthquake/
I have preprocessed the dataset, transforming categorical variables into numerical ones with the pd.get_dummies pandas method. Also, the target (that is 1, 2 or 3 - depending on the damage grade) is transformed into three columns indicating the probability of being one of those values.
The NN I wrote is simple, but I don't understand what the values that the predict method returns mean.
# first neural network with keras tutorial
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
import tensorflow as tf
import keras.backend as K
# define the keras model
model = Sequential()
# 12 8 3 Accuracy: 57.08
# 25 12 3 Accuracy: 56.89
model.add(Dense(25, input_dim=68, activation='relu'))
model.add(Dense(12, activation='relu'))
model.add(Dense(3, activation='softmax'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[get_f1])
# fit the keras model on the dataset
model.fit(X, y, epochs=5, batch_size=10)
predicted = model.predict(test)
This is the output for the first individual:
array([0.09522187, 0.57914054, 0.32563758], dtype=float32)
Any idea?
PD: If you request some additional information or code, please let me know :)
Thanks a lot!!!
AI: The output of softmax is a probability distribution: it gives the probability of each class being correct. So, you have to find the array index of the maximum value in the array.
predicted_class = np.argmax(predicted) |
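For a whole batch of predictions you can take the argmax along the class axis; if you need the original 1/2/3 damage grades back, add 1 (assuming your dummy columns are ordered grade 1, 2, 3):
import numpy as np
predicted_classes = np.argmax(predicted, axis=-1) + 1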
H: Difference between Ridge and Linear Regression
From what I have understood, the Ridge Regression is just having the loss function for an optimization problem with the addition of the regularization term (L2 Norm in the case of Ridge). However I am not sure if the loss function can be described by a non-linear function or it needs to be linear. In this case, if the loss functions needs to be linear, then from what I understand the Ridge regression, is simply performing Linear regression with the addition of the L2-Norm for regularization. Please correct me if I am wrong.
AI: Introduction to Statistical Learning (page 261) gives some instructive details: the linear regression loss function is simply augmented by a penalty term in an additive way.
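Concretely, ridge regression minimizes
$$\sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2,$$
which is exactly the ordinary (linear) least-squares loss plus the L2 penalty; setting $\lambda = 0$ recovers plain linear regression.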
H: What could be the reason of having a lower RMSE than MAE?
I used some machine learning algorithms in my dataset and I found that my RMSE goes low than the MAE. What are the most common reasons for that type of typical scenario. Since from my understanding the RMSE is normally higher than the MAE. But if I am wrong is it actually possible to have a lower RMSE and higher MAE? (for example: RMSE: 26, and MAE : 36)
AI: Yes, you are correct that in general $RMSE(x) \geq MAE(x)$ holds (see this answer for a good explanation of the different error measures and this paper for an interesting comparison of the two measures).
Therefore, your case must stem from other sources:
Randomness in models: the models you have applied are non-deterministic, e.g. your random forest randomly sub-samples features. This could be solved either by fixing all random parameters or by measuring RMSE and MAE on the same trained model:
model.fit(X_train)
y_pred = model.predict(X_test)
rmse = RMSE(y_test, y_pred)
mae = MAE(y_test, y_pred)
instead of
model.fit(X_train)
y_pred = model.predict(X_test)
rmse = RMSE(y_test, y_pred)
model.fit(X_train)
y_pred = model.predict(X_test)
mae = MAE(y_test, y_pred)
This answer shows a way to fix random seeds.
Randomness in data splits: Data splits are another source of randomness. So you either make the split deterministic or measure both errors on the same split:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.7)
y_pred = model.predict(X_test)
rmse = RMSE(y_test, y_pred)
mae = MAE(y_test, y_pred)
instead of
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.7)
y_pred = model.predict(X_test)
rmse = RMSE(y_test, y_pred)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.7)
y_pred = model.predict(X_test)
mae = MAE(y_test, y_pred) |
H: LogisticRegression NotImplementedError on fit function
I'm a newbie in data science and machine learning. I tried to implement the code from here, but I got an error when trying to call the fit function.
Here is the code:
classification_data = dataset.drop([10], axis=1).values
classification_label = dataset[10].values
class LogisticRegression:
def __init__(self, lr=0.01, num_iter=100000):
self.lr = lr
self.num_iter = num_iter
self.weights = None
self.bias = None
def fit(self, X, y):
'''Build a logistic regression classifier from the training set (X, y)'''
# YOUR CODE HERE
n_samples, n_features = X.shape
# init parameters
self.weights = np.zeros(n_features)
self.bias = 0
# gradient descent
for _ in range(self.num_iter):
# approximate y with linear combination of weights and x, plus bias
linear_model = np.dot(X, self.weights) + self.bias
# apply sigmoid function
y_predicted = self.predict_proba(linear_model)
# compute gradients
dw = (1 / n_samples) * np.dot(X.T, (y_predicted - y))
db = (1 / n_samples) * np.sum(y_predicted - y)
# update parameters
self.weights -= self.lr * dw
self.bias -= self.lr * db
raise NotImplementedError()
def predict_proba(self, X):
'''Predict class probabilities of the input samples X'''
'''hint: you can put or call your sigmoid function here to predict probablity of input sample X'''
# YOUR CODE HERE
return 1 / (1 + np.exp(-X))
raise NotImplementedError()
def predict(self, X, threshold=0.5): # default threshold adalah 0.5
'''Predict class value for X'''
'''hint: you can use predict_proba function to classify based on given threshold'''
# YOUR CODE HERE
linear_model = np.dot(X, self.weights) + self.bias
y_predicted = self.predict_proba(linear_model)
y_predicted_cls = [1 if i > threshold else 0 for i in y_predicted]
return np.array(y_predicted_cls)
raise NotImplementedError()
Then I tried to run
model = LogisticRegression(lr=0.1, num_iter=300000)
%time model.fit(classification_data, classification_label)
but I got this error:
31 self.weights -= self.lr * dw
32 self.bias -= self.lr * db
---> 33 raise NotImplementedError()
34
35 def predict_proba(self, X):
NotImplementedError
Apparently, it raised the NotImplementedError() in the fit function.
AI: The direct cause of the exception is that the raise NotImplementedError() statements left over from the exercise template are still in your methods; in fit() that line is reached after the gradient-descent loop finishes, so remove those raise statements (in predict_proba and predict they sit after a return and are never reached). Beyond that, there are two issues in the code. First, you are using a y variable which can take on values of either 1 or 2, which will not work well since the sigmoid function can only output values between 0 and 1. Second, you are using a threshold of 0.5 for the y variable which has values 1 or 2, which will cause everything to be classified as 2.
H: How to compute denominator in Naive Bayes?
Suppose we have class C_k and input feature vector x in dataset
How to calculate probability p(x)?
AI: in the Examples section of the Wikipedia article there is a nice example. The calculation of $p(\mathbf{x})$ can be done via
$$p(\mathbf{x}) = \sum_k p(C_k) \ p(\mathbf{x} \mid C_k)$$
Note that using the conditional independence assumption of the Naive Bayes one can write
$$ p(\mathbf{x} \mid C_k) = \Pi_{i} \, p(x_i \mid C_k) $$ |
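A tiny numeric sketch of that total-probability sum, with made-up numbers for two classes and two conditionally independent binary features:
p_c = [0.6, 0.4]                    # p(C_1), p(C_2)
lik = [0.7 * 0.2, 0.1 * 0.5]        # prod_i p(x_i | C_k) for each class
p_x = sum(p * l for p, l in zip(p_c, lik))           # 0.6*0.14 + 0.4*0.05 = 0.104
posterior = [p * l / p_x for p, l in zip(p_c, lik)]  # Bayes' rule, sums to 1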
H: How do I test a difference between two proportions representing fatality rate for Covid 19 in Philippines and World (except Philippines)?
I'm trying to analyse whether the fatality rate in my country (a third-world country) varies significantly from the world's fatality rate.
So I'd basically have two samples, labeled (Philippines) and (World excluding the Philippines); then I can compute the fatality rate for the two groups.
Does McNemar's test apply here for me to check if the fatality rate in the Philippines is higher, or do you have any suggestions? Thanks
AI: It is not a case of paired nominal data, hence McNemar's test cannot be applied to check whether there is a higher fatality rate in the Philippines. The fatality rate is given for the Philippines and the world (excluding the Philippines). As defined, it is expressed as a proportion. Therefore, a two-proportion z-test (or t-test) would be appropriate, given that you meet other conditions such as sample size.
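As a hedged sketch of such a test in Python (the counts below are placeholders, not real COVID-19 figures):
from statsmodels.stats.proportion import proportions_ztest

deaths = [150, 30000]      # Philippines, world excluding the Philippines (hypothetical)
cases = [3000, 900000]
stat, pvalue = proportions_ztest(count=deaths, nobs=cases, alternative="larger")
print(stat, pvalue)        # "larger": tests whether the Philippine rate is higher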
H: Processing train and test data
I have an X numpy array as my features and a y numpy array as my target. I split both of them into train and test data. From many Q&As I have read, they only say to preprocess both train and test separately. I assume I only do this to my feature (X) train and test data and not to the target (y). Do we also preprocess the target?
AI: Not necessarily, but it depends on what your target (y) is and which algorithm/methodology you are trying to use.
It also depends on your data quality.
Few instances that come to my mind:
If your target value is categorical and multilabel in nature, it needs to be one-hot encoded; also think about adding an extra category to account for unknown classes.
If your target is a continuous variable, some transformations could work better depending on the data distribution and quality; log transforms are common (if no negative values are present).
Normalization/MinMax scaling etc. are employed when features and targets are on very different scales.
https://machinelearningmastery.com/how-to-transform-target-variables-for-regression-with-scikit-learn/ |
H: Skip Connections in Residual Modules
I am a beginner in CNN theory and would like to understand the usage of residual modules better.
As far as I understand residual modules can be skipped, only the activation function must be computed with the given input.
My question is: How does the network know if it should skip the residual module or not?
AI: Residual blocks contain weights as you can see in this overview of some different variants:
Source: this blog post
Since weights are learned parameters, the neural net can learn to use or not use the non-skip/non-identity paths; i.e., by optimizing with gradient descent the network can learn to skip these blocks (or not).
To phrase it differently: the network "knows" by following the opposite direction of the gradient.
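A minimal Keras sketch of a residual block makes this concrete: the skip path is simply the identity added to the block's output, so whether the block contributes anything is governed by its learned weights, not by a runtime decision.
from keras.layers import Activation, Add, Dense, Input
from keras.models import Model

x_in = Input(shape=(64,))
h = Dense(64, activation="relu")(x_in)
h = Dense(64)(h)                             # residual branch F(x)
out = Activation("relu")(Add()([h, x_in]))   # y = relu(F(x) + x)
model = Model(x_in, out)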