H: How to do imbalanced classification in deep learning (tensorflow, RNN)?
I am trying to do binary classification of News Articles (Sports/Non-Sports) using recurrent neural net in tensorflow. The training data is highly skewed [Sports:Non-Sports::1:9].
I am using cross-entropy as my cost function, which treats both classes equally.
What are the ways by which user can penalise one class? Or is there any other cost function suitable for this purpose?
AI: This has already been answered both on Stack Overflow and Cross Validated.
The suggestion in both cases was to add class weights to the loss function, by multiplying logits:
loss(x, class) = weights[class] * (-x[class] + log(\sum_j exp(x[j])))
For example, in tensorflow you could do:
ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([ratio, 1.0 - ratio])
logits = ... # shape [batch_size, 2]
weighted_logits = tf.multiply(logits, class_weight)  # shape [batch_size, 2]; tf.mul in pre-1.0 TF
xent = tf.nn.softmax_cross_entropy_with_logits(
    logits=weighted_logits, labels=labels, name="xent_raw")
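An alternative sketch (my own illustration, not from the linked answers): instead of scaling the logits, you can scale each example's cross-entropy by the weight of its class, assuming one-hot labels and the same logits as above:
class_weights = tf.constant([9.0, 1.0])  # hypothetical weights, rare class first, for the 1:9 data
per_example_weight = tf.gather(class_weights, tf.argmax(labels, axis=1))
xent = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(per_example_weight * xent) |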
H: How to extract most occurring words based on month & what tool to use?
Hi guys I'm very new to data science,
I have intermediate background on programming and have used Pentaho Data Integration tool once for DB migration & data cleansing.
Let's say I have this kind of data:
item_details, timestamp
Wooden chairs, 01-07-2017
Plastic chairs, 02-07-2017
Stainless table, 11-07-2017
Decorated window, 12-07-2017
and so on
I want to know based on monthly time frame what are the top trending items in that month.
Let's say in January the top 3 item is:
1. Table
2. Chairs
3. Window
In February :
1. Door
2. Chair
3. Cupboard
.. And so on
How can I achieve this and using what kind of tool? (preferably free or open source tools, can be GUI based or script library, having a visualization or dashboard is a plus)
Thanks for the help. Sorry for noob questions
AI: If you are new to data science and data munging, this could be kind of a tricky task, but a good one to get your feet wet. Many programming languages have the capability to do this (R, Python, Matlab, etc.). I use R primarily, so I'll give you a brief heuristic for how I'd approach the task in R. Perhaps looking into these steps will get you started.
1. Install R.
2. Install some packages that will help you along ('tm' for text mining, 'dplyr' for cleaning/organizing your data, and perhaps also 'lubridate' for working with dates/times).
3. Read in your data from your source, be it a text file, spreadsheet, or some database (if a database, you'll have to conquer connecting R to said database too).
4. You want to do a word frequency analysis for each month. How you accomplish this will depend on how the data is organized, which I do not know, but it would involve first rounding all your dates to month (using lubridate's 'floor_date()' function is one way), then parsing the text for each month into a corpus that can be analyzed (using package tm).
5. Finally, for each month I would make a table counting words, sorting by frequency. That would give you, for each month, the top 'trending' words. To discount words like 'the' and 'a', I might also use some of the tools in the 'tm' package to clean things up.
Note that in #5 I said 'words', not 'terms'. If you want to account for terms consisting of more than one word, you'll have to 'tokenize' them, but that's beyond the scope of this very brief intro.
As with many data science tasks, there are many ways to attack this; the above is but one of many possibilities.
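If you later prefer Python over R, a minimal pandas sketch of the same idea (my own illustration, assuming the data sits in a CSV with the two columns shown in the question):
import pandas as pd
df = pd.read_csv("items.csv", parse_dates=["timestamp"], dayfirst=True)
df["month"] = df["timestamp"].dt.to_period("M")
words = df.assign(word=df["item_details"].str.lower().str.split()).explode("word")
words = words[~words["word"].isin({"the", "a", "of"})]         # drop a few stop words
top3 = words.groupby("month")["word"].value_counts().groupby(level=0).head(3)
print(top3)                                                    # top 3 words per month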
Hope that helps. |
H: Installing Orange Package with older Python Version?
I am trying to install an Add-on package from GitHub that contains prototype widgets for Orange Data Mining. I am trying to install it from the GitHub page found here.
I am using the following Terminal code to install this:
git clone http://github.com/biolab/orange3-prototypes.git
Everything then appears to install correctly and the download shows 100%. Then, however, it throws an error and says:
Orange requires Python >= 3.4
I am using Mac OS. Clearly, it is suggesting that I need to use a different version of python to install, however, I already updated my pip install. Any insight into how I can fix this?
AI: The error message shows exactly what you need to do:
Orange requires Python>=3.4
You have to specify a Python>=3.4 version (with orange3 itself installed) while installing orange3-prototypes.
I'm not a Mac user; however, on Windows the Orange3 installer will automatically install a Python 3.4 if there's no compatible Python available. |
H: Feature reduction convenience
In the field of machine learning, I'm wondering about the interest of applying feature selection techniques.
I mean, I often read articles or lectures speaking about how to reduce the number of feature (dimensionality reduction, PCA), how to select the best features (feature selection etc).
I'm not sure of the main purpose of this:
Does feature reduction techniques always improve accuracy of the learned model?
Or is it just a computational cost purpose?
I would like to understand when it is necessary to reduce the number of features and when it is not, in order to improve interpretability or accuracy.
Thanks!
AI: Feature Selection (FS) methods are focused on specializing the data as much as possible to find accurate models for your problem. Some of the main issues that drive the need for FS are:
Curse of dimensionality: Most algorithms suffer to grasp relevant chacteristics of data for a specific prediction task, when the number of dimensions (features) of the data is high, and the number of examples is not sufficiently big. Check here some more detailed explanation
Correlation between variables: Typically, the presence of highly correlated pairs of variables can cause ML algorithms to pay too much attention to a particular effect that is "over-represented". For this reason many FS methods address the reduction of this correlation. Reducing the number of correlated variables often increases the model's predictive power.
Latent features: Although specific variables might be highly expressive for your problem, a lot of power can be gained by finding "latent features", such as linear and non-linear combinations of the original variables. Here there are hundreds of approaches, from PCA to neural networks. Independently of the approach (and its statistical assumptions), the idea is to create new features that condense the information of a bigger set of features into a smaller one. Hopefully the new set of features is more representative and, being smaller, can be more easily learnt.
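As a small scikit-learn sketch of both flavours (my own illustration on synthetic data):
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)
X_selected = SelectKBest(f_classif, k=10).fit_transform(X, y)  # keep 10 of the original features
X_latent = PCA(n_components=10).fit_transform(X)               # build 10 new latent features
print(X_selected.shape, X_latent.shape)                        # (200, 10) (200, 10)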
Feature selection does not necessarily improve the predictive quality of the model. Reducing or transforming the features might lead to a loss of information and thus a less accurate model. It is an open and complex field of research. However, in many cases it becomes quite useful. It will depend on how good and how distinct your original features are at describing the target variable.
If you look into bioinformatics, you'll see people dealing with thousands or even millions of features while having only hundreds of examples. Here feature selection becomes increasingly relevant.
PS: Most commonly I've seen the term "feature extraction" used for the creation of compound features, as in most of the examples I've mentioned, while "feature selection" refers to actually removing specific features from the dataset (sometimes without even taking into account their relation with the target variable).
H: Is there a way to measure correlation between two similar datasets?
Let's say that I have two similar datasets with the same size of elements, for example 3D points :
Dataset A : { (1,2,3), (2,3,4), (4,2,1) }
Dataset B : { (2,1,3), (2,4,6), (8,2,3) }
And the question is that is there a way to measure the correlation/similarity/Distance between these two datasets ?
Any help will be appreciated.
AI: I see a lot of people post this similar question on StackExchange, and the truth is that there is no methodology to compare if data set A looks like set B. You can compare summary statistics, such as means, deviations, min/max, but there's no magical formula to say that data set A looks like B, especially if they are varying data sets by rows and columns.
I work at one of the largest credit score/fraud analytics companies in the US. Our models utilize a large number of variables. When my team gets a request for a report, we have to look at each individual variable to check that the variables are populated as they should be with respect to the context of the client. This is very time consuming, but necessary. Some tasks do not have magical formulas to get around inspecting and digging deep into the data. However, any good data analyst should understand this already.
Given your situation, I believe you should identify key statistics of interest to your data/problems. You may also want to look at what distributions look like graphically, as well as how variables relate to others. If for data set A, Temp and Ozone are positively correlated, and if B is generated through the same source (or similar stochastic process), then B's Temp and Ozone should also exhibit a similar relationship.
I will illustrate my point with this example:
data("airquality")
head(airquality)
dim(airquality)
set.seed(123)
indices <- sample(x = 1:153, size = 70, replace = FALSE) ## randomly select 70 obs
A = airquality[indices,]
B = airquality[-indices,]
summary(A$Temp) ## compare quantiles
summary(B$Temp)
plot(A)
plot(B)
plot(density(A$Temp), main = "Density of Temperature")
plot(density(B$Temp), main = "Density of Temperature")
plot(x = A$Temp, y = A$Ozone, type = "p", main = "Ozone ~ Temp",
xlim = c(50, 100), ylim = c(0, 180))
lines(lowess(x = A$Temp, y = A$Ozone), col = "blue")
plot(x = B$Temp, y = B$Ozone, type = "p", main = "Ozone ~ Temp",
xlim = c(50, 100), ylim = c(0, 180))
lines(lowess(x = B$Temp, y = B$Ozone), col = "blue")
cor(x = A$Temp, y = A$Ozone, method = "spearman", use = "complete.obs") ## [1] 0.8285805
cor(x = B$Temp, y = B$Ozone, method = "spearman", use = "complete.obs") ## [1] 0.6924934 |
H: Convolutional neural network fast fourier transform
I've read that some convolution implementations use FFT to calculate the output feature/activation maps and I'm wondering how they're related. I'm familiar with applying CNNs, and (mildly) familiar with the use of FFT in signal processing, but I'm not sure how the two work together.
When I think of convolutions, I imagine taking a kernel, flipping it, multiplying (and adding) the elements of the kernel with the overlapping input, shifting the kernel and repeating the process. How does a FFT fit into this process?
AI: By transforming both your signal and kernel tensors into frequency space, a convolution becomes a single element-wise multiplication, with no shifting or repeating.
So you can convert your data and kernel into frequencies using FFT, multiply them once then convert back with an inverse FFT. There are some fiddly details about aligning your data first, and correcting for gain caused by the conversion.
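As a minimal NumPy sketch of the idea (my own illustration; the zero-padding turns the FFT's circular convolution into a linear one):
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.signal import convolve2d
image = np.random.rand(64, 64)
kernel = np.random.rand(5, 5)
shape = (image.shape[0] + kernel.shape[0] - 1, image.shape[1] + kernel.shape[1] - 1)
fft_conv = np.real(ifft2(fft2(image, shape) * fft2(kernel, shape)))  # one element-wise multiply in frequency space
direct = convolve2d(image, kernel, mode="full")                      # classic slide-and-sum convolution
print(np.allclose(fft_conv, direct))                                 # True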
If you have a good FFT library, this can be very efficient, but there is overhead cost for running the Fourier transform and its inverse, so your convolution needs to be relatively large before it is worth looking at FFT.
I explored this a while ago in a Ruby gem called convolver. You can see some of the code for an FFT-based convolution here, and the project includes unit tests that prove that direct convolution gets the same numerical results as FFT-based convolution. There is also code that attempts to estimate when it would be more efficient to calculate convolutions directly by repeated multiplications or to use the FFT-based solution (that is rough and ready guesswork though, and implementation-dependent).
H: Multiple testing of unbalanced data by R
I have a general question of unbalanced data. I'm performing a T test on Group A and Group B. Group A has 20 data while Group B has 500. I set unequal variance (Welch) for the adjustment and the P-value is 0.01. Can I conclude that there is significance?
I have another idea that I'm not sure about. I randomly select about 20 samples from Group B multiple times and test against Group A. Should I do some adjustment to the result? I am not sure if this works or not. If it works, what kind of adjustment should I use for multiple testing?
Thank you!
I appreciate any replies!
AI: It does not have much to do with whether you use the plain t-test or Welch's t-test. As long as your Group A samples estimate the mean and standard deviation of population A well, and your Group B samples estimate the mean and standard deviation of population B well, then there shouldn't be any problem. If the SD of population A is assumed to be different from that of population B, then you had better go with Welch's t-test. Your question may be better asked at Cross Validated, which focuses more on statistics, I think.
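If you want to run the test in code, here is a quick sketch (my own illustration; Welch's version is the default of t.test in R, and in Python it is scipy's ttest_ind with equal_var=False):
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=20)     # 20 samples, like your Group A
group_b = rng.normal(loc=0.5, scale=2.0, size=500)    # 500 samples, like your Group B
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(t_stat, p_value) |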
H: XGBRegressor vs. xgboost.train huge speed difference?
If I train my model using the following code:
import xgboost as xg
params = {'max_depth':3,
'min_child_weight':10,
'learning_rate':0.3,
'subsample':0.5,
'colsample_bytree':0.6,
'obj':'reg:linear',
'n_estimators':1000,
'eta':0.3}
features = df[feature_columns]
target = df[target_columns]
dmatrix = xg.DMatrix(features.values,
target.values,
feature_names=features.columns.values)
clf = xg.train(params, dmatrix)
it finishes in about 1 minute.
If I train my model using the Sci-Kit learn method:
import xgboost as xg
max_depth = 3
min_child_weight = 10
subsample = 0.5
colsample_bytree = 0.6
objective = 'reg:linear'
num_estimators = 1000
learning_rate = 0.3
features = df[feature_columns]
target = df[target_columns]
clf = xg.XGBRegressor(max_depth=max_depth,
min_child_weight=min_child_weight,
subsample=subsample,
colsample_bytree=colsample_bytree,
objective=objective,
n_estimators=num_estimators,
learning_rate=learning_rate)
clf.fit(features, target)
it takes over 30 minutes.
I would think the underlying code is nearly exactly the same (i.e. XGBRegressor calls xg.train) - what's going on here?
AI: xgboost.train will ignore the parameter n_estimators, while xgboost.XGBRegressor accepts it. In xgboost.train, the number of boosting iterations (i.e. n_estimators) is controlled by num_boost_round (default: 10).
In your case, the first code will do 10 iterations (by default), but the second one will do 1000 iterations. There won't be any big difference if you change clf = xg.train(params, dmatrix) into clf = xg.train(params, dmatrix, 1000).
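In other words (a sketch, reusing the params and dmatrix defined in your first snippet):
clf = xg.train(params, dmatrix, num_boost_round=1000)  # now matches n_estimators=1000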
References
http://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train
http://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBRegressor |
H: How relevant is Self Organizing Maps in today's science?
Self-Organizing Maps is a pretty smart yet fast & simple method to cluster data. But Self-Organizing maps were developed in 1990 and a lot of robust and powerful clustering method using dimensionality reduction methods have been developed since then. What are some of its applications in today's world of science and engineering?
AI: One important and well known (in their respective research domains) usage of SOMs today is the usage of SOMs for data visualization and clustering.
The U-Matrix uses a SOM with a lot of neurons in order to achieve emergent behavior. Based upon the SOM, a so called U-height is calculated for each neuron.
For a neuron $n$, a weight $w_n$ of the neuron $n$, as well as a set $NN(n)$ of immediate neighbours of $n$ the U-height is calculated as:
$$\sum_{m \in NN(n)} d(w_n, w_m)$$
$d$ is the same distance as used in the SOM training. This U-height is used to visualize the SOM and can capture quite interesting inherent structure. It is visualized using a color scheme similar to topographical maps.
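As a minimal sketch of that formula (my own illustration, assuming a rectangular grid, 4-neighbourhoods and Euclidean distance):
import numpy as np
def u_heights(weights):
    # weights: trained SOM codebook of shape (rows, cols, dim)
    rows, cols, _ = weights.shape
    heights = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    heights[r, c] += np.linalg.norm(weights[r, c] - weights[nr, nc])
    return heights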
The U-Matrix uses distances for visualization. It can be combined with the so called P-Matrix, which uses densities, to use the density of data points as well, and allows the combination of densities and distances. Based on the U-Matrix one can do classification or clustering as well. An R package is distributed by the research group of Prof. Ultsch.
I wouldn't say that it's important in Machine Learning per se, but in the fields of Data Mining and KDD it is, at least academically. You can clearly see this if you look at the reception of this or similar work on Semantic Scholar or ResearchGate.
Disclaimer: I studied with Prof. Ultsch, the head behind U-Matrix and U*-Matrix.
NB: Sorry for the missing links to the papers, I haven't got enough reputation yet to post more than two links.
Papers:
U*-matrix: a Tool to Visualize Clusters in High Dimensional Data, Ultsch 2004
Self-Organizing Neural Networks for Visualisation and Classification, Ultsch 1993
I even spotted the U-Matrix in the wild here. Also see the sompy package. |
H: Why use convolutional NNs for a visual inspection task over classic CV template matching?
I had an interesting discussion come up based on a project we were working on: why use a CNN visual inspection system over a template matching algorithm?
Background: I had shown a demo of a simple CNN vision system (webcam + laptop) that detected if a particular type of object was "broken"/defective or not - in this case, a PCB circuit board. My CNN model was shown examples of the proper and broken circuit boards (about 100 images of each) on a static background. Our model used the first few conv/maxpool layers of pre-trained VGG16 (on imagenet), and then we added a few more trainable convs/pools, with a few denses, leading to a dim-3 one hot encoded vectored output for classification: (is_empty, has_good_product, has_defective_product).
The model trained pretty easily and reached 99% validation acc no problems; we also trained with various data augmentation since we know our dataset was small. In practice, it worked about 9 times out of 10, but a few random translations/rotations of the same circuit board would occasionally put it in the opposite class. Perhaps more aggressive data augmentation would have helped. Anyways, for a prototype concept project we were happy.
Now we were presenting to another engineer and his colleague, and he brought up the argument that NNs are overkill for this, should just use template matching, why would one want to do CNNs?
We didn't have a great answer for why our approach could be better in certain applications (e.g. other parts to inspect). Some points we brought up:
1) More robust to invariances (through e.g. data augmentation)
2) Can do online learning to improve the system (e.g. human can tell the software which examples it got wrong)
3) No need to set thresholds like in classical computer vision algorithms
What do you guys think, are there more advantages for a CNN system for this type of inspection task? In what cases would it be better than template matching?
A few more random ideas for when deep NNs could be the tech for the job: for systems that require 3D depth sensing as part of the input, or any type of object that can be deformed/stretched/squished but still be "good" and not defective (e.g. a stuffed animal, wires, etc). Curious to hear your thoughts :)
AI: The engineer in question that proposed traditional CV methods for your application simply did so out of habit. Using template matching is extremely outdated and has been shown to perform very poorly. However, I do think a CNN is overkill depending on your dataset's size.
How does template matching work?
Template matching slides a window across your image that will provide a percent match with the template. If the percent match is above a certain predefined threshold then it is assumed to be a match. For example if you have an image of a dog and you want to determine if there is a dog in the image, you would slide a dog template around the entire image area and see if there is a sufficiently large percent match. This will likely result in very poor performance because it requires the template to overlap the image identically. What is the likelihood of that in practice? Not very high.
The only time template matching is a sufficient technique is if you know exactly what you are looking for and you are confident that it will appear almost identically in every example of a given class.
Why use machine learning instead?
Machine learning techniques are not rigid. Unlike what stmax said, CNNs are able to generalize a dataset very well. That is why they are so powerful. Using the dog example, the CNN does not need to see a picture of every dog in existence to understand what constitutes a dog. You can show it maybe 1000 images from a Google search, and then the algorithm will be able to detect that your dog is, in fact, a dog. The fact that machine learning algorithms generalize very well is the reason that they replaced all the ancient CV techniques. Now the problem is the amount of data that you need to train a CNN. They are extremely data intensive.
I do not think that 100 data points is sufficient to train a robust CNN. Due to the deep complexity of the model in order to limit the bias you need to increase your number of examples. I usually suggest 100 examples for every feature for deep models and 10 examples for every feature for shallow models. It really all depends on your feature-space.
What I suggest.
What you are truly doing is anomaly detection. You have a lot of examples of PCBs that are in good shape, and you want to detect those which are broken. Thus I would attempt some anomaly detection methods instead. They are much simpler to implement and you can get good results using shallow models, especially on skewed datasets (where one class is over-represented).
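A rough sketch of that idea (my own illustration with scikit-learn, on stand-in data): fit a one-class model on flattened images of good boards only, then flag outliers as possible defects.
import numpy as np
from sklearn.ensemble import IsolationForest
good_boards = np.random.rand(100, 64 * 64)   # stand-in for flattened images of good PCBs
new_boards = np.random.rand(10, 64 * 64)     # boards to inspect
detector = IsolationForest(contamination=0.1, random_state=0).fit(good_boards)
flags = detector.predict(new_boards)         # +1 = looks normal, -1 = possible defect
print(flags) |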
H: Forecasting non-negative sparse time-series data
I have a time-series dataset (daily frequency) representing the sales of a product to a customer over time. The sales is represented as the following:
$$[0, 0, 0, 0, 24, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 17, 0, 0, 0, 0, 9, 0, ...]$$
in which each number represents the sales of the product in a day.
The problem is that time-series forecasting methods (ARMA, Holt-Winters) work well for "continuous" and "smooth" data, but are not producing good results in this case.
I want to make a forecast of that series, with attention to 2 points: (1) assuring non-negative values and (2) sparse/ non-continuous data. Anyone knows how to approach this problem? What methods/ technique?
Thanks!
AI: I have two ideas here, maybe they will be helpful.
Idea 1: Model time between events
You might think of your data as being generated by two processes: the first is a distribution over time intervals, and the second is a distribution over purchase amounts. So to model your data you could create one distribution (gaussian?) over the nonzero values in your dataset, and another over the lengths of sequences of zeros (poisson?).
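A crude sketch of this idea on your own numbers (my illustration, not a full model):
import numpy as np
sales = np.array([0, 0, 0, 0, 24, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 17, 0, 0, 0, 0, 9, 0])
nonzero_idx = np.flatnonzero(sales)
amounts = sales[nonzero_idx]                       # purchase sizes
gaps = np.diff(nonzero_idx)                        # days between purchases
rng = np.random.default_rng(0)
next_gap = rng.geometric(1.0 / gaps.mean())        # days until the next sale
next_amount = max(0.0, rng.normal(amounts.mean(), amounts.std()))  # its size, kept non-negative
print(next_gap, next_amount)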
Idea 2: Model customer inventory
Even though the sales events in your dataset are sparse, you could spend a little time to come up with a model of why the customer is making purchases when they do. In one possible model, the customer has an inventory that shrinks over time, and they make purchases when their inventory crosses some minimum threshold. You could use your sales data to fit the slope (for linear shrinkage) or rate (for exponential shrinkage) as well as the threshold.
This could get arbitrarily complex, since the customer under this model might have different thresholds or shrink rates at different times ... but for starters it could be a useful approach to get a sense of things. |
H: Order SparseVectors by the closest distance to given SparseVector
I have a Spark dataset containing a column of SparseVector types. Additionally, I have another SparseVector $X$ which is not a part of the dataset. I want to order my dataset according to the closest distance (or similarity) relative to $X$.
Can anyone help me with how to implement this?
AI: It appears this problem can be solved with BucketedRandomProjectionLSH. After fit and transform, the dataset returned by approxNearestNeighbors contains a distCol column, which (as per LSH.scala) is:
@param distCol Output column for storing the distance between each result row and the key.
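A rough PySpark sketch of that approach (my own illustration, assuming a DataFrame with a "features" column of SparseVectors and an external key vector X):
from pyspark.sql import SparkSession
from pyspark.ml.feature import BucketedRandomProjectionLSH
from pyspark.ml.linalg import Vectors
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(0, Vectors.sparse(3, [0, 2], [1.0, 3.0])),
     (1, Vectors.sparse(3, [1], [2.0]))],
    ["id", "features"])
key = Vectors.sparse(3, [0, 1], [1.0, 1.0])        # the external vector X
brp = BucketedRandomProjectionLSH(inputCol="features", outputCol="hashes",
                                  bucketLength=2.0, numHashTables=3)
model = brp.fit(df)
# Ask for as many neighbours as there are rows to get the whole dataset
# ordered by (approximate) Euclidean distance to X, stored in distCol.
neighbours = model.approxNearestNeighbors(df, key, df.count())
neighbours.orderBy("distCol").show() |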
H: Gradient Boosting Tree: "the more variable the better"?
From the tutorial of the XGBoost, I think when each tree grows, all the variables are scanned to be selected to split nodes, and the one with the maximum gain split will be chosen. So my question is that what if I add some noise variables into the data set, would these noise variables influence the selection of variables (for each tree growing)? My logic is that because these noise variables do NOT give maximum gain split at all, then they would never be selected thus they do not influence the tree growth.
If the answer is yes, then is it true that "the more variables the better for XGBoost"? Let's not consider the training time.
Also, if the answer is yes, then is it true that "we do not need to filter out non-important variables from the model".
Thank you!
AI: My logic is that because these noise variables do NOT give maximum gain split at all, then they would never be selected thus they do not influence the tree growth.
This is only perfectly correct for very large, near infinite data sets, where the number of samples in your training set gives good coverage of all variations. In practice, with enough dimensions you end up with a lot of sampling noise, because your coverage of possible examples is weaker the more dimensions your data has.
Noise on weak variables that ends up correlating by chance with the target variable can limit the effectiveness of boosting algorithms, and this can more easily happen on deeper splits in the decision tree, where the data being assessed has already been grouped into a small subset.
The more variables you add, the more likely it is that you will get weakly correlated variables that just happen to look good to the split selection algorithm for some specific combination, which then creates trees that learn this noise instead of the intended signal, and ultimately generalise badly.
In practice, I have found XGBoost quite robust to noise on a small scale. However, I have also found that it will sometimes select poor-quality engineered variables, in preference to better-correlated data, for similar reasons. So it is not an algorithm where "the more variables the better" holds, and you do need to care about possible low-quality features.
H: Cross-validation of a cross-validated stacking ensemble?
let me begin by saying that I understand how to build a stacked ensemble by using cross-validation to generate out-of-fold predictions for the base learners to generate meta-features. My question is about the methodology when cross-validating the entire stacked ensemble to check generalization error.
To eliminate any confusion, I'm going to call the cross-validation to generate out of fold predictions for the base learner CV A, while I'll call the cross-validation of the entire stacking ensemble CV B.
When I do CV B, is it valid to do CV A just once and use those out of fold predictions for the entire CV B process? Or do I have to keep doing CV A and generate new out of fold predictions during each fold of CV B?
Normally, I'd think that there'd be some data leakage in the first method, but one could also reason out that since the out of fold predictions are taken, well, out of fold, that issue is taken care of. The main reason I'm asking this is because doing the second method would surely remove any data leakage but there would be an order of magnitude of additional computational complexity involved.
AI: I posted this same question on Reddit and someone was kind enough to answer
By https://www.reddit.com/user/patrickSwayzeNU
This
When I do CV B, is it valid to do CV A just once and use those out of fold predictions for the entire CV B process?
Normally, I'd think that there'd be some data leakage in the first method, but one could also reason out that since the out of fold predictions are taken, well, out of fold, that issue is taken care of.
Yes, your data set created by CV A is now "good as new". |
H: Different available packages in TensorFlow virtualenv?
I have installed TensorFlow on Linux (Anaconda) by following the documentation, which states that one should create and activate a virtual environment tensorflow. So far, so good (albeit it is not entirely clear why this virtual environment is necessary when I want to incorporate TF into my existing env).
But when I activate the tensorflow environment I observe that several packages are unavailable in the new environment while available outside the environment at the same time:
$source activate tensorflow
$python
>>>import h5py
... No module named h5py
$source deactivate tensorflow
$python
>>>import h5py
>>>
No problem in this case.
I guessed that I should install the missing package in the tensorflow environment as well, but when I try to, I get informed that the package in question has already been installed and nothing happens:
$source activate tensorflow
$pip install h5py
Requirement already satisfied ....
The same inconsistency occurs with several other packages. What is the problem here?
AI: I can't offer any suggestions as to why your virtual environment can't find the packages (I'm not familiar with anaconda). The only thing that comes to mind is a PATH problem.
Perhaps try installing the package with the following command:
conda install -n yourenvname [package]
or, in your case:
conda install -n tensorflow h5py
To install additional packages only to your virtual environment, enter
the following command where yourenvname is the name of your
environment, and [package] is the name of the package you wish to
install. Failure to specify “-n yourenvname” will install the package
to the root Python installation.
source: https://uoa-eresearch.github.io/eresearch-cookbook/recipe/2014/11/20/conda/ |
H: Installing Prototype Widgets in Orange running on Mac
I am having trouble downloading some add-ons for the Orange data mining software that are not available through the normal add-ons menu. To do this, I am attempting to install a particular set of add-ons from the GitHub page here.
I am very novice when it comes to using Terminal and terminal commands. Could anyone help provide some insight on how to install this package?
Some people have suggested that they should appear in the list of the prototypes, but they do not. Does anyone have Orange running on Mac so that they can try this to see if it works?
Specifics for the exact code that I need to implement in the terminal to get this to work would also be helpful.
AI: The easiest way to install Orange's addons is through the application itself. Open Orange, in the menu click Options -> Addons. In the popup window mark Orange3-Prototypes and click OK. Note, that by doing so you will get the latest version that is published on PyPI.
If you would want to install the bleeding edge version directly from GitHub — assuming that you installed Orange by downloading a bundle from its website and not for example by using Anaconda — do the following. Open a terminal and paste this command:
Doing so you will trigger the pip that came with the bundle and instruct it to install the addon directly from Github.
Further, to install opencv, which is required by some widgets (e.g. webcamecapture) run
Note, however, that this is an unofficial version of OpenCV but it seems easier to install. |
H: Has anyone tried to use the hierarchy of ImageNet?
The classes of ImagNet have a hierarchy. Did anybody try to build a hierarchy of classifiers to use this fact?
Searching for "multistage classification" leads to different results.
AI: Yes, multiple papers have used this. I've heard of multiple ways to exploit this hierarchical structure. The paper Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition uses multiple levels by predicting the coarser distribution first; I think it then passes this as features to the lower-level classification. YOLO9000 actually uses the ImageNet hierarchy for the problem of object detection. Hierarchical classification is the name of the problem you are describing. |
H: Using Generative Adversarial Networks for a generation of image layer
Has anybody seen any application that would use a GAN that takes an input image and outputs an image of the same size, which could be used as a layer for the first image? The layer would contain, e.g., points of interest in the input image. Would that even be a good use of GANs?
I am looking for articles and examples of applications that used something similar.
AI: In terms of generating an image "layer", that is just the same as generating an output image that can be overlaid on the input using standard graphics software. If you want pixel-level accuracy in the output then the output will need to be the same size as the input, otherwise it could be smaller, provided it is the same aspect ratio, and in which case it would need to be scaled up in order to be used as an overlay. In any case, as the output of a GAN can be an image (and often is), then this part is easy.
The "G" in GAN stands for Generative. The purpose of a generative network is to create samples from a population, where there are typically many possibilities. Those samples can be conditioned on some additional data, and that additional data could be an image, although many examples will simpler conditioning, such as a category that the training output is representative of.
One possibility for using a GAN is where your population contains a range of traits, and you can calculate vectors that control a trait. So you can take an input image, reconstruct it in the GAN, then modify it by adding/subtracting the trait-related vector. An interesting example of this is Face Aging With Conditional Generative Adversarial Networks, and similar examples exist for adding/removing glasses etc. For this to work for you, you would literally need images that had your points of interest in them and ones without them, and then you would be able to control addition/removal of points of interest. The network would not detect these points in the input; instead it would add them into the output. From reading your question this does not seem to be what you want.
A similar paper uses a GAN to remove rain from photos, based on training many images with and without rain then learning the "rain vector", encoding new images with rain in them into the GAN's internal representation and subtracting this "rain vector".
GANs conditioned on input images (as opposed to categories or internal embeddings) are also possible - this example of image completion might be closer to your goal. If your points of interest are variable with many options feasible, then it could work for you.
However, if your points of interest are supposed to always be the same pixels in each image, then your goal might be better defined by strict ground truth and become more like semantic segmentation, which can be attempted with variations on CNNs, such as described in this paper by Microsoft. These are much easier to set up and train than GANs, so if you can reasonably frame your problem as pixel classification from the original image, this is probably the way to go. |
H: Problem designing CNN network
I seem to have a problem modelling my CNN network.
I want to extract feature vectors from images of different sizes.
What is consistent across the images is the y-axis and the color dimension, but the x-axis is not constant.
The length of the feature vector depends on the length of the x-axis. I already know how long the feature vectors should be,
but not the ratio between the length of the x-axis and the length of the feature vector; I guess one does exist.
Is it possible to train a CNN such that it can alter the feature vector length depending on the length of the x-axis of the input image?
AI: It is not really possible to alter input feature array size per example on normal CNNs. Instead this is fixed when the model is built for the first time, before you start training.
Depending on your goal, it might be possible to work around that using some kind of pipeline that worked with image patches (taking multiple slightly randomised patches to augment the training data can improve results and doing the same with prediction inputs can also drive up classifier accuracy). Or a more complex variant using RNN/CNN hybrid to consume an image as a sequence of parts, which might also be used for multi-object recognition.
However, these solutions are complex, and state-of-the-art results in image classification can be achieved by simpler techniques such as taking a centre crop and/or padding. Provided your training data is also treated the same way, and aspect ratios are not extreme this can work adequately. |
H: Retrain final layer of Inception model
I'm trying to retrain Inception model final layer for a binary classification.
My training image set contain 2000 images in class 1 and more than 6000 images in class 2.
Will this huge difference in number of images of each class in training set affect my classification?
AI: The short answer is yes.
Since your classes are imbalanced, the model will be more likely to "learn" class 2 during training, especially when it comes to mini-batch updates (even though each mini-batch is unlikely to contain only class 2 samples).
The common solution is to use a weighted loss function or to feed class-weighted samples to each mini-batch.
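For example, with Keras (a sketch on stand-in bottleneck features; the same idea carries over to a TensorFlow loss via per-example weights):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
x_train = np.random.rand(8000, 2048)               # stand-in for Inception bottleneck features
y_train = np.r_[np.zeros(2000), np.ones(6000)]     # 2000 of class 1, 6000 of class 2
model = Sequential([Dense(1, activation="sigmoid", input_shape=(2048,))])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=1, batch_size=64,
          class_weight={0: 3.0, 1: 1.0})           # weight the rare class 3x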
In my practice, I always use weighted samples if I have sufficient data, even when my classes are not that skewed. Weights always help; at least they never ruin the model. |
H: Which graph will be appropriate for the visualization task?
I have some terminal charging values for the US and China that come in a pandas DataFrame like the following:
value country
0 550.0 USA
1 820.0 CHINA
2 835.0 CHINA
3 600.0 USA
4 775.0 CHINA
5 785.0 USA
6 790.0 USA
This is a sample of the data; I have 5K+ entries in total. The data has been cleared of outliers and needs to be visualized. What kind of visualization can I use to plot my data meaningfully?
AI: Maybe you can try something like this?
df.hist(by='country', bins=50)
If you want to plot these histograms in the same figure, checkout this.
References
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html |
H: How to test accuracy of an unsupervised clustering model output?
I am trying to test how well my unsupervised K-Means clustering properly clusters my data. I have an unsupervised K-Means clustering model output (as shown in the first photo below) and then I clustered my data using the actual classifications.
The photo below are the actual classifications. I am trying to test, in Python, how well my K-Means classification (above) did against the actual classification.
For my K-Means code, I am using a simple model, as follows:
kmeans = KMeans(n_clusters=4, random_state=0).fit(myData)
labels = kmeans.labels_
What would be the best way for me to compare how well my unsupervised KMeans clustering model did against the actual classifications?
AI: Since you have the actual labels, you can compare them with the obtained labels and evaluate performance. Typically purity and nmi (normalized mutual information) are used. Read this (Evaluation of Clustering) document for detailed explanation.
If you don't have actual labels, then you can use modularity or the silhouette score to measure clustering performance.
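Here is a quick scikit-learn sketch of these metrics (my own illustration on synthetic blobs):
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score, silhouette_score
X, true_labels = make_blobs(n_samples=300, centers=4, random_state=0)
pred_labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)
print(normalized_mutual_info_score(true_labels, pred_labels))  # needs true labels
print(adjusted_rand_score(true_labels, pred_labels))           # needs true labels
print(silhouette_score(X, pred_labels))                        # no labels needed |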
H: Pandas v. SFrame in learning data science
I'm taking a machine learning course that introduces the related assignments as such that a student may use either Pandas or SFrame in solving them. As a beginner, it's hard for me to assess which approach would be more beneficial for me in the long run. Hence the question; should I prefer Pandas or SFrame for learning purposes?
Initially I might've preferred Pandas to make sure I learn the "proper foundations" first, so to speak. However, after reading about this online, SFrame appears to be very popular after having been made open source. Can it be considered a Pandas replacement from a (future) data scientist's point of view?
Thanks a bunch! Protips always welcome in comments ;)
AI: SFrame is not used much in industry, so I'd stick to pandas or Spark DataFrames. But they're all rather similar, and you should not spend much time thinking about it: it is easy to pick these things up, and employers understand this. Concentrate on the algorithms; that's the real "foundation", not the tools. |
H: Designing CNN that does one column convolution across the x-axis
I am currently working on designing a number of CNNs for extracting features from images.
The images are spectrograms, and each has a shape of (276, x, 3).
X here is the number of columns, which is incidentally also the length of the feature vector that should be created.
So the CNN somehow has to use a kernel that is 276 rows high and 1 column wide; is it possible in Keras to define such a 2D kernel and perform what is effectively a 1D convolution? The most important factors are the 1D convolution and the shape of the 2D kernel, as it is being used to alter the importance of some of the entries in the spectrogram.
AI: Designing CNN that does one column convolution across the x-axis
So the cnn somehow has to use a kernel that has to be 276 rows and 1 column wide, but is it possible in keras to make a 2d kernel and perform 1d convolution.
I'm not sure I understand your question correctly, but if you have input images that are 276 high, X=500 wide and have 3 color channels, then the following performs column-wise convolutions across the X-axis:
rows, cols, chans = 276, 500, 3
...
model.add(Convolution2D(64, rows, 1, input_shape=(rows, cols, chans)))
...
The output shape of this layer will be (1, 500, 64).
I inserted "64" as the number of filters.. you can set that to whatever you need / whatever works best.
Note: shapes are specified in tensorflow mode (i.e. the order is (rows, cols, channels)). Depending on your configuration you might have to change those shapes to theano mode: (channels, rows, cols).
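If you are on Keras 2 rather than Keras 1, the same column-wise convolution is declared with a kernel-size tuple (a small sketch, not part of the original answer):
from keras.models import Sequential
from keras.layers import Conv2D
rows, cols, chans = 276, 500, 3
model = Sequential()
model.add(Conv2D(64, (rows, 1), input_shape=(rows, cols, chans)))  # output shape (1, cols, 64)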
So the cnn somehow has to use a kernel that has to be 276 rows and 1 column wide,
It actually has to use a kernel that is 276 rows, 1 column and 3 channels wide since your input images have 3 channels.. it would be 276 rows and 1 column wide if you used grayscale images.. but that shouldn't matter. |
H: Data Scientist Consulting Interview Guide
Does anyone have any books or blogs that specifically shed light on questions to ask your organization (from a consulting POV) as a data scientist? I am a new data scientist with a background in consulting and predictive analytics.
There are several good reads such as "The Data Scientist's Field Guide" by Booz Allen, "The McKinsey Mind" from McKinsey, or maybe "Business Analytics for Managers" from SAS. I have come across interview guides to understand and execute strategy, but none that specifically structure questions for a preliminary assessment of analytic capability.
Any recommendations would be appreciated.
Thank you.
AI: This might be a resource that you would be interested in. It's a preliminary assessment for health centers, but a lot of the structure can be used in any industry. |
H: Which algorithm can i use for predicting length of stay in coming year based on historical claims data?
I have two years of historical health claims data for one thousand members. Based on these two years of data, I have to predict the length of stay in hospital in the 3rd year for all members. Here is a data sample.
Year MembID x1 x2 x3 x4 x5 x6 x7 LengthOfStay
2010 1 6 35 0 3 0 0 4 1
2010 1 8 35 0 5 0 0 3 0
2009 1 5 35 0 5 0 0 3 3
2009 1 3 35 0 8 2 0 8 0
2010 2 6 30 0 3 3 2 4 0
2010 2 8 30 0 5 0 0 3 0
2009 2 5 30 0 5 0 0 3 0
2010 2 5 30 1 5 0 2 2 0
2009 3 5 55 1 5 1 2 2 0
2010 3 10 55 1 5 0 2 2 0
2010 3 5 55 1 5 1 2 2 0
2009 3 10 55 1 5 0 0 2 0
2010 4 5 24 1 5 0 0 2 0
2009 4 3 24 1 8 0 0 2 0
2009 5 10 65 1 5 1 2 4 5
2009 5 5 65 1 5 0 2 3 0
2010 5 6 65 1 3 0 0 4 1
2010 5 4 65 1 5 0 0 4 0
2010 6 10 44 1 5 1 2 4 5
2011--- i expect------ 1
I did the classification with randomforest. How can I proceed further for prediction on 2011?
AI: If you want to do prediction using 2011 features, the answer is yes, you can do that.
However, as you don't want to use these features, the answer might be no.
Without using 2011 features, your dataset will have only 2 samples (2009 and 2010) under the assumption that every memberID is different. Prediction from two samples is neither reliable nor feasible. |
H: Selecting dataset splitting strategy
I found this very informative figure, on how to split the dataset depending on how much data (or how many observations to be more precise) you have.
My question is, since "less data" is very subjective, is there a statistical test you can perform or even a rule of thumb on which split to follow?
My current problem is a classification problem with 145 observations, 22 features, 2 labels (18-True, 127-False), but I'm interested on the general approach.
Thank you.
AI: As usual in these cases, there is no magic wand to determine which splitting method to use. It all depends on your specific data. Is your collected data redundant enough so that k-fold cross validation is not necessary? Does your data capture most of the input space?
Now, taking a look at your numbers, I'd say that the number of observations you have (145) is not likely to be large enough to capture all the potential variability in the input space, given the fact that you have a high number of features (22). This conclusion, of course, depends on the type of the features (are they binary? categorical? numerical?), and whether all these features are actually necessary to make a prediction (are there redundant/correlated features? features that do not give any information about the output variables?).
In your case, and not knowing more about the data, I'd go for a cross-validation splitting scheme. |
H: What is a batch in machine learning?
Karpathy's LSTM batch network operates with batches
def checkSequentialMatchesBatch():
""" check LSTM I/O forward/backward interactions """
n,b,d = (5, 3, 4) # sequence length, batch size, hidden size
input_size = 10
WLSTM = LSTM.init(input_size, d) # input size, hidden size
X = np.random.randn(n,b,input_size)
#...
def checkBatchGradient():
""" check that the batch gradient is correct """
# lets gradient check this beast
n,b,d = (5, 3, 4) # sequence length, batch size, hidden size
input_size = 10
WLSTM = LSTM.init(input_size, d) # input size, hidden size
X = np.random.randn(n,b,input_size)
#...
What is a batch used for? I'm only familiar with feeding one-hot word representation vectors and can't understand the LSTM learning process with batches. Please explain in terms of text processing.
Thanks in advance.
AI: A batch is a grouping of instances from your dataset, for example a batch of 100 text samples that are fed to your model together in one training step. In the LSTM code above, b is the batch size: the network processes b sequences in parallel, and the gradients computed over the batch are combined for a single weight update. |
H: How does one deploy a model, after building it in Python or Matlab?
I have been playing around with a lot of different machine learning models (clustering, neural nets, etc...), but I am sort of stuck on understanding what happens after you finish building the model in Python or Matlab.
For example, let's say that I trained a basic neural network model for a binary classification problem. How does one deploy that to colleagues, for example, so that they can load in the dataset to spit out a prediction? I have trained a model, but what happens to that model, now? How do I "save" that model that I just trained in Python?
I see a lot of tutorials on how to pre-process data, train the model, spit out the statistics and predictions; but, what comes next?
Obviously Facebook, Google, and anyone else heavily involved in machine learning / AI applications are creating a framework to use their models. But, is there software that allows you to pull in data and then apply it to your Python code? Is this what Weka, TensorFlow, and these other packages do?
AI: What comes after training a model is persisting (in layman's terms, saving) the model, if you want to keep the parameters for later deployment.
All companies that do serious machine learning spend many hours (sometimes even days or weeks) training the models. Obviously, when you need rapid predictions in a deployment mode, you use a pre-trained model to generate said prediction.
In Python, for example, there are a number of ways to persist models. The most commonly used libraries are pickle and cPickle (whose I/O is much faster, I believe, as the core is in C).
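A minimal sketch with scikit-learn and joblib (my own illustration; any fitted estimator works the same way):
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")     # persist the trained parameters to disk
loaded = joblib.load("model.joblib")   # later, e.g. inside the deployed service
print(loaded.predict(X[:3]))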
Here is a link to model persistence in sklearn:
http://scikit-learn.org/stable/modules/model_persistence.html |
H: How to deal with string labels in multi-class classification with keras?
I am newbie on machine learning and keras and now working a multi-class image classification problem using keras. The input is tagged image. After some pre-processing, the training data is represented in Python list as:
[["dog", "path/to/dog/imageX.jpg"],["cat", "path/to/cat/imageX.jpg"],
["bird", "path/to/cat/imageX.jpg"]]
the "dog", "cat", and "bird" are the class labels. I think one-hot encoding should be used for this problem but I am not very clear on how to deal it with these string labels. I've tried sklearn's LabelEncoder() in this way:
encoder = LabelEncoder()
trafomed_label = encoder.fit_transform(["dog", "cat", "bird"])
print(trafomed_label)
And the output is [2 1 0], which is different that my expectation output of somthing like [[1,0,0],[0,1,0],[0,0,1]]. It can be done with some coding, but I'd like to know if there is some "standard" or "traditional" way to deal with it?
AI: Sklearn's LabelEncoder module finds all classes and assigns each a numeric id starting from 0. This means that whatever your class representations are in the original data set, you now have a simple consistent way to represent each. It doesn't do one-hot encoding, although as you correctly identify, it is pretty close, and you can use those ids to quickly generate one-hot-encodings in other code.
If you want one-hot encoding, you can use LabelBinarizer instead. This works very similarly:
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
transfomed_label = encoder.fit_transform(["dog", "cat", "bird"])
print(transfomed_label)
Output:
[[0 0 1]
[0 1 0]
[1 0 0]] |
H: Alternative methods for improved clustering separation?
I have the following labeled cluster, which is what an ideal clustering algorithm would generate:
Now, I have applied a basic K-Means clustering algorithm to the data, and the outcome is as follows:
I recognize that this is a tough problem to properly cluster because some of the classes are so similar.
But I was wondering if there are any alternative algorithms that may help me improve the separability of the clusters, and improve how well my unsupervised clustering algorithm would work on new data?
AI: Your data doesn't appear to be easily separable. In general, one could apply some kind of transformation that pulls apart the distributions for each class. Having labels available makes it possible, in principle, to learn such a transformation (as @Emre mentioned in the comments). But, there are a couple of issues with your particular data set. 1) You don't appear to have many data points (unless you've only plotted a small subset). This would limit you to very simple transformations (otherwise you'd probably get severe overfitting). 2) The points are simply overlapping. A transformation can only work based on its inputs and, if the coordinates are indistinguishable, there's nothing that can be done. In the best case, you might be able to pull the lower left turquoise cluster and the yellow points further from the main mass, but the rest of the points are pretty much intermingled. Any transformation that could manage to separate them in the training data would be very complicated, and probably just reflect sample noise (i.e. it would probably be completely overfit, and not generalize to new data).
The ideal thing would be to find/measure additional (relevant) variables. In this case, the classes may become separable in the higher dimensional space. For example, imagine adding a third axis, where the red points become 'lifted' above the blue points. |
H: Fine-tuning a model from an existing checkpoint with TensorFlow-Slim
I'm trying to retrain the final layer of a pretrained model with a new image dataset using TensorFlow-Slim.
Lets say I want to fine-tuning inception-v3 on flowers dataset. Inception_v3 was trained on ImageNet with 1000 class labels, but the flowers dataset only have 5 classes. Since the dataset is quite small we will only train the new layers.
An example at official tf github page shows how to do it:
$ DATASET_DIR=/tmp/flowers
$ TRAIN_DIR=/tmp/flowers-models/inception_v3
$ CHECKPOINT_PATH=/tmp/my_checkpoints/inception_v3.ckpt
$ python train_image_classifier.py \
--train_dir=${TRAIN_DIR} \
--dataset_dir=${DATASET_DIR} \
--dataset_name=flowers \
--dataset_split_name=train \
--model_name=inception_v3 \
--checkpoint_path=${CHECKPOINT_PATH} \
--checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits/Logits \
--trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits/Logits
I couldn't fully understand all the parameters in the above code:
train_dir = ?
dataset_dir = new dataset directory location
dataset_name = name of the dataset (but why ? )
dataset_split_name = ?
model_name = name of the model on which we want to train
checkpoint_path = path to the model checkpoint
checkpoint_exclude_scopes = ?
trainable_scopes = ?
Help me figure out what these parameters mean, and correct me if I'm wrong about any of them.
Note: I'm aware that we can retrain inception_v3 with a method mentioned at official tensorflow website but I want to do the same with tensorflow-slim.
AI: You can get all information you need from the source code.
train_dir: Directory where checkpoints and event logs are written to. (default: /tmp/tfmodel/)
dataset_dir: The directory where the dataset files are stored.
dataset_name: The name of the dataset to load. (default: imagenet)
dataset_split_name: The name of the train/test split. (default: train)
model_name: The name of the architecture to train. (default: inception_v3)
checkpoint_path: The path to a checkpoint from which to fine-tune.
checkpoint_exclude_scopes: Comma-separated list of scopes of variables to exclude when restoring from a checkpoint.
trainable_scopes: Comma-separated list of scopes to filter the set of variables to train. By default, None would train all the variables. |
H: Extracting Features Using TensorFlow CNN
I'm trying to extract features of a set of images. I'm using a CNN from this site.
Can anyone please tell me how to do feature extraction from images using a CNN? I have looked in various places, but nowhere is the feature extraction part clearly explained.
AI: Actually, after you've completed your training, the weights of these convolution layers are the learned filters; applying them to an image gives the feature maps you've extracted.
You may try to visualize these weights as well as activations to get the full feature maps. Here's a guide you may refer to.
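For instance, with a pretrained Keras model you can cut the network at an intermediate convolution layer and use its activations as features (a sketch, assuming the VGG16 weights are available; it is not the specific CNN from the linked site):
import numpy as np
from keras.applications.vgg16 import VGG16
from keras.models import Model
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
feature_extractor = Model(inputs=base.input, outputs=base.get_layer("block3_conv3").output)
images = np.random.rand(2, 224, 224, 3)         # stand-in for your preprocessed images
features = feature_extractor.predict(images)    # shape (2, 56, 56, 256)
print(features.shape)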
References
https://github.com/fchollet/keras/issues/12
http://cs231n.github.io/understanding-cnn/ |
H: Kernel trick explanation
In support vector machines, I understand it would be computationally prohibitive to calculate a basis function at every point in the data set. However, it is possible to find this optimal solution due to the so-called kernel trick.
Other answers to this question use advanced math and statistics jargon to answer the question (I assume) properly, making them inaccessible to a general data science audience. Could someone post a "big-picture" description (i.e., not necessarily thorough or technically complete) illustrating what the kernel trick is and how it works?
AI: The kernel trick is based on a few concepts: you have a dataset, e.g. two classes of 2D data, represented on a Cartesian plane. It is not linearly separable, so for example an SVM could not find a line that separates the two classes. Now, what you can do is project this data into a higher-dimensional space, for example 3D, where it could be divided linearly by a plane.
Now, a basic concept in ML is the dot product. You often take dot products of the features of a data sample with some weights w, the parameters of your model. Instead of explicitly doing this projection of the data into 3D and then evaluating the dot product, you can find a kernel function that simplifies this job by computing the dot product in the projected space for you, without the need to actually compute the projections and then the dot product.
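As a tiny numeric sketch of this (my own example): for 2D points, the degree-2 polynomial kernel (x.y)^2 returns exactly the dot product you would get after explicitly projecting into 3D, without ever building the projection:
import numpy as np
x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
def phi(v):                       # explicit degree-2 feature map: (a, b) -> (a^2, b^2, sqrt(2)*a*b)
    a, b = v
    return np.array([a * a, b * b, np.sqrt(2) * a * b])
explicit = phi(x) @ phi(y)        # dot product in the projected 3D space
kernel = (x @ y) ** 2             # polynomial kernel evaluated in the original 2D space
print(explicit, kernel)           # both are 121.0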
This allows you to find a complex non-linear boundary that is able to separate the classes in the dataset. This is a very intuitive explanation. |
H: Wrong output multiple linear regression statsmodels
I recently moved to python for data analysis and apparently I am stuck on the basics. I am trying to regress the parameters of the following expression: z=20+x+3*y+noise, and I get the right intercept but the x and y parameters are clearly wrong. What am I doing missing? Code below:
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
# generate true values, and noise around them
np.random.seed(5)
x = np.arange(1, 101)
y = np.arange(1, 101)
z = 20 + x + 3* y + np.random.normal(0, 20, 100)
data = pd.DataFrame({'x':x, 'y':y, 'z': z})
lm = smf.ols(formula='z ~ x + y', data=data).fit()
# print the coefficients
lm.summary()
returns
where the x and y parameters are both 1.5, instead of being 1 and 3. What's wrong?
AI: I think that you are seeing what you are seeing because the model sees the relationship of each set of points in your data frame, which are governed by the equation:
z = 20 + x +3*y + noise
But all the model sees is the resulting Z, NOT that equation which you know created z.
So it attempts to build a model which considers how Z was accomplished without knowing there was noise, while KNOWING that both x and y were in this equation because you explicitly told it they are in the model.
Based on this data (at least this is what I got without a seed when I ran your data, so it is close, with probably different Z due to different noise):
x y z
1 1 32.824550
2 2 21.382597
3 3 80.615424
4 4 30.958157
5 5 42.192197
6 6 75.649622
7 7 29.815352
8 8 40.167267
9 9 59.752065
10 10 53.402601
Because x and y are the same for each point, and because your formula has x + 3*y + noise, z is also equal to 4*x + noise or 4*y + noise or 2*x + 2*y + noise for each row. There are many ways to get the same change to Z with x & y in exact proportion plus some noise.
So the regressors are being assigned equal influence plus an equal share of the noise. It is the most parsimonious evaluation of x and y.
It is not what you expect knowing the equation, but it is the answer you should get. If you use a linear model which shrinks variables of zero influence, you might even get a 0 or NA value for y.
To test that lm is working, simply reverse the order of y, changing the relationship between x & y within that equation, and you get a completely different result; I think it is the one you were looking to find.
y = np.arange(100,0,-1)
coef std err t P>|t| [0.025 0.975]
Intercept 0.0439 0.000 119.009 0.000 0.043 0.045
x 1.2184 0.038 32.471 0.000 1.144 1.293
y 3.2130 0.038 85.629 0.000 3.139 3.288
You built your model right, but in this case the data set was not created in a way to test for what you hoped to see...but it was not wrong. |
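A quick way to see the collinearity numerically (a sketch only): when x and y are identical, the design matrix is rank-deficient, so the individual coefficients are not identifiable.
import numpy as np

# Design matrix with an intercept column plus x and y; x and y are identical here.
X = np.column_stack([np.ones(100), np.arange(1, 101), np.arange(1, 101)])
print(np.linalg.matrix_rank(X))  # prints 2 instead of 3 -> perfect collinearity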
H: Which Amazon EC2 instance for Deep Learning tasks?
I have discovered that Amazon has a dedicated Deep Learning AMI with TensorFlow, Keras etc. preinstalled (not to mention other prebuilt custom AMIs). I tried this out with a typical job on several GPU-based instances to see the performance. There are five such instances in the Ireland region (maybe more exist in other regions, I don't know; this variance is a bit confusing):
g2.2xlarge
g2.8xlarge
p2.xlarge
p2.8xlarge
p2.16xlarge
My first question is, what is the difference between the two groups (g-something and p-something)? Both groups mention "GPUs along with CPU", but give no further clue about Deep Learning usability.
My second problem is that I have been running my job on g2.2 and g2.8 as well, and while the task took quite a long time to run, the workload of the GPUs was relatively low (20-40%). Why didn't the framework increase the workload if there is spare processor capacity? Is it necessary/possible to set any parameters to optimize the work?
AI: I think the differences and use cases are well explained here. As for the workload, there are features which help you optimise it. According to the official documentation, you can try:
Enabling persistence mode
sudo nvidia-smi -pm 1
Disabling the autoboost feature
sudo nvidia-smi --auto-boost-default=0
Set all GPU clock speeds to their maximum frequency.
sudo nvidia-smi -ac 2505,875 |
H: make seaborn heatmap bigger
I create a corr() df out of an original df. The corr() df came out 70 x 70 and it is impossible to visualize the heatmap with sns.heatmap(df). If I try to display corr = df.corr(), the table doesn't fit the screen and I can't see all the correlations. Is there a way to either print the entire df regardless of its size or to control the size of the heatmap?
AI: I found out how to increase the size of my plot with the following code...
plt.subplots(figsize=(20,15))
sns.heatmap(corr) |
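An equivalent sketch that creates the figure explicitly and passes the axes in; the colormap and centering are optional choices, not requirements:
import matplotlib.pyplot as plt
import seaborn as sns

fig, ax = plt.subplots(figsize=(20, 15))          # control the figure size here
sns.heatmap(corr, ax=ax, cmap='coolwarm', center=0)
plt.show()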
H: CNN for classification giving extreme result probabilities
I'm having issues with my CNN, using Keras with Theano backend.
Basically, I need to classify 340x340 grayscale images into 6 categories. The problem is my CNN gives too "hard" probabilities: it rarely gives predictions with some uncertainty, and always tries to push for 90%+ on one class. The problem is that for my coursework, the penalty used is very harsh for complete misclassification, and uncertainty is much preferred (so having a prediction like [0.6, 0.3, 0.2, ...] is much better than having [0.9, 0.03, 0.02, ...]).
I'm unsure why this is happening. My dataset consists of 2400 images,
which are from different CCTV cameras, and the task is about recognising possible objects. Only 800 of the samples are actual data; the other 1600 have been generated through data augmentation.
Note that it is therefore extremely likely that some pictures are either identical, or extremely similar (e.g. the same scene, one second later).
model = Sequential()
#1
# Few filter to take big stuff out
# Also, first layer is not conv so that I can reuse that layer separately
model.add(Dropout(0.1, input_shape=(1,340,340)))
model.add(Convolution2D(64, 4, 4, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
#2
model.add(Dropout(0.1))
model.add(Convolution2D(128, 4, 4, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
#3
model.add(Dropout(0.1))
model.add(Convolution2D(256, 4, 4, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th"))
#4
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
#5
model.add(Dense(512))
model.add(Activation('relu'))
#6
model.add(Dense(6))
model.add(Activation('softmax'))
opt = SGD(lr=0.001)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
print "Training.."
filepath = "log/weights-improvement-{epoch:02d}---{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(X_t, y_t, validation_split=0.1, nb_epoch=500, batch_size=32, callbacks=callbacks_list)
How do you suggest I fix this?
Thank you in advance!
AI: There are a few different factors involved here. It is difficult to tell, without getting heavily involved, which could be the most important. I will put them in the order I think worth looking at first.
Your data is images from CCTV, so you likely have more than one image from each camera. From your results (reasonable training and CV scores, but bad test results), it looks like you are over-fitting. But your CV approach is not spotting this. So I think that the test set is likely to be from a different set of cameras to the training set. In order to properly measure CV therefore, you have to split train/cv by camera - you cannot just use the 0.1 split, because a random split will include images which are correlated with training data and will give you too high an estimate, allowing you to overfit without noticing.
It occurs to me it might just be your data augmentation causing a problem for you here. If you augment first then randomly split to train/CV, then your CV set will contain images very similar to training set, and will get too high a score. You can more easily check and fix this than split by camera, so give it a try.
800 original images is not much to work with. You need to do something about that. Here are some ideas:
Scale down the images. You probably don't need 340x340. Depending on the target object, maybe just 78x78 will do. You can assess this easily enough - scale down and check if you can still differentiate the classes easily by eye.
You don't have enough data to get best quality filters in the lower layers, which will limit the capabilities of the CNN. You might bootstrap from a pre-trained image model. Take a publicly available pre-trained CNN, such as VGG-19, use its weights in convolutional layers as a starting point, put different classifier layers on top and fine-tune your classifier starting from this. This might change the ideal image scale too - you want something that fits the pre-trained CNN.
Augmenting the data, which you have started. You could go a lot further. Take random patches from training examples, possibly flipped horizontally (if this maintains the object class). However, don't augment data used for cross validation, unless your model used for testing also includes augmentation - e.g. if it takes 8 random augmented variants of the test image and returns an average of predictions, then you can do similar for cross-validation.
A 0.1 train/CV split run once on this much data is not going to give you an accurate assessment of the model. You need to run k-fold cross-validation. This is annoying because NNs take a long time to train, but if you want some confidence that you have really found some good parameters, you will need to do this. Remember to split by camera if you can.
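For the split-by-camera idea, scikit-learn's GroupKFold is one option. A sketch, assuming a camera_ids array aligned with X_t/y_t (this array is not in your code; you would need to build it from your metadata):
from sklearn.model_selection import GroupKFold

# All images from the same camera stay in the same fold, so near-duplicate
# frames cannot leak between the training and validation splits.
gkf = GroupKFold(n_splits=5)
for train_idx, val_idx in gkf.split(X_t, y_t, groups=camera_ids):
    X_train, X_val = X_t[train_idx], X_t[val_idx]
    y_train, y_val = y_t[train_idx], y_t[val_idx]
    # build a fresh model here, fit on (X_train, y_train), evaluate on (X_val, y_val)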
The scoring appears to be strongly related to categorical cross-entropy, so you have the right loss function. You should optimise, using cross-validation, to find the model with the lowest loss, not the best accuracy. |
H: Application of Machine learning or Neural Networks for automatic Time table scheduling
I've been trying to come up with an intelligent solution to build a timetable scheduling application with the use of machine learning or neural networks. What would be the algorithm or approach to build such an application? I'm planning to take data from the Google Calendar API and feed it through the system. The system should propose the best time slots to conduct lectures, considering the data taken from lecturers and students.
I found some research documents where they have used some genetic algorithms but can this be done with the help of neural networks?
AI: Scheduling problems might be NP-Complete problems.
It is not clear what the specific details are.
You might get lucky and have specific constraints that lead to an easy subproblem or just an easy instance.
However, since many variations like On the Complexity of Scheduling University Courses (which might be your case), Job shop scheduling, Multiprocessor scheduling and Open-shop scheduling are NP-complete, you are probably in the same situation.
Usually, it is better to treat such problems like optimization problems and not like classification problems. If you'll try to treat the problem like a classification problem you might have severe problems in building a classified dataset that will represent well the time scheduling problem that you would like to solve.
There are general techniques for coping with optimization problems. I also found some work related to your case. I'm not familiar with this specific case, but it seems that Solving the Course Scheduling Problem Using Simulated Annealing and the work done here might help you.
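To give an idea of the optimization view, here is a generic simulated-annealing skeleton. It is a sketch only: random_timetable, neighbour and cost are placeholders you would implement for your specific constraints (e.g. cost = number of lecturer/student clashes):
import math
import random

def simulated_annealing(random_timetable, neighbour, cost,
                        t_start=10.0, t_end=0.01, alpha=0.95, steps=100):
    current = random_timetable()                      # start from any feasible timetable
    best = current
    t = t_start
    while t > t_end:
        for _ in range(steps):
            candidate = neighbour(current)            # small random change
            delta = cost(candidate) - cost(current)
            # always accept improvements; accept worse moves with probability exp(-delta/t)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha                                    # cool down
    return best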
H: Interpretation of an SVD for recommender systems
The idea is to motivate the SVD for use in a recommender system.
Consider a matrix $A\in \mathbb{R}^{f\times u}$ where $A_{ij}$ captures how user $j$ rates film $i$ (on a scale from 1-10; some entries may be missing).
Considering matrices $K=AA^T$, $L = A^TA$ what do $K_{ij}$ and $L_{ij}$ tell us?
What interpretations can you give for matrices $U$ and $V$ in the SVD $A=U\Sigma V^T$?
Riiight ... so
K = $AA^T$ = $U\Sigma^2U^T$, L = $A^TA = V\Sigma^2V^T$
With $K_{ij} = \langle A[i,:], A[j,:] \rangle$ being the dot product of all ratings for film $i$ with all ratings for film $j$.
Similarly, $L_{ij} = \langle A[:, i], A[:,j] \rangle$ being the dot product of all ratings from user $i$ with all ratings from user $j$.
That tells us ... what exactly? I expect it to amount to some kind of similarity measure, but beyond that, I have no idea.
As for $U$ and $V$ ... I know the first $r$ of them to be Eigenvectors of $K$ and $L$ for some $r$. I also know
$u_1,..,u_r$ is an orthonormal basis for the column space
$v_1,..v_r$ is an orthonormal basis for the row space
but that doesn't really tell me anything.
Yet I have no idea if the notion of "row space" and "column space" even makes sense here. What would the "span" of $A$'s columns / rows even mean?
I also have no idea what the nullspace of $A$ or $A^T$ would represent - if it represents anything at all - nor what
$u_{r+1},..,u_{m}$ being an orthonormal basis for $ker(A)$
or $v_{r+1},..,v_{n}$ being an orthonormal basis for $ker(A^T)$
implies - again, if anything at all.
AI: First of all, note that the dot product between two movies/users is by definition the correlation between them. So your intuition for treating it as a similarity measure is not wrong.
Now, applying SVD to $A$ is simply a re-representation of it in some other basis. Let's call it feature space. If we take it one step further and keep only the $k$ leading vectors of $U$ and $V$, we'll get two matrices: $U^{(k)} \in \mathbb{R}^{f,k}$ and $V^{(k)} \in \mathbb{R}^{k,u}$ (where $f$ and $u$ are the dimensions you supplied in your question), where $U^{(k)}\cdot V^{(k)} \approx A^{(k)}$ is a $k$-rank approximation of $A$ ** (note it has the same dimensions).
So you can think of $U^{(k)}$ as a matrix whose rows are movies, each movie described by some $k$ latent features. Similarly, $V^{(k)}$ will be a "list" (columns) of users and each user is described by some $k$ values (latent features).
Note that you can "split" the scaling matrix $\Sigma^{(k)}$ between $U^{(k)}$ and $V^{(k)}$: $U'^{(k)} = U^{(k)}\Sigma^{(k)0.5}$ and $V'^{(k)} = V^{(k)}\Sigma^{(k)0.5}$, to make up for the lack of scale of $U$ and $V$.
This factorization can be elaborated into more interpretable methods such as LDA.
** The variational approach to singular values tells us that this approximation minimizes the difference between $A$ and any other $k$-rank matrix in terms of the Frobenius norm and the operator norm (and possibly other norms I'm not aware of).
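A numpy sketch of the $k$-rank approximation and the scale-splitting trick. This assumes the missing entries of $A$ have already been filled in somehow (e.g. with per-movie means), and $k$ is a hyperparameter you choose:
import numpy as np

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

movie_factors = U_k * np.sqrt(s_k)            # shape (f, k): one row per movie
user_factors = (Vt_k.T * np.sqrt(s_k)).T      # shape (k, u): one column per user
A_k = movie_factors.dot(user_factors)         # k-rank approximation of A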
H: Adding more features in SVC leading to worse performance, even w/ regularization
I have a relatively small dataset of 30 samples with binary labels (16 positive and 14 negative). I also have five continuous features for each these samples. I'm trying to use the support-vector classifier (SVC) for this task. I tested the performance of different feature combinations and regularization strengths in the classification task using leave-one-out cross-validation.
One odd thing that I found is that if I took feature A and used it alone for classification, I might get, say 87% classification accuracy. If I use feature B in isolation, I might get 60% classification accuracy (i.e., same as majority classifier baseline). But then combining all the features, I would get only 63% classification accuracy. This is despite performing a search across a large range of regularization strengths.
In case it matters, I'm using the sklearn SVC implementation, and varying the regularization parameter C.
Is this sort of behavior typical with an SVC classifier? I'm not too familiar with support vector algorithms in general.
AI: 30 samples probably just isn't enough. The more features you want to use, the more samples you'll need - otherwise you'll get huge model instability and overfitting (especially with bad/useless features). With only 30 samples you'll probably get the "best" results with 1 or 2 carefully selected features. Get 100 or 200 samples, then try again with 5 features.
Also make sure you're standardizing your features - for example by removing the mean and scaling to unit variance. SVMs don't like features that are a lot larger than other features. |
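A minimal sketch of standardization done inside the cross-validation loop, so the scaler is fit only on the training folds (X and y stand for your 30 samples and their labels):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel='linear'))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(scores.mean())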
H: Run Apriori algorithm in python 2.7
I have a DataFrame in python by using pandas which has 3 columns and 80.000.000 rows.
The columns are: {event_id, device_id, category}.
each device has many events and each event can have more than one category.
I want to run the Apriori algorithm to find out which categories appear together.
My idea is to create a list of lists to save the categories which are in the same event for each device, like [('a'), ('a','b'), ('d'), ('s','a','b')], then give the list of lists as transactions to the algorithm. I need help creating the list of lists.
If you have a better idea, please tell me, because I am new to Python and this was the only way I found.
AI: df.groupby('device_id')['category'].apply(list).tolist()
There's your transactions LOL.
If you aren't limited to Python 2.7, I'd suggest Orange3-Associate which contains a frequent_itemsets() function based on FP-growth algorithm, which is orders of magnitude faster than Apriori. |
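If you want one transaction per (device, event) pair instead, i.e. categories that occur in the same event for each device as described in the question, a similar groupby works (sketch):
transactions = (df.groupby(['device_id', 'event_id'])['category']
                  .apply(list)
                  .tolist())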
H: Using TensorFlow with Intel GPU
Is there any way now to use TensorFlow with Intel GPUs? If yes, please point me in the right direction.
If not, please let me know which framework, if any, (Keras, Theano, etc) can I use for my Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller.
AI: At this moment, the answer is no. Tensorflow uses CUDA which means only NVIDIA GPUs are supported.
For OpenCL support, you can track the progress here.
BTW, Intel/AMD CPUs are supported.
The default version of Tensorflow doesn't work with Intel and AMD GPUs, but there are ways to get Tensorflow to work with Intel/AMD GPUs:
For Intel GPUs, follow this tutorial from Microsoft.
For AMD GPUs, use this tutorial. |
H: Why are variables of train and test data defined using the capital letter (in Python)?
I hope this question is suitable for this site...
In Python, usually the class name is defined using the capital letter as its first character, for example
class Vehicle:
...
However, in machine learning field, often times train and test data are defined as X and Y - not x and y. For example, I'm now reading this tutorial on Keras, but it uses the X and Y as its variables:
from sklearn import datasets
mnist = datasets.load_digits()
X = mnist.data
Y = mnist.target
Why are these defined with capital letters? Is there any convention (at least in Python) in the machine learning field that says it is better to use capital letters to define these variables?
Or maybe do people distinguish the upper vs lower case variables in machine learning?
In fact the same tutorial later distinguish these variables like the following:
from sklearn.cross_validation import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.7, random_state=0)
AI: The X (and sometimes Y) variables are matrices.
In some math notation, it is common practice to write vector variable names as lower case and matrix variable names as upper case. Often these are in bold or have other annotation, but that does not translate well to code. Either way, I believe that the practice has transferred from this notation.
You may also notice in code, when the target variable is a single column of values, it is written y, so you have X, y
Of course, this has no special semantic meaning in Python and you are free to ignore the convention. However, because it has become a convention, it may be worth maintaining if you share your code. |
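A quick illustration of the convention: X is a 2-D matrix of shape (n_samples, n_features), while y is a 1-D vector with one target per sample.
from sklearn import datasets

X, y = datasets.load_digits(return_X_y=True)
print(X.shape)  # (1797, 64) -> samples x features
print(y.shape)  # (1797,)    -> one label per sample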
H: HTML Words Remover?
Does anyone know if there is functionality similar to StopWordsRemover but intended to clean out HTML syntax? e.g. get the text without any html tags after transformation.
AI: I wrote a simple class, in case someone is interested:
import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.Param;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import java.util.UUID;
public class HTMLStripper extends Transformer {
private static final String HTMLStripper = "HTMLStripper";
private String inputColumn;
private String outputColumn;
public HTMLStripper (String inputColumn, String outputColumn) {
this.inputColumn = inputColumn;
this.outputColumn = outputColumn;
}
@Override
public String uid() {
// UUID.fromString() requires a valid UUID string and would throw at runtime,
// so the constant identifier is returned instead.
return HTMLStripper;
}
@Override
public StructType transformSchema(StructType schema) {
return schema.add(outputColumn, DataTypes.StringType, true);
}
@Override
public Dataset<Row> transform(Dataset<?> dataset) {
dataset.sqlContext().udf().register(HTMLStripper, (String str) -> str.replaceAll("<[^>]*>", ""),
DataTypes.StringType);
Column col = dataset.col(inputColumn);
col = functions.callUDF(HTMLStripper, col);
return dataset.withColumn(outputColumn, col);
}
@Override
public Transformer copy(ParamMap extra) {
return new HTMLStripper(inputColumn, outputColumn);
}
} |
H: GANs to augment training data
I have been reading about Generative Adversarial Networks (GANs) and was wondering if it would make sense to train a generator function only to use it for creating more training data.
In a scenario where I don't have enough training data to build a robust classifier, can I use this limited data to train a generator that'll produce samples good enough to improve the accuracy of my discriminator (classifier)?
AI: Yes and no depending on how you define "good enough samples".
You will likely end up with a chicken and egg problem: you want to use the GAN to generate training data, but the GAN doesn't have enough training data itself to generate convincing enough samples.
Other techniques exist for data synthesis of training images. For example: adding noise, flipping along an axis, changing luminosity, changing color, random cropping, random distortion.
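For comparison, a sketch of standard augmentation with Keras' ImageDataGenerator, assuming you already have X_train, y_train and a compiled Keras model; this is often a simpler starting point than a GAN when training data is limited:
from keras.preprocessing.image import ImageDataGenerator

# Random flips, shifts and rotations are generated on the fly during training.
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)
datagen.fit(X_train)
model.fit_generator(datagen.flow(X_train, y_train, batch_size=32),
                    steps_per_epoch=len(X_train) // 32,
                    epochs=50)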
H: TensorFlow with Phonegap
I'm new into the ML Scene and I want to create a phonegap app involving Tensorflow but I'm unsure where to start or if this is even possible. Can anyone give me a hand (Probably by linking me to some resources)? My app will just use tensor flow image recognition (probably pre-trained).
Thanks, Felix.
AI: TensorFlow is a tool to write computation using data flow graphs; this being said, if you want your app to use a pre-trained model only, there is no requirements to use TensorFlow specifically. You could even use one of the ML library written in Javascript to import and run the pre-trained model.
Javascript DL libs: http://cs.stanford.edu/people/karpathy/convnetjs/, https://github.com/dmlc/mxnet.js/
TensorFlow for mobile: https://www.tensorflow.org/mobile/
Caffe Android: https://github.com/sh1r0/caffe-android-lib
DL4J Android: https://deeplearning4j.org/android |
H: EEG data layout for RNN
How should one structure an input data matrix (containing EEG data) for an RNN?
Normally, RNNs are presented as language models where you have a one hot vector indicating the presence of a word. So if you input was the sentence "hello how are you", you would have 4 one hot vectors (I think):
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 1, 0]
[0, 0, 0, 1]
How do these individual vectors get condensed into a single data matrix?
In the case of single channel EEG (with only 1 electrode), 256 samples per second, 1 second long samples, how should this data be structured? Would it be 256 vectors? If so, what does each vector represent/contain? Or should it be 1 vector that is 256 elements long?
Furthermore, how does this extend to multi channel EEG with, say, 64 electrodes over 256 time samples?
I would prefer to use the raw EEG data, rather than trying some dimensionality reduction (calculating means, spectrograms etc)
AI: RNNs are not designed to do language modeling exclusively; they are designed to process time series data, and language happens to be representable as a time series.
There are plenty of papers demonstrating how to use RNNs to do classification and regression on time series (awesome list of papers).
One-hot encoding is often used in cases where the input is discrete and not a number that can be directly fed into a model. However, one-hot encoding is actually not always the norm for language modeling; some research actually maps each character (or each word, depending on how one wants to model the problem) to a unique numerical identifier (e.g. with $id\in \left\{ 0\ldots n \right\}$; with $n$ the size of the vocabulary) and the model maps that identifier to a vector representation. This is particularly useful when the size of the vocabulary is huge and you want to avoid having to deal with big one-hot encoded vectors. Take a look at word2vec and word2vec Tensorflow for more details about this.
In your case, you want to process sensor data. There is no need for one-hot encoding because your data are continuous and numerical. In other words, you can input the recorded EEG data directly, although it's usually better to clean them up and normalize them beforehand. There are actually plenty of papers about EEG data processing with RNNs.
Regardless of the type of data, the idea to fed them into a RNN remains the same: provide a data sample $x$ for each timestep $t$.
For EEG data recorded for 5 timesteps with 1 electrode $a$, you would have a 1-dimensional vector with one sample per timestep:
$a = [0.12, 0.44, 0.134, 0.39, 0.23]$
$inputvector = [0.12, 0.44, 0.134, 0.39, 0.23]$
For EEG data recorded for 5 timesteps with 3 electrodes $a, b, c$, you would have a 3-dimensional vector with one sample per timestep:
$a = [0.12, 0.44, 0.134, 0.39, 0.23]$
$b = [0.43, 0.92, 0.3, 0.37, 0.4]$
$c = [0.13, 0.1, 0.4, 0.21, 0.14]$
$inputvector = [[0.12, 0.43, 0.13], [0.44, 0.92, 0.1], ...]$
I highly recommend you to read some of the literature I linked above. |
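A minimal Keras sketch of the resulting input shape, (n_trials, n_timesteps, n_channels): e.g. 1-second trials at 256 Hz from 64 electrodes give (n_trials, 256, 64). The random arrays below are placeholders standing in for your recordings and labels:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X = np.random.randn(100, 256, 64)            # 100 trials, 256 timesteps, 64 channels
y = np.random.randint(0, 2, size=(100,))     # binary labels, placeholder only

model = Sequential()
model.add(LSTM(64, input_shape=(256, 64)))   # one 64-channel input vector per timestep
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=10, batch_size=16)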
H: Is ML a good solution for identifying what the user wants to do from a sentence?
I am learning machine learning and I'm trying to implement a solution for a real problem: predict from a human sentence what programming function he/she is trying to do.
I have a series of programming functions related with a series of descriptions (there can be n > 0 descriptions for each unique function).
I created a neural network and a bag of words model trying to convert a human sentence "we get the data from the database" to a programming function. So far it works with very easy examples but not with my real data.
Something like this works:
"description" | programming function
lala lolo lulu ka | function1
lala lolo lulu ke | function1
lala lolo lulu ko | function1
lala lele lili ka | function2
lala lele lili ki | function2
lala lele lili ko | function2
Every word in the description is converted to an neuron-input (with value of 1 if present and 0 if not present) and every possible function is converted to a neuron-output.
I'm using pyBrain with back propagation and an error threshold of 0.005. The neural network has three layers, and the middle one has length: number of possible words + number of possible programming functions (this is kind of arbitrary).
I know full text search or auto-complete is probably a better alternative for this task, but I'm just experimenting with machine learning and I'd like this to work if possible. In my real data I have 1000 descriptions related with ~500 functions.
So my question is:
Is bag of words + neural networks a good approach for solving this?
Maybe Word2vec is a better option?
If neither is good, is there any known machine learning approach that could work with something like that?
AI: Yes this problem is extremely well suited for machine learning. However, I think you should be careful as to which algorithms you tend to use.
A machine learning algorithm should be structured as follows: feature extraction and then your model. These are two things that should be done separately.
Feature Extraction
This is the bag of words, n_grams and word2vec. These are all good choices for text examples. I think bag of words is a good choice in your case. However, if this generates a sparse matrix then maybe n_grams can be better. You can test all 3 methods.
The Model
Theoretically, the more parameters in your model the more data you need to train it sufficiently otherwise you will retain a large amount of bias. This means a high error rate. Neural networks tend to have a very high number of parameters. Thus they require a lot of data to be trained.
But, you have 1000 instances!!! Yes. But, you also have 500 classes. So imagine you have a very young child and you want him to be able to correctly classify 500 different types of images. Then you can't just show the kid 2 different examples of each class for him to truly understand what each class really means.
As a very general rule of thumb, the number of instances you need to train a model increases exponentially with number of classes. So you will need a MASSIVE amount of data to properly train a neural network model.
I would suggest a less intensive model. Moreover, looking at your example, it seems that the classes should be linearly separable. So you can use something really simple, linear regression, logistic regression, naive bayes or knn. These methods would do MUCH MUCH better than a neural network.
My Suggestion
I would start with bag of words and then use knn. This should be a good starting point.
A neural network is 0% recommended for the amount of data you have. |
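A minimal sketch of that baseline with scikit-learn. The descriptions and function names below are made-up placeholders; substitute your 1000 real pairs:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

descriptions = ["we get the data from the database",
                "save the user record to the database",
                "send an email to the user"]
functions = ["fetch_data", "save_record", "send_email"]   # hypothetical labels

clf = make_pipeline(CountVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(descriptions, functions)
print(clf.predict(["load data from the database"]))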
H: Is Overfitting a problem in Unsupervised learning?
I came to this question after reading that using PCA to reduce overfitting is a bad practice, because PCA does not consider labels/output classes and so regularization is always preferred.
That argument seems to apply only to supervised learning.
What about the case for Unsupervised Learning?
We don't have any labels whatsoever.
So 2 questions.
Is overfitting a problem in Unsupervised learning?
If yes, Can we use PCA to prevent overfitting?Is that a good practice?
AI: Overfitting happens when the model fits the training dataset more than it fits the underlying distribution. In a way, it models the specific sample rather than producing a more general model of the phenomena or underlying process.
It can be presented using Bayesian methods. If I use Naive Bayes then I have a simple model that might not fit either the dataset or the distribution too well but of low complexity.
Now suppose that we use a very large Bayesian network. It might end up not being able to gain more insight, instead using its extra complexity to model the specific dataset (or even just noise).
So, overfitting is possible in unsupervised learning.
In PCA we start with a model in the size of the dataset. We have assumptions about the way the data behaves and use them to reduce the model size by removing parts which don't explain the main factors of variation. Since we reduce the model size one could expect to benefit always.
However, we face problems.
First, a model in the size of the dataset is extremely large (given such a large size you can model any dataset). Compressing it a bit is not enough.
Another problem is that it is possible that our assumptions are not correct. Then we will have a smaller model, but it won't be aligned with the distribution. Again, in this case it might overfit or simply not fit well.
That said, PCA aims to reduce the dimensionality, which leads to a smaller model and possibly reduces the chance of overfitting. So, in case the distribution fits the PCA assumptions, it should help.
To summarize, overfitting is possible in unsupervised learning too. PCA might help with it, on suitable data.
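As a practical sketch (X being your unlabeled data matrix): keeping only the components needed to explain, say, 95% of the variance gives a smaller representation that is less prone to fitting noise.
from sklearn.decomposition import PCA

pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)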
H: SWF - Incremental mining
I am new to this data mining. Can anyone please help me with an example for Sliding Window Filtering algorithm(SWF) for incremental mining?
AI: There are pseudo-codes in this paper, Section 3.1 shows an example of incremental mining by this algorithm. However I can't find any projects or working source codes with this algorithm implemented. |
H: Finding the perfect algorithm for realtime optimizing of content
I am looking for an algorithm that allows me the following:
I have a webpage and I want to randomly show one "content" from a list of contents on that webpage depending on who (visitor) sees the webpage. I know my visitors' demographic features like age, gender, locale. I assume that visitors with similar demographics have a similar taste regarding content on my website. I also know that they liked my content when they share it in the end.
What I have:
A list of available contents, let's say: red, green, blue, purple,
violet
A constant stream of events of visitors with specific
demographic data that share a content
What I want:
First of all, all contents should be displayed completely randomly.
Each user should randomly get one content without any preference
As soon as the first user with a specific demographic shares the content
they got I want other visitors with similar demographic features that
end up on my webpage to see this specific content with a higher
probability.
So basically I want a self optimizing system that learns in realtime.
AI: This sounds like a classic use for a contextual bandit solver.
In essence you can run a simple online model (pretty much any regression model, or even a simple classifier like logistic regression if your reward signal is binary success/fail such as in your case) that learns to associate your demographic data with expected reward from each possible action - for you the reward can simply be 1 for a share link created or 0 for no share link.
Whilst the model is learning, you select the next action according to predicted reward from the model. There are choices between different workable strategies. For instance you could use an $\epsilon$-greedy approach: Pick the action with maximum predicted expected reward (or randomly choose between shared maximum values), but sometimes - with probability $\epsilon$ - you choose random content. There are other approaches and options that you can discover by researching contextual bandits and the simpler multi-armed bandit problems.
As an example, you could use a logistic regression model to predict expected reward from user demographics, with one such model per possible action. For a version that picks evenly to start, but prefers items that have been shared more over time, you can use a Boltzmann distribution (also called Gibbs distribution) using the predicted rewards as the inverse "energies" for the actions, and lowering the temperature as you collect more data. You can also initialise the weights of your model to predict a small but optimistic positive reward to start with to encourage early exploration. Whenever a user views your page, you pick the action to take based on the predicted rewards, and afterwards take the user response (share or not share) as feedback to update the one model associated with that action.
In the above example, the logistic regression learning rate, temperature scheme and starting reward are hyper-parameters of your model, and you use them to trade off responsiveness to individual events versus long-term accuracy for selecting the best action. |
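A rough sketch of the $\epsilon$-greedy variant with one online logistic-regression model per content item. Everything here is illustrative rather than a drop-in implementation; x stands for your encoded demographic feature vector:
import numpy as np
from sklearn.linear_model import SGDClassifier

contents = ['red', 'green', 'blue', 'purple', 'violet']
models = {c: SGDClassifier(loss='log') for c in contents}
epsilon = 0.1

def choose_content(x):
    # Explore with probability epsilon, or while some model is still unfitted.
    if np.random.rand() < epsilon or any(not hasattr(m, 'coef_') for m in models.values()):
        return np.random.choice(contents)
    preds = {c: m.predict_proba([x])[0, 1] for c, m in models.items()}
    return max(preds, key=preds.get)              # exploit: highest predicted share rate

def record_feedback(content, x, shared):          # shared: 1 if a share link was created
    models[content].partial_fit([x], [shared], classes=[0, 1])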
H: How do I represent a hidden markov model in data structure?
My task involves POS tagging using an HMM. I am given a training data set (word/tag). I have to write a file with transition probabilities and emission probabilities. I am currently using a nested dictionary of the form {State1: {State2: count, State3: count}}. However, while calculating the probabilities via the counts in the nested dict, my program runs very slowly for mid-sized files (e.g. 2000 sentences and tags).
Is there a better way to store a HMM in python? For my project, I cannot use any external library that already does this, I must use standard python libraries.
AI: With 29 states and 841 possible transitions to track whilst reading a file with 2000 entries (word, tag), then you should not be experiencing a speed problem when using a dictionary of dictionaries.
Assuming your data structure as described called transition_counts, and receiving data in pairs, (this_pos, next_pos) then running 2000 times:
transition_counts[this_pos][next_pos] += 1
takes only a fraction of a second. This is similar for code that calculates $p(POS_{t+1}|POS_t)$:
total_from_pos_t = sum(transition_counts[pos_t].values())
prob_pos_tplus_one = transition_counts[pos_t][pos_tplus_one] / total_from_pos_t
This is very fast. Your problem is not with the representation. |
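Putting that together, a sketch using collections.defaultdict so new tag pairs need no explicit initialization; tag_pairs stands for your stream of (tag_t, tag_t+1) pairs read from the training file:
from collections import defaultdict

transition_counts = defaultdict(lambda: defaultdict(int))

for this_pos, next_pos in tag_pairs:
    transition_counts[this_pos][next_pos] += 1

def transition_prob(pos_t, pos_tplus_one):
    total = sum(transition_counts[pos_t].values())
    return transition_counts[pos_t][pos_tplus_one] / float(total)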
H: Which type of regression has the best predictive power for extrapolating for smaller values?
I have a data set whose response variable is on the order of 10-20. The scatter plot for such a regression appears linear, but the problem is that when I predict for test cases using values very small compared to the training sample, the predicted response values come out negative. Please note that the predicted values of this data set should not be negative.
Here is the 3d scatter plot of my data
Is there a type of regression which has better predictive power which can perform such operations without such an error?
AI: OK, so you apply linear regression to create a model for your data, and when you use that model to predict new values, the output values don't satisfy a constraint (namely, being positive). I can think of only a few different things that might be going on here:
The new input values you are giving to the model are not within the allowable range for the problem. This is unlikely given your setup - presumably you know your situation well enough to determine what input values are allowed - but if this does happen to be the case, you need to change the way in which you obtain inputs.
A linear model is not accurate over the full range between your original input data and new input data. Without more domain knowledge, it's impossible to tell what sort of model would be a better substitute. If this is a situation where you can come up with some sort of theoretical prediction about what general form of model should describe the data, you should use that form. If not... try an exponential I guess?
A linear model is accurate, it's just not this particular one. You might have to do something like forcing the regression to include a particular point (e.g. the origin), for example. That would mean that you're not actually doing a full linear regression, but rather fitting a model with one fewer free parameter - e.g. in slope-intercept form, you're basically forcing the intercept to zero and only fitting the slope. Or the equivalent if using another point. This should be doable with some regression library that you can use. But if you use a linear model, it will predict negative values somewhere. You'll have to live with declaring those regions of parameter space to be outside the domain of the model (or possibly finding a way to make sense of negative values).
Based on the little information you've provided here, there's really not much more I can say. Maybe if you described your situation in more detail, I could offer more specific advice. |
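As a sketch of option 3 with statsmodels formulas (the column names z, x, y and the data frame called data are placeholders; adapt them to your data), you can drop the intercept with "- 1", or model log(z) so that back-transformed predictions are always positive:
import numpy as np
import statsmodels.formula.api as smf

lm_origin = smf.ols(formula='z ~ x + y - 1', data=data).fit()    # force the fit through the origin
lm_log = smf.ols(formula='np.log(z) ~ x + y', data=data).fit()   # exponential-style model
positive_preds = np.exp(lm_log.predict(data))                    # back-transformed predictions are > 0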
H: Correct number of biases in CNN
What is the correct number of biases in a simple convolutional layer? The question is well enough discussed, but I'm still not quite sure about that.
Say we have a (3, 32, 32) image and apply a (32, 5, 5) filter, just like in Question about bias in Convolutional Networks.
Total number of weights in the layer kernel trivially equals to $3 \times 5 \times 5 \times 32$.
Now let us count biases. The link above states that total count of biases is $1 \times 32$, which makes sense because weights are shared among all output cells, so it is natural to have only one bias for each output feature map as a whole.
But from the other side: we apply the activation function to each cell of the output feature map separately, so if we have a different bias for each cell, they do not sum together, and the number $O \times O \times 32$ instead of $1 \times 32$ makes sense too (here $O$ is the output feature map height or width).
As I can see, first approach is widely used, but I also saw the second approach in some papers.
So, $(3 \times 5 \times 5 + 1) \times 32$ or $(3 \times 5 \times 5 + O \times O) \times 32$?
AI: As you say, both approaches are used. It's called tied biases if you use one bias per convolutional filter/kernel ((3x5x5 + 1)x32 overall parameters in your example) and untied biases if you use one bias per kernel and output location ((3x5x5 + OxO)x32 overall parameters in your example).
Untied biases increase the capacity of your model, so they can be a good idea if you are underfitting. But in this case using tied biases and more filters and/or layers might also help, see https://harmdevries89.wordpress.com/2015/03/27/tied-biases-vs-untied-biases/. |
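For concreteness, if we assume 'same' padding and stride 1 so that the output feature map is $32 \times 32$ (i.e. $O = 32$), the two counts are: tied biases give $(3 \times 5 \times 5 + 1) \times 32 = 2432$ parameters, while untied biases give $3 \times 5 \times 5 \times 32 + 32 \times 32 \times 32 = 2400 + 32768 = 35168$ parameters, a substantial increase in capacity.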
H: Best way to fix the size of a sentence [Sentiment Analysis]
I am working on a project about Natural Language Processing. However, I am stuck at the point where I have an ANN with a fixed number of input neurons.
I am trying to do sentiment analysis using the IMDB movie review set. To be able to do that, I first calculated the word embeddings for each word by creating a word-context matrix and then applying SVD. So I have the word embedding matrix. But I do not know the best way to compress a sentence's vector (which contains the embeddings for each word in the sentence) into a fixed size so that I can feed it to the neural net. I tried PCA but the result was not satisfying.
Any help?
AI: The easiest way is to average the word embeddings. This works quite well.
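A sketch of the averaging option, where embeddings is a dict from word to a 1-D numpy vector of length dim, built from your SVD step:
import numpy as np

def sentence_vector(sentence, embeddings, dim):
    # Look up each known word and take the element-wise mean -> fixed-size vector.
    vectors = [embeddings[w] for w in sentence.split() if w in embeddings]
    if not vectors:
        return np.zeros(dim)
    return np.mean(vectors, axis=0)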
Another thing you can try is to represent each document as a bag of words - i.e. - to have a vector in the size of your vocabulary, where each element in the vector represents the number of times a certain word had been mentioned in your document (for example, the first element in the vector will represent how many times the word a was mentioned, and so on).
Afterwards, to reduce the size of the vector you can use techniques like LDA, SVD, or autoencoders.
H: Retrieving column names in R
I am trying to retrieve the column names of the data set model$data using the following formula:
sample(colnames(model$data),1)
When I run it I receive the following error message:
Error in sample.int(length(x), size, replace, prob) :
invalid first argument
Appreciate any help!
str(model) looks like this:
> str(model)
List of 13
$ data :List of 1
..$ : num [1:1000, 1:56] 1 1 1 1 0 1 1 0 1 1 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:1000] "7530" "5975" "552" "815" ...
.. .. ..$ : chr [1:56] "Agriculture_and_Hunting" "Baking" "Biochemistry" "Braiding" ...
$ unit.classif : num [1:1000] 3 5 5 5 16 3 5 1 3 3 ...
$ distances : num [1:1000] 0.000806 0.000239 0.000239 0.000239 0.001953 ...
$ grid :List of 6
..$ pts : num [1:25, 1:2] 1.5 2.5 3.5 4.5 5.5 1 2 3 4 5 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : NULL
.. .. ..$ : chr [1:2] "x" "y"
..$ xdim : num 5
..$ ydim : num 5
..$ topo : chr "hexagonal"
..$ neighbourhood.fct: Factor w/ 2 levels "bubble","gaussian": 1
..$ toroidal : logi FALSE
..- attr(*, "class")= chr "somgrid"
$ codes :List of 1
..$ : num [1:25, 1:56] 0.000388 0.99996 1 1 1 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:25] "V1" "V2" "V3" "V4" ...
.. .. ..$ : chr [1:56] "Agriculture_and_Hunting" "Baking" "Biochemistry" "Braiding" ...
$ changes : num [1:100, 1] 0.00261 0.00263 0.00262 0.00254 0.00254 ...
$ alpha : num [1:2] 0.05 0.01
$ radius : Named num [1:2] 3 0
..- attr(*, "names")= chr [1:2] "67%" ""
$ user.weights : num 1
$ distance.weights: num 1
$ whatmap : int 1
$ maxNA.fraction : int 0
$ dist.fcts : chr "sumofsquares"
- attr(*, "class")= chr "kohonen"
AI: Your data boils down to something like this structure:
> str(model)
List of 2
$ data:List of 1
..$ : int [1:3, 1:4] 1 2 3 4 5 6 7 8 9 10 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : NULL
.. .. ..$ : chr [1:4] "a" "b" "c" "d"
$ foo : num 1
but you have some other components that we don't need to bother with, and my data doesn't have row names and is a lot smaller.
model is a list of 2 (for me) and 13 (for you) parts.
The $data component is also a "List of 1" component.
So colnames(model$data) is trying to get the colnames of a list, and failing:
> colnames(model$data)
NULL
Which you would have spotted if you'd tried running colnames(model$data) yourself.
You want the colnames of the first element of the list model$data:
> colnames(model$data[[1]])
[1] "a" "b" "c" "d"
and hence:
> sample(colnames(model$data[[1]]),1)
[1] "b"
It's possible that, because this is a "kohonen" class object, there are functions that get these data matrices for you. You'll need to read the documentation to figure this out. What I've shown above is digging into the structure to find the data you want.
H: Pandas categorical variables encoding for regression (one-hot encoding vs dummy encoding)
Pandas has a method called get_dummies() that creates a dummy encoding of a categorical variable. Scikit-learn also has a OneHotEncoder that needs to be used along with a LabelEncoder. What are the pros/cons of using each of them? Also both yield dummy encoding (k dummy variables for k levels of a categorical variable) and not one-hot encoding (k-1 dummy variables), how can one get rid of the extra category? How much of a problem does this dummy encoding create in regression models (collinearity issues - a.k.a. dummy variable trap)?
AI: One advantage of get_dummies is that it can operate on values other than integers (so you don't need the LabelEncoder) and returns a DataFrame with the categories as column names. Also, you can conveniently drop one redundant category using drop_first=True.
One advantage of scikit-learn's OneHotEncoder lies in the scikit-learn API. OHE gives you a transformer which you can apply to your training and test set separately if you specify the total number of categories. This doesn't work with get_dummies, for example, if the training set misses categories present in the test set.
You can still delete categories by simply deleting columns from the resulting numpy array (e.g. using n_values_ or feature_indices_ to see which columns correspond to the same feature). Some models work regardless, for example tree-based models. Also, L1 regularization can often set redundant features to zero (see Lasso regression). |
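A minimal sketch of both points with a hypothetical color column: drop_first avoids the dummy-variable trap, and reindexing the test dummies to the training columns handles categories missing from the test set.
import pandas as pd

train = pd.DataFrame({'color': ['red', 'green', 'blue']})
test = pd.DataFrame({'color': ['red', 'red']})               # no 'green' or 'blue' here

train_d = pd.get_dummies(train, drop_first=True)             # k-1 dummy columns
test_d = pd.get_dummies(test).reindex(columns=train_d.columns, fill_value=0)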
H: How should I expose data from my app for data scientists?
I'm the product manager for an online app. I'm currently researching a new feature where our users will be able to access "all their raw application data". This data is likely to be used by data scientists, it's also likely to be loaded into BI tools. The datasets will contain up to a few million rows.
How should I actually expose the data in a practical sense?
For example:
Online datasource like Amazon Redshift
Some other RDBMS available online (e.g. a dedicated postgres installation)
CSV files available on S3
CSV files available for download in a web interface
Dumps into Google sheets
Doesn't matter as decent data scientists can easily handle and automate anything
AI: Usually, people are indeed quite indifferent to the format, as long as there is an easy way in which they can transfer it to their favorite format.
CSV is a very common format, since most tools can load it.
Note that there are some datasets whose CSV representation is inconvenient (e.g., the text contains many commas, or datasets in which the type is important yet hard to deduce).
As for where the data is stored, most people are even more indifferent to the source. A web interface is the common option.
If you use a relational database you gain few advantages:
The dataset is structured so you are protected from wrong structuring problems.
The dataset can be naturally updated (e.g., you can keep appending records to it on a daily basis).
You can allow the users load only part of the dataset, which is convenient with large dataset (e.g., give me just America users from the last month).
If you are willing to enable that, you can let the users work on your database directly (e.g., as Google BigQuery hosts the GitHub data)
To summarize, the location of the information is not important.
Unless you need one of the advantages provided by a database, use a CSV.
Need a database:
Online datasource like Amazon Redshift
Some other RDBMS available online (e.g. a dedicated postgres installation)
Big query
The raw data is enough
CSV files available on S3
CSV files available for download in a web interface
Dumps into Google sheets
Generally true
Doesn't matter, as decent data scientists can easily handle and automate anything reasonable (but it is still nice that you are looking for the most convenient format).
H: How to treat outliers in a time series dataset?
I've read the following article about how to treat outliers in a dataset: http://napitupulu-jon.appspot.com/posts/outliers-ud120.html
Basically, he removes all the y values that differ greatly from the majority:
def outlierCleaner(predictions, ages, net_worths):
"""
clean away the 10% of points that have the largest
residual errors (different between the prediction
and the actual net worth)
return a list of tuples named cleaned_data where
each tuple is of the form (age, net_worth, error)
"""
#calculate the errors, sort them in descending order, and keep the bottom 90% of the data
errors = (net_worths-predictions)**2
cleaned_data =zip(ages,net_worths,errors)
cleaned_data = sorted(cleaned_data,key=lambda x:x[2][0], reverse=True)
limit = int(len(net_worths)*0.1)
return cleaned_data[limit:]
But how can I apply this technique to a time series dataset if its rows are correlated?
AI: Decide how auto-correlative your usual event in the time series is.
For example, "I'm tracking temperature over time and it rarely changes more
than 30 degrees F in an hour".
Throw out or smooth any values where the observed value changes
more than that. In other words, "If ever I see the temperature
changing more than 30 degrees in an hour, I'm going to ignore that value and substitute the average of the prior and the next value because that must be a sensor malfunction".
Once you were comfortable with doing that, use something like the standard deviation of the data over a rolling window instead of an absolute, arbitrary value like I did. |
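A pandas sketch of that rolling idea, where series is your time-indexed pd.Series; the window length and the factor 3 are arbitrary choices you would tune:
import pandas as pd

diffs = series.diff().abs()
threshold = 3 * diffs.rolling(window=24, min_periods=1).std()
outliers = diffs > threshold                     # points that jump more than 3 rolling std devs
cleaned = series.mask(outliers).interpolate()    # blank them out and interpolate over the gaps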
H: Does it make sense to train a CNN as an autoencoder?
I work with analyzing EEG data, which will eventually need to be classified. However, obtaining labels for the recordings is somewhat expensive, which has led me to consider unsupervised approaches, to better utilize our quite large amounts of unlabeled data.
This naturally leads to considering stacked autoencoders, which may be a good idea. However, it would also make sense to use convolutional neural networks, since some sort of filtering is generally a very useful approach to EEG, and it is likely that the epochs considered should be analyzed locally, and not as a whole.
Is there a good way to combine the two approaches? It seems that when people use CNN's they generally use supervised training, or what? The two main benefits of exploring neural networks for my problem seem to be the unsupervised aspect, and the fine-tuning (it would be interesting to create a network on population data, and then fine tune for an individual, for instance).
So, does anyone know if I could just pretrain a CNN as if it was a "crippled" autoencoder, or would that be pointless?
Should I be considering some other architecture, like a deep belief network, for instance?
AI: Yes, it makes sense to use CNNs with autoencoders or other unsupervised methods. Indeed, different ways of combining CNNs with unsupervised training have been tried for EEG data, including using (convolutional and/or stacked) autoencoders.
Examples:
Deep Feature Learning for EEG Recordings uses convolutional autoencoders with custom constraints to improve generalization across subjects and trials.
EEG-based prediction of driver's cognitive performance by deep convolutional neural network uses convolutional deep belief networks on single electrodes and combines them with fully connected layers.
A novel deep learning approach for classification of EEG motor imagery signals uses fully connected stacked autoencoders on the output of a supervisedly trained (fairly shallow) CNN.
But also purely supervised CNNs have had success on EEG data, see for example:
EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces
Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG (disclosure: I am the first author of this work, more related work see p. 44)
Note that the EEGNet paper shows that also with a smaller number of trials, purely supervised training of their CNN can outperform their baselines (see Figure 3). Also in our experience on a dataset with only 288 training trials, purely supervised CNNs work fine, slightly outperforming a traditional filter bank common spatial patterns baseline. |
H: How to fill missing value based on other columns in Pandas dataframe?
Suppose I have a 5*3 data frame in which the third column contains missing values:
1 2 3
4 5 NaN
7 8 9
3 2 NaN
5 6 NaN
I want to generate each missing value using the rule that it is the product of the first and second columns:
1 2 3
4 5 20 <--4*5
7 8 9
3 2 6 <-- 3*2
5 6 30 <-- 5*6
How can I do it use data frame? Thanks.
How can I add a condition to calculate the missing value, like this?
if 1st % 2 == 0 then 3rd = 1st * 2nd
else 3rd = 1st + 2nd
1 2 3
4 5 20 <-- 4*5 because 4%2==0
7 8 9
3 2 5 <-- 3+2 because 3%2==1
5 6 11 <-- 5+6 because 5%2==1
AI: Assuming the three columns of your dataframe are a, b and c, this is what you want:
df['c'] = df.apply(
lambda row: row['a']*row['b'] if np.isnan(row['c']) else row['c'],
axis=1
)
Full code:
df = pd.DataFrame(
np.array([[1, 2, 3], [4, 5, np.nan], [7, 8, 9], [3, 2, np.nan], [5, 6, np.nan]]),
columns=['a', 'b', 'c']
)
df['c'] = df.apply(
lambda row: row['a']*row['b'] if np.isnan(row['c']) else row['c'],
axis=1
) |
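For the conditional rule added to the question, the same pattern works with the condition inside the lambda (use this instead of the version above):
df['c'] = df.apply(
    lambda row: (row['a']*row['b'] if row['a'] % 2 == 0 else row['a']+row['b'])
                if np.isnan(row['c']) else row['c'],
    axis=1
)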
H: How can I handle missing categorical data that has significance?
I have a data set that is highly categorical and has a lot of missing values. For instance:
i | A_foo | A_bar | A_baz | outcome
--+-------+-------+-------+--------
0 | nan | nan | nan | 1
1 | 0 | 1 | 0 | 1
2 | nan | nan | nan | 0
3 | 1 | 0 | 0 | 0
The problem is that 1 and 0 have a different meaning than nan. I don't want to impute the data because assigning values of 0 or 1 will bias my data, but many machine learning algorithms will not work on a dataset with missing values. How can I handle this?
AI: Imputation and dealing with missing data a broad subject; you should start by researching standard material on this subject.
The first question to figure out is Why is some data missing? and What is the process that causes data to be missing? It's important to understand how this happens, because this will affect what solution is appropriate.
Randomly missing data
If data is missing totally at random (whether a value is missing does not depend on any of the feature values of that item), then imputation can be appropriate. It should not create bias, if you do it appropriately. There are many techniques for imputation. You don't mention what you tried or why you think it will bias your results, but in general, if you use an appropriate method of imputation, there is no reason why it needs to bias your data.
Alternatively, you can use a classifier that can tolerate missing data. Some classifiers are designed to handle missing data and can tolerate it. However, I don't know of any reason to use them over imputation.
Non-randomly missing data
In contrast, if the chance for data to go missing for some object depends on the value of the features of that object, then you have a bigger problem. In that case imputation can create bias -- as can any other method. Your best hope is to understand in greater depth the random process that causes data to be missing and the probability distribution (probability that data goes missing, as a function of feature values), and try to design a procedure that is appropriate for that process.
Your specific situation: all features missing
Your specific situation is especially weird: it appears in your case either all features are missing, or none are. That's a weird one. For instances where the features are missing, you have absolutely no information about those instances. So, the best classification decision in that case is probably a very simple rule: take whichever class appears most frequently in your training set (or, most frequently among instances with missing data). Run the classifier on the remaining instances, i.e., the instances with no missing data.
But in real life this situation is pretty rare. It's more typical that some features are missing and others are present, and that requires more work to handle. |
H: How to improve an existing machine learning classifier in python?
I have a big dataset (1million x 50) to which I want to predict a particular class. I have thought of segregating the dataset in batches of 20k. And then train a classifier (lets say random forest or a basic SVM). How do I then improve that classifier by providing it with additional dataset. In other words, how do I preserve the random forests created in iteration i and use as starting model in interation i+1 to improve the model in python?
AI: The answer depends on your motivation for breaking the data into blocks. It could range from 'use online training' to 'use ensemble methods' to 'don't break the data into blocks'. Here are a few options to consider.
You could train multiple models on different subsets of the data, then ensemble them, as @Hobbes suggested. This would let you train the models in parallel on different machines. The function learned would be different than training a single model on the full data set.
If using a linear SVM, you can simply continue training the model. Say you train on the first 20k points, then receive an additional 20k points. Initialize the optimization using the first model, then train using the full set of 40k points. Online methods like stochastic gradient descent are a good way to train SVMs on large data sets. SGD processes a single point at a time, so dividing the data into blocks isn't necessary. If the full dataset is available, you'd probably get faster convergence by sweeping through it all instead of breaking it into blocks.
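A hedged sketch of that online option in scikit-learn — SGDClassifier with a hinge loss behaves like a linear SVM and exposes partial_fit, so each block can be fed in as it arrives (the synthetic blocks below are stand-ins for your real data):

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(loss='hinge')            # hinge loss ~ linear SVM
classes = np.array([0, 1])                   # all labels must be declared up front

for _ in range(5):                           # stand-in for successive 20k-row blocks
    X_block = rng.randn(20000, 50)
    y_block = (X_block[:, 0] > 0).astype(int)
    clf.partial_fit(X_block, y_block, classes=classes)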
I'd venture a guess that you're not directly using a kernelized SVM on 1M data points. But, you could approximate one by using the kernel to perform a feature space mapping, then train a linear SVM in the feature space. For example, the Nyström technique approximates the mapping using a random subsample of data points. One option would be to learn the feature space mapping using all data, then incrementally train the linear SVM. An alternative would be to learn the mapping using only the first block of data. This is valid if all blocks are equally representative of the full data set and adding new data wouldn't change the mapping.
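For the kernel-approximation route just described, a sketch with scikit-learn's Nystroem transformer feeding a linear model (again on synthetic data, purely for illustration):

import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X, y = rng.randn(20000, 50), rng.randint(0, 2, 20000)

feature_map = Nystroem(kernel='rbf', n_components=300, random_state=0).fit(X)
clf = SGDClassifier(loss='hinge').fit(feature_map.transform(X), y)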
The case isn't so straightforward for random forests because the individual trees choose hierarchical splits in a greedy manner. The splits chosen for the initial 20k points may not match what would have been chosen had the full 40k points been available. But, it's not easy to go back and revise them. However, a number of papers have proposed online variants of random forests that could be used in this setting. A search for 'incremental' or 'online' random forests should turn up the relevant results. |
H: Is there an R package for Locally Interpretable Model Agnostic Explanations?
One of the researchers, Marco Ribeiro, who developed this method of explaining how black box models make their decisions has developed a Python implementation of the algorithm available through Github, but has anyone developed a R package? If so, can you report on using it?
AI: I think you're talking about the lime Python package. No, there is no R port for the package. The implementation for the localized model requires enhancements to the existing machine-learning code (explained in the paper), a new implementation for R would be very time consuming.
You may want to take a look at this for interfacing Python in R.
My suggestion is to stick with Python. The package is only useful for highly complicated non-linear models, for which Python offers better support than R.
H: Generalization Error Definition
I was reading about PAC framework and faced the definition of Generalization Error. The book defined it as:
Given a hypothesis h ∈ H, a target concept c ∈ C, and an underlying distribution
D, the generalization error or risk of h is defined by
$$R(h) = \Pr_{x \sim D}\left[h(x) \neq c(x)\right] = \mathbb{E}_{x \sim D}\left[1_{h(x) \neq c(x)}\right]$$
The generalization error of a hypothesis is not directly accessible to the learner
since both the distribution D and the target concept c are unknown. However, the
learner can measure the empirical error of a hypothesis on the labeled sample S.
I can not understand the equation. Can anyone please tell me how it can be interpreted? Also what is x~D?
Edit:
How do I formally write this term? Is something like $$\mathbb{E}_{x \sim D} [1_{h(x)\neq c(x)}] = \int_X 1_{h(\cdot) \neq c(\cdot)} (\omega) dD(\omega)$$
correct or do I need to define some random variable? Also, to show that the empirical error
$$ \hat{R}(h) = \frac{1}{m} \sum_{i =1}^m 1_{h(x_i)\neq c(x_i)} $$
is unbiased, we have
$$\mathbb{E}_{S \sim D^m} [\hat{R}(h)] = \frac{1}{m} \sum_{i =1}^m \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x_i)\neq c(x_i)} \right] = \frac{1}{m} \sum_{i =1}^m \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x)\neq c(x)} \right]$$,
but how do we formally get
$$ \mathbb{E}_{S \sim D^m} ~ \left[ 1_{h(x)\neq c(x)} \right]= \mathbb{E}_{X \sim D} ~ \left[ 1_{h(x)\neq c(x)} \right] = R(h)$$
I think that I understand it intuitionally, but I can't write it down formally. Any help is much appreciated!
AI: There exists somewhere in the world a distribution $D$ from which you can draw some samples $x$. The notation $x \sim D$ simply states that the sample $x$ came from the specific distribution that was noted as $D$ (e.g. Normal or Poisson distributions, but also the possible pixel values of images of beaches).
Say you have some ground truth function, mark it as $c$, that given a sample $x$ gives you its true label (say the value 1). Furthermore, you have some function of your own, $h$ that given some input, it outputs some label.
Now given that, the risk definition is quite intuitive: it simply "counts" the number of times that $c$ and $h$ didn't agree on the label. In order to do that, you (ideally) will
go over every sample $x$ in your distribution (i.e. $x \sim D$).
run it through $c$ (i.e. $c(x)$) and obtain some label $y$.
run it through $h$ (i.e. $h(x)$) and obtain some label $\hat{y}$.
check if $y \neq \hat{y}$. If so, you add 1 to your count (i.e. $1_{h(x) \neq c(x)}$, which denotes the indicator function)
Now the last thing to note is that I wrote "count" above, but we don't really care whether the number is 500 or 100; we care about the relative number of mistakes (like 40% or 5% of the samples that were checked were classified differently). That is why it is written as an expectation ($\mathbb{E}$).
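A tiny numerical illustration of that last point — estimating $R(h)$ as the fraction of disagreements over samples drawn from $D$; the distribution, $c$ and $h$ below are made up purely for the example:

import numpy as np

rng = np.random.RandomState(0)
c = lambda x: (x > 0).astype(int)        # the "true" labelling function
h = lambda x: (x > 0.1).astype(int)      # our hypothesis

x = rng.normal(size=1000000)             # x ~ D, here D = N(0, 1)
risk_estimate = np.mean(h(x) != c(x))    # fraction of disagreements, approximates R(h)
print(risk_estimate)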
Let me know if that was clear enough :-) |
H: Forecasting time series: Method Selection
I'm new to forecasting time series and I'm looking for some advice on selecting the best method based on the analysis of the graph.
I have the following data and, based on the little knowledge I have, I'm assuming it is a stationary time series because of the shape of the data and because of the result I got when performing a Dickey-Fuller test:
Test Statistic -6.560544e+00
p-value 8.402824e-09
#Lags Used 2.100000e+01
Number of Observations Used 1.164000e+03
Critical Value (5%) -2.864026e+00
Critical Value (1%) -3.435980e+00
Critical Value (10%) -2.568094e+00
dtype: float64
That made me choose ARIMA with p and q values of 1. However, the results I'm getting are pretty awful. Can anyone guide me on how I should choose an adequate method for forecasting and, in case ARIMA is a good choice, how I should tweak its parameters to improve my results? The end goal is to predict the next month of data, so it might not be necessary to use all of the data.
AI: You can try two different approaches:
1) Kalman filter, the method is battle-tested and has proven useful in many areas.
Resources:
Understanding the Basis of the Kalman Filter
kalman filter in pictures
2) Recurrent Neural Networks, the LSTM and GRU architectures are particularly interesting for time series predictions.
Resources:
RNN effectiveness
Understanding LSTMs
To do regression and predict future data points, you would need to build a training dataset consisting of a sequence of events. Let's say a value $x$ for every timestamp $t$.
Your data seems to have 1 dimension, so both the network input layer and the output layer would consist of 1 unit. You would then train your model to predict $(x_{t+1})$ given $(x_{t})$.
Let $M$ be our trained model and let's say you want to forecast a data point at time $k$ and you know the current value at time $t$.
$M(x_{t}) = (x_{t+1})$
$t = t+1$
$M(x_{t}) = (x_{t+1})$
$...$ increment $t$ and keep predicting until $t+1 = k-1$
$M(x_{t+1}) = (x_{k})$
Put in pictures this corresponds to:
(picture from Udacity lecture about Deep Learning) |
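A minimal sketch of that rollout loop — model here stands for any trained one-step-ahead predictor (for example a Keras LSTM), so the exact input/output shapes below are an assumption, not code from the question:

import numpy as np

def forecast(model, x_t, n_steps):
    """Feed each prediction back in as the next input until step k is reached."""
    preds = []
    current = float(x_t)
    for _ in range(n_steps):
        # shape (batch=1, timesteps=1, features=1); adjust to your model's input shape
        current = float(model.predict(np.array(current).reshape(1, 1, 1)))
        preds.append(current)
    return preds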
H: In multiple linear regression why is it best to use an $F$-statistic when evaluating predictors?
I am currently going through Hastie and Tibshirani's 'Introduction to Statistical Learning' textbook and I have come across something I don't understand on page 77. I have two questions.
The author states that if we had 100 variable predictors (with respective coefficients $\beta_i$) and supposed that the null hypothesis, $$H_0 : \beta_1 = ... = \beta_{100} = 0$$ were true, then roughly $5\%$ of the $p$-values would fall below $0.05$ by chance, and therefore we might wrongly conclude that certain predictors are related to the response. Why would this happen by chance? Is this simply a mathematical truth?
In addition, the author then goes on to state that the $F$-statistic is a better measure because "if $H_0$ were true, then there is only a $5\%$ chance that the $F$-statistic would result in a $p$-value below $0.05$". I don't understand the difference - could somebody explain a bit more clearly?
AI: The example given in your textbook proposes a multiple linear regression with 100 predictors, all of which have a "true" regression coefficient of 0. In other words, your independent variables have no statistical association with your dependent variable.
When you calculate the $p$-value of an individual coefficient, you're looking at the magnitude of the coefficient, the standard error of the coefficient, making some distributional assumptions about it, and asking the following question: "what is the probability of seeing a coefficient value this extreme if the true value is actually zero?"
If our distributional assumptions are correct, any given coefficient with a true value of 0 will report a <0.05 $p$-value approximately 5% of the time. That's not so much of a problem if we only have one predictor, but by the law of large numbers, if we have lots of predictors, we'd expect 5% of them to report a $p$-value this low. This makes the $p$-values in high-dimensional regressions hard to interpret.
The $F$-test is different. Instead of evaluating every single coefficient for statistical significance, it applies a single test to the entire regression. So instead of having 100 chances to throw up an erroneous $p$-value, it only has one chance. This makes the $F$-test useful for evaluating whether or not there is a regression effect for high-dimensional regressions. |
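A quick simulation of this point (statsmodels is just one convenient choice here — any OLS implementation would do): with 100 pure-noise predictors, a handful of individual p-values will typically dip below 0.05, while the single overall F-test usually stays non-significant:

import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
X = sm.add_constant(rng.randn(500, 100))   # 100 predictors, none related to y
y = rng.randn(500)

fit = sm.OLS(y, X).fit()
print((fit.pvalues[1:] < 0.05).sum())      # roughly 5 "significant" coefficients by chance
print(fit.f_pvalue)                        # one overall test, usually well above 0.05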
H: RandomForestClassifier : binary classification scores
I am using sklearn's RandomForestClassifier to build a binary prediction model. As expected, I am getting an array of predictions, consisting of 0's and 1's. However I was wondering if it is possible for me to get a value between 0 and 1 along with the prediction array and set a threshold to tune my model.
Many thanks in advance
AI: Referring to http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html, you are using the predict method to derive the binary response. If you call predict_proba instead, you will get an array of probabilities corresponding to the binary response.
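A short sketch of thresholding those probabilities yourself — the dataset below is synthetic, just for illustration:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

proba = clf.predict_proba(X)[:, 1]        # probability of the positive class
threshold = 0.3                           # tune this instead of the default 0.5
preds = (proba >= threshold).astype(int)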
H: Sales prediction of an Item
So, I've been trying to implement my first algorithm to predict the (sales/month) of a single product, I've been using linear regression since that was what were recommended to me. I'm using data from the past 42 months, being the first 34 months as training set, and the remaining 8 as validation.
I've been trying to use 4 features to start:
Month number(1~12)
Average price that the product was sold during that month
Number of devolutions previous month
Number of units sold previous month
Here are images with graphs comparing the Real Data x Predicted Data and a Error x number of elements graph:
So far the results are not good at all (as shown in the images above), the algorithm can't even get the training set right. I tried to use higher degrees polynomials, and the regularization parameter, it seems to make it worse.
Then, I would like to know if there is a better approach for this problem, or what could I do to improve the performance.
Thanks a lot in advance!
AI: Based on the information given by you, I'm assuming you have performed multiple linear regression, i.e. multiple features and one response variable to be predicted.
First, apply PCA to all of your features except the response variable you want to predict — in your case, the four features you mentioned — and transform them into a 2-component matrix. Once you are done with that, plot the new matrix you formed together with the response variable as a scatter plot, so effectively a 3D scatter plot.
When you generate this scatter plot you will be able to visualize much better which kind of regression you have to use. You can decide for yourself whether it is linear or not, depending on how many outliers you are comfortable with.
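A hedged sketch of that suggestion with scikit-learn and matplotlib — the data below is random, standing in for your four features and the units-sold response:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(34, 4)                  # the four predictors, one row per month
y = rng.randn(34)                     # units sold (response)

X2 = PCA(n_components=2).fit_transform(X)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X2[:, 0], X2[:, 1], y)
plt.show()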
H: What is Ground Truth
In the context of Machine Learning, I have seen the term Ground Truth used a lot. I have searched a lot and found the following definition in Wikipedia:
In machine learning, the term "ground truth" refers to the accuracy of the training set's classification for supervised learning techniques. This is used in statistical models to prove or disprove research hypotheses. The term "ground truthing" refers to the process of gathering the proper objective (provable) data for this test. Compare with gold standard.
Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm – inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts.
The point is that I really can not get what it means. Is that the label used for each data object or the target function which gives a label to each data object, or maybe something else?
AI: The ground truth is what you measured for your target variable for the training and testing examples.
Nearly all the time you can safely treat this the same as the label.
In some cases it is not precisely the same as the label. For instance if you augment your data set, there is a subtle difference between the ground truth (your actual measurements) and how the augmented examples relate to the labels you have assigned. However, this distinction is not usually a problem.
Ground truth can be wrong. It is a measurement, and there can be errors in it. In some ML scenarios it can also be a subjective measurement where it is difficult to define an underlying objective truth - e.g. expert opinion or analysis, which you are hoping to automate. Any ML model you train will be limited by the quality of the ground truth used to train and test it, and that is part of the explanation in the Wikipedia quote. It is also why published articles about ML should include full descriptions of how the data was collected.
H: Can generic data sets be suitable for specific sentiment analysis
I have used the stanford movie review dataset for creating a experimentation of sentiment analysis.
I managed to create a basic application on top of Spark using the Naive Bayes classification algorithm.
Steps that I did for pre-processing from the spark ML pipeline
Tokenization
Bigrams
The dataset linked above also comes with a testing set that is separate from the training set. After training I got around 97% accuracy, which I believe is pretty good for Naive Bayes.
Now, can I use this ML model to predict on other texts such as email/chat etc.? My guess is that this dataset has a large enough collection of words to make good predictions, and that certain English phrases such as "I don't like this" or "This does not look good" carry the same sentiment regardless of the business context, across domains such as movies/emails/chats.
I have not done the experiment since the data that I need to get hold of belongs to the customer and due to privacy restrictions we cannot access the data.
Any help/guidance would be much appreciated.
AI: It depends.
You're basically asking if your sample (training data) is representative of the population (all written words).
Are you doing sentiment analysis on movie reviews? It'll work great.
Are you doing sentiment analysis on TV reviews? It'll probably work great.
Are you doing sentiment analysis on book reviews? I would give better than 50-50 odds it'll work.
Are you doing sentiment analysis on Twitter posts? Now we're getting shaky. People tend to write much less, use less formal language, and use more emojis which your movie review model wouldn't have seen.
That being said, there are definitely "generic" sentiment analysis services like here. Try out your model against Algorithmia on what you would consider a generic set of data (e.g. a bunch of tweets) and see how it does. |
H: Reward dependent on (state, action) versus (state, action, successor state)
I am studying reinforcement learning and I am working methodically through Sutton and Barto's book plus David Silver's lectures.
I have noticed a minor difference in how the Markov Decision Processes (MDPs) are defined in those two sources, that affects the formulation of the Bellman equations, and I wonder about the reasoning behind the differences and when I might choose one or the other.
In Sutton and Barto, the expected reward function is written $R^a_{ss'}$, whilst in David Silver's lectures it is written $R^a_{s}$. In turn this leads to slightly different formulations of all the Bellman equations. For instance, in Sutton and Barto, the equation for policy evaluation is given by:
\begin{align}
v_{\pi}(s) = \sum_a \pi(a|s) \sum_{s'} P_{ss'}^a(R_{ss'}^a + \gamma v_{\pi}(s'))
\end{align}
Whilst David Silver's lectures show:
\begin{align}
v_{\pi}(s) = \sum_a \pi(a|s) \left(R_{s}^a + \gamma \sum_{s'} P_{ss'}^a v_{\pi}(s') \right)
\end{align}
In both cases:
$\pi(a|s)$ is policy function - probability of choosing action $a$ given state $s$.
$\gamma$ is discount factor.
$P_{ss'}^a$ is transition function, probability of state changing to $s'$ given $s, a$
I understand that $R_{ss'}^a$ and $R_{s}^a$ are related (via $P_{ss'}^a$), so that these two sources are explaining the exact same thing. Note that the first equation can also be written as
\begin{align}
v_{\pi}(s)
&= \sum_a \pi(a|s) \sum_{s'} (P_{ss'}^aR_{ss'}^a + \gamma P_{ss'}^av_{\pi}(s'))\\
&= \sum_a \pi(a|s) \left( \sum_{s'} P_{ss'}^aR_{ss'}^a + \sum_{s'} \gamma P_{ss'}^av_{\pi}(s') \right) \\
&= \sum_a \pi(a|s) \left( \sum_{s'} P_{ss'}^aR_{ss'}^a + \gamma \sum_{s'} P_{ss'}^av_{\pi}(s') \right)
\end{align}
Hence, it must be true that $R_{s}^a = \sum_{s'} P_{ss'}^a R_{ss'}^a$.
My question is whether there is any reason I should prefer to use one or the other notation?
I started with Sutton and Barto, and find that notation more intuitive - the reward may depend on the eventual state, and this is explicit in the equations. However, it looks like in practice that the notation used in the video lectures describes more efficient calculations (essentially $R_{s}^a = \sum_{s'} P_{ss'}^a R_{ss'}^a$ is cached, if the formula is translated directly to code). Is that all there is to it?
AI: Your intuition is correct. In the most general case (Sutton's definitions), the model of the environment consists of the state transition distribution and the reward distribution. The latter is rarely considered, as the reward is often assigned by the modeler, depends only on the action selected from the current state, and is deterministic. As you mentioned, this simplifies the coding implementation a lot.
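A small numpy illustration of the caching relation $R_{s}^a = \sum_{s'} P_{ss'}^a R_{ss'}^a$ derived in the question — the shapes and values are made up:

import numpy as np

n_states, n_actions = 4, 2
rng = np.random.RandomState(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'], rows sum to 1
R_sas = rng.randn(n_states, n_actions, n_states)                  # R[s, a, s']

R_sa = (P * R_sas).sum(axis=2)   # expected reward R[s, a], computed once and cached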
H: Estimate the normal distribution of the mean of a normal distribution given a set of samples?
Let's say there is a distribution, call it D, for which I don't know details (i.e. mean and variance) but can assume that it's a normal distribution. I now have N samples from D. I cannot take more samples because it's too heavy. Given the N samples, and the assumption that D is a normal distribution, how can I compute the mean and variance of the mean of D? Naturally it's less likely for the mean of D to be far away from the samples (thought still possible), while more likely to be near the samples.
AI: This can be done with any standard statistical estimation technique, such as maximum likelihood or the minimum-variance unbiased estimator (MVUE). Maximum likelihood is generally the easiest way to estimate the mean and variance of the distribution. Under maximum likelihood, the estimated mean of the normal distribution is simply the sample mean. Given samples $x_1, x_2, \dots, x_n$:
$$\hat{\mu} = \frac{x_1 + x_2 + \dots + x_n}{n}$$
Similarly, the estimated variance is
$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{\mu})^2$$
Both of these are obtained by setting the partial derivatives of the likelihood function with respect to the mean and the standard deviation to zero.
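In numpy these two estimates are just the sample mean and the (biased, divide-by-n) sample variance — a minimal sketch with made-up samples:

import numpy as np

x = np.array([2.1, 1.9, 2.4, 2.0, 2.2])   # the N samples
mu_hat = x.mean()
var_hat = x.var(ddof=0)                    # ddof=0 gives the maximum-likelihood estimate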
Hope this helps. |
H: How to shift rows values as columns in pandas?
Input: I have csv file like below as input....
ID, Year,Specialty,AgeRange,PlaceSvc,Count, Group
101,2009,Internal, 20-29, Office, 0, PRGNCY
101,2010,Emergency, 20-29, Urgent Care,0, GIOBSENT
101,2011,Internal, 20-29, Office, 0, GYNEC1
102,2010,Other, 30-39, Office, 1, PRGNCY
102,2010,Laboratory,30-39, Independent,1, MSC2a3
103,2009,Laboratory,30-39, Independent,1, MSC2a3
103,2011,Other, 30-39, Office, 0, PRGNCY
Output: I want output like below...
ID,Year,Specialty_Internal,Specialty_Emergency,Specialty_Labrotory,Specialty_Other,Age20_29,Age30_39,PlaceSvc_Urgent,PlaceSvc_Office,PlaceSvc_Independent,Count,GroupPrgncy,GroupGiobsent,GroupGynec1,GroupMsc2a3
101,2009,1,0,0,0,1,0,0,1,0,0,1,0,0,0
101,2010,0,1,0,0,1,0,1,0,0,0,0,1,0,0
101,2011,1,0,0,0,1,0,0,1,0,0,0,0,1,0
102,2010,0,0,1,1,0,1,0,1,1,2,1,0,0,1
103,2009,0,0,1,0,0,1,0,0,1,1,0,0,0,1
103,2011,0,0,0,1,,1,0,1,0,0,1,0,0,0
How can i do this by pandas? or is there any other techinque to do this?
AI: You want pandas.get_dummies.
If you call get_dummies on a categorical column, it will output the binary dummy variables you're looking for. You should then be able to merge this with your original DataFrame on the index, or construct a new DataFrame using only the columns you want. |
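A hedged sketch with pandas — the file name input.csv and the final grouping are assumptions based on your example output:

import pandas as pd

df = pd.read_csv('input.csv', skipinitialspace=True)
dummies = pd.get_dummies(df, columns=['Specialty', 'AgeRange', 'PlaceSvc', 'Group'])
out = dummies.groupby(['ID', 'Year'], as_index=False).sum()   # one row per ID/Year, Count summed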
H: Can't reproduce results from GridSearchCV?
I am trying to find optimized n_neighbors value for KnearestClassifier using GridSearchCV. I am able to get optimized parameters but when I enter those in my classifier results don't match with GridSearchCVs best results.
clf = KNeighborsClassifier(n_neighbors=15, weights='uniform')
clf.fit(features_train, labels_train)
print('Score using optimized parameters: {}'.format(clf.score(features_test, labels_test)))
params = {'n_neighbors':[1,10,15,20,25,30,35,40,45,50,60,70,80,90,100], 'weights':['uniform', 'distance']}
grid = GridSearchCV(clf, params, cv=10, )
grid.fit(features_train, labels_train)
print('Optimized Parameters:{}'.format(grid.best_params_))
print('Best Score from GridsearchCV parameters{}'.format(grid.best_score_))
Output:
Score using optimized parameters: 0.928
Optimized Parameters:{'n_neighbors': 15, 'weights': 'uniform'}
Best Score from GridsearchCV parameters: 0.962666666667
AI: The score from your GridsearchCV is biased. You can use cross-validation either for estimating accuracy, or for choosing hyperparameters; but not both. If you use cross-validation to pick the best choice of hyperparameters, by measuring the accuracy of each possible option, the accuracy you got for the option you chose will tend to overestimate the accuracy you'll see on the test set.
To avoid this bias, select a separate hold-out validation set for estimating the accuracy of your selected parameters, or use nested cross-validation (or a scikit-learn Pipeline).
See https://datascience.stackexchange.com/a/17835/8560. |
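A sketch of the nested-cross-validation option with scikit-learn (synthetic data, just to show the pattern):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

params = {'n_neighbors': [1, 5, 15, 25], 'weights': ['uniform', 'distance']}
inner = GridSearchCV(KNeighborsClassifier(), params, cv=10)   # tunes the hyperparameters
outer_scores = cross_val_score(inner, X, y, cv=5)             # unbiased accuracy estimate
print(outer_scores.mean())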
H: Missing data imputation with KNN
I have a dataset including missing data for most of the variables. Assume the dataset is as follows:
Obs. var1 var2 var3 var4 var5 var6
1 x11 x12 x13 x14 Nan Nan
2 x21 x22 x23 Nan x25 x26
3 x31 x32 x33 x34 x35 x36
...
n xn1 xn2 xn3 xn4 Nan xn6
I have split the dataset to d1 where we have complete data for all variables and d2 where all records have at least one missing variable.
I made different models using KNN: To predict the values of var5 and var6 for the first observation, I used d1 (dataset without missing value) and modeled on var1, var2, var3 and var4.
To predict the value of var5 for the last observation, I used d1 and modeled on var1, var2, var3, var4, and var5.
Does my approach make sense?! Any suggestion are welcome. Thank you.
AI: There are various approaches for dealing with missing values. Suppose we've got 4 instances in a dataset:
x1 = [1 2 3]
x2 = [1 ? 3]
x3 = [2 4 2]
x4 = [1 3 3.5]
one simple approach (specially popular in medical datasets) is finding values with regard to most similar instance; in the above case, missing value of x2 would be then: 2 (since x1 is the most similar)
a more sophisticated approach is weighted averaging over the k most similar instances (of course only applicable if the missing value is numeric or at least ordinal). In the above case you would calculate: x2(2) = (2*(1-(0/3)) + 3*(1-(0.5/3))) / ((1-(0/3)) + (1-(0.5/3))), where x4 has also been counted as a similar case.
another approach, is voting among k most similar instances (applicable both for categorical and ordinal values).
cases 1 and 3 are what you have implemented (case1 = 1 Nearest Neighbor / case2 = k-nearest neighbors)
there are other approaches for handling a missing value, but it depends on what you're going to do with your dataset. For example in very large datasets, sometimes it is efficient to simply ignore every instance that contains at least one missing value, or ignore only the missing value (not the entire instance) in further processes (e.g. in VFI algorithm) |
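For reference, recent versions of scikit-learn (0.22 and later) ship a KNNImputer that implements a similar distance-weighted k-nearest-neighbour scheme — a minimal sketch on the toy data above:

import numpy as np
from sklearn.impute import KNNImputer   # requires scikit-learn >= 0.22

X = np.array([[1, 2, 3],
              [1, np.nan, 3],
              [2, 4, 2],
              [1, 3, 3.5]])

print(KNNImputer(n_neighbors=2, weights='distance').fit_transform(X))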
H: State of the art for Object detection/image recognition
I was asked to verify the feasibility of solving a particular problem: recognizing, for a fashion brand, the specific model of its products.
I have little experience with image recognition in general; I have always used the Google Vision API or some pre-trained nets from Google/TensorFlow such as VGG-16.
I am wondering: Do you think this level of granularity can be achieved?
I guess I should for example train the last layer of a pre-trained network, but I need many images to do that.
AI: Maybe.
Do they have a unique feature on the item (e.g. a polo player, a red sole) the model can be trained on? I have seen people try identifying clothing model from images via Google's Vision API and it didn't work well. If you are in a fashion category where the business model is to copy designs like H&M or Zara do, you're likely going to have trouble. |
H: Is it scientifically correct to derive conclusions unrelated to hypothesis from A/B test data
Consider a software A/B test with the hypothesis that "the addition of feature F is predicted to increase metric X".
At the end of the test, the data doesn't show any significant change in X, but it does show a significant increase in Y - something that wasn't expected or even considered at the beginning of the experiment.
At this point, is it scientifically valid to say that F increases Y, or should a new A/B test be designed and executed?
AI: It looks analogous to drug testing, where reporting of side effects during drug trials is obviously very important - i.e. the increase in Y seems analogous to a side effect. And some famous drugs have begun their lives as research into a side effect. Viagra is probably the most famous case, being a spinoff from a drug developed as angina medication. So in your write-up on your experiment you should definitely report the apparent effect on Y.
However, if the effect on Y is commercially important, then you still need to go back and do an experiment around a hypothesis that references the increase in Y to validate the existence of the effect properly. |
H: python - What is the format of the WAV file for a Text to Speech Neural Network?
I am creating a Text to Speech system for a phonetic language called "Kannada" and I plan to train it with a Neural Network. The input is a word/phrase while the output is the corresponding audio.
While implementing the Network, I was thinking the input should be the segmented characters of the word/phrase as the output pronunciation only depends on the characters that make up the word, unlike English where we have slient words and Part of Speech to consider. However, I do not know how I should train the output.
Since my Dataset is a collection of words/phrases and the corrusponding MP3 files, I thought of converting these files to WAV using pydub for all audio files.
from pydub import AudioSegment
sound = AudioSegment.from_mp3("audio/file1.mp3")
sound.export("wav/file1.wav", format="wav")
Next, I open the wav file and convert it to a normalized byte array with values between 0 and 1.
import numpy as np
import wave
f = wave.open('wav/kn3.wav', 'rb')
frames = f.readframes(-1)
#Array of integers of range [0,255]
data = np.fromstring(frames, dtype='uint8')
#Normalized bytes of wav
arr = np.array(data)/255
How Should I train this?
From here, I am not sure how to train this with the input text. From this, I would need a variable number of input and output neurons in the First and Last layers as the number of characters (1st layer) and the bytes of the corresponding wave (Last layer) change for every input.
Since RNNs deal with such variable data, I thought it would come in handy here.
Correct me if I am wrong, but the output of Neural Networks are actually probability values between 0 and 1. However, we are not dealing with a classification problem. The audio can be anything, right? In my case, the "output" should be a vector of bytes corresponding to the WAV file. So there will be around 40,000 of these with values between 0 and 255 (without the normalization step) for every word. How do I train this speech data? Any suggestions are appreciated.
EDIT 1 : In response to arduinolover's Answer
From what I understand, Phonemes are the basic sounds of the language. So, why do I need a neural network to map phoneme labels with speech? Can't I just say, "whenever you see this alphabet, pronounce it like this". After all, this language, Kannada, is phonetic: There are no silent words. All words are pronounced the same way they are spelled. How would a Neural Network help here then?
On input of a new text, I just need to break it down to the corresponding alphabets (which are also the phonemes) and retrieve it's file (converted from WAV to raw byte data). Now, merge the bytes together and convert it to a wav file.
Is this this too simplistic? Am I missing something here? What would be the point of a Neural Network for this particular language (Kannada) ?
AI: Speech data is made up of unique acoustic units called phonemes. Any audio file can be represented as a sequence of phonemes. Both automatic speech recognition (ASR) and speech synthesis (SS) systems model these phonemes. In ASR, speech signal (wav file) is used as input and phoneme labels are predicted and in SS, phoneme labels can be input and speech signal is output.
You can use a phonetic dictionary for converting your text files into sequence of phonemes. e.g play -> P L EY
If you have phoneme-boundary-marked data, e.g. in audio file file1.wav 0.1s to 0.5s is phoneme x and 0.5s to 0.9s is phoneme y, then you can use a NN to learn the mapping between phoneme labels and the speech signal (400 data points as output and the phoneme label of those 400 points as input).
But there are many things that affect the pronunciation. Some of them are listed below:
Context: 'to' and 'go' have the same phoneme 'o' but have very different pronunciations.
Pitch: Female speakers usually have higher pitch than male speakers.
Speaking rate: Speaking rate varies across speakers. It also depends on speaking mode; while reading a text we tend to have fewer pauses than in conversation.
length_of_output_phoneme: The length of wav file to generate
So in the end the input to your NN will look something like this:
[left_context, phoneme, right_context, speaking_rate, pitch, length_of_output_phoneme], and the output will be the corresponding speech signal. You can either use MFCC features or raw wav data as the NN output. There are many other factors that affect the pronunciation.
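If you go the MFCC route, a hedged sketch of extracting them with librosa (librosa is just one option here, not something from your setup; the file path is taken from your snippet):

import librosa

y, sr = librosa.load('wav/file1.wav', sr=None)          # raw waveform and sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # shape: (13, n_frames)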
If you don't have time-marked data, you can use a Hidden Markov Model (HMM) for speech synthesis. A separate model will be learned for each phoneme. The input for the HMM will be text files (sequences of phonemes) and the output will be the speech signal. These learned models can then be used for generating speech data later.
Some speech synthesis resources are listed below:
1) CMU festvox
2) wavenet
3) Deep Learning in Speech Synthesis
The biggest challenge will be to make it sound like human voice.
Edit 1: Some of the problems with "whenever you see this alphabet, pronounce it like this" are listed below
1) context: From the above to and go example which pronunciation of 'o' will be used
2) discontinuity: Vocal tract vibrations (lip, tongue motions) produce phoneme. Phonemes diffuse into neighboring phonemes since vocal tract doesn't stop vibrating immediately. If you copy-paste phonemes then there will be an abrupt change. You can copy-paste sounds for words since they are independent (kind of, 'can not' becomes 'can't' while speaking) and insert small pauses in between words. But then you have to store pronunciations for all the words.
3) stress phoneme: While speaking one or more phonemes are stressed (more focused, longer), Copy-pasting leaves out the stress information since everything is same
If you can store pronunciations of all the phonemes in all the possible contexts and can smooth the transition between adjacent phonemes then you can copy-paste sounds.
With a generative model we try model the human vocal tract system and depending on input context, speaking rate audio data is generated. Go through the first and third resource for detailed explanations.
All these problem arise because we want to make it sound more human. Phoneme transition and diffusion are the major hurdles.
todo: Pick any word and record it in different context, different speaking rate, different speakers and plot waveforms. You'll see each time it has a different waveform. Even if you don't change any condition each time there will be a slight variation in pronunciation of same word. Save pronunciations of few phoneme and copy paste them to generate a word and listen to it. |
H: Ordered elements of feature vectors for autoencoders?
Here is a newbie question; when one trains an autoencoder or a variational autoencoder, does the order of the objects in the training vector $x$ matter?
Suppose I take an MNIST image image $(28\times28)$ and turn it into a feature vector of size $x \in \mathbb{R}^{1\times784}$. Then does it matter if I e.g. flatten the whole image vertically, or horizontally, or some other fancy way? Or if I were to scramble the order of the elements in the feature vector, would that make the VAE or AE mess up?
AI: For a fully-connected network the precise order of features does not matter initially (i.e. before you start to train), as long as it is consistent for each example. This is independent of whether you have an auto-encoder to train or some other fully-connected network. Processing images with pixels as features does not change this.
Some caveats:
To succeed in training, you will need the pixel order to be the same for each example. So it can be randomly shuffled, but only if you keep the same shuffle for each and every example.
As an aside, you will still get some training effect from fully random shuffling the variables, because for example writing an "8" has more filled pixels than writing a "1" on average. But the performance will be very bad, accuracy only a little better than guessing, for most interesting problem domains.
To visualise what the auto-encoder has learned, your output needs to be unscrambled. You can actually input a (same shuffle each example) scrambled image and train the autoencoder to unscramble it - this will in theory get the same accuracy as training to match the scrambled input, showing again that pixel order is not important. You could also train autoencoder to match scrambled input to scrambled output and visualise it by reversing the scrambling effect (again this must be a consistent scramble, same for each example).
In a fully-connected neural network, there is nothing in the model that represents the local differences between pixels, or even that they are somehow related. So the network will learn relations (such as edges) irrespective of how the image is presented. But it will also suffer from being unable to generalise. E.g. just because an edge between pixels 3 and 4 is important, the network will not learn that the same edge between pixels 31 and 32 is similar, unless lots of examples of both occur in the training data.
Addressing poor generalisation due to loss of knowledge about locality in the model is one of the motivations for convolutional neural networks (CNNs). You can have CNN autoencoders, and for those, you intentionally preserve the 2D structure and local relationships between pixels - if you did not then the network would function very poorly or not at all. |
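A tiny numpy illustration of "the same shuffle for every example" from the first caveat — MNIST-shaped random data stands in for real images here:

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(1000, 784)        # flattened 28x28 images

perm = rng.permutation(784)    # ONE fixed permutation ...
X_scrambled = X[:, perm]       # ... applied identically to every example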
H: Predict the date an item will be sold using machine learning
I would like to predict the date a item will be sold using features such as:
Product ID Price Days_since_first_post Last_repost Type First_Post_Date Sold_Date
How would machine learning principles be used in such a way to do this task? If so, what is the proper means by which to design such an algorithm? What would be the different steps which would be involved.
More generally, what are the logical steps that should be taken when tackling a machine learning problem.
Thank you
AI: A machine learning problem can be separated into a few modular parts. Of course these are all massive in reach and possibility. However, every problem you encounter should be thought of in this way at first. You can then skip things where you feel necessary.
Data pre-processing and feature extraction
The model
Post-processing
Data pre-processing and feature extraction
This and feature extraction are the two most important parts of a machine learning technique. That's right, NOT THE MODEL. If you have good features then even a very simple model will get amazing results.
Data pre-processing goes from your raw data and remolds it to be better suited to machine learning algorithms. This means pulling out important statistics from your data or converting your data into other formats in order to it being more representative. For example if you are using a technique which is sensitive to range, then you should normalize all your features. If you are using text data you should build word vectors. There are countless ways pre-processing and feature extraction can be implemented.
Then, you want to use feature selection. From all the information you extracted from your data not all of it will be useful. There are machine learning algorithms such as: PCA, LDA, cross-correlation, etc. Which will select the features that are the most representative and ignore the rest.
In your case
First, let's consider the data pre-processing. You notice that type might not be an integer value. This may cause problems when using most machine learning algorithms. You will want to bin these different types and map them onto numbers.
Feature selection: besides using the techniques I outlined above, you should also notice that productID is a useless feature. It should for sure NOT be included in your model. It will just confuse the model and sway it.
As a general rule of thumb, the amount of data that is suggested to have for shallow machine learning models is $10 \times \#features$. So you are limited by the size of your dataset. Also, make sure the outputs are quite well distributed. If you have some skew in your dataset, like a lot of examples where the item was sold right away, then the model will learn this tendency. You do not want this.
The Model
Now that you have your feature-space, which might be entirely different from the original columns you posted, it is time to choose a model. You are trying to estimate the time of sale for an algorithm. Thus, this can be done in two different ways. Either as a classifier problem or as a regression problem.
The classifier problem would separate the different times of sales into distinct bins. For example
Class 1: [0 - 5] days
Class 2: [5 - 10] days
etc...
Of course the more classes you will choose to have then the harder it will be to train the model. That means the resolution of your results is limited by the amount of the data you have available to you.
The other option is to use a regression algorithm. This will learn the tendency of your curve in higher dimensional space and then estimates where along that line a new example would fall. Think of it in 1-dimension. I give you a bunch of heights $x$ and running speeds $y$. The model will learn a function $y(x)$. Then if I give you just a height, you will be able to estimate the running speed of the individual. You will be doing the same thing but with more variables.
There are really a ton of methods that can do this. You can look through some literature reviews on the subject to get a hold of all of them. But, I warn you there is A LOT. I usually start with kernel-Support Vector Regression (k-SVR).
To test your model, separate your dataset into three parts (training, validation, testing) if you have sufficient data; otherwise two parts (training, testing) is also fine. Only train your model on the training set and then evaluate it using the examples it has not yet seen, which are reserved in the testing set.
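A hedged end-to-end sketch of these steps with scikit-learn — the column names follow the question, the rows are fabricated, and SVR is just the suggested starting point, not the only option:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVR

# fabricated stand-in for the real table (Product ID deliberately dropped)
df = pd.DataFrame({'Price': [10.0, 25.0, 7.5, 14.0],
                   'Last_repost': [3, 1, 8, 2],
                   'Type': ['bike', 'sofa', 'bike', 'table'],
                   'Days_to_sell': [4, 20, 2, 9]})     # target: days until sold

X, y = df.drop(columns='Days_to_sell'), df['Days_to_sell']
pre = ColumnTransformer([('num', StandardScaler(), ['Price', 'Last_repost']),
                         ('cat', OneHotEncoder(handle_unknown='ignore'), ['Type'])])
model = Pipeline([('pre', pre), ('svr', SVR(kernel='rbf'))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test))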
Post-processing
This is the step where you can further model your output $y(x)$. In your case that might not be needed. |
H: sentence classification with RNN-LSTM - output layer
I have read a few blogs and papers on the IMDB exercise w.r.t. sentiment classification using LSTMs (at times in conjunction with CNNs), but there the output layer can contain just 1 neuron with a sigmoid, since the sentiment can either be good or bad. But if I need to use the same technique to classify, say, 30,000 sentences into 20 different labels, what should my output layer look like? If I am not mistaken, I should have the same number of neurons in my output layer as the number of labels I am training the data for (20 in this case), each of which has the same sigmoid. Can you please let me know if this makes sense?
AI: When doing a multiclass classification problem, in which the goal is predict exactly one class label for each input, it is standard to use the softmax function (a normalized exponential) as the activation function for the last layer. However, if your problem is a multilabel classification (in which the classes are not mutually exclusive), then using a sigmoid as the activation function would be appropriate. In both situations, you would have as many output units as labels (20 in your case). |
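In Keras terms, a minimal sketch of the two output-layer choices for 20 labels — the embedding and LSTM sizes below are placeholders, not a recommendation:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=64))
model.add(LSTM(64))

# mutually exclusive classes: softmax over the 20 labels
model.add(Dense(20, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# multi-label variant instead: Dense(20, activation='sigmoid') with binary_crossentropy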
H: Input and output feature shapes in CNN for speech recognition
I am currently studying this paper and are trying to understand what exactly the input and output shape is. The paper describes an acoustic model consisting of using cnn-hmm as the acoustic model. The input is a image of mel-log filter energies visualised as spectograms. The paper describes a method for phone recognition in which (as far I understand) applying a CNN on these spectograms with a limited weight sharing scheme should be beneficial for phone recognition.
The input shape, as far as I understand, is 9-15 frames, which seems a bit confusing, as they don't consider the number of phonemes an utterance may have, or their length, but simply "choose" a number of frames to operate with. The number doesn't seem to be connected with the output in any way - or am I misinterpreting something?
For the output
We used 183 target class labels, i.e., 3 states for each HMM of 61 phones. After decoding, the original 61 phone classes were mapped to a set of 39 classes as in [47] for final scoring. In our experiments, a bigram language model over phones, estimated from the training set, was used in decoding. To prepare the ANN targets, a mono-phone HMM model was trained on the training data set, and it was used to generate state-level labels based on forced alignment.
So the output is divided into 183 classes, mapped onto HMMs with 3 states for each of the 61 phonemes, and the ANN targets (as I see it, target = posterior probability) are generated by training a monophone HMM with forced alignment. I am not sure I understand this process. If the ANN targets are what the CNN should regress to and, in the end, classify the state from, why then process the input? Why not make a simple DNN that does the regression/classification?
It looks like the improvement lies in the use of forced alignment here, and only on monophones? Where is the improvement?
And again, how am I supposed to link the input shape and the output shape based on this? This would require the audio files to have a certain length, but the length of the audio is never specified, so I am assuming that this is not the case.
AI: DNN/CNN prediction(training) is done for 1 frame at a time. The output can be any of the 183 outputs states. Length of the audio files is not a problem since the input to the DNN/CNN is of same dimension only the number of inputs change with audio length.
e.g. if 1.wav has 500 frames (each frame a 39-dimensional feature vector) and 2.wav has 300 frames, the trained model will still take a 39-dimensional input and produce a 183-dimensional output. So depending on the length we'll get a different number of outputs.
Since all the frames in an utterance are tested against all 183 possibilities, the output always remains 183-dimensional. There is no need to specify the number of phonemes an utterance can have, as everything is being done at frame level.
Frame concatenation (9-15 frames) is done to leverage contextual properties of speech data. Phone changes are context dependent. For 15 frame context, we change the input of DNN to [7*39 (left_context) 39 7*39(right_context)], a 585 dimensional vector. So now DNN will take 585 dimensional data as input and will output a 183 dimensional vector.
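A small numpy sketch of that context stacking — the feature matrix below is made up (39-dimensional frames, ±7 frames of context):

import numpy as np

feats = np.random.randn(500, 39)          # one utterance: 500 frames of 39-dim features
context = 7

padded = np.pad(feats, ((context, context), (0, 0)), mode='edge')
stacked = np.hstack([padded[i:i + len(feats)] for i in range(2 * context + 1)])
print(stacked.shape)                      # (500, 585): one 585-dim input per frame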
CNN input
There exist several different alternatives to organizing these MFSC features into maps for the CNN. First, as shown in Fig. 1(b), they can be arranged as three 2-D feature maps, each of which represents MFSC features (static, delta and delta-delta) distributed along both frequency (using the frequency band index) and time (using the frame number within each context window). In this case, a two-dimensional convolution is performed (explained below) to normalize both frequency and temporal variations simultaneously. Alternatively, we may only consider normalizing frequency variations. In this case, the same MFSC features are organized as a number of one-dimensional (1-D) feature maps (along the frequency band index), as shown in Fig. 1(c). For example, if the context window contains 15 frames and 40 filter banks are used for each frame, we will construct 45 (i.e., 15 times 3) 1-D feature maps, with each map having 40 dimensions, as shown in Fig. 1(c). As a result, a one-dimensional convolution will be applied along the frequency axis. In this paper, we will only focus on this latter arrangement found in Fig. 1(c), a one-dimensional convolution along frequency.
So the input to the CNN will be an image patch of size 45 * 40 regardless of the length of the audio file, just the number of such inputs will depend on the length of audio file.
Why force alignment?
Now we are doing everything at frame level so for each frame we need the state labels. Usually this timing information is not available.
Transcribed data usually looks like this,
1.wav -> I am a cat
Now I don't know how many frames belong to I or to a.
HMMs are trained on this data and force alignment is done to generate state level labels for each frame.
Better modeling of the input-output relation gives an improvement over other methods. Filter bank features also contribute to the improvement: a DNN trained with filter bank features gives better performance compared to a DNN trained with MFCC features.
H: Why is my neural network not learning?
I am using the Keras library (with Python 3.6) to create a neural network.
My network maintains a constant overall maximum accuracy of 62.5%, over 16 training samples.
In what ways can I increase this accuracy?
Should I increase the number of training samples, or will restricting some of the data in the training samples help? Or something else that I might not know of?
Any help is greatly appreciated.
Here are the layers of my neural network:
# Build neural network
# Neural net with multiple layers
model = Sequential()
model.add(Dense(32, input_dim=17, init='uniform', activation='sigmoid'))
model.add(Dense(64, init='uniform', activation='relu'))
model.add(Dense(64, init='uniform', activation='relu'))
model.add(Dense(64, init='uniform', activation='relu'))
model.add(Dense(32, init='uniform', activation='relu'))
model.add(Dense(16, init='uniform', activation='sigmoid'))
model.add(Dense(4, init='uniform', activation='sigmoid'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit model
history = model.fit(X, Y, validation_split=0.46, nb_epoch=150, batch_size=3)
AI: A neural network is the wrong approach for a problem with a small training set. Even if you only had 2 features that were very representative of your target function, 16 training examples would not be sufficient.
As a very general rule of thumb I use 100 examples for each feature in my dataset. This then increases exponentially with every single different class you expect. 16 instances is not enough to train a neural network. You will always have huge error margins when applying your model on a testing set. Even more problematic is the fact that you are using a very deep neural network. This will require even more training instances to properly learn the function.
I suggest you use a general machine learning technique such as SVM. This will likely give better results. Try these techniques instead and see what results you get: k-NN, kernel SVM, k-means clustering.
But, be warned 16 training instances is still very little. |
H: How can I know how to interpret the output coefficients (`coefs_`) from the model sklearn.svm.LinearSVC()?
I'm following Introduction to Machine Learning with Python: A Guide for Data Scientists by Andreas C. Müller and Sarah Guido, and in Chapter 2 a demonstration of applying LinearSVC() is given. The result of classifying three blobs is shown in this screenshot:
The three blobs are obviously correctly classified, as depicted by the colored output.
My question is how are we supposed to know how to interpret the model fit output in order to draw the three lines? The output parameters are given by
print(LinearSVC().fit(X,y).coef_)
[[-0.17492286 0.23139933]
[ 0.47621448 -0.06937432]
[-0.18914355 -0.20399596]]
print(LinearSVC().fit(X,y).intercept_)
[-1.07745571 0.13140557 -0.08604799]
And the authors walk us through how to draw the lines:
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X,y)
...
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1]) #HOW DO WE KNOW
plt.ylim(-10, 15)
plt.xlim(-10, 8)
plt.show()
The line of code with the comment is the one that converts our coefficients into a slope/intercept pair for the line:
y = -(coef_0 / coef_1) x - intercept/coef_1
where the term in front of x is the slope and -intercept/coef_1 is the intercept. In the documentation on LinearSVC, the coef_ and intercept_ are just called "attributes" but don't point to any indicator that coef_0 is the slope and coef_1 is the negative of some overall scaling.
How can I look up the interpretation of the output coefficients of this model and others similar to it in Scikit-learn without relying on examples in books and StackOverflow?
AI: Here's one (admittedly hard) way.
If you really want to understand the low-level details, you can always work through the source code. For example, we can see that the LinearSVC fit method calls _fit_liblinear. That calls train_wrap in liblinear, which gets everything ready to call into the C++ function train.
So train in linear.cpp is where the heavy lifting begins. Note that the w member of the model struct in the train function gets mapped back to coef_ in Python.
Once you understand exactly what the underlying train function does, it should be clear exactly what coef_ means and why we draw the lines that way.
While this can be a little laborious, once you get used to doing things this way, you will really understand how everything works from top to bottom. |
H: How to detect the match precision of OneVsRestClassifier
I've improved my text classification to topic module, from simple word2vec to piped tfidf and OneVsRestClassifier (using sklearn). It does improve the classification but with word2vec I was able to calculate the match percentage for each topic and with OneVsRestClassifier i get a match or no match to a specific topic. Is there a way to see with OneVsRestClassifier what was the percentage of the classification?
P.S.
I am not talking about evaluating the performance of the training but the actual real time matching percentage.
AI: Yes, of course.
Assuming that you have used sklearn's OneVsRestClassifier and so you have a decision function — for example a Support Vector Classifier with, say, a linear kernel — use set_params to set the probability key to True (the default is False), use this in the OneVsRestClassifier, and then call the built-in predict_proba function, like this:
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

mod = OneVsRestClassifier(SVC(kernel='linear').set_params(probability=True)).fit(samples, classes)
print(mod.predict_proba(np.array([your_sample_vector]).reshape(1, -1)))
Edit:
You can use your old LinearSVC with decision_function to find the distance from the hyperplane and convert them to probabilities like
from sklearn.svm import LinearSVC

mod = OneVsRestClassifier(LinearSVC()).fit(samples, classes)
proba = 1. / (1. + np.exp(-mod.decision_function(np.array(your_test_array).reshape(1, -1))))
proba /= proba.sum(axis=1).reshape((proba.shape[0], -1))
print(proba)
Now you don't need to tune the parameters, I guess. :)
H: Orange (Data Mining) : How to start using "Orange" from Python Anaconda Environment?
I have installed "Orange Data Mining v3.4.1" in an Anaconda Python 3 environment using the command "conda install orange3" successfully.
However, I do not know how to launch "Orange" as an application to start using it. There is also no icon on the desktop.
Please help. Thank you very much in advance
Regards
Pearapon S.
AI: Instructions can be found on their Github page: https://github.com/biolab/orange3#starting-orange-gui
Short Answer:
In the conda env run orange-canvas
or
python3 -m Orange.canvas. Add --help for a list of program options. |
H: Error While Trying To Transform Data Into Stocks Object
I am following a tutorial on R-Bloggers on an introduction to stock market data analysis. I got to this part -
if (!require("magrittr")) {
install.packages("magrittr")
library(magrittr)
}
## Loading required package: magrittr
stock_return % t % > % as.xts
head(stock_return)
But, I keep getting this error -
Error: unexpected '>' in "stock_return % t % >"
Please how do I resolve this?
AI: Turns out there were two versions of the tutorial. One version incomplete, the other complete. This is the complete version -
if (!require("magrittr")) {
install.packages("magrittr")
library(magrittr)
}
stock_return = apply(stocks, 1, function(x) {x / stocks[1,]}) %>%
t %>% as.xts
head(stock_return) |
H: How to use binary text classifier(built using SVM with TF-IDF) to classify new text document?
I have built a binary text classifier using SVM on TF-IDF for news articles (Sports / Non-Sports).
But I am not sure how to classify a new document using this model, since TF-IDF is calculated based on the occurrence of a word across all other documents.
Do I have to merge the test and train data every time I receive a new document for classification? That would also change the model every time.
Am I missing something? I think that, although SVM on TF-IDF gives good results, it cannot be used in production this way.
Is there any other way to tackle this issue?
Let's take an example:
Training Set:
Doc_1: Chelsea won the match. {Sports}
Doc_2: India won the third test match against Austrailia {Sports}
Doc_3: I want to sleep {Non-Sport}
Doc_4: 13 places to see in Auckland {Non-Sport}
New Testing Set:
Doc_5: Climate change impacts in Austrailia
Now how can I find the IDF score of "Austrailia" in Doc_5 without merging this document with the training set?
Since Doc_5 contains the word "Austrailia", the IDF score of "Austrailia" in Doc_2 would also change, and thus the model would need retraining.
AI: What is your model built in?
Most popular libraries have a score function separate from the training part. You should be able to just pass the new document to the score function of the trained model and get back the predicted class. |
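A minimal sketch of this workflow with scikit-learn (the pipeline and labels below are illustrative, not your exact setup): fit the TF-IDF vectorizer and the SVM once on the training set, then only transform/predict on new documents. The fitted vectorizer keeps the vocabulary and IDF weights it learned from the training corpus, so a new document does not change them and no retraining is needed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
train_docs = [
    "Chelsea won the match.",
    "India won the third test match against Austrailia",
    "I want to sleep",
    "13 places to see in Auckland",
]
train_labels = ["Sports", "Sports", "Non-Sport", "Non-Sport"]
# Fit the vectorizer and the classifier once on the training data
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_docs, train_labels)
# New documents are only transformed with the already-fitted IDF weights;
# words unseen at training time (e.g. "Climate") are simply ignored.
print(model.predict(["Climate change impacts in Austrailia"]))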
H: Building a machine learning model based on a set of timestamped features to predict/classify a label/value?
I'm trying to apply machine learning to pharmaceutical manufacturing to predict whether batches of drug products manufactured are good or not. for the sake of relatability, let's use coffee brewing as an analogous process. Let's imagine that I'm trying to predict the acidity of the coffee that I've brewed.
The dataset that I have contains features such as water temperature, stirring speed and pressure that are constantly measured (say, on a per-second basis) over a variable amount of time (the first cup may be brewed in 5 minutes, the second in 10, etc.).
What kind of preprocessing should I perform on such a multidimensional dataset? One stumbling block is that for each observation, the duration is different, which may complicate dimension reduction? Once preprocessed, is there any specific model that would suit the task at hand? I'm looking at something like a regression but alternatively, classifiers seem to be fine as well if I split the acidity(pH) into "<5.5" or ">5.5"?
I hope to get some general directions and if you can paste a few links to texts or examples that'll be good! Also, I'm more familiar with python and scikit learn, so if you can point me in the right section in the documentation that'll be great too!
AI: I don't know much about coffee or pharmaceuticals but I think the widely varying time samples is a problem. If I brewed one batch of coffee for a minute and another for 5 hours, I'm pretty sure the 5 hour batch would come out burnt-tasting in all cases.
Can you break the samples up into cohorts by duration and then train on each cohort? You'd end up with a model for the "1 minute batch", a model for the "1 hour batch", etc. |
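If duration cohorts feel too coarse, another common preprocessing step is to collapse each batch's variable-length time series into a fixed-length vector of summary statistics that any scikit-learn regressor or classifier can consume. A rough sketch with pandas, where the column names (batch_id, temperature, stir_speed, pressure) are assumptions, not your actual schema:
import pandas as pd
# One row per second, many rows per batch; column names here are assumptions
readings = pd.read_csv("batch_readings.csv")  # batch_id, temperature, stir_speed, pressure
# Collapse each batch into fixed-length summary features, regardless of its duration
features = readings.groupby("batch_id").agg(
    duration_s=("temperature", "size"),
    temp_mean=("temperature", "mean"),
    temp_std=("temperature", "std"),
    temp_max=("temperature", "max"),
    stir_mean=("stir_speed", "mean"),
    pressure_mean=("pressure", "mean"),
)
# 'features' has one row per batch; join it with the measured pH and feed it to,
# e.g., sklearn.ensemble.RandomForestRegressor (or bin pH at 5.5 for a classifier).
print(features.head())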
H: Dynamic clustering for text documents
I have a few hundred thousand text documents. Some of them are pretty similar: they differ just in, e.g., names or some numbers, and all the other text is the same. I would like to cluster these documents, so that when I list them, the most similar ones are listed together in groups. That way I would avoid having numerous (almost) identical documents listed one after another.
I was thinking some kind of clustering would come in handy. But the problem is, I don't know how many clusters I need. The number would also have to be dynamic. And still, most of the documents wouldn't belong to any cluster, because they don't have any similar documents. So I would cluster just the similar documents.
Can anyone point me to the direction that would help me solve this problem, or provide some examples of similar problems.
AI: It sounds as if you don't need clustering.
But rather you are trying to detect near duplicates.
The difference is that clustering tries to organize everything, with a focus on the larger, overall structure, but much of your data probably isn't duplicated. Clustering is difficult and slow. Near-duplicate detection is much easier and much faster (e.g., with MinHash or similarity search). |
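For illustration, here is a tiny self-contained MinHash sketch in Python (not tied to any particular library; a real system would add locality-sensitive hashing so not every pair has to be compared). Documents whose estimated Jaccard similarity of word shingles exceeds a threshold are flagged as near duplicates; the documents and the threshold below are made up.
import hashlib
from itertools import combinations

def shingles(text, k=3):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash(shingle_set, num_hashes=64):
    # One signature value per seed: the minimum hash of any shingle
    sig = []
    for seed in range(num_hashes):
        sig.append(min(int(hashlib.md5(("%d|%s" % (seed, s)).encode()).hexdigest(), 16)
                       for s in shingle_set))
    return sig

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

docs = {
    "d1": "Invoice 1041 issued to Alice Smith for April services",
    "d2": "Invoice 1042 issued to Alice Smith for April services",
    "d3": "Meeting notes from the quarterly planning session",
}
sigs = {name: minhash(shingles(text)) for name, text in docs.items()}
for a, b in combinations(docs, 2):
    sim = estimated_jaccard(sigs[a], sigs[b])
    if sim > 0.4:  # the threshold is a tunable assumption
        print("%s and %s look like near duplicates (similarity ~ %.2f)" % (a, b, sim))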
H: Why don't tree ensembles require one-hot-encoding?
I know that models such as random forest and boosted trees don't require one-hot encoding for predictor levels, but I don't really get why. If the tree is making a split in the feature space, then isn't there an inherent ordering involved? There must be something I'm missing here.
To add to my confusion I took a problem I was working on and tried using one-hot encoding on a categorical feature versus converting to an integer using xgboost in R. The generalization error using one-hot encoding was marginally better.
Then I took another variable and did the same test, and saw the opposite result.
Can anyone help explain this?
AI: The encoding comes down to a question of representation and the way the algorithms cope with that representation.
Let's consider 3 methods of representing n categorical values of a feature:
A single feature with n numeric values.
One-hot encoding (n Boolean features, exactly one of which must be on).
log n Boolean features, representing the n values in binary.
Note that we can represent the same values with each of these methods. The one-hot encoding is less efficient, requiring n bits instead of log n bits.
More than that, if we are not aware that the n features in the one-hot encoding are mutually exclusive, our VC dimension and our hypothesis set are larger.
So one might wonder why we use one-hot encoding in the first place.
The problem is that with the single-feature representation and the log representation we might make wrong deductions.
In the single-feature representation the algorithm might assume an order. Usually the encoding is arbitrary, and the category encoded as 3 is as far from 4 as it is from 8. However, the algorithm might treat the feature as numeric and come up with rules like "f < 4". You might claim that if the algorithm found such a rule it could be beneficial, even if unintended. While that might be true, a small data set, noise, and other reasons for the data set to misrepresent the underlying distribution might lead to false rules.
The same can happen with the logarithmic representation (e.g., rules like "the third bit is on"). Here we are likely to get more complex rules, all unintended and sometimes misleading.
So in an ideal world these would be equivalent representations, leading to identical results. However, in some cases the less efficient representation can lead to worse results, while in other cases the badly deduced rules can lead to worse results.
In general, if the values are indeed very distinct in behaviour, the algorithm probably won't deduce such rules and you will benefit from the more efficient representation. It is often hard to analyze this beforehand, so what you did (trying both representations) is a good way to choose the proper one. |
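If you want to see this empirically in Python rather than R (a sketch on made-up data, not your original xgboost setup), you can compare a tree ensemble trained on an integer-coded categorical feature with the same data one-hot encoded; which encoding wins is data-dependent, as in your experiments.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
color = rng.integers(0, 6, size=n)   # a categorical feature with 6 arbitrary levels
noise = rng.normal(size=n)
# The target depends on membership in {1, 4}, not on any ordering of the codes
y = (np.isin(color, [1, 4]) ^ (noise > 1.0)).astype(int)

X_int = pd.DataFrame({"color": color, "noise": noise})
X_ohe = pd.get_dummies(X_int, columns=["color"])

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("integer-coded :", cross_val_score(model, X_int, y, cv=5).mean())
print("one-hot coded :", cross_val_score(model, X_ohe, y, cv=5).mean())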
H: How is PCA is different from SubSpace clustering and how do we extract variables responsible for the first PCA component?
New update:
I understand that PCA components capture the directions of highest variance, but I would like to know how to extract the key variables that are responsible for that variance from the PCA components.
Ideally, a simple example would help.
This is my code:
# Implementing PCA for visualizing clusters after k-means clustering
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
# Interpret the 3-cluster solution
model3 = KMeans(n_clusters=3)
model3.fit(clus_train)
clusassign = model3.predict(clus_train)
# plot clusters
'''The new variables, called canonical variables, are ordered in terms
of the proportion of variance in the clustering variables that is
accounted for by each of the canonical variables. So the first
canonical variable will account for the largest proportion of the
variance. The second canonical variable will account for the next
largest proportion of variance, and so on. Usually, the majority of
the variance in the clustering variables will be accounted for by the
first couple of canonical variables and those are the variables that
we can plot. '''
pca_2 = PCA(2)  # selecting 2 components
plot_columns = pca_2.fit_transform(clus_train)
plt.scatter(x=plot_columns[:, 0], y=plot_columns[:, 1], c=model3.labels_)
# Observations that are more spread out indicate less correlation among the
# observations and higher within-cluster variance.
plt.xlabel('Canonical variable 1')
plt.ylabel('Canonical variable 2')
plt.title('Scatterplot of Canonical Variables for 3 Clusters')
plt.show()
AI: Reducing the dimensionality of a dataset with PCA does not only benefit humans trying to look at the data in a graspable number of dimensions. It is also useful for training machine learning algorithms on a subset of dimensions, both to reduce the complexity of the data and the computational cost of training such a model. |
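On the specific question of which original variables drive the first component: after fitting, scikit-learn exposes the loadings in pca_2.components_ (one row per component, one column per input variable) and each component's share of variance in explained_variance_ratio_. A small sketch reusing the pca_2 and clus_train names from the question, and assuming clus_train is a pandas DataFrame:
import numpy as np
# Share of total variance captured by each of the two components
print(pca_2.explained_variance_ratio_)
# Loadings of the first component: one weight per original clustering variable
loadings = pca_2.components_[0]
order = np.argsort(np.abs(loadings))[::-1]
for idx in order[:5]:
    print(clus_train.columns[idx], loadings[idx])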
H: MNIST Deep Neural Network using TensorFlow
I have been working on this code for a while and it gave me a lot of headache before I got it to work. It basically tries to use the MNIST dataset to classify handwritten digits. I am not using the prepackaged MNIST in TensorFlow because I want to learn to preprocess the data myself and gain a deeper understanding of TensorFlow.
It's finally working, but I would love it if someone with expertise could take a look at it and tell me what they think, and whether the results it's producing are real or whether it's overfitting or not learning at all.
It's giving me an accuracy between 83% and 91% on the test dataset.
The dataset I'm using is from https://pjreddie.com/projects/mnist-in-csv/ (basically the two links at the top of the page).
Here is the code:
import numpy as np
import tensorflow as tf
sess = tf.Session()
from sklearn import preprocessing
import matplotlib.pyplot as plt
with tf.Session() as sess:
# lets load the file
train_file = 'mnist_train.csv'
test_file = 'mnist_test.csv'
#train_file = 'mnist_train_small.csv'
#test_file = 'mnist_test_small.csv'
train = np.loadtxt(train_file, delimiter=',')
test = np.loadtxt(test_file, delimiter=',')
x_train = train[:,1:785]
y_train = train[:,:1]
x_test = test[:,1:785]
y_test = test[:,:1]
print(x_test.shape)
# lets normalize the data
def normalize(input_data):
minimum = input_data.min(axis=0)
maximum = input_data.max(axis=0)
#normalized = (input_data - minimum) / ( maximum - minimum )
normalized = preprocessing.normalize(input_data, norm='l2')
return normalized
# convert to a onehot array
def one_hot(input_data):
one_hot = []
for item in input_data:
if item == 0.:
one_h = [1.,0.,0.,0.,0.,0.,0.,0.,0.,0.]
elif item == 1.:
one_h = [0.,1.,0.,0.,0.,0.,0.,0.,0.,0.]
elif item == 2.:
one_h = [0.,0.,1.,0.,0.,0.,0.,0.,0.,0.]
elif item == 3.:
one_h = [0.,0.,0.,1.,0.,0.,0.,0.,0.,0.]
elif item == 4.:
one_h = [0.,0.,0.,0.,1.,0.,0.,0.,0.,0.]
elif item == 5.:
one_h = [0.,0.,0.,0.,0.,1.,0.,0.,0.,0.]
elif item == 6.:
one_h = [0.,0.,0.,0.,0.,0.,1.,0.,0.,0.]
elif item == 7.:
one_h = [0.,0.,0.,0.,0.,0.,0.,1.,0.,0.]
elif item == 8.:
one_h = [0.,0.,0.,0.,0.,0.,0.,0.,1.,0.]
elif item == 9.:
one_h = [0.,0.,0.,0.,0.,0.,0.,0.,0.,1.]
one_hot.append(one_h)
one_hot = np.array(one_hot)
#one_hot = one_hot.reshape(len(one_hot),10,1)
#one_hot = one_hot.reshape(len(one_hot), 7,1)
#return tf.constant([one_hot])
return one_hot
def one_hot_tf(val):
indices = val
depth = 10
on_value = 1.0
off_value = 0.0
axis = -1
oh = tf.one_hot(indices, depth,
on_value=on_value, off_value=off_value,
axis=axis, dtype=tf.float32,
name='ONEHOT')
return (oh)
x_train = normalize(x_train)
x_test = normalize(x_test)
# x_train = sess.run(tf.convert_to_tensor(x_train))
# x_test = sess.run(tf.convert_to_tensor(x_test))
'''
data_initializer = tf.placeholder(dtype=x_train.dtype,
shape=x_train.shape)
label_initializer = tf.placeholder(dtype=x_test.dtype,
shape=x_test.shape)
x_train= sess.run(tf.Variable(data_initializer, trainable=False, collections=[]))
x_test = sess.run(tf.Variable(label_initializer, trainable=False, collections=[]))
'''
y_test = one_hot(y_test)
y_train = one_hot(y_train)
print(y_test[:5])
# y_test = sess.run(one_hot_tf(y_test))
# y_train = sess.run(one_hot_tf(y_train))
# define the parameters
input_nodes = 784
output_nodes = 10
hl1_nodes = 500
hl2_nodes = 500
hl3_nodes = 500
epochs = 10
x = tf.placeholder(tf.float32, [None, input_nodes])
y = tf.placeholder(tf.float32)
# graphing
loss_rate = []
def nn(data):
layer1 = {'w':tf.Variable(tf.random_normal([input_nodes, hl1_nodes])),
'b':tf.Variable(tf.random_normal([hl1_nodes]))}
layer2 = {'w':tf.Variable(tf.random_normal([hl1_nodes, hl2_nodes])),
'b':tf.Variable(tf.random_normal([hl2_nodes]))}
layer3 = {'w':tf.Variable(tf.random_normal([hl2_nodes, hl3_nodes])),
'b':tf.Variable(tf.random_normal([hl3_nodes]))}
output_layer = {'w':tf.Variable(tf.random_normal([hl3_nodes, output_nodes])),
'b':tf.Variable(tf.random_normal([output_nodes]))}
l1 = tf.add(tf.matmul(data, layer1['w']), layer1['b'])
l1 = tf.nn.relu(l1)
l2 = tf.add(tf.matmul(l1, layer2['w']), layer2['b'])
l2 = tf.nn.relu(l2)
l3 = tf.add(tf.matmul(l2, layer3['w']), layer3['b'])
l3 = tf.nn.relu(l3)
output = tf.add(tf.matmul(l3, output_layer['w']), output_layer['b'])
return(output)
def train(x):
prediction = nn(x)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(epochs):
epochloss = 0
batch_size = 10
batches = 0
for batch in range(int(len(x_train)/batch_size)):
next_batch = batches+batch
_, c = sess.run([optimizer, loss], feed_dict={x:x_train[batches:next_batch, :], y:y_train[batches:next_batch, :]})
epochloss = epochloss + c
batches += batch
loss_rate.append(c)
print("Epoch ", epoch, " / ", epochs, " - Loss ", epochloss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print("Accuracy : ", accuracy.eval({x:x_test, y:y_test}))
train(x)
plt.plot(loss_rate)
plt.show()
The outputs of 3 different runs are:
=========== RESTART: /Users/macbookpro/Desktop/AI/tf/OWN/test3.py ===========
(10000, 784)
[[ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]]
Epoch 0 / 5 - Loss nan
Epoch 1 / 5 - Loss nan
Epoch 2 / 5 - Loss nan
Epoch 3 / 5 - Loss nan
Epoch 4 / 5 - Loss nan
Accuracy : 0.9053
=========== RESTART: /Users/macbookpro/Desktop/AI/tf/OWN/test3.py ===========
(10000, 784)
[[ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]]
Epoch 0 / 5 - Loss nan
Epoch 1 / 5 - Loss nan
Epoch 2 / 5 - Loss nan
Epoch 3 / 5 - Loss nan
Epoch 4 / 5 - Loss nan
Accuracy : 0.8342
=========== RESTART: /Users/macbookpro/Desktop/AI/tf/OWN/test3.py ===========
(10000, 784)
[[ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]]
Epoch 0 / 5 - Loss nan
Epoch 1 / 5 - Loss nan
Epoch 2 / 5 - Loss nan
Epoch 3 / 5 - Loss nan
Epoch 4 / 5 - Loss nan
Accuracy : 0.9
---Update---
I found the answer by rewriting the code as follows:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
sess = tf.Session()
file = "mnist_train.csv"
data = np.loadtxt(file, delimiter=',')
y_vals = data[:,0:1]
x_vals = data[:,1:785]
seed = 3
tf.set_random_seed(seed)
np.random.seed(seed)
batch_size = 90
# split into 80/20 datasets, normalize between 0:1 with min max scaling
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
# up there we chose randomly 80% of the data
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
# up we chose the remaining 20%
print(test_indices)
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
def normalize_cols(m):
col_max = m.max(axis=0)
col_min = m.min(axis=0)
return (m-col_min)/(col_max - col_min)
x_vals_train = np.nan_to_num(normalize_cols(x_vals_train))
x_vals_test = np.nan_to_num(normalize_cols(x_vals_test))
# function that initializes the weights and the biases
def init_weight(shape, std_dev):
weight = tf.Variable(tf.random_normal(shape, stddev=std_dev))
return(weight)
def init_bias(shape, std_dev):
bias= tf.Variable(tf.random_normal(shape, stddev=std_dev))
return(bias)
# initialize placeholders.
x_data = tf.placeholder(shape=[None, 784], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# the fully connected layer will be used three times for all three hidden layers
def fully_connected(input_layer, weights, biases):
layer = tf.add(tf.matmul(input_layer, weights), biases)
return (tf.nn.relu(layer))
# Now create the model for each layer and the output layer.
# we will initialize a weight matrix, bias matrix and the fully connected layer
# for this, we will use hidden layers of size 500, 500, and 10
'''
This will mean many variables to fit. This is because between the data and the first hidden layer we have
784*500+500 = 392,500 variables to change.
Continuing this way, we can count how many variables we have to fit overall.
'''
# create first layer (500 hidden nodes)
weight_1 = init_weight(shape=[784,500], std_dev=10.0)
bias_1 = init_bias(shape=[500], std_dev=10.0)
layer_1 = fully_connected(x_data, weight_1, bias_1)
# create second layer (500 hidden nodes)
weight_2 = init_weight(shape=[500,500], std_dev=10.0)
bias_2 = init_bias(shape=[500], std_dev=10.0)
layer_2 = fully_connected(layer_1, weight_2, bias_2)
# create third layer (10 hidden nodes)
weight_3 = init_weight(shape=[500,10], std_dev=10.0)
bias_3 = init_bias(shape=[10], std_dev=10.0)
layer_3 = fully_connected(layer_2, weight_3, bias_3)
# create output layer (1 output value)
weight_4 = init_weight(shape=[10,1], std_dev=10.0)
bias_4 = init_bias(shape=[1], std_dev=10.0)
final_output = fully_connected(layer_3, weight_4, bias_4)
# define the loss function and the optimizer and initializing the model
loss = tf.reduce_mean(tf.abs(y_target - final_output))
optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess.run(init)
# we will now train our model 10 times, store train and test loss, select a random batch,
# and print the status every 1 generation
# initialize the loss vectors
loss_vec = []
test_loss = []
for i in range(10):
# choose random indices for batch selection
rand_index = np.random.choice(len(x_vals_train), size=batch_size)
# get random batch
rand_x = x_vals_train[rand_index]
#rand_y = np.transpose(y_vals_train[rand_index])
rand_y = y_vals_train[rand_index] #???????????
# run the training step
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
# get and store train loss
temp_loss = sess.run(loss, feed_dict={x_data:rand_x, y_target:rand_y})
loss_vec.append(temp_loss)
# get and store test loss
#test_temp_loss = sess.run(loss, feed_dict={x_data:x_vals_test, y_target:np.transpose([y_vals_test])})
test_temp_loss = sess.run(loss, feed_dict={x_data:x_vals_test, y_target:y_vals_test}) #???????
test_loss.append(test_temp_loss)
if(i+1) %1==0:
print('Generation: '+str(i+1)+". Loss = "+str(temp_loss))
plt.plot(loss_vec, 'k-', label='Train Loss')
plt.plot(test_loss, 'r--', label='Test Loss')
plt.title('Loss Per generation ')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()
I commented most of it so that if someone stumbles here and needs some help, they can understand what's going on.
AI: Given that you have such high error on the test set and have so many hidden layers/nodes, it's quite possible that your model is overfitting. Try using dropout or weight decay to regularize the weights of your network. |
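To make that concrete, here is a minimal, hedged sketch of both options in the same TF1-style graph code (it reuses the data, layer1, layer2, prediction and y names from the question, and the keep_prob and weight-decay values are illustrative, not tuned): dropout is applied to a hidden layer's activations, and weight decay adds an L2 penalty on the weight matrices to the loss.
import tensorflow as tf
keep_prob = tf.placeholder(tf.float32)  # feed e.g. 0.5 while training and 1.0 at test time
# Dropout on a hidden layer's activations
l1 = tf.nn.relu(tf.add(tf.matmul(data, layer1['w']), layer1['b']))
l1 = tf.nn.dropout(l1, keep_prob)
# Weight decay: add an L2 penalty on the weights to the cross-entropy loss
l2_penalty = 1e-4 * (tf.nn.l2_loss(layer1['w']) + tf.nn.l2_loss(layer2['w']))
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y)) + l2_penalty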
H: Randomizing selection in R
I am trying to create a comparison group. So far this group contains 45 data points and I need to populate the remaining 55 (for a total of 100 data points).
These remaining 55 need to be a randomized selection supplied from a larger data set. Any recommendations for R code that would create a randomization loop?
AI: This will give you a sample with 55 records from the larger data set that you have.
sample <- df[sample(1:nrow(df), 55, replace=FALSE),]
If you want the sample to be reproducible, you need to set the seed like this:
set.seed(57)
sample <- df[sample(1:nrow(df), 55, replace=FALSE),]
So later you can use the same seed to get the same result.
P.S. replace=FALSE means that once a record has been picked from df for inclusion in the sample, it is excluded from further draws, so it will not be chosen multiple times. |
H: Incorrect output dimension?
I am trying to start training a CNN which has 72 inputs and one output, a vector of length 24 stating a class for every third input (72/24 = 3). There are 145 classes.
This is how I've currently designed the network and how I pass my data:
print "After test_output/input"
print "Length:"
print len(data_train_input)
print len(data_train_output)
print len(data_test_input)
print len(data_test_output)
print "Type;"
print type(data_train_input[0])
print type(data_train_output[0])
print "Size [0]"
print data_train_input[0].shape
print data_train_output[0].shape
list_of_input = [Input(shape = (78,3)) for i in range(72)]
list_of_conv_output = []
list_of_max_out = []
for i in range(72):
list_of_conv_output.append(Conv1D(filters = 32 , kernel_size = 6 , padding = "same", activation = 'relu')(list_of_input[i]))
list_of_max_out.append(MaxPooling1D(pool_size=3)(list_of_conv_output[i]))
merge = keras.layers.concatenate(list_of_max_out)
reshape = Reshape((-1,))(merge)
dense1 = Dense(500, activation = 'relu')(reshape)
dense2 = Dense(250,activation = 'relu')(dense1)
dense3 = Dense(1 ,activation = 'softmax')(dense2)
model = Model(inputs = list_of_input , outputs = dense3)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam" , metrics = [metrics.sparse_categorical_accuracy])
reduce_lr=ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1, mode='auto', epsilon=0.01, cooldown=0, min_lr=0.000000000000000000001)
stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
print "Train!"
list_train_input = []
list_test_input = []
for i in range(len(data_train_input)):
list_train_input.append(data_train_input[i])
for i in range(len(data_test_input)):
list_test_input.append(data_test_input[i])
hist_current = model.fit(x = [np.array(list_train_input[i]) for i in range(72)],
y = np.array(data_train_output),
shuffle=False,
validation_data=([np.array(list_test_input[i]) for i in range(72)], np.array(data_test_output)),
validation_split=0.1,
epochs=150000,
verbose=1,
callbacks=[reduce_lr,stop])
Which generates this output:
After test_output/input
Length:
9436
9417
1017
1035
Type;
<type 'numpy.ndarray'>
<type 'numpy.ndarray'>
Size [0]
(72, 78, 3)
(24,)
Train!
Traceback (most recent call last):
File "keras_cnn_phoneme_classification.py", line 382, in <module>
model(train_input_data_interweawed_normalized, output_train, test_input_data_interweawed_normalized, output_test, test_name_interweawed_normalized)
File "keras_cnn_phoneme_classification.py", line 361, in model
callbacks=[reduce_lr,stop])
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1405, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1299, in _standardize_user_data
exception_prefix='model target')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 133, in _standardize_input_data
str(array.shape))
ValueError: Error when checking model target: expected dense_3 to have shape (None, 1) but got array with shape (9417, 24)
Why am I getting this error?
I tried changing the output size of dense3 to many different values, but what is it expecting?
AI: The error message is pretty clear... your targets are vectors of length 24, but your model is outputting a vector of length 1.
Change :
dense3 = Dense(1 ,activation = 'softmax')(dense2)
to :
dense3 = Dense(24 ,activation = 'softmax')(dense2) |
H: Convolutional layer dropout layer in keras
According to classical paper
http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf
the dropout operation affects not only the training step but also the test step: we need to multiply all neuron output weights by the retention probability p.
But in the Keras library I found the following implementation of the dropout operation:
retain_prob = 1. - level
...
random_tensor = rng.binomial(x.shape, p=retain_prob, dtype=x.dtype)
...
x *= random_tensor
x /= retain_prob
return x
(see https://github.com/fchollet/keras/blob/master/keras/backend/theano_backend.py)
Why is x divided by retain_prob when it should be multiplied?
Or am I just confused, and is multiplying the weights at test time equivalent to dividing the output values during training?
AI: You are looking at the Keras code implementing dropout for the training step.
In the Keras implementation, the output values are corrected during training (by dividing, in addition to randomly dropping out the values) instead of during testing (by multiplying). This is called "inverted dropout".
Inverted dropout is functionally equivalent to original dropout (as per your link to Srivastava's paper), with a nice feature that the network does not use dropout layers at all during test and prediction. This is explained a little in this Keras issue. |
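A small NumPy illustration of why the two schemes match in expectation (a toy sketch, not the actual Keras code): with inverted dropout the expected activation during training already matches the untouched test-time activation, so nothing needs to be rescaled at test time.
import numpy as np
rng = np.random.default_rng(0)
x = np.ones(1_000_000)      # pretend these are identical neuron activations
p_retain = 0.8              # keep probability, i.e. a dropout level of 0.2
mask = rng.binomial(1, p_retain, size=x.shape)
classic = x * mask               # paper-style dropout: no scaling during training
inverted = x * mask / p_retain   # Keras-style inverted dropout: divide during training
print(classic.mean())   # ~0.8: test-time weights must be multiplied by p to match this scale
print(inverted.mean())  # ~1.0: already on the same scale as the unmodified test-time activations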
H: Should weights on earlier layers change less than weights on later layers in a neural network
I'm trying to debug why my neural network isn't working. One of the things I've observed is that the weights between the input layer and the first hidden layer hardly change at all, whereas weights later in the network (eg. the weights between the last hidden layer and the output) change significantly. Is this to be expected or a symptom of an error in my code?
I'm applying backpropagation and gradient descent to alter the weights.
AI: This is expected and well established: the vanishing gradient problem.
https://en.wikipedia.org/wiki/Vanishing_gradient_problem
This has the effect of multiplying n of these small numbers to compute gradients of the "front" layers in an n-layer network, meaning that the gradient (error signal) decreases exponentially with n and the front layers train very slowly.
While I'm not an expert in neural networks (experts, please add an answer), I'm sure professional neural network implementations are tricky and highly optimized. If you simply implement a vanilla textbook-style neural network, you won't be able to use it to train a large network. Keep it small and you'll be fine. |
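As a rough numerical illustration of the effect (a toy sketch, not a fix): with sigmoid activations each layer contributes a derivative of at most 0.25, so the error signal reaching the early layers shrinks roughly geometrically with depth, which is why the first layer's weights barely move.
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)
grad = 1.0  # pretend the gradient magnitude at the output layer is 1
for layer in range(10, 0, -1):
    z = rng.normal()                         # a typical pre-activation value
    local = sigmoid(z) * (1.0 - sigmoid(z))  # sigmoid derivative, at most 0.25
    w = rng.normal(scale=0.5)                # a typical weight
    grad *= local * w
    print("layer %2d: gradient magnitude ~ %.2e" % (layer, abs(grad)))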
H: Can we generate huge dataset with Generative Adversarial Networks
I'm dealing with a problem where I couldn't find enough data (images) to feed into my deep neural network for training.
I was so inspired by the paper Generative Adversarial Text to Image Synthesis published by Scott Reed et al. on Generative Adversarial Networks.
I was curious to know: can I use the available small dataset as input to a GAN model and generate a much bigger dataset for training deeper network models?
Will it be good enough?
AI: This is unlikely to add much beyond your direct data collection efforts.
The quality of current GAN outputs (as of 2017) will not be high enough. The images produced by a GAN are typically small and can have unusual/ambiguous details and odd distortions. In the paper you linked, the images generated by the system from a sentence have believable blocks of colour given the subject matter, but without the sentence priming you what to expect most of them are not recognisable as any specific subject.
GANs with a less ambitious purpose than generating images from sentences (which is despite my criticism above, a truly remarkable feat IMO) should produce closer to photo-realistic images. But their scope will be less and probably not include your desired image type. Also, typically the output size is small e.g. 64x64 or 128x128*, and there are still enough distortions and ambiguities that original ground truth photos would be far preferable.
The GAN is itself limited by the training data available - it will not do well if you attempt to generate images outside of the scope of its training data. The results shown in the research paper of course focus on the domain supplied by the training data. But you cannot just feed any sentence into this model and expect a result that would be useful elsewhere.
If you find a GAN that has been trained on a suitable data set for your problem, then you are most likely better off trying to source the same data directly for your project.
If you are facing a problem with limited ground truth data, then maybe a better approach to using a GAN would be to use a pre-trained classifier such as VGG-19 or Inception v5, replace the last few fully-connected layers, and fine tune it on your data. Here is an example of doing that using Keras library in Python - other examples can be found with searches like "fine tune CNN image classifier".
* State-of-the-art GANs have got better since I posted this answer. A research team at Nvidia has had remarkable success creating 1024x1024 photo-realistic images. However, this does not change the other points in my answer. GANs are not a reliable source of images for image classification tasks, except maybe for sub-tasks of whatever the GAN has already been trained on and is able to generate conditionally (or maybe more trivially, to provide source data for "other" categories in classifiers). |
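The linked example isn't reproduced here, but a rough sketch of the fine-tuning idea in Keras looks like this (the model choice, layer sizes and class count are placeholders, not part of the original answer):
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model
num_classes = 5  # placeholder: number of classes in your small dataset
# Load a convolutional base pre-trained on ImageNet, without its classifier head
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False   # freeze the pre-trained features at first
# Attach a small new head and train only that on your own images
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(num_classes, activation="softmax")(x)
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)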