H: Inappropriate stemming in nltk.stem
I was starting to learn stemming with nltk and a few words were quite inappropriately stemmed.
For example:- very was stemmed to 'veri', important to 'import', once to 'onc', poorly to 'poorli' , etc.
I was just thinking that during data analysis, does it cause any error or these errors can be ignored?
Or is there a better option for the same purpose
AI: The renowned algorithm for stemming is the Porter stemming algorithm. Hence, you can use stemmer = nltk.stem.PorterStemmer() for the stemming. You can test it using stemmer.stem('poorly').
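These crude-looking stems are expected behaviour rather than errors: Porter stemming only chops suffixes and does not guarantee dictionary words. If that matters for your analysis, lemmatization is an alternative; a minimal sketch comparing the two (assuming the WordNet data has been downloaded via nltk.download('wordnet')):
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ['very', 'important', 'once', 'poorly']:
    # the stem may not be a real word; the lemma stays a dictionary word
    print(word, '->', stemmer.stem(word), '/', lemmatizer.lemmatize(word))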
Moreover, you can see this post for more details. |
H: Are the image data augmentation generators in Keras randomly applied
I am working on an image classification problem and using data augmentation in Keras.
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range=2,
horizontal_flip=True)
I would like to know if the ImageDataGenerator applies the transformations randomly to the images. That is, for example, rotation may be applied to one image, whilst a flip may be applied to another image. I want to know if the decision to apply a rotation or a flip is randomly determined.
AI: I suggest having a look at the relevant documentation.
There, it states:
rotation_range: Int. Degree range for random rotations.
...
horizontal_flip: Boolean. Randomly flip inputs horizontally.
Saying each of the operations is applied randomly means, I would say, that your images will be generated sometimes with and sometimes without the augmentation steps, independently from one another.
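A quick way to convince yourself is to push the same image through the generator repeatedly; a minimal sketch, assuming the Keras 2 ImageDataGenerator API and a dummy image in place of your real data:
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=2, horizontal_flip=True)

image = np.random.rand(32, 32, 3)        # a dummy image standing in for real data
a1 = datagen.random_transform(image)     # each call draws its own rotation/flip parameters
a2 = datagen.random_transform(image)
print(np.allclose(a1, a2))               # almost always False: the random draws are independent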
If that doesn't convince you, here is the relevant snippet from the source code:
if self.horizontal_flip:
if np.random.random() < 0.5:
x = flip_axis(x, img_col_axis)
The code for the rotation step is a little more involved, but it is contained within the same class that I linked above. |
H: What should be the training frequency of a rnn model for timeseries prediction?
If I use an RNN model for time series forecasting, how frequently do I have to retrain the model?
AI: It depends on any number of factors. What kind of accuracy are you currently getting? How often are you getting new predictions from your model? Do you have human intervention at some point to check your errors, relabel, and improve your training sets?
It's also possible that you don't need a schedule, per se. Depending on your data, your pipeline and the programming of your model, you could also have continuous learning built into your model, with a new release of said model after every epoch. So, really, a lot of the answers you are seeking are very specific to your implementation. |
H: How cross validation works for regression?
For a regression-type problem we know the result is a continuous value, so how can it be cross-validated?
In a classification-type problem we know the class label, so it is easy to compare, but how is the comparison done in a regression-type problem?
AI: In both scenarios, we pick one or more performance measures and validate the model based on them. In classification, one may choose to use accuracy, precision, recall, or F-score. In regression, other metrics such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R Squared Score (R^2) might be useful.
In regression/classification problems where the order of data points matters (e.g. time series), we cannot use conventional cross-validation. Instead, some special variations of the cross-validation procedure must be used to make sure we do not train our model on future samples and validate on past instances.
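For instance, a minimal scikit-learn sketch scoring a regressor with cross-validation (and with a time-series-aware splitter); the metric, model and synthetic data are just illustrative choices:
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, TimeSeriesSplit

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# plain 5-fold CV with a regression metric (negated MSE, so higher is better)
print(cross_val_score(Ridge(), X, y, cv=5, scoring='neg_mean_squared_error'))

# if the rows are ordered in time, use a splitter that never trains on the future
print(cross_val_score(Ridge(), X, y, cv=TimeSeriesSplit(n_splits=5),
                      scoring='neg_mean_squared_error'))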
p.s. In regression problems, the labels do not need to be "continuous" (as we define in the continuity of a function in calculus). They can be discrete but real-valued. |
H: Vectorizing text data for ML models
Here is the sample data I have:
Tag 1(Val: X), Tag 2(Val: Y), Tag 3(Val: Z), Label (Val: P)
Tag 1(Val: A), Tag 2(Val: B), Tag 3(Val: C), Label (Val: Q)
Tag 1(Val: D), Tag 2(Val: E), Tag 3(Val: F), Label (Val: R)
Tag 1(Val: G), Tag 2(Val: H), Tag 3(Val: I), Label (Val: S)
All the values are strings and I need to encode them into vectors for training ML models using this data.
How do I make sure that the strings are always converted to the "same" vectors every time?
I notice that when I try a test input with the same value as the training data, it gets vectorized into a different integer.
What is the standard procedure for preserving the mapping between String <--> Hashed Integer representation so that I get the same hash every time?
AI: Regardless of the "hashing" algorithm that is used in your code, the same strings should always be mapped to the same values, unless they only look the same but are not actually identical.
Please look for common errors such as capital vs non-capital letters, type of whitespaces (tab, blank, cr, ln), number of whitespaces, 1 vs l, 0 vs O, etc.
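If the strings really are identical, a simple way to guarantee a stable mapping is to fit an encoder once on the training data and persist it, instead of re-encoding at prediction time; a minimal scikit-learn sketch (the file name and tag values are just illustrative):
from sklearn.preprocessing import LabelEncoder
from joblib import dump, load

train_tags = ['X', 'A', 'D', 'G']            # tag values seen during training
le = LabelEncoder().fit(train_tags)          # fit once, on training data only
dump(le, 'tag1_encoder.joblib')              # persist the fitted mapping

le = load('tag1_encoder.joblib')             # reload the exact same mapping later
print(le.transform(['X', 'G']))              # identical strings -> identical integers every time
Note that LabelEncoder raises an error on values it has never seen, which is often useful when debugging mismatches like the ones described above.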
If this does not solve the issue, please provide some real examples (real tags with the different integer hashing) and a little bit more information about the code/algorithm that you used in your project. |
H: GlobalAveragePooling2D in inception v3 example
I'm a complete beginner at Keras. In the Inception v3 example at https://keras.io/applications/
# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)
As at first step it adds a GlobalAveragePooling2D layer, which is described as:
Global average pooling operation for spatial data.
What does GlobalAveragePooling2D do, and why does the example use it instead of something like Flatten? Which information is averaged?
AI: Prior to GAP, one would flatten the tensor and then add a few fully connected layers to the model. The problem is that a large share of the parameters in the model end up being attributed to those dense layers, which can potentially lead to overfitting. A natural solution was to add dropout to help regularize that.
However, a few years ago the idea of Global Average Pooling came into play. GAP can be viewed as an alternative to the whole Flatten + FC + Dropout paradigm. GAP helps prevent overfitting by doing an extreme form of reduction. Given an H x W x D tensor, GAP will average each H x W feature map into a single number, reducing the tensor to 1 x 1 x D.
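A minimal sketch of that shape reduction (the 7x7x512 input size is just an illustrative backbone output, not taken from the Inception example):
from keras.layers import Input, GlobalAveragePooling2D
from keras.models import Model

inp = Input(shape=(7, 7, 512))            # e.g. the spatial output of a conv backbone
out = GlobalAveragePooling2D()(inp)       # each 7x7 feature map is averaged to one number
Model(inp, out).summary()                 # output shape: (None, 512) instead of (None, 7*7*512)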
The original paper simply applied GAP and then a softmax. However, it's now common to have GAP followed by an FC layer. |
H: What is the reason that CNN classify some images horribly wrong
I'm trying to train a CNN for MNIST. Everything goes well except that the loss stays very high in my model, while it's very low in the example (which has a different structure).
This model still yields high accuracy even though the loss is high.
Here I attached my code.
with tf.name_scope("inputs"):
X = inputs = tf.placeholder(tf.float32,
shape=[None, 28, 28, 1], name="X")
y = tf.placeholder(
tf.int32,
name="y")
training = tf.placeholder_with_default(False, [], name="training")
conv1 = tf.layers.conv2d(
inputs,
filters=6,
kernel_size=3,
strides=(1, 1),
padding='SAME',
activation=tf.nn.selu,
name="conv1",
)
conv2 = tf.layers.conv2d(
conv1,
filters=12,
kernel_size=3,
strides=(1, 1),
padding='SAME',
activation=tf.nn.selu,
name="conv2",
)
max_pool3 = tf.layers.max_pooling2d(
conv2,
pool_size=8,
strides=2,
padding='SAME',
name="max_pool3"
)
with tf.name_scope("conv4"):
conv4 = tf.layers.conv2d(
max_pool3,
filters=12,
kernel_size=3,
strides=(1, 1),
padding='SAME',
activation=tf.nn.selu,
name="conv4",
)
num_ele = int(conv4.shape[1]*conv4.shape[2]*conv4.shape[3])
conv4_flat = tf.reshape(
conv4,
shape=[-1, num_ele],
name="conv4_flat"
)
conv4_flat_dropout = tf.layers.dropout(conv4_flat, rate=dropout_rate,
training=training,
name="conv4_flat_dropout")
with tf.name_scope("fc5"):
fc5 = tf.layers.dense(
conv4_flat_dropout,
conv4_flat_dropout.shape[1]//2,
activation=tf.nn.selu,
name="fc5",
)
fc5_dropout = tf.layers.dropout(fc5, rate=dropout_rate,
training=training,
name="fc5_dropout")
logits = tf.layers.dense(
fc5_dropout,
n_outputs,
name="logits",
)
And the training process
# It starts with fairly low accuracy, whereas in the sample the training accuracy reaches 1 after the first epoch.
0 train loss:1.8484
train acc:0.751745
validation loss:1.7019
validation acc:0.7656
1 train loss:0.0745
train acc:0.978927
validation loss:0.0782
validation acc:0.9764
2 train loss:0.0958
train acc:0.972818
validation loss:0.1072
validation acc:0.9706
3 train loss:0.1186
train acc:0.971727
validation loss:0.1292
validation acc:0.9714
4 train loss:0.1397
train acc:0.969836
validation loss:0.1422
validation acc:0.9738
# Accuracy for some reason always drops dramatically here. I don't understand why.
5 train loss:0.8394
train acc:0.939564
validation loss:0.8237
validation acc:0.9470
6 train loss:0.3108
train acc:0.979182
validation loss:0.3345
validation acc:0.9786
7 train loss:0.6576
train acc:0.967382
validation loss:0.8300
validation acc:0.9652
8 train loss:0.2005
train acc:0.987273
validation loss:0.3021
validation acc:0.9832
9 train loss:0.2915
train acc:0.984145
validation loss:0.4509
validation acc:0.9812
10 train loss:0.7932
train acc:0.968273
validation loss:1.1119
validation acc:0.9634
11 train loss:0.2778
train acc:0.988636
validation loss:0.4988
validation acc:0.9848
12 train loss:0.4892
train acc:0.982982
validation loss:0.6407
validation acc:0.9826
13 train loss:0.5457
train acc:0.983382
validation loss:0.9361
validation acc:0.9806
14 train loss:0.3998
train acc:0.989527
validation loss:0.7423
validation acc:0.9876
15 train loss:0.3925
train acc:0.985745
validation loss:0.7599
validation acc:0.9788
16 train loss:0.2093
train acc:0.993236
validation loss:0.5771
validation acc:0.9850
17 train loss:0.5663
train acc:0.989855
validation loss:1.2298
validation acc:0.9846
18 train loss:0.6623
train acc:0.988927
validation loss:1.3572
validation acc:0.9824
19 train loss:0.1555
train acc:0.994891
validation loss:0.6606
validation acc:0.9872
And the sample code.
with tf.name_scope("inputs"):
X = tf.placeholder(tf.float32, shape=[None, n_inputs], name="X")
X_reshaped = tf.reshape(X, shape=[-1, height, width, channels])
y = tf.placeholder(tf.int32, shape=[None], name="y")
training = tf.placeholder_with_default(False, shape=[], name='training')
conv1 = tf.layers.conv2d(X_reshaped, filters=conv1_fmaps, kernel_size=conv1_ksize,
strides=conv1_stride, padding=conv1_pad,
activation=tf.nn.relu, name="conv1")
conv2 = tf.layers.conv2d(conv1, filters=conv2_fmaps, kernel_size=conv2_ksize,
strides=conv2_stride, padding=conv2_pad,
activation=tf.nn.relu, name="conv2")
with tf.name_scope("pool3"):
pool3 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
pool3_flat = tf.reshape(pool3, shape=[-1, pool3_fmaps * 14 * 14])
pool3_flat_drop = tf.layers.dropout(pool3_flat, conv2_dropout_rate, training=training)
with tf.name_scope("fc1"):
fc1 = tf.layers.dense(pool3_flat_drop, n_fc1, activation=tf.nn.relu, name="fc1")
fc1_drop = tf.layers.dropout(fc1, fc1_dropout_rate, training=training)
with tf.name_scope("output"):
logits = tf.layers.dense(fc1, n_outputs, name="output")
Y_proba = tf.nn.softmax(logits, name="Y_proba")
I found that most of the classifications are made with very high probability (like 1) - both the correct ones and the false ones.
(softmax probability)
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
0.000000e+00 0.000000e+00 0.000000e+00 6.966008e-33 0.000000e+00]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
The probabilities correspond to each of the images.
My questions are:
Why does this happen and what exactly caused it? Was it a bug in my code, or is it natural for this to happen?
If this has to do with the architecture, what potential problem might cause it, and how can I fix this problem and avoid it in the future?
Why is the accuracy after the first epoch so low, and why does the accuracy drop a lot after 3 to 6 epochs?
Is the styling of my code acceptable, and what should I improve?
Original codes are here
https://github.com/Dovermore/handson-ml/blob/master/chapter_13_exer/question_7.ipynb
https://github.com/Dovermore/handson-ml/blob/master/13_convolutional_neural_networks.ipynb
(go to the bottom to find the related segment in the second link)
AI: Always remember that classification is a computational process over groups of numbers in matrices: when you train a model on a particular set, it goes through the data and builds the matrices that are called the trained model.
When you give it a data set to classify, it tries to associate the values of the test data with the trained model and it classifies purely mathematically: if any label in the model comes close to that of the test data, it associates the test data with the closest match it identifies - not the true output, but a mathematical prediction.
Sometimes examples fall into the wrong label, depending on the accuracy of your model, and when you are training on data like MNIST it does sometimes classify to wrong labels when the parameters are not well tuned and, as such, its accuracy is low.
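As a side note on the saturated probabilities you printed: when the logits are large, softmax collapses to a near one-hot vector, and the few confident mistakes then each contribute a huge cross-entropy term, which is one way to get a large loss together with a high accuracy. A small numeric sketch (not your model, just the arithmetic):
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([12.0, 0.0, 0.0, 0.0])   # one very large logit
p = softmax(logits)
print(p)                                   # roughly [0.99998, 6e-06, 6e-06, 6e-06]
print(-np.log(p[1]))                       # roughly 12: one confident mistake dominates the mean loss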
Hope it helps |
H: Keras Callback example for saving a model after every epoch?
Can someone please post a straightforward example of Keras using a callback to save a model after every epoch? I can find examples of saving weights, but I want to be able to save a completely functioning model after every training epoch.
AI: Setting 'save_weights_only' to False in the Keras callback 'ModelCheckpoint' will save the full model; this example, taken from the ModelCheckpoint documentation, will save a full model every epoch, regardless of performance:
keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
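A minimal usage sketch, assuming model, x_train/y_train and x_val/y_val are your compiled model and data (the file-name pattern is just an illustrative choice; {epoch:02d} gives each epoch its own file):
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('model_{epoch:02d}.h5',
                             save_best_only=False,
                             save_weights_only=False)   # full model, every epoch

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=10,
          callbacks=[checkpoint])
# each saved .h5 file can later be restored with keras.models.load_model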
Some more examples are found here, including saving only improved models and loading the saved models. |
H: Which algorithm should be used for an accurate job recommendation system
I'm building a testing project to get an introduction to DS & ML.
As a person who is part of the workforce, I know that sometimes finding a job is harder than it should be. I thought I could build a testing project to help workers find a job that best matches their interests and their skills.
I could use a classifier, but I do not know if regression is the best way to approach this as a first pass. I was also thinking of a genetic algorithm (GA) system that could learn an approach over some number of generations. Maybe this isn't the best way to recommend jobs.
What I'm looking for isn't the code for the problem but rather ideas for algorithms I should look at and implement in any given programming language. I'm looking to implement a system that can take a peek at job descriptions and at the interests and skills of an individual. I don't think it sounds too crazy to train an agent, have it look at every job there is on LinkedIn for instance, look at 'Software Engineer', and tell me "45% match, and here are the three skills that you have to work on" or "98% match, please apply".
AI: You are much, much too early in your process to even begin thinking about your models. At this stage, you should be thinking about what the data looks like and how much of it do you need. For starters, what are you measuring? What is the answer that you seek? Is it job satisfaction? Likelihood to land an interview? Likelihood to get a job offer?
Once you have that, how are you quantifying that? Is it a categorical variable? A continuous variable? Then you would have to decide on the likely list of factors that go into that answer and determine how you're going to get that data. Do you need to run a custom survey? Are there other datasets that you can leverage? You need to think about all of these answers before you're ready for any sort of modeling discussion. |
H: drop_duplicates() doesn't work in pandas
I am working on my pandas tutorial. Below is my dataframe
I am trying to drop the duplicated header row using
df_FDNY_dataset.drop_duplicates(subset=['FacilityName','FacilityAddress','Borough'])
But the drop_duplicates() function doesn't work in this case. Could you help me understand why?
Thanks.
AI: So basically you want to drop the 1st row, which is indexed as 0 in the DataFrame.
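Also note that drop_duplicates() returns a new DataFrame rather than modifying the existing one in place, so if you call it without assigning the result back it will appear to do nothing; a minimal sketch of the assignment (the file name is a placeholder, and the subset columns are taken from your call):
import pandas as pd

# placeholder file name -- replace with your actual data source
df_FDNY_dataset = pd.read_csv('FDNY.csv')

# drop_duplicates returns a new DataFrame: assign it back (or pass inplace=True)
df_FDNY_dataset = df_FDNY_dataset.drop_duplicates(
    subset=['FacilityName', 'FacilityAddress', 'Borough'])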
Dropping that duplicated header row specifically can be done by
df.drop(df.index[0], inplace = True) |
H: Shape of a distribution as a feature
How can I use the shape of a distribution as a feature in machine learning ? Do I use something like the standard deviation ?
AI: If this distribution is row-specific (each sample has a different associated distribution) or category-specific, this is not a bad approach to encode more information in your features. It's unclear in what form you have these distributions: do you have empirical samples, or a parameterized distribution? A few approaches you could take to encoding this:
Fit a distribution family (or if you have one already) and use the location/shape/scale parameters as features
Add a few moments and other statistics of the distribution
Or similarly take a few percentiles
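A minimal sketch of the second and third options, assuming you have empirical samples for each row or category (the gamma samples are just stand-in data):
import numpy as np
from scipy import stats

samples = np.random.gamma(shape=2.0, scale=1.5, size=1000)   # stand-in empirical samples

features = {
    "mean": samples.mean(),
    "std": samples.std(),
    "skew": stats.skew(samples),          # asymmetry of the distribution
    "kurtosis": stats.kurtosis(samples),  # tail heaviness
    "p25": np.percentile(samples, 25),
    "p50": np.percentile(samples, 50),
    "p75": np.percentile(samples, 75),
}
print(features)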
Depending on the parameterization of the distribution, the first might be less direct but it should be easy to test. |
H: Why will the accuracy of a highly unbalanced dataset reduce after oversampling?
I have created a synthetic dataset with 20 samples in one class and 100 in the other, thus creating an imbalanced dataset. Now the classification accuracy before balancing is 80%, while after balancing (i.e., 100 samples in both classes) it is 60%. What are the possible reasons for this?
AI: Imagine that your data is not easily separable. Your classifier isn't able to do a very good job at distinguishing between positive and negative examples, so it usually predicts the majority class for any example. In the unbalanced case, it will get 100 examples correct and 20 wrong, resulting in a 100/120 = 83% accuracy. But after balancing the classes, the best possible result is about 50%.
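To see this concretely, here is a minimal sketch (with synthetic, uninformative data) of how a trivial majority-class predictor already reaches the ~83% figure on the imbalanced data, which is why accuracy alone is misleading:
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# 100 samples of class 0 and 20 of class 1, with uninformative features
X = np.random.rand(120, 5)
y = np.array([0] * 100 + [1] * 20)

clf = DummyClassifier(strategy='most_frequent').fit(X, y)
pred = clf.predict(X)

print(accuracy_score(y, pred))      # ~0.83 without learning anything
print(confusion_matrix(y, pred))    # the minority class is never predicted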
The problem here is that accuracy is not a good measure of performance on unbalanced classes. It may be that your data is too difficult, or the capacity of your classifier is not strong enough. It's usually better to look at the confusion matrix to better understand how the classifier is working, or look at metrics other than accuracy such as the precision and recall, $F_1$ score (which is just the harmonic mean of precision and recall), or AUC. These are typically all easy to use in common machine learning libraries like scikit-learn. |
H: Why does the trained model give great results but real data gives very bad results: Azure ML Studio
I am using a Two-Class Boosted Decision Tree to train the model.
The evaluation result is, I'd say, really good.
But when I am using the real dataset, the result is very bad.
What can possibly go wrong that makes such huge difference?
Below is the screenshot of my model:
Two Class Boosted Decision Tree parameters (default):
AI: Your question is not clear. There are two ways to understand it. Which dataset did you use to train your model?
You trained and tested on a premade dataset. The result is great. Then you applied this model to real dataset and the result is really bad.
If this is the case, you should retrain on your real dataset or apply some Transfer Learning techniques to your current model.
You trained and tested on a premade dataset. The result is great. Using the same model, you trained and tested on real dataset but the result is much worse.
I can't tell exactly the reason for this. Normally, real data is much more noisy. Did you handle missing data and do some feature engineering before training? |
H: How to iterate and modify rows in a dataframe (convert numerical to categorical)
I have a pandas dataframe like this
0 15.55
1 15.55
2 15.55
3 15.55
4 20.84
Name: Y1, dtype: float64
I want to convert the values of Y1 to categorical, i.e. if it is greater than 18.25 I want it to be 1, else 0.
Can someone please help me with how to do it?
This is what I tried so far:
for temp in TRAIN_ID1:
train_ID1.loc[(train_ID1['Y1'] > 18.250000), 'Y1'] = 1
train_ID1.loc[(train_ID1['Y1'] < 18.250000), 'Y1'] = 0
But I'm getting an error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()
TypeError: an integer is required
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-118-2cccb791d834> in <module>()
1 for temp in train_ID1:
----> 2 train_ID1.loc[(train_ID1['Y1'] > 18.250000), 'Y1'] = 1
3 train_ID1.loc[(train_ID1['Y1'] < 18.250000), 'Y1'] = 0
~\Anaconda3\envs\deeplearning\lib\site-packages\pandas\core\series.py in __getitem__(self, key)
621 key = com._apply_if_callable(key, self)
622 try:
--> 623 result = self.index.get_value(self, key)
624
625 if not is_scalar(result):
~\Anaconda3\envs\deeplearning\lib\site-packages\pandas\core\indexes\base.py in get_value(self, series, key)
2558 try:
2559 return self._engine.get_value(s, k,
-> 2560 tz=getattr(series.dtype, 'tz', None))
2561 except KeyError as e1:
2562 if len(self) > 0 and self.inferred_type in ['integer', 'boolean']:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
KeyError: 'Y1'
AI: As the error suggests, you don't have a column called Y1 - hence the error. Here is my suggestion to fix this. Assuming your input data looks like this -
15.55
15.55
15.55
15.55
20.84
Read it in pandas this way -
import pandas as pd
df = pd.read_csv('path/to/file.csv', header=None)
Provide a column name for this -
df.columns = ['Y1']
If you have more columns, just fill the df.columns list accordingly.
Finally, use the pandas best practices as per their latest documentation to assign a new column -
df = df.assign(Y2= (df['Y1'] > 18.250000).astype(int))
Output
print(df)
Y1 Y2
0 15.55 0
1 15.55 0
2 15.55 0
3 15.55 0
4 20.84 1
Note: Since I don't have full visibility on what you are working on, I have assumed what might be the problems you are facing. If this doesn't work let me know. |
H: When does boosting overfit more than bagging?
If we consider two conditions:
The number of data points is huge
The number of data points is low
For which condition does boosting or bagging overfit more compared to the other one?
AI: I read your question as: 'Is boosting more vulnerable to overfitting than bagging?'
Firstly, you need to understand that bagging decreases variance, while boosting decreases bias.
Also, note that under-fitting means that the model has low variance and high bias, and vice versa for overfitting.
So, boosting is more vulnerable to overfitting than bagging. |
H: Transform Categorical Variables into Numerical
I'm very new to machine learning approaches. I'm reading a tutorial on building a predictive model using random forests.
One of the transformations implemented was transforming categorical variables into binary ones.
Imagine (short sample):
Field_Desc Field_Value
A 32
A 100
B 1
And then the developer transforms this dataset into:
Field_A1 Field_B1 Field_Value
1 0 32
1 0 100
0 1 1
What is the advantage of making this transformation for Random Forest prediction? And is there any advantage for K-Means?
Thanks!
AI: Suppose that you want to run the k-means algorithm. In the update step, you have to take the average of each cluster and then reassign the centers. If you have categorical data, how do you take the mean? Changing categorical data into numeric data is a way of translating situations without numerical features into a form suited to such algorithms.
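For reference, a minimal sketch of this transformation with pandas (the column names are just taken from the example above):
import pandas as pd

df = pd.DataFrame({'Field_Desc': ['A', 'A', 'B'],
                   'Field_Value': [32, 100, 1]})

# each level of Field_Desc becomes its own indicator column
df = pd.get_dummies(df, columns=['Field_Desc'])
print(df)
This reproduces the indicator columns shown in the question and gives k-means purely numeric input to work with. |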
H: Why do pre-trained CNNs use low image resolution?
I want to use a pre-trained convolutional network for image classification. My base data has resolutions from 500x500px up to 1000x1000px. Pre-trained architectures often expect less (between 255 and 299px in the case of Google's Inception network).
Firstly: Would it potentially have a big impact to use higher resolution images? I.e. is it worthwhile investigating it?
Secondly: Does it make sense and is it possible to use a pre-trained network on low resolution and re-training the last layer/classifier with higher resolutions?
AI: Would it potentially have a big impact to use higher resolution images?
Yes. If you increase the input size of your convolutional neural network, the size of each activation map in each layer increases, so you will have more computation. Also, if you use the same architecture, the number of neurons, and consequently the number of parameters, in the dense layers increases.
Does it make sense and is it possible to use a pre-trained network on low resolution and re-training the last layer/classifier with higher resolutions?
The answer is no. When you train a network with a specific input size, you reserve variables to hold the weights and the intermediate values. If you increase the size of the input, the dense layers will have a different size, so their number of weights would have to change too.
To wrap up: for classification tasks, it is appropriate to feed the network small images. For other tasks, like edge detection, where the information in the edges can be destroyed by resizing, you have to be careful. In those cases you have to find an appropriate image size in order to keep the important information. The small input size is there to reduce the number of operations and the number of parameters. |
H: How is the squared Euclidean distance an example of a non-metric function?
I am reading a book on Pattern Recognition (by Prof. V. Susheela Devi and Prof. Murty) where, in the chapter on data representation (section 2.3.3), a non-metric similarity function is defined as one which does not obey either the triangle inequality or symmetry.
In that context, it further adds The squared Euclidean distance is itself an example of a non-metric, but it gives the same ranking as the Euclidean distance which is a metric.
Is the squared Euclidean distance different from the Euclidean distance?
How is the squared Euclidean distance non-metric?
AI: Let $x, y \in \mathbb{R}^n$. The Euclidean distance $d$ is defined as
$$
d(x,y) = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}.
$$
The squared Euclidean distance is therefore
$$
d(x,y)^2 = \sum_{i=1}^n (x_i - y_i)^2.
$$
We know that Euclidean distance is a metric.
Let us check whether squared Euclidean distance is also a metric. I will use the definition from Wikipedia (Ankit Seth's definition is equivalent).
Non-negativity: $d(x,y)^2 \ge 0$. This one is obvious.
Identity of indiscernibles: $d(x,y)^2 = 0$ if and only if $x=y.$ This is true because $d$ is a metric, and $d(x,y)^2 = 0$ if and only if $d(x,y) = 0$.
Symmetry: $d(x,y)^2 = d(y,x)^2$. This is again true because $d$ is a metric and therefore $d(x,y) = d(y,x).$
(You can also see that 2. and 3. are true directly from the definition of $d(x,y)^2$).
Looks fine so far. But:
Triangle inequality: Do we have $d(x,z)^2 \le d(x,y)^2 + d(y, z)^2$ for all $x, y, z \in \mathbb{R}^n$? No. Pick an arbitrary $x \in \mathbb{R}^n \setminus \{0\}$ and set $y = 2x$ and $z=3x$. Then
$$
d(x,z)^2 = \sum_{i=1}^n (x_i - 3x_i)^2 = 4 \sum_{i=1}^n x_i^2
$$ and
$$d(x,y)^2 + d(y,z)^2 = \sum_{i=1}^n x_i^2 + \sum_{i=1}^n x_i^2 = 2 \sum_{i=1}^n x_i^2.
$$
Since $4\sum_{i=1}^n x_i^2 > 2 \sum_{i=1}^n x_i^2$, we have found a counterexample that shows that the triangle inequality does not hold.
What does it mean that squared Euclidean distance gives the same ranking as Euclidean distance?
Suppose we have $x, y, z$ such that $d(x,y) < d(x,z)$.
Then $d(x,y)^2 < d(x,z)^2$ as well: It ranks points in the same way as Euclidean distance.
This is good to know. For instance, this tells us that the $k$-nearest neighbors classifiers gives the exact same results for squared Euclidean distance and Euclidean distance. |
H: What does "Norm" term mean?
In this paper (page 1, abstract), which considers a regularization technique, the author used the word "norm" - what does it stand for?
Is it related to Batch Normalization / L1 or L2 Normalization?
"We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing
the squared distance between successive hidden states’ norms"
Googling "what is a norm Neural Net" doesn't provide any links
AI: A norm is a concept from linear algebra which assigns a size to a vector. Many different norms exist; you can read up on their many uses here.
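For instance, a minimal sketch of the two most common norms with numpy (the vector is just an illustrative stand-in for a hidden state):
import numpy as np

h = np.array([0.3, -1.2, 0.5])
print(np.linalg.norm(h))       # L2 norm: sqrt(0.3**2 + 1.2**2 + 0.5**2), about 1.33
print(np.linalg.norm(h, 1))    # L1 norm: 0.3 + 1.2 + 0.5 = 2.0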
In this paper, they are training a recurrent neural network. These are used for time series data. The assumption is that the hidden state should be similar for successive time inputs $t$ and $t-1$. Thus, the author proposes an additional cost term based on the norms of the successive hidden state vectors,
$(||h_{t}||_2 - ||h_{t-1}||_2)^2$.
The more the norms of these successive states differ, the greater the cost will be. |
H: Does K-Means' objective function imply distance metric is Euclidean
The objective/loss function of K-Means algorithm is to minimize the sum of squared distances, written in a math form, it looks like this:
$$J(X,Z) = min\ \sum_{z\in Clusters}\sum_{x \in data}||x-z||^2$$
If we have different distance metric, for instance, cosine (I realize there's a conversion between cosine and Euclidean but let's forget it for now), manhattan etc, does it mean we will have a different loss function?
That is, the traditional K-Means based on expectation maximization won't work, right? Because in every iteration we revise the centroids, normally, by computing the average. However, for some metrics other than Euclidean, the average might not be a legitimate representation of the center.
AI: You're right and you're wrong.
The objective/loss function of K-Means algorithm is to minimize the sum of squared distances
Yes, absolutely.
written in a math form, it looks like this:
$$J(X,Z) = min\ \sum_{z\in Clusters}\sum_{x \in data}||x-z||^2$$
Err... sort of. This is definitely the most popular formulation of kmeans, but a more appropriate formulation would be:
$$J(X,Z) = min\ \sum_{z\in centroids}\sum_{x \in data} distance(x,z)$$
In your formulation, you're concretely defining distance as euclidean distance, i.e. the L2 norm. But you can swap out L2 for any distance kernel and apply kmeans.
The caveat here is that kmeans is a "hill climbing" algorithm, which means each iteration should always be at least as good as the previous iteration, and so it must be the case that this improvement will be true for both the E and M steps. For most common distance metrics (L1, L2, cosine, hamming...) this is the case and you're good to go, but there are infinite possible distance metrics and if we're going to be technical about it, the probability that a random distance metric will satisfy this criterion is almost surely 0.
So, to circle back to your question: does the objective function as you formulated it imply the distance metric is euclidean? Yes. But does kmeans only apply to euclidean space? No, absolutely not. Use whatever distance metric you want and throw the EM algorithm at it and bam: you've got yourself a non-euclidean kmeans.
Generally, when people say "kmeans", they're talking about euclidean kmeans. But Kmeans is super easy to modify with a different distance metric and I think only the most pedantic people would argue that you shouldn't call it "kmeans" after such a modification. Although it's generally always described with the objective you posted (which, yes, does imply euclidean distance), you can really drop in pretty much any useful distance metric, throw EM at it, and it'll work. Some people might call it "___ kmeans" depending on the distance metric, but really it's all kmeans.
I think part of the reason kmeans often isn't formulated like this is because it's often compared with gaussian mixture models (GMM). With a euclidean norm, kmeans is equivalent to a GMM with diagonal covariance (and hard cluster assignment), and is consequently often described as an easy way to fit a GMM with spherical clusters. But this comparison fails if we use a different distance metric.
I suggest you check out the CrossValidated discussion on this topic. Full disclosure: the highest voted answers disagree with me, but as you probably guessed I think they're being fairly pedantic. |
H: Should I consider feature scaling for all gradient descent based algorithms?
In the Coursera course machine learning in the section on Multivariate Linear Regression, Andrew Ng provides the following tips on gradient descent:
Use Feature Scaling to converge quicker
Get feature into an approx -1 < x < 1 range
Mean normalization
Andrew Ng also provides some other tips:
Plot cost vs iterations
to ensure cost decreases on every iteration (try smaller alpha)
to identify if convergence is too slow (try larger alpha)
to identify approximately the number of iterations to converge
Are these tips applicable to all problems involving gradient descent using different machine/deep learning algorithms, or just to multivariate linear regression?
AI: About the tips regarding plot cost vs. iteration, they are generally applicable to gradient descent approaches, including deep learning, where hyperparameter tuning (e.g. learning rate) is crucially important.
About the proper input scaling, it is not only related to the machine learning approach, but to the specific problem under consideration. Sometimes machine learning algorithms rely on distances to compute the similarity between individuals. Scaling changes some of these distances. In these cases, the resulting distance after scaling should be assessed to check whether it is more appropriate than without scaling. Here you can find examples for clustering. For some machine learning algorithms, you need standardized features, e.g. regularized linear/logistic regression. For most optimization-based machine learning algorithms, it makes sense to have feature scaling. On the other hand, there are problems where scaling doesn't even make sense (e.g. discrete input problems, like token-based natural language processing). |
H: How to fix class imbalance in training sample?
I was very recently asked in a job interview about solutions to fix an imbalance of classes in the training dataset. Let's focus on a binary classification case.
I offered two solutions: oversample the minority class by feeding the classifier balanced batches of data, or partition the abundant class so as to train many classifiers, each on a balanced training set made of a unique subset of the abundant class and the same set of the minority class. The interviewers nodded, but I was later cut off, and one of the knowledge gaps they mentioned was this answer. I know now that I could have discussed changing the metric.
But the question that pops in my mind now is: is it really a problem to train a classifier with 80% class A if the testing set will have the same proportion? The rule of thumb of machine learning seems to be that the training set needs to be as similar as possible to the testing for best prediction performance.
Isn't it just in the cases where we have no idea (no prior) about the distribution of the test set that we need to balance the classes? Maybe I should have raised this point in the interview.
AI: Actually what they mentioned is right. The idea of oversampling is right and is one of the resampling methods, in general, used to cope with such a problem. Resampling can be done by oversampling the minorities or undersampling the majorities. You may have a look at the SMOTE algorithm as a well-established method of resampling.
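For completeness, a minimal SMOTE sketch, assuming the third-party imbalanced-learn package is installed (the synthetic 9:1 dataset is just illustrative):
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# a synthetic, roughly 9:1 imbalanced binary problem
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print(Counter(y))                                   # heavily skewed class counts

X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print(Counter(y_resampled))                         # both classes now equally represented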
But about your main question: no, it is not only about the consistency of distributions between the test and train sets. It is a bit more than that.
As you mentioned about metrics, just imagine the accuracy score. If I have a binary classification problem with 2 classes, one being 90% of the population and the other 10%, then with no machine learning at all I can always predict the majority class and get 90% accuracy! So accuracy just does not work here, regardless of the consistency between train and test distributions. In such cases you should pay more attention to Precision and Recall. Usually you would like a classifier which maximizes the mean (usually the harmonic mean) of Precision and Recall, i.e. one where FP and FN are fairly small and close to each other.
Harmonic mean is used instead of arithmetic mean because it supports the condition that those errors are as equal as possible. For instance if Precision is $1$ and Recall is $0$ the arithmetic mean is $0.5$ which is not illustrating the reality inside the results. But harmonic mean is $0$ which says however one of the metrics is good the other one is super bad so in general the result is not good.
But there are situations in practice in which you DO NOT want to keep the errors equal. Why? See the example below:
An Additional Point
This is not exactly about your question but may help understanding.
In practice you may sacrifice one error to optimize the other. For instance, diagnosis of HIV might be such a case (I am just mocking up an example). It is a highly imbalanced classification problem as, of course, the number of people without HIV is dramatically higher than the number of carriers. Now let's look at the errors:
False Positive: the person does not have HIV but the test says they do.
False Negative: the person does have HIV but the test says they don't.
If we assume that wrongly telling someone they have HIV simply leads to another test, we may care much more about not wrongly telling a carrier that they are not one, as that may result in propagating the virus. Here your algorithm should be sensitive to False Negatives and punish them much more than False Positives; i.e., according to the figure above, you may end up with a higher rate of False Positives.
The same happens when you want to automatically recognize people's faces with a camera to let them enter an ultra-secure site. You don't mind if the door fails to open once for someone who has permission (False Negative), but I'm sure you don't want to let a stranger in! (False Positive)
Hope it helped. |
H: Convolutional Neural Networks layer sizes
I am trying to understand an article Backpropagation In Convolutional Neural Networks
But I can not wrap my head around that diagram:
The first layer has 3 feature maps with dimensions 32x32. The second layer has 32 feature maps with dimensions 18x18. How is that even possible? If a convolution with a 5x5 kernel is applied to a 32x32 input, the dimension of the output should be $(32-5+1)$ by $(32-5+1)$ = $28$ by $28$.
Also, if the first layer has only 3 feature maps, shouldn't the second layer have a multiple of 3 feature maps? But 32 is not a multiple of 3.
Also, why is the size of the third layer 10x10? Should it be 9x9 instead? The dimension of the previous layer is 18x18, so 2x2 max pooling should reduce it to 9x9, not 10x10.
AI: Actually, I guess you are making a mistake in the second part. The point is that in CNNs the convolution operation is done over a volume. Suppose the input image has three channels and the next layer has 5 kernels; consequently, the next layer will have five feature maps, but the convolution operation is a convolution over the volume, which has this property: each kernel has a width and a height and, moreover, a depth. Its depth is equal to the number of feature maps (here, the channels of the image) of the previous layer. Take a look here. |
H: What is clustering used for?
I know that clustering can be used for unsupervised learning and some people told me for many more techniques, but I was left with no answer, when I asked for what else clustering is used.
AI: Labeling data is not always an easy task. There are occasions where the data at hand does not have labels and you need to build a model using it. You have to find the similarities and differences in your input data. Clustering approaches try to find these similarities and differences in order to group similar data. They are also used as a pre-processing step before doing supervised classification. In cases where the input data does not have any labels, employing clustering approaches can be a way to label the data and use it for training supervised models. |
H: Exploratory Data Analysis and selecting good predictor variables ?
In what way would exploratory data analysis aid in feature selection, other than to preprocess the data? Say a bivariate analysis was conducted for each predictor variable w.r.t. the target variable; in what way would this help with feature selection, if at all?
AI: This is an interesting but broad question.
Imagine PCA. You use it for exploring the data embedded in a lower-dimensional space, but the first $n$ principal components are also used as features (after projecting the data onto them).
Or you use correlation analysis and remove (deselect) features with high correlation to an existing feature.
You calculate the variance of each feature, and low variances tell you that there is little information in that feature.
You inspect feature distributions according to the target to determine how much they contribute to the prediction.
And of course much more ... |
H: NLP grouping word categories
Suppose I have a dictionary:
{apple:large apple, apple:red apple, apple:aple, orange:mandarin, orange:orang, orange:blood orange}
and so on...
And then I want to replace a large document of entries with the keys. However, occasionally a new value will come up, i.e. {apple:green apple}
Is there a method where I can replace all values with the corresponding key, but then also replace 'close' values like the one given if they appear?
Example document:
var1
_____
aple
apple
orange
Apple
Red apple
gren Apple
blood Orange
orang
var1_replaced
______________
apple
apple
orange
apple
apple
apple
orange
orange
AI: Well ... the simplest approach is using Fuzzy String Matching and it will work. Just go through the examples in its Python implementation (fuzzywuzzy) and you will understand how it works. You need to find a threshold, through practice, to determine whether two strings are similar enough to be considered the same concept.
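A minimal sketch of that idea with the fuzzywuzzy package (the canonical keys and the threshold of 80 are just illustrative assumptions):
from fuzzywuzzy import process

canonical = ['apple', 'orange']          # the keys you want every variant mapped to

def normalize(value, threshold=80):
    match, score = process.extractOne(value.lower(), canonical)
    return match if score >= threshold else value   # leave it alone if nothing is close enough

for raw in ['aple', 'Red apple', 'gren Apple', 'blood Orange', 'orang']:
    print(raw, '->', normalize(raw))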
If it didn't work please drop a line in the comments so I can propose more sophisticated algorithms.
Good luck! |
H: How would one impute missing values for a Discrete variable?
How would one impute missing values (without using the mode) for a discrete variable, e.g. a variable corresponding to a count?
AI: Apart from the methods @Media mentioned, here are some more:
Imputing with info from other variables
The idea is to create a (multi-class) model with the variable to be imputed as the target, so that the missing values can be predicted.
The steps are likely to be:
Subset the data without missing values in the variable you want to impute
Train a machine learning model on that subset
Predict the missing values with the model created
Clustering
Are the missing values mainly related to a combination of variables? Unsupervised methods may help here.
An example using randomForest:
https://stats.stackexchange.com/questions/107530/using-cluster-information-in-multiple-imputation
Domain Knowledge
If we know the reason for a missing value, we can assign it to a proper level. For example, with survey data collected from the web, the given choices may not be applicable for some cases and are hence left blank. In such a case, it would be better to keep the blank as a separate value.
Implementation
There are some R packages that can impute the data for you:
MICE
Amelia
missForest
Hmisc
mi
https://www.analyticsvidhya.com/blog/2016/03/tutorial-powerful-packages-imputing-missing-values/ |
H: Python - Get FP/TP from Confusion Matrix using a List
I am using two different classifiers to predict a binary target (Random Forests and Decision Trees). Now I want to evaluate my model by creating a confusion matrix. For example, for predicting the binary value using random forests I have:
training_features, test_features, training_target, test_target, = train_test_split(df.drop(['score_goal'], axis=1),
df['score_goal'],
test_size = .3,
random_state=12)
clf_rf = RandomForestClassifier(n_estimators=25, random_state=12)
clf_rf.fit(training_features, training_target)
print("Accuracy using Random Forest Classifier is ", clf_rf.score(test_features, test_target)*100)
I'm confused because I don't know how I can compare the predicted values to identify how many False Positives, etc. I have.
Does anyone know how I can build that function?
Thanks!
AI: It looks like you're using scikit-learn. So why not explore it a bit more? Scikit-learn has a metrics module that can be of use for your problem. Essentially, what you need is two separate arrays - one with the real labels and another with the predicted labels. And then you're good to go: you could call metrics.classification_report or metrics.confusion_matrix or metrics.accuracy_score; all of them use the real labels and the predicted labels.
There's nothing wrong with using clf_rf.score(test_features, test_target), but it will only give you a single value. If you look at the source code, what happens is that the score method calls a predict method with test_feature for prediction of labels, which occurs behind the scenes.
It's better to actually capture those predicted labels, so that you can reuse them.
clf_rf.fit(training_features, training_target)
predicted_target = clf_rf.predict(test_features)
accuracy = sklearn.metrics.accuracy_score(test_target, predicted_target)
cnf_matrix = sklearn.metrics.confusion_matrix(test_target, predicted_target)
class_report = sklearn.metrics.classification_report(test_target, predicted_target)
And then you could do whichever you like with the calculated metrics, print, plot, etc.
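If you specifically want the raw TP/FP/FN/TN counts for a binary problem, the confusion matrix can be unpacked directly (scikit-learn orders it as [[TN, FP], [FN, TP]] for labels 0 and 1); continuing with the variables from the snippet above:
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(test_target, predicted_target).ravel()
print("TP:", tp, "FP:", fp, "FN:", fn, "TN:", tn)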
Have a look at the examples that are included for each model/metrics you're using 1, 2, 3, 4, etc. |
H: Applying machine learning algorithms to subset of attributes in dataframe
I have this huge mixed dataset consisting of both numerical and categorical attributes which, upon one-hot encoding, results in a dataset with very high dimensionality.
Is it wise to apply machine learning algorithms like K-means clustering, dimensionality reduction, and regression to subsets of the dataset? For example, applying K-means clustering to the numerical columns first and joining the result with the categorical data later.
AI: Applying a machine learning algorithm on only a subset of the data and including other subsets later does not allow the algorithm to assess the importance of each attribute equally.
For example, say you have a data set called A, which has subsets B and C. Without loss of generality, if you fit a model ('apply an algorithm') on subset B, and then include subset C later, then you're saying 'given subset B is already in the model, assess the impact of including subset C'. Instead, if you apply the algorithm to the entire data set (A), then you're allowing the algorithm to discover which features are most important for the desired outcome.
That being said, it may be wise to process the different elements of your data set differently. That is, categorical covariates may be modelled differently from continuous covariates. If you're using something like a feed-forward neural network, then it's not a big deal, but if you're using a more traditional statistical model you may need to take that into account. For example, in R, you need to specify that a categorical covariate is in fact a 'factor' variable. |
H: How to plot cost versus number of iterations in scikit learn?
One of the recommendations in the Coursera Machine Learning course when working with gradient descent based algorithms is:
Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.
Do gradient descent based models in scikit-learn provide a mechanism for retrieving the cost vs the number of iterations?
AI: Based on the answer here, use the following code:
import sys
from io import StringIO

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier

# temporarily capture stdout: with verbose=1, SGDClassifier prints the loss at each epoch
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()

clf = SGDClassifier(**kwargs, verbose=1)   # kwargs: your chosen hyperparameters
clf.fit(X_tr, y_tr)

sys.stdout = old_stdout
loss_history = mystdout.getvalue()

# parse the "... loss: <value>" lines back into numbers
loss_list = []
for line in loss_history.split('\n'):
    if len(line.split("loss: ")) == 1:
        continue
    loss_list.append(float(line.split("loss: ")[-1]))

plt.figure()
plt.plot(np.arange(len(loss_list)), loss_list)
plt.xlabel("Time in epochs")
plt.ylabel("Loss")
plt.savefig("warmstart_plots/pure_SGD:" + str(kwargs) + ".png")   # save after labels so they appear in the file
plt.close()
Also take a look here. |
H: Train Accuracy vs Test Accuracy vs Confusion matrix
After I developed my predictive model using Random Forest, I get the following metrics:
Train Accuracy :: 0.9764634601043997
Test Accuracy :: 0.7933284397683713
Confusion matrix [[28292 1474]
[ 6128 889]]
This is the results from this code:
training_features, test_features, training_target, test_target, = train_test_split(df.drop(['bad_loans'], axis=1),
df['target'],
test_size = .3,
random_state=12)
clf = RandomForestClassifier()
trained_model = clf.fit(training_features, training_target)
trained_model.fit(training_features, training_target)
predictions = trained_model.predict(test_features)
Train Accuracy: accuracy_score(training_target, trained_model.predict(training_features))
Test Accuracy: accuracy_score(test_target, predictions)
Confusion Matrix: confusion_matrix(test_target, predictions)
However, I'm getting a little confused about how to interpret and explain these values.
What exactly do these 3 measures tell me about my model?
Thanks!
AI: Definitions
Accuracy: The amount of correct classifications / the total amount
of classifications.
The train accuracy: The accuracy of a model on examples it was constructed on.
The test accuracy is the accuracy of a model on examples it hasn't seen.
Confusion matrix: A tabulation of the predicted class (usually
vertically) against the actual class (thus horizontally).
Overfitting
What I make of your results is that your model is overfitting. You can tell from the large difference in accuracy between the test and train sets. Overfitting means that it learned rules specific to the train set; those rules do not generalize well beyond the train set.
Your confusion matrix tells us how much it is overfitting, because your largest class makes up over 90% of the population. Assuming that you test and train set have a similar distribution, any useful model would have to score more than 90% accuracy: A simple 0R-model would. Your model scores just under 80% on the test set.
In depth look at the confusion matrix
If you would look at the confusion matrix relatively (in percentages) it would look like this:
Actual TOT
1 2
Predicted 1 | 77% | 4% | 81%
Predicted 2 | 17% | 2% | 19%
TOT | 94% | 6% |
You can infer from the total in the first row that your model predicts Class 1 81% of the time, while the actual occurrence of Class 1 is 94%. Hence your model is underestimating this class. It could be the case that it learned specific (complex) rules on the train set, that work against you in the test set.
It could also be worth noting that even though the false negatives of Class 1 (17%-point, row 2, column 1)) are hurting your overall performance most, the false negatives of Class 2 (4%-point, row 1 column 2) are actually more common with respect to the total population of the respective classes (94%, 6%). This means that your model is bad at predicting Class 1, but even worse at predicting Class 2. The accuracy just for Class 1 is 77/99 while the accuracy for Class 2 is 2/6. |
H: Binning which variables?
I will try to implement a k-means algorithm over this dataset:
Team_Categorical CreditAmount_Numeric Retired?_Binary
A 15.3 1
B 12 0
C 6.2 1
In order to apply the k-means algorithm I need to transform the categorical variables into numeric ones. So I will apply a binning (one-hot) transformation over the Team_Categorical field. I will then have this dataset:
A B C CreditAmount_Numeric Retired?_Binary
1 0 0 15.3 1
0 1 0 12 0
0 0 1 6.2 1
My question is: Should I transform CreditAmount_Numeric to binary too?
AI: To answer your question: you need not transform the numeric variable into a binary one (you meant binning, right?).
I will try to explain it with the above example:
Let's start with row 1, where A = 1, B = 0, C = 0, which means that you are talking about A. So the value of CreditAmount_Numeric belongs to A, and the Retired binary value also belongs to A. Similarly, you can deduce the rest of the records.
The conclusion is that you need not do any transformation on your numeric variable. |
H: What is the difference between cross_validate and cross_val_score?
I understand cross_validate and how it works, but now I am confused about what cross_val_score actually does. Can anyone give me some example?
AI: cross_val_score is a helper function on the estimator and the dataset.
Let me explain it with an example:
>>> from sklearn import datasets, svm
>>> from sklearn.model_selection import cross_val_score
>>> iris = datasets.load_iris()
>>> clf = svm.SVC(kernel='linear', C=1)
>>> scores = cross_val_score(clf, iris.data, iris.target, cv=5)
>>> scores
array([ 0.96..., 1. ..., 0.96..., 0.96..., 1. ])
This example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time)
The cross_validate function differs from cross_val_score in two ways -
It allows specifying multiple metrics for evaluation.
It returns a dict containing training scores, fit-times and score-times in
addition to the test score.
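For comparison, a minimal cross_validate sketch on the same estimator and data, showing the dict it returns (the choice of the two scoring metrics is just illustrative):
>>> from sklearn import datasets, svm
>>> from sklearn.model_selection import cross_validate
>>> iris = datasets.load_iris()
>>> clf = svm.SVC(kernel='linear', C=1)
>>> results = cross_validate(clf, iris.data, iris.target, cv=5,
...                          scoring=('accuracy', 'f1_macro'),
...                          return_train_score=True)
>>> sorted(results.keys())
['fit_time', 'score_time', 'test_accuracy', 'test_f1_macro', 'train_accuracy', 'train_f1_macro']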
Note: When the cv argument is an integer, cross_val_score uses the KFold or StratifiedKFold strategies by default, the latter being used if the estimator derives from ClassifierMixin
You can go through this link for a better understanding; it contains different examples using cross_val_score, so you can see its different uses. |
H: What if MNIST dataset had another feature
MNIST is a famous dataset of handwritten digits. Suppose we knew who wrote each digit - for example: female, left-handed, 25 years old.
How would I use this information in a CNN in TensorFlow, or any other library?
Digits are images, and that is what a CNN handles well, but gender, dominant hand, and age are not images. How would you use that information?
AI: Convolutional layers are useful for images because they take into consideration the neighborhood of pixels. However, for labels like gender and handedness a convolutional layer may not be particularly useful.
However, after the convolutional layers you usually place some densely connected layers. It is there that you may want to add these additional features: when you flatten the 2D feature maps that result from the convolutions, you can concatenate the additional features onto that vector, then feed this new vector to your Dense layer.
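A minimal sketch of that idea with the Keras functional API (the layer sizes and the three extra features are just illustrative assumptions, not a tuned architecture):
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

img_in = Input(shape=(28, 28, 1))             # the MNIST image
meta_in = Input(shape=(3,))                   # e.g. gender, handedness, age encoded as numbers

x = Conv2D(32, (3, 3), activation='relu')(img_in)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)

merged = concatenate([x, meta_in])            # image features + extra features
merged = Dense(128, activation='relu')(merged)
out = Dense(10, activation='softmax')(merged)

model = Model(inputs=[img_in, meta_in], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
The model is then fit on a list of two inputs, e.g. model.fit([images, metadata], labels, ...). |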
H: How to compare Timeseries Sequences?
I have multiple time series sequences, and for each new time series I want to find the most similar old one.
I found that I can use the sum of errors between points. Is this a good approach?
Is there a way to be able to compare sequences with different lengths (maybe a sequence look like a subsequence of another sequence)?
Will scaling the data before comparing make difference?
AI: The answer to your questions depend a lot on the nature of the data represented in the time series. You should ask yourself some questions to better understand what might or might not work, e.g.:
Are the time sequences perfectly aligned?
Are two slightly shifted time series considered similar or not?
Are two time series with the same shape but different scale considered similar?
Normally, the answers to those questions are that series are not perfectly aligned and that variations in scale are also fine as long as the shape is similar. For these scenarios, the classical measure is Dynamic Time Warping (DTW). There are lower bounds for DTW that are very efficient computationally. The research of Professor Keogh might be interesting if you need theoretical foundation for it.
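As a minimal illustration (a plain dynamic-programming implementation rather than one of the optimized libraries), DTW also handles sequences of different lengths naturally:
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences (they may differ in length)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4, 4]))  # 0.0 -- same shape, different lengths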
Also, normally euclidean distance and Manhattan distance are not very appropriate for time series due to their sensitivity to signal transformations (e.g. shifts), but actually they are often used in practice. |
H: Confusion Matrix - Get Model Precision
I have this confusion matrix:
[9779 107]
[2227 148]
What is the accuracy of my model? My doubt comes from the fact that the confusion matrix is calculated on the test dataset, so how can it evaluate the accuracy of my model?
Thanks!
AI: A confusion matrix from scikit-learn has the actual classes along the rows and the predicted classes along the columns. Treating class 0 as the "positive" class, it reads:
[TP, FN]
[FP, TN]
where TP = 'true positives'; FP = 'false positives'; FN = 'false negatives'; TN = 'true negatives'.
You can read more here: http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
By taking TP+TN and dividing by TP+FP+FN+TN, you can get the classification accuracy of your model. In your case, that means (9779+148)/(9779+107+2227+148) = about 81%
More details:
This type of confusion matrix is used for binary classification.
A 'true positive' in this case means a true instance of class 0; that is, the model predicts that a given example belongs to class 0 and it really does belong to class 0.
A 'true negative' means a true instance of class 1.
A 'false positive' means the model predicts that the example belongs to class 0 when it really belongs to class 1;
A 'false negative' means the model predicts that the example belongs to class 1 when it really belongs to class 0. |
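As a quick sketch of the computation in code:
import numpy as np

cm = np.array([[9779, 107],
               [2227, 148]])
accuracy = np.trace(cm) / cm.sum()  # (9779 + 148) / 12261, roughly 0.81
print(accuracy)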
H: Watermark detection in Python
I have a lot of images and I would like to be able to classify them into two groups: one containing images with watermarks and one containing images without any watermark.
There are about 40 different watermarks. I created "fake" watermarked images to train a CNN and it worked very well on the "fake" validation set but not on the real images. Plus it was a long shot because I would have needed to train a model for each watermark (and I don't have the original watermark) or train a big model.
I quit the watermark approach to try and find text instead. So I tried OpenCV text detection, but it really wasn't working since the text is crooked and not that different from the background.
Is there an easy solution I missed? Any idea is welcome. I am kinda new to machine learning :)
AI: Interesting question! Maybe the pretrained models in Keras can help. Either by means of transfer learning, so that you might have to label only a small number of images by hand to retrain the higher layers; or by using them for feature extraction and checking whether a certain keyword or feature pattern appears frequently for watermarked images (a sketch of this idea is below).
Or just upload the pictures some place that does not allow watermarks and see if they get flagged ;-)
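A minimal sketch of the feature-extraction idea with a pretrained Keras model (VGG16 is just one possible choice; the image path is a placeholder):
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# Pretrained convolutional base without the classification head
base = VGG16(weights='imagenet', include_top=False, pooling='avg')

img = image.load_img('some_picture.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = base.predict(x)   # a fixed-length feature vector per image

# These vectors can then be fed to a simple classifier (e.g. logistic regression)
# trained on a small set of hand-labelled watermarked / clean images.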
H: Dropout dividing by compensation term = overshoots the result?
When applying dropout mask, why is it acceptable to divide the resulting state by the percentage of survived neurons?
I understand that it's to prevent signal from dying out. But I've done the test, and found that it disproportionally magnifies the resulting state.
Assume the original state is $(0.1, 0.1, 0.2, 5.0)$
and our mask is $(0, 0, 1, 1)$ (with 50% of neurons that survive).
So, the original length is $$
\begin{align*}\sqrt{(0.1\times 0.1 + 0.1\times 0.1 + 0.2\times 0.2 + 5\times 5)} &= \sqrt{25.06}
\approx 5.006.\end{align*}$$
As for the masked vector, its length is $$\sqrt{0.2\times 0.2 + 5\times 5} = \sqrt{25.04} \approx 5.004.$$
Applying the compensation gives $5.004 / 0.5 = 10.008.$
This seems incorrect: my compensation just blew up the state vector. Perhaps we should be compensating differently - more carefully? I think it would even get worse if we mask the individual weights (like DropConnect does).
In my actual test, the state-vector of $192$ elements has length of $0.885$ and the masked vector, with compensation has length of $1.305$
AI: Your example is cherry-picked: You mask out small numbers and keep a large one.
But dropout is applied randomly. Each of the following six masks, and of the corresponding values for the vector length, is equally likely to appear:
$$
\begin{align*}
&(1, 1, 0, 0): &\sqrt{0.1^2 + 0.1^2} &\approx 0.1414,\\
&(1, 0, 1, 0): &\sqrt{0.1^2+ 0.2^2} &\approx 0.2236,\\
&(1, 0, 0, 1): &\sqrt{0.1^2+ 5^2} &\approx 5.0010,\\
&(0, 1, 1, 0): &\sqrt{0.1^2 + 0.2^2} &\approx 0.2236,\\
&(0, 1, 0, 1): &\sqrt{0.1^2 + 5^2} &\approx 5.0010,\\
&(0, 0, 1, 1): &\sqrt{0.2^2 + 5^2} &\approx 5.0040.\\
\end{align*}
$$
The average vector length is
$$
\frac16 (0.1414+0.2236+5.0010+0.2236+5.0010+5.0040) = 2.5991,
$$
which is roughly half of the original vector length $5.006$. So it makes sense to divide by the keep probability of $50\%$ (which here coincides with the dropout rate).
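A small simulation shows what the compensation actually preserves: the expected value of each activation, not the norm of a single cherry-picked draw. This is a sketch assuming standard inverted dropout, where each element is kept independently with probability 0.5:
import numpy as np

rng = np.random.RandomState(0)
x = np.array([0.1, 0.1, 0.2, 5.0])
p_keep = 0.5

# Average many independent dropout draws with the 1/p_keep compensation
draws = [(x * (rng.rand(4) < p_keep)) / p_keep for _ in range(100000)]
print(np.mean(draws, axis=0))  # approximately [0.1, 0.1, 0.2, 5.0], the original values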
H: Confusion Matrix - Get Items FP/FN/TP/TN - Python
After run my python code:
print(confusion_matrix(x_test, x_pred))
I get this:
[100 32
211 21]
My question is how can I get the following list:
True positive = 100
False positive = 32
False negative = 211
True negative = 21
Is this possible?
AI: Considering you have two lists y_actual and y_pred (I assume x_test and x_pred in your code are typos), you can pass the two lists to this function to parse them:
def perf_measure(y_actual, y_pred):
TP = 0
FP = 0
TN = 0
FN = 0
for i in range(len(y_pred)):
if y_actual[i]==y_pred[i]==1:
TP += 1
if y_pred[i]==1 and y_actual[i]!=y_pred[i]:
FP += 1
if y_actual[i]==y_pred[i]==0:
TN += 1
if y_pred[i]==0 and y_actual[i]!=y_pred[i]:
FN += 1
return(TP, FP, TN, FN)
Alternatively, if the confusion matrix is the 2x2 array returned by scikit-learn's confusion_matrix (named cm, with actual classes along the rows, predicted classes along the columns, and class 1 treated as positive, consistent with the function above), the cells are
TN = cm[0][0]
FP = cm[0][1]
FN = cm[1][0]
TP = cm[1][1]
(If you instead treat class 0 as the positive class, TP and TN swap, as do FP and FN.)
H: Data augmentation: rotating images and zero values
A lot of people rotate images to create a larger training set for neural networks. For most nets, all of the inputs have to be the same size so the image rotation function has to crop the newly rotated images to match the input size. So, say you have $32\times32$ resolution images and do 45 degree rotations. Some of the output images will look diamond shaped with black (zero values) in the corners. So, my question is: should you leave these zero values alone or change them in some way and if so, how?
AI: Keeping the values as zeros will introduce some bias into your network. Given that you have this corner effect for the majority of your dataset, you do not want the network to learn a high probability of the corners being black. Thus, you should fill them: you can extend the edge, reflect the image, or wrap it around. You can also use something more complex, like taking the average of a few patches in your image and placing them in the missing areas. In Keras' ImageDataGenerator this behaviour is controlled by the fill_mode argument ('constant', 'nearest', 'reflect' or 'wrap').
Keras has a very nice function that can do all this for you.
import numpy as np
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
%matplotlib inline
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.astype('float32')
# set up your data generator
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip = False)
# Fit the generator using your data
datagen.fit(X_train.reshape((len(X_train), 28, 28, 1)))
# Black images
image = X_train[5]
plt.imshow(image, cmap='gray')
plt.show()
plt.figure(figsize=(12,12))
plt.subplot(4, 4, 1)
plt.imshow(image.reshape((28,28)), cmap='gray')
for j in range(15):
augmented = datagen.random_transform(image.reshape((28,28,1)))
plt.subplot(4, 4, j+2)
plt.imshow(augmented.reshape((28,28)), cmap='gray')
plt.tight_layout()
plt.show()
# White images
image = -1*X_train[5]
plt.imshow(image, cmap='gray')
plt.show()
plt.figure(figsize=(12,12))
plt.subplot(4, 4, 1)
plt.imshow(image.reshape((28,28)), cmap='gray')
for j in range(15):
augmented = datagen.random_transform(image.reshape((28,28,1)))
plt.subplot(4, 4, j+2)
plt.imshow(augmented.reshape((28,28)), cmap='gray')
plt.tight_layout()
plt.show() |
H: Classification of goods by category
i have labeled data with goods like:
"Double chamber refrigerator Hitachi R-WB 482 PU2 GBW".
I need to predict category like: laptops, household appliances etc.
How can i do this?
AI: Some tree-ensemble implementations (CatBoost, for example) can handle categorical values natively, so look for one of those and you won't have to one-hot encode all of the features and use up all your memory. Some options and related techniques worth looking at:
XGBoost
Dimensionality Reduction
CatBoost
Hierarchical Clustering Using Scipy
Target Encoding |
H: rectangular markers in bubble plot (Python)
I would like to make a 'bubble' plot, but with rectangular markers. Is it possible to do this in Python?
On x-axis should be the day of the week (Mon, Tue, Wed,...), on y-axis - counts, how many times people came to a restaurant, the area of rectangles - the number of people came to a restaurant. A height of rectangles is always the same, only a width changes.
So, one rectangle on the plot shows how many people came to a restaurant how many times on a particular day of the week. As the y-axis may have values > 30, bubbles are not suitable and I consider that rectangles work better.
A sketch of what I would like to draw is below.
Is it possible to draw rectangular markers or is there another good way for visualizing the same concept?
UPDATE:
I've already tried box and violin plots.
The problem with boxplots is that in my case a height of a box is quite small and a tail is long. There are a lot of points indicating outliers. I would prefer to see some difference in outliers instead of a single point. With a bubble plot, it is possible to use some customized function for bubble size, e,g, logarithm.
AI: I can imagine a transposed boxplot() working well, in a figure with seven subplots (one per weekday):
https://github.com/olisteadman/ga_pre/blob/master/archive/bar_as_rectangular_bubble.ipynb
If you specifically want rectangular markers, a matplotlib sketch is below.
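A minimal sketch using matplotlib Rectangle patches; the data tuples, the log-scaled width function and the fixed height are all assumptions for illustration:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# Hypothetical data: (day index, visit count, number of people)
data = [(0, 3, 120), (0, 8, 40), (1, 5, 300), (2, 2, 80), (3, 6, 150)]

fig, ax = plt.subplots()
height = 0.8
for day, count, people in data:
    width = 0.1 * np.log1p(people)  # width encodes group size (log scale)
    ax.add_patch(Rectangle((day - width / 2, count - height / 2), width, height,
                           alpha=0.5, edgecolor='k'))

ax.set_xticks(range(7))
ax.set_xticklabels(['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'])
ax.set_xlim(-0.5, 6.5)
ax.set_ylim(0, 10)
ax.set_xlabel('Day of week')
ax.set_ylabel('Number of visits')
plt.show()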
H: Classification of very similar images
I have two groups of images, each one with 1000 samples.
The speckle pattern, in this context, is the same as a random pattern or "white noise" image. So these images are fundamentally different.
In group one, each figure is generated by considering a random function that returns something similar to a speckle pattern (see fig. 1).
In group two we follow the same procedure as group 1, but we plot a small point above that can be positioned anywhere and with any color (see fig. 2).
I want to classify both groups and I already tried to do it with simple neural networks, but I have been unsuccessful.
What is the best technique for this kind of problems?
Fig. 1:
Fig. 2:
AI: I found the answer in the paper linked below.
The authors use a CNN to solve the problem; a sketch of that kind of network is included after the link.
https://link.springer.com/article/10.1007/s00170-017-0882-0 |
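Since the code never made it into the answer, here is a minimal sketch of the kind of CNN binary classifier the paper describes; the architecture details below (input size, filter counts, layer depths) are assumptions, not the authors' exact network:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),   # group 1 vs group 2
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=..., validation_split=0.2)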
H: Train new data to pre-trained model
Let's say I've trained my model and made my predictions.
My question is... How can I append some new data to my pre-trained model without retrain the model from the beginning.
AI: If you just load the model and call its fit method, it will update the existing weights rather than re-initialize them. It will simply perform however many additional weight updates you choose, using the new data.
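A minimal Keras sketch of the idea (the file names are placeholders):
from keras.models import load_model

model = load_model('my_model.h5')   # previously trained and saved model
model.fit(X_new, y_new, epochs=5)   # continues training from the saved weights
model.save('my_model_updated.h5')
Note that this applies to models trained by iterative weight updates (e.g. neural networks); many other estimators require partial_fit support or a full retrain.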
H: Why use Gradient Descent when Gradient just solve the problem? (With Neural Nets)
My understanding is that setting the gradient to zero actually gets you to the global minimum, while gradient descent tries to take steps in the direction it judges to be the lowest.
I know that calculating the gradient of complex functions is a non-trivial problem, but let's say that we are using a neural net with a quadratic cost function: would it be that hard to calculate the gradient and get the actual global minimum?
AI: You might be mistaking the gradient itself with the mathematical approach to find the critical points of a differentiable function.
In the latter approach, you take the derivative of the function with respect to its parameters and find the values of the parameters that make it zero. These points in parameter space are critical points, that is, where the function has either a local maximum, a local minimum or a saddle point. In order to know the actual type, you have to take the second derivative.
The problem with applying such an approach to neural networks is not finding the gradient of the loss function with respect to the parameters; currently we have automatic differentiation software that computes the gradients for you. The actual problem is solving the equation where such a gradient equals zero. We don't know how to do that except in trivial cases. Furthermore, this would only give you local optima, not global ones.
A solution to that problem is using numerical optimization techniques, like the gradient descent family. They basically explore the parameter space in the descending direction of the gradient of the loss function, hoping to reach a minimum. This cannot be assured to converge for non-convex problems. Nevertheless, in practice it works quite well.
The reasons why gradient descent techniques work well in non-convex problems are an active line of research. |
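To make the contrast concrete, here is a tiny sketch of the numerical approach: instead of solving $\nabla C = 0$ analytically, we repeatedly step downhill.
# Minimise f(w) = (w - 3)**2; its gradient is 2 * (w - 3)
w, lr = 0.0, 0.1
for step in range(100):
    w -= lr * 2 * (w - 3)   # move against the gradient
print(w)  # numerically approaches the minimum at w = 3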
H: Python - Feature Selection - Should I remove bad variables?
I've this code to print the importance of each variable on my model:
importances = trained_model.feature_importances_
std = np.std([trained_model.feature_importances_ for trained_model in trained_model.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(training_features.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
I print a lot of variables with feature ranking as 0.0. Should I remove that variables? I can I do it using Python?
Like this:
df = df.drop('Col_A', 1) WHERE importances[indices[f]] = 0
Thanks!
AI: Assuming your features are df[X] (where X is the list of feature column names) and your target column is y, I would just do the following:
keep_features = [col for col, imp in zip(X, trained_model.feature_importances_) if imp > 0]
df = df[keep_features + [y]]
H: Python - Calculate Cost profitability and benefit of the model
I've this code in Python in order to calculate the precision of my model and to print confusion matrix using Decision Trees Classifier:
coef_gini = DecisionTreeClassifier(criterion = "gini", random_state = 100, max_depth = 3, min_samples_leaf = 5)
coef_gini.fit(training_features, training_target)
y_pred = coef_gini.predict(test_features)
y_pred
for name, importance in zip(training_features.columns, coef_gini.feature_importances_):
print(name, importance)
print ( "Train Accuracy using Decision Trees Classifier is : ", accuracy_score(training_target, coef_gini.predict(training_features)))
print ( "Test Accuracy using Decision Trees Classifier is : ", accuracy_score(test_target, y_pred))
print ( "Confusion matrix using Decision Trees Classifier is ", confusion_matrix(test_target, y_pred))
What is the cost matrix? Is this the money that company will lost for each wrong predictive target value? Anyone have an example?
Thanks!
AI: Confusion Matrix
A Confusion Matrix is an important tool to measure accuracy of a classification algorithm. It compares predicted class of an outcome and actual outcome.
Scenario 1: Credit Risk
Based on a credit risk scorecard, application for credit card are classified as “Good” and “Bad”. “Good” indicates applicants paying back dues on credit card and “Bad” indicates customers defaulting on the dues. Now, the customers are compared against actual performance of the customer payment behaviour after say 18 months. So, comparison of Predicted Class (“Good” or “Bad”) to actual customer behaviour state (“Defaulted” or “Regular”).
There is always a trade-off between Type I error/False Positives (accepting Bad Customers) and Type II Error/False Negatives (Rejecting Good Customers).
We generally require to optimize between False Positive Rate (Type I Error) and False Negative Rate (Type II Error).
So, role of cost matrix comes in picture to find the optimal cut off value for a classification rule. Now, going back to Credit Risk Model. The cut off value optimize between cost of an opportunity loss (miss to accept a good customer/Type II Error) and cost of accepting a potential defaulter (involved in loss due to default).
The example above is referenced from a blog, and the illustration from Google.
Cost Matrix
A cost matrix is similar to a confusion matrix. It's just that here we are more concerned about false positives and false negatives. There is no cost penalty associated with true positives and true negatives, as they are correctly identified.
The goal of this method is to choose a classifier with lowest total cost.
Total Cost = C(FN)xFN + C(FP)xFP
where,
FN is the number of positive observations wrongly predicted
FP is the number of negative examples wrongly predicted
C(FN) and C(FP) corresponds to the costs associated with False Negative and False Positive respectively.
Remember, typically C(FN) > C(FP) in settings like credit risk.
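A small sketch of the total-cost computation from a scikit-learn confusion matrix, assuming a binary problem with labels 0/1 and class 1 as positive; the two unit costs are made-up numbers that you would replace with your business figures:
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(test_target, y_pred).ravel()
cost_fn = 10.0   # assumed cost per false negative
cost_fp = 1.0    # assumed cost per false positive
total_cost = cost_fn * fn + cost_fp * fp
print(total_cost)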
Hope this Helps.. |
H: Groupby product, return tuple
I have found out the most profitable products in my dataframe by using:
df.groupby('ProductName')['ProfitPerOrder'].sum().sort_values().tail()
It gives me the output below. each ProductName has a ProductCategory. How do I display the category next to the product name in the output below?
ProductName
WFS Shoes 29033.887659
RDL Suit 45845.253318
Tennis Suit 46848.513342
Davenport Shoes 127103.915707
Halter Dress 314155.742025
Name: ProfitPerOrder, dtype: float64
AI: Add more columns to the group by in the first parentheses.
First we should understand why it's giving this result: it's similar to SQL, we are applying an aggregate function to a grouped-by value, and that's why only the grouping column appears in the index. If you want further information in the output, add the column names to the first parentheses as a $list$.
It will be something like this:
df.groupby(['ProductCategory', 'ProductName'])['ProfitPerOrder'].sum().sort_values().tail()
Edit -1
More recently, I came across the fact that we can chain group bys together by using .join(second_groupby().aggregate()).
Here's the working example (ignore the column names...it's relevant to my dataset)
df.groupby('Year_of_Release')[['Global_Sales']].sum().join( df.groupby('Year_of_Release')[['Name']].count()) |
H: What is Teacher Helping technique?
I read this paper, but I am having trouble understanding what Teacher Helping technique (page 3) is in context of RNN.
Can someone explain to me what it is? Please assume I don't have much experience in statistics. I'll be glad if this explanation would be as simple as possible (I'll be even more grateful for some example).
AI: In normal RNNs, you can train using either back-propagation through time (BPTT) or teacher forcing. With BPTT the network receives as input for each time step the output of the previous time step. In teacher forcing, the network receives the gold data tokens directly; this induces exposure bias on the trained network.
With textual GANs, you normally don't feed the gold data to the generator, because the generator is generating the sequence so there is no gold data to use.
In the article you refer, they are taking subsequences from real data, feeding them to the generator and having it only generate the final token. Then the discriminator receives the concatenation of the real sequence prefix and the generated final token. This is what they call "teacher helping". |
H: 'DecisionTreeClassifier' object has no attribute 'importances_'
I've this code in order to visualize the most important feature of each model:
dtc = DecisionTreeClassifier(min_samples_split=7, random_state=111)
rfc = RandomForestClassifier(n_estimators=31, random_state=111)
trained_model = dtc.fit(features_train, labels_train)
trained_model.fit(features_train, labels_train)
predictions = trained_model.predict(features_test)
importances = trained_model.feature_importances_
std = np.std([trained_model.feature_importances_ for trained_model in
trained_model.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
for f in range(features_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
plt.figure()
plt.title("Feature importances")
plt.bar(range(features_train.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(features_train.shape[1]), indices)
plt.xlim([-1, features_train.shape[1]])
plt.show()
Using RandomForestClassifier this code runs good but when I try it using Decison Trees classifier I get the following error:
std = np.std([trained_model.feature_importances_ for trained_model in trained_model.estimators_], axis=0)
builtins.AttributeError: 'DecisionTreeClassifier' object has no attribute 'estimators_'
Which attribute should I use see the most important feature of each model?
AI: The error is because a single DecisionTreeClassifier has no estimators_ attribute (that only exists on ensembles such as RandomForestClassifier), so for the single tree drop the std/error-bar computation and use feature_importances_ directly, as shown further below.
To Visualise The Tree Itself
from sklearn.tree import export_graphviz
import graphviz
export_graphviz(tree, out_file="mytree.dot")
with open("mytree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
OR, rendering in memory without writing a file:
from sklearn.tree import export_graphviz
import graphviz
graphviz.Source(export_graphviz(tree, out_file=None))
The Visualisation You Can Get Will be Whole Tree Itself..
To Display Feature Importances
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(features, labels)
for name, importance in zip(features.columns, classifier.feature_importances_):
print(name, importance)
## Now You Can Do Whatever You Want(plot them using a Barplot etc) |
H: Words as features of a neural networks
I'm new in Machine learning and I'm working on a problem related to text. I know that in ML we can use features as numerical values as input to neural network, but I don't know how to use features as words. In some papers I read that we take features to be n words with some property. I really don't understand how is that possible. Please, if it is not a problem, just to tell me some good papers or textbooks or links where it is explained how to do that.
AI: You need to make a dictionary of words. It means you have to make a dictionary which you assign each word a unique value. then you can use one-hot-encoding to represent each word uniquely. If this is what you need it will do what you want. But this has a big problem. When you think about cats and dogs, you may find similarities and differences between them. This is because you have more knowledge than the only representation of words in your brain. Consequently, you should use approaches to assign a unique number to each word, and put near concepts as neighbors. For the first part take a look at here and the second part, take a look at here. |
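A minimal sketch of the dictionary + one-hot part of this answer (the toy corpus is hypothetical):
import numpy as np

corpus = ["the movie was good", "the plot was bad"]

# Build a dictionary mapping each word to a unique index
vocab = {word: i for i, word in
         enumerate(sorted({w for doc in corpus for w in doc.split()}))}

def one_hot(word):
    vec = np.zeros(len(vocab))
    vec[vocab[word]] = 1.0
    return vec

print(vocab)
print(one_hot("good"))   # a vector with a single 1 at the index of "good"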
H: Chi square distribution for feature selection
In one paper on ML I read that chi square distribution is used to reduce the number of features. In that paper, features are words. That paper is related to Sentiment Analysis, so we have "positive", "negative" and "neutral" category.
How to calculate chi square distribution in that case?
In Python there is scipy.stats.chisquare which gives chi_square value and a p_value. What do we do then with these two pieces of information?
What to do for example with word "good" as a feature?
How to calculate chi square distribution, and what to do with that?
What does it mean to exclude some feature from the set of features, because in that paper it is mentioned that we take n of them with top chi square.
I really don't know how to do it. If there is any paper or book or link to learn that, please tell me.
AI: There are different ways for feature selection. A very good read in machinelearningmastery, to recap:
Univariate Selection.
Recursive Feature Elimination.
Principal Component Analysis.
Feature Importance.
Chi-Squared test For Feature Selection goes under the Univariate Selection method for non-negative features. My favorite explanation of chi-Squared in one photo taken from this blogpost is:
As you can see scikit-learn has an implementation for feature_selection using chi2 (perhaps according to scipy.stats.chisquare) as was shown very briefly in the above-mentioned blog post.
If you want a more thorough explanation and details how test ranks features based on statistics according to chi2 distribution and p-value etc., and also how to build your own chi2 class for feature selection in Python see this great post. Obviously one can read about the basics of chi2 distribution and test in wikipedia. |
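A minimal sketch with scikit-learn, keeping the top k word features by chi-squared score; the toy documents, labels and k are arbitrary:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

texts = ["good movie", "bad plot", "good acting", "bad movie"]  # toy documents
labels = [1, 0, 1, 0]                                           # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # non-negative word-count features

selector = SelectKBest(chi2, k=2)          # keep the 2 highest-scoring words
X_reduced = selector.fit_transform(X, labels)

print(selector.scores_)                    # chi2 statistic per word
kept = [w for w, keep in zip(vectorizer.get_feature_names(), selector.get_support()) if keep]
print(kept)                                # the selected words become the feature set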
H: Training neural network classifier with one class after another
Is it possible to train a neural network classifier with only one class, and after that with only another class?
For example, first train it only on recognizing dogs, and after finishing that training, only train it on recognizing cats, so in the end the net can classify between dogs and cats.
Or do the classes always need to be mixed for training?
Please tell me any scientific papers explaining why classes have to be mixed or explaining why they do not need to be mixed.
AI: It will probably work, but less efficiently (with lower accuracy), if you train the network on one class after the other (like training for dogs first, followed by cats), because
training on the second class will change the weights of the hidden layers, which in turn destroys the recognition capability the network learned for the first class. This is known as catastrophic forgetting, and it is the main reason classes are normally mixed/shuffled during training.
Also, dogs and cats differ a lot in their natural looks, so the net will have a tough time handling both well in this setting.
But
What You Can Do is either of the followings -
Train Two Neural Nets Simultaneously or Train them one after the another but saving the Weights of the Layers somewhere for all your Training.. And Then Predict What the Net thinks the input image to be..
Better Will Be To Train a CNN to do the same Task with Ease and will Achieve Better Accuracy.
If you only have dogs and cats, then it may be sufficient to train on only one of them, i.e. if it isn't a dog, then it's surely a cat.
Why we prefer mixing of similar Image Groups?
It's more of Human Nature to do a task which can en-compass similar things at once, rather than doing the particular tasks repeatedly.. |
H: Data binning - Why we need to transform Categorical Variables?
Having a lot of categorical features and other numerics why we need to transform the categorical to binary values? Is it for using the values in mathematics functions of the algorithms?
Thanks!
AI: Yes. If you need the information of your categorical variables then you need to represent them by numbers as most of machine learning algorithms are math-based. |
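For example, a minimal sketch with pandas one-hot encoding (the column names are hypothetical):
import pandas as pd

df = pd.DataFrame({'colour': ['red', 'blue', 'red'], 'size': [3.2, 1.5, 2.7]})
df_encoded = pd.get_dummies(df, columns=['colour'])
print(df_encoded)   # 'colour' becomes colour_blue / colour_red indicator columns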
H: What type of regression should I use
I have a dataset that gives data on infertility and its causes. The dataset is mainly 0/1 to represent "yes" and "no". However, some fields have "Sometimes" or "Often", which would be represented by -1 or 2. I've only learnt how to handle categorical data, i.e. 1/0, and numerical data. So my question is: since there are more options than just 1 and 0, which type of regression do I use? Logistic regression or linear regression?
AI: Whether to use logistic regression or linear regression depends on the dependent variable (DV). Based on your question, I believe your DV will be infertility (Yes/No), so you should use logistic regression, because linear regression is for continuous outcomes (e.g. an exam score) and logistic regression is for categorical outcomes (e.g. Yes/No).
H: Probability of dropout growth
In the DNN literature, is there analysis or a term on a dropout ratio (oppositely-)proportional to the depth of a layer?
By intuition, I'd like to dropout fewer neurons on the layers next to the input and drop more when approaching the end layers. For example, passing from a p_keep = 0.8 to p_keep=0.5, especially when the inputs features are all relevant. Where p_keep is the probability of keeping a node.
For instance:
Layer1: p_keep=0.9
Layer2: p_keep=0.7
Layer3: p_keep=0.6
Layer4: p_keep=0.55
Layer5-n: p_keep=0.5
AI: Hinton advocates tuning dropout in conjunction with tuning the size of your hidden layer. Increase your hidden layer size(s) with dropout turned off until you perfectly fit your data. Then, using the same hidden layer size, train with dropout turned on. This should be a nearly optimal configuration.
A good initial configuration for the hidden layers is 50%. If applying dropout to an input layer, it's best to not exceed 25%.
By intuition I'd like to dropout fewer neurons on the layers next to the input and drop more when approaching the end layers
If I were to try to generalize, I'd say that it's all about balancing an increase in the number of parameters of your network without over-fitting. So if e.g. you start with a reasonable architecture and amount of dropout, and you want to try increasing the number of neurons in some layer, you will likely want to increase the dropout proportionately, so that the same number of neurons are not-dropped. If p=0.5 was optimal for 100 neurons, a good first guess for 200 neurons would be to try p=0.75.
Also, I get the feeling that dropout near the top of the network can be more damaging than dropout near the bottom of the network, because it prevents all downstream layers from accessing that information.
As to why 0.5 is generally used, I think its because tuning things like the dropout parameter is really something to be done when all of the big choices about architecture have been settled. It might turn out that 0.10 is better, but it takes a lot of work to discover that, and if you change the filter size of a conv-net or change the overall number of layers or do things like that, you're going to have to re-optimize that value.
So 0.5 is seen as a sort of placeholder value until we are at the stage where we are chasing down percentage points or fractions of a percentage point.
The opening recommendation and these guidelines come from the original dropout paper; a per-layer sketch in Keras is below.
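A sketch of the schedule from the question in Keras; remember that Keras' Dropout layer takes the drop probability, i.e. 1 - p_keep (the layer sizes are arbitrary assumptions):
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential([
    Dense(512, activation='relu', input_shape=(100,)),
    Dropout(0.1),    # p_keep = 0.9 near the input
    Dense(512, activation='relu'),
    Dropout(0.3),    # p_keep = 0.7
    Dense(512, activation='relu'),
    Dropout(0.4),    # p_keep = 0.6
    Dense(512, activation='relu'),
    Dropout(0.5),    # p_keep = 0.5 near the output
    Dense(10, activation='softmax'),
])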
H: Unsupervised clustering without of Data which is supposed to be on a linear function
When I have a dataset where each datum has x and y,
and the (x,y) has a relation of one of y = a_i*x + b_i (i=1,2,...).
Is the process written below available? and which algorithm does it belong to?
The process is.....
I have many points (x, y).
The machine finds a given number of linear functions (3 in this case) which represent the points.
The machine eliminates points which are far from all of these lines.
In this case, I think the parameters I supply are the number of linear functions and a criterion to judge whether a point is on a line or not.
The first figure is my dataset.
I want to have a machine which accepts the number of line (3 in this case)
and finds 3 lines(as the second figure (the lines in the figure are just put for idea without computation)), and then finally suggests points which may belong to neither of them. (In this case, for example, (71.6, 22))
Should I, for instance, extend the k-means algorithm to achieve this procedure?///
AI: Very interesting question!
First Approach: PCA + K-means
Your data will be explained very well on the second principle component. If you apply PCA on your data the first PC captures the data along the lines in which you completely lose the differentiation but the second PC is prependicular to the first one so your data will be projected in a way that points correspondig to each line are placed closer to each other. As you know the number of lines (number of clusters) a priori then you simply apply k-means and that's it!
The second PC points in the direction perpendicular to the lines, so projecting onto it separates the points belonging to different lines.
Second approach: GMMs
Gaussian Mixture Models are fitted to clusters in a data using maximum likelihood estimation (you use Expectation Maximization algorithm for that). Your clusters along second PC are pretty gaussian so you will get a good soft-clustering if you fit a mixture of $n$ (number of lines again) Gaussian kernels to them.
Variant: $a$s Are Not Equal
In this case your lines cross each other as slopes are different. Your image does not show that but I include it here anyways. In this case you fit a linear regression to each line and keep the coefficients of the line. The you have a $2D$ data in which each line is described by just a slope and an intercept. Then the prependicular distance between each point and all lines tells you which line is closer so that is the cluster. (The distance can also simply be the residual of that point from regression line. You just need a distance metric to determine the closest line)
If you need an implementation as well, a minimal Python sketch is included below.
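A minimal sketch of the first approach (PCA + k-means) with scikit-learn; points is assumed to be an (n, 2) array of your (x, y) pairs and 3 is the known number of lines:
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# points: array of shape (n_samples, 2) with columns x and y
proj = PCA(n_components=2).fit_transform(points)
second_pc = proj[:, 1].reshape(-1, 1)   # direction perpendicular to the lines

labels = KMeans(n_clusters=3, random_state=0).fit_predict(second_pc)

# Points whose second-PC coordinate is far from their cluster's centre
# can then be flagged as belonging to none of the lines.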
Good luck :) |
H: Clustering algorithm prior to model building?
I would like to understand how a clustering algorithm can be used (if possible) to identify naturally occurring groups within a data set, prior to building a predictive model, and hence improve the accuracy of that model.
AI: In clustering the outcome variable or the response is unknown, this is why it's called clustering. Irrespective, of the fact the data being labeled or unlabelled, clustering can be applied as a data preprocessing algorithm.
Essentially, you must proceed by employing the initial data preprocessing tasks (like missing value treatment, collinearity, skewness etc). Once, the data is "statistically clean", then you can apply any clustering technique. However, remember clustering requires data to be "grouped" such that data points within a group are related to each other and unrelated to other data points belonging to another group. This can be achieved only if you have a "statistically clean" data. The next important point to consider is, "how to determine the possible number of clusters". Because any clustering algorithm will divide the data points into groups oblivious of the fact whether the groups exist or not. Therefore, you will have to prove mathematically/statistically the occurrence of groups in the dataset. In literature, there exist several methods like the "Principal Component Analysis (PCA)" or the "elbow method".
Once, you have determined such groups, you can then label the groups and perform predictive analytics. |
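As a small sketch of the elbow method with scikit-learn (X is assumed to be your preprocessed feature matrix):
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertias = []
ks = range(1, 11)
for k in ks:
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    inertias.append(km.inertia_)   # within-cluster sum of squares

plt.plot(ks, inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia')
plt.show()   # the "elbow" in this curve suggests a reasonable number of clusters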
H: How to create 3D images from .nii file
What are .nii files, and how is data stored in them? I have some of these and I want to know how can I create 3D image of MRI scan from them.
I can load the file in my python script using nibabel. Where to go from here next?
AI: The most common way of processing images in Python is through numpy arrays. Since you have already loaded your image through nibabel, you need to get the data from the image object and then cast it as a numpy array.
import nibabel as nib
import numpy as np
# Get nibabel image object
img = nib.load("path/to/image.nii")
# Get data from nibabel image object (returns numpy memmap object)
img_data = img.get_data()
# Convert to numpy ndarray (dtype: uint16)
img_data_arr = np.asarray(img_data)
By default this should be a 3D numpy array with a shape of (height, width, slices). If you want to access one slice as an image you can always use PIL. For example, to save the first slice of the .nii file:
from PIL import Image
slice_2d = img_data_arr[:, :, 0]
# PIL's 'L' mode expects 8-bit pixels, so rescale and cast the (uint16) data first
img = Image.fromarray((slice_2d / slice_2d.max() * 255).astype(np.uint8), 'L')
img.save("image.jpeg") |
H: How to plot similarity of two datasets?
I'm performing some simulations, and at the end I get a CSV file with three columns. One column holds the values for the x-axis, which was also input to the simulation and theoretical calculations, second one holds theoretically expected values, and the other column holds the values obtained by the simulation. I was planning to plot something like this:
But that does not look good in my case, as the values in y-axis normally keep doubling, and the values for the x-axis exponentially increase, so most of the points end up getting collected at the lower left part, near the intersection of x-axis and y-axis of the plot. Therefore, I need a different way to plot my data, which will be more visually appealing and inform how close the simulation results are to the theoretical expected ones. For example, some of my values can be seen below (and they keep increasing in such a way):
x = [2, 4, 8, 16, 32, 64] # partially removed for brevity
expected = [47.9995, 95.9783, 191.9127, 383.9708, 767.8831] # partially removed for brevity
simulated = [48, 96, 191.8, 383.8, 767.4] # partially removed for brevity
What is a good way to plot such a data that doubles in the y-axis and exponentially increases on the x-axis all the time, and to view how similar the two datasets actually are?
AI: Here is an example of an R lattice xyplot using a log scale on the x axis and the difference of your two measures, I(expected - simulated):
library(lattice)

df <- data.frame(
x = c(2, 4, 8, 16, 32, 64),
expected = c(47.9995, 95.9783,
191.9127, 383.9708, 767.8831,
1457.2771),
simulated = c(48, 96, 191.8, 383.8,
767.4, 1458.1228))
xy <- xyplot(I(expected - simulated) ~ x ,
auto.key=TRUE,
data = df , type=c("p","g"),
scales=list(x=list(log = 10) ),
ylab="difference expected - simulated", xlab="x", main="Simulation Results" )
print (xy)
Note, that I added a 6th result to your sample data,that was missing. |
H: PCA or cluster table of experimental fitness scores
I need to find patterns in experimental data.
The columns are "experiments" which are chemical treatments for growth experiments. The rows are individual gene names, the values are a fitness-defect score, which reflect the genes contribution to growth.
I would like to find patterns that are reflected across all experiments using some type of PCA or clustering. I have been trying to use sklearn but have not been successful in applying a model.
The data looks like:
gene SGTC_1 SGTC_2 SGTC_3
YAL002W 3.56420220283773 1.80774301690328 0.431491057210906
YAL004W -0.885645399324204 -1.76020417788351 0.883034190306176
....
There are 4000 rows for genes and 30 columns for experiments.
Any suggestions would be greatly appreciated.
AI: PCA is a dimensionality reduction algorithm - it projects your high dimensional data onto a lower dimensional plane. This is useful for either visualisation (if you reduce to 2 or 3 dimensions to plot), or for training machine learning models.
You say you want to find patterns in your data - I’m not quite sure what you mean by this. Do you want to visualise your data, train a model on it and make some prediction, or something else?
To visualise high dimensional data, you could use either the tSNE (t-stochastic neighbour embedding) algorithm, or PCA.
Depending on the type of data you have, you can “find patterns” in different ways.
If your data is unlabelled (you don’t know the classes of each sample or there is no dependent variable) you can use unsupervised learning algorithms such as K-means clustering, K nearest neighbours, Gaussian mixture model. If your data has dependent variables, depending on whether your dependent variable is categorical or continuous, you could use classification algorithms for the former and regression algorithms for the latter. Classification algorithms include logistic regression or decision trees. Regression models include linear regression. |
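Since your table has no labels, a minimal unsupervised sketch with scikit-learn could look like this; the file name, separator and the choice of 4 clusters are assumptions:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

df = pd.read_csv('fitness_scores.csv', sep='\t', index_col='gene')  # 4000 genes x 30 experiments
X = StandardScaler().fit_transform(df.values)

clusters = KMeans(n_clusters=4, random_state=0).fit_predict(X)

# 2-D PCA projection purely for visualising the clusters
coords = PCA(n_components=2).fit_transform(X)
plt.scatter(coords[:, 0], coords[:, 1], c=clusters, s=5)
plt.show()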
H: When to remove outlier in preparing features for machine learning algorithm
I have a numeric variable (price) and it has a long tail in both training and test data sets. I found that if you remove the highest 1% of the value in both train and test data set for this variable, then the histogram of this variable in train and test data set looks pretty much the same. See the following figure.
My question is: I still need to use the training data (with both features and labels) to make predictions on the test data (with features only). In this case, how should I deal with this feature variable? I was thinking about removing the top 1% data in both training and test data set, but as I still need to make predictions on that 1% test data, so this is not a good idea I guess. In this example, as the empirical distribution of this variable in both training and test data sets look the same before and after removing the "outlier", should we just leave this variable unchanged? Also, in general, how should we handle the outlier before we put the feature into the machine learning algorithm?
AI: Dealing with outliers requires knowledge about the outlier, the dataset and possibly domain knowledge. Given this, there are many options to handle outliers.
Without taking a look at your specific data, it could be that this outlier represents a total? Perhaps the data source you have included totals, which should be removed.
Generally, figuring out what to do with outliers requires investigating the outlier.
If the outlier is a data processing or entry error, it can generally be removed, or replaced with say, the mean (without the outlier).
If errors have been ruled out, think about whether the outlier could be a legitimate value. There is no right or wrong answer here.
Instead, documentation of your decisions is really important. I'd suggest running your models with and without the outlier, comparing the results, and documenting any decision to remove or transform the outlier.
Depending on your purpose, it should become relatively clear as to whether the outlier should be removed, eg if you're visualising the data, removing the outlier will likely have more a meaningful impact than retaining it. |
H: Cluster algorithm to group events in more general domains
I've a list of 1,300 news events, represented by only three terms coming from running LDA topic model on thousands of tweets. Here's some of them as an example:
['manchester,bony,city', 'attack,claims,responsibility', 'police,officers,nypd',
'goal,arsenal,liverpool', 'test,pakistan,sunday', 'obama,ukraine,merkel', ...]
I need to group them in more general domains (Politics, Sport, Health, Economy, etc.).
Which kind of clustering algorithms could I use (in Python)?
Or maybe, can I use LDA topic model, even if I don't have documents but only three words?
AI: Some creative post-processing can be done. For instance, applying Named Entity Recognition and simplifying some parts (Manchester is a City). Using Knowledge Graph Analysis also gives some meta-info, e.g. mapping your words to the Wikipedia graph or using DBpedia may help you to recognize Named Entity categories (Obama and Merkel are politicians, and NER does not necessarily capture their profession).
Note that the combination of named entity recognition and knowledge-base (Wikipedia, DBpedia, etc.) mapping is called Entity Linking.
Regardless of statistical learning for NLP techniques, all structures above are actually graphs so they can give you also the semantic similarity measure based on which you can use other clustering algorithms like Spectral Clustering and go to a semantically higher level of clustering.
Hope it helps :) |
H: Most Efficient Post Processing with Python and Pandas
This question is about best practices for working in Pandas dataframes. Speed, ease of use, and memory consumed could all impact any answers you might have. I start by pulling a data set into a dataframe like this:
Date Location Value
3/4/2018 1 4795
3/5/2018 1 4795
3/4/2018 2 5022
3/5/2018 2 5088
3/4/2018 3 100
3/5/2018 3 100
3/4/2018 4 117154
3/5/2018 4 117154
I would like to sum the Value based on some other criteria. For this example, lets use two states, SD and ND. Location 1,2,4 are in ND, and Location 3 is in SD. As I see it, I have two options:
Have Pandas post process the location numbers. IE Pythonicly: ND = Sum(Loc 1,2,4), SD = Sum(Loc 4). Then build/pivot the time series based on state
Build a lookup table and have Pandas append the state to each row in the dataframe. Then filter/group by state for the time series. Lookup table would look like so:
Location State
1 ND
1 ND
2 ND
2 ND
3 SD
3 SD
4 ND
4 ND
In option one, would Pandas add a new row to the dataframe for each day of the timeseries with the state totals? Or would the output be a pivot table like structure with only the state totals.
If option two, what would be the best type of way to host the lookup table? CSV? JSON? Table in SQL DB?
I'm concerned in option one changes would need to be made directly to the code. Whereas in option two an addition to the lookup table could add the information required to add location to the correct group.
While I know this is open ended, I hope someone is willing to provide thoughts on efficient structure for this type of data flow.
AI: I'd do it this way:
helper dictionary:
In [79]: d = {'SD':[3], 'ND':[1,2,4]}
let's convert it to a Pandas Series:
In [80]: lkp = pd.Series({el:key for key,lst in d.items() for el in lst})
In [81]: lkp
Out[81]:
1 ND
2 ND
3 SD
4 ND
dtype: object
now we can map Location into State and group by it:
In [82]: df.assign(Location=df['Location'].map(lkp)).groupby('Location')['Value'].sum()
Out[82]:
Location
ND 254008
SD 200
Name: Value, dtype: int64 |
H: How to use pca results for linear regression
I have a data set of 11 variables with a lot of observations for each one. I want to run linear regression of the observed $\vec{y}=\alpha +\beta\vec{X}$, where $X$ is a matrix, on the variables. I'm trying to reduce the number of parameters, so I ran the PCA algorithm on $X$. I get the "loading" data but I don't understand how to use it to estimate only four (for example) variables instead of 11.
somebody can help?
AI: Welcome to the site!
So, the components which you get from PCA explain most of the variance in your original dataset. You need to name them based on your business understanding (assuming that you know the data, as you mentioned that you want to apply linear regression), or else you might need a Subject Matter Expert's input.
Of course, the features won't be the same as in the original data, or else what would be the point of performing PCA (I know that you understood this part). To decide on the number of components, you need to look at a scree plot.
PCA is a dimensionality reduction algorithm which helps you derive new features based on the existing ones. PCA is an unsupervised learning method, used when the data has many features, when you don't understand much about the data, there is no data dictionary, etc. For a better understanding of PCA you can go through link-1, link-2.
Now before performing Linear Regression, you need to check if these new features are explaining the Target Variable by applying Predictor Importance test(PI Test), you can go through the Feature Selection test in the python,R.
Based on the outcome of PI Test you can go ahead and use those important feature for modeling and discarding the features which are not explaining the target variable well.
Finally, you can achieve the results which you are looking for.
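A minimal sketch of principal-component regression with scikit-learn; X is assumed to be your 11-column matrix and y the target, and keeping 4 components is just the example from the question:
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

pca = PCA(n_components=4)
X_reduced = pca.fit_transform(X)        # 11 original variables -> 4 components
print(pca.explained_variance_ratio_)    # variance explained by each component
print(pca.components_)                  # the loadings (rows = components)

model = LinearRegression().fit(X_reduced, y)
# For new data, apply the same projection before predicting:
# y_new = model.predict(pca.transform(X_new))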
Let me know if you are stuck somewhere. |
H: Do we need to increase training data size when increasing dropouts?
I am using a fully connected feed-forward neural network built using Keras for text classification. It consists of 3 hidden layers. I am planning to add a dropout layer after each hidden layer to prevent overfitting. While tuning the dropout rate, I am increasing the value from 0.2 -> 0.3 -> 0.4 -> 0.5.
I want to know if I should increase the training data size to have a more accurate comparison. What I mean is suppose I am having training data of size 1 million for a dropout rate 0.2. Should I increase the training data size to 1.5 million for dropout rate 0.3?
Calculation:
1000000 * 0.8 * 0.8 * 0.8 = 512000 (for dropout ratio 0.2)
1500000 * 0.7 * 0.7 * 0.7 = 514500 (for dropout ratio 0.3)
AI: No, you should only tune one hyperparameter at a time. If you change two hyperparameters and the performance increases, how do you know which of the two parameters is responsible?
If you have the time, do a full grid search on the dropout rate and the training data size. |
H: Purpose of weights in neural networks
I'm beginner at Neural Networks. After reading multiple articles on wikipedia, i've seen the term "weight" being used a lot, although it is a little confusing.
I know, that before the inputs are summed and passed to activation functions, they are separately weighted, after some research, i found out that the purpose of the weight function was to:
ensure orthonormality
avoid data loss
I know that two inputs are only orthonormal if they are orthogonal unit vectors, if they are orthogonal then their dot product is always 0.
Dot product: $\vec{a}\cdot\vec{b} = \lVert\vec{a}\rVert\,\lVert\vec{b}\rVert\cos\theta$, where $\theta$ is the angle between the two vectors and $\lVert\vec{a}\rVert$ is the norm of $\vec{a}$.
For example, if two unit vectors are perpendicular to each other, then they are orthonormal and their dot product is 0.
But what does this have to do with data loss? I've also heard that sometimes the value of input might be zero, and we know that multiplication by zero outputs zero.
So what is the real purpose of the weight in neural networks? and what does it have to do with orthonormality? For example, what would be the purpose of weights in linear regression?
AI: The reason for weights in machine learning is actually a lot easier than it seems. It's the way by which our model learns some underlining function and performs the classification or regression. We tune these weights in order to model some underlining function which can map our input to a desired output. Either a class in classification, or a range of values in a regression.
Let's look at an easier machine learning model so that we can understand why weights are needed.
Linear Regression
This is just a straight line which splits data into two sections. Let us apply this model in 2D, where we have two features $x = [x_1, x_2]$, for example weight and height. And labels $y$ which will split our data into $y \in \{men, women\}$.
A random line in this space is defined as
$0 = -2x_2 + x_1 + 1$.
Assume this is our boundary line, and that women fall above this line while men fall below it.
Now if we get a new data point $x_{new} = [2, 8]$ we will label this as being a woman. So the entire decision is based on the numbers $1, -2, 1$ from our linear equation. We need to tune these values using the training data. We usually call these trainable parameters the weights $w$ associated with the features $x$ and we also add a bias $b$. In general the linear separator in 2D is
$0 = w_1x_1 + w_2x_2 + b$.
Our prediction comes from the sign of $\hat{y} = w_1x_1 + w_2x_2 + b$. With these particular weights, points above the line give $\hat{y} < 0$, so if $\hat{y} < 0$ we predict woman, otherwise man.
In $n$ dimensions this can be written as a matrix multiplication as
$\hat{y} = w^Tx + b$
Obviously, a linear separator is not sufficient for most classification tasks. Things are not always linearly separable. So we use more complex models.
Neural networks
In neural networks each node is associated with a function much like the linear separator. However, we will use the sigmoid function
$\sigma(w^Tx) = \frac{1}{1 + e^{-(w^Tx + b)}}$.
The weights here have the same effect. They will modulate the input values $x$ such that we are able to learn some classification or regression.
How do the weights affect the decision boundary? We want the two different classes, circles and x's, to be on opposite sides of this boundary in order for us to be able to correctly classify them.
The weights are trained iteratively using gradient descent. We can see that the decision boundary starts off terribly and then gets progressively better.
$0 = -1.0 - 1.0x_1 - 1.0x_2$
$0 = -16.0 - -39.0x_1 - 71.0x_2$
$0 = -36.0 - -94.0x_1 - 61.0x_2$
$0 = -83.5 - -134.0x_1 - 76.0x_2$
$0 = -88.5 - -114.0x_1 - 98.5x_2$
As you can see we changed the weights until we were able to find this ideal boundary between the two classes. If you want to find our more about how the gradient descent algorithm works to tune these weights you can look here. |
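A small numpy sketch of that training loop on toy 2-D data, showing the weights (and hence the boundary) being updated by gradient descent; the data generation and learning rate are arbitrary assumptions:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-D data: two blobs, labels y in {0, 1}
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + [2, 2], rng.randn(50, 2) - [2, 2]])
y = np.hstack([np.ones(50), np.zeros(50)])

w = np.zeros(2)
b = 0.0
lr = 0.1
for epoch in range(1000):
    y_hat = sigmoid(X.dot(w) + b)            # forward pass
    grad_w = X.T.dot(y_hat - y) / len(y)     # gradient of the cross-entropy loss
    grad_b = np.mean(y_hat - y)
    w -= lr * grad_w                         # each update moves the boundary
    b -= lr * grad_b

print(w, b)  # the learned weights define the boundary 0 = w1*x1 + w2*x2 + b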
H: Multimodal distribution and GANs
What is intuition behind multimodal distribution? and
How does GANs generate samples from it?
AI: You can think of a multimodal distribution as a union of multiple unimodal distributions. In the case of GANs for image processing, each mode could be a category of images. To significantly simplify what's going on to motivate the idea, if you have a dataset of cat pictures and dog pictures, you can think of that as bimodal. Just cat pictures, that's unimodal. Training a GAN on a multimodal dataset of images means that it should be able to generate members of any image category in your dataset, and it should generate different categories with the same frequency with which they appeared in the training data.
GANs are trained by taking a random vector as input and attempt to construct a feasible member of the data distribution as output. In effect, the GAN learns a (surjective) mapping from the random space onto the multimodal distribution, such that random inputs will generate samples from the multimodal data distribution as outputs. |
H: How to structure data and model for multiclass classification in SVM?
I am trying to predict a categorical variable given a set of input variables, which are also categorical. Both the target and feature variables only take the class values [below mean, mean, above mean].
I have used one-hot encoding on both the target and feature vectors, but I am not sure if this is correct.
If I have 2 feature vectors I get 2x3 columns after one-hot encoding. For example, the first instance may be:
X[0] = [0,1,0, 1,0,0]
indicating the first feature has mean value ([0,1,0]) and the second feature is below mean ([1,0,0]), with the corresponding target
y[0] = [1,0,0]
indicating the target is below mean.
With the encoded X and y, I have tried to do this:
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
y_pred = svm_clf.predict(X_test)
but it gives me 'ValueError("bad input shape {0}".format(shape))'
I have given the input and output below for the real situation, where the input has 9 features (so 9x3 columns after encoding). The error message indicates there is a problem with the 'y' vector.
Should I be one-hot encoding the target?
How can I tell the classifier that the output can only take a "single value" i.e. the 3 columns in the target are not independent as they all belong to the one variable. For example, the output for a given instance cannot be [1,1,0] as this indicates it is both "below mean" and "mean" which is not possible.
I have also tried a random forest classifier which ran OK, but the results were not plausible, so I assume I am doing something wrong,
Input features:
X_train type: and shape: (872, 27) and head(5):
[[ 0. 1. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0.]
[ 0. 1. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0.]
[ 0. 1. 0. 0. 1. 0. 0. 1. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 1. 0. 0. 1. 0. 0. 1. 1. 0. 0.]
[ 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 0. 0. 0. 1. 0. 1. 0. 1. 0. 0.]]
Target:
y_train type: and shape: (872, 3) and head(5):
[[ 0. 0. 1.]
[ 0. 0. 1.]
[ 1. 0. 0.]
[ 0. 0. 1.]
[ 0. 1. 0.]]
Error message:
File "analyse.py", line 287, in dummy_test
svm_clf.fit(X_train, y_train)
File "/usr/lib64/python3.6/site-packages/sklearn/svm/base.py", line 149, in fit
X, y = check_X_y(X, y, dtype=np.float64, order='C', accept_sparse='csr')
File "/usr/lib64/python3.6/site-packages/sklearn/utils/validation.py", line 578, in check_X_y
y = column_or_1d(y, warn=True)
File "/usr/lib64/python3.6/site-packages/sklearn/utils/validation.py", line 614, in column_or_1d
raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (872, 3)
AI: See the SVC documentation here. It supports multiclass problems natively. The only thing is that your y should be a 1-D array of class labels such as [0, 1, 2], not a one-hot matrix. If you do not one-hot encode the target and use the class labels as they are, it will work; a small sketch is below.
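A minimal sketch of the fix, converting the one-hot target back to 1-D class labels; the variable names follow the question, and y_test is assumed to be the one-hot test target:
import numpy as np
from sklearn.svm import SVC

# X_train: one-hot encoded features (n, 27); y_train: one-hot target (n, 3)
y_train_labels = np.argmax(y_train, axis=1)   # back to a 1-D array of 0/1/2
y_test_labels = np.argmax(y_test, axis=1)

svm_clf = SVC()
svm_clf.fit(X_train, y_train_labels)
y_pred = svm_clf.predict(X_test)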
H: MLPClassifier threshold factor to eliminate test samples that are not in match with train data
I am using MLPClassifer example from
scikit-learn
The code for training:
from sklearn.neural_network import MLPClassifier
X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
At the predict step, we use test data [2., 2.], [-1., -2.] in
clf.predict([[2., 2.], [-1., -2.]]). The output of this function is
array([1, 0])
As we observe, the test data [2.,2.] is not in the train dataset we passed. Still, we got the closest match as label 1.
What I am trying to find is: if the test data I supply is not represented in the training dataset, I should print a message to the user that the data is not valid instead of giving them the wrong label 1.
For instance, in kNN classification, I have the kneighbors function which tells the distance of my closest neighbours to the test data I supplied on a 0 to 1 scale. So, I could easily eliminate the test data samples which are highly distant from my training data samples by keeping the threshold at 0.6 or 0.7.
Is there any criterion/threshold like this I could use with MLPClassifier, or with any of the incremental classifiers mentioned here, which can reject my test samples if they are not represented in the training dataset?
AI: SGDClassifier has a decision_function method which gives the signed distance to the separating hyperplane; you can compare these values against a threshold. Very large or very small values indicate samples far from the decision boundary.
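For the MLPClassifier from the question you can do something similar with predict_proba, rejecting predictions whose maximum class probability is below a cut-off. Keep in mind this measures the model's confidence rather than a true distance to the training data. A sketch, where the 0.7 cut-off is an assumption to be tuned on validation data:
import numpy as np

proba = clf.predict_proba([[2., 2.], [-1., -2.]])
threshold = 0.7   # assumed cut-off

for sample_proba in proba:
    if sample_proba.max() < threshold:
        print("data is not valid (prediction too uncertain)")
    else:
        print("predicted label:", clf.classes_[np.argmax(sample_proba)])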
H: How does a Recommender System recommend movies to a New User?
Consider a New user which has never rated any movie on the Website or the System has never seen the user.
How does the System recommend Movies to the User and based on what ?
How will we evaluate the Recommendation , is the recommendation accurate or appropriate and how confident the system is about the Recommendation .
AI: In general, a safe default option for recommender systems is to recommend the most popular product, so two options for the recommendation:
simply recommend the most popular movie in a time window around the current moment (maybe taking into account momentum),
do the same but marginalizing over account info, e.g. age, gender, country, language; also, IP location is an option to get such type of info.
Confidence assessment can be done based on recommendation success rate over the results of the same recommendation to previous users with the same profile characteristics, if such data is available. |
H: Are the raw probabilities obtained from XGBoost, representative of the true underlying probabilties?
1) Is it feasible to use the raw probabilities obtained from XGBoost, e.g. probabilities obtained within the range of 0.4-0.5, as a true representation of approximately 40%-50% chance of an event occurring? (assuming we have an accurate model)
2) The same question as in 1), but for other models that can output raw probabilities.
AI: It depends on the definition of accurate model, but in general the answer to your question 1) is No.
Regarding your second question (based on results in the paper of Niculescu-Mizil & Caruana linked below):
boosted trees and stumps - NO
Naive Bayes - NO
SVM - NO
bagged trees - YES
neural nets - YES
You can test whether it is the case for your particular model and dataset by looking at the so called reliability plot:
Create N bins based on the model output (e.g. 10-20)
Create a scatter plot with average model output for each bin along X axis and average true probability for each bin along Y axis
Ideally, your X-Y points should lie near the diagonal Y=X, otherwise the output of your classifier cannot be interpreted as a probability of an event.
However, not all is lost and if needed, one can try to modify (calibrate) the output of the model in a such way that it better reflects the observed probability. In order to assess whether the calibration exercise was successful one might look at the reliability plot based on the calibrated model output (instead of using raw model output).
Two most widely used techniques for the calibration of classifier outputs are Platt scaling and isotonic regression, see the links below.
Note that it is not advisable to calibrate the classifier using the training dataset (you might need to reserve a separate subset of your data for the purpose of calibration).
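A hedged scikit-learn sketch of both steps (the variable names, the held-out set and the bin count are illustrative assumptions):

from sklearn.calibration import calibration_curve, CalibratedClassifierCV

# reliability plot inputs: mean true frequency vs mean predicted probability per bin
prob_true, prob_pred = calibration_curve(y_valid, model.predict_proba(X_valid)[:, 1], n_bins=10)

# calibrate the raw outputs on held-out data ('sigmoid' = Platt scaling, or 'isotonic')
calibrated = CalibratedClassifierCV(model, method='isotonic', cv='prefit')
calibrated.fit(X_valid, y_valid)
calibrated_probs = calibrated.predict_proba(X_valid)[:, 1]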
Some relevant links
Predicting Good Probabilities With Supervised Learning
CALIBRATING CLASSIFIER PROBABILTIES
Classifier calibration with Platt's scaling and isotonic regression |
H: How to use correct weights in linear regression model
I'm trying to understand can we implement a simple linear regression model.
Let's say we are predicting price currencies. We want to know whether the currency will raise or not.
As i understand, we need to define two vectors for this:
$x=[1,2,3,4,5,6,7,8,9,10,11,12]$ - months
$y=[2.30,2.33,2.29,2.30,2.36,2.40,2.46,2.50,2.48, 2.43,2.38,2.35]$ - average prices.
Let's plot this:
Before sigmoid separator, i will try a simple linear separator.
I'm guessing that at first, I need to choose some random slope (and bias will be the smallest scalar in vector).
$f(x) = 0.026x+2.3$
As we see, the separator is inaccurate, let's try the quadratic cost function at f(1):
$C = \frac{1}{N} \sum_{i=0}^{N}(\hat{y} - y)^2$
$C = \frac{1}{1} \sum_{i=0}^{1}(2.3259999999999996-2.30)^2=0.0006759999999999897$
It seems accurate in the beginning, but it gets worse as it progresses, so somehow i need to improve it.
From my knowledge, the next step is to find the derivative of the function.
Normally, gradient descent algorithm is used for this, but finding a slope of the tangent line is very easy here:
$\frac{dy}{dx} = 0.026$
What is the next step? How can i use this derivative to use proper weights to minimize the cost function?
AI: You are correct for a linear separator line the derivative seems trivial. In this case gradient descent is not necessary because a closed form solution exists for the weights.
$w = (X^TX)^{-1}X^Ty$
We only use optimization techniques such as gradient descent for models where a closed form solution does not exist. However, performing gradient descent on the weights, using the gradient of the cost with respect to the weights and applying the following update iteratively,

$w^{new} = w^{old} - \nu \frac{\partial C}{\partial w}$

would yield the same result as the closed form solution. It is, however, unnecessary here, since the closed form solution is available.
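As a small sketch of that equivalence (x and y are numpy arrays of the months and prices from the question; the learning rate and iteration count are arbitrary choices):

import numpy as np

x = np.arange(1, 13, dtype=float)
y = np.array([2.30, 2.33, 2.29, 2.30, 2.36, 2.40, 2.46, 2.50, 2.48, 2.43, 2.38, 2.35])

w, b = 0.026, 2.3          # initial guess taken from the question
lr = 0.001                 # learning rate (the nu above)

for _ in range(20000):
    y_hat = w * x + b
    dC_dw = (2.0 / len(x)) * np.sum((y_hat - y) * x)   # gradient of the squared-error cost
    dC_db = (2.0 / len(x)) * np.sum(y_hat - y)
    w, b = w - lr * dC_dw, b - lr * dC_db

print(w, b)   # approaches the closed form least-squares solution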
H: Multi task learning architecture for Multi-label classification
I am working a classification problem. The dataset was collected from Painters by number, a competition hosted by kaggle. The task is to identify painter,style and genre given paintings.
So far, I trained individual models to predict painter,style,genre given paintings. Now i would like to incorporate Multi task learning (i.e) Developing a single model which can predict all three tasks.
The issue i am currently facing is in designing architecture.
Model No of classes (Softmax)
(Individual Models)
-------------------- -----------------------
Model predicts painter 8 (8 - painters)
given paintings
Model predicts style 10 (10 style classes)
given paintings
Model predicts genre 23 (23 genre classes)
given paintings
I don't know how to combine the above models in keras. Any suggestions or feedback would be helpful.
AI: You should design a multi-task model (MTM).
MTM has the ability to share learned representations from input between several tasks. More precisely, we try to simultaneously optimize a model with m types of loss function, one for each task. Consequently, MTM will learn more generic features, which should be used for several tasks, at its earlier layers. Then, subsequent layers, which become progressively more specific to the details of the desired task, can be divided into multiple branches, each for a specific task.
You need an architecture like the following:
# Your input and hidden layers
inp = Input(...)
x = Layer1(...)(inp)
x = Layer2(...)(x)
...
x = Layer_N-1(...)(x)
# Your 'm' output layers
out_1 = Layer_N(x)
out_2 = Layer_N(x)
...
out_m = Layer_N(x)
# your model
MTM = Model(inputs=inp, outputs=[out_1,out_2,...,out_m])
MTM.compile(loss=[loss_func_1, loss_func_2,..., loss_func_m],
optimizer='xxx',
metrics=xxx)
MTM.fit(train_data, [train_labels_1,train_labels_2,...,train_labels_m],
batch_size = xxx,
epochs = xxx)
If you want a real example in Keras, I have implemented a 2-task ConvNet for my project here. |
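As a hedged, minimal sketch of what such a model could look like for the painter/style/genre case above (the backbone layers, input size and hyperparameters are illustrative assumptions, not the original models):

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from keras.models import Model

inp = Input(shape=(64, 64, 3))                 # assumed input size
x = Conv2D(32, (3, 3), activation='relu')(inp)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)           # shared representation

painter = Dense(8, activation='softmax', name='painter')(x)
style = Dense(10, activation='softmax', name='style')(x)
genre = Dense(23, activation='softmax', name='genre')(x)

mtm = Model(inputs=inp, outputs=[painter, style, genre])
mtm.compile(optimizer='adam',
            loss=['categorical_crossentropy'] * 3,
            metrics=['accuracy'])
# mtm.fit(X_train, [y_painter, y_style, y_genre], batch_size=32, epochs=10)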
H: Compare image similarity in Python
I'm using a dataset of movies and would like to group if a movie is the same across different retailers.
Example:
Movie: Beauty and the Beast
Platforms: Google, Netflix, iTunes, Amazon.
I have access to signals like:
Studio, Movie Name, Runtime, Language, Release Year, etc. However, in some cases the movies are not the same and the signals mentioned before are not enough to find the right match. I need to do what a human would do: check the movie cover. Example:
Beauty and the Beast http://www.imdb.com/title/tt2771200/
Beauty and the Beast http://www.imdb.com/title/tt6305650/?ref_=fn_al_tt_1
I have access to the art image.
I'm using Python to do this comparison.
Is there a library that can help me compare 2 images and determine if they are similar?
AI: The problem you mention is not trivial. There is no library that out of the box will compare the pictures for you and give you a reliable similarity value. Therefore, you need to develop a system that works for both your problem and your dataset.
Having said that, since neural networks work better than any other method for image recognition you can try:
Autoencoders: (In case your data is unlabeled) The idea is that the model extracts the features for you and then you omit the output layers so you have a new representation of your image but in a new feature space the model has learnt from data. Once your images are in this new feature space, you can use whatever technique to compute similarity. You can have an example on how to do this here.
Hash binary codes: (In case your data is labeled). This is a supervised method based on CNNs that seems to work quite nice to find relevant features in your images. Have a look at this paper.
Working with images is normally not quite straightforward and it requires some effort and experimentation to master these techniques. However, it is absolutely worth it and fun.
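For the autoencoder route, a minimal sketch of the comparison step (assuming encoder is the encoding half of an autoencoder you have already trained, and img_a, img_b are preprocessed cover images with a batch dimension):

import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

feat_a = encoder.predict(img_a)[0]   # feature vector of the first cover
feat_b = encoder.predict(img_b)[0]   # feature vector of the second cover

print(cosine_similarity(feat_a, feat_b))   # values close to 1.0 suggest the same cover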
H: Card game for Gym: Reward shaping
I am working on a card game for openai gym and currently I ask myself how to shape the reward function for it. One round of the game consists of each player picking a card from its hand, whereas not every card can be played depending on the card which has been played by one of the players before. For every set of played cards, there is a total order such that the player with the highest card wins the round.
In the situation in which cards are rejected I want to give the agent some reward.
In case of an invalid card, it is hard to say if that card is any nearer to one of the valid cards than any other. Also the agent should learn that this card is not playable at this point.
For completeness, the agent gets a discrete observation of everything it can remember of the game (its own cards, cards played in current round, cards played in past rounds, game mode (defines the total order of cards)). Then it should play a discrete action which either is a game mode in the beginning or a card during the round. Then it either gets a reward because its card got rejected or it gets a reward based on whether it wins the round or not. The game accounts a certain amount of points for a won round depending on the constellation of played cards in that round.
My question is how to shape the rewards for card rejection and for winning a round. Any ideas? Positive or negative?
In case any more details are required, just ask for them.
AI: My question is how to shape the rewards for card rejection and for winning a round. Any ideas? Positive or negative?
In reinforcement learning, you must set rewards so that they are maximised when the agent achieves the goals of the problem. You should avoid trying to "help" the agent by setting interim rewards for things that might help it achieve those goals.
For card rejection, if that is part of the game (i.e. it is valid to play a "wrong" card, and you lose your turn), then either no reward, or a negative one might suffice. Probably you should go with no reward, because the punishment would be in not winning that round anyway.
If an invalid card cannot actually be played according to the rules of the game, and there is no "pass" move or equivalent, then you should not allow the agent to select it. Simply remove the action from consideration when making action selection. It is OK for the agent/environment to enforce this in a hard-coded fashion: A common way to do that, if your agent outputs a discrete set of action probabilities or preferences, is to filter that set by the environment's set of allowed actions, and renormalise.
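A minimal sketch of that filtering step (the array shapes and probabilities are illustrative):

import numpy as np

def mask_and_renormalise(action_probs, valid_mask):
    # valid_mask is a 0/1 array marking which cards may legally be played
    masked = action_probs * valid_mask
    return masked / masked.sum()

probs = np.array([0.1, 0.4, 0.3, 0.2])
mask = np.array([1, 0, 1, 1])          # the second card is not allowed this turn
action = np.random.choice(len(probs), p=mask_and_renormalise(probs, mask))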
What if you want the agent to learn about correct card selection? Once you have decided that, then it becomes a learning objective and you can use a reward scheme. The action stops being "play a card" and becomes "propose a card to play". If the proposal is valid, then the state change and reward for the round's play are processed as normal. If the proposal is not valid, then the state will not change, and the agent should receive some negative reward. Two things to note about this approach:
Turns in the game and time steps for the agent are now separate. That's not a problem, just be aware of the difference.
This will probably not encourage the agent to play better (in fact for same number of time steps, it will probably have learned less well how to win, because it is busy learning how to filter cards based on the observed features), but it will enable it to learn to propose correct cards without that being forced on it in a hard-coded fashion.
For winning a round, then you might want to reward the agent according to the game score it accumulates. Assuming that the winner of the overall game is the player with the highest score, this should be OK.
However, there is a caveat to that: If by making certain plays the agent opens up other players to score even higher, then simply counting how many points the agent gets is not enough to make it competitive. Instead, you want very simple sparse rewards: e.g. +1 for winning the game, 0 for drawing, -1 for losing. The main advantage of using RL approach in the first place is that the algorithms can and should be able to figure out how to use this sparse information and turn it into an optimal strategy. This is entirely how AlphaGo Zero works for instance - it has absolutely no help evaluating interim positions, it is rewarded only for winning or losing.
If you go with +1 win, -1 lose rewards, then you could maybe make players' current scores part of the state observation. That may help in decision making if there is an element of risk/gambling where a player behind in the scores might be willing to risk everything on the last turns just for a small chance to win overall. |
H: Tactics to avoid feeling overwhelmed by machine learning
Short version: despite lots of reading, machine learning still feels like being a monkey in the dark. Any advice?
For background, I'm a researcher in computer science, in a field non-related to machine learning.
I have been trying to get more proficient in machine learning*, yet no matter how much I read and fiddle with code/toy datasets, when I try to go to a harder problem, I always feel overwhelmed by the choices I need to make:
I have to choose the algorithm: This is the part I typically find the most straightforward;
For said algorithm, I have to choose the objective function : usually, many are applicable, and I find it difficult to gain a good intuition of what makes an objective function adapted in some cases rather than others, apart from the very classical ones for linear or logistic regression
And then, I should devise the features: this still feels completely arcane to me, apart from using content-based features readily available in the data.
I am under the impression that I have to "create" the tailored algorithm and the data.
Concerning the algorithm, I have spent some time into studying gradient boosting and the math behind it, to the point that I have a reasonably solid comprehension of how it works, and an intuition of parameter tuning for simple datasets. However, that knowledge does not generalize.
How are these issues typically approached? Are there any resources that can help?
* By taking the Machine Learning Coursera course and its more in-depth version, reading more XgBoost-specific material (on its internals and parameter tuning and intuition), as well as playing with the Titanic dataset, and a housing market dataset.
AI: We have to climb up a steep learning curve when we learn about machine learning. Your question is quite general: One of the tactics I use when learning is divide and conquer. Get some coarse overview about the whole area, then pick some particular area and dig deeper only there.
Perhaps the question is too general, the best tactic may vary and depend on the area you address.
But I am not sure if learning the math is always helpful (although it may always be interesting for those who care).
The algorithms can often be applied in a black box approach, and it may sometimes not be necessary to understand an algorithm in math terms (white box), but sufficient to know its function, strengths and weaknesses (black box).
You may be the first one that tests that algorithm for the domain, so pure experimentation is useful in the end. |
H: Neural Network beginner level tutorial
I am trying to build a simple multi layer perceptron Neural Network in Java, but apparently my calculations are off. I am looking for a beginner-level tutorial which can help me to understand how to properly calculate forward and backward pass, preferably with examples.
AI: One of the articles which helped me a lot is: A Step by Step Backpropagation Example by Matt Mazur. It covers the forward and backward passes of an MLP. I hope that helps.
Another great source is http://www.deeplearningbook.org/ |
H: Finding optimal weights for models
I'm trying to implement an algorithm to find the minimal value of a function.
Before moving to sigmoid activation functions, i'm trying to understand linear regression.
Usually, a gradient descent algorithm is used to find an minimal value where the algorithm converges, but there are some other ways for linear models.
Say I have two vectors:
x=[1,2,3,4,5,6,7,8,9,10,11,12]
y=[2.3,2.33,2.29,2.3,2.36,2.4,2.46,2.5,2.48,2.43,2.38,2.35]
Between these points, I would like to add a linear separator with least squares.
Say I have some imperfect linear function:
$f(x)=0.026x+2.3$
As I know, there are two ways to find this:
$w = (X^TX)^{-1}X^Ty$
and a gradient descent algorithm:
$w^{new} = w^{old} - \nu \frac{dy}{dx}$
For linear models, though, finding the derivative is trivial, so the second method is not strictly necessary.
Now i've used the first equation on the vectors, in Python:
w = ((np.transpose(x)*x)**-1)*np.transpose(x)*y
Unfortunately, the output was irrelevant:
[ 2.3, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]
Then i've tried using the second method for 500 iterations, in Python:
for i in range(1,5000):
x_old = x_new
x_new = x_old - v*dydx
print("x_new = {0} - {1}({2}) = {3}".format(x_old, v, dydx, x_new))
However, i'm not sure how to know when it reaches a convergence point.
How can I use these methods properly for linear models? And if so, how can they be used for more complex models such as logistic regression?
AI: In this case your feature matrix $X$ has a single dimension. Each point in your graph has a $y$ value that depends only on 1 value of $x$.
Ok let's go through the code
x=[1,2,3,4,5,6,7,8,9,10,11,12]
y=[2.3,2.33,2.29,2.3,2.36,2.4,2.46,2.5,2.48,2.43,2.38,2.35]
Let's convert these to matrices. We will also add a column of 1's to the end of the $X$ matrix. This will be used to train the bias value.
temp = np.ones((len(x), 2))
temp[:,0] = np.asarray(x)
x = temp
y = np.asarray(y)
Now we will calculate our weights as
$w = (X^TX)^{-1}X^Ty$
w = np.matmul(np.matmul(np.linalg.inv(np.matmul(np.transpose(x), x)), np.transpose(x)), y)
array([ 0.01174825, 2.30530303])
Look at the dimensions of our weights vector. It only has 2 values. One value associated with $x$, the first column of our $X$ matrix, and a bias, associated with the 1's column that we added. The equation of this line is described as
$y = 0.01174825 x_1 + 2.30530303$
We can see that this line indeed describes the data pretty well for a linear regression.
Deeper
However, your data looks like it would be better fit using a polynomial. You should try
$y = w_1 x_1^2 + w_2 x_1 + b$
To do this add a new feature in the $X$ matrix which corresponds to $x^2$.
x=[1,2,3,4,5,6,7,8,9,10,11,12]
y=[2.3,2.33,2.29,2.3,2.36,2.4,2.46,2.5,2.48,2.43,2.38,2.35]
temp = np.ones((len(x), 3))
temp[:,0] = np.power(np.asarray(x), 2)
temp[:,1] = np.asarray(x)
x = temp
y = np.asarray(y)
w = np.matmul(np.matmul(np.linalg.inv(np.matmul(np.transpose(x), x)), np.transpose(x)), y)
xx = range(1,15,1)
yy = [0]*len(xx)
for ix, i in enumerate(xx):
    yy[ix] = w[0]*i**2 + w[1]*i + w[2]
Even deeper
And going one step further by adding the $x^3$ term in the same way we get
Make sure not to add too high of a degree to your polynomial or you will be overfitting!! This means although you characterize your training data perfectly, it will not generalize well to new instances. Thus this will be a useless model. That is why you need to split your training and testing data, that way you can verify if the model you build using your training data can generalize. |
H: How to maximize recall?
I'm a little bit new to machine learning.
I am using a neural network to classify images. There are two possible classes. I am using a Sigmoid activation at the last layer so the scores of images are between 0 to 1.
I expected the scores to be sometimes close to 0.5 when the neural net is not sure about the class of the image, but all scores are either 1.0000000e+00 (due to rounding I guess) or very close to zero (for exemple 2.68440009e-15). In general, is that a good or bad thing ? How can this behaviour be avoided?
In my use case I wanted to optimize for recall by setting a lower threshold but this has no impact because of what I described above.
More generally, how can I minimize the number of false negatives when in training the neural net only cares about my not ad-hoc loss ? I am ok with decreasing accuracy a little bit to increase recall.
AI: Train to avoid false negatives
What your network learns depends on the loss function you pass it. By choosing this function you can emphasize various things - overall accuracy, avoiding false negatives, false positives etc.
In your case you probably use a cross entropy loss in combination with a softmax classifier. While softmax squashes the prediction values to be 1 when combined across all classes, the cross entropy loss will penalise the distance between the actual ground truth and the prediction. In this calculation it will not take into account what the values of the "false negative" predictions are. In other words: The loss function only cares for the correct class and its related prediction, not for the values of all other classes.
Since you want to avoid false negatives this behaviour is probably the exact thing you need. But if you also want the distance between the actual class and the false predictions, another loss function that also takes into account the false values might serve you even better. Given your high accuracy this poses the risk that your overall performance will drop.
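One hedged way to push the training itself towards higher recall in Keras is to weight the positive class more heavily; model is assumed to be your compiled network with the sigmoid output, and the weight value is an arbitrary assumption to tune:

model.fit(X_train, y_train,
          class_weight={0: 1.0, 1: 5.0},   # misses on class 1 (false negatives) cost 5x more
          epochs=10, batch_size=32)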
What to do then?
Making the wrong prediction and being very sure about it is not uncommon. There are millions of things you could look at, so your best guess probably is to investigate the error. E.g. you could use a confusion matrix to recognize patterns which classes are mixed with which. If there is structure you might need more samples of a certain class or there are probably labelling errors in your training data.
Another way to go ahead would be to manually look at all (or some) examples of errors. Something very basic as listing the errors in a table and trying to find specific characteristics can guide you towards what you need to do. E.g. it would be understandable if your network usually gets the "difficult" examples wrong. But maybe there is some other clear systematic your network did not pick up yet due to lack of data? |
H: Why adding combinations of features would increase performance of linear SVM?
I have a dataset of ~5000 elements represented by vectors composed by ~30 binary values (0 or 1)
on which I am performing binary classification with SVM with linear kernel (I use the Scikit learn lib).
Out of curiosity, I tried to add an extra feature that consists in an AND between two others (remember that all my features are boolean). The result was that the performance of the SVM improved. I was surprised by this improvement because the AND operation is equivalent to a multiplication, therefore I would expect that my SVM, as every linear classifier, was somehow naturally already taking into account multiplications between features.
What is wrong with my theoretic understanding of SVM ?
AI: Multiplication is not a linear operation. Your linear SVM constructs a (hyper-)plane
$$
w_0 = w_1 x_1 + w_2 x_2
$$
for some weights $w_0, w_1, w_2.$
By introducing the AND-feature, you add another dimension:
$$
w_0 = w_1 x_1 + w_2 x_2 + w_3 x_1 x_2.
$$
It might well be that your two-dimensional data set is not linearly separable, but the three-dimensional data set is.
A small addition: Would adding the OR-feature increase performance even further? No, because it is a linear combination of the other three features: $x \vee y = x + y - (x \wedge y)$ where $\vee$ is OR and $\wedge$ is AND. |
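To see this concretely, here is a small hedged demonstration on a synthetic XOR pattern (not your dataset):

import numpy as np
from sklearn.svm import LinearSVC

# XOR-like data: the label is 1 exactly when the two binary features differ
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 50)
y = np.logical_xor(X[:, 0], X[:, 1]).astype(int)

print(LinearSVC().fit(X, y).score(X, y))   # well below 1.0: not linearly separable

X_and = np.hstack([X, (X[:, 0] * X[:, 1]).reshape(-1, 1)])   # add the AND feature
print(LinearSVC().fit(X_and, y).score(X_and, y))   # reaches 1.0: now linearly separable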
H: Multi-Class classification with CNN using keras - trained model predicts object even in a fully white picture
I built a multi-class classification CNN using Keras with TensorFlow as the backend. It nicely predicts cats and dogs. However, when it comes to an image which does not contain any object (a fully white background image), it still finds a dog (let's say probability 0.75… for the dog class and 0.24… for cats). I am quite a newbie at building neural networks.
Sorry if I am asking a silly question, even though I have searched the internet I could not find any answer.
What I expect for the case of the white background image as an input to the prediction method is close to 0 probability for both the dog and cat classes.
Any suggestion would make me so happy.
The below is how I implemented the training.
classifier = Sequential()
classifier.add(Conv2D(32, 3, 3, input_shape=(64, 64, 3), activation='relu'))
classifier.add(MaxPool2D(pool_size=(2, 2)))
classifier.add(Conv2D(32, 3, 3, activation='relu'))
classifier.add(MaxPool2D(pool_size=(2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units=128, activation='relu'))
classifier.add(Dense(units=2, activation='softmax'))
# Metrics will be categorical_accuracy
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
training_set = train_datagen.flow_from_directory(
'/Users/ozercevikaslan/Desktop/Convolutional_Neural_Networks/dataset/training_set',
target_size=(64, 64),
batch_size=32,
class_mode='categorical')
test_set = test_datagen.flow_from_directory(
'/Users/ozercevikaslan/Desktop/Convolutional_Neural_Networks/dataset/test_set',
target_size=(64, 64),
batch_size=32,
class_mode='categorical')
classifier.fit_generator(
training_set,
steps_per_epoch=8000,
epochs=25,
validation_data=test_set,
validation_steps=2000)
AI: As I mentioned in my question post, the question is a bit silly even for a new learner. In this case, the model's world has only 2 classes, dogs and cats, so the softmax output must distribute all of the probability between them: the output is forced to be either a dog or a cat.
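A hedged way to approximate the behaviour the question expects is to only accept a prediction when the winning softmax probability is high enough (the 0.9 threshold is an arbitrary illustrative choice); alternatively, the training set itself can be extended with a third "neither cat nor dog" class containing background images.

import numpy as np

probs = classifier.predict(white_image)[0]   # white_image: preprocessed array with a batch dimension
if probs.max() < 0.9:                        # illustrative confidence threshold
    print("model is not confident enough: probably neither a cat nor a dog")
else:
    print("predicted class index:", np.argmax(probs))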
H: How to compute precision and accuracy of a sequence that is not strictly binary?
Given a predicted sequence and actual sequence I want to compute it's precision and accuracy, for example: Note that these sequences will only contain 0, 1 or -1
predicted sequence: -1,0,1,1,-1,0,1,1,0,-1
actual sequence: -1,1,0,1,-1,1,0,1,0,-1
I know that precision is computed using TP / (TP + FP) and accuracy is computed using (TP + TN) / (TP + TN + FP + FN). But because I have -1 in it I am unsure how I would compute true positives? My understanding is that a true positive is when I predicted a 1 and its corresponding actual value is a 1. A walk through of the computation for precision and accuracy would help.
AI: Welcome to the Site!
We know that this problem is Multi-Class Classification Problem.
To get a confusion matrix for the same you can use the following command:
from mlxtend.evaluate import confusion_matrix
#import the required packages
from mlxtend.evaluate import confusion_matrix
from mlxtend.evaluate import plot_confusion_matrix
#Actual Target Values
y_target = [-1,1,0,1,-1,1,0,1,0,-1]
#Predicted Values
y_predicted = [-1,0,1,1,-1,0,1,1,0,-1]
#creation of confusion matrix
cm = confusion_matrix(y_target=y_target,
y_predicted=y_predicted,
binary=False)
#to print the calculated values of Confusion Matrix
cm
Outcome:
array([[3, 0, 0],
[0, 1, 2],
[0, 2, 2]])
For visualizing the cm you can use the following command:
fig, ax = plot_confusion_matrix(conf_mat=cm)
plt.show()
You can go through this Link for better understanding of mlextend.
You can get the Precision and Accuracy values by using the following formulas:
With rows as actual classes and columns as predicted classes (as in the matrix above):

$\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$

$\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$
Go through Link-1 and Link-2 for a better understanding of how to compute these; Link-3 is a GitHub link which explains how they implemented it for a 1-D array, and looking at that you can try extending it to your outcome.
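If you prefer not to compute these by hand, scikit-learn gives the same numbers directly:

from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [-1, 1, 0, 1, -1, 1, 0, 1, 0, -1]
y_pred = [-1, 0, 1, 1, -1, 0, 1, 1, 0, -1]

print(accuracy_score(y_true, y_pred))                  # overall accuracy
print(precision_score(y_true, y_pred, average=None))   # per-class precision, classes in sorted order
print(recall_score(y_true, y_pred, average=None))      # per-class recall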
H: Plots getting rid of whitespace
So I'm doing the Udemy course for Data Science in Python, and there's something weird happening with the plots. The first one in the above picture is mine, the second is the instructors.
How do I get rid of the white space on the sides, they seem to be default when using the plotting function in pandas.
Edit:
This is all the code. Everything is basically default, so I'm guessing the matplotlib has some default settings changed in newer version of the package. The file df3 is here https://github.com/sxsheng/Udemy
Code:
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('df3')
%matplotlib inline (This line is to be used in jupyter notebooks to display plots)
df3['a'].plot.hist()
AI: Welcome to the site!
What you were thinking is right: the newer version of the plot looks a bit different. To get exactly the same plot as above you can use the following code:
import pandas as pd
import matplotlib.pyplot as plt
df3 = pd.read_csv('df3')
#%matplotlib inline (This line is to be used in jupyter notebooks to display plots)
ax = plt.subplot(111)
ax.hist(df3['a'],ec='black')
#this is x axis limiter since your value ranges in between 0 to 1 I gave these values
plt.xlim([0,1])
plt.show()
Outcome:
Let me know if you need anything else.
If this is what you were looking for, then you can accept the answer by clicking on the green tick mark. |
H: Re: Logistic Regression
I am working on a dataset that has a dependent variable that is binary, but it contains 98% 0's and 2% 1's. I am trying to use logistic regression to predict purchase of a product. But because of the huge number of 0's, the model is not predicting well and is producing a large number of false positive results.
Kindly suggest how can I approach this.
AI: This kind of problem is called a data imbalance issue. It is very common in the financial industry (e.g. fraud detection at banks and insurance companies) and in health care (e.g. cancer cell detection).
To overcome such issues, we use different techniques like Over-sampling or Under-sampling.
Over-sampling tries to balance the data by increasing the minority class, for example by duplicating (or synthesising) minority records.
Under-sampling tries to balance the data by decreasing the majority class, removing some majority records that are not significant.
There are different algorithms for implementing the same.
you can go through these Link-1,Link-2, for Explanation and Implementation of the same.
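Two common options, sketched with assumed variable names X and y for your features and target:

# 1) re-weight the classes instead of resampling
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(class_weight='balanced')
model.fit(X, y)

# 2) over-sample the minority class with SMOTE (imbalanced-learn package)
from imblearn.over_sampling import SMOTE
X_resampled, y_resampled = SMOTE().fit_resample(X, y)   # older versions call this fit_sample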
Let me know if you need anything else. |
H: How do I split number string with digit pattern?
I am trying to split number string to two to digit numbers
How do I get two different Numbers out that string
Example:
I want separate two numbers
x <- c("-26755.22-50150.60")
To this
-26755.22
-50150.60
I have tried stringr::str_split but I din't manage to keep the digits.
AI: I think this should do the thing for you:
#the input
str <- " -26755.22-50150.60"
#you can use this command to split the data into 2 columns
#this doesn't have any constraint in the number of digits before and after the decimal
strspl <- as.numeric(unlist(regmatches(str,gregexpr("(?>-)*[[:digit:]]+\\.*[[:digit:]]*",str, perl=TRUE))))
Outcome:
> strspl
[1] -26755.22 -50150.60
If you want to store the outcome in a data-frame then you can use the below command
#everything is same but instead of as.numeric we replace it with as.data.frame
str_df <- as.data.frame(unlist(regmatches(str,gregexpr("(?>-)*[[:digit:]]+\\.*[[:digit:]]*",str, perl=TRUE))))
#it is not necessary to change the column but it is a good practice to do so
colnames(str_df) <- "variable_1"
Outcome:
Let me know if you have any doubt, would help you.
If you got what you are looking for, you can accept the answer by clicking on the green tick mark. |
H: Performace of Fischer projection as dimension reduction compared to other LDA methods
How is the performance of Fischer projection compared to other LDA methods of dimension reduction? I thought that Fischer projection was a great method of dimension reduction by maximizing class separation, but when I looked at the LDA methods in scikit learn, Fischer projection wasn't even in the list. This got me thinking, is it any good compared to the other methods out there?
Edit (answer): My bad, they are the same. Fischer projection is a 2 class special case of LDA. Projection using LDA can be performed using the fit_transform and transform methods in sklearn
AI: I didn't get what you meant by "other LDA Methods". To the best of my knowledge, Fisher method is just one.
Sklearn has the implementation and you can find it here.
Is it good or bad? When it comes to dimensionality reduction you don't know as dimensionality reduction methods are usually unsupervised. So if PCA works better? No one knows. At least in the context of two-class classification, Fisher method is still brilliant.
Hope it helped! |
H: Get the probabilities of Tensorflow
Hi I am studying tensorflow for cifar-10 image classification using the code here
AI: I want to ask you that how to predict the probabilities of each class in test images.
At line 27 in the train.py you have the following code:
correct_prediction = tf.equal(y_pred_cls, tf.argmax(y, axis=1))
It tries to find whether the predicted values are the same as the real ones. You can run y_pred_cls to see the probability of each class for your desired input.
I want to use the code to predict the probabilities of new data's labels, how to save and load the model we have trained which used the train data.
for saving your model and its weights you can take a look at here. As you can see from there, you have to make a saver object:
import tensorflow as tf
#Prepare to feed input, i.e. feed_dict and placeholders
w1 = tf.placeholder("float", name="w1")
w2 = tf.placeholder("float", name="w2")
b1= tf.Variable(2.0,name="bias")
feed_dict ={w1:4,w2:8}
#Define a test operation that we will restore
w3 = tf.add(w1,w2)
w4 = tf.multiply(w3,b1,name="op_to_restore")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
#Create a saver object which will save all the variables
saver = tf.train.Saver()
#Run the operation by feeding input
print(sess.run(w4, feed_dict))
#Prints 24 which is sum of (w1+w2)*b1
#Now, save the graph
saver.save(sess, 'my_test_model',global_step=1000)
And for loading that you have to restore it:
import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess,tf.train.latest_checkpoint('./'))
# Access saved Variables directly
print(sess.run('bias:0'))
# This will print 2, which is the value of bias that we saved
# Now, let's access and create placeholders variables and
# create feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict ={w1:13.0,w2:17.0}
#Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
print(sess.run(op_to_restore, feed_dict))
#This will print 60, which is calculated as (w1+w2)*b1
Edit: Actually the code is a bit strange anyway. The relevant part is this line:
tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y)
Here output holds the class logits; the cross entropy itself is a loss value, but applying tf.nn.softmax to those logits gives you the probability of each class.
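A minimal sketch of that idea, assuming output is the logits tensor and x the input placeholder used in the linked code (treat both names as assumptions taken from that repository):

probabilities = tf.nn.softmax(output)                 # turn the logits into per-class probabilities
probs = sess.run(probabilities, feed_dict={x: test_images})
print(probs[0])                                       # 10 values summing to 1 for CIFAR-10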
H: Word2Vec benefit in LSTM
if Word2Vec is nothing but a transformation of one-hot into a dense vector, why can't I just feed one-hot into LSTM (or for that matter sacrifice first dense layer, in any network that will end up using the embedding) and call it a day?
Why would I actually spend time pre-computing Word2Vec embeddings?
Yes, the resulting embedings are vectors that are clustered together if the words have similar meaning. But I presume a feed-forward classifier would figure out a good internal model anyway?
AI: You don't need word embeddings. Actually, in neural machine translation is frequent not to use them and simply train the embeddings along with the task.
Nevertheless, word embeddings work as a data augmentation technique, as you normally use a different (and much larger) dataset to train them, so they can be useful when you don't have much training data.
Therefore, the decision of using pre-trained word embeddings or not should be driven by the available data. |
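A hedged Keras sketch of the two options (the vocabulary size, dimensions and embedding_matrix are placeholders):

from keras.layers import Embedding

# option 1: learn the embedding jointly with the task (random initialisation)
emb_learned = Embedding(input_dim=20000, output_dim=100)

# option 2: initialise from pre-trained word2vec vectors, optionally frozen
emb_pretrained = Embedding(input_dim=20000, output_dim=100,
                           weights=[embedding_matrix],   # assumed pre-built matrix
                           trainable=False)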
H: Identify Bad Products from given parameters using neural networks
I have a problem at hand to identify Good/Bad Products using given parameters. The number of parameters are in the order of 5000s and there are multiple values for the parameters. However I do not have a labelled set of data which says these are the products that are good or bad.
For Example, Say the parameters are AX, AY, AZ, B, C, DX, DY, etc.
Each of them has a different range. Is decision trees the right approach?
Can classification be applied to this problem?
AI: No. Classification requires labelled data. Without labelled data there is no way to solve this. How would you do anything at all, if you don't know which of the products in the training set are good and which are bad? There's no basis for making a decision of any sort. |
H: Could we use image hasing techniques for images classification tasks?
I have read some articles about image hashing, and I would like to know if we could apply this technique for general purpose images classification tasks.
Especially I would like to know which could be the drawbacks in using image hashing for this kind of tasks.
AI: You could, but it won't be very effective.
Image hashing is aimed at detecting two instances of almost the same image. So, if your training set contains an image of a dog, and the test set contains an almost-identical image, then using image hashing you could use that to learn the label of the test set image. But in practice that doesn't provide much generalization power. In practice, we want to take hundreds of different images of dogs, and use that to learn to recognize new images of dogs, so that if in the test set we are given a totally new image of a dog, we can still classify it as a dog. Image hashing won't help with that.
In other words, image hashing isn't designed for this sort of thing and will work poorly. |
H: How to use LSTM to make prediction with both feature from the past and the current ones?
Suppose I have a data frame with 2 columns, which are sales and promotions. I want to predict the next day sales based on the past sales and promotion info of 3 days, plus the promotion to be applied at the next day? How do I process and reshape the dataframe? I mean if only previous promotions need to be considered, then, after some shifting, the data frame could be reshaped into (sample size, 3, 2), but it becomes a problem if I also need to consider the promotion at the next day. It is a pretty common issue, does anyone has any thought about this?
AI: Have a look at this question. There is a nice discussion about how to implement this. The idea in to create a separated Input to your model and concatenate it AFTER the recurrent layer(s). Also in the Keras documentation, there is an example on how to build such models with a few lines of code. |
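A hedged sketch of that architecture for the sales/promotion example (the shapes and layer sizes are illustrative):

from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

past = Input(shape=(3, 2))           # 3 past days x (sales, promotion)
future_promo = Input(shape=(1,))     # promotion planned for the next day

h = LSTM(32)(past)
h = concatenate([h, future_promo])   # merge after the recurrent layer
out = Dense(1)(h)                    # next-day sales

model = Model(inputs=[past, future_promo], outputs=out)
model.compile(optimizer='adam', loss='mse')
# model.fit([X_past, X_next_promo], y_next_sales, epochs=10, batch_size=32)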
H: Feature selection vs Feature extraction. Which to use when?
Feature extraction and feature selection essentially reduce the dimensionality of the data, but feature extraction also makes the data more separable, if I am right.
Which technique would be preferred over the other and when?
I was thinking, since feature selection does not modify the original data and it's properties, I assume that you will use feature selection when it's important that the features you're training on be unchanged. But I can't imagine why you would want something like this..
AI: Adding to The answer given by Toros,
These(see below bullets) three are quite similar but with a subtle differences-:(concise and easy to remember)
feature extraction and feature engineering: transformation of raw data into features suitable for modeling;
feature transformation: transformation of data to improve the accuracy of the algorithm;
feature selection: removing unnecessary features.
Just to add an Example of the same,
Feature Extraction and Engineering(we can extract something from them)
Texts(ngrams, word2vec, tf-idf etc)
Images(CNN'S, texts, q&a)
Geospatial data(lat, long etc)
Date and time(day, month, week, year, rolling based)
Time series, web, etc
Dimensional Reduction Techniques (PCA, SVD, Eigen-Faces etc)
Maybe we can use Clustering as well (DBSCAN etc)
.....(And Many Others)
Feature transformations(transforming them to make sense)
Normalization and changing distribution(Scaling)
Interactions
Filling in the missing values(median filling etc)
.....(And Many Others)
Feature selection(building your model on these selected features)
Statistical approaches
Selection by modeling
Grid search
Cross Validation
.....(And Many Others)
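To make the selection-vs-extraction distinction above concrete, a small hedged scikit-learn illustration (X and y are assumed to be your feature matrix and target; chi2 expects non-negative features):

from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import PCA

# feature selection: keep 10 of the original columns, unchanged
X_selected = SelectKBest(chi2, k=10).fit_transform(X, y)

# feature extraction/transformation: build 10 new components from all columns
X_extracted = PCA(n_components=10).fit_transform(X)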
Hope this helps...
Do look at the links shared by others.
They are Quite Nice... |
H: Data Snooping, Information Leakage When Performing Feature Normalization
Assume that we have a training data set (with both features and labels) and a test data set (with only features).
When we build a machine learning model that requires normalization of the features, the correct way of doing normalization (to prevent information leakage) is to only use the training data set. Namely, a wrong way of doing normalization is to stack the training (exclude the Y column) and test data set together, and perform normalization (i.e., using the mean and variance of the entire training + test data set). The intuition here is very clear: if you want to get an unbiased estimator of your model performance in production, then when you train your model, you shouldn't let any information from the test data set, which will be used to gauge the actual performance of your model, leak into training.
My question is the following: When we have trained the model correctly and about to use this model to do prediction, how should we normalize the test data set? I believe the correct way to normalize the test data set is: using the mean and variance obtained from the training data set to do normalization on the test data set.
However, why not just normalize the test data using its own mean and variance? Or why not stack train and test data set together and using the overall mean and variance to normalize the test data set? In the prediction stage, the idea and intuition of data snooping and information leakage is not clear to me.
AI: The normalization parameters you fitted in training are now part of your model. You fitted the model weights on the training data: the normalization step is part of your model now, and the "parameters" for that step are the mean and variance of the training data, not the test data.
To help make this more concrete, let's pretend that this model is in production. You fit your model on whatever training data you had available, and now it's running as a scoring service. You can evaluate your model accuracy over time as data you've scored gets labeled to ground truth, but you're just evaluating the model not refitting it. How do you normalize these incoming predictions? You're going to need to use the mean and variance from whatever dataset you used to fit the model. If you'd rather do something like using the last K observations to estimate mean/variance, then that's the procedure you need to be testing when you fit the model. |
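In scikit-learn terms, a minimal sketch of that procedure looks like this (X_train and X_test are assumed):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)      # mean/variance estimated on the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)    # reuse the training statistics at prediction time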
H: Proper/Possible methods for extracting unstructured data from websites
I'm working in Python, using Scrapy, and NLTK to try to understand how I can extract data from college websites.
My scraper can navigate through the university websites and find their tuition fees pages perfectly , but when trying to extract specific fees like :
Resident
Non Resident
Per Credit Hour
Per Semester
I'm running into trouble due to the data being so unstructured from site to site.
I've tried using NLTK to parse data based on parts of speech tags and regex chunking to try to extract sentences such as "tuition cost for resident: $12,500" but colleges can display this data in a number of ways.
Here is my question:
Are there any better ideas/methodologies that I should be looking into that can help me with extracting this type of data?
AI: You need to build a couple of classifiers. First, you need a classifier to give you thumbs up that you want to parse the page at all. Call this the "is_relevant" model. Once you've determined that a page is relevant, you should pass it through a separate classifier for each data element you're hoping to capture (or a multiclass classifier capable of recognizing each of those elements and distinguishing them from content you're not interested in). |
H: Is correlation needed when building a model?
Some papers report the correlation between features when building a model and some don't. Is there a need to check the correlation between features and target feature? It won't be easy if the number of features is high.
AI: Not really, no. Sort of. It depends on how complex your model/data is.
It's entirely possible to have a situation where a feature taken in isolation will not be correlated with the target variable, but multiple features considered together will. This is why univariate correlation is unreliable for feature selection. A trivial case that demonstrates this is a bivariate model performing a binary classification where the positive class is bounded by the right upper and left lower quadrants, and the negative class is bounded by the left upper and right lower quadrants (i.e. the "XOR" pattern):
If the input features have the same sign (x>0 & y>0 or x<0 & y<0), it's the positive class, else it's the negative class. But either feature in isolation is completely useless and uncorrelated with the target.
Additionally, modern models like deep neural networks are effectively capable of "learning their own features", i.e. constructing extremely complex features by developing abstractions from the raw inputs. The "final" features learned by such a model will likely be correlated with the target, but the input features need not be.
For example, if you consider the imagenet task (classifying a photo as a member of one of 10,000 classes), I'd be very surprised to learn that there's any correlation between the values of specific pixels and any target class. That's just not how we interpret photos. The value of the pixel in position [25, 10] should not have any correlation with whether or not the picture is a photo of a dog. But, if we think of the entire network before the output layer as a feature engineering system (such that the "classifier" is just the output layer), then the "features" provided by the penultimate layer (the inputs to "The Classifier") probably have some correlation with the target.
TL;DR: If a feature is correlated with the target, it probably contains information about the target that will be useful for modeling. But that does not mean uncorrelated features are useless. Reporting correlation when it's there is a simple way to demonstrate that there's a signal in that variable. But lack of correlation doesn't necessarily mean you should throw those features away. In fact, correlation doesn't even mean you should necessarily use that feature either: you can have multiple features correlated with the target that are highly correlated with each other, in which case you would probably only want one or a handful of that group in your model. |
H: How to optimize XGBoost performance accuracy?
I have dataset to predict customers dropout(yes,no), with 5 numerical features and 2 categorical features. I have applied a scaler to the numerical data and transformed the categorical features into dummies variables, creating 29 features. My dataset has shape of 6552 rows and 34 features. What is the recommend approach to tune the parameters of XGBClassifier, since I created the model using default values, i.e., model=XGBClassifier()? Should I use a brute-force looping the values in some parameters until I find a optimal prediction value? In this case what is recommended?
AI: There are three main techniques to tune up hyperparameters of any ML model, included XGBoost:
1) Grid search: you let your model run with different sets of hyperparameters, and select the best one among them. Packages like SKlearn have routines already implemented. But also in this case you have to pre-select the nodes of your grid search, i.e. which values have to be tried by the routine.
2) Random search: similar to grid search, but you basically only choose the parameter boundaries, and the routine randomly tries different sets of hyperparameters.
More information about methods 1 and 2 is here.
3) Bayesian optimization algorithms; this is the way I prefer. Basically this kind of algorithm guesses the next set of hyperparameters to try based on the results of the trials it has already executed. An easy to use and powerful one is SMAC.
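A hedged sketch of option 1 for an XGBClassifier (the grid values are illustrative, not a recommendation):

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    'max_depth': [3, 5, 7],
    'learning_rate': [0.01, 0.1],
    'n_estimators': [100, 300],
}
search = GridSearchCV(XGBClassifier(), param_grid, scoring='accuracy', cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)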
H: counting number of parameters keras
I'm implementing a 1D CNN in keras by following the keras tutorial on the same -
link. Once the model is built, when I execute model.summary(), I get the following output.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 1000) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 1000, 100) 17410600
_________________________________________________________________
conv1d_1 (Conv1D) (None, 996, 128) 64128
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 199, 128) 0
_________________________________________________________________
conv1d_2 (Conv1D) (None, 195, 128) 82048
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 39, 128) 0
_________________________________________________________________
conv1d_3 (Conv1D) (None, 35, 128) 82048
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 1, 128) 0
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 16512
_________________________________________________________________
dense_2 (Dense) (None, 20) 2580
=================================================================
Total params: 17,657,916
Trainable params: 247,316
Non-trainable params: 17,410,600
_________________________________________________________________
None
The conv1d_1 has the total number of parameters as 64128. But since the conv1d_1 was initialized with filters = 128, kernel_size = 5, padding = 'valid' (which means no padding), shouldn't the number of parameters be
=> kernel_size * kernel_size * num_filters + num_filters * bias
=> 5 * 5 * 128 + 128 * 1
=> 26 * 128
=> 3328
AI: In fact, you use 1D convolution.
Given that the dimension of the output of embedding layer is 100, that the kernel size is 5, and that the number of filters is 128,
You have 100x5x128 = 64000 weights. Add to this 128 biases and you get 64128 parameters.
Note that parameter sharing is used, so that there is only one set of weights and biases per filter, in depth. |
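The same count written out as a quick check (nothing Keras-specific here):

in_channels, kernel_size, filters = 100, 5, 128
params = (in_channels * kernel_size + 1) * filters   # +1 accounts for each filter's bias
print(params)                                        # 64128, matching model.summary()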
H: How to get rid of some parts in a string variable?
The original string is like this "965 - Vehicle parking area", how do I get rid of the "965 - " part in R? (There are two spaces before and after the dash.)
Thank you!
AI: One possibility is to use the sub/gsub function to find the first ' - ' and remove everything before it, together with the dash and the spaces around it.
> tmp
[1] "965 - Vehicle parking area"
> gsub("^.*? - ","", tmp)
[1] "Vehicle parking area"
If there are two spaces:
> tmp <- "965 - Vehicle parking area"
> sub("^.*? - ","", tmp)
[1] "Vehicle parking area" |
H: Can't understand this simple matrix multiplication in python
Ok guys, I might be very tired here, but I can't figure out why this matrix multiplication by a scalar gives the following result (python)
Matrix named 'dx'
[ 1.6, 3.6, 0.4, 14.4, 25.6],
[10. , 10. , 0.4, 14.4, 3.6],
[ 0.4, 0. , 0. , 1.6, 10. ],
[ 6.4, 0. , 3.6, 1.6, 0.4],
[14.4, 0. , 25.6, 0.4, 6.4]
10 * dx, in python, gives
[ -40., -60., -20., -120., -160.],
[-100., -100., -20., -120., -60.],
[ -20., 0., 0., -40., -100.],
[ -80., 0., -60., -40., -20.],
[-120., 0., -160., -20., -80.]
From what I understand, each member should be multiplicated by 10, but it's clearly not the case.
I'm using Python 3.6 & numpy
What Am I missing ?
Link to matrix multiplication by scalar , pretty basic stuff :
http://www.purplemath.com/modules/mtrxmult.htm
Thanks !
AI: It works for me.
>>> import numpy as np
>>> dx = np.matrix([[ 1.6, 3.6, 0.4, 14.4, 25.6],
... [10. , 10. , 0.4, 14.4, 3.6],
... [ 0.4, 0. , 0. , 1.6, 10. ],
... [ 6.4, 0. , 3.6, 1.6, 0.4],
... [14.4, 0. , 25.6, 0.4, 6.4]])
>>> 10 * dx
matrix([[ 16., 36., 4., 144., 256.],
[ 100., 100., 4., 144., 36.],
[ 4., 0., 0., 16., 100.],
[ 64., 0., 36., 16., 4.],
[ 144., 0., 256., 4., 64.]])
You might want to check if you have redefined dx before the multiplication. |