H: What's the best strategy to train a CNN with images that only have labels for positive characteristics? I have a large database of images that are only partially labeled for multiple, non-exclusive characteristics or objects present on them. For instance, an underwater scene might feature the labels water, swimsuit and fish on it. Problem is it's only partially labeled, meaning that the fact the label cat is absent doesn't mean there's no cat on the image. What'd be the best strategy to train a CNN able to tell the missing labels on the dataset? The image base has 230 thousand images on it, but given that multiple labels are possible, it's hard to tell the ratio of images that are fully labeled. I'm able to code python and managed to use keras to train on mnist dataset on gpu. AI: I think your best bet would be transfer learning. Start with a model that has already been trained with a wider dataset such as the ones presented here. From there you can train the model with your specific dataset. You can then use output nodes for the labels which you have available to you, and you can get the predictions for the other images from the pre-trained model which are usually trained for thousands of different classes. Alternatively, you can train a model with all the output classes you have in your label set and another output for "other". Then when an output node is selected you can pass that same input to the pre-trained model which was trained with your data as well.
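As a rough illustration of the transfer-learning route (a minimal sketch, assuming Keras with a VGG16 base and one sigmoid output per available label; num_labels is a placeholder for however many labels you actually have):

from keras.applications import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_labels = 20  # placeholder: the number of labels available in your dataset

# Pre-trained convolutional base, with its weights frozen at first
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
# One sigmoid per label: multi-label, non-exclusive outputs
outputs = Dense(num_labels, activation='sigmoid')(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')

Sigmoid outputs with binary cross-entropy treat each label independently, which matches the non-exclusive labels described in the question.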
H: Q-learning why do we subtract the Q(s, a) term during update? I can't understand the meaning of the $-Q(s_t, a_t)$ term in the Q-learning algorithm, and can't find an explanation for it either. Everything else makes sense. The q-learning algorithm is an off-policy algorithm, unlike SARSA. The Bellman equation describes q-learning as follows: $$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\left[ r_t + \gamma \cdot argmax(Q(s'_t, a'_t)) - Q(s_t, a_t) \right] $$ "The q value for action $a$ taken in state $s$ at time $t$ becomes equal to: that same q-value plus a small amount of: the currently received reward (which could be zero) with some amount $\gamma$ of the best Q-value available in the new state, minus our current value." To me, this $-Q(s_t, a_t)$ term at the very end is redundant. ...If we set gamma to $0.8$, the future rewards will decay anyway. Yes, if instead we now set $\gamma=0$, then the $-Q(s_t, a_t)$ term will drag our value down - is there a case when it would be useful and what would the result be? Edit: wikipedia uses a slightly different form of the Bellman equation $$Q(s_t, a_t) \leftarrow (1-\alpha)\cdot Q(s_t, a_t) + \alpha\left[ r_t + \gamma \cdot argmax(Q(s'_t, a'_t)) \right] $$ It's the same equation as the one above, because we can multiply $Q(s_t, a_t)$ with an $\alpha$ and then factor it out, obtaining the first equation. This representation makes me kind of understand that we are linearly interpolating from the current to the newer Q-value, but I can't tie it to the original representation ...In the original representation (the first equation), it magically seems that gamma would be enough - can someone clear it up for me? AI: The wikipedia formulation does indeed show you a better view of how the update rule for action values is constructed: $$Q(s_t, a_t) \leftarrow (1-\alpha)\cdot Q(s_t, a_t) + \alpha\left[ r_t + \gamma \cdot \text{max}_{a'}(Q(s_{t+1}, a')) \right] $$ . . . here you can see that you are taking a weighted average between $Q(s_t, a_t)$ and $r_t + \gamma \cdot \text{max}_{a'}(Q(s_{t+1}, a'))$. An aside: In both cases you have written $\text{argmax}$, where the actual term is $\text{max}$. The intuitive reason for the update is because the second term contains a new estimate of the true value of Q from the environment - the immediate reward $r_t$, and the actual state transition that occurred $s_{t+1}$ are observed data that are part of the new estimate. The term $r_t + \gamma \cdot \text{max}_{a'}(Q(s_{t+1}, a'))$ is usually called the Temporal Difference Target or just TD Target. In the original representation (the first equation), it magically seems that gamma would be enough - can someone clear it up for me? The $\gamma$ term is the discount rate, and has nothing to do with convergence of Q-learning or SARSA. It is a term used to control preference for more immediate rewards (low values) or longer-term rewards (high values), and as such is part of the problem definition. It is not a learning parameter like $\alpha$, the learning rate. And in fact $\gamma = 0$ is possible (rarely used, but meaning that only the next reward is important); $\gamma = 1$ is also possible and quite a common choice for episodic problems. Clearly when using $\gamma = 1$, there is no "decay of future rewards" happening. What is actually happening is that the TD Target is a stochastic estimate for the correct Q action value (in fact with all TD learning such as SARSA and Q-learning, this is a biased estimate, although the bias should reduce as the system approaches convergence). 
If you rename $Q(s,a)$ as $Q_{so\_far}$ and $r_t + \gamma \cdot \text{max}_{a'}(Q(s_{t+1}, a'))$ as $Q_{latest\_estimate}$, you get the idea of the update: $$Q_{updated} \leftarrow (1-\alpha)Q_{so\_far} + \alpha Q_{latest\_estimate}$$ which is the same as $$Q_{updated} \leftarrow Q_{so\_far} + \alpha(Q_{latest\_estimate} - Q_{so\_far})$$ You might ask "Why is the TD Target an estimate of $Q(s_t, a_t)$?" The answer can be derived from the definition of Q, and is called the Bellman equation - in this case the Bellman equation for action value under the optimal policy $\pi^*$ (another aside: what you have called the Bellman equation is not, although it is related). I won't give the full derivation here, but in essence the definition of Q is "the expected total reward when starting with state $s_t$ and action $a_t$ then following the optimal policy from then on", or $$q_{*}(s, a) = \mathbb{E}_{\pi^*}[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} | S_t = s, A_t = a]$$ and from this definition, you can extract the first reward term and write out $q_*(s_t, a_t)$ in terms of $q_*(s_{t+1}, a_{t+1})$. $$q_{*}(s, a) = \sum_{r,s'} p(r,s'|s,a)(r + \gamma \text{max}_{a'} q_{*}(s', a'))$$ Where $p(r,s'|s,a)$ is the transition probability of getting reward $r$ and ending up in state $s'$ when starting in state $s$ and taking action $a$. Note this is an equality, not an update rule. Q-learning and SARSA turn it into an update rule by sampling the right hand side, and using that sample to update the current best estimate of the left-hand term. Note that by taking a sample, you don't need to know $p(r,s'|s,a)$ . . . that is what is meant by "model free" learning, as if you knew $p(r,s'|s,a)$ then you would have a model of the underlying Markov Decision Process (MDP).
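As a small illustration of the update rule above (a sketch only; Q is assumed to be a table such as a dict of dicts, and alpha and gamma are the learning rate and discount factor):

def q_update(Q, s, a, r, s_next, alpha, gamma):
    # TD target: r_t + gamma * max_a' Q(s_{t+1}, a')
    td_target = r + gamma * max(Q[s_next].values())
    # TD error: the "- Q(s_t, a_t)" term turns the target into a difference from the current estimate
    td_error = td_target - Q[s][a]
    # Move the current estimate a fraction alpha towards the target
    Q[s][a] = Q[s][a] + alpha * td_error
    return Q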
H: What to give as predictors to predict future values? I am new to machine learning techniques. I was going through a few supervised machine learning examples and I have a doubt about predicting future values. I have a daily time series dataset from a database where my target variable is a noisy signal. To train and evaluate the models, we divide the dataset into a train set, validation set and test set to check whether the model is efficient or not. I have two independent variables and one target variable. I am using linear regression, Keras LSTM and other models. My basic question is: how do I predict future values (for the next week or month) when I don't have the independent variables for next week? What am I supposed to give as predictors in that case? Any information is much appreciated. AI: Welcome to the site! Firstly, any predictive (regression) algorithm needs the future values of the independent variables (the features explaining the target) in order to make a prediction. Secondly, if you don't have such values, you cannot use a plain regression model; forecasting algorithms are designed for exactly this task. To apply them you need time series data, and from your sample and graph your data does appear to be a time series. Note that you can also use additional features in a time series model. For feature engineering on time series data you can go through the linked material. If you need any additional information, let me know.
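To make the forecasting idea concrete, here is a minimal sketch (the lag choices, the variable names and the plain linear model are all illustrative assumptions): build lag features from the target itself, so the model only needs past values, which are always available, to forecast ahead.

import pandas as pd
from sklearn.linear_model import LinearRegression

# `series` is assumed to be a pandas Series holding the daily target values
df = pd.DataFrame({'y': series})
for lag in (1, 2, 7):
    df['lag_%d' % lag] = df['y'].shift(lag)
df = df.dropna()

model = LinearRegression().fit(df[['lag_1', 'lag_2', 'lag_7']], df['y'])

# One-step-ahead forecast; feeding predictions back in as lags extends this to a week
next_features = [[series.iloc[-1], series.iloc[-2], series.iloc[-7]]]
next_value = model.predict(next_features)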
H: Handle Unbalanced data I have a data-set with 2 target classes. In the training dataset, the ratio of the 2 classes is 1:93. With my neural network, the current accuracy is 63%. I tried undersampling, oversampling and equal sampling, but with no improvement. AI: You have not specified which neural network you are using, but as noted in the comments, you should try to fit your data first. You have to find a model that learns your training data; for this purpose you don't have to increase the amount of data, at least not at this stage. Instead, try changing the hyper-parameters of your neural network, e.g. the number of layers or the number of neurons per layer. You can take a look at the linked resources: the former can help you, and the latter helps you understand the features learned by CNNs in case you are using them. I have not seen a built-in F1 score in Keras, but you can implement it yourself and pass it to the compile method.
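For reference, a rough batch-wise F1 metric for Keras might look like the sketch below (an approximation computed per batch, not an exact epoch-level F1; binary 0/1 labels and an existing compiled-to-be `model` are assumed):

from keras import backend as K

def f1_metric(y_true, y_pred):
    y_pred = K.round(y_pred)                        # threshold probabilities at 0.5
    tp = K.sum(y_true * y_pred)
    precision = tp / (K.sum(y_pred) + K.epsilon())
    recall = tp / (K.sum(y_true) + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[f1_metric])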
H: Several fundamental questions about CNN I am trying to make a CNN for 3D image recognition but everything is predicted to only one class out of three. And the prediction even quickly converges during the first epoch. I have been working on this for a week and am totally lost. I have several guesses of my own as to why it always converges to one class. My 3D tensor is as large as 40x35x30 and there is a lot of white space because I had to put my objects in a cubic box. Would this be problematic? I have only ~5000 samples for training and ~500 for test sets. Do I have too few samples? My labels have 3 classes and ~75% of the whole data belongs to class number 1. During the first epoch, my model quickly converges to predict 99% of data as this class. Would this unbalanced data cause the problem? My model is designed as conv1(kernel_size=10,stride=1,filter=32)-batch_norm1-maxpool1-conv2(8,1,64)-batch_norm2-maxpool2-conv3(8,1,64)-maxpool3-fc4(1024)-batchnorm4-dropout4(20%)-fc5(384)-batchnorm5-dropout5(20%)-fc6(3) I standardized the input between -1 and 1. I use leaky-relu activation for the conv layers and the Adam optimizer with a decay rate of 0.99. First, I am not sure if it is okay to perform batch normalization at every layer. Am I missing any important concept in designing the CNN model here, or maybe my data is just bad? I kind of suspect that my samples are not significantly different from each other and there is no pattern at all. In this case, is there any statistical method/model to check if my data have meaningful differences? (The 3D images are some chemical/physical data in 3D space that I converted to numpy matrices.) But I think it is more like a vanishing gradient problem, because when I initialize the variables with Xavier initialization, the convergence to class #1 is slower. Please someone help me :( AI: You more than likely do not have enough training data for a neural network. Your class imbalance problem is probably an issue. Instead of using accuracy as a measurement, try some type of F-score. Batch normalization should be applied between the convolution layer and the activation function. If you think you have a vanishing or dying activation problem, plot the gradients or the sum of gradients. It'll give you an idea of whether you're right or not.
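For the batch-normalisation point, the ordering would look roughly like this (a Keras functional-API sketch with made-up layer sizes; the question does not state which framework is actually used, and x is assumed to be the previous layer's output tensor):

from keras.layers import Conv3D, BatchNormalization, LeakyReLU, MaxPooling3D

x = Conv3D(32, kernel_size=10, strides=1, padding='same')(x)
x = BatchNormalization()(x)   # normalise the convolution output...
x = LeakyReLU()(x)            # ...before the non-linearity
x = MaxPooling3D(pool_size=2)(x)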
H: XGBoost: predictive, descriptive (or both) model? I have trained an XGBoost model for prediction. The algorithm is able to calculate variable importances. I was asked why I have not analyzed these variable importances and I did not because as I understood XGBoost is rather a predictive than a descriptive model. I am not sure what extent theses importances can be used to capture real importance (I understand that it may be dependent on the context). Am I right that XGBoost is rather a predictive model? AI: You're right that it is a predictive model, but analysing feature importance, partial dependence and other metrics/plots are intended to allow you to gain an understanding of the classification process of models which are often very black-box in nature. Yes, you can use this to gain some understanding of the underlying data, but I'd be more inclined to say that the primary reason for looking at feature importance is to understand your model rather than to understand your data.
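If you do want to look at the importances, the xgboost Python package exposes them directly (a sketch, assuming a trained booster `bst`):

import xgboost as xgb
import matplotlib.pyplot as plt

print(bst.get_score(importance_type='gain'))     # also 'weight' (frequency) or 'cover'
xgb.plot_importance(bst, importance_type='gain') # bar chart of the chosen importance type
plt.show()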
H: StratifiedKFold: ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead Working with Sklearn stratified kfold split, when I attempt to split using multi-class, I receive an error (see below). When I try to split using binary, it works with no problem. num_classes = len(np.unique(y_train)) y_train_categorical = keras.utils.to_categorical(y_train, num_classes) kf=StratifiedKFold(n_splits=5, shuffle=True, random_state=999) splitting data into different folds for i, (train_index, val_index) in enumerate(kf.split(x_train, y_train_categorical)): x_train_kf, x_val_kf = x_train[train_index], x_train[val_index] y_train_kf, y_val_kf = y_train[train_index], y_train[val_index] ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead. Is there a way I can use KFold with multi-class? AI: There is an easier way than writing the loop yourself: scikit-learn provides cross_val_score. from sklearn.model_selection import KFold, cross_val_score k_fold = KFold(n_splits=10, shuffle=True, random_state=0) clf = <any classifier> print(cross_val_score(clf, X, y, cv=k_fold, n_jobs=1)) The topic has also been discussed here. You can also see here a code snippet which may help you: from sklearn.model_selection import KFold X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) y = np.array([1, 2, 3, 4]) kf = KFold(n_splits=2) kf.get_n_splits(X) print(kf) for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] Note that the first n_samples % n_splits folds have size n_samples // n_splits + 1, and the other folds have size n_samples // n_splits, where n_samples is the number of samples.
H: Putting a predictive model into production Even after all these years of data science from 2010 to 2018, why is there no general framework for putting a predictive model into production? AI: Depending on what exactly you mean by framework, I would argue that there is. Using a REST interface to serve a production model at inference time is pretty close to a general serving framework at this point. However, there are a potentially infinite number of use cases that would diverge from this standard even if one categorically existed. Some mobile apps using image recognition models on personal data that the user doesn't want transmitted to the cloud are pushing the envelope of ways to compress formerly huge neural network architectures into ones that easily fit into mobile-sized memory, for example. If you're talking about a software framework, it shouldn't be surprising that a field of programming practice (or, rather, a plethora of fields) with needs for a variety of languages, and even ecosystems within languages, would have difficulty forming a consensus around a single approach. It's likely that deployment frameworks have yet to offer enough benefit to justify the reduction in general flexibility that adopting one would entail.
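As a toy example of the REST pattern mentioned above (a minimal sketch, assuming Flask and a pickled scikit-learn model saved as model.pkl; none of these choices is a fixed standard):

import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()['features']       # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()
    return jsonify({'prediction': prediction})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)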
H: Creating data model out of .csv file using Python I want to create a data model out of a .csv file using Python. I mean to create dependencies, for example the primary key and so on, such that I can check if a new .csv complies with the given data model. I would appreciate some suggestions regarding how to do that, the libraries, frameworks etc. AI: Adding on top of @El Burro's answer: most training/testing proof-of-concept model work happens on DataFrames, which provide convenient functionality like chained operations, broadcasting and filling missing values, and pandas is one such library. It also infers data types and is built on the numpy stack, which is fast at manipulating arrays. Beyond that, since you asked about checking the format of incoming data being passed to the model, you can use pandas like this (this is just a demo of what you can achieve; there might be a better way to pull it off): >> import pandas as pd >> df1 = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}) >> df1 a b 0 1 a 1 2 b 2 3 c >> df2 = pd.DataFrame({'a': [1, 2, 3], 'b': [1.0, 2.0, 3.0]}) >> df2 a b 0 1 1.0 1 2 2.0 2 3 3.0 >> df1.dtypes a int64 b object dtype: object >> df2.dtypes a int64 b float64 dtype: object >> df1.dtypes.to_dict() == df2.dtypes.to_dict() False
H: Adapting the Keras variational autoencoder for denoising images I am asking this question here after it went unanswered in Stack Overflow. I'm trying to adapt the Keras example for VAE I have modified the code to use noisy mnist images as the input of the autoencoder and the original, noiseless mnist images as the output. import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm from keras.layers import Input, Dense, Lambda, Layer from keras.models import Model from keras import backend as K from keras import metrics from keras.datasets import mnist batch_size = 100 original_dim = 784 latent_dim = 2 intermediate_dim = 256 epochs = 1 epsilon_std = 1.0 x = Input(shape=(original_dim,)) h = Dense(intermediate_dim, activation='relu')(x) z_mean = Dense(latent_dim)(h) z_log_var = Dense(latent_dim)(h) def sampling(args): z_mean, z_log_var = args epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=epsilon_std) return z_mean + K.exp(z_log_var / 2) * epsilon z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var]) # we instantiate these layers separately so as to reuse them later decoder_h = Dense(intermediate_dim, activation='relu') decoder_mean = Dense(original_dim, activation='sigmoid') h_decoded = decoder_h(z) x_decoded_mean = decoder_mean(h_decoded) # Custom loss layer class CustomVariationalLayer(Layer): def __init__(self, **kwargs): self.is_placeholder = True super(CustomVariationalLayer, self).__init__(**kwargs) def vae_loss(self, x, x_decoded_mean): xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) return K.mean(xent_loss + kl_loss) def call(self, inputs): x = inputs[0] x_decoded_mean = inputs[1] loss = self.vae_loss(x, x_decoded_mean) self.add_loss(loss, inputs=inputs) # We won't actually use the output. return x y = CustomVariationalLayer()([x, x_decoded_mean]) vae = Model(x, y) vae.compile(optimizer='rmsprop', loss=None) # train the VAE on MNIST digits (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) noise_factor = 0.5 x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape) x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_test_noisy = np.clip(x_test_noisy, 0., 1.) 
vae.fit(x_train_noisy, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=( x_test_noisy,x_test)) But I am getting the following error message: File "ask_vae.py", line 86, in <module> validation_data=( x_test_noisy,x_test)) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1574, in fit batch_size=batch_size) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1411, in _standardize_user_data exception_prefix='target') File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 58, in _standardize_input_data 'expected no data, but got:', data) ValueError: ('Error when checking model target: expected no data, but got:', array([[ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], ..., [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.]], dtype=float32)) It seems that the model is not capable of receiving an output ; it works when I change the output to None, like so: vae.fit(x_train_noisy, None, shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=( x_test_noisy,None)) Is that because of the way the Custom Loss Layer is defined? How should I proceed? Thanks :) AI: Since I asked this question here as well, I am pasting my answer to it here. I used a different way to define the VAE loss, as demonstrated in: https://github.com/keras-team/keras/blob/keras-2/examples/variational_autoencoder.py I changed it to allow for denoising of the data. It works now, but I'll have to play around with the hyperparameters to allow it to correctly reconstruct the original images. import numpy as np import time import sys import os from scipy.stats import norm from keras.layers import Input, Dense, Lambda from keras.models import Model from keras import backend as K from keras import metrics from keras.datasets import mnist from keras.callbacks import ModelCheckpoint filepath_for_w='denoise_by_VAE_weights_1.h5' ########### ########## experiment_dir= 'exp_'+str(int(time.time())) os.mkdir(experiment_dir) this_script=sys.argv[0] from shutil import copyfile copyfile(this_script, experiment_dir+'/'+this_script) ########## ########### batch_size = 100 original_dim = 784 latent_dim = 2 intermediate_dim = 256 epochs = 10 epsilon_std = 1.0 x = Input(batch_shape=(batch_size, original_dim)) h = Dense(intermediate_dim, activation='relu')(x) z_mean = Dense(latent_dim)(h) z_log_var = Dense(latent_dim)(h) def sampling(args): z_mean, z_log_var = args epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0., stddev=epsilon_std) return z_mean + K.exp(z_log_var / 2) * epsilon # note that "output_shape" isn't necessary with the TensorFlow backend z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var]) # we instantiate these layers separately so as to reuse them later decoder_h = Dense(intermediate_dim, activation='relu') decoder_mean = Dense(original_dim, activation='sigmoid') h_decoded = decoder_h(z) x_decoded_mean = decoder_mean(h_decoded) def vae_loss(x, x_decoded_mean): xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) return xent_loss + kl_loss vae = Model(x, x_decoded_mean) vae.compile(optimizer='rmsprop', loss=vae_loss) # train the VAE on MNIST digits (x_train, y_train), (x_test, y_test) = mnist.load_data() #after loading the data, change to the new experiment dir os.chdir(experiment_dir) # 
########################## x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) noise_factor = 0.5 x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape) x_test_noisy = np.clip(x_test_noisy, 0., 1.) for i in range (10): x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) checkpointer=ModelCheckpoint(filepath_for_w, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=True, mode='auto', period=1) vae.fit(x_train_noisy, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=(x_test_noisy, x_test), callbacks=[checkpointer]) vae.load_weights(filepath_for_w) #print (x_train.shape) #print (x_test.shape) decoded_imgs = vae.predict(x_test,batch_size=batch_size) np.save('decoded'+str(i)+'.npy',decoded_imgs) np.save('tested.npy',x_test_noisy) #np.save ('true_catagories.npy',y_test) np.save('original.npy',x_test)
H: Layer notation for convolutional neural networks When reading about convolutional neural networks (CNNs), I often come across a special notation used in the community and in scientific papers, describing the architecture of the network in terms of layers. However, I was not able to find a paper or resource describing this notation in detail. Could someone explain to me the details or point to where it is described or "standardized"? Examples: input−100C3−MP2−200C2−MP2−300C2−MP2−400C2−MP2−500C2−output (source) input−(300nC2−300nC2−MP2)_5−C2−C1−output (source) A good guess seems that xCy are convolution layers (x is number of filters? y is one side of square kernel?). MPz is max-pooling layer (pool size z×z?). But instead of guessing, I would love to have a reference (which I could possibly also reference in a paper). AI: One paper referenced by the first paper you linked to is here. It explains in section 3 (experiments) the following notation: 2x48x48-100C5-MP2-100C5-MP2-100C4-MP2-300N-100N-6N represents a net with: 2 input images of size 48x48 a convolutional layer with 100 maps and 5x5 filters a max-pooling layer over non-overlapping regions of size 2x2 a convolutional layer with 100 maps and 4x4 filters a max-pooling layer over non overlapping regions of size 2x2 a fully connected layer with 300 hidden units, a fully connected layer with 100 hidden units a fully connected layer with 6 neurons (one per class) From this, the answer to your question is: 100C3 means a convolutional layer with 100 maps and 3x3 filters MP2 means a max-pooling layer with non overlapping regions of size 2x2 200C2 means a convolutional layer with 200 maps and 2x2 filters etc with the "C" layers (means convolutional, preceding integer is the number of features maps; final integer is the filter size) According to the second paper you linked, the subscript _5 indicates five pairs of 300nC2−300nC2−MP2 connected layers (see section 3), and the n indicates "the number of filters in the nth convolutional layer is [300]n". According to the accompanying model diagram (figure 3 in the linked paper), the C2 and C1 layers produce 1x1 output, meaning a scalar value. This would mean C2 is a convolutional layer with 1 map and a 2x2 filter and C1 is a convolutional layer with 1 map and 1x1 filter (though I don't fully understand what this adds).
H: Math behind L2 Regularization for Logistic Regression I read that L2 regularization in logistic regression creates a sort of sphere that limits the choice of $w$ weight, but why does this happen? AI: Your question is really about the method of Lagrange multipliers in constrained optimization, not logistic regression per se. The gist of it is that a constrained optimization problem can be recast as an unconstrained optimization problem by adding a term, called the regularizer, and vice versa. The sphere comes from recasting the unconstrained problem into a constrained one; recall that a constant $L_2$ norm defines a hypersphere.
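To sketch the connection (notation mine, not from the original question): the penalised problem $$\min_{w}\; L(w) + \lambda \lVert w \rVert_2^2$$ corresponds, for a suitable pairing of $\lambda$ and $c$, to the constrained problem $$\min_{w}\; L(w) \quad \text{subject to} \quad \lVert w \rVert_2^2 \le c,$$ and the feasible set $\lVert w \rVert_2^2 \le c$ is a ball whose boundary $\lVert w \rVert_2^2 = c$ is exactly the (hyper)sphere that limits the choice of $w$.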
H: Derivation of the cross-entropy equation in Michael Nielsen's book I am reading the book http://neuralnetworksanddeeplearning.com/chap3.html by Michael Nielsen. So this is a question mostly for the people familiar with the book and understanding the material. In the equations (71-75) we are trying to find a cost-function $C$ satisfying: $$\frac{\partial{C}}{\partial{\omega_j}}=x_j (a-y),$$ $$\frac{\partial{C}}{\partial{b}}=(a-y),$$ where $\omega_j$ and $b$ are weights and bias of a neuron, $a$ is the output of the sigmoid function $$a = \sigma\left(\sum_j{\omega_{j} x_j} + b\right)=\sigma(z).$$ We apply the chain rule (equation 73): $$\frac{\partial{C}}{\partial{b}}=\frac{\partial{C}}{\partial{a}}\frac{\partial{a}}{\partial{b}}=\frac{\partial{C}}{\partial{a}}\sigma'(z).$$ And in the next line the author writes Using $\sigma'(z)=\sigma(z)(1-\sigma(z))=a(1-a)$ the last equation becomes... Where does this expression come from? $\sigma(z)(1-\sigma(z))$ AI: The author uses the sigmoid activation function in this case: $$\sigma(z) = \frac{1}{1+e^{-z}},$$ whose derivative is $$\sigma'(z) = \frac{e^{-z}}{(1+e^{-z})^2}.$$ Writing the numerator as $e^{-z} = (1+e^{-z}) - 1$, this becomes $$\sigma'(z) = \frac{1}{1+e^{-z}} \cdot \frac{(1+e^{-z}) - 1}{1+e^{-z}} = \sigma(z)\,(1 - \sigma(z)).$$ This is subtly mentioned in eq. 3 of chapter 1.
H: TensorFlow: number of channels of conv1d filter I want to apply a ConvNet on my one dimensional data retrieved from 13 sensors. So, each of my samples consists of 13 channels (of 51 values) I am using 'conv1d' to apply a ConvNet on my data. The network works nicely, but I wonder how 'conv1d' determines the number of channels for it's filters... To my knowledge, a filter should have an equal number of channels as its input data, which makes it a $5x13$ filter. I set the filter to have a width of 5, but don't need to set the number of channels anywhere. My question is: how does layer 'conv1' determine it's number of channels? Below is a portion of my code: # We have 13 1D channels of 51 points each # Note that we've indicated -1 for batch size, which specifies that this dimension should be dynamically computed # based on the number of input values in features["x"], holding the size of all other dimensions constant. input_layer = tf.reshape(features["x"], [-1, 51, 13]) # Convolutional Layer #1 # Shouldn't this filter also need to number of channels? This should match the input number of channels conv1 = tf.layers.conv1d(inputs=input_layer, filters=32, kernel_size=5, padding="same", activation=tf.nn.relu) # Pooling Layer #1 pool1 = tf.layers.max_pooling1d(inputs=conv1, pool_size=2, strides=2) AI: In TensorFlow there are different convolution layers. Conv1d, Conv2d and Conv3d. the first one is used for one dimensional signals like sounds, the second one is used for images, gray-scale or RGB images and both cases are considered to be two dimensional signals. The last one is used for three dimensional signals like video frames, images as two dimensional signals vary during time. In your case Conv1d is used as one dimensional signal and you can specify the number of filters in the arguments of the method. You can take a look at here and here.
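To see that the channel dimension of the filter is inferred from the input rather than specified, here is a small sketch (using the tf.keras layer API rather than tf.layers, purely for illustration):

import tensorflow as tf

inputs = tf.keras.Input(shape=(51, 13))                 # 51 points, 13 channels
conv = tf.keras.layers.Conv1D(filters=32, kernel_size=5, padding='same')
outputs = conv(inputs)                                  # building the layer creates the kernel
print(conv.kernel.shape)                                # (5, 13, 32): width x in_channels x filters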
H: Does it make sense to parallelize machine learning algorithms as part of PhD research? I'm developing machine learning algorithms to aid in the diagnosis and prognosis of various cancers for my PhD. My lab is an Nvidia teaching center (CUDA). My supervisor thinks that I need to also optimize ML by parallelizing it in CUDA. However, as I see it, a model is trained once and there is no need to train again. Testing a model is also not time consuming. My interests are in ML, not Parallel Processing. 1) Should I spend a large chunk of my time parallelizing with CUDA? 2) Is CUDA still a viable framework for research? 3) In the world outside of research, will this make it easier to get a ML job? AI: From a practical point of view, just sharing some thoughts. I don't have any research (PhD) experience, and your supervisor may communicate similar content to what follows. By parallel computation here I personally mean a single PC that uses graphics-card GPU cores to accelerate calculations, as opposed to cluster computation. Some Theoretical Thoughts Sometimes it is worth building multiple models, or even the same model multiple times, on the same data in the ML world. Examples would be: using cross validation (the same method multiple times) to obtain robust model outputs or parameters, or model ensembles with lots of weak learners (multiple models) to obtain better accuracy. These ML processes are time consuming, and parallelisation can help reduce that time. Also, from the info provided, I assume your ML project is image recognition with GPU acceleration, but I have no idea what the main purpose of the project is. It could be developing/improving new ML methods, or comparing known ML methods to form an academic review. Whatever the case, I assume the result should still achieve a certain accuracy level. Hence, it makes sense to consider efficient methods (e.g. parallel computation) to accelerate the modelling process. Some Practical Considerations In practice, efficiency is very important. A theoretically accurate model that takes a long time to build won't be acceptable. You can step back and fall back to a simple, quick model with less accuracy, but then what's the point of ML when traditional methods have similar or better accuracy? Personal Answer to Questions 1) Should I spend a large chunk of my time parallelizing with CUDA? Parallelisation is useful for making ML training quicker, and CUDA is a valuable technique to learn and apply to ML. Just balance the time spent on it against the main purpose of the project. 2) Is CUDA still a viable framework for research? This is meant to be the first chapter of your report / essay / dissertation. At the very least, more and more ML uses deep learning, which can be much quicker with a GPU involved. 3) In the world outside of research, will this make it easier to get a ML job? I cannot definitively say that knowing CUDA/parallelisation would make you a top candidate (other aspects would also be considered). But when other candidates have a similar background, one with parallel computation experience would stand out.
H: ANN applied to Boston Housing dataset returns negative value This example is taken from the book Deep Learning With Python from Jason Brownlee. It applies a fully connected neural model with one hidden layer (13, 13, 1) using Keras library and the Tensorflow backend. 1 - Import the packages import numpy from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasRegressor from sklearn.model_selection import cross_val_score from sklearn.model_selection import KFold from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.datasets import load_boston 2 - Load the dataset boston = load_boston() X = boston.data[:,0:13] Y = boston.target 3 - Define base model def baseline_model(): # create model model = Sequential() model.add(Dense(13, input_dim=13, \ kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) # Compile model model.compile(loss='mean_squared_error', optimizer='adam') return model 4 - Fix random seed for reproducibility seed = 7 numpy.random.seed(seed) 5 - Fit & evaluate model estimator = KerasRegressor(build_fn=baseline_model, epochs=10, \ batch_size=5, verbose=0) kfold = KFold(n_splits=10, random_state=seed) results = cross_val_score(estimator, X, Y, cv=kfold) 6 - Value of results parameter Out[63]: array([-13.89524042, -14.2215869 , -6.21156782, -42.65242339, -26.58890147, -56.30860755, -28.6575911 , -89.67339525, -27.7172946 , -22.67604859]) Which returns a mean negative value print("Baseline: %.2f (%.2f) MSE" % (results.mean(), results.std())) It returns about -30 and it should return about +30 according to the book. I've tried both Theano and Tensorflow with no success. I've also tested this code both on Windows and Linux, having obtained the same bad result. The problem seems to be in the cross-validation part, because if I don't run cross validation, I get more sensible results. 7 - Without cross-validation model = Sequential() model.add(Dense(13, input_dim=13, kernel_initializer='normal', \ activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X, Y, epochs=150, batch_size=10) Now if I evaluate the model I get a more sensible value In [68]: model.evaluate(X, Y) 506/506 [==============================] - 0s 58us/step Out[68]: 27.778296670423664 What can be happening here? Why is the cross-validation procedure returning negative values? AI: "This is to be expected. sklearn has changed their API to invert their cost functions. Nothing to be concerned about." I got this answer from Dr. Jason Brownlee.
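In other words, scikit-learn's scorers follow a greater-is-better convention, so losses such as MSE are reported negated; flipping the sign recovers a positive error (a one-line adjustment to the print statement above):

print("Baseline: %.2f (%.2f) MSE" % (-results.mean(), results.std()))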
H: Get number of correct predictions for each class in Keras I have an unbalanced dataset (1:93) and want to use the kappa metric. However, for that I need to capture how many correct predictions are made for each class. I have tried understanding from here and other Google links. Is it possible to capture the number of correct predictions per class? AI: Yes. To understand how many correct decisions your classifier has made, a confusion matrix can be used. Its main diagonal shows exactly that: it depicts how many data samples are correctly classified, while the off-diagonal entries show how many are mislabeled as which class. You can take a look at the linked reference.
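A sketch of how to get the per-class correct counts (and kappa) with scikit-learn; `model`, `x_test` and `y_test` are assumed to exist, and the argmax step assumes one-hot/softmax outputs:

import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_prob = model.predict(x_test)
y_pred = np.argmax(y_prob, axis=1)      # for a single sigmoid output use (y_prob > 0.5) instead

cm = confusion_matrix(y_test, y_pred)
print(np.diag(cm))                      # correct predictions per class (main diagonal)
print(cohen_kappa_score(y_test, y_pred))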
H: Recreating the sum symbol using python I am currently reading a white paper relating to Expectation-Maximisation (EM) and would like to encode a formula so I can play with it in order to help my understanding. The formula in question is a sum over values and shown below; I am wondering what the advice on the best way to acheive this would be. I am currently thinking about using a for loop to create some kind of cumulative value as I iterate from t = 1 to m. AI: In most cases, I would go for NumPy. Implement a Python function f(t) that calculates the $t$-th summand. Then run import numpy as np result = np.array([f(t) for t in range(1,m+1)]).sum() This will be very fast, unless $m$ is so large that [f(t) for t in range(1,m+1)] does not fit into memory. In this case, I would follow your approach and use a for-loop: result = 0 for t in range(1, m+1): result += f(t)
H: How to classify parametric curves? I am working on a project which aims at determining whether a patient has cervical issues or not, based on a certain movement (for instance, turning the head from left to right and so on). For each patient, I have one or more sets of coordinates acquired with a VR headset. One problem is, all the patients are assumed healthy and I cannot compare their data with patients who actually have cervical issues. I am currently working with two coordinates at a time, not all three, and I am considering two approaches: the first is to use approximation (splines...), the second is to use concave hulls. I am a bit more inclined to use hulls and I thought that I could calculate the distance between two hulls A and B as follows: area(A\B)+area(B\A). Note that the curves are parametric. I have two questions: Is it possible to "classify" patients using a data set that only consists of healthy patients? Or to find a "descriptor" for them? If so, or assuming I can get data on unhealthy patients, what tools can I use to classify the curves? I did not find anybody who worked on a similar problem. AI: Whenever you have a skewed dataset, you know some classes better than others. In such cases the data is your knowledge, and there are learning algorithms for exactly this situation. Consider an example: suppose you have feature vectors describing the condition of a nuclear plant, indicating whether the plant is in danger of a radiation leak or not. Clearly it does not happen often that a plant is in danger, so most of your data carries the healthy label. You have a great deal of knowledge about the healthy class, but little about the dangerous class, because you have little data for it; consequently you don't know its distribution and cannot estimate it well. A skewed dataset means, for example, that you have 1 million feature vectors of the negative class and only 5 of the positive class. Whenever you have such an imbalanced dataset, or you only have samples of some classes and none of the others (as in your case, with only healthy patients), you can use anomaly detection. In data mining, anomaly detection (also outlier detection) is the identification of items, events or observations which do not conform to an expected pattern or to other items in a dataset. Typically the anomalous items translate into some kind of problem such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are also referred to as outliers, novelties, noise, deviations and exceptions.
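A minimal sketch of the anomaly-detection route with only healthy patients (the feature matrix X_healthy and the choice of a one-class SVM are assumptions; the features could be e.g. spline coefficients or hull-based descriptors of each curve):

from sklearn.svm import OneClassSVM

detector = OneClassSVM(nu=0.05, kernel='rbf')   # nu roughly bounds the fraction flagged as anomalous
detector.fit(X_healthy)                         # fit on healthy patients only

labels = detector.predict(X_new)                # +1 = consistent with the healthy class, -1 = anomaly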
H: ValueError: operands could not be broadcast together with shapes while using two sample independent t test I am trying to perform two sample t test. My data set consists of 744 rows and 186 columns for which I have calculated total sum and mean. I need to perform two sample t test. My csv looks like this from which I have to calculate ttest and rank sum test for each row as individual row denotes separate ID and have the corresponding values : SRA ID ERR169499 ERR169498 ERR169497 Label 1 0 1 TaxID PRJEB3251_ERR169499 PRJEB3251_ERR169499 PRJEB3251_ERR169499 333046 0.05 0.99 99.61 1049 0.03 2.34 34.33 337090 0.01 9.78 23.22 The labels 0 and 1 are for case and control respectively. So far I have done this: import pandas as pd import numpy as np from scipy.stats import ttest_ind from scipy.stats import ranksums def transposer(filename): file = open(filename, 'rt') pd.read_csv(file).T.to_csv(str(filename).split("/") [-1].split(".")[0]+'_transposed.csv',header=False) pd.read_csv('project.csv').T.to_csv('transposed.csv', header=False) file = open('transposed.csv', 'rt') out = open('final_out.csv', 'w') meta = open('Meta3251.csv', 'rt') contents = {} for ids in meta: contents[ids.split(',')[1]]=ids.split(',')[-1] count = 0 for row in file: if count == 0: out.write('SraID, Label,'+row) count=1 else: try: pid = row.split(',')[0].split('_')[1] out.write(pid.replace('\n','')+','+contents[pid].replace('\n','') +','+str(row)) out.flush() except: print(pid) pass file.close() out.close() transposer('final_out.csv') file1 = open('final_out_transposed.csv','rt') label = [] data = {} x = open('final_out_transposed.csv','rt') for r in x: datas = r.split(',') if datas[0] == ' Label': label.append(r.split(",")[1:]) label = label[0] label[-1] = label[-1].replace('\n','') counter = len(label) for row in file1: content = row.split(',') if content[0]=='SraID' or content[0]== 'TaxID' or content[0]==' Label': pass else: dt = row.split(',') dt[-1] = dt[-1].replace('\n','') data[dt[0]]=dt[1:] keys = list(data) sum_file = open('sum.csv','w') sum_file.write('TaxId,sum_case,sum_ctrl,case_count, ctrl_count,case_mean,ctrl_mean,\n') for key in keys: sum_case = 0 sum_ctrl = 0 count_case = 0 count_ctrl = 0 mean_case = 0 mean_ctrl = 0 for i in range(counter): if label[i] == '0': sum_case=np.float64(sum_case)+np.float64(data[key][i]) count_case = count_case+1 mean_case = sum_case/count_case else: sum_ctrl = np.float64(sum_ctrl)+np.float64(data[key][i]) count_ctrl = count_ctrl+1 mean_ctrl = sum_ctrl/count_ctrl sum_file.write(key+','+str(np.float64((sum_case)))+',' +str(np.float64((sum_ctrl)))+','+str(np.float64((count_case))) +','+str(np.float64((count_ctrl)))+','+str(np.float64((mean_case))) +','+str(np.float64((mean_ctrl)))+'\n') sum_file.flush() sum_file.close() df = pd.read_csv('final_out_transposed.csv', header=[1,2], index_col=[0]) case = df.xs('0', axis=1, level=0).dropna() ctrl = df.xs('1', axis=1, level=0).dropna() (tt_val, p_ttest) = ttest_ind(case, ctrl, equal_var=False) print (tt_val) print (p_ttest) I am getting the error: ValueError: operands could not be broadcast together with shapes (92,) (95,) How can I handle this error. I cannot change my data. AI: The answer to this question would be : The objects created by the xs method of the Pandas DataFrame look like two-dimensional arrays. These must be flattened to look like one-dimensional arrays when passed to ttest_ind. The values attribute of the Pandas objects gives a numpy array, and the ravel() method flattens the array to one-dimension. 
It would go like : df = pd.read_csv('final_out_transposed.csv', header=[1,2], index_col=[0]) case = df.xs('0', axis=1, level=0).dropna() ctrl = df.xs('1', axis=1, level=0).dropna() (tt_val,p_ttest ) = ttest_ind(case.values.ravel(), ctrl.values.ravel(), equal_var=False) print (tt_val) print(p_ttest)
H: Clustering: How to cluster multiple CSV files that each represent a Steam user I am currently about to do a clustering analysis regarding Steam users' activity. I have thousands of CSVs, each representing a Steam user and his/her purchased games (with ID and genre). I am planning to use k-modes clustering because my data is categorical, and I have found a python library for k-modes. This is what a CSV file looks like, representing a random Steam user. I have researched a lot online, but it seems that many people have all their data in one CSV file, just like this example of stock exchange data. How could I read each file, represent each user as one data point, and then do clustering across all the data in the thousands of CSV files? AI: You're likely going to have to do a little data wrangling to get the data into a better format. I'm assuming each file has a varying number of rows, one for each game the user purchased? So, you could create one big matrix with the rows representing users and the columns representing games, i.e. an indicator matrix that maps users to the games they purchased. Then you could potentially add extra columns per user carrying additional insight, like the number/percentage of games purchased in each category (action, RPG, ...) or the total number of purchases. Having the data in that format may lend itself to a more useful application of clustering algorithms.
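A rough sketch of that wrangling step (the folder layout and column names such as 'game_id' are assumptions):

import glob
import os
import pandas as pd

frames = []
for path in glob.glob('steam_users/*.csv'):
    user_id = os.path.splitext(os.path.basename(path))[0]   # one CSV per user
    df = pd.read_csv(path)
    df['user'] = user_id
    frames.append(df)

purchases = pd.concat(frames, ignore_index=True)
purchases['owned'] = 1
# rows = users, columns = games, cell = 1 if the user purchased that game else 0
user_game = purchases.pivot_table(index='user', columns='game_id',
                                  values='owned', fill_value=0)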
H: How to train data by batch from disk? I am working on a convolutional neural network for image classification. The training dataset is too large to be loaded into my computer's memory (4 GB), and on top of that I also need to try some augmentation to balance the classes. I am using keras. I have looked into many examples but I have not found a solution to my problem. I am thinking about replacing the parameters of the model.fit call by passing a function or generator doing the 'batching'. I will design this function to import from disk and apply augmentation. I have no idea how to implement that. Any suggestions? AI: As you are working on image classification and would also like to implement some data augmentation, you can combine the two AND load the batches directly from a folder using the mighty ImageDataGenerator class. Have a look at the excellent documentation! I won't copy and paste the example from that link, but I can outline the steps that you go through: create the generator class: data_gen = ImageDataGenerator() If you want it to perform on-the-fly augmentation for you, that can be specified when creating the class: data_gen = ImageDataGenerator(samplewise_center=True, ...) If you use augmentation processes that require some statistics about the dataset, e.g. feature_wise normalisation (not sample-wise), you must prepare the generator by showing it some of your data: data_gen.fit(training_data). This fit method simply precomputes things like the mean and standard deviation, which is later used for normalisation. the generator goes into the model's fit_generator method, and we call the flow_from_directory method of the generator: model.fit_generator(data_gen.flow_from_directory('/path/to/folder/'), ...) You can also create a separate generator using ImageDataGenerator for your validation data, where you should then not apply the augmentation, so that validation tests are done on real data, giving you an accurate picture of the model's performance. In any case, these generators will theoretically run forever, generating batches from your folder. Therefore I recommend using a callback function from Keras to stop when a certain criterion is met. See the documentation for the EarlyStopping class. You could also do this manually, but Keras makes it very simple! If you want fine-grained control, you could do all of the above manually, loading enough samples from disk for a single batch, performing some augmentation and then running the model.train_on_batch() method. If you want to get into the details, you might be best first learning the Keras way, then progressing to your own detailed models combining Tensorflow as required. The two can be used together very nicely!
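Putting those steps together might look roughly like this (paths, image size and the model itself are placeholders, and the augmentation choices are illustrative):

from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)   # with augmentation
val_gen = ImageDataGenerator(rescale=1./255)                           # no augmentation for validation

model.fit_generator(
    train_gen.flow_from_directory('data/train', target_size=(64, 64),
                                  batch_size=32, class_mode='binary'),
    steps_per_epoch=200,
    epochs=50,
    validation_data=val_gen.flow_from_directory('data/validation', target_size=(64, 64),
                                                batch_size=32, class_mode='binary'),
    validation_steps=50,
    callbacks=[EarlyStopping(monitor='val_loss', patience=3)])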
H: How to import image data into python for keras? I'm new to CNNs, starting off with keras. I'm currently using ImageDataGenerator to import my train/validation folders (which each have 2 class subfolders for my binary classification task). Was wondering how can I import my train/validation files without using ImageDataGenerator? I'm aware that ImageDataGenerator is good for accuracy as it does some augmentation, but I want to compare the accuracy to a training set without any augmentations. Essentially I think I need to put all the images into an array, but not sure how to. Basically I want to know what is the normal way to import training/validation data for images, so I can compare what is the accuracy difference with/without imagedatagen. I know with normal NN tasks it's easy as you can just do pd.read_csv(). I'm currently importing like so: train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( 'data/train', target_size=(150, 150), batch_size=32, class_mode='binary') validation_generator = test_datagen.flow_from_directory( 'data/validation', target_size=(150, 150), batch_size=32, class_mode='binary') AI: The docs for ImageDataGenerator suggest that no augmentation is done by default. So you could instantiate it without any augmentation parameters and keep the rest of your code for handling your directory structure: train_datagen = ImageDataGenerator(rescale=1./255) You are also allowed to write your own custom data generator and pass it to model.fit_generator(). Here is a nice tutorial. Or if your data fits in memory you could write some simpler code possibly using keras.preprocessing.image.load_img to load all the images into an array and pass them to model.fit instead.
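A sketch of the load-into-memory approach mentioned above, using keras.preprocessing.image (the folder names and 150x150 size mirror the question; shuffling, validation handling and memory limits are left out, and `model` is assumed to exist):

import os
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def load_folder(folder, label, target_size=(150, 150)):
    images, labels = [], []
    for fname in os.listdir(folder):
        img = load_img(os.path.join(folder, fname), target_size=target_size)
        images.append(img_to_array(img) / 255.0)   # same rescaling as rescale=1./255
        labels.append(label)
    return images, labels

x0, y0 = load_folder('data/train/class_0', 0)
x1, y1 = load_folder('data/train/class_1', 1)
x_train = np.array(x0 + x1)
y_train = np.array(y0 + y1)

model.fit(x_train, y_train, batch_size=32, epochs=10)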
H: Xgboost interpretation: shouldn't cover, frequency, and gain be similar? I was surprised to see the results of my feature importance table from my xgboost model. Based on the tutorials that I've seen online, gain/cover/frequency seems to be somewhat similar (as I would expect because if a variable improves accuracy, shouldn't it increase in frequency as well?) but my numbers are drastically different. Am I perhaps doing something wrong or is my intuition wrong? Thank you in advance! Feature Gain Cover Frequency 1: Var1 0.21943765 0.02821822 0.009433962 2: Var2 0.18207910 0.05509272 0.066037736 3: Var3 0.10746529 0.22117710 0.216981132 4: Var4 0.10294292 0.05267401 0.018867925 5: Var5 0.06928732 0.10185434 0.141509434 6: Var6 0.05745753 0.05482397 0.047169811 AI: My layman's understanding of those metrics as follows: Gain = (some measure of) improvement in overall model accuracy by using the feature Frequency = how often the feature is used in the model. It's important to remember that the algorithm builds sequentially, so the two metrics are not always directly comparable / correlated. An example (2 scenarios): Var1 is relatively predictive of the response. It is included by the algorithm and its "Gain" is relatively high. Once its link to the response has been captured it might not be used again - e.g. there may be other features which are more predictive at later stages of modelling or all of Var1's link to the response may have been captured - and so its "Frequency" is low. Var1 is extremely predictive across the whole range of response values. We can expect that Var1 will have high "Gain". Now, since Var1 is so predictive it might be fitted repeatedly (each time using a different split) and so will also have a high "Frequency". In most cases, we prioritise accuracy and so will likely prioritise "Gain" over "Frequency", but if you're using the algorithm for feature selection then it may be a good idea to use a mixture of both to inform your decision, much like @bbennett36 suggested.
H: Visualizing item similarities I have an implicit dataset. It contains which user clicked which item. I'm doing collaborative filtering and finally I get the item similarities. So now I have data like: Item - SimilarItem - SimilarityValue A - C - 0.12 A - R - 0.42 A - Y - 0.34 A - J - 0.62 B - A - 0.16 B - Y - 0.83 C - J - 0.23 or Item -- Similarities A -- C,R,Y,J B -- A,Y C -- J,A,D As I know all the relations between the 40K items, can I turn this into a nice visualization showing that some of them cluster together in one place while others sit elsewhere? I can do this either with python or some javascript library. What should be the way of doing this? Can you show me some examples? AI: You could treat your results as an undirected graph with weighted edges. Your nodes are A, B, C, etc. and your edges are the connections between items, weighted by similarity value. Here is a Python function that could be a great starting point. It draws thicker connections for edges with a larger weight. I copied it and replaced the example data with your data points above: [...] node_list = ['A', 'B', 'C', 'D', 'J', 'R', 'Y'] [...] G.add_edge(node_list[0],node_list[2],weight=0.12) # A - C G.add_edge(node_list[0],node_list[5],weight=0.42) # A - R G.add_edge(node_list[0],node_list[6],weight=0.34) # A - Y G.add_edge(node_list[0],node_list[4],weight=0.62) # A - J G.add_edge(node_list[0],node_list[1],weight=0.16) # B - A G.add_edge(node_list[1],node_list[6],weight=0.83) # B - Y G.add_edge(node_list[2],node_list[4],weight=0.23) # C - J The result is this: Of course, if you have 40,000 items, that would probably require additional work to make the plot comprehensible. You could begin by ignoring all connections between nodes with a similarity value of, say, 0.01 or less.
H: What does it mean when we say most of the points in a hypercube are at the boundary? If I have a 50 dimensional hypercube. And I define it's boundary by $0<x_j<0.05$ or $0.95<x_j<1$ where $x_j$ is dimension of the hypercube. Then calculating the proportion of points on the boundary of the hypercube will be $0.995$. What does it mean? Does it mean that rest of the space is empty? If $99\%$ of the points are at the boundary then the points inside the cube must not be uniformly distributed? AI: Speaking of '$99\%$ of the points in a hypercube' is a bit misleading since a hypercube contains infinitely many points. Let's talk about volume instead. The volume of a hypercube is the product of its side lengths. For the 50-dimensional unit hypercube we get $$\text{Total volume} = \underbrace{1 \times 1 \times \dots \times 1}_{50 \text{ times}} = 1^{50} = 1.$$ Now let us exclude the boundaries of the hypercube and look at the 'interior' (I put this in quotation marks because the mathematical term interior has a very different meaning). We only keep the points $x = (x_1, x_2, \dots, x_{50})$ that satisfy $$ 0.05 < x_1 < 0.95 \,\text{ and }\, 0.05 < x_2 < 0.95 \,\text{ and }\, \dots \,\text{ and }\, 0.05 < x_{50} < 0.95. $$ What is the volume of this 'interior'? Well, the 'interior' is again a hypercube, and the length of each side is $0.9$ ($=0.95 - 0.05$ ... it helps to imagine this in two and three dimensions). So the volume is $$\text{Interior volume} = \underbrace{0.9 \times 0.9 \times \dots \times 0.9}_{50 \text{ times}} = 0.9^{50} \approx 0.005.$$ Conclude that the volume of the 'boundary' (defined as the unit hypercube without the 'interior') is $1 - 0.9^{50} \approx 0.995.$ This shows that $99.5\%$ of the volume of a 50-dimensional hypercube is concentrated on its 'boundary'. Follow-up: ignatius raised an interesting question on how this is connected to probability. Here is an example. Say you came up with a (machine learning) model that predicts housing prices based on 50 input parameters. All 50 input parameters are independent and uniformly distributed between $0$ and $1$. Let us say that your model works very well if none of the input parameters is extreme: As long as every input parameter stays between $0.05$ and $0.95$, your model predicts the housing price almost perfectly. But if one or more input parameters are extreme (smaller than $0.05$ or larger than $0.95$), the predictions of your model are absolutely terrible. Any given input parameter is extreme with a probability of only $10\%$. So clearly this is a good model, right? No! The probability that at least one of the $50$ parameters is extreme is $1 - 0.9^{50} \approx 0.995.$ So in $99.5\%$ of the cases, your model's prediction is terrible. Rule of thumb: In high dimensions, extreme observations are the rule and not the exception.
H: Can i use deep learning in my agriculture PHD? My PHD is about yield of soybeans and it is typical agriculture theme, but I am pretty good with programming and Python programming language and I have already some deep learning programs which I made by myself successfully. Problem is that model is not very easy to show on the paper. Can I show model just like a chart or graph? And maybe somebody already solved issue with transporting deep learning model file to "paper"? Is it normal to use deep learning in my agriculture PHD instead common middle age Linear Regression etc? Inputs and outputs is ok on paper but deep learning model is a little bit controversial in my scientific circles and I don't know how to present it so I'm asking maybe somebody already has successful experience by presenting deep learning model on the paper in science. AI: Please note that the following is my personal opinion; I still hope that you find it useful. If you cannot properly explain a concept on paper, then it should not be part of your PhD thesis. It is possible to implement Deep Learning algorithms without any conceptual knowledge. While this may be (somewhat) acceptable in the industry, it will get you into trouble in academia. I have no doubt that your algorithms are well-programmed and solve many issues that are challenging from a programmer's perspective. But that is not what the reviewers of your thesis are interested in. They are also not interested in the prediction accuracy of your algorithm. As scientists, they want to know why something happens; and to my knowledge, Deep Learning algorithms provide next to nothing on this question. And this is where 'middle age' linear regression has its strengths (by the way: a quick way to make an enemy is to tell him that the method he has been using all his life is too old to be useful). Yes, linear regression is unexciting, and it has many shortcomings. But it is extremely well understood. Most importantly, there are tons of books and papers about the connection between assumptions and valid interpretations: we know under which assumptions and in which sense the linear regression estimator is optimal; we know which assumptions lead to which kind of confidence intervals for the estimator; we know about robustness with respect to model misspecification; etc. As long as you cannot address these issues convincingly and on paper (That is: In what sense is my result optimal? How confident can I be about my result? How robust is my model?), you should not use Deep Learning in research as an alternative to classical statistical methods.
H: Possible to correct an actual cell state in LSTM via gradient? Why in LSTM we calculate gradient w.r.t weights, but not w.r.t the cell state? Is it theoretically possible to correct the contents of the cell state, and what would it result in? I understand that weights are like a "set of skills", so that network can respond correctly to the input, even gazillions of iterations later. The cell-state is an understanding of what's going on in the past, up to the start of the current minibatch. So why not to correct the value stored in the cell state? It would be very useful if we carry the cell state forward, between minibatches. https://stackoverflow.com/a/44183738/9007125] Generally it is recommended to reset state after each epoch, as the state may grow for too long and become unstable. However in my experience with small size datasets (20,000- 40,000 samples) resetting or not resetting the state after an epoch does not make much of a difference to the end result. For bigger datasets it may make a difference. AI: Is it theoretically possible to correct the contents of the cell state, and what would it result in? Yes it is. Using back propagation, it is possible to get the gradient of any value that affects a well-defined output. For training data, that includes all current cell outputs - in fact these are necessary to calculate as an interim step in order to get the gradients of the weights. Once you have the gradients of a cost or error function, then you can perform a step of gradient descent in order to discover a value that would result in a lower error for given training data. In usual training scenarios you do not alter neuron outputs after they have been calculated using feed-forward, because these are not parameters of the model. Typical reasons to alter neuron values (or even the input) are in order to view what the ideal state might be in a given scenario. If your state can be visualised through some decoder - maybe even another neural network - then this could allow you to see the difference between actual internal state, and a potentially better one. That could be useful in error analysis for example. So why not to correct the value stored in the cell state? That's because in usual training scenarios, you are creating a network that predicts an output value. You can calculate the right corrections for training data, but not when predicting, because the whole point of predicting is to estimate a label that you do not already have. As such, you want to alter your function parameters, and not interim values. It would be very useful if we carry the cell state forward, between minibatches. Only during training. In a prediction scenario you usually have no way of calculating the necessary gradients. What you don't want is to train a system that then requires using error values and gradients that you do not have in production. In some scenarios, such as an online system predicting next item $x_{t+1}$ in a sequence, where you could immediately train based on error after you observed the next item and before you continued the prediction sequence for $x_{t+2}$, you could possibly use the approach. I am not sure whether it would help performance, but in principle it could. If it did help, you'd have to compare the improvement versus other simpler changes such as different hyper-parameters on a network that didn't correct internal state using gradients. In summary, it is possible your idea would work quite well in an online system with near-immediate feedback. 
In that case you could think of a set of weights as being "rules to update a belief state from data", and the output of hidden layer neurons as being "a current belief state". When errors occur, it does appear to make sense to update both the rules that led to the error and the current belief that resulted from earlier faulty rules. It is perhaps worth an experiment or two. The main caveat is that the two update processes (for weights and LSTM layer state) would interact and/or adapt to each other, so it may not lead to measurably different performance than just adding more LSTM cells to the layer.
H: When are weights updated in CNN? In CNNs when do we update the kernel parameters using back propagation? Suppose I have batch size of 50 and training data of 1000. Do I back propagate after each batch has been presented to network or after each data sample? AI: Whenever you train the network using batch means that you have chosen to train using batch gradient descent. There are three variants for gradient descent algorithm: Gradient Descent Stochastic Gradient Descent Batch Gradient Descent The first one passes the whole data through the network and finds the error rate for all of them and finds the gradients with respect to all the data samples and updates the weights after passing the whole data-set. That means for each epoch, passing the whole data-set through the network, one update occurs. This update is accurate toward descending gradient. The second one, updates the weights after passing each data which means if your data sample has one thousand samples, one thousand updates will happen whilst the previous method updates the weights one time per the whole data-sample. This method is not accurate but is so much faster than the previous one. The last one tries to find a trade-off between the above approaches. You specify a batch size and you will update the weights after passing the data samples in each batch, means the gradients are calculated after passing each batch. Suppose you have one thousand data sample and you have specified a batch size with one hundred data sample. You will have 10 weight update for each epoch. This method is more accurate than the second approach and is more faster than the first approach. Do I back propagate after each batch has been presented to network or after each image? Your method is the last one. Consequently, after passing the entire batch, you would update the weights. Based on the comments of one of our friends, the above approaches are named as follows, respectively: [Batch] Gradient Descent (batch size = all training samples) True SGD (batch size = 1 - weights update for each training sample) Mini-batch SGD (batch size = m out of n training samples).
H: Multi-image superresolution using CNNs I'm trying to write a program that can take multiple low-resolution images as inputs and output a high-resolution image. My understanding is that for single-image superresolution, Convolutional Neural Networks work great. I can just take a network with just three convolution layers as described here. I can then train the network with a huge dataset of low-res images and their corresponding high-res images (with mean-squared error cost function), and it should theoretically work. If I want to achieve greater accuracy, though, I can gain more information about a potential high-resolution image by looking at multiple low-resolution images. So my question is: How would I modify this neural-network based algorithm to be able to take multiple images (of the same object) as input, and output a more accurate high-resolution image as a result? I would assume this is a lot more complicated, because the multiple images might be off by a fraction of a pixel, or taken from a marginally different angle. (Sorry if anything I said is incorrect; I know virtually nothing about this field, so any advice helps!) AI: I would assume this is a lot more complicated, because the multiple images might be off by a fraction of a pixel, or taken from a marginally different angle. I agree with your analysis that this problem presents different challenges than single-image super-resolution. In the single image case, we know exactly where everything should be in the output image, because it is just a higher resolution version of the input. So why not try and extend this idea to your case? Here is how you might do that: Concatenate each low-res image and a low-res version of the output image as each training and test example For prediction concatenate each low-res image and duplicate one of them to specify the desired orientation of the output So if you have n 3-channel h x w input images in each training example, your input shape should be (h, w, 3*(n+1)) This should give the network enough information to learn the correct transformation.
H: How does sklearn KNeighborsClassifier compute class probabilites? The KNeighborsClassifier has a method for predicting class probabilities. However, I cannot find any documentation describing how these probabilities are computed. Here is a toy example that returns class probabilites: from sklearn.neighbors import KNeighborsClassifier import numpy as np N = 100 np.random.seed(1) X = np.random.random((N,2)) Y = np.random.randint(2, size=N) model = KNeighborsClassifier(5, weights='distance') model.fit(X, Y) print(model.predict_proba([[0.5, 0.5]])) #>>> [[0.55486525 0.44513475]] AI: The class probabilities are the normalized weighted average of indicators for the k-nearest classes, weighted by the inverse distance. For example: Say we have 6 classes, and the 5 nearest examples to our test input have class labels 'F', 'B', 'D', 'A', and 'B', with distances 2, 3, 4, 5, and 6, respectively. Then the unnormalized class probabilities can by computed by: (1/2) * [0, 0, 0, 0, 0, 1] + (1/3) * [0, 1, 0, 0, 0, 0] + (1/4) * [0, 0, 0, 1, 0, 0] + (1/5) * [1, 0, 0, 0, 0, 0] + (1/6) * [0, 1, 0, 0, 0, 0] = [1/5 ,1/2, 0, 1/4, 0, 1/2] And after normalizing we get: [0.13793103, 0.34482758, 0.0, 0.17241379, 0.0, 0.34482758] If you choose weights='uniform' then it becomes a simple average, which can be computed by multiplying each indicator by (1/k) and summing, with no need for additional normalization.
H: Right Way to Input Text Data in Keras Auto Encoder I have several thousand text documents and I am currently working on obtaining the latent feature representations of words and generate sentences using variational auto encoder. The main obstacle I am facing is “how to input such large textual vectors into VAE (or say even in a simple auto encoder)". I know there are other techniques such as adding layers in the form of LSTMs, but I want to keep it simple and keep the model as just VAE. One way is to use one-hot-encoded vectors or bag of words, but again, this is not the most efficient way since for a vocabulary of 100K unique words, each document will have a 100K input vector. Additionally, we loose the sentence structure. For small datasets however, there is no problem in training an autoencoder using this type of input. Another method is to use word embeddings using a pre-trained Word2Vec. This is what I have been trying to do and the python notebook which can be DOWNLOADED HERE uses this technique. The code is too long and has multiple pre-processing steps, so I am unable to embed this code in my post. The following are my questions: Now, each sentence (or document) will have different number of words. So the number of word embeddings for each document will have different lengths. Unfortunately, keras requires all the inputs to be of same length ( if I am right). So, how to handle such cases of varying input lengths?. Currently, in the fifth block of the python notebook, you can see the statement data = [x for x in vect if len(x) == 10]. That is, I only consider documents that have exactly 10 words to overcome this problem. Ofcourse, this is not practical. Can we pad 0 vectors? The VAE example shown in Keras blog uses MNIST data as exmple. Therefore, they use a sigmoid activation in the final reconstruction layer; consequently, “binary_crossentropy” as the loss function (along with the KL divergence). Since my inputs are word embeddings, where there are even negative values in the embeded vectors, I believe I should not use the activation as “sigmoid” in the final reconstruction layer. Is that right?. Additionally, I have also changed the loss as “mean_squared_error” instead of “binary_crossentropy” in the attached code. It will be great if someone who has worked on VAE and autoencoders for text data can provide their inputs regarding the questions mentioned above. Note: the attached code is a simplified version of the code in THIS LINK AI: Here's my take on your questions. Yes, you can zero-pad vectors. However, I would strongly recommend you use an LSTM as part of your encoder if you're using sentences as input. An LSTM can take variable-length input - that's part of its charm! The loss function depends entirely on your goal. It sounds like you're trying to create a sequence-to-sequence VAE. I would go about this by putting a softmax function on your output layer, to generate the next most probable word in the sentence. So, I would use categorical_crossentropy as the loss function, and I would use softmax instead of sigmoid, because you're choosing the next most likely word from a set vocabulary (this is not a binary issue but rather a categorical issue). Hope that helps!
H: Parameter tuning for machine learning algorithms When it comes to the topic of tuning parameters, most of the time you read grid search. But if you have 6 parameters, for which you want to test 10 variants, you get to 10^6 = 1000000 runs. Which in my case would be several months of processing time. That's why I was looking for an alternative. On the Kaggle website, I found a tutorial that uses a different approach. It almost works like this: 1) Test all 6 parameter individually with the other 5 parameters as default value and plot the results 2) Change the default values for alle 6 parameters to the best value of the associated plot 3) Test all 6 parameter individually with the other 5 parameters as last best value and plot the results 4) Repeat step 2 and 3 until the results does not change anymore This approach has the advantage of requiring much fewer runs. Is this a scientifically accepted procedure? And does this approach have a name in the literature? AI: In general, your approach will get stuck in local minima. This is why it is not scientifically accepted. (Notice that this may be different in very special cases, in particular if the performance of the algorithm is a strictly convex function of all input parameters). To see how the approach fails, suppose your machine learning algorithm has two parameters, $x$ and $y$, which can be either $0$ or $1$. The default values are $x=1$ and $y=1.$ The performance of your machine learning algorithm is $f$ and should be as high as possible. Assume the following performance levels $f(x,y)$: | x=0 | x=1 ----|-----|----- y=0 | 0.9 | 0.2 y=1 | 0.1 | 0.3 Your approach would do the following: First, choose the default value $x=1$ and compute $f(x=1, y=0) = 0.2$ and $f(x=1, y=1) = 0.3$. Second, choose the default value $y=1$ and compute $f(x=0,y=1)=0.1$ and $f(x=1, y=1) = 0.3$. Change the default values to the best value. In this case, this requires no change since $x=1$ and $y=1$ are the best values, respectively. The result did not change. Report $(x=1, y=1)$ as the best parameter combination. But the global performance maximum occurs at $(x=0, y=0).$
H: compare two lists I have a SQL (MS SQL Server) database of ~30 million companies. For example: +-----------------------+----------------+-----------+ | company_name | country | ID_number | +-----------------------+----------------+-----------+ | Mercedes Benz Limited | Germany | 12345 | | Apple Corporation | United States | 67899 | | Aunt Mary Butcher | United Kingdom | 56789 | +-----------------------+----------------+-----------+ Then, I have another list of companies and I would like to assign ID_number based on approximate company name match. +--------------------+----------------+ | company | country | +--------------------+----------------+ | Mercedes Benz Ltd. | Germany | | Apple Corp. | United States | | Butcher Aunt Mary | United Kingdom | | Volkswagen Gmbh | Germany | +--------------------+----------------+ My goal obviously is to limit the number of comparisons I have to make. So I approach it the following way: Normalize the names - remove Ltd. Corp. etc. Filter by country Filter by name length (I assume that strings of very different length can't be very similar) Filter by first n letters Calculate the similarity with Levenshtein or Jaccard agains all filtered companies Select the best match So for Mercedes Benz I would only take companies from Germany, that start with ME and have length of Mercedes Benz +-7 letters. However for Aunt Mary Butcher this wouldn't work because Butcher Aunt Mary doesn't start with AU. Apart from this being very naive method it's also very slow. I could put more hardware on it but I don't think the method is efficient itself. How do you usually tackle problems like that? AI: FYI This isn't really a data science question, it's really related more to SQL and should be asked on those forums. But I'm going to try and help you anyways. This is something where you need to do matching by pronunciation using something like SOUNDEX in MySQL (I'm not aware of other RDBMS that offer this). Overall, it's really something that requires an iterative approach where the first pass you're doing straight matches, the next pass you're doing matches with wildcards and then the next pass you use SOUNDEX. In theory, that should leave you with very few (if any) unmatched items for you to do manually.
H: Basic Time Series Classification Examples I've been using matlab until now to classify a large number of labelled time series I have. This has been relatively successful but I'd like to try using Tensorflow to apply a Deep Learning paradigm instead. I'm a complete noob at this and so I'm a bit overwhelmed with the literature as I'm struggling to generalise examples such as the 0-9 digits classification examples to my problem. My current code reads in the 1064 time series (length 3125), reads in the labels, converts the labels to onehot_encoding and extracts training and validation sets. #Not all code included def read_data(): #Get labels from the labels.txt file labels = pd.read_csv('labels.txt', header = None) labels = labels.values data = pd.read_csv('ts.txt',header = None) return data, labels data, labels = read_data() # Default split is 75% train 25% test ts_train, ts_test, labels_train, labels_test = train_test_split(data,labels) onehot_encoder = OneHotEncoder(sparse=False) labels_train = onehot_encoder.fit_transform(labels_train) labels_test = onehot_encoder.fit_transform(labels_test) #Construct the graph batch_size = 100 seq_len = 3125 learning_rate = 0.0001 epochs = 1000 But now I need to construct the graph and I'm a bit lost. If someone could link me some useful examples I'd be very grateful thank you. AI: What you need to learn are RNNs and LSTMs. Here are some links I used to learn these: CS231n RNN+LSTM lecture Understanding LSTMs Also I would suggest you to use Keras, a Tensorflow API. In my experience, it makes working with RNNs and LSTMs way easier, if you're a beginner. I found these articles which seemed related to your problem: Time series classification project by naveen sai on github https://aboveintelligent.com/time-series-analysis-using-recurrent-neural-networks-lstm-33817fa4c47a https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ Keywords to search: Time series classification tensorflow keras
H: Neural Network with Connections to all forward layers In classical neural nets, we have that each layer is connected only with the following layer. What if we relaxed this constraint and allowed it to be connected to any or all subsequent layers? Has this architecture been explored? It seems to be backprop would still work on this. At the very least, this type of network could be emulated by artificially creating identity neurons at each layer that bring in the value of every earlier node, where the incoming weights to those layers are fixed at 1. AI: What you describe has been explored in Deep Residual Neural Networks. A residual block will combine two or more blocks from a standard architecture like a CNN with a skip connection that adds the input to the first block to the output of the last block. The intuition is that deep networks have a harder and harder time learning the identity function between layers, which has been proven to be useful especially in image recognition tasks. Residual connections also mitigate the problem of vanishing gradients. Residual connections help solve the "degradation" problem, where deeper architectures lead to reduced accuracy. For example GoogLeNet won ILSVRC in 2014 with a 22-layer CNN, but in 2015 Microsoft ResNet won with a 152-layer Res Net.
H: K-Means vs hierarchical clustering When hierarchical clustering is preferred over k means clustering? AI: I would say hierarchical clustering is usually preferable, as it is both more flexible and has fewer hidden assumptions about the distribution of the underlying data. With k-Means clustering, you need to have a sense ahead-of-time what your desired number of clusters is (this is the 'k' value). Also, k-means will often give unintuitive results if (a) your data is not well-separated into sphere-like clusters, (b) you pick a 'k' not well-suited to the shape of your data, i.e. you pick a value too high or too low, or (c) you have weird initial values for your cluster centroids (one strategy is to run a bunch of k-means algorithms with random starting centroids and take some common clustering result as the final result). In contrast, hierarchical clustering has fewer assumptions about the distribution of your data - the only requirement (which k-means also shares) is that a distance can be calculated each pair of data points. Hierarchical clustering typically 'joins' nearby points into a cluster, and then successively adds nearby points to the nearest group. You end up with a 'dendrogram', or a sort of connectivity plot. You can use that plot to decide after the fact of how many clusters your data has, by cutting the dendrogram at different heights. Of course, if you need to pre-decide how many clusters you want (based on some sort of business need) you can do that too. Hierarchical clustering can be more computationally expensive but usually produces more intuitive results.
H: How were auto encoders used to intialize deep neural networks? In a document on deep learning about auto encoders, it is said that these networks were used back from 2006 to 2010 for deep neural networks initialization. Can somebody explain how this was done? AI: There were a few different techniques. One popular one was stacked autoencoders, where each layer was trained separately. Essentially this was done by progressively growing the autoencoder, two layers at a time (one encode layer, plus equivalent decode layer), followed by complete training at each step of growth. If learning from a fixed training set, you could store the encoded representation of the whole dataset so far as input into next stage of training, saving some computation when building up the layers. After training each encoder layer separately you could use the weights of the encoder section of the autoencoder as the starting weights of the deep NN. Intuitively this made sense as you would have a representation of the input that you knew could be used to reconstruct it, and that typically was compressed, so should in theory have extracted salient details from the training data population. On top of these pre-trained layers, you may add one or two new layers that implemented whatever classification or regression task that you needed the final NN to perform. Then you would train with the labelled data - this is similar to fine-tuning networks and transfer learning that is still done nowadays. The results from this pre-training stage could be worthwhile. It is still a valid technique if you have a lot of unlabelled data, and a relatively small amount of labelled data. However, the introduction of ReLU and careful controls on weight initialisation meant that deep networks could often be trained more directly. Recent additions such as skip connections and batch normalisation have further improved more direct training approaches. Here is an example with code, using TensorFlow.
H: Training Neural network classifier using string inputs My thesis topic is about building a (deep) neural-network classifier to classify the type of a place. I am given both labels and some inputs in string type. So for example the label "Supermarket" might have a feature like "Food". How should I feed my string input features to the neural network? In other words, is there any efficient way to substitute those string inputs with numerical values knowing that there are A LOT of them? AI: You always avoid feeding direct strings into neural networks. This thread here explains why you should avoid doing this : Neural Network parse string data? Once you convert the strings you have into vectors or any other form of numerical representation and encoding your labels as categorical, it will solve the problem you have at hand. If you need me to elaborate more on this, I would be more than happy to do so.
H: Is there an R build tool like Maven? Is there an R build tool (like Maven or Gradle for Java) to get the dependencies and package an R project? AI: The packrat package is what you're looking for. It bypasses R's native packages and allows you to build and deploy a bundle of packages and dependencies. However, it does not do Maven-style dynamic dependency resolution; you need to specify a list of packages that you believe will work together, then the bundle is installed/deployed as a single unit.
H: Keras LSTM with 1D time series I'm learning how to use Keras and I've had reasonable success with my labelled dataset using the examples on Chollet's Deep Learning for Python. The data set is ~1000 Time Series with length 3125 with 3 potential classes. I'd like to go beyond the basic Dense layers which give me about 70% prediction rate and the book goes on to discuss LSTM and RNN layers. All the examples seem to use datasets with multiple features for each timeseries and I'm struggling to work out how to implement my data as a result. If for example, I have 1000x3125 Time Series, how do I feed that into something like the SimpleRNN or LSTM layer? Am I missing some fundamental knowledge of what these layers do? Current code: import pandas as pd import numpy as np import os from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM, Dropout, SimpleRNN, Embedding, Reshape from keras.utils import to_categorical from keras import regularizers from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt def readData(): # Get labels from the labels.txt file labels = pd.read_csv('labels.txt', header = None) labels = labels.values labels = labels-1 print('One Hot Encoding Data...') labels = to_categorical(labels) data = pd.read_csv('ts.txt', header = None) return data, labels print('Reading data...') data, labels = readData() print('Splitting Data') data_train, data_test, labels_train, labels_test = train_test_split(data, labels) print('Building Model...') #Create model model = Sequential() ## LSTM / RNN goes here ## model.add(Dense(3, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print('Training NN...') history = model.fit(data_train, labels_train, epochs=1000, batch_size=50, validation_split=0.25,verbose=2) results = model.evaluate(data_test, labels_test) predictions = model.predict(data_test) print(predictions[0].shape) print(np.sum(predictions[0])) print(np.argmax(predictions[0])) print(results) acc = history.history['acc'] val_acc = history.history['val_acc'] epochs = range(1, len(acc) + 1) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and Validation Accuracy') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() AI: LSTM layers require data of a different shape. From your description, I understand the starting dataset to have 3125 rows and 1000 columns, where each row is one time-step. The target variable should then have 3125 rows and 1 column, where each value can be one of three possible values. So it sounds like you're doing a classification problem. To check this in code, I would do: >>> X.shape (3125, 1000) >>> y.shape (1000,) The LSTM class requires each single sample to consist of a 'block' of time. Let's say you want to have a block of 100 time-steps. This means X[0:100] is a single input sample, which corresponds to the target variable at y[100]. this means your window size (a.k.a number of time-steps or number of lags) is equal to 100. As stated above, you have 3125 samples, so N = 3125. To form the first block, we unfortunately have to discard the first 100 samples of y, as we cannot form an entire block of 100 from the available data (we would end up needing the data points before X[0]). Given all this, an LSTM requires you to deliver batches of shape (N - window_size, window_size, num_features), which translates into (3125 - 100, 100, 1000) == (3025, 100, 1000). 
Creating these time-blocks is a bit of a hassle, but create a good function once, then save it :) There is more work to be done, perhaps look at more in depth examples of my explanation above here... or have a read of the LSTM documentation, (or better still, the source code!). The final model would then be simple enough (based on your code): #Create model model = Sequential() model.add(LSTM(units=32, activation='relu', input_shape=(100, 1000)) # the batch size is neglected! model.add(Dense(3, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) Have a look at the documentation regarding input shape for the Sequential model. It basically says that we don't need to specify the number of batches within input_shape. It can be done with e.g. batch_size=50, if you require it to be a fixed number. I know the input_shape argument is not in the documentation for LSTM, but the class itself inherits from RNN, which in turn inherits from Layer - so it will be able to use the info you provide. One last tip: if you plan on adding several LSTM layers ('stacking' them), then you shall need to add one more argument to all but the last LSTM, namely, the return_sequences=True.
H: Preprocess list data I got question about preparation data for my ML algorithm. Raw data has format similar to: { "finances": [ { "assets": 1230.39, "investments": 3245.39, "netProfit": 8765.45, "year": 2017 }, { "assets": 111.11, "investments": 222.22, "netProfit": 333.33, "year": 2016 }, { "assets": 1111.11, "investments": 2222.22, "netProfit": 3333.33, "year": 2015 } ], "someValue": 123.45, "title": "Hello!" } And I am wondering what is best way to pass data about finances to my algorithm (Number of years can differ from 0 to 8). I was thinking about making every parameter for every year new attribute, but this would lead to have many not null values for some cases, because not every record has 8 years of financial history. My question is: What is best way to handle such data in Input data (In fact every of finances elements would have more values in it (around 10-15). AI: The ultimate end goal of your modeling is going to affect the way you want to format your data. It's a good practice whenever you start a machine learning project to ask yourself, what is the precise question you want to answer, because whatever model you generate, it's only going to make sense if used in the context of the question asked. If in your case you want to predict loan approvals, then first you need to check whether that information is even present in the data you have. The JSON you have just shows historical data, but do you know the outcome for each data point? Is that another feature, maybe recorded in the "someValue", "title" area of the file? Without it, you really can't do anything. If you can get that information, then it's perfectly fine to generate a variable for each year: assets_year1, assets_year2, ... assets_year8, investments_year1, ... etc. True, year8 might have a lot of Null values, but that's not necessarily bad. For example, most models working with text data consist of really sparse training matrices, yet they do very well in practice. Depending on the algorithm, it might weed out those variables anyways. With Null values, you just have to try imputing the Null values differently, and you can also create an indicator variable for whether the column has missing values, and see how different settings change performance. To take it a step further, you can also create additional features to encapsulate the nuances that are occurring over the years. Create features like the historical average, average change per year, total number of years of history, etc. Creative feature engineering is the hardest part, but can lead to big changes in performance.
H: Can PCA be applied to reduce dimensionality of only a subset of features? Lets say I have a feature set of f0 to f1000. I am thinking of applying PCA on f500 to f1000 reducing their dimensionality. Can I combine this reduced set with the features f0 to f499 as the feature space for training a learning algorithm? AI: Yes, absolutely. Simply split your data into two sets feature-wise, apply PCA to one of them, and then stick them back together again. How to actually perform this will vary depending on your programming language/frameworks, but it is trivially easy in python + pandas, for example.
H: Decision trees for Rstudio v3.3 Anyone know the best packages to build a decision tree in Rstudio v3.3? I want to look at some data-driven segmentation for my data. I was thinking of doing chaid analysis (mainly because this is why I've done in the past). I've looked around myself, but can't find the best package for this version! AI: The "best" package depends on your goals and data really. A few tree / forest packages that I've come across: randomForest - an implementation of the original algorithm ranger - a flexible implementation designed to run on high-dimensional data (my personal favourite forest package) party's ctree / cforest functions - conditional inference trees / forests rpart Some of these packages focus on forests rather than trees; you can coerce the forest algorithm into building a tree by flexing appropriate parameters.
H: Can the number of epochs influence overfitting? I am using a convolution neural network ,CNN. At a specific epoch, I only save the best CNN model weights based on improved validation accuracy over previous epochs. Does increasing the number of epochs also increase over-fitting for CNNs and deep learning in general? AI: Yes, it may. In machine-learning there is an approach called early stop. In that approach you plot the error rate on training and validation data. The horizontal axis is the number of epochs and the vertical axis is the error rate. You should stop training when the error rate of validation data is minimum. Consequently if you increase the number of epochs, you will have an over-fitted model. In deep-learning era, it is not so much customary to have early stop. There are different reasons for that but one of them is that deep-learning approaches need so much data and plotting the mentioned graph would be so much wavy because these approach use stochastic-gradient-like optimizations. In deep-learning again you may have an over-fitted model if you train so much on the training data. To deal with this problem, another approaches are used for avoiding the problem. Adding noise to different parts of models, like drop out or somehow batch normalization with a moderated batch size, help these learning algorithms not to over-fit even after so many epochs. In general too many epochs may cause your model to over-fit the training data. It means that your model does not learn the data, it memorizes the data. You have to find the accuracy of validation data for each epoch or maybe iteration to investigate whether it over-fits or not.
H: Decision tree not using all features from training dataset I have built CART model using sklearn. I'm having total 6 features in training dataset and passing all of them in fit function. I've tested both criteria Gini and entropy. But whenever I plot tree using graphviz, the tree uses only 3 features in case of Gini and 4 features in case of entropy. I've also implemented CART from scratch for cross check purpose and still, Gini uses 3 features and entropy uses 4 features. Everything else is working fine. I'm getting the result as expected in test dataset with accuracy 97%. I just want to confirm that, is this normal? Does cart chooses best features only and ignore other features to avoid the wrong classification? Also, my dataset is quite small in size, only 220 records. AI: When a feature is not that informative of your target, the algorithm can choose not to use it. This can be for two reasons: All the information is already in the other features, meaning it adds nothing to include it There is some regularization going on, meaning your algorithm punishes complexity if it doesn't add enough It can also be that the regularization is very explicit, meaning there is a maximum depth set as hyperparameter. In this case the other features are more informative and useful for your task.
H: Multi-dimentional and multivariate Time-Series forecast (RNN/LSTM) Keras I have been trying to understand how to represent and shape data to make a multidimentional and multivariate time series forecast using Keras (or TensorFlow) but I am still very unclear after reading many blog posts/tutorials/documentation about how to present the data in the correct shape (most examples being of slightly less My Dataset: several cities for which I have info about say temperature, car traffic, humidity for say the last 2 years (one record for each day) What I want to do: I'd like to forecast for each city the temperatures I can expect for the next year using a possibly lagged version of temperature, car traffic and humidity (of course there would be several more features but this is just an example for thought). What I am confused about: If I have 2 cities, for which I recorded 3 features for 365 days. How should I shape my input so that the model can output a forecast for 365 days for these two cities (i.e. 2 time series of temperatures for 365 days)? Intuitively the tensor shape would be (?, 365, 3) for 365 days and 3 features. But I'm not sure what to stick into the first dimension and, most importantly, I would be surprised if it had to be for the number of cities. But at the same time, I have no idea how to specify into the model that it has to understand the dimensions properly. Any pointers will be helpful. I'm pretty familiar with the rest of the problem (i.e. how you build a network in Keras etc since I have done this for other neural networks but more specifically how best to encode the sequence for the desired input.) Oh and also, I guess I could train and predict for each city independently, but I'm sure everyone will agree there are probably things to be learned that are not particular to any city but that can only be seen if considering several of them, hence why I think it is important to encode it in the model. AI: The input shape for an LSTM must be (num_samples, num_time_steps, num_features). In your example case, combining both cities as input, num_features will be 2x3=6. If you lump all your 365 time steps into one sample, then the first dimension will be 1 - one single sample! You can also do sanity check by using the total number of data points. You have 2 cities, each with 365 time-steps and 3 features: 2x365x3= 2190 . This is obviously the same as 1x365x6 (as I said above) - so it would be a possibility (Keras will run) - but it obviously won't learn to generalise at all, only giving it one sample. Have a look at this relevant question, which I recently answered. There I speak a little about using a rolling window (check the comments of the answer for more info). That will buy you more samples if you need them. If you want to train a single model with data for both cities as input, then making predictions for both cities at each time-step is as simple as defining a final Dense layer, which outputs 2 units. Your validation/test data must then of course contain a tuple of (city1, city2). A perhaps more sophisticated way to approach this would be to create datasets on a single-city basis, then train several sub-models on each city individually (say for 5 layers), then Merge/Concatenate them and put several further layers on top. This will mean you are combining the learnt features of each city, which are in turn being combined to a higher level of abstraction. Here is the first image I got from a search engine, which sketches the idea.
H: Why does GridSearchCV (sklearn) change the value of n_samples? I thought n_samples is the number of training examples. But when using GridSearchCV, n_samples becomes 32 rather than 50. Error when using GridSearchCV: Expected n_neighbors <= n_samples, but n_samples = 32, n_neighbors = 50 Training examples: print(X_train.shape[0]) => 50 print(len(y_train)) => 50 This works: neigh = KNeighborsClassifier(n_neighbors=50) neigh.fit(X_train, y_train) result = neigh.predict(X_test) This fails: from sklearn.model_selection import GridSearchCV grid_params = { "n_neighbors" : [50] } g = GridSearchCV(KNeighborsClassifier(), grid_params) g.fit(X_train, y_train) I'm confused why n_samples becomes 32 when using GridSearchCV. AI: The CV stands for CrossValidation, meaning it will split up your training set in a number of folds (in this case 3), train on n-1 of those folds and test on the remaining one. This is why your training is now done on 32 instead of 50 samples. Crossvalidation is useful for estimating how well your model (including specific hyperparameters) does on unseen data.
H: Unable to Use The K-Fold Validation Sklearn Python I have an dataset. I am unable to use the K-Fold Validation. I am getting the error raised: ValueError("{0} is not supported".format(y_type)) ValueError: continuous is not supported . I do not want to do encoding to int, since it may affect the data, and also I want to understand why K-fold is not working. Below is my python code. import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import RandomForestRegressor from sklearn import cross_validation, metrics from sklearn.cross_validation import train_test_split from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn import svm from sklearn import preprocessing - `List item` url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00242/ENB2012_data.xlsx" names=['Relative Compactness','Surface Area','Wall Area','Roof Area','Overall Height','Orientation','Glazing Area','Glazing Area Distribution','Heating Load','Cooling Load'] df = pd.read_excel(url,names=names) #Feature selection train=df.sample(frac=0.8,random_state=150) test=df.drop(train.index) #save the original values in a dataframe so we can compare later test_loads=test[["Cooling Load"]] #Create 2 lists of response values to train our model Y1=np.array(train['Heating Load']) Y2=np.array(train['Cooling Load']) #Select the features train_corr=train[['Overall Height','Relative Compactness','Roof Area','Surface Area']] test_corr=test[['Overall Height','Relative Compactness','Roof Area','Surface Area']] seed = 7 scoring = 'accuracy' X_train,X_test,y_train,y_test=cross_validation.train_test_split(train_corr,Y1,test_size=0.2) kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(RandomForestRegressor(), X_train, y_train, cv=kfold, scoring=scoring) print (cv_results.mean()) AI: Based on the answer here, Since you are doing a classification task, you should be using the metric R-squared (co-effecient of determination) instead of accuracy score (accuracy score is used for classification purposes). You should use something like score for evaluation because your task is regression.
H: Feature importance parameter in machine learning models like Naive Bayes Sorry for vague heading for the question. My question is that, is there any way to compare features (or attributes) used in machine learning algorithm? I have used Naive Bayesian classifier for binary classification which consists of total 6 features. I want to compare this features with one another and also list out features in priority order of their significance in models(how important that feature is). For example we use information gain or gini index to decide how effective feature is in CART. How can I compare this in naive bayes? I have read about confusion matrix and f measure but they are good at defining goodness of models and not features. AI: Since Naive Bayes assumes independence and outputs class probabilities most feature importance criteria are not a direct fit. The feature importance should be no different from the skewness of the feature distribution in the set: You could try to directly compare the probability of the features given the classes (implemented in sklearn for instance), the variability of those probabilities with respect to the classes should express the importance of those features.
H: Is a 100% model accuracy on out-of-sample data overfitting? I have just completed the machine learning for R course on cognitiveclass.ai and have begun experimenting with randomforests. I have made a model by using the "randomForest" library in R. The model classifies by two classes, good, and bad. I know that when a model is overfit, it performs well on data from its own trainingset but badly on out-of-sample data. To train and test my model I have shuffled and split the complete dataset into 70% for training and 30% for testing. My question: I am getting a 100% accuracy out of the prediction done on the testing set. Is this bad? It seems too good to be true. The objective is waveform recognition on four on each other depending waveforms. The features of the dataset are the cost results of Dynamic Time Warping analysis of waveforms with their target waveform. AI: High validation scores like accuracy generally mean that you are not overfitting, however it should lead to caution and may indicate something went wrong. It could also mean that the problem is not too difficult and that your model truly performs well. Two things that could go wrong: You didn't split the data properly and the validation data also occured in your training data, meaning it does indicate overfitting because you are not measuring generalization anymore You use some feature engineering to create additional features and you might have introduced some target leakage, where your rows are using information from it's current target, not just from others in your training set
H: Tips & Tricks on training DCGAN on small dataset I have made a DCGAN which I am trying to train on custom dataset of only 1200 images. I have tried to gather more, but even gathering these 1200 was hard enough. If you are wondering I used Google Chromes extension "Fakun Batch Download Image" to gather my dataset. TRAINING DETAILS: In training procedure I am simultaneously updating parameters of both, Generator, and Discriminator network. I've read that it works much better then training only one player ( Discriminator ) for K steps and then other ( Generator ) for one. QUESTION: Should I perform maybe some kind of transformation on all of those images and then merge transformed images with the initial ones, or something similar? AI: Extending a small dataset comprised of images A deep learning algorithm will learn a mapping function from your input space to your outputs. The variations in your input images will be learned within this function. Thus, you will want to consider this fact when you augment your dataset. The distribution of your input features should be concise as to what you plan to model. You are thus left balancing between adding data and adding variability to your input space. For example, it might not be worth it to rotate by 180 degrees when trying to generate Dragon Ball characters. Their heads should not be where their feet are. You would want your network to understand that the bottom of the image should contain some strange space boots, and the top exotic haircuts or bald heads. Augmenting images Here are some useful transformations that you can use to get more data Apply transformations (rotations, translations) Mirror the image Add distortions Change zoom factor Add blurring thus to better generalize the input data Invert colors (skew, add brightness, etc.) In Keras In Keras you can use the ImageDataGenerator functions. import numpy as np from keras.datasets import mnist from keras.preprocessing.image import ImageDataGenerator import matplotlib.pyplot as plt %matplotlib inline # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.astype('float32') # set up your data generator datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rotation_range=60, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, vertical_flip = True) # Fit the generator using your data datagen.fit(X_train.reshape((len(X_train), 28, 28, 1))) # let's look at some generated images image = X_train[5] plt.figure(figsize=(12,12)) plt.subplot(4, 4, 1) plt.imshow(image.reshape((28,28)), cmap='gray') for j in range(15): augmented = datagen.random_transform(image.reshape((28,28,1))) plt.subplot(4, 4, j+2) plt.imshow(augmented.reshape((28,28)), cmap='gray') plt.tight_layout() plt.show()
H: What is the allowable limit of oversampling? Suppose I have 2 classes. One class has 16 samples and the other class has 435 samples. Is it justified to oversample the class with 16 sample to have a 435 number of samples? Or is it better to undersample the class with 435 samples? If so, what should be the number of samples after the undersampling is carried out? AI: This depends on the nature of your data. If you can effectively simulate 435 samples using any given oversampling methods such as SMOTE or ADASYN for instance, then I would say oversampling would be better. Because it would provide data for various scenarios. But if exact replication is not possible and if replicated there might be an inherent problem with the model then you should choose to undersample. A good example for undersampling would be that for example if you're conducting a scientific experiment and the data you have are limited to a few scenarios and the other data is yet to be fully observed, you would choose to undersample. Oversampling would be the reverse case, where you can effectively simulate data for various classes based on some parameter and the generated data can mimic actual data and scenarios, then you should oversample.
H: Should we apply normalization to test data as well? I am doing a project on an author identification problem. I applied the tf-idf normalization to train data and then trained an SVM on that data. Now when using the classifier, should I normalize test data as well. I feel that the basic aim of normalization is to make the learning algorithm give more weight to more important features while learning. So once it has been trained, it already knows which features are important and which are not. So is there any need to apply normalization to test data as well? I am new to this field. So please ignore if the question appears silly? AI: Yes you need to apply normalisation to test data, if your algorithm works with or needs normalised training data*. That is because your model works on the representation given by its input vectors. The scale of those numbers is part of the representation. This is a bit like converting between feet and metres . . . a model or formula would work with just one type of unit normally. Not only do you need normalisation, but you should apply the exact same scaling as for your training data. That means storing the scale and offset used with your training data, and using that again. A common beginner mistake is to separately normalise your train and test data. In Python and SKLearn, you might normalise your input/X values using the Standard Scaler like this: scaler = StandardScaler() train_X = scaler.fit_transform( train_X ) test_X = scaler.transform( test_X ) Note how the conversion of train_X using a function which fits (figures out the params) then normalises. Whilst the test_X conversion just transforms, using the same params that it learned from the train data. The tf-idf normalisation you are applying should work similarly, as it learns some parameters from the data set as a whole (frequency of words in all documents), as well as using ratios found in each document. * Some algorithms (such as those based on decision trees) do not need normalised inputs, and can cope with features that have different inherent scales.
H: Multi-class classification v.s. Binary classification A training set has five classes including: "label-A", "label-B", "label-C", "label-D", "others" But the problem is much simpler - it is to determine whether each input belongs to "label-ABCD" or "others". In this case, there are two solutions to solve this problem in my mind. Solution 1: Train a 5-classes classifier, when the classifier predicts the input as "label-A" or "label-B" or "label-C" or "label-D", we relabel it as "label-ABCD". Solution 2: Train a 2-classes classifier, we relabel the data as "label-ABCD" which is labeled as "label-A" or "label-B" or "label-C" or "label-D". And then it becomes a binary-classification problem. My questions are: Which way can the model get a better performance in "theorem"? In real case, these two cases get almost the same performance by a CNN classification model, and I am wondering if I adopt a weaker classifier like C4.5, Naive Bayes, SVM...which method will win? Thanks!! AI: The greater the number of output nodes the higher complexity you will add to your model. This means that given a fixed amount of data, a greater number of output nodes will lead to poorer results. I would use a ABCD vs. others strategy. Instead of conditioning your model to learn the distributions of the class A, B, C and D separately you will combine them. This means that is A and B are different in some way, but this difference is irrespective of the classification with "others" then there is no need to learn that distinction. For example: if you want to detect dog, cat, human with features such as weight, height and number of legs. The number of legs feature will have relatively low importance, because cats and dogs will likely all have 4 legs. However if I want to classify cat/dog vs humans, then the number of legs will be the most important feature. It might be the only feature you need. One caveat may be severe class imbalance. By combining your classes in this way you may end up with an over representation of ABCD. You can use techniques such as anomaly detection to train a model on your ABCD data and then detect whether a novel instance falls within this distribution, or is an outlier in which case you would label it as "others."
H: Random Forest Multiclass Classification Problem Statement: Given the details about a product, we need to map it to its category. Currently we are using Product Name as a feature and Product Category as the Label There are around 50,000 categories available currently and it will grow in future. I created a small dataset which consisted 20 categories and 100 records for each label. so the total record count is 2000. Using RandomForest I got 92% accuracy. Problem: So I went on to create a model with 1800 categories[labels] and records for each categories varies from 500-1500. When I ran the same model with new dataset I got only 19% accuracy and more than 50% of the predicted value pointed to the same label. Dataset Sample: Product_Combined Category 2Pcs Led Light Lamp Strip Dimmer Switch Brightness Adjustable Control 12-24V 8A Arts, Crafts & Sewing | Painting, Drawing & Art Supplies | Drawing | Light Boxes 10 Pcs 1/4" Male To 1/4" Female Screw Adapter For Tripod Camera Flash Bracket Stand Arts, Crafts & Sewing | Painting, Drawing & Art Supplies | Drawing | Light Boxes L-Fine A4 Tracing LED Light Pad Box(13.86x9.45 Inches) with Adjustable Light Intensity for Artists,Drawing, Sketching, Animation Arts, Crafts & Sewing | Painting, Drawing & Art Supplies | Drawing | Light Boxes BZONE Solar Powered Operated Copper Wire LED Fairy Light Decorative String Lights for Indoor Outdoor Home Garden Lawn Patio Party Christmas Valentine''s Day (16.4ft, Pink Color) Arts, Crafts & Sewing | Painting, Drawing & Art Supplies | Drawing | Light Boxes LitEnergy 32.5 Inch Diagonal A2 Tracing Table with LED Light and Paper Arts, Crafts & Sewing | Painting, Drawing & Art Supplies | Drawing | Light Boxes Code: import string import codecs import pandas as pd import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from stemming.porter2 import stem from sklearn.metrics import confusion_matrix from nltk.stem import PorterStemmer from nltk.corpus import stopwords from sklearn.model_selection import cross_val_score from sklearn.externals import joblib stop = stopwords.words('english') data_file = "Book3.txt" #Reading the input/ dataset data = pd.read_csv( data_file, header = 0, delimiter= "\t", quoting = 3, encoding = "ISO-8859-1") data = data.dropna() #Removing stopwords, punctuation and stemming data['Product_Combined'] = data['Product_Combined'].apply( lambda x: ' '.join([word for word in x.split() if word not in (stop)])) data['Product_Combined'] = data['Product_Combined'].str.replace( '[^\w\s]',' ').replace('\s+',' ') data['Product_Combined'] = data['Product_Combined'].apply( lambda x: ' '.join([stem(word) for word in x.split()])) train_data, test_data, train_label, test_label = train_test_split( data.Product_Combined, data.Breadcrumb, test_size=0.3, random_state=100) RF = RandomForestClassifier(n_estimators=100) vectorizer = CountVectorizer( max_features = 50000, ngram_range = ( 1,3 ) ) data_features = vectorizer.fit_transform( train_data ) RF.fit(data_features, train_label) test_data_feature = vectorizer.transform(test_data) Output_predict = RF.predict(test_data_feature) print ("BreadCrumb_Accuracy: " + str(np.mean(Output_predict == test_label))) with codecs.open("out_bread_crumb.txt", "w", "utf8") as out: out.write("Input\tPredicted\tActual\n") for inp, pred, act in zip(test_data, Output_predict, test_label): try: out.write("{}\t{}\t{}\n".format(inp, pred, act)) except: continue Output: 
Input Predicted Actual Centuri Duster Dispos Compress Gas Duster 10 oz 2 Pk Automotive | Exterior Accessories | Towing Products & Winches | Winches Electronics | Computers & Accessories | Computer Accessories & Peripherals | Cleaning & Repair | Compressed Air Dusters BB Mall Phone Ring Stand Metal Stainless Steel Univers 360 Rotat Ring Kickstand iPhon 6 6s 6 s plus Samsung Note 5 Note 4 S5 iPad All SmartPhon Tablet Black Automotive | Exterior Accessories | Towing Products & Winches | Winches Cell Phones & Accessories | Accessories | Mounts & Stands | Stands Standard Motor Product 6444 Ignition Wire Set Automotive | Exterior Accessories | Towing Products & Winches | Winches Automotive | Replacement Parts | Ignition Parts | Spark Plugs & Wires | Wires | Wire Sets Walker 52271 Extension Pipe Automotive | Exterior Accessories | Towing Products & Winches | Winches Automotive | Replacement Parts | Exhaust & Emissions | Exhaust Pipes & Tips ACDelco KS10640 Profession Time Compon Seal Automotive | Exterior Accessories | Towing Products & Winches | Winches Automotive | Replacement Parts | Bearings & Seals | Seals | Camshafts As you can see more than 50% of the actual test data was labeled as Automotive | Exterior Accessories | Towing Products & Winches | Winches AI: There are bunch of things to take care of before you can solve the problem. How is the label distribution in the training? If the distribution is not appropriate, then you need to sample the training data appropriately. With regards to the approach: Using random forest is appropriate. But as features to the random forest it would be better to use word vectors as input to the model. That would take into account products with same labels to have a very strong similarity score based on their names. I have used this before almost for the exact problem and I saw a big boost in my results.
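A hedged sketch of the word-vector suggestion above, using pretrained GloVe vectors loaded through gensim (a sizeable one-off download; the product titles below are made up and averaging token vectors is just one simple option):

import numpy as np
import pandas as pd
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")   # pretrained 100-dimensional vectors

def title_vector(title, dim=100):
    # Average the vectors of the tokens the pretrained model knows about
    tokens = [t for t in title.lower().split() if t in wv]
    return np.mean([wv[t] for t in tokens], axis=0) if tokens else np.zeros(dim)

titles = pd.Series(["led light lamp strip dimmer switch", "ignition wire set"])
X = np.vstack([title_vector(t) for t in titles])
print(X.shape)   # (2, 100): dense features that can replace the TF-IDF matrix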
H: Meaning of "TRUE" column in R RandomForest output for Importance()? I want to assess the importance of variables in my model using the Importance() function of R RandomForest package. I have a binary response variable / class and binary feature values. mytree.rf <- randomForest(class ~ ., data=mydata, ntree=1500,keep.forest=FALSE,importance=TRUE) importance(mytree.rf) The output matrix contains the MeanDecreaseAccuracy and MeanDecreaseGini. I understand those two. My problem is with two other columns in the output. One simply says "TRUE" the other one "FALSE". Neither in the documentation nor online I was able to find an answer what those values are and how they are calculated... Can anyone help me out? Cheers Edit 1: Thanks to Davids answer I realized TRUE and FALSE are my class "names". I still don't understand how the value given in the matrix is calculated though... Can anyone help with that? Edit 2: Thanks to David again, it turns out the answer is in the documentation. But it cannot be found in the chapter about the importance() function, but rather in the description of objects of class randomForest. Importance is one of these objects. [...] a matrix with nclass + 2 (for classification) or two (for regression) columns. For classification, the first nclass columns are the class-specific measures computed as mean descrease in accuracy. The nclass + 1st column is the mean descrease in accuracy over all classes. The last column is the mean decrease in Gini index. AI: I don't see any columns like that. Here's a simple example: library(randomForest) data(iris) iris.rf <- randomForest(Species ~ ., data=iris, importance=TRUE) colnames(importance(iris.rf)) # [1] "setosa" "versicolor" # [3] "virginica" "MeanDecreaseAccuracy" # [5] "MeanDecreaseGini" If you're seeing columns named "TRUE" and "FALSE" in the matrix returned by importance, my best guess is that those are the names of columns in your training data.
H: Are there any examples of neural networks that take two samples as input with a label of {same class, different class}? There must be examples of this, though I haven't been able to find any. Maybe I don't know what to search for. AI: Yes. This is how a face recognition algorithm might work for example, where two pictures might be of the same person or different person. To build such a system, just pair up your data in a training set, double the input vector space and run a binary classifier that outputs "true" if the two items are the same. Any ML classifier could be adapted to this problem. In practice, you may want more control over classification, and be robust against new classes that your algorithm has not seen before. A big problem in face recognition is the large number of potential classes, including classes not seen in training data, coupled with an equally large variance of images in the wild. This means that a naive approach as above will have poor performance in practice. There is a more sophisticated approach: Triplet Loss. This requires that you train with three inputs for each example. Unlike the naive version: The input is one image at a time. You train in triplets - an "anchor", and a postive match, and a negative match, in order to calculate one loss value for backprop. The output is not a same/different class, but a vector description of the object. Similarity of objects is the distance between vectors. You do not need a label for this vector, even though this is supervised learning, thanks to how the loss function works. The loss function is based on difference of distance from anchor example to positive example compared with a desired higher distance to negative example. This encourages learning key features of the inputs by making distance between vectors low (ideally zero) when they represent the same class, and as high as possible when they represent different classes. In both the naive approach and triplet loss approach, you need to be careful about selecting training data. You want to make the learning algorithm work hard to learn key differences, otherwise it is too easy to get a good loss. So taking MNIST as an example, when looking at negative matches, you want to pair up more (1, 7), (3, 2), (3, 8), (4, 9) etc pairs than (0,1), (9, 5). There are ways to drive this selection based on feedback from previous training epoch.
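A minimal numpy sketch of the triplet loss described above (the function name, example vectors and margin value are illustrative, not taken from any particular library):

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances between the embedding vectors
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Zero loss once the negative is at least `margin` farther away than the positive
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.1, 0.9])
p = np.array([0.2, 0.8])   # same identity, close to the anchor
n = np.array([0.9, 0.1])   # different identity, far from the anchor
print(triplet_loss(a, p, n))   # 0.0 here: the margin is already satisfied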
H: What Is Saturating Gradient Problem Can anyone explain what the saturating gradient problem is? It would be nice if anyone can provide math details as well. Thank you in advance! AI: If you use sigmoid-like activation functions, like sigmoid and tanh, after some epochs of training the linear part of each neuron will have values that are very big or very small. This means that the linear part will have a large absolute output value regardless of its sign. Consequently, the input of the sigmoid-like function in each neuron, which adds non-linearity, will be far from the center of that function. In those locations the gradient/derivative value is very small, so after numerous iterations the weights get updated very slowly. This is why we use the ReLU activation function, whose gradient doesn't have this problem. Saturating means that after some epochs in which learning happens relatively fast, the value of the linear part ends up far from the center of the sigmoid and the unit saturates; it then takes too much time to update the weights because the value of the gradient is small. You can take a look at here as a solution for this problem. For the math part, suppose that you are using sigmoid as the activation function. If $\sigma$ represents the sigmoid, its gradient is $\sigma(1-\sigma)$. Now suppose that your linear part, the input of the sigmoid, is a very large positive number; then the sigmoid, which is: $$ \frac{1}{1+e^{-x}} $$ will have a value near to one but smaller than that. On the other hand, $1-\sigma$ will be very close to zero, so multiplying $\sigma$ and $1 - \sigma$ will result in a small value, which means the value of the gradient is very small. If the value of the linear part is a very large negative number, then $\sigma$ will be close to zero and $1- \sigma$ will be close to 1 but smaller than that. Again, multiplying these will lead to a small gradient.
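A quick numerical illustration of the saturation effect with plain numpy (the input values are chosen arbitrarily):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for x in [0.0, 2.0, 10.0, -10.0]:
    s = sigmoid(x)
    grad = s * (1 - s)   # derivative of the sigmoid at x
    print(f"x={x:6.1f}  sigmoid={s:.5f}  gradient={grad:.5f}")
# The gradient peaks at 0.25 for x = 0 and collapses towards 0 once |x| is large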
H: Best practice for developing using Spark I am looking for any tips and best practice on how to develop applications using Spark. I currently have access to a cluster, with data as well as a version of Spark 2.1.0 on an edge node and IntelliJ on my local machine. I am wondering what the best way would be to go about development of an application to run on the data that is in the cluster. Things on my mind for consideration are the following; IntelliJ is a nice IDE but I can't store the volume of data I want to look at on my local machine I can run spark-shell on the edge node but its not great for data exploration and development How is it best to bridge this gap between data and a nice development environment? AI: What you want to do is develop your code in IntelliJ, and then package your code and dependencies into an executable jar file using SBT or Maven. When you have your jar stored locally, you can use spark-submit to transfer the jar to your cluster (along with some other parameters) for execution. You might also want to take a sample of your data and store it locally so you can run spark locally and test/debug your code in IntelliJ. This can speed up development considerably, and having access to a debugger is a huge help.
H: Is there a person class in ImageNet? Are there any classes related to humans? If I look at one of the many sources for the Imagenet classes on the Internet I cannot find a single class related to human beings (and no, harvestman is not someone who harvests, but it's what I knew as a daddy longlegs, a kind of spider :-). How is that possible? I would have at least expected a person class, and even something more specific such as man, woman, toddler, etc. Nothing of the sort. Why? Did Fei-Fei Li and her team make a conscious choice not to have people images in the database? Am I looking at the wrong file? For the sake of the question, we can consider the ImageNet versions from 2014 onwards. AI: You can take a look at here for the labels in ImageNet. I guess you are right: there is no label for humans in the dataset, but there is something to notice. There are labels in ImageNet like cowboy, several specific kinds of hats, and other things related to humans such as shirt and t-shirt. You can take a look at here and also here. In the latter link, Yosinski et al. try to show that the popular AlexNet has learned to recognize human faces even though there is no human-face label in the ImageNet dataset. In their paper they investigate how convolutional neural networks may learn features that are distributed across layers (or not) and that may not have an explicit label in the training data; the faces of cats and humans are examples of this. Moreover, as you can see here, the aim may have been attribute learning in large-scale datasets, as quoted in the last line of the referenced page.
H: How to match up categorical labels in training and evaluation I am creating a CNN to categorise a sentence into one of N possible labels. I have used the tutorial from WildML to start the code, and I have modified it to allow multiple outputs instead of just true/false. I am using VocabularyProcessor to convert the words into numbers and then using OneHotEncoder to transform the labels for evaluation. I am currently using the same VocabularyProcessor for converting both X and y Where x_text is the loaded sentences and y_text is the correct labels max_document_length = max([len(x.split(" ")) for x in x_text]) vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length) X = np.array(list(vocab_processor.fit_transform(x_text))) y_val = np.array(list(vocab_processor.transform(y_text))) from sklearn.preprocessing import OneHotEncoder onehotencoder = OneHotEncoder(categorical_features = [0]) y = onehotencoder.fit_transform(y_val).toarray() y = y[:, 1:] This then goes through the training. The problem that I am finding is that during evaluation, if the labels are not exactly the same then the indexes do not match the prediction that is coming out. Basically what my question is, is how do I match up the output of the evaluation with the text label so that I can use it after training without hard coding the labels up front AI: You are going about it all wrong. You shouldn't transform your labels into numerical values and match them to the output; that's not how classification is done. Instead, for each class (category) you will add a node in the output layer; the predicted class is then the argmax of the output layer. For example, let's say you have three classes: simple, compound and complex. We'll assign an integer for each class starting from $0$. Your output layer will have three nodes, one for each class, and for each training example $x$, the true label $y$ is a three dimensional "hot-encoded" vector. (i.e $y = (y_0, y_1, y_2)$ where $y_i = 1$ if $x$ belongs to class $i$ otherwise $y_i = 0$ for $i \in \{0,1,2\}$). Now, let's suppose you feed your (trained) network an input value x and it outputs the following vector: $y' = (0.4, 1.4, 2.5)$. The predicted label in this case is $\arg\max(y') = 2$. So the prediction for the sentence x is: complex. Note: since argmax is not differentiable, we use softmax during training to convert the output vector y' into a vector of probabilities that sums up to $1$ before applying the loss function. The prediction in this case is the class with the highest probability. For example, if we apply softmax to the same y' from the example above, we get: $\text{softmax}(y') = (0.08, 0.23, 0.69)$. It's clear that the highest probability ($69\%$) belongs to the class $2$ (complex) and that these probabilities sum up to $1$.
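A small sketch of the softmax/argmax step with an explicit index-to-label mapping; the class names follow the example above and the output vector is the made-up y' from the answer:

import numpy as np

labels = ["simple", "compound", "complex"]        # index i <-> class i
logits = np.array([0.4, 1.4, 2.5])                # raw network output y'

probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax
predicted = labels[int(np.argmax(probs))]

print(np.round(probs, 2), "->", predicted)        # [0.08 0.23 0.69] -> complex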
H: Is there a thumb-rule for designing neural-networks? I know that a neural-network architecture is mostly based on the problem itself and the types of input/output, but still - there's always a "square one" when starting to build one. So my question is - given a input dataset of MxN (M is the number of records, N is the number of features) and a C possible output classes - is there a thumb-rule to how many layers/units should we start with? AI: This question has been answered in detail on CrossValidated: How to choose the number of hidden layers and nodes in a feedforward neural network? However, let me add my own two cents: There is no magic rule for choosing the best neural network architecture, but if you can find an architecture someone has used to solve a similar problem this is often an excellent starting point. The best places to look are official or unofficial examples using popular neural network libraries such as Keras, PyTorch, or Tensorflow, and architectures described in academic literature. keras/examples on github is a great resource. These architectures were likely chosen after lots of trial and error, so most of the work will have been done for you.
H: How to choose variables for regression I have a dataset of long/short equity hedge funds returns and their associated benchmarks (market indices). I need to form multiple regression on the fund returns using the benchmarks returns as independent variables (i am allowed to form linear combination or manipulation of the indices or even non-linear combinations). Of course, I do not know which independent variables to choose. Are techniques such as subset selection, Lasso, and Ridge supposed to be used in situation like this? AI: There are a few steps you can take to choose features for linear regression: 1 - Exclude variables that are highly correlated with each other. If variables are highly correlated you are essentially inputting the same information multiple times which can cause over-fitting and does not satisfy the properties no multi-collinearity for linear regression. You can create a Pearson correlation matrix and decided which variables are too highly correlated using some chosen threshold i.e only keep variables with a correlation coefficient of < 0.3 2 - If you have many variables you could perform principal component analysis (PCA) to reduce the dimensions of the data and use those as your linear regression features. The idea of PCA is reduce dimensions while holding all of the information. Each component from PCA are uncorrelated, satisfying the no multi-collinearity property. 3 - There is also a method known as stepwise linear regression. You allow all variables to enter the model and it will iteratively remove and add variables until the model with the highest R-squared (or whatever your chosen model metric is) is produced. You do have to be cautious using the stepwise method as it can lead to overfitting, but it can give an indication on what features to use. Here's some info on stepwise: https://en.wikipedia.org/wiki/Stepwise_regression 4 - If you are using R, there is a brilliant package called "caret" that can help with feature selection. Here is a fantastic link to use as a guide: https://machinelearningmastery.com/feature-selection-with-the-caret-r-package/ Hope this helps out as a starting point
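A sketch of step 1 in Python/pandas (the answer also points to R's caret for the same job); the synthetic frame and the 0.3 threshold are only illustrative:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["a", "b", "c"])
X["d"] = 0.9 * X["a"] + rng.normal(scale=0.1, size=200)   # nearly a duplicate of "a"

corr = X.corr().abs()
# Keep only the upper triangle so each pair is inspected once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.3).any()]
print(to_drop)               # ['d']: one of each highly correlated pair is removed
X_reduced = X.drop(columns=to_drop)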
H: Calculating correlation of slightly out of sync data I am trying to do some analysis on some data that comes from special glasses that track a few things including pupil size and gaze velocity. I would like to calculate the correlation between two glasses on two different people. At the moment I cannot use df.correlate() because the timestamps are not identical and therefore the data looks something like this: index | ts | r_person | l_person ----------------------- 0 | 23 | 3.0 | NAN 1 | 25 | NAN | 3.2 2 | 28 | 3.1 | NAN 3 | 32 | 3.0 | NAN I was wondering if there was still any way to calculate a correlation directly. At the moment I was thinking of possibly filling the NAN values with the averages of the data points above and below. For example row 2 column r_person would become $3.05$. This would be less trivial than it seems because even at the start it wasn't always one data point R, one data point L, and after cleaning the data it has become less so. In other words multiple NAN values might appear in the same column, as you can see in the example. I can still deal with that by just spreading out the average. My second technique was going to be to try and merge the values together if they were close enough, bearing in mind the data was collected at 50 Hz. My question is whether anyone has a quicker or better way of aligning the data without losing it or changing it too much? AI: At the moment I was thinking of possibly filling the NAN values with the averages of the data points above and below. For example row 2 column r_person would become $3.05$. The problem here is that the missing data point isn't necessarily associated with a timestamp that is the average of the timestamps above and below. If it were, the procedure you suggested would be equivalent to a linear interpolation, i.e. drawing a line between the points before and after the missing timestamp, and using that line to construct a local prediction for the missing observation. That is what you should actually do instead. For example, in the case above the line for the point you're trying to interpolate is $\text{r\_person}(ts) = 3.0 +\frac{.1}{5}(ts-23)$, so $\text{r\_person}(25) = 3.04$.
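A sketch of that interpolation with pandas, reusing the question's toy frame; setting ts as the index makes the interpolation weighted by the actual timestamps rather than by row position:

import numpy as np
import pandas as pd

df = pd.DataFrame({"ts": [23, 25, 28, 32],
                   "r_person": [3.0, np.nan, 3.1, 3.0],
                   "l_person": [np.nan, 3.2, np.nan, np.nan]})

aligned = df.set_index("ts").interpolate(method="index")
print(aligned.loc[25, "r_person"])   # 3.04, matching the worked example above
# On the real 50 Hz recordings you could then simply call aligned.corr();
# the toy l_person column here has too few valid points for that to mean much.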
H: Including the dependent variable in your data to perform principal component analysis? Let's say you have a data set with GPA (dependent variable) and Amount of alcohol, Amount of study, IQ, and SAT score as the independent variables. And you want to perform the principal component analysis in R for dimension reduction. In the csv file you read in R, do you have to have GPA or do you have to remove it? AI: You need to remove the dependent variable before performing PCA. Otherwise you are essentially using the dependent variable to explain itself. Also, think about prediction: You might want to use your model to predict unknown GPA scores based on information about amount of alcohol, amount of study, IQ and SAT. In that case, it is impossible to include information on GPA scores.
H: Kernel with complex vectors I have a question regarding my machine learning lecture where we had to decide whether $$K(x,y)=x_1y_1-x_2y_2$$ is a valid kernel (e.g. for a SVM). My intuition would say that it is a valid kernel since we can display it with: $$\Phi(x)=(x_1, ix_2)\implies K(x,y)=\Phi(x)\Phi(y)$$ with $i$ being the imaginary number. Is that right? AI: Kernels are considered valid if they are positive-definite, so we must have $$ \sum_{i=1}^n \sum_{j=1}^n c_i c_j K(x_i, x_j) \ge 0 $$ for all $n \in \mathbb{N}$ and $c_1, \dots, c_n \in \mathbb{R}$ and (in your case) all $x^1, \dots, x^n \in \mathbb{R}^2.$ Letting $n=1$ and $c_1 = 1$ and $x^1 = (0, 1)$ shows that $K$ is not positive-definite, since $$ K\big((0,1), (0,1)\big) = 0\times 0 - 1\times 1 = -1. $$ The problem with your approach is that you define $\Phi$ on $\mathbb{C}$ but write $K$ as the dot product for real-valued vectors. The feature map argument for positive definite vectors only works if the product is an inner product on $\Phi$'s codomain. The dot product for real-valued vectors is not an inner product on $\mathbb{C}^2$ (because $i\cdot i = -1 < 0$). You would have to use the generalized dot product $x \cdot y = \sum_i x_i \overline{y_i}$, or another inner product on $\mathbb{C}^2$.
H: Machine Learning to predict risk of items I'm trying to find out what I need to research and start learning to try and apply machine learning to this problem: In multiple offices I have 20 chairs, all of these chairs will need to have a risk assessment carried out, as the chairs become older I want to understand when they become more at risk because of other factors such as; age, usage, last time assessed and areas level of risk. I then want to be able to come up with a simple risk score of each individual chair. I have done some research and I think it points towards using a machine learning technique such as supervised learning with a neural network but I'm uncertain. Given my scenario what should I be looking to start learning and doing? Furthermore I believe that from my limited knowledge that providing all of the values I'm using are the same across other items I could apply and scale out this model for other items? Further info; As I'm familliar with Javascript I'm choosing to use synaptics.js to create a neural network and I'm going to be generating sample data inside a node server. I'm going to be using the following for my datasets that will be used as my model for learning from: Age (days to years), Location(low, med, high), Office(low, med, high), Last Risk Assessment Date (days-> years capped by twenty), Daily use (Low, Medium, High) and quality(low, med, high). All of the above will be given an expected outcome manually so that I can attempt to figure out a way for the neural network to process the above and come to the same conclusion I did. With the above labels I will be adding a weighting that will then be used to classify the risk level it should be associated to. The factors I'm looking for the neural network to understand is that given my factors a chair with a lot of high factors will likely be placed in a higher outcome value of say 1-4 for risk factors. I'm then hoping I can use this learning to be applied to multiple other things other than chairs and increase the labels the network can use to learn from as I go along. AI: This sounds like a homework problem and those are frowned upon on this web site; so I'm going to be cautious about helping you and just give you general guidance: 1 - be very, very careful about picking models before you do anything else. That is a bit of Researcher Bias and you should try to avoid that; let the data tell you where to go, not the other way around. 2 - you have to think about what "factors" you can use to describe each chair. How old is it? What color is it? Does it have wheels or not? Some of these features will have an effect on your models, others will not. But it should be your goal to start with a long list of factors and then try to model them out and see which ones have an effect on your results. Start putting a list of factors together and then you will be in a better position to continue your studies.
H: Looking for advice: data transfer I'm requesting data from a government body, and they asked me what format I want to receive the data in. This will be a table of about 400,000 rows and about 10 columns. My options are: "comma or tab delimited ASCII, Microsoft Access database, Microsoft Excel file etc" They also want to know the media by which I want to receive data. My options are: "encrypted CD, DLT Tape, etc." I'm used to working with JSON, and doing data analysis via Python and R. These data will have to remain confidential. Any pointers? AI: Either format will work with R and Python, though you'll need a library for MS Access, which is the least common option. If any of these formats is the native format of the data, I'd go for that, which will avoid weird transformation artefacts. If neither of the formats is, I'd choose CSV, which would allow me to use a text editor if needed. However, if one of the columns contains user generated texts, it might be worth the trouble to choose Excel, that would limit the chance that the export would get botched. It is way easier to get your hands on a CD-reader. Your data doesn't sound that impressively large that it warrants using tape.
H: Linear Regression of Sine wave using Gradient descent Not working I am writing an algorithm to fit a sine wave. I want to have 4 parameters ( amplitude, frequency, phase & centre position). When I tried to program with all 4 parameters I wasn't able to find a good fit. So I tried to first make it work with each parameter separately and making the other 3 constant hardcoded values. I managed to make this work for all parameters except for the frequency. So I think the problem is the derivative of the error function w.r.t the frequency parameter. As seen below all the other parameters of the sine are hardcoded values except the frequency parameter %This is iterated for a large number of times for i = 1:length(x) %computing hypothesis and error TrialY(i) = 4* sin(w2(end) *x(i) + 4) -2; Error(end) = Error(end) + (TrialY(i) - y(i))^2; end Error(end) = Error(end) / (2*length(x)); dw2 = 0; %derivative value %derivative of error function with respect to frequency (w2) for i = 1: length(x) dw2 = dw2 + (4* sin(w2(end) *x(i) + 4) -2 - y(i)) * 4 * x(i) * cos(w2(end) * x(i) + 4); %hardcoded derivative end dw2 = 2 * dw2 / length(x); %normalising error w2(end + 1) = 0; w2(end) = w2(end - 1) - alpha * dw2; % alpha is the update constant It seems like the parameter is not converging on the right value with whatever alpha and initial values I give it. While the plot of the Error with time is decreasing but converging on a value > 0. AI: The problem seems to be that in the case of frequency parameter, the error function is not convex, and has multiple local minima of value greater than 0. The gradient descent is converging to one of these minima. The error will reduce to 0 only if the starting estimate lies in the right convex basin. For the amplitude and center value parameters, the error functions are convex, and for phase parameter, there are multiple local minima all of value 0.
H: How to build a recurrent neural net in Keras where each input goes through a layer first? I'm trying to build an neural net in Keras that would look like this: Where $x_1$, $x_2$, ... are input vectors that undergo the same transformation $f$. $f$ is itself a layer whose parameters must be learned. The sequence length $n$ is variable across instances. I'm having trouble understanding two things here: What should the input look like? I'm thinking of a 2D tensor with shape (number_of_x_inputs, x_dimension), where x_dimension is the length of a single vector $x$. Can such 2D tensor have a variable shape? I know tensors can have variable shapes for batch processing, but I don't know if that helps me here. How do I pass each input vector through the same transformation before feeding it to the RNN layer? Is there a way to sort of extend for example a GRU so that an $f$ layer is added before going through the actual GRU cell? AI: What should the input look like? You are right to think a 2D tensor, but usually we add one more dimension for the batch. You can indeed have a variable length number_of_x_inputs, but to train during batch processing all inputs in a single batch will need to have the same shape. (Setting batch size to 1 will get around this.) During inference, you can have whatever length you want. See below code sample. How do I pass each input vector through the same transformation before feeding it to the RNN layer? Use TimeDistributed. The example below passes all vectors $x_i$ through the same feed forward network (Dense(5, ...)), but you should be able to swap that out for whatever f you were thinking. from keras.models import Sequential from keras.layers import LSTM, Dense, TimeDistributed x_dimension = 16 num_classes = 2 model = Sequential() model.add(TimeDistributed(Dense(5, activation='relu'), input_shape=(None, x_dimension))) model.add(LSTM(32, return_sequences=True)) model.add(LSTM(8)) model.add(Dense(num_classes, activation='softmax')) print(model.summary(90)) This prints the following model: Layer (type) Output Shape Param # ========================================================================================== time_distributed_1 (TimeDistributed) (None, None, 5) 85 __________________________________________________________________________________________ lstm_1 (LSTM) (None, None, 32) 4864 __________________________________________________________________________________________ lstm_2 (LSTM) (None, 8) 1312 __________________________________________________________________________________________ dense_2 (Dense) (None, 2) 18 ========================================================================================== Total params: 6,279 Trainable params: 6,279 Non-trainable params: 0 __________________________________________________________________________________________
H: Imbalanced data causing mis-classification on multiclass dataset I am working on text classification where I have 39 categories/classes and 8.5 million records. (In future data and categories will increase). Structure or format of my data is as follows. ---------------------------------------------------------------------------------------- | product_title | Key_value_pairs | taxonomy_id | ---------------------------------------------------------------------------------------- Samsung S7 Edge | Color:black,Display Size:5.5 inch,Internal | 211 Storage:128 GB, RAM:4 GB,Primary Camera:12 MP Case cover Honor 8 | Color:transparent,Height:15 mm,width:22 mm | 212 Ruggers Men's T-Shirt | Size:L,ideal for:men,fit:regular, | 111 sleeve:half sleeve Optimum Nutrition Gold | Flavor:chocolate,form:powder,size:34 gm | 311 Standard Whey Protein Data distribution is not normal; it is highly imbalanced: ------------------------- | taxonomy_id | count | ------------------------- 111 | 851750 112 | 355592 113 | 379433 114 | 23138 115 | 117735 116 | 145757 117 | 1339471 121 | 394026 122 | 193433 123 | 78299 124 | 111962 131 | 1776 132 | 4425 133 | 908 134 | 23062 141 | 22713 142 | 42073 211 | 7892 212 | 1574744 221 | 1047 222 | 397515 223 | 53009 231 | 1227 232 | 7683 251 | 739 252 | 327 253 | 38974 254 | 25 311 | 2901 321 | 7126 412 | 856 421 | 697802 422 | 414855 423 | 17750 425 | 1240 427 | 658 429 | 1058 431 | 20760 441 | 257 As you can see they are highly imbalanced and leading to mis-classifications. Steps I have performed till now 1) Merge product_title and key_value_pairs column and remove stop words and special characters and perform stemming. 2) I have used pipeline for TFIDFvectorizer(), LinearSVC() vectorizerPipe = Pipeline([ ('tfidf', TfidfVectorizer(lowercase=True, stop_words='english')), ('classification', OneVsRestClassifier(LinearSVC(penalty='l2', loss='hinge'))), ]) After this I have fit pipeline and stored the classifier in pickle prd = vectorizerPipe.fit(df.loc[:, 'description'], df.loc[:, 'taxonomy_id']) On Testing side I have repeated step 1 as mentioned above and then load the pickle and use predict function pd = cl.predict([testData]) Issues I am facing A lot of products are being mis-classified into some other categories Example: Ultimate Nutrition Prostar 100% Whey Protein should be classified into category 311 but my classifier is classifying it as 222 which is completely wrong. I am not sure whether to use TFidfVectorizer() or Hashingvectorizer(), can you guys help me in selecting one of this along with their parameters? Algorithm I am using is LinearSVC, is it a good choice for multi-class classification problems with large amount of data? Or should I use different algorithms? As my data is highly imbalanced I tried random undersampling. The results were improved but they were still not up to the mark. 
Also I am not sure whether this is the right approach to perform random undersampling: pipe = make_pipeline_imb( HashingVectorizer(lowercase=True), RandomUnderSampler(ratio={111: 405805, 112: 170431, 113: 241709, 114: 8341, 115: 50328, 116: 89445, 117: 650020, 121: 320803, 122: 162557, 123: 66156, 124: 36276, 131: 1196, 132: 3365, 133: 818, 134: 15001, 141: 6145, 142: 31783, 211: 24728, 212: 100000, 221: 791, 222: 8000, 223: 35406, 231: 785, 232: 3000, 251: 477, 252: 127, 253: 29563, 254: 33, 311: 2072, 321: 5370, 412: 652, 421: 520973, 422: 99171, 423: 16786, 425: 730, 427: 198, 429: 1249, 431: 13793, 441: 160},random_state=1), OneVsRestClassifier(LinearSVC(penalty='l2', loss='hinge'))) I am new in machine learning so I have used this approach for text classification. If my approach is wrong then please correct me with right one. (It would be great if you give suggestion or solution with examples as it will help me understand better). ***EDIT-1**** RndmFrst = RandomForestClassifier(n_estimators=100, max_depth=20, max_features=5000,n_jobs=-1) LogReg = LogisticRegression() voting = VotingClassifier(estimators=[('LogReg ', LogReg), ('RndmFrst', RndmFrst)], voting='soft', n_jobs=-1) pipe = Pipeline([('tfidf', TfidfVectorizer(ngram_range=(1,4), max_features=50000)), ('clf', voting)]) pipe = pipe.fit(df.loc[:,'description'], df.loc[:,'taxonomy_id']) Preds = pipe.predict(test_data) AI: Nice question! Some Remarks For imbalanced data you have different approaches. Most well-established one is resampling (Oversampling small classes /underssampling large classes). The other one is to make your classification hierarchical i.e. classify large classes against all others and then classify small classes in second step (classifiers are not supposed to be the same. Try model selection strategies to find the best). Practical Answer I have got acceptable results without resampling the data! So try it but later improve it using resampling methods (statistically they are kind of A MUST). TFIDF is good for such a problem. Classifiers should be selected through model selection but my experience shows that Logistic Regression and Random Forest work well on this specific problem (however it's just a practical experience). You may follow the code bellow as it worked simply well then you may try modifying it to improve your results: train = pd.read_csv(...) test = pd.read_csv(...) # TFIDF Bag Of Words Model For Text Curpos. Up to 4-grams and 50k Features vec = TfidfVectorizer(ngram_range=(1,4), max_features=50000) TrainX = vec.fit_transform(train) TestX = vec.transform(test) # Initializing Base Estimators clf1 = LogisticRegression() clf2 = RandomForestClassifier(n_estimators=100, max_depth=20, max_features=5000,n_jobs=-1) # Soft Voting Classifier For Each Column clf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2)], voting='soft', n_jobs=-1) clf = clf.fit(TrainX, TrainY) preds = clf.predict_proba(TestX)[:,1] Please note that the code is abstract so TianX, TrainY,TestX,etc should be properly defined by you. Hints Be careful about what is StopWord. Practically many people (including myself!) made this mistake to remove stop words according to pre-defined lists. That is not right! Stop words are corpus-sensitive so You need to remove stopwords according to information theoretic concepts (to keep it simple you need to know TFIDF kind of ignores your corpus-specific stopwords. If you need more explanation please let me know to update my answer). 
VotingClassifier is a meta-learning strategy in the family of ensemble methods. It takes advantage of several different classifiers at once; try it, as it works pretty well in practice. A hard voting scheme returns the class predicted by the majority of the classifiers, while the soft voting used above averages their predicted class probabilities and returns the class with the highest average probability. So it is a kind of democratic approach instead of a dictatorship ;) Hope it helps!
H: Normalizing the final weights vector in the upper bound on the Perceptron's convergence The convergence of the "simple" perceptron says that: $$k\leqslant \left ( \frac{R\left \| \bar{\theta} \right \|}{\gamma } \right )^{2}$$ where $k$ is the number of iterations (in which the weights are updated), $R$ is the maximum distance of a sample from the origin, $\bar{\theta}$ is the final weights vector, and $\gamma$ is the smallest distance from $\bar{\theta}$ to a sample (= the margin of hyperplane). Many books implicitly say that $\left \| \bar{\theta} \right \|$ is equal to 1. But why do they normalize it ? AI: The Perceptron's output $f$ is $$ f(\overline\theta \cdot \overline{x}) = \begin{cases} 1 &\text{ if } \overline{\theta }\cdot \overline{x} > 0 \\ 0 &\text{ if } \overline{\theta }\cdot \overline{x} \le 0\end{cases} $$ Here, $\overline{x} = (1, x_1, \dots, x_n)$ where $(x_1, \dots, x_n)$ is the input vector. You can see that the output only depends on the sign of the product $\overline{\theta }\cdot \overline{x}.$ Therefore the output will not change if we multiply $\overline\theta \cdot \overline{x}$ with a positive constant: $$ f(\overline\theta \cdot \overline{x}) = f(c\times \overline\theta \cdot \overline{x}) \qquad \text{ for all } c > 0. $$ In particular, we may choose $c = 1/\lVert \overline\theta \rVert.$ So we get $ f(\overline{\theta }\cdot \overline{x}) = f(\tilde\theta\cdot \overline{x}), $ where $\tilde\theta = \overline\theta / \lVert \overline\theta \rVert$ is the normalized vector. Conclude that there is no loss of generality in assuming $\lVert \overline\theta \rVert = 1.$
H: Is the gradient descent the same if cost function has interaction? We know how to determine regression parameters using gradient descent. If and the cost function is C=|Y-Y(X)|^2, we update b as where is the learning rate and is the partial differential of the cost function C with respect to b. If in multiple regression there exist an interaction and we want to stick on the linear model formulation (not using tree or other non-linear regressors), such that and the cost function is still the same, do we just do the same way to update b? i.e. the existence of interaction terms doesn't have impact on gradient descent. I didn't see any difference of gradient descent between with/without interaction. AI: The usual way to use interaction terms in linear regression is to construct new $x_n$, e.g. $x_3 = x_1 x_2$, and treat those identically as any other $x_n$. The learned parameter $b$ does not "know" the difference in how you calculated $x$, and the problem is still considered linear regression even if you create really complex functions of $x_n$ to create an input. Taking your example, but with slightly different notation: The model estimate for Y is $\hat{y} = a + bx_1x_2$ The value of Y you want to learn is $y$ Mean squared error for a single example is $L = \frac{1}{2}(y-\hat{y})^2$ The factor of 2 does not change this answer, and is commonly used to simplify the gradient. Typically $C$ is the mean of $L$ over all examples. In order to learn optimal value of $b$, for one example the gradient you need is $\frac{\partial L}{\partial b}$. We can get that by expanding the loss function: $L = \frac{1}{2}(y-\hat{y})^2$ $L = \frac{1}{2}(y^2 - 2y\hat{y} + \hat{y}^2)$ We could expand further, but typically now we calculate $\frac{\partial L}{\partial \hat{y}}$ and use the chain rule, because that has a simpler, more intuitive-looking result. Terms without $\hat{y}$ are zero: $\frac{\partial L}{\partial \hat{y}} = \hat{y} - y$ We want $\frac{\partial L}{\partial b}$ for gradient descent $\frac{\partial L}{\partial b} = \frac{\partial L}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial b}$ (Chain rule) $\frac{\partial L}{\partial b} = (\hat{y} - y)(\frac{\partial}{\partial b} a + bx_1x_2)$ Again, terms without $b$ in them are constants: $\frac{\partial L}{\partial b} = (\hat{y} - y)(x_1x_2)$ Note that the $x_1x_2$ term is unchanged from the input. It could be any function $x_n = f(x_1, x_2, x_3 ....)$
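A small numeric sketch of that update rule (made-up data; the interaction x1*x2 is computed explicitly and then treated like any other input):

import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
y = 1.5 + 2.0 * x1 * x2 + rng.normal(scale=0.1, size=100)   # true a = 1.5, b = 2.0

a, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = a + b * x1 * x2
    grad_a = np.mean(y_hat - y)               # dL/da averaged over the examples
    grad_b = np.mean((y_hat - y) * x1 * x2)   # dL/db, with x1*x2 as the "input"
    a, b = a - lr * grad_a, b - lr * grad_b

print(round(a, 2), round(b, 2))   # close to 1.5 and 2.0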
H: What is the use of additional column of 1s in normal equation? Currently I am going through Normal Equation in Machine Learning. $$ \hat\theta = (X^T \cdot X)^{-1} \cdot X^T \cdot y $$ But when I see how they use this equation, I found they always add an additional column of 1s in the starting of matrix X before transposing. I don't understand why. What's the logic behind this? The places where I found such things 1) Coursera - Theory 2) Implementation Now let’s compute using the Normal Equation. We will use the inv() function from NumPy’s Linear Algebra module (np.linalg) to compute the inverse of a matrix, and the dot() method for matrix multiplication: X_b = np.c_[np.ones(( 100, 1)), X] # add x0 = 1 to each instance Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems (p. 111). O'Reilly Media. Kindle Edition. AI: The normal equations are designed such that each coefficient in the model has an input of some kind it's being multiplied against. The column of ones is the "input" to the intercept term.
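To see what the column of ones buys you, here is a short sketch with random data (without it the fitted line would be forced through the origin):

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(100, 1))
y = 4.0 + 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

X_b = np.c_[np.ones((100, 1)), X]               # prepend x0 = 1 to each instance
theta = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y  # normal equation

print(theta)   # approximately [4.0, 3.0]: intercept and slope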
H: Improve Precision of a binary classifier - Decision Tree in Python Currently, I am working on a project. The dataset is balanced roughly in the ratio of 50:50. I created a decision tree classifier. I am achieving decent accuracy (~75%) on validation data but the precision for the target variable is biased. For class=0 it is approx. 98% while for class=1 it is just 17%. I have tried scaling the data using MinMaxScaler, still no luck. model = tree.DecisionTreeClassifier(class_weight={1:30}, min_samples_leaf=160, max_depth=10) X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=10) min_max_scaler = preprocessing.MinMaxScaler() X_train_scaled = min_max_scaler.fit_transform(X_train) X_test_scaled = min_max_scaler.fit_transform(X_test) model = model.fit(X_train_scaled, y_train) prediction = model.predict(X_test_scaled) print metrics.accuracy_score(y_test, prediction) print classification_report(y_test, prediction) Size of x_train_scaled = 12600 and x_test_scaled = 5400 Accuracy: 75% Precision: {0:100%, 1:17%} Recall: {0:74%, 1:100%} F1-Score: {0:85%, 1:29%} How can I improve the precision of class=1 while still maintaining the overall precision and accuracy? AI: I guess the difference in performance between class 0 and class 1 comes from the class_weight parameter you have used. class_weight={1:30} weights class 1 thirty times more heavily, which pushes the tree to predict class 1 much more often; that is exactly the pattern you see (recall of 100% but precision of only 17% for class 1). You could try reducing or removing this weight to re-balance your results between class 0 and class 1. Another idea is to work with the predicted probabilities and the decision threshold. Remember that when you call .predict(), the sklearn decision tree compares the predicted probability with a threshold of 0.5: if it is greater than 0.5 it assigns class 1, otherwise it assigns class 0. You can play with this threshold by first outputting probabilities with .predict_proba()[:,1] and then testing the results for different decision thresholds. You can operate as below: model = clf.fit(df[features], df[label]) df["proba"] = model.predict_proba(df[features])[:,1] threshold = 0.4 # You can tune this value (default is 0.5) df["pred"] = df["proba"].apply(lambda el: 1.0 if el >= threshold else 0.0)
H: How to build an encoder using a distance matrix I have a similarity/distance matrix: a | b | c a 0 | 1 | 2 b 1 | 0 | 3 c 2 | 3 | 0 I want to build an encoder/model that learns an n-dimensional representation of each of the points in the dataset s.t. the euclidean-difference between the representations produces the difference provided in the matrix, e.g. distance(a,b) = 1 etc. Any ideas? AI: This is essentially what t-SNE does. It supports transforming data from any input dimension to any output dimension, and it tries to preserve distances between all pairs of points. From the author's website: Can I use a pairwise Euclidean distance matrix as input into t-SNE? Yes you can! Download the Matlab implementation, and use your pairwise Euclidean distance matrix as input into the tsne_d.m function.
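scikit-learn's t-SNE can also consume a precomputed distance matrix directly, so a rough sketch along those lines (the 3x3 matrix is the one from the question; with this few points the embedding is only illustrative, and in practice you would pass your full pairwise matrix):

import numpy as np
from sklearn.manifold import TSNE

D = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])

emb = TSNE(n_components=2, metric="precomputed", init="random",
           perplexity=2).fit_transform(D)
print(emb)   # one 2-d vector per point, with relative distances roughly preserved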
H: Perceptron learning rate irrelevant in convergence Via this MIT document i studied the single layer Perceptron convergence proof (= maximum number of steps). In the convergence proof inside this document , the learning rate is implicitly defined as 1 . After studying it, by myself i tried to re-do the proof inserting this time a generic learning rate $\eta $. The result is that the result remains the same: $k\leq \frac{R^{2}\left \| \theta ^{*} \right \|^{2}}{\gamma ^{2}}$ that is the learning rate $\eta$ cancelled out in the proof. Is it possible , or i make mistakes in my proof ? AI: Your conclusion is correct. Note that the classifier is of the form of $$f(x; \theta) = \operatorname{sign}(\theta^Tx)$$ while the update rule is $$\theta^{(k+1)}=\theta^{(k)}+\eta y_tx_t$$ which only occurs when there is a misclassification and we only care about the in the classification but $$\operatorname{sign}(\theta^Tx)=\operatorname{sign}(\eta \theta^Tx)$$ as long as $\eta$ is positive. It only scales $\theta^Tx$ but our concern is just the sign, hence the learning rate is irrelevant. Remark: You might like to note the assumptions being used in the proof though. For instance, there is this assumption that $\theta^*$ exists. It is assumed that the data can be perfectly separated by a hyperplane.
H: Non Deterministc Dimensionality reduction could you please suggest me a nondeterministic algorithm for dimensionality reduction except t-SNE. AI: Autoencoders are non-deterministic, since they rely on a random weight initialization.
H: How to download dynamic files created during work on Google Colab? I have two different files and on the first, I tried to save data to file as: np.save(open(Q1_TRAINING_DATA_FILE, 'wb'), q1_data) On second file, i'm trying to load it the same way using: q1_data = np.load(open(Q1_TRAINING_DATA_FILE, 'rb')) I then get the error: FileNotFoundError: [Errno 2] No such file or directory: 'q1_train.npy' I searched my google drive but couldn't find this file. Platform: https://research.google.com Edit: I'm trying to run below Kaggle problem on Colab platform. The author has two files (Jupyter and nbs) - one to prepare and 2nd to train. The step on nb1 where it's creating some files - which later to be consumed by file 2 is where I'm struck. https://github.com/bradleypallen/keras-quora-question-pairs/blob/master/quora-question-pairs-training.ipynb AI: Based on what I've seen and experienced, the best way is to store and retrieve your data from your drive account. Actually your question is a bit unclear but first I say, try to use the following command to see the current files in your directory, although I guess each 12 hours they all would be deleted automatically. !ls Anyway I recommend the following instructions: Use the following code for having permission to access to your drive account: !pip install -U -q PyDrive import tensorflow as tf import timeit config = tf.ConfigProto() config.gpu_options.allow_growth = True from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) Use the following code to get the id of contents in your drive: file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList() for file1 in file_list: print('title: %s, id: %s' % (file1['title'], file1['id'])) Put the id of the desired file, e.g. a typical text file, in the content of the following dictionary with id key: downloaded = drive.CreateFile({'id': 'the id of typical text file'}) file = downloaded.GetContentString() print('Downloaded content "{}"'.format(len(file))) Till now you have copied the text file, then you have to write it in your Colab disk using the following code: text_file = open("your desired name.txt", "w") text_file.write(file) text_file.close() Create & upload a file. uploaded = drive.CreateFile({'title': 'filename.csv'}) uploaded.SetContentFile('filename.csv') uploaded.Upload() print('Uploaded file with ID {}'.format(uploaded.get('id'))) Downloading from Colab without Uploading to drive from google.colab import files files.download('your typical h5 file or what ever.h5') For more information about transferring different data formats there are more explanations in the notebook provided with Colab.
H: What are examples for XOR, parity and multiplexer problems in decision tree learning? In the scikit-learn documentation and in the Wikipedia article on decision tree learning there is a mention of "There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems." I can't recall knowing such problem types. What are examples (and possibly related datasets) of such problems? AI: A classic example is an XOR dataset for classification: the class of each point is determined by whether its two features have the same sign, so the positive and negative regions sit in opposite quadrants. Decision trees perform pretty poorly on this kind of dataset. The reason is that decision trees split the space into axis-aligned rectangular regions, so they are not very good with this kind of distribution. If you really want to use trees in that sort of situation, it is interesting to use so-called rotation trees. Rotation is about performing PCA (principal component analysis) on the input features while learning the trees; using it, decision trees can then build non-rectangular regions. Also, here is a playground to test gradient boosting algorithms, including on an XOR dataset. It is really interesting! You can click on "rotate trees" to activate rotation. Gradient boosting playground
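A quick way to reproduce the XOR difficulty described above (synthetic data; exact scores will vary from run to run):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # XOR-like labels: class depends on the quadrant

stump = DecisionTreeClassifier(max_depth=1)    # a single axis-aligned split
print(cross_val_score(stump, X, y, cv=5).mean())   # roughly 0.5, i.e. chance level

deep = DecisionTreeClassifier()                # unlimited depth
print(cross_val_score(deep, X, y, cv=5).mean())    # much higher, but only via many small rectangles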
H: How the "def match(self, example)" method is automatically calling and how the example arguments is working here? trainung_data = [ ['Green', 3, 'Apple'], ['Yellow', 3, 'Apple'], ['Red', 1, 'Grape'], ['Red', 1, 'Grape'], ['Yellow', 3, 'Lemon'], ] header = ["color", "diameter", "label"] def is_numeric(value): return isinstance(value, int) or isinstance(value, float) class Question: def __init__(self, column, value): self.column = column self.value = value def match(self, example): val = example[self.column] if is_numeric(val): return val>=self.value else: return val>=self.value def __repr__(self): condition = '==' if is_numeric(self.value): condition = '>=' return "Is %s %s %s?" % (header[self.column], condition, str(self.value)) Now if i call the class like: Question(0, 'Green') It give me output like this Is color == Green? AI: Typing Question(0,'Green') should not print out anything. However if you type print(Question(0,'Green')) then, the $\_\_$repr$\_\_$ method is called.
H: What is the advantage of using the Dunn index over other metrics for evaluating a clustering algorithm? There are many metrics to evaluate a clustering algorithm, like the Calinski-Harabasz index, Dunn index, Rand index, etc. Is there any advantage of using the Dunn index over other metrics for evaluating a clustering algorithm (k-means in particular)? If yes, what are the advantages and disadvantages? AI: They will often give the same preferences. Do not forget that these are largely heuristics; none of them has much of an advantage over the others, which is why there are so many. As a guideline, look at the definitions and choose the index whose equation is most relevant to your problem. (Yes, you do need to understand what they do. Clustering is hard; it is easy to get meaningless results by just looking at the code and scores and forgetting the underlying math.)
H: Applying dimensionality reduction on OneHotEncoded array I have a really large data set with mixed variables. I have converted categorical variables to numerical using OneHotEncoding and it has resulted in more than a couple of thousand different features, combined that is. Is it possible to apply dimensionality reduction algorithms on OneHotEncoded data which looks like [[1. 0. 1. 0.]...[0. 0. 0. 0.]] or should it be done by merging with the original data set? AI: Following your example, you have different points in a 4-dimensional space. So, yes! you can use any dimensionality reduction technique, from PCA to UMAP. In general, if your data is in a numeric format (and one-hot actually is), all the elements have the same dimensionality, and you don't have undefined values (NAN, inf), you can always use dimensionality reduction.
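A short sketch of running the reduction directly on the encoded matrix (toy data; with thousands of sparse one-hot columns, TruncatedSVD is a common alternative because it accepts sparse input):

import numpy as np
from sklearn.decomposition import PCA

X_onehot = np.array([[1., 0., 1., 0.],
                     [0., 1., 0., 1.],
                     [1., 0., 0., 1.],
                     [0., 0., 0., 0.]])

X_reduced = PCA(n_components=2).fit_transform(X_onehot)
print(X_reduced.shape)   # (4, 2): the one-hot columns are projected onto 2 components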
H: Why do we need to discard one dummy variable? I have learned that, for creating a regression model, we have to take care of categorical variables by converting them into dummy variables. As an example, if, in our data set, there is a variable like location: Location ---------- Californian NY Florida We have to convert them like: 1 0 0 0 1 0 0 0 1 However, it was suggested that we have to discard one dummy variable, no matter how many dummy variables are there. Why do we need to discard one dummy variable? AI: Simply put because one level of your categorical feature (here location) become the reference group during dummy encoding for regression and is redundant. I am quoting form here "A categorical variable of K categories, or levels, usually enters a regression as a sequence of K-1 dummy variables. This amounts to a linear hypothesis on the level means." This is already discussed at this very nice stats.stackexchange answer. I was told there is an advanced course by Yandex in Coursera that covers this subject in more details if you still have doubts, see here. Note you can always audit the course content for free. ;-) Another nice post if you want a thorough explanation with lots of examples with statistical perspective and not being limited to only dummy coding, see this from UCLA (in R) Note that if you using pandas.get_dummies, there is a parameter i.e. drop_first so that whether to get k-1 dummies out of k categorical levels by removing the first level. Please note default = False, meaning that the reference is not dropped and k dummies created out of k categorical levels!
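The pandas behaviour mentioned above, in a couple of lines (the location values mirror the question's example):

import pandas as pd

s = pd.Series(["California", "NY", "Florida"], name="location")
print(pd.get_dummies(s))                    # k = 3 dummy columns, one per level
print(pd.get_dummies(s, drop_first=True))   # k - 1 = 2 columns; the dropped level becomes the reference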
H: Help me understand how word-as-vector representations are constructed Let's suppose I have a big list of words. I want to turn this list into a vector space of dimension $N$ such that each word is a vector in this vector space. But I have no idea how to go about that. Some questions: Is the list enough? For each element of the list, do I need $x$ example sentences also? How does the computer deduce the dimensions of the vector space from the list/corpus? Is there a way to figure out whether the dimensions of the vector space correspond to something in English? AI: I am assuming that you mean a vector representation of words, not to be confused with the vector representation produced by a bag-of-words approach, which represents a document in vector space. Word2vec is an approach in which you train a model to represent words as a function of the provided context. The answers to your questions are: 1) No: you'll need some representation of the context in which each word is used, for example skip-grams. 2) The computer doesn't deduce it: the dimension $N$ is a user-defined parameter. 3) < I do not understand this question >
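To make the need for context concrete, a tiny gensim training sketch (toy corpus; recent gensim versions call the dimensionality parameter vector_size, older ones call it size):

from gensim.models import Word2Vec

sentences = [["the", "doctor", "prescribed", "medicine"],
             ["the", "nurse", "gave", "the", "patient", "medicine"]]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1)
print(model.wv["medicine"].shape)              # (50,): the user-chosen dimension N
print(model.wv.similarity("doctor", "nurse"))  # meaningless on 2 sentences, but shows the API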
H: Orange: Data has no target variable error I am trying to apply Random Forest algorithm on a data set using Orange. The target variable is not set in the data set. However, I know which column is the target variable. How can I specify the target variable in a .csv file using Orange or any other tools? AI: You can specify the target by choosing target in the feature column
H: Guided topic modeling: generating words from topics I need to generate lists of words related to specific topics for a project. I am familiar with clustering methods of topic modeling such as LDA, but I have something else in mind. Are there any methods to generate lists of related words from a root word? For instance, where I could type "medicine" and it would return a list of words or terms related to medicine, such as "doctor", "malpractice", "office", etc.? AI: You could try using word2vec: It orders words in vector space according to context, it allows you to retrieve words with similar context. You can find pretrained models on the web.
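A sketch of that idea with pretrained GloVe vectors loaded through gensim (requires a one-off download; the model name is one of the sets shipped with gensim-data, and the exact neighbours returned are not guaranteed):

import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")
for word, score in wv.most_similar("medicine", topn=5):
    print(f"{word}\t{score:.3f}")
# Typically returns terms such as "medical", "medicines", "drugs", ...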
H: Does Tensorflow support a Decision Tree Classifier?

I am trying to implement a decision tree classifier to classify my data set. I am using Python. It is easy to implement in scikit-learn, but how can I implement this in TensorFlow?

AI: Basically, I guess TensorFlow does not support decision trees. I quote from here:

This is a big oversimplification, but there are essentially two types of machine learning libraries available today, Deep learning (CNN, RNN, fully connected nets, linear models) and Everything else (SVM, GBMs, Random Forests, Naive Bayes, K-NN, etc). The reason for this is that deep learning is much more computationally intensive than other more traditional training methods, and therefore requires intense specialization of the library (e.g., using a GPU and distributed capabilities). If you're using Python and are looking for a package with the greatest breadth of algorithms, try scikit-learn. In reality, if you want to use deep learning and more traditional methods you'll need to use more than one library. There is no "complete" package.

You can see from here that there are other learning algorithms implemented in TensorFlow which are not deep models. You can take a look here to track the algorithms implemented in TensorFlow.
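Since the answer points to scikit-learn for this, here is a minimal decision tree sketch there (the built-in iris data is just a stand-in for your own data set):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))   # accuracy on the held-out split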
H: How to approach model reporting task

I have been tasked to report on an ensemble model that was created in h2o, which includes several model subtypes such as Random Forest, GBM, linear models, etc. The end goal is to predict churn rates for products in a large telco company, but the approach we use could apply to any similar problem.

The models produced in this way contain a few potentially useful performance measures such as variable importance, precision, recall and some others. Each model has roughly 150 input variables. The model scores have been used to group the customers by decile and measure the churn rate of each group.

The present situation is that the scores appear to be too good, which suggests we may have a data leakage problem. For instance, for one of the models the 1st decile captures 84% of the churn, with 99% of the churn captured by the 4th decile.

My task is to understand and report on potential issues with the model performance so we can improve the models and recommend action to the business. What I would like to know is:

What are some basic analyses that I can perform to address the data leakage issue?
How can I leverage the model metadata to better understand model performance?
What other important questions should I know to ask in order to fully address this task?

AI: Remove input data to test for leakage

This is a very general question, so without knowing the types and provenance of the input data it can be hard to answer. But, in general, to check for leakage, you can run the model on subsets of the input variables while removing the other input variables. If you get data from multiple sources, try removing all input variables from a single source, then re-run your models. You may be able to identify the source of the data leakage. Alternatively, if computational power allows, you can brute-force it by running the model with each of the 150 input variables removed, or with all sets of two variables removed, etc.

Use customer-centered time data

Regarding model metadata, again I would investigate data provenance. Are you predicting churn using the complete histories of customers who stopped using the service? What I mean to say is: instead of looking backwards from a fixed calendar date, like today, at all customers who did or did not stop using the service, try looking forward from a fixed point in each customer's own timeline. Use only data from the first year that each customer used the service, and attempt to predict whether each customer will remain with the service for another year. The warning signs of a customer dropping the service may be obvious in the months leading up to the cancellation, but by then your model's prediction may come too late to stop that customer from leaving. Instead, index the time component of each customer's history to zero when they first start using the service, and run your model on this data.
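A minimal sketch of the "remove one source at a time" check (the data, column names and source grouping below are entirely made up; substitute your own feature table and churn label):

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 2000
    # toy stand-in for the real 150-column churn table
    X = pd.DataFrame({
        "monthly_charge": rng.normal(50, 10, n),
        "overdue_amount": rng.exponential(5, n),
        "calls_per_week": rng.poisson(20, n),
        "data_gb": rng.gamma(2, 2, n),
    })
    y = (X["overdue_amount"] + rng.normal(0, 2, n) > 6).astype(int)   # synthetic churn label

    # hypothetical grouping of columns by the source system they came from
    feature_groups = {"billing": ["monthly_charge", "overdue_amount"],
                      "usage": ["calls_per_week", "data_gb"]}

    def cv_auc(frame):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, frame, y, cv=5, scoring="roc_auc").mean()

    baseline = cv_auc(X)
    for name, cols in feature_groups.items():
        drop = baseline - cv_auc(X.drop(columns=cols))
        print(name, round(drop, 3))   # an implausibly large drop points at the suspect source

If removing one source collapses the score from near-perfect to something believable, that source is the first place to look for leaked post-outcome information.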
H: Classification or regression? Which model is more accurate if I only care about being above or under the threshold?

If I have a regression problem that can also be turned into a classification problem by converting a continuous variable into a binary one depending on a threshold, which model would be more accurate if I only care about being above or under the threshold?

Example: I have a column with house prices [20, 30, 40, 70] but I only care if the house is expensive (more than 40) or cheap (less than 40).

AI: Classification is the more direct approach and it will likely give better results. This is because the model's goal is exactly the same as your goal - i.e. predicting whether the price is above or below the threshold - and it will maximize this accuracy. A regression model that is trying to minimize MSE, for example, could give you a model that doesn't accurately put predictions on the correct side of the threshold - which it doesn't know or care about. The model does not have the same goal as you do.

Let's construct a very simple example to see why this might be the case:

    Training        Test
     x     y         x     y
    1.0    20       1.8    ?
    2.0    20       2.4    ?
    3.0   200       2.8    ?
    4.0   200       3.2    ?

Here I used the rule if x < 2.5: y = 20, else y = 200 to generate the data. A maximum-margin classifier (boundary at x = 2.5) gets all of the test examples right. A linear regression model fit on the training data (roughly y = 72x - 70) predicts a value above the threshold of 40 for every test point, so it misclassifies both of the cheap test examples.
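A quick way to check this yourself (a scikit-learn sketch on the toy numbers above; the linear SVM stands in for the "maximum-margin classifier"):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVC

    X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
    y_train = np.array([20, 20, 200, 200])
    X_test = np.array([[1.8], [2.4], [2.8], [3.2]])
    threshold = 40   # "expensive" if the price is above 40, as in the question

    reg = LinearRegression().fit(X_train, y_train)
    print(reg.predict(X_test) > threshold)       # regression then threshold: [True True True True]

    clf = SVC(kernel="linear").fit(X_train, y_train > threshold)
    print(clf.predict(X_test))                   # direct classification: [False False True True]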
H: Class weight degrades Multi Label Classification Performance

I noticed something strange while I was working on a multi-label classification problem with a Keras neural network. My data set consists of imbalanced data with 12 features and 25 possible labels. When I instantiate my model with no class weights I get a precision of 97%, recall of 13%, subset accuracy of 14% and an f1-score of 23% using the micro average. When I apply class weights these scores are significantly reduced, as shown below.

    ('Accuracy', 0.1757093081134893)
    ('Precision:', 0.19632925472747498)
    ('Recall', 0.1637291280148423)
    F1 -score 0.178553363682

I calculate the weights with the code below, which I copied and modified from a previous post:

    import numpy as np
    from sklearn.preprocessing import LabelEncoder
    from sklearn.utils.class_weight import compute_class_weight, compute_sample_weight

    def class_out(s):
        y_classes = s  # .idxmax(1, skipna=False)
        # Instantiate the label encoder
        le = LabelEncoder()
        # Fit the label encoder to our label series
        le.fit(list(y_classes))
        # Create integer-based labels
        y_integers = le.transform(list(y_classes))
        # print(y_integers)
        # Create dict of labels : integer representation
        labels_and_integers = dict(zip(y_classes, y_integers))
        print(labels_and_integers)
        class_weights = compute_class_weight(class_weight='balanced',
                                             classes=np.unique(y_integers), y=y_integers)
        sample_weights = compute_sample_weight(class_weight='balanced', y=y_integers)
        class_weights_dict = dict(zip(le.transform(list(le.classes_)), class_weights))
        class_sweights_dict = dict(zip(le.transform(list(le.classes_)), sample_weights))
        print(class_weights_dict)
        return class_weights_dict

Also see a sample of the model:

    batch_size = 100
    weights = class_out(df_all['tag'])

    model = Sequential()
    model.add(Dense(10, activation="relu", input_shape=(12,)))
    # model.add(Dense(10, activation='relu'))
    # model.add(Dense(8, activation='relu'))
    # model.add(Dropout(0.50))
    model.add(Dense(25, activation="sigmoid"))

    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', precision, mcor, recall, f1])

    model.fit(X_train, Y_train,
              batch_size=batch_size, epochs=15, class_weight=weights,
              verbose=1, validation_data=(test, target_test))

Is there a reason to believe that the model performs best without class weights?

AI: Adding class weights but not changing the way you measure performance will usually degrade overall performance, because the weighting is designed to allow increased loss on the lower-weighted classes. I would recommend also weighting your evaluation measures. This is a bit tricky with accuracy/precision etc., so you could instead calculate a weighted log loss for the class-weighted model and compare it to the weighted log loss of the unweighted model. Basically it comes down to the question: are you happy with a model that performs worse overall but better on your heavily weighted classes?
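A small sketch of that comparison with scikit-learn (the label array and the two probability matrices below are toy stand-ins for your test labels and for the predictions of the unweighted and the class-weighted model):

    import numpy as np
    from sklearn.metrics import log_loss
    from sklearn.utils.class_weight import compute_sample_weight

    y_true = np.array([0, 0, 0, 1, 2])                        # imbalanced toy labels
    proba_unweighted = np.array([[.8, .1, .1], [.7, .2, .1], [.9, .05, .05],
                                 [.6, .3, .1], [.5, .2, .3]])
    proba_weighted = np.array([[.6, .2, .2], [.5, .3, .2], [.7, .2, .1],
                               [.2, .7, .1], [.2, .2, .6]])

    w = compute_sample_weight('balanced', y_true)             # up-weights the rare classes

    print(log_loss(y_true, proba_unweighted, sample_weight=w))
    print(log_loss(y_true, proba_weighted, sample_weight=w))

If the class-weighted model wins under the weighted loss while losing on plain micro-averaged metrics, that is exactly the trade-off described above.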
H: interpret results of nvidia-smi

    Every 1.0s: nvidia-smi                                  Tue Feb 20 12:49:34 2018

    Tue Feb 20 12:49:34 2018
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 390.25                 Driver Version: 390.25                     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Quadro M1000M       Off  | 00000000:01:00.0 Off |                  N/A |
    | N/A   59C    P0    N/A /  N/A |   1895MiB /  2002MiB |     64%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      1166      G   /usr/lib/xorg/Xorg                           239MiB |
    |    0      1864      G   compiz                                        80MiB |
    |    0      6755      C   python                                      1408MiB |
    |    0     25674      G   ...-token=5769296849603E2A1B668201DBB31D6A   149MiB |
    +-----------------------------------------------------------------------------+

I'm a new practitioner of Keras + GPU based deep learning. What is this watch -n 1 nvidia-smi telling me? Is there any value proposition in it, and can it be used for improving the performance of the GPU? In the background a small Keras model is training.

AI: watch is a Linux command and is not part of nvidia-smi itself. With watch -n 1 nvidia-smi, the nvidia-smi output is refreshed every second, giving you a live view of how the GPU parameters change: memory usage, temperature, utilisation, and the performance state (P0 is the highest performance state; performance decreases as you go to P1, P2, ...).

About "can it be used for improving performance of the gpu?": the performance state is shown in the Perf column, and P0 means your GPU is already running at its maximum performance state.

Note that the minimum interval for the watch command is 0.1 seconds, so with watch -n 0.1 nvidia-smi the output is updated every 0.1 seconds. For further information, type man watch or man nvidia-smi in a terminal.
H: Is R2 score a reasonable regression measure on huge datasets?

I'm running a regression model on a pretty large data set and getting a fairly woeful $R^2$ score of ~0.2 (see plot below), despite the plot looking like the model is generally pointing in the right direction.

My question is: when you have over a million data points, how high can you realistically expect the $R^2$ to go on real-world data with a decent amount of noise? What prompts my scepticism of such traditional measures are articles such as this that discuss how the quantity of data can degrade statistical tests. Let me know what you think, and any regression examples using the $R^2$ score as a quality metric.

AI: The coefficient of determination $r^2$ is defined in terms of variance: it is the proportion of variance in the dependent variable that is explained by the independent variable. Because it is built on variances, it is sensitive to skew and outliers, and it is easiest to interpret when the variables are roughly normally distributed. Like other sample statistics, the estimate of $r^2$ improves as the amount of data increases: with very little data a coincidental correlation can appear, but that is not a concern with large amounts of data. So the size of the dataset is not the problem here.

Back to your example. Your data is clearly not normally distributed: it is right-skewed and has large outliers. For this reason $r^2$ is not an advisable summary. Imagine, for example, that in the lower-left corner (where the majority of the data is) you observed a negative trend while in general there is a positive trend. The regression line would be much the same and the $r^2$ would be in the same range. This is known as Simpson's paradox.

In short, if your data is roughly normally distributed you can use $r^2$ for any size of dataset; if it is heavily skewed like yours, $r^2$ is a poor summary regardless of how many points you have.
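To see the effect, here is a small simulated illustration (the log-normal data below is made up to mimic a right-skewed target like the one in the plot): a rank-based measure such as the Spearman correlation can report a strong monotone relationship while $r^2$ typically comes out much lower, dragged down by the heavy tail.

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    y_true = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)               # right-skewed "target"
    y_pred = y_true * rng.lognormal(mean=0.0, sigma=0.5, size=y_true.size)  # noisy but monotone-ish predictions

    rho, _ = spearmanr(y_true, y_pred)
    print("R2:      ", r2_score(y_true, y_pred))
    print("Spearman:", rho)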
H: k-means with one cluster

K-means may give different results, because the initial choice of centroids is random. However, if I were to choose k=1, will the algorithm always provide the same answer, equal to the "barycentre" of my data?

AI: Yes. The centroid will converge to the center of all your data and this will occur in a single iteration. This is due to all the data points belonging to a single centroid, thus it will be centered according to all these instances immediately.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    # Generate data
    class1 = np.random.randn(1000, 2)

    class2 = np.random.randn(1000, 2)
    class2[:,0] = 4+class2[:,0]
    class2[:,1] = 4+class2[:,1]

    class3 = np.random.randn(1000, 2)
    class3[:,0] = -4+class3[:,0]
    class3[:,1] = 4+class3[:,1]

    data = np.append( class1, class2, axis= 0)
    data = np.append( data, class3, axis= 0)
    print(data.shape)

    # Plot the data
    plt.scatter(data[:,0], data[:,1])
    plt.show()

    # Cluster
    kmeans = KMeans(n_clusters=1, random_state=0, verbose = 1).fit(data)

    # Plot clustered results
    plt.scatter(data[:,0], data[:,1], c=kmeans.labels_)
    plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], c = 'r')
    plt.show()

    # Show the cluster centers
    print(kmeans.cluster_centers_)

Output:

    Initialization complete
    Iteration 0, inertia 81470.055
    Iteration 1, inertia 48841.695
    Converged at iteration 1: center shift 0.000000e+00 within tolerance 8.140283e-04
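As a quick check on the claim, you can append this to the snippet above (it reuses kmeans and data from that code): the single centroid should match the mean of the data exactly.

    # the single centroid is just the barycentre (column-wise mean) of the data
    print(np.allclose(kmeans.cluster_centers_[0], data.mean(axis=0)))   # True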
H: Is it a good practice to sum each rate (i.e. crime rate per 100,000 people)?

Consider a dataset from 1990 to 2017 that contains the crime rate per 100,000 people in some cities of Latin America. I want to measure which city is more complex according to this data and other indicators. I'm using the analytic hierarchy process proposed by Saaty. Consider this example:

    city_code  r1990  r1991  r1992  r1993  r1994  r1995
    12345        120     80     91    110    105     99
    23456         10     15     16     12      7     11
    34567         90     91     85     75     77     65

According to my question, I want to sum each city's rates over the years, and then express each year's rate as a fraction of that total. Based on the example above:

    city_code  total
    12345        605
    23456         71
    34567        483

    city_code  mean1990  mean1991  mean1992  mean1993  mean1994  mean1995
    12345          0.20      0.13      0.15      0.18      0.17      0.16
    23456          0.14      0.21      0.23      0.17      0.10      0.15
    34567          0.19      0.19      0.18      0.16      0.16      0.13

So, is this a good practice? I couldn't find an example following this method. Any orientation about this subject will be appreciated.

AI: This can be fairly good if you deal with few years. In general it's not a good practice, since asymptotically it will lead you to have almost the same fraction of crimes for all the considered years. But unless you have data from the last 10,000 years, you can simply ignore the asymptotic behaviour and continue the way you are doing. Otherwise, you can normalize the data to a reference value, say the maximum number of crimes in the series. This is more robust.
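A pandas sketch of the two options (the frame below just reproduces the question's example; the "share of the city's own total" is the approach you proposed, and the max-normalisation is the reference-value alternative from the answer):

    import pandas as pd

    df = pd.DataFrame({"city_code": [12345, 23456, 34567],
                       "r1990": [120, 10, 90], "r1991": [80, 15, 91],
                       "r1992": [91, 16, 85], "r1993": [110, 12, 75],
                       "r1994": [105, 7, 77], "r1995": [99, 11, 65]})
    rates = df.set_index("city_code")

    share_of_total = rates.div(rates.sum(axis=1), axis=0)   # each year as a fraction of the city's total
    norm_to_max = rates / rates.max().max()                 # normalised to a reference value (overall maximum)

    print(share_of_total.round(2))
    print(norm_to_max.round(2))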
H: Get a portion of a long field in Pandas?

I have a Pandas dataframe that has some fields that contain very verbose text. I want to be able to iterate through the DF but only display a limited set of words. I have code similar to:

    for index, row in df.iterrows():
        print(row['A'], row['B'])

How can I make sure that I only print the first 300 characters from 'A'?

AI: Fairly simply, for 'A' only:

    max_chars = 300

    for index, row in df.iterrows():
        print(row['A'][:max_chars], row['B'])
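If you don't actually need the loop, a vectorised variant does the same truncation in one step (this assumes df and max_chars from the snippet above):

    import pandas as pd

    preview = df['A'].str.slice(0, max_chars)        # or equivalently df['A'].str[:max_chars]
    print(pd.concat([preview, df['B']], axis=1))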
H: How do I arrange my data to predict 6 weeks of daily sales

I have a data.table base with many variables to use for forecasting the next 6 weeks of daily sales. The whole database is arranged by date, as you can see here. Note that I only show some of the variables.

    > Data_train[order(Date)]
             Store DayOfWeek       Date Sales Customers Open Promo StateHoliday SchoolHoliday
          1:     1         2 2013-01-01     0         0    0     0            a             1
          2:     2         2 2013-01-01     0         0    0     0            a             1
          3:     3         2 2013-01-01     0         0    0     0            a             1
          4:     4         2 2013-01-01     0         0    0     0            a             1
          5:     5         2 2013-01-01     0         0    0     0            a             1
         ---
    1017205:  1111         5 2015-07-31  5723       422    1     1            0             1
    1017206:  1112         5 2015-07-31  9626       767    1     1            0             1
    1017207:  1113         5 2015-07-31  7289       720    1     1            0             1
    1017208:  1114         5 2015-07-31 27508      3745    1     1            0             1
    1017209:  1115         5 2015-07-31  8680       538    1     1            0             1

My question is about the arrangement of the data according to the target goal. My problem is really about the Date variable. In fact, I suggest this path:

I sum all sales for each date (because I have many types of stores).
I order my database by ascending Date.
I don't need duplicated rows in the Date variable, so I delete them.

Just to show you the new base for the considered variables:

    > Data_train[,SumSaleseachDay:=sum(Sales),by=c('Date')][order(Date)][!duplicated(Date)][,-c('Sales','Customers'),with=FALSE]
         Store DayOfWeek       Date Open Promo StateHoliday SchoolHoliday SumSaleseachDay
      1:     1         2 2013-01-01    0     0            a             1           97235
      2:     1         3 2013-01-02    1     0            0             1         6949829
      3:     1         4 2013-01-03    1     0            0             1         6347820
      4:     1         5 2013-01-04    1     0            0             1         6638954
      5:     1         6 2013-01-05    1     0            0             1         5951593
        ---
    938:     1         1 2015-07-27    1     1            0             1        10707292
    939:     1         2 2015-07-28    1     1            0             1         9115073
    940:     1         3 2015-07-29    1     1            0             1         8499962
    941:     1         4 2015-07-30    1     1            0             1         8798854
    942:     1         5 2015-07-31    1     1            0             1        10109742

ADDED INFORMATION: I have a database of 1,017,209 rows. For each Store, I have its history of Sales between 2013-01-01 and 2015-07-31, and I also have 17 variables to use to build the model.

The steps above only lead to a forecast by day. If I want to forecast for each Store and by day, what should I do? Thank you in advance!

AI: I guess you are trying to predict future sales based on the two years of data that each store gathered. I would suggest you train a model on each individual store's data set, which predicts the daily forecast for that store, and then feed those predictions into a separate model that predicts the overall forecast across all the stores. That way you get individual predictions per store as well as an overall forecast.

Based on your comments: remember that this is a bit time consuming, but I think it is worth the effort for good results. If you prefer, you can skip this and learn from the tutorials where I learned a lot (not promoting). They are not for your specific purpose, but they are close to it: you can learn the whole process of prediction in the following Tutorials.

Here is, step by step, what I explained above (a sketch of the per-store data arrangement follows this list):

-> First, normalize all the data sets from each store.
-> Then choose the algorithm you want to use.
-> Then arrange the store data in an organized way, because you may get a bit confused while training.
-> Initially, just pick one data set, tune the hyperparameters, and understand carefully how those parameters work, because you are going to use them a lot.
-> When you get the point, go from the top to the bottom of the store list and store the trained model for each store under similar names.
-> When you are done with all the stores, you will get predictions for each store as you requested; then collect those predictions and average them, and you will get an approximate overall prediction from all the stores.
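Here is the data-arrangement sketch referred to above. It is in Python/pandas purely for illustration (the question uses R data.table, where the same grouping can be done with the by argument); the file name is hypothetical:

    import pandas as pd

    df = pd.read_csv("train.csv", parse_dates=["Date"])      # hypothetical file with the columns shown above

    # daily sales per store, instead of one summed series for all stores
    daily_per_store = (df.groupby(["Store", "Date"], as_index=False)["Sales"].sum()
                         .sort_values(["Store", "Date"]))

    # one model per store: iterate over the per-store histories and fit/forecast on each
    models = {}
    for store_id, hist in daily_per_store.groupby("Store"):
        models[store_id] = None   # fit your chosen forecasting model on `hist` here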
H: How to obtain with an RNN a version of a temporal XOR function using keras/tensorflow?

I'm trying to implement a recurrent neural network model to solve a temporal version of the XOR problem, but I am still not able to do that. Any hints?

AI: I think following this link will help you; I have gone through those tutorials previously. Also take a look at this tutorial.
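In case a concrete starting point helps, here is a minimal Keras sketch for one common formulation of temporal XOR, where the target at step t is the XOR of the inputs at steps t and t-1 (that formulation, the network size and the training settings are my assumptions, not something taken from the linked tutorials):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import SimpleRNN, Dense, TimeDistributed

    # toy data: random bit sequences, target[t] = x[t] XOR x[t-1] (target at t=0 set to 0)
    n, T = 1000, 20
    X = np.random.randint(0, 2, size=(n, T, 1)).astype("float32")
    y = np.zeros_like(X)
    y[:, 1:, 0] = np.logical_xor(X[:, 1:, 0], X[:, :-1, 0]).astype("float32")

    model = Sequential()
    model.add(SimpleRNN(8, return_sequences=True, input_shape=(T, 1)))
    model.add(TimeDistributed(Dense(1, activation="sigmoid")))
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    model.fit(X, y, epochs=20, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))   # the recurrent state lets the net remember x[t-1]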
H: Is there any RNN method used for object detection?

After reading the state of the art about object detection using CNNs (R-CNN, Faster R-CNN, YOLO, SSD...), I was wondering whether there is a method that uses RNNs, or that combines CNNs and RNNs, for object detection. Thank you.

AI: Yes, there have been many attempts, but perhaps the most notable one is the approach described in the paper by Andrej Karpathy and Li Fei-Fei, where they connect a CNN and an RNN in series (CNN over image regions + bidirectional RNN + multimodal RNN) and use this for labeling a scene with a whole sentence. This one is more than just object detection, though, as it leverages a data set of scenes and their descriptions to generate natural-language descriptions of new, unseen images.

Another example is Ming Liang and Xiaolin Hu's approach, where they mix a CNN with an RNN and use this architecture for better object detection. As Ming and Xiaolin explain in their paper (linked above), the RNN is used to improve the CNN:

A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer.