H: NameError 'np' is not defined after importing np_utils I am running a MNIST example in a Jupyter notebook running in an Anaconda virtual environment. I have tried to run the code below (not yet finished, I was testing it) when it comes up with an error (can be seen below the code). (X_train, y_train), (X_test, y_test) = mnist.load_data() #print("X_train shape", X_train.shape) #print ("y_train shape", y_train.shape) #print("X_test shape", X_test.shape) #print ("y_test shape", y_test.shape) from keras.models import Sequential #imports the sequential model from keras.layers import Dense, Dropout, Conv2D, MaxPool2D from keras.utils import np_utils #flattens the images from the 28x28 pixels to 1D 787 pixels X_train = X_train.reshape (60000, 784) #flattening the image into 28x28 pixels, so into an array of 784. X_test = X_test.reshape (10000, 784) X_train = X_train.astype('float32') #using a 32-bit precision when training the neural network. This is most commonly used. X_test = X_test.astype('float32') X_train /=255 #Used for normalisation. Each image has a 'degree' of darkness within the range of 0-255 so you need to reduce that range to 0-1 for your Neural Network X_test /=255 #one-hot encoding using keras' numpy-related utilities n_classes=10 print ("Shape before one-hot encoding: ", y_train.shape) Y_train = np.utils.to_categorical(y_train, n_classes) Y_test = np.utils.to_categorical(y_test, n_classes) print ("Shape after one-hot encoding:", Y_train.shape) Error: --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-4-0063f366d5bd> in <module> 22 n_classes=10 23 print ("Shape before one-hot encoding: ", y_train.shape) ---> 24 Y_train = np.utils.to_categorical(y_train, n_classes) 25 Y_test = np.utils.to_categorical(y_test, n_classes) 26 print ("Shape after one-hot encoding:", Y_train.shape) NameError: name 'np' is not defined I guess I need to define np somewhere, but the practise code on the website I am using doesn't actually define np anywhere. Any suggestions? Many thanks AI: Y_train = np_utils.to_categorical(y_train, n_classes) Y_test = np_utils.to_categorical(y_test, n_classes) please update code line as np_utils not np.utils
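A minimal corrected sketch of the one-hot encoding step, reusing the variable names from the question; the commented alternative assumes a recent TensorFlow/Keras version in which np_utils is no longer exposed.

from keras.utils import np_utils

n_classes = 10
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)

# In recent TensorFlow/Keras versions np_utils has been removed;
# the same helper is available directly:
# from tensorflow.keras.utils import to_categorical
# Y_train = to_categorical(y_train, n_classes)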
H: Why does an imbalanced data set badly effect distance measures like Mahalanobis? I'm relatively new to data science and I am struggling to understand why the Mahalanobis distance (or any other distance measure) applied to an imbalanced data-set becomes inaccurate. I have a data set that consists of three classes A, B and C. There are 100 observations for class A, 60 for class B, and 20 for class C. When I calculate the Mahalanobis distance between each class, the results do not appear consistent with my PCA (principal component analysis) plot. In the PCA plot, class C is the most separate class; however, the Mahalanobis distance does not reflect this. For balanced data sets, i.e., where classes A, B and C have the same number of observations, this has never been an issue. The Mahalanobis distance has always quite accurately reflected the results of PCA for balanced data. I have read some similar questions and answers on here about why imbalanced data must be handled carefully for classification algorithms, but is this the same for distance measures? From what I can tell the Mahalanobis distance doesn't explicitly depend on sample size. Therefore, I ask why does this measure lose reliability for imbalanced data? AI: Mahalanobis distance is defined as a distance between a point and a distribution. The key is how you define the distribution and I would say the imbalance of classses is not the problem in itself here. What could be a problem is mahalanobis is sensitive to initialization and the sample sizes of your classes are not that big. You could check the covariance matrix if it is reasonable for you task. About more general question -- the distance between two points obviously does not depend on size of the classes. If we talk about distance between a point and a set, then it could affect the result: i.e. you defined the distance as the distance to the closest point in set, then obviously the more points, the bigger chance to get the closer point.
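As a concrete way to inspect this, here is a hedged sketch (illustrative variable names X_a and X_c for the class-A and class-C observations) that fits a per-class covariance and computes a Mahalanobis distance with SciPy; with only 20 rows for class C the covariance estimate, and hence the distance, can be unstable.

import numpy as np
from scipy.spatial.distance import mahalanobis

def distance_to_class(point, X_class):
    mu = X_class.mean(axis=0)
    cov = np.cov(X_class, rowvar=False)    # with only ~20 observations this estimate can be noisy
    inv_cov = np.linalg.pinv(cov)          # pseudo-inverse guards against a near-singular covariance
    return mahalanobis(point, mu, inv_cov)

# e.g. distance from the centroid of class A to the distribution of class C
# d = distance_to_class(X_a.mean(axis=0), X_c)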
H: There could be a problem with the linear layer after the attention inside a transformer? My question regards this image: It seems that after the multi head attention there is a linear layer as they mention also from here: the linearity is given by the weights W^{o}. my quesion is: for the decoder, doesn't this linear layer mess up with the masking of the attention? I mean, if each token from the attention layer should not depend from the next tokens then these weights seem to mess up the whole thing or they don't? Indeed these linear weights will learn the dependencies among all the tokens and during inference there could be a problem, maybe(?). Thanks for any clarification. AI: No, this is not a problem. If we zoom into the scaled dot product attention blocks, which happen before the projection with $W^O$ we see this: There, you can see how the masking of the current and future positions happens inside the scaled dot product attention, which happens before the multiplication by $W^O$. Therefore, the values learned for $W^O$ are trained only with the information from the previous tokens in the decoder.
H: unable to predict by LinearRegression Should I add csv as text in SO question? There's lot more data. %matplotlib inline plt.xlabel('Year') plt.ylabel('Income($US)') plt.scatter(df.year,df.income,color='red',marker='+') reg = linear_model.LinearRegression() reg.fit(df[['year']],df.income) Output : LinearRegression() reg.predict('10000') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in ----> 1 reg.predict('10000') ~/.local/lib/python3.9/site-packages/sklearn/linear_model/_base.py in predict(self, X) 236 Returns predicted values. 237 """ --> 238 return self._decision_function(X) 239 240 _preprocess_data = staticmethod(_preprocess_data) ~/.local/lib/python3.9/site-packages/sklearn/linear_model/base.py in decision_function(self, >X) 218 check_is_fitted(self) 219 --> 220 X = check_array(X, accept_sparse=['csr', 'csc', 'coo']) 221 return safe_sparse_dot(X, self.coef.T, 222 dense_output=True) + self.intercept ~/.local/lib/python3.9/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 61 extra_args = len(args) - len(all_args) 62 if extra_args <= 0: ---> 63 return f(*args, **kwargs) 64 65 # extra_args > 0 ~/.local/lib/python3.9/site-packages/sklearn/utils/validation.py in check_array(array, >accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator) 628 # If input is scalar raise error 629 if array.ndim == 0: --> 630 raise ValueError( 631 "Expected 2D array, got scalar array instead:\narray={}.\n" 632 "Reshape your data either using array.reshape(-1, 1) if " ValueError: Expected 2D array, got scalar array instead: array=10000. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or >array.reshape(1, -1) if it contains a single sample. I am not sure why I am getting above error. I have income lists of some years. So, I graph it. When I was trying to predict a data from linearRegression I got the error. I am new to ML(Machine Learning) How to solve it? What am I missing? AI: The input to the predict method should be a 2d array of shape (n_samples, n_features), which in your case with one feature and one sample would be (1, 1). Therefore add a second dimension for the number of samples: reg.predict([['10000']]).
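A minimal sketch of the fix, assuming the fitted reg from the question; the value is passed as a numeric 2D array of shape (1, 1) rather than as a bare string.

import numpy as np

year = np.array([[10000]])      # one sample, one feature -> shape (1, 1)
print(reg.predict(year))

# or keep the feature name to avoid feature-name warnings in newer scikit-learn:
# import pandas as pd
# print(reg.predict(pd.DataFrame({'year': [10000]})))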
H: Is CRF suitable for multi-words Named Entity Recognition? I've a problem where I should create a custom NER by using sklearn CRF. In the official tutorial, they are using CoNLL2002 corpus is available in NLTK where the entities are represented with a single word but in my problem, an entity can be formed with multiple words ex: United States of America, Cinema at Miami, etc. Can CRF handle this? AI: Absolutely. If you look at the training tutorial, it implies that this isn't an issue at all. When using multi-word entities, you typically need to use a IOB or BILUO tagging schemes, which helps your model train better. But from a mathematical perspective, there aren't any restrictions for a CRF, as CRF models the likelihood of particular sequences/transitions. Often, people set restrictions for particular transitions if you know in advance that they aren't possible. But by default, all transitions are allowed. In sklearn-crf, allowing all transitions is done by using the all_possible_transitions=True argument.
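A small illustration of what the training labels look like with a BIO scheme for a multi-word entity; the tag names and the LOC type are the usual convention, not something required by sklearn-crfsuite, and features_for is a hypothetical feature-extraction helper.

# "Cinema at Miami was crowded" with one multi-word LOC entity
tokens = ["Cinema", "at", "Miami", "was", "crowded"]
labels = ["B-LOC", "I-LOC", "I-LOC", "O", "O"]

# the CRF is then trained on per-token feature dicts and these label sequences
# import sklearn_crfsuite
# crf = sklearn_crfsuite.CRF(algorithm="lbfgs", all_possible_transitions=True)
# crf.fit([features_for(tokens)], [labels])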
H: find value which occurred more times in a group from data frame column R I have a data frame with latitude and longitude of a particular place. one place can have multiple lat, longs and those lat, long values can be same or different. I need to find the correct lat, long based on no.of occurances of lat, long for particular place. example input data frame df: place lat long hsy 7.343 32.849 hsy 7.343 32.122 hsy 7.567 31.567 hsy 6.934 32.122 bls 2.67 6.2 bls 3.345 5.9 bls 2.987 6.321 bls 2.987 4.56 bls 1.876 6.321 bls 2.987 6.321 expected output data frame place lat long hsy 7.343 32.122 bls 2.987 6.321 expected output for lat is which ever value is occurred more times among its values for particular place and the output for 'long' column value also which ever value is occurred more times among its values for particular place. tried below code using across, but across is not working. getting Error: 'across' is not an exported object from 'namespace:dplyr' or could not find function "across" error. any alternative for across function? df %>% group_by(place) %>% summarise(across(c(lat, long),~names(sort(table(.x), decreasing = TRUE))[1])) AI: I think below code may help you. df <-data.frame(place=c('hsy','hsy','hsy','hsy','bls','bls','bls','bls','bls','bls'), lat = c(7.343,7.343,7.567,6.934,2.67,3.345,2.987,2.987,1.876,2.987), long=c(32.849,32.122,31.567,32.122,6.2,5.9,6.321,4.56,6.321,6.321) ) library(dplyr) Mode function which will fetch number which has highest frequency mode_fun<-function(x){ tbl=table(x) tbl=sort(tbl,decreasing = T) return(as.numeric(names(tbl[1]))) } df1 = df %>% group_by(place) %>% summarise(lat1=mode_fun(lat),long1=mode_fun(long))
H: Overall AUC higher than all "stratified" AUCs For one of my binary classification models, I have observed this (Simpson's Rule-esque) paradox. The AUC on the test set as a whole is 0.8. Gender is one of the model's features. So I decided to produce a "bias" report, for which I calculated AUCs for each of the Male and Female subgroups. But I noticed that each of these AUCs is around 0.7. How is this possible, given that the overall test AUC is 0.8? (In my dataset, every data point belongs to either the Male or the Female subgroup.) I don't expect the overall AUC to simply be a (weighted) linear combination of AUCs for the individual strata. I'm hoping to get both a technical/mathematical answer and a high-level explanation. Please let me know if any further information is needed (if you think I should plot the overall, Male, and Female ROC curves, for instance). Thank you! AI: AUC can be defined as $P(X_1 > X_0)$ where $X_1$ is the score of a randomly chosen positive instance and $X_0$ is the score of a randomly chosen negative instance. Like in Simpson's "paradox", what you see could happen because the group has a relatively large effect on your target. For example, one group could be mostly positive, another could be mostly negative. The classifier might not work at all within each of the groups, but if it is merely able to predict the group we are in (or as in your case it is simply one of the inputs), it can have a high AUC.
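A small synthetic simulation (the numbers are made up) of the effect described above: the score mostly encodes group membership, so the overall AUC is high while each within-group AUC is close to chance.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10000
group = rng.integers(0, 2, n)                              # two subgroups
y = rng.binomial(1, np.where(group == 1, 0.8, 0.2))        # positives concentrated in group 1
score = group + rng.normal(0, 0.3, n)                      # score ~ group, little within-group signal

print("overall AUC:", roc_auc_score(y, score))
for g in (0, 1):
    m = group == g
    print("group", g, "AUC:", roc_auc_score(y[m], score[m]))   # roughly 0.5 each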
H: Plot overlapping time series I'm trying to plot my test set and test set predictions to check the differences and see how my autoencoder reconstructed the data, but since I have a test set of shape 30x10 I have a huge visualization problem: How can I solve it? This is the code; I'm just showing the first row (X_test[0]), but I have 29 more and I don't know how to show all of them properly. fig = plt.figure() plt.plot(X_test[0].T, label='y_true') plt.plot(x_test_pred[0].T, label='yhat_conv') _ = plt.legend() plt.show() plt.close() AI: It's not going to be pretty, but you can just for-loop your code: fig, ax = plt.subplots() for i in range(30): ax.plot(X_test[i].T, label='y_true_' + str(i)) ax.plot(x_test_pred[i].T, label='yhat_conv_' + str(i)) plt.show() Because you have so many lines, it would be best to remove the legend from the plot as it will be too much information to interpret. You could use additional subplots, e.g. fig, ax = plt.subplots(5, 6), and plot one row on each subplot, e.g. ax.flatten()[1].plot(first_plot) etc. It might be more suitable to generate some summary statistics or differences of the lines (as suggested in the comments). One way would also be to plot the error bars/distributions of the lines as shaded regions and just the line of the average for particular groups, similar to the plot below:
H: Finding an appropriate binary classification algorithm for time series data intervals Maybe someone here has experience in this matter and can point me in the right direction. I want to classify parts of an interval of numerical movement data as either resting or not resting. I have training data of what resting intervals look like. And am looking for the right algorithm to tackle this problem. I don't need the code, just a friendly push in the right direction. Here is a rundown of the expected process. I have an array of values that represent measurement over 5 minutes. testData = [0, 4, 3, 5, 2, 4, 3, 4, 6, 11, 9, 10, 3, 15, 21, 5] I have training data trainingDataResting = [[1, 2, 1, 6, 4, 2], [3, 2, 2, 0, 4, 1, 5, 2], [3, 6, 2, 1, 0, 4, 5, 2]] trainingDataActive = [[10, 4, 5, 9, 19, 13], [12, 8, 20, 9, 14], [13, 22, 19, 21, 11, 7, 9]] A resting interval has to be at least 6 measurements long, and the longest possible resting interval is of interest. Is there a way to classify portion of the testData as resting, based on the trainingData? Something like: testData = [0, 4, 3, 5, 2, 4, 3, 4, 6, 11, 9, 10, 3, 15, 21, 5] '-------------------------' '----------------------' Resting Not Resting testResting = [0, 4, 3, 5, 2, 4, 3, 4, 6] testActive = [11, 9, 10, 3, 15, 21, 5] I get that there will be many solutions of possible resting intervals, but I am particularly interested in the longest and most resting one. Of course I am working with a lot more training data than I provided here. I was working on a decision tree -> Decision Tree, before I was informed that there is prelabeled data that I could use for training. I was thinking about long short term memory networks, do you know whether this applies here? Thank you for the help. Best, Jo AI: You could do logistic regression combined with an additional moving average/ clustering step. Combine your data into training rest and active data into a single array X, and have an additional array y which would represent the labels e.g. 0 - active / 1 - resting for each row in your training dataset. from sklearn.linear_model import LogisticRegression clf = LogisticRegression(random_state=0).fit(X, y) Then predict for each step whether it is resting or active. The predictions of the test set would look something like: [0,1,1,1,0,1,1,0,0,0,0,1,0,0,0] The additional step would be to find the groups. You could do this with a moving window like "if 5 active states in a window of 7, label it as active". You could also solve that final step with clustering, e.g. "if 0 is surrounded by 1s it is 0", or find local clusters of binary values.
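A rough sketch of the moving-window step described above, assuming per-measurement 0/1 predictions as in the example (1 = resting here); the window size and threshold are arbitrary choices to tune.

import numpy as np

preds = np.array([0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0])

def smooth(preds, window=7, min_hits=5):
    # mark a step as resting if enough of its surrounding window is resting
    out = np.zeros_like(preds)
    half = window // 2
    for i in range(len(preds)):
        lo, hi = max(0, i - half), min(len(preds), i + half + 1)
        out[i] = int(preds[lo:hi].sum() >= min(min_hits, hi - lo))
    return out

smoothed = smooth(preds)
# the longest run of 1s in `smoothed` (if it spans at least 6 measurements) is the resting interval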
H: What is the upscaling factor in super resolution with deep learning? I have been reading papers on single image super resolution (SISR) and I frequently encounter X3 upscaling factor, X4 upscaling factors. Example: SRGAN mentioning x4 upscaling factor It would be wonderful if anyone could explain it in simpler words. Thanks AI: The upscaling factor simply denotes how much the original will be upscaled. In the case of SRGAN, which uses 4x upscaling, this simply means that the output image will be four times as large as the input image. An input image of size 720 by 1080 pixels will be of size 2880 by 4320 pixels after being upscaled using SRGAN.
H: What will the input_shape of tf.keras.layers.Conv3D be for these inputs I have many videos, and each video is made up of 37 images (there are 37 frames in the whole video). And the dimension of each image is (100, 100, 3). So the shape of my dataset is [num_of_videos, 37, 100, 100, 3] If I want to pass these videos through tf.keras.layers.Conv3D(), what will the input shape for the Conv3D be: Conv3D(32, 3, padding='same', activation='relu', input_shape=[What will the input shape here be]) AI: From the Keras documentation: Input shape: 5+D tensor with shape: batch_shape + (channels, conv_dim1, conv_dim2, conv_dim3) if data_format='channels_first', or 5+D tensor with shape: batch_shape + (conv_dim1, conv_dim2, conv_dim3, channels) if data_format='channels_last'. Assuming data_format='channels_last', and since num_of_videos only determines the batch size (which is not required when defining input_shape), the input shape should match the per-video layout of your data, i.e. input_shape=[37, 100, 100, 3] (frames, height, width, channels).
H: How does class_weight work in Decision Tree? I am interested in Cost-Sensitive learning. And I am trying to understand how class_weight in DecisionTree works in terms of math. I read a lot of articles that there are a lot of algorithms Cost Sensitive Decision Tree. So what exactly does class_weight do in Decision Tree? AI: It is used, for example, when classes are imbalanced, so different weights are assigned to different classes, instead of equal ones. Another case is when some class is more significant than others, so loss wrt this class counts more. The class_weight parameter (eg for decision tress) is used by giving different weight to different class samples (doc: https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_class_weight.html, src: https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/class_weight.py#L11), which is then used to position the sample accordingly. Note that class_weight can be used in different ways depending on algorithm and model used. What makes sense for each algorithm. Mathematicaly, usually, is a simple multiplication of the sample value in some loss function Related: How does the class_weight parameter in scikit-learn work?. For CARTs with Gini Criterion: The original gini impurity is defined as: $${\displaystyle \operatorname {I} _{G}(p)=\sum _{i=1}^{J}\left(p_{i}\sum _{k\neq i}p_{k}\right)=\sum _{i=1}^{J}p_{i}(1-p_{i})=1-\sum _{i=1}^{J}{p_{i}}^{2}}$$ If classes are assigned weights $w_i$ then the weighted gini impurity is computed as follows: the weight of all the observations in a potential child node, $c$, is $$t_c = \sum_i w_i * n_i$$ where $n_i$ is the number of observations of class $i$ in $c$, and $w_i$ is the weight assigned to class $i$. The impurity of child node $c$ is then $$i_c = 1 - \sum_i (\frac{w_i * n_i}{t_c})^2$$ where $n_i$ is again the number of observations of class $i$ in the node, $w_i$ is the weight assigned to the class and $t_c$ is as calculated previously. The impurity of the entire potential split is then $$\sum_c \frac{t_c}{t_p} * i_c$$ where $t_c$ and $i_c$ are as calculated previously, and $t_p$ is the total weight of all observations in the parent node that is being split. Reference: How is the Weighted Gini Criterion defined?
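A short usage sketch in scikit-learn (X_train and y_train are assumed to exist, and the weights are arbitrary examples); compute_class_weight can also derive "balanced" weights from the label frequencies.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils.class_weight import compute_class_weight

# explicit weights: samples of class 1 count 5x more in the weighted impurity
clf = DecisionTreeClassifier(class_weight={0: 1.0, 1: 5.0}).fit(X_train, y_train)

# or weights inversely proportional to class frequencies
weights = compute_class_weight(class_weight="balanced", classes=np.unique(y_train), y=y_train)
clf_balanced = DecisionTreeClassifier(class_weight="balanced").fit(X_train, y_train)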
H: Is there an appropriate use of adjusting class weights for a balanced dataset? I ask this because I am currently working with a CNN model built for diagnosis of pneumonia. Originally, I followed a notebook on kaggle to build the model and thereby learn what each bit of code is for, etc. The dataset used was rather imbalanced, with a far greater number of pneumonia cases than normal (healthy) ones. Hence the model.fit class_weight parameter was set to {0:6.0, 1:0.5}. (0 being normal, 1 being pneumonia) Since then, whilst working on the model and making adjustments, I acquired a number of new data to add to the model such that now the dataset is fairly balanced. In fact, I ensure that the data is loaded into the model so that it is exactly balanced, the dataframes used are coded to ensure an equal number of pneumonia and normal cases in the training testing and validation dataframes. So, accordingly, I am now trying to remove the use of the class_weights parameter as (as far as I understand it) it is not necessary and may impart some bias in the results. However, in doing so, the model no longer seems to improve in accuracy. It essentially stalls on 0.5 indefinitely. Whereas, with the weights applied, I achieve 0.90+ accuracy. Simply put, is there some reason for this? The code is quite long, but I'm happy to post it if it is deemed required, but I feel like this may be due to my lack of understanding than error in code (as it has otherwise been working fine and as expected). Thanks in advance. EDIT: For the sake of clarity and understanding, I performed a grid search over possible values for applied weight values. It confirmed an appropriate choice as being 0:~4.0, 1:0.4, but also suggests 0:1, 1:5.0. EDIT 2: For further clarity, a link to a github containing the model code and output files etc. https://github.com/GeeKandaa/ML-Code AI: Class Weight can be important even for balanced data if, for example, some class is more significant than others, so loss wrt this class should count more. One can even think of class weights as unique extra hyper-parameters with their own effect on the outcome (either positive or negative) and treat them as such without interpretation. Related: How does class_weight work in Decision Tree?
H: Visualizing a Perceptron I wanted to visualize how a perceptron learns, so I made a class that performs gradient descent. To show the decision, I plot a plane showing where positive examples and negative examples are, according to the perception. I also plot the decision line. Right now, this is the output: As you can see, the line appears to be incorrect, but the plane appears to be correct. A decision line of a perception, as I understand it, can be represented like this: $$y=\frac{-w_0}{w_1}x -\frac{bias}{w_1}$$ Now, if in the code below I change the return in get_decision_line from return slope * xs + intercept to return slope * xs + 2*intercept, this is what I get: However, that's clearly not the correct equation. I can't see what I'm doing incorrectly. What is odd to me is that anytime I check the ratio of the bias to $w_1$, I don't get the correct intercept, yet the plane is correct. Can anyone see what I am doing incorrectly? import numpy as np import matplotlib.pyplot as plt from matplotlib.lines import Line2D x = np.array([0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4]) y = np.array([0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3]) targets = np.array([-1,-1,-1,-1,-1,-1,-1,1,-1,-1,1,1,-1,1,1,1,1,1,1,1]) plt.plot(x[targets>0],y[targets>0],"o",x[targets<0],y[targets<0],"x"); class Perceptron(): activation_functions = { 'sign': np.sign } def __init__(self, eta=0.25, activation='sign'): self.bias = np.random.uniform(-1, 1, 1).item() self.weights = np.random.uniform(-1, 1, 2) self.eta = eta self.activation = self.activation_functions[activation] def predict(self, inputs): """ activation(bias + w dot x) """ return self.activation((self.bias + self.weights * inputs).sum(axis=1)) def error(self, inputs, targets): """compute the error according to the loss function """ return np.count_nonzero(targets - self.predict(inputs)) def GD(self, inputs, targets): """ perform gradient descent to learn the weights and bias """ error_t = [self.error(inputs, targets)] weights_t = [self.weights.copy()] bias_t = [self.bias] while self.error(inputs, targets) > 0: error = targets - self.predict(inputs) self.weights += self.eta * np.dot(error, inputs) self.bias += (self.eta * error).sum() error_t.append(self.error(inputs, targets)) weights_t.append(self.weights.copy()) bias_t.append(self.bias) return error_t, weights_t, bias_t #------------- # Plotting #------------- def confusion(self, inputs, targets): output = self.predict(inputs) tp, tn, fp, fn = [], [], [], [] for point, t, o in zip(inputs, targets, output): if t == o: # correct classification if t == 1: # true positive tp.append(point) else: # true negative tn.append(point) else: # incorrect classification if o == 1: # false positive fp.append(point) else: # false negative fn.append(point) return tp, tn, fp, fn def get_decision_plane(self, xs, ys): xx, yy = np.meshgrid(xs, ys) n=xx.size mesh_input = np.concatenate((xx.reshape(n,1),yy.reshape(n,1)),1) output = self.predict(mesh_input) return output.reshape(xs.shape[0], ys.shape[0]) def get_decision_line(self, xs): slope = -self.weights[0] / self.weights[1] intercept = -self.bias / self.weights[1] return slope * xs + intercept def plot_decision_boundary(self, inputs, targets, ax=None, legend = False): """ plot the decision boundary of the perceptron and show the classification of the inputs additionally, the targets are classified as true/false positive and true/false negatives """ xmin, xmax = (-6, 6) ymin, ymax = (-6, 6) xs = np.arange(xmin, xmax, 0.1) ys = np.arange(ymin, ymax, 0.1) plane = 
self.get_decision_plane(xs, ys) if ax is None: fig, ax = plt.subplots() ax.clear() ax.set_ylim([xmin, xmax]) ax.set_xlim([ymin, ymax]) ax.grid() ax.set_frame_on(False) ax.xaxis.set_ticks_position('bottom') ax.imshow(plane, extent=[xmin, xmax, ymin, ymax], alpha=.1, origin='lower', cmap='RdYlGn') ax.plot(xs, self.get_decision_line(xs), color='green') tp, tn, fp, fn = self.confusion(inputs, targets) tp_col = 'green' tn_col = 'red' fp_col = 'fuchsia' fn_col = 'lightseagreen' for lst, marker, col in zip([tp, tn, fp, fn], ['o', 'x', 'o', 'x'], [tp_col, tn_col, fp_col, fn_col]): for x, y in lst: ax.plot(x, y, marker, color=col) if legend: legend_label_colors = {'true positive' : (tp_col, 'o'), 'false positive' : (fp_col, 'o'), 'true negative' : (tn_col, 'x'), 'false negative': (fn_col, 'x')} lines = [] labels = [] for tp, (color, marker) in legend_label_colors.items(): lines.append(Line2D([0], [0], color=color, linewidth=0, marker=marker)) labels.append(tp) ax.legend(lines, labels, bbox_to_anchor=(1.05, 1), loc='upper left') inputs = np.array(list(zip(x, y))) perceptron = Perceptron(eta = 0.25, activation='sign') error_t, weights_t, bias_t = perceptron.GD(inputs, targets) w0 = perceptron.weights[0] w1 = perceptron.weights[1] t = perceptron.bias print(perceptron.weights, perceptron.bias) print(f'{-w0 / w1} x + {-t / w1}') fig, axes = plt.subplots(1, 2, figsize=(15, 5)) w0s, w1s = map(list, zip(*weights_t)) axes[0].plot(error_t, c='red') axes[0].set_frame_on(False) axes[0].grid() axes[0].set_title('Number of errors over time') perceptron.plot_decision_boundary(inputs, targets, ax = axes[1], legend=True) plt.show() AI: I think the problem is in your predict method: (self.bias + self.weights * inputs).sum(axis=1) adds the bias to both of the weight*input values before summing (the arrays are broadcast to the same shape). Hence why the 2*intercept makes things match up.
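A minimal corrected version of the predict method implied by this answer, so the bias is added once per sample after the dot product instead of being broadcast onto every weight-input product; the rest of the class can stay unchanged.

def predict(self, inputs):
    """activation(X @ w + bias), with the bias added once per sample"""
    return self.activation(np.dot(inputs, self.weights) + self.bias)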
H: keras mnist dataset I am learning Neural Network. I was running following source code import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt %matplotlib inline import numpy as np (X_train , y_train) , (X_test , y_test) = keras.datasets.mnist.load_data() I was searching about keras mnist dataset. I found this. From that webpage I found This is a dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. More info can be found at the MNIST homepage. But, I was trying lot of datas. Finally, I thought to write plt.matshow(X_train[10010]) It outputs : As mnist digits classification were they had test set of 10,000 images. So, over than 10000 should return error. While it is showing more plots. Why? AI: You are actually plotting the train set with X_train which has 60k samples. Try accessing X_test[10010] and it will indeed raise an IndexError. keras.datasets.mnist.load_data() returns numpy.array objects, so you can check the shape of the arrays >>> print("Train:", X_train.shape) Train: (60000, 28, 28) >>> print("Test:", X_test.shape) Test: (10000, 28, 28)
H: What is feature channels mentioned in U-Net? I was reading the U-Net paper for medical image segmentation. I had a doubt in the architecture. The authors mention that the max pooling layers in contraction path double the number feature channels while Downsampling. Can anyone explain what are feature channels and how are they doubled while max pooling? In simple language please. AI: The feature channels simply mean the number of channels at a given point in the network. In the U-Net architecture, the number of channels doubles after the max pooling layers, see for example the first max pooling layer. The input is of size 568 by 568 pixels with 64 (feature) channels, after max pooling this becomes an array of size 284 by 284 pixels with the same 64 channels. The next convolutional layer then keeps the same size, so 284 by 284 pixels, but doubles the number of channels to 128.
H: Understanding time series anomaly detection using Autoencoder I'm studying how to detect anomalies in the time series using an Autoeconder. In particular, I'm following the guide posted in the Keras website, but I don't understand why they are creating and how can I adapt it to my dataset. In their guide they load the dataset and create a sequence: TIME_STEPS = 288 # Generated training sequences for use in the model. def create_sequences(values, time_steps=TIME_STEPS): output = [] for i in range(len(values) - time_steps): output.append(values[i : (i + time_steps)]) return np.stack(output) x_train = create_sequences(df_training_value.values) print("Training input shape: ", x_train.shape) The reason why they are using 288 as TIME_STEPS is because they have a value for every 5 mins for 14 days. My questions are: Does this method split the data or is he just creating a 3D variable in the correct format for the Convolutional Autoencoder? I have a dataset where there are stored the measurements of 30 devices. Each device has about 4000 values and it is structured as well: device01 0.02;0.13;1.15;0.10;8.30;........;4.20 device02 0.06;0.13;1.40;0.03;7.40;........;6.30 ........ device30 0.03;0.24;1.10;0.43;4.40;........;2.30 Since I do not have a timestamp reference in my dataset, how can I define the TIME_STEPS variable? AI: The length of TIME_STEPS in one sequence is more of a Hyperparameter. That you should try to optimize. Does this method split the data or is he just creating a 3D variable in the correct format for the Convolutional Autoencoder? It simply create dataset for a 1-dimsnional convolutional network.Something like this, $\hspace{3cm}$ Each row is an instance. Convolution will happen across the row. I have a dataset where there are stored the measurements of 30 devices You may simply assume each data point as a Time-step assuming it was collected simultaneously. But your data has an additional dimension i.e. Devices. So you should either build one model for each device or take the average of each device and consider it as the single time-step. Or you can apply a 1-D CNN with a 2-D kernel in your first layer, assuming each device as a channel. Check an example here. In case, Keras doesn't allow a 2-D kernel, then use a 2D-CNN with kernel size "30xM". With that, the convolution will happen in only one direction. Beware that if you use the last 2 options, then your pre-processing function will have to change i.e each time-step will have "30x1" as a dimension.
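A hedged sketch of the "devices as channels" option, assuming the measurements can be arranged as an array values of shape (n_measurements, 30) with one column per device and one row per implicit time step; TIME_STEPS remains a hyperparameter to tune.

import numpy as np

TIME_STEPS = 100   # hyperparameter; no timestamp needed, one row = one time step

def create_sequences(values, time_steps=TIME_STEPS):
    output = []
    for i in range(len(values) - time_steps):
        output.append(values[i : i + time_steps])
    return np.stack(output)

# x_train = create_sequences(values)   # shape (n_windows, TIME_STEPS, 30)
# a Conv1D autoencoder then convolves over time, treating the 30 devices as channels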
H: Mathematical bias and weight vs machine learning bias and weight I am a little confused about the term Bias and Weight with respect to machine learning. Say we want to predict the heights of people whose weights are given. So plot weights to x-axis and height to yaxis. To find out the linear relationship between height and weight we draw a straight line that shows the relationship between height and weight. Using the equation for a line, you could write down this relationship $y= mx+b ...i)$ more specifically in the machine learning terminology it could be $y= b+ w_1x_1...ii)$ So here b is the bias as per machine learning. However, b is the y-intercept as per mathematics. By defination b can be defined in machine learning A value indicating how far apart the average of predictions is from the average of labels in the dataset. Is that mean bias(b) is the distance between some particular point of the red line(as per picture) to the true point(say a blue or green point). Now another confusion if this is the case then what is loss? As per defination loss is A measure of how far a model's predictions are from its label. Then what is the difference between loss and bias? Now, for weight, here, weight(m)means slope as per equation i). Mathematically slope can be define as $m = \frac{rise}{run} =\frac{y_2 - y_1}{x_2 - x_1}$ However, weight$(w_1)$ can be defined in machine learning as A coefficient for a feature in a linear model. So my confusion is that is the procedure of finding weight is same as finding slope in mathematics? AI: Okay so let's start with the first question: Is that mean bias(b) is the distance between some particular point of the red line(as per picture) to the true point(say a blue or green point). You'd be correct had you used the word difference rather than distance. Bias is the difference between the estimated value and the true value. Think of it in this way, if your weighing machine always showed 5kg less for every measurement then you'd be adding 5 to every weight it measures, as a way to offset all measures made by the machine. So if your data is 'biased' towards one side then this bias term offsets this behaviour by adding/subtracting the 'biases of the weights. Here's some more rigorous explanation to it. what is the difference between loss and bias? Loss, on the other hand, is a measure of how close your estimator is to true prediction, there are different loss function that one can employ with different type of training, for linear regression we can use MSE. Using loss functions we can objectively optimize our weights and biases. So, having an appropriate value of bias(and weight) causes our loss to be lower, thereby increasing the accuracy of our model. So my confusion is that is the procedure of finding weight is same as finding slope in mathematics? The procedure is different, but the end result is the same. To see why you'd need to realize is that you are calculating $\hat y$ as the following: Given $m,b$ and a point $x$, calculate the value of $\hat y$ which is then used to calculate the loss. This value along with $m$ and $b$ is then used to calculate a better value for $m$ and $b$ using Gradient Descent. Notice at no point are we talking about directly calculating $m$ using $\hat y$, We are iteratively calculating better values for weights and biases. The end result? For single-dimension data, the weight that the model has learnt is indeed the slope (when plotted) and bias is the y-intercept. 
This stands true as the number of dimensions increases, but then, rather than being single values, the weights become a multi-dimensional vector.
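A small check on synthetic data (the numbers are made up) that the learned weight and bias of a one-feature linear model coincide with the slope and y-intercept of the fitted line.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(40, 120, size=(200, 1))             # weights in kg (synthetic)
y = 0.9 * x[:, 0] + 100 + rng.normal(0, 5, 200)     # heights with true slope 0.9, intercept 100

model = LinearRegression().fit(x, y)
print("weight (slope):", model.coef_[0])         # close to 0.9
print("bias (y-intercept):", model.intercept_)   # close to 100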
H: How to professionally spell library names such as "scikit learn"? For example for inclusion in a CV. This is mostly a question of whether one should use capital letters or not. Usually I search the library to find how the library name is spelled on the official page. For scikit-learn it is spelled without capital letter, while most of the other letter has some capital letter at the beginning (and often inside as well). Still it doesn't feel right next to other capitalised names. Should I still use a capital "Scikit-Learn" or am I overthinking this? AI: Scikit-learn has a citation guide: https://scikit-learn.org/stable/about.html#citing-scikit-learn They spell it with a uppercase "S" and lowercase "l": Scikit-learn: Machine Learning in Python
H: Does REPEATED K-fold cross validation make sense with Random Forest? When using random forest, would using normal cross-validation and just taking the average results from multiple models with different random states give me the same results as using Repeated K-fold cross validation? Repeated K-fold cross-validation basically repeats cross-validation with multiple different splits of the data and reports the average results. AI: To the title question, yes, repeated k-fold makes sense with random forests; and to the question body, no, the results will not generally be the same as repeated model fits with one k-fold split. The reason is that fixing one k-fold split and then repeatedly fitting random forests (with different random seeds) still only gives each forest access to $(k-1)/k$ of the data at a time. It may be easiest to think about the case when the number of trees is astronomical: the random choices for the bagging get averaged out, to the point where different random seeds don't actually matter: the forests converge to the same result, given a training split. Then the average of the forests' scores are the same for each of the splits, and so you average just $k$ scores. Compare that to repeated $k$-fold, where each of the forests converges, but are all on different training sets, and so the average happens with more variety. Whether that has a sizable impact, or in which direction, is harder to say. Repeated $k$-fold seems like it should give more stable results, even when the number of trees is something more reasonable, because the forests are less correlated.
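A minimal sketch of the two set-ups with scikit-learn (X and y are assumed to exist): repeated k-fold re-splits the data on every repetition, whereas re-seeding only the forest keeps the same folds.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, RepeatedKFold, cross_val_score

# repeated k-fold: new splits on every repetition
rkf = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores_repeated = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=rkf)

# fixed folds, only the forest's random_state changes
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores_fixed = [cross_val_score(RandomForestClassifier(random_state=s), X, y, cv=kf).mean()
                for s in range(10)]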
H: What is the intuition of using clustering for performing feature engineering in machine learning tasks? I am trying to implement the research paper Combining Boosted Trees with Metafeature Engineering for Predictive Maintenance. The paper has a section called meta feature engineering where they have used hierarchical clustering to create features. The paper says: The third method we used to analyze the outliers in the dataset is based on an hierarchical Agglomerative Clustering algorithm [5]. Hierarchical Agglomerative Clustering starts with Z groups (Z being the number of observations), each initially containing one object, and then at each step it merges the two most similar groups until there is only one single group, containing all data. The rationale for this method is that the last observation that are merged might still be significantly different from the group they are merged into. By definition outliers are different cases and will typically not fit well into a cluster, unless that cluster is comprised by other outliers itself. Yet again, since these are not ordinary data points, we do not expect them to form large groups. I am unable to understand the authors' intuition behind doing this. The problem I am trying to solve and the paper is related to is the IDA-2016 competition dataset. You can find more about the competition here AI: Overall the paper is not very clear so there are a few uncertainties, but the general approach is this: Their main idea is to create new features which represent the "outlyingness" of the instance. They use several different methods in order to detect outliers, however they do not explain how exactly are the new features created. One of the methods they use to detect outliers is based on hierarchical clustering: the result of such clustering is a binary tree in which the most distant clusters/instances are connected last, i.e. close to the root of the tree. Their assumption is that an outlier tends not to be close to any other instance or cluster, therefore they are connected last. While the method makes sense, it's not clear in the paper whether they only retain the very last instance as outlier or the last few instances (or any other variant). So the clustering and the feature creation are only indirectly related: The clustering is used to detect one or several outliers in the instances, and several other methods are used for the same purpose. Based on the detection of these outliers, one or several new features are created which contain a value describing the "outlier status" of the instance. The simplest option would be to create a single boolean feature which is true if the instance was detected as outlier by any of the methods, but one can imagine more advanced options. For example, based on the hierarchical clustering one can obtain the order in which the instances are connected, and the rank of every instance can be used as a feature.
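The paper does not spell out the implementation, so this is only a rough sketch of the simplest variant described above: instances that are merged very late in the hierarchy are flagged, and the flag becomes a new boolean feature. The linkage method and the "last five merges" threshold are arbitrary choices.

import numpy as np
from scipy.cluster.hierarchy import linkage

# X: (n, d) feature matrix, assumed to exist
Z = linkage(X, method="ward")
n = X.shape[0]

# record the merge step at which each original instance first joins a cluster
first_merge = np.full(n, -1)
for step, (a, b, dist, size) in enumerate(Z):
    for idx in (int(a), int(b)):
        if idx < n and first_merge[idx] == -1:
            first_merge[idx] = step

# instances merged in the last few steps are candidate outliers
outlier_flag = (first_merge >= n - 1 - 5).astype(int)
# outlier_flag can then be appended to X as an "outlyingness" meta-feature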
H: The sum of multi-class prediction is not 1 using tensorflow and keras? I am studying how to do text classification with multiple labels using tensorflow. Let's say my model is like: model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size, 50, weights=[embedding_matrix], trainable=False), tf.keras.layers.LSTM(128), tf.keras.layers.Dense(4, activation='sigmoid')]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=tf.metrics.categorical_accuracy) I have 4 classes, the prediction function has the following results: pred=model.predict(X_test) pred array([[0.915674 , 0.4272042 , 0.69613266, 0.3520468 ], [0.915674 , 0.42720422, 0.69613266, 0.35204676], [0.915674 , 0.4272042 , 0.69613266, 0.3520468 ], [0.9156739 , 0.42720422, 0.69613266, 0.3520468 ], ...... You can see that every data has 4 prediction values. But the sum of them is not 1, which I do not understand. Did I do something wrong, or how to interpret this? AI: To get the summation in the last layer to be one, you need to use Softmax activation not Sigmoid. Often Sigmoid is used in a special case when there is a binary classification problem.
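The same model from the question with only the output activation changed, so the four predicted probabilities sum to one (vocab_size and embedding_matrix are assumed from the question).

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 50, weights=[embedding_matrix], trainable=False),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(4, activation='softmax')])   # softmax normalises the 4 outputs

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=tf.metrics.categorical_accuracy)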
H: Set seed for a Class that calls Keras Models I have a class that I use to optimize parameters of a Keras LSTM model. It is known that to set the seed for Keras, one must add the following to the code. But what I'm not understanding is where to put it in the case of a class that will build and modify the models. from numpy.random import seed seed(1) from tensorflow.random import set_seed set_seed(2) Should it be in the __init__ like below? from numpy.random import seed from tensorflow.random import set_seed class OptimizeLSTM: def __init__(self, X_train, y_train, X_test, y_test, verbose=False): self._X_train = X_train self._y_train = y_train self._X_test = X_test self._y_test = y_test self._verbose = verbose seed(1) set_seed(2) AI: No, you should declare it ahead of the class, just after importing the packages. You have to declare the seeds for numpy and tensorflow separately. For further details read this blog - https://machinelearningmastery.com/reproducible-results-neural-networks-keras/
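A minimal sketch of the recommended placement, with the seeds set once at module level before the class definition.

from numpy.random import seed
from tensorflow.random import set_seed

seed(1)        # numpy seed
set_seed(2)    # tensorflow seed

class OptimizeLSTM:
    def __init__(self, X_train, y_train, X_test, y_test, verbose=False):
        self._X_train = X_train
        self._y_train = y_train
        self._X_test = X_test
        self._y_test = y_test
        self._verbose = verbose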
H: Is a multi-layer perceptron exactly the same as a simple fully connected neural network? I've been learning a little about StyleGans lately and somebody told me that a Multi-Layer Perceptron, MLP, is used in parts of the architecture for transforming noise. When I saw this person's code, it just looked like a normal 8-layer fully connected network (i.e. linear-->relu-->linear-->relu-->...) Last year, I read Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow 2 by Aurelien Geron and he talks about MLPs. When I read about it, I interpreted his description as that an MLP is not exactly the same as a vanilla fully connected neural network. I didn't fully understand the text and don't have the book anymore so, unfortunately, can't recall exactly what I read so I might have been completely wrong in my understanding of what he wrote. Is an MLP the same thing as very basic fully connected network? AI: Yes, a multilayer perceptron is just a collection of interleaved fully connected layers and non-linearities. The usual non-linearity nowadays is ReLU, but in the past sigmoid and tanh non-linearities were also used. In the book, the MLP is described this way: An MLP is composed of one (passthrough) input layer, one or more layers of TLUs, called hidden layers, and one final layer of TLUs called the output layer (see Figure 10-7). The layers close to the input layer are usually called the lower layers, and the ones close to the outputs are usually called the upper layers. Every layer except the output layer includes a bias neuron and is fully connected to the next layer. ("TLU" stands for threshold logic unit)
H: with ML/DL model Is possible predict numbers of items required? I have a dataset is regarding ambulance call data. Data sample: v_type district gender complaint age Month 0 Advanced District 1 Male Chest Pain 28 jan 1 Advanced District 2 Male Heart Problem 50 dec 2 General District 3 Male Cardiac Arrest 76 jun 3 Advanced District 4 Male Heart Problem 45 oct 4 General District 5 Female Cardiac Arrest 52 nov 5 Advanced District 1 Male Chest Pain 34 feb 6 Advanced District 2 Male Cardiac Arrest 44 jun 7 General District 3 Female Heart Problem 55 july 8 Advanced District 4 Female Heart Problem 86 may 9 General District 5 Male Heart Problem 65 aug 10 General District 1 Male Heart Problem 60 nov 11 Advanced District 2 Male Chest Pain 36 mar In the data v_type (Vehicle type) we have Advanced (Advanced featured emergency vehicle) and General (Basic featured emergency vehicle). How to predict how many advanced or general ambulances should arrange in a district for a particular month? Example: If in a month (jan) and in district3 has huge complaint then predict and need to show 5 or 6.. Advanced vehicle type is required AI: The first thing to do is to reorganize the data in a way which suits the problem that you're trying to solve: since the goal is to predict the number of each type of vehicle by month, your data should contain a column for this number. Here you need to count the number of rows for each month and each type of vehicle so that you obtain something like this: v_type district month number Advanced District 1 jan 1 General District 1 jan 2 Advanced District 2 jan 0 General District 2 jan 3 Advanced District 1 feb 2 General District 1 feb 3 ... After that you will probably need to represent the month (and year) in a way that a regression algorithm can use, typically an integer starting at month 1.
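A hedged pandas sketch of the reorganisation step, assuming the table from the question is a dataframe df; the month spellings in month_order should match whatever appears in the data.

import pandas as pd

# count calls per vehicle type, district and month
counts = (df.groupby(['v_type', 'district', 'Month'])
            .size()
            .reset_index(name='number'))

# encode the month as an integer so a regression model can use it
month_order = ['jan', 'feb', 'mar', 'apr', 'may', 'jun',
               'july', 'aug', 'sep', 'oct', 'nov', 'dec']
counts['month_num'] = counts['Month'].map({m: i + 1 for i, m in enumerate(month_order)})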
H: How to determine the inputshape of a ANN in Keras I started to use Keras for ANN and something that I do not really understand is which values to choose for the input_shape parameter of the first layer in an ANN? I know that the number should be equal to the inputs but how can I determine the order and the other value in the vector. For example, here is the code of a Multilayer Perceptron that takes 3 inputs and calculates 1 output model.add(keras.layers.Flatten(input_shape=[3,])), model.add(Dense(20, activation='relu')) model.add(Dense(40, activation='relu')) model.add(Dense(20, activation='relu')) model.add(Dense(3, activation='linear')) In this case I use 'input_shape=[3,]'. The 3 comes from the 3 inputs but why do I have a ",]" as the second argument. Here you see on the other hand the code for a recurrent neural network that is used for time series forecasting and that has 6 input features for every timeslot of the time series and calculates 24 outputs: model6 = keras.models.Sequential([ keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 6]), keras.layers.SimpleRNN(20, return_sequences=True), keras.layers.TimeDistributed(keras.layers.Dense(24)) ]) Why do I have here another order with "input_shape=[None, 6]" and not "input_shape=[6, None]" and why do I need here "None" as the first argument and not like in the multilayer perceptron just ",]". Has this something to do with the recurrent neural network such that I always have to use the number of input features as the second argument and 'None' as the first argument. I read about this in the keras documentation but it is still confusing for me. Can you tell me how to choose those input_shape argument`I'd appreciate every comment. AI: There are basically two parts. Why there is a comma in the [3,]? In this case you can skip it and just use [3]. You can encounter it in the tutorials, because you can pass the tuple as shape as well - (3,) and if you skip the comma in the tuple, then it will be just number, not tuple. So, it's just more of a python, not a keras itself. Try this in terminal >>> 3 == (3) True >>> 3 == (3,) False >>> [3] == [3,] True The bottom line is that keras in this case expects 1-dimensional objects of size 3. About the recurrent networks. In contrast with fully-connected neural nets, the input to the RNN is two-dimensional. I.e. you have a time series with 10 steps, each defined by 3 numbers, then the shape would be (10, 3). The problem is that you don't know beforehand how many steps you will have - it could be 20, it could be 5. So, you use None as a placeholder to say the keras to expect some dimension there. You can ask, why we use None here and not something else. The reason is numpy has been using similar notations for years. See the example below >>> import numpy as np >>> x = np.ones(5) >>> x.shape (5,) >>> x_2 = x[None, :5] >>> x_2.shape (1, 5) In this case None in index tells numpy to add a dimension. It's not very transparent thing and could be confusing at first, especially with all the other notation, but you will get used to it. EDIT Worth mentioning, the number of dimensions of input tensors always will be bigger on one, i.e. 2 dimensions for MLP and 3 for the RNN, because we process data in batches. Thus, the first dimension always will be the number of samples. Let's look on the examples. For MLP input tensor shape could be [128, 3], where the 128 is the number of samples and 3 is number of features. 
input_shape=(3,) For an RNN the input tensor shape could be [128, 30, 3], where 128 is the number of samples, 30 is the sequence length and 3 is the number of features. input_shape=(None, 3) For a convolutional NN the inputs will be images with a shape like [128, 220, 220, 3], where 128 is the number of images, 220x220 is the size of the image and 3 is the number of channels (colors). input_shape=(220, 220, 3) The interesting fact - we are asked to specify the input shape not because the keras authors are pedants, but because the specific size of the network depends on it and we need this info during initialization. As the batch size doesn't influence the model size, it is omitted by convention. We would probably omit the sequence length as well, if the RNN was the only architecture, but for consistency with other types of models it is as it is.
H: Is it good practice to use SMOTE when you have a data set that has imbalanced classes when using BERT model for text classification? I had a question related to SMOTE. If you have a data set that is imbalanced, is it correct to use SMOTE when you are using BERT? I believe I read somewhere that you do not need to do this since BERT take this into account, but I'm unable to find the article where I read that. Either from your own research or experience, would you say that oversampling using SMOTE (or some other algorithm) is useful when classifying using a BERT model? Or would it be redundant/unnecessary? AI: I don't know about any specific recommendation related to BERT, but my general advice is this: Do not to systematically use oversampling when the data is imbalanced, at least not before specifically identifying performance issues caused by the imbalance. I see many questions here on DataScienceSE about solving problems which are caused by using oversampling blindly (and often wrongly). In general resampling doesn't work well with text data, because language diversity cannot be simulated this way. There is a high risk to obtain an overfit model.
H: In Neural network, if one node is deleted, where should other nodes be connected? If it's a fully connected neural network, should we just remove those lines that were originally connected to the deleted node, and hence no actual changes on the remaining nodes? Except there weights and bias will be updated? AI: Whether you are actually removing the nodes or simply deactivating them, more or less the same concept would apply. The idea is called Dropout and is employed to inhibit the tendency to overfit. Suppose you're removing a node from the $i^{th}$ layer which has $n_i$ nodes. Removing it will not only involve removing the connections from the previous layers (${n_{i-1}}$ connections) but also removing the connections from the removed node to the next layer ($n_{i+1}$ connections) which would essentially remove $n_{i-1}+n_{i+1}$ connections. It would be much better to simply set the weights and biases to $0$. Here's the math behind the idea and you can find one possible implementation here.
H: When does it make sense to add numbers with different units? Given two vectors containing numbers that have different natures / units, (example length in Meters and weight in Kilograms), does it make sense to calculate euclidean distance between these two vectors or cosine similarity? The equations imply that you have to add $meters^2$ and $kilometers^2$ which supposedly does not make sense. Yet, I see this done many times indirectly, for instance, when calculating cosine similarity of documents based on tf-idf (vector that contains objects with dissimilar natures). AI: Speaking as an ex-physicist, I would say it never makes sense to add quantities with different units. When problems like this do arise it makes sense to honestly define some scale constants, normalize your quantities with respect to those constants and then add them. This 'define and normalize' will likely not change much of your procedure, but being explicit about your constants can help to avoid problems later on. If you want to consider meters and kilogarms to be part of the same metric space, it means that somewhere in your problem there must be a constant with units kilogram/meter, i.e. linear density. Such constants frequently arise in natural sciences, and can be quite interesting in their own right. Charge of electron, speed of light, are just few. I would suggest trying to understand how this linear density constant arises in your problem, might be an interesting insight. It will become even more interesting if you have two constants with the same units in your problem. Then their relative values can indicate qualitative switch in behaviours. For example, https://en.wikipedia.org/wiki/Reynolds_number are unit-less numbers that indicate whether or not you can have turbulence. I understand this is not a physics question :-)
H: Do I load all files at once or one at a time? I currently have $1700+$ CSV files. Each of them is in the same format and structure, give or take a row or possibly a column at the end. Each CSV is $\approx 3.8$ MB. I need to perform a transformation on each file Extract one data set, perform a summation and column select, then store inside a folder. Extract an array, no need for column names here, then sore inside a folder. From an algorithmic POV, is it better to load each file, perform the actions needed and then move on to the next file? OR Do I load all files, perform the action on all files and then store to hard drive? I know how to do the actual process, I am after a 20,000 feet POV of dynamical programming/optimisation. Thanks. AI: Since the process can be run independently on every file/batch, I would tend to recommend processing each file one by one for the sake of scalability: Depending on the details of the task there could be some minor optimizations which can be done only by processing the whole data at once, like things related to I/O and memory usage. I can't think of any significant improvement obtainable this way. By itself processing the files independently won't bring any performance improvement either, but it has a massive advantage in terms of scalability: the process can be distributed easily by using multiple cores and processing any number of files on each core. This option offers flexibility for processing the current data (if you can afford multiple cores/machines, the process can be faster) and importantly any future version of the data, even if it grows very large.
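A minimal sketch of the file-by-file approach; the directory names, column names and the two transformations are placeholders for the actual logic, and the loop body can later be handed to multiprocessing.Pool to use several cores.

from pathlib import Path
import numpy as np
import pandas as pd

in_dir = Path("csv_in")                        # placeholder paths
out_summaries = Path("summaries"); out_summaries.mkdir(exist_ok=True)
out_arrays = Path("arrays"); out_arrays.mkdir(exist_ok=True)

for csv_path in in_dir.glob("*.csv"):
    df = pd.read_csv(csv_path)

    # 1) column selection + summation (placeholder column names)
    df[["col_a", "col_b"]].sum().to_csv(out_summaries / csv_path.name)

    # 2) extract an array without column names
    np.save(out_arrays / (csv_path.stem + ".npy"), df.to_numpy())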
H: Prevent model from over-focusing on strong features I have a classification model (DNN/Linear layers with some transformers and other things later). The input to the model are several different modalities of different lengths and different amounts of information. I am trying to mitigate the dimensionality difference by projecting different modalities into the same dimensional space and then combining them into the same space. However, the model seems to be focusing completely on the strongest input features and almost completely ignores the abundance of weaker ones. All of the input features are normalized (bools to 0/1 and continuous to mean=0, stddev=1), so the issue is not the scale of feature values but the predictive power of those few features which end up choking others. Are there any methods out there for addressing this? AI: Use dropout. Dropout is a form of regularization where you zero out the input components with a certain probability. If you want your model to learn representations that rely less on some features, just place a dropout layer before those features enter the rest of the network. This will force the network to learn more robust representations that depend less on the strongest features.
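A rough Keras-style sketch of the idea, assuming the strong and weak features arrive as separate inputs of sizes n_strong and n_weak; the dropout rate is an arbitrary choice.

import tensorflow as tf

strong_in = tf.keras.Input(shape=(n_strong,))   # the few dominant features
weak_in = tf.keras.Input(shape=(n_weak,))

# randomly zero the strong features during training so the network
# cannot rely on them alone
strong = tf.keras.layers.Dropout(0.5)(strong_in)

x = tf.keras.layers.Concatenate()([strong, weak_in])
x = tf.keras.layers.Dense(128, activation='relu')(x)
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model([strong_in, weak_in], out)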
H: How to best visualize or capture the time interval between lab measurements? I have a table as shown below

subject_id lab_test_id lab_test_date value difference
"10005606" "20364112" "2143-12-06 02:32:00" "1.3" "13:10:00"
"10005606" "20364112" "2143-12-06 15:42:00" "1.3" "02:02:00"
"10005606" "20364112" "2143-12-06 17:44:00" "1.3" "10:11:00"
"10005606" "20364112" "2143-12-07 03:55:00" "1.3" NULL
"10005866" "20364112" "2149-10-01 19:30:00" "2.1" "07:42:00"
"10005866" "20364112" "2149-10-02 03:12:00" "2.2" "08:51:00"
"10005866" "20364112" "2149-10-02 12:03:00" "2.1" "04:59:00"
"10005866" "20364112" "2149-10-02 17:02:00" "1.6" NULL

The difference column indicates the time difference between consecutive lab tests. Now, the doctor would like to know how often these measurements are done for each patient. How can I best convey this information? Should I capture the average difference between lab tests? For example, for patient = 10005606 it is about 8 hours, meaning adding all difference values and dividing by the total number of records: 25 hours 30 mins (approx) / 3 records = 8 hours approx. Is there any better way to represent this, using the median or any other measure? Can you guide me with this? AI: The average or median seem reasonable here. A boxplot might be a good choice of visualisation: the y-axis could be your time dimension, and the boxplot shows the distribution of the differences, including the median value and outliers. You could then compare certain measurements against this. But it would of course depend on whether you have enough data in your sample to represent the total population. Or simply calculate the standard deviation of the differences, and report this with your median (the mean value may be warped by outliers).
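A small pandas sketch of how the per-patient intervals and summary statistics could be computed from a dataframe df with the columns shown above (assuming the timestamps parse with pd.to_datetime):

import pandas as pd
import matplotlib.pyplot as plt

df["lab_test_date"] = pd.to_datetime(df["lab_test_date"])
df = df.sort_values(["subject_id", "lab_test_date"])

# interval between consecutive tests, in hours, computed per patient
df["interval_h"] = df.groupby("subject_id")["lab_test_date"].diff().dt.total_seconds() / 3600

print(df.groupby("subject_id")["interval_h"].agg(["median", "mean", "std"]))

# one box per patient showing the spread of intervals
df.boxplot(column="interval_h", by="subject_id")
plt.ylabel("hours between tests")
plt.show()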
H: Efficient way to create a matrix that shows if data exists per day So I have a dataset containing different IDs and the time the data was created.

ID Date
0 123123 2021-03-24 12:43:13.494000+00:00
1 123412 2021-03-24 12:43:13.494000+00:00
2 123123 2021-03-24 12:43:15.935000+00:00
3 234234 2021-03-24 12:43:15.935000+00:00
4 432424 2021-03-24 12:43:13.494000+00:00

The goal should be to validate that there is at least one data row for every id for every given day. What I did so far is converting the timestamps to dates like this:

0 2021-03-24
1 2021-03-24
2 2021-03-24
3 2021-03-24
4 2021-03-24

Now I am missing a solution to create a matrix that tells me whether for the given ID on the given date a data row exists or not. The data frame creation looks like this:

df = pd.DataFrame(index=df['id'].unique(), columns=df['date'].sort_values().unique())

which creates this matrix:

 2021-03-19 2021-03-20 2021-03-21 2021-03-22 2021-03-23 2021-03-24 2021-03-25
12341 NaN NaN NaN NaN NaN NaN NaN
12312 NaN NaN NaN NaN NaN NaN NaN
12324 NaN NaN NaN NaN NaN NaN NaN
12345 NaN NaN NaN NaN NaN NaN NaN
12345 NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ...
12399 NaN NaN NaN NaN NaN NaN NaN
12394 NaN NaN NaN NaN NaN NaN NaN
34567 NaN NaN NaN NaN NaN NaN NaN
98764 NaN NaN NaN NaN NaN NaN NaN
10023 NaN NaN NaN NaN NaN NaN NaN

Sure, I could just use loops to fill the values, however this would be a really inefficient way to do it. I think there is a better way and I am super sure somebody can tell me how, so I can learn it for the future. Thank you very much! AI: You should be able to just use a pandas pivot table for this purpose. Convert the timestamps to calendar days first, then count the rows per ID and day:

import pandas as pd

df["Date"] = pd.to_datetime(df["Date"]).dt.date  # keep only the calendar day
counts = pd.pivot_table(df, index="ID", columns="Date", aggfunc="size", fill_value=0)

counts > 0 then gives the True/False matrix indicating for which ID/day combinations at least one data row exists.
H: reliability of human-level evaluation of the interpretability quality of a model Christoph Molnar, in his book Interpretable Machine Learning, writes that Human level evaluation (simple task) is a simplified application level evaluation. The difference is that these experiments are not carried out with the domain experts, but with laypersons. This makes experiments cheaper (especially if the domain experts are radiologists) and it is easier to find more testers. An example would be to show a user different explanations and the user would choose the best one. (Chapter = Interpretability, section = Approaches for Evaluating the Interpretability Quality). Why would anyone pick/trust a human-backed (not expert-backed) model over, say, a domain-expert-backed model or even a functionally evaluated model (i.e. one whose accuracy/precision/recall/f1-score etc. are considerably good)? AI: This is specifically for the interpretability of outcomes, i.e. a task where non-expert humans outperform machines. There is a problem in collecting labels in machine learning, whereby labelling datasets is very expensive and time consuming (due to the size of datasets & the cost of experts' time). So it's less about trust, it's more about practicality. Consider hiring a data scientist to develop an algorithm to automatically label a dataset based on expert heuristics (e.g. "label the data as cancerous if it looks red"): it might take 6 months to collect data, plan, develop & test - therefore for certain use-cases hiring 10 non-experts and telling them the heuristic might be cheaper and faster. The book uses the example "show a user different explanations and the human would choose the best"; in the context of radiology, it could be something like: "Look at the images of the patient, compare them to this dictionary of images and diagnoses, combine multiple sources and then report what the diagnosis is." Of course, if you have an algorithm which outperforms non-experts, you might just want some expert labels to validate your algorithm, and forget the non-experts.
H: How to deal with the class imbalance problem in natural language processing? I am doing an NLP binary classification task, using BERT with a softmax layer on top of it. The network uses cross-entropy loss. When the ratio of the positive class to the negative class is 1:1 or 1:2, the model performs well on correctly classifying both classes (accuracy for each class is around 0.92). When the ratio is 1:3 to 1:10, the model performs poorly as expected. When the ratio is 1:10, the model has a 0.98 accuracy on correctly classifying negative class instances, but only has a 0.80 accuracy on correctly classifying positive class instances. The behavior is as expected, as the model tends to classify most/all instances toward the negative class since the ratio of positive to negative is 1:10. I just want to ask what the recommended way is for handling this kind of class imbalance problem in natural language processing specifically. I have seen suggestions to change the loss function or perform up/down sampling, but most of them target the computer vision class imbalance problem. AI: Disclaimer: this answer might be disappointing ;) In general my advice would be to carefully analyze the errors that the model makes and try to make the model deal with these cases better. This can involve many different strategies depending on the task and the data. Here are a few general directions to consider: Most of the time the imbalance is not the real problem, the real problem is why the model can't differentiate between the classes. Even in case of extreme imbalance, if the classes are easy to discriminate a model can perform very well. The imbalance only causes the model to assign the majority class when it doesn't have enough indication to decide, so it resorts to the conservative choice. If the minority class is really small in absolute terms, it's likely that there's not enough language diversity in the positive instances (data sparsity). This will usually cause a kind of overfitting which can be hidden by the fact that the model almost always assigns the majority class. In this case the goal should be to treat the overfitting, so the first direction is to simplify the model and/or the data representation. Sometimes it can make sense to consider alternative ML designs: in a regular classification problem a model relies on the distribution of the classes, by principle. Some alternative approaches might not be as influenced by the distribution, for example one-class classification. Of course it's not suited for every problem. Overall my old-school advice is not to rely too much on technical answers such as resampling methods. It can make sense sometimes, but it shouldn't be used as some kind of magical answer instead of careful analysis.
H: Drug Making Using Genetic Algorithms I want to create a drug using N different chemicals for fighting a bacterial infection; those N chemicals are contained inside the drug in different quantities. My work environment is a simulated one and I want to create the drug using genetic algorithms. How can I use a genetic algorithm for this problem? What representation for the chromosomes can I use, and what kind of evaluation function should I use? What could go wrong using genetic algorithms for these kinds of problems? AI: A simple representation of the problem in terms of a genetic algorithm could be something like this: A gene represents a chemical and the values it takes represent the proportion of the chemical in the drug. An individual is a combination of N genes/chemicals, each with their particular proportion. A cross-over is the combination of the genes of two individuals A and B, either by picking randomly the value of either A or B for each gene or by some other kind of aggregation. A mutation is the random modification of a gene to a different proportion. The standard genetic algorithm works like this: Randomly pick a set of say 100 individuals (first generation). Calculate the "performance" of every individual, i.e. evaluate how good this particular combination of chemical proportions is. Select say the top 10 individuals according to their performance, then produce the next generation of 100 individuals by cross-over among these top 10. Optionally add some random mutations to the new individuals' genes. Iterate again from step 2. Keep iterating until some stop condition is satisfied, for example the average performance over the last 5 generations doesn't increase anymore. As far as I understand the task, the main problem is the evaluation in step 2: if there is no automatic way to evaluate the performance of a combination of chemicals, it's impossible to use the genetic algorithm. In theory the evaluation doesn't need to be automatic, one could imagine doing a manual experiment for every individual, but that means performing thousands of experiments, a lot of them probably useless.
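To make the loop concrete, here is a minimal, hedged Python sketch of this scheme; the fitness function is a placeholder that would have to be replaced by the simulator's evaluation of a chemical mixture, and all constants are arbitrary:

import numpy as np

N_CHEMICALS = 5                      # hypothetical number of chemicals
POP_SIZE, TOP_K, N_GEN = 100, 10, 50

def fitness(individual):
    # placeholder: in practice, run the simulation for this mixture and return a score
    return -np.sum((individual - 0.2) ** 2)

def normalize(pop):
    return pop / pop.sum(axis=1, keepdims=True)   # proportions of each individual sum to 1

population = normalize(np.random.rand(POP_SIZE, N_CHEMICALS))
for generation in range(N_GEN):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-TOP_K:]]       # keep the best individuals
    # cross-over: for each child, pick each gene from a randomly chosen parent
    idx = np.random.randint(0, TOP_K, size=(POP_SIZE, N_CHEMICALS))
    children = parents[idx, np.arange(N_CHEMICALS)]
    # mutation: small random perturbation of a few genes
    mutation = np.random.rand(POP_SIZE, N_CHEMICALS) < 0.1
    children[mutation] += np.random.normal(0, 0.05, mutation.sum())
    population = normalize(np.clip(children, 1e-6, None))

best = population[np.argmax([fitness(ind) for ind in population])]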
H: Why is the optimal C chosen by GridSearchCV so small? I'm trying to use GridSearchCV to select the optimal C value in this simple SVM problem with non-separable samples. The issue I'm having is that when I run the code the optimal C is chosen to be ridiculously small (~e-18) so that the margin is expanded to contain all samples. Even when I alter the samples so that they are easily separable, the optimal C is still on the scale of e-18. GridSearchCV selects a very small C however I try to alter the samples. Does anyone know why this is happening? import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets.samples_generator import make_blobs from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, confusion_matrix from sklearn.model_selection import GridSearchCV X, y = make_blobs(n_samples = 500, centers = 2, random_state = 6, cluster_std = 1.2) fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(X[:,0], X[:,1], c = y, cmap = 'rainbow', s = 30, edgecolors = 'white') ax.set_xlabel(r'$x_1$', fontsize = 20) ax.set_ylabel(r'$x_2$', fontsize = 20) svc = SVC(kernel = 'linear') c_space = np.logspace(-20, 1, 50) param_grid = {'C': c_space} svc_cv = GridSearchCV(svc, param_grid, cv = 5) svc_cv.fit(X, y) c = svc_cv.best_params_['C'] svc.C = c svc.fit(X, y) support_vecs = svc.support_vectors_ x1_min = min(X[:,0]) x1_max = max(X[:,0]) x2_min = min(X[:,1]) x2_max = max(X[:,1]) x1 = np.linspace(x1_min, x1_max, 100) x2 = np.linspace(x2_min, x2_max, 100) X1, X2 = np.meshgrid(x1, x2) points = np.vstack([X1.ravel(), X2.ravel()]).T boundary = svc.decision_function(points).reshape(X1.shape) ax.contour(X1, X2, boundary, colors = 'k', levels = [-1, 0, 1], linestyles = ['--', '-', '--']) ax.scatter(support_vecs[:,0], support_vecs[:,1], s = 250, linewidth = 1, facecolors = 'none', edgecolors = 'k') ``` AI: Have a look at svc_cv.cv_results_: there are many values of C that tied for best, with accuracy 99.6%, and the chosen C is the smallest of those. The point is that the width of the margin doesn't affect the actual hyperplane very much, and so the accuracy score doesn't change very much. A few suggestions: With larger and less-separable data sets, this might get minimized, since small changes in the hyperplane would be more likely to classify points differently. For this case, where many scores are exactly equal, you might prefer to tie-break for larger values of C. This can be easily accomplished by just reversing the order of the list c_space, or more robustly by defining a custom scorer for the search, that takes the mean test score plus some function of C. Something more refined than accuracy as the scorer for the search could help separate the values for different C, but it's not clear what would be best. Something like log-loss would require calibrating probabilities; maybe AUROC?
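If you want the search itself to prefer the largest C among the tied configurations, one option (a sketch, assuming scikit-learn >= 0.20, where refit accepts a callable returning the chosen index, and reusing svc, param_grid, X, y from the snippet above) is to pass a custom refit function that breaks ties by C:

import numpy as np

def largest_c_among_best(cv_results):
    scores = cv_results["mean_test_score"]
    best = np.flatnonzero(scores == scores.max())          # all candidates tied for best score
    return best[np.argmax(np.array(cv_results["param_C"])[best])]  # largest C among them

svc_cv = GridSearchCV(svc, param_grid, cv=5, refit=largest_c_among_best)
svc_cv.fit(X, y)
print(svc_cv.best_params_)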
H: Input layer is incompatible even when dimensions (apparently) match I am making a sequential neural network for classification, with 3 dense layers, which will be trained on a simple synthetic dataset. The description of dataset is as follows: Data and class labels are integers. They are 2000 each. There is only a single feature column (populated by np.arange(2000) * 3) There is only a single label which indicates last digit of number (populated by np.arange(2000) *3 % 10). After making the model, I am encountering the following error when calling model.fit(): ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 1500 but received input with shape (100, 1) I have uploaded the commented Jupyter Notebook for this code on Google Collab: https://colab.research.google.com/drive/14v92NTBxIEIFJh2BhybfqhawHYIBvKnm?usp=sharing Any suggestion about how to fix this error and get reasonable accuracy on training set? AI: You set the input shape to (1500, 2) whereas your data only contains a single feature. You should therefore change the shape to (1,) or (None, 1) to match the shape of the input data.
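For reference, a minimal sketch of what a corrected model definition could look like for a single-feature input (the layer sizes are arbitrary placeholders, and sparse categorical cross-entropy assumes the labels are the raw digits 0-9):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(1,)),  # one feature per sample
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),                 # 10 possible last digits
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])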
H: How can precision be less than one in Leave-One-Subject-Out binary classification if each subject contains only one class Say I'm trying to classify a medical condition. There are only two classes: Sick and Healthy. I build a model and I can't split the data randomly because I don't want data from the same patient being in both the training and test set. So I elect to use Leave-One-Subject-Out, training the model on all subjects except one and testing on the left-out subject. So for each test set I have one subject and they are either healthy or sick. So the confusion matrix only contains one class, where precision is technically one every time and recall equals accuracy. I've been reading some papers that claim to use leave-subject-out training and test splits for tasks where patients either have a medical condition or do not. I've seen papers that report accuracy, recall, and precision, but I don't understand how you could have precision be less than one if each subject only contains one class. I doubt these papers are lying because I've seen this more than once. I just want to know what's going on here for them to get precision values that are less than one. Are they doing some kind of averaging, or am I missing something and thinking about this in the wrong way? None of the papers explain this either. AI: Your reasoning is correct that the gold standard class is the same for all the instances in a single leave-one-out test set (under the assumption that the test patient cannot become sick at some point in time, thus having both healthy and sick status). What you're missing is the aggregation across multiple test sets: a full leave-one-out experiment repeats this process for every single patient, i.e. if there are N patients then there are N unique pairs of (training set, test set). Here is some pseudo code to show this clearly:

correct, incorrect = 0, 0
for every patient p:
    training_set = all the patients except p
    test_set = patient p
    model = train(training_set)
    # to simplify, I count the patient as one instance; it's easy to count instances instead
    if model.predict(test_set) == gold_standard(p):
        correct += 1
    else:
        incorrect += 1
accuracy = correct / (correct + incorrect)

The calculation across all the patients can lead to some predictions being correct and some others being incorrect, this is why the accuracy, precision or recall can be any value between 0 and 1.
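If you are doing this in Python, scikit-learn's LeaveOneGroupOut covers exactly this setup. A hedged sketch: the feature array X, the labels y and the per-sample patient IDs in groups are assumed to exist as numpy arrays, and the SVM is only a placeholder model; the key point is that the metrics are computed once over the pooled predictions:

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.svm import SVC

y_true_all, y_pred_all = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = SVC().fit(X[train_idx], y[train_idx])
    y_pred_all.extend(model.predict(X[test_idx]))
    y_true_all.extend(y[test_idx])

print(accuracy_score(y_true_all, y_pred_all))
print(precision_score(y_true_all, y_pred_all))
print(recall_score(y_true_all, y_pred_all))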
H: How do I calculate the accuracy for graph mining in terms of (top 1%)? I have 3600 samples in my dataset. I split the dataset into the train (2700) and test (900). My problem is related to new link prediction. I am using the Common Neighbor (CN) method. Using CN, we can predict new links based on the score. The score comes from the CN formula. CN gives a score for every possible edge/path/link. A higher score means a higher possibility to be a new link. Now, I applied my train dataset to the CN methods. From CN, I have received 4200 new links with a score. From this 4200, I took only the top 1% (based on the score given by CN). So, the top 1% is 42. Now, I tried to match the 42 with my test dataset. Means, actually, how many edges/links/path I can recover which were missing in the training dataset. Among these 42 (CN output), 38 samples are perfectly matched with the test dataset (means same edge/link/path of the train and test). But, I did not give or these 38 edges/link/paths were missing in the training dataset. True Positive: 38 False Positive: 42-38 = 04 False Negative: 900-38 = 862 True Negative: 900 - (38 + 04 + 862) = -04 From this, I am trying to calculate Precision, Recall, Accuracy. But, what is happening, I am getting very little Accuracy and Recall. Can you give me an idea to improve the Accuracy and Recall? AI: You have a mining problem which is basically a search problem. In search problem you have TP, FP, FN but not TN (you have it but it is trivial, imagine search in google. Basically bilions of webpages are TN in every search. right?) I also assume that you are not interested in ranking results as you did not mention it in the question. In this case you can easily go with Precision and recall: TP: Graphs which are right and you could find. FP: Graphs which are not right but your model found. FN: Graphs which are right but your model did not find. Then you simply calculate: Precision: $$\frac{TP}{TP+FP}$$ Recall: $$\frac{TP}{TP+FN}$$ And as a combined score, you may calculate F1: $$\frac{2TP}{2TP+FP+FN}$$ About Accuracy When the volume of search space is huge, as I said above, accuray is not meaning much. According to the formulation: $$ACC=\frac{TP+TN}{TP+TN+FP+FN}$$ when $TN$ is very large then the whole fraction approaches zero and does not inform much. But if your search space is relatively small, you can calculate accuracy with the formulation above. However it was just a theoretical justification and still your best metrics are precision and recall and their related metrics. Update For Your Specific Question Your set of top 1% results may have different sizes due to different size of your model output (If it is always 4200 please write in the comments but in "mining" questions usually the size of output is variant). In your case TP and FP are pretty straight-forward and meaningful. About FN, if your test set is also ranked, then you can evaluate your results according to top 1% of your ground-truth which includes 90 samples and go on as you already did, or you can fix the size of ground-truth set equal to your 1% (i.e. take 42 top test results) and again calculate same scores. The first one is more stable and significant for comparing different models. A more precise metric which is related to the ranked results is called NDCG. If the test set is not ranked then go this way: $|correctly\_found\_graphs|$: Number of graphs that you correctly found. 
$|wrongly\_found\_graphs|$: Number of graphs you found but which are not in the ground-truth. $|all\_found\_graphs|$: here 42. $|all\_correct\_graphs|$: here 900. Then calculate the Precision@k (here precision@42) metric, which calculates the precision only on the top k results of a search output: $Precision@42 = \frac{|correctly\_found\_graphs|}{|all\_found\_graphs|}=\frac{38}{42}$ This is a well-established information retrieval metric. $Recall = \frac{|correctly\_found\_graphs|}{|all\_correct\_graphs|}=\frac{38}{900}$ How can I improve the recall? As long as the size of your top 1% results is smaller than the ground-truth, your recall is limited by an upper bound, i.e. even if all 42 are relevant, your recall is still 42/900. Knowing this fact you can use recall, but I do not see the point. Recall makes more sense if you take all 4200 outputs into account. Your calculation of True Negative in the comments is also wrong. TN in your case is the number of all graphs that your model should not have returned and indeed didn't, i.e. 900-42. Regarding "all samples are positive in my test dataset, not a single sample is negative": see the problem from an Information Retrieval point of view. In this approach, any graph which is not in the 900 set is a negative sample. You can comment in case it is still unclear.
H: On what principle did Google's DeepMind learn to walk? I just saw this video on Youtube. On what principle did Google's DeepMind learn to walk? Was it Q-Learning or a Genetic Algorithm or Policy Gradient? AI: The full method is explained in the paper Emergence of Locomotion Behaviours in Rich Environments by the DeepMind team. Quoting from that paper: Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment . . . So to answer your question, the researchers used a policy gradient method.
H: How to interpret fast-rcnn metrics? I'm following this tutorial to fine tune Faster RCNN model, during training process a lot of statistics are produced however I don't know how to interpret them. what are major characteristics to look at ? How to characterize my model performance ? Here is an example of output. Epoch: [6] [ 10/119] eta: 0:01:13 lr: 0.000050 loss: 0.4129 (0.4104) loss_classifier: 0.1277 (0.1263) loss_box_reg: 0.2164 (0.2059) loss_objectness: 0.0244 (0.0309) loss_rpn_box_reg: 0.0487 (0.0473) time: 0.6770 data: 0.1253 max mem: 3105 Epoch: [6] [ 20/119] eta: 0:01:07 lr: 0.000050 loss: 0.4165 (0.4302) loss_classifier: 0.1277 (0.1290) loss_box_reg: 0.2180 (0.2136) loss_objectness: 0.0353 (0.0385) loss_rpn_box_reg: 0.0499 (0.0491) time: 0.6843 data: 0.1265 max mem: 3105 Epoch: [6] [ 30/119] eta: 0:01:00 lr: 0.000050 loss: 0.4205 (0.4228) loss_classifier: 0.1271 (0.1277) loss_box_reg: 0.2125 (0.2093) loss_objectness: 0.0334 (0.0374) loss_rpn_box_reg: 0.0499 (0.0484) time: 0.6819 data: 0.1274 max mem: 3105 Epoch: [6] [ 40/119] eta: 0:00:53 lr: 0.000050 loss: 0.4127 (0.4205) loss_classifier: 0.1209 (0.1265) loss_box_reg: 0.2102 (0.2085) loss_objectness: 0.0315 (0.0376) loss_rpn_box_reg: 0.0475 (0.0479) time: 0.6748 data: 0.1282 max mem: 3105 Epoch: [6] [ 50/119] eta: 0:00:46 lr: 0.000050 loss: 0.3973 (0.4123) loss_classifier: 0.1202 (0.1248) loss_box_reg: 0.1947 (0.2039) loss_objectness: 0.0315 (0.0366) loss_rpn_box_reg: 0.0459 (0.0470) time: 0.6730 data: 0.1297 max mem: 3105 Epoch: [6] [ 60/119] eta: 0:00:39 lr: 0.000050 loss: 0.3900 (0.4109) loss_classifier: 0.1206 (0.1248) loss_box_reg: 0.1876 (0.2030) loss_objectness: 0.0345 (0.0365) loss_rpn_box_reg: 0.0431 (0.0467) time: 0.6692 data: 0.1276 max mem: 3105 Epoch: [6] [ 70/119] eta: 0:00:33 lr: 0.000050 loss: 0.3984 (0.4085) loss_classifier: 0.1172 (0.1242) loss_box_reg: 0.2069 (0.2024) loss_objectness: 0.0328 (0.0354) loss_rpn_box_reg: 0.0458 (0.0464) time: 0.6707 data: 0.1252 max mem: 3105 Epoch: [6] [ 80/119] eta: 0:00:26 lr: 0.000050 loss: 0.4153 (0.4113) loss_classifier: 0.1178 (0.1246) loss_box_reg: 0.2123 (0.2036) loss_objectness: 0.0328 (0.0364) loss_rpn_box_reg: 0.0480 (0.0468) time: 0.6744 data: 0.1264 max mem: 3105 Epoch: [6] [ 90/119] eta: 0:00:19 lr: 0.000050 loss: 0.4294 (0.4107) loss_classifier: 0.1178 (0.1238) loss_box_reg: 0.2098 (0.2021) loss_objectness: 0.0418 (0.0381) loss_rpn_box_reg: 0.0495 (0.0466) time: 0.6856 data: 0.1302 max mem: 3105 Epoch: [6] [100/119] eta: 0:00:12 lr: 0.000050 loss: 0.4295 (0.4135) loss_classifier: 0.1171 (0.1235) loss_box_reg: 0.2124 (0.2034) loss_objectness: 0.0460 (0.0397) loss_rpn_box_reg: 0.0498 (0.0469) time: 0.6955 data: 0.1345 max mem: 3105 Epoch: [6] [110/119] eta: 0:00:06 lr: 0.000050 loss: 0.4126 (0.4117) loss_classifier: 0.1229 (0.1233) loss_box_reg: 0.2119 (0.2024) loss_objectness: 0.0430 (0.0394) loss_rpn_box_reg: 0.0481 (0.0466) time: 0.6822 data: 0.1306 max mem: 3105 Epoch: [6] [118/119] eta: 0:00:00 lr: 0.000050 loss: 0.4006 (0.4113) loss_classifier: 0.1171 (0.1227) loss_box_reg: 0.2028 (0.2028) loss_objectness: 0.0366 (0.0391) loss_rpn_box_reg: 0.0481 (0.0466) time: 0.6583 data: 0.1230 max mem: 3105 Epoch: [6] Total time: 0:01:20 (0.6760 s / it) creating index... index created! 
Test: [ 0/59] eta: 0:00:15 model_time: 0.1188 (0.1188) evaluator_time: 0.0697 (0.0697) time: 0.2561 data: 0.0634 max mem: 3105 Test: [58/59] eta: 0:00:00 model_time: 0.1086 (0.1092) evaluator_time: 0.0439 (0.0607) time: 0.2361 data: 0.0629 max mem: 3105 Test: Total time: 0:00:14 (0.2378 s / it) Averaged stats: model_time: 0.1086 (0.1092) evaluator_time: 0.0439 (0.0607) Accumulating evaluation results... DONE (t=0.02s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.643 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.079 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.096 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.333 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.333 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Epoch: [7] [ 0/119] eta: 0:01:16 lr: 0.000050 loss: 0.3851 (0.3851) loss_classifier: 0.1334 (0.1334) loss_box_reg: 0.1845 (0.1845) loss_objectness: 0.0287 (0.0287) loss_rpn_box_reg: 0.0385 (0.0385) time: 0.6433 data: 0.1150 max mem: 3105 Epoch: [7] [ 10/119] eta: 0:01:12 lr: 0.000050 loss: 0.3997 (0.4045) loss_classifier: 0.1250 (0.1259) loss_box_reg: 0.1973 (0.2023) loss_objectness: 0.0292 (0.0303) loss_rpn_box_reg: 0.0479 (0.0459) time: 0.6692 data: 0.1252 max mem: 3105 Epoch: [7] [ 20/119] eta: 0:01:07 lr: 0.000050 loss: 0.4224 (0.4219) loss_classifier: 0.1250 (0.1262) loss_box_reg: 0.2143 (0.2101) loss_objectness: 0.0333 (0.0373) loss_rpn_box_reg: 0.0493 (0.0484) time: 0.6809 data: 0.1286 max mem: 3105 AI: Region Proposal Network is a subcomponent of the Fast RCNN and Faster RCNN architectures. It proposes candidate boxes and scores whether there is an object in this regions. RPN loss and objectness loss must be losses of this predictions. Regressor loss is the loss of the prediction of bounding box coordinates, and classifier loss is the loss of prediction of object classes in bounding boxes. IOU is acronym for intersection over union, and it gives how much bounding boxes are overlapped. In RPN it is calculated between suggested boxes(anchors), and ground truths. Higher IOU scores means suggested box includes an object of interest. Average precision is the average of the areas under the precision recall curve for each of the object classes.
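As background for the IoU numbers in the log above, here is a small, generic sketch of how intersection over union is computed for two boxes given as (x1, y1, x2, y2); this is only an illustration and not the exact code used inside torchvision:

def iou(box_a, box_b):
    # boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.1428...

The AP@[IoU=0.50] row in the evaluation output, for example, counts a detection as correct when this overlap with a ground-truth box is at least 0.5.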
H: Organizing a csv file of multiple datasets into a list of Pandas dataframes I have a csv file containing results from a Computational Fluid Dynamics (CFD) simulation (a sample of my csv file is attached as a google drive link; file size: 1,392KB). In particular, the csv file has information about multiple streamlines (the number of streamlines may reach 1000 depending on the case). All the data for all the streamlines are saved back to back in the csv file (so there is no empty row or anything to tell us the end of one streamline and the start of the next one). The only way I can distinguish streamlines from each other is that when the value in the column "IntegrationTime" is zero, it indicates the start of a new streamline, until we hit another zero in the "IntegrationTime" column which is the start of the next streamline. I need to read this csv file, and organize its data into a list of Pandas dataframes, like: streamlineList = [df_for_streamline_1, df_for_streamline_2, ...., df_for_streamline_N] Note (extra question here): This is not crucial but would be nice to have: if you look at the end of my csv file, you see multiple rows where IntegrationTime is zero (100 rows to be exact). Preferably, I don't need these lines to be included in my final list of data frames. Can somebody help me find a way to do this? https://drive.google.com/file/d/1lJhOJadGrno1C-KZOUqxV-KNkaOHIJNk/view?usp=sharing AI: It is possible to solve this problem procedurally by thinking line-by-line or streamline-by-streamline. For each streamline that can be matched with IntegrationTime == 0.0, extract the slice from the data frame, and append the slice to an output list (if it has more than 1 data point). Code like the following should address this problem:

import pandas as pd

# read the dataset
df = pd.read_csv("vel.csv")

# get row indexes where IntegrationTime is zero (start of each streamline)
start_index_list = df.loc[df['IntegrationTime']==0.0].index.values

stream_line_list = []  # the output list
for i in range(len(start_index_list)):
    # for each streamline, obtain the slice of the original dataframe that corresponds to it
    start_index = start_index_list[i]
    end_index = None
    if i+1 < len(start_index_list):
        end_index = start_index_list[i+1]-1
    stream_line_df = df.loc[start_index:end_index]
    # only append streamlines with more than 1 data point
    if len(stream_line_df) > 1:
        stream_line_list.append(stream_line_df)

print(f"number of complete streamlines found: {len(stream_line_list)}")
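A more compact alternative, sketched here under the same assumption that a zero IntegrationTime marks the start of a streamline, uses groupby on a cumulative counter instead of an explicit loop:

import pandas as pd

df = pd.read_csv("vel.csv")
streamline_id = (df["IntegrationTime"] == 0.0).cumsum()   # increases by 1 at each new streamline
stream_line_list = [
    group for _, group in df.groupby(streamline_id)
    if len(group) > 1                                      # drops the trailing single-row "streamlines"
]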
H: Obtaining column values for a dataframe from another dataframe based on a common column variable I have 2 dataframes: df1 = pd.DataFrame({"animal": [Cat, Dog, Rat, Bull, Dog, Bull, Bull, Dog, Cat, Rat, Dog], "lifeSpan": [2, 4, 6, 0, 4, 0, 0, 4, 2, 6, 4]}) df2 = pd.Dataframe({"animal":[Bull, Cat, Rat, Dog]}) The "LifeSpan" value is unique to each i in df1["animal"]. I need a third dataframe that mentions the "lifeSpan" value in a separate column against each j in df2["animal"] based on the 'lifespan' values in df1. AI: Just use pandas.merge to combine the two dataframes: import pandas as pd df1 = pd.DataFrame({"animal": ["Cat", "Dog", "Rat", "Bull", "Dog", "Bull", "Bull", "Dog", "Cat", "Rat", "Dog"], "lifeSpan": [2, 4, 6, 0, 4, 0, 0, 4, 2, 6, 4]}) df2 = pd.DataFrame({"animal":["Bull", "Cat", "Rat", "Dog"]}) df3 = df2.merge(df1.drop_duplicates(), on="animal") This returns the following dataframe: animal lifeSpan Bull 0 Cat 2 Rat 6 Dog 4
H: How to make a linear model with a constant value in R? I'm working on an unassessed homework problem from unpublished course notes of a statistics module from a second year university mathematics course. I'm trying to plot a 2-parameter full linear model and a 1-parameter reduced linear model for the same data set. I can't figure out how to plot the 1-parameter model; all attempts so far have either given errors or a non-constant slope. xs <- c(0,1,2) ys <- c(0,1,1) Data <- data.frame(x = xs, y = ys) mf <- lm(y ~ x, data = Data) # model_full mr <- lm(y = mean(y), data = Data) # model_reduced; this is the bit I can't get to work attach(Data) plot(x, y, xlab="x", ylab="y") abline(mf$coefficients[1], mf$coefficients[2]) abline(mr$coefficients[1], mr$coefficients[2]) AI: To use only a single parameter, which will be a constant, you have to specify the lm formula as follows: y ~ 1. R will then fit the model, where the coefficient for this constant will be equal to the mean of y, see also this stats stackexchange answer.
H: Different number of features in train vs test when using Label Encoding This is not a duplicate of Different number of features in train vs test There are some categorical columns in my data, and the cardinality for each of them is large, so I chose to use LabelEncoding over OneHotEncoding. However, some categories in the validation set do not appear in the training set, and they cause my model to perform very poorly. I was scraping the internet but couldn't find a way. Is there any way I can use to solve this issue? Thank you in advance. AI: It's a mistake to use LabelEncoding for a categorical feature, it should be used only for a categorical target variable. This is because it converts values to integers, hence introducing an arbitrary order over the values. It's not the values which don't appear in the training set which cause the poor performance (you can check), it's very likely because your model overfits: since there are many different values in the features, you would need a massive number of instances so that the model gets a representative sample for every value. Of course data is never like that, and it's clear from your description that some values occur too rarely (that's why some occur only in the test set). The solution is to simplify the data, so that the model doesn't rely on patterns which appear by chance in the training set: Replace values which appear rarely with a special value, e.g. RARE_VALUE. Try different thresholds for the minimum frequency. Encode categorical features with one hot encoding (OHT). Since the rare values were removed, the number of OHT features will be lower. In order to avoid overfitting, the ratio instances / features should be high enough. In case there are still values in the test set which don't occur in the training set, replace them with your special value RARE_VALUE.
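A hedged sketch of the two steps in pandas/scikit-learn; the column name and frequency threshold are arbitrary placeholders, and the rare values are determined from the training data only:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

MIN_COUNT = 30  # hypothetical frequency threshold

# 1. replace rare values with a special token, based on the training data only
counts = train["category"].value_counts()
rare = counts[counts < MIN_COUNT].index
train["category"] = train["category"].where(~train["category"].isin(rare), "RARE_VALUE")
keep = test["category"].isin(counts.index) & ~test["category"].isin(rare)
test["category"] = test["category"].where(keep, "RARE_VALUE")

# 2. one-hot encode; handle_unknown="ignore" protects against anything still unseen
enc = OneHotEncoder(handle_unknown="ignore")
X_train = enc.fit_transform(train[["category"]])
X_test = enc.transform(test[["category"]])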
H: Image classification vs medical grading problems For image classification problems like cats vs dogs, the output layer has 2 units. Image classification problems like diabetic retinopathy seem to be more of a grading problem. Although the targets range from 0 to 4 (signifying the severity of the condition), is it better to have 1 unit in the output layer, or 5, for these kinds of problems? I have seen Kaggle kernels where both are utilized. AI: The number of outputs will depend on the task and the number of targets. In the case of medical grading, you can either interpret the problem as: a classification, i.e. each grade is a class (5 outputs, one for each class); or a regression: predict a grade and round the result (1 output). Make sure to check the loss function for each task. That will usually tell you how the code writer decided to approach the problem. To come back to your example, the first task should have a classification loss (cross-entropy for instance) and the second a regression loss (MSE). Also check the last layer's activation function: usually softmax/sigmoid for classification and None for regression. It's complicated to say what would work without seeing the data. A recent competition (Jane Street Market Prediction) also saw people approach the problem both as a classification and a regression. The highest rated public notebook on the private set so far is a regression-based approach. I suspect that this happened because the classification (of whether a stock is going to have a positive return in the future) requires binarizing the labels, which basically leads to information loss.
H: Using Sklearn's predefined split I am working on a binary classification task using SVM. The dataset is quite large so I don't want to use k-fold CV for parameter tuning, but instead a simple train-validation-test split. I have done the following: X_train, X_test, y_train, y_test = train_test_split( X, y, stratify = y, test_size=0.2, random_state=1) X_train, X_val, y_train, y_val = train_test_split( X_train, y_train, stratify = y_train, test_size=0.25, random_state=1) So I have a 60-20-20 training-validation-test split. Since my validation set is predefined I want to use Sklearn's predefinedsplit. So I need to get the indices of my training and validation samples and set the validation indices to 0 and training indices to -1 so I tried the bottom answer of this question: split_index = [0 if x in X_val.index else -1 for x in X_train.index] But this returns a list of only -1's. I am unsure where I am going wrong. All my X and y _train, _val, _test are dataframes so .index can be applied. I have printed out X_train.index and X_val.index and both return Int64Index arrays of different lengths. I also tried using Hypopt as one of the answers in the above link mentioned but it seems to be broken at the moment. Any suggestions as to how I should proceed? AI: The result that all values are equal to -1 is to be expected, since you're checking if the indices from your training set occur in your validation set, which by definition is not the case since your train and validation sets should not have the same observations. If you want to add the 0/1 indicator to you original dataset you have to use X instead of X_train: split_index = [0 if x in X_val.index else -1 for x in X.index] This will set the index to 0 if it occurs in X_val and -1 for all other cases.
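For completeness, a hedged sketch of the usual PredefinedSplit pattern: concatenate the training and validation parts, mark training rows with -1 and validation rows with 0, and pass the resulting split to the search. Variable names follow the snippet above, and the SVC/parameter grid are placeholders:

import numpy as np
import pandas as pd
from sklearn.model_selection import PredefinedSplit, GridSearchCV
from sklearn.svm import SVC

X_trval = pd.concat([X_train, X_val])
y_trval = pd.concat([y_train, y_val])

# -1 means "never used as validation", 0 means "belongs to validation fold 0"
test_fold = np.concatenate([np.full(len(X_train), -1), np.zeros(len(X_val), dtype=int)])
ps = PredefinedSplit(test_fold)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=ps)
search.fit(X_trval, y_trval)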
H: Integer encoding and weighing when one feature consists of more names Hello, I am trying to make a content-based movie recommendation system and one feature is the genre of the movie. I will give an integer number to each genre randomly. However, some movies are of more than one genre. I will use tf-idf for weighting these features. However, I am very confused about what to do when a movie is a horror and action and comedy movie at the same time. Should I multiply or add these weighted features? I have no idea. Also, I do not know why we use weighting in the first place at all. Can you help me get through this? By the way, let me say that each movie will be regarded as a document and then the tf-idf calculation is done on them. AI: That's a good question. This task would be referred to as multi-label encoding. Basically, if a movie belongs to several genres, you one-hot encode each genre and add the vectors. For instance, if there were only 6 genres (horror, romance, action, adventure, comedy, fantasy), a movie that is horror, action and comedy (The Dead Don't Die?) would be built from: horror = [1, 0, 0, 0, 0, 0], action = [0, 0, 1, 0, 0, 0], comedy = [0, 0, 0, 0, 1, 0]. So a movie that belongs to all three would be encoded as [1, 0, 1, 0, 1, 0]. If a movie has only one genre, its encoding is simply the one-hot encoded label. You can perform this task in scikit-learn with the MultiLabelBinarizer.
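A short sketch with scikit-learn's MultiLabelBinarizer (the movie list is a made-up example):

from sklearn.preprocessing import MultiLabelBinarizer

movies = [
    {"horror", "action", "comedy"},   # e.g. The Dead Don't Die
    {"romance", "comedy"},
    {"fantasy"},
]

mlb = MultiLabelBinarizer()
encoded = mlb.fit_transform(movies)
print(mlb.classes_)   # column order of the genres
print(encoded)        # one row per movie, 1 where the genre applies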
H: Audio not saving in google colab # all imports from IPython.display import Javascript from google.colab import output from base64 import b64decode RECORD = """ const sleep = time => new Promise(resolve => setTimeout(resolve, time)) const b2text = blob => new Promise(resolve => { const reader = new FileReader() reader.onloadend = e => resolve(e.srcElement.result) reader.readAsDataURL(blob) }) var record = time => new Promise(async resolve => { stream = await navigator.mediaDevices.getUserMedia({ audio: true }) recorder = new MediaRecorder(stream) chunks = [] recorder.ondataavailable = e => chunks.push(e.data) recorder.start() await sleep(time) recorder.onstop = async ()=>{ blob = new Blob(chunks) text = await b2text(blob) resolve(text) } recorder.stop() }) """ def record(sec=5): display(Javascript(RECORD)) s = output.eval_js('record(%d)' % (sec*1000)) b = b64decode(s.split(',')[1]) with open('audio.wav','wb') as f: f.write(b) return 'audio.wav' # or webm ? The above is the code for audio recording in google colab. But the audio file is not saved in google colab and showing after running the code. kindly help? AI: You are only defining the record function but never actually running it. Adding record() to the end of the code works for me in Google Colab and saves the audio to the audio.wav file.
H: 3 images as one input in CNN (U-Net) I have been advised by my supervisor that if my U-Net segmentation network has RGB images at the input then I could use the channels for different images - median filter for R, normalization for G, canny edge detection for B (example). I have no idea how to do that. I tried to find something similar here but unfortunately without success. I would be grateful if somebody could explain this to me in detail, I'm new to DL. Thanks AI: The inputs to a U-Net are normally images with 3 (or 4) channels, as images have a red, green, and blue channel. It is however technically possible to use other images (or arrays for that matter) as the input, with a larger number of channels. If you wanted to use the three different methods that you mention, you would first have to apply all three methods to the original image, each of which would return an array with 1 or more channels. You then have to stack them in the channels direction, which would result in an array of shape HxWxC, where C is the total number of channels. The total number of channels in the input then depends on the image preprocessing methods you use and the number of channels of the original image.
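A hedged sketch of the stacking step for a single grayscale image, using scipy/scikit-image for the three preprocessing methods; the exact filters and parameters are placeholders, and the dummy image only stands in for a real one. The important part is the np.stack along the channel axis:

import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import canny

img = np.random.rand(128, 128)                           # stand-in for a real grayscale image (H, W)

ch_r = median_filter(img, size=3)
ch_g = (img - img.mean()) / (img.std() + 1e-8)           # simple normalization
ch_b = canny(img.astype(float)).astype(np.float32)       # binary edge map

x = np.stack([ch_r, ch_g, ch_b], axis=-1)                # shape (H, W, 3), one "image" per channel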
H: Vectorized method to find matching values between two columns I'm trying to locate the most recent rows within my Dataframe that contain the same values in two separate columns. Presently, I am doing this slowly with looping, but I suspect there's a way to cleverly use the apply method or some other vectorized function to do this faster. My present code:

def enumerate_matching(df):
    a = list(df['A'])
    b = list(df['B'])
    matching = []
    for i in range(0, len(a)-1):
        for j in range(i+1, len(b)):
            if a[i] == b[j]:
                matching.append(i)
                matching.append(i+j)
                break
    return matching

Is there a faster method to do this? AI: You could use set to get the intersection (on average this costs time proportional to the size of the smaller set, since each membership check is a constant-time hash lookup):

a = set(df['A'])
b = set(df['B'])
a.intersection(b)
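If you also need the row positions rather than just the matching values, a vectorized pandas idiom along the same lines is isin; a sketch:

import numpy as np

common = df['A'].isin(df['B'])            # boolean mask: rows whose A value also occurs in B
matching_positions = np.flatnonzero(common)
matching_rows = df[common]                # the rows themselves, e.g. to pick the most recent ones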
H: How can I avoid requiring global information for performing regression on meter variables? Note: With a meter variable a timestamped value is the sum of all previous differences plus a difference to the most recent value. Think of a electricity meter counting the use of energy. The goal here is to perform some form of regression (e.g. a Random Forest method) on a data series of a meter variable and then use the resulting model to fix gaps in the data series and possibly do further analysis on the data, for example removing noise from a faulty sensor. However the possible patterns in the data are likely periodic on timescales smaller than the entire series. Thus we're not interested in modeling the sum of the values. We transform the data by calculating $$\delta_i=\frac{v_i-v_{i-1}}{t_i - t_{i-1}}$$ and then perform the regression on $\delta$. With the generated model $m(t)$ we can calculate a missing value $v_i$ as $$v_i = v_{i-1} + (t_i - t_{i-1})\cdot m(t).$$ Easy, right? But now let's throw in some serious noise. Depending on the chosen regression method this doesn't matter much and only makes the predictions worse, but still useful. But what if we want to calculate a missing $v_i$ and $v_{i-1}$ happens to be some bogus value? The model doesn't mind, but for the calculation of $v_i$ we have to assume that $v_{i-1}$ is correct. Is there a way around that? Is it possible to calculate missing values using only (time-)local information? AI: Your "missing value" formula mentions $v_{i-1}$, and you seem to be suggesting that you'd prefer not to use that. Ok, fair enough. So use an estimate, $\hat{v}_{i-1}$, which is based on weighted regression of $v_{i-1}$ and several preceding values. Give greater weights to more recent values. You may wish to implement a classifier that labels each reading as "good" or "bogus". Part of that is straightforward, as you already have a "missing" label for some of your samples. The classifier would additionally discard readings that swing implausibly high, or that show physically impossible behavior like having negative derivative. You don't mention the generative source of bogus readings. If they are due to noise events on the digital channel used for obtaining each reading, then they don't affect the data source and will soon average out. Here is another technique. Suppose you're willing to suffer higher readout latency in exchange for numeric accuracy. Delaying by one sample would let you compute $\hat{v}_{i}$ as median of three values: $v_{i-1}$, $v_{i}$, $v_{i+1}$. Similarly, you could delay by two and find median of five values, or by three with median of seven values.
H: What are the possible approaches to fixing Overfitting on a CNN? Currently I am trying to make a cnn that would allow for age detection on facial images. My dataset has the following shape where the images are grayscale. (50000, 120, 120) - training (2983, 120, 120) - testing And my model currently looks like the following - I've been testing/trying different methods. model = Sequential() model.add(Conv2D(64, kernel_size=3, use_bias=False, input_shape=(size, size, 1))) model.add(BatchNormalization()) model.add(Activation("relu")) model.add(Conv2D(32, kernel_size=3, use_bias=False)) model.add(BatchNormalization()) model.add(Activation("relu")) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, use_bias=False)) model.add(BatchNormalization()) model.add(Activation("relu")) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax')) #TODO: Add in a lower learning rate - 0.001 adam = optimizers.adam(lr=0.01) model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=number_of_epochs, verbose=1) After running my data on just 10 epochs I started to initially see decent values but at the end of the run my results were the following and it has me concerned that my model is definitely over fitting. How many epochs: 10 Train on 50000 samples, validate on 2939 samples Epoch 1/10 50000/50000 [==============================] - 144s 3ms/step - loss: 1.7640 - acc: 0.3625 - val_loss: 1.6128 - val_acc: 0.4100 Epoch 2/10 50000/50000 [==============================] - 141s 3ms/step - loss: 1.5815 - acc: 0.4059 - val_loss: 1.5682 - val_acc: 0.4059 Epoch 3/10 50000/50000 [==============================] - 141s 3ms/step - loss: 1.5026 - acc: 0.4264 - val_loss: 1.6673 - val_acc: 0.4158 Epoch 4/10 50000/50000 [==============================] - 141s 3ms/step - loss: 1.3996 - acc: 0.4641 - val_loss: 1.5618 - val_acc: 0.4209 Epoch 5/10 50000/50000 [==============================] - 141s 3ms/step - loss: 1.2478 - acc: 0.5226 - val_loss: 1.6530 - val_acc: 0.4066 Epoch 6/10 50000/50000 [==============================] - 141s 3ms/step - loss: 1.0619 - acc: 0.5954 - val_loss: 1.6661 - val_acc: 0.4086 Epoch 7/10 50000/50000 [==============================] - 141s 3ms/step - loss: 0.8695 - acc: 0.6750 - val_loss: 1.7392 - val_acc: 0.3770 Epoch 8/10 50000/50000 [==============================] - 141s 3ms/step - loss: 0.7054 - acc: 0.7368 - val_loss: 1.8634 - val_acc: 0.3743 Epoch 9/10 50000/50000 [==============================] - 141s 3ms/step - loss: 0.5876 - acc: 0.7848 - val_loss: 1.8785 - val_acc: 0.3767 Epoch 10/10 50000/50000 [==============================] - 141s 3ms/step - loss: 0.5012 - acc: 0.8194 - val_loss: 2.2673 - val_acc: 0.3981 Model Saved I assume the issue might be related to the number of images I have for each output class, but other then that I am a bit stuck in moving forward. Is there something wrong in my understanding/implementation? Any advice or critique would be well appreciated this is more of a learning project for me. AI: Try to use dropout after your dense layers not after maxpooling layers. Whatever comes before dense layers can be considered as the inputs of a classification layer. So keep them otherwise it somehow means you are loosing appropriate information. You should also be aware that you should not use dropout after the last layer. Also you can add another dense layer, two hidden dense layers, for classification. 
It seems your data is not easy to learn.
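A hedged sketch of what the suggested classifier head could look like, with dropout after the dense layers rather than after pooling and one extra hidden dense layer; it assumes the convolutional part of the model and the layer imports from the snippet above, and the layer sizes are only indicative:

model.add(MaxPooling2D(pool_size=(2, 2)))   # no dropout directly after pooling
model.add(Flatten())
model.add(Dense(256, use_bias=False))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(128, use_bias=False))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))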
H: imblearn error installing smote I wanna install smote from imblearn package and I got the Following error: ImportError Traceback (most recent call last) <ipython-input-10-77606507c62c> in <module>() 66 len(data[data["num"]==0]) 67 #balancing dataset ---> 68 from imblearn.over_sampling import SMOTE 69 import matplotlib.pyplot as plt 70 sm = SMOTE(random_state=42) ~\Anaconda3\lib\site-packages\imblearn\__init__.py in <module>() 33 """ 34 ---> 35 from .base import FunctionSampler 36 from ._version import __version__ 37 ~\Anaconda3\lib\site-packages\imblearn\base.py in <module>() 17 from sklearn.utils import check_X_y 18 ---> 19 from .utils import check_sampling_strategy, check_target_type 20 from .utils.deprecation import deprecate_parameter 21 ~\Anaconda3\lib\site-packages\imblearn\utils\__init__.py in <module>() 5 from ._docstring import Substitution 6 ----> 7 from ._validation import check_neighbors_object 8 from ._validation import check_target_type 9 from ._validation import check_ratio ~\Anaconda3\lib\site-packages\imblearn\utils\_validation.py in <module>() 12 13 from sklearn.base import clone ---> 14 from sklearn.neighbors.base import KNeighborsMixin 15 from sklearn.neighbors import NearestNeighbors 16 from sklearn.externals import six ~\Anaconda3\lib\site-packages\sklearn\neighbors\__init__.py in <module>() 7 from .kd_tree import KDTree 8 from .dist_metrics import DistanceMetric ----> 9 from .graph import kneighbors_graph, radius_neighbors_graph 10 from .unsupervised import NearestNeighbors 11 from .classification import KNeighborsClassifier, RadiusNeighborsClassifier ~\Anaconda3\lib\site-packages\sklearn\neighbors\graph.py in <module>() 5 # License: BSD 3 clause (C) INRIA, University of Amsterdam 6 ----> 7 from .base import KNeighborsMixin, RadiusNeighborsMixin 8 from .unsupervised import NearestNeighbors 9 ~\Anaconda3\lib\site-packages\sklearn\neighbors\base.py in <module>() 20 from .kd_tree import KDTree 21 from ..base import BaseEstimator ---> 22 from ..metrics import pairwise_distances_chunked 23 from ..metrics.pairwise import PAIRWISE_DISTANCE_FUNCTIONS 24 from ..utils import check_X_y, check_array, gen_even_slices ImportError: cannot import name 'pairwise_distances_chunked' from 'sklearn.metrics' (C:\Users\ASUS\Anaconda3\lib\site-packages\sklearn\metrics\__init__.py) AI: Try quitting and restarting ipython. imblearn requires scikit-learn >= 0.20 and sometimes the ipython runtime loads an older version of scikit-learn. If the issue still persists, then reinstall all packages together to make sure they are compatible with each other.
H: How is the min_rank used with the flights database I'm exploring the https://r4ds.had.co.nz/transform.html#add-new-variables-with-mutate r for data science handbook and don't really understand the min_rank() operator. Doing the exercises it asks to Find the 10 most delayed flights using a ranking function. How do you want to handle ties? Carefully read the documentation for min_rank(). here is the code that I performed that doesn't work. Please explain: min_rank(flights, dep_delay) AI: You can put the results of min_rank in to a new column, e.g.: f2 <- flights %>% mutate(rank = min_rank(dep_delay)) Where showing just the selected rows: f2 %>% select(dep_delay, rank) You get this: # A tibble: 336,776 x 2 dep_delay rank <dbl> <int> 1 2 208140 2 4 219823 3 2 208140 4 -1 164763 5 -6 48888 6 -4 94410 7 -5 69589 8 -3 119029 9 -3 119029 10 -2 143247 # ... with 336,766 more rows Here is another example where you can see all the rows. aa <- tibble(y = c(9, 8, 3, 4, 5, 7, 6), x = c(1, 2, 3, 4, 4, 4, 5)) aa %>% mutate(rank = min_rank(y)) Which gives us: # A tibble: 7 x 3 y x rank <dbl> <dbl> <int> 1 9 1 7 2 8 2 6 3 3 3 1 4 4 4 2 5 5 4 3 6 7 4 5 7 6 5 4
H: How to handle date data for Knn? I'm working on a project about predicting kickstarter project success(classification) and my dataset has many columns that could be used as features such as : state_changed_at, launched_at, created_at. Now the dataset has these features on unix timestamps. Do I need to convert dates to some other format? Can date data be used as a feature ? If so how do I handle them ? Do I keep them as a unix timestamp and try to scale/normalize them ? AI: Can date data be used as a feature? Yes. If so how do I handle them ? Think about your problem. Why should the date be a reasonable indicator for the success of a startup? Answering this question tells you also in which way you need to transform it. Most often, when I use some date information for models, I do the following: Day of the week: integer/one-hot encoding for Monday, Tuesday, ..., Sunday Month: integer/one-hot encoding for January, February, ..., December Hour of the day: integer/one-hot encoding for 0, ..., 23 Seconds/minutes/hours/days since XY: Usually normalized or at least scaled in some way
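A small pandas sketch of turning the unix timestamps into such features (the column names follow the question; the derived feature names are placeholders):

import pandas as pd

ts = pd.to_datetime(df["launched_at"], unit="s")   # unix seconds -> datetime

df["launch_dow"] = ts.dt.dayofweek                 # 0 = Monday, ..., 6 = Sunday
df["launch_month"] = ts.dt.month
df["launch_hour"] = ts.dt.hour
df["days_to_launch"] = (df["launched_at"] - df["created_at"]) / 86400  # elapsed days

Because KNN is distance based, remember to scale (and, for the integer-encoded cyclic features, possibly one-hot encode) these derived columns before fitting.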
H: How to calculate prediction error in an LSTM in Keras I have an LSTM which I have constructed and run in Keras using Python. I use this model to predict $n$ points into the future for a time series forecasting problem. When I use a method such as ARIMA to make the forecast I am able to generate prediction errors for my predictions, as the model is fit by maximising the likelihood (using AIC for model selection, for example). Is there a way that is currently supported in Keras for me to generate prediction errors for my regression predictions? If there isn't, is there a way that I can calculate it myself? AI: You may use the technique explained in the article "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning". Very briefly, the technique consists of applying dropout at both training and prediction time. In Keras this can easily be done by passing the training argument in the call of the Dropout layer:

import keras

inputs = ...  # your model's input tensor, e.g. keras.Input(shape=(timesteps, n_features))
x = keras.layers.Dense(100)(inputs)
dp = keras.layers.Dropout(0.5)(x, training=True)   # keep dropout active at prediction time
outputs = keras.layers.Activation('relu')(dp)
model = keras.Model(inputs, outputs)

During prediction you get the mean and standard deviations:

import numpy as np

T = 1000  # do 1000 stochastic forward passes to estimate the uncertainty
predictions = np.array([model.predict(X_test) for _ in range(T)])
pred_mean = predictions.mean(axis=0)
pred_std = predictions.std(axis=0)

You can increase (or decrease) T if you need more (or less) precision.
H: NN for fuzzy classification What loss-function / optimizer to use for fuzzy classification problems? E.g: Four categories hot, mild, cold, freezing. Edit: I use one-hot encoding and have ~ 60 datapoints. AI: Adding to J.C. answer, please note that you don't have to stick with one-hot encoding. For your hot-mild-cold-freezing a target could also be 0,0.3,0.5,0.2.
H: Customer Targeting for CRM Marketing Campaign I need help or ideas to solve the business challenge below. Sample questions have been provided. A snapshot of the sample data has been attached below: AI: Can you share all available columns in the data set? It's hard to tell what data is there to use. Given the data you show, I would have started with some exploratory analysis: group the data per shop, check user activity (how many purchases they make per month/week, what the AOV, LTV, etc. are). Ideally, it's good to use email marketing, because it's cheaper than paid ads and you have a list of users that can be mailed - given the low costs of email marketing campaigns, you can basically target all users that stopped shopping in the given places and allowed you to send them mail. Additional approach: you can use paid remarketing ads and geo-based ads to recover lost clients - given the fact that you want to cut costs, it might be beneficial to retain only loyal customers (those with more than 2 or 3 transactions; this number should be based on your average sales per client). So, answering the questions: What are your recommendations for the campaign? Should we do it? How should we target and why? Should we do it? Select all users that were lost after opening the new shop and calculate the lost income per month (or week, based on your data) - you want to know how much money you can expect from recovered users, and this can be your marketing spend baseline (Recovered Income - Costs - Margin ~ Marketing Budget). Marketing Budget + Recovered Income will indicate whether it is logical to run the campaign. What are your recommendations for the campaign? I would start with email - cheap and effective; we can use targeting in emails based on what items users buy, their gender and age. If you have a segment of loyal users with high income, you can develop a dedicated email campaign based on their needs and give bigger discounts (it's good to know why they left in the first place). Is there any additional customer information that would help you? What is it? How would it help? Any data that will help to better segment users will be useful - age, gender, social status; all this info can influence email campaigns. Also it would be good to have data about the costs and effectiveness of paid remarketing campaigns - based on this data we can try to use remarketing for our loyal customers. Have both stores been impacted similarly? How has customer shopping behavior been affected? Split the data per shop, calculate statistics, make some good-looking and easy-to-understand graphs; I can't say more without the actual data. Hope this helps! EDIT Thanks for sharing the data, here is my opinion about this task: You can split the data into before/after the 25th of May, and target users whose sales dropped compared to the period before the 25th of May. Based on the Customer_Data dataset, it looks like you have users that can be mailed and emailed, so you can launch 2 campaigns for different users based on the Mailable_flag / Emailable_flag flags. Talking about losses: based on the given data it looks like your shops lost around 3k in sales and ~60 transactions for both shops after the 25th of May, and the losses are equal among users that can and cannot be emailed. This leads us to the point of "additional" data - ideally you want to know the past performance of mail and email campaigns to adjust the targeting. Also, one thing I haven't done - you should calculate LTV (basically how much money a user brings you per month); if some users have a very low LTV, you don't want to spend money advertising to them.
I`ve attached ipython notebook with brief workflow for you to check . Please, be aware that its just my ideas and they cant be totally wrong :)
H: What is the difference between reconstruction vs backpropagation? I was following a tutorial on understanding Restricted Boltzmann Machines (RBMs) and I noticed that they used both the terms reconstruction and backpropagation to describe the process of updating weights. They seemed to use reconstruction when referring to the links between the input and the first hidden layer and then backpropagation when referring to the links to the output layer. Are these terms used interchangeably or are they different concepts? AI: Reconstruction is used for the concept Restricted Boltzmann Machine (RBM), it describes a phase where the structure reconstructs (generates) visible samples from the hidden states of the layers. For more detail, you can refer to: https://stackoverflow.com/questions/4105538/restricted-boltzmann-machine-reconstruction Backpropagation is something different entirely; you can find backpropagation in Deep Neural Nets, Convolutional Nets, RBMs (in some way) and so on. Assume a Deep Neural Net with N hidden layers. Upon training we feed the input forward through the randomly initialized weights up to the last neuron. From the output of the last neuron, we calculate our loss using a cost function that calculates the error between predicted and the true output. This is forward propagation. Knowing the loss and the loss function, we start to take derivatives backwards to find gradients of weights and biases of each layer using the chain rule, all back to the input side. This is called backpropagation. After backpropagation, we update all of the weights and biases of N layers with the gradients for each layer we calculated. Then we do forward propagation again, backpropagation again, update again and all again until we achieve our goal (which is usually minimizing the error as solving hopefully a convex optimization problem). You can also refer to: https://www.youtube.com/watch?v=x_Eamf8MHwU for backpropagation Also a final note: Gradients of RBM algorithm are not able to be calculated via classic backpropagation, a method called as contrastive divergence is used for the calculation. You can also have a look at the RBM paper on this matter and more, by Geoffrey Hinton: https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf Hope I could help, good luck!
H: Machine learning Classification model for binary input and output data I have a large longitudinal dataset with 5 minute granularity for a period of around 30 months from thousands of households. I would like to classify them using a binary output (0/1) based on the input which is also a set of binary variables (sensors activated or not 0/1). I have a training dataset available with the labeled binary output (0/1) with binary inputs. I would like to know which machine learning model will be best for this type of case where both input and outputs are binary in nature. Whether Logistic regression is one of the options or not? AI: Your problem is one of "sequence classification" for which Recurrent Neural Networks (RNN) e.g. Long short-term memory (LSTM) are generally used. See here for a good example. and here for a technical paper. Here is a specialized package for sequence classification which uses convolutional neural networks (CNN). CPT algorithm, an accurate method for sequence prediction, can also be used here. A continuous output can easily be rounded to 0 or 1 to get binary result.
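A minimal Keras sketch of such a sequence classifier (the sequence length, number of sensors and layer sizes below are placeholders, not values from the question):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, n_sensors = 288, 12   # e.g. one day of 5-minute slots, 12 binary sensors
X = np.random.randint(0, 2, size=(1000, timesteps, n_sensors)).astype('float32')
y = np.random.randint(0, 2, size=(1000, 1))   # 1 = purchase, 0 = no purchase

model = Sequential()
model.add(LSTM(32, input_shape=(timesteps, n_sensors)))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)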
H: Evaluating performance of Generative Adverserial Network? What is the best way to evaluate performance of Generative Adverserial Network (GAN)? Perhaps measuring the distance between two distributions or maybe something else? AI: I think it depends on what exactly you're doing with the GANs. If you're generating images, the two most popular (to my knowledge) are the Inception Score [1] and Frechet Inception Distance [2]. GANs aren't my direct area of study, but I don't think you can miss starting with those two. Actually, Goodfellow himself endorsed FID (a year ago) on twitter, so I'd start there.
H: How to implement a Fourier Convolution layer in Keras? I'm currently investigating the paper FCNN: Fourier Convolutional Neural Networks. The main contribution of the paper is that CNN training is entirely shifted to the Fourier domain without loss of effectiveness. The proposed architecture looks as follows: The authors state that the implementation was done in Keras; however, it is not publicly available. I know I can define a Fourier transformation in the following way: model.add(layers.Lambda(lambda v: tf.real(tf.spectral.rfft(v)))) But this is not a Fourier convolution, right? How should I go on from here? AI: An FFT-based convolution can be broken up into 3 parts: an FFT of the input images and the filters, a bunch of element-wise products followed by a sum across input channels, and then an IFFT of the outputs (Source). Or as it is written in the paper: So, for a Fourier Convolution Layer you need to: Take the input layer and transform it to the Fourier domain: input_fft = tf.spectral.rfft2d(input) Take each kernel and transform it to the Fourier domain: weights_fft = tf.spectral.rfft2d(layer.get_weights()) Note: The Fourier domain "images" for the input and the kernels need to be of the same size. Perform element-wise multiplication between the input's Fourier transform and the Fourier transform of each of the kernels: conv_fft = keras.layers.Multiply()([input_fft, weights_fft]) Perform the inverse Fourier transformation to go back to the spatial domain: layer_output = tf.spectral.irfft2d(conv_fft) Note: this is pseudo-code; it will probably need some tuning for it to actually work.
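The identity behind the multiplication and inverse-transform steps is the convolution theorem: multiplying two signals' Fourier transforms element-wise and transforming back is equivalent to convolving them. A small self-contained NumPy check of that principle (1-D, zero-padded so the circular convolution matches the linear one; this only illustrates the idea and is not the Keras layer itself):
import numpy as np

signal = np.random.rand(32)   # toy "input"
kernel = np.random.rand(5)    # toy "filter"

# Pad both to the linear-convolution length so circular convolution == linear convolution
n = len(signal) + len(kernel) - 1
fft_product = np.fft.fft(signal, n) * np.fft.fft(kernel, n)
conv_via_fft = np.fft.ifft(fft_product).real

conv_direct = np.convolve(signal, kernel)
print(np.allclose(conv_via_fft, conv_direct))  # True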
H: Adapting Neural Network to new domain without labels Is there an approach for the following problem: Lets say, I trained a neural network on a big dataset for categorizing different fruits in $k$ classes. Afterwards I got a nice model, which performs very well. Now I want to use the model for categorizing fruits in the corresponding $k$ classes, as it was planned beforehand. Unfortunately the fruits I want to categorize now are all not ripe yet, but my training set consisted only of ripe fruits. Furthermore I have some pictures of these not ripe fruits, but no labels. How can I adapt my neural network to these slightly different domain with my pictures of not ripe fruits (and no labels!). Performance on the old task does not matter. The only thing I want, is categorizing not ripe fruits. My only Idea now is to use virtual adversarial training (VAT) for the unlabeled pictures. AI: I think those are one of the most cited papers: https://arxiv.org/pdf/1409.7495.pdf http://www.jmlr.org/papers/volume17/15-239/15-239.pdf http://openaccess.thecvf.com/content_cvpr_2017/paper/Tzeng_Adversarial_Discriminative_Domain_CVPR_2017_paper.pdf http://openaccess.thecvf.com/content_cvpr_2017/papers/Bousmalis_Unsupervised_Pixel-Level_Domain_CVPR_2017_paper.pdf
H: Tensorflow (or Keras) vs. Pytorch vs. some other ML library for implementing a CNN I am looking into implementing a convolutional neural network for a research problem. I've heard of deep learning libraries like Pytorch and Tensorflow and was hoping to get some additional information about their suitability for my needs. I haven't looked much into Pytorch, and have only briefly read about Tensorflow. I don't hear very nice things about Tensorflow in terms of ease of use. I hear Pytorch is easier to use. But there seem to be more tutorials for Tensorflow, and specifically for creating CNNs. What sort of questions should I be asking myself in determining which library would best suit my needs? AI: If you are looking for something easy to use and to read, definitely go for Keras. Example of a CNN in Keras:
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# input_shape and num_classes depend on your dataset, e.g. (28, 28, 1) and 10 for MNIST
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
So easy to read! Source: literally the first link when searching for "keras CNN" on Google. I really enjoy Keras because it's easy to read, easy to use, and has great documentation, and if you want to mess with things at a lower level you can do it by touching the backend of Keras (TensorFlow or Theano). EDIT (following your comment) Excellent blog: Keras vs Tensorflow
H: Want suggestions on choosing an open source embedded BI tool I need some suggestions from group members regarding an open source reporting/dashboarding tool that fulfills the specific requirements below, apart from some basic BI functionality: 1. Can be embedded inside a web page 2. Interactive (filters and one chart selection affecting other charts) 3. Able to offer an Excel download option for plotted data 4. Role-based user access You can suggest tools supporting all of the requirements or only some of them. AI: You can try Power BI - it can do everything mentioned (and even more, e.g. you can plug in R/Python code), but it's not open source, though you can go for a free single-user account. I believe Tableau can do the same and also has a "community" version, but I'm a fan of Power BI, so I can't tell you more about it. If you need a custom, built-from-scratch solution, you can go for Shiny - fully built on R, fully open source, with a free hosting option; it can be embedded as an iframe (which will probably look rough), but it is fully customizable and can be tuned the way you want it to be. Similar to Shiny, but with Python: Dash
H: How Can I Solve it? TypeError: fillna() got an unexpected keyword argument 'implace' I am trying to replace NaN values in a given dataset with this import pandas as pd import quandl import math df.fillna(-9999, implace=True) But I keep on getting this error: ** Traceback (most recent call last): File "regression.py", line 44, in df.fillna(0,implace=True) File "/home/compname/.local/lib/python3.6/site-packages/pandas/core/frame.py", line 3790, in fillna downcast=downcast, **kwargs) TypeError: fillna() got an unexpected keyword argument 'implace' ** AI: You have a typo, change implace for inplace (change the m for the n)
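For completeness, the corrected call (assuming df has already been loaded earlier in the script) is:
df.fillna(-9999, inplace=True)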
H: How to think about - and sometimes impute - geographic distances I have a dataset with one of the (important) features being the geographic distance from NYC. Of course, some of the values are missing... The goal is predicting whether people with certain attributes (proximity being one of them, and the typical age, sex, education, etc. being the others) will engage in an activity in NYC (e.g., visiting MoMA, taking in a Broadway show, moving to the city altogether, enrolling in an area school, things like that). My basic question is - missing values aside - is it correct to just take distances into account "as is", or should they somehow be divided between "driving/train distances" and "flying distances" (essentially, converting them into "the number of hours it takes someone to get to NYC by the most efficient means")? If we take Los Angeles and Richmond, VA as examples - the distance to NYC from LA is about 10 times that of Richmond; flight times are only 4 times longer, but flight time from LA and drive time from Richmond are approximately the same. So what's the right way to think about that? And once the right approach to distances is determined, how does one go about imputing distances for missing values? AI: I would say that for your goal, the time to NYC is better than the distance. Indeed, whether I'm 100 km or 10 km away, if it takes me an hour to get to the city center, it's the same burden to me. So I would advise you to use time as the metric rather than the distance itself, because of how logistics work. Then, for your missing measurements, the best approach is to compute an approximate geodesic time. This also makes sense because if a traveler wants to go from A to C but there is no direct path, he would first need to spend time getting to B and then going from B to C. So for the missing entries A->C, you can fill them with min(AB + BC + epsilon), where B ranges over the set of all available cities (I don't think more than 1 hop is something people would consider, so you don't need the full distance matrix -> Floyd-Warshall algorithm) and epsilon may be the time to go from one train station/airport to another.
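A minimal sketch of that one-hop imputation in Python (the city names, travel times and transfer overhead below are made-up illustrations, not real data):
# Known travel times to NYC, in hours (hypothetical values)
time_to_nyc = {"Richmond": 5.5, "Philadelphia": 1.5, "Boston": 3.5}
# Known travel times between other cities, in hours (hypothetical values)
time_between = {("Scranton", "Philadelphia"): 2.0, ("Scranton", "Boston"): 5.0}

def impute_time(city, transfer_overhead=0.5):
    """Impute a missing city->NYC time as the best one-hop route via a known city."""
    candidates = [t + time_to_nyc[b] + transfer_overhead
                  for (a, b), t in time_between.items()
                  if a == city and b in time_to_nyc]
    return min(candidates) if candidates else None

print(impute_time("Scranton"))  # 2.0 + 1.5 + 0.5 = 4.0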
H: Abbreviation in Orange's contingency table What do the scores ARI and AMI mean in Orange's contingency table? AI: ARI stands for adjusted Rand index and AMI stands for adjusted mutual information. They are metrics for clustering. Remark: ARI might take negative values. The AMI takes a value of 1 when the two partitions are identical and 0 when the MI between two partitions equals the value expected due to chance alone.
H: What would you do in this specific k-NN case I'd like to know what you would do in this specific and unrealistic case when applying k-NN with k = 1, k = 2 and k = 3. Class 1 individuals: [1,1] [2,2] [2,4] [2,5] Class 2 individuals: [3,1] [4,1] [4,2] Individual to classify: [2,1] Plot: I don't know if there is any criterion other than the generation order to choose the class it belongs to. AI: When k=3 it is a simple case: the 3 nearest examples share the same distance to the new sample. Two of them are from class 1 and a single example is from class 2, so the new sample will be classified as class 1. When k=2 it is trickier. Assuming that the 2 features (x,y) share similar importance for the classification (otherwise distances should be calculated with some predetermined weights), you can: Choose randomly 2 out of the 3 candidates (you are basically acknowledging that for this case your model doesn't have enough information for a classification based on 2-NN). Take all candidates into account (i.e. for cases like this you go to 3-NN). When k=1, you have the same options as in the k=2 case (you will still need to go up to 3-NN). In this specific case, because your 3 nearest neighbors are all at the same distance, I would choose to go up to 3-NN. However, for a similar case with 4 samples at the same distance, 2 of each class, I would choose randomly (not going to 5-NN).
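A quick way to see the tie is to compute the Euclidean distances directly (NumPy, using the points from the question):
import numpy as np

points = np.array([[1, 1], [2, 2], [2, 4], [2, 5],   # class 1
                   [3, 1], [4, 1], [4, 2]])          # class 2
labels = np.array([1, 1, 1, 1, 2, 2, 2])
query = np.array([2, 1])

dists = np.linalg.norm(points - query, axis=1)
order = np.argsort(dists)
print(list(zip(dists[order].round(2), labels[order])))
# The three closest points are all at distance 1.0: two from class 1, one from class 2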
H: Should I remove outliers if accuracy and Cross-Validation Score drop after removing them? I have a binary classification problem, which I am solving using Scikit's RandomForestClassifier. When I plotted the (by far) most important features, as boxplots, to see if I have outliers in them, I found many outliers. So I tried to delete them from the dataset. The accuracy and Cross-Validation dropped by approximately 5%. I had 80% accuracy and an Cross-Val-Score of 0.8 After removing the outliers from the 3 most important_features (RF's feature_importance) the accuracy and Cross-Val-Score dropped to 76% and 77% respectively. Here is a part of the description of my dataset: Here is an overview of my data: Here are the boxplots before removing the outliers: Here are the feature importances before removing outliers: Here is the accuracy and Cross-Val-Score: Accuracy score: 0.808388941849 Average Cross-Val-Score: 0.80710845698 Here is how I removed the outliers: clean_model = basic_df.copy() print('Clean model shape (before clearing out outliers): ', clean_model.shape) # Drop 'num_likes' outliers clean_model.drop(clean_model[clean_model.num_likes > (1938 + (1.5* (1938-125)))].index, inplace=True) print('Clean model shape (after clearing out "num_likes" outliers): ', clean_model.shape) # Drop 'num_shares' outliers clean_model.drop(clean_model[clean_model.num_shares > (102 + (1.5* (102-6)))].index, inplace=True) print('Clean model shape (after clearing out "num_shares" outliers): ', clean_model.shape) # Drop 'num_comments' outliers clean_model.drop(clean_model[clean_model.num_comments > (54 + (1.5* (54-6)))].index, inplace=True) print('Clean model shape (after clearing out "num_comments" outliers): ', clean_model.shape) Here are the shapes after removing the outliers: Clean model shape (before clearing out outliers): (6992, 20) Clean model shape (after clearing out "num_likes" outliers): (6282, 20) Clean model shape (after clearing out "num_shares" outliers): (6024, 20) Clean model shape (after clearing out "num_comments" outliers): (5744, 20) Here are the boxplots after removing the outliers (still have outliers somehow.. If I delete these too, I will have really few datapoints): Here is the accuracy and Cross-Val-Score after removing the outliers and using same model: Accuracy score: 0.767981438515 Average Cross-Val-Score: 0.779092230906 How come is removing the outliers drops the accuracy and F1-score? Should I just leave them in the dataset? Or remove the outliers that are to see in the 2nd boxplot (after removing the 1st outliers as shown above)? Here is my model: model= RandomForestClassifier(n_estimators=120, criterion='entropy', max_depth=7, min_samples_split=2, #max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=8, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=1, verbose=0, warm_start=False, class_weight=None, random_state=23) model.fit(x_train, y_train) print('Accuracy score: ', model.score(x_test,y_test)) print('Average Cross-Validation-Score: ', np.mean(cross_val_score(model, x_train, y_train, cv=5))) # 5-Fold Cross validation AI: As a rule of thumb, removing outliers without a good reason to remove outliers rarely does anyone any good. Without a deep and vested understanding of what the possible ranges exist within each feature, then removing outliers becomes tricky. 
Oftentimes, I see students/new hires plot box-plots or check the mean and standard deviation to determine an outlier, and if a point is outside the whiskers, they remove it. However, there exist a myriad of distributions in the world for which, if you did that, you would be removing perfectly valid data points. In your example, it looks like you're dealing with social media data. If I were to sample 1000 users from a social media database and then plot a box-plot to find "outliers" in the number of likes a post gets, I can imagine that there could be a few so-called outliers. For example, I expect my Facebook posts to get a handful of likes on any given day, but when my daughter was born, my post related to that got into the hundreds. That's an individual outlier. Also, within my sample of 1000 users, let's say I managed to get the user Justin Bieber and simply looked at his average number of likes. I would say that he's an outlier because he probably gets into the thousands. What outliers really mean is that you need to investigate the data more and integrate more features to help explain them. For example, integrating sentiment and contextual understanding of my post would explain why, on my daughter's birthday, I received hundreds of likes for that particular post. Similarly, incorporating Justin Bieber's verified status and large following may help explain why a user like him receives a large number of likes. From there you can move on to either building separate models for different demographics (average folks like me vs. people like Justin Bieber) or trying to incorporate more features. TL;DR. Don't remove outliers just because they are abnormal. Investigate them.
H: What's The Difference Between The Terms Predictor And Feature For the term 'predictor', I found the following definition: Predictor Variable: One or more variables that are used to determine or predict the target variable. Whereas Wikipedia contains the following definition of the word 'feature': Feature is an individual measurable property or characteristic of a phenomenon being observed. What is the difference between 'predictor' and 'feature' in machine learning? AI: Feature and predictor are used interchangeably in machine learning today though I must admit that it seems that feature is being used more than predictor. The definition is the one on Wikipedia which you have already mentioned. The term predictor comes from statistics, here one definition: An independent variable, sometimes called an experimental or predictor variable, is a variable that is being manipulated in an experiment in order to observe the effect on a dependent variable, sometimes called an outcome variable. and my favorite definition: A predictor variable explains changes in the response. In a nutshell: X columns: features, predictors, independent variables, experimental variables. y column(s): target, dependent variable, outcome, outcome variable.
H: Input shape in a multivariate RNN So I've seen this: Keras LSTM with 1D time series And this: Multi-dimentional and multivariate Time-Series forecast (RNN/LSTM) Keras I have many, many, many accountIDs, and 40 or more features associated with them for the start of each week since 2017. I'm attempting to predict expected profit next week (next time step) for a given accountID with a given history (the idea is to modify strategy features, pass them both to the RNN, and see which one has the higher expected profit once the model is trained). The suggestion provided in the second one I linked is that I make a sub-model for each accountID it seems, but I have many of them and new ones come in all the time. Should I strip out accountID, but still pass its whole history in as a sequence, and then just append it back after prediction? What if I only have a few weeks of data for some accounts, and years for others? The dataset would look roughly like this: (weekstart, accountID, x, y, z, p, q, ...), and each accountID would have many tuples like that, but they wouldn't all have the same number of tuples. For reference, I'm familiar with CNNs and standard neural networks, but this is my first attempt at using an RNN. AI: Ok... Take a step back. When you're dealing with RNNs your dataset should have a shape that looks like (nb_samples, nb_timesteps, nb_features) Translating this to your use case means that each account is a sample (what you'll iterate when doing mini-batching), each week is a timestep (what your rnn will iterate over) and your features are.... Features. Now... Don't use accountID as a feature. That's just an ID with no prediction power. Also try to add some seasonality features, like month or something. Does that help you out?
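A small sketch of getting from per-account histories of different lengths to the (nb_samples, nb_timesteps, nb_features) array the RNN expects (pure NumPy; the account count, feature count and history lengths are made up). Shorter histories are zero-padded at the front so the most recent weeks line up at the end:
import numpy as np

n_features = 40
# Hypothetical histories: one 2-D array (weeks x features) per account
histories = [np.random.rand(np.random.randint(4, 120), n_features) for _ in range(3)]

max_weeks = max(len(h) for h in histories)
batch = np.zeros((len(histories), max_weeks, n_features))
for i, h in enumerate(histories):
    batch[i, -len(h):, :] = h  # pad at the front, keep recent weeks at the end

print(batch.shape)  # (nb_samples, nb_timesteps, nb_features)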
H: Classification loss function: how to implement individual weights for each observation and class The problem I have to solve is a classification problem. The costs of a misclassification are very different (but known) for the various observations, so I plan to include them by assigning weights to each observation accordingly. My issue is that, additionally, the costs of misclassification are different for different classes (and these differences depend on the observations). So in theory, I would need to incorporate into the loss function weights $w_{ij}$ for each observation $i$ and class $j$. But I have no idea how to do this, for example for a neural network in Keras (or with any other classifier I didn't build from scratch myself). Any ideas on how to do this would be greatly appreciated! AI: What you are referring to is a weighted loss function. In Keras there are two built-in mechanisms that cover most of this: Per-class weights: define a dictionary mapping each label to its weight, e.g. class_weight = {0: 0.5, 1: 0.2, 2: 1.5}, and pass it to the fit method: model.fit(X, y, class_weight=class_weight). Per-observation weights: build an array with one weight per training sample and pass it to fit as well: model.fit(X, y, sample_weight=sample_weights). (Note that the loss_weights argument of compile is something different - it weights the losses of the different outputs of a multi-output model, not the classes.) If the cost really is a full matrix $w_{ij}$ depending on both the observation and the class, the built-in arguments are not enough and you need a custom loss function that multiplies each term of the cross-entropy by the corresponding weight. For the common case where you only need to weight the true class of each observation, computing sample_weight[i] = w[i, y[i]] and passing it via sample_weight is usually sufficient. The loss that is minimized is then the weighted sum of the individual sample losses.
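A minimal sketch of the sample_weight route (the cost matrix, data and model below are made up purely for illustration):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

n_samples, n_features, n_classes = 100, 8, 3
X = np.random.rand(n_samples, n_features)
y = np.random.randint(0, n_classes, n_samples)

# Hypothetical per-observation, per-class misclassification costs w_ij
cost_matrix = np.random.rand(n_samples, n_classes)
# Weight each sample by the cost associated with its true class
sample_weights = cost_matrix[np.arange(n_samples), y]

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(n_features,)))
model.add(Dense(n_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, to_categorical(y, n_classes), sample_weight=sample_weights, epochs=5, verbose=0)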
H: Plot the histogram of purchases I have a dataframe with 34,154,695 obs. In the dataset, a Class variable with value 0 indicates "not purchased" and 1 indicates "purchase".
> str(data)
'data.frame': 34154695 obs. of 5 variables:
$ SessionID: int 1 1 1 2 2 2 2 2 2 3 ...
$ Timestamp: Factor w/ 34069144 levels "2014-04-01T03:00:00.124Z",..: 1452469 1452684 1453402 1501801 1501943 1502207 1502429 1502569 1502932 295601 ...
$ ItemID : int 214536500 214536506 214577561 214662742 214662742 214825110 214757390 214757407 214551617 214716935 ...
$ Category : Factor w/ 339 levels "0","1","10","11",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Class : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
I am facing difficulties plotting a histogram of the number of purchases per week, per day and by time of day for Class value = 1, and I want output like this. Could someone please advise how I should proceed? Thank you for any help and suggestions. Kind regards AI: First of all, if you want aggregated data, you need to group it by day/week, like this:
library(dplyr)
library(lubridate)
x <- strptime(data$Timestamp, format = "%Y-%m-%d") # assume you only need days/months; assign to a variable, because dplyr has problems with the date type
data$month <- month(x) # get the month from the date object
month_summ <- data %>%
  filter(Class == "1") %>%  # keep purchases only
  group_by(month) %>%       # group by month and calculate sold items per month
  summarise(total_sales = n())
library(ggplot2)
ggplot(data=month_summ, aes(x=month, y=total_sales)) + geom_bar(stat="identity") # plot the histogram
This should do the work for you, or at least act as a starting point. Here is a good reference for ggplot2 bar plots. Hope this helps!
H: Information about LSTM RNN backpropagation algorithm I am attempting to make an LSTM RNN in Python from scratch and I have completed the code for the forward pass, but I am struggling to find a clear outline of the equations I need to calculate the gradients using back-propagation. Is there any straightforward resource from which I can learn these equations and how to implement them (as my programming skills are limited)? Thanks for any help. The notation in my code is that hu is the update gate, hf is the forget gate and ho is the output gate.
def forward(inp, target):
    total_loss = 0
    c_temp, c, x, a, y, prob = {}, {}, {}, {}, {}, {}
    hf, hu, ho = {}, {}, {}
    c_old = {}
    c[-1] = np.zeros((hidden_size, 1))
    a[-1] = np.zeros((hidden_size, 1))
    for t in range(len(inp)):
        x[t] = np.zeros((vocab_size, 1))
        x[t][inp[t]] = 1
        X = np.concatenate((x[t], a[t-1]))
        c_temp[t] = tanh(wc @ X + bc)   # candidate cell state
        hf[t] = sigmoid(wf @ X + bf)    # forget gate
        hu[t] = sigmoid(wu @ X + bu)    # update (input) gate
        ho[t] = sigmoid(wo @ X + bo)    # output gate
        c[t] = hu[t] * c_temp[t] + hf[t] * c[t-1]
        a[t] = ho[t] * tanh(c[t])
        y[t] = wy @ a[t] + by
        prob[t] = softmax(y[t])
        total_loss += loss(prob[t], target[t])
AI: Here is a good tutorial on LSTM from scratch with forward and backward pass.
H: When is a neural network better "traditional" models like decisions trees and lassos? There's a whole theory of statistical inference based off calculus studying consistency, efficiency, robustness, BLUE, unbiasedness of linear models (Gaussian,Exponential, Chi-square, F-distribution, etc)... that make up regression models. When is a neural network better than these traditional models based off calculus that are used regression-analysis? Is there a whole mathematical theory including robustness, Rao-blackwell stuff, consistency, sufficency dedicated to them that someone like Wackerly/HOFF writes about in a book? Are neural networks not explainable by calculus as logistic regression is explainable as the maximum likelihood estimator? What's the mathematics behind neural networks or is that more CompSci theory? Are neural networks unbiased estimators that are consistent, efficient, etc that are on average the right answer? Can they estimate the true parameters of the population? AI: Neural network are "just" a way of estimating a numerical function. Their power is that this comes out of simple matrix multiplications with non linear functions that can be efficiently parallelized. As such, they are perfectly described by calculus. The question on whether they are biased or not is then related to how they are trained and their black-box behavior. You can put stats at the end of a neural network as well. The problem IMHO is that generalization for them is a complex topic. Presented with new data, they can have unexpected behavior compared to more traditional techniques for which we understand the behavior at the edge. If you consider simple regression, like least squares, then neural networks are not actually in play. We use the frameworks to build equations, but we don't build neural networks. As such, we just benefit from the capability of these to process huge amount of data. We are still going to estimate $y=ax+b$ if we want to estimate the coefficients. For logistic regression, it's still the same boundaries as if we were doing a logistic regression without neural networks. Neural networks are only in play when the function is unknown and needs to be estimated.
H: PCA: projection of positive data on negative side of plane I did PCA on my data and projected the data on first two eigen vectors. After projection I see that the scatter plot of the data starts from [-1,-1]. My data is all positive. Is it correct for the data to be negative in the projected space. AI: Yes, it is, there is no reason why a component cannot have negative values. Let's take this matrix: [[1, 1],[0.5, 1]] The eigenvectors have negative values: [[ 0.81649658, -0.81649658], [ 0.57735027, 0.57735027]]
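A tiny check that strictly positive data can still produce negative scores after projection (scikit-learn; the data here is random but all positive):
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 5) + 1.0   # strictly positive data
scores = PCA(n_components=2).fit_transform(X)
print(scores.min())  # negative: PCA centers the data before projecting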
H: Curvature - Linear Assumption The exercise asks whether the linear assumption is correct, but all I have is a graph. I was taught that curvature means the relationship is nonlinear, so does a curved graph mean the linear assumption is incorrect? What exactly is the assumption here? I know this may seem like a basic question, but I still haven't grasped it. AI: Yes, the linear assumption is incorrect, since the graph is nonlinear: the linearity assumption says the relationship between the variables can be described by a straight line, and visible curvature in the data violates that.
H: Sparse matrix in R based on the data frame Suppose I have book ratings in the form of data frame (where 0 means no rating): $\begin{array}{|c|c|c|} \hline \textbf{User.ID}& \textbf{ISBN} & \textbf{Book.Rating} \\ \hline 276725 & 034545104X & 0 \\ \hline 276726 & 0155061224 & 5 \\ \hline 276725 & 3257224281 & 7 \\ \hline ... & ... & ... \\ \hline \end{array}$ In what easiest way can I get the form as below (I want to use it to create a realRatingMatrix object) ? $\begin{array}{|c|c|c|c|c|} \hline \ & \textbf{034545104X} & \textbf{0155061224} & \textbf{3257224281} & ...\\ \hline \textbf{276725} & . & 3 & 7 & ...\\ \hline \textbf{276726} & 5 & 5 & . & ...\\ \hline ... & ... & ... & ... & ...\\ \hline \end{array}$ AI: Your data looks like this at the moment: data <- data.frame(User_Id = c(276725, 276726, 276725, 276726, 276725), ISBN = c("A", "B", "C", "A", "B"), Book_Rating = c(0, 5, 7, 5, 3)) > data User_Id ISBN Book_Rating 1 276725 A 0 2 276726 B 5 3 276725 C 7 4 276726 A 5 5 276725 B 3 With the following commands you can create a dgCMatrix, required as input for a realRatingMatrix object. library(Matrix) data_sparse = sparseMatrix(as.integer(data$User_Id), as.integer(data$ISBN), x = data$Book_Rating) colnames(data_sparse) = levels(data$ISBN) rownames(data_sparse) = levels(data$User_Id) > data_sparse 276726 x 3 sparse Matrix of class "dgCMatrix" A B C [1,] . . . [2,] . . . [3,] . . . .............................. ........suppressing rows in show(); maybe adjust 'options(max.print= *, width = *)' .............................. [276722,] . . . [276723,] . . . [276724,] . . . [276725,] 0 3 7 [276726,] 5 5 . After that you can call the command new("realRatingMatrix", data = data_sparse) Bear in mind that this matrix starts at one and it's only populated in two rows (276725 and 276726) but the rest of the columns from 1 to 276725 exist. If you don't want to use User_Id as indices you will have to create new indices and use those instead and have one User_Id correspond to a new index. That means if User_Id starts at, for instance, [11, 18...] then it'd become User_Id 11 = New_Index 1, User_Id 18 = New_Index = 2. Use New_Index instead of User_Id and you're done. If that's the case then data should end up looking like this: data <- data.frame(User_Id = as.factor(c(276725, 276726, 276725, 276726, 276725)), ISBN = c("A", "B", "C", "A", "B"), Book_Rating = c(0, 5, 7, 5, 3), New_Index = c(1, 2, 1, 2, 1)) > data User_Id ISBN Book_Rating New_Index 1 276725 A 0 1 2 276726 B 5 2 3 276725 C 7 1 4 276726 A 5 2 5 276725 B 3 1 library(Matrix) data_sparse = sparseMatrix(as.integer(data$New_Index), as.integer(data$ISBN), x = data$Book_Rating) colnames(data_sparse) = levels(data$ISBN) rownames(data_sparse) = levels(data$User_Id) > data_sparse 2 x 3 sparse Matrix of class "dgCMatrix" A B C 276725 0 3 7 276726 5 5 .
H: Analyzing Videos using Deep Learning Is there any work done on analyzing sequence of frames from a video using Deep Learning techniques? By "analyzing" I mean like memorizing them in order to classify or predict something (e.g. by taking into account first 10 frames of a video the model can make some sort of conclusion). AI: In 2016 some people at MIT's CSAIL group "made an important new breakthrough in predictive vision, developing an algorithm that can anticipate interactions more accurately than ever before." They wrote an article, Teaching machines to predict the future. They made a great video which shows their results. They trained an algorithm on YouTube videos and TV shows to predict when two individuals will shake hands, hug, kiss, or slap five. The researchers, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba, published a paper at CVPR 2016 entitled Anticipating Visual Representations with Unlabeled Video. Chia-Wen Cheng has a repository of an LSTM-based model implemented in TensorFlow. For a more up to date list of papers on predicting actions or activities see this list.
H: Implementing a custom hard sigmoid function I need to implement an activation function that is similar to Keras's "hard-sigmoid", only for different limit values: 0 if x < 0 1 if x > 1 x if 0 <= x <= 1 How do I implement it with a tensorflow backend Keras? AI: Based on this post, hard-sigmoid in Keras is implemented as max(0, min(1, x*0.2 + 0.5)). To obtain the graph you like you have to tweak the shift and slope parameters, i.e. leave them out in your case: $$ max(0, min(1, x)) $$ This will generate following graph: For Keras' TensorFlow backend you can find the implementation here. This would be the corresponding changed "hard-sigmoid", for your case: zero = _to_tensor(0., x.dtype.base_dtype) one = _to_tensor(1., x.dtype.base_dtype) x = tf.clip_by_value(x, zero, one) return x
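If the piecewise shape above is all that is needed, a shorter route is to clip directly with the Keras backend and pass the function as a custom activation (a sketch; the layer sizes are arbitrary):
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def hard_unit_sigmoid(x):
    # 0 if x < 0, x if 0 <= x <= 1, 1 if x > 1
    return K.clip(x, 0., 1.)

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(10,)))
model.add(Dense(1, activation=hard_unit_sigmoid))
model.compile(optimizer='adam', loss='mse')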
H: Is the magnitude of the gradient a weakness of Gradient Descent? The formula for Gradient Descent is as follows: $$ \mathbf{w} := \mathbf{w} - \alpha\; \nabla C $$ The gradient itself points in the direction of steepest ascent, therefore it is logical to go in the opposite direction by subtracting it. But besides the direction, the gradient also has a magnitude, which actually doesn't say anything about the path to the optimum. My question is, is this considered a weakness of gradient descent and the reason for the learning rate? AI: The magnitude doesn't say anything about the path to the optimum; in fact nothing in gradient descent knows anything about the optimum, and the algorithm relies on local information only. However, if you assume that the space of available solutions is relatively smooth and doesn't contain sharp drops, you can treat the magnitude of the gradient as a score of your confidence in your current surroundings. So if the gradient magnitude is high, you are fairly certain that you are far from a good solution and you can make big steps, and if the magnitude is small, then you are closer to a good solution and should make smaller changes. The learning rate is just a hyper-parameter that controls how much we adjust the weights of our network with respect to the loss gradient. It can also be viewed in terms of your confidence in the surrounding solution space. It also helps the model converge deeper into a specific minimum (as we reduce the learning rate during training).
H: How Do I Learn Neural Networks? I'm a freshman undergraduate student (mentioning this so you may forgive my unfamiliarity) who is currently doing research using neural networks. I've coded a three-node neural network (that works) based on my professor's guidance. However, I'd like to pursue a career in AI and Data Science, and I'd like to teach myself more about these properly in-depth. Are there any books or resources that will teach me more about neural network structures, deep learning, etc? Are there any recommendations? Note: I'm proficient in Java, Python, Bash, JavaScript, Matlab, and know a bit of C++. AI: I have a Master's in Computer Science and my thesis was about time-series prediction using Neural Networks. The book Hands on machine learning with Scikit and Tensorflow was extremely helpful from a practical point of view. It really lays things very clearly, without much theory and math. I strongly recommend it. On the other hand, the book by Ian Goodfellow is also a must (kind of the bible of DL). There you'll find the theoretical explanations, also it will leave you much much more knowledgeable with regards to deep learning and the humble beginning of the field till now. Another, as others have suggested, is of course, Deep Learning with Python by Chollet. I indulged reading this book. Indeed it was very well written, and again, it teaches you tricks and concepts that you hardly grasp from tutorials and courses online. Furthermore, I see you are familiar with Matlab, so maybe you have taken some stats/probability classes, otherwise, all these will overwhelm you a bit.
H: Maximize the margin formula in support vector machines algorithm I was recently reading about support vector machines and how they work and I stumbled on an article and came across Maximize the distance margin. Can anyone tell me what do we have to minimize here? I wasn't able to understand this part I pasted below. What is $w$ and $m$ here in the formulas given below? Maximize the margin For the sake of simplicity, we will skip the derivation of the formula for calculating the margin, m which is $$m = \frac{2}{\|\vec{w}\|}$$ The only variable in this formula is w, which is indirectly proportional to m, hence to maximize the margin we will have to minimize ||w||. This leads to the following optimization problem: $$\min_{(\vec{w}, b)} \frac{\|\vec{w}\|^2}{2}$$ subject to $$y_i(\vec{w}\cdot\vec{x}+b) \ge 1, \forall i = 1, \ldots, n$$ The above is the case when our data is linearly separable. There are many cases where the data can not be perfectly classified through linear separation. In such cases, Support Vector Machine looks for the hyperplane that maximizes the margin and minimizes the misclassifications. Link to the article : Hackerearth AI: The separating hyperplane that we are looking for is $w^Tx+b=0$. That is $w$ is the coefficient or the slope of the separating hyperplane and $m$ is the margin. By scaling, for the positive class, we want the closest data point (support vectors) to satisfies $w^Tx+b=1$ and for the negative data class, we want the closest data point to satisfies $w^Tx+b=-1$. Recall from geometry class, (a $3$ dimensional case is discussed here) that the distance from a point $y$ from a plane $w^Tx+b=0$ is $$\frac{|w^Ty+b|}{\|w\|}.$$ Hence as a result, the distance of the closest point to the hyperplane would be $\frac{|\pm1|}{\|w\|}=\frac{1}{\|w\|}$ and hence, the margin, that is the closest distance between the two classes would be $\frac2{\|w\|}$. We want to maximize the margin, $\frac2{\|w\|}$ which is equivalent to minimize $\frac{\|w\|}2$ which is equivalent to minimize $\frac{\|w\|^2}2$ subject to the constraints that we classify the points correctly.
H: Neural Network unseen data performance I started dabbling in neural networks quite recently and encountered a situation which is quite strange (at least with my limited knowledge). The problem I'm using a NN for is a regression problem which tries to predict the sales of a product for a particular kind of promotion in an FMCG. Although the data is not strictly a time series, it still has some time-related attributes, e.g. a prediction may depend on a similar kind of promotion last year (which I've modelled using feature engineering). Now to the problem I'm having: I took the data from 2015-2017, augmented it by adding a small amount of noise, shuffled it, and ran it through a neural network (I think the architecture is not important, but let me know if it is and I'll try to post it). Optimizer: Adam with a decay. Loss function: volume-weighted MAPE. Validation set: randomly selected 20% of the data. The network trained well and gave me errors as small as 8%, and both training and validation set errors decreased together and did not indicate an overfit. But the kicker is that when I applied the algorithm to new unseen data (2018, first half), the errors rose to 55%. I tried researching the issue on Google but didn't find anything useful. What is happening here? Am I doing something wrong? AI: I finally figured out the problem. I'm posting it as the answer so that if somebody has the same problem, they can get a clue in the right direction. It was an issue of data leakage. I had augmented the data, but I augmented it before the train/validation split, and hence a lot of similar cases were present in both the validation and train sets, which pushed up the performance. When I augmented only the train set and supplied the validation set separately, the strangeness disappeared. Now the validation accuracy and test accuracy are comparable.
H: Understanding LSTM input shape for keras I am learning about the LSTM network. The input needs to be 3D. So I have a CSV file which has 9999 data with one feature only. So it is only one file. So usually it is (9999,1) then I reshape with time steps 20 steps timesteps = 20 dim = data.shape[1] data.reshape(len(data),timesteps,dim) but I am getting following error ValueError: cannot reshape array of size 9999 into shape (9999,20,1) and the input in LSTM model.add(LSTM(50,input_shape=(timesteps,dim),return_sequences=True, activation="sigmoid")) AI: (9999,1) has 9999*1 elements = 9999. However, (9999,20,1) will have 9999*20*1 elements, which are not available. Break your data into a batch/sequence length of say 99. Then reshape it into (101,99,1)
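A quick NumPy sketch of the reshaping (9999 = 101 * 99, so a sequence length of 99 divides the data exactly; any length that divides the series evenly would work the same way):
import numpy as np

data = np.arange(9999).reshape(-1, 1)       # stand-in for the single-feature series
timesteps = 99
n_sequences = len(data) // timesteps        # 101
X = data[:n_sequences * timesteps].reshape(n_sequences, timesteps, 1)
print(X.shape)                              # (101, 99, 1)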
H: Training and predicting probabilities using a logistic regression model I have game data where the output variable is continuous and indicates the probability of winning. How do I train my classification model using these probabilities and predict output probabilities for the test data? AI: Since your output variable is continuous, it can't be used directly in classification. There are a few steps through which you can achieve this: Based on the probability, convert your 'y' (output) variable into a class label. For example, if classes A and B have probabilities of 0.6 and 0.4 respectively, convert that into a single variable y whose value represents the more likely class. Now you have the features mapped to their respective class labels and can train the model with the available data. After training the model, the prediction will give you class probabilities on your test data. Note: use softmax in the case of neural networks to squash the output values into probabilities.
H: Example of a problem with structured output labels I'm studying SSVM (Structured SVMs). On my book is stated that Structured SVM is an extension of the SVM, in which Each sample is assigned to a structured output label z ∈ K, e.g. partitions, trees, lists, etc. It's not clear to me what a structured output label is. Could you provide me with an explanation of this term along with some tangible examples? AI: Structured Learning is basically learning prediction functions that is used to map input data to complex output space. The regular SVMs are used for univariate classification where input is mapped to atomic labels or regression where input is mapped to vectors or scalar numbers. Structured Output learning extends this very process to more complex output spaces like: - Multi Class Classification : where output is a "set" of class labels. - Object localization Problem : where output space defines bounding boxes. - Sequence alignment problems in protein homology detection, - Parts of Speech tagging in NLP , etc. Hope this clears it up for you.
H: Recurrent Neural Network (LSTM) not converging during optimization I am trying to train an RNN with text from Wikipedia but I am having trouble getting the RNN to converge. I have tried increasing the batch size but it doesn't seem to be helping. All data is one-hot encoded before being used, and I am using the Adam optimizer, which is implemented like this:
for k in M.keys():  ## for k in weights
    M[k] = beta1 * M[k] + (1-beta1)*grad[k]
    R[k] = beta2 * R[k] + (1-beta2)*grad[k]**2
    m_k = M[k] / (1-beta1**n)
    r_k = R[k] / (1-beta2**n)
    model[k] = model[k] - alpha * m_k / np.sqrt(r_k + 1e-8)
beta1 is set to 0.9, beta2 to 0.999 and alpha is set to 0.001. When I train it for 50,000 iterations I get very high fluctuations of the cost and it never seems to decrease significantly (it only occasionally does, due to the fluctuations, and I keep the weights with the lowest cost). After plotting the cost over iterations I get a graph like this: It seems to be increasing on average, only appearing to decrease due to the large fluctuations. What can I change to have better success and have it converge? Thanks for any help. AI: I think the problem lies with your text preprocessing into one-hot vectors. Try using embeddings instead of one-hot vectors. An embedding is also an n-dimensional vector, but one that allows words with similar meanings to have similar vector representations. One-hot vectors don't carry such information. For them, there's a set of words of, say, cardinality c, and each vector is c x 1. The 1 in the vector just represents the word's position; it's like bit manipulation in this sense, so no semantic meaning is preserved. For example, in your corpus there may be words like adore and love. Both have similar meanings, but the one-hot vectors for love and adore may be far apart, depending on where in the vocabulary love and adore appear. If you use embeddings, words with similar meanings will have similar representations in the predefined vector space. With this, your LSTM will learn dependencies better and will start converging. Hope this helps :)
H: Context classification problem I have a bunch of articles about science from a certain website. When a new article is published, I want to determine whether that article is really talking about science (and not politics, for example). How can I do that? What machine learning technique should I use? I'm thinking of using something similar to spam detection. Is that OK? Thank you! AI: Spam detection can be done with many different methods, and the same goes for your task. They share the same idea of processing a given text and classifying it into one of 2 classes (science/not-science or spam/not-spam). What you first need to do is turn the articles into vectors of constant size (for example with Word2vec, which takes a text as its input and produces a vector space). Once you have a vector representing each article, you can start training your classifier and feature extractor (these days they are trained together). As for determining which machine learning approach to take, you can first try using an SVM; it will probably be good enough. You can follow one of the following tutorials (there are many more), just replace their dataset with yours: Email Spam Filtering: An Implementation with Python and Scikit-learn Spam Classifier in Python from scratch
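A minimal scikit-learn sketch of that SVM baseline (the example texts and labels are made up; TF-IDF is used here instead of Word2vec just to keep the snippet self-contained):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["the experiment measured particle decay rates",
         "parliament passed the new budget bill",
         "researchers sequenced the bacterial genome",
         "the senator announced an election campaign"]
labels = [1, 0, 1, 0]  # 1 = science, 0 = not science

clf = make_pipeline(TfidfVectorizer(stop_words='english'), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["new telescope observes distant galaxy"]))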
H: Financial Time Series data normalization I'm using Keras in R to predict financial time series. It's easy to normalize price, simply compute returns or log returns, usually it's enough. I want to use Goldman Sachs Financial Conditions Index and MSCI World Index to predict other securitites and I want to use levels and their returns or first differences. I think that using minmax or z-score normalization is not appropriate because series distribution will change. So, the question is how to normalize nonstationary time series data? AI: Z-score normalization, as you have already guessed, cannot deal well with non-stationary time series since the mean and standard deviation of the time series vary over time. Min-max and another commonly used normalization in stationary data, the decimal scaling normalization depend on knowing the maximum values of a time series. The most commonly used method for data normalization of non-stationary time series is the sliding window approach (J. Lin and E. Keogh, 2004, Finding or not finding rules in time series). In short: The basic idea of this approach is that, instead of considering the complete time series for normalization, it divides the data into sliding windows of length ω, extracts statistical properties from it considering only a fraction of ω consecutive time series values (H. Li and S. Lee, 2009, Mining frequent itemsets over data streams using efficient window sliding techniques, Expert Syst. Appl., v. 36, n. 2, p. 1466-1477. J.C. Hull, 2005), and normalizes each window considering only these statistical properties. The rationale behind this approach is that decisions are usually based on recent data. (...) The sliding window technique has the advantage of always normalizing data in the desired range. However, it has a drawback of assuming that the time series volatility is uniform, which is not true in many phenomena Another more advanced and less used (so far) is Adaptive Normalization can be divided into three stages: (i) transforming the non-stationary time series into a stationary sequence, which creates a sequence of disjoint sliding windows (that do not overlap); (ii) outlier removal; (iii) data normalization itself. Check the link on Adaptive Normalization and all its references, there is relevant information in there.
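A quick sketch of the sliding-window idea (Python/pandas here; the series is synthetic and the window length arbitrary - the same rolling logic is available in R, e.g. via zoo::rollapply, which matches the Keras-in-R setup of the question):
import numpy as np
import pandas as pd

# Synthetic non-stationary series (random walk)
s = pd.Series(np.cumsum(np.random.randn(500)))

window = 60  # length of the sliding window, chosen arbitrarily here
rolling_mean = s.rolling(window).mean()
rolling_std = s.rolling(window).std()
s_norm = (s - rolling_mean) / rolling_std  # z-score within each sliding window

print(s_norm.dropna().describe())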
H: How to normalize just one feature with scikit-learn? I want to apply a specific scaler, say StandardScaler, to a specific feature, keeping the other features intact. The dataset format is something like: [ [1, 0.2, 1000], [2, 0.1, 2400], [3, 0.9, 7620] ] I need to transform only one column, the third in this example. I don't want to use pandas. AI: Just pass that one column to the scaler and change the data in place. Note that scikit-learn scalers expect a 2-D array, so select the column with x[:, [2]] (not x[i, :], which selects a row). Assuming x is a NumPy array of floats, something like: x[:, 2] = scaler.fit_transform(x[:, [2]]).ravel()
H: Create recommendation system to recommend products to a customer on any e-commerce website The recommendations should be based on the products consumer has searched on other sites like Google. This basically means, that recommendations have to be made to the user based on his/her search history. No other information is available for a user. Has this sort of thing already been implemented? (preferably in Python) The approach I have thought of is removing stop words and extracting keywords from a search query, based on some criteria. I am falling short at the implementation. AI: To answer your 1st question,I am not sure if a recommendation system based on search history has been implemented or not.This approach does sound cool. Secondly, yes there are various algorithms to extract phrases from text. Phrase extraction and Text Summarization are one of the most important aspects in development of chatbots for conversational AI. To give you an example, there's a good library for this purpose, called PyTextRank. Go through this link for better understanding of how this works. Once you have a ranked summary of phrases for the given user's search history, you will be able to generate the list of item/s , that person is interested in. This is a good start for building the person's unique feature vector. After this process, you can incorporate normal recommendation algos like Collaborative Filtering , etc. Hope this helps :)
H: Calculate weighted mean for two columns and hundreds of rows? I'm trying to teach myself basics of R and I couldn't find the answer: Say, I have a csv file and I want to calculate weighted mean for each subject such that I have a mean mu = 0.015*1030+0.16*26930+0.24*0+0.87*250+0.29*310+0.77*6240+0.98*3730+0.98*0+0.08*1400 for Subject A, for example. How would I accomplish it in R or Excel? Also, if I need to multiply each pair by corresponding value from the same row from the column salary: 0.015*1030*111+0.16*26930*222+0.24*0*333+0.87*250*444+0.29*310*555+0.77*6240*666+0.98*3730*777+0.98*0*888+0.08*1400*999 Thanks UPDATE: AI: Given your data is in DataFrame called df, you can simply do this: mu = sum(df$Probability * df$SubjectA * df$salary)
H: Select the two best classifiers using F1-score, recall and precision I have three classifiers that classify the same dataset with these results:
classifier A:
              precision  recall  f1-score
micro avg        0.36     0.36     0.36
macro avg        0.38     0.43     0.36
weighted avg     0.36     0.36     0.32
classifier B:
              precision  recall  f1-score
micro avg        0.55     0.55     0.55
macro avg        0.60     0.60     0.56
weighted avg     0.61     0.55     0.53
classifier C:
              precision  recall  f1-score
micro avg        0.34     0.34     0.34
macro avg        0.36     0.38     0.32
weighted avg     0.39     0.34     0.32
I want to select the two best of them, and I know the F1-score is a good parameter for comparing classifiers because it balances precision and recall. So, first I select classifier B for its best F1-score. Next, both A and C have the same (weighted) F1-score; how can I select between them? AI: The F1-score combines precision and recall in a single figure. As both precision and recall are pretty similar for A and C, their F1-scores are similar too. Your choice depends on what is less harmful in your categorization: false positives or false negatives. If false positives are costlier, prefer the classifier with higher precision; if false negatives are costlier, prefer the one with higher recall. I recommend reading the 3rd chapter of "Deep Learning: From Basics to Practice", volume 1, by Andrew Glassner. There the three concepts (precision, recall and F1-score) are described in a very illustrative way.
H: AUC ROC in keras is different when using tensorflow or scikit functions. Two solutions for using AUC-ROC to train keras models, proposed here worked for me. But using tensorflow or scikit rocauc functions I get different results. def auc(y_true, y_pred): auc = tf.metrics.auc(y_true, y_pred)[1] K.get_session().run(tf.local_variables_initializer()) return auc and def auc(y_true, y_pred): return tf.py_func(roc_auc_score, (y_true, y_pred), tf.double) Based on the history, it looks like both are being applied to train and validation. When I plot history metrics, tensorflow curve looks very smoothed compared to scikit. Shouldn't I get about the same results using both functions? AI: No, you shouldn't have the same numbers. All depends on the additional parameters: tf.metrics.auc( labels, predictions, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, curve='ROC', name=None, summation_method='trapezoidal' ) This means that this curve will have 200 points, so very smooth. sklearn version doesn't have this kind of parameters: roc_auc_score(y_true, y_score, average=’macro’, sample_weight=None, max_fpr=None) The number of outputs depends on the curve and the number of points if I remember properly.
H: Non-linear Regression For example, suppose I have a data set which looks like: [[x,y,z], [1,2,5], [2,3,8], [4,5,14]] It's easy to find the theta parameters from this tiny data set, which are theta = [1,2,0], i.e. z = 1*x + 2*y + 0. But what if my data set is nonlinear? Suppose: [[x,y,z], [1,2,6], [2,3,15]] If I choose the mapping function to be z = x*y + y*y, it would return the theta parameters theta = [1,1,0]. So my question is how to choose such a mapping function for data sets which vary over time - as in a recommender system, where the user ratings vary with time - in order to reduce the cost. I've recently gone through regularization. Are there any other ideas for reducing the cost? AI: To answer your first question about non-linear regression: I believe your problem of choosing a mapping function for non-linear regression can be solved by using Support Vector Machines. SVMs can learn non-linear mapping functions in a kernel-induced feature space. What this means is that in SVMs the basic idea is to map the input data X into some high-dimensional feature space F using a non-linear mapping (kernel) and then do linear regression in this feature space. To learn more about non-linear regression and kernels, you can read this. Secondly, regularization is a technique used to address over-fitting. Over-fitting usually happens when you use a very dense model for your training set or train the model for far too many steps; in that case the accuracy on your training set is high, but the model performs very poorly on unseen data. Adding regularization helps reduce the cost on unseen data. Regularization comes in two common forms, L1 and L2; the difference lies in the power of the weight coefficients in the penalty term. These should be enough for your SVM-based models. To reduce over-fitting-induced cost, you can also use Batch Normalization and Dropout. Hope this helps :)
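A small illustration of kernel regression with scikit-learn's SVR (the extra data points and hyperparameters are made up just to show the idea; the RBF kernel plays the role of the non-linear mapping):
import numpy as np
from sklearn.svm import SVR

X = np.array([[1, 2], [2, 3], [4, 5], [3, 3], [5, 2]], dtype=float)
y = X[:, 0] * X[:, 1] + X[:, 1] ** 2  # the nonlinear target z = x*y + y*y

model = SVR(kernel='rbf', C=100.0)  # the RBF kernel induces the nonlinear feature space
model.fit(X, y)
print(model.predict([[2, 4]]))  # the true value would be 2*4 + 4*4 = 24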
H: Training AI to do a repetitive retouching job I'm new to AI, so bear with me... What would be the easiest way to do this using AI? Where do I start? What technologies are already available? Sample data: 50,000 before images, e.g. there's a red ball that needs to be removed; 50,000 after images, where the red ball has been removed from the image (by a human in Photoshop). Feed the data, AI learning magic, feed the AI. Input: 1 other before image featuring the red ball. Expected output: 1 image where the red ball is removed. AI: This is a very (very) broad question. What you are trying to do is utilize some neural model to perform image processing. So your output needs to be the same size as your input, meaning that you need to use a fully-convolutional model for this. I suggest you read the following article: Fast Image Processing with Fully-Convolutional Networks. They are doing exactly what you need, only for different image-processing methods (and more than one). However! I think that before going on to implement a neural model, you should read a bit about the general subject of machine learning and deep learning. How Do I Learn Neural Networks?
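A minimal sketch of what such a fully-convolutional image-to-image model could look like in Keras (layer counts and sizes are arbitrary; a real solution, like the paper above, would be much deeper and trained on the 50,000 before/after pairs):
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# padding='same' keeps the spatial dimensions, so the output image matches the input size
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(None, None, 3)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(3, (3, 3), padding='same'))  # 3 output channels: the retouched RGB image
model.compile(optimizer='adam', loss='mean_absolute_error')
# model.fit(before_images, after_images, ...)  # train on the before/after pairs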