H: Python and data analytics: how to get the unique number of orders per seller using groupby? I am totally new to data analytics and Python. I took a good course on machine learning, but I haven't really worked with real-life data. I managed to get data from a store that wants to create a recommendation system. I convinced them that I will learn and get experience, and they will benefit from a recommendation system, plus some insights on their previous orders. I've got a JSON file with the following structure (I transformed it into Excel): each order_id is connected to one seller_id, and it may contain multiple product_id values. By that I mean that a user could have ordered 2 products from seller_id=1, and the data would look like this: { order_id: 12, seller_id: 1, product_id: 3, ... }, { order_id: 12, seller_id: 1, product_id: 89, ... } The main things I need to extract from these data are the following: Number of unique orders per seller; Products that are ordered the most per seller; Products that are ordered the most in general; Sum of the quantity field for each product to check how many times it has been ordered; Number of orders for each customer; And other things that I may come up with later. I started with number 1, where I need to get the unique number of orders per seller. I tried the following: data.groupby('order_id')['seller_name'].value_counts().sort_values(ascending=False) But I didn't get the right result. One seller has 110 unique orders, but the line above shows him having more than 500 orders. Here is a snippet of the data available: Even though seller_id=1 has 4 rows, the actual number of unique orders is 2, despite the fact that the quantities of items within these 2 orders add up to 7. The same goes for seller_id=5: he has 4 rows, but they belong to one single order. So how can I get the unique number of orders for each seller as a start? I think I will figure the other points out once I get the first one working. AI: Based on the information available above, I think the solution below should work: # Number of unique orders per seller a = pd.pivot_table(df, index = ['seller_id'], values = ['order_id'], aggfunc = {'order_id' : pd.Series.nunique}) # Products that are being ordered the most per seller # Products that are being ordered the most in general b = pd.pivot_table(df, index = ['seller_id', 'name'], values = ['product_id'], aggfunc = 'count') # Sum of the quantity field for each product to check how many times it has been ordered c = pd.pivot_table(df, index = ['name'], values = ['product_id', 'quantity'], aggfunc = {'product_id': 'count', 'quantity': 'sum'}) # Number of orders for each customer d = pd.crosstab(df.customer_id, df.order_id)
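Since you specifically asked about groupby, here is a minimal sketch of the same aggregations with the groupby API, assuming a DataFrame df with the column names from the question (seller_id, order_id, product_id, quantity, customer_id):

# Number of unique orders per seller
unique_orders_per_seller = df.groupby('seller_id')['order_id'].nunique()

# Products ordered the most per seller, and the most in general
top_products_per_seller = df.groupby(['seller_id', 'product_id']).size().sort_values(ascending=False)
top_products_overall = df.groupby('product_id').size().sort_values(ascending=False)

# Total ordered quantity per product
quantity_per_product = df.groupby('product_id')['quantity'].sum()

# Number of unique orders per customer
orders_per_customer = df.groupby('customer_id')['order_id'].nunique()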
H: Data under-fitting and its handling Regularization is used to handle the over-fitting problem. Similarly, is there some methodology to overcome the under-fitting problem, or will merely adding new features or training data help us reduce the under-fitting issue? AI: As such, we don't have many ways to handle the under-fitting issue. Generally I follow these approaches: Add more features to the training data, which includes deriving new features from the existing ones. Make the model more complex. Increase the number of training iterations. Adding new training data alone, I believe, will not solve this problem.
H: What metric does `Scalar test loss` in AutoKeras mean? I'm using the following code: import autokeras as ak regressor = ak.StructuredDataRegressor(max_trials=100) regressor.fit(x=train_dataset.drop(columns=['MPG']), y=train_dataset['MPG']) # Evaluate the accuracy of the found model. print('Accuracy: {accuracy}'.format( accuracy=regressor.evaluate(x=test_dataset.drop(columns=['MPG']), y=test_dataset['MPG']))) The result of the evaluate function is Scalar test loss. I just want to clarify what metric that would be. What is the default loss function? AI: Since you are using StructuredDataRegressor, the default loss function is mean squared error. That information can be found in the __init__ function with the default keyword argument `loss='mean_squared_error'`.
H: Convolution vs. Cross-Correlation I understand that from a mathematical point of view, the only difference between Convolution and Cross-Correlation is that Convolution is commutative, while Cross-Correlation is not. In many articles Cross-Correlation is explained as measuring similarity between two signals, and Convolution is explained as calculating the relationship between two signals. My question is this: what is the point of the commutative property? How does it make Convolution different from Cross-Correlation in practice? Or is it just there for "better" mathematics and easier proofs? Also, what are, if any, the benefits of implementing a CNN with Convolution over Cross-Correlation? AI: Indeed, convolution and cross-correlation are closely related. The former is a bit more natural in some areas of mathematics; most notably, in the convolution theorem for the Fourier transform, which states that the Fourier transform of the convolution of two functions is equal, under certain conditions, to the product of their Fourier transforms: $$ \mathcal{F}\{f * g\} = \mathcal{F}\{f\} \mathcal{F}\{g\}, $$ thus providing an efficient way to compute the convolution. In the context of CNNs, the difference between the convolution and the cross-correlation is irrelevant. In the discrete two-dimensional case $$ (f {**} g)[n_1, n_2] = \sum_{m_1 = -\infty}^{+\infty}\sum_{m_2 = -\infty}^{+\infty} f[m_1, m_2] \, g[n_1 - m_1, n_2 - m_2], $$ $$ (f' {\star\!\star} g)[n_1, n_2] = \sum_{m_1 = -\infty}^{+\infty}\sum_{m_2 = -\infty}^{+\infty} \overline{f'[m_1, m_2]} \, g[n_1 + m_1, n_2 + m_2]. $$ If $f$ is the filter and $g$ is the output of the previous layer of the CNN, you can see that the convolution is equivalent to cross-correlation with a filter $$ f'[m_1, m_2] = \overline{f[-m_1, -m_2]}, $$ which is just a reflection around the secondary diagonal since all values are real. In short, there are no benefits to using the true convolution in CNNs. The only difference would be that the learned filters would be mirrored about the secondary diagonal.
H: What is the best way to find minima in logistic regression? In Andrew Ng's tutorial on machine learning, he takes the first derivative of the error function and then takes small steps in the direction of the negative derivative to find the minimum (gradient descent, basically). In The Elements of Statistical Learning, the first derivative of the error function is set equal to zero, and numerical methods are used to find the root (in this case Newton–Raphson). On paper both methods should yield similar results. But numerically, are they different, or is one method better than the other? AI: Being a second-order method, the Newton–Raphson algorithm is more efficient than gradient descent if the inverse of the Hessian matrix of the cost function is known. However, inverting the Hessian, which is an $\mathcal O(n^3)$ operation, rapidly becomes impractical as the dimensionality of the problem grows. This issue is addressed in quasi-Newton methods, such as BFGS, but they don't handle mini-batch updates well and hence require the full dataset to be loaded in memory; see also this answer for a discussion. Later in his course Andrew Ng discusses neural networks. They can easily contain millions of free parameters and be trained on huge datasets, and because of this, variants of gradient descent are usually used with them. In short, the Newton–Raphson method can be faster for relatively small problems, but gradient descent scales better with the complexity of the problem.
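To make the comparison concrete, here is a small sketch (with made-up data) that fits the same logistic regression with both update rules; the gradient of the negative log-likelihood is $X^\top(p - y)$ and the Hessian is $X^\top \operatorname{diag}(p(1-p))X$:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.RandomState(0)
X = np.hstack([np.ones((200, 1)), rng.randn(200, 2)])   # intercept column plus two features
true_w = np.array([0.5, 2.0, -1.0])
y = (sigmoid(X @ true_w) > rng.rand(200)).astype(float)

# Gradient descent: w <- w - lr * mean gradient
w_gd = np.zeros(3)
for _ in range(5000):
    p = sigmoid(X @ w_gd)
    w_gd -= 0.1 * X.T @ (p - y) / len(y)

# Newton-Raphson: w <- w - H^{-1} g
w_nr = np.zeros(3)
for _ in range(10):
    p = sigmoid(X @ w_nr)
    g = X.T @ (p - y)
    H = X.T @ (X * (p * (1 - p))[:, None])
    w_nr -= np.linalg.solve(H, g)

print(w_gd, w_nr)   # both should land near the same estimate; Newton needs far fewer iterations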
H: Tensorflow CNN not predicting correctly for a well trained model So I have built a CNN and trained it to around 95% accuracy on training data, and 90% on testing data. The issue is, when I save this model, load it in again, and predict, it always predicts 1 or close to 1. I even used training data. import numpy as np import matplotlib.pyplot as plt import seaborn as sns import cv2 import os import random from keras.models import Sequential from keras.layers import Conv2D,Dense,Flatten,Dropout,MaxPooling2D from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau image_height = 150 image_width = 150 batch_size = 10 no_of_epochs = 10 model = Sequential() model.add(Conv2D(32,(3,3),input_shape=(image_height,image_width,3),activation='relu')) model.add(Conv2D(32,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.2)) model.add(Conv2D(64,(3,3),activation='relu')) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.2)) model.add(Conv2D(128,(3,3),activation='relu')) model.add(Conv2D(128,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(units=128,activation='relu')) model.add(Dropout(0.2)) model.add(Dense(units=1,activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=15, shear_range=0.2, zoom_range=0.2 ) test_datagen = ImageDataGenerator(rescale=1./255) import numpy as np import matplotlib.pyplot as plt import seaborn as sns import cv2 import os import random from keras.models import Sequential from keras.layers import Conv2D,Dense,Flatten,Dropout,MaxPooling2D from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau from keras import * from keras.preprocessing.image import ImageDataGenerator from keras.preprocessing import image training_set = train_datagen.flow_from_directory('/content/hackmed2020/train', target_size=(image_width, image_height), batch_size=batch_size, class_mode='binary') test_set = test_datagen.flow_from_directory('/content/hackmed2020/test', target_size=(image_width, image_height), batch_size=batch_size, class_mode='binary') # Updated part ---> val_set = test_datagen.flow_from_directory('/content/hackmed2020/val', target_size=(image_width, image_height), batch_size=1, shuffle=False, class_mode='binary') reduce_learning_rate = ReduceLROnPlateau(monitor='loss', factor=0.1, patience=2, cooldown=2, min_lr=0.00001, verbose=1) callbacks = [reduce_learning_rate] history = model.fit_generator(training_set, steps_per_epoch=5216//batch_size, epochs=no_of_epochs, validation_data=test_set, validation_steps=624//batch_size, callbacks=callbacks ) model.save('path_to_my_model.h5') model = models.load_model('path_to_my_model85.h5') path_to_file = "/content/Pneumonia1.jpeg" img = image.load_img(path_to_file, target_size=(150,150)) img = image.img_to_array(img) img = np.expand_dims(img,axis=0) prediction = model.predict(img) AI: I dont think you are rescaling the image after reading it. Because you have rescaled it during training. img = image.load_img(path_to_file, target_size=(150,150)) img = image.img_to_array(img) img = img/255 # this must be done.
H: prediction() mistakenly returning false positives I do not know how to interpret the result of: prediction(c(1,1,0,0), c(1,1,0,0)) The prediction() function comes from the ROCR package; it has this site: http://rocr.bioinf.mpi-sb.mpg.de/ The above is a working example. As per the documentation, the first parameter is 'predictions' and the second 'labels' (they would be the true values). The output is this, which I do not fully understand, especially why there is a '2' in "fp": An object of class "prediction" Slot "predictions": [[1]] [1] 1 1 0 0 Slot "labels": [[1]] [1] 1 1 0 0 Levels: 0 < 1 Slot "cutoffs": [[1]] [1] Inf 1 0 Slot "fp": [[1]] [1] 0 0 2 AI: The slot "fp" counts how many false positives there are at each choice of classification cutoff (the cutoffs can be found in the "cutoffs" slot). The cutoff is the threshold at which you binarize the numerical predictions into classes. Your output already appears to consist of binary classes, so the concept of a threshold doesn't really make much sense, but the package still tries, setting the potential thresholds at Inf, 1, and 0. When you set the threshold at 0, everything gets classified as positive, including the 2 actual negative samples - so at a classification threshold of 0, you get 2 false positives.
H: When should train and test data be merged? I am very new to data science and machine learning. I am following courses on DataCamp, and then trying to solve problems on Kaggle/DrivenData. Very often, I try to use the sklearn.model_selection train_test_split() method, but because my training (X) and test (y) data do not have the same shape, I get the error: ValueError: Found input variables with inconsistent numbers of samples: [913000, 45000] When I look at other people's solutions, it looks like they very often combine the training and test data (datasets in this case: train & test), like this: all_data = train.append(test, sort = False) Then later, they split this up again into variables X and y, which they can use in the train_test_split method. (You can probably hear I am very new and don't quite understand what's happening yet.) Can anyone explain to me why I need to combine my training and test data, when I already have them in separate .csv files? Maybe share some resources that explain it? Thanks a lot AI: The answer to this is simple. When we clean the dataset, we would need to do the whole cleaning process for the training data first and then repeat the same data cleaning process for the test dataset. So, to avoid doing the same data cleaning process twice, we merge the training and testing data, perform the data cleaning process once, and after that separate both datasets again.
H: Why is there a very large difference between cross-validation scores? I have a very simple regression model and I am doing cross-validation. With cv=10, the highest score I got is 60.3 and the lowest is -9.7, which is useless. The average is about 30. Number of rows in the dataset = 658. AI: Your $R^2$ scores indicate that a linear model does not describe your data well. On top of this, there seems to be large variability in the data. You could try the following: If the linear model is supposed to describe the data, check for outliers. They might be responsible for the large variation across the CV folds. Try reducing the number of features if there are many. The model might be fitting noise. Introducing regularization (lasso or ridge regression) might make the model more robust. This should decrease the variability of the CV errors, but the $R^2$ scores will get even worse.
H: Convolution and Pooling as Infinitely Strong Priors I am currently reading about convolutional neural networks in the Deep Learning book. I am stuck on section 9.4, titled "Convolution and Pooling as an Infinitely Strong Prior". Could someone please intuitively explain to me what a prior probability distribution is and what its association is to convolution and pooling operations in the context of CNNs? Thank you! AI: A prior distribution expresses your assumptions about the model without observing any data. E.g. when doing linear regression, you a priori assume that the slope is close to zero. Now you start measuring data points and it turns out that the slope should probably be close to one, so you compromise and pick a value somewhere in between. If your belief in your prior assumption of zero slope is weak, it does not take much data to convince you to pick a slope closer to one. If your belief is strong, you will pick a slope close to zero anyway and demand to see many data points before you slowly move it towards one. In the case of regression, the strength of your belief would be parameterized by the weight of the regularization term in the optimization objective. The connection between regularization and priors is the same for neural networks (see here for a little more detail). So you can think of the strength of the prior as a measure of how much evidence you would need to see before you deviate from your prior assumptions about the model. When you train a CNN, you assume a priori that this network structure is the best for your problem, i.e. that the model should internally compute convolutions. Since the CNN structure is built into the model, no amount of data will convince you to abandon this structure in favour of e.g. a fully-connected NN. Hence the prior is infinitely strong. The argument for pooling etc. would be similar. I would even claim that most if not all non-trainable parameters can be seen as an infinitely strong prior.
H: How to extract the reason behind a prediction using TensorFlow? I created a CNN using TensorFlow 2 and trained it as a binary classifier. Is there a way to extract the influence of each pixel upon the prediction? I am trying to obtain a mask similar to the following: AI: [updated] You are looking for Grad-CAM. Code: https://fairyonice.github.io/Grad-CAM-with-keras-vis.html and https://tf-explain.readthedocs.io/en/latest/index.html Paper: https://arxiv.org/abs/1610.02391 Another good resource I found: https://christophm.github.io/interpretable-ml-book/
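If you want to compute the mask yourself rather than through a library, here is a minimal Grad-CAM sketch for a TF2/Keras binary classifier; "last_conv" is a hypothetical layer name that you would replace with the name of the last convolutional layer in your own model (model.summary() lists the names):

import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="last_conv"):
    # Model that returns both the conv feature maps and the final prediction
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...].astype("float32"))
        score = preds[:, 0]                              # score of the (single) output unit
    grads = tape.gradient(score, conv_out)               # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                             # keep positive influence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)              # normalize to [0, 1]
    return tf.image.resize(cam[..., None], image.shape[:2]).numpy().squeeze()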
H: Conv1D: specify output channels in TensorFlow 2.1 Hello, I'm trying to implement the "Tacotron: Towards End-to-End Speech Synthesis" paper. In the encoder CBHG there is a Conv1D bank of K=16, conv-k-128-ReLU, where “conv-k-c-ReLU” denotes 1-D convolution with width k and c output channels with ReLU activation. I wanted to ask how exactly I can implement this in TensorFlow 2.1, because in the documentation I haven't seen how to specify the output channels. Reference to the paper: https://arxiv.org/pdf/1703.10135.pdf AI: Refer to this: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1D The parameter `filters` determines the number of output channels.
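For example, a single "conv-16-128-ReLU" layer, and, if you want the whole CBHG-style bank, one Conv1D per width k = 1..K (my reading of the paper's filter bank; treat the exact padding/hyperparameters as assumptions):

import tensorflow as tf

# One 1-D convolution with width k=16 and 128 output channels, ReLU activation
conv_16_128 = tf.keras.layers.Conv1D(filters=128, kernel_size=16, padding="same", activation="relu")

# A bank of K=16 such convolutions with widths k = 1, ..., 16
conv_bank = [tf.keras.layers.Conv1D(filters=128, kernel_size=k, padding="same", activation="relu")
             for k in range(1, 17)]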
H: What is the difference between filling missing values with 0 or any other constant term like -999? Most textbooks say to fill missing values using the mean/median (numerical) and the most frequent value (categorical), but I am working on a dataset which has too many missing values and I can't remove those columns because they are important. train.isnull().sum() TransactionID 0 isFraud 0 TransactionDT 0 TransactionAmt 0 ProductCD 0 ... id_36 449555 id_37 449555 id_38 449555 DeviceType 449730 DeviceInfo 471874 Length: 434, dtype: int64 train.shape (590540, 434) You can see that DeviceType and DeviceInfo have too many missing values. I am not sure if filling these with the most frequent value would be the right choice, so I am thinking about filling them with some constant term like 0 or -999. Is there any difference between filling with 0 or -999? What is the right way to fill when you have too many missing values? AI: Coming to your questions: I am thinking about filling it with some constant term 0 or -999. Is there any difference between filling with 0 or -999? The reason values are imputed with their mean is that in this way the imputed data won't alter the regression slopes. You shouldn't use constant terms such as -999, otherwise the regression parameters would end up completely distorted. Random Forests would suffer too, since the model would then learn to partition the variable space in a strange way, trying to chase outliers that don't make sense. What is the right way to fill when you have too many missing values? Drop these data if possible. If that's not an option, try imputing the mean. If you have outlier problems (and the mean suffers from them), substitute it with the median. This is the simplest thing you can do. An alternative is to fit a preliminary model on the available, complete data, then use it to predict the missing values, then retrain the final model on the bigger imputed dataset. This would introduce a more "flexible" imputation than just a constant number, at your own risk of course.
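A minimal sketch of the mean/median and most-frequent strategies with scikit-learn, assuming the train DataFrame and column names from the question (the extra "was missing" indicator column is an optional addition of mine, not part of the advice above; the numeric column list is a placeholder):

from sklearn.impute import SimpleImputer

# Optional: keep the fact that a value was missing as a feature in its own right
train["DeviceType_was_missing"] = train["DeviceType"].isnull().astype(int)

num_cols = ["TransactionAmt"]             # placeholder list of numeric columns to impute
cat_cols = ["DeviceType", "DeviceInfo"]   # categorical columns from the question

train[num_cols] = SimpleImputer(strategy="median").fit_transform(train[num_cols])
train[cat_cols] = SimpleImputer(strategy="most_frequent").fit_transform(train[cat_cols])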
H: Non-categorical loss in Keras I am training a neural network (arbitrary architecture) and I have a label space that is not one-hot encoded, but continuous. The reason is that for the given problem, it is not possible to assign a single class only, it is more of a probability mapping. So in the end, my targets sum up to 1 again, but they are not 1-hot. I wonder if I am misunderstanding Keras documentation, but for what I read, there is no Crossentropy implementation for this. There is categorical and sparse_categorical (which seem to do exactly the same, but only expect a different label format). My idea was to wrap every single target index into a binary crossentropy, but this feels wrong and I think there is a better solution. Can you please help me find an appropriate CE loss for my task? AI: It sounds like you want your model to output a probability distribution which matches a "ground truth" probability distribution. Instead of a cross-entropy loss, you should try Kullback-Leiber divergence (keras.losses.kullback_leibler_divergence). KL-divergence measures the difference between two probability distributions. Minimizing KL-divergence should cause your predicted distribution to match the actual distribution. By the way, KL-divergence is not just a workaround for Keras's cross-entropy limitations. It's actually the better loss function for this task. From the wikipedia page on KL-divergence (emphasis added by me): $$D_{KL}(P \| Q) = H(P, Q) - H(P)$$ where $H(P, Q)$ is the cross entropy of P and Q, and $H(P)$ is the entropy of P (which is the same as the cross-entropy of P with itself). The KL divergence $D_{KL}(P \| Q)$ can be thought of as something like a measurement of how far the distribution Q is from the distribution P. The cross-entropy $H(P, Q)$ is itself such a measurement, but it has the defect that $H(P, P) = H(P)$ is not zero, so we subtract $H(P)$ to make $D_{KL}(P \| Q)$ agree more closely with our notion of distance.
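For example, compiling with this loss is a one-liner; a tiny sketch with a hypothetical network whose last layer is a softmax, so the output is itself a probability distribution ('kullback_leibler_divergence' is the built-in Keras alias):

from keras import layers, models

model = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(5, activation="softmax"),   # 5 "classes", trained against soft targets
])
model.compile(optimizer="adam", loss="kullback_leibler_divergence")
# model.fit(X, y_soft), where each row of y_soft is a probability vector summing to 1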
H: What are the selection criteria to choose between XGBoost and Random Forest? I am trying to understand when someone would choose Random Forest over XGBoost and vice versa. All the articles out there highlight the differences between the two, and I understand them. But when actually given a real-world dataset, how should we approach the problem of choosing between these? For example: is there a set of statistical tests to check variance and then choose? Or is it simply that you have a number of features and cannot really do parameter tuning, so you apply Random Forest to get results? AI: Let's say that the best way to choose is empirical: you run both algorithms on the dataset and check which one has better performance. It's true that you can do a lot of theoretical analysis, but in the end you have to try no matter what. They are both ensembles of decision trees, so the results should not be too different. In my experience, gradient boosting tends to achieve better results. It is also more mathematically complicated to understand. Normally decision tree ensembles don't require too much parameter tuning, or at least less than other models. There is no classical statistical test that will tell you which one will perform better. There are some heuristics, but I find them overly complicated.
H: Extremely high MSE/MAE for Ridge Regression(sklearn) when the label is directly calculated from the features Edit: Removing TransformedTargetRegressor and adding more info as requested. Edit2: There were 18K rows where the relation did not hold. I'm sorry :(. After removing those rows and upon @Ben Reiniger's advice, I used LinearRegression and the metrics looked more saner. The new metrics are pasted below. Original Question: Given totalRevenue and costOfRevenue, I'm trying to predict grossProfit. Given that it's a simple formula totalRevenue - costOfRevenue = grossProfit, I was expecting that the following code would work. Is it a matter of hyperparameter optimization or have I missed some data cleaning. I have tried all the scalers and other regressions in sklearn but I don't see any big difference. # X(107002 rows × 2 columns) +--------------+---------------+ | totalRevenue | costOfRevenue | +--------------+---------------+ | 2.256510e+05 | 2.333100e+04 | | 1.183960e+05 | 2.857000e+04 | | 2.500000e+05 | 1.693000e+05 | | 1.750000e+05 | 8.307500e+04 | | 3.905000e+09 | 1.240000e+09 | +--------------+---------------+ # y +--------------+ | 2.023200e+05 | | 8.982600e+04 | | 8.070000e+04 | | 9.192500e+04 | | 2.665000e+09 | +--------------+ Name: grossProfit, Length: 107002, dtype: float64 # Training import numpy as np import sklearn from sklearn.compose import TransformedTargetRegressor from sklearn.preprocessing import StandardScaler from sklearn.linear_model import Ridge from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=13) x_scaler = StandardScaler() pipe_l = Pipeline([ ('scaler', x_scaler), ('regressor', Ridge()) ]) regr = pipe_l regr.fit(X_train, y_train) y_pred = regr.predict(X_test) print('R2 score: {0:.2f}'.format(sklearn.metrics.r2_score(y_test, y_pred))) print('Mean Absolute Error:', sklearn.metrics.mean_absolute_error(y_test, y_pred)) print('Mean Squared Error:', sklearn.metrics.mean_squared_error(y_test, y_pred)) print('Root Mean Squared Error:', np.sqrt(sklearn.metrics.mean_squared_error(y_test, y_pred))) print("Scaler Mean:",x_scaler.mean_) print("Scaler Var:", x_scaler.var_) print("Estimator Coefficient:",regr.steps[1][1].coef_) Output of above metrics after training(Old Metrics with 18k rows which did not confirm to the relation) R2 score: 0.69 Mean Absolute Error: 37216342513.01034 Mean Squared Error: 7.601569571667974e+23 Root Mean Squared Error: 871869805169.7842 Scaler Mean: [1.26326695e+13 2.14785735e+14] Scaler Var: [1.24609190e+31 2.04306993e+32] Estimator Coefficient: [1.16354874e+15 2.59046205e+09] Ridge(After removing the 18k bad rows) R2 score: 1.00 Mean Absolute Error: 15659273.260432156 Mean Squared Error: 8.539990125466045e+16 Root Mean Squared Error: 292232614.97420245 Scaler Mean: [1.57566809e+11 9.62274405e+10] Scaler Var: [1.20924187e+25 5.95764210e+24] Estimator Coefficient: [ 3.47663586e+12 -2.44005648e+12] LinearRegression(After removing the 18K rows) R2 score: 1.00 Mean Absolute Error: 0.00017393178061611583 Mean Squared Error: 4.68109129068828e-06 Root Mean Squared Error: 0.0021635829752261132 Scaler Mean: [1.57566809e+11 9.62274405e+10] Scaler Var: [1.20924187e+25 5.95764210e+24] Estimator Coefficient: [ 3.47741552e+12 -2.44082816e+12] AI: (To summarize the comment thread into an answer) Your original scores: Mean Absolute Error: 37216342513.01034 Root Mean Squared Error: 871869805169.7842 are based on the original-scale target variable 
and are between $10^{10}$ and $10^{12}$, which is at least significantly smaller than the mean of the features (and the target). So these aren't automatically bad scores, although for a perfect relationship we should hope for better. Furthermore, a 0.69 R2 value is pretty low, no scale-consciousness needed. That both of the model's coefficients came out positive is the most worrisome point. I'm glad you identified the culprit rows; I don't know how I would have diagnosed that from here. Your new ridge regression still has "large" errors, but significantly smaller than before, and quite small compared to the feature/target scale. And now the coefficients have different signs. (I think if you'd left the TransformedTargetRegressor in, you'd get largely the same results, but with less penalization.) Finally, when such an exact relationship is the truth, it makes sense not to penalize the regression. Your coefficients here are a little bit larger, and the errors drop away to nearly nothing, especially considering the scale of the target.
H: What is the learning rate in a neural network? When I am creating a model using Keras, we should define the learning rate (lr) in the optimizer method. Please refer to the code below. from keras import optimizers sgd = optimizers.SGD(lr=0.01, clipnorm=1.) Please give a simple example with a definition. How do I choose the value of the lr parameter? Is it a random value? AI: In simple terms, the learning rate controls how big a step you take when updating your parameters to move towards the minimum point of your loss function. If your step is too big, you might overshoot the minimum point of your loss function. If your step is too small, it might take too long to converge to the minimum point. Ways to deal with the above problems are the use of momentum, which is a weighted average of the previous update and the current gradient, or the use of a decaying learning rate.
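As a toy illustration of why the value matters (made-up numbers, minimizing f(w) = w**2 whose gradient is 2w):

def descend(lr, steps=20, w=5.0):
    for _ in range(steps):
        w = w - lr * 2 * w   # gradient descent update with learning rate lr
    return w

print(descend(lr=0.01))   # too small: converges, but slowly (still far from 0 after 20 steps)
print(descend(lr=0.1))    # moderate: ends up close to the minimum at 0
print(descend(lr=1.1))    # too large: each update overshoots and the value diverges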
H: Sklearn Pipeline for mixed features: numerical and (skewed) categorical I am working on a dataset from Kaggle (housing price prediction). I have done some pre-processing on the data (missing values, category aggregation, selecting ordinal vs one-hot). I am trying to implement a pipeline to streamline the code. The pipeline consists of a ColumnTransformer with two components: one component contains a standard scaler applied to numerical and ordinal features; the second component has a one-hot encoder for the remaining set of features. I am passing this transformer to a GridSearchCV object to tune hyperparameters. In this case, it is a LASSO model. So, I am trying to tune the coefficient of the penalty term. The problem is some of the one-hot encoded features are highly skewed with the count in mostly one category. When GridSearchCV tries to run cross-validation, it raises an error saying that unknown categories are found while validating the model. I think this happens because while fitting the one-hot encoder the train set doesn't contain data points with specific labels that show up in the validation set. One obvious way to handle this would be to fit a one-hot encoder, keep it aside and then build a pipeline and carry on with the grid search (/validation) steps. This seems a bit disconnected to me considering the notion of pipeline was defined for exactly this purpose. Maybe I am missing something here. Is there a better (/efficient) way to achieve the above rather than separating the one-hot encoder from the pipeline? For reference, the data for the above histogram, Category Counts CompShg 1434 Tar&Grv 11 Tar&Grv 11 WdShngl 6 WdShake 5 Roll 1 ClyTile 1 Metal 1 Membran 1 AI: You can use handle_unknown='ignore' in the OneHotEncoder; levels present in the test set but not the train set will be encoded as all-zeros rather than raising an error. But... that example you provide, I rather doubt it's worth it to keep all the levels. The coefficient learned for such a small level's dummy variable will be overly specialized (overfit). Consider alternatives.
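For reference, a minimal sketch of such a pipeline with handle_unknown='ignore'; num_cols and cat_cols are placeholder lists standing in for your numerical/ordinal and nominal columns:

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

num_cols = ["LotArea", "YearBuilt"]        # placeholders: your numerical/ordinal columns
cat_cols = ["RoofMatl", "Neighborhood"]    # placeholders: your nominal columns

preprocess = ColumnTransformer([
    ("num", StandardScaler(), num_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),   # unseen levels -> all zeros
])

pipe = Pipeline([("prep", preprocess), ("model", Lasso())])
search = GridSearchCV(pipe, param_grid={"model__alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
# search.fit(X_train, y_train)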
H: 80-20 or 80-10-10 for training machine learning models? I have a very basic question. 1) When is it recommended to hold out part of the data for validation and when is it unnecessary? For example, when can we say it is better to have an 80% training, 10% validation and 10% testing split, and when can we say it is enough to have a simple 80% training and 20% testing split? 2) Also, does using K-fold Cross-Validation go with the simple split (training-testing)? AI: 1) In the 80-10-10 scheme, 80% is for training, 10% is for validation and 10% is for testing. The validation set is required to search for the optimal hyperparameters. For models having no hyperparameters, it doesn't do much good to use a validation set (although it is still useful for determining when to stop training the model using early stopping). In such a situation, one might just keep 80% as the training set and 20% as the testing set. 2) Yes, K-fold CV can be used with the simple (training-testing) split.
H: Find the optimal n_estimator by looping the model accuracy indicator in random forest algorithm - python i'm trying to find the best n_estimator value on a Random Forest ML model by running this loop: for i in r: RF_model_i = RandomForestClassifier(criterion="gini", n_estimators=i, oob_score=True) RF_model_i.id = [i] # dynamically add fields to objects RF_model_i.fit(X_train, y_train) y_predict_i = RF_model_i.predict(X_test) accuracy_i = [accuracy_score(y_test, y_predict_i), i] results.append(accuracy_i) # put the result on a list within the for-loop question #1 ** What i like to understand is if this could be a good way to understand how to decide the n_estimator parameter and possibly why (i'm not so sure it is) 2.**question #2 If what i'm doing have some sense, could be a good idea extend this loop to all of other main parameters? What i obtain is this: a level of accuracy for a number of estimator associated with which i runned the random forest algorithm Thank you AI: The number of trees in a random forest doesn't really need to be tuned, at least not in the same way as other hyperparameters. Adding more trees just stabilizes the results (you're averaging more samples from a distribution of trees); you want enough trees to get stable results, and adding more won't hurt except for computational resources. More directly (question 1), you could instead train the RF with, say, 1000 n_estimators, then just grab the predictions from each tree and average only the first 100 of them, the first 200, etc. Since the trees are built independently, this is essentially the same as training repeatedly on 100, then 200, etc. trees from scratch (but of course faster). More generally (question 2), yes, this is one-dimensional grid search. Hyperparameters may be interdependent though, so doing independent 1D searches will probably be suboptimal. Look into full grid search, random search, and maybe more advanced hyperparameter optimization methods.
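One way to implement the "grow once, evaluate at several sizes" idea in practice is scikit-learn's warm_start flag, which keeps the already-built trees and only adds new ones on each fit; a sketch reusing the X_train/X_test split from your code:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

results = []
rf = RandomForestClassifier(criterion="gini", warm_start=True)
for n in range(50, 1001, 50):
    rf.set_params(n_estimators=n)   # adds trees on top of the ones already fitted
    rf.fit(X_train, y_train)
    acc = accuracy_score(y_test, rf.predict(X_test))
    results.append([acc, n])        # expect accuracy to stabilize, not peak, as n grows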
H: Split text into phrases of a person and an operator I have about 5,000 texts without punctuation marks. Each text is a conversation between an operator and a person. For example: "Hello hello how can I help you how can I find out how much money I have in my account" How can one separate the phrases of the person from those of the operator using machine learning methods? AI: I think the task you're looking for is called "punctuation restoration" or "punctuation prediction". An example of this is described in this paper, or this other paper. Unfortunately I am not aware of an available pre-trained model that you might apply to your dataset. Honestly, I think it is unlikely that something trained specifically for customer service calls exists, unless you want to annotate your own dataset and train a model on it.
H: How to approximate constant parameters of any generalized function? I am new to Data Science and Machine Learning. I am trying to approximate constant parameters of any generalized not necessarily polynomial function given the input, desired output and the form of the function using neural network. Is there any other way to do it using neural network or should I use some other technique? For example this is one of the functions whose parameters par6 and par7 I am trying to approximate: I have the values for x and f2(x). Any help will be appreciated. Thank You. AI: There are general techniques around. E.g., https://en.wikipedia.org/wiki/Non-linear_least_squares, https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html For your specific example, there's a linearization (depending on the values that $y$ can take on): $$ \begin{align*} y &= \left[1+\left(\frac{x_1-x_2}{p_6}\right)^{p_7}\right]^{-1} \\ \frac1y-1 &= \left(\frac{x_1-x_2}{p_6}\right)^{p_7} \\ \frac1y-1 &= \left( \frac1{p_6}\right)^{p_7} (x_1-x_2)^{p_7} \\ \underbrace{\log\left(\frac1y-1\right)}_{y'} &= \underbrace{-p_7\log p_6}_{a} + \underbrace{p_7}_b\underbrace{\log(x_1-x_2)}_{x'} \end{align*}$$ At this point, your original problem is expressed as a linear model. Of course, the error terms are now skewed from what they originally might have been, and taking the logarithm won't have worked if $y\notin(0,1)$.
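A minimal scipy.optimize.curve_fit sketch for the function above, with made-up data and assuming x already holds the $x_1 - x_2$ difference:

import numpy as np
from scipy.optimize import curve_fit

def f2(x, p6, p7):
    return 1.0 / (1.0 + (x / p6) ** p7)

# Synthetic data for illustration: true parameters p6=2.0, p7=3.0 plus a little noise
x = np.linspace(0.1, 10, 50)
y = f2(x, 2.0, 3.0) + 0.01 * np.random.randn(50)

popt, pcov = curve_fit(f2, x, y, p0=[1.0, 1.0])   # p0 is the initial guess
print(popt)   # should recover values close to (2.0, 3.0)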
H: Using feature selection to improve model performance I have a highly sparse dataset that I am using to predict a continuous variable via a random forest regression. I have achieved an acceptable level of performance following cross-validation, and I am now thinking of potential ways I might further improve accuracy. Given that my dataset is highly sparse, I was thinking that recursive feature elimination (using the cross-validated version in sklearn) might be a good way to go. My understanding is that this will give me the 'optimal' number of features and thus may reduce issues related to over-fitting. My question is, is it then appropriate to re-run the analysis with these optimized features, or am I in some way biasing the model? I have a test set that has not been used at all in training/validation, so I am assuming that as long as I don't leak info between training and testing, I should be good. But I'm unclear if these assumptions are correct. Is this a suitable use of RFE, or should I be considering a different path? For info, my training/validation dataset is 370 rows with approx. 900 features. AI: 370 rows with approx. 900 features is not an ideal ratio. I would suggest some dimensionality reduction; PCA, factor analysis and PLS regression are some alternatives. You could try a lasso / elastic net regression as well. Here is a good guide: https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html
H: I don't understand RidgeCV's fit_intercept, and how to use it for my data Alright, I have an assignment that makes me calculate weights for a function with different terms. At first, I thought I might just leave the weight for the term $1$ out, and instead use the intercept. I have decided to use RidgeCV, as I had a large amount of multicolinearity. However, now I have appended my $x$ by a row of $1$'s, and did the following: RidgeCV(fit_intercept = False).fit(x, y) Alright, now I have an array of weights. For comparison, I have tried also RidgeCV(fit_intercept = True).fit(x, y) As expected, the weight for $1$ became 0. However, all other values changed too - and the intercept is different from the weight for $1$ from before. I have also a different .score() - the first one is higher. Why is that the case? I thought all fit_intercept was doing is adding a row of $1$ to my $x$, which obviously can't be true. Also, should I try centering my data myself, or is that now unnecessary? AI: The weight on the constant column was also penalized by Ridge, whereas the intercept fitted directly is not. To your second question: Centering shouldn't be necessary (except maybe for computational reasons), though scaling is still recommended so that coefficient penalties are applied more equally.
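A small sketch with synthetic data that makes this visible; I use plain Ridge with a fixed (large) alpha so the effect is easy to see, but the same applies to whichever alpha RidgeCV selects. The key point: the weight on the hand-added column of ones is shrunk by the penalty, while the directly fitted intercept is not.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 10.0 + 0.1 * rng.randn(100)   # true intercept = 10

X_ones = np.hstack([X, np.ones((100, 1))])                         # hand-added column of ones

m_no_int = Ridge(alpha=100.0, fit_intercept=False).fit(X_ones, y)
m_int = Ridge(alpha=100.0, fit_intercept=True).fit(X, y)

print(m_no_int.coef_[-1])   # weight on the ones column: pulled well below 10 by the penalty
print(m_int.intercept_)     # unpenalized intercept: close to the true value of 10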
H: Confusion matrix and ROC AUC curves are not in sync I created a classification model with three target classes and created a confusion matrix to measure the accuracy, here is the matrix code from sklearn.datasets import load_wine data = load_wine() x = pd.DataFrame(data=data.data, columns=data.feature_names ) y = pd.DataFrame(data=data.target, columns=['target']) x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2,shuffle=True,random_state = 42) model = DecisionTreeClassifier() model.fit(x_train, y_train) y_pred = model.predict(x_test) mat=confusion_matrix(y_test, y_pred, labels=[0,1,2]) The output of the above matrix is array([[13, 1, 0], [ 0, 14, 0], [ 1, 0, 7]]) So, this is a balanced dataset and giving me an accuracy of 94% almost. The problem is, when I tried to draw ROC AUC curve for class 0 using the below code, the AUC curve is the opposite and I am getting only 0.05 area under the curve. fpr,tpr, thres = roc_curve(y_test, y_pred, pos_label=0) roc_auc = auc(fpr,tpr) plt.title('Receiver Operating Characteristic') plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'r--') plt.xlim([0, 1]) plt.ylim([0, 1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.show() I tried to calculate and draw ROC AUC manually and I got TPR as 0.9285 and FPR as 0.04545 and the graph is perfect on the sheet, could you please help me why the above code is giving the graph the other way. I verified the above code for the other two classes and the graph is good. Thanks in advance. AI: The ROC curve is built by taking different decision thresholds, and should be built using the predict_proba of your estimator. In particular, in your multiclass example, the ROC is using the values 0,1,2 as a rank-ordering! So there are four thresholds, the one between 0 and 1 being the most important here: there, you declare all of the samples the model predicts as being class 0 to be negative, and all others as positive, giving you the complementary point in the ROC-space. Instead, you should have fpr,tpr, thres = roc_curve(y_test, model.predict_proba(x_test)[:,0], pos_label=0) # or fpr,tpr, thres = roc_curve(y_test==0, model.predict_proba(x_test)[:,0])
H: Model selection metric for validation phase in deep learning I have been taught that for each epoch in training, we perform a training phase, and then a validation phase where we decide whether the new set of parameters is better than the current best. This selection of parameters is done using the same loss function that we use for training, but applied to validation data instead of training data. My question is, does the metric used in the validation phase really have to be the same loss function, or can it be some other metric, such as accuracy or average precision? AI: It doesn't have to be the same function and usually is not. The point of the validation set is to measure how well our model is actually doing. This is only useful if we measure it on a metric that has some value to us. For instance, in classification, no one is concerned about which model achieves the least cross-entropy, rather which one achieves the highest accuracy, which makes more sense from a business standpoint. One clarification I'd like to make on your initial statement that doesn't have to do with the question itself. I have been taught that for each epoch in training, we perform a training phase, and then a validation phase where we decide whether the new set of parameters is better than the current best. This selection of parameters is done using the same loss function that we use for training, but applied to validation data instead of training data. This is incorrect! An epoch is technically the point at which the model has seen the training dataset once. After the end of the epoch, we usually do a pass on the validation set to measure how well our model is actually doing. However, there is no hyperparameter tuning performed at this step, as the model hasn't been fully trained yet. A model typically takes multiple epochs to train, after which we can change the hyperparameters and start again. The only change we might make to the hyperparameters at the end of an epoch is something like a scheduled learning rate reduction.
H: Can OLS regression be used to predict from a complete sequence of data? Reading online and following this example from scipy I understand OLS can be used to find data between gaps in a sequence (interpolate?) but I already have a complete sequence and want to predict the future of it (extrapolate, i think). Can OLS be used for that? AI: Ordinary least squares (OLS) is an optimization method to find the best parameter estimates for linear regression, gradient descent is another. Regardless of the specific optimization method, linear regression is not appropriate for predicting sequence data. Generally, time series methods are used to predict sequence data.
H: Interpretation for test score , training score and validation score in machine learning? Interpretation for test score , training score and validation score ? what they actually tell us? What's an acceptable difference between cross test score , validation score and test score? If difference between test score and training score is small mean it is a good model/fit? overfitting and under fitting on the basis of test score , training score and validation score? whether any of these score or difference among them tells us , if we need more data(observations) AI: Interpretation for test score , training score and validation score ? what they actually tell us? We usually divide our data-set in 3 parts. Training-data, validation-data and test-data. Then we analyze the score: Training Score: How the model generalized or fitted in the training data. If the model fits so well in a data with lots of variance then this causes over-fitting. This causes poor result on Test Score. Because the model curved a lot to fit the training data and generalized very poorly. So, generalization is the goal. Validation Score This is still a experimental part. We keep exploring our model with this data-set. Our model is yet to call the final model in this phase. We keep changing our model until we are satisfied with the validation score we get. Test Score This is when our model is ready. Before this step we have not touched this data-set. So, this represents real life scenario. Higher the score, better the model generalized. What's an acceptable difference between cross test score , validation score and test score? I think the "cross test score, Validation score" there are no difference. The right naming is "cross validation score". Cross validation is used by interchanging the training and validation data in different ways multiple times. A image that describes (Copyright reserved by towardsdatascience.com): So your question should be "What's an acceptable difference between cross validation score and test score?" There's no straightforward answer to this question. Here's a long discussion you might want to consider: Cross validation vs Test set. If difference between test score and training score is small mean it is a good model/fit? Yes!! This is what we strive for. And often times to achieve this, We require various engineering techniques, calculations and parameter tuning. overfitting and under fitting on the basis of test score , training score and validation score? Usually, high training score and low test score is over-fitting. Very low training score and low test score is under-fitting. First example here, in technical term is called low bias and high variance which is over-fitting. The latter example, high variance and high bias called under-fitting. In other words, A model that is underfit will have high training and high testing error while an overfit model will have extremely low training error but a high testing error. whether any of these score or difference among them tells us , if we need more data(observations) This is a very good question. you can refer here. And the best explanation of this I got from Andrew NG. This is the link. These will give you a solid understanding. but in simple words I can give example for intuition (won't be accurate): suppose I have 500 rows of data. I will first take 100 of my data. Then I will analyse my bias and variance. Next I will take 100+100=200 data from those 500. The again I will record my bias and variance. 
I will keep doing this by taking 300, 400 and 500 rows. If I observe that my bias and variance are improving (i.e. the model is improving), it means adding more data might actually help. This is a very simplified example, but I hope it gives an intuition.
H: How are gradients computed for pooling layers in a Convolutional Neural Network? Assuming we could compute a layerwise Hessian of the error function when training a neural network, would the error sub-surface of pooling layers be flat? Is that correct? There are no weights to be learnt for a pooling layer, but e.g. max pool can select different values at different iterations. Will that affect the error surface? AI: 1) Will the error subsurface be flat? One cannot compute the Hessian or any such error subsurface for the pooling layer, because common pooling layers like max and average pooling do not have any parameters to learn (as you mentioned)! 2) But we can speak of the effect of the pooling layers on the error surface of the previous layers. The effect is different for different types of pooling layers. Max pooling generates a relatively sparser gradient flow to the parameters of the previous layers, as only a few of the outputs from the previous layers are selected during forward propagation. Average pooling, on the other hand, allows a smoother gradient flow to all the learnable parameters of the previous layers.
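This gradient-flow difference is easy to see directly; a small PyTorch sketch on a toy 4x4 input:

import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4).requires_grad_(True)

# Max pooling: only the maximal element of each 2x2 window receives gradient
F.max_pool2d(x, kernel_size=2).sum().backward()
print(x.grad)   # sparse: zeros everywhere except a single 1 per window

x.grad = None   # reset the gradient buffer

# Average pooling: the gradient is spread evenly over every element of each window
F.avg_pool2d(x, kernel_size=2).sum().backward()
print(x.grad)   # dense: 0.25 everywhere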
H: How to transform and cummulate data in python I would like to transform dataframe and cummulate them using pandas. year country value 1999 JAPAN 10 2000 KOREA 15 2000 USA 20 2001 USA 13 2002 JAPAN 30 * I want to transform dataframe and cummulate value for each country year country value 1999 JAPAN 10 1999 KOREA 0 1999 USA 0 2000 JAPAN 10 2000 KOREA 15 2000 USA 20 2001 JAPAN 10 2001 KOREA 15 2001 USA 33 2001 JAPAN 40 2001 KOREA 15 2001 USA 33 I need your help. Thank you. AI: I think the following lines should give the output you are looking for: # Create pivot table df = pd.pivot_table(df, values="value", index="country", columns="year") # Calculate cumulative sum and forward fill NaN df = df.cumsum(axis=1).fillna(method="ffill", axis=1) # Reshape data back into long format df = df.reset_index().melt("country") I first create a pivot table with the countries in the rows and the years in the columns to get all the years that may be missing in the long format data. I then calculate the cumulative sum, after which I forward fill the values over the columns. To finally reshape the data back into the original long format I reset the index and melt the dataframe with the country as the id column. Given your input, this returns the following dataframe: country year value 0 JAPAN 1999 10.0 1 KOREA 1999 NaN 2 USA 1999 NaN 3 JAPAN 2000 10.0 4 KOREA 2000 15.0 5 USA 2000 20.0 6 JAPAN 2001 10.0 7 KOREA 2001 15.0 8 USA 2001 33.0 9 JAPAN 2002 40.0 10 KOREA 2002 15.0 11 USA 2002 33.0
H: The meaning of $\gamma^{t-t_0}$ in reinforcement learning with PyTorch When reading the PyTorch tutorial: Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t-t_0} r_t$, where $R_{t_0}$ is also known as the return. I know $\gamma$ is the discount factor, but I am not sure what the $t-t_0$ in $\gamma^{t-t_0}$ means. Thank you. AI: I have no experience with reinforcement learning, however looking at the formula I think I understand what is meant. Gamma is the discount factor, which is raised to the power $t-t_0$, i.e. the number of time steps elapsed since $t_0$. This gives the discount factor for a specific time step, which is then multiplied by the reward at that step, $r_t$, to get the discounted reward for that step. The total return is then computed by summing all of these discounted future rewards.
H: How is the weight matrix set and its dimension in CNN I am trying to calculate the output size of each layer and the number of parameters for 3 class classification using CNN. I have calculated till the final maxpooling layer and would really appreciate help in understanding how to reach the fully connected layer. In Matlab I checked that the size for fully connected (FC) is 1176 and the weights are 3*1176 I am struggling to understand what these 2 numbers mean and how they have been calculated. I could guess that 3 comes from the number of classes but how did 1176 come? It does not match my calculation. Question: How to determine the dimension of the last layer- FC? Since I have 3 classes, do I have 3 layers? Please correct me where wrong. Attached is the screenshot which shows the dimensions for the weight as 3*1176 . AI: The 1176 is the number of neurons in the layer before the last layer, which is the number of pixels in the convolutional layer but flattened (height*width*n_filters, i.e. 7*7*24=1176). The 3 comes from the number of neurons in the final layer, since you have 3 classes your final layer has 3 neurons, 1 for each class (which will output zero or one). This then gives a weight matrix of 3x1176 for the weights between the second to last layer and the last layer. Using torch we can show the dimensions of the data passed between the layers in the network. In the example below I have omitted the batch normalization and relu layers since they do not affect the dimensions. The dimensions are defined as follows: batch_size * n_channels * height * width. import torch x = torch.randn(1, 1, 28, 28) x.shape # torch.Size([1, 1, 28, 28]) conv1 = torch.nn.Conv2d(in_channels=1, out_channels=12, kernel_size=3, padding=1)(x) conv1.shape # torch.Size([1, 12, 28, 28]) pool1 = torch.nn.MaxPool2d(kernel_size=2, stride=2)(conv1) pool1.shape # torch.Size([1, 12, 14, 14]) conv2 = torch.nn.Conv2d(in_channels=12, out_channels=24, kernel_size=3, padding=1)(pool1) conv2.shape # torch.Size([1, 24, 14, 14]) pool2 = torch.nn.MaxPool2d(kernel_size=2, stride=2)(conv2) pool2.shape # torch.Size([1, 24, 7, 7]) pool2.view(-1).shape # torch.Size([1176])
H: Kohen Kappa Coefficient of Naive Bayes with 62% overall accuracy is better than Logistic Regression with 98% accuracy? I have been trying to evaluate my models used on fire systems dataset with a huge imbalance in the dataset. Most models failed to predict any true positives correctly however naive Bayes managed to do that but with a very high rate of False Positive. I had run the experiments on both the confusion matrix and classification report for both can be seen below. The same dataset and train/test split was used with both of the datasets Naive Bayes Confusion Matrix and Classification Report [[TN=732 FP=448] [FN=2 TP=15]] precision recall f1-score support 0 1.00 0.62 0.76 1180 1 0.03 0.88 0.06 17 accuracy 0.62 1197 macro avg 0.51 0.75 0.41 1197 weighted avg 0.98 0.62 0.75 1197 Logistic Regression Confusion Matrix and Classification Report [[TN=1180 FP=0] [FN=17 TP=0]] precision recall f1-score support 0 0.99 1.00 0.99 1180 1 0.00 0.00 0.00 17 accuracy 0.98 1197 macro avg 0.49 0.50 0.50 1197 weighted avg 0.97 0.99 0.98 1197 However I got the Kohen Kappa Coefficient for these models and I am quite confused on how to interpret the values. Please find values below Logistic Regression=0.0 Naive Bayes=0.03 These values indicate very slight agreement. But why is the value of Naive Bayes slightly better than Logistic regression ? AI: Logistic Regression is only predicting one class (in this case the negative class)! Because of the high imbalance in the data, this model gives a high accuracy score. This metric, however, isn't reliable for imbalanced datasets. A more proper metric like Cohen's Kappa penalizes this behavior. Naive Bayes, on the other hand, tries to predict both classes. It misses a lot more predictions this way, but it's Kappa is higher.
H: Predicting the missing word using fasttext pretrained word embedding models (CBOW vs skipgram) I am trying to implement a simple word prediction algorithm for filling a gap in a sentence by choosing from several options: Driving a ---- is not fun in London streets. Apple Car Book King With the right model in place: Question 1. What operation/function has to be used to find the best fitting choice? The similarity functions in the library are defined between one word to another word and not one word to a list of words (e.g. most_similar_to_given function). I don't find this primitive function anywhere while it is the main operation promised by CBOW (see below)! I see some suggestions here that are not intuitive! What am I missing here? I decided to follow the head first approach and start with fastText which provides the library and pre-trained datasets but soon got stuck in the documentation: fastText provides two models for computing word representations: skipgram and cbow ('continuous-bag-of-words'). The skipgram model learns to predict a target word thanks to a nearby word. On the other hand, the cbow model predicts the target word according to its context. The context is represented as a bag of the words contained in a fixed size window around the target word. The explanation is not clear for me since the "nearby word" has a similar meaning as "context". I googled a bit and ended up with this alternative definition: In the CBOW model, the distributed representations of context (or surrounding words) are combined to predict the word in the middle. While in the Skip-gram model, the distributed representation of the input word is used to predict the context. With this definition, CBOW is the right model that I have to use. Now I have the following questions: Question 2. Which model is used to train fastText pre-trained word vectors? CBOW or skipgram? Question 3. Knowing that the right model that has to be used is CBOW, can I use the pre-trained vectors trained by skipgram model for my word prediction use case? AI: Question 1: To do so, I would use the Gensim wrapper of FastText because Gensim has a predict_output_word which does exactly what you want. Given a list of context words, it provides the most fitting words. Question 2: It is up to the user. FastText isn't inherently CBOW or Skipgram. See this Question 3: Yes, even though CBOW and SkipGram are different training procedures, they share a common goal. Both will generate word embeddings where (hopefully) words that are semantically close also have embeddings that are close. The main difference between SkipGram and CBOW is the inherent heuristic used for semantic closeness.
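A rough sketch of that Gensim route, under some loud assumptions: it needs a full FastText (or Word2Vec) model trained or fine-tuned inside Gensim and saved with .save(), since predict_output_word uses the model's output weights, which plain pre-trained word-vector files (.vec) do not contain; the model path is hypothetical and the candidate words are assumed to be lowercased like the training tokens.

from gensim.models import FastText

model = FastText.load("my_fasttext.model")   # hypothetical path to a Gensim-saved model

context = ["driving", "a", "is", "not", "fun", "in", "london", "streets"]
candidates = {"apple", "car", "book", "king"}

# Rank the CBOW-style predictions for the gap and keep the best allowed option
for word, prob in model.predict_output_word(context, topn=1000):
    if word in candidates:
        print(word, prob)
        break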
H: Spark/Databricks: GPU does not appear to be utilized for ML regression (cross-validation, prediction) notebook I have created and attached a notebook to a GPU-enabled Databricks cluster (6.4 ML (includes Apache Spark 2.4.5, GPU, Scala 2.11), EC2 type: p2.xlarge). I have started running the notebook, which includes cells with PySpark/MLlib code for performing cross-validation and prediction using a Pipeline consisting of a VectorAssembler, MinMaxScaler, and GBTRegressor. When I run this job it appears to be utilizing only the CPU (the Ganglia UI shows no GPU activity whatsoever, but plenty of CPU being used). Perhaps there is PySpark code I need to add to my notebook and/or configuration settings for the cluster to allow this code to run with the help of my cluster's GPU? I am new to Spark/MLlib, so it's very possible that I am missing something obvious. Thanks in advance for any suggestions!
AI: Spark itself does not use GPUs at all, so this is not surprising. The operations it performs are at best level-3 BLAS ops of moderate size, and most are small level-1 operations, so generally a GPU isn't a win. It does use BLAS to accelerate those ops in hardware if a BLAS library like MKL or OpenBLAS is present.
H: cross validation on whole data set or training data? I always get a cross-validation score smaller than the training score, and I am performing cross-validation on just the training data. Is that a normal thing? Kfold = 5
AI: Yes, it's called overfitting. Your model is beginning to memorize the training set, but not performing well on any validation or test set. If your question is why this is happening, I'd like to refer you to another answer I wrote explaining this phenomenon in more detail. One interesting question that could be asked is why the performance on the cross-validation folds is worse than on the test set. This is a bit tougher to answer, because I don't have all the details. Some possible explanations could be that since the training set is larger than each fold, the model was trained better, or that the test set examples simply happened to be easier.
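As a quick way to see the gap yourself, you can put the training score and the cross-validation scores side by side; a minimal sketch (the model and data are placeholders for whatever you are using):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = RandomForestClassifier(random_state=0)
cv_scores = cross_val_score(model, X, y, cv=5)   # 5-fold CV on the training data

model.fit(X, y)
print("training score:", model.score(X, y))      # typically close to 1.0
print("mean CV score :", cv_scores.mean())       # noticeably lower => overfitting gap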
H: What model do I use to predict a regression problem with timeseries data Overall Goal To predict how much reagent "A" I started with in a reaction. Data: To predict this I have timeseries data of reagent "B". For each time step a measurement of reagent "B" is taken (the amount of reagent "B" present at that time point). The overall timeseries is a curve. This curve may change based on how much reagent "A" I start with. My question is what model should I use to predict reagent A? Will a Recurrent NN work? I have only seen RNNs used to predict the very next time steps or to classify something based on the timeseries. I am looking for a model that uses time series data to predict a regression target.
AI: [Updated] If you have measured reagent B for the same number of time steps in each experiment:

- Using these time steps as feature values, you can build a regression model with Multi-Layer Perceptrons, XGBoost/LGBM/Random Forests, etc.
- Even Recurrent Neural Nets could work if enough data is available. Treat your time-step values of reagent B as single-dimension embedding values and use a BiLSTM architecture with a linear activation function in the last layer (see the sketch after this answer). The loss could be mean squared error. A BiLSTM could work well in your case since it also makes sense to read from the last time step to the first.

If reagent B is measured for a different number of time steps in different experiments: you can still use ensemble-tree/MLP/RNN based methods, but you will have to consider adding padding (imputation) or capping the maximum number of time steps so the number of time steps is uniform across all experiments. XGB/LGBM can take missing values as inputs and handle them well, so you don't need to do any imputation.
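A minimal sketch of the BiLSTM option in Keras, assuming every experiment has the same number of time steps (the shapes and hyperparameters below are placeholders, not tuned values):

import numpy as np
from tensorflow import keras

n_experiments, n_timesteps = 200, 50
X = np.random.rand(n_experiments, n_timesteps, 1)   # reagent B curve per experiment
y = np.random.rand(n_experiments)                   # starting amount of reagent A

model = keras.Sequential([
    keras.layers.Bidirectional(keras.layers.LSTM(32), input_shape=(n_timesteps, 1)),
    keras.layers.Dense(1, activation="linear"),     # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=16, validation_split=0.2)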
H: What is the current best state of the art algorithm for graph embedding of directed weighted graphs for binary classification? Sorry if this is a newbee question, I'm not an expert in data science, so this is the problem: We have a directed and weighted graph, which higher or lower weight values does not imply the importance of the edge (so preferably the embedding algorithm shouldn't consider higher weights as more important), they are just used to imply the timing of the events which connect the nodes, so the higher weighted edges are events that have happened after the lower ones. There can be multiple edges between nodes, and I want to do a binary classification using deep learning, meaning that my model gets a embedded graph vector as input and decides if its malicious(1) or not(0). So what is the best state of the art graph embedding algorithm for this task that can capture as much information as possible from the graph? i read some graph embedding papers but couldn't find any good comparison of them since there are so many new ones. IMPORTANT NOTE: One problem i have seen with some of the graph embedding algorithms, is that they try to have a small vector dimension, since i guess they are used in fields which there are a LOT of nodes so they need to do this, but in this task its not really important, the nodes are the functions in the program, and they very rarely go above 2000 functions, so even if the algorithm creates a 20k dimensions its no problem, I'm saying this because some of the algorithms that I'm reading will produce a vector that even has lower dimensions compared to number of the nodes in the graph! and that causes loss of information in my opinion. so to sum up, the performance and large vector size is not a problem in my task. so preferably the algorithm should gather as much information as possible from the graph. AI: I'm no expert myself, but recently (i.e. this is true to 2019), I've heard (at a meetup from an expert) that Node2Vec is the SOTA. Here's a link to a post on Medium explaining it - basically, Node2Vec generates random walks on the graph (with hyper-parameters relating to walk length, etc), and embeds nodes in walks the same way that Word2Vec embeds words in a sentence. Note that since random walks are generated on the graph, intuitively you don't have to use edge weights, and can use multiple edges. P.S. There's an older (2014) algorithm for this, called "DeepWalk", which I don't know much about but is supposed to be similar, simpler, and not as performative. I'm just adding this in case you want to search the term.
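If you want to try Node2Vec quickly, the node2vec package on PyPI wraps the walk generation and the Word2Vec training; a rough sketch (the graph construction and parameters are made up, and averaging node vectors at the end is just one simple way to get a whole-graph vector for your binary classifier):

import networkx as nx
import numpy as np
from node2vec import Node2Vec

# toy directed, weighted graph standing in for a call graph between functions
G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 2), (2, 0, 3), (2, 3, 4)])

node2vec = Node2Vec(G, dimensions=64, walk_length=20, num_walks=100, workers=1)
model = node2vec.fit(window=5, min_count=1)

# one crude graph-level embedding: average the node vectors, then feed it to a classifier
node_vectors = np.array([model.wv[str(n)] for n in G.nodes()])
graph_vector = node_vectors.mean(axis=0)
print(graph_vector.shape)   # (64,)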
H: TF2.0 DQN has a loss of 0? I am having a hard time understanding why my loss is constantly zero when using DQN. I'm trying to use the gym environment to play the game CartPole-V0. In the code below, r_batch indicates rewards sampled from the replay buffer, and similarly s_batch, ns_batch, and done_batch indicate the sampled states, next states, and whether the "game" was done. Each time, I sample 4 samples from the buffer.

# This is yi
target_q = r_batch + self.gamma * np.amax(self.get_target_value(ns_batch), axis=1) * (1 - done_batch)
# this is y
target_f = self.model.predict(s_batch)
# this is the gradient descent computation
losses = self.model.train_on_batch(s_batch, target_f)

The values of target_q are:
[ 0.42824322  0.01458293  1.         -0.29854858]
The values of target_f are:
[[0.11004215 1.0435755 ]
 [0.20311067 2.1085744 ]
 [0.413234   4.376865  ]
 [0.24785805 2.6716242 ]]
I'm missing something here, and I don't know what.
AI: OK, a bit more digging led me to this: The TD target or "target value" gets its name because by updating a Q table or training a NN with it as a ground truth, the estimator will output values in future closer to the supplied value. The estimator "gets closer to the target". So, I have to update the prediction network with the "ground truth" before training on it:

for i, val in enumerate(actions):
    predict_q[i][val] = target_q[i]
H: Feature-to-parameter mapping in neural networks For neural networks, can we tell which parameters are responsible for which features? For example, in an image classification task, each pixel of an image is a feature. Can I somehow find out which parameters encode the learned information, say from the top-left pixel of my training instances?
AI: Yes, at least you can identify which pixels are contributing most to the prediction. Tools like Layer-wise Relevance Propagation (LRP), used for explainable AI, serve a similar purpose: they propagate the prediction back through the network's weights and attribute a relevance score to each input pixel, showing which pixels contributed most. Many open-source implementations are available. Along the same lines, instead of identifying only the most relevant pixels, you can compute an attribution value for every pixel. I believe this answers your question.
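If you just want a quick, rough version of this idea without a dedicated LRP library, a plain gradient-based saliency map also gives a per-pixel attribution; a minimal TensorFlow sketch (the model here is a placeholder, and note this is simple input-gradient saliency, not full LRP):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

image = tf.random.uniform((1, 28, 28))          # stand-in for a real input image

with tf.GradientTape() as tape:
    tape.watch(image)
    probs = model(image)
    top_class_score = tf.reduce_max(probs, axis=1)

# |d score / d pixel|: large values mark pixels the prediction is most sensitive to
saliency = tf.abs(tape.gradient(top_class_score, image))[0]
print(saliency.shape)                           # (28, 28), one value per pixel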
H: Principle Component Analysis on multiple functions I am working on a research project that uses data of geometrical structure complexities of different land coverage types (primarily cities, pastures and natural structures). My supervisor has instructed me to use PCA for the reduction of dimensions, but I have trouble understanding how it would work on my data. That data is a collection of a hundred 2D plots of which the x-axis runs from 0 to 255 (in steps of 1) and the y-axis from 1 to 2 (in non-integer steps). The individual plots are not linear, but to some extent have the same shapes. My problem is that for as far as I know, PCA won't work here because for every x-value there are multiple y-values, if I plot all of the individual datasets into one big graph. Also, isn't it a problem that the individual plots are nonlinear? So my question is: would PCA work on this 'multivalued and nonlinear' dataset? If so, where could I look for an explanation on how exactly it would work? AI: Variations of PCA can still be applicable. If the data is not linear, use nonlinear PCA. It is not an issue there a multiple "y"s for every "x". PCA is unsupervised, there is no notion of targets. In PCA, there are only dimensions. Typically, the dimensions are standardized so each dimension can be weighted independent of scale. One caveat - PCA is domain agnostic. Given that you have physical land data, you can use domain-specific methods such as hierarchical spatial indexing.
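To make the mechanics concrete: if each of your hundred plots is sampled at the same 256 x-positions, you can treat each plot as one observation with 256 features and let PCA find the dominant shape patterns; a sketch with made-up data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

n_plots, n_x = 100, 256
curves = 1 + np.random.rand(n_plots, n_x)        # stand-in for your y-values in [1, 2]

X = StandardScaler().fit_transform(curves)       # standardize each x-position (column)
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                    # each plot reduced to 3 numbers

print(scores.shape)                              # (100, 3)
print(pca.explained_variance_ratio_)             # shape variation captured per component

For the nonlinear variant mentioned above, sklearn.decomposition.KernelPCA can be swapped in with the same fit_transform interface.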
H: Word2Vec Implementation In word2vec, why is the likelihood function implemented as a multiplication of the probabilities of finding a neighbouring word given a word? I didn't get why the probabilities should be multiplied. Is there a reason/intuition behind it?
AI: "probabilities of finding a neighboring word given a word" - here you refer to the Skip-Gram architecture, where given the center word you predict the surrounding words. This extract from these notes might clarify your question. Note that by assuming conditional independence, the total probability factors into a product. "As in CBOW, we need to generate an objective function for us to evaluate the model. A key difference here is that we invoke a Naive Bayes assumption to break out the probabilities. If you have not seen this before, then simply put, it is a strong (naive) conditional independence assumption. In other words, given the center word, all output words are completely independent." Maybe this article can also help; though it is about negative sampling, it is a very clear exposition.
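Written out, the independence assumption is what turns the joint probability of the whole context window into a product of per-word probabilities (standard Skip-Gram notation, with $w_t$ the center word and $m$ the window size):

$$P(w_{t-m}, \dots, w_{t-1}, w_{t+1}, \dots, w_{t+m} \mid w_t) = \prod_{\substack{-m \le j \le m \\ j \ne 0}} P(w_{t+j} \mid w_t)$$

The corpus-level likelihood is then the product of these terms over all positions $t$, and taking the logarithm turns the product into the familiar sum that is actually optimized:

$$\log \mathcal{L} = \sum_{t=1}^{T} \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} \mid w_t)$$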
H: Can you combine two xgboost models into one? If you have built two different xgboost models, with say 100 trees each, is it possible to combine them into an xgboost model with 200 trees?
AI: No: the trees' results are added together to produce the final score, so combining two models would produce outputs roughly twice as large as desired. (Gradient boosted trees change the target labels being fitted by each tree, so the 101st tree would have "reset" the targets when training.)
H: How to improve results for clustering of words I have a list of words (names actually) on which I would like to apply some entity resolution. My first guess was to create clusters of similar names so I could extract a representative entity from multiple name shapes. I need to specify that I have no labelled data, and I am not working on a document analysis (this is different from Improve results of a clustering for example), only a raw list. To do so, and based on what I could read, I attempted two approaches : apply n-gram transformation on my names and use k-means clustering apply n-gram transformation, compute a similarity matrix (cosine distance) and use it for affinity propagation Both approaches give me interesting results, yet I can't understand some of the results. For example, I get the following clusters : Geronese, Varonese, Veronefe, Veronese, Veronesse, ... Cameroni, Veronèse, Veronèse P., Veronése, Veronêse Why do I get two different clusters for shapes that look so similar (except for Cameroni which I don't know why it is in that cluster) ? Is this a problem in the k-means algorithm tuning ? Also, I tried using the silhouette metrics to find the optimum number of clusters but I get the exact same value no matter what is the number of clusters (0.315 for what it's worth). As for the affinity propagation approach, I get a lower silhouette score for my clusters, and I get some similar effects, like having this kind of cluster : Birttetti, Laruette, Laruelle, Larvette, Laurette, ... Any ideas how I could improve my results (if this is possible) ? Or maybe any idea for a better approach than mine ? AI: I cannot write comments yet, so I will start with an answer that still contains some questions, feel free to complement/correct it! A) The data Without any contextual information, it looks like the only solution you have to link entities is their character similarity. I don't know which distance metric you use, but have you tried these metrics that are well designed for string data?: Levenshtein distance (also called string edit distance): basically computes the number of operations needed to transform one string into another Jaro-Winkler distance: close to levenshtein but favoring strings with a common prefix Jaccard similarity: roughly computes the ratio of common elements between two sets, be it characters, ngrams, words.. You can try these metrics without the n-gram transformation first, but also on transformed data and use cosine distance afterwards. You can also perform additional normalization but you need to assess whether it is relevant for your dataset, which I don't know, so I am just firing ideas, not advices: stripping accents, special characters, short tokens (like the 'P.' in your example), which will reduce the variance in your data. B) The clustering One thing that could explain your result is that with KMeans or Affinity Propagation, any data point must belong to a cluster, so names that should be alone are assigned to the cluster so that it minimizes the loss of the algorithm. Have you thought of trying DBSCAN? It can label some data points as noise, and if you are using one of the string distances above, you can encode prior knowledge about the maximum distance between two potential matches through the epsilon parameter. But as with any unsupervised method, you will never have the guarantee to get rid of noise.
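Combining points A and B above, here is a rough sketch of clustering the raw name list with a string distance and DBSCAN (difflib's ratio is used as a cheap normalized similarity just for illustration; a proper Levenshtein or Jaro-Winkler implementation would slot in the same way, and eps needs tuning on your data):

import difflib
import numpy as np
from sklearn.cluster import DBSCAN

names = ["Veronese", "Veronesse", "Veronefe", "Varonese", "Cameroni", "Laruette", "Laurette"]

# pairwise distance matrix: 1 - similarity ratio
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = 1 - difflib.SequenceMatcher(None, names[i].lower(), names[j].lower()).ratio()

clustering = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit(dist)
for name, label in zip(names, clustering.labels_):
    print(label, name)     # label -1 means "noise", i.e. no confident match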
H: count_vectorizer.vocabulary_.items() and count_vectorizer.vocabulary_ Inconsistent number of returns

# create a count vectorizer object
count_vectorizer = CountVectorizer()
# fit the count vectorizer using the text data
count_vectorizer.fit(data['text'])
# collect the vocabulary items used in the vectorizer
dictionary = count_vectorizer.vocabulary_.items()

To my understanding, after count_vectorizer fits to data['text'], it generates a list of features. In my case, it generated 25,257 features, and these are mapped as a dict data type when I call count_vectorizer.vocabulary_, which is still 25,257 entries. It means it used all the features. The problem is, when I call count_vectorizer.vocabulary_.items() it returns 15,142 tuples as dict_items. Why has the number been reduced here? Shouldn't all the features be used to make the dictionary? Here are the lengths I'm talking about:

len(data['text'])                          # 19579
len(count_vectorizer.get_feature_names())  # 25257 items
len(count_vectorizer.vocabulary_)          # 25257 items
len(dictionary)                            # 15142 items (??????)

AI: For future readers: as Ben wrote in the comments, it is really hard in general for len(my_dict) != len(my_dict.items()). When these kinds of strange behaviours happen, it is always good practice to perform some routine checks:

1. Clean your environment of every variable, or even better restart the kernel and then run your code again.
2. Check your code for variables with the same name and be sure you're not overwriting anything; it's also easy to end up defining functions with the same name as other imported library functions.
3. Try your code on different datasets, to be sure the mistake is not due to some external issue with the data.

If none of the above checks solves the mystery, then you can start thinking about a bug, a corrupted installation of a library, or some other more nested problem.
H: How to make a classification problem into a regression problem? I have data describing genes which each get 1 of 4 labels, I use this to train models to predict/label other unlabelled genes. I have a huge class imbalance with 10k genes in 1 label and 50-100 genes in the other 3 labels. Due to this imbalance I'm trying to change my labels into numeric values for a model to predict a score rather than a label and reduce bias. Currently from my 4 labels (of most likely, likely, possible, and least likely to affect a disease) I convert the 4 labels into scores between 0-1: most likely: 0.9, likely: 0.7, possible: 0.4, and least likely: 0.1 (decided based on how similar the previous label definitions were in their data). I've been using scatter plots with a linear model to try to understand which model would best fit my data and reducing overfitting, but not sure if there's more I can infer from this except that the data has homoskedasticity (I think? I have a biology background so learning as I go): I'm not sure if there is a more official way I should be approaching this or if this regression conversion is problematic in ways I don't realise? Should I be trying to develop more scores in my training data or is there something else I should be doing? Edit for more information: The current 4 labels I have I create based on drug knowledge of the genes and the drug data I currently have for each gene, I could incorporate other biological knowledge I have to make further labels I think. For example, currently the 'most likely' labelled genes are labelled as such because they are drug targets for my disease, 'likely' label because they are genes which interact with drugs to cause a side effect which leads to the disease, and the other 2 labels go down in relevance until there are least likely genes with no drug related or statistical evidence to cause the disease. AI: So, the direct answer here is clearly NO. The answer comes from the definitions of classification and regression. In a classification task what a model predicts is the probability of an instance to belong to a class (e.g. 'image with clouds' vs 'image without clouds' ), in regression you are trying to predict continuous values (e.g. the level of 'cloudiness' of an image). Sometimes you can turn a regression problem into a classification one. For example if I have a dataset of images with labels of their cloudiness level from 0 to 5 I could define a threshold, i.e. 2.5 and use it to turn the continuous values into discrete one, and use those discrete values as classes (cloudiness level < 2.5 equal image without clouds) but the opposite is definitely not possible. Here's also a link to a similar question Could I turn a classification problem into regression problem by encoding the classes? To solve the problem of imbalanced classes there are many ways, not just oversampling, you can generate artificial data, add class weights in the loss function, use active learning to gather new data, use models that returns uncertainty score for each prediction (like bayesian networks), I'm sure here is plenty of answers and strategy you can try.
H: Convolutional neural network block notation The paper by He et al. "Deep Residual Learning for Image Recognition" illustrates their residual network in Figure 3 as follows: I am not a neural network expert, so could somebody please explain to me what the highlighted notation above "3x3 conv, 256, /2" means? The first part is clear (convolutional neural network with a 3x3 pixel window), but what is the "256" and the "/2"? AI: 3x3 conv, 256, /2 stands for: 3x3 Kernel 256 filters a stride of 2 halving the spatial dimensions The latter is explained on page 3 where the authors state (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. This means ResNet does, except for the beginning and end of the network, not use pooling layers to reduce spatial dimensions but conv. layers. Also, table 1 shows what is happening: The part you have highlighted in your screenshot is the transition from conv3_x to the conv4_x layer of the 34-layer network. As you can see in the table the output size is reduced from 28x28 to 14x14 (that is what /2 does) while the filters are doubled from 128 to 256.
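For concreteness, that single building block maps to one line of code in most frameworks; a sketch in Keras (just the isolated layer, not the full residual block with its shortcut connection):

from tensorflow import keras

# "3x3 conv, 256, /2": 3x3 kernel, 256 filters, stride 2 (halves height and width)
layer = keras.layers.Conv2D(filters=256, kernel_size=3, strides=2, padding="same")

x = keras.Input(shape=(28, 28, 128))   # e.g. the 28x28x128 feature map from conv3_x
y = layer(x)
print(y.shape)                          # (None, 14, 14, 256)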
H: A Loss of 55.2164 with a sparse_categorical_crossentropy in a sequential neural network? I'm following Aurélien Géron's book on Machine Learning. The following code tries to evaluate a neural network with a sparse categorical cross-entropy loss function on the Fashion MNIST data set. How come I get such a strange value for the loss?

import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

fashion_mnist=keras.datasets.fashion_mnist
(X_train_full,y_train_full),(X_test,y_test)=fashion_mnist.load_data()
X_valid , X_train = X_train_full[:5000]/255.0 , X_train_full[5000:]/255.0
y_valid , y_train = y_train_full[:5000] , y_train_full[5000:]

model=keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
model.add(keras.layers.Dense(300,activation="relu"))
model.add(keras.layers.Dense(100,activation="relu"))
model.add(keras.layers.Dense(10,activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",optimizer="sgd",metrics=["accuracy"])

history=model.fit(X_train,y_train,epochs=30,validation_data=(X_valid,y_valid))

#some instructions with *history* for plotting a graph

model.evaluate(X_test,y_test)

and the output I get is [55.21640347443819, 0.8577]. How come I get a loss over 1?
AI: The reason you are getting a very high loss has to do with the normalization you are performing. You are normalizing both X_train and X_valid; however, you do not normalize X_test. Adding normalization of X_test to your code, the final loss on X_test is much lower.

fashion_mnist = keras.datasets.fashion_mnist
(X_train_full,y_train_full), (X_test,y_test) = fashion_mnist.load_data()
X_valid , X_train = X_train_full[:5000]/255.0 , X_train_full[5000:]/255.0
y_valid , y_train = y_train_full[:5000] , y_train_full[5000:]

# Normalizing X_test as well
X_test = X_test/255.0

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
model.add(keras.layers.Dense(300,activation="relu"))
model.add(keras.layers.Dense(100,activation="relu"))
model.add(keras.layers.Dense(10,activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",optimizer="sgd",metrics=["accuracy"])

history = model.fit(X_train,y_train,epochs=30,validation_data=(X_valid,y_valid))

model.evaluate(X_test,y_test)
# [0.328696608543396, 0.8866999745368958]
H: How to select the best features for Support Vector Classification I have a feature set that contains approximately 2 dozen features of technical analysis indicators. My own domain knowledge tells me that some of these features are better than others for predictive power. But what methodical process do I follow, other than 'just a hunch', to refine the feature set to the ones that matter the most? At the moment, I'm just using sklearn's preprocessing package and I throw all the features in, but I know that there must be a better way.

min_max_scaler = preprocessing.MinMaxScaler()
features = ['MACD', 'MFI', 'ROC', 'RSI', 'Ultimate Oscillator', 'Williams %R',
            'Awesome Oscillator', 'KAMA', 'Stochastic Oscillator', 'TSI',
            'Volume Accumulator', 'ADI', 'CMF', 'EoM', 'FI', 'VPT',
            'ADX', 'ADX Negative', 'ADX Positive', 'EMA', 'CRA']
df[features] = min_max_scaler.fit_transform(df[features])

I'm quite new to machine learning and would love some feedback. I am using Pandas and Sklearn as well.
AI: Please read about feature selection. Here are a bunch of methods:

- Univariate selection
- Feature importance
- Correlation matrix with heatmap

Check them out and choose the best. You can find sample implementations at the link: https://towardsdatascience.com/feature-selection-techniques-in-machine-learning-with-python-f24e7da3f36e
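As a starting point for the univariate-selection option, something along these lines could rank your indicators before you feed them to the SVC (the target column name y is a placeholder for whatever label you are classifying):

from sklearn.feature_selection import SelectKBest, f_classif

X = df[features]          # the scaled indicator columns from above
y = df['target']          # placeholder label column, e.g. up/down

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
scores = sorted(zip(features, selector.scores_), key=lambda p: p[1], reverse=True)
for name, score in scores:
    print(f"{name}: {score:.2f}")

X_selected = selector.transform(X)   # keeps only the 10 best-scoring indicators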
H: Problem when using Autograd with nn.Embedding in Pytorch I am having trouble taking derivatives of the output logits with respect to the input input_ids. Here is an example of my input:

# input_ids is a list of token ids got from the BERT tokenizer
input_ids = torch.tensor([101., 1996., 2833., 2003., 2779., 1024., 6350., 102.], requires_grad=True)

content_outputs = self.bert(input_ids,
                            position_ids=position_ids,
                            token_type_ids=token_type_ids,
                            attention_mask=attention_mask,
                            head_mask=head_mask)

# content_outputs[1] is the sentence embedding
logits = F.linear(content_outputs[1], self.W_s, self.b_s)

# Notes:
# - input_ids.dtype = torch.float32
# - input_ids.is_leaf = True
# - input_ids.requires_grad = True
# - Torch version: 1.0.1

To compute gradients for input_ids, input_ids.dtype must be float; otherwise, I get the following error:

RuntimeError: Only Tensors of floating point dtype can require gradients

However, my model uses an embedding layer which requires a long input, causing another problem if I use the float input_ids initialized above:

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.FloatTensor instead (while checking arguments for embedding)

I have also searched on Stack Overflow as well as Stack Exchange but found nothing related to this problem. Please help me with this. Any comments would be appreciated!
AI: So, this is just half an answer; I write it here to be able to format the text clearly. You are facing trouble because you are trying to do something that you shouldn't: applying gradients to indices instead of embeddings. When using embeddings (of any kind, not only BERT), sentences are represented as embedding indices before being fed to the model; those indices are just numbers associated with specific embedding vectors. They only map words to vectors, so you will never apply any operation on them, and especially you will never train them, because they are not parameters.

['Just an example...']   # text
        |
        |  Words are turned into indices
        v
[1, 2, 3, 4, 4, 4]       # indices, tensor type long, no sense in applying gradients here
        |
        |  Indices are used to retrieve the real embedding vectors
        v
[0.3242, 0.2354, 0.8763, 0.4325, 0.4325, 0.4325]   # embeddings, tensor type float, this is what you want to train

If you want to fine-tune BERT for a specific task, I suggest you take a look at this tutorial: BERT-fine-tuning
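If what you actually need is a gradient with respect to the inputs, the place to take it is the embedding output, not the indices; a minimal standalone sketch with a plain nn.Embedding (not your full BERT setup):

import torch
import torch.nn as nn

emb_layer = nn.Embedding(num_embeddings=30522, embedding_dim=16)

input_ids = torch.tensor([101, 1996, 2833, 102])   # long, no requires_grad needed
embeddings = emb_layer(input_ids)                  # float, differentiable
embeddings.retain_grad()                           # keep the grad of this non-leaf tensor

loss = embeddings.sum()                            # stand-in for your real logits/loss
loss.backward()
print(embeddings.grad.shape)                       # (4, 16): one gradient per token vector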
H: How can I tell whether my Random-Forest model is overfitting? I was trying to generate predictions for Iris species using the UCI Machine Learning Iris dataset. I used a RandomForestClassifier with GridSearchCV and calculated the mean absolute error. However, upon generating predictions with the testing set it gave me a suspicious MAE of 0.000000, and a score of 1.0. Is it likely that the model is overfit? If so, why did this happen, and how do I prevent this? iris = pd.read_csv('/iris/Iris.csv') le = LabelEncoder() i2 = iris.copy() labelled_iris_df = pd.DataFrame(le.fit_transform(i2.Species)).rename(columns={0:'Species_Encoded'}) i3 = i2.drop('Species', axis=1) i3 = pd.concat([i3, labelled_iris_df], axis=1) #Encoded dataset y = i3.Species_Encoded X = i3.drop('Species_Encoded', axis=1) X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=42) params = { 'n_estimators':[50,100,150,200], 'max_depth':[3,4,5,6] } rfc = RandomForestClassifier(random_state=42) gc = GridSearchCV(rfc, params, cv=3).fit(X,y) print (gc.best_params_) #n_estimators: 50, max_depth:4 model = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=42) model.fit(X_train,y_train) preds = model.predict(X_test) mae = mean_absolute_error(y_test, preds) sc = model.score(X_test, y_test) print("mae: %f \t\t score: %f" % (mae, sc)) #Prints mae: 0.000000 score: 1.0 I'm a beginner to Machine Learning so please feel free to comment on bad sections of this code and how I can improve them. AI: I think you are probably overfitting. The issue is that while you have performed a train/test split, you are selecting your hyperparameters based on the whole dataset! This way you are feeding information to the model, about the test set, through your hyperparameter selection. To be honest I haven't seen this in such small grid searches, but to be sure you aren't overfitting you need to change: gc = GridSearchCV(rfc, params, cv=3).fit(X, y) to gc = GridSearchCV(rfc, params, cv=3).fit(X_train, y_train) Another detail I'd like to point out is that MAE isn't the proper metric for classification; it is more suited for regression problems. For example is an example belongs to class "1", predicting the class "3" isn't twice as worse than predicting class "2". They are both simply misclassifications. If after these you are still getting high performance, then I'd say its safe to assume you aren't overfitting. If you are I'd like to point you to this answer.
H: 'numpy.ndarray' object has no attribute 'plot' I am trying to balance my data set and am using the imblearn library for this, but after performing the fit operation, when I try to see the data count in the dependent variable it shows me the error below.

AttributeError                            Traceback (most recent call last)
----> 1 y_rus.plot(kind='bar', title='QUALITY COUNT')

AttributeError: 'numpy.ndarray' object has no attribute 'plot'

Please find the code below.
AI: There is nothing wrong with your data; your main mistake is that you need to pay attention to the plot function used. This link tells you how to plot your data: https://matplotlib.org/tutorials/introductory/pyplot.html However, I would change the last line to something like:

import matplotlib.pyplot as plt
plt.plot(y_rus)
plt.show()

or something like:

plt.scatter(range(len(y_rus)), y_rus)
plt.show()

In general it is a bit difficult to understand exactly what you want or the format of the data, so I hope this works. Follow the tutorial; it will help you understand what you really need to continue.
H: What do Python's pandas/matplotlib/seaborn bring to the table that Tableau does not? I spent the past year learning Python. As a person who thought coding was impossible to learn for those outside of the CS/IT sphere, I was obviously gobsmacked by the power of a few lines of Python code! Having arrived at an intermediate level overall, I was pretty proud of myself as it greatly expands my possibilities in data analysis and visualization compared to Excel (aside from the millions of other uses there are for Python). Purely in terms of data analysis and visualization: what does approaching the same data set with pandas/matplotlib/seaborn/numpy bring to the table as opposed to using Tableau? (sidenote: I was greatly disappointed to see all my hard-earned Python data wrangling skills were available in such a user-friendly GUI... :'( ) AI: Don't worry - your hard-earned Python skills are still important ;) Tableau is not a replacement - it is essentially a means of sharing your insights/findings. It is a wrapper around your normal toolkit (Pandas, Scikit-Learn, Keras, etc.). It can do some basic analysis (just using basic models from sklearn), but the powerful thing is it can deploy your models to allow people to run inference on stored data/new data, and then play around with it in an interactive dashboard. Watch this video for a good overview of everything it can do, and how it connects to Python (and R/MatLab). There is just a bit of boiler plate code around your normal Python code. Tableau also offer TabPy to set up a server, allowing nice deployments of your work, but in the end you need their desktop application to view the results (i.e. your customers need it to look at the results). This is not free: https://www.tableau.com/pricing/individual In summary, I'd say Tableau is more of a business intelligence tool, allowing e.g. your non-data-scientist boss or other stakeholders to interactively explore the data and the results of your modelling. Similar to Microsoft's PowerBI.
H: Using MultiLabelBinarizer for SMOTE This is my first NLP project. I'm trying to use SMOTE for a classifier with 14 classes. I need to convert the classes into an array before using SMOTE. I tried using MultiLinearBinarizer but it does not seem to be working. From the stack trace, it seems like everything is getting converted. Do I need to convert something else to an array? How would I do that? from sklearn.naive_bayes import MultinomialNB from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import TfidfTransformer from sklearn.preprocessing import MultiLabelBinarizer nb = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB()), ]) nb.fit(X_train, y_train) mlb = MultiLabelBinarizer() print(mlb.fit_transform(df["Technique"].str.split(","))) print(mlb.classes_) import imblearn from imblearn.over_sampling import SMOTE smote = SMOTE('minority') x_sm, y_sm = smote.fit_sample(X_train, y_train) #print(x_sm.shape, y_sm.shape) pd.DataFrame(x_sm.todense(), columns=tv.get_feature_names()) I'm getting the error ValueError: could not convert string to float: 'left left center' Here is the stack trace [[1 0 0 ... 0 0 0] [1 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] ['Appeal_to_Authority' 'Appeal_to_fear-prejudice' 'Bandwagon' 'Black-and-White_Fallacy' 'Causal_Oversimplification' 'Doubt' 'Exaggeration' 'Flag-Waving' 'Labeling' 'Loaded_Language' 'Minimisation' 'Name_Calling' 'Red_Herring' 'Reductio_ad_hitlerum' 'Repetition' 'Slogans' 'Straw_Men' 'Thought-terminating_Cliches' 'Whataboutism'] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-45-68681190410f> in <module>() 10 smote = SMOTE('minority') 11 ---> 12 x_sm, y_sm = smote.fit_sample(X_train, y_train) 13 #print(x_sm.shape, y_sm.shape) 14 pd.DataFrame(x_sm.todense(), columns=tv.get_feature_names()) 8 frames /usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order) 83 84 """ ---> 85 return array(a, dtype, copy=False, order=order) 86 87 ValueError: could not convert string to float: 'left left center' ``` AI: You can't fit X_train into y_train without encoding. Try something like this for features: from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder(handle_unknown='ignore') enc.fit(X) and label encoding for labels.
H: how to get the classe names in an image classifer after predection? I made an image classifier of 80 classes of handwritten numbers then I tested my model and it worked pretty fine, the only problem that I have now is the display of the correct names of these classes. Dataset: 2 folders: [Train Folder===> 80 folders each has 110 images, Validation folder===> 80 folders each has 22 images] Bellow the code I used for training, saving and testing my model: from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D from keras.layers import Activation, Dropout, Flatten, Dense from keras import backend as K # dimensions of our images. img_width, img_height = 251, 54 #img_width, img_height = 150, 33 train_data_dir = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/test/numbers/data/train' validation_data_dir = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/test/numbers/data/valid' nb_train_samples = 8800 #10435 nb_validation_samples = 1763 #2051 epochs = 30 #20 # how much time you want to train your model on the data batch_size = 32 #16 if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height) else: input_shape = (img_width, img_height, 3) model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(80)) #1 model.add(Activation('softmax')) #sigmoid model.compile(loss='sparse_categorical_crossentropy',optimizer='rmsprop',metrics=['accuracy'])#categorical_crossentropy #binary_crossentropy # this is the augmentation configuration we will use for training train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.1, zoom_range=0.05, horizontal_flip=False) # this is the augmentation configuration we will use for testing: # only rescaling test_datagen = ImageDataGenerator(rescale=1. / 255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') model.fit_generator( train_generator, steps_per_epoch=nb_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=nb_validation_samples // batch_size) model.save('testX_2.h5') #first_try last epoche resulat Epoch 30/30 275/275 [==============================] - 38s 137ms/step - loss: 0.9406 - acc: 0.7562 - val_loss: 0.1268 - val_acc: 0.9688 how I tested my model: from keras.models import load_model from keras.preprocessing import image import matplotlib.pyplot as plt import numpy as np import os #result = [10,7] def load_image(img_path, show=False): img = image.load_img(img_path, target_size=(251, 54)) img_tensor = image.img_to_array(img) img_tensor = np.expand_dims(img_tensor, axis=0) img_tensor /= 255. 
    if show:
        plt.imshow(img_tensor[0])
        plt.axis('off')
        plt.show()

    return img_tensor

if __name__ == "__main__":
    # load model
    model = load_model('C:/Users/ADEM/Desktop/msi_youssef/PFE/other_shit/testX_2.h5')
    # image path
    img_path = 'C:/Users/ADEM/Desktop/msi_youssef/PFE/dataset/1.75/eeza.png'
    # load a single image
    new_image = load_image(img_path)
    # check prediction
    pred = model.predict_classes(new_image)
    print(pred)

It gives me this result instead of giving the name of the folder: [7]
AI: I get your problem: what the model does is correct, but you have to build a look-up table for its answer. Your ground truth looks something like [0,0,0,0,1], a one-hot vector for example. You, the human, know what this code stands for, e.g. "cats". In the same way, you have to build a numpy array listing the class names in the correct order and then index it like class_names[prediction], prediction being your CNN result -> [7]. To sum it up, the final Dense layer with its softmax activation gives you a probability density function P. During training this is compared with your ground-truth density function q to calculate the difference. The model only works with numbers, not words, so you have to write an interpretation of the model's answers, for example in the form of a look-up table. An example could be:

pred = model.predict_classes(new_image)
labels = np.array(["cats", "dogs", "cars", "humans"])
print(labels[pred[0]])
>>> cats
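One extra detail worth knowing: when you train with flow_from_directory, the generator already stores the folder-name-to-index mapping, so you don't have to hard-code the label array by hand. A small sketch (it assumes the train_generator from your training script is still available, or that you rebuild it the same way):

# {'folder_name': class_index, ...} as assigned during training
class_indices = train_generator.class_indices
# invert it so we can go from predicted index back to folder name
idx_to_class = {v: k for k, v in class_indices.items()}

pred = model.predict_classes(new_image)
print(idx_to_class[pred[0]])   # prints the folder name instead of 7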
H: How to change random number of random elements in a column by group in R? I have the following data frame: >df Name 1 Ed 2 Ed 3 Bill 4 Tom 5 Tom 6 Tom 7 Ed 8 Bill 9 Bill 10 Bill My goal is that from each group by "Name" change the "Name" values 25-75% of random rows to "Name"+"_X" (the remaining rows don't change). So the expected output is similar to: Name 1 Ed 2 Ed_X 3 Bill_X 4 Tom 5 Tom_X 6 Tom 7 Ed_X 8 Bill 9 Bill 10 Bill_X I have tried with for cycle like this (a the moment, for 50% random rows: for (n in unique(df$Name)){ df[sample(which(df$Name==n), nrow(df[df$Name==id,])/2), 1] <- paste(df$Name, "_X", sep="") } Unsuccessfully, however. AI: The idea is correct, even if you have a couple of mistakes in your code (like id is not defined, and inside paste you want to use just n, not df$Name). This is not super compressed code but it does the job: Name = c('Ed','Ed','Bill','Tom','Tom','Tom','Ed','Bill','Bill','Bill') df = data.frame(Name) for (n in unique(df$Name)){ # get indices indices = which(df$Name==n) # sample size samp_size = round(length(df[df$Name==n,])/2) # get indices to replace samp = sample(indices, samp_size) # need to set column as character df$Name = as.character(df$Name) # set new values df[samp,] = paste(n,'_X',sep='') # set again column as factor df$Name = as.factor(df$Name) } Out: Name 1 Ed 2 Ed_X 3 Bill 4 Tom_X 5 Tom_X 6 Tom 7 Ed_X 8 Bill 9 Bill_X 10 Bill_X
H: Three Staggered Values - Visualisation? I have 3 process parameters that independently cycle between zero and some peak value, as shown on this filled line time-series plot: Although not a direct control variable, the 'health' of the system can be gauged by the consistency of the relative 'stagger' (or difference) between these three parameters at any single point in time: A-B, B-C, C-A. I am struggling to find an effective way to represent these relative stagger values on the same timeline. A simple plot of e.g. A - B cycles positive to negative but isn't very intuitive. Ideally I'd like to represent all three relative differences on one discernible plot.
AI: You could compare the relative values over time quite easily with a "stacked" graph, where each of the processes is a layer in the stack. In Pandas' plotting, it is referred to as an area plot. Here is an example with some dummy data:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

p1, p2, p3 = np.random.random(100), np.random.random(100), np.random.random(100)
df = pd.DataFrame([p1, p2, p3]).T
df.head(2)
# output
#           0         1         2
# 0  0.542490  0.099974  0.831589
# 1  0.988922  0.035026  0.752813

df.plot(kind="area", stacked=True);
plt.show()

I think this might allow you to see the relative swings in each of the processes quite clearly - you could also shift them slightly to be in phase, which will make things easier.
H: Model to predict coronavirus (covid19) spread I'm new to data science and machine learning, but I have some mathematical and statistics background. I really just want some information about models (like papers or raw models). So if you have any information, please share it with me. Thank you.
AI: For a beginner, I would say the SIR model is a great place to start: https://idmod.org/docs/general/model-sir.html Numberphile did a great video on using SIR to predict Covid-19: https://www.youtube.com/watch?v=k6nLfCbAzgo Hopefully this can get you started on your journey!
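If you want to play with the SIR model numerically right away, its three differential equations can be integrated with SciPy in a few lines; a small sketch (the population size, infection rate beta and recovery rate gamma below are arbitrary illustration values, not fitted Covid-19 parameters):

import numpy as np
from scipy.integrate import odeint

N = 1_000_000                 # total population
beta, gamma = 0.3, 0.1        # infection and recovery rates (made up for illustration)
S0, I0, R0 = N - 10, 10, 0    # initial susceptible, infected, recovered

def sir(y, t, N, beta, gamma):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

t = np.linspace(0, 180, 181)  # days
S, I, R = odeint(sir, (S0, I0, R0), t, args=(N, beta, gamma)).T
print("peak infections:", int(I.max()), "on day", int(t[I.argmax()]))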
H: Why apply a 50:50 train test split? I am going through the "Text classification with TensorFlow Hub" tutorial. In this tutorial, a total of 50,000 IMDb reviews are split into 25,000 reviews for training and 25,000 reviews for testing. I am surprised by this way of splitting the data, since I learned in Andrew Ng's course that for fairly small datasets (<10,000 examples) the "old-fashioned" rule of thumb was to consider 60% or 70% of the data as training examples and the remainder as dev/test examples. Is there a reason behind this 50:50 split? Is it common practice when working with text? Does it have anything to do with using a "pre-trained" TensorFlow Hub layer? AI: A safer method is to use the integer part of the fraction (after truncating) $n_c \approx n^{3 \over 4}$ examples for training, and $n_v \equiv n - n_c$ for validation (a.k.a. testing). If you are doing cross-validation, you could perform that whole train-test split at least $n$ times (preferably $2n$ if you can afford it), recording average validation loss at the end of each cross-validation "fold" (replicate), which is what tensorflow records anyway; see this answer for how to capture it). When using Monte Carlo cross-validation (MCCV) then for each of the $n$ (or $2n$ if resource constraints permit) replicates, one could randomly select (without replacement to make things simpler) $n_c$ examples to use for training and use the remaining $n_v$ examples for validation, without even stratifying the subsets (based on class, for example, if you are doing classification). This is based on a 1993 paper (look at my answer here for more information) by J. Shao in which he proves that $n_c \approx n^{3 \over 4}$ is optimal for linear model selection. At that time, non-linear models such as machine learning (see this answer for yet another discussion on that) were not as popular, but as far as I know (would love to be proven wrong) nobody has taken the time to prove anything similar for what is in popular use today, so this is the best answer I can give you right now. UPDATE: Knowing that GPUs work most efficiently when they are fed a batch sized to be a power of two, I have calculated different ways to split data up into training and validation which would follow Jun Shao's strategy of making the training set size $n_c \approx n^{\frac{3}{4}}$ and where both $n_c$ and $n_v \equiv n - n_c$ are close to powers of two. An interesting note is that for $n = 640$, $n_c \approx 127$ and therefore $n_v \approx 513$; because $127 \approx 2^7$ and $513 \approx 2^9$ I plan to go ahead and use those as my training and validation test sizes whenever I am generating simulated data.
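For a concrete feel of how different this rule is from a 50:50 (or 70:30) split, you can just compute the sizes; a tiny sketch using the 50,000 IMDb reviews from the tutorial:

n = 50_000
n_c = int(n ** 0.75)   # training examples under the n^(3/4) rule
n_v = n - n_c          # validation/test examples
print(n_c, n_v)        # 3343 46657 -- far fewer training examples than the 25k/25k split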
H: Cannot create new percent columns from CSV file I am fairly new to coding and cannot figure out how to create the AB_perc and CD_perc columns in the Desired Output table below. The original csv file I am using has all sorts of DNA sequencing data, so below is a simplified example of what I am trying to do. CSV file: Desired Output: I was able to generate the AB_sum and CD_sum columns by using:

df['AB_sum'] = df['A'] + df['B']
df['CD_sum'] = df['C'] + df['D']
df.groupby('RunNumber', as_index=False).agg({'AB_sum': 'sum', 'Confusion C<->G': 'sum'})

But I cannot figure out how to create the AB_perc and CD_perc columns. For example: to calculate the percentage of AB for Run 1 (AB_perc: 79.94), the AB_sum value, 570, is divided by 713, the sum of all orange numbers in the csv file, and then multiplied by 100. I tried several things all day and nothing worked for me. I'm sure there's a simple solution, but I feel like if I try any harder my nose will start to bleed. Haha. Any help would be much appreciated!
AI: Try using groupby with a custom function and apply:

import pandas as pd

df = pd.DataFrame()
df['RunNumber'] = ['Run1', 'Run1', 'Run2', 'Run2']
df['A'] = [500, 60, 20, 30]
df['B'] = [5, 5, 2, 2]
df['C'] = [5, 10, 2, 6]
df['D'] = [3, 4, 5, 4]
df['E'] = [65, 56, 56, 44]

def calculate_sum_percent(input_df):
    # keep only the numeric columns so the RunNumber strings don't break the grand total
    numeric_df = input_df.select_dtypes(include='number')
    total = numeric_df.sum().sum()
    output_df = pd.Series(dtype=float)
    output_df['AB_sum'] = input_df['A'].sum() + input_df['B'].sum()
    output_df['CD_sum'] = input_df['C'].sum() + input_df['D'].sum()
    output_df['AB_perc'] = 100 * output_df['AB_sum'] / total
    output_df['CD_perc'] = 100 * output_df['CD_sum'] / total
    return output_df

stats_df = df.groupby('RunNumber').apply(calculate_sum_percent)
display(stats_df)
H: How to order a python dataframe by adding the row values? I have the following dataframe: M1 M2 M4 M5 N1 45 46 54 57 N2 32 36 29 56 N3 56 44 40 55 N4 57 43 42 54 How is it possible to order it according to the sum of its lines, from the largest sum to the lowest? That is, N1 in the first line, because the sum of the line values ​​is the largest, then N4, N3 and finally N2 (because the sum of the values ​​is the smallest). Thus: M1 M2 M4 M5 N1 45 46 54 57 N4 57 43 42 54 N3 56 44 40 55 N2 32 36 29 56 AI: You could simply create a sum column and then use sort_values. Afterwards, you can drop that column: In [3]: df Out[3]: M1 M2 M4 M5 N1 45 46 54 57 N2 32 36 29 56 N3 56 44 40 55 N4 57 43 42 54 In [4]: df.sum(axis=1) Out[4]: N1 202 N2 153 N3 195 N4 196 dtype: int64 In [5]: df['sum'] = df.sum(axis=1) In [6]: df Out[6]: M1 M2 M4 M5 sum N1 45 46 54 57 202 N2 32 36 29 56 153 N3 56 44 40 55 195 N4 57 43 42 54 196 In [7]: df = df.sort_values("sum", ascending=False) In [8]: df Out[8]: M1 M2 M4 M5 sum N1 45 46 54 57 202 N4 57 43 42 54 196 N3 56 44 40 55 195 N2 32 36 29 56 153 In [9]: df = df.drop("sum", axis=1) In [10]: df Out[10]: M1 M2 M4 M5 N1 45 46 54 57 N4 57 43 42 54 N3 56 44 40 55 N2 32 36 29 56
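If you'd rather not add and drop a helper column, the same result can be obtained in one line by sorting the index by the row sums (equivalent to the approach above):

df = df.loc[df.sum(axis=1).sort_values(ascending=False).index]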
H: Why doesn't the GPU utilise system memory? I have noticed that when training huge deep learning models on consumer GPUs (like a GTX 1050 Ti), the network often doesn't work. The reason is that the GPU just doesn't have enough memory to train the said network. This problem has solutions like using the CPU's memory for storing things that are not actively being used by the GPU. So my question is: is there any way to train models on CUDA with memory drawn from system RAM? I understand that the tradeoff would involve speed and accuracy, but it JUST might work to train a BERT model on my 1050 Ti.
AI: When you are training one batch, the GPU memory is full, i.e. it is generally needed for the calculation. The time to process one batch is something like 0.1 second or less. What is your CPU-GPU bandwidth, 2 GB/s? Then without losing speed you could send at most about 0.1 GB back and forth... Or, in "I want it anyway" terms, you can extend your memory by roughly 1 GB for the price of a 10x slowdown. There are also techniques that throw away some intermediate layers' activations and recalculate them during back-propagation, often called gradient checkpointing. It's a much better idea to sacrifice, say, half of the (enormous) GPU calculation power to save some memory, and people do actually use this in practice; PyTorch, for example, ships it as torch.utils.checkpoint.
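A minimal sketch of that trade-off in PyTorch (the model is a toy placeholder; only the idea of recomputing activations matters here):

import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024), torch.nn.ReLU(),
)

x = torch.randn(32, 1024, requires_grad=True)

# activations inside `block` are not kept for backward; they are recomputed instead,
# trading extra compute for lower peak GPU memory
y = checkpoint(block, x)
y.sum().backward()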
H: Meaning of axes in Linear Discrimination Analysis Looking at the LDA of the Iris Dataset, which looks like this: It's understandable that the 3 types of flowers have succesfuly been seperated into categories, with versicolor and virginica slightly overlapping. My question is, how does this help us in a practical way? What can we do with the information, now that we've seperated the 3 categories like this. It's great to know that Setosa is apparent for values between 5 and 10, but what do those values mean? AI: You could be asking yourself the same questions using PCA! Usually these linear transformations (or even non-linear for that matter, PCA, LDA, ICA, UMAP etc.) are used to reduce the high dimensional features space to serve one or multiple purposes. For example in PCA one could use it to compress the high-dimensional feature space to lower (usually 2 or 3) by finding the directions of maximal variance, either for better separation or a simple compression or better explainability via projected visualization for human eyes. Sometimes they are served for better class separation like the case of LDA, in a supervised fashion (in contrast to PCA). In case you missed it check out Sebastian Raschka's post on the difference between LDA and PCA for dimensionality reduction (be careful with LDA assumptions). In order to answer the question about the axes, we better touch briefly on what LDA is aiming to do, borrowing the explanation from this blog post: LDA aims to find a new feature space to project the data in order to maximize classes separability. The first step is to find a way to measure this capacity of separation of each new feature space candidate. In 1988, a statistician called Ronald Fisher proposed the following solution: Maximize the function that represents the difference between the means, normalized by a measure of the within-class variability. The Fisher’s propose is basically to maximize the distance between the mean of each class and minimize the spreading within the class itself. Thus, we come up with two measures: the within-class and the between-class. [The Answer]: As in the case of PCA: the first axis (by convention PCA 1) represents the axis holding the maximum variance in the data, PCA 2 the second in the rank and so forth. The same analogy holds true to LDA. Here though, LDA 1 (your x-axis in the graph, again by convention and the visual judgement) accounts for the maximum variation among the classes, and LDA 2 the second and so forth (still confused, you might one to check this video). You can define a simple linear model to separate them, that is it, that perhaps you couldn't using the original features. [Bonus]: Although I am finished with the answer, and the following it is not super relevant, but I would like to bring a super nice example what these techniques might be able to answer from data. In the case of, again PCA, Tableau team is showing such capabilities, see this video. In real world problems, not a simple IRIS, these will come handy, and surely these possible even outside Tableau, but requires more efforts.
H: low error, high CV(RMSE)? I am comparing 2 neural network models. I have used the models to make predictions on unseen data. One model returns an error of 20.9% for y1, 36.6% for y2, 4.53% for y3 on unseen data, and a CV(RMSE) of 19.3. The other returns an error of 15.5% for y1, 33.8% for y2 and 4.83% for y3 on unseen data, and a CV(RMSE) of 31.5. I'm struggling to interpret the results. A lower CV(RMSE) is better, yet why do I get a much higher error on unseen data?
AI: When you calculate the RMSE on multi-output models you will normally end up with 3 RMSEs. You can average those errors into one by specifying weights; the weights basically reflect how bad an error on y1, y2 and y3 is for your application. Even if the outputs are averaged equally, keep in mind that RMSE is "scale sensitive": if the y1 values are much higher than the y2 and y3 values, y1 will also produce a bigger RMSE. If you want to get rid of this problem I recommend you use RMSE%, which is a relative (scale-invariant) error metric, as mentioned here: https://stats.stackexchange.com/questions/190868/rmse-is-scale-dependent-is-rmse
H: Standardizing features by one specific feature I am working on a project with a dataset that looks something like the following:

   velocity  accel_amp    f_vert   tau_vert  f_pitch_filt         tau
0  3.778595  -5.796777  2.400000  32.753227      1.600000   27.844535
1  1.970611  -6.087134  2.272727  32.638705      1.704545   30.639998
2  3.581163  -6.241817  2.400000  32.850969      1.600000   30.449256
3  4.735210  -6.109532  1.400000  28.809865      1.000000  127.749313
4  5.340568  -6.614317  1.400000  20.249699      1.000000  124.549628

I was advised to standardize the last 5 features by velocity in order to improve my PCA. Does this simply mean to take each element in these last 5 columns, subtract the mean of the velocity column, and then divide by the standard deviation of the velocity column? That is how I interpreted the suggestion. Is there functionality in Python for doing this? Any suggestions or clarification would be appreciated. Thanks.
AI: If this is a pandas DataFrame:

vel_mean = df.velocity.mean()
vel_std = df.velocity.std()
cols = df.columns[1:]                     # the last 5 features, excluding velocity itself
df[cols] = (df[cols] - vel_mean) / vel_std

In case it is a numpy array (with velocity in the first column):

data[:, 1:] = (data[:, 1:] - np.mean(data[:, 0])) / np.std(data[:, 0])
H: How to use r2-score as a loss function in LightGBM? I am trying to implement a custom loss function in LightGBM for a regression problem. The intrinsic metrics do not help me much, because they penalise for outliers... Is there any way to use r2_score from sklearn as a loss function for LightGBM? AI: $R^2$ is just a rescaling of mean squared error, the default loss function for LightGBM; so just run as usual. (You could use another builtin loss (MAE or Huber loss?) instead in order to penalize outliers less.)
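In code that boils down to which objective you pass; a short sketch (parameter values are defaults, not tuned):

import lightgbm as lgb

model_l2 = lgb.LGBMRegressor()                            # default L2 loss; maximizing R^2 is equivalent
model_l1 = lgb.LGBMRegressor(objective="regression_l1")   # MAE: less sensitive to outliers
model_huber = lgb.LGBMRegressor(objective="huber")        # Huber: a compromise between L1 and L2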
H: Is there a reference data set for ridge regression? In order to test an algorithm, I am looking for a reference data set for ridge regression in research papers. Kind of like the equivalent of MNIST but for regression. AI: Try using the Boston house data set. import pandas as pd from sklearn.datasets import load_boston boston = load_boston() boston_df = pd.DataFrame(boston.data, columns = boston.feature_names) boston_df.insert(0, 'Price', boston.target) Then you can use Ridge Regression to predict the housing prices from the other features in the data set. Source: https://towardsdatascience.com/ridge-and-lasso-regression-a-complete-guide-with-python-scikit-learn-e20e34bcbf0b
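From there, fitting the ridge model is just a couple more lines; a minimal sketch continuing from the dataframe above:

from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X = boston_df.drop(columns='Price')
y = boston_df['Price']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ridge = Ridge(alpha=1.0).fit(X_train, y_train)
print(ridge.score(X_test, y_test))   # R^2 on the held-out data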
H: Numeric variables in Decision trees If we have a numeric variable, decision trees will use < and > comparisons as splitting criteria. Let's consider this case: our target variable is 1 for even numeric values and 0 for odd numeric values. How do we deal with this type of variable? How do we even identify these types of variables if we have a large number of variables? Is there any specific name for this type of variable? AI: I would call this bad feature engineering, I'm afraid: as the designer of an ML system, one is supposed to analyze their data and find the best way to make the ML system perform as well as possible. In this case, by adding a simple feature x % 2 for every instance (as shown below), the decision tree can perform perfectly. [added] Even in the case of a more complex pattern, if there are such "clusters" of numerical values then there must be a logical explanation why this happens, i.e. something which depends on the task that an expert in this problem can analyze and understand. In most real cases this implies that there are some hidden/intermediate variables, and designing the system so that it represents these variables is key. In other words, the numeric variable is not directly semantically relevant for predicting the response variable, because the assumption when using numeric values is that their order matters (here the numeric value behaves more like a categorical variable).
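A minimal sketch of the x % 2 idea with synthetic data (pandas and scikit-learn):

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({'x': range(1000)})
df['target'] = (df['x'] % 2 == 0).astype(int)   # 1 for even values, 0 for odd values
df['x_mod_2'] = df['x'] % 2                     # the engineered feature

tree = DecisionTreeClassifier().fit(df[['x_mod_2']], df['target'])
print(tree.score(df[['x_mod_2']], df['target']))   # 1.0 -- a single split on x_mod_2 is enough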
H: What annotators are used in Cohen Kappa for classification problems? I am working on a classification problem using algorithms such as Logistic Regression, Support Vector Machines, Decision Trees, Random Forests and Naive Bayes. My data consists of binary class classification i.e. Fire(1) or No Fire(0). Due to the imbalance in data Cohen Kappa was recommended for evaluation of model performance. I am using the scikit-learn sklearn.metrics.cohen_kappa_score to compute the Cohen kappa score. To compute the value it takes the following inputs from sklearn.metrics import cohen_kappa_score cohen_score = cohen_kappa_score(y_test, predictions) print(cohen_score) So it takes the y_test and predictions made by the specific model, the same inputs used for the confusion matrix and classification report. However, Cohen Kappa is supposed to measure the inter-rater agreement between observers or annotators. If we were measuring the quality of a document based on scores by 2 reviewers, those 2 reviewers would be the annotators. But in the case of a classification problem, when I compute the score in scikit-learn as mentioned above, what 2 annotators or observers does Cohen Kappa use? AI: What 2 annotators or observers does Cohen Kappa use? The two annotators are exactly the two parameters you fed the method with: cohen_kappa_score(y_test, predictions), i.e. the truth/labels and your predictions. And these are then measured in terms of their "inter-rater" agreement.
H: Prepare data for an LSTM -I want to make a python program using the LSTM model to predict an output value that is 1 or 0. -My data is stored in a .csv file of the form: (Example of the line) Date time temperature wind value-output 10-02-2020 10:00 25 10 1 -I found several courses, several examples of LSTM but I don't find my classification problem to do the same thing, there are many examples on translation. -I am stuck on how to prepare my program my data to give them to the LSTM model. -I want to take into consideration my temperature and wind inputs in addition to the time to predict the output value. (I have already made a python program based on a simple ANN to predict my output value by following a tutorial), but for the LSTM I find it difficult. Thanks in advance for your help. AI: You have to prepare your data as a numpy array with the following shape: ( Number of observations , Input length , Number of variables ) Assuming you are working with Keras, the input of the LSTM() layer is as above, but you don't need to report the number of observations: input_shape = (Input length , Number of variables). Input length is an hyperparameter of your choice. I pushed this Notebook on GitHub that contains a function to preprocess your dataset for RNNs: def univariate_processing(variable, window): ''' RNN preprocessing for single variables. Can be iterated for multidimensional datasets. ''' import numpy as np # create empty 2D matrix from variable V = np.empty((len(variable)-window+1, window)) # take each row/time window for i in range(V.shape[0]): V[i,:] = variable[i : i+window] V = V.astype(np.float32) # set common data type return V def RNN_regprep(df, y, len_input, len_pred): #, test_size): ''' RNN preprocessing for multivariate regression. Builds multidimensional dataset by iterating univariate preprocessing steps. Requires univariate_processing() function. Args: df, y: X and y data in numpy.array() format len_input, len_pred: length of input and prediction sequences Returns: X, Y matrices ''' import numpy as np # create 3D matrix for multivariate input X = np.empty((df.shape[0]-len_input+1, len_input, df.shape[1])) # Iterate univariate preprocessing on all variables - store them in XM for i in range(df.shape[1]): X[ : , : , i ] = univariate_processing(df[:,i], len_input) # create 2D matrix of y sequences y = y.reshape((-1,)) # reshape to 1D if needed Y = univariate_processing(y, len_pred) ## Trim dataframes as explained X = X[ :-(len_pred + 1) , : , : ] Y = Y[len_input:-1 , :] # Set common datatype X = X.astype(np.float32) Y = Y.astype(np.float32) return X, Y Let me know if that's what you were looking for.
H: Smoothing or averaging plots to better represent trends and their variations My picture looks like this. It shows some percent variation but, due to fluctuations, the curves are not very smooth at all. I would like to draw a smoother image, without altering the data too much (it is okay if the graph is not a perfect representation of the data). I am using matplotlib to draw the data, and my code looks like this: x = df["Value"].shift(period)+1 y = 100*df["Value"].diff(period)/(period*x) Basically, it draws the variation from the total over a period. What is the suggested way to alter the points (possibly drawing them alongside the original data), so that the graphs do not look so jagged? I tried just drawing every third point, doing: w = x[2::3] z = y[2::3] but I am sure there are better ways to do this. Again, this is for a nice representation only and I am mostly interested in showing a trend or any variation of it, not in showing the exact data points. AI: You could take averages piecewise. Ideally, this is like sliding a window of size k over the trend and taking the average of the points inside it (k could be 3 or 5, for example). Alternatively, you can use smoothing filters such as the Savitzky-Golay (savgol) filter available in scipy, LOESS smoothing, or exponential methods such as Holt-Winters smoothing. The statsmodels.tsa submodule makes many smoothing techniques available. A couple of minimal examples are sketched below.
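Assuming y is the pandas Series from your code (the window sizes below are arbitrary choices you should tune by eye):

import matplotlib.pyplot as plt
from scipy.signal import savgol_filter

y_rolling = y.rolling(window=5, center=True).mean()                   # simple sliding-window average
y_savgol = savgol_filter(y.dropna(), window_length=11, polyorder=3)   # Savitzky-Golay filter (window must be odd)

plt.plot(y, alpha=0.4, label='raw')
plt.plot(y_rolling, label='rolling mean (window=5)')
# plot y_savgol against y.dropna().index if you prefer the filtered version
plt.legend()
plt.show()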
H: How is the Beeswarm plot better than a histogram? I have read about Beeswarm and Histogram plots, but I did not understand the difference. In a Histogram, if the number of bins is increased then its interpretation changes. So how can a beeswarm plot help to understand the data? AI: They are quite similar, but: Histograms result from aggregation into bins (loss of information due to aggregation: a high-resolution point (2.324) is reduced to the bin resolution [2,3]). A beeswarm plot displays the exact value of the points on the axis; it is a 1D scatter plot. However, if two points are very close to each other they will overlap if the point size is big enough. To avoid this, the beeswarm plot will slightly shift one point to the side so it does not overlap. They will often look similar because the more points you have locally, the higher the probability of overlapping, so high-density areas will often result in "peaks" as in histograms. The beeswarm plot is a more faithful representation: Gaps in the data: depending on the binning (both placement and size), the histogram may not represent them (as at point 1 in the figure). Min and max: imagine values are capped at 5 but the histogram's last bin continues to 6; it might give the impression that the max value of the variable is 6. In both cases, you can see that the beeswarm is a more faithful representation. Because of this, it is not highly scalable (it has to represent every single point with no overlap). From the seaborn.swarmplot function docs: This function is similar to stripplot(), but the points are adjusted (only along the categorical axis) so that they don’t overlap. This gives a better representation of the distribution of values, but it does not scale well to large numbers of observations. This style of plot is sometimes called a “beeswarm”.
H: Finding TN,FN, TP, and FN for arrays using confusion matrix My prediction results look like this TestArray [1,0,0,0,1,0,1,...,1,0,1,1], [1,0,1,0,0,1,0,...,0,1,1,1], [0,1,1,1,1,1,0,...,0,1,1,1], . . . [1,1,0,1,1,0,1,...,0,1,1,1], PredictionArray [1,0,0,0,0,1,1,...,1,0,1,1], [1,0,1,1,1,1,0,...,1,0,0,1], [0,1,0,1,0,0,0,...,1,1,1,1], . . . [1,1,0,1,1,0,1,...,0,1,1,1], this is the size of the arrays that I have TestArray.shape Out[159]: (200, 24) PredictionArray.shape Out[159]: (200, 24) I want to get TP, TN, FP and FN for these arrays I tried this code cm=confusion_matrix(TestArray.argmax(axis=1), PredictionArray.argmax(axis=1)) TN = cm[0][0] FN = cm[1][0] TP = cm[1][1] FP = cm[0][1] print(TN,FN,TP,FP) but the results I got TN = cm[0][0] FN = cm[1][0] TP = cm[1][1] FP = cm[0][1] print(TN,FN,TP,FP) 125 5 0 1 I checked the shape of cm cm.shape Out[168]: (17, 17) 125 + 5 + 0 + 1 = 131 and that does not equal the number of columns I have which is 200 I am expecting to have 200 as each cell in the array suppose to be TF, TN, FP, TP so the total should be 200 How to fix that? Here is an example of the problem import numpy as np from sklearn.metrics import confusion_matrix TestArray = np.array( [ [1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,0,0,1], [0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,1,1,0,1,1], [1,0,1,1,1,1,0,0,1,1,1,1,0,0,1,0,0,0,0,0], [0,1,1,1,0,0,0,0,0,1,0,0,1,0,0,1,0,1,1,1], [0,0,0,0,1,1,0,1,1,0,0,1,0,1,1,0,1,1,1,1], [1,0,0,1,1,1,0,1,1,0,1,0,0,1,1,0,0,1,0,0], [1,1,1,0,0,1,0,0,1,1,0,1,0,1,1,1,1,1,0,1], [0,0,0,1,0,0,1,0,1,0,1,0,0,0,0,1,0,0,1,1], [1,0,1,0,0,0,0,1,0,1,0,1,0,0,0,0,1,0,1,0], [1,1,0,1,1,1,1,0,1,0,1,0,1,1,1,1,0,1,0,0] ]) TestArray.shape PredictionArray = np.array( [ [0,0,0,1,1,1,1,0,0,0,1,0,0,0,1,0,1,0,1,1], [0,1,0,0,1,0,1,1,0,0,0,1,1,0,0,1,1,0,0,1], [1,1,0,1,1,1,0,0,0,0,0,1,0,0,1,0,0,1,0,0], [0,1,0,1,0,0,1,0,0,1,0,1,1,0,0,1,0,0,1,1], [0,0,1,0,0,1,0,1,1,1,0,1,1,1,0,0,1,1,0,1], [1,0,0,1,0,1,1,1,1,0,0,1,0,1,1,1,0,1,1,0], [1,1,0,0,1,1,0,0,0,1,0,1,0,0,1,1,0,1,0,1], [0,0,0,0,0,0,0,1,1,0,1,0,0,1,0,1,1,0,1,1], [1,0,1,1,0,0,0,1,0,1,0,1,1,1,1,0,0,0,1,0], [1,1,0,1,1,1,1,1,1,0,1,0,0,0,0,1,1,1,0,0] ]) PredictionArray.shape cm=confusion_matrix(TestArray.argmax(axis=1), PredictionArray.argmax(axis=1)) TN = cm[0][0] FN = cm[1][0] TP = cm[1][1] FP = cm[0][1] print(TN,FN,TP,FP) The output is 5 0 2 0 = 5+0+2+0 = 7 !! There are 20 columns in the array and 10 rows but cm gives to total of 7!! How can I get the actual TP, TN, FP, and FN? AI: 1) Your code : Expected inputs for confusion matrix sklearn.metrics.confusion_matrix expect $y_{pred}$ and $y_{true}$ to be $(n\_samples,)$ shape. : y_true array-like of shape (n_samples,) Ground truth (correct) target values. y_pred array-like of shape (n_samples,) Estimated targets as returned by a classifier. Results of argmax on your matrix : The argmax function on your test and prediction array is : for each row, where is the first (max element) of the row. So let's take the first four rows of your test set : TestArray = np.array( [ [1,0,0,1,0,1,1,0,1,0,1,1,0,0,1,1,1,0,0,1], [0,1,1,0,1,0,0,1,0,0,0,1,0,1,0,1,1,0,1,1], [1,0,1,1,1,1,0,0,1,1,1,1,0,0,1,0,0,0,0,0], [0,1,1,1,0,0,0,0,0,1,0,0,1,0,0,1,0,1,1,1],...] Argmax returns : [0, 1, 0, 1, ...] Wich is the locations of the first 1 in each row. I imagine this is not what you expected. 
2) What I propose: If you want a confusion matrix whose entries sum to $n_{rows} \times n_{columns}$ (here $10 \times 20 = 200$), you have to ravel the inputs: cm=confusion_matrix(TestArray.ravel(), PredictionArray.ravel()) TN = cm[0][0] FN = cm[1][0] TP = cm[1][1] FP = cm[0][1] print(TN,FN,TP,FP) which returns (summing to 200): 74 28 73 25
H: Colab can not connect to GPU from a python file I am trying to run a github deep learning repository in Colab but I can not connect the python files to the colab GPU. I can connect to the GPU when writing a script in the colab notebook, e.g. when I run this code from a notebook cell : import os, torch print('Torch', torch.__version__, 'CUDA', torch.version.cuda) print('Device:', torch.device('cuda:0')) print(torch.cuda.is_available()) I get: Torch 1.4.0 CUDA 10.1 Device: cuda:0 True but when I run it from a file called myExample.py e.g. using !python myExample.py I get: Torch 1.4.0 CUDA 10.1 Device: cuda:0 False Is there any solution to this? AI: Why don't you import the file into the notebook and run your function? E.g. if your example.py has something like this if __name__ == '__main__': myfunction() Then in your colab, import os os.chdir("path/to/cloned/dir/") import example example.myfunction()
H: Heterogeneous clustering with text data I have a dataset which consists of multiple user ratings. Each rating looks similarly to: | Taste | Flavour | Look | Enjoyed | ..... | Tag | |-------|---------|------|---------|-------|--------| | 4 | 2 | 2 | 3 | ..... | Banana | | 5 | 4 | 1 | 2 | ..... | Apple | | 3 | 1 | 4 | 1 | ..... | Pasta | | .... | .... | .... | .... | .... | .... | The columns contain ranks for each row. The task is to clusterize rows, e.g. I would like to find something similar to: cluster 1: Banana, Apple cluster 2: Pasta, Spagetty .... We use HDBSCAN with edit distance metric to find clusters, and it works more or less. The problem, however, is that there are too few features (12 in total) to have "good" clusters. Therefore I would like to somehow account for the information from "Tag" in clustering. The idea is to calculate embeddings for each tag and use them as features. What I'm not certain about is how to include these new features? I would like the clustering to be primarily determined by the original features. The dimension of embeddings is much larger than the dimension of the original features, and the metric on these features is different (e.g. cosine similarity). Therefore, I would like to answer 2 questions: What will be a proper method to combine these heterogeneous features? How to properly select the weight for the "Tag" feature? Ideally, I would not like to just postulate it AI: TL;DR Pass the data twice in HDBSCAN. Cluster based on the tag using word embeddings and cosine distance. Sub-cluster the cluster from step 1 with your existing method (using the remaining 12 features). Explanation I suggest you do it in two steps and not give a weight to the tag feature, because the distance metric used is different. For word-embeddings, you will need to use the cosine distance between embeddings. Where as for the other 12 features you are currently using another distance (Euclidean I assume). The first step should cluster your data on the semantical characteristic of your tag. This should cluster things like fruits, meats, vegetables, pastas... Then, the second step can sub-cluster the data with your other 12 features. However, given your example cluster 1: Banana, Apple cluster 2: Pasta, Spaghetti I don't see why this second step is necessary. You could instead of clustering a second time, just use the 12 features for ordering the data points for the purpose of your exercise. E.g. getting the top "fruits", as clustered at step 1, that people "enjoy" the most.
H: Should you perform feature scaling/mean normalization when predicting with a model? Should you also perform feature scaling/mean normalization when predicting with a model that was trained and tested with feature scaling/mean normalization? AI: Yes, the data you predict on should follow the same preprocessing as the training data. Otherwise, that data will have nothing comparable with what the algorithm learned, leading to (very) bad performance. Note: in scikit-learn, the Pipeline class helps you respect the fundamentals of ML modeling, such as avoiding data leakage and applying the same transformations to the train and test sets.
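A minimal sketch of that Pipeline idea (scikit-learn; X_train, X_test and y_train are placeholders for your own split, and the estimator is just an example):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)       # the scaler's mean/std are learned on the training data only
y_pred = pipe.predict(X_test)    # the same transformation is applied automatically before predicting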
H: Combining convolution operations Reading an article about 1x1 convolution, I found this: It should be noted that a two step convolution operation can always be combined into one, but in this case [GoogLeNet] and in most other deep learning networks, convolutions are followed by non-linear activation and hence convolutions are no longer linear operators and cannot be combined. What do they mean saying "two step convolution operation can always be combined into one"? AI: I think the comment is true for any kind of network where the neuron has a linear transformation function and there is no activation. Convolution is just a special case of linear transformation. Basically, if your first layer outputs linear combinations of your features, and the second layer outputs linear combinations of the first layer's outputs, then the second layer's outputs are linear combinations of the initial features. In the case of convolution, the interesting thing is that the convolution product is associative: if you apply two kernels consecutively, $K_2 * (K_1 * X)$ is equivalent to $(K_2 * K_1) * X$, so you can compute the combined kernel $K_2 * K_1$ once and apply it as a single convolution.
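A quick numerical check of the associativity claim with 1-D arrays and numpy's 'full' convolution; note this only works because there is no non-linearity between the two kernels:

import numpy as np

x = np.random.randn(100)   # a 1-D "signal"
k1 = np.random.randn(3)    # first kernel
k2 = np.random.randn(5)    # second kernel

two_steps = np.convolve(k2, np.convolve(k1, x))   # K2 * (K1 * x)
one_step = np.convolve(np.convolve(k2, k1), x)    # (K2 * K1) * x
print(np.allclose(two_steps, one_step))           # True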
H: Strange out of memory while loading lots of pictures before input CNN for deep learning I'm using TFlearn and want to classify pictures into two categories, but I get a strange out-of-memory problem while loading lots of pictures before feeding them to the CNN. The RAM is 64 G in my deep learning box. I am using 482426 pictures (resized to 224 x 224) for training (241213 are category A, the other 241213 are category B). The process shows '%MEM: 98.4' whenever I load these images, so that I can't complete the process. I would like to know how I can edit my code for this situation? imgs = [] for filename in glob.glob(Learning_Data_Path+"/Training/A/"+"*.tif"): img = load_image(filename) img=img.resize((224,224)) img_arr = np.asarray(img) imgs.append(img_arr) for filename in glob.glob(Learning_Data_Path+"/Training/B/"+"*.tif"): img = load_image(filename) img=img.resize((224,224)) img_arr = np.asarray(img) imgs.append(img_arr) imgs = np.array(imgs) y_data = np.r_[np.c_[np.ones(Training_A_num), np.zeros(Training_A_num)],np.c_[np.zeros(Training_B_num), np.ones(Training_B_num)]] AI: You have about 480K images of 224 x 224 x 3. If you try to put all of these into memory at once you will get a resource exhausted error. The solution for handling such large data sets is to feed the training data to your network in "batches". Keras provides utilities to make this process easy. Use the Keras ImageDataGenerator.flow_from_directory to fetch data of a specified batch size from a directory containing your training data and provide it to your network. Documentation is here. Use the Keras model.fit_generator to train your model. Documentation for that is here. A rough sketch is given below.
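Assuming your Training folder has one sub-folder per class (A and B), your .tif files are readable by PIL, and model is a compiled Keras model; the batch size and number of epochs are arbitrary here:

from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    Learning_Data_Path + "/Training",   # contains the A/ and B/ sub-folders
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical')

model.fit_generator(train_gen,
                    steps_per_epoch=train_gen.samples // 32,
                    epochs=10)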
H: Applying a keras model working with greyscale images to RGB images I followed this basic classification TensorFlow tutorial using the Fashion MNIST dataset. The training set contains 60,000 28x28 pixels greyscale images, split into 10 classes (trouser, pullover, shoe, etc...). The tutorial uses a simple model: model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10) ]) This model reaches 91% accuracy after 10 epochs. I am now practicing with another dataset called CIFAR-10, which consists of 50,000 32*32 pixels RGB images, also split into 10 classes (frog, horse, boat, etc...). Considering that both the Fashion MNIST and CIFAR-10 datasets are pretty similar in terms of number of images and image size and that they have the same number of classes, I naively tried training a similar model, simply adjusting the input shape: model = keras.Sequential([ keras.layers.Flatten(input_shape=(32, 32, 3)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10) ]) Alas, after 10 epochs, the model reaches an accuracy of 45%. What am I doing wrong? I am aware that I have thrice as many samples in an RGB image than in a grayscale image, so I tried increasing the number of epochs as well as the size of the intermediate dense layer, but to no avail. Below is my full code: import tensorflow as tf import IPython.display as display from PIL import Image from tensorflow import keras import numpy as np import matplotlib.pyplot as plt import pdb import pathlib import os from tensorflow.keras import layers #Needed to make the model from tensorflow.keras import datasets, layers, models (train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data() IMG_HEIGHT = 32 IMG_WIDTH = 32 class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] train_images = train_images / 255.0 test_images = test_images / 255.0 def make_model(): model = keras.Sequential([ keras.layers.Flatten(input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)), keras.layers.Dense(512, activation='relu'), keras.layers.Dense(10) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) return model model=make_model() history = model.fit(train_images, train_labels, epochs=10) AI: Your model is not sufficiently complex to adequately classify the CIFAR 10 data set. CIFAR-10 is considerably more complex than the Fashion-MNIST data set and therefore you need a more complex model.You can add more hidden layers to your model to achieve this. You should also add DROPOUT layers to prevent over fitting. Perhaps the easiest solution is to use transfer learning. I would recommend using the MobileNet CNN model if you want to try transfer learning. Documentation for that can be found here. Since CIFAR-10 has 50,000 sample images I do not think you will need data augmentation. First try a more complex model without augmentation and see what accuracy you achieve. If it is not adequate then use the keras ImageData Generator to provide data augmentation. Documentation for that is here.
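For reference, a sketch of a somewhat deeper model with dropout, reusing the imports and constants from your script (the architecture and layer sizes are just one reasonable choice, not a tuned solution):

def make_model():
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(64, activation='relu'),
        layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model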
H: How to add jpg images information as a column in a data frame I have jpg images stored in a folder. For ex: 11_lion_king.jpg,22_avengers.jpg etc. I have a data frame as below: data_movie.head() movie_id genre 11 ['action','comedy] 22 ['animation',comedy] .......... I want to add a new column movie_image into the data_movie data frame with the jpg information mapped correctly with movie_id column as shown below: movie_id genre movie_image 11 ['action','comedy] 11_lion_king.jpg 22 ['animation',comedy] 22_avengers.jpg ......... Help will be appreciated. AI: I assume you have a list of the filenames called movie_filenames # Could get the filenames with: # import os; movie_filenames = os.listdir("./folder/with/images/") movie_filenames = ["11_lion_king.jpg", "22_avengers.jpg"] First create a mapping between the ID values and the filenames: # Split the filename on "_" and take the first item, the ID # (cast to int so it matches the dtype of the movie_id column; drop int() if your IDs are stored as strings) mapping = {int(f.split("_")[0]): f for f in movie_filenames} # <-- a dict comprehension Now add a column of some empty values (whatever you like) that will hold the movie_image values: data_movie["movie_image"] = pd.Series() # will be filled with NaN values until populated Now iterate over this mapping, inserting the movie filenames for the correct movie IDs: for movie_id, movie_image_filename in mapping.items(): data_movie.loc[data_movie.movie_id == movie_id, "movie_image"] = movie_image_filename This should produce the output dataframe you described. As a side note (in case you are ever tempted): never load the actual images into a pandas dataframe. It is best to load them as NumPy arrays or something similar. Pandas DataFrames are in essence just annotated NumPy arrays anyway.
H: How to perform one hot encoding on multiple categorical columns I am trying to perform one-hot encoding on some categorical columns. From the tutorial I am following, I am supposed to do LabelEncoding before One hot encoding. I have successfully performed the labelencoding as shown below #categorical data categorical_cols = ['a', 'b', 'c', 'd'] from sklearn.preprocessing import LabelEncoder # instantiate labelencoder object le = LabelEncoder() # apply le on categorical feature columns data[categorical_cols] = data[categorical_cols].apply(lambda col: le.fit_transform(col)) Now I am stuck with how to perform one hot encoding and then join the encoded columns to the dataframe (data). Please how do I do this? AI: LabelEncoder is not made to transform the data but the target (also known as labels) as explained here. If you want to encode the data you should use OrdinalEncoder. If you really need to do it this way: categorical_cols = ['a', 'b', 'c', 'd'] from sklearn.preprocessing import LabelEncoder # instantiate labelencoder object le = LabelEncoder() # apply le on categorical feature columns data[categorical_cols] = data[categorical_cols].apply(lambda col: le.fit_transform(col)) from sklearn.preprocessing import OneHotEncoder # sparse=False so that fit_transform returns a dense array that can be wrapped in a DataFrame ohe = OneHotEncoder(sparse=False) # One-hot-encode the categorical columns. # Unfortunately this outputs an array instead of a dataframe. array_hot_encoded = ohe.fit_transform(data[categorical_cols]) # Convert it to a df data_hot_encoded = pd.DataFrame(array_hot_encoded, index=data.index) # Extract only the columns that didn't need to be encoded data_other_cols = data.drop(columns=categorical_cols) # Concatenate the two dataframes: data_out = pd.concat([data_hot_encoded, data_other_cols], axis=1) Otherwise: I suggest you use pandas.get_dummies if you want to achieve one-hot-encoding from raw data (without having to use OrdinalEncoder before): #categorical data categorical_cols = ['a', 'b', 'c', 'd'] #import pandas as pd df = pd.get_dummies(data, columns = categorical_cols) You can also use the drop_first argument to remove one of the one-hot-encoded columns, as some models require.
H: Relative feature importance w.r.t hyperparameters Could changing the hyperparameters of a model change relative feature importance? AI: Yes. The most obvious example is when using a Lasso regression : for an increasing $\alpha$ parameter you will have more and more coefficients set to zero. This resulting in a smaller set of features and thus a bigger share of feature importance for remaining features.
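A quick demonstration of that effect with scikit-learn on synthetic data:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=10, n_informative=3, noise=5, random_state=0)
for alpha in (0.01, 1.0, 10.0, 100.0):
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    print(alpha, np.sum(coef != 0))   # typically fewer non-zero coefficients as alpha grows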
H: extract classifier properties from pickled file I have a *.clf file which I get from fit() of sklearn. I fit my data with SVM or KNN and want to show its properties when using it for predictions. For example I open an earlier pickled classifier file and when I print it I get something like this: SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf', max_iter=-1, probability=True, random_state=None, shrinking=True, tol=0.001, verbose=True) How can I get the value of, for example, gamma to print it out somewhere else, other than parsing it as a string? Because at first I have to determine whether it's SVM or KNN. AI: You don't need to parse the printed string; load the pickled object and inspect it directly: import pickle clf = pickle.load(open('model.clf', 'rb')) print(type(clf).__name__) # e.g. 'SVC' or 'KNeighborsClassifier', which tells you whether it is SVM or KNN print(clf.get_params()) # dict of all hyperparameters, e.g. clf.get_params()['gamma'] for an SVC
H: Gradient for starting Backpropagation I was reading this nice tutorial about Pytorch's basics: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html In the first example (pure Numpy), the author starts the backward phase by setting as his "first" gradient the numerical result of the loss function calculation multiplied by the constant 2: grad_y_pred = 2.0 * (y_pred - y) Why does he multiply by 2? AI: This is because the assumed loss is the squared error $\mathcal{L} = (\hat{y} - y)^2$ (summed over the outputs in that tutorial), and its derivative is $$ \frac{\partial}{\partial \hat{y}} \mathcal{L} = 2 (\hat{y} - y) $$ which is then passed "backward" for use in the chain rule.
H: tensorboard showing the epoch loss and accuracy for validation data but not training data I am doing binary classification using CNN and viewing the results using tensorboard. The problem is that the tensorboard is not showing the loss and accuracy for training data. As seen curves for training data are not shown AI: It is hard to know what is happening from just that screenshot and no code. The training and validation plots are usually separated on the page, not lines on the same graph. If you are using Tensorflow 2.0, there is a known issue, regarding the syncing of TB and the tfevent file (where logs are stored). A couple of things to try: Try adding the TensorBoard callback with the argument: profile_batch=0 Try restarting the tensorboard a few times... reading can fail or be very slow to load I am referring to the tensorflow.keras API: tf.keras.callbacks.TensorBoard( log_dir='logs', update_freq='epoch', profile_batch=0, # <-- default value is 2 )
H: Why is the curve for the alternative hypothesis shifted compared to that of the null hypothesis? In hypothesis testing the concept of type 1 and type 2 error is very important and very often we come across the graph shown below. I have trouble understanding why the curve for the alternative hypothesis is shifted compared to the null hypothesis. AI: The idea is that the population has the mean of the null hypothesis, say the mean height in Sweden is 181 cm. Then you take a sample of the population and find that the sample mean height is 185. Is this enough for you to discard the null and accept the alternative? This is what the curves are trying to illustrate. The reason the curve for the alternative is shifted is that its mean is located at a different point on the x-axis (185 in my example). So, given that H0 is true (mean 181), what is the probability of drawing a sample mean of 185? That would be the right tail of the null curve, starting straight under the alternative's mean and extending to the right of it.
H: What is the origin of YOLO/darknet coordinates I am aware that darknet/yolov3 outputs relative coordinates, but would the coordinate (0,0) be at the bottom left, or at the top left? I am confused, as opencv2 seems to have y=0 at the top of the image, as is usual when working with matrices. Yet mathematically and intuitively (for me at least) the coordinate (0,0) should be at the bottom left. AI: (0, 0) is the top left. Here is a helpful blog that goes through all the features in the output vector. This is common in image processing; there are a few reasons this is the convention in computer vision. Check here for a list; the accepted answer states: This is rooted in history. Early computers had Cathode Ray Tubes (CRTs) which "draw" the image with a cathode ray from the upper left corner to the lower right.
H: GAN Loss Function Notation Clarification In the Generative Adversarial Network loss function, what do these mean?: $E_{x \sim p_{data}(x)}$ and $E_{z \sim p_{z}(z)}$ and how are they used in this context? AI: Generally, the notation $\mathbb{E}_{x\sim p}f(x)$ or $\mathbb{E}_{x\sim p(x)}f(x)$ refers to the expectation of $f(x)$ with respect to the distribution $p$ for variable $x$ (e.g. see the explanation in the notation section in "Pattern Recognition and Machine Learning" by Bishop). In the context of GANs, it means that for an objective function such as $$\mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))]$$ the first summand is the expectation with regards to $x$ coming from the data and the second summand is the expectation with regards to $z$, which you sample from $p_z$ as an input for G. Since the first summand stands for real data (coming from $p_{data}$) correctly classified as real, and the second summand stands for images coming from G (using $z$ sampled from $p_z$) correctly classified as fake, D tries to maximize this expression. And G does the opposite.
H: Multivariate Polynomial Feature generation I don't quite seem to understand the rules used to create the polynomial features when trying to find a polynomial model with Linear Regression in the multivariate setting. Let's say I have two predictor variables a and b. When generating polynomial features (for example using sklearn) I get 6 features for degree 2: y = bias + a + b + a * b + a^2 + b^2 This much I understand. When I set the degree to 3 I get 10 features instead of my expected 8. I expected it to be this: y = bias + a + b + a * b + a^2 + b^2 + a^3 + b^3 What is the general formula for generating multivariate features? What does it look like for the 3rd degree? AI: The general formula is as follows: \begin{equation} N(n,d)=C(n+d,d) \end{equation} where $n$ is the number of features, $d$ is the degree of the polynomial, and $C$ is the binomial coefficient (combination). Example with the vector (2,3) to the 3rd degree : import numpy as np from sklearn.preprocessing import PolynomialFeatures x = np.array([[2, 3]]) pf = PolynomialFeatures(degree=3, include_bias=True) pf.fit_transform(x) returns : array([[ 1., 2., 3., 4., 6., 9., 8., 12., 18., 27.]]) which is : [1, $\:$ $x_1$, $\:$ $x_2$, $\:$ $x_1^2$,$\:$ $x_1x_2$, $\:$ $x_2^2$, $\:$ $x_1^3$, $\:$ $x_2x_1^2$, $\:$ $x_2^2 x_1$, $\:$ $x_2^3$] You can see this as expanding the following equation and throwing out the coefficients : \begin{equation}1 + (x_1 + x_2) + (x_1 + x_2)^2 + (x_1 + x_2)^3 \end{equation}
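You can verify the count formula directly (math.comb requires Python 3.8+):

import numpy as np
from math import comb                      # Python 3.8+
from sklearn.preprocessing import PolynomialFeatures

n, d = 2, 3
X = np.array([[2, 3]])
n_generated = PolynomialFeatures(degree=d, include_bias=True).fit_transform(X).shape[1]
print(n_generated, comb(n + d, d))         # both print 10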
H: find the difference between two columns in specific rows I have the following columns from one data frame 0 1585742136995 1 1585742137014 2 1585742137035 3 1585742137058 4 1585742137080 ... 177809 1585570661653 177810 1585570661675 177811 1585570661686 177812 1585570661709 177813 1585570661731 Name: acctimestamp, Length: 177814, dtype: int64> and 0 1585742136982 1 1585742136996 2 1585742137015 3 1585742137036 4 1585742137058 ... 177809 1585570661632 177810 1585570661653 177811 1585570661676 177812 1585570661698 177813 1585570661710 Name: gyrtimestamp, Length: 177814, dtype: int64> I want to start from 299 row and by 200 rows to find and print the difference between two columns (acctimestamp, gyrtimestamp) so let's say at 299+200=499 row what is the difference between the 2 columns. After that what is the difference in 699 row And I want to do this for all the len of the data frame. I am facing difficulties with the for loop can you help me a little bit? how a loop like this would be? i write this df['diff'] = df['acctimestamp'] - df['gyrtimestamp'] but it only show me the difference line by line how to start from 499 row print the difference and print every 200 row? i try this code: for i in range (len(df)): i=200 df1=df.loc[299+i, "acctimestamp" ]-df.loc[299+i, "gyrtimestamp"] but the counter print the difference in line 499 and do not continue all the way to the end of the data frame AI: for i in range (99,len(df),200): try: df1=df.loc[i+200,'acctimestamp'] - df.loc[i+200,'gyrtimestamp'] print(df1) except: print('End') Is what you're looking for? Step is 200. So i will take 299 -> 499 -> 699 etc.., and you're taking only the rows between i ( 299 ) and i + 200 ( 499 ) and so on.
H: How to add column name I have a dataset in which some columns have header but the other ones don't have. Therefore, I want to add column name only to those who doesn't have column name. I know how to add column names using header but I would like to know how to do this using index so that I can add column names only to the empty ones. Any suggestions will be appreciated. AI: Loop through df.columns checking whether there is a column name or if it is empty: import pandas as pd # create df with two empty columns df = pd.DataFrame({'a': [1,2], 'b': [1,2], 'c': [1,2], 'd':[1,2]}) df.columns = ['', 'b', '', 'd'] df with empty columns: b d 0 1 1 1 1 1 2 2 2 2 # new of list columns names must be the same length as number of empty columns empty_col_names = ['x', 'y'] new_col_names = [] count = 0 for item in df.columns: if item == '': new_col_names.append(empty_col_names[count]) count += 1 else: new_col_names.append(item) df.columns = new_col_names df with filled columns: x b y d 0 1 1 1 1 1 2 2 2 2
H: Calculate correlation between two sensors regarding the time I am new to all this, so I am sorry if I am asking something stupid. I have two time series - the measurement of a value by two sensors every 5 min - and I want to see whether the measurements are correlated in time (whether the measurements are similar). So far I have only found questions related to autocorrelation and classic correlation. My dataframe looks like this: id time sensor1 sensor2 1 24.1.2020. 00:00:00 0.052 0.631 1 24.1.2020. 00:05:00 0.812 0.102 .... 1 24.1.2020. 00:10:00 0.326 0.500 .... 1 24.1.2020. 00:15:00 1.021 0.999 .... 1 24.1.2020. 00:20:00 1.033 1.000 .... and so on for 10 days. So, time here is a really important aspect because measurements depend on the part of the day, on climate conditions during the day and so on. I saw in some questions that people suggest the pandas function: corr = df['value1'].corr(df['value2']) But as I see it, this line doesn't include the time. Also, if you have some data science course to recommend, it will be appreciated. I am already taking something. AI: Your suggested method should work, assuming the two columns are in the same dataframe with the same timestamps as the index. This means that they are already aligned in time and therefore the time is implicitly encoded. If there are any missing datapoints, you could impute them (fill them) with something like the previous value, which is usually best for time-series data. Replacing with e.g. the mean of the entire time-series might be misleading because there are temporal fluctuations/cycles involved. df = df.fillna(method='ffill') # note: fillna returns a new dataframe, so assign the result back You could also simply plot them to see which kind of correlation value you might expect, doing something like this: import matplotlib.pyplot as plt # <-- needed if running in a script (not a Jupyter notebook) df[["sensor1", "sensor2"]].plot() plt.show()
H: How to ignore vectors of zeros (i.e. paddings) in Keras? I'm implementing a LSTM model with Keras. My dataset is composed by words and each word is an 837 long vector. I grouped the words in groups of 20 and to do this I padded them: initially I had groups of words of variable length and the maximum group length that I found was 20, this is why I padded all groups to 20. For example, a group of 5 words is: [[x1,x2....x837], [x1,x2....x837], [x1,x2....x837], [x1,x2....x837], [x1,x2....x837]] where xi is the i-th feature of the vector. To pad this group to a length of 20, I added 15 vectors composed by 837 feature with value equal to zeros: [[0.......0], ............ ............ [0........0]] So, at the end, my group is of the form: [[x1,x2....x837], [x1,x2....x837], [x1,x2....x837], [x1,x2....x837], [x1,x2....x837], [0...........0], .............. .............. [0...........0]] How could I ignore the vectors of zeros during training? AI: You can use Masking layer (with mask value of zero) before LSTM layer in order to ignore all timesteps with only zeros (i.e. zeros vector). You can find more information about this layer on its documentation. Here is an example from documentation which uses Masking layer: Consider a Numpy data array x of shape (samples, timesteps, features), to be fed to an LSTM layer. You want to mask sample #0 at timestep #3, and sample #2 at timestep #5, because you lack features for these sample timesteps. You can do: set x[0, 3, :] = 0. and x[2, 5, :] = 0. insert a Masking layer with mask_value=0. before the LSTM layer: model = Sequential() model.add(Masking(mask_value=0., input_shape=(timesteps, features))) model.add(LSTM(32))
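For your shapes (groups padded to 20 word vectors of length 837), a minimal sketch might look like the following; I'm assuming a single binary label per group purely for illustration, so adjust the output layer and loss to your actual task:

from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

model = Sequential()
model.add(Masking(mask_value=0., input_shape=(20, 837)))  # all-zero padding vectors are skipped
model.add(LSTM(64))                                        # only processes the real (unmasked) timesteps
model.add(Dense(1, activation='sigmoid'))                  # placeholder output; change for your task
model.compile(loss='binary_crossentropy', optimizer='adam')
model.summary()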
H: How to classify/cluster clients of an application of transport like the Uber Model? I have a question about User segmentation I have a data-set of rides of an application that works at the same model as Uber. The attributes I have : Reservation it means the id of the Ride and statutCourse means the status of the ride cancelled or finished ..... the Id client and the entreprise is the company where the client works IDChaufeur is conductor ID and i have the geographic coordinations of the start point and the finish point i also have date and hour of the ride also real time and estimated time and estimated distance I want to classify/cluster clients profiles and conductors profiles My problem is that one client can make many rides and I don't know if I can use many lines of rides for the same client in one dataset and than use this dataset to classify the clients into classes or profiles ? AI: If you want to cluster people, you need to have one row per person. To do this you will need to group by IDClient/IDChauffeur and compute features like meanHourRide, stdHourRide, ... By feeding the algorithm with multiple records of the same person (with the same travels for example), the clustering algorithm might create a cluster with similar travels (time/location/distance-wise) even though it is made by only one customer. This would result in travel clustering, which is also possible but not what you want here.
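A sketch of that grouping step with pandas; df stands for your rides dataframe, and the column names below are guesses based on your description, so rename them to match your data:

client_profiles = df.groupby('IDClient').agg({
    'Reservation': 'nunique',                              # number of distinct rides
    'realTime': ['mean', 'std'],                           # typical ride duration and its spread
    'estimatedDistance': 'mean',                           # typical trip length
    'statutCourse': lambda s: (s == 'cancelled').mean(),   # cancellation rate
})
# client_profiles now has one row per client and can be fed (after scaling) to the clustering
# algorithm; the same idea applies to conductor profiles with groupby('IDChaufeur')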
H: Predicting correct match of French to English food descriptions I have a training and test set of food descriptions pairs (please, see example below) First name in a pair is a name of food in French and second word is this food description in English. Traing set has also a trans field that is True for correct descriptions and False for wrong descriptions. The task is to predict trans field in a test set, in other words to predict wich food description is corect and which is wrong. dishes = [{"fr":"Agneau de lait", "eng":"Baby milk-fed lamb", "trans": True}, {"fr":"Agrume", "eng":"Blackcurrants", "trans": False}, {"fr":"Algue", "eng":"Buttermilk", "trans": False}, {"fr":"Aligot", "eng":"potatoes mashed with fresh mountain cheese", "trans": False}, {"fr":"Baba au rhum", "eng":"Star anise", "trans": True}, {"fr":"Babeurre", "eng":"seaweed", "trans": False}, {"fr":"Badiane", "eng":"Sponge cake (often soaked in rum)", "trans": False}, {"fr":"Boeuf bourguignon", "eng":"Créole curry", "trans": False}, {"fr":"Carbonade flamande", "eng":"Beef Stew", "trans": True}, {"fr":"Cari", "eng":"Beef stewed in red wine", "trans": False}, {"fr":"Cassis", "eng":"citrus", "trans": False}, {"fr":"Cassoulet", "eng":"Stew from the South-West of France", "trans": True}, {"fr":"Céleri-rave", "eng":"Celery root", "trans": True}] df = pd.DataFrame(dishes) fr eng trans 0 Agneau de lait Baby milk-fed lamb True 1 Agrume Blackcurrants False 2 Algue Buttermilk False 3 Aligot potatoes mashed with fresh mountain cheese False 4 Baba au rhum Star anise True 5 Babeurre seaweed False 6 Badiane Sponge cake (often soaked in rum) False 7 Boeuf bourguignon Créole curry False 8 Carbonade flamande Beef Stew True 9 Cari Beef stewed in red wine False 10 Cassis citrus False 11 Cassoulet Stew from the South-West of France True 12 Céleri-rave Celery root True I think to solve this as text classification problem, where text is a concatenation of French name and English description embeddings. Questions: Which embeddings to use and how concatenate them? Any other ideas on approach to this problem? BERT? Update: How about the following approach: Translate (with BERT?) French names to English Use embeddings to create two vectors: v1 - translated English vector and v2 - English description vector (from data set) Compute v1 - v2 Create new data set with two columns: v1 - v2 and trans Train classifier on this new data set Update 2: It looks like cross-lingual classification may be the right solution for my problem: https://github.com/facebookresearch/XLM#iv-applications-cross-lingual-text-classification-xnli It is not clear yet from the description given on the page with the link above, where to fit my own training data set and how to run classifier on my test set. Please help to figure this out. It would be ideal to find end-to-end example / tutorial on cross-lingual classification. AI: As you suspected, the best approach would be to take a massive multilingual pretrained language model and make use of the information about French and English that it has already learned. You can read about some good options here. The basic idea is to train a new, lightweight network to make predictions based on the output from the pretrained model; its usual to just have a single layer feed forward network for this “fine-tuning”. Some implementations will already have this conveniently coded up for you, so check the documentation for whatever you decide to use! 
Your problem is specifically a sentence pair classification problem, and there is a tutorial for that here. Pay close attention to the data processing phase of the tutorial. Overall, the differences you need to apply to what the tutorial describes are You need to use multilingual BERT You need to prepare your data exactly as the tutorial says about how to set up your data, but use your own snippets in place of the sentence pairs
H: How can I handle a column with list data? I have a dataset which I processed and created six features: ['session_id', 'startTime', 'endTime', 'timeSpent', 'ProductList', 'totalProducts'] And the target variable is a binary class (gender). The feature 'productList' is a list: df['ProductList'].head() Out[169]: 0 [13, 25, 113, 13793, 2, 25, 113, 1946, 2, 25, ... 1 [12, 31, 138, 14221, 1, 31, 138, 1979, 1, 31, ... 2 [13, 23, 127, 8754, 0] 3 [13, 26, 125, 5726, 2, 26, 125, 5727, 2, 26, 1... 4 [12, 23, 119, 14805, 1, 23, 119, 14806, 0] Name: ProductList, dtype: object Now, it is obvious that I can't use this feature as it is. How do I handle this feature? I can explode the list and create a row for each list item, but will it serve my purpose? Update: I applied OHE after exploding the list, and it results in 10k+ columns, which my GCP instance and my computer can't handle; when applying PCA. PS: There are over 17,000 unique products. AI: You basically want to create a column for each product bought, as the presence or absence of each in the list is a feature in itself. See Hadley Wickham’s definition of tidy data. That being said, you seem to have numerous products. To avoid the curse of dimensionality, what I would do is take your binary bought/not features (or count values might be even more effective if you have that data) and do dimensionality reduction to get a reasonable set of features. Latent Dirichlet Allocation (which comes from topic modeling), PCA, t-SNE, and our UMAP are all easy to implement and worth trying. PCA is the least sophisticated and the fastest to run and would be a good baseline. When you have your smaller list of features, you might want to try using a classifier that further selects the most relevant features, like gradient-boosted trees.
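As a concrete sketch of that pipeline: one-hot encode the product lists into a sparse matrix and reduce it with TruncatedSVD, a PCA-like method that works directly on sparse data, so you never materialise the 17k dense columns that crashed your machine (the number of components is an arbitrary starting point):

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.decomposition import TruncatedSVD

mlb = MultiLabelBinarizer(sparse_output=True)
X_products = mlb.fit_transform(df['ProductList'])   # sparse matrix: one column per product

svd = TruncatedSVD(n_components=50, random_state=0)
X_reduced = svd.fit_transform(X_products)           # dense (n_sessions, 50) matrix
# concatenate X_reduced with the other session features before training the classifier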
H: Size Issue when Plotting the Predicted Vaue I wrote a simple Stock Prediction Algorithm and got the predicted value. Then, I wanted to plot the relation between Adjusted close price and predicted value, but got the ValueError: x and y must be the same size. I tried to reshape it, but no luck. I'm having problem with the last 5 lines of the following code. How can I resize X and y_predict in order to have a same size of them? What is the exact problem with my resize code? import quandl import pandas as pd import numpy as np import datetime import matplotlib.pyplot as plt %matplotlib inline from sklearn.linear_model import LinearRegression from sklearn import preprocessing, svm, datasets from sklearn.model_selection import cross_validate, train_test_split stock_data = quandl.get("WIKI/AAPL") stock_data = stock_data[["Adj. Close"]] forecast_out = int(30) stock_data["Prediction"] = stock_data[["Adj. Close"]].shift(-forecast_out) X = np.array(stock_data.drop(['Prediction'],1)) X = preprocessing.scale(X) X_forecast = X[-forecast_out] X = X[:-forecast_out] y = np.array(stock_data["Prediction"]) y = y[:-forecast_out] LinearRegression().fit(X.reshape(-1,1),y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 0) lin_reg = LinearRegression().fit(X_train,y_train) y_predict = lin_reg.predict(X_test) LinearRegression().fit(X.reshape(-1,1),y_predict) plt.scatter(X_train, y_predict, color='blue') plt.plot(X_train, regr.coef_[0][0]*X_train + lin_reg.intercept_[0], '-r') plt.xlabel("Adjusted closing price") plt.ylabel("Predicted price") ``` AI: You are using the full X dataset and want to plot it with the y_predict values, this is not possible since the size of both arrays is not the same. y_predict are the predicted values on X_test while X contains all inputs (i.e. X_train and X_test). You should therefore use X_test instead of X.
H: Understanding how many biases are there in a neural network? I am trying to understand biases in neural nets, but different websites show very different answers. For example, how many biases is there in a fully connected neural network with a single input layer with 5 units and a single output layer with 4 units? And what about a fully connected neural network with a single input layer with 5 units, a single hidden layer with 4 units, and a single output layer with 3 units? For example, if I understand this correctly, https://ai.stackexchange.com/questions/17584/why-does-the-bias-need-to-be-a-vector-in-a-neural-network, the answer of the first should be 5 and for the second 4 + 3. Each neuron except for in the input-layer has a bias. However, at https://ayearofai.com/rohan-5-what-are-bias-units-828d942b4f52, it is explained such that each layer including the input-layer has one bias. So the answer to the example above is one in the first and two in the second. What is correct? What am I misunderstanding here? AI: The sources are both correct, they implement bias in different ways, and are counting slightly different things: Your first source implements a separate bias vector in each output layer in addition to the weights, and is referring to the dimension of the bias vector when it is counting biases. Your second source implements bias as both a separate fixed value $1.0$ in each input layer, and a larger weights matrix with an extra column containing the actual learned bias value. It is referring to the extra added value in each layer when counting the "biases" - more accurately it is counting the added bias "signals" and not the learned bias values, because the learned bias values are implemented inside the weights matrix when using this approach. In both cases, the number of learned values added due to bias are the same, and are the same as this: the answer of the first should be 5 and for the second 4 + 3. Each neuron except for in the input-layer has a bias In the second case the equivalent values appear in weight matrices as extra columns. It's really just an implementation difference. By adding a fixed bias signal to inputs, it simplifies the learning update for each layer to occur to only one matrix, as opposed to a matrix plus a separate bias vector (requiring different update rules for each). This is at the expense of needing to manipulate the inputs each time, and there is no great difference between the approaches in terms of efficiency. Some libraries take one approach, some take the other.
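You can check the count empirically in Keras, since the number of parameters per Dense layer is inputs x units (weights) plus units (biases):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(4, input_shape=(5,)),   # 5*4 = 20 weights + 4 biases = 24 params
    layers.Dense(3),                     # 4*3 = 12 weights + 3 biases = 15 params
])
model.summary()
for w in model.get_weights():
    print(w.shape)                       # (5, 4), (4,), (4, 3), (3,)  ->  4 + 3 = 7 learned bias values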
H: How are weights organized in Conv2D layers? I trained a model using Keras from this example. The model summary showed me this result Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_1 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ conv2d_2 (Conv2D) (None, 24, 24, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 12, 12, 64) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 9216) 0 _________________________________________________________________ dense_1 (Dense) (None, 128) 1179776 _________________________________________________________________ dropout_2 (Dropout) (None, 128) 0 _________________________________________________________________ dense_2 (Dense) (None, 10) 1290 ================================================================= Total params: 1,199,882 Trainable params: 1,199,882 Non-trainable params: 0 There are 8 layers and model.get_weights() also shows the first dimension of 8. As I understood correctly things like Pooling and Flatten are "operators" over the input matrices, so why it is presented as a layer? How to understand what is stored in weight array for example in Pooling layer (model.get_weights()[2])? AI: If you print those shapes using below for loop weights_m=model.get_weights() for i in range(8): print(weights_m[i].shape) you will get output as (3, 3, 1, 32) (32,) (3, 3, 32, 64) (64,) (9216, 128) (128,) (128, 10) (10,) so we will get one layer weight and bias. we have a total of 4 layers(2 conv + 2 dense) so 8 weight vectors.
H: How to measure the distance (in generalized sense) between geographical regions? I need to construct a distance matrix for a few U.S. counties that are adjacent to one or another, and choosing the definition of distance is very tricky. The shortest path (i.e the minimum number of times one has to cross county borders to walk from A to B) and the Vincenty distance between the most populous cities are not working well already. What are other options? Also, I read somewhere and some time ago that distance between two geographical regions can be measured by the strength of "interaction". Volume of telephone calls, trajectories of trucks, and frequencies of flights were mentioned. Yet I can't any details about such modeling. Can somebody refer me to a paper or something? Are there open datasets of those souces available? Anybody help me out here? AI: In terms of interaction data, you might be thinking of this paper and this project I love the idea of using flight data. You could start with this set hosted at Kaggle. In terms of estimating distance between states, the simplest thing you could do that might work is to measure the distance between state capitals.
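If you go with the distance-between-cities (or county centroids) option, a plain haversine great-circle distance is a simple, dependency-free alternative to the Vincenty formula; a sketch:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# e.g. fill an n x n distance matrix by looping over county centroid coordinates
print(round(haversine_km(38.9072, -77.0369, 39.2904, -76.6122)))  # Washington DC -> Baltimore, ~56 km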