H: Dealing with missing data in several features at once Good day, What are the approaches for handling missing data in several features (categorical and continuous) at once? I looked through each feature and plotted several histograms of their distributions, and I think that a simple substitution by some constant value (mean, mode or something else) is not the best choice. I want to build several models (classification for categorical values and regression for continuous values) in order to impute the data, but I don't know the right approach for doing this. If I train my models only on non-null rows, then I won't be able to predict for rows that contain null values. On the other hand, if I simply drop the columns (features) with null values, I get bad results because the features with null values are valuable for prediction. I have a 615739 rows * 49 columns data set (after one-hot encoding). If I drop all rows that contain at least one null value, I am left with 451063 rows. Thanks for your advice in advance.
AI: Even when imputing several features at once, you still need to diagnose the missingness first. Generic mean or constant imputations rarely work well. To decide which strategy is appropriate, it is important to investigate the mechanism that led to the missing values and find out whether the data is missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR).
MCAR means that there is no relationship between the missingness of the data and any of the values. MAR means that there is a systematic relationship between the propensity of missing values and the observed data, but not the missing data. MNAR means that there is a systematic relationship between the propensity of a value to be missing and its values. Given what you have described, it is likely MCAR (the assumption being that you already tried to find this propensity yourself using domain knowledge, or tried to build a model between the missing columns and the other features, and failed in doing so).
As for other techniques to impute the data, I would suggest looking at KNN imputation (in my experience it gives consistently solid results), but you should try different methods. The fancyimpute package supports this kind of imputation, using the following API:

from fancyimpute import KNN
# Use 10 nearest rows which have a feature to fill in each row's missing features
X_fill_knn = KNN(k=10).fit_transform(X)

Here are the other methods supported by this package:
• SimpleFill: Replaces missing entries with the mean or median of each column.
• KNN: Nearest neighbor imputation which weights samples using the mean squared difference on features for which two rows both have observed data.
• SoftImpute: Matrix completion by iterative soft thresholding of SVD decompositions. Inspired by the softImpute package for R, which is based on Spectral Regularization Algorithms for Learning Large Incomplete Matrices by Mazumder et al.
• IterativeSVD: Matrix completion by iterative low-rank SVD decomposition. Should be similar to SVDimpute from Missing value estimation methods for DNA microarrays by Troyanskaya et al.
• MICE: Reimplementation of Multiple Imputation by Chained Equations.
• MatrixFactorization: Direct factorization of the incomplete matrix into low-rank U and V, with an L1 sparsity penalty on the elements of U and an L2 penalty on the elements of V. Solved by gradient descent.
• NuclearNormMinimization: Simple implementation of Exact Matrix Completion via Convex Optimization by Emmanuel Candes and Benjamin Recht using cvxpy. Too slow for large matrices.
• BiScaler: Iterative estimation of row/column means and standard deviations to get a doubly normalized matrix. Not guaranteed to converge but works well in practice. Taken from Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.
Note that MICE was deprecated in fancyimpute and moved to scikit-learn as IterativeImputer.
Another option is an old Kaggle trick I have seen:
Step 1: Replace NaN with the mean or the median — the mean if the data is normally distributed, otherwise the median. In my case, I have NaNs in Age.
Step 2: Add a new column "NAN_Age": 1 for NaN, 0 otherwise. If there is a pattern in the missingness, you help the algorithm catch it. A nice bonus is that this strategy doesn't care whether it's MAR or MNAR (see above).
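A minimal sketch combining the indicator-column trick with scikit-learn's IterativeImputer (the DataFrame and the Age/Fare column names here are made up for illustration):

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # required to expose IterativeImputer
from sklearn.impute import IterativeImputer

# toy frame standing in for your real data
df = pd.DataFrame({'Age': [22, np.nan, 35, 41, np.nan],
                   'Fare': [7.3, 8.1, 53.0, 13.0, 9.5]})

# keep a missingness indicator before imputing (step 2 of the Kaggle trick)
df['NAN_Age'] = df['Age'].isna().astype(int)

# MICE-style imputation, which now lives in sklearn as IterativeImputer
imputer = IterativeImputer(max_iter=10, random_state=0)
df[['Age', 'Fare']] = imputer.fit_transform(df[['Age', 'Fare']])
print(df)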
H: Repeated features in Neural Networks with tabular data When using algorithms like linear regression or least-squares methods, having repeated or highly correlated features can be harmful for the model. Tree-based models are generally not too strongly affected by highly correlated features, and there are no numeric stability issues as with least squares. But what happens with neural networks? Most of the literature on NNs deals with images and signals; there is not much about tabular data. Does having repeated features in a tabular-data neural network model harm accuracy, or are NNs able to select features?
AI: Strictly theoretically, it makes no difference to accuracy. Here is why: we already know that, mathematically, a NN can approximate any function. So let's say we have input X. If X is highly correlated, we can apply one of the available decorrelation techniques. The main point is that you then get X' with a different numerical representation, which is most likely more difficult for the NN to learn to map to the outputs y. But still, in theory, you can change the architecture, train for longer, and still get the same approximation, i.e. the same accuracy.
Now, theory and practice are the same in theory but different in practice, and I suspect that these adjustments of architecture etc. will be much more costly in reality, depending on the dataset.
H: How does skewed data affect deep neural networks? I'm playing around with deep neural networks for a regression problem. The dataset I have is skewed right, and for a linear regression model I would typically perform a log transform. Should I apply the same practice to a DNN? Specifically, I'm curious how skewed data affects regression with a DNN and, if the effect is negative, whether the same methods that would be applied to a linear regression model are the right way to go about fixing it. I couldn't find any research articles about it, but if you know of any feel free to link them in your answer!
AI: Strictly theoretically, it makes no difference to a DNN. I answered a similar question today here, and I said:
Here is why: we already know that, mathematically, a NN can approximate any function. So let's say we have input X. If X is highly correlated, we can apply one of the available decorrelation techniques. The main point is that you then get X' with a different numerical representation, which is most likely more difficult for the NN to learn to map to the outputs y. But still, in theory, you can change the architecture, train for longer, and still get the same approximation, i.e. the same accuracy. Now, theory and practice are the same in theory but different in practice, and I suspect that these adjustments of architecture etc. will be much more costly in reality, depending on the dataset.
BUT I want to add another point of view: convergence speed. Strictly theoretically you don't even need batch normalization for performance (you could just adjust the weights and biases and you should get the same results), but we know that making this kind of transformation has big benefits for NN training.
To conclude: yes, I have had experience where it made a difference, and where it didn't. You can't expect theoretical results saying that skewed data is bad.
H: Macro and micro average for imbalanced binary classes Micro and macro averaging are metrics for multi-class classification. However, for binary classification when data are imbalanced, it seems that micro and macro precision give different results. My question is: does it make sense to use micro and macro precision in binary classification problems when classes are imbalanced?
AI: does it make sense to use micro and macro precision in binary classification problems when classes are imbalanced?
In general micro- and macro-average performance are not relevant in binary classification, whether the classes are balanced or not. Their value can be especially misleading if there is a strong imbalance, because they take into account both the minority class (harder for the classifier) and the majority class (easier):
- By definition the micro-average gives more weight to the majority class, so the micro-average performance can be high even if the classifier does a terrible job at distinguishing the two classes.
- The macro-average is not biased towards either of the two classes, but it is still uselessly complex: it makes it harder to understand what's going on than the simple performance on the positive class, which is normally the minority one (because that's the challenging one).
Of course there can be cases where it makes sense not to follow this standard evaluation setting; it's always a matter of choosing the appropriate way to evaluate a particular task.
The example below illustrates why micro- and macro-average are confusing in a standard case of imbalance:

              true A   true B
predicted A     90        9
predicted B      0        1

For A: precision = 0.91, recall = 1.0, f1-score = 0.95
For B: precision = 1.0, recall = 0.1, f1-score = 0.18
micro-average: precision = 0.91, recall = 0.91, f1-score = 0.91
macro-average: precision = 0.95, recall = 0.55, f1-score = 0.70

Assuming we don't know anything else than the selected performance measure, this classifier:
- performs almost perfectly according to the performance of the majority class A,
- performs very well according to the micro-average,
- performs decently according to the macro-average,
- performs terribly according to the performance of the minority class B.
Looking at the confusion table, it's clear that the classifier doesn't do a good job at distinguishing the two classes. So the most "honest" performance measure is the last one, i.e. the non-averaged performance on the minority class.
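The numbers above can be reproduced with scikit-learn (a small sketch; note that sklearn's macro F1 averages the per-class F1 scores, giving about 0.57, whereas the 0.70 quoted above is the F1 computed from the macro-averaged precision and recall):

from sklearn.metrics import precision_recall_fscore_support

# 90 items of true class A, all predicted A; 10 of true class B, 9 predicted A and 1 predicted B
y_true = ['A'] * 90 + ['B'] * 10
y_pred = ['A'] * 99 + ['B'] * 1

# per-class precision / recall / f1 / support, for A then B
print(precision_recall_fscore_support(y_true, y_pred, labels=['A', 'B']))

for avg in ('micro', 'macro'):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=['A', 'B'], average=avg)
    print(avg, round(p, 2), round(r, 2), round(f1, 2))  # micro: 0.91 0.91 0.91, macro: 0.95 0.55 0.57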
H: Is sampling a valid way to reduce complexity? I'm facing an issue where I have a massive amount of data that I need to cluster. As we know, clustering algorithms can have a very high O complexity, and I'm looking for ways to reduce the time my algorithm is running. I want to try a few different approaches, like pre-clustering (canopy clustering) or subspace clustering, correlation clustering etc. However, something that I haven't heard about, and I wonder why - Is it viable to simply get a representative sample from my dataset, run the clustering on that, and generalize this model to the whole dataset? Why/why not is this a viable approach? Thank you! AI: I would get a sufficiently large random/representative sample and cluster that. To see what is such a sample, you will have to get two such samples and cluster them to get cluster solutions c1 and c2. If the matching clusters of c1 and c2 have the same model parameters, then you probably have representative samples. You can match the clusters by looking at how c1 and c2 assign drawn data to clusters.
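A minimal sketch of this idea with scikit-learn's KMeans (the dataset here is synthetic and the sample size / cluster count are placeholders): fit on a random sample, then assign the remaining points to the learned centroids. As suggested above, repeating this with a second sample and comparing the resulting centroids is a cheap check that the sample is representative.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200_000, 10))        # stand-in for the full dataset

sample_idx = rng.choice(len(X), size=20_000, replace=False)
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[sample_idx])  # cluster only the sample

labels_full = km.predict(X)               # generalize: assign every point to its nearest centroid
print(np.bincount(labels_full))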
H: Is Cross Validation needed for regression if you already know the predictors in your model? Let's say you want to model the behavior of Y = X1 + X2 and you know that this is the model you want to make. Whether or not that approximates the true relationship well is unknown. But since you want to be able to have coefficients that explain how Xi affects Y, you build a regression model. You don't plan on adding/subtracting predictors (since you don't have any additional data) and you don't plan on comparing this model with another (no other model allows for interpretation). Does it make sense to still use sample splitting or cross validation? If you do cross validation, do you average the coefficients? Or could you just use your entire data set to train the model. Thanks! AI: Ask yourself why you perform cross validation. Contrary to what Dave's answer says, the point of cross-validation is to estimate your generalization error, that is how your model will perform on future data. Model selection comes out of this, definitely, however to say that the point of CV is model selection is not true. That said, if all you are interested in is relationships between predictors and dependent variables and you aren't trying to do some sort of step-wise selection then you do not need to perform cross-validation. When was the last time a Statistics based regression textbook/class mentioned cross-validation? Never, at least not in any of the regression classes I took. One point, if you do use CV, absolutely DO NOT average the coefficients. The correct process is to use CV to estimate your error rate and then regather all of your data and run the model on all data which would give you your coefficients.
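If you do use CV, here is a minimal sketch of the workflow described above (synthetic data, assuming the fixed model Y = b0 + b1*X1 + b2*X2): cross-validation only estimates the generalization error, and the coefficients you report come from a final fit on all of the data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                            # X1, X2
y = X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=200)

# CV only estimates how well the fixed model generalizes to new data
cv_mse = -cross_val_score(LinearRegression(), X, y, cv=5, scoring='neg_mean_squared_error')
print('estimated generalization MSE:', cv_mse.mean())

# then regather all the data and fit once; these are the coefficients you interpret
final_model = LinearRegression().fit(X, y)
print(final_model.intercept_, final_model.coef_)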
H: Query relating to Pandas Rows manipulation I have a query regarding Pandas data manipulation. Let's say I have a dataframe, df, with the following structure.

A    B    C
1    1    7
5    3    3
3    3    2
7    5    2
5    NaN  2

We have 3 columns in the dataframe: A, B & C. Column B consists of mean values with respect to A. For example, the value of B in the 3rd row (which is 3) is the mean of the first 3 rows of A (9/3). Similarly, the value of B in the 4th row = (sum of the values in the 2nd, 3rd and 4th rows of A) / 3. Now, let's say I have many NaN values in B and there are no NaN values in A. How do I write a function or code to fill the NaN values as per the logic discussed above? I tried using loc and iloc but I guess I made some mistake.
AI: Assuming you don't have NaNs in the first two entries of column B (and that df has the default integer index), the following code works:

import numpy as np
import pandas as pd

index_nan = df.index[df['B'].isna()]  # get all indices where B has NaNs
new_df = pd.DataFrame({'B': [np.mean(df['A'][i-2:i+1]) for i in index_nan]},
                      index=index_nan)
df.update(new_df)  # update those values of column B in df
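A shorter alternative (a sketch assuming the same default integer index and a fixed 3-row window) is to fill only the missing entries of B from a rolling mean of A:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 5, 3, 7, 5],
                   'B': [1, 3, 3, 5, np.nan],
                   'C': [7, 3, 2, 2, 2]})

# rolling(3).mean() gives, for each row, the mean of that row and the two rows above it
rolling_mean_A = df['A'].rolling(window=3).mean()
df['B'] = df['B'].fillna(rolling_mean_A)
print(df)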
H: How to Include Features that Apply to Specific Classes I'm predicting hours that will be worked for building tasks. Due to the overall low sample size, I've stacked multiple related tasks together into a single model. (There may be 100 total samples in a single model, each task having 10 to 20 samples individually) An example would be - how long will it take a worker to complete each task associated with installing 2 different sizes of pipe in a hospital. There are many tasks associated with installing a pipe - Cutting the pipe Welding the pipe Bending the pipe Riveting the pipe We know from experience that the more bends a pipe has - the more difficult it is weld. But the difficulty of cutting and riveting are completely unrelated to the number of bends. Additionally there are multiple sizes of pipe in a single model, and the above tasks are completely unrelated between different sizes of pipe. An example of the data is: | Task | Pipe Size | Amount | Ratio of Bends to Welds | Predicted Hours | |------------|-----------|--------|-------------------------|-----------------| | Cut Pipe | 3 inches | 5 | NULL | 2 | | Weld Pipe | 3 inches | 10 | 2 | 4 | | Bend Pipe | 3 inches | 20 | NULL | 8 | | Rivet Pipe | 3 inches | 10 | NULL | 2 | | | | | | | | Cut Pipe | 10 inches | 1 | NULL | 1 | | Weld Pipe | 10 inches | 2 | 5 | 2 | | Bend Pipe | 10 inches | 10 | NULL | 15 | | Rivet Pipe | 10 inches | 1 | NULL | 0.5 | There are many different types of these "ratio" features within a single model, my current plan is to include them and null out the feature in all other tasks where it isn't relevant. It's the first time I've stacked this many classes together in a single model, and also the first time I've encountered features which are only applicable to some rows and not others. I'm currently using a random forest model. Is there anything conceptually wrong with doing this? AI: If I understood correctly, you said that Pipe Size does not have a correlation with the Predicted Hours. If you are sure that one or more variables do not possess the information about the target then drop them. Other than that, if a feature (or some of your features) has a relation only with a specific task, then replace the null values with 0. But in your case, I think you should also try the simpler algorithms like Linear Regression. But be careful about the sample size vs a number of features balance. Since you have few samples, use at most 1 variable for every 15 samples (In rare cases, 10 can be used too, but it is not recommended). As this source mentions: In summarizing the findings by Schmidt, Green suggested that the minimum number of SPV ranges from 15 to 25. Also, you need to evaluate your model's performance, but since you may have an unbalanced number of samples for each Task Type, be careful while splitting your data to train and test, make sure your test data has enough samples from each Task Type. Some other important things: Try to do bivariate (between the target variable and an independent variable) analysis before modeling, so you can eliminate unnecessary variables. In the case of using Linear Regression, be careful about multicollinearity if you will use the coefficients to interpret your results. Because if your model has multicollinearity, then your coefficients will be incorrect although your predictions will note be spurious i.e. they will still be trustable.
H: How to find correlated knowledge among different documents? Say I have a sequence of documents clicked by a user, how can I mine the identical or semanticly similar word/knowledge/phrases shared among different documents? Maybe someone can give a paper or subject relating to my goal? AI: The simplest approach would be extracting the keywords from both documents and by using them as features you can compare the mutuality of the papers. A better approach would be, build building knowledge graphs for the documents then comparing them. This paper illustrates a way of doing that. Another Source Another Source However, if you want to build a deeper knowledge about the issue, for example how text-similarity, plagiarism or recommendation systems work, A. Rajaraman and J. D. Ullman, Mining of Massive Datasets, Cambridge University Press, 2011 has very good content about the topic.
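As a much simpler baseline than the knowledge-graph approach (a sketch only; the example documents are made up), TF-IDF features plus cosine similarity already capture the words/keywords shared across the clicked documents:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the user clicked an article about neural network training",
        "a tutorial on training deep neural networks",
        "recipe for chocolate cake"]

tfidf = TfidfVectorizer(stop_words='english').fit_transform(docs)
sim = cosine_similarity(tfidf)      # sim[i, j] = similarity between document i and document j
print(sim.round(2))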
H: How to conclude the generality of any classification methods? Suppose a classification task A, and there exist a lot of methods $M_1, M_2, M_3$. The task $A$ is measured by a consistent measure. For instance, the task A can be a binary classification. In this case, F-score, ROC curve can be used. I did a survey on some research are and found that $M_1$ is evaluated with dataset $D_1$ (open) using pre-processing $P_1$ only (seems the seminal work). $M_2$ is evaluated with dataset $D_1$ (open), $D_2$ (private) and compared with $M_1$, claiming $M_2$ has more accurate result, but using different data pre-processing $P_2$. $M_3$ proposes new way with dataset $D_3$ (private) and did not provide any comparison against $M_2$ and $M_1$ I'm trying to work on this area, but there are lots of inconsistency. None of the methods are validated with validation data. They just used train and test data. I think some parameters are tuned for test dataset although authors do not claim so. Since this field is not a data science-oriented and the amount of dataset is few, this may happen. Which method can we consider as a state-of-the-art? How can we conclude the generality of each of the method? AI: You're experiencing an unfortunately common issue with the current state of system/model evaluation. In addition to evaluating on different datasets, authors often leave out important details, such as the procedure for hyperparameter tuning, detailed evaluation metrics (i.e. true positives, false negatives, etc. in addition to F-score), and ablation analyses. In cases like these, we cannot conclude that one method is necessarily better than the others or state-of-the-art. The best way to estimate the generality of each method when the literature has so far failed to do so is to implement each yourself and do a fair comparative evaluation. You would evaluate all methods on the same dataset with the same pre-processing steps and hyperparameter tuning procedure and, if possible, introduce additional evaluation datasets. It can also be very enlightening to perform an ablation analysis in which you iteratively remove certain components of the methods and re-evaluate to see how much of a performance hit you take. Doing the above and communicating it (via a publication, blog post, or whatever) will not only help you, but everyone else working in the area.
H: Calculation of VC dimension of simple neural network Suppose I have a perceptron with one-hidden layer, with the input - one real number $x \in \mathbb{R}$, and the activation function of the output layers - threshold functions: $$ \theta(x) = \begin{cases} 0, x \leq 0 \\ 1, x > 0 \end{cases} $$ The hidden layer may contain $k$ units and I would like to calculate the VC dimension of this feed-forward neural network. The VC dimension is defined as a cardinality of the maximal set, that can be shattered by properly adjusting the weights of the neural network. The threshold functions have a VC dimension of $n+1$, where $n$ is a number of input neurons, because by a plane $n-1$ plane one may split $n$ points in any way. So when considering the results in the first layer, we have a VC dimension of $2$ for each gate, and the total number of points, that can be separated by the activation is $2 k$. Then we have a vector $\in \mathbb{R}^k$ to be processed to output, and the output unit has a dimension $k + 1$. Do I understand correctly, that the resulting VC dimension of this simple neural network is : $$ 2 k + k + 1 = 3k + 1 $$ AI: I do not believe this is correct. The entire network will represent a piecewise-constant function with at most $k+1$ pieces, and has VC dimension $k+1$. Each hidden neuron is a step function, and together there are at most $k$ jump points among them. Taking a linear combination of those, we still cannot create any new jump points, so at the output neuron before activation, we have a piecewise-constant function with at most $k$ jump points, and arbitrary constant values on the $k+1$ intervals between them. After the activation, it's just piecewise constant with values 0 and 1 on at most $k+1$ intervals.
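To make the counting argument explicit (a sketch of the argument above, not a complete proof), the function computed by the network is
$$ f(x) = \theta\Big(\sum_{i=1}^{k} v_i\, \theta(w_i x + b_i) + c\Big), \quad x \in \mathbb{R}. $$
Each hidden unit $\theta(w_i x + b_i)$ is constant except at its single jump point $x_i = -b_i/w_i$ (no jump at all if $w_i = 0$), so $f$ is piecewise constant on at most $k+1$ intervals. Any $k+1$ points can be shattered: a given labelling changes value at most $k$ times between consecutive points, and placing one hidden-unit jump in each gap where the label changes (with suitably signed weights $v_i$ and bias $c$) realizes it, so the VC dimension is at least $k+1$. Conversely, $k+2$ alternately labelled points would require $k+2$ constant pieces, which is impossible, so the VC dimension is exactly $k+1$ rather than $3k+1$.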
H: LSTM followed by Dense Layer in Keras I am working on LSTMs and LSTM AutoEncoders, trying different types of architectures for multivariate time series data, using Keras. Since it is not really practical to use relu in LSTM because of exploding gradients, I added a Dense layer following LSTM, so it is like: model = Sequential() model.add(LSTM(number_of_features, batch_input_shape = (batch_size, time_steps, number_of_features), return_sequences = True)) model.add(Dense(number_of_features)) What I want to know is: Is this fully connected Dense layer connected to only the last step in LSTM? Or does it add a fully connected Dense layer for all time steps? When I checked the number of parameters to be sure about this. Dense layer has number_of_features $\times$ (number_of_features + 1) parameters, which implies this Dense layer is applied to all time steps in LSTM network. This makes sense since I set return_sequences = True, but even when I set it to False, this does not change, which made me doubt my understanding. So, How does Dense work with LSTM with Return_Sequences? What is its different from TimeDistributed layer? Why changing return_sequences to False did not result in a reduction in number of parameters of Dense layer, from number_of_features $\times$ (number_of_features + 1), to (number_of_features + 1)? AI: I have been able to find an answer in Tensorflow Warrior's answer here. In Keras, when an LSTM(return_sequences = True) layer is followed by Dense() layer, this is equivalent to LSTM(return_sequences = True) followed by TimeDistributed(Dense()). When return_sequences is set to False, Dense is applied to the last time step only. Number of parameters were same even when I set return_sequences = False because even though applied to all time steps, they shared the same parameters, that is after all what TimeDistributed() does.
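This can be verified directly from the model summaries (a small sketch with an arbitrary feature count): the Dense layer reports number_of_features x (number_of_features + 1) parameters in every variant, because the same weights are shared across all time steps.

from tensorflow import keras
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

n_features, time_steps = 8, 5

seq_model = keras.Sequential([
    LSTM(n_features, return_sequences=True, input_shape=(time_steps, n_features)),
    Dense(n_features),                      # applied to every time step, output (None, 5, 8)
])
last_model = keras.Sequential([
    LSTM(n_features, return_sequences=False, input_shape=(time_steps, n_features)),
    Dense(n_features),                      # applied to the last step only, output (None, 8)
])
td_model = keras.Sequential([
    LSTM(n_features, return_sequences=True, input_shape=(time_steps, n_features)),
    TimeDistributed(Dense(n_features)),     # equivalent to seq_model
])

for m in (seq_model, last_model, td_model):
    m.summary()                             # Dense always shows 8 * (8 + 1) = 72 parameters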
H: What is considered short and long text in NLP (document similarity) What is considered short and long text in NLP? I'm working on a dataset that contains documents from 10 to 600 words and I'm asking myself if I should treat them differently. Also, I haven't found a source which explicitly defines short and long text in NLP yet. The goal for my task is to find similar documents. AI: As Erwan said in the comments, it depends. In my experience, it depends specifically on two things: Tokenization method: The length of a document in number of tokens will vary considerably depending on how you split it up. Splitting your text into individual characters will result in a longer document than splitting it into sub-word units (e.g. WordPiece), which will still be longer than splitting on white space. Model: Vanishing gradients aside, an RNN doesn't care how long the input text is, it will just keep chugging along. Transformers, however, are limited. BERT can realistically handle sequences of up to 512 WordPiece units, while the LongFormer claims to handle sequences of up to 32k units (given sufficient compute resources). Thus your documents of 10 - 600 tokens would be long for BERT but short for the LongFormer. Whether you should treat documents of length 10 differently from those of length 600 is not something I can answer without knowing the details of your specific task. Intuitively, I doubt a very short document would ever be very similar to a much longer one, simply because it likely contains less content.
H: Neural networks with not-fixed dimension for input and output I would like to know if it exists a model/method which can deal with input and output of different dimension. For example, let us say that the maximum number of info we could have is 6 features and 5 output. Then I could have examples with 4 features and 3 output. Less input features always relates to less output. And relations stays the same. with only 4 features I have only 4 outputs, and so on. Most important, it is not that I do not have them for missing knowledge, but because in the same problem dominion I could have all 6 of the features, or less. It is possibile to create a model which deal with this kind of things ? The other solution I thought was to just use a simple deep network, with the maximum number of features and output as dimension, and use a value = 0 when I have a missing feature or a missing target. But that destroyed completely the training performances AI: If youre searching for neural network architecture that have varying number of inputs and outputs, Recurrent Neural Networks, LSTM's .. etc are examples. They are used in Natural Language Processing where the main goal is to examine patterns in sentences. But I highly doubt that they will work for your use case since no information about it is provided. Another way would be to create multiple neural networks with a different input/ output sizes, such that the input/output sizes are averages of the groups of similar input/output sizes.
H: np.unique() explanation? What happends in this numpy function: https://numpy.org/doc/stable/reference/generated/numpy.unique.html a = np.array([1, 2, 5, 3, 2]) u, indices = np.unique(a, return_inverse=True) The results are: u array([1, 2, 3, 5)] indices array([0,1,3,2,1), dtype=int64) u[indices] array([1, 2, 5, 3, 1]) u are the single values inside this array. What is indices? why is it a 3 and not a 5? and what is going on in u[indices]? AI: Firstly the function returns the variables u and indices: u contains the unique elements sorted. In other words no element is repeated (the number 2 does not appear twice) and the elements will be listed from the smallest to the largest value indices is the same size as a and basically it contains the index in u you should used to recover a. So when they give you [0,1,3,2,1], in this case the number 3 refers to the 3rd index of u = [1, 2, 3, 5] which in this case is 5 You can see how u and indices can be used to recover a by running the following code: for element in indices: print(u[element]) This also explains your last question: u[indices] it basically gives you back a. Note that in your question u[indices] should return array([1, 2, 5, 3, 2]). I assume you made a typo
H: Augmentation on test dataset and validation dataset I'm training a segmentation model (computer-vision). Thus, my dataset contains images and masks (binary segmentation of objects). I'm augmenting the training dataset (applying random crop, rotation or shift etc.) to get a larger dataset. I don't apply augmentation on test and validation dataset. Should I use augmentation on the validation dataset or the test dataset too ? AI: Your test and validation dataset should reflect the type of data you would expect when you deploy your model in the actual setting. So usually you do not apply augmentation to the validation and test dataset, since in the real setting you will not receive some strange augmented images. Another way to think of it is if you apply augmentation to your validation dataset then you will actually measure how your model will perform on augmented data and not on 'real' data. You will use this to inform the best settings for your model so you will end up with a model tuned to perform well on augmented data.
H: Coloring clusters so that nearby clusters have different colors I have clustered a large number of points (~3000) into (~400) clusters. I want to plot the data and visualize the clusters, and I want to make sure that nearby clusters have different colors. Can anyone recommend an approach to coloring the clusters? This is a conceptual question, but I'm most interested in solutions in Python or R.
AI: I found that taking the centroids of each cluster, running k-nearest-neighbors, and then applying https://en.wikipedia.org/wiki/Greedy_coloring works well. Just keep increasing K until the clusters stand out.
Edit: following @Fatemeh Asgarinejad's suggestion, I now use the minimum distance from a cluster centroid to a member of the other clusters as the distance in computing KNN. This is slower but seems to give a more robust coloring when clusters overlap or have irregular shapes.
My python code:

# data is a pandas data frame of data points with cluster labels
import numpy as np
from sklearn.neighbors import NearestNeighbors

def assign_cluster_colors(data, clusters, n_colors=10, n_neighbors=8):
    centroids = data.groupby('cluster').agg({'x': np.mean, 'y': np.mean})
    color_ids = np.arange(n_colors)
    distances = np.empty(shape=(centroids.shape[0], centroids.shape[0]))
    groups = data.groupby('cluster')
    for centroid in centroids.itertuples():
        # minimum distance from this centroid to any member of each cluster
        c_dists = groups.apply(lambda r: min(np.sqrt(np.square(centroid.x - r.x) + np.square(centroid.y - r.y))))
        distances[:, centroid.Index] = c_dists
    nbrs = NearestNeighbors(n_neighbors=n_neighbors, metric='precomputed').fit(distances)
    distances, indices = nbrs.kneighbors()
    color_assignments = np.repeat(-1, len(centroids))
    for i in range(len(centroids)):
        knn = indices[i]
        knn_colors = color_assignments[knn]
        available_colors = color_ids[list(set(color_ids) - set(knn_colors))]
        if len(available_colors) > 0:
            color_assignments[i] = available_colors[0]
        else:
            raise Exception("Can't color this many neighbors with this many colors")
    centroids = centroids.reset_index()
    colors = centroids.loc[:, ['cluster']]
    colors['color'] = color_assignments
    data = data.merge(colors, on='cluster')
    return data
H: no decrease loss and val_loss I try to train a neural network for time series. I use some data from Covid, mainly the goal is knowing 14 days of number of people at hospital to predict the number at J+1. I have use some early stopping to not over fit, but almost one time over two the learning stop at patience+1 and there is no decrease of loss and val_loss. I have tries to move hyperparameters like learning rate but the problem is always here. Any guess? The main code is below and the whole code with data :https://github.com/paullaurain/prediction import os from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error from keras.models import Sequential from keras.layers import Dense from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint from keras.models import load_model from keras.optimizers import Adam from sklearn.preprocessing import MinMaxScaler # fit a model def model_fit(data, config): # unpack config n_in,n_out, n_nodes, n_epochs, n_batch,p,pl = config # prepare data DATA = series_to_supervised(data, n_in, n_out) X, Y = DATA[:, :-n_out], DATA[:, n_in:] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.1) # define model model = Sequential() model.add(Dense(4*n_nodes, activation= 'relu', input_dim=n_in)) model.add(Dense(2*n_nodes, activation= 'relu')) model.add(Dense(n_nodes, activation= 'relu')) model.add(Dense(n_out, activation= 'relu')) model.compile(loss='mse' , optimizer='adam',metrics=['mse']) # fit es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=p) file='best_modelDense.hdf5' mc = ModelCheckpoint(filepath=file, monitor='loss', mode='min', verbose=0, save_best_only=True) history=model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=n_epochs, verbose=0,batch_size=n_batch, callbacks=[es,mc]) if pl: plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() saved_model=load_model(file) os.remove(file) return history.history['val_loss'][-p], saved_model # repeat evaluation of a config def repeat_evaluate(data,n_test, config, n_repeat,plot): # rescale data scaler = MinMaxScaler(feature_range=(0, 1)) scaler = scaler.fit(data) scaled_data=scaler.transform(data) scores=[] for _ in range(n_repeat): score, model= model_fit(scaled_data[:-n_test], config) scores.append(score) # plot the prediction id asked if plot: y=[] x=[] for i in range(n_test,0,-1): y.append(float(model.predict(scaled_data[-14-i:-i].reshape(1,14)))) x.append(scaled_data[-i]) X=scaler.inverse_transform(x) plt.plot(X) Y=scaler.inverse_transform(np.array([y])) plt.plot(Y.reshape(10,1)) plt.title('result') plt.legend(['real', 'prdiction'], loc='upper left') plt.show() return scores # summarize model performance def summarize_scores(name, scores): # print a summary scores_m, score_std = mean(scores), std(scores) print( '%s: %.3f RMSE (+/- %.3f)' % (name, scores_m, score_std)) # box and whisker plot pyplot.boxplot(scores) pyplot.show() #setting variable n_in=14 n_out=1 n_repeat=5 n_test=10 # define config n_in, n_out, n_nodes, n_epochs, n_batch, pateince, draw loss config = [n_in, n_out, 10, 2000, 50, 200,True] # compute scores scores = repeat_evaluate(data,n_test, config, n_repeat, True) print(scores) # summarize scores summarize_scores('mlp ', scores) Result: AI: Please have a look at your weights after training. 
I assume your neurons die due to the ReLU activation, as they output zero for input < 0.
Unfortunately, the ReLU activation function is not perfect. It suffers from a problem known as the dying ReLUs: during training, some neurons effectively “die,” meaning they stop outputting anything other than 0. In some cases, you may find that half of your network’s neurons are dead, especially if you used a large learning rate. A neuron dies when its weights get tweaked in such a way that the weighted sum of its inputs are negative for all instances in the training set. When this happens, it just keeps outputting zeros, and Gradient Descent does not affect it anymore because the gradient of the ReLU function is zero when its input is negative.
From Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, Aurélien Géron, 2019.
To combat this problem, remove the ReLU activations and use LeakyReLU instead. For your case, the changes are the following:

from tensorflow.keras.layers import LeakyReLU  # for leaky relu

model.add(Dense(8*n_nodes, input_dim=n_in))
model.add(LeakyReLU(alpha=0.05))
model.add(Dense(4*n_nodes))
model.add(LeakyReLU(alpha=0.05))
model.add(Dense(2*n_nodes))
model.add(LeakyReLU(alpha=0.05))
model.add(Dense(n_nodes))
model.add(LeakyReLU(alpha=0.05))
model.add(Dense(n_out))
model.add(LeakyReLU(alpha=0.05))

After changing your code as specified, the problem should be fixed. But in general I'm with @meTchaikovsky: for time series data, recurrent neural networks are better suited for modelling.
H: How to increase sales and revenue of a Client? I was asked this in an interview for a Data Scientist position: Lets say Holland and Barret came to you and said they'd like to increase their sales and revenue. How will you go about it? My answer wasn't hitting the mark or touching the points the interviewer was looking for. How to go about answering this? AI: I think the question was asked to see how would you approach the problem. In similar questions, there is not a single answer, and the interviewer does not expect a certain answer instead expects a reasonable approach by you. It is like the famous interview question "How many golf balls can you fit into a swimming pool?". Such a question is asked to see the analytical thinking of the interviewee. A structure of the answer may be like, Churn analysis: understand the reason why customers "not coming" anymore, by doing so you will be able to reduce it. Do a market analysis to understand the future state of the market to adjust the strategy and products according to the market. Understanding customer segments (This is one of the most important implementations since you can come up with creative ideas after understanding the segments). Then, suggest the relevant products to relevant segments. Understanding the competitors and their products. What are the differences between their strategy and ours, their products and ours? Understanding the market size thus keep the predictions in a realistic range and to make solid steps. Understanding costs and putting forward optimizations to reduce them. etc. These will make them sure that you have good knowledge over-analyzing the company, its strategies, products, etc. The above-mentioned solutions will state them you are not just able to increase the products and revenue, but also decrease the costs to maximize the profit. In this way, you can be a consultant to them.
H: Which learning rate should I choose? I'm training a segmentation model, Unet++, on 2d images and I am now trying to find the optimal learning rate. The backbone of the model is Resnet34, I use Adam optimizer and the loss function is the dice loss function. Also, I use a few callbacksfunctions: callbacks = [ keras.callbacks.EarlyStopping(monitor='val_loss', patience=15, verbose=1, min_delta=epsilon, mode='min'), keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, verbose=1, mode='min', cooldown=0, min_lr=1e-8), keras.callbacks.ModelCheckpoint(model_save_path, save_weights_only=True, save_best_only=True, mode='min'), keras.callbacks.ReduceLROnPlateau(), keras.callbacks.CSVLogger(logger_save_path) ] I plotted the curves of training loss over epochs for a few learning rates: The validation loss and training loss seem to decrease slowly. However, the validation loss isn't oscillating (it is almost always decreasing). The validation and training losses decreased quickly on first 2/3 epochs. After 6 or 7 epochs, the validation loss increases again. I have a few questions (I hope it is not too much): *What is normally the best way to find the learning rate i.e. How many epochs should I wait before considering that the learning rate isn't good? What are the criteria on the loss function to determine if a learning rate is "good"? Is there a big difference if I use a small learning (which still converges) instead of the "optimal" learning rate ? Is it normal that the validation loss function oscillates over the training? Which learning rate should I use according to my results? Even a partial response would help me a lot. AI: I am afraid the that besides learning rate, there are a lot of values for you to make a choice for over a lot of hyperparameters, especially if you’re using ADAM Optimization, etc. A principled order of importance for tuning is as follows Learning rate Momentum term , num of hidden units in each layer, batch size. Number of hidden layers, learning rate decay. To tune a set of hyperparameters, you need to define a range that makes sense for each parameter. Given a number of different values you want to try according to your budget, you could choose a hyperparameter value from a random sampling. Specifically to learning rate investigation though, you may want to try a wide range of values, e.g. from 0.0001 to 1, and so you can avoid sampling random values directly from 0.0001 to 1. You can instead go for $x=[-4,0]$ for $a=10^x$ essentially following a logarithmic scale. As far as number of epochs go, you should set an early stopping callback with patience~=50, depending on you "exploration" budget. This means, you give up training with a certain learning rate value if there is no improvement for a defined number of epochs. Parameter tuning for neural networks is a form of art, one could say. For this reason I suggest you look at basic methodologies for non-manual tuning, such as GridSearch and RandomSearch which are implemented in the sklearn package. Additionally, it may be worth looking at more advanced techniques such as bayesian optimisation with Gaussian processes and Tree Parzen Estimators. Good luck! 
Randomized Search for parameter tuning in Keras
Define a function that creates a model instance

# Model instance
input_shape = X_train.shape[1]

def create_model(n_hidden=1, n_neurons=30, learning_rate=0.01, drop_rate=0.5,
                 act_func='relu', act_func_out='sigmoid', kernel_init='uniform', opt='Adadelta'):
    model = Sequential()
    model.add(Dense(n_neurons, input_shape=(input_shape,), activation=act_func, kernel_initializer=kernel_init))
    model.add(BatchNormalization())
    model.add(Dropout(drop_rate))
    # Add as many hidden layers as specified in n_hidden, each with n_neurons neurons
    for layer in range(n_hidden):
        model.add(Dense(n_neurons, activation=act_func, kernel_initializer=kernel_init))
        model.add(BatchNormalization())
        model.add(Dropout(drop_rate))
    model.add(Dense(1, activation=act_func_out, kernel_initializer=kernel_init))
    opt = Adadelta(lr=learning_rate)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=[f1_m])
    return model

Define the parameter search space

params = dict(n_hidden=randint(4, 32),
              epochs=[50],  # , 20, 30]
              n_neurons=randint(512, 600),
              act_func=['relu'],
              act_func_out=['sigmoid'],
              learning_rate=[0.01, 0.1, 0.3, 0.5],
              opt=['adam', 'Adadelta', 'Adagrad', 'Rmsprop'],
              kernel_init=['uniform', 'normal', 'glorot_uniform'],
              batch_size=[256, 512, 1024, 2048],
              drop_rate=[np.random.uniform(0.1, 0.4)])

Wrap the Keras model with the sklearn API and instantiate the random search

model = KerasClassifier(build_fn=create_model)
random_search = RandomizedSearchCV(model, params, n_iter=5, scoring='average_precision', cv=5)

Search for the optimal hyperparameters

random_search_results = random_search.fit(X_train, y_train,
                                          validation_data=(X_test, y_test),
                                          callbacks=[EarlyStopping(patience=50)])
H: multi regression for energy data I'm trying to develop a multi regression model to predict energy consumption during one day period. X-set dimension is (10178, 52) and consist of 52-feature and Y-set dimension is (10178, 48) as output. I have used the following code: xtrain, xtest, ytrain, ytest=train_test_split(X, Y, test_size=0.1) in_dim = X.shape[1] out_dim = Y.shape[1] model = Sequential() model.add(Dense(48*4, input_dim=in_dim, activation="relu")) model.add(Dense(86, activation="relu")) model.add(Dense(out_dim)) model.summary() model.compile(loss="mean_absolute_error", optimizer="adam") model.fit(xtrain,ytrain, epochs=100, batch_size=12,) after compiling my model although my model's loss is very low but when I visualize my output the result is unsatisfying as follow: any idea what I'm doing wrong?!? my initial guess is that since output dimension is high(48-dimension) compared to input dimension I need a lot more Data. or maybe I'm using wrong loss function or the model is too shallow. also it is noticeable that model's output at spark point is very poor. AI: As you can see, your predictions are able to catch the trends. In other words, the model is able to predict the direction of movement almost every day. The only point that it is not able to catch is those high peaks, which can be treated as outliers. It is because due to seasonality or some other cause your daily data drastically changes on some of the days. This change is not normal for models to capture because those points deviate from the general characteristics of your time series. It is quite normal having low energy consumption for 6 consecutive days but having a large energy consumption on the 7th day if you would consider that 6 days were sunny and suddenly the weather gets cold. This is just one single case where there might be lots of those. To capture these anomalies you should have a variable to explain to them (e.g. Image that those anomalies are only due to weather conditions, then including weather variable would help you. However, it is too hard to find all those variables that explain all anomalies). To solve the issue, you can decrease the frequency of your data. That is, instead of modeling daily data you can model weekly average or weekly end (means last day of every week), or even monthly average or monthly end. If these outliers are consistent every week, averaging them will help you. In any case, I think decreasing the frequency of your data will help you. To summarize, there is not a wrong thing with your model, it is just outliers make it unhandy in some predictions.
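A pandas sketch of the frequency reduction suggested above (the series here is synthetic; it assumes your target has a DatetimeIndex):

import numpy as np
import pandas as pd

# stand-in for your daily target series
idx = pd.date_range('2020-01-01', periods=365, freq='D')
daily = pd.Series(np.random.default_rng(0).normal(100, 10, len(idx)), index=idx, name='consumption')

weekly_mean = daily.resample('W').mean()    # weekly average
weekly_last = daily.resample('W').last()    # value on the last day of each week
monthly_mean = daily.resample('M').mean()   # monthly average
print(weekly_mean.head())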
H: What is the difference between active learning and reinforcement learning? From Wikipedia: Active learning is a special case of machine learning in which a learning algorithm can interactively query a user (or some other information source) to label new data points with the desired outputs. Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment in order to maximize the notion of cumulative reward. How to distinguish them? What are the exact differences? AI: Active learning is a technique that is applied to Supervised Learning settings. In the supervised learning paradigm, you train a system by providing inputs and expected outputs (labels). The system learns to mimic the training data, ideally generalizing it to unseen but extrapolable cases. Active learning is applied normally in cases where obtaining labels is expensive so, we obtain new labels dynamically, defining an algorithmic strategy to maximize the usefulness of the new data points. Reinforcement learning is a different paradigm, where we don't have labels, and therefore cannot use supervised learning. Instead of labels, we have a "reinforcement signal" that tells us "how good" the current outputs of the system being trained are. Therefore, in reinforcement learning the system (ideally) learns a strategy to obtain as good rewards as possible.
H: Math behind 2D convolution for RGB images I read many threads discussing why 2D convolutional layer is typically used for RGB images in neural network. I read that it is possible to use 3D conv layer. What I do not understand is the math behind it. Say your image is 300 by 300, and the kernel_size = (3, 3) and filter = 16 for the Conv2D layer. Input_shape would be (300, 300, 3) because there are 3 channels(RGB). Since the kernel is 2D, the convolution can only be done at 1 channel at a time. Is that correct? Are the same kernel applied/convolved for the 3 channels? If so there should be 3 output but the dimension of the output would be (298, 298, 16). Is it averaged over the 3 channels? AI: If your image is 3D then your kernel should be 3D too. Of course, you can also apply the 2D in which the same filter will be applied to all channels. Image Source (Content is also well). However, normally you apply a 3D filter to a 3D image. So if you apply 16 filters of size 3x3x3 to an image of size 6x6x3, then you will get 16 outputs of size 4x4 (Updated: The third dimension of the input image (i.e. 3 for RGB channel) should be matched with the dimension of filter, which should be also 3). If you would apply 16 filters of size 3x3 filters, you would get 16 outputs of size 4x4x3. It would be treating each channel separately. But when you use a 3D filter, your output of convolution operation depends on all three dimensions. In other words, you multiply your 27 points from your 3x3x3 filter with the corresponding 27 points (3x3 pixels and their 3 channels) from the image, and then add them to get the result. Thus, 1 more dimension would be there for you to handle (16x4x4x3 instead of 16x4x4). The answer to your question 1 is Yes, you would apply the filter 1 channel at a time. Check the link for a very good explanation by Andrew NG.
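The shapes and parameter counts are easy to verify in Keras (a small sketch of the 6x6x3 example above): each of the 16 kernels is really 3x3x3, so it has 3*3*3 + 1 = 28 parameters, 448 in total, while a depthwise convolution is the "one 3x3 filter per channel" variant that keeps the channel axis.

from tensorflow import keras
from tensorflow.keras.layers import Conv2D, DepthwiseConv2D, Input

model = keras.Sequential([
    Input(shape=(6, 6, 3)),
    Conv2D(filters=16, kernel_size=(3, 3)),   # each kernel is 3x3x3, output (None, 4, 4, 16)
])
model.summary()                               # 16 * (3*3*3 + 1) = 448 parameters

depthwise = keras.Sequential([
    Input(shape=(6, 6, 3)),
    DepthwiseConv2D(kernel_size=(3, 3)),      # per-channel 3x3 filters, output (None, 4, 4, 3)
])
depthwise.summary()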
H: Is applying pre-trained model on a different type of corpus called transfer learning? I trained my classification model on corpus A and evaluated it on corpus B. I do it, because for corpus A I have a lot more labeled sentences than for B. Nature of sentences used in A is different than sentences using in B. A has name of products from e-shop, B has names of products as they appear in shopping lists, with all slang, abbreviations, spelling errors and private notes. Am I doing transfer learning? AI: The basic concept of transfer learning is: Storing knowledge gained from solving one problem and applying it to different but related problem I guess to be precise this is called Transductive Transfer Learning. In this we learn from the already observed training dataset and then predict the labels of the testing dataset. Even though we do not know the labels of the testing datasets, we can make use of the patterns and additional information present in this data during the learning process. Refer: Ruder
H: batch_size in neural network When NN is construsted, batch size is not defined and place holder is used and its summary(tensorfow) shows the batch size as None. This is useful because you can change batch size later. In case of a simple model with 10 input features, 1 hidden layers with 10 neurons and output layer, the shape of the hidden layer would be (None, 10), which means if the batch size is 20, the hidden layer would have the shape of (20, 10). When the model is used to predict for a single output, with shape(10, 1), how does the math work? AI: I guess you have a confusion here. The None part represents the number of samples. For example if you have a neural network with architecture 100-50-10, it means that you have (None,100) : input layer shape (100,50) : shape for weights connecting input to hidden layer (None,50): shape for hidden layer given by (None,100)*(100,50) matrix multiplication (None,50): shape after nonlinearity application. (50,10): shape for the shape matrix between hidden and output layer (None,10) : output layer shape (None,50)*(50,10) matrix multiplication So if youre feeding a single input sample the shapes would be: (1,100)[Input] => (1,100)(100,50) = (1,50)[Hidden Layer] => (1,50)*(50,10)=(1,10)[Output Layer]
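The same shape bookkeeping can be reproduced with plain NumPy (a toy sketch of the 100-50-10 example above, with random weights and tanh standing in for the nonlinearity):

import numpy as np

W1, b1 = np.random.randn(100, 50), np.random.randn(50)   # input -> hidden
W2, b2 = np.random.randn(50, 10), np.random.randn(10)    # hidden -> output

def forward(x):
    h = np.tanh(x @ W1 + b1)        # (batch, 100) @ (100, 50) -> (batch, 50)
    return h @ W2 + b2              # (batch, 50) @ (50, 10) -> (batch, 10)

print(forward(np.random.randn(20, 100)).shape)   # batch of 20 -> (20, 10)
print(forward(np.random.randn(1, 100)).shape)    # single sample -> (1, 10)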
H: Getting NN weights for every batch / epoch from Keras model I am trying to get weights for every batch / epoch from Keras model after it is trained. To do so I use callback to make model save weights during training. Yet after model is trained it looks like I get weights only from the final epoch. How to get all weights that model generates? Here is a simple example: import numpy as np import tensorflow as tf from tensorflow import keras from keras import layers # Generate data start, stop = 1,100 cnt = stop - start + 1 xs = np.linspace(start, stop, num = cnt) b,k = 1,2 ys = np.array([k*x + b for x in xs]) # Simple model with one feature and one unit for regression task model = keras.Sequential([ layers.Dense(units=1, input_shape=[1], activation='relu') ]) model.compile(loss='mae', optimizer='adam') batch_size = int(cnt / 5) epochs = 80 Next goes callback to save the Keras model weights at some frequency. According to Keras docs: save_freq: 'epoch' or integer. When using 'epoch', the callback should save the model after each epoch. When using integer, the callback should save the model at end of this many batches. checkpoint_filepath = './checkpoint.hdf5' model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_filepath, save_weights_only=True, save_freq ='epoch', # 1 for every batch save_best_only=False ) # Train model history = model.fit(xs, ys, batch_size=batch_size, epochs=epochs, callbacks=[model_checkpoint_callback]) I use two different ways to get weights. First: w, b = model.weights print("Weights: \n {} \n Bias: \n {}".format(w,b)) Weights: <tf.Variable 'dense/kernel:0' shape=(1, 1) dtype=float32, numpy=array([[-0.1450262]], dtype=float32)> Bias: <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32, numpy=array([0.], dtype=float32)> This results in one weight and one bias, not all weights generated by model at every batch /epoch. And second method to get weights directly from h5 file: # Functions to read weights from h5 file import h5py def getH5Keys(fileName): keys = [] with h5py.File(fileName, mode='r') as f: for key in f: keys.append(key) return keys def isGroup(obj): if isinstance(obj, h5py.Group): return True else: return False def isDataset(obj): if isinstance(obj, h5py.Dataset): return True else: return False def getDataSetsFromGroup(datasets, obj): if isGroup(obj): for key in obj: x = obj[key] getDataSetsFromGroup(datasets, x) else: datasets.append(obj) def getWeightsForLayer(layerName, fileName): weights = [] with h5py.File(fileName, mode='r') as f: for key in f: if layerName in key: obj = f[key] datasets = [] getDataSetsFromGroup(datasets, obj) for dataset in datasets: w = np.array(dataset) weights.append(w) return weights This method returns the same singular values for one weight and one bias: layers = getH5Keys(checkpoint_filepath) firstLayer = layers[0] print(layers) # ['dense'] weights = getWeightsForLayer(firstLayer, checkpoint_filepath) for w in weights: print(w.shape) print(weights) Output: (1,) (1, 1) [array([0.], dtype=float32), array([[-0.1450262]], dtype=float32)] Again I get only one weight and one bias. How to get all weights generated by model for every batch /epoch? Update Answer from 10xAI works for me. 
However, in my case I have one level of network with one unit, so I access weights and bias differently: weights_dict = {} weight_callback = tf.keras.callbacks.LambdaCallback \ ( on_epoch_end=lambda epoch, logs: weights_dict.update({epoch:model.get_weights()})) # Train model history = model.fit(xs, ys, batch_size=batch_size, epochs=epochs, callbacks=[weight_callback]) print(weights_dict[0]) Output: [array([[1.5375139]], dtype=float32), array([0.00499998], dtype=float32)] print("*** Epoch: ", epoch, "\nWeight: ", weights_dict[0][0][0], " bias: ", weights_dict[1][0]) Output: *** Epoch: 79 Weight: [1.5375139] bias: [[1.5424858]] AI: You may use lambda callback and save it in a dictionary. weights_dict = {} weight_callback = tf.keras.callbacks.LambdaCallback \ ( on_epoch_end=lambda epoch, logs: weights_dict.update({epoch:model.get_weights()})) history = model.fit( x_train, y_train, batch_size=16, epochs=5, callbacks=weight_callback ) # retrive weights for epoch,weights in weights_dict.items(): print("Weights for 2nd Layer of epoch #",epoch+1) print(weights[2]) print("Bias for 2nd Layer of epoch #",epoch+1) print(weights[3]) You can create it for batch level too.
H: Ranking problem and imbalanced dataset I know about the problems that imbalanced dataset will cause when we are working on classification problems. And I know the solution for that including undersampling and oversampling. I have to work on a Ranking problem(Ranking hotels and evaluate based on NDCG50 score this link), and the dataset is extremely imbalanced. However, the example I saw on the internet use the dataset as it is and pass it to train_test_split without oversampling/undersampling. I am kind of confused if that is true in the Ranking problems in which the imbalanced data does not matter and we do not need to fix this before passing the data to the model? And if that is the case why? Thanks AI: You are completely right, imbalance of labels does have an impact on ranking problems and people are using techniques to counter it. The example in your notebook applies list-wise gradient boosting. Since pairwise ranking can be made list-wise by injecting the NDCG into the gradient, I will focus on pair-wise rank loss for the argument. I will base myself on this paper (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf). $C = -\bar{P}_{ij}$log$P_{ij} - (1 - \bar{P}_{ij})$log$(1 - P_{ij})$ with, $P_{ij}\equiv P(U_{i}\rhd U_{j})\equiv {1\over{1 + e^{-\sigma(s_{i} - s_{j})}}}$ and $\bar{P}_{ij} = {1\over2}(1 + S_{ij})$ for $S_{ij}$ being either $0$ or $1$. This is actually just a classification problem, with 0 being article i being less relevance than article j and 1 in the opposite case. Imagine that now you are working with queries which have a lot of matching documents but only a couple of documents have been tagged as relevant. Often such sparse tagging does not mean that ONLY these documents were relevant, but only is caused by the limitation of the estimation of relevance (https://www.cs.cornell.edu/people/tj/publications/joachims_etal_05a.pdf). Hence, it is not uncommon to down-sample high rated documents. Another reason for applying imbalance methods such as reweighting labels, is for example bias due to position (see for example https://ciir-publications.cs.umass.edu/getpdf.php?id=1297). The loss is reweighted based on the observed position of the documents when their relevance was tagged.
H: Why use gradient descent on Deep Nets / RNNs when the cost function is not convex? Why do we use gradient descent on very non-convex loss functions such as in deep nets / RNNs rather than a heuristic search (genetic algorithms, simulated annealing, etc.)? AI: Even if your cost function is not convex, any minimization algorithm will do its job - including gradient descent. That said, standard gradient descent is a local optimization procedure and can only ever guarantee a local minimum, not a global one. For quite a few applications, a good local minimum is good enough - for others, a lot of effort is put into using "global" optimization algorithms (such as genetic/evolutionary algorithms, simulated annealing, scatter search, multi-start techniques, direct/multilevel searches, etc.). Those algorithms are definitely more powerful, but they are also significantly slower to converge to a good, potentially global, minimum. And none of them can guarantee global optimality unless you allow them a very large number of function evaluations.
H: Metric to use to choose between different models - Hyperparameters tuning I'm building a Feedforward Neural Network with Pytorch and doing hyperparameters tuning using Ray Tune. I have train, validation and test set, train and validation used during the training procedure. I have different versions of the model (different learning rates and numbers of neurons in the hidden layers) and I have to choose the best model. But I'm unsure on which metric should I use to choose the best model. Basically I don't know the XXXX in this line of code: analysis.get_best_config(metric='XXXX', mode='min') Should it be test loss or validation loss? AI: Essentially, the function of your testset is to evaluate the performance of your model on new data. It mimics the situation of your model being put into production. The validation set is used for optimizing your algorithm. Personally I would recommend tuning your algorithm using your validation set and using the hyperparameters of the training epoch with the lowest validation loss. The accuracy of the model using these hyperparameters on your testset can be used to estimate the performance of your model in the 'real world'.
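As a rough sketch with the older tune.report API that matches the analysis object in the question (the metric name val_loss, the dummy trainable and all parameter values below are my placeholders, not part of the question):

from ray import tune

def train_model(config):
    # placeholder training loop: in a real trainable you would build the network
    # from config["lr"] and config["hidden_units"], train it, and compute the
    # loss on the validation set at the end of each epoch
    for epoch in range(10):
        val_loss = (config["lr"] - 0.01) ** 2 + 1.0 / (epoch + 1)  # dummy value for illustration
        tune.report(val_loss=val_loss)  # this reported name becomes the metric below

analysis = tune.run(train_model,
                    config={"lr": tune.loguniform(1e-4, 1e-1),
                            "hidden_units": tune.choice([32, 64, 128])},
                    num_samples=10)
best_config = analysis.get_best_config(metric="val_loss", mode="min")

The key point is that whatever you report from the validation step is the name you pass as the metric; the test set stays out of the tuning loop entirely.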
H: What is the appropriate statistical significance test for multi-class classification? I have a multi-class classification problem. I am primarily using macro-average F1 measure to evaluate the performance of models and want to verify if the results are statistically significant. I have the results of two classifiers on the same train/test-set (paired observations). Some sources suggest to use McNemar’s test for binary classification task. However, is there any generalization of McNemar’s test for multi-class classification problem? If so, what would be the appropriate procedure to carry out these tests? AI: Generalisation of Mcnemars is called Cochran–Mantel–Haenszel test. There is an implementation in R, but I suppose porting to Python should not be too hard. You can find the R version here.
H: What is the difference between GPT blocks and Transformer Decoder blocks? I know GPT is a Transformer-based Neural Network, composed of several blocks. These blocks are based on the original Transformer's Decoder blocks, but are they exactly the same? In the original Transformer model, Decoder blocks have two attention mechanisms: the first is pure Multi Head Self-Attention, the second is Self-Attention with respect to Encoder's output. In GPT there is no Encoder, therefore I assume its blocks only have one attention mechanism. That's the main difference I found. At the same time, since GPT is used to generate language, its blocks must be masked, so that Self-Attention can only attend previous tokens. (Just like in Transformer Decoders.) Is that it? Is there anything else to add to the difference between GPT (1,2,3,...) and the original Transformer? AI: GPT uses an unmodified Transformer decoder, except that it lacks the encoder attention part. We can see this visually in the diagrams of the Transformer model and the GPT model: For GPT-2, this is clarified by the authors in the paper: There have been several lines of research studying the effects of having the layer normalization before or after the attention. For instance the "sandwich transformer" tries to study different combinations. For GPT-3, there are further modifications on top of GPT-2, also explained in the paper:
H: Why do horizontal lines in plt.plot(feature, '.') mean that the data have been properly shuffled? I am following a MOOC, and in this lecture about visualisation in exploratory data analysis the lecturer claims that, when plotting the row indexes against feature values, if we have horizontal lines on the feature-value axis it means that the data have been properly shuffled. I can't see why. Shouldn't an index have only one value on the feature axis? Shouldn't one horizontal line mean that the feature values for all indexes have been uniformized, not randomized? On the contrary, in the following lecture, the lecturer claims that, from the absence of vertical lines, the data hasn't been properly shuffled. I think I get that: if it had been shuffled, I would have seen clear lines. But how can I be sure there aren't more classes hidden in these subsets? AI: Shouldn't an index have only one value on the feature axis? Yes, that's correct. On the graph given as an example this is not visible because there are too many row indexes (50000). As a consequence it's impossible to distinguish a particular index from its neighbors, but if the X axis were stretched long enough one would see a single feature value for every index. Shouldn't one horizontal line mean that the feature values for all indexes have been uniformized, not randomized? I think there could be two different confusions here: A horizontal line means that a single feature value is distributed uniformly across the indexes, which is equivalent to saying that the indexes are random for this feature value. In other words, the chance that this feature value appears at a particular index is the same as at any other index. This is what the author means: the order (indexes) is random for any feature value. The values for all the features have not been uniformized; this can be seen from the fact that vertically the density of the points is different around the middle (say 0.4-0.6) and the extremes (say 0-0.2 and 0.8-1). Of course this would be more visible with a standard histogram, which would show a kind of peak in the middle but with two high bars at the extremes for 0 and 1 (it can be seen from the continuous lines for these two feature values that they appear much more frequently). One may also note on this graph that there is some kind of underlying discrete distribution of the values: very clearly for values 0 and 1, but also from all the white horizontal lines which show that some values seldom appear in the data.
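To see the effect concretely, here is a small synthetic sketch (nothing from the MOOC's actual data): if rows are ordered by class, plotting feature values against the row index shows visible blocks; after shuffling, every feature value is spread uniformly along the index axis and only horizontal bands remain.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# two "classes" with different feature distributions, stored one after the other (not shuffled)
feature = np.concatenate([rng.normal(0.2, 0.05, 25000), rng.normal(0.8, 0.05, 25000)])

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
axes[0].plot(feature, '.', markersize=1)                    # ordered: a clear jump at index 25000
axes[0].set_title("ordered by class")
axes[1].plot(rng.permutation(feature), '.', markersize=1)   # shuffled: uniform horizontal bands
axes[1].set_title("shuffled")
plt.show()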
H: Encoding Tags for Random Forest I have the following data set: I want to use the attributes Tags and Authors to classify each record into its respective Rating. In order to do so I want to use a random forest classifier. My concern is how to deal with the Tags attribute. Each entry has an undetermined number of tags separated by commas. There are a total of 4412 unique tags, and the entry with the most tags contains 20 tags. The first entry has the tags ["Rhode Island","Economy", "Taxes", "Lincoln Chafee"]. How should I encode this attribute so that I can use the Random Forest Classifier from sklearn? AI: You should use sklearn's MultiLabelBinarizer: from sklearn.preprocessing import MultiLabelBinarizer lb = MultiLabelBinarizer() lb.fit_transform([['A', 'B', 'C'],[ 'A', 'D', 'E', 'B']]) array([[1, 1, 1, 0, 0], [1, 1, 0, 1, 1]]) If required, remove the columns below a threshold value (sum of the column). This will reduce the feature count by removing low-variance features.
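Applied to a tags column like the one in the question (the column names, example rows and the rarity threshold below are assumptions), a sketch could look like this:

import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({
    "Tags": [["Rhode Island", "Economy", "Taxes", "Lincoln Chafee"],
             ["Economy", "Taxes"],
             ["Health Care"]],
    "Rating": ["True", "False", "True"],
})

mlb = MultiLabelBinarizer()
tag_matrix = pd.DataFrame(mlb.fit_transform(df["Tags"]),
                          columns=mlb.classes_, index=df.index)

# optionally drop rare tags, e.g. those appearing fewer than 2 times
tag_matrix = tag_matrix.loc[:, tag_matrix.sum() >= 2]

X = tag_matrix          # concatenate the encoded Authors column here as well
y = df["Rating"]

The resulting 0/1 matrix can be fed directly to RandomForestClassifier, possibly after dropping very rare tags as suggested above.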
H: Gradient descent does not converge in some runs and converges in other runs in the following simple Keras network When training a simple Keras NN (1 input, 1 level with 1 unit for a regression task) during some runs I get big constant loss that does not change in 80 batches. During other runs it decreases. What may be the reason that gradient does not converge in some runs and converges in other runs in the following network: ? import numpy as np import tensorflow as tf from tensorflow import keras from keras import layers # Generate data start, stop = 1,100 cnt = stop - start + 1 xs = np.linspace(start, stop, num = cnt) b,k = 1,2 ys = np.array([k*x + b for x in xs]) # Simple model with one feature and one unit for regression task model = keras.Sequential([ layers.Dense(units=1, input_shape=[1], activation='relu') ]) model.compile(loss='mae', optimizer='adam') batch_size = int(cnt / 5) epochs = 80 Next goes callback to save the Keras model weights at some frequency. According to Keras docs: save_freq: 'epoch' or integer. When using 'epoch', the callback should save the model after each epoch. When using integer, the callback should save the model at end of this many batches. weights_dict = {} weight_callback = tf.keras.callbacks.LambdaCallback \ ( on_epoch_end=lambda epoch, logs: weights_dict.update({epoch:model.get_weights()})) Train model: history = model.fit(xs, ys, batch_size=batch_size, epochs=epochs, callbacks=[weight_callback]) I get: Epoch 1/80 5/5 [==============================] - 0s 770us/step - loss: 102.0000 Epoch 2/80 5/5 [==============================] - 0s 802us/step - loss: 102.0000 Epoch 3/80 5/5 [==============================] - 0s 750us/step - loss: 102.0000 Epoch 4/80 5/5 [==============================] - 0s 789us/step - loss: 102.0000 Epoch 5/80 5/5 [==============================] - 0s 745us/step - loss: 102.0000 Epoch 6/80 ... ... ... Epoch 78/80 5/5 [==============================] - 0s 902us/step - loss: 102.0000 Epoch 79/80 5/5 [==============================] - 0s 755us/step - loss: 102.0000 Epoch 80/80 5/5 [==============================] - 0s 1ms/step - loss: 102.0000 Weights: for epoch, weights in weights_dict.items(): print("*** Epoch: ", epoch, "\nWeights: ", weights) Output: *** Epoch: 0 Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)] *** Epoch: 1 Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)] *** Epoch: 2 Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)] *** Epoch: 3 Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)] ... ... As you can see, weights and biases also do not change, bias = 0. Yet on other runs gradient descent converges, weights and non-zero biases are fitted with much smaller loss. The problem is repeatable. The problem is that it converges in 30% of runs with exactly the same set of parameters that it does not in 70% of runs. Why it does some times and some times does not with the same data and parameters? AI: Their is some random elements when using packages such as TensorFlow, Numpy etc. Some examples includes: How the weights are initialized. How the data is shuffled (if enabled) in each batch. Batches containing different data, will produce different gradients which might influence convergence. This means, that even when you run the same code it is actually not 100% the same and that is why you get different results. If you want the same results, you should fix the random seed as follow: tf.random.set_seed(1234). 
This is usually done right after the imports. The value 1234 can be any integer; for example, if I use a value of 500 I get the same results and good convergence. Some other points to note: If I remember correctly, calculations performed on a GPU might also introduce random factors. It is a good idea to also fix the NumPy seed, the seed of Python's random module, and the seed of any function that takes one, e.g. sklearn.model_selection.train_test_split(..., random_state=...).
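Putting those points together, a minimal sketch of fixing the usual sources of randomness at the top of a script (the seed values are arbitrary):

import os
import random
import numpy as np
import tensorflow as tf

os.environ["PYTHONHASHSEED"] = "0"  # hashing-related randomness
random.seed(1234)                   # Python's built-in random module
np.random.seed(1234)                # NumPy (also used internally by many libraries)
tf.random.set_seed(1234)            # TensorFlow weight initialization and shuffling

# functions with their own seed argument still need it set explicitly, e.g.:
# train_test_split(X, y, test_size=0.2, random_state=1234)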
H: From where does BERT get the tokens it predicts? When BERT is used for masked language modeling, it masks a token and then tries to predict it. What are the candidate tokens BERT can choose from? Does it just predict an integer (like a regression problem) and then use that token? Or does it do a softmax over all possible word tokens? For the latter, isn't there just an enormous amount of possible tokens? I have a hard time imaging BERT treats it like a classification problem where # classes = # all possible word tokens. From where does BERT get the token it predicts? AI: There is a token vocabulary, that is, the set of all possible tokens that can be handled by BERT. You can find the vocabulary used by one of the variants of BERT (BERT-base-uncased) here. You can see that it contains one token per line, with a total of 30522 tokens. The softmax is computed over them. The token granularity in the BERT vocabulary is subwords. This means that each token does not represent a complete word, but just a piece of word. Before feeding text as input to BERT, it is needed to segment it into subwords according to the subword vocabulary mentioned before. Having a subword vocabulary instead of a word-level vocabulary is what makes it possible for BERT (and any other text generation subword model) to only need a "small" vocabulary to be able to represent any string (within the character set seen in the training data).
H: How to choose between different types of feature scaling? The feature set for my multi-class multi-label classification task, using the MLPClassifier from scikit learn, contains mostly features where the values are in the same range of [0,1], but there are 3 out of 45 features where this isn't the case and feature scaling is required. So far I've tried out min-max normalization, mean normalization and z-score normalization on these features. However all scaling methods result in slightly different train and test performance and z-score standardization results in the fastest convergence but worst scores overall. To measure performance, Precision, Recall, F1 and MCC were used. What is a decent strategy when choosing a type of feature scaling? AI: You've listed a few different error metrics, I would pick one metric that is best suited to your problem. Trying to maximize several metrics at once makes it difficult to tell if your model is getting better. In any case, if normalization leads to the best score - then that is your answer. Since all other variables were already in range [0,1], then that's probably what I would have started with just to keep it consistent. I am curious how much worse the z-score standardization is performing compared to normalization. From what you've written about the data - I'd be surprised if standardization vs. normalization produced significantly different results when only 3 out of 45 features are impacted. This blog explains the topic better than I can About Feature Scaling. This article has several examples of training a MLP with scaled and unscaled variables How to use Data Scaling Improve Deep Learning Model Stability and Performance. I'd also consider some simpler algorithms. It might be good to also train a logistic regression or similar as a comparison, fewer parameters will be easier to configure and less prone to error. A neural net will give you a boost if you have a lot of training data or a very non-linear problem.
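If it helps, one way to scale only the three out-of-range features while leaving the other 42 untouched is sklearn's ColumnTransformer; a minimal sketch (the column indices and all parameter values are placeholders):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier

unscaled_cols = [3, 17, 29]  # hypothetical indices of the 3 features not already in [0, 1]

preprocess = ColumnTransformer(
    transformers=[("scale", MinMaxScaler(), unscaled_cols)],  # swap in StandardScaler() to compare
    remainder="passthrough",                                  # the other features pass through unchanged
)
clf = Pipeline([("prep", preprocess), ("mlp", MLPClassifier(max_iter=500))])
# clf.fit(X_train, y_train)

Keeping the scaler inside a pipeline also guarantees it is fit on the training folds only, so the comparison between scaling methods stays fair.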
H: One-hot vector for fixed vocabulary given a vocabulary with $|V|=4$ and V = {I, want, this, cat} for example. How does the bag-of-words representation with this vocabulary and one-hot encoding look like regarding example sentences: You are the dog here I am fifty Cat cat cat I suppose it would look like this $V_1 = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix}$ $V_2 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix}$ $V_3=\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ \end{pmatrix}$ But what exactly is the point of this representation? Does is show the weakness of one-hot encoding with a fixed vocabulary or did I miss something? AI: library(quanteda) mytext <- c(oldtext = "I want this cat") dtm_old <- dfm(mytext) dtm_old newtext <- c(newtext = "You are the dog here") dtm_new <- dfm(newtext) dtm_new dtm_matched <- dfm_match(dtm_new, featnames(dtm_old)) dtm_matched $V_1$ Document-feature matrix of: 1 document, 4 features (100.0% sparse). features docs i want this cat newtext 0 0 0 0 $V_2$ Document-feature matrix of: 1 document, 4 features (75.0% sparse). features docs i want this cat newtext 1 0 0 0 $V_3$ Document-feature matrix of: 1 document, 4 features (75.0% sparse). features docs i want this cat newtext 0 0 0 3 Of course when using a "one hot" vectorizer, "cat" in $V_3$ would be 1 (instead of the count).
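For readers working in Python (sklearn is my choice here, not part of the original answer), the same fixed-vocabulary encoding looks like this; binary=True gives the one-hot style vectors, and without it you get the counts:

from sklearn.feature_extraction.text import CountVectorizer

vocab = ["i", "want", "this", "cat"]
sentences = ["You are the dog here", "I am fifty", "Cat cat cat"]

# the custom token_pattern keeps single-character tokens such as "i"
onehot = CountVectorizer(vocabulary=vocab, binary=True, token_pattern=r"(?u)\b\w+\b")
print(onehot.fit_transform(sentences).toarray())
# [[0 0 0 0]
#  [1 0 0 0]
#  [0 0 0 1]]

This makes the weakness explicit: any word outside the fixed vocabulary simply disappears, which is exactly what the first sentence illustrates.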
H: What's the difference between sequence preprocessing and text preprocessing in Keras? In Keras, we mainly have three types of preprocessing, i.e., sequence preprocessing, text preprocessing, and image preprocessing. However, for me, I think the meanings of the word "sequence" and "text" are the same. How to understand the differences between these two preprocessing operations? AI: In tf.keras.preprocessing.text (docs) you have utilities to process discrete token sequences, normally used to represent text. In tf.keras.preprocessing.sequence (docs) you have utilities to process both continuous value sequences (normally used to represent time series) like TimeSeriesGenerator, and discrete token sequences (i.e. text), like the skipgrams function.
H: Generate new features from two columns I have database with three columns, y,x1 and x2: >>>y x1 x2 0 0.25 -19.3 -25.1 1 0.24 -18.2 -26.7 2 0.81 -45.2 -31.4 ... I want to create more features based on the x columns. until now I have just created random functions and tries to check their correlation with the y, but my question is if there is any propeer way/ common functions in order to create thise new features. I have used PolynomialFeatures of scikit learn but as I understood is not common to do more than 3. from sklearn.preprocessing import PolynomialFeatures #split x y.... poly = PolynomialFeatures(3) poly=pd.DataFrame(poly.fit_transform(X)) My end goal is to use those new columns in random forest algorithm (I have more columns than x1 and x2 but those two that are interesting for me and would like to investigate them and their relationshop more). AI: You are going to have to do something - You can try combining them in different ways, multiply them together, divide them by each other, subtract one from another. Without the context around what these features actually relate to its difficult to say what would make sense. Ultimately to make a new derived feature you are going to have to combine them or transform them in some way.
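As a concrete sketch (using the three rows from the question; which combinations make sense depends on what x1 and x2 actually measure):

import numpy as np
import pandas as pd

df = pd.DataFrame({"y": [0.25, 0.24, 0.81],
                   "x1": [-19.3, -18.2, -45.2],
                   "x2": [-25.1, -26.7, -31.4]})

df["x1_plus_x2"] = df["x1"] + df["x2"]
df["x1_minus_x2"] = df["x1"] - df["x2"]
df["x1_times_x2"] = df["x1"] * df["x2"]
df["x1_over_x2"] = df["x1"] / df["x2"].replace(0, np.nan)  # guard against division by zero
df["x1_sq"] = df["x1"] ** 2

# quick screening: correlation of each candidate feature with the target
print(df.corr()["y"].sort_values(ascending=False))

Note that a random forest can capture some interactions on its own, so it is worth checking whether a derived feature actually improves validation performance before keeping it.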
H: How would I approach training a model and encoding this categorical data So I have the following data: I have one series where each word has a value that describes the average review score that would get. For example, if the word "excellent" showed up in reviews with a score of 2,3,5,4 it would gain a value of 3.5. I also have a list of the words contained in a review, and the review scores of each of those written reviews. For example, Unique_words ["good","clean","hotel","enjoyed","stay","here"] score 4 (These are ofc simplified examples, my actual data is a lot longer) I also have the original reviews, from which the unique_words are taken from. The question is, how would I use this data in order to train a machine-learning algorithm to predict what score a review would get, given the unique words contained inside it. AI: You need to encode categorical variables as dummies. This means to create new features for each type of category and then assigned either a 1 (where a record has that category) or 0 (to each record that doesn't have that category). With some examples, this should look something like this good clean hotel enjoyed stay here I am good 1 0 0 0 0 0 My face is clean 0 1 0 0 0 0 This is a clean hotel 0 1 1 0 0 0 I enjoyed a good meal 1 0 0 1 0 0 Don't stay here 0 0 0 0 1 0 In python you can use the pandas function pd.get_dummies()
H: How pre-trained BERT model generates word embeddings for out of vocabulary words? Currently, I am reading BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. I want to understand how pre-trained BERT generates word embeddings for out of vocabulary words? Models like ELMo process inputs at character-level and can generate word embeddings for out of vocabulary words. Can BERT do something similar? AI: BERT does not provide word-level representations, but subword representations. You may want to combine the vectors of all subwords of the same word (e.g. by averaging them), but that is up to you, BERT only gives you the subword vectors. Subwords are used for representing both the input text and the output tokens. When an unseen word is presented to BERT, it will be sliced into multiple subwords, even reaching character subwords if needed. That is how it deals with unseen words. ELMo is very different: it ingests characters and generate word-level representations. The fact that it ingests the characters of each word instead of a single token for representing the whole word is what grants ELMo the ability to handle unseen words.
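As a small illustration using the HuggingFace transformers package (my choice of implementation, not mentioned in the question), an out-of-vocabulary word gets split into known subwords:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.vocab_size)  # 30522 subword tokens
print(tokenizer.tokenize("electroencephalography"))
# something like ['electro', '##ence', '##pha', '##log', '##raphy'];
# the exact pieces depend on the vocabulary

Each "##" piece continues the previous one; averaging or pooling the vectors of these subwords is one common way to obtain a single word-level vector from BERT.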
H: Dense Layer vs convolutional layer - when to use them and how I know what's the difference between the two, however I am a little confused on to use them. I have also seen some models that have a mix of both. what's the logic behind it? or is it only random things? AI: As known, the main difference between the Convolutional layer and the Dense layer is that Convolutional Layer uses fewer parameters by forcing input values to share the parameters. The Dense Layer uses a linear operation meaning every output is formed by the function based on every input. In other words, we "force" every input to the function and let the NN learn its relation to the output. As a result, there appear n*m connections (or weights) where n denotes the number of inputs and m denotes the number of outputs. On the other hand, the Convolutional layer uses a filter to operate the convolution operation which has a small size most of the time. An output of the convolution layers is formed by just a small size of inputs which depends on the filter's size and the weights are shared for all the pixels. That is, the output is constructed by using the same coefficients for all pixels by using the neighboring pixels as an input. In the convolutional layer, as known, the filter's center is located in the pixel and then the linear operation is processed using the neighboring pixels of that pixel. That is, we, in advance, know that there is a strong relationship between the neighboring pixels. If we would not be sure that the neighboring pixels have a strong relationship with neighboring pixels it would not be logical to use Convolutional Layer, instead we would force all the pixels (inputs) to the function to learn the relationship by using Dense Layer. So, by using the Convolutional Layer we assume that the neighboring pixels are the main representative of the center pixel and as we go far away from the pixel, that is, the pixels that are away from the center pixel do not really possess the same characteristics as the center pixel. They might even be a different object thus may lead to a spurious result or would cause your function to learn redundant information which in reality has no relationship with. In short, since we have prior knowledge about our data and information in it, we not only take the heavy load from our model by using a convolutional layer but also show it the exact location of the data that might be useful for it to learn while keeping it away from redundant data. However, frequently we use both in the same model, it is often simply due to the fact that we do not know what is going on in layer 10 (for example). Namely, we do not have that prior information about data anymore, because we do not know what it learns in those deep layers. Thus, we use Dense Layer, by giving all the inputs, we give the "full responsibility" to learn. In other words, we say to a Dense Layer that "Here are my features (pixels maybe), I don't know the true relationship between them, please find it yourself". We say to a Convolutional Layer that "Here are my features (pixels maybe), I am sure that it is enough just to look the few pixels around the center pixel" and that relationship is preserved through all the pixels. So knowing, how it works, in different contexts, you can apply them to different scenarios. If you have a good knowledge of your data and its structure then you can choose the relevant layer while designing your model.
H: What is the use of applying img_to_array() after cv2.imread() In a book, I saw the following code to load images from a directory: 1.image = cv2.imread(imagePath) 2.image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) 3.image = cv2.resize(image, (28,28)) 4.image = img_to_array(image) 5.data.append(image) cv2.imread() It converts image to a numpys array in BGR format img_to_array() It converts PIL instance to numpy array, but in this case image will stay in BGR format as the image is loaded by cv2. What is the use of img_to_array() (from keras.preprocessing.image module) after cv2.imread(), as it is already in numpy array. Also: To double check, I displayed images before and after applying img_to_array() using cv2.imshow(). BEFORE applying img_to_array() AFTER applying img_to_array()### Also Also: But, if I try to save the images using cv2.imwrite into a file, they both are saved as normal (ie. like the first image). Why is this happening? AI: This is because of the data type you provide to the imshow() function. Check the documentation: The function may scale the image, depending on its depth: If the image is 8-bit unsigned, it is displayed as is. If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255]. If the image is 32-bit or 64-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255]. So when you use cv2.imread() it reads data uint8, that is 8-bit unsigned format thus imshow() displays the original data. But when you use img_to_array() it converts data to the float. Thus imshow() multiplies it with 256. It expects the float values in normalized form, that is all the pixels are divided by 256. That is the reason for such a display. This is how cv2.imread() data looks like... and its data type... This is how img_to_array() data looks like... and its data type... If you divide your image by 1.0 to convert it to the float... import cv2 # from keras.preprocessing.image import img_to_array image = cv2.imread("car.jpg") image = image/1.0 cv2.imshow("Divided by 1.0", image) cv2.waitKey(0) ...this is what you will get: Same as img_to_array. But it you divide image with 256.0 as follows: import cv2 # from keras.preprocessing.image import img_to_array image = cv2.imread("car.jpg") image = image/256.0 cv2.imshow("Divided by 256.0", image) cv2.waitKey(0) You get the original image since imshow() multiplies the float with 256. So what you need is to divide your img_to_array() output by 256 or convert it to the uint8.
H: How do I transform a file to .txt file using pandas? I have to submit a machine learning project, and it has to be node in a .txt file. I know that if I am using pandas, and I want to transform a file from another format to .csv format I can use .to_csv(). Is there something similar for transforming the file into .txt() file using pandas? AI: pandas.to_csv() as you might know is part of pandas owned IO-API (InputOutput API). Currently panas is providing 18 different formats in this context. And of course pandas is also able to write txt-files. With other words pandas is a multi data converter: Pandas IO tools To do so with pandas use to_csv with the sep='\t' attribute: df.to_csv('data.txt', sep='\t') The next option you have to do it with numpy in that way, which is a bit laborious: Call pd.DataFrame.to_numpy() to convert a pd.DataFrame to a NumPy array Use np.savetxt(fname, X, fmt = "%s") with the string "%d" as fmt and the NumPy array as X to write the contents of X into the file fname.
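For completeness, the NumPy route described above could look like this (the file name and sample frame are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
np.savetxt("data.txt", df.to_numpy(), fmt="%s", delimiter="\t",
           header="\t".join(df.columns), comments="")  # comments="" avoids a leading '#'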
H: ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 25, 25, 1] I am trying to use conv1D but getting that error. My dataset's is batched and has a shape of [None, 25, 25, 1] I am using input_shape=(25,25) I am not able to figure out what should I change so I can get it to work. My model: model = Sequential() model.add(Conv1D(32, kernel_size=3, activation='relu', input_shape=(25,25)) model.add(Flatten()) model.add(Dense(1, activation='sigmoid')) AI: I have solved the problem by changing the shape of my dataset using: tf.reshape(data, [25, 25])
H: What is the difference between AI, ML, NN and DL? What is the difference between the following four categories: Artificial Intelligence (AI) Machine Learning (ML) Neural Network (NN) Deep Learning (DL) Data Science My current understanding is that each of these encapsulates some set of algorithms. I feel like, amongst ML, NN, and DL, Machine Learning (ML) is the largest set, which contains NN, which in turn contains DL. See figure below. Data Science is the largest set amongst all 5. Is this correct? Where would AI be on this figure? AI: Machine learning is the field that researches methods that fit (= optimize) model structures to data and output a final model that is based on the combination of model structure and optimized model parameters. Statistics and ML are intersecting fields such that some methods belong to both fields, however there exist some methods that only belong to ML and some that only belong to statistics. Neural networks are a set of such machine learning methods and a subset of those methods are deep learning neural networks (DLNN). Deep learning methods are a set of machine learning methods that use multiple layers of modelling units. Most research in deep learning is in the area of DLNN, but it is not exclusive, as for example probabilistic circuits are usually not considered to be NN (albeit they look similar on first glance) and they can be deeply layered. Approaches that have hierarchical nature are usually not considered to be "deep", which leads to the question what is meant by "deep" in the first place. An example might be hierarchical clustering methods, of which exist many very different ones - since (probably) every clustering method can be easily made hierarchical. Artificial intelligence (AI) is difficult to define if one does not want to use it as a buzzword, like "the pursuit of human intelligence". One issue is to define what intelligence is. As far as I can see if one defines AI to be broader than ML, then one ends up with a definition that is basically identical to "computer science", otherwise one ends up with my definition (see above) for ML. One must not make the mistake to just name a regular algorithm - as the minimax algorithm or a rule-based system as deep blue - and call it AI. Otherwise any algorithm would be AI, which would be ridiculous. Possibly https://blog.piekniewski.info/2020/06/08/ai-the-no-bullshit-approach/ will give an answer in how expert system are AI and not ML and not just a rule-based system, but I did not get around to read it, yet. Data science is used as a buzzword in industry and - except for the term "applied data science" - avoided in research. The main issue is that much of data "science" in industry is no science at all. However, I believe that there is a chance to consider "data science" not just as buzzword, but an actual science. In fact, I am working on a methodology for a while now. I have the chance to teach "data science as a science" at university right now. However, I noticed that my definition is not yet of such quality as I would want to. It is getting better though. To give you an idea, I wrote as a comment to another post: Without going into detail regarding my methodology/definition, the result of my definition of data science is that you could say it is "mathematical epistemology" - not the actual appliance, but the methodology. Data analytics would then be appliance of the data science principles. You could say that data science is to data analytics, as physic is to mechanical engineering. 
With that, a company would hire a data scientist / physicist to work on an engineering project. From https://academia.stackexchange.com/questions/158586/scientific-data-science-conferences-and-journals#comment423975_158587 In this line of argument, "communication skills" are not a part of data science, in the same way as they are not a part of medicine, even though a physician should be a good communicator in order to be effective. So, ML is a superset of NN and DL, which intersect. AI is (based on my research on the topic so far) the same as ML. Data Science uses methods from ML, but it also uses other methods, e.g. from non-ML statistics.
H: Pandas: Compare two Dataframe and Groupby categorical My Question is about pandas DataFrame, I have two DataFrame both follow the same structure. df_1: Index Text Category 0 Text 01 1 1 Text 02 1 2 Text 03 1 3 Text 04 1 df_2: Index Text Category 0 Text 05 2 1 Text 02 2 2 Text 09 2 3 Text 04 2 I want to marge both for this purpose I use pd.concat(df_1, df_2) but it simply merge both files, but I want to merge in that manner. df_merge: Index Text Category 0 Text 01 1 1 Text 02 1,2 2 Text 03 1 3 Text 04 1,2 4 Text 05 2 5 Text 09 2 But I really don't know how to do that. AI: Try: Minimum reproducible example: A = pd.DataFrame({"Text":["01", "02","03","04"], "Category":[1,1,1,1]}) B = pd.DataFrame({"Text":["05", "02","09","04"], "Category":[2,2,2,2]}) C = pd.concat([A,B], ignore_index= True) C.groupby("Text").Category.unique().to_frame() Result:
H: Can anyone recommend a good pre-trained model for face or head detection? I really need to know the best pre-trained models to detect faces and/or people's heads. Not a face recognition model, but only to classify whether an object is a person's head/face or not. I'm implementing one on top of ResNet50 from tensorflow.keras.applications.resnet50, but I'm not sure if it's a good approach. Other models like CenterNet and EfficientNet in TensorFlow are pre-trained to detect several objects, but I'm not sure if I can also use them for this purpose. I need something better than those provided by OpenCV (e.g. cv2.HOGDescriptor, cv2.CascadeClassifier, etc.). AI: You could use a pretrained FaceNet model to detect faces in an image, but I think it will not detect heads, so I suggest training an object detection model to detect both faces and heads.
H: How to find appropliate algorithm for natural language based two data What I would like to do I would like to create a model to infer nationality from name and created the below data frame combining two dataset from Kaggle. AI: There's no algorithm intended specifically for this task, you need to design the process yourself (like for most tasks btw). Given that the goal would be to use a person's name as an indication, I'd suggest you represent a name as a vector of characters n-grams in the features. Example with bigrams ($n=2$): "Braund" = [ #B, Br, ra, au, un, nd, d# ] Intuitively the goal is for the model to find the sequences of letters which are more specific to a nationality. You could try with unigrams, bigrams or trigrams (the higher $n$, the more data you need for training). Once the names are represented as features this way, you can train any type of supervised model, for example Decision Tree or Naive Bayes.
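A minimal sketch of this idea with sklearn (the names, labels and parameters below are made up for illustration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

names = ["Braund", "Heikkinen", "Nakamura", "Dubois", "Virtanen", "Tanaka"]
nationalities = ["English", "Finnish", "Japanese", "French", "Finnish", "Japanese"]

clf = Pipeline([
    # char_wb builds character n-grams inside word boundaries, which adds the #-style padding
    ("ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(2, 3), lowercase=True)),
    ("model", MultinomialNB()),
])
clf.fit(names, nationalities)
print(clf.predict(["Lehtonen"]))  # with real training data this should lean towards Finnish

Any other classifier (logistic regression, gradient boosting, ...) can be swapped in for the Naive Bayes step; the character n-gram representation is the important part.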
H: Which term is correct Datafication or Datification? I have recently started reading Introduction to Data Science: A python approach to Concepts, Techniques, and Applications and taking notes on Data Science. Chapter 1 repeatedly uses the term Datification(the process of rendering into data aspects of the world that have never been quantified before). I could not find the word in the dictionary and the web shows some results but with different spellings Datafication(a technological trend turning many aspects of our life into data). I am wondering which one is correct? AI: It's a neologism, there's no standard spelling or even meaning. For the record there are related terms which are more standard: "data representation" (usually low-level) or "knowledge representation".
H: How to merge all the data to have a final dataset I am working on a problem that has different tables; the end goal is to predict whether a customer will end up subscribing, based on purchases. Mother table containing user_id, register_reason: |user_id|reason_reg|source| |-------|----------|------| | 1 | 2 | 3 | | 2 | 3 | 1 | I then have the purchase data, where a customer can have more than one entry: |user_id|product_id| |-------|----------| | 1 | A | | 1 | B | | 1 | C | | 1 | D | | 2 | A | | 2 | E | Ideally I want to have one dataset where the unique identifier is the user_id and there are no duplicated rows based on this value. The final dataset (in my head) could look like: |user_id|reason_reg|source|product_id_A|product_id_B|product_id_C|product_id_D|product_id_E| |------------------------------------------------------------------------------------------| | 1 | 2 | 3 | 1 | 1 | 1 | 1 | 0 | | 2 | 3 | 1 | 1 | 0 | 0 | 0 | 1 | My questions are: Is the approach correct? Is there a dataframe method or library that does that automatically, or do I do it myself with pandas before feeding it to an algorithm? In your opinion, is there a better way to approach the problem? (I could also add additional columns like total_products with the sum of how many products the user bought.) AI: A slightly hacky way to get there, but you can do this to get what you want from the second table: df2['count'] = 1 pivot = df2.pivot_table(index='user_id', columns='product_id', values='count').reset_index() pivot = pivot.fillna(0) You would then want to merge this with the first dataset like this: finaldf = pd.merge(df1, pivot, left_on='user_id', right_on='user_id') Another great thing to use for generating dummies for categorical variables is pd.get_dummies(). The approach seems OK to me, and making some more features would also not be a bad idea.
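As an alternative sketch (not part of the original answer), pd.crosstab builds the same wide table in one call and can also prefix the columns as in the desired output:

import pandas as pd

purchases = pd.DataFrame({"user_id": [1, 1, 1, 1, 2, 2],
                          "product_id": ["A", "B", "C", "D", "A", "E"]})
mother = pd.DataFrame({"user_id": [1, 2], "reason_reg": [2, 3], "source": [3, 1]})

wide = (pd.crosstab(purchases["user_id"], purchases["product_id"])
          .add_prefix("product_id_")
          .reset_index())
final = mother.merge(wide, on="user_id", how="left").fillna(0)

# one of the extra features mentioned in the question
final["total_products"] = final.filter(like="product_id_").sum(axis=1)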
H: Why I would use TF-IDF after Bag-of-Words (CountVectorizer)? In my recent studies over Machine Learning NLP tasks I found this very nice tutorial teaching how to build your first text classifier: https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a The point is that I always believed that you have to choose between using Bag-of-Words or WordEmbeddings or TF-IDF, but in this tutorial the author uses Bag-of-Words (CountVectorizer) and then uses TF-IDF over the features generated by Bag-of-Words. text_clf = Pipeline([('vect', CountVectorizer()), ... ('tfidf', TfidfTransformer()), ... ('clf', MultinomialNB()), ... ]) Is that a valid technique? Why would I do it? AI: This is the standard TF-IDF feature extraction: you transform the document counts. It just looks odd to separate the two steps like this. sklearn provides both TfidfTransformer and TfidfVectorizer; note the documentation of the latter: Equivalent to CountVectorizer followed by TfidfTransformer.
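For illustration, the two pipelines below should therefore behave the same way with default parameters; which one you use is mostly a matter of taste:

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

two_step = Pipeline([("vect", CountVectorizer()),
                     ("tfidf", TfidfTransformer()),
                     ("clf", MultinomialNB())])

one_step = Pipeline([("tfidf", TfidfVectorizer()),  # CountVectorizer + TfidfTransformer in one object
                     ("clf", MultinomialNB())])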
H: Difference between Word Embedding and Text Embedding I am working on a dataset of amazon alexa reviews and wish to cluster them in positive and negative clusters. I am using Word2Vec for vectorization so wanted to know the difference between Text Embedding and Word Embedding. Also, which one of them will be useful for my clustering of reviews (Please consider that I want to predict the cluster of any reviews that I enter.) Thanks in advance! AI: A Text embedding is a vector representation of a text. A trivial way to construct a text embedding is to average the word embeddings of each word in the text. However using this method, you will lose contextual information.
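For example, a rough sketch of building text embeddings from Word2Vec with gensim (the toy reviews and all parameters are placeholders):

import numpy as np
from gensim.models import Word2Vec

reviews = [["alexa", "works", "great"], ["terrible", "sound", "quality"]]  # pre-tokenized reviews
w2v = Word2Vec(sentences=reviews, vector_size=50, min_count=1)

def review_vector(tokens, model):
    # average the vectors of the tokens the model knows; zero vector if none are known
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.vstack([review_vector(r, w2v) for r in reviews])  # one row per review, ready for clustering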
H: Why use regularization? In a linear model, regularization decreases the slope. Do we just assume that fitting a linear model on training data overfits by almost always creating a slope which is higher than it would be with infinite observations? What is the intuition? AI: Regularization is used to help smooth multi-dimensional models. Take this example: $y = x_1 + \epsilon \cdot (x_2 + \dots + x_{100})$ Let's say $\epsilon$ is very small. It doesn't seem very useful to store those 99 coefficients, does it? How do we manage to fit a model in such a way that we drop negligible coefficients? This is exactly what L1 regularization does! Each other type of regularization has its own geometric intuition.
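As a small illustration of that L1 effect (synthetic data of exactly that shape; the alpha value is arbitrary):

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
eps = 0.01
y = X[:, 0] + eps * X[:, 1:].sum(axis=1) + rng.normal(scale=0.1, size=200)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.05).fit(X, y)
print((np.abs(ols.coef_) > 1e-6).sum())    # typically all 100 coefficients are non-zero
print((np.abs(lasso.coef_) > 1e-6).sum())  # typically only a handful survive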
H: List value in Pandas DataFrame column makes analysis harder Should I move to database? I have a list of courses data in JSON format that looks like this: courses = [ { course_id: "c_01", teachers: ["t_01", "t_02"] }, { course_id: "c_02", teachers: ["t_02", "t_03"] } ] And a list of teachers that look like this: teachers = [ { teacher_id: "t_01", teacher_fullname: "teacher_01" }, { teacher_id: "t_02", teacher_fullname: "teacher_02" }, { teacher_id: "t_03", teacher_fullname: "teacher_03" } ] I also have other data like courseworks and submissions that I want to cross with these to create summary analytics like: ¿Which are the courses of a teacher? ¿Who is the teacher with most courses? ¿What is the average coursework quantity per course per teacher? etc... I loaded each list (courses, teachers, courseworks, submissions) into DataFrames, but I'm having a hard time selecting the courses of a teacher using vectorized methods. Programming attempts Tried using DataFrame.query but failed to use array methods inside the query Tried using Series.isin but as it can't hash the list inside the Series teachers is useless. Data manipulations attempts Then I tried to flatten the teachers Series as follows: courses = [ { course_id: "c_01", teacher_id: "t_01" }, { course_id: "c_01", teacher_id: "t_02" }, , { course_id: "c_02", teacher_id: "t_02" } { course_id: "c_02", teacher_id: "t_03" } ] Which works because it allows merges and all kinds of queries, but because of combinatorial explosions I ended up with thousands extra rows I no certainty that the aggregated numbers are correct. My last approach was to add one column per teacher in each course like this: courses = [ { course_id: "c_01", teacher_01: 1, teacher_02: 1, teacher_03: 0, }, { course_id: "c_02", teacher_01: 0, teacher_02: 1, teacher_03: 1, } ] It actually works perfectly and the aggregations become very easy to do. One concern is that I may end up with thousands of columns (I'm not sure if that's a problem) and the other is that each time a run an analysis on a batch of data I'll end up with different columns so it will be harder to make cross analysis between different datasets. Anyway, my final thoughts are that I should store everything in a database and query the information that I need already "joined" to perform an easier and cleanier analysis process. so...Should I move to database? Am I missing something? AI: Pandas unfortunately does not allow for the type of conditional join you wish to do, without copying a lot of unnecessary data before processing. Your best solution is pandas to explode the teaches columns like you did and broadcast join the teachers. If you are dealing with memory issues, and still want to deal with pandas then you will ave to multiprocess the join by splitting the courses dataframe.
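For reference, a compact sketch of that explode-then-join step (key names follow the question):

import pandas as pd

courses = pd.DataFrame({"course_id": ["c_01", "c_02"],
                        "teachers": [["t_01", "t_02"], ["t_02", "t_03"]]})
teachers = pd.DataFrame({"teacher_id": ["t_01", "t_02", "t_03"],
                         "teacher_fullname": ["teacher_01", "teacher_02", "teacher_03"]})

flat = courses.explode("teachers").rename(columns={"teachers": "teacher_id"})
joined = flat.merge(teachers, on="teacher_id", how="left")

print(joined.groupby("teacher_id")["course_id"].nunique())           # courses per teacher
print(joined.groupby("teacher_id")["course_id"].nunique().idxmax())  # teacher with most courses

The row count grows, as you observed, but aggregations stay correct as long as you count distinct course_ids per teacher rather than raw rows.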
H: Can we identify that an academic dataset was used for commercial purpose There are many datasets that are released on the internet. Authors of many of these datasets state that the datasets are strictly for academic usage and not for commercial purposes. Although some datasets are released for both academic and commercial use, many of them are restricted from commercial use. If someone uses many of these academic datasets to train a machine learning or deep learning model and then offers this trained model as a REST-API-based Cloud service to earn a profit, then what would be the way to find out that he or she used academic datasets to train the model? Many people might be already using academic datasets to earn a profit? Similarly, If I collected data from many of my friends and family and published it for the academic research community and did not allow license for commercial purposes, then someone might use this data to build products and sell it commercially? How can we find that my dataset was used unethically? My friends and family might not give their data as they won't like someone to earn a profit from their data? Are there some of the recent techniques in interpretable machine learning helpful to detect such issues? AI: If the model is a discriminative model (e.g. a classification model), it is highly unlikely that you can identify whether it was trained with some specific dataset. If the model is generative, (e.g. a language model or a machine translation system), you may be able to try to identify if the model was trained with your data by trying to extract from it information only available in your data. This article for instance studies the feasibility of doing precisely that. However, a different issue would be to prove that the model was indeed trained on your data. Yet a further issue would be if such a fact is legal or not for a specific country/legal system; see these questions on the matter to better understand the problems posed by this kind of situations (I am not a lawyer, you should seek professional legal advice for this kind of matter): [reddit] Is it legal to use copyright material as training data? [reddit] Are there any legal issues with training machine learning models on copyrighted content? [reddit] Copyright laws and machine learning algorithms [law stackexchange] Restrictions on machine learning models trained on materials licensed with creative commons
H: How to compute score and predict for outcome after N days Let's say I have a medical dataset/EHR dataset that is retrospective and longitudinal in nature. Meaning one person has multiple measurements across multiple time points (in the past). I did post here but couldn't get any response. So, posting it here This dataset contains information about patients' diagnosis, mortality flag, labs, admissions, and drugs consumed, etc. Now, if I would like to find out predictors that can influence mortality, I can use logistic regression (whether the patient will die or not). But my objective is to find out what are the predictors that can help me predict whether a person will die in the next 30 days or the next 240 days, how can I do this using ML/Data Analysis techniques? In addition, I would also like to compute a score that can indicate the likelihood that this person will die in the next 30 days? How can I compute the scores? Any tutorials links on how is this score derived?, please? Can you please let me know what are the different analytic techniques that I can use to address this problem and different approaches to calculate score? I would like to read and try solving problems like this AI: This could be seen as a "simple" binary classification problem. I mean the type of problem is "simple", the task itself certainly isn't... And I'm not even going to mention the serious ethical issues about its potential applications! First, obviously you need to have an entry in your data for a patient's death. It's not totally clear to me if you have this information? It's important that whenever a patient has died this is reported in the data, otherwise you cannot distinguish the two classes. So the design could be like this: An instance represents a single patient history at time $t$, and it is labelled as either alive or dead at $t+N$ days. This requires refactoring the data. Assuming data spans a period from 0 to $T$, you can take multiple points in time $t$ with $t<T-N$ (for instance every month from 0 to $T-N$). Note that in theory I think that different times $t$ for the same patient can be used in the data, as long as all the instances consistently represent the same duration and their features and labels are calculated accordingly. Designing the features is certainly the tricky part: of course the features must have values for all the instances, so you cannot rely on specific tests which were done only on some of the patients (well you can, but there is a bias for these features). To be honest I doubt this part can be done reliably: either the features are made of standard homogeneous indicators, but then these indicators are probably poor predictors of death in general; or they contain specialized diagnosis tests for some patients but then they are not homogeneous across patients, so the model is going to be biased and likely to overfit. Ideally I would recommend splitting between training and test data before even preparing the data in this way, typically by picking a period of time for training data and another for test data. Once the data is prepared, in theory any binary classification method can be applied. Of course a probabilistic classifier can be used to predict a probability, but this can be misleading so be very careful: the probability itself is a prediction, it cannot be interpreted as the true chances of the patient to die or not. For example Naive Bayes is known to empirically always give extreme probabilities, i.e. 
close to 0 or close to 1, and quite often it's completely wrong in its prediction. This means that in general the predicted probability is only a guess, it cannot be used to represent confidence. [edit: example] Let's say we have: data for years 2000 to 2005 N=1, i.e. we look at whether a patient dies in the next year. a single indicator, for instance say cholesterol level. Of course in reality you would have many other features. for every time $t$ in the features we represent the "test value" for the past 2 years to the current year $t$. This means that we can iterate $t$ from 2002 (2000+2) to 2004 (2005-N) Let's imagine the following data (to simplify I assume the time unit is year): patientId birthYear year indicator 1 1987 2000 26 1 1987 2001 34 1 1987 2002 18 1 1987 2003 43 1 1987 2004 31 1 1987 2005 36 2 1953 2000 47 2 1953 2001 67 2 1953 2002 56 2 1953 2003 69 2 1953 2004 - DEATH 3 1969 2000 37 3 1969 2001 31 3 1969 2002 25 3 1969 2003 27 3 1969 2004 15 3 1969 2005 - DEATH 4 1936 2000 41 4 1936 2001 39 4 1936 2002 43 4 1936 2003 43 4 1936 2004 40 4 1936 2005 38 That would be transformed into this: patientId yearT age indicatorT-2 indicatorT-1 indicatorT-0 label 1 2002 15 26 34 18 0 1 2003 16 34 18 43 0 1 2004 17 18 43 31 0 2 2002 49 47 67 56 0 2 2003 50 67 56 69 1 3 2002 33 37 31 25 0 3 2003 34 31 25 27 0 3 2004 35 25 27 15 1 4 2002 66 41 39 43 0 4 2003 67 39 43 43 0 4 2004 68 43 43 40 0 Note that I wrote the first two columns only to show how the data is calculated, these two are not part of the features.
H: How to collect info about unseen bugs given user's comments/feedbacks? I have a dataframe which looks like: user_id, comment 0, 'Functional but Horrible UI' 1, 'Great everything works well' 2, 'I struggled finding plus button because of theme colors in dark mode' 3, 'Keeps stopping on Android 10' 4, 'I like the functionaity but color theme could be better' 5, 'Consistently crashing. Uninstalled' 6, 'Good overall' 7, 'sfdfsdlfksd' 8, 'I lost in complex settings' 9, 'Configuring app is really a headache' 10, 'aaaaaaaaaaaaa' And I want to figure out some data science approach to pluck out information about what users are struggling with and which issues appeared how much and stuff like this. Even some simple output would be good for me so that we know which parts of app to focus on more. Like for sample above I am aiming for an output as simple as: problems = { 'color_theme': 3, 'app_settings': 2, 'crashing' : 2} So I kinda wants labeling and how much time a label is occured based on to which label a review belongs. But the problem is I cannot train a model with predefined labels because: I do not have labels for reviews. If we have to go through each review to know what problem is it talking about (i.e. to label it), we would just have filed it as well and would know what we have to work on. I do not know in advance what problems are gonna come in future so even if we somehow label all at some point in time, it wouldn't be enough as some unseen problem may come and we have to do again. Even if we have a system of labeling somehow, how would we update model, like do we define a new model with a different architecture for ever changing labels? So under these circumstances, I was trying to figure out an AI approach to ease in my situation. I am pretty good at python and do have working knowledge of keras/tensorflow and other libraries but none of them seem to have such flexible model approach. I was going through Google Cloud Platform's AI platform as well but it could do sentiment analysis to an extent but not understand in an app context that e.g. button is a part of UI and color as well. So how could I approach this problem in a more elegant way? AI: It sounds like you are looking for an unsupervised learning approach (meaning you don't need to manually label your data). Something like k-means clustering could work well. This would allow you to group you comments into k distinct clusters. You could then view counts of comments in those clusters and explore the clusters to determine their meaning. In order to perform the clustering, you need to transform your data from text to a numerical vector space. A common approach would be tf-idf, but you may find that something else works better. Since you mentioned Python, both k-means and tf-idf can be accomplished using sklearn: sklearn.cluster.KMeans sklearn.feature_extraction.text.TfidfVectorizer There's a pretty nice example of k-means clustering using tf-idf on Kaggle.
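A minimal sketch on the comments from the question (the number of clusters and all parameters are arbitrary):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = pd.Series([
    "Functional but Horrible UI",
    "Great everything works well",
    "I struggled finding plus button because of theme colors in dark mode",
    "Keeps stopping on Android 10",
    "I like the functionaity but color theme could be better",
    "Consistently crashing. Uninstalled",
])

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(comments)

km = KMeans(n_clusters=3, random_state=0, n_init=10).fit(X)
summary = pd.DataFrame({"comment": comments, "cluster": km.labels_})
print(summary["cluster"].value_counts())  # rough counts per issue group

Inspecting the highest-weighted tf-idf terms per cluster (e.g. via km.cluster_centers_) is a common way to attach human-readable labels such as "color_theme" or "crashing" to the clusters, and new reviews can be assigned with km.predict on their tf-idf vectors.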
H: Can neural networks have multi-dimensional output nodes? I'm trying to understand what's possible with TensorFlow's output layer. Specifically, are outputs always a flat array? Since a neuron (or 'unit', in TF) has just one number, and there is only one set of outputs, it seems that output must have a single dimension. With one-hot probabilities, this is easy to understand. But what about an image? If my output is going to be a picture, can I have TF output a multi-dimensional array of pixels, e.g. [[r0, g0, b0], [r1, g1, b1], ...]? If so, how would that network be constructed? How would I define the output layer's dimensionality/shape? The only param I know of that defines output shape is this, from tf.layers.dense, which seems inherently one-dimensional: units (number) Positive integer, dimensionality of the output space. Any help you can provide is greatly appreciated! Reference: https://js.tensorflow.org/api/latest/#layers.dense AI: The output layer does not have to be 1D (excl. batch size) but even if it is, it does not necessary mean you cannot transform it to a n dimensional space. Consider an autoencoder used to reconstruct an image: In the simplest case we could flatten a image (e.g. 24 x 24 pixels) and learn a network to predict the 24 x 24 pixels (output a 1D image). These pixels can then be transformed back to an 2D image (https://www.tensorflow.org/tutorials/generative/autoencoder). So in other words, even if your network outputs a 1D shape, nothing prevent you from reconstructing it to a higher space. We can achieve similar results as stated in the point above, by using an encoding network (convolutional + pooling layers) followed by a decoding layer (transposed convolutional + up sampling layers). In this case you can effectively generate a 2D image directly (https://www.tensorflow.org/tutorials/generative/cvae). You can also look at image segmentation networks for inspiration of how higher dimensions outputs can be generated.
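As a minimal Keras sketch of the first point (sizes are arbitrary), a Reshape layer turns a flat Dense output into a 2-D, image-like output:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(24 * 24 * 3, activation="sigmoid", input_shape=(64,)),  # flat output of 1728 values
    layers.Reshape((24, 24, 3)),                                         # now a 24x24 RGB "image"
])
model.compile(loss="mse", optimizer="adam")
print(model.output_shape)  # (None, 24, 24, 3)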
H: Is it possible to use a pretrained scikit learn model to make predictions on a dataset with different features (than those used during training) Say we have a model trained on dataset A, which has a number of features, as usual. We then persist that model to disk and use it when we need to run inference (make predictions). Usually we run inference against unseen data with the same features. Is it possible to run inference against unseen data with different features. Note that these new features are very similar to the ones used in training. So in a sense it would be a type of transfer learning (actually more like “transfer inference”). Can this be done in scikit learn, given that loaded models (from say Pickle) expect the column names to be the same? AI: Technically, the only constraint would be the number of features that has to be the same than during the training phase (in most scikit learn models). Will it perform well ? I would say it really depends on the type of model/approach you are using and what is behind: Note that these new features are very similar to the ones used in training Generally speaking it will probably not perform well unless the training features and the new features are related in a way that is acceptable by your model.
H: TensorFlow 2 one-hot encoding of labels I was following this basic TensorFlow Image Classification problem, where images of flowers have to be classified into one of 5 possible classes. The labels in the training set are not one-hot encoded, and are individual numbers: 1,2,3,4 or 5 (corresponding to 5 classes). The final layer of the ConvNet however has num_class number of units. Wouldn't there be a dimension mismatch while computing the loss, since you are finding the difference between a [num_class, 1] (predicted label) dimensioned vector and a [1, 1] (true label) dimensioned vector? Does the Keras backend automatically convert the labels into one-hot vectors? Thank you in advance! AI: The loss function handles the conversion. TensorFlow has a SparseCategoricalCrossentropy and a CategoricalCrossentropy loss function. The first expect your labels to be provided as integers, where the latter expects one-hot encodings. In the given example, they use the SparseCategoricalCrossentropy loss function, therefore it is ok to supply your labels as integers.
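A small numerical check of the point above (the predicted distribution and labels are made-up values): the two losses return the same result, one fed integer labels and the other the equivalent one-hot labels.

import tensorflow as tf

probs = tf.constant([[0.1, 0.7, 0.2]])        # predicted distribution over 3 classes
y_int = tf.constant([1])                      # integer label
y_onehot = tf.constant([[0.0, 1.0, 0.0]])     # the same label, one-hot encoded

scce = tf.keras.losses.SparseCategoricalCrossentropy()
cce = tf.keras.losses.CategoricalCrossentropy()
print(scce(y_int, probs).numpy())    # ~0.357 (= -log 0.7)
print(cce(y_onehot, probs).numpy())  # same value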
H: Does GridSearchCV not save the best parameters? So I tuned the hyperparameters using GridSearchCV, fitted the model to the data, and then used best_params_. I'm just curious why GridSearchCV takes so long to run best_params_, unlike RandomizedSearchCV where it instantly gives answers. The time it takes for GridSearchCV to give the best_params_ is similar to the time it takes for GridSearchCV to tune the hyperparameters and fit the model to the data. It's as if it's doing it all over again when it has done so already. Is this the case? If not, what's taking it so long when it should have saved the best_params_ when I ran GridSearchCV the first time? AI: It doesn't re-run the search — accessing best_params_ after fitting is instantaneous. Please try the following code:
CELL1:
import numpy as np
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
iris = datasets.load_iris()
CELL2:
%%time
parameters = {'kernel': ('linear', 'rbf'), 'C': np.linspace(0.1, 100, 1000)}
svc = svm.SVC()
clf = GridSearchCV(svc, parameters)
clf.fit(iris.data, iris.target)
CELL3:
%%time
clf.best_params_
Wall time of CELL2 will be about 7-9 seconds. Wall time of CELL3 will be 0 ns (instantaneous). This is because best_params_ is an attribute of the fitted GridSearchCV object. It is, however, only created (and accessible) once you run the .fit method.
H: Twitter Data-Analyse: What can I do with the data? I retrieve data to a specific topic from Twitter and did my sentiment analysis on it. I never did anything in NLP, etc. So what else can I do with that? "Main goal" would be to find out if the Twitter community is against this "topic" or not. I am also struggling with cleaning the data and I mean by that, that I am unsure how much should I clean on that Tweet. I would be also glad to get any advise on books, articles, communities, videos... AI: Community analysis implies graph analysis. here is a short list of things you can work on: People often reshares tweets among a certain social group. Minimum-cut method, Girvan–Newman and Modularity maximization are someof the starting algorithms to extract these type of substructures. You can try and find different hierarchies among the groups sharing a particular topics You can try and analyse the lifetime of tweets for particular topics (survival analysis) Analysing tweets is closer to graph analytics rather than NLP. Here is a great overview on community analysis. For coding and algorithms, please check graphX Spark library. If your data is not too large, networkX is easier. For survival analysis, lifeline is one of the easier options.
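If the interaction data fits in memory, a minimal networkx sketch of the community-extraction idea could look like the following; the edge list (users connected by retweets/mentions) is invented purely for illustration.

import networkx as nx
from networkx.algorithms import community

# Nodes are users, edges are retweet/mention interactions (hypothetical data)
edges = [('alice', 'bob'), ('bob', 'carol'), ('carol', 'alice'),
         ('dave', 'erin'), ('erin', 'frank')]
G = nx.Graph(edges)

# Modularity-maximization community detection
communities = community.greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(i, sorted(c))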
H: Why this TensorFlow Transformer model has Linear output instead of Softmax? I am checking this official TensorFlow tutorial on a Transformer model for Portuguese-English translation. I am quite surprised that when the Transformer is created, their final output is a Dense layer with linear activation, instead of Softmax. Why is that the case? In the original paper Attention is All You Need the image is pretty clear, there is a Softmax layer just at the end (Fig.1, p. 3). How can you justify this difference, when your task involves building a language model and your Loss is based on sparse categorical crossentropy? AI: The key is precisely in the definition of the loss: loss_object = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction='none') As you can see, the loss is created with the flag from_logits=True which means that the input to the loss is not a probability distribution, but unnormalized log probabilities, namely "logits", which is precisely the result of the final projection, before any softmax. When from_logits is true, the softmax itself is handled inside the loss, combining it with the sparse categorical cross-entropy into a more numerically stable form. From the docs: from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution. Note - Using from_logits=True may be more numerically stable.
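As a small numerical check of the explanation above (the logits are invented values): applying the softmax yourself and using from_logits=False gives the same loss as passing the raw linear outputs with from_logits=True, the latter just being the more numerically stable route.

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0]])  # raw output of the final linear projection
labels = tf.constant([0])

loss_from_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_from_probs = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

print(loss_from_logits(labels, logits).numpy())
print(loss_from_probs(labels, tf.nn.softmax(logits)).numpy())  # same value, softmax applied manually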
H: sine, cosine transformed cyclical features - am I losing information? If I use sine, cosine transformation for cyclical features (e.g. weekday or hour of the day), do I lose information if the first ordinal value was 0 respectively? Assume hours of the day are encoded as follows: 0, 1, ..., 23 If I apply the sine, cosine transformation, I get 23 data points instead of 24 (cf. plot) Am I losing information on the first ordinal value (0)? Many thanks in advance! AI: Are 0 and 23 supposed to be the same? If not then you should use $( \sin(2i\pi/24),\cos(2i\pi/24) )$ instead of $( \sin(2i\pi/23),\cos(2i\pi/23) )$ with $i \in \{0,...,23\}$
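In code, the suggestion amounts to dividing by the period (24) rather than by 23; a quick sketch, assuming hours are encoded 0–23:

import numpy as np

hours = np.arange(24)
period = 24  # hours 0 and 23 are not the same point, so divide by the period, not by 23
hour_sin = np.sin(2 * np.pi * hours / period)
hour_cos = np.cos(2 * np.pi * hours / period)
# hour 0 and hour 23 now map to distinct but adjacent points on the circle
print(hour_sin[0], hour_cos[0])
print(hour_sin[23], hour_cos[23])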
H: Differentiate between positive and negative clusters I have applied k-means clustering on my dataset of Amazon Alexa reviews. model = KMeans(n_clusters=2, max_iter=1000, random_state=True, n_init=50).fit(X=word_vectors.vectors.astype('double')) Now I want to check which cluster is positive and which is negative, can anyone suggest me some way to do that? Also, is there any way to check is a particular word belongs to which cluster. E.g, the word 'bad' belongs to which cluster - 0 or 1 AI: Maybe you don't have a positive and a negative class. Your input are word vectors. Unless you trained your word vectors before with explicit positive and negative labels, it is very unlikely that your KMeans learned that difference. If you used pre-trained word vectors, your KMeans could have learned an arbitrary difference between cluster 0 and cluster 1. Maybe it learned which reviews are from males and which from females, maybe which have the word "parachute" and which don't have the word "parachute", the options are endless. What you can do, is access which labels your KMeans learned (model.labels_) and filter your input X per cluster. Then, count the occurence of each word in each cluster and order which words happen the most in each of them. This might help you understand the difference between cluster 0 and cluster 1. Note: if the top words you get are words like: a, the, of, if, etc. Use a stop-word list, or filter those word with a max document frequency threshold.
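A rough sketch of the word-count idea above; it assumes a variable reviews holding the raw review texts in the same order as the rows of word_vectors.vectors, and reuses the fitted model from the question.

from collections import Counter

labels = model.labels_
counts = {0: Counter(), 1: Counter()}
for review, label in zip(reviews, labels):          # reviews is a hypothetical list of raw texts
    counts[label].update(w.lower() for w in review.split())

for cluster, counter in counts.items():
    print(cluster, counter.most_common(10))          # add a stop-word filter if the top words are uninformative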
H: Use a GPU to speed up neural net training in R I'm currently training a neural net model in R and am wanting to use a GPU to speed up this process. I've looked into this but it appears that this is unavailable to Mac users as Apple no longer uses NVIDIA GPUs. Can anyone tell me if this is the case, and if not how I can go about utilizing a GPU? AI: If you're able to convert the code into python, then you could use the Google colab environment or Kaggle kernels. These online platforms provide free GPU's that you can utilize. Kaggle kernels also support R directly.
H: distribution difference between image and text Once for the task of image captioning I've read that, the features extracted from image and text by deep networks are from two different worlds and got different distribution. My question is how is the distribution in two of them and how are they different? AI: Suppose you trained two identical neural nets on different datasets. Network A is trained using a dataset of cat pictures. Network B is trained using a dataset of traffic sign images. Because the two networks are identical, they will obviously produce a feature map in the same space, right? But the distribution of features in that space will be different for the two networks, because they were trained on different datasets, and you need different feature extractors to recognize cats vs. traffic signs. This is analogous to what you have read about the text/image features. Suppose we train a network to embed image data in some N-dimensional space, and then we train another network to embed text data in the same N-dimensional space. Although the resulting feature vectors are in the same space, they are almost certain to have different distributions, because they were trained using different datasets. Unfortunately, we cannot give a general answer about how the distributions are shaped and what exactly the differences are. These details will vary from case to case. Although we may not know exactly how the distributions are different until we have them in hand, we can be confident that they are in fact going to have substantial differences.
H: Role of decoder in Transformer? I understand the mechanics of Encoder-Decoder architecture used in the Attention Is All You Need paper. My question is more high level about the role of the decoder. Say we have a sentence translation task: Je suis ètudiant -> I am a student The encoder receives Je suis ètudiant as the input and generates encoder output which ideally should embed the context/meaning of the sentence. The decoder receives this encoder output and an input query (I, am, a, student) as its inputs and outputs the next word (am, a, student, EOS). This is done step by step for every word. Now, do I understand this correctly that the decoder is doing two things? Figuring out relationship between the input query and encoder embedding i.e how is the query related to the input sentence Je suis ètudiant Figuring out how is the current query related to previous queries through the masked attention mechanism. So when the query is student, the decoder would attend to relevant words which have already occurred (I am a). If this is not the right way to think about it, can someone give a better explanation? Also, if I have a task of classification or regression for a time series, do I need the decoder? I would think just the encoder would suffice as there is no context in the output of the model. AI: Yes, you are right in your understanding of the role of the decoder. However, your use of "query" here, while somewhat technically correct, seems a bit strange. You are referring as "query" to the partially decoded sentence. While the partially decoded sentence is actually used as query in the first multihead attention block, people normally do not refer to it as "query" when describing stuff from the conceptual level of the decoder. About needing the decoder in classification or regression tasks: the decoder is used when the output of the model is a sequence. If the output of the model is a single value, e.g. for a classification task or a single-value regression task, the encoder would suffice. If you want to predict multiple values of a time series, you should probably use a decoder that lets you condition not only on the input but also on the partially generated output values.
H: Why transform embedding dimension in sin-cos positional encoding? Positional encoding using sine-cosine functions is often used in transformer models. Assume that $X \in R^{l\times d}$ is the embedding of an example, where $l$ is the sequence length and $d$ is the embedding size. This positional encoding layer encodes $X$’s position $P \in R^{l\times d}$ and outputs $P + X$ The position $P$ is a 2-D matrix, where $i$ refers to the order in the sentence, and $j$ refers to the position along the embedding vector dimension. In this way, each value in the origin sequence is then maintained using the equations below: $${P_{i, 2j} = \sin \bigg( \frac{i}{10000^{2j/d}}} \bigg) $$ $${P_{i, 2j+1} = \cos \bigg( \frac{i}{10000^{2j/d}}} \bigg)$$ for $i = 0,..., l-1$ and $j=0,...[(d-1)/2]$ I understand the transormation across the time dimension $i$ but why do we need the transformation across the embedding size dimension $j$? Since we are adding the position, wouldn't sin-cos just on time dimension be sufficient to encode the position? EDIT Answer 1 - Making the embedding vector independent from the "embedding size dimension" would lead to having the same value in all positions, and this would reduce the effective embedding dimensionality to 1. I still don't understand how the embedding dimensionality will be reduced to 1 if the same positional vector is added. Say we have an input $X$ of zeros with 4 dimensions - $d_0, d_1, d_2, d_3$ and 3 time steps - $t_0, t_1, t_2$ $$ \begin{matrix} & d_0 & d_1 & d_2 & d_3\\ t_0 & 0 & 0 & 0 & 0\\ t_1 & 0 & 0 & 0 & 0\\ t_2 & 0 & 0 & 0 & 0\\ \end{matrix} $$ If $d_0$ and $d_2$ are the same vectors $[0, 0, 0]$, and the meaning of position i.e time step is the same, why do they need to have different positional vectors? Why can't $d_0$ and $d_2$ be the same after positional encoding if the input $d_0$ and $d_2$ are the same? As for the embedding dimensionality reducing to 1, I don't see why that would happen. Isn't the embedding dimensionality dependent on the input matrix $X$. If I add constants to it, the dimensionality will not change, no? I may be missing something more fundamental here and would like to know where am I going wrong. AI: First, let's reason why positional embeddings are needed at all: A multi-head attention layer of the Transformer architecture performs computations that are position-independent. This means that, if the same inputs are received at two different positions, the attention heads in the layer would return the same value at the two positions. Note that this is different from LSTMs and other recurrent architectures which, apart from the input, receive the state from the previous time step. The role of positional embeddings is to supply information regarding the position of each token. This allows the attention layer to compute results that are context-dependent, that is, two tokens with the same value in the input sentence would get different representations. Second, let's clarify why having a fixed formula to compute the positional embeddings: Positional embeddings can be handled as "normal" embedding matrixes and therefore can be trained with the rest of the network. These are "trainable positional embeddings". With this kind of positional embeddings, after each training step, the positional embedding matrix is updated together with the rest of the parameters. 
However, we can obtain the same level of performance (translation quality, perplexity, or whatever other measure being used) if, instead of training the positional embeddings, we used the formula proposed in the original transformer paper. This saves us from having to train a very big embedding matrix. Now, about why using the "embedding size dimension" in the formula: We need different values in each position of the embedded vector. Having the same value in each position of the vector would leave us with an "effective" embedding size of 1, as we are wasting the other $d-1$ positions. In order to compute different values for each position of the embedded vector, we need an independent variable that we use to compute the value at each position based on it. We don't have any other suitable variable but the position itself. That's why it is used in the formula. Old answer: Making the embedding vector independent from the "embedding size dimension" would lead to having the same value in all positions, and this would reduce the effective embedding dimensionality to 1. The formula uses the embedding size dimension to be able to provide different values within each embedded vector.
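For reference, a small sketch that builds the sinusoidal positional matrix P from the formulas in the question, so you can see that each position i gets a distinct pattern across the d embedding dimensions; the sequence length and embedding size are arbitrary values.

import numpy as np

l, d = 10, 16  # sequence length and embedding size, chosen arbitrarily (d even)
P = np.zeros((l, d))
pos = np.arange(l)[:, None]                 # i, one row per position
j = np.arange(d // 2)[None, :]              # j, one column per sin/cos pair
angle = pos / (10000 ** (2 * j / d))
P[:, 0::2] = np.sin(angle)                  # even embedding dimensions
P[:, 1::2] = np.cos(angle)                  # odd embedding dimensions
print(P.shape)  # (10, 16); each row is a unique positional pattern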
H: What exactly is a dummy trap? Is dropping one dummy feature really a good practice? So I'm going through a Machine Learning course, and this course explains that to avoid the dummy trap, a common practice is to drop one column. It also explains that since the info on the dropped column can be inferred from the other columns, we don't really lose anything by doing that. This course does not explain what the dummy trap exactly is, however. Neither it gives any examples on how the trap manifests itself. At first I assumed that dummy trap simply makes the model performance less accurate due to multicollinearity. But then I read this article. It does not mention dummy trap explicitly, but it does discuss how an attempt to use OHE with OLS results in an error (since the model attempts to invert a singular matrix). Then it shows how the practice of dropping one dummy feature fixes this. But then it goes on to demonstrate that this measure is unnecessary in practical cases, as apparently regularization fixes this issue just as well, and algorithms that are iterative (as opposed to closed-form solution) don't have this issue in the first place. So I'm confused right now in regards to what exactly stands behind the term "dummy trap". Does it refer specifically to this matrix inversion error? Or is it just an effect that allows the model to get trained but makes its performance worse, and the issue described in that article is totally unrelated? I tried training an sklearn LinearRegression model on a OHE-encoded dataset (I used pd.get_dummies() with the drop_first=False parameter) to try to reproduce the dummy trap, and the latter seems to be the case: the model got trained successfully, but its performance was noticeably worse compared to the identical model trained on the set with drop_first=True. But I'm still confused about why my model got successfully trained at all, since if the article is to be believed, the inversion error should have prevented it from being successfully trained. AI: There are two main problems - You have one Feature which is correlated (multi-collinearity) to all the others. If you are trying to solve using "closed-form solution", the following will happen $y = w_0 + w_1X_1 + w_2X_2 + w_3X_3$ $w_0$ is the $y$ intercept and to complete the matrix form 1=$X_0$. Hence, $y = w_0X_0 + w_1X_1 + w_2X_2 + w_3X_3$ Solution for $w$ is $(X^{T}X)^{-1}X^{T}y$ So, X must be an Invertible Matrix. But, If the model contains dummy variables for all values, then the encoded columns would add up (row-wise) to the intercept ($X_0$ here)(See below table) and this linear combination would prevent the matrix inverse from being computed (as it is singular). \begin{array} {|r|r|} \hline X_0 & X1 &X2 &X3 \\ \hline 1 &1 &0 &0 \\ \hline 1 &0 &1 &0 \\ \hline 1 &0 &0 &1 \\ \hline \end{array} why my model got successfully trained at all since if the article is to be believed, the transposition error should have prevented it from being successfully trained. Valid question! Aurelien Geron(Author of "Hands-On Machine Learning" has answered Here. - The LinearRegression(Scikit-Learn) class actually performs SVD decomposition, it does not directly try to compute the inverse of X.T.dot(X). The singular values of X are available in the singular_ instance variable, and the rank of X is available as rank_ On Performance In a practically large dataset, a closed-form solution is not preferred. May use an Iterative approach algorithm i.e. 
Gradient-Descent Multi-collinearity too, will not impact the performance but Interpretability of Features. Coeff changes depending upon which dummy is removed - This is obvious as each dummy is now a Feature with a different level of contribution (based on data). The only thing that is sure is their effect together and one-less is the same. The inconsistent result you are getting should be due to some other issue. Dummy-variable Trap I have never heard of this term except "Udemy course A-Z ML". So I don't think that there is any special meaning of the word "trap" if you understand the points(i.e. Singularity, Multi-collinearity, and Interpretability) separately References - www.feat.engineering - Sec#5.1 Sebastian Raschka stats.stackexchange
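A tiny numpy sketch of the singularity point above: with an intercept column plus dummies for all three values (the rows of the table earlier in this answer), X.T @ X is rank-deficient and hence singular, while dropping one dummy column gives full rank and the closed-form solution works.

import numpy as np

# intercept + dummies for all 3 categories (the dummy columns sum to the intercept)
X_full = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [1, 0, 0, 1]], dtype=float)
print(np.linalg.matrix_rank(X_full.T @ X_full))   # 3 < 4 columns -> singular

# drop one dummy: full column rank, X.T @ X is invertible
X_drop = X_full[:, :3]
print(np.linalg.matrix_rank(X_drop.T @ X_drop))   # 3 == 3 columns -> invertible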
H: MSE relevance as a metric when errors < 1 I'm trying to build my first models for regression after taking MOOCs on deep learning. I'm currently working on a dataset whose labels are between 0 and 2. Again, this is a regression task, not classification. The low y values imply that the loss for each sample is quite low, always < 1. My question is then about the relevance of MSE as a metric in such a case: since the loss is < 1, squaring it will result in an even smaller value, making the metric value drop very rapidly. In this case, would it be more relevant to use MAE? Or should I multiply the y values so that the order of magnitude of a sample loss would be > 1? I found this nice article about regression metrics, but didn't find the answer in it. Thanks for your help. AI: I'd use relative RMSE $\sqrt{\frac{1}{n} \sum \frac{(\text{Predicted} - \text{True})^2}{\text{True}^2}}$. In this case, close to 0 implies a good model, regardless of the scale of the true values. Similarly, you can try relative MAE.
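A minimal implementation of the two suggested metrics (note that relative errors assume none of the true values are zero; the example numbers are made up):

import numpy as np

def relative_rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean(((y_pred - y_true) / y_true) ** 2))

def relative_mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs((y_pred - y_true) / y_true))

print(relative_rmse([1.5, 0.8, 2.0], [1.4, 0.9, 1.8]))  # toy numbers
print(relative_mae([1.5, 0.8, 2.0], [1.4, 0.9, 1.8]))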
H: Does auto.arima of the forecast package deal with seasonality and trend automatically I'm reading some code involving auto.arima method from the forecast package in R. What I'm curious is whether there is a necessity for decomposing the time series data into seasonal, trend and stochastic compoents before passing to the auto.arima method, or is it automatically handled by the functionality of the method? Thanks in advance! AI: Yes, the aim of auto.arima is for fitting ARIMA models automatically. You do not need to decompose your time series before hand. See how the algorithm works here https://otexts.com/fpp3/arima-r.html. You may still want to look at arguments available in the auto.arima function, and you may want to change default maximum values for p, q, and d, etc.
H: Can the use of EarlyStopping() offset overfitting problems caused by validation_split? Keras gives users the option, while fitting a model, to split the data into train/test samples using the parameter "validation_split. Example: model = Sequential() model.add(Dense(3, activation = 'relu')) /// Compile model /// model.fit(X_train, y_train, validation_split = 0.2) However, my intuition suggests that using validation_split (as opposed to creating train, test samples before fitting the model) will cause overfitting, since although validation_split splits the batches into train and test at each epoch, the overall effect is that the entire dataset is 'seen' by the model. I was wondering if: my intuition is correct assuming that 1) is true, if there are any circumstances where using the EarlyStopping() callback and validation_split would be better than splitting the data into train/test before fitting the model AI: The validation split parameter splits the data fed into .fit() into train and test sets. There is no mixing of the train and test sets after each epoch. So in terms of splitting, it behaves the same as sklearn's train_test_split(), the only difference being that by default, keras splits the data by index (so if you have validation_split = 0.2, the first 80% of indices are taken for training, the rest for testing). So, in principle, it should not cause overfitting. However, many times overfitting can occur based on how the model is evaluated. What I've seen many people do is the following: # use previously created model model.fit(X, y, validation_split = 0.2) model.evaluate(X, y) Here, the model would be overfitting because keras doesn't 'remember' the split that it used to train the data. If you are using validation_split for visualisation purposes, an alternative would be to do the following: X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2) model.fit(X_train, y_train, validation_data = (X_test, y_test), callbacks = [callback]) model.evaluate(X_test, y_test) This way, during training you can still get the test curves while the model is training, but at the end when using evaluate (or predict), you'll still be predicting data that is previously unseen from a training POV. As for EarlyStopping(), it can be used here the same way it would be used with validation_split.
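For completeness, the callback used in the snippet above could be defined along these lines; the monitored quantity and the patience value are just typical choices, not the only possible ones.

from tensorflow.keras.callbacks import EarlyStopping

callback = EarlyStopping(
    monitor='val_loss',          # stop when the validation loss stops improving
    patience=5,                  # wait 5 epochs without improvement before stopping
    restore_best_weights=True,   # roll back to the weights of the best epoch
)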
H: How to train a neural network on multiple objectives? I have a multi-class neural network classifier that has K classes(products). For every row, only one of the classes will be 1 at a time. Now, this approach works fine if I have only 1 objective to optimize i.e Which of these N products was "clicked" by the user. But how will I solve this problem if I need to optimize on a 2nd objective i.e Which of these N products was "purchased" by the user? A purchase event is always preceded by a click event. I can obviously solve this problem by training two separate models - 1 for click and the other for purchase. But purchase data is very low as compared to the click data. And we ran the purchase model on production. It did not perform well. So how do I take both the click and purchase data and frame my problem as "Which of these N products will be clicked and possibly purchased by the user"? and train a single model. Any sources or papers in this direction will be really helpful. AI: What you're referring to is called multi-task learning, where your goal is to have a single network learn multiple tasks (in your case "click" and "purchase"). The benefit of having a single model learn both tasks is that the network can use information extracted for one task to improve its performance on the other. Technically you need your network to have two output layers (i.e. one for predicting "clicks" and one for "purchases") and likewise two loss functions (one for each objective). These loss functions usually contribute equally towards the total loss, unless you don't want to consider both tasks of equal importance. If you're using keras an example for multi-output models can be found here.
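A bare-bones sketch of such a two-headed network with the Keras functional API; the layer sizes, head names, number of features/products and loss weights are placeholders, not recommendations.

import tensorflow as tf
from tensorflow.keras import layers, Model

n_features, n_products = 50, 20  # assumed sizes

inputs = tf.keras.Input(shape=(n_features,))
x = layers.Dense(128, activation='relu')(inputs)   # shared trunk
x = layers.Dense(64, activation='relu')(x)

click_out = layers.Dense(n_products, activation='softmax', name='click')(x)
purchase_out = layers.Dense(n_products, activation='softmax', name='purchase')(x)

model = Model(inputs, [click_out, purchase_out])
model.compile(
    optimizer='adam',
    loss={'click': 'categorical_crossentropy', 'purchase': 'categorical_crossentropy'},
    loss_weights={'click': 1.0, 'purchase': 1.0},  # adjust if one task should weigh more
)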
H: In sequence models, is it possible to have training batches with different timesteps each to reduce the required padding per input sequence? I want to train an LSTM model with variable length inputs. Specifically I want to use as little padding as possible while still using minibatches. As far as I understand each batch requires a fixed number of timesteps for all inputs, necessitating padding. But different batches can have different numbers of timesteps for the inputs, so in each batch inputs only have to be padded to the length of the longest input-sequence in that same batch. This is what i want to implement. What I need to do: Dynamically create batches of a given size during training, the inputs within each batch are padded to the longest sequence within that same batch. The training data is shuffled after each epoch, so that inputs appear in different batches across epochs and are padded differently. Sadly my googling skills have failed me entirely. I can only find examples and resources on how to pad the entire input set to a fixed length, which is what i had been doing already and want to move away from. Some clues point me towards tensorflow's Dataset API, yet I can't find examples of how and why it would apply to the problem I am facing. I'd appreciate any pointers to resources and ideally examples and tutorials on what I am trying to accomplish. AI: The answer to your needs is called "bucketing". It consists of creating batches of sequences with similar length, to minimize the needed padding. In tensorflow, you can do it with tf.data.experimental.bucket_by_sequence_length. Take into account that previously it was in a different python package (tf.contrib.data.bucket_by_sequence_length), so the examples online may containt the outdated name. To see some usage examples, you can check this jupyter notebook, or other answers in stackoverflow, or this tutorial.
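A rough sketch of the bucketing transformation with the tf.data API; the bucket boundaries and batch sizes are made-up values, dataset is assumed to be your unbatched tf.data.Dataset of (sequence, label) pairs, and you should check the exact location/argument names of the function against your TensorFlow version.

import tensorflow as tf

def seq_len(sequence, label):
    return tf.shape(sequence)[0]

bucketing = tf.data.experimental.bucket_by_sequence_length(
    element_length_func=seq_len,
    bucket_boundaries=[20, 50, 100],        # sequences are grouped by length
    bucket_batch_sizes=[64, 32, 16, 8],     # one batch size per bucket (len(boundaries) + 1)
    padded_shapes=([None], []),             # pad sequences only up to the longest in their batch
)
batched_dataset = dataset.apply(bucketing)  # `dataset` is an assumption: your unbatched dataset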
H: Why should I use data augmentation as Keras layer I've seen in several Tensorflow/Keras tutorials that data augmentation functions are added as keras layers. When I converted my Keras Python model (for production purposes) to TensorflowJS I faced the issue that e.g. the RandomFlip layer is not available in TensorflowJS, so I have to use the ImageDataGenerator. I don't understand why I should put the data augmentation layers in the model at all; I mean, when I want to use my model in production I'll still have them in my model, which doesn't make sense to me. AI: You don't need them at inference time; they are for training purposes only. You can skip these layers while exporting to JavaScript, since they do not have weights. In this example [Link], which includes these layers, I removed them and it worked fine.
# Added this code to remove the first 4 augmentation layers
input = model.input
model_export = input
for layer in model.layers[4:]:
    model_export = layer(model_export)
model_export = keras.Model(input, model_export)
Prediction output - This image most likely belongs to sunflowers with a 100.00 percent confidence. Model before (Initial Layers) Model After
H: label encoding or one-hot encoding or none when using decision tree? I've been learning about decision tree from multiple resources but still not fully understanding data preprocessing step. from https://www.youtube.com/watch?v=PHxYNGo8NcI&t=535s&ab_channel=codebasics it uses decision tree with label encoder and in another resource it says we don't need to convert categories to strings, I'm confused. Given I have data that looks like gender level score male 1 34 female 2 77 female 1 44 If we are using label encoder we would only need to convert gender however if that maps male = 0, female = 1 wouldn't the machine treat female > male? and if it ignores ordinality it will ignore level1 < level2 and treat as if level 1 and level 2 are same level which is not true. What is the right preprocessing step and why? AI: If we are using label encoder we would only need to convert gender however if that maps male = 0, female = 1 wouldn't the machine treat female > male? You are correct, using label encoder to encode categorical features is wrong in general, for the reason you mention. Note that scikit documentation advises against using it with features, it's supposed to be used only with a response variable. In the particular case of a binary variable like "gender" to be used in decision trees, it actually does not matter to use label encoder because the only thing the decision tree algorithm can do is to split the variable into two values: whether the condition is gender > 0.5 or gender == female would give the exact same results. Also note that whether the variable is interpreted as ordinal or not is a matter of implementation. For example in Weka it's possible to specify that a feature is categorical ("nominal"). and if it ignores ordinality it will ignore level1 < level2. Not necessarily, because in theory it's possible to have features with different types (e.g. some categorical and some numerical). However this may depend on the implementation as well.
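In pandas terms, the advice above boils down to something like the following sketch: one-hot encode gender (dropping one column is equivalent to a single 0/1 indicator for a binary variable) and keep level as a plain numeric column so its order is preserved. The small DataFrame mirrors the table in the question.

import pandas as pd

df = pd.DataFrame({'gender': ['male', 'female', 'female'],
                   'level': [1, 2, 1],
                   'score': [34, 77, 44]})

X = pd.get_dummies(df, columns=['gender'], drop_first=True)  # single gender_male 0/1 column
print(X)  # level and score stay numeric, so level 1 < level 2 is kept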
H: Which are the worse machine learning models for text classifications? I was looking at text classification, and for curiosity I was searching online for which were the best models for text classifications. About this, I found that they are linear support vector machines and naive bayes. But which are the worse models to use in text classification? And, if possible, why? AI: First, the question is too broad because there are many different kinds of text classification tasks. For example one wouldn't use the same approach for say spam detection and author profiling (e.g. predict the gender of the author), two tasks which are technically text classification but have little in common (and there are many others). Second, even with a more specific kind of problem, the question of the type of model is misleading because a lot of what makes a ML system perform better than another in text classification is due to other things: the type and amount of training data of course, but also crucially the features being used. There are many options in terms of representing text as features, and these different options usually have a massive impact on performance. I even think that most of the time the choice of a type of classification model does not matter as much as the design of the features. Finally I'm actually going to answer the question but probably not in the way OP expects: the worst model in any classification task is exactly like the best model, but it swaps the answers in order to have as many wrong predictions as possible (e.g. class 1 -> class 2, class 2 -> class 3, .., class N -> class 1). Since it's a lot of work to implement the best classifier just to obtain the worst one, a close to worst one can be done with a minority baseline classifier: just predict every instance as the least frequent class in the training data. I hope a few of the things I said will be helpful, even though it's probably not what OP wished for! :)
H: Is there an inherent recency bias in deep learning? When working with very large models within Deep Learning, training often takes long and requires small batch sizes due to memory restrictions. Usually, we are left with a model checkpoint after training has commenced. I am wondering whether the exact time at which we take that checkpoint significantly factors in to the statistical properties of a model's outputs. For example: Within text generation, lets assume that just before we extract the checkpoint, the model learns statistically anomalous batches with longer sentences than the mean. Would that result in our model generating longer sentences, overrepresenting that recent batch of anomalous texts? As training batches are often randomly generated from the dataset, such unrepresentative batches may certainly occur, sometimes right before we save the checkpoint. Has there been any research regarding such, potentially unwanted, recency bias in slower deep learning scenarios? The only references I could find were intentionally trying to employ such biases, but I have not found any literature on unwanted recency bias. AI: Your question is very interesting, however I feel you are overlooking a key point in your reasoning: You usually take a model checkpoint at the point that it performs best on the validation set. This means that the instance of the model you keep is inherently the most robust and generalizable version of the model that you have evaluated, thus suffering the least from recency bias. Suppose though, you don't checkpoint the model but stop it at a point arbitrarily. Naturally, you'd think that the samples in the final batch influenced the current state of the model much more than the first batches of the epoch. In practice this would show up as regular overfitting, however, instead of recency bias. Some ways to deal with this: relatively small learning rate equivalently regularization as parameter norm penalties (i.e. L1, L2, ...) ensembling other more specialized techniques such as SGDA
H: Micro Average vs Macro Average for Class Imbalance I have a dataset consisting of around 30'000 data points and 3 classes. The classes are imbalanced (around 5'000 in class 1, 10'000 in class 2 and 15'000 in class 3). I'm building a convolutional neural network model for classification of the data. For evaluation I'm looking at the AUC and ROC curves. Because I have three classes I have to either use micro- or macro-average. To calculate the micro- and macro-averaged AUC and ROC curve, I use the approach described here: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html The micro-averaged AUC / ROC is calculated by considering each element of the label indicator matrix as a binary prediction and the macro-averaged AUC / ROC is calculated by calculating metrics for each label, and find their unweighted mean. In my case micro-averaged AUC is usually higher than macro-averaged AUC. If we look at the sklearn.metrics.roc_auc_score method it is written for average='macro' that This does not take label imbalance into account. I'm not sure if for micro-average, they use the same approach as it is described in the link above. Is it better to use for dataset with class imbalance micro-average or macro-average? That means which metric is not affected by class imbalance? In my case micro-averaged AUC (0.85) is higher than macro-averaged AUC (0.79). When I look at the confusion matrix, the majority class is very well predicted (because the network probably learns to predict the majority class) but the minority classes are poorly predicted (almost as many false negatives as true positives). So, overall the AUC should not be that high I think. AI: The question is actually about understanding what it means to "take imbalance into account": Micro-average "takes imbalance into account" in the sense that the resulting performance is based on the proportion of every class, i.e. the performance of a large class has more impact on the result than of a small class. Macro-average "doesn't take imbalance into account" in the sense that the resulting performance is a simple average over the classes, so every class is given equal weight independently from their proportion. Is it actually a good idea to "take imbalance into account"? It depends: With micro-average, a classifier is encouraged to focus on the largest classes, possibly at the expense of the smallest ones. This can be considered a positive because it means that more instances will be predicted correctly. With macro-average, a classifier is encouraged to try to recognize every class correctly. Since it is usually harder for the classifier to identify the small classes, this often makes it sacrifice some performance on the large classes. This can be considered a positive in the sense that it forces the classifier to properly distinguish the classes instead of lazily relying on the distribution of classes. One could say that it's a kind of quantity vs. quality dilemma: micro-average gives more correct predictions, macro-average gives attention to actually distinguishing the classes. Very often one uses macro with strongly imbalanced data, because otherwise (with micro) it's too easy for the classifier to obtain a good performance by relying only on the majority class. Your data is not strongly imbalanced so it's unlikely this would happen, but I think I would still opt for macro here.
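To see the two averages side by side on your own predictions, a sketch along the lines of the linked scikit-learn example; it assumes y_test holds integer labels in {0, 1, 2} and y_score holds the predicted class probabilities of shape (n_samples, 3).

from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

y_test_bin = label_binarize(y_test, classes=[0, 1, 2])   # label indicator matrix
auc_micro = roc_auc_score(y_test_bin, y_score, average='micro')
auc_macro = roc_auc_score(y_test_bin, y_score, average='macro')
print(auc_micro, auc_macro)  # micro is typically higher here, driven by the majority class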
H: BERT minimal batch size Is there a minimum batch size for training/re-fining a BERT model on custom data? Could you name any cases where a mini batch size between 1-8 would make sense? Would a batch size of 1 make sense at all? AI: Small mini-batch size leads to a big variance in the gradients. In theory, with a sufficiently small learning rate, you can learn anything even with very small batches. In practice, Transformers are known to work best with very large batches. You can simulate large batches by accumulating gradients from the mini-batches and only do the update once in several steps. Also, when finetuning BERT, you might also think of fine-tuning only the last layer (or several last layers), so you save some memory on the parameter gradients and can have bigger batches.
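A generic PyTorch-style sketch of the gradient-accumulation idea mentioned above; model, loader, optimizer and loss_fn are assumed to already exist, and 8 accumulation steps with mini-batches of 4 would simulate an effective batch size of 32.

accum_steps = 8  # number of mini-batches to accumulate before one optimizer step

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(loader):
    loss = loss_fn(model(inputs), labels) / accum_steps  # scale so the accumulated sum matches one big batch
    loss.backward()                                      # gradients accumulate in the .grad buffers
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()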
H: How to store efficiently very large sparse 3D matrices To train a CNN, I have stacked arrays of images over observations [observations x width x length]. The dataset is very sparse ($95\%$). What would be an efficient way of storing these matrices in terms of format (e.g. pickle, parquet) and structure (e.g. scipy.sparse.csr_matrix, list of lists)? AI: Sparse matrix compression techniques are a massively efficient way of storing sparse data. The SciPy package has a variety of methods to address this in scipy.sparse. However, none of these are compatible with matrices of dimension higher than 2. I have found the sparse package handy; it supports Coordinate List (COO) compression for higher-dimensional matrices, as in my use case:
Sparse matrix compression with COO
import numpy as np
import sparse

# Load sequence array file
A = np.load('array.npy', allow_pickle=True)
sparsity = 1 - (np.count_nonzero(A) / A.size)
print("Sparsity of A: %s%%" % np.round(sparsity * 100, 1))
Sparsity of A: 99.6%
# Calculate coordinate list sparse array of A
S = sparse.COO(A)
# Size calculation
print('Size of A in bytes: %s' % A.nbytes)
Size of A in bytes: 16563527400
print('Size of S in bytes: %s' % S.nbytes)
Size of S in bytes: 249330624
On disk: array.npy --> 15.43 GB array_after.npy --> 16.40 MB
H: Attention for time-series in neural networks Neural networks in many domains (audio, video, image text/NLP) can achieve great results. In particular in NLP using a mechanism named attention (transformer, BERT) have achieved astonishing results - without manual preprocessing of the data (text documents). I am interested in applying neural networks to time-series. However, in this domain, it looks like most people apply manual feature engineering by either: transposing the matrix of events to hold columns for each time observation and a row for each thing (device, patient, ...) manually generating sliding windows and feeding snippets to an RNN/LSMTM. Am I overlooking something? Why can't I find people using attention? Wouldn't this be much more convenient (automated)? AI: It is an interesting question. I would not completely agree with you though when you say that most time-series models dont use attention. However there is not as much documentation available on the web as there is for other applications. LSTNet was one of the first papers that proposed using an LSTM + attention mechanism for multivariate forecasting time series. Temporal Pattern Attention for Multivariate Time Series Forecasting by Shun-Yao Shih et al. focused on applying attention specifically attuned for multivariate data. Attend and Diagnose leverages self attention on medical time series data. This time series data is multivariate and contains information like a patient’s heart rate, SO2, blood pressure, etc. A good link to further study this would be: https://towardsdatascience.com/attention-for-time-series-classification-and-forecasting-261723e0006d Further quoting form the above paper: "self-attention and related architectures have led to improvements in several time series forecasting use cases, however, altogether they have not seen widespread adaptation. This likely revolves around several factors such as the memory bottleneck, difficulty encoding positional information, focus on pointwise values, and lack of research around handling multivariate sequences. Additionally, outside of NLP many researchers are not probably not familiar with self-attention and its potential." I dont completely agree with the last statement, nonetheless I do agree that the benefits of attention have not yet captured the attention of researchers outside of NLP to the extent that it should have
H: How to Present All Categories in All Samples I have a data contains many categorical columns. When I sampled this data randomly a few times and applied one-hot encoding to categorical columns I noticed that it ended up with datasets with different column counts. Because not all categories in columns preserved in samples and different samples includes different subset of categories for each column. Is there a way to ensure all categorical columns in all samples contains all possible categories? AI: The first thing we must accept that the sampling is probably doing the right job. What I mean is that if only 10% is being sampled then some unique value which is less than 5 can be easily missed. Ideally, you should club these values into some generic value i.e. OTHER_COL_1 But, if you want to get away with this natural result, you should apply some tweaking. We may do the following - Get the sample as you are doing now Match the unique element of each column to the unique from the main data Iterate on each col and missed unique value Let's assume UNIQUE_4 is missed for COL_2 Sample all the records for UNIQUE_4 from COL_2 of main data and Pick one random data out of it
H: Can we consider high correlation to be a good predictor? The problem of predicting the daily number of COVID-19 cases is indeed challenging and many (external) factors should be taken into account to come up with a reasonable predictor. However, we have studied Twitter for a specific country (not English) for the period Mar-Nov 2020, and found out that the volume of daily tweets related to symptom X is highly correlated with the number of confirmed cases in that country (pearson correlation 0.84 with p-value 0.00031). In the field of data science, would this suffice, at least partially, to say daily tweets of X is a good predictor for the number of COVID-19 cases? AI: Isn't this a variant of the "Google Flu Trends" story from a few years back? (Substitute "Google" with "Twitter" and "Flu" with "Covid-19", "Searches" with "Tweets". Long story. In a nutshell, frequency of Google Searches for terms like "Flu", "Headache" "Nausea" were an excellent predictor for flu season forecasting, until they weren't. (when people started to search for "flu trends"? I don't remember why). There was a negative feedback loop, and forecasts became less reliable.) Google finally removed that feature, to avoid criticism and to stay out of trouble. There are many papers on this, and why it was taken offline.
H: Why does GPU speed up inference? I understand that GPU can speed up training for each batch multiple data records can be fed to the network which can be parallelized for computation. However, for inference, typically, each time the network only processes one record, for instance, for text classification, only one text (i.e., a tweet) is fed to the network. In such a case, how can GPU speed up? AI: Although what you describe is correct, such online/real-time usage is far from being the only (or even the most frequent) use case for DL inference. The keyword here is "batch"; there are several applications where the inference can be also run in batches of incoming data instead of on single samples. Take the example mentioned by NVIDIA in their AI Inference Platform technical overview (p.3): Inference also batch hundreds of samples to achieve optimal throughput on jobs run overnight in data centers to process substantial amounts of stored data. These jobs tend to emphasize throughput over latency. However, for real-time usages, high batch sizes also carry a latency penalty. For these usages, lower batch sizes (as low as a single sample) are used, trading off throughput for lowest latency. A hybrid approach, sometimes referred to as “auto-batching,” sets a time threshold—say, 10 milliseconds (ms)—and batches as many samples as possible within those 10ms before sending them on for inference. This approach achieves better throughput while maintaining a set latency amount. Although, as correctly pointed out in another thread comment, it is on NVIDIA's best interest to convince you that you need GPUs for inference (which is indeed not always true), hopefully you can see the pattern: whenever we want to emphasize throughput over latency, GPUs will be useful for speeding up inference. Practically, any application that runs on existing archives of data (videos, audio, music, text, documents) instead of waiting for incoming streams in real-time can meaningfully rely on GPUs for inference. And here "archives" does not imply necessarily time spans of years or months (although it can be so, e.g. in astronomy applications); the archive consisting of the photos uploaded to Facebook in the last 3 minutes (or since I have started writing this...) is huge, and it, too, can benefit from GPU-accelerated inference. Videos, in specific, since they are usually broke up in frames to be processed, may benefit from sped up GPU inference even in near-real time applications. Bu if you want to just set up your web app that will process low-traffic incoming photos or tweets to respond in real-time, then indeed GPU may not offer anything substantial in performance.
H: Comparing TFIDF vectors of different shapes I'm working on a project using TF-IDF vectors and agglomerative clustering -- the idea is that the corpus of documents increases over time, and when a new document is added, the mean cosine similarity with each cluster will be calculated to find the best match. However, I'm under the impression that it is costly and inefficient to re-calculate the TF-IDF vectors of every document each time a new doc is added to the corpus. I'm trying to discern if it's possible to generate the TF-IDF vectors on an individual basis for each document (see below), and then calculate the cosine similarity by temporarily reshaping/modifying the two vectors. doc_1 = ['some', 'text', 'here'] doc_2 = ['other', 'text', 'here'] vectorizer = TfidfVectorizer() vector_1 = vectorizer.fit_transform(doc_1) vector_2 = vectorizer.fit_transform(doc_2) # somehow add the features from vector_1 to vector_2 that do not exist there, and vice versa... # this way vectors can be calculated on the fly without having to regenerate vectors with an ever-expanding corpus It's my understanding that running the sklearn TF-IDF vectoriser returns not only the vectors but a mapping between the unique words in the corpus and the vectors they correspond to, so I think this approach would be possible, but I'm not sure? Finally, as I'm new to data science, am I approaching this issue horribly? Is there a much cleaner or efficient way of solving the problem of non-parametrical clustering (i.e. no n_clusters) with an ever-expanding corpus. AI: Your thinking is correct, it is unefficient to recalculate TFIDF on the whole collection every time a document is added. There are two parts in TF-IDF: $TF(w,d)$ is the Term Frequency of a word $w$ in a particular document $d$, it's independent from other documents. It's just defined as the proportion of $w$ in $d$. $IDF(w)$ is the Inverse Document Frequency: it is defined as the log of the inverse of the document frequency, where the document frequency of a word is the proportion of documents in the collection which contain this word: $$IDF(w)=log\left(\frac{|D|}{| \{d \in D \text{ such that } w\in d \} |}\right)$$ $IDF(w)$ depends on all the documents so it's costly to re-calculate. However updating IDF whenever there is a new documents can be implemented efficiently: Maintain a global map of the document frequency (not the IDF) for every word. This map is easy to update: for every distinct word $w$ in the new document, increment $m[w]$. Whenever a TF-IDF weight is needed, collect the required document frequency and apply the IDF formula. I don't know whether this can be done with scikit API or not, but it's not too big a task to implement it yourself (and it's a good little exercise to understand TFIDF ;) )
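A bare-bones sketch of that incremental scheme in plain Python (no scikit-learn): the document-frequency map is updated once per new document, and TF-IDF weights are computed on demand. The toy documents are just for illustration.

import math
from collections import Counter, defaultdict

doc_freq = defaultdict(int)   # word -> number of documents containing it
n_docs = 0

def add_document(tokens):
    global n_docs
    n_docs += 1
    for w in set(tokens):      # count each word at most once per document
        doc_freq[w] += 1

def tfidf(tokens):
    tf = Counter(tokens)
    total = len(tokens)
    return {w: (c / total) * math.log(n_docs / doc_freq[w]) for w, c in tf.items()}

add_document(['some', 'text', 'here'])
add_document(['other', 'text', 'here'])
print(tfidf(['other', 'text', 'here']))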
H: What is the difference between BERT architecture and vanilla Transformer architecture I'm doing some research for the summarization task and found out BERT is derived from the Transformer model. In every blog about BERT that I have read, they focus on explaining what is a bidirectional encoder, So, I think this is what made BERT different from the vanilla Transformer model. But as far as I know, the Transformer reads the entire sequence of words at once, therefore it is considered bidirectional too. Can someone point out what I'm missing? AI: The name provides a clue. BERT (Bidirectional Encoder Representations from Transformers): So basically BERT = Transformer Minus the Decoder BERT ends with the final representation of the words after the encoder is done processing it. In Transformer, the above is used in the decoder. That piece of architecture is not there in BERT
H: Over-sampling: is my model over-fitting? I would like to ask you some questions on how to consider (good or not) the following results:
OVER-SAMPLING
              precision    recall  f1-score   support
         0.0       1.00      0.85      0.92       873
         1.0       0.87      1.00      0.93       884
    accuracy                           0.92      1757
   macro avg       0.93      0.92      0.92      1757
weighted avg       0.93      0.92      0.92      1757
Confusion Matrix:
[[742 131]
 [  2 882]]
I have a dataset with 3500 obs (3000 with class 0 and 500 with class 1). I would like to predict class 1 (target variable). Since it is a problem of imbalance classes, I had to consider re-sampling methods. The result shown above is from over-sampling. Do you think it over-fits and/or that it cannot be a good re-sampling method for my case? I am looking at the f1-score column, since it is a text classification problem. AI: In order to get accurate results, you should not oversample the test set! Otherwise you are simply evaluating on synthetic samples that you yourself have created. The support on your classification report should mirror the imbalance in your dataset. From what I understand you have 3500 samples, then you did some oversampling (probably brought them to around 6000) and then took 1757 from these for testing. This evaluation scheme is wrong. Take a look at the illustration below to see a more correct scheme.
      |--- train --> oversample train set --> train model---|
set --|                                                     |--> evaluation on test set
      |--- test --------------------------------------------|
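With the imbalanced-learn package, the scheme in the illustration looks roughly like the sketch below; X, y and a fitted-ready model are assumed to exist, and the choice of RandomOverSampler is an assumption (SMOTE or other samplers are used the same way).

from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

ros = RandomOverSampler(random_state=42)
X_train_res, y_train_res = ros.fit_resample(X_train, y_train)  # oversample the train set only

model.fit(X_train_res, y_train_res)
print(classification_report(y_test, model.predict(X_test)))    # evaluate on the untouched test set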
H: How to set class_weight parameter for cost sensitive learning? I'm dealing with a binary classification problem with a balanced data set, however false positives are much more costly than false negatives. Let's just say that an FP is in general 3 times more costly than an FN and the response variable = 1 means a positive identification. How should I set the class_weight parameter in sklearn's RandomForest to reflect this? From my understanding, I would say: class_weight = {0:1.0, 1:3.0} I'm not sure if I understand this parameter correctly or should it be the inverse? Thanks. AI: \begin{array} {|r|r|r|} \hline - &(1) &(0) \\ \hline (1) &- &FN\\ \hline (0) &FP &-\\ \hline \end{array} Row elements are Y_true and columns are Y_pred. FP means we predicted Positive and it came out False, i.e. the true class was Negative (0 here). It means we don't want the model to misclassify the Negative class, so we put a bigger penalty on the Negative class. Hence, class_weight = {0: 3.0, 1: 1.0}
H: Cause of periodic jumps in loss function I might be missing something obvious as I am new to machine learning. I am training an SSD Inception V2 for detecting buildings from satellite images. I use the Tensorflow Object Detection API. I am having troubles interpreting why the value of loss seems to change periodically: Please let me know if I need to add more information AI: Looks like you may be feeding training data in a very specific way or weights and biases reset after a specific period of time, e.g. end of epoch? I would start with checking that training and validation sets consist of a desired class ratio datapoints within mini batches are being shuffled model parameters are being correctly updated during training
H: making a picture of a neural network in tensorflow I'm new to Machine Learning and I have just gone through a tutorial explaining how to create a neural network in TensorFlow. I was wondering if it is possible to visualize the neural network I created. The output should be a picture like this Thanks. MRE: import tensorflow as tf ann = tf.keras.models.Sequential() ann.add(tf.keras.layers.Dense(units=6, activation='relu')) ann.add(tf.keras.layers.Dense(units=6, activation='relu')) ann.add(tf.keras.layers.Dense(units=1, activation='sigmoid')) tf.keras.utils.plot_model(ann, to_file='model.png', show_shapes=True, show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96 ) AI: This should do the trick: tf.keras.utils.plot_model( model, to_file='model.png', show_shapes=False, show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96 ) This will generate an image like: Example: https://www.tensorflow.org/guide/keras/functional
H: What is the meaning of the sparsity parameter Sparse methods such as LASSO contain a parameter $\lambda$ which is associated with the minimization of the $l_1$ norm. A higher value of $\lambda$ ($>0$) means that more coefficients will be shrunk to zero. What is unclear to me is how this method decides which coefficients to shrink to zero. If $\lambda = 0.5$, does it mean that those coefficients whose values are less than or equal to 0.5 will become zero? In other words, whatever the value of $\lambda$, will the coefficients whose values fall within $\lambda$ be turned off/become zero? Or is there some other meaning to the value of $\lambda$? Can $\lambda$ be negative? AI: When we implement penalized regression models we are saying that we are going to add a penalty to the sum of the squared errors. Recall that the sum of squared errors is the following and that we are trying to minimize this value with Least Squares Regression: $$SSE = \sum_{i=1}^{n}(y_i-\hat{y_i})^2$$ When the model overfits or there is collinearity present, the estimates for the coefficients of our least squares model may be larger than they should be. How do we fix this? We use regularization. What this means is that we add a penalty to the sum of squared errors, thereby limiting how large the parameter estimates can get. For Ridge Regression this looks like this: $$SSE_{L2 norm} = \sum_{i=1}^{n}(y_i-\hat{y_i})^2 + \lambda \sum_{j=1}^{P}\beta_j^2$$ Notice what is different with this model. Here, we add an L2 regularization penalty to the end of the SSE. What this does is add the multiplication of $\lambda$ by the sum of the squared parameter estimates as a penalty to the SSE. This limits how large the parameter estimates can get. As you increase the "shrinkage parameter" $\lambda$ the parameter estimates are shrunk more towards zero. What is important to note with Ridge Regression is that this model shrinks the values towards zero, but not to zero. You may also use the LASSO Regression technique as shown below: $$SSE_{L1 norm} = \sum_{i=1}^{n}(y_i-\hat{y_i})^2 + \lambda \sum_{j=1}^{P} \lvert\beta_j\rvert$$ The change is very similar; the difference is that we are now penalizing the absolute value of the coefficients. This allows shrinkage to exactly zero and can be considered a form of feature selection. Note that $\lambda$ is not a threshold on the coefficient values themselves: it is the weight given to the penalty term in the objective. Which coefficients end up exactly at zero is determined by the optimization as a whole (typically the least informative or redundant predictors are dropped first), not by comparing each coefficient to $\lambda$. In either case both methods penalize model complexity and are parsimonious! Two things to note: In answer to your question, no, $\lambda$ cannot be negative. Why? This would make no sense. $\lambda$ is multiplied by either the L2 or L1 norm to add a penalty to the SSE. If instead you had a negative $\lambda$, you would actually be rewarding the model's complexity, not penalizing it. When you have a value of $\lambda = 0$ you have no penalty and just have regular least squares regression!
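A quick scikit-learn sketch showing the shrinkage in action; note that scikit-learn calls the penalty strength alpha rather than lambda, and the data here is synthetic.

import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10, random_state=0)

for alpha in [0.1, 1.0, 10.0]:
    lasso = Lasso(alpha=alpha).fit(X, y)
    ridge = Ridge(alpha=alpha).fit(X, y)
    print(alpha,
          'lasso zeros:', np.sum(lasso.coef_ == 0),   # more coefficients hit exactly 0 as alpha grows
          'ridge zeros:', np.sum(ridge.coef_ == 0))   # shrunk toward 0 but (almost) never exactly 0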
H: How to deal with a binary classification problem, where the instances in the negative class are very similar? Let's say one wants to detect whether a picture of a fixed size contains a cat or not, but as a dataset you have 10000 pictures of cats and 30000 pictures which don't contain a cat yet are very similar to each other. For example, let's assume the 30000 pictures in the "not cat" class contain only pictures of one or two kinds of spiders. When training a CNN, you will find that you achieve a high score on the test set (here high score = almost fully diagonal confusion matrix), but when you want to use the CNN in the real world you find that almost everything gets classified as a cat. Why does the network generalize badly in this case? Even if the dataset doesn't represent the kind of data the CNN would see in the real world, shouldn't it be easy for the CNN to say "I have seen 10000 examples of cats, therefore anything which doesn't look like a cat is not a cat"? How would one deal with this problem (besides gathering more data)? AI: The CNN in this case does not learn what a cat is but rather what differentiates an image with a cat from one without a cat. If all of your "no-cat" images contain spider(s), the CNN could also converge only by detecting the spider(s): images with spider(s) belong to "no-cat" and the others belong to "cat". That explains why you have such a good confusion matrix on testing data and such poor performance in the real world. You have some options to avoid such a situation:
•add other images to your "no-cat" label, as you said (using public datasets for instance)
•try one-class classification approaches. If you are familiar with neural networks, you could dig into the reconstruction error of autoencoders (see this post: How to use a dataset with only one category of data, or this sklearn guide)
Hope it helps
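If you want to experiment with the one-class idea, here is a minimal sketch (not from the original answer) using scikit-learn's OneClassSVM: it is fit only on features of cat images and flags anything too dissimilar as "not a cat". The features are assumed to come from some existing extractor (e.g., embeddings from a pretrained CNN); the arrays below are random placeholders.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
cat_features = rng.normal(loc=1.0, size=(1000, 64))   # placeholder for cat embeddings
new_features = rng.normal(loc=0.0, size=(10, 64))     # placeholder for images to classify

# Fit only on the "cat" class
clf = OneClassSVM(kernel='rbf', gamma='scale', nu=0.05).fit(cat_features)

# predict returns +1 for points that look like the training (cat) distribution
# and -1 for outliers, i.e. "not a cat"
print(clf.predict(new_features))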
H: Causal inference VS Active learning? Imagine we have some lists of features that change over time, where each row of the list corresponds to a sample (a change in space). I would like to know whether machine learning is able to determine the effect of each sample on another sample. For instance, the target value for sample "S" depends on the features of samples "S-4", "S-3", "S-2", "S-1", "S+1", "S+2", "S+3". I have seen approaches like Active Learning and Causal Inference but I am still not sure which of them would be useful for my aim. To elaborate, imagine a plot where the red line is the result for one year and the blue one is for the next year. We have these results in sufficient quantity, so in that respect we do not have a problem. For the target marked by the red circle, and for the other samples, we have different features. I am looking for an algorithm that tells me whether group 1 or group 2 is affecting my target at the red-circle point. For this aim, is it better to use causal inference or active learning? AI: This is how the problem is formulated as a Causal Inference problem: take group 1 as the Control Group and group 2 as the Treatment Group. Based on the observations ("S-4", "S-3", "S-2", "S-1", "S+1", "S+2", "S+3"), a model is fit on the observations. How many models are fit depends on the type of learner: S-, T- or X-learner. Basically, what these learners do is fit the features to the target (S) separately, conditional on whether the observations originate from the control or the treatment group. Once you have fit the learners, a treatment effect is estimated. You can see the treatment effect as the difference between the predictions on either group, given the knowledge restricted to the opposite group. Here is a sample code from Uber's causalML:

from causalml.inference.meta import XGBTRegressor

y, X, treatment = load_data()

xg = XGBTRegressor(random_state=42)
te, lb, ub = xg.estimate_ate(X, treatment, y)
print('Average Treatment Effect (XGBoost): {:.2f} ({:.2f}, {:.2f})'.format(te[0], lb[0], ub[0]))
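To make the meta-learner idea above more concrete, here is a hand-rolled T-learner sketch (my own illustration, not part of causalml): two regressors are fit separately on the control and treatment rows, and the estimated effect for each sample is the difference of their predictions. The data is synthetic and the choice of gradient boosting is arbitrary.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))
treatment = rng.integers(0, 2, size=n)      # 0 = control (group 1), 1 = treated (group 2)
true_effect = 2.0 * X[:, 0]                 # the effect depends on the first feature
y = X.sum(axis=1) + treatment * true_effect + rng.normal(scale=0.5, size=n)

# T-learner: one model per group
model_control = GradientBoostingRegressor().fit(X[treatment == 0], y[treatment == 0])
model_treated = GradientBoostingRegressor().fit(X[treatment == 1], y[treatment == 1])

# Conditional treatment effect per sample, and its average (the ATE)
cate = model_treated.predict(X) - model_control.predict(X)
print("Estimated ATE:", cate.mean(), "| true ATE:", true_effect.mean())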
H: Why does my GPU immediately run out of memory when I try to run this code? I am trying to write a neural network that will train on plays by Shakespeare and then write its own passages. I am using PyTorch. For some reason, my GPU immediately runs out of memory. Note I am not running it on my own GPU; I am running it using the free GPU acceleration from Google Colab. I've tried running a different notebook using the GPU and it works, so I know it's not because I ran into some GPU usage quota or anything like that. Here is a link to the notebook: https://colab.research.google.com/drive/1WNzmN-F3EOvy2HtHCQ0TyBYA5RsCCfN0?usp=sharing so you can try running it yourself. Alternatively, I will paste the code below as well. Notice I have a print(i) in the last for loop. When I run it, the only output I get from that print is a single 0, and then I get

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-52-e6121e5b189f> in <module>()
     23     targets = targets.to(dtype=torch.float32).cuda()
     24
---> 25     out, hidden = net(inputs, hidden)

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
    580         if batch_sizes is None:
    581             result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
--> 582                               self.dropout, self.training, self.bidirectional, self.batch_first)
    583         else:
    584             result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias,

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.90 GiB total capacity; 12.43 GiB already allocated; 5.88 MiB free; 15.08 GiB reserved in total by PyTorch)

It's running out of memory before it has even done a single batch!

import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
import re

with open('drive/MyDrive/colab/shakespeare.txt', 'r') as file:
    text = file.read()

print(text[:100])

chars = list(set(text))
index2char = dict(enumerate(chars))
char2index = {char: index for index, char in index2char.items()}
encoded = [char2index[word] for word in text]
print(encoded[:100])

seq_length = 50
regex = '.{1,' + str(seq_length + 1) + '}'
dataset = np.array(re.findall(regex, text, flags=re.S))

batch_size = 10
n_batches = len(dataset) // batch_size
dataset = dataset[:n_batches * batch_size]

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)

dataset = dataset.reshape(n_batches, -1)
print(dataset.shape)
print(len(dataset[0][0]))

def passage_to_indices(passage: str):
    return np.array([char2index[char] for char in passage])

class Net(nn.Module):
    def __init__(self, input_size, batch_size, hidden_size, num_layers):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.criterion = nn.CrossEntropyLoss()
        self.input_size = input_size
        self.batch_size = batch_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers

    def forward(self, input, hidden):
        # lstm should take input of size (seq_length, batch_size, input_size)
        # and hidden of size (num_layers, batch_size, hidden_size)
        out, hidden = self.lstm(input, hidden)
        return out, hidden

    def init_hidden(self):
        hidden = (
            torch.zeros(self.num_layers, self.batch_size, self.hidden_size,
                        dtype=torch.float32).cuda(),
            torch.zeros(self.num_layers, self.batch_size, self.hidden_size,
                        dtype=torch.float32).cuda()
        )
        return hidden

input_size = len(chars)  # 65
hidden_size = 256
num_layers = 2
net = Net(input_size=input_size,
          batch_size=batch_size,
          hidden_size=hidden_size,
          num_layers=num_layers
          )
net = net.cuda()

epochs = 2
for epoch in range(epochs):
    hidden = net.init_hidden()
    for i, batch in enumerate(dataset):
        print(i)
        targets = torch.tensor([char2index[str(example[-1])] for example in batch])
        # dim passages = (batch_size, seq_length)
        passages = np.array([passage_to_indices(str(example[:-1])) for example in batch])
        one_hot = F.one_hot(torch.tensor(passages), num_classes=input_size)
        inputs = one_hot.view(seq_length, batch_size, input_size)
        inputs = inputs.to(dtype=torch.float32).cuda()
        targets = targets.to(dtype=torch.float32).cuda()

        out, hidden = net(inputs, hidden)

AI: Okay, it was a really stupid issue. I should have tried to run my code somewhere other than Google Colab before posting here. In case anyone has the same issue, here is how I solved it: in Google Colab, click Runtime --> Manage sessions. I had a bunch of other active sessions there. I closed them, and now it runs fine.
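One extra note, separate from the session fix above: in loops like this, reusing the LSTM hidden state across batches without detaching it keeps the computation graph of every previous batch alive, which can also make GPU memory grow over time. A common remedy (a sketch, not something the original post used) is to detach the hidden tensors before each forward pass:

# Inside the batch loop, before the forward pass: cut the hidden state off
# from the previous batch's graph so old graphs can be garbage-collected.
hidden = tuple(h.detach() for h in hidden)
out, hidden = net(inputs, hidden)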
H: Scheduler for activation layer parameter using Keras callback I want to train a Keras model with a custom activation layer. The custom activation layer has one fixed non-trainable parameter. I want to change/set this non-trainable parameter of all custom activation layers in the model during training, after every few epochs. How can I achieve this using a Keras callback? AI: You will need to write a custom callback for this, one that implements the on_epoch_end method. Roughly it should look something like this:

from tensorflow import keras

class CustomCallback(keras.callbacks.Callback):
    def __init__(self, freq):
        super().__init__()
        self.freq = freq  # how often (in epochs) to change the parameter

    def on_epoch_end(self, epoch, logs=None):
        if epoch % self.freq == 0 and epoch > 0:
            weights = self.model.get_weights()
            # here you change the weight you want, e.g. it is the 5th entry
            weights[4] = weights[4] / 10
            self.model.set_weights(weights)
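Another way to do this, sketched below under the assumption that the custom layer stores its parameter as a non-trainable tf.Variable (the ScaledReLU layer here is hypothetical), is to update the variable directly with assign from inside the callback; this avoids having to know the right index into get_weights():

import tensorflow as tf

# Hypothetical custom activation layer that keeps its parameter in a
# non-trainable variable, so a callback can update it in place.
class ScaledReLU(tf.keras.layers.Layer):
    def __init__(self, alpha=1.0, **kwargs):
        super().__init__(**kwargs)
        self.alpha = tf.Variable(alpha, trainable=False, dtype=tf.float32)

    def call(self, inputs):
        return tf.nn.relu(self.alpha * inputs)

class AlphaScheduler(tf.keras.callbacks.Callback):
    def __init__(self, freq, new_alpha):
        super().__init__()
        self.freq = freq
        self.new_alpha = new_alpha

    def on_epoch_end(self, epoch, logs=None):
        if epoch > 0 and epoch % self.freq == 0:
            # Update the parameter of every custom activation layer in the model
            for layer in self.model.layers:
                if isinstance(layer, ScaledReLU):
                    layer.alpha.assign(self.new_alpha)

# Usage (model, data and values are placeholders):
# model.fit(X, y, epochs=20, callbacks=[AlphaScheduler(freq=5, new_alpha=0.5)])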
H: Predicting the likelihood that a prediction from a linear regression model is accurate So to set up the problem: I have a data set with labeled data like colour, brand and quality as independent variables, and the dependent variable is RRP (price). I have made a linear regression model using this data and can predict the dependent variable from the independent variables (I am using scikit-learn, so just using model.predict). This is causing me significant problems and I'm not sure if this is the right way to deal with it, or whether it will hinder my goal of getting accurate values for the predicted variable. Is there a way to calculate the potential accuracy of the price that is predicted? It seems to me that if I ask the model to predict on brand x and quality y, and the model knows that brand x with that quality always produces a tight range of prices, the accuracy is potentially higher? AI: In case you are talking about providing a certain interval for your predictions, what you might need is to add a confidence interval to your linear regression predictions, something you can do via a resampling method like bootstrapping, which is a robust way to find prediction intervals. One key advantage is that it does not assume any kind of distribution: it is a distribution-free method for building prediction intervals and, if needed, intervals for your regression coefficient estimates. The steps would be (a sketch is given at the end of this answer):

1. Draw n random samples (with replacement) from your dataset, where n is the bootstrap sample size
2. Fit a linear regression on the bootstrap sample from step 1 and predict a value
3. Take a single residual at random from the original regression fit, add it to the predicted value and save the result
4. Repeat steps 1 to 3 several times (1k times for instance)
5. Find the desired percentiles of your interval (2.5th to 97.5th for instance)

Source of info in this book.

On the other hand, if you mean providing a generic confidence metric value for your model, you could for instance compute the MSE or MAE on a test set.
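Here is a minimal sketch of the bootstrap steps above for a prediction interval on a single new observation. The data is synthetic (stand-ins for encoded colour/brand/quality), and 1,000 resamples with the 2.5th/97.5th percentiles follow the example values in the steps:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                    # stand-in for encoded colour/brand/quality
y = X @ np.array([5.0, -2.0, 1.0]) + 50 + rng.normal(scale=3.0, size=n)

x_new = np.array([[0.5, -1.0, 0.2]])           # the point we want an interval for

# Residuals from the fit on the original data (step 3 draws from these)
base_model = LinearRegression().fit(X, y)
residuals = y - base_model.predict(X)

preds = []
for _ in range(1000):                          # steps 1-4
    idx = rng.integers(0, n, size=n)           # resample rows with replacement
    model = LinearRegression().fit(X[idx], y[idx])
    preds.append(model.predict(x_new)[0] + rng.choice(residuals))

lower, upper = np.percentile(preds, [2.5, 97.5])   # step 5
print(f"95% prediction interval: [{lower:.2f}, {upper:.2f}]")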
H: Is there a quantitative way to determine if a class of algorithms tends to produce low bias or low variance models? I understand that some machine learning models tend to be low bias, whereas others tend to be low variance (source). As an example, a linear regression will tend to have low variance error and high bias error. In contrast, a decision tree will tend to have high variance error and low bias error. Intuitively this makes sense because a decision tree is prone to overfitting the data, whereas a linear regression is not. However, is there a more quantitative way to determine whether a class of algorithms tends to produce low bias or low variance models? AI: It's more a matter of the complexity of the model than of the class of algorithms. Of course some classes of algorithms produce more complex models than others by construction, but this is not always the case. For example, the complexity of a Decision Tree usually depends on its options/hyper-parameters: maximum depth, pruning, minimum number of instances in a branch. If these parameters are set to produce a small tree then the risk is bias (underfitting), but if they are set to produce a large tree then the risk is variance (overfitting). The complexity of a model depends mostly on the number and nature of its parameters, so as a first approximation the number of parameters is a reasonable quantitative measure of complexity (see this closely related question). Also keep in mind that most models try to use all the features they are provided with, so the number of features also has a high impact on the model complexity.
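For a more empirical handle on this, one option (my own sketch, not from the answer above) is to estimate bias and variance directly by refitting a model on many bootstrap resamples and decomposing its error on a fixed test set; the comparison between linear regression and a fully grown decision tree below is only illustrative:

import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def bias_variance(estimator, X_tr, y_tr, X_te, y_te, rounds=200, seed=0):
    # Bootstrap estimate of squared bias and variance on a fixed test set.
    # Note: the irreducible noise gets folded into the bias^2 term here,
    # since the noiseless targets are not available.
    rng = np.random.default_rng(seed)
    preds = np.empty((rounds, len(y_te)))
    for r in range(rounds):
        idx = rng.integers(0, len(y_tr), size=len(y_tr))   # bootstrap resample
        preds[r] = estimator.fit(X_tr[idx], y_tr[idx]).predict(X_te)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - y_te) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

X, y = make_friedman1(n_samples=600, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

for name, est in [("linear regression", LinearRegression()),
                  ("fully grown decision tree", DecisionTreeRegressor())]:
    b, v = bias_variance(est, X_tr, y_tr, X_te, y_te)
    print(f"{name}: bias^2 (plus noise) = {b:.2f}, variance = {v:.2f}")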