H: Keras P/R metrics at different thresholds during training I'm training a binary classifier and I'd like to see Precision/Recall metrics at different thresholds. Tensorflow 2.3 introduced tf.keras.metrics.Precision and tf.keras.metrics.Recall which take a thresholds parameter, where you can specify one or multiple thresholds for which you want the metrics computed. This all works as advertised i.e. m = tf.keras.metrics.Precision(thresholds=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6]) m.update_state([0, 1, 0, 1], [0.4, 0.5, 0.3, 0.8]) m.result().numpy() Returns the precision value at each threshold [0.5, 0.5, 0.6666667, 1., 1., 1.], as per the documentation. However when passed as metrics to Model.compile I get a single metric regardless of how many thresholds I have. pr_thresholds = list(np.arange(0.05, 0.95, 0.05)) model.compile( 'adam', 'binary_crossentropy', metrics=[ keras.metrics.Precision(thresholds=pr_thresholds), keras.metrics.Recall(thresholds=pr_thresholds), ] ) I get Epoch 34/50 395/395 [==============================] - 22s 54ms/step - loss: 0.4314 - precision: 0.7886 - recall: 0.9008 - val_loss: 0.5113 - val_precision: 0.7434 - val_recall: 0.8769 What's happening here ? Does it always use the default threshold value of 0.5 in this case ? Is there a way I can get it to display the values for multiple thresholds during training ? AI: You can see the metrics value for each threshold along the fitting process if you explicitely instantiate the corresponding metric class for each threshold, as follows: model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-2), loss='categorical_crossentropy', metrics=[metrics.Recall(thresholds=0.6), metrics.Recall(thresholds=0.9)]) model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test)) and as you can see in the image below, for each epoch you can see that the first recall value (with threshold 0.6) is higher than the second one (threshold 0.9) as expected: And for your case, to build the list of metrics objects programatically, where you can now see 3 recalls per epoch: thresholds = [0.6, 0.7, 0.9] metrics_objs_list=[metrics.Recall(thresholds=thr) for thr in thresholds]
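Building on that answer, a minimal sketch (the threshold grid and the already-built model are placeholders) that also gives each metric object its own name, so every threshold shows up as a separate column in the training log:

from tensorflow import keras

thresholds = [0.25, 0.5, 0.75]  # example values; substitute your own grid
metric_objs = []
for thr in thresholds:
    metric_objs.append(keras.metrics.Precision(thresholds=thr, name=f'precision_{int(thr * 100)}'))
    metric_objs.append(keras.metrics.Recall(thresholds=thr, name=f'recall_{int(thr * 100)}'))

model.compile('adam', 'binary_crossentropy', metrics=metric_objs)  # `model` assumed to exist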
H: Neural Network for Multiple Dependent Outputs I have a dataset with approx 6 input features and 5 output values to be predicted. I am trying to understand what kind of neural network would be most suitable to assign probability across multiple dependent outputs, with the motivation to maximise the value of a single feature. For example, suppose the inputs are: company size number of employees turnover average salary country years of operation And the outputs to be predicted are: % budget allocated to marketing % budget allocated to sales % budget allocated to R&D % budget allocated to training % budget allocated to shareholders The training dataset also contains the column: profit Which is what I want to optimise against (i.e. the primary motivation of the model). For a given set of inputs (company size, number of employees, turnover, average salary, country, years of operation), I want to be able to predict values for the 5 outputs which are most likely to achieve the highest 'profit'. The sum of the 5 outputs must equal 100%. In other words, there is a finite budget to be allocated, and I want to create a model to predict the best budget allocation to maximise profit. What neural network would be most suitable for this purpose? I have looked into multi-output regression, however I imagine this would assign independent probabilities for each of the 5 outputs (i.e. they won't add up to 100%). Is it possible for probability to instead be assined to each output dependent of the other outputs (i.e. so they add up to 100%)? If so, is there a name for this type of approach? AI: In many cases, at the end of a neural network you could find a Softmax layer, which outputs probabilities, so they add up to 1. It seems exactly what you're looking for. Please find more information about softmax function here or here. Hope it helps.
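A minimal Keras sketch of such an output layer (the hidden-layer size and loss are illustrative assumptions, not part of the original answer): the softmax activation forces the 5 predicted budget shares to sum to 1, i.e. 100%.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(6,)),               # the 6 company features
    layers.Dense(32, activation='relu'),    # hidden layer size is an arbitrary choice
    layers.Dense(5, activation='softmax'),  # 5 budget shares that sum to 1
])
model.compile(optimizer='adam', loss='categorical_crossentropy')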
H: Keras layer weights shape is different compared to other conventions I have been looking at the layers.weights output of Keras layers. The shape of the layer weight matrix is listed as (number_of_inputfeatures, dense_layer_neurons). The first example in docs. However, all the theoretical courses I saw, as well as in PyTorch, layers have weight matrix shape the opposite way where weight matrix shape is given by (dense_layer_neurons, input_features) or (layer_2_neurons, layer_1_neurons) https://www.coursera.org/lecture/neural-networks-deep-learning/getting-your-matrix-dimensions-right-Rz47X Why are these two conventions opposite to each other? Am I missing anything? Can someone please clarify. Thanks. AI: The two conventions are just transposes of each other; which one you see depends on how the weight matrix is shaped relative to the input data. Keras computes outputs = inputs @ kernel + bias with inputs of shape (batch, input_features), so the kernel is stored as (input_features, units). PyTorch and most course notation instead write y = Wx (equivalently y = x W^T), so the weight is stored as (out_features, in_features). Both hold exactly the same parameters, only in transposed orientation. This article works through the dimension bookkeeping in more detail: https://medium.com/from-the-scratch/deep-learning-deep-guide-for-all-your-matrix-dimensions-and-calculations-415012de1568
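A small sketch that makes the two conventions visible side by side (layer sizes are arbitrary examples):

import tensorflow as tf
import torch

dense = tf.keras.layers.Dense(4)
dense.build(input_shape=(None, 3))
print(dense.kernel.shape)   # (3, 4): (input_features, units), used as outputs = inputs @ kernel + bias

linear = torch.nn.Linear(in_features=3, out_features=4)
print(linear.weight.shape)  # torch.Size([4, 3]): (out_features, in_features), used as y = x @ W.T + b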
H: Tensorflow gradient returns nan or Inf I am trying to implement a WGAN-GP model using Tensorflow and Keras (for credit card fraud data from kaggle). I mostly followed the sample code that is provided in Keras website and several other sample codes on the internet (but changed them from image to my data), and it is pretty straightforward. But when I want to update the critic, the gradient of loss w.r.t critic's weights becomes all nan after a few batches. And this causes the critic's weights to become nan and after that the generator's weights become nan,... So everything become nan! I used tf.debugging.enable_check_numerics and found that the problem arises because a -Inf appears in the gradient after some iterations. This is directly related to the gradient-penalty term in the loss, because when I remove that the problem goes away. Please note that the gp itself is not nan, but when I get the gradient of the loss w.r.t critic's weights (c_grads in the code below) it contains -Inf and then somehow becomes all nan. I checked the math and network architecture for possible mistakes (like probability of gradient vanishing, etc.), and I checked my code for possible bugs for hours and hours. But I'm stuck. I would very much appreciate it if anyone can find the root of the problem Note: Bear in mind that the critic's output and loss function is slightly different from the original paper (because I'm trying to make it conditional) but that has nothing to do with the problem because as I said before, the whole problem goes away when I just remove the gradient penalty term This is my critic: critic = keras.Sequential([ keras.layers.Input(shape=(x_dim,), name='c-input'), keras.layers.Dense(64, kernel_initializer=keras.initializers.he_normal(), name='c-hidden-1'), keras.layers.LeakyReLU(alpha=0.25, name='c-activation-1'), keras.layers.Dense(32, kernel_initializer=keras.initializers.he_normal(), name='c-hidden-2'), keras.layers.LeakyReLU(alpha=0.25, name='c-activation-2'), keras.layers.Dense(2, activation='tanh', name='c-output') ], name='critic') This is my gradient-penalty function: def gradient_penalty(self, batch_size, x_real, x_fake): # get the random linear interpolation of real and fake data (x hat) alpha = tf.random.uniform([batch_size, 1], 0.0, 1.0) x_interpolated = x_real + alpha * (x_fake - x_real) with tf.GradientTape() as gp_tape: gp_tape.watch(x_interpolated) # Get the critic score for this interpolated data scores = 0.5 * (self.critic(x_interpolated, training=True) + 1.0) # Calculate the gradients w.r.t to this interpolated data grads = gp_tape.gradient(scores, x_interpolated) # Calculate the norm of the gradients # Gradient penalty enforces the gradient to stay close to 1.0 (1-Lipschitz constraint) gp = tf.reduce_mean(tf.square(tf.norm(grads, axis=-1) - 1.0)) return gp And this is the critic's update code # Get random samples from latent space z = GAN.random_samples((batch_size, self.latent_dim)) # Augment random samples with the class label (1 for class "fraud") for conditioning z_conditioned = tf.concat([z, tf.ones((batch_size, 1))], axis=1) # Generate fake data using random samples x_fake = self.generator(z_conditioned, training=True) # Calculate the loss and back-propagate with tf.GradientTape() as c_tape: c_tape.watch(x_fake) c_tape.watch(x_real) # Get the scores for the fake data output_fake = 0.5 * (self.critic(x_fake) + 1.0) score_fake = tf.reduce_mean(tf.reduce_sum(output_fake, axis=1)) # Get the scores for the real data output_real = 0.5 * (self.critic(x_real, 
training=True) + 1.0) score_real = tf.reduce_mean((1.0 - 2.0 * y_real) * (output_real[:, 0] - output_real[:, 1])) # Calculate the gradient penalty gp = self.gp_coeff * self.gradient_penalty(batch_size, x_real, x_fake) # Calculate critic's loss (added 1.0 so its ideal value becomes zero) c_loss = 1.0 + score_fake - score_real + gp # Calculate the gradients c_grads = c_tape.gradient(c_loss, self.critic.trainable_weights) # back-propagate the loss self.c_optimizer.apply_gradients(zip(c_grads, self.critic.trainable_weights)) Also Note: As you can see, I don't use any cross entropy or other self-written functions with the risk of division-by-zero. AI: So after much more digging into the internet, it turns out that this is because of the numerical instability of tf.norm (and some other functions as well). In the case of norm function, the problem is that when calculating its gradient, its value appears in the denominator. So d(norm(x))/dx at x = 0 would become 0 / 0 (this is the mysterious division-by-zero I was looking for!) The problem is that the computational graph sometimes ends up with things like a / a where a = 0 which numerically is undefined but the limit exists. And because of the way tensorflow works (which computes the gradients using the chain rule) it results in nans or +/-Infs. The best way probably would be for tensorflow to detect these patterns and replace them with their analytically-simplified equivalent. But until they do so, we have another way, and that is using something called tf.custom_gradient to define our custom function with our custom gradient (related issue on their github) Although in my case there was actually an even simpler solution (although it wasn't simple when I didn't know that the tf.norm was the culprit): So instead of: tf.norm(x) You can use: tf.sqrt(tf.reduce_sum(tf.square(x)) + 1.0e-12) Note: Be careful about dimensions (if x is a matrix or tensor and you need to calculate row-wise or column-wise norms)! this is just a sample code to demonstrate the concept Hope it helps someone
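As a sketch of that workaround inside the gradient_penalty function (the epsilon value is the one suggested above; the helper name is made up):

import tensorflow as tf

def safe_l2_norm(x, axis=-1, eps=1.0e-12):
    # gradient of sqrt stays finite because the argument never reaches exactly 0
    return tf.sqrt(tf.reduce_sum(tf.square(x), axis=axis) + eps)

# inside gradient_penalty, instead of tf.norm(grads, axis=-1):
# gp = tf.reduce_mean(tf.square(safe_l2_norm(grads, axis=-1) - 1.0))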
H: Compare two tends with big difference in absolute value I'm studying the Spotify streams trend for different nations in order to show the impact of the pandemic on music listeners. So far i retrieved my data, and plotted it. Obviously, since the various nations have different numbers in population, spotify users ecc... this graphic doesn't tell much. So i decided to scale every curve dividing its values by its peak. So for example, the US has a maximum value of 3.50 million streams, i normalized that curve with that value and i did the same for all the other states (with their maximum value) and I obtained this: Could this be a good approach? In general which approach should I use if i want to compare different curves which have a really big difference in absolute values? EDIT : In the end i normalized my data using a zscore normalization for each single line. So basically I computed the mean and standard deviation for each state and then I normalized the single state with its mean and std. This is the resulting plot: Is this a good approach? Can I now compare the different trends and conclude that there's an overall decrease in the period between 10 and 15 weeks? AI: I think it would be better to use a standard scaler that removes the mean and divides by the standard deviation. See here for more info and an implementation using sklearn. Why? At least you should be aware that dividing by the maximum could hide smaller effects. In the case you have an outlier that has a very high value, you would loose the small changes in the corresponding curve. Moreover, you might not compare the same changes between all the curves. Edit On the question when to use standard scaler vs minmax scaler, I think a good start is the sklearn preprocessing page that deeply explains both. Then in this post, @Sammy pointed "Python Machine Learning" book by Raschka. The author provides some guidance on page 111 when to normalize (min-max scale) and when to standardize data that I requote here: Although normalization via min-max scaling is a commonly used technique that is useful when we need values in a bounded interval, standardization can be more practical for many machine learning algorithms. The reason is that many linear models, such as the logistic regression and SVM, [...] initialize the weights to 0 or small random values close to 0. Using standardization, we center the feature columns at mean 0 with standard deviation 1 so that the feature columns take the form of a normal distribution, which makes it easier to learn the weights. Furthermore, standardization maintains useful information about outliers and makes the algorithm less sensitive to them in contrast to min-max scaling, which scales the data to a limited range of values. Here are some pictures to go with the Why? section above.
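A one-line pandas sketch of the per-country standardization described in the edit (the long-format column names are assumptions):

import pandas as pd

# df is assumed to be in long format with columns 'country', 'week', 'streams'
df['streams_z'] = df.groupby('country')['streams'].transform(lambda s: (s - s.mean()) / s.std())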
H: model selection in clustering I am working on a mall customer segmentation dataset (5 features, 200 rows) using clustering. This dataset does not have any ground truth labels. I had a few doubts regarding clustering: Can I use model selection in clustering using the silhouette score? - Since my dataset does not have any ground truth labels, I read on the sklearn documentation that you can use Silhouette score to evaluate the performance of the model. Can I use different clustering techniques (like K Means, DBSCAN, Mean shift, etc.) and select the model with the highest silhouette score? The idea is sort of similar to how we do model selection in supervised learning except in the latter we use cross validation. How do I detect overfitting in clustering? Since the dataset has no labels, I cannot think of a way to identify if the model is overfitting the data. How do I plot the final clusters when my dataset has more than 2 dimensions? I have seen a lot of visualizations around clustering (like the one below): Should I use PCA to reduce the features to 2 and then plot the clusters? or is there another way to do this? AI: To answer your initial question, yes you can use silhouette score with different clustering methods. You could also use the Davies-Bouldin Index or the Dunn Index. Regarding over-fitting, (this is my personal suggestion) but you could train the model n times on different types of the same data to see if there clustering is the same even though the values are changed. Short example: If you have to cluster 5 apples and 6 oranges, the cluster should be the same for 10 apples and 12 oranges. You can find a bit more detail on this here: https://datascience.stackexchange.com/a/20292/103857 For your third query: Calculate distances between data points, as appropriate to your problem. Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to cluster membership. (source for third query: https://stats.stackexchange.com/a/173823) Regarding pca, its subjective. PCA works well with high correlation. If your dimensions are like apples and oranges then your directly effecting your models performance, so do keep that in check. A bit of eda would help before you dive into that.
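A sketch of that model-selection loop (X is the assumed 200 x 5 feature matrix; the hyperparameters are placeholders):

from sklearn.cluster import DBSCAN, KMeans, MeanShift
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)
for name, model in [('kmeans', KMeans(n_clusters=5, random_state=0)),
                    ('dbscan', DBSCAN(eps=0.5, min_samples=5)),
                    ('meanshift', MeanShift())]:
    labels = model.fit_predict(X_scaled)
    if len(set(labels)) > 1:   # silhouette needs at least 2 clusters
        print(name, silhouette_score(X_scaled, labels))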
H: Do I need to square a column if I want a neural network to try using that? In the classical linear regression implementation, if I suspect the square of the values of the column is correlated to the target, then I actually need to create a new column with the squares for the algorithm to make use of that. Is this also necessary when using neural networks? I know it's a broad question - are there cases where this is necessary and cases where it isn't? AI: You don't necessarily need to, according to the universal approximation theorem. It is easier for a neural network to learn an identity function than some other function, so if one of the inputs definitely needs to be squared, your network will learn faster if you pass the input already squared. If your network is sufficiently large, it should work out that squaring that input is helpful and approximate the squaring function as part of the overall learning process.
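If you do decide to pass the squared column explicitly, a tiny sketch of the augmentation (the data here is a random stand-in for the real features):

import numpy as np

X = np.random.rand(100, 3)               # stand-in for the original feature matrix
X_aug = np.hstack([X, X[:, [0]] ** 2])   # append the square of the suspected column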
H: Problem regarding calculating correlation approach? There are 3 columns say product_id, product_type, and price_drop. product_id is unique while a product can belong to say 100 classes that information is given by product_type. price_drop column value is 1 when a drop in the price of the product occurred else 0. What I have done is, I one-hot encoded the product_type column and created dummy variables for them. Using each dummy variable I calculated their correlation with the price_drop column. I wanted to see the correlation between each product type and a drop in price. Is this approach correct? AI: Most probably, you are using Pearson's correlation method. This method is used for two continuous features. Here, both the price_drop and the OHE features are binary categorical features. So, you can use these methods - Phi - Phi is a measure of the degree of association between two binary variables (two categorical variables, each of which can have only one of two values) Cramér's V - Cramér's V is an extension of phi for tables larger than 2×2. Both are extensions of the Chi-square test of independence. Since both the features have 2 values, both of the above methods will output the same result. import numpy as np import pandas as pd from scipy.stats import chi2_contingency # dataset is your DataFrame s1 = dataset['price_drop'] s2 = dataset['product_type_OHE_01'] # one of the dummy columns n = len(s1) r,c = s1.nunique(), s2.nunique() matrix = pd.crosstab(s1,s2).values chi_sq = chi2_contingency(matrix) phi = np.sqrt(chi_sq[0]/n) cramm_V = np.sqrt(chi_sq[0]/(n*min(r-1,c-1))) print(phi, cramm_V)
H: What does a blue empty cell mean [ ] in Anaconda if the cell above is still [*]? I've been running a machine learning algorithm for two days now, I have a large dataset and a slow machine. I noticed today that the cell below the "fit model" cell turned from green to blue, however, the cell above (the one with fit) is still running [*] AI: The cell with [*] means it is currently running, you can queue other cells for execution but until this cell finishes they will not run. When it is empty [] it means you haven't run this cell.
H: How should I configure my Neural Network to accept a column of numpy.ndarrays as input? I have a dataframe with 10k rows and the following columns: array target [1,5,6,1,3,etc...] 5 [3,3,1,0,5,etc...] 10 [0,0,1,1,7,etc...] 3 . . . . . . Each array has 33222 elements, so I have 10,000 rows each with a 33,222 long numpy.array that I want to input into a Neural Network to predict the target variable. Here is how the NN is configured: x = df.loc[:, 'array'] y = df.loc[:, 'target'] model = Sequential() model.add(Dense(12, activation='linear')) model.add(Dense(1, activation='linear')) model.compile(loss='mse', optimizer='adam', metrics=['accuracy', 'mse', 'mae']) model.fit(x, y, epochs=10, batch_size=1, verbose=1) I'm getting "ValueError: Please provide as model inputs either a single array or a list of arrays" I haven't attempted to use arrays as input for a NN before, so I would also appreciate any advice on the optimal layer choice and configuration for this kind of problem. AI: You should set the input_shape parameter in the first dense layer. Like this: model = Sequential() model.add(Dense(12, activation='linear', input_shape=(33222,))) model.add(Dense(1, activation='linear'))
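Putting the answer together as a sketch (the np.stack step and the batch size are assumptions on top of the answer: Keras expects one 2-D array rather than a pandas Series of arrays):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

x = np.stack(df['array'].values)   # (10000, 33222) matrix instead of a Series of arrays
y = df['target'].values

model = Sequential()
model.add(Dense(12, activation='linear', input_shape=(33222,)))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(x, y, epochs=10, batch_size=32, verbose=1)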
H: Equivalence of two separate data distribution I am working in generative modelling. I have a real dataset of which I want to know the underlying distribution. For this I create a synthetic data generator which tries to mimic the real data. I train a classifier on one part of the real dataset. Then I test on the artificial data as well as the testset from the real data. If my specified model is predicting equally well (say in terms of classification accuracy) on two unrelated datasets; can I assume/conclude that the two datasets follow the same distribution? The parameters of the predictive model remain exactly the same in both the cases. Thanks for your time! AI: If my specified model is predicting equally well (say in terms of classification accuracy) on two unrelated datasets; can I assume/conclude that the two datasets follow the same distribution? The parameters of the predictive model remain exactly the same in both the cases. No you cannot, because there is no general equivalence between the two datasets having the same distribution and their performance being identical. There is only a one-way implication: if the two datasets have the same distribution, then their performance is identical. There are actually many ways in which the datasets could have the same performance without following the same distribution, for instance if the design of the model doesn't properly represent the data, or simply by chance. So the exact same performance is only compatible with the hypothesis that the datasets have the same distribution, but it doesn't prove it. As far as I know, proving that two datasets have the same distribution with a complex set of features is a difficult problem. Additionally if the original dataset is small it's difficult to know how representative it is of the true distribution.
H: I want to start studying the field of machine translation I've studied Japanese language and literature and passed some linguistic courses and now as for my masters, I want to study natural language processing and especially machine translation. so I tried taking some data science courses online and I'm now a little bit familiar with data science but I know literally nothing about machine translation. so, long story short, I need to write a proposal in the machine translation field (university requirements) but I don't know where to start reading about machine translation. I tried to read some essays but the level was too high for me, I didn't understand a single thing. I'd be so thankful if you could guide me through this journey. thank you ^__^ AI: As you certainly know, Machine Translation (MT) is a very challenging and useful task in the domain of Natural Language Processing (NLP). As such it is a very specialized research domain but also a very active area of research, and a very competitive one (in particular due to commercial applications, obviously). So there's a massive amount of research being done and a massive amount of resources, but in order to gain real expertise in MT one needs to acquire quite a lot of background knowledge. Let's be clear: a beginner level in data science is not sufficient to understand state of the art MT. Typically one needs not only a good level in statistics and programming, but also knowledge of the recent progress in MT: the old statistical MT approach (e.g. Moses) has been replaced with better Neural MT approaches. A slightly less ambitious objective would be to study the limitations of current MT systems, since this doesn't require understanding how they work. Note that even simply training a state of the art model using existing software is not trivial, and requires quite a lot of computational resources. I'd suggest looking at the resources and papers published at the Workshop on Machine Translation (check also the previous years). Note also that there are many sub-tasks related to MT to look at: model design evaluation metrics building training corpora quality estimation post-editing The WMT Shared Tasks offer datasets for these different sub-tasks. Reading the overview paper for a task is a good way to get an idea of what it is and how it's done.
H: Train/Test size and bias I'm running a classifier (logistic regression). The information on my dataset are the following: dataset size= 279 observations (80/20 rule) train size= 233 test size = 56 # of events in train = 31 # of events in test = 8 I think my classifier and results may be affected due to this not equal proportion. Is there any way to avoid bias issues and improve accuracy? What do you personally think of such data? AI: If you're referring to the fact that your dataset is small: You should use k-fold cross validation. This will let you evaluate your model on all 279 instances If you're referring to the class imbalance being 31:202 in train and 8:48 in test: Use AUROC and PRC to eliminate bias in thresholding Also see MCC
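A sketch combining both suggestions (X and y are the assumed 279 observations and binary labels): stratified k-fold keeps the event proportion similar across folds, and scoring with ROC AUC avoids committing to the default 0.5 threshold.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring='roc_auc')
print(scores.mean(), scores.std())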
H: onehotencoder random forest In a Random Forest context, do I need to setup dummies/OnehotEncoder in a dataset where features/varibles are numerical but refer to some kind of category? Let's say I have the following variables: Where Y is the variable I want to predict. X's are features. I will focus on X1. Its numerical but refers to a specific category (i.e. 1 refers to math, 2 refers to literature and 3 for history). Do I need to apply OnehotEncoder (or dummy approach) for a Random Forest algoritm? I guess I don't need to do it, but I 'm not sure. AI: In theory, categories themselves can be handled in decision trees. However, most python implementations will only work with numbers, so you need to convert them. It looks like you already have. OneHot Encoding is one way to do that, but now that you've done that, it isn't necessary. You may try other schemes to turn them into 1 2 3, if there isn't a logical reason to how you have them this way. This is called Categorical Encoding. One popular way is to order them alphabetically and number them. Another way, in order of their frequency. At the end of the day, it won't matter a whole lot because the random forest will bin the number according to its algorithm. It might make minor differences though, so maybe try different ways. But no, you won't need to OneHot encode your categories because they are already numbers.
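A small sketch of both options for the table above (the DataFrame, its column names and the classification setting are assumptions):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

X_codes = df[['X1', 'X2', 'X3']]                                   # X1 kept as integer codes
X_onehot = pd.get_dummies(df[['X1', 'X2', 'X3']], columns=['X1'])  # one-hot alternative

rf = RandomForestClassifier(random_state=0)
rf.fit(X_codes, df['Y'])   # works directly; swap in X_onehot to compare via cross-validation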
H: Is autocorrelation of residuals a problem in machine learning? Let's assume I have a random forest model and the residuals of the model are autocorrelated. Is this a problem? As an example, let's assume I have two different random forest models, A and B, with a similar predictive performance. The residuals of model A are less autocorrelated than the residuals of model B. Should I prefer model A? AI: Yes, autocorrelation in residuals is a problem, but this is essentially because it is a clear illustration that there was more learnable information in the process you are modelling but your model missed it. In the unlikely event that you have two equally performant models but one shows significant autocorrelation (you can test for this using the Durbin-Watson test as suggested in Noah Weber’s answer), this suggests neither model is working as well as we might hope (the autocorrelated model has failed to predict some predictable patterns and the other model is failing in some other way as its predictive power isn’t any better). If you have two models that have different residuals but both are beating a naïve baseline, you’ve probably got models that will ensemble well.
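A quick sketch of that residual check with statsmodels (the prediction arrays are assumed to come from models A and B on the same test set):

from statsmodels.stats.stattools import durbin_watson

print(durbin_watson(y_test - pred_a))   # values near 2 suggest little autocorrelation
print(durbin_watson(y_test - pred_b))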
H: Clusterize Spectrum I have pandas table which contains data about different observations, each one was measured in different wavlength. These observsations are different than each other in the treatment they have gotten. The table looks something like this: >>>name treatment 410.1 423.2 445.6 477.1 485.2 .... 0 A1 0 0.01 0.02 0.04 0.05 0.87 1 A2 1 0.04 0.05 0.05 0.06 0.04 2 A3 2 0.03 0.02 0.03 0.01 0.03 3 A4 0 0.02 0.02 0.04 0.05 0.91 4 A5 1 0.05 0.06 0.04 0.05 0.02 ... I would like to classify the different observations based on their spectrum (the numerical columns). I have tried to run PCA and to paint it according to the treatment the observations got, and to compare it to the results of classifications like k-means and Spectral clustering, but i'm not sure that I choose the right methods because is seems all the time like the clusters are too much like euclidean distance and i'm not sure that they take into account the spectrum (I have used all the numerical columns for the prediction). This is for exampel the comparison between the PCA+Colors compared to the Spectral cllasification: PCA: classification( the points located according to PCA1 PCA2 but the colores are according the the classification: as you can see here, it seems like the classification is based on real distance and I would like something that take into account all the numerical values. So, i'm looking for any insights regard other methods of classifications that could give me better results or maybe other ideas how I can check if there are clusters inside my data based on the measurments in different columns, like if I could predict the treatment from the clusters AI: This sounds like a normal supervised classification task. Have you tried other standard methods like Support Vector Machines, RandomForests, Gradient Boosting, kNN, Neural Networks etc. as well or is there a particular reason why you only tried clustering methods. Clustering methods like kmeans or spectral clustering are usually used in an unsupervised setting where class memberships are not available. Often they make certain assumptions about the data which might be violated, e.g. kmeans assumes spherical clusters, which is clearly not the case for your data.
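A sketch of the supervised route suggested above, predicting the treatment directly from the spectral columns (df and its column names follow the table in the question):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = df.drop(columns=['name', 'treatment'])   # the wavelength columns
y = df['treatment']
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(scores.mean())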
H: How to expand abbreviations in text during preprocessing? I'm doing preprocessing on English text data. I have some domain-specific abbreviations, for which I'm maintaining an internal dictionary with key-value pairs. The problem I'm facing is that the text has abbreviations in plural forms, with and without contractions, like: Mgr's = manager mgrs = manager mgr = manager All 3 point to a manager. I'm able to capture the plural form with contractions using a regex (r"'s") and removing the 's', but in the case of no contractions I'm creating one more entry in the dictionary with the plural form of the abbreviation. I somehow feel this is duplication and not a clean approach. Is there any better solution to address this problem? Any immediate help on this is much appreciated. Thank you AI: Do it in two steps: replace the ones with 's' first, then do the rest of them.
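A sketch of that two-step idea (the dictionary entry and helper function are illustrative only, and the plural-stripping rule is deliberately naive):

import re

abbrevs = {'mgr': 'manager'}   # example entry, not the real internal dictionary

def expand(token):
    key = re.sub(r"('s|s)$", '', token.lower())   # step 1: strip 's, then a plain plural s
    return abbrevs.get(key, token)                # step 2: look up the singular form

print([expand(t) for t in ["Mgr's", 'mgrs', 'mgr', 'team']])
# ['manager', 'manager', 'manager', 'team']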
H: ValueError: Found array with 0 sample(s) (shape=(0, 13)) while a minimum of 1 is required I am new to this logistic regression and finding this error. Please help. solver='lbfgs' def lr_model(X_train, y_train): # import the LogisticRegression class from sklearn.linear_model import LogisticRegression # create the model object model = LogisticRegression() # fit the model to the training data model.fit(X_train, y_train) #error here time.sleep(0.1) #return the fitted model return model # Test tmp_model = lr_model(X_train[0:12], y_train[0:12] ) #showing error here print(tmp_model.predict(X_train[4:5])) print(tmp_model.predict(X_train[5:6])) AI: It looks like there are no rows in your X_train array or y_train array. Check the shape and contents of X_train using np.shape(X_train) if X_train is an array or X_train.shape if X_train is a pandas dataframe. Also, if you are using a pandas dataframe for X_train, you should change tmp_model = lr_model(X_train[0:12], y_train[0:12] ) to tmp_model = lr_model(X_train.iloc[0:12,:], y_train[0:12] ) as this way you are correctly accessing the rows and columns of the pandas dataframe.
H: Is regression or ANOVA the right approach for determining RRP based on a set of RRPs associated with categorical variables Basically I have scraped some retail site data and I would like to know if this is a regression problem: I have data from the website about shoes, like RRP, what brand they are, what colour they are and what size, and I collate that data to build a model such that I can give the model known variables about a new pair of shoes (brand, size, colour) to determine its likely RRP based on the set of shoes I already know all the variables and RRP for. Should I use ANOVA or regression? My reasoning for regression is that it seems like a classic problem in that price is a continuous variable. But I'm unsure how to go about this; a lot of the regression examples I see use continuous variables for both the predictor and outcome variables, so maybe ANOVA is better? AI: ANOVA is a linear regression...a regression on categorical variables. Regressions can have any mix of continuous and categorical variables. Some, like ANOVA and ANCOVA, have special names that you may want to use when you're communicating with people who have less training in statistics and don't know how to unify them with linear regression.
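For illustration, a sketch of that regression-on-categorical-variables view (the DataFrame and column names are assumptions): dummy-encoding the categorical predictors and regressing RRP on them is exactly the ANOVA-as-regression setup.

import pandas as pd
from sklearn.linear_model import LinearRegression

X = pd.get_dummies(df[['brand', 'colour', 'size']], columns=['brand', 'colour'])
model = LinearRegression().fit(X, df['rrp'])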
H: Getting sentence embeddings with sentence_transformers I have a text column in my data frame which contains paragraph(s) having multiple and variable sentences in each instance/example/row of the dataframe. Then, I created the sentence tokens of that paragraph using sent_tokenizer of nltk and put it into another column. So my data frame looks like this: index text class 0 ["Hello i live in berlin", 'I'm xxx'] 1 1 ["My name is xx", "I have a cat", "Love is life"] 0 now when I'm using: from sentence_transformers import SentenceTransformer model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens') sentences = df['text'] sentences = sentences.tolist() embeddings = model.encode(sentences) I'm getting: TypeError: expected string or bytes-like object The encode method is not taking a list of list of sentences as an argument. AI: I finally solved this problem. My dataframe looks like this: index text class 0 ["Hello i live in berlin", 'I'm xxx'] 1 1 ["My name is xx", "I have a cat", "Love is life"] 0 Text column contains list of sentences in each row. I applied following function: df['Embeddings'] = df['text'].apply(lambda x: model.encode(x)) It created a new column called Embeddings. Embeddings column now contains list of vectors of size 768 for each row. Now I will apply the average function using lambda on my newly created Embedding column's each element which will create a single vector of length 768 for each row and then I will store it in a new column, lets say 'X'. I will then feed X to the SVM along with class labels. Basically, what we are doing is, averaging the number of embedding vectors generated for sentences in text column. So, for e.g. : lets say for index 0 in df['text'] we have two sentences: ["Hello i live in berlin", 'I'm xxx'] Now, after encoding, it will look something like this: [v1,v2] # where length of v1 and v2 vectors is 768 Next, we will take average of these two vectors using np.average. It will result in a single vector: [v] This single vector can now be easily fed to SVM. of-course, we will do this for all the rows and then feed it to the SVM.
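A sketch of the averaging step described above, using the same column names:

import numpy as np

df['X'] = df['Embeddings'].apply(lambda vecs: np.mean(np.vstack(vecs), axis=0))
X = np.vstack(df['X'].values)   # shape (n_rows, 768), ready to feed to the SVM
y = df['class'].values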
H: Is it vital to use weighted metrics with weighted loss functions? When one has an imbalanced machine learning problem, let's say a binary classification problem where class 0 is a majority class and class 1 is a minority class. Here class 1 is the most important class for you. Now your dataset is composed of 100 samples: 70 samples from class 0 and 30 samples from class 1. In this case, usually, one defines and uses a weighted binary cross-entropy loss function to calculate the loss. classweights = torch.FloatTensor([1.000, 1.300]) weighted_loss = torch.nn.BCELoss(weight = classweights) For calculation of metrics like F1-score, is it required that these weights are used? sample_classweights = sklearn.utils.class_weight.compute_sample_weight(class_weight, y_true) classification_report = sklearn.metrics.classification_report(y_true, y_pred, sample_weight=sample_classweights) Any hint or help is appreciated. AI: Calculation of loss and the calculation of metrics on the test set are two different entities. Usually, the weighted loss function is used to weight one of the classes (you can give higher weights to the important class in a balanced or unbalanced class distribution). For metrics like F1, it's always safer to use multiple measures in an unbalanced class distribution. It would be good to keep a check on macro as well as weighted F1. For example: a confusion matrix of the following order: predicted labels: 0 1 true label 0: | 70 | 0 | true label 1: | 30 | 0 | has macro-F1: 0.41 and weighted-F1: 0.58. Further, a confusion matrix of the following order: predicted labels: 0 1 true label 0: | 99 | 0 | true label 1: | 1 | 0 | has macro-F1: 0.50 and weighted-F1: 0.99. Both these cases are unbalanced, where 70 and 99 samples belong to class 0 in the 1st and 2nd case respectively. However, when the dataset is too unbalanced and model predictions are biased towards the majority class, the weighted F1 becomes higher.
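A sketch reproducing the two confusion matrices above with scikit-learn, to make the macro vs. weighted gap concrete:

import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0] * 70 + [1] * 30)
y_pred = np.zeros(100, dtype=int)                     # model predicts only the majority class
print(f1_score(y_true, y_pred, average='macro'))      # ~0.41
print(f1_score(y_true, y_pred, average='weighted'))   # ~0.58

y_true = np.array([0] * 99 + [1] * 1)
y_pred = np.zeros(100, dtype=int)
print(f1_score(y_true, y_pred, average='macro'))      # ~0.50
print(f1_score(y_true, y_pred, average='weighted'))   # ~0.99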
H: Cross validation and evaluation: neural network loss function continuously decreases in cross-validation I am evaluating a neural network model using cross validation in 2 different ways ( A & B ) that I thought were equivalent. Evaluation type A : For each cross validation loop, the model is instantiated and fitted. Evaluation type B : I instantiate the model once and then that instantiated model is fitted for each loop of the cross validation procedure. I am using the metric mean absolute error (MAE). Question: Why do I get a continuously decreasing MAE over cross-validation loops when using type B evaluation and not when using type A evaluation? Code and details First I generate synthetic data : from sklearn.datasets import make_regression X , y = make_regression( n_samples = 1000 , n_features = 10 , n_informative = 5 , n_targets = 1 , random_state = 2 ) I then define a function to get a model ( neural network ) : from keras.models import Sequential from keras.layers import Dense def get_model( n_nodes_hidden_layer , n_inputs , n_outputs ) : model = Sequential() model.add( Dense( n_nodes_hidden_layer , input_dim = n_inputs , kernel_initializer = 'he_uniform' , activation = 'relu' ) ) model.add( Dense( n_outputs ) ) model.compile( loss = 'mae' , optimizer = 'adam' ) return model After that I define 2 evaluation functions using : from sklearn.model_selection import RepeatedKFold from sklearn.metrics import mean_absolute_error Type A evaluation function : def evaluate_model_A( X , y ) : results = list() cv = RepeatedKFold( n_splits = 10 , n_repeats = 1 , random_state = 999 ) for train_ix, test_ix in cv.split( X ) : X_train, X_test = X[ train_ix ] , X[ test_ix ] y_train, y_test = y[ train_ix ] , y[ test_ix ] model = get_model( 20 , 10 , 1 ) model.fit( X_train , y_train , epochs = 100 , verbose = 0 ) y_test_pred = model.predict( X_test ) mae = mean_absolute_error( y_test , y_test_pred ) results.append( mae ) print( f'mae : {mae}' ) return results Type B evaluation function : def evaluate_model_B( model , X , y ) : results = list() cv = RepeatedKFold( n_splits = 10 , n_repeats = 1 , random_state = 999 ) for train_ix, test_ix in cv.split( X ) : X_train, X_test = X[ train_ix ] , X[ test_ix ] y_train, y_test = y[ train_ix ] , y[ test_ix ] model.fit( X_train , y_train , epochs = 100 , verbose = 0 ) y_test_pred = model.predict( X_test ) mae = mean_absolute_error( y_test , y_test_pred ) results.append( mae ) print( f'mae : {mae}' ) return results Before using type B evaluation function I need to instantiate the model because it is an argument of the function : model = get_model( 20 , 10 , 1 ) What I do not understand is the fact that while using type B evaluation function the MAE is decreasing for each cross validation loop which is not the case with type A evaluation function. Is this specific to neural networks? Note : when I am using a RandomForestRegressor() the phenomenon does not show up. AI: In the evaluation type B approach, your neural network weights and biais are not reset before each loop of cross-validation. The neural network is then learning from one loop to another, so you see the MAE continuously decreasing. A solution is to store your weights and biais before fitting the model and load them at each loop so they have the same init. 
You can use those methods to do so: model.save_weights('model.h5') # right after model instantiation model.load_weights('model.h5') # in the loop before fitting In evaluation type A, because you instantiate the model in the loop, weights and biases are reset so you don't see the phenomenon.
H: Why are the ANN training and validation accuracy graphs not smooth? I am currently training an ANN using Keras (Python3), and I am slowly optimizing the model's architecture and came across something I have not seen before. The graph of the training and validation accuracy seems a bit odd. The graph appears 'step-like' in the sense that it is not a smooth curve, but different. Model: model = tf.keras.models.Sequential() model.add(Dense(7,activation='relu')) model.add(Dense(10,activation='relu')) model.add(Dense(10,activation='relu')) model.add(Dense(1,activation='sigmoid')) model.compile(optimizer=tf.keras.optimizers.Adam(0.0001), loss='binary_crossentropy', metrics=['accuracy', 'mse', 'mae']) history = model.fit( X_train.values, y_train, epochs=1000, validation_split = 0.2, verbose=1, ) My two questions are: (1) what is this indicative of, and (2) how do i fix this assuming it is problematic? AI: Considering that your validation accuracy has bigger steps than your training accuracy, this may simply be an issue of data size. Accuracy counts correct/not correct, so if the model switches its opinion on a sample the accuracy increases suddenly. Because your validation data is likely smaller than the training data the steps are bigger there. There is nothing to worry about, this looks normal. On the other hand your validation loss is increasing, so you are overfitting.
H: Why is the cosine distance used to measure the similatiry between word embeddings? While computing the similarity between the words, cosine similarity or distance is computed on word vectors. Why aren't other distance metrics such as Euclidean distance suitable for this task. Let us consider 2 vectors a and b. Where, a = [-1,2,-3] and b = [-3,6,-9], here b = 3*a, i.e, both the vectors have same direction but different magnitude. The cosine similarity between a and b is 1, indicating they are identical. While the euclidean distance between a and b is 7.48. Does this mean the magnitude of the vectors is irrelevant for computing the similarity in the word vectors? AI: You're asking two questions here. Does this mean the magnitude of the vectors is irrelevant? Yes. Cosine similarity is $ S_{cos} = \frac{A \cdot B}{\|A\|\|B\|} $, which just comes from the definition of inner product, $A \cdot B = \|A\|\|B\|\cos\theta$. Cosine distance is simply $1 - S_{cos}$. Why is the cosine distance used? Or, to think of it another way, why is the answer to (1) a desirable property to have in a distance metric? In a word embedding, we choose a dimensionality $d$ for the embedding. This is the number of components in our embedding space. The components (or, linear combinations of the components) are meant to encode some kind of semantic meaning. Classic examples are like that the vector for "queen" plus the vector for "man" should be near the vector for "king". That sort of thing. There's a direction that roughly corresponds to "royalty" and a direction for gender. Look at your example, where $b = 3a$, $a=[-1,2,-3], b=[-3,6,-9]$. This is a perfect illustration of why we use cosine similarity. They have very different magnitudes but point in the same direction. They have cosine distance 0, and we want that because it means that they have the same relative proportion of each component. If we use euclidean distance, $a$ and $b$ are $\sim7.48$ units apart. It would be easy to find another vector $c$ that is around the same distance from $a$ as $b$ is, in a completely different direction. If our space has been learned properly, $c$ should have completely different semantic meaning from $b$, but they're both the same distance from $a$. The euclidean distance doesn't measure the similarity that we want very well.
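A numeric check of the example in the question: a and b point in the same direction, so the cosine distance is 0 even though the euclidean distance is large.

import numpy as np

a = np.array([-1, 2, -3])
b = np.array([-3, 6, -9])
cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_sim)                   # 1.0 -> cosine distance 0
print(np.linalg.norm(a - b))     # ~7.48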
H: Predictive power of a dataset I am reading a book on machine learning for undergraduate. I am actually confused on linear regression flexibility as the say: Occasionally, linear regression will fail to recover a good solution for a data set. While this may be because our data doesn’t actually have predictive power, it might also just indicate that our data is provided in a format unsuitable for linear regression. I read some questions here related to predictive power and I noticed that it's all about the model produced. What do we mean when we talk about the predictive power of a dataset? I think of it as there is no relationship between features (as linear regression is intented to learn the relationship between the inputs $X$ and the output $Y$). But I am not convinced of this answer yet. AI: Usually predictive power refers to the model, rather than the data. I've occasionally seen some people use it in the way that the author of your book uses it (see this for example). In the context of your book, yes, predictive power refers to whether input can be mapped to target output $X\rightarrow Y$. We can infer a dataset's "predictive power" by trying to model it (e.g. linear regression). If the model performs poorly, then there are two possibilities as the book says: either the dataset is not predictive (i.e. it does not offer a clean mapping from input to target output) or the methods we are using are unsuitable to model the mapping. Some examples of both situations: If you generated random data for $X$ and $Y$, the resulting dataset would (probably) have no predictive power as no model could reasonably generalize the mapping $X\rightarrow Y$. If you have a nonlinear mapping, then linear regression would not fit it well. For example, if our dataset was such that $y_1$ is mapped to by $||\vec{x}||<\alpha $ and all other inputs map to $y_2$, then our dataset is extremely predictive, but our linear regression model cannot fit it (since the mapping is nonlinear). In this toy example, it's easy to see the predictive power of the dataset, particularly if the input is in 2D/3D since we could just plot it. However, manually observing such trends in highly dimensional space using actual data can be very difficult, hence we use the tools that you are learning to help interpret the data. Also, when there's nonlinearity, it's difficult to statistically evaluate the dataset itself. Variables with linear relationships are simple to correlate (e.g. Pearson's correlation coefficient) but nonlinearities can make correlation difficult. I assume that this is why your book defers to vague terminology as it's probably for pedagogical, rather than pedantic, purposes. After all, it gets the point across without needing to discuss the ongoing research into quantifying nonlinear correlations.
H: Batch Normalization For Federated Learning As you know, in a federated learning setting, clients train local versions of the joint global model with their Non i.i.d data and each submit an update to the global model which would be aggregated into the next joint global model. The normalization which happens by Batch Normalization layers during the training phase is based on the local batch statistics. My question is, How should one aggregate these local statistics (batch normalization parameters) for the global model so they represent the global statistics of all the data? I am talking about beta, alpha, moving mean and variance for each batch normalization layer. Should we treat them like weight and biases of fully connected (or Conv) layers and simply average them? AI: One approach would be to simply average everything, as proposed in the FedAvg preprint. Some very recent preprints suggest only relaying the learned parameters back to the central server and keeping local batch normalization (BN) statistics separate, as proposed by the SiloBN preprint. The authors of this paper claim that: Keeping BN statistics local permits the federated training of a model robust to the heterogeneity of the different centers, as the local statistics ensure that the intermediate activations are centered to a similar value across centers. To paraphrase them; they distinguish BN statistics as encodings for local domain information whereas the learned parameters are to be domain-invariant. I believe their aggregation method for the relayed information is simply averaging. Here's their first figure for reference:
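A minimal PyTorch sketch of the simple-averaging option (an illustration of the FedAvg idea rather than code from either paper; a proper FedAvg would weight each client by its dataset size, and SiloBN would skip the BN buffers entirely):

import torch

def fed_avg(state_dicts):
    # BN gammas/betas and running statistics are treated like any other tensor here
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg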
H: How to compare Random Forest with other models I am new to Machine Learning and I am trying to undrestand the Out of Bag Error in Random Forests and its use. Let's say that we have a dataset. First we use the whole dataset (without splitting it) to get a Random Forest and its Out of Bag error. Then we split the dataset, train a Neural Network on the training part and test it on the test part of the dataset. Can I choose between the two models by comparing the Out of Bag error of the random forest with the total test error of the Neural Network ? Does it make sense ? AI: We generally rely on sampling to validate the result of our Model. We make train/test to test a model on a separate unseen dataset. If we are doing a Hyperparameter tuning, we keep another set as a validation set to validate the result of new hyperparameters. Random forest builds each new tree on a Bagged sample from the original sample(train data). Bagging means sampling is done with replacement i.e. you pick one data point, put it back and then pick the next. In this process, many duplicated data is sampled and many data points are not sampled. ~63% of the data points are selected Read here The other "not selected" 37% data points is called Out of Bag samples. Hence, the way Bagging, and RF is designed, we got another set of data to do our validation. It an opportunity to do a level of validation with these samples. What it means, You are getting 37% data points to validate your model But OOB is not done with the fully grown Ensemble. It is done using all the trees in the random forest ensemble for which the particular data point was omitted during training Read here It's not equivalent to K-Fold or Train/test on fully built RF but it gives a decent idea about the validation error about to come.
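For reference, a sketch of how the OOB estimate is obtained in scikit-learn (X and y assumed): the estimate comes essentially for free while fitting on the full dataset.

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)   # OOB accuracy, an internal estimate of generalisation error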
H: Is there any possibility to apply deep dreaming in data augmentation? I looked into the deep dreaming concepts and feel like this has the potential for data generation. But i'm not sure how possible this concept is. Any thoughts regarding this? AI: TL;DR Not usually, because Deep Dream ignores the distribution of your input population. You cannot extract additional informative data using generators, only remix and recombine what you already have. There may be a few situations where that is useful because you can use a generator trained on other data to bring in extra relevant information that was not in your main dataset. Deep Dream logically alters an input structure to maximise some measure within a neural network based on neuron activations. This might be the activation of a single neuron, or of a layer of neurons. I have worked with Deep Dream scripts based on Google's Inception v5 network which was trained on ImageNet, just for fun. Here's a relatively extreme example of output, with a few specific neuron activatioons maximised and a long zoom so that the data is essentially only a super-stimulous for those artifical neurons: One thing you can say with reasonable certainty is that this image does not look like anything you might draw at random from ImageNet, from any category. It would be a poor choice as augmentation for ImageNet classification tasks, because it is not from the target population. The same is true in general for all obvious Deep Dream modifications. The modifications do not look natural and produce inputs that would not reasonably be in the input distribution. Once you are clearly outside the input distribution for your use case, then learning how to process those inputs may not give you any benefit related to the goals of your machine learning task. More subtle changes might be useful, if on inspection it is clear that they should not change the categorisation. This might be similar augmentation to adding noise, and if the alterations were chosen carefully, might help defend against adversarial inputs. In fact generating adversarial inputs is an almost identical process to Deep Dream, usually focusing on the output categories and updates that look like high frequency noise so that humans tend not to see the difference. There are better data generating models that attempt to create data that could have come from the input population. Two major ones are variational autoencoders (VAE) and generative adversarial networks (GAN). These might be useful for data generation. However, if you are considering somehow "bootstrapping" a small dataset by training one of these generators on it then using its output to augment your data, it is unlikely that this will work. The quality of the output will usually depend on its training data such that generated data will not truly add information that you did not already have in the original training set. In most cases you will better off looking at simpler logical transformations of data that you are confident will not change the target label (or that change it in predictable ways) - for image-based CNNs that might include changing rotation, crop, brightness/contrast and other basic image manipulations. There might be a few cases in which working with a generator helps. For instance if you can train it on far more data because you have lots of unlabelled data, or dat afrom a different but similar task. 
Then the quality of generated inputs could be high enough that you effectively get to make use of unsupervised learning from the same dataset. You might still find it easier to do transfer learning, or train an autoencoder and re-use its first few layers though.
H: How to account for reduced student capacity when calculating program retention? I work at a nonprofit youth center that has 2 distinctive programs throughout the year. Our afterschool program runs during the school year, and our summer camp program runs during the summer. I am working on calculating our student retention between Afterschool 2019-20 and SummerCamp 2020 to see which students we retained, however, Between AS2019-20 and SC2020, we reduced our maximum student capacity by 10. I figure that this definitely impacts the retention rate calculation, but I have no idea how to account for this reduction in capacity. Does this impact anything, or am I overthinking? I have to assume it would be some sort of calculation like returned students / eligible students = retention and then maybe dividing that percentage by the quotient of SC2020 capacity / AS2019-20 capacity = proportion to account for the proportion. Is this correct? AI: Rationale Some of the terms are a little vague, particularly what you refer to as eligible students and returned students. I'll set some variables for clarity, but tell me if I defined them incorrectly. I assume them to mean: eligible students $ = A $ being the set of all students in the after-school program 2019-2020 returning students $ = A\cap S $ where $S$ is the set of all of summer camp 2020 students Now we can define retention rate to be $\frac{|A\cap S|}{|A|}$. I base this on the US government definition of university student retention rate, which I think is pretty similar, but please correct me if it's not. The reasons as to why you might have $|A\cap S|<|A|$ are irrelevant. Example Let's say you have $|A|=100$ so that, in your situation, $|S|=90$. For simplicity, let's also say that there are no newcomers to the mix. The retention is, quite literally, how many students you retained. If you retained $90$ students, then the retention rate is $90/100$. Even though $|S|=|A|-10$, the number of students you retained $|A\cap S|$ as a percentage of the number of students in the original program $|A|$ is still $90\%$. It wouldn't make sense to normalize this to $100\%$.
H: How to limit a function's cosine value as it approaches 0 How do I get a cosine to peak up to a certain value as it hits 0? With the above code, the y-value peaks to infinity as it approaches 0. import matplotlib.pyplot as plt import numpy as np amp = 1 time = np.linspace(-5, 5, 1000) signal = np.cos(1 * np.pi * time) * abs(1/time) plt.figure(figsize=(10, 10)) plt.ylim(-4, 4) plt.plot(time, signal, color="black") plt.savefig('output.png', bbox_inches='tight', pad_inches=0, transparent=True) Put another way, based on my google-fu, how can I create a symmetrical exponentially decaying sinusoid with a maximum y-value as it approaches 0? AI: $\alpha e^{-\beta x^2}cos(x)$ where $\alpha$ is your peak and $\beta$ is proportional to the strength of the decay This one's slightly longer and the peak is at $\alpha+1$ but the graph is easier to manipulate: $cos(x)e^{-\beta x^2}+\alpha e^{-x^2}$
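Plugging the first form into the original plotting code as a sketch (alpha and beta values are arbitrary; the question's cos(pi*t) frequency is kept):

import matplotlib.pyplot as plt
import numpy as np

alpha, beta = 3.0, 0.5   # alpha sets the peak height, beta the speed of the decay
time = np.linspace(-5, 5, 1000)
signal = alpha * np.exp(-beta * time**2) * np.cos(np.pi * time)

plt.plot(time, signal, color="black")
plt.show()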
H: Estimate correlation in Python I have a dataset with labels and usernames:
Labels Usernames
1 Londonderry
1 Londoncalling
1 Steveonder43
0 Maryclare_re
1 Patent107391
0 Anonymous
1 _24londonqr
...
It seems that usernames containing the word London very frequently have label=1 assigned. Do you have any idea on how I could prove it? AI: You could create a second label for your usernames according to whether they contain london or not (case-insensitive, since the examples include both London and london):
df['London'] = df['Usernames'].str.lower().str.contains('london').astype(int)
Consequently, given you want to go with correlation and that you are comparing binary variables, the metric to go with is the Pearson correlation coefficient:
from scipy.stats import pearsonr
corr, p_value = pearsonr(df['Labels'], df['London'])
H: Scikit learn linear regression - learning rate and epoch adjustment I am trying to learn linear regression using ordinary least squares and gradient descent from scratch. I read the documentation for the Scikit learn function and I do not see a means to adjust the learning rate or the epoch with the sklearn.linear_model.LinearRegression class. Is there a standard learning rate for the linear regression model? Epochs I am assuming are determined by the change in the error function and are thus dependent on the dataset, with some predetermined maximum amount before convergence. Are there any other good packages that have a linear regression model that I can test my own model against with adjustable learning rates and epochs? AI: A linear regression model $y=\beta X+u$ can be solved in one "round" by using $(X'X)^{-1}X'y=\hat{\beta}$. It can also be solved using gradient descent but there is no need to adjust something like a learning rate or the number of epochs since the solver (usually) converges without much trouble. Here is a minimal example in R: x0 <- c(1,1,1,1,1) x1 <- c(1,2,3,4,5) x2 <- c(8,4,3,1,8) x <- as.matrix(cbind(x0,x1,x2)) y <- as.matrix(c(3,7,5,11,14)) x y # (X'X)^-1 X'y beta1 = solve(t(x)%*%x) %*% t(x)%*%y # R's regression command beta2 = summary(lm(y ~ x[, 2:3])) # Gradient decent m <- nrow(y) grad <- function(x, y, theta) { gradient <- (1/m)* (t(x) %*% ((x %*% t(theta)) - y)) return(t(gradient)) } # define gradient descent update algorithm grad.descent <- function(x, maxit){ theta <- matrix(c(0, 0, 0), nrow=1) # Initialize the parameters alpha = .05 # set learning rate for (i in 1:maxit) { theta <- theta - alpha * grad(x, y, theta) } return(theta) } # results without feature scaling print(grad.descent(x,2000)) beta1 beta2
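For completeness, scikit-learn's LinearRegression solves the least-squares problem directly, so it exposes no learning rate or epoch count to tune. If you want those knobs in Python to compare against your own implementation, SGDRegressor is one option. A rough sketch using the same toy data as the R example above (note SGDRegressor applies a small L2 penalty by default, so its coefficients will differ slightly from plain OLS):
import numpy as np
from sklearn.linear_model import SGDRegressor

X = np.array([[1, 8], [2, 4], [3, 3], [4, 1], [5, 8]], dtype=float)
y = np.array([3, 7, 5, 11, 14], dtype=float)

# Closed-form least-squares solution (mathematically what LinearRegression returns)
X1 = np.c_[np.ones(len(X)), X]                      # add intercept column
beta_hat = np.linalg.solve(X1.T @ X1, X1.T @ y)

# Gradient-descent-style estimator with an adjustable learning rate and iteration cap
sgd = SGDRegressor(learning_rate="constant", eta0=0.01, max_iter=2000, tol=None)
sgd.fit(X, y)

print(beta_hat)
print(sgd.intercept_, sgd.coef_)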
H: Calculate all days for each years between two dates in a pandas dataframe I'm new in python and coding. I'm doing a university project exercise. Last question is : " For each year, compute the total amount of loans. Each loan that has planned expiration time and disburse time in different years must have its amount distributed proportionally to the number of days in each year. For example, a loan with: disburse time = 2016/12/01 planned expiration time = 2018/01/30 amount = 5000USD has an amount of : 5000 * 31 / (31+365+30) = 363.85 for 2016, 5000 * 365 / (31+365+30) = 4284.04 for 2017, 5000 * 30 / (31+365+30) = 352.11 for 2018. " I splitted original dataset in two dataset: one where "planned expiration time" and "disburse time" have the same year; one where "planned expiration time" and "disburse time" have different year. I computed the total amount of loans for each years for the first split dataset, but i don't understand how to compute total amount of loans for each years for the second split dataset. I'm using Jupyter notebook. My code: loans_cleaned = loans_cleaned.dropna(subset= ['planned_expiration_time', 'disburse_time']) loans_cleaned loans_cleaned['planned_expiration_time'] = loans_cleaned['planned_expiration_time'].dt.tz_localize(None) loans_cleaned['disburse_time'] = loans_cleaned['disburse_time'].dt.tz_localize(None) loans_cleaned['planned_expiration_time'] = loans_cleaned['planned_expiration_time'].dt.normalize() loans_cleaned['disburse_time'] = loans_cleaned['disburse_time'].dt.normalize() loans_same_year = pd.DataFrame(loans_cleaned[loans_cleaned['planned_expiration_time'].dt.year == loans_cleaned['disburse_time'].dt.year][["loan_id" , "disburse_time", "planned_expiration_time","loan_amount"]]) loans_same_year.reset_index(drop=True, inplace=True) loans_same_year loan_id disburse_time planned_expiration_time loan_amount 0 658010 2014-01-09 2014-02-15 400.0 1 659347 2014-01-17 2014-02-21 625.0 2 659605 2014-01-15 2014-02-20 350.0 3 660240 2014-01-20 2014-02-21 125.0 4 661601 2014-01-10 2014-02-25 1600.0 ... ... ... ... ... loans_same_year['year'] = loans_same_year['disburse_time'].dt.year loans_amount_year = pd.DataFrame(loans_same_year.groupby('year')['loan_amount'].sum().reset_index()) loans_amount_year year loan_amount 0 2012 103911725.0 1 2013 98427750.0 2 2014 120644250.0 3 2015 131208475.0 4 2016 133271575.0 5 2017 144870625.0 6 2018 85300.0 loans_different_year = pd.DataFrame(loans_cleaned[loans_cleaned['planned_expiration_time'].dt.year != loans_cleaned['disburse_time'].dt.year][["loan_id" , "disburse_time", "planned_expiration_time","loan_amount"]]) loans_different_year.reset_index(drop=True, inplace=True) loans_different_year How can I compute the total days for each years of each loans in the loans_different_year and compute the total amount of loans for each years? Thanks for the attention. 
i tried to: def func(disburse_time, planned_time): cost=loans_different_year['loan_amount'] for year in range(disburse_time.year, planned_time.year+1): if year==disburse_time.year: dict_map[year] = (datetime.date(year, 12, 31) - disburse_time).days elif year==planned_time.year: dict_map[year] = (planned_time - datetime.date(year-1, 12, 31)).days else: if year%4==0: dict_map[year]=366 else: dict_map[year]=365 dict_year_share = {year:cost*days/sum(dict_map.values()) for year,days in dict_map.items()} return dict_year_share a = loans_different_year.apply(lambda x: func(x['disburse_time'], x['planned_expiration_time']), axis=1) a TypeError: unsupported type for timedelta days component: Timestamp in the follow line code: dict_map[year] = (datetime.date(year, 12, 31) - disburse_time).days i setted: loans_different_year['planned_expiration_time'] = pd.to_datetime(loans_different_year['planned_expiration_time']) loans_different_year['disburse_time'] = pd.to_datetime(loans_different_year['disburse_time']) ``` AI: Not sure if a 1-2 liner is possible. This can be a working function import pandas as pd, datetime def func(disburse_time, planned_time, loan_amount): total_cost=loan_amount dict_map = {} for year in range(disburse_time.year, planned_time.year+1): if year==disburse_time.year: dict_map[year] = (pd.to_datetime(datetime.date(year, 12, 31)) - disburse_time).days elif year==planned_time.year: dict_map[year] = (planned_time - pd.to_datetime(datetime.date(year-1, 12, 31))).days else: if year%4==0: dict_map[year]=366 else: dict_map[year]=365 dict_year_share = {year:total_cost*days/sum(dict_map.values()) for year,days in dict_map.items()} return dict_year_share Calling on the df rows dataset = pd.read_csv('/content/loan.csv') dataset.disburse_time = pd.to_datetime(dataset.disburse_time,format="%Y-%d-%m") dataset.planned_expiration_time = pd.to_datetime(dataset.planned_expiration_time,format="%Y-%d-%m") result = dataset.apply(lambda x: pd.Series(func(x['disburse_time'], x['planned_expiration_time'],x['loan_amount'])), axis=1) result This is my dataset "disburse_time","planned_expiration_time","loan_amount" "1959-01-01","1960-30-11",35000 "1959-01-02","1962-31-08",32000 "1959-01-03","1965-31-05",30000 "1959-01-11","1959-30-11",31000 "1959-01-10","1961-31-03",44000 Also, need a check for boundary scenarios.
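Assuming the result DataFrame built above (one row per loan, one column per year, NaN where a loan does not span that year), the yearly totals the exercise asks for can then be obtained by summing each column:
# Sum each loan's share per year to get the total amount per year
yearly_totals = result.sum()
print(yearly_totals)
These totals can then be added to the per-year sums you already computed for the same-year loans (loans_amount_year) to get the overall amount per year.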
H: Multiclass Classification and log_loss I hope I can make this clear with few lines of code/explanation. I've a 16K list of texts, labelled over 30 different classes that were ran through different classifiers; my Prediction and the Ground truth match on average at 94%. I am now after measuring something extra (not sure what should I measure on top of an F1_score minimum as I'm still learning, though) and I came across the log_loss from sklearn, which result I understand it range between 0 and 1. When ran against my prediction, however, the results is 1.48xxx, which is in fact higher. In trying to understand what was wrong. I have explored the result of ComplementNB.predict_proba that is required for the log_loss, and the value matches the one of my prediction array. Below some code: from sklearn.metrics import log_loss y = ... # This is my array of value that is my source of truth labels = numpy.unique(y) label_ary = [idx for gt in y for idx, lbl in enumerate(labels) if gt == lbl] print(f'The log loss is {log_loss(label_ary, clf.predict_proba(X.toarray()))}') Whether I use label_ary or y, in both the circumstance I am obtaining the same value, meaning that some conversion inside the log_loss is already happening. I'm not sure whether it me misinterpreting the results, or the specific of the function. What am I doing wrong? Thanks AI: Interpretability of log loss Log loss isn't necessarily between the range [0; 1] - it only expects input to be in this range. Take a look at this example: $$ y_{pred} = 0.1 \\ y_{true} = 1.0 \\ log\_loss = -(log(y_{pred}) * y_{true} + (1 - y_{true}) * log(1 - y_{pred})) = -(log(0.1) * 1.0) = 2.302 $$ In an extreme case log loss can even be equal to infinity. So there is nothing wrong with the code and also there aren't many interesting things you can derive from the fact that log_loss is lower or greater than 1. What you can do with it is the same as with any loss function - compare it to a similar model with different hyperparameters and choose the one with the lowest average loss as your best model (a process called hyperparameter optimization). When to use loss and when f1 score? Let's say you have a dataset and a classification problem which you want to solve. You know that you can create a statistical model which returns probabilities of a given class. You also know that there is (hypothetically) an algorithm which classifies based on some heuristics which requires no training. You would like to know which of these is the best for your problem. What you do, if we simplify a little, is: Split your dataset to train, validation and test sets. Use your train set to train the model While training the model calculate loss for train and validation set in each epoch (if you're not using deep neural networks you can and should use cross validation). Plot loss for train and validation set and see if your model is biased (high train loss and high validation loss) or overfitted (low train loss and high validation loss). The lower loss for validation set the better. Do 3. and 4. multiple times for different hyperparameters and select one with the lowest validation set loss. You now have a trained statistical model. Now use f1 score to compare your model to the algorithm you also know about. The higher score the better. Notice that assuming that the algorithm returns classes and not probabilities if it is incorrect for even one example its log loss function will be equal to infinity. This is why we can't use log loss as a metric to compare these two methods. 
In short, you should use loss as a metric during training/validation process to optimize parameters and hyperparameters and f1 score (and possibly many more metrics for example Area Under Curve) during test process to select the best method to solve your problem. This way it's possible to compare different methods to solve the problem - even ones which don't use machine learning at all.
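The numerical example above can be reproduced directly with scikit-learn, which also shows that the value is not capped at 1:
from sklearn.metrics import log_loss

# One positive example predicted with probability 0.1 -> loss is -log(0.1) ~ 2.303
print(log_loss([1], [0.1], labels=[0, 1]))

# A confident wrong prediction makes the loss much larger still
print(log_loss([1], [1e-15], labels=[0, 1]))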
H: Sentiment analysis of the target in articles The goal is as follows: I have a big article and I want to define the sentiment of a particular word. For example, the article describes pros and cons of bikes and cars and I want to find the sentiment of the word car. In such an example I cannot use document-level SA as the article itself can be positive while the car was mentioned in a negative way. So, I studied papers related to aspect-based sentiment analysis, but my constraint is the absence of data for training NNs. Hence, I concentrated on approaches that basically do not involve a training process. One of my attempts was to build a sentiment analysis tool using word2vec and K-Means so that each cluster corresponds to one of three sentiments (pos, neg and neu). It actually worked great but I found that for some reason one word can be in two clusters at the same time. Plus it generally does not give the sentiment for a specific keyword but for all aspects found in the text. Another problem is that I basically cannot test the correctness of the output other than reading the text myself and checking whether the keyword belonged to the correct cluster or not. So I came to the decision to summarize the article first and then apply sentiment analysis (like sentiwordnet or similar). Question 1 Are there ways to improve the word2vec+KMeans approach? Is it even worth improving? Question 2 Is it a good idea to go through text summarization before sentiment analysis? Question 3 Is there a better way to find the sentiment of a particular word without a training process (due to no training data and a small amount of unlabeled data)? AI: I'm not sure I completely get the idea, but it looks to me like what you're actually interested in is the sentiment of a word in a particular context: a content word like "car" might not carry a stable sentiment by itself, but its usage in a specific context might. So I'd suggest a method like this: for any target word you extract either the sentence or a context window, i.e. N words on the left and N words on the right of the target word. Then you could use predefined sentiment analysis tools to extract a sentiment value for this instance. From there you could:
- measure the mean sentiment for a word by averaging over the instances
- compare the distribution of sentiment or average sentiment for two different words
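A minimal sketch of the context-window idea, assuming NLTK's VADER as the off-the-shelf sentiment scorer (any lexicon-based tool would do, and the window size is an arbitrary choice):
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download
sia = SentimentIntensityAnalyzer()

def target_sentiment(text, target, window=5):
    # Average VADER compound score over windows of +/- `window` tokens around `target`
    tokens = text.lower().split()
    scores = []
    for i, tok in enumerate(tokens):
        if tok == target:
            context = " ".join(tokens[max(0, i - window): i + window + 1])
            scores.append(sia.polarity_scores(context)["compound"])
    return sum(scores) / len(scores) if scores else None

print(target_sentiment("the car broke down again but the bike was great", "car"))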
H: How do I deploy a model when using Stratified K fold? I have used Stratified K fold for learning the model . Below is the python code: >def stratified_cv_v1(X, y, clf, shuffle=True, n=10,): > stratified_k_fold = StratifiedKFold(n_splits=n,shuffle=shuffle) > y_pred_v1 = y.copy() > for ii, jj in stratified_k_fold.split(X,y): > X_train, X_test = X[ii], X[jj] > y_train = y[ii] > clf_v2 = clf() > clf_v2.fit(X_train,y_train) > y_pred[jj] = clf.predict(X_test) > return y_pred_v1 >print(classification_report(y, stratified_cv_v1(X, y, GradientBoostingClassifier))) Now how do I use the model to deploy on a new data set where I need to predict ? AI: k-fold CV is meant to evaluate the model. Once the evaluation is done and one is ready to move to deployment, there's no point using CV anymore: the method has been tested and validated, so one can reasonably assume that from now on applying the same method to the same kind of data will lead to the same level of performance. Thus the usual process is: Train a final model on the full dataset (no CV, no testing) Apply the model to new instances
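In code, using the same classifier as in the question, that amounts to something like the sketch below; new_data is a placeholder for whatever new, unlabelled instances arrive (preprocessed the same way as X):
from sklearn.ensemble import GradientBoostingClassifier
import joblib

# Train the final model on all available labelled data (no CV split)
final_clf = GradientBoostingClassifier()
final_clf.fit(X, y)

# Persist it so the deployed service can load it later
joblib.dump(final_clf, "final_model.joblib")

# At prediction time: load and apply to new instances
model = joblib.load("final_model.joblib")
predictions = model.predict(new_data)  # new_data: same features/preprocessing as X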
H: Python (Pytorch) loss function syntax I have seen many examples of this syntax that is being used for the loss function specifically: loss = nn.BCEWithLogitsLoss()(pred, y) Can anyone explain me what does the (pred, y) do exactly, that it directly computes the loss, instead of calling loss as a function of these 2 arguments again? AI: This is an example of Python's builtin __call__ method, as described here. In short: BCEWithLogitsLoss is a class. The first set of parentheses (empty, in your case) provides any needed arguments to the class initializer. Then the second set of parentheses are passed to the call method. So, this is convenient syntax that allows you to instantiate the class and evaluate one of its methods in one line. Confirming this in the source code is actually a bit difficult. You can see the source for BCEWithLogitsLoss here, which confirms it is a class. But its only method is forward; where is __call__? For that, we notice that the BCEWithLogitsLoss class inherits from the _WeightedLoss class, which inherits from Loss, which inherits from Module. We can then see that this base class implements the mapping between __call__ and forward.
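The same pattern can be reproduced with a toy class, which may make the two sets of parentheses easier to see:
class Scaler:
    def __init__(self, factor):   # first parentheses: arguments to the initializer
        self.factor = factor

    def __call__(self, x):        # second parentheses: arguments to __call__
        return self.factor * x

print(Scaler(10)(3))   # instantiate and call in one line -> 30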
H: How to use a dataset with only one category of data I am performing a classification task, to try to detect an object. A picture of the environment is taken, candidates are generated of this possible object using vision algorithms, and once isolated, these candidates will be passed through a CNN for the final decision on whether the object has been detected or not. I am attempting to use transfer learning on InceptionV3 but am having difficulty training it, as I only have one set/class of images. The dilemma is that I only have one class of data and when I pass it through the network, I get a 100% accuracy (because there is nothing to compare it to). How should I overcome this? Should I find more categories online to add to my dataset? What should these categories be? Just to clarify, as an example, I have class "cat". Not "cat" and "dog". Not "cat" and "no cat". Just "cat". That is what my dataset consists of at the moment. AI: The Model learns to match the weights as per the image and feedback from label data. If you will feed a few Image classes as "Not Cat", it will learn to classify similar features as "Not Cat". But might fail for a new Class. e.g. if it is trained on "Car/Furniture/Dog" as "Not Cat", then chances are high that a Wild Cat will be classified as Cat. Dumping all the Imagenet dataset will definitely provide quite a good variance to the "Not Cat" class and may work most of the time but that is not the appropriate solution for the problem. Such type of problem will fall under One-Class-Classification. Core idea is to use CNN to extract features then use some specialized models e.g. one-class SVM, Gaussian Mixtures, etc. to define a boundary for "Cat" This problem, as defined by the one-class SVM approach, consists of identifying a sphere enclosing all (or the most) of the data. The classical strategy to solve the problem considers a simultaneous estimation of both the center and the radius of the sphere. You may start with these links(In the specified order) - Hackernoon blog Arxiv Researchgate There are other approaches too i.e. based on Auto-encoder. Here, we try to put a Threshold on reconstruction error. References- Quora SO Keras blog Also, may look here to check an idea which generated Random images for "No Cat" Class.Here
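As a rough sketch of the CNN-features-plus-one-class-SVM idea: here features is assumed to be a 2-D array of embeddings you have already extracted (for example from InceptionV3's penultimate layer), and the random arrays are just placeholders for those embeddings.
import numpy as np
from sklearn.svm import OneClassSVM

# features: (n_cat_images, n_features) array from the CNN feature extractor (assumed given)
features = np.random.rand(500, 2048)          # placeholder for real embeddings

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)  # nu ~ expected outlier fraction
ocsvm.fit(features)

# At inference time, embed the candidate the same way and score it
candidate = np.random.rand(1, 2048)           # placeholder
print(ocsvm.predict(candidate))               # +1 -> "cat", -1 -> "not cat"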
H: How to use a multiple linear regression model built from normalized data I built a linear multivariable regression model from normalized data (for the interval [0; 1]). Initially, the data was not normalized; I normalized the data by myself (independent and dependent variables). I want to use this model to make predictions from newly received data (I get the values of the independent variables and I want to predict the value of the dependent variable). The problem is that the data comes in a raw, unnormalized form. How can I normalize newly arriving data if only one "observation" is received? What if I want to get the real values of the dependent variable using my model, and not the normalized ones? AI: So, the question asks: how to normalise incoming (individual) observations, and how to get real-value predictions rather than normalised ones. When we do normalisation using scikit-learn, instead of using the very handy fit_transform() method of the scaler, you can perform a .fit() over your original (training) observations and then apply .transform() to the newly-observed values, even if they arrive one at a time. The fitted scaler stores per-feature statistics, so each feature is scaled according to its own distribution; keep a separate scaler for the dependent variable. Again in scikit-learn, there is an inverse_transform() method which reverts a normalised value back to the original scale, which is how you recover real values of the dependent variable from the model's predictions.
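A minimal scikit-learn sketch of both steps; X_train, y_train and model stand for your existing training arrays and regression estimator, and the new observation values are made up:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x_scaler = MinMaxScaler().fit(X_train)                    # learn min/max per feature on training data
y_scaler = MinMaxScaler().fit(y_train.reshape(-1, 1))     # separate scaler for the target

model.fit(x_scaler.transform(X_train), y_scaler.transform(y_train.reshape(-1, 1)))

# A single new raw observation arrives as one row
new_obs = np.array([[5.2, 310.0, 1.7]])                   # hypothetical raw feature values
y_scaled = model.predict(x_scaler.transform(new_obs))

# Map the prediction back to the original units of the dependent variable
y_real = y_scaler.inverse_transform(y_scaled.reshape(-1, 1))
print(y_real)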
H: How to improve results from a Naive Bayes algorithm? I am having some difficulties in improving results from running a Naive Bayes algorithm. My dataset consists of 39 columns (some categorical, some numerical). However I only considered the main variable, i.e. Text, which contains all the spam and ham messages. Since it is a spam filtering, I think that this field can be good. So I used countvectorizer and fit transform using them after removing stopwords. I am getting a 60% of accuracy which is very very low! What do you think may cause this low result? Is there anything that I can do to improve it? These are the columns out of 39 that I am considering: Index(['Date', 'Username', 'Subject', 'Target', 'Country', 'Website','Text', 'Capital', 'Punctuation'], dtype='object') Date is in date format (e.g. 2018-02-06) Username is a string (e.g. Math) Subject is a string (e.g. I need your help) Target is a binary variable (1 -spam or 0-not spam) Country is a string (e.g. US) Website is a string (e.g. www.viagra.com) Text is the corpus of the email and it is a string (e.g. I need your HELP!!) Capital is a string (e.g. HELP) Punctuation is string (!!) What I have done is the following: removing stopwords in Text: def clean_text(text): lim_pun = [char for char in string.punctuation if char in "&#^_"] nopunc = [char for char in text if char not in lim_pun] nopunc = ''.join(nopunc) other_stop=['•','...in','...the','...you\'ve','–','—','-','⋆','...','C.','c','|','...The','...The','...When','...A','C','+','1','2','3','4','5','6','7','8','9','10', '2016', 'speak','also', 'seen','[5].', 'using', 'get', 'instead', "that's", '......','may', 'e', '...it', 'puts', '...over', '[✯]','happens', "they're",'hwo', '...a', 'called', '50s','c;', '20', 'per', 'however,','it,', 'yet', 'one', 'bs,', 'ms,', 'sr.', '...taking', 'may', '...of', 'course,', 'get', 'likely', 'no,'] ext_stopwords=stopwords.words('english')+other_stop clean_words = [word for word in nopunc.split() if word.lower() not in ext_stopwords] return clean_words Then applying these changes to my dataset: from sklearn.feature_extraction.text import CountVectorizer import string from nltk.corpus import stopwords df=df.dropna(subset=['Subject', 'Text']) df['Corpus']=df['Subject']+df['Text'] mex = CountVectorizer(analyzer=clean_text).fit_transform(df['Corpus'].str.lower()) and split my dataset into train and test: X_train, X_test, y_train, y_test = train_test_split(mex, df['Target'], test_size = 0.80, random_state = 0) df includes 1110 emails with 322 spam emails. 
Then I consider my classifier: # Multinomial Naive Bayes from sklearn.naive_bayes import MultinomialNB classifier = MultinomialNB() classifier.fit(X_train, y_train) print(classifier.predict(X_train)) print(y_train.values) # Train data set from sklearn.metrics import classification_report,confusion_matrix, accuracy_score from sklearn.metrics import accuracy_score pred = classifier.predict(X_train) print(classification_report(y_train ,pred )) print('Confusion Matrix: \n',confusion_matrix(y_train,pred)) print() print("MNB Accuracy Score -> ",accuracy_score(y_train, pred)*100) print('Predicted value: ',classifier.predict(X_test)) print('Actual value: ',y_test.values) and evaluate the model on the test set: from sklearn.metrics import classification_report,confusion_matrix, accuracy_score pred = classifier.predict(X_test) print(classification_report(y_test ,pred )) print('Confusion Matrix: \n', confusion_matrix(y_test,pred)) print() print("MNB Accuracy Score -> ",accuracy_score(y_test, pred)*100) getting approx 60%, which is not good at all. Output: precision recall f1-score support 0.0 0.77 0.34 0.47 192 1.0 0.53 0.88 0.66 164 accuracy 0.59 356 macro avg 0.65 0.61 0.57 356 weighted avg 0.66 0.59 0.56 356 Confusion Matrix: [[ 66 126] [ 20 144]] I do not know if the problem are the stopwords or the fact that I am considering only Text or Corpus as column (it would be also good to consider Capital letters and punctuation as variables in the model). AI: Your model certainly overfits. It's likely that the main issue is the inclusion in the features of words which appear very rarely (especially those which appear only once in the corpus): Words which appear only once don't help classification at all, if only because they can never be matched again. More generally, words which appear rarely are more likely to appear by chance, so using them as features causes overfitting. Naive Bayes is very sensitive to overfitting since it considers all the features independently of each other. It's also quite likely that the final number of features (words) is too high with respect to the number of instances. A low ratio instances/words causes overfitting. The solution is to filter out words which occur less than $N$ times in the data. You should try with several values of $N$, starting with $N=2$. Another issue: in your current process the data is preprocessed before splitting between training and test set, this can cause data leakage. Note that filtering out words of low frequency should be done using the training data only, and then just selecting the same words on the test set (ignoring any other word).
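Concretely, both points can be addressed by fitting the vectorizer on the training texts only and using min_df to drop rare words. A rough sketch, reusing the clean_text analyzer from the question (the value of min_df is just a starting point to tune):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Split the raw text first, so the vocabulary is learned from training data only
X_train_txt, X_test_txt, y_train, y_test = train_test_split(
    df['Corpus'], df['Target'], test_size=0.2, random_state=0)

vec = CountVectorizer(analyzer=clean_text, min_df=2)   # ignore words seen fewer than 2 times
X_train = vec.fit_transform(X_train_txt.str.lower())
X_test = vec.transform(X_test_txt.str.lower())         # reuse the same vocabulary, no refitting

clf = MultinomialNB().fit(X_train, y_train)
print(clf.score(X_test, y_test))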
H: 100% accuracy on both train and test after feature engineering The original dataset is of ~17K compound structures, almost equally divided with labels indicating yes or no. After heavy use of mol2vec and rdkit I have created ~300 datapoints. Using the boosted trees method on the same shuffled train and test dataset gives 98% train accuracy and 89% test accuracy, but a simple neural network gives 100% train and test accuracy. I have checked the code again to ensure target leakage is not occurring, and I have also coded it from scratch twice to ensure I haven't made any mistake, yet I do not believe I should be getting 100% accuracy on both train and test. Does this mean that the model is actually accurate due to so many data points? AI: Yes - getting 100% accuracy is more plausible for a neural network than for tree-based models. Neural networks can learn smooth non-linear relationships through their activation functions, whereas tree-based models are restricted to piecewise-constant predictions built from axis-aligned splits.
H: Construct word2vec (CBOW) training data from beginning of sentence When constructing training data for CBOW, Mikolov et al. suggest using the word from the center of a context window. What is the "best" approach to capturing words at the beginning/end of a sentence (I put best in quotes because I'm sure this depends on the task). Implementations I see online do something like the this: for i in range(2, len(raw_text) - 2): context = [raw_text[i - 2], raw_text[i - 1], raw_text[i + 1], raw_text[i + 2]] I see two issues arising from this approach. Issue 1: The approach gives imbalanced focus to the middle of the sentence. For example, the first word of the sentence can only appear in 1 context window and will never appear as the target word. Compare this to the 4th word in the sentence which will appear in 4 context windows and will also be a target word. This will be an issue as some words appear frequently at the beginning of sentences (i.e. however, thus, etc.). Wouldn't this approach minimize their use? Issue 2: Sentences with 4 or fewer words are completely ignored, and the importance of short sentences is minimized. For example, a sentence with 5 words can only contribute one training sample while a sentence of length 8 will contribute 4 training samples. Can anyone offer insight as to how much these issues affect the results or any alternative approaches for constructing the training data? (I considered letting the first word be the target word and using the next N words as the context, but this creates issues of it's own). Note: I also asked this question on Stack Overflow: https://stackoverflow.com/questions/63747999/construct-word2vec-cbow-training-data-from-beginning-of-sentence AI: Here is a great answer to this question. I'll summarize: The code example was taken from a "buggy" repository on GitHub and is not typical of robust solutions. Robust solutions actually do use the first word as a target word. If the context window is length 10, then the method uses the next 5 words as the context and the first word as the target (it won't actually have a context of size 10 since the first half of the context doesn't exist). Even though the first few words in the sentence are used as target words, they still will not appear in as many contexts. This issue is mitigated because they appear in smaller contexts. Since the contexts they appear in are smaller, they have more impact on the context and, therefore, more significance in back propagation. Many of the more robust implementations will use an entire paragraph or document as opposed to a sentence (some even include punctuation). This make sense because the ending of one sentence may give context for the beginning of another sentence. When this approach is implemented, there are far fewer start/ending words, which reduces the issue. The answer linked above has some other helpful details and is worth reading.
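A small sketch of window construction that keeps the boundary words as targets, with contexts simply truncated at the sentence edges (window size and tokenisation are illustrative):
def cbow_pairs(tokens, window=2):
    # Yield (context, target) pairs; contexts are truncated at sentence boundaries
    pairs = []
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window): i]
        right = tokens[i + 1: i + 1 + window]
        context = left + right
        if context:                      # skip degenerate one-word sentences
            pairs.append((context, target))
    return pairs

print(cbow_pairs("however the cat sat on the mat".split()))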
H: Should normalization be applied? I have more then 100 columns with the values of 1-0. But the two features at the end as seen in the below image, have different values then the rest. Should I rescale the values in the last two columns into a range of [0,1]? Note : I use K-Means algorithm. AI: Let $x \in \mathbb{R}^{d}$ be a feature vector (one row of your table). Your data has the problem that the distribution per dimension is very different, in particular $x_{1} \in \{0,1\}$ and $x_{d} \in \mathbb{R}$. Using K-Means means to consider each data point in the euclidean space where the similarity (or distance) is measured. However, the main problem arises when using the euclidean space in your context! If we measure a distance in the euclidean space for $x,y \in \mathcal{X}$, then $||x-y|| = \sqrt{\sum_{i = 1}^{d}(x_{i}-y_{i})^{2}}$. To simplify the examination, let us now consider the squared euclidean loss $||x-y||^{2} = \sum_{i = 1}^{d}(x_{i}-y_{i})^{2} $ and let $e_{i}:= (x_{i}-y_{i})^2$ be the squared deviation per dimension. Then $||x-y||^{2} =\sum_{i = 1}^{d}e_{i}$. For a column $i$ with binary values we have $e_{i} \in [0,1]$. However, for the last two columns we have $e_{i} \in \mathbb{R}$. Therefore, the last two columns have already a higher weight on the distance calculation $||x-y||^{2} =\sum_{i = 1}^{d}e_{i}$, which might not represent the true importance of these dimensions. Even more, by using the euclidean space you implicitly assume that the importance of the features are comparable to each other, which might not be the case. So what are your options: 1.) Min-Max scaling (so that also the last two dimensions are in $[0,1]$). You can do that, but you should be aware of the impact: You could say from the perspective of the distance measurement, all dimensions have now equal weight, which might be misleading again. 2.) Standardizing your data You can do that. It is often better than min-max scaling as you maintain the distribution per dimension. Still you should be aware of the impact: You could say from the perspective of the distance measurement, all dimensions still have kind of an equal weight, which might be misleading again. 3.) You can perform L2 normalization and use the cosine similarity. It results in the same issues.. Therefore, if the importance of some features is very different, or if the importance of different features cannot be determined, e.g. they are not comparable, using K-Means (or measurements in the euclidean space) is basically a bad idea. 4.) Recall that Random forest is invariant to these issues: If $\mathcal{X}$ is a set of data points that is classified by an inner node of a decision tree that has been trained by the random forest algorithm, then the algorithm provides a threshold $\theta \in \mathbb{R}$ and a dimension $d' \in \{1,\ldots,d\}$. The inner node splits the data into $\{x \in \mathcal{X} \mid x_{d'} \leq \theta \}$ and $\{x \in \mathcal{X} \mid x_{d'} > \theta \}$. In other words, it learns at each inner node of a decision tree a weak classifier that is applied to a single dimension, thereby avoiding to define a comparable importance weight for each dimension. You can use a random forest to build a distance measurement (see Wikipedia), and apply K-Means afterwards. 5.) Use K-Means variants. 6.) Use PCA to project your data to a space $\mathbb{R}^{d'}$ with $d' < d$, then apply K-means. 7.) Use an autoencoder to learn a better embedding. Then apply K-means.
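For reference, options 2 and 6 chained together are a short pipeline in scikit-learn; the number of components and clusters below are placeholders, and X stands for your mixed binary/continuous feature matrix:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

pipeline = make_pipeline(
    StandardScaler(),        # option 2: put every column on a comparable scale
    PCA(n_components=10),    # option 6: project to a lower-dimensional space
    KMeans(n_clusters=5, random_state=0),
)
labels = pipeline.fit_predict(X)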
H: Adaboost with other classifier fitting There is the opportunity to fit decision trees with other decision trees. For example: adaclassification= AdaBoostClassifier(RandomForestClassifier(n_jobs=-1)) adaclassification.fit(X_train,y_train) I got better results with random forest, so improved the result from adaboost with the random forest classifier. However I don't understand what's happening here? It sounds easy: adaboost uses a random forest to fit its classification. But what's mathematically going on here? Adaboost is made of the residuals as a sequence (boosting). Random forest (bagging) builds a forest out of trees. AI: Your description is apt. There isn't anything especially "mathematical" happening here, aside from the AdaBoost algorithm itself. In pseudocode, something like this is happening:
For n in 1 .. N_Estimators do
    Train classifier Tn on data X with weights W
    Compute weighted residuals E from Tn
    Update W based on E
    Renormalize W
end
In your case, Tn would be a Random Forest model, which is itself an ensemble based on bagging. So at each iteration of the "outer" AdaBoost model, an entire Random Forest model is being trained, i.e. several decision trees are fitted on random sub-samples of data points and features. Of course, this is an unusual setup for a boosting model. But there's no conceptual or computational reason why you couldn't run the algorithm this way. If you are curious about how exactly the weights are computed and updated, Scikit-learn uses the SAMME algorithm, which is based on but not exactly identical to the original AdaBoost. SAMME is described in "Multi-Class AdaBoost" by Zhu, Zou, Rosset, & Hastie (2006).
H: Area under the ROC curve approximation I wanted to compute the Area under the ROC curve for a logistic regression model in the context of binary classification. For that I computed, for a list of thresholds, say 0.1 0.2 ... 0.9, the TPR and FPR. I have thus (TPR_threshold1, ... TPR_thresholdN) (FPR_threshold1, ... FPR_thresholdN) Can I just do TPR_threshold1*FPR_threshold1 + ... + TPR_thresholdN*FPR_thresholdN in order to compute the area under the roc curve or do I need some more elaborate mathematical modeling? Thanks a lot! AI: The thresholds don't matter; what matters are the (FPR, TRP) values at those thresholds, as they are points on the curve. Sort them by FPR ascending. For this to work out, you'll want to include the points (0,0) and (1,1) in your list, corresponding to thresholds 1 and 0. You can use a trapezoidal approximation, as each successive pair of points defines a trapezoid of area under the curve. You'll just add up the areas. Let's take two successive points (FPR1, TPR1) and (FPR2, TPR2). The area is (FPR2 - FPR1) * (TPR1 + TPR2) / 2. Just sum that over all successive pairs of points. Of course, most libraries can compute this for you from this input, like scikit-learn.
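A small numeric sketch of that summation (the points below are made up for illustration), cross-checked against scikit-learn's auc helper:
import numpy as np
from sklearn.metrics import auc

# (FPR, TPR) points sorted by FPR ascending, including the (0,0) and (1,1) endpoints
fpr = np.array([0.0, 0.1, 0.3, 0.6, 1.0])
tpr = np.array([0.0, 0.5, 0.7, 0.9, 1.0])

# Trapezoid rule: sum of (FPR2 - FPR1) * (TPR1 + TPR2) / 2 over successive pairs
manual = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
print(manual, auc(fpr, tpr))   # the two values should match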
H: Plotting multiple precision-recall curves in one plot I have an imbalanced dataset and I was reading this article which looks into SMOTE and RUS to address the imbalance. So I have defined the following 3 models: # AdaBoost ada = AdaBoostClassifier(n_estimators=100, random_state=42) ada.fit(X_train,y_train) y_pred_baseline = ada.predict(X_test) # SMOTE sm = SMOTE(random_state=42) X_train_sm, y_train_sm = sm.fit_sample(X_train, y_train) ada_sm = AdaBoostClassifier(n_estimators=100, random_state=42) ada_sm.fit(X_train_sm,y_train_sm) y_pred_sm = ada_sm.predict(X_test) #RUS rus = RandomUnderSampler(random_state=42) X_train_rus, y_train_rus = rus.fit_resample(X, y) ada_rus = AdaBoostClassifier(n_estimators=100, random_state=42) ada_rus.fit(X_train_rus,y_train_rus) y_pred_rus = ada_rus.predict(X_test) I then plotted the precision-recall curve for these 3 models. I chose this curve as I want to visualise how the models are performing, and I am not very interested in true negatives (the negative class is the majority class). To plot the curve, I used ScikitLearn's plot_precision_recall_curve method, like so: from sklearn.metrics import precision_recall_curve from sklearn.metrics import plot_precision_recall_curve import matplotlib.pyplot as plt disp = plot_precision_recall_curve(ada, X_test, y_test) disp.ax_.set_title('Precision-Recall curve') disp = plot_precision_recall_curve(ada_sm, X_test, y_test) disp.ax_.set_title('Precision-Recall curve') disp = plot_precision_recall_curve(ada_rus, X_test, y_test) disp.ax_.set_title('Precision-Recall curve') This resulted in 3 separate plots. However, I want to have these 3 curves in one plot such that they can be easily compared. So I want a plot like the one in the article: But I am not sure how to do this as the plot_precision_recall_curve method only takes one classifier as input. Some help would be appreciated. AI: Try using Matplotlib gca() method in this way you can indicate what axis you want to plot in from sklearn.metrics import precision_recall_curve from sklearn.metrics import plot_precision_recall_curve import matplotlib.pyplot as plt plot_precision_recall_curve(ada, X_test, y_test, ax = plt.gca(),name = "AdaBoost") plot_precision_recall_curve(ada_sm, X_test, y_test, ax = plt.gca(),name = "SMOTE") plot_precision_recall_curve(ada_rus, X_test, y_test, ax = plt.gca(),name = "RUS") plt.title('Precision-Recall curve')
H: Binary classification of 2d vectors with neural networks I'm trying to make a neural network learn a 2D shape. I have an $n\times m$ grid of points with a binary function defined on it, equal to $1$ when the point is in the shape and $0$ when it isn't. I want my network to accept as input two numbers, $x$ and $y$, and tell me with a single binary output if the point $(x, y)$ is in the shape or not. Ultimately I'd like to have a largeish image and a somewhat complicated shape (say an disc with a hole in it) and train the network on a random collection of points, but for now I'm working with a small image, an axis-aligned rectangle, and I'm training the network on every pixel in the image. My problem is that my network doesn't work at all. I got the Keras tutorial working and it reaches accuracy of over 90% within ten epochs, however with my code the training looks like this: Epoch 1/10 17/17 [==============================] - 0s 932us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 2/10 17/17 [==============================] - 0s 757us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 3/10 17/17 [==============================] - 0s 722us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 4/10 17/17 [==============================] - 0s 696us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 5/10 17/17 [==============================] - 0s 653us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 6/10 17/17 [==============================] - 0s 655us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 7/10 17/17 [==============================] - 0s 656us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 8/10 17/17 [==============================] - 0s 677us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 9/10 17/17 [==============================] - 0s 751us/step - loss: 14.4782 - accuracy: 0.0506 Epoch 10/10 17/17 [==============================] - 0s 777us/step - loss: 14.4782 - accuracy: 0.0506 50/50 - 0s - loss: 14.4773 - accuracy: 0.0506 Not only does it not converge, the accuracy doesn't move at all. I tried 100 epochs and had the same results. 
My code: import tensorflow as tf from tensorflow import keras from tensorflow.keras.layers import Flatten, Dense import numpy as np import matplotlib.pyplot as pyplot def get_coordinate_map(width, height): ones = np.ones((height, width)) offset = np.array([(width / 2) * ones, (height / 2) * ones]) coordinates = np.indices((height, width)) - offset return coordinates.reshape(2, width * height).T def get_rectangle(width, height, rectangle_width, rectangle_height): ones = np.ones((height, width)) offset = np.array([(width / 2) * ones, (height / 2) * ones]) coordinates = np.indices((height, width)) - offset flat_coordinates = coordinates.reshape(2, width * height) rectangle = [] x1, x2 = -rectangle_width / 2, rectangle_width / 2 y1, y2 = -rectangle_height / 2, rectangle_height / 2 for x, y in flat_coordinates.T: if x1 < x < x2 and y1 < y < y2: rectangle.append(1) else: rectangle.append(0) return np.array(rectangle).T if __name__ == "__main__": SCALE = 40 training_points = get_coordinate_map(SCALE, SCALE) training_labels = get_rectangle(SCALE, SCALE, 10, 10) model = keras.Sequential( [ Dense(20, activation="relu"), Dense(1, activation="softmax") ] ) model.compile( optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy(), metrics=["accuracy"] ) model.build((1, 2)) model.fit(training_points[::3], training_labels[::3], epochs=10) loss, accuracy = model.evaluate(training_points, training_labels, verbose=2) And since my get_***() functions are probably difficult to read, here's picture of the image generated by get_rectangle(): I'm doing this for the purposes of learning about neural networks, so I'm not interested in solutions involving using a different kind of ML technique altogether. My question is why I'm getting such strikingly unhelpful behavior and if this kind of binary classification of low dimensional data is even possible with a neural net. AI: if you use "softmax" you would actually need two neurons as the output layer, one for each label 0 or 1, complementing each other. It basically produces a probability distribution for the lables (in this case 2 labels 0,1). This link might be helpful regarding softmax. 
Then the target you provide should be also two columns: if __name__ == "__main__": SCALE = 40 training_points = get_coordinate_map(SCALE, SCALE) training_labels = get_rectangle(SCALE, SCALE, 10, 10) model = keras.Sequential( [ Dense(20, activation="relu"), Dense(2, activation="softmax") ] ) model.compile( optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy(), metrics=["accuracy"] ) model.build((None, 2)) model.fit(training_points[::3], np.array([training_labels[::3], 1-training_labels[::3]]).T, epochs=10, batch_size=20) loss, accuracy = model.evaluate(training_points, np.array([training_labels, 1-training_labels]).T, verbose=2) print(loss, accuracy) result will be: Epoch 1/10 534/534 [==============================] - 1s 2ms/step - loss: 0.3898 - acc: 0.8109 Epoch 2/10 534/534 [==============================] - 0s 88us/step - loss: 0.1154 - acc: 0.9494 Epoch 3/10 534/534 [==============================] - 0s 94us/step - loss: 0.1001 - acc: 0.9494 Epoch 4/10 534/534 [==============================] - 0s 90us/step - loss: 0.0967 - acc: 0.9494 Epoch 5/10 534/534 [==============================] - 0s 91us/step - loss: 0.0944 - acc: 0.9494 Epoch 6/10 534/534 [==============================] - 0s 82us/step - loss: 0.0922 - acc: 0.9494 Epoch 7/10 534/534 [==============================] - 0s 103us/step - loss: 0.0905 - acc: 0.9494 Epoch 8/10 534/534 [==============================] - 0s 87us/step - loss: 0.0889 - acc: 0.9494 Epoch 9/10 534/534 [==============================] - 0s 86us/step - loss: 0.0876 - acc: 0.9494 Epoch 10/10 534/534 [==============================] - 0s 90us/step - loss: 0.0861 - acc: 0.9494 0.0849673491239082 0.95 instead you can use "sigmoid" as the activation function of the output layer: if __name__ == "__main__": SCALE = 40 training_points = get_coordinate_map(SCALE, SCALE) training_labels = get_rectangle(SCALE, SCALE, 10, 10) model = keras.Sequential( [ Dense(20, activation="relu"), Dense(1, activation="sigmoid") ] ) model.compile( optimizer="adam", loss=tf.keras.losses.BinaryCrossentropy(), metrics=["accuracy"] ) model.build((None, 2)) model.fit(training_points[::3], training_labels[::3], epochs=10, batch_size=20) loss, accuracy = model.evaluate(training_points, training_labels, verbose=2) print(loss, accuracy) results: Epoch 1/10 534/534 [==============================] - 1s 2ms/step - loss: 1.6602 - acc: 0.5187 Epoch 2/10 534/534 [==============================] - 0s 85us/step - loss: 0.6545 - acc: 0.7060 Epoch 3/10 534/534 [==============================] - 0s 87us/step - loss: 0.2457 - acc: 0.9251 Epoch 4/10 534/534 [==============================] - 0s 83us/step - loss: 0.1578 - acc: 0.9494 Epoch 5/10 534/534 [==============================] - 0s 88us/step - loss: 0.1345 - acc: 0.9494 Epoch 6/10 534/534 [==============================] - 0s 87us/step - loss: 0.1224 - acc: 0.9494 Epoch 7/10 534/534 [==============================] - 0s 84us/step - loss: 0.1149 - acc: 0.9494 Epoch 8/10 534/534 [==============================] - 0s 82us/step - loss: 0.1100 - acc: 0.9494 Epoch 9/10 534/534 [==============================] - 0s 91us/step - loss: 0.1064 - acc: 0.9494 Epoch 10/10 534/534 [==============================] - 0s 108us/step - loss: 0.1040 - acc: 0.9494 0.1027598740439862 0.949375
H: How to determine semantic differences in NLP I would need to determine the difference in meaning between the following two sentences: I am at home I am not at home I am at the office the first two sentences differs in verb, which changes the meaning of the sentences (to negative); the second one, with the first one, differs because of the place. I have thought of word2vec, but I am not completely sure if this is the best tool to analyse sentences like the above ones. Also cosine_similarity could be a solution, but I would have not information about the meaning. I think it is more about semantic meaning... AI: Out of the box, something like Google's Universal Sentence Encoder (USE) may work for your use-case. Many of the common NLP embedding techniques nowadays work on individual words and so creating sentence-level embeddings means averaging multiple word-level vectors together. USE was built to operate at the sentence level, so you may find it better. The original paper can be found here: https://arxiv.org/abs/1803.11175 An example blog post leveraging USE: https://medium.com/@gaurav5430/universal-sentence-encoding-7d440fd3c7c7
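A short sketch using the publicly hosted USE module on TensorFlow Hub (the module URL below is the published one at the time of writing) and cosine similarity over the sentence vectors:
import tensorflow_hub as hub
from sklearn.metrics.pairwise import cosine_similarity

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = ["I am at home", "I am not at home", "I am at the office"]
vectors = embed(sentences).numpy()

print(cosine_similarity(vectors))   # pairwise similarities between the three sentences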
H: Semantic similarity between two or more sentences I need to determine how similar sentences (in meaning) are to one another. In order to do it, I have been considering an algorithm (cosine similarity) to determine the similarity between sentences. I have thought as appropriate Word2vec or wordnet to build features for similarity. If you have used this (or similar) approach, could you please provide me an example of use of word2vec/wordnet for similarity analysis? AI: Word2vec as the name suggests will create an embedding for each word in your sentence. In order to get a sentence level embedding you would need to average (or combine in some other way) the individual embeddings together. An example of a model to generate sentence level embedding would be the Universal Sentence Encoder (USE). You may want to try it out and see if it can outperform a word-level model in your use-case. The original paper can be found here: https://arxiv.org/abs/1803.11175 An example blog post leveraging USE: https://medium.com/@gaurav5430/universal-sentence-encoding-7d440fd3c7c7
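If you prefer to stay with word vectors, a common baseline is to average pretrained embeddings per sentence and compare the averages with cosine similarity. A rough gensim-based sketch (the model name comes from the gensim-data catalogue; any pretrained KeyedVectors would do):
import numpy as np
import gensim.downloader as api
from sklearn.metrics.pairwise import cosine_similarity

wv = api.load("glove-wiki-gigaword-100")   # downloads pretrained word vectors

def sentence_vector(sentence):
    # Average the vectors of the in-vocabulary words
    words = [w for w in sentence.lower().split() if w in wv]
    return np.mean([wv[w] for w in words], axis=0)

v1 = sentence_vector("I am at home")
v2 = sentence_vector("I am at the office")
print(cosine_similarity([v1], [v2])[0, 0])
Note that simple averaging tends to wash out things like negation, which is where sentence-level encoders such as USE usually do better.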
H: DQN with decaying epsilon I'm new to reinforcement learning. I'm studying DQN with decaying epsilon. I came across such example: EPISODES = 91 GAMMA = 0.2 EPSILON_DECAY = 0.999 MIN_EPSILON = 0.01 MAX_EPSILON = 1 My questions are: Is it correct if epsilon doesn't reach MIN_EPSILON? Is there something wrong with the reward - the reward is not higher and higher but it is behaving otherwise - it decreases in time? AI: If you set epsilon decay to 0.999 you will need $$ \epsilon_{max} \cdot \epsilon_{decay}^x = \epsilon_{min} \\ 1 \cdot 0.999^x = 0.01 \\ x \approx 4603 $$ 4603 episodes to reach minimum epsilon. After 91 episodes you will reach $$ \epsilon_{current} = \epsilon_{max} \cdot \epsilon_{decay}^{episodes} = 1 \cdot 0.999^{91} \approx 0.913 $$ which is exactly what you can see in your plot. It's not a problem but remember that this model still makes over 91% moves randomly. Average reward should not decrease over time. It can mean a few things for example error in dqn algorithm or too high learning rate in your model. The best way to debug is to start with as simple environment as possible and let your model learn to play it and only then increase the difficulty.
H: How to normalize noisy data Suppose I have 1-D data which has some outliers, I want to normalize the data to be in the range [0,1]. I tried calculating the maximum value and the minimum value as follows: q1,q2,q3 = quartiles of the data max = q3 + (q3-q1)*1.5 min = q1 - (q3-q1)*1.5 I used the above approach because I have read that data above maximum or data less than the minimum (as calculated above is noise). My question is: whatever I am doing, is it correct or is there any other way to achieve good results? Thank you for helping. AI: The question, whatever I'm doing is it correct or ... will occur over and over again, until you have a firm understanding of statistics. See if you are looking for short-term learning, then you might pick up some fragments of information here and there, but, for long term learning, I'll suggest that you read a good text on introductory statistics. Try, this book, or this one on Applied Predictive Modeling Now to answer the question, Yes what you are doing is correct. I've explained it below; Outliers are observations that fall below the lower limit or above the upper limit. The five number summary of a continuous variable (or numerical variable) is Min, Q1, Q2, Q3, Max. Where the lower and upper limits are defined as, Lower Limit = Q1 - 1.5 x IQR, Upper Limit = Q3 + 1.5 x IQR and IQR is the Inter Quartile Range, which is the difference between the first and third quartile. Q1 is the first quartile, Q2 is the second quartile or the median and Q3 is the third quartile. Finally, please always plot your data.
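A small numpy sketch of exactly that recipe, clipping to the IQR-based limits and then min-max scaling to [0, 1] (the data array is made up for illustration):
import numpy as np

data = np.array([2.0, 3.1, 2.5, 2.8, 3.0, 2.7, 15.0, -9.0])  # 1-D data with outliers

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

clipped = np.clip(data, lower, upper)                 # outliers collapse onto the limits
normalized = (clipped - lower) / (upper - lower)      # now guaranteed to lie in [0, 1]
print(normalized)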
H: Group K-fold with target stratification I have a pd.DataFrame pd.DataFrame({ 'person': ['a', 'b', 'c', 'aa', 'bb', 'cc', 'aaa', 'bbb', 'ccc'], 'group': [10, 10, 10, 20, 20, 20, 30, 30, 30], 'target': [1, 2, 2, 3, 2, 3, 1, 2, 3] }) I want to split df into train and test by group several times (K-Fold), so train and test contains examples from mutually exclusive group subsets. For example, if test contains examples from groups [10, 20] test should contain examples only from group [30]. I also want to keep stratification by target. So target distribution is also considered for groups subsets creation. How is it possible? Didn't find anything in scikit-learn that may help. In practice I have a df with about 500k rows, 2.5k different persons, 30 groups and following target distribution AI: To use both groups and stratification, it seems that you should write your own code. Note that you will inevitably loose samples in training and/or in testing set (unless you are lucky). One way of achieving it is to: do your split by groups (you could use the GroupKFold method from sklearn) check the distribution of the targets in training/testing sets. randomly remove targets in training or testing set to balance the distributions. Note: It is possible that a group disappear using such algorithm. You might prefer to not randomly remove the targets when balancing the training/testing sets. Here is a sample code import pandas as pd import numpy as np from sklearn.model_selection import GroupKFold df = pd.DataFrame({ 'person': ['a', 'b', 'c', 'aa', 'bb', 'cc', 'aaa', 'bbb', 'ccc'], 'group': [10, 10, 20, 20, 20, 20, 20, 30, 30], 'target': [1, 2, 2, 3, 2, 3, 1, 2, 3] }) X = df['person'] y = df['target'] groups = df['group'].values group_kfold = GroupKFold(n_splits=3) group_kfold.get_n_splits(X, y, groups) # First split by groups for train_index, test_index in group_kfold.split(X, y, groups): print("Groups split: TRAIN:", train_index, "TEST:", test_index) y_train_grouped, y_test_grouped = y[train_index], y[test_index] final_train_index = [] final_test_index = [] # Then balance the distributions for each target for target in df['target'].unique(): target_train_index = y_train_grouped[y_train_grouped == target].index.tolist() target_test_index = y_test_grouped[y_test_grouped == target].index.tolist() n_training = len(target_train_index) n_testing = len(target_test_index) print("Target:" + str(target) + " - n_training:" + str(n_training) + " - n_testing:" + str(n_testing) + " | target_train_index:" + str(target_train_index) + " - target_test_index:" + str(target_test_index)) # Shuffle to remove randomly np.random.shuffle(target_train_index) np.random.shuffle(target_test_index) # Check if we need to remove samples from training or testing set if n_training > n_testing: while n_training > n_testing: target_train_index.pop(0) n_training = len(target_train_index) if n_training < n_testing: while n_training < n_testing: target_test_index.pop(0) n_testing = len(target_test_index) # Append new indexes to global train/test indexes final_train_index.append(target_train_index) final_test_index.append(target_test_index) # Flatten for readability final_train_index = [item for sublist in final_train_index for item in sublist] final_test_index = [item for sublist in final_test_index for item in sublist] print("FINAL split: TRAIN:", final_train_index," TEST:", final_test_index, "\n") EDIT Using stratified cross-validation seems not to be mandatory (see links below), so you might reconsider using it. 
You might find these links useful:
- Why use stratified cross validation? Why does this not damage variance related benefit?
- Understanding stratified cross-validation
- A study of cross-validation and bootstrap for accuracy estimation and model selection
Regarding over/undersampling, I think it is hard to answer without more details of the data distribution and how imbalanced your classes are.
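As a side note, more recent scikit-learn releases (1.0 and later, if I recall correctly) ship a StratifiedGroupKFold splitter that tries to combine both constraints, which may be worth checking before rolling your own:
from sklearn.model_selection import StratifiedGroupKFold

sgkf = StratifiedGroupKFold(n_splits=3)
for train_index, test_index in sgkf.split(X, y, groups):
    print("TRAIN:", train_index, "TEST:", test_index)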
H: How to stop a neural network from regressing to the mean I'm very familiar with neural networks for classification, but I'm trying a regression task for the first time. I'm finding that the network tends to go towards guessing a mean for the whole dataset rather than making case specific predictions. What are the potential reasons for this behaviour and how can I stop it? Hint In my specific case this behaviour is present in both the training and validation set. The training set is very small, and it's questionable as to whether the inputs have a strong correlation with the target. Also might be worth noting that I'm training on images. So I'm using a CNN. Update I did some inspection of my model and found that it drives the output of my CNN backbone to 0 and simply uses the single bias on my fully connected layer to produce the output. I also found that if I freeze the bias during training, I can increase the learning rate indefinitely without the loss blowing up... AI: Regression tasks are not very different from classification, and the behavior you are facing is probably due to a bug in the code. If your training set is small and the network is comparably large, it should overfit on it (correlation with the inputs doesn't matter; there was a paper https://arxiv.org/pdf/1611.03530.pdf that shows you can randomly shuffle ImageNet labels, train one of the versions of ResNet on them and get perfect accuracy on that training set). Sample-agnostic predictions mean underfitting. You can also try to overfit a small batch and get a zero loss. If you are unable to do this, there is definitely a bug. Possibly your gradients affect the latter layers much more than the first layers, which means that after some epochs of training you still have random outputs from the first layer, and it's no wonder the net is trying to predict the mean of the dataset. This can happen if your net is very deep and you're not using residual connections, but it's not the case for a net of 10 (for instance) or fewer layers. Maybe your learning rate is too high and your first layers can't be tuned correctly. To check whether your first layers extract some meaningful information, try to look at the outputs of some of your first layers. If those outputs look like complete noise, double check the code for optimization. You can find more interesting cases of bugs and the behavior of buggy code in Andrej Karpathy's blog http://karpathy.github.io/2019/04/25/recipe/
H: What does it mean when the model Learning Curve displays this behavior? I made a classification LSTM model and it seemed to be working as intended until I increased the number of epochs. That's when I noticed that the Validation and Training curves cross each other at a certain point: I have never seen this kind of curve... also couldn't find anything like it on my searches. Does it mean my model is over-fitting? Should I worry with this behavior? What can I do to avoid this? Thanks. AI: Yes, this is overfitting. As you can see in your loss curve the training loss is steadily decreasing like it should, but at the same time your validiation loss is increasing. On data the model hasn't seen before it's actually getting worse! This means your model is starting to learn the training data, and does not instead keep the learning generalized. As soon as validation loss is increasing you should stop training. I'm answering this question although overfitting is a general concept and has been answered before because this question might want to have an answer that is specific to LSTMs.
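In Keras, that stopping rule can be automated with the EarlyStopping callback; the sketch below assumes your existing model and train/validation arrays, and the patience value is just a reasonable default to tune:
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",          # watch the validation loss
    patience=5,                  # tolerate a few noisy epochs before stopping
    restore_best_weights=True,   # roll back to the weights of the best validation epoch
)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=200,
          callbacks=[early_stop])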
H: Encoding categorical data with a pre-determined dictionary In the case of feature encoding, if I'd like to encode my values based on my pre-determined dictionary, how do I do that? For instance, say I have the values [Red, Green, Blue] and I want to encode them as [-1, 0, 1]: -1 for Red, 0 for Green, 1 for Blue... I'll apply it to my feature. I believe I can do it with mapping or the apply method, but I'm not sure. Is there any better way to do that?
Column expectedEncoding
Red -1
Red -1
Blue 1
Green 0
Red -1
Blue 1
AI: Assuming you have a pandas DataFrame and one mapping per column, with all mappings stored in a 2-level dict where the keys of the first level correspond to the columns in the dataframe and the keys of the second level correspond to the categories:
{'fruit': {'banana': -1, 'apple': 1}, 'color': {'yellow': -1, 'red': 1}}
Then, you can do the following:
encoded_data = data.apply(lambda col: col.map(mappings[col.name]))
[EDIT] If you have columns for which you don't have a mapping, you can do one of the following:
data.update(data[list(mappings)].apply(lambda col: col.map(mappings[col.name])))
or if you want it in a new dataframe (e.g. to keep the dataframe with the original values):
encoded_data = data.copy()
encoded_data.update(data[list(mappings)].apply(lambda col: col.map(mappings[col.name])))
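A self-contained sketch of the single-column case from the question (the column name and mapping are just the example values above):
import pandas as pd

df = pd.DataFrame({"Column": ["Red", "Red", "Blue", "Green", "Red", "Blue"]})
mapping = {"Red": -1, "Green": 0, "Blue": 1}

# map() looks each value up in the dictionary; values missing from the dict become NaN
df["expectedEncoding"] = df["Column"].map(mapping)
print(df)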
H: Book suggestions Considering I already have some knowledge of machine learning, Python and data science through courses I have done on sites like Udemy, Udacity and edX, I would like to read some books related to the subject matter. Could you please refer me to some helpful books related to machine learning and Python? AI: For Python: Python: The Complete Reference and Head First Python. These two should be good enough. For machine learning, it really depends on whether you want to get into all the math or not. If not, I think there are enough resources online. But if you do, I would recommend Pattern Recognition and Machine Learning by Christopher Bishop. It really dives deep into the math for ML.
H: PyTorch's ConvTranspose2d padding parameter I'm confused about what PyTorch's padding parameter does when using torch.nn.ConvTranspose2d. The docs say that: "The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input". So my guess was that the dimensions of the feature maps increase when applying padding. However, running tests, they decrease:
inp = torch.ones((1, 1, 2, 2))
conv_no_pad = nn.ConvTranspose2d(1, 1, kernel_size=(3, 3), stride=2, padding=0)
conv_pad = nn.ConvTranspose2d(1, 1, kernel_size=(3, 3), stride=2, padding=1)
print(conv_no_pad(inp).shape) # => (1, 1, 5, 5)
print(conv_pad(inp).shape) # => (1, 1, 3, 3)
Can somebody explain how the padding works? AI: As you quoted, the padding argument effectively adds dilation * (kernel_size - 1) - padding, so the padding value is subtracted and the resulting shape becomes smaller. It's a reverse (in some sense) operation to Conv2d, which means the arguments work the opposite way here. And I think this behavior was introduced to make it easier to design neural nets with symmetric architectures (like autoencoders) -- you just copy the kernel size, stride and padding from the corresponding Conv2d layer and get an operation which recovers that layer's input shape.
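For reference, the output height/width of ConvTranspose2d follows H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1 (from the PyTorch docs), which you can check against the shapes in the question:
def conv_transpose2d_out(h_in, kernel_size, stride=1, padding=0, dilation=1, output_padding=0):
    # Output-size formula from the ConvTranspose2d documentation
    return (h_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1

print(conv_transpose2d_out(2, kernel_size=3, stride=2, padding=0))  # 5 -> matches (1, 1, 5, 5)
print(conv_transpose2d_out(2, kernel_size=3, stride=2, padding=1))  # 3 -> matches (1, 1, 3, 3)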
H: Interpretation of the data through scatter plot I was exploring the data and had observed the data points are forming a triangle on the lower side. x-axis: Total items y-axis: Cancelled items Can someone help me in interpreting this data? And help me how to proceed further in analyzing and building a model? AI: It is forming a triangle because you always have Cancelled items < Total items which is expected. Using this representation of the data is not so informative as many points are clustered and we can't assess the distribution. You might want to consider a plot such as a 2D histogram of (x=Total items, y=Proportion of Cancelled Items) in order to assess if some relation exists. And if you do so you might want to normalize each X slice so as to visualize the distribution for each range [Xmin, Xmax] If you are looking for a linear regression, you might want to use scikit-learn to fit a linearRegression model with X = total Items and Y=Cancelled Items and check the correlation coefficient.
H: How to predict multiple images from folder in python Here is the code for the Prediction of multiple images from the folder. But getting the same label(class) for all the images.I'm not able to find out why every image shows the same label. # import the necessary packages from tensorflow.keras.models import load_model import argparse import pickle import cv2 from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.applications.imagenet_utils import decode_predictions import numpy as np import logging, os import sys from keras.preprocessing import image import tensorflow as tf import math import operator from pathlib import Path # disable the warnings logging.disable(logging.WARNING) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" image_path = "test_image_folder" images = [] # load all images into a list for img in os.listdir(image_path): img = os.path.join(image_path, img) img = image.load_img(img, target_size=(64,64)) img = image.img_to_array(img) img = np.expand_dims(img, axis=0) # normalize the image processed_image = np.array(img, dtype="float") / 255.0 images.append(processed_image) images = np.vstack(images) # relative paths to the model and labels model_path = os.path.join("Output", 'VGG_model.h5') label_file_path = os.path.join("Output", 'labels') # load the model and the label encoder model = load_model(model_path) lb = pickle.loads(open(label_file_path, "rb").read()) # make a prediction on the image images_data = [] filenames = [] for filename in os.listdir(image_path): pred_result = model.predict(images) images_data.append(pred_result) filenames.append(filename) #sorts attributes according to confidence score (how probable attribute exists) top_k = [] pred = [] for i in range(len(images_data)): rank = images_data[i][0].argsort()[-len(images_data[i][0]):][::-1] top_k.append(rank) top = top_k[i][:15] print(filenames[i]) for node_id in top: human_string = label_file_path[node_id] score = images_data[i][0][node_id] print('%s (score = %.5f)' % (human_string, score)) AI: You are looping on a folder to predict each image - for filename in os.listdir(image_path): pred_result = model.predict(images) images_data.append(pred_result) filenames.append(filename) But the argument of the predict function is not changing. Its a stacked value defined above as - images = np.vstack(images) This same prediction is being appended into images_data Assuming your prediction is not failing, it means every prediction is the prediction on all the images stacked in the images_data. So, for every iteration for i in range(len(images_data)): This images_data[i][0] is returning you the 1st prediction only. Changing to for i in range(len(images_data[0])): and images_data[i] should work
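A sketch of how the prediction loop could be restructured so that each file gets its own prediction: predict once on the stacked batch and then pair each row of scores with its filename. It assumes model, images, lb and image_path are defined as in the question, that the filenames are read in the same order used to build images, and that lb is a fitted label binarizer/encoder exposing a classes_ attribute.
filenames = list(os.listdir(image_path))   # same order as when the images were loaded
predictions = model.predict(images)        # one row of class scores per image

for filename, pred in zip(filenames, predictions):
    top = pred.argsort()[::-1][:15]        # class indices, highest score first
    print(filename)
    for node_id in top:
        print('%s (score = %.5f)' % (lb.classes_[node_id], pred[node_id]))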
H: Automating the training and deployment of the model as more data becomes available My question is about automating the training of the model as more data becomes available. In this scenario, I have 1MM items that I split into training and test datasets to train, validate and ultimately deploy the model. As more data becomes available on a daily basis, is it common in real-life projects or even advisable to automate the process above, as to leverage the new data and potentially make the model more accurate? AI: This is a big topic with a lot of possible solutions depending on your context, I'll just provide some of my personal experience from projects I've built. I would say that it is common in industry to retrain models in an ongoing fashion as new data comes in. Many times this happens at the daily level using either a basic cron-like solution or an enterprise orchestration solution (e.g. Airflow/Kubeflow). Cron is easy to set up and manage for small/personal projects, the others require more engineering set up to manage but they also are more powerful/fully featured. Once you have a model pipeline which trains in an ongoing basis, you would then need to decide whether to promote the new model to production or keep your existing model running. This could be done automatically given certain conditions (e.g. accuracy of new model > old model) or could require a manual step to evaluate the new model and promote after review. Many times the answer comes down to data quality/stability. For instance, if the input data has intermittent quality issues, you would be less likely to retrain and promote automatically without manual oversight. Potentially helpful links: Cron for ML: https://towardsdatascience.com/why-crontab-is-the-ultimate-data-science-back-end-tool-e3f212f2b13d Airflow for ML: https://towardsdatascience.com/10-minutes-to-building-a-machine-learning-pipeline-with-apache-airflow-53cd09268977 Kubeflow Pipelines: https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/
H: How can I determine the relationship between spam and weekdays? I am trying to check if there is a correlation between spam emails and weekdays. My dataset looks like as follows: Spam? Day 0 1.0 Saturday 1 1.0 Saturday 3 0.0 Saturday 5 1.0 Saturday 7 0.0 Friday ... ... ... 346 0.0 Friday 348 1.0 Friday 361 0.0 Saturday 383 1.0 Thursday 387 1.0 Friday where 1 means spam and 0 not spam. I have tried as follows corr = (numpy.corrcoef(df['Spam?'],df['Days'])) I do not know how to explain a possible relationship between these two variables and if a plot could help to better visualise data and relationship. AI: (started as a comment but it turned out to be longer than expected) With a dataset like this a simple barplot could be very insightful: on the X axis the days of the week, on the Y axis the frequency, with two bars (spam/not spam using different color) for each day. A slightly more advanced version: two boxplots, one for weekdays the other for weekends. A boxplot is kind of overkill for only 5 (mon-fri) and 2 (sat-sun) values but it's easy to do and shows the big picture. In order to test whether any difference (e.g. weekdays vs weekends) is significant I think this is a good case for a chi square test.
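If you want a formal test on top of the plots, a chi-square test of independence on the day-by-spam contingency table is a natural fit here. A sketch, assuming the dataframe from the question:
import pandas as pd
from scipy.stats import chi2_contingency

# Contingency table: rows = day of week, columns = spam flag (0/1)
table = pd.crosstab(df['Day'], df['Spam?'])

chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)   # a small p-value suggests the spam rate differs across days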
H: In Mini Batch Gradient Descent what happens to remaining examples Suppose my dataset has 1000 samples (X=1000). I choose a batch size of 32. As 1000 is not perfectly divisible by 32, the remainder is 8. My question is what happens to the last 8 examples. Are they considered? If they are, will they affect the efficiency of my model?
def next_batch(X, y, batchSize):
    for i in np.arange(0, X.shape[0], batchSize):
        yield (X[i:i + batchSize], y[i:i + batchSize])
This code is from a book and, as I understand it, it is not considering the last remaining data points. AI: It's an implementation-dependent point, but there is no reason for the last few records to be left out. In Keras, the remaining data points are taken as the last step; adding one extra element increases the number of steps by 1.
Case I - Data count is divisible by batch_size
epochs = 1
batch_size = 16
history = model.fit(x_train.iloc[:864], y_train[:864], batch_size=batch_size, epochs=epochs)
54/54 [==============================] - 0s 3ms/step
Case II - Adding an extra data point
epochs = 1
batch_size = 16
history = model.fit(x_train.iloc[:865], y_train[:865], batch_size=batch_size, epochs=epochs)
55/55 [==============================] - 0s 3ms/step
In your example too, the same thing happens, because slicing past the end of an array simply returns the remaining elements:
batch_size = 16
np.arange(0, x_train.shape[0], batch_size)
.....672, 688, 704, 720, 736, 752, 768, 832, 848, 864])
When the last slice happens, it will be a batch of 11 data points:
len(x_train[864:880])  # although x_train ends at 875
11
H: How to apply multiple filters in a Data Frame? How do I implement multiple filters to check whether data cells fall within given ranges? Suppose I have two lists of numbers, like
range_1 = [70, 15, 5, 7, 3, 7, 8, 3, 2, 63]
# and
range_2 = [50, 56, 80, 61, 83, 87, 13, 58, 43, 24, 84, 54, 64, 36, 48]
And I want to check whether any column values exist within these two lists. Any suggestion would be appreciated. AI: Combine boolean masks with & inside .loc (the column names below are just examples):
data.loc[ data['age'].isin(range_1) & data['height'].isin(range_2) ]
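A self-contained sketch with made-up column names and data (swap in your own columns and lists):
import pandas as pd

range_1 = [70, 15, 5, 7, 3, 7, 8, 3, 2, 63]
range_2 = [50, 56, 80, 61, 83, 87, 13, 58, 43, 24, 84, 54, 64, 36, 48]

data = pd.DataFrame({"age": [70, 12, 5, 40], "height": [50, 61, 99, 84]})

# Keep only rows where *both* columns contain a value from their respective list
filtered = data.loc[data["age"].isin(range_1) & data["height"].isin(range_2)]
print(filtered)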
H: Neural network regression is not dynamic enough to predict target range? I'm working with a CNN on a regression task, using MSE as the loss function. Here's a plot of predictions vs targets for the training set. Note: Legend is wrong. Blue = prediction vs target | Red = fit | Green = ground truth (x=x) I know that the approximate mean for the target is 0.8 so I initialise the bias of the last fully connected layer to this. (In my case it's reasonable to assume this for unseen samples as well). That's why the predicted fit crosses the ground truth at (0.8, 0.8). What's interesting is that the R^2 is pretty good, but the slope is off. If I wanted to calibrate my model without further training I could easily apply a linear transformation to change the slope (note that the validation data doesn't look too dissimilar). I feel like the model should have learned this extra factor. What could cause this lack of "sensitivity" or "dynamism"? For the signal processing people here, this reminds me a lot of when a PID regulation loop doesn't have a strong enough actuator to keep up with fast transients. Although it's a completely different concept, I feel like somehow the weights aren't large enough to produce the necessary gradient??? AI: I've solved it myself. Actually the whole premise was incorrect. It was tempting to look at the plot and think that it just takes a linear transformation to change the slope of the line, but that's because one axis of the plot is the target data! My claim was that the model should stretch the data along the y-axis (around the mean) because it's the obvious thing to do. But is it really? Have a look at these 4 examples. According to my "obvious fudge factor", the two points on the right should have been stretched up, and the two on the left should have been stretched down. But then what would that mean for all the other points that were on target? It would shift them off target and overall the model would perform worse. Here's what would happen to some of the points stretched up away from the mean. The tempting human thing to do, which is what got me, is to include the x-axis in your perception of the problem. But we have to remember that it's not there for the model. Every time the model predicts deviations from the mean it needs to have a pretty strong reason to do it. If it hasn't actually found the real reason to do so, then it shouldn't, otherwise the overall performance would be worse. As the model gets better, we don't just see a better R^2 (dots line up better), but we also see the slope increase, because the model gets more confident about deviating from the mean.
H: Interpreting decision tree results after target encoding I am not sure how to interpret the results of my decision tree after I had used target encoding, could someone clarify? The example below doesn't need target encoding just for explanation of my confusion here. For instance I am trying to classify if a fruit is rotten or not given its age and fruit type. I use target encoding for the fruit column: I then get the following decision tree with default sklearn decision tree classifier parameters: I believe after encoding I have lost information about fruit type and I can only say that if fruit_target <= 0.841 then the fruit is rotten if smaller, else not rotten. But then how do i interpret 0.841; what does it mean? AI: I believe after encoding I have lost information about fruit type and I can only say that if fruit_target <= 0.841 then the fruit is rotten if smaller, else not rotten. But then how do i interpret 0.841; what does it mean? Recall what the target encoding actually is in this example: it is the share of rotten fruits per fruit type, e.g. $75 \%$ of data points with fruit == pear are estimated to be rotten (I say "estimated" because it depends on the type of target encoding whether this an exact number or an estimate). Accordingly, you can infer from the decision tree that a data point will be classified as rotten iff its fruit type has more than $0.841 = 84.1\%$ rotten data points in the training set.
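To see where a number like 0.841 comes from, you can recompute a naive (unsmoothed) target encoding yourself -- it is simply the mean of the binary target per category:
import pandas as pd

# Toy data: 'rotten' is the binary target, 'fruit' is the encoded categorical feature
df = pd.DataFrame({
    "fruit":  ["pear", "pear", "pear", "pear", "apple", "apple"],
    "rotten": [1, 1, 1, 0, 0, 1],
})

encoding = df.groupby("fruit")["rotten"].mean()
print(encoding)   # e.g. pear -> 0.75: 75% of the pears in the training data are rotten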
H: Which tool is good to collect tweets on 50 keywords over the last 5 years and then analyze them with the LDA algorithm or sentiment analysis? I want to find tweets from the last 5 years to a topic. For this I decide for 50 Keywords (related to the main topic), where I want to find data on Twitter. I want to find out how the trend on the subject is, are there any opposing/negative opinions, are there recognizable patterns, ... (I am also unsure which possibilities I have to get something useful from the data) I totally new to Data Science and never done it before. I have found the following tool https://algorithmia.com/pricing and https://monkeylearn.com/ . Especially Algorithmia looks like it could help me. Are there other alternatives that could help me with this task? Since my programming skills are not very good and my data science skills are not existing, I thought it would be the easiest thing to buy a tool. I am happy about any advice. :) AI: As for Algorithmia and monkeylearn, I don't know if you will find much help for those here due the fact that Python and R/RStudio are the predominant 'data science' open source (read: free) platforms. Otherwise, if Algorithmia or monkeylearn has tutorials go for it. The project you are interested in doing is not trivial (with no experience) but completely within the grasp of a novice given enough time. May I suggest starting with R/RStudio. This is simply my opinion based on my learning path. "Your mileage may vary!" Then find some tutorials such as Towards Data Science and see how it goes.
H: How to handle fixed values for variables in pre-processing I have a dataset which contains a few variables whose values do not change. Some of the variables are non-numeric (for example, all values for that variable contain the value 5) and a few variables are real-valued but with all values the same. When standardizing the variables so that each has zero mean and variance 1, these variables give NaN values. Therefore, is it ok to exclude such variables (irrespective of being categorical or real-valued) that contain constant values from the normalization/standardization step? These variables are important as features hence I cannot delete them. Is there any other way to handle such variables? AI: By definition, if these columns or features contain a constant value and yet the output variable changes, then they are not influencing the output and can likely be ignored. A more formal test is to determine how much of a model's variance is attributable to that feature. A simple example to illustrate this principle is to look up examples of PCA; there, the technique tries to identify the features that drive the most variance.
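If you do decide that the constant columns carry no usable information, scikit-learn can drop them automatically, which also avoids the NaN problem when standardizing. A minimal sketch, assuming X is your numeric feature matrix:
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# VarianceThreshold(0.0) removes columns whose values never change,
# so StandardScaler afterwards never divides by a zero variance
pipeline = make_pipeline(VarianceThreshold(threshold=0.0), StandardScaler())
X_scaled = pipeline.fit_transform(X)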
H: Is it possible for a model with a large amount of data to perform very well and reach an extremely low cost within a single epoch? I am working on a project to detect human awareness levels using this dataset. I preprocessed the video data as the following: Convert video to frames(taking a frame every 5 seconds. Rotate the frames to be vertical. Apply OpenCV DNN to extract the faces from the images. Split the data into 90% train, 5% validation and 5% test. All in the dataset has a size of about 570,000 images. I am using the model on a mobile device so I used transfer learning with MobileNetV2. The model classification is extremely good but it feels odd seeing it do so well and reach a very low loss so fast. Is this even possible on a dataset this big? I am feeling that I did something wrong cause when I try to use the model on the mobile device with Tensorflow.js it does not perform well at all. After doing some research I realized that I should be using a model that combines a CNN and a LSTM as this is video data. But I am bit strapped for time to redo the whole preprocessing of the data to convert the images into a sequence of frames and then do the training once more. What I was planning to do was make an average of the predictions on the mobile device to improve the accuracy there but I am wondering if I messed up the process anywhere. AI: So the model was performing poorly because I was making predictions on the entire input image instead of doing face detection then performing predictions on the cropped faces.
H: With Stochastic Gradient Descent why we don't compute exact derivative of loss function? In a blog I read this: With Stochastic Gradient Descent we don't compute the exact derivative of our loss function. Instead, we're estimating it on a small batch. blog. Now I am confused with the whole concept. Why do we take an estimate of the derivative? Please explain. AI: That's because the whole loss is $\frac{1}{N} \sum\limits_{i=1}^N L(x_i, y_i)$ and that number $N$ is the dataset size, which can be very large. It's just too slow to compute the true gradient, thus we compute its unbiased estimate via Monte Carlo. There are theorems saying that stochastic gradient descent converges under certain conditions, so it's a reasonable method. You just don't have to wait as long as computing the true gradient would require; you can converge faster. Speed isn't the only reason. Researchers have also found that using a small batch size can improve the performance of neural networks, which is reasonable as well, because the lower the batch size, the higher the variance of the estimate, and that higher variance (i.e. noise) helps prevent the net from overfitting.
H: Time series binary classification probability smoothing Problem Suppose we have trained binary classifier and want to predict value of [x1, ..., x5] with associated timestamps [t1, ..., t5]. We get the prediction as following: [0.25, 0.99, 0.1, 0.75, 0.79]. Assume that I have the domain knowledge to say that probability of positive class must not change abruptly. Jumps like from 0.99 at t2 to 0.1 at t3 cannot occur in real application. Questions Can I enforce smooth output constraint on (any/some) classifier? Does applying moving average on the prediction probability to smooth it make sense? AI: You can use a total variation regularizer (https://en.wikipedia.org/wiki/Total_variation_denoising), it's a penalty for abrupt changes of neighbor values. It's usually used for images, that's why its TF version (https://www.tensorflow.org/api_docs/python/tf/image/total_variation) operates with 4D tensors, but if you're writing your model in pytorch for instance, it's easy to implement that regularizer yourself. Also possibly you don't need it if you've got enough data and target values there are already smooth. Your ML algorithm would just learn that smoothness from data, the only 2 cases you'd need it is when your dataset is small or when your training targets aren't smooth, but testing targets should be smooth.
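A minimal sketch of a 1D total-variation penalty in PyTorch that you could add to your loss (assuming probs is the sequence of predicted positive-class probabilities for consecutive timestamps):
import torch

def total_variation_1d(probs: torch.Tensor) -> torch.Tensor:
    # Sum of absolute differences between consecutive predictions:
    # large jumps (e.g. 0.99 -> 0.1) are penalized, smooth sequences are not
    return torch.sum(torch.abs(probs[1:] - probs[:-1]))

# Example usage inside a training step (lambda_tv is a tuning knob):
# loss = bce_loss(probs, targets) + lambda_tv * total_variation_1d(probs)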
H: Object detection is not improving although loss is decreasing I am working on a project for detecting buildings from satellite images (using OpenStreetMap data as labels) using the Tensorflow Object Detection API. I recently upgraded to Tensorflow 2 and chose EfficientDet D4 from the Model Zoo. I train (fine-tune) this pre-trained model on my data. Throughout the training, I keep making predictions, always using the current state of the model. The point is to see how the performance of the model gets better with training. Problem: The ability to predict buildings in the validation dataset is not improving at all. In fact, it is getting worse! However, the loss during training is constantly decreasing. Here is an example - in blue are predictions of the model after less than 1 epoch while in red are predictions by a model trained much more: To me, it doesn't make any sense. I've spent loads of time trying different variations of this, using another dataset for evaluation, etc. But I still get worse results (in terms of precision, recall, f1) when I train the model more. And this is not due to overfitting; the model's performance starts to decrease almost immediately. Earlier, I was using Tensorflow 1 and SSD Inception V2 as a model. That worked as expected, i.e. the model's ability to detect buildings was improving as the training progressed. Question: What could be the cause of this and what can I try to solve this problem? I have run out of ideas. I am new to machine learning so I am likely missing some important information here. Please let me know if more details are needed to answer this. AI: So the question asks why you are seeing a decrease in the loss function (for both training and validation?) while also observing decreasing generalisation performance over iterations. A first thought is that the loss function you have chosen might not be appropriate for, or well aligned with, the detection metric you actually care about for your task.
H: Why Mini batch gradient descent is faster than gradient descent? According to me: Mini Batch Gradient Descent: 1. It takes a specified batch size, say 32. 2. Evaluate the loss on 32 examples. 3. Update the weights. 4. Repeat until every example is covered. 5. Repeat till a specified epoch. Gradient Descent: 1. Evaluate the loss for every example. 2. Update the weights accordingly. 3. Repeat till a specified epoch. My questions are: 1. As Mini batch GD is updating weights more frequently, shouldn't it be slower than normal GD? 2. Also I have read somewhere that we estimate the loss in SGD (i.e. we sacrifice some accuracy in loss calculation for speed). What does it mean and does it help in increasing speed? AI: It is slower in terms of the time necessary to compute one full epoch. BUT it is faster in terms of convergence, i.e. how many epochs are necessary to finish training, which is what you care about at the end of the day. That is because you take many gradient steps towards the optimum in one epoch when using batch/stochastic GD, while in GD you only take one step per epoch. Why don't we use a batch size of 1 every time then? Because then we can't calculate things in parallel and computation resources are not used efficiently. It turns out that in every problem there is a batch size sweet spot which maximises training speed by balancing how well the computation parallelizes against the number of gradient updates per epoch. mprouveur's answer is very good; I'll just add that we deal with this problem by simply calculating the average or sum loss over all batches' losses. We don't really sacrifice any accuracy, i.e. your model is not worse off because of SGD - it's just that you need to add up results from all batches before you can say anything about the results.
H: High validation loss, high validation accuracy I'm just getting started into the field of deep learning, and I completed my first model training using PyTorch. I decided to use a pre-build model from torchvision, more specifically the mobilenet_v2 (https://pytorch.org/docs/stable/_modules/torchvision/models/mobilenet.html#mobilenet_v2) on a custom dataset for binary classification. I manage to get 100% accuracy for both training and test sets (this particular dataset is not a difficult one, so nothing weird about that) What I don't understand is why the test loss seems to be several orders of magnitude higher than the train loss, while the accuracy is 100% in both cases. Could someone here explain me what is happening here? I show you the training stats below: Epoch: 1/15, Train Loss: 0.22990, Train accuracy: 88.48%, Test Loss: 2.10275, Test accuracy: 99.22% Epoch: 2/15, Train Loss: 0.03988, Train accuracy: 99.41%, Test Loss: 2.08563, Test accuracy: 99.22% Epoch: 3/15, Train Loss: 0.02211, Train accuracy: 99.41%, Test Loss: 2.05521, Test accuracy: 100.00% Epoch: 4/15, Train Loss: 0.01173, Train accuracy: 99.80%, Test Loss: 2.05332, Test accuracy: 100.00% Epoch: 5/15, Train Loss: 0.00499, Train accuracy: 100.00%, Test Loss: 2.04989, Test accuracy: 100.00% Epoch: 6/15, Train Loss: 0.00381, Train accuracy: 100.00%, Test Loss: 2.04952, Test accuracy: 100.00% Epoch: 7/15, Train Loss: 0.00171, Train accuracy: 100.00%, Test Loss: 2.04999, Test accuracy: 100.00% Epoch: 8/15, Train Loss: 0.00072, Train accuracy: 100.00%, Test Loss: 2.04971, Test accuracy: 100.00% Epoch: 9/15, Train Loss: 0.00045, Train accuracy: 100.00%, Test Loss: 2.04938, Test accuracy: 100.00% Epoch: 10/15, Train Loss: 0.00035, Train accuracy: 100.00%, Test Loss: 2.04932, Test accuracy: 100.00% Epoch: 11/15, Train Loss: 0.00029, Train accuracy: 100.00%, Test Loss: 2.04920, Test accuracy: 100.00% Epoch: 12/15, Train Loss: 0.00025, Train accuracy: 100.00%, Test Loss: 2.04922, Test accuracy: 100.00% Epoch: 13/15, Train Loss: 0.00022, Train accuracy: 100.00%, Test Loss: 2.04906, Test accuracy: 100.00% Epoch: 14/15, Train Loss: 0.00020, Train accuracy: 100.00%, Test Loss: 2.04914, Test accuracy: 100.00% Epoch: 15/15, Train Loss: 0.00018, Train accuracy: 100.00%, Test Loss: 2.04905, Test accuracy: 100.00% AI: In binary classification we have either 0 or 1. You must be using softmax function at the end layer to get the output . It ranges from (0,1) Loss: Generally speaking , it is the difference between predicted and actual value. Suppose CASE 1:predicted value is 0.6 real answer is 1. CASE 2:predicted value is 0.9 real answer is 1. Although they both have the correct answers(because they both round off to become 1) , loss in case 2 is less than case 1.
H: AUC ROC Curve multi class Classification Here is part of the code for the ROC AUC curve calculation for multiple classes.
n_classes = 5
y_test = [0,1,1,2,3,4] # actual values
pred1 = [0,1,1,1,3,4] # predicted values
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(np.array(pd.get_dummies(y_test))[:, i], np.array(pd.get_dummies(pred1))[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
Error: I understand that only 4 classes (0, 1, 3, 4) appear in pred1 while there are 5 classes (0, 1, 2, 3, 4) in y_test, so pd.get_dummies(pred1) has one column fewer and this error arises. How do I solve this error? AI:
df = pd.get_dummies(pred1)
df.insert(loc=2, column='2', value=0)
#print(df)
Add this before the for loop and, inside the loop, use df in place of pd.get_dummies(pred1); the dummies of y_test already contain all 5 columns.
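A slightly more general fix is to force both dummy matrices to contain one column per class, whatever classes happen to appear, by reindexing the columns. A sketch with the values from the question:
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, auc

n_classes = 5
y_test = [0, 1, 1, 2, 3, 4]
pred1 = [0, 1, 1, 1, 3, 4]

# Guarantee columns 0..n_classes-1 in both matrices, filling missing classes with zeros
y_true_dummies = pd.get_dummies(y_test).reindex(columns=range(n_classes), fill_value=0).astype(int)
y_pred_dummies = pd.get_dummies(pred1).reindex(columns=range(n_classes), fill_value=0).astype(int)

fpr, tpr, roc_auc = dict(), dict(), dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_true_dummies.iloc[:, i], y_pred_dummies.iloc[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])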
H: Ordering of standardization, pca, and/or tfidf for neural network I have 60k rows of text data. I have tokenized it into 55k columns. I am using a neural network to classify the data but have some questions about how to order my preprocessing steps. I have too much data for my hardware (doesn't fit in memory/too slow) so I am using PCA to reduce dimensions. Obviously, I need to scale before PCA. I am currently standardizing the columns, but I am wondering if I can use tfidf instead of standardization. Some rows have 50k+ tokens while others have <1k tokens so I am worried these rows have undue influence on the outcome of scaling which will trickle down the pipeline. Is this a good/bad idea? Would I maybe use tfidf then standardize before PCA? Generally neural nets prefer standardized data. After PCA the first few columns have much greater magnitude than the rest b/c they capture so much variance. Should I standardize after PCA and before training? The reason for standardizing before training is so no feature has bigger influence on the model just b/c the scale is bigger, but isn't PCA telling me that the first few features are actually more important? FWIW, I've tried both and not scaling seems a little better. What about performing tfidf after PCA and before training? Again, rows with 50k+ tokens will prefer a network with orders of magnitude larger weights than rows with <1k tokens. Wouldn't it be hard for the network to set weights for both types of rows? Diagram for clarity: data -> tokenize -> ?standardize/tfidf? -> PCA -> ?standardize/tfidf? -> neural net AI: I would go for this: data -> tokenize -> tfidf* -> neural net But in the tfidf vectorizer, you can actually regularize the number of terms used, for example by restricting the minimum number of occurrences of a term and/or defining the maximum number of features, so that you only keep the ones that have the highest importance according to tfidf. If you want to reduce the number of features via some decomposition technique, PCA won't be adequate since the term-frequency matrix is sparse, so you could, for example, use NMF (non-negative matrix factorization) instead. So: data -> tokenize -> tfidf -> NMF -> neural net This time the regularization on tfidf is not necessary since you have an additional step. In the end, cross-validation metrics will guide you to the best strategy.
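A sketch of the second pipeline with scikit-learn; the values for min_df, max_features and n_components are placeholders to tune, and texts stands for your list of raw documents:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    TfidfVectorizer(min_df=5, max_features=20000),  # drop very rare terms, cap the vocabulary
    NMF(n_components=300, random_state=0),          # sparse tf-idf -> dense 300-dim features
)
X_dense = pipeline.fit_transform(texts)             # feed X_dense to the neural net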
H: Can I expect good results having low correlation attributes? This was a question I saw in an interview for a data scientist position: "Here is the correlation heatmap that I got from my attributes. Regarding the correlation of each feature with the dependent variable (target/class), it is noticeable that the correlations are not very strong. Still, I would like to know if I can expect good results from a classification model using this dataset. Also, what further investigation can I do (if I shouldn't look at correlation only)?" AI: It's a general question, so there are more than a few things you can do. Although, what's stopping you from training a basic classifier and investigating the results? Some ideas: Use the Predictive Power Score to keep investigating your data. Check for non-linear correlation between the features. Investigate the feature importances. Use dimensionality reduction. Check for class imbalance.
H: Cleaning data with two fields mixed in the same column? I am working on a template for a dashboard that is giving me some serious trouble. The format the data comes in is highly unstructured. In the image below, circled items are "Site Names", data is Month-level. The issue is that the "Site Names" are mixed in with the Month, with an inconsistent number of months for each site. Some sites will display one year worth of data, others will contain up to 3 years. There is no field indicating Year, only Month. I have managed to extract a list of unique Site Names from this column, so I have those on their own. My issue now is that I have no idea how to match them to their corresponding Months. I was thinking through how to do so with an HLOOKUP, however after planning it out briefly I'm not sure this would work. The site names I would be using as a lookup are not all present in the top row. I am absolutely stumped on how to achieve this. Provided this isn't asking too much in one question, I also am faced with the challenge of getting Excel to understand that the first instance of a month for each site is an older instance of a month. I.E. "Jul" would be 2018 the first time it appears, 2019 the second time, 2020 the third time. Any help/tips would be greatly appreciated. AI: I suggest adding a company name for each corresponding month. See the attached picture. The formula for the first column determined if it is for a month or for the company name. Assuming that all your months are in the three-letter format and there is no company named 'May' or 'Sep', the formula for the cell B2 would be =SUMPRODUCT(--(A2={"Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"}))>0 The formula for C2 is =INDEX($A$2:$A$13,MATCH(2,1 /($B$2:B2=FALSE))+1) Please refer to this page (https://www.get-digital-help.com/index-match-last-value/) for explanations for this formula. Finally, you can filter the third column in my example to keep months for the specific company only. They will be in the order needed.
H: As ReLU is not differentiable when it touches the x-axis, doesn't it affect training? When I read about activation functions, I read that the reason we don't use the step function is that it is non-differentiable, which leads to problems in gradient descent. I am a beginner in deep learning: as ReLU is almost a linear function and is also non-differentiable where it touches the x-axis, why does it perform so much better than tanh or sigmoid functions? And why is it so widely used in deep learning? As it is non-differentiable, doesn't it affect training? AI: A step function is discontinuous and its first derivative is a Dirac delta function. The discontinuity causes the issue for gradient descent. Further, the zero slope everywhere else leads to issues when attempting to minimize the function: the function is essentially saturated for values greater than and less than zero. By contrast, ReLU is continuous and only its first derivative is a discontinuous step function. Since the ReLU function is continuous and well defined, gradient descent is well behaved and leads to a well-behaved minimization. Further, ReLU does not saturate for large values greater than zero. This is in contrast to sigmoid or tanh, which tend to saturate for large values. ReLU maintains a nice linear slope as x moves toward infinity. The issue with saturation is that gradient descent methods take a long time to find the minimum of a saturated function. Summarizing: Step function: discontinuous and saturated for large positive/negative values. Tanh: continuous and well defined, but saturated for large positive/negative values. Sigmoid: continuous and well defined, but saturated for large positive/negative values. ReLU: continuous and well defined; does not saturate for large positive values. Hope this helps!
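Regarding the single non-differentiable point at x = 0: in practice frameworks just pick a subgradient there (PyTorch and TensorFlow use 0), and since the pre-activation input to a ReLU lands on exactly 0.0 essentially never with floating-point arithmetic, this convention does not hurt training. You can check it directly in PyTorch:
import torch

x = torch.tensor([0.0], requires_grad=True)
y = torch.relu(x)
y.backward()
print(x.grad)   # tensor([0.]) -- the subgradient chosen for ReLU at exactly 0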
H: CNN: How do I handle Blurred images in the dataset? I have 30% blurred images in each classes. I have a total of 10 classes. I'm not allowed to drop these blurred images. How do I train the model to get better accuracy for both blurred and nonblurred training dataset ? Currently, I'm at 11% accuracy. The images were blurred using a Gaussian blur. I have used a Wiener filter, but not able to restore the image from blurred images. Please can anyone suggest a good way to train the model. AI: I will suggest using data augmentation approaches to even out your data distributions. It will make your blurred images more usable to the model The data distribution of 30% of your images deviates from the rest because they are blurred. Experiment with training using random blur with appropriate min-max ranges in the data augmentation pipeline (on the images that aren't blurred). This will help the model to smoothly generalize across blurred images. If you don't have labels of which images are blurred, use blur detection algorithms to determine a threshold after which you want to augment. After doing this, it may be important that you do test-time data augmentation as well.
H: How we can identify the problem of Overfitting and underfitting and maintain bias? Basically, I'm new to the data science field, and I'm getting a little bit of confusion about overfitting and underfitting. Are overfitting and underfitting is totally depending upon the number of datasets or the behavior of data ? Can anyone explain the term of overfitting and underfitting and how to deal with this kind of problem? AI: Under/overfitting depends on two things: the amount of data in your dataset and the complexity of your model. To identify when each of these is happening, you will have to split the data you have into two parts: training data and test data. You then train your model only on the training data, and then evaluate its performance (e.g. calculate its accuracy or any other metric you are interested in) on the training data and test data. If your model performs well on your training data (e.g. you get a very good accuracy while training a model), but cannot make good predictions on your test data, then we say that the model is overfitting. What this means is that the model has memorized the training data instead of learning the patterns in it. As a result, it cannot generalize and make good predictions on data it hasn't seen before (e.g. the test data). This could be fixed by either reducing the complexity of the model (e.g. if it is a neural network then reduce the number of layers) or by increasing the amount of data (e.g. collecting more data, or using data augmentation techniques) If your model doesn't perform well on both training and test data, then we say it is underfitting. This means that the model is not complex enough to learn the pattern in the training data. This can be fixed by using a more complex model (i.e. a model with more parameters).
H: Machine learning algorithms and Computer Vision technologies for detecting 52 playing cards deck https://en.wikipedia.org/wiki/Standard_52-card_deck https://en.wikipedia.org/wiki/Playing_card https://www.google.com/search?q=playing+cards&client=ms-android-lava&prmd=sinv&sxsrf=ALeKk02cmtbG25vrqk59BybOJWHF7PG3Aw:1600237953234&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjcuIj2hu3rAhXtzjgGHZegCj0Q_AUoAnoECAwQAg https://www.google.com/search?q=playing+cards&client=ms-android-lava&prmd=sinv&sxsrf=ALeKk02cmtbG25vrqk59BybOJWHF7PG3Aw:1600237953234&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjcuIj2hu3rAhXtzjgGHZegCj0Q_AUoAnoECAwQAg https://www.instagram.com/p/CFMBjc6J7PN/?utm_source=ig_web_button_share_sheet Are there Machine learning algorithms & Computer vision technologies which can view the 52 playing card deck? Input : 52 playing card deck images as .jpg,.gif,.tiff Output : Detecting the playing card names. For example : Spade Ace, Three diamonds, Jack of Hearts, King of Clubs. All the playing cards names should be detected by inputting playing cards images. If no, what are the limitations? Do the 52 playing card images have to be modified? One can also add one more playing card as "Joker" to the existing 52 playing cards added as a image as input. AI: Yes, you can recognize these cards. For easy implementation, you can check here and here. Also, you can build your custom neural network model with tensorflow, keras, pytorch etc. Recognizing visually "52 card deck" is already solved problem. Because, cards have good features & landmarks. You either can use neural networks or "old school" computer vision algorithms.
H: How we can Identify Specific Feature from a larger amount of Dataset? In machine learning, we need to work with all kinds of datasets. A dataset can have many records and many features (sometimes called columns), so the main problem for a data scientist is to understand the behavior of the dataset and get meaningful insights from it. Let's take one example from the Kaggle platform: there's a dataset for house price prediction, where we want to know the price of a house based on its features. Here is the link to the dataset: House Price Prediction Advance Regression Machine Learning Problem. So, the question is: how do we identify the meaningful features in a dataset? AI: I don't think there is one correct way, but here is what you can do: Use PCA if you have many features. This will reduce the number of features based on the amount of variance in each feature. You may also use other dimensionality reduction techniques. You can use models like LightGBM or random forest and see which features are important. You may use Lasso regression for feature selection. You may use intuition to see if some features simply do not make any sense. These are some of the methods to understand important features. You may read this article: https://towardsdatascience.com/the-5-feature-selection-algorithms-every-data-scientist-need-to-know-3a6b566efd2
H: Why we can't Remove features with missing values in Data Preprocessing In a real dataset, there are many missing values, and we need to deal with them during data preprocessing. There are many ways to minimize the problem of missing values. So, can we impute with the mean, median or standard deviation, or can we remove those whole records? Why do many people advise against removing the complete record from the dataset? AI: Missing values don't necessarily mean missing information. Sometimes a missing value is information in itself. For example: we have a dataset with features such as pool area, number of rooms and area. Now pool area has 90% of its values missing. You can create a new column called is_pool, which tells whether the house has a pool or not, from the pool area column: if pool area is missing, set is_pool = 0, otherwise 1. This is one basic example. In my experience, the most difficult thing while doing EDA is identifying whether a missing value really means no information or whether it represents something else entirely. In short, understand why the value is missing.
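A minimal pandas sketch of the pool example (the column names are hypothetical):
import numpy as np
import pandas as pd

df = pd.DataFrame({"pool_area": [35.0, np.nan, np.nan, 50.0],
                   "n_rooms":   [3, 2, 4, 5]})

# Treat a missing pool_area as "no pool" rather than as lost information
df["is_pool"] = df["pool_area"].notna().astype(int)
df["pool_area"] = df["pool_area"].fillna(0)
print(df)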
H: feature importance after classification I have time series data and more or less 200 features for each sample, I used a recurrent neural network for the binary classification task. After the classification I would like to know which features contribute most to one of the target(let's say target=1). Any suggested method? Thank you AI: You may use Permutation importance - Get your base-line score - Permutate a feature values. May replace with Random values - Calculate the score again - The dip is the feature importance for that Feature - Repeat for all the Features ....Breiman and Cutler also described permutation importance, which measures the importance of a feature as follows. Record a baseline accuracy (classifier) or R2 score (regressor) by passing a validation set or the out-of-bag (OOB) samples through the Random Forest. Permute the column values of a single predictor feature and then pass all test samples back through the Random Forest and recompute the accuracy or R To check the importance for the individual Class i.e. 0/1 Extrapolate the same to check if the increase is more for False-Positive or False-Negative. Read Beware Default Random Forest Importances for more explanation. Few other quotes from the page- Any machine learning model can use the strategy of permuting columns to compute feature importances. This fact is under-appreciated in academia and industry. The permutation mechanism is much more computationally expensive than the mean decrease in impurity mechanism, but the results are more reliable. The permutation importance strategy does not require retraining the model after permuting each column; we just have to re-run the perturbed test samples through the already-trained model.
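A framework-agnostic sketch of that procedure for this setting. It assumes a trained model whose predict returns positive-class probabilities, validation arrays X_val of shape (samples, timesteps, features) and y_val, and uses recall on the positive class as the score so that the importances speak to target = 1 specifically (swap in any metric you prefer):
import numpy as np
from sklearn.metrics import recall_score

def permutation_importance_rnn(model, X_val, y_val, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    def score(X):
        preds = (model.predict(X) > 0.5).astype(int).ravel()
        return recall_score(y_val, preds)
    baseline = score(X_val)
    importances = np.zeros(X_val.shape[2])
    for j in range(X_val.shape[2]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            # Shuffle feature j across samples (all timesteps of a sample move together)
            X_perm[:, :, j] = X_perm[rng.permutation(len(X_perm)), :, j]
            drops.append(baseline - score(X_perm))
        importances[j] = np.mean(drops)
    return importances   # larger drop in recall = more important for detecting class 1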
H: EfficientNet function composition or Hadamard In the page 3 of the paper of EfficientNet, there is a equation $$\mathcal{N} = \bigodot_{i=1...s} \mathcal{F}_{i}^{L_i} \big(X_{\langle H_i, W_i, C_i \rangle}\big)$$ where $\mathcal{N}$ is the conv net and each $\mathcal{F}_i^{L_i}$ is the $i$th-stage layer operator that has length $L_i$. What I don't understand is, what is this $\odot$ in this equation? Does the author refer to the Hadamard product or does he refer to the function composition? He previously mentioned that $\mathcal{N} = \mathcal{F}_k \odot ... \odot \mathcal{F}_1 (X_1)$, where k is the depth of the net. So I thought it means that $\odot$ is just function composition. But EfficientNet has skip connection. In the keras implementation it uses layers.merge.Multiply() so it can also means that the input data $X$ is multiplied with the transformed $\mathcal{F}(X)$, and $\odot$ maybe means the Hadamard product. Does anyone knows the answer? Thanks. AI: It probably represents repeated function composition. Key reasons: The authors' wording before this equation is: "a list of composed layers". A Hadamard product could not be applied to layers of different sizes, whereas function composition of course can. Yes, EfficientNet has skip connections, but I think they try to validate function composition via a distinction between "layers" and "stages". They say that "$F_i^{L_i}$ denotes layer $F_i$ is repeated $L_i$ times in stage $i$". They also explain that "all layers in each stage share the same architecture. Therefore, I think they put skip-connections in the form of function composition, but it is confusing. I think it's their way of compactly expressing the (often) repeated architecture of skip-connected layers.
H: Distinguish randomly generated texts from reasonable human texts I have short text strings of 2 types: '23jd2032n0d2mn', 'fn830n30rn83', 'fhui29n4ok', 'qn4foml', ... and 'sweetie23', 'king3prussia', 'maryjesus', 'lovedog4and_kitties', ... Is there a way to distinguish one type from the other? I've tried to vectorize the texts with word2vec and classify these vectors with xgboost, but I didn't manage to achieve a good F1-score. AI: You could train a character-level language model, e.g. an LSTM, on the real short texts, and use the perplexity as the signal to decide whether a piece of text is real or not. In order to find an appropriate perplexity threshold, you can have a look at the distribution of perplexities over a validation holdout dataset. UPDATE: There are multiple implementations of language models. For "classical" options, you can go for KenLM, or if you have GPUs to train the model, you can use fairseq. Just remember to prepare your text to have character-level tokens (normally you just need to have a space between every letter).
H: How to structure the output layer of an MLP that finds the quadrant of an arbitrary point in a rectangle? I'm trying to write a neural network that outputs the quadrant of a rectangle that an arbitrary point lies in. This rectangle has its upper left at {0, 0} and its lower right at {1, 1} (e.g. point {0.25, 0.25} should return 'upper left', and point {0.75, 0.75} should return 'lower right'). I'm confused about the structure of the output layer. I've seen examples that suggest two nodes with Boolean "isLeft" and "isTop". I've also seen examples that suggest four nodes with probabilities for UL, UR, LL and LR. Is one or the other considered the proper structure in this case? Are both approaches valid? I want to do this exercise as simply as possible, as I work to learn the Multi Layer Perceptron. If there is a subtopic I need to focus on, please direct me to it. Thanks in advance! AI: You need only two hyperplanes to solve this, thus you need two neurons in the hidden layer. You can use either two or four neurons in the output layer; both options result in a correct solution (theoretically). You can also use plain perceptrons: with perceptrons, the output is a boolean vector, not a probability.
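A minimal sketch of the four-output variant in Keras (a softmax over UL/UR/LL/LR). The layer sizes are illustrative: in principle two hidden units suffice, but a few more make training easier.
import numpy as np
import tensorflow as tf

# Synthetic labels: quadrant index = isRight + 2 * isBottom -> 0=UL, 1=UR, 2=LL, 3=LR
X = np.random.rand(10000, 2)
y = (X[:, 0] > 0.5).astype(int) + 2 * (X[:, 1] > 0.5).astype(int)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # probabilities for UL, UR, LL, LR
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)

print(model.predict(np.array([[0.25, 0.25], [0.75, 0.75]])).argmax(axis=1))  # expect [0, 3]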
H: Replace values that starts with "<" in a data frame in r I am new to R and would like some assistance on replacing multiple values in a large dataset with many variables to NA at once in r. I want all the cells with "<0.01, <0.02, and many others with < sign at the beginning" to be changed to NA. How do I do it at once? Here is a screenshot of the data. AI: Using gsub with a regular expression to replace every string starting with a "<" with NA. x <- c("<0.01", "0.027", "<0.01", "<0.01", "0.2", "0.22", "0.04", "0.09") gsub("<.*", NA, x) # [1] NA "0.027" NA NA "0.2" "0.22" "0.04" "0.09"
H: Matrix notation in Sutton and Barto On pg. 206 of Barto and Sutton's Reinforcement Learning, there is a curious statement about the result of a scalar product: As I interpret it, A is the expectation of a scalar product of two d-dimensional vectors, which should be a scalar, right? So how do they get a d x d matrix from it? Is it a shorthand for a scalar matrix (diagonal with the repeated coefficient, namely this scalar product)? AI: In Sutton & Barto, vectors are considered column vectors by default. So if you have this kind of product: $$\mathbf{a}\mathbf{b}^T$$ where $\mathbf{a}$ and $\mathbf{b}$ are $d$-dimensional vectors, it does not calculate the scalar product. Instead it treats both vectors as matrices and calculates a matrix (outer) product, which will be a $d \times d$ matrix because you are multiplying a $d \times 1$ matrix by a $1 \times d$ matrix. Worth noting that the scalar product can also be obtained as a $1 \times 1$ matrix if you follow the same matrix multiplication rules but with the first vector transposed instead: $$\mathbf{a}^T\mathbf{b}$$ which leads to multiplying a $1 \times d$ matrix by a $d \times 1$ matrix. This is why the value function approximation can be written as $\mathbf{w}^T\mathbf{x}_t$ (there is a small liberty taken in assuming a $1 \times 1$ matrix is the same as a scalar value in terms of notation).
H: Machine learning algorithms for interpreting Companies brand/s logo/s https://www.google.com/search?q=Company+brand+logos&client=ms-android-lava&prmd=isnv&sxsrf=ALeKk0218I-1fMd-hNXX_fAF8_fu6EOotA:1600348128111&source=lnms&tbm=isch&sa=X&ved=2ahUKEwinlcWtofDrAhVAyjgGHZcYAhgQ_AUoAXoECA4QAQ&biw=360&bih=592&dpr=2 https://www.google.com/search?q=Company+brand+logos&source=lmns&bih=592&biw=360&client=ms-android-lava&prmd=isnv&hl=en&sa=X&ved=2ahUKEwjNyJS8o_DrAhXFoUsFHUsWCeoQ_AUoAHoECAAQAw Can Machine learning algorithms with companies brand/s logo/s as input images dataset interpret & give information about the company, products & services ? Input : Logo images format : gif, jpg,tiff. Example : Intel logo image. Output : Website : intel.com Products & Services : Integrated Chips manufacturers. AI: Very likely no! Machine learning algorithms aren't magic, they cannot see or find stuff that is not there. We know for a fact that some trends and hints exist that link a companies exterior communication to it's industry e.g. social media companies like blue logos (think Twitter, Facebook, linkedin, etc.). However for the most part logos, brand names, etc. do not have a structured way that ties them to such a specific information such as website-url, products and services. Practically you will also have a huge problem gathering training material. This is a supervised image recognition problem which means that you would need hundreds/thousands of examples were images are correctly tagged with the information to train your model. Given that a lot of industries do not even have that many brand names this will be near impossible. Alternatives? If all you want to do is identify the proper information about a company you could use ML to systematically crawl and gather that information from wikipedia or a companies website. However that time and money is properly better spend on buying access to one of many, many company databases that already exist.
H: How to build a database of image data for machine learning? I want to build a database of image data for machine learning. But how should this be done? I'm assuming people don't just dump all of their image data into a folder? Do they use a relational database management system, like MySQL? Or do they use a NoSQL database, like MongoDB? Is there a textbook that explores this part of machine learning in particular? Is this what "data warehouse" refers to? AI: There are several approaches to this as you need both the input (images) and if your problem is a classification one, you need to reliably store the labels. You might also have some additional information about the images that could be useful for your problem: you can store the images in such a way that all information is contained in the permanent store (for instance folder names with the labels that you want to learn and all the images of a given class within that folder). Keras has a method that allows you to create a dataset from a directory tf.keras.preprocessing.image_dataset_from_directory. another way (which I prefer) is to store in a (SQL) database all of the metadata (label, image url in a table for instance). This is more flexible because you can easily change a label, add a new category without having to move images around. This also allows you to change the format and add additional data related to each image.
H: Why does the overfitting decrease if we choose K to be large in K-nearest neighbors? I am studying machine learning and I am focusing on K-nearest neighbors. I have understood the algorithm, but I still have a doubt, which is how to choose K, the number of neighbors. I have seen that choosing $K=1$ leads to a fragmented decision boundary, while if I choose a larger value of K, I obtain a smoother boundary: I think I have learned the reason why this happens. But I have also studied that if $K=1$ we overfit, while if we increase $K$ the overfitting decreases, but why? So, why does the overfitting decrease if we choose K to be large in K-nearest neighbors? AI: Overfitting is "The production of an analysis which corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably." (Oxford dictionary) When you fit a ML model, you use a dataset that you assume is a sample of the real statistical distribution you want to model. This means that the dataset does not fully reflect the real distribution and will contain anomalies, exceptions or some randomness. An overfitted model would stick too much to the particularities of this dataset and be too variant, instead of learning statistical patterns that would smooth these particularities and bring the model closer to the real statistical distribution. In the case of KNN, K controls the size of the neighborhood used to model the local statistical properties. A very small value for K makes the model more sensitive to local anomalies and exceptions, giving too much weight to these particular points. On the contrary, a too large value of K would make the model ignore the local structure of the distribution you try to learn, and produce an underfitted model. Let's take an example to illustrate this: In a given city, you want to predict if a household has low or high income because you have data about most of them, but not all. You decide to use KNN and predict the income category of the households with no data according to the majority category among the K closest neighbors. This is likely to work well because we know that housing prices tend to create low or high income neighborhoods. If you use a small K, let's say K=1 (you predict based on the closest neighbor), you might end up with these kinds of predictions: In a low income neighborhood, you wrongly predict one household to have a high income because its closest neighbor has a high income. Indeed, this neighbor refused to move out from his childhood house though he could afford living anywhere else. In a high income neighborhood, you wrongly predict one household to have a low income because its closest neighbor has a low income. Indeed, though he couldn't afford the rent, the house has belonged to the family forever. The model overfits on these particular cases. Now if you increase the number of neighbors you compare to (= increase K), you will get a majority of either low or high income households in the neighborhood and correctly classify these examples. Finally, if you increase K too much, you will have an underfitted model. In our example, if we increase K indefinitely we will end up taking all households in the city as neighbors, and basically always predict the majority category in the city, ignoring local particularities.
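You can see this trade-off empirically by sweeping K and comparing train vs. test accuracy. A sketch on a synthetic dataset: with K=1 the training accuracy is (near) perfect while test accuracy lags, the gap closes for moderate K, and very large K starts to underfit both.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in [1, 5, 15, 51, 201]:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(k, round(knn.score(X_tr, y_tr), 3), round(knn.score(X_te, y_te), 3))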
H: Problem with converting string to dummy variables I'm new to data science. I have data I want to work on; I dropped the extra columns and reduced it to 4 columns (Product, Date, Market, Demand). In this data, Product and Market are strings, and I know that to work on this data I must convert them. I wanted to convert the strings to dummy variables, but this doesn't seem sensible because I have 64 fruits in the Product column. I am confused and I don't know what I can do with these strings.
AI: There are a variety of ways to convert a categorical column to a numeric one, with the right answer many times being use-case specific. Trial and error can help here to see what works best for your problem. To give a specific recommendation, you may want to try Target Encoding as an option and see how it performs. It will probably be better than One Hot Encoding or Ordinal Encoding in your case. Example links:

https://contrib.scikit-learn.org/category_encoders/targetencoder.html
https://maxhalford.github.io/blog/target-encoding/
https://brendanhasz.github.io/2019/03/04/target-encoding
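As a minimal sketch of what target encoding looks like in code (assuming the category_encoders package is installed; the toy DataFrame only mimics the column names from the question):

import pandas as pd
import category_encoders as ce

# Toy data with the same column names as in the question.
df = pd.DataFrame({
    "Product": ["apple", "banana", "apple", "cherry"],
    "Market": ["north", "south", "south", "north"],
    "Demand": [10.0, 4.0, 12.0, 7.0],
})

encoder = ce.TargetEncoder(cols=["Product", "Market"])
X_encoded = encoder.fit_transform(df[["Product", "Market"]], df["Demand"])
print(X_encoded)  # each category becomes a single numeric column, no matter how many fruits there are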
H: Feature Selection - Conditional Entropy I've developed an algorithm to compute conditional entropy for feature selection in text classification. I'm following the formula in Machine Learning for Text by Charu C. Aggarwal (5.2.2). The author mentions that conditional entropy values are between (0, log(number of classes)), which in my case is (0, 0.6931472). The author also mentions that features with the largest values can be removed, but he doesn't give further information about the criteria used to define 'largest' (is it only the maximum value of the entropy, or a set of the largest entropy values?). Have you guys ever applied conditional entropy for feature selection? If so, based on your results, what criteria did you use to decide which features to remove? Here is a summary of my conditional entropy results:

E.tj.
 Min.   :0.5701
 1st Qu.:0.6562
 Median :0.6563
 Mean   :0.6558
 3rd Qu.:0.6564
 Max.   :0.6564

AI: Individual feature selection methods assign a numerical value to every feature so that features can be ranked according to this value. The calculated value is chosen to represent how much the feature contributes to knowing the label/response variable: common choices are conditional entropy, but also information gain or correlation. The actual values assigned to the features are not really useful on their own; what matters is the ordering of the features according to this value. Thus the standard method for selecting the features is not to choose a particular threshold on the value, but simply to choose the number of features one wants to obtain.

Example: in a text classification task, there are 1000 documents and a vocabulary of 20000 unique words as candidate features. Using all the words would certainly cause overfitting, so we decide to use only 100 words as features. We can calculate the conditional entropy of every word with respect to the label, and then select the bottom 100 words according to the corresponding ranking as features (the 19900 other words are ignored).

Since individual feature selection is very efficient, it's often possible (and a good idea) to try a range of values as the number of features, and train/test the model for each of these values. This way one can experimentally determine the optimal number of features (the one which maximizes performance on the data). Note that this is a form of hyper-parameter tuning, and therefore one has to use a validation set for the tuning stage, and then the final model (with the selected optimal number of features) should be applied on a fresh test set.
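To make the "rank by conditional entropy, then keep a fixed number of features" procedure concrete, here is a minimal sketch on synthetic binary word-presence features (the data, the natural-log entropy and the choice of k=10 are arbitrary assumptions):

import numpy as np

def conditional_entropy(x, y):
    """H(Y | X) for a discrete feature x and label y, in nats."""
    h = 0.0
    for v in np.unique(x):
        mask = (x == v)
        p_v = mask.mean()
        _, counts = np.unique(y[mask], return_counts=True)
        p_y = counts / counts.sum()
        h += p_v * -(p_y * np.log(p_y)).sum()
    return h

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)                # binary labels, so H(Y|X) <= log(2)
X = rng.integers(0, 2, size=(1000, 50))          # 50 binary "word present" features
X[:, 0] = y                                      # make feature 0 perfectly informative

scores = np.array([conditional_entropy(X[:, j], y) for j in range(X.shape[1])])
k = 10
selected = np.argsort(scores)[:k]                # keep the k features with the LOWEST conditional entropy
print(selected)                                  # feature 0 should be among them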
H: What is the meaning of face recognition "in the wild" and "static"? What is meant when someone says face recognition on a wild dataset and on a static dataset?
AI: In the wild refers to uncontrolled imagery, as opposed to images taken in lab environments or posed pictures such as driver's license/passport pictures or mug shots.

Static in this context refers to still imagery, as opposed to temporal data (e.g. video).

With regard to facial expression recognition (not sure if you're interested in this related area), you could read this paper, which describes a popular static database for images in the wild and has good background information: Static Facial Expression Analysis in Tough Conditions: Data, Evaluation Protocol and Benchmark
H: Dropping missing rows in two dataframes I have two files:

Test_data - contains the features of the dataset to find predictions for
Submission_data - contains two columns: the index column for the test data and another column for its corresponding predicted value

So, I have to make predictions on the test data and store the predicted values in the submission file. During preprocessing of the test data, I am dropping rows that are missing values (NaN) for at least 50% of the features (columns):

test_data = test_data.dropna(thresh=math.ceil(test_data.shape[1]/2))

Now, how do I remove the corresponding rows in the submissions dataframe? Because, if I drop some rows in the test data, I cannot make a prediction for the corresponding row in the submissions dataframe/file. The problem is, there is an index column that does NOT HAVE UNIQUE values (in both the test data and the submissions data). So, how do I drop the rows in the submissions data that were also dropped in the test data? I am new to ML challenges and I find this challenging.
AI: When you read the two csv files and store the data in two dataframes, you could then combine them into one dataframe, do the dropna and then split it back. I will give an example using pandas:

import math
import pandas as pd

df1 = pd.read_csv('test_data.csv')
df2 = pd.read_csv('submission_data.csv')
df3 = pd.concat([df1, df2], axis=1)  # this will combine the two dfs

reduced_data = df3.dropna(thresh=math.ceil(df1.shape[1]/2))
predictions = reduced_data.loc[:, ['predictions']]
reduced_data.drop(columns=['predictions'], inplace=True)
# instead of 'predictions', use whatever column name you have for the
# predictions in the submission_data.csv file.

Hope this helps.
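An alternative sketch that avoids concatenating at all (and so sidesteps the non-unique index entirely) is to compute a boolean keep-mask on the test data and apply the same mask positionally to both dataframes. The variable names are placeholders, and this assumes the two files list their rows in the same order:

import math
import pandas as pd

test_data = pd.read_csv('test_data.csv')
submission = pd.read_csv('submission_data.csv')

# True for rows with at least 50% non-NaN feature values
keep = test_data.notna().sum(axis=1) >= math.ceil(test_data.shape[1] / 2)

test_data = test_data[keep]
submission = submission[keep.values]   # same positional mask, so the rows stay aligned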
H: Using constants in input of a ML Model I'm currently building a binary classifier. My input is a sequence of 32 time-steps. Certain time-steps of the input will be constant (e.g. t-0 will always be 0, t-5 will always be 9, etc.). Does it make sense to add these time-steps as features into the model? I'm thinking it doesn't, since the model will have to pay attention to these features and they will add a type of noise/bias into the model, given that there isn't any new information to be gained from them. Am I thinking about this correctly?
AI: You are thinking about this correctly. If data doesn't vary between your outcomes then it doesn't need to be included. That being said, if you are using time series techniques such as trend decomposition to engineer features, then changing the structure of your data could complicate interpretation (i.e., what is a moving average if you've removed data points?). In that light I'd say that you should not build these quirks into the code you write, and that you should keep it as general as possible except where absolutely necessary. This is related to the concept of writing "DRY" code, where you don't repeat yourself.

Personal opinion: classifiers, like software, should try not to make assumptions about data if possible. This gives you a better chance of being able to reuse it or share it.
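If you do decide to drop the constant time-steps, one way to do it automatically is with a zero-variance filter; here is a minimal sketch on synthetic data (the constant positions just mirror the t-0 and t-5 examples from the question):

import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.random.rand(100, 32)   # 100 sequences of 32 time-steps
X[:, 0] = 0.0                 # t-0 is always 0
X[:, 5] = 9.0                 # t-5 is always 9

selector = VarianceThreshold(threshold=0.0)   # drops features that never vary
X_reduced = selector.fit_transform(X)
print(X.shape, "->", X_reduced.shape)         # (100, 32) -> (100, 30)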
H: Decision Tree Regressor: domain of the y variable Just wondering about something: suppose you fit a Decision Tree Regressor and your training y variable has a domain that spans from 0 to 1 inclusive. Is it possible for the y values predicted by the fitted model on the test set to exceed these boundaries?
AI: A Decision Tree Regressor won't predict values outside the range of values it is given in the train set. If your extreme values are -4 and +10, the predicted values will be between -4 and +10.

The reason for that is that a decision tree splits the training data into groups. The prediction associated with a node is then the mean value of the training data of that node (or, if you choose a criterion other than MSE, whatever value minimizes the cost function for this group).

An example of how a regression tree works: https://www.saedsayad.com/decision_tree_reg.htm

The output for Windy and Sunny is 26.5, as the training data has 2 data points that fit these criteria (23 and 30).
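A quick sketch you could run to convince yourself (the synthetic data and sklearn's DecisionTreeRegressor are just one way to check; the training targets are kept in [0, 1] as in your question):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = np.sin(X_train).ravel() * 0.5 + 0.5   # training targets in [0, 1]

tree = DecisionTreeRegressor().fit(X_train, y_train)

X_test = rng.uniform(-100, 100, size=(1000, 1))  # inputs far outside the training range
preds = tree.predict(X_test)
print(preds.min(), preds.max())                  # always within [y_train.min(), y_train.max()]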
H: RandomizedSearchCV() not scoring all fits I'm experiencing an issue with a RandomizedSearchCV grid that is not able to evaluate all of the fits. 50 of the 100 fits I'm calling do not get scored (score=nan), so I'm worried I'm wasting a bunch of time trying to run the grid search. I'm wondering how to troubleshoot this; I haven't found anything in the past few days and I'm hopeful that the community can help me squash this bug. Now, the details: I have constructed an XGBClassifier model as such:

xgb_clf = xgb.XGBClassifier(tree_method="exact",
                            predictor="cpu_predictor",
                            verbosity=1,
                            objective="binary:logistic",
                            scale_pos_weight=1.64)  # my training set is imbalanced: 85k majority class, 53k minority class

Currently, I am attempting to use the hashing trick to encode my categorical variables, as they are all nominal. I do this after splitting my training set into X and y variables:

ce_hash = ce.HashingEncoder()
hashed_new = ce_hash.fit_transform(X)
hashed_X = hashed_new

I then conduct my train_test_split as normal, then instantiate a RandomizedSearchCV with a parameter grid; the code is as such:

X_train, X_test, y_train, y_test = tts(hashed_X, y, test_size=.25)

# create my classifier
xgb_clf = xgb.XGBClassifier(tree_method="exact",
                            predictor="cpu_predictor",
                            verbosity=1,
                            objective="binary:logistic",
                            scale_pos_weight=4)

# Create parameter grid
params = {"learning_rate": [0.2, 0.1, 0.01, 0.001],
          "gamma": [10, 12, 14, 16],
          "max_depth": [2, 4, 7, 10, 13],
          "colsample_bytree": [0.8, 1.0, 1.2, 1.4],
          "subsample": [0.8, 0.85, 0.9, 0.95, 1, 1.1],
          "eta": [0.05, 0.1, 0.2],
          "reg_alpha": [1.5, 2, 2.5, 3],
          "reg_lambda": [0.5, 1, 1.5, 2],
          "min_child_weight": [1, 3, 5, 7],
          "n_estimators": [100, 250, 500]}

from sklearn.model_selection import RandomizedSearchCV

# Create RandomizedSearchCV Object
xgb_rscv = RandomizedSearchCV(xgb_clf, param_distributions=params, scoring='precision', cv=10, verbose=3)

# Fit the model by running ten fits on ten 'folds', or 100 individual fits.
model_xgboost = xgb_rscv.fit(X_train, y_train)

However, during 50% of the 100 fits, I will get a score that looks like this:

[CV] subsample=0.8, reg_lambda=2, reg_alpha=3, n_estimators=100, min_child_weight=3, max_depth=10, learning_rate=0.001, gamma=16, eta=0.1, colsample_bytree=1.4, score=nan, total= 0.1s

When this occurs, it occurs in sections of ten, so 10 straight fits will all generate a score of nan. The 50 nan scores don't always occur in the same order, but there are always 50 that don't get scored correctly. Would anyone know how I can attempt to correct this and ensure that all 100 fits get scored? Is this happening because I'm using a hashed feature set? Thanks!
AI: Some of your hyperparameter values aren't allowed (colsample_bytree and subsample cannot be more than 1), so probably xgboost errors out and sklearn helpfully moves on to the next point, recording the score as NaN. Half of your values for colsample_bytree are disallowed, which supports seeing half of your scores as NaN; and that will happen regardless of the fold, which explains why you always see them in groups of 10.
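As a sketch of how you might both confirm this and fix it (this builds on the xgb_clf and params objects defined in the question; the replacement values below are arbitrary examples, not recommended settings):

# Keep the sampling fractions within (0, 1] so every sampled configuration is valid for xgboost:
params["colsample_bytree"] = [0.6, 0.7, 0.8, 0.9, 1.0]
params["subsample"] = [0.8, 0.85, 0.9, 0.95, 1.0]

# error_score='raise' surfaces the underlying xgboost exception instead of silently recording NaN:
xgb_rscv = RandomizedSearchCV(xgb_clf, param_distributions=params,
                              scoring='precision', cv=10, verbose=3,
                              error_score='raise')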
H: How should I sample from a mixture distribution? Let's say we have a mixture distribution, defined by density $f(x)= w_1 p_1(x) + w_2 p_2(x)$, where $w_i$ is a scalar weight. Furthermore, we have efficient methods to evaluate the pdf and cdf/icdf for the distribution $D_i$ corresponding to density $p_i$. I would like to sample from such a distribution. The method I currently employ is an implementation of rejection sampling. I construct a proposal function $M*u(x)$, where $u \sim \text{Unif}(lb,ub)$ ($lb,ub$ are constructed such that at least 99% of the cdf of each $D_i$ is contained within, using the icdf) and $M$ is constructed by finding $\max_{x \in [lb,ub],i} p_i(x)$. Because such a proposal function envelops $f$, I am able to sample $f$ by choosing $x \in X \sim \text{Unif}(lb,ub) \times \text{Unif}(0,M)$ and rejecting if $\pi_2 x > f(x)$ [$\pi_2 x$ being the second coordinate of $x$]. However, doing this is quite slow. Not only is the proposal function inopportune (it is a scaled uniform, which likely leads to many rejections), but the construction of $M$ is very slow, as maximizing a function is a non-trivial task. Is there a more efficient way to sample such a distribution? I had considered icdf sampling, but constructing the icdf for $f$ seems non-trivially difficult. Is this impression incorrect? Or perhaps is there some other effective method? If it is helpful, I am implementing this in python and am currently using the scipy and pytorch libraries.
AI: The mixture distribution can be sampled in the following way. Let $f(x)=w_1p_1(x) + w_2p_2(x) + ... + w_np_n(x)$, where the $p_i$ are density functions and $w_i>0$. Note that $f(x)$ is a density function if the sum of all the weights is one. Then, we use the following two-stage process.

Stage 1. Draw a random variable $X$ (the selector, if I remember correctly), such that $P(X=i)=w_i$ for $i=1,2,...,n$.

Stage 2. Return a random variable drawn according to $p_X$, where $X$ is the index obtained in stage 1.
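A minimal sketch of this two-stage scheme with numpy/scipy (the two Gaussian components and their weights are placeholders; in your setting you would plug in whatever $D_i$ objects you already have efficient sampling/icdf access to):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weights = [0.3, 0.7]
components = [stats.norm(loc=-2.0, scale=0.5), stats.norm(loc=3.0, scale=1.0)]

n = 10_000
# Stage 1: pick a component index for every draw, with P(X = i) = w_i.
idx = rng.choice(len(weights), size=n, p=weights)

# Stage 2: draw each sample from its selected component (no rejections needed).
samples = np.empty(n)
for i, dist in enumerate(components):
    mask = (idx == i)
    samples[mask] = dist.rvs(size=mask.sum(), random_state=rng)

Since you mention pytorch: if your components all share a family, torch.distributions.MixtureSameFamily implements this same selector-then-sample scheme and provides .sample() and .log_prob() directly.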