H: How can I deal with tiny categories? I'm playing around with the UCI Bank Marketing Dataset. There is a categorical variable named default which tells us if the client "has credit in default". That variable has three options: no, yes and unknown. Look at its distribution: no 32588 unknown 8597 yes 3 As you can see, yes occurs in only 3 cases, and my question is how to deal with such tiny categories in general. Should I just exclude them from the dataset every time I come across them? Or maybe I should do something like oversampling, but only for those cases? I'm asking because I'm concerned about the impact on a classification task. As far as I understand, if all of these yes cases fall into the validation or test parts of the dataset during partitioning, it will distort the metric's result. AI: It totally depends on the target task and the importance of the "tiny class" for this task: In some tasks with multiple classes where there is no particular emphasis on any specific class, "tiny classes" can simply be discarded. But in the case of this dataset, the natural target task is to detect default cases, and there's little point in an application which classifies customers between "no" or "unknown" default cases. This means that despite its small size, the "yes" class is very important for the most relevant application of this dataset. There's no obvious answer to the question of how to deal with a class like this: Oversampling it is an option, but this would almost certainly introduce some bias in the model. My preferred option here would be to consider the "unknown" class as unlabelled data and try to apply some kind of semi-supervised learning. In my opinion this kind of imbalance is close to anomaly detection territory. Normally anomaly detection is unsupervised, but maybe there would be something to investigate here.
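As a rough illustration of the semi-supervised idea (a minimal sketch only: the feature matrix and labels below are synthetic stand-ins, and -1 marks the "unknown" rows as unlabelled):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # stand-in for the prepared numeric features
# 0 = "no", 1 = "yes", -1 = "unknown"; SelfTrainingClassifier treats -1 as unlabelled
y = np.where(rng.random(1000) < 0.25, -1, (X[:, 0] > 1.5).astype(int))

base = LogisticRegression(max_iter=1000, class_weight="balanced")
model = SelfTrainingClassifier(base)               # base estimator passed positionally
model.fit(X, y)
print(model.predict(X[:5]))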
H: Improving the performance of neural networks I have 3 questions in mind about neural networks. For the best model performance, is it better to train a model only on high-resolution images, or does it not matter whether the training data includes both high-resolution and low-resolution images? Let us say I would like a model that can detect cats and there are 10 cats I am interested in; again, for the best performance, would it be better to have 10 classes (one for each cat) or just one class for cats (like just "cat"), or does it not matter? Why might a model trained on 2D images not work well when validated on 360 images? AI: It will depend on the application you are targeting. Certainly, higher resolution will increase the performance. But if you take that model and apply it in an environment where the resolution is low, then the performance also drops. A good option is to combine both datasets. That will make your model robust to the change in resolution. But that also comes at a training cost, as you will have a larger dataset. In general, you should assess your application area and decide on a scenario which will enable your model to perform well. The question is a bit vague. It comes down to distribution change: neural networks are very sensitive to distribution changes. To check, plot the distribution of the pixel values of both kinds of images and you will notice the difference.
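A minimal sketch of the pixel-distribution check mentioned above (the file names are placeholders for one ordinary 2D image and one 360 image):
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# placeholder file names; substitute any representative image from each set
img_2d = np.asarray(Image.open("sample_2d.jpg").convert("L"), dtype=np.float32).ravel()
img_360 = np.asarray(Image.open("sample_360.jpg").convert("L"), dtype=np.float32).ravel()

plt.hist(img_2d, bins=50, density=True, alpha=0.5, label="2D image")
plt.hist(img_360, bins=50, density=True, alpha=0.5, label="360 image")
plt.xlabel("pixel intensity")
plt.ylabel("density")
plt.legend()
plt.show()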
H: Bayes theorem on the probability of an object drawn at random using percentages (not Naive Bayes) It's the normal Bayes equation but I'm not sure if I've calculated this correctly or how to check my work, here is a somewhat similar question but I wasn't sure if our math was the same, the question was a little more complex, here's my question: A company has four machines that manufacture levers for cars, machines M1, M2, M3 and M4 manufactures 10%, 25%, 25% and 40% of the levers respectively. Of their outputs 10,15,20,and 2 percent respectively are defective levers. You draw a lever at random during inspection from the product bin and you find it to be defective. What is the probability that it was manufactured by machine M2? Derive the Bayes (not Naive Bayes) formulation from the conditional probability after listening to the lecture. Here's my work: m1 = 10%; defective output 10% m2 = 25%; defective output 15% m3 = 25%; defective output 20% m4 = 40%; defective output 2% m1 = 1/10; 1/10 defective m2 = 1/4; 3/20 defective m3 = 1/4; 1/5 defective m4 = 2/5; 1/50 defective 1/4 * [(3/20)/(1/4)] ------------------------ + 1/10 * [(1/10)/(1/10)] + 1/4 * [(3/20)/(1/4)] + 1/4 * [(1/5)/(1/4)] + 2/5 * [(1/50)/(2/5)] .25 * [(.15)/(.25)] ------------------------ + .10 * [(.10)/(.10)] + .25 * [(.15)/(.25)] + .25 * [(.2)/(.25)] + .40 * [(.02)/(.40)] .15 ----- .10+.15+.2+.02 .15 ----- 0.47 15/47 = 0.319148936 ~32% This is the example in the textbook I'm basing my work on, I've highlighted some things for clarity, the example doesn't use percentages and is more simplified: AI: Perhaps I am missing something, but isn't it as follows? Probability of drawing a lever from a specific machine value $p\left(M_1\right)$ 10/100 $p\left(M_2\right)$ 25/100 $p\left(M_3\right)$ 25/100 $p\left(M_4\right)$ 40/100 And Probability of faulty lever given machine value $p\left(F|M_1\right)$ 10/100 $p\left(F|M_2\right)$ 15/100 $p\left(F|M_3\right)$ 20/100 $p\left(F|M_4\right)$ 2/100 Probability of machine 2, given faulty is then: $$ p\left(M_2|F\right)=\frac{p\left(F|M_2\right)\cdot p\left(M_2\right)}{p\left(F\right)}=\frac{p\left(F|M_2\right)\cdot p\left(M_2\right)}{\sum_k p\left(F|M_k\right)\cdot \left(M_k\right)}\approx0.355\dots $$
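To double-check the arithmetic numerically, a small sketch (the numbers are the ones given in the question):
priors = {"M1": 0.10, "M2": 0.25, "M3": 0.25, "M4": 0.40}   # share of production per machine
p_def  = {"M1": 0.10, "M2": 0.15, "M3": 0.20, "M4": 0.02}   # P(defective | machine)

p_f = sum(priors[m] * p_def[m] for m in priors)             # P(defective) = 0.1055
p_m2_given_f = priors["M2"] * p_def["M2"] / p_f             # approx 0.3555
print(p_f, p_m2_given_f)
This agrees with the 0.355 above; the 15/47 = 0.319 in the question comes from dropping the machine priors from both the numerator and the denominator, which is only valid when all machines produce equal shares.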
H: AB testing split algorithm I want to understand what is the most effective algorithm for splitting. I have ids of users and I want to split them into 2 groups. Now I have 2 variants: Modulo approach - let's say we will place all even ids into one group, odd numbers into another. Pros - for any sequence we will have a uniform distribution of users. So for any day or hour, users that registered during that time will be equally divided between 2 groups. Con - if I need two perform a few tests on the same group of users I might have problems Splitting with md5 - see this. Pros - we can easily perform a lot of test on the same data, just need to use different salt Con - for some sequence of orders, we will not have a uniform split. So we might have some problems with, for example, weekly seasonality. So is there something in the middle? Can I find an appropriate hash that won't be so 'random' as md5 and also will allow me to conclude multiple tests on the same group of users. We are talking about let's say 5-10 tests AI: I assumed when you say "I need two[sic] perform a few tests on the same group of users I might have problems", you intend to re-randomise the said group of users into a new control and treatment group when you start a new test. A possible middle-ground is to randomise twice, i.e. first randomise users into many buckets, and then randomise buckets into treatment/control groups when you need to run a new test. There are a couple flavours to this as well: 1. "Deterministically" assign user into buckets (based on some rules on an sufficiently random identifier that is sufficiently independent from user feature(s) and the treatment), randomly pick buckets to test: For example, the experimenter can create 10 buckets based on the last digit of the user ID. This assumes you have sufficient users such that the last digit is uncorrelated to some hourly/daily seasonality. For each test, the experimenter then randomly select 5 buckets into the control group and 5 buckets into the treatment group, e.g.: Test 1 - Control: 5, 9, 6, 0, 3; Treatment: 4, 8, 1, 7, 2 Test 2 - Control: 1, 7, 2, 6, 9; Treatment: 3, 4, 8, 5, 0 Test 3 - Control: 7, 4, 2, 6, 3; Treatment: 0, 9, 1, 5, 8 ...and so on [1]. 2. Randomly assign user into buckets, deterministically / randomly pick buckets to test: This is the approach used by many big techs' experimentation platforms. See, e.g. Figure 1 of [2] or [3]. In [2] they have both users and user clusters as experimentation unit, but the bucketing principle is the same. Users are randomly assigned into, say, 1,000 buckets based on some hash and have their bucket assignment recorded. The experimenter will then decide how many buckets (e.g. 100 for each of control/treatment) they need for their experiments, and either pick, or get randomly assigned by the experimentation platform, the buckets they need. This approach has an advantage whereas an experimenter can enforce some sort of bucket exclusion, whereas the experimenter would prevent users participating in a particular experiment to also join the one soon to start as the treatments may interact with each other. Theoretically, both approaches will lead to a randomised assignment between the control and treatment group. Of course, this assumes you have enough users and your bucketing implementation is correct, which is something that needs to be checked carefully. [1] I used the Random Integer Set Generator with the options: Generate 5 set(s) with 10 unique random integer(s) in each. 
Each integer should have a value between 0 and 9... ✔️ Use commas to separate the set members ⚫ Print the sets in the order they were generated". [2] B. Karrer et al., Network experimentation at scale, In: KDD'21. Available: https://arxiv.org/pdf/2012.08591.pdf [3] J. Rydberg, Spotify’s New Experimentation Platform (Part 2). Available: https://engineering.atspotify.com/2020/11/02/spotifys-new-experimentation-platform-part-2/
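As a small illustration of the salted-hash bucketing discussed above (a sketch; the function name, salt strings and bucket count are arbitrary):
import hashlib

def bucket(user_id: str, salt: str, n_buckets: int = 1000) -> int:
    """Deterministically map a user to a bucket; a different salt gives an independent split."""
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

print(bucket("user_42", salt="bucketing-v1"))   # stable: same user + salt -> same bucket
print(bucket("user_42", salt="bucketing-v1"))
print(bucket("user_42", salt="experiment-7"))   # new salt -> effectively a fresh randomisation
In the double-randomisation setup, you would hash once with a fixed salt to create persistent buckets, record the assignment, and then allocate whole buckets to control/treatment for each new test.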
H: How to apply two inputs and one output with LR and SVM Q1: how do I feed 2 inputs to LR and SVM? My dataset consists of three columns: sentence1, sentence2, and label (1 if sentence2 is a paraphrase of sentence1). I prepared my data and converted it to numeric features using TF-IDF. Now I would like to train a classifier, but all the tutorials I find use one input and one output, while I would like two inputs and one output. Could you help with an example? Picture of my data: AI: The simplest option, in order to represent the two sentences independently of each other, is to represent each of the two sentences with its own TF-IDF vector of features and concatenate the two vectors. In other words you obtain 2 * N features, where N is the size of the vocabulary. But at first sight this looks like the wrong approach for the problem that you're trying to solve: LR or SVM are unlikely to capture the high-level nature of paraphrasing, especially if fed with only basic vocabulary features like this. A slightly more advanced approach would be to provide the model with features which represent the relationship between the two sentences: length, words in common, readability measures, etc.
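A minimal sketch of the concatenation approach with scikit-learn (the column names sentence1/sentence2/label come from the question; the two example rows are made up):
import pandas as pd
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "sentence1": ["the cat sat on the mat", "he bought a new car"],
    "sentence2": ["a cat was sitting on the mat", "the weather is nice today"],
    "label": [1, 0],
})

# fit one vectorizer on both columns so both sentences share the same vocabulary
vectorizer = TfidfVectorizer()
vectorizer.fit(pd.concat([df["sentence1"], df["sentence2"]]))

# 2 * N features: the TF-IDF vector of sentence1 followed by that of sentence2
X = hstack([vectorizer.transform(df["sentence1"]), vectorizer.transform(df["sentence2"])])
clf = LogisticRegression().fit(X, df["label"])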
H: Do I have to remove features with pairwise correlation even if I am doing a regularized logistic regression? Normally we would remove features that have high pairwise correlation with another feature before performing regression. But is this step necessary if I am applying L2 regularized logistic regression (since the regularization algorithm would shrink the "irrelevant" feature coefficients to zero anyway)? AI: In general no, you don't have to remove them first, but note the distinction between the two penalties: L2 regularization shrinks the coefficients of correlated or irrelevant features towards zero, which stabilizes the fit, but it does not set them exactly to zero. It is L1 regularization that shrinks irrelevant feature coefficients all the way to zero, so it doesn't require a separate feature selection step. In fact L1 IS a commonly used feature selection technique, so by using it you are basically performing feature selection! With L2 you simply don't need to worry much about removing correlated features beforehand, but no coefficients will be dropped for you.
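A small illustrative sketch of that difference on synthetic data (not part of the original question):
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.05, size=500)   # almost a duplicate of x1
x3 = rng.normal(size=500)                    # irrelevant noise feature
X = np.column_stack([x1, x2, x3])
y = (x1 + rng.normal(scale=0.5, size=500) > 0).astype(int)

l2 = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
l1 = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, y)
print("L2 coefficients:", l2.coef_)   # shrunk, but usually all non-zero
print("L1 coefficients:", l1.coef_)   # some coefficients typically driven exactly to zero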
H: Confidence intervals for evaluation on test set I'm wondering what the "best practice" approach is for finding confidence intervals when evaluating the performance of a classifier on the test set. As far as I can see, there are two different ways of evaluating the uncertainty of a metric like, say, accuracy: Evaluate the accuracy using the formula interval = z * sqrt( (error * (1 - error)) / n), where n is the sample size, error is the classification error (i.e. 1-accuracy) and z is a number representing multiples of gaussian standard deviations. Split the training set into k folds and train k classifiers, leaving a different fold out for each. Then evaluate all of these on the test set and calculate the mean and variance. Intuitively, I feel like the latter would give me an estimate of how "sensitive" the performance is to changes in the data, whereas the former would allow me to compare two different models directly. I have to say I'm a bit confused... AI: You need to distinguish between uncertainty on the prediction and uncertainty on the class. For example, let's say that we are tossing a coin: I am 100% confident that the probability of getting "tails" is 50%. On the other hand, there may be a 90% probability that it will rain tomorrow, but the weatherman is not very certain of this. To get these definitions I recommend reading this paper: https://arxiv.org/abs/1910.09457 In recent years the tendency has been to use ensemble methods and extract some basic statistics to calculate such an interval.
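A minimal sketch of the first option (the normal-approximation interval from the question's formula), with toy predictions standing in for real test-set output:
import numpy as np

def error_confidence_interval(y_true, y_pred, z=1.96):
    """Normal-approximation interval for the classification error (z=1.96 gives ~95%)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    error = np.mean(y_true != y_pred)
    half_width = z * np.sqrt(error * (1 - error) / n)
    return error - half_width, error + half_width

# toy example: 1000 test samples, 120 of them misclassified
y_true = np.zeros(1000, dtype=int)
y_pred = np.concatenate([np.ones(120, dtype=int), np.zeros(880, dtype=int)])
print(error_confidence_interval(y_true, y_pred))   # roughly (0.10, 0.14)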
H: cbind doesn't attribute the name of the column I'm trying to create a new data frame that contains the row names and the first column of an existing data : i tried this #To take the rownames of my old data New_data <- as.data.frame(row.names(old_data)) #To add the first column of my old data to a new data New_data <- cbind(old_data[,1]) When i visualize my new data using View(New_Data) i don't see the name of my the first column just V1 example > old_data NAME AGE SEXE A AQ 22 M B RT 14 M C DS 26 F D YY 19 M E IO 32 F F PP 20 F New_data <- as.data.frame(row.names(old_data)) New_data A B C D E F #add the first column of my old data to a new data New_data <- cbind(old_data[,1]) >New_data V1 A 22 B 14 C 26 D 19 E 32 F 20 As you can see the name of the column in my new data is V1 , i want to bind the first column of my old data with the name of this column like this >New_data Name A 22 B 14 C 26 D 19 E 32 F 20 AI: The issue is not with cbind but rather with old_data[,1]. Since you are selecting only 1 column R will convert it into a vector, and a vector has no column name. Try cbind(old_data[,1,drop=F]).
H: Compare model accuracy when training with imbalanced and balanced data So I was recently doing a data science project which is a multi-class classification problem. The project can be found at https://www.kaggle.com/c/otto-group-product-classification-challenge. The dataset is an imbalanced dataset with 93 features and 9 possible outcomes (targets). Since we don't know what any of these features are or what kind of categories the targets represent, I am not sure if balancing the data before training the model makes sense. Therefore I just trained each of my test models twice, once with a balanced and once with an imbalanced dataset. In particular, this is what I did: Do a simple 80/20 split for training and test to create an imbalanced training and test set index <- createDataPartition(data$target, p=.8, list=FALSE,times=1) training <- data[index,] test <- data[-index,] Downsample the training split to create a downsampled training set and use the rest of the data for testing training.downsampled <- downSample(training[,-ncol(training)],y=training$target,yname="target") test.downsampled <- subset(data, !(id %in% training.downsampled$id)) So now to come to my main question. If I now train a model, for example a random forest, can I use the accuracies of both to compare whether the model delivers better accuracy when using balanced data? I am concerned since I test against more data for the balanced one. If I can't compare it like this, then what would be a suitable method to compare the two? AI: Accuracy is the worst metric you could use for an imbalanced dataset. If you choose accuracy as a metric when you have class imbalance, you will get very high accuracy. This is because the majority class has a higher frequency (more records) and hence the model will predict the majority class most of the time. The metric you choose depends on what kind of dataset you have. If your data has class imbalance, you can go for the F1 score, the AUC score, or the true positive/true negative rate. They will give a more realistic score than accuracy. Another point to remember is that if you want to balance your dataset, never use downsampling, as it results in data loss, which is a BIG NO NO. Always use oversampling. A word of caution though: some experts believe that undersampling or oversampling is not the way to go when dealing with imbalance, and that choosing the right metric is enough to deal with it. Other experts say that SMOTE is the way to go. It depends on what you think is right, although comparing models like you are doing is probably a safe bet. Other than that, you are correct in your procedure to compare the two models.
H: Elegant way to plot the L2 regularization path of logistic regression in python? Trying to plot the L2 regularization path of logistic regression with the following code (an example of regularization path can be found in page 65 of the ML textbook Elements of Statistical Learning https://web.stanford.edu/~hastie/Papers/ESLII.pdf). Have a feeling that I am doing it the dumb way - think there is a simpler and more elegant way to code it - suggestions much appreciated thanks. counter = 0 for c in np.arange(-10, 2, dtype=np.float): lr = LogisticRegression(C = 10**c, fit_intercept=True, solver = 'liblinear', penalty = 'l2', tol = 0.0001, n_jobs = -1, verbose = -1, random_state = 0 ) model=lr.fit(X_train_z, y_train) coeff_list=model.coef_.ravel() if counter == 0: coeff_table = pd.DataFrame(pd.Series(coeff_list,index=X_train.columns),columns=[10**c]) else: temp_table = pd.DataFrame(pd.Series(coeff_list,index=X_train.columns),columns=[10**c]) coeff_table = coeff_table.join(temp_table,how='left') counter += 1 plt.rcParams["figure.figsize"] = (20,10) coeff_table.transpose().iloc[:,:10].plot() plt.ylabel('weight coefficient') plt.xlabel('C') plt.legend(loc='right') plt.xscale('log') plt.show() AI: sklearn has such a functionality already for regression problems, in enet_path and lasso_path. There's an example notebook here. Those functions have some cython base to them, so are probably substantially faster than your version. One other improvement that you can include in your implementation without adding cython is to use "warm starts": nearby alphas should have similar coefficients. So try # This needs to be instantiated outside the loop so we don't start from scratch each time. lr = LogisticRegression(C = 1, # we'll override this in the loop warm_start=True, fit_intercept=True, solver = 'liblinear', penalty = 'l2', tol = 0.0001, n_jobs = -1, verbose = -1, random_state = 0 ) for c in np.arange(-10, 2, dtype=np.float): lr.set_params(C=10**c) model=lr.fit(X_train_z, y_train) ...
H: What are the most known ML-models that use complex numbers? (if there are any) Basically just the header. The question is out of curiosity as I haven't seen one yet. AI: As it turns out there are Neural Networks which are designed to work with complex numbers: A Survey of Complex-Valued Neural Networks Artificial neural networks (ANNs) based machine learning models and especially deep learning models have been widely applied in computer vision, signal processing, wireless communications, and many other domains, where complex numbers occur either naturally or by design. However, most of the current implementations of ANNs and machine learning frameworks are using real numbers rather than complex numbers. There are growing interests in building ANNs using complex numbers, and exploring the potential advantages of the so-called complex-valued neural networks (CVNNs) over their real-valued counterparts. In this paper, we discuss the recent development of CVNNs by performing a survey of the works on CVNNs in the literature. Specifically, a detailed review of various CVNNs in terms of activation function, learning and optimization, input and output representations, and their applications in tasks such as signal processing and computer vision are provided, followed by a discussion on some pertinent challenges and future research directions. There are also complex-valued SVMs: Complex and Hypercomplex-Valued Support Vector Machines: A Survey In recent years, the field of complex, hypercomplex-valued and geometric Support Vector Machines (SVM) has undergone immense progress due to the compatibility of complex and hypercomplex number representations with analytic signals, as well as the power of description that geometric entities provide to object descriptors. Thus, several interesting applications can be developed using these types of data and algorithms, such as signal processing, pattern recognition, classification of electromagnetic signals, light, sonic/ultrasonic and quantum waves, chaos in the complex domain, phase and phase-sensitive signal processing and nonlinear filtering, frequency, time-frequency and spatiotemporal domain processing, quantum computation, robotics, control, time series prediction, and visual servoing, among others. This paper presents and discusses the importance, recent progress, prospective applications, and future directions of complex, hypercomplex-valued and geometric Support Vector Machines.
H: How to convert a nested dictionary's keys in a pandas data frame I am new to Python and want to store a nested dictionary keys as rows in a pandas dataframe. Specifically, here is how the dictionary looks like: apicalls= [{'GetNativeSystemInfo': 1, 'DeviceIoControl': 2, 'RegCloseKey': 1, 'NtDuplicateObject': 1,'NtSetInformationFile': 5,{'NtDuplicateObject': 3, 'DeviceIoControl': 1, 'GetVolumePathNameW': 1, 'RegCloseKey': 1, 'NtQueryKey': 2,'NtQueryValueKey': 6}, {'LdrUnloadDll': 1, 'LdrGetDllHandle': 2, 'NtCreateSection': 1, 'NtOpenKey': 3, 'LdrGetProcedureAddress': 4, 'SetUnhandledExceptionFilter': 5}] The above are list of API calls extracted from 3 executable files during dynamic analysis. I have tried to generate the data frame using the following line of code, but it is not giving me what I want. api_frame = pd.DataFrame(apicalls) I want the data frame that looks like the one below. kindly help. AI: Since the data sample you provided does not follow the python syntax, I assumed that the data looks like this: apicalls = [ {'GetNativeSystemInfo': 1, 'DeviceIoControl': 2, 'RegCloseKey': 1, 'NtDuplicateObject': 1,'NtSetInformationFile': 5}, {'NtDuplicateObject': 3, 'DeviceIoControl': 1, 'GetVolumePathNameW': 1, 'RegCloseKey': 1, 'NtQueryKey': 2,'NtQueryValueKey': 6}, {'LdrUnloadDll': 1, 'LdrGetDllHandle': 2, 'NtCreateSection': 1, 'NtOpenKey': 3, 'LdrGetProcedureAddress': 4, 'SetUnhandledExceptionFilter': 5} ] You can simply loop over the list to get the keys for each dictionary using the .keys() method, and then pass that list to pandas.DataFrame: import pandas as pd pd.DataFrame([call.keys() for call in apicalls]) This results in the following dataframe: 0 1 2 3 4 5 GetNativeSystemInfo DeviceIoControl RegCloseKey NtDuplicateObject NtSetInformationFile NtDuplicateObject DeviceIoControl GetVolumePathNameW RegCloseKey NtQueryKey NtQueryValueKey LdrUnloadDll LdrGetDllHandle NtCreateSection NtOpenKey LdrGetProcedureAddress SetUnhandledExceptionFilter
H: Is my approach about the ML model correct? First of all, I am a newbie here and it is my first question on this platform, so I apologize for the mistakes about the format if there are any. In my thesis study, I am trying to identify the non-normal fuel consumption of an aircraft for a specific flight by looking at the commercial aviation parameters. To achieve this, I use two separate databases; one is the actual flight data (QAR data), while the other is high-fidelity simulations (Operational Flight Plans). My strategy is to train the feed-forward ML model (I use Pytorch) with simulations (OFP) and test them with QAR data. Below is the best model result with certain ML conditions. The above means, the trained model can predict an actual flight's fuel burn with less than %5 error for %99.3 of the flights. In the same manner, error<%3 --> %93.4 of flights, error<%2 --> %78.9 of flights. This is where my confusion begins. Let's say, %0.7 of the flights burned %5 fuel less or more. How could I be sure this is not caused by the training error? If I test the model with OFP parameters and look at the model's error flight-by-flight and identify the flights with training errors (i.e. %5 more or less fuel burn), and exclude them from the first non-normal identification process, would that work? In my opinion, this idea won't work since the training dataset will be the same as the test dataset and the model will overfit. Do you think the above approach is correct? Is there any other option that I can stick with to overcome the training errors? Or should I accept the training errors as they are because there is nothing to do about them in this case? AI: Disclaimer: please bear in mind that I'm no expert in this kind of application. How could I be sure this is not caused by the training error? You can be sure because the evaluation process is valid. Errors are expected in any ML process, what matters is to correctly estimate the expected level of error (performance evaluation). As far as I can tell, your current approach is correct in the sense that your evaluation setup seems valid: the test set is made of actual flight data. As long as the evaluation is reliable, the way the model is trained doesn't matter. In fact the model could even be a simple heuristic with no training data: probably it wouldn't perform as well, but what matters in terms of methodology is that the performance is properly estimated. I'm not 100% sure that I follow the process that you consider doing for cleaning up the training data. Importantly it should not rely on the actual flight data that you use as test set, since this would cause data leakage. As long as it doesn't, you can preprocess the dataset any way you want. But be careful that if you plan to attempt many different options for the training data and evaluate each of them, then you should use a validation set, different from the final test set (this process would be akin to parameter tuning). Btw you could consider using a small subset of the actual flight data as validation set during the training process (I assume that you currently use some of the simulated data right?). Again if you do this make sure to use a completely independent subset as test set.
H: ValueError: Layer model_4 expects 1 input(s), but it received 10 input tensors I've directory structure like this, for my dataset: |--train |--test |--valid In the train folder, these are pairs of images like xyz_sat and xyz_mask. So I've loaded them with Pillow, and converted them to NumPy array to feed to TensorFlow: train_sat = [np.array(Image.open(name),dtype="float32") for name in train_names_sat] train_mask = [np.array(Image.open(name).convert('RGB'),dtype="float32") for name in train_names_mask] Then normalized them and all other things. I'm trying to feed images to my model like this: history = model.fit(train_sat, train_mask, validation_split = 0.15, epochs=EPOCHS, batch_size = BATCH_SIZE, #callbacks = [callbacks] ) ​ But I'm getting this error: ValueError: Layer model_4 expects 1 input(s), but it received 10 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=(None, 1024, 3) dtype=float32>, <tf.Tensor 'IteratorGetNext:9' shape=(None, 1024, 3) dtype=float32>] ​ Please share how you debugged the error, that would be more helpful. I can understand that it's because I've 10 images in my dataset training set, for testing purposes, they are causing the error. ​ My UNet model looks like this: __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_5 (InputLayer) [(None, 1024, 1024, 0 __________________________________________________________________________________________________ conv2d_112 (Conv2D) (None, 1024, 1024, 6 1792 input_5[0][0] __________________________________________________________________________________________________ batch_normalization_124 (BatchN (None, 1024, 1024, 6 256 conv2d_112[0][0] __________________________________________________________________________________________________ conv2d_113 (Conv2D) (None, 1024, 1024, 6 36928 batch_normalization_124[0][0] __________________________________________________________________________________________________ batch_normalization_125 (BatchN (None, 1024, 1024, 6 256 conv2d_113[0][0] __________________________________________________________________________________________________ conv2d_114 (Conv2D) (None, 1024, 1024, 6 36928 batch_normalization_125[0][0] __________________________________________________________________________________________________ max_pooling2d_20 (MaxPooling2D) (None, 512, 512, 64) 0 conv2d_114[0][0] __________________________________________________________________________________________________ batch_normalization_126 (BatchN (None, 512, 512, 64) 256 max_pooling2d_20[0][0] __________________________________________________________________________________________________ conv2d_115 (Conv2D) (None, 512, 512, 64) 36928 batch_normalization_126[0][0] 
__________________________________________________________________________________________________ batch_normalization_127 (BatchN (None, 512, 512, 64) 256 conv2d_115[0][0] __________________________________________________________________________________________________ conv2d_116 (Conv2D) (None, 512, 512, 64) 36928 batch_normalization_127[0][0] __________________________________________________________________________________________________ batch_normalization_128 (BatchN (None, 512, 512, 64) 256 conv2d_116[0][0] __________________________________________________________________________________________________ conv2d_117 (Conv2D) (None, 512, 512, 64) 36928 batch_normalization_128[0][0] __________________________________________________________________________________________________ max_pooling2d_21 (MaxPooling2D) (None, 256, 256, 64) 0 conv2d_117[0][0] __________________________________________________________________________________________________ batch_normalization_129 (BatchN (None, 256, 256, 64) 256 max_pooling2d_21[0][0] __________________________________________________________________________________________________ conv2d_118 (Conv2D) (None, 256, 256, 64) 36928 batch_normalization_129[0][0] __________________________________________________________________________________________________ batch_normalization_130 (BatchN (None, 256, 256, 64) 256 conv2d_118[0][0] __________________________________________________________________________________________________ conv2d_119 (Conv2D) (None, 256, 256, 64) 36928 batch_normalization_130[0][0] __________________________________________________________________________________________________ batch_normalization_131 (BatchN (None, 256, 256, 64) 256 conv2d_119[0][0] __________________________________________________________________________________________________ conv2d_120 (Conv2D) (None, 256, 256, 64) 36928 batch_normalization_131[0][0] __________________________________________________________________________________________________ max_pooling2d_22 (MaxPooling2D) (None, 128, 128, 64) 0 conv2d_120[0][0] __________________________________________________________________________________________________ batch_normalization_132 (BatchN (None, 128, 128, 64) 256 max_pooling2d_22[0][0] __________________________________________________________________________________________________ conv2d_121 (Conv2D) (None, 128, 128, 64) 36928 batch_normalization_132[0][0] __________________________________________________________________________________________________ batch_normalization_133 (BatchN (None, 128, 128, 64) 256 conv2d_121[0][0] __________________________________________________________________________________________________ conv2d_122 (Conv2D) (None, 128, 128, 64) 36928 batch_normalization_133[0][0] __________________________________________________________________________________________________ batch_normalization_134 (BatchN (None, 128, 128, 64) 256 conv2d_122[0][0] __________________________________________________________________________________________________ conv2d_123 (Conv2D) (None, 128, 128, 64) 36928 batch_normalization_134[0][0] __________________________________________________________________________________________________ max_pooling2d_23 (MaxPooling2D) (None, 64, 64, 64) 0 conv2d_123[0][0] __________________________________________________________________________________________________ batch_normalization_135 (BatchN (None, 64, 64, 64) 256 max_pooling2d_23[0][0] 
__________________________________________________________________________________________________ conv2d_124 (Conv2D) (None, 64, 64, 64) 36928 batch_normalization_135[0][0] __________________________________________________________________________________________________ batch_normalization_136 (BatchN (None, 64, 64, 64) 256 conv2d_124[0][0] __________________________________________________________________________________________________ conv2d_125 (Conv2D) (None, 64, 64, 64) 36928 batch_normalization_136[0][0] __________________________________________________________________________________________________ batch_normalization_137 (BatchN (None, 64, 64, 64) 256 conv2d_125[0][0] __________________________________________________________________________________________________ conv2d_126 (Conv2D) (None, 64, 64, 64) 36928 batch_normalization_137[0][0] __________________________________________________________________________________________________ max_pooling2d_24 (MaxPooling2D) (None, 32, 32, 64) 0 conv2d_126[0][0] __________________________________________________________________________________________________ batch_normalization_138 (BatchN (None, 32, 32, 64) 256 max_pooling2d_24[0][0] __________________________________________________________________________________________________ conv2d_127 (Conv2D) (None, 32, 32, 64) 36928 batch_normalization_138[0][0] __________________________________________________________________________________________________ batch_normalization_139 (BatchN (None, 32, 32, 64) 256 conv2d_127[0][0] __________________________________________________________________________________________________ conv2d_128 (Conv2D) (None, 32, 32, 64) 36928 batch_normalization_139[0][0] __________________________________________________________________________________________________ batch_normalization_140 (BatchN (None, 32, 32, 64) 256 conv2d_128[0][0] __________________________________________________________________________________________________ conv2d_transpose_20 (Conv2DTran (None, 64, 64, 64) 36928 batch_normalization_140[0][0] __________________________________________________________________________________________________ concatenate_20 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_20[0][0] conv2d_125[0][0] __________________________________________________________________________________________________ batch_normalization_141 (BatchN (None, 64, 64, 128) 512 concatenate_20[0][0] __________________________________________________________________________________________________ conv2d_129 (Conv2D) (None, 64, 64, 96) 110688 batch_normalization_141[0][0] __________________________________________________________________________________________________ batch_normalization_142 (BatchN (None, 64, 64, 96) 384 conv2d_129[0][0] __________________________________________________________________________________________________ conv2d_130 (Conv2D) (None, 64, 64, 64) 55360 batch_normalization_142[0][0] __________________________________________________________________________________________________ batch_normalization_143 (BatchN (None, 64, 64, 64) 256 conv2d_130[0][0] __________________________________________________________________________________________________ conv2d_transpose_21 (Conv2DTran (None, 128, 128, 64) 36928 batch_normalization_143[0][0] __________________________________________________________________________________________________ concatenate_21 (Concatenate) (None, 128, 128, 128 0 conv2d_transpose_21[0][0] conv2d_122[0][0] 
__________________________________________________________________________________________________ batch_normalization_144 (BatchN (None, 128, 128, 128 512 concatenate_21[0][0] __________________________________________________________________________________________________ conv2d_131 (Conv2D) (None, 128, 128, 96) 110688 batch_normalization_144[0][0] __________________________________________________________________________________________________ batch_normalization_145 (BatchN (None, 128, 128, 96) 384 conv2d_131[0][0] __________________________________________________________________________________________________ conv2d_132 (Conv2D) (None, 128, 128, 64) 55360 batch_normalization_145[0][0] __________________________________________________________________________________________________ batch_normalization_146 (BatchN (None, 128, 128, 64) 256 conv2d_132[0][0] __________________________________________________________________________________________________ conv2d_transpose_22 (Conv2DTran (None, 256, 256, 64) 36928 batch_normalization_146[0][0] __________________________________________________________________________________________________ concatenate_22 (Concatenate) (None, 256, 256, 128 0 conv2d_transpose_22[0][0] conv2d_119[0][0] __________________________________________________________________________________________________ batch_normalization_147 (BatchN (None, 256, 256, 128 512 concatenate_22[0][0] __________________________________________________________________________________________________ conv2d_133 (Conv2D) (None, 256, 256, 96) 110688 batch_normalization_147[0][0] __________________________________________________________________________________________________ batch_normalization_148 (BatchN (None, 256, 256, 96) 384 conv2d_133[0][0] __________________________________________________________________________________________________ conv2d_134 (Conv2D) (None, 256, 256, 64) 55360 batch_normalization_148[0][0] __________________________________________________________________________________________________ batch_normalization_149 (BatchN (None, 256, 256, 64) 256 conv2d_134[0][0] __________________________________________________________________________________________________ conv2d_transpose_23 (Conv2DTran (None, 512, 512, 64) 36928 batch_normalization_149[0][0] __________________________________________________________________________________________________ concatenate_23 (Concatenate) (None, 512, 512, 128 0 conv2d_transpose_23[0][0] conv2d_116[0][0] __________________________________________________________________________________________________ batch_normalization_150 (BatchN (None, 512, 512, 128 512 concatenate_23[0][0] __________________________________________________________________________________________________ conv2d_135 (Conv2D) (None, 512, 512, 96) 110688 batch_normalization_150[0][0] __________________________________________________________________________________________________ batch_normalization_151 (BatchN (None, 512, 512, 96) 384 conv2d_135[0][0] __________________________________________________________________________________________________ conv2d_136 (Conv2D) (None, 512, 512, 64) 55360 batch_normalization_151[0][0] __________________________________________________________________________________________________ batch_normalization_152 (BatchN (None, 512, 512, 64) 256 conv2d_136[0][0] __________________________________________________________________________________________________ conv2d_transpose_24 (Conv2DTran (None, 1024, 1024, 6 36928 
batch_normalization_152[0][0] __________________________________________________________________________________________________ concatenate_24 (Concatenate) (None, 1024, 1024, 1 0 conv2d_transpose_24[0][0] conv2d_113[0][0] __________________________________________________________________________________________________ batch_normalization_153 (BatchN (None, 1024, 1024, 1 512 concatenate_24[0][0] __________________________________________________________________________________________________ conv2d_137 (Conv2D) (None, 1024, 1024, 9 110688 batch_normalization_153[0][0] __________________________________________________________________________________________________ batch_normalization_154 (BatchN (None, 1024, 1024, 9 384 conv2d_137[0][0] __________________________________________________________________________________________________ conv2d_138 (Conv2D) (None, 1024, 1024, 6 55360 batch_normalization_154[0][0] __________________________________________________________________________________________________ conv2d_139 (Conv2D) (None, 1024, 1024, 1 65 conv2d_138[0][0] ================================================================================================== Total params: 1,617,441 Trainable params: 1,612,513 Non-trainable params: 4,928 AI: This error is caused by the fact that you are passing a list of arrays with the image data to .fit() instead of a single array with the first dimension being the number of samples. Try using numpy.stack to convert the list of arrays to a single numpy array.
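A minimal sketch of the numpy.stack fix, reusing the list names from the question (the dummy arrays and the number of samples are just placeholders):
import numpy as np

# stand-ins for the lists of per-image arrays built with Pillow in the question
train_sat = [np.zeros((1024, 1024, 3), dtype="float32") for _ in range(2)]
train_mask = [np.zeros((1024, 1024, 3), dtype="float32") for _ in range(2)]

# stack along a new first axis so Keras receives one array of shape (n_samples, H, W, C)
x_train = np.stack(train_sat)    # (2, 1024, 1024, 3)
y_train = np.stack(train_mask)   # (2, 1024, 1024, 3)
print(x_train.shape, y_train.shape)
# then: history = model.fit(x_train, y_train, ...)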
H: How to create a confusion matrix for k-means with two features? I have the need to do a confusion matrix for data run through k-means with two features. I am aware that this is a clustering algorithm and not a classification algorithm but I have seen some articles and questions where it has been done. I am just to thick to decompose the answers and apply it to my situation. I have data which looks like this: Total Packets Total TCP 2 0 0 0 0 0 4 0 1 1 4 2 0 0 0 0 0 0 1 1 0 0 93 85 1234 1232 699 695 4 4 2 2 0 0 0 0 0 0 0 0 0 0 4 0 0 0 4 0 6 4 3 3 0 0 0 0 0 0 Thats the top of the data file with the anomalies/outliers being anything over 200 in the Total TCP column. Where the confusion starts is understanding what is meant in the answer in this link k-means question where the responder mentions k-means labels and truth labels in his answer about how to do a confusion matrix. I have provided a quote for context: "Assuming that you have some gold standard for the classification of your headlines into k groups (the truth), you could compare this to the KMeans clustering (the prediction). The only problem with this is that KMeans clustering is agnostic to your truth, meaning the cluster labels that it produces will not be matched to the labels of the gold standard groups. There is, however, a work-around for this, which is to match the kmeans labels to the truth labels based on the best possible match." Has anyone an idea of what the labels would be with my example? I have followed a tutorial in another link Outlier Detection with K-means and with a K of one it seemed to pick up the outliers as seen in this plot: The red circles are around the outliers. In terms of where I am I have the program to a point where I can get the outliers but I would like to do a confusion matrix on top of this. I think that has to to with the K-means labels and truth labels mentioned previously but I am a bit lost in how to proceed. Any help would be greatly appreciated and I hope there is enough information in the post. AI: The question doesn't mention it clearly but apparently the goal is to detect outliers, in this case defined as instances with "anything over 200 in the Total TCP column". So every instance can be labelled as outlier or not: class 0 (negative) if total TCP <200 class 1 (positive) if total >= 200 If you add a third column is_outlier which represents the true outlier status, you obtain an annotated dataset that could be used for binary classification. Now let's assume you want to cluster with k-means and obtain a confusion matrix. In this case you're using k-means for doing classification without supervision (no training with labelled instances). Let's say $k=2$ since you're actually doing binary classification, so k-means predicts two clusters with no particular meaning or order. Before evaluating against the true labels you would need a method to match the predicted clusters with the true classes. In this particular case it would make sense to take the largest predicted cluster as corresponding to class 0 (not an outlier) and the smallest as class 1 (outlier). Once this is done, you can count the number of instances for every pair (predicted outlier status, true outlier status). While this is perfectly doable, this approach is highly questionable: you have a deterministic method to find outliers with a simple test on the total TCP value, so why using ML in the first place? Testing the value directly is much more efficient and achieves 100% performance. 
Also here it's unclear why one would use clustering this way if the goal is actually classification.
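If you do want to go through with it anyway, a minimal sketch of the cluster-to-class matching and confusion matrix with scikit-learn (synthetic counts standing in for the packet data):
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# synthetic stand-in: mostly small TCP counts, a few large "outliers"
total_tcp = np.concatenate([rng.integers(0, 100, 95), rng.integers(500, 2000, 5)])
total_packets = total_tcp + rng.integers(0, 10, 100)
X = np.column_stack([total_packets, total_tcp])

y_true = (total_tcp >= 200).astype(int)            # 1 = outlier, per the 200 threshold

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
# match clusters to classes: the smaller cluster is taken to be the outlier class
outlier_cluster = np.argmin(np.bincount(labels))
y_pred = (labels == outlier_cluster).astype(int)

print(confusion_matrix(y_true, y_pred))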
H: How to incorporate static variables into ML I have to establish an ML-based model where I predict precipitation in complex terrain using multi-year daily observations from 50 stations. Besides a dozen continuous variables, the predictors include three variables that reflect topography: elevation, slope, and aspect. As these three variables do not change for a single station, I have doubts that the model will make use of them during training (I haven't yet started the analysis, I am still compiling the data frame). Are my concerns valid? I'm thinking about writing a function that will randomly alter these three static variables for each observation in the data frame by a small margin, e.g. +-2%. Would there be major caveats to such an approach? AI: You have 3 variables that refer to a particular station. If your training set contains only one station, then the best option is to drop them. If your training set contains more stations, then the variables can take distinct values, so leave them in. If your training set has one station and your test set has another station, drop them: your model won't be able to learn from them, and there is also a high chance of a dataset shift between training and test, in which case the model won't perform well.
H: ggplot2 for Cluster analysis (non-readable row names) I have made a cluster analysis and ended up with a dendrogram; however, the row names are not readable (marked with a red rectangle). May I ask if there is a way to adjust this? library("reshape2") library("purrr") library("dplyr") library("dendextend") dendro <- as.dendrogram(aggl.clust.c) dendro.col <- dendro %>% set("branches_k_color", k = 5, value = c("darkslategray", "darkslategray4", "darkslategray3", "gold", "gold2")) %>% set("branches_lwd", 0.6) %>% set("labels_colors", value = c("darkslategray")) %>% set("labels_cex", 0.5) ggd1 <- as.ggdend(dendro.col) ggplot(ggd1, theme = theme_minimal()) + labs(x = "Num. observations", y = "Height", title = "Dendrogram, k = 5") AI: Try adding theme() to your plot layout, so: library("reshape2") library("purrr") library("dplyr") library("dendextend") dendro <- as.dendrogram(aggl.clust.c) dendro.col <- dendro %>% set("branches_k_color", k = 5, value = c("darkslategray", "darkslategray4", "darkslategray3", "gold", "gold2")) %>% set("branches_lwd", 0.6) %>% set("labels_colors", value = c("darkslategray")) %>% set("labels_cex", 0.5) ggd1 <- as.ggdend(dendro.col) ggplot(ggd1, theme = theme_minimal()) + theme(text = element_text(size=20)) + labs(x = "Num. observations", y = "Height", title = "Dendrogram, k = 5")
H: ResNet output dimensions of initial convolution don't yield an integer I am trying to understand the ResNet dimensions, but got stuck at the first layer. We are passing a [224x224x3] image into 64 filters with kernel size 7x7 and stride=2. According to the ResNet source code from pytorch we are also using zero padding of size 3. The output size should be 112, but I get an output size of 112.5. To get an output size of exactly 112 we would need padding of 2.5. See: I do not understand how the output of 112 is created. Is the padding adjusted by pytorch automatically to match floor(output)? AI: As you can see in the pytorch documentation for torch.nn.Conv2d, the output size of a 2d convolutional layer is calculated with a formula very similar to the one you show: $$ H_{out} / W_{out} = \lfloor \frac{H_{in} + 2 * padding - dilation * (kernel\_size - 1) - 1}{stride} + 1 \rfloor $$ So after performing the calculation, pytorch indeed floors the output to get the largest integer smaller than or equal to the calculated value, since half pixels do not exist.
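A tiny sketch of that calculation for the first ResNet layer (input 224, kernel 7, stride 2, padding 3, dilation 1), just to make the floor explicit:
import math

def conv_out_size(h_in, kernel, stride, padding, dilation=1):
    return math.floor((h_in + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

print(conv_out_size(224, kernel=7, stride=2, padding=3))   # 112
print((224 + 2 * 3 - 7) / 2 + 1)                           # 112.5 before flooring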
H: Sklearn xgb.fit: TypeError: fit() missing 1 required positional argument: 'y' I am new to ML, and XGB is really confusing me. I understand that for Python XGB can be imported directly from the xgb library or via SKLearn. The methods for xgb from the direct xgb library also differ from SKLearn xgb. Eg xgb library uses xgb.train while SKLearn xgb uses fit. I tried the xgb from SKLearn xgb, but got the below error: TypeError: fit() missing 1 required positional argument: 'y' What is wrong with my below code? And is the direct xgb better than SKLearn xgb? Would appreciate some help. Thank you from xgboost.sklearn import XGBClassifier from sklearn.utils import class_weight from sklearn.model_selection import train_test_split X1=s_matrix2 # a sparse matrix Y1=df['Label'].values X_train, X_test, y_train, y_test = train_test_split(X1, Y1, random_state=0, test_size=0.2) classes_weights = class_weight.compute_sample_weight( class_weight='balanced', y=y_train ) xgb_model=XGBClassifier.fit(X_train, y_train, sample_weight=classes_weights) ``` AI: First the classifier needs to be created. Then it can be fit. sklearn API works in an object oriented interface here - create the object, call methods on the object. ... xgb_model = xgb.XGBClassifier(parameters that you want for this model, find options in the documentation [here](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier) ) xgb_model.fit(X_train, y_train, sample_weight=classes_weights)
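For completeness, a runnable sketch of the create-then-fit pattern, keeping the variable names from the question but substituting a small synthetic matrix for s_matrix2 and the labels:
import numpy as np
from xgboost import XGBClassifier
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split

X1 = np.random.rand(200, 20)                  # stand-in for the sparse TF-IDF matrix
Y1 = np.random.randint(0, 2, 200)             # stand-in for df['Label'].values

X_train, X_test, y_train, y_test = train_test_split(X1, Y1, random_state=0, test_size=0.2)
classes_weights = class_weight.compute_sample_weight(class_weight='balanced', y=y_train)

xgb_model = XGBClassifier(n_estimators=100, max_depth=3)   # create the object first
xgb_model.fit(X_train, y_train, sample_weight=classes_weights)
print(xgb_model.score(X_test, y_test))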
H: NameError: name 'model' is not defined Keras with f1_score I'm having a problem with my Keras model, in the .compile() I use accuracy, loss, precision, recall and AUC, but also I need f1_score, due to Keras doesn´t include f1_score, I tried to calculate by myself but I get this error NameError: name 'model' is not defined, here's my code: def residual_network_1d(input_shape): n_feature_maps = 64 input_layer = keras.layers.Input(input_shape) # BLOCK 1 conv_x = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=8, padding='same')(input_layer) ... # FINAL gap_layer = keras.layers.GlobalAveragePooling1D()(output_block_3) output_layer = keras.layers.Dense(27, activation='softmax')(gap_layer) model = keras.models.Model(inputs=input_layer, outputs=output_layer) return model residual_network_1d_model=residual_network_1d(input_shape = (5000,1)) def f1_score(y_test,y_pred): import numpy as np from sklearn.metrics import f1_score y_test = np.argmax(folds[0][1],axis=0) y_pred1 = model.predict(x=pc.generate_validation_data(ecg_filenames,y,folds[0][1])[0]) y_pred = np.argmax(y_pred1, axis=1) my_f1_score=f1_score(y_test, y_pred , average="macro") return my_f1_score residual_network_1d_model.compile(loss=tf.keras.losses.BinaryCrossentropy(), optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=[tf.keras.metrics.BinaryAccuracy( name='accuracy', dtype=None, threshold=0.5),tf.keras.metrics.Recall(name='Recall'),tf.keras.metrics.Precision(name='Precision'),f1_score, tf.keras.metrics.AUC( num_thresholds=200, curve="ROC", summation_method="interpolation", name="AUC", dtype=None, thresholds=None, multi_label=True, label_weights=None, )]) Why say model is not defined if I load my model previously? AI: In your f1_score function you are calling model.predict, but the function only takes the variables y_test and y_pred as input. Therefore the model variable you are referring to is not defined within the scope of this function.
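One simple workaround (a sketch, not the only option) is to drop the custom function from metrics=[...] and compute the macro F1 after training, when the fitted model and the validation data are both in scope; the arrays below are synthetic stand-ins for residual_network_1d_model.predict(...) and the one-hot labels:
import numpy as np
from sklearn.metrics import f1_score

y_prob = np.random.rand(100, 27)                      # replace with residual_network_1d_model.predict(x_val)
y_val = np.eye(27)[np.random.randint(0, 27, 100)]     # replace with your one-hot encoded validation labels

y_pred = np.argmax(y_prob, axis=1)
y_true = np.argmax(y_val, axis=1)
print(f1_score(y_true, y_pred, average="macro"))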
H: ValueError: Input 0 of layer conv2d is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape I'm trying to create an auto-encoder based model for segmentation, which looks something like this: https://i.stack.imgur.com/4F3Z0.png I haven't added a single step, nor missed one as far as I remember. Then how come, when I try to fit data to it, it throws me an error saying: ValueError: Input 0 of layer conv2d is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 128, 128, 3) My code looks something like this: img_input = Input(shape=input_shape) x = img_input # Encoder # Block 1 # Block 2 # Block 3 # Decoder # Deconv 1 # Deconv 2 x = Reshape((input_shape[0],input_shape[1], classes))(x) x = Activation("softmax")(x) model = Model(img_input, x) return model //Not posting all the code, I might hear I'm dumping code and asking help Fitted values, something like: Auto_Encoder.fit( np.array(x), # `x` is a python array np.array(y), # `y` is a python array ... Here is the model summary: Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 128, 128, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 64) 640 _________________________________________________________________ batch_normalization (BatchNo (None, 128, 128, 64) 256 _________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 64) 36928 _________________________________________________________________ batch_normalization_1 (Batch (None, 128, 128, 64) 256 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 64, 64, 64) 0 _________________________________________________________________ dropout (Dropout) (None, 64, 64, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 64, 64, 128) 73856 _________________________________________________________________ batch_normalization_2 (Batch (None, 64, 64, 128) 512 _________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 128) 147584 _________________________________________________________________ batch_normalization_3 (Batch (None, 64, 64, 128) 512 _________________________________________________________________ conv2d_4 (Conv2D) (None, 64, 64, 128) 147584 _________________________________________________________________ batch_normalization_4 (Batch (None, 64, 64, 128) 512 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 32, 32, 128) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 32, 32, 128) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 256) 295168 _________________________________________________________________ batch_normalization_5 (Batch (None, 32, 32, 256) 1024 _________________________________________________________________ conv2d_6 (Conv2D) (None, 32, 32, 256) 590080 _________________________________________________________________ batch_normalization_6 (Batch (None, 32, 32, 256) 1024 _________________________________________________________________ conv2d_7 (Conv2D) (None, 32, 32, 256) 590080 
_________________________________________________________________ batch_normalization_7 (Batch (None, 32, 32, 256) 1024 _________________________________________________________________ dropout_2 (Dropout) (None, 32, 32, 256) 0 _________________________________________________________________ up_sampling2d (UpSampling2D) (None, 64, 64, 256) 0 _________________________________________________________________ conv2d_8 (Conv2D) (None, 64, 64, 128) 295040 _________________________________________________________________ batch_normalization_8 (Batch (None, 64, 64, 128) 512 _________________________________________________________________ conv2d_9 (Conv2D) (None, 64, 64, 128) 147584 _________________________________________________________________ batch_normalization_9 (Batch (None, 64, 64, 128) 512 _________________________________________________________________ conv2d_10 (Conv2D) (None, 64, 64, 128) 147584 _________________________________________________________________ batch_normalization_10 (Batc (None, 64, 64, 128) 512 _________________________________________________________________ dropout_3 (Dropout) (None, 64, 64, 128) 0 _________________________________________________________________ up_sampling2d_1 (UpSampling2 (None, 128, 128, 128) 0 _________________________________________________________________ conv2d_11 (Conv2D) (None, 128, 128, 64) 73792 _________________________________________________________________ batch_normalization_11 (Batc (None, 128, 128, 64) 256 _________________________________________________________________ conv2d_12 (Conv2D) (None, 128, 128, 64) 36928 _________________________________________________________________ batch_normalization_12 (Batc (None, 128, 128, 64) 256 _________________________________________________________________ conv2d_13 (Conv2D) (None, 128, 128, 4) 2308 _________________________________________________________________ dropout_4 (Dropout) (None, 128, 128, 4) 0 _________________________________________________________________ reshape (Reshape) (None, 128, 128, 4) 0 _________________________________________________________________ activation (Activation) (None, 128, 128, 4) 0 ================================================================= Total params: 2,592,324 Trainable params: 2,588,740 Non-trainable params: 3,584 AI: The error message means that the input shape of Conv2D layer should be (128,128,1) which is consistent with your model summary. However, in the actual input the shape it finds is (128,128,3), hence the error. It would seem that you are using a 3 channel image when you have defined only one channel in the input shape.
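Two minimal ways to reconcile the shapes, depending on which is intended (a sketch; the array below is a stand-in for your RGB batch):
import numpy as np

# option 1: make the model accept 3-channel images when building it
# input_shape = (128, 128, 3)

# option 2: convert the images to a single channel before calling fit
x = np.random.rand(10, 128, 128, 3).astype("float32")   # stand-in for the RGB input batch
x_gray = x.mean(axis=-1, keepdims=True)                 # shape (10, 128, 128, 1)
print(x_gray.shape)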
H: Latent Dirichlet Allocation (LDA) importance of document generation and Gibbs Sampling I am having trouble finding the correlation between the two seemingly uncorrelated parts of LDA. What I understood from several videos is: There is a document generation "part", which is the construction of two dirichlet distrubutions, one describing the distribution of documents to topics, and one describing the distirbution of topics to words. There is the Gibbs sampling optimization "part", which takes in documents whose words have been assigned to topics, and at each iteration of the optimization the Gibbs sampling takes a step towards a somewhat monistic distribution of documents to topics and words to topics, i.e each document's word's belong more or less to the same topic, and each instance of a word in the corpus of documents belongs more or less to one topic. My question is why is the first part needed? I assume its importance may be that the initial random assignment of topics need be from a dirichlet distribution, but I am not sure if it is the case. Is there anything I am missing? AI: tldr: in LDA the "generation part" is used to describe the design of the model, not for practical applications. LDA is a Bayesian generative model. The distinction between generative and discriminative models is addressed for example in this question or if you want more detail in this one. As the name suggests, generative models can generate data. But in the case of LDA and other topic modelling models, it's never actually used for the task of generation because it would produce meaningless texts. In fact the generative aspect is more about the design of the model: the idea is to represent a model as a statistical generation process, i.e. by drawing some values out of a distribution represented by the parameters of the model. This is why these models are always described through a "generative story", which represents the design of the model. Note that this design involves a lot of assumptions and simplifications, this is why the practical ability to generate is not really useful in LDA. In practice LDA is almost exclusively used to obtain a probabilistic clustering of a set of documents based on topics, with different topics represented as different distributions over the vocabulary. This is done by estimating the parameters of the model from the corpus, and this process is usually done with Gibbs sampling. Note that Gibbs sampling is a general estimation method, it's not specific to LDA. In case this helps, I often explain the differences between the different tasks in computer science terms: For the generation task: the input is a model parameters, the output is a set of documents (potentially infinite). For the estimation task: the input is a (finite) set of documents, the output is the model parameters. For the "application" task: the input is a model parameters and some input document $d$ the output is the posterior probabilities $p(t|d)$ for every topic $t$.
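To make the estimation/application distinction concrete, here is a minimal sketch using gensim (the tiny tokenised corpus and the number of topics are purely illustrative; note that gensim's default estimator is online variational Bayes rather than Gibbs sampling, but its role is the same):
from gensim import corpora
from gensim.models import LdaModel
texts = [["goal", "match", "team"], ["election", "vote", "party"], ["team", "vote"]]  # toy tokenised documents
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]
# estimation task: documents in, model parameters out
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)
# "application" task: model parameters + a document in, posterior p(t|d) out
print(lda.get_document_topics(dictionary.doc2bow(["team", "match"])))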
H: Which algorithm is best for predicting diseases if symptoms are given? After Topic modelling through LDA, I get the following dataset as result. Document_No Dominant_Topic Topic_Perc_Contrib Keywords TextBookFindings Disease/Drugs 0 0 3.0 0.7625 hypotension, bradycardia, mydriasis, hypersali Hypotension,hypothermia,bradycardia NSAIDS Poisoning 1 1 5.0 0.6833 edema, cyanosis, cardiacarrest, lacrimation, Hyperventilation,respiratoryalkalosis,edema–br... NSAIDS Poisoning 2 2 0.0 0.8100 vomiting, nausea, diarrhea, abdominalpain, Nausea,vomiting,diarrhea,abdominalpain– NSAIDS Poisoning 3 3 0.0 0.2625 vomiting, nausea, diarrhea, abdominalpain, GIbleeding,pancreatitis,hepaticinjury NSAIDS Poisoning 4 4 1.0 0.4463 insomnia, drowsiness, irritability, neurotoxic Headache,dizziness,encephalopathy,irritability NSAIDS Poisoning ... ... ... ... ... ... ... 1446 1446 7.0 0.5250 weakness, muscle, coagulopathy, fasciculations... metabolicacidosis,Elevatedlactateconcentration... Neuroleptic malignant syndrome (NMS) 1447 1447 0.0 0.0500 vomiting, nausea, diarrhea, abdominalpain, pan... hematologictoxicity Neuroleptic malignant syndrome (NMS) 1448 1448 0.0 0.5250 vomiting, nausea, diarrhea, abdominalpain, pan... Pancreatitis NaN 1449 1449 0.0 0.0500 vomiting, nausea, diarrhea, abdominalpain, pan... Hypersensitivity Neuroleptic malignant syndrome (NMS) 1450 1450 0.0 0.0500 vomiting, nausea, diarrhea, abdominalpain, pan... sensoryperipheralneuropathy NaN I want to create a prediction system when a symptom is inputted, it shows the percentage matches of the Disease/Drugs. So I want to create a prediction between columns keywords and Disease/Drugs. Which will be the best algorithm and some suggestions how to move forward with this? AI: The first challenge you will face is transforming the textual labels of symptoms into numeric features that can be understood by a model. The simplest techniques are ordinal encoding, where each word is assigned to a number between 1 and N and one-hot encoding (OHE), where each word becomes an unique N-length vector, with all elements == 0 except for one. A good starting point for your project would be to one-hot encode each symptom, and for each entry in your dataset, sum the one-hot encoded symptoms to obtain a vector that describes all symptoms. This vector would be the input to your model. For example, if you had the following one-hot encodings: "vomiting" -> [1, 0, 0] "nausea" -> [0, 1, 0] "irritability" -> [0, 0, 1] Then an instance where there is both vomiting and nausea would be encoded as [1, 1, 0]. The drawback of the simple techniques is that the word encodings will not be very semantically meaningful. Notice that the distance between "vomiting" and "nausea" is the same as the distance between "vomiting" and "irritibility", even though vomiting and nausea are probably semantically closer in this context. So if the simple techniques don't work well enough for you, then you can try word embeddings. Models such as Word2Vec and GloVe transform each word into a vector where distance in the vector space is meaningful. So "nausea" and "vomiting" will be close to each other, but "vomiting" and "irritibility" will be farther away. Nowadays there are even more powerful embeddings trained on huge corpura of text. Examples include BERT and GPT-2. But I think this would be major overkill for your use case. The next challenge is selecting a model that can map the encoded words to the disease. This is a multi-class classification problem, so that rules out binary classifiers. 
If there are a lot of different symptoms then your input vector is going to be high-dimensional, so it would be nice to have a model with built-in feature selection or dimensionality reduction. Starting out, I would recommend an ensemble of decision trees like Random Forest or XGBoost. These models are easy to train and generally perform well on this type of problem. Also, if you're using scikit-learn, you can easily get probabilities for each class with the predict_proba() method. If the tree-based models don't work well, you could try other simple approaches like kNN or Naive Bayes. If you need more complexity, neural networks and deep learning are always on the table, but if your dataset has only 1450 examples, that's probably not enough to train most DNNs. So to summarize, you need to encode the symptoms numerically and a good first try would be one-hot encoding. You also need to pick a classifier that works well for this type of problem and can output probabilities rather than a single prediction. I think a good start would be the tree ensembles from scikit-learn. If the simple options don't work well enough, you can step up complexity on either the encoding side or the modeling side.
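As a minimal sketch of that first attempt (assuming the Keywords column has already been split into Python lists of symptoms and Disease/Drugs is the label; the toy data below is made up):
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.ensemble import RandomForestClassifier
symptom_lists = [["vomiting", "nausea"], ["insomnia", "drowsiness"], ["vomiting", "diarrhea"]]
diseases = ["NSAIDS Poisoning", "NSAIDS Poisoning", "Neuroleptic malignant syndrome (NMS)"]
mlb = MultiLabelBinarizer()                     # multi-hot encode each list of symptoms
X = mlb.fit_transform(symptom_lists)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, diseases)
new_case = mlb.transform([["vomiting", "nausea"]])
for disease, prob in zip(clf.classes_, clf.predict_proba(new_case)[0]):
    print(f"{disease}: {prob:.0%}")             # percentage match per disease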
H: Obtain precision at a certain probability value With scikit-learn, one is able to compute the precision values as well the predicted probability output. To compute the precision values, the sklearn precision/recall function takes the true target values as well as the predicted target probability (can be target scores or non-thresholded measure of decisions) as an input, however the computed precision array does not have the same length as the the given predicted probability (precision length = n_thresholds + 1). Is it somehow possible to compute the precision at a given probability output? AI: For everybody else, having a similar problem: I was able to solve this issue by computing the precision as follows: import numpy as np # test data y_true = np.array([0,0,0,0,1,1,1]) predicted_prob = np.array([0.1, 0.2, 0.4, 0.8, 0.9, 0.5, 0.3]) prob_threshold = 0.7 # your arbitrary cut n_passed = y_true[predicted_prob > prob_threshold].shape[0] n_passed_true = y_true[predicted_prob > prob_threshold].sum() precision = n_passed_true / n_passed
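If scikit-learn is already in use, the same number can also be obtained by thresholding the probabilities yourself and calling precision_score (a one-line sketch on the toy data above):
from sklearn.metrics import precision_score
precision = precision_score(y_true, predicted_prob > prob_threshold)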
H: How can we make forecasts from stationary data
I'm confused about the concept of stationarity. Most definitions require the mean and variance to be constant 'over any interval'. This statement confuses me: if any interval should have the same mean and variance, then I'll select a time strip as narrow as possible, say 1 day where the graph is on a high and another 1 day where the graph is on a low, and the means are obviously different. Say I take means over the green and blue bounds; they are going to be different, so how is this a stationary time series? Moreover, if trends and seasonality are not supposed to be present in stationary time series data, then what do models that require stationary data predict from? Trends and seasonality are 'patterns' in the data; if they are not there, then what is the basis of prediction, and in that case how is stationary time series data of any use?
AI: Stationarity refers to the stochastic process generating the time series data, not necessarily the actual sampled data values themselves. A strictly stationary time series has constant mean and constant variance for the stochastic process of data generation; there are many other definitions of stationarity. In your graph, the data closely resembles having been generated from a stochastic process with constant mean $\mu=0$ and constant standard deviation $\sigma<5$, so it's probably a stationary time series.
The interval is the duration of a single sweep of the time series generation process, which is one instance of time series data. It is not a problem to limit the interval to a smaller region. The problem is when you don't know the original generation process and have to deduce it from one or more generated time series. If you had only looked at two data point values, it would be almost impossible to deduce stationarity of the process with any certainty. With a longer interval, and possibly multiple generations of the time series in the same interval (impossible for real-time time series since you can't go back in time), one can deduce stationarity with more confidence.
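In practice, stationarity is usually judged from a sufficiently long sample with a formal test rather than by eye. A minimal sketch, assuming statsmodels is installed and series is a 1-D array of observations:
from statsmodels.tsa.stattools import adfuller
result = adfuller(series)      # series: the observed values of the time series
print("ADF statistic:", result[0])
print("p-value:", result[1])   # a small p-value rejects the unit-root (non-stationary) hypothesis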
H: Is it possible to combine two confusion matrices? Assume I have two different algorithms that tests whether a given image contains a goat or not. I apply these two algorithms to two different datasets and obtain two confusion matrices. Now I want to somehow combine these two algorithms into a third one as follows: Given an image, I apply both algorithms and claim the image contains a goat if both algorithms guesses so. If even one of them guesses the image has no goat, I return NO. Is it possible to combine the original two confusion matrices into a third one in a meaningful way? Note that if the original two algorithms run on the same dataset, I could just combine the result to get the confusion matrix for the third one. (I guess using Cohen's kappa or Scott's pi?) However, this is not the case. One way I could think of is as follows: Say the first dataset contains 10 images and the second one contains 20 images. I could select a random 10 images from the second dataset, and make the assumption that the first dataset actually equals this random 10 images from the second dataset. Then I can combine the results. Would that be a meaningful test? AI: No, this cannot work: even if the two confusion matrices were obtained from the same dataset there is no way to check the condition "both algorithms predict positive on the same image" from only the confusion matrices. Example: take for instance an instance which is TP for A: it could be either a TP or a FN for B. Same thing for a FP for A: it can be either a FP for B or a TN. And so on, basically there's no way to deduce the number of TP or any other category for the meta-algorithm in this way. And this is assuming the same dataset in the first place. So the only way to achieve this would be: for the two algorithms to have been applied on the same dataset, and to have the actual predictions (not only the confusion matrices) with the id of the image, so that the meta-model predictions can be obtained by a logical AND between the positive predictions of the two algorithms.
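If you do have the aligned per-image predictions of both algorithms on the same dataset, building the meta-algorithm's confusion matrix is then a one-liner; a sketch with scikit-learn (variable names are illustrative):
import numpy as np
from sklearn.metrics import confusion_matrix
# y_true, pred_a, pred_b: 0/1 arrays aligned on the same image ids
pred_combined = np.logical_and(pred_a, pred_b).astype(int)   # goat only if both algorithms say goat
print(confusion_matrix(y_true, pred_combined))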
H: How to determine the exact number of nodes of the fully-connected-layer after Convolutional Layers?
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=2, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=2, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc1 = nn.Linear(9*9*32, 200)
        self.fc2 = nn.Linear(200, 50)
        self.output = nn.Linear(50, 10)
RuntimeError: size mismatch, m1: [20 x 2048], m2: [2592 x 200] at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/TH/generic/THTensorMath.c:2033
I applied the formula W' = ((W-F+2P)/S)+1 and I ended up with 9 pixels per side instead of 8:
W' = ((W-F+2P)/S)+1 = ((32-2+2)/1)+1 = 31
Max Pool: W' = 31/2 = 15.5 -> 16
W'' = ((W-F+2P)/S)+1 = ((16-2+2)/1)+1 = 17
Max Pool: W' = 17/2 = 8.5 -> 9
9*9*32 = 2592
8*8*32 = 2048
I cannot understand why the numbers do not match.
AI: Your calculation of the output size is incorrect: in your calculation of the output size of the first max pooling layer you are rounding the result up to 16. However, as the pytorch documentation shows, the result is floored, which in this case would give an output size of 15. Continuing the calculation from there leaves you with a size of 8.
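An easy way to avoid the manual arithmetic entirely is to push a dummy tensor through the convolutional part and read off the shape; a short sketch, assuming 32x32 RGB inputs as the formula above implies:
import torch
import torch.nn as nn
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=2, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, kernel_size=2, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2))
dummy = torch.zeros(1, 3, 32, 32)
print(features(dummy).shape)   # torch.Size([1, 32, 8, 8]) -> the Linear layer needs 8*8*32 = 2048 inputs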
H: val_sparse_categorical_accuracy
I know the metric sparse_categorical_accuracy.
Fit model on training data
Epoch 1/2
782/782 [==============================] - 1s 1ms/step - loss: 0.3485 - sparse_categorical_accuracy: 0.9011 - val_loss: 0.1956 - val_sparse_categorical_accuracy: 0.9438
Epoch 2/2
782/782 [==============================] - 1s 1ms/step - loss: 0.1653 - sparse_categorical_accuracy: 0.9514 - val_loss: 0.1340 - val_sparse_categorical_accuracy: 0.9616
But what is the difference between sparse_categorical_accuracy and val_sparse_categorical_accuracy? What does it mean if during training sparse_categorical_accuracy is increasing but val_sparse_categorical_accuracy seems to be stuck?
AI: The difference is simply that the first one is the value calculated on your training data, whereas the metric prefixed with 'val' is the value calculated on your validation data (whatever you pass through validation_data or validation_split in fit). If the metric on your validation data is staying the same or decreasing while it is increasing on your training data, you are overfitting your model on your training data: the model is fitting noise present in the training data, which causes it to perform worse on out-of-sample data.
H: How to detect anomalies in each feature - time series I have a dataset with 5 features corresponding to 5 sensors that measure each three seconds the state of an accelerator. It is structured as well: Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | Label 1.5 1.1 0.8 1.2 1.2 0 1.2 1.4 1.4 1.4 1.1 0 1.2 1.1 1.2 1.3 1.5 0 The label indicates if the time series is anomaly(=1) or not(=0). I have an anomaly detection task, and the frameworks I've chosen (1, 2) give me as output an array with length 3 where I have the labels predicted: (0, 1, 0). I usually worked with anomaly detection frameworks which gave me a threshold and I could have easily marked the values above it as anomalies. In this specific case, with this array of length 3, is it right to assume that I could rewrite the following dataset as this? (True = Anomaly, False = normal) Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | False False False False False True True True True True False False False False False So, instead mark one value at time, it directly mark all the time series as anomaly? AI: I believe the assumption you've made is incorrect (the whole row being anomalous or not). To explain this thoroughly, you would need to know which algorithm you're using in order to detect whether or not the final label is 0 or 1. For anomaly detection, you can approach the problem as a Supervised (pretty much classification problem), Unsupervised or Semi-Supervised. Assuming from your data, you've chosen the unsupervised approach. Unsupervised Anomaly Detection An unsupervised anomaly detection has a plethora of algorithm subsets; Distance Based, Statistical, Classification, Angle Based, DBscan, Neural Networks and more. Under those subsets lie algorithms that can help detect anomalous values such as unique NN architectures etc. Some algorithms (most of the ones I've used) have some form of dimensionality reduction such as PCA. Due to that fact, its a lot more difficult to grasp whether a specific column (e.g Sensor 1 is anomalous on its own) A better way to wrap your head around how/why a datapoint is tagged anomalous or not would be to plot a t-SNE graph. If you're interested in me editing my answer to create an anomaly detection model that tags data points as anomaly or not (along with the anomaly score for you to be able to set your personal threshold) and plotting a t-SNE graph, let me know.
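As a sketch of the t-SNE visualisation mentioned above (assuming X holds the sensor readings and labels the 0/1 anomaly flags; colours and marker size are arbitrary choices):
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)   # X: (n_samples, 5) sensor matrix
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="coolwarm", s=10)
plt.title("t-SNE of sensor readings, coloured by anomaly label")
plt.show()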
H: Creating a custom layer in tensorflow I'm trying to create a layer in TensorFlow, which works something like this: And my implementation looks something, like this: class BinaryLayer(Layer): def __init__(self): super(BinaryLayer, self).__init__() def build(self, input_shape): w_init = 0.5 self.w = tf.Variable(name="kernel", initial_value=w_init(dtype='float32'),trainable=True) def call(self, inputs): return tf.math.greater(inputs, self.w) But it gives me an error saying 'float' object is not callable And I also think there will be another problem in the future, which is, it will return boolean values, such as: [[TFT] [TTF] [FFT]], but I want something like this: [[101] [110] [001]] AI: Resolved, with: self.w = tf.Variable(name="kernel", initial_value=w_init,trainable=True) ... tf.cast(out_tensor_b, tf.float32)
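Putting both fixes together, a hedged sketch of the full layer could look like this (the original error came from calling a plain float as if it were an initializer, and the boolean output of tf.math.greater needs a cast before later float layers can use it):
import tensorflow as tf
from tensorflow.keras.layers import Layer

class BinaryLayer(Layer):
    def build(self, input_shape):
        # 0.5 is just a float, so pass it directly as initial_value instead of calling it
        self.w = tf.Variable(name="kernel", initial_value=0.5, trainable=True)
    def call(self, inputs):
        # tf.math.greater returns booleans; cast them to 0.0 / 1.0 floats
        return tf.cast(tf.math.greater(inputs, self.w), tf.float32)
Note that a hard threshold has zero gradient almost everywhere, so the weight will not actually be updated by backpropagation; a steep sigmoid or a straight-through estimator is the usual workaround if the threshold itself needs to be learned.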
H: Optimizers, loss functions and weights: when do they matter? I'm training an FCN in TF/Keras with sigmoid focal loss (from TF addons) and saving weights in checkpoints. I will need the inference to be done on another computer that, for the moment, does not have TF addons installed. Because of the custom layer, I can't seem to save the whole serialised model and need to create the model architecture and load the weights. Am I correct in thinking that once you have the weights of a model, all you need is the architecture to re-create that model via load_weights? In other words, once trained, the loss function and optimizer no longer play any meaningfull roles? So I could build my architecture, and add another optimiser and a more conventional loss (eg categorical x-entropy) and once the weights are loaded, inference will give the same results as if I had a sigmoid focal loss? AI: Yes. The optimizer and loss the are not part of serving inference. Once you finish training, tensorflow will save the entire graph (i.e. architecture + weights). Then, you just need to load the graph (with its weights) and provide the input for serving i.e. the feature vectors. Once the graph is loaded it is just a function f(x) where x is the feature vector and f is the function of the graph. There is no use for the loss at this stage as the optimization process is over. There are several ways you can provide tensorflow graph with features. One common option is with GRPC, where you feed the model with features that are organized as google protobuf structure. You can't change loss or optimizer for the inference part, as they are only relevant to the training and optimization.
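As a sketch (the builder function, checkpoint path and x_new are placeholders for your own): rebuild the architecture, load the weights, and predict; in Keras no compile step, loss or optimizer is needed just to call predict.
model = build_fcn_architecture()              # placeholder: your own function that recreates the layers
model.load_weights("checkpoints/fcn_best")    # illustrative checkpoint path
preds = model.predict(x_new)                  # works on an uncompiled model
# compile(loss=..., optimizer=...) is only required again if you want to evaluate() or resume training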
H: ValueError when trying to split DataFrame into train/test I am getting ValueError: too many values to unpack (expected 2) from this code: (x_train, y_train), (x_test, y_test) = data print(x_train.shape) print(y_train.shape) print(y_train[:3]) I checked similar questions in this error but couldn't solve it. AI: You are trying to unpack the data variable into two separate pieces, each of which should contain another two outputs (an x and y) variable. However, data is simply a single output, which is a pandas dataframe for which unpacking doesn't work. Based on the code you provided it seems you are trying to split your data into a training and test dataset. This does not work this way if you have the data stored as a single dataframe. You will have to split the manually yourself into a feature array and an array of values you are trying to predict, which you can then split into a training and test dataset using the train_test_split function from scikit-learn.
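A minimal sketch of that manual split, assuming data is your DataFrame and 'target' is the name of the column you want to predict (substitute your own):
from sklearn.model_selection import train_test_split
X = data.drop(columns="target")   # feature columns
y = data["target"]                # values to predict
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(x_train.shape)
print(y_train.shape)
print(y_train[:3])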
H: test data is not a good representation of train data I have predefined train and test sets. On generating some statistics like value_counts and checking the unique values, I feel that there is a 'lot' of difference between the distributions of the variables. What should be done with this? Suppose if I want to delete a column from the train_set for any reason like near-zero variance, should I repeat the same for the test_set (even if there is no such problem in the test_set's frequency tables? I ran the following code for dataset in both_datasets: # both_datasets contains train_set and test_set print(dataset.nunique()) print('\n') And this is the output (I compiled it for a better view and highlighted some extreme cases) You might observe that for the column specific_code_lesion, the test_set misses an entire category! Then in order to see how many unique values my columns contain, I ran the following code for dataset in both_datasets: print('-'*120) for col in dataset.columns: unique = len(dataset[col].unique()) percentage = float(unique)/ dataset.shape[0]*100 print(percentage, col) And here is the output: So there are clear differences between the ratios of the percentage presence of unique values. The question is should I Avoid taking any insights from test_data. Do changes WRT the insights taken from the train_set only. However, every change I make should be replicated in test_set as well Use test_data too for insights and do preprocess accordingly. Do something to change the test_data to make it more balanced and representative. Somethings else :S AI: Here's is my attempt to answer these questions: Avoid taking any insights from test_data. Do changes WRT the insights taken from the train_set only. However, every change I make should be replicated in test_set as well: The second part is mostly true. Only in some rare cases where you may be applying class balancing techniques to the train set which should not be applied to the test/evaluation set. Test set should represent the real world data as much possible, while, train set should do the same, some tweaks to this data set may be applied independently to make it easy/possible to train with the chosen ML algorithm. However, any scaling/normalization/transformation parameters must be replicated exactly to the test set as well. Use test_data too for insights and do preprocess accordingly. There should be no harm in conducting an EDA on both train and test datasets and using the combined knowledge for taking feature engineering decisions. Do something to change the test_data to make it more balanced and representative. NO. As noted in the #1 above, the test data should represent the real world data as much as possible. How else would you rely on the test set's evaluation metrics for the unseen real world data? You can read some more about it here Should I balance the test set when i have highly unbalanced data? Somethings else :S You should not be worried if a category/class of a certain feature is missing from the test set. However, the reverse is not true: there must not be any new category/class in the test or real world data set which was absent from the train set - the preprocessing routine and model will not know how to handle it. Also, if there are a very low number of unique values or near zero variance in a feature, you should consider dropping it (from both train and test sets). In the end, this should reflect in low feature importance and highlighted by any of the feature selection procedures like this Feature selection
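Point 1 in practice, as a short scikit-learn sketch: fit any preprocessing on the train set only and reuse the fitted parameters on the test set.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # learn mean/std from the training set only
X_test_scaled = scaler.transform(X_test)         # apply exactly those parameters to the test set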
H: Understanding the likelihood function The likelihood function is defined as --> P(Data|Parameter) - This means, "The probability that the parameter would generate the observed data". Here, data refers to the independent variables. This makes no sense to me because we generate parameters from the data, not the other way round. Data remains constant. Can somebody explain clearly what P(Data|Parameter) exactly is? AI: $P(data|parameter)$ is not used in the sense of generating new data, but rather in the sense of how probable is that this data have been already generated by such parameters.
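A tiny numeric sketch may help: hold the observed data fixed and see how its probability changes as the parameter varies; that function of the parameter is the likelihood.
import numpy as np
data = np.array([1, 1, 0, 1])                 # fixed observations from a coin (Bernoulli)
for theta in [0.25, 0.5, 0.75]:               # candidate parameter values
    likelihood = np.prod(theta**data * (1 - theta)**(1 - data))   # P(data | theta)
    print(f"theta={theta}: likelihood={likelihood:.4f}")
# theta=0.75 scores highest, i.e. it is the parameter most likely to have generated this data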
H: How to save the history epochs and plots in files from keras models I have a question, that maybe is very simple, but I can't do it: I have 3 neural networks which are trained with 100 epochs, and I need to save all the history's train displayed at the end in a .txt file (i.e: time: 00s 0ms/step loss:... accuracy:... recall:... etc.), maybe it's easy, but also I need to plot each metric and its val_metric from each epoch and save that plots too, as images I guess (I know that maybe is nonsense to plot at epoch 1,2... but my professor ask us to do that). How can I do that? And, it's possible to save each line of the epoch and under it the correspondig plots in a file? I use Keras to build the model. This is the .compile() function that I use: import tensorflow_addons as tfa residual_network_1d_model.compile(loss=tf.keras.losses.BinaryCrossentropy(), optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=[tf.keras.metrics.BinaryAccuracy( name='accuracy', dtype=None, threshold=0.5),tf.keras.metrics.Recall(name='Recall'),tf.keras.metrics.Precision(name='Precision'),tfa.metrics.F1Score(num_classes=27,average='macro'), tf.keras.metrics.AUC( num_thresholds=200, curve="ROC", summation_method="interpolation", name="AUC", dtype=None, thresholds=None, multi_label=True, label_weights=None, )]) Thanks for your answers. AI: For the data that is printed to the terminal while training you should be able to use the CSVLogger callback which will save the output to a plain text file. For plotting the metrics you can use the metrics stored in the History object and plot them using a plotting library such as matplotlib and save them using the library specific function for saving the plot (matplotlib.pyplot.savefig for matplotlib).
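A hedged sketch of both pieces (file names, metric keys and the fit arguments are placeholders for your own): log the per-epoch numbers with CSVLogger, then plot each metric from the History object and save the figures.
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import CSVLogger
history = residual_network_1d_model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=100,
    callbacks=[CSVLogger("training_log.txt")])   # writes loss and every metric of each epoch to a text file
for metric in ["loss", "accuracy", "Recall"]:    # any key present in history.history
    plt.figure()
    plt.plot(history.history[metric], label=metric)
    plt.plot(history.history["val_" + metric], label="val_" + metric)
    plt.legend()
    plt.savefig(metric + ".png")                 # one image per metric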
H: Different DataFrames need to compare and add new value
I've two DataFrames.
DF1 = Contains (~30 columns and 15000 entries); relevant are "TYP" (string) and "VERSION" (string)
DF2 = Contains "SOFTWARE_NAME" (string), "VERSION" (string) and "EOS" (date)
Now I'd like to add to DF1 the additional columns "EOS_DATE" (date) and "EOS" (bool):
if DF1['TYP'] == DF2['SOFTWARE_NAME'] & DF1['VERSION'] == DF2['VERSION'] then DF1['EOS_DATE'] = DF2['EOS']
DF1['EOS'] = if DF2['EOS'] < now() then True else False
I've tried with np, pd.where(~) but mainly run into this issue:
ValueError: Can only compare identically-labeled Series objects
AI: First, rename the columns of DF2 so there are no name clashes (the goal is to end up with only one dataframe):
DF2 = DF2.rename(columns={"EOS": "DF2_EOS", "VERSION": "DF2_VERSION"})
Then you want to apply a change only to the rows fulfilling your conditions. The best way to do that is to merge your dataframes, keeping all your DF1 rows (i.e. a left join):
DF1 = DF1.merge(DF2, how='left', left_on=['TYP', 'VERSION'], right_on=['SOFTWARE_NAME', 'DF2_VERSION'])
Finally make your changes:
DF1.loc[DF1['DF2_EOS'] < pandas.to_datetime('today').normalize(), 'EOS'] = True
DF1.loc[DF1['DF2_EOS'] >= pandas.to_datetime('today').normalize(), 'EOS'] = False
DF1 = DF1.rename(columns={"DF2_EOS": "EOS_DATE"})
H: Should I apply Softmax before calculating metrics Precision or similar? I am using PyTorch Lightning (there is no tag for this and I don't have enough reputation to create one) and am facing a multi classification problem. My loss function is torch.nn.CrossEntropyLoss which applies softmax internally. So, as I don't have to take care of this, my model prediction output is not a probability vector. Now, in terms of code, a step looks like this: def train_step(self, batch): datapoints, labels = batch y_out = self(datapoints) loss = torch.nn.CrossEntropyLoss()(y_out, labels) metric_output = calculate_some_metric(y_out, labels) My question is now, if there is any need to apply softmax manually before calculating the metric. Surely that will probably depend on the metric itself if this is necessary or not... but are there common metrics where this makes a difference? AI: yes, you should apply softmax or sigmoid (for the binary case). y_out is what usually called logits. There are metrics like AUC which requires the probability as an input and will not work well with y_out.
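A short sketch of how that could look inside the step, assuming calculate_some_metric expects probabilities (e.g. an AUC):
probs = torch.softmax(y_out, dim=1)     # logits -> class probabilities (use torch.sigmoid for the binary case)
metric_output = calculate_some_metric(probs, labels)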
H: Methods for finding characteristic words for a group of documents in comparison to another group of documents? I'm working on a problem of anomaly detection, where at the end of the anomaly detection I will have a group of documents consisting of a title of each object that was flagged as anomalous. At the same time I have another group of documents which are the documents/texts of a title of each object that was not flagged as anomalous. anomalous_titles =[[Product:A - sub_group:X1 - pod: P1 - function: M1], [Product:B...],..] not_anomalous_titles =[[Product:R - type:TX - producer: XX], [Product:B...],..] What I would like to do here is to understand if there are any words or patterns that is shared among the anomalous documents which are not common in the group of non anomalous documents. What method would be good to apply in this scenario? I know about TF-IDF and Topic Modelling, but I don't know if it makes sense for this use case? Appreciate any input! AI: TF-IDF and Topic Modelling wouldn't be suitable as they do not take classes into account. One approach would be to train a basic classifier and extract important features per class. The steps: Create a TF-IDF matrix for the text corpus. Train a basic classifier using the TF-IDF Matrix as feature matrix and the classes as target. (A decent accuracy is enough.) Get the feature_importances from the trained classifier. Sort to get most important features and their corresponding classes. import numpy as np from collections import defaultdict from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.ensemble import RandomForestClassifier # Loading sample data categories = ['comp.sys.mac.hardware', 'rec.autos', 'sci.space', 'rec.sport.baseball'] newsgroups = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'),categories=categories) # 1. Fit corpus to tfidf vectorizer tfidf = TfidfVectorizer(min_df=15, max_df=0.95, max_features=5_000) tfidf_matrix = tfidf.fit_transform(newsgroups.data) # 2. Train classifier clf = RandomForestClassifier() clf.fit(tfidf_matrix, newsgroups.target) # 3. Get feature importances feature_importances = clf.feature_importances_ # 4. Sort and get important features word_indices = np.argsort(feature_importances)[::-1] # using argsort we get indices of important features feature_names = tfidf.get_feature_names() # Lookup to get words from index top_n = 50 # Top N features to be considered top_words_per_class = defaultdict(list) for word_idx in word_indices[:top_n]: word = feature_names[word_idx] word_class = newsgroups.target_names[clf.predict(tfidf.transform([word]))[0]] top_words_per_class[word_class].append(word) The top_words_per_class would be like: { "rec.autos": ["car", "cars", "engine", "ford", "like", "dealer", "oil", "toyota"], "sci.space": ["space", "nasa", "orbit", "launch", "earth", "moon", "shuttle", "thanks", "program", "project", "spacecraft"], "comp.sys.mac.hardware": ["mac", "apple", "drive", "scsi", "centris", "video", "quadra", "monitor", "se", "card", "powerbook", "use", "problem", "simms", "software", "modem"], "rec.sport.baseball": ["baseball", "game", "team", "games", "season", "players", "year", "league", "runs", "hit", "player", "braves", "teams", "pitching"]} }
H: Python Pandas - Compare Columns In Separate Dataframes, Then Delete Non-Matching Rows In 1 DataFrame I would like to compare 2 columns in different dataframes, then delete entire non-matching rows. The 2 columns are not of the same length. Here is a mock-up of DataFrame #1: Here is a mock-up of DataFrame #2: I would like to compare the ID_NUMBER columns in the 2 DataFrames, then delete entire rows in DataFrame #1 if the row value cannot be found in DataFrame #2. Currently, my code is not working: DataFrame1 = np.where((DataFrame1["ID_NUMBER"] == DataFrame2["ID_NUMBER"])) Many thanks in advance! AI: You can use pandas.Series.isin with boolean indexing: df1[df1["ID_NUMBER"].isin(df2["ID_NUMBER"])]
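A tiny worked example of that pattern (the ID values are made up):
import pandas as pd
df1 = pd.DataFrame({"ID_NUMBER": [1, 2, 3, 4], "VALUE": ["a", "b", "c", "d"]})
df2 = pd.DataFrame({"ID_NUMBER": [2, 4, 5]})
df1_filtered = df1[df1["ID_NUMBER"].isin(df2["ID_NUMBER"])]
print(df1_filtered)   # keeps only the rows whose ID_NUMBER also appears in df2 (here 2 and 4)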
H: Error in xgboost This is my script: library(xgboost) library(tidyverse) library(caret) library(readxl) library(data.table) library(mlr) data <- iris righe_train <- sample(nrow(data),nrow(data)*0.8) train <- data[righe_train,] test <- data[-righe_train,] setDT(train) setDT(test) labels <- train$Species ts_label <- test$Species new_tr <- model.matrix(~.+0,data = train[,-c("Species"),with=F]) new_ts <- model.matrix(~.+0,data = test[,-c("Species"),with=F]) #convert factor to numeric labels <- as.numeric(labels)-1 ts_label <- as.numeric(ts_label)-1 #preparing matrix dtrain <- xgb.DMatrix(data = new_tr,label = labels) dtest <- xgb.DMatrix(data = new_ts,label=ts_label) #default parameters params <- list(booster = "gbtree", objective = "binary:logistic", eta=0.3, gamma=0, max_depth=6, min_child_weight=1, subsample=1, colsample_bytree=1) xgbcv <- xgb.cv( params = params, data = dtrain, nrounds = 100, nfold = 5, showsd = T, stratified = T, print_every_n = 10, early_stopping_round = 20, maximize = F) ##best iteration = 79 min(xgbcv$test.error.mean) #first default - model training xgb1 <- xgb.train (params = params, data = dtrain, nrounds = 79, watchlist = list(val=dtest,train=dtrain), print.every.n = 10, early.stop.round = 10, maximize = F , eval_metric = "error") #model prediction xgbpred <- predict (xgb1,dtest) xgbpred <- ifelse (xgbpred > 0.5,1,0) #confusion matrix library(caret) confusionMatrix (xgbpred, ts_label) #Accuracy - 86.54%` #view variable importance plot mat <- xgb.importance (feature_names = colnames(new_tr),model = xgb1) xgb.plot.importance (importance_matrix = mat[1:20]) But when I run the instruction xgbcv I have this error: Error in xgb.iter.update(fd$bst, fd$dtrain, iteration - 1, obj) : [15:21:18] amalgamation/../src/objective/regression_obj.cu:103: label must be in [0,1] for logistic regression Why? How can I fix it? AI: The Iris data has three target values ("species"). The objective function in params is set to objective = "binary:logistic", which only accepts two classes (binary taget). In case you have more than two classes, you need a multiclass objective function, e.g. multi:softmax or multi:softprob. As stated in the docs: “binary:logistic” –logistic regression for binary classification, output probability [...] “multi:softmax” –set XGBoost to do multiclass classification using the softmax objective, you also need to set num_class(number of classes) “multi:softprob” –same as softmax, but output a vector of ndata * nclass, which can be further reshaped to ndata, nclass matrix. The result contains predicted probability of each data point belonging to each class. Also note additional parameter options available for xgboost.
H: Confusion matrix with different levels I want to print a confusion matrix, but data and reference have not the same level. How can I do? this is my actual code: library(xgboost) library(tidyverse) library(caret) library(readxl) library(data.table) library(mlr) data <- iris righe_train <- sample(nrow(data),nrow(data)*0.8) train <- data[righe_train,] test <- data[-righe_train,] setDT(train) setDT(test) labels <- train$Species ts_label <- test$Species new_tr <- model.matrix(~.+0,data = train[,-c("Species"),with=F]) new_ts <- model.matrix(~.+0,data = test[,-c("Species"),with=F]) #convert factor to numeric labels <- as.numeric(labels)-1 ts_label <- as.numeric(ts_label)-1 #preparing matrix dtrain <- xgb.DMatrix(data = new_tr,label = labels) dtest <- xgb.DMatrix(data = new_ts,label=ts_label) #default parameters params <- list(booster = "gbtree", objective = "multi:softmax", num_class = 3, eta=0.3, gamma=0, max_depth=6, min_child_weight=1, subsample=1, colsample_bytree=1) xgbcv <- xgb.cv( params = params, data = dtrain, nrounds = 100, nfold = 5, showsd = T, stratified = T, print_every_n = 10, early_stopping_round = 20, maximize = F) ##best iteration = 79 min(xgbcv$test.error.mean) #first default - model training xgb1 <- xgb.train (params = params, data = dtrain, nrounds = 79, watchlist = list(val=dtest,train=dtrain), print.every.n = 10, early.stop.round = 10, maximize = F , merror = "error") # eval_metric = "error") #model prediction xgbpred <- predict (xgb1,dtest) xgbpred <- ifelse (xgbpred > 0.5,1,0) #confusion matrix library(caret) confusionMatrix (xgbpred, ts_label) AI: You need to convert the numeric vectors to factors, for example like this: factors_both <- as.factor(c(xgbpred, ts_label)) xgbpred_f <- factors_both[1:length(xgbpred)] ts_label_f <- factors_both[length(xgbpred)+1:length(xgbpred)*2] > confusionMatrix(xgbpred_f,ts_label_f) Confusion Matrix and Statistics Reference Prediction 0 1 2 0 4 4 0 1 0 1 6 2 0 0 0
H: how does zoom out work in data augmentation?
How does zoom out work in data augmentation? I'm reading a doc on data augmentation in Keras, and it says that RandomZoom(0.2) zooms in and out by a factor in the range of 20%. How does zoom out work? Does it add a white border to the image?
AI: It mirrors (reflects) the image at its borders by default, rather than adding a white border; conceptually it is like putting smaller mirrored copies of the image around it (nine small images together) and then cropping the big image back to the size of the original.
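The filling behaviour is controlled by the fill_mode argument; as a sketch (the exact import path differs between TF versions, in older ones the layer lives under tf.keras.layers.experimental.preprocessing):
import tensorflow as tf
zoom_reflect = tf.keras.layers.RandomZoom(0.2)   # default fill_mode="reflect": mirror the borders
zoom_black = tf.keras.layers.RandomZoom(0.2, fill_mode="constant", fill_value=0.0)   # pad with a constant (black) border instead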
H: How does pandas qcut decide the bin edges I have pandas dataframe, and I want to bin the continuous values. a['abc'].describe() # a name of pandas dataframe, abc--column name count 250000.000000 mean 43.412040 std 26.075295 min 0.000000 25% 25.000000 50% 38.000000 75% 53.000000 max 218.000000 Name: abc, dtype: float64 On using pandas qcut for 4 groups, how is a negative value assigned in one of bins? a["abc_bin"] = pd.qcut(a["abc"],4,labels=None,) print(a["abc_bin"].value_counts()) (25.0, 38.0] 73448 (-0.001, 25.0] 62818 (53.0, 218.0] 61605 (38.0, 53.0] 52129 Name: abc_bin, dtype: int64 How is the bin width is decided? In particular, how is there a negative value as a bin edge? AI: Why does one bin include negative values? This is because the resulting intervals are open on the left, so pandas extends the left edge to include the min. Based on your describe output, the min is 0, so the left edge becomes slightly negative: (0, 25.0] does not include 0 (-0.001, 25.0] includes 0 This is not documented in qcut, but similar behavior is explained in cut: bins: ... The range of x is extended by 0.1% on each side to include the minimum and maximum values of x. How are the bins determined? qcut adjusts the edges such that each bin contains the same number of elements, whereas cut just divides strictly at the edges. So with cut, we can avoid the negative edge by specifying a list of bins because the data gets split exactly at those edges: a = pd.DataFrame({'abc': np.random.random(size=10000)}) pd.cut(a['abc'], [0, 0.25, 0.5, 0.75, 1]).value_counts() # (0.25, 0.5] 2637 # (0.0, 0.25] 2478 # (0.75, 1.0] 2454 # (0.5, 0.75] 2431 # Name: abc, dtype: int64 But with qcut, this workaround has no effect since qcut always adjusts the edges to force the bins into equal counts: pd.qcut(a['abc'], [0, 0.25, 0.5, 0.75, 1]).value_counts() # (-0.000818, 0.253] 2500 # (0.253, 0.489] 2500 # (0.489, 0.745] 2500 # (0.745, 1.0] 2500 # Name: abc, dtype: int64
H: Encoding with OrdinalEncoder: TypeError: unhashable type: 'numpy.ndarray' I am trying to do a Random Forest in a dataset with numerical and categorical variables in order to obtain a categorical result (two possible classes, column name "predicción"). I am using the scikit-learn library in Jupyter notebook. I have done the train-test split like this: X_train, X_test, y_train, y_test = train_test_split(datos.drop(columns = 'predicción'), datos['predicción'],random_state = 123) then I made two lists with the column names that hold numerical or categorical values: cat_cols = X_train.select_dtypes(include=['object', 'category']).columns.to_list() numeric_cols = X_train.select_dtypes(include=['float64', 'int']).columns.to_list() cat = list(np.array(cat_cols).reshape(1,9)) I then do the encoding using ColumnTransformer: niveles = ['0', '1', '2'] encoder = ColumnTransformer([('ordinal', OrdinalEncoder(categories=[niveles]), cat)],remainder='passthrough') So far so good, no errors up to this point. The error rises when I use the fit_transform: X_train = encoder.fit_transform(X_train) X_test = encoder.fit_transform(X_test) I have not been able to find a solution to this problem or any alternative. I am fairly new to machine learning if that can be an excuse. Any bit of help is welcome! AI: You are using the fit_transform method on both the training and test dataset which is incorrect. You should only use fit_transform on the training dataset (since the encoder should only fit/learn from data in the training dataset and not the test dataset as this would be new information when deploying your model) and then use transform to apply the encoder on the test dataset.
H: Distribution Shift vs Transfer Learning Transfer learning (TL) is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem [1] Distribution Shift The conditions under which the system was developed will differ from those in which we use the system. [2] I consider there is no difference between distribution shift and dataset shift. But between transfer learning and distribution shift? What are the differences? Can we say that transfer learning is an intended distribution shift? AI: Yes - One difference is between transfer learning and distribution shift is intention and knowledge of a different dataset. There are many types of transfer learning. Sometimes models are trained on one dataset and applied to another dataset without additional training. This has to be the case when there are no labels on the second dataset. Other times models are trained on one dataset and then fined tuned on another dataset. This can be the case when the second data has labels.
H: How use logistic function to normalize data to (0,1) I am reading paper about data normalization and I am interested how is it possible to use the logistic sigmoid function to normalize data to the specific interval (0,1). There is only short mention in the paper. When I did some testing computation in Excel I never get value from mentioned interval but every time I get number 1. AI: Excel is rounding to 1. Using the logistic function for normalization would have a narrow domain. $\lim_{x\rightarrow\infty} \frac{1}{1+e^{-x}}=1$ Even for $x=100$ it's too close to 1: $f(100)\approx0.9999999999999999999999999999999999999999999627992402397916403704...$ You'd be better off with min max normalization, though this is to the range $[0,1]$. If you needed $(0,1)$, you could squash your logistic function by $\alpha$ like so: $f(x)=\frac{1}{1+e^{-\frac{x}{\alpha}}}$.
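A small numeric sketch of that squashed logistic; choosing alpha from the spread of the data is one reasonable heuristic (not the only one):
import numpy as np
x = np.array([5.0, 40.0, 100.0, 250.0])
alpha = x.std()                             # scale so the exponent stays in a sane range
normalized = 1.0 / (1.0 + np.exp(-x / alpha))
print(normalized)                           # strictly inside (0, 1), nothing rounds to exactly 1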
H: If a feature has already split, will it hardly be selected to split again in the subsequent tree in a Gradient Boosting Tree
I have asked this question here, but it seems no one was interested in it: https://stats.stackexchange.com/questions/550994/if-a-feature-has-already-split-will-it-hardly-be-selected-to-split-again-in-the
If a feature has already split, will it hardly be selected to split again in the subsequent tree in a Gradient Boosting Tree? The question is motivated by the fact that for heavily correlated features in a single tree, usually only one of them will be selected to split, as little of their uncertainty remains after a split. Now, in a Gradient Boosting Tree, is the residual similar to that uncertainty?
Currently I am looking at how heavily correlated features affect the feature importance computed by a Gradient Boosting Tree. My guess is that a Gradient Boosting Tree will assign the importance to only one of the correlated features, just like LASSO.
AI: If a feature has already split, will it hardly be selected to split again in the subsequent tree in a Gradient Boosting Tree?
It's harder, yes, but still common. Within the same tree it cannot happen at the exact same split point; in a subsequent tree it can.
It is motivated by the fact that for heavily correlated features in a single tree, usually only one of them will be selected to split, as little of their uncertainty remains after a split. Now, in a Gradient Boosting Tree, is the residual similar to that uncertainty?
If you are trying to understand gradient boosting, I would not mix it with uncertainty. There are several ways to compute uncertainty with gradient boosting; NGBoost is one, but not the only one. Your question needs further refining before it can be answered properly: you are mixing up highly correlated features with uncertainty, and you need to separate the two concepts before attempting to answer this question.
H: How python LogisticRegression() binary classification works for more than 2 independent variables
I created a bag of words (in which the number of columns was around 15000, i.e. greater than 1) and the corresponding output data (0 or 1). After that I used LogisticRegression() from sklearn (I didn't pass any parameters) and used it for training and testing. It didn't give any error or warning and worked completely fine. However, isn't logistic regression only for datasets which have at most 1 independent variable? How would the sigmoid function work when the number of independent variables is greater than 1? The sigmoid function is used in LogisticRegression, and wherever I read it is described in 2-dimensional space, i.e. with only one independent variable.
AI: Multiple predictor (independent) variables are allowed.
$$
\hat p =\dfrac{1}{1+\exp(-\hat z)}\\
\hat z = X\hat\beta = \hat\beta_0+\hat\beta_1x_1+\hat\beta_2x_2+\hat\beta_3x_3+\cdots
$$
You can simulate this in R. Let's use five predictor variables.
set.seed(2021)
N <- 100
p <- 5
X <- matrix(runif(N * p), N, p)
z <- X %*% c(1, 2, -3, 4, -5)
p <- 1/(1+exp(-z))
y <- rbinom(N, 1, p)
L <- glm(y ~ X, family = binomial)
EDIT
Let's look at the sigmoid function in the plane for my example.
z_hat <- predict(L)
p_hat <- 1/(1 + exp(-z_hat))
plot(z_hat, p_hat)
It's just a sigmoid curve in the plane, even though there are multiple predictor variables.
H: Why is cross entropy based on Bernoulli or Multinoulli probability distribution? When we use logistic regression, we use cross entropy as the loss function. However, based on my understanding and https://machinelearningmastery.com/cross-entropy-for-machine-learning/, cross entropy evaluates if two or more distributions are similar to each other. And the distributions are assumed to be Bernoulli or Multinoulli. So, my question is: why we can always use cross entropy, i.e., Bernoulli in regression problems? Does the real values and the predicted values always follow such distribution? AI: Background: The concept of Cross Entropy is inherited from Information theory where it is applied to understand and measure the difference in the distributions of two or more events. Events as you would appreciate are a discrete concept and translate to classes in the case of a ML classification problems. This is the reason that Cross Entropy is only applicable to Bernoulli/Multinoulli (categorical distributions). Regarding your question: It is not clear why you mention Logistic regression and raise a question on the applicability of Cross Entropy (aka LogLoss in case of Logistic regression) to regression problems (the name may have confused you?). Since, Logistic regression is a classification model, all seems to fit well in place. EDIT 1: If you take a normal distribution (hence, continuous) and discretize it using bins, you convert it into a multinoulli distribution where the area under the curve of individual bins acts as pi of the events/classes. Now you can easily calculate cross entropy for this transformed distribution, however, it is no more a normal distribution.
H: Error with MSE in LSTM I'm trying to fit an LSTM model on my dataset, using also a validation set. My datasets have the following shapes: X_train = (56054, 250, 30) #where 250 = sequence_length X_val = (13969, 250, 30) #where 250 = sequence_length This is the model I created: cbs = [History(), EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0003, verbose=0), TensorBoard(log_dir='Baseline/tb_logsLSTM', histogram_freq=1, write_images=True)] model = Sequential() model.add(LSTM(40, input_shape=(None, X0train_seq.shape[2]), return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(40, input_shape=(None, X0train_seq.shape[2]), return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(30)) model.add(Activation('linear')) model.compile(loss='mse', optimizer='adam', metrics=["mse"]) model.fit(X_train, X_train, batch_size=60, epochs=35, validation_data=(X_val, X_val), callbacks=cbs, verbose=True) When I run it, it finish the first epoch and give me this error in the fit function: tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64,30] vs. [64,250,30] [[node gradient_tape/mean_squared_error/BroadcastGradientArgs How can I solve it? AI: It seems a couple of things can be done differently. Firstly, it seems you are passing train and label data incorrectly when fitting the model. It should be more like: model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2) For trainY being your labels, as opposed to passing trainX twice as in your code above. Same applies for your validation_data argument. Secondly, what is the role of a linear activation in your code? You could simply close your computational graph with the dense layer e.g. model = Sequential() model.add(LSTM(4, input_shape=(1, X0train_seq.shape[2]))) model.add(Dense(1)) model.compile(... Hope this is helpful.
H: Best practices when creating a virtual environment in Anaconda
I am absolutely confused due to the number of different ways and the abundance of articles explaining how to create a virtual environment in Anaconda. You could use conda or virtualenv to create one. Also, depending on how you create it, you need to use either conda or pip to install additional libraries. I tried to read up on it but it only confused me more! Up until now I haven't created one and have been installing additional packages with pip into the base environment. (I am using Anaconda and Spyder.) Can someone please help me with the best practice to create a virtual environment and how to install additional libraries? I use Spyder as my primary IDE.
PS: if you think this is a duplicate question, kindly post a link to the question rather than downvoting, and I'll close or delete this question myself.
AI: Whenever working in a collaborative environment, it's best to create the environment from an environment.yml file. You can easily set up additional channels (if required) for downloading the relevant packages, and you can list the conda dependencies from those channels together with the pip dependencies in the same file. It takes a bit of extra work at the start, but it helps with debugging later :)
Resources: Create an environment file, Creating an environment from an environment.yml file
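As a hedged illustration (environment name, packages and versions are arbitrary), an environment.yml might look like the sketch below; it is created and activated with the standard conda commands given in the trailing comments.
# environment.yml
name: my-project
channels:
  - conda-forge
dependencies:
  - python=3.9
  - numpy
  - pandas
  - scikit-learn
  - spyder-kernels      # lets Spyder connect to this environment
  - pip
  - pip:
      - some-pip-only-package   # placeholder for anything not on the conda channels
# create, activate and keep it in sync with:
#   conda env create -f environment.yml
#   conda activate my-project
#   conda env update -f environment.yml --prune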
H: Influence of imbalanced feature on prediction I want to use XGB regression. the dataframe is coneptually similar to this table: index feature 1 feature 2 feature 3 encoded_1 encoded_2 encoded_3 y 0 0.213 0.542 0.125 0 0 1 0.432 1 0.495 0.114 0.234 1 0 0 0.775 2 0.521 0.323 0.887 1 0 0 0.691 My question is, what is the influence of having imbalanced observations of the encoded features? for example, is I have more features that are "encoded 1" comapred to "encoded 2" or "encoded_3". Just to make it clear, I want to use regression and not classification. If there is any material to read about it pelase let me know. AI: It doesn't matter, it's just what the data is. I assume that you're thinking about issues related to "imbalanced dataset", but this term refers only to imbalance in the values of the target variable (and it's more commonly used about classification, but technically it's relevant also in regression). Features don't need to be balanced in any way, they just need to be good indicators for the target variable.
H: Why performance varies among validation set, public test set and private test set?
When practicing with classical Kaggle competitions, such as Titanic, House Pricing, and so on, I followed the traditional process that I learned from textbooks:
split the training data into a training set and a validation set (either by 7:3 or CV)
fit the model with the training set
evaluate the model performance with the validation set
combine the training set and validation set and re-train the model with the same parameters that were good on the validation set
predict the result on the test set
Something I could not understand is: why does a model perform well on the validation set, but maybe not on the public test set? And sometimes even the public test set score may differ from the private test set. If the validation set cannot prevent over- or under-fitting in real prediction, then it seems useless? What's more, how can we tell a model is good or not even if it performed well on the private test set? Maybe it would perform badly on a "private-private test set" afterwards. This really frustrated and confused me... maybe I have a wrong concept of performance evaluation methods, or there is a more reasonable way to evaluate a model?
AI: Something I could not understand is: why does a model perform well on the validation set, but maybe not on the public test set? And sometimes even the public test set score may differ from the private test set.
Validation sets are used to tune your hyperparameters, so that you can then analyze the performance of your model on never-seen-before data (the test set). So, comparing the performance on the validation set and the test sets (private/public) is not fair. Also, in the case of Kaggle datasets, usually the public test sets are smaller (and probably a bit different from the private ones), otherwise you could have trained a good model on the public test set and expect to get a good result on the private one as well.
If the validation set cannot prevent over- or under-fitting in real prediction, then it seems useless?
Again, validation sets are used for hyperparameter tuning. If your model cannot perform well (however you define it), use better models or perform a study on your data. But do not use any information you gained from the test set to tune the model!
What's more, how can we tell a model is good or not even if it performed well on the private test set? Maybe it would perform badly on a "private-private test set" afterwards.
Public/private test sets are mainly used for competitions and their main purpose is to prevent cheating. But know this: you were given limited resources, here data. Perform your best on it. If you are sure about the model you trained, and it still performed poorly on the final dataset, maybe the data given to you was not representative enough.
H: Create Period column based on a date column where the first month is 1, second 2, etc I have a dataset with many project's monthly expendituries (cost curve), like this one: Project Date Expenditure(USD) Project A 12-2020 500 Project A 01-2021 1257 Project A 02-2021 125889 Project A 03-2021 102447 Project A 04-2021 1248 Project A 05-2021 1222 Project A 06-2021 856 Project B 01-2021 5589 Project B 02-2021 52874 Project B 03-2021 5698745 Project B 04-2021 2031487 Project B 05-2021 2359874 Project B 06-2021 25413 Project B 07-2021 2014 Project B 08-2021 2569 Using python, I want to create a "Period" column that replace the month value for a integer that represents the count of months of the project, like this: Where the line is the first month of the Project A (12-2020) the code should put 1 in the "Period" column, the second month (01-2021) is 2, the third (02-2021) is 3, etc. because I need to focus on the number of months that the projects of my dataframe had an expediture (month 1, month 2, month 3...) Project Date Period Expenditure(USD) Project A 12-2020 1 500 Project A 01-2021 2 1257 Project A 02-2021 3 125889 Project A 03-2021 4 102447 Project A 04-2021 5 1248 Project A 05-2021 6 1222 Project A 06-2021 7 856 Project B 01-2021 1 5589 Project B 02-2021 2 52874 Project B 03-2021 3 5698745 Project B 04-2021 4 2031487 Project B 05-2021 5 2359874 Project B 06-2021 6 25413 Project B 07-2021 7 2014 Project B 08-2021 8 2569 AI: The easiest thing is for you to calculate for each row: The start date of the corresponding project. The months since current date and start date of the project. Below is a sample code that does that for you: import pandas as pd import numpy as np df = pd.DataFrame( [ ["Project A", "12-2020", 500], ["Project A", "01-2021", 1257], ["Project A", "02-2021", 125889], ["Project A", "03-2021", 102447], ["Project A", "04-2021", 1248], ["Project A", "05-2021", 1222], ["Project A", "06-2021", 856], ["Project B", "01-2021", 5589], ["Project B", "02-2021", 52874], ["Project B", "03-2021", 5698745], ["Project B", "04-2021", 2031487], ["Project B", "05-2021", 2359874], ["Project B", "06-2021", 25413], ["Project B", "07-2021", 2014], ["Project B", "08-2021", 2569], ], columns=["Project", "Date", "Expenditure(USD)"], ) df["Date"] = pd.to_datetime(df["Date"], format="%m-%Y") # Convert date column type # get the start date of the project # i.e find the lowest date of rows that have the same project as the current row df["Project Start Date"] = df.apply(lambda row: min(df[df["Project"] == row["Project"]]["Date"]), axis=1) # calculate the period # i.e. the number of months of the current date since the start of the project + 1 df["Period"] = ((df["Date"] - df["Project Start Date"]) / np.timedelta64(1, "M") + 1).round().astype(int) print(df) It gives you the following: Project Date Expenditure(USD) Project Start Date Period 0 Project A 2020-12-01 500 2020-12-01 1 1 Project A 2021-01-01 1257 2020-12-01 2 2 Project A 2021-02-01 125889 2020-12-01 3 3 Project A 2021-03-01 102447 2020-12-01 4 4 Project A 2021-04-01 1248 2020-12-01 5 5 Project A 2021-05-01 1222 2020-12-01 6 6 Project A 2021-06-01 856 2020-12-01 7 7 Project B 2021-01-01 5589 2021-01-01 1 8 Project B 2021-02-01 52874 2021-01-01 2 9 Project B 2021-03-01 5698745 2021-01-01 3 10 Project B 2021-04-01 2031487 2021-01-01 4 11 Project B 2021-05-01 2359874 2021-01-01 5 12 Project B 2021-06-01 25413 2021-01-01 6 13 Project B 2021-07-01 2014 2021-01-01 7 14 Project B 2021-08-01 2569 2021-01-01 8
H: Can I impute with median if median = 0? I want to impute a numerical feature using the median, but the median for that feature is 0 and the mean is 106. Should I go ahead and impute, or is there anything else I can do? PS: I don't want to create a new binary variable to capture the missingness of the data. I only want to impute using the mean or median. Edit 1: I have a regression task at hand, predicting house price. The variable in question carries the info about Masonry veneer area in square feet. These stats are obtained after splitting the data into train and test. There are 6 nan values. Below are some relevant stats:
count    1162.000000
mean      106.538726
std       185.924370
min         0.000000
25%         0.000000
50%         0.000000
75%       164.000000
max      1600.000000
Edit 2: The reason I chose the median is that after imputing, it doesn't distort the histogram and both curves (before imputing and after imputing) look the same.

AI: It's hard to say without seeing the distribution of the data; it would be useful if you added that and described what the variables actually represent. If your data is really skewed, then even if a median of 0 sounds like the wrong thing to do, it is better than using the mean, since the mean places more emphasis on the outliers in your data. In general though, any form of imputation will add bias to your dataset, and if the number of missing samples is small then it may be worth ignoring those samples in your analysis. When you impute missing values with the mean, median or mode you are assuming that the thing you're imputing has no correlation with anything else in the dataset, which is not always true. Consider this example:
x1 = [1,2,3,4]
x2 = [1,4,?,16]
y = [3, 8, 15, 24]
For this toy example, $y = 2x_1 + x_2$. We also know that $x_2 = x_1^2$. Now suppose we wanted to do regression on X = [x1, x2] to determine the coefficients that give us the best y_hat. If we imputed x2 with the mean or the median, i.e. 7 or 4, we'd be adding a bias into our analysis, since that doesn't consider any interaction from x1! Better methods would seek to learn how to predict the missing data from the other features. I've added some links for further reading:
https://towardsdatascience.com/a-comprehensive-guide-to-data-imputation-e82eadc22609
https://en.wikipedia.org/wiki/Imputation_(statistics)
http://num.pyro.ai/en/stable/tutorials/bayesian_imputation.html
https://stats.idre.ucla.edu/wp-content/uploads/2016/02/multipleimputation.pdf
https://scikit-learn.org/stable/modules/impute.html#impute
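If you do go ahead with median imputation despite the caveats above, it is worth doing it inside a scikit-learn transformer so the statistic is learned on the training split only; a minimal sketch, where the toy column just stands in for the veneer-area feature:

import numpy as np
from sklearn.impute import SimpleImputer

# toy column standing in for the masonry veneer area (NaN = missing)
X_train = np.array([[0.0], [0.0], [0.0], [164.0], [1600.0], [np.nan]])
X_test = np.array([[np.nan], [250.0]])

imputer = SimpleImputer(strategy="median")
X_train_imp = imputer.fit_transform(X_train)  # median computed on the training data only
X_test_imp = imputer.transform(X_test)        # the same median is reused on the test split
print(imputer.statistics_)                    # the learned median (0.0 here)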
H: How to classify using incomplete features. Assume we have some features: pressure, volume, temperature, intensity, mass, size, ... The problem is that I do not always have a complete set of this information. I cannot put zero for an unknown feature because zero has a meaning. For example, if I do not know the temperature I cannot set T=0. In other words, how do I distinguish between T=0 and T=unknown? AI: It sounds like you're dealing with Missing Data. Missing data can be handled in two ways: (1) delete values (rows or columns) or (2) perform imputation to replace missing values. Before identifying a strategy to handle this missing data, you should identify how the data is missing.
Missing Completely at Random (MCAR) means there is no difference between the observations with and without the missing values. In other words, the probability of the data being missing is the same for all classes (in a classification problem).
Missing at Random (MAR) implies that there is a systematic difference between the observations with and without the missing values, but that the difference is related to some of the observed data. Perhaps temperature is unlikely to be measured if you have pressure and volume, for example.
Missing Not at Random (MNAR) means the probability of a value being missing varies for unknown reasons.
Once you have diagnosed the missingness, you can select an appropriate technique accordingly. This may be removing the data, imputing the data, or a combination of both (i.e. removing one column and imputing another).
Removing Data
Listwise (complete-case) deletion can be applied to delete all rows with missing values. Training will only occur on rows with complete data. This will introduce bias if the data is not MCAR and will not allow you to make predictions on future data that contains missing values. Alternatively, entire features can be dropped if they have a high percentage of missing values. The effect of this on the model should be explored.
Imputation
If you are interested in retaining all of the data, imputation can be applied to fill missing values using a strategy. Sklearn has a SimpleImputer that has 4 strategies to choose from when imputing a column: filling with the "mean", "median", or "most frequent" value in that column, or applying a "constant". By using constant, for example, you could represent unknown values with the string "unknown". I noticed that you mentioned numerical values in your question. For the numerical variables, adding a new column to indicate missing data might be appropriate (depending on the pattern of missingness). This could be useful as an MNAR imputation technique. Interpolation can be used as an imputation technique to extrapolate the missing data as a function of the other observations. You did not mention what you are trying to model, but based on the features you provided, this might be appropriate. If you are working with time series data, you should explore imputation techniques specific to time series problems, such as Last Observation Carried Forward (LOCF), which replaces the missing value with the last observed value.
Other Resources
You can also look into K-Nearest Neighbors as an imputation method. It can identify the most frequent value among K neighbors, where nearness is based on the distance between observed values. Be careful using this on data that has a high number of binary features. Another topic to explore might be multiple imputation for a more advanced technique.
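A compact way to keep the distinction between T=0 and T=unknown, mentioned above for numerical variables, is to impute and add a missingness-indicator column in one step; this is just a sketch with toy values:

import numpy as np
from sklearn.impute import SimpleImputer

# toy rows: [pressure, temperature]; np.nan encodes "unknown", not zero
X = np.array([[101.3, 20.0],
              [ 99.8, np.nan],
              [102.1, 25.0]])

# impute the mean AND append a binary "was missing" column for each feature
# that had missing values, so a model can tell T=unknown apart from any real T
imputer = SimpleImputer(strategy="mean", add_indicator=True)
X_imputed = imputer.fit_transform(X)
print(X_imputed)  # two imputed columns plus one indicator column for temperature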
H: New classification in Machine Learning model with xgboost I write a code in Rstudio with xgboost to solve a Machine Learning problem. This is my actual code: library(xgboost) library(tidyverse) library(caret) library(readxl) library(data.table) library(mlr) data <- iris righe_train <- sample(nrow(data),nrow(data)*0.8) train <- data[righe_train,] test <- data[-righe_train,] setDT(train) setDT(test) labels <- train$Species ts_label <- test$Species new_tr <- model.matrix(~.+0,data = train[,-c("Species"),with=F]) new_ts <- model.matrix(~.+0,data = test[,-c("Species"),with=F]) #convert factor to numeric labels <- as.numeric(labels)-1 ts_label <- as.numeric(ts_label)-1 #preparing matrix dtrain <- xgb.DMatrix(data = new_tr,label = labels) dtest <- xgb.DMatrix(data = new_ts,label=ts_label) #default parameters params <- list(booster = "gbtree", objective = "multi:softmax", num_class = 3, eta=0.3, gamma=0, max_depth=6, min_child_weight=1, subsample=1, colsample_bytree=1) xgbcv <- xgb.cv( params = params, data = dtrain, nrounds = 100, nfold = 5, showsd = T, stratified = T, print_every_n = 10, early_stopping_round = 20, maximize = F) ##best iteration = 79 min(xgbcv$test.error.mean) #first default - model training xgb1 <- xgb.train (params = params, data = dtrain, nrounds = 21, watchlist = list(val=dtest,train=dtrain), print.every.n = 10, early.stop.round = 10, maximize = F , merror = "error") # eval_metric = "error") #model prediction xgbpred <- predict (xgb1,dtest) xgbpred <- ifelse (xgbpred > 0.5,1,0) #confusion matrix library(caret) factors_both <- as.factor(c(xgbpred, ts_label)) xgbpred_f <- factors_both[1:length(xgbpred)] ts_label_f <- factors_both[length(xgbpred)+1:length(xgbpred)*2] confusionMatrix (xgbpred_f,ts_label_f) #Accuracy - 86.54%` #view variable importance plot mat <- xgb.importance (feature_names = colnames(new_tr),model = xgb1) xgb.plot.importance (importance_matrix = mat[1:20]) So, this is a Machine Learning supervised models. How can I classify a new registration? I have this new registration: new_record <- c(5.3,3.2,2.0,0.2) How can I classify it using the previous model? AI: With new data, you need to go down exactly the same route as with the data used for training. Something like: # Some data newdf = data.frame(Sepal.Length=c(5.3), Sepal.Width=c(3.2), Petal.Length=c(2.0), Petal.Width=c(0.2)) # Model matrix newdf = model.matrix(~.+0,data = newdf) # Predict on xgb.DMatrix object predict(xgb1,xgb.DMatrix(data = newdf))
H: How can I define the optimal value of k in the KNN model? This is my script in Rstudio: library(class) library(ggplot2) library(gmodels) library(scales) library(caret) library(tidyverse) library(caret) db_data <- iris row_train <- sample(nrow(iris), nrow(iris)*0.8) db_train <- iris[row_train,] db_test <- iris[-row_train,] unique(db_train$Species) table(db_train$Species) #-------- #KNN #------- model_knn<-train(Species ~ ., data = db_train, method = "knn",tuneGrid = data.frame(k = 12)) summary(model_knn) #------- #PREDICTION NEW RECORD #------- test_data <- db_test db_test$predict <- predict(model_knn, newdata=test_data, interval='confidence') confusionMatrix(data=factor(db_test$predict),reference=factor(db_test$Species)) #------- How can I define the optimal value of k in the KNN model? AI: Usually you will use cross validation to find optimal model (hyper) parameter. See this post for a application to KNN in R. You may also have a look at the book "An Introduction to Statistical Learning" ch. 5 (Resampling Methods) to learn more about cross validation.
H: What's an algorithm or method that updates a prediction after new information is available? Suppose I have a binary classifier that predicts if students will fail a test based on independent features. How can I update this model's prediction as more information regarding a student's test outcomes becomes available, in order to predict their performance (fail or not) on the following tests? For example, for test #1, the model outputs "fail," and the student indeed fails. I wish to take this new information (the fact that the student failed) as extra information to use while predicting if the student will fail test #2. Is there an algorithm or technique that allows me to update the model as this new information comes? I thought of several ways, but none of them seems right.
Having as a feature of my training set the test number (this could work, but I don't know the total amount of tests)
Having as a feature the outcome of the previous test (this one faces the same issue as the previous case)
Implementing online learning and updating the model with each new outcome.
Instead of training such a classifier, somehow learning a decay weight I could use with the last test's outcome to obtain the new one.
AI: First, I will address your thoughts, then I will offer some suggestions.
Having as a feature of my training set the test number (this could work, but I don't know the total amount of tests)
I have interpreted this as the creation of a "tests" column with a list of tests the individual has taken (i.e. [1, 2, 3]). We can think of this as a binary indicator of whether or not someone has taken a test. Given that you will only see an indicator for later tests when an individual has completed all the previous tests, the variables will be correlated. Additionally, it does not provide information on an individual's test performance. Furthermore, it introduces multiple measurements of a single individual, which violates the assumptions behind some approaches.
Having as a feature the outcome of the previous test (this one faces the same issue as the previous case)
I will address this in predicting multiple events.
Implementing online learning and updating the model with each new outcome.
It's worth considering how you will use your trained model. Are you making predictions on a large group of students from a variety of classes every day? Or are you making predictions on a set of students in one class before each test? The model will only learn at each step that you update it. It will get better as you use it more, which means you might need a large amount of data for an online approach. It may be worth considering how often you are planning on updating it and whether it makes more sense to retrain a static model once a year (after all of the students take exam #1, for example).
Instead of training such a classifier, somehow learning a decay weight I could use with the last test's outcome to obtain the new one.
Weight decay is used to avoid overfitting. This could be a component of an approach you choose, but I don't think it addresses your question. Perhaps you meant to say Survival Analysis. A reasonable application for this might be predicting student drop-out rate.
Predicting Multiple Events
It sounds like you are trying to predict the outcome of multiple events: the outcome of test #1, the outcome of test #2, ..., the outcome of test n. I would interpret your labels as different types of events, since the content of test #2 is likely different from the content of test #3.
A naive way to accomplish this might be to train a separate binary classifier for each event (test outcome). The features that you input can be the results of the prior test(s). I believe this is what you meant by "Having as a feature the outcome of the previous test". I asked a clarifying question about your dataset that should provide more context as to whether this is reasonable.
X features                                              | y predictor
student features                                        | test 1 outcome
student features, test 1 outcome                        | test 2 outcome
student features, test 1 outcome, ..., test n-1 outcome | test n outcome
Predicting with Repeated Observations (for the same event)
Your data seems to consist of repeated measurements of multiple individuals, where more variables (i.e. test results) are measured at each time point. If we assume each of your events is the same type of event (i.e. someone retaking the SAT), you might be able to model this as a Repeated Measures Model.
Other Resources
If you are looking for other features to measure that might indicate student success, here's an article on Predicting Student Outcomes. This article analyzes their network of friends. Depending on your dataset, collaborative filtering might be a reasonable approach. These researchers applied a similar technique to their study on student performance. It might be beneficial to explore the why behind your prediction and consider structuring this as a regression problem where you try to predict the percentage a student might score in a class instead of a binary classification problem. Finally, if you're interested in exploring more information related to the question in your header, "an algorithm that updates predictions after new information is available", you can look into:
Online Learning
Reinforcement Learning
Incremental Learning with Partial Fit
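To make the last option concrete, here is a minimal sketch of incremental learning with scikit-learn's partial_fit API; the features and labels are random placeholders and the particular estimator is only an example, not a recommendation:

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# toy student features and pass/fail labels observed up to test #1
X_batch1 = rng.random((100, 4))
y_batch1 = rng.integers(0, 2, 100)

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_batch1, y_batch1, classes=[0, 1])  # first incremental fit

# later, when new outcomes arrive, update the same model in place
X_batch2 = rng.random((20, 4))
y_batch2 = rng.integers(0, 2, 20)
clf.partial_fit(X_batch2, y_batch2)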
H: Difference between model score on test part and Kaggle public score. I tested my CatBoost model on part of the data and got a 0.92 score, but the Kaggle public score was 0.9. I found new hyperparameters via random search; the new model score was 0.925, but the Kaggle score fell to 0.88. What should I do to validate the model correctly? AI: In general, you should expect to get lower scores on test sets than on validation sets, since you took advantage of the validation data to tune your model. But for a correctly trained model, the difference between the validation and test sets should be small, as in 0.92 vs 0.9. To be more confident about your model's output, you can perform Cross-Validation. Also, apparently, your model overfitted the training data after hyperparameter optimization. You can use regularization or early stopping to prevent that.
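A hedged sketch of the early-stopping suggestion, assuming a CatBoost model and a held-out validation split (the data here is synthetic and the parameter values are placeholders):

from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = CatBoostClassifier(iterations=1000, verbose=False, random_seed=0)
# stop adding trees once the validation metric has not improved for 50 rounds
model.fit(X_tr, y_tr, eval_set=(X_val, y_val), early_stopping_rounds=50)
print(model.get_best_iteration())

For cross-validation, sklearn's cross_val_score works with CatBoost's scikit-learn-compatible estimators and gives a mean and spread of scores that is more trustworthy than a single split.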
H: How to make a model not too dependent on one variable? Let's suppose I have a generic model: Variable A | Variable B | Variable C | Variable D. Variable D is a categorical variable (for example, models of cars, and the dataset on which I trained my model only has models up to year 2020). I know for sure that Variable A | Variable B | Variable C are always present; however, Variable D can be missing (if for example I am using models of cars from 2021). My questions are: If I cannot use data from 2021, how safe is it to use Variable D in my predictions? Could I just randomly assign a value to Variable D when it is missing? Is it possible that the model may become too reliant on Variable D and by randomly assigning values I might introduce bias? Should I just drop Variable D, or just the rows without an associated category in the data on which my model has been trained? Thank you for your time. AI: The answers to all your questions really depend on what Variable D is. Based on your description, it does seem like your model would be too dependent on Variable D and would not generalize. I'll be using the car model example you have mentioned to explain my answer. Let us consider a model which predicts Car Price based on car features. The dataset would be as follows: Here you should not use Car Model as a feature as:
Car Model is a direct indicator of price. The model will just learn the mapping Car Model -> Price and won't learn any other features.
For future cases Car Model does not help prediction. Consider a new car for which your model has to find the price. You'll have the following data: Since your model hasn't seen audi 100ls, it would make a very bad prediction.
You need to ask the following questions to help you decide what to do:
Will the variable be available during real-time prediction? If not, then do not make use of it during training.
If it is available, does it help prediction? E.g. a new car model's name does not help you determine car price, but other features like fueltype, doornumber, mileage etc. do help.
If the variable is both available and helps prediction in real time, you can try imputing missing values.
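If you want to check empirically how reliant a trained model actually is on Variable D rather than guessing, permutation importance is one option; this sketch uses toy data, so the column names are purely illustrative stand-ins:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["A", "B", "C", "D"])  # "D" stands in for Variable D
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
# a large drop in score when "D" is shuffled means the model leans heavily on it
print(dict(zip(X.columns, result.importances_mean.round(3))))

If the score collapses whenever "D" is permuted, that supports either dropping it or retraining without it before relying on the model for 2021 data.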
H: Which type of models generalize better, generative or discriminative models? In NLP, which type of model (generative or discriminative) is more sensitive to the amount of data in order to generalize better? References? Is this related to the way those two types capture the data probability (joint probability vs. conditional probability)? AI: My answer is not limited to NLP and I think NLP is no different in this aspect than other types of learning. An interesting technical look is offered by: On Discriminative vs. Generative Classifiers - Andrew Ng, Michael Jordan. Now a more informal opinion: Discriminative classifiers attack the problem of learning directly. In the end, you build classifiers for prediction, which means you build an estimate of $p(y|x)$. Generative models arrive at the same estimate through Bayes' theorem, but they do so by estimating the joint probability, and the conditional is obtained as a consequence. Intuitively, generative classifiers require more data since the space modeled is usually larger than that for a discriminative model. More parameters mean there is a need for more data. Sometimes not only the parameters but even the form of a joint distribution is harder to model than a conditional one. But if you have enough data available, it is also to be expected that a generative model should give a more robust model. Those are intuitions. Vapnik once asked why we should model the joint distribution when what we have to solve is the conditional. He seems to be right if you are interested only in prediction. My opinion is that there are many factors that influence the choice between a generative model and a discriminative one, including the complexity of the formalism, the complexity of the input data, the flexibility to extend results beyond prediction, and the models themselves. If discriminative models are superior as a function of the available data, it is perhaps by a small margin.
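A quick way to see the data-size effect that Ng & Jordan describe is to compare a generative and a discriminative classifier trained on increasing amounts of data; this is only a sketch on synthetic, non-NLP data:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for n in [50, 200, 1000, 2500]:  # growing training-set sizes
    nb = GaussianNB().fit(X_tr[:n], y_tr[:n])                       # generative
    lr = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])  # discriminative
    print(n, round(nb.score(X_te, y_te), 3), round(lr.score(X_te, y_te), 3))

The pattern reported in that paper is that the generative model reaches its (often lower) asymptote faster, while the discriminative one needs more data but tends to end up at least as good; the exact numbers here will of course depend on the synthetic data.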
H: How to convert horizontal bounding box coordinates to oriented bounding box coordinates. I have been trying for a long time to detect oriented bounding boxes with Faster R-CNN, but I have not been able to make it work. I aim to detect objects in the DOTA dataset. I was using the built-in Faster R-CNN model in PyTorch, but realized that it does not support OBB. Then I found another library named detectron2 that is built on the PyTorch framework. The built-in Faster R-CNN network in detectron2 is actually compatible with OBB, but I could not make that model work with DOTA, because I could not convert the DOTA box annotations to (cx, cy, w, h, a). In DOTA, objects are annotated by the coordinates of 4 corners, which are (x1,y1,x2,y2,x3,y3,x4,y4). I can't come up with a solution that converts these 4 corner points to (cx, cy, w, h, a), where cx and cy are the center point of the OBB and w, h, and a are the width, height and angle respectively. Is there any suggestion? AI: Assuming the rectangle is as below (easily adjusted if otherwise). Then:
The tangent of the angle of rotation is given by $\tan(a)=\frac{y_3-y_2}{x_3-x_2}$. Depending on the quadrant of interest one can use the opposite pair of coordinates. Taking the inverse tangent gives the angle in radians.
The width is given by $w=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$, and similarly for the height.
The center x is given by $cx=\frac{x_1+x_3}{2}$, and similarly for cy.
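Putting those formulas into code, here is a sketch of the conversion; it assumes the four corners are stored in a consistent order (here the (x1, y1)->(x2, y2) edge is taken as the width side, so swap edges if your annotation convention uses the other side for the angle), and you may need to convert the angle to degrees depending on what the target framework expects:

import numpy as np

def corners_to_obb(x1, y1, x2, y2, x3, y3, x4, y4):
    """Convert 4 corner points to (cx, cy, w, h, angle in radians)."""
    cx = (x1 + x3) / 2.0                  # center from two opposite corners
    cy = (y1 + y3) / 2.0
    w = np.hypot(x2 - x1, y2 - y1)        # length of the (x1,y1)->(x2,y2) side
    h = np.hypot(x3 - x2, y3 - y2)        # length of the adjacent side
    angle = np.arctan2(y2 - y1, x2 - x1)  # rotation of the width side
    return cx, cy, w, h, angle

# example: an axis-aligned 4x2 box should give angle 0
print(corners_to_obb(0, 0, 4, 0, 4, 2, 0, 2))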
H: How does Amazon's "Reviews that mention" extract topics from reviews? The Amazon product page contains a section called "Reviews that mention". The section lists the main things that users liked or disliked about the product. For example, see this page. How exactly does it work? This could be done using topic modelling with LDA, but that approach has several drawbacks:
You need to choose the number of topics upfront, but in Amazon reviews the number of topics varies for each product. The number of topics is not the same even for products that belong to the same category.
You need to give a friendly name to each topic. With so many products, it's unlikely that Amazon does that.
What approach would be suitable to do this in a completely unsupervised way, without the drawbacks mentioned above? AI: One possible approach I can see is as follows:
Amazon considers (based on its historic data so far, re-checked every so often) a set of frequent categories (i.e. labels in a classification context).
In the product page you link, you can see the considered categories, and the most frequent terms users have written in their reviews, used as filters.
By applying techniques like word embeddings, you can build a classifier to find which categories those terms belong to, based on some predefined category labels.
New categories could be found with unsupervised clustering techniques.
H: How to fill missing consumption data in a time series? I have a dataset that contains consumption values. These consumptions are measured every month, but some months are not measured. So the measured month after an unmeasured month is actually worth the sum of the two months (or more). My dataset:
            difference
date
2019-01-01        50.0
2019-02-01        60.0
2019-03-01         NaN
2019-04-01       140.0
2019-05-01        90.0
So we can understand that the 4th month's value is actually the sum of the 3rd and 4th months. It is necessary to organize the data with this logic, because 140 is not the correct value for the 4th month and the 3rd month's consumption is not zero.
            difference
date
2019-01-01        50.0
2019-02-01        60.0
2019-03-01        70.0
2019-04-01        70.0
2019-05-01        90.0
Splitting the value evenly like this (taking the mean) can be one approach to avoid this problem in the dataset. After that, I can use this dataset to predict next month's consumption. I want to know if this approach has a name. What solutions can I implement for this type of time-series dataset? How can I search for this problem? AI: The approach you're trying to describe is filling the gaps in your data.
Filling N/A in the data
Since you're working in Python, I'm guessing your data is stored as a DataFrame. Pandas has a specific function for this: DataFrame.fillna(). This lets you fill any NaN values with multiple methods. There are some similar examples in this answer.
Filling N/A and changing the following item
From my knowledge, DataFrames don't have built-in functionality to do this. The best option I can think of is to iterate through the series. You could either convert to a list with .tolist() and then use a for loop, or use Series.iteritems(). In your loop, you'll need a condition to check whether the current item is NaN; if so, split the next observed value evenly between the NaN month(s) and that following month (e.g. 140 becomes 70 and 70). You may also need a condition for the edge case where the final value in the list is NaN.
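Since the later reading already contains the missing months' consumption, plain interpolation is not quite right; below is a rough sketch of the "spread the next observed value evenly over the gap" idea described above (assuming the series is sorted by date):

import numpy as np
import pandas as pd

s = pd.Series([50.0, 60.0, np.nan, 140.0, 90.0],
              index=pd.to_datetime(["2019-01-01", "2019-02-01", "2019-03-01",
                                    "2019-04-01", "2019-05-01"]))

filled = s.copy()
i = 0
while i < len(filled):
    if np.isnan(filled.iloc[i]):
        j = i
        while j < len(filled) and np.isnan(filled.iloc[j]):      # find the end of the NaN run
            j += 1
        if j < len(filled):                                      # spread the next reading over
            filled.iloc[i:j + 1] = filled.iloc[j] / (j - i + 1)  # the gap plus that month
        i = j
    i += 1
print(filled)  # 50, 60, 70, 70, 90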
H: How to remove all empty elements from a list of list I have the following list of list in Python and would like to remove all empty elements without invalidating indices. mylist = [['write', 'create', 'draw', 'book', 'mkay', '', '', '', '', '', ''], ['hey', 'mykey', 'ange', 'kiki', 'rose', '', '', '', '', '', '', '', '', '', '']] I need the output like this: mylist = [['write', 'create', 'draw', 'book', 'mkay'], ['hey', 'mykey', 'ange', 'kiki', 'rose']] I have tried to use the following line of code, but it does not give the expected output. mylist1 = list(filter(None, mylist)) AI: You can use nested list comprehension with a conditional check for the empty string as: new_list = [[s for s in sub_list if s] for sub_list in mylist] Output [['write', 'create', 'draw', 'book', 'mkay'], ['hey', 'mykey', 'ange', 'kiki', 'rose']]
H: Should one log transform discrete numerical variables? I am working on a Linear Regression problem and one of the assumptions of a Linear Regression model is that the features should be Normally Distributed. Hence, to convert my non-linear features to linear, I am performing several transformations like log, Box-Cox, square-root transformation etc. I have both discrete and continuous numerical variables (an example of each, along with their histograms and QQ plots, is given):
CONTINUOUS VARIABLE HISTOGRAM AND QQ PLOT
DISCRETE VARIABLE HISTOGRAM AND QQ PLOT
From the QQ plot of the continuous variable, we can see there are points that do not lie on the red line and hence it needs some kind of transformation. So I might try different transformations to see which results in a Normal Distribution and hence makes the points fall on the red line. But what about the discrete variable? From the QQ plot of the discrete variable, all the points form a horizontal line, so will transforming them make them fall on the red line? Should I proceed the same way as I do in the case of a continuous variable, or is there some other method? AI: First of all, in standard linear regression there is no assumption of normality for the features. More than that, standard linear regression, also known as fixed-effects linear regression, is a linear regression model where the input variables are given, so they are not random variables. Under that model only the target variable is a random variable $$y = X\beta + \epsilon, \epsilon \sim \cal{N}(0,\sigma^2) $$ There are quite a few assumptions for a linear model, like independent and homoskedastic noise, additive features and so on. There are times you need to transform features to meet those assumptions. I enumerate some cases I encountered:
Your model is linear (yes, I know, it sounds redundant, but it is not). The point is that the input variables should be additive to form hyperplanes (in 2 dimensions this is a line), otherwise your model won't work. Imagine you collect observations about gravitational attraction, which comes from the formula $F_G = -\frac{G m_1 m_2}{r^2}$ (inverse square law - the force is proportional to the product of the masses and inversely proportional to the square of the distance between them). And you want to regress $F_G$ as a linear model from the input variables $m_1$, $m_2$, $r$ ($G$ is a constant). Obviously modeling that as $F_G = \beta_0 + \beta_1 m_1 + \beta_2 m_2 + \beta_3 r + \epsilon$ will not work at all; it will be wild. But after taking logarithms of all variables involved, your data will be linearly additive. Most of the time you do not know the laws which govern your data, but with careful inspection of the relations between your input variables you could eventually get them into good shape for a linear model. For that you should look at how the inputs correlate and contribute jointly, not at each feature independently.
homoskedastic noise - this means constant variance of the error, or in plain English, the variance of the error should not depend on the input variables. Imagine that your output variable is a volume, something like $v^d$, where $d$ is the dimensionality of the space. Now an error of 1 centimeter for a cube of side length 10 is much lower than the error in volume for a cube of side 100. Basically, the observed variance will increase significantly for large values of the target variable.
Again, most of the time you might not know "laws" about your variables, but graphical inspection can help a lot to isolate such deviations, and adjustments can be made. In the example I gave, again a logarithmic transformation (or Box-Cox, power transform, etc.) could help you a lot.
discrete variables - here you have different options. You could have an ordinal variable like temperature: low, medium or high. Those cases could be encoded as a numerical column, but you have to pay attention to the values you assign to each level. Those values will have implications for the coefficients, and the values you give should make sense. If your discrete variables encode mutually exclusive factors, like eye color (brown, blue, green, whatever), you had better encode those as binary variables (one less than the number of levels, since otherwise the regression will not work due to the impossibility of matrix inversion). The above discussion covers cases where your discrete variables are given as factors (possibly as text), but it also applies to numeric encodings. If the color is encoded as 1, 2, 3, ... instead of strings, you should still transform it into binary variables if it is a nominal factor. If you have an ordinal variable you could perhaps leave it as such, unless you have a clue about better, more appropriate values. I will try to give another example here. Suppose you have numerically encoded the magnitude of an earthquake: 1 for (1-3 Richter), 2 for (4-6 Richter), 3 for (7-9 Richter) and 4 for a catastrophic one. You could keep the same values, or you could try to use the fact that the Richter scale is not linear: a magnitude-3 earthquake is 10 times smaller than a magnitude-4 one (or similar; I do not remember precisely, but you get the idea). In that case you could substitute those values with $10^i$ instead of $i$ for a better alignment with the linear model.
Some things cannot be repaired. For example, if your data comes from a time series where you have a clear and strong dependence of observations on past observations, this kind of problem cannot be easily solved, and perhaps you should take another approach anyway, since fixed-effects linear models are not recommended for such cases.
In conclusion, you could study the assumptions of linear regression further, try to understand what you can do to check (or at least inspect) whether those assumptions are met, and see if deviations can be corrected reasonably. This should be your target: to make your data aligned with the linear model assumptions, if this is the model you want to use. All transformations of the data should be governed by this idea. And of course, remember what you have done to transform the data, so you can apply the same transformations to future predictions, and eventually invert the target transformation if you want results on the original scale. [later edit]: I added some ideas on discrete variables.
H: What does keras.backend.clip do? I am trying to create a custom loss function and when looking at other examples of loss functions online, I found this example: def loss(y_true, y_pred): # normalize y_pred y_pred /= keras.backend.sum(y_pred, axis=-1, keepdims=True) # clip to prevent NaN's and Inf's y_pred = keras.backend.clip(y_pred, keras.backend.epsilon(), 1 - keras.backend.epsilon()) # calc loss = ... return loss I do not know what "keras.backend.clip" does and any documentation I find on Google (here and here) define clip as a function that does elementwise clipping. I dont know what clip means so "elementwise clipping" means nothing in particular. I assume that it must replace values over a threshold or something like that. Is there a more mathematical definition? AI: Clip, to me, means to set a value to a threshold if it exceeds the threshold. For example, if we clip data at 5, then 0 is 0, 1 is 1, but 6 is 5, and so is anything higher. The word comes from thinking about clipping grass off at a given height. Of course, one can also clip above a threshold - or both.
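To make "elementwise clipping" concrete: clip(x, a, b) simply computes min(max(x, a), b) for every element. A tiny numeric illustration with NumPy, which behaves the same way as the Keras backend function:

import numpy as np

x = np.array([-0.2, 0.0, 0.5, 1.3])
print(np.clip(x, 0.0, 1.0))  # -> [0., 0., 0.5, 1.]
# In the loss above, keras.backend.clip(y_pred, eps, 1 - eps) squeezes every
# predicted probability into [eps, 1 - eps], so later logs or divisions
# never hit exactly 0 or 1 and produce NaN/Inf.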
H: How can I choose the best machine learning algorithms from all kinds of algorithms? When I want to find a model for my data set, I find that there are lots of algorithms that I can use. I know how to minimize selection choices by separating supervised and unsupervised algorithms and the purpose of the problem I am trying to solve. But after that, there are also lots of algorithms to choose from, even in the scikit-learn library that I currently use, and there are lots of algorithms that I don’t know. They might work better in my problem and also there are deep learning algorithms that are stronger than machine learning algorithms. After looking for them, I’ve got tired and a simple project cost me a whole two weeks, but I wasn’t satisfied with the result at the end either. So, what should I do? Do I have to memorize all the algorithms in machine learning libraries, like scikit-learn? Or should I abandon learning machine learning algorithms and start learning deep learning? AI: There is a theoretical result called the "no free lunch theorem" which proves that there is no "best ML algorithm" in general. It's important to understand how an algorithm works in order to have a good intuition about whether it's suitable for a case. Without this one can only attempt different methods randomly by trial and error, it takes more time and more effort. From what you describe it looks like your learning focused on how to use available tools (i.e. run the algorithms). If you want to become really good at data science you also need a good theoretical background. Data Science is very broad, nobody knows everything because it's impossible. My advice is to focus on understanding one thing very well before moving on to the next topic. It's usually better to be an expert in a specific area than to have a shallow knowledge of a bit of everything. there are deep learning algorithms that are stronger than machine learning Technically deep learning methods are also machine learning. Or should I abandon learning machine learning algorithms and start learning deep learning? In my opinion it's better to have a really good understanding of traditional ML methods before moving to DL.
H: Catboost not able to handle a very simple dataset? This is a post from a newbie and so might be a really poor question based on lack of knowledge. Thank you kindly! I'm using Catboost, which seems excellent, to fit a trivial dataset. The results are terrible. If someone could point me in the right direction I'd sure appreciate it. Here is the code in its entirety: import catboost as cb import numpy as np import pandas as pd from sklearn.metrics import r2_score from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler # Some number of samples, not super important samples = 26 # Our target is a simple linear progression (!) yvals = range(samples) y = pd.DataFrame({'y': yvals}) # Our feature is an exact COPY of the target (!) X = pd.DataFrame.from_dict({ 'x0': np.array(yvals) }) # I want to use shuffle = False for reasons beyond the scope of this question X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, shuffle=False) # Two stages to the pipeline pipe = Pipeline([ ('scaler', StandardScaler()), ('regressor', cb.CatBoostRegressor(loss_function="RMSE", verbose=False)) ]) # Here we go pipe.fit(X_train, y_train) # Print results y_hat = pipe.predict(X_test) r2 = r2_score(y_test, y_hat) print('r2:', r2) The output is: r2: -4.256672011036048 I would have expected a perfect fit, or 1.0 for r2. Am I misusing catboost perhaps? Thanks again for any help!!! AI: "Traditional" tree models cannot extrapolate well outside the training data's range, so "I want to use shuffle = False for reasons beyond the scope of this question" actually can't be ignored. If you expect testing/production data to have significantly different values, use a different kind of model. There are tree models that support regressions in their leaves, sometimes called "model-based recursive partitioning", but that is not used as base learners for GBMs. (GBMs like CatBoost can predict outside the range, but not well.)
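To see that the issue is extrapolation rather than a misuse of the library, you can swap the regressor in the very same pipeline for a model that can extrapolate a linear trend; this sketch is meant to be appended to the question's script and reuses the variables defined there:

from sklearn.linear_model import LinearRegression

# same pipeline as above, but with a linear model instead of CatBoost
pipe_lin = Pipeline([
    ('scaler', StandardScaler()),
    ('regressor', LinearRegression())
])
pipe_lin.fit(X_train, y_train)
print('r2:', r2_score(y_test, pipe_lin.predict(X_test)))  # ~1.0 on this toy data

Because shuffle=False puts the largest target values only in the test split, the boosted trees can barely predict outside the range of targets they saw during training (as noted above), while the linear model has no such limit.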
H: Select a random value from a Pandas list column for each row, ensuring that a value doesn't get picked again. I have a pandas DataFrame below
import pandas as pd
data = {
'poc':["a", "b", "c", "d"],
'school':["school1", "school2", "school3", "school4"],
'volunteers':[["sam", "mat", "ali", "mike", "guy", "john"],
["sam", "mat", "ali", "mike"],
["rose", "sam", "mike", "jorge"],
["susan", "jack", "alex", "mat", "mike"]]
}
df = pd.DataFrame.from_dict(data)
I need to create a new column that has a random pick from the volunteers column to select 1 volunteer for each school, ensuring that the same volunteer doesn't get picked twice. So far I have tried
import random
df["random_match"] = [random.choice(x) for x in df["volunteers"]]
but this just gives me a random pick without ensuring it is not repeated. AI: This is a first approach, and even though it is not the best in terms of performance, it does the job:
import numpy as np
import pandas as pd

def urandom(frame):
    ls = list()
    for idx, row in frame.iterrows():
        # keep re-drawing until we hit a volunteer not already used for a previous school;
        # note this assumes each row still has at least one unused volunteer,
        # otherwise the loop would never terminate
        val = np.random.choice(frame.loc[idx,"volunteers"])
        while val in ls:
            val = np.random.choice(frame.loc[idx,"volunteers"])
        ls.append(val)
    return pd.Series(ls)

df.assign(pick = urandom(df))
Outputs: (If you need reproducible code, do not forget to add a random seed.)
H: What Shape Does Naive Bayes make? Decision Trees draw straight lines to partition the feature space. According to the Universal Approximation Theorem, Neural Networks can draw any continuous function. What sort of shape does the Naive Bayes classifier draw? AI: Specifically for Gaussian Naive Bayes, the decision boundaries are ellipsoid-like (more generally, quadratic) surfaces characterized by the means and standard deviations of the class-conditional Gaussian distributions. Image: https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py
H: roc_auc_score from sklearn gives an error when the test label vector contains only a subset of the full set of classes. I have an imbalanced dataset. Does it make sense to compute the ROC AUC for the classifier I created on a holdout set? Here's a very artificial MWE:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(solver="liblinear").fit(X, y)
# Let's assume that X_test = X, y_test is just a vector of 1s.
roc_auc_score([1]*150, clf.predict_proba(X_test), multi_class='ovr')

ValueError: Number of classes in y_true not equal to the number of columns in 'y_score'
AI: The predict_proba method returns a numpy array of shape (n_samples, n_classes), which is (n_samples, 3) here since iris has three classes, with one column of probabilities per class. If you want a binary ROC for class 1, you need to pass only the probability of Y == 1, so:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(solver="liblinear").fit(X, y)
# Let's assume that X_test = X, y_test is just a vector of 1s.
roc_auc_score([1]*150, clf.predict_proba(X_test)[:,1], multi_class='ovr')
Note also that ROC AUC is only defined when y_true contains at least two distinct classes, so an all-ones label vector will still raise an error; make sure the holdout set contains both positive and negative examples.
H: Business days in Pandas: EU, USA, or another? To create a pandas date range I usually do the following: pd.date_range("1991-01-01","1998-01-01",freq="B") where pd is the pandas import, and freq="B" makes the frequency to be "Business Days". In the R Dataset Documentation, the EuStockMarkets is said to have been collected at between 1991 and 1998. pd.date_range("1991-01-01","1998-01-01",freq="B") results in 1828 dates, while the R dataset has 1860. Why is that? What are the differences? I find very strange that R Dataset has collected data from different EU countries, which have different holidays, in just 1 and only 1 date index... Pandas only returns one time series. Is this business days in the USA, the EU(whatever this may mean), or it depends on the system where I run the pandas command? AI: The reason for the difference in the number of days between the two is that the EuStockMarkets dataset does not range from 1-1-1991 to 1-1-1998 but from 1-7-1991 to 14-08-1998 so you are comparing two different time ranges. > time(EuStockMarkets) Time Series: Start = c(1991, 130) End = c(1998, 169) Frequency = 260 Using the actual dates when calculating the number of business days in the range using timeanddate.com gives 1860 when including public holidays. So while the documentation notes that it excludes holidays that doesn't seem to actually be the case. This might be linked to your second point regarding the different holidays at only one index. For pandas.date_range the days returned seem to simply only be the weekdays, i.e. Monday through Friday, which include any holidays. If you want to exclude holidays in your python version you can use any of the existing calendar classes or create a custom one yourself in combination with pandas.bdate_range.
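If you want to exclude specific holidays in pandas rather than rely on the plain weekday-only "B" frequency, a custom business-day calendar is one option; the holiday list below is only an illustrative placeholder, not an actual European exchange calendar:

import pandas as pd

# placeholder holidays -- replace with the relevant exchange's closing days
holidays = pd.to_datetime(["1991-12-25", "1992-01-01"])

# "C" stands for custom business days; weekends and the listed holidays are skipped
idx = pd.bdate_range("1991-07-01", "1998-08-14", freq="C", holidays=holidays)
print(len(idx))

Which calendar is the "right" one depends on the exchange: the four EuStockMarkets indices come from different countries, so a single shared date index necessarily glosses over their differing holidays.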
H: How to freeze certain layers in models obtained from keras.applications. I am currently trying to use transfer learning with ResNet152 obtained from Keras Applications:
tf.keras.applications.ResNet152(
    weights="imagenet",
    input_shape=(400,250,3)
)
I know that to freeze all the layers I need to set the trainable attribute to False, but right now I need to freeze only certain layers. More specifically, I need to unfreeze the last three layers of this model but freeze the rest. So how do I do that? AI: You can freeze all the layers with model.trainable = False and unfreeze the last three layers with:
for layer in model.layers[-3:]:
    layer.trainable = True
model.layers contains an ordered list of all the layers that compose the model.
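Putting it together, here is a hedged sketch of the usual pattern; include_top=False is assumed so the custom input shape is accepted, the classification head and number of classes are placeholders, and changes to trainable only take effect the next time the model is compiled:

import tensorflow as tf

base = tf.keras.applications.ResNet152(weights="imagenet", include_top=False,
                                       input_shape=(400, 250, 3))
base.trainable = False            # freeze everything first
for layer in base.layers[-3:]:    # then unfreeze only the last three layers
    layer.trainable = True

# add your own head and compile AFTER setting the trainable flags
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes is a placeholder
])
model.compile(optimizer="adam", loss="categorical_crossentropy")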
H: How to loop through multiple lists/dict? I have the following code which finds the best value of k parameter in the KNNImputer. Basically it is looping through the list of k_value and for each element, it is fitting the KNNImputer to the model and in the end appending the result to an empty dataframe. lire_model = LinearRegression() k_value = [1,3,5,7,9,11, 13, 15, 17, 19, 21] k_value_results = pd.DataFrame(columns = ['k', 'mse', 'rmse', 'mae', 'r2']) scoring_list = ['neg_mean_squared_error', 'neg_root_mean_squared_error', 'neg_mean_absolute_error', 'r2'] for s in k_value: imputer = KNNImputer(n_neighbors = s) imputer.fit(train_x1_num) train_x2 = pd.DataFrame(imputer.transform(train_x1_num), columns = train_x1_num.columns) test_x2 = pd.DataFrame(imputer.transform(test_x1_num), columns = test_x1_num.columns) enc = ce.CatBoostEncoder() enc.fit(train_x3, train_y) train_x4 = pd.DataFrame(enc.transform(train_x3), columns = train_x3.columns) test_x4 = pd.DataFrame(enc.transform(test_x3), columns = test_x3.columns) base_score = cross_validate(lire_model, train_x4, train_y, cv = 5, scoring = scoring_list, n_jobs = -1) row = { 'k': s, 'mse' : -1 * base_score['test_neg_mean_squared_error'].mean(), 'rmse' : -1 * base_score['test_neg_root_mean_squared_error'].mean(), 'mae' : -1 * base_score['test_neg_mean_absolute_error'].mean(), 'r2' : base_score['test_r2'].mean() } k_value_results = k_value_results.append(row, ignore_index = True) If I have more than 1 list through which I want to loop through and perform the same functionality as above code, how can I do that? For example:- list1 = [a, b, c, d] list2 = [e, f, g] I want to loop through both the lists and for each combination of parameters (total 4*3 =12 combinations) I want the results. Basically I want to GridSearch over multiple lists without using sklearns GridSearchCV function. Any ideas? AI: You can use the ParameterGrid class from scikit-learn for this. This allows you to supply a dictionary where the values are lists with possible values for that specific key. You can iterate over this to get all possible combinations between the specific hyperparameters, see also the examples from the documentation page: from sklearn.model_selection import ParameterGrid param_grid = {'a': [1, 2], 'b': [True, False]} list(ParameterGrid(param_grid)) # [{'a': 1, 'b': True}, {'a': 1, 'b': False}, # {'a': 2, 'b': True}, {'a': 2, 'b': False}]
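Applied back to your imputer loop, the grid can cover several KNNImputer settings at once; the particular parameter values below are only examples:

from sklearn.impute import KNNImputer
from sklearn.model_selection import ParameterGrid

param_grid = {"n_neighbors": [1, 3, 5, 7], "weights": ["uniform", "distance"]}

for params in ParameterGrid(param_grid):  # 4 * 2 = 8 combinations
    imputer = KNNImputer(**params)        # unpack the combination into the estimator
    # ...then fit/transform, encode and cross-validate exactly as in your loop,
    # storing {**params, 'mse': ..., 'rmse': ...} in the results frame
    print(params)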
H: A strange decimal number on top of my plot of a pandas series I am using the EU stock market dataset, and applied a box-cox transformation to each time series using stats.boxcox from the scipy module. The resulting data frame is df_box_cox. df_box_cox.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 1860 entries, 0 to 1859 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 DAX 1860 non-null float64 1 SMI 1860 non-null float64 2 CAC 1860 non-null float64 3 FTSE 1860 non-null float64 dtypes: float64(4) memory usage: 58.2 KB When I run df_box_cox["DAX"].plot(), I get: Why do I get that pesky number on the top left? AI: This is more of a programming question than a data science question and would therefore be better suited to the stackoverflow stackexchange. The numbers you are seeing are related to ticklabel formats used by the matplotlib library. Changing the axis settings should allow you to get rid of the numbers on the top left of the plot: ax.ticklabel_format(useOffset=False, style='plain') # or plt.ticklabel_format(useOffset=False, style='plain') For more in-depth information on the offset and style see this stackoverflow answer or the matplotlib documentation.
H: How to convert a duration column from "1 hr. 17 min" to 77 min in pandas? I am trying to convert all the hr., min, and sec values into just minutes. For example, 1 hr. 17 min should become 77 min, and 34 sec should become 0.56 min, not 0.34 min. So I have used this code:
merged['duration'] = merged['duration'].str.replace(" hr.", '*60').str.replace(' ','+').str.replace(' min','*1').str.replace(' ','+').str.replace(' sec','*0.01').apply(eval)
From: How to Convert a Pandas Column having duration details in string format (ex:1hr 50m) into a integer column with value in minutes
But it gives me this error:
unsupported operand type(s) for +: 'int' and 'builtin_function_or_method'
I am not sure how to go about it. Please help! AI: You are replacing the spaces with a + character before replacing ' min' and ' sec'. Therefore the minutes and seconds do not get replaced, and the resulting string cannot be evaluated using eval. Reordering the operations should solve the error:
df['duration'] = df['duration'].str.replace(" hr.", '*60').str.replace(' min','*1').str.replace(' sec','*0.01').str.replace(' ','+').apply(eval)
You mention that you want 34 seconds converted to 0.56, but the '*0.01' replacement above gives 0.34; if you want everything in minutes, seconds should be divided by 60, since 34 seconds is 34/60 ≈ 0.57 minutes, not 0.34. If you also want to change this, the following should work:
df['duration'] = df['duration'].str.replace(" hr.", '*60').str.replace(' min','*1').str.replace(' sec','*(1/60)').str.replace(' ','+').apply(eval)
H: Imbalanced Dataset: Train/test split before and after SMOTE This question is similar but different from my previous one. I have a binary classification task related to customer churn for a bank. The dataset contains 10,000 instances and 11 features. The target variable is imbalanced (80% remained as customers (0), 20% churned (1)). Initially, I followed this approach: I first split the dataset into training and test sets, while preserving the 80-20 ratio for the target variable in both sets. I keep 8,000 instances in the training set and 2,000 in the test set. After pre-processing, I address the class imbalance in the training set with SMOTEENN: from imblearn.combine import SMOTEENN smt = SMOTEENN(random_state=random_state) X_train, y_train = smt.fit_sample(X_train, y_train) Now, my training set has 4774 1s and 4182 0s. I know proceed to building ML models. I use scikit-learn’s GridSearchCV with cv = KFold(n_splits=5, shuffle=True, random_state=random_state) and optimise based on the recall score. For instance, for a Random Forest Classifier: cv = KFold(n_splits=5, shuffle=True, random_state=random_state) scoring_metric='recall' rf = RandomForestClassifier(random_state=random_state) param_grid = { 'n_estimators': [100], 'criterion': ['entropy', 'gini'], 'bootstrap': [True, False], 'max_depth': [6], 'max_features': ['auto', 'sqrt'], 'min_samples_leaf': [2, 3, 5], 'min_samples_split': [2, 3, 5] } rf_clf = GridSearchCV(estimator=rf, param_grid=param_grid, scoring=scoring_metric, cv=cv, verbose=False, n_jobs=-1) best_rf_clf = rf_clf.fit(X_train, y_train) y_pred = cross_val_predict(best_rf_clf.best_estimator_,X_train, y_train,cv=cv) print('Train: ', np.round(recall_score(y_train, y_pred), 3)) y_pred = best_rf_clf.best_estimator_.fit(X_train, y_train).predict(X_test) print(' Test: ', np.round(recall_score(y_test, y_pred), 3)) My recall CV score on the training set is 0.902, while the score on the test is 0.794. However, when apply SMOTEENN on the full dataset and then split into training and test sets, I get a recall CV score on the training set equal to 0.913, and 0.898 for the test set. How can we explain this difference between the two approaches? What causes this gap between the two sets in the first approach (split then SMOTEENN) compared to the second one (SMOTEENN and then split)? My guess is that the second approach leads to a more balanced test set (1220 1s, 1036 0s), compared to the first one (1607 1s, 393 0s). Thanks! AI: Essentially applying SMOTE makes the job easier for the model: SMOTE generates artificial instances which tend to have the same properties as each other, so it's easier for the model to capture their patterns. However these instances are rarely a good representative sample for the minority class, so there's a higher risk that the model overfits. Of course if SMOTE is also applied to the test set, the model appears to perform better. This is the equivalent of changing a difficult question to an easier one in order to answer the question better. Resampling methods are rarely a good solution to the imbalance problem. It's important to understand that imbalanced data is a problem only because the minority class in the training set is not representative enough and/or the features are not good enough indicators for the label. The ideal scenario is to solve these two problems, then the model can perform perfectly well despite the imbalance.
H: Cross validation and hyperparameter tuning workflow After reading a lot of articles on cross validation, I am now confused. I know that cross validation is used to get an estimate of model performance and is used to select the best algorithm out of multiple ones. After selecting the best model (by checking the mean and standard deviation of CV scores) we train that model on the whole of the dataset (train and validation set) and use it for real world predictions. Let's say out of the 3 algorithms I used in cross validation, I select the best one. What I don't get is in this process, when do we tune the hyperparameters? Do we use Nested Cross validation to tune the hyperparameters during the cross validation process or do we first select the best performing algorithm via cross validation and then tune the hyperparameter for only that algorithm? PS: I am splitting my dataset into train, test and valid where I use train and test sets for building and testing my model (this includes all the preprocessing steps and nested cv) and use the valid set to test my final model. Edit 1 Below are two ways to perform Nested cross validation. Which one is the correct way aka which method does not lead to data leakage/overfitting/bias? Method 1: Perform Nested CV for multiple algorithms and their hyperparameters simultaneously:- from sklearn.model_selection import cross_val_score, train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import mean_squared_error from sklearn.ensemble import RandomForestRegressor from sklearn.svm import SVR from sklearn.datasets import make_regression import numpy as np import pandas as pd # create some regression data X, y = make_regression(n_samples=1000, n_features=10) # setup models, variables results = pd.DataFrame(columns = ['model', 'params', 'mean_mse', 'std_mse']) models = [SVR(), RandomForestRegressor(random_state = 69)] params = [{'C':[0.01,0.05]},{'n_estimators':[10,100]}] # split into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.3) # estimate performance of hyperparameter tuning and model algorithm pipeline for idx, model in enumerate(models): # perform hyperparameter tuning clf = GridSearchCV(model, params[idx], cv = 3, scoring='neg_mean_squared_error') clf.fit(X_train, y_train) # this performs a nested CV in SKLearn score = cross_val_score(clf, X_train, y_train, cv = 3, scoring='neg_mean_squared_error') row = {'model' : model, 'params' : clf.best_params_, 'mean_mse' : score.mean(), 'std_mse' : score.std()} # append the results in the empty dataframe results = results.append(row, ignore_index = True) Method 2: Perform Nested CV for single algorithm and it's hyperparameters:- from sklearn.datasets import load_iris from matplotlib import pyplot as plt from sklearn.svm import SVC from sklearn.model_selection import GridSearchCV, cross_val_score, KFold, train_test_split import numpy as np # Load the dataset iris = load_iris() X_iris = iris.data y_iris = iris.target train_x, test_x, train_y ,test_y = train_test_split(X_iris, y_iris, test_size = 0.2, random_state = 69) # Set up possible values of parameters to optimize over p_grid = {"C": [1, 10], "gamma": [0.01, 0.1]} # We will use a Support Vector Classifier with "rbf" kernel svm = SVC(kernel="rbf") # Choose cross-validation techniques for the inner and outer loops, # independently of the dataset. # E.g "GroupKFold", "LeaveOneOut", "LeaveOneGroupOut", etc. 
inner_cv = KFold(n_splits=4, shuffle=True, random_state=69)
outer_cv = KFold(n_splits=4, shuffle=True, random_state=69)

# Nested CV with parameter optimization
clf = GridSearchCV(estimator=svm, param_grid=p_grid, cv=inner_cv)
clf.fit(train_x, train_y)
nested_score = cross_val_score(clf, X=X_iris, y=y_iris, cv=outer_cv)
nested_scores_mean = nested_score.mean()
nested_scores_std = nested_score.std()
AI: Suppose you have two models to choose from, $m_1$ and $m_2$. For a given problem, there is a best set of hyperparameters for each of the two models (with which they perform as well as possible); call the tuned models $m_1^*$, $m_2^*$. Now say $Acc(m_1^*) > Acc(m_2^*)$, i.e. model 1 is better than model 2. Now suppose you have tuned model 2 (or you happen to have "okay" hyperparameters by coincidence) but you use inferior hyperparameters for model 1. You could end up finding $Acc(m_1^s) < Acc(m_2^*)$, where $m_1^s$ denotes model 1 with suboptimal hyperparameters (i.e. "choose model 2"), while the true best choice would be "use tuned model 1". Thus, in order to make an informed decision, you would need to "tune" both models and compare the performance of the tuned models with their best hyperparameters. What I often do is define test and train data, tune the candidate models using cross-validation (on the train data only!), and assess the performance of the tuned models on the test set. In addition, you may want to do feature engineering / feature generation. This should be done before tuning the models, since different data may lead to different optimal hyperparameters, e.g. in the case of a random forest, where the number of split candidates per split can be contingent on the number and quality of features.
H: Logarithmic scale for a learning curve I'm plotting the learning curve with Python with the following code: import matplotlib.pyplot as plt import seaborn as sns import csv import pandas as pd sns.set(style='darkgrid') # Increase the plot size and font size. sns.set(font_scale=1.5) plt.rcParams["figure.figsize"] = (12,6) plt.plot(lst, 'r') plt.legend(["Validation Loss"]) # Label the plot. plt.title("RNN deltat") plt.xlabel("Epoch") plt.ylabel("Loss") The curve looks like this: The lecturer said better try it on a logarithmic scale. Can you please help to apply the logarithm here? AI: This is more of a programming question than a data science question and would therefore be better suited to the stackoverflow stackexchange. To change the y-axis from a linear scale to a logarithmic scale you can use matplotlib.pyplot.yscale function using "log" as the argument: import matplotlib.pyplot as plt plt.yscale("log")
H: Is reinforcement learning analogous to stochastic gradient descent?
Not in a strict mathematical formulation sense, but are there any key overlapping principles between the two optimisation approaches? For example, how does $$\{x_i, y_i, \mathrm{grad}_i \}$$ (for feature, label and respective gradient from a training example in SGD) differ from $$\{s_i, a_i, r_i\}$$ for a state, action and reward example in RL? Given that $x_i$ can be viewed as a state, label $y_i$ as a reward (e.g. good/bad label) and $\mathrm{grad}_i$ as an action.
I appreciate that reinforcement learning is (a) learning what to do and how to map situations to actions as well as (b) learning from interaction, and how in such a setting it is impractical to acquire "supervised" training examples for all possible sets of actions/rewards. But in essence, I would like to see whether there is a clear differentiation between the two abstractions above.
AI: From your question I assume that you are familiar with at least the basic concepts in RL, so I won't dive into too many details. RL in general is not SGD. In RL you will encounter various optimization schemes used to optimize a utility function. Two of the most popular families of methods for optimizing a utility function (in the RL MDP formulation) are Value methods and Policy Gradient methods.
Value (or Critic) Methods
Model-based value methods use Dynamic Programming (DP) to optimize a utility function. In simple terms, once optimal value functions that satisfy the Bellman Optimality Equations have been found, they can be used to obtain optimal policies.
Model-free value methods use a form of Temporal Difference (TD) Learning to estimate the value function. TDs are a combination of DP and Monte Carlo (MC) methods. Like DP, TD methods update estimates based in part on other learned estimates, without waiting for a final outcome (they bootstrap). Like MC methods, TD methods can learn directly from raw experience without a model of the task’s dynamics. A very common TD algorithm is Q-learning. It has been proved that, under the assumption of infinite visitations of every state-action pair, Q-learning converges to the optimal value function.
Policy Gradient Methods (or Actor Methods)
PG methods assume a parametrized policy function and use gradient ascent to optimize its parameters in order to maximize expected return:
$$\theta_{h+1}=\theta_{h}+\left.\alpha_{h} \nabla_{\theta} J\right|_{\theta=\theta_{h}}$$
In this case you could possibly state that RL is following the steepest ascent on the expected return.
I post some great references in case you would like to delve into the details:
Reinforcement Learning: An Introduction, 2nd edition
Policy Gradient Methods
Policy Optimization
Policy Gradient Algorithms
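To make the analogy concrete, here is a tiny toy sketch of my own (not taken from the references above) of a REINFORCE-style policy-gradient update for a multi-armed bandit. Structurally it looks just like an SGD step, except that the "gradient signal" is the sampled reward times the gradient of the log-policy rather than the gradient of a supervised loss:

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden reward means of 3 arms
theta = np.zeros(3)                      # policy parameters (one logit per arm)
alpha = 0.1                              # step size, just like in SGD

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(2000):
    pi = softmax(theta)                      # current stochastic policy
    a = rng.choice(3, p=pi)                  # sample an action (exploration)
    r = rng.normal(true_means[a], 0.1)       # observe a noisy reward
    grad_log_pi = -pi                        # d log pi(a) / d theta = onehot(a) - pi
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi         # "SGD-like" ascent step on expected return

print(softmax(theta))  # most probability mass should end up on the best arm (index 2)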
H: Points to remember when embarking on an organization-wide turn to AI solutions
In our organization, we are currently in the phase of building up the team and skills to automate and implement AI-based solutions, so we are very early in this AI journey. Right now, we are also working on identifying some of the problems that we face in our business. For example, we have 8 customer segments, but only 2 of them bring in a lot of revenue; all the rest perform poorly. We would like to use data analytics to find out why and to identify the factors causing this issue.
While all this seems doable, I would like to seek your suggestions on how we can make business users/leaders clear on what AI can and cannot do, because I feel it is very possible for the business team to be carried away by the hype around AI/ML. So, as a data person, I think it is my responsibility to clarify what can and cannot be done using AI, why we can't rely on AI results 100 percent, and why there should always be some caution in trusting AI output.
Any books, papers, case studies or articles which cover this information/points to consider when embarking on an organization-wide AI initiative would help me. One such article is here.
AI: Some important points I can bring up are:

AI/ML learn (stable) patterns from what they are given. But, if forced, they will find (irrelevant) patterns even in noise.
They cannot learn what they did not see. So generalisation is actually possible only for variations (allowed by the underlying architecture) of what was already seen.
AI/ML may discriminate and not be fair, where fairness is required (this can happen for various reasons).
They are not always interpretable, so one cannot know why one gets this or that result.
More importantly, one cannot verify (in a straightforward manner at least, before the fact) whether any of the previous issues happens.

References:
Overfitting
Generalization error
Concept drift
Ethics, transparency and accountability of AI
Explainable Deep Learning: A Field Guide for the Uninitiated
H: Generate fake model predictions according to desired precision/recall values Lets assume I generate a random set of target labels for a binary classification with N elements and a certain frequency of the positive class (1), e.g. 10%: targets = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, ...] I now want to generate fake classification predictions such that the overall predictions roughly satisfy desired precision and recall values e.g. precision = 0.8 and recall = 0.2. I don't need exact values but like to get close such as 0.78 and 0.25. predictions = [ 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, ...] Is there a way to do this efficiently in python/numpy, ideally so that I can produce multiple sets of predictions by repeating the generation process with different random seeds? AI: Sure, you can just calculate how many instances are needed for each classification status based on the constraints. For example: gold positive = 10%, i.e. $TP+FN=0.1 \times N$ recall = 0.2, i.e. $TP/(TP+FN)=0.2$, so $FN=4\times TP$ From these two equations we get $TP=0.02 \times N$ and $FN=0.08 \times N$ precision = 0.8, i.e. $TP/(TP+FP)=0.8$ so $FP=0.25\times TP = 0.005 \times N$ Finally $TP+FP+FN+TN=N$ so $TN=N-(TP+FP+FN)=0.895 \times N$ Let's say $N=1000$, we want to have 20 TP, 80 FN, 5 FP and 895 TN. You should be able to write a code which does these calculations and then generates a list of predictions which match these constraints. Note that technically this is not a model (not even a fake one) because there are no features and no real classification happening.
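To make this concrete, here is a small numpy sketch (the rounding of the counts and the random placement of the error cells are my own choices; any arrangement that matches the counts works): compute the implied TP/FP counts from the constraints, then flip exactly that many positions.

import numpy as np

def fake_predictions(targets, precision, recall, seed=0):
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    pos_idx = np.flatnonzero(targets == 1)
    neg_idx = np.flatnonzero(targets == 0)

    # counts implied by the constraints
    tp = int(round(recall * len(pos_idx)))             # recall = TP / (TP + FN)
    fp = int(round(tp * (1 - precision) / precision))  # precision = TP / (TP + FP)
    fp = min(fp, len(neg_idx))

    preds = np.zeros_like(targets)
    preds[rng.choice(pos_idx, size=tp, replace=False)] = 1  # true positives
    preds[rng.choice(neg_idx, size=fp, replace=False)] = 1  # false positives
    return preds

# example: ~10% positives, target precision ~0.8 and recall ~0.2
rng = np.random.default_rng(42)
targets = (rng.random(1000) < 0.10).astype(int)
preds = fake_predictions(targets, precision=0.8, recall=0.2, seed=1)

tp = ((preds == 1) & (targets == 1)).sum()
print("precision:", tp / preds.sum(), "recall:", tp / targets.sum())

Repeating the call with different seeds gives multiple prediction sets that all roughly satisfy the same precision/recall.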
H: Picking the right NLP model to tag words from a dataset
As the title suggests, I am posting here in the hope someone could direct me towards NLP models for tagging words. To be more concrete, here is what I wish to do.
I would like to build a flashcard application using an NLP model that would tag/categorize words. So let us imagine I have a CSV file with items made of one question (in English) and one answer (in French):

+-------------+-------------+
| plane       | avion       |
+-------------+-------------+
| chopsticks  | baguettes   |
+-------------+-------------+
| airport     | aéroport    |
+-------------+-------------+

The idea is that the learners would pick a contextual deck (in this example, a deck related to travelling with planes). That deck would be generated by a tag "airport" made by the machine learning algorithm. And thus, are there any good models I should look at?
Edit: After much research, I came across NLU which meets many of the requirements I have described above. If you are interested, please have a look at these links: What is NLP technique to generalize manually created rules in text? and NLP algorithms for categorizing a list of words with specific topics, as well as this repo: Probase-Concept
AI: For generating lists of words related to the same topic, my first thought would be to take a large corpus of (monolingual) text, apply topic modelling and then collect random top words by topic.
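As a rough sketch of that idea (using scikit-learn's LDA as one possible topic-modelling implementation, and a handful of made-up sentences standing in for the large corpus), the top words of each learned topic can then serve as candidate tags/decks:

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# toy corpus standing in for a large monolingual text collection
corpus = [
    "the plane landed at the airport after a long flight",
    "check in at the airport gate before boarding the plane",
    "use chopsticks to eat noodles at the restaurant",
    "the restaurant serves soup and noodles with chopsticks",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

words = vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-5:][::-1]]  # 5 highest-weight words per topic
    print(f"topic {topic_idx}: {top}")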
H: Text clustering model on small dataset
Is there any way to run a clustering model on a small dataset with 290 text records (minimum length is 100 characters)?
AI: Certainly. I demonstrated clustering techniques using 52 records (playing cards and their features). Unlike classification algorithms, clustering will work with the data available. The question to ask yourself is whether your data has a sufficient number of features that enable records to be both clusterable and separable.
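A minimal sketch of one way to do this on a small text collection (TF-IDF plus k-means is just one reasonable baseline; the number of clusters and the toy documents below are placeholders for your 290 records):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

# `texts` stands in for the 290 records (each at least 100 characters long)
texts = [
    "invoice payment overdue please settle the outstanding balance soon",
    "your package has shipped and will arrive within three business days",
    "reminder your subscription renews automatically at the end of the month",
    "the delivery was delayed due to weather conditions at the local depot",
] * 70  # ~280 rows just to make the example runnable

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(texts)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# silhouette gives a rough idea of how separable the resulting clusters are
print("silhouette:", silhouette_score(X, labels))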
H: Clustering with highly separable features
I noticed that a particular column in my dataset is highly separable: it splits the data perfectly into 5 distinct classes (ordered, so class 2 means better than class 1). I would like to study the underlying structure of this same dataset using a clustering model. Should I include this column as a variable for clustering despite knowing that this feature is highly separable? Would this feature create any bias or affect the results of the clustering model? All of this is under the assumption that I will be using a k-means algorithm.
AI: Would this feature create any bias or affect the results for the clustering model?
This is all you need to think about while performing clustering. You can evaluate your clustering algorithm to assess whether it is performing well or not, and this question has lots of resources that will show you how to evaluate clustering algorithms. In this way, you will be able to evaluate your clustering algorithm and also analyze the effect of individual features on it.
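One simple way to carry out that comparison (a sketch with synthetic data; silhouette is only one of several internal validation metrics you could use) is to fit k-means twice, with and without the suspect column, and compare the scores:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# synthetic data: some generic features plus one highly separable column
X_other, y = make_blobs(n_samples=500, centers=5, n_features=4, random_state=0)
separable_col = y.reshape(-1, 1).astype(float)   # stands in for the "class 1..5" column
X_full = np.hstack([X_other, separable_col])

def kmeans_silhouette(X, k=5):
    X = StandardScaler().fit_transform(X)        # k-means is scale sensitive
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

print("without the separable column:", kmeans_silhouette(X_other))
print("with the separable column:   ", kmeans_silhouette(X_full))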
H: Correct Machine Learning approach for prediction using multiple timelines
I am wondering what would be the correct ML approach in order to predict the upcoming value of a time series based on the previous behaviour of various time series over the same period. I have a dataset in the form of:

TS name    Day1  Day2  ...  Day50  Target-Day51
TS 1       5     13    ...  16     12
TS 2       8     18    ...  9      16
...        12    2     ...  13     4
TS 4000    3     7     ...  4      10

Imagine that a new row will come in the following form and I want to predict the target day:

TS name    Day1  Day2  ...  Day50  Target-Day51
TS 4001    3     22    ...  48     XX

Is this a time-series approach? A regression one? A multivariate time series one? Can you suggest some algorithms which could work here?
AI: There is a nice overview of possible techniques/models in the CRAN Task View for R packages: https://cran.r-project.org/web/views/TimeSeries.html
Especially have a look at the section "Multivariate Time Series Models" (e.g. Vector autoregressive (VAR) models). However, other approaches could be interesting for you as well. In case you have a sufficient amount of data, you could also look at neural nets, e.g. with LSTM layers. You can find an instructive example here: https://keras.io/examples/timeseries/timeseries_weather_forecasting/
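In addition to the time-series models linked above, one baseline that matches the tabular layout of the question (this is my own suggestion, not something from the linked resources) is to simply treat Day1..Day50 as features and Target-Day51 as the label, i.e. ordinary supervised regression across series:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# toy stand-in for the 4000 x 50 matrix of daily values and the day-51 target
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 50)).cumsum(axis=1)        # 4000 series, 50 days each
y = X[:, -1] + rng.normal(scale=0.5, size=4000)       # "day 51" depends on day 50

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out series:", mean_absolute_error(y_test, model.predict(X_test)))

# predicting day 51 for a new series such as "TS 4001"
new_series = rng.normal(size=(1, 50)).cumsum(axis=1)
print("predicted target day:", model.predict(new_series)[0])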
H: How to preprocess heavy MRI images?
I have a large MRI dataset for an image segmentation task that cannot fit directly into memory on Colab; you can access the data with the link I put at the end. They are brain MRI images:
484 training images, each with a shape of (240, 240, 155, 4); these 4 numbers are the height, width, number of layers and sequences respectively.
484 labels, each with a shape of (240, 240, 155).
How would you preprocess those images before training? Below are the steps that I tried, but they didn't work:
Load and read the image (I used nibabel).
Convert the images' type from float64 to float32, and the labels' type to uint8.
Remove the very first and last layers because they don't contain useful information.
Stack/add each of them into an array with a for loop.
What else do you think I can do to deal with this problem?
Datalink: https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2 (task 1 - Brain Tumour)
Please tell me if you need more information.
AI: As you cannot read the whole dataset into memory at once, you should read and preprocess the images batch-wise while training the model.
You can write your preprocessing pipeline in a data loader and iterate through this data loader while training the model. During each iteration, your data loader will fetch a single batch of data, and you can write your custom pipeline in the data loader to produce this single batch. Treat this as a generator and iterate through it in the training loop, and you will get batches at run time (i.e. a single batch is read into memory at a time).
The following links give a good example of creating a custom data loader in PyTorch -
https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
You have similar functionality in TensorFlow using input data pipelines -
https://www.tensorflow.org/guide/data
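A minimal sketch of such a PyTorch dataset for this kind of data (the file-listing convention and the choice of dropped slices are assumptions based on the steps listed in the question; adapt the paths to how the Task 1 data is actually laid out on disk):

import nibabel as nib
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class BrainMRIDataset(Dataset):
    def __init__(self, image_paths, label_paths):
        # only the file paths are kept in memory, not the volumes themselves
        self.image_paths = image_paths
        self.label_paths = label_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = nib.load(self.image_paths[idx]).get_fdata()   # (240, 240, 155, 4)
        label = nib.load(self.label_paths[idx]).get_fdata()   # (240, 240, 155)

        image = image[:, :, 1:-1, :].astype(np.float32)       # drop first/last slices
        label = label[:, :, 1:-1].astype(np.uint8)

        # channels-first tensors, as most segmentation models expect
        image = torch.from_numpy(image).permute(3, 0, 1, 2)
        label = torch.from_numpy(label).long()
        return image, label

# image_paths / label_paths would be lists of the NIfTI files from the download
# loader = DataLoader(BrainMRIDataset(image_paths, label_paths),
#                     batch_size=1, shuffle=True, num_workers=2)
# for images, labels in loader:
#     ...  # one batch is read from disk at a time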
H: K-Fold cross validating with random forest - how to correctly fit model to every fold? So I have created K-Folds from my data using this code: X = rfedata.drop(['target'],axis=1) Y = rfedata['target'] kf = KFold(n_splits=10) KFold(n_splits=10, random_state=None, shuffle=False) for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X.take(list(train_index),axis=0), X.take(list(test_index),axis=0) y_train, y_test = Y.take(list(train_index),axis=0), Y.take(list(test_index),axis=0) Now I would like to train my model. By doing the below, does the model get trained on every fold, or is it just 1 fold? If not, would you have an idea of how to train 1 model on every fold? model9 = RandomForestRegressor() model9.fit(X_train, y_train) AI: As mentioned in the comments by @Oxbowerce, make sure to include fit inside the loop and it will train your model again. You can also initialize the object again as follows for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X.take(list(train_index),axis=0), X.take(list(test_index),axis=0) y_train, y_test = Y.take(list(train_index),axis=0), Y.take(list(test_index),axis=0) # Train the model for each fold model9 = RandomForestRegressor() trained_model = model9.fit(X_train, y_train) # Evaluate the model for each fold # Save the trained model # And so on!
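If the goal is also to keep track of how each fold performs, a small extension of that loop (reusing X and Y as defined in the question, with mean squared error as an arbitrary example metric) could collect one trained model and one score per fold:

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

kf = KFold(n_splits=10)
fold_models, fold_scores = [], []

for train_index, test_index in kf.split(X):
    X_train, X_test = X.take(list(train_index), axis=0), X.take(list(test_index), axis=0)
    y_train, y_test = Y.take(list(train_index), axis=0), Y.take(list(test_index), axis=0)

    model = RandomForestRegressor()
    model.fit(X_train, y_train)               # a fresh model is fitted on every fold

    fold_models.append(model)
    fold_scores.append(mean_squared_error(y_test, model.predict(X_test)))

print("MSE per fold:", fold_scores)
print("mean MSE:", sum(fold_scores) / len(fold_scores))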
H: Increase accuracy through "overfitting" multiple models?
I am currently trying to create a model to classify 5 specific classes from the COCO dataset. I am using the object detection app from TensorFlow. My question is: will it be better if I
- train one model to detect all classes at once, or
- train one model per class that does really well at finding the presence of an item of its given class, and then iterate over each model?
Thanks in advance! :)
AI: Considering the computation vs. accuracy tradeoff - it will be better to train one model to detect all classes at once because there is a high inter-relation among these tasks. The whole object detection network architecture will remain the same and you just need to change the last layers to make the network detect all 5 classes. This approach will also let your model train faster, and you only need to train a single model.
Considering only accuracy - training one model per class might help increase accuracy, but it totally depends on the dataset and on the type and number of classes. You may also end up with the same accuracy as the first model. Lastly, running 5 models will be 5 times more expensive.
H: Drop or impute the missing values?
I am working with a dataset having 45k rows and I was a bit confused on whether to drop the missing values or impute them.
Column-wise missing value distribution:
As per this answer (https://stackoverflow.com/a/28199556/12298398), I calculated the number of rows containing missing values

>>> np.count_nonzero(df.isnull().values.ravel())
2057

But now I am a bit confused on whether I should drop these rows containing missing values, since dropping them will cause a loss of data, or impute those columns which have more than 500 missing values.
Let me know your thoughts on the same, thank you.
AI: In most cases, dropping data only makes sense when you have a large number of nan values. For example, if you have a feature with 98% nan values, it is not going to be of much use to any algorithm. Also, imputing that feature is not going to work as you don't have much data to go on. But if there is a reasonable number of nan values, then the best option is to try to impute them. There are 2 ways you can impute nan values:
1. Univariate Imputation: You use the feature itself that has nan values to impute the nan values. Techniques include mean/median/mode imputation, although it is advised not to use these techniques as they distort the distribution of the feature. Other techniques might include creating a new feature to capture the missingness of that feature. You should Google this topic as there are literally hundreds of articles and blogs.
2. Multivariate Imputation: As the name suggests, you use multiple columns to impute nan values in a specific feature/column. This method is the most preferred as it usually gives better results than univariate imputation. Some of the most used techniques are KNNImputer and IterativeImputer. Again, Google is your best friend!
Bottom line being, only drop nan values when your feature has a majority of its values as nan. If not, it's usually better to impute.
Cheers!
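A small sketch of the multivariate option in scikit-learn (the toy frame below is only there to make the snippet runnable; KNNImputer and IterativeImputer are shown side by side):

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (required to use IterativeImputer)
from sklearn.impute import KNNImputer, IterativeImputer

df = pd.DataFrame({
    "age":    [25, 32, np.nan, 41, 38, np.nan],
    "income": [40_000, np.nan, 52_000, 61_000, np.nan, 45_000],
    "score":  [0.7, 0.6, 0.9, np.nan, 0.8, 0.5],
})

# k-nearest-neighbours imputation: missing cells are filled from similar rows
knn_imputed = pd.DataFrame(
    KNNImputer(n_neighbors=2).fit_transform(df), columns=df.columns
)

# iterative (model-based) imputation: each feature is regressed on the others
iter_imputed = pd.DataFrame(
    IterativeImputer(random_state=0).fit_transform(df), columns=df.columns
)

print(knn_imputed.round(1))
print(iter_imputed.round(1))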
H: Softmax in Sentiment analysis
I am going to do Sentiment Analysis over some tweet texts. So, in summary, we have three classes: Positive, Neutral, Negative. If I apply Softmax in the last layer, I will have the probability of each class for each piece of text. We know that with Softmax:
P(pos) + P(neu) + P(neg) = 1
My question: suppose that we have a piece of text with a Positive label. So, do we have to have these probabilities in this order:
P(pos) > P(neu) > P(neg)
What does it mean when we have them in this order:
P(pos) > P(neg) > P(neu)
Can we conclude anything from this? For example, can we say with confidence that the label is Positive, as before?
AI: If you have a text with a positive label and your model thinks it is positive, then the positive probability your model outputs will be the largest. If you ask your model which is the second most likely label this text sample belongs to, your model's answer is the class that has the second largest probability in the output, and so on. In summary, your model ranks the classes from most likely to least likely for your sample. So the order of the probabilities depends on your model's belief, such that the most likely class has the largest probability and the least likely class has the smallest probability.
My question: suppose that we have a piece of text with a Positive label. So, do we have to have these probabilities in this order: P(pos) > P(neu) > P(neg)
Not exactly, it depends on your model's belief, which in turn depends on how well your data expresses the idea of positive, neutral and negative. But usually, when logistic regression is used to classify the three classes positive, neutral and negative, people set thresholds for positive, neutral and negative on the probability range, for example: > 0.7 is positive, [0.4, 0.7] is neutral and the rest is negative. By doing this, we implicitly assume that the probabilities do indeed have the order you describe. This is because we assume that there is an order between positive, neutral and negative, such that neutral lies between positive and negative. But if we are dealing with another problem, for example classifying dog, cat and fish, then I don't think we can assume such an order.
What does it mean when we have them in this order: P(pos) > P(neg) > P(neu)
It means that the model believes the most likely class for your sample is positive, the second most likely is negative, and the least likely is neutral.
Can we conclude anything from this? For example, can we say with confidence that the label is Positive, as before?
In my opinion, the model is confident in its answer; if we choose to believe it, then we can confidently say that the sample's class is positive, as before.
H: Does validation_split in tf.keras.preprocessing.image_dataset_from_directory result in Data Leakage? For a binary image classification problem (CNN using tf.keras). My image data is separated into folders (train, validation, test) each with subfolders for two balanced classes. Borrowing code from this tutorial, I initially loaded my training and validation sets this way: train_ds = tf.keras.preprocessing.image_dataset_from_directory( train_path, validation_split=0.2, subset="training", seed=42, image_size=image_size, batch_size=batch_size, ) val_ds = tf.keras.preprocessing.image_dataset_from_directory( train_path, validation_split=0.2, subset="validation", seed=42, image_size=image_size, batch_size=batch_size, ) Note that I am loading both training and validation from the same folder and then using validation_split (because I wanted to play around before using the real validation set). My model was performing quite well, achieving validation accuracy of~0.95. Then I decided to update my code to load the real validation set: train_ds = image_dataset_from_directory( train_path, seed=42, image_size=image_size, batch_size=batch_size, ) val_ds = image_dataset_from_directory( val_path, seed=42, image_size=image_size, batch_size=batch_size, ) Now my model is performing substantially worse (~0.75 accuracy). I'm trying to understand why. I suspect my initial code was causing some data leakage. Now that I look at it, I can't tell how the second call of image_dataset_from_directory (for val_ds) knows not to load images that were already loaded for the first call (for train_ds) (unless having the same random seed prevents this). I would be certain this is the issue, except for the fact that I pulled this code directly from a keras.io tutorial - surely they wouldn't make such a basic mistake? Main question: Given the way that validation_split and subset interact with image_dataset_from_directory(), is the first version of my code resulting in data leakage? If it should not be resulting in data leakage between training and validation sets, then I will need to consider other possibilities, such as: There are actual differences between images in the train and validation set folders. I could combine and reshuffle them. The order of images in the training folder is such that given my random seed "easier" images were getting pulled for the validation set. AI: A possible issue is that Keras validation_split uses the "last $x$ percent" of data as validation data without shuffling the data. So if your data has a certain stratification, this stratification will affect the validation set. I further understand from the docs that the shuffle argument in .fit() does not shuffle data before assigning the validation data. It shuffles training data before each epoch. As far as I remember I had a similar problem and needed to "manually" shuffle my data before feeding it to the NN in order to avoid problematic bunching of classes in the validation set (defined by validation_split). From the docs: validation_split Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. shuffle Logical (whether to shuffle the training data before each epoch) or string (for "batch"). 
"batch" is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not NULL.
H: I am getting the error in SimpleImputer from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values= np.NaN, strategy='most_frequent') imputer = imputer.fit(cat_vars[:,2:4]) cat_vars[:,2:4] = imputer.transform(cat_vars[:,2:4]) The above is my code for replacing the missing values with the most frequent value in the column index starting from 2 to 3.I am getting the below error. Please suggest why this error is coming. Thanks in advance. TypeError Traceback (most recent call last) <ipython-input-91-48eaa0ca1d43> in <module> 2 from sklearn.impute import SimpleImputer 3 imputer = SimpleImputer(missing_values= np.NaN, strategy='most_frequent') ----> 4 imputer = imputer.fit(cat_vars[:,2:4]) 5 cat_vars[:,2:4] = imputer.transform(cat_vars[:,2:4]) TypeError: 'SimpleImputer' object is not subscriptable AI: Try this: from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values= np.NaN, strategy='most_frequent') imputer = imputer.fit(cat_vars.iloc[:,2:4]) cat_vars.iloc[:,2:4] = imputer.transform(cat_vars.iloc[:,2:4])
H: Distribution of text data
How can I identify whether the training data and test data come from the same distribution or not? I tried with TF-IDF and cosine similarity

from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

train_data=pd.read_csv('train_dataset.csv')
test_data=pd.read_csv('test_dataset.csv')

tfidf1 = TfidfVectorizer().fit_transform(train_data.text)
tfidf2 = TfidfVectorizer().fit_transform(test_data.text)

# compute and print the cosine similarity matrix
cosine_sim = cosine_similarity(tfidf1, tfidf2).flatten()
print(cosine_sim)

By executing this, I got the error
ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 5416 while Y.shape[1] == 11658
AI: Fit the vectorizer on the training set only, and on the test set only transform, reusing that same fitted vectorizer:

vectorizer = TfidfVectorizer()
tfidf1 = vectorizer.fit_transform(train_data.text)
tfidf2 = vectorizer.transform(test_data.text)

This way both matrices share the same vocabulary (and therefore the same number of columns), so the cosine similarity can be computed.
H: Should I add string feature columns? If my dataframe looks like this: user item property_1 property_2 property_3 rating u1 i1 90.2 0 NaN 0 u1 i2 80.2 1 0.90 1 u1 i3 70.2 1 NaN 1 u2 i2 80.2 1 0.90 0 u2 i4 80.4 0 0.10 1 u3 i1 90.2 0 NaN 1 u3 i4 80.4 0 0.10 1 u3 i5 93.9 1 0.33 0 u3 i6 90.9 0 0.55 0 u4 i1 90.2 0 NaN 0 u4 i6 90.9 0 0.55 1 u4 i7 50.2 1 NaN 1 And I want to predict what rating would a user give to an item using these properties, what method should I apply? Something that would look at the user-item pairs. Because I used XGBoost for classification, with property_1, property_2, property_3 as features, I obtained good results, but my model doesn't know that more users rated the same item, does it? That the users and items appear multiple times, even if I have no duplicates. For example, second row and fourth row have the same properties, but different ratings, because the users are different: user item property_1 property_2 property_3 rating u1 i2 80.2 1 0.90 1 u2 i2 80.2 1 0.90 0 I already have a collaborative filtering in a separate model that works well, but it doesn't look at the properties of the item, which is something that I want to use. And if I add item as a feature column I get the error: ValueError: DataFrame.dtypes for data must be int, float, bool or categorical. When categorical type is supplied, DMatrix parameter `enable_categorical` must be set to `True`.item AI: Regarding feature encoding As the error says, strings are not accepted. You need to transform item in a way that can be digested by xgboost (essentially some numerical representation). There seems to be a method to transform to categorical when generating the DMatrix (see the docs, never tried it). However, my first idea would be to "one hot" encode item using sklearn.preprocessing.OneHotEncoder. Another method would be to use pandas.get_dummies. Update: Regarding the general model setup Your model is not well described in the question. My understanding is that your model looks like: $$rating (user,item,...),$$ so you aim at predicting the rating for a given user and item etc. Each user and each item can be thought of as having an own "identity" (happy, grumpy person; quality product, cheap product etc.). This is called a "fixed effect" in econometrics. The canonical approach is to add one dummy (one-hot) for each user, item. In linear models, this introduces a "level shift" (additional intercept term). So a "cheap" product would get a lower rating on average. Or a grumpy person would generally give a lower score (lower intercept) for a given product compared to other people. In tree based models the effect is less clear. However, provided that there is quite a large literature using "fixed effects", I suppose this is a good starting point. You need to distinguish the sources of ratings as good as possible and individual product/user aspects are important and can easily be represented by a "dummy". One open question is how to deal with "out of sample" users. I might think that there are no out of sample items, but I guess that there are out of sample users. If true, you would need to "approximate" the user's identity by socio-economic variables (age, gender, education, preferences, etc.).
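A small sketch of the dummy-encoding step on a frame shaped like the one in the question (whether you keep user, item or both as dummies is a modelling choice; pandas.get_dummies is used here, and sklearn's OneHotEncoder would work the same way):

import pandas as pd
from xgboost import XGBClassifier

df = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u2", "u3"],
    "item": ["i1", "i2", "i2", "i4", "i1"],
    "property_1": [90.2, 80.2, 80.2, 80.4, 90.2],
    "property_2": [0, 1, 1, 0, 0],
    "property_3": [None, 0.90, 0.90, 0.10, None],
    "rating": [0, 1, 0, 1, 1],
})

# one 0/1 column per user and per item ("fixed effects" style dummies)
X = pd.get_dummies(df.drop(columns="rating"), columns=["user", "item"], dtype=int)
y = df["rating"]

model = XGBClassifier(n_estimators=50)
model.fit(X, y)        # xgboost handles the NaN in property_3 natively
print(X.columns.tolist())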
H: Which machine learning algorithms are more suitable for binary classification? We know that there are many different types of classification algorithms. But among the different categories of classification algorithms, which algorithms are suitable for binary classification and which are suitable for more classes, and why? AI: If you want to be highly literal, logistic regression is excellent for binary classes but completely inappropriate for $3+$ classes. No worries: there is multinomial logistic regression, the theory of which mimics binary logistic regression (one might consider logistic regression to be a special case of multinomial logistic regression). Depending on the sophistication of my audience, I might be comfortable referring to "logistic regression" and leaving it to them to realize that I mean "multinomial" logistic regression when there are $3+$ categories and "binary" logistic regression when there are $2$ categories. Random forest can do the binary case but also the multiclass case. Ditto for k-nearest neighbors, support vector machines, and neural networks. I cannot think of a model for binary classes that lacks a multiclass analogue.
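As a quick illustration of that point (toy datasets; scikit-learn's LogisticRegression is used here because it exposes both the binary and the multinomial case through the same interface):

from sklearn.datasets import load_breast_cancer, load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# binary case: 2 classes -> ordinary (binary) logistic regression
X2, y2 = load_breast_cancer(return_X_y=True)
binary_clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("binary accuracy:", cross_val_score(binary_clf, X2, y2, cv=5).mean())

# 3+ classes: the same estimator fits its multinomial analogue
X3, y3 = load_iris(return_X_y=True)
multi_clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("multiclass accuracy:", cross_val_score(multi_clf, X3, y3, cv=5).mean())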
H: Preprocessing for the final model to be deployed Typically for a ML workflow, we import the data (X and y), split the X and y into train, valid and test, preprocess the data for train, valid and test(scale, encode, impute nan values etc), perform HP tuning and after getting the best model with the best HP, we fit the final model to the whole dataset (i.e. X and y). Now the issue here is that X and y are not preprocessed as only the train, valid and test are preprocessed. So when fitting the final model on X and y, we'll be getting an error as we haven't encoded (and performed other preprocessing steps) X and y. How are we then supposed to train the final model on the whole dataset? Do we preprocess X and y before fitting the final model? And if so won't it lead to data leakage/ overfitting? Any help will be much appreciated! AI: In the new experiment the full data is the training set. There's no test set or validation set. How are we then supposed to train the final model on the whole dataset? Do we preprocess X and y before fitting the final model? Yes, and it's important to apply the exact same preprocessing method as was done on the original training set, now using the full data as training set. Any difference would invalidate the performance measured in the first experiment. And if so won't it lead to data leakage/ overfitting? The preprocessing steps are determined on the training set only, then the exact same steps can be applied to the test set (or validation set). In this case there's no test set anymore, so there cannot be data leakage. Of course the original test set cannot be used anymore as a test set, since it's now part of the training set. There might be some overfitting, but it's not related to using the full dataset. Of course the first experiment should be used to check for overfitting before using the full data as training set. Once the model is trained on the full data, there's no way to check for overfitting anymore (unless there's some additional unseen labelled data that can be used as test set).
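A compact way to guarantee that the exact same preprocessing is re-fitted and re-applied when you move from the CV experiment to the final model is to wrap everything in a single sklearn Pipeline. In this sketch the column names and preprocessing steps are placeholders, and X_train/y_train/X/y stand for the training split and the full dataset from the question:

from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "income"]          # placeholder column names
categorical_cols = ["city", "segment"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

pipe = Pipeline([("prep", preprocess),
                 ("model", RandomForestClassifier(random_state=0))])

# 1) HP tuning / model selection on the training split only
search = GridSearchCV(pipe, {"model__n_estimators": [100, 300]}, cv=5)
search.fit(X_train, y_train)

# 2) final model: the same pipeline (same steps, best HPs) refitted on ALL the data
final_model = search.best_estimator_
final_model.fit(X, y)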
H: Date time conversion in a CSV column
I am new to data science. I am attempting to write a program using regression techniques, and all of my values are numerical, except for the date and time (UTC), which are written in this format: HH:MM:SS MM/DD/YY. The date and time are part of a CSV file and I do not know how to alter the column. I have looked around for how to convert this to a numerical value, but all the results put the date before the time. Other than that, I am having a hard time finding people that changed more than a single date. If anyone could guide me on how to make the time and date readable (using LinearRegression().fit() from the sklearn.linear_model library) I would greatly appreciate it.
P.S. Do I even have to convert it to a number? Can I keep it as the date and time or do I need to convert it?
EDIT:

algaeData = pd.read_csv(r'my_file').drop(columns=['Type', 'Device Type', 'Device S/N', 'Mooring', 'MRPT & NOTES'])
algaeData['Date (UTC)'] = pd.to_datetime(algaeData['Date (UTC)'], format='%H:%M:%S %m/%d/%y')

x = algaeData.drop(columns=['BGA (ug/L) (ug/L)'])
y = algaeData['BGA (ug/L) (ug/L)']
x, y = np.array(x), np.array(y)

model = LinearRegression().fit(x, y)

AI: If you're using pandas you can convert your column pretty easily using

df['col'] = pd.to_datetime(df['col'], format='%H:%M:%S %m/%d/%y')

(note %y rather than %Y, since the format you describe uses a two-digit year). That will read your dates as a datetime64[ns] column. However, you most likely cannot keep it as a raw datetime: scikit-learn's LinearRegression expects purely numeric input, so the parsed datetimes typically still need to be turned into numbers before fitting, for example a Unix timestamp or extracted components such as hour, day and month.
Though I fail to understand what you're trying to do when you say "Other than that, I am having a hard time finding people that changed more than a single date."
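As a hedged sketch of that extra step (keeping the column names and drops from the EDIT above; which datetime components are actually useful is a modelling choice, not a rule):

import pandas as pd
from sklearn.linear_model import LinearRegression

algaeData = pd.read_csv(r'my_file').drop(columns=['Type', 'Device Type', 'Device S/N', 'Mooring', 'MRPT & NOTES'])
algaeData['Date (UTC)'] = pd.to_datetime(algaeData['Date (UTC)'], format='%H:%M:%S %m/%d/%y')

# derive numeric features from the parsed datetime
dt = algaeData['Date (UTC)']
algaeData['hour'] = dt.dt.hour
algaeData['dayofyear'] = dt.dt.dayofyear
algaeData['unix_seconds'] = (dt - pd.Timestamp('1970-01-01')) // pd.Timedelta('1s')

# drop the raw datetime column before fitting, keeping only numeric columns
x = algaeData.drop(columns=['Date (UTC)', 'BGA (ug/L) (ug/L)'])
y = algaeData['BGA (ug/L) (ug/L)']

model = LinearRegression().fit(x, y)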