H: Interpreting ROC curves across k-fold cross-validation I have used a MARS model (multivariate adaptive regression splines) and I have used k-fold cross validation for the evaluation of the model, obtaining the following graph: How should this model be interpreted? I understand that in fold 6 the model obtains a better AUC, but why? What is the interpretation of this? Thanks to all. AI: $k$-fold cross-validation simply repeats the same process with different parts of the data. Therefore any difference between different folds can only be due to chance, i.e. it's only because different instances are selected (by chance) that the results are different. In theory, if the dataset is large and representative enough, the performance should be almost identical across different folds. Thus large differences tend to indicate that the data is not representative enough and/or that the trained model overfits (i.e. it's too complex for the data, learning details which happen by chance instead of general patterns). Imho this graph is a bit subjective to interpret: I would say that there are quite large variations across folds, so it could be a case of overfitting. But the different curves stay roughly around the same area, so I think it's not too bad. It's a borderline case from my point of view.
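As a rough illustration of how per-fold AUC spread can be quantified, here is a minimal scikit-learn sketch; it is not the asker's setup (a logistic regression stands in for the MARS model, and the data is synthetic):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, random_state=0)
aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=6, shuffle=True, random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

# a large standard deviation across folds is the quantitative version of the visual spread in the plot
print(np.round(aucs, 3), "std:", round(float(np.std(aucs)), 3))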
H: How to interpret or what's the meaning of rbm.up results? I am studying deep learning and the deepnet R package gives me the following example (the rbm.up function infers hidden unit states from visible units): library(deepnet) Var1 <- c(rep(1, 50), rep(0, 50)) Var2 <- c(rep(0, 50), rep(1, 50)) x3 <- matrix(c(Var1, Var2), nrow = 100, ncol = 2) r1 <- rbm.train(x3, 3, numepochs = 20, cd = 10) v <- c(0.2, 0.8) h <- rbm.up(r1, v) h The result: [,1] [,2] [,3] [1,] 0.5617376 0.4385311 0.5875892 What do these results mean? AI: As the documentation mentions, it shows the hidden states of each of the nodes in the restricted Boltzmann machine after you have trained your model. The number of values you get from rbm.up is equal to the value of the hidden argument in rbm.train (the second argument), which in your case is 3. If you increase this number you will see that the number of values you get from rbm.up also increases: library(deepnet) Var1 <- c(rep(1, 50), rep(0, 50)) Var2 <- c(rep(0, 50), rep(1, 50)) x3 <- matrix(c(Var1, Var2), nrow = 100, ncol = 2) r1 <- rbm.train(x3, 5, numepochs = 20, cd = 10) v <- c(0.2, 0.8) h <- rbm.up(r1, v) print(h) # [,1] [,2] [,3] [,4] [,5] # [1,] 0.5826883 0.6624397 0.5545223 0.4133155 0.5533788
H: My validation loss is too much higher than the training loss is that overfitting? I am new to data deep learning. I am educating myself but I don't understand this situation. Where Validation loss is much much higher than the training loss. Can someone please interpret this? inputs = keras.Input((width, height, depth, 1)) x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x) x = layers.MaxPool3D(pool_size=2)(x) x = layers.BatchNormalization()(x) x = layers.GlobalAveragePooling3D()(x) x = layers.Dense(units=512, activation="relu")(x) x = layers.Dropout(0.3)(x) outputs = layers.Dense(units=1, activation="sigmoid")(x) AI: Upon first glance, this model is likely overfitting. Not always, but many times, whenever you have better training metrics than validation metrics (lower training loss, higher training accuracy), it is indicative of some level of overfitting because the model essentially "memorized" some portion of the training data, and it is not generalizing well to data it has not seen before. However, looking at the charts, your validation loss (on average) is several orders of magnitude larger than the training loss. Depending on what loss you are using, there should typically not be this big of a difference in the scale of the loss. Consider the following: Make sure your validation and training data are preprocessed identically. If you applied certain transformations/preprocessing steps to your training data, do the same to your validation data. This is extremely important, and, if I had to guess, this may be what went wrong in your training. If you normalized your training data, for example, but you didn't normalize your validation data (for example), then your training loss will be several orders of magnitude smaller than the validation loss. Once you check that your training and validation data are consistent before input, make sure to add in regularization (a dropout layer is typical for neural networks) so that your model is not exposed to all of the training data at every layer. A dropout of 0.1-0.3 is pretty typical but a reasonable amount should be ok. Shuffle and randomly split the train and validation data. If the model recognizes some pattern that's in the training data, but not in the validation, this would also cause some overfitting.
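As a minimal sketch of the first point above (identical preprocessing for training and validation data), assuming the 3D volumes are held in NumPy arrays; the array names and shapes below are placeholders, not taken from the question:

import numpy as np

# placeholder data standing in for the real training and validation volumes
train_volumes = np.random.rand(40, 64, 64, 64, 1)
val_volumes = np.random.rand(10, 64, 64, 64, 1)

# fit the normalization statistics on the training set only...
mean, std = train_volumes.mean(), train_volumes.std()

# ...and apply exactly the same transformation to both sets
train_norm = (train_volumes - mean) / std
val_norm = (val_volumes - mean) / std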
H: Orange's Results are not reproducible I've been watching a few training videos from Orange here and attempted to reproduce the process. They used the iris dataset for a classification task. When I compared my confusion matrix to theirs, I didn't get the same results. Is this a problem with Orange software or with sklearn (I know they somehow leverage sklearn)? When you run the code again 6 years later, you get different results, even though the dataset is the same... AI: In general, these algorithms have a lot of math behind them, and the difference between Orange and sklearn may come down to small differences in those implementations. Of course, only small differences should appear. Moreover, many algorithms (like random forest) involve some type of randomness; for example, random forest randomly selects samples and features. Unless the random seed is fixed, this is another factor that creates differences between runs.
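As a small illustration of the randomness point, here is a scikit-learn sketch where fixing random_state makes repeated runs identical; whether and where Orange exposes an equivalent seed setting depends on the widget, so that part is not shown:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# fixing random_state in both the split and the model makes every rerun reproduce the same matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42, stratify=y)
model = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
print(confusion_matrix(y_te, model.predict(X_te)))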
H: What does it mean if the validation accuracy is equal to the testing accuracy? I am training a CNN model for my specific problem. I have divided the dataset into 70% training set, 20% validation set, and 10% test set. The validation accuracy achieved was 95% and the test accuracy achieved was also 95%. What does this mean? Does it mean that the model is not biased (not biased to the samples in the validation set) and that its hyperparameters have been fine-tuned correctly? Also, do these results confirm the generalization ability of the model (no overfitting)? AI: First of all, make sure you did the split before any kind of pre-processing. Splitting data after pre-processing introduces data leakage. Second, shuffle the data once again, re-train, validate and test, and check whether the result persists. If yes, you are right: the model is not biased to the validation set and the hyper-parameters have been fine-tuned correctly.
H: Why do we need to 'train word2vec' when word2vec itself is said to be 'pretrained'? I get really confused about why we need to 'train word2vec' when word2vec itself is said to be 'pretrained'. I searched for a word2vec pretrained embedding, thinking I could get a mapping table directly mapping the vocab of my dataset to a pretrained embedding, but to no avail. Instead, all I find is how we literally train our own: Word2Vec(sentences=common_texts, vector_size=100, window=5, min_count=1, workers=4) But I'm confused: isn't word2vec already pretrained? Why do we need to 'train' it again? If it's pretrained, then what do we modify in the model (or specifically, which part) with our new 'training'? And how does our new 'training' differ from its 'pretraining'? TIA. Which types of word embeddings are truly 'pretrained', so that we can just use, for instance, model['word'] and get the corresponding embedding? AI: word2vec is an algorithm to train word embeddings: given a raw text, it calculates a word vector for every word in the vocabulary. These vectors can be used in other applications, thus they form a pretrained model. It's important to understand that the model (embeddings) depends a lot on the data they are trained on. Some simple applications can simply use a general pretrained model, but some specific applications (for example specific to a technical domain) require the embeddings to be trained on some custom data.
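To make the distinction concrete, here is a small gensim sketch; the model name passed to the downloader is just one of the pretrained embeddings gensim happens to ship (it is large, roughly 1.6 GB), and the toy corpus is obviously not something you would train on for real:

import gensim.downloader as api
from gensim.models import Word2Vec

# 1) Use embeddings somebody else already trained ("pretrained"): just look vectors up, no training.
wv = api.load("word2vec-google-news-300")
print(wv["king"].shape)

# 2) Train word2vec yourself on your own corpus: same algorithm, but the vectors now reflect your data.
sentences = [["the", "dog", "says", "woof"], ["a", "king", "leads", "the", "country"]]
model = Word2Vec(sentences=sentences, vector_size=100, window=5, min_count=1)
print(model.wv["king"].shape)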
H: For each unique value in a column, count respective unique values in another column I have a set of tabular data (e.g. csv) representing accesses to a server through a specific protocol . The data follows this format: server_id | protocol =================== s1 A s1 C s1 C s1 B s2 A s2 B s2 C s2 A s3 A s3 B s3 B server_id can be one of: s1, s2, s3 protocol can be one of: A, B, C In R, how can I get the following? server_id | A | B | C ===================== s1 1 1 2 s2 2 1 1 s3 1 2 0 A, B and C columns represent the amount of times a server was accessed with that protocol. I cannot wrap my head around the declarative way of doing things in R and need some help. Let me know if my question is not clear or if this is not the correct place to post it. Thank you for your help. AI: Since this is more of a programming question than a data science question it would be better suited for the stackoverflow stackexchange page, but this can done relatively easily using some of the functions from the tidyr library: library(tidyr) df <- data.frame( server_id = c("s1", "s1", "s1", "s1", "s2", "s2", "s2", "s2", "s3", "s3", "s3"), protocol = c("A", "C", "C", "B", "A", "B", "C", "A", "A", "B", "B") ) df %>% # count number of rows for each combination of server_id and protocol group_by(server_id, protocol) %>% tally() %>% # pivot the protocol names over the columns pivot_wider(names_from=protocol, values_from=n) %>% # replace NA values in all columns with 0 mutate(across(everything(), .fns=~replace_na(., 0))) Which returns the following dataframe: server_id A B C s1 1 1 2 s2 2 1 1 s3 1 2 0
H: Is it feasible to integrate convolutional layers as Reinforcement Learning input to learn a video game? Let's say you want to apply reinforcement learning to a simple 2D game (e.g. Super Mario). The easy way is of course to retrieve an abstraction of the environment, for example using gym and/or an open-source implementation of the game. But if that's not available, I am thinking about integrating convolutional layers over pixels as inputs of the RL agent. We could of course split the task in two (featurization of the images, then reinforcement learning), but we would probably need some supervision over the images (which can be problematic since we have no abstraction of the environment). Is it a feasible approach to combine learning a featurization of the image data and learning a game policy at the same time? AI: Yes, it is possible to use convolutional layers in a reinforcement learning (RL) agent approximation function for action values (e.g. Q learning) or for policies (e.g. REINFORCE). In fact, any learning system capable of online learning of functions from example inputs and outputs will work with RL. The RL component will generate the examples to learn by taking actions in the environment or in simulation, and calculating some value such as the expected return. These examples are drawn from different distributions as the agent becomes better at the task, which is why online learning is important - the agent must forget the values associated with earlier experiences and replace them with new values as it improves its performance. Neural networks work for online learning by default, unless you make changes to them to prevent that. That means you are not restricted to simple feed-forward networks. You can use CNNs, RNNs and other flavours of neural network provided you design them to output your value function or policy. Which will be best to use depends on the nature of the environment and your input signals. CNNs are a good choice whenever there is a structured arrangement of similar inputs - that includes image data, and also many board games. If you have not already, you may want to get hold of a (free PDF) copy of Reinforcement Learning: An Introduction. In chapter 16, section 16.5 the authors explain the original DQN project which learned how to play video games, including a discussion of the neural network architecture and pre-processing used. This is nowadays a well-known result in the RL community, and you will find discussions, examples and implementations of it in many places. One of the original researchers on DQN, David Silver, has published a lecture series on RL, with videos available on YouTube. He is also associated with DeepMind's Alpha Go project, which is another example RL system that uses CNN architecture internally.
H: Confusion about the value of within-cluster SSE I have a dataset of shape (29088, 11). When I apply KMeans with K=2 I get the following plot: I am surprised that the value of the Sum of Squared Errors (SSE) for C0 (in blue) is smaller than the value of the SSE for C1 (in red). Isn't it supposed to be the opposite, as demonstrated in the plot where the blue points are more spread out, which means their SSE should be larger? Note: C0 has 8554 points (in blue) while C1 has 20534 points (in red) AI: Note that C1 contains far more elements than C0. C0 has 8554 samples, thus its average SSE is $\frac{28101.1}{8554} = 3.28$, while C1 contains 20534 points with an average SSE of $\frac{47725.5}{20534}=2.324$. This implies that the C1 cluster is actually more compact per point; its total SSE is high simply because it contains more than twice as many points as C0.
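For reference, a small scikit-learn sketch (on synthetic data, not the asker's dataset) of computing the total and per-point SSE for each cluster, which makes the size effect explicit:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=2, random_state=0)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for k in range(2):
    members = X[km.labels_ == k]
    sse = ((members - km.cluster_centers_[k]) ** 2).sum()   # within-cluster sum of squared errors
    print(f"cluster {k}: n={len(members)}, total SSE={sse:.1f}, SSE per point={sse / len(members):.3f}")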
H: What are the disadvantages of accuracy? I have been reading about evaluating a model with accuracy only and I have found some disadvantages. Among them, I read that it equates all errors. How could this problem be solved? Maybe assigning costs to each type of failure? Thank you very much for your help. AI: A common complaint about accuracy is that it fails when the classes are imbalanced. For instance, if you get an accuracy of $98\%$, that sounds like a high $\text{A}$ in school, so you might be pretty happy with your performance. However, if the class ratio is $99:1$, then you’re doing worse than you would by always guessing the majority class. However, accuracy has issues when the classes are naturally balanced, too. In many applications, there are different costs associated with the different mistakes. Accuracy takes away from your ability to play the odds. The typical threshold for a (binary) model that outputs probability values (logistic regression, neural nets, and others) is $0.5$. Accuracy makes a $0.49$ and $0.51$ appear to be different categories while $0.51$ and $0.99$ are the same. I’d be a lot more comfortable making a huge decision based on a probability of $0.99$ than on $0.51!$ Accuracy masks this. In fact, any threshold-based metric like sensitivity, specificity, $F_1$, positive predictive value, or negative predictive value masks the differences between $0.51$ and $0.99$. Consequently, statisticians advocate for direct evaluation of the probability outputs of models, using metrics such as log loss (often called crossentropy in machine learning circles and sometimes negative log likelihood) and Brier score (pretty much mean squared error, with an unsurprising generalization in the multiclass setting). Vanderbilt’s Frank Harrell, the founder and former head of the Department of Biostatistics at their medical school, as well as a frequent user of the statistics Stack, has two good blog posts about the idea of predicting tendencies instead of categories and measuring success by evaluating the probability outputs of models. Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules Classification vs. Prediction
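As a quick illustration of why the probability outputs matter, here is a small scikit-learn sketch (toy numbers, purely for demonstration) where two models have identical accuracy but very different log loss and Brier scores:

from sklearn.metrics import accuracy_score, brier_score_loss, log_loss

y_true = [0, 0, 1, 1]
p_model_a = [0.49, 0.48, 0.51, 0.52]   # barely on the right side of 0.5
p_model_b = [0.01, 0.02, 0.99, 0.98]   # confidently correct

for p in (p_model_a, p_model_b):
    labels = [int(pi >= 0.5) for pi in p]
    print(accuracy_score(y_true, labels),          # identical (1.0) for both models
          round(log_loss(y_true, p), 3),           # much lower for the confident model
          round(brier_score_loss(y_true, p), 3))   # likewise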
H: What to do with missing origin city values Hello fellow data scientists. I am new to this field, and I am facing a problem for which I need advice. I have data where one column is the product ID, and another says which city it originates from. My question is: what to do in the cases when the city value is empty? I think it is impossible to guess the origin, or to fill it with a median, so what is your advice? Thank you very much AI: With little context given, we could deal with this in a very generic way. There are a couple of ways to deal with categorical missing values: Leaving it as it is. This depends on the proportion of missing values and which model you would be using (for example, using this method while planning to use a regression model would make no sense). Getting rid of the entry. This likewise depends on the proportion of missing values. Assuming that this is your only feature, you wouldn't suffer much from losing other column data based on a single missing value, but regardless, if the proportion of missing values is high, you would probably want to think otherwise. Filling in with a predictive value. Is this dataset contingent on another dataset that maybe shows a distribution of the origin cities? Is it reasonable, based on that distribution, to replace the missing values with the most frequent city? If you have access to an additional, related dataset, is the city column explained by other features? There are obviously more sophisticated and detailed ways to tackle the missing value problem, but you would have to consider the context of the data, the proportion and distribution, and sometimes even the purpose of the data analysis.
H: Relationship between visualization and feature engineering Having been in industry for a while, this question still puzzles me. What exactly is the relationship between visualization and feature engineering? On Kaggle or elsewhere we frequently bump into beautiful visualizations of data. However, I myself see little value in them when it comes to how they help with feature engineering. In particular, I can complete the entire feature engineering process without plotting any graphs and still manage to get highly accurate models, simply by relying on statistics I print from the data. And more often than not, those graphs fail to show precisely the intervals or the exact numerical values of a certain data field. To me, they are too approximate to be useful. Thus, I see little use for visualization if I can already do feature engineering based on computed statistics. I am looking for someone to correct my thinking and point out what I've been missing in terms of the contributions of visualization to feature engineering, so I won't be so blind. TIA. AI: First of all, nothing you mentioned in your question is that serious! You can do many things without visualisation, and if you find a blank page of thousands of numbers more intuitive than a simple plot, then lucky you! Nothing is right or wrong here. The second point I would like to mention is that in a simple visualisation you can intuitively understand the distribution of your data in terms of classes or clusters, something I am sure even you, yourself, will not see easily in plain numbers. That is just one example, and it follows the rule that "an image is worth a thousand words"! PS: Can you really detect the skewness of distributions just by looking at numbers?!
H: How to Form the Training Examples for Deep Q Network in Reinforcement Learning? Trying to pick up basics of reinforcement learning by self-study from some blogs and texts. Forgive me if the question is too basic and different bits that I understand are a bit messy, but even after consulting a few references, I cannot really get how Deep Q learning with a neural network works. I understood the Bellman equation like this $$V^\pi(s)= R(s,\pi(s)) + \gamma \sum_{s'} P(s'|s,\pi(s)) V^\pi(s')$$ and the update rule of Q table. $$Q_{n+1}(s_t, a_t)=Q_n(s_t, a_t)+\alpha(r+\gamma\max_{a\in\mathcal{A}}Q(s_{t+1}, a)-Q_n(s_t, a_t))$$ But when training a neural network to represent the mapping, how exactly do I get the training samples? To make it more concrete, suppose the state $s\in\mathbb{R}^d$ is a $d$-dimensional vector and there are $|\mathcal{A}|$ actions possible in total where $\mathcal{A}$ is the action space. From some readings, I understood the neural network will have $d$ input neurons, $|\mathcal{A}|$ output neurons and hidden layers in between. After sufficient number of epochs, the forward pass for any $s\in\mathbb{R}^d$ will generate the $Q$ values for different actions at the output layer. For this network then, the training data for supervised learning should have the shape $N\times(d+|\mathcal{A}|)$ where $N$ is the number of training samples. Now suppose I am using an environment from the gym library. According to their documentations, this is how you take an action on an environment, to get a new state, reward, and other information. state, reward, done, info = env.step(action) So how to generate $N\times(d+|\mathcal{A}|)$ training samples from the above line of code? Even if I execute it $N$ times, it will give me the single step rewards of those actions, not the $Q$ values accounting for discounted future rewards. AI: But when training a neural network to represent the mapping, how exactly do I get the training samples? To make it more concrete, suppose the state $s\in\mathbb{R}^d$ is a $d$-dimensional vector and there are $|\mathcal{A}|$ actions possible in total where $\mathcal{A}$ is the action space. Your training data for the neural network is the (maybe feature engineered) state vector that represents $s$, the action choice $a$, and the action value that is your best estimate of $Q(s,a)$ for the current policy. If you are using some form of TD learning, like Q-learning, then this best estimate is also called the TD target. Your target policy is constantly changing in Q learning, because it depends on the current Q estimates. So you cannot simply store a set of value estimates along with the start state and chosen action, like a supervised learning training data set. What you have to do instead is calculate the TD target just before you use it. That is where the Bellman equation and update rules come in. The ones you quote have everything built in to do a tabular update. The TD target is this part: $$r+\gamma\max_{a\in\mathcal{A}}Q(s_{t+1}, a)$$ This TD target is the output value ($y$) for the neural network. The input ($x$) to associate with this target in the simplest form is the vector representations of $(s, a)$ concatenated. In practice though you will often see a function that outputs all action values at once with just the state vector as input. 
The typical loop for training on data from experience replay when you have this structure involves calling the neural network multiple times, but does not require you to somehow collect experience from each possible action in each state. I have simplified this to avoid discussing more detail than necessary, so it will not work without addition of more loop control, options for copying the learning network to the target network etc. For real examples you will want to unpick a reference DQN implementation. # DURING EXPERIENCE GATHERING # Assume we have a current state value # This is the greedy action. Exploring not shown action = np.argmax(learning_nn(state)) next_state, reward, done, info = env.step(action) store_in_experience_replay(state, action, reward, next_state, done) # DURING UPDATES # In real projects, this code will be vectorised to process # a mini-batch with multiple samples at once. s, a, r, next_s, done = sample_from_experience_replay() # This gets the current vals, we're going to modify it later to make the target q_val_targets = learning_nn(s) if done: td_target = r # we're done, so Q(next_s, *) = 0 by definition else: td_target = r + gamma * max(target_nn(next_s)) # Here's the "trick" to allow us to train from the single # sample. We assume all the other actions have the same prediction as we just # generated, so their error will be 0 q_val_targets[a] = td_target # I just made up this syntax, you will probably have a helper function anyway learning_nn.train_on_batch([s], [q_val_targets]) This is not the only way to do it. You could instead arbitrarily set the error or gradient to zero for all outputs other than the one associated with the TD target. However, something like the code above is relatively common because NN frameworks will mostly support training on input/output pairs more easily than tweaking the internal steps during backpropagation.
H: How to deal with missing values that are supposed to be missing? I am trying to predict loan defaults with a fairly moderate-sized dataset. I will probably be using logistic regression and random forest. I have around 35 variables and one of them classifies the type of the client: company or authorized individual. The problem is that, for authorized individuals, some variables (such as turnover, assets, liabilities, etc) are missing, because an authorized individual should not have this stuff. Only a company can have turnover, assets, etc. What do I do in this case? I cannot impute the missing values, but I also can't leave them empty. In the dataset there are about 80% companies and 20% authorized individuals. If I can't impute that data, should I just drop the rows in which we find authorized individuals altogether? Is there any other sophisticated method to make machine learning techniques (logistic regression and random forests) somehow ignore the empty values? AI: Do not ignore missing values. In your case, they carry important information. Consider (1) binning numeric variables, including a separate bin for 'missing', or (2) impute the missing values with 0, introducing a dummy variable for when the variable is 'missing'. Point (1) results in a loss of information, but is most common and easiest to interpret. Point (2) reduces information loss, but leads to bias. I would consider (1).
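A minimal pandas sketch of option (2) above, filling the structurally-missing financials with 0 and adding a dummy flag; the column names are made up for illustration:

import pandas as pd

df = pd.DataFrame({
    "client_type": ["company", "individual", "company"],
    "turnover": [120000.0, None, 55000.0],
})

# 1 marks "not applicable / missing" (here: authorized individuals), 0 marks a real value
df["turnover_missing"] = df["turnover"].isna().astype(int)
df["turnover"] = df["turnover"].fillna(0)
print(df)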
H: Overfitting CNN model - any relation to input image size? If my CNN model is overfitting despite trying all possible hyperparameter tuning, does it mean I must decrease/increase my input image size in the ImageDataGenerator? AI: Overfitting of a deep learning model is strongly related to the number of training samples you have. Given that you use VGG16, which has 138 million parameters, it requires a lot of data to train properly. If you use only 600 images for training, it will overfit on the training data. It's difficult to control overfitting on such a small dataset by adjusting any other hyperparameter or by resizing the images. Please collect more data or use data augmentation techniques to increase the number of training samples.
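For reference, a hedged sketch of basic augmentation with Keras' ImageDataGenerator; the directory path, target size and the particular transformations are placeholders that should be adapted to the data (e.g. horizontal flips may not make sense for every task):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    rotation_range=15,        # random rotations up to 15 degrees
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# each epoch sees randomly transformed variants of the same images
train_flow = train_gen.flow_from_directory("data/train", target_size=(224, 224), batch_size=32)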
H: Labelling for churn measurement I have 3 domains of supplier data (Jan 2017 to Jan 2022) and they are as follows a) Purchase data - Contains all the purchase (of product) data made by the suppliers with us. It contains columns such as purchase date, invoice number, product id,supplier id,project name b) Inventory data - Contains the stock/inventory info of our product with the suppliers (in their warehouse). This is reported every month. It contains columns such as supplier id, product id, inventory_reported_date, qty_in_stock etc. There is no project name here. c) Order backlog data - Contains the pending orders yet to be delivered by us to the suppliers. Meaning, the suppliers have already booked orders with us for products but we are yet to deliver. It contains columns such as supplier id, supplier name, product id, qty ordered, supplier_requested_delivery_date,company_delivery_confirmed_date etc Now, I would like to come up with a rule to identify suppliers who are likely to leave us or stay with us. We plan to build supplier attrition ML model. For this, however, we don't have any ground truth with us (to know whether a supplier left us or not). So, we would like to create rule based label to indicate supplier attrition risk. It could be high risk and low risk. Meaning, high risk indicates supplier who is highly likely to leave and low risk means supplier who is less likely to leave us please note that a supplier can buy same product multiple times for the same project and also for different projects some of the points that I could think of is as below but am not sure whether it is correct or logical a) Decline in order backlog - I can find out the average order backlog for a specific product by a supplier over time (Jan 2017 to Jan 2020) and how it is doing from Feb 2020 to Jan 2022. If the trend is declining, should I mark it as high risk? b) Decline in purchase history - I can find out the average purchase time period (like every 3 months, 6 months etc) for a specific product by a supplier over time (Jan 2017 to Jan 2020) and how it is doing from Feb 2020 to Jan 2022. If the trend is declining, should I mark it as high risk? c) Inventory data - If inventory is not reported for a specific product by a supplier, is it okay to consider that supplier left us for that specific product? But it is not realistic to expect supplier to buy all products available with us. He will only buy what he wants (and reports inventory only for what he buys) Can I seek your suggestions and views on how we can arrive at a rule based label for supplier attrition scenario? AI: Decline in the purchase history seems to be a logical data to determine the churn. The approach to this would be simple : Try to calculate the average purchase cycle & average order value of each supplier Now you can define churn rule on following basis: a. Supplier whose purchase cycle lies in +- 10% of average purchase cycle and order value in +- 10% of of averge order value ---- No Risk Customer b. Supplier who purchase cycle has increase but order value have also increase in the same ratio ---- Low Risk / No Risk c. Supplier whose purchase cycle has remained same but order value has decreased over last 2-3 order ---- High Risk d. Supplier whose order value is same but purchase cycle has increased over last 2-3 cycles --- Risk (Needs attention) e. Supplier whose order value has decreased and purchase cycle has increased --Very High Risk Likely to churn
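As a rough pandas sketch of the first step in the answer (average purchase cycle and average order value per supplier); the order_value column is an assumption, since the question does not list an order-value field explicitly:

import pandas as pd

purchases = pd.DataFrame({
    "supplier_id": ["s1", "s1", "s1", "s2", "s2"],
    "purchase_date": pd.to_datetime(["2021-01-10", "2021-04-12", "2021-07-09", "2021-02-01", "2021-08-05"]),
    "order_value": [1000, 1200, 900, 5000, 4000],   # assumed column, not in the original schema
})

purchases = purchases.sort_values(["supplier_id", "purchase_date"])
cycle_days = purchases.groupby("supplier_id")["purchase_date"].diff().dt.days

summary = purchases.assign(cycle_days=cycle_days).groupby("supplier_id").agg(
    avg_cycle_days=("cycle_days", "mean"),
    avg_order_value=("order_value", "mean"),
)
print(summary)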
H: Training data for anomaly detection using LSTM Autoencoder I am building an time-series anomaly detection engine using LSTM autoencoder. I read this article where the author suggests to train the model on clean data only in response to a comment. However, in most cases, it is not possible to find and exlude anomalies manually. I had always believed that because anomalies are very rare, if we train the model on all the data then the model will learn the normal behavior of time series and be ready to detect anomalies. I have read the same notion in many other articles too. Can someone throw light on what should be right mechanism to prepare the training data for anomaly detection? AI: I will try to clarify the point as best as I can. Ideally a model for anomaly détection should be trained with typical data, so that atypical data (anomalies) stand out. However in practice this may not be achievable. So one can train the model on mixed data, provided the relative percentages of typical vs atypical cases are overwhelmingly high. (How high the relative percentages should be is a grey area depending on model used, type of data, etc.. as in many cases in machine learning there is no fixed hard limit) Thus in this case the model will learn typical data with very high accuracy, thus can be used to detect atypical data. Hope above analysis is clear enough. As following references point out, approaches to anomaly detection and training depends on whether data are labeled and whether supervised or unsupervised algorithms are used. If data are labeled, even if mixed, then supervised algorithms will usually work (including LSTMs). If data are mixed and unlabeled, then some clustering method (eg kmeans) can help in partitioning the data in typical/atypical sets and then proceed as previously. Some references on variations regarding anomaly detection problems and their approach: Anomaly Detection with Machine Learning: An Introduction Supervised Training data is labeled with “nominal” or “anomaly”. The supervised setting is the ideal setting. It is the instance when a dataset comes neatly prepared for the data scientist with all data points labeled as anomaly or nominal. In this case, all anomalous points are known ahead of time. That means there are sets of data points that are anomalous, but are not identified as such for the model to train on. Popular ML algorithms for structured data: Support vector machine learning k-nearest neighbors (KNN) Bayesian networks Decision trees Clean In the Clean setting, all data are assumed to be “nominal”, and it is contaminated with “anomaly” points. The clean setting is a less-ideal case where a bunch of data is presented to the modeler, and it is clean and complete, but all data are presumed to be nominal data points. Then, it is up to the modeler to detect the anomalies inside of this dataset. Unsupervised In Unsupervised settings, the training data is unlabeled and consists of “nominal” and “anomaly” points. The hardest case, and the ever-increasing case for modelers in the ever-increasing amounts of dark data, is the unsupervised instance. The datasets in the unsupervised case do not have their parts labeled as nominal or anomalous. There is no ground truth from which to expect the outcome to be. The model must show the modeler what is anomalous and what is nominal. “The most common tasks within unsupervised learning are clustering, representation learning, and density estimation. 
In all of these cases, we wish to learn the inherent structure of our data without using explicitly-provided labels." - Devin Soni In the Unsupervised setting, a different set of tools are needed to create order in the unstructured data. In unstructured data, the primary goal is to create clusters out of the data, then find the few groups that don't belong. Really, all anomaly detection algorithms are some form of approximate density estimation. Popular ML Algorithms for unstructured data are: Self-organizing maps (SOM) K-means C-means Expectation-maximization meta-algorithm (EM) Adaptive resonance theory (ART) One-class support vector machine Factor Analysis of Mixed Data for Anomaly Detection Anomaly detection aims to identify observations that deviate from the typical pattern of data. Anomalous observations may correspond to financial fraud, health risks, or incorrectly measured data in practice. We show detecting anomalies in high-dimensional mixed data is enhanced through first embedding the data then assessing an anomaly scoring scheme. We focus on unsupervised detection and the continuous and categorical (mixed) variable case. We propose a kurtosis-weighted Factor Analysis of Mixed Data for anomaly detection, FAMDAD, to obtain a continuous embedding for anomaly scoring. We illustrate that anomalies are highly separable in the first and last few ordered dimensions of this space, and test various anomaly scoring experiments within this subspace. Results are illustrated for both simulated and real datasets, and the proposed approach (FAMDAD) is highly accurate for high-dimensional mixed data throughout these diverse scenarios. A comprehensive survey of anomaly detection techniques for high dimensional big data Anomaly detection in high dimensional data is becoming a fundamental research problem that has various applications in the real world. However, many existing anomaly detection techniques fail to retain sufficient accuracy due to so-called "big data" characterised by high-volume, and high-velocity data generated by variety of sources. This phenomenon of having both problems together can be referred to the "curse of big dimensionality," that affect existing techniques in terms of both performance and accuracy. To address this gap and to understand the core problem, it is necessary to identify the unique challenges brought by the anomaly detection with both high dimensionality and big data problems. Hence, this survey aims to document the state of anomaly detection in high dimensional big data by representing the unique challenges using a triangular model of vertices: the problem (big dimensionality), techniques/algorithms (anomaly detection), and tools (big data applications/frameworks). Authors' work that fall directly into any of the vertices or closely related to them are taken into consideration for review. Furthermore, the limitations of traditional approaches and current strategies of high dimensional data are discussed along with recent techniques and applications on big data required for the optimization of anomaly detection.
H: Regularization and loss function I am currently trying to get a better understanding of regularization as a concept. This leads me to the following question: Will regularization change when we change the loss function? Is it correct that this is the sole way that these concepts are related? AI: I would separate three terms: a loss function, regularizing terms, and the function you want to optimize. For example, you start out with a problem: you want to distinguish cat and dog images. For this you already have a function that depends on some parameters. Your function gets an input image and its output is then either the word "cat" or "dog". If you want to measure how good your function actually is, you need a loss function. For example, whenever it says "dog" to a cat image the loss function should indicate so. In reality, you will only have a certain number of images, and there will be arbitrarily many, arbitrarily complex functions f* solving this optimization problem. Such a function can work perfectly on your problem, i.e. your particular dataset, but in general it does not necessarily do what you want it to do. For example, a function f* might work perfectly on your dataset, but otherwise say "dog" to any kind of image, regardless of whether it is actually a dog or cat image. Clearly, there are good and bad functions f* you can get after optimization. So what you want to do now is guide the optimization process in such a way that something like the above scenario doesn't happen. You may not be able to remove this possibility completely, but you can at least make it much less likely that you end up with a function that solves your problem but is otherwise useless. One of the tools at your disposal is regularization. In physics you want to describe the world, and you can make up arbitrarily complex models that explain everything there is. Physicists, though, prefer simple models they can actually test and verify. What use is a model that explains everything just as well as a small model, but is orders of magnitude bigger? Bigger models may also do things that you don't want them to do. In machine learning we have a similar perspective: we dislike complex models, because their behaviour might not be easy to understand, predict or verify. That's why you add a regularizing term to your loss function that penalizes complexity. If you then optimize your model, it naturally tends to move towards simpler versions of itself that use fewer parameters. Did this help?
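A tiny numerical sketch of the idea (generic NumPy, not tied to any particular framework): the data-fit part of the loss stays the same, and an L2 penalty simply makes large weights more expensive.

import numpy as np

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

def regularized_loss(w, X, y, lam=0.1):
    return mse(w, X, y) + lam * np.sum(w ** 2)   # data-fit term + L2 complexity penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)

print(mse(w_true, X, y), regularized_loss(w_true, X, y))   # same fit, plus the penalty on weight size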
H: Activation Functions in Haykin's Neural Networks: A Comprehensive Foundation In Haykin's Neural Networks: A Comprehensive Foundation, the piecewise-linear function is one of the described activation functions. It is described with a formula, and a corresponding plot is shown. I don't really understand how this is correct, since the values shown in the graph in the region $-0.5 < v < 0.5$ are not $v$ but $v+0.5$. Am I understanding something wrong, or is there a mistake? AI: There is a mistake. I think there should be $v + 0.5$ in the function definition, since the author sets the boundaries at 1 and 0.
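For reference, a definition consistent with the plotted boundaries (0 and 1) and with the answer above would be the following; this is a reconstruction based on the discussion, not a quote from the book:

$$\varphi(v) = \begin{cases} 1, & v \ge \tfrac{1}{2} \\ v + \tfrac{1}{2}, & -\tfrac{1}{2} < v < \tfrac{1}{2} \\ 0, & v \le -\tfrac{1}{2} \end{cases}$$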
H: Plot multiple time series from single dataframe I have a dataframe with multiple time series and columns with labels. My goal is to plot all time series in a single plot, where the labels should be used in the legend of the plot. The important point is that the x-data of the time series do not match each other, only their ranges roughly do. See this example: import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame([[1, 2, "A", "A"], [2, 3, "A", "A"], [3, 1, "A", "A"], [4, 2, "A", "A"], [1.1, 2.3, "B", "B"], [2.3, 3.1, "B", "B"], [3.2, 1.7, "B", "B"], [4.1, 2.8, "B", "B"], [0.9, 2.5, "A", "B"], [1.8, 3.5, "A", "B"], [2.7, 1.2, "A", "B"], [4.4, 5.2, "A", "B"]], columns = ["x", "y", "Cat1", "Cat2"]) The way I got it to work is by looping over the different category-labels, and then plotting the resulting dataframes onto the same ax object: list1 = set(list(df["Cat1"])) for Cat1 in list1: list2 = set(list(df["Cat2"])) for Cat2 in list2: ax = plt.gca() df_temp = df[(df["Cat1"] == Cat1) & (df["Cat2"] == Cat2)] df_temp.plot(x = "x", y = "y", label = Cat1 + "; " + Cat2, ax = ax) plt.show() The result looks like this: Now my question is: Is there a smarter/quicker/more succint way of achieving the same result? E.g. doing something like df.plot(x = "x", y = "y", label = ["Cat1", "Cat2"]) AI: Not sure if you want to have this done using just pandas/matplotlib, but this can be done relatively easily using the seaborn plotting library: import seaborn as sns sns.lineplot(data=df, x="x", y="y", hue=df["Cat1"] + "; " + df["Cat2"]) This would give a plot that looks as follows:
H: How do i generate text from ids in Torchtext's sentencepiece_numericalizer? The torchtext sentencepiece_numericalizer() outputs a generator with indices SentencePiece model corresponding to token in the input sentence. From the generator, I can get the ids. My question is how do I get the text back after training? For example >>> sp_id_generator = sentencepiece_numericalizer(sp_model) >>> list_a = ["sentencepiece encode as pieces", "examples to try!"] >>> list(sp_id_generator(list_a)) [[9858, 9249, 1629, 1305, 1809, 53, 842], [2347, 13, 9, 150, 37]] How do I convert list_a back t(i.e "sentencepiece encode as pieces", "examples to try!")? AI: Torchtext does not implement this, but you can use directly the SentencePiece package. installable from PyPi. import sentencepiece as spm sp = spm.SentencePieceProcessor(model_file='test/test_model.model') sp.decode([9858, 9249, 1629, 1305, 1809, 53, 842])
H: VIF Vs Mutual Info I was searching for the best ways for feature selection in a regression problem & came across a post suggesting mutual info for regression, I tried the same on boston data set. The results were as follows: # feature selection f_selector = SelectKBest(score_func=mutual_info_regression, k='all') # learning relationship from training data f_selector.fit(X_train, y_train) # transform train input data X_train_fs = f_selector.transform(X_train) # transform test input data X_test_fs = f_selector.transform(X_test) The scores were as follows: Features Scores 12 LSTAT 0.651934 5 RM 0.591762 2 INDUS 0.532980 10 PTRATIO 0.490199 4 NOX 0.444421 9 TAX 0.362777 0 CRIM 0.335882 6 AGE 0.334989 7 DIS 0.308023 8 RAD 0.206662 1 ZN 0.197742 11 B 0.172348 3 CHAS 0.027097 I was just curious & mapped the VIF along with scores & I see that the features/Variables with high scores has a very high VIF. Features Scores VIF_Factor 12 LSTAT 0.651934 11.102025 5 RM 0.591762 77.948283 2 INDUS 0.532980 14.485758 10 PTRATIO 0.490199 85.029547 4 NOX 0.444421 73.894947 9 TAX 0.362777 61.227274 0 CRIM 0.335882 2.100373 6 AGE 0.334989 21.386850 7 DIS 0.308023 14.699652 8 RAD 0.206662 15.167725 1 ZN 0.197742 2.844013 11 B 0.172348 20.104943 3 CHAS 0.027097 1.152952 How to select the best features among the list? AI: There is not an optimal answear to this question, however let me shade some lights on how I have previously used these methods. I work with large amount of features and with different type of models. As you may already know there are linear and non linear models. Linear models perform well when feature selection is done on the most basic level. However models like random forest or even xgboost give you the opporunity to let more features in. I have used both at the same time as steps. Apply Kbest. If I still have a lot of features, apply VIF to reduce even more. From experience, when using Boosted models, you don't really need VIF since these type of models know how to deal with multi-colinearity. I apply VIF only when I use linear regression models. It really depends on what model you are going to feed those feautures.
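For reference, a short sketch of computing VIF with statsmodels; the toy DataFrame below (with a deliberately correlated pair of columns) just stands in for the Boston-style feature matrix:

import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(0)
X_train = pd.DataFrame({"RM": rng.normal(6, 1, 200)})
X_train["LSTAT"] = 20 - 2 * X_train["RM"] + rng.normal(0, 1, 200)   # correlated with RM on purpose
X_train["CHAS"] = rng.integers(0, 2, 200)

X = add_constant(X_train)   # add an intercept so the VIFs are not artificially inflated
vif = pd.DataFrame({
    "feature": X.columns,
    "VIF": [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
})
print(vif.sort_values("VIF", ascending=False))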
H: What's the purpose of statistical analysis (statistically important features) vs feature elimination in machine learning I am developing a classification model for covid19 symptoms (after being ill) and I don't understand the importance of statistical analysis (some parts of it). 1 Firstly: Basically we perform statistical analysis to learn about the data. However, what's the purpose of computing the mean and standard deviation as shown here: https://www.sciencedirect.com/science/article/pii/S0010482522000762#bib27 What insight will it give me? 2 Moreover: They perform statistical tests like Chi-Square to find the statistically significant features. Suppose they have around 15 "blood parameters" and the tests say that only 10 of them are statistically important. Does it mean those 5 won't be used in the training and can be removed? 3 If they can be removed: Would feature elimination prove the same? Suppose we used Recursive Feature Elimination / Random Forest with the 10 best features. Would the results be the same? AI: Though the paper is light on details, it looks like they took some of the continuous variables, ranked them, and then used Chi-square to determine the feature set. No explanation is given as to why they did that. Also, regarding the features not found significant: you can certainly use them in a model. Chi-square is a weak test, and there may be interactions found in the model which are meaningful. In any case, the statistical tests were exploratory; they were not used for inference directly. It is always good practice to compute basic descriptive statistics before approaching any ML. For example, they could not have performed the missing value imputation without first seeing how many missing values there were. Also note that the MVC variable has overlapping confidence intervals between COVID and non-COVID responses, which sometimes is a signal that there is no significant difference due to that variable. They selected four features: white blood cell count (WBC), monocyte count (MOT), age, and lymphocyte count (LYT), ran them through 8 machine learning algorithms, and used a stacked ML model.
H: Standardization in combination with scaling Would it be ok to standardize all the features that exhibit normal distribution (with StandardScaler) and then re-scale all the features in the range 0-1 (with MinMaxScaler). So far I've only seen people doing one OR the other, but not in combination. Why is that? Also, is the Shapiro Wilk Test a good way to test if standardization is advisable? Should all features exhibit a normal distribution or are you allowed to transform only the ones that do have it? AI: Doing both on a given feature is redundant, and equivalent to doing whichever you do last; they are both linear transformations. StandardScaler doesn't require normally distributed data to be useful. It just centers and scales each feature so that it has mean zero and standard deviation 1; that's potentially useful no matter the distribution. See this stats.SE answer. As to which is better, I don't know that there's a right answer. One of the sklearn core devs answered here, but it leaves a lot open.
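A quick numerical check of the redundancy claim above: applying StandardScaler and then MinMaxScaler gives exactly the same result as MinMaxScaler alone, because both are linear (affine) transformations.

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.random.default_rng(0).normal(size=(100, 3))

both = MinMaxScaler().fit_transform(StandardScaler().fit_transform(X))
minmax_only = MinMaxScaler().fit_transform(X)

print(np.allclose(both, minmax_only))   # True: the transformation applied last determines the result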
H: Feature selection before or after scaling and splitting Should feature scaling/standardization/normalization be done before or after feature selection, and before or after data splitting? I am confused about the order in which the various pre-processing steps should be done AI: Some feature selection methods will depend on the scale of the data, in which case it seems best to scale beforehand. Other methods won't depend on the scale, in which case it doesn't matter. All preprocessing should be done after the test split. There are some cases where it won't make a difference, but if you're uncertain it's safer to do everything after splitting. The test set is supposed to act as data your model will see in production; you won't have access to that data to help define scale (or anything else), so don't use it that way while training.
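A minimal sketch of the split-first workflow with a scikit-learn Pipeline (the dataset and estimator choices are arbitrary): scaling and feature selection are fitted on the training split only and merely applied to the test split.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),
    ("model", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_tr, y_tr)            # all preprocessing statistics come from the training set
print(pipe.score(X_te, y_te))   # the test set is only transformed, never fitted on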
H: Create new rows based on a value in a column My dateset is generated like the example df = {'event':['A','B','C','D'], 'budget':['123','433','1000','1299'], 'duration_days':['6','3','4','2']} I need to create rows for each event based on the column 'duration_days', if I have duration = 6 the event may have 6 rows: event budget duration_days A 123 6 A 123 6 A 123 6 A 123 6 A 123 6 A 123 6 B 123 3 B 123 3 B 123 3 AI: The easiest way of doing this is probably to first convert the dataframe back to a list of rows, then use base python syntax to repeat each row n times, and then convert that back to a dataframe: import pandas as pd df = pd.DataFrame({ "event": ["A","B","C","D"], "budget": [123, 433, 1000, 1299], "duration_days": [6, 3, 4, 2] }) pd.DataFrame([ row # select the full row for row in df.to_dict(orient="records") # for each row in the dataframe for _ in range(row["duration_days"]) # and repeat the row for row["duration"] times ]) Which gives the following dataframe: event budget duration_days A 123 6 A 123 6 A 123 6 A 123 6 A 123 6 A 123 6 B 433 3 B 433 3 B 433 3 C 1000 4 C 1000 4 C 1000 4 C 1000 4 D 1299 2 D 1299 2
H: Why does log-transforming the target have a huge impact on MSE value? I am doing linear regression using the Boston Housing data set, and the effect of applying $\log(y)$ has a huge impact on the MSE. Failing to do it gives MSE=34.94 while if $y$ is transformed, it gives 0.05. AI: The MSE is sensitive to scale. To see this, $$ MSE = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2 $$ Let's suppose your outcome ranges from $[1,99]$ with mean at $50$, and let's pretend your model is just a "naive" estimate where the estimates are just $\hat{y}_i = 50$. The MSE is then 816.66. Now if you log-transformed, the outcome ranges from $[0,4.595]$ with mean 3.63. Again we use a simple model where the estimates are just the sample mean. The MSE is then 0.851. Note that the fit of the model is not any better, the only thing that's changed is the scale of the MSE.
H: Genetic Algorithms (Specifically with Keras) I can't get my deep genetic algorithm snake game to work and I can't figure out why. At this point, I think it must be either the crossover_rate/mutation_rate or the actual crossover code itself is wrong. I'm hoping someone here can help me with figuring out which. So, here's my understanding of deep genetic algorithms: You have a pool of agents. They're randomly generated. You have each of them run, tracking their fitness up until they die. When all agents in the pool are dead, you select some number of the fittest of them. You then take those models (the parents). You grab two parents and use one of them as the base, swapping in some weight and bias values from the other parent. That's crossover. You then go through those weights and bias values and randomly increase or decrease some of them by just a bit. That's mutation. As far as I can tell, that understanding is correct. But then again, maybe it isn't. Something is broken here, so maybe I just don't have a clue how a DGA actually works. But assuming that my understanding is correct, here's my code: It's initialized with my chosen fitness threshold (a percentage if it's < 1, a number of it's > 1), crossover rate, mutation rate, and the degree to which I mutate a value. It also has a legacy pool where I store the absolute best agents I've ever gotten so I can use them as the parents for the new generation. (FYI 1: The models are 11 input size, two 128 dense layers, 3 output layer) (FYI 2: The legacy_pool and new_generation lists are lists of agents where each agent is two item list, with [0] being the model for an agent and [1] being the fitness of the agent) class GeneticAlgorithm(): def __init__(self): #self.fitness_threshold = 0.10 self.fitness_threshold = 2 self.crossover_rate = 0.10 self.mutation_rate = 0.10 self.mutation_degree = 0.50 # Pool of previous parents so we can use the fittest of all time self.legacy_pool = None def _improvement_check(self, new_generation): ''' Only allow the parents to be the absolute fittest of all generations. ''' # For the first time, we just set it to the first generation of parents if self.legacy_pool == None: self.legacy_pool = new_generation # For every other generation, we actually check for improvements else: # Reverse the lists to increase accuracy new_generation.reverse() self.legacy_pool.reverse() # Check for improvements for i in range(len(new_generation)): for j in range(len(self.legacy_pool)): if new_generation[i][1] > self.legacy_pool[j][1]: self.legacy_pool[j] = new_generation[i] break # so we only add a new agent once # Resort the legacy pool self.legacy_pool.sort(key=lambda a: a[1], reverse=True) def breed_population(self, population): ''' Crossover the weights and biases of the fittest members of the population, then randomly mutate weights and biases. 
''' # Get the new generation and the number of children each pair needs to have new_generation, num_children = population.get_parents(self.fitness_threshold) # Update the legacy pool of agents to include any members of the new generation # that are better than the old generations self._improvement_check(new_generation) # # Get the parent models parents = [agent[0] for agent in self.legacy_pool] #shuffle(parents) # Shuffle the parents into a random order # Initialize children children = AgentGA(population.population_size) # Crossover and mutate to get the children for c in range(num_children): for i in range(1, len(parents)-1, 2): children.agents[i*c][0] = self.crossover(children.agents[i*c][0], parents[i], parents[i+1]) return children def crossover(self, child, parent_one, parent_two): ''' Apply crossover and mutation between two parents in order to get a child. ''' # Crossover and mutate each layer for i in range(len(child.layers)): # Get weights and biases of the parents # p1 acts as the base for the child child_data = parent_one.layers[i].get_weights() p2_data = parent_two.layers[i].get_weights() # Handle the weights for x in range(child_data[0].shape[0]): for y in range(child_data[0].shape[1]): # Check to see if crossover should occur if (random() < self.crossover_rate): child_data[0][x][y] = p2_data[0][x][y] # Check to see if mutation should occur if (random() < self.mutation_rate): child_data[0][x][y] += child_data[0][x][y] * uniform(-self.mutation_degree, self.mutation_degree) # Handle the biases for x in range(child_data[1].shape[0]): # Check to see if crossover should occur if (random() < self.crossover_rate): child_data[1][x] = p2_data[1][x] # Check to see if mutation should occur if (random() < self.mutation_rate): child_data[1][x] += child_data[1][x] * uniform(-self.mutation_degree, self.mutation_degree) # Set weights and biases in child child.layers[i].build(input_shape=child_data[0].shape[0]) child.layers[i].set_weights(child_data) return child ``` AI: You have many 1000s of neural network parameters that need to be set up correctly to generate a policy function for your game. In addition, many of the parameters are co-dependent - a "good" set of weights for one neuron in layer 1 will not be effective unless weights in layer 2 make use of it correctly - so cannot be searched and optimised independently. This is much too hard a search problem for a basic GA to optimise. Simple policy functions built out of neural networks can be found using GAs, but you have to radically change the architecture compared to deep learning. A well-established GA/NN combination that should work for your problem is NEAT. There are a few Python implementations, including neat-python. NEAT does things differently to the approach you have tried. Key differences are: The neural networks have far fewer neurons and weights. This is usually OK for simple control systems. It may not work so well if you wanted the policy function to be driven from screenshots of the game (if that is your eventual goal, I recommend investigating reinforcement learning and algorithms like DQN which have been demonstrated to solve these kinds of problem). The NEAT algorithm makes and tracks "innovations" to neural network architecture (e.g. adding new neurons and links between them), so that it can perform crossover and mutation whilst respecting the co-dependent nature of weights in the NN. If you don't want to try NEAT, you could try heavily simplifying your 3-layer network. 
Probably just 10 neurons per hidden layer will be enough for the policy function, and would reduce the size of search space by a few orders of magnitude. Also if your mutation rate is evaluated per weight, you will probably want it to be lower, e.g. 0.01 instead of 0.10 - too many mutations will swamp any forward progress made from selection with random behaviour and cause performance to plateau.
H: What does a leaf size of 1 in K-neighbors regression mean? I am doing hyperparameter tuning + cross validation and I'm constantly getting that the optimal size of the leaf should be 1. Should I worry? Is this a sign of overfitting? AI: leaf_size should have zero effect on the performance of the model. Its effect is on construction of the lookup object, which affects training and prediction time, but not the results. The best parameters are probably just breaking ties by value of this parameter, so the smallest is chosen. You can inspect the GridSearchCV attribute cv_results_ to be sure. It should also contain training and prediction times, which might inform what value you ultimately want to select.
H: When to split Test and Training data from the full Dataset I'm about to put my implementation into a pipeline and I'm now faced with the dilemma on when to actually split the test and training set? I have the following steps that I currently do (the names are self explanatory) DistinctValuesCleanser OutlierCleanser FeatureCoRelationAnalyzer FeatureVarianceThresholdAnalyzer DataEncoder SimpleImputer And perhaps some more EDA (Exploratory Data Analysis) steps will follow! So, now the question, do I run all these on my entire dataset and then split or split first and then run through these steps only on the training dataset? AI: You should split the dataset in training and test set first, because in a real environment, where your model is deployed, you just don't have a test set, since test set is used to check the ability of the model to generalize. For example, if you do your 'SimpleImputer' step (e.g. fill null values with mean of each feature) on full dataset, you're computing this mean over the training + test set, but it's not right, because you need to think as your test set doesn't exists, so you fill null values with mean of samples' features in training set, which are samples you use to train the model. In fact, if you use the test set to compute the mean with which null values will be replaced, then those new samples are 'dependent' by the test set, so you can't use it to test the generalization error, because you "already saw" test data before. Also for the 'OutlierCleanser' step, you shouldn't remove outliers from test set, since in a real environment, you will face cases in which outliers appear, so you should remove them only on training set, since it's the data in which you "have control". Same reasoning can be applied on covariance analysis and so on
H: Sum vs mean of word-embeddings for sentence similarity So, say I have the following sentences ["The dog says woof", "a king leads the country", "an apple is red"] I can embed each word using an N dimensional vector, and represent each sentence as either the sum or mean of all the words in the sentence (e.g Word2Vec). When we represent the words as vectors we can do something like vector(king)-vector(man)+vector(woman) = vector(queen) which then combines the different "meanings" of each vector and create a new, where the mean would place us in somewhat "the middle of all words". Are there any difference between using the sum/mean when we want to compare similarity of sentences, or does it simply depend on the data, the task etc. of which performs better? AI: TL;DR You are better off averaging the vectors. Average vs sum Averaging the word vectors is a pretty known approach to get sentence level vectors. Some people may even call that "Sentence2Vec". Doing this, can give you a pretty good dimension space. If you have multiple sentences like that, you can even calculate their similarity with a cosine distance. If you sum the values, you are not guaranteed to have the sentence vectors in the same magnitude in the vector space. Sentences that have many words will have very high values, where as sentences with few words with have low values. I cannot think of a use-case where this outcome is desirable since the semantical value of the embeddings will be very much dependand on the lenght of the sentence, but there may be sentences that are long with a very similar meaning of a short sentence. Example Sentence 1 = "I love dogs." Sentence 2 = "My favourite animal in the whole wide world are men's best friend, dogs!" Since you may want these two sentence above to fall closely in the vector space, you need to average the word embeddings. Doc2Vec Another approach is to use Doc2Vec which doesn't average word embeddings, but rather treats full sentences (or paragraphs) as a single entity and therefore a single embeddings for it is created.
H: What metrics work well in unbalanced assemblies? I wanted to know if there are some metrics that work well when working with an unbalanced dataset. I know that accuracy is a very bad metric when evaluating a classifier when the data is unbalanced but, what about for example the Kappa index? Best regards and thanks. AI: Here is the answer I gave on the stats SE The choice of metric depends on the needs of the application, not the problems with the methods/tools. Accuracy is not a very bad metric; the main problem is that practitioners fail to use the relative class frequencies to calibrate their expectations. If 95% of the data belong to the majority class and you get 94% accuracy then of course that isn't very impressive. One way to get around this is to look at accuracy gain, something like $$\frac{Accuracy - \pi}{1 - \pi}$$ where $\pi$ is the relative frequency of the majority class. If you achieve perfect performance you get a score of 1 - if you do as well as the majority classifier you get a score of 0 (indicating that your model has probably learned nothing of interest by looking at the attributes). In the example above, you would get a negative score, indicating that the classifier is useless. Now this is an affine transformation of accuracy, so it is still measuring exactly the same thing, just on a more interpretable scale. Imbalanced problems often have unequal misclassification costs, with false-negatives usually being more costly than false-positives, in which case you should probably look at the expected loss of the classifier instead of the accuracy. Again, this means focussing on the needs of the application rather than the methods. However, for this sort of problem you should use a probabilistic classifier, such as [kernel] logistic regression, so you should look at metrics that measure the quality of the predictions of probability, such as the cross-entropy or Brier score. Probabilistic classifiers are likely to be better as you can experiment with misclassification costs without refitting the model (and do things like implement a rejection operator). When you have done that as a baseline, then perhaps experiment with non-probabilistic classifiers to see if they have benefits.
H: forcing decision tree use specific features first My goal it to force some feature used firstly to split tree. Below, the function splitted tree using feature_3 first. For instance, is there a way to force to use feature_2 first instead of feature_3 ? from sklearn import datasets from sklearn.tree import DecisionTreeClassifier from sklearn import tree iris = datasets.load_iris() X = iris.data y = iris.target fit = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0).fit(X,y) text_representation = tree.export_text(fit) print('Graph') print(text_representation) AI: If you want to force your own split (your own segmentation of the data), split the data yourself and build separate trees. This will allow each tree to split, optimize, build to the proper depth, regularization, etc. for each segment. Then your scoring routine looks at your segmentation then uses the appropriate tree. I use this technique when I believe (research by SMEs and data) that the segments are different enough - often even have different data available to each - that makes this extra effort worthwhile. I do not segment and build the models, then compare to segmented models to check which gives me the performance I need.
H: DB-Scan with ring like data I've been using the DBScan implementation of python from sklearn.cluster. The problem is, that I'm working with 360° lidar data which means, that my data is a ring like structure. To illustrate my problem take a look at this picture. The colours of the points are the groups assigned by DBScan (please ignore the crosses, they dont have anything to do with the task). In the picture I have circled two groups which should be considered the same group, as there is no distance between them (after 2pi it repeats again obviously...) Someone has an idea? Of course I could implement my own version of DB-Scan but my question is, if there is a way to use sklearn.cluster.dbscan with ring like structures. AI: This solved my problem: https://stackoverflow.com/questions/48767965/dbscan-with-custom-metric This is the formula I used for my distance, with n = 2*pi: https://math.stackexchange.com/a/1149125
H: Why DQN but no Deep Sarsa? Why is DQN frequently used while there is hardly any occurrence of Deep Sarsa? I found this paper https://arxiv.org/pdf/1702.03118.pdf using it, but nothing else which might be relevant. I assume the cause could be the Ape-X architecture which came up the year after the Deep Sarsa paper and allowed to generate an immense amount of experience for off-policy algorithms. Does it make sense or is their any other reason? AI: Off-policy learning allows you to use experience replay, which is a finite historical bucket storing recent experiences, which you can then use to randomly sample a fraction of the events from and train your model on these events. This is done to break the autocorrelation of the events (very similar results the closer they are in time), which causes problems when training a NN. This approach cannot be used with SARSA since it uses the next action to train the model. I am sure that someone has already figured out some way to hack this together but it's not really meant to be used as such.
H: Are genetic algorithms considered to be generative models? My understanding is that these sorts of algorithms can evolve/mutate data to hone in on specific desirable areas in large/difficult to search parameter spaces. Assuming one does this successfully, how does one generate new data/sample from that desirable range? Does doing so utilize the same algorithm/structure, or would additional methods need to be introduced? If genetic algorithms improve the efficiency of a search by finding promising regions, can they be considered as a type of generative models? Or would that be incorrect? AI: Within the broader field of ML, I think generative models has a more specific meaning, and if someone says the phrase, I don't really think of GAs as what they were referring to. So in that regard, no, I don't think GAs would be considered generative models (unless perhaps you built one to do very specific tasks that we tend to use other algorithms for today that we call generative models). However, it's probably one of those things that any attempt to rigorously define the term in a way that reflects that based on some underlying principle is really hard, and you'd probably end up with a definition that very well might include GAs. Basically, I think we're in "is a hot dog a sandwich?" territory here, which can be a fun discussion, but doesn't really provide much in the way of useful outcomes. In terms of specific questions, GAs don't generally have separate operators for "hone in on a new region of the space" and "generate solutions within that region". The GA has some genetic operators that when iterated and combined with selection, serve to move the population around the search space over time. All you're doing is generating new individuals via those operators. Humans interpreting the results will say things like, "that mutation caused it to find a promising new region to explore" or whatever, but the algorithm just mutated an individual using the same algorithm it always does. It's us imposing descriptions on the outcomes that provide the kind of color you're asking about.
H: Does t-SNE have to result in clear clusters / structures? I have a data set which, no matter how I tune t-SNE, won't end in clearly separate clusters or even patterns and structures. Ultimately, it results in arbitrary distributed data points all over the plot with some more data points of the one class there and some of another one somewhere else. Is it up to t-SNE, me and/or the data? I'm using Rtsne(df_tsne , perplexity = 25 , max_iter = 1000000 , eta = 10 , check_duplicates = FALSE) AI: No, T-SNE does not have to result in clear clusters. It is a low dimension visualization of high dimension data. So, if you data points are well clustered in low dimension, it means that they can be classified in lower dimension. The idea behind T-SNE is to calculate probability of data points. Points far from each other have low probability. I would suggest to have a look at this link once, https://towardsdatascience.com/t-distributed-stochastic-neighbor-embedding-t-sne-bb60ff109561
H: RNN/LSTM timeseries, with fixed attributes per run I have a multivariate time series of weather date: temperature, humidity and wind strength ($x_{c,t},y_{c,t},z_{c,t}$ respectively). I have this data for a dozen different cities ($c\in {c_1,c_2,...,c_{12}}$). I also know the values of certain fixed attributes for each city. For example, altitude ($A$), latitude $(L)$ and distance from ocean ($D$) are fixed for each city (i.e. they are time independent). Let $p_c=(A_c,L_c,D_c)$ be this fixed parameter vector for city $c$. I have built a LSTM in Keras (based on this post) to predict the time series from some initial starting point, but this does not make use of $p_c$ (it just looks at the time series values). My question is: Can the fixed parameter vector $p_c$ be taken into account when designing/training my network? The purpose of this is essentially: (1) train a LSTM on all data from all cities, then (2) forecast the weather time series for a new city, with known $A_{new},L_{new},D_{new}$ values (but no other data - i.e. no weather history for this city). (A structure different from LSTM is fine, if that's more suited.) AI: You can create a sort of encoder-decoder network with two different inputs. latent_dim = 16 # First branch of the net is an lstm which finds an embedding for the (x,y,z) inputs xyz_inputs = tf.keras.Input(shape=(window_len_1, n_1_features), name='xyz_inputs') # Encoding xyz_inputs encoder = tf.keras.layers.LSTM(latent_dim, return_state=True, name = 'Encoder') encoder_outputs, state_h, state_c = encoder(xyz_inputs) # Apply the encoder object to xyz_inputs. city_inputs = tf.keras.Input(shape=(window_len_2, n_2_features), name='city_inputs') # Combining city inputs with recurrent branch output decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, name = 'Decoder') x = decoder_lstm(city_inputs, initial_state=[state_h, state_c]) x = tf.keras.layers.Dense(16, activation='relu')(x) x = tf.keras.layers.Dense(16, activation='relu')(x) output = tf.keras.layers.Dense(1, activation='relu')(x) model = tf.keras.models.Model(inputs=[xyz_inputs,city_inputs], outputs=output) optimizer = tf.keras.optimizers.Adam() loss = tf.keras.losses.Huber() model.compile(loss=loss, optimizer=optimizer, metrics=["mae"]) model.summary() Here you are, of course I inserted random numbers for layer, latent dimensions, etc. With such code, you can have different features to input with xyz and city features and these have to passed as arrays. Of course, to predict you have to give the model "xyz_inputs" and city features of the one you want to predict.
H: BERT base uncased required gpu ram I'm working on an NLP task, using BERT, and I have a little doubt about GPU memory. I already made a model (using DistilBERT) since I had out-of-memory problems with tensorflow on a RTX3090 (24gb gpu's ram, but ~20.5gb usable) with BERT base model. To make it working, I limited my data to 1.1 milion of sentences in training set (truncating sentences at 128 words), and like 300k in validation, but using an high batch size (256). Now I have the possibility to retrain the model on a Nvidia A100 (with 40gb gpu's ram), so it's time to use BERT base, and not the distilled version. My question is, if I reduce the batch size (e.g. from 256 to 64), will I have some possibilities to increase the size of my training data (e.g. from 1.1 to 2-3 milions), the lenght of sentences (e.g. from 128 to 256, or 198) and use the bert base (which has a lot of trainable params more than distilled version) on the 40gb of the A100, or it's probably that I will get an OOM error? I ask this because I haven't unlimited tries on this cluster, since I'm not alone using it (plus I have to prepare data differently in each case, and it has a quite high size), so I would have an estimation on what could happen. AI: As you pointed out in your comments, you pre-tokenized the data and kept in in tensors in GPU memory. Only the current batch should be loaded in GPU RAM, so you should not need to reduce your training data size (assuming your data loading and training routines are implemented properly). To keep you training data tensor in CPU, you can use with tf.device(...):. However, take into account that the size of the training data can also be huge for the size of the CPU memory. A typical approach for this is to save the token IDs on disk and then load them from there.
H: How do I calculate the accuracy rate of predicting “Fail”? Am I supposed to create a confusion matrix? Question: ABC Open University has a Teaching and Learning Analytics Unit (TLAU) which aims to provide information for data-driven and evidence-based decision making in both teaching and learning in the university. One of the current projects in TLAU is to analyse student data and give advice on how to improve students’ learning performance. The analytics team for this project has collected over 10,000 records of students who have completed a compulsory course ABC411 from 2014 to 2019. AI: Strictly speaking, calculating accuracy doesn't require the details of a confusion matrix: it's simply the proportion of correct predictions. Since there are 4 possible classes in this exercise and we are interested only in the accuracy of the class 'fail', this means that the 3 other classes are considered like a single class 'not fail'. So to obtain the accuracy of fail, sum: the number of students predicted as 'fail' who truly fail (True Positive cases) the numbers of students predicted as 'not fail' who truly don't fail (True Negative cases) And then divide by the total number of students. edit to answer comment: the DT shows for every node the proportion of instances by class, for the subset of data that it receives based on the previous conditions (see a short explanation about DTs here). The instances are predicted at the level of leaf nodes, i.e. nodes with no children. The leaf node simply assigns the majority class. For example if we take the leaf node "studied_credits>=82.500" (just below the root), the majority class is 'withdrawn'. This means that the 5565 instances in this leaf are predicted 'withdrawn', which means 'not fail' for our purpose. This includes 1120 instances which actually should be 'fail', so this leaf node results in 4445 TNs and 0 TPs (and also 1120 FNs but we are not interested in those for accuracy). By doing this for every leaf node you should obtain the total number of TPs and TNs. The total number of instances is given in the root node, it's 15370.
H: How to set the same number of datapoints in the different ranges in correlation chart I am beginner in working with machine learning. I would like to ask a question that How could I set the same number of datapoints in the different ranges in correlation chart? Or any techniques for doing that? . Specifically, I want to set the same number of datapoints in each range (0-10; 10-20;20-30;...) in the image above. Thanks for any help. AI: You can bin your variables to prevent overplotting and make the output cleaner. Here is an example from StackOverflow: https://stackoverflow.com/questions/16947210/making-binned-scatter-plots-for-two-variables-in-ggplot2-in-r This may not be exactly what you need since you did say you want the same number of datapoints in each range (or bin). You would have to add some code if you wanted that exact format.
H: How To Develop Cluster Models Where the Clusters Occur Along Subsets of Dimensions in Multidimensional Data? I have been exploring clustering algorithms (K-Means, K-Medoids, Ward Agglomerative, Gaussian Mixture Modeling, BIRCH, DBSCAN, OPTICS, Common Nearest-Neighbour Clustering) with multidimensional data. I believe that the clusters in my data occur across different subsets of the features rather than occurring across all features, and I believe that this impacts the performance of the clustering algorithms. To illustrate, below is Python code for a simulated dataset: ## Simulate a dataset. import numpy as np, matplotlib.pyplot as plt from sklearn.cluster import KMeans np.random.seed(20220509) # Simulate three clusters along 1 dimension. X_1_1 = np.random.normal(size = (1000, 1)) * 0.10 + 1 X_1_2 = np.random.normal(size = (2000, 1)) * 0.10 + 2 X_1_3 = np.random.normal(size = (3000, 1)) * 0.10 + 3 # Simulate three clusters along 2 dimensions. X_2_1 = np.random.normal(size = (1000, 2)) * 0.10 + [4, 5] X_2_2 = np.random.normal(size = (2000, 2)) * 0.10 + [6, 7] X_2_3 = np.random.normal(size = (3000, 2)) * 0.10 + [8, 9] # Combine into a single dataset. X_1 = np.concatenate((X_1_1, X_1_2, X_1_3), axis = 0) X_2 = np.concatenate((X_2_1, X_2_2, X_2_3), axis = 0) X = np.concatenate((X_1, X_2), axis = 1) print(X.shape) Visualize the clusters along dimension 1: plt.scatter(X[:, 0], X[:, 0]) Visualize the clusters along dimensions 2 and 3: plt.scatter(X[:, 1], X[:, 2]) K-Means with all 3 Dimensions K = KMeans(n_clusters = 6, algorithm = 'full', random_state = 20220509).fit_predict(X) + 1 Visualize the K-Means clusters along dimension 1: plt.scatter(X[:, 0], X[:, 0], c = K) Visualize the K-Means clusters along dimensions 2 and 3: plt.scatter(X[:, 1], X[:, 2], c = K) The K-Means clusters developed with all 3 dimensions are incorrect. K-Means with Dimension 1 Alone K_1 = KMeans(n_clusters = 3, algorithm = 'full', random_state = 20220509).fit_predict(X[:, 0].reshape(-1, 1)) + 1 Visualize the K-Means clusters along dimension 1: plt.scatter(X[:, 0], X[:, 0], c = K_1) The K-Means clusters developed with dimension 1 alone are correct. K-Means with Dimensions 2 and 3 Alone K_2 = KMeans(n_clusters = 3, algorithm = 'full', random_state = 20220509).fit_predict(X[:, [1, 2]]) + 1 Visualize the K-Means clusters along dimensions 2 and 3: plt.scatter(X[:, 1], X[:, 2], c = K_2) The K-Means clusters developed with dimensions 2 and 3 alone are correct. Clustering Between Dimensions Although I did not intend for dimension 1 to form clusters with dimensions 2 or 3, it appears that clusters between dimensions emerge. Perhaps this might be part of why the K-Means algorithm struggles when developed with all 3 dimensions. Visualize the clusters between dimension 1 and 2: plt.scatter(X[:, 0], X[:, 1]) Visualize the clusters between dimension 1 and 3: plt.scatter(X[:, 0], X[:, 2]) Questions Am I making a conceptual error somewhere? If so, please describe or point me to a resource. If not: If I did not intend for dimension 1 to form clusters with dimensions 2 or 3, why do clusters between those dimensions emerge? Will this occur with higher-dimensional clusters? Is this why the K-Means algorithm struggles when developed with all 3 dimensions? How can I select the different subsets of the features where different clusters occur (3 clusters along dimension 1 alone, and 3 clusters along dimensions 2 and 3 alone, in the example above)? 
My hope is that developing clusters separately with the right subsets of features will be more robust than developing clusters with all features. Thank you very much! UPDATE: Thank you for the very helpful answers for feature selection and cluster metrics. I have asked a more specific question: Why Do a Set of 3 Clusters Across 1 Dimension and a Set of 3 Clusters Across 2 Dimensions Form 9 Apparent Clusters in 3 Dimensions? AI: The field of feature selection for clustering studies this topic. A specific algorithm for feature selection for clustering is Spectral Feature Selection (SPEC) which estimates the feature relevance by estimating feature consistency within the spectrum matrix of the similarity matrix. The features consistent with the graph structure will have similar values to instances that are near to each other in the graph. These features should be more relevant since they behave similarly in each similar group of samples, aka clusters. "Feature Selection for Clustering: A Review" by Alelyani et al. goes into greater detail. There is an also an Feature Selection for Clustering Python package.
H: Keras Binary Classification - Maximizing Recall Let me start by saying my machine learning experience is... dangerous at this stage. I'm still a beginner. I have a binary classification data set of about 100 000 records. 10% of the records are positive and the rest obviously negative. Thus a highly skewed dataset. It is extremely important to maximize the positive (true positive) prediction accuracy (recall) at the expense of negative (true negative) prediction accuracy . Thus, I would rather have an overall 70% accuracy if positive accuracy is 90%+ compared to a low positive accuracy and high overall accuracy. You can already see the issue here. Training the below algorithm obviously optimizes loss for the entire dataset. Thus, priority is given to the negative records which consist of 90% of the dataset. Thus, the overall data set accuracy is high, but the true positive accuracy (recall) is horrible. model = keras.Sequential() model.add(layers.Dense(128, activation='relu', input_dim=35)) model.add(layers.Dense(128, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) One idea would be to try and change the sigmoid threshold to less than 0.5 to try and give preference to recall. But to begin with I have no idea how to do this or if it is even a valid method. Any advice will be appreciated AI: This is the kind of solution that I was looking for: https://github.com/huanglau/Keras-Weighted-Binary-Cross-Entropy/blob/master/DynCrossEntropy.py
H: Interpreting the variance of feature importance outputs with each random forest run using the same parameters I noticed that I am getting different feature importance results with each random forest run even though they are using the same parameters. Now, I know that a random forest model takes observations randomly which is causing the importance levels to vary. This is especially shown for the less important variables. My question is how does one interpret the variance in random forest results when running it multiple times? I know that one can reduce the instability level of results by increasing the number of trees; however, this doesn't really tell me if my feature importance results are "true" though they may be true for that specific run (but not necessarily for a separate run). Even if I were to take an extremely large number of trees and average the feature importance results for each variable, that still doesn't necessarily confirm that it will produce the same importance results if I repeat that exact same process again. Additionally, I have tried it with an extremely large number of trees and still got a slight variation (it did significantly reduce the variance of my results) in my feature importance results between runs. Is there any method that I can use to interpret this variance of importance between runs? I cannot set a seed because I need stable (similar) results across different seeds. Any help at all would be greatly appreciated! AI: Random Forests are full of 'randomness', from selecting and resampling the actual data (bootstrapping) to selection of the best features that go into the individual decision trees. So with all of this sampling going on the starting seed will affect all of these intermediate results as well as the final set of trees. Since you asked about the feature importance it will also affect the ranking as well. So it is always best to keep the seed the same. If you results are changing, and you are doing multiple runs, averaging the feature importance of all of the runs should give you a good idea of what the 'true' value should be.
H: Coefficients values in filter in Convolutional Neural Networks I'm starting to learn how convolutional neural networks work, and I have a question regarding the filters. Are these chosen manually or are they generated by the network in training? If it's the latter, are the coefficients in the filters chosen at random, and then as the network is trained they are "corrected"? Any help or insight you might be able to provide me in this matter is greatly appreciated! AI: The values in the filters are parameters that are learned by the network during training. When creating the network the values are initialized randomly according to some initialization scheme (e.g. Kaiming He initialization) and then during training are updated to achieve a lower loss (i.e. the learning process).
H: Is my model classification overfitting? Is this possible to be just a bad draw on the 20% or is it overfitting? I'd appreciate some tips on what's going on. AI: A few comments: You don't mention number of classes or distribution. Unless the classes are balanced, you should use precision/recall/f1-score instead of accuracy (if your majority class is 75%, accuracy can be 75% just by always predicting this class). It's also unclear what your validation set is used for? When your feature is represented as bag of words, it's not one feature anymore, it's as many as the vocabulary size. This is important because if it's very large you're very likely to have overfitting. Btw this is certainly why you improve performance when you remove some words. Generally you should remove all the rare words, which are useless for the model and often cause overfitting. A difference of 78% on the validation set down to 75% on the test set is not necessarily worrying, but that depends on other factors.
H: Is there a sensible notion of 'character embeddings'? There are several popular word embeddings available (e.g., Fasttext and GloVe); In short, those embeddings are a tool to encode words along with a sensible notion of semantics attached to those words (i.e. words with similar sematics are nearly parallel). Question: Is there a similar notion of character embedding? By 'character embedding' I understand an algorithm that allow us to encode characters in order to capture some syntactic similarity (i.e. similarity of character shapes or contexts). AI: Yes, absolutely. First it's important to understand that word embeddings accurately represent the semantics of the word because they are trained on the context of the word, i.e the words close to the target word. This is just another application of the old principle of distributional semantics. Characters embeddings are usually trained the same way, which means that the embedding vectors also represent the "usual neighbours" of a character. This can have various applications in string similarity, word tokenization, stylometry (representing an author's writing style), and probably more. For example, in languages with accentuated characters the embedding for é would be closely similar to the one for e; m and n would be closer than x and f .
H: Pretrained vs. finetuned model I have a doubt regarding terminology. When dealing with huggingface transformer models, I often read about "using pretrained models for classification" vs. "fine-tuning a pretrained model for classification." I fail to understand what the exact difference between these two is. As I understand, pretrained models by themselves cannot be used for classification, regression, or any relevant task, without attaching at least one more dense layer and one more output layer, and then training the model. In this case, we would keep all weights for the pretrained model, and only train the last couple of custom layers. When task is about finetuning a model, how does it differ from the aforementioned case? Does finetuning also include reinitializing the weights for the pretrained model section, and retraining the entire model? AI: Even if both expressions are often considered the same in practice, it is crucial to draw a line between "reuse" and "fine-tune". We reuse a model to keep some of its inner architecture or mechanism for a different application than the original one. For example, we can reuse a GPT2 model initialy based on english to adapt it to another language like chinese, which means deep changes from the initial model to the new one. On the other hand, we fine tune a model to improve an already existing application or a slight different one, by changing specific hyperparameters or use better algorithms (for instance, using AdamW instead of Gradient Descent). There are plenty of methods in NLP to improve existing models, that's why we can consider it as a different area. It could be regarded as a semantic issue, but I think it is interesting not to be confused between both expressions.
H: Derivative of MSE Cost Function The gradient descent: $\theta_{t+1}=\theta_t-a\frac{\partial}{\partial \theta_j}J(\theta)$ But specifically about $J$ cost function (Mean Squared Error) partial derivative: Consider that: $h_\theta(x)=\theta_0+\theta_1x$ $\frac{\partial}{\partial\theta_j}J(\theta) = \frac{\partial}{\partial\theta_j}\frac{1}{2}(h_{\theta}(x)-y)^2$ $\ \ \ \ \ \ \ \ \ \ \ \ =2\frac{1}{2}(h_{\theta}(x)-y)*\frac{\partial}{\partial\theta_j}(h_{\theta}(x)-y)$ $\ \ \ \ \ \ \ \ \ \ \ \ = (h_{\theta}(x)-y)*\frac{\partial}{\partial\theta_j}(\sum_{i=0}^{n}\theta_ix_i-y_i)$ $\ \ \ \ \ \ \ \ \ \ \ \ = (h_{\theta}(x)-y)x_j$ It´s not clear to me how $x_j$ is calculated: $\frac{\partial}{\partial\theta_j}(\sum_{i=0}^{n}\theta_ix_i-y) = x_j $ Can anyone help me to understand in detail this part of the partial derivative? Thanks in advance. AI: Any term $f$ that is not a function of $\theta_j$ in any equation will have a partial derivative $\frac{\partial}{\partial\theta_j}(f) = 0$. Importantly, no $x_i$, $y$ or $\theta_{i \ne j}$ depend in any way upon $\theta_j$, so they are effectively constants when figuring out the partial derivative. This is also true for any function of them, provided that also does not depend on $\theta_j$. So for example $\frac{\partial}{\partial\theta_j}(f(y)) = 0$, $\frac{\partial}{\partial\theta_j}(f(y)\theta_j) = f(y)$ and $\frac{\partial}{\partial\theta_j}(f(y)\theta_j^2) = 2f(y)\theta_j$ From this: $\frac{\partial}{\partial\theta_j}(\sum_{i=0}^{n}\theta_ix_i-y) = x_j $ When $i \ne j$, then $\frac{\partial}{\partial\theta_j}(\theta_ix_i-y) = 0$, because no term inside the brackets depends on $\theta_j$. When $i = j$, then $\frac{\partial}{\partial\theta_j}(\theta_jx_j-y) = x_j$. Only the term $\theta_jx_j$ depends on $\theta_j$, and it is a linear multiplication.
H: Can anyone tell me how can I get the following output? Here is my code; file_name = ['0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg', '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'] img_id = {} images = [] for e, i in enumerate(range(len(file_name))): img_id['file_name'] = file_name[e] images.append(img_id) print(images) The output is; [{'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}, {'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}] I want it to be; [{'file_name': '0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg'}, {'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}] I don't know, why it is saves only the last file name in the dictionary? AI: You are overwriting the data stored in img_id because you are using the same dictionary with the same key (file_name). You can either reset the img_id variable to an empty dictionary within your for loop or use a simpler list comprehension: file_name = ['0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg', '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'] images = [] for e, i in enumerate(range(len(file_name))): img_id = {} img_id['file_name'] = file_name[e] images.append(img_id) # or [{"file_name": x} for x in file_name]
H: How does ExtraTrees (Extremely Randomized Trees) learn? I'm trying to understand the difference between random forests and extremely randomized trees (https://orbi.uliege.be/bitstream/2268/9357/1/geurts-mlj-advance.pdf) I understand that extratrees uses random splits and no bootstrapping, as covered here: https://stackoverflow.com/questions/22409855/randomforestclassifier-vs-extratreesclassifier-in-scikit-learn The question I'm struggling with is, if all the splits are randomized, how does a extremely randomized decision tree learn anything about the objective function? Where is the 'optimization' step? AI: The splits are random, but the value assigned at each leaf is still the average of the response among training points landing in that leaf. Without pruning, both kinds of trees will perfectly fit their training data; the difference is in how unseen data will pass through the splits (and pruned trees will of course cause a bigger discrepancy).
H: Using Sci-Kit Learn Clustering and/or Random-Forest Classification on String Data with Multiple Sub-Classifications I have a set of data with some numerical features and some string data. The string data is essentially a set of classes that are not inherently related. For example: Sample_1,0.4,1.2,kitchen;living_room;bathroom Sample_2,0.8,1.0,bedroom;living_room Sample_3,0.5,0.9,None I want to implement a classification method with these string-subclasses as a feature; however, I don't want to have them be numerically related or have the comparisons be directly based on the string itself. Additionally, if samples have no data in this column they should not be inherently related. Is there a way to implement these features as "classes" in a way that doesn't rely on a distance metric? I originally wanted to try converting the classes directly to numerical data, but I am worried that arbitrarily class 1 would be considered more closely related to class 2 than class 43. AI: You use something called "dummy encoding".
H: What is the purpose of Sequence Length parameter in RNN (specifically on PyTorch)? I am trying to understand RNN. I got a good sense of how it works on theory. But then on PyTorch you have two extra dimensions to your input data: batch size (number of batches) and sequence length. The model I am working on is a simple one to one model: it takes in a letter than estimates the following letter. The model is provided here. First please correct me if I am wrong about the following: Batch size is used to divide the data into batches and feed it into model running in parallels. At least this was the case in regular NNs and CNNs. This way we take advantage of the processing power. It is not "ideal" in the sense that in theory for RNN you just go from one end to another one in an unbroken chain. But I could not find much information on sequence length. From what I understand it breaks the data into the lengths we provide, instead of keeping it as an unbroken chain. Then unrolls the model for the length of that sequence. If it is 50, it calculates the model for a sequence of 50. Let's think about the first sequence. We initialize a random hidden state, the model first does a forward run on these 50 inputs, then does backpropagation. But my question is, then what happens? Why don't we just continue? What happens when it starts the new sequence? Does it initialize a random hidden state for the next sequence or does it use the hidden state calculated from the very last entry from the previous sequence? Why do we do that, and not just have one big sequence? Does not this break the continuity of the model? I read somewhere it is also memory related; if you put the whole text as sequence, gradient calculation would take the whole memory it said. Does it mean it resets the gradients after each sequence? Thank you very much for the answers AI: The RNN receives as input a batch of sequences of characters. The output of the RNN is a tensor with sequences of character predictions, of just the same size of the input tensor. The number of sequences in each batch is the batch size. Every sequence in a single batch must be the same length. In this case, all sequences of all batches have the same length, defined by seq_length. Each position of the sequence is normally referred to as a "time step". When back-propagating an RNN, you collect gradients through all the time steps. This is called "back-proparation through time (BPTT)". You could have a single super long sequence, but the memory required for that would be large, so normally you must choose a maximum sequence length. To somewhat mitigate the need of cutting the sequences, people normally apply something called "truncated BPTT". That is what the code you linked uses. It consists of having the sequences in the batches arranged so that each of the sequences in the next batch are the continuation of the text from each of the sequences in the previous batch, together with reusing the last hidden state of the previous batch as the initial hidden state of the next one.
H: How can I use a confusion matrix in image captioning? I read that a confusion matrix is used with image classification but if I need to draw it with image captioning how to use it or can I draw it in the evaluation model phase for example if yes how can I start? AI: There's a confusion: a confusion matrix is a standard tool for evaluating a classification task, i.e. one where the target is a categorical variable. The confusion matrix is a table which allows observing the number of test instances which have true class X and are predicted class Y, for every class X and Y. This is practical only with a small number of classes of course, otherwise the confusion matrix is not readable. The task of image captioning is not classification. The target is unstructured data (text), not a categorical variable with a finite set of possible values. Therefore it requires a different (and more complex) evaluation method. It's often similar to machine translation, based on a measure of similarity between the gold standard caption and the predicted caption. Usually one should use the state of the art evaluation method, i.e. the method used in recent papers published on this task.
H: How do I design a random forest split with a "not sure" category? Let's say I have data with two target labels, A and B. I want to design a random forest that has three outputs: A, B and Not sure. Items in the Not sure category would be a mix of A and B that would be about evenly distributed. I don't mind writing the RF from scratch. Two questions: What should my split criterion be? Can this problem be reposed in a standard RF framework? AI: A standard decision tree (or random forest) predicts a probability for the instance to belong to the positive class (I'm assuming binary classification). This probability is based exactly on the same idea: given the features values leading to this leaf of the node, if the proportion of positive instances in this leaf (i.e. with these conditions on the features) is $p$ then a new instance is assigned $p$ as a probability to be positive. So basically you just have to obtain the predicted probability (instead of the class), and if this probability is close enough to 0.5 (e.g. between 0.4 and 0.6) you can predict 'not sure'. Naturally this probability is based on the training data. If the training data is not representative enough or a test instance is too different from the training data, then the probability would be meaningless.
H: Can I perform a Logistic regression on this data? I have the data below: I want to explain the relationship between 'Milieu' who has two factors, and 'DAM'. As you may notice, the blue population's included in the red population. Can I apply a logistic regression? AI: Yes. If you have numeric features for a classification problem, you can apply logistic regression. However, you are unlikely to see spectacular results for this data. Let's look at the classic example Iris data set that does perform well under logistic regression: This data set works well because the classes are largely linearly separable. Essentially, you could draw lines on that graph to separate the classes. Logistic regression is able to learn this and correctly classify most samples. In the case of your data, logistic regression (and all other methods of classification) will struggle in the "overlapping" region because the features you have available simply don't provide enough information to correctly identify classes in the this region. You should still see some success outside of this region. The best way to answer whether or not logistic regression will meet your needs is to run an experiment. Run it on your training data while holding back a test set and check performance on the held out data. If this gives performance that meets your needs, you're good to go. Otherwise, you'll likely need to explore additional features or come up with another way to solve this problem.
H: Understanding SGD for Binary Cross-Entropy loss I'm trying to describe mathematically how stochastic gradient descent could be used to minimize the binary cross entropy loss. The typical description of SGD is that I can find online is: $\theta = \theta - \eta *\nabla_{\theta}J(\theta,x^{(i)},y^{(i)})$ where $\theta$ is the parameter to optimize the objective function $J$ over, and x and y come from the training set. Specifically the $(i)$ indicates that it is the i-th observation from the training set. For binary cross entropy loss, I am using the following definition (following https://arxiv.org/abs/2009.14119): $$ L_{tot} = \sum_{k=1}^K L(\sigma(z_k),y_k)\\ L = -yL_+ - (1-y)L_- \\ L_+ = log(p)\\ L_- = log(1-p)\\ $$ where $\sigma$ is the sigmoid function, $z_k$ is a prediction (one digit) and $y_k$ is the true value. To better explain this, I am training my model to predict a 0-1 vector like [0, 1, 1, 0, 1, 0], so it might predict something like [0.03, 0.90, 0.98, 0.02, 0.85, 0.1], which then means that e.g. $z_3 = 0.98$. For combining these definitions, I think that the binary cross entropy loss is minimized by using the parameters $z_k$ (as this is what the model tries to learn), so that in my case $\theta = z$. Then in order to combine the equations, what I would think makes sense is the following: $z = z - \eta*\nabla_zL_{tot}(z^{(i)},y^{(i)})$ However I am unsure about the following: One part of the formula contains $z$, and another part contains $z^{(i)}$, this doesn't make much sense to me. Should I use only $z$ everywhere? But then how would it be clear that we have prediction $z$ for the true $y^{(i)}$? In the original SGD formula there is also an $x^{(i)}$. Since this is not part of the binary cross entropy loss function, can I just omit this $x^{(i)}$? Any help with the above two points and finding the correct equation for SGD for binary cross entropy loss would be greatly appreciated. AI: You are confusing a number of definitions. The loss definition you provided is correct, yet the terms you used are not precise. I'll try to make the following concepts clearer for you: parameters, predictions and logits. I want you to focus on the logit concept, which is I believe the issue here. First, binary classification is a learning task where we want to predict which of two classes 0 (negative class) and 1 (positive class) an example $x$ comes from. Binary cross entropy is a loss function that is frequently used for such tasks. And, to use this loss function, the model is expected to output one real number $\hat{y} \in [0,1]$ for each example $x$. $\hat{y}$ represents the probability that the example is from the positive class 1. I'd rather write the loss as follows: $$\begin{align} L &= \sum_{i=1}^n l(\hat{y_i}, y_i)\\ l(\hat{y_i}, y_i) &= -y_i log(\hat{y_i}) -(1-y_i) log(1-\hat{y_i}) \end{align}$$ Now, the way our predictions $\hat{y}$ are computed depends on the family of models we choose to use. For example, if you use a logistic regression model, the model computes predictions as follows $\hat{y} = \sigma(z)$, where $z \in \mathbb{R}$ is called the logit (not the prediction) and $\sigma$ is the sigmoid function. In logistic regression, the logit is a linear function of your features $z = \theta x$, where $\theta$ is the parameter vector (which is independent from your set of examples) and $x$ is the example vector. 
So, $$\hat{y_i} = \sigma(z_i) = \sigma(\theta x_i) $$ In this case, the loss becomes: $$\begin{align} L &= \sum_{i=1}^n -y_i log(\hat{y_i}) -(1-y_i) log(1-\hat{y_i}) \\ &= \sum_{i=1}^n -y_i log(\sigma(\theta x_i) ) -(1-y_i) log(1-\sigma(\theta x_i) ) \end{align}$$ Now, compute the gradient of $L$ with respect to $\theta$ and plug it in your SGD update rule. To summarize, predictions are related to logits by the sigmoid function, and logits are related to example features by model parameters. I used logistic regression to simplify the discussion. Using a neural network, the relationship between logits and model parameters becomes more complicated. Last, I want to clarify that SGD can be used with a variety of models, so when you say it contains $x_i$ in its formula, you need to specify which family of models you are talking about.
H: How to find the number of operation ( multiplication or addition etc) required given a Keras model? I want to implement an FPGA code or hardware code of a Keras model. As a first step, I want to find the number of mathematical operations required to evaluate a predicted output given a model. The model below is a two-class classifier and a sample of input is a vector of size 232X1. The model is: model.add(keras.layers.Dense(5, input_dim=232, activation='relu')) model.add(keras.layers.Dense(1, activation='sigmoid')) The question is given in the model above, how many mathematical operations (plus, minus, multiplication, division) are required to find the output value. In my understanding since there are 5 output neurons in the first layer we have 5232 weights so we need to calculate 5232 multiplication in the first stage and next as we have 5 relu activation calculation. As there are no other layers except the last layer, which is just the output we need only 5 multiplication and 5 sigmoid calculation, and 5 addition. Is the above approach correct? AI: To compute the number of elementary operations, you need to understand what is happening under the hood. Let $x$ be an input vector of size $n$. Given such a vector, a dense layer of $m$ units with an activation function $f$ will execute the following operation: $$a = f(Wx + b)$$ $W$ is the weight matrix associated with the dense layer (its size is $m \times n$) and $b$ is the bias vector (of size $m$). We can derive from this formulation the following: The number of multiplications is $mn$ (this comes from the definition of the product $Wx$). The number of additions is $m(n-1) + m = mn$, where $m(n-1)$ comes from the definition of $Wx$ again, and $m$ comes from adding the bias vector $b$. Then, $f$ is applied $m$ times (once on each component of the resulting vector $Wx +b$). Applying this to your example: Layer 1 does $5 \times 232 = 1160$ multiplications and $1160$ additions, and applies $ReLU$ 5 times (because $m=5$, $n=232$ and $f=ReLU$), Layer 2 does $1 \times 5 = 5$ multiplications and $5$ additions, and applies $\sigma$ the sigmoid function 1 time only (because $m=1$, $n=5$ and $f=\sigma$) The total number of multiplications is: $1165$ and the total number of additions is: $1165$.
H: Building a graph out of a large text corpus I'm given a large amount of documents upon which I should perform various kinds of analysis. Since the documents are to be used as a foundation of a final product, I thought about building a graph out of this text corpus, with each document corresponding to a node. One way to build a graph would be to use models such as USE to first find text embeddings, and then form a link between two nodes (texts) whose similarity is beyond a given threshold. However, I believe it would be better to utilize an algorithm which is based on plain text similarity measures, i.e., an algorithm which does not "convert" the texts into embeddings. Same as before, I would form a link between two nodes (texts) if their text similarity is beyond a given threshold. Now, the question is: what is the simplest way to measure similarity of two texts, and what would be the more sophisticated ways? I thought about first extracting the keywords out of the two texts, and then calculate Jaccard Index. Any idea on how this could be achieved is highly welcome. Feel free to post links to papers that address the issue. NB: I would also appreciate links to Python libraries that might be helpful in this regard. AI: It looks to me like topic modeling methods would be a good candidate for this problem. This option has several advantages: it's very standard with many libraries available, and it's very efficient (at least the standard LDA method) compared to calculating pairwise similarity between documents. A topic model is made of: a set of topics, represented as a probability distribution over the words. This is typically used to represent each topic as a list of top representative words. for each document, a distribution over topics. This can be used to assign the most likely topic and consider the clusters of documents by topic, but it's also possible to use some subtle similarity between the distribution. The typical difficulty with LDA is picking the number of topics. A better and less known alternative is HDP, which infers the number of topics itself. It's less standard but there are a few implementations (like this one) apparently. There are also more recent neural topic models using embeddings (for example ETM). Update Actually I'm not really convinced by the idea to convert the data into a graph: unless there is a specific goal to this, analyzing the graph version of a large amount of text data is not necessarily simpler. In particular it should be noted that any form of clustering on the graph is unlikely (in general) to produce better results than topic modelling: the latter produces a probabilistic clustering based on the words in the documents, and this usually offers a quite good way to summarize and group the documents. In any case, it would possible to produce a graph based on the distribution over topics by document (this is the most natural way, there might be others). Calculating a pairwise similarity between these distributions would represent closely related pairs of documents with a high-weight edge and conversely. Naturally a threshold can be used to remove edges corresponding to low similarity edges.
H: What is meant by this notation for ensemble classifier error rate The below is a picture which denotes the error of an ensemble classifier. Can someone help me understand the notation What does it mean to have (25 and i) in brackets and what is ε^1 is it error of first classifier or the error rate raised to power i. Can someone explain this formulae. AI: $\varepsilon^i$ is the error rate raised to the power i. So for each value i, the formula calculates the probability of i classifiers classifying a sample incorrectly, so for i=13 we have: $$e_{13\ wrong} = {25 \choose 13} \times \varepsilon^{13} \times {(1-\varepsilon)}^{12}$$ Assuming $\varepsilon = 35\%$, and calculating the binomial coefficient gives us: $$e_{13\ wrong} = 5,200,300 \times 0.35^{13} \times 0.65^{12} = 0.035$$ Repeat this for $i = 14, 15, ... , 25$, then sum all the results to get the final answer.
H: is it good to have 100% accuracy on validation? i'm still new in machine learning. currently i'm creating an anomaly detection for flight data. it is a multivariate time series data that include timestamp, latitude, longitude, velocity and altitude of the aircraft. i'm splitting the data into train and test with 80% ratio. i used the keras LSTM autoencoder to do a anomaly detection. so here's my code def create_sequence(data, time_step = None): Xs = [] for i in range (len(data) - time_step): Xs.append(data[i:(i + time_step)]) return np.array(Xs) # pre-process to split the data dfXscaled, scalerX = scaledf(df, normaltype=normalization) num_train = int(df.shape[0]*ratio) values_dataset = dfXscaled.values train = values_dataset[:num_train, :] test = values_dataset[num_train:, :] # sequence input data [sample, time step, features] train_input = create_sequence(train, time_step = time_step) test_input = create_sequence(test, time_step = time_step) train_time = index_time.index[:num_train] test_time = index_time.index[num_train:] # model model_arch = [] last_layer = num_layers - 1 for x in range(num_layers): if x == last_layer: model_arch.append(tf.keras.layers.LSTM(num_nodes, activation='relu', return_sequences=True, dropout = dropout)) else: model_arch.append(tf.keras.layers.LSTM(num_nodes, activation='relu', input_shape=(time_step, 4), dropout = dropout)) model_arch.append(tf.keras.layers.RepeatVector(time_step)) model_arch.append(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(4))) model = tf.keras.models.Sequential(model_arch) opt= tf.keras.optimizers.SGD(learning_rate=learning_rate) model.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=[tf.keras.metrics.MeanAbsolutePercentageError(name='mape'), tf.keras.metrics.RootMeanSquaredError(name='rmse'), "mae", 'accuracy']) history = model.fit(train_input, train_input, epochs=epochs, batch_size = num_batch, validation_data=(test_input, test_input), verbose=2, shuffle=False) when i do a model evaluation, it come up with 100% accuracy is it good to have 100% accuracy ? or my model is overfitting the data ? AI: Usually indicates something is wrong. In your case, things which do not seem right: One can easily get ~100% accuracy in anomaly detection - just keep predicting the majority class. Is this model really for anomaly detection? Anomaly detection is a classification problem, but your metrics (MAPE, RootMeanSquaredError etc.) are regression metrics.
H: How to verify if the behavior of CNN model is correct? I am exploring using CNNs for multi-class classification. My model details are: and the training/testing accuracy/loss: As you can see from the image, the accuracy jumped from 0.08 to 0.39 to 0.77 to 0.96 in a few epochs. I have tried changing the details of the model (number of filters, kernel size) but I still note the same behavior, and I am not experienced in deep learning. Is this behavior acceptable? Am I doing something wrong? To give some context: my dataset contains power traces for a side-channel attack on an EdDSA implementation. Each trace has 1000 power readings.
AI: Do you have the same results in the next epochs? If yes, your learning rate might be too high: are you using an Adam optimizer? It could also happen with some other hyperparameters, like:
A dropout rate that is too high, which resets too much of your network. If it is set to 0.5 or more, you could try a lower value like 0.1 or 0.2.
A bad weight initialization (use random or Xavier initialization, for instance).
In order to be more specific, I would need to read part of the code.
H: What degree of freedom should one use when calculating the standard deviation for standardizing data? I am writing a function to standardize the data and I found out that we can choose either ddof = 0 or ddof = 1, so I am confused about which one to choose and why. Does this make any difference?
AI: ddof represents the degrees-of-freedom adjustment, i.e. the amount subtracted from N in the standard deviation formula below. I suggest that if you are working with a sample from a population and you want an unbiased estimate, then you use ddof=1. If you want to treat the data as the whole population, you can use ddof=0.
$$ S= \sqrt{ \dfrac{1}{N-1}\sum_{i=1}^N \bigg( X_i-\bar X \bigg)^2 } $$
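A quick numerical illustration of the difference (the array is arbitrary); note that for standardization on a reasonably large dataset the two choices are practically indistinguishable:
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(np.std(x, ddof=0))   # population SD, divides by N      -> 2.0
print(np.std(x, ddof=1))   # sample SD, divides by N - 1      -> ~2.138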
H: What are the requirements for a word list to be used for Bayesian inference? Intro I need an input file of 5 letter English words to train my Bayesian model to infer the stochastic dependency between each position. For instance, is the probability of a letter at the position 5 dependent on the probability of a letter at position 1 etc. At the end of the day, I want to train this Bayesian network in order to be able to solve the Wordle game. What is Wordle? It’s a game where you guess 5 letter words, and it tells you how many letters you got correct and if they are in the right positions or not. You only have six attempts. Concluding, Wordle is about narrowing down the distribution of what the true word could be. Problem What requirements should such a words list meet? Should I mix US and British english? Should I include all possible words? Even very exotic ones that nobody knows/uses? Should these words be processed/normalized in some way? Does it make sense to use multiple sources? Is there any way to ensure the completeness and correctness? What I have did so far I modeled the Bayesian network consisting of 5 random variables for each letter at each position: $L1$, $L2$, $L3$, $L4$, $L5$ I came to the conclusion that the marginal probability of the searched word is $P(L1, L2, L3, L4, L5).$ In order to calculate the joint probability distribution I need a word list, so I asked myself the aboved questions I've found many sources for word lists, but I'm not sure if I should use one or all I have verified that both US and British English spellings occurred in the Wordle. PS: I know that the list of all possible solution words has been leaked. But I don't want to use such a list, because what if the makers of Wordle change the list again? AI: Well, it totally depends what you want to do with the resulting probability model. If you're planning to use the model for spelling correction for example, you should probably use a vocabulary as large as the kind of text you're expecting to process. In general this is actually done not from a list of words but from a large corpus of text, taking all the n-grams up to 5 in the text into account. It's possible that restricting to 5 letters word would not represent the same probabilities than in the full language. But again this choice depends what is the target task for the model. Answering the updated questions: Ideally, you would use the same database as Wordle itself. I'm not sure but as you said this could backfire if the list is changed later. As far as I know (I happen to play it too!), the game seems to work with fairly standard English vocabulary, so I'm guessing that any standard vocabulary would fit. Should I mix US and British english? I don't know. Should I include all possible words? Even very exotic ones that nobody knows/uses? For the sake of completeness I think you can, but your model could include the global probability of word in order to make standard words more likely than rare words. An option that comes to mind is to use the Google NGrams data (here unigrams) and extract only five letters words. Should these words be processed/normalized in some way? Only for capitalization, I think. Does it make sense to use multiple sources? Is there any way to ensure the completeness and correctness? This could be tricky, because mixing different sources can cause some bias in the n-grams probabilities.
H: Interpreting interaction term coefficient in GLM/regression I'm a psychology student and trying come up with a research plan involving GLM. I'm thinking about adding an interaction term in the analysis but I'm unsure about the interpretation of it. To make things simple, I'm going to use linear regression as an example. I'm expecting a (simplified) model like this: $$y = ax_{1} + bx_{2} + c(x_{1}*x_{2})+e$$ In my hypothesis, $x_{1}$ and $y$ are negatively correlated, and $x_{2}$ and $y$ are positiely correlated. As for correlation between $x_{1}$ and $x_{2}$, it is unknown. Now the question is, if we make a model and get a coefficient $c$, how can we interpret it, whether it's positive or negative? The reason I'm confused is that $x_{1}$ and $x_{2}$ have different effects interms of direction (positive or negative) towards $y$. Do I have to make $x_{1}$ or $x_{2}$ into a reciprocal so that both variables have the same directional effects towards $y$? Another possibility that I can think of is that $c$ it self does not explain the whole of interaction effect and another test needs to be run to specify that. Thank you in advance. AI: if we make a model and get a coefficient c, how can we interpret it, whether it's positive or negative? One key issue on interaction variables is interpretation. Let's remember that we're usually looking for marginal effects (as $dy/dx_1$ or $dy/dx_2$). Therefore the (estimated) derivative of each is $a + cx_2$ and $b + cx_1$ respectively, which means that the change is not constant, but dependent on the values of $x_2$ and $x_1$. We can rewrite the derivatives condition as $dy/dx_1=a + cx_2<0$ and $dy/dx_2 = b + cx_1 >0$. There are many ways to interpret this. For example, let's suppose $x_1$ and $x_2$ are strictly increasing and positive and $c$ turn out to be positive. In that case, $a$ has to be really negative for the inequation to hold for every value of $x_2$ (i.e. $a < -cx_2$). So, in this type of models, coefficient interpretation is not as straight-forward as in linear (in variables) models. So, $c$ could be either positive or negative. That's why you need to verify if the combination of ($a,c$) or ($b,c$) give positive slopes (derivatives) or not. Geometrics come very handy in this case. Do I have to make x1 or x2 into a reciprocal so that both variables have the same directional effects towards y? No, you don't need to. Though, it could help the interpretation a little bit. Another possibility that I can think of is that c it self does not explain the whole of interaction effect and another test needs to be run to specify that. In your example, $c$ does capture the interaction effect. IF you're willing to test if $c=0$, or not, is a different test (rather that the "sign test" done in the previous question). If $c$ is statistically insignificant ($c=0$) then interaction effect is null and you could interpret this model as a simple linear one, requiring that $a<0$ and $b>0$.
H: Compare string entries of columns in different pandas dataframes I have two dataframes, df1 and df2, both with different number of rows. df1 has a column 'NAME', a short string; and df2 has a column 'LOCAL_NAME', a much longer string that may contain the exact contents of df1.NAME. I want to compare every entry of df1.NAME with every entry in df2.LOCAL_NAME, and if df1.NAME appears in a particular entry of df2.LOCAL_NAME, I want to create add an entry in a new column df2.NAME_MAP = df1.NAME. If it doesn't appear in the long string df2.LOCAL_NAME, the corresponding entry in df2.NAME_MAP will be df2.LOCAL_NAME For now, efficiency is not an issue. Here are sample datasets. df1 = pd.DataFrame({ "NAME" : ['222', '111', '444', '333'], "OTHER_COLUMNS": [3, 6, 7, 34] }) df2 = pd.DataFrame({ "LOCAL_NAME": ['aac111asd', 'dfse222vdsf', 'adasd689as', 'asdv444grew', 'adsg243df', 'dsfh948dfd'] }) df1: NAME OTHER_COLUMNS '222' 3 '111' 6 '444' 7 '333' 34 df2: LOCAL_NAME 'aac111asd' 'dfse222vdsf' 'adasd689as' 'asdv444grew' 'adsg243df' 'dsfh948dfd' The goal is to create another column in df2 called NAME_MAP which has the value of df.NAME if that string is contained exactly in the larger df2.LOCAL_NAME string. df2 would now look like this: LOCAL_NAME NAME_MAP 'aac111asd' '111' 'dfse222vdsf' '222' 'adasd689as' 'adasd689as' 'asdv444grew' '444' 'adsg243df' 'adsg243df' 'dsfh948dfd' 'dsfh948dfd' Then I can join the two dataframes on NAME_MAP: LOCAL_NAME NAME_MAP NAME (from df1) OTHER_COLUMNS (from df1) 'aac111asd' '111' '111' 6 'dfse222vdsf' '222' '222' 3 'adasd689as' 'adasd689as' NaN NaN 'asdv444grew' '444' '444' 7 'adsg243df' 'adsg243df' NaN NaN 'dsfh948dfd' 'dsfh948dfd' NaN NaN How do I go about trying to do this string comparison in two datasets of different sizes? AI: Here's a way to solve it Create a df with cartesian product of both dataframes such as here : https://stackoverflow.com/questions/53907526/merge-dataframes-with-the-all-combinations-of-pks cp = df2.assign(key=0).merge(df1.assign(key=0), how='left') Keep only the lines where NAME is in LOCAL NAME (just print cp after that so you understand what's done) cp['key'] = [1 if x in y else 0 for x,y in zip(cp['NAME'],cp['LOCAL_NAME'])] cp = cp[cp['key'] == 1].drop(['key'], axis=1) Merge, and fill the ones without combination by the local name df2 = df2.merge(cp, how='left', on='LOCAL_NAME') df2['NAME'] = df2['NAME'].fillna('') df2['NAME'] = [y if x == '' else x for x,y in zip(df2['NAME'],df2['LOCAL_NAME'])] Result : LOCAL_NAME NAME OTHER_COLUMNS 0 aac111asd 111 6.0 1 dfse222vdsf 222 3.0 2 adasd689as adasd689as NaN 3 asdv444grew 444 7.0 4 adsg243df adsg243df NaN 5 dsfh948dfd dsfh948dfd NaN
H: how to calculate loss function? i hope you are doing well , i want to ask a question regarding loss function in a neural network i know that the loss function is calculated for each data point in the training set , and then the backpropagation is done depending on if we are using batch gradient descent (backpropagation is done after all the data points are passed) , mini-batch gradient descent(backpropagation is done after batch) or stochastic gradient descent(backpropagation is done after each data point). now let's take the MSE loss function : how can n be the number of data points ?, because if we calculate the loss after each data point then n would be only 1 everytime. also i saw a video in where they put n as the number of nodes in the output layer. link to video( you can find what i'm talking about in 5:45) : https://www.youtube.com/watch?v=Zr5viAZGndE&t=5s therefore iam pretty confused on how we calculate the loss function ? and what does n represent? also when we have multiple inputs, will we only be concerned with the output that the weight we are trying to update influence ? thanks in advance AI: As the image says, n represents the number of data points in the batch for which you are currently calculating the loss/performing backpropagation. In the case of batch gradient descent this would be the number of observations in the complete dataset, in the case of mini-batch gradient descent this would be equal to the batch size (or lower if you are using an incomplete batch of data), or 1 in the case of stochastic gradient descent. The reason that the video talks about summing the error over the number of nodes in the output layer is because in their example they are using a network with multiple output nodes, whereas MSE is generally used for regression problems where you are only using a single output node (see for example also this question). A network that uses multiple inputs does not have an impact on how the loss is calculated, in addition because of the chain rule used in backpropagation the algorithm only looks at the partial derivative of the loss with respect to a single weight/bias.
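As a small sanity check, here is a sketch of the batch MSE computation, where n is simply the number of data points in whatever batch you pass in (the full dataset, a mini-batch, or a single point):
import numpy as np

def mse_loss(y_true, y_pred):
    n = len(y_true)                          # number of data points in this batch
    return np.sum((y_true - y_pred) ** 2) / n

batch_true = np.array([1.0, 2.0, 3.0])       # mini-batch of 3 points -> n = 3
batch_pred = np.array([1.5, 1.5, 2.0])
print(mse_loss(batch_true, batch_pred))      # (0.25 + 0.25 + 1.0) / 3 = 0.5

single_true = np.array([1.0])                # stochastic gradient descent -> n = 1
single_pred = np.array([1.5])
print(mse_loss(single_true, single_pred))    # 0.25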
H: Interpreting cluster variables - raw vs scaled I already referred these posts here and here. I also posted here but since there is no response, am posting here. Currently, I am working on customer segmentation using their purchase data. So, my data has below info for each customer Based on the above linked posts I see that for clustering, we have to scale the variables if they are in different units etc. But if I scale/normalize all of them to uniform scale, wouldn't I lose the information that actually differentiates the customers from one another? But I also understand that monetary value could construed as high weight model because they might go upto range of 100K or millions as well. Let's assume that I normalized and my clustering returned 3 clusters. How do I answer below questions meaningfully? q1) what is the average revenue from customers who are under cluster 1? q2) what is the average recency (in days) for a customer from cluster 2? q3) what is the average age of customer with us (tenure) under cluster 3? Response to all the above question using normalized data wouldn't make sense because they ll amight be in a unform scale mean 0, sd 1 etc So, I was wondering whether it is meaningful to do the below a) cluster using normalized/scaled variables b) Once clusters are identified, use customer_id under each cluster to get the original variable value (from input dataframe before normalization) and make inference or interpret clusters? So, do you think it would allow me to answer my questions in a meaningful way Is this how data scientists interpret clusters? they always have to link back to input dataframe? AI: A simple way to estimate the loss of data due to the normalization/scaling, is to apply the inverted algorithm to see how different it is from the raw data. If the data loss is very low (ex: 0.1%), scaling is not an issue. On the other hand, if your clusterization works very well for 10k customers, it shall work well for 1 million. Generally speaking, it is better to have a very good model on a small random dataset and then increase it progressivelly until you reach the production scale. You can either make clusters from one feature, or several features. Due to the problem complexity, it is generally better to start with one feature, and then extend to several features. Making clusters from several features works better with dimensional reduction algorithms (ex: UMAP), because you project all your dimensions in a 2D plan automatically and make interesting correlation studies for all customers. If you apply a good multi dimensional clustering, all the features are taken into account and every point is represented by a customer id. If you select a cluster through a cluster technique (ex: DBSCAN), you just have to extract the list of the customers from this cluster, filter the raw data with this list, and start your data analysis to answer q1,q2 or q3. Note that normalization depends on the dimensional reduction algorithm you are using. UMAP wouldn't require data normalisation, whereas t-SNE or PCA requires it. https://towardsdatascience.com/tsne-vs-umap-global-structure-4d8045acba17 Finally, clusters' interpretation should be backed by actual proofs: even if algorithms are often very efficient in clustering data, it is crucial to add indicators to check if the data has been well distributed (for instance comparing mean or standard deviation values between clusters). 
In some cases, if the raw data have too wide a distribution, it could be worth applying a log transform, but you might lose some information.
H: What is the Purpose of Feature Selection I have a small medical dataset (200 samples) that contains only 6 cases of the condition I am trying to predict using machine learning. So far, the dataset is not proving useful for predicting the target variable and is resulting in models with 0% recall and precision, probably due to how small the dataset is. However, in order to learn from the dataset, I applied Feature Selection techniques to deduce which features are useful in predicting the target variable and see if this supports or contradicts previous literature on the matter. However, when I reran my models using the reduced dataset, this still resulted in 0% recall and precision. So the prediction performance has not improved. But the features returned by applying Feature Selection have given me more insight into the data. So my question is, is the purpose of Feature Selection to improve prediction performance, or can the purpose be identifying relevant features in the prediction and learning more about the dataset? So in other words, is Feature Selection just a tool for improved performance, or can it be an end in itself? Also, if using the subset of features returned by Feature Selection methods does not improve the accuracy or recall of the model, how can I demonstrate that these features are indeed relevant in my prediction? If you can link some resources about this issue that would be very useful. Thank you.
AI: You partially answered your own question. Feature selection is for gaining insight into your problem, regardless of whether or not it is actually used in a model. This is particularly important when using a small number of features, as you have stated, since you might expect importance to surface when doing modeling. However, if it is contrary to what you expect, that is important as well, since it might indicate problems with sample size, measurement, etc. Feature selection can also be used to improve performance, if you downplay interpretability and are willing to monitor the model and optimize it when it degrades. The difference between the two is that if you choose the second approach and your model degrades, I think you will need to explain what is happening in terms of interpretability, or just reoptimize it and 'hope for the best' (not recommended). Many times companies don't care if your model is performing well, but will begin to question it if it is not. In the first case, you will always have an interpretable model, with (hopefully) acceptable performance. There are also techniques such as Lasso regression which enable you to perform some optimization, by shrinking the coefficients to an 'interpretation level' that is acceptable. So both explainability AND performance are used nowadays for feature selection. The choice often depends upon the specific type of problem. Modeling for social and health issues requires interpretation, while 'big data' types of problems often call for performance-enhancing feature selection.
H: How to return the number of values that has a specific count I would like to find how many occurrences of a specific value count a column contains. For example, based on the data frame below, I want to find how many values in the ID column are repeated twice | ID | | -------- | | 000001 | | 000001 | | 000002 | | 000002 | | 000002 | | 000003 | | 000003 | The output should look something like this Number of ID's repeated twice: 2 The ID's that are repeated twice are: | ID | | -------- | | 000001 | | 000003 | Any help would be appreciated. AI: You can use df['var'].value_counts() to get this info. Example: import pandas as pd x = pd.Series(['000001', '000001', '000002', '000002', '000002', '000003', '000003']) vc = x.value_counts() vc.index[vc == 2] # Index(['000003', '000001'], dtype='object') Beware though of potential conversion of the original data into strings for the series index though. (If that is a problem, using something like df.groupby('x',as_index=False).size() may be a better option.)
H: Is it recommended to train a NER model using a dataset that has all tokens annotated? I'd like to train a model to predict the constant and variable parts in log messages. For example, considering the log message: Example log 1, the trained model would be able to identify: 1 as the variable Example, log labeled as the constants. To train the model, I'm thinking of leveraging a training dataset that would have all tokens in all of the log entries annotated. For example, for a particular log entry in the dataset, we would have a number of 8 tokens, of which 6 would be constants and 2 would be variables. However, from what I've seen so far, most NER tasks only annotate part of the textual entries, rather than annotating all tokens in the training data. Thus, is this the right way to tackle this problem? Should I formulate the problem differently, namely not as a NER task, maybe? OBS: To clarify the difference between constants and variables, these refer to the parts that constitute an original code logging statement. In such a statement, the constants are the textual parts written by developers which remain the same during the execution of the system, whereas the variables is information that is generated during runtime. AI: There's no problem with this. Technically a sequence labeling model (this is the general name of the problem of which NER is a particular example) actually always annotates all the tokens. For example, POS tagging is another sequence labeling task in which all the tokens must be receive a label. In the case of the NER task, one is only interested in extracting the entities from the text, this is why any other token is assigned a default label (the "Outside" label in the BIO format, for Begin/Inside/Outside). What matters for the model is whether enough information is provided in the context to recognize the class. The features are usually designed through patterns which describe conditions about the current word or any word in the context. For example a feature could be used to represent whether the current token is a word or made of digits, or whether the previous token belongs to a predefined set of words, etc.
H: Can depth be used as a feature when predicting rock type from well log data? I am trying to predict the lithofacies, i.e. the rock type, from well log data, a project very similar to the one described in this tutorial. A well log can be seen as a 1D curve tracking how a given property (e.g. gamma radiation, electrical resistivity, etc.) varies as a function of depth. The idea is to use these 1D arrays as the input features to train a Machine Learning model (e.g. SVM or Random Forest) to infer the facies at a given depth. For instance, in the image below, the first 5 tracks (GR to PE) are the well logs used as features while the last 2 tracks (Facies and Prediction) correspond to the true and predicted facies. One of my colleagues started using depth as a feature, thus obtaining much higher scores than when working with well logs only. While this may make sense from a geological standpoint, as certain rock types are expected within a given depth range, I think that this will cause model overfitting. [EDIT from June 1, 2022] I am concerned that doing so would put "too much constraint" on the model. Is this explanation correct, or may depth (or position) be used as a feature to train a ML model?
AI: I don't see any problem with using depth. Instead of putting "too much constraint" on the model, I would say it provides "extra information" or "predictive power", just like the well logs. This is what a feature does. Think of it another way: if depth could harm the model (say, by causing overfitting), one could argue that any of the 5 tracks of well log features could do the same. As a separate topic, "putting extra constraints on a model" usually reduces overfitting, and is often done deliberately. This technique is called regularization.
H: Classification Produces too Many False Positives or False Negatives I trying to classify this data set (https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset) to classify if a patient is at risk for having a stroke. As the title says, whatever test I run to classify the patients, I keep running into the final results having too many false-positives or too many false-negative results. The data itself is severely imbalanced (95% 0s to 5% 1 (had a stroke)) and in spite of doing various things to try and balance it or compensate for it, I keep running into the same ends. For the record, yes, I have tried SMOTEing the training data set with no success. Furthermore, I've read a few articles against SMOTEing the test data set due to data leakage (e.g. https://machinelearningmastery.com/data-leakage-machine-learning/ and https://imbalanced-learn.org/stable/common_pitfalls.html#data-leakage). Here are the codes I've been using. I'm using Python 3.10: X = stroke_red.drop('stroke', axis=1) # Removes the "stroke" column. Y = stroke_red.stroke # We're storing the dependent variable here. ####### Pipelining ####### from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.compose import ColumnTransformer cat_pipe = Pipeline( steps=[ ("impute", SimpleImputer(strategy="most_frequent")), ("oh-encode", OneHotEncoder(handle_unknown='ignore', sparse=False)) ] ) num_pipe = Pipeline( steps=[ ("impute", SimpleImputer(strategy="mean")), ("scale",StandardScaler()) ] ) cont_cols = X.select_dtypes(include="number").columns cat_cols = X.select_dtypes(exclude="number").columns process = ColumnTransformer( transformers=[ ("numeric", num_pipe, cont_cols), ("categorical", cat_pipe, cat_cols) ] ) ####### Splitting the data into train/test ####### from sklearn.model_selection import train_test_split, cross_validate, GridSearchCV, StratifiedKFold #preprocessing. X_process = process.fit_transform(X) Y_process = SimpleImputer(strategy="most_frequent").fit_transform( Y.values.reshape(-1,1) ) X_train, X_test, Y_train, Y_test = train_test_split(X_process, Y_process, test_size=0.3, random_state=1111) # Splits data into train/test sections. Random_state = seed. from imblearn.over_sampling import SMOTENC from imblearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression sm = SMOTENC(categorical_features=[0,2,3], random_state=1111) X_train, Y_train = sm.fit_resample(X_train, Y_train) Finally, the Extreme Gradient Boosting algorithm: import xgboost as xgb boostah = xgb.XGBClassifier(objective='binary:logistic', n_estimators=100000, max_depth=5, learning_rate=0.000001, n_jobs=-1, scale_pos_weight=20 ) # scale_pos_weight is a weight. #0s / #1s . boostah.fit(X_train,Y_train) predict = boostah.predict(X_test) print('Accuracy = ', accuracy_score(predict, Y_test)) print("F1 Score = ", f1_score(Y_test, predict)) print(classification_report(Y_test, predict)) print(confusion_matrix(Y_test, predict)) Here are the confusion matrix results. 
Bear in mind, I had the SMOTE section commented out when running this:

Accuracy =  0.6966731898238747
F1 Score =  0.2078364565587734
              precision    recall  f1-score   support

           0       0.99      0.69      0.81      1459
           1       0.12      0.82      0.21        74

    accuracy                           0.70      1533
   macro avg       0.55      0.76      0.51      1533
weighted avg       0.95      0.70      0.78      1533

[[1007  452]
 [  13   61]]

Here are the results with SMOTE on:

Accuracy =  0.39008480104370513
F1 Score =  0.13506012950971324
              precision    recall  f1-score   support

           0       1.00      0.36      0.53      1459
           1       0.07      0.99      0.14        74

    accuracy                           0.39      1533
   macro avg       0.54      0.67      0.33      1533
weighted avg       0.95      0.39      0.51      1533

[[525 934]
 [  1  73]]

Any tips on fixing this? If you need my complete code, let me know, and I'll get it to you.
AI: Behind the scenes there is a confidence score associated with most models. You can retrieve these scores using model_name.predict_proba instead of model_name.predict. By default, predict uses a 0.5 confidence threshold, i.e. anything above a 0.5 confidence score is predicted to be in the positive class. All you have to do is alter that threshold, and you can trade off performance between the two classes.
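Building on that, here is a minimal sketch of sweeping the decision threshold with predict_proba (the variable names follow the question's code; the thresholds tried are arbitrary):
import numpy as np
from sklearn.metrics import f1_score, confusion_matrix

# probability of the positive class (stroke) instead of hard 0/1 predictions
probs = boostah.predict_proba(X_test)[:, 1]

for threshold in np.arange(0.1, 0.9, 0.1):
    preds = (probs >= threshold).astype(int)
    print(f"threshold={threshold:.1f}",
          f"F1={f1_score(Y_test, preds):.3f}",
          confusion_matrix(Y_test, preds).ravel())   # tn, fp, fn, tp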
H: How to fit a model on validation_data? Can you help me understand this better? I need to detect anomalies, so I am trying to fit an LSTM model using validation_data, but the losses do not converge. Do they really need to converge? Should the validation data resemble the train data, the test data, or something in between? Also, which value should be lower, loss or val_loss? Thank you!
AI: When validating machine learning models, you have to use a validation procedure that is consistent with your problem. For an anomaly detection use case, that means splitting your data correctly and evaluating your model with the right metrics.
Splitting the data
You have to choose carefully how you split your data. By default, you define three different sets: the training, validation and test sets. The train-validation-test split is the most appropriate if the observations are truly independent and the notion of time is not important in your problem. It is the best one because the distribution of your training data should be similar to your validation and test datasets.
Example 1: To detect anomalies in banking transactions, the observations are independent and time is not important. A train-validation-test split seems to be an appropriate choice.
Example 2: To detect anomalous temperatures in time series, time is an important variable because it might be possible to learn these anomalous temperatures from future data, which would then introduce a look-forward bias. In that situation, refer to sklearn's TimeSeriesSplit. If you have few observations, you can also take a look at cross-validation.
Because you are using LSTM models, which are designed for time series modeling, I guess you might be in the second configuration.
Which loss to minimize?
You always want to minimize the validation loss. The correct model selection procedure looks like this:
Select a set of models and features to optimize.
For each model: train the model on the train set, then evaluate it on the validation set.
Select your best model according to your validation set metrics.
Evaluate it once, and only once, on the test set.
As you want to minimize the loss on the validation set, you don't especially need to converge on the training set. For example, in an overfitting situation, you can obtain a very low loss on the training set but a very high loss on the validation set. The test set metrics are your true compass.
Which metrics to use?
For an anomaly detection use case, you have to choose your metrics carefully. For most use cases, accuracy will be misleading, as the distribution of your labels is imbalanced and the positive labels (the anomalies) are normally more important than the non-anomaly class. You have to select metrics appropriate to the reasons above and to the problem you are trying to solve.
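As an illustration of the time-aware split mentioned above, here is a minimal sketch with sklearn's TimeSeriesSplit (the data array is just a placeholder):
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)   # placeholder time-ordered observations

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
    # training indices always precede validation indices, so there is no look-forward bias
    print(f"fold {fold}: train={train_idx.min()}-{train_idx.max()}, "
          f"val={val_idx.min()}-{val_idx.max()}")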
H: Combine multiple duplicate categorical variables into a single one for multiple linear regression I am trying to create a regression model that predicts the box office success of a movie, with one of the explanatory variables being the actors who appear in the film. My problem is that I decided to do the first 4 billed actors, but in the model, it is taking it as 4 separate variables (Actor 1, Actor 2, Actor 3, Actor 4). For example, Jack Nicholson is the lead in "as good as it gets" so he would be Actor 1, but in "a few good men", he would be Actor 2, so the model doesn't recognize them as the same value for calculations. I want the model to treat Actor 1 the same as Actor 4 for the inputs so that the order the actors are assigned does not impact the output. So (Tom Cruise, Brad Pitt) would be treated the same as (Brad Pitt, Tom Cruise). Is there a model/method that I could use to solve this problem? If my problem isn't clear I can clarify any further questions. AI: The issue is just that you consider the list of actors as ordered, but if they are considered as an (unordered) set it works perfectly. The regular "bag of words" representation used in text can perfectly handle this, considering all the different actors as the distinct "words", i.e. the vocabulary. The principle is simple: every actor is assigned an index $i$, for example by sorting the actors alphabetically. Every movie (instance) has a set of actors (can be any number) represented as an array of boolean values, where the index $i$ is 1 if and only if actor $i$ is in the movie.
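One convenient way to build this set-of-actors representation, assuming the cast is stored as a list per movie, is sklearn's MultiLabelBinarizer (the data below is made up for illustration):
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

movies = pd.DataFrame({
    "title": ["As Good as It Gets", "A Few Good Men"],
    "cast": [["Jack Nicholson", "Helen Hunt"],
             ["Tom Cruise", "Jack Nicholson", "Demi Moore"]],
})

mlb = MultiLabelBinarizer()
actor_features = pd.DataFrame(mlb.fit_transform(movies["cast"]),
                              columns=mlb.classes_,
                              index=movies["title"])
print(actor_features)   # one 0/1 column per actor; the order of billing no longer matters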
H: What are the differences between the below feature selection methods? Do the below codes do the same? If not, what are the differences? fs = RFE(estimator=RandomForestClassifier(), n_features_to_select=10) fs.fit(X, y) print(fs.support_) fs = SelectFromModel(RandomForestClassifier(), max_features=10) fs.fit(X, y) print(fs.support_) fs= RandomForestClassifier(), fs.fit(X, y) print(fs.feature_importances_[:10,]) AI: They are not the same. As the name suggests, "recursive feature elimination" (RFE) recursively eliminates features, by fitting the model and throwing away the least-important one(s). After removing one feature, the next iteration may find the remaining features have changed order of importance. This is especially true in the presence of correlated features: they may split importance when included together, so might both be dropped by your second approach; but in RFE, one gets dropped at some point, but then the other one appears more important in the following iterations (since it no longer splits its importance with its now-dropped companion) and so is kept. Your third approach doesn't do any feature selection; it just prints the first (not top) feature importances (according to the model fitted on all features).
H: Is it good practice for Keras/TensorFlow users to rely on the validation set for testing? Some sources consider a test/train split, such as with sklearn, to be expected practice, and validation is more or less reserved for k-fold validation. However, Keras has a somewhat different approach with its validation_split parameter. Different sources report different things on the subject, some suggesting that this replaces test/train splitting, and it seems it should obviously not be confused with k-fold cross-validation. Can anyone confirm or clarify what is generally expected among keras users on the subject? AI: After some additional digging I came across this issue at the Keras source repository which seems to outline the usage and some of the confusion surrounding the nomenclature of Keras' validation set. According to this, it appears it is correct to say that the validation set is equivalent to a test set, and the naming reflects how it is used to help assess the training process itself during training.
H: Correct way of calculating probability I have some data which shows how many orders were made by a certain customer group that bought a certain product type: And the same format but showing how many refunds were made: I am trying to answer a question: What is the probability that an order is made by a customer in the group [A - B] and is refunded? My approach was: being_in_group = df_final[df_final.customer_group.isin(['A','B','C','D'])]\ .groupby('customer_group')\ .agg({'order_id': 'count'}).sum(axis = 0) all_orders = df_final.groupby('customer_group').agg({'order_id': 'count'})\ .sum(axis = 0) p_being_in_group = round(being_in_group / all_orders, 5) being_refunded = df_final[(df_final.refund == True) & (df_final.customer_group.isin(['A','B']))]\ .groupby('customer_group')\ .agg({'order_id': 'count'})\ .sum(axis = 0) # or taking all customer groups being_refunded_all = df_final[(df_final.refund == True)]\ .groupby('customer_group')\ .agg({'order_id': 'count'})\ .sum(axis = 0) p_being_refunded = round(being_refunded / all_orders, 5) p_being_refunded_all = round(being_refunded_all / all_orders, 5) p_final_1 = p_being_in_group * p_being_refunded * 100 p_final_2 = p_being_in_group * p_being_refunded_all * 100 I am wondering if that is the correct approach - calculating the probability of an order being made by the group A & B and then checking the refunded orders - should I check the refunded orders in all of the data or only in the data where customer_group is A & B? AI: If my understanding is correct, your p_final_1 should give the correct result. More simply: P(groupAB ^ refunded) = #(groupAB ^ refunded) / #(total) where #(groupAB ^ refunded) is the number of orders in group A and B which are refunded #(total) is the total number of orders I think that this should be equal to p_final_1 because: p_final_1 = p_being_in_group * p_being_refunded = p(groupAB) * p(refunded | groupAB) = p(groupAB) * p(refunded ^ groupAB) / p(groupAB) = p(refunded ^ groupAB)
H: What are the hidden states in the Transformer-XL? Also, what does the recurrence wiring look like? After exhaustively reading the many blogs and papers on Transformer-XL, I still have some questions before I can say that I understand Transformer-XL (and by extension XLNet). Any help in this regard is hugely appreciated. When we say hidden states are transferred from one segment to another, what exactly is included in these hidden states? Are the weights of the networks implementing the attention mechanism (i.e. calculating the Q, K and V) included? Are the weights involved in calculating the input word embedding included in the hidden state? When the hidden states are transferred during recurrence, is this transfer from the encoder of one segment to the encoder of the next segment? Or is it from the decoder of the current segment to the encoder of the next segment? Is the decoder involved at all in the hidden state transfer? I see images like the following in the papers and blogs. What do the dots represent? Encoders? Decoders? Or an entire unit? I guess the answer to my second question will shed light on this one too. Thank you
AI: By hidden states, they mean the outputs of the layers, i.e., what you get after the feed-forward sub-layer. For Transformer-XL, it is important that these are also what you use as an input to the self-attention. Therefore, at inference time, if you want to compute the states recursively by segments (presumably because you cannot fit the entire input in memory), this is the only thing you need to remember from the previous steps to continue the computation. There is no encoder; you can think of Transformer-XL as a decoder-only model. Transferring the states just means remembering them so you can do the self-attention over them, but you can no longer back-propagate through them, because you only remember the values and not the entire computation graph telling you how you got them. The dots in the scheme correspond to the hidden states: one state per input subword and per layer. The lines between them are the self-attention links.
H: Do we need the outer discount term when implementing REINFORCE algorithm I am learning the REINFORCE algorithm, which seems to be a foundation for other algorithms. I saw the $\gamma^t$ term in Sutton's textbook. But later when I watch Silver's lecture on this, there's no $\gamma^t$ term. I read several implementations of the REINFORCE algorithm and seems no one includes this term. (However, when calculating return $G_t$, they do use $\gamma$ to do discounting.) I am wondering, is it OK to just omit the outer $\gamma^t$ while still using it to calculate the return ($G_t$ or $\text{v}_t$)? Does the outer $\gamma^t$ term have nothing to do with the inner $\gamma^{k-t-1}$ term? Thanks. FYI, this is the implementation I studied: https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py AI: The answer to your question is already in the text which you copied from Sutton's and Barto's book. Sutton's and Barto's pseudocode describes the general, discounted, case. While the lecture by Silver talks about the undiscounted case. And in the undiscounted case $\gamma=1$ and therefore $\gamma^t=1$. So you can just omit it. Equation (13.8) in Sutton's and Barto's textbook also provides this simplified equation for the undiscounted case. Also note that Monte Carlo methods are applied to episodic MDPs only. Therefore, in many cases you might not need the discounted version.
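To make the difference concrete, here is a small sketch of a REINFORCE-style loss computed with and without the outer discount factor (the rewards and log-probabilities are placeholders; with gamma = 1 the two versions coincide, which is why most implementations simply drop the extra factor):
import numpy as np

gamma = 0.99
rewards = [1.0, 0.0, 2.0, 1.0]                    # placeholder episode rewards
log_probs = np.array([-0.1, -0.3, -0.2, -0.4])    # placeholder log pi(a_t | s_t)

# returns G_t: discounted sum of future rewards (the "inner" gamma, always used)
returns = np.zeros(len(rewards))
running = 0.0
for t in reversed(range(len(rewards))):
    running = rewards[t] + gamma * running
    returns[t] = running

# discounted-case pseudocode from Sutton and Barto: extra gamma^t factor per time step
loss_with_outer = -np.sum(gamma ** np.arange(len(rewards)) * returns * log_probs)

# common implementations (e.g. the linked PyTorch example): outer factor omitted
loss_without_outer = -np.sum(returns * log_probs)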
H: Naive Bayes Denominator clarification I came across an earlier post that was resolved and had a follow up to it but I couldn't comment because my reputation is under 50. Essentially I am interested in calculating the denominator in Naive Bayes. Now I understand that the features in Naive Bayes are assumed to be independent so could we calculate $p(x) = p(x_{1})p(x_{2})...p(x_{n})$ or would we have to use this formula $$p(\mathbf{x}) = \sum_k p(C_k) \ p(\mathbf{x} \mid C_k)$$ with the conditional independence assumption that$$ p(\mathbf{x} \mid C_k) = \Pi_{i} \, p(x_i \mid C_k) $$ My question is would both ways of calculating give the same p(x)? Link to the original question : https://datascience.stackexchange.com/posts/69699/edi Edit** : Sorry I believe the features have conditional independence, rather than complete independence. Therefore it is incorrect to use $p(x) = p(x_{1})p(x_{2})...p(x_{n})$? Lastly, I understand we don't actually need the denominator to find our probabilities but am asking out of curiosity. AI: The way to calculate $p(x)$ is indeed: $$p(x) = \sum_k p(C_k) \ p(x| C_k)$$ Since in general one needs to calculate $p(C_k,x)$ (numerator) for every $k$, it's simple enough to sum all these $k$ values. It would be incorrect to use the product, indeed. Lastly, I understand we don't actually need the denominator to find our probabilities but am asking out of curiosity. Calculating the marginal $p(x)$ is not needed in order to find the most likely class $C_k$ because: $$argmax_k(\{ p(C_k|x) \}) = argmax_k(\{ p(C_k,x) \})$$ However it's actually needed to find the posterior probability $p(C_k | x)$, that's why it's often useful to calculate the denominator $p(x)$ in order to obtain $p(C_k | x)$, especially if one wants to output the actual probabilities.
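A tiny numerical sketch of the marginal computation, with made-up priors and class-conditional likelihoods for two classes and two conditionally independent features:
# hypothetical numbers, purely for illustration
p_C = [0.6, 0.4]                  # p(C_0), p(C_1)
p_x_given_C = [0.2 * 0.5,         # p(x1|C_0) * p(x2|C_0)
               0.7 * 0.1]         # p(x1|C_1) * p(x2|C_1)

joint = [p_C[k] * p_x_given_C[k] for k in range(2)]   # numerators p(C_k, x)
p_x = sum(joint)                                      # denominator p(x)
posterior = [j / p_x for j in joint]                  # p(C_k | x), sums to 1

print(p_x, posterior)   # 0.088, [~0.682, ~0.318]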
H: Understanding the algebra behind a specific partial derivative equation I am following this article about neural networks. Given: Until here I understand everything, but then he continues to: I don't understand how he got to that conclusion. I think he skipped some algebra steps that would have made it easier for me to understand.
AI: We know that:
(1) $\frac{\partial}{\partial x}\big (f(x) + g(x) \big) = \frac{\partial}{\partial x}f(x) + \frac{\partial}{\partial x}g(x)$
(2) $\frac{\partial}{\partial x}a = 0$
Now,
\begin{align*}
&\frac{\partial}{\partial w_{12}^{1}} (w_{11}^{1}h_1^{2} + w_{12}^{1}h_2^{2} + w_{13}^{1}h_3^{2} + b_1^{1}) = & \text{[using (1)]}\\
&\frac{\partial}{\partial w_{12}^{1}} (w_{11}^{1}h_1^{2}) + \frac{\partial}{\partial w_{12}^{1}} (w_{12}^{1}h_2^{2}) + \frac{\partial}{\partial w_{12}^{1}} (w_{13}^{1}h_3^{2}) + \frac{\partial}{\partial w_{12}^{1}} b_1^{1} = & \\\\
& \text{since } h_1^2 \text{ is independent of } w_{12}^1 \text{ , } h_3^2 \text{ is independent of } w_{12}^1 \text{ , } b_1^1 \text{ is independent of } w_{12}^1 & \text{[using (2)]}\\
& = 0 + \frac{\partial}{\partial w_{12}^{1}} (w_{12}^{1}h_2^{2}) + 0 + 0 \implies \frac{\partial}{\partial w_{12}^{1}} (w_{12}^{1}h_2^{2})
\end{align*}
H: How should I use BERT embeddings for clustering (as opposed to fine-tuning BERT model for a supervised task) First of all, I want to say that I am asking this question because I am interested in using BERT embeddings as document features to do clustering. I am using Transformers from the Hugging Face library. I was thinking of averaging all of the Word Piece embeddings for each document so that each document has a unique vector. I would then use those vectors for clustering. Please feel free to comment if you think this is not a good idea, or if I am missing something or not understanding something. The issue that I see with this is that you are only using the first N tokens which is specified by max_length in Hugging Face library. What if the first N tokens are not the best representation for that document? Wouldn't it be better to randomly choose N tokens, or better yet randomly choose N tokens 10 times? Furthermore, I realize that using the WordPiece tokenizer is a replacement for lemmatization so the standard NLP pre-processing is supposed to be simpler. However, since we are already only using the first N tokens, and if we are not getting rid of stop words then useless stop words will be in the first N tokens. As far as I have seen, in the examples for Hugging Face, no one really does more preprocessing before the tokenization. [See example below of the tokenized (from Hugging Face), first 64 tokens of a document] Therefore, I am asking a few questions here (feel free to answer only one or provide references to papers or resources that I can read): Why are the first N tokens chosen, instead of at random? 1a) is there anything out there that randomly chooses N tokens perhaps multiple times? Similar to question 1, is there any better way to choose tokens? Perhaps using TF-IDF on the tokens to at least rule out certain useless tokens? Do people generally use more preprocessing before using the Word Piece tokenizer? To what extent does the choice of max_length affect performance? Why is there a limit of 512 max length in Hugging Face library? Why not just use the length of the longest document? Is it a good idea to average the WordPiece embeddings to get a matrix (if you want to do clustering)? Is it a good idea to use BERT embeddings to get features for documents that can be clustered in order to find similar groups of documents? Or is there some other way that is better? original: 'Trump tries to smooth things over with GOP insiders. Hollywood, Florida (CNN) Donald Trump\'s new delegate guru told Republican Party insiders at a posh resort here on Thursday that the billionaire front-runner is recalibrating the part "that he\'s been playing" and is ready tokenized: ['[CLS]', 'trump', 'tries', 'to', 'smooth', 'things', 'over', 'with', 'go', '##p', 'insider', '##s', '.', 'hollywood', ',', 'florida', '(', 'cnn', ')', 'donald', 'trump', "'", 's', 'new', 'delegate', 'guru', 'told', 'republican', 'party', 'insider', '##s', 'at', 'a', 'po', '##sh', 'resort', 'here', 'on', 'thursday', 'that', 'the', 'billionaire', 'front', '-', 'runner', 'is', 'rec', '##ali', '##bra', '##ting', 'the', 'part', '"', 'that', 'he', "'", 's', 'been', 'playing', '"', 'and', 'is', 'ready', '[SEP]'] AI: Here are the answers: In sequence modeling, we expect a sentence to be ordered sequence, thus we cannot take random words (unlike bag of words, where we are just bothered about the words and not really the order). 
For example: in bag of words, "I ate ice-cream" and "ice-cream ate I" are the same, while this is not true for models that treat the entire sentence as an ordered sequence. Thus, you cannot pick N random words in a random order.
Choosing tokens is model-dependent. You can always preprocess to remove stop words and other content such as symbols, numbers, etc. if it acts as noise rather than information. I would like to clarify that lemmatization and word-piece tokenization are not the same. For example, in lemmatization "playing" and "played" are both lemmatized to "play". But in the case of word-piece tokenization, they are likely split into "play"+"##ing" or "play"+"ed", depending on the vocabulary. Thus, more information is preserved.
max_length should be chosen such that most of your sentences are fully considered (i.e., most of the sentences should be shorter than max_length after tokenization). There are some models which consider the complete sequence length, for example Universal Sentence Encoder (USE), Transformer-XL, etc. However, note that you can also use a higher batch size with a smaller max_length, which makes the training/fine-tuning faster and sometimes produces better results.
The pretrained model was trained with a MAX_LEN of 512; it is a limitation of the model.
Specifically for BERT, as claimed by the paper, the embedding of the [CLS] token is sufficient for classification. Since it is an attention-based model, the [CLS] token captures the composition of the entire sentence, and is thus sufficient. However, you can also average the embeddings of all the tokens. I have tried both; in most of my work, the average of all word-piece tokens has yielded higher performance. Also, some works even suggest taking the average of the embeddings from the last 4 layers. It is merely a design choice.
Using sentence embeddings is generally okay. But you need to verify with the literature; there can always be a better technique. Also, there are models specific to sentence embeddings (USE is one such model), so you can check them out.
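If you do go the averaging route, here is a minimal sketch of mean-pooling BERT token embeddings with the Hugging Face Transformers library (the model name and max_length are just example choices), masking out padding tokens so they do not dilute the average:
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

docs = ["Trump tries to smooth things over with GOP insiders.",
        "Another example document."]

enc = tokenizer(docs, padding=True, truncation=True, max_length=128,
                return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**enc).last_hidden_state    # (batch, seq_len, 768)

mask = enc["attention_mask"].unsqueeze(-1)               # zero out padding positions
doc_vectors = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

print(doc_vectors.shape)   # (2, 768): one vector per document, ready for clustering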
H: How to predict unknown unknowns in machine learning I am dealing with a problem about classifying bird species through analysing MFCCs. I already built a dataset with 13 MFCCs for two kinds of birds. And I trained the data with Naive Bayes & KNN model. However, when I tried the model with prediction of third bird species, it is classified as the one of the two species. I am wondering how can I achieve to predict unknown species as unknowns? And I know my existing classification model may not work. So, what kind of model might be helpful? Does SSL useful in my case? Or treat these unknowns as outliers? But how can that be applied in MFCC? AI: If you want to predict wether a bird is a bird of one of your two classes or unknown you need three classes: $[bird A, bird B, unknown]$. For the unknown class you need data from birds which are neither $bird A$ nor $bird B$. You should make sure the number of rows for each of the three classes is roughly the same. If you don't have data of birds which are neither $bird A$ nor $bird B$ you can use anomaly detection to detect wether a bird is $unknown$ before predicting if it is $class A$ or $class B$.
H: How to give a 3D Tensor as input to LSTM I have X_train of shape (1400, 64, 35) and y_train of shape (1400,). I want to give X_train as input to an LSTM layer and also want to find the average (using a GlobalAveragePooling layer) of the output of the LSTM at each time step and give it as input to a Dense layer. For this problem, how do I connect the layers and build a sequential model? I'm using the TensorFlow Keras API.
AI: LSTM takes 3-dimensional tensors of shape (batch_size, time_steps, features) as input, which your X_train of shape (1400, 64, 35) already satisfies, so it can be fed to the LSTM layer directly. To average the LSTM output over the time steps, return the full sequence of outputs (return_sequences=True) and follow the LSTM with a GlobalAveragePooling1D() layer before the Dense layer.
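A minimal sketch of such a model (the layer sizes other than the input shape are arbitrary, and a binary target is assumed for the output layer):
import tensorflow as tf

model = tf.keras.Sequential([
    # return_sequences=True keeps one output vector per time step: (batch, 64, 32)
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(64, 35)),
    # average over the 64 time steps -> (batch, 32)
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=10, batch_size=32)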
H: Word representation that gives more weight to terms frequent in corpus? The tf-idf discounts the words that appear in a lot of documents in the corpus. I am constructing an anomaly detection text classification algorithm that is trained only on valid documents. Later I use One-class SVM to detect outliers. Interesting enough the tf-idf performs worse than a simple count-vectorizer. First I was confused, but later it made sense to me, as tf-idf discounts attributes that are most indicative of a valid document. Therefore I was thinking of a new approach that would weight words that always appear in documents more, or rather assign a negative weight for the absence of such words. I have preset dictionary of words, so there is no worry that irrelevant words such as(is, that) will be weighted. Do you have any ideas about such representations? The only thing I could imagine would be subtracting the document frequency from the attributes that are zero in a certain document. AI: I'm not aware of any standard representation which increases the importance of document-frequent words, but IDF can simply be reverted: instead of the usual $$idf(w,D)=\log\left(\frac{N}{|d\in D\ |\ w \in d|}\right)$$ you could use the following: $$revidf(w,D)=\log\left(\frac{N}{|d\in D\ |\ w \notin d|}\right)$$ However for the task you describe I would be tempted to try some more advanced feature engineering, typically by using features which represent how close the distribution of words in the current document is from the average distribution.
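Here is a small sketch of the reversed-IDF weighting applied to a count matrix from sklearn (the +1 smoothing in the denominator is my addition, to avoid dividing by zero when a word occurs in every document):
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the pump pressure is nominal",
        "pump pressure drop detected",
        "pressure sensor calibrated"]

vec = CountVectorizer()
counts = vec.fit_transform(docs).toarray()       # (n_docs, n_terms)

N = counts.shape[0]
absent = N - (counts > 0).sum(axis=0)            # number of documents NOT containing the term
rev_idf = np.log(N / (absent + 1))               # high weight for terms present in most documents

weighted = counts * rev_idf
print(dict(zip(vec.get_feature_names_out(), np.round(rev_idf, 3))))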
H: implementing forward and backward of a Linear model I'm implementing the code of this abstraction. The forward is easy and looks like that: I don't understand the backward path and how it fit's the abstraction in the first image: Why is db defined as multiplication of ones of x's shape and dout ? Why is dw defined as multiplication of ones of x.T and dout ? Why both of them are accumulated. i.e it is used += and not = ? Why is dw defined as multiplication of ones of dout and w.T ? AI: This is because the derivative wrt $b$ is $1$: $\frac{\partial E}{\partial b} = 1$ dout is the derivative of loss function wrt prediction. Using chain rule, $$ \frac{dE}{dw} = \frac{dE}{dy}\frac{dy}{ds}\frac{ds}{dw} $$ The last term is the vector of input features $x$. In your case dout is the combination of the first two terms. For example, for MSE loss and sigmoid activation dout $= (y-L)y(1-y)$ This is often used in optimizers for momentum calculation For MLPs, you need to compute gradients for coarse layers using gradients of deep layers. For example, for MLP with one hidden layer with features $\mathbf{z}$ (hence 3 in total) vector of gradients wrt weights in the input layer $\mathbf{w}^0$ would be $$ y= \sigma(\sum_kw^1_k \cdot\sigma(\sum_jw^0_jx_j))\\ \frac{\partial E}{\partial \mathbf{w^0}} = \frac{\partial L}{\partial y} \frac{\partial y}{\partial s} \frac{\partial s}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial \mathbf{w}^0} = \frac{\partial E}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial \mathbf{w}^{0}}\\ \frac{\partial E}{\partial \mathbf{w^0}} = (y-L) y(1-y) \sum_j\frac{\partial s}{\partial z_j}\frac{\partial z_j}{\partial \mathbf{w^0}} = (y-L) y(1-y) \sum_j\frac{\partial s}{\partial z_j}\frac{\partial z_j}{\partial s_j}\sum_i \frac{\partial s_j}{\partial w_{ij}}\\ \frac{\partial E}{\partial \mathbf{z}} = (y-L)y(1-y)\frac{\partial s}{\partial \mathbf{z}} = (y-L)y(1-y)\mathbf{w}^1 $$ So, in other words, in order to compute gradients for weights in the input layer, you need gradients wrt neurons in the hidden layer
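Here is a minimal NumPy sketch of the forward and backward passes these expressions describe (the shapes in the comments assume a batch of N rows; accumulating with += in the original code simply adds up gradients over several calls before an update):
import numpy as np

def linear_forward(x, w, b):
    # x: (N, D_in), w: (D_in, D_out), b: (D_out,)
    return x @ w + b                       # output: (N, D_out)

def linear_backward(x, w, dout):
    # dout: (N, D_out), the upstream gradient dE/d(output)
    dw = x.T @ dout                        # (D_in, D_out): ds/dw = x, chained with dout
    db = np.ones(x.shape[0]) @ dout        # (D_out,): ds/db = 1, so just sum dout over the batch
    dx = dout @ w.T                        # (N, D_in): passed further down for earlier layers
    return dw, db, dx

x = np.random.randn(4, 3)
w = np.random.randn(3, 2)
b = np.zeros(2)
dout = np.random.randn(4, 2)

dw, db, dx = linear_backward(x, w, dout)
print(dw.shape, db.shape, dx.shape)        # (3, 2) (2,) (4, 3)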
H: When combined correlation of features decreases I'm building a machine learning model in Python to predict soccer player values. I'm trying to predict a "player_value" column containing the value of a specific player. Consider a sample of the columns (features) I'm using. --------------------------------- appearances | goals | goals_per_game ------------|-------|--------------- 20 | 2 | 0.1 60 | 20 | 0.33 54 | 30 | 0.55 43 | 15 | 0.34 30 | 17 | 0.56 I thought that the correct way to use those columns would be creating a goals per game statistic (goals divided by appearances), since a player can have more goals than another player, but with less matches played. After that, the correlation of the refered columns with the column that I'm trying to predict (player value) decreased. The correlation of the "goals" and the "appearances" columns with the player value column was about 45% each, while the new "goals_per_game" column has a correlation around 18%. Should I use the columns "appearances" and "goals_per_game" columns individually and not use the "goals_per_game" column? Is my analysis wrong and it does not makes sense to use a "goals_per_game" metric since the player value is higher when using those features individually? AI: The only metric you can use to assess the added value of each feature, is the accuracy of your predictive classifier. You can simply include all three feature variables and build your classifier from those. Then you can remove each of the $3$ features and build the three possible classifiers from each of the remaining $2$ features (for example, 'appearances' and 'goals', 'appearances' and 'goals_per_game', and so forth). This analysis gives you the added value of each of the three features to your predictive accuracy. Note the apparent dependency between 'goals' and 'goals_per_game'. This strategy to feature selection is called sequential backward search. If you have an excess of $10$ features, more advanced algorithms like floating search and MCMC are likely to yield better performing subsets of features. In your case with solely $3$ feature variables sequential backward search should work fine.
H: How to convert DNA sequences in FASTA format to OneHot Encoded Pandas Dataframe for Neural Networks? DNA sequences in FASTA format look like: CATGCATTAGTTATTAATAGTAATCAATTACGGGGTCATTAGTTCA... I am trying to convert them into one-hot encoded data in a Pandas dataframe so that I can use various neural networks to analyze them. This has probably been done many times. Can someone point me to references or Python packages for it?
AI: I am not sure about a dedicated Python package, but you can simply do integer encoding using LabelEncoder() and then do one-hot encoding:

from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# seq is the sequence split into individual bases, e.g. list("CATGCATTAG...")
label_encoder = LabelEncoder()
integer_encoded_seq = label_encoder.fit_transform(seq)

onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded_seq = integer_encoded_seq.reshape(len(integer_encoded_seq), 1)
onehot_encoded_seq = onehot_encoder.fit_transform(integer_encoded_seq)

Refer to this.
H: I do feature engineering on the full dataset, is this wrong?
I am aiming to predict the number of days it takes to sell a given property; let's call this variable "DaysForSale", in short DfS.

Using the DfS I created a variable called "median_dfs_grouped_street_name", which returns the median number of days it takes to sell a property for the different streets available in the dataset (the street names are all categorized).

After this, I do my train/test split and run my Random Forest method. Using the feature_importances_ attribute I see that the new feature is the second most important, which makes me wonder whether this is the correct approach. I have two questions:

Is it wrong to develop features using the target variable?
Is it wrong to do feature engineering on the full dataset?
AI: Is it wrong to develop features using the target variable?
Not necessarily. It is called "target encoding" or "mean encoding" and can be very useful. In your case you could, for example, use the DfS of your train data to calculate a median value per street. But you need to carefully design the target encoding to avoid overfitting (there are different strategies to do that, see the link below). And for the test data you can only use the target encoding based on your train data. The Coursera course "How to Win a Data Science Competition: Learn from Top Kagglers" has great content on target/mean encoding, to be found here.

Is it wrong to do feature engineering on the full dataset?
Not necessarily. As pointed out in Nicolas' answer, you need to be careful not to leak data, though. Here's an example where it would be OK: let's assume one of your features is date of enlisting, which is the date when the property was published for sale. You could, for example, add a feature to the whole dataset called days since enlisting, which simply calculates the days between now and when the property was published for sale.

However, your median is an example which results in data leakage, since it is not "per row" feature engineering but "across rows" feature engineering applied to both train and test data. That's why the safer approach is to first split the data, remove the target variable from the val/test data, and then do the feature engineering. Thereby, you avoid any unintended data leakage. (A sketch of what this looks like in code follows below.)
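A minimal sketch of the "compute on train, apply to test" version of this encoding; the column names street_name and DaysForSale are assumptions based on the question:

import pandas as pd
from sklearn.model_selection import train_test_split

train, test = train_test_split(df, test_size=0.2, random_state=0)
train, test = train.copy(), test.copy()

# median days-for-sale per street, computed on the training split only
street_median = train.groupby("street_name")["DaysForSale"].median()
global_median = train["DaysForSale"].median()

train["median_dfs_grouped_street_name"] = train["street_name"].map(street_median)
# streets not seen during training fall back to the global median
test["median_dfs_grouped_street_name"] = test["street_name"].map(street_median).fillna(global_median)

For a more robust setup, the encoding used for the training rows themselves is usually computed out-of-fold, so that no row is encoded with a statistic that includes its own target value.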
H: Can we use sentence transformers to embed sentences without labels?
I was trying to use this project, https://github.com/UKPLab/sentence-transformers, for embedding non-English sentences. The language is not a human language, it's machine language (x86). The problem is that I cannot find a simple example showing how I can embed sentences using a custom dataset without any labels or similarity values for the sentences.

Basically, I have an array of sentence lists without any labels or similarity values, and I want to embed them into vectors in a way that preserves the semantics of each sentence as well as possible. So far I have used word2vec and doc2vec via the gensim library, so I wanted to try this method to see if it's any better.
AI: The link you provided, Siamese BERT, is an instance of a BERT or RoBERTa model finetuned on STS or NLI data, which can have the format "sentence 1 is similar 3 out of 5 to sentence 2" (STS). Hence it is supervised and does not fit your purpose.

Nonetheless, do not despair: there are some approaches that do not require training labels, although they may not perform as well as the supervised one.

The following use word embeddings, which you can train on your own corpora, to generate sentence embeddings:

Word Mover's Distance
S3E sentence embeddings

Or, by feeding just sentences line by line:

DeCLUTR
Sent2Vec

P.S. I have not tried all of these solutions; I suggest them because they are either quite well known or quite recent.
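Since you already have gensim word2vec models, a very simple unsupervised baseline worth comparing against is mean-pooling the word vectors of each "sentence" (instruction sequence). This is only a sketch: the tokenized sentences are placeholders, and the vector_size argument is the gensim 4.x name (older gensim versions call it size):

import numpy as np
from gensim.models import Word2Vec

sentences = [["mov", "eax", "ebx"], ["push", "ebp"], ["pop", "ebp"]]  # placeholder tokens
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

def embed_sentence(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    if not vecs:
        return np.zeros(model.vector_size)
    return np.mean(vecs, axis=0)          # one fixed-size vector per sentence

emb = embed_sentence(["mov", "eax", "ebx"], model)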
H: Why does Transfer Learning work better on smaller datasets than on larger ones?
This question is not about the utility of transfer learning compared with regular supervised learning.

1. Context
I'm studying health-monitoring techniques, and I practice on the C-MAPSS dataset. The goal is to predict the Remaining Useful Life (RUL) of an engine given series of sensor measurements. In health monitoring, a major issue is the low number of failure examples (one can't afford to perform thousands of run-to-failure tests on aircraft engines). This is why transfer learning has been studied to address this, in Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation, Zhang et al., 2018. My question is about the results presented in this article.

2. Question
The C-MAPSS dataset is composed of 4 subdatasets, each of which has different operational modes and failure modes. The article cited above performs transfer learning between these subdatasets. In particular, when training a model on a target subdataset B using the weights of a model trained on a source dataset A, they don't train on all of dataset B. They conduct an experiment in which they test various sizes for the target dataset B: they try 5%, 10%, ..., 50% of the total dataset B. The results are presented on page 11. A few cases excepted, they get better results on smaller target datasets. This seems counterintuitive to me: how could the model learn better from fewer examples? Why does transfer learning work better on smaller datasets than on larger ones?
AI: From the page 11 results in the article you provide, I think one cannot conclude that transfer learning works better on smaller datasets than on larger ones. If you look at the transfer learning score values (or RMSE) versus the size of the target training set, they also improve as the dataset size increases (for instance E2, E5 or E8). So transfer learning does not work better on small datasets.

However, you might be looking at the IMP index, which is based on the mean score (or RMSE) of learning with and without transfer learning:
$$ IMP = \left(1 - \frac{\text{WithTransfer}}{\text{NoTransfer}}\right) \times 100 $$
The index is based on two curves:

WithTransfer, which will show good performance even at the beginning, because with relevant transfer learning the model can already extract pertinent information from a very small target dataset.
NoTransfer, which will start with poor performance (difficulty to generalize) and then improve with the size of the target data.

The IMP index then has the shape you pointed out, for example with E2 and E5: the relative improvement is largest when the target dataset is small, because that is exactly where the no-transfer baseline is weakest.
H: Dealing with categorical variables in regression problems: which method to use?
Usually, if I have a regression problem and my initial dataset contains a categorical variable like:

column 1:
Math
Science
Science
English

I would convert this non-numerical variable to a numerical one such that Math: 0, Science: 1, English: 2. However, I recently found a tutorial that said this solution is not a good one, because there is no natural order among the classes: there is no meaningful increase from one class to the next, and even if there were, we could not quantify it. Can anyone explain this for me, because I usually work with solution one?
AI: This solution would work only if your values have an order. Some models use the distance between points in their learning function, and if you use your method, a student in Math and a student in English (0 and 2, a distance of 2) will be further apart than a student in Math and a student in Science (0 and 1, a distance of 1). Using this method introduces a bias, so you'll have to go another way.

One well-known method is One-Hot Encoding, which will create 3 binary variables Column1_Math, Column1_Science, Column1_English, with values 0 or 1 (for example, if column 1 is Math, then you'll have Column1_Math = 1, Column1_Science = 0, Column1_English = 0). This way, you avoid biasing your model.

I already explained other ways I know to deal with your issue in this answer, which I highly suggest you take a look at.
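A minimal one-hot encoding sketch in pandas; the column name column1 and the sample values are taken from the question, everything else is a placeholder:

import pandas as pd

df = pd.DataFrame({"column1": ["Math", "Science", "Science", "English"]})
onehot = pd.get_dummies(df["column1"], prefix="column1")   # column1_English, column1_Math, column1_Science
df = pd.concat([df.drop(columns="column1"), onehot], axis=1)
print(df)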
H: Removing constant from the regression model
I am trying to calibrate two variables $(X,Y)$ measured with different techniques on two instruments; the result of the linear regression analysis appears as shown in the image. The result shows that the regression constant is not statistically significant, but the model is significant. I have tried removing the regression constant (it is a very small value close to zero) and the $R$ of the new model rises to 90%. Is it correct to remove the regression constant?
AI: When you estimate a linear model without a constant, you essentially "force" the estimated function to go through the $(0,0)$ coordinates. With an intercept, you estimate a linear function like:
$$ y = \beta_0 + \beta_1 x .$$
Without an intercept, you estimate a linear function like:
$$ y = 0 + \beta_1 x .$$
So when $x=0$, $y$ will be $0$ as well.

You should not only look at $R^2$, since $R^2$ often goes up when you have no intercept. Think about the structure of your model, what the data look like, and what you want to achieve.

Example in R:

library(ISLR)
auto = ISLR::Auto

ols1 = lm(mpg~horsepower,data=auto)
summary(ols1)
plot(auto$horsepower, auto$mpg)
lines(auto$horsepower, predict(ols1, newdata=auto), type="l", col="red")

ols2 = lm(mpg~horsepower+0,data=auto)
summary(ols2)
plot(auto$horsepower, auto$mpg)
lines(auto$horsepower, predict(ols2, newdata=auto), type="l", col="red")

Results:

Model with intercept:

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 39.935861   0.717499   55.66   <2e-16 ***
horsepower  -0.157845   0.006446  -24.49   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.906 on 390 degrees of freedom
Multiple R-squared: 0.6059,    Adjusted R-squared: 0.6049
F-statistic: 599.7 on 1 and 390 DF,  p-value: < 2.2e-16

Model without intercept:

Coefficients:
           Estimate Std. Error t value Pr(>|t|)    
horsepower 0.178840   0.006648    26.9   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 14.65 on 391 degrees of freedom
Multiple R-squared: 0.6492,    Adjusted R-squared: 0.6483
F-statistic: 723.7 on 1 and 391 DF,  p-value: < 2.2e-16

Summary: In this example, excluding the intercept improved the $R^2$, but by forcing the (estimated) function to go through $(0,0)$, the model results are entirely different. In essence, the model without intercept produces nonsense in this case. So be very careful about excluding the intercept term.
H: TF-IDF for Topic Modeling
Can TF-IDF be used as a sole method for topic modeling? (I know there are better methods like LDA, LSA, etc.) I just want to understand whether TF-IDF alone can help us in topic modeling. If yes, can someone explain how that simple framework works? I want to understand the application and capabilities of TF-IDF as a sole method for topic modeling. I could not find this anywhere else on the internet.
AI: Formally, the problem of topic modelling is a clustering problem: given a collection of text documents, group together the documents which are topically similar. So technically it can indeed be done with a TF-IDF representation of documents as follows:

1. Collect the global vocabulary across all the documents and calculate the IDF for every word.
2. Represent every document as a TF-IDF vector the usual way: for every word, obtain the term frequency in the document (TF), then multiply by the global IDF for this word (IDF). Note that every vector must represent the document over the global vocabulary.
3. Use any clustering method over the vector representations of the documents: K-means, hierarchical clustering, etc.

Note that this method is unlikely to be as good as state-of-the-art methods for topic modelling.
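A minimal sketch of this TF-IDF plus clustering pipeline in scikit-learn; the documents and the number of clusters are placeholders:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = ["first document text", "second document text", "a third one"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)          # documents x global vocabulary, TF-IDF weighted

kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
print(kmeans.labels_)                            # cluster ("topic") id per document

Each document here gets a single hard topic assignment, whereas LDA gives a distribution over topics per document, which is one reason the dedicated methods usually work better.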
H: Is it bad to have a lot of one class of data? [K-NN classifier]
I am trying to train an sklearn K-NN classifier on a labeled text dataset (in Irish). There are 5 classes, 0-4, but there is a lot of variation in how many examples there are in each class. What I have done is I've taken a corpus of Irish text, iterated through every word and stripped a few letters from it based on a linguistic form it took (or not). The problem is, class 4 (which means no action was performed) accounts for 16.5M out of 20.1M entries, and it goes all the way down to class 3 with only 36,000 entries. Gathering more data probably won't help, as this basically represents the proportion of times these word forms appear in real life. Is this bad for classification and will it bias the classifier in any way? If it does, is that bias actually of help? Any help is appreciated. Justin
AI: I can think of 2 solutions (a rough sketch of the first one follows below):

1. Since you mention stripping the words, why not make it a two-step program: the first classifier is binary, where classes 0-3 together form one class ("an action was performed") and class 4 the other ("no action performed"). If the word lands in the first category, you can then run it through a second classifier to distinguish between the action classes.
2. Cut down class 4 to fit the distribution (undersampling), but this will result in a huge loss of data, which I don't think is viable, though it's worth a try!

Bias is never good for any program, and that is clearly explained by Shiv!
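A rough sketch of the two-step idea with scikit-learn; X (feature matrix) and y (labels 0-4, with 4 meaning "no action") are assumed to already exist as NumPy arrays:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# step 1: binary classifier, "action performed" (classes 0-3) vs "no action" (class 4)
y_binary = (y != 4).astype(int)
stage1 = KNeighborsClassifier(n_neighbors=5).fit(X, y_binary)

# step 2: trained only on the action classes, so the huge class 4 cannot swamp them
mask = y != 4
stage2 = KNeighborsClassifier(n_neighbors=5).fit(X[mask], y[mask])

def predict(x_new):
    # x_new is a 2D array with a single row
    if stage1.predict(x_new)[0] == 0:    # predicted "no action"
        return 4
    return int(stage2.predict(x_new)[0])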
H: How important is Machine Learning to a Data Scientist?
Whenever the term data science pops up, people are generally quick to jump to machine learning. Is that the right thing? For a data scientist, isn't the handling of data (collection, pre-processing, visualization, etc.) more important? I am aware of the thread What is valued more in the data science job market, statistical analysis or data processing?, but the answers really didn't help me and the job market has changed since then!
AI: There's a lot of statistics that isn't machine learning: experimental design, inference, interpretable models. All three could be much more important than machine learning, depending on the job.

Then there's the part that statisticians don't like, which is that most of what a data scientist does is wrangle data sets that are in nasty formats. That could be of considerable importance to a machine learning group, but someone good at programming with no knowledge of predictive modeling could be quite excellent at such a task.
H: When should I use normal Q-learning over a DQN?
This article here says that using a tabular Q function is less scalable than a deep Q network. I take this to mean that the Q-table approach works for some environments, but once they become more complex, it isn't as efficient. For example, in the Frozen Lake environment used in the article, the deep Q network is slower than the Q table. Frozen Lake is a relatively simple environment with 16 states and 4 actions per state. However, in an environment such as a game of Snake, there are many more states, making the Q table much larger. How should I decide between a Q table and a deep Q network?
AI: I haven't worked with reinforcement learning, but in general one should judge based on the data (or, in this case, the states). The shift from standard algorithms to deep ones is made when we require more power. So it boils down to your need: if you can get results with simple algorithms, always choose those, but if simple algorithms don't cut it, you will have to go for the deeper ones. Without going into the exact computational details, deeper models are generally more complex, and one should avoid complex models wherever possible.

You also mention "slower". That depends heavily on context, so you shouldn't make a final judgement on the models based on that aspect alone!
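For reference, the "tabular" approach being compared is just a lookup table updated with the standard Q-learning rule; a minimal sketch follows (the state/action counts match Frozen Lake, while the learning-rate, discount and exploration values are arbitrary placeholders):

import numpy as np

n_states, n_actions = 16, 4                  # Frozen Lake dimensions
alpha, gamma, epsilon = 0.1, 0.99, 0.1       # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))

def choose_action(state):
    if np.random.rand() < epsilon:           # explore
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))          # exploit the current table

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

A DQN replaces the table Q with a neural network, which is why it only pays off when the state space is too large to enumerate.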
H: Types of maps in Interpretable Machine Learning
I have worked on Interpretable Machine Learning (IML) for over 1 year. However, some of the terminology still confuses me. For example, saliency maps and heat maps: are they the same? Are activation maps visualizations of the activation values of units in neural networks? Do you know any other types of maps? Thank you.
AI: Difference between saliency and heat maps: https://www.reddit.com/r/EyeTracking/comments/hhy2u5/what_is_the_difference_between_saliency_maps_and/
Activation maps: https://computersciencewiki.org/index.php/Feature_maps_(Activation_maps)

These two links should answer your question! Regarding types of maps, there could be many, but activation maps are the most commonly used. People can define all sorts of maps, so I will not dwell on that part of the question. I am no expert, but I have written one paper on IML, Smoothed Score-CAM. Do look into it :)
H: Pandas groupby datetime by multiple hours
I have a log dataset which looks like the following:

Time                   event
2020-08-27 07:00:00    1
2020-08-27 08:34:00    1
2020-08-27 16:42:23    1
2020-08-27 23:19:11    1
...

I am trying to get the count of events that happened within different hourly intervals (6 hours, 8 hours, etc.). Any ideas on how I can get this done in pandas?
AI: You can use the Grouper function. With the freq argument, you can set the time interval. The example below is for 6 hours:

dfcounts = df.groupby(pd.Grouper(freq='6H')).count()
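Note that pd.Grouper(freq='6H') as written assumes the DataFrame already has a DatetimeIndex. If Time is an ordinary column (as in the sample), a sketch like this should work; the column names are taken from the question:

import pandas as pd

df["Time"] = pd.to_datetime(df["Time"])                                     # make sure it's a datetime dtype
counts_6h = df.groupby(pd.Grouper(key="Time", freq="6H"))["event"].count()

# a different window is just a different freq string
counts_8h = df.groupby(pd.Grouper(key="Time", freq="8H"))["event"].count()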