H: How to get all 3 labels' sentiment from finbert instead of the most likely label's? I'm using bert to do sentiment analysis. I previous used cardiffnlp's twitter-roberta-base-sentiment, https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment. It gives the the usage on its page. from transformers import AutoModelForSequenceClassification from transformers import TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import softmax import csv import urllib.request # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) # Tasks: # emoji, emotion, hate, irony, offensive, sentiment # stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary task='sentiment' MODEL = f"cardiffnlp/twitter-roberta-base-{task}" tokenizer = AutoTokenizer.from_pretrained(MODEL) # download label mapping labels=[] mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt" with urllib.request.urlopen(mapping_link) as f: html = f.read().decode('utf-8').split("\n") csvreader = csv.reader(html, delimiter='\t') labels = [row[1] for row in csvreader if len(row) > 1] # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) model.save_pretrained(MODEL) text = "Good night " text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # # TF # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) # model.save_pretrained(MODEL) # text = "Good night " # encoded_input = tokenizer(text, return_tensors='tf') # output = model(encoded_input) # scores = output[0][0].numpy() # scores = softmax(scores) It shows sentiments of all three labels, positive, neutral and negative. However, I'm now trying to use Finbert from ProsusAI to do sentiment analysis https://huggingface.co/ProsusAI/finbert. It doesn't give me its usage on its page. So I'm following this tutorial https://towardsdatascience.com/effortless-nlp-using-pre-trained-hugging-face-pipelines-with-just-3-lines-of-code-a4788d95754f. My code is from transformers import pipeline classifier = pipeline('sentiment-analysis', model='ProsusAI/finbert') classifier('Stocks rallied and the British pound gained.') However, the result is [{'label': 'positive', 'score': 0.8983612656593323}]. It only shows the sentiment of the most likely label's (positive). But I need all three labels' sentiment (positive, neutral and negative). How should I use it? AI: You can get the scores for all labels as follows: from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import scipy tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert") model = AutoModelForSequenceClassification.from_pretrained("ProsusAI/finbert") inputs = tokenizer("Stocks rallied and the British pound gained.", return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits scores = {k: v for k, v in zip(model.config.id2label.values(), scipy.special.softmax(logits.numpy().squeeze()))} scores # {'negative': 0.034473564, 'neutral': 0.067165166, 'positive': 0.8983614}
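As a side note, the pipeline API itself can also return the scores for all three labels; here is a minimal sketch, assuming a reasonably recent transformers version (the exact argument name, top_k versus the older return_all_scores, varies across versions, so check it against your installed release):

```python
from transformers import pipeline

# top_k=None asks the text-classification pipeline to return every label's score.
classifier = pipeline("sentiment-analysis", model="ProsusAI/finbert", top_k=None)
# In older transformers versions, use return_all_scores=True instead of top_k=None.
print(classifier("Stocks rallied and the British pound gained."))
# Expected: a list containing one list of three {label, score} dicts
# (positive, negative, neutral).
```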
H: What's the input shape for an SVM classifier? I have a dataset with tensors (there are 12 classes) of shape (700,2000) - height is 700 and width is 2000. I would like to try to use an SVM classifier (just to see how it does). My question is - how do I input the data? Do I flatten it? So the input arrays would be 1 dimensional with length 700*2000? AI: In general, the software is probably expecting an $N$ examples (or records) by $M$ (features) array as the input matrix $X$, plus a 1D array of length $N$ as the expected output labels (with 12 unique values, corresponding to the 12 classes). So if you're saying that your entire dataset is 700 × 2000, just use it as-is. On the other hand, if you're saying that one single example of your data is 700 × 2000, then you would need to flatten each example into a 1,400,000-feature vector... but this is a very high-dimensional vector, and although SVMs are good at high-dimensional data, it can lead to problems (see the curse of dimensionality). So you could either reduce the size of the original examples, or take some steps to otherwise reduce the dimensionality of your data.
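If each example really is a 700 × 2000 array, a minimal sketch of the flattening approach looks like this (the toy array sizes and the optional PCA step for dimensionality reduction are illustrative assumptions, not part of the original question):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-in: 60 examples of shape 70 x 200 (your real ones would be 700 x 2000).
X = np.random.rand(60, 70, 200)
y = np.random.randint(0, 12, size=60)   # 12 classes

# Flatten each example into a 1D feature vector: shape (60, 14000).
X_flat = X.reshape(len(X), -1)

# Optionally reduce dimensionality before the SVM, since flattened
# examples are very high-dimensional.
clf = make_pipeline(PCA(n_components=20), SVC())
clf.fit(X_flat, y)
print(clf.predict(X_flat[:5]))
```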
H: RNN basic doubt Suppose I have 2 sentences: "My name is Alex" and "Alex is my name". If I am using an RNN, after processing both sentences, will the final output vector be the same? Because an RNN shares its weights across time steps, and both sentences have the same number of words, shouldn't the final output after processing the last word be the same in both sentences? I am well aware that when processing each word in an RNN, the next step is based on the current and previously processed words. But what about the full processing of both these sentences with the same words? Will they have the same final output? AI: No, they will not have the same final output. Although the weights of the RNN are the same for each time step and the words are the same, their order is not, and therefore the inputs and hidden states received at each time step will be different, and so will their outputs. You said it yourself: "The next word will be based on the current and previous processed words." The next and previous words for each time step are not the same in two sentences that contain the same words in a different order.
H: How to calculate Pointwise Mutual Information (PMI) when working with multiple ngrams Pointwise Mutual Information, or PMI for short, is given as $$PMI(w_1, w_2) = \log \frac{P(w_1, w_2)}{P(w_1)\,P(w_2)}$$ which is the same as $$PMI(w_1, w_2) = \log \frac{\text{BigramOccurrences}/N}{(\text{1stWordOccurrences}/N) \cdot (\text{2ndWordOccurrences}/N)}$$ where BigramOccurrences is the number of times the bigram appears as a feature, 1stWordOccurrences is the number of times the 1st word in the bigram appears as a feature and 2ndWordOccurrences is the number of times the 2nd word from the bigram appears as a feature. Finally, N is given as the total number of words. We can tweak the formula a bit and get the following: $$PMI(w_1, w_2) = \log \frac{\text{BigramOccurrences} \cdot N}{\text{1stWordOccurrences} \cdot \text{2ndWordOccurrences}}$$ Now the part that confuses me a bit is the N in the formula. From what I understand it should be the total number of feature occurrences, even though it is described as the total number of words. So essentially I wouldn't count the total number of words in the dataset (as that, after some preprocessing, doesn't seem to make sense to me), but rather I should count the total number of times all bigrams that are features have appeared, as well as single words. Is this correct? Finally, one other thing that confuses me a bit is when I work with more than bigrams, so for example trigrams are also part of the features. When calculating PMI for a specific bigram, would I then not consider the count of trigrams for N in the given formula? Vice versa, when calculating PMI for a single trigram, N wouldn't account for the number of bigrams. Is this correct? If I misunderstood something about the formula, please let me know, as the resources I found online don't make it really clear to me. AI: The application of PMI to text is not so straightforward; there can be different methods. PMI is originally defined for a standard sample space of joint events, i.e. a set of instances which are either A and B, A and not B, not A and B, or not A and not B. In this setting $N$ is the size of the space, of course. So the question when dealing with text is: what is the sample space? Sometimes it makes sense to consider specific units of text as instances, for example small documents (e.g. tweets) or sentences. In this option the different cases are whether words A and B appear at least once individually/jointly in the document, and then we count the number of documents as frequency. $N$ is the total number of documents, of course. Sometimes there's no natural unit to consider, only the full text. In this case the sample space is defined by a moving window of length $m$ in the text, i.e. the window starting at position 1, 2, 3, etc. Every window is a 'document' which can have any combination of [not] A/B.
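As a concrete illustration of the counting question, here is a minimal sketch that computes PMI for bigram features from raw counts. The variable names and the choice of using the total unigram count as N are illustrative assumptions, not a definitive answer to the question about N:

```python
import math
from collections import Counter

# Toy tokenized corpus; in practice these would be your preprocessed features.
tokens = ["the", "stock", "market", "fell", "the", "stock", "price", "rose"]

unigram_counts = Counter(tokens)
bigram_counts = Counter(zip(tokens, tokens[1:]))
N = sum(unigram_counts.values())  # here: total number of word tokens

def pmi(w1, w2):
    """PMI(w1, w2) = log2( (count(w1, w2) * N) / (count(w1) * count(w2)) )."""
    joint = bigram_counts[(w1, w2)]
    if joint == 0:
        return float("-inf")
    return math.log2(joint * N / (unigram_counts[w1] * unigram_counts[w2]))

print(pmi("stock", "market"))  # 2.0 with this toy corpus
print(pmi("stock", "price"))
```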
H: Which representation of CNN feature maps is correct? When I extract my features from my CNN, it doesn't look like this: And those pictures are not just representation. From this article it can be seen that these features are actual extracted features from real CNN. However, the features that I extracted look exactly like this: Why is this representation not like the first picture? Is it the correct one? AI: I don't think you are comparing like with like. In the left-most panel of the first image, you are seeing the weights in each kernel (one channel from one convolutional layer). These are yellow in the figure below. The size of the kernels is determined by the hyperparameters of the network; they might have a size like 3 × 3 or 31 × 31. I'm not 100% certain about the other two panels; the right-most looks more like convolutional products than filters. In the second, you are certainly looking at the activations when given a particular input example. These images are the result of convolving the input with the kernels; this part is pink in the figure below. Their size depends on the input image size, the kernel size, and the convolution parameters. From the article you linked to:
H: Low validation accuracy when not using shuffled datasets First I tried creating the training/testing datasets using sklearn train_test_split function like the following, x_train, x_test, y_train, y_test = train_test_split(x_scaled, y, test_size=0.5, random_state = 1) And on the second test I tried splitting the two datasets by half without any kind of randomization... x_train, x_test, y_train, y_test = x_scaled[:int(total_rows/2)],x_scaled[int(total_rows/2):],y[:int(total_rows/2)],y[int(total_rows/2):] On first test, The model accuracy was like the following, loss: 0.1951 - accuracy: 0.7057 - val_loss: 0.2101 - val_accuracy: 0.6540 and classification report was like this, precision recall f1-score support 0.0 0.55 0.76 0.64 864 1.0 0.78 0.56 0.65 1263 accuracy 0.65 2127 macro avg 0.66 0.66 0.65 2127 weighted avg 0.68 0.65 0.65 2127 On the second test, when I used splitted datasets, the model accuracy was pretty good, but test accuracy was below average... loss: 0.1558 - accuracy: 0.7875 - val_loss: 0.5014 - val_accuracy: 0.5026 Classification report, precision recall f1-score support 0.0 0.47 0.80 0.59 965 1.0 0.60 0.26 0.36 1162 accuracy 0.50 2127 macro avg 0.54 0.53 0.48 2127 weighted avg 0.54 0.50 0.47 2127 I understand the second model is overfitting, that's why I'm getting poor test results... But in the real world the structure is gonna be kind of same... Like I'll have to use the model on fresh data while training it on older data... ( The rows are sorted or indexed by datetime in the datasets ) I'm kinda new to machine learning. So lil bit confused on this... Does the second test mean the model is not gonna perform that well like the first test in real world? Or what I'm doing wrong? AI: Normally overffited models will generalize poorly, because their parameters were estimated to follow the patterns found on your train set only. But why ? The parameters/weights are estimated using gradient, if you don't know what it is 3Blue1Brown has a great video about that, think of it as a compass that points to the direction where your loss function converges to 0. That direction is improved with the different patterns that your model finds in the data. Although, you didn't shuffle your data, so some repeated patterns can show up in a sequence (e.g the first 100 images of the training set are cats) and the gradient will follow only those patterns until it finds a completely different pattern and realize: "Wait I'm in the wrong direction!" - So now it needs to recalculate the weights to follow the new pattern and that can happen nearly the end of the training loop, meaning that your model will not have time to learn the new pattern. Or it can't even find new patterns because those were only in the data in validation set when you truncate your inputs (x_train, x_test). So your model will be very good to classify your training set data but only that data. When you shuffle you show patterns in a random way so the model can update their weights (learning those patterns) in time and slowly converge to a minimum on your loss function. There are other cases where model can overfit, a small dataset is one of those cases. If it was confuse to understand, tell me I'll try to explain in a better way...
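If your data is ordered by datetime and you will always predict forward in time, a time-aware validation scheme is usually more informative than a single random split. Here is a minimal sketch with scikit-learn's TimeSeriesSplit; the random arrays are placeholders for your scaled features and labels:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# x_scaled and y are assumed to be ordered by datetime, oldest rows first.
x_scaled = np.random.rand(4254, 10)
y = np.random.randint(0, 2, size=4254)

tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(tscv.split(x_scaled)):
    # Each fold trains on older rows and evaluates on the rows that follow them,
    # mimicking the "train on old data, predict on fresh data" deployment setting.
    x_train, x_test = x_scaled[train_idx], x_scaled[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    print(f"Fold {fold}: train={len(train_idx)} rows, test={len(test_idx)} rows")
```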
H: How to Define KPIs for a product solution I have to find certain KPIs for a given product solution and describe data tables to track those KPIs. Context: Our practitioners conduct treatment X in 3 steps with a patient, so it leads to 3 appointments: 1. a check to see if treatment X is needed, 2. preparation, and 3. the actual treatment. Problem: Too many patients have a positive check and need treatment X, so practitioners often forget to schedule the following appointments. Patients are not treated adequately and our doctors lose potential revenue. Product solution: We will automatically send an SMS to patients with a positive check after their check appointment, with a link to book an appointment for the following step. Task: Define the KPIs you would want to track to check whether this product solution is working well. Describe the data tables you would need engineering to create to track these KPIs. BONUS: Create a rough dashboard draft that you would use to show these KPIs. AI: KPIs are linked to your project's objective. If the aim is to know whether a product works well or not, you can collect indicators from either doctors or patients. In general, doctors are easier to monitor, and this kind of indicator is also useful for them to understand the product's effects. Your system should focus on indicators that allow the doctor to monitor the product's real effect. To ensure that the doctors collect them correctly, it is better to set the most important indicators as mandatory (e.g. the next appointment). Therefore, you would want to track KPIs such as: general information such as age, weight, etc.; linked health indicators at each step; the treatment requested by the doctor; and the result of the treatment in the short and long term. Then you should be able to build dashboards from many patients' data.
H: Why do RNN text generation models treat word prediction as a classification task? In many of the sources I have found regarding text generation with word-based RNN models (LSTM or GRU), the model is trained to perform a classification task across the vocabulary (such as with categorical cross-entropy loss) to predict the next word. An example can be found here for starters. Over a large vocabulary, this gets computationally expensive. To me, it seems much more practical to first get contextual embeddings for each word in the training/testing dataset by using a pre-trained model like BERT. Then the sequential model could predict words using a loss function that measures the distance between predicted and actual embeddings with MSE or cosine similarity (CosineEmbeddingLoss). A lookup in the embedding space could return the word nearest to each prediction to make the output human-readable. Is there anything wrong with the outlined approach, or is it viable for text generation? The softmax operator and the classification task seem needlessly expensive for large vocabularies. Although BERT cannot be used to directly generate text (can bert be used for sentence generating tasks), I see nothing wrong with training a new model using BERT's embeddings or the embeddings of a similar model (see "BERT for feature extraction" here). AI: The main difference between RNN-based text generation and BERT-style models is the transformer attention mechanism. This attention mechanism is very important for adding context between words and explains why the results are better than RNNs in many applications. However, for text generation, GPT-2 is better suited than BERT because it uses masked (causal) self-attention. A masked self-attention model is trained by guessing the next word or token, contrary to BERT, which is trained on whole phrases with randomly masked tokens. That allows defining a probability distribution over the next word given the previous ones. Here is an article that explains how GPT-2 works: https://jalammar.github.io/illustrated-gpt2/ The model is available here: https://huggingface.co/gpt2
H: Voting Regression models, other approaches than averaging the results from each estimators In a regression problem that I'm currently working on, it seems that my model is doing well on higher values but significantly worse on lower values (e.g. values from 100,000,000 to 105,000,000 are being accurately predicted/ having lower error scores while values from 1,000,000 to 5,000,000 don't). One approach that I am planning to test out is using multiple regression models, with one trained on the lower values and one on the higher values. I've seen scikit-learn's VotingRegressor, but if I understand correctly it seems that in predicting the value it'll only average the result from the estimators. Other than using average values from the estimators, are there any other approaches to do the voting from multiple regression models? Since classification problems might use soft/hard voting, wondering if there are alternative approaches in regression problems as well. AI: You may try a stacking or blending approach (such as a StackingRegressor() in the recent sklearn versions), featuring a simple meta-model taking your initial models' predictions as features.
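A minimal sketch of the stacking idea with scikit-learn; the base estimators and the toy data are placeholders, and the point is that the meta-model learns how to weight each base model's prediction instead of simply averaging them:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVR

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("svr", LinearSVR(random_state=0)),
    ],
    final_estimator=Ridge(),  # meta-model combining the base models' predictions
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```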
H: Why XGBoost does not work on a small dataset Here I am using XGBoost for classification on a simple small dataset: when x = 0 then y = 1, and when x = 1 then y = 0. I use xgb.XGBClassifier(), but the resulting probability is just 0.5. I wonder why this happens. AI: There are too few samples, so XGBoost is unable to split the trees properly (you can check the actual trees using clf.get_booster().get_dump()). Reducing the min_child_weight hyperparameter (e.g. clf = xgb.XGBClassifier(min_child_weight=0.5)) should get you some traction.
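A minimal sketch reproducing the situation described above, assuming the xgboost and numpy packages are installed (the tiny two-point dataset is illustrative):

```python
import numpy as np
import xgboost as xgb

# Tiny toy dataset: x = 0 -> y = 1, x = 1 -> y = 0.
X = np.array([[0], [1]])
y = np.array([1, 0])

# With the default min_child_weight the trees may not split at all,
# leaving the predicted probabilities near 0.5.
clf_default = xgb.XGBClassifier()
clf_default.fit(X, y)
print(clf_default.predict_proba(X))

# Lowering min_child_weight lets the trees split on so few samples.
clf_small = xgb.XGBClassifier(min_child_weight=0.5)
clf_small.fit(X, y)
print(clf_small.predict_proba(X))
print(clf_small.get_booster().get_dump()[0])  # inspect the first tree
```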
H: Clustering ordered categorical data Suppose I have a list of, say, 100 countries, as well as their respective historical sovereign credit ratings as such 2020 2019 ... 2000 Country 1 AAA A- ... BBB Country 2 CCC B- ... BBB ................................... I am interested in clustering these based on their historical credit ratings. For instance, I expect two countries that have consistently rated highly over the years (say ratings between A- and AAA) would cluster together, countries with varying degrees of ratings (from low to high) over the years 2000 and 2020 would also cluster together, and countries that have consistently rated poorly also. I have looked at a few suggestions online for clustering categorical data based on multiple variables, but usually they are not for ordered categorical data. For instance, the dissimilarity matrix generated by Kmodes, is predicated on the two categories being identical. However, in ordered categorical data, a rating of BBB+ and BBB are incredibly close to one another and thus must be clustered together. What would be a good solution to such clustering exercise for the countries given the example above? AI: You can have categories that contain a logic that could be a numeric value and it seems to be your case. That's why you should consider those ratings from a mathematical point of view and assign a numerical scale that would be comprehensive to your algorithm. For instance: AAA+ => 1 AAA => 2 AAA- => 3 AA+ => 4 AA => 5 AA- => 6 etc. In this way, countries rated AAA+ in 2022 and AA- in 2021 should be close to countries rated AAA in 2022 and AA in 2021 because [1,6] are similar to [2,5] from a numeric point of view. However, if you consider those rating as separated categories like this: AAA+ => col_AAA+= True, col_AAA=False, col_AAA-=False, col_AA+=False,... AAA => col_AAA+= False, col_AAA=True, col_AAA-=False, col_AA+=False,... etc. You would have more data to deal with and the algorithm would not see any ranking between columns, and hence would not make good clustering. I recommend using numeric values for any feature that can have a scale and use categories just in case of independent ones (for instance, sea_access=Yes/No, or opec_member=Yes/No). I some case, you can also implement an intermediate solution like this one: AAA+ => col_A= 1, col_B=0, col_C-=0, ... AAA => col_A= 2, col_B=0, col_C-=0, ... ... BBB+ => col_A= 0, col_B=1, col_C-=0, ... BBB => col_A= 0, col_B=2, col_C=0, ... etc. It could be interesting if you want to make a clear difference between rating groups (ex: going from AAA to A+ is not as bad as going from A- to BBB+). Note: clustering could be difficult if you consider too many years, even with algorithms like UMAP or t-SNE. That's why a good option is to consider a few years for a beginning or simplify with smoothing algorithms.
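A minimal sketch of the ordinal-encoding idea before clustering; the rating scale, column names, and number of clusters are illustrative assumptions:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Map ordered ratings to a numeric scale (illustrative, extend as needed).
rating_scale = {"AAA": 1, "AA+": 2, "AA": 3, "AA-": 4, "A+": 5, "A": 6,
                "A-": 7, "BBB+": 8, "BBB": 9, "BBB-": 10, "BB+": 11, "CCC": 17}

df = pd.DataFrame(
    {"2020": ["AAA", "CCC", "AA"], "2019": ["A-", "BBB", "AA+"]},
    index=["Country 1", "Country 2", "Country 3"],
)

# Replace the categorical ratings with their numeric rank.
df_numeric = df.replace(rating_scale)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(df_numeric)
print(df)
```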
H: What does this value function of a parameterized policy mean? and is it related to TRPO in RL? Iv been watching the RL lectures on youtube from Stanford. In episode 9 – policy gradients 2 the teacher Emma Brunskill says we are going to learn about how to make safer policy gradient steps by adjusting (not sure how yet) the policy gradient estimate we compute. The teacher says the equation for the value function of the policy parameterized by $\theta$ is the following. $$ V(\theta) = E_{\pi\theta} [\sum_{t=0}^{\infty} \gamma^{t} R(s_t,a_t)] $$ where $ s_0 \sim \mu(\pi) $ the start state distribution, $a_t \sim \pi(a_t|s_t) $ the policy, $ s_{t+1} \sim P(s_{t+1}|s_t,a_t)$ the transition model of a MDP that terminates or is episodic, $R(s_t,a_t)$ is the reward model evaluated for the state and action expected at time step t. Then the teacher says we can write the value function of another policy like the following. I get lost here. $$ V(\tilde{\theta}) = V(\theta) + E_{\pi\tilde{\theta}}[\sum_{t=0}^{\infty} \gamma^{t} A_{\pi}(s_t,a_t)] $$ then she re-expresses that expectation as. $$ V(\tilde{\theta}) = V(\theta) + \sum_{s}\mu(s)\sum_{a}\pi(a|s) A_{\pi}(s,a) $$ Where $V(\theta)$ is the value function of the policy parameterized by $\theta$ and $\tilde{\theta}$ is a different set of parameters than $\theta$. I don’t understand why? Im looking for an explanation of what this equation is used for? Or why it makes sense to represent the value function of a policy using the value function and advantage function of another policy. Does this mean that $V(\tilde{\theta})$ is expected to be different from $V(\theta)$ by the the advantage function of $\pi\theta$ averaged by the distribution of states and actions seen under $\pi\tilde{\theta}$ Im having a hard time wrapping my mind around that. I think it means that if the new policy chooses actions differently than the current policy it will expect to get advantage that the current policy expects to get from taking those actions. (and additionally weighted by the discounted distribution of states seen undere the new policy) I haven’t read the cource testbook Reinforcement Learning: an introduction by Sutton & Barto. If you have and this was part of the book and you understand why this makes sense please let me know. AI: The teacher tries to show that we need to update the parameters in a way that the new value estimation under the new parameters is monotonically improving the value and hence the policy. The section that is confusing is when she makes use of the Performance-Difference Lemma Kakade & Langford, 2002. But the proof is omitted and thus the confusion. I think these sources have it in a more clear way along with more detailed proofs: Analysis on Policy Difference Equation. This source explains also how from the "time description" you get to the "state visitation distribution" description that confused you. p. 20 There are plenty of other sources that have detailed derivation of the specific lemma. If you are interested in RL theory then this paper by Kakade is a review of various such methods such as the Conservative Policy Iteration which is the algorithm that the teacher is trying to get to.
H: Binary Classification with Very Small Dataset (<40 samples) I'm trying to perform binary classification on a very small dataset, consisting of 3 negative samples and 36 positive samples. I've been testing different models from scikit-learn (logistic regression, random forest, svc, mlp). Depending on random_state when using train_test_split, the train or test set might not have a negative sample in it and classification performance is poor because of this. I've read into oversampling techniques using ROSE or various flavors of SMOTE, but have also read that oversampling will lead to overfitting or does not increase performance. I had experimented with oversampling the training set and depending upon how the data is split into train/test the different models are each able to correctly classify unseen data (except for log reg). However, because of the possibility of overfitting due to oversampling I am unsure of the model's actual ability to perform on unseen data. When not oversampling and just performing feature selection, tuning hyperparameters (e.g., class weights), and using LOOCV the models (not log reg) are able to correctly classify each sample as negative or positive. However, I have read that LOOCV tends to have high variance and I am unsure of how the classifiers would perform on new unseen data. Unfortunately collecting more data is not possible, I have to work with what I currently have. My question is how do I approach the problem to achieve the best performance I can without overfitting the classification models? Having someone classified falsely as negative is preferable to having something classified falsely as positive. If the models are able to correctly classify everything when performing LOOCV is that the last step in the process before model deployment, or are there other things I should look into as well? AI: I'm not sure this will be a comprehensive answer but an opinion to give a push to the reasoning. There are only 3 negative cases. I could create a custom cross validation scheme: create a test case with one negative case and the rest 2 of them to put to the train set. Then iterate through the negative cases, giving a chance to everyone to be in the test set. Any test set I would enrich with positive cases keep the ratio of positive/negative cases fixed: 36/3 * 1 = 12 positive observations in each test set. I'm not sure this technic would work in any way but at least this can be a solution for the CV scheme problem. I would definitely be prepared that the problem doesn't have any adequete solution with so poor data. I stress this idea in order to make a reasonable expectation on time, money budget as well as risks of the project. I'm not sure it's reasonable to do so intensive overfitting. You may approximately count how many times you used any specific negative class observation in order to get a feeling of the degree of contamination of your model with overfitting. This is not strict or correct terminology, I just want to share my intuition. Each time you use the negative observation for training or assessment you increase you changes that a good model will fool you eventually. You have high risks with so few examples. You may treat the problem as an anomaly detection problem. Split a observations in train-test set. For example, 10 observations in the test set, 3 of them are negative cases. Train a clustering model then look how it works on the test set. Does it group negative cases in one separate group or not? 
https://scikit-learn.org/stable/modules/clustering.html Another approach is to add your own knowledge of the world to the problem if this is applicable to your case. For example, imagine we have titanic data, only 39 observations with 3 survivors and 2 columns: Name and Survived. I could suggest that gender is important and create a new column based on my world knowledge. Looks like I'm reinventing the wheel and feature engineering but anyway this may be useful for you. The last point is that when you have so few data use data visualization intensively. Make your own, maybe even hardcoded if-else model, based on the plots where you painting data points by target (color=target). This could be more reliable and less prone to overfitting comparing with CV and complex models.
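Here is a minimal sketch of the custom cross-validation scheme described in the answer above: one negative case per test fold, with positives added to keep the 36/3 class ratio. The index handling is a placeholder for your own dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

y = np.array([0] * 3 + [1] * 36)          # 3 negatives, 36 positives
neg_idx = np.where(y == 0)[0]
pos_idx = np.where(y == 1)[0]

folds = []
for test_neg in neg_idx:
    # One negative goes to the test set, plus 12 positives to keep the 36/3 ratio.
    test_pos = rng.choice(pos_idx, size=12, replace=False)
    test_idx = np.concatenate(([test_neg], test_pos))
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    folds.append((train_idx, test_idx))

for i, (tr, te) in enumerate(folds):
    print(f"Fold {i}: {len(tr)} train rows, {len(te)} test rows, "
          f"{int((y[te] == 0).sum())} negative(s) in test")
```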
H: Are feature importances of ensemble methods sensibly interpretable? As mentioned in the question, it is easy to interpret the meaning of features in algorithms like simple decision trees. But in the case of ensemble methods that are known to average/modify features, are the resulting feature importances still sensibly interpretable/usable to argue about the feature(s)? AI: During recent years, the most successful feature attribution method has been SHAP values. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. One of the most famous implementations is the Tree Explainer: Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees. The paper originally appeared in Nature (here on arXiv): Explainable AI for Trees: From Local Explanations to Global Understanding. In case you want to see more related literature, you can have a look at the appendix "Methods 5 Previous Global Explanation Methods for Tree", which provides the previous state of the art of feature-relevance methods. The exact algorithm that answers your question is in Method 10, which explains how Tree SHAP is computed.
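A minimal sketch of getting SHAP-based importances for a tree ensemble, assuming the shap package is installed; the model and dataset are placeholders:

```python
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute SHAP value per feature.
shap.summary_plot(shap_values, X, plot_type="bar")
```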
H: Named Entity Recognition using Spacy V3 with imbalance entities Will the spacy V3 model get affected by imbalanced entities? I have got a dataset annotated in spacy format and if I look into my custom entities the rations are different for different entities. For example, one entity say 'flex' is more than 2500 but I also have an entity say 'door' which is just 21. I trained my spacy model and evaluated using spacy.evaluate(examples). I'm getting f1-score of 0.64, precision of 1.0 and recall of 0.47. I want to know whether this entity imbalance is affecting model performance?. If yes is there a way to solve this issue? Any help on this will be greatly appreciated. AI: The imbalance between entities is unavoidable: some entities are naturally more frequent than others. It would likely cause various other biases to try to oversample real text in order to increase the number of occurrences. The imbalance does affect performance of course, it's easier to correctly recognize a frequent entity like "New York" than for instance "Cork". But this is the statistical game, there are always going to be some errors somewhere. Finally it's important to keep in mind that NER is not primarily meant to recognize only a finite set of predefined entities seen in the training data. On the contrary, the goal is to use clues in the text in order to capture any entity present in the text, whether it has been seen in the training set or not. So in theory the training data should provide a representative sample of the different contexts in which NEs can appear, and this should be sufficient to recognize any entity independently of their frequency. But in reality it's practically impossible to have a perfect representative sample of the contexts, of course.
H: How to do Data Augmentation efficiently in TensorFlow 2? First of all, I'm asking this because of this tutorial. When I heard about data augmentation, the definition I learned was something like: "It's a technique where we create more data for our dataset by transforming some samples of our current dataset (transformations like rotations, flips, brightness, etc.)". But in that tutorial they're just overwriting the current dataset with transformed samples, not adding new data... Or am I wrong? Wouldn't the correct way of doing this be to get N random samples, transform them and add them to the dataset? AI: For most frameworks, random augmentation includes the case of no augmentation (random flipping may either flip or not, a random rotation angle can be 0). The transformation is also re-randomized every epoch (or whenever your dataset entry is otherwise sampled). Thus the model should eventually see the original image and lots of its possible augmentations.
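A minimal TensorFlow 2 sketch of this on-the-fly behaviour using Keras preprocessing layers (the layer choices and parameters are illustrative; in older TF 2.x versions these layers live under tf.keras.layers.experimental.preprocessing instead). Because the transformations are random, the same image can come out unchanged in one epoch and flipped or rotated in another:

```python
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

# Dummy dataset of 8 RGB images; replace with your real tf.data pipeline.
images = tf.random.uniform((8, 224, 224, 3))
ds = tf.data.Dataset.from_tensor_slices(images).batch(4)

# training=True makes the layers apply a new random transform on every pass,
# so each epoch effectively sees a different augmented version of each image.
augmented_ds = ds.map(lambda x: augment(x, training=True))
for batch in augmented_ds:
    print(batch.shape)
```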
H: Fluctuating accuracy for Naive Bayes Classifier and SVM I am comparing the classification accuracy between Naive Bayes (NBC), SVM and a Neural Network. I am using a dataset of ~18K samples and 26 labels. In the current state the Neural Network always gets an accuracy of >80%, but the NBC and SVM fluctuate between 15% and 80%. They mostly end up near one of the two extrema. The only difference for each run is the splitting of the data into training/testing with the model_selection.train_test_split() function of sklearn. For the implementation of the classifiers I am also using the classes and functions of sklearn. I highly suspect the problem is in my data, but I am already doing basic preprocessing with stop words, lowercasing, etc. AI: I recommend using the "stratify" argument of the train_test_split function in order to have a good distribution of the classes across the splits (this avoids the case where some class ends up with a support of 0 in the test set). Finally, if you see a big variance depending on the split, cross-validation is worth using.
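A minimal sketch of both suggestions, with placeholder data (your own preprocessed feature matrix and labels would go in place of X and y):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import MultinomialNB

X, y = make_classification(n_samples=18000, n_features=50, n_informative=30,
                           n_classes=26, random_state=0)
X = np.abs(X)  # MultinomialNB expects non-negative features

# Stratified split keeps the class proportions identical in train and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = MultinomialNB()
clf.fit(X_train, y_train)
print("Hold-out accuracy:", clf.score(X_test, y_test))

# Stratified 5-fold cross-validation gives a variance estimate across splits.
scores = cross_val_score(MultinomialNB(), X, y, cv=5)
print("CV accuracy:", scores.mean(), "+/-", scores.std())
```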
H: LinearSVC training time with CountVectorizer and HashingVectorizer I am currently trying to build a text classifier and I am experimenting with different settings. Specifically, I am extracting my features with a CountVectorizer and HashingVectorizer: from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer # Using the count vectorizer. count_vectorizer = CountVectorizer(lowercase=False, ngram_range=(1, 2)) X_train_count_vectorizer = count_vectorizer.fit_transform(X_train['text_combined']) X_dev_count_vectorizer = count_vectorizer.transform(X_dev['text_combined']) # Using the has vectorizer. hash_vectorizer = HashingVectorizer(n_features=2**16,lowercase=True, ngram_range=(1, 2)) X_train_hash_vectorizer = hash_vectorizer.fit_transform(X_train['text_combined']) X_dev_hash_vectorizer = hash_vectorizer.transform(X_dev['text_combined']) Then I am using a LinearSVC classifier from sklearn.svm import LinearSVC # Testing with CountVectorizer. clf_count = LinearSVC(random_state=0) clf_count.fit(X_train_count_vectorizer, y_train) y_pred = clf_count.predict(X_dev_count_vectorizer) accuracy_score(y_dev, y_pred) # Testing with HasingVectorizer. clf_count = LinearSVC(random_state=0) clf_count.fit(X_train_hash_vectorizer, y_train) y_pred = clf_count.predict(X_dev_hash_vectorizer) accuracy_score(y_dev, y_pred) I obtained the following results: Time to train Accuracy CountVectorizer 59.9 seconds 83.97% HashingVectorizer 6.21 seconds 84.92% Please note that even when limiting the number of features of the CountVectorizer to 2**18, I still get slower training and inferior reults. My questions: Why is training with CountVectorizer slower even for a similar number of features? What could explain the performance gain in terms of training time? Any intuition on the reasons behind the accuracy gain? For my particular case, I have also trained a TfidfVectorizer and the CountVectorizer worked a bit better. If the HashingVectorizer has such significant advantages in certain cases. I am wondering why the HashingVectorizer usage is not more widely introduced in different NLP tutorials? AI: Your lowercase setting is different for CountVectorizer and HashingVectorizer. It might have an impact. Otherwise, they do very similar job in this case, the accuracy difference varies with the exact task but is not that huge. Disparate training speeds you observe are not related to the method itself (the feature matrix size is comparable), it's just HashingVectorizer normalizing the results by default, which is usually beneficial for SVC, resulting in much fewer iterations (check clf_count.n_iter_). Applying sklearn.preprocessing.Normalizer() to CountVectorizer results will likely make it fit equally fast. HashingVectorizer is still faster and more memory efficient when doing the initial transform, which is nice for huge datasets. The main limitation is its transform not being invertible, which limits the interpretability of your model drastically (and even straight up unfitting for many other NLP tasks).
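A minimal sketch of the normalization point made in the answer: adding a Normalizer after CountVectorizer should make the LinearSVC converge in a comparable number of iterations. The toy texts are placeholders for your own X_train['text_combined']:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import LinearSVC

texts = ["the market rallied today", "stocks fell sharply", "pound gained value"] * 100
labels = [1, 0, 1] * 100

# L2-normalizing the count vectors mimics what HashingVectorizer does by default.
clf = make_pipeline(
    CountVectorizer(lowercase=False, ngram_range=(1, 2)),
    Normalizer(),
    LinearSVC(random_state=0),
)
clf.fit(texts, labels)
print("Iterations:", clf.named_steps["linearsvc"].n_iter_)
```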
H: Removing rows with total number of ID occurred not equal to a specific number in Pandas Python I have the following data frame in Pandas: ID rank feature 1 1 3 1 2 6 1 3 8 1 4 6 2 1 2 2 2 9 3 1 0 3 2 3 3 3 1 4 1 3 4 2 9 4 3 0 4 4 5 4 5 1 5 1 2 5 2 4 5 3 0 5 4 8 and I would like to delete all the rows such that the number in ID occurs in those rows is not equal to 4. For example, ID 1 occurs 4 times, ID 2 occurs 2 times, ID 3 occurs 3 times, ID 4 occurs 5 times and ID 5 occurs 4 times. So I would like to delete rows with ID = 2,3,4 and the output looks like: ID rank feature 1 1 3 1 2 6 1 3 8 1 4 6 5 1 2 5 2 4 5 3 0 5 4 8 Is there any computationally efficient way to do that? Thank you so much. AI: You can use groupby and transform to calculate the number of occurrences of each ID and then use simple filtering to get the result you're looking for: import pandas as pd df = pd.DataFrame({ "ID" : [1, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5], "rank": [1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 4], "feature": [3, 6, 8, 6, 2, 9, 0, 3, 1, 3, 9, 0, 5, 1, 2, 4, 0, 8] }) ( df # count number of occurences and select only those rows whose ID is present 4 times .loc[lambda x: x.groupby("ID")["ID"].transform("count") == 4] ) Which returns: ID rank feature 1 1 3 1 2 6 1 3 8 1 4 6 5 1 2 5 2 4 5 3 0 5 4 8
H: Will summing features improve the Machine Learning models? Assuming that I have two features, x and y for an MLP model. I know that depending on the model, the multiplication of features can yield a better feature. For example, if x and y are the dimension of a rectangle, then the multiplication will give the area. Assuming that x and y are the area of a room and a kitchen. x+y will be the total area of the apartment. Is it recommended to create a new feature by adding features together for Machine Learning models? AI: Depending on the type of model you are using the model might be able to create these type of new features by itself, but if you have this added domain knowledge it is definitely recommended to create them yourself. For more info on this topic have a look at feature engineering.
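A tiny sketch of this kind of domain-driven feature engineering in pandas (the column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"room_area": [20.0, 35.5, 18.0],
                   "kitchen_area": [8.0, 12.0, 6.5]})

# New feature encoding the domain knowledge "total apartment area".
df["total_area"] = df["room_area"] + df["kitchen_area"]
print(df)
```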
H: Limitations of NLP BERT model for sentiment analysis I am reading a paper, where the authors assess online public sentiment in China in response tot the government's policies during Covid-19, using a Chinese BERT model. The author's objective is not only to learn whether a given online post is critical or supportive, but also learning to whom each post was directed at (e.g. CCP, local governments, health ministry, etc). To achieve this, the authors further state in pages 8 through 9, that they,"To train the classifer, we randomly sample approximately 5,000 posts from each dataset (10,541 posts in total), stratified by post creation data. This sample is used for a number of analyses, and we refer to it as the Hand-Annotated Sample." My question here is what's the value of using human-annotated posts in combination with a BERT sentiment analysis model? Specifically, my understanding of BERT as a technique is that it eliminates or at least minimizes the need for pre-labelling a sample of text for sentiment analysis purposes, and it's not clear to me why we still need hand-annotated text by humans even when using BERT. AI: BERT is pre-trained on two generic tasks: masked language modeling and next sentence prediction. Therefore, those tasks are the only things it can do. If you want to use it for any other thing, it needs to be fine-tuned on the specific task you want it to do, and, therefore, you need training data, either coming from human annotations or from any other source you deem appropriate. The point of fine-tuning BERT instead of training a model from scratch is that the final performance is probably going to be better with BERT. This is because the weights learned during the pre-training of BERT serve as a good starting point for the model to accomplish typical downstream NLP tasks like sentiment classification. In the article that you referenced, the authors describe that they fine-tune a Chinese BERT model on their human-annotated data multiple times separately: To classify whether a Weibo post refers to COVID-19 or not. To classify whether posts contained criticism or support. To identify posts containing criticism directed at the government or not. To identify posts containing support directed at the government or not. Fine-tuning BERT usually gives better results than just training a model from scratch because BERT was trained on a very large dataset. This makes the internal text representations computed by BERT more robust to infrequent text patterns that would be hardly present in a smaller training set. Also, dictionary-based sentiment analysis tends to give worse results than fine-tuning BERT because a dictionary-based approach would hardly grasp the nuances of language, where not only does a "not" change all the meaning of a sentence, but any grammatical construction can give subtle meaning changes.
H: Loss on whole sequences in Causal Language Model I'd like to know, from an implementation point of view, when training a Causal Transformer such as GPT-2, if making prediction on whole sequence at once and computing the loss on the whole sequence is something standard ? When going across examples that vulgarize what happen during training, we have examples like this: Which suggests that we mask at a certain token in the sequence and make a prediction, then compute the loss for this single token, so the loss would take data of shape (batch_size, num_classes). However if I'm correct, since we're talking about causal models, we could predict all tokens at once because the model can only attend to what's on the left of the sequence (and can't attend on the right, so it can't "cheat"), apply the loss on data that would have the shape (batch_size, sequence_length, num_classes) where sequence_length is computed in a single forward pass. And so speedup the training. Am I correct ? If so, do you know famous repos that do this ? And if not, why would it be wrong ? Thanks. AI: The figure and the blog post are simply incorrect. Doing a reverse image search, I see that the image you posted comes from a blog post on Towards Data Science. That image is so wrong. Just think that in a causal language model, the prediction for the word next would be in the time step that receives as input the word very! Also, causal language models do not use any <mask> tokens. You are correct in your understanding: at training time, the loss of all time steps is computed at once. This can be done because: The models attend only to the previous tokens. RNNs do this by construction. Transformers do this because of the mask imposed on the decoder. We are using the true previous tokens as input to the prediction instead of using the model's own predictions as input. This is referred to as "teacher forcing". ALL causal language modeling implementations compute the training loss in a single pass.
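A minimal PyTorch sketch of what the question describes: one forward pass produces logits for every position, the targets are the inputs shifted by one, and the cross-entropy is averaged over all positions at once. The shapes and the dummy logits are illustrative; this mirrors how common causal LM implementations (e.g. Hugging Face's GPT-2 head) compute the training loss:

```python
import torch
import torch.nn.functional as F

batch_size, seq_len, vocab_size = 4, 16, 1000

# Dummy logits standing in for the causal model's single forward pass:
# shape (batch_size, seq_len, vocab_size).
logits = torch.randn(batch_size, seq_len, vocab_size)
input_ids = torch.randint(0, vocab_size, (batch_size, seq_len))

# Predict token t+1 from positions <= t: shift logits and labels by one.
shift_logits = logits[:, :-1, :]   # predictions for positions 0 .. L-2
shift_labels = input_ids[:, 1:]    # targets are tokens 1 .. L-1

# Cross-entropy over every position of every sequence in a single call.
loss = F.cross_entropy(
    shift_logits.reshape(-1, vocab_size),  # (batch * (seq_len - 1), vocab)
    shift_labels.reshape(-1),              # (batch * (seq_len - 1),)
)
print(loss)
```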
H: What is the effect of the number of training tokens? What is the effect of the number of tokens a model was trained on, if model A was trained on 1B tokens and another model on 12B tokens? Will that have an effect on performance? AI: The question is not precise enough; it depends on other factors. In general, a larger training set tends to lead to a better model. However, it depends on whether the training set is really relevant and useful for the task. For example: if the larger dataset contains data from a different domain than the target task, the additional data might be useless; if the data contains a lot of errors or noise, it might cause the model to perform worse; if the larger data contains mostly duplicates, it's likely not to lead to better performance. So more data is good for performance only if the additional data is actually of good quality.
H: What to do with Transformer Encoder output? I'm in the middle of learning about Transformer layers, and I feel like I've got enough of the general idea behind them to be dangerous. I'm designing a neural network and my team would like to include them, but we're unsure how to proceed with the encoded sequences and what the right way to plug the them into the next layer in the model would be. We would like to process it such that we can plug the encoded sequence into a FC layer immediately after the Transformer Encoder. If we just use a batch size of 1, for the sake of the argument, our encoded sequence output after being processed by the Transformer Encoder has shape tuple of (L,E), where L is the input sequence length and E is the embedded dimension size. I've seen some vague description of using some max/avg/conv1d pooling on the Encoded sequence, but nothing super clear about what that means. If I'm following this correctly, would I apply the max/avg/1conv1d pooling such that the pooling result gives me an resulting vector with shape tuple (E,), or would I pool along the other dimension? AI: The typical approach for this is follow BERT's approach: add an extra special token at the beginning of the input sequence (in BERT it is [CLS]) and only use the output of the network at that position as input to your fully connected layer. The output at the rest of the positions is ignored. You can see a nice illustration of this approach in the notorious blog post The Illustrated BERT, which explains very visually all the details about BERT: In the illustration, you can see the model input at the bottom and how it has been added a special [CLS] token at the beginning and then the output of the model at that position is then used for a classification task. During training, the model will learn to condense the needed information from the whole sentence into the output of the first position. Another alternative, as you pointed out, is to have global average pooling over all the outputs. This was the norm in the LSTM times before Transformers came. I am not aware of any articles comparing the performance of both approaches but, nowadays, with Transformers, everybody uses the BERT approach. Both BERT's approach and the global average/max pooling approach achieve your goal: collapsing the variable length sequence of vectors into a single vector that you can then use for classification.
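A minimal PyTorch sketch of the two pooling options for an encoder output of shape (batch, L, E). The variable names are placeholders, and for real batches with padding you would also mask the padded positions out of the average:

```python
import torch

batch_size, seq_len, embed_dim = 8, 32, 512
encoder_out = torch.randn(batch_size, seq_len, embed_dim)  # Transformer encoder output

# Option 1 (BERT-style): keep only the output at the first ([CLS]) position.
cls_vector = encoder_out[:, 0, :]        # shape (batch_size, embed_dim)

# Option 2: global average pooling over the sequence dimension.
mean_vector = encoder_out.mean(dim=1)    # shape (batch_size, embed_dim)

fc = torch.nn.Linear(embed_dim, 10)      # e.g. a 10-class classifier head
print(fc(cls_vector).shape, fc(mean_vector).shape)
```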
H: Transformers - Why does Self-Attention calculate the dot product of q and k from the same word? As far as I understand from Attention Is All You Need and "Transformer model for language understanding", self-attention in Scaled Dot-Product Attention computes the $query$ and $key$ of the same word, which is the diagonal of the matrix in the diagram. The $q \cdot k$ of the same word will generate the largest value, which means that a word attends to itself in a sentence. Is this correct or am I missing something? Why does self-attention not exclude the $q \cdot k$ of the word with itself? AI: It is not necessarily the case. The matrices $K$ and $Q$ can be very different. The intuition is that these two projections allow the model to search for a particular piece of information in the hidden states. From that perspective, there is no need to exclude the query state. If you look at analyses of what the attention heads in trained models do, you can see that most of the patterns are not diagonal as you would expect. Some of them look diagonal, some really are, but it is often a shifted diagonal. Example from Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned (Voita et al., ACL 2019): Another example from From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions (Mareček & Rosa, 2019)
H: Export pandas dataframe to dictionary as tuple keys and value I have a pandas dataframe df that looks like this: col1 col2 col3 A X 1 B Y 2 C Z 3 I want to convert this into a dictionary with col1 and col2 as a tuple key and col3 as value. So, the output would look like this: { ('A', 'X'): 1, ('B', 'Y'): 2, ('C', 'Z'): 3 } How do I get my desired output? AI: Set col1 and col2 as the index and convert the remaining col3 column to a dictionary; the MultiIndex entries become the tuple keys: result = df.set_index(['col1', 'col2'])['col3'].to_dict() which returns {('A', 'X'): 1, ('B', 'Y'): 2, ('C', 'Z'): 3}.
H: What is the best machine learning technique to fuse two spatial data sets? I have two data sets containing point geometry (X, Y) and a recorded car exhaust parameter (let's say, RP value) for an area of interest (AOI). The datasets are spatially different, that is, the first data set is along the sidewalk (X1, Y1, RP1) and the second data set (X2, Y2, RP2) is on the road center line (a line split into equidistant 2-meter points). The distance between the data along the sidewalk and the data on the road center line varies: at some locations it is 3 - 6 meters and at some locations it is > 6 meters (let's say, a 6 - 20 meter range). This is because the distance reflects varying road widths and lengths in a realistic, complex city landscape. With the above data in hand, I want to fuse both data sets, considering the data along the sidewalk "more reliable" (thus giving it a higher weight?), and compare the fused output with reference data at a limited number of locations in the AOI to evaluate the data-fusion performance. What is the best machine learning/data science technique to achieve the above? I am open to exploring several (or the "best candidate") technique(s) in Python, R or Matlab, for example. The focus for me is on the data fusion technique. PS: It is also possible to obtain information on road widths, lengths, whether a building is present or not, etc., if it is deemed "suitable" to include in the data processing. AI: In terms of navigation, one of the most reliable algorithms is the Kalman filter, because it predicts positions according to the previous points. In your case, where you have two point measurements at each record, a Kalman filter would identify which one is closest to the predicted value, without being thrown off by outliers. If you apply the Kalman filter correctly, you would have a third positioning value that rectifies the two others and helps you identify which one is the most reliable. There are several libraries to achieve this: PyKalman, FilterPy (including experiments), Simdkalman. Be careful to set a good noise-reduction value: too much noise reduction would make trajectories too precise and subject to outlier errors, and too little noise reduction would make trajectories too blurred. A good noise-reduction value should be close to the natural trajectory, which is smooth and clear.
H: Smaller embedding size causes lower loss When I convert my multilingual transformer model to a single lingual transformer model (got my languages embedding from the multilingual transformer and deleted other embeddings, decreased dimensions of embedding layers), the loss is much less. But I didn't understand why. What can be the reason for that? AI: New Answer The loss of a text generation task like question generation is normally the average categorical cross-entropy of the output at every time step. Drastically reducing the number of tokens means that the number of classes of the output probability distribution is greatly reduced. The value of cross-entropy depends on the number of classes. Having more classes means that the output distribution must cover more options and it is more difficult to assign more probability to the ground truth class (i.e. the correct token). Therefore, it is to be expected that, if you drastically reduce the number of tokens, the value of the loss is lower. Old answer From your description, I understand that: What you had was a Transformer trained on multilingual data with word-level tokens (because if you had subword-level tokens like BPE or unigram then you would not be able to filter by language from the token list so easily). What you did was: Removing the entries associated with words in other languages from the token list. Reduce the embedding size. Retrain your model on the data of a single language pair. With those assumptions: When you "converted your model from multilingual to single lingual", you simplified the task enormously. It seems that the gain in the simplicity of the task surpassed the loss of capacity of the model caused by the reduction of the embedding size.
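A tiny numerical illustration of the vocabulary-size effect described above: the cross-entropy of a completely uninformed (uniform) prediction grows with the number of tokens, so shrinking the vocabulary lowers the loss even when the model has not become any "smarter":

```python
import math

for vocab_size in (1000, 30000, 250000):
    # Cross-entropy of a uniform distribution over the vocabulary: log(V).
    uniform_ce = math.log(vocab_size)
    print(f"vocab={vocab_size:>7}: uniform cross-entropy = {uniform_ce:.2f} nats")
```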
H: What is the difference between adding words to a tokenizer and training a tokenizer? The title says it all. I was researching this question but couldn't find something useful. What is the difference between adding words to a tokenizer and training a tokenizer? AI: First, a clarification: tokenizers receive text and return tokens. These tokens may be words or not. Some tokenizers, for instance, return word pieces (i.e. subwords). This way, a single word may lead to multiple tokens (e.g. "magnificently" --> ["magn", "ific", "ently"]). Some examples of subword tokenizers are Byte-Pair Encoding (BPE) and Unigram. Therefore, adding a "word" to a tokenizer may not make sense for a subword-level tokenizer; instead, I will refer to it as "adding a token". Some simple tokenizers rely on pre-existing boundaries between tokens. For instance, it is very common to tokenize by relying on the white space between words (after a previous mild pre-processing to separate punctuation). Depending on the complexity of the separation of tokens from the text, the tokenization process can consist of just a lookup in a table to a complex computation of probabilities. For simple tokenizers that only consist of a lookup table, adding a token to it is simple: you just add an entry to the table. For more complex tokenizers, you need a training process that learns the needed information to later tokenize. In those cases, adding a token is simply not possible, because the information stored in the tokenizer is richer, not just a table with entries.
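A minimal Hugging Face sketch of the "adding a token" path for a subword tokenizer, as opposed to training one from scratch; the model name and the token strings are illustrative:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

print(tokenizer.tokenize("magnificently"))   # split into existing subword pieces

# Add new entries to the tokenizer's vocabulary table...
num_added = tokenizer.add_tokens(["magnificently", "finbert"])
# ...and grow the model's embedding matrix so the new ids get vectors.
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("magnificently"))   # now kept as a single token
```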
H: How can I implement classification for this problem? I have been thinking about the problem of "predicting" damages awarded in legal cases. For specificity, let us be given a dataset of summaries of cases of a certain flavour (say discrimination cases) that have been binned in a fixed number of "bands" by ranges of damages awarded. Then is it possible to train a custom model to be able to read the facts of a case as reported by an aggrieved party and predict which bin it would fall into should the plaintiff win. My first thought is unsupervised text clustering via NLP. Is there something more efficient that can be used here? AI: If I understand the problem correctly, the input dataset consists of a 2 columns. Column A - Previous Case Summary, Column B - the range/bin of damage awarded And you want to map a new unseen case summary, to one of the existing column B values ranges/bin based on the similarity of new case summary to most similar Column A case summary text. I recently worked on a similar problem, where instead of case summary, I had fields/labels mapped to their description and I wanted to map a new/unseen field to one of the given descriptions. Mapping of an unseen Field/word to an existing description (in the input data), given Field and their respective descriptions as input/training data My approach was doing Bert Embedding and then doing cosine similarity on field/labels and based on the similarity value to one of the existing fields, taking its description. This could be one of the approaches. Let me know, if you need the sample code. Happy to help.
H: Using Dataloader to display an image I have successfully loaded my data into a DataLoader with the code below: train_loader = torch.utils.data.DataLoader(train_dataset, 32, shuffle=True) I am trying to display multiple images using the code below: examples = next(iter(train_loader)) for label, img in enumerate(examples): print(img.shape) # [32, 3, 224, 224] How would I print each image in the batch using plt.imshow, as well as show the label? (Note: This is the CatDogDataset) AI: next(iter(train_loader)) returns one batch as a pair of tensors (images and labels), so unpack it first and then loop over the individual images: import matplotlib.pyplot as plt train_loader = torch.utils.data.DataLoader(train_dataset, 32, shuffle=True) images, labels = next(iter(train_loader)) for img, label in zip(images, labels): plt.imshow(img.permute(1, 2, 0)) plt.show() print(f"Label: {label.item()}") The permute(1, 2, 0) call reorders each image from (channels, height, width) to (height, width, channels), which is what plt.imshow expects. Reference: https://pytorch.org/tutorials/beginner/basics/data_tutorial.html
H: Is an LSTM cell autoregressive? I am currently writing some stuff up on Long Short-term Memory cells (LSTM), and stumbled over a question which I had trouble answering on the fly. The LSTM takes as input $h_{t-1}$ (besides the cell state and the local input $x_t$), which is also the output of the previous time step $t-1$. I was looking for a definition of the term autoregressive, because in the context of ML, it is often used for models which make a prediction $y_t$ while knowing about all prior predictions $y_{t-n}$. Wikipedia states that The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term) If I understand correctly, I would argue that the LSTM is a first-order autoregressive model, because it sees exactly one value emitted in the past. At the same time, I think it is not autoregressive, because (if I understand the definition correctly) it does not exclusively rely on previous time steps, but also receives local information in the form of $x_t$. Is anyone able to clarify whether the LSTM cell could be considered autoregressive or not? AI: It is autoregressive because its output at time $t - 1$, $h_{t - 1}$, is received as input for the computation at time $t$ and used to generate $h_t$. The fact that there are other inputs like $x_t$ does not make it not autoregressive. In some configurations (e.g. language models), its predictions are even used as input $x$.
H: does "unravelling" lstm units still mean one unit I have seen images of LSTM and RNN units online, where they "unravel" the unit. Is this only one, singular, unit? If you have multiple units in a cell (layer), are both the cell state and hidden state carried through to the next unit? (or are they recycled in each unit) By ht and ht-1, I assume that all memories are stored in an array? (or is it 1 vector) I read in an article that the length of cell state and hidden state is equal to the amount of units in a cell (layer). If this is true, does each unit output multiple predictions on the same thing or different things? Image #2 (response to an answer)
AI: The "unraveling" you are referring to just illustrates how the different time steps of the input are received and processed; it doesn't have anything to do with the number of units. The "number of units" actually refers to the dimensionality of the hidden state (and of the cell state and the output vector), not to separate little LSTMs chained together.
The output and hidden state are passed to the computation of the next time step.
$h_t$ and $h_{t-1}$ are vectors that have been computed at different time steps. Depending on how you configure the LSTM, you may get all $h_i$'s (e.g. to apply attention over them) or just the last one (e.g. to perform classification).
As I mentioned in (1), the "number of units" is the dimensionality of the hidden state, so what you read is true: the hidden state and the cell state each have length equal to the number of units. The prediction at each time step is a single vector of real numbers, not a set of separate predictions per unit.
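A quick way to check this yourself is to look at the shapes Keras returns. This is a small illustrative sketch only:
import numpy as np
import tensorflow as tf

units = 5
# return_sequences gives one output per time step; return_state also returns h_t and c_t
lstm = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)

x = np.random.rand(1, 7, 3).astype("float32")   # batch of 1, 7 time steps, 3 features per step
outputs, h_t, c_t = lstm(x)

print(outputs.shape)  # (1, 7, 5): one 5-dim output vector per time step
print(h_t.shape)      # (1, 5):    final hidden state, length = number of units
print(c_t.shape)      # (1, 5):    final cell state,   length = number of units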
H: Can you detect source language of a translation? Sometimes you read text and you have a strong feeling that it was translated from a certain language. For example, you read Russian text, see «взять автобус» («take bus» instead of Russian «сесть в автобус» (literally «sit on bus»)), and it becomes obvious that the text was originally written in English and then translated by low-qualified translator. Provided you have a long text, can you automatically detect if it is translation or is it originally written in this language, and can you detect the source language? Are there any ready solutions? AI: In the machine translation research community, the translated text that exhibits some traits from the original language is called "translationese". There are multiple lines of research that try to spot translationese (i.e. tell apart text that has been translated, either by human or machine, from text written directly). Here you can see academic articles related to the matter. However, I have not been able to find research that studies the feasibility of identifying the original source language of the translation, let alone ready-made solutions.
H: Can both R-square and Adjusted R-square be same? I am solving a Multiple Linear Regression problem and judging the model by looking at the R-square and Adjusted R-square metrics. In recent iterations, which are yielding coefficients that are directionally correct with respect to the target, I am getting both R-square and Adjusted R-square as 0.73. Can this be possible, or is something not right?
AI: Yes, this is possible. R-squared counts every independent variable (IV) in the model as contributing to the explained variance of the dependent variable, so it never decreases when you add a predictor. Adjusted R-squared, on the other hand, penalises the model for the number of predictors, so it only increases when a newly added IV improves the fit by more than you would expect by chance (roughly, when it is statistically significant).
You can see the difference if you build the model forward stepwise, adding one IV at a time and increasing the complexity of the model: R-squared will keep increasing, but Adjusted R-squared may increase or decrease depending on whether the newly added IV actually adds information.
In your case, getting (almost) the same value for both simply means the adjustment penalty is negligible. This happens when the number of predictors is small relative to the sample size and the IVs you have included are all contributing to explaining the variance.
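For reference, here is a small sketch (my own illustration) of how the two values are computed: with many observations and few predictors the adjustment term is tiny, so the two metrics can easily round to the same number.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Toy data: many observations, few predictors, so the adjustment is negligible
X, y = make_regression(n_samples=5000, n_features=3, noise=50.0, random_state=0)
model = LinearRegression().fit(X, y)

n, p = X.shape
r2 = r2_score(y, model.predict(X))
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(r2, 4), round(adjusted_r2, 4))  # typically equal to 2 decimal places here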
H: What is the proper way to evaluate ML model when training on time-dependent data? I have a task where I predict a probability of an event happening every month, for which I am using LGBM model and MLflow for model performance tracking. My dataset consists of historical data. In my dataset I have time-dependent features as inflation, interest rate (it’s a task in a banking sphere). Every month I want to make new predictions and validate model predictions for the previous period, and track how model worsens with time. When I train model for production I want to record the initial model metrics to be able to track them through time in the future. So far I have been randomly splitting my dataset on train and validation set using sklearn train_test_split and recording model metrics on this validation set. But the problem is that when using this random data split metrics are actually better than in reality, because this random model split allows model to use data “from future” for training and during validation model actually knows about inflation and interest rates from the future. Example: In my dataset I have records for 2017-01, 2017-02 and 2017-03. When I randomly split this dataset some records from 2017-03 may end up in validation set as well as in train set and this way improve the prediction results for records from 2017-03. Alternative option: train model on 2017-01, 2017-02 and validate on 2017-03 in order to simulate the actual life scenario and record more realistic metrics. The second options sounds more natural to me right now. But my concern is that in this case model is different from the final model that I eventually train on all dataset. In this case I doubt that the metrics that I record on validation set correspond to the actual metrics of my final model. So, my question is: what is the proper way to evaluate model trained on historical data in order for metrics to be consistent and comparable with real-world scenario? Can I use randomised dataset split, or is it more correct to keep a separate validation data “from the future” that model hasn’t seen and can’t be biased towards it? AI: In prediction topics, you have first to ensure that there is enough data to find patterns and make good predictions. In your case, I'm not sure if 3 months are enough to detect something. You should at least have 1 or 2 years' records. If you don't have 1-year records, you can try doing a predictive model with a few months' data, but it might not work very well if there aren't enough detectable patterns. Regarding train and test data, it's generally good to consider 80% to 90% of training data, and then test it with the last 20% or 10% last ones. Then, the best way to know if your model works well is to prove its accuracy in the real world: testing its accuracy with test data is good, but it doesn't mean that you'll have a good result with it later. Therefore, you should validate it by predicting the next day, see if the prediction is correct, record the result, and assign a real score several days later. About randomness, unless you want to train your model on random periods of time (week or month) and want to find patterns in those periods, you shouldn't apply randomness. In addition to that, the sampling rate is crucial to detect patterns: a too high sampling rate (ex: minutes) wouldn't be effective because most models are not able to have an important memory, whereas a too low sampling rate (ex: month) would only detect some general patterns without any added value.
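To make the time-ordered option concrete, here is a minimal sketch of a time-based split with scikit-learn's TimeSeriesSplit. It is illustrative only: the file name, column names and model choice are placeholders for your monthly data.
import pandas as pd
import lightgbm as lgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import TimeSeriesSplit

df = pd.read_csv("events.csv", parse_dates=["month"])   # hypothetical file and column names
df = df.sort_values("month")
X, y = df.drop(columns=["month", "target"]), df["target"]

# Each split trains on the past and validates on the following period only
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, valid_idx in tscv.split(X):
    model = lgb.LGBMClassifier()
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    preds = model.predict_proba(X.iloc[valid_idx])[:, 1]
    print(roc_auc_score(y.iloc[valid_idx], preds))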
H: CNN Deep Learning model fit problem with Tensorflow I have the following problem: I'm trying to fit a deep learning CNN model in google colab with a dataset of cats and dogs, it is very popular in Kaggle, i've cleaned the dataset of none images with a method, and the model train well for a few iterations but in a moment the code throw the following error: And here a picture where the code works well in the first iterations, how i can solve this? I'm thinking it's a dataset problem but i'm not sure. I am a student of Deep Learning with Tensorflow Here it's my colab notebook: https://colab.research.google.com/drive/1GzLma_-DMHOe1-injn4d_2KiI6kcwasZ?usp=sharing AI: There are 2 things you can do. Verify that there are no corrupted images, and all files in the directories are actual images. If you do not fix this before training you will get errors regarding these issues and training will fail once these files are reached. Running the following bash commands in the base directory will resolve these issues: find /tmp/data/ -size 0 -exec rm {} + find /tmp/data/ -type f ! -name "*.jpg" -exec rm {} + You should add a Resizing layer in your model such that you are sure that any image will be passed to the model in the right format.
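If you prefer to do the corrupted-file check in Python instead of bash, and to add the Resizing layer, a rough sketch could look like this. The base path, image size and the tiny model are placeholders for your own setup.
import os
import tensorflow as tf
from PIL import Image

# Drop files that PIL cannot open (corrupted files or non-image files)
base_dir = "/tmp/data"                      # placeholder path
for root, _, files in os.walk(base_dir):
    for name in files:
        path = os.path.join(root, name)
        try:
            with Image.open(path) as img:
                img.verify()
        except Exception:
            os.remove(path)

# A Resizing layer up front guarantees a fixed input shape for the rest of the model
model = tf.keras.Sequential([
    tf.keras.layers.Resizing(150, 150),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])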
H: Difference between pretrained, finetune, feature extract I'm a little confused between the following terminology: pretrained, finetune and feature extract. I would like to use an out-of-the-box model to train on a covid dataset. If I were to use resnet, would I be pretraining it? In what situation would I be finetuning the model or feature extracting? Since the model is being pretrained, would it be wise to use the same weights trained on ImageNet?
AI: The three terms describe different stages of the same transfer-learning workflow, and "fine-tuning" is often used loosely, which causes the confusion.
A model is pre-trained when it has already been trained on a large, generic dataset: a ResNet with ImageNet weights is pre-trained; you would not be pre-training it yourself, you would be reusing it.
Fine-tuning is not an improvement of the pre-trained model in itself; it is the transfer-learning step in which you continue training (all or part of) the pre-trained weights on your new data. Because the pre-trained model has already learned fairly general features, it can adapt to the new data efficiently. Feature extraction is the more conservative variant: you freeze the pre-trained backbone, use it only to compute features, and train a new head (e.g. the final classification layer) on top. A model trained from scratch, with no pre-training, has no general features to start from, so it usually needs far more data to differentiate the classes.
https://d2l.ai/chapter_computer-vision/fine-tuning.html
If you train a pre-trained model on a new dataset, the new dataset should have some similarity with the original dataset from the first training. For instance, if you want to fine-tune on an unknown animal (e.g. pangolins), it should work well if the model already knows many other animals. But if you train it on completely new data (e.g. 3D medical scans) with little similarity to what it has already seen, it may recognise that the images are medical scans without differentiating the different types of scans very well.
Consequently, for data as different as 3D medical scans, it can be better to train the model from scratch, unless a model that was already pre-trained on similar medical images is available.
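To see the difference in code, here is a sketch with Keras (illustrative only, not a full training script): feature extraction freezes the pre-trained backbone and trains only the new head, while fine-tuning unfreezes the backbone and continues training with a small learning rate.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")

# Feature extraction: the ImageNet weights stay frozen, only the new head is trained
base.trainable = False
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="categorical_crossentropy")

# Fine-tuning: unfreeze the backbone and keep training with a much smaller learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="categorical_crossentropy")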
H: Transformers vs RNN basic doubt I have a basic doubt. Kindly clarify this. My doubt is, When we are using LSTM's, We pass the words sequentially and get some hidden representations. Now transformers also does the same thing except non sequentially. But I have seen that the output of BERT based models can be used as word embeddings. Why can't we use the output of LSTM also as a word embedding? I can find sentence similarity and all with LSTM also ? For eg : If I have a sentence " is it very hot out there" Now I will apply word2Vec and get dense representations and pass it to my LSTM model. The output of my LSTM can also be used as word embeddings as we do the same with BERT? My understanding was that LSTM is used to identify dependencies between words and using that learned weights to perform classification/similar tasks. AI: There are multiple concepts mixed in your question. Contextual vs. non-contextual word embeddings: word2vec is a non-contextual approach to obtaining token embeddings. This means that a specific word has the same vector representation regardless of the other words in the sentence it appears. BERT, on the other hand, can be used to obtain contextual representations, because the representations of a token depend directly on the other words in the sentence. Contextual word embeddings with LSTMs. You can obtain contextual word embeddings with LSTMs. Actually, BERT has 2 predecessors that are just that. These models are ULMFit and ELMo. Both are bidirectional LSTMs. The fact that they are bidirectional is important here, otherwise, the representations would only be contextual for the words to the right of each word. Using BERT or LSTMs for classification and other tasks. Both BERT and LSTMs are suitable to perform text classification. In the case of BERT, the sentence-level representation is obtained by prefixing the sentence with the special token [CLS] and taking the representations obtained at that position as sentence representation (this is trained with the next-sentence prediction task in the BERT training procedure). In the case of LSTMs, the sentence-level representation is usually obtained either as the last output of a unidirectional LSTM or by global pooling over all the representations of a bidirectional LSTM.
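A small experiment makes the contextual point tangible. This is an illustrative sketch assuming the transformers library and the standard bert-base-uncased checkpoint: the vector computed for "hot" differs between two sentences, whereas a word2vec lookup table would return the same vector in both cases.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    # Return BERT's last-layer vector at the position of `word` in `sentence`
    enc = tokenizer(sentence, return_tensors="pt")
    idx = enc.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    with torch.no_grad():
        return model(**enc).last_hidden_state[0, idx]

v1 = embed_word("it is very hot out there", "hot")
v2 = embed_word("this new phone is really hot right now", "hot")
print(torch.cosine_similarity(v1, v2, dim=0))  # noticeably below 1: same word, different vectors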
H: Am I building a good or bad model for prediction built using Gradient Boosting Classifier Algorithm? I am building a binary classification model using GB Classifier for imbalanced data with event rate 0.11% having sample size of 350000 records (split into 70% training & 30% testing). I have successfully tuned hyperparameters using GridsearchCV, and have confirmed my final model for evaluation. Results are: Train Data- [[244741 2] [ 234 23]] precision recall f1-score support 0 1.00 1.00 1.00 244743 1 0.92 0.09 0.16 257 accuracy - - 1.00 245000 macro avg 0.96 0.54 0.58 245000 weighted avg 1.00 1.00 1.00 245000 test data - [[104873 4] [ 121 2]] precision recall f1-score support 0 1.00 1.00 1.00 104877 1 0.33 0.02 0.03 123 accuracy - - 1.00 105000 macro avg 0.67 0.51 0.52 105000 weighted avg 1.00 1.00 1.00 105000 AUC for both class 1 & 0 is 0.96 I an not sure if this is a good model I can use for predicting probability of occurrence. Please guide. AI: "Unbalanced" data are not a problem, unless you use unsuitable error measures... like accuracy, or precision, recall and the F1 (or any other Fbeta) score, all of which suffer from exactly the same problems as accuracy. Instead, work directly with probabilistic predictions, and assess the probabilistic predictions directly using proper scoring rules. Do not use thresholds in evaluating your statistical model. The choice of one or more (!) thresholds is an aspect of the decision, together with your probabilistic classification. It is not part of the statistical model. We have many, many, many threads on unbalanced data at CrossValidated, and we are at a bit of a loss what to do with these, because the data science community apparently sees a problem here that completely disappears once you move away from intuitive but misleading evaluation measures. We have a Meta.CV thread dedicated to this, with a number of links to other CV threads.
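To make "work directly with probabilistic predictions" concrete, here is a small sketch. The toy data only stands in for your problem; the point is which functions to call on the output of predict_proba instead of on thresholded labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss, log_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy imbalanced data standing in for your problem
X, y = make_classification(n_samples=20000, weights=[0.99], flip_y=0.01, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]      # predicted probability of the rare class

print(log_loss(y_test, proba))               # proper scoring rule, lower is better
print(brier_score_loss(y_test, proba))       # proper scoring rule, lower is better
print(roc_auc_score(y_test, proba))          # threshold-free ranking measure, as a complement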
H: Can you use a trained image segmentation model to label more training data for itself? Labeling images for semantic segmentation can be expensive. Is it viable to train a model (such as Unet) to a good accuracy and then use this model to label more images to be used as further training data for the same model? Would this cause overfitting? AI: I assume you're thinking of only using images where you are confident the model has segmented them correctly? I don't think this would cause overfitting - at least what we normally think of as overfitting. However, you could end up training the model to do even better on images where it already does well, at the expense of worse results where it is not doing so well (which I guess you could think of as a type of overfitting). There is a technique called active learning that does something similar to this, though. Here you use the original model to identify images that would help improve the model the most, if they were labelled and added to the training set. These are then labelled by your domain experts and the model retrained. Obviously you can repeat this if need be until you stop seeing any improvement. See these blogs on active learning for more details: Active learning machine learning: What it is and how it works by DataRobot and Active Learning: Curious AI Algorithms on DataCamp While I was writing this answer I found this article: Active Learning in Machine Learning Explained by Vatsal on Towards Data Science that suggests combining your approach and active learning.
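For contrast, the simplest form of active learning (uncertainty sampling) looks like this as a sketch. It is purely illustrative: a generic classifier and toy data stand in for a segmentation model, since the selection idea is the same.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, random_state=0)
X_lab, y_lab = X[:100], y[:100]              # small labelled pool
X_unlab = X[100:]                            # large unlabelled pool

model = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)

proba = model.predict_proba(X_unlab)
uncertainty = 1 - proba.max(axis=1)          # low top-class probability = model is unsure

# These are the examples you would send to human annotators, rather than self-labelling them
query_idx = np.argsort(uncertainty)[-10:]
print(query_idx)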
H: Suggestions for guided NLP online courses - Beginner 101 I would like to know from the data science community here for suggestions on nlp courses. I am new to NLP area and would like to take up a course which covers from basic to advanced concepts such as tokenization to embeddings, GPT-3, transformers etc My aim is to become a Applied NLP expert (and I don't intend to invent any new algos). So, basically am trying to find a course where they can teach us existing algos, recent advancements, variety of use-cases etc in NLP Is there any courses that you would recommend? AI: I would recommend two course which focus on code first approach and which will help you understand concepts by getting your hands dirty. Both of these courses contains code and video resources. Fast.ai NLP Hugging Face NLP Happy Learning :)
H: How to build single pipeline with multiple estimators supporting fit and predict? I have a ML problem where I want to divide the prediction task into subproblems (where I believe specialized models will do better). All these predictions tasks operate independently and will use the same input data - but will have different estimators/targets. For example: single dataset (A) shared transformations A -> B estimator #1: random forests with target Y1 estimator #2: GBM classifier with target Y2 estimator #3: logistic regression with target Y2 the predictions of each of these models will be output as a tuple (#1, #2, #3) I'm looking for a simple (or best practice way) to define the above pipeline and train it and be able to use it for prediction. I have looked at sklearn Pipeline but best I can tell you can't use that to have multiple estimators for training/predictions (would love to learn I'm wrong on this). My fallback option is to build a class that supports fit and predict_proba but under the hood just calls these models sequentially (training in sequence & generating predictions in sequence before returning the tuple of results). Is there a better way to go about this problem? AI: Scikit-learn pipelines are designed to chain operations, they are not designed to handle conditional logic. Your problem is better handled in Python-based logic. Something like: from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler pipe_1 = make_pipeline(StandardScaler(), RandomForestClassifier()) pipe_2 = make_pipeline(StandardScaler(), GradientBoostingClassifier()) pipe_3 = make_pipeline(StandardScaler(), LogisticRegression()) pipe_1.fit(X, y1) pipe_2.fit(X, y2) pipe_3.fit(X, y2) predictions = (pipe_1.predict(X), pipe_2.predict(X), pipe_3.predict(X))
H: What Preprocessing is Needed for Semantic Search Using Pre-trained Hugging Face Transformers? I am building a project for my bachelor thesis and am wondering how to prepare my raw data. The goal is to program some kind of semantic search for job postings. My data set consists of stored web pages in HTML format, each containing the detail page of a job posting. Via an interface I want to fill in predefined fields like skills, highest qualification, etc. with comma-separated sentences or words. These are then embedded via a Hugging Face Transformer and afterwards the similarity of the input is to be compared with the already embedded job postings and the "best match" is returned. I have already found that intensive preprocessing such as stop word removal and lemmatization is not necessarily required for transformers. However, the data should be processed to resemble the data on which the pre-trained transformers learned. What would be the best way to prepare such a data set to fine-tune pre-trained Hugging Face Transformers? Additional info: 55,000 of the saved web pages contain an annotation scheme via which I could simply extract the respective sections "Skills" etc. from the HTML text. If that is not sufficient, I can use prodigy to further annotate the data, e.g. by span labeling texts within the text of the job postings. Thank you very much in advance! AI: Resumes are quite different from classic text because there are many proper nouns (names, companies, places, etc.) and other data difficult to classify (phone numbers, marks, age, etc.). That's why you can use lighter versions like DistilBert to train your data on resumes and get good results. Therefore, you should first separate every paragraph and label them to classify resumes correctly. You can also use pre-trained models like this one and fine-tune them with your data. However, this is not a semantic search yet. After classifying resumes content correctly, you can use a semantic transformer to look for field similarity among the same resumes category. Note: the computing power might be very high if you have thousands of CVs to compare with, even if you detect the search category and process the comparisons in one category only.
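Once the skill sections are extracted, the matching step itself is short. A rough sketch follows; it is illustrative only, assumes the sentence-transformers package, and the checkpoint name is just an example that you would swap for your fine-tuned model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # example checkpoint

job_skill_sections = [
    "Python, SQL, experience with cloud data pipelines",
    "Java backend development, Spring, microservices",
]
query = "data engineering with Python and SQL"

job_emb = model.encode(job_skill_sections, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, job_emb)[0]        # cosine similarity to every job posting
best = scores.argmax().item()
print(job_skill_sections[best], scores[best].item())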
H: If you are making a ensemble model does training data on base models have to be different from one another I was reading this article talking about ensemble models. I was interested in the max voting model using 3 base learners. However, I am a little confused about the process. Currently, I'm thinking it goes like this: I have a training and testing sets. All three models are trained on the training set individually and finally at the end I combine the 3 models and do max voting on the testing set and see the results. Instead, should the original training set be divided such that each base learner does not see the same training data? AI: When ensembling, you need some method of introducing diversity into your models (otherwise all your models will make the same prediction, so ensembling them won't improve the results). Using different training data for each model is one way of introducing this diversity. A common method is to use bootstrapping or bagging, where you randomly sample (with replacement) from your training data. This is what the random forest algorithm does (although it also randomly selects the features for even more diversity). As pointed out by @desertnaut, you do your initial test/training split first, then form your ensemble training sets using only the training data. However, there are several other ways to introduce diversity into your models: Boosting - where the models are trained in sequence. Each model re-weights the training samples, increasing the weight of samples the previous model classified incorrectly and decreasing the weight of those previously classified correctly. This is how AdaBoost works. Use different classifiers - e.g. if you want 3 learners you could ensemble a logistic regression model, an SVM and a neural network. Use different architectures or hyper-parameters, so use SVMs with different kernels or different sized neural networks. If using neural networks, initialise each network differently, so that when trained, each model converges to a different solution.
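For the max-voting setup specifically, scikit-learn's VotingClassifier implements exactly this: every base learner sees the same training set and the diversity comes from using different algorithms. A small illustrative sketch with toy data:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC()),
        ("rf", RandomForestClassifier()),
    ],
    voting="hard",          # majority (max) voting over the three base learners
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))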
H: How to build a predictive model with multiple features? I built an R RandomForest Regression model. The source training data is a historical monthly report of all closed tickets, and the data for forecasting/prediction is a report of open tickets. These reports are generated by another team. I test/train the model using two years of historical closed ticket data, and predict (forecast) a ŷ Completion Date for each open ticket. The closed tickets training data looks like this: ID Dollars Fruit Etc StartDate CompletionDate AA088 500 Apple ... 1/1/2020 2/15/2020 AB100 1000 Apple ... 1/1/2020 5/15/2020 AB101 2000 Banana ... 1/1/2020 5/15/2020 BB723 5000 Apple ... 1/5/2020 3/20/2020 BB724 3000 Lime ... 1/5/2020 3/20/2020 BB725 1000 Orange ... 1/5/2020 3/20/2020 The open ticket data looks similar, except it lacks CompletionDate, and sometimes various fields are "Unknown" at this time. To build the model, I withhold "ID", make all categorical values factors, use CompletionDate as my y variable, and train the RandomForest on a majority of available features. Recently, the team that generates this data threw a curve ball, rather than each row being a single record, rows are line-items of a higher level ticket! A majority of tickets have only one line-item, the remaining tickets can have between 2 and 6 line-items. ID_Parent ID_Row Dollars Fruit Etc StartDate CompletionDate AA 088 500 Apple ... 1/1/2020 2/15/2020 AB 100 1000 Apple ... 1/1/2020 5/15/2020 AB 101 2000 Banana ... 1/1/2020 5/15/2020 BB 723 5000 Apple ... 1/5/2020 3/20/2020 BB 724 3000 Lime ... 1/5/2020 3/20/2020 BB 725 1000 Orange ... 1/5/2020 3/20/2020 I have considered to summarize (rollup) records, which is easy for numeric value like Dollar (Sum(Dollars)). I could concatenate the multiple categorical values, however, each factor is independent and has strong predictive value to the model (i.e. line items with "Apple" has a weight / meaning that would be lost if I simply concatenated as a string with the other row's value) ID_Parent SumDollars ConcatenatedFruit Etc StartDate CompletionDate AA 500 Apple ... 1/1/2020 2/15/2020 AB 3000 Apple, Banana ... 1/1/2020 5/15/2020 BB 9000 Apple, Lime, Orange ... 1/5/2020 3/20/2020 How should I handle a categorical feature like Fruit that contains multiple factors? Can RandomForest accept a feature that contains multiple factors? Do I need to use a different type of model? AI: My recommendations is to OneHotEncode this variable, to finally obtain something like this: ID_Parent SumDollars ConcatenatedFruit_Apple ConcatenatedFruit_Banana ConcatenatedFruit_Lime ConcatenatedFruit_Orange Etc StartDate CompletionDate AA 500 1 0 0 0 ... 1/1/2020 2/15/2020 AB 3000 1 1 0 0 ... 1/1/2020 5/15/2020 AB 9000 1 0 1 1 ... 5/1/2020 3/20/2020 Moreover, if you OneHotEncoded this way, random forest can deal perfectly with this categorical feature. Here I provide you one code that will do what I commented: import pandas as pd df = pd.DataFrame({'id': [0, 1, 2], 'class': ['2 3', '1 3', '3 5']}) df['class'] = df['class'].apply(lambda x: x.split(' ')) df_long = df.explode('class') df_one_hot_encoded = pd.concat([df, pd.get_dummies(df_long['class'],prefix='class', prefix_sep='_')], axis=1) df_one_hot_encoded_compact = df_one_hot_encoded.groupby('id').max().reset_index() I've extracted it from here (answered by OmaymaS)
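An alternative, slightly more compact way to get the same one-hot layout is scikit-learn's MultiLabelBinarizer. This is a sketch that assumes the line items have already been grouped into a list of fruits per parent ticket:
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({
    "ID_Parent": ["AA", "AB", "BB"],
    "Fruits": [["Apple"], ["Apple", "Banana"], ["Apple", "Lime", "Orange"]],
    "SumDollars": [500, 3000, 9000],
})

mlb = MultiLabelBinarizer()
fruit_dummies = pd.DataFrame(
    mlb.fit_transform(df["Fruits"]),
    columns=[f"Fruit_{c}" for c in mlb.classes_],
    index=df.index,
)
df_encoded = pd.concat([df.drop(columns="Fruits"), fruit_dummies], axis=1)
print(df_encoded)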
H: Remove Noise Caused by Other Variables to Predict an Expected Value I have three variables measured at a sensor: Temperature (T), Humidity (H), and Methane Concentration (PPM). There are physical reasons why changes in T and H will influence PPM. I am interested in removing changes caused to PPM by changes in T and H. What I would like to see is an expected value of PPM with effects of delta T and H removed. Below is a plot over several days of measured values of T, H, and PPM. Additionally, this is one of many sensors. I need a way of generating a model for each individual sensor as this correlation is specific to component tolerances. I'm looking for direction on where to start with this. What algorithm would you use? What's the simplest solution to try and get an expected PPM reading that minimizes the effects of delta T and H? AI: Here are 2 interesting algorithms : Multivariate LSTM. LSTM cells are great to find patterns having cycles with 50 to 300 timesteps. Be aware that it is quite sensitive to noise, so apply smoothing algorithms to see if predictions are better. Random Forest. Even if it hasn't a long memory like the LSTM cells, Random Forest is great to find correlations between signals. In some cases, Random Forest have even better results than LSTMs. In addition to that, you seem to have very precise data and it might be not necessary for prediction tasks. In fact, having too precise data could reduce prediction accuracy, because algorithms have to memorize more data and hence can produce more prediction errors. Consequently, I recommend reducing the sampling rate as low as possible, without altering the overall data quality. Such a simplification should be "humanly understandable" (= not too precise and not too simplified). You can do this by replacing the values of 10 records with 1 using their mean value. Otherwise, you could get poor predictions and too many calculations.
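One additional, very direct way to get an "expected PPM with T and H effects removed" (my own suggestion on top of the above, not a definitive method) is to fit a model that predicts PPM from T and H only, and keep the residuals. A per-sensor sketch, with a placeholder file and column names:
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("sensor_42.csv")            # hypothetical file: columns T, H, PPM for one sensor

# Model the part of PPM that is explained by temperature and humidity alone
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(df[["T", "H"]], df["PPM"])

expected_from_T_H = model.predict(df[["T", "H"]])
df["PPM_corrected"] = df["PPM"] - expected_from_T_H   # residual: PPM with T/H effects removed

# Repeat this per sensor, since the T/H dependence differs with component tolerances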
H: Continuous variables with missing values of type MNAR I have continuous variables (weather features) with missing values of type MNAR (a different distribution with and without missing values). I learned that these variables should be transformed into categorical by adding a 'missing' category. Filling with mean/median/mode is not an option in this case (although I saw in different projects online that even though there are MNAR values, some still impute them with mean/median, contrary to what I have learned). For example: Cloud coverage feature with MNAR missing values converted to categorical feature with subcategories of: ['very high', 'high', 'medium', 'low', 'very low', 'missing'] (see image): I want to check seasonality and trend (time series data) and add those features to the dataset, but when I check it before converting the variables it simply gives me more variables with missing values that lead to the same problem of MNAR. import statsmodels.api as sm df.set_index('timestamp', inplace=True) analysis = df[['cloud_coverage']].dropna().copy() s = sm.tsa.seasonal_decompose(analysis, period=30) df['trend_cloud_coverage'] = s.trend df['seasonal_cloud_coverage'] = s.seasonal df['residual_cloud_coverage'] = s.resid All these 3 new features contain missing values corresponding to the missing values in cloud_coverage feature. How do I deal with missing values in this case where time series is of importance? Thank you all for any help. AI: I'm not sure which is your concrete question. However I would recommend you use KNNImputer. This function will change the "missing values" using the knn algorithm. Which means that it will input values focusing on other similar samples. To do that, you will probably first to LabelEncoder your column keeping NaN (since your values is ordinal, you won't have problem). And the apply then KNNImputer. If you want to just obtain values input without decimal, set the number of neighbors to just 1.
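A minimal sketch of that idea (illustrative only): encode the ordered categories as numbers while keeping the NaNs, let KNNImputer fill them from similar rows, then map back to labels.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({
    "temperature": [21.0, 23.5, 19.0, 25.0, 22.0],
    "humidity":    [0.60, 0.55, 0.80, 0.40, np.nan],
    "cloud_coverage": ["low", "very high", np.nan, "very low", "medium"],
})

# Encode the ordinal categories as numbers but keep NaN, so KNNImputer can fill them
order = {"very low": 0, "low": 1, "medium": 2, "high": 3, "very high": 4}
df["cloud_coverage_num"] = df["cloud_coverage"].map(order)

imputer = KNNImputer(n_neighbors=1)          # n_neighbors=1 keeps imputed values on the grid
cols = ["temperature", "humidity", "cloud_coverage_num"]
df[cols] = imputer.fit_transform(df[cols])

# Map back to labels if needed
inverse = {v: k for k, v in order.items()}
df["cloud_coverage_imputed"] = df["cloud_coverage_num"].round().astype(int).map(inverse)
print(df)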
H: Checking the interpretation of log odds in logistic regression (with multiple variables) Here's a log odds equation --> 0.8 = 2.5(Hypertension) + 0.3(Gender) + 0.06(Age) + 15 Please let me know if my interpretation of it is right --> My interpretation: With one unit change in 'Hypertension', log odds of heart disease increases by 2.5 units. Or in other words, a person with hypertension is 2.5 times more likely to get heart disease as compared to a person with no hypertension.
AI: Your first statement is correct: holding the other variables fixed, a one-unit change in 'Hypertension' increases the log odds of heart disease by 2.5.
The second statement does not follow, for two reasons. First, an increase of 2.5 in the log odds means the odds are multiplied by $e^{2.5} \approx 12.2$, not by 2.5. Second, an odds ratio is not the same thing as "times more likely": that phrase describes a ratio of probabilities (relative risk), which depends on the baseline probability and therefore on the values of the other covariates. So keep the interpretation on the log-odds (or odds-ratio) scale rather than translating it directly into probabilities.
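A quick numeric check of the difference, for illustration only, using just the hypertension term and treating 0.8 as the baseline log odds:
import numpy as np

def prob(log_odds):
    # Convert log odds to a probability with the logistic function
    return 1 / (1 + np.exp(-log_odds))

log_odds_no_htn = 0.8          # assumed baseline log odds without hypertension
log_odds_htn = 0.8 + 2.5       # one-unit change in the hypertension indicator

print(np.exp(2.5))                                   # odds ratio ~ 12.18, not 2.5
print(prob(log_odds_htn) / prob(log_odds_no_htn))    # "times more likely" depends on the baseline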
H: Does the environment matter (area outside the box) in tensorflow's object detection algorithm? I am exploring tensorflow's object detection algorithm. Prior to training I had to mark boxes around my items in the training dataset images. This was fed into training. Does the environment (surroundings) outside of the box marking matter in tensorflow's object detection algorithm? Or is the training based only on the contents inside of the marked box? AI: The training is based on the boxes content only, but during the detection process, the algorithm has to scan all the image. Consequently, there is no learning of the environment outside the box. Such algorithms only focus on detecting specific objects, independently from their surrounding environment. However, tensorflow could be used to apply contextual object recognition, but it requires additional function such as an attention mechanism.
H: SQL, count total users' first order for each region, each day I have a table called orders. Link for the table here: table I want to get the total users' first order in each region, each day. First, I tried to get the first order for each unique user by doing this: SELECT customer_id, MIN(order_date) first_buy, region FROM orders GROUP BY 1 ORDER BY 2, 1; This resulted with: customer_id, first_buy, region BD-11500, 2017-01-02, Central DB-13060, 2017-01-03, West GW-14605, 2017-01-03, West HR-14770, 2017-01-03, West SC-20380, 2017-01-03, West VF-21715, 2017-01-03, Central And so on… But I want to get this result: first_buy, region, count_user 2017-01-02, Central, 1 2017-01-03, West, 4 2017-01-03, Central, 1 What do u think? AI: If I understand your question correctly, you want to take the result you've got so far, then for each unique combination of first_buy and region (hint: or group), count the number of customers - so all you need to do is nest your current select inside another select that does just that.
H: Creating class labels for custom DataSets efficiently (HuggingFace) I have pandas dataframes - test & train,they both have text and label as columns as shown below - label text fear ignition problems will appear joy enjoying the ride As usual, to run any Transformers model from the HuggingFace, I am converting these dataframes into Dataset class, and creating the classLabels (fear=0, joy=1) like this - from datasets import DatasetDict traindts = Dataset.from_pandas(traindf) traindts = traindts.class_encode_column("label") testdts = Dataset.from_pandas(testdf) testdts = testdts.class_encode_column("label") Finally these Datasets are put into DatasetDictlike this- emotions = DatasetDict({ "train" : traindts , "test" : testdts }) Everything works well but as you see that the way I am doing it can be definitely improved. How can it be done more efficiently in less number of lines ? AI: This is a coding style issue, so people may well have different opinions! But I don't see any problem with the way you've coded it. If you really want to reduce the number of lines of code you could combine the two assignments to traindts into one statement, and the same with testdts: traindts = Dataset.from_pandas(traindf).class_encode_column("label") testdts = Dataset.from_pandas(testdf).class_encode_column("label") If you don't use traindts and testdts anywhere else, you could then even remove the assignment statements altogether and move all the code into the call to DatasetDict: emotions = DatasetDict({ "train" : Dataset.from_pandas(traindf).class_encode_column("label"), "test" : Dataset.from_pandas(traindf).class_encode_column("label") }) But then you are sacrificing readability for fewer lines of code. So if it were me, I probably would make the first change, but wouldn't bother with the second one.
H: If sigmoid activation function is prone to vanishing and exploding gradients can we not use it in final layer of binary classification? Many papers and books say that the sigmoid activation function with random initialization is prone to vanishing/exploding gradients, and that it is therefore better to use LeakyReLU, ELU, or ReLU. Does this mean that we should use them in the final layer of binary classification as well?
AI: The vanishing/exploding gradient problem happens in deep neural networks because of backpropagation: to update the weights and biases of each layer we compute partial derivatives of the loss, and by the chain rule the gradient of an early layer is a product of many factors coming from all the later layers. When sigmoid activations (whose derivative is at most 0.25) appear in many of those factors, the product shrinks towards zero, which is the vanishing gradient problem.
A sigmoid used only in the last layer contributes just a single factor to that chain, so it does not cause the problem by itself. So no: a sigmoid in the final layer of a binary classifier (paired with binary cross-entropy) is perfectly fine, and you can use ReLU/LeakyReLU/ELU in the hidden layers.
H: Is there any sentiment analysis algorithm to identify sentiment of a sentence towards a certain word in the sentence? I'll start with some examples. Think about a sentence like "Mazda CX5 is a good car.". NLTK sentiment analysis module "Vader" will give a positive polarity score on the sentence. Meanwhile a positive score will also be assigned to a sentence like "Mazda CX5 is a better car compared to Subaru Forester." However, the sentence in fact has a negative sentiment towards Subaru Forester. I wonder if there is any algorithm can actually identify such sentiment difference between the general sentiment and sentiment against a certain word in the sentence. AI: Aspect based analysis tried to solve above problem. It categorizes data by aspect and assign sentiment to it. Lets say you have a restaurant review : Food was good and service was bad It will create 2 categories : Aspect : Food Sentiment : Positive Aspect : Service Sentiment : Negative
H: Str.contains and isin function do not return all correct rows of dataframe Given a list of strings L1 L1 = ['a', 'b', 'c'], I need to extract the rows which contain the values given in list L1. I used the isin function: df[df['column1'].isin(L1)] The data contains the following values in a column 1: 'a' 'c' 'a, d' 'brp' The data contains the following values in a column 2: ['a'] ['c'] ['a', 'd'] ['brp'] The output I need should print all the rows because the string 'a' is present in L1, but, the output returns only 3 rows: rows 1, 2 (that is the rows containing strings 'a', 'c') How do I modify the code so that it returns the 3rd row as well? AI: You can use the str.contains method for this using a regex pattern: import pandas as pd L1 = ["a", "b", "c"] df = pd.DataFrame({ "column1": ["a", "c", "a, d", "brp"] }) # use the '|' character to check if the strings contains any of the characters in L1 df[df["column1"].str.contains("|".join(L1))]
H: why there is no preprocessing step for training BERT? I would like to train a BERT model from scratch. I read the paper as well as a few online material. It seems there is no preprocessing involved. e.g. removing punctuation, stopwords ... I wonder why is it like that and would that improve them model if I do so ? AI: Although a definitive answer can only be obtained by actually trying it and it would depend on the specific task where we evaluate the resulting model, I would say that, in general, no, it would not improve the results to remove stopwords and punctuation. We have to take into account that the benefit of BERT over more traditional approaches is that it learns to compute text representations in context. This means that the representations computed for a word in a specific sentence would be different from the representations for the same word in a different sentence. This context also comprises stopwords, which can very much change the meaning of a sentence. The same goes for punctuation: a question mark can certainly change the overall meaning of a sentence. Therefore, removing stopwords and punctuation would just imply removing context which BERT could have used to get better results. That is why you will not see stopwords being removed for deep learning approaches applied to tasks where language understanding is key for success. Furthermore, while blindly doing stopword removal has been a "tradition" in tasks like topic modeling, its usefulness is beginning to be questioned even for those tasks in recent research. regarding tokenizer BERT has a word-piece vocabulary that was learned over the training corpus. I don't think removing tokens manually is an option here, given that they are word pieces and you wouldn't know in which words they may be used. It would be possible, however, to identify low frequency tokens by encoding and large vocabulary and then remove the lowest frequency ones. Nevertheless, BERT's vocabulary has almost 1000 (purposefully) unused tokens, so there is room to remove unused tokens if that's what you want. I don't think it would make a difference, though.
H: Denoising in ML Pipeline Say I split my raw data into train and test sets. Should I clean and denoise the datasets first, before I start creating new features, or should I create new features for both the train and test set and then clean/denoise them? I'm looking to create my own Transformers for use in an sklearn ML pipeline but I am unsure about the order in which to do things. p.s. I would be performing cross-validation and want to prevent data leakage.
AI: It depends on what kind of data you have. If you are working with images and the denoising is done image by image, you wouldn't have data leakage whether you split before or after. But if you are talking about a time-related (or tabular) dataset where cleaning/denoising uses statistics computed from the whole dataset, such as moving averages, you should definitely split first, so that statistics from the validation/test data never leak into the training data.
As a rule of thumb, always ask yourself: when I am using this model in production, what will the data presented to me look like? A simple example is fitting a scaler on your training set and then using it on the validation/test set: in production you will have a pre-fitted scaler and will have to apply it to the new data that is presented to you.
More specifically, for the case where you want to use cross-validation: for each fold, any fitted denoising/preprocessing should be fitted on the training part and only applied to the validation part.
If that wasn't super helpful, could you clarify what type of data you are using and what exactly you intend to do as denoising?
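The scaler example in code, as a small sketch: the same pattern applies to any fitted preprocessing or denoising step, and putting the step into a Pipeline makes cross-validation handle the per-fold fitting for you.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the scaler on the training data only, then reuse it on the test data
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# With cross-validation, a Pipeline guarantees the scaler is re-fitted on the training
# part of every fold, so the validation part never leaks into it
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(pipe, X_train, y_train, cv=5))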
H: what does shuffle and seed parameter in Keras image_gen.flow_from_directory() signify? image_datagen.flow_from_directory( directory=src_path_train, target_size=(100, 100), color_mode="rgb", batch_size=batch_size, class_mode="categorical", subset='training', shuffle=True, seed=42 ) What does shuffle in the code snippet mean? Does this indicate that the flow_from_directory function shuffles the images before loading them? if so, how does it help the training procedure? Again, I'm reading an article where the shuffle setting is True for training and validation but False for testing. Why is this different for testing? train_generator = image_datagen.flow_from_directory( directory=src_path_train, target_size=(100, 100), color_mode="rgb", batch_size=batch_size, class_mode="categorical", subset='training', shuffle=True, seed=42 ) valid_generator = image_datagen.flow_from_directory( directory=src_path_train, target_size=(100, 100), color_mode="rgb", batch_size=batch_size, class_mode="categorical", subset='validation', shuffle=True, seed=42 ) test_generator = test_datagen.flow_from_directory( directory=src_path_test, target_size=(100, 100), color_mode="rgb", batch_size=1, class_mode=None, shuffle=False, seed=42 ) the above code snippet is taken from the article where the shuffle setting is True for training and validation but False for testing. AI: When shuffle = True your dataset will be randomly shuffled to avoid any overfitting in training. Passing samples in different orders makes the model more robust to overfitting. That's why during training it is advisable to turn on shuffling while during inference (validation/test), you only need to get the output, no training. Hence, no shuffling. Even though everything is random here, you can still reproduce your result using the seed parameter. It will reproduce the same result every time. If you don't use seed, then at every run, your model will be different and you cannot reproduce the results.
H: additive or multiplicative model? Let's say I have two scores $x_1^i$ and $x_2^i$ for each data point $i$, and I need to make a final score/loss function out of them. Should I use a weighted sum $w_1 x_1^i + w_2 x_2^i$, or their multiplication $x_1^i x_2^i$? (of course the loss is the minus score) I think multiplication is addition in a different sense? because $\log x_1 x_2 = \log x_1 + \log x_2$?
AI: With multiplication, the two scores of a point interact with each other. For example, if $x_1^i = 0$ and $x_2^i = 3$, the combined score is 0 no matter how large $x_2^i$ is, which is usually not what you want. A weighted sum keeps the contributions separate and lets you control their relative importance through the weights.
You are right that the product is addition on the log scale ($\log x_1 x_2 = \log x_1 + \log x_2$), so multiplying the scores is equivalent to an equally weighted sum of their logs, but that only makes sense if both scores are strictly positive, and it changes how extreme values are weighted.
If you want to magnify large losses you can also take the exponential of each loss; the minimum will still be at the same point as without the exponential. In the end, the objective is to find the minimum of the total loss, which is usually the sum or average over all instances, so it should be representative of all the instances.
H: Normalizing softmax by dividing by its maximum? Reading this paper, I'm struggling to understand the step with the question mark (page 3). The formula for $\textbf r$ uses $\textbf q_i$ (no tilde), but the numeric values in the following paragraph are in fact $\tilde{\textbf q}_a, \tilde{\textbf q}_b$ - the standard softmax formula, without division by $max_i \tilde{\textbf q_i} + \epsilon$. I'm new to this... is this step some sort of numerical stability thing which has not played a role in this example? What am I not seeing here? Thank you!
AI: You are not missing anything; the text does skip over the explanation of that step. The point of dividing the softmax output by its maximum (plus a small $\epsilon$) is not numerical stability: it rescales the scores so that the largest one is always close to 1, and after thresholding you obtain a "fuzzy" (possibly multi-hot) encoding in which every class whose score is close to the maximum is kept. This matters when the embedding values for two classes are nearly the same.
The example discussed: h = [-1.2, 2.3] gives q~ = [.0293, .9707]; dividing by the maximum (plus eps = .001) gives q = [.03, .999]; applying the threshold (0.5) gives r = [0, 1].
The fuzzy encoding is helpful when all classes would otherwise fall below the threshold. For example, h = [1, 1, ..., 1] (ten classes) gives q~ = [.1, .1, ..., .1], q = [.99, .99, ..., .99] and r = [1, 1, ..., 1].
It also ensures that classes with similar embedding values get the same code: h = [1.2, 1.3] gives q~ = [.475, .525], q = [.90, 1.0] and r = [1, 1].
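Here is the computation as a tiny sketch, reflecting my own reading of the formula rather than code from the paper:
import numpy as np

def fuzzy_encode(h, eps=1e-3, threshold=0.5):
    q_tilde = np.exp(h) / np.exp(h).sum()        # standard softmax
    q = q_tilde / (q_tilde.max() + eps)          # rescale so the top class is ~1
    return (q >= threshold).astype(int)          # threshold into a (possibly multi-hot) code

print(fuzzy_encode(np.array([-1.2, 2.3])))       # -> [0 1]: one clear winner
print(fuzzy_encode(np.array([1.2, 1.3])))        # -> [1 1]: near-tie keeps both classes
print(fuzzy_encode(np.ones(10)))                 # -> all ones: all classes equally likely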
H: Input length of Sentence BERT Can Sentence-BERT embed an entire paragraph instead of only a sentence? For example, a description of a movie. If so, what if the word count exceeds the input limit? I remember BERT can only take up to 512 tokens as input?
AI: BERT-based sentence encoders are designed for sentence-length inputs, where they capture the context within a sentence. Every model also has a hard maximum input length: the underlying BERT can take at most 512 tokens, and many Sentence-BERT models are configured to truncate even earlier (their max_seq_length setting, often 128 or 256 word pieces). Anything beyond that limit is simply cut off, so feeding in a whole paragraph containing several sentences and several contexts will often lose information and give weaker results.
A better approach is to tokenize the paragraph into sentences (using NLTK or spaCy), embed each sentence separately, and then pool the sentence embeddings (for example by averaging) into a paragraph-level representation.
H: Automate Clustering predictions and RFM metrics We did a POC for customer segmentation and followed the below approach a) extract data from source system (SAP business objects) b) Use python jupyter notebook to manipulate, merge and group data (multiple csv files) c) We cluster based on some preset variables. So, we use the below 4 variables a) Recency (R) b) Frequency (F) c) Cutomer duration with our company (indicates loyalty) (Y) d) No of different market segments entered by the customers (indicates cross-selling) (P) d) Run 1d kmeans algorithm (Jenks Breaks algo) for each variable. So, 4 algos are run (for 4 variables) e) For the sake of interpretability and for easy modifications of rules based on business criteria, we also incorporate a rule to finally come up with meaningful customer segments like below f) based on each business users defined requirement, we send out automated emails on a monthly basis Now, my questions are as follows a) How can I make this automated? my data gets updated every 45 days. We are always looking to create clusters of 4/5 for Recency and Frequency variable and 2 and 3 for Prod and years variable. This will not change. b) But since, we provide results to sales users to follow up with customers, we want to be able to track the results across each run and have a dashboard to know whether a customer who needed attention is now moved to loyalist or champions segment because our sales users continuously followed up with them. We would like to measure that transition between each segments and this is planning to be used as a KPI for sales users. How can we do this? c) Is 1d-kmeans algorithm considered as an AI algorithm? d) How can this be made as a pipeline and any suggestions on how to improve this project further is welcome AI: The main advantage of unsupervised learning is to be able to make meaningful clusters and hence valid scenarios. That's why, it is not always necessary to make a fully automated solution, but rather a robust one that could be later automated. I don't know how many features you have, but UMAP is great for clusterizing non-linear data, even if you have more than 20 features. It is also a random algorithm and it has reproducibility. I recommend to use UMAP with reproducibility and then K-Means to classify the clusters automatically. Once it is done, get the range of the data for each cluster(i.e. min/max of each cluster + stardard deviation or mean values if interesting), so that you can detect the different groups, and make valid classifications rules. Those rules can be applied for any new data without needing to go through the UMAP/K-Means process. If there is a lot of new data in the future, it could be necessary to repeat UMAP/K-Means because of new potential groups. It depends on the data complexity over time. Here is an example how to achieve this. More information: Understanding UMAP interactively. Basic clustering with UMAP. How exactly UMAP works.
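A rough sketch of that pipeline follows. It is illustrative only: it assumes the umap-learn package, uses placeholder file and column names for your R/F/Y/P variables, and the number of clusters should be adapted to your segments.
import pandas as pd
import umap
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customers.csv")                      # hypothetical: columns R, F, Y, P
X = StandardScaler().fit_transform(df[["R", "F", "Y", "P"]])

# random_state makes the embedding reproducible between monthly runs
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

kmeans = KMeans(n_clusters=5, random_state=42, n_init=10).fit(embedding)
df["segment"] = kmeans.labels_

# Per-cluster ranges, used to derive the explicit business rules mentioned above
print(df.groupby("segment")[["R", "F", "Y", "P"]].agg(["min", "max", "mean"]))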
H: How to improve Regression RMSE with LightGBM I have the following dataset: https://raw.githubusercontent.com/Joffreybvn/real-estate-data-analysis/master/data/clean/belgium_real_estate.csv I want to predict the price column, based on the other features, basically I want to predict house price based on square meters, number of rooms, postal code, etc. So I did the following: Load data: workspace = Workspace(subscription_id, resource_group, workspace_name) dataset = Dataset.get_by_name(workspace, name='BelgiumRealEstate') data =dataset.to_pandas_dataframe() data.sample(5) Column1 postal_code city_name type_of_property price number_of_rooms house_area fully_equipped_kitchen open_fire terrace garden surface_of_the_land number_of_facades swimming_pool state_of_the_building lattitude longitude province region 33580 33580 9850 Landegem 1 380000 3 127 1 0 1 0 0 0 0 as new 3.588809 51.054637 Flandre-Orientale Flandre 11576 11576 9000 Gent 1 319000 2 89 1 0 1 0 0 2 0 as new 3.714155 51.039713 Flandre-Orientale Flandre 12830 12830 3300 Bost 0 170000 3 140 1 0 1 1 160 2 0 to renovate 4.933924 50.784632 Brabant flamand Flandre 20736 20736 6880 Cugnon 0 270000 4 218 0 0 0 0 3000 4 0 unknown 5.203308 49.802043 Luxembourg Wallonie 11416 11416 9000 Gent 0 875000 6 232 1 0 0 1 0 2 0 good 3.714155 51.039713 Flandre-Orientale Flandre I hot encoded the category features, city, province, region, state of the building: one_hot_state_of_the_building=pd.get_dummies(data.state_of_the_building) one_hot_city = pd.get_dummies(data.city_name, prefix='city') one_hot_province = pd.get_dummies(data.province, prefix='province') one_hot_region=pd.get_dummies(data.region, prefix ="region") Then I added those columns to the pandas dataframe #removing categorical features data.drop(['city_name','state_of_the_building','province','region'],axis=1,inplace=True) #Merging one hot encoded features with our dataset 'data' data=pd.concat([data,one_hot_city,one_hot_state_of_the_building,one_hot_province,one_hot_region],axis=1) I remove the price x=data.drop('price',axis=1) y=data.price then train test split from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=.3) then I train: x_df = DataFrame(x, columns= data.columns) x_train, x_test, y_train, y_test = train_test_split(x_df, y, test_size=0.15) #Converting the data into proper LGB Dataset Format d_train=lgb.Dataset(x_train, label=y_train) #Declaring the parameters params = { 'task': 'train', 'boosting': 'gbdt', 'objective': 'regression', 'num_leaves': 10, 'learnnig_rate': 0.05, 'metric': {'l2','l1'}, 'verbose': -1 } #model creation and training clf=lgb.train(params,d_train,10000) #model prediction on X_test y_pred=clf.predict(x_test) #using RMSE error metric mean_squared_error(y_pred,y_test) However the RMSE its: 6053845952.2186775 which seems a huge number. I am not sure what I am doing wrong here AI: mean_squared_error(y_pred,y_test) is MSE, not RMSE (which would be mse ** 0.5). Taking a square root of it yields around 80k, which is not that huge compared to your actual price values - you seem to have around 75% explained variance, which is quite decent. You can probably improve it further by performing some EDA and dealing with outliers somehow (MSE is outlier sensitive). You should also check for possible highly correlated features, as those inflate you model variance (at a quick glance, you don't use drop_first when doing OHE, thus getting redundant columns). 
Scaling is not really a must: tree models, including gradient boosting on trees, are rather indifferent to scale.
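Concretely, a small sketch of the two changes against the variables already defined in your code (drop_first for the dummies, and taking the square root for RMSE):
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error

# One-hot encode without redundant columns (drop_first avoids perfectly correlated dummies)
one_hot_city = pd.get_dummies(data.city_name, prefix="city", drop_first=True)

# RMSE is the square root of what mean_squared_error returns
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(rmse)   # on the order of ~80k here, i.e. in the same units as the price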
H: How can I improve my current model to get a higher mAP value? (Stuck at 79~78) I am facing an issue trying to improve my model for object detection, this is something which I have been facing for quite a few days. I have tried to improve my model by fine tuning and also changed the split to 80-20 (5399 for train, 1499 for val) to include more data for my validation set but still no luck in trying to improve the mAP (mean average precision) value of my model. The model config I have so far: model_4 = Sequential() model_4.add(Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3))) # model_4.add(RandomFlip(mode='horizontal_and_vertical', seed=None)) model_4.add(BatchNormalization()) model_4.add(MaxPool2D(pool_size=(2, 2))) model_4.add(Conv2D(32, (3, 3), activation='relu')) model_4.add(BatchNormalization()) model_4.add(MaxPool2D(pool_size=(2, 2))) model_4.add(Conv2D(64, (3, 3), activation='relu')) model_4.add(BatchNormalization()) model_4.add(MaxPool2D(pool_size=(2, 2))) model_4.add(Conv2D(64, (3, 3), activation='relu')) model_4.add(BatchNormalization()) model_4.add(MaxPool2D(pool_size=(2, 2))) model_4.add(Conv2D(128, (3, 3), activation='relu')) model_4.add(MaxPool2D(pool_size=(2, 2))) model_4.add(Flatten()) model_4.add(Dropout(0.35)) model_4.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.1))) model_4.add(Dropout(0.35)) model_4.add(Dense(4)) model_4.compile(loss='mse', optimizer=Adam(learning_rate=0.0001), metrics=[tfr.keras.metrics.MeanAveragePrecisionMetric()]) model_4.summary() My approach is to try and improve as much as I can without a pretrained model/weights. I trianed with batch size as 64, no data augmentation. I was hoping to at least reach 85% or higher but that may be a bit of a stretch for me at this point. My graphs for loss and mAP: I reached a mAP of 79 for my train and val set which is quite close Any help would be appreciated as to how I can go forward into making the model perform better. I am sure there has to be something that is causing a bottleneck in the model performance. Thanks a lot. AI: There are several things to consider: Increase the number of convolution blocks. Use residual blocks, as explained here Use different activation functions, such as Leaky ReLU, Mish, Swish, etc. To connect the feature map of feature extractors to the dense layer, use the Global pooling layer instead of flattening. Try hyperparameter tuning with already available keras tuners. Another approach would be to use an already fine-tuned model using Transfer Learning. Take a look at the EfficentNets.
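For point 4, the change at the tail of your network is small. An illustrative sketch that keeps the rest of your architecture as it is:
from tensorflow.keras.layers import Conv2D, Dense, Dropout, GlobalAveragePooling2D

# ... earlier convolution blocks unchanged ...
model_4.add(Conv2D(128, (3, 3), activation='relu'))
model_4.add(GlobalAveragePooling2D())   # replaces MaxPool2D + Flatten: fewer parameters, less overfitting
model_4.add(Dropout(0.35))
model_4.add(Dense(256, activation='relu'))
model_4.add(Dense(4))                   # 4 box coordinates, as before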
H: Gradient tree boosting additive training In the XGBoost documentation, they specify that the additive training is done given an objective $obj^{(t)}$ defined as $obj^{(t)} = \sum\limits_{i=1}^n \ell(y_i, \hat{y}_i^{(t-1)}+f_t(x_i)) + \sum\limits_{k=1}^t \omega(f_k)$. In the following lines, it is specified that this objective is equal to $\sum\limits_{i=1}^n \ell(y_i, \hat{y}_i^{(t-1)}+f_t(x_i)) + \omega(f_t)+constant$ though I don't really understand this step - do they mean that $\sum\limits_{k=1}^{t-1} \omega(f_k) = constant$? if so, why? and why is this a greedy method? AI: do they mean that $\sum\limits_{k=1}^{t-1} \omega(f_k) = constant$? Yes! From the paragraph preceding that: we use an additive strategy: fix what we have learned, and add one new tree at a time. That is, we're just trying to build the next tree $f_t$, given that $f_1, \dotsc, f_{t-1}$ have already been built, and so $\omega(f_1), \dotsc, \omega(f_{t-1})$ are all already determined. Gradient boosting is greedy in that sense (earlier trees don't try to look ahead to how later trees will fare, nor do later trees attempt to modify earlier trees), but moreso the tree building process is greedy (earlier splits don't try to look ahead to how later splits will fare).
H: Is it bad to have similar images in training and validation set? I saw many persons on YouTube who are making videos, split them into frames and label each object for a custom dataset. Therefore there're many images of a single scene which are very similar to each other. I thought, it could be bad to have similar images in training and validation set. Also, nobody uses each image to train and test and copies a few of them to a validation set because they would be equal. Is that right? I am training a yolov5 model to detect persons on single images. AI: As in the comments: You're right, if the validation images are very similar to the training ones you may end up overfitting. But most of the times it is the best you can do so you do it. In a video, the number of used frames is not a problem but of particular importance is to time-separate training and validation datasets.
H: Comparing encoders with different output sizes on the same input Let's say I have an input s1 and I pass it to two encoders e1 and e2. They output encodings of size s1 and s2, where their lengths are not equal; let's say len(s1) = k*len(s2). Is it possible to somehow compare which encoder is better out of the two? AI: Encoders "encode" your high-dimensional input into a lower-dimensional space. One way to compare different encoders is to use each representation to build the same downstream model (say, a NN with a fixed number of layers) and see which one performs best, or whether the performance loss from the lower dimensionality is acceptable. A downside of this approach is that the training procedure itself may favour one representation over the other. Think, for example, of having very few data points and comparing a big encoder with a small one: the small one will probably look better simply because the downstream model won't overfit. You can also take this idea to the extreme and feed the representations into a linear/logistic regression: the chances of overfitting are then very low, and the representation that works better in the linear setting will give higher performance.
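As a concrete sketch of that "linear probe" idea: train the same simple classifier on each encoding and compare cross-validated scores. The arrays X1 and X2 below are random stand-ins for the outputs of the two encoders, and y is a stand-in label vector:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 64))   # stand-in for encoder e1's output
X2 = rng.normal(size=(200, 16))   # stand-in for encoder e2's (smaller) output
y = rng.integers(0, 2, size=200)  # stand-in labels

# Same probe, same data split, different representations.
score1 = cross_val_score(LogisticRegression(max_iter=1000), X1, y, cv=5).mean()
score2 = cross_val_score(LogisticRegression(max_iter=1000), X2, y, cv=5).mean()
print(f"e1 probe accuracy: {score1:.3f}, e2 probe accuracy: {score2:.3f}")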
H: Do Linear Regression and Logistic Regression models from sklearn include regularization? I'm learning Data Science by enrolling in different courses, and I've recently learnt something that seems very useful when building linear or logistic regression models: regularization. In the screenshot below, the course that I'm currently doing shows that regularizing these models adds an extra term to the cost function (which I've circled in red). This term helps with over-fitting by adding cost when we have many input features. During this course we're writing the formulas and functions ourselves to better understand the mechanisms behind these models, but I would prefer to use the functions from sklearn later on. I would like to know if the LinearRegression() and LogisticRegression() models from sklearn, when trained, include (or have any parameter to make them include) regularization. Thanks everyone! AI: In sklearn they are organised differently than you might expect. LinearRegression has no regularization term; if you want a regularized version like the one you showed, you are looking for Ridge with alpha = 1. LogisticRegression, on the other hand, has a parameter penalty {‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’, that lets you choose the desired penalty, so it is regularized by default. I suggest you spend some time on the sklearn documentation; it's complete and very well presented!
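For reference, a minimal sketch of the regularized counterparts in scikit-learn; the parameter values are illustrative, not recommendations:
from sklearn.linear_model import Ridge, Lasso, LogisticRegression

# Linear regression with an L2 penalty (the lambda from the cost function is called
# `alpha` here); Lasso is the L1 counterpart.
ridge = Ridge(alpha=1.0)
lasso = Lasso(alpha=0.1)

# Logistic regression is L2-regularized by default; the strength is controlled by C,
# which is the *inverse* of the regularization strength (smaller C = stronger penalty).
logreg = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)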
H: Back propagation matrix shape error using Python I wanna implement the back-propagation algorithm in python with the next code class MLP(object): def __init__(self, num_inputs=3, hidden_layers=[3, 3], num_outputs=2): self.num_inputs = num_inputs self.hidden_layers = hidden_layers self.num_outputs = num_outputs layers = [num_inputs] + hidden_layers + [num_outputs] weights = [] bias = [] for i in range(len(layers) - 1): w = np.random.rand(layers[i], layers[i + 1]) b=np.random.randn(layers[i+1]).reshape(1, layers[i+1]) weights.append(w) bias.append(b) self.weights = weights self.bias = bias activations = [] for i in range(len(layers)): a = np.zeros(layers[i]) activations.append(a) self.activations = activations dW=[] db=[] for i in range(len(layers)-1): derW=np.zeros((layers[i], layers[i+1])) derb=np.zeros((layers[i+1])).reshape(1, layers[i+1]) dW.append(derW) db.append(derb) self.dW=dW self.db=db def forward_propagate(self, inputs): activations = inputs self.activations[0] = activations for i, w in enumerate(self.weights): activations = self._sigmoid((np.matmul(activations, w)+self.bias[i)) self.activations[i+1] = activations.T return activations def back_propagate(self,error): for i in reversed(range(len(self.dW))): activations=self.activations[i+1] delta = np.multiply(self._sigmoid(activations),error) print("This is delta: {} ".format(delta)) current_activations=self.activations[i] current_activations = current_activations.reshape(current_activations.shape[0],-1) print("This is the current activations: {} ".format(current_activations)) self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta) def train(self, inputs, targets, epochs, learning_rate): for i in range(epochs): sum_errors = 0 for j, input in enumerate(inputs): target = targets[j] output = self.forward_propagate(input) error = target - output self.back_propagate(error) def _sigmoid(self, x): y = 1.0 / (1 + np.exp(-x)) return y So I created the next dummy data in order to verify everything is correct items = np.array([[random()/2 for _ in range(2)] for _ in range(1000)]) targets = np.array([[i[0] + i[1]] for i in items]) mlp = MLP(2, [5], 1) mlp.train(items, targets, 2, 0.1) but when I run the code I have the next error ValueError: shapes (2,1) and (5,1) not aligned: 1 (dim 1) != 5 (dim 0) I understand the error because when I printed the delta and current activations values I have the next ones: This delta: [[-0.67139682]] This is the current activations: [[ 0.11432486] [-0.38246416] [-0.85207878] [ 0.73210993] [ 0.76603196]] This is delta: [[-1.45663835] [-1.2793182 ] [-0.76875725] [-0.90048138] [-0.86253739]] This is the current activations: [[0.08248608] [0.12631125]] So what I really want is that the current activation [[-0.67139682]] multiply with this delta value [[0.08248608] [0.12631125]] and this current activations [[ 0.11432486] [-0.38246416] [-0.85207878] [ 0.73210993] [ 0.76603196]] multiply with this delta value [[-1.45663835] [-1.2793182 ] [-0.76875725] [-0.90048138] [-0.86253739]] but I don't know how to do that. Any help? AI: I believe you should change: self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta) to self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta.T) in the back propagation function. This will help you to avoid the error.
H: Do the preprocessing steps for new data need to be identical to the steps for train/test data? I'm using decision tree classification for a classification problem. I have preprocessed the data, train/test split it, and run a model with cross validation before testing it. The steps I followed for preprocessing are outlined below: Removed some occurences (rows) which aren't usable Transformed some of the columns by taking nth-root to remove skew (n is different for each column, I plotted the data and did whatever looked like it reduced the skew most) Train/test split the data I fit OneHotEncoder() and StandardScaler() to the training data I applied the transformations in step 4 to both the training and test data My questons are as follows: Are my steps correct? In particular, is it correct to 'root transform' the data before train/test split, or does that lead to data leakage? When I want to apply my model to new data (after testing etc.) does that new data have to undergo identical preprocessing? e.g. fit to the train set then apply it to the new data and root transformations of the same nth-root. Thanks in advance AI: You are correct, as Evolving_Richie commented. When you want to apply you model, you have to follow the same process you did when training. However, you only need to transform in the same way, never fit! So the process would be: Remove occurrences/samples (you are accepting that you won't have predictions for these values, if not, change you approximation), transform columns, transform OneHotEncoder() and StandardScaler(). Of course we are not splitting data into training and test because we are on deployment, and all our data is test.
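One way to make the "fit on train, transform everywhere else" rule hard to get wrong is to wrap the preprocessing and the model in a single pipeline; a minimal sketch with hypothetical column names and toy data (the nth-root transforms from step 2 could be added to the same pipeline as a FunctionTransformer):
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

num_cols = ["age", "income"]   # assumed numeric features
cat_cols = ["city"]            # assumed categorical feature

preprocess = ColumnTransformer([
    ("num", StandardScaler(), num_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
])
clf = Pipeline([("prep", preprocess), ("tree", DecisionTreeClassifier())])

X_train = pd.DataFrame({"age": [25, 32, 47], "income": [40_000, 52_000, 88_000], "city": ["A", "B", "A"]})
y_train = [0, 1, 1]
clf.fit(X_train, y_train)   # fit() learns scaler means / encoder categories on the training set only

X_new = pd.DataFrame({"age": [29], "income": [61_000], "city": ["C"]})  # unseen category handled by the encoder
print(clf.predict(X_new))   # new data is only *transformed* with the fitted parameters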
H: Is it normal for a model to perform worse with the use of word embeddings? I have a multiclass text classification problem and I've tried different solutions and models, but I was not satisfied with the results. So I've decided to use GloVe ( Global Vectors for Word Representation ) , but somehow all the models performed even worse. So my question is, is it possible that NLP models perform even worse by the use of some word embeddings models like GloVe or FastText? Or did I just made a bad implementation? The code is given below: embedding_model = {} f = open(r'../../langauge_detection/glove.840B.300d.txt', "r", encoding="utf8") for line in f: values = line.split() word = ''.join(values[:-300]) coefs = np.asarray(values[-300:], dtype='float32') embedding_model[word] = coefs f.close() def sent2vec(s): words = str(s).lower() words = word_tokenize(words) words = [w for w in words if not w in stop_words] words = [w for w in words if w.isalpha()] M = [] for w in words: try: M.append(embedding_model[w]) except: continue M = np.array(M) v = M.sum(axis=0) if type(v) != np.ndarray: return np.zeros(300) return v / np.sqrt((v ** 2).sum()) X_train, X_test, y_train, y_test = train_test_split(df.website_text, df.industry, test_size=0.2, random_state=42) x_train_glove = [sent2vec(x) for x in tqdm(X_train)] x_test_glove = [sent2vec(x) for x in tqdm(X_test)] x_train_glove = np.array(x_train_glove) x_test_glove = np.array(x_test_glove) from sklearn.linear_model import SGDClassifier sgd = SGDClassifier(random_state=42) sgd.fit(x_train_glove, y_train) AI: There are various cases where a problem works better with a simpler representation of text than word embeddings: Data size: if it's too small, the model may overfit because the embeddings give too much precision. Generally embeddings are more subtle so they require more data diversity. The selected embeddings are not suitable for the data, e.g. general text embeddings may not work well with scientific texts, social media data, etc. Embeddings are trained on some data, so if this training data is too different from the data for the application then it won't give good results. Generally one should never assume that a method is always better than another, as per the No Free Lunch theorem.
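One practical way to check this on your own data is to benchmark the embedding pipeline against a plain bag-of-words baseline on exactly the same split. This hedged sketch reuses the raw-text X_train/X_test splits from the question's code (i.e. before they are turned into GloVe vectors); the vectorizer settings are illustrative:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

baseline = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
                         SGDClassifier(random_state=42))
baseline.fit(X_train, y_train)                 # X_train here holds the raw website texts
preds = baseline.predict(X_test)
print("tf-idf baseline accuracy:", accuracy_score(y_test, preds))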
H: Use model output as feature to predict model error in boosting We use data $(\boldsymbol{x_1}, \boldsymbol{x_2},\ldots, \boldsymbol{x_n},\boldsymbol{y})$ to improve the predictability of a physical model $f(x_1,x_2,\ldots,x_n)$ that was implemented by domain experts. Let $\hat{\boldsymbol{y}}=f(\boldsymbol{x_1}, \boldsymbol{x_2},\ldots, \boldsymbol{x_n})$. Originally, we decided to fit the errors $\boldsymbol{e}=\boldsymbol{y}-\hat{\boldsymbol{y}}$ with a statistical model, $g$, so the improved model shall be $g(x_1,x_2,\ldots,x_n) + f(x_1,x_2,\ldots,x_n)$. Later I found that adding $\hat{y}$ as a feature can train a better $g$ for fitting the error $e$, so the final model was changed to: $$g(x_1,x_2,\ldots,x_n,\hat{y}) + f(x_1,x_2,\ldots,x_n)$$ or simply $$h(x_1,x_2,\ldots,x_n)=g\left(x_1,x_2,\ldots,x_n,f(x_1,x_2,\ldots,x_n)\right) + f(x_1,x_2,\ldots,x_n)$$ Does this approach have any issue? It seems good to me because $\hat{y}$ is just an engineered feature for training $g$. Posts like https://stats.stackexchange.com/questions/404809/is-it-advisable-to-use-output-from-a-ml-model-as-a-feature-in-another-ml-model also support my view. AI: What do you mean by $f$ is a "physical model"? If you mean something like, "Given some $x$, domain experts then use their experience/discretion to estimate $f(x)$", which you then feed into some statistical model $g$, then I see no issue at all here. (E.g. $x$ is some weather data, we then ask some weather experts their thoughts on the chance of rain tomorrow $f(x)$, and then use that as features for some machine learning model.) In fact, that is simply feature engineering. If $g$ is a flexible ML model like neural networks, forests, etc, then worst case these features don't contribute anything but should not really degrade performance. If $g$ is a rigid statistical model like OLS or something, then you might run into some various model-specific issues like multi-collinearity etc. Hard to say without knowing what $g$ is. Now if $f$ is also a statistical model, then you might run into some issues with overfitting. For example, training a random forest to get $f(x)$ then using both $x$ and $f(x)$ as features in a neural network. But you can work around this with some proper cross-validation and data splitting.
H: Bayesian state description in Reinforcement Learning What's the best approach to feed a bayesian description of an observed state to a Reinforcement Learning agent? Brief context: I have an agent situated in an environment, which it perceives through a series of noisy sensors. Using Bayesian inference (outside of the RL agent) a number of sensed variables of interest are described in terms of their probability distributions, forming the state vector. As an example, think of the state as the position of an object in space, described at each instant with a multivariate Normal distribution. The agent is tasked with gathering information about the state, meaning that its actions can impact the level of uncertainty used to describe the state. Following the previous example, if the location of the object is perceived through a movable camera, one of the actions could be to focus the camera on the object, lowering the variance in the probabilistic description of said location. The state is assumed continuous, so a function approximation has to be used to retrieve the Q factor in each state. The set of actions is instead finite. The reward function reflects this "variance minimizing" drive, as there's a penalty factor proportional to the variance of the observed state. First question: is this actually doable with RL or am I missing something? Second question: which choice might be better for the function approximator, knowing it has to deal with that probabilistic description of the state? NOTE: The simpler the approach the better, as the training data is rather limited and definitely not enough for a Deep approach. AI: If the state uncertainty affects the decision making then there are possibly two avenues that you can take: Incorporate the uncertainty in the state and formulate the problem as a Partially Observable Markov Decision Process (POMDP). An example of this is the RockSample domain. Some indicative links that could help you in your search: paper, code+visualization, code+examples+solvers Another approach is to try to solve the exploration vs exploitation dilemma. Mainly if there are a number of different tasks (MDPs) and the agent doesn't know which task is facing it can maintain a belief distribution over tasks and use a solver to solve the task it believes is currently facing. This is the Bayes-Adaptive MDP framework. You would need approximate methods to solve the problem. Aside the traditional POMDP/BAMDP solvers out there, there are also Neural Network approaches (usually model free RL) that solve these types of problems. However if you are not familiar with these problem formulations it is better to use the traditional methods first.
H: How to proceed when training data change frequently (in production)? I'm working with a recommendation system that takes as input a bunch of "tweets" a user sees during their navigation of a mobile app. Every tweet has a property, like a category (which is an input from the operational team). My recommendation, at the end, is to show the next "tweet" so that it is interesting for the end user. For the initial training and validation, everything works smoothly. I built an NxM matrix and calculated the distance between each user. For the columns, I used the categories. That way, similar users read similar tweets of similar categories. I get the distance between these tweets and recommend the one which similar users liked most. My concern is that those categories change over time: the set of categories (i.e. the dimensional space) expands. Suppose we had categories [Work | News | Health]. Next week, the operational team may want to test new categories, like [Family, Fitness], not overwriting the old ones but launching new tweets with new categories that were not mapped in the training step. How can I maintain the model in production, since these new categories are always changing? The initial NxM matrix I built will not work, since the model in production was trained using a len(categories) smaller than the actual number. It's like saying the model in production has an NxM matrix, but now the data is NxM+2. Do I need a constant training step on the cloud? Since the model just takes as input the last few tweets a user saw, how can I keep pace with new categories showing up? AI: The answer to your question really depends on business specifics. If categories only update once a week, then probably the most reasonable solution is to retrain the model from scratch (as you are supposed to do regularly anyway, even with a fixed number of categories). If the updates are daily (or even more frequent), then you could start collecting an exhaustive list of categories (which is a good idea anyway), and possibly create a hierarchy of categories (e.g. #tennis is part of #sport, but not the other way around). The latter would allow you to use a hierarchical loss (e.g. hierarchical cross-entropy), which in turn could help with cold-start (even if you get a completely new category, most likely it will be part of some existing higher-level category). Again, even with an exhaustive list of categories you do want to retrain your model regularly (how often is another question). In general a completely new category means that you do not have any user-category pair in your training set, and this "item cold-start" problem does not have a "very good" solution. Alternatively, in your case it seems that the category itself exists, it is just not required by the operational team in some weeks. If this is the case, then I would follow the second option (get an exhaustive list and proceed). Once you have an exhaustive list, if some categories are not required this week, simply skip them and move to the next most likely one (for a particular user).
H: Do 0-columns affect the results of time series clustering when using k-means and Ward's method? I would like to cluster multidimensional time series using k-means and Ward's method. My base dataset has 4 columns (features) and each of them is a time series of 288 values. So one "datapoint" has $4*288=1152$ entries (dimensions). I have 100 datapoints that I want to cluster. Depending on the setup, it might be possible that 1 or 2 of the 4 columns have 0 values for 288 time series values and for all of the 100 datapoints that I want to cluster. Now my question is, if and how these 0-columns affect the results of the clustering with k-means and Ward's method? So let's say that actually one datapoint has only 2 features with 288 values. Does it make a difference if I use $2*288=576$ dimensions for one record compared to using $4*288=1152$ dimensions for one record when out of the 4 dimensions in the big array 2 have 0-values for all entries? AI: If you have got, say, $100 \times 576$ (i.e., $100$ rows/data points and $576$ columns which represent the linearization of $2$ time series) values that are $0$ and you use a variance-based optimization approach, then including such values will affect the resulting variance, since variance is based on the mean of your observations. However, assuming that you use a non-randomized clustering procedure, the data points will fall inside the same clusters either including or excluding those 0 values; they will simply have different variances, but those variances are penalized by the same quantity in all the data points (i.e., the mean will be nearer to $0$ including those $0$ values). If you want to use a randomized procedure, I would suggest you use the same random seed for both experiments to check the result by inspecting some data points in both experiments and see in which cluster they fall.
H: Is it possible to implement logistic regression (or any other ML method) to impute null values in a categorical feature with multiple values? I'm doing a Data Science project, and I'm on the stage of cleaning categorical features. I've been researching, and it seems that imputing the mean or median can change the distribution. Therefore, a better way would be to use logistic regression or any other model to predict null values in categorical features. In this post, the author explains how to use logistic regression to impute null values in a binomial categorical feature. However, the categorical features that I'm using have multiple possible values. Do you know of any approach to solve this and get an accurate imputation of null values on multi-categorical features? Thanks! AI: I am not saying this is a good idea. You could use multinomial models (logistic, trees). The test you posed "get an accurate imputation" is hard. Given the missing values are unknown, you can get a probabilistic answer. How accurate is a function of the data. And now you have 2 models that you need to prepare and monitor. A bigger question - can the features be null during scoring or is this a training issue only? If the model is in production and receives missing values, you need to run the imputation model scoring to determine what value to place in the feature before scoring with the model. Hopefully a null indicator variable is always getting set in your data. And you have already researched the missing values to see if there is a pattern, if there is meaning to the missing, why they are missing, subject matter expert rules that can replace, etc. Are these missing at random or missing not at random or ...?
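Following the answer's suggestion of a multinomial/tree model, here is a minimal sketch of classifier-based imputation for a multi-valued categorical column; the column names and data are hypothetical, and a missingness indicator is kept as recommended:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age": [22, 35, 58, 41, 30, 66],
    "income": [30, 60, 80, 55, 45, 70],
    "segment": ["A", "B", "C", np.nan, "B", np.nan],  # multi-class column with missing values
})

observed = df["segment"].notna()
features = ["age", "income"]

# Train on the rows where the category is observed, predict it where it is missing.
imputer_model = RandomForestClassifier(random_state=0)
imputer_model.fit(df.loc[observed, features], df.loc[observed, "segment"])

df["segment_missing"] = ~observed   # keep a flag so downstream models know which values were imputed
df.loc[~observed, "segment"] = imputer_model.predict(df.loc[~observed, features])
print(df)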
H: Is there a term for data based on calculations of raw data? I'm trying to find out if there is a specific term for describing data created from computations based on raw data. For example, I have the two data tables below. I'd like to know the term used to describe table #2. Thank you! Data Table #1: Raw Data Based On Sales Transactions |**TimeStamp**|-|**TransactionTotal**| -------------------------------------- | 09:30 |-| 100.00 | -------------------------------------- | 09:31 |-| 50.00 | -------------------------------------- | 09:35 |-| 10.00 | -------------------------------------- | 10:12 |-| 200.00 | -------------------------------------- | 10:18 |-| 100.00 | Data Table #2: Calculations based on Sales Transactions ------------------------------------------------------ | Avg Transaction Value |-| 92.00 | ------------------------------------------------------ | Transaction Per Hour (09am) |-| 3.0 | ------------------------------------------------------ | Transaction Per Hour (10am) |-| 2.0 | ------------------------------------------------------ | Total Transactions |-| 5.0 | ------------------------------------------------------ | Avg Transaction Per Hour |-| 2.5 | AI: In your specific case there, I would call them descriptive statistics of the raw data. In the context of feeding those statistics into a model (treating them as features), you could call them manually generated features $\rightarrow$ the result of feature engineering.
H: How to plot categorical variables with a pie chart I am concerned with a single column (fruit) from my df:
| fruit               |
| ------------------- |
| apple, orange       |
| banana              |
| grapefruit, orange  |
| apple, banana, kiwi |
I want to plot the values from fruit as a pie chart to get a visual representation of the distribution of each individual fruit. I run: df.plot(kind='pie', y='fruit') But this gives a TypeError: '<' not supported between instances of 'str' and 'int' I have read: https://stackoverflow.com/questions/20449427/how-can-i-read-inputs-as-numbers But I can't see how it helps solve my problem. Any help much appreciated! AI: You first want to count the number of occurrences of each individual fruit inside the column; once you have those counts you can plot them with whatever kind of plot you want. Note that the separator should be ", " (comma plus space): splitting on "," alone would leave leading spaces and create duplicate categories such as "orange" and " orange".
import pandas as pd
df = pd.DataFrame({"fruit": ["apple, orange", "banana", "grapefruit, orange", "apple, banana, kiwi"]})
df.fruit.str.get_dummies(sep=", ").sum().plot.pie();
H: Normalize summary of customer feedback text / word-cloud /word-count I am trying to make a first analysis on the interest of people feedback from their emails. For a first analysis I made with a simple wordcount to know the key words. I am facing the following problem: some customers give very short feedback and others a single customer gives very long feedback so the wordcount mechanism that simply counts words gives more weight to the customer who writes more, which may not be the most important. i.e customer_1: I would like to know the normative about Covid, cause I m covid vaccinated... covid ..covid (2000 words) # word covid appear 13 times customer_2: I m worry about price (100 words) customer_3: Something about pices too(150 words) If we just follow the aproach of Word count, the results are unbalanced towards the person who writes the most. how can this be avoided ? In ML, in order to avoid that some attributes have more weight than others, they are normalised, how would this be in NLP ? AI: You can apply text classification with Bert. It would give a classification, whatever the message length is. Therefore, you can use multi-class text classification, for instance: https://huggingface.co/palakagl/Roberta_Multiclass_TextClassification?text=I+love+AutoTrain+%F0%9F%A4%97 To implement it, here are several tutorials: https://www.kaggle.com/code/thebrownviking20/bert-multiclass-classification https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613 https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f#:~:text=There%20are%20two%20different%20BERT,hidden%20size%2C%20and%20340%20parameters.
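A hedged sketch of the suggested approach: classify each message once, so a verbose customer does not get more weight than a terse one, and then aggregate label counts per customer or per period. The model name below is the one linked above and is assumed to be available on the Hugging Face hub; in practice you would fine-tune a classifier on your own feedback categories:
from collections import Counter
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="palakagl/Roberta_Multiclass_TextClassification")

messages = [
    "I would like to know the normative about Covid, because I am covid vaccinated ...",  # long message, still one vote
    "I am worried about the price",
    "Something about prices too",
]
labels = [classifier(m[:1000])[0]["label"] for m in messages]  # crude length guard for very long emails
print(Counter(labels))  # per-message topic counts instead of raw word counts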
H: PPV/TPR equivalent for negative results I'm looking at the results of an ML model I made and I've calculated the PPV, TPR, NPV and TNR. As is expected, there is a tradeoff between the PPV and TPR (from which the F1 score can be calculated) but I was wondering if a similar relationship exists between NPV and TNR, as I have observed that in my results - if so, is there a similar metric to the F1 score for these measurements? Edit: is it even necessary to look at the NPV and TNR? Wikipedia (I know, not a great source) says that a perfect precision eliminates false positives and a perfect recall eliminates false negatives, so what does knowing the NPV and TNR bring to the table? Because surely a perfect NPV eliminates false negatives and a perfect TNR eliminates false positives, so they don't really add any insight into the model. Thanks! AI: The logic of binary classification measures is as follows: Usually there is a natural 'positive' class for the application, and if not one is defined by convention. Evaluation measures are supposed to represent how well a model recognizes this positive class by contrast to the negative one. Naturally if the model can identify the positive class well then it means that it distinguishes the two classes well, therefore it also identifies the negative class well. This is why there is no need for a negative-focused equivalent of F1-score (especially since the positive class is chosen based on the application), but it could perfectly be defined indeed. There's indeed no particular need for negative-focused measures like NPV and TNR, these values are sometimes useful in specific applications but they do not provide any additional information about the ability of the model. The 2 dimensions which are needed are precision (PPV) and recall (TPR). For the record the Wikipedia page on precision/recall is a good reference :)
H: Average precision, balanced accuracy, F1-score, Matthews Correlation Coefficient, geometric means Average precision, balanced accuracy, F1-score, Matthews Correlation Coefficient, geometrics means are the few evaluation metrics for imbalanced data. However, all this metrics can lead to different 'best' model. How do we then decide which is indeed the 'best' model? AI: It's about designing the task properly. I'm not talking about the design of the model, and this is not either about selecting an evaluation based on the characteristics of the data (e.g. there's no simple way to decide based on whether the data is imbalanced or not). The design of standard tasks is usually established in the state of the art. Take machine translation (MT) for example, there is a whole area of research devoted to evaluating MT, with various simple and advanced evaluation measures designed specifically for the task. People often confuse 'standard type of task' and 'standard task', for example assuming that all the classification tasks can be evaluated the same way. Of course there are standard measures which are used very often in classification, but even with "regular" classification one should ensure that the evaluation measure fits the task. So how does one select the "best" evaluation measure for a task, when the task is not standard? First it's important to realize that a performance score is always a simplification, so there's no perfect evaluation (btw this is why it's sometimes relevant to use several measures). The goal of the evaluation is to represent how well the model does the job, whatever the job is. This often implies human annotations, sometimes by experts, in order to represent what the job should be about. Depending on the task, it is sometimes relevant to compare different evaluation measures: what are their similarities, differences, possible biases or limitations, and which one fits the target task the best. In short: there is no simple answer for evaluation, it's not about applying technical rules but about analyzing the specific target task.
H: combine xgb feature importance Is it possible to combine the results from the xgb.boost importance function. For example, due to one hot encoding, I have a feature age=35 and another age=60. Is there a way that I can add these to get an overall importance of age, and not just age at every value? In case it matters the model is a binary:logistic one. library(xgboost) imp_matrix= xgb.importance(features,model=mod) AI: You can just sum them. The three importances reported are all fractions that a given feature contributes out of the total. The measures are Gain, the impurity improvement a split provides; Cover, the number of datapoints passing through a node; and Frequency, just the number of nodes. All of these are summed across the nodes that use a given feature to split, and so adding them together across dummy variables for a given categorical also makes sense.
H: Difficulty loading data/running model on custom dataset derrived from DNA sequence data - TypeError when attempting to run model I am a student who has some limited experience with keras, and for a new project recently decided to learn how to use pytorch to implement my models. I'm a beginner with both, so apologies in advance for my inexperience, I am doing my best to follow tutorials, but my limited experience combined with most examples being in different uses has resulted in slower comprehension. I'm trying to use NiN blocks as described here (https://d2l.ai/chapter_convolutional-modern/nin.html) to inform my model's architecture. I have built a custom dataset class for my data, the X data is genetic sequence 256 bases long (i.e. "AGCTGGAGCT..."), so the resulting array after one-hotting to four channels for each of the four bases looks like [[[1,0,0,0],[0,1,0,0]...], [0,0,1,0], ...]] and has shape 48,976, 256, 4. I read that Conv1d looks for channels first, so I permuted the channels in the dataset's tensor to read in that way, resulting in torch.Size([48976, 4, 256]). The Y data is 2 values for a given sequence of X, ESC and TSC, each numeric values derived from other source data. The dataset code is as follows: device = "cuda" if torch.cuda.is_available() else "cpu" def onehotseq(dataset, input_shape): onehot = np.zeros(input_shape) for i in range(0, dataset.shape[0]): seq = dataset.iloc[i,1] for c in range(0,len(seq)): if (seq[c] == "A"): onehot[i,c,:] = [1,0,0,0] elif (seq[c] == "C"): onehot[i,c,:] = [0,1,0,0] elif (seq[c] == "G"): onehot[i,c,:] = [0,0,1,0] elif (seq[c] == "T"): onehot[i,c,:] = [0,0,0,1] return onehot class EpiDataset(torch.utils.data.Dataset): def __init__(self, Seq_filepath="path_to_sequence_data", Y_data_filepath="path_to_output_data"): self.seq_data = pd.read_csv(Seq_filepath, sep="\t", header=None) self.seq_data.rename(columns={0:"id", 1:"seq"}, inplace=True) self.y_data = pd.read_csv(Y_data_filepath, sep="\t", header = 0) self.y_data["ESC"] = np.log2((self.y_data["ESC.H3K27ac"].values+1)/(self.y_data["ESC.input"].values+1)) self.y_data["TSC"] = np.log2((self.y_data["TSC.H3K27ac"].values+1)/(self.y_data["TSC.input"].values+1)) self.dataset = self.seq_data.merge(self.y_data, on="id") self.list_IDs = self.dataset["id"] self.seq = self.dataset["seq"] self.esc = self.dataset["ESC"] self.tsc = self.dataset["TSC"] self.input_shape = (self.dataset.shape[0], 256, 4) self.onehotseq = onehotseq(self.dataset, self.input_shape) self.tensorX = torch.from_numpy(self.onehotseq) self.tensorX = self.tensorX.permute(0, 2, 1) self.labels = self.dataset[["ESC","TSC"]].to_numpy() self.tensorY = torch.from_numpy(self.labels) def __len__(self): return len(self.list_IDs) def __getitem__(self, index): ID = self.list_IDs[index] seq = self.seq[index] esc = self.esc[index] tsc = self.tsc[index] return { "ID: ": ID, "sequence: ": seq, "ESC: ": esc, "TSC: ": tsc } This all seems to work as intended, and I was able to design a Module class, which also seems to be functionally correct, but I get a type error whenever I try to use the model. 
The code and error are: def nin_block(out_channels, kernel_size, padding="same"): return nn.Sequential( nn.LazyConv1d(out_channels, kernel_size, padding), nn.ReLU(), nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU(), nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU() ) class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.NiN = nn.Sequential( nin_block(32, kernel_size=11,padding="same"), nn.MaxPool1d(3, stride=2), nin_block(64, kernel_size=4, padding="same"), nn.MaxPool1d(3, stride=2), nin_block(128, kernel_size=4, padding="same"), nn.MaxPool1d(3, stride=2), nin_block(256, kernel_size=3, padding="same"), nn.MaxPool1d(3, stride=2), nn.Dropout(0.4), nin_block(4, kernel_size=3, padding="same"), nn.AdaptiveAvgPool1d(2), nn.Flatten(), ) def forward(self, x): x = self.flatten(x) logits = self.NiN(x) return logits Error message, resulting from running model = NeuralNetwork().to(device) and then logit = model(x.tensorX) TypeError: conv1d() received an invalid combination of arguments - got (Tensor, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of: * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int) * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int) My question is, what am I doing wrong either in building my module or dataset, or am I missing a step? The data loaded in is the prepared data for initial exploration/training of different model architectures. AI: I was able to get help off-site, and the issue was feeding "same" into padding at the wrong stage. The correct code would be: def nin_block(out_channels, kernel_size, padding): return nn.Sequential( nn.LazyConv1d(out_channels, kernel_size, padding=padding), nn.ReLU(), nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU(), nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU() ) class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.NiN = nn.Sequential( nin_block(32, kernel_size=11,padding="same"), nn.MaxPool1d(4, stride=2), nin_block(64, kernel_size=4, padding="same"), nn.MaxPool1d(4, stride=2), nin_block(128, kernel_size=4, padding="same"), nn.MaxPool1d(4, stride=2), nin_block(256, kernel_size=4, padding="same"), nn.MaxPool1d(4, stride=2), nn.Dropout(0.4), nin_block(4, kernel_size=4, padding="same"), nn.AdaptiveAvgPool1d(2), nn.Flatten(), ) def forward(self, x): x = self.flatten(x) logits = self.NiN(x) return logits I'm still troubleshooting some other errors/issues, but marking this as closed given the specified error has been resolved.
H: What is the advantage of using Euler's number ($e^x$) instead of another base in the softmax equation? I understand the softmax equation is $\boldsymbol{P}(y=j \mid x)=\frac{e^{x_{j}}}{\sum_{k=1}^{K} e^{x_{k}}}$ My question is: why use $e^x$ instead of say, $3^x$. I understand $e^x$ is it's own derivative, but how is that advantageous in this situation? I'm generally trying to understand why euler's number appears everywhere, especially in statistics and probability, but specifically in this case. AI: Choosing a different base would squash the graph of the function uniformly in the horizontal direction, since $$ a^x = e^{x\cdot \ln(a)}. $$ The exponential function with base $e$ is widely considered the simplest exponential function. It has nice properties that no other base has, mainly: The function $e^x$ is its derivative. It has a particularly simple power series expansion: $$ e^x = 1 + x + \frac12 x^2 + \frac16 x^3 + \cdots + \frac1{n!}x^n + \cdots $$ All of the coefficients are rational numbers. If the base had been something intuitively "nicer" than $e$, such as an integer, the coefficients would need to be irrational. For this reason, most mathematicians will pick $e^x$ when they need an exponential function and have no particular reason to choose one base over another. (Except for computer scientists and information theorists, who sometimes prefer $2^x$).
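The base-change identity can also be checked numerically: a "base-3 softmax" is exactly the usual softmax applied to logits multiplied by ln 3, so changing the base only rescales the logits (an effective temperature change), it does not change the family of functions:
import numpy as np

x = np.array([1.0, 2.0, 0.5])

def softmax(z):
    z = z - z.max()        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

base3 = 3.0 ** x / np.sum(3.0 ** x)
print(base3)
print(softmax(x * np.log(3)))   # identical output: the base only rescales the logits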
H: How to manually calculate the variance of the least squares estimator in R As stated in the title, how do you manually calculate the variance of the least squares estimator in R? I know that the least estimates have the following formula: $$\hat{\beta}=(X^TX)^{-1} X^T Y, $$ and the variance of the least squares estimator is given by $$Var(\hat{\beta}) = σ^2(X^TX)^{−1}$$ My question clearly stated how to do that "manually," so I can understand that concept comprehensively. An R example would only serve to help me understand this concept. I can easily find in R ($^T$), but what about $\sigma^2$? AI: Let's build the entire example (You can use the Wikipedia page for reference on all formula below): First, generate the model parameters (p=4 in this case) p <- 4 beta <- rnorm(p) Next let's simulate some observations from a linear model: n <- 100 X <- cbind(1,t(replicate(n, rnorm(p-1)))) epsilon <- rnorm(n) y <- X%*%beta + epsilon Let's also obtain our estimates for $\beta_i$: beta_hat <- solve((t(X)%*%X))%*%t(X)%*%y Now we can calculate model predictions: pred <- X%*%beta_hat And finally calculate $\hat{\sigma}^2$: sigma_2 <- sum((y-pred)^2)/(n-p) Just for sports let's also calculate $Var(\hat{\beta})$: beta_hat_covariance <- solve(t(X)%*%X)*sigma_2 beta_hat_var <- diag(beta_hat_covariance) To make sure all our calculations are correct we can believe the lm function is the source of truth and do: my_lm_summary <- summary(lm(y ~ X-1)) # our calculation: sigma_2 # is the same as: my_lm_summary$sigma^2 # our calculation: beta_hat_var # is the same as: my_lm_summary$coefficients[,2]^2
H: If we train a binary classifier (let's say tree-based) to predict ordinal data, does it learn to interpolate? Let's assume we have data about students in grade 10. We have test scores ranging from 0-100, however we are only provided two labels: High score if the score > 80% and low score if the score < 80%. Suppose we train a tree-based classifier — will the model learn to interpolate as well? When a calibrated tree is 10% confident that a record (A) is low complexity vs 40% for another record (B), can we say that record B is likely to have a higher score than record A? How can we train a model to learn this without explicitly providing the absolute score? [Edit] - Assume you have the freedom to get all the input features you want, e.g. family income, hours studied, etc. in the training set. AI: Depending on the data and model fit, it is possible that confidence scores could act as a proxy for (relative) predicted performance. However, you cannot guarantee the relationship you describe would occur. Even if some relationship to this effect does occur, the confidence scores would not be easily interpretable. At best you may be able to produce a rough ordering of exam scores, which may give reasonable results on aggregate. It is unlikely to be suitable for direct comparison of two samples, or for estimating absolute exam scores. It would be easier to comment further with more information on the desired use case. Also note that this is far less likely to be effective if your model overfits on the training set.
H: TypeError: object of type 'NoneType' has no len() when implementing neural network I am building artificial neuron network (ANN) model for predicting values but facing problem: Input: def create_model(optimizer = 'rmsprop', units = 16, learning_rate = 0.001): ann = tf.keras.Sequential() # Initialising ANN ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding First Hidden Layer ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer ############################################### # Add optimizer with learning rate if optimizer == 'rmsprop': opt = tf.keras.optimizers.RMSprop(learning_rate = learning_rate) elif optimizer == 'adam': opt = tf.keras.optimizers.Adam(learning_rate = learning_rate) elif optimizer == 'SGD': opt = tf.keras.optimizers.SGD(learning_rate = learning_rate) else: raise ValueError('optimizer {} unrecognized'.format(optimizer)) ############################################## ann.compile(optimizer = optimizer, loss = 'categorical_crossentropy', metrics = ['accuracy']) # Compiling ANN return ann ann = KerasClassifier(model = create_model, verbose = 2, learning_rate = 0.001, units = 16 ) optimizers = ['rmsprop', 'adam', 'SGD'] epoch_values = [10, 25, 50, 100, 150, 200] batches = [10, 20, 30, 40, 50, 100, 1000] units = [16, 32, 64, 128, 256] lr_values = [0.001, 0.01, 0.1, 0.2, 0.3] hyperparameters = dict(optimizer = optimizers, epochs = epoch_values, batch_size = batches, units = units, learning_rate = lr_values ) grid = GridSearchCV(estimator = ann, cv = 5, param_grid = hyperparameters) history = grid.fit(X_train, Y_train, batch_size = 32, validation_data = (X_test, Y_test), epochs = 100 ) # Fitting ANN Output error: File c:\Users\dis\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\model_selection\_search.py:875, in BaseSearchCV.fit(self, X, y, groups, **fit_params) 869 results = self._format_results( 870 all_candidate_params, n_splits, all_out, all_more_results 871 ) 873 return results --> 875 self._run_search(evaluate_candidates) 877 # multimetric is determined here because in the case of a callable 878 # self.scoring the return type is only known after calling ... self._check_model_compatibility(y) File "c:\Users\dis\AppData\Local\Programs\Python\Python310\lib\site-packages\scikeras\wrappers.py", line 551, in _check_model_compatibility if self.n_outputs_expected_ != len(self.model_.outputs): TypeError: object of type 'NoneType' has no len() Data: X.shape -> (10, 2066) Y.shape -> (10, 4) X_train.shape -> (8, 2066) X_test.shape -> (2, 2066) Y_train.shape -> (8, 4) Y_test.shape -> (2, 4) AI: When you use Sequential model in tf.keras you need to provide the input_shape in the first layer or to add the input layer. 
Modify your code as follows; note that the input shape must be the number of features (X.shape[1] = 2066), not the number of samples:
ann = tf.keras.Sequential() # Initialising ANN
ann.add(tf.keras.layers.Dense(units = units, input_shape=(X.shape[1],), activation = "relu")) # Adding First Hidden Layer
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer
ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer
or adding an input layer as follows:
ann = tf.keras.Sequential() # Initialising ANN
ann.add(tf.keras.layers.Input(shape=(X.shape[1],))) # Input Layer
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding First Hidden Layer
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer
ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer
H: Are you supposed to clean new data before it is fed to a machine learning model? I have train/test data for my text classification problem. I have used them to create and test several ML models (LogisticRegression, RandomForest, and LinearSVC). The train and test data consist of many documents classified into several categories. The data is cleaned of dates and numbers, everything is lowercase and there is no punctuation. Where dates are cleaned I have substituted them with the word 'date'. I have applied the same approach to invoice numbers, which were replaced with the word 'invoice'. This greatly helped my models because this specific word was given higher weight and it improved classification. Now that I have chosen the best model I plan to use it on the new data that will be coming in. As for this new data, should I clean it before it goes to the trained model (the same way I clean my train/test data), or am I supposed to leave it as is? AI: Yes, it makes perfect sense to clean/preprocess the new data in exactly the same way as the train/test dataset: the model has only ever seen cleaned text, so it should also only ever be asked to predict on cleaned text. For reference: https://stackoverflow.com/questions/66301306/do-you-have-to-clean-your-test-data-before-feeding-into-an-nlp-model
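A minimal sketch of how that consistency is usually enforced in practice: keep the cleaning in one function and call it both when building the training set and when scoring new documents. The regex patterns below are rough assumptions for illustration, not the asker's actual rules:
import re

def clean_text(text: str) -> str:
    text = text.lower()
    text = re.sub(r"\b\d{1,2}[./-]\d{1,2}[./-]\d{2,4}\b", " date ", text)  # crude date pattern (assumption)
    text = re.sub(r"\b\d+\b", " invoice ", text)                           # crude invoice-number pattern (assumption)
    text = re.sub(r"[^\w\s]", " ", text)                                   # drop punctuation
    return re.sub(r"\s+", " ", text).strip()

# Training time:  vectorizer.fit_transform([clean_text(d) for d in train_docs])
# Inference time: vectorizer.transform([clean_text(d) for d in new_docs])
print(clean_text("Invoice 12345 issued on 01/02/2021."))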
H: Text cleaning when applying Sentence Similarity / Semantic Search Do we need to apply text cleaning practices for the task of sentence similarity? Most models are being used with whole sentences that even have punctuation. Here are two example sentences that we wish to compare using SentenceTransformer (all-MiniLM-L6-v2): sentences = [ "Oncogenic KRAS mutations are common in cancer.", "Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs."] # yields that sentence 2 has a score of 0.191 when compared with sentence 1 Will cleaning those sentences change its semantic meaning? cleaned = ['oncogenic bras mutations common cancer', 'notably c-raf recently found essential development bras driven nsclcs.'] # yields that sentence 2 now has a score of 0.327 when compared to sentence 1 It seems the model works better when the text is cleaned. However, nowhere does it say that the input sentences are being / should be cleaned? Would love to know your takes on this. AI: Answer: Transformer based models used for sentence similarity have been trained on huge amounts of data where the text preprocessing part has been handled either at the tokenization step or by the attention mechanism of the transformer. Applying cleaning methods and then using the cleaned text as input will worsen the quality of the embeddings. The inputs now differ from the ones the model has been trained with. The attention mechanism will be the one who will neglect tokens that are meaningless and include ones that are meaningful. In that case, a comma or a number can be meaningful in some context and meaningless in another, hence why we should not clean the text on our own but use it as it is.
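To see the effect on your own pairs, you can reproduce the comparison directly: encode the raw and the cleaned sentences with the same model and compare cosine similarities. This sketch assumes a recent sentence-transformers version (which provides util.cos_sim); the cleaned strings are just an illustration of aggressive preprocessing:
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

raw = ["Oncogenic KRAS mutations are common in cancer.",
       "Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs."]
cleaned = ["oncogenic kras mutations common cancer",
           "notably c-raf recently found essential development kras driven nsclcs"]

raw_emb = model.encode(raw, convert_to_tensor=True)
cleaned_emb = model.encode(cleaned, convert_to_tensor=True)

print("raw similarity:    ", util.cos_sim(raw_emb[0], raw_emb[1]).item())
print("cleaned similarity:", util.cos_sim(cleaned_emb[0], cleaned_emb[1]).item())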
H: Select threshold (cut-off point) for binary classification by desired FPR percentage value I want to recreate the catboost.utils.select_threshold(desc) method for a CalibratedClassifierCV model. In CatBoost I can select a desired FPR value to return the boundary at which the given FPR value is reached. My goal is to apply the same logic after computing fpr, tpr and boundaries from sklearn.metrics.roc_curve. I have the following code
prob_pred = model.predict_proba(X[features_list])[:, 1]
fpr, tpr, thresholds = metrics.roc_curve(X['target'], prob_pred)
optimal_idx = np.argmax(tpr - fpr) # here I need to use FPR=0.1
boundary = thresholds[optimal_idx]
binary_pred = [1 if i >= boundary else 0 for i in prob_pred]
I guess it should be a simple formula but I am not sure how to place the 0.1 value here to adjust the threshold. AI: I've done my research and testing and it's that simple:
from sklearn.metrics import roc_curve

def select_threshold(proba, target, fpr_max=0.1):
    # calculate the roc curve; thresholds are returned in decreasing order,
    # so fpr is non-decreasing along the arrays
    fpr, tpr, thresholds = roc_curve(target, proba)
    # take the last (i.e. lowest) threshold whose fpr is still <= fpr_max,
    # which maximises TPR while respecting the FPR cap
    best_threshold = thresholds[fpr <= fpr_max][-1]
    return best_threshold
H: Different result of classification with same classifier and same input parameters I did a binary classification using "Random Forest". The code block is clf = RandomForestClassifier() clf.fit(X_train, y_train) R_y_pred = clf.predict(X_test) print(classification_report(y_test, R_y_pred)) The result is precision recall f1-score support 0 0.91 0.98 0.94 1023 1 *0.79 0.48* 0.60 185 accuracy 0.90 1208 macro avg 0.85 0.73 0.77 1208 weighted avg 0.89 0.90 0.89 1208 When I apply clf.get_params() command to see the default parameters, I got {'bootstrap': True, 'ccp_alpha': 0.0, 'class_weight': None, 'criterion': 'gini', 'max_depth': None, 'max_features': 'sqrt', 'max_leaf_nodes': None, 'max_samples': None, 'min_impurity_decrease': 0.0, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100, 'n_jobs': None, 'oob_score': False, 'random_state': None, 'verbose': 0, 'warm_start': False} Now in another code, I defined the criterion for RandomForestClassifier The code block is cri_clf = RandomForestClassifier(criterion = 'gini') cri_clf.fit(X_train, y_train) cri_y_pred = cri_clf.predict(X_test) print(classification_report(y_test, cri_y_pred)) The result is precision recall f1-score support 0 0.91 0.98 0.94 1023 1 *0.80 0.46* 0.59 185 accuracy 0.90 1208 macro avg 0.86 0.72 0.77 1208 weighted avg 0.89 0.90 0.89 1208 So, you can see that there is a slight difference in the result of precision and recall when I define a criterion explicitly with not defining a criterion. If all the parameters are the same for two codes why do I get the differences between the two results? Thank you. AI: From sklearns random forest documentation: random_state int, RandomState instance or None, default=None Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details. Each time you re-run this with random_state = None it runs different models. Set random_state to 0 (or any number) and see consistent results.
H: pytorchs LSTMs use of 'bias' and 'weight' strings Hi I am new to RNN and have come across this the following implementation of Pytorchs LSTM, but I cant understand how (or why) the 'bias' and 'weight' strings work in the 'def init_weights'. class LSTM_LM(nn.Module): def __init__( self, pretrained_emb: torch.tensor, lstm_dim: int, drop_prob: float = 0.0, lstm_layers: int = 1, ): super(LSTM_LM, self).__init__() self.vocab_size = pretrained_emb.shape[0] self.model = nn.ModuleDict({ 'embeddings': nn.Embedding.from_pretrained(pretrained_emb, padding_idx=pretrained_emb.shape[0] - 1), 'lstm': nn.LSTM( pretrained_emb.shape[1], lstm_dim, num_layers=lstm_layers, batch_first=True, dropout=dropout_prob), 'ff': nn.Linear(lstm_dim, pre.shape[0]), 'drop': nn.Dropout(dropout_prob) }) # Initialize the weights of the model self._init_weights() def _init_weights(self): all_parameters = list(self.model['lstm'].named_parameters()) + \ list(self.model['ff'].named_parameters()) for n, p in all_parameters: if 'weight' in n: nn.init.xavier_normal_(p) elif 'bias' in n: nn.init.zeros_(p) EDIT To be more precise, what part of the code makes it possible to check if the string 'weight' appreas in n? n is as I understand it a parameter but does nn.LSTM consist of weight and bias as stringparameters such that I can access them with LSTM.parameter('weight')[1] for instance? I am not sure how to understand it in relationhsip (if there is such) to the variable section of: https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html Update I am now able to print all_parameters of LSTM. It looks like this: [('weight_ih_l0', Parameter containing: tensor([[-0.5299, 0.0481], [-0.3032, 0.2907], [-0.0553, -0.4933], [-0.2063, -0.2334], [-0.5127, -0.1538], [-0.4484, 0.1707], [-0.3729, 0.3518], [-0.3200, 0.5846]], requires_grad=True)), ('weight_hh_l0', Parameter containing: tensor([[-0.6242, 0.5774], [ 0.7023, -0.3028], [-0.4403, 0.2972], [-0.3179, 0.4870], [ 0.2489, 0.0627], [ 0.6007, 0.3024], [-0.3393, 0.1481], [ 0.1212, -0.6172]], requires_grad=True)), ('bias_ih_l0', Parameter containing: tensor([-0.2282, -0.0345, -0.3226, -0.5983, -0.0105, 0.3180, -0.1699, -0.5312], requires_grad=True)), ('bias_hh_l0', Parameter containing: tensor([ 0.4270, 0.0965, -0.3981, 0.6470, 0.3207, -0.0163, -0.4651, -0.0321], requires_grad=True)), ('weight', Parameter containing: tensor([[ 0.2041, 0.5927], [ 0.4556, 0.1257], [ 0.5357, -0.1195], [ 0.0016, -0.1114]], requires_grad=True)), ('bias', Parameter containing: tensor([ 0.0932, -0.5147, -0.6265, 0.2009], requires_grad=True))] Although I don't see how that match the variables in the pytorch documentation that I linked to above, such as: ~LSTM.weight_ih_l[k] – the learnable input-hidden weights of the \text{k}^{th}k th layer (W_ii|W_if|W_ig|W_io), of shape (4hidden_size, input_size) for k = 0. Otherwise, the shape is (4hidden_size, num_directions * hidden_size). If proj_size > 0 was specified, the shape will be (4*hidden_size, num_directions * proj_size) for k > 0 AI: The function _init_weights is simply looping over all parameters and using a Xavier normal initialization for the weights and initializing the biases with a value of zero. The values you see in all_parameters match what is mentioned in the pytorch documentation under the 'Variables' header. For example, weight_ih_l0 and weight_hh_l0 in your code link back to the variables weight_ih_l[k] and weight_hh_l[k] that are mentioned in the documentation (with k being zero in this case).
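To make the string check concrete, here is a small sketch mirroring the answer: named_parameters() yields (name, tensor) pairs, and the names are ordinary Python strings such as "weight_ih_l0" or "bias_hh_l0", so 'weight' in n is just a plain substring test on that string:
import torch.nn as nn

lstm = nn.LSTM(input_size=2, hidden_size=2)
ff = nn.Linear(2, 4)

for n, p in list(lstm.named_parameters()) + list(ff.named_parameters()):
    kind = "weight" if "weight" in n else "bias"
    print(f"{n:15s} -> treated as a {kind}, shape {tuple(p.shape)}")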
H: Should I use validation data and val_loss when training final model? I am training a keras model that utilizes early_stopping in order to prevent overfitting. This requires that I set aside a validation dataset. My task requires that I keep my training and validation split by time, so that all samples in my validation set occur after the point in time of those in my training set. My challenge is that the examples in my validation (by definition the most recent examples in time) are very important for my prediction task and I would like to use them to train a final model. From all I can see, it seems that in general it is recommended to train a final model (to be released to production) on all data available, after model configuration has been decided upon in a traditional train/test period (see here). However, if I use all of my data to train a final model, I no longer can utilize early_stopping, since I will not have any validation set (it will be being used for training). I could randomly sample a subset of my training data to use for validation (instead of using the most recent data as I was during training/testing), but then I worry that due to the time series dynamic of the problem I am running the risk of data leakage. My question really boils down to: What is the preferred way to train a final, production model when in Keras (or another framework)? Thanks! AI: Especially for time series work, yes, use your full dataset for training your final model. Keep your number of epochs the same as the best performance on your val_loss. If you want, you can remove the same period of time from the start of the training data, to ensure the model is given a consistent number of samples to learn over. This is a big challenge when shipping models to production, as you now have no validation set, how do you know how well it is performing? This is where you need to get creative, and use different time-series k-fold splitting strategies. When working on TS problems that you want to ship to production, I like to use a training, validation and holdout set. So you can test your models true performance. Otherwise you will be over-fitting on your validation set, through early stopping. (You are leaking information from your validation set, into the keras model training process) Hope this helps.
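A hedged Keras sketch of the "find the best epoch on a validation split, then retrain on everything for that many epochs" recipe; build_model and the data arguments are hypothetical placeholders for your own model factory and time-ordered splits:
import numpy as np
import tensorflow as tf

def final_fit(build_model, X_train, y_train, X_val, y_val, X_all, y_all):
    # Phase 1: use the time-ordered validation split only to pick the epoch count.
    es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                          restore_best_weights=True)
    model = build_model()
    hist = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                     epochs=200, callbacks=[es], verbose=0)
    best_epochs = int(np.argmin(hist.history["val_loss"])) + 1

    # Phase 2: retrain a fresh model on all data for that fixed number of epochs,
    # with no early stopping (there is no validation set left to monitor).
    final_model = build_model()
    final_model.fit(X_all, y_all, epochs=best_epochs, verbose=0)
    return final_model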
H: Finding research papers for a dataset I found a breast cancer dataset on Kaggle. Here is the link - https://www.kaggle.com/datasets/reihanenamdari/breast-cancer I would like to how could I find out which research papers use this dataset for binary classification. So far I got only one paper "Breast Cancer Survival Prediction from Imbalanced Dataset with Machine Learning Algorithms" that use this dataset after searching on google scholar. If there is any technique to find out research papers for a particular dataset, Please let me know. Thank you. AI: Super important question. The reason is that this is not the original source. If you go to the data -> meta data -> sources, you can see the source is: JING TENG, January 18, 2019, "SEER Breast Cancer Data", IEEE Dataport, doi: https://dx.doi.org/10.21227/a9qy-ph35. https://ieee-dataport.org/open-access/seer-breast-cancer-data Then searching google datasets for the DOI number, we can click through onto the google scholar link to get the following: https://scholar.google.com/scholar?q=%22ieee%20dataport%20org%20open%20access%20seer%20breast%20cancer%20data%22
H: Is dimensionality reduction required for TensorFlow?
I am working on face emotion detection with the FER2013 dataset, using TensorFlow and a VGG16 model. I am applying t-SNE to my training dataset for dimensionality reduction. My question is: is dimensionality reduction required for TensorFlow?
AI: Dimensionality reduction is not a requirement for TensorFlow's CNN training: dimensionality reduction is mainly used for unsupervised clustering and visualization. I am also not sure you will cluster expressions clearly from the raw pixels, because expressions are not the most salient features of a face. CNN training is supervised: you indicate the expressions to recognize, so that, for instance, all pictures labelled "happiness" teach the network to detect the visual cues of happiness (around the eyes, the smile, etc.).
I would instead recommend applying dimensionality reduction to the CNN's output, i.e. the softmax activations.
https://fr.mathworks.com/help/deeplearning/ug/view-network-behavior-using-tsne.html
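For illustration, a minimal sketch of this "t-SNE on the network's output" idea. Everything here is a placeholder: the image batch is random, the VGG16 is randomly initialized (weights=None) with 7 classes to mimic FER2013's emotions, and in practice you would run your own trained model instead:
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.manifold import TSNE

# Fake batch of 100 RGB images at VGG16's default input size, just to make the sketch runnable
images = np.random.rand(100, 224, 224, 3).astype("float32")

# Randomly initialized VGG16 with 7 output classes (stand-in for your trained FER2013 model)
model = VGG16(weights=None, include_top=True, classes=7)

scores = model.predict(images, verbose=0)                 # (100, 7) softmax activations
embedding = TSNE(n_components=2, perplexity=30).fit_transform(scores)  # (100, 2), ready to scatter-plot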
H: Recognize chatbox on game screenshots
I have videos from a computer game. In this computer game, during the rounds, there is a chat box where players can write messages. I want to read the content of this chat box. The difficulties are:
The chat box is always different in size, depending on how much has been written.
Sometimes there is no chat box at all, because nobody writes anything.
Sometimes the chat box is covered by other HUD windows.
Parts of the video are in the menu or on the desktop. Not all frames are in the game.
At first I thought I would break the problem down into individual steps:
Split the video into frames.
Use an image classifier to see whether a frame is a game screen at all.
Cut out approximately where the chat box could be.
Use object detection to find the exact area of the chat and crop the image to it, so that no other HUD elements are in the image.
Use Tesseract for the actual text recognition.
But I think this is very complicated. Would it be better to do steps 2, 3 and 4 directly with object detection? So something like this:
Split the videos into frames.
Detect whether there is a chat at all and, if so, where.
Crop.
Text recognition.
Before I label 10,000 images I wanted to ask what the right approach is. Thanks a lot!
AI: Yes, it seems to be the right general approach; however, I recommend cutting it down into smaller pieces in order to be efficient.
First of all, you should ensure that the most important function, text recognition, works well: if you have good area detection and good screen recognition but poor text recognition, you will have lost time. There are always tricks to get the right area and the right frame, but reliable text recognition is more complex. That's why I would start by evaluating text recognition on already-cropped text areas. The aim is to reach a very good result there first. Tesseract is probably the best library for that. Nevertheless, the background is a game image, so it is important to check that the text recognition is correct on a hundred pictures.
In addition, I would recommend starting with around 200 random pictures instead of 10,000. Very often you can find many mistakes with a first small sample and correct them without processing all the data. Then you increase the amount progressively, because there are always unexpected special cases (e.g. a life bar with text in the text's background).
It could be interesting to apply object detection to get the bottom-menu coordinates, as the text position may differ with screen resolution. But it could be even simpler to use the window size and express the bottom-menu positions as percentages of it. Consider also all kinds of user settings (e.g. removal of the bottom menu, change of the background transparency, etc.).
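To make the "start with Tesseract on already-cropped areas" step concrete, here is a minimal sketch. The file name and the crop coordinates are made-up placeholders; in practice they would come from your frames and your detection step (or from fixed screen-percentage offsets):
import cv2
import pytesseract

# Placeholder frame and chat-box region
frame = cv2.imread("frame_000123.png")
x, y, w, h = 20, 600, 400, 150
chat_crop = frame[y:y + h, x:x + w]

# Light preprocessing often helps Tesseract on textured game backgrounds
gray = cv2.cvtColor(chat_crop, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

text = pytesseract.image_to_string(gray, config="--psm 6")  # psm 6: assume a uniform block of text
print(text)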
H: Adjusted Cosine Similarity With Zero Vectors
I am building a recommendation engine that finds item similarities from user ratings. I am trying to use adjusted cosine similarity to find the similarities. I follow these steps:
Find the mean rating of every item.
Subtract the mean rating from each item rating.
Apply cosine similarity.
My problem is at the second step. If all users give the same rating to an item, subtracting the mean rating from each rating creates a zero vector. Because these vectors appear in the denominator of the cosine similarity, this causes a division-by-zero error. Is there a solution for this?
AI: Welcome to the Data Science SE community :) A couple of questions regarding the OP:
In what percentage of cases does this division-by-zero problem occur? Could you compute the percentage of such data points in the entire dataset after step 2? Based on the percentage, you could consider dropping those points (if the percentage is very small), or think of other approaches to tackle the problem, which brings me to point nr. 2.
You might consider using a similarity measure that avoids this problem, e.g. https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html
This similarity score adds a small epsilon to avoid the division-by-zero error.
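For illustration, a minimal sketch of the epsilon idea, following the mean-centering steps described in the question (the rating vectors are made up):
import numpy as np

def centered_cosine(item_a, item_b, eps=1e-8):
    """Cosine similarity between two mean-centered item rating vectors.

    If every user gave an item the same rating, its centered vector is all zeros;
    the eps keeps the denominator positive, so such items simply get a similarity
    of 0 instead of raising a division-by-zero error.
    """
    a = item_a - item_a.mean()
    b = item_b - item_b.mean()
    return float(np.dot(a, b) / (max(np.linalg.norm(a), eps) * max(np.linalg.norm(b), eps)))

# item_b was rated 3 by everyone, so its centered vector is the zero vector
item_a = np.array([1.0, 4.0, 5.0])
item_b = np.array([3.0, 3.0, 3.0])
print(centered_cosine(item_a, item_b))  # 0.0 instead of an error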
H: Is backpropagation applied to every layer in the same way?
For example, I have some layers that are pretrained, but at prediction time the loss is very high, and not because of the pretrained layers: it is because of the layers that are not pretrained. Will every layer be affected by backprop in the same way?
AI: This depends on how you configure the training process:
You can, for instance, freeze the pretrained layers; this implies that only the non-pretrained layers will be updated.
You can also assign different learning rates to different layers, so that the pretrained layers get a very small learning rate that allows them to be updated, but not too fast.
Therefore, backpropagation itself is the same for all layers, but the weight-update strategy can be different.
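In PyTorch, for instance, both options look like the following minimal sketch (the two-part model and its sizes are made up for illustration):
import torch
import torch.nn as nn

# Hypothetical two-part model: a "pretrained" backbone and a freshly initialized head
backbone = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # imagine these weights were pretrained
head = nn.Linear(32, 2)

# Option 1: freeze the pretrained part completely and train only the head
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Option 2 (instead of option 1): train everything, but give the pretrained
# layers a much smaller learning rate via parameter groups
# optimizer = torch.optim.Adam([
#     {"params": backbone.parameters(), "lr": 1e-5},
#     {"params": head.parameters(), "lr": 1e-3},
# ])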
H: Clustering on Market-1501 dataset
I am trying to perform clustering on the Market-1501 dataset. The approach I am using is as follows:
I train a person re-identification model (using this repository: Reid-Strong-Baseline).
I use a version of depth-first search for clustering data (not part of the training set) into individual classes.
Although the Rank-1 and Rank-5 metrics of the ReID model are very good, the overall clustering result is rather disappointing. I am also struggling to find relevant literature that could help me. Does anyone have pointers on where I could at least find relevant literature (i.e. person ReID followed by clustering)? Thanks in advance.
PS: I have posted the same question on Stack Overflow, but thought this would be a more apt place for the discussion.
AI: Using the ReID output seems to be the right approach, because you tell the network what to learn, but you have to choose the right output: it should be something like a softmax activation result that gives scores for the different possible classifications.
Then you can use that output to fit dimensionality reduction algorithms like UMAP or t-SNE. They give good results because they are non-linear, i.e. they are able to capture complex correlations between features and cluster them.
Here is a playground: https://projector.tensorflow.org/
Here is an interesting example with fashion images: https://github.com/zalandoresearch/fashion-mnist
UMAP also has a reproducibility option: https://umap-learn.readthedocs.io/en/latest/reproducibility.html
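A minimal sketch of this reduce-then-cluster pipeline is shown below. The embedding matrix is random and its dimensions are made up; in practice it would be the per-image output of your ReID model, and eps/min_samples would need tuning on real data:
import numpy as np
import umap  # pip install umap-learn
from sklearn.cluster import DBSCAN

# Placeholder for one output vector per gallery image from the ReID model
embeddings = np.random.rand(500, 2048).astype("float32")

# Non-linear reduction before clustering; random_state makes the result reproducible
reducer = umap.UMAP(n_components=10, n_neighbors=15, min_dist=0.0, random_state=42)
reduced = reducer.fit_transform(embeddings)

# Cluster the reduced vectors into identities
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(reduced)
print(len(set(labels)) - (1 if -1 in labels else 0), "clusters found")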
H: Why does my mean average precision metric show as 0.000e+00?
I have an object detection model with my labels and images. I am trying to use the TensorFlow Ranking metric for MAP, https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/metrics/MeanAveragePrecisionMetric. The metric is used when I compile the model, but this is the result I get:
Epoch 2/220
92/92 [==============================] - 22s 243ms/step - loss: 0.0027 - mean_average_precision_metric: 0.0000e+00 - val_loss: 0.0019 - val_mean_average_precision_metric: 0.0000e+00
Epoch 3/220
92/92 [==============================] - 22s 245ms/step - loss: 0.0014 - mean_average_precision_metric: 0.0000e+00 - val_loss: 7.5579e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 4/220
92/92 [==============================] - 23s 247ms/step - loss: 8.7288e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 6.7357e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 5/220
92/92 [==============================] - 23s 248ms/step - loss: 7.3901e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 5.3464e-04 - val_mean_average_precision_metric: 0.0000e+00
My labels and images are all normalized according to my image dimensions:
train_images /= 255
val_images /= 255
test_images /= 255
train_targets /= TARGET_SIZE
val_targets /= TARGET_SIZE
test_targets /= TARGET_SIZE
model.compile(loss='mse', optimizer='adam', metrics=[tfr.keras.metrics.MeanAveragePrecisionMetric()])
Could I be using the metric the wrong way, or is it perhaps not meant for my data?
AI: I would look into whether your loss function is correct. Mean squared error is a regression loss, while (average) precision is a classification/ranking metric; something like categorical cross-entropy is probably more suitable if your targets are class labels. Either way, as a sanity check, you can always run the model for, say, 10 epochs, then run predictions and calculate the precision manually (or with sklearn's built-in method).
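As an example of that manual sanity check, a minimal sketch with sklearn is shown below. It assumes your model outputs per-class scores for binary ground-truth labels; the arrays here are made up. (A pure box-regression output trained with MSE has no class scores to rank, which is one plausible reason a ranking metric stays at zero.)
import numpy as np
from sklearn.metrics import average_precision_score

# Placeholder ground-truth labels and predicted scores for one class
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.8, 0.65, 0.3, 0.4])

print(average_precision_score(y_true, y_score))  # average precision for this class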