H: Top N accuracy for an imbalanced multiclass classification problem I have a multi-class classification problem that is imbalanced. The task is about animal classification. Since it's imbalanced, I am using the macro-F1 metric, and the current result that I have is 51.59. The issue that I am facing is that this task will be treated as a recommendation task, where Top-N accuracy is needed. When I compute the Top-N accuracy, I get the following: Top-1: 88.58, Top-2: 94.86, Top-3: 96.48. As you can see, the Top-N accuracy is totally biased toward the majority class, and the gap between the macro-F1 and Top-1 is big. My question is: how can I account for the class imbalance when I calculate the Top-N accuracy? AI: It sounds like your minority class is being poorly predicted and is dragging down your macro-F1 score (see this answer for more info). From sklearn's top_k_accuracy_score documentation, you can pass a list of weights to 'rebalance' the score: sample_weight: array-like of shape (n_samples,), default=None. Sample weights. If None, all samples are given the same weight.
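As a minimal sketch of how that sample_weight argument can be used to rebalance the metric (assuming y_true holds integer class labels and y_score the per-class predicted scores; inverse-frequency weighting is one possible choice, not the only one):

import numpy as np
from sklearn.metrics import top_k_accuracy_score

# weight each sample by the inverse frequency of its true class,
# so every class contributes equally to the metric (macro-style)
class_counts = np.bincount(y_true)
sample_weight = 1.0 / class_counts[y_true]

balanced_top2 = top_k_accuracy_score(y_true, y_score, k=2, sample_weight=sample_weight)
print(balanced_top2)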
H: Combining sklearn pipelines with different output shape As part of a data preprocessing step, I'm trying to create a "master pipeline" from two separate pipelines, one for numerical features and one for datetime features. The numerical pipeline removes outlier rows based on an IQR filter, whereas the datetime pipeline doesn't remove any rows, only feature engineers day of week. The issue arrives when I try to combine these into a master pipeline that performs both of these steps. I've tried using both ColumnTransformer and FeatureUnion, but both output the same error (7991 is the size of the output after removing numerical outliers, 13400 is the size output size of the datetime pipeline): ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 7991 and the array at index 1 has size 13400 These are my pipeline objects: class FeatureSelector(BaseEstimator, TransformerMixin): def __init__(self, feature_names): self.feature_names = feature_names def fit(self, X, y=None): return self def transform(self, X): return X[self.feature_names] class IQRFilter(BaseEstimator,TransformerMixin): def __init__(self,factor=2): self.factor = factor def outlier_detector(self,X,y=None): X = pd.Series(X).copy() q1 = X.quantile(0.25) q3 = X.quantile(0.75) iqr = q3 - q1 self.lower_bound.append(q1 - (self.factor * iqr)) self.upper_bound.append(q3 + (self.factor * iqr)) def fit(self,X,y=None): self.lower_bound = [] self.upper_bound = [] X.apply(self.outlier_detector) return self def transform(self,X,y=None): X = pd.DataFrame(X).copy() for i in range(X.shape[1]): x = X.iloc[:, i].copy() x[(x < self.lower_bound[i]) | (x > self.upper_bound[i])] = 'OUTLIER' X.iloc[:, i] = x return X class RemoveIQROutliers(BaseEstimator, TransformerMixin): def __init__(self): pass def fit(self, X, y=None): return self def transform(self, X): for col in X.columns: X = X[X[col] != 'OUTLIER'] return X class ExtractDay(BaseEstimator, TransformerMixin): def __init__(self): pass def is_business_day(self, date): return bool(len(pd.bdate_range(date, date))) def fit(self, X, y=None): return self def transform(self, X): X['day_of_week_wdd'] = X['wanted_delivery_date'].dt.dayofweek return X And these are my two pipelines: numerical_pipeline = Pipeline([ ('FeatureSelector', FeatureSelector(num_cols)), ('iqr_filter', IQRFilter()), ('remove_outliers', RemoveIQROutliers()), ('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler()) ]) date_pipeline = Pipeline([ ('FeatureSelector', FeatureSelector(date_cols)), ('Extract_day', ExtractDay()), ]) Trying to combine them like this causes the mentioned error message: full_pipeline = Pipeline([ ('features', FeatureUnion(transformer_list=[ ('numerical_pipeline', numerical_pipeline), ('date_pipeline', date_pipeline) ])) ]) full_pipeline.fit_transform(X_train) What is the correct way to go about this? AI: sklearn doesn't yet really provide a good way to remove rows in pipelines. SLEP001 proposes it. imblearn has some ways to make this work, but it's semantically specific to resampling data. If you don't need to modify the target (if you'll only use this transformer on X, and not in a pipeline with a supervised model), you can make this work. One more caveat: you probably won't want to throw away outliers in production, so consider how you'll rework this transformer after training. 
The point is that you should wait to remove the rows with OUTLIER entries until after you've joined the datetime features back on. (One alternative is to try to pass the information about which rows were removed to the datetime processor, but that would then require a custom alternative to FeatureUnion or ColumnTransformer.) Unfortunately, despite all of your custom transformers returning dataframes, the ways to recombine them (ColumnTransformer and FeatureUnion) won't preserve that yet (but see pandas-out PR and some linked issues/PRs). Until that's remedied, your best bet might be to modify your transformers to accept an __init__ parameter columns on which to operate, removing the FeatureSelector step. outlier_prune = Pipeline([ ('iqr_filter', IQRFilter(columns=num_cols)), ('remove_outliers', RemoveIQROutliers()), ]) # important: the output of this is a frame numerical_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler()) ]) preproc_pipeline = ColumnTransformer([ ('numerical_pipeline', numerical_pipeline, num_cols), ('date_eng', ExtractDay(), date_cols), ]) full_pipeline = Pipeline([ ('outliers', outlier_prune), ('preproc', preproc_pipeline), ])
H: Parallel Data preprocessing I am looking for a suggestion. Is it possible to implement data preprocessing steps like missing value imputation, outlier detection, normalization, and label encoding in parallel? Can I use CUDA/OpenMP/MPI programming for data preprocessing? Thank you. AI: Yes, there are a lot of approaches, depending on the language and packages you are using. Assuming Python: Multiprocessing: Dask, pool.map, Modin, pandarallel, Spark. GPU: cuDF from RAPIDS. Multi-GPU: Dask-cuDF. If you have an NVIDIA GPU, I would highly recommend the RAPIDS framework; it has plotting, machine learning, dataframes, etc.
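As a rough sketch of the pool.map option (the preprocess function and the column name "x" are placeholders for your own preprocessing logic):

import numpy as np
import pandas as pd
from multiprocessing import Pool, cpu_count

def preprocess(chunk: pd.DataFrame) -> pd.DataFrame:
    # example steps: impute missing values and standardize a numeric column;
    # note the statistics are computed per chunk, which only approximates the
    # global statistics; compute them once up front if exactness matters
    chunk = chunk.fillna(chunk.median(numeric_only=True))
    chunk["x"] = (chunk["x"] - chunk["x"].mean()) / chunk["x"].std()
    return chunk

if __name__ == "__main__":
    df = pd.DataFrame({"x": np.random.randn(1_000_000)})
    chunks = np.array_split(df, cpu_count())       # one chunk per CPU core
    with Pool(cpu_count()) as pool:
        df_processed = pd.concat(pool.map(preprocess, chunks))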
H: Multi head self attention output size for batches with different sequence length I have a question regarding the self attention layer of transformers. When dealing with sequences of varying lengths in a mini-batch, we pad sequences so that all sequences in the batch have the same length. Let's say that that most sequences in a dataset are < 500 elements long, but there are a few very long sequences that can be 1000s of elements long. If I want to handle those very long sequences without truncating, will the multi-head self attention layer's size have to be tailored to the longest possible sequence even when input batches don't contain any of the long sequences? AI: will the multi-head self attention layer's size have to be tailored to the longest possible sequence even when input batches don't contain any of the long sequences? No. The attention is automatically tailored to the length of the batch, not the maximum possible length. Furthermore, with a technique called “bucketing” you create batches with similar lengths to avoid wasting space of the batch with padding tokens. Deep learning frameworks like Tensorflow and Pytorch make it easy to add bucketing to your data loading logic. Original answer will the multi-head self attention layer's size have to be tailored to the longest possible sequence even when input batches don't contain any of the long sequences? Yes. However, you normally use "bucketing". This technique consists of creating batches with similar lengths, to avoid wasting space of the batch with padding tokens. Deep learning frameworks like Tensorflow and Pytorch make it easy to add bucketing to your data loading logic.
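For illustration, here is a minimal sketch of bucketing in PyTorch (the sequences list of 1-D tensors is a placeholder for your own data; a real data loader would also shuffle the order of the buckets each epoch so training is not biased by length):

import torch
from torch.nn.utils.rnn import pad_sequence

def bucketed_batches(sequences, batch_size):
    # sort indices by sequence length so each batch groups similar lengths
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    for start in range(0, len(order), batch_size):
        batch = [sequences[i] for i in order[start:start + batch_size]]
        # pad only up to the longest sequence in this batch, not the global maximum
        yield pad_sequence(batch, batch_first=True, padding_value=0)

# example usage with toy data of very different lengths
sequences = [torch.randint(0, 100, (n,)) for n in (30, 500, 45, 1200, 60)]
for batch in bucketed_batches(sequences, batch_size=2):
    print(batch.shape)   # (2, 45), (2, 500), (1, 1200)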
H: How to create custom action space in openai.gym I am trying to upgrade code for custom environment written in gym==0.18.0 to latest version of gym. My current action space and observation space are defined as self.observation_space = np.ndarray(shape=(24,)) self.action_space = [0, 1] I understand that in the new version the spaces have to be inherited from gym.spaces class. Can someone help me on how to rewrite my spaces (observation/action) to implement the gym.spaces? Thanks AI: In the case of a 1D observation space, it could be something like: self.observation_shape = (24, 1, 3) self.observation_space = spaces.Box(low = np.zeros(self.observation_shape), high = np.ones(self.observation_shape),dtype = np.float16) self.action_space = spaces.Discrete(3,) See also: https://blog.paperspace.com/creating-custom-environments-openai-gym/
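If you prefer spaces that mirror the original definitions more literally (a 24-dimensional observation vector and two discrete actions), a minimal sketch might be the following; adjust the bounds if the actual value ranges of the observations are known:

import numpy as np
from gym import spaces

# inside the environment's __init__
self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(24,), dtype=np.float32)
self.action_space = spaces.Discrete(2)   # the two actions 0 and 1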
H: How do I find out which metadata is affecting/influencing the classification model? I'm working with genomics data; I have a multi-class label with a matrix of numeric values (rows are the samples). Each sample may have different metadata which are not being used for training nor testing. For example, each sample may be treated with a dosage value of 50 or 100, etc. The classification model works well using lda or rf. I am open to use any. I have about four (dosage, tissue, etc) of these metadata and would like to know which of them are influencing the model and by how much. AI: 3 options come to mind that address your problem directly, in prority order: Add the meta data as features to your dataset, if the feature importance is high for those features - then you are proving some relationship. Treat the meta data as targets, "can you use your current features to predict this?" Plot your results by meta data feature. I would start with a parallel coordinates plot: https://plotly.com/python/parallel-coordinates-plot/ Where each label is a meta data feature + one of the labels being your target (see plot below) In addition to this, you can run a large number of statistical tests to quantify this relationship. Dummy code for plot: import plotly.express as px df = px.data.iris() fig = px.parallel_coordinates(df, color="species_id", labels={"species_id": "Target", "sepal_width": "dosage", "sepal_length": "tissue", "petal_width": "day_of_week", "petal_length": "time", }, color_continuous_scale=px.colors.diverging.Tealrose, color_continuous_midpoint=2) fig.update_layout(font=dict(size=22)) fig.show()
H: Is there a machine learning model that is able to take reviews as input and output a new and unique blog article from them? I am looking for a machine learning model ideally with inference speeds of no longer than a few minutes that is able to take in n human written reviews and output a blog article from them. The model would need to be pre-trained or if it does need training using the reviews then the training would not take more than a few minutes on a modern single GPU machine. Can someone point me to such open-source projects? AI: There is a solution that could work well. It requires minimum effort but has to be tested. If you take several reviews and you group all first paragraphs together, then the second ones, etc. and you apply an efficient summarization model, you should have the essence of all reviews. The model would recognize the most frequent patterns in an organized way and do the job. It could work with sentences instead of paragraphs. If the articles have very different sizes, you can use summarization for each of them to 10 sentences, and then apply the process described above. Note: You can't do this with full reviews next to each other because the model would not recognize the beginning from the end for each of them. https://huggingface.co/facebook/bart-large-cnn Other models: https://huggingface.co/models?pipeline_tag=summarization&sort=downloads
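A minimal sketch with the Hugging Face pipeline API and the model linked above (the reviews list is a placeholder; note that BART accepts only about 1024 tokens of input, so very long groups should themselves be summarized in pieces first, as described above):

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

reviews = ["First review text ...", "Second review text ...", "Third review text ..."]
grouped = " ".join(reviews)   # or group paragraph-by-paragraph as described above
summary = summarizer(grouped, max_length=130, min_length=30, do_sample=False)
print(summary[0]["summary_text"])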
H: How can I do mathematical operations to two columns of a CSV file and save the result in a new CSV file? I have a CSV file (such as test1.csv). There are tabular values like the following. S1 S2 S3 4.6 3.2 2.1 3.2 4.3 5.4 1.4 3.4 6.1 I want to do mathematical operations, such as R1=(S1+S2)/1.5 and R2=(S2+S3)/2.5. Then I want to save the results R1 and R2 in a new CSV file (such as test2.csv). I tried with the following code. That does not work. import pandas as pd df = pd.read_csv('test1.csv') df2['R1'] = (df['S1'] + df['S2'])/1.5 df2['R2'] = (df['S2'] + df['S3'])/2.5 df2.to_csv('test2.csv') AI: You did not define df2 before attempting to use it. Try this: import pandas as pd df = pd.read_csv('test1.csv') df2 = pd.DataFrame({}) df2['R1'] = (df['S1'] + df['S2'])/1.5 df2['R2'] = (df['S2'] + df['S3'])/2.5 df2.to_csv('test2.csv')
H: How to calculate accuracy of an imbalanced dataset I like to understand what is the accuracy of an imbalanced dataset. Let's suppose we have a medical dataset and we want to predict the disease among the patients. Say, in an existing dataset 95% of patients do not have a disease, and 5% patients have disease. So clearly, it is an imbalanced dataset. Now, assume our model predicts that all 100 out of 100 patients have no disease. Accuracy means = (TP+TN)/(TP+TN+FP+FN) If the model predicts 100 patients do not have a disease and we are predicting disease among the patient then True positive refers to the disease among the patient and True negative refers to no disease among the patient. In that case accuracy should be (0+100)/(0+100+0+0) = 1. We are going to predict how many patients have a disease so if we get accuracy 1, does that mean 100% of patients have the disease? I am taking the example from 5 Techniques to Handle Imbalanced Data For a Classification Problem . I am not sure at the time of accuracy calculation why they calculate it as (0+95)/(0+95+0+5) = 0.95, if they have already described that their model predicts all 100 out of 100 patients have no disease. I hope I clarified my question. Thank you. AI: Accuracy is the number of correct predictions out of the number of possible predictions. In many regards, it is like an exam score: you had an opportunity to get $100\%$ of the points and got $97\%$ or $79\%$ or whatever. The class ratio is not a factor. In your example, you had $95$ negative patients and $5$ positive. You predicted $100$ negative patients, meaning that you got $95$ correct and $5$ incorrect for an accuracy of $95\%$. Note that accuracy is a surprisingly problematic measure of performance, and this is true even when the classes are naturally balanced. With imbalance, however, accuracy has the potential to mislead in a way that is not present in many other measures of performance, and your example is a good demonstration of that. All your model does is predict the majority class; it does nothing clever. However, your model achieves an accuracy of $95\%$, which sounds like a high $\text{A}$ in school that indicates strong performance.
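To see the difference numerically, here is a small sketch of the example with scikit-learn: 95 patients without the disease, 5 with it, and a model that always predicts "no disease".

import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = np.array([0] * 95 + [1] * 5)   # 0 = no disease, 1 = disease
y_pred = np.zeros(100, dtype=int)       # the model predicts "no disease" for everyone

print(accuracy_score(y_true, y_pred))            # 0.95
print(balanced_accuracy_score(y_true, y_pred))   # 0.5, i.e. no better than per-class guessing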
H: Perform k-means clustering over multiple columns and get the cluster center values? I read here how to show the number of clusters over $n$ columns. I would like to know how to get the values of the cluster centers in a table. Could someone help me with this? AI: sklearn.cluster.KMeans has an attribute cluster_centers_, which stores the array of cluster centers. You can add them to the dataframe as new columns this way (each row receives the center of the cluster it was assigned to):
import pandas as pd
from sklearn.cluster import KMeans

clusters = KMeans(n_clusters=n)
predict = clusters.fit_predict(data)
centers = pd.DataFrame(clusters.cluster_centers_[predict, :])
centers.index = data.index
data = pd.concat([data, centers], axis=1)
H: Precision, recall and importance of them in the imbalance problem I have a test dataset. The dataset is imbalanced. The total number of test instances is 543; among them the minority class (yes) has 75 instances and the majority class (No) has 468. The class of interest is the minority class (yes). I used the Naive Bayes classifier for prediction. The confusion matrix I got is: TP = 33, TN = 391, FP = 77, FN = 42. The total number of instances for the No class is 468, and the classifier truly predicted 391 instances as negative. However, the total number of instances the classifier predicted as negative is 391+42 = 433; those 42 false negatives are actually of the positive class, but the classifier predicted them as negative. Am I right with this explanation? Secondly, the classifier predicted 33 instances as true positives. However, the total number of positive predictions is TP+FP = 33+77 = 110, and these false positives are actually of the negative class. So, if I calculate TP+FN I get 33+42 = 75, which is the total number of positive instances in the test set. If I calculate TN+FP I get 391+77 = 468, which is the total number of negative instances in the test set. Now, precision is True positive/(True positive + False positive). As I have mentioned earlier, a false positive is nothing but a negative instance, so my question is: what does precision actually mean? Recall is True positive/(True positive + False negative). As I have mentioned earlier, false negatives are positive instances, and (True positive + False negative) is the total number of positive instances. Now, what does True positive / Total number of positive instances mean? Lastly, in the class imbalance problem, if the majority class is our class of interest, which metric (precision or recall) should we consider? Thank you. AI: The total number of instances for the No class is 468, and the classifier truly predicted 391 instances as negative. However, the total number of instances the classifier predicted as negative is 391+42 = 433; those 42 false negatives are actually of the positive class, but the classifier predicted them as negative. Am I right with this explanation? Yes, this is correct. Now, precision is True positive/(True positive + False positive). As I have mentioned earlier, a false positive is nothing but a negative instance, so my question is: what does precision actually mean? Precision represents the proportion of correct instances among the instances predicted as positive. In other words, this is the probability that a case predicted positive is truly positive. Recall is True positive/(True positive + False negative). As I have mentioned earlier, false negatives are positive instances, and (True positive + False negative) is the total number of positive instances. Now, what does True positive / Total number of positive instances mean? Recall is the proportion of instances predicted positive by the system among all the truly positive instances. In other words, it represents the probability that the system correctly "finds" that an instance is positive. Lastly, in the class imbalance problem, if the majority class is our class of interest, which metric (precision or recall) should we consider? It's pretty rare that the majority class is of interest; usually the minority class is chosen as the positive class. But anyway this wouldn't change the answer: one should use precision and recall (or F1-score if a single value is needed), but in this case one should use a higher precision (number of digits after the comma).
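Plugging the numbers from the confusion matrix above into these definitions makes them concrete:

TP, TN, FP, FN = 33, 391, 77, 42

precision = TP / (TP + FP)   # 33 / 110 = 0.30: 30% of predicted positives are truly positive
recall = TP / (TP + FN)      # 33 / 75 = 0.44: 44% of the truly positive cases are found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean, about 0.36

print(precision, recall, f1)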
H: RNN for continuous, real-time learning without pre-training I am learning ML and I'm trying to solve this problem Create a rock paper scissors game where the AI is able to beat the player more than 50% of the time. My initial intuition was to use an RNN with an LSTM. I imagined my AI analyzing the history of the moves made by P1 and guessing what the next one might be. Now, here's my problem. I did not want to come up with a bunch of data to train the model with beforehand but rather, have the model learn in real-time how each player plays and gradually get better during the game until it can reasonably well predict that player's moves. After struggling for a while thinking about this, I googled the prompt and read up on others' solutions. All 3 I looked at went with the route of pre-training. Creating complex algorithms that could generate a semi-random sequence of moves (r,p,s,s,p,etc..) and using that to train the model. My concern with doing it this way is that it causes a heavy reliance on prior data to be good vs. how the player in front of me is playing right now. In order for my AI to be good, I would need to spend a significant amount of time creating (broad) training data to train it instead of learning from the player I'm up against. Is this the way things must be done? Is what I'm envisioning possible, and is that the right approach to solve the problem? AI: I would look into the Markov Chain This is quite a commonly solved problem with many approaches.
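A minimal sketch of the Markov-chain idea, learning online from the opponent's history with no pre-training (the move encoding and helper names are illustrative choices, not a reference implementation):

from collections import defaultdict
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}   # what beats each move

transitions = defaultdict(lambda: defaultdict(int))   # counts of (previous move -> next move)
history = []

def ai_move():
    if not history:
        return random.choice(list(BEATS))
    counts = transitions[history[-1]]
    if not counts:
        return random.choice(list(BEATS))
    predicted = max(counts, key=counts.get)   # opponent's most likely next move
    return BEATS[predicted]                   # play the counter to that prediction

def observe(opponent_move):
    if history:
        transitions[history[-1]][opponent_move] += 1
    history.append(opponent_move)

Conditioning on the last two moves instead of one (a second-order chain) tends to track patterned human play better, at the cost of needing more observations.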
H: Predicting a real number in a fixed range Suppose we need to predict a real number in fixed range, for example, [0 .. 5], and our Y can be 3.14, 2.4654 etc. What is the name of this kind of tasks (to be able to search further) and what are the approaches to solve this problem? AI: This is a regression problem with a "limited dependent variable". One very common approach is to use a sigmoid transformation as the final step in your model. For example, the logistic transform $f(x) = \frac{5}{1 + e^{-x}}$ is constrained to the interval [0, 5].
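A tiny numeric sketch of that transform in plain NumPy, with 5 as the upper bound of the target range:

import numpy as np

def bounded_output(x, upper=5.0):
    return upper / (1.0 + np.exp(-x))   # maps any real number into (0, upper)

print(bounded_output(np.array([-10.0, 0.0, 10.0])))   # roughly [0.0002, 2.5, 4.9998]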
H: Understanding MDP variants and "model-free" RL algorithms RL is based on MDPs. But MDPs have other useful variants such as Semi MDP (variable time), POMDP (partially observable states) etc. Some industrial problems seem to be better suited for SMDP and/or POMDPs. For example optimizing maintenance, the time between maintenance events is variable. And, some states of equipment may not available for measurement directly -- hence we have "partially observable" states. If I decide to use a model-free algorithm such as DDPG or PPO, the theory of which I think is based on MDP -- am I comprising on the state or model definition? Will it create a less efficient agent? Of course I may not have a choice as creating an accurate environment is out of question. But just trying to understand. Appreciate your thoughts. AI: The performance of a classical agent strongly depends on how close the observation approximates the true state of the environment. The state is sometimes defined as all we need to know to model the dynamics forward in time. Therefore, the closer it is possible to infer the dynamics of the environment with the information provided, the better the performance. A common case study in RL is Atari. There, one observation (pixel image) alone would be insufficient to make progress at all. But if you stack consecutive images together, then this can provide sufficient information for an agent developed for MDPs like DQN to learn games like Breakout, Space Invaders or Pong. Now, this is not enough for games that require memory, like Montezuma's Revenge, which would be considered as a POMDP. To address partial observability, a general idea is to use a recurrent neural network (often LSTM or GRU) instead of a feed-forward network in the actor and critic to give the agent additional context if required to make good decisions. The use of an LSTM, plus additional implementation tricks, can give strong results in all Atari games (see R2D2). Also, essentially PPO with a (huge) LSTM was also able to reach super-human performance in DOTA2, see OpenAI Five. To take an example in robotics deployed to the real world. Assume we know the positions and velocities of all the joints. Then this information is still partial in practice, as we ignore other factors (like friction, wind speed, terrain). However, it would perform still be able to perform decently. If we had only the positions, then we may be able to infer the velocities by using an LSTM and make it work. If we had only access to the information of one joint, it would probably never work.
H: Is R programming a good way to start with Data Science? I am a software engineer (currently CTO) specialized on web and mobile applications picking up data science skills. I do this mainly for future projects within my startup that works in digital healthcare. For this I have started to learn via Coursera, specifically with this John Hopkin's Data Science specialization which uses R programming as a base. During these years I've seen how Python is normally the programming language that gets more associations with Data Science and ML and I am now hesitating whether to continue with this or pick another one that uses Python. Is R better than Python to be future ready? I'd like to avoid having to pick up another skill later on because the first one was not sufficient. What do you guys think? Thank you in advance. AI: Your question carries the risk to attract opinion- rather than fact-based answers. However, here are a couple of hard facts: Going by popularity, the State of Data Science 2021 report provides a relatively clear answer: According to their survey, Python is by far the most popular language in Data Science. Moreover, when compared to R specifically, Python has the advantage of being a general programming language. Another advantage may be its general popularity as it is currently the most popular programming language according to this source, i.e. you may benefit from learning Python beyond Data Science. In contrast, the Stackoverflow developer survey (not Data Science specific) ranks it below JavaScript, HTML and SQL - but still well ahead of R. In summary, Python appears to be the by far most popular language for Data Science and is also generally one of the or the most popular language. Therefore, going by popularity Python is the better choice.
H: Average number of records by ID I have a dataframe with IDs and booking refs, looking like the simplified example below. ID BookingRef 001 2019/32323 002 2011/23232 002 2017/7u4922 In the above example, 001 has one booking and 002 has two bookings in total so the average number of bookings for customers is 1.5. How could I calculate this for millions of records using python and pandas? AI: You can use the groupby method to group the dataframe by ID, then size() to count the number of rows for each ID. Then use the mean function to get the average: df.groupby('ID').size().mean()
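Checking this against the small example above:

import pandas as pd

df = pd.DataFrame({"ID": ["001", "002", "002"],
                   "BookingRef": ["2019/32323", "2011/23232", "2017/7u4922"]})
print(df.groupby("ID").size().mean())   # 1.5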
H: Threshold determination / prediction for cosine similarity scores Given a query sentence, we search and find similar sentences in our corpus using transformer-based models for semantic textual similarity. For one query sentence, we might get 200 similar sentences with scores ranging from 0.95 to 0.55. For a second query sentence, we might get 200 similar sentences with scores ranging from 0.44 to 0.27. For a third query sentence, we might only get 100 similar sentences with scores ranging from 0.71 to 0.11. In all those cases, is there a way to predict where our threshold should be without losing too many relevant sentences? Having a similarity score of 1.0 does not mean that two documents are 2X more similar than if the score was 0.5. Is there a way to determine the topk (how many of the top scoring sentences we should return) parameter? AI: As far as I know there is no satisfactory answer: One uses a threshold in order to avoid having to choose a specific K in a top K approach. The threshold is often selected manually to eliminate the sentences which are really not relevant. This makes this method more suitable for favouring recall, if you ask me. Conversely, one uses a "top K" approach in order not to select a threshold. I think K is often selected quite low in order to keep mostly relevant sentences, i.e. it's an approach more suitable for high precision tasks. The choice depends on the task: First, the approach could be chosen based on what is done with the selected sentences: if it's something like question answering, one wants high precision usually. If it's information retrieval, one wants high recall. If it's a search engine, just rank the sentences by decreasing similarity. Then for the value itself (K or threshold), the ideal case is to do some hyper-parameter tuning. i.e. testing multiple values and evaluate the results. If this is convenient or doable for the task, then look at a few examples and manually select a value which looks reasonable.
H: Online Learning/Continual Learning for tree-based Algorithms Every example I come across of any kind of iterative learning on Random Forest/XGBoost/LightGBM just continuously grows the number of estimators for new batches of data by n_tree/n_estimators/num_boost_rounds every time .fit() gets applied [...]. Most of them seem to rely on iterative learning for training on very large datasets that can't be loaded into memory at once. However, I want to implement a continuous learning pipeline (with LightGBM; Python) that takes newly available data on a daily basis in order to update an existing model (without the need to retrain on the whole [growing] dataset; stateful). The initially mentioned approach would imply that my model's tree count keeps increasing. Is it possible to train tree-based algorithms so that the estimators (split thresholds) themselves get updated/adjusted, in contrast to only adding estimators? AI: This is a really good question for which I will give you a theoretical result; in particular, I am not aware of any specific implementation in any programming language. The concept of incremental learning with decision trees started in 1986 to enhance the ID3 learning algorithm to learn continually/incrementally (recall that ID3 deals only with categorical input features/variables); the resulting procedure is called ID4. Some years later, Utgoff et al. proposed two other approaches (ID5 and ITI) to overcome some of the shortcomings of the ID4 approach. There is also a nice Wikipedia page that you might read. EDIT There are also works on online boosting and bagging, which I was not aware of either, but your question taught me something new; thanks for that (+1)!
H: Accent on special words or characters can affect machine learning algorithms? Good evening, I wanted to ask whether accents on words or special characters can affect machine learning algorithms. I am working on a project and would like a recommendation for an article or book that I can use as a reference. AI: The short answer is yes. The long answer is the following. Machine learning (ML) algorithms are designed to build models from data, but there is a general motto among ML practitioners: "garbage in, garbage out." That said, if you have trained your model in an intelligent way to distinguish between strange accents on words, then testing the resulting model on sentences with such accents is a good practice; in particular, I'm thinking of Spanish and Italian words, for instance, where a missing accent could give a completely different meaning to such sentences. Nevertheless, a more practical approach in the field of natural language processing (NLP) is to perform a text normalization (e.g., lemmatization) of your text before learning, since specific words are less frequent than more general lemmas. For your specific task, there is a post with Python code that can remove accents from words, if you are willing to drop the accents entirely. I hope that this gives you more insight into the complexity of NLP tasks. EDIT There are papers on accent-related research, such as 1, 2, and 3, among many others.
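For the accent-removal step itself, one common standard-library approach is the following sketch; note it is destructive, which matters for the languages mentioned above where accents change meaning:

import unicodedata

def strip_accents(text: str) -> str:
    # decompose characters (e.g. 'é' -> 'e' + combining accent), then drop the non-ASCII marks
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(strip_accents("papá compró café"))   # "papa compro cafe"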
H: Why use cosine similarity instead of scaling the vectors when calculating the similarity of vectors? I'm watching a NLP video on Coursera. It's discussing how to calculate the similarity of two vectors. First it discusses calculating the Euclidean distance, then it discusses the cosine similarity. It says that cosine similarity makes more sense when the size of the corpora are different. That's effectively the same explanation as given here. I don't see why we can't scale the vectors depending on the size of the corpora, however. For example in the example from the linked question: User 1 bought 1x eggs, 1x flour and 1x sugar. User 2 bought 100x eggs, 100x flour and 100x sugar User 3 bought 1x eggs, 1x Vodka and 1x Red Bull Vector 1 and 2 clearly have different norms. We could normalize both of them to have length 1. Then the two vectors turn out to be identical and the Euclidean distance becomes 0, achieving results just as good as cosine similarity. Why is this not done? AI: Let $u, v$ be vectors. The "cosine distance" between them is given by $$d_{\cos}(u, v) = 1 - \frac{u}{\|u\|} \cdot \frac{v}{\|v\|} = 1 - \cos \theta_{u,v},$$ and the proposed "normalized Euclidean distance" is given by $$d_{NE}(u, v) = \left\| \frac{u}{\|u\|} - \frac{v}{\|v\|} \right\| = d_E(\frac{u}{\|u\|}, \frac{v}{\|v\|}).$$ By various symmetries, both distance measures may be written as a univariate function of the angle $\theta_{u,v}$ between $u$ and $v$. [1] Let's then compare the distances as a function of radian angle deviation $\theta_{u,v}$. Evidently, they both have the same fundamental properties that we desire -- strictly increasing monotonicity for $\theta_{u,v} \in [0, \pi]$ and appropriate symmetry and periodicity across $\theta_{u,v}$. Their shapes are different, however. Euclidean distance disproportionately punishes small deviations in the angles larger than is arguably necessary. Why is this important? Consider that the training algorithm is attempting to reduce the total error across the dataset. With Euclidean distance, law-abiding vectors are unfairly punished ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi/12) = 0.125$), making it easier for the training algorithm to get away with much more serious crimes ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi) = 1.000$). That is, under Euclidean distance, 8 law-abiding vectors are just as bad as maximally opposite-facing vectors. Under cosine distance, justice is meted out with more proportionate fairness so that society (the sum of error across the dataset) as a whole can get better. [1] In fact, $d_{\cos}(u, v) = \frac{1}{2} (d_{NE}(u, v))^2$.
H: How to Combat Data Drift I have customer demographic data that include columns like: age, the first half of the postcode, occupation (there is a defined list of possible occupations), and more. Each month I get a new batch of 1000 rows of this type of data (which is not labelled) and I need to put this into my trained model to predict what item (out of 5 items) each person in the new batch data set is most likely going to buy (a multiclass classification problem). Each time I receive this data, I compare the summary statistics between the old and new data, and investigate any changes in the distribution of the categorical variables using hypothesis testing. If my tests show that my new batch of data had vastly different summary stats or distributions to my training set, e.g. the new batch targeted people under 25 only, whereas my training set contains all age groups, or the new batch targeted people from a specific area of the UK, whereas my training set contains all possible locations in the UK, would I need to make any changes to my training set, or my overall workflow, to adjust for this? As far as I know, this is data drift. Am I correct in saying that? If the batch data coming in was labelled, so we knew what items these people bought, and there was a sizable difference in the proportion of each product sold, what could I do to quantify this instead of naively adding this new data to the training set and retraining my model? Thanks AI: As you suggest, that situation could end up with your monitoring system indicating data drift. To evaluate this scenario, let's classify some types of data drift we could have: feature drift, when the distribution of the input features (comparing training datasets vs prediction datasets) changes enough (with a defined threshold) to raise an alert; target drift, when the distribution of the label values changes between training and prediction; and concept drift, when the relation between the input features and target values changes. Concept drift can arise when the label is redefined (for instance, the business rules for deciding which clients are active or inactive with some products; if the labeling rules change, the same input feature values could be assigned to different target values before vs after the redefinition). The way to monitor these drifts can be via hypothesis testing, as you say (e.g. the Kolmogorov-Smirnov test, the Population Stability Index, etc.), where you define the degree of the warning thresholds. The point is: what is the goal of having this drift monitoring? In general, the advice is to retrain your models when this type of drift occurs: it might or might not improve your model performance, but you make sure to have a model updated with fresh data. Another goal is to keep up-to-date knowledge of your data statistics, of course. Nevertheless, in this scenario of a subset of client ages, although the model was trained with a more "complete" dataset, you are making inference on a subset of the whole population used for training the model, so your model could still be valid enough (unless this new scenario becomes the usual one, in which case a more custom model could be trained with this new kind of more specific data).
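As a small sketch of the hypothesis-testing part (train_feature and new_batch_feature are placeholder 1-D arrays holding the same numeric feature from the training data and the new batch; for categorical columns such as occupation, a chi-squared test on the two frequency tables, e.g. scipy.stats.chi2_contingency, is the analogous check):

from scipy.stats import ks_2samp

stat, p_value = ks_2samp(train_feature, new_batch_feature)
if p_value < 0.05:   # the threshold is a monitoring choice, not a law
    print(f"Possible drift in this feature (KS statistic={stat:.3f}, p={p_value:.4f})")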
H: How to stop my CNN getting confused between 3s and 8s, and 1s and 7s? I am trying to train a CNN, using the MNIST dataset (which I perform data augmentation on), to classify numbers on a sudoku grid from 0-9. While mostly successful, my network seems to get confused between 3s and 8s, and 1s and 7s because of how similar they look. This is unacceptable, however, since incorrect classification will make solving the sudoku problem impossible. I am using the ResNet50 pre-trained model as my convolutional base. Is there any way to more harshly penalise mis-classification of 3s and 8s, or 1s and 7s during the training process? I did find a link which seems to propose a solution - https://github.com/keras-team/keras/issues/2115 - but I am quite new to TensorFlow/Keras, and don't quite understand the code given. I would very much appreciate if anyone could either explain the proposed solution on the link above, or suggest improvements to my network/code (which I have added below). THE NETWORK: import cv2 as cv import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data() # To make the model work with grayscale images, we need to make them APPEAR to be RGB. The easiest way is to repeat the image array 3 times on a new dimension. # Because we will have the same image over all 3 channels, the performance of the model should be the same as it was on RGB images. train_images, test_images = np.repeat(train_images[..., np.newaxis], 3, -1), np.repeat(test_images[...,np.newaxis], 3, -1) # Normalize pixel values to be between 0 and 1 train_images, test_images = train_images / 255.0, test_images / 255.0 # Resize the input shape because ResNet50 can take the input image having height, width as multiples of 32 and 3 as channel width train_images, test_images = tf.image.resize(train_images, [32,32]), tf.image.resize(test_images, [32,32]) assert train_images.shape == (60000, 32, 32, 3) # All images are 28x28 - no resizing needed assert test_images.shape == (10000, 32, 32, 3) assert train_labels.shape == (60000,) # Labels - numbers from 0 to 9 assert test_labels.shape == (10000,) # Labels - numbers from 0 to # Data augmentation parameters rotation_range_val = 10 # rotation width_shift_val = 0.1 # horizontal shift height_shift_val = 0.1 # vertical shift zoom_range_val=[0.8,1.3] # zoom train_datagen = tf.keras.preprocessing.image.ImageDataGenerator( rotation_range = rotation_range_val, width_shift_range = width_shift_val, height_shift_range = height_shift_val, zoom_range = zoom_range_val, ) train_datagen.fit(train_images) # Pretrained convolutional base base_model = tf.keras.applications.ResNet50(input_shape = (32,32,3), include_top=False, weights='imagenet') # Freezing the base base_model.trainable = False # Adding dense layers on top classifier_model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), # hidden layer tf.keras.layers.Dense(10) # output layer ]) # Combine base and classifier model = tf.keras.Sequential([ base_model, classifier_model ]) # Compile the model model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) # Fits the model on batches with real-time data augmentation history = model.fit(train_datagen.flow(train_images, train_labels, batch_size=64), epochs=20, verbose=2) # If unspecified, batch size defaults to 
32. plt.plot(history.history['accuracy'], label='accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.ylim([0.5, 1]) plt.legend(loc='lower right') plt.show() test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print(test_acc) model.save('C:\Programming 2022-23\OpenCV\Sudoku Solver\ResNet50_model_with_augmentation') HOW I AM USING THE NETWORK TO CLASSIFY: import cv2 as cv import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_datasets as tfds tfds.disable_progress_bar() localised_grid = np.load("C:\Programming 2022-23\OpenCV\Sudoku Solver\Localised sudoku grid.npy") # Visualising the gridlines output_visualise = np.copy(localised_grid) for i in range (0,500,50): cv.line(output_visualise, (i,0), (i,450),(255,0,0), thickness = 3) # Drawing vertical lines cv.line(output_visualise, (0,i), (450,i),(255,0,0), thickness = 3) # Drawing horizontal lines #cv.imshow("Gridlines", output_visualise) # MNIST dataset contains black and white images, so we threshold the images localised_grid_gray = cv.cvtColor(localised_grid, cv.COLOR_BGR2GRAY) adaptive_thresh = cv.adaptiveThreshold(localised_grid_gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY_INV, 11, 2) for i in range (0,2): adaptive_thresh = cv.medianBlur(adaptive_thresh, 3) cv.imshow("Thresholded", adaptive_thresh) sudoku_grid_stored = np.empty((9,9,46,46)).astype("uint8") for y in range(0,9): for x in range(0,9): cropped_digit = adaptive_thresh[50*y+2:(50*y)+48,50*x+2:(50*x)+48].astype("uint8") # Crop slightly to remove any sudoku grid outlines that may exist sudoku_grid_stored[y][x] = cropped_digit # cv.imshow("7", sudoku_grid_stored[2][0]) # 7 lies in 3rd row, 1st column (zero indexed) # Inputs to CNN require must have three colour channels sudoku_grid_stored = np.repeat(sudoku_grid_stored[..., np.newaxis], 3, -1) # Inputs to CNN must be normalised between 0 and 1, since this is how it was trained sudoku_grid_stored = sudoku_grid_stored/255.0 # Inputs to CNN must be 32x32, since this is how it was trained sudoku_grid_resized= np.empty((9,9,32,32,3)) for j in range(0,9): for i in range (0,9): sudoku_grid_resized[j][i] = cv.resize(sudoku_grid_stored[j][i], (32,32)) # Load trained CNN CNN_model = tf.keras.models.load_model('C:\Programming 2022-23\OpenCV\Sudoku Solver\ResNet50_model_with_augmentation') test = sudoku_grid_resized[0][0] test = np.expand_dims(test,axis=0) prediction = np.argmax(CNN_model.predict(test)) print(prediction) RESULT OF THRESHOLDING (I.E. INPUTS TO CNN) - for example, the top left value of 3 is being read as 8. AI: Is there any way to more harshly penalise mis-classification of 3s and 8s, or 1s and 7s during the training process? Yes by setting weights for the loss function but this would likely lead to lower scores in the other numbers. Firstly I note you have just train and test. Standard practise is to do the split as train validation and test. (usually by splitting the training into train and validation) Then you can perform cross validation. Additionally you should stratify (have same representation of classes) in each dataset. 
As to your main question, several approaches: more augmentation, including elastic distortion, brightness changes and cutouts; cross-validation; early stopping (potentially your model is overfitting the training data); ensembling more neural networks; data exploration, i.e. see which images are failing in validation and look for patterns; and test-time augmentation, i.e. perform augmentation on your validation (and test) images and take an average.
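Regarding weighting the loss: a minimal sketch using Keras's class_weight argument (the weight values here are arbitrary illustrations; also note that class weights up-weight whole classes rather than specific confusions such as 3 vs 8, and depending on your TensorFlow version you may need to switch to explicit sample weights when feeding data through a generator):

# penalise mistakes on the digits that get confused twice as heavily as the rest
class_weight = {i: 1.0 for i in range(10)}
for digit in (1, 3, 7, 8):
    class_weight[digit] = 2.0

history = model.fit(train_datagen.flow(train_images, train_labels, batch_size=64),
                    epochs=20,
                    class_weight=class_weight,
                    verbose=2)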
H: Detrend a time series I am fairly new to forecasting and I am trying to create a demand forecast for my organization; I am following the methodology outlined here. In step 12 of the process, the author subtracts the trend value from observed value to detrend the data (code shown below): # Using statmodels: Subtracting the Trend Component. from statsmodels.tsa.seasonal import seasonal_decompose df = pd.read_csv('https://raw.githubusercontent.com/selva86/datasets/master/a10.csv', parse_dates=['date'], index_col='date') result_mul = seasonal_decompose(df['value'], model='multiplicative', extrapolate_trend='freq') detrended = df.value.values - result_mul.trend My issue is with the last line. If the decomposition is done multiplicatively, why is the trend value be subtracted. Should it not be divided (instead of subtracted)? If Multiplicative Time Series is given by: Value = Base Level * Trend * Seasonality * Error, has the author made a mistake or is subtraction the only way to detrend the data? AI: Yes the author has made a mistake. The trend needs to be divided from the multiplicative time series while de-trending it. In the next code section he has de-seasonalized correctly: # Deseasonalize deseasonalized = df.value.values / result_mul.seasonal
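In code, the corrected detrending step for the multiplicative decomposition is division rather than subtraction:

# Detrend a multiplicative series by dividing out the trend component
detrended = df.value.values / result_mul.trend

Subtracting the trend is only appropriate when the decomposition is additive (Value = Base Level + Trend + Seasonality + Error).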
H: What is the difference between Data Modeling and Data Processing? When discussing big data, it is sometimes mentioned that data modeling can be done by using a tool like map reduce, while data processing may be performed by apache spark. What is the difference between data modeling tasks, and data processing tasks? Thanks in advance AI: Data modeling means representing the data, usually with a somewhat compact model. Modeling implies simplifications: this can lead to a good model which reliably represents the patterns in the data or a terrible model which simplifies too much or not enough. Data processing is applying any kind of process to the data. For the record, map-reduce is relevant only as a technique for processing large data efficiently.
H: Is Pearson correlation a good loss function? I want to do a data science project. I want to use price history to predict future prices. I want to use correlation(y, y_pred) as my loss function but I found it's hard to calculate first deter, and second deter. Has anyone used correlation as loss function, and is it good? AI: I think Dave's answer points out the most pressing issues: translational invariance Absolute scale invariance In Tensorflow we can define our correlation function: class CorrLoss(tf.keras.losses.Loss): def call(self, y_true, y_pred): res_true = y_true - tf.reduce_mean(y_true) res_pred = y_pred - tf.reduce_mean(y_pred) cov = tf.reduce_mean(res_true * res_pred) var_true = tf.reduce_mean(res_true**2) var_pred = tf.reduce_mean(res_pred**2) sigma_true = tf.sqrt(var_true) sigma_pred = tf.sqrt(var_pred) return - cov / (sigma_true * sigma_pred) And quickly whip up a simple linear model: model = tf.keras.Sequential([ layers.Dense(input_shape=[1,], units=1) ]) And a data set that is learnable by this model: x = tf.random.normal((1000,)) y = 5 * x + 10 + tf.random.normal((1000,)) Training with our choice of loss function, model, and data, we can visually understand that correlation alone is not sufficient. As Dave describes, least squares is often effective. Mostly for my own amusement, I considered if maximizing $\mathbb{E}[Y \hat Y]$ would fare any better than maximizing Pearson's correlation. Here is the custom loss function: class ProdLoss(tf.keras.losses.Loss): def call(self, y_true, y_pred): return -tf.reduce_mean(y_true * y_pred) The following is a close success over the consistently-horrible choice of correlation: And often it would look better, but it wasn't reliable! It would also often look like this: Interestingly, the product moment will tend to ignore the true values by making the predicted values extreme. I noticed this by taking the same problem and increasing the number of epochs to $10^4$. Thus the correlation and mixed moment are unreliable loss functions for achieving $Y \approx \hat Y$.
H: Model from an aggregate I'm in a place where we're unable to train models on data due to GDPR. What I want is to predict people getting a job (y) given (x,x,x,x…) their employment type (working full time or part time), work sector, area and age. Since I cannot train on real data, I created an aggregate from the real data, then used numpy.random.choice to generate a synthetic data frame with data from the aggregate, and then created a boosted decision tree model that had a good F1 score and accuracy. The behaviour of the model is similar to the aggregate. When I presented this to people, they informed me that what I did is not a real model and that by recreating data from an aggregate, you lose important correlations from the data for the individuals represented in the dataset. I disagree on these points since the model is acting based on the aggregate. After reading, I figured out that you do not need to have correlations to build models. There are many research papers based on non-linear datasets with no/few correlations. I don't understand how a model from an aggregate loses its statistical traits. AI: There are correlations regarding the creation dates or maybe the geographical zone, but if you keep the study within a similar time frame, you shouldn't lose anything important. There is a correlation loss as long as there are links between individuals and the environment (e.g. time and place). If there are no links between individuals, I don't see why an aggregate shouldn't be used. In addition, most models need random data to find statistical traits; one exception is time series, because those models learn from sequential data that couldn't be randomized for obvious reasons. It would be very interesting to know the reason why an aggregate would lose its statistical traits (apart from the time frame).
H: Inference Process in Autoregressive Transformer Architecture I'm abit confused about how the inference/prediction process works in transformer. For simplicity suppose it is a transformer for translation. My understanding is that when training, the whole input and output sentences are put into model. This is possible because of the causal mask in the decoder that prevents the model from cheating and looking ahead. Then, once the weights have been trained, the inference/prediction works by placing </s> or start sentence tag in the beginning with padding. The predicted word is then concatenated until </s> is the predicted word. My confusion arises from how the predicted word is acquired. The causal mask ensures that the first predicted token (X_1 below) is only a function of the first token (i.e. is not affected by the padding we used in the other tokens. So our first predicted word/token should be taken from the first, and subsequently once we concatenated k words it should be taken from k+1 th output position. See the diagram below for clarity. However, I've been using nlp.seas.harvard.edu/annotated-transformer/ as reference (and also checked another tensorflow tutorial), and they seem to take the predicted word/token as the last token (i.e. X_N).For example, under the inference section of the link above, we have: prob = test_model.generator(out[:, -1]) _, next_word = torch.max(prob, dim=1) next_word = next_word.data[0] ys = torch.cat([ys, torch.empty(1, 1).type_as(src.data).fill_(next_word)], dim=1) Thus, my question is whether I'm misunderstanding the model of misunderstanding the code? AI: I think that your misunderstanding comes from this: Then, once the weights have been trained, the inference/prediction works by placing or start sentence tag in the beginning with padding. There is no padding at all. In the first iteration, the length of the input is only one token (i.e. for </s>). Once the prediction for the first token is computed, it is appended to the input, getting a tensor of length 2, which is then used as input, and so on. At each step, the length of the input increases by 1. Therefore, the prediction for the current step is always the last predicted token.
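Schematically, the greedy decoding loop looks like the sketch below; model.decode, memory, bos_id, eos_id and max_len are placeholders for whatever your implementation provides, and the point is that the prediction is always read from the last position of the growing sequence:

import torch

ys = torch.tensor([[bos_id]])                  # start with only the start-of-sequence token
for _ in range(max_len):
    out = model.decode(ys, memory)             # causal mask applied inside the decoder
    next_token = out[:, -1].argmax(dim=-1)     # prediction at the LAST position = next token
    ys = torch.cat([ys, next_token.unsqueeze(0)], dim=1)
    if next_token.item() == eos_id:
        break

There is no padding involved: at step k the input contains exactly k tokens, so the output at the last position is the model's prediction for token k+1, which is why the reference code indexes out[:, -1].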
H: Improve CNN classification accuracy I am training a CNN model with about 20.000 images with two classes each 10.000 images. The size of the images vary between 50*50 pixel and 1000x500 pixels. I am resizing all images to the average size of all images, which is 350x150 pixels. Then training a CNN with this architecture: import cv2 import os import numpy as np from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import Flatten from keras.constraints import maxnorm from keras.optimizers import SGD from keras.layers.convolutional import Convolution2D from keras.layers.convolutional import MaxPooling2D from keras.utils import np_utils from sklearn.model_selection import train_test_split import random import matplotlib.pyplot as plt data = [] labels = [] imagePaths = sorted(list(my_images)) random.seed(42) random.shuffle(imagePaths) # loop over the images for imagePath in imagePaths: image = cv2.imread(imagePath) image = cv2.resize(image, (350, 150)) data.append(image) # extract the class label from the image path and update the # labels list label = imagePath.split(os.path.sep)[-2].split('/')[-1] if label == 'pos': label = 1 elif label == 'neg': label = 0 labels.append(label) # scale the raw pixel intensities to the range [0, 1] data = np.array(data, dtype="float") / 255.0 labels = np.array(labels) # partition the data into training and testing splits using 75% of # the data for training and the remaining 25% for testing (trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25, random_state=42) unique, counts = np.unique(trainY, return_counts=True) print(dict(zip(unique, counts))) y_train = np_utils.to_categorical(trainY) y_test = np_utils.to_categorical(testY) num_classes = 2 # # # Create the model model = Sequential() model.add(Convolution2D(32, 3, 3, input_shape=(150, 350, 3), activation='relu', border_mode='same')) model.add(Dropout(0.2)) model.add(Convolution2D(32, 3, 3, activation='relu', border_mode='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same')) model.add(Dropout(0.2)) model.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dropout(0.2)) model.add(Dense(1024, activation='relu', W_constraint=maxnorm(3))) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu', W_constraint=maxnorm(3))) model.add(Dropout(0.2)) model.add(Dense(num_classes, activation='softmax')) # Compile model epochs = 25 lrate = 0.01 decay = lrate / epochs sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False) model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) I am getting an accuracy of 95 % which is really good and I am using it as production model. However I am wondering whether I can improve the accuracy since the number of images seems to be very high and the classification problem separable: example images Is there any chance to improve the model and to squeeze out a bit more from the prediction? AI: 95% is very good, I'm not sure if improving that result would not alter the result in production: Keeping an error margin might be helpful to avoid overfitting, but it may not be your case. Nevertheless, here are some tips to improve your model even more: Apply the AdamW algorithm instead of SGD. AdamW is an optimizer that reduces the learning rate progressively with iterations. 
I've already improved models by 20% using this optimizer. https://www.fast.ai/posts/2018-07-02-adam-weight-decay.html Fine-tune your hyperparameters & structure thanks to a genetic algorithm. This solution requires a lot of patience, as it explores many different model configurations, but you will eventually reach better results. Therefore, you could rent a powerful GPU in a cloud for a few hours to do this task (=few dollars). https://sainivedh.medium.com/optimization-of-cnn-architecture-using-genetic-algorithm-for-image-classification-5c48f25dac9c
H: Where is the VGG in Cartoongan? When I read through the paper of Cartoongan [CartoonGAN: Generative Adversarial Networks for Photo Cartoonization], I was so confused about where is vgg located in the entire network. Based on the paper about GAN [Generative Adversarial Nets], I could not see anywhere that vgg exists in the network. I feel that I couldn't see a clear picture of the entire construction of Cartoongan. I know how GAN is constructed with a generator and discriminator, but Cartoongan no. If someone can help, I appreciated it. AI: Great question, VGG is mostly used as feature extraction. As such we can compare the similarity between the styles of the two images. Usually ground truth batch of images and the ones that are generated. This is mainly used as an extra term in the loss function. As shown below s_loss and c_loss This can be seen on the GitHub repo of the paper in question: if self.content_lambda != 0. or self.style_lambda != 0.: vgg_generated_images = self.pass_to_vgg(generated_images) if self.content_lambda != 0.: c_loss = self.content_lambda * self.content_loss( self.pass_to_vgg(source_images), vgg_generated_images) g_total_loss = g_total_loss + c_loss if self.style_lambda != 0.: s_loss = self.style_lambda * self.style_loss( self.pass_to_vgg(target_images[:vgg_generated_images.shape[0]]), vgg_generated_images) g_total_loss = g_total_loss + s_loss d_grads = d_tape.gradient(d_total_loss, discriminator.trainable_variables) g_grads = g_tape.gradient(g_total_loss, generator.trainable_variables) Where the VGG is initialized and loaded at the begging of training, as shown below: base_model = VGG19(weights="imagenet", include_top=False, input_shape=input_shape) tmp_vgg_output = base_model.get_layer("block4_conv3").output tmp_vgg_output = Conv2D(512, (3, 3), activation='linear', padding='same', name='block4_conv4')(tmp_vgg_output) self.vgg = tf.keras.Model(inputs=base_model.input, outputs=tmp_vgg_output) self.vgg.load_weights(os.path.expanduser(os.path.join( "~", ".keras", "models", "vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5")), by_name=True)
H: Can we visualize a feature space with 4 or more dimensions? I have a dataset with 6 independent variables, I have trained this dataset on a Support Vector Machine. Is there is a way in python or R to visualize the feature space with the hyperplane? Or for example, can I plot many 2D plots such that every plot has two features vs each other separated by the decision boundary in the graph? AI: For something like visualizing four dimensions, you could do something along the lines of constructing a 3D plot and then adding a color gradient to represent the fourth dimension. For five dimensions you could add scatter plot point size, for six perhaps even scatter point shape as well. After around four dimensions however, the 'standard' visualization of a scatterplot starts to look fairly clunky. One way to visualize six dimensional data is to use a technique you mentioned above. Namely, create a series of 2D plots. One way to accomplish this in by using the scatter_matrix in pandas. You can read about that on its documentation here. A minimal example is provided below: import pandas as pd import numpy as np # import the scatter_matrix function from pandas from pandas.plotting import scatter_matrix # create a df that contains a series of random numbers with 6 features df = pd.DataFrame(np.random.randn(60, 6), columns=['x_1', 'x_2', 'x_3', 'x_4', 'x_5','x_6']) #create a scatter matrix and plot a histogram across the diagonal scatter_matrix(df, alpha=.4, figsize=(8, 8), diagonal='hist'); The result:
H: How to calculate a trend to use as a feature in a machine learning model? In a machine learning model one of the features is unemployment: Month Unemployment May-2022 3.6% Jun-2022 3.7% Jul-2022 3.8% Aug-2022 3.9% What I need is to use as an additional feature the trend in the last three months, in this case it went up 0.1% each month, so the trend would be 0.3%? Note that I'm not looking to calculate the moving average. AI: You have several choices: The trend can be calculated as $\frac{last - first}{first}$, e.g. $\frac{3.9 - 3.6}{3.6}$ You can perform a linear regression including the 4 points and use the slope as trend. Or any variant, e.g. average difference: $\frac{(3.7-3.6)+(3.8-3.7)+(3.9-3.8)}{3}$.
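To make the three options concrete, here is a small pandas/NumPy sketch (the series values are taken from the question; in a feature pipeline you would typically compute these over a rolling window of the last three or four months):
import numpy as np
import pandas as pd

unemployment = pd.Series([3.6, 3.7, 3.8, 3.9],
                         index=["May-2022", "Jun-2022", "Jul-2022", "Aug-2022"])

# Option 1: relative change between the last and first value of the window
trend_relative = (unemployment.iloc[-1] - unemployment.iloc[0]) / unemployment.iloc[0]

# Option 2: slope of a linear regression fitted on the window
trend_slope = np.polyfit(np.arange(len(unemployment)), unemployment.values, 1)[0]

# Option 3: average month-over-month difference
trend_avg_diff = unemployment.diff().mean()

print(trend_relative, trend_slope, trend_avg_diff)
For example, option 2 on a rolling basis would be unemployment.rolling(4).apply(lambda w: np.polyfit(np.arange(len(w)), w, 1)[0]).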
H: I am struggling to understand the point of supervised ML models in real world scenarios Sorry for maybe a stupid question, but I can't seem to find any explanation of it online. If supervised machine learning only works on labeled datasets - you can't use it to predict a value of unlabelled data after the model is already trained? And if that is true, how could you possibly use those models in real life scenarios? For example you write and train a classifier to predict the age group of the user, can you in any way use that created model for actual prediction of an unlabelled entry? And if not, what is the point of building this kind of model? Thank you! AI: Supervised means that the training stage is supervised and requires labels. It does not mean that you need labels during inference. Here is a small example using a Random Forest classifier with scikit-learn (source):
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(max_depth=2, random_state=0)
>>> clf.fit(X, y)
RandomForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
As you can see, training the model takes in labels (clf.fit(X, y) where y are the labels) but during inference the model runs the prediction for an unseen datapoint with no label (print(clf.predict([[0, 0, 0, 0]])) where [[0, 0, 0, 0]] is a new datapoint which is classified as belonging to class 1). In contrast, unsupervised ML does not require labels during training. This blog post provides some further explanations and examples.
H: Inverted ROC curve I am using tidymodels package in R. Running random forest to classify three classes. There are about 8000 samples in total and 130 features. This is how the ROC curves look like. The predictions for all 3 classes are actually pretty good and fit with expectations. The misclassified samples are also easy to explain. Here are some metrics as given by the software on test data: .metric .estimate 1 accuracy 0.887 2 bal_accuracy 0.917 3 f_meas 0.887 4 kap 0.831 5 mcc 0.835 6 precision 0.895 7 recall 0.889 8 roc_auc 0.975 And the confusion matrix. Truth Prediction A B C A 573 19 22 B 6 662 17 C 148 25 628 Everything seems to be fine except for the strange ROC curves. I am trying to find an explanation for this. AI: Ok I have figured this out. It was to do with the order of prediction classes provided to roc_curve(). Here is a reproducible example below. library(tidymodels) library(dplyr) dfr <- iris %>% mutate(Species=factor(Species)) data_split <- initial_split(dfr,prop=0.70,strata=Species) data_train <- training(data_split) data_test <- testing(data_split) data_tune_cv <- vfold_cv(data_train,v=10,repeats=1,strata=Species) recipe <- data_train %>% recipe(Species ~ .) rf_spec <- rand_forest(mtry = tune(), min_n = tune(), trees = tune()) %>% set_mode("classification") %>% set_engine("ranger", num.threads = (parallel::detectCores()-1), importance = "impurity") rf_wf <- workflow() %>% add_recipe(recipe) %>% add_model(rf_spec) rf_metrics <- metric_set(roc_auc,accuracy,bal_accuracy,f_meas,kap,mcc,precision,recall) rf_grid <- grid_latin_hypercube(mtry=finalize(mtry(),data_train), min_n(), trees(), size=20) rf_tune <- tune_grid(rf_wf, resamples=data_tune_cv, grid=rf_grid) #> ! Fold01: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold01: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold01: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold02: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold02: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold02: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold03: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold03: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold03: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold04: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold04: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold04: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold05: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold05: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold05: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! 
Fold06: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold06: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold06: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold07: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold07: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold07: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold08: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold08: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold08: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold09: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold09: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold09: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold10: preprocessor 1/1, model 1/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold10: preprocessor 1/1, model 5/20: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold10: preprocessor 1/1, model 17/20: 5 columns were requested but there were 4 predictors in the data. 4 will... rf_best <- select_best(rf_tune,metric="roc_auc") rf_training_pred <- rf_wf %>% finalize_workflow(rf_best) %>% fit_resamples(data_tune_cv,control = control_grid(save_pred=TRUE)) %>% collect_predictions() #> ! Fold01: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold02: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold03: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold04: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold05: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold06: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold07: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold08: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold09: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... #> ! Fold10: preprocessor 1/1, model 1/1: 5 columns were requested but there were 4 predictors in the data. 4 will... This gives incorrect ROC. rf_training_pred %>% roc_curve(truth=Species,.pred_setosa,.pred_virginica,.pred_versicolor) %>% autoplot(rf_training_roc) This gives the correct ROC. 
rf_training_pred %>% roc_curve(truth=Species,.pred_setosa,.pred_versicolor,.pred_virginica) %>% autoplot(rf_training_roc) Created on 2022-09-22 by the reprex package (v2.0.1) Session info sessioninfo::session_info() #> ─ Session info ─────────────────────────────────────────────────────────────── #> setting value #> version R version 4.1.0 (2021-05-18) #> os Ubuntu 20.04.5 LTS #> system x86_64, linux-gnu #> ui X11 #> language en_GB:en #> collate en_GB.UTF-8 #> ctype en_GB.UTF-8 #> tz Europe/Stockholm #> date 2022-09-22 #> pandoc 2.18 @ /usr/lib/rstudio/bin/quarto/bin/tools/ (via rmarkdown) #> #> ─ Packages ─────────────────────────────────────────────────────────────────── #> package * version date (UTC) lib source #> assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.1.0) #> backports 1.4.1 2021-12-13 [1] CRAN (R 4.1.0) #> broom * 1.0.0 2022-07-01 [1] CRAN (R 4.1.0) #> class 7.3-20 2022-01-13 [1] CRAN (R 4.1.0) #> cli 3.3.0 2022-04-25 [1] CRAN (R 4.1.0) #> codetools 0.2-18 2020-11-04 [1] CRAN (R 4.1.0) #> colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.1.0) #> crayon 1.5.1 2022-03-26 [1] CRAN (R 4.1.0) #> curl 4.3.2 2021-06-23 [1] CRAN (R 4.1.0) #> DBI 1.1.3 2022-06-18 [1] CRAN (R 4.1.0) #> dials * 1.0.0 2022-06-14 [1] CRAN (R 4.1.0) #> DiceDesign 1.9 2021-02-13 [1] CRAN (R 4.1.0) #> digest 0.6.29 2021-12-01 [1] CRAN (R 4.1.0) #> dplyr * 1.0.9 2022-04-28 [1] CRAN (R 4.1.0) #> ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.1.0) #> evaluate 0.16 2022-08-09 [1] CRAN (R 4.1.0) #> fansi 1.0.3 2022-03-24 [1] CRAN (R 4.1.0) #> farver 2.1.1 2022-07-06 [1] CRAN (R 4.1.0) #> fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.1.0) #> foreach 1.5.2 2022-02-02 [1] CRAN (R 4.1.0) #> fs 1.5.2 2021-12-08 [1] CRAN (R 4.1.0) #> furrr 0.3.1 2022-08-15 [1] CRAN (R 4.1.0) #> future 1.27.0 2022-07-22 [1] CRAN (R 4.1.0) #> future.apply 1.9.0 2022-04-25 [1] CRAN (R 4.1.0) #> generics 0.1.3 2022-07-05 [1] CRAN (R 4.1.0) #> ggplot2 * 3.3.6 2022-05-03 [1] CRAN (R 4.1.0) #> globals 0.16.0 2022-08-05 [1] CRAN (R 4.1.0) #> glue 1.6.2 2022-02-24 [1] CRAN (R 4.1.0) #> gower 1.0.0 2022-02-03 [1] CRAN (R 4.1.0) #> GPfit 1.0-8 2019-02-08 [1] CRAN (R 4.1.0) #> gtable 0.3.0 2019-03-25 [1] CRAN (R 4.1.0) #> hardhat 1.2.0 2022-06-30 [1] CRAN (R 4.1.0) #> highr 0.9 2021-04-16 [1] CRAN (R 4.1.0) #> htmltools 0.5.3 2022-07-18 [1] CRAN (R 4.1.0) #> httr 1.4.3 2022-05-04 [1] CRAN (R 4.1.0) #> infer * 1.0.3 2022-08-22 [1] CRAN (R 4.1.0) #> ipred 0.9-13 2022-06-02 [1] CRAN (R 4.1.0) #> iterators 1.0.14 2022-02-05 [1] CRAN (R 4.1.0) #> knitr 1.39 2022-04-26 [1] CRAN (R 4.1.0) #> labeling 0.4.2 2020-10-20 [1] CRAN (R 4.1.0) #> lattice 0.20-45 2021-09-22 [1] CRAN (R 4.1.0) #> lava 1.6.10 2021-09-02 [1] CRAN (R 4.1.0) #> lhs 1.1.5 2022-03-22 [1] CRAN (R 4.1.0) #> lifecycle 1.0.1 2021-09-24 [1] CRAN (R 4.1.0) #> listenv 0.8.0 2019-12-05 [1] CRAN (R 4.1.0) #> lubridate 1.8.0 2021-10-07 [1] CRAN (R 4.1.0) #> magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.1.0) #> MASS 7.3-58.1 2022-08-03 [1] CRAN (R 4.1.0) #> Matrix 1.4-1 2022-03-23 [1] CRAN (R 4.1.0) #> mime 0.12 2021-09-28 [1] CRAN (R 4.1.0) #> modeldata * 1.0.1 2022-09-06 [1] CRAN (R 4.1.0) #> munsell 0.5.0 2018-06-12 [1] CRAN (R 4.1.0) #> nnet 7.3-17 2022-01-13 [1] CRAN (R 4.1.0) #> parallelly 1.32.1 2022-07-21 [1] CRAN (R 4.1.0) #> parsnip * 1.0.1 2022-08-18 [1] CRAN (R 4.1.0) #> pillar 1.8.0 2022-07-18 [1] CRAN (R 4.1.0) #> pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.1.0) #> prodlim 2019.11.13 2019-11-17 [1] CRAN (R 4.1.0) #> purrr * 0.3.4 2020-04-17 [1] CRAN (R 4.1.0) #> R.cache 0.16.0 2022-07-21 [1] CRAN (R 4.1.0) #> 
R.methodsS3 1.8.2 2022-06-13 [1] CRAN (R 4.1.0) #> R.oo 1.25.0 2022-06-12 [1] CRAN (R 4.1.0) #> R.utils 2.12.0 2022-06-28 [1] CRAN (R 4.1.0) #> R6 2.5.1 2021-08-19 [1] CRAN (R 4.1.0) #> ranger * 0.14.1 2022-06-18 [1] CRAN (R 4.1.0) #> Rcpp 1.0.9 2022-07-08 [1] CRAN (R 4.1.0) #> recipes * 1.0.1 2022-07-07 [1] CRAN (R 4.1.0) #> reprex 2.0.1 2021-08-05 [1] CRAN (R 4.1.0) #> rlang 1.0.4 2022-07-12 [1] CRAN (R 4.1.0) #> rmarkdown 2.14 2022-04-25 [1] CRAN (R 4.1.0) #> rpart 4.1.16 2022-01-24 [1] CRAN (R 4.1.0) #> rsample * 1.1.0 2022-08-08 [1] CRAN (R 4.1.0) #> rstudioapi 0.13 2020-11-12 [1] CRAN (R 4.1.0) #> scales * 1.2.0 2022-04-13 [1] CRAN (R 4.1.0) #> sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.1.0) #> stringi 1.7.8 2022-07-11 [1] CRAN (R 4.1.0) #> stringr 1.4.0 2019-02-10 [1] CRAN (R 4.1.0) #> styler 1.7.0 2022-03-13 [1] CRAN (R 4.1.0) #> survival 3.4-0 2022-08-09 [1] CRAN (R 4.1.0) #> tibble * 3.1.8 2022-07-22 [1] CRAN (R 4.1.0) #> tidymodels * 1.0.0 2022-07-13 [1] CRAN (R 4.1.0) #> tidyr * 1.2.0 2022-02-01 [1] CRAN (R 4.1.0) #> tidyselect 1.1.2 2022-02-21 [1] CRAN (R 4.1.0) #> timeDate 4021.104 2022-07-19 [1] CRAN (R 4.1.0) #> tune * 1.0.0 2022-07-07 [1] CRAN (R 4.1.0) #> utf8 1.2.2 2021-07-24 [1] CRAN (R 4.1.0) #> vctrs 0.4.1 2022-04-13 [1] CRAN (R 4.1.0) #> withr 2.5.0 2022-03-03 [1] CRAN (R 4.1.0) #> workflows * 1.0.0 2022-07-05 [1] CRAN (R 4.1.0) #> workflowsets * 1.0.0 2022-07-12 [1] CRAN (R 4.1.0) #> xfun 0.32 2022-08-10 [1] CRAN (R 4.1.0) #> xml2 1.3.3 2021-11-30 [1] CRAN (R 4.1.0) #> yaml 2.3.5 2022-02-21 [1] CRAN (R 4.1.0) #> yardstick * 1.1.0 2022-09-07 [1] CRAN (R 4.1.0) #> #> #> ──────────────────────────────────────────────────────────────────────────────
H: How can I say if I have a class imbalance issue in my data? Assuming I have this dataset: Label --- %Total 0 -------- 18.53% 1 -------- 8.18% 2 -------- 26.22% 3 -------- 16.46% 4 -------- 8.62% 5 -------- 9.58% 6 -------- 5.88% 7 -------- 6.53% I could say I have a class imbalance problem ? Is it mandatory in this case to fix the problem trying to use all the various techniques (resampling, data augmentation, change perf metric etc...) ? Is there a mathematical formula to get the grade of imbalance severity or something like that to understand if there is a class imbalance problem ? I think we have to evaluate case by case, the techniques to avoid imbalanced data could even not work at all, there isn't a general rule. Any ideas? AI: 'Imbalance problem' is a mix up of several loosely related issues, mainly these two: It's hard to generalize when there's too few of a certain class' samples, especially with lots of dimensions. However, methods like resampling won't help much in this case: in an oversimplified way, that means trying to combat model variance by shifting its bias. There's little you can do aside from gathering more data unless, perhaps, you are only interested in certain class-specific metrics of those few rare classes. Your class distribution does not seem that bad - your model will generalize alright with enough samples regardless of the class ratio. Logistic functions underestimating the rare cases' probability. That's basically just bias, resampling / reweighing / threshold selection have mostly the same effect. The latter is the easiest as it does not require retraining, however this is, strictly speaking, a decision making part, which should not be mixed with evaluation stage (there could be more than one decision threshold for different actions etc). So, the 'ideal' way would be: don't resample at all, evaluate using 'proper' (class independent and threshold independent) metrics, such as logloss, and thus work directly with scores/probabilities (calibrate if needed) up until the decision stage. In DS context however, you often still need 'intuitive' metrics (based upon confusion matrix), which are threshold sensitive and often class specific. Even then, anything more complex than selecting a threshold upon the precision/recall curve is usually excessive.
H: SkLearn Categorical Naive Bayes Vs Mathematical theory of Naive Bayes The Naive Bayes classification is based on the following formula $P(C_i|X) = {P(X|C_i)P(C_i) \over P(X)} ... i)$ where $P(X|C_i)$ is the likelihood of $X$ given class $C_i$, $P(X)$ is the probability of $X$, and $C_i$ represents the class. Now suppose we have the following dataset:
Age     Income  Buy_computer
Senior  fair    Yes
Junior  fair    Yes
Young   poor    No
Senior  poor    Yes
Junior  fair    No
Young   poor    No
Now if we get new data (Age = young, Income = fair) we need to find out which class this data should belong to. ... example 1) We can use eq i) to find out the class. I have also learned about Categorical Naive Bayes. As per the documentation, the probability of category $t$ in feature $i$ given class $c$ is estimated as: $P(x_i = t \mid y = c; \alpha) = \frac{N_{tic} + \alpha}{N_c + \alpha n_i}$ ... ii) As per example 1) we can write equation ii) as P(Age = young, Income = fair | Buy_computer = ?) and then apply equation i) to it to find out the class of (Age = young, Income = fair). However, I cannot understand how the right hand side of eq ii) is related to equation i). Equation i) also does not have any alpha parameter, so how could the parameter alpha influence the answer? Thank you. AI: Now if we get new data (Age = young, Income = fair) we need to find out which class this data should belong to. ... example 1) If a sample doesn't have a label you can't include it in the train/test set, not sure if that's what you mean here but I want to clarify that just in case. That being said, you could try to predict its label after the model has been trained and tested. However, I cannot understand how the right hand side of eq ii) is related to equation i) I think there are two things causing confusion here. The notation you're using for equations $i)$ and $ii)$ is inconsistent. My guess is you're citing these from different sources. I'll use the notation used by sklearn for my answer. You can't apply equation $i)$ to equation $ii)$. Equation $i)$ is a statement of Bayes' Theorem, while $ii)$ is an assumption we make about the likelihood $P(x_i|y)$ for categorically distributed datasets. How $i)$ and $ii)$ are related will become clearer if we go through a quick derivation of the math behind CategoricalNB. Suppose we have a feature set $X$ with label $y$. We'd like to train a model to calculate the probability of output $y$ given this feature set $X$, as this will allow the model to predict unlabeled data. According to Bayes' Theorem, the probability of $y$ given $X$ (denoted $P(y|X)$) is: \begin{eqnarray*} P(y|X) &=& \frac{P(X|y)P(y)}{P(X)} \\ &=& \frac{P(x_1,..x_a|y)P(y)}{P(x_1,..x_a)} \end{eqnarray*} where in the second line we've expanded $X$ into its individual features $x_i$. $P(y)$ and $P(x_1,..x_a)$ can be calculated from the training data, but how do we calculate $P(x_1,..x_a|y)$? To do this we assume the features are mutually independent, in which case we have: \begin{eqnarray*} P(x_1,..x_a|y) &=& P(x_1|y)P(x_2|y)..P(x_a|y) \\ &=& \Pi_{i=1}^{i=a} P(x_i|y) \end{eqnarray*} The assumption of mutual independence is what puts the Naive in Naive Bayesian Model, i.e. if a Bayesian model is described as Naive it means it is based on an assumption of mutual independence between features. So the problem of calculating $P(y|X)$ has been reduced to calculating $P(x_i|y)$; what separates all of the Naive Bayesian models is the methodology they use to calculate $P(x_i|y)$.
For the methodology behind CategoricalNB we make the further assumption that each feature $x_i$ has a categorical distribution given by: \begin{equation} P(x_i|y, \alpha) = \frac{N_{tic}+\alpha}{N_c+\alpha n_i} \end{equation} where $N_{tic}$ is the number of times category $t$ appears in feature $x_i$ when $y=c$, and $N_c$ is the number of times $y=c$. $\alpha$ is a hyperparameter introduced to reduce overfitting on the train set and $n_i$ is the number of categories in the feature $x_i$. So to summarize: Equation $i)$ is a statement of Bayes' Theorem, which is the cornerstone of every Bayesian model (that's why they're called Bayesian models after all). Equation $ii)$ is an assumption made about the likelihood $P(x_i|y)$. This assumption, along with the assumption of mutually independent features, is what underpins the methodology behind sklearn's CategoricalNB. $\alpha$ is a hyperparameter used to reduce overfitting. You can't calculate $\alpha$ with some kind of pen and paper calculation, it can only be calculated via hyperparameter fine-tuning. Hope this helps
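To connect this back to the library, here is a minimal sketch of CategoricalNB on the toy table from the question (with only six rows the output is purely illustrative; the usual pattern is to integer-encode the categories first, e.g. with OrdinalEncoder):
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB

df = pd.DataFrame({
    "Age":          ["Senior", "Junior", "Young", "Senior", "Junior", "Young"],
    "Income":       ["fair", "fair", "poor", "poor", "fair", "poor"],
    "Buy_computer": ["Yes", "Yes", "No", "Yes", "No", "No"],
})

enc = OrdinalEncoder()
X = enc.fit_transform(df[["Age", "Income"]])  # categories as integer codes
y = df["Buy_computer"]

clf = CategoricalNB(alpha=1.0)  # alpha is the smoothing term from equation ii)
clf.fit(X, y)

# class prediction and per-class probabilities for (Age=Young, Income=fair)
new = enc.transform([["Young", "fair"]])
print(clf.predict(new), clf.predict_proba(new))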
H: One word changes everything NLP I have a classification model (BERT) that classifies sentences as either question or normal sentences. But whenever a sentence has "how" word, the model chooses "question" class. How can I solve this issue? (I have a very big dataset.) AI: Very likely, the majority of the sentences which contain "how" in your training data are labelled as question. It's probably a problem of representativity of the training set, because otherwise the problem wouldn't be this specific. But note that it's likely that your training data contains other issues a well, possibly there are errors in the labels.
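A quick way to check that hypothesis, assuming your labelled data is in a pandas DataFrame with (hypothetical) columns "text" and "label":
import pandas as pd

# share of each class among sentences that do / don't contain the word "how"
contains_how = df["text"].str.contains(r"\bhow\b", case=False, regex=True)
print(pd.crosstab(contains_how, df["label"], normalize="index"))
If almost every training sentence containing "how" is labelled as a question, the model is simply reproducing that bias, and rebalancing or adding counter-examples (non-question sentences containing "how") should help.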
H: Stacking: Use predictions of train or test to create features for level 1 classifier The question is pretty simple. In stacking, the predictions of level 0 models are being used as features to train a level 1 model. However, the predictions of what data? Intuitively it makes more sense to predict the test set and use those results to train the final classifier. I am not sure whether this results in data leakage, I don't think this results to data leakage (since the final classifier has only information that the initial ones do, ie. only from the train data - it doesn't know if those predictions are good or not). Is this reasoning correct? AI: I'm not sure if there's any standard about this, but I usually proceed by splitting the training set into two parts A and B: A is used as training set for level 0 models B is used as test set for the level 0 models and as training set for the level 1 model. As usual, the final test set made of fresh instances is used to evaluate the final model, made of stacking the level 0 models and level 1 model. [added] You're right that there would be data leakage if one were using the same data for training and testing the level 0 models. This would be especially bad, because it means that the level 1 model would expect 'very good' level 0 predictions (since they have been seen during training), and obviously the 'production' level 0 predictions would not be as good and therefore the level 1 model would be completely overfit. One can also use nested cross-validation to the same effect.
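For reference, scikit-learn's StackingClassifier implements essentially this idea: the level 1 (final) estimator is trained on out-of-fold predictions of the level 0 estimators obtained via internal cross-validation, which avoids the leakage discussed above. A minimal sketch (the estimators and data are placeholders):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

level0 = [("rf", RandomForestClassifier(random_state=0)),
          ("svc", SVC(probability=True, random_state=0))]
stack = StackingClassifier(estimators=level0,
                           final_estimator=LogisticRegression(),
                           cv=5)
stack.fit(X, y)  # the level 1 model only ever sees cross-validated level 0 predictions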
H: clustering time series with different sized time series I have read this article on towardsdatascience and they teach how to cluster time series using the DTW distance and the TimeSeriesKMeans from the tslearn.clustering library. I also read the official documentation and I found a note. Notes If metric is set to “euclidean”, the algorithm expects a dataset of equal-sized time series. This suggest me that for other metrics (like dtw for example) the method works with different sized time series. I'm currently working on time-series data and I want to check if I can get some interesting information about my data using this method. This is how I constructed my curves. I have a dataframe called "relevant_figures" that it contains the relevant information in order to construct the curves. Then I proceed as follows: X = [] for _,row in relevant_figures.iterrows(): input_time = row['InputTime'] output_time = row['OutputTime'] ts = weights_df.loc[input_time : output_time]['weight'].copy() X.append(ts) When I try the method TimeSeriesKMeans(n_clusters=3, metric="dtw").fit(X) It throws a ValueError Name: peso, Length: 120, dtype: float64]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. However I can't reshape in order to construct an array because every ts has different lengths. So reshaping does not work. What should I do? Thanks in advance AI: Try using the to_time_series_dataset function in the tslearn.utils module. This takes a list of lists as input and returns the data formatted as a numpy array, e.g.: from tslearn.utils import to_time_series_dataset X = to_time_series_dataset([[1, 2, 3, 4], [1, 2, 3], [2, 5, 6, 7, 8, 9]]) It looks like it pads the shorter time series with nan's to fit them into the array.
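Putting it together with the code from the question (X is the list of pandas Series built in the loop; variable names follow the question), a sketch would be:
from tslearn.utils import to_time_series_dataset
from tslearn.clustering import TimeSeriesKMeans

# pad the variable-length series into a single array (shorter ones are nan-padded)
X_padded = to_time_series_dataset([ts.values for ts in X])

model = TimeSeriesKMeans(n_clusters=3, metric="dtw")
labels = model.fit_predict(X_padded)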
H: Confusion regarding One Hot Encoding We use one hot encoding with nominal variables so that we don't impose any order relationship. If we have a categorical variable that has three categories: Red, Blue, Green, the one hot encoding of this variable will be: Blue --> [1,0,0], Green --> [0,1,0] and Red --> [0,0,1]. Now, if we convert these binary numbers back to decimal numbers, we will have: Blue --> 4, Green --> 2 and Red --> 1, which has a natural ordering relationship, so how does one hot encoding not introduce a natural ordering relationship? AI: You're confusing binary encoding with one hot encoding (OHE): there's no reason to interpret the array of 3 binary variables [0,1,0] as a single binary number. The 3 values are interpreted by any ML model as 3 different features. For example, if we call these 3 distinct features A, B, C, you could see a condition B==1 in the node of a decision tree, leading to two nodes corresponding only to true (B=1) and false (B=0).
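A quick illustration with pandas (the colour list is just the example from the question): each colour becomes its own independent indicator column rather than one number.
import pandas as pd
print(pd.get_dummies(pd.Series(["Blue", "Green", "Red"])))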
H: How does one perform a Canova-Hansen test in Python? I am referring to the documentation here, but it does not give many examples on how to actually perform the test. I have a pandas dataframe with two columns: Column 1 is the first day of every week, Column 2 is demand, and this data goes back over 150 weeks. How would I perform a CH test to see if there is any seasonality in my data? AI: In general this can be achieved with the snippet below, where you have to replace x with your observations. However, I'm not sure whether the Canova-Hansen test is suitable for weekly observations (m=52), since these sorts of tests are usually designed with monthly or quarterly time series in mind. Therefore it might be better to aggregate your observations to a monthly level (m=12).
from pmdarima.arima import CHTest
import numpy as np

x = np.random.normal(size=1000)
CHTest(m=52).estimate_seasonal_differencing_term(x)
H: Is there a way to map words to their synonyms in tfidf? I have the following code: from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.cluster import KMeans import pandas as pd sentences = ["I have the ability", "I have the weakness", "I have the capability", "I have the power"] tfidf = TfidfVectorizer(max_features=300) tfidf.fit(sentences) X = tfidf.transform(sentences) k = 2 model = KMeans(n_clusters=k, random_state=1) model.fit(X) print(pd.DataFrame(columns=["sentence"], data=sentences).join(pd.DataFrame(columns=["cluster"], data=model.labels_))) The output looks like this: index sentence cluster 0 I have the ability 0 1 I have the weakness 0 2 I have the capability 0 3 I have the power 1 As you can see "I have the ability", "I have the weakness", "I have the capability" were grouped in the same cluster (cluster 0) and "I have the power" was grouped into a separate cluster. I think they were grouped randomly and it can't tell which sentences actually mean the same thing. I want a way to be able to group "I have the ability", "I have the capability", and "I have the power" together by specifying that ability, capability and power are synonyms. So basically mapping all words to their synonyms. Is there an existing package for this? AI: TfIdf vectors require much more data than that to be useful, but also don't give you the ability to identify synonyms. To do that with vectors and the amount of data you're working with, you'll need a pre-trained vector vocabulary. GloVe vectors are a popular choice to start with, but there will be others you can find and play with that may work better for your explicit purpose. Note that if you don't limit yourself to vector-based approaches, there are many classical approaches to this problem. WordNet would probably be the first thing I reach for here.
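If you want to go the WordNet route, a minimal sketch with NLTK (requires a one-off nltk.download("wordnet"); whether two specific words are linked depends entirely on WordNet's coverage):
from nltk.corpus import wordnet as wn

def synonyms(word):
    # union of lemma names across all synsets of the word
    return {lemma.name().lower() for syn in wn.synsets(word) for lemma in syn.lemmas()}

print(synonyms("power") & synonyms("ability"))  # any shared synonyms
You could use such synonym sets to map words to a canonical form before running TfidfVectorizer, or switch to pre-trained word vectors as suggested above.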
H: When I run a Random Forest classification model, every row of my train data set triggers this error (ValueError: could not convert string to float:) ''' from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(n_estimators = 500, max_depth = None, min_samples_split=2, min_samples_leaf =1, bootstrap = True, random_state=0) forest = forest.fit(X_train, y_train) print(forest.score(X_test, y_test)) ''' AI: The error message is not lying to you :) It cannot convert the string "one favourite christmas gifts year love" to a float. RandomForestClassifier (like most scikit-learn models) requires its inputs to be numeric. It does not know how to handle strings of text. Your training data has at least one column that contains string values. When the model tries to convert the training data to numeric values, the error is thrown when it encounters a string. You need to either encode the string values as numbers (e.g. with a text embedding model like Word2Vec) or drop the columns containing strings prior to training.
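Assuming X_train and X_test are pandas DataFrames, a quick sketch of both options:
# see which columns contain text
text_cols = X_train.select_dtypes(include="object").columns
print(text_cols)

# simplest fix: keep only the numeric columns
X_train_num = X_train.select_dtypes(include="number")
X_test_num = X_test.select_dtypes(include="number")

forest = forest.fit(X_train_num, y_train)
print(forest.score(X_test_num, y_test))
If the text columns actually carry signal, vectorize them instead (e.g. with TfidfVectorizer or an embedding model) and concatenate the result with the numeric features.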
H: Why is max_features ordered by term frequency instead of inverse document frequency In the docs: https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html it is explained that max_features is ordered by term frequency across the corpus. Why not use the idf? AI: The reason is probably that using the top IDF features would mean selecting the least frequent words, in particular the ones which appear only once and are very frequent. These rare words should usually be removed because they are often due to chance and anyway are unlikely to appear again, therefore these are bad features likely to cause overfitting. In other words, it's always better for the features to be frequent so their statistical relations with other variables (especially the target) can be estimated reliably by the algorithm. Picking the top IDF features would do the opposite: take the least reliable statistical information into account.
H: working principle of Support Vector Machine I have a dataset consisting of numerical features and categorical features. I want to train on the training set using SVM. SVM is a quadratic optimization algorithm. I would like to know how SVM works on categorical data. Can anyone share any references, links to research papers, or weblinks describing the process? I would also like to understand the theory behind handling categorical data with SVM. AI: To understand an algorithm very well, I usually study the original paper thoroughly, to understand the original mindset in creating it and the mathematical logic. http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf In parallel, I play with interactive demonstrators to check different use cases and test the limits. For instance: https://jgreitemann.github.io/svm-demo https://cs.stanford.edu/~karpathy/svmjs/demo/ https://dash.gallery/dash-svm/
H: Increasing/Decreasing importance of feature/thing in ML/DL I have 3 cases: I have a classification model that will be used to classify cats and dogs. In my train data dog pictures have a watermark on them, but cat pictures don't. The problem is: whenever I have a watermark on a cat picture, the model will predict the cat picture as a dog picture. I have another classification model that classifies questions and normal sentences. But whenever I have the "how" word in my normal sentence, the model will classify it as "question". I have a prediction model. I have 5 columns but column number 3 is very important. I mean the importance of that column is very high. But my model cannot understand it. All of those problems have 1 common problem. The importance of "something" or "feature" is being misunderstood by models. How can these kinds of problems be solved? AI: I would not say these models "misunderstand" anything. They simply learn from the data provided based on their inductive biases. I hypothesize that all three cases might be caused by the chosen (train) datasets: In case 1 the train data is not representative of the test or deployment data, since in the train data an image has a watermark if and only if it shows a dog. If that is not the case for your test or deployment data then you need to adjust your train data accordingly to remove this artifact. In case 2 I suggest checking the distribution of the word "how" in questions and non-questions in your train data. It might be that "how" almost exclusively occurs in questions, which would, again, be a problem with the dataset not stemming from the same distribution as the data you run inference on. If that is not the case I'd check if your model cannot differentiate between the word "how" appearing in a question vs. a non-question. If that is the case, a different model type might be more suitable. In case 3 it is unclear to me how you derive the importance of "column 3". It might, again, be a problem with the train dataset which, in this case, might not present that feature as very important. Alternatively, it could be that the chosen model is not able to learn the association between that feature and the target (simple example: there is a non-linear association but the model is linear). In summary, case 1 can be handled by feeding a different train dataset. Cases 2 and 3 might require different train datasets but might alternatively require different models if the problem is not with the datasets.
H: How do the intercept and slope calculated in linear regression relate to the output of lm? I have been looking at how to calculate coefficients by hand and the example produces $Y = 1,383.471380 + 10.62219546 * X$ However the output shown of lm does not show these values anywhere. How do I reconcile the results of calculations by hand for B0 and B1 with the output of summary(model)? AI: The reason that the values you get from manually calculating the coefficients do not show up in the output from lm is that lm is using a different dataset (Anscombe's Quartet) than the one used for manually calculating the coefficients. In addition, the regression formula also differs between the two. If using the exact same dataset and regression formula the values under the estimate column should match up by the coefficients you get from manually calculating them.
H: Predict actual result after model trained with MinMaxScaler LinearRegression I was doing the modeling on the House Pricing dataset. My target is to get the mse result and make predictions from input variables. I have done the modeling, scaling the data using MinMaxScaler(), and the model is trained with LinearRegression(). After this I got the score, mse, mae, and rmse results. But when I want to predict with actual inputs, the result comes out scaled. How do I get the prediction back as an actual price? Dataset: https://www.kaggle.com/code/bsivavenu/house-price-calculation-methods-for-beginners/data This is my script:
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error

train = pd.read_csv('train.csv')
column = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
train = train[column]

# Convert Feature/Column with Scaler
scaler = MinMaxScaler()
train[column] = scaler.fit_transform(train[column])

X = train.drop('SalePrice', axis=1)
y = train['SalePrice']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15)

# Calling LinearRegression
model = LinearRegression()

# Fit linearregression into training data
model = model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Calculate MSE (Lower better)
mse = mean_squared_error(y_test, y_pred)
print("MSE of testing set:", mse)

# Calculate MAE
mae = mean_absolute_error(y_test, y_pred)
print("MAE of testing set:", mae)

# Calculate RMSE (Lower better)
rmse = np.sqrt(mse)
print("RMSE of testing set:", rmse)

# Predict the Price House by input:
overal_qual = 6
grlivarea = 1217
garage_cars = 1
totalbsmtsf = 626
fullbath = 1
year_built = 1980

predicted_price = model.predict([[overal_qual, grlivarea, garage_cars, totalbsmtsf, fullbath, year_built]])
print("Predicted price:", predicted_price)
The result: MSE of testing set: 0.0022340806066149734 MAE of testing set: 0.0334447655149599 RMSE of testing set: 0.04726606189027147 Predicted price: [811.51843959] Where the price should be, for example, 208500, 181500, or 121600, i.e. actual dollar values. What step did I miss here? AI: First, you can't use anything from the test set before training. This means that the scaling should be fitted using only the training set, otherwise there's a risk of data leakage. Then remember that scaling your features means that the model learns to predict with scaled features, therefore the test set should be passed through the same scaling as well (fitted on the training set, of course). Finally you could obtain the real price value by "unscaling" with inverse_transform, but instead I decided not to scale the target variable in the code below because it's not needed (except if you really want to obtain evaluation scores on the scaled target).
It's also simpler ;)
full = pd.read_csv('train.csv')

column = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
full = full[column]

X = full.drop('SalePrice', axis=1)
y = full['SalePrice']

# always split between training and test set first
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15)

# Then fit the scaling on the training set
# Convert Feature/Column with Scaler
scaler = MinMaxScaler()

# Note: the columns have already been selected
X_train_scaled = scaler.fit_transform(X_train)

# Calling LinearRegression
model = LinearRegression()

# Fit linearregression into training data
model = model.fit(X_train_scaled, y_train)

# Now we need to scale the test set features
X_test_scaled = scaler.transform(X_test)
y_pred = model.predict(X_test_scaled)

# y has not been scaled so nothing else to do

# Calculate MSE (Lower better)
mse = mean_squared_error(y_test, y_pred)
print("MSE of testing set:", mse)

# Calculate MAE
mae = mean_absolute_error(y_test, y_pred)
print("MAE of testing set:", mae)

# Calculate RMSE (Lower better)
rmse = np.sqrt(mse)
print("RMSE of testing set:", rmse)

# ... evaluation etc.
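As a follow-up to the question's last step: since SalePrice was never scaled here, predictions already come out in dollars; only the new input needs to go through the fitted scaler. A small sketch reusing the names above:
# feature order must match X.columns: OverallQual, GrLivArea, GarageCars, TotalBsmtSF, FullBath, YearBuilt
new_house = pd.DataFrame([[6, 1217, 1, 626, 1, 1980]], columns=X.columns)
predicted_price = model.predict(scaler.transform(new_house))
print("Predicted price:", predicted_price)  # now on the original dollar scale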
H: what is the difference between 'object detection' and 'outlier detection' in computer vision? If I'm looking at drone footage, and I'm looking for tennis courts, then I'm doing object detection. If I decide that a tennis court is an 'outlier' as opposed to the rest of the landscape, am I now doing 'outlier detection'? What do these 2 different names mean? AI: Object Detection: Labelling a sample to indicate the presence of a class of object. In images, this is usually done with a bounding box or by applying labels to each pixel of the image. Outlier Detection: Identifying examples in your data that are different from the usual distribution of your data. Can also be referred to as Anomaly Detection. In your example, object detection is the task you're trying to solve, with you or an algorithm working on drone footage to identify tennis courts. Outlier detection is a process that you apply to available drone recordings to identify unusual examples, compared to the "normal" images of tennis courts that you are usually detecting. These would likely contain courts with different colours and surfaces, or ones that look different because of damage or weather. Hope this helps! Side Note: Identifying something as "different to the background/landscape" as you mention in your definition of "outlier" is an interesting way to approach your task. If you were trying to use machine learning to solve this task, you could try assigning a class for "Not Tennis Court" in the most simple case. Alternatively, you could try assigning more classes like "Field", "Road", "Hedge", etc. to encourage your model to distinguish between these things and possibly boost the overall performance.
H: Machine learning algorithms for tabular dataset I have a dataset with 120 features and 5000 instances. The dataset is a combination of categorical and numerical values. It is a tabular dataset. My problem is a binary classification problem. I trained my dataset with all classic classification algorithms like Naive Bayes, Bayesian net, SVM, MLP, Random forest, Logistic regression etc. I would like to know whether there are any algorithms in the machine learning field that are newer than these classics and can be applied to a tabular dataset. I have heard about convolutional neural networks, deep neural networks, etc., but I believe they are used on image data, not tabular data. AI: You can try Generalized Additive Models (GAM). It models the response variable, $y$, as a sum of functions of individual features $f_i(x_i)$: $y = \sum\limits_if_i(x_i)$. You don't need to provide $f_i$, the algorithm learns them from the data. By analyzing the $f_i$, you can see which feature contributes significantly, and in which way. You can even fit the $f_i$ to simple analytic functions and get an analytic dependence of $y$ on $x_i$. Links for GAM: GAM in wikipedia, GAM in R, GAM in Python, ClassificationGAM in Matlab, as well as here, here, here, and here. A related method is Alternating Conditional Expectations (ACE) from this paper: link. I wrote a blog post about it here. These methods are not new, but I have a feeling that you didn't try them. I also recommend browsing through methods in weka. It has some interesting, non-mainstream algorithms, such as a classifier for learning functional trees, HotSpot, alternating model trees, alternating decision trees, and many others.
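A minimal sketch of a GAM classifier in Python with the third-party pygam package (pip install pygam); the term indices are placeholders for your own columns, and categorical features need to be integer-encoded before being passed to f():
from pygam import LogisticGAM, s, f

# s(i): smooth term for a numerical feature, f(j): factor term for a categorical one
gam = LogisticGAM(s(0) + s(1) + f(2)).fit(X_train, y_train)
gam.summary()
print((gam.predict(X_test) == y_test).mean())  # simple accuracy check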
H: Should I annotate additional information besides the categories I already need in a text? I have a dataset with bank transfer reasons. They vary a lot because humans wrote them. From the reasons that are linked to invoice payments I need to extract several things: invoice number(s) IBAN counterparty Before I use any NN algorithm I need to annotate the data. So, for example, I have these rows: "Bank transfer for INV. 00234, 00435/2022.01.13 [BIC] [IBAN] Company Ltd" "Payment of invoice 00034-1120,34 on 02.17 [BIC] [IBAN] Company 2 inc." In case 1, I have: invoice numbers: 00234, 00435 IBAN - [IBAN] counterparty - Company Ltd In case 2, I have: invoice number: 00034 IBAN - [IBAN] counterparty - Company 2 inc I have also annotated invoice prefixes such as inv, INV, invoice, etc. My question is, should I add additional annotations such as "date" (2022.01.13, 02.17) or "sum paid" (1120,34)? Could they be helpful for a transformer, for example, to find out what an invoice is? AI: It might depend on the algorithm you choose and on how various your data is. Solution 1: If every potential case is precisely identified, it could be better to classify every field precisely. Solution 2: However, if there are a lot of potential cases, including unexpected ones (ex: notes or chaotic order), a good solution could be to define an annotation "other" to group any other field that doesn't match the others. Solution 3: A mix of solutions to reduce errors as much as possible in every field: Use filters to recognize fields like IBAN or invoice numbers Use NN trained on company names to recognize companies I'm afraid that 100% NN would lose efficiency in recognizing very different fields that include numeric and text data.
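For the "filters" part of Solution 3, simple regular expressions already go a long way before any NN is involved. A sketch (the patterns and the IBAN below are simplified, made-up examples, not production-grade validators):
import re

text = "Bank transfer for INV. 00234, 00435/2022.01.13 DE89370400440532013000 Company Ltd"

invoice_numbers = re.findall(r"\b\d{5}\b", text)                # naive: standalone 5-digit runs
iban = re.search(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b", text)     # rough IBAN shape
print(invoice_numbers, iban.group() if iban else None)
Whatever such filters catch reliably can then be excluded from the annotation effort, leaving the NN (e.g. a token-classification transformer) to handle the counterparty names and the ambiguous cases.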
H: How do I know that my weights optimizer has found the best weights? I am new to deep learning and my understanding of how optimizers work might be slightly off. Also, sorry for the third-grader quality of the images. For example, if we have a simple task, our loss-vs-weight function might look like this: As far as I understand, optimizers look for improvements and try to fall into the hole that they found. But what if we have lots of local minima, how do I know if, for example, the Adam optimizer has found the global minimum of the loss, not just some local minimum? And the third case I can think of is: what if we have a flat plateau of a loss function, except for a tiny range of weights, would it be found using Adam? How do I know if it even exists? Are there any tools or methods that I can use to analyse this function? AI: No optimizer can guarantee you that it has found the global minimum. That's why we randomly initialize the weights to start at different arbitrary points and then descend towards a minimum, hoping we might reach the global one. Sometimes our step size is large enough to overshoot a valley of local minima and jump across it. It depends on the optimizer and the step size. But, in most practical applications, the minima found by these optimizers (most likely local minima) work well for our applications, even though they are not necessarily the global minima.
H: Testing the impact of events on time series Context I am working with product data for a retail company. I have the daily impressions (number of times it was viewed online) for all products over a 30 day period (can get more data). Here is the data for one product: on_sale = [False, False, False, False, False, False, False, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False] impressions = [1,0,0,5,1,0,6,16,17,18,16,17,15,12,4,3,5,4,2,3,2,1,2,4,2,0,3,5,13,3] So this product was on sale from index 7 to index 13. We expect that when a product is on sale, it will see an increased number of impressions. We have introduced a new piece of logic that identifies all on sale products and moves them up into a 'better' advertising campaign, with the aim of increasing their impressions over and above what the normal level of impressions for that on sale product would be. Question What is the best way to test the impact of this new piece of logic? In particular, how can I test the impact of moving on sale products into a better campaign (in terms of impressions) relative to what we'd expect for the same on sale products without moving them up a campaign? I imagine some sort of A/B test where I randomly take some of the on sale products and apply the new logic to them (i.e. move the products up) and compare the change in impressions to a control group of on sale products that I don't apply the new logic to. Any insight into this problem would be great. I am looking for a test that can be implemented in Python. AI: The best way to do a comparison like this is multi-faceted. The first component is to create an appropriate experiment, which means being as completely random as you possibly can and serving an equal number of enhanced and typical advertisement campaigns. Be super sure that you are being even and fair across times of day, demographics and all possible fronts to limit bias as much as possible. Ideally I would prefer to have data from before and after the sales collected in both methods as well, to give you a sense of how the enhancement works in both sale and non-sale periods of time. Once you have two sets of data, you need to prepare each set of experiments as a separate time series. You can do all the things one might do with time series to explore them separately. But to do a comparison you can use a Granger causality test (here in python, here in R). This test is a test of how well one time series predicts another. It looks at how well a point in set one at a given time is predictive of a value in set two at some time in the future (some number of lag time increments in the future). The links explain how it works pretty well, and it uses an F-statistic and a p-value like many good statistical tests. The underlying idea is that using two sets of samples drawn during the same periods, with a change in just one dimension, implies that a change in pattern is the result of the change in campaign method. I would use a test on data prior to the sale to see that you have created an experimental sampling method that is sufficiently random. If set one is predictive of set two presale, then you know that a change during the sale which makes set one not predictive of set two should mean that you are no longer sampling from the same population (the new campaign method is having an effect).
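A sketch of the Granger causality test with statsmodels (the DataFrame and column names are placeholders: one column per experiment arm, aligned on the same time index and sampling frequency):
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# by convention, the test checks whether the SECOND column helps predict the FIRST one
results = grangercausalitytests(df[["impressions_control", "impressions_enhanced"]], maxlag=4)
Each lag reports F-test (and chi-square) statistics with p-values; a small p-value at some lag suggests the second series carries predictive information about the first.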
H: Feature scaling in Linear Regression I always use the LinearRegression() class in the sklearn library for creating a linear regression model. According to my understanding, we need feature scaling in linear regression when we use stochastic gradient descent as a solver algorithm, as feature scaling will help in finding the solution in fewer iterations, so with sklearn.linear_model.SGDRegressor() we need to scale the input. However, we don't need to scale the input with LinearRegression() as it uses the closed-form solution (based on minimizing the sum of squared residuals). So my first question is, is my understanding correct? Now my second question is, I need to understand in detail why exactly feature scaling will not help if we use LinearRegression()? AI: Yes, your understanding is correct. Answer to your second question: LinearRegression() uses the normal equation, i.e. a closed-form solution, to get the best parameters for a given dataset. Hence, there are no iterative loops to find the best solution, and feature scaling is not needed. Whereas for algorithms that use gradient descent, scaling is recommended.
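For reference, the closed-form least-squares solution referred to above, written as the normal equation, is $\hat{\beta} = (X^\top X)^{-1} X^\top y$. Rescaling a feature simply rescales the corresponding coefficient, so the fitted predictions are unchanged; with (stochastic) gradient descent, by contrast, the scale of the features affects the conditioning of the loss surface and hence how many iterations are needed to converge.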
H: Filter out transactions occurring within a timeframe with the same amount I need to apply some filtering on a data frame using pandas. Basically my data frame has the following columns: ID - The row id of the transaction Timestamp - object transformed to datetime format Amount - float The data frame consists of more than 100k transactions. I want to filter out all of the transactions that have the same amount and occur within a minute of each other. In a single minute I can have 10 transactions, and all 10 will be filtered out (these transactions will be moved to a new data frame called Duplicates, for example).
ID  TMSP                 amount
0   2019-01-01 00:01:11  89
1   2019-01-01 00:01:17  89
2   2019-01-01 00:02:49  238
3   2019-01-01 00:03:13  238
7   2019-01-01 00:08:46  117
As an example, in the above records we will be filtering out the first four records. Logically we have to create a loop, go through the records (i+1 and i), compare the amount and time difference, and if it matches the conditions, the i+1 row will be moved to the new data frame. Are there any other methods we can use in pandas that could do some sort of grouping based on several conditions? AI: welcome to DataScience Exchange. It would be good to have more data to explore the solution, but I've come up with one and it might work as intended.
import pandas as pd

data = {
    "id": [0, 1, 2, 3, 7],
    "time": [pd.to_datetime("2019-01-01 00:01:11"),
             pd.to_datetime("2019-01-01 00:01:17"),
             pd.to_datetime("2019-01-01 00:02:49"),
             pd.to_datetime("2019-01-01 00:03:13"),
             pd.to_datetime("2019-01-01 00:08:46")
             ],
    "amount": [89, 89, 238, 238, 117]
}
df = pd.DataFrame(data)  # build the frame before grouping

df_dup = (
    df
    .groupby("amount")
    .agg({"time": ["max", "min"], "id": list})
    .assign(time_diff=lambda df: df["time"]["max"] - df["time"]["min"],
            time_diff_in_seconds=lambda df: df["time_diff"].apply(lambda x: x.seconds),
            n_ids=lambda df: df["id"]["list"].apply(lambda x: len(x))
            )
    .loc[lambda df: (df["time_diff_in_seconds"] <= 60) & (df["n_ids"] > 1)]
)

ids_to_filter_out = df_dup["id"]["list"].explode().values.tolist()
Basically, I grouped data by amount, calculated min and max times and created a column called time_diff_in_seconds to see if these duplicates are in a range of less than one minute. I also calculated the number of ids to get only duplicated values. The code could be improved for readability, however I would like to check with you if there are duplicated samples with more than one minute of difference between min and max and if they should be excluded as well, so a case such as:
ID  TMSP                 amount
0   2019-01-01 00:01:11  89
1   2019-01-01 00:01:17  89
2   2019-01-01 00:03:17  89
3   2019-01-01 00:02:49  238
4   2019-01-01 00:03:13  238
7   2019-01-01 00:08:46  117
Are there cases such as that? In this scenario, should we remove only ids 1 and 2?
H: Transform dataset to regression problem by sorting? I have a raw unlabeled dataset, and I want to design a model to perform a regression. In my dataset, it does not make sense to give each observation a value, but it does make sense to sort them. Can I implement an algorithm to create values for each observation by sorting them? I thought about this: Select N random observations and sort them Give each observation a new score, equal to its position Calculate the score of an observation as the average position across all times the observation was picked return to step 1 Does it make sense? Is there any machine learning branch that studies this kind of scenario? AI: There is a field for this called ordinal regression. Does each unique observation have its own rank or can observations share a rank? E.g. if you have 10 elements, are they labeled 1,2,3 ... 10? or could it be 1,1,1, 2,2,2,3,3,3,4? What are these values supposed to represent? Why are you doing this analysis?
H: Combine datasets of different domains to ehance generalizibility so I try to implement an Emotion Classifier, which should detect several emotions from a text. There are several datasets for this (ISear, GoEmotions, etc.). However, a lot of them come from different domains, e.g. from Chats, Blogs, Newsarticles, etc. My Emotion Classifier should not be limited to a domain, so I basically combined each dataset (where I only considered the emotion: anger, disgust, neutral, happy, fear) and trained my model with it. My goal is to get an Emotion Classifier which generalizes well, also maybe on unknown use cases. So everyone can use it. It is worth highlighting, that I got an accuracy from 63-67% for each dataset I used here. Now I wanted to know is this a reasonable approach? Which challenges and disadvantages are possible? Is there a paper, which is specifically discussing this kind of topic? Or do you have another idea how I could possibly solve this differently AI: This sounds reasonable indeed, but I would suggest to verify this experimentally: Since you have access to multiple heterogeneous datasets, I think a good way to evaluate the ability of the model to generalize would be to train on all the datasets but one, and then evaluate on the remaining dataset. Then preferably repeat with every dataset as test set in order to account for chance (similarly to cross-validation). To test whether the hypothesis works, you should also train baseline models using only one dataset and compare their performance on the same test sets.
H: Logistic Regression using Logisticregression() class In the documentation of Logisticregression() offered by sklearn library, it states the following note: The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon, to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. I have two questions regarding this note : What is the meaning of The underlying C implementation uses a random number generator to select features when fitting the model What is tol parameter? AI: The Note you reference was added back when the only solver available in LogisticRegression was LIBLINEAR, and that solver uses coordinate descent: coordinates are examined and adjusted individually, iteratively. The order in which that happens apparently is based on a random number generator. See also https://stats.stackexchange.com/q/327225/232706 https://stackoverflow.com/q/38640109/10495893 Probably that note doesn't apply to all the newer solvers, and ought to be clarified. As for tol, it is the tolerance criterion for convergence: when the updates to be made are smaller than tol, we say that's good enough and stop iterating. What exactly "the updates" being measured are used may also depend on the solver. See e.g. https://stats.stackexchange.com/a/255380/232706 https://github.com/scikit-learn/scikit-learn/issues/22243 https://github.com/scikit-learn/scikit-learn/issues/11536
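As a small illustration (standard scikit-learn arguments, nothing specific to your data), you can pin down the liblinear randomness and experiment with tol directly:

from sklearn.linear_model import LogisticRegression

# liblinear is the solver the note refers to; random_state fixes its RNG,
# and a smaller tol makes the stopping criterion stricter
clf = LogisticRegression(solver="liblinear", tol=1e-6, random_state=42)
clf.fit(X_train, y_train)    # X_train / y_train assumed to be your data

With deterministic solvers such as lbfgs the results do not change between runs anyway, so random_state mainly matters for liblinear and saga.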
H: Why is it an advantage "that Markov chains are never needed" to obtain gradients? In the original GAN (Generative Adversarial Network) paper, Generative adversarial networks by I. Goodfellow, J. Pouget-Abadie, M. Mirza et al., they state that an advantage of the GAN is "that Markov chains are never needed, only backprop is used to obtain gradients, no inference is needed during learning" (Section 6 of the paper). I don't understand why this is an advantage. If we look at this statement from the other way around, why would using Markov chains be a disadvantage? AI: Implementation Considerations Markov chains are sequential, because they describe one state $t_0$ based on the previous state $t_{-1}$. When you have long Markov chains, you basically have a long sequence of calculations, each relying on the previous state to be calculated. Due to this sequential nature, parallelizing the computation, as you can do with the gradients and inference in a neural net, is not possible. Architecture Considerations In the paper they state several issues with Markov chains: "[...] methods based on Markov chains require that the distribution be somewhat blurry in order for the chains to be able to mix between modes." In addition, in Section 6 they state that Markov chains require "blurry" data distributions, as opposed to GANs, which can represent "sharp" distributions as well. Section 2, first paragraph: they talk about Markov chains as a means for approximating the partition function of Deep/Restricted Boltzmann Machines, which would otherwise be intractable. However, they state that mixing is a problem here. I am not sure what they mean by mixing here. As I understand the caption of Figure 2, they state that Markov chain mixing leads to correlated samples. This might be due to the seed, i.e., the initial state, that you must provide in order to sample from a Markov chain.
H: How many matrix combinations is this? Input Matrix: A B C D E A 1 - - - - B - 1 - - - C - - 1 - - D - - - 1 - E - - - - 1 "-": arbritary values "1": locked "1" value (diagonally locked, ref below) Output Matrix (one possible example after some arbitrary optimization based on the arbitrary values in the matrix): C E A D B C 1 - - - - E - 1 - - - A - - 1 - - D - - - 1 - B - - - - 1 Sidenote: in case it matters, in the output matrix, the - values have remained the same according to their locations, i.e. cell B,C in the output matrix has the same value as cell B,C in the input matrix. Sorting rules: If an A-E row is moved, the A-E column needs to move also, so that the "1" remains diagonal top left to bottom right (exactly like in the output matrix example). Questions: How many possible combinations are there? Is it as simple as 5 (A-E) * 5 (A-E) = 25? What would be a good algorithm approach for an exhaustive run where all possible combinations are considered. FOR loop on rows, nested with a FOR loop on columns maybe? Thank you AI: Since column order must be same as row order, there are really only 5 items (A, B, C, D, E). For number of combinations of 5 items, it is $5! = 120$ combinations. An algorithm could be: items = [A, B, C, D, E] num_items = items.length function choose(items): if items.length is 0: yield items for i in 0 up to items.length: # choose i-th item, and append recursively with remaining items # Generates: # A, B, C, D, E # A, B, C, E, D # A, B, D, C, E # ... # E, D, C, B, A yield items[i] + choose(items - {items[i]}) for combo in choose(items): do_something(combo)
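A runnable Python version of that exhaustive enumeration (itertools.permutations produces the same 5! = 120 orderings the pseudocode describes; score is a hypothetical objective evaluated on the reordered matrix):

from itertools import permutations
import numpy as np

items = ["A", "B", "C", "D", "E"]
index_of = {item: i for i, item in enumerate(items)}

best_order, best_value = None, None
for order in permutations(items):              # 120 orderings for 5 items
    idx = [index_of[o] for o in order]
    reordered = matrix[np.ix_(idx, idx)]       # same permutation on rows and columns keeps the 1s on the diagonal
    value = score(reordered)                   # hypothetical objective based on the arbitrary values
    if best_value is None or value > best_value:
        best_order, best_value = order, value
print(best_order, best_value)

Here matrix is assumed to be the 5x5 NumPy array of input values.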
H: Time series data for Prophet model I have time series data that has some missing dates. For example, in the dataset below '2017-08-06' is missing. To enter this data in the Prophet model, do I have to create the data frame with all the missing dates? I know the Prophet model does not require the missing data for any specific time to be filled, as it is a curve-fitting-based model. But I am unsure whether all dates should be included for daily time series prediction. Thanks so much for answering my question. AI: If you are going to work exclusively in Prophet, it is robust to a reasonable number of missing data points in the series. If you intend to do traditional time series work as well, then yes, you will need to have a slot for each regular interval and, ideally, values. Traditional time series models use subsets of values at lags to estimate future time values; when there are missing values, there is nothing to predict from. So finding a method appropriate to your specific data, its uses, and the level of and reason for missingness is imperative!
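A small sketch of both options (the import name may be prophet or fbprophet depending on the installed version; df is assumed to have the usual ds / y columns):

# Option 1: give Prophet the data as-is; missing dates are simply absent rows
from prophet import Prophet
m = Prophet()
m.fit(df)                     # gaps such as '2017-08-06' are fine for fitting

# Option 2: for traditional models, build a complete daily index explicitly
import pandas as pd
full_idx = pd.date_range(df["ds"].min(), df["ds"].max(), freq="D")
df_full = df.set_index("ds").reindex(full_idx)    # missing days become NaN rows

Also note that if you ask Prophet to forecast future dates, the future dataframe will by default contain every calendar day, so you may want to drop the dates you never observe before evaluating.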
H: Building a Time Series Model I am working on building a time series model, but the dataset I have only has date features for the year; the month and date are not available. What would be a suitable model to use and is it even possible? The dataset is an Excel file from different years I have merged into a single sheet with records for each year arranged in alphabetical order. AI: The first question about applying time-series models is whether you can detect some patterns by yourself or not. For instance, if you find that the values rise every last week of the month, then you might expect time-series models to be useful. But if your data is too coarse to extract anything interesting because of external and unpredictable impacts (ex: economic crisis, accident in several stations, etc.) the added value would be too limited. In addition, you should gather enough data to teach the model patterns. Consequently, if you only have weekly data, you should have at least 3 years (~150 records) to teach the model some patterns due to seasonality and the impact of special events (ex: stock market crash). You could add some external data like weather or other products associated with your values. This would improve the model prediction quality.
H: LSTM Feature engineering: using different Knowledge Graph data types For a research project, I'm planning to use an LSTM to learn from sequences of KG entities. However, I have little experience using LSTMs or RNNs in general. During planning, a few questions concerning feature engineering have come up. Let me give you some context: My initial data will be a collection of $n$ texts. From these texts, I will extract $n$ sequences of entities of variable length using a DBPedia or Wikidata tagger. Consequently, I'll have $n$ sequences of KG entities that somehow correspond to their textual counterparts. Most LSTM implementations I've seen take only one type of feature as input. However, as we're dealing with knowledge graphs, we have access to more types of information. I'm wondering what would be a good strategy to use more than just one type of feature. Objective Given a sequence of seen entities, I want the model to predict the continuation of that sequence. A set of truncated sequences from the corpus will be kept apart. The beginnings will serve as prompts and the endings will be truth values for evaluation. I'm also interested in the model's prediction probabilities when predicting following entities for one single entity given as a prompt. Assumptions I assume that diverse types of features will help the model make good predictions. Specifically, I want the model to learn not only from entity sequences but also from KG 'metadata' like associated RDF classes or pre-computed embedding vectors. Features Feature 1: Numerical vocabulary features The simplest case I can think of is to create an orderet set from all extracted entities. For example, if the extracted entities from all my documents were [U2, rock, post-punk, yen, Bono, revolutionary, guitar] (in reality that'll probably be a few thousands more), I'd create this ordered set representing my vocabulary: {1: http://dbpedia.org/resource/U2, 2: http://dbpedia.org/resource/Rock_music, 3: http://dbpedia.org/resource/Post-punk, 4: http://dbpedia.org/resource/Japanese_yen, 5: http://dbpedia.org/resource/Bono, 6: http://dbpedia.org/resource/Revolutionary, 7: http://dbpedia.org/resource/Acoustic_guitar} The training data for the LSTM would then be sequences of integers such as training_data = [ # Datapoint 1 [[1, 2, 3, 4, 5, 6, 7]], #document 1 # Datapoint 2 [[5, 3, 3, 1, 6]], #document 2 # Datapoint 3 [[2, 4, 5, 7, 1, 6, 2, 1, 7]], #document 3 ...] Feature 2: Numerical class features I want to include additional information about RDF classes. Similar to the approach in Feature 1, I could create an ordered set containing all possible classes. However, the difference is that each entity belongs to one or more classes If all classes extracted were {1: owl:Thing, 2: dbo:MusicGenre, 3: dbo:Agent, 4: dbo:Person, 5: dbo:PersonFunction} I would create a new data structure for each data point, this time containing class information. The notation represents {entity: [classes]}. My training data could then look something like this: training_data = [ # Datapoint 1 [ [1, 2, 3, 4, 5, 6, 7], # feature 1 {1: [1,2,4], 2: [2,3,4,5], ..., 7: [3,5]} # feature 2 ], # Datapoint 2 [ [5, 3, 3, 1, 6], # feature 1 {1: [2,3,4], 2: [1,2,4,5], ..., 5: [3,5]} # feature 2 ], # Datapoint 3 [ [2, 4, 5, 7, 1, 6, 2, 1, 7], # feature 1 {1: [1,2,4], 2: [1,2,3,5], ..., 9: [2,3]} # feature 2 ], ...] Feature 3: RDF2Vec embeddings Each KG entity from a collection of entities can be mapped into a low-dimensional space using tools like RDF2Vec. 
I'm not sure whether to use this feature or not as its latent semantic content might interfere with my research question, but it is an option. Embedding features, in this case, are vectors of length 200: embedding_vector = tensor([5.9035e-01, 2.6974e-01, 8.6569e-01, 8.9759e-01, 9.3032e-01, 5.2442e-01, 9.6031e-01, 1.8393e-01, 6.3000e-01, 9.5930e-01, 2.5407e-01, 5.6510e-01, 8.1476e-01, 2.0864e-01, 2.7643e-01, 4.8667e-02, 9.3791e-01, 8.0929e-02, 5.0237e-01, 1.4946e-01, 5.9263e-01, 4.7912e-01, 6.8907e-01, 4.8248e-03, 4.9926e-01, 1.5715e-01, 7.0777e-01, 6.0065e-01, 2.6858e-01, 7.2022e-01, 4.4128e-01, 4.5026e-01, 1.9987e-01, 2.8191e-01, 1.2493e-01, 6.0253e-01, 6.9298e-01, 2.5828e-01, 2.8332e-01, 9.6898e-01, 4.5132e-01, 4.6473e-01, 8.0197e-01, 8.4105e-01, 8.8928e-01, 5.5742e-01, 9.5781e-01, 3.8824e-01, 4.6749e-01, 4.3156e-01, 2.8375e-03, 1.5275e-01, 6.7080e-01, 9.9894e-01, 7.2093e-01, 2.7220e-01, 8.5404e-01, 6.9299e-01, 3.9316e-01, 8.9538e-01, 8.1654e-01, 4.1633e-01, 9.6143e-01, 7.1853e-01, 9.5498e-01, 4.5507e-01, 3.6488e-01, 6.3075e-01, 8.0778e-01, 6.3019e-01, 4.4128e-01, 7.6502e-01, 3.2592e-01, 9.5351e-01, 1.1195e-02, 5.6960e-01, 9.2122e-01, 3.3145e-01, 4.7351e-01, 4.5432e-01, 3.7222e-01, 4.3379e-01, 8.1074e-01, 7.6855e-01, 4.0966e-01, 2.6685e-01, 2.4074e-01, 4.1252e-01, 1.9881e-01, 2.2821e-01, 5.9354e-01, 9.8252e-01, 2.7417e-01, 4.2776e-01, 5.3463e-01, 2.9148e-01, 5.8007e-01, 8.2275e-01, 4.8227e-01, 8.5314e-01, 3.6518e-01, 7.8376e-02, 3.6919e-01, 3.4867e-01, 8.9571e-01, 2.0085e-02, 7.9924e-01, 3.5849e-01, 8.7784e-01, 4.6861e-01, 6.2004e-01, 6.8465e-01, 4.1273e-01, 4.2819e-01, 9.4532e-01, 2.2362e-01, 8.3943e-01, 1.1692e-01, 6.9463e-01, 7.6764e-01, 2.8046e-02, 6.9382e-01, 9.2750e-01, 3.6031e-01, 6.8065e-01, 1.6976e-01, 8.2079e-01, 6.4580e-01, 8.3944e-01, 3.9363e-01, 4.4026e-01, 4.4569e-01, 8.2344e-01, 5.4172e-01, 1.6886e-04, 3.8689e-01, 5.8966e-01, 1.9510e-02, 2.5976e-01, 4.0868e-01, 3.1406e-01, 3.6334e-01, 6.1768e-01, 5.4854e-01, 4.1273e-01, 7.2670e-04, 2.4486e-01, 4.1042e-01, 9.0760e-01, 1.6224e-01, 7.4019e-02, 8.1329e-01, 7.2573e-01, 8.2816e-01, 7.3032e-01, 6.6017e-01, 6.4281e-01, 4.1839e-01, 9.2251e-01, 1.5183e-02, 4.4538e-01, 9.7205e-01, 9.5677e-01, 9.5649e-01, 1.2610e-01, 9.2521e-01, 3.2649e-01, 2.1019e-02, 2.5695e-01, 4.2663e-01, 9.2064e-01, 4.5242e-01, 7.0447e-01, 8.1233e-01, 2.7507e-01, 2.4744e-01, 1.3670e-01, 6.4032e-01, 5.8332e-01, 5.5130e-01, 2.4997e-02, 7.7206e-01, 1.5085e-01, 2.8028e-01, 8.2839e-01, 5.8292e-01, 9.9087e-01, 6.0233e-01, 4.1489e-01, 6.4902e-01, 7.5428e-01, 8.0953e-01, 3.7530e-01, 4.8196e-01, 1.8786e-01, 9.8463e-01, 6.3303e-01, 4.8519e-01, 7.6163e-01, 3.3821e-01] If I included this in my training data, it would look something like this: training_data = [ # Datapoint 1 [ [1, 2, 3, 4, 5, 6, 7], # feature 1 {1: [1,2,4], 2: [2,3,4,5], ..., 7: [3,5]}, # feature 2 [7 embedding vectors], # feature 3 ], # Datapoint 2 [ [5, 3, 3, 1, 6], # feature 1 {1: [2,3,4], 2: [1,2,4,5], ..., 5: [3,5]}, # feature 2 [5 embedding vectors], # feature 3 ], # Datapoint 3 [ [2, 4, 5, 7, 1, 6, 2, 1, 7], # feature 1 {1: [1,2,4], 2: [1,2,3,5], ..., 9: [2,3]}, # feature 2 [9 embedding vectors], # feature 3 ], ...] Questions My training data will consist of lists of variable length and matrices/tensors. How do I best feed this data to the model? In any case, I'm interested in predicting only entities. Training only on feature 1 could be a baseline that I compare to combinations of features, e.g. 
Features 1+2 or 1+3 or 1+2+3 Based on what I've read until now, I think I'm going to use padding and masking. However, I'm not sure what my features should finally look like. I appreciate any kind of feedback. Thanks for sharing your thoughts! AI: As general points: Multivariate RNN: You can use multiple sequential features as an input to your recurrent layers. Taking pytorch as a reference, you can see that the input of LSTM object is a tensor of shape $(L, H_{in})$ or $(L, N, H_{in})$ for batched, where $L$ is the length of your sequences whereas $H_{in}$ is the number of input features. In this approach, you can leave mapping tokens to a vocabulary as part of the standard procedure of a standard embedding being learnt. You may be able to use a multi-label approach (as opposed to multi-class), if I understand your question correctly. Multimodal learning: If features related to embeddings can be considered static/not evolving over time, you may want to add a second auxiliary port to your network, to specifically model this data type. This second part would consist of a feed-forward network with fully connected layers. The fixed-length vector representations / embeddings at the outputs of your RNN and FFN modules could get concatenated before passed to your classification layer. In this way you allow the model to reason from a joint representation of both data modalities. Hope it helps.
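A rough PyTorch sketch of those two points, assuming padded sequences of entity ids of shape (batch, seq_len) with 0 reserved for padding, multi-hot class vectors of shape (batch, seq_len, n_classes), and one static vector per sequence of shape (batch, 200); all sizes are made up for illustration:

import torch
import torch.nn as nn

class EntityPredictor(nn.Module):
    def __init__(self, vocab_size, n_classes, static_dim=200, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # sequential branch: entity embedding concatenated with the class multi-hot at each step
        self.lstm = nn.LSTM(emb_dim + n_classes, hidden, batch_first=True)
        # static branch: e.g. an RDF2Vec vector per sequence
        self.static_ff = nn.Sequential(nn.Linear(static_dim, hidden), nn.ReLU())
        self.out = nn.Linear(2 * hidden, vocab_size)     # logits over the entity vocabulary

    def forward(self, entity_ids, class_multihot, static_vec):
        x = torch.cat([self.emb(entity_ids), class_multihot], dim=-1)
        _, (h, _) = self.lstm(x)                         # h[-1]: (batch, hidden)
        joint = torch.cat([h[-1], self.static_ff(static_vec)], dim=-1)
        return self.out(joint)                           # next-entity prediction

Variable lengths are handled by the padding (plus nn.utils.rnn.pack_padded_sequence before the LSTM if you want to be precise); dropping the static branch gives you the features 1+2 variant, and dropping the class multi-hot as well gives the feature-1 baseline.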
H: Time series data transformation for Prophet model I have time series data that looks as follows The data has the following frequency distribution Here is the Q-Q plot It looks like the data is exponentially distributed. My assumption is that the noise or error component of the data will also be similarly distributed. So, to make the data normally distributed, I applied a log transformation, which resulted in the following frequency distribution and Q-Q plots. It looks like even after the log transformation the transformed data does not look normally distributed. Is there any other transformation that should be done? More importantly, does Prophet really require a transformation? I would really appreciate your answers/comments. AI: I am not convinced from that first time series plot that you have exponential data. What you do have is a lot of observations in a small graph space, with possibly some unaccounted-for periodicity. Before deciding to transform the data, it is meaningful and useful to decompose your series. Decomposition breaks out a trend (upward or downward trajectories), seasonality (repeated periodic patterns in the data) and a remainder (which you can think of as noise, i.e. temporally localized variability). When you look at this triptych of graphs you will be able to better assess whether there is some kind of pattern worth transforming. There is a fantastic (open-source) book by Rob Hyndman and George Athanasopoulos called Forecasting: Principles and Practice. It gives really clear suggestions on how to start working with time series data and provides meaningful ways of interpreting plots and tests. I would start with understanding your series more completely before rushing to transform it.
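For reference, the decomposition described above takes a couple of lines with statsmodels (series is assumed to be a pandas Series indexed by date; the period argument is a guess you should set from the sampling frequency of your data):

from statsmodels.tsa.seasonal import seasonal_decompose

result = seasonal_decompose(series, model="additive", period=7)   # e.g. a weekly pattern in daily data
result.plot()    # trend, seasonal and residual panels

If the seasonal swings grow with the level of the series, rerunning with model="multiplicative", or only then considering a log transform, is the usual next step.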
H: Accuracy is getting worse after text pre processing I'm working a multi-class text classification project. After splitting the dataset into train and test datasets, I've applied the below function on the train dataset (AKA pre processing): STOPWORDS = set(stopwords.words('english')) def clean_text(text): # lowercase text text = text.lower() # delete bad symbols text = re.sub(r"(@\[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)|^rt|http.+?", "", text) # delete stopwords from text text = ' '.join(word for word in text.split() if word not in STOPWORDS) # Stemming the words text = ' '.join([stemmer.stem(word) for word in text.split()]) return text To my surprise, I've got much worst results (i.e. va_accuracy) applying on the train dataset rather than just "do nothing" (59% vs 69%) I've literally commented out the apply line in the below section: all_data = dataset.sample(frac=1).reset_index(drop=True) train_df, valid = train_test_split(all_data, test_size=0.2) train_df['text'] = train_df['text'].apply(clean_text) What am I missing? How can it be that pre processing steps decreased accuracy? A bit more info I forgot to mention I'm using the below to tokenize the text: X_train = train.iloc[:, :-1] y_train = train.iloc[:, -1:] X_test = valid.iloc[:, :-1] y_test = valid.iloc[:, -1:] weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(y_train), y=y_train.values.reshape(-1)) le = LabelEncoder() le.fit(weights) class_weights_dict = dict(zip(le.transform(list(le.classes_)), weights)) tokenizer = Tokenizer(num_words=vocab_size, oov_token='<OOV>') tokenizer.fit_on_texts(X_train['text']) train_seq = tokenizer.texts_to_sequences(X_train['text']) train_padded = pad_sequences(train_seq, maxlen=max_length, padding=padding_type, truncating=trunc_type) validation_seq = tokenizer.texts_to_sequences(X_test['text']) validation_padded = pad_sequences(validation_seq, maxlen=max_length, padding=padding_type, truncating=trunc_type) Later on I'm fitting all into the model as follows: model = Sequential() model.add(Embedding(vocab_size, embedding_dim, input_length=train_padded.shape[1])) model.add(Conv1D(48, len(GROUPS), activation='relu', padding='valid')) model.add(GlobalMaxPooling1D()) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dropout(0.5)) model.add(Dense(len(GROUPS), activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) epochs = 100 batch_size = 32 history = model.fit(train_padded, training_labels, shuffle=True , epochs=epochs, batch_size=batch_size, class_weight=class_weights_dict, validation_data=(validation_padded, validation_labels), callbacks=[ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0001), EarlyStopping(monitor='val_loss', mode='min', patience=2, verbose=1), EarlyStopping(monitor='val_accuracy', mode='max', patience=5, verbose=1)]) AI: You have to apply the same preprocessing to the test data. Based on your code you apply the clean_text function only to train data but then predict on test/validation data that was not cleaned. That means that your model learns on clean data but you want it to predict on raw data which contains things the model never seen (because it was removed from the train dataset) which will result in worse performance. 
Edit after discussion in comments: You can either preprocess all the data at the same time before splitting: all_data = dataset.sample(frac=1).reset_index(drop=True) all_data['text'] = all_data['text'].apply(clean_text) train_df, valid = train_test_split(all_data, test_size=0.2) or just apply the same preprocessing to the valid dataset after you split the data: all_data = dataset.sample(frac=1).reset_index(drop=True) train_df, valid = train_test_split(all_data, test_size=0.2) train_df['text'] = train_df['text'].apply(clean_text) valid['text'] = valid['text'].apply(clean_text) It is important to apply the same preprocessing to all data that goes into the model, so all input is in the same format. This means that when you deploy your model into an application you will also have to apply the same preprocessing to any text input coming into the model before making predictions.
H: How does the universal approximation theorem connect to classification problems? Feedforward Neural Networks are used a lot for classification problems and they seem to do a good job. How can this be justified? As the universal approximation theorem for neural networks assumes there is continuity, which clearly is not the case with classifications? AI: There is continuity. Remember that a neural network doing “classification” actually outputs probabilities of class membership. Therefore, the neural network is a continuous function between the feature space and the interval $[0,1]$, and the universal approximation theorem applies.
H: How to assess predictive power of each predictor in case of linear regression For a predictive model with a binary target, we can assess the predictive power of each predictor by calculating its information value. What is the equivalent of IV when the target variable is continuous? Should I look at the $R^2$ value of an individual predictor when used as the sole predictor in a regression model? Edited: I have 4300 attributes to choose from, so adding all attributes would be impossible. I want to sort the attributes based on their individual predictive power and add a short list of 50 attributes to my model. AI: TLDR: If you do need to do feature selection, then yes, you would use $R^2$. Look up forward and backward feature selection. Another good option is to use L1 regularization. I would probably try L1 first if you're able to run the model with all the features. I am guessing you won't want to do L1 regularization or backward selection because you are trying to avoid running the regression with all variables? If that is the case, I would recommend using forward selection and just stopping at whatever number of variables you want your model to end up having. It is better to do forward selection than simply finding $R^2$ for every X with the outcome. It sounds like your question is about how to interpret your model; however, when I read the comments, it seems like you are asking about how to reduce the number of variables for the model. Assuming you are asking about using fewer variables: the next question I would ask is what the goal for reducing the number of attributes is. If it is for regularization (to prevent overfitting), then the person that commented about variable selection being unstable is correct. You can do cross-validation with L1 or L2 regularization, or both (elastic net). Furthermore, if you do L1 regularization, you can also use this as another way to do feature selection, because it will shrink some coefficients to exactly zero, effectively removing those variables. If you are trying to reduce the number of variables because you want to reduce computational complexity, to cut training time or use less memory, then you can do feature selection or dimensionality reduction. If you do feature selection you are picking the features that have the best predictive power. There are different ways to do feature selection (e.g. forward selection, backward selection). You can also do dimensionality reduction (e.g. PCA), which will create "new features", a.k.a. principal components, out of the original features; these are linear combinations of the original features. These principal components do not take the outcome variable into account (unlike feature selection and regularization). The top principal components are the combinations of the original features that provide the most information.
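With 4300 candidate columns, a practical first pass in scikit-learn could combine a univariate ranking with an L1 penalty; a rough sketch (X, y are placeholders for your feature matrix and continuous target):

from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LassoCV

# Univariate ranking: keep the 50 columns with the strongest individual linear relation
selector = SelectKBest(score_func=f_regression, k=50).fit(X, y)
top_columns = selector.get_support(indices=True)

# L1 alternative: let the penalty zero out uninformative coefficients, with CV over alpha
lasso = LassoCV(cv=5).fit(X, y)
n_kept = (lasso.coef_ != 0).sum()

The univariate scores are essentially the per-feature $R^2$ idea from the question, while the lasso accounts for redundancy between correlated features, which is why the two shortlists often disagree.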
H: N-gram language model for preposition predition I am trying to build N gram models to predict the missing prepositions of a text corpus. I would want to have some guidance on if I'm understanding and doing things correctly. So the N gram model is basically just a collection of posterior probabilities? Pr(this word | previous words)? Then how is this machine learning I wonder? Since we would get a deterministic set of probabilities based on the frequencies of the word combinations from the training set. There doesn't seem to be any parameters to learn except in interpolation (like the weights of each gram in their weighted sum). As for the actual prediction of preposition, after getting a set of the posterior probabilities of all the words in the vocabulary, do I simply only compare the posterior probabilities of the few known prepositions and find the argmax as the prediction? Appreciate any help, thanks! AI: Your understanding is mostly correct, but don't forget that you can not only take into account the previous tokens, you can also consider: p(this word | next words) p(this word | N previous word,N next words) etc. Often a combination of these probabilities offers optimal results. how is this machine learning I wonder? Well, ML is very often a deterministic calculation, it doesn't have to be a complex approximation problem: decision trees, Naive Bayes, linear regression... do I simply only compare the posterior probabilities of the few known prepositions and find the argmax as the prediction? Yes, that would be the idea. Of course if you use multiple models as suggested above there must be some kind of combination additionally.
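As a minimal illustration of the counting-and-argmax idea (pure Python, with a hypothetical tokenized corpus and a fixed preposition list):

from collections import Counter, defaultdict

prepositions = {"in", "on", "at", "of", "for", "with", "to"}
bigram_counts = defaultdict(Counter)

for sentence in corpus:                      # corpus: list of token lists, assumed available
    for prev, word in zip(sentence, sentence[1:]):
        bigram_counts[prev][word] += 1

def predict_preposition(prev_word):
    counts = bigram_counts[prev_word]
    candidates = {p: counts[p] for p in prepositions}
    return max(candidates, key=candidates.get)   # argmax over prepositions only

Add-one smoothing and also conditioning on the following word (as suggested above) usually improve this noticeably.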
H: How to calculate the significance of each feature? I built a predictive model using an elastic net regression model with sklearn. The model R2 = 0.015. I know the SHAP method can provide the importance of the features. However, how can I calculate the significance of each feature? (I.e., find which features are significant or which features successfully predict the response. This way, I can tell my story in the paper and discuss these features in detail.) As far as I know, the R package "eNetXplorer" can do this by permutation test, but I have identified a useful elastic net model via Scikit-learn. Is there a similar package in the Python environment? Any help is greatly appreciated! AI: Assuming you are talking about getting a feature impact score ranking (i.e. sorting features by their relevance to model predictions), I would go for a permutation importance methodology. It is a model-agnostic approach which you can use given an already fitted model and an evaluation dataset. The concept is to relate the largest drops in model performance, when a feature's values are shuffled, to the most important features. The process it follows could be defined by the following steps: Make model predictions on a sample of records Select a column, shuffle its values and predict again with the model Get the drop (if any) of model performance on this new shuffled dataset vs. the initial one Average this difference across all predictions Repeat the steps above for all features Scikit-learn provides something like this with its permutation importance functionality, I hope it helps. Other source of info here
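In scikit-learn this is available directly; a short sketch with your fitted elastic net (model, X_val, y_val are placeholders for your objects):

from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_val, y_val, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} +/- {result.importances_std[i]:.4f}")

This gives a ranking rather than p-values; for actual significance in the eNetXplorer sense, one option is to build a null distribution by refitting on permuted targets and compare each feature's observed importance against it, which you would have to code yourself.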
H: Finding points in 3D space - SVM In the SVM classification, we use planes to classify the labels points if the dataset has 3 input features. We need to use planes when input features are 3. I am describing a toy dataset with 3 input features as follows Study_time rest_time pass_time label 40 10 5 Good 38 12 3 Good 20 8 10 bad 15 12 2 bad In this dataset you can see, three input features are study_time, rest_time, pass_time. We need to define a plane to find out the label. I went through various course materials of support vector machines and every material said that we need to define points in 3D space if the number of features is 3. I know points mean a dot which has an x co-ordinate and a y-co-ordinate. In toy dataset, every instance has 3 values, 1 for study times, 1 for rest_time, 1 for pass_time. 1st instance can be defined as (40, 10, 5). If we consider the points with respect to the toy dataset, which are those points? what are there co-ordinates? Thank you. AI: By points they mean x,y and z components. Not just an x and a y. In the same way that you need two points to uniquely define a line, you need 3 "points" to define a plane. The points in the dataset are simply the numerical values for each row: (40,10,5),..., (15,12,2).
H: Official page of Weka for SVM java code I am using Weka to train a model from few days. I know Weka use Java code to implement a classifier. I also heard that Weka has some github pages to describe the java code for the classifiers. I like to know the SVM java code which is used in WEKA. I found few webpages describing Java code for SVM classifiers for WEKA. I can not understand which one is there official page. Providing me the link to SVM GitHub page would be very helpful. Thank you. AI: The official Weka source code is stored in a local Gitlab git repository (not on Github). Note that there are two versions of SVM commonly used with Weka: The SMO classifier, source code here. The LibSVM wrapper: an external library that can be used in Weka (i.e. the source code is not part of Weka).
H: Mathematically rigorous NLP I'm looking for resources (books/articles/whatever) that provide mathematical formalization of NLP and statistical language theory. By that I mean clear exposition of the subject in terms of probability spaces (measure spaces) and so on. For example, many NLP books (like the Manning's one) use n-gram models which, as I see, may be modelled as Markov processes with word-states, but neither book states explicitly how the probability space for the process is constructed (I guess, there's something related to probabilities on formal languages?). I need such clear expositions. Thanks in advance. AI: My personal recommendation would be Introduction to Natural Language Processing by Jacob Eisenstein. In this book you should find sufficient mathematical formalization/rigor. This books is also, in my opinion, a touchstone of many introductory NLP books.
H: "Chi-squared approximation may be incorrect" on hypothesis test I'm doing a hypothesis test to determine if the data follows binomial distribution with p=0.31, and I'm getting a warning when running this: chisq.test(x = c(36,48,38,23,10,3),p = dbinom(0:5,5,0.31)) Chi-squared test for given probabilities data: c(36, 48, 38, 23, 10, 3) X-squared = 28.265, df = 5, p-value = 3.23e-05 And the warning message I'm getting: Warning message: In chisq.test(x = c(36, 48, 38, 23, 10, 3), p = dbinom(0:5, 5, 0.31)) : Chi-squared approximation may be incorrect AI: I believe what you were trying to do was to check if given the size of 100 observation from a binomial distribution with the probability of 0.31, you would get the same result. In this case R's chisq.test() is not a good choice, first because you have stated the probability wrong. You're giving the probabilities of each observation one-by-one, element-wise and that's not what you have meant. Here's an example: x <- c(36,48,38,23,10,3) binom(0:5, 5, 0.31) [1] 0.156403135 0.351340375 0.315697149 0.141834951 0.031861475 0.002862915 36 is not equal to 0.15 anyway, the same goes with the the other elements. You could have divided x by 100 but still you're not applying the right statistical test here. One option is to use Kolmogorov–Smirnov test which is used to test whether or not a sample comes from a certain distribution. To perform a one-sample or two-sample Kolmogorov-Smirnov test in R we can use the ks.test() function. Here we assume that: $H_0:$ the two dataset values are from the same distribution - Binomial in this case ks.test(c(36,48,38,23,10,3), pbinom(5, 100, 0.31)) Two-sample Kolmogorov-Smirnov test data: c(36, 48, 38, 23, 10, 3) and pbinom(5, 100, 0.31) D = 1, p-value = 0.2857 alternative hypothesis: two-sided Voila! Turns our your observations may come from a Binomial distribution with p=0.31. Still be aware that with these few observations, the test result couldn't be much reliable after all.
H: How to use new data with Principal Component Analysis (PCA) I reduce data with PCA already from 9 to 3 feature. If I have real data new row which I want to use with pre-train model (.h5). Can I change data 9 feature to PCA 3 feature only one row for test with model ? import numpy from pandas import read_csv from sklearn.decomposition import PCA # load data url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv" names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] dataframe = read_csv(url, names=names) array = dataframe.values X = array[:,0:8] Y = array[:,8] # feature extraction pca = PCA(n_components=3) fit = pca.fit(X) # summarize components print("Explained Variance: %s" % fit.explained_variance_ratio_) print(fit.components_) AI: Simply use pca.transform(test_data). That is: test = X[22].reshape(1, -1) pca.transform(test) array([[-72.73967494, -86.24860793, -5.94958303]]) I used a random index of your own train set to illustrate the use case but you can input any array of size 8 here. Can I change data 9 feature to PCA 3 feature only one row for test with model ? You have to input 8 features (9 features except your target value).
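Since the pretrained .h5 model expects the 3 PCA components, the same fitted PCA object has to be reused at prediction time; a common pattern is to persist it next to the model (a sketch, file names are arbitrary):

import joblib

joblib.dump(pca, "pca_3_components.joblib")     # save once, after fitting on the training data

# later, for a single new row with the original 8 input features:
pca_loaded = joblib.load("pca_3_components.joblib")
new_row_3d = pca_loaded.transform(new_row.reshape(1, -1))   # shape (1, 3), ready for model.predict

Refitting a new PCA on different data would produce different components and silently break the pretrained model, so always transform with the object fitted on the training set.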
H: Does combining real data with simulation data improve the performance of machine learning? I am trying to train a machine learning model to help me classify some real data. Since the acquisition and labeling of real data can be very expensive, the training data is generated with simulation. However, the trained model doesn't perform very well on real data, my suspicion is that the simulation is not a 100% accurate representation of real data. Therefore, I am wondering will the performance be improved if I train the model with a mixture of simulation and real data (say 20% real data). I would greatly appreciate it if you could either answer the question or point me to the right reference! AI: I suggest that you add an extra input binary variable indicating whether the data is simulated or not. For the simulated data, you would set it to 1, while for real data you would set it to 0. This may help the model profit from simulated data while still being able to do well in real data. This advice is inspired by something we use in machine translation called "tagged back-translation". When we are training a translation system from language A to language B, if we have a small training dataset A → B but we have a lot of monolingual data in language B, we first train a translation system from B to A and then use it with the monolingual B data to obtain a synthetic dataset used to train our final A → B system. The final system, however, performs better if we indicate as part of the input if the data is real or synthetic.
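A tiny pandas sketch of that extra indicator, assuming sim_df and real_df hold the same feature columns:

import pandas as pd

sim_df["is_simulated"] = 1
real_df["is_simulated"] = 0
train_df = pd.concat([sim_df, real_df], ignore_index=True)   # mixed training set, flag included as a feature

At inference time on real data you would set the flag to 0, which lets the model exploit the simulated examples while still learning how reality systematically differs from the simulator.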
H: Convert English active voice sentences into passive voice sentences using Machine learning Is there any machine learning algorithm developed to convert an English active voice sentence into a passive voice sentence? And what are the datasets available related to that purpose? And also, if there is available source code related to that research idea, please mention it too. AI: There are various paraphrasing tools available for this kind of conversion. For datasets, you can search on Kaggle and IEEE Dataport. I hope you find some of these useful.
H: Abstracted text summarisation and generation from weighted keywords Suppose I have a list of weighted keywords/phrases, such as "solar panel", "rooftop", etc. The weights are in [0,1] with higher weights indicating a stronger preference for specific keywords, so "solar panel" may have a weighting of 0.3 and "rooftop" may have a weighting of 0.2, for example. The sum of keyword weights is 1. For each keyword/phrase, I additionally have a number of contextual sentences which are also weighted and carry a positive, negative, or neutral sentiment/connotation. For example, one contextual sentence related to the "solar panel" phrase might be "good for the environment" which is labelled with a positive sentiment and carries a weight of 0.2. The sum of weights for each keyword's contextual sentences is 1, so the sum of weights for all contextual sentences across all keywords is N, where N is the number of individual keywords. Finally, I also have weighted linkages in [0,1] between keywords/phrases which, again, sum to 1. For example, the directed linkage from "solar panel" to "rooftop" may have a weight of 0.2 while the directed linkage from "rooftop" to "solar panel" may have a weight of 0.4. I would like to use these weighted keywords, phrases, contextual sentiment-labelled sentences and linkages to create a summary in natural language. I realise that I'm working in reverse from the typical text summarisation objective, but I believe that the richness of my data should make the task a little easier. How should I approach it? Should I first use a model to summarise the text contained within each of the contextual sentences before attempting to extract more basic keywords that can be used to generate summary text? How should I process the data? Is it worth pursuing a two-step approach, where a basic model summarises the keywords and contextual sentences in basic language before a secondary model transforms it to richer, more natural language? I would be very grateful for any guidance or recommendations. Edit: I'm very new to NLP, so I apologise for my lack of terminology and mathematical formalism. AI: If you have data with a good score system, I would start with something simple, because using a neural network like Bert might be complex to set up. Something simple is to take the scores and build a phrase with meaning, for instance: "solar panel" + "rooftop" + "environment-friendly" = "Rooftop solar panel, with a low environmental impact (less than 8g of carbon/year)". You can achieve this using if/then rules and some basic equations if there is numerical values. For example, 0.2 for the environmental impact would be something like (1-0.2)*10 = 8g. Then you can improve results with a neural network like Bert, but you would need enough data to train it, using different inputs ("0.2,0.6,0.1") and their associated outputs (-> "Rooftop solar panel, with a low environmental impact (less than 8g of carbon/year)") and this train data should be representative enough of most common use cases. See: https://chriskhanhtran.github.io/posts/extractive-summarization-with-bert/
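A toy illustration of the rule-based first step, with made-up thresholds and phrasing just to show the idea:

def describe(keyword_weights, sentiments):
    # keyword_weights: e.g. {"solar panel": 0.3, "rooftop": 0.2}
    # sentiments: e.g. {"good for the environment": ("positive", 0.2)}
    main = max(keyword_weights, key=keyword_weights.get)
    parts = [main.capitalize()]
    for phrase, (polarity, weight) in sentiments.items():
        if polarity == "positive" and weight >= 0.2:    # arbitrary threshold
            parts.append(phrase)
    return ", ".join(parts) + "."

print(describe({"solar panel": 0.3, "rooftop": 0.2},
               {"good for the environment": ("positive", 0.2)}))
# -> "Solar panel, good for the environment."

Outputs like these can then serve as the plain-language inputs for a paraphrasing or seq2seq model once you have enough input/output pairs to train on.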
H: Encoding for Linear Regression I have a CSV file with salary information and other columns. I am trying to transform some of these columns into proper values, for a LinearRegression and a SGDRegressor, or some other. Because, I don't think that the LinearRegression in sklearn can handle the data bits as is. Data: 607 records Numerical columns: year, salary, salary in USD Categorical columns: experience, type, residence, currency, remote work, company location, and company size. Target: salary in USD Encoding: # Import neccessary encoder from sklearn.preprocessing import OneHotEncoder # Encoding of categorical data encoder = OneHotEncoder(sparse=False) # Extract columns columns = data[['Experience', 'Type', 'Residence', 'Remote work', 'Company location', 'Company size']] Questions: How to group any data within the categories (to avoid duplicates)? Is OneHotEncoder the recommended way of doing this? AI: Comments: You should not group any data even if there are duplicates, because this would distort the distribution of the values (features and target). OneHotEncoder should be used on the categorical features only. Even with those, mind that values which are too rare should usually be removed or replaced in order to avoid overfitting. Some algorithms work better with numerical features scaled. Linear regression is unlikely to work well with some complex data in my opinion. Personally I like to try decision tree regression for this kind of mixed dataset.
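A sketch of how the encoding and scaling are usually wired together in scikit-learn (column names are taken from your description and may need adjusting):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression

categorical = ['Experience', 'Type', 'Residence', 'Remote work', 'Company location', 'Company size']
numerical = ['Year']

preprocess = ColumnTransformer([
    ('num', StandardScaler(), numerical),
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical),
])
model = Pipeline([('prep', preprocess), ('reg', LinearRegression())])

handle_unknown='ignore' avoids errors when a rare country or category only shows up at prediction time, which relates to the rare-value point above.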
H: Tensorflow - do I need to learn computer vision before linear (timeseries) regression? I'm a newbie to tensorflow / keras and I am currently working my way through Deep Learning with Python (2nd edition) by Francois Chollet. I understand the basics of computer vision and the MNIST examples, but I'm not really interested in computer vision. My question is: do you think it would be safe to skip the computer vision chapters and just focus on my interest(s) in regression machine learning + time series forecasting? Thank you AI: Computer vision is not needed to learn time series forecasting. Also, since you mention machine learning and regression, deep learning itself is optional in that case: you can build regression and forecasting models without using deep learning at all. If you want to use an LSTM for time series, then you do need deep learning.
H: Select the best feature selection method for classification I am trying to make predictions (using Weka) on a tabular dataset. It is a categorical dataset which is encoded by label encoder. I got a good result for SVM and Logistic Regression, namely the accuracy is around 85%. The dataset is high-dimensional and I like to fine-tune my accuracy. So, I am thinking about the feature selection method. I found different feature selection techniques, such as CfsSubsetEval, Classifier Attribute eval, classifier subset eval, Cv attribute eval, Gain ratio attribute eval, Info gain attribute eval, OneRattribute eval, principal component, relief f attribute eval, Symmetric uncertainty, Wrapper subset eval. I would like to know which one would be the best for the dataset that shows good accuracy with Logistic Regression or SVM? AI: I don't think that there is a single feature selection method that works best with a specific algorithm, what they do is selecting the best features based on various criteria. These features can be useful or not to the algorithm that does the classification, regardless what this algorithm is. Without knowing anything about your data or their distribution, you can simply try a lot of those methods to see which produces the best results, and see if these generalize with the test set. Also, SVM itself can be used for feature selection, since it finds the optimal coefficient for each feature. I don't know if you can access those coefficients through Weka (sorry, not familiar with the software), but if you could they can be an indicator of how important each feature is.
H: Good NLP model for computationally cheap predictions that can reasonably approximate language model given large training data set I have a corpus of about one billion sentences, in which I am attempting to resolve NER conflicts (when two terms overlap in a sentence). My initial plan is to have an SME label the correct tag in each of a large number of conflicts, then use those labels to train either an NER model or a binary classification model (like GAN-ALBERT), to identify the correct choice when two NER tags conflict. The problem is, about 5% of these sentences contain conflicts, and I don't think that I have the computational resources to run BERT or ALBERT prediction on 50 million sentences in a reasonable amount of time. So, my hope is to use the ALBERT model to generate a large number of labels (perhaps one million) for a computationally cheaper model. So, I'm wondering if there is a model, 10 to 100 times cheaper at prediction than BERT, that could be trained to do a reasonable job of replicating the ALBERT model's performance, given a large amount of training data generated by said model. AI: There are several smaller BERT models, including bert-tiny. Bert-tiny is a distillation of the full BERT model.
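If you go the smaller-model route, loading a distilled checkpoint with the transformers library is a one-liner each way (checkpoint names should be verified on the Hugging Face hub; distilbert-base-uncased is a well-known example, and bert-tiny style checkpoints are smaller still):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased"     # or a bert-tiny style checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
# fine-tune on the ALBERT-generated labels, then run batched inference on the conflict sentences

Quantization or ONNX export can usually buy another sizeable inference speed-up on CPU if the 50 million sentences are still too slow.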
H: Understanding the tensorboard plots on a stable-baseline3's PPO So there are various plots that are provided when training a stable-baselines3's PPO model, so I thought you'd help me fill up the gaps with what is not quite clear to me: rollout/ep_len_mean: that would be the mean episode's length. What is the expected behavior? rollout/ep_rew_mean: the mean episode reward. Expected to increase over time time/fps: ? train/approx_kl: How different is $\pi_t$ compared to $\pi_{t+1}$. I suppose it should decrease over time as $\pi$ approaches the optimal policy. (Not sure though why it is the approximated value, and not the exact one) train/clip_fraction: some measurement to how the reward $r_t$ was clipped (as in trust regions). Should decrease over time as we're not stepping out of "boundaries" as much given a trained policy. train/clip_range: ? train/entropy_loss: perhaps it represents how random our actions are? not quite sure what to be expected here. train/explained_variance: I would say that this one represents how well we approximate the value function, so it should increase over time. train/learning_rate: the model's learning rate. Presumably will decrease over time, but perhaps it is not the default behavior train/loss: total $L_{CLIP}$ loss? should decrease over time train/policy_gradient_loss: ? train/std: ? train/value_loss: ? AI: The explanation of all logged values can be found on the documentation page of the Logger class: eval/ All eval/ values are computed by the EvalCallback. mean_ep_length: Mean episode length mean_reward: Mean episodic reward (during evaluation) success_rate: Mean success rate during evaluation (1.0 means 100% success), the environment info dict must contain an is_success key to compute that value rollout/ ep_len_mean: Mean episode length (averaged over 100 episodes) ep_rew_mean: Mean episodic training reward (averaged over 100 episodes), a Monitor wrapper is required to compute that value (automatically added by make_vec_env). exploration_rate: Current value of the exploration rate when using DQN, it corresponds to the fraction of actions taken randomly (epsilon of the "epsilon-greedy" exploration) success_rate: Mean success rate during training (averaged over 100 episodes), you must pass an extra argument to the Monitor wrapper to log that value (info_keywords=("is_success",)) and provide info["is_success"]=True/False on the final step of the episode time/ episodes: Total number of episodes fps: Number of frames per seconds (includes time taken by gradient update) iterations: Number of iterations (data collection + policy update for A2C/PPO) time_elapsed: Time in seconds since the beginning of training total_timesteps: Total number of timesteps (steps in the environments) train/ actor_loss: Current value for the actor loss for off-policy algorithms approx_kl: approximate mean KL divergence between old and new policy (for PPO), it is an estimation of how much changes happened in the update clip_fraction: mean fraction of surrogate loss that was clipped (above clip_range threshold) for PPO. 
clip_range: Current value of the clipping factor for the surrogate loss of PPO critic_loss: Current value for the critic function loss for off-policy algorithms, usually error between value function output and TD(0), temporal difference estimate ent_coef: Current value of the entropy coefficient (when using SAC) ent_coef_loss: Current value of the entropy coefficient loss (when using SAC) entropy_loss: Mean value of the entropy loss (negative of the average policy entropy) explained_variance: Fraction of the return variance explained by the value function, see https://scikit-learn.org/stable/modules/model_evaluation.html#explained-variance-score (ev=0 => might as well have predicted zero, ev=1 => perfect prediction, ev<0 => worse than just predicting zero) learning_rate: Current learning rate value loss: Current total loss value n_updates: Number of gradient updates applied so far policy_gradient_loss: Current value of the policy gradient loss (its value does not have much meaning) value_loss: Current value for the value function loss for on-policy algorithms, usually error between value function output and Monte-Carle estimate (or TD(lambda) estimate) std: Current standard deviation of the noise when using generalized State-Dependent Exploration (gSDE)
H: Fine tuning BERT without pre-training it on domain specific corpus I'm building an internal semantic search engine using BERT/SBERT + ElasticSearch 8 where answers are retrieved based on their cosine similarity with a query. The documents to be searched are somewhat domain-specific, off the top of my head estimation is that about 10% of the vocabulary is not present in Wiki or Common Crawl datasets on which BERT models were trained. These are basically "made-up" words - niche product and brand names. So my question is: Should I pre-train a BERT/SBERT model first on my specific corpus to learn the embeddings for these words using MLM? or Can I skip pre-training and start fine-tuning a selected model for Q/A using SQUAD, synthetic Q/A based on my corpus and actual logged user queries? My concern is that if I skip #1 then a model would not know the embeddings for some of the "made up" words, replace them with "unknown" token and this might lead to worse search performance. AI: Is your corpus big enough? (= several GBs) If yes, you could train a model from scratch and have good results. https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6 If not, fine-tuning should be better. You can always try to train it from scratch but you might have sometimes wrong results. Perhaps you can add some training data from similar sources to reach an optimal result. https://www.tensorflow.org/tfmodels/nlp/fine_tune_bert
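If you do decide on step 1, the usual middle ground is not training from scratch but continuing masked-language-model pretraining of an existing checkpoint on your corpus; a compressed sketch with the Hugging Face stack (dataset construction and hyperparameters are illustrative only):

from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

ds = Dataset.from_dict({"text": domain_sentences})       # your raw in-domain text
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="domain-bert", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()

Note that this adapts the contextual representations but not the WordPiece vocabulary itself, so truly made-up brand names will still be split into subword pieces; that is often good enough for retrieval, but it is worth checking on your real queries.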
H: What is the best way to handle missing observations for ACF / PACF? I have a time series with daily retail sales with two types of missing values Stores are closed on Sundays such that there are no observations. Stores are closed on public holidays (approx. 10 days per year) I want to calculate and plot ACF and PACF for this time series. How can I handle the missing observations? My idea is that dropping the Sundays from the time series is fine as long as one keeps in mind that when interpreting the plot - A peak on lag 6 indicates a weekly seasonality which usually is associated with a peak for lag 7. What about the other missing values? Simply dropping would result in a wrongly calculated ACF since the lag-order is disturbed. AI: I'm afraid there is no ideal solution for such case but there could be interesting ways to deal with that. As you mentioned, removing days could alter the seasonality detection. One solution (among others) is to calculate the previous mean variations (ex: 3 previous weeks) on the public holiday. For instance, if Christmas is a Wednesday December 25th, you can take the variations on previous Wednesdays (4th,11th,18th) regarding their previous days: Tue 3rd: 5.3 Wed 4th: 6.1 => +15% Tue 10th: 6.4 Wed 11th: 6.2 => -3% Tue 17th: 5.6 Wed 18th: 6.5 => +16% Tue 24th: 6.3 Wed 25th: ? => Apply ~+10% => 6.9 Another interesting solution is to take last year's variation because the weekday is usually not the same.
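One concrete way to keep the lag structure intact before computing the ACF/PACF is to rebuild the index of open days and fill only the holiday gaps, for example (sales assumed to be a pandas Series of daily values indexed by date):

import pandas as pd
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

all_days = pd.date_range(sales.index.min(), sales.index.max(), freq="D")
open_days = all_days[all_days.dayofweek != 6]     # drop Sundays entirely, keeping a 6-day week
filled = sales.reindex(open_days)                 # the ~10 holidays per year become NaN
filled = filled.interpolate()                     # or impute them with the weekday-variation idea above

plot_acf(filled, lags=30)
plot_pacf(filled, lags=30)

With Sundays removed, the weekly seasonality then shows up at lag 6 rather than lag 7, exactly as you describe.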
H: CNN model why is ReLu used in Conv1D layer and in the first Dense Layer? I have a problem. I have a CNN model which is used for an NLP problem. This is written in Python. I have questions about this, which I can't find an answer to. Why is ReLu used inside the Conv1D layer and not Softmax ? Why is ReLu used again as activation function in the first Dense-Layer and why Softmax afterwards ? model1 = Sequential() model1.add( Embedding(vocab_size ,embed_size ,weights = [embedding_matrix] #Supplied embedding matrix created from glove ,input_length = maxlen ,trainable=False) ) model1.add(Conv1D(256, 7, activation="relu")) model1.add(MaxPooling1D()) model1.add(Conv1D(128, 5, activation="relu")) model1.add(MaxPooling1D()) model1.add(GlobalMaxPooling1D()) model1.add(Dense(128, activation="relu")) model1.add(Dense(number, activation='softmax')) print(model1.summary()) ``` AI: The softmax activation is used as the activation function of the last layer in multiclass classification problems because it gives a categorical probability distribution over N discrete options. ReLU is used as a middle-layer (either convolution or dense) activation function because it is a non-linearity that works well and is robust to the vanishing gradient problem (as opposed to tanh or sigmoid).
H: ValueError: Found unknown categories ['IR', 'HN', 'MT', 'PH', 'NZ', 'CZ', 'MD'] in column 3 during transform I am trying to use Linear Regression, to predict salary in USD. I have the following data: Data: 607 records Numerical columns: year, salary, salary in USD Categorical columns: experience, type, residence, currency, remote work, company location, and company size. Target: salary in USD Preprocessing dataset: from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.compose import ColumnTransformer # Columns to drop: drop_cols = ['Currency', 'Company location', 'Salary', 'Title'] # Attributes of interest num_attributes = ['Year'] one_hot_attributes = ['Experience', 'Type', 'Remote work', 'Residence', 'Company size'] # Drop columns: data.drop(drop_cols, 1, inplace=True) # Setup transformer for column: preprocessor = ColumnTransformer([ ('nums', StandardScaler(), num_attributes), ('one_hot', OneHotEncoder(drop='first', sparse=False), one_hot_attributes)], remainder='passthrough') Pipe: from sklearn.pipeline import Pipeline pipe = Pipeline(steps =[ ('preprocessor', preprocessor), ('model', LinearRegression()), ]) pipe.fit(X_train, y_train) Perform prediction: prediction = pipe.predict(X_test) pd.DataFrame({'original test set':y_test, 'predictions': prediction}) Error: ValueError: Found unknown categories ['IR', 'HN', 'MT', 'PH', 'NZ', 'CZ', 'MD'] in column 3 during transform AI: This error is thrown by the OneHotEncoder class because your test dataset contains values for a column (likely the Residence column) that were not present in your training dataset. As specified in the documentation, the default for the handle_unknown argument is to throw an error when new values are encountered when transform is called. Setting handle_unknown='ignore' should stop the error from being thrown.
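Concretely, the fix in your preprocessor is a one-argument change (note that, depending on the scikit-learn version, combining drop='first' with handle_unknown='ignore' may not be allowed, in which case dropping the drop argument is the simplest workaround):

preprocessor = ColumnTransformer([
    ('nums', StandardScaler(), num_attributes),
    ('one_hot', OneHotEncoder(handle_unknown='ignore', sparse=False), one_hot_attributes)],
    remainder='passthrough')

Unseen categories such as 'IR' or 'HN' are then encoded as an all-zero row instead of raising an error; the alternative is to make sure the split keeps at least one example of every country in the training set, for instance by grouping rare countries into an 'other' bucket beforehand.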
H: Grid-search for a multi-output regression task using Scikit-learn's API I'm trying to make a model for a multi-output regression task where $y=(y_1, y_2,..., y_n)$ is a vector rather than a single scalar. I am using Scikit-learn's MultiOutputRegressor method to train and make a model for each $y_i \in y$ separately. My code looks like this: base_learner = lightgbm.LGBMRegressor(random_state=seed) estimator = MultiOutputRegressor(regressor) grid = { # hyperpramters to check # ... # 'random_state': [500], 'n_estimators': [100, 500], 'num_leaves': [15, 31, 63], 'max_depth': [8, 10], # 'min_data_in_leaf': [15, 25], 'feature_fraction': [0.3, 0.4], 'bagging_fraction': [0.4, 0.5], # 'bagging_freq': [100, 200, 400], "n_jobs": [-1], "verbose": [-1] } gs = GridSearchCV(base_learner, param_grid=grid, scoring=my_custom_score, cv=10) gs.fit(X_train, y_train) As you can see, the base-learner for each $y_i$ is of type lightgbm.LGBMRegressor. (By base-learner, I mean each individual leaner used to learn and predict each $y_i$.) I want to do a grid search to pick the best hyperparameters for each base-learner. But I don't know how to pass the list of hyperparameters in the grid variable to the base learners that are wrapped in MultiOutputRegressor. When I run the shown code above, I get the following error: Do you have any suggestion about how to pass hyperparameters to individual base-learners when one uses MultiOutputRegressor API? (Based on what I see in the error, MultiOutputRegressor itself only takes two parameters which are mainly for a using a leaner not passing hyperparameters to the underlying learners.) AI: I assume you meant GridSearchCV(estimator ..., otherwise there's no wrapping here. You'll need to supply a prefix: 'estimator__n_estimators': [100, 500], 'estimator__bagging_fraction': [0.4, 0.5], and so on.
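Putting that together, the search over the wrapped LightGBM learners would look roughly like this (only the estimator__ prefix and passing the wrapper to GridSearchCV change relative to your code):

estimator = MultiOutputRegressor(lightgbm.LGBMRegressor(random_state=seed))

grid = {
    'estimator__n_estimators': [100, 500],
    'estimator__num_leaves': [15, 31, 63],
    'estimator__max_depth': [8, 10],
    'estimator__feature_fraction': [0.3, 0.4],
    'estimator__bagging_fraction': [0.4, 0.5],
}

gs = GridSearchCV(estimator, param_grid=grid, scoring=my_custom_score, cv=10)
gs.fit(X_train, y_train)

Note that this still fits one LGBMRegressor per target with the same hyperparameter values; if you want genuinely different hyperparameters per output, you would have to run a separate search for each target column instead.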
H: How does BERT produce the CLS token? Internally, does it do max-pooling or average pooling? I ran an experiment to compare max-pooled word tokens vs the CLS token for sentence classification, and CLS clearly wins. I am trying to understand how BERT generates the CLS token embedding, given that it is better than max or average pooling. AI: The output at the first position (which is the position of the special token [CLS] in the input sequence, and is what you call the "CLS token") is computed with neither max-pooling nor average pooling; it is computed with self-attention, like the other output positions. The difference from the other output positions is that the first position is trained with the next sentence prediction (NSP) task. This means that the representation learned there is meant to predict whether the second part of the input (the subsequence after the [SEP] special token) was following the first part of the input in the original document. You can check the details in section 3.1 of the original BERT paper, under the "Task #2: Next Sentence Prediction (NSP)" heading. A figure in the paper illustrates how the output at the first position is used for the NSP task.
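If you want to inspect this yourself, here is a minimal sketch with the Hugging Face transformers library (bert-base-uncased is just an assumed checkpoint): the output at position 0 is the [CLS] representation, and pooler_output is that same vector passed through the extra dense + tanh "pooler" layer that was trained with NSP.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_vector = outputs.last_hidden_state[:, 0]  # contextual output at the [CLS] position
pooled = outputs.pooler_output                # same position after the dense + tanh pooler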
H: Why compare multiple machine learning algorithms and then decide which algorithm to use for fine tuning? I have a problem. There is a dataset A, which deals with a classification problem. And for this dataset, several different baseline algorithms have been defined and computed. In addition, three models were used: Logistic Regression, XGBoost and RandomForest. Now my question is this: why use different algorithms (Logistic Regression, XGBoost and RandomForest) and investigate which one is the better algorithm? Is it because the algorithms have different strengths and perform better depending on the data set?
Algorithm | Accuracy | Precision | Recall | F1-Score
Baseline 1 | 0.20 | 0.20 | 0.20 | 0.20
Baseline 2 | 0.20 | 0.20 | 0.20 | 0.20
Logistic Regression | 0.53 | 0.52 | 0.28 | 0.36
RandomForest | 0.65 | 0.64 | 0.63 | 0.63
XGBoost | 0.50 | 0.61 | 0.55 | 0.58
For example, RandomForest gave the best result, and then its hyperparameters are adjusted. AI: In machine learning there is something called the no free lunch theorem, which basically states that there isn't one solution/algorithm that will perform best on every problem. Different algorithms will perform differently on different data. Therefore, you try different algorithms to pick the best, although there are algorithms that are generally more powerful for certain types of problems (...). Hint: You usually pick the best algorithms without hyperparameter tuning in the first round and then compare the best ones again after tuning their hyperparameters, since these can make a decisive performance difference - especially for more "complex" algorithms like XGBoost in contrast to Random Forest, although both algorithms are decision-tree based.
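As an illustration of that first round, a quick comparison with default hyperparameters could look like the sketch below (X and y are placeholders for your features and labels, and xgboost is assumed to be installed); macro F1 is used so all classes count equally:

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=42),
    "XGBoost": XGBClassifier(random_state=42),
}

# compare all candidates under the same cross-validation and metric
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")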
H: Pytorch Neural Network that tries to approximate $z_i = x_i^2 + y_i^2$ not converging to solution Background I am teaching myself Pytorch, as a Mechanical engineering technology (MET) faculty. My end goal is to replace many data-driven heat transfer and Fluid dynamics models with Neural network approximations. This is a wholly academic exercise to expose my MET students to Neural networks via a familiar environment. I have some experience creating Neural Networks using the Wolfram Language. The problem statement Approximate $z_i = x_i^2 + y_i^2$ with a multi-layer feedforward percepteron where $x_i, y_i$ are randomly generated floats. The issue I am faced with The NN I created using Pytorch does not converge and I cannot tell if this is because of: Improper layer definitions. I have experimented with three layers (linear - ReLU or Tanh - linear) and the current one. I have experimented with different numbers of outputs from the first linear layer. Not sufficient epochs. Improper learning rate. The code itself is foul. I would greatly appreciate help or advice on this matter. I have included my code. I would be happy to provide any other information. Code Setting up the data, NN layers, and the optimizer import torch import torch.nn as nn import numpy as np from sklearn.model_selection import train_test_split x = np.random.random(1000); y = np.random.random(1000); z = x**2 + y**2 input_data = torch.Tensor(np.transpose([x ,y])) output_data = torch.Tensor(z) input_training, input_validation, output_training, output_validation = train_test_split(input_data, output_data, random_state=42, test_size=0.15, shuffle=True) class NonLinearRegression(torch.nn.Module): def __init__(self): super(NonLinearRegression, self).__init__() self.linear_1 = nn.Linear(in_features=2, out_features=10) self.act_1 = nn.ReLU() self.linear_2 = nn.Linear(in_features=10,out_features=5) self.act_2 = nn.ReLU() self.linear_3 = nn.Linear(in_features=5,out_features=1) def forward(self, y): y = self.linear_1(y) y = self.act_1(y) y = self.linear_2(y) y = self.act_2(y) y = self.linear_3(y) y_pred = y return y_pred model_nonlinear = NonLinearRegression() optimizer = torch.optim.SGD(model_nonlinear.parameters(), lr=1e-6) criterion = nn.MSELoss(reduction='sum') The NN training loop epoch_max = 20000 for epoch in range(epoch_max): total_loss = 0; model_nonlinear.train() y_pred = model_nonlinear(input_training) loss = criterion(y_pred, output_training) loss.backward() total_loss += float(loss) if (total_loss < 0.001): print("Num steps: " + str(epoch)) break optimizer.step() Validation input_validation, model_nonlinear(input_validation) #the math does not check out. AI: I think the main thing that is wrong is that your training loop is currently not resetting the gradients between epochs (using optimizer.zero_grad()). This causes gradients to accumulate, which stops your network from learning properly. Making this single change already massively improves the learning of your network, achieving a loss of around 6.5 after 20000 epochs. Some additional changes I've made that improve/speed up the learning even more are the following: Use the Adam optimizer instead of the SGD optimizer, the default learning rate of 0.001 seems to work fine. Use mini batches instead of the full training dataset. Increasing the number of parameters in your model. 
These changes result in the following code: # model definition class NonLinearRegression(torch.nn.Module): def __init__(self): super(NonLinearRegression, self).__init__() self.linear_1 = nn.Linear(in_features=2, out_features=25) self.act_1 = nn.ReLU() self.linear_2 = nn.Linear(in_features=25,out_features=10) self.act_2 = nn.ReLU() self.linear_3 = nn.Linear(in_features=10,out_features=1) def forward(self, y): y = self.linear_1(y) y = self.act_1(y) y = self.linear_2(y) y = self.act_2(y) y = self.linear_3(y) y_pred = y return y_pred model_nonlinear = NonLinearRegression() # changed optimizer optimizer = torch.optim.Adam(model_nonlinear.parameters()) criterion = nn.MSELoss(reduction='sum') # training loop epoch_max = 2500 for epoch in range(epoch_max): # mini-batch ix = torch.randint(0, input_training.shape[0], size=(64,)) total_loss = 0 y_pred = model_nonlinear(input_training[ix]) loss = criterion(y_pred.squeeze(), output_training[ix]) # set gradients to zero optimizer.zero_grad() loss.backward() total_loss += float(loss) if (total_loss < 0.001): print("Num steps: " + str(epoch)) break optimizer.step() With the following loss curve and predictions:
H: Why is my model overfitting? I am building a classification model based on some machine performance data. Unfortunately for me, it seems to over-fit no matter what I change. The dataset is quite large so I'll share the final feature importance & cross validation scores after feature selection. #preparing the data X = df.drop('target', axis='columns') y = df['target'] X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=10, stratify=y) I then cross validate as follows; logreg=LogisticRegression() kf=KFold(n_splits=25) score=cross_val_score(logreg,X,y,cv=kf) print("Cross Validation Scores: {}".format(score)) print("Average Cross Validation score : {}".format(score.mean())) Here are the results that I get: > Cross Validation Scores [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0.94814175 1. 1. 1. 1. 1. 1.] > Average Cross Validation score : 0.9979256698357821 When I run RandomForests, the accuracy is 100%. What could be the problem? PS. The classes were imbalanced so I "randomly" under-sampled the majority class. UPDATE: I overcame this challenge by eliminating some features from the final dataset. I retrained my models using a few features at a time and was able to find out the ones that caused the "over-fitting". In short, better feature selection did the trick. AI: This isn't overfitting. You're reporting cross-validation scores as very high (and are not reporting training set scores, which are presumably also very high); your model is just performing very well (on unseen data). That said, you should be asking yourself if something is wrong. There are two common culprits that come to mind: One of your features is very informative, but wouldn't be available at prediction time ("future information", or in the extreme case, you accidentally left the target variable in the independent variable dataframe) Your train-test splits don't respect some grouping (in the extreme case, rows of the frame are repeated and show up in both training and test folds). Otherwise, it's entirely possible your problem is just easily solved by your model. See also Why does my model produce too good to be true output? Quote on too good to be true model performance?
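One quick way to check the first culprit is to look at which features the model leans on; a small sketch using the same training split as above:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(random_state=10)
rf.fit(X_train, y_train)

# a single feature with overwhelming importance is a strong hint of leakage
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))

If one column dominates, ask whether its value would really be known at prediction time; that matches your update, where dropping a few features removed the suspiciously perfect scores.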
H: Is it possible to tell if one activation function is better than the other one based on their graphs? I am attempting to formulate my own activation function. However, I'm new to neural networks, am not yet ready to test it, but would want to know if I already landed on a better activation function than my benchmark before pushing through for a successful study. These are their graphs. Mine is in green and the benchmark is in purple. Is it possible to tell which one is better based on these graphs? Thanks! AI: tl;dr No. The choice of an activation function is highly dependent on the task at hand, so there isn't necessarily a "better" in the general sense, let alone a signal you could get from a chart. The chart also doesn't tell you if the activation or its derivative is easy/inexpensive to compute relative to the other, which can be a consideration.
H: Why is an already trained model not generalizable to another related dataset? A model is trained to predict the median temperature of Boston. The resulting model works well according to its validation data. However, this model performs poorly when used to predict the temperature of Washington. Explain the reason and suggest a way of training a better model for the Washington data. I think these two datasets are not identically distributed, so the model obtained on one dataset is not generalizable to the other dataset. The solution is that we should merge these two datasets, then do cross-validation and train the model. Is that correct? AI: This is likely due to the fact that ML models fail to give optimal results when the distribution of the data changes, i.e. the data is not identically distributed. To solve this problem, yes, the best approach would be to merge the data and create a model on data from both cities. This will make sure that your training distribution aligns with the test / real-world data distribution, hence improving model performance.
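A minimal sketch of that merging idea (all column and dataframe names here are made up), keeping a city indicator so the model can still learn city-specific behaviour:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold

# boston_df and washington_df are assumed to share the same feature columns
boston_df["city"] = 0
washington_df["city"] = 1
combined = pd.concat([boston_df, washington_df], ignore_index=True)

X = combined.drop(columns=["median_temperature"])  # assumed target column name
y = combined["median_temperature"]

cv = KFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=cv).mean())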
H: How is loss calculated in truncated BPTT for a many-to-one problem? In many resources I referred to, such as Justin Johnson's Lecture 12 on RNNs, truncated BPTT is explained as the process of feeding forward and backpropagating over smaller chunks of the sequence. These explanations say the loss is calculated for each chunk, and the last hidden state from the previous chunk is used as the initial hidden state for the next chunk. But in a many-to-one problem, where a target is available only at the last time step, how is the loss calculated for the chunks? Or how does truncated BPTT work in such cases? AI: If in your problem it only makes sense to compute the result at the very end of the sequence, then the loss can only be computed at the chunk containing the end of the sequence. I guess this gets more complicated if each sequence has a different length. However, in many-to-one problems it often still makes sense to compute a result at every time step. For instance, in time series forecasting or in language modeling, the inputs are the previous time steps and the output is the prediction for the next time step, so it makes sense to generate a result (i.e. a prediction for the next time step) at every time step and therefore compute the loss by combining the errors of the outputs at each time step in each chunk.
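A rough PyTorch sketch of the first case (all shapes and the chunk length are made up): the hidden state is carried across chunks but detached, and the loss is only computed on the chunk that contains the end of the sequence.

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
criterion = nn.MSELoss()

seq = torch.randn(1, 1000, 8)   # one long sequence
target = torch.randn(1, 1)      # single target for the whole sequence
chunk_len = 100

state = None
for start in range(0, seq.size(1), chunk_len):
    chunk = seq[:, start:start + chunk_len]
    out, state = rnn(chunk, state)
    # detach so gradients never flow further back than one chunk (the truncation)
    state = tuple(s.detach() for s in state)
    if start + chunk_len >= seq.size(1):
        # only the final chunk has a target, so only it contributes a loss
        loss = criterion(head(out[:, -1]), target)
        loss.backward()   # optimizer step omitted in this sketch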
H: What makes an ROC curve a curve and why do the values change? I have a problem. I am currently looking at a classifier and I would like to examine it using an ROC curve as a metric. However, questions have arisen to which I cannot find an answer. An ROC curve is described as follows: ROC curves are frequently used to show in a graphical way the connection/trade-off between clinical sensitivity and specificity for every possible cut-off for a test or a combination of tests. In addition the area under the ROC curve gives an idea about the benefit of using the test(s) in question. Why does an ROC curve become a curve in the first place? Why do the TP (true positive) and FP (false positive) rates change? And why does the ratio vary? AI: An ROC curve is a parametric curve. Each point has a respective third coordinate (the classification threshold). The .predict_proba() method of sklearn models returns the class scores (the measure of a model's certainty of the prediction, or a probability for well-calibrated models). By default, the sklearn .predict() method predicts the class by comparing this score to a 0.5 threshold. Neural network classifiers often similarly decide on the class by applying argmax to the scores. If we operate on the scores directly, however, we can try as many thresholds as there are unique score values (plus a zero one, which would classify everything as 1). Each possible threshold yields a different confusion matrix. We start drawing the curve from the upper right (zero threshold), where most observations are classified as positive: that means perfect recall but also a high false positive rate. Towards the bottom left, more observations are classified as negative: recall decreases, but so do false positives. A decent classifier with respect to this metric should yield high recall and low FPR for at least some threshold.
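You can see this parametrization directly in scikit-learn; a small sketch assuming a fitted binary classifier clf and a held-out test set:

from sklearn.metrics import roc_curve, roc_auc_score

scores = clf.predict_proba(X_test)[:, 1]           # score of the positive class
fpr, tpr, thresholds = roc_curve(y_test, scores)   # one (FPR, TPR) point per threshold

# each row shows how the trade-off moves as the threshold changes
for f, t, thr in list(zip(fpr, tpr, thresholds))[:5]:
    print(f"threshold={thr:.3f}  FPR={f:.3f}  TPR={t:.3f}")

print("AUC:", roc_auc_score(y_test, scores))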
H: Why is it useful to use different word splitting with different tokenizers? I have a problem: an NLP classification problem. There are different methods to decompose sentences into tokens, for example into whole words or into characters. Then there are different tokenizers like TF-IDF, binary, frequency and count. My question is: why should one make the effort to use different word divisions (word or character) and then check them with the different tokenizers? AI: It's rare to represent sentences as sequences of characters, since most NLP tasks are related to the semantics of the sentence, which is expressed by the sequence of words. A notable exception: stylometry tasks, i.e. tasks where the style of the text/author matters more than the topic/meaning, sometimes rely on sequences of characters. Yes, the question of tokenization can indeed have an impact on the performance of the target task. But modern methods use good word tokenizers trained on large corpora, not simplified whitespace-based tokenizers. There can still be differences between tokenizers, though. There are even more text representation methods than those listed here (embeddings are an important one). And yes, these also have a huge impact on performance. For all these different options (and others), the reason why it's often worth testing different variants is clear: it affects performance and it's not always clear which one is best without trying, so one must evaluate the different options. By the way, it's crucial to precisely define how the target task is evaluated first, otherwise one just subjectively interprets results. Basically, in my opinion, this is a matter of proper data-driven methodology. Of course experience and intuition also play a role, especially if there are time or resource constraints.
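To make this concrete in scikit-learn, the unit of splitting (word vs. character n-grams) and the weighting scheme (count, binary, TF-IDF) can be varied independently; a tiny sketch with invented sentences:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]

word_counts = CountVectorizer(analyzer="word").fit_transform(docs)
binary_bow  = CountVectorizer(analyzer="word", binary=True).fit_transform(docs)
word_tfidf  = TfidfVectorizer(analyzer="word").fit_transform(docs)
char_ngrams = CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(docs)

print(word_counts.shape, binary_bow.shape, word_tfidf.shape, char_ngrams.shape)

Cross-validating the downstream classifier once per representation, with a fixed evaluation metric, is then the fair way to pick between them.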
H: Why so discrepancy between ARIMA and LSTM in time series forecasting? I have this time series below, that I divided into train, val and test: Basically, I trained an ARIMA and an LSTM on those data, and results are completely different, in terms of prediction: ARIMA: LSTM: Now, maybe I am passing, in some way, the test set to LSTM in order to perform better? Or LSTM is simply (lot) better than ARIMA? Below there is some code. Note that in order to do prediction in future days, I am adding the new and last predicted value to my series, before training and predicting: ARIMA code: # Create list of x train valuess history = [x for x in x_train] # establish list for predictions model_predictions = [] # Count number of test data points N_test_observations = len(x_test) # loop through every data point for time_point in list(x_test.index[-N_test_observations:]): model = sm.tsa.arima.ARIMA(history, order=(3,1,3), seasonal_order=(0,0,0,7)) model_fit = model.fit() output = model_fit.forecast() yhat = output[0] model_predictions.append(yhat) true_test_value = x_test[time_point] #history.append(true_test_value) history.append(yhat) MAE_error = mean_absolute_error(x_test, model_predictions) print('Testing Mean Squared Error is {}'.format(MAE_error)) Testing Mean Squared Error is 86.71141520892097 LSTM code: def sequential_window_dataset(series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True) ds = ds.flat_map(lambda window: window.batch(window_size + 1)) ds = ds.map(lambda window: (window[:-1], window[1:])) return ds.batch(1).prefetch(1) # reset any stored data keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) # set window size and create input batch sequence window_size = 30 train_set = sequential_window_dataset(normalized_x_train, window_size) valid_set = sequential_window_dataset(normalized_x_valid, window_size) # create model model = keras.models.Sequential([ keras.layers.LSTM(100, return_sequences=True, stateful=True, batch_input_shape=[1, None, 1]), keras.layers.LSTM(100, return_sequences=True, stateful=True), keras.layers.Dense(1), ]) # set optimizer optimizer = keras.optimizers.Nadam(lr=0.00033) # compile model model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) # reset states reset_states = ResetStatesCallback() #set up save best only checkpoint model_checkpoint = keras.callbacks.ModelCheckpoint( "my_checkpoint", save_best_only=True) early_stopping = keras.callbacks.EarlyStopping(patience=50) # fit model model.fit(train_set, epochs=500, validation_data=valid_set, callbacks=[early_stopping, model_checkpoint, reset_states]) # recall best model model = keras.models.load_model("my_checkpoint") # make predictions rnn_forecast = model.predict(normalized_x_test[np.newaxis,:]) rnn_forecast = rnn_forecast.flatten() # Example of how to iverse rnn_unscaled_forecast = x_train_scaler.inverse_transform(rnn_forecast.reshape(-1,1)).flatten() rnn_unscaled_forecast.shape 'LSTM': 9.964744041030935 Maybe there is something with that window size of the LSTM? Or maybe something when I do predictions for LSTM? # make predictions rnn_forecast = model.predict(normalized_x_test[np.newaxis,:]) AI: Arima and LSTM are very different and there could be some tips to improve results. Have you tried relative values instead of raw values? 
For instance: #Raw values: raw=[1200, 1300, 1250, 1370] #Relative (or differential) values: diff=[+100,-50,+120] Sometimes, raw values like 1400 could alter the results for ARIMA and LSTM differently. On the other hand, LSTM could give bad predictions with noisy data. Some smoothing could improve results, but it depends on the kind of data. Finally, are you trying to forecast 30 days in a single shot? Most predictions focus on a 1-day forecast and measure their precision on the sequential day-to-day results over the 30 days of validation data. If your aim is to get accurate long-term forecasting, ARIMA and LSTM might not be the best solutions (especially ARIMA), because they have their own structural limitations. This could also explain why the LSTM results have a gap with the real results: some internal mechanisms have limited memory and wrongly predict large decreases or increases in values. The shape of the LSTM result seems correct, but there is a small shift of 10 in Y because it initially predicted a smaller decrease. LSTM is quite difficult to understand: all I can say is that the weights are connected to each other and peaks are more difficult to predict because of those dependencies. I recommend reading the initial paper, it's very interesting: https://www.researchgate.net/publication/13853244_Long_Short-term_Memory My advice is to give up some temporal resolution by grouping values (e.g. predict weeks instead of days) or to use long-term models like these: https://towardsdatascience.com/extreme-event-forecasting-with-lstm-autoencoders-297492485037 https://thuijskens.github.io/2016/08/03/time-series-forecasting/ https://arxiv.org/pdf/2210.08244.pdf
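For the first suggestion, differencing is one line in pandas and can be inverted after forecasting; a sketch assuming your series is a pandas Series x and diff_forecast is whatever the model predicts on the differenced scale:

import pandas as pd

x_diff = x.diff().dropna()   # day-to-day changes instead of raw levels

# ... fit the model on x_diff and obtain diff_forecast ...

# undo the differencing: cumulative sum of predicted changes plus the last observed level
forecast = diff_forecast.cumsum() + x.iloc[-1]

Note that your ARIMA order (3,1,3) already applies one differencing step internally (the middle 1), so this mainly matters for what you feed the LSTM.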
H: Does it help to have similar values for features in train and test data to make accurate predictions? I am quite new to some concepts of machine learning and having a hard time understanding the following. Suppose I have a supervised classifier (random forest) trained with a dataset with several features. Do the features in the test dataset need to have values that are somewhat similar (or closer) to the training data (or in the same domain)? For example, take training data record: <'label A', 12, 23, 3412, 65> (assume other 'label A' types are similar to this, with only +-10 difference for each feature), test data: X: <10, 21, 3000, 80> and Y: <0.12, 0.23, 34.12, 0.65>. Out of X and Y, which has a higher chance of being classified as type 'label A'? Please make a note of any assumptions you make. AI: This is both a simple and complex concept and one that we are always concerned with while building models. The short answer is that the data in your training and test sets needs to be randomly selected, and accordingly you have no control over the range and variation in either set relative to a fixed amount of modeling data. The long answer is that any model will interpolate better than it will extrapolate, because it is easier to describe what is known by the model. So, if there are a lot of values outside of the training range, it will for sure affect the predictive capabilities of your resulting model. In your example, X is the more likely of the two to be classified as 'label A': its values at least lie close to the training domain, whereas Y's values are roughly a hundred times smaller, so the model has to extrapolate and its prediction there simply reflects whatever region of the training data such extreme values get routed to. The nuances of this depend on a lot of things: how much data you have, how divergent the training and test variables are, what specific model tuning parameters you use, and how much the outcome variable varies with its predictors, why and how. But ultimately the training set should be large enough and varied enough that it captures all of the variance in the outcome variable you hope to predict. This more than likely means that your features should be fully described at all possible values they might take.
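A toy sketch of why the training range matters for a tree ensemble (every number here is invented): once a point lies below all the split thresholds learned from the training data, moving it even further away changes nothing, so Y is judged purely by the edge of the training domain rather than by the pattern of its values.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 'label A' lives around (12, 23, 3412, 65); a second class lives around (40, 60, 5000, 120)
A = rng.normal([12, 23, 3412, 65], [3, 3, 100, 5], size=(200, 4))
B = rng.normal([40, 60, 5000, 120], [3, 3, 100, 5], size=(200, 4))
X_tr = np.vstack([A, B])
y_tr = np.array(["A"] * 200 + ["B"] * 200)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

X_q = np.array([[10, 21, 3000, 80]])          # near the training domain
Y_q = np.array([[0.12, 0.23, 34.12, 0.65]])   # same pattern, roughly 100x smaller
Z_q = np.array([[0.0, 0.0, 0.0, 0.0]])        # even more extreme
print(clf.predict_proba(X_q))
print(clf.predict_proba(Y_q), clf.predict_proba(Z_q))  # Y and Z fall into the same leaves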
H: Append Existing Columns to another Column in Pandas Dataframe I have data that looks like this: the T2M indicates the temperature, and the following row is the year. I want to append all columns with the same parameter under a single column covering all the years, so that I end up with only one T2M column, and the final dataframe would look like this: Parameter | T2M | ... Year | 1981 | ... Jan Feb . . Year | 1982 | ... . . . I tried the following but it doesn't work: dff = df.copy() temp = df.iloc[:,1] dff.append(temp) I get this error: ValueError: cannot reindex from a duplicate axis, which doesn't make sense to me because the same indices were used in the first example. AI: Ok, I figured out the problem. The duplicate axis error was coming up because the dataframe has multiple columns with the name 'T2M', so append() could not figure out to which column it should append the new values. Instead, I copied the dataframe, deleted all the columns to be appended from the copy, and extracted the data from the original df into the copy. Since in the copy all columns are unique, everything went fine.
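Without seeing the exact file, here is one hedged sketch of that idea, assuming the frame has a month index and several columns that are all literally named 'T2M' (one per year), with the list of years known separately:

import pandas as pd

# select every column named T2M, then relabel them with their years
t2m = df.loc[:, df.columns == 'T2M'].copy()
t2m.columns = years   # e.g. years = [1981, 1982, ...], assumed to be known

# stack the year columns into one long T2M column indexed by (Month, Year)
long_t2m = (t2m.stack()
                .rename('T2M')
                .rename_axis(['Month', 'Year'])
                .reset_index())

The same pattern can be repeated for each parameter and the results joined, which avoids the duplicate-label ambiguity that made append() fail.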
H: Ignore exponent of a word in dataframe I have a dataframe where there are columns whose values (strings/words) contain superscript characters. For example, pandas reads them as Rashᵃ, Fatigueᶜ, Pyrexiaᵇ. Is there any way I can make it read them properly, or even some other way to remove the superscripts from those words? UPDATE: I found the solution under this post https://stackoverflow.com/questions/64309887/pandas-read-html-ignore-superscripts-and-subscripts AI: I think you have to manually exclude those superscript letters afterwards - possibly even directly via the underlying code. Take a look at the Wikipedia page regarding Unicode Subscripts and Superscripts. Quick and dirty example code via the particular superscript character: import re text = "Rashᵃ" re.sub("(ᵃ)", '', text) Out[1]: 'Rash'
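A slightly more general variant that strips every superscript character instead of listing them one by one, using the Unicode decomposition data (the example Series stands in for your dataframe column):

import unicodedata
import pandas as pd

def strip_superscripts(text):
    # characters such as 'ᵃ' decompose to '<super> 0061', so filter on that tag
    return ''.join(ch for ch in text
                   if not unicodedata.decomposition(ch).startswith('<super>'))

s = pd.Series(["Rashᵃ", "Fatigueᶜ", "Pyrexiaᵇ"])
print(s.map(strip_superscripts))   # Rash, Fatigue, Pyrexia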
H: What model to fit to call center data I have a dataset with calls from day 1 to day 340. What model can I fit to mathematically capture the pattern? The call counts have only 1 or 2 digits on all days except days 61-63, 121-123, and day 170, when there are 3-4 digit call counts. AI: I don't think you are going to be able to capture this pattern unless you include some kind of information around those 3 blow-up periods. Were these holidays, a new product release, stimulus checks, government requirements, etc.? Best make a variable to capture that. If you don't know why you had a sudden increase of 2 to 3 orders of magnitude, you should probably understand your data better. Depending on your needs, a solid line graph with some well-placed text and color might get you just as far as a fancy model. E.g., "On 98% of days we had roughly X calls, but during holidays calls increase by Y%. Please consider temp hiring during holidays to solve this problem."
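A small sketch of that indicator variable (column names are assumed, and the hard-coded day list should really be replaced by the underlying cause, e.g. a holiday calendar):

import pandas as pd

# df is assumed to have one row per day with columns 'day' and 'calls'
spike_days = [61, 62, 63, 121, 122, 123, 170]
df['spike_period'] = df['day'].isin(spike_days).astype(int)

print(df.groupby('spike_period')['calls'].describe())

With that flag as a feature (or as an exogenous regressor in something like SARIMAX), the huge spikes stop dominating whatever model you fit to the ordinary days.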