H: How to use bert-tiny with transformers? How can I use BERT-tiny? I tried to load bert-base-uncased with this line: transformers.AutoTokenizer.from_pretrained("bert-base-uncased"), but how can I use BERT-tiny, please? AI: You can use the bert-tiny model uploaded to the Hugging Face model repository by user prajjwal1. The model card mentions that it uses the checkpoint from the official Google BERT repository, converted to PyTorch. import transformers transformers.AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
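For completeness, a minimal sketch loading both the tokenizer and the model (the weights are downloaded from the Hugging Face hub on first use):

```python
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the bert-tiny weights from the Hugging Face hub
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModel.from_pretrained("prajjwal1/bert-tiny")

# Encode a sentence and run it through the model
inputs = tokenizer("BERT-tiny is a small BERT variant.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size = 128 for bert-tiny)
```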
H: Understanding derivation of gradient optimisation problem I'm following a tutorial on youtube about reinforcement learning. They are going through the steps to understand policy gradient optimisation. In one of the steps he says (delta policy)/policy == delta log policy. How can he make that jump? I have attached a screenshot from the video and also a link to the video. https://www.youtube.com/watch?v=wDVteayWWvU&list=PLMrJAkhIeNNR20Mz-VpzgfQs5zrYi085m&index=48&ab_channel=SteveBrunton AI: That is called the "log trick". Essentially, from calculus: $$\frac{d}{dx}\log(f(x))=\frac{1}{f(x)}\frac{d}{dx}f(x)$$ Applying the same principle in higher dimensions and you get the equation you wrote.
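Spelling this out for the policy-gradient case, with standard notation $\pi_\theta$ for the parameterised policy:

$$\nabla_\theta \log \pi_\theta(a \mid s) = \frac{\nabla_\theta \pi_\theta(a \mid s)}{\pi_\theta(a \mid s)} \quad\Longleftrightarrow\quad \nabla_\theta \pi_\theta(a \mid s) = \pi_\theta(a \mid s)\,\nabla_\theta \log \pi_\theta(a \mid s),$$

which is exactly the substitution $(\nabla \pi)/\pi = \nabla \log \pi$ used in the video.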
H: How to interpret a Regression Error Characteristic curve How can I interpret the REC (Regression Error Characteristic) curve ? What is error tolerance and what is the area over the curve? What should be the accepted value for the error tolerance? How to interpret, for ex, the following graph. AI: The Regression Error Characteristic shows the error level on the X axis and the proportion of points under this error level on the Y axis. For example, in your graph: For X=1, the Y value is around 15-20% for the kNN model. This means that the kNN model predicts around 15-20% of the instances with an error less or equal to 1. By contrast, the Y value is around 30-35% for the SVM model, so the SVM model has a higher proportion of instances with a small error (less or equal 1). One can also start with the Y value: for Y=75% of the instances, X is around 1.1-1.2 for both models. So both models predict 75% of the instances with an error less or equal to 1.1-1.2. Of course, a good system would have the highest possible proportion of instances within the lowest possible level of error, so ideally the curve should be as close as possible to the top left corner. The area over the curve is all the area between the top left corner and the curve, i.e. on the left and above the curve (as opposed to the area to the right and under the curve). It can be used as a measure of the overall amount of error that the system makes: if this area is small, it means that the system is close to the top left corner so it's good. On the contrary if this area is large, then it means that the system is far from the top left corner so it's not a great model.
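For reference, a REC curve can be computed directly from the absolute errors of a model; a minimal sketch with NumPy/matplotlib on toy predictions (replace them with your own true and predicted values):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy example standing in for a real model's predictions
rng = np.random.default_rng(0)
y_true = rng.normal(size=200)
y_pred = y_true + rng.normal(scale=0.5, size=200)

# For each tolerance, the curve gives the fraction of instances with error <= tolerance
errors = np.sort(np.abs(y_true - y_pred))
coverage = np.arange(1, len(errors) + 1) / len(errors)

plt.plot(errors, coverage)
plt.xlabel("Error tolerance")
plt.ylabel("Fraction of instances within tolerance")
plt.show()

# Approximate area over the curve within the observed error range (smaller is better)
aoc = np.trapz(1.0 - coverage, errors)
print(aoc)
```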
H: Automated feature selection packages - Python I am working on a binary classification with 977 rows. The class proportion is 77:23. I have a lot of high-cardinality categorical variables and a couple of numeric variables such as Age and quantity. I would like to know what are some of the automated feature selection packages available in Python. Basically, as a data scientist, knowing this list of packages would help me do my tasks efficiently. I came across the below a) Featurewiz here b) sklearn.feature_selection c) Xverse here d) SHAP-hypertune here I am sure there are a lot more packages that can be added to this list that I am not aware of. Can I please have your help to list the other automated feature selection packages in Python? AI: In addition to these packages, ML algorithms with strong regularization can do intrinsic feature selection. This is known as the kitchen sink approach: all features are pushed to the ML model and the model decides which of them are important. For example, L1 regularization in regression performs feature selection intrinsically.
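A minimal scikit-learn sketch of this embedded selection, assuming the features have already been numerically encoded (toy data stands in for the real dataset):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# Toy data standing in for an already-encoded feature matrix
X, y = make_classification(n_samples=977, n_features=20, n_informative=5, random_state=0)

# L1-penalised logistic regression drives the coefficients of
# uninformative features to exactly zero
selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
selector.fit(X, y)
print(selector.get_support())  # boolean mask of the retained features
```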
H: __init__() takes 1 positional argument but 4 were given sklearn standard scaler error I defined a class like below: from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import StandardScaler class CustomScaler(BaseEstimator, TransformerMixin): def __init__(self, columns, copy=True, with_mean=True, with_std=True): self.scaler = StandardScaler(copy, with_mean, with_std) self.columns = columns self.mean_ = None self.var_ = None def fit(self, X, y = None): self.scaler.fit(X[self.columns], y) self.mean_ = np.mean(X[self.columns]) self.var_ = np.var(X[self.columns]) return self def transform(self, X, y=None, copy=None): init_col_order = X.columns X_scaled = pd.DataFrame(self.scaler.transform(X[self.columns]), columns=self.columns) X_not_scaled = X.loc[:, ~X.columns.isin(self.columns)] return pd.concat([X_not_scaled, X_scaled], axis = 1)[init_col_order] When I try to create an instance from it: columns_to_scale = ['col_A', 'col_B'] scaler = CustomScaler(columns_to_scale) I got this error: init() takes 1 positional argument but 4 were given What's the problem? And how to solve it? Environment: Python: 3.9.1 Scikit-learn: 1.0.2 AI: As the error mentions, you are passing multiple positional arguments whereas the __init__ method of StandardScaler only takes in one. The arguments you are trying to pass to the scaler should instead be passed as keywords arguments: self.scaler = StandardScaler(copy=copy, with_mean=with_mean, with_std=with_std)
H: Reinforcement Learning - Model based and model free I'm studying reinforcement learning and I found confusing information. I know there are two different types of reinforcement learning, model based and model free. In the second image, it is possible to see TD learning, so I don't understand if Td learning is another type of reinforcement learning or it is a model based. AI: Reinforcement Learning is a paradigm of learning with reward and includes tons of methods. One categorization is between model-based RL and model-free RL. In model-based the agent knows all the possible contingencies from the current state and can evaluate the expected return which will help it to take an informative decision. Computationally this means that the agent has a model of how the environment changes states given an action. It also knows the outcome of a specific transition. Before the agent moves, it makes an estimation of what to expect given current state. In cognitive terms this is planning. Model-based is also referred to as response-outcome model. The reason for this is that given a change in the outcome (reward) the agent can adapt its response accordingly to optimize return. This is because it has a model of the world that tells it "what happens if". Model-free RL is related to stimulus-response learning or else habitual learning. The agent has no knowledge of "what happens if". Instead it has passed through an extensive period of trial and error till it managed to associate a particular state (stimulus) with a particular action (response) by (computationally) maintaining an estimation of expected return from the current state. TD methods belong to MF RL as they do not use a model in order to predict what happens in future transitions. In other words they do not perform planning. Computationally, TD methods are used to learn value functions (estimators of expected return) by accumulating sampled experience (i.e interacting with the task by selecting actions and observe the outcome). Of course there is a trade-off between exploring new actions or exploiting ones that their outcome has been estimated. In general, if I were to perform MB RL I would think the outcomes of all possible contingencies from my current state, up to a horizon. Then I would compare those and I would select the action that corresponds to the best one. In contrast if I were to perform MF RL I would simply try an action that either I think is good because in the past it led me to a positive outcome, or a random action in order to see if I can get an even better outcome. After the action I would observe the outcome and update how good is that action based on the new evidence. There have been some pathways in the brain that could computationally be described (approximately) by MF and MB RL. There has been research in pathways that combine both (learning a model of the world by experience). If you are interested in RL from computational neuroscience perspective you could search some tutorial on MB/MF RL by Nathaniel Daw or Yael Niv (they have written a lot) and work representative of the field by Peter Dayan - one of the pioneers on the subject - and Samuel Gershman.
H: When to use best hyperparameters - Feature selection or Model building? I am working on a binary classification with 977 rows using different algorithms. I am planning to select important features using wrapper methods. As you might know, wrapper methods involve the use of an ML model to find the best subset of features. Therefore, my question is as below a) Should I use the best hyperparameters even for feature selection using an ML model? If yes, why? b) If no to the above question, then am I right to understand that we use the best hyperparameters during model building with the important features selected above (using a wrapper method)? c) Is it normal to expect that the best hyperparameters from a) and b) should be the same? Because both are trying to solve the same objective (best f1-score, for example). I am not sure, but I feel that doing a grid search to find the best hyperparameters for feature selection and model building separately seems to be overkill. Can you share your views on this please? AI: Feature selection can have an impact on which hyperparameters are optimal, and vice versa. So if you want to squeeze every bit of performance from your model, you should do both of them together. But, as you rightly mentioned, that may not be feasible, so I would suggest the following steps: Very loosely optimize hyperparameters, just to make sure you don't assign extremely bad values to some hyperparameters. This can often just be done by hand if you have a good intuitive understanding of your hyperparameters, or done with a very brief hyperparameter optimization procedure using just a bunch of features that you know to be decently good otherwise. Feature selection, with hyperparameters that are maybe not 100% optimized but at least not extremely terrible either. If you have at least a somewhat decently configured machine learning algorithm already, having good features will be significantly more important for your performance than micro-optimizing hyperparameters. Extreme examples: If you have no features, you can't predict anything. If you have a cheating feature that contains the class label, you can perfectly classify everything. Optimize hyperparameters with the features selected in the step above. This should be a good feature set now, where it actually may be worth optimizing hyperparameters a bit. Please refer to this link for more details.
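A rough scikit-learn sketch of those three steps (toy data and hypothetical parameter grids, only to show the sequencing):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV

# Toy data standing in for the real training split
X_train, y_train = make_classification(n_samples=500, n_features=20, random_state=0)

# 1) Loosely configured model (hand-picked, "not terrible" hyperparameters)
rough_model = RandomForestClassifier(n_estimators=200, random_state=0)

# 2) Wrapper-style feature selection with that rough model
selector = RFECV(rough_model, step=1, cv=5, scoring="f1")
X_train_sel = selector.fit_transform(X_train, y_train)

# 3) Proper hyperparameter search using only the selected features
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, 5, None]},
    scoring="f1",
    cv=5,
)
grid.fit(X_train_sel, y_train)
print(grid.best_params_)
```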
H: Ordered categorical xlabel number - what to call xlabel Say I have 105 brand names from a store, and I know the average retrun percentage for the products of the different brands. . For example: Brand = Nike, return_rate = 30% Then I order all these brands and simply put in an integer instead of the name (since I can't put all brands on the xlabel) So now Nike is simply number 50: Brand = 50, return_rate = 30% The graph looks like this I have no clue what to call the xlabel.. Would love your help. AI: I would not put any value on the xaxis ticks. I would title the x axis "Brands (ordered from lowest to highest return rate)". Though I am not sure what the plot is telling me. If it fits your story then great.
H: Vectorized String Distance I am looking for a way to calculate the string distance between two Pandas dataframe columns in a vectorized way. I tried distance and textdistance libraries but they require to use df.apply which is incredibly slow. Do you know any way to have a string distance using only column operations ? Thanks AI: I found here that performance across string distance libraries varies greatly : https://github.com/life4/textdistance#benchmarks The python-Levenshtein library is lightning fast compared to the others so I will use this one. If it's not sufficient I will use parallelism as suggested by @Peter
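As a reference point, a small sketch with the python-Levenshtein package; a plain list comprehension over the two columns usually avoids most of the df.apply overhead, even though the distance itself is still computed pair by pair:

```python
import Levenshtein
import pandas as pd

# Toy columns standing in for the real dataframe
df = pd.DataFrame({"a": ["kitten", "flaw"], "b": ["sitting", "lawn"]})

# Compute the edit distance for each row without df.apply
df["distance"] = [Levenshtein.distance(x, y) for x, y in zip(df["a"], df["b"])]
print(df)
```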
H: How to interpret standard deviation calculated by excel I have some percentage values in my excel spreadsheet like: When i use the STDEV function to calculate the standard deviation with these values, i get 0,125. How should i interpret this? Is it +- 0,125% or +-12,50%? AI: You should interpret it as 12.5%. The percentage values that you are seeing in your spreadsheet (93.33%, ...) are only formats, internally Excel is dealing with decimal point numbers (.9333, ...) so the result of the standard deviation will be in the same scale: .125 -> 12.5%
H: How can I set the learning rate for big data? Will it need more epochs for training, or is that not necessary? And what learning rate should I set for this data with the Adam optimizer? AI: You can use the default learning rate, then apply a learning rate scheduler to adjust your learning rate every few epochs. Please refer to the Keras documentation.
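A minimal Keras sketch of that idea, keeping Adam's default learning rate and halving it every 10 epochs (the toy data, model and decay rule are only illustrative):

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for the real dataset
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="binary_crossentropy")

def schedule(epoch, lr):
    # Keep the current rate, halving it every 10 epochs
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

model.fit(
    X, y,
    epochs=30,
    callbacks=[tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)],
)
```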
H: Does it make sense to use t-SNE and then apply HDBSCAN to cluster? I believe that the title is self-contained. Does it make sense to use t-SNE and then apply HDBSCAN to cluster the data after dimensionality reduction? AI: t-SNE is NOT a dimensionality reduction algorithm but a visualization method. The reason is that it is non-parametric and cannot map new data into the same embedding. What you are looking for is UMAP, and yes, you can reduce dimensionality and then use a clustering method to find clusters (indeed this is a common practice in clustering).
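A minimal sketch of that pipeline with the third-party umap-learn and hdbscan packages (random toy data stands in for the real dataset; the parameter values are only illustrative):

```python
import numpy as np
import umap      # from the umap-learn package
import hdbscan   # from the hdbscan package

# Toy data standing in for the real high-dimensional dataset
X = np.random.rand(500, 50)

# Reduce to a low-dimensional embedding (UMAP can also transform new points later)
embedding = umap.UMAP(n_components=5, n_neighbors=15, min_dist=0.0).fit_transform(X)

# Density-based clustering on the embedded data
labels = hdbscan.HDBSCAN(min_cluster_size=20).fit_predict(embedding)
print(set(labels))  # -1 marks points treated as noise
```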
H: Class imbalance: Will transforming multi-label (aka multi-task) to multi-class problem help? I noticed this and this questions, but my problem is more about class imbalance. So now I have, say, 1000 targets and some input samples (with some feature vectors). Each input sample can have label '1' for many targets (currently tasks), meaning they interact. Label '0' means they don't interact (for each task, it is a binary classification problem). Unbalanced data My current issue is: For most targets there are <1% samples (perhaps 1 or 2) that are labelled 1. Since I have to split train-val-test and calculate AUROC, there are in fact only 3 targets left that can support the classification under some threshold (say, have >5% positive labels across all samples). Transform or not? Someone has suggested modeling this as a multi-class problem instead of a multi-task problem, meaning I would transform the label vector of each sample into a set of label-1 targets. For example, if sample A originally has label 1 for targets 12, 232, 988 (and 0 for all others), the new label for sample A would simply be {12, 232, 988}=label_id. But this might make the situation worse, because now a target (task) does not share labels across samples, e.g., if sample B interacts with target 12 and 232 only, originally targets (tasks) 12 and 232 would have two positively labeled data points, but now those two samples become totally different. Would appreciate any suggestions! Side note: I'm using simple classifiers such as MLP or SVM. If there are any specific methods designed for imbalanced data (which I've never heard of), that would also be wonderful. AI: It's the opposite: if you had such a multi-class problem where a class represents a subset of labels, then it might help to transform it into a multi-label problem in order to have more instances for every label. In your case, it's very likely that the multi-class setting will make the imbalance problem worse, with very few instances for most classes. The exception would be if there are very few possible combinations of labels. Essentially training a model requires a sufficiently large representative sample, in particular for every class/label. I'll be blunt: Every target which has only 1 or 2 positive instances is probably not usable at all. An obvious problem you'll have with those cases is that you can't test the model reliably with (at best!!) a single positive instance in the test set. And you certainly can't build a regular ROC curve.
H: Am I over-complicating stuff? I'm trying to classify some 1-D time series data, so I used a simple 1D CNN and fine-tuned the model via Bayesian Optimization (nothing fancy, just used the Keras tuner). And I got very good results (this is on the test dataset, obviously): But then I saw a method using Continuous Wavelet Transform, we convert the 1-D time series to 2-D scaleograms (frequency vs time graph) and input it to a 2-D CNN. I tried this, and even used transfer learning and Bayesian Optimization, yet, I constantly overfitted the model, and my validation accuracy stuck at 43% while my validation loss got higher and higher (as far as I know, this is overfitting, where the model doesn't learn anything). So, my question is this: should I focus on the 1D-CNN, or should I try to improve in the second method? (I have to add one thing: this is for a research paper; so the only reason I started to work on the second method was the fact that my advisor told me that the second method will add "scientific value" to the paper (which I strongly disagreed) and will make the accepting of the paper more "probable.") AI: Academics is what got our industry, and world, to where it is today. With that said, academics is also one of the most vapid worlds I have ever had the misfortune to be a part of. I can still remember my prof wasting my time for the sake of the "story", as if I'm writing a novel, not a research paper. One of the issues with academics is there's never a "good enough". I don't know if you're overcomplicating things; if the objective is to always get better test results, you can be working on one problem for the rest of your life. To me, those test results seem fine. If you think they can get better, go for it. If you're prof insists on you focusing on "the story", and pursuing avenues merely for their own sake, buckle up. As for transforming your data into a 2D space via some form of frequency analysis, my eyes immediately lit up. I'm familiar with things in that area, but you mentioned some cool new big words I've never heard of, and got excited to google. I'm sure that avenue has some merit, but my hunch is it has about as much practical utility as flames on the side of a sports car (considering your current results are so good). It will certainly impress the other academics around the water cooler, though. Your professors live in a world of endless improvement of abstract problems without practical constraint, so when you're in academics, the answer is no, you can never over-complicate. Your sanity may disagree, however. If you want practical constraints, be careful what you wish for, but for what it's worth I haven't had this particular issue in the ocean of practical constraints which is machine learning for a real product. I saw your note, and It reminded me of me. For what it's worth, I much prefer working on tangible problems with real constraints.
H: When standardizing data, does that imply that the mean and standard deviation will become 0, respectively 1? As title suggests, I've been wondering about how standardization works when trying to understand how Principal Component Analysis( PCA) works from this tutorial https://medium.com/analytics-vidhya/understanding-principle-component-analysis-pca-step-by-step-e7a4bb4031d9 AI: YES You can prove this by using properties of the mean and variance of a transformation, but the intuition is that subtracting the observed mean gives you a a new mean of zero (centers your data) and the dividing by the standard deviation compresses or expands the distribution to give a standard deviation of one. Note that there is no requirement to standardize your data before you run PCA.
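A quick numerical check with NumPy (equal up to floating-point error):

```python
import numpy as np

# Any sample with arbitrary mean and spread
x = np.random.normal(loc=10.0, scale=3.0, size=1000)

# Standardize: subtract the sample mean, divide by the sample standard deviation
z = (x - x.mean()) / x.std()

print(z.mean())  # ~0
print(z.std())   # ~1
```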
H: How to reset ML model's memory? I have been working on a binary classification problem using algorithms such as Random Forest, neural networks, boosting methods and logistic regression. However, during my model building process, I tweaked my model based on the performance on the test set (X_test). Meaning, I do the below: step 1) I apply .fit() on train data and assess the performance (identify best parameters through GridSearchCV); step 2) Later, I apply .predict() on test_data. When performance was not good on test_data, I did the below: a) Changed the algorithm (or hyperparameters, cv folds, scoring etc.) and repeated step 1) and step 2). I found out by reading online that this is not a good approach, as I am exposing the model to test data (multiple times) and the model may overfit to my test_data (and not perform well in future for new data from the real world). So, now I want to erase my model's memory/make it unsee whatever it has already seen. How can I reset ML model memory? Would resetting my Jupyter notebook, laptop etc. make it forget everything? AI: Your mistake is that you make adjustments based on the test data performance and then retest on the same test data when you think you’ve made an improvement. In “regular” machine learning, say a linear regression, you fiddle with the regression parameters until you find a minimal loss value. That’s essentially what you’re doing here. You fiddle with the model hyperparameters on the training data and then test them out on the test data. This risks overfitting the hyperparameters to the test data in the same way that parameters are fit to the in-sample data. In other words, you risk tuning your hyperparameters to fit the test data, rather than giving the model good ability to generalize.
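In practice the fix is to do all tuning inside cross-validation on the training data and touch the test set only once at the very end; a minimal scikit-learn sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data standing in for the real dataset
X, y = make_classification(n_samples=977, n_features=20, weights=[0.77], random_state=0)

# Hold out a test set that is never used for tuning
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# All hyperparameter fiddling happens inside cross-validation on the training data
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None]},
    scoring="f1",
    cv=5,
)
grid.fit(X_train, y_train)

# The test set is touched exactly once, for the final generalisation estimate
print(grid.score(X_test, y_test))
```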
H: PCA result interpretation I'm studying PCA and I'm trying to apply this method to mtcars dataset in R. This is my code: library(usethis) library(devtools) library(ggbiplot) mtcars.pca <- prcomp(mtcars[,c(1:7,10,11)], center = TRUE,scale. = TRUE) mtcars.country <- c(rep("Japan", 3), rep("US",4), rep("Europe", 7),rep("US",3), "Europe", rep("Japan", 3), rep("US",4), rep("Europe", 3), "US", rep("Europe", 3)) ggbiplot(mtcars.pca,ellipse=TRUE, labels=rownames(mtcars), groups=mtcars.country) # Results for Variables res.var <- get_pca_var(mtcars.pca) res.var$coord # Coordinates res.var$contrib # Contributions to the PCs res.var$cos2 # Quality of representation # Results for individuals res.ind <- get_pca_ind(mtcars.pca) res.ind$coord # Coordinates res.ind$contrib # Contributions to the PCs res.ind$cos2 # Quality of representation what is the meaning of the res.var$contrib? how can I interpret this values? AI: As mentioned by your comment, the contrib column holds all the contributions to the principal components. What this means more specifically is explained well by user chl in this answer on PCA an variable contributions: If you have a "PCA" object constructed using FactoMineR::PCA, then variable contribution values are stored in the $var$contrib slot of your object. The contribution is a scaled version of the squared correlation between variables and component axes (or the cosine, from a geometrical point of view) --- this is used to assess the quality of the representation of the variables of the principal component, and it is computed as $\text{cos}(\text{variable}, \text{axis})^2 \times 100$ / total $\text{cos}^2$ of the component. In other words, it shows how much each variable contributes to the creation each principal component. The value is a percentage and all values for each principal component add up to 100%.
H: Recommend different product using NearestNeighbour I am working on creating a recommendation system which suggests product for the user, based on the other user's data from the same region. My dataset is as below UserId Product Region Rating 1 A R1 1.23 2 A R1 1.23 3 A R1 1.24 4 B R1 1.45 5 B R1 1.23 I am using NearestNeighbour algorithm to find the k nearest neighbors First for the preparation step, I did get dummies dummies = pd.get_dummies(df) This will one hot the columns (userId and Product) Applying NearestNeighbour Not the exact code.. Just writing a part of it. neigh = NearestNeighbors(algorithm="euclidean") neigh.fit(dummies) Getting 5 nearest neighbors using kneighbours method neigh.kneighbors(input_1,5, return_distance=True) I passed first row of dummies as input_1. I did receive the result. But the recommendations were row index [2,3 5] Which makes sense too. The region, rating, and product are almost the same, and hence it returns the index 2, 3 and 5. In this case, First two neighbours were the same product(Product A). only the third one was the different product (Product B). In my dataset I have more than 60k rows. Even with k=50, I am getting all the index with the same product. I tried another approach, After generating dummies, I took the first line (for input) and deleted all the rows containing product A related, and then ran the fit and kneighbors. - but it doesn't seem to be efficient. It's more like training model again and again for each row. Question: Is the approach or algorithm totally wrong? How to recommend a different product using the NearestNeighbour. AI: A few remarks: I don't see why the user id is included in the features since the goal is to find similar products. It should probably be removed. Actually a different approach would be to first cluster the users based on their ratings of the products in order to find groups of similar users, and then obtain the k-NN among a specific group of similar users. There might be a problem here about the fact that the instances don't contain any information to match similar products. If there's a description of the product, the words could be used to find products which are semantically similar. Technically k-NN doesn't train a model, it just stores the instances and computes the similarity for every instance. In theory at least it can easily be optimized to select a subset of instances. It might not be easy to adapt the predefined library functions (I don't know), but it's easy to implement the algorithm manually.
H: Binary Logistic Regression in R on the dataset, Titanic I am new to R and Model Learning Algorithm. I am trying to perform Binary Logistic Regression on the training set using the Titanic dataset which is provided by default from R. I am running the algorithm on the set with the variable, Survived as the outcome variable. The variable, Survived contains Yes and No values. I am splitting the dataset into two set, training(40) and test(60). The data look like this below, Titanic Data #Binary Logistic Regression #Import dataset, Titanic data(Titanic) #Load data to the example as data.frame example<- as.data.frame(Titanic) #Add a new column, Country to determine on where they are born example['Country'] <- NA #Declare a vector of unique country countryunique <- array(c("Africa","USA","Japan","Australia","Sweden","UK","France")) #Declare an empty vector new_country <- c() #Perfor looping through the column, Country for(loopitem in example$Country) { #Perform random selection of an array, countryunique loopitem <- sample(countryunique, 1) #Load the new value to the vector new_country<- c(new_country,loopitem) } #Override the Country column with new data example$Country<- new_country #Convert the column to factor but the Freq as numeric example$Class<- as.factor(example$Class) example$Sex<- as.factor(example$Sex) example$Age<- as.factor(example$Age) example$Survived<- as.factor(example$Survived) example$Country<- as.factor(example$Country) example$Freq<- as.numeric(example$Freq) #Split the dataset to training and test set. set.seed(20) sample_size <- floor(0.6 * nrow(example)) test_index <- sample(seq_len(nrow(example)), size = sample_size) #Load data into test for 60 percentage test <- example[test_index,] #Load data into training for 40 percentage training <- example[-test_index, ] #Logistic regression modelling mod.lg <- glm(Survived~., family=binomial(), data=training); #Provide the summary of the model summary(mod.lg) The summary of the model is shown below. Call: glm(formula = Survived ~ ., family = binomial(), data = training) Deviance Residuals: 1 4 5 7 10 12 15 16 21 22 23 26 30 -0.0000040454 -0.0000024660 -0.0000104674 -0.0000024921 -0.0000107568 -0.0000000211 -0.0000000211 -0.0000053423 0.0000107568 0.0000041004 0.0000005560 0.0000103920 0.0000024086 Coefficients: (1 not defined because of singularities) Estimate Std. Error z value Pr(>|z|) (Intercept) 48.8492 822876.6829 0.000 1 Class2nd -43.8783 1592352.1656 0.000 1 Class3rd -39.2030 351691.5041 0.000 1 ClassCrew -75.3682 822888.6960 0.000 1 SexFemale -24.5969 819055.3208 0.000 1 AgeAdult 76.0607 827305.0519 0.000 1 Freq -0.6793 1165.4986 -0.001 1 CountryAustralia -74.3782 849754.8545 0.000 1 CountryFrance 24.4715 895175.9026 0.000 1 CountrySweden -47.8800 115169.7337 0.000 1 CountryUK 53.9582 1576877.4347 0.000 1 CountryUSA NA NA NA NA (Dispersion parameter for binomial family taken to be 1) Null deviance: 17.3232395027816 on 12 degrees of freedom Residual deviance: 0.0000000005291 on 2 degrees of freedom AIC: 22 Number of Fisher Scoring iterations: 25 I want to know on whether I am on the correct path to implementing Binary Logistic Regression on the dataset, Titanic and noticed that the result of the summary of the model contain many 0.000 on the third column.. How to fix this issue? How to interpret the summary of the model? Thank you. AI: The small z-score (third column) tells you that there is "a lot of" uncertainty regarding your estimated coefficients. This is also expressed by the "large" p-value (=1). 
Essentially this means that your model did not learn anything useful. When you look at the confidence band of your estimated coefficients confint(mod.lg), you see that the coefficients could be negative or positive (this is what a "high" p-value indicates, that coefficients are not statistically different from zero). Why is that? You have very little training data. Two degrees of freedom left is way too little. You seem to assign "country" randomly. There is no useful information in this random variable. I suggest using other data with more observations. Logistic regression (with R-Labs) is very well explained in "Introduction to Statistical Learning" (Ch. 4). Maybe you have a good read of the chapter and try the "Labs" so to get a sound idea of Logit.
H: If a feature has 4 unique values (number of products: 1, 2, 3, 4), should I treat it as a categorical or discrete variable? I am using bank churn data (https://www.kaggle.com/kmalit/bank-customer-churn-prediction/data). There is a column in the data called NumOfProducts that has 4 unique values, so should I treat that as a categorical variable and make dummy variables, or leave it as it is? Please mention why I should do that. AI: I would suggest you use it with its semantic meaning: the variable is numerical (of discrete type). As such, the ordering between its values has a specific meaning and a model could learn features such as: $$NumOfProducts \bowtie a,$$ where $\bowtie \, \in \{<,\le, =, \neq, \ge, >\}$ and $a \in \{1,2,3,4\}$. Meanwhile, if you treat $NumOfProducts$ as a nominal, categorical variable, then you lose the semantic ordering between values and, in that case, $\bowtie \, \in \{=,\neq\}$ only.
H: Feature classification - am I doing it right? I have a system where i get as input array of feature strings: ["kol","bol","sol","nol"] The length of this array is dynamic, i can get 2, 4 or 6 etc, total features <20 I need to make a decision according to this array, the decision is another string: x = ["feature1","feature5","feature3","feature8"] #in y = "john" #decide What I end up doing is creating a table, 1 if exist, 0 otherwise, for each training set (dataframe pandas): feature1 feature2 feature3 feature4 feature5... decision 1 0 1 0 1 1 (john mapped to 1, Ly to 2, etc) I feed this into a Decision Tree Classifier using sklearn. (DecisionTreeClassifier) I train it with 100+ input feature arrays and desired outcomes. It works, but i do have a feeling that it won't really provide value if the input will be different than trained data, because there is no real meaning/weight to these binary values. These features strings comes from a Bag of Words in which if appear on a text, i extract it, to create a well defined set of features to train/predict. can I, or should I change the values from 1/0 to a more weighted ones? how do i get them? Is this a right approach assuming i have a bag of words in which i look for in a text and produce features that both in the text and the bag. AI: This looks closely similar to text classification. The main concept in any supervised classification is that the model receives the same features (in the same order) when it is applied as when it was trained. This is why traditionally the bag of word representation is used: every word in the vocabulary is assigned an index $i$ to be represented as feature $i$. The value of the feature can be boolean (1 if present in the instance, 0 otherwise) or numerical (frequency of the word in the instance, or some more complex value like TFIDF). The meaning of these feature is simple: it tells the model whether a particular word is present or not. The model calculates how often a particular label is associated with a particular word. Thus in a decision tree the model is made of conditions such as: "if the instance contains word A and does not contain word B and contains word C then the label is Y". Crucially, the vocabulary is fixed at the training stage. This implies that any new word found in the test instances cannot be used at all. This is the problem of out-of-vocabulary (OOV) words. It's also usually recommended to remove the least frequent words, because they likely happen by chance and cause a high risk of overfitting (see previous link). Overfitting is when the model thinks that there's a strong association between a particular word and a label even though it only had one or two examples which happened by chance.
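To illustrate the difference between the 1/0 table and weighted values, a small sketch with scikit-learn's vectorizers on made-up texts:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Toy texts standing in for the real documents
texts = ["kol bol sol", "bol nol nol bol"]

# Binary presence/absence features (the 1/0 table described in the question)
binary = CountVectorizer(binary=True).fit_transform(texts)

# TF-IDF: words frequent in a document but rare in the corpus get higher weights
weighted = TfidfVectorizer().fit_transform(texts)

print(binary.toarray())
print(weighted.toarray())
```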
H: Trying to perform elastic-net regression in R I am new to R and Elastic-Net Regression Model. I am running Elastic-Net Regression Model on the default dataset, titanic. I am trying to obtain the Alpha and Lambda values after running the train function. However when I run the train function, the output keeps on lagging and I had to wait for the output but there is no output at all. it is empty.... I am trying Tuning Parameters. data(Titanic) example<- as.data.frame(Titanic) example['Country'] <- NA countryunique <- array(c("Africa","USA","Japan","Australia","Sweden","UK","France")) new_country <- c() #Perform looping through the column, TLD for(loopitem in example$Country) { #Perform random selection of an array, countryunique loopitem <- sample(countryunique, 1) #Load the new value to the vector new_country<- c(new_country,loopitem) } #Override the Country column with new data example$Country<- new_country example$Class<- as.factor(example$Class) example$Sex<- as.factor(example$Sex) example$Age<- as.factor(example$Age) example$Survived<- as.factor(example$Survived) example$Country<- as.factor(example$Country) example$Freq<- as.numeric(example$Freq) set.seed(12345678) trainRowNum <- createDataPartition(example$Survived, #The outcome variable #proportion of example to form the training set p=0.3, #Don't store the result in a list list=FALSE); # Step 2: Create the training mydataset trainData <- example[trainRowNum,] # Step 3: Create the test mydataset testData <- example[-trainRowNum,] alphas <- seq(0.1,0.9,by=0.1); lambdas <- 10^seq(-3,3,length=100) #Logistic Elastic-Net Regression en <- train(Survived~. , data = trainData, method = "glmnet", preProcess = NULL, trControl = trainControl("repeatedcv", number = 10, repeats = 5), tuneGrid = expand.grid(alpha = alphas, lambda = lambdas) ) Could you please kindly advise on what values are recommended to assign to Alpha and lambda? or what I am missing in the R code? Thank you AI: $\alpha$ is the regulation parameter. From the glmnet vignette: alpha is for the elastic net mixing parameter α, with range α∈[0,1]. α=1 is lasso regression (default) and α=0 is ridge regression With Lasso (using the L1 norm), feature parameters can become zero, which actually means dropping them out of the model. $\lambda$ is a tunig parameter for the penalty term. Thus you need to tune $\lambda$ in order to find a good fit (i.e. proper "strength" of regulation). $\lambda$ needs to be tuned for every $\alpha$ (as different $\alpha$ actually gives a different model). You may have a look at the glmnet vignette or at this article discussing the application Ridge in R. Also see Introduction to Statistical Learning, Chapter 6.2 for details.
H: What thought processes do people use for generating priors on a variable's probability distribution? Example Consider a block of text with a variety of sentence types within it, of which there are 7. Within a text these sentences will be more or less likely to appear, dependent on where in the text they sit, and which sentence types have preceded them. I'm looking into using Bayesian inference as an approach to working out the likelihoods of different sentence types appearing at certain points in the text, given where in the text they are and which sentence types have come before them. My problem The variable in question is the sentence type, and within Bayesian inference it is necessary to assume a prior distribution of the variable in order to precede, which is what I'm struggling with. Some sentence types will almost all be at the start or the end; others will be very unlikely to appear unless another type has come before it; and they have varying counts of how often they appear across a corpus of different texts, pictured below: Broader question Any ideas on the problem above? What are some more general thought processes to put to use in generating a sensible prior distribution? If you would suggest a different approach to tackling this problem then please say. Suggested solution This variable has some properties: There are a few outcomes that are very likely, with the rest being a lot less likely but fairly uniformly distributed. Some outcomes are a lot more likely depending on what has come before them, so the sentence structure matters I'm guessing there are some probability distributions that would suit property (1) quite well, but property (2) I'm more confused by. AI: In general the prior is simply calculated on the training data. At first sight this looks like a simple sequence labeling problem. If so the model can be trained with HMM or CRF.
H: How Decision Tree Classifier works? In particular i am using SKLearn with class DecisionTreeClassifier. I would really like to understand how the tree build itself in a simple visual way. Most explanations use virtue signaling math equations, and I am sure there is a pseudo way explaining this. Any simple article? How the tree is being created? a simple 2 features example. What it does if 2 same training set of features leads to different classes ? What it does if feature1+feature2 trained to output classA, and later we test for only feature1? would it predict A? Do the training Y values have meaning? for instance if Y classes are 1,2,3,4.. , does it look for a "meaning" in their value and it's relation to the numeric X values? or is it just a class without mathematical meaning? How is this concept different from finding a distance between features vectors? who provide better results and when ? What are ways of adjusting the classifier parameters(branches, depth, etc) to improve it? Hard to find a simple view about how this magic works without virtue signaling equtions. AI: I think the first issue here is that you don't have a clear understanding of supervised learning in general. Here are a few basics: The input for the training stage is a set of labelled instances. A labelled instance is a fixed-size vector of features values (the features must always be in the same order) and its corresponding label (or class). The output of training is a model. A model can be applied to unlabelled instances to predict their corresponding labels. Thus some of the questions can be answered in general: two different training sets lead to two different models. They are completely independent. The set of features is always fixed, it must be exactly the same at training and at testing. The goal of a supervised model is to find relations between the features and the labels, in order to predict as accurately as possible the label for any new set of features. But it doesn't know anything about the meaning of either, it's a purely statistical process, i.e. counting how often something happens. Distances between vectors are used in some learning algorithms (e.g. kNN), not in decision trees. Different learning algorithms rely on different methods, they are not fully comparable. Now a very brief introduction to decision tree learning to answer question 1: The algorithm progressively builds the tree starting from the root, one node at a time. When creating a node, the algorithm calculates for every feature how much information about the label is gained by knowing the value of the feature. This is calculated on the current training data at this node. It selects the feature $x_i$ which brings the highest amount of information about the label: the node is made of a condition such as $x_i$ == value, leading to two possible children nodes. The two children "receive" the part of the training data left after filtering the value: only the instance where $x_i$ == value for the node corresponding to true, and only the instances where $x_i$ != value for the node corresponding to false. Repeat the process of creating nodes until some condition is satisfied. When reaching a leaf, the algorithm simply assigns the majority label for the subset of training data that it "receives". The last question: There are several variants of the DT learning algorithm (and there can even be variants in their implementation), and each of them has several hyper-parameters which can be adjusted. 
Note that mathematical equations are not "virtue signaling"; they are needed for people to understand precisely how to implement the method.
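For concreteness, a minimal runnable sketch of the process described above using scikit-learn's DecisionTreeClassifier on made-up data, with the learned conditions printed as text:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: 100 instances, 4 features, 2 classes
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# The printed rules are exactly the "if feature <= threshold ..." conditions
# chosen greedily by how much information they give about the label
print(export_text(tree, feature_names=["f1", "f2", "f3", "f4"]))
```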
H: Why label encoding before split is data leakage? I want to ask why Label Encoding before train test split is considered data leakage? From my point of view, it is not. Because, for example, you encode "good" to 2, "neutral" to 1 and "bad" to 0. It will be same for both train and test sets. So, why do we have to split first and then do label encoding? AI: Imagine that after the split there is no "good" in the training data. If you had done the encoding after the split, then you would have no idea that there can be a "good". There you have your leakage. Of course, as you mention in the comments, this is a problem. Nevertheless, this problem is just the real world, where we do not have perfect information about the data that our system will be fed in production. That is why we must evaluate our model on unseen data. If you split after encoding, you are evaluating your model under a false premise of knowledge about that very unseen data.
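A concrete sketch of the failure mode with scikit-learn's LabelEncoder: fitting on the training split only makes the unseen category surface immediately instead of being silently encoded (toy lists stand in for real columns):

```python
from sklearn.preprocessing import LabelEncoder

train = ["bad", "neutral", "neutral", "bad"]   # "good" never appears in training
test = ["good", "bad"]

le = LabelEncoder().fit(train)
print(le.transform(train))  # works: [0 1 1 0]
le.transform(test)          # raises ValueError: y contains previously unseen labels
```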
H: Doubt about seed setting I'm doing some data analysis using R and I wrote set.seed(100) at the top of my code. My question is: can I set a random value for the seed? Is there a meaning for a specific value of the seed? AI: Setting the seed ensures that you can reproduce results. This becomes very important when you want to show results to a client. The specific value of the seed should not matter in most cases, but in some cases starting from one seed may give you worse results than another; in that case you should try multiple seed values.
H: neural network binary classification softmax logsofmax and loss function I am building a binary classification where the class I want to predict is present only <2% of times. I am using pytorch The last layer could be logosftmax or softmax. self.softmax = nn.Softmax(dim=1) or self.softmax = nn.LogSoftmax(dim=1) my questions I should use softmax as it will provide outputs that sum up to 1 and I can check performance for various prob thresholds. is that understanding correct? if I use softmax then can I use cross_entropy loss? This seems to suggest that it is okay to use if i use logsoftmax then can I use cross_entropy loss? This seems to suggest that I shouldnt. if I use softmax then is there any better option than cross_entropy loss? ` cross_entropy = nn.CrossEntropyLoss(weight=class_wts)` ##################m My network's last few layers are as below: Could I just change the last layer to sigmoid? I feel that it could break my network - self.batch_norm2 = nn.BatchNorm1d(num_filters) self.fc2 = nn.Linear(np.sum(num_filters), fc2_neurons) self.batch_norm3 = nn.BatchNorm1d(fc2_neurons) self.fc3 = nn.Linear(fc2_neurons, 2) self.softmax = nn.Softmax(dim=1) question 5) should I replace the last 2 lines from above with these? Let me know if there are any other choices I am using sigmoid after linear as I will get values between 0 and 1 and then I could use different probability cutoffs if required self.fc3 = nn.Linear(fc2_neurons, 1) self.sigmoid=nn.Sigmoid() And cross_entropy = nn.CrossEntropyLoss(weight=class_wts) question 6) And Loss shown as below? Let me know if there are any other choice #as class_wts have class weights BCE_loss=nn.BCELoss(pos_weight = torch.tensor (class_wts[1]/class_wts[0])) ++++++++++++++++++++++++++++++++++++++++update1 can I use BCEWithLogitsLoss as it is more stable than sigmoid+BCE loss? The documentation says that This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability. what I am thinking is that I will keep my last activation and loss as below self.fc3 = nn.Linear(fc2_neurons, 1) BCEwithlogits_loss= nn.BCEWithLogitsLoss(pos_weight = torch.tensor (class_wts[0],class_wts[1])) initially I will predict to class 1 if results of my last activation are greater than 0 as sigmoid(0)=0.5. Then if I want to use different cutoffs then either I could change cutoff 0 to some different value or get logits from model, convert to probability using sigmoid and then make new predictions. For example if I want 0.9 probability cut off then for logits I will use cutoff of 2.2 as sigmoid(2.2) is 0.9 AI: Some elements to answer your questions: The softmax function is indeed generally used as a way to rescale the output of your network in a way such that the output vector can be interpreted as a probability distribution representing the prediction of your network. In general, if you want your network to make a prediction for the class of the input data, you just chose to return the class which as the highest "probability" after having applied the softmax function. In the case of binary classification, this would correspond to a threshold of 0.5. However, if you want to take into account some "degree of certainty" feel free to use higher thresholds. Absolutely. The cross entropy loss is used to compare distributions of probability. 
Cross entropy is not adapted to the log-probabilities returned by logsoftmax. Prefer using NLLLoss after logsoftmax instead of the cross entropy function. The results of the sequence softmax->cross entropy and logsoftmax->NLLLoss are pretty much the same regarding the final loss. Since you are doing binary classification, you could also use BCELoss which stand for binary cross entropy loss. In this case you do not need softmax but rather a function mapping your output to the interval [0,1] such as Sigmoid. Some alternatives may exist, but are useful only in specific cases and type of data. So I would suggest sticking to the loss cited before. Since you have unbalanced data you can make use of the parameter "weight" which is available with both Cross entropy loss and the NLLLoss pytorch implementation. It can be used to put more weight on the less represented class of your dataset in the computation of the loss. Edit: The modifications and your understanding in your points 5) and 6) are indeed correct if you decide to replace the Cross Entropy loss with BCELoss, except for the weight assignation. The weight parameter for BCELoss works differently than the other previously cited loss functions. You must actually assign a weight to each element of your batch. One way to do it (Assuming you have a labels are either 0 or 1, and the variablelabels contains the labels of the current batch during training) First, you instantiate your loss: criterion = nn.BCELoss() Then, at each iteration of your training (before computing the loss for your current batch): criterion.weight = labels * class_wts[1] + (1-labels)*class_wts[0] # ... # loss = criterion(predictions,labels) # ... This will have the effect of assigning the weight class_wts[1] to the positive examples and class_wts[0]. In any case, your code using Cross Entropy should work and gives pretty similar results.
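As an illustration of the BCEWithLogitsLoss route discussed in the update, a minimal self-contained sketch (the tiny linear model, the batch and the class weights are toy stand-ins for your network and data):

```python
import torch
import torch.nn as nn

# Toy stand-ins: a tiny model with a single output logit, a batch of inputs,
# 0/1 labels, and class weights similar to class_wts in the question
model = nn.Linear(10, 1)
x = torch.randn(8, 10)
labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 1]).float()
class_wts = [0.51, 25.0]

logits = model(x).squeeze(1)             # shape: (batch,), no sigmoid inside the network

# pos_weight > 1 up-weights the rare positive class
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(class_wts[1] / class_wts[0]))
loss = criterion(logits, labels)
loss.backward()

# At prediction time, convert logits to probabilities and apply any cutoff
probs = torch.sigmoid(logits)
preds = (probs > 0.9).long()             # e.g. a 0.9 probability threshold
```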
H: Looking for in depth knowledge in evalution metric I am dealing with an unbalanced dataset. The total instances in my dataset is 1273 and the Yes class is 174 and No class is 1099. So the unbalance ratio is like 1:6. Now I know Recall measures how many yes classes we might have missed in the entire sample. I used Weka to classify the dataset. I was using Naive Bayes classifier and got recall 0.413. As per the definition of recall I can find out how many yes class I might have missed. 1273*41.3% = 525.749 However, I wonder how could I miss 525 yes classes where the number of yes classes itself 174. Any help would be appreciated AI: I think this is just a confusion with the definition of recall. You can better remember it based on the positive class. When we talk about recall, we should look at the actual count of positive class in the dataset. In your case, if you take 'Yes' as the positive class, there are 174 entries in this class. In simple terms recall measures the ratio of actual positive class in the prediction against the actual positive class in the population whereas precision measures the ratio of actual positive class in the prediction against the total predicted positive class. Both case, only the denominator changes. Your model gives .413 as recall meaning out of the 1273 records in the population where 174 are the real positive class entries, 'x' number is classified as positive where x/174 = .413. This gives x = .413*174 = 71.862. This means your model could classify only 71 as 'Yes' out of 174 I found this excellent video lesson for this - https://www.youtube.com/watch?v=2osIZ-dSPGE
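The same arithmetic can be checked with scikit-learn's recall_score on a toy example (1 = Yes):

```python
from sklearn.metrics import recall_score

# y_true: actual labels, y_pred: model predictions
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0]

# recall = true positives / actual positives = 1 / 4 here
print(recall_score(y_true, y_pred))  # 0.25
```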
H: BERT - The purpose of summing token embedding, positional embedding and segment embedding I read the implementation of BERT inputs processing (image below). My question is why the author chose to sum up three types of embedding (token embedding, positional embedding and segment embedding)? AI: Because these are the pieces of information needed to accomplish the loss' tasks, that is, both masked language modeling (i.e. predicting the masked tokens) and next sentence prediction (i.e. predict whether the second segment followed the first segment in the original text). These are the specific reasons: Token embeddings are needed to identify the word/subword being processed as input, as well as the token being masked. Positional embeddings are needed because without them, the Transformer cannot distinguish the same token in different positions (unlike recurrent networks like LSTMs). For more details, you can refer to this answer. Sentence embeddings are needed for the secondary task of the loss: next sentence prediction. They are needed to easily tell apart the different parts of the input. For more details, you can refer to this answer. Also, note that a normal Transformer architecture already adds token embeddings and positional embeddings. The reason why these embeddings are added up instead of e.g. concatenated can be found in this answer. Adding them up you basically are learning the optimal way of combining them, instead of fixing it a priori.
H: Does Multicollinearity affect Neural Networks? Can someone explain to me like I'm five why multicollinearity does not affect neural networks? I've done some research and neural networks are basically linear functions being stacked with activation functions in between; now if the original input variables are highly correlated, doesn't that mean multicollinearity happens? AI: Multicollinearity in linear or logistic regression does not impact model performance; it only impacts how the model coefficients are interpreted. Multicollinearity leads to a problem where a small change in one variable can lead to drastic changes in the coefficients. A neural network is a black-box model by nature, so if it performs to a given expectation (good accuracy), we never know the impact of multicollinearity. That is why I think most blogs say that neural networks are not affected by multicollinearity.
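If you do want to quantify multicollinearity among the inputs before training any model, the variance inflation factor is a common check; a sketch with statsmodels on made-up correlated columns:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

# Toy data with two deliberately correlated columns
rng = np.random.default_rng(0)
a = rng.normal(size=500)
X = pd.DataFrame({
    "a": a,
    "b": a * 0.9 + rng.normal(scale=0.1, size=500),
    "c": rng.normal(size=500),
})

Xc = add_constant(X)  # VIF is conventionally computed with an intercept term
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(Xc.shape[1])],
    index=Xc.columns,
)
print(vif)  # values above roughly 5-10 are usually read as strong collinearity
```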
H: Naive Bayes as a baseline model in an NLP task I want to use the Naive Bayes model as a baseline in an classification task that I am working. I found this really useful tutorial: https://www.geeksforgeeks.org/applying-multinomial-naive-bayes-to-nlp-problems/ and I want to apply it into my problem. My dataset has a dataframe form with rows the texts and coloums the labels, original text, preprocessed text, etc. The code that I have is this one # cleaning texts import pandas as pd import re import nltk from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from sklearn.feature_extraction.text import CountVectorizer nltk.download('stopwords') corpus = [] for i in range(0, len(df)): text = df['Preprocessed_Text'][i] text = ''.join(text) corpus.append(text) # creating bag of words model cv = CountVectorizer(max_features = 1500) X = cv.fit_transform(corpus).toarray() y = df.iloc[:, 5].values # splitting the data set into training set and test set from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.25, random_state = 0) from sklearn import naive_bayes # fitting naive bayes to the training set from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import confusion_matrix #classifier = GaussianNB() classifier = MultinomialNB() classifier.fit(X_train, y_train) classifier.score(X_test, y_test) I have the following two questions: In the pre-processed part the is it correct this one text = ''.join(text)? Or a space (text = ' '.join(text)) is needed? By this approach we apply first the bag-of-words in the whole dataset and then split it into training and testing. Is there any way to do this in the opposite way? Therefore, first splitting the texts into training and testing and then apply bag of words. Update: I have already split the texts into training validation and testing in a listing form, thus # evaluation sets texts_train auhors_train texts_valid authors_valid texts_test authors_test Then I'm applying the Bag of Words method only in the training and transform the other evaluation sets with the following way # creating bag of words model from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer(max_features = 1500) cv.fit_transform(texts_train).toarray() bow_train = cv.transform(texts_train) bow_vald = cv.transform(texts_valid) bow_test = cv.transform(texts_test) Last step is to fit a classifier and measure its accuracy # Fitting naive bayes to the training set from sklearn import naive_bayes from sklearn.naive_bayes import MultinomialNB classifier = MultinomialNB() classifier.fit(bow_train, authors_train) classifier.score(bow_vald, authors_valid) AI: Assuming that the Preprocessed_Text column contains a regular string, you don't have to do any kind of join since you variable text is a single string. It's indeed recommended to calculate the bag of words representation only on the training set. It's "cleaner" in the sense that it prevents any possible data leakage, and it's more coherent with respect to applying the model to any fresh test set. However in this case there might be out-of-vocabulary words in the test set, this is normal. You can do this by splitting the "preprocessed text" data first, then calling cv.fit_transform() only on the training instances. Later the test set instances are encoded using cv.transform(), which just maps the new text using the previous representation.
H: Should mini-batches contain an even mix of classes or can this be random? I'm creating mini-batches to put into a CNN. Is it best to try and get an even mix of classes into each mini-batch (Scenario 1), or can this/should this be a random assortment of my classes (Scenario 2)? Scenario 1: I have 2 classes and a mini-batch size of 32. I should try and have 16 samples from each class in each mini-batch. Scenario 2: Same as 1, but I have a random distribution of samples in each mini-batch. So this could be 20 of one class and 12 of the other. Or even 32 of one class and none of the other on occasion. AI: Theoretically, it should not matter, as long as there are a large number of mini-batches and a good balance of classes within the training data.
H: Difference between facebook Hydra's + and ++ operators If you already been following some of the best practices in Deep Learning, Facebook's Hydra might be a familiar name to you. Going through the Hydra Documentation, I came across two important operators + and ++. The documents mentions the use of these operators as follows: Appending a config value : +foo.bar=value Appending or overriding a config value : ++foo.bar=value I need some help understanding these operators, as using them in the code technically makes no difference in the overall output. AI: Suppose we have the following simple hydra app: ├── config.yaml └── my_app.py # my_app.py import hydra from omegaconf import OmegaConf, DictConfig @hydra.main(".", "config") def app(cfg: DictConfig): print(OmegaConf.to_yaml(cfg)) if __name__ == "__main__": app() # config.yaml a: b: c $ python my_app.py a: b: c Let's take a look at the four methods of "Modifying the Config Object" from the docs page you linked: Overriding a config value: foo.bar=value Appending a config value: +foo.bar=value Appending or overriding a config value: ++foo.bar=value Removing a config value: ~foo.bar, ~foo.bar=value Overriding a config value: $ python my_app.py a.b=y a: b: y $ python my_app.py a.x=y Could not override 'a.x'. To append to your config use +a.x=y Key 'x' is not in struct full_key: a.x object_type=dict Appending a config value: $ python my_app.py +a.b=y Could not append to config. An item is already at 'a.b'. Either remove + prefix: 'a.b=y' Or add a second + to add or override 'a.b': '++a.b=y' ... $ python my_app.py +a.x=y a: b: c x: y Appending or overriding a config value: $ python my_app.py ++a.b=y a: b: y $ python my_app.py ++a.x=y a: b: c x: y Removing a config value: $ python my_app.py ~a.b a: {} $ python my_app.py ~a.b=c a: {} $ python my_app.py ~a.b=y Could not delete from config. The value of 'a.b' is c and not y. ... $ python my_app.py ~a.x Could not delete from config. 'a.x' does not exist. ... $ python my_app.py ~a.x=y Could not delete from config. 'a.x' does not exist. To summarize: Overriding a config value (foo.bar=value) works only if the given key is already present in the config Appending a config value (+foo.bar=value) works only if the given key not yet present in the config Appending or overriding a config value (++foo.bar=value) works whether or not the key is present in the config Removing a config value (~foo.bar, ~foo.bar=value) works only if the given key or key-value pair is in the config.
H: Latent space vs Embedding space | Are they same? I am going through variational autoencoders and it is mentioned that: continuity (two close points in the latent space should not give two completely different contents once decoded) and completeness (for a chosen distribution, a point sampled from the latent space should give “meaningful” content once decoded). so is latent space merely an embedding space where two similar entities are mapped nearby in the vector? AI: Any embedding space is a latent space. I'm not expert in this specific topic, but in general the term "latent space" refers to a multi-dimensional space in which elements are represented but their representation is not directly interpretable and/or observable. Typically this is in contrast to a space where dimensions have an interpretable meaning, for example in the traditional bag of words representation a dimension corresponds to a particular word. The term "latent" applies to some variable which is not directly observable, for example the "latent variable" in a HMM is the state that the model tries to infer from the observations. It's sometimes called the "hidden variable". Naturally a latent space is relevant only if it is meaningful with respect to the represented objects and/or the target task. This is what these sentences mean.
H: how to calculate evolution change in percentage in a dataset compared to itself scenario: let's say I have a table for the number of customers visiting a shop per day. Now I want to calculate how much change has happened in terms of number of visitors throughout the past 30 days and come up with a single value that tells me the change rate (in percentage). But I do not want to compare the changes of the current month with the values of the previous month(s); instead I want to compare the current month with itself! I have some naive solutions but I am not a data scientist and am not sure if the solution would give me a meaningful value at all! My first question: is this a meaningful question at all (to compare a data set with itself)? My second question: if so, how would you approach the solution? Solution one: one solution that I thought about was to compare the value of the first day with the value of the last day, but then it seemed a bit stupid because values could fluctuate randomly in between and the result would not represent the fluctuations. Solution two: another solution that I could think of was to compare the value of each day (within the month) with the value of the day before it and then calculate the change ratio and finally accumulate all the ratios and come up with a single value! Side note: I would like to get both positive and negative percentages depending on the change ratio. AI: Some suggestions: Your ideas in the solutions were basic but solid. For example the fluctuations in the second idea can be smoothed out with a simple moving average, and then applying your idea makes sense. Another idea is to set the value of the first day as a threshold and calculate the total negative (drop below the threshold) and positive (rise above the threshold) change through the month. Let's say in a constant month (a month in which all 30 days have exactly the same value) the percentage of change is 0%. It means that the sum of above-threshold and under-threshold changes can give you an estimate of the change in percentage relative to the first day. An example is [5, 10, 50, 100, 5, 1] visits during 6 days; we set the value of the first day ($5$) as the threshold. Then the day-by-day rises and falls relative to that threshold are [0%, +100%, +900%, +1900%, 0%, -80%] (why?!) and you can use any numerical calculation to come up with a final number, e.g. a 2900% rise and an 80% fall, which nets out to a 2820% rise at the end. Playing with the idea above gives many different methods of doing this. Instead of comparing each day to the first day, as you also mentioned, compare it to only the previous day and come up with a final change rate. Or consider a penalty for negative changes, e.g. if in one case values are strictly rising and the final change is +3000%, and in another case values fluctuate but the final change is 3000% again, you can give a weight to the second one so the difference between them is preserved.
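A small Python sketch of the threshold-based calculation above, using the same toy numbers:
import numpy as np

visits = np.array([5, 10, 50, 100, 5, 1])
threshold = visits[0]                               # first day as the reference
changes = (visits - threshold) / threshold * 100    # [0, 100, 900, 1900, 0, -80] in %
total_rise = changes[changes > 0].sum()             # 2900.0
total_fall = -changes[changes < 0].sum()            # 80.0
net_change = total_rise - total_fall                # 2820.0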
H: Applying a matching function for string and substring with missing values on a python dataframe I have programmed the following functionality: The function returns True when the two strings match position by position except where there is a "*" value, and False when they differ by at least 1 character. def matching(row1, row2): string = row1['number'] sub_string = row2['number'] flag = True i=0 if len(string) == len(sub_string): while i < len(string) and flag==True: if string[i] != "*" and sub_string[i] != "*": if string[i] != sub_string[i]: flag = False i+=1 else: flag = False return flag Assuming I have a dataframe with the column 'number', I want to apply this function to the dataframe in order to obtain the following format: | number | unique_id | | ------ | --------- | | 178*A8 | 0 | | 13**B4 | 1 | | 17***8 | 0 | | 82819B | 2 | | 13**B4 | 1 | I managed to write the unique_id with the following code, but it only works when the numbers match exactly. I would like to perform the same functionality but using the function above first. df['unique_id'] = pd.factorize(df['number'])[0] | number | unique_id | | ------ | --------- | | 178*A8 | 0 | | 13**B4 | 1 | | 17***8 | 2 | | 82819B | 3 | | 13**B4 | 1 | Edited: We will assume that matching will be done on a first-come, first-served basis. If the first value is 123*, all numbers matching('123*',X) == True will be assigned the same id. AI: This corresponds to a deduplication or record linkage problem. There are various ways to compare records (numbers in your case), but the main issue is almost always about the double loop: in the general problem, every possible pair of records must be compared. In case there are too many numbers for the double loop, you could implement the blocking technique described here. Your design may have an additional issue: your matching method is not transitive, i.e. you can have cases where $a$ matches $b$, $b$ matches $c$ but $a$ doesn't match $c$. Apparently you plan to solve this by picking the first match. This might not be optimal if the goal is to match as many numbers as possible. I'm not an expert at all with pandas but I doubt that there would be any predefined function which does what you need. factorize relies on strict equality, which is much simpler because it can collect all the unique values in one pass.
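Given the first-come, first-served rule from the edit, here is a minimal sketch of assigning the IDs; the helper works on plain strings rather than dataframe rows, and the column name is the one from the question:
def matches(a, b):
    # True if same length and characters agree wherever neither side is '*'
    return len(a) == len(b) and all(x == y or x == '*' or y == '*' for x, y in zip(a, b))

def assign_ids(numbers):
    reps, ids = [], []          # reps[i] holds the first number seen for id i
    for n in numbers:
        for i, r in enumerate(reps):
            if matches(n, r):
                ids.append(i)
                break
        else:                   # no existing id matched: create a new one
            ids.append(len(reps))
            reps.append(n)
    return ids

df['unique_id'] = assign_ids(df['number'].tolist())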
H: Explain MAAPE (Mean Arctangent Absolute Percentage Error) in simple terms (intermittent demand forecasting) In order to measure the accuracy of highly intermittent demand time series, I recently discovered a new accuracy measure that overcomes the problem of zero values and values close to zero when comparing a test forecast to the actual values. This is pretty useful when it comes to forecasting intermittent demand. I am able to understand the simple calculation of measures like RMSE and MAPE, however, when it comes to MAAPE I do struggle to understand the math behind it. I found this paper, which explains it in very theoretical terms: https://www.sciencedirect.com/science/article/pii/S0169207016000121 In the abstract it sums up the meaning of MAAPE like this: In essence, MAAPE is a slope as an angle, while MAPE is a slope as a ratio, considering a triangle with adjacent and opposite sides that are equal to an actual value and the difference between the actual and forecast values, respectively. However, I could not find any simple example of MAAPE's calculation. The easiest way to explain it to a customer would be some easy-to-understand visualizations or even a calculation done in Excel. AI: I don't think there is much to understand here. In cases where your error can sometimes be very large, you don't want your overall average to be skewed by these freak events. If $\alpha=\arctan\left(x\right)$ then: $\alpha\approx x$ when $x$ is small, and $\alpha\to\pi/2\approx 1.57$ when $x\to+\infty$. So if you have errors $E_i=\left(A_i -F_i\right)/A_i$ for actual ($A_i$) and forecast ($F_i$) at time $i$, transforming your errors using $\arctan$ and averaging the result (assuming $i=1\dots N$): $$ MAAPE=\frac{1}{N}\sum_{i=1}^N \arctan(\left|E_i\right|) $$ will simply give you a measure that is approximately equal to normal MAPE for small errors, and is only marginally increased for large ones. Arctan is not special in this respect. One could choose a sigmoid, for example. Excel supports ARCTAN: https://support.microsoft.com/en-us/office/atan-function-50746fa8-630a-406b-81d0-4a2aed395543
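Since the question asks for a simple worked example, here is a small sketch with made-up numbers showing why MAAPE stays bounded while MAPE explodes when an actual value is close to zero:
import numpy as np

actual = np.array([10.0, 0.1, 5.0])         # note the value close to zero
forecast = np.array([12.0, 1.1, 4.0])
ape = np.abs((actual - forecast) / actual)  # [0.2, 10.0, 0.2]
mape = ape.mean() * 100                     # ~347%: dominated by the near-zero actual
maape = np.mean(np.arctan(ape))             # ~0.62: each term is capped near pi/2 ~ 1.57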
H: Doubt about comparison of Machine Learning algorithms I read this article about the comparison of Machine Learning algorithms. According to this article, there are some characteristics that define the ML models' performance. I know that some models are defined as lazy and some as eager. Looking at the figure in the article, is it possible to tell which characteristics define a model as lazy as opposed to eager? Is this characteristic the speed of learning or the speed of classification? AI: Disclaimer: I didn't read the paper. This article is apparently a preprint, i.e. it didn't go through the peer reviewing process required to be published in a reputable journal or conference (or it was rejected). The content might be correct, but nobody checked. The paper is about a very specific task "DIAGNOSIS OF NETWORK PERFORMANCE ISSUES IN CLIENT-TERMINAL DEVICES", the experiments and conclusions apply only to this task and cannot be generalized to any ML problem. The diagram showing the characteristics of different learning algorithms is even more specific, because there's no standard way to measure these characteristics. So the authors used their own definitions, which might or might not truly measure these characteristics in the specific case of their experiment. It would be very risky to generalize anything from this figure imho. See this question about eager vs. lazy learning. It is correct that the figure shows two characteristics related to this: speed of learning is about the duration of training speed of classification is about the duration of testing, i.e. applying the model As mentioned in the linked question, a lazy learner just stores the training data. This is the case for kNN only in the figure: short training, long classification time. An eager learner (all the others) extracts parameters during the training stage, so it has longer training time and shorter classification time.
H: How to do sentence segmentation without losing sentence's subject? I have some text with different lengths, I want to split it into separate clauses but I also want to preserve the subject. For example: # single subject Original: "Coffee is very good, but wasn't hot enough" split: ["Coffee is very good", "Coffee wasn't hot enough"] Original: "Joe was the top performer of last year's dance competition, he is also a good singer" split: ["Joe was the top performer of last year's dance competition", "Joe is a good singer"] # multiple subjects Original: "Delicious food service, but we struggled with the app." split: ["Delicious food service", "We struggled with the app"] I don't know how to achieve this, we can maybe split sentences based on punctuation and conjunctions (may not be accurate) but how do we preserve the subject? Please let me know if you need more information. AI: After a lot of research, I figured out how to replace pronouns with their respective subject. It makes use of neuralcoref which is a pipeline extension for spaCy 2.1+ which annotates and resolves coreference clusters using a neural network. However, it only works with spaCy v2 and Python 3.7. I tested it in a conda environment with the following tool versions python==3.7 spacy==2.1.0 neuralcoref The solution goes like this import spacy import neuralcoref nlp = spacy.load('en_core_web_sm') neuralcoref.add_to_pipe(nlp) doc = nlp("Coffee is good but it wasn't hot enough!") print(f'\n[REPLACED]:\n{doc._.coref_resolved}') # output Coffee is good but Coffee wasn't hot enough! More sample outputs [Enter your text]: Joe was the top performer of last year's dance competition, he is also a good singer [REPLACED]: Joe was the top performer of last year's dance competition, Joe is also a good singer [CONTINUE(Y/N)?]: y [Enter your text]: Paul was amazing and so was our waiter I loved the squash pizza and the dessert he recommended will definitely come back soon. [REPLACED]: Paul was amazing and so was our waiter I loved the squash pizza and the dessert Paul recommended will definitely come back soon. I am still trying to figure out a better way of splitting the sentence. Will update the answer once I figure this out.
H: Using Pandas.groupby.agg with multiple columns and functions I have a data frame which contains duplicates I'd like to combine based on 1 column (name). In half of the other columns I'd like to keep one value (as they should all be the same) whereas I'd like to sum the others. I've tried the following code based on an answer I found here: Pandas merge column duplicate and sum value df2 = df.groupby(['name']).agg({'address': 'first', 'cost': 'sum'}) The only issue is I have 100 columns, so would rather not list them all out. Is there a way to pass a tuple or list in the place of 'address' and 'cost' above? Something along the lines of column_list = df.columns.values.tolist() columns_first = tuple(column_list[0:68]) columns_sum = tuple(column_list[68:104]) AI: You could perhaps generate the dictionary using a dictionary comprehension, leaving out the grouping column itself. E.g. df2 = df.groupby(['name']).agg({col: 'first' if i<68 else 'sum' for i, col in enumerate(df.columns) if col != 'name'})
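Alternatively, a sketch built from the two column lists in the question (this assumes columns 0-67 should keep the first value, columns 68-103 should be summed, and 'name' sits in the first group):
column_list = df.columns.tolist()
columns_first = [c for c in column_list[0:68] if c != 'name']   # drop the groupby key
columns_sum = column_list[68:104]
agg_dict = {**{c: 'first' for c in columns_first},
            **{c: 'sum' for c in columns_sum}}
df2 = df.groupby('name', as_index=False).agg(agg_dict)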
H: What to do about a predictor with a feature importance so high above the others that it is the only determining factor in my machine learning model? I created a logistic regression model with scikit-learn which predicts the outcome of an NFL football game. It predicts the result based on features such as the team's record, opponent's record, pass yards, rush yards, etc. I created the model and calculated the coefficients of each predictor and found the team's record and the opponent's record have huge influences on whether the team wins or not. In fact, whoever has the better record, the model will predict them to win no matter what their other features say. Here is a graph showing the coefficients of each of my predictors, ordered by importance: I know that record is important in determining the result of a football game, but there are plenty of factors that also should have an influence. The first idea I thought of was to decrease the weight of the team's records so that other predictors would come into play. However, I don't think that this is the right thing to do. Is there a better model for this problem or should I get more data? Any ideas? AI: With the info provided, as a generic approach I would have a look at: data quality: is your data complete and reliable, with the info you would expect and not too many missing values? Sometimes the clue is in the data quality before even getting to modeling. did you have a look at the correlation of each input feature with the target of interest? It might be that, before modeling and from a model-agnostic perspective, you find a very high correlation with just one or a few input features; if that looks odd given your expert knowledge, I would check data reliability again, or maybe you will find out that your prior belief was wrong. try out another type of algorithm: maybe a simple decision-tree based algorithm can give you some different feature importances (now from a model perspective); a quick model-agnostic cross-check is sketched below. one more precise step could be not only feature importance, but prediction explainability on a per-row basis (how much each feature value impacts each individual prediction): source of info
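As a model-agnostic cross-check of the coefficient-based ranking, here is a sketch using scikit-learn's permutation importance; X and y stand in for the question's feature dataframe and target, which are assumptions on my part:
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# importance = average drop in validation score when a feature is shuffled
result = permutation_importance(model, X_val, y_val, n_repeats=20, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(name, round(imp, 4))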
H: Does thereshold of classifier close to 0 make sense? I have roc curve with AUC of 0.91. I applied the following function to determine the best threshold: threshold1[np.argmin(np.abs(false_positive_rate1+true_positive_rate1-1))] and I got 0.004. Does it make sense? it means that the change between the classes is very gentle, that there is not enough difference between them? AI: That’s fine. Depending on the cost of misclassification, you might choose an even lower threshold of zero. After all, many of us just spent the past two years assuming everyone had Covid-19, since the cost of a false negative is potentially so catastrophic. The idea of the cutoff threshold being influenced by the costs of misclassifications leads to direct analysis of machine learning probability outputs via proper scoring rules. I’ll give some of my usual links on this topic. https://www.fharrell.com/post/class-damage/ https://www.fharrell.com/post/classification/ https://stats.stackexchange.com/a/359936/247274 https://stats.stackexchange.com/questions/464636/ For what it’s worth, the Frank Harrell whose blog I’ve linked has a pretty low opinion of ROC curves and would advocate for picking a threshold based on the misclassification costs (which might differ from subject to subject), not based on the ROC curve.
H: Find best features for text classification with Countvectorizer Python I'm facing a text classification problem where the algo is human-made but impacted by keywords. Hence I can't use any ML model but I certainly can take a data-science driven approach to find the best keywords that would determine a class. Therefore I managed to build a dataframe with Countvectorizer, where each row is a word of the whole input data vocabulary, and the 3 category columns hold the number of times the word appeared in each category. What's my best shot at getting the best keywords from that dataframe? Can I use statistical tests for significance? I can't just choose an arbitrary threshold and say all words having more than 20 occurrences in a category are good keywords for that category; this is because the dataset is also quite unbalanced (50% is class A, 35% is class B and 15% is class C) so I'm sure there must be some clever way to extract the most significant impacting keywords. My first try that somewhat works is weighting occurrences by class occurrences, so having a word appear 15+X% in class C could be significant although the word appears way more often in class A when looking at absolute numbers. This method is not perfect but that's where I'm stuck. Ideas are very much welcome. AI: Can I use statistical tests for significance? I can't just choose an arbitrary threshold and say all words having more than 20 occurrences in a category are good keywords for that category It's not just a matter of threshold, because there are words which are frequent in general even though they are not especially associated with any particular category. First, in any case you should remove the least frequent words because they often appear by chance and cause overfitting. The first very basic option is to calculate the conditional probability of a category given a word $p(c|w)$: words which have a high $p(c|w)$ are highly associated with $c$, but this doesn't take into account the imbalance between the categories so you'd have a lot of words which are highly associated with the majority category. A more appropriate method is to use a statistical association measure: Pointwise Mutual Information between a word and a category; Information Gain or conditional entropy, which calculate how much more information one has about the category when one knows the word. These measures are used in decision tree algorithms to select the best features.
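A minimal sketch of the PMI idea computed directly from the word-by-category count dataframe described in the question; counts is assumed to be indexed by word with one column per class:
import numpy as np

counts = counts[counts.sum(axis=1) >= 5]        # drop the rarest words first
total = counts.values.sum()
p_w = counts.sum(axis=1) / total                # p(word)
p_c = counts.sum(axis=0) / total                # p(category)
p_wc = counts / total                           # p(word, category)
pmi = np.log((p_wc + 1e-12).div(p_c, axis=1).div(p_w, axis=0))
print(pmi['C'].sort_values(ascending=False).head(20))   # top candidate keywords for class C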
H: Why am I getting different prediction result after every run? I have a simple lstm model model =Sequential() model.add(LSTM(100, activation='relu', input_shape=(n_input,n_features))) model.add(Dense(1)) model.compile(optimizer='adam', loss='mse') model.summary() on which I train and test on the same data but each time I am getting different prediction. Why? If the model parameters and train and test are same why the prediction is changing on every run? If the results are not reproducible what is the point of training and testing lstm model? the rmse value is almost similar but the predicted values are way off on each run. AI: Do you specify the random seed anywhere in your code? If you don't, that might be the explanation why your RMSE value differs on each run for your train/test datasets. You could use the set_random_seed() function to set the random seed and have your training be more deterministic. You can also use enable_op_determinism() to make it even more deterministic, but the training speed will suffer as a result. import tensorflow as tf tf.keras.utils.set_random_seed(1) tf.config.experimental.enable_op_determinism() utils.set_random_seed() will automatically set the seed of random.seed() and numpy.random.seed() as you can see in the linked documentation, so you don't need to import and set those. On a second note, did you split the train/test set ahead of training or do you use the built-in functions of Tensorflow/Keras to do this? If so, the data split will vary based on the random seed as well.
H: How to print the corresponding c of the lowest classification error on the validation data I'm currently measuring the overall classification error for an SVM classifier and I'm varying the regularization value C. In the following code, how can I print, in the last line, the C value with the lowest error on the validation set? X_train_ = X_train[:, :51949] X_val_ = X_val[:, :51949] y1_train = np.where(y_train == 1, 1, -1) y1_val = np.where(y_val == 1, 1, -1) penalty =list_ y1val_err_all = [] y1trn_err_all = [] for c in penalty: clf_1 = SVC(kernel="linear", C = c).fit(X_train, y1_train) y_pred1_trn = clf_1.predict(X_train_) y_pred1_val = clf_1.predict(X_val_) y1trn_acc=accuracy_score(y1_train, y_pred1_trn) y1val_acc=accuracy_score(y1_val, y_pred1_val) y1trn_err=1-y1trn_acc y1val_err=1-y1val_acc y1trn_err_all.append(y1trn_err) y1val_err_all.append(y1val_err) print("Training Error (c=:"+ str(c) + ") :" , y1trn_err) #round(y1trn_err,6) ) print("Validation Error (c=:"+ str(c) + ") :" , y1val_err) #round(y1val_err,6) ) y1trn_acc_err = np.array(y1val_err_all) y1val_acc_err = np.array(y1val_err_all) print('The Lowest Val Error : ', np.amin(y1val_acc_err)) AI: You can use np.argmin to get the index of the lowest value, then use that index to pick the corresponding item from penalty. Try: X_train_ = X_train[:, :51949] X_val_ = X_val[:, :51949] y1_train = np.where(y_train == 1, 1, -1) y1_val = np.where(y_val == 1, 1, -1) penalty =list_ y1val_err_all = [] y1trn_err_all = [] for c in penalty: clf_1 = SVC(kernel="linear", C = c).fit(X_train, y1_train) y_pred1_trn = clf_1.predict(X_train_) y_pred1_val = clf_1.predict(X_val_) y1trn_acc=accuracy_score(y1_train, y_pred1_trn) y1val_acc=accuracy_score(y1_val, y_pred1_val) y1trn_err=1-y1trn_acc y1val_err=1-y1val_acc y1trn_err_all.append(y1trn_err) y1val_err_all.append(y1val_err) print("Training Error (c=:"+ str(c) + ") :" , y1trn_err) #round(y1trn_err,6) ) print("Validation Error (c=:"+ str(c) + ") :" , y1val_err) #round(y1val_err,6) ) y1trn_acc_err = np.array(y1trn_err_all) y1val_acc_err = np.array(y1val_err_all) print(f'The Lowest Val Error : {np.min(y1val_acc_err)}') print(f'The C with the Lowest Val Error : {penalty[np.argmin(y1val_acc_err)]}')
H: making a contingency table with TRUE and FALSE values I made the following contingency table already, however there should only be TRUE or FALSE and not all of them showing up on the table. How can I change that? my code is the following: library(tidyverse) library(haven) read_xpt("~/downloads/DEMO_J.XPT") -> demo17 demo17%>% select (subjectID= SEQN, Lebensalter=RIDAGEYR, Geschlecht=RIAGENDR, Ethnie = RIDRETH3, Einwohner=WTMEC2YR, Ratio=INDFMPIR)%>% mutate(Geschlecht=fct_recode(factor(Geschlecht), "Männlich"="1", "Weiblich"="2"))%>% mutate(Ethnie=fct_recode(factor (Ethnie), "Mexican American"="1", "Other Hispanic"="2", "NH White"="3", "NH Black"="4", "NH Asian"="6", "Other"="7")) -> D2 read_xpt("~/downloads/BMX_J.XPT") -> bmx17 bmx17%>% select (subjectID = SEQN, Körpergröße= BMXHT, Gewicht = BMXWT) -> B2 inner_join(D2, B2, by= "subjectID") -> DurchgangJ DurchgangJ DurchgangJ%>% mutate( bmi = Gewicht / (Körpergröße/100)^2 ) %>% filter( Lebensalter >= 18 )%>% filter(!is.na(bmi))%>% mutate (Adipös= bmi>=30)%>% mutate (Poor = Ratio < 1.3)%>% filter(!is.na(Poor))%>% ggplot+ geom_point(aes(x= Poor, y= Adipös))+ facet_grid(Ethnie~Geschlecht) The table used for the plot looks like this: AI: It's normal that you have both TRUE and FALSE everywhere since you use these values as coordinates. This means that for every individual who has for instance TRUE as X and FALSE as Y, a point is added for x=TRUE and Y=FALSE. Since there are many individuals with TRUE as X and FALSE as Y in your data, the points are just plotted on top of each other and you see a single point. Since there is at least one individual with every combination of TRUE/FALSE for X and for Y in the data, there are points everywhere. So your plot is not very meaningful: for every facet combination it only shows which TRUE/FALSE combinations occur, not how many individuals fall into each. A more meaningful plot would show the distribution for each case of the two variables, and this can be done with geom_bar (the variables are discrete TRUE/FALSE values, so geom_bar is the right geom rather than geom_histogram, which expects a continuous x). For a single variable something like this should work: ggplot+ geom_bar(aes(x= Poor))+ facet_grid(Ethnie~Geschlecht) You can show the two variables either as an additional facet or as colours, but you need to format the data differently: there should be a single column value for the TRUE/FALSE value and another column category indicating whether this is the Poor or Adipös value (i.e. two rows for every individual). It's certainly doable with tidyverse but I don't use it so I don't know how (I use melt for this). Then you could do this for instance: ggplot+ geom_bar(aes(x= value, fill=category),alpha=.5)+ facet_grid(Ethnie~Geschlecht) Note that a contingency table is not a graph, it's a table with numbers.
H: Can I create a new target value based on the average target value of same data points for regression? I am trying to predict the profit of retail stores. The original dataframe looks like this: Store No feature A feature B year profit A 1 2 2016 20000 A 1 2 2017 40000 B 4 3 2017 50000 B 4 3 2018 40000 C 5 6 2015 80000 C 5 6 2016 90000 In production, information about profit and year will not be available. Since year is not available, we have the same data points with different target values. So I thought of adding the average profit for every store, since the input features stay the same. Then drop the old target value and year, and remove the duplicates. Then it looks like this: Store No feature A feature B Average profit A 1 2 30000 B 4 3 45000 C 5 6 85000 Can I use 'Average profit' as my new target for regression models or will this create data leakage, since the average is not what we predict in production (we predict the store's profit, not the average, and independently of the year)? Or is this step completely unnecessary, since this is how the regression models work mathematically? Thanks in advance. Edit: Edited the sample set, since it can happen that the profit decreases over time. But anyway the information about year is not available -> so no temporal dependency AI: Your solution makes total sense and if you do not have temporal data in production then this is a sensible way to do it. I will just add a few small points: Data leakage does not happen when you transform solely based on targets or solely based on features, so you are actually safe here with respect to data leakage. There might be significant dispersion in your targets, i.e. you have to predict $75k$ as the mean of two target values of $100k$ and $50k$. I suggest you also learn a dispersion measure (variance, std, etc.) as another target and train for learning both a central (e.g. mean) and a dispersion (var or std) measure. This helps you have a better understanding of how "good" your "good" predictions are (i.e. predicting around $75k$ for the above example is machine-learning-wise good, but the true statistics of your data suggest that this is still far from both real values; this can be captured by learning a dispersion measure).
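A small pandas sketch of building the two targets (mean and dispersion per store) suggested above, using the column names from the question:
import pandas as pd

targets = (df.groupby(['Store No', 'feature A', 'feature B'], as_index=False)
             .agg(mean_profit=('profit', 'mean'), std_profit=('profit', 'std')))
targets['std_profit'] = targets['std_profit'].fillna(0.0)   # std is NaN for single-row stores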
H: What are the exact differences between Word Embedding and Word Vectorization? I am learning NLP. I have tried to figure out the exact difference between Word Embedding and Word Vectorization. However, it seems like some articles use these words interchangeably. But I think there must be some sort of difference. In Vectorization, I came across these vectorizers: CountVectorizer, HashingVectorizer, TFIDFVectorizer Moreover, while I was trying to understand word embeddings, I found these tools: Bag of words, Word2Vec Would you please briefly summarize the differences and the algorithms behind Word Embeddings and Word Vectorization? Thanks a lot. AI: A „bag of words“ usually describes an encoding of text where one word (or ngram) is represented as one variable (column). This can be done as binary encoding or as a count of words, often called one-hot encoding. Alternatively, you can introduce weights to represent the frequency of words in a document, such as TFIDF. See also here for a sklearn implementation. Hashing essentially is a „bag of words“ using the hashing trick to cope with previously unseen words in a corpus and a large (or growing) corpus. In word2vec, each word is represented by a vector, which indicates how close one word is to another (this is the result of a pre-trained model). You can use a pre-trained word2vec model and assess the proximity of words by comparing two (word) vectors e.g. based on the Euclidean distance. These vectors help models to better understand the (semantic) structure of some text via understanding the empirical co-occurrence of words (which is not possible with one-hot encoding etc.) BERT goes even one step further. In BERT pre-training a word in a sentence is „masked“, where the model tries to predict the masked word in a sentence. Also „next sentence prediction“ is used to pre-train BERT models. By doing so, BERT has an even better ability to understand semantic relations in a text.
H: Which metric is used in the *training* of XGBoost : is it the one in the so-called parameter "eval_metric"? In XGBoost, when calling the train function, I can provide multiple metrics, for example: 'eval_metric':['auc','logloss'] Which ones are used in the training and how do I state it technically in the tool? (It is counter-intuitive to me that several metrics could be used simultaneously) hyperparameters_xgboost = {'validate_parameters':False,'eval_metric':['auc','logloss'],'max_depth' : 3,'eta':0.3,'objective' :'binary:logistic','tree_method': 'hist'} bst=xgb.train(hyperparameters_xgboost,dmatrix_train,num_round,evals=list_evals,early_stopping_rounds=20) AI: As the name indicates, the values for eval_metric are used as the metrics when evaluating the model's performance on the data passed using eval_set (or evals with the native train API, as in your code; see also this example from the documentation). The loss that is actually used when training the gradient boosted trees can be specified using the objective keyword (in the params dict for the native API, or as a constructor argument in the scikit-learn API), which by default is a squared error loss for the regressor and a logistic loss for the classifier.
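A minimal sketch of the distinction with the native API (dmatrix_train/dmatrix_valid are assumed DMatrix objects): objective defines the loss that is optimized, while eval_metric only defines what is reported on the evals sets and what early stopping watches.
import xgboost as xgb

params = {
    'objective': 'binary:logistic',      # the loss that is actually optimized
    'eval_metric': ['auc', 'logloss'],   # only monitored on the evals sets
    'max_depth': 3, 'eta': 0.3,
}
bst = xgb.train(params, dmatrix_train, num_boost_round=200,
                evals=[(dmatrix_train, 'train'), (dmatrix_valid, 'valid')],
                early_stopping_rounds=20)   # stops on the LAST metric listed (logloss here)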
H: How to handle categorical variables with Random Forest using Scikit Learn? One of the variables/features is the department id, which is like 1001, 1002, ..., 1218, etc. The ids are nominal, not ordinal, i.e., they are just ids, department 1002 is by no means higher than department 1001. I feed the feature to random forest using Scikit Learn. How should I deal with it? Some people say to use one-hot encoding. However, some others say the one-hot encoding degrades random forest's performance. Also, I do have over 200 departments, so I will add about 200 more variables for using one-hot encoding. But if I just use the original values, 1001, 1002, etc., will random forest think that department 1002 is higher than department 1001? Thanks. AI: Is one-hot encoding an option? It seems like no: due to the high cardinality of your feature, it might result in curse of dimensionality problems if your sample size is small, and also, if you are using mean decrease impurity as a measure of feature importance, you have to consider its bias towards high cardinality features. So to avoid having that many categories (~200), you could group them. You could for example check the distribution of this feature on the train set and group those whose share is below x% into an OTHERS category. If I just use the original values, 1001, 1002, etc., will random forest think that department 1002 is higher than department 1001? Yes, it will be treated as a continuous feature and a nonsensical order will be established. What options do I have? The simplest, yet most efficient way of encoding categorical features is Target encoding, in short: Target encoding is the process of replacing a categorical value with the mean of the target variable. Any non-categorical columns are automatically dropped by the target encoder model. You can exclude the target value of observation $i$ itself when computing the mean, to avoid leakage. There is another alternative named WOE (weight of evidence), which is a more sophisticated encoding on a logarithmic scale that is widely used in credit scoring. None of those encodings will increase the feature dimension. Finally, if you are using Python, both of the aforementioned and many other encodings are available in the category_encoders package.
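A sketch with the category_encoders package mentioned above (assuming it is installed; the column name department_id and the train/test variables are illustrative):
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier

enc = ce.TargetEncoder(cols=['department_id'])
X_train_enc = enc.fit_transform(X_train, y_train)   # fit on the training split only
X_test_enc = enc.transform(X_test)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train_enc, y_train)
print(rf.score(X_test_enc, y_test))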
H: Using PCA for Dimensionality Expansion I was trying to use t-SNE algorithm for dimensionality reduction and I know this was not the primary usage of this algorithm and not recommended. I saw an implementation here. I am not convinced about this implementation on t-SNE. The algorithm works like this: Given a training dataset and a test dataset, combine the 2 together into one full dataset Run t-SNE on the full dataset (excluding the target variable) Take the output of the t-SNE and add it as K new columns to the full dataset, K being the mapping dimensionality of t-SNE. Re-split the full dataset into training and test Split the training dataset into N folds Train your machine learning model on the N folds and doing N-fold cross-validation Evaluate the machine learning model on the test dataset My main questions are not about the t-SNE but; Can I use this algorithm below for other dimensionality reduction algorithms such as PCA by splitting dataset into train and test sets before transforming the data? Would this be effective? Dimensionality is not a problem for my dataset because it is already a small one. Having highly correlated features also not important. AI: Your algorithm may work only if embeddings created by the manifold learning (T-SNE) catch information that the features by themselves do not. As mentioned in the comments, if you use T-SNE, you will have to fit and predict the same data, leading to leakage. An alternative would be using UMAP, so your approach would be: For $K$ in number_of_folds: Fit UMAP on the train set, excluding the target variable. Take the output of the UMAP and add it as $U$ new columns to the full dataset, $U$ being the mapping dimensionality of UMAP. Train your machine learning model on the $K-1$ folds. Evaluate the machine learning model on the $K$ fold.
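A sketch of one fold of the procedure above with umap-learn (assuming it is installed; X_train/X_valid and the downstream classifier are placeholders):
import numpy as np
import umap
from sklearn.ensemble import RandomForestClassifier

reducer = umap.UMAP(n_components=2, random_state=0).fit(X_train)  # fit on the train fold only
X_train_aug = np.hstack([X_train, reducer.embedding_])
X_valid_aug = np.hstack([X_valid, reducer.transform(X_valid)])    # embed unseen rows
clf = RandomForestClassifier(random_state=0).fit(X_train_aug, y_train)
print(clf.score(X_valid_aug, y_valid))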
H: How to tell CatBoost which feature is categorical? I am excited to learn that CatBoost can handle categorical features by itself. One of my features, Department ID, is categorical. However, it looks numeric, since the values are like 1001, 1002, ..., 1218. Those numbers are just IDs of the departments. It by no means has numeric or ordinal meaning. How do I tell CatBoost to treat it as categorical (nominal), not numeric? Thanks. AI: When you are training your CatBoost classifier, you can pass the list of categorical features like this in Python: the fit method of CatBoostClassifier has a parameter called cat_features which takes a list of column names (or indices) and treats them as categorical variables from catboost import CatBoostClassifier clf = CatBoostClassifier( iterations=5, learning_rate=0.1, #loss_function='CrossEntropy' ) clf.fit(X_train, y_train, cat_features=['Department ID'], # pass your categorical column names here eval_set=(X_val, y_val), verbose=False )
H: Get the confidence interval for prediction results with LSTM I have an LSTM network and I use it to predict the load. I want to get the confidence interval for the prediction. I am not sure whether I can get that or not. I have tried and searched on different platforms, however, I could not find the solution. Here is my simple model. import pandas as pd pd.options.mode.chained_assignment = None # default='warn' import numpy as np import tensorflow as tf from datetime import datetime from tensorflow import keras from keras.models import Sequential from keras.layers import LSTM, Dense, Dropout from keras.layers.recurrent import LSTM from matplotlib import pyplot as plt from sklearn.preprocessing import StandardScaler X_train = np.random.rand(10,5,2) Y_train = np.random.rand(10,2) X_test = np.random.rand(3, 5, 2) model = Sequential() model.add(LSTM(64, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True)) model.add(LSTM(32, activation='relu', return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(Y_train.shape[1], kernel_regularizer='l2')) opt = keras.optimizers.Adam(learning_rate=0.001) model.compile(optimizer=opt, loss='mse') history = model.fit(X_train, Y_train, epochs=1, batch_size=200, validation_split=0.1,verbose=1) prediction = model.predict(X_test) AI: What you need is tensorflow probability. Indeed, you want to estimate a distribution and, from it, the confidence interval for your prediction. To do so, you cannot use the mse loss function; you need something that somehow compares probability distributions. One possibility is the likelihood function (more precisely, you want a loss that minimises the negative (log)likelihood). Try to modify your model as follows: import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions model = Sequential() model.add(LSTM(64, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True)) model.add(LSTM(32, activation='relu', return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(2, activation='relu')) # 2 as you want to predict mean and variance model.add(tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[...,0], scale=0.01*tf.math.softplus(t[...,1])), name='normal_dist')) # note this negloglik = lambda y, p_y: -p_y.log_prob(y) # note this opt = keras.optimizers.Adam(learning_rate=0.001) model.compile(optimizer=opt, loss=negloglik) history = model.fit(train_dataset, epochs=1, verbose=1) The only thing you have to carefully check is that, since you are using a custom loss function, you need to pass tensors to your model, hence you have to convert your numpy arrays into a tf.data.Dataset and batch it yourself (batch_size and validation_split are not used when a Dataset is passed to fit): train_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)).batch(200) test_dataset = tf.data.Dataset.from_tensor_slices((X_test, Y_test)).batch(200) Finally, to get the Confidence Interval over predictions mean = lambda x: x.mean().numpy().flatten() # multiply by the scaling factor if you used a scaler sd = lambda x: x.stddev().numpy().flatten() def conf_int(pred): """95% confidence interval""" return np.array([mean(pred) - 2*sd(pred), mean(pred) + 2*sd(pred)])
H: validation_split in time series data for lstm model I have an LSTM network and I use it to predict. My whole data is an array with 10 rows and 1000 columns (10, 1000). I want to divide the data into train with size (10, 600), validation (10, 200) and test (10, 200). When I have the train data, I want to change the data to an array with size (10*600, 1) and then train the model. However, in tensorflow we have validation_split and I am not sure that this validation is the same as the method that I want. Here is a simple example: import pandas as pd import numpy as np import tensorflow as tf from tensorflow import keras from keras.models import Sequential from keras.layers import LSTM, Dense, Dropout from keras.layers.recurrent import LSTM X_total = np.random.rand(10, 1000) #here is an example with a sample data X_train = np.random.rand(10,5,2) Y_train = np.random.rand(10,2) X_test = np.random.rand(3, 5, 2) model = Sequential() model.add(LSTM(64, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True)) model.add(LSTM(32, activation='relu', return_sequences=False)) model.add(Dropout(0.2)) model.add(Dense(Y_train.shape[1], kernel_regularizer='l2')) opt = keras.optimizers.Adam(learning_rate=0.001) model.compile(optimizer=opt, loss='mse') history = model.fit(X_train, Y_train, epochs=1, batch_size=200, validation_split=0.1,verbose=1) prediction = model.predict(X_test) AI: Based on your explanation, I think this is what you want: just as with the train and test sets, you can create X_val and Y_val yourself and then pass validation_data=(X_val, Y_val) to model.fit. In this way, the model will be validated on the set that you provide. Note that validation_split simply takes the last fraction of the training arrays (before any shuffling) as the validation set, so it is not the same as the split you describe; passing validation_data explicitly gives you full control.
H: Feature selection with "overly important" features I am very new to machine learning modeling, but I encountered a feature selection problem that I hope to get your insights on: For example, I have A,B,C,D as my independent variables and y as my dependent variable. The end user is more interested in C & D's impact on y since A and B are factors that the user doesn't have much power to change. But in the modeling, we see that A and B have very large feature importance in predicting y, while C and D have low prediction power. In this case, should I train the model only based on C&D or should I train the model based on ABCD? Or is there any feature engineering I should do? AI: I think it's a matter of clearly defining the target task: If the goal is to predict $y$ as accurately as possible and all 4 independent variables are available, then in general there's no reason not to use all 4 variables. If the goal is to predict $y$ using C and D only, or to calculate the impact of C and D on their own on $y$, then only C and D should be used, of course. It might also be some kind of optimization problem where the goal is for instance to maximize $y$ using C and D; this would be a different problem.
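If it helps to quantify the first two options, here is a quick sketch of comparing the two feature sets with cross-validation; it assumes a regression target and a dataframe X with columns A, B, C, D (swap in a classifier and an appropriate metric if the task is classification):
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor

score_all = cross_val_score(GradientBoostingRegressor(), X[['A', 'B', 'C', 'D']], y, cv=5).mean()
score_cd = cross_val_score(GradientBoostingRegressor(), X[['C', 'D']], y, cv=5).mean()
print(f"all features: {score_all:.3f}  C & D only: {score_cd:.3f}")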
H: PCA dimensional reduction I am trying to plot three values of a VCF file (QUAL, DP, and rate of phasing) for all the SNPs in the file. I thought that a PCA plot would be a good way to reduce the dimensions of the plot, and to compare values although they were not on the same scale. I have tried to build the PCA with different R packages such as ggbiplot and pca3d, but it seems that the distribution of the point cloud is always skewed, as if a dimension were missing. Find below the code used for generating the ggbiplot. Would anyone have some advice on how to reduce three values with different scales to a two-dimensional PCA? library(devtools) library(ggbiplot) # Create the dataframe df <- df[c("quality", "phasing", "depth")] # Create the PCA dataset pop.pca <- prcomp(df, center = TRUE,scale. = TRUE) # Create the plot ggbiplot(pop.pca) + scale_y_continuous(trans='log10') + scale_x_continuous(trans='log10') AI: There isn't a dimension missing, because PC1 and PC2 are "summaries" of all your variables. In fact, PC1 is a linear combination of your features, calculated like: PC1 = v1*feature1 + v2*feature2 + v3*feature3 + ... where v1, v2 and v3 are scalar numbers calculated directly by the PCA model. PCA computes n PCs, where n is equal to your number of features, and the first and second PCs are selected because they are the ones that explain the greatest variability in the system. For a complete example in R follow this link
H: What correlation measure for Wikipedia translated pages vs number of in links? I'm trying to find a correlation measure for the number of Wikipedia pages an entity (an article) has been translated to vs the number of links that point to that page (both measures that can point to the popularity of a page). Is it possible to correlate them? For instance I have Work, links, wikipediaTranslatedPages The name of the rose, 500, 53 AI: Here you can find an introduction to scatter plots. You can read the graph as a correlation. Following the image, you can see what kind of correlation there is between the two variables (image source): positive correlation (0.7 to 1): if one variable increases the other increases; negative correlation (-0.7 to -1): if one variable increases, the other decreases. Of course, if you want the precise value of the correlation (e.g. the Pearson or Spearman coefficient), you have to use the corresponding function of whatever language/library you are using.
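A small Python sketch of computing the correlation directly; the file name is hypothetical, the column names are the ones in the example row:
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("works.csv")   # columns: Work, links, wikipediaTranslatedPages
r, p = pearsonr(df['links'], df['wikipediaTranslatedPages'])
rho, p_s = spearmanr(df['links'], df['wikipediaTranslatedPages'])
print(f"Pearson r = {r:.2f} (p = {p:.3g}), Spearman rho = {rho:.2f} (p = {p_s:.3g})")
# counts of links/translations are usually heavy-tailed, so Spearman is often the safer choice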
H: What makes differences in each head in the multiheaded attention in transformer? What makes differences in each head in the multiheaded attention in transformer? As they are fed and trained in the exact same way, except the initialization of weights are different for each head to produce different sets of (Q,K,V) in each head. Such multi-headed design to me seems no difference than ensembling multiple models that are initialized differently. Many sources claim that the multi-head attention 'can help capture meaning in different contextual subspace' without further substantiation or supporting proofs. Honestly I've been quite fed up with all those vague descriptions in the data science world that they make claim without mathematical rigor. I think I'm looking for more rigorous explanation as to why "multi-head attention 'can help capture meaning in different contextual subspace'" when they are simply an ensemble of identical models but weights randomly initialized? AI: You are right in that "they are simply an ensemble of identical models but weights randomly initialized". If you think about it, the different filters in convolutional layers are also just that. Having multiple heads increases the model's capacity. The randomly initialized weights are certainly the key for each head to learn different things. What the heads really learn has been an active area of research, normally studied by either pruning away heads to see the effect, by measuring the attention patterns to attribute effect, or by probing them in control tasks. These are some conclusions in that regard: From "Are Sixteen Heads Really Better than One?" (publised at NeurIPS'2019) we know that, in many cases, having multiple heads is needed especially at training time, while at inference time is it possible to prune a number of heads (depending on the task) without significant performance loss. From "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned" (published at ACL'2019) we confirm the previous article's conclusion and that the most important and confident heads play consistent and often linguistically-interpretable roles. From "Revealing the Dark Secrets of BERT" (published at EMNLP'2019) we know that "there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization", therefore confirming the previous findings. From "What Does BERT Look at? An Analysis of BERT’s Attention" (BlackBoxNLP'2019) we learn that "attention heads correspond well to linguistic notions of syntax and coreference". From "Probing for Bridging Inference in Transformer Language Models" (published at NAACL'2021) we learn that "heads at higher layers prominently focus on bridging relations incomparison with the lower and middle layers, also, few specific attention heads concentrate consistently on bridging".
H: Proof that multihead works better than single head in transformer According to this post, the purpose of the multihead is to have 'gradient splitting' across heads, which is achieved by random initialization of weight matrices for Q, K and V in each head. But how can we prove this can solve the problems in using single head? Specifically, how can the splitting of gradients ensures within each output attention vector for each word it wouldn't overemphasize (the attention) of itself? AI: In order to see whether one single attention head is enough, we can simply try it. This is precisely what it's done in the article "Are Sixteen Heads Really Better than One?" (publised at NeurIPS'2019). The authors conclude that, for some tasks, having multiple heads is needed especially at training time, while at inference time is it possible to prune a number of heads (depending on the task) without significant performance loss.
H: How to extract MSEP or RMSEP from lassoCV? I'm doing lasso and ridge regression in R with the package chemometrics. With ridgeCV it is easy to extract the SEP and MSEP values by modell.ridge$RMSEP and model.ridge$SEP. But how can I do this with lassoCV? model.lasso$SEP works, but there is no RMSE or MSE entry in the list. However the function provides a plot with MSEP and SEP in the legend. Therefore it must be possible to extract both values! But how? SEP = standard error of the predictions; MSEP = mean squared error of the predictions; RMSEP = root mean square error of the predictions = root(MSEP) AI: Based on the documentation and the source code of the lassoCV function, the SEP values can indeed be extracted from the output using $SEP, and the MSEP values should be stored in the named list that is returned as $cv: Value cv - MSEP values at each value of fraction cv.error - standard errors for each value of fraction SEP - SEP value for each value of fraction ind - index of fraction with optimal choice for fraction sopt - optimal value for fraction fraction - all values considered for fraction To get the RMSEP you can then simply take the square root of the MSEP values.
H: What kind of decision trees are used in random forest? Reading some documentation (for example), I know that there are many types of decision trees (CART, ID3 and so on). I also know that Random Forest is a particular algorithm that uses a set of decision trees. My question is: in random forest, what kind of decision trees are used (CART, ID3, ...)? AI: In short: it can be any type of tree inside the forest :) Random forest is an ensemble of many decision trees. The success of a random forest highly depends on using uncorrelated decision trees. If we use the same or very similar trees, the overall result will not be much different from the result of a single decision tree. Random forests achieve uncorrelated decision trees through bootstrapping and feature randomness. So what type of tree is inside? This depends on the implementation. Generally, any bootstrap-aggregated, attribute-bagged learner based on trees (any of them) is called a Random Forest. You get different flavors using different trees. For example: the randomForest() function in R uses the CART algorithm, and CART, C4.5 or C5.0 can all be used to grow a forest.
H: How can I learn and apply the scientific method in machine learning? Rigor Theory. I wish to learn the scientific method and how to apply it in machine learning. Specifically, how to verify that a model captured the pattern in data; how to rigorously reach conclusions based on well-justified empirical evidence. Verification in Practice. My colleagues in both academia and industry tell me measuring the accuracy of the model on testing data is sufficient, but I don't feel confident such criteria are sufficient. Data Science Books. I have picked up multiple data science books, like Skiena's manual, Dell EMC's book, and Waikato's data mining. Even though there had been a section for diagnosing the model and measuring results, my instinct worries are these are heuristics, but not rigour-based. Scientific Method Books. Searching for the scientific method I found, Statistics and Scientific Method: An Introduction for Students and Researchers and Principles of Scientific Methods, which seem to answer the crux of my question. I am planning to study both of them. My Questions. Here are couple of questions I hope to gain guidance on, from your wonderful community. Is it feasible to rigorously apply the scientific method in machine learning applications like recommendation engines or social sciences, or is it the case that so far our scientific/technological advancement didn't reach that degree of maturity, and that the best we can hope for is heuristics-based approximations. Is it feasible to do machine learning in practical industry, by applying the scientific method, or is it the case that industry leaders prefer cheap heuristics in order to minimize a project's costs? Are the scientific method books I mentioned above useful for enhancing my own skills in machine learning? Are they worthwhile the effort and time? Are you aware of better alternative resources for learning the scientific method? Are there more helpful courses or recorded lectures? Do you have any recommendations or advise, while studying the scientific method, for someone who is mainly motivated by machine learning in industry like recommendation engines applications and logistical optimization? AI: I think it's an excellent idea to acquire and apply a solid scientific basis in general and in ML in particular. Here are a few comments: The scientific method is a general set of good principles for obtaining reliable scientific conclusions. It's not especially precise and it's not always clear if it's applied correctly in a specific case, there's no clear binary way to say whether a study satisfies the scientific method or not. Mind that what is considered scientifically valid evolves over time. For example, various shortcomings of using significance tests have been demonstrated recently. In science and in statistics in particular, the main scientific point is usually not to prove something with 100% confidence (virtually never possible) but to quantify the confidence in some reliable way. A common mistake is to expect a ML prediction to be 100% reliable: of course this is not possible (it wouldn't be statistical learning otherwise), but it is possible to measure how likely a prediction is correct (to some extent). So the level of maturity of the field doesn't matter: solid scientific principles can be applied to any existing method. This even applies to heuristics: it's just a matter of estimating their performance reliably. 
Last point: let's not forget that the scientific method is not a rigid set of formal rules to apply systematically, it's also about questioning whether a particular dataset or approach is adapted to the goal. Another common mistake I see is to apply a specific kind of evaluation to a system without thinking whether this evaluation truly measures the task that the system is supposed to do. (I don't have any specific book recommendation)
H: What is the meaning of two embedding layers in a row? I've noticed in one deep pre-trained textual neural network that there are two embedding layers in the beginning and I don't quite understand why there are two of them. As far as I understand (correct me if I'm wrong, I am a newcomer in NLP), in an embedding layer there is a vector of trainable weights that forms a set of parameters for every unique word. So, what is the meaning of two such layers in a row? Does the second layer create sub-parameters for every parameter in the original embedding layer? AI: If you print the model layers with the code you included in the comments, you obtain this: GPT2ForSequenceClassification( (transformer): GPT2Model( (wte): Embedding(50257, 1024) (wpe): Embedding(2048, 1024) ... That's where the misunderstanding comes from. Having one layer printed after the other does not imply that they are connected. They are not. The first embedding is the normal token embedding, with 50257 token IDs. The second embedding is the positional encoding, with 2048 positions. The model you are exploring is based on GPT-2, which is a Transformer decoder. In such an architecture, the text is encoded as discrete tokens and the resulting embedded vectors are summed together with some special vectors called positional encodings/embeddings, which encode the position of each token in the sequence. So, the answer: there aren't two embedding layers in a row, it is a misunderstanding. Actually, one of the embedding layers encodes the tokens and the other encodes the token positions, and the resulting vectors are added together.
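For illustration, here is a minimal sketch (assuming the transformers and torch packages are installed) of how the two embeddings are combined in the base GPT-2 model; the example sentence is arbitrary:
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

ids = tokenizer("hello world", return_tensors="pt")["input_ids"]   # token IDs, shape (1, seq_len)
positions = torch.arange(ids.shape[1]).unsqueeze(0)                # positions 0, 1, 2, ...

token_vectors = model.wte(ids)            # (wte) token embedding lookup
position_vectors = model.wpe(positions)   # (wpe) positional embedding lookup
first_block_input = token_vectors + position_vectors   # element-wise sum, not one layer applied on top of the other
print(first_block_input.shape)            # (1, seq_len, hidden_size)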
H: Random forest accuracy I saw this video and I understood that to build a random forest, different decision trees with different structures are used. My code about that is: library(randomForest) library(datasets) library(caret) data<-iris str(data) data$Species <- as.factor(data$Species) table(data$Species) set.seed(222) ind <- sample(2, nrow(data), replace = TRUE, prob = c(0.7, 0.3)) train <- data[ind==1,] test <- data[ind==2,] rf <- randomForest(Species~., data=train, proximity=TRUE) print(rf) p1 <- predict(rf, train) confusionMatrix(p1, train$ Species) p2 <- predict(rf, test) confusionMatrix(p2, test$ Species) When I run print(rf) I saw: OOB estimate of error rate: 4.95% So my expected accuracy is 1-4.95%? The expected accuracy is significantly different from the train accuracy (1) and the test accuracy (0.93); what does that mean? AI: The OOB error rate is simply the number of misclassified observations divided by the total number of observations. Please refer to the following article for guidance: https://www.blopig.com/blog/2017/04/a-very-basic-introduction-to-random-forests-using-r/#:~:text=The%20OOB%20estimate%20of%20error%20rate%20is%20a%20useful%20measure,value%20for%20this%20error%20rate. However your second question is a little bit more delicate. You have to understand the difference between train and test. Normally the error on the test set is higher than on the train set (the difference between in-sample and out-of-sample data). Your algorithm is trained on the train data and fits the train data. During testing you are predicting the test set with a model fitted on the train set, so it is normal that the error is higher (or the accuracy lower), since the model has never seen those data. The overly optimistic train accuracy of 1 comes from these lines: p1 <- predict(rf, train) confusionMatrix(p1, train$ Species) Here rf has already seen exactly these rows, so p1 is a prediction on the same data the model was fitted on. The OOB estimate, in contrast, is computed only on the bootstrap samples each tree did not see, which is why it is much closer to the test accuracy than to the optimistic train accuracy. For assessing performance on the training set, use the confusion matrix that comes out of rf itself (shown by print(rf)).
H: Do I have to set same batch size for training, validation and testing? I was performing a segmentation task and set my batch size to 16 for training, validation and inference. In my observation, I got better results at inference when setting the batch size to 1. How should I decide the correct size for these three, or do they have to be the same size? AI: It depends on your problem, your target and how good your data is. There isn't a single correct number, but here you can find some help. In general the three batch sizes do not have to match: the training batch size affects gradient noise and memory use, while at inference the batch size should not change the predictions of a correctly built model. If it does, that usually points to layers such as batch normalization still running in training mode during inference.
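As a concrete illustration, a PyTorch sketch (the dataset objects train_ds, val_ds and test_ds are assumed to exist): it is common to keep the same batch size for training and validation, and to use whatever is convenient for inference.
from torch.utils.data import DataLoader

train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)   # affects gradient noise and memory
val_loader = DataLoader(val_ds, batch_size=16, shuffle=False)      # comparable statistics to training
test_loader = DataLoader(test_ds, batch_size=1, shuffle=False)     # any size that fits in memory works
# Remember to call model.eval() at inference so layers like BatchNorm and Dropout
# behave deterministically regardless of the batch size.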
H: Not perfect accuracy when overfitting Given a dataset and a decision tree that can be as deep as it wants, if you train the tree with the whole dataset and then test it against the same dataset and you get an accuracy that is not 100%, what can you tell about your data? One would expect perfect accuracy, as you let the tree overfit as much as it wants, so where does that difference in accuracy come from? AI: There are many possible reasons why the model might not reach perfect performance: As mentioned by Oxbowerce in a comment, there are several hyper-parameters other than the tree depth which might force the model to generalize: max number of instances per leaf, pruning, etc. It's even possible that some implementations apply some generalization methods without providing parameters to control them. The dataset might include contradicting evidence, i.e. several instances with the same feature values but different labels. This is more common with categorical data but it can also happen with numerical data. Purposefully overfitting a model is an interesting experiment, but in general ML models are meant to generalize from the data, so they are not intended for just storing all the data from the training set (except lazy learners like k-NN). So there's no guarantee that they would be able to fully represent the whole training data accurately.
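As a small illustration of the contradicting-evidence point, here is a scikit-learn sketch with a toy dataset in which two rows share the same feature value but have different labels, so even an unconstrained tree cannot reach 100% accuracy:
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0], [0], [1], [1]])   # the first two rows have identical features
y = np.array([0, 1, 0, 0])           # ...but contradictory labels

tree = DecisionTreeClassifier()      # no depth limit, free to overfit
tree.fit(X, y)
print(tree.score(X, y))              # 0.75, not 1.0: identical rows cannot be separated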
H: How do you handle the free-text fields in tabular data in ML/DL? While we see a number of cases where the input data is only a single text field (for the X variable) in NLP tasks, e.g. a tweet with a sentiment label being the only numerical field, how do you handle free-text fields in tabular data in ML/DL? The text field(s) is/are among all the numeric fields in a table! I think this is tricky to handle. It can be comment fields or some log data in some fields along with many other numeric fields. List as many approaches as possible. Any idea? For easy discussion, the 'free-text' defined here refers to a bunch of text where each row of data in the dataset can have a variable length of text. And the goal of this question is to find ways to transform such text field(s) such that they can be included into ML/DL models. AI: There are various text representation techniques, ranging from bag-of-words methods like TFIDF to embeddings. These techniques are used to build a fixed-length representation of any input text. If the text is the only input in an instance, as in text classification for example, then this representation is directly usable as the vector of feature values. If there are other features or several distinct texts which must be represented separately, then the vector and the other features can be concatenated. In general one should be careful to prevent the resulting number of features from becoming too high. For example, it's very common to ignore the least frequent words in the case of a bag-of-words representation.
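A hedged sketch of the concatenation idea with scikit-learn (the column names and the toy data are made up for illustration): a TF-IDF representation of the text column is built and joined with the numeric columns before the classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "comment": ["late delivery", "great product", "broken on arrival", "works fine"],
    "age": [34, 51, 23, 40],
    "quantity": [2, 1, 3, 1],
    "label": [0, 1, 0, 1],
})

preprocess = ColumnTransformer([
    # max_features keeps the text part of the feature space from exploding
    ("text", TfidfVectorizer(max_features=1000), "comment"),
    ("num", "passthrough", ["age", "quantity"]),
])

model = Pipeline([("features", preprocess), ("clf", LogisticRegression())])
model.fit(df[["comment", "age", "quantity"]], df["label"])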
H: TypeError: bad argument type for built-in operation I have one dataset containing images X of type ( numpy array) and one target csv file as Y which has counts of cells (type : pandas dataframe, that I have converted to numpy array), both are now read as numpy arrays. Essentially, I am creating a training dataset to train my images with target dataset using cnn model. For the code below i am getting an error that argument is bad. Any help on how to solve this? **image = cv2.imread(img_path)** Full code: def load_data(): import os datasets = (X,Y) images = [] labels = [] # iterate through training and test sets count =0 for dataset in datasets: # iterate through folders in each dataset for folder in dataset: if folder in ['plasma']: label = 'T4' elif folder in ['lymphocyte']: label = 'T3' elif folder in ['epithelial']: label = 'T2' elif folder in ['neutrophil']: label = 'T1' elif folder in ['eosinophil']: label = 'T5' elif folder in ['connective']: label = 'T6' img_path = np.append(dataset, folder) **image = cv2.imread(img_path)** image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # resize the image image = cv2.resize(image, image_size) # Append the image and its corresponding label to the output images.append(image) labels.append(label) images = np.array(images, dtype = 'float32') labels = np.array(labels, dtype = 'int32') return images, labels Error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-145-a8d2a3fd920f> in <module> ----> 1 images, labels = load_data() <ipython-input-144-1acddd84d1e7> in load_data() 24 25 img_path = np.append(dataset, folder) ---> 26 image = cv2.imread(img_path) 27 image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) 28 TypeError: bad argument type for built-in operation AI: By defintion cv2.imread() expect the path: A string representing the path of the image to be read. The image should be in the working directory or a full path of image should be given Unclear on your datasets structure and what are you keeping inside. Still at this line img_path = np.append(dataset, folder) img_path is an array ( np.append will return an array ) and cannot be passed to the imread() function which is clear from the error ( bad argument type for built-in operation )
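A hedged sketch of the usual fix: build the path as a string with os.path.join instead of np.append. The base_dir and filename values below are placeholders, since the actual folder structure is not shown in the question; folder stands for the loop variable.
import os
import cv2

base_dir = "data/train"          # placeholder root folder
folder = "plasma"                # e.g. the loop variable from the question
filename = "img_001.png"         # placeholder file name

img_path = os.path.join(base_dir, folder, filename)   # a plain string path
image = cv2.imread(img_path)     # imread expects a string path, not an array
if image is None:                # imread returns None when the path is wrong
    raise FileNotFoundError(img_path)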
H: Measuring similarity from massive embedded vectors I am given a set of 10,000 journal articles, with their corresponding 100-dimensional embedded vectors. (The way they are embedded is unknown, but I'm guessing it is word2vec. The vector values vary from -0.5 to 0.5.) Among the 10,000 articles, 10 of them are my target articles. My objective is to find several articles that are 'similar' to my target articles. After reading this post, it seems like word similarity measurements such as tf-idf are unnecessary for my task, for I already have access to embedded vectors. If so, how would I calculate similarity between my target articles and the rest optimally, given that my dataset is already massive (10,000 * 100)? AI: There are many excellent answers on the differences between cosine distance (1-cosine similarity) and euclidean distance - some are linked below. I think it's useful to first think about when they are similar. They are in fact clearly related when you work with unit-norm vectors $a,b$: $||a||_2 = ||b||_2 = 1$. In this particular case: $$||a-b||^2 = (a-b)^T\cdot(a-b)=a^Ta-2a^Tb+b^Tb=2-2a^Tb=2\,(1-c_s(a,b))$$ where $c_s$ is the cosine similarity of the two normalized vectors. Hence, for unit-norm vectors the squared euclidean distance is twice the cosine distance, so both give the same ranking of neighbours. What happens when vectors are not normalized? You may have good reasons to use Euclidean distance if that means something (e.g. in physical space). In NLP, it is hard to interpret the euclidean distance. Look at your word embeddings (vectors): is there a good reason to think that the absolute magnitude of the components should be comparable across embeddings? Cosine distance measures instead the (normalized) projection of one vector (embeddings, in your case) onto the other, i.e. tells if the direction of the two is the same. To better understand this, it may be useful to try working out Euclidean and cosine distances across the following vectors: $v_1=(1,1,0,0), v_2=(5,5,0,0), v_3=(1,1,0,1)$ which might well be the outputs of a CountVectorizer for 3 documents with a dictionary of size 4. Some good references: an intuitive explanation and a detailed comparison with code; both measures suffer from the curse of dimensionality, see the discussion on statsexchange; again, on the equivalence of euclidean and cosine similarity.
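In practice, at this size (10,000 x 100) a full cosine-similarity computation is cheap. A minimal sketch with scikit-learn, using random numbers in place of the real embeddings and assuming the 10 target articles are the first 10 rows:
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

embeddings = np.random.uniform(-0.5, 0.5, size=(10_000, 100))   # stand-in for the real vectors
targets = embeddings[:10]                                       # the 10 target articles

sims = cosine_similarity(targets, embeddings)    # shape (10, 10000)
top = np.argsort(-sims, axis=1)[:, 1:11]         # per target: skip the article itself, keep the 10 most similar
print(top)                                       # indices of similar articles, one row per target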
H: my all categories convert to NaN or 'missing' ; what's the problem? Hello, I have a DataFrame and one of the features is categorical, and I want to convert it to an ordinal category (including a category for missing values), but in the last cell, as you can see, all of my categories come out as NaN (-1), and even if I use X.fillna('missing') all of them are assumed to be 'missing'. I don't know what the problem is and I couldn't find any helpful documentation. Thank you AI: You mentioned two issues in the code. Issue 1: our missing value is encoded as a separate class. It is no longer a missing value, since you put np.NaN there yourself, and NaN will be considered a separate class. In the OrdinalEncoder the order is not respected because the categories are not specified, so they are taken as 'auto' and simply ordered automatically, which is usually not the ordinal order you intend. As @Oxbowerce commented, your pandas.Categorical call returns -1 for everything because the values present in the column are not among the categories you specified (-1 is the code pandas uses for values that are not in the given categories). In your code, I am not sure why you are substituting np.NaN: X = pd.DataFrame ( np.array ( [ 'M', 'O-', 'medium', 'M', 'O-', 'high', 'F', 'O+', 'high', 'F', 'AB', 'low', 'F', 'B+', np.NAN]).reshape((5,3))) X.columns = ['sex', 'blood_type', 'edu_level'] At this point, if you know this is a missing value, it is better to substitute a placeholder right here rather than complicating things after your processing. If you want to convert back to categories, including your missing value, you can simply do: edu_dict = {2.0: 'Low', 0.0: 'High', 1.0: 'Medium', 3.0: 'Missing', } X['edu_level'] = X['edu_level'].map(edu_dict) Output: sex blood_type edu_level 0 M O- Low 1 M O- High 2 F O+ High 3 F AB Medium 4 F B+ Missing
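A small sketch of the more direct route: fill the missing value first and pass an explicit category order to the encoder, so the codes mean what you intend. The category names follow the question; the chosen order is an assumption.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

X = pd.DataFrame({"edu_level": ["medium", "high", "high", "low", None]})
X["edu_level"] = X["edu_level"].fillna("missing")

encoder = OrdinalEncoder(categories=[["missing", "low", "medium", "high"]])
X["edu_level_code"] = encoder.fit_transform(X[["edu_level"]]).ravel()
print(X)   # missing -> 0.0, low -> 1.0, medium -> 2.0, high -> 3.0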
H: Why does gridsearchCV fit fail? I already referred this post here but there is no answer. I am working on a binary classification using a random forest classifier. My dataset shape is (977,8) with 77:23 class proportion. My system has 4 cores and 8 logical processors. As my dataset is imbalanced, I used Balancedbaggingclassifier (with random forest as an estimator). Therefore, I used gridsearchCV to identify the best parameters of balancedbagging classifier model to train/fit the model and then predict. My code looks like below n_estimators = [100, 300, 500, 800, 1200] max_samples = [5, 10, 25, 50, 100] max_features = [1, 2, 5, 10, 13] hyperbag = dict(n_estimators = n_estimators, max_samples = max_samples, max_features = max_features) skf = StratifiedKFold(n_splits=10, shuffle=False) gridbag = GridSearchCV(rf_boruta,hyperbag,cv = skf,scoring='f1',verbose = 3, n_jobs=-1) gridbag.fit(ord_train_t, y_train) However, the logs that are generated in jupyter console, have below messages where the gridsearchcv score is nan for some cv executions as shown below. You can see that for some of the CV executions, the gridscore is nan. can help me please? And it keeps running for more than half an hour and no output yet Why does gridsearchCV return nan? [CV 10/10] END max_features=1, max_samples=25, n_estimators=500;, score=nan total time= 4.5min [CV 4/10] END max_features=1, max_samples=25, n_estimators=500;, score=0.596 total time=10.4min [CV 5/10] END max_features=1, max_samples=25, n_estimators=500;, score=0.622 total time=10.4min [CV 6/10] END max_features=1, max_samples=25, n_estimators=500;, score=0.456 total time=10.5min [CV 9/10] END max_features=1, max_samples=25, n_estimators=500;, score=0.519 total time=10.5min [CV 5/10] END max_features=1, max_samples=25, n_estimators=800;, score=nan total time= 3.3min [CV 4/10] END max_features=1, max_samples=25, n_estimators=800;, score=nan total time= 9.9min [CV 8/10] END max_features=1, max_samples=25, n_estimators=800;, score=nan total time= 7.0min [CV 6/10] END max_features=1, max_samples=25, n_estimators=800;, score=nan total time=10.7min [CV 1/10] END max_features=1, max_samples=25, n_estimators=800;, score=0.652 total time=16.4min [CV 9/10] END max_features=1, max_samples=25, n_estimators=800;, score=nan total time= 7.6min [CV 2/10] END max_features=1, max_samples=25, n_estimators=800;, score=0.528 total time=16.6min [CV 3/10] END max_features=1, max_samples=25, n_estimators=800;, score=0.571 total time=16.4min [CV 7/10] END max_features=1, max_samples=25, n_estimators=800;, score=0.553 total time=16.1min [CV 4/10] END max_features=1, max_samples=25, n_estimators=1200;, score=nan total time= 6.7min [CV 8/10] END max_features=1, max_samples=25, n_estimators=1200;, score=nan total time= 1.7min [CV 10/10] END max_features=1, max_samples=25, n_estimators=800;, score=0.489 total time=16.0min [CV 3/10] END max_features=1, max_samples=25, n_estimators=1200;, score=nan total time=18.6min [CV 1/10] END max_features=1, max_samples=50, n_estimators=100;, score=0.652 total time= 2.4min update - error trace report - fit fail reason he above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) <timed exec> in <module> ~\AppData\Roaming\Python\Python39\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params) 889 return results 890 --> 891 self._run_search(evaluate_candidates) 892 893 # multimetric is determined here because in the case of a callable 
~\AppData\Roaming\Python\Python39\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates) 1390 def _run_search(self, evaluate_candidates): 1391 """Search all candidates in param_grid""" -> 1392 evaluate_candidates(ParameterGrid(self.param_grid)) 1393 1394 ~\AppData\Roaming\Python\Python39\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params, cv, more_results) 836 ) 837 --> 838 out = parallel( 839 delayed(_fit_and_score)( 840 clone(base_estimator), ~\AppData\Roaming\Python\Python39\site-packages\joblib\parallel.py in __call__(self, iterable) 1052 1053 with self._backend.retrieval_context(): -> 1054 self.retrieve() 1055 # Make sure that we get a last message telling us we are done 1056 elapsed_time = time.time() - self._start_time ~\AppData\Roaming\Python\Python39\site-packages\joblib\parallel.py in retrieve(self) 931 try: 932 if getattr(self._backend, 'supports_timeout', False): --> 933 self._output.extend(job.get(timeout=self.timeout)) 934 else: 935 self._output.extend(job.get()) ~\AppData\Roaming\Python\Python39\site-packages\joblib\_parallel_backends.py in wrap_future_result(future, timeout) 540 AsyncResults.get from multiprocessing.""" 541 try: --> 542 return future.result(timeout=timeout) 543 except CfTimeoutError as e: 544 raise TimeoutError from e ~\Anaconda3\lib\concurrent\futures\_base.py in result(self, timeout) 443 raise CancelledError() 444 elif self._state == FINISHED: --> 445 return self.__get_result() 446 else: 447 raise TimeoutError() ~\Anaconda3\lib\concurrent\futures\_base.py in __get_result(self) 388 if self._exception: 389 try: --> 390 raise self._exception 391 finally: 392 # Break a reference cycle with the exception in self._exception ValueError: The target 'y' needs to have more than 1 class. Got 1 class instead AI: First I want to make sure you know what you're building here. You're doing (balanced) bagging with between 100 and 1200 estimators, each of which is a random forest of 300 trees. So each model builds between $100\cdot300=30k$ and $1200\cdot300=360k$ trees. Your grid search has $5^3=125$ hyperparameter combinations, and 10 folds. So you're fitting on the order of $10^8$ individual trees. The grid search splits your data into 10 pieces, stratified so that the class balance should be the same as in the whole dataset. Now the balanced bagging is set to use only 25 rows, but it's also using the default "not minority" method, which means it tries to only downsample the majority class. Those two together are impossible, so I'm not really sure what ends up happening (if I have some time I'll look into that later). Since not all your scores are nan, it obviously sometimes works. But now the scarce 25 rows are used to train a random forest, so conceivably sometimes one of the trees there selects a bag with no examples from one of the classes. I suspect that's the issue. The BalancedBaggingClassifier with a single decision tree base estimator acts as a fancier random forest, so that'd be my recommendation. You also wouldn't need to set class_weights in the tree, since the balanced bags will already be equally divided. I would expect better performance with larger max_samples, but even without changing that now you'll expect ~12.5 rows of each class for each tree to build off of. If you really want to balanced-bag random forests, then definitely increase the number of rows reaching each tree.
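A hedged sketch of that recommendation (variable names follow the question; the grid values are illustrative, and depending on your imbalanced-learn version the argument may be called estimator instead of base_estimator):
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

bbc = BalancedBaggingClassifier(
    base_estimator=DecisionTreeClassifier(),   # single trees instead of whole random forests
    random_state=42,
)

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_samples": [0.5, 0.8, 1.0],    # fractions of the training set rather than 5-100 rows
    "max_features": [0.5, 0.8, 1.0],
}

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
gridbag = GridSearchCV(bbc, param_grid, cv=skf, scoring="f1", n_jobs=-1, verbose=1)
# gridbag.fit(ord_train_t, y_train)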
H: Can a multilayer perceptron classify binary values? I have a dataset in which the response variable is Sick(1) or not sick (2). As for the variables, there are a few numeric ones (2/14), all the others are variables by levels (example: 1-Abdominal pain, 2-Throat pain...). I had two questions: 1-Can a multilayer perceptron classify a binary variable? Or can it only return numerical values? 2-Can binary or leveled variables be passed as training to the multilayer perceptron? Thank you very much. AI: 1 - Yes, a multilayer perceptron can be used for classification and for binary classification too. It will return a probability that an instance belongs to a certain class. In the case of binary classification a multilayer perceptron with one output may be used. Then one class is labeled as 0, the other one as 1. The probability boundary (for example 0.5) separates instances into two classes. 2 - Yes, nominal and ordinal data can be used as inputs in a perceptron. But it is necessary to encode them before use.
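For illustration, a minimal Keras sketch (assuming the 14 variables have already been encoded numerically and the labels mapped to 0/1):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(14,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of the "sick" class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=50, validation_split=0.2)
# a predicted probability above 0.5 is labelled "sick", below 0.5 "not sick"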
H: Advice on movie per topic classification and relation with rating I would like to extract topics from a set of movie subtitles, and possibly see if there is any relation with the viewer's rating. I have thought about creating a DocumentTermMatrix where each document is one movie, and than applying LDA in order to find the topics. However, I have never classified documents, and I have no idea about how to find out if one topic is more likely to have good reviews than others. I would like to create something graphical that shows both the clusters of topics and their relation to the rating... Any advice would be very useful! AI: I think that this experiment makes some sense, keeping in mind that: The dialogues (subtitles) are probably a decent indicator for the main topic of the movie, but not a perfect one. Similarly, there might be some correlation between dialogues/topics and ratings, but obviously the ratings don't only depend on the topic (or dialogues). Note: in case the goal is to see the relation between dialogue and rating, you could also consider training a supervised regression model which predicts ratings directly from the subtitles text. Topic modelling calculates a distribution of topics for every movie, so in general you can have a movie which belongs to different topics at varying degrees. Of course, it's common to assign a single topic to an instance (here a movie) by taking the maximum probability topic. If you do this, you can easily obtain the set of movies for every topic, and then calculate any statistic related to rating for the topic, for example the mean rating for this topic. In terms of visualization, a common way to represent a topic is with the set of N words which are most strongly associated with it. You could do a word clouds with these. You can also represent the histograms of the ratings distributions by cluster.
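A hedged sketch of this pipeline with gensim, using tiny toy stand-ins for the subtitle texts and ratings; a proper tokenizer and stop-word removal should replace the naive split:
import pandas as pd
from gensim import corpora, models

subtitles = ["two friends rob a bank and escape", "a family drama about love and loss"]  # toy stand-ins
ratings = [6.8, 7.4]                                                                      # toy stand-ins

tokenized = [text.lower().split() for text in subtitles]        # naive tokenization
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(corpus, num_topics=10, id2word=dictionary, passes=5)

# dominant (maximum-probability) topic per movie
dominant = [max(lda.get_document_topics(bow), key=lambda t: t[1])[0] for bow in corpus]

df = pd.DataFrame({"topic": dominant, "rating": ratings})
print(df.groupby("topic")["rating"].agg(["mean", "count"]))     # mean rating per topic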
H: TypeError: 'int' object is not subscriptable, On making header = None when reading a csv file As soon as I add header = None, I get the error TypeError: 'int' object is not subscriptable. Any help on how to solve this without removing header = None from reader? Code: reader = pd.read_csv('counts.csv', header = None) X3 = [] y3 = [] for row in reader: label = row[2] if len(label) > 0 and label.find(',') == -1: y3.append(label) y3 = np.asarray(y3) encoder = LabelEncoder() encoder.fit(y3) encoded_y = encoder.transform(y3) counts = np.bincount(encoded_y) print(counts) fig, ax = plt.subplots() plt.bar(list(range(6)), counts) ax.set_xticklabels(('', 'plasma', 'lymphocyte', 'epithelial', 'neutrophil','eosinophil','connective')) ax.set_ylabel('Counts') Error: TypeError Traceback (most recent call last) <ipython-input-97-53828df7d817> in <module> 7 y3 = [] 8 for row in reader: ----> 9 label = row[2] 10 if len(label) > 0 and label.find(',') == -1: 11 y3.append(label) TypeError: 'int' object is not subscriptable AI: reader is a dataframe, since you want to iterate the rows you need to use the function iterrows: import pandas as pd reader = pd.read_csv('counts.csv', header = None) for rowno, row in reader.iterrows(): label = row[2] print('row ',rowno,' label:', label) See for instance here about iterating dataframes.
H: How to predict strategy based on given data using Machine Learning? My basic goal is to predict strategy based on given data for instance a) Predict what formation In a football match will maximize my winning rate b) Predict what product combination will maximize my sales rate in the Grocery store How to deal with such problems in machine learning? What approach is used in such problems? AI: These are optimization problems. Typically it requires two parts: One needs a function $y=f(x)$ which returns the expected outcome $y$ for a particular "strategy" $x$. The way to do it depends on the actual problem, for example sometimes it can be a custom deterministic function. Otherwise it can be a supervised regression model, but then one needs a training set. The second part is the optimization per se: exploring the set of possible "strategies" $X$, applying the function $f$ to every candidate $x\in X$ to obtain the corresponding outcome, then pick the maximum. Genetic learning is a common approach for exploring the search space, but it depends on the problem.
H: Which is the difference between the two Greek BERT models? I want to use the Greek BERT which can be found here https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1 However I am confused about which model should I use and which are the differences. The tokenizer is the same tokenizer = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1') but we have two models model = AutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1") model = AutoModelWithLMHead.from_pretrained("nlpaueb/bert-base-greek-uncased-v1") Which one should I use? AI: The main difference between the two models is that the first one is a general BERT model without a specific head that simply returns the raw hidden states of the model, whereas the second one has an architecture with a language modelling head on top. You can find some more info in the huggingface documentation, compare for example the explanation of the BertModel versus the BertLMHeadModel.
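To make the difference concrete, here is a short sketch (the Greek example sentence is arbitrary): the plain model gives hidden states you can use as features, while the LM-head model adds a vocabulary-sized output suitable for predicting (masked) tokens.
from transformers import AutoTokenizer, AutoModel, AutoModelWithLMHead

name = "nlpaueb/bert-base-greek-uncased-v1"
tokenizer = AutoTokenizer.from_pretrained(name)
inputs = tokenizer("Αυτό είναι ένα παράδειγμα.", return_tensors="pt")

encoder = AutoModel.from_pretrained(name)        # backbone: features / fine-tuning
hidden = encoder(**inputs).last_hidden_state     # (1, seq_len, hidden_size)

lm = AutoModelWithLMHead.from_pretrained(name)   # adds the language-modelling head
logits = lm(**inputs).logits                     # (1, seq_len, vocab_size)
print(hidden.shape, logits.shape)
Note that in recent versions of the transformers library, AutoModelForMaskedLM is the preferred replacement for the deprecated AutoModelWithLMHead.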
H: Why leaky features are problematic I want to know why leaky features are problematic in machine learning/data science. I'm reading a book that uses the Titanic dataset for illustration. It says that the column body (Body Identification Number) leaks data since, if we are creating a model to predict whether a passenger would die, knowing that they had a body identification number a priori would let us know they were already dead. Logic-wise, this makes sense. But assuming I don't have any knowledge about this dataset, I will just keep the body feature and build a model, say RandomForestClassifier. Even if later on I discover it leaks data, so what? As long as my test set has this column, the model still runs and still gives me a prediction (a very good prediction, indeed). UPDATE I followed a linked thread here, Why does my model produce too good to be true output? and got some more thoughts. Let's assume a very hypothetical situation: someone wants to mess with my predictions by purposely attacking my data source and altering the data in the body column. In this case, I can see that dropping it before model training makes sense, since this completely fools the model. This got me into thinking that data governance is just as important as, and perhaps even more important than, building a good ML model. But I rarely see this practice in real-world projects. Usually, the expectation is: given the data, train the best model. These datasets typically have thousands of features and are already prepared by data engineers. It's very likely that they accidentally include features that make validation scores unreliably high, even if one applies tons of advanced validation techniques. This seems to make it hard for data scientists to properly train and validate models because the only way then is to comb through the data generation process, which might be perceived as unproductive. AI: Yes, the argument you are giving is perfectly valid. But let's look at two different scenarios and see why it does not help in the real world: If you already have a variable which perfectly predicts the target variable, why would you spend your resources on creating an ML model? Remember that the objective of an ML model is not just to be good on testing data; when it is deployed in production it should perform well. In the real world, a leaky variable leads to bad model predictions: at prediction time the variable may not be available (or not yet known), hence your model pipeline will break or model performance will drop drastically.
H: What does it mean when roc curves intersect at a point? I am working with a data set and I have obtained the following roc curve: As you can see, black and Asian ethnicity cross at one point (green and purple lines). Does this have any significance? Could any conclusion be drawn from this? Note that I am dealing with the following datasets: -transrisk_performance_by_race_ssa -transrisk_cdf_by_race_ssa.csv -totals.csv In order to observe whether fairness affects profits. AI: From a fairness point of view one might argue that one curve dominating another curve may be an indication of a model being potentially biased towards the class with the dominant ROC curve. However, if two ROC curves intersect it implies that none of the two dominates the other one. (Please note the direction of above stated conditionals. Also this is intentionally vaguely phrased since the conclusions to be drawn are limited.) Another (not insightful) observation is that in such intersecting cases one cannot generally tell from the ROC curve plot which of the two has a higher ROC AUC (to phrase it differently: an ROC curve dominating another ROC curve implies a higher ROC AUC).
H: Difference between scaling just X or X and y in PCA / principal component regression Before doing principal component regression it is important to scale the data. But which data exactly? Is it enough if I just scale X, or do I have to scale the whole data set, containing X and Y (= regressor and regressand)? The advantage of scaling just X is that I do not have to back-transform Y. But is this valid? What's the difference between scaling just X and scaling the whole data set? AI: The PCA process identifies the directions with the greatest variance. Since the variance of a variable is measured in its own units squared, if all the variables are not standardized to have zero mean and standard deviation of one before calculating the components, those variables whose scale is larger will dominate the rest. Therefore, it is not necessary to scale the variable Y; simply scale the variables of the matrix X so that they are all on a comparable scale.
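A minimal sketch of this with a scikit-learn pipeline (the number of components is an arbitrary choice): only X flows through the scaler and the PCA, and y stays in its original units, so no back-transform is needed.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

pcr = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
# pcr.fit(X_train, y_train)   # scaling and PCA are applied to X only
# pcr.predict(X_test)         # predictions come back directly in the units of y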
H: Forecast new loans that will be granted next month using machine learning NN regressor I'm attempting to apply a machine learning regression solution using a NN to the following problem: I have the history of loans granted by a bank, and I need to forecast what loans will be opened in the future based on macroeconomic variables. Amount, Zipcode and Segment are loan features and Unemployment and GDP are macroeconomic features.
Loan Date Amount Zipcode Segment Unemployment GDP Probability
1 Jan-2020 100,000 40921 Corporate 5.1% 2.5% 1.0
2 Oct-2020 250,000 54323 Business 4.9% 3.2% 1.0
3 Mar-2021 223,000 40921 Business 4.8% 3.1% 1.0
4 Dec-2021 562,000 43241 Corporate 5.0% 2.9% 1.0
5 Feb-2022 300,000 54325 Corporate 5.2% 2.7% 1.0
I added Probability = 1.0 to every line because the loan was actually granted. Probability is the label in the machine learning model, and it's what I need to predict with the regressor. If the probability is greater than 0.5 then the loan is granted, else it is not granted. Now, I need to predict loans that will be opened next month, Apr-2022 at the moment of this writing. The question is what do I take as input in the prediction. My idea is to take all the loans that were granted in the past, combine them with the Unemployment and GDP forecasted for Apr-2022, and see if the Probability is greater than 0.5. I have two concerns. First, I'd like to know if this model makes sense. Second, since I don't have much control over the number of loans that were granted in the past, if the history is large then the predicted number will also be large, and this is not right. Any suggestions how to approach this? AI: Given the data and information shared by you, if you also have rows in which the loan application was rejected, then this problem can be phrased as a classification problem. Given the data shared by you, I would frame the problem as follows: Independent variables: Zipcode, Amount, Segment, Unemployment, GDP. Target variable: Loan Accepted Flag (0,1). Once we have these, we can also try to bring in more data like Crime Rate, Default Rate, etc. by Zipcode & Segment or at an overall level. This model will use your independent variables, and whenever you enter a new loan with all its information it will return the probability of acceptance. I think you can completely ignore the Date column unless you know that loan acceptance varies by month or any other date-related variable. Once the problem is formulated, you can approach it like any other classification problem.
H: AttributeError: 'DecisionTreeRegressor' object has no attribute 'tree_' everyone. I'm a noob in data science and I'm building my first ML model. But I keep getting this error, 'DecisionTreeRegressor' object has no attribute 'tree_' This is my code below. df = pd.read_csv('/Users/mog/Downloads/train_data.csv') from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 1) df.model = DecisionTreeRegressor() df.model.fit(x_train, y_train) from sklearn.metrics import accuracy_score print('Training score...', accuracy_score(y_train, df_model.predict(x_train))) print('Test score...', accuracy_score(y_test, df_model.predict(x_test))) Please help. AI: The problem is that you are trying to estimate a classification metric for a regression model. So, depending on your target you have two options: If it is categorical you need to use DecisionTreeClassifier instead of DecisionTreeRegressor If it is continuous, you need to change the metric accuracy_score for example for r2_score I also noticed you named your model df.model but when predicting this is named df_model. So I recommend to change both to df_model Hope it helps!
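A hedged, corrected version of the snippet from the question: one consistent model name, x and y actually defined from the dataframe (the "target" column name is a placeholder), and a regression metric instead of accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("train_data.csv")
x = df.drop(columns=["target"])     # placeholder: the feature columns
y = df["target"]                    # placeholder: the continuous target

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=1)

df_model = DecisionTreeRegressor()  # same name is used for fitting and predicting
df_model.fit(x_train, y_train)

print("Training score...", r2_score(y_train, df_model.predict(x_train)))
print("Test score...", r2_score(y_test, df_model.predict(x_test)))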
H: Siamese model accuracy stuck at 0.5 I'm trying to train a Siamese Network model for a signatures dataset using Keras API and considering the loss only seems not bad. But ironically enough the model accuracy stuck at 0.5. Model Loss: Model Accuracy: My model is kind of a deep model, here's its architecture: input = Input((128, 128, 1)) x = BatchNormalization()(input) x = Conv2D(16, (2, 2), activation="tanh")(x) x = AveragePooling2D(pool_size=(2, 2))(x) x = Conv2D(32, (2, 2), activation="tanh")(x) x = AveragePooling2D(pool_size=(2, 2))(x) x = Conv2D(64, (2, 2), activation="tanh")(x) x = AveragePooling2D(pool_size=(2, 2))(x) x = Conv2D(128, (2, 2), activation="tanh")(x) x = AveragePooling2D(pool_size=(2, 2))(x) x = Conv2D(256, (2, 2), activation="tanh")(x) x = AveragePooling2D(pool_size=(2, 2))(x) x = Flatten()(x) x = BatchNormalization()(x) x = Dense(10, activation="tanh", kernel_regularizer="l2")(x) embedding_network = Model(input, x) input_1 = Input((128, 128, 1)) input_2 = Input((128, 128, 1)) tower_1 = embedding_network(input_1) tower_2 = embedding_network(input_2) merge_layer = Lambda(euclidean_distance)([tower_1, tower_2]) normal_layer = BatchNormalization()(merge_layer) output_layer = Dense(1, activation="sigmoid")(normal_layer) siamese = Model(inputs=[input_1, input_2], outputs=output_layer) The model takes grayscale 128x128 images of signatures in batches of 16 images using the ImageDataGenerator class having 800 training samples and 200 validation samples and trains in 15 epochs. The optimizer function is RMSprop at learning rate of 0.001 and the loss function is Contrasive Loss with margin of 0.5. What did I miss and why doesn't my model learning? AI: Turns out the problem was hidden in the loss function. I decided to change it from Contrasive Loss to Categorical Crossentropy just for fun, and with some kind of "magic" it worked. from: def loss(margin=1): def contrastive_loss(y_true, y_pred): square_pred = tf.math.square(y_pred) margin_square = tf.math.square(tf.math.maximum(margin - (y_pred), 0)) return tf.math.reduce_mean( (1 - y_true) * square_pred + (y_true) * margin_square ) return contrastive_loss siamese.compile(loss=loss(margin=MARGIN), optimizer=RMSprop(learning_rate=0.001), metrics=["accuracy"]) to: siamese.compile(loss="categorical_crossentropy", optimizer=RMSprop(learning_rate=0.001), metrics=["accuracy"])
H: Do subjective features result in noise? Suppose I have a dataset containing feature values I don't all agree with, as another person selected them. An example problem could be classifying 'good' or 'bad' music using subjective features, e.g., emotion. If an emotion feature of a particular sample has a value of 'sad,' whereas I think 'peaceful' is a much better representation, would that count as feature noise? Suppose I then re-labeled the samples as 'good' or 'bad' by my standards, without touching the subjectively 'wrong' features. Will a classifier still be able to accurately classify 'good' music based on my taste, despite the features? Also, the fact that subjective features can even be used in ML problems is something I assumed, as I'm unable to find any related resources. I'm also relatively new to ML, so please do correct me. AI: Welcome to datascience.stackexchange. One way to look at the subjectivity problem is a model per subjective scenario (in your case, per person). Then, each classification could use some kind of voting mechanism between all the models' classifications. In that sense, using your friend's model on "yourself" would be more of an inaccuracy than noise, and changing the labels would be more of a noise source (like a noisy sensor in a system that collected the wrong target value). An alternative might be to group song observations from different persons, aggregating per-song features by some averaging. Thus, if we have enough unique sensors (i.e. persons), a song might get an average emotion value of 6.5 and would classify closer to peaceful. I guess it comes down to what extent an individual's opinion can be represented in a dataset of collected opinions, and how that matters for the user story you were asking about. This is a good example of how early domain expertise with regard to a well-defined user story can make the difference in an ML project. The user story here might be: classifying an individual's score for a song, while the domain expert might state how, if at all, it is possible to classify an individual's taste based on the number of unique sensors. Regarding the emotion averaging, in case it's not possible to project it onto a numeric scale (a domain-expert issue), see my post here regarding n-hot encoding, for exactly this scenario (encoding not just the existence of a category value, but also its weight).
H: One hot encoding with Keras I have a large dataset (500k rows) where one column contains the weekday of the purchase. For now, it is in the 0-6 format (Mon-Sun) and I think I should one-hot encode it before training my sequential NN in Keras. It is not clear if I should do this in the dataset (transform the one column into 7 columns, manually or is there a Pandas function?) or if I can just have that one column containing an array of length 7. The dataset is a Pandas DataFrame if it matters. AI: Pandas and scikit-learn provide functions to do the encoding. Here is an example using the pandas get_dummies function:
one_hot = pd.get_dummies(df['weekday'], prefix='weekday')
# Drop the original weekday column as it is now encoded, and join the 7 new indicator columns
df = df.drop('weekday', axis=1).join(one_hot)
H: Dealing with inputs of different sizes in time series forecasting I'm dealing with a task where I need to forecast the n-ith value of a target variable in a multivariate time series. But in this case we have two variables: -var1: Is my target variable that represents the output of a system. -var2: This time series represents a binary control signal (on/off). Thus, var1 varies according to its previous values and according to the control signal (var2). My problem is that the output in a given day n depends on the last n values of var 2 and the last n-1 values of var 1. That is, I have a different number of values (n and n-1) as the input of my network. In this scenario, I'm not sure about how to model the input of my network. I was trying to use a LSTM layer as the input of my network, with 2 dimensions (var1 and var2). But, as I said, each "sample" has n values of var2 and n-1 values of var1. It is not possible to create a 2D array in this situation. Any idea? AI: You can create a sort of encoder-decoder network with two different inputs. latent_dim = 16 # First branch of the net is an lstm which finds an embedding for the var1 var_1_inputs = tf.keras.Input(shape=(window_len_1, n_1_features), name='var_1_inputs') # Encoding var_1 encoder = tf.keras.layers.LSTM(latent_dim, return_state=True, name = 'Encoder') encoder_outputs, state_h, state_c = encoder(var_1_inputs) # Apply the encoder object to var_1_inputs. var_2_inputs = tf.keras.Input(shape=(window_len_2, n_2_features), name='var_2_inputs') # Combining future inputs with recurrent branch output decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, name = 'Decoder') x = decoder_lstm(var_2_inputs, initial_state=[state_h, state_c]) x = tf.keras.layers.Dense(16, activation='relu')(x) x = tf.keras.layers.Dense(16, activation='relu')(x) output = tf.keras.layers.Dense(1, activation='relu')(x) model = tf.keras.models.Model(inputs=[var_1_inputs,var_2_inputs], outputs=output) optimizer = tf.keras.optimizers.Adam() loss = tf.keras.losses.Huber() model.compile(loss=loss, optimizer=optimizer, metrics=["mae"]) model.summary() Here you are, of course I inserted random numbers for layer, latent dimensions, etc. You can also have different features to input with var_1 and var_2 and these have to passed as arrays.
H: Clustering - Auto ML Solutions I intend to use clustering for my problem grouping customers together. However, instead of me manually tuning hyperparameters and CV, I would like to know is there anyway to do the clustering using Auto-ML packages. The auto-ml packages that I found online don't seem to support clustering and are mainly focused on classification and regression. So, is there any Auto-ML solutions for the clustering? can provide me the list of auto-ML packages please? AI: Classification predictions can be evaluated using accuracy, whereas regression predictions cannot. Regression predictions can be evaluated using root mean squared error, whereas classification predictions cannot. Clustering is totally different! You are not looking for some accuracy measure or precision of prediction using a supervised technique, but rather, clustering is used to group data points having similar characteristics. This is why it is known as unsupervised learning. Try Mean Shift for automatically detecting the optimal number of clusters. Here's an example. Hopefully you can adapt it for your specific use. import numpy as np from sklearn.cluster import MeanShift, estimate_bandwidth from sklearn.datasets import make_blobs # ############################################################################# # Generate sample data centers = [[1, 1], [-1, -1], [1, -1], [1, -1], [1, -1]] X, _ = make_blobs(n_samples=10000, centers=centers, cluster_std=0.2) # ############################################################################# # Compute clustering with MeanShift # The following bandwidth can be automatically detected using bandwidth = estimate_bandwidth(X, quantile=0.6, n_samples=5000) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X) labels = ms.labels_ cluster_centers = ms.cluster_centers_ labels_unique = np.unique(labels) n_clusters_ = len(labels_unique) print("number of estimated clusters : %d" % n_clusters_) # ############################################################################# # Plot result import matplotlib.pyplot as plt from itertools import cycle plt.figure(1) plt.clf() colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk') for k, col in zip(range(n_clusters_), colors): my_members = labels == k cluster_center = cluster_centers[k] plt.plot(X[my_members, 0], X[my_members, 1], col + '.') plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) plt.title('Estimated number of clusters: %d' % n_clusters_) plt.show() Or, try this. import numpy as np import pandas as pd from sklearn.cluster import MeanShift from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn.datasets import make_blobs # We will be using the make_blobs method # in order to generate our own data. clusters = [[2, 2, 2], [7, 7, 7], [5, 13, 13]] X, _ = make_blobs(n_samples = 150, centers = clusters, cluster_std = 0.60) # After training the model, We store the # coordinates for the cluster centers ms = MeanShift() ms.fit(X) cluster_centers = ms.cluster_centers_ # Finally We plot the data points # and centroids in a 3D graph. fig = plt.figure() ax = fig.add_subplot(111, projection ='3d') ax.scatter(X[:, 0], X[:, 1], X[:, 2], marker ='o') ax.scatter(cluster_centers[:, 0], cluster_centers[:, 1], cluster_centers[:, 2], marker ='x', color ='red', s = 300, linewidth = 5, zorder = 10) plt.show() There are a few clustering methodologies that help you choose the optimal number of clusters automatically. 
Check out the link below for some ideas of how to move forward with your project. https://scikit-learn.org/stable/modules/clustering.html
H: Calculating relative risk in R I am working with the nhanes data from 2017-2018. I want to determine the relative risk of being overweight (Adipös) when being "Poor", grouped by Gender and Ethnicity. So far I have written this: DurchgangJ%>% mutate( bmi = Gewicht / (Körpergröße/100)^2 ) %>% filter( Lebensalter >= 18 )%>% filter(!is.na(bmi))%>% filter(!is.na(Ratio))%>% mutate (Poor = Ratio < 1.3)%>% mutate (nichtarme = Ratio >= 1.3)%>% mutate (Adipös= bmi>=30)%>% mutate (Normal= bmi<30)%>% group_by(Geschlecht, Ethnie)%>% summarise (Arme = sum(Poor), Nicht_Arm= sum(nichtarme), Adipöse = sum (Adipös), Normal_Gewicht= sum (Normal)) which gives me the number of poor, overweight, normal weight and not poor people per gender and ethnicity. I want to make the following calculations: R1 = number of overweight (Adipöse) not poor people (Nicht_Arm) / not poor people, R2 = number of overweight poor people / poor people (Arme), and then R2/R1. I got all the values I need apart from the number of overweight not poor people and the number of overweight poor people. I was thinking to maybe use the if/else function but I did not manage to make it work. I am relatively new to R and therefore don't really know how to summarize TRUE and FALSE values, because all I need for the overweight non-poor people is the count of rows where both columns Adipös and Nicht_Arm are TRUE. Thank you for your help AI: This is more of a programming than a data science question and is therefore better suited to the stackoverflow stackexchange page. Without having a data sample it is a bit difficult, but I would expect something like this to at least get close to what you're looking for: DurchgangJ %>% mutate(bmi = Gewicht / (Körpergröße/100)^2) %>% filter(Lebensalter >= 18)%>% filter(!is.na(bmi))%>% filter(!is.na(Ratio))%>% mutate(Poor = Ratio < 1.3)%>% mutate(Overweight = bmi >= 30)%>% group_by(Geschlecht, Ethnie)%>% summarise( R1 = sum(Overweight & !Poor) / sum(!Poor), R2 = sum(Overweight & Poor) / sum(Poor) )
H: Models: during training and during deployment It's known that during model training we hold out the test set. However, I find that at deployment time, using a new model trained on the entire dataset (train+test) actually also yields pretty good results on predicting truly 'unseen' data. I'm looking for comments on this approach. Are there any concerns? In particular, in this approach I would apply transformations of any kind, e.g. standardization/normalization/categorical encoding, over the entire dataset. Isn't it better for deployment? (Compared to the traditional approach where all these transformations are fitted only on the train set, which can sometimes fail to encode some categories of data that are absent from the train set.) Any experience sharing and critique of my approach? TIA. AI: Usually the approach is as follows. First, find the model that works best: Divide the data into train & test (the test set is out-of-sample). Do all the preprocessing on the training data and apply the same transformations to the test data (fit on train, transform on test). Do hyperparameter tuning to find the best model. Evaluate the best model on the test data; once satisfied, take the model to production. Taking the model to production: Take the complete data (train + test) and use the same preprocessing steps and the best hyperparameters found to train the model again, but this time on the complete data. For classification you may have to adjust the decision threshold. This is how models are deployed across industry.
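A small sketch of this workflow with scikit-learn (toy estimator and hyperparameter; X and y stand for your full dataset and are assumed to exist):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

pipe = make_pipeline(StandardScaler(), LogisticRegression(C=1.0))  # C found via tuning on the train set
pipe.fit(X_train, y_train)                      # preprocessing is fitted on the train set only
print(f1_score(y_test, pipe.predict(X_test)))   # honest out-of-sample estimate

final_model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0))
final_model.fit(X, y)                           # refit on all data before deployment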
H: Do we perform text embedding before or after train-test splitting? Do we perform text embedding before or after train-test splitting? I know that encoding variables is usually done after the split. However, I'm not sure if that's also the case for text processing? AI: You should compute embeddings after the split, otherwise you would be incurring data leakage and hiding the out-of-vocabulary word problem.
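The same rule in a short sketch with a TF-IDF vectorizer (the lists texts and labels are assumed to exist): fit the text representation on the training split only, then merely transform the test split.
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

train_texts, test_texts, y_train, y_test = train_test_split(texts, labels, test_size=0.2)

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)   # vocabulary learned from the train split only
X_test = vectorizer.transform(test_texts)         # words unseen in training are simply ignored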
H: how to interpret precision recall value in binary classification of scikit-learn I am working with binary classification and my classification report generated through scikit-learn looks like the image below. I am confused: I have two precision-recall values, one for class 0 and the other for class 1. Which value should I consider when writing up results? AI: In binary classification you are usually most interested in predicting the 1s (the positive class). Some examples of binary classification: loan default or not (default = 1, else 0), whether a customer will churn or not (churn = 1, else 0). So, if your labels follow the same convention, you should report the precision and recall of class 1 when presenting results, i.e. the reporting should focus on the positive class. The metrics for class 0 are still useful for diagnosing the model, but the headline numbers are normally those of the class you actually care about predicting.
H: How do I estimate the benefit of one business method over another when all my data is confounded? I'm trying to estimate the benefit of using one method over another, but the existing data is confounded and I'm struggling to figure it out. So my company submits debit orders to customers to collect a monthly payment. Sometimes customers reverse the charge or don't have enough money, so the payment fails and we don't collect any money from them that month. Orders are submitted through method A, which is cheap but can be reversed; and through method B, which is more expensive but can't be reversed, although it can still fail. Overall method B is more likely to succeed in collecting a payment, and I'm trying to estimate how much better it is. How much more likely are we to collect a payment if method B is used over method A? Currently a simple rule based system is used to decide whether to use method A / B for the next payment, which is based off of how many payments the customer previously failed. So the worse customers are more likely to have had their payments made on method B. So when I look at my payments data, the average payment rate for method A is say 90% and the average payment rate for method B is say 70%. That makes sense, since the customers who were more likely to not pay are assigned to method B. But we know that method B is better at collecting payments! So my issue is, how do I quantify how much better method B is from this data? AI: The only way is to obtain two comparable datasets for method A and method B. How exactly to do that is specific to your data and likely involves expert knowledge, but I can at least try to sketch possible directions: If there are cases where customers were randomly assigned A or B and you have clear indications of these cases, that would perfect. If there are enough cases where the same customer was sometimes billed by method A and sometimes B, that could the basic of a fair dataset: even if the customer was switched from A to B, one can still observe by how much this decreases the risk of non-payment. Note that the comparison would be paired by customer. If none of this works, the company has to organize a proper A/B testing on a sample of transactions (basically building a dataset like described in the first option). This may cost a bit in the short term, but it could be beneficial to obtain reliable estimations in the long term.
H: Comparison between applications of vanilla transformer and BERT I try to identify applications of vanilla transformer in nlp, as well as those in BERT. But I don't seem to find good summaries for either of them. Thus my questions are: what are the applications of transformer and bert respectively? in (1), why in some application vanilla transformer is used over BERT? (or vice versa?) What're the reasons? TIA. AI: A normal transformer has two parts: encoder (non-autoregressive) and decoder (autoregressive). This allows it to generate text (i.e. sequences of tokens). Therefore the applications of the vanilla transformer are those receiving a piece of text as input and getting another piece of text as output. The main example is machine translation. BERT is a transformer encoder. Its applications are those tasks where the input is a piece of text (or N pieces of text) and the output is either: One single output (at the [CLS] token position). This includes any classification or regression task. One output per some/each of the input tokens. This mainly comprises token tagging tasks, e.g. part of speech tagging, span tagging (e.g. for question answering).
H: Is it wise to always `StandardScaler()` features? [SOLVED] My current investigations point to the sklearn.preprocessing.StandardScaler() not always being the right choice for certain types of feature extractions for neural networks. Suppose I want to classify sound events based on spectrogram data. A second of such data could look like this: Visible here is a sine wave of around 1kHz over one second. The settling of the low bands is specific to the feature extraction and not part of the question. The data is a (n,28,40) matrix of dBFS-values, ergo the logarithmic energy levels relative to the maximal digital headroom of the wav-file. If StandardScaler is now applied, the visual representation of the sound now looks like this: ... which basically removes the defining features and amplifies the noise, exactly what is NOT wanted. Would a level-based scaler be the better choice here or is the StandardScaler() just not appearing to benefit the system in this specific case of a sine wave? Note: I am a student and I do not have years of experience. So if the question lacks quality, I ask of you to suggest an improvement before downvoting. Thank you. AI: By the looks of the image, you seem to be training the Scaler for each line of the graph. Almost every line of the first graph keeps the same values along the 0-1s duration. If you run the scaler for each line, then you end up with 0 as a result for every pixel, because almost all pixels have the same value that is also the mean. Also note that the only pixel that is different is the leftmost pixel, which is the pixel that differs the most. What you should be doing instead is lining up all the pixels of the matrix in one single column and scaling the whole data at once, then reshape the column back to your matrix form, then you won't see that much of a difference between the two images.
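A hedged sketch of that global scaling, with dummy dBFS values of the shape given in the question, together with the simpler level-based alternative mentioned there:
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.uniform(-80, 0, size=(500, 28, 40))   # (n, time, freq) dBFS values, dummy data

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X.reshape(-1, 1)).reshape(X.shape)   # one global mean/std for all pixels

# A fixed, level-based scaling is often enough for dBFS data (assuming a -80 dB floor):
X_level = (X + 80.0) / 80.0   # maps roughly to [0, 1] without flattening the relative structure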
H: Why does Keras UpSampling2D layer cast to float32? When using the Keras UpSampling2D layer to resize a float64 array the result is cast to float32. The following code snippet demonstrates this behavior. import numpy as np from keras import layers arr = np.ones((4,4), np.float64).reshape((1,4,4,1)) print(arr.dtype) # float64 upsample = layers.UpSampling2D(interpolation='bilinear') arr_upsampled = upsample(arr) print(arr_upsampled.dtype) # float32 Why is this happening, and is it possible to keep the resulting tensor float64? AI: This is what happens: UpSampling2D invokes keras.backend.resize_images, which invokes tensorflow.image.resize. In its documentation, we find that: The return value has type float32, unless the method is ResizeMethod.NEAREST_NEIGHBOR, then the return dtype is the dtype of images: As you specified 'bilinear' as the interpolation method, the resulting type is float32. If you use 'nearest', you should get a float64 result. This constraint seems to be due to the internal implementation of the bilinear interpolation to be implemented only for float32. This limitation is permeated to all functionalities that depend on it.
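A quick check of that statement, reusing the snippet from the question:
import numpy as np
from keras import layers

arr = np.ones((4, 4), np.float64).reshape((1, 4, 4, 1))
print(layers.UpSampling2D(interpolation="bilinear")(arr).dtype)   # float32
print(layers.UpSampling2D(interpolation="nearest")(arr).dtype)    # expected to stay float64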
H: Application of bag-of-ngrams in feature engineering of texts I've got a few questions about the application of bag-of-n-grams in feature engineering of texts: (1) How can we (if at all) perform word2vec on bag-of-n-grams? (2) As the feature space of bag-of-n-grams increases exponentially with N, what techniques (if any) are commonly used together with bag-of-n-grams to increase computational and storage efficiency? (3) In general, is bag-of-n-grams used alongside other feature engineering techniques when transforming a text field into text features? AI: I will answer all three questions together. Embeddings take tokens as input, i.e. the smallest meaningful pieces of text, and you define what those are: you may call characters the smallest meaningful piece, or words, phrases, or whatever your creativity lets you. Word2vec is usually based on words, so if you feed it n-grams as tokens you get the same kind of embedding space in which your n-grams are also taken into account. With embeddings, the high dimensionality of the bag-of-n-grams feature space is automatically taken care of (question 2), and the whole idea answers question 3 as well (if not, please update your question with the exact feature engineering techniques you mean and I will update my answer too). Disclaimer: However intuitive it is, the output might have some subtle phenomena inside and you need to be careful, e.g. the embedding algorithm sees "data" and "science" individually and, within the same context, tries to embed "data science" too. It might dilute the semantic map. For example, in this case, if you do not consider the individual tokens of your n-gram it will be totally fine, but you probably don't want to skip two semantically strong words like "data" and "science". So, be careful.
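A minimal sketch of question 1, using gensim (my choice of library, not something the question prescribes; any word2vec implementation that accepts arbitrary tokens would do): frequent bigrams are merged into single tokens with Phrases and then embedded exactly like words.

from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

# Toy corpus; in practice these would be your tokenised documents.
sentences = [
    ["data", "science", "is", "fun"],
    ["data", "science", "uses", "machine", "learning"],
    ["machine", "learning", "needs", "data"],
]

# Merge frequent bigrams into single tokens such as "data_science".
bigram = Phraser(Phrases(sentences, min_count=1, threshold=1))
ngram_sentences = [bigram[s] for s in sentences]

# Train word2vec on the n-gram tokens; the merged bigrams get their own vectors.
model = Word2Vec(ngram_sentences, vector_size=50, window=2, min_count=1)
print(model.wv.index_to_key)  # vocabulary now mixes unigrams and merged bigrams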
H: NLP Basic input doubt I have a basic doubt in NLP. When we consider traditional models like decision trees, the feature column order is fixed, i.e. each column corresponds to some particular attribute. So if I use TF-IDF, each word will have some fixed index and the model can learn. But in the case of an LSTM, sentences can be jumbled. For example: "There is heavy rain", "Heavy rain is there". In the above two sentences, the word "heavy" occurs in different places. So in order for the model to understand that we have passed the word "there", we would require some unique representation of the word "there", either a one-hot encoding or word2vec. Is my understanding so far right? My final doubt is: if I use TF-IDF for the above, how will it work? How will the model understand that the word "heavy" is passed? This doubt has been bugging me for long. Kindly clarify this! Thanks a ton! AI: First of all, in a BOW model the order is not represented. A decision tree does not care whether "heavy" is the first feature or the last feature, and it works the same for both, as BOW just models the "existence" of words in documents. So your final doubt is actually nothing to worry about: in both sentences you have the word "heavy", and in that column you get a $1$ for both sentences (or the TF, TF-IDF, or any other count you use). The LSTM sees the order as it has a memorising behaviour. That means the word "heavy" has an index, and during training your model learns the probability of usage of the word "heavy"; that is, it models your text. So your understanding is right: there is a unique representation of each word for the LSTM too.
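To see the order-invariance concretely, here is a tiny sketch with scikit-learn's TfidfVectorizer applied to the two example sentences; both rows come out identical because the representation only counts words.

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["There is heavy rain", "Heavy rain is there"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

print(vec.get_feature_names_out())  # ['heavy' 'is' 'rain' 'there'] (get_feature_names() in older scikit-learn)
print(X.toarray())                  # the two rows are identical: word order is lost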
H: What are the benefits of using spectral k-means over simple k-means? I have understood why k-means can get stuck in local minima. Now, I am curious to know how spectral k-means helps to avoid this local-minima problem. According to this paper (A Tutorial on Spectral Clustering), the spectral algorithm goes as follows: (1) project the data into an $R^n$ matrix; (2) define an affinity matrix A, using a Gaussian kernel K or an adjacency matrix; (3) construct the graph Laplacian from A (i.e. decide on a normalization); (4) solve the eigenvalue problem; (5) select the k eigenvectors corresponding to the k lowest (or highest) eigenvalues to define a k-dimensional subspace; (6) form clusters in this subspace using k-means. In step 6, it is using k-means. How is it overcoming the local-minima problem of k-means? Moreover, what are the benefits of spectral clustering over k-means? If someone gives detailed insight into this, it will be very helpful for me. AI: They are totally different approaches. Spectral embedding is a representation of your data that maps close data points next to each other in a new feature space. This helps k-means deal with more separable clusters than in the original space. But this is not my answer to your question! The answer is that you need to know not only the algorithm of spectral clustering but also how it maps data into a new vector space. If you read about that you will understand it easily. I give it a try here: Spectral embedding (on which you apply a simple clustering and get spectral clustering!) is basically a graph embedding method. What a graph is, is out of scope of this answer and I assume you know it. In graphs, a good clustering puts those nodes that have many "within connections" (with each other) and few "between connections" (with other parts of the graph) into a cluster. See the image below to understand more: the red nodes have several connections to each other but a single connection relates them to the other parts of the graph. It is a good cluster, right? An application example could be suggesting friends on Facebook. Facebook is a graph in which each node is a person and each connection implies friendship on FB. You have many connections with your circle of friends, and others have their own "cluster of friends" too. If we cluster people on the FB graph, we can check which people in the same cluster are not yet friends and suggest them to each other, right?! Based on the brilliant work of Miroslav Fiedler, "Algebraic Connectivity of Graphs", this graph clustering can be done by finding the edges whose removal partitions the data in the way I mentioned above. Spectral clustering is mainly doing that. Now the question is "How can I benefit from such a graph representation in a clustering problem that is hard for k-means?" Look at the image below. It is a famous example of non-linearly separable data that k-means easily fails to cluster (because the red points lie between any two blue points on opposite sides of the blue circle, k-means gets confused here!). Now we convert our data into a graph using the rule that each data point is a node and every node is connected to its k nearest neighbours. What will that graph look like?! It will densely connect the nodes of the blue points to each other, and the nodes of the red points to each other, with probably very few connections between any blue and red node, as they are rarely in each other's neighbourhood. What we did was "convert non-linearly separable but connected circles in the original space into connected bunches of nodes in the graph representation".
Now we just need to turn the graph representation into numerical vectors and, ta-daa!, those strange clusters become two beautifully separable bunches of points, where k-means does not get confused anymore! DISCLAIMER: There are many simplifications and some loose use of terms in my answer. I tried to be intuitive rather than accurate (e.g. our example data will actually result in two disconnected subgraphs, which need a numerical solution to be clustered using spectral clustering, but that does not affect the concept anyway).
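A minimal sketch of the two-circles example with scikit-learn (synthetic data via make_circles; the number of neighbours and the plotting are just illustrative choices):

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_circles

# Two concentric circles: connected rings that plain k-means cannot separate.
X, _ = make_circles(n_samples=500, factor=0.5, noise=0.05, random_state=0)

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sc_labels = SpectralClustering(n_clusters=2,
                               affinity="nearest_neighbors",  # the k-NN graph described above
                               n_neighbors=10,
                               random_state=0).fit_predict(X)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].scatter(X[:, 0], X[:, 1], c=km_labels); axes[0].set_title("k-means")
axes[1].scatter(X[:, 0], X[:, 1], c=sc_labels); axes[1].set_title("spectral clustering")
plt.show()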
H: ML Modeling approach for Event data I have these two datasets (image below). The one on the left shows events and the one on the right is the alarm data. Goal: Using the two datasets, after any number of events, an alarm can be triggered. I'd like to predict when the next alarm will be raised. Approach: I am a bit confused about the approach though. This is like time-series data. Is using an RNN the best approach, or are there other approaches? Thanks AI: This is sort of a hybrid task. It is time series data, but the outcome is categorical (binary, in fact: either the alarm is triggered (1) or the alarm is not triggered (0)). What you need here is a combination of a classifier and something that takes in sequential data. As you said, I think an LSTM RNN fits the bill. If you use tensorflow and you need some initial model to start playing around with, you can just use a tensorflow.keras sequential model with the following, in order: an LSTM layer; dropout or dense layers of your choice (don't do anything too polarizing, maybe a modest dropout depending on how big your data is); and an output dense layer with an activation function of either sigmoid (number of units = 1) or softmax (number of units = 2). When you compile the model, use the following loss functions: if you chose sigmoid, use binary crossentropy; if you chose softmax, use categorical crossentropy (if your labels are one-hot encoded) or sparse categorical crossentropy (if your labels are just integers). If you use PyTorch instead, you can find a direct analogue to what I described. Make sure to label your data at every timestep (whether the alarm rang or not), check whether the dataset is imbalanced, and judge your model thoroughly using precision, recall and F1 score. Accuracy is misleading for imbalanced data. I hope this gives you a good direction to start the analysis.
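A minimal sketch of the sigmoid variant of that architecture (the window length, feature count and layer sizes below are placeholders I chose, not values from the question):

from tensorflow.keras import Sequential, layers

TIMESTEPS = 50    # length of the event window fed to the model (assumption)
N_FEATURES = 8    # number of features per timestep (assumption)

model = Sequential([
    layers.LSTM(64, input_shape=(TIMESTEPS, N_FEATURES)),
    layers.Dropout(0.2),                    # a modest dropout, as suggested
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 unit + sigmoid -> binary alarm/no-alarm
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",   # matches the sigmoid output
              metrics=["accuracy"])
model.summary()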
H: Is normalization needed for TargetEncoded Variables? Basically the title. If I encode the address of people (the cities they live in) with a target encoder, do I still need to normalize that column? Of course, the capital is going to have more citizens, and bigger cities too, so it looks kind of like an exponential distribution. In such a case, is normalization still needed (via a log transform for example), or are target-encoded variables enough? Why? Thank you! AI: (I can't leave comments yet so I'll have to write an answer instead.) It depends on what kind of model you're using. More specifically: if you're using something like logistic regression, or neural networks with non-linear activations (ReLU, tanh, sigmoid, softmax especially), you absolutely need some type of normalization/standardization on all the features you decide to incorporate, because the loss functions and their gradients behave badly when features are on very different scales. Another class of models for which you need to normalize/standardize the data is clustering models, because those rely on the notion of a metric for the partitioning criterion; if you are incorporating your categorical variables into a clustering algorithm you had better normalize them. On the flip side, if you're using something like random forests or decision trees (this also applies to gradient-boosted analogues), it's not necessary to normalize your data, because the partitioning criterion in those models is almost always independent of the scale of the data. So the answer is: it depends on the model, and on how the model partitions the inputs to achieve the output (using a loss function, discrete conditions to send down branches of a tree, etc.).
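As a sketch for the linear-model case, using the third-party category_encoders package (my choice; recent scikit-learn versions also ship their own TargetEncoder), the target-encoded column is simply scaled like any other numeric feature. With a tree-based model you could drop the StandardScaler step.

import pandas as pd
from category_encoders import TargetEncoder        # pip install category_encoders
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data: "city" stands in for the address column, y is a binary target.
X = pd.DataFrame({"city": ["Paris", "Lyon", "Paris", "Nice", "Lyon", "Paris"]})
y = pd.Series([1, 0, 1, 0, 0, 1])

pipe = make_pipeline(TargetEncoder(cols=["city"]),  # city -> smoothed mean of y per city
                     StandardScaler(),              # scale the encoded column
                     LogisticRegression())
pipe.fit(X, y)
print(pipe.predict(X))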
H: Cannot achieve good result while Transfer Learning CIFAR-10 on ResNet50 - Keras I'm trying to transfer-learn ResNet50 for image classification on the CIFAR-10 dataset. It's stated in the original paper, and also in the ResNet50 documentation on keras.io, that the ResNet should have a minimum input shape of 32x32. But I cannot achieve any good results. Here I have created and compiled the sequential model:

model = Sequential()
model.add(ResNet50(include_top=False, weights='imagenet', input_shape=(32,32,3)))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))    #Dense Layer
model.add(Dropout(0.5))                     #Dropout
model.add(Dense(10, activation='softmax'))  #Output Layer
model.layers[0].trainable = False  #Set ResNet as NOT trainable
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

and here I have loaded the CIFAR-10 dataset:

(x_train, y_train), (x_test, y_test) = cifar10.load_data()  #Load dataset
num_classes = len(np.unique(y_test))
(x_train, x_valid) = x_train[5000:], x_train[:5000]  #set 5000 validation data
(y_train, y_valid) = y_train[5000:], y_train[:5000]
y_train = to_categorical(y_train, num_classes)  #Convert to one-hot
y_test = to_categorical(y_test, num_classes)
y_valid = to_categorical(y_valid, num_classes)

Note that the keras.io documentation for resnet50.preprocess_input states that the input data should be in the 0-255 range. So to preprocess the data I used:

x_train = preprocess_input(x_train.copy())  #preprocess training images for the resnet50
x_test = preprocess_input(x_test.copy())    #preprocess test images for the resnet50
x_valid = preprocess_input(x_valid.copy())  #preprocess validation images for the resnet50

And here's the fitting section:

ModelHistory = model.fit(x_train, y_train, batch_size=32, epochs=10, verbose=0,
                         validation_data=(x_valid, y_valid), callbacks=[earlystop])  #Use Early Stopping

But the result I get is terrible even after ~20 epochs (~67% testing accuracy). What am I doing wrong? AI: I was able to achieve better results by upsampling the input images (using UpSampling2D(size=(6,6)) before the ResNet). An accuracy of ~82% was achieved. For better results, I think one could upsample more, or add another Dense layer at the output.
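A sketch of that fix: 6x upsampling turns the 32x32 CIFAR images into 192x192 before they reach the frozen ResNet. The layer sizes simply mirror the model in the question; treat it as one possible arrangement rather than the exact model used for the ~82% result.

from tensorflow.keras import Sequential
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import (BatchNormalization, Dense, Dropout,
                                     Flatten, UpSampling2D)

model = Sequential([
    UpSampling2D(size=(6, 6), input_shape=(32, 32, 3)),  # 32x32 -> 192x192
    ResNet50(include_top=False, weights='imagenet', input_shape=(192, 192, 3)),
    Flatten(),
    BatchNormalization(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])
model.layers[1].trainable = False  # freeze the pretrained ResNet backbone
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])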