H: Which of random forest or extra trees is better on an unbalanced dataset? I have an unbalanced dataset with 3 classes: 60% class 1, 38% class 2, and 2% class 3. I don't want to generate more examples of class 3, and I cannot collect more. The problem is that I need to choose between RandomForest and ExtraTrees (this is homework) and explain why I chose one of them. I chose the Random Forest classifier, but I am not sure if my assumptions are right. My reasoning is that the splits in extra trees are random, so the chance of a split that isolates examples of class 3 is low, and (this is the real question) that because random forest has higher variance than extra trees, the higher variance might help with the imbalanced dataset. Are these two assumptions, especially the last one, correct? Did I choose random forest over extra trees correctly? Thanks AI: Both Random Forest and Extra Trees randomly sample the features at each split point, but Random Forest is greedy: it searches for the optimal split point at each node, whereas Extra Trees selects the split point randomly. I would choose Random Forest because it is more likely to create a split point that accounts for the imbalanced class, whereas Extra Trees might keep splitting over and over on a subset of the data without ever separating out class 3, due to the random split point.
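For illustration, a minimal scikit-learn sketch (not part of the original answer; the synthetic 60/38/2 class split and macro-F1 scoring are assumptions chosen to mirror the question) comparing the two ensembles empirically:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data roughly matching the question's 60/38/2 class proportions
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.60, 0.38, 0.02], random_state=0)

for clf in (RandomForestClassifier(random_state=0), ExtraTreesClassifier(random_state=0)):
    # Macro-averaged F1 weights the rare class 3 as heavily as the majority classes
    scores = cross_val_score(clf, X, y, scoring="f1_macro", cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))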
H: Choosing attributes for k-means clustering K-means clustering tries to minimize the within-cluster scatter and maximize the distances between clusters. It does so on all attributes. I am learning about this method on several datasets. To illustrate, in one of the datasets countries are compared based on attributes related to their Human Development Index. However, some of the attributes are completely unrelated to this dimension, for example the total population of the countries. How should I deal with these attributes? As mentioned before, k-means tries to minimize the scatter based on all attributes, which means these additional attributes could hurt the clusters. To illustrate, I know that k-means cannot discern three clusters that are perfectly clustered along one dimension and completely scattered along another. Should one just exclude some attributes based on prior knowledge? Is there perhaps a process that identifies irrelevant attributes? AI: First of all, if you know that certain attributes shouldn't alter the clusters, you should remove them altogether. There is no point in hoping that K-Means will figure it out on its own if that can be fixed upstream. Second, obviously, not every attribute should affect the clusters equally. K-Means is based on the concept of distances between your points. Based on the distance matrix, the algorithm will find different clusters. The good thing is that you can tweak how the distance is calculated. You could weight the different attributes such that differences in certain attributes matter more than differences in others. Third, if you want to programmatically find the "best" attributes for clustering, I don't know of any efficient way to do it, meaning that your best bet is to try different combinations of attributes and see how good the clustering becomes. To rate the quality of a clustering, there exist metrics like the Dunn Index or the Davies-Bouldin Index (see this link for more detailed information: https://medium.com/@ODSC/assessment-metrics-for-clustering-algorithms-4a902e00d92d)
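As a rough sketch of the third point (assuming scikit-learn; the iris data is only a stand-in for the HDI attributes), candidate attribute subsets can be scored with the Davies-Bouldin index, where lower is better:

from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)  # stand-in for the country data

best = None
for k in (2, 3, 4):                                   # try subsets of 2+ attributes
    for cols in combinations(range(X.shape[1]), k):
        sub = X[:, cols]
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sub)
        score = davies_bouldin_score(sub, labels)
        if best is None or score < best[0]:
            best = (score, cols)

print("Best Davies-Bouldin score %.3f with attributes %s" % best)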
H: Why do we use a 2D kernel for RGB data? I have recently started learning CNNs and I couldn't understand why we use a 2D kernel of shape (3x3) for RGB data instead of a 3D kernel of shape (3x3x3). Are we sharing the same kernel among all the channels because the data would look the same in all the channels? AI: Welcome to the community. I think there is some confusion in your understanding of the kernel used for RGB data. We normally use a kernel with the same number of channels as the incoming input (in this case, as you mentioned, it's RGB, so the number of channels for the convolution operation would be 3). So instead of using a 3 x 3 kernel, we use a 3 x 3 x 3 kernel, and the element-wise multiplication of kernel weights and image pixels happens channel-wise. Having said this, you can use a kernel of size 3 x 3 on an RGB image by specifying a stride of 1 in the third dimension. This convolves the kernel not only horizontally and vertically but also through the depth, i.e. through the channels; I don't know exactly why one would want to do that here. Apart from this, I suspect the course or video you are referring to specified '2D convolution on a 3D image'. That doesn't mean using a 2D kernel: a 2D convolution on a 3D image uses a 3D kernel, and after the weighted multiplication and summation you get a 2D feature map, which justifies the name 2D convolution.
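To make the shapes concrete, a small Keras sketch (an illustration, not from the original answer) showing that a Conv2D layer with a 3x3 kernel on an RGB input actually owns 3x3x3 weights per filter:

import tensorflow as tf

# One Conv2D filter with a 3x3 spatial kernel applied to a 32x32 RGB image
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=1, kernel_size=(3, 3), input_shape=(32, 32, 3)),
])

kernel, bias = model.layers[0].get_weights()
print(kernel.shape)        # (3, 3, 3, 1): height x width x input channels x filters
print(model.output_shape)  # (None, 30, 30, 1): a single 2D feature map per filter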
H: K-means and LDA for text classification I hope to explain clearly what I would like to do. I have more than 50000 tweets and I would like to add some labels on topics, so I have used LDA for this. I have also used k-means to group them and try to predict the cluster (but not the topic). I would like to know if it is possible to associate the topics found by LDA with the clusters, or if the two approaches are worthless when used simultaneously. AI: Try to look at it from a broader perspective: what you're essentially doing is building a sort of multi-class classification model based on your clusters and the topics LDA assigned. It's like wanting a clustering model and a classification model to do the same thing, which would ultimately depend upon the dimensional space the data points are in and their respective vectors. I assume you are using some sort of tweet-to-vector conversion method to represent a tweet, like bag of words, GloVe, doc2vec or something similar. Making clusters in this vector space and hoping it produces the same segregation as LDA does when it assigns topics is quite far-fetched. I don't think your vector space will be so well aligned with the topics LDA produces that your clusters will reflect the same grouping. Also, clustering puts data points in the same cluster based on certain attributes, but determining why a cluster was formed from those attributes is very difficult when the data has many dimensions.
H: In model validation (logistic regression), can the AUC for the test sample be higher than that for the train sample? I have a relatively simple model (with 8'000 cases, 5 predictor variables) predicting a dichotomised outcome. It has an AUC of 0.82 (95% CI 0.82-0.83). Prediction in temporal (N=240) and external validation (N=170) datasets had AUCs of 0.85 (95% CI 0.81-0.88) and 0.76 (95% CI 0.68-0.84). A reviewer told me it is impossible (can't happen, theoretically) for the temporal validation dataset to have a higher AUC. Is this true? AI: This is just not true. If your future data happens to be easier to classify, then a temporal split can give a higher AUC. It often happens that future data is harder to classify because of shifts in the distributions of the variables, but if the distributions are stable I don't see why the validation AUC could not be higher.
H: Is this attribute numeric or categorical (ordinal)? Help! So I have this dataset I need to perform several techniques on as part of a data mining/machine learning project of some sort in Python. There are a couple of features, however, that have me very worried, as I don't know whether I should handle them as categorical (ordinal, to be precise) or as numeric. The first feature takes integer values from 0 to 9 and is supposed to represent the population percentage of a certain religion (so 0 is 0%, 1 is 1-10%, 9 is 100%). The second feature represents contribution (money) and also takes values from 0-9 depending on how big the contribution is (so 0 is... well, 0, 1 is 1-49, 9 is 20000+). How should I handle these features? Are they interval data? Should I one-hot-encode them into dummies or should I leave them as they are now? I am going to have to perform some clustering and classification techniques on the dataset. Thank you in advance! AI: They are not categorical, as they have a meaningful ordering that you likely want to use. The first is usable as-is, since it is roughly the religious fraction times 10. Yes, it is ordinal, but it happens to be just a rescaled continuous feature. The second is ordinal, and you don't quite want to use it as a continuous feature, because the raw codes don't capture the differences in scale that the ranges indicate. It might work OK, but you could also translate each code to the midpoint of the corresponding range.
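A tiny pandas sketch of that last suggestion (the ranges and midpoints here are hypothetical, since the question only spells out codes 0, 1 and 9):

import pandas as pd

df = pd.DataFrame({"contribution_code": [0, 1, 9, 1, 0]})

# Hypothetical midpoints of the coded ranges, e.g. code 1 = 1-49 -> midpoint 25
midpoints = {0: 0, 1: 25, 9: 25000}
df["contribution_midpoint"] = df["contribution_code"].map(midpoints)
print(df)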
H: Multidimensional K-Means wiith sklearn, centroids problem when plotting I am working with a dataset (X) to predict 12 clusters with K-Means using python SKLEARN library: numClusters= 12 kmeans = KMeans(n_clusters=numClusters).fit(X) centroids = kmeans.cluster_centers_ # Predicting the clusters labels = kmeans.predict(X) # Getting the cluster centers C = kmeans.cluster_centers_ #transform n variiables to 2 principal components to plot pca = PCA(n_components=2) principalComponents = pca.fit_transform(X) principalDf = pd.DataFrame(data = principalComponents , columns = ['principal component 1', 'principal component 2']) colors =['red','green','blue','cyan','yellow', 'lime','orange','coral','brown','peru','khaki','tan'] centroidColor= [] for item in range(numClusters): centroidColor.append(colors[item]) dataPointColor=[] for row in labels: dataPointColor.append(colors[row]) fig = plt.figure(figsize = (10,10)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) plt.scatter(principalDf['principal component 1'], principalDf['principal component 2'], c=dataPointColor, s=50, alpha=0.5) plt.scatter(C[:, 0], C[:, 1], c=centroidColor, s=200, marker=('x')) plt.show() My problem comes when trying to plot the n dimensions of my dataset into 2 PCA. The clustering seems fine, but the centroids (marked with a star icon) are not correct: The centroids should not always be in the same near spot. I know that I did not standardize the dataset (X) because all values are in a range [1:5]: [[4 5 4 ... 4 4 4] [4 4 1 ... 4 3 5] [3 4 3 ... 1 1 4] ... [2 5 1 ... 2 1 4] [5 5 1 ... 5 5 1] [2 5 1 ... 1 1 5]] Could the problem come from that? Thanks for your help. AI: In your plot you used PCA to reduce the dimensionality of your data, but you plotted the first 2 dimensions of your centroids. You should also transform the centroids using the PCA transform you fitted on your data. This code should work for you numClusters= 12 kmeans = KMeans(n_clusters=numClusters).fit(X) centroids = kmeans.cluster_centers_ # Predicting the clusters labels = kmeans.predict(X) # Getting the cluster centers C = kmeans.cluster_centers_ #transform n variiables to 2 principal components to plot pca = PCA(n_components=2) pca_fit = pca.fit(X) principalComponents = pca_fit.transform(X) principalDf = pd.DataFrame(data = principalComponents , columns = ['principal component 1', 'principal component 2']) colors =['red','green','blue','cyan','yellow', 'lime','orange','coral','brown','peru','khaki','tan'] centroidColor= [] for item in range(numClusters): centroidColor.append(colors[item]) dataPointColor=[] for row in labels: dataPointColor.append(colors[row]) fig = plt.figure(figsize = (10,10)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) plt.scatter(principalDf['principal component 1'], principalDf['principal component 2'], c=dataPointColor, s=50, alpha=0.5) C_transformed = pca_fit.transform(C) plt.scatter(C_transformed[:, 0], C_transformed[:, 1], c=centroidColor, s=200, marker=('x')) plt.show()
H: k-means and LDA for text classification: how to test accuracy? I have many tweets that I would like to classify based on their similarity. Unfortunately I am not very familiar with text classification and NLP, so I had to read a lot of documents before getting an idea of the topic. My tweets have no labels, so I cannot classify them except manually, which would be time consuming. I would like to group them by topic, so I first considered LDA for topic classification, then k-means clustering. Is this a good approach? What are the differences, and how can I test the accuracy of the classification? AI: First of all, you use the two terms Clustering and Classification interchangeably, and I would like to draw your attention to this: your problem is purely clustering. Secondly, you asked about testing accuracy. Since your problem is pure clustering, there are no labels against which to compute accuracy. Last but not least is the problem of "Short Text Understanding". On short texts, LDA or TF-IDF-based approaches (like LSA) do not work well, as they rely on co-occurrences of words within texts. Considering these facts, let's discuss the solution. I would recommend that you use a pre-trained model (I recommend S-BERT, which is implemented in the Sentence-Transformers Python package). Simply follow its semantic similarity search example and you can implement it in a few lines. Fine-tuning the model will probably be tricky given the nature of your task, so just use a pre-trained model and see how it works. For evaluation, I recommend collecting some similar tweets manually (the more the better) and checking the performance on them. For the topic modeling approach you can use the python implementation of the bi-term topic model, which is designed for short texts. The two approaches can also be combined creatively (for example, search for similarity with S-BERT and compare with a bi-term topic model). Hope it helped. Good luck!
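A minimal sketch of the S-BERT similarity idea (illustrative; the model name and example tweets are placeholders, and the sentence-transformers package must be installed):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small pre-trained S-BERT model

tweets = ["I love this phone", "Best smartphone ever", "The weather is awful today"]
embeddings = model.encode(tweets, convert_to_tensor=True)

# Pairwise cosine similarities; semantically similar tweets score close to 1
scores = util.cos_sim(embeddings, embeddings)
print(scores)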
H: Why is PyTorch faster than sklearn models? Recently I got to know about the hummingbird library for Python. I trained a RandomForest on a 10M-row dataset with 2 labels. With sklearn, inference was taking 450 ms. After converting the same model to PyTorch, it now takes 128 ms on CPU inference. If both are running on the CPU, why is hummingbird's PyTorch model faster than the sklearn model? I don't understand what hummingbird does to my sklearn model to increase speed. AI: It is difficult to answer your question without access to your code. The best way to understand the difference is to profile the code and see where the bottlenecks are for your specific problem. For this, you can use different profiling modules in Python: cProfile, or the line_profiler package.
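For example, a quick cProfile run over a hypothetical predict call (the model and X names are placeholders for your own objects):

import cProfile
import pstats

# Profile the inference call and show the 10 most expensive functions
cProfile.run("model.predict(X)", "predict.prof")
pstats.Stats("predict.prof").sort_stats("cumulative").print_stats(10)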
H: Best Approach to Forecasting Numerical Value Based on time series and categorical data? Consider a dataset of thousands of car repairs that have been performed. In simplest of terms, the columns to consider are the time of year when it was broken (seasonal changes in demand for car repairs), type of damage to car (some damages take longer to repair than others), and the type of car (some cars are more difficult to work on). I am inquiring as to the best fit for trying to model data of this format where you are predicting the repair time based off of timeseries and categorical data as the inputs. Keep in mind that the data does not have a constant period. Example Column Names: datetime | Type of Damage to Car | Type of Car | Repair Time Any suggestions? AI: So the question is about to model the next repair date given the previous repairs. If you have customer-specific data, where you have logged customer repairs to a specific customer, then it would be good idea to do time series, provided you have enough instances of customer repeatedly coming back. If you have this scenario, then you can use something like an RNN (Recurrent Neural Network [https://towardsdatascience.com/understanding-rnn-and-lstm-f7cdf6dfc14e]) or LSTM, whereby you feed in the date time, type of damage at every time step to then get a single output date time. If you have not logged a customer ID to the repairs to identify which repairs have been carried out for particular customers, then you could easily use a standard neural network for this. Here you would train the model on all input data with the goal of getting a date time which equates to when the next repair is likely to happen. In terms of data representation, you could represent the following features as the following: Datetime: here, you could this as the day in the year, for example, Jan 1 could be represented as 1 and Dec 31 as 365. This can then be normalised (i.e. divided by 365) to reduce the scale of this feature. Categorical variables like type of damage, etc: normally we represent these as one-hot encoded vectors (https://machinelearningmastery.com/why-one-hot-encode-data-in-machine-learning/). To make the input for a given entry, we then concatenate these features into one massive vector. Good luck!
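A small sketch of that feature construction (hypothetical column names, using pandas and scikit-learn): the day of year is scaled to [0, 1] and the categorical columns are one-hot encoded, then everything is concatenated into one feature matrix.

import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "datetime": pd.to_datetime(["2020-01-01", "2020-06-15", "2020-12-31"]),
    "damage_type": ["engine", "brakes", "engine"],
    "car_type": ["sedan", "truck", "sedan"],
})

# Day of year, scaled to [0, 1]
day_of_year = (df["datetime"].dt.dayofyear / 365.0).to_numpy().reshape(-1, 1)

# One-hot encode the categorical columns
cats = OneHotEncoder().fit_transform(df[["damage_type", "car_type"]]).toarray()

# Concatenate into one feature matrix for the network
features = np.hstack([day_of_year, cats])
print(features.shape)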
H: Association Rules with Python (coded dataset) I have this dataset which I really need to use association rules techniques on. The dataset has like 90 variables, many of which are ordinal. Thing is, the data is already coded using numbers instead of strings (e.g. bread = 4 instead of "bread") as well as some re-scaled numerics such as 1 = 1%-10%". What I have so far: from apyori import apriori #Convert dataframe to list val_list= [] for row in range(1,5530): val_list.append([str(data.values[row,column]) for column in range (0,90)]) print('Row ', row, ' ok') apr = apriori(val_list,min_support=0.1,min_confidence=0.2,min_lift=2) result = list(apr) Still, this way I don't get the feature names in the frequent "baskets" so it's not much use, since I have baskets like [33, 1, 8, 8, 1, 1] with no idea what the numbers might be referring to. What can I do and/or how do I prepare the data for association rule mining? AI: Create a dictionary that contains the coded variables as keys and the item names as values. So it would look like: dicty = {4: "bread", 7: "milk", 9: "toothpaste"} Constructing dictionaries in python is really easy if you have them in a table or excel spreadsheet. dicty = {i:j for i,j in zip(coded_list,normal_list)} where coded_list is the list of variables in numbers, and normal_list is the list of variables in their categorical names. Once you have a dictionary you can simply convert them like this: name = dicty[9] and it should return toothpaste for name.
H: Why don't we transpose $\delta^{l+1}$ in back propagation? Using this neural network as an example: The weight matrices are then $$ W_0=[2\times4], W_1=[4\times4], W_2=[4\times2]$$ To find the error for the last layer, we use $$ \delta^{[2]} = \nabla C \odot \sigma'(z^{[2]})$$ which makes sense. This will produce a $[1\times 2]$ vector. But to find the error in the next layer, we use $$ \delta^{[1]} = (W_2^T\delta^{[2]})\odot \sigma'(z^{[1]}) $$ This appears to try to multiply a $[4\times2]$ matrix and a $[1\times 2]$ matrix together, which is illegal. Am I just wrong about how the layers are represented? Should $z^{[n]}$ really be a $[l\times 1]$ vector? That doesn't really make sense to me, because it would be multiplied by an $[l\times m]$ matrix as the feed-forward continues. Do we just always represent $\delta^{[n]}$ as a $[l\times 1]$ vector, and the formula doesn't mention this as it's common knowledge? What am I missing here? (Note: these formulae are based on this book) AI: You have some dimensions wrong here. The convention for the weight matrix of layer $l$ is $\dim(W^{[l]}) = d^{[l]} \times d^{[l-1]}$, so $W_0 = [4 \times 2]$ and $W_2 = [2 \times 4]$. Since $\dim(z^{[2]}) = [2 \times 1]$, $\delta^{[2]}$ is also $[2 \times 1]$. Therefore $W_2^{T}\delta^{[2]}$ has dimension $[4 \times 2] \cdot [2 \times 1] = [4 \times 1]$, which matches the dimension of $\sigma'(z^{[1]})$.
H: Text Classification : Classifying N classes vs rest of the classes Apologies if this is naive, I am fairly new to the domain. I have a requirement where I am trying to classify 2 types of text data, i.e, I have got 2 classes to classify my data upon. I am able to get acceptable results for them using word vectors, dimensionality reduction and then putting the data through to LinearSVC for classification. But my model is biased towards sentences with longer lengths. I know tfidf can help in this, but is there a way to apply that with word vectors, I dont want to lose its ability to predict on unseen similar data? A follow up problem to this is, when there is data from some other class apart from the 2 classes that I have trained my data upon. I would like my model to be able to predict that the data doesn't belong to any of the 2 classes. Currently, it just predicts 1 of the 2 classes randomly. I was thinking if I should first put my data through topic modelling, which would give an idea on what topic the text data is. Based on keywords from topic modelling, we can detect if my text data belongs to one of the 2 classes or not and then put it to my classification model for final prediction. But this doesn't seems very clean and I can see it failing as there's too much dependency on keywords generated by topic modelling. Is there any other finer and better way to do this? AI: This is called an open-class text classification problem, it's used in particular for some author identification problems. I don't have any recent pointers but from a quick search I found this article: https://www.aclweb.org/anthology/N16-1061.pdf In the field of author classification there is a similar problem called author verification, which can be treated as a one-class classification problem. You could consider using it in this way: one-class classification between "known classes" vs. others regular classification between the known classes
H: Understanding output probabilites of xgboost in multiclass problems I would like to understand the output probabilities of a xgboost classifier (or any other decision tree ensemble based classifier) in the case of a multiclass problem. For example: We have 5 different classes and a trained model on some data belonging to those classes. I would expect, given some test data, the model would give output probabilities of up to 100% when it is really sure the data belongs to some class. Obviously the probabilites still have to add up to 1. On the contrary I was told that in the case of 5 different classes, the model would be already sure when outputting 20% (100/5) for the data. I also see this in a problem I am facing, the probabilities are never higher than 30%. I would like to understand this apparent fact, ideally pointing me to a source where this is explained. AI: What you were told is a worst case scenario. With 5 labels, 20.01% is the lowest possible value that a model would need to choose one class over the other. If the probability for each of the 5 classes are almost equal then the probabilities for each would be approximately 20%. In this case, the model would be having trouble deciding which class is correct. About the 30% comment, while it seems low, keep in mind that the model mainly uses the probability to classify the most likely label, calculating a highly accurate probability value is not the first priority. Here is a good reference. This paper discusses a few of the problems with probability accuracy and mentions some theories on how to improve the accuracy: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.381.4254&rep=rep1&type=pdf It mentions (see excerpts below) how a probability estimation in classification is a complex and difficult problem, and that some models can be very accurate at classification but poor at probability estimation, and how multi-class models are much more difficult than a binary classification model. Excerpt 1: "Probability estimation is known to be a quite hard problem, especially in comparison to standard classification. This comes at no surprise, noting that proper probability estimation is a sufficient but not necessary condition for proper classification: If the conditional class probabilities (1) are predicted accurately, an optimal classification can simply be made by picking the class with highest probability" Excerpt 2: "More generally, the Bayes decision can be taken so as to minimize any loss in expectation. On the other hand, a correct classification can also be obtained based on less accurate probability estimates. In fact, the classification will remain correct as long as the estimated probability is highest for the true class. Or, stated differently, an estimation error will remain ineffective unless it changes the result of the arg max operation in (2). This is also the reason for why methods like naive Bayes show competitive performance in classification despite producing relatively inaccurate probability estimates [7]." Excerpt 3: Methods like naive Bayes and decision trees are multi-class classifiers and can in principle be used to produce probability estimates in this setting. 
In practice, however, one often prefers to estimate probabilities in the two-class setting, especially because estimating a single probability (of the positive class) is much simpler than estimating K − 1 probabilities simultaneously Example with probabilities close to 1: from sklearn import datasets from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split iris = datasets.load_iris() X = iris.data y = iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0) from sklearn.tree import DecisionTreeClassifier dtree_model = DecisionTreeClassifier(max_depth = 2).fit(X_train, y_train) dtree_predictions = dtree_model.predict(X_test) print(dtree_model.predict_proba(X_test) ) [[0. 0.02564103 0.97435897] [0. 0.91666667 0.08333333] [1. 0. 0. ] [0. 0.02564103 0.97435897] ...
H: Final Model fitting - subset vs entire training data If I used a subset of the entire available training data for model tuning and hyperparameter selection, should I fit the final model to the subset training dataset or to the entire available training data? For example, if I have 1M samples available and I took 100K random samples as a test holdout and 200K random samples as a training dataset for model tuning, should the tuned hyperparameters be used to fit the final model on 1) the 200K training dataset, or 2) the 900K of available data (excluding the test holdout)? In other words, can the hyperparameters be generalized to the entire population? I am assuming that both the holdout and training datasets are selected randomly and follow the class distribution of the original data. AI: The general machine learning process is this: Split your data into two parts, training and test. So in your example I would take 100k for test and 900k for training (I don't know why you say to take only 200k in your question, but I digress). With the 900k training set we perform hyper-parameter tuning. This can be done by splitting training into training and validation, say 800k/100k, or better yet by k-fold cross validation. Once you have chosen the optimal hyper-parameters in this manner you evaluate their performance on the test set. The whole point of this process is simply to evaluate the algorithm's performance, and from that select an algorithm. That is the only reason for a train/validation/test split. (As a note, this process can be further improved by using something called nested cross-validation, but I will not go into the details.) After you have selected your algorithm and determined its performance (error rate), you take your whole data set (1 million records) and perform hyper-parameter selection on that, either using a single split or k-fold cross-validation. You no longer need the test set because you have already determined the model's error rate in the previous step. Once you have selected the best hyper-parameters in this step, you apply them to the entire data set (the 1 million records) and build the model.
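A condensed scikit-learn sketch of the split/tune/evaluate steps described above (illustrative only; the estimator, data and parameter grid are arbitrary placeholders):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, random_state=0)

# Step 1: hold out a test set (here 10%)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# Step 2: hyper-parameter tuning with k-fold cross-validation on the training set
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"max_depth": [5, 10, None]}, cv=5)
grid.fit(X_train, y_train)

# Step 3: estimate generalisation performance on the untouched test set
print(grid.best_params_, grid.score(X_test, y_test))

# Step 4 (not shown): re-run the tuning on ALL the data and refit with the chosen settings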
H: Linear discriminant analysis in R: how to choose the most suitable model? The data set vaso in the robustbase library summarizes the vasoconstriction (or not) of subjects’ fingers along with their breathing volumes and rates. > head(vaso) Volume Rate Y 1 3.70 0.825 1 2 3.50 1.090 1 3 1.25 2.500 1 4 0.75 1.500 1 5 0.80 3.200 1 6 0.70 3.500 1 I want to perform a linear discriminant analysis in R to see how well these distinguish between the two groups. And I consider two cases: ld <- lda(Y ~ ., data=vaso) ld1 <- lda(Y ~ log(Volume)+log(Rate), data=vaso) Please help me understand which model is better? What characteristics to look at? AI: I'm not familiar with LDA, but as far as I know you're not really changing the "model" (i.e. the way to measure impact) between the two versions, what you're changing is the features: in the 2nd version, instead of looking at whether the value of the feature impacts Y, you look at whether the log of the value of the feature impacts Y. The first version is of course the most natural way to look at features, the second is common but usually this is used when we already know that the distribution of the feature (or the relation between the feature and the response variable) makes it relevant.
H: Hive / Impala best practice code structuring Coming from a DWH-background I am used to putting subqueries almost everywhere in my queries. On a Hadoop project (with Hive version 1.1.0 on Cloudera), I noticed we can forego subqueries in some cases. It made me wonder if there are similar SQL-dialect specific differences between what is used in Hadoop SQL and what you would use in a DWH-setting. So I would like to extend this question so that people can mention what they noticed as differences between Hadoop and DWH in when structuring their queries. I noticed there was very little reference to this topic for Hadoop. There are some normal principles that apply to Hadoop like mentioned here: https://streever.atlassian.net/wiki/spaces/HADOOP/pages/3211279/Hive+SQL+Best+Practices It would be nice to get a few of your best practices for working with Hadoop. E.g. You write your queries as neutral as possible so that it works in both Hive and Impala avoiding using language-specific functions such as left (Impala only) The example I came across was that a group by-query worked in two different ways. In classic RDBMS I know that only one of both would work. Here is an example (columns are displayed in different order then in group by): Classic SQL (Oracle, SQL Server, etc) select t2.b, t2.a from (select t1.a,t1.b from table1 as t1 group by t1.a, t1.b) t2 Hive SQL select t1.b, t1.a from table1 as t1 group by t1.a, t1.b Notice that everything is combined in 1 single query Hadoop. The classic SQL snippet first does the group by in a subquery before the data is displayed in the right order. I would be curious to know if you came across other subtleties like this one. I know, based on my experience, that it is good practice to respect the following guidelines while coding: Keep your code clean (hence avoid sub-queries if you can do without) Consider what impact changes to settings might have. E.g. yarn.nodemanager.resource.memory-mb=24576 is great to use, but what happens if you are not allowed to change the node memory size. In an automated job it is not necessarily good practice. Look at the general Hadoop settings, e.g. file size. Avoid using functions that are specific to a program. E.g. select left("Hello world", 3) is useful in Impala but in Hive it has to be rewritten to select left("Hello world", 1,4). This can lead to problems when later on down the line it will run in a different program. AI: The biggest help to me was learning about how MapReduce works on a distributed environment. I don't have exact rules of thumb for you, but I might be able to help. In Hadoop/Hive, the hive.execution.engine definitely seems to matter. Make sure understand the differences and try changing it for certain problems. Subqueries can actually HELP performance, especially with really wide tables, MapReduce as your execution engine, and Hive. For example, if you only want 2 or 3 columns from a 2nd table, its better to: SELECT T1.*, T2.col1, T2.col2 FROM T1 LEFT OUTER JOIN (SELECT key, col1, col2 FROM T2) T2 ON T1.key = T2.key; This is all due to how the execution engine works under-the-covers, so learning about that and the differences of the execution engines will really help. Temp tables are your friend - prior to hadoop, I avoided staging data in temporary tables a lot and wrote more complex SQL to avoid it. But with Hadoop and big data, I've actually found it much faster to create tables and then subsequently joining to them helps in some cases with massive data. 
Learn how the table's underlying data is stored. STORED AS TEXTFILE is a lot different than data stored as parquet Impala vs Hive also has performance implications. I've found to find Hive more stable, but slower. How the data is stored (skewed or not, bucketed or not, etc) makes a huge different. Keep an eye out for queries where you have 100 reducers and 99 of them finish super fast and 1 of them takes forever. Like I mentioned, spend time learning how the execution engines do the work in a distributed environment. Once you understand how the execution happens behind the scenes, you can start to pick up on other subtleties like why this: select count(distinct user) from table; can be a low slower than this: select count(*) from (select user from table group by user) q;
H: Rolling average: when is it possible to consider it? I would like to know if I can use a rolling average to predict the future trend of sales. I collected data, day by day, from January 2020 to March 2020 on sales in a shop and I would like to run some analysis. I was considering a rolling average over multiple periods (5-10 days). Do you think it might make sense? AI: You can! But the implications can be huge. If you choose a big window, you automatically shorten the dataset on which you can train your time-series or forecast model. Also, if things start to change recently, your model might take too long to react if it weights those rolling averages heavily. If you go too short, it might overreact to recent changes, or you may miss information your model could learn from. I usually make that decision based on how I want to use the forecast on unseen data. This is not an easy decision, and there is no solid rule of thumb for it. Welcome to the real-world complexities of a time-series problem!
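For reference, a small pandas sketch of 5-day and 10-day rolling averages (the synthetic daily sales series is only a stand-in for your shop data):

import numpy as np
import pandas as pd

dates = pd.date_range("2020-01-01", "2020-03-31", freq="D")
sales = pd.Series(np.random.poisson(100, len(dates)), index=dates, name="daily_sales")

rolling_5 = sales.rolling(window=5).mean()    # first 4 values are NaN
rolling_10 = sales.rolling(window=10).mean()  # first 9 values are NaN
print(pd.concat([sales, rolling_5.rename("ma5"), rolling_10.rename("ma10")], axis=1).tail())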
H: Loaded model predicts well in colab but gives same label and accuracy when downloaded I have developed a Recurrent Neural Network to perform sentiment analysis on tweets using the Kazanova/sentiment140 dataset in Kaggle. The model looks like this: def scheduler(epoch): if epoch < 10: return 0.001 else: return 0.001 * tf.math.exp(0.1 * (10 - epoch)) callback1 = tf.keras.callbacks.LearningRateScheduler(scheduler) callback2 = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',patience=10, verbose=0, mode='auto',min_delta=0.0001, cooldown=0, min_lr=0) callback3 = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0, patience=3, verbose=0, mode='auto',baseline=None, restore_best_weights=True) model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False), tf.keras.layers.Dropout(0.2), tf.keras.layers.Conv1D(64, 5, activation='relu'), tf.keras.layers.MaxPooling1D(pool_size=4), tf.keras.layers.LSTM(64), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) model.summary() num_epochs = 50 training_padded = np.array(training_sequences) training_labels = np.array(training_labels) testing_padded = np.array(test_sequences) testing_labels = np.array(test_labels) history = model.fit(training_padded, training_labels, epochs=num_epochs, validation_data=(testing_padded, testing_labels), verbose=2,callbacks=[callback1,callback2]) print("Training Complete") model.save('sentiment_final.h5') The model runs fine and predicts output perfectly when loaded from colab itself The loaded colab code: load_model= tf.keras.models.load_model('sentiment_final.h5') #load_model.summary() def decode_sentiment(score): if score < 0.5: return "NEGATIVE" else: return "POSITIVE" def predict(text): x_test = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=16) score = load_model.predict([x_test])[0] return {"label": decode_sentiment(score), "score": float(score)} predict("I love this day") #Outputs -> {'label': 'POSITIVE', 'score': 0.793081521987915} predict("I hate this day") #Outputs -> {'label': 'NEGATIVE', 'score': 0.38644927740097046} predict("I shouldn't be alive") #Outputs -> {'label': 'NEGATIVE', 'score': 0.12737956643104553} But If I load the model in VSCode , the output is the same for all the models. VSCode Implementation: import tensorflow from tensorflow.keras.models import load_model from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences import os tokenizer=Tokenizer() model = load_model('sentiment_final.h5') def decode_sentiment(score): if score<0.5: return "Negative" else: return "Positive" def predict_score(text): x_test=pad_sequences(tokenizer.texts_to_sequences([text]),maxlen=16) score=model.predict([x_test])[0] return {"label":decode_sentiment(score),"score": float(score)} def call_predict_function(text): return predict_score(text) print(call_predict_function("I love this day")) #Outputs -> {'label': 'POSITIVE', 'score': 0.793081521987915} print(call_predict_function("I hate this day")) #Outputs -> {'label': 'POSITIVE', 'score': 0.793081521987915} print(call_predict_function("I shouldn't be alive")) #Outputs -> {'label': 'POSITIVE', 'score': 0.793081521987915} Where am I going wrong? Can somebody resolve this problem? AI: As far as I am aware you also need to save and load the tokenizer you used. 
The tokenizer is not fitted/trained and therefore is outputting nothing sensible for the model to predict on.
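One way to do that (a sketch, not taken from the original answer) is to serialise the fitted Keras tokenizer to JSON in Colab and load it back in the other environment before calling texts_to_sequences:

# In Colab, after fitting the tokenizer on the training tweets:
with open("tokenizer.json", "w") as f:
    f.write(tokenizer.to_json())

# In the VS Code script, instead of creating a fresh, unfitted Tokenizer():
from tensorflow.keras.preprocessing.text import tokenizer_from_json
with open("tokenizer.json") as f:
    tokenizer = tokenizer_from_json(f.read())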
H: Linear Regression with vs without polynomial features I have a conceptual question about why (processing power/storage aside) would you ever just use a regular linear regression without adding polynomial features? It seems like adding polynomial features (without overfitting) would always produce better results. I know linear regression can fit more than just a line but that is only once you decide to add polynomial features correct? My experience is with python using sklearn's libraries. AI: Occam’s razor principle [ref]: Having two hypotheses, that has the same empirical risk (here, training error), a short explanation (here, a boundary with fewer parameters) tends to be more valid than a long explanation. Hence, complexity must add additional comparable accuracy. Otherwise, we should go with the simple model Let's say you have 50-50 chance of Linear and Non-Linear data. With your approach, you will end up using regularization in 50% of the model. First, make a complex model and then regularize to make it simple. Overall, you will increase the complexity which might impact future activities e.g Maintenance Interpretability Many businesses demand a reason for the decision e.g. Credit card related decision. Linear regression will give you a simple explanation for each feature Also, read these SE answers
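To make the trade-off concrete, here is a small cross-validation sketch (illustrative, on synthetic data that is genuinely linear) comparing plain linear regression with a degree-5 polynomial pipeline:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, (200, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1, 200)  # a genuinely linear relationship

linear = LinearRegression()
poly = make_pipeline(PolynomialFeatures(degree=5), LinearRegression())

print("linear:", cross_val_score(linear, X, y, cv=5).mean())
print("degree-5 polynomial:", cross_val_score(poly, X, y, cv=5).mean())
# On linear data the extra polynomial terms add complexity without adding accuracy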
H: The OLAP (On-Line Analytical Processing) cube with 4 dimensions A typical OLAP cube looks like this: As I can see, this cube can work with 2 or 3 dimensions, but what if I have 4 dimensions to produce facts? Should I use a star schema instead when I have more than 3 dimensions? AI: Your example is a star schema; it's just a star with three points (dimensions). It's OK to have star schemas with more dimensions. Some large OLAP schemas can have tens of dimensions. The star schema holds the underlying data. The cube is a convenient set of pre-aggregated values that makes our run-time faster. The "cube" name is a handy visual and linguistic analogy but not a fundamental part of its definition. Specifically, it is not limited to 3D. Just as in maths a hypercube can be reasoned about in many dimensions, so an OLAP cube can be defined over many star schema dimensions. It just gets harder to draw in marketing literature, is all.
H: Why are my training and testing accuracies about 99%, but single predictions are wrong? I have performed fruit classification using a CNN, but I am stuck at a point where everything seems right: the confusion matrix and accuracy score are correct and there seems to be no overfitting, yet it always classifies the wrong fruit. Why would this happen? A link to the source code is provided below. Thank you! Github source code link AI: It looks like the new data has a different distribution from the training data. The training data appears to be a single fruit on a white background, while the new image you've passed is a picture of bananas on a blue background. The model has probably learned something like "if the image is blue, then blueberries", and for this reason it classifies the blue banana picture as blueberries. Whenever the distribution of new data is different from the data you've trained on, don't expect the model to work very well, as ML models just interpolate.
H: Should the weights for CrossEntropyLoss be exactly the inverse of the propotions of training data? I have a classifier network which chooses one of three classifications, and uses cross entropy loss as the loss function. If the proportions of training data are 100:10:5 for each classification, should I automatically set the weights to 1/100,1/10,1/5 ? If not, what other issues are there to consider? AI: I'll refer to an answer for a similar topic. If you have enough data for classes 2 and 3, there's no reason to change your training scheme if you use standard metrics. The baseline should always be training without changing the weights, and if you see that the model does very bad on classes 2 and 3, you can change the training scheme. However, I have rarely seen it work better.
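For reference, a minimal PyTorch sketch (an illustration, not a recommendation of specific values) of passing per-class weights, here the inverse of the 100:10:5 proportions normalised to sum to 1:

import torch
import torch.nn as nn

# Inverse-frequency weights for class proportions 100:10:5, normalised to sum to 1
counts = torch.tensor([100.0, 10.0, 5.0])
weights = (1.0 / counts) / (1.0 / counts).sum()

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)              # batch of 8 examples, 3 classes
targets = torch.randint(0, 3, (8,))
loss = criterion(logits, targets)
print(weights, loss.item())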
H: Is rolling a biased die a random phenomenon? A random phenomenon is a situation in which we don't know what the outcome is going to be. Rolling an unbiased die is a random phenomenon, since we don't know what number is going to come up; we can only say every number has 1/6 probability. Rolling a biased die, on the other hand, we also don't know which number is going to come up; we only know that the numbers do not all have 1/6 probability. So does that mean rolling a biased die is a random phenomenon? AI: Yes, you are correct. Rolling a die is (almost always) a random phenomenon. In the case of biased vs unbiased dice, the difference is solely in the probability distributions: for unbiased dice the distribution must be uniform, for biased dice it can be anything we want. The only case where a biased die stops being a random phenomenon is when we load it in such a way that it always produces a predetermined number (or a specific sequence).
H: Compare similarities between two data frames using more than one column in each data frame This work started by comparing two columns in each data set in pandas. Previous research:here A lot of results online show how to compare 2 data frames with 1 column I'm trying to learn how to compare and extract similarities between two data frames (same & different sizes if possible) using more than 1 column in pandas. sample input: df1=pd.DataFrame([[1,None],[1,None,],[1,None],[1,'item_a'],[2,'item_a'],[2,'item_b'],[2,'item_f'],[3,'item_e'],[3,'item_e'],[3,'item_g'],[3,'item_h']],columns=['id','A']) df2=pd.DataFrame([[1,'item_a'],[1,'item_b'],[1,'item_c'],[1,'item_d'],[2,'item_a'],[2,'item_b'],[2,'item_c'],[2,'item_d'],[3,'item_e'],[3,'item_f'],[3,'item_g'],[3,'item_h']],columns=['id','A']) df1 id A 0 1 None 1 1 None 2 1 None 3 1 item_a 4 2 item_a 5 2 item_b 6 2 item_f 7 3 item_e 8 3 item_e 9 3 item_g 10 3 item_h df2 id A 0 1 item_a 1 1 item_b 2 1 item_c 3 1 item_d 4 2 item_a 5 2 item_b 6 2 item_c 7 2 item_d 8 3 item_e 9 3 item_f 10 3 item_g 11 3 item_h What I've tried so far: 1: df1[df1.A.isin(df2.A) & df1.id.isin(df2.id)] 2: df1[ df1[['id', 'A']].isin(df2[['id', 'A']]) ] The output I got for 1 is close to what I desire: id A 3 1 item_a 4 2 item_a 5 2 item_b 6 2 item_f #this specific row is not desired in the output 7 3 item_e 8 3 item_e #this specific row was raised due to a duplicate in `df1`. It's permitted to show duplicates. Duplicates values are allowed in `df1` but not `df2`. 9 3 item_g 10 3 item_h Desired output: id A 3 1 item_a 4 2 item_a 5 2 item_b 7 3 item_e 8 3 item_e 9 3 item_g 10 3 item_h What's not shown: Two data frames have 2500+ rows. df1 can have the same items associated with an id. No duplicate items for an id in df2. My 2nd try 2: df1[ df1[['id', 'A']].isin(df2[['id', 'A']]) ] is definitely the wrong approach as its matching row and column in df1 to row and column df2 (This output is similar to equals(), I get values from df1 instead of True and NaN instead of False) Any code, links, suggestions are appreciated. AI: The last df is your required result, 2nd last is mine. I think 3, item_f is not possible as its absent in df1.
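A short pandas sketch (not part of the original answer) that produces the desired rows by matching on both columns with an inner merge: it keeps the duplicated (3, item_e) rows from df1 and drops (2, item_f), because that exact (id, A) pair never occurs in df2. Note that the merge resets the index; the row content matches the desired output.

import pandas as pd

df1 = pd.DataFrame([[1, None], [1, None], [1, None], [1, 'item_a'], [2, 'item_a'],
                    [2, 'item_b'], [2, 'item_f'], [3, 'item_e'], [3, 'item_e'],
                    [3, 'item_g'], [3, 'item_h']], columns=['id', 'A'])
df2 = pd.DataFrame([[1, 'item_a'], [1, 'item_b'], [1, 'item_c'], [1, 'item_d'],
                    [2, 'item_a'], [2, 'item_b'], [2, 'item_c'], [2, 'item_d'],
                    [3, 'item_e'], [3, 'item_f'], [3, 'item_g'], [3, 'item_h']],
                   columns=['id', 'A'])

# Inner merge on both columns: only (id, A) pairs present in df2 survive
result = df1.merge(df2, on=['id', 'A'], how='inner')
print(result)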
H: Multiple models in the same notebook When working on data sets, we sometimes want to keep track of multiple models with different architectures that work on the same data set, on which some transformations and preprocessing have been done. So I would like to know: what is an elegant way to work with multiple models that use the same data set? Having multiple models in the same notebook is cumbersome, and recreating the same data preprocessing and transformations in separate notebooks means a lot of copy-paste, which I think can be solved with existing solutions that I am not aware of. What is the industry standard for such tasks? Any help is appreciated. AI: If I were in your situation, I would approach it this way: Create a "preprocessing"/"data" module. That module can either be a simple data access layer that can be shared across notebooks, or it can also include preprocessing steps. This enforces a common data access layer across notebooks without duplicating code. So you could do something like this: from data_layer import Data, Preprocessor preprocessor = Preprocessor(**kwargs) data = Data(preprocessor=preprocessor, **kwargs) In an ideal world, you should be able to define your data access layer as data itself, for instance a JSON document containing the data source and the preprocessing options. Create a different notebook per model. If your models share components, I'd also create modules they can all import from, the same way you would use an sklearn or torch component.
H: What is a good interpretation of this 'learning curve' plot? I read about the validation_curve and how to interpret it to detect over-fitting or under-fitting, but how can I interpret the plot when the data shown is the error, like this: the X-axis is the number of training examples, the red line is the training error and the green line is the validation error. Thanks AI: The X axis is the number of instances in the training set, so this plot is a data ablation study: it shows what happens for different amounts of training data. The Y axis is an error score, so a lower value means better performance. In the leftmost part of the graph, the fact that the error is zero on the training set until around 6000 instances points to overfitting, and the very large difference in error between training and validation confirms this. In the right half of the graph the difference in performance starts to decrease and the performance on the validation set seems to become stable. The fact that the training error becomes higher than zero is good: it means that the model starts generalizing instead of just recording every detail of the data. Yet the difference is still important, so there is still a high amount of overfitting.
H: Separating styles of numbers for simple digit classification I am just getting started with my first simple digit classifier, so my doubts are at a pretty low level. In every dataset of digit images I've seen so far, different variants of each digit are grouped together, for example: All of these images represent the number 1, but are fairly different in looks. Won't simple convolutional neural networks have a hard time learning the visual pattern for 1 in such a case? Especially considering how the third image is similar to 7 in design. My questions are these: Would it be better to create other labels such as "1", "1-alt", "1-serif" etc? The CNN can then add the probabilities of the image being a variant of 1 and then give its prediction, but I'm not sure about this. How do professional classifiers approach this problem? Theoretically, will this method affect performance or accuracy in any way? AI: Interesting question. You are right in assuming that some 1s may be confused for 7s, same with 8s and 3s for instance. Generally creating different classes as you suggest doesn't happen, simply because it would require more annotation. There are multiple ways to handle this. stacked models Anything labelled as a 1 or a 7 would be given to a model fine-tuned to differentiate these two specific digits weighted training It is possible to teach your model that some mistakes are worse than others. In your case, 1s for 7s is bad, so you could increase the cost of making that mistake.
H: What is the meaning of $\mathbb{R}^{768\times (768 * 2)}$? Hi, I'm an undergraduate student interested in Machine Learning. I was reading a paper from ICLR 2020 and came across a weird-looking dimension. Can anyone tell me what this means? $\mathbb{R}^{768\times (768 * 2)}$ Does this mean that as a python numpy array the shape would be (2, 768, 768)? I remember reading that numpy array dimensions are reversed from the actual vector dimension representation. The matrix I am asking about shows up on page 4. AI: To my knowledge, that notation refers to a matrix of shape 768 by 1536, not a 3-dimensional array. I am guessing the 2 is there to highlight that the dimension increase is proportional to the 768 (whether that is the input size of a layer in the neural network, for example).
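In numpy terms, a quick illustrative check:

import numpy as np

W = np.zeros((768, 768 * 2))
print(W.shape)  # (768, 1536) -- a single 2-D matrix, not a (2, 768, 768) stack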
H: Is this overfitting? I read about the validations curves, and the following plot is similar to overfit, but in this case, the validation curve doesn't' growth again. So is this overfit? why? Thanks AI: So, overfitting occurs when the model is complex enough to fit very well with examples observed in the training data, such that the model is not able to generalise well over unseen instances (validation data). Therefore, for overfitting, we expect the training F1 score to continually decrease, whilst the valid_1 F1 score increases. Here, the plot shows that both training and validation F1-score has stabilised over epochs/iterations. Arguably though, we can see that valid_1 marginally increases as the training F1 score decreases. This can be indicative of (very mild) overfitting.
H: is it possible get a overfit underfit comparation between models, with this chart? (homework) I am trying to interpret this chart. I am not sure how to interpret this, because, I think that the fact of the for examples LGBM Validation error, is wide and similar to train boxplot, there arent problem of overfitting, but when I see another type of charts of the execution of LGBM, I can see that really the LGBM is overfitted, so really I don't know how to interpret this of the correct way. But I don't know how could interpret beyond this: LightGBM is maybe the best option because it is faster and finally you can get enough accuracy with that, and in comparison with the other two, bagging have less overfit because of the differences between the error is less. Any idea? Thanks AI: Your chart seems to show that light GBM models are very inconsistent in terms of F1 score. The other two types of model tend to have lower validation accuracy than training accuracy, suggesting overfitting is occurring to some extent (but this is ubiquitous in machine learning so it’s not a deal breaker by any means). The best median validation performance is by RandomForest, however some outliers underperformed the models using bagging. Possibly a good approach would be to have an ensemble of RandomForest models.
H: 3D visualisation and post analysis tool I am looking for a 3D plotting and post analysis tool, specifically which can generate figures like the following. It is preferred the tool has a Linux distribution. The figure is copied from here. AI: If anyone is interested, I found that such graphs can be created with the software Origin. It is a cheap commercial software (student version: $69 per year). It's certainly worth the price.
H: How similar is Adam optimization and Gradient clipping? According to the Adam optimization update rule: $$m \leftarrow \beta_1 m + (1 - \beta_1)\nabla J(\theta)$$ $$v \leftarrow \beta_2 v + (1 - \beta_2)(\nabla J(\theta) \odot \nabla J(\theta))$$ $$\theta \leftarrow \theta - \alpha \frac{m}{\sqrt{v}}$$ From the equations, it is clear that $m$ is accumulated gradient for each $\theta$ based on an exponential decay function and $v$ is doing a similar thing (kind of) but with the magnitude of the gradient. Then, when we update the parameters $\theta$, we divide the accumulated gradient with square root of the accumulated magnitude to only update those parameters largely which haven't been updated much and vice-versa. In gradient clipping, we do a kind of similar thing by scaling the gradient vector with respect to a threshold. My question is, why do we need gradient clipping to solve the problem of gradient explosion when we can use Adam optimizer to do a controlled search of the space for the minima. AI: Both have a different role and approach. So, I will say both are not comparable. Gradient Clipping has a simple task to clip the Gradient to a certain threshold. Its job is done as soon as this is achieved, i.e. Gradient is brought to a decent level to avoid the Explosion. It has no responsibility to see if learning will converge to best possible minima of Loss. Adam, on the other hand, is an optimizer. It came as an improvement over RMSprop. The improvement was to have the goodness of both, i.e. Momentum and RMSProp (Read this answer) Adam is expected to help the learning converge towards the minimum loss function when it is going in Valley or Plateau by managing both the Momentum and Coordinate specific Gradient. Coming back to your question why do we need gradient clipping to solve the problem of gradient explosion when we can use Adam optimizer to do a controlled search of the space for the minima. Let's say, in the first iteration, the Model accumulate very large Gradient because of its depth. So, it will take a considerable jump when updating the weight ($\theta$) and can land to any random coordinates in the Loss function space. Adam can definitely provide a brake on the Gradient, but it has few challenges - It needs a few iterations to accumulate the $v$ to apply the brake. But it is very much possible that during these iterations, the point will take any random coordinates in the space. This randomness will also cause all the coordinates to have random Gradients i.e. no relation to the last Gradient. This will not let Adam do any accumulation and it will have no clue to act in a corrective manner. Everything will happen haphazardly. Also, keep in mind, exploding Gradient has bidirectional causation, i.e. explosion causes more Gradient, and then this significant Gradient adds more to the Explosion. So, the learning reaches very quickly to NaN state.
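In practice the two are often used together; a minimal PyTorch sketch of a single training step (a generic illustration, not tied to a specific model):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 10), torch.randn(32, 1)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
# Clip the global gradient norm BEFORE Adam's update, to cap any single explosive step
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()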
H: Expectation Maximization Algorithm (EM) for Gaussian Mixture Models (GMMs) I'm trying to apply the Expectation Maximization algorithm (EM) to a Gaussian Mixture Model (GMM) using Python and NumPy. The PDF document I am basing my implementation on can be found here. Below are the equations: $\mathrm{E}-\text{step:}$ $$w_{ik} = \frac{\pi_k \cdot p_k(x_i|z_k, \mu_k, \Sigma_k)}{\sum_{m=1}^{K} \pi_m \cdot p_m(x_i|z_m, \mu_m, \Sigma_m)}, \; [1]$$ $\text{where:}$ $${\displaystyle (2\pi )^{-{1}}|{\Sigma_k}|^{-{\frac {1}{2}}}\,\mathrm e^{-{\frac {1}{2}}(x_i -{\mu_k})^{\!{\mathsf {T}}}{{\Sigma_k }}^{-1}(x_i -{\mu_k})}.} $$ $\mathrm{M}-\text{step:}$ $$\pi_k^{\text{new}} = \frac{N_k}{N}, \; [2]$$ $\text{where:}$ $$N_k = \sum_{i=1}^{N} w_{ik}.$$ $$\mu_k^{\text{new}} = \frac{1}{N_k} \sum_{i=1}^{N} w_{ik} \cdot x_i, \; [3]$$ $$\Sigma_k^{\text{new}} = \frac{1}{N_k} \sum_{i=1}^N w_{ik} (x_i - \mu_k)(x_i - \mu_k)^\mathsf {T}. \; [4]$$ When applying the algorithm I get the mean of the first and second cluster equal to: array([[2.50832195], [2.51546208]]) When the actual vector means for the first and second cluster are, respectively: array([[0], [0]]) and: array([[5], [5]]) The same thing happens when getting the values of the covariance matrices I get: array([[7.05168736, 6.17098629], [6.17098629, 7.23009494]]) When it should be: array([[1, 0], [0, 1]]) for both clusters. Here is the code: np.random.seed(1) # first cluster X_11 = np.random.normal(0, 1, 1000) X_21 = np.random.normal(0, 1, 1000) # second cluster X_12 = np.random.normal(5, 1, 1000) X_22 = np.random.normal(5, 1, 1000) X_1 = np.concatenate((X_11,X_12), axis=None) X_2 = np.concatenate((X_21,X_22), axis=None) # data matrix of k x n dimensions (2 x 2000 dimensions) X = np.concatenate((np.array([X_1]),np.array([X_2])), axis=0) # multivariate normal distribution function gives n x 1 vector (2000 x 1 vector) def normal_distribution(x, mu, sigma): mvnd = [] for i in range(np.shape(x)[1]): gd = (2*np.pi)**(-2/2) * np.linalg.det(sigma)**(-1/2) * np.exp((-1/2) * np.dot(np.dot((x[:,i:i+1]-mu).T, np.linalg.inv(sigma)), (x[:,i:i+1]-mu))) mvnd.append(gd) return np.reshape(np.array(mvnd), (np.shape(x)[1], 1)) # Initialized parameters sigma_1 = np.array([[10, 0], [0, 10]]) sigma_2 = np.array([[10, 0], [0, 10]]) mu_1 = np.array([[10], [10]]) mu_2 = np.array([[10], [10]]) pi_1 = 0.5 pi_2 = 0.5 Sigma_1 = np.empty([2000, 2, 2]) Sigma_2 = np.empty([2000, 2, 2]) for i in range(10): # E-step: w_i1 = (pi_1*normal_distribution(X, mu_1, sigma_1))/(pi_1*normal_distribution(X, mu_1, sigma_1) + pi_2*normal_distribution(X, mu_2, sigma_2)) w_i2 = (pi_2*normal_distribution(X, mu_2, sigma_2))/(pi_1*normal_distribution(X, mu_1, sigma_1) + pi_2*normal_distribution(X, mu_2, sigma_2)) # M-step: pi_1 = np.sum(w_i1)/2000 pi_2 = np.sum(w_i2)/2000 mu_1 = np.array([(1/(np.sum(w_i1)))*np.sum(w_i1.T*X, axis=1)]).T mu_2 = np.array([(1/(np.sum(w_i2)))*np.sum(w_i2.T*X, axis=1)]).T for i in range(2000): Sigma_1[i:i+1, :, :] = w_i1[i:i+1,:]*np.dot((X[:,i:i+1]-mu_1), (X[:,i:i+1]-mu_1).T) Sigma_2[i:i+1, :, :] = w_i2[i:i+1,:]*np.dot((X[:,i:i+1]-mu_2), (X[:,i:i+1]-mu_2).T) sigma_1 = (1/(np.sum(w_i1)))*np.sum(Sigma_1, axis=0) sigma_2 = (1/(np.sum(w_i2)))*np.sum(Sigma_2, axis=0) Would really appreciate if someone could point out the mistake in my code or in my misunderstanding of the algorithm. AI: One reason why you aren't getting fitted values close to the true values could be the initial values of the parameters used. It's likely what you have found is a local maxima. 
You have to try a number of random initializations and then pick the one that gives the highest likelihood. In your code, note that both means are initialized to the same point [[10], [10]] with identical covariances and priors, so the responsibilities stay at 0.5 for every point and both components converge to the overall mean and pooled covariance of the data; starting them at different random points breaks that symmetry.
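To see the effect of multiple random starts without rewriting the NumPy code, one option is scikit-learn's GaussianMixture, which runs EM n_init times and keeps the solution with the best likelihood lower bound; this is only a sketch for comparison, not the asker's implementation:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two 2-D Gaussian clusters, as in the question: means (0,0) and (5,5), identity covariance
X = np.vstack([rng.normal(0, 1, size=(1000, 2)),
               rng.normal(5, 1, size=(1000, 2))])

# n_init=10 restarts EM from 10 random initializations and keeps the best fit
gmm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(X)
print(gmm.means_)        # should be close to [0, 0] and [5, 5]
print(gmm.covariances_)  # should be close to the identity for both components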
H: Decision tree and SVM for text classification - theory I used 4 classifiers for my text data: NB, kNN, DT and SVM. As for NB and kNN, I fully understand how they work with text - how we can count probabilities for all words in NB and how to use similarity metrics with TF-IDF vectors in kNN. I don't understand at all how decision trees and support vector machines work with text data. I implemented all algorithms in Python, so all I need is some resource or explanation of how the other two classifiers work with text... I understand DT with non-text data - it seems logical, for example nodes checking whether some value is more/less than some number. But with text I get confused. Does it operate on text or on numerical vectors? The same applies to SVM... AI: Similarly to NB or kNN, the DT and SVM algorithms work with the features which are provided as input. So whenever ML is applied to text it's important to understand how the unstructured text is transformed into structured data, i.e. how text instances are represented with features. There are many options, but traditionally a document is represented as a vector over the full vocabulary. A very simple version of this is a boolean vector: a cell $v_i$ contains 1 if the word $w_i$ occurs in the document and 0 otherwise. The DT training will generate the tree the usual way, so in this case the conditions at the nodes will be v_i == 1, representing whether the word $w_i$ is present or not. If the values in the vector are, say, TF-IDF weights, the conditions might look like v_i > 3.5 for instance. Similarly for SVM: the algorithm will find the optimal way to separate the instances in a multi-dimensional space; each dimension actually represents a single word, but the algorithm itself doesn't know (and doesn't care) about that.
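To make this concrete, here is a hedged scikit-learn sketch (the texts and labels are made up) showing that the DT and the SVM only ever see the numerical document vectors produced by the vectorizer:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["the film was great", "terrible plot and acting", "great acting, great film"]
labels = [1, 0, 1]  # toy labels, purely illustrative

# Each pipeline turns raw text into a TF-IDF vector, then fits the classifier on it
dt_clf = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier())
svm_clf = make_pipeline(TfidfVectorizer(), LinearSVC())

dt_clf.fit(texts, labels)
svm_clf.fit(texts, labels)
print(dt_clf.predict(["great film"]), svm_clf.predict(["terrible film"]))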
H: what's this approach to spatiotemporal data called? I have some sequential data (e.g. audio, video, text etc.) and I am using this approach to classify sequences. I am sure there's a name for it, but I can't think of it: vectors = t1,[v1_0....v1_n] t2,[v2_0....v2_n] : : tm,[vm_0....vm_n] where t1..tm are the time offsets and the v's are the feature vectors. Out of this data, I create an oversampling by using different window_size and step_size values: [t0,win_size,step_size, np.median(vectors[frame_id:frame_id+win_size],axis=0)] win_size = the size of the window to mean/median over step_size = how many columns to move with each step np.median(vectors[frame_id:frame_id+win_size],axis=0) = column-wise mean or median across the window of vectors I then use the above to train a classifier, which should be able to generate predictions at multiple scales (e.g. at a large window size). Edit: for predictions, extract a similar set of vectors from the query clip; the predictions can then be aggregated somehow to generate a "consensus". AI: I would call that a multi-headed convolutional model.
H: Dealing with missing data I have a question about data cleaning. I am a novice and have just started learning in this field so please pardon my ignorance. Suppose there are two columns and based on some samples taken from both the columns you find the correlation coefficient to be high. Now for the values that aren't there, can you use linear regression to predict or find them out, by using the values you know as training data? AI: Hi Soumyadeep and welcome to Data Science/Stack Exchange What you are describing is called regression imputation, and it is a valid method to use on missing data. However, if the data is sparse (lots of missing values), this issue will be more difficult to handle. In general, missing data can be handled in several ways (row deletion, imputation, substitution, etc). Regression imputation can be used if you have little or no knowledge about the data, but usually it is better to use another method. If you have some domain knowledge about the missing values, like you have an idea what the value should be, usually you can use that knowledge to fill in the missing values. Try some different methods and see which one works best. A person pointed out that I should check for multicollinearity if the features are both independent. Does it basically mean that one feature is falling in the span of the other feature? Definition of multicollinearity: There exist one or more exact linear relationships among some of the variables References: https://en.wikipedia.org/wiki/Multicollinearity https://stats.stackexchange.com/questions/234870/is-multicollinearity-the-issue-here
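A hedged sketch of regression imputation with scikit-learn (the column names and values are hypothetical): IterativeImputer regresses each feature containing missing values on the other features, which is essentially the idea described above.

import numpy as np
import pandas as pd
# IterativeImputer is still behind an experimental flag in scikit-learn
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.DataFrame({"height": [1.70, 1.82, np.nan, 1.65],
                   "weight": [68.0, np.nan, 90.0, 60.0]})

# Each feature with NaNs is modelled as a regression on the other features
imputed = IterativeImputer(random_state=0).fit_transform(df)
print(pd.DataFrame(imputed, columns=df.columns))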
H: Why do we add "αd" to N in Laplace Smoothing? I just started to learn Naive Bayes algorithm. Then I learned to use Laplace smoothing to avoid getting probability of zero. I understand the purpose of using it, but, in the expression of Laplace smoothing below, I do not really understand why we need to add "αd" to N in the denominator. As far as I researched, one of websites says that it is done so that the division will never be greater than 1. However, I still do not understand why we add "αd". Could someone explain why we have to add "αd" to N in the denominator ? AI: You're right, the idea is that, if $\alpha d$ is not used, then you might have $\hat{\theta_i}$ bigger than 1. That being said, I don't think the main reason of Laplace smoothing is to have probabilities greater than 0. I think the important thing is that it acts as a regularization technique (a smoother, indeed). The smoother works the following way: we are mixing the empirical observations ($\frac{x_i}{N}$) and the theoretical distribution without data ($\frac{1}{d}$), and $\alpha$ is a parameter that controls how much do we care about the theoretical distribution. The idea is: the more we trust the theoretical distribution, the more regularization we are adding. It even has a Bayesian interpretation in terms of priors and posteriors. So I don't think it has to do with probabilities being between 0 and 1, I think it has to do with performing the mixture of distributions (or update of the posterior, if you want) properly.
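To make the "mixture" interpretation explicit: writing $x_i$ for the count of category $i$, $N$ for the total count and $d$ for the number of categories (notation assumed to match the expression referred to in the question), the smoothed estimate can be rewritten as a convex combination of the empirical frequency and the uniform distribution: $$\hat{\theta}_i = \frac{x_i + \alpha}{N + \alpha d} = \frac{N}{N + \alpha d}\cdot\frac{x_i}{N} + \frac{\alpha d}{N + \alpha d}\cdot\frac{1}{d}.$$ The two weights sum to 1, and summing $\hat{\theta}_i$ over all $d$ categories gives exactly 1, which is precisely why $\alpha d$ (and not just $\alpha$) has to appear in the denominator.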
H: Data visualization with extreme far away points I want to show points across two groups. However, for both groups, there are some points which are far away from most of the other points within each group, shown below. Any suggestions for this situation? Thank you. AI: If you want to see the distribution of the data that is hidden in the bottom portion, you can add a histogram or probability plot, or even a violin plot. Each will show the distribution of the data more clearly than this boxplot does, and you can still see the true value directly. You can also add some jitter to the boxplot to see more of the overlapping points displayed. Jitter: Violin: Probability Plot with Boxplot:
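A hedged matplotlib/seaborn sketch of the suggested alternatives (the column names "group" and "value" and the simulated data are placeholders for your own):

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({"group": ["A"] * 100 + ["B"] * 100,
                   "value": np.concatenate([rng.normal(0, 1, 100),
                                            rng.exponential(5, 100)])})

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
# Violin plot shows the full distribution, including the long tail of extreme points
sns.violinplot(x="group", y="value", data=df, ax=axes[0])
# Boxplot with jittered points makes the overlapping observations visible
sns.boxplot(x="group", y="value", data=df, ax=axes[1])
sns.stripplot(x="group", y="value", data=df, ax=axes[1], color="black", size=2, jitter=True)
plt.show()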
H: Confusion regarding confusion matrix I am confused on how to represent the confusion matrix -- where to put the FP and FN. Link1 and Link2 show different confusion matrix for binary classification. The rows represent the actual and columns represent the predicted values. Based on my understanding, the correct confusion matrix should have been: | Pred Neg| Pred Pos |___________________ Actual Negative(class0) | TN | FP ------------------ |------------- Actual Positive(Class1) | FN | TP where TN (class 0) indicates the amount of correctly identified normal patterns. TP (class 1) indicates the amount of correctly identified malignant patterns. On the other hand, FP indicates that the classifier predicted the signature to be malignant when infact it was normal. $ \text{Precision} = \frac{TP}{TP+FP}$, $\text{Recall or TPR} = \frac{TP}{TP+FN}$, Question 1: Is my understanding and construction of the confusion matrix correct? Question 2: What is the intuitive difference between Precision and recall? What happens if precision < recall? AI: Question 1: Is my understanding and construction of the confusion matrix correct? Yes, you are correct in your definitions and the way you construct the confusion matrix. The links you have provided also agree with each other. They just switch rows and columns, since there is no hard rule regarding the presentation, as long as the correct relations are maintained. Link 1 shows this matrix: | Pos Class | Neg Class Pos Pred | TP | FP Neg Pred | FN | TN Link 2 shows the same matrix, but transposed: | Pos Pred | Neg Pred Pos Class | TP | FN Neg Class | FP | TN Question 2: What is the intuitive difference between Precision and recall? Precision is the rate at which you are correct when you predict a positive class. It takes into account all of your positive predictions and figures out which proportion of those is actually correct. When your precision is high, this means that once you make a positive prediction, you are likely to be correct about it. This says nothing about how correct your negative predictions are -- you might make 1 positive and 99 negative predictions on 100 actual positives and still get 100% precision, since your only positive prediction just happened to be correct. Recall is the rate at which you are able to predict the positive class correctly. It takes into account all of the actual positive classes and figures out which proportion of those you have predicted correctly. When your recall is high, this means that very few actual positives slip by your model without being detected as such. This says nothing about how good you are at being actually correct with your positive predictions -- a model that always predicts a positive class easily achieves 100% recall. One usually strives to optimize both precision and recall by finding the most acceptable balance between the two. You might want to read this article about the Precision-Recall curve to get a fuller understanding of the relationship between these metrics. What happens if precision < recall? As you have highlighted in your post, the two formulas differ only in the denominator. It follows that when precision is smaller than recall, then the number of false positives in your predictions is larger than the number of false negatives.
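For a quick sanity check of these definitions on toy data, scikit-learn computes the same quantities directly (a sketch with made-up labels, where 1 = malignant):

from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]   # actual classes (1 = malignant)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 0]   # model predictions

# Rows are actual classes, columns are predictions: [[TN, FP], [FN, TP]]
print(confusion_matrix(y_true, y_pred))
print("precision = TP/(TP+FP):", precision_score(y_true, y_pred))
print("recall    = TP/(TP+FN):", recall_score(y_true, y_pred))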
H: Stacking and Ensembling methods in Data Science I understand that using stacking and ensembling has become popular, and these methods can give better results than using a single algorithm. My question is: What are the reasons, statistical or otherwise, behind the improvement in results? I also understand that at a high level that combining these methods will combine the predictions from different algorithms. Each algorithm has its own strengths and weaknesses, but not sure how combining them will actually improve the results. Here is a simple balanced example (50% of the labels are actually T and 50% are actually F), where I think stacking or ensembling algorithms will still give the same result as the original model. In this case, for the ties (1 F and 1 T), I decided to select T. The same problem would occur if I selected F instead. This is a little more complicated if I used the predict_proba, but think the same problem would occur. AI: There are many ways in which Ensembling can be done and each one has a different foundation logic to gain improvement. Key variations can be - 1. Nature(High Bias/High Variance) of models in the ensemble 2. How we put models into work i.e. same model type, different model type, parallel, sequential, sample data, full data etc. 3. How we combine individual prediction Let's see a few of the key approaches - 1. Simple Voting based ensembling Dataset doesn't have the same pattern across the feature space. Its pattern will support one type of model in most of the part but a different type of model in some of the part. Observation on an experiment for multiple models. Despite their overall scores being identical, the two best models – neural network and nearest neighbour – disagreed a third of the time; that is, they made errors on very different regions of the data. We observed that the more confident of the two methods was right more often than not. Ref - Ensemble Methods in Data Mining:Improving Accuracy Through Combining Predictions What it meant, if two models have 70% accuracy each and both differ on 10% of the data. There is a good chance that the more confident one is true on 0-10% of the time and that will be the gain on combining both of them using a Soft voting strategy. Intuition - If we use a KNN and a Linear Regression. Definitely, KNN will be better in most of the space(i.e. away from the Regression plane) but for data points which are near to the plane, Regression will be more confident. $\hspace{4cm}$ $\hspace{4cm}$Ref - Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow 2. Bagging based Ensembling A model with very high variance is prone to overfit. We can convert this challenge to our advantage if we figure out a way to average out the variance. This is the logic behind bagging based model. Intuition - At a very high level, the high variance model when built on a different random sample will create decision boundaries which when averaged will smoothen the prediction and variance will be reduced. An intuitive example is Here Why not High Bias models - A high bias model (e.g. A Regression Line) will not change much with every sample as the sample will have the roughly same distribution and the slight difference doesn't impact these models. So, it will end up almost the same models for every sample. As shown in this example for 3 different models. $\hspace{4cm}$Ref - Hands-On Machine Learning with R, Bradley Boehmke & Brandon Greenwell 3. 
Boosting based Ensembling The main idea of boosting is to add new models to the ensemble sequentially. In essence, boosting attacks the bias-variance-tradeoff by starting with a weak model (e.g., a decision tree with only a few splits) and sequentially boosts its performance by continuing to build new trees, where each new tree in the sequence tries to fix up where the previous one made the biggest mistakes (i.e., each new tree in the sequence will focus on the training rows where the previous tree had the largest prediction errors) Ref - Hands-On Machine Learning with R, Bradley Boehmke & Brandon Greenwell Intuition - We start with a weak model(e.g. a DT stump), we may think it as a simple line(Hyper-plane) across the dataset space, splitting it into two parts. We repeat this step but with additional info i.e. adding weight to miss-classified records. In the end, we do a weightage voting e.g. more weight to better Model. Let's say the first model predicted 57 correct out of 100. Now the 2nd model will have additional weight for the 43 records. Let's say it end up 55 correct. So, the first model will have higher weights. It means you have sure-shot 57 correct + there is a good chance that because of the added weight on 43 records, some will be predicted correctly with very high confidence and that will be the addition for the ensemble. 4. Meta-learner/Generalized Stacking In this approach, the prediction of multiple models is used as an input to a meta-learner to decide the final prediction using an additional set of data. So, here we are not using any ready-made function for voting e.g. soft/hard voting but allowing another model to learn the bias pattern of initial model's prediction and learn the adjustment if any. $\hspace{8cm}$Ref - developer.ibm.com This was a very simple explanation of generalized stacking approach but stacking has been extensively used in competitions. To an unimaginative level which is almost impossible to comprehend and explain. As done in below mentioned approach Ref $\hspace{2cm}$ Your sample data We have to attack the model Bias/Variance pattern, Confidence in prediction probability etc. to gain an advantage. We can't get an improvement on any dataset/model combo by just doing hard voting. Maybe you can investigate this example dataset = sklearn.datasets.load_breast_cancer(return_X_y=False) X = pd.DataFrame(dataset.data, columns=dataset.feature_names) y = dataset.target from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(X,y,test_size=0.20,random_state=201) from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.ensemble import VotingClassifier knn_clf = KNeighborsClassifier(n_neighbors=2) svm_clf = SVC(probability=True) voting_clf = VotingClassifier( estimators=[('knn', knn_clf), ('svc', svm_clf)], voting='soft') voting_clf.fit(x_train, y_train) from sklearn.metrics import accuracy_score for clf in (knn_clf, svm_clf, voting_clf): clf.fit(x_train, y_train) y_pred = clf.predict(x_test) print(clf.__class__.__name__, accuracy_score(y_test, y_pred)) KNeighborsClassifier 0.9298245614035088 SVC 0.9122807017543859 VotingClassifier 0.956140350877193
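For the meta-learner / generalized stacking approach described above, scikit-learn ships a ready-made implementation; here is a short sketch on the same breast-cancer data (illustrative only, exact scores will vary with the chosen base learners and folds):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=201)

# The base learners produce out-of-fold predictions; the final_estimator (meta-learner)
# is trained on those predictions rather than on the raw features
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)), ("svc", SVC(probability=True))],
    final_estimator=LogisticRegression(),
    cv=5)
print(stack.fit(x_train, y_train).score(x_test, y_test))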
H: How to determine the correct target for classification probability when the observed samples are probabilities of each class? I have data in which each event's outcome can be described by a probability of a categorical occurrence. For example, if all of the possible class outcomes are A, B, C, or D suppose in one event 7/10 people selected category A, 2/10 selected category B, and 1/10 selected C. The goal of the model is to predict the probability of each class as close to the observed truth as possible. However, it is unclear how to determine the proper target. Here are the options I've thought about doing: The target becomes the predominant class selected. That is, in the event described above A=1, B=0, C=0, D=0. Additional classes/categories are created and then become bins of all possible observed probabilities. For example, the event above would become A=1 when Pr(A) >= 0.7. I THINK option 1 is the the best as option 2 would result in a lot of categories and spread. However, when considering verification of the model output, i.e. Y = {A=0.9, B=0.1, C=0, D=0} for each sample would it not be advantageous to use the actual observed probabilities? Are there other options for the target? AI: Imho the "cleanest" option would be to train a probabilistic model on the original categorical target, then obtain the predicted probabilities for every category as the final "predictions". By "training on the original target" I mean designing each instance as an event, e.g. in order to represent that 7/10 people select category A there would be 7 instances where the target is category A out of 10 instances in total. The most simple option is Naive Bayes, but depending on the data it tends to always predict extreme probabilities, which would defeat the purpose. An ad hoc Bayesian model could give very good results but it's probably more work to design it, depending on the features.
H: What are the merges and vocab files used for in BERT-based models? The title says it all. I see plenty online about how to initialize RoBERTa with a merges and vocab file, but what is the point of these files? What exactly are they used for? AI: The vocab file contains a mapping from vocabulary strings and indices used for embedding lookup in the model. The merges say how to split the input string into subword units. The algorithm is as follows: At the beginning of merging, a word split into characters and then you greedily search for neighboring symbols that can be merged (i.e., are in a list of allowed merges), you stop when there is nothing to merge and this is the subword segmentation.
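A minimal sketch (not the actual tokenizer code, and ignoring RoBERTa's byte-level details) of how a merges list is applied: start from single characters and repeatedly merge the highest-priority adjacent pair that appears in the merges table.

def apply_merges(word, merges):
    """merges: list of (left, right) pairs; earlier entries have higher priority."""
    rank = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word)                      # start from single characters
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        candidates = [p for p in pairs if p in rank]
        if not candidates:                    # nothing left that is allowed to merge
            break
        best = min(candidates, key=lambda p: rank[p])
        i = pairs.index(best)
        symbols = symbols[:i] + [best[0] + best[1]] + symbols[i + 2:]
    return symbols

# Hypothetical merges table, only for illustration
merges = [("l", "o"), ("lo", "w"), ("e", "r")]
print(apply_merges("lower", merges))  # -> ['low', 'er']

The resulting subword units are then looked up in the vocab file to obtain the integer ids fed to the model.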
H: The impact of using different scaling strategy with Clustering I'm currently learning about clustering. To practice clustering, I am using this dataset. After running K-means clustering for multiple values of k and plotting the results, I can see that scaling is affecting the results (within-cluster SSE) and I want to use this post to confirm my intuition as to why this is the case. I don't believe that this is a meaningful reduction in the Within-Cluster SSE because the numerical distances are sensitive to scale, and I don't think that this has any effect on how accurate the model is. Is that intuition correct? I just wasn't expecting the reduction to be this drastic between standardizing and normalizing. Code and the results: import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('customers.csv') X = df.iloc[:, [3, 4]].to_numpy() from sklearn.preprocessing import StandardScaler, MinMaxScaler ssc, mmsc = StandardScaler(), MinMaxScaler() X_ssc = ssc.fit_transform(X) X_mmsc = mmsc.fit_transform(X) from sklearn.cluster import KMeans # Unscaled k_vals = list(range(2, 21)) WCSSE = [] for k in k_vals: kmeans = KMeans(n_clusters=k) model = kmeans.fit(X) WCSSE.append(model.inertia_) plt.plot(WCSSE, marker='o', markersize=10) # Standard Scaler k_vals = list(range(2, 21)) WCSSE = [] for k in k_vals: kmeans = KMeans(n_clusters=k) model = kmeans.fit(X_ssc) WCSSE.append(model.inertia_) plt.plot(WCSSE, marker='o', markersize=10) # MinMax scaler k_vals = list(range(2, 21)) WCSSE = [] for k in k_vals: kmeans = KMeans(n_clusters=k) model = kmeans.fit(X_mmsc) WCSSE.append(model.inertia_) plt.plot(WCSSE, marker='o', markersize=10) AI: Is that intuition correct Yes There is no improvement in Cluster quality. All the 3 are the same and should be that way. We can easily observe that all the 3 clusters are forming the elbow at 2.5. Even all other aspects of the 3 plots are exactly the same. Within Cluster Sum of Squares (WCSS) measures the squared average distance of all the points within a cluster to the center of the cluster.It is the mean distance of each point within the Cluster from the Centroid. No ratio is involved in this metric(i.e. to cancel the impact of scale), hence it will definitely depend on the space size and also on the number of clusters. Imaging your space is of the size of Earth, then you standardized it to make it as a size of a football, then you make it even smaller i.e. golf ball with Normalization. I just wasn't expecting the reduction to be this drastic between standardizing and normalizing. Obviously, the mean distance will reduce in the proportion of Standard deviation in the case of Standardization and "Max value" in the case of Normalization. Having big outliers can have a bigger impact. Using the same logic, we can see that the metric decreases with the number of clusters too. More cluster means each cluster is closer to its Centrod and hence smaller SSE. That's why it's not a great metric. You may try to calculate silhouette score, which combines both the Cohesion and Separation for 3 cases from sklearn.metrics import silhouette_score silhouette_score(X, kmeans.labels_) Ref
H: Predicting the next element of a sequence given a small amount of data I have data on bank branches and the amount of revenue they have generated in a month. The data looks like this: I am tasked with finding the expected revenue for each branch for the next month using machine learning. Initially I was planning to use LSTM networks for such an analysis, but I doubt it's possible with such a small amount of data. I personally think machine learning is overkill for such a task. What would be the most appropriate way to predict the revenue for next month? I thought about increasing the amount of data by treating every branch as equal and using the row corresponding to each branch as a separate training instance (but I doubt that is a correct approach). Any advice would be appreciated. AI: You might find this link helpful: https://towardsdatascience.com/how-to-model-time-series-data-with-linear-regression-cd94d1d901c0
H: I am trying to figure out the stationarity of time series data Here, this plot shows the number of customers served per day from 1 Jan 2018 to 31 Dec 2019. I grouped the entire data by month and calculated the average and variance per month. This is the average. This is the variance. I also ran an augmented Dickey-Fuller test and got the following result: ADF Statistic: -26.031963 p-value: 0.000000 Since the mean and variance do not vary too much, should I infer that the data is stationary? AI: There are certainly more tests you could try, but by the definition of this test you can conclude that your data is stationary. On the other hand, I would be a bit concerned about your test results: with a p-value of exactly zero and a test statistic far below -3 or -4, I would look at your implementation again and check that it's right.
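For reference, the statsmodels version of the test looks roughly like this (the series is simulated here, since the original data isn't available; a daily series with constant level and noise is stationary by construction):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
customers_per_day = 200 + rng.normal(0, 10, size=730)   # stand-in for the 2-year daily series

stat, pvalue, *_ = adfuller(customers_per_day)
print(f"ADF Statistic: {stat:.6f}")
print(f"p-value: {pvalue:.6f}")
# A very negative statistic with a p-value below 0.05 rejects the unit-root
# (non-stationarity) hypothesis, i.e. the series looks stationary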
H: Build a sentiment model from scratch I would like to know how I can create a sentiment model from scratch. I have my data, list of texts, with no labels about sentiment. Author Quotes Dan Brown “Everything is possible. The impossible just takes longer.” Dan Brown “Great minds are always feared by lesser minds.” Dan Brown “Men go to far greater lengths to avoid what they fear than to obtain what they desire.” Dan Brown “Google' is not a synonym for 'research'. etc... I have 20000 of quotes like the above, from other authors too. My dataset is the only set of data that I have, so I would need to split into training (80%) and test set (20%). I should work on my training set to build the vocabulary of 'sentiment'. What I have thought is to clean the text removing stopwords, except negative words (like no, not, ...). Then look for adjectives and assign them a score manually (?). Once done this, I should have also consider n-grams, especially with negative stopwords, in order to create a small dictionary which could take into account also this. However I do not know if it could be the right approach, if it is something insane... Is there any way to build from scratch a sentiment model? How could the 'machine' learn from this? I would need to develop a model from scratch as I would like to analyse texts in other languages (like Italian or Spanish) and there are no models (not good model at least) in Python to do that. AI: What you're describing is indeed the traditional approach for building a sentiment analysis system, so I'd say it looks like a reasonable approach to me. I'm not up to date with the sentiment analysis task at all, but I think it would be worth studying the state of the art for several reasons: There might be more recent, better approaches There might be datasets in the languages you're interested in, and if there is that could save you a lot of time. Check if there are any shared tasks about this, they often provide annotated datasets.
H: How to use unigrams and bigrams as features for SVM or logistic regression How can I use unigrams and bigrams as features to build a Natural Language Inference model with an SVM or logistic regression? In my dataset I have premise, hypothesis and label columns. I'm planning to use the unigrams and bigrams of the premise, the hypothesis, or both as some of my training features. For example: premise | hypothesis | hypothesis bigrams: "I am planning to use the unigram and bigram" | "I am planning to use the unigram" | [(i, am), (am, planning), (planning, to), (to, use), (use, the), (the, unigram)] The hypothesis bigrams form a list of word pairs, so I can't use them directly as input to my SVM or logistic regression. Can I convert the hypothesis bigrams into a vector? AI: You need to create a vocabulary of the n-grams, i.e., a numbered inventory of the bigrams that you are going to use as features. Typically, these are the most frequent ones. When you create the feature vector, you start with a zero vector and put a one (or add one) at the index corresponding to each n-gram that appears in your sentence. Machine learning libraries typically have functions that do this. For instance, in scikit-learn you can use CountVectorizer, whose ngram_range argument controls the lengths of the n-grams that are considered in the feature vectors.
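A hedged sketch of the CountVectorizer approach (the premises and labels are invented), combining unigrams and bigrams into a single sparse feature matrix that an SVM or logistic regression can consume:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

premises = ["i am planning to use the unigram and bigram",
            "the model ignores word order entirely"]
labels = [1, 0]  # toy entailment labels, purely illustrative

# ngram_range=(1, 2) builds a vocabulary of unigrams *and* bigrams
vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(premises)          # sparse matrix: one column per n-gram
print(vec.get_feature_names_out()[:10])  # inspect part of the n-gram vocabulary

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform(["i am planning to use bigram features"])))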
H: Lower loss always better for Probabilistic loss functions? I am working on an neural net int Tensorflow that predicts percentages for win, draw, loss for given data of a game. The labels I provide are always {1, 0, 0}, {0, 1, 0} or {0, 0, 1}. After some epochs my accuracy doesn't increase any further, but the loss still decreases for a many epochs (also on the validation set, though very slowly). I am using a softmax activation in the last layer and the categorical crossentropy loss function provided by Keras. I was wondering if in this case, lower loss always corresponds to better probabilities (because I obviously wouldn't want the net to output only values like 1 or 0 for probabilities), or in other words, does this net output the "true" probabilites and if so, why does it do that? AI: If $0.5$ is the threshold for declaring a class (perhaps more sensible in a binary classification than your problem, yes), there is no incentive for accuracy to regard a $1$ as a $0.95$ instead of a $0.51$. Meanwhile your cross-entropy loss function sees that the correct answer is $1$ and wants to get the probability as close to $1$ as it can. Accuracy, however, doesn't care if the predicted probability is $0.51$ or $0.95$, so accuracy does not change as you move the predicted probability closer and closer to the observed value, even though the loss function decreases by getting closer and closer to the observation (as you would expect loss to do...consider how square loss behaves in a linear regression).
H: Creating a new Dataframe with specific row numbers from another I've found other posts that refer to creating a new dataframe using specific conditions from another (like ID = 27, etc.) but nothing that allows me to make a new dataframe from specific row numbers of another. Here is what I have tried so far: To add a single row to a new dataframe: test.append(original.loc[300]) test To add a list of rows to a new dataframe: entries_to_move = [10, 20, 30] for i in entries_to_move: test.append(original.loc[i]) test Neither method works, so help would be appreciated. The output for either code is just a __. Thank you! AI: So, your question is to instantiate a new data frame df2 from another data frame df1, by simply selecting rows. You can do this by indexing. What is great by pandas DataFrames is that you can index a DataFrame using a list of indices. df2 = df1.iloc[[list of indices],:]
H: Permutation test on two groups I am trying to use a permutation test to test my hypothesis. I want to make sure I am understanding concept of permutation correctly. I have control and experimental group. Then I combine them and resample from combined dataset randomly calculating desired statistics. Do this N time and distribute them on a histogram to see where my observed statistics stand. In the book "practical statistics for data scientists" it says that from combined data (without replacing) resample same size as groupA in my case control group. Then since I have only two groups rest are simply groupB(experimental group)? So my question simply put, when using permutation test on two groups first you resample size of groupA then rest automatically becomes resample of groupB? AI: Yes it does, you resample from the whole population without replacement $N_A$ samples, which are associated to group $A$, and the rest of the samples are associated to group $B$. As I understand, you compute the statistic for every possible permutation of the total population and you measure how the actual statistic compares in the distribution of all the permutations. If your statistic is not extreme in the distribution among all possible permutations, then it's likely that $H_0$ is true. Otherwise, if your statistic is an outlier of that distribution it's likely that $H_0$ is false.
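A bare-bones NumPy sketch of the procedure described, using the difference in means as the statistic (the two group samples are simulated stand-ins):

import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=50)   # control
group_b = rng.normal(10.8, 2.0, size=60)   # experimental

observed = group_b.mean() - group_a.mean()
combined = np.concatenate([group_a, group_b])

n_perm, n_a = 10_000, len(group_a)
perm_stats = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(combined)                       # resample without replacement
    perm_stats[i] = shuffled[n_a:].mean() - shuffled[:n_a].mean()  # first n_a -> group A, rest -> group B

# Two-sided p-value: how often a permuted statistic is at least as extreme as the observed one
p_value = np.mean(np.abs(perm_stats) >= abs(observed))
print(observed, p_value)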
H: How does PyTorch deal with non-differentiable activation functions during backprop? I've read many posts on how PyTorch deals with non-differentiability in the network due to non-differentiable (or only almost-everywhere differentiable, which doesn't make it much better) activation functions during backprop. However, I was not able to come up with a full picture of what exactly happens. Most answers deal with ReLU $\max(0,x)$ and claim that the derivative at $0$ is taken to be either $0$ or $1$ by convention (not sure which one). But there are many other activation functions with multiple points of non-differentiability (2 points, 4 points, ...). How does PyTorch systematically deal with all these points during backprop? Does anyone have an authoritative answer? AI: In practice the input is essentially never exactly equal to those points because of floating-point precision, and for the cases where it is, those functions in torch return a one-sided (left or right) derivative, which is defined in every case. So non-differentiability doesn't pose a problem here.
H: Scikit-learn: train/test split to include have same representation of two different types of values in a column I have a dataset of online purchase orders that contains two types of customers: Customers who have an account and thus are known customers with unique customer number. Customers who do not have an account and do their purchases as a guest; thus, these are unknown customers and are given a default number(out of a list of 50 pre-defined numbers). Meaning it is not a unique number and the only way to distinguish transactions here is by the unique order number. To avoid distortion and make sure that my train/test dataset will have the same distribution when it comes to known and unknown customers, I want to select a Train/Test set as follows: The part of the dataset that includes the known customers to be split 80/20. The part of the dataset that includes the unknown customers to be split 80/20 ( the selection of the sub dataset here will depend on the rule that orders are not linked to a customer (= linked to a default customer number). In Scikit-learn I need to set up the X(features) and Y(target) in order to do the train_test_split. My question is would it be logical and doable to split my dataframe into: df1 which contains all known customers. df2 which contains all unknown customers. Then set up X, Y and consequently x_train, x_test, y_train , y_test for each of df1 and df2, then merge the result into one and continue to build the rest of the model? Is this a good approach or is there another solution? AI: Keep only one dataframe Add a column(if not available) to mark - Guest Or Customer Then, simply split with stratify flag on that column from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=19, stratify=data['guest']) stratify : array-like, default = None If not None, data is split in a stratified fashion, using this as the class labels
H: Decision Tree: how to determine the target in a model with no labels? I am studying classification algorithms using the decision tree approach in Python. I have some questions on this topic, specifically regarding the target (y) in my dataset. I have a dataset of 20,000 observations and a few fields: Customer, recorded date, amount, status (married or not), children (any children in the family), nationality (American or not), and so on. Most of these fields are binary (yes/no). Based on this I would like to determine whether a customer is trustworthy or not. As you can see, I have no label about trust, but I have some initial information: for example the amount. If the amount is 0 or < 0, the customer has no money, so he/she can be considered not trusted. Then I could consider status: if he/she is married, the customer could be considered trustworthy, as there could be another salary to take into account. And so on. My doubt is about splitting my dataset, as that requires a y variable. What would it be in this case? I have no explicit target. AI: When you do not have any target and you want to label customers as trustworthy or not, you are relying on your own intuition that a customer who is not earning money, or is not married, is a bad customer. But manually labeling the dataset based on this intuition may or may not be correct, because you do not have any target variable to validate your labeling against. Therefore, as @Kappil C has suggested, first you need to categorize your data using some clustering algorithm to understand how your population is divided. It can be trustworthy vs non-trustworthy (2 classes), or super-trustworthy, trustworthy, non-trustworthy (3 or more classes). Once these classes are tagged, you are ready to proceed with any supervised learning algorithm. As an alternative to this approach, you can use a simple rule-based technique with basic statistics, where you study each variable individually and create multiple rules independently. But again, you need a target to compute a confusion matrix per rule. Example: people with age > 50 -> super-trustworthy; people with age < 18 -> non-trustworthy. Such rules can be helpful in streamlining your business.
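A hedged sketch of the suggested first step: cluster the customers, inspect the clusters, and only then treat the cluster ids as candidate "trustworthy / not trustworthy" labels for a supervised model. The tiny table and its column names are hypothetical stand-ins for the 20,000-row dataset.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Synthetic stand-in for the customer table (columns are hypothetical)
df = pd.DataFrame({"amount":      [500, -20, 0, 1200, 300, -5],
                   "status":      [1, 0, 0, 1, 1, 0],   # married yes/no
                   "children":    [1, 0, 0, 1, 0, 0],
                   "nationality": [1, 1, 0, 0, 1, 0]})

X = StandardScaler().fit_transform(df)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
df["cluster"] = kmeans.labels_

# Inspect cluster profiles before deciding which cluster to call "trustworthy";
# the cluster ids then become the y for a supervised model such as a decision tree
print(df.groupby("cluster").mean())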
H: Stacked Model performance? I am currently working with a dataset that seems very easily separable and I have an accuracy of 99% for SVM (NN-98%, RF-98%, DT-96-97% and I have checked for leakage & overfitting). As part of my project, I am also learning how to implement a hybrid model but its accuracy is also 99% (1 record misclassified). It seems to be misclassifying the same record as SVM and no other algorithm seems to get that record correct. Probability wise- RF and Logistic Regression give a probability of 0.5-0.7 but all other models give 0.9-1 probability for the wrong prediction. I was wondering what I should do now. Are there certain techniques I can use to reduce the probability to below 0.5 in the stacked model so that it isn't classified the way it currently is? I am using the StackingCVClassifier for implementation and this is a binary classification project. I am currently using SVM, LR, RF as base models and NB as the meta-model. My other question is if there is even value in making a hybrid model if the accuracy is the same as SVM's? Thank you! AI: Are there certain techniques I can use to reduce the probability to below 0.5 in the stacked model so that it isn't classified the way it currently is? It's generally not a good idea to try to bias the classifier in order to deal better with some specific instances, because it's likely to make it weaker in with some (possibly many) other instances. The way to do that would be oversample in the training set the instances which will help predict the target instance correctly, but that's a terrible idea, don't do it! :) In general it's perfectly normal to have some errors, the data can contain noise or even sometimes annotation errors. My other question is if there is even value in making a hybrid model if the accuracy is the same as SVM's? If the data is really easy to classify, it's totally possible that stacking learners doesn't improve performance: if the performance of a single model reaches the maximum obtainable on this data, clearly there's nothing to improve. However it's also possible that the benefit of stacking is not visible in this particular test: in this hypothesis the meta-model is indeed better in general that a single learner, but the test set just doesn't contain any instance on which this can be seen. A way to check this is to reduce the size of the training set: by making things harder for the classifiers, it's possible that the weaknesses of the individual learners will show up.
H: How to create ROC - AUC curves for multi class text classification problem in Python I am working on a multiclass text classification problem and trying to plot ROC Curve but no success so far. Tried many solutions available but didn't work. Kindly please someone help me out with the following piece of code to plot the ROC curve. There are actually five different classes for which I am performing text classification. categories = ['Philonthropists', 'Politcians', 'Showbiz', 'Sportsmen', 'Writers'] train = dt.load_files(r'C:\Users\...\Learning\Train', categories=categories, encoding='ISO-8859-1') test = dt.load_files(r'C:\Users\...\Learning\Test', categories=categories, encoding='ISO-8859-1') count_vector = CountVectorizer() x_trian_tf = count_vector.fit_transform(train.data) tfidf_transformer = TfidfTransformer() x_train_tfidf = tfidf_transformer.fit_transform(x_trian_tf) learn = MultinomialNB().fit(x_train_tfidf, train.target) x_test_tf = count_vector.transform(test.data) x_test_tfidf = tfidf_transformer.transform(x_test_tf) prediction = learn.predict(x_test_tfidf) print("Accuracy is of Multinomial Naive Bayes Classifier", accuracy_score(test.target, prediction) * 100) AI: First check out the binary classification example in the scikit-learn documentation. It's as easy as that: from sklearn.metrics import roc_curve from sklearn.metrics import RocCurveDisplay y_score = clf.decision_function(X_test) fpr, tpr, _ = roc_curve(y_test, y_score, pos_label=clf.classes_[1]) roc_display = RocCurveDisplay(fpr=fpr, tpr=tpr).plot() In the case of multi-class classification this is not so simple. If you have 3 classes you could do ROC-AUC-curve in 3D. Have a look at the resources here. What you can do and what is much simpler is to make 4 one-vs-all curves. You basically have a binary setting for each class. In your case: import matplotlib.pyplot as plt # all the same up until now prediction = learn.predict(x_test_tfidf) proba = learn.predict_proba(x_test_tfidf) print("Accuracy is of Multinomial Naive Bayes Classifier", accuracy_score(test.target, prediction) * 100) for i in range(len(categories)): y_test_bin = np.int32(test.target == i) y_score = proba[:,i] fpr, tpr, _ = roc_curve(y_test_bin, y_score, pos_label=0) plt.subplot(2,2,i) roc_display = RocCurveDisplay(fpr=fpr, tpr=tpr).plot()
H: How to work with different Encoding for Foreign Languages I've got a Word Embedding File called model.txt. This contains 100 Dimensional vectors for over a million French words. These words contain accented characters such as é, â, î or ô. Let me explain my problem with the following example: Consider these two words and their respective vectors, both of which are taken from model.txt: etait -0.100460 -0.127720 ... était 0.094601 -0.266495 ... Both words signify the same meaning but the former is without the accents while the later has accents. Now I'm trying to load this word embedding using the gensim.models.KeyedVectors in the following way: model = KeyedVectors.load_word2vec_format(open(model_location, 'r', encoding='utf8'), binary=False) word_vectors = model.wv To which I get the following error: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-82-e17c33c552da> in <module> 10 model = KeyedVectors.load_word2vec_format(open(model_location, 'r', 11 encoding='utf8'), ---> 12 binary=False) 13 14 word_vectors = model.wv D:\Anaconda\lib\site-packages\gensim\models\keyedvectors.py in load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype) 1547 return _load_word2vec_format( 1548 cls, fname, fvocab=fvocab, binary=binary, encoding=encoding, unicode_errors=unicode_errors, -> 1549 limit=limit, datatype=datatype) 1550 1551 @classmethod D:\Anaconda\lib\site-packages\gensim\models\utils_any2vec.py in _load_word2vec_format(cls, fname, fvocab, binary, encoding, unicode_errors, limit, datatype, binary_chunk_size) 286 vocab_size, vector_size, datatype, unicode_errors, binary_chunk_size) 287 else: --> 288 _word2vec_read_text(fin, result, counts, vocab_size, vector_size, datatype, unicode_errors, encoding) 289 if result.vectors.shape[0] != len(result.vocab): 290 logger.info( D:\Anaconda\lib\site-packages\gensim\models\utils_any2vec.py in _word2vec_read_text(fin, result, counts, vocab_size, vector_size, datatype, unicode_errors, encoding) 213 def _word2vec_read_text(fin, result, counts, vocab_size, vector_size, datatype, unicode_errors, encoding): 214 for line_no in range(vocab_size): --> 215 line = fin.readline() 216 if line == b'': 217 raise EOFError("unexpected end of input; is count incorrect or file otherwise damaged?") D:\Anaconda\lib\codecs.py in decode(self, input, final) 320 # decode input (taking the buffer into account) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call 324 self.buffer = data[consumed:] UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 7110-7111: invalid continuation byte which I thought made sense if my file was encoded in a different format. However, using git I tried checking the encoding of the file using file * and got: model.txt: UTF-8 Unicode text, with very long lines Now, if I try to write the above code and have the encoding set to latin1, there isn't any problem to load this document but at the cost of not being able to access any of the words which contains an accent. Essentially throwing an out-of-vocab error upon executing: word_vectors.word_vec('était') How am I supposed to approach the problem? I've also got the .bin file of the model, should I try to use that to load my words and their corresponding vectors? AI: Nevermind, the solution was trivial. Since I had the .bin file I could just open it in binary form. 
If you only have the .txt file, one option is to convert it to .bin (or fix its encoding) and proceed from there, as in the sketch below.
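For reference, the binary load looks roughly like this (file paths are placeholders):

from gensim.models import KeyedVectors

# Load the binary word2vec file directly; gensim handles the decoding itself
model = KeyedVectors.load_word2vec_format("model.bin", binary=True)
print(model["était"][:5])   # accented French words resolve correctly

# If only the .txt file is available, being explicit about error handling can help:
# KeyedVectors.load_word2vec_format("model.txt", binary=False, encoding="utf8",
#                                   unicode_errors="ignore")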
H: Significance of Number of Calls and Reset Call in Ball Tree Why does the Scikit Implementation has functions to reset and get number of calls? How are these parameter important in Trees? https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html#sklearn.neighbors.BallTree.reset_n_calls AI: Indeed, they don't seem to be used for anything. Probably they were included for some initial debugging? An issue (and a linked PR) discussing these methods: https://github.com/scikit-learn/scikit-learn/issues/13330 (At time of posting,) searching for n_calls only finds results in _binary_tree.pxi, and none of them seem actually used for anything other than tracking the number of distances computed: https://github.com/scikit-learn/scikit-learn/search?q=n_calls
H: Smaller alternatives to sklearn that doesn't require scipy? I am packaging my model for deployment in aws lambda which has a size limit of 250mb for all dependencies. Sklearn, if you include its dependencies of numpy and scipy is a huge package. Are there any alternatives to sklearn that don't require scipy that are smaller than sklearn? Thanks! AI: Did you check tinynumpy? Anyway, I rarely found alternatives to famous packages (except scikit-image instead of opencv). What usually works for me is: Slim the model as much as I can (e.g. weights quantization) Check in the code which functions I use from each module. Once I have a list of them, I retrieve the corresponding python files and get rid of the rest Try to split my process in multiple functions (e.g. one function to perform data processing, one function where to implement the model and make the inference) The second point is crucial. In my experience, one rarely needs entire packages. However, depending on the case, it could also be that AWS Lambda does not fit your needs.
H: What is the proper order of normalization steps before and after splitting data I use a classification model on time-series data where I normalize the data before splitting the data into train and test. Now, I know that train and test data should be treated separately to prevent data leaking. What could be the proper order of normalization steps here? Should I apply steps 1,2,3 separately to train and test after I split data with the help of a sliding window? I use a sliding window here to compare each hour (test) with its previous 24 hrs data (train). Here is the order that I am currently using in the pipeline. Moving averages (mean) Resampling every hour Standardization Split data into train and test using a sliding window (of a length 24 hrs (train) and slides every 1 hr (test)) Fit the model using train data Predict using the test data AI: Assuming I understand it correctly, I think your process is ok this way but I'm not sure about step 3 "standardization": Steps 1 and 2 are ok since this cannot leak any kind of information from the test set to the training set. If the standardization step involves calculating values (e.g. mean and s.d.) over the whole data (including test set) and then standardizing the whole data using these values, then that could be an issue. I'd suggest moving step 3 after the split, so that standardization is based only on the training set and then applied "as is" to the test set.
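A minimal sketch of the leak-free ordering with scikit-learn (array shapes are made up): the scaler is fit on the training window only and merely applied to the test window.

import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
series = rng.normal(50, 5, size=(25, 3))   # e.g. 25 hourly rows, 3 features

train, test = series[:24], series[24:]     # 24-hour train window, next hour as test

scaler = StandardScaler().fit(train)       # statistics come from the train window only
train_std = scaler.transform(train)
test_std = scaler.transform(test)          # the test window is transformed "as is"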
H: Classification model accuracy with ensemble methods I came through this statement in a Machine Learning text book based on law of large numbers: Suppose you build an ensemble containing 1,000 classifiers that are individually correct only 51% of the time (barely better than random guessing). If you predict the majority voted class, you can hope for up to 75% accuracy! I understand the analogy if we consider average over 1000 predictions but how majority votes lead to 75% accuracy from 51% (individual)? AI: This comes from the Binomial distribution, where you have $n=1000$ independent trials (models), $p=0.51$ of each model being right and since you care about the majority vote you want to have at least $k=500$ successful trials. That leads to: $$\text{Pr}(k\geq500 \text{ models are right}) = \sum^{1000}_{k=500}\binom{1000}{k}0.51^{k}(1-0.51)^{1000-k}=0.74675\approx0.75$$ Here is how I calculated it: import numpy as np from scipy.stats import binom np.sum([binom.pmf(k,1000,0.51) for k in range(500,1001)])
H: What are some options to add or remove nodes from a multiclass classification model? I'm building a classification model that will need to classify into one of many possible outputs. I know in advance that I will need to add and subtract nodes from the output layer as circumstances change. Please refer me to any resource you are aware of that can help me understand potential approaches. I can retrain the model from scratch every time, of course. I'm looking for alternatives. I am only aware of two approaches: Transfer learning: Only retraining the final layer with the new desired outputs. Stacking one binary classifier for every potential output and streaming my data through all of them. I need a level of specificity that is pretty granular, for example, I'd like to be able to identify a document that is primarily focused on a topic like "Changes in goat-herding practices in Uzbekistan." This is a made-up requirement, but the point is that it may have very specific features. Please provide input on the pros and cons of the approaches that I mentioned and add any approaches that I haven't thought of. AI: @DPCII, I don't think modifying output nodes at runtime will help you. This is because, A neural network is trained on a specific dataset and to predict predefined variables only. In backpropogation ( used for optimization ), the gradients are for each weight and bias ( or any other parameters ) are calculated using the loss function ( also the objective function ). For the changing the number of output nodes, you have remove some the connections coming from the previous layer which is $L_N$, suppose. But during training, the parameters of $L_{N-1}$ were optimized using the gradients of the activation functions of $L_N$. Hence, removing output nodes at inference time would affect all the previous layers which in turn will affect the final output ( of the remaining nodes ). Instead, I would suggest, to construct a neural network which predicts all the possible classes. Train this NN on various samples wherein you'll require the predictions of some desired nodes. Let me explain this by an example. Suppose, we have a neural network which predicts classes $C_1, C_2,C_3,...,C_N$. For a specific sample, we only need the predictions for classes $C_1$ and $C_2$ only, given a sample $X$. For this sample, we set the label as $[ y_1 , y_2 , 0 , 0 , 0 , ... , 0].$ We only set the values for classes $C_1$ and $C_2$ and rest all the classes are set to zero. Similarly, we assign labels for various samples and set only those values whose predictions are needed. Rest all values are set to zero. This is actually the common classification we do for images. For a given sample of a cat, I would like to predict an array of $[1,0]$ and for a sample of dog, $[0,1]$. Hence, we could apply this ideology instead of removing nodes and disturbing the NN.
H: how to adjusting already built ML predictive model How can I continue machine learning model after predicting results? What I mean by that is that I built a model for my 1 million records dataset, this model took around 1 day to get built. I extracted the model results using Python and now I have a (function) that I can feed it with my features and it gives me a prediction results but with time my dataset has become 1.5 million records. I do not want to redo the whole thing all over again from scratch. Is there any way I continue of top of thf first model I built ( the one with 1 million records) so the new model take less time to adjust it based on the new 0.5 million records compare to re building everything from scratch for 1.5 million records. P.S. I am asking for all algorithms, if there is anyway to do this for any algorithm that would be good to know which ones are these AI: This depends on your model type: Classical using ensemble/stacked models: If you are using classical machine learning, you could use your old model built on the previous 1 million records, and create a new model on the most recent 500k records and then combine the predictions in an ensemble or stacked approach. References for Ensemble and Stacking: https://machinelearningmastery.com/stacking-ensemble-machine-learning-with-python/ Video Reference: https://www.youtube.com/watch?v=Un9zObFjBH0 AI/NN using transfer learning: If your are using a NN (neural network) model, you can use the idea of transfer learning. Save your model built on the first 1 million records, then add it as an initial layer to a new NN for analyzing the new data. You can then save the new NN and use it in the next round. Reference: https://machinelearningmastery.com/transfer-learning-for-deep-learning/ Video Reference: https://www.youtube.com/watch?v=yofjFQddwHE General guidelines: If you need to do this updating process many times, you can create a new model on n number of records, drop the oldest data/model off once your new dataset reaches a minimum, and predict only on the last x number of models. n and x are adjusted based on your data, flexibility and need for real-time predictions. If the data is changing over time, then it would be better to only use the latest data, or weight the older data lower and the newer data higher. Here is a good definition of transfer learning: "Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task."
H: Choosing a distance metric and measuring similarity I am trying to decide which particular algorithm would be most appropriate for my use-case. I have dataset of about 1000 physical buildings in a city with feature space such as location, distance, year built and other characteristics etc. For each new data point, a building, I'd like to find 3-5 buildings that are most similar based on feature space comparison. I define similarity as weighted comparison of features. I'd like to iterate over entire feature space (w/ filter like location) and choose 3-5 most similar buildings matching the new building data point. Here's what my data looks like: I'm wondering what similarity measure would make sense? I work in python, so prefer a pythonic/sci-kit learn way of doing this. AI: It appears to me that what you're looking for in your use-case is not clustering - it's a distance metric. When you get a new data point, you want to find the 3-5 most similar data points; there's no need for clustering for it. Calculate the distance from the new data point to each of the 'old' data points, and select the top 3-5. Now, which distance metric to pick? There are options. If you're using SKLearn, I'd look over this page for example of distance(/similarity) metrics. If your features are continuous, you can normalize them and use cosine similarity; Start with this, and see if it fits.
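A hedged sketch of the k-nearest-buildings lookup (the feature names, values and weights are invented): scale the features, optionally weight them to express the "weighted comparison", and query the 5 closest rows.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
buildings = pd.DataFrame({"lat":        rng.uniform(40.6, 40.9, 1000),
                          "lon":        rng.uniform(-74.1, -73.8, 1000),
                          "year_built": rng.integers(1900, 2020, 1000),
                          "sqft":       rng.uniform(1e3, 5e4, 1000)})

weights = np.array([2.0, 2.0, 1.0, 0.5])            # how much each feature matters
scaler = StandardScaler().fit(buildings)
X = scaler.transform(buildings) * weights            # weighted, standardized feature space

nn = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(X)
# Query with an existing row purely for illustration; a new building would be transformed the same way
new_building = scaler.transform(buildings.iloc[[0]]) * weights
dist, idx = nn.kneighbors(new_building)
print(buildings.iloc[idx[0]])                        # the 5 most similar buildings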
H: Feature selection inside Random Forest I understood random forest is building a model with multiple decision trees, Row sampling is based on bootstraping My question is how feature selection is happening for each tree ? Any help would be appreciated. AI: A random subset of features than using the best split logic as done in normal Tree The Random Forest algorithm introduces extra randomness when growing trees; instead of searching for the very best feature when splitting a node, it searches for the best feature among a random subset of features. The algorithm results in greater tree diversity, which (again) trades a higher bias for a lower variance, generally yielding an overall better model. $\hspace{3cm}$Image credit - SE( https://stats.stackexchange.com/a/438384/256691) - Ashish Anand Extra-trees - More randomness by getting the feature using a random Threshold instead of searching the best split. When you are growing a tree in a Random Forest, at each node only a random subset of the features is considered for splitting. It is possible to make trees even more random by also using random thresholds for each feature rather than searching for the best possible thresholds (like regular Decision Trees do) I used Extra-tree to explain the point to have a contrast. It helps in catching the point. Please also read the SE link of the image [Blockquote ref] - Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition
H: What undergraduate degree should I take to get into data science? I'm going to do an undergraduate degree next year. Data Science major is there but I was wondering if a statistics major with a minor in data science would be more valuable? Which degree would be more in demand? I am also planning on doing as many internships as possible to gain practical skills and experience. I am also contemplating the value of a masters degree, and whether it is worth the time and money, or if an undergrad+internships will suffice. I already know Python and I will learn R and SQL as well. Whether this is in university or otherwise (or both). I will want to specialise in data analysis using statistical analysis paired with artificial intelligence. EDIT: I’m in Australia and looking to go to the University of Melbourne (I’m pretty sure its #1 in Australia for computer science and math/statistics) AI: If you can manage it, I suggest focusing more on Statistics. Statistics can be a demanding field, and takes a lot of effort to attain a degree. Of course, a Data Scientist also needs several skills besides Statistics, such as Software, Data Engineering and Visualization, but Statistics knowledge/skills are the most rare to find, and therefore can be the most valuable. I worked the last 8 years in a data science department and found that when looking to hire a data scientist, it was often difficult to find someone with a solid statistical background. Most candidates had software, engineering, data and visualization skills. However, someone with a stronger statistics background would be hired over the other applicants, and usually with a higher salary. Internships are very valuable, they will help you get your foot in the door before you graduate. Some companies may even offer you a job during your internship if they like your work. Some of the bigger companies will pay for your continuing education as they will get a better employee. It is a good investment for both you and them. A Master's degree will also be beneficial, but may limit your chances to get a lower paying entry level job. Larger companies will usually prefer a Master's over a Bachelor's degree. Response to your followup question: I strongly suggest doing the internships, they can dramatically improve your chances to get a job when you graduate, especially with the companies you did the internship with. The choice between Masters with little experience vs Bachelors with several internships would be much more difficult to make. Overall, the better choice would be to get the Masters degree with a few internships along the way. Also, I suggest going with Statistics all the way, Bachelors and Masters. Several of the topics in Data Science will evolve over the 6+ years it takes to get the Masters, but the underlying Statistics will change very little. Maybe get a minor degree in Data Science or a Statistics degree with an emphasis in Data Science. You can also consider looking for a job in between the Bachelors degree and Masters, and if you find a good job, you can always get your Masters degree while you are working and earning money.
H: Pandas - Sum of multiple specific columns I created this script: import pandas as pd pd.set_option('display.min_rows', None) pd.set_option('display.max_columns', None) df = pd.read_excel('file.xlsx', sep=';', skiprows=6) df = df.drop(['Position', 'Swap'], axis=1) df.drop_duplicates(['Time']) df = df.groupby(['Symbol'])[['Profit', 'Volume', 'Commission']].sum() df['Comm. ratio'] = (df['Commission'] / df['Profit']) df['Comm. ratio'] = df['Comm. ratio'].round(2) df['Net profit'] = (df['Profit'] + df['Commission']) df.loc['Total'] = pd.Series(df['Commission'].sum(), index = ['Commission']) Symbol Profit Volume Commission Comm. ratio Net profit BA FB NVDA US500 USTEC Total -45 df.loc['Total'] = pd.Series(df['Commission'].sum(), index = ['Commission']) works on "commission" column, but I'd like to have a multiple column sum for "Profit, Commission, and Net profit" in the "Total" row. I couldn't make it work. Thanks! AI: df.loc['Total'] = pd.Series([df['Commission'].sum(),df['Profit'].sum(),df['Net profit'].sum()], index = ['Commission','Profit','Net profit'])
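If it helps, an arguably simpler way to build the same total row (a sketch based on the column names shown in the question) is to sum the three columns in one go; columns not listed, such as Comm. ratio, end up as NaN in the Total row, just like in the original one-liner.
df.loc['Total'] = df[['Profit', 'Commission', 'Net profit']].sum()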
H: How to run Neural Net on GPU without python frameworks? I coded a deep learning model from scratch in python(using numPy) without using any frameworks like keras or tensorflow. So far my model works fine but it runs on CPU. How should i modify my code so as to run it on GPU instead? AI: Most deep learning frameworks have APIs that are significantly similar to NumPy. I recommend you take a look at PyTorch as it will let you refactor your code reasonably intuitively to make use of your GPU via Cuda. Speaking as someone who has coded a neural network in NumPy, I would highly recommend learning a popular deep learning framework. It will be easier than learning to make your NumPy code run on a GPU (which is in any case a less useful thing to be able to do than to use NN frameworks that already exist).
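As a rough illustration of the suggestion above (a sketch with made-up tensor sizes, not your actual network): PyTorch tensors mirror NumPy arrays, and moving the computation to the GPU is mostly a matter of placing the tensors on the CUDA device.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# a NumPy-style forward pass for one dense layer, executed on the GPU when available
X = torch.randn(64, 100, device=device)                      # batch of 64 inputs with 100 features
W = torch.randn(100, 10, device=device, requires_grad=True)  # weights
b = torch.zeros(10, device=device, requires_grad=True)       # bias
out = torch.relu(X @ W + b)                                  # same maths as NumPy, but GPU-accelerated
loss = out.sum()
loss.backward()                                              # gradients for W and b computed automatically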
H: How can I count the number of matching zero elements between two numpy arrays? I have a function that returns the predicted accuracy of a time-series model. I have two equally-sized numpy arrays, one for the actual direction and one for the predicted direction. I'm classifying whether there is a change in the data's direction from the previous t-1 step. '1' stands for an increase from t-1, '0' stands for no change in direction, and '-1' stands for a decrease from t-1. I'm trying to compare the elements of both arrays to determine whether they contain the same data, so I can compute accuracy. I can match indexes and count the number of '1s' and '-1s' that match, but I cannot count the number of matching zeros. (It's kinda hard summing zeros). :-) Anyway, I've tried the numpy sum function specifying '0' as the argument for both arrays but it only returns an array of zeros and no count. I'm not trying to create a confusion matrix...the goal here is to create an accuracy score. I plan to take all the matching ones, negative ones and zeros and divide that by the total length to get an accuracy score. Thanks in advance. AI: You can compare the predictions with the expected results directly, using simple comparisons, in this case just ==. This returns boolean values - True or False - which you can sum up because True == 1 and False == 0. Here is an example for your case using some randomly generated dummy data: In [1]: import numpy as np In [2]: y = np.random.choice([-1, 0, 1], 10) In [3]: preds = np.random.choice([-1, 0, 1], 10) In [4]: y Out[4]: array([ 1, 1, 1, -1, 1, -1, -1, 1, 1, 0]) In [5]: preds Out[5]: array([ 0, -1, 1, 0, 1, 1, -1, 1, -1, 0]) The real part that checks where your predictions are correct is then done using two checks: where are the predictions equal to the ground truth i.e. where were you correct in predicting the direction, and where are the predictions equal to the direction you are interested in i.e. downwards (-1), no change (0) or upwards (1) This can be done as follows: In [6]: (y == preds) & (preds == 0) # & means we need both checks to be True Out[6]: array([False, False, False, False, False, False, False, False, False, True]) We can see only the final position is True after both these checks, because that is the only place where the prediction matched the ground truth and the direction was 0. You can then write a loop to check all values and do something with them: In [7]: n = len(preds) # the number of test samples (= 10 in my dummy example) In [8]: for direction in [-1, 0, 1]: score = sum((y == preds) & (preds == direction)) accuracy = score / n print(f"Direction {direction:>2}: {score}/{n} = {accuracy * 100:.1f}%") Which gives: Direction -1: 1/10 = 10.0% Direction 0: 1/10 = 10.0% Direction 1: 3/10 = 30.0%
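If all you need is the single overall accuracy described in the question, a one-line sketch on the same arrays is:
accuracy = (y == preds).mean()   # fraction of positions where predicted and actual direction agree, zeros included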
H: How to know if classification model is predicting 1 or 0 I have used logistic regression to predict whether customer is good(1) or bad(0). I got the accuracy .80 . How do i know whether model predicted 1 or 0 .Is it related to parameter of model1.predict_proba(X_test)[:,1] (the 1 in the end in square bracket). AI: You can find the model predictions here: model1.predict(X_test) array([0, 0, 0, ..., 0, 0, 0], dtype=int64) In a binary (good/bad or 1/0) model, predict_proba is the probability of 0 or 1 , the value that is calculated by your model, which is used along with a cutoff (which is 0.5 in the sklearn implementation and cannot be changed) to determine if the final prediction is 0 or 1 model1.predict_proba(X_test)[:,0] # probability the answer is 0 array([0.94009529, 0.96378774, 0.98951049, ..., 0.67607543, 0.97599932, 0.82838031]) model1.predict_proba(X_test)[:,1] # probability the answer is 1 array([0.05990471, 0.03621226, 0.01048951, ..., 0.32392457, 0.02400068, 0.17161969]) since this is a binary model, both should add up to one. The accuracy score is how many predictions were correct. (correct_pred_0 + correct_pred_1)/total_predictions If you remove the [:,1], you get the entire array: model1.predict_proba(X_test) #col 1 col 2 array([[0.94009529, 0.05990471], [0.96378774, 0.03621226], [0.98951049, 0.01048951], ..., [0.67607543, 0.32392457], [0.97599932, 0.02400068], [0.82838031, 0.17161969]]) First column is the probability for 0 and the second column is the probability for 1 you can check the order of the classes using this: model1.classes_, for binomial, the default is like this: array([0, 1], dtype=int64)
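If you ever want a different cutoff than the default, one common approach (a sketch using the same fitted model and test data as above) is to threshold the probabilities yourself instead of calling predict:
import numpy as np

proba_1 = model1.predict_proba(X_test)[:, 1]     # probability of class 1 for each customer
custom_pred = (proba_1 >= 0.3).astype(int)       # e.g. flag as "bad" already at 30% probability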
H: What does it mean when the shape of input images is (600,64,64,3)? While attempting an assignment, I found that shape of the input image was (600,64,64,3). I thought 3 stood for the number of channels but it's listed as the 4th dimension. What does this mean? This is in reference to convolutional neural networks. AI: The fourth dimension is because it is referring to either the full dataset(train/test) Or an individual batch. 600 - Number of images in the dataset or batch 64 x 64 - Size of each image 3 - Number of channels
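A quick way to convince yourself of this layout (a tiny sketch with a dummy array of the same shape):
import numpy as np

images = np.zeros((600, 64, 64, 3))   # 600 RGB images of 64 x 64 pixels
print(images.shape)                    # (600, 64, 64, 3)
print(images[0].shape)                 # (64, 64, 3) -> one image: height, width, channels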
H: How to transpose a dataframe with custom output? I have data like this: I want to transpose it like this in python: AI: df.pivot_table(values='Confirmed', index=['state','State_name'], columns='date', aggfunc='sum')
H: How are classification scores interpreted? I would like to know how to interpret classification scores (I am not sure whether the right word is score or probability, please correct me). For example, for a binary classification, positive values are labeled as 1 and negative ones as -1. Now, is it fair to say that an instance with a score of 10 is more likely to be predicted correctly than one with a score of 5, even though either prediction can still be wrong? Thanks. AI: If the scores have values that are higher than 1, I wouldn't call them probabilities. Probabilities should always be between 0 and 1. And indeed, the higher the score, the more likely an example is to be positive; this is the most natural interpretation of a score.
H: Conventions for dimensions of input and weight matrices in neural networks? I'm currently learning neural networks and I see conflicting descriptions of the dimensions of weight and input matrices on the internet. I just wanted to know if there is some convention which more people use than the other. I currently define my input matrix X with the dimensions: (m x n) Where m is the number of samples and n is the number of features. And I define my weight matrices with the dimensions: (a x b) Where a is the number of neurons in the layer and b is the number of neurons in the last layer. Is that conventional or should I change something? AI: I would not say there is such a convention for it per se (if anyone has anything to comment on this, I would also like to know). I think, to make it clearer how the layer's input x interacts with the weights W, it might be better to define the dimensions as follows: x: (m x n) W: (n x k) bias term b: (k) m remains the number of examples, n represents the number of input features and k represents the number of neurons in the layer. With these shapes we compute the output of the layer as y = xW + b. Therefore, the resulting output matrix will be (m x k)
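A quick shape check of that convention (a small sketch with made-up sizes):
import numpy as np

m, n, k = 32, 10, 5                    # 32 examples, 10 input features, 5 neurons
x = np.random.randn(m, n)
W = np.random.randn(n, k)
b = np.random.randn(k)
y = x @ W + b                          # broadcasting adds b to every row
print(y.shape)                         # (32, 5) -> (m x k)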
H: SMOTE oversampling for class imbalanced dataset introduces bias in final distribution I have a problem statement where percentage of goods (denoted by 0) is 95%, and for bads (denoted by 1) it is 5% only. One way is to do under sampling of goods so that model understands the patterns properly for both the segment. But going with under sampling is leading to high loss of data which will directly lower down my model performance. Hence I have opted for over sampling of bads, but over sampling has its own problem too: Check this code snippet: from imblearn.over_sampling import SMOTE sm = SMOTE(random_state = 33) x = train_data.drop(['target'], axis = 1) y = train_data[['target']] x_new, y_new = sm.fit_sample(x, y) y.target.value_counts() # 0 -> 26454 1-> 2499 y_new.target.value_counts() # 0 -> 26454 1-> 26454 after oversampling, I get my equal no. of goods and bads, but the problem is that variable distribution is getting affected. For e.g. I have 'age' variable, in case of good, the bucket wise variable distribution is 1 - 25 years - 20% 26 - 50 years - 35% 50+ years - 45% and distribution of bad is (Before OverSampling): 1 - 25 years - 50% 26 - 50 years - 30% 50+ years - 20% But post oversampling the distribution of bads is changing: 1 - 25 years - 40% 26 - 50 years - 35% 50+ years - 25% So now the distribution of this variable for Good Vs Bad is not that much effective as it was earlier (before oversampling). Is there any way that doing oversampling does not affect my variable distribution? AI: Class imbalance is a frequent problem in machine learning and techniques to balance the data usualy are of two flavors: undersampling the majority, oversampling the minority or both. One can always partition the data according to some variables and separately oversample each partition so as to maintain some measure (eg given data distribution). In the same way that separate oversampling can be achieved for only $1$ variable, in the same way separate oversampling can be achieved for $n$ variables. Of course more complex but certainly doable. For example one takes all distinct combinations of variables (or ranges of variables for continous variables) and separately oversamples each such cluster in order to maintain the given data distribution. The above is a straightforward technique, although one should note that if minority class does not have enough samples there is no guaranty that the given data distribution reflects the (true) underlying data distribution (in other words it may not constitute a representative sample in statistical sense). So for these cases oversampling the whole data, without extra assumptions about underlying distribution, is a maximally unbiased method in the statistical sense. There is some research lately on hybrid and intelligent methods for (oversampling) class imbalance problems without introducing bias during the process. The following references will provide the relevant background: Cross-Validation for Imbalanced Datasets: Avoiding Overoptimistic and Overfitting Approaches, October 2018 Although cross-validation is a standard procedure for performance evaluation, its joint application with oversampling remains an open question for researchers farther from the imbalanced data topic. A frequent experimental flaw is the application of oversampling algorithms to the entire dataset, resulting in biased models and overly-optimistic estimates. 
We emphasize and distinguish overoptimism from overfitting, showing that the former is associated with the cross-validation procedure, while the latter is influenced by the chosen oversampling algorithm. Furthermore, we perform a thorough empirical comparison of well-established oversampling algorithms, supported by a data complexity analysis. The best oversampling techniques seem to possess three key characteristics: use of cleaning procedures, cluster-based example synthetization and adaptive weighting of minority examples, where Synthetic Minority Oversampling Technique coupled with Tomek Links and Majority Weighted Minority Oversampling Technique stand out, being capable of increasing the discriminative power of data Learning from Imbalanced Data, 9, SEPTEMBER 2009 With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of under represented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data. Data Sampling Methods to Deal With the Big Data Multi-Class Imbalance Problem, 14 February 2020 The class imbalance problem has been a hot topic in the machine learning community in recent years. Nowadays, in the time of big data and deep learning, this problem remains in force. Much work has been performed to deal to the class imbalance problem, the random sampling methods (over and under sampling) being the most widely employed approaches. Moreover, sophisticated sampling methods have been developed, including the Synthetic Minority Over-sampling Technique (SMOTE), and also they have been combined with cleaning techniques such as Editing Nearest Neighbor or Tomek’s Links (SMOTE+ENN and SMOTE+TL, respectively). In the big data context, it is noticeable that the class imbalance problem has been addressed by adaptation of traditional techniques, relatively ignoring intelligent approaches. Thus, the capabilities and possibilities of heuristic sampling methods on deep learning neural networks in big data domain are analyzed in this work, and the cleaning strategies are particularly analyzed. 
This study is developed on big data, multi-class imbalanced datasets obtained from hyper-spectral remote sensing images. The effectiveness of a hybrid approach on these datasets is analyzed, in which the dataset is cleaned by SMOTE followed by the training of an Artificial Neural Network (ANN) with those data, while the neural network output noise is processed with ENN to eliminate output noise; after that, the ANN is trained again with the resultant dataset. Obtained results suggest that best classification outcome is achieved when the cleaning strategies are applied on an ANN output instead of input feature space only. Consequently, the need to consider the classifier’s nature when the classical class imbalance approaches are adapted in deep learning and big data scenarios is clear. Hope these notes help.
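To make the "separately oversample each partition" idea above concrete, here is a rough sketch, heavy on assumptions: it presumes an age_bucket column already exists in train_data, that every bucket contains both classes and enough bad samples for SMOTE's k_neighbors, and it uses the newer fit_resample API. It grows the bads in every bucket by the same global factor, so their bucket-wise proportions are preserved while the classes end up roughly balanced overall.
import pandas as pd
from imblearn.over_sampling import SMOTE

factor = (train_data['target'] == 0).sum() / (train_data['target'] == 1).sum()
parts = []
for bucket, grp in train_data.groupby('age_bucket'):
    x_grp = grp.drop(['target', 'age_bucket'], axis=1)
    y_grp = grp['target']
    n_bad = int((y_grp == 1).sum() * factor)          # grow the bads by the same factor in every bucket
    sm = SMOTE(sampling_strategy={1: n_bad}, random_state=33, k_neighbors=3)
    x_res, y_res = sm.fit_resample(x_grp, y_grp)
    res = pd.DataFrame(x_res, columns=x_grp.columns)
    res['target'] = list(y_res)
    parts.append(res)
resampled = pd.concat(parts, ignore_index=True)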
H: Why are RNNs used in some computer vision problems? I am learning computer vision. When I was going through implementations of various computer vision projects, some OCR problems used GRU or LSTM, while some did not. I understand that RNNs are used only in problems where input data is a sequence, like audio or text. So, in kernels of MNIST on kaggle almost no kernel has used RNNs, while almost every repository for OCR on the IAM dataset on GitHub has used GRUs or LSTMs. Intuitively, written text in an image is a sequence, so RNNs were used. But, so is the written text in MNIST data. So, when exactly is it that RNNs (or GRUs or LSTMs) need to be used in computer vision and when don't they? AI: RNNs and CNNs are not mutually exclusive! It might seem that they are used to handle different problems, but it is important to note that some types of data can be processed by either architecture. For instance, RNNs use sequences as the input. It should be mentioned that sequences are not just limited to text or music. Sequences can also be videos, which are sequences of images. RNNs, such as LSTM, are used for cases where the data includes temporal properties, e.g., time series, and also where the data is context-sensitive, e.g., in the case of sentence completion, the function of memory provided by the feedback loops is critical for adequate performance. In addition, RNNs have been successfully applied in the following areas of computer vision: Image classification (one-to-one RNN): e.g., “Daytime picture” versus “Nighttime picture”. Image captioning (one-to-many RNN): giving a caption to an image based on what is being shown. For example, “Fox jumping over dog”. Handwriting recognition: please read this paper: https://arxiv.org/pdf/1902.10525.pdf Regarding CNNs, here are some of their applications: Medical image analysis Image recognition Face detection Recognition systems Full-motion video analysis. It is important to know that CNNs are not capable of handling a variable-length input. Finally, using RNNs and CNNs together is possible and it could be the most advanced use of computer vision. For example, a hybrid RNN and CNN approach may be superior when the data is suitable for a CNN, but has temporal characteristics. Ref
H: How to visualize categorical data with numeric columns I have data like as shown below I would like to represent the above tabular data in a visual form. However, the below graph may not work because my real data as 50K unique drug names. So, is there any better way to represent this? Can you share some suggestions on how this can be represented? AI: Clearly there's no way to have the names of the drugs. Assuming the relation between the two columns is important, a scatter plot with units prescribed as X and number of patients as Y might work. You could even add the name of the drug for a few isolated points. Transparency/opacity can be used to show the dense areas. In case the relation between the columns is not important, you could just plot the two distributions (histograms) with different colours on the same graph.
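A minimal sketch of that scatter plot (the dataframe df and the column names below are assumed from the question's description and are only placeholders):
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df['units_prescribed'], df['number_of_patients'], alpha=0.3)  # transparency reveals dense areas
# label only a handful of extreme points so the plot stays readable
top = df.nlargest(5, 'number_of_patients')
for _, row in top.iterrows():
    ax.annotate(row['drug_name'], (row['units_prescribed'], row['number_of_patients']))
ax.set_xlabel('Units prescribed')
ax.set_ylabel('Number of patients')
plt.show()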
H: Why isn't the neural network updated after every example in the dataset Why is the neural network updated only after the whole batch passes? AI: Weights get updated based on the number of examples you feed in a batch. This is because a full pass (forward and backward) of matrix computations has to be completed over the batch you chose before the weights can be updated via back-propagation and training proceeds to the next batch. Moreover, if you use stochastic gradient descent, where one example is processed at a time, your weights will be updated after every single example.
H: Appropriate objective function and evaluation metric when I DO care about outliers? I am reading these two pages: xgboost documentation Post on evaluation metrics I have a dataset where I am trying to predict future spend at the user level. A lot of our spend comes from large spenders, outliers. So, we care about them. I am using XGBoost. I have tried xgboost with objective reg:squarederror. This tended to underpredict a little. I then tried with reg:squaredlogerror and this resulted in predictions that under predict by much more than just using squarederror. I have tried tuning with several differing hyper parameter combinations but none made as big a difference as changing the objective. So, I'm dwelling on the objective function and trying to understand if there's another one out there that would be worth a shot? On the xgboost docs above, some of the other regression objective options are reg:pseudohubererror as well as count:poisson. There is no option, that I can see, for just MAE. If using an objective function less susceptible to outliers with rmsle took me further away from accuracy whereas rmse took me closer, would using MAE potentially be worth a shot? In this dataset, outliers are more important, but so are regular users. What would be a good objective and evaluation metric? Is MAE worth trying? If so, how? Looking at the docs above, I cannot see MAE as an option under regression parameters. AI: There are several things you can try: Use quartic error, $(y - \hat{y})^4$, instead of quadratic error. This is going to penalize big errors a lot, way more than MSE. The issue is that this is not implemented in xgboost, and you would need to develop a custom loss. If your target is always positive, you can use the target as training weights. This will give more weight to the outliers. If it is not always positive, you can use the absolute value of the target as weights. If using the target values directly puts too much weight on the outliers, you might want to transform it (e.g. using the log or square root), and if you have samples whose target value is zero, you might want to add some epsilon to all the weights. Note that xgboost can be easily trained using weights. Try to predict the quantile of the training distribution, then transform your predictions using the training cumulative probability function.
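For the first suggestion, a custom objective in xgboost only needs the gradient and hessian of the loss with respect to the prediction. A minimal sketch of the quartic loss with the native API could look like this (X_train and y_train are placeholders for your own data):
import xgboost as xgb

def quartic_objective(preds, dtrain):
    # gradient and hessian of (pred - y)**4 with respect to pred
    residual = preds - dtrain.get_label()
    grad = 4.0 * residual ** 3
    hess = 12.0 * residual ** 2
    return grad, hess

dtrain = xgb.DMatrix(X_train, label=y_train)
booster = xgb.train({"eta": 0.05, "max_depth": 6}, dtrain, num_boost_round=500, obj=quartic_objective)
One caveat of this sketch: the hessian vanishes when the residual is zero, so adding a small constant to hess can make the optimisation more stable.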
H: Identify significant features in clustering results I'm a student in Data Analysis, working on a data clustering exercise. Two clusters have been identified based on a dataset with 40 features. To interpret and label these clusters, I'm wondering if there is a way to determine which features are the most determinant in the clustering output. For instance, I would imagine that I could take out one feature from the clustering and see how much it affects the output. However there are probably smarter ways. I would greatly appreciate if someone could point me in the right direction. Thanks! AI: A similar post appears on Cross-Validated, "Estimating the most important features in a k-means cluster partition". Quoting from that post: One way to quantify the usefulness of each feature (= variable = dimension), from the book Burns, Robert P., and Richard Burns. Business research methods and statistics using SPSS. Sage, 2008. (mirror), usefulness being defined by the features' discriminative power to tell clusters apart. We usually examine the means for each cluster on each dimension using ANOVA to assess how distinct our clusters are. Ideally, we would obtain significantly different means for most, if not all dimensions, used in the analysis. The magnitude of the F values performed on each dimension is an indication of how well the respective dimension discriminates between clusters. Another way would be to remove a specific feature and see how this impact internal quality indices. Unlike the first solution, you would have to redo the clustering for each feature (or set of features) you want to analyze. FYI: Can a useless feature negatively impact the clustering? Can the choice of the measurement units of the features impact the clustering? Why vector normalization can improve the accuracy of clustering and classification? What are the most commonly used ways to perform feature selection for k-means clustering? Furthermore, there is a paper on Feature Selection in Clustering Problems A novel approach to combining clustering and feature selection is presented. It implements a wrapper strategy for feature selection, in the sense that the features are directly selected by optimizing the discriminative power of the used partitioning algorithm. On the technical side, we present an efficient optimization algorithm with guaranteed local convergence property. The only free parameter of this method is selected by a resampling-based stability analysis. Experiments with real-world datasets demonstrate that our method is able to infer both meaningful partitions and meaningful subsets of features.
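A quick way to get those per-feature F values in Python (a sketch; it assumes you have the feature matrix X as a DataFrame and the cluster labels produced by your clustering):
import pandas as pd
from sklearn.feature_selection import f_classif

# X: DataFrame with the 40 features, labels: cluster assignment per row (0 or 1)
f_values, p_values = f_classif(X, labels)
ranking = pd.Series(f_values, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))   # features that best discriminate between the two clusters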
H: Find the best interpolation method for missing observations I have a database which has measurements of objects every hour of every day. However, some data is missing and I don't have measurements for all the hours. In order to get over this challenge I have used different interpolation methods to create this missing data (with pandas). So now I have several databases built with those interpolation methods, and I need only one. My question is: how can I determine which interpolation method is the best? I have researched the internet but mainly found explanations about how to interpolate data, not how to choose the best method and how to visualize it. AI: The most basic method that springs to mind is splitting off a test set: Take the data where you have recorded all variables which you might need to extrapolate in another set, and split off a percentage of that and "mask" or hide the variable you wish to interpolate in this split (maybe using the data from the other part of the split if you're using some sort of trained interpolation). Compare the results of the different interpolation methods you are using with the actual values (that you've taken out) on a metric that suits your purpose for the data best (e.g. mean squared error, mean absolute error, logistic loss, or maybe even the outcome of some machine learning method trained on the dataset). That way, you'll find the interpolation method that best suits your data + problem. One thing to keep in mind is that your masking should follow the same (if any) patterns that your actual missing data has: e.g. if it only happens on certain time periods, your masking method should try to follow that pattern if possible.
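A small sketch of that idea with pandas (it assumes your measurements are in a Series s with a DatetimeIndex; the mask here is random, which is only appropriate if your real gaps are also essentially random; "spline" and "nearest" need scipy installed):
import numpy as np
import pandas as pd

complete = s.dropna()                                   # keep only the fully observed hours
mask = complete.sample(frac=0.1, random_state=0).index  # hide 10% of them
held_out = complete.loc[mask]
masked = complete.copy()
masked.loc[mask] = np.nan

for method in ["linear", "time", "nearest", "spline"]:
    kwargs = {"order": 2} if method == "spline" else {}
    filled = masked.interpolate(method=method, **kwargs)
    mae = (filled.loc[mask] - held_out).abs().mean()
    print(f"{method:>8}: MAE = {mae:.3f}")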
H: How to determine sample rate of a time series dataset? I have a dataset of magnetometer sensor readings which looks like: TimeStamp X Y Z 1.59408E+12 -22.619999 28.8 -22.14 1.59408E+12 -22.5 29.039999 -22.08 1.59408E+12 -22.32 29.039999 -21.779999 1.59408E+12 -22.38 29.16 -21.6 . . . And so on The timestamp is in milliseconds where 1.59408E+12, 1.59408E+12, 1.59408E+12 is 1594076006983, 1594076006994, 1594076007004 and so on. So what will be the sampling rate/frequency of the data? AI: General Explanation Generally speaking, the frequency of the data is the difference between consecutive time-stamps. If all is well, that difference will be constant across your time-series; in this case, this difference is the frequency of your data. In other cases, it might be a bit more complicated - for example, you can have missing data, or data where some timestamps are shifted a bit forward or backward. In which case, you might want to 'smooth out' those issues, for example by taking the median of such differences (if some samples come a bit too early and some come a bit too late), or the mode (if some samples are missing, meaning that most will come at the right time and some will be too late by an exact multiple of the real frequency). Python (pandas) Example Here's an example in Python, using the pandas library: let's make up a time series with missing values. Its frequency is 5 minutes, but about 5% of the samples are missing (its data is random integers between 1 and 20, and the 1's have been dropped) import pandas as pd import random index = pd.date_range(start='2020-01-01 00:00:00', end='2020-04-01 00:00:00', freq='5T') s = pd.Series(index=index, dtype="float64") s = s.apply(lambda x: random.randint(1,20)) s=s[s.ne(1)] Now let's look at the time-difference between existing (not dropped) samples: s.index.to_series().diff().value_counts() yields 00:05:00 23629 00:10:00 1191 00:15:00 63 00:20:00 2 i.e. the vast majority of samples are really 5 minutes apart, but not all of them. We'll take the actual frequency by taking the median of those differences: s.index.to_series().diff().median() yields Timedelta('0 days 00:05:00')
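Applied to the question's millisecond timestamps (a sketch assuming they sit in a dataframe column named TimeStamp):
ts = pd.to_datetime(df['TimeStamp'], unit='ms')
print(ts.diff().median())   # with the three example values shown, consecutive differences are about 10-11 ms
That corresponds to a sampling rate on the order of 100 Hz.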
H: How many ways are there to check model overfitting? I am running xgboost on a regression classification problem where the model is predicting a score of how likely a gene is to cause a disease from 0-1. I try to avoid overfitting in all the ways I can think of and the mean output of nested cross-validation is r2 0.88, I'm not sure if I can trust this or if there are other ways I can see if this is overfitting. The output r2 on just training and testing non-nested is: Train r2: 0.971 Test r2: 0.868. So far I: Remove features with a correlation >0.9 and remove any features with >50% missing data (this is hard to strengthen, a lot of genetic features simply have missing data for a lot of under studied genes in biology) Have no imputation to avoid imputation bias, and since xgboost accepts missing data. Scale features with MinMaxScaler() in scikit-learn - recommended as a good starting point and most features don't have a normal distribution Compare 2 feature selection methods (one using features xgboost deems important from SHAP values and one using Boruta, both give 0.87-0.88 r2 on average of the 10 nested CV k-folds and only remove 3-4 out of 57 features) Use nested kfold cross validation with 10 kfolds The only other area I'm aware of that I haven't really explored is projection techniques. I am not sure which method would be best for this (my features are all numeric but mixed continuous or discrete data types) such as between UMAP, PCA or partial least squares. Are there any other ways I can investigate overfitting? I have a biology background so any resources on this would be useful and any help appreciated. I have also more manually removed some minority example genes before training (e.g. removed training genes with a 0.9 score which make up only about 1/8 of the training dataset) to give the trained model to predict and view how the model generalises to this 'new' hard to predict genes - gives them a 0.6-0.7 score when they are actually 0.9: y_pred =[0.69412696, 0.709764, 0.6366122] y_true = [0.9, 0.9, 0.9] r2_score(y_true, y_pred) #outputs 0.0 10-fold nested cv r2 results per fold: 'test_r2': array([0.8484691 , 0.86808136, 0.91821645, 0.93616375, 0.94435934, 0.82065733, 0.84856025, 0.8267642 , 0.84561417, 0.89567455] Edit: A few other things I've tried: I think I've misused classification here (and removed tag accordingly), I use regression models and I don't have labels and only continuous scores so I don't get true positives, false positives etc. to be able to do ROC. I'm not sure what other metrics are good or better than R2 for regression that I can use. I have tried applying imputation to compare other models (random forest, SVM, and logistic rgeression with elasticnet or lasso), all models perform notably lower than gradient boosting (0.59 average nested r2 is the highest with random forest) - but I was originally concerned with biased data from imputation, is imputation worth doing to counteract overfitting? I use GridSearch in scikit-learn for all my models with nested cross-validation, I should have included this information originally as I have been trying to always do this. I have a biology background, so not sure about best practices for machine learning, but from this I'm suspecting random forest is better and I should be trying to do a better parameter tuning than I currently do for it, and trusting that model's result on nested CV. Is this the best approach? 
Also not sure if how I tune my random forest is reasonable, currently I use: rfr = RandomForestRegressor(random_state=seed) rfr_params={'n_estimators':[100, 500, 1000], 'min_samples_split': [50, 100], 'min_samples_leaf': [50, 100],} AI: The direct way to check your model for overfitting is to compare its performance on a training set with its performance on a testing set; overfitting is when your train score is significantly above your cv score. According to your comments, your r2 score is 0.97 on the training set, and 0.86 on your testing set (or similarly, 0.88 cv score, mean across 10 folds). That's somewhat overfitting, but not extremely so; think if 0.88 is "good enough" for your requirements The r2 score is 1 - MSE of errors / variance of true values. In the example you showed, all of the three true values were the same; i.e. their variance is zero. The r2 score should've been a negative infinite, but apparently sklearn corrects this to 0; you can verify that changing y_true to [0.9, 0.9, 0.90001] changes your r2 score to a very large negative number (around -2*10**9). This is why checking r2 against a small sample is not a good idea; the mean of the small sample contains too much important information. You added that you want to know which parameters to tune in order to prevent over-fitting. In your edit to your question, you said you're using grid-search over n_estimators (3 options), min_samples_split (2 options) and min_sample_leaf (2 options). There are other parameters you can try, and in my experience max_depth is important to tune. This question on Stack Overflow and this question on Cross Validated deal with overfitting, and there are good options there. I'd add that if you're trying many options, then maybe you'd be better off doing using Bayesian Optimization (there's a package that functions well with SKLearn: https://scikit-optimize.github.io/stable/auto_examples/sklearn-gridsearchcv-replacement.html).
H: Does the performance of GBM methods profit from feature scaling? I know that feature scaling is an important pre-processing step for creating artificial neural network models. But what about Gradient Boosting Machines, such as LightGBM, XGBoost or CatBoost? Does their performance profit from feature scaling? If so, why and how? AI: Scaling doesn't affect the performance of any tree-based method, not for lightgbm, xgboost, catboost or even a decision tree. This post elaborates on the topic; mainly the issue is that decision trees split the feature space based on binary decisions like "is this feature bigger than this value?", and if you scale your data, the decisions might look different, as they are done on the scaled space, but the results should be the same. As an example, a decision tree should split the data in the same way if you change units (a particular case of scaling). Let's say you want to use the weight of a person to predict if someone is under 18 (as a binary classifier). If given the weight in grams, the decision tree might do something like: if weight < 5000 grams, then the person is under 18. If you change the units to kilograms, the decision tree will do: if weight < 5 kg, then the person is under 18. To sum up, the splits will be equivalent under any scaling or, in general, any increasing linear transformation.
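A tiny experiment to convince yourself (a sketch; any tree-based estimator should behave the same way, up to floating-point precision):
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model_raw = GradientBoostingRegressor(random_state=0).fit(X, y)
model_scaled = GradientBoostingRegressor(random_state=0).fit(X * 1000.0, y)  # same data, rescaled features

pred_raw = model_raw.predict(X)
pred_scaled = model_scaled.predict(X * 1000.0)
print(np.allclose(pred_raw, pred_scaled))   # should print True: the splits are equivalent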
H: List of numbers as classifier input I am trying to use my data to predict the classes of the input. My data are as the following: x1 = [0.2, 0.25, 0.15, 0.22] y = 1 x2 = [0.124, 0.224, 0.215, 0.095] y = 3 ... xn = [...] y = 2 The problem is that my data are just lists of numbers that do not have an order. I mean that x1 can be x1 = [0.2, 0.25, 0.15, 0.22] y = 1 or x1 = [0.25, 0.22, 0.2, 0.15] y = 1 or the numbers in the list to be in any other order. Is there anything that I can do, so I will be able to build a classifier? Thank you! AI: The simple option is to design your features so that they represent the distribution of the values: every feature $f_i$ represents a bin and its value for a particular instance is the frequency of the corresponding range for this instance. Example: let's consider 10 bins between 0 and 1, i.e. $f_1=[0,0.1), f_2=[0.1,0.2),..., f_{10}=[0.9,1]$: $x_1=[0.2, 0.25, 0.15, 0.22]$ is represented as $[0,1,3,0,0,0,0,0,0,0]$ $x_2 = [0.124, 0.224, 0.215, 0.095]$ is represented as $[1,1,2,0,0,0,0,0,0,0]$ ...
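The binning described in the answer maps directly onto numpy's histogram function; a short sketch reproducing the example representation (the 10 bins over [0, 1] are the same assumption as above):
import numpy as np

def to_features(x, n_bins=10):
    counts, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0))
    return counts   # order of the values in x does not matter

print(to_features([0.2, 0.25, 0.15, 0.22]))       # [0 1 3 0 0 0 0 0 0 0]
print(to_features([0.124, 0.224, 0.215, 0.095]))  # [1 1 2 0 0 0 0 0 0 0]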
H: How to input LSTM output to MLP with concatenate? I am having a training data set for a time-series dataset like below where my target variable is var1(t) which is the value of var 1 at time=t. import numpy as np import pandas as pd train_df = pd.DataFrame(np.random.randint(0,100,size=(100, 16))) train_df.columns=['var1(t-3)','var2(t-3)','var3(t-3)','var4(t-3)','var1(t-2)','var2(t-2)','var3(t-2)','var4(t-2)','var1(t-1)','var2(t-1)','var3(t-1)','var4(t-1)','var1(t)','var2(t)','var3(t)','var4(t)'] train_X=train_df.drop(['var1(t)'],axis=1) train_y=train_df[['var1(t)']] I am inputting the LSTM network with the past 3 timesteps (t-3) to (t-1) of all 4 variables and then feed the output of the LSTM with the current timestep value of the var2,var3,var4 with an MLP with functional API in Keras. So I prepared the inputs for the LSTM and MLP like below : #subset the 3 previous timesteps of the 4 variables for the time sries part train_X_LSTM=train_X[train_X.columns[:12]].reset_index(drop=True).values #target is always var1(t) train_y_LSTM=train_y.values #take the current timestep fatures which are var2,var3,var4 which are realised at t=t train_X_MLP=train_X[train_X.columns[-3:]].reset_index(drop=True).values #target is always var1(t) train_y_MLP=train_y.values Then I tried the below network : #lstm input shape lstm_input = Input(shape=(train_X_LSTM.shape[0],train_X_LSTM.shape[1])) #lstm units hidden1 = LSTM(10)(lstm_input) hidden2 = Dense(10, activation='relu')(hidden1) #lstm output which will be predicted var1 at t=t lstm_output = Dense(1, activation='sigmoid')(hidden2) #mlp input with additonal 3 variables at t=t mlp_input=Input(shape=(train_X_MLP.shape[0],train_X_MLP.shape[1])) #combine the lstm output which is predicted var1 at t=t and key in var2,var3,var4 at t=t x = concatenate([lstm_output, mlp_input], axis=-1) #mlp model output which is predicted var1 at t=t mlp_out = Dense(1, activation='relu')(x) #final output of combined model which is predicted var1 at t=t model = Model(inputs=[lstm_input, mlp_input],outputs=mlp_out) #compile the model model.compile(loss='mae', optimizer='adam') #fit the model model.fit(x_train, y_train, batch_size=64, epochs=10, validation_split=0.2) This throws the error of ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 1), (None, 100, 3)] which shows I am not combining them correctly. Any help is appreciated! AI: In Keras, there is no need to give the number of samples in your training data to your model. Also, the way you defined your LSTM means that you consider each data sample as a timestep for the LSTM, instead of your t-1, t-2 and t-3 values. So you can remove train_X_LSTM.shape[0] in your Input layers, and give X=[train_X_LSTM, train_X_MLP] and y=train_y_LSTM to model.fit so it matches what your model expects. 
#lstm input shape lstm_input = Input(shape=(train_X_LSTM.shape[1], 1)) #lstm units hidden1 = LSTM(10)(lstm_input) hidden2 = Dense(10, activation='relu')(hidden1) #lstm output which will be predicted var1 at t=t lstm_output = Dense(1, activation='sigmoid')(hidden2) #mlp input with additonal 3 variables at t=t mlp_input=Input(shape=(train_X_MLP.shape[1])) #combine the lstm output which is predicted var1 at t=t and key in var2,var3,var4 at t=t x = Concatenate()([lstm_output, mlp_input]) #mlp model output which is predicted var1 at t=t mlp_out = Dense(1, activation='relu')(x) #final output of combined model which is predicted var1 at t=t model = Model(inputs=[lstm_input, mlp_input],outputs=mlp_out) #compile the model model.compile(loss='mae', optimizer='adam') #fit the model model.fit([train_X_LSTM, train_X_MLP], train_y_LSTM, batch_size=64, epochs=10, validation_split=0.2) In addition, from my understanding, you want to input (var1, var2, var3, var4) values vector at timesteps t-3, t-2, t-1 to the LSTM. In that case, you should also reshape the data you feed to the LSTM train_X_LSTM=train_X_LSTM.reshape(-1, 3, 4) This way, you will feed a sequence of 3 vectors containing (var1, var2, var3, var4) at the corresponding past timesteps. Then, update the LSTM Input layer lstm_input = Input(shape=train_X_LSTM.shape[1:]) Full code import numpy as np import pandas as pd train_df = pd.DataFrame(np.random.randint(0,100,size=(100, 16))) train_df.columns=['var1(t-3)','var2(t-3)','var3(t-3)','var4(t-3)','var1(t-2)','var2(t-2)','var3(t-2)','var4(t-2)','var1(t-1)','var2(t-1)','var3(t-1)','var4(t-1)','var1(t)','var2(t)','var3(t)','var4(t)'] train_X=train_df.drop(['var1(t)'],axis=1) train_y=train_df[['var1(t)']] #subset the 3 previous timesteps of the 4 variables for the time sries part train_X_LSTM=train_X[train_X.columns[:12]].reset_index(drop=True).values train_X_LSTM=train_X_LSTM.reshape(-1, 3, 4) #target is always var1(t) train_y_LSTM=train_y.values #take the current timestep fatures which are var2,var3,var4 which are realised at t=t train_X_MLP=train_X[train_X.columns[-3:]].reset_index(drop=True).values #target is always var1(t) train_y_MLP=train_y.values from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate from tensorflow.keras.models import Model #lstm input shape lstm_input = Input(shape=train_X_LSTM.shape[1:]) #lstm units hidden1 = LSTM(10)(lstm_input) hidden2 = Dense(10, activation='relu')(hidden1) #lstm output which will be predicted var1 at t=t lstm_output = Dense(1, activation='sigmoid')(hidden2) #mlp input with additonal 3 variables at t=t mlp_input=Input(shape=(train_X_MLP.shape[1])) #combine the lstm output which is predicted var1 at t=t and key in var2,var3,var4 at t=t x = Concatenate()([lstm_output, mlp_input]) #mlp model output which is predicted var1 at t=t mlp_out = Dense(1, activation='relu')(x) #final output of combined model which is predicted var1 at t=t model = Model(inputs=[lstm_input, mlp_input],outputs=mlp_out) #compile the model model.compile(loss='mae', optimizer='adam') #fit the model model.fit([train_X_LSTM, train_X_MLP], train_y_LSTM, batch_size=64, epochs=10, validation_split=0.2)
H: What do `loss` and `accuracy` values mean? I'm using this: Python version: 3.7.7 (default, May 6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)] TensorFlow version: 2.1.0 Eager execution: True With this U-Net model: inputs = Input(shape=img_shape) conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", name='conv1_1')(inputs) conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", name='conv1_2')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool1')(conv1) conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv2_1')(pool1) conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv2_2')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool2')(conv2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv3_1')(pool2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv3_2')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool3')(conv3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv4_1')(pool3) conv4 = Conv2D(256, (4, 4), activation='relu', padding='same', data_format="channels_last", name='conv4_2')(conv4) pool4 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool4')(conv4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv5_1')(pool4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv5_2')(conv5) up_conv5 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv5')(conv5) ch, cw = get_crop_shape(conv4, up_conv5) crop_conv4 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv4')(conv4) up6 = concatenate([up_conv5, crop_conv4]) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv6_1')(up6) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv6_2')(conv6) up_conv6 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv6')(conv6) ch, cw = get_crop_shape(conv3, up_conv6) crop_conv3 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv3')(conv3) up7 = concatenate([up_conv6, crop_conv3]) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv7_1')(up7) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv7_2')(conv7) up_conv7 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv7')(conv7) ch, cw = get_crop_shape(conv2, up_conv7) crop_conv2 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv2')(conv2) up8 = concatenate([up_conv7, crop_conv2]) conv8 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv8_1')(up8) conv8 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv8_2')(conv8) up_conv8 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv8')(conv8) ch, cw = get_crop_shape(conv1, up_conv8) crop_conv1 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv1')(conv1) up9 = concatenate([up_conv8, crop_conv1]) conv9 = Conv2D(64, (3, 3), 
activation='relu', padding='same', data_format="channels_last", name='conv9_1')(up9) conv9 = Conv2D(64, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv9_2')(conv9) ch, cw = get_crop_shape(inputs, conv9) conv9 = ZeroPadding2D(padding=(ch, cw), data_format="channels_last", name='conv9_3')(conv9) conv10 = Conv2D(1, (1, 1), activation='sigmoid', data_format="channels_last", name='conv10_1')(conv9) model = Model(inputs=inputs, outputs=conv10) To compile the model I do: model.compile(tf.keras.optimizers.Adam(lr=(1e-4) * 2), loss=dice_coef_loss, metrics=['accuracy']) And I get this output while training: Epoch 1/2 5/5 [==============================] - 8s 2s/sample - loss: 0.9830 - accuracy: 0.1033 - val_loss: 1.0000 - val_accuracy: 0.9469 Epoch 2/2 5/5 [==============================] - 5s 1s/sample - loss: 1.0000 - accuracy: 0.9442 - val_loss: 0.9999 - val_accuracy: 0.9972 Train on 5 samples, validate on 5 samples I don't understand what loss and accuracy values mean (if they are percentage, etc.). For example, with the values: loss: 0.9830 - accuracy: 0.1033 Mean that there is a 98.3% of error and there is a 10.33% of success. And these values: loss: 1.0000 - accuracy: 0.9442 There is a 100% of error and an 94.42% of success. I think that the values for loss and accuracy could be [1.0, 0.0] (if I'm not wrong). What do loss and accuracy values mean? I mean that I don't know when my model it is getting better. AI: I don't understand what loss and accuracy values mean (if they are percentage, etc.). Loss is the difference between the feedforward output and the actual target. It is calculated using the function you passed for Loss in the compile method. In your case it is - loss=dice_coef_loss. It is calculated after each batch. Metrics is calculated on the metrics you pass to the metrics parm in the compile method. In your case, it is "Accuracy". It means the model is calculating the accuracy(correct prediction/total predictions) after every Batch and letting you know by printing. An improving metrics means the model is learning with every Batch If the model will learn the target, Loss will reduce. Needless to say, too much learning of the train data creates another issue -Overfitting loss: 0.9830 - accuracy: 0.1033 Loss value is 0.98 and Accuracy is 10.33% (Very poor) Loss is not a percentage, it the value of the output of the Loss function using y_true and y_pred.
H: Linear Regression loss function for Logistic Regression I was attending Andrew Ng's Machine Learning course on youtube, Lecture 6.4. He shows what the cost function would look like if we used the Linear Regression loss function (least squares) for logistic regression. I wanted to see such a graph myself, so I tried to plot the cost function J with the least squares loss for a logistic regression task. Here is my code import matplotlib.pyplot as plt import pandas as pd import numpy as np import math x = np.random.rand(10000) # x = np.array([0.1, 0.2, 0.7, 0.4]) y = np.round(np.random.rand(10000)) b = 1 J_list = [] w_list = [] for w in np.arange(-500.5, 500.5, 0.05): J = (1/10000)*np.sum((1/2)*np.square(((1/(1 + np.exp(-1*(w*x + b)))) - y))) J_list.append(J) w_list.append(w) df = pd.DataFrame() df['w'] = w_list df['J'] = J_list import seaborn as sns sns.lineplot(x='w', y='J', data=df) The output of lineplot is Note: w in my code is theta in Andrew Ng's lecture If anyone can help me spot my mistake, it would be really appreciated. AI: You have taken x and y from a random space, so y does not follow any rule/logic with respect to x, and the cost curve will not show the behaviour discussed in the lecture. Get real data, e.g. Iris (keep just 2 classes) or use sklearn's make_classification module. Figure out the approximate value of theta for a good model, then loop over values before and after it. Keep y_true as 0/1 probabilities (not other class labels).
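A sketch of the answer's suggestion using make_classification (the sample size, the w range and b = 0 are arbitrary choices of mine, not from the answer):
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=1, n_informative=1,
                           n_redundant=0, n_clusters_per_class=1, random_state=0)
x = X[:, 0]
b = 0
ws = np.arange(-10, 10, 0.05)
J = [np.mean(0.5 * (1 / (1 + np.exp(-(w * x + b))) - y) ** 2) for w in ws]  # squared-error cost of the sigmoid output
sns.lineplot(x='w', y='J', data=pd.DataFrame({'w': ws, 'J': J}))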
H: Dummy variable only for character value in a column (Neglecting float and integers) My dataset consists of 3000 rows and 50 columns, out of which one column (ESTIMATE_FAMILY_CONTRIBUTION) contains all numerical value(around 2000 different values like 20,30,32....) but got one value as String e.g. 'No_information'. When I create dummies(One-Hot Encoded) for the Feature (pd.get_dummies()), around 2000 new columns are created for the original column i.e. ESTIMATE_FAMILY_CONTRIBUTION. What I want is to have only 2 columns to be created, one for 'No_information' and other with all the numerical values. How do I do it? AI: You have 2000 different values available for a dataset of 3000. I don't think you should treat that as a Categorical column. Treat "No_information" as a NaN and impute using the best-suited strategy using the relation with other columns Edit post comment If you want to treat it as Categorical, I will suggest - Create "bins" of values e.g. <50- Very Low, 50-500 - Medium, etc. - Then create One-Hot encoded data - With this approach too, "No_information" should be treated as NaN. The reason being that when Info is not available it could have been anything i.e. Low, High, Medium. Treating it as 4th value will have lesser information. - Try different bins/approaches and see which produce the best result Dealing with High cardinality of Categorical feature - - Search internet/SE with "Encode feature with very high cardinality". This is a known challenge, you will get plenty of resources. - Try other encoding approaches. See these links - Beyond OHE Library kaggle post
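One way to implement both steps with pandas (a sketch; the bin edges and labels are arbitrary examples, and rows that were 'No_information' simply end up all-zero in the dummy columns, to be imputed or handled separately as suggested above):
import numpy as np
import pandas as pd

col = 'ESTIMATE_FAMILY_CONTRIBUTION'
# 'No_information' (and anything else non-numeric) becomes NaN
df[col] = pd.to_numeric(df[col], errors='coerce')

# optional: bin the numeric values before one-hot encoding
df[col + '_bucket'] = pd.cut(df[col], bins=[-np.inf, 50, 500, np.inf],
                             labels=['very_low', 'medium', 'high'])
dummies = pd.get_dummies(df[col + '_bucket'], prefix=col)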
H: Generative Adversarial Text to Image Synthesis Can anyone explain the meaning of this line: "Deep networks have been shown to learn representations in which interpolations between embedding pairs tend to be near the data manifold". Reference: Section 4.3 of the paper Generative Adversarial Text to Image Synthesis AI: There are several important concepts in this sentence so let's break them down. "Data manifold" This is a usual model we use in machine learning problem, that is to consider real data as a manifold. I suggest to you this article by Christopher Olah on the topic. "Interpolation between embedding pairs" One way of generating new data is to sample the embedding space learned by the neural network. For example, you can take two real data samples, compute their embedding, interpolate them to obtain an intermediate embedding, then see the output of your neural network when fed with this intermediate embedding. "Interpolations between embedding pairs tend to be near the data manifold" Following the previous step, the output of your neural network should be realistic. In mathematical terms, it should be near the real data manifold. This is the main point of generative models such as generative adversarial networks or variational autoencoders. They learn to fit some random distribution, usually a gaussian, to the real data distribution, and they learn to "convert" noise to real data and vice versa. This is often referred to as disentanglement. As explained in the first paper cited by the article after the sentence, deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. In other words, we can obtain deep embeddings that isolate the factors of variation of real data. Ideally, embeddings of human faces can isolate the axis that controls color of the hair, expression of the mouth, etc. (like TL-GAN for example). But these factors of variations are not always as easily explainable.
H: Where is the Backward function defined in PyTorch? This might sound a little basic, but while running the code below I wanted to see the source code of the backward function:
import torch.nn as nn
[...]
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
loss.backward()
So I went to the PyTorch GitHub and found the CrossEntropyLoss class, but without any backward function defined. Moving up, CrossEntropyLoss extends _WeightedLoss >> _Loss >> Module, and still nothing. So, where is the backward function defined?
AI: The backward function is the same for all types of layers. Look at line 155 here: https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/tensor.py
It will forward you to here: https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/autograd/__init__.py
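In other words, the loss tensor does not carry its own backward implementation; Tensor.backward simply delegates to the autograd engine, which replays the graph recorded during the forward pass. A minimal sketch of that equivalence (the shapes and values are made up):
import torch
import torch.nn as nn

output = torch.randn(4, 3, requires_grad=True)  # fake logits for 4 samples, 3 classes
target = torch.tensor([0, 2, 1, 0])             # fake integer labels

loss = nn.CrossEntropyLoss()(output, target)

# loss.backward() is equivalent to calling the autograd engine directly:
torch.autograd.backward(loss)
print(output.grad.shape)                        # gradients were propagated back to the leaf tensor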
H: Is there a method to apply trained weights to a model with the same input shape and model architecture but different output shape? I am developing a speaker identification model in Keras, and I have saved the weights from a trained custom model. Now, I am looking to use the trained weights to fine-tune the model on a new dataset, but I am having trouble since the new dataset contains a different number of speakers than the first, so the new output shape will be different from the original. Here's the code that I am using to create and evaluate the model:
# Create Model
def createModel(model_input, model_output, first_session=True):
    # Define Input Layer
    inputs = model_input

    # Define First Conv2D Layer
    conv = Conv2D(filters=32, kernel_size=(5, 5), activation='relu', padding='same', strides=3)(inputs)
    conv = Conv2D(filters=32, kernel_size=(5, 5), activation='relu', padding='same', strides=3)(conv)
    conv = MaxPooling2D(pool_size=(3, 3), padding='same')(conv)
    conv = Dropout(0.3)(conv)

    # Define Second Conv2D Layer
    conv = Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same', strides=3)(conv)
    conv = Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same', strides=3)(conv)
    conv = MaxPooling2D(pool_size=(3, 3), padding='same')(conv)
    conv = Dropout(0.3)(conv)

    # Define Third Conv2D Layer
    conv = Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same', strides=3)(conv)
    conv = Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same', strides=3)(conv)
    conv = MaxPooling2D(pool_size=(3, 3), padding='same')(conv)
    conv = Dropout(0.3)(conv)

    # Define Flatten Layer
    conv = Flatten()(conv)

    # Define First Dense Layer
    conv = Dense(256, activation='relu')(conv)
    conv = Dropout(0.2)(conv)

    # Define Second Dense Layer
    conv = Dense(128, activation='relu')(conv)
    conv = Dropout(0.2)(conv)

    # Define Output Layer
    outputs = Dense(model_output, activation='softmax')(conv)

    # Create Model
    model = Model(inputs, outputs)
    model.summary()

    if first_session != True:
        model.load_weights('SI_ideal_weights_simple.hdf5')

    return model

# Train Model
def evaluateModel(x_train, x_val, y_train, y_val, num_classes, first_session=True):
    # Model Parameters
    verbose, epochs, batch_size, patience = 1, 100, 64, 10

    # Determine Input and Output Dimensions
    x = x_train[0].shape[0]  # Number of MFCC rows
    y = x_train[0].shape[1]  # Number of MFCC columns
    c = 1                    # Number of channels
    n_outputs = num_classes  # Number of outputs

    # Create Model
    inputs = Input(shape=(x, y, c))
    model = createModel(model_input=inputs, model_output=n_outputs, first_session=first_session)

    # Compile Model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

    # Callbacks
    es = EarlyStopping(monitor='val_loss', mode='min', verbose=verbose, patience=patience, min_delta=0.0001)  # Stop training at right time
    mc = ModelCheckpoint('SI_ideal_weights_simple.hdf5', monitor='val_accuracy', verbose=verbose, save_weights_only=True, save_best_only=True, mode='max')  # Save best model after each epoch
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=patience//2, min_lr=1e-3)  # Reduce learning rate once learning stagnates

    # Evaluate Model
    model.fit(x=x_train, y=y_train, epochs=epochs, callbacks=[es, mc, reduce_lr], batch_size=batch_size, validation_data=(x_val, y_val))
    accuracy = model.evaluate(x=x_train, y=y_train, batch_size=batch_size, verbose=verbose)

    return (accuracy[1], model)
Attempting to run the model on the second dataset throws the following error:
ValueError: Shapes
(128, 40) and (128, 15) are incompatible
This occurs at the output layer because of the difference in the number of speakers (i.e. from 40 to 15). The last layer contains 5160 trainable parameters, so I was trying to find a solution other than dropping it and adding an equivalent one with a new output shape, to retain accuracy if possible. (That being said, I am new to ML/Keras, and I can't say for certain that this would make a substantial difference.) Ultimately, my question is: how can I apply the weights from a custom trained convolutional neural net to a dataset with the same data shape but a different number of classes? Any help is greatly appreciated.
AI: Artificial neural networks are a black-box learning system. There is no known, or generally agreed upon, method that dictates what each weight represents or means for a given learning problem. Its internal representation of the problem is opaque to the architect. In fact, the final trained weights are very closely tied to the neural network architecture, and it is very logical to assume that they cannot be transferred to another arbitrary architecture of another neural network.
That being said, there is research related to re-purposing an already trained neural network for another, similar task. This is called Transfer Learning in the machine learning literature. Some resources to get you started:
A Gentle Introduction to Transfer Learning for Deep Learning:
Transfer learning only works in deep learning if the model features learned from the first task are general.
How transferable are features in deep neural networks?:
Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.
Deep Learning using Transfer Learning:
What to transfer — We need to understand what knowledge is common between the source and target task, and what knowledge can be transferred from the source task to the target task that will help improve the performance of the target task.
When to transfer (or when not to) — When the source and target domains are not related at all, we should not try to apply transfer learning. In such a scenario the performance will suffer. This type of transfer is called negative transfer. We should apply transfer learning only when the source and target domains/tasks are related.
How to transfer — Identifying different techniques to apply transfer learning when the source and target domain/task are related. We can use inductive transfer learning, transductive transfer learning or unsupervised transfer learning.
An overview of attempts to interpret deep-learning models, and a new suggestion, in Causality Learning: A New Perspective for Interpretable Machine Learning:
Recent years have witnessed the rapid growth of machine learning in a wide range of fields such as image recognition, text classification, credit scoring prediction, recommendation system, etc. In spite of their great performance in different sectors, researchers still concern about the mechanism under any machine learning (ML) techniques that are inherently black-box and becoming more complex to achieve higher accuracy. Therefore, interpreting machine learning model is currently a mainstream topic in the research community. However, the traditional interpretable machine learning focuses on the association instead of the causality. This paper provides an overview of causal analysis with the fundamental background and key concepts, and then summarizes most recent causal approaches for interpretable machine learning. The evaluation techniques for assessing method quality, and open problems in causal interpretability are also discussed in this paper.
Now, to answer your main question: given the points above, all you can do is try heuristics in a trial-and-error manner; there is no standard procedure. For example, you can set superfluous output weights to zero, or missing output weights to zero. One can try other linear (or non-linear) combinations to change the number of output weights so that the original neural network matches the new dataset. One can even train a neural net whose sole purpose is to adapt the output classes of the original network to the output classes of the new problem, and concatenate it with the original neural net. However, if one takes this approach, why not train a new convolutional network from scratch that directly classifies the new problem?
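In the specific Keras setup from the question, one common heuristic is to reuse everything except the final softmax layer: rebuild the same architecture with a 15-unit output layer and load the saved weights by layer name, skipping the layer whose shape no longer matches. This is only a sketch and assumes the layers are given the same explicit names in both models (with a new name for the output layer); by_name and skip_mismatch are standard arguments of tf.keras Model.load_weights, while x, y, c and createModel refer to the code in the question:
from tensorflow.keras.layers import Input

# Rebuild the architecture with the new number of speakers (15 instead of 40)
inputs = Input(shape=(x, y, c))
new_model = createModel(model_input=inputs, model_output=15, first_session=True)

# Load every layer whose name and weight shape match; skip the incompatible output layer
new_model.load_weights('SI_ideal_weights_simple.hdf5', by_name=True, skip_mismatch=True)

# Optionally freeze the reused layers first and train only the new output head
for layer in new_model.layers[:-1]:
    layer.trainable = False
new_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])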
H: What is the best way to pick the optimized configuration from this dataset? I have about 8000 configurations in an Excel sheet. Each configuration has four scores, as seen in the image below. I would like to choose the best solution: the one with the highest lighting level score, the lowest energy consumption score, the highest visual comfort score, the highest thermal comfort score, and the lowest CO2 level score. I'm new to data science but familiar with Python. I'm hoping that someone will be kind enough to explain a possible solution to rank these configurations from best to worst. Also, I'm interested to know the mathematical algorithm behind the possible ranking system. AI: I have not executed the code below, but I would do something like this:
df.loc[(df['Lighting Levela (Max)'] == max(df['Lighting Levela (Max)'])) &
       (df['Energy Consumption (Min)'] == min(df['Energy Consumption (Min)'])) &
       (df['Visual Comfort (Max)'] == max(df['Visual Comfort (Max)'])) &
       (df['Thermal Comfort (Max)'] == max(df['Thermal Comfort (Max)'])) &
       (df['CO2 levels (Min)'] == min(df['CO2 levels (Min)'])), :]
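If no single row happens to win on all five columns at once, a simple way to produce a full best-to-worst ranking (sketched here with the same assumed column names and equal weights; adjust both to your data) is to min-max normalize every score, flip the "lower is better" ones, and sort by the sum:
import pandas as pd

better_high = ['Lighting Levela (Max)', 'Visual Comfort (Max)', 'Thermal Comfort (Max)']
better_low = ['Energy Consumption (Min)', 'CO2 levels (Min)']

scores = pd.DataFrame(index=df.index)
for col in better_high + better_low:
    rng = df[col].max() - df[col].min()
    scores[col] = (df[col] - df[col].min()) / (rng if rng != 0 else 1)  # scale each criterion to [0, 1]
for col in better_low:
    scores[col] = 1.0 - scores[col]  # flip so that higher is always better

df['rank_score'] = scores.sum(axis=1)  # equal weights; weight the criteria if some matter more
ranking = df.sort_values('rank_score', ascending=False)  # best configurations first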
H: feature scaling xgbRegressor I read, for example in this answer (Does the performance of GBM methods profit from feature scaling?), that scaling doesn't affect the performance of any tree-based method, whether lightgbm, xgboost, catboost or even a plain decision tree. When I do feature scaling and compare the RMSE of an xgboost model with and without min-max scaling, I get a better RMSE value with feature scaling. Here is the code:
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as MSE
import math
from math import sqrt
import pandas as pd
import numpy as np
from xgboost import XGBRegressor
import xgboost as xgb

data = pd.read_excel(r'C:...path.xlsx')
X = data.drop(['colA'], axis=1)
y = data['colA']

scaler = MinMaxScaler()
scaler.fit(X)
minmax_scaled_X = scaler.transform(X)
minmax_scaled_X

y = np.array(y).reshape(-1, 1)
scaler.fit(y)
minmax_scaled_y = scaler.transform(y)

xtrain, xtest, ytrain, ytest = train_test_split(minmax_scaled_X, minmax_scaled_y, test_size=0.3, random_state=0, shuffle=True)

xg_reg = xgb.XGBRegressor(objective='reg:squarederror', colsample_bytree=0.7, learning_rate=0.05,
                          max_depth=8, min_child_weight=4, n_estimators=600, subsample=0.7)
xg_reg.fit(xtrain, ytrain)
preds = xg_reg.predict(xtest)

rmse = sqrt(MSE(ytest, preds))
print(rmse)
The result with min-max scaling is 0.003, while the RMSE without it is about 3.8. I did the same with a simple decision tree and always got a better result with min-max scaling. Where is my mistake? In other posts, like the link above, the answers say that scaling is not needed when using trees. Can I say that min-max scaling has a positive effect on the RMSE for my data? AI: You're also scaling $y$, so of course you are getting a lower error. That question was about scaling $X$. The same model will have very different error metrics when the units of $y$ are changed: if I multiply all $y$ values by 100, the error will be 100 times larger; if I divide all $y$ values by 100, the error will be divided by 100.
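To make the two RMSE values comparable, report the error in the original units of colA by undoing the target scaling before computing it. This continues from the code above and assumes scaler is still the object that was last fitted on y:
# Map the scaled predictions and targets back to the original units of colA
preds_orig = scaler.inverse_transform(preds.reshape(-1, 1))
ytest_orig = scaler.inverse_transform(ytest)

rmse_orig = sqrt(MSE(ytest_orig, preds_orig))
print(rmse_orig)  # now directly comparable to the RMSE of the unscaled model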
H: Train/Test dataset and model I would like to ask you how to work with train and test datasets. I have unlabelled data: short texts (max 100 characters) whose sentiment I need to understand. To do this, I am manually assigning labels (1, 0, -1). However, I have more than 2000 texts and I would like to find a way to do it automatically after labelling a small set. What I was thinking is to split the dataset into train and test from the beginning and work with the training dataset to label data. Unfortunately, I have not understood how to assign labels to the remaining texts, i.e. how to predict the sentiment of data in the test dataset. Could you please tell me what the next steps would be and, if you think it would help, suggest an example to follow? Many thanks AI: You want to manually label some cases and then extend that "manual labeling" to the rest of the data. This is a supervised learning exercise with prior manual labeling by you. Let's suppose you have partitioned a random, suitably sized training data set. Now you need to fit a classification model via the classical modeling pipeline and use this model to predict the classification/label in the rest of the data. So yes, this is easily possible; however, building a text classification model is non-trivial and you need to understand basic modeling. Here are the basic steps (please read up on any step you do not know how to do); a minimal sketch of the classical route follows below:
1. Split your data into a training set (you will model on this data), a test set and a target set.
2. Manually label your training and test set.
3. Choose the kind of classification algorithm you want to use. You can use classical ML models, but this involves heavily tokenizing and transforming your data into numerical components. You could also use more advanced deep learning techniques for text classification like BERT.
4. Create a suitable transformer to tidy and transform your data into the right format for your chosen algorithm.
5. Train the model on your manually labeled training data.
6. Evaluate and optimize your performance with the test data set.
7. Use the final model to predict labels in your target set.
Understand that the quality of the automatic labeling will only be as good as your manual labels.
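A minimal sketch of the classical route (steps 3-6), assuming your manually labeled texts and their -1/0/1 labels sit in the lists texts and labels, and the remaining unlabeled texts in unlabeled_texts; TF-IDF features with a logistic regression are just one reasonable default, not the only option:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# texts: list of manually labeled strings, labels: list of -1 / 0 / 1
x_train, x_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(x_train, y_train)

print(classification_report(y_test, clf.predict(x_test)))  # evaluate on the held-out labeled texts
predicted_labels = clf.predict(unlabeled_texts)            # label the remaining ~2000 texts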
H: Modeling price vs demand I have a dataset consisting of products, clients, price policy, discounts, quantities, and net sales. The task as put in words by the business is quantity vs price. I have noted a few observations from looking at the dataset:
Discounts: Discounts nullify the effect of any change in the price policy, so in the end the net sales don't follow this variation. I observe this for many client-product pairs.
Seasonality: Variation of quantity for client-product pairs simply follows a seasonality pattern and it's not driven by any of the time-series variables in the dataset (I should statistically verify this; for now I just did a visual check).
At this point I don't see any logic behind how discounts are decided for the clients, hence there is literally no effect on net sales vs price changes. How should I model this? Is this even a machine learning problem, given that there is simply no causal relation between the variables? If not price vs demand, then what other things can I propose to the business?
Edit:
1. Product-client scatter plots of UNITARY_NET_SALES vs QUANTITY: the first column of the plot shows products are demanded at the same quantities across varying net sales, so no price vs demand effect there.
2. Time-series plot for a particular product: Price_list and Discounts have the same behaviour, so whenever the business increases the prices they increase the discounts too, hence the overall effect on net sales is none. Quantity simply follows a seasonal pattern.
3. Describe on the columns: 75% of Quantity is less than 8 units!
Thank you!
AI: I've tangled with modeling pricing systems over the last two years and one of my key learnings applies here: available sales data is often a bad basis for straight-forward prediction tasks, and the reason for this is fairly simple. If you classify all prices (or transactions of a given product at a price x) into "Accepted" and "Not accepted" by the customer, you will realize that the data provided by your customer only contains instances of "Accepted" prices. Therefore a straight-forward modeling of y ~ x with y = demand and x = price is impossible because your y does not vary!
There are several ways around this, however. In my comment I mentioned discount being valuable information! You have already noticed that discounts do not depend on logical variables; they seem random. This isn't true! Discounts in most organizations are very, very flexible and often applied manually based on negotiations. This means they are a great indicator of our target y, "Acceptance"/"Non-acceptance". Consider this:
Discount = Demand x Undiscounted_Price
This means that large discounts indicate that the demand is low/negative for the undiscounted price, and low discounts indicate that demand is high. To truly discover this relation you might need to model codependent factors and then remove them by training new models on the residuals.
Edit: An important thing to add, especially for the B2B domain. Demand for a certain product is almost set in stone for a customer. Unlike consumers, companies do not buy surplus or refrain from purchase due to the prices. What they do is switch suppliers! This means that there is a really important unknown variable: "Customer Demand for product X".
You do not want to model this variable, but you need it to model what you actually want: Share of Wallet, i.e. the percentage of the fixed customer demand that was satisfied by your company, with the goal being to identify the price that will optimize that percentage. This is helpful to keep in mind because it constrains the performance of any model, as you never know whether the historic demand you have in your data is already 0% or 100% of the total demand and thus could not decrease/increase regardless of price.
H: why does my calibration curve for Platt and isotonic have fewer points than my uncalibrated model? I train a model using grid search, then I use the best parameters from this to define my chosen model.
model = XGBClassifier()
pipeline = make_pipeline(model)
kfolds = StratifiedKFold(3)
clf = GridSearchCV(pipeline, parameters, cv=kfolds.split(x_train, y_train), scoring='roc_auc', return_train_score=True)
clf.fit(x, y)
model = clf.best_estimator_
Using this model from grid search I then calibrate it and plot uncalibrated vs calibrated:
y_test_uncalibrated = model.predict_proba(x_test)[:, 1]
fraction_of_positives, mean_predicted_value = calibration_curve(y_test, y_test_uncalibrated, n_bins=10)
plt.plot(mean_predicted_value, fraction_of_positives, 's-', label='Uncalibrated')

clf_isotonic = CalibratedClassifierCV(model, cv='prefit', method='isotonic')
clf_isotonic.fit(x_train, y_train)
y_test_iso = clf_isotonic.predict_proba(x_test)[:, 1]
fraction_of_positives, mean_predicted_value = calibration_curve(y_test, y_test_iso, n_bins=10)
plt.plot(mean_predicted_value, fraction_of_positives, 's-', color='red', label='Calibrated (Isotonic)')
I do the same for Platt scaling as above; however, I get the following results (see the plot). I don't understand why there are fewer points now for isotonic and Platt. I don't feel as though I am doing anything wrong in my code. Am I making any mistakes? AI: The default strategy for calibration_curve is 'uniform', i.e. each of the bins has equal width. If, after calibration, your model makes no predictions inside a bin, there will be no point plotted for that range. You could change to strategy='quantile', which would guarantee 10 points plotted for each curve; you'll get many more of the red/yellow dots further to the left.
Also, ideally you should not be fitting the calibration on the same data that you trained the original model on. From the User Guide: An already fitted classifier can be calibrated by setting cv="prefit". In this case, the data is only used to fit the regressor. It is up to the user to make sure that the data used for fitting the classifier is disjoint from the data used for fitting the regressor.
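A small sketch of that change, reusing the variables from the question; only the strategy argument differs from the original call:
# Quantile bins put roughly the same number of predictions in every bin,
# so a point is plotted for each of the 10 bins of the calibrated model as well
fraction_of_positives, mean_predicted_value = calibration_curve(
    y_test, y_test_iso, n_bins=10, strategy='quantile')
plt.plot(mean_predicted_value, fraction_of_positives, 's-',
         color='red', label='Calibrated (Isotonic)')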
H: One Hot Encoding for any kind of dataset How can I do one-hot encoding for an unknown dataset, i.e. iterate over the columns, check each column's dtype and one-hot encode it based on its number of unique values? Also, how do I keep track of the new one-hot encoded data alongside the original dataset? AI: I would recommend using the OneHotEncoder from the category_encoders package and selecting the columns you want with pandas select_dtypes.
import numpy as np
import pandas as pd
from category_encoders.one_hot import OneHotEncoder

pd.options.display.float_format = '{:.2f}'.format  # to make legible

# make some data
df = pd.DataFrame({'a': ['aa', 'bb', 'cc'] * 2,
                   'b': [True, False] * 3,
                   'c': [1.0, 2.0] * 3})

cols_encoding = df.select_dtypes(include='object').columns
ohe = OneHotEncoder(cols=cols_encoding)
encoded = ohe.fit_transform(df)
Note that you can change the way unseen data is handled with handle_unknown: str; the options are 'error', 'return_nan', 'value', and 'indicator'. The default is 'value'. Warning: if 'indicator' is used, an extra column will be added if the transform matrix has unknown categories. This can cause unexpected changes in dimension in some cases.
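To also take the number of unique values into account, as the question asks, one possible extension of the snippet above is to encode only the low-cardinality object columns; the cut-off of 20 is an arbitrary assumption:
max_categories = 20  # arbitrary threshold; tune for your data
cols_encoding = [col for col in df.columns
                 if df[col].dtype == object and df[col].nunique() <= max_categories]

ohe = OneHotEncoder(cols=cols_encoding, handle_unknown='value')
encoded = ohe.fit_transform(df)
print(encoded.head())  # columns not in cols_encoding are passed through unchanged,
                       # so the rows still line up with the original df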
H: SparseCategoricalCrossentropy vs sparse_categorical_crossentropy What is the difference between SparseCategoricalCrossentropy and sparse_categorical_crossentropy?
SparseCategoricalCrossentropy: Computes the crossentropy loss between the labels and predictions.
sparse_categorical_crossentropy: Computes the sparse categorical crossentropy loss.
But I am still not sure. Any loss is always calculated between labels and predictions, so how are these two different? AI: SparseCategoricalCrossentropy is a class, so you first have to create an object and then use it to compute the loss:
scce = tf.keras.losses.SparseCategoricalCrossentropy()
scce(y_true, y_pred).numpy()
sparse_categorical_crossentropy, on the other hand, is merely a function that can be called directly to compute the loss:
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
If you pass the loss to the compile step of a model, pass the loss object (or the function itself), not the result of calling the function:
model.compile('sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy())
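A minimal sketch showing both forms on some made-up labels and predicted probabilities; note that the class applies a reduction (a single mean loss over the batch by default), while the function returns one loss value per sample:
import tensorflow as tf

y_true = [1, 2]                                # integer class labels
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]  # predicted class probabilities

# Class: instantiate once, then call like a function
scce = tf.keras.losses.SparseCategoricalCrossentropy()
print(scce(y_true, y_pred).numpy())            # scalar: mean loss over the batch

# Function: called directly, returns a per-sample loss tensor
print(tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred).numpy())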
H: How to perform a running (moving) standardization for feature scaling of a growing dataset? Let's say that there is a function $r$, $r_n = r(\tau_n)$, where $n$ denotes a so-called time-step of a system with an evolving state. Both $\rho$ and $\tau$ should equally influence $r$, and should therefore be scaled. The problem is that the sequence $(\tau_1, \tau_2, \dots, \tau_n)$ grows in time because $n$ grows. How do I perform a running standardization of $(\tau_1, \tau_2, \dots, \tau_n)$? The running mean is relatively simple to express: $\text{mean}(\tau)_{n+1} = \frac{1}{n+1}\left[\tau_{n+1} + n\,\text{mean}(\tau)_n\right]$, where $\text{mean}(\tau)_1 = \tau_1$. The standardization requires $\tilde{\tau}_n = \dfrac{\tau_n - \text{mean}(\tau)_n}{\sigma(\tau)_n}$, where $\sigma(\tau)_n = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^{n}\left[\tau_i - \text{mean}(\tau)_n\right]^2}$ (1) is the standard deviation of $(\tau_1, \tau_2, \dots, \tau_n)$. Question: is there an expression for a running standard deviation? Online I've only found links on Stack Overflow and Matlab functions, but I am not sure which algorithm is best suited for feature scaling. By running (moving) I mean not having to store $(\tau_1, \tau_2, \dots, \tau_n)$ to calculate (1), but instead updating it incrementally. AI: I think you want $$S_{n}=S_{n-1}+ (x_{n}-\mu_{n-1})(x_{n}-\mu_{n})$$ where $x_n$ is the new value, $\mu_n$ is the running mean, and $S_n$ is the running sum of squared deviations, so that the sample variance is $S_n/(n-1)$ and the standard deviation is its square root. This is Welford's online algorithm; see https://fanf2.user.srcf.net/hermes/doc/antiforgery/stats.pdf for the explanation and derivation.
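A minimal Python sketch of this update (Welford's algorithm), processing the samples one at a time without storing the full sequence:
class RunningStandardizer:
    """Incrementally standardizes a stream of values without storing them."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.s = 0.0  # running sum of squared deviations, S_n

    def update(self, x):
        self.n += 1
        old_mean = self.mean
        self.mean += (x - old_mean) / self.n        # running mean
        self.s += (x - old_mean) * (x - self.mean)  # Welford update for S_n

    def standardize(self, x):
        self.update(x)
        sigma = (self.s / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0
        return (x - self.mean) / sigma if sigma > 0 else 0.0

scaler = RunningStandardizer()
print([round(scaler.standardize(tau), 3) for tau in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]])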