H: Sampling Technique for mixed data type I am looking for a very specific sampling technique which pertains to a very large dataset with mixed data types, i.e., I have categorical as well as continuous variables and want a sample that represents the population of such data as closely as possible. It would be appreciated if anyone could help me out with this. Thanks! AI: This would need some data preprocessing: (1) Get the different main categories (e.g. bikes and cars). (2) If there are several mixes of categories, get the quantities of each configuration in order to get the right proportions of the samples (see 4). (3) Get a random sample within each category (10% bikes and 10% cars). (4) Ensure that those samples have the right proportions relative to the whole population (if there are 600 bikes and 100 cars, you should have 60 bikes and 10 cars). (5) Ensure that each sample's distribution shape is similar to its category's distribution (using all data). This step is crucial, because some categories' samples may not have enough data to represent the whole data set correctly. If you don't have enough data, increase the overall sampling ratio or redo a random sample. Example with python Seaborn: sns.displot( data=df, x="Price", col="Type", kind="hist", aspect=1.4, log_scale=10, bins=20 ) Source: https://towardsdatascience.com/10-examples-to-master-distribution-plots-with-python-seaborn-4ea2ceea906a (6) Merge all samples into a general sample set.
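For the proportional sampling steps above, a minimal pandas sketch (assuming a DataFrame df already exists with a categorical column "Type", e.g. bike/car, and an overall sampling ratio of 10%):

import pandas as pd

ratio = 0.10
# sample the same fraction within each category (pandas >= 1.1)
sample = df.groupby("Type").sample(frac=ratio, random_state=42)
# each category keeps roughly its original share of the population
print(df["Type"].value_counts(normalize=True))
print(sample["Type"].value_counts(normalize=True))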
H: How do you add negative class samples for binary classification? How do you prepare the negative dataset for binary classification? Let us say that I am building a classifier that has to classify whether the input image is of a car or not. I already have a dataset that consists of thousands of cars. But what about the negative class? Should I collect any images which do not contain cars and label them as negative? How will I include negative classes in my dataset? AI: I think it is important to think about the application of that classifier and draw the negative class images from a distribution similar to the one your application will see. For example, if you want to classify blog images, get the negative examples from blogs; if you want to classify Facebook photos, get Facebook photos. Note that this should also apply to your positive class (cars). If you are not able to get tons of photos from the distribution of your application, you should definitely get them at least for your validation and test sets.
H: Why keras Conv2D makes convolution over volume? I have a very basic question, but I couldn't get the idea about 2D convolution in Keras. If I would create a model like this : model = tf.keras.Sequential([tf.keras.layers.ZeroPadding2D(padding=(3,3), input_shape=(64,64,3)), tf.keras.layers.Conv2D(filters=1, kernel_size=(7,7))]) why the output shape is (None, 64, 64, 1) : Layer (type) Output Shape Param # ================================================================= zero_padding2d_63 (ZeroPaddi (None, 70, 70, 3) 0 _________________________________________________________________ conv2d_67 (Conv2D) (None, 64, 64, 1) 148 ================================================================= Total params: 148 Trainable params: 148 Non-trainable params: 0 and not (None, 64, 64, 3) with 148 parameters? As far as I understand, the 2D convolution is not a volume convolution, the window is a 2D-matrix, but not a 3D-cube, so could somebody please explain why do I have 64, 64, 1 instead of 64, 64, 3? AI: Your understanding is not correct. The 2D convolution is indeed a volume convolution. The filter is a tensor of dimensions 7x7x3. The depth of the output equals to the number of filters in the convolution; yours has 1 filter, so the depth of the output is 1.
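As a quick sanity check of the parameter count reported in the summary above: each filter spans the full input depth, so a single 7x7 filter over 3 channels has 7*7*3 weights plus one bias.

# parameters of Conv2D(filters=1, kernel_size=(7, 7)) on a 3-channel input
kernel_h, kernel_w, in_channels, n_filters = 7, 7, 3, 1
params = kernel_h * kernel_w * in_channels * n_filters + n_filters  # weights + one bias per filter
print(params)  # 148, matching the model summary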
H: What is the difference between a test with assumption of standard normal distribution and a t-test? And when to use each one? I am studying statistics on my own and I am not understanding when to use a one-sample t-test or a test assuming a standard normal distribution. As I understand, both are comparing a population mean with a sample mean, so when should I use each one? And what is the difference between these two tests? AI: Do you mean when to compare to a t-distribution as opposed to a standard normal distribution? That is because, in the former case, you have to estimate the standard deviation. Chugging through the theory gives you that you compare to $t$, not standard normal. The details of that should be covered in a statistics textbook. The intuition might not be. The way I think about it is that the standard deviation you estimate might be an underestimate. Thus, you use the thicker tails of the $t$-distribution to keep from getting a dishonestly small p-value.
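A compact way to see the difference, with $\bar{x}$ the sample mean, $\mu_0$ the hypothesized mean, $n$ the sample size, $\sigma$ the (known) population standard deviation and $s$ its sample estimate: $$z=\frac{\bar{x}-\mu_0}{\sigma/\sqrt{n}}\sim\mathcal{N}(0,1) \qquad\text{vs.}\qquad t=\frac{\bar{x}-\mu_0}{s/\sqrt{n}}\sim t_{n-1}.$$ The z-statistic applies when $\sigma$ is known (or the sample is so large that $s$ can be treated as exact); the t-statistic applies when $\sigma$ must be estimated from the sample.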
H: Boosting algorithms only built with decision trees? why? My understanding of boosting is just training models sequentially and learning from its previous mistakes. Can boosting algorithms be built with bunch of logistic regression? or logistic regression + decision trees? If yes, I would like to know some papers or books that covers this topic in-depth. AI: Boosting is not limited to tree-based models. Find some more information here: P. Bühlmann, T. Hothorn (2007), "Boosting Algorithms: Regularization, Prediction and Model Fitting", Statistical Science 22(4), p. 477-505. I implemented L2 linear regression boosting from Section 3.3 (p. 483) from the paper above in this R-code. You may replace the L2 model by a logit model and see how it works.
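As a rough illustration that the base learner need not be a tree, here is a minimal L2-boosting-style sketch in Python where each step fits a plain linear model to the current residuals (this is only an illustration of the idea, not the componentwise algorithm from the paper above):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

nu, n_steps = 0.1, 100              # shrinkage and number of boosting iterations
pred = np.full_like(y, y.mean())    # start from the mean prediction
for _ in range(n_steps):
    residuals = y - pred
    base = LinearRegression().fit(X, residuals)  # base learner: a plain linear model
    pred += nu * base.predict(X)                 # add a damped correction

print(np.mean((y - pred) ** 2))     # training MSE after boosting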
H: What is backpropagation, actually? I have a conceptual question due to terminology that bothers me. Is the backpropagation algorithm a neural network training algorithm, or is it just a recursive algorithm to calculate a Jacobian for a neural network? That Jacobian would then be used as part of the main training algorithm, e.g. steepest descent. Hence, is it a training algorithm or a numerical way to calculate a Jacobian matrix (partial derivatives of neural network outputs with respect to network parameters)? AI: No, I wouldn't consider backprop a training algorithm. Backpropagation is just a way to find the derivative of the loss function with respect to the network's parameters by using the chain rule. Computing a derivative doesn't train anything. What you do with this derivative in order to minimize the loss function is the training part. EDIT: I think it will depend on who you ask. Take for example this PyTorch tutorial. They say that "Backward propagation: In backprop, the NN adjusts its parameters proportionate to the error in its guess. It does this by traversing backwards from the output, collecting the derivatives of the error with respect to the parameters of the functions (gradients), and optimizing the parameters using gradient descent." I.e. the two steps loss.backward() and optim.step() together are what they call backpropagation. This is what I'd call the more engineering viewpoint, and I believe it is a semantic shift away from what I'd argue (see comments!) is actually backprop, which is just the loss.backward() step. The semantic drift of backprop meaning calculating the derivatives together with optimization makes sense in this context. Why would you call loss.backward() and then not call optim.step()? But, originally (and technically, the best kind of correct) backprop refers to just the computation of the derivatives, and I think you'll find that terminology more in math/theory contexts than in programming/engineering contexts.
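A tiny PyTorch sketch of that separation (toy model and random data, just to show that computing gradients and updating parameters are two distinct calls):

import torch

model = torch.nn.Linear(4, 1)
optim = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 4), torch.randn(8, 1)

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()    # backpropagation: only fills param.grad with dLoss/dParam
optim.step()       # optimization: uses those gradients to update the parameters
optim.zero_grad()  # reset gradients before the next iteration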
H: Normalization with learning/test dataset in [0,1] Say you split your data into two sets: training and test sets. You know that the inputs of your data are in [lower_bounds, upper_bounds]. Now, assume that you would like to do a min-max normalization on your inputs between $[0, 1]$. For the values of the max and the min, should you use the min/max of your learning dataset or the bounds [lower_bounds, upper_bounds]? In the same way, in order to normalize your test set, you should use the same bounds as the ones used for the learning dataset. If you use the min/max of your training set, some of your values in the test set can be found outside of $[0, 1]$, if, for instance, some values of the test set are greater than the max of the data in the learning dataset. Is it an issue? AI: You pretend the out-of-sample data set does not exist. Any feature manipulations should be based on the in-sample (training) data. If your in-sample data set is $\{1,2,3,4,5,6\}$ and your test set is $\{1,3,7\}$, you would do your normalization based on the in-sample set and accept that the $7$ in the test set will be normalized to a value exceeding $1$. Remember that the reason we use a test set is to mimic the real use case where we make predictions on totally unseen data, perhaps even data that do not yet exist (think of Siri or Alexa being expected to do speech recognition for speech signals that have yet to be uttered, perhaps by people who have yet to be born).
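A small sketch of that workflow with scikit-learn, using the toy sets from the answer; note how the 7 from the test set maps above 1:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1], [2], [3], [4], [5], [6]], dtype=float)
X_test = np.array([[1], [3], [7]], dtype=float)

scaler = MinMaxScaler().fit(X_train)       # fit on the training data only
print(scaler.transform(X_train).ravel())   # [0.  0.2 0.4 0.6 0.8 1. ]
print(scaler.transform(X_test).ravel())    # [0.  0.4 1.2] -> 7 exceeds 1, which is expected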
H: Cosine similarity between sentence embeddings is always positive I have a list of documents and I am looking for a) duplicates; b) documents that are very similar. To do so, I proceed as follows: Embed the documents using paraphrase-xlm-r-multilingual-v1. Calculate the cosine similarity between the vector embeddings (code below). All the cosine similarity values I get are between 0 and 1. Why is that? Shouldn't I also have negative cosine similarity values? The sentence embeddings have both positive and negative elements. num_docs = np.array(sentence_embedding).shape[0] cos_sim = np.zeros([num_docs, num_docs]) for ii in range(num_docs): for jj in range(num_docs): if ii != jj: cos_sim[ii, jj] = np.dot(sentence_embedding[ii], sentence_embedding[jj].T)/(norm(sentence_embedding[ii])*norm(sentence_embedding[jj])) AI: Disclaimer: This is a tentative explanation; it provides a possible answer, but it does not contain a proof. First of all, contrary to the added comments, cosine similarity is not always in the range $[0,1]$. This range is valid if the vectors contain only positive values, but if negative values are allowed, negative cosine similarity is possible. Take for example two vectors like $(-1,1)$ and $(1,-1)$, which give a cosine similarity of $-1$ since the two vectors are on the same line but point in opposite directions. Going back to the question, we should ask if it's possible to have positive and negative values in the vectors and still have only positive cosine similarity values. The answer is yes, it is possible if the embedding vectors are contained within one nappe of a conical surface with its apex at the origin (see Wikipedia: Conical surface). Basically, if you rotate the positive orthant you still get positive cosine similarities. Why would that happen with paraphrase-xlm-r-multilingual-v1? If you read the paper which describes the model, Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks - Reimers, Gurevych, in the training details section they explain that they used pooling layers on top of the BERT-like pre-trained models to obtain a fixed-size encoding. The default pooling mode is averaging. Suppose the layers' output vectors contain values spread over a range $[-a,a]$. Pooling those output vector values by averaging basically moves the output vectors closer together, producing smaller angles and thus pushing towards positive cosine similarity. Pooling with the max-tokens mode has a similar effect. This greatly increases the probability that the resulting embeddings have only positive similarities, even though positive and negative values are still allowed in the embeddings. As I said, I do not have a proof, but considering how pooling works, and when many outputs are pooled, this is a logical consequence. It does not mean that negative similarities are impossible. A way to verify this experimentally, if you have a good quantity of random sentences, would be to plot a histogram of their cosine similarities and visually inspect that there are few values near zero and a monotonic increase in frequency as we move towards $1$. This again would be just a hint. [Later edit] I have run some experiments myself to address the insights provided by @albus_c (thank you). First of all, I don't have sentences and in any case I don't use Python, so I generated artificial data (vectors with random values from a standard normal) in a matrix whose rows are the instance vectors to be compared via cosine similarity.
I noticed an interesting phenomenon: on average, the cosine similarity between random vectors covers a narrower range of absolute values as the length of the vectors increases. In the first set of plots we can see that for small vector sizes the empirical distribution covers the whole range $[-1,1]$, and this range shrinks as the vector size grows. This is important because the role of the pooling layers is to reduce the size of input vectors while retaining important information. As a consequence, if the pooling is aggressive, the range of cosine similarity will increase on average. This is what @albus_c noticed, I think. I also implemented a 2D pooling layer function over the random sample with average and max pooling. What I noticed, contrary to my intuition, is that averaging does not decrease the range of the cosine, but keeps it in the same range. Due to the previous effect, however (pooling shrinks vector sizes and increases the range of the cosine as a consequence), the final effect is that the cosine range is increased. In the case of max pooling, however, the cosine range is shrunk and moved drastically towards positive values, as can be seen in the second set of plots: in the upper left, the histogram of cosine similarities on random vectors of size $768$; in the upper right, cosine similarities for vectors of size $384$ for comparison. I applied a 2D pooling layer with size $2$ and stride $2$. In the lower left we have similarities after max pooling; we clearly observe the values moving towards $1$ in the positive range. In the lower right we have similarities after mean pooling; we notice the range has increased compared to the original (upper left) but is similar to the range on vectors of the same size (upper right). I did not work out an analytic explanation for that, these are only simulations. The normal shape that appears is due to how I generated the data; in real life it can look different, but I expect the tendencies to remain the same. I have also experimented with different pooling sizes. If the size of the pooling patch increases, the effect increases dramatically for max pooling while remaining the same for averaging. If the stride of the pooling is lower than the size of the patch (the patches overlap), a correlation appears between the resulting vectors and the cosine range shrinks more due to that correlation, for both max and average pooling. I think a proper analytical explanation can also be given, and if I have results and time I will update the answer again, but I do not expect it to change what we already see in the simulations.
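A minimal Python sketch of the kind of simulation described above (random Gaussian vectors, cosine similarities before and after a simple 1D max pooling; this is not the code used for the original figures, and exact numbers will differ):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))                      # 200 random "embeddings" of size 768

def pairwise_cosine(M):
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    sims = M @ M.T
    return sims[np.triu_indices(len(M), k=1)]        # upper triangle, excluding the diagonal

def max_pool(M, size=2):
    return M.reshape(M.shape[0], -1, size).max(axis=2)   # 1D max pooling over features

print(pairwise_cosine(X).mean())            # close to 0 for raw random vectors
print(pairwise_cosine(max_pool(X)).mean())  # shifted towards positive values after max pooling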
H: How to visualize a hierarchical clustering as a tree of labelled nodes in Python? The chapter "Normalized Information Distance", visualizes a hierarchical clustering as a tree of nodes with labels: Unfortunately I cannot find out how to replicate this visualization, maybe they did it in a manual way with Tikz? How can I achieve this effect automatically in Python, preferably with Scikit-Learn? I only found the Dendogram, which looks nothing like the effect I want to replicate: Result (thanks at @andy-w): model = AgglomerativeClustering(linkage="average", n_clusters=N_CLUSTERS, compute_distances=True, affinity="l1") model.fit(data) no_of_observations = np.arange(2, model.children_.shape[0]+2) linkage_matrix = np.column_stack([model.children_, model.distances_, no_of_observations]).astype(float) G = nx.Graph() n = len(linkage_matrix) for i in range(n): row = linkage_matrix[i] G.add_edge(label(int(row[0])),label(n+i+1),len=1+0.1*(math.log(1+row[2]))) G.add_edge(label(int(row[1])),label(n+i+1),len=1+0.1*(math.log(1+row[2]))) dot = nx.nx_pydot.to_pydot(G).to_string() dot = graphviz.Source(dot, engine='neato') dot.render(format='pdf',filename='tree') AI: This specific format to me looks like graphviz. So if you can extract the tree edges from your original object, then you can render it, example below (some roundabout to convert between different objects): import networkx as nx import pydot import graphviz # Just a part of your graph G = nx.Graph() ed = [('n3','n0'), ('n0','MusicHendrixA'), ('n0','MusicHendrixB'), ('n3','n2'), ('n2','n8'), ('n8','MusicBergA'), ('n8','MusicBergB') ] G.add_edges_from(ed) # Now the graphviz part dot = nx.nx_pydot.to_pydot(G).to_string() dot = graphviz.Source(dot, engine='neato') dot.render(format='png',filename='MusicTree')
H: Excluding data via confidence score: Is it a good idea? Let's say I have a model which has a binary classification task (two classes, 0 and 1), and therefore it outputs a number between 0 and 1; if it is greater than 0.5 we consider it to be class 1, and class 0 otherwise. Now let's say we remove any results in the test set whose output is between the two thresholds of 0.4 and 0.6 to make the model more confident. To be clearer, if the output is within that band, the model just prints "I'm not confident about this image". Is this approach a good idea in general? What if the task is binary classification on a medical dataset like COVID? And if so, has this approach been used in any recent research? AI: In general yes, the predicted probability can be used in this way. However, it's important to take into account that this probability is a prediction itself, i.e. the model could be wrong about it. For example, the model may predict a probability of 99% positive for an instance which is actually negative. As usual, it cannot be assumed that the model is correct: it has to be evaluated, in particular whether the instances tagged as "not confident" are actually more likely to be wrongly predicted or not. An important question in this strategy is how to select the bounds of the "not confident" interval; for example, arbitrarily choosing [0.4, 0.6] may not be optimal.
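In scikit-learn terms such a reject option is just a mask on the predicted probabilities; a minimal sketch (assuming an already fitted binary classifier clf with predict_proba, and arrays X_test, y_test):

import numpy as np

proba = clf.predict_proba(X_test)[:, 1]            # predicted probability of class 1
confident = (proba <= 0.4) | (proba >= 0.6)        # instances outside the "uncertain" band
preds = (proba >= 0.5).astype(int)

print("coverage:", confident.mean())               # fraction of instances we keep
print("accuracy on confident subset:",
      (preds[confident] == y_test[confident]).mean())   # must be evaluated, not assumed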
H: What are the possible applications of a Data Scientist in the design phase of an Aerospace or Railway Engineering industry? I have been trying to understand this for a long time, but this information proves to be incredibly elusive online. What are possible jobs that a pure Data Scientist, without much background knowledge, could be hired for in an Engineering team? I am aware, for instance, that the supply chain can involve some data science. I don't mean the Business Intelligence positions; I want to get more involved with the engineering team, working on the products themselves (especially Aerospace or Railway). By "engineering" I mean working in the design phase of the product itself, rather than with post-market features (such as maintenance prediction). Can a Data Scientist be useful in engineering, even without much domain knowledge? Is there anyone familiar with this world who could provide some insight? Thank you AI: That really depends on the area and the needs of your company; a data scientist can fit into anything that produces data (with good data collection instruments, of course). Are you talking about data scientists in the production side of the aerospace or railway industry? Have you heard about engineering statistics? This is a broad area and there are dozens of books about it. I know the engineering statistics area is used in chemical engineering, mechanical engineering (statistical thermodynamics, see the wiki), nuclear engineering and civil engineering, but there are applications in many more engineering fields. For example, for a beginner in statistics, take the last chapter of Schaum's Outline of Statistics, 6th Edition: Chapter 18, Statistical Process Control and Process Capability. This method is used for quality control, or as the book puts it: "18.1 GENERAL DISCUSSION OF CONTROL CHARTS Variation in any process is due to common causes or special causes. The natural variation that exists in materials, machinery, and people gives rise to common causes of variation. In industrial settings, special causes, also known as assignable causes, are due to excessive tool wear, a new operator, a change of materials, a new supplier, etc." This chapter focuses on charts, so skills in matplotlib, ggplot2 or d3.js will come in handy: GENERAL DISCUSSION OF CONTROL CHARTS, VARIABLES AND ATTRIBUTES CONTROL CHARTS, X-BAR AND R CHARTS, TESTS FOR SPECIAL CAUSES, PROCESS CAPABILITY, P- AND NP-CHARTS, OTHER CONTROL CHARTS (the chapter includes example charts). If you are a Data Scientist with a solid background in statistical models for the engineering area, that could help you a lot in the aerospace or railway production industry. Bibliography: Applied Statistics for Civil and Environmental Engineers, N. T. Kottegoda and R. Rosso; Statistics for Chemical and Process Engineers: A Modern Approach, Yuri A. W. Shardt; Statistical Thermodynamics: An Engineering Approach, 1st Edition, John W. Daily; Statistics for Nuclear Engineers and Scientists.
H: Calculating the dissimilarity between term frequency vectors Given that a document is an object represented by what is called a term frequency vector, how can we calculate the dissimilarity between term frequency vectors? AI: There are several ways to measure the relationship between vector representations in NLP, such as the cosine distance (you can apply it as a quick proof of concept) or the L2 distance, which quantify how close such vectors are in the vector space they lie in. Nevertheless, to associate geometric distance with semantic similarity, it is interesting to apply word embeddings, with which you get lower-dimensional vectors learned directly from your data.
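For instance, a quick proof of concept of the cosine distance between two raw term-frequency vectors (hypothetical toy documents):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_distances

docs = ["the cat sat on the mat", "the dog sat on the log"]
tf = CountVectorizer().fit_transform(docs)   # term-frequency vectors, one row per document
print(cosine_distances(tf))                  # 0 on the diagonal, dissimilarity off-diagonal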
H: How to create a representative small subset from a huge dataset, for local development? I have a time series problem and the dataset I'm using is rather huge, around 100GB. For local development I'm trying to subset this into a very small batch, around 50MB, just to make sure unit tests and some very streamlined "analytic" tests pass, my code is not a mess, and my model is actually trying to do something meaningful with this data. I know that I cannot create a very good "representative" small subset which can totally mimic the original, but I want to make sure I find many of my model's basic flaws with this data before training it on the huge dataset. Maybe having multiple different-sized batches for different scopes of tests is an option too; I don't have any preferences. What is the best strategy to create this subset? I think for data that is not sequential, unlike mine, random downsampling of the datapoints might be a good thing, but I don't know what a good practice is for time series data. Should I just choose a small frame of time as the new dataset? What about causality? How to sample according to class imbalance? These are the first questions that come to my mind, but feel free to expand on even more questions. Edit: What I am working on is this dataset. The dataset is quite large, and I want to effectively choose a subset from it. The task is to detect seizures. One option is to reduce the number of subjects, I think. But I am open to all options that you might suggest! AI: In your case, you first have to deal with the complexity of the biological data. I don't know the minimum sampling rate needed to detect epilepsy or any other brain behavior. I would recommend studying some articles to learn the best practices of EEG signal analysis, like this one: https://www.frontiersin.org/articles/10.3389/fneur.2020.00375/full Maybe there are good practices to reduce the data volume. In addition to that, you could start with 5 min of data before the seizure, as suggested in the document, for the same dog (e.g. dog 2). The first objective could be to detect which sensors are more significant in your case study, so that you can remove the less representative ones (if they exist). This is possible by doing a correlation study between sensors. If a specific sensor's signals don't show any correlation (values near 0) with the others in the same dog, it would probably mean that it is not related to the seizure event. Then, if you detect some correlations in specific sensors, you can start using multivariate models that could predict with more precision whether or not there will be a seizure. After building good models on several significant sensors and on a few dogs, I suppose you can extend the predictions using 1 hour of training data.
H: How to fix my CSV files? (ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required) I have tried to import two csv files into df1 and df2. Concatenated them to make df3. I tried to call the mutual_info_regression on them but I am getting a value error ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required. I have checked the dimensions of X, y, and discrete_features. They all seem okay. Since the code works with other csv files (I have tested), I think the problem is with my csv files and not the code. import numpy as np import pandas as pd df1 = pd.read_csv("WT_MDE.csv", index_col=0) df1["Interact"] = 1 df2 = pd.read_csv("M_MDE.csv", index_col=0) df2["Interact"] = 0 data = pd.concat([df1, df2]) X = data.copy() y = X.pop("Interact") discrete_features = X.dtypes == float from sklearn.feature_selection import mutual_info_regression def make_mi_scores(X, y, discrete_features): mi_scores = mutual_info_regression(X, y, discrete_features = discrete_features) mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns) mi_scores = mi_scores.sort_values(ascending=False) return mi_scores mi_scores = make_mi_scores(X, y, discrete_features) Google Drive Link to The CSV Files I would really appreciate if anyone could help. AI: The problem seems to be with the discrete_features flag inside mutual_info_regression. If you remove it completely (or set it to 'auto') it will work fine!
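Concretely, the call inside make_mi_scores would become something like the line below (or the argument can simply be dropped, since 'auto' is the default):

mi_scores = mutual_info_regression(X, y, discrete_features='auto')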
H: Cross-validation split for modelling data with time series behavior Background: I have a dataset that is generated every month (similar to card data that contains card demographics and transactions every month, where new accounts can be added in the middle of the data series). From this historical data, I need to build a classification model to predict a binary label for the next month. Question: Which cross-validation split type is better to get a fair model score assessment (unbiased and with low variance)? To make it clear, let's take 15 months of training data and say we need to tune the model's hyperparameters with a 5-fold cross-validation split. I have the two options below, but it is OK if you have others. 1. Time series with leave-one-out type fold 1 : training [1 2 3 4 5 6 7 8 9 10], test [11] fold 2 : training [1 2 3 4 5 6 7 8 9 10 11], test [12] fold 3 : training [1 2 3 4 5 6 7 8 9 10 11 12], test [13] fold 4 : training [1 2 3 4 5 6 7 8 9 10 11 12 13], test [14] fold 5 : training [1 2 3 4 5 6 7 8 9 10 11 12 13 14], test [15] 2. Time series with leave-rest-out type fold 1 : training [1 2 3 4 5 6 7 8 9 10], test [11 12 13 14 15] fold 2 : training [1 2 3 4 5 6 7 8 9 10 11], test [12 13 14 15] fold 3 : training [1 2 3 4 5 6 7 8 9 10 11 12], test [13 14 15] fold 4 : training [1 2 3 4 5 6 7 8 9 10 11 12 13], test [14 15] fold 5 : training [1 2 3 4 5 6 7 8 9 10 11 12 13 14], test [15] Thanks for your answer, I will much appreciate any response. AI: Since you want to build a binary classifier based on time-ordered tabular data, I see two possible approaches among others: (1) As you suggest, split your dataset into ordered train-test folds, so you reproduce the "real" situation of having, at each time interval, a historic dataset to train on and a test (and later evaluation) set; you can use the scikit-learn TimeSeriesSplit to get this type of split, which is similar to what you propose but always keeps the same test set size. (2) Reframe your dataset as a usual classification problem, where each sample row has some aggregated information (say, per client) like the mean, min, max... values of the client attributes, and a binary label; with this framing, apply a k-fold (10-fold is a frequent option) cross-validation strategy; you can also check the linked answer. By the way, your model should reach a good bias-variance trade-off, rather than a perfect "no bias" model.
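A short sketch of the first option with scikit-learn, assuming the data is grouped into 15 monthly chunks in chronological order:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(np.zeros((15, 1))):   # 15 monthly chunks, placeholder features
    print("train months:", train_idx + 1, "test months:", test_idx + 1)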
H: RANSAC and R2, why the r2 score is negative? I was experimenting with curve_fit, RANSAC and stuff trying to learn the basics and there is one thing I don´t understand. Why is R2 score negative here? import numpy as np import warnings import matplotlib.pyplot as plt from sklearn.metrics import r2_score from sklearn.base import BaseEstimator from sklearn.linear_model import RANSACRegressor from scipy.optimize import OptimizeWarning from scipy.optimize import curve_fit class LogarithmicRegression(BaseEstimator): def __init__(self, log_base=np.log): self.__log_base = log_base def __log_expr(self, x, a, b, c): with warnings.catch_warnings(): warnings.simplefilter("ignore", RuntimeWarning) return a * self.__log_base(x+c) + b def get_params(self, deep=False): # https://scikit-learn.org/stable/developers/develop.html#get-params-and-set-params return {"log_base": self.__log_base} def set_params(self, **parameters): for parameter, value in parameters.items(): setattr(self, parameter, value) return self def fit(self, X, y): self.coef, _ = curve_fit(self.__log_expr, X.flatten(), y, maxfev=10000, bounds=( (-np.inf, -np.inf, -np.inf),(np.inf, np.inf, np.inf) )) return self def predict(self, X): hypothesis = self.__log_expr(X, *self.coef) return hypothesis.flatten() def score(self, X_test, y_test): from sklearn.metrics import r2_score self.accuracy = r2_score(X_test, y_test) return self.accuracy np.random.seed(543) n_sample = 100 dataX = np.array(range(1, n_sample+1)) dataY = 2.5 * np.log(dataX) + 7 noise = np.random.normal(np.mean(dataY), 2, n_sample) add_noise = np.random.choice(a=[False, True], size=n_sample) for i in range(n_sample): if add_noise[i]: dataY[i] = noise[i] plt.style.use("dark_background") plt.rcParams["figure.figsize"] = (8,6) plt.grid(False) #plt.scatter(dataX, dataY, color='white') X = dataX.reshape(-1, 1) y = dataY ransac = RANSACRegressor(base_estimator=LogarithmicRegression(), min_samples=int(n_sample/4), residual_threshold=0.7) ransac.fit(X, y) inlier_mask = ransac.inlier_mask_ outlier_mask = np.logical_not(inlier_mask) plt.scatter(X[inlier_mask], y[inlier_mask], color='yellowgreen', marker='.', label='Inliers') plt.scatter(X[outlier_mask], y[outlier_mask], color='r', marker='.', label='Outliers') lineX = np.arange(X.min(), X.max())[:, np.newaxis] lineY = ransac.predict(lineX) print("Estimated coefficients", ransac.estimator_.coef) print("Accuracy", ransac.estimator_.accuracy) plt.plot(lineX, lineY, color='yellow', linewidth=2, label='RANSAC regressor') plt.show() AI: After reading the RANSAC source code, I think I missinterpred the role of the score function. This is what I had to do: def score(self, X_subset, y_subset): from sklearn.metrics import r2_score y_pred = self.predict(X_subset) self.accuracy = r2_score(y_pred, y_subset) return self.accuracy
H: How can word2vec or BERT be used for previously unseen words Is there any way to modify word2vec or BERT to extend finding out embeddings for words that were not in the training data? My data is extremely domain-specific and I don't really expect pre-trained models to work very well. I also don't have access to huge amounts of this data so cannot train word2vec on my own. I was thinking something like a combination of word2vec and the PMI matrix (i.e. concatenation of the 2 vector representations). Would this work, would anyone have any other suggestions, please? Thanks in advance! AI: BERT does not provide word-level representations, but subword representations. This implies that when an unseen word is presented to BERT, it will slice it into multiple subwords, even reaching character subwords if needed. That is how it deals with unseen words. Therefore, BERT can handle out-of-vocabulary words. Some other questions and answers in this site can help you with the implementation details of BERT's subword tokenization, e.g. this, this or this. On the other hand, word2vec is a static table of words and vectors, so it is just meant to represent words that are already in its vocabulary.
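A quick way to see this subword behaviour with the Hugging Face transformers library (the example word is arbitrary; any out-of-vocabulary domain term behaves similarly, and the exact pieces depend on the vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("electrophysiologically"))
# e.g. something like ['electro', '##phy', '##sio', '##logically'] -- never an unknown token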
H: How to get best data split from cross validation I have trained a Random Forest regressor which is giving me an RMSE score of 70.72. But when I tried the same model in cross_val_score with a cv of 10, it gave me an array that looks something like this: array([63.96728974, 60.43972474, 63.98455253, 61.69770344, 94.24656396, 59.93552448, 60.77507132, 54.20247545, 59.20367786, 61.59208032]) # min 54.20247545 This shows that the model has the potential to perform even better if the data gets split in a certain way. So, my question is: is there any way to find the best split which can achieve the optimal loss we saw in cross-validation? Sometimes I use a for loop to find the random_state value that gives the best split, but this is not memory efficient and sometimes does not work either. So it would be great if there were another alternative for this! AI: It would be a very bad idea to select the "optimal split". The goal of cross-validation is to evaluate the model more accurately by minimizing the effect of chance due to the splitting. Selecting the "optimal split" goes against the idea of reliably estimating the performance; in fact this would purposefully overestimate the model. It's important to realize that the goal of evaluating a model is not maximizing performance, it's to reliably estimate the performance of the model on any random test set.
H: Does tuning a Decision Tree and then using it in AdaBoost, versus tuning both simultaneously, yield the same results? My predicament is as follows: I performed hyperparameter tuning on a standalone Decision Tree classifier and got the best results. Now comes the turn of standalone AdaBoost, and here is where my problem lies: if I use the tuned Decision Tree from earlier as the base_estimator in AdaBoost and then perform hyperparameter tuning on AdaBoost only, will it yield the same results as performing hyperparameter tuning on an untuned AdaBoost with an untuned Decision Tree as base_estimator simultaneously, where I search over the hyperparameters of both AdaBoost and the Decision Tree together? AI: No, generally optimizing two parts of a modeling pipeline separately will not work as well as searching over all the parameters simultaneously. In your particular case, this is easier to see: the optimal single tree will probably be much deeper than the optimal trees in an AdaBoost ensemble. A single tree (probably) needs to split quite a bit to avoid being dramatically underfit, whereas AdaBoost generally performs best with "weak learners", and in particular often a "decision stump", i.e. a depth-1 tree, is selected.
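A sketch of the simultaneous search with scikit-learn (note the nested parameter prefix depends on the sklearn version: base_estimator__ in older releases, estimator__ in newer ones; X_train and y_train are assumed to exist):

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "base_estimator__max_depth": [1, 2, 3],   # Decision Tree hyperparameters
    "n_estimators": [50, 100, 200],           # AdaBoost hyperparameters
    "learning_rate": [0.1, 0.5, 1.0],
}
ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier())
search = GridSearchCV(ada, param_grid, cv=5).fit(X_train, y_train)
print(search.best_params_)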
H: Scikit-learn's implementation of AdaBoost I am trying to implement the AdaBoost algorithm in pure Python (or using NumPy if necessary). I loop over all weak classifiers (in this case, decision stumps), then over all features, and then over all possible values of the feature to see which one divides the dataset best. This is my code: for _ in range(self.n_classifiers): classifier = BaseClassifier() min_error = np.inf # greedy search to find the best threshold and feature for feature_i in range(n_features): thresholds = np.unique(X[:, feature_i]) for threshold in thresholds: # here we find the best stump error = sum(w[y != predictions]) if error < min_error: min_error = error The first two loops are not a problem since we usually have some tens of classifiers and features. But the third loop causes the code to be very inefficient. One way to solve this is to ignore the best weak classifier and choose one with slightly better performance than a random classifier (as suggested in Boosting: Foundations and Algorithms by Robert E. Schapire and Yoav Freund, p. 6): for _ in range(self.n_classifiers): classifier = BaseClassifier() min_error = np.inf # greedy search to find the best threshold and feature for feature_i in range(n_features): thresholds = np.unique(X[:, feature_i]) for threshold in thresholds: # here we find the best stump error = sum(w[y != predictions]) if error < 0.5 - gamma: min_error = error break But in this case, the accuracy of my model is lower than that of Scikit-learn, and the running time is still about three times as long. I tried to see how Scikit-learn implemented AdaBoost, but the code was not clear to me. I appreciate any comment. AI: The sklearn implementation of AdaBoost takes the base learner as an input parameter, with a decision tree as the default, so it cannot modify the tree-learning algorithm to short-circuit at a "good-enough" split; it will search all possible splits. It manages to be fast at that because the tree learning is done in Cython. Another option for improved speed, if you want to stay in pure python: do histogram splitting, as pioneered by LightGBM and now incorporated into XGBoost and sklearn's HistGradientBoosting models.
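A rough sketch of the histogram idea for the stump search (not sklearn's implementation): bin each feature once, then only evaluate thresholds at the bin edges, which replaces the loop over every unique value with a loop over a small fixed number of candidates.

import numpy as np

def best_stump_threshold(x, y, w, n_bins=32):
    """Return (threshold, weighted error) for one feature; y in {-1, +1}, one stump polarity only."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])  # candidate thresholds at bin edges
    best_thr, best_err = None, np.inf
    for thr in np.unique(edges):                 # at most n_bins - 1 candidates, not len(np.unique(x))
        pred = np.where(x <= thr, -1, 1)
        err = np.sum(w[pred != y])
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr, best_err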
H: For sklearn ML algorithms, is it possible to use boolean data alongside continuous data for the predictive data, and if so how can the data be scaled? I have a medium size data set (7K) of patient age, sex, and pre-existing conditions. Age of course is from 0-101, sex is 1 for male, 2 for female, and -1 for diverse. All the pre-conditions are Boolean. The outcome, death is also Boolean. Regardless of how I scale the data (I tried lots of scalers), I always get a warning: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. This traces back to: ValueError("Unknown label type: %r" % y_type) ValueError: Unknown label type: 'unknown' If I take out the age and sex columns, the error goes away. There are definitely no text, missing, or weird values here. If I look at my rescaled data, it looks as I would expect it to look. If I drastically simplify the data, it works. import numpy as np import pandas as pd from sklearn import preprocessing array = np.array([[42, 1, False, False, False, False, False, False, False, False, False, False, False],\ [72, 1, False, False, True, False, False, False, False, False, False, True, False],\ [77, 2, False, False, False, False, False, False, False, False, True, True, False],\ [36, 1, False, False, False, False, False, False, False, False, False, False, False],\ [42, 1, False, False, False, False, False, False, False, False, True, False, False],\ [82, 1, False, False, False, True, False, False, False, False, False, True, False],\ [71, 2, False, False, False, False, False, False, False, False, False, True, False],\ [36, -1, False, False, False, False, False, False, False, False, True, False, False], [52, 1, False, False, False, False, False, False, False, False, False, False, False],\ [52, 1, False, False, False, False, False, False, False, True, False, True, True],\ [77, 2, False, False, False, False, False, False, True, False, True, True, False],\ [46, 1, False, False, False, False, False, False, False, False, False, False, False],\ [45, 1, False, False, False, False, False, False, False, False, False, False, False],\ [88, 1, False, False, False, False, False, True, False, False, False, True, True],\ [79, 2, False, True, True, False, False, False, False, False, False, True, True],\ [36, -1, True, False, False, False, False, False, False, False, False, False, False]]) X = array[:,0:12] Y = array[:,12] scaler = preprocessing.MinMaxScaler().fit(X) rescaledX = scaler.transform(X) from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression kfold = KFold(n_splits=3, shuffle=True, random_state=7) # split the data into training and test sets for k-fold validation model = LogisticRegression(solver='lbfgs') # set up model of a linear regression results = cross_val_score(model, rescaledX, Y, cv=kfold) print("Accuracy: %.3f%% (%.3f%%)" % (results.mean()*100.0, results.std()*100.0)) It would be awesome if someone has an idea of what might be wrong, or how to troubleshoot further. AI: So, to me what you have to do is : Transform all your your True/False to 1/0, so they're numerical. Keep age as it is (or use some normalisation, but not that necessary Absolutely change the way Sex is handled. You have a big bias since you have 3 values : Since it's numerical, distance matters. Here, distance between "Male" and "Diverse" is 2, and distance between "Female" and "Diverse" is 3. 
There is no logical reason, given your problem, for that ordering, and it will bias your model. You should read this answer: https://datascience.stackexchange.com/a/79575/101580 In your case a One-Hot Encoder is good enough since you only have 3 values.
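A minimal sketch of that encoding with pandas (hypothetical toy columns standing in for the real dataset):

import pandas as pd

df = pd.DataFrame({"age": [42, 72, 36], "sex": ["man", "woman", "diverse"], "died": [0, 1, 0]})
X = pd.get_dummies(df.drop(columns="died"), columns=["sex"])  # one 0/1 column per sex value
y = df["died"]
print(X)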
H: Understanding SVM mathematics I was referring SVM section of Andrew Ng's course notes for Stanford CS229 Machine Learning course. On pages 14 and 15, he says: Consider the picture below: How can we find the value of $\gamma^{(i)}$? Well, $w/\Vert w\Vert$ is a unit-length vector pointing in the same direction as $w$. Since, point $A$ represents $x^{(i)}$, we therefore find that the point $B$ is given by $x^{(i)} − \gamma^{(i)}·w/\Vert w\Vert$. But this point lies on the decision boundary, and all points $x$ on the decision boundary satisfy the equation $w^Tx + b = 0$. Hence, $$w^T\left(x^{(i)}-\gamma^{(i)}\frac{w}{\Vert w \Vert}\right)+b=0$$ Solving for $\gamma^{(i)}$ yields $$\color{red}{\gamma^{(i)}=\frac{w^Tx^{(i)}+b}{\Vert w\Vert}}$$ I am not getting how the last red-colored equality is arrived. I am getting something like this: $$w^T\left(x^{(i)}-\gamma^{(i)}\frac{w}{\Vert w \Vert}\right)+b=0$$ $$\rightarrow w^Tx^{(i)}-\gamma^{(i)}\frac{w^Tw}{\Vert w \Vert}+b=0$$ $$\rightarrow w^Tx^{(i)}+b=\gamma^{(i)}\frac{w^Tw}{\Vert w \Vert}$$ How can I proceed further to equality in red color? Do I have to divide both the sides again by $\Vert w \Vert$ to get the following? $$\rightarrow \frac{w^Tx^{(i)}+b}{\Vert w \Vert}=\gamma^{(i)}\frac{w^Tw}{\Vert w \Vert\Vert w \Vert}$$ But then how $\frac{w^Tw}{\Vert w \Vert\Vert w \Vert}$ equals to $1$? AI: Hint: $w^Tw = \Vert w \Vert^2$ this stems directly from the definitions of norm and matrix product (assuming $w$ is column vector as usually taken) and one can expand the two sides to prove it easily. Note that technically $w^Tw$ is a $1 \times 1$ matrix but any such matrix is identified with its single scalar entry. So it is simply a scalar number. Or equivalently any scalar value is also a $1 \times 1$ matrix.
H: Understanding Lagrangian for SVM I was referring SVM section of Andrew Ng's course notes for Stanford CS229 Machine Learning course. On page 22, he says: Lagrangian for optimization problem: $$\mathcal{L}(w,b,\alpha)=\frac{1}{2}\Vert w\Vert^2-\sum_{i=1}^n \alpha_i[y^{(i)}(w^Tx^{(i)}+b)-1] \quad\quad\quad \text{...equation (1)} $$ To find dual of the problem, we set derivative of $\mathcal{L}$ with respect to $w$ to zero, to get: $$w=\sum_{i=1}^n\alpha_iy^{(i)}x^{(i)}\quad\quad\quad \text{...equation (2)}$$ Putting $w$ from equation (2) in equation (1), we get: $$\mathcal{L}(w,b,\alpha)=\sum_{i=1}^n\alpha_i-\color{red}{\frac{1}{2}}\sum_{i,j=1}^ny^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)}-b\sum_{i=1}^n\alpha_iy^{(i)}$$ But I got following putting $w$ from equation (2) in equation (1): $$\begin{align} \mathcal{L}(w,b,\alpha) & =\frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^2-\sum_{i=1}^n \alpha_i\left[y^{(i)}\left(\left( \sum_{j=1}^n\alpha_jy^{(j)}x^{(j)} \right)x^{(i)}+b\right)-1\right] \\ & =\frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^2-\sum_{i,j=1}^n \alpha_i\left[y^{(i)}\left(\left( \alpha_jy^{(j)}x^{(j)} \right)x^{(i)}+b\right)-1\right] \\ & =\frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^2-\sum_{i,j=1}^n \left[ y^{(i)}y^{(j)}\alpha_i \alpha_j\left(x^{(i)}\right)^Tx^{(j)} + \alpha_i y^{(i)} b -\alpha_i \right] \\ & =\color{blue}{\frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^2}+\sum_{i=1}^n\alpha_i-\sum_{i,j=1}^ny^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)}-b\sum_{i=1}^n\alpha_iy^{(i)} \end{align}$$ I didn't get from where Andrew Ng got red colored $\color{red}{\frac{1}{2}}$ and why didn't he got blue colored $\color{blue}{\frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^2}$ (, which I got in my simplification). Where did I make mistake? AI: Assuming $x^{(i)} \in \mathbb{R}^{dx1}$ with $d>0$ we have: $$ \frac{1}{2} \left\lVert \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right\rVert ^2 = \frac{1}{2}\left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right)^T \left( \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right) = \frac{1}{2} \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} $$ You need to be careful here, $x^{(i)}$ is a feature vector, hence you need to make sure that you respect the dot product rules in $\mathbb{R}^{dx1}$. The further right hand side of the above comes from developing the dot product (the alpha and y are scalars). Thus: $$ \frac{1}{2} \left\lVert \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right\rVert ^2 - \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} = \color{red}{\frac{1}{2}} \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} + \color{red}{(-1)} \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} $$ which ultimately gives: $$ \frac{1}{2} \left\lVert \sum_{i=1}^n\alpha_iy^{(i)}x^{(i)} \right\rVert ^2 - \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} = - \frac{1}{2} \sum_{i,j=1}^n y^{(i)} y^{(j)} \alpha_i \alpha_j (x^{(i)})^T x^{(j)} $$
H: OneVsRest classification: why do the probabilities sum to 1? I am using the OneVsRest classifier in sklearn, so a multilabel-style setup with 4 models, one for each class (I have 4 classes). When I call the predict_proba method I therefore get an array with 4 columns, each one corresponding to a probability for that class, e.g. 0 1 2 3 0.6 0.2 0.1 0.1 0.8 0.05 0.05 0.1 I know the models all train independently of one another and that the assigned class (0, 1, 2 or 3) is the argmax of the 4 probabilities. What else happens under the hood with this classification such that each row sums up to 1? Why and how is this normalization happening? AI: In multiclass classification, the assumption is that every instance has exactly one class. Example: a poll asks people their favourite colour among blue (B), yellow (Y) or red (R). Each instance represents a person's answer, either B, Y or R. The "one vs. rest" method means that 3 binary classifiers are trained: "B" vs "not B", where the Y and R instances are labelled "not B"; "Y" vs "not Y", where the B and R instances are labelled "not Y"; "R" vs "not R", where the B and Y instances are labelled "not R". These models are not independent by assumption, for example: if the class is B then it cannot be Y or R; if the class is not Y then it's either B or R; etc. In probabilistic terms this translates as a distribution which sums to 1, because if a class has a high probability then it's impossible that any other class also has a high probability (complement). This is why the probabilities predicted by the binary classifiers are each divided by the sum (see Ben's answer for details). Note: by contrast, multi-label classification allows every instance to have any number of classes. In the example above it's as if the poll asks people to say whether they like each colour B, Y, R. A person might like all 3 colours or none of them. This implies that the binary classifiers are independent: for "B vs not B", both the B and "not B" classes can contain instances which also have Y or R (or both). As a consequence the classifiers are independent: knowing that an instance has class B doesn't imply anything about the other classes.
H: Comparison of classifier confusion matrices I tried implementing Logistic Regression, Linear Discriminant Analysis and KNN for the Smarket dataset provided in "An Introduction to Statistical Learning" in Python. Logistic Regression and LDA were pretty straightforward in terms of implementation. Here are the confusion matrices on a test dataset. Both of them are pretty similar, with almost the same accuracy. But I tried finding a K for KNN by plotting the loss vs. K graph, and chose a K around 125 to get this confusion matrix (same test dataset). Although the KNN gave a higher accuracy of around 0.61, the confusion matrix is very different from the logistic and LDA matrices, with a much higher true negative count and a low true positive count. I can't really understand why this is happening. Any help would be appreciated. Here is how I computed loss for the KNN classifier (using Sklearn). Could not use MSE since the Y values are qualitative. k_set = np.linspace(1,200, dtype=int) knn_dict = {} for k in k_set: model = KNeighborsClassifier(k) model.fit(train_X, train_Y) y_pred = model.predict(test_X) loss = 1 - metrics.accuracy_score(test_Y, y_pred) knn_dict[k] = loss model = KNeighborsClassifier(K) model.fit(train_X, train_Y) knn_y_pred = model.predict(test_X) knn_cnf_matrix = metrics.confusion_matrix(test_Y, knn_y_pred) Very new to data science. I hope I have provided enough background/context. Let me know if more info is needed. AI: A few comments: I don't know this dataset but it seems to be a difficult one to classify since the performance is not much better than a random baseline (the random baseline in binary classification gives 50% accuracy, since it guesses right half the time). If I'm not mistaken the majority class (class 1) has 141 instances out of 252, i.e. 56% (btw the numbers are not easily readable in the matrices). This means that a classifier which automatically assigns class 1 would reach 56% accuracy. This is called the majority baseline, and it is usually the minimal performance one wants to reach with a binary classifier. The LR and LDA classifiers are worse than this, so practically they don't really work. It's a bit strange that the first 2 classifiers predict class 0 more often than class 1. It looks as if the training set and test set don't have the same distribution. The k-NN classifier correctly predicts class 1 more often, and that's why it works better. k-NN is also much less sensitive to the data distribution: in case it differs between training and test set, this could explain the difference with the first 2 classifiers. However it's rarely meaningful for the $k$ in $k$-NN to be this high (125). Normally it should be a low value, like one digit only. I'm not sure what this means in this case. Suggestion: you could try some more robust classifiers like decision trees (or random forests) or SVM.
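The majority baseline mentioned above can be computed directly with scikit-learn, which makes the comparison explicit (using the same train_X, train_Y, test_X, test_Y splits as in the question):

from sklearn.dummy import DummyClassifier
from sklearn import metrics

baseline = DummyClassifier(strategy="most_frequent").fit(train_X, train_Y)
# accuracy of always predicting the training set's majority class; a useful model should beat this
print(metrics.accuracy_score(test_Y, baseline.predict(test_X)))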
H: How to write a Proof-of-Concept (POC) for a machine learning model? I've found that if a company is interested in your product but doesn't know whether it will fit or work, or doesn't trust you, they will ask you for a POC, or proof of concept. I need to write a proof of concept for the machine learning model I developed, so how can I? AI: A proof of concept for an ML model is the same as in the ML research literature: Design or adopt a suitable evaluation method specifically for the task. Prove that the evaluation design is appropriate, including an explanation of any data collection, preprocessing, etc. Evaluate performance in a reliable and accurate way. Prove that the performance value you obtain is correct with respect to the task, i.e. that the same performance will be obtained in the production environment. Compare the performance of your model against any relevant existing model, preferably on the same data. Justify every design/implementation choice.
H: how is validator created in python and what are most popular libraries / modules to learn first I have a df which has a serial number generated with each new record. The serial number combines with some other part like state code, year of registration and category code. So it has a format like below: | DOR | Applicant's code | |:-------|:--------------:| |1-2-2018| MH2018-PAR-0689| |1-2-2018| MH2018-PAR-0689| |2-2-2018| MH2018-PAR-0690| |2-2-2018| MH2018-OMC-0691| |1-2-2018| UP2018-OMC-2461| |1-2-2018| UP2018-FPR-2462| |3-2-2018| UP2018-PAR-2463| |1-2-2018| UP2018-OMC-2462| Let's say 20 such records are generated in each state every month and there are 37 different state codes and 8 different category codes. I want to create a list of all possible Applicant's codes for next month which should be 37 x 1 x 8 x 20 possible values. I need guidance on how to code it with python and also if my approach is correct. AI: Use itertools doc for this purpose. Without knowing your exact codes I just made some lists up: import itertools as it nums = [x for x in range(37)] single = ["_"] abc = list('abcdefgh') codes = [f"123{x}" for x in range(20)] len(abc) * len(nums) * len(codes) # 5920 list(it.product(abc, single, nums, codes)) # len(...) -> 5920 This gives you: [...] ('a', '_', 0, '1232'), ('a', '_', 0, '1233'), ('a', '_', 0, '1234'), ('a', '_', 0, '1235'), ('a', '_', 0, '1236'), [...] ('b', '_', 12, '12315'), ('b', '_', 12, '12316'), ('b', '_', 12, '12317'), ('b', '_', 12, '12318'), ('b', '_', 12, '12319'), [...]
H: Binary document classification using keywords for a very small dataset I have a set of 150 documents with their assigned binary class. I also have 1000 unlabeled documents. Each document is about the length of a journal paper. Each class has 15 associated keywords. I want to be able to predict the assigned class of the documents using this information. Does anyone have any ideas of how I could approach this problem? AI: This problem is called text classification (it belongs to the more general case of document classification). There are plenty of resources online about this, e.g. here, here or here. There are also a lot of research papers on the topic. General text classification consists in two steps: Represent the text as features Train a classification model The first step is specific to text, as opposed to the second step which is general ML. There are plenty of options to represent text as features, from traditional bag of words representation to word embeddings. In this question I explained the principle of the traditional BoW representation.
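A minimal baseline combining both steps, to be trained on the 150 labelled documents (a sketch with assumed variable names docs, a list of 150 strings, and labels, the 0/1 classes; the 15 keywords per class could later be added as extra features or used to weakly label the 1000 unlabelled documents):

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english", min_df=2)),  # step 1: text -> features
    ("logreg", LogisticRegression(max_iter=1000)),               # step 2: classification model
])
scores = cross_val_score(clf, docs, labels, cv=5)
print(scores.mean())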
H: Can a CNN have a different number of convolutional layers and kernels, and what does it mean? So if I have $3$ RGB channels, $6$ convolutional layers and $4$ kernels, does this mean that each kernel does a convolution on each channel and so the input for the next convolution will be $3 \times 4=12$ channels? Or are those outputs just stacked on top of each other (summed) so that the input to the next layer is still 3 channels? Edit: I am pretty sure that the input for the next convolution would still be $3$, but why is that? What is the operation performed? AI: Check this video. Color channels in a CNN are only involved in the first layer, where the original image is given as the input. Let us assume we are taking a 3x3 filter. This actually means we have three 3x3 filter slices, one for each color channel. Each slice is applied to its corresponding color channel: the red slice is applied to the red channel, the blue slice to the blue channel, etc., and at the end you simply add up the per-channel results, pixel by pixel. Assuming you use a padding that preserves the original size of the image, the output of this filter will be the same size as the original image, where each pixel is calculated by adding up the per-channel results. Of course we never use a single filter. The number of filters you use determines the depth of the output. For example, check this model summary from the TensorFlow documentation and compare it to the model above. The last parameter in parentheses represents the number of filters, while the last number in the first row is the number of color channels in the given image.
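A tiny NumPy illustration of that summation over channels for a single output position (toy numbers, "valid" padding, and technically cross-correlation, like most deep learning libraries):

import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(5, 5, 3))    # height x width x 3 colour channels
kernel = rng.normal(size=(3, 3, 3))   # ONE 3x3 kernel with a slice per input channel

patch = image[0:3, 0:3, :]            # the 3x3x3 region under the kernel
out_00 = np.sum(patch * kernel)       # sum over height, width AND channels -> one number
# stacking K such kernels gives an output with depth K, not depth 3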
H: Loss in multi-class classification I have a multi-class classification task. One of the standard approaches in choosing a loss function is to use CrossEntropyLoss. It is a good option when classes are standalone and not similar to each other. What if some classes are more similar? For example, if I have 10 classes, from 0 to 9, and classes with nearby numbers are closer to each other, i.e. 4 and 6 are closer to 5 than 0 and 9 are, etc. How can I modify CrossEntropyLoss to reflect this fact? Or maybe such a loss function already exists? AI: I don't think there is a built-in loss function for what you want - I had the same issue a few years back and I found a custom loss function for this purpose. The problem is called Ordinal Categorical Classification. I have not checked this in a while now, but I believe it is still not implemented in Keras. You can also check this Cross Validated question and the references given in the answers.
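One hedged sketch of such a modification in PyTorch: keep cross-entropy but add a term that penalises the expected distance between the predicted and true class index (this illustrates the idea only; it is not a reference implementation from the linked resources):

import torch
import torch.nn.functional as F

def ordinal_aware_loss(logits, targets, alpha=1.0):
    ce = F.cross_entropy(logits, targets)                 # standard cross-entropy term
    probs = F.softmax(logits, dim=1)
    classes = torch.arange(logits.size(1), device=logits.device, dtype=probs.dtype)
    dist = (classes.unsqueeze(0) - targets.unsqueeze(1).to(probs.dtype)).abs()
    expected_dist = (probs * dist).sum(dim=1).mean()      # penalises "far" mistakes more than "near" ones
    return ce + alpha * expected_dist

# usage: loss = ordinal_aware_loss(model(x), y)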
H: Predict total responses to emails amongst multiple groups I've got historical data about email characteristics (like time sent, length, topic etc.), and the respondents to these emails - I've got their IP, which is further linked to gender, domicile, employment status and so on. The example of my datasets is shown below: # dataset 1 email_id time_sent length topic respondent_ip YH2 00:02 300 advertisement 80.121 YH2 00:02 300 advertisement 71.231 # dataset 2 respondent_ip gender domicile employment 80.121 man US employed 71.231 woman China unemployed I want to predict how many people within different 'groups' are likely to respond to the emails based on the email characteristics. So for example, if I send an email on midnight, that is 300 characters and its topic is advertisement, how many unemployed women are likely to respond? I'm struggling to conceptualise what sort of model I could apply here, or even what sort of structure that model should have. Primarily, because I'm interested in so many different 'groups', I'm not sure what my response variable should be. Any pointers here would be appreciated! AI: Do you have historical data on how many people, with certain characteristics, responded to emails previously? If yes, then you can train a model (eg ANN, CART) using topic / gender / domicile / employment status (and optionally size of group of these people) as input features and number of responses of that group of people (or percent of the original size) as outcome. If you are only interested in unemployed women then use only data for unemployed women. In any case, after training, you feed the model with an email topic and gender and employment status (and optionally how many people you target) and you get a result for how many are likely to respond from this group of people.
H: sklearn models Parameter tuning GridSearchCV Dataframe: id review name label 1 it is a great product for turning lights on. Ashley 1 2 plays music and have a good sound. Alex 1 3 I love it, lots of fun. Peter 0 The aim is to classify the text; if the review is about the functionality of the product (e.g. turn the light on, music), label=1, otherwise label=0. I am running several sklearn models to see which one works best: # Naïve Bayes: text_clf_nb = Pipeline([('tfidf', TfidfVectorizer()), ('clf', MultinomialNB())]) # Linear Support Vectors Classifier: text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()), ('clf', LinearSVC(loss='hinge', penalty='l2', max_iter = 50))]) # SGDClassifier text_clf_sgd = Pipeline([('tfidf', TfidfVectorizer()), ('clf', SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, random_state=42,max_iter=50, tol=None))]) #Random Forest text_clf_rf = Pipeline([('tfidf', TfidfVectorizer()), ('clf', RandomForestClassifier())]) #neural network MLPClassifier text_clf_mlp = Pipeline([('tfidf', TfidfVectorizer()), ('clf', MLPClassifier())]) Problem: How to tune models using GridSearchCV? What I have so far: from sklearn.model_selection import GridSearchCV parameters = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False),'clf__alpha': (1e-2, 1e-3) } gs_clf = GridSearchCV(text_clf_nb, param_grid= parameters, cv=2, scoring='roc_auc', n_jobs=-1) gs_clf = gs_clf.fit((X_train, y_train)) This gives the following error on running gs_clf = gs_clf.fit((X_train, y_train)): ValueError: Invalid parameter C for estimator Pipeline(memory=None, steps=[('tfidf', TfidfVectorizer(analyzer='word', binary=False, decode_error='strict', dtype=<class 'numpy.float64'>, encoding='utf-8', input='content', lowercase=True, max_df=1.0, max_features=None, min_df=1, ngram_range=(1, 1), norm='l2', preprocessor=None, smooth_idf=True, stop_words=None, strip_accents=None, sublinear_tf=False, token_pattern='(?u)\\b\\w\\w+\\b', tokenizer=None, use_idf=True, vocabulary=None)), ('clf', MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True))], verbose=False). Check the list of available parameters with `estimator.get_params().keys()`. I would appreciate any suggestions. Thanks. AI: The correct way of referring to parameters inside a Pipeline is with a double underscore, like named_step__parameter_name. So the first thing I noticed is in this line: parameters = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False),'clf__alpha': (1e-2, 1e-3) } You are calling vect__ngram_range, but this should be tfidf__ngram_range, since the vectorizer step in your pipeline is named 'tfidf'. However, this is not the error displayed; rather, it seems that you have mixed up your code somewhere, since C is a parameter for an SVM, not for a MultinomialNB. So check whether you are really passing the intended pipeline, because I suspect that you are passing the pipeline that contains the SVM while trying to hyperparametrize the MultinomialNB. In particular, check whether this dictionary: parameters = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False),'clf__alpha': (1e-2, 1e-3) } is also being created somewhere else for an SVM (two dictionaries with the same variable name). Finally, I would also replace these lines: gs_clf = GridSearchCV(text_clf_nb, param_grid= parameters, cv=2, scoring='roc_auc', n_jobs=-1) gs_clf = gs_clf.fit((X_train, y_train)) with only this: gs_clf = GridSearchCV(text_clf_nb, param_grid= parameters, cv=2, scoring='roc_auc', n_jobs=-1).fit(X_train, y_train) Note that you were passing a single tuple (X_train, y_train) to the fit method instead of two separate arguments, which will also cause problems.
H: What is the difference between features in vgg I have read about the architecture of the model, but this is the first time I am trying to use it. The feature maps will be different depending on whether I extract the features from the last layer or from the two last layers, but will this affect the result if I use them in another model? AI: Yes, it will definitely affect the result. If you are going to use CNN pre-trained models for feature extraction, you have to remove the last output layer. Along with that, you have to remove all the densely (fully) connected layers, since those act as an ANN head used to predict the results. We only need the features, which can then be used to train other models like SVC, random forest, etc.
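A minimal sketch of that kind of feature extraction (TensorFlow/Keras and ImageNet weights assumed); include_top=False drops the fully connected layers, and the random array is just a stand-in for your images:

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

base = VGG16(weights='imagenet', include_top=False, pooling='avg',
             input_shape=(224, 224, 3))

images = np.random.rand(8, 224, 224, 3) * 255.0     # stand-in for your data
features = base.predict(preprocess_input(images))   # shape (8, 512)

# `features` can now be fed to another model, e.g. sklearn's SVC or RandomForestClassifier.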
H: improve LinearSVC Dataframe: id review name label 1 it is a great product for turning lights on. Ashley 1 2 plays music and have a good sound. Alex 1 3 I love it, lots of fun. Peter 0 The aim is to classify the text; if the review is about the functionality of the product (e.g. turn the light on, music), label=1, otherwise label=0. How can I improve the accuracy of LinearSVC? I tried different models; LinearSVC gives the highest accuracy, but it is still not enough: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42) text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()), ('clf', LinearSVC(loss='hinge', penalty='l2', max_iter = 100))]) metrics.accuracy_score(y_test,predictions) is 0.84 at this stage. I would appreciate your advice. AI: There are many ways to increase the accuracy. 1.) Try to get more data. More data usually helps in getting better results. (usually, not always!) 2.) You mention that you have tried different models (I'm not sure how many), but there are still more models you can try. 3.) Try hyperparameter tuning for all the models you have tried, not only for LinearSVC (see the sketch below). 4.) Try different preprocessing techniques other than TF-IDF to see which yields the best results.
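For point 3, here is a hedged sketch of tuning the pipeline from the question with GridSearchCV; the grid values are illustrative assumptions, not recommendations:

from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

text_clf_lsvc = Pipeline([('tfidf', TfidfVectorizer()),
                          ('clf', LinearSVC(max_iter=10000))])

param_grid = {
    'tfidf__ngram_range': [(1, 1), (1, 2)],
    'tfidf__min_df': [1, 2, 5],
    'clf__C': [0.1, 1.0, 10.0],
}

search = GridSearchCV(text_clf_lsvc, param_grid, cv=5, scoring='accuracy', n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)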
H: Creating and training a Multilayer perceptron model with few data Are there any ways to create a deep multilayer perceptron model that is capable of making accurate regression predictions when trained on only around 1000 unique data points? I'm currently working on a Kaggle challenge for predicting the amount of followers gained, using the top 1000 streamers on Twitch 2020 dataset. - The X value would be every column excluding Followers gained; - The y value would be the amount of Followers gained - the model will make predictions regarding this. In general, the values for the amount of followers gained have around 6 to 7 digits; currently my RMSE loss value is close to dropping to 5 digits, but is still 6. There is a limited quantity of data; I'm aiming for a 5-digit RMSE value. Here's an overview of the structure of my MLP model; this one showed the best result so far. Do let me know if you have any recommendations. Thanks. AI: The model looks good. If the training accuracy is 100%, then try increasing the dropout percentage in the initial layers or removing some hidden layers. If the training accuracy is still less than 100%, then try decreasing the dropout percentage in the last few layers (the ones with 32 nodes). Refer to this page https://stats.stackexchange.com/questions/417055/dropping-outliers-based-on-2-5-times-the-rmse - you may get more insights. Thanks
H: Understanding Lagrangian equation for SVM I was trying to understand the Lagrangian from the SVM section of Andrew Ng's Stanford CS229 course notes. On pages 17 and 18, he says: Given the problem $$\begin{align} \min_w & \quad f(w) \\ s.t. & \quad h_i(w)=0, i=1,...,l \end{align}$$, the Lagrangian can be given as follows: $$\mathcal{L}(w,\beta)=f(w)\color{red}{+}\sum_{i=1}^l\beta_ih_i(w)\quad\quad\quad \text{...equation(1)}$$ Here, the $\beta_i$'s are Lagrange multipliers. While referring to Lagrange multipliers from a Khan Academy article, I found it says: The Lagrangian is given as: $$ \mathcal{L}(x,y,\dots,\lambda)=f(x,y,\dots)\color{red}{-}\lambda(g(x,y,\dots)-c) \quad\quad\quad \text{...equation(2)}$$ Here, $g$ is a constraint and is the same as $h_i$ in the CS229 notes above, and $\lambda$ is a Lagrange multiplier. Comparing these two forms of the Lagrangian, I have the following doubts: Q1. Why do the CS229 notes have a $\color{red}{+}$ve sign, whereas Khan Academy's version of the Lagrangian has a $\color{red}{-}$ve sign? Q2. If you check Grant's video on Khan Academy, he says: The maximum value of the function $f$ under the constraint function $g$ occurs at the point $(x_m,y_m)$ where the curves of these two functions are tangent to each other. The vectors (in the vector field) perpendicular to these curves at the point $(x_m,y_m)$ are nothing but the gradients of these functions. However, the magnitudes of the gradients of different functions usually vary. At the point of intersection $(x_m,y_m)$, these two gradients are proportional to each other: $$\nabla f(x_m,y_m )=\lambda\nabla g(x_m,y_m)$$ where $\lambda$ is a Lagrange multiplier. Then the video defines the Lagrangian as in equation (2). The point is that the Lagrangian in equation (2) is defined at the point of intersection of the two functions and it does not involve a summation. Then why does the Lagrangian in equation (1) involve a summation? What am I missing here? AI: The sign is just a matter of convention. If you use plus instead of minus, it simply flips the sign of the multiplier itself. The method of finding them is the same. I am not sure if I understand the second part of your question, but the first equation is for the general case where the number of Lagrange multipliers can be more than one - if you have more than one constraint. If you take the case where $l = 1$, you get the second equation.
H: calibrated classifier ValueError: could not convert string to float Dataframe: id review name label 1 it is a great product for turning lights on. Ashley 2 plays music and have a good sound. Alex 3 I love it, lots of fun. Peter I want to use a probabilistic classifier (linear_svc) to predict labels (probability of 1) based on review. My code: from sklearn.svm import LinearSVC from sklearn.calibration import CalibratedClassifierCV from sklearn import datasets #Load dataset X = training['review'] y = training['label'] linear_svc = LinearSVC() #The base estimator # This is the calibrated classifier which can give probabilistic classifier calibrated_svc = CalibratedClassifierCV(linear_svc, method='sigmoid', #sigmoid will use Platt's scaling. Refer to documentation for other methods. cv=3) calibrated_svc.fit(X, y) # predict prediction_data = predict_data['review'] predicted_probs = calibrated_svc.predict_proba(prediction_data) It gives the following error on calibrated_svc.fit(X, y): ValueError: could not convert string to float: 'it is a great product for turning...' I would appreciate your help. AI: First, I assume you are using text data as your input matrix X. The main point is that you have to include your preprocessing step just as you would when not using a calibrated classifier, so as you already know you can use a Pipeline like so: calibrated_svc = CalibratedClassifierCV(linear_svc, method='sigmoid', cv=3) model = Pipeline([('tfidf', TfidfVectorizer()), ('clf', calibrated_svc)]).fit(X, y) Another option, if you are interested in using probabilities with your SVM, is to set the parameter probability = True inside your SVM; using the class SVC with a linear kernel is equivalent to LinearSVC: model = Pipeline([('tfidf', TfidfVectorizer()), ('clf',SVC(probability = True, kernel = 'linear') )]).fit(X, y) This will fit a logistic sigmoid (Platt scaling) on top of the SVM's decision scores. Both options are feasible if you are only interested in using probabilities per se, but if you are also interested in the calibration of your probabilities, the first option is better.
H: I am attempting to implement k-folds cross validation in python3. What is the best way to implement this? Is it preferable to use Pandas or Numpy? I am attempting to create a script to implement cross validation in data. However, the splits cannot randomly take any records, so the training and testing can be done on equal data splits for each label which is why I need some guidance trying to implement the code. How do I approach this issue? Updating with code: data = pd.read_csv("data/iris.data", sep=",", header=None) data.columns = ["Sepal_length", "Sepal_width", "Petal_length", "Petal_width", "Species"] iris_setosa = data.loc[data["Species"] == "Iris-setosa"] iris_virginica = data.loc[data["Species"] == "Iris-virginica"] iris_versicolor = data.loc[data["Species"] == "Iris-versicolor"] train_setosa1 = iris_setosa.iloc[40, :] test_setosa1 = iris_setosa.iloc[-10, :] train_setosa2 = iris_setosa.iloc[-40, :] test_setosa2 = iris_setosa.iloc[10, :] train_setosa3 = iris_setosa.iloc[5:45, :] test_setosa3 = iris_setosa.iloc[6:-4, :] train_virginica1 = iris_virginica.iloc[40, :] test_virginica1 = iris_virginica.iloc[-10, :] train_virginica2 = iris_virginica.iloc[-40, :] test_virginica2 = iris_virginica.iloc[10, :] train_virginica3 = iris_virginica.iloc[5:45, :] test_virginica3 = iris_virginica.iloc[6:-4, :] train_versicolor1 = iris_versicolor.iloc[40, :] test_versicolor1 = iris_versicolor.iloc[-10, :] train_versicolor2 = iris_versicolor.iloc[-40, :] test_versicolor2 = iris_versicolor.iloc[10, :] train_versicolor3 = iris_versicolor.iloc[5:45, :] test_versicolor3 = iris_versicolor.iloc[6:-4, :] AI: Which is preferable pandas or numpy? It's totally up to you, if you really need to increase speed, go for numpy arrays, although at the same time the code will tend to get unwieldy and prone to errors, because you won't be able to keep track of feature columns easily. On the contrary, pandas will make your life easy when coding, so this choice is up to you. What is the best way of creating folds for cross-validation? Again there is no best algorithm or approach in machine learning that can be used in every case, it all depends on your preferences and needs. So I'll give you several options. I assume you are dealing with classification problem so I'll advise only on that, You can use sklearn.model_selection.StratifiedKFold function, as in this example, kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=43) for train_idx, test_idx in kf.split(X, y): X_train, X_valid = X.iloc[train_idx], X.iloc[test_idx] y_train, y_valid = y.iloc[train_idx], y.iloc[test_idx] this option gives you more control over the code while debugging, and what's more important it stratifies the result according to y labels. Stratification here makes sure that distribution of values in y_train and y_valid will repeat the distribution in y proportionally, or simply put if one half of values in y were 1s and another half 0s, y_train and y_valid will have the same distribution of half being 1s and another half being 0s. With this function, you can use whichever scoring function you want. 
Predefine cross-validation folds beforehand, kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=43) for fold, (t_, v_) in enumerate(kf.split(X, y)): # where X = df.drop('target'), y = df.target df.loc[v_, 'fold'] = fold these folds can be used as following: folds = set(df['fold'].astype(int).unique()) for fold in folds: df_train, df_valid = df[~(df.fold==fold)], df[(df.fold==fold)] X_train, y_train = df_train.drop(['target', 'fold'], axis=1), df_train.target X_valid, y_valid = df_valid.drop(['target', 'fold'], axis=1), df_valid.target this approach apparently may increase the speed of computations, as there is no need to compute folds at every step using complex algorithm, instead all we do is select rows with the current fold. Additionally, this approach makes reproducibility of the result possible, especially if you save the dataset with predefined folds to csv, and so that other people will get the same cross-validation folds. Another important thing, this approach makes it possible to compute different folds in parallel. This may come in handy, with neural nets used in deep learning. Here, you can use whichever scoring function you want as well. Use sklearn.model_selection.cross_val_score function as it was mentioned in the previous answer. This function is not exactly what you want, because it evaluates a score by cross-validation, while you were asking about ways of cross validation itself. Although I could hardly imagine what you can use cross-validation for apart from evaluating a score. Anyway, let's address pros and cons of this function. Pros: simple to use, stratification is possible. Cons: apparently, this is a black-box function, you can't debug it, unless on local machine, even there it is hard to do that. It is slower than second approach for mentioned reasons. And it is restricted to scoring methods listed here, in case you want use different scoring. UPDATE: In your newly added code, you're slicing the datasets wrong, train_setosa1 = iris_setosa.iloc[40, :] test_setosa1 = iris_setosa.iloc[-10, :] you are selecting only one row for each dataset, I suspect you were trying to slice it so that first 40 rows were included in train_setosa1 and the rest in test_setosa1, here's how you do that, train_setosa1 = iris_setosa.iloc[:40] test_setosa1 = iris_setosa.iloc[40:] But even in this case it is not proper cross-validation, because the whole point of that is using randomness of selection, and usually machines are better at this than people are. Here's equivalent implementation of your code, iris_species = ["Iris-setosa", "Iris-virginica", "Iris-versicolor"] kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=43) for species in iris_species: species_df = data[data["Species"] == species] X = species_df.drop('Species', axis=1) y = species_df.Species for train_idx, valid_idx in kf.split(X, y): X_train, X_valid = X.iloc[train_idx], X.iloc[valid_idx] y_train, y_valid = y.iloc[train_idx], y.iloc[valid_idx] # do whatever it is you want with training and validation data here pass Cheers,
H: Does BERT need supervised data only when fine-tuning? I've read many articles and papers mentioning how unsupervised training is conducted while pre-training a BERT model. I would like to know if it is possible to fine-tune a BERT model in an unsupervised manner or does it always have to be supervised? AI: The distinction between supervised and unsupervised is a little bit tricky here. BERT pre-training is unsupervised with respect to the downstream tasks, but the pre-training itself is technically a supervised learning task. BERT is trained to predict words that have been masked in the input, so the target words are known at training time. The term unsupervised fine-tuning is thus a little confusing here. If we use BERT in a clever way (e.g., using a method called iPET), we can use its language modeling abilities to perform some tasks in an (almost) zero-shot setup, which basically means that BERT learned to perform the task in an unsupervised way. However, it is disputable, if this could be called unsupervised fine-tuning. BERT can be of course fine-tuned in an unsupervised way by continued pre-training. It can be viewed as a way of domain adaptation of the model, which is typically again followed by supervised fine-tuning. Imagine you want to do a task on legal text, so you can first adapt BERT for legal text using large amounts of plain text, and fine-tune it using on much smaller labeled data.
H: ML : Found input variable with inconsistent numbers I am trying a Retail ML project, but am stuck on the error "Found input variables with inconsistent numbers of samples: [982644, 911]". I tried many things & I know why this error occurs, but I can't figure out a solution for it. Can anybody please help me? I've been stuck on it for the past 2 days. Y_train = train1['Sales'] Y_val = test_val1['Sales'] X_train = train1.drop(['Sales', 'Date', 'Customers'], axis = 1).values X_val = test_val1.drop(['Sales', 'Date', 'Customers'], axis = 1).values X_train = X_train.reshape(X_train.shape[0:]) rr = Ridge(alpha=10) rr.fit(X_train, Y_train) Y_pred1 = rr.predict(X_val) print('MSE',np.sqrt(mean_squared_error(Y_pred1,Y_val))) print('MAE',mean_absolute_error(Y_pred1,Y_val)) print('train model score',rr.score(X_train, Y_train)) print('test model score',rr.score(X_val,Y_val)) I am getting the error on rr.fit(X_train, Y_train). I have performed linear regression with the same object variables, but I can't seem to perform the regularization of the model. AI: I have checked the colab example. There does not seem to be enough memory to train a Ridge model with your humongous dataset, whose shape is (982644, 1169). The notebook crashes when attempting to execute the said line, rr.fit(X_train, Y_train) So I tried decreasing the size of the dataset, and everything worked fine. xt = X_train.loc[:200000].copy() yt = Y_train.loc[xt.index].copy() xv = X_val.copy() yv = Y_val.copy() print(xt.shape, yt.shape) print(xv.shape, yv.shape) rr = Ridge(alpha=10) rr.fit(xt, yt) Y_pred1 = rr.predict(xv) print('MSE:',np.sqrt(mean_squared_error(Y_pred1,yv))) print('MAE:',mean_absolute_error(Y_pred1,yv)) print('train model score:',rr.score(xt, yt)) print('test model score:',rr.score(xv,yv)) Output: (200001, 1169) (200001,) (34565, 1169) (34565,) MSE: 1431.9094471008812 MAE: 1058.263279968715 train model score: 0.8604548907625048 test model score: 0.8423455437913853 NB: be sure to load all the datasets properly; when dataset files in the mounted drive are corrupt, you may get unexpected errors when trying to execute the code.
H: Answering the question of "WHY" using AI? We have seen lots of natural occurrences that are happening in the whole world. Since we have great progress in technology and in particular AI, How can I employ ML to answer the question of WHY. In a sense that, without interpreting the result by human, Can machine interpret why something is happening or not? Like feeding a machine with lots of input, from synthesized data to actual data, does the machine answer any question or no, it does just analyze the data? AI: In short: no, one cannot feed a ML system with massive random heterogeneous data and expect the system to make sense of it by itself. ML is not magical, it needs to be fed with the right information in order to produce a meaningful and reliable answer. The closest application to this idea is Question Answering (QA). QA is an NLP task where the system answers a question, but the system must have been trained on a large collection of text and can only answer questions for which the answer exists in the text. For example the system can answer the question "why is the sky blue?" only if the training data contains a sentence such as "blue light's short wavelengths aren't easily absorbed and bounce off the sky, creating a sapphire hue".
H: Trained model performs worse on the whole dataset I used pytorch as the training framework and the official pytorch imagenet example to train an image classification model with my custom dataset. My custom dataset has 2 different labels (good and bad), and over 1 million images. I split the dataset into a training set (80%), a val set (10%), and a test set (10%). My model got an average 99% training accuracy in the training phase, and nearly 99% val accuracy in the validation phase. In the testing phase, the model got 99% testing accuracy. However, when I used my model to evaluate the whole dataset (all the images in my dataset), the accuracy was only 90%, which is pretty weird since my model updated its parameters in the training phase. The model should be able to achieve higher accuracy, but it can only get 90% accuracy when evaluating the whole dataset. I am wondering whether this is normal, or if there is anything I can check for this problem. AI: These performance values are inconsistent; this is definitely not normal. The whole dataset is made of the training set, the validation set and the test set. Accuracy is the proportion of correctly labelled instances, so accuracy on the whole dataset is: $$accu_{full}= 0.8 * accu_{train} + 0.1 * accu_{val} + 0.1 * accu_{test}$$ Since $0.8 * 0.99 + 0.1 * 0.99 + 0.1 * 0.99 = 0.99 \neq 0.90 $, there must be a mistake somewhere: at least one of your performance values is wrong.
H: Where do Q vectors come from in Attention-based Sequence-to-Sequence Transformers? I'm taking a course on Attention-based NLP but I'm not understanding the calculation and application of Attention, based on the use of Q, K, and V vectors. My understanding is that the K and V vectors are derived from the encoder input and the Q vector is derived from the decoder input. This makes sense to me in the context of training, where the entire input sequence is presented to the encoder and the entire output sequence is presented to the decoder. What does not make sense, however, is how this applies in the context of inference. In that case, it would seem like there is no input to the decoder, so where does the Q vector come from? AI: Your understanding is correct: in the encoder-decoder attention blocks, the Keys and Values are the output of the encoder, while the Query vectors come from the decoder layers. At inference time we have as many Query positions as the decoding step we are in. Remember that at inference time the decoder behaves autoregressively, meaning that at each timestep T it receives the T - 1 previous tokens and predicts the T-th token. Such a prediction is then concatenated to the previous step's input and used as input for the following step. This way, in the first step, we only have one Query vector (per layer), which is the one belonging to the first position (the beginning-of-sequence token, aka <s> or <bos>). In the second step, we have two Query vectors, and so on.
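A schematic sketch of that autoregressive loop (PyTorch-style pseudocode; model.encode, model.decode, the token ids and max_len are assumptions, not a real API): at step t the decoder input ys holds t tokens, so there are t Query vectors, while the Keys and Values keep coming from the fixed encoder memory.

import torch

def greedy_decode(model, src, bos_id, eos_id, max_len=50):
    memory = model.encode(src)               # Keys and Values are derived from this
    ys = torch.tensor([[bos_id]])            # decoder input grows step by step
    for _ in range(max_len):
        logits = model.decode(ys, memory)    # Queries come from the tokens in `ys`
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, next_id], dim=1)
        if next_id.item() == eos_id:
            break
    return ys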
H: Understanding SVM's Lagrangian dual optimization problem I was going through SVM section of Stanford CS229 course notes by Andrew Ng. On page 18 and 19, he explains Lagrangian and its dual: He first defines the generalized primal optimization problem: $$ \begin{align} \color{red}{ \min_w } & \quad \color{red}{f(w)} \\ s.t. & \quad g_i(w)\leq 0, i=1,...,k \\ & \quad h_i(w)=0, i=1,...,l \end{align} $$ Then, he defines generalized Lagrangian: $$\mathcal{L}(w,\alpha,\beta)=f(w)+\sum_{i=1}^k\alpha_ig_i(w)+\sum_{i=1}^l\beta_ih_i(w)$$ Then, he defines primal in terms of $\mathcal{L}$ $$=\color{red}{\min}_w\underbrace{\color{red}{\max}_{\alpha,\beta:\alpha_i\geq0}\color{red}{\mathcal{L}}(w,\alpha,\beta)}_{\text{call it }\theta_\mathcal{P}(w,b)}$$ (Since $\max\mathcal{L}=f$ when constraints are satisfied, else $\infty$.) Similarly, he defines dual optimization in terms of $\mathcal{L}$ $$=\color{blue}{\max}_{\alpha,\beta:\alpha_i\geq0}\underbrace{\color{blue}{\min}_w\color{blue}{\mathcal{L}}(w,\alpha,\beta)}_{\text{call it }\theta_\mathcal{D}(\alpha)}$$ Then, on page 21, he defines SVM's primal optimization problem: $$ \begin{align} \color{red}{ \min_{w,b} } & \quad \underbrace{\color{red}{\frac{1}{2}\Vert w\Vert^2}}_{\text{call it}\color{red}{f}} \\ s.t. & \quad y^{(i)}(w^Tx^{(i)}+b)\geq 1, i=1,...,n \end{align} $$ Then, he defines the SVM's Lagrangian as follows: $$\mathcal{L}=\frac{1}{2}\Vert w\Vert^2-\sum_{i=1}^n\alpha_i[y^{(i)}(w^Tx^{(i)}+b)-1]$$ Then, he minimizes $\mathcal{L}$ with respect to $w$ and $b$ to get: $$\mathcal{L}(w,b,\alpha)=\sum_{i=1}^n\alpha_i-\frac{1}{2}\sum_{i,j=1}^n y^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)}\quad\quad\quad \text{...equation (1)}$$ Then, he gives SVM's dual optimization problem: $$\begin{align} \max_\alpha & \quad W(\alpha)=\sum_{i=1}^n\alpha_i-\frac{1}{2}\sum_{i,j=1}^n y^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)} \\ \text{s.t.} & \quad \alpha_i\geq 0, 0=1,...,n \\ & \quad \sum_{i=1}^n\alpha_iy^{(i)}=0 \\ & & \text{...equation (2)} \end{align}$$ I am unable to map / relate SVM's dual in equation (2) to the dual in blue color. So after a bit thinking, I guess equation (1) is giving $$W(\alpha) = \theta_{\mathcal{D}}(\alpha) = \color{blue}{\min}_{w,b}\color{blue}{\mathcal{L}}(w,b,\alpha)$$ and SVM's dual is $$\max_\alpha W(\alpha) =\max_\alpha \theta_{\mathcal{D}}(\alpha) = \color{blue}{\max}_{\alpha}\color{blue}{\min}_{w,b}\color{blue}{\mathcal{L}}(w,b,\alpha)$$ I guess this correctly maps with earlier dual in blue color, right? Rephrasing the doubt, I guess the confusion was that I felt equation (2) is simply renaming $\mathcal{L}(w,b,\alpha)$ in equation (1) as $\max_\alpha W(\alpha)$. But that is not the case right? Again rephrasing the doubt, equation (2) is: $$\begin{align}\max_\alpha & \quad \left[W(\alpha)=\sum_{i=1}^n\alpha_i-\frac{1}{2}\sum_{i,j=1}^n y^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)} \right]\\ \text{s.t.} & \quad \alpha_i\geq 0, 0=1,...,n \\ & \quad \sum_{i=1}^n\alpha_iy^{(i)}=0 \end{align} $$ and not: $$\begin{align}\color{red}{[}\max_\alpha & \quad W(\alpha)\color{red}{]}=\sum_{i=1}^n\alpha_i-\frac{1}{2}\sum_{i,j=1}^n y^{(i)}y^{(j)}\alpha_i\alpha_j(x^{(i)})^Tx^{(j)} \\ \text{s.t.} & \quad \alpha_i\geq 0, 0=1,...,n \\ & \quad \sum_{i=1}^n\alpha_iy^{(i)}=0 \end{align} $$ Am I correct with this understanding? AI: You are correct. 
Sanity check: The final (incorrect) formulation of the optimization problem would not make sense because when you maximize over a certain variable, you essentially take that variable out of the expression. It's fixed. In that vein, I think Ng's notation would be more informative in equation (1) if they wrote the lagrangian as $L(\alpha)$ rather than $L(w,b,\alpha)$, since we have already minimized over $w$ and $b$. Notice how they are not present in the expression anymore?
H: Assess the goodness of a ML generative model (text) Take a RNN network fed with Shakespeare and generating Shakespeare-like text. Once a model seems mathematically fine, as can be assessed by observing its loss and accuracy over training epochs, how can one assess and refine the goodness of the result ? Only human eyes can judge of the readable character of a text, its creativity, its grammatical correctness etc. QUESTION : Which systematic approach can be used to refine a generative model (text) ? AI: The answer is in the question :) Only human eyes can judge of the readable character of a text, its creativity, its grammatical correctness etc. In the example of a model trained on Shakespeare's writing, take a group of human annotators (preferably literature experts) and ask them to annotate texts as likely authored by Shakespeare or not (variant: mark texts according to how close they are to Shakespeare's style). The texts provided to them should contain actual texts by Shakespeare and texts generated by the model, of course. Btw this is the principle of the well known Turing Test.
H: Difference between ReLU, ELU and Leaky ReLU. Their pros and cons majorly I am unable to understand when to use ReLU, ELU and Leaky ReLU. How do they compare to other activation functions (like sigmoid and tanh), and what are their pros and cons? AI: Look at this ML glossary: ELU ELU is very similar to ReLU except for negative inputs. They are both in identity function form for non-negative inputs. On the other hand, ELU becomes smooth slowly until its output equals $-\alpha$, whereas ReLU smooths sharply. Pros ELU becomes smooth slowly until its output equals $-\alpha$, whereas ReLU smooths sharply. ELU is a strong alternative to ReLU. Unlike ReLU, ELU can produce negative outputs. Cons For $x > 0$, it can blow up the activation with the output range of [0, inf]. ReLU Pros It avoids and rectifies the vanishing gradient problem. ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. Cons One of its limitations is that it should only be used within hidden layers of a neural network model. Some gradients can be fragile during training and can die. It can cause a weight update which makes the neuron never activate on any data point again. In other words, ReLU can result in dead neurons. Put differently, for activations in the region ($x<0$) of ReLU, the gradient will be 0, because of which the weights will not get adjusted during descent. That means those neurons which go into that state will stop responding to variations in error/input (simply because the gradient is 0, nothing changes). This is called the dying ReLU problem. The range of ReLU is $[0,\infty)$. This means it can blow up the activation. LeakyReLU LeakyReLU is a variant of ReLU. Instead of being 0 when $z<0$, a leaky ReLU allows a small, non-zero, constant gradient $\alpha$ (normally, $\alpha=0.01$). However, the consistency of the benefit across tasks is presently unclear. [1] Pros Leaky ReLUs are one attempt to fix the “dying ReLU” problem by having a small negative slope (of 0.01, or so). Cons As it possesses linearity, it can’t be used for complex classification. It lags behind sigmoid and tanh for some of the use cases. Further reading Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, Kaiming He et al. (2015)
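A small NumPy sketch of the three functions, to make the element-wise differences concrete:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(relu(x))         # negatives clipped to 0
print(leaky_relu(x))   # negatives scaled by alpha
print(elu(x))          # negatives saturate smoothly towards -alpha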
H: scikit-learn OneHot returns tuples and not vectors First I do a label encoding on all the columns that are strings so they will be numeric. After that, I take just the columns with the labels, convert them to an np array, reshape, and convert them to one-hot encoding. The "y" is of size 900 (of floats), and in the reshape I change it to (900,1) so the one-hot will work. I use scikit-learn OneHotEncoder, and when doing fit_transform the result is: Why do I get a tuple as output and not vectors of 1 and 0? def OneHot(self,y): ohe = OneHotEncoder() y = y.reshape(len(y) , 1) y_hot = ohe.fit_transform(y) print(y_hot) return y_hot AI: Why do I get a tuple as output and not vectors of 1 and 0? You get this because by default OneHotEncoder() uses a sparse matrix representation. Hence, it transforms the elements of y into elements of type - <1x3 sparse matrix of type '<class 'numpy.float64'>' with 1 stored elements in Compressed Sparse Row format> If you want the output as vectors, then just put sparse=False in OneHotEncoder() Following is an example of the same - from sklearn import datasets from sklearn.preprocessing import OneHotEncoder # Iris dataset X, y = datasets.load_iris(return_X_y=True) print("Shape of dataset - ",X.shape, y.shape) # Your code def OneHot(y): ohe = OneHotEncoder(sparse=False) y = y.reshape(len(y) , 1) # you can also use y = y.reshape(-1, 1) instead y_hot = ohe.fit_transform(y) return y_hot y_oh = OneHot(y) print("Shape of One Hot Encoded y - ",y_oh.shape) print("Single element in y - ",y_oh[0]) The code generates the following output - Shape of dataset - (150, 4) (150,) Shape of One Hot Encoded y - (150, 3) Single element in y - [1. 0. 0.]
H: Data extraction using crawlers I have a rather simple data scraping task, but my knowledge of web scraping is limited. I have an Excel file containing the names of 500 cities in a column, and I'd like to find their distance from a fixed city, say Montreal. I have found this website which gives the desired distance (in both km and miles). For each of these 500 cities, I'd like to read the name in the Excel file, enter it in the "to" box, set the "from" field to "Montreal", press on the "Find" button (or Enter), extract the distance in km, and store the result in a vector. Is there any source which walks you through these steps in Python, R, or even an online service? AI: I think there are a few different ways to handle such a task. You can try creating a Python script that uses pandas to extract the names of the cities, then uses Selenium for navigating the site, and finally saves the results back to a file with pandas. For that you need a basic HTML understanding to find the HTML elements that need to be handled, both for inserting your data into the form and for collecting the results after each city is inserted. Here is a source that explains the basics of finding elements with Selenium.
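A hedged sketch of that workflow with pandas and Selenium. The file name, column name, URL and element ids ("from", "to", the button and the result field) are placeholders - inspect the page's HTML and substitute the real ones:

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

cities = pd.read_excel("cities.xlsx")["City"].tolist()    # assumed column name
driver = webdriver.Chrome()
distances = []

for city in cities:
    driver.get("https://www.example-distance-site.com")   # placeholder URL
    driver.find_element(By.ID, "from").send_keys("Montreal")
    driver.find_element(By.ID, "to").send_keys(city)
    driver.find_element(By.ID, "find-button").click()
    distances.append(driver.find_element(By.ID, "distance-km").text)

pd.DataFrame({"city": cities, "km": distances}).to_excel("distances.xlsx", index=False)
driver.quit()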
H: What does "regularization" actually refer to? I am familiar with regularization, where we add a penalty in our cost function to force the model to behave a certain way. But is this a definition of regularization? Typically we regularize to get a "simpler" model in some sense. But we could easily create a penalty function that forces a model to be more complex. Would this be considered regularization? Most commonly it is a penalty on the size of our model parameters. If we add a penalty that is not a function of the model parameters, but rather the model output, would that still be considered regularization? Or is that just a modified objective function? AI: According to wikipedia, the definition regularization is the process of adding information in order to solve an ill-posed problem or to prevent overfitting. One common approach is to add a penalty term for large parameter values to the loss function. There are many other approaches to regularization. Here are a couple of other examples: Increasing the amount of data (either by collecting more data or data augmentation of existing data) Early stopping of the training process Add a prior to the model Dropout - randomly remove connections during training Pruning - removing connections after training
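As a small illustration of the "penalty on parameter size" flavour of regularization, here is a sketch of a ridge (L2-penalized) loss in NumPy; the variable names are arbitrary:

import numpy as np

def ridge_loss(w, X, y, lam):
    residuals = X @ w - y
    return np.mean(residuals ** 2) + lam * np.sum(w ** 2)   # data-fit term + L2 penalty

Under the broad definition above, a penalty term like this is just one of many ways of adding information to the problem in order to prevent overfitting.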
H: How are scores calculated for each class of binary classification The formula for Precision is TP / (TP + FP), but how do I apply it individually for each class of a binary classification problem? For example, here the precision, recall and f1 scores are calculated for class 0 and class 1 individually, and I am not able to wrap my head around how these scores are calculated for each class individually. Can someone please explain to me with this confusion matrix as an example? Please explain in layman's terms if possible. Thank you AI: Your confusion matrix does not correspond to your classification report. Also the matrix that you show is not standard: the labels "True Positive" and "True negative" are confusing because these terms should only be used for the classification status (see below). They mean "true class is positive" and "true class is negative". It has the true classes as columns and the predicted classes as rows, whereas it's usually presented with true classes as rows and the predicted classes as columns. The same data as a regular confusion matrix (rows = true classes, columns = predicted classes): row 0: 15, 10; row 1: 15, 60. For example, there are 10 instances which have true class 0 but predicted class 1. The first thing to define clearly is which class is considered as the positive class, because everything else depends on that. Here let's assume that class 1 is positive, 0 is negative. Now we can obtain the number for every classification status: An instance which has true class 1 and predicted class 1 is a true positive, meaning that it is predicted positive and its prediction is correct (same as the true class). In the example there are 60 TP. An instance which has true class 0 and predicted class 0 is a true negative, meaning that it is predicted negative and the prediction is correct (same as the true class). In the example there are 15 TN. An instance which has true class 0 and predicted class 1 is a false positive, meaning that it is predicted positive but the prediction is incorrect (different from the true class). In the example there are 10 FP. An instance which has true class 1 and predicted class 0 is a false negative, meaning that it is predicted negative but the prediction is incorrect (different from the true class). In the example there are 15 FN. Once the above is clear it's straightforward to apply the formula, for instance for precision: $$P=\frac{TP}{TP+FP}=\frac{60}{60+10}=0.86$$ Keep in mind that the obtained score is for class 1 as the positive class. In order to obtain precision for the other class, you need to define it as the positive class and redo the classification status.
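To double-check the numbers, here is a small sketch that rebuilds the example with scikit-learn from the counts above (15 TN, 10 FP, 15 FN, 60 TP):

from sklearn.metrics import classification_report, precision_score

y_true = [0] * 15 + [0] * 10 + [1] * 15 + [1] * 60
y_pred = [0] * 15 + [1] * 10 + [0] * 15 + [1] * 60

print(precision_score(y_true, y_pred, pos_label=1))   # 60 / (60 + 10) = 0.857
print(precision_score(y_true, y_pred, pos_label=0))   # 15 / (15 + 15) = 0.5
print(classification_report(y_true, y_pred))          # per-class precision, recall, f1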
H: How does Gradient Descent work? I know the calculus and the famous hill and valley analogy (so to say) of gradient descent. However, I find the update rule of the weights and biases quite terrible. Let's say we have a couple of parameters, one weight 'w' and one bias 'b'. Using SGD, we can update both w and b after the evaluation of each mini-batch. If the size of the mini-batch is 1, we give way to online learning. What if I do not want to use any of these methods and simply want to use "Gradient descent" in its entirety? What is the update rule in that case? To be more precise; at what step do w and b get updated? And at what step do we stop? That said, the elephant in the room is the initial value of w and b. What is the parameter for choosing the first values of w and b? AI: Suppose you have a strictly convex function $f(x)$ that you'd like to minimize; to do so using gradient descent you keep applying $$x_{i+1} = x_{i}-\lambda\frac{\partial f}{\partial x}$$ until convergence, that is when $x_i$ is changing very weakly or not changing at all, because that implies that ${\partial f}/{\partial x}$ is zero or very close to zero in that neighborhood, which in turn implies that you've reached the minimum. The same applies if $f$ is a function of many variables: the gradient descent rule applies to each of them. Now in data science $f$ can be a function of many variables that also involves a sum, for instance $$f(\theta_1,\theta_0)=\sum_{i=1}^m\left(y_{i}-(\theta_1 x_i+\theta_0)\right)^2$$ where $x_i$ and $y_i$ are drawn from some dataset of length $m$. In that case ${\partial f}/{\partial \theta_1}$ and ${\partial f}/{\partial \theta_0}$ are also going to involve the sum from $i=1$ to $i=m$; that is, to do a single update step you need to load the entire dataset in memory because you need to compute the derivatives. An alternative formulation that can be shown to be faster while also avoiding this issue (because it can be unfeasible to load the entire data set) uses only a subset of the dataset for each step; that subset can even be, like you said, just one example from the dataset. So to answer your questions: 1 - You can use "Gradient Descent" in its entirety by considering the whole dataset for each iteration. 2 - You can always derive the update rule yourself by differentiating with respect to each of the parameters. If you see sums over the whole dataset, leave them there so you can use Gradient Descent in its entirety. 3 - Once you compute the partial derivatives, you plug them into the iterative scheme and that's when the parameters get updated. Again, to compute the partial derivatives you might need to consider the whole dataset if you're using Gradient Descent in its entirety, also known as Batch Gradient Descent. 4 - You stop updating the weights whenever you believe that the loss function has reached the minimum. But because this might sometimes cause you to overfit the data if you have many parameters, you might stop whenever your model has reasonable accuracy on the validation set. I suggest that you read about early stopping. 5 - I can't see how this is "the elephant in the room" given how it isn't so relevant to the rest of the questions; however, like other iterative schemes used in optimization, you start with random values for your parameters and the gradient should lead you to the minimum. Regardless, in some scenarios, there do exist methods that help you start with better random guesses.
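A minimal NumPy sketch of point 1, full-batch gradient descent for the linear model above; the gradients are scaled by 1/m, and the learning rate and iteration count are arbitrary choices:

import numpy as np

def batch_gradient_descent(x, y, lr=0.1, n_iter=5000):
    theta1, theta0 = 0.0, 0.0           # arbitrary initial values (point 5)
    m = len(x)
    for _ in range(n_iter):             # the whole dataset is used at every step
        y_hat = theta1 * x + theta0
        grad1 = (-2.0 / m) * np.sum((y - y_hat) * x)   # d f / d theta1 (scaled by 1/m)
        grad0 = (-2.0 / m) * np.sum(y - y_hat)         # d f / d theta0 (scaled by 1/m)
        theta1 -= lr * grad1
        theta0 -= lr * grad0
    return theta1, theta0

x = np.linspace(0, 1, 100)
y = 3.0 * x + 2.0 + 0.1 * np.random.randn(100)
print(batch_gradient_descent(x, y))     # roughly (3, 2)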
H: Keras: Custom output layer for multiple multi-class classifications Hello, I’m quite new to machine learning and I want to build my first custom layer in Keras, using Python. I want to use a dataset of 103 dimensions to do classification task. The last fully connected layer of the model has 103 neurons (represented by 13 dots in the image). Groups of five dimensions of the former layer should be connected to three neurons of the output layer, so there will be 20 classifications. The neurons of the output layer represent "True" ("T" in the image), "indifferent" ("?") and "False" ("F"). The remaining three don’t need connections to the output layer. How can I build this layer? And how can I make sure, that each of the 20 groups with three neurons gives probabilities that add up to 1? Can I apply the softmax activation function to each of the groups, for example? Edit – This is my solution: # define input and hidden layers. append them to list by calling the new layer with the last layer in the list self.layers: list = [keras.layers.Input(shape=self.neurons)] [self.layers.append(keras.layers.Dense(self.neurons, activation=self.activation_hidden_layers)(self.layers[-1])) for _ in range(num_hidden_layers)] self.layers.append(keras.layers.Dense(self.neurons - self.dims_to_leave_out, activation=activation_hidden_layers)(self.layers[-1])) # define multi-output layer by slicing the neurons from the last hidden layer self.outputs: list = [] index_start: int = 0 for i in range(int((self.neurons - self.dims_to_leave_out)/self.neurons_per_output_layer)): index_end: int = index_start + self.neurons_per_output_layer self.outputs.append(keras.layers.Dense(self.output_dims_per_output_layer, activation=self.activation_output_layers)(self.layers[-1][:, index_start:index_end])) index_start = index_end AI: Functional API allows you to design more complicated models, including multi-output models. Check the documentation to see how you can connect specific neurons to others of your choice. You should be able to make custom layers from scratch. Once you build distinct output layers, probabilities within each can be set just as usual by using softmax activation.
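For reference, a hedged sketch of the grouped-softmax idea with the functional API: 20 separate Dense(3, activation='softmax') heads, each reading its own slice of 5 units, with the last 3 units left unconnected. The hidden layer sizes are assumptions, and each softmax head guarantees that its 3 probabilities sum to 1:

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(103,))
hidden = layers.Dense(103, activation='relu')(inputs)

outputs = []
for i in range(20):
    # the i-th group of 5 units; units 100-102 are not connected to any head
    group = layers.Lambda(lambda t, s=i * 5: t[:, s:s + 5])(hidden)
    outputs.append(layers.Dense(3, activation='softmax', name=f'group_{i}')(group))

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss=['categorical_crossentropy'] * 20)
model.summary()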
H: How to choose embedding size for tensorflow recommender system I am going to build a recommender system using TensorFlow Recommenders and the two-tower model. I have been wondering how to choose the size of the embedding dimension. Are there any papers on this for large-scale recommender systems? In the example, Google chose a size of 32 dimensions for the movie recommender. My vocabulary contains around 30,000 different items. Help is highly appreciated! AI: Generally, the size of the embedding layer is not an important hyperparameter. Research has found that embedding dimensions lower than ~19 are not performant, and that improvement is asymptotic after ~200 dimensions. Multiples of 32 are chosen for hardware efficiency.
H: Confusion matrix terminology I am working on machine learning with a supervised problem with 2 classes: NO and YES, and I need some precision about confusion matrix. I read 2 differents terminologies, some writes matrix confusion as: $$ \begin{pmatrix} & &\text{Positive Prediction} &\text{Negative Prediction}\\ &\text{Actual Positive Class} &TP &FP \\ &\text{Actual Negative class} &FN &TN \end{pmatrix} $$ where TP = true positive, FP = false positive, FN = false negative, and TN = true negative. And I also saw, the confusion matrix written like that: $$ \begin{pmatrix} & &\text{Predicted NO} &\text{Predicted YES}\\ &\text{Actual NO Class} &TN &FP \\ &\text{Actual YES class} &FN &TP \end{pmatrix} $$ TP and TN are inverted. Which one is corrected, especially for my problem? Thanks. AI: I think this page from Google's machine learning crash course explains true vs false and positive vs negative very well. In addition the wikipedia page on the confusion matrix is very informative. As you will see the second matrix is the correct one, since a false positive is an example that is incorrectly predicted as being positive.
H: Machine learning accuracy for not a class-imbalanced problem I would like know if the accuracy has an impact on not class-imbalanced dataset ? I know that accuracy is sensitive to class-imbalance and also always good to be able to appreciate precision and recall values. Is it also the case to not class-imbalanced dataset? AI: Accuracy treats all misclassifications as the same - we only care whether we got the answer right or not, but don't care about what kind of error was made. Even for class-balanced problems, this may not be a desirable feature. If misclassifications come with different "costs", accuracy is not a good measure of the overall utility of your classifier. Suppose you are designing a medical screening test for a serious but curable disease, where patients with a negative result are given a clean bill of health and patients with a positive test are referred for a highly accurate but more expensive confirmatory test. False positives are scary for the patient but do little harm, as the only cost is the additional test which comes back negative. False negatives are a much bigger problem, as the patient goes untreated and dies a preventable death. Regardless of the actual disease incidence or class balance in the population, evaluating your classifier in terms of accuracy is not terribly useful - you don't really care about accuracy within the actual negative population, all you really care about is correctly identifying the positive cases. Two classifiers, one of which has 80% sensitivity and 100% specificity, and the other of which has 100% sensitivity and 80% specificity would have identical accuracies in a class-balanced scenario, but would behave very differently in practice and would be suitable for totally different purposes. In any situation where false positives and false negatives have different costs, accuracy will fail to respect the different types of misclassifications and treat them all the same. These problems with accuracy are exacerbated by class imbalance (the metric is always dominated by the majority class, even if it's the class you don't care about), but still exist even with balanced classes (it will still equally weight misclassifications you don't care much about).
H: Is standardization/normalization a good way of reducing the impact of outliers when I'm training a machine learning model? Recently, I have read some papers in which the authors state that they have performed standardization/normalization of the variables for reducing the impact of outliers in the machine learning models trained with the data. Does it make sense? Why? I think that the difference between outliers and the other values is still in the data, after standardization. AI: Of course, classic techniques, such as the min-max scaler and z-score normalization, just change the range of the values, hence they are prone to outliers and do not solve the problem. However, what these papers probably suggest makes sense, provided a few conditions are met. In this context, I will try to summarize everything I can think of regarding both normalization and standardization. Although not entirely accurate, provided your data follow a power-law distribution, you can scale the data with the log function (log scaling). This would change your data distribution to a "narrower" scale, ultimately decreasing the potential effect of your outliers. Feature Clipping: If your dataset has extreme outliers, you can always clip your features to a fixed numerical value (or to a fixed value ± 3 standard deviations). This would result in an information loss but would effectively combat the outliers' effect in your analysis. Robust Scaler: When there are many instances of outliers in your dataset, you can normalize the data by subtracting the median and dividing by the IQR (the difference between the 75th and 25th percentiles of your data). This would not negate the effect of outliers in your machine learning model, but will instead normalize your data sensibly, despite the existence of these extreme points. You can always use tree-based algorithms or neural networks for your analysis, which are robust to outliers.
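A short scikit-learn sketch of the robust scaler mentioned above, next to a standard scaler for comparison; note that in both cases the outlier remains an outlier, only the scaling parameters are affected differently:

import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])   # one extreme outlier

print(StandardScaler().fit_transform(x).ravel())   # mean/std are dragged by the outlier
print(RobustScaler().fit_transform(x).ravel())     # median/IQR are barely affected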
H: Difference in result in every run of Neural network? I have written a simple neural network (MLP Regressor), to fit simple data frame columns. To have an optimum architecture, I also defined it as a function to see whether it is converging to a pattern. But every time that I run the model, it gives me a different result than the last time that I tried, and I do not know why? Due to the fact that it is fairly difficult to make the question reproducible, I can not post the data but I can post the architecture of the network here: def MLP(): #After 50 nn=30 nl=25 a=2 s=0 learn=2 learn_in=4.22220046e-05 max_i=1000 return nn,nl,a,s,learn,learn_in,max_i#, def process(df): y = df.iloc[:,-1] X = df.drop(columns=['col3']) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=27) return X_train, X_test, y_train, y_test def MLPreg(x_train, y_train):# nn,nl,a,s,learn,learn_in,max_i=MLP()#nl, act=['identity', 'logistic', 'relu','tanh'] #'identity'=Linear activ=act[a] sol=['lbfgs', 'sgd', 'adam'] solv=sol[s] l_r=['constant','invscaling','adaptive'] lr=l_r[learn] model = MLPRegressor(hidden_layer_sizes=(nl,nn), activation=activ, solver=solv, alpha=0.00001, batch_size='auto', learning_rate=lr, learning_rate_init=learn_in, power_t=0.5, max_iter=max_i, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000) # model = MLPRegressor(max_iter = 7000) # param_list = {"hidden_layer_sizes": [(10,),(50,)], "activation": ["identity", "tanh", "relu"], "solver": ["lbfgs", "sgd", "adam"], "alpha": [0.00005,0.0005]} # gridsearchcv = GridSearchCV(estimator=model, param_grid=param_list) model.fit(x_train, y_train) return model ``` AI: Some difference is expected as you set random_state = None and shuffle=True in your model. This results in weights being initialized randomly and training data to be used in different orders. For reproducible results, you should set it to an integer. See Scikit documentation for random_state variable.
H: Where can I find study materials? Can anyone recommend me some material (books, blogs, youtube channels, ...) to study statistics, Machine Learning and in general Data Science topics? Thanks AI: Deep Learning Specialization on Coursera: You can follow the lectures for free or apply for a scholarship if you can't afford it. Even though some content is a bit old, still a very extensive source from fundamental concepts of machine learning to advanced optimization concepts and additional courses on CNN's and Sequence models. Also, there are several completely free courses on Udacity, from artificial intelligence, neural networks to more framework specific courses such as TensorFlow for deep learning, Machine Learning on Azure or AWS Deep Racer. Also, Sentdex has several tutorials on Data Analysis and Machine Learning. He has a nice style and usually goes through everything in detail. fast.ai is another comprehensive and completely free resource I can suggest. And TensorFlow (or Keras) also have very accessible resources and tutorials within their documentation.
H: Explanation of Karpathy tweet about common mistakes. #5: "you didn't use bias=False for your Linear/Conv2d layer when using BatchNorm" I recently found this twitter thread from Andrej Karpathy. In it he states a few common mistakes during the development of a neural network. you didn't try to overfit a single batch first. you forgot to toggle train/eval mode for the net. you forgot to .zero_grad() (in pytorch) before .backward(). you passed softmaxed outputs to a loss that expects raw logits. you didn't use bias=False for your Linear/Conv2d layer when using BatchNorm, or conversely forget to include it for the output layer. This one won't make you silently fail, but they are spurious parameters. Thinking view() and permute() are the same thing (& incorrectly using view). I am specifically interested in an explanation or motivation for the fifth comment. Even more so given that I have a network built akin to self.conv0_e1 = nn.Conv2d(f_in, f_ot, kernel_size=3, stride=1, padding=1) self.conv1_e1 = nn.Conv2d(f_ot, f_ot, kernel_size=3, stride=1, padding=1) self.norm_e1 = nn.BatchNorm2d(num_features=f_ot, eps=0.001, momentum=0.01) self.actv_e1 = nn.ReLU() self.pool_e1 = nn.MaxPool2d(kernel_size=2, stride=2) Where the torch.Conv2d has an implicit bias=True in the constructor. How would I go about implementing the fifth point in the code sample above? Though, based on the second sentence of the point, it doesn't seem like this matters?.. AI: Love this question. First, note the last thing said in the tweet you quote: "This one won't make you silently fail, but they are spurious parameters." Basically, this is a sort of mathematical quibble, but to see what is happening here, consider what's happening in a BatchNorm2d layer vs Conv2d (quoting the PyTorch Docs): BatchNorm2d: $y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta$ Conv2d: $\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$ Both operations have an additive bias along the channel dimension. The two biases are either redundant or in conflict. If they're redundant, your model is performing extra, unnecessary computation. When they're in conflict, some parameters become useless, e.g. in the case where we set momentum=0 for BatchNorm2d, the preceding Conv2d layer will have a set of trainable parameters with no useful gradient. You can fix this with Conv2d(..., bias=False, ...). Again, this is unlikely to have a significant impact on most networks, but it can be helpful and is good to know. Follow-up edit: The reason that this is relevant is that the BatchNorm2d layer is a linear operation. Multiple linear operations can always be combined into a single one. So with Conv2d -> BatchNorm2d, the bias in Conv2d is redundant. However, as long as you have activation functions, Conv2d layers are not linear, so with two conv layers, Conv2d -> Conv2d, you do not run into this problem. So it's only the last layer before batchnorm that matters. (That said I've definitely seen examples where practitioners just turn off all biases for conv layers, and it doesn't seem to harm things.)
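Applied to the snippet in the question (and assuming the modules are used in the order they are defined, with norm_e1 directly consuming the output of conv1_e1), only the convolution that feeds the BatchNorm needs the change:

self.conv0_e1 = nn.Conv2d(f_in, f_ot, kernel_size=3, stride=1, padding=1)
self.conv1_e1 = nn.Conv2d(f_ot, f_ot, kernel_size=3, stride=1, padding=1, bias=False)  # feeds BatchNorm2d
self.norm_e1 = nn.BatchNorm2d(num_features=f_ot, eps=0.001, momentum=0.01)
self.actv_e1 = nn.ReLU()
self.pool_e1 = nn.MaxPool2d(kernel_size=2, stride=2)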
H: Is there any difference between classifying images by their type and by the objects they represent? Let us suppose that I would like to train a machine learning model for classifying images according to their types (for example, photographs and drawings). The techniques that I can use for this would be different from the techniques used for classifying images in the classes of the objects represented in them (dogs, cats, birds, etc)? Or both tasks are so similar that I can use the same techniques in both? AI: A machine learning model (simple or neural net) is agnostic of the image type or the object depicted. That is because in both cases images consist of pixels (i.e a matrix of numbers with a certain range) and a CNN model, for instance, will identify a dog or a drawing based on the image's pixels pattern (in a very high level). Therefore, both tasks can be considered similar or at least models with similar architectures can be constructed, providing correct labels are given.
H: Software/Library Suggestion: Is there a usable open-source sequence tagger around? (Not sure if this is the right community for the question - please do downvote if stats. or whatever else is more appropriate...) I'm looking for a suggestion for either a command-line tool or library (preferably Python or Ruby, but at this point, anything will do) implementing non-Parts-of-Speech-specific sequence tagging/labelling. If it was PoS-specific but could be re-trained for custom categories, that'd be fine, too. The projects I've found mostly seem to be abandoned PhD thesis codebases or similar and I've not been able to make any of them work in a practical manner. The one I got the furthest with was pytorch-sequence-tagger. In case it helps with giving suggestions: the purpose is to tell apart tokens which are part of library class marks from tokens which are part of author names or book titles, but where the input data are too irregular for a rule-based system to work 100%. AI: One can find sequence labelling libraries by searching for the term conditional random fields, the state of the art method. Probably one could also find libraries and tutorial by searching the term Named Entity Recognition, which is certainly the most standard NLP application of sequence labelling. Here are a few libraries that I know of: CRF++ crfsuite (there is a python wrapper) Wapiti is a particularly efficient library (also with python wrapper) See also this question.
H: What package, software or particular tool produced this bar plot? Would anyone be able to identify the package or tool used to created this particular bar plot? It has a distinctive font. Source (with paywall): https://towardsdatascience.com/stopping-covid-19-with-misleading-graphs-6812a61a57c9 AI: This seems to be a bar chart made using Google sheets, see also this chart which was made with Google sheets:
H: What tool can I use for produced this type of lines in a multiple line graph? I was viewing a video about the declination in fertility rates when I saw a good line chart. This is a multiple graphs line and each line have different form this can be useful for readers don't get confused comparing the lines because the colors is not enough for classify when you have various line. This method could be useful when we have a up to +20 lines for plotting. You don't need to answer about the specific tool of the graph above. only where I can do this? for example what theme of R, Python or Power Bi. I am able to do this for a line chart? AI: The variations between lines in the image you have provided are usually set using color and line style properties in a programmatic plotting library (e.g. gnuplot, matplotlib in Python, etc). Specifically how to control color and style varies from program to program, but an example showing a Matplotlib plot using the Seaborn styling package is similar to the image provided. In the code below for the Matplotlib library it uses the c parameter for setting the line color (documentation) and ls parameter for setting line style (documentation). Example: import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl th = np.linspace(0, 2*np.pi, 128) sty='seaborn' mpl.style.use(sty) fig, ax = plt.subplots(figsize=(3, 3)) ax.set_title('style: {!r}'.format(sty), color='C0') ax.plot(th, np.cos(th), 'C1', label='C1',c='r',ls='solid') ax.plot(th, np.sin(th), 'C2', label='C2',c='b',ls=(0,(3,5,1,5))) ax.legend() fig.savefig('example.png') Output Image:
H: Determining if a dataset is balanced I'm learning about training sets and I have been provided with a set of labelled customer data that segments customers into one of two classes: A or B. The dataset also contains gender, age and profession attributes for each customer. The distribution of classes in the dataset is like this: 92% of customers are class A 8% of customers are class B Based on my understanding, this is an unbalanced dataset because the distribution of classes is not equal. However, I'm confused as to how the other attributes play a role in determining whether or not this dataset is balanced. For example, if my dataset has equal distributions of gender, profession and age values, is the dataset still considered unbalanced because the value I'm trying to train my model to predict (class A or B) is unbalanced? Alternatively, if my class distribution was equal, would my dataset be considered balanced regardless of the other attributes? For example, if my dataset had 90% female customers and 10% male, but the class distribution was 50% A and 50% B, would the dataset be considered balanced? My main question is, when determining whether or not my dataset is balanced, should I be looking at the distribution of classes within the dataset or the distribution of the other attributes that may/may not be good predictors of the class? AI: I am not sure what environment are you using this on. It would help to understand if you provided more information on that. Answering the question you have, the data set is imbalanced. If you are making a supervised learning model, it helps to have equal amounts of data for each label. Check the frequency distribution for the data set. You can look at the below mentioned statistics to look for correlation in the data, basically assist to choose the features/columns to predict class A or B. Correlation matrix - Gives information how much each column relates with the label column. Clustering algorithms can give you a good visual representation of how the data is naturally grouped.
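A quick way to check the class distribution, and how it breaks down by another attribute, with pandas (column names and toy data here are assumptions):

import pandas as pd

df = pd.DataFrame({
    "customer_class": ["A"] * 92 + ["B"] * 8,   # toy data mirroring a 92/8 split
    "gender": ["F", "M"] * 50,
})
print(df["customer_class"].value_counts(normalize=True))                    # label balance
print(pd.crosstab(df["gender"], df["customer_class"], normalize="index"))   # label balance within each gender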
H: Understanding deconvolutional network loss function In the paper (1), there is a description of a deconvolutional network. The loss function (with only one layer) compares the colour channels of the original image with the colour channels of the generated image. To increase the sparsity of the feature maps, a regularization term is also added. Intuitively, one can say that the loss function tries to create feature maps and filters to reconstruct the original image. What does the dark green marked term mean? (1) Link to paper: https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf AI: The green term indicates that they are taking the L2 norm of the difference between the orange and the pink terms (indicated by the lower 2), and taking the square of that L2 norm (indicated by the upper 2). See also this answer on the statistics stackexchange.
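As a quick numeric illustration of a squared L2 norm (a generic check, not code from the paper):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 2.5, 1.0])
# ||a - b||_2^2: sum of squared element-wise differences
squared_l2 = np.sum((a - b) ** 2)
# equivalently, square the L2 norm returned by numpy
assert np.isclose(squared_l2, np.linalg.norm(a - b) ** 2)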
H: Removing the outliers improved my models, is what I did good or bad? I used cross validation on my data (11000 rows) with a maximum salary of 10000 and after some cleaning I got to rmse=70. Then, just to try things, I removed the outliers 10 times; now I have 9000 rows with a maximum salary of 260 and I got rmse=23. Is what I did bad even though I got a better rmse? Is the jump from a maximum of 10000 to 260 a bad thing? Is the jump from 11000 rows to 9000 a bad thing? AI: Removing outliers is only appropriate when you have reason to believe the data is wrong. Do you have such a reason? Otherwise, you are, as @Dave suggested, tricking yourself into thinking you have good predictive power. If your data is not "nicely" distributed, and you're having trouble fitting a model to predict it, the first thing I would try is to transform the salary field to a more usable range. For example, you can try predicting log(salary) or sqrt(salary), then transforming it back if necessary.
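A minimal sketch of the log-transform idea with scikit-learn (the model choice and the synthetic "salary"-like data are placeholders, not the asker's data):

import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                                   # placeholder features
y = np.exp(X @ [0.5, 1.0, -0.3] + rng.normal(scale=0.1, size=200)) * 100        # skewed target, stand-in for salary

# The regressor is fit on log1p(salary); predictions are mapped back with expm1 before scoring
model = TransformedTargetRegressor(regressor=LinearRegression(), func=np.log1p, inverse_func=np.expm1)
rmse = -cross_val_score(model, X, y, scoring="neg_root_mean_squared_error", cv=5)
print(rmse.mean())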
H: Steps of multiclass classification problem So this question is more theoretical than a practical one. I got a dataframe with 4 classes of cars' body types (e.g. sedan, hatchback, etc.) and different characteristics (doors, seats, maximum speed, etc.). The goal is to build a model which predicts the class by means of the provided features. The steps which I've applied are the following: Encode classes of body types into variables (0, 1, 2, 3) Check if classes are balanced and in case of imbalance correct this issue Feature selection based on the results of Pearson, Chi-2, RFE, logistic regression and XGBoost Applying k-fold cross-validation with XGBoost on the whole dataset. What is the correct order of implementing the steps from the second one onwards? Should I firstly balance classes, then pick features and then apply XGBoost? Furthermore, should I split the dataset into train and test and only then apply CV, or may I stack XGBoost with CV on the whole dataset? UPD: the class distribution is below 1 0.512228 2 0.282609 0 0.118207 3 0.086957 AI: Please find the steps below. Encode classes of body types into variables (0, 1, 2, 3). Remove the rows that contain NaN values, or fill those cells. Remove/update cells that have implausible data, like 10 doors. Remove outliers if present in a column like speed. Make the data balanced if required. Apply the right encoding technique for every non-numeric column. Perform standardization and normalization. If you feel that you really have too many columns, apply PCA with 95% explained variance to reduce the column count. Apply a Random Forest classifier with K-fold cross-validation on the whole dataset (before applying PCA), which gives you a good accuracy estimate and the feature importances; see the sketch after this answer. Please feel free to run different algorithms along with hyper-parameter tuning using GridSearchCV.
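A minimal sketch of the cross-validation step mentioned above (synthetic data stands in for the car table):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder data: 4 body-type classes, a handful of numeric features
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, n_classes=4, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)      # accuracy per fold
print(scores.mean())
clf.fit(X, y)
print(clf.feature_importances_)                 # feature importances from the fitted forest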
H: Does gradient descent always find global minimum for specific regression type? From my understanding, linear regression is used for predicting an output based on an input using a linear equation that is optimally fitted to some input data. We choose the best fitted linear equation for some input data using a loss function. By simulating the values of m and b in y = mx + b we can find the optimal linear equation with gradient descent. My question is, does gradient descent always find the global minimum loss for linear regression? An extension of this question would be, doesn't the answer to the previous question depend on the loss function used? Furthermore, when we use gradient descent on a plot of m, b, and the value of our loss function, is the plot always convex given that we are using linear regression? AI: For linear regression, using least square error as the loss function, the cost function will be convex. Gradient descent applied to a convex function will converge to a global minimum. And yes, it depends on the loss function. I don't have an example of a non convex loss function for linear regression but check this video for the case of logistic regression.
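A tiny gradient-descent sketch for least-squares linear regression on made-up data, showing convergence on the convex MSE loss:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=100)   # true m=3, b=2 plus noise

m, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    y_hat = m * x + b
    # gradients of the mean squared error with respect to m and b
    dm = -2.0 * np.mean(x * (y - y_hat))
    db = -2.0 * np.mean(y - y_hat)
    m, b = m - lr * dm, b - lr * db
print(m, b)   # should approach 3 and 2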
H: Normalization factor in logistic regression cross entropy Given that the probability of a matrix of features $X$ along with weights $w$ is computed: def probability(X, w): z = np.dot(X,w) a = 1./(1+np.exp(-z)) return np.array(a) def loss(X, y, w): normalization_factor = X.shape[0] # number of training examples features_probability = probability(X, w) # one probability for each row in the matrix cross_entropy = y*np.log(features_probability) + (1-y)*np.log(1-features_probability) cost = -1/float(normalization_factor) * np.sum(cross_entropy) cost = np.squeeze(cost) return cost Question: I did it first without dividing by $normalization\_factor$, but apparently the correct way is to divide by the normalization factor, although the formula I had for the logistic regression loss is given by: $$ L\left( \theta \right) =-\sum_{i=1}^n{y^{\left( i \right)}\log \left( \alpha _i \right) +\left( 1-y^{\left( i \right)} \right) \log \left( 1-\alpha _i \right)} $$ So as you can see there is no normalization factor there, whereas the code corresponds to: $$ L\left( \theta \right) =-\frac {1}{(norm\_factor)}\sum_{i=1}^n{y^{\left( i \right)}\log \left( \alpha _i \right) +\left( 1-y^{\left( i \right)} \right) \log \left( 1-\alpha _i \right)} $$ Edit: $\alpha_i$ represents the probability of each row in $X$ given by the sigmoid function. AI: In the linked answer, it is convenient to have the $1/2$ in the loss function so it cancels when we bring down the $2$ in the derivative, and this is okay since we just want to optimize the parameters. I do not see something that should cancel out in your equation, but there could be another reason to divide through. In your case, unless you pick a silly normalization factor like zero, your two loss functions have the same parameters that optimize them, so it does not matter which we optimize. Dividing by some factor can keep the numbers from getting too large, though, especially if you're adding up over thousands or billions of predictions. Additionally, if your normalization factor is the sample size, you get some sense of the average cross-entropy loss for an observation, the same as the MSE gives some sense of the average squared deviation when we do linear regression.
H: CRFSuite/Wapiti: How to create intermediary data for running a training? After having asked for and been suggested two pieces of software last week (for training a model to categorize chunks of a string) I'm now struggling to make use of either one of them. It seems that in machine learning (or at least, with CRF?), you can't just train on the training data directly, but you have to go through an intermediary step first.¹ From the CRFsuite tutorial: The next step is to preprocess the training and testing data to extract attributes that express the characteristics of words (items) in the data. CRFsuite internally generates features from attributes in a data set. In general, this is the most important process for machine-learning approaches because a feature design greatly affects the labeling accuracy. Wapiti doesn't need such an attribute file created, I think because it has "patterns" instead which seem somewhat more sophisticated than CRFsuite's intermediary-format files. To provide an example: given a large number (many tens of thousands) of strings such as these three: Michael went to his room. Did you know Jessica's mom used to be with the military? Amanda! Come back inside! We'll have dinner soon! From which manually a smaller number (few thousands) of labelled training and test data have been created, such as this block (for the first example above): T Michael K went K to K his K room S . K Did K you K know T Jessica's K mom K used K to K be K with K the K military S ? T Amanda S ! K Come K back K inside S ! K We'll K have K dinner K soon S . (T for names, K for non-names, S for punctuation, N for numbers.) How do I figure out what the "attributes" should be, to be able to create an equivalent to the chunking.py script used in the CRFsuite tutorial? ¹: With regard to that intermediary step, the terminology used by Naoaki Okazaki is not clear to me. "Features" and "Attributes" are used interchangeably and seem to refer to something invisible contained in the data. "Labels" might be the categories in which to put the tokens, and then there's also "Observations". AI: It's true that it's a bit of a complex process but it's worth understanding it in order to get the best out of the model. "Feature" and "attribute" (and probably observation but I'm not 100% sure) are the same thing. The features are the ones directly used by the model (as opposed to the raw input data). For every input word a vector of binary features is generated based on the input data following the custom "patterns" defined in the configuration file. Note that I'm using the word "data" because the input data doesn't have to be only the text, it can optionally include additional information as columns, for example POS tags (as obtained by a POS tagger) and syntactic dependencies (as obtained by a dependency parser). This kind of information is often very useful for the model: if the model can only use the text then the default binary features are made of a basic one-hot-encoding of the words. This means that the model can only use conditions based on whether word == x or word != x. To see why this is not enough: the word "12345" is different from "12346" in the same way that the word ";" is different from "paleontology", i.e. in this example the model can not capture the fact that "12345" and "12346" are both numbers. Additionally the patterns allow the model to use other "neighbour features", which is why the notation is a bit complex. 
The idea is that the label may depend not only on the features of the current word but also on the features of the previous word, or the one before that. In other words, this allows the model to take into account the context in the sequence. Finally it's usually also possible to define the dependencies between labels. For example there might some sequences of labels which cannot happen, and this information can help the model to determine the correct label for the current word by taking into account the previous/next label in the sequence. Ok that's a very short summary, now how to decide which patterns to use? Well, the most common option is to try a few configurations, then test and tune them manually. It's also possible to automatize this process but it's rarely worth the effort imho.
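As a concrete illustration of what the "attributes" for one token could look like, here is a sketch of a per-token feature-extraction function in the style commonly used with sklearn-crfsuite; the exact feature set is an assumption, not taken verbatim from the CRFsuite tutorial:

def token_features(tokens, i):
    """Build an attribute dict for the token at position i, including some context."""
    word = tokens[i]
    features = {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),      # capitalised words are often names
        "word.isdigit": word.isdigit(),
        "word.ispunct": not word.isalnum(),
        "suffix2": word[-2:],
    }
    if i > 0:
        features["prev.word.lower"] = tokens[i - 1].lower()
    else:
        features["BOS"] = True               # beginning of sentence
    if i < len(tokens) - 1:
        features["next.word.lower"] = tokens[i + 1].lower()
    else:
        features["EOS"] = True
    return features

sentence = ["Michael", "went", "to", "his", "room", "."]
X = [token_features(sentence, i) for i in range(len(sentence))]   # one attribute dict per token
y = ["T", "K", "K", "K", "K", "S"]                                # labels from the question

Lists of such dicts (one list per sentence) can be fed to sklearn_crfsuite.CRF(...).fit(...), or serialized into CRFsuite's label-plus-attributes text format.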
H: Cleaning rows of special characters and creating dataframe columns Below is my Dataframe format consisting of 2 columns (one is the index and the other is the data to be cleaned) 0 ans({'A,B,C,Bad_QoS,Sort'}) 1 ans({'A,B,D,QoS_Miss1,Sort'}) I want to remove the special characters and create a data frame with a column for each comma-separated item. I have managed to first remove ans from all rows using: ds_[col2] = ds_.replace('ans', '', regex=True) > 0 ({'A,B,C,Bad_QoS,Sort'}) > 1 ({'A,B,D,QoS_Miss1,Sort'}) Then I try to apply replace regex, see below: ds_['col2'] = ds_['col2'].str.replace( r' \(\{\' | \'\}\) ', '', regex=True) ds_['col2'] I get no errors, but no changes. How could I clean these characters and also create a data frame like below: col1 col2 col3 col4 col5 col6 0 A B C Bad_Qos Sort 1 A B D Qos_Miss1 Sort AI: The reason your replace does nothing is that the pattern r' \(\{\' | \'\}\) ' contains literal spaces around each alternative, so it only matches ({' or '}) when they are surrounded by spaces, which never happens in your values; drop those spaces and the replacement works. Alternatively, you can first split on ' and then on ,, and then remove the unnecessary columns: first df[['col_21','col_22','col_23']] = df['col2'].str.split("'",expand=True) and then df[['col_2','col_3','col_4','col_5','col_6']] = df['col_22'].str.split(",",expand=True) and finally drop the leftover intermediate columns. A fuller sketch is given below.
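A hedged end-to-end sketch of the same cleaning (column names are assumed):

import pandas as pd

df = pd.DataFrame({"col2": ["ans({'A,B,C,Bad_QoS,Sort'})", "ans({'A,B,D,QoS_Miss1,Sort'})"]})
# strip everything except the comma-separated payload, then split it into columns
payload = df["col2"].str.replace(r"ans\(\{'|'\}\)", "", regex=True)   # note: no stray spaces in the pattern
parts = payload.str.split(",", expand=True)
parts.columns = [f"col{i + 2}" for i in range(parts.shape[1])]
print(parts)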
H: Why deep learning models still use RELU instead of SELU, as their activation function? I am a trying to understand the SELU activation function and I was wondering why deep learning practitioners keep using RELU, with all its issues, instead of SELU, which enables a neural network to converge faster and internally normalizes each layer? AI: ReLU is quick to compute, and also easy to understand and explain. But I think people mainly use ReLU because everyone else does. The activation function doesn't make that much of a difference, and proving or disproving that requires adding yet another dimension of hyperparameter combinations to try. If the research is for a paper there is another consideration: you will want to stick with what your benchmarks use, what everyone else is doing, unless the research is specifically about activation functions. (As an aside, I see practically no research on the pros or cons of using different activation functions at different layers. I suspect this is also because of the hyperparameter combinatorial explosion, combined with the expectation of it not making much difference.) The SELU function is a hard-sell in a couple of ways. First it requires reading a long paper to understand, and accept the couple of magic numbers it comes with. But a bigger factor might be that it does internal normalization, meaning you don't need your batch or layer normalization any more. Or do you? Suddenly this is not a simple swap in for ReLU, but affects other parts of the architecture. This is a good article on a large selection of alternative activation functions: https://mlfromscratch.com/activation-functions-explained/ The con they give there for SELU is that there are not enough comparative research papers on it, for different architectures, yet.
H: Does One-Hot encoding increase the dimensionality and sparsity of dataset? There are two ways to convert object datatype into numeric datatype, first is One-Hot encoding and second is simply map the numerical tags to different values. For example for column Age containing three distinct values 'child', 'adult' and 'old', for that column One-Hot encoding is: Age Age_child Age_adult Age_old child 1 0 0 adult 0 1 0 old 0 0 1 Whereas a simple mapping of numerical tags to distinct values might be Age _Age child 1 adult 2 old 3 What I understand One-Hot encoding can increase the number of columns many times. For instance, consider 10 columns and each column having 3 distinct values on average, then the resulting dateset will have 30 columns. Whereas, simple numerical mapping does not change the datasets size (columns) and simply assigns the numerical tags to each distinct value. So the question is, does One-Hot encoding increase the dimensionality and sparsity of complex and large dataset? What is the more appropriate approach for machine or deep learning analyses out of these two? Is there any pros and cons of both? AI: Which encoding technique to use depends on your data/features. Ordinal encoding is used when there ia a sense of order in your feature. For example you have a feature performance which has values worst, bad good. Here you should use ordinal encoder which will result in worst = 0, bad = 1 and good = 2. We used ordinal encoding because good is better than bad which is better than worst. So here we have a sense of order with good getting more priority. The model will then learn this sense of order. OHE is used when there is no sense of order present and we simply just want to convert categorical type to numerical type. For example we have a feature named color which has values as red, blue and green. If we use ordinal encoding, it will assign red = 0, blue = 1 and green = 2 which will mean green is much more important than blue and red. But this makes no sense! Hence in the second case it would be wise to use OHE. Coming to the pros and cons, yes OHE increases the dimensionality of the dataset and ordinal encoding does not. But OHE is useful when there is no ordering in the feature. So yes it depends on the feature type. I would suggest you to use both wherever necessary. For example you have some features where there is no sense of order and other features where order is present. Use both techniques so as to remain true to the feature type and also reduce the dimensionality a bit!!
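A small sketch contrasting the two encodings on the Age example (pandas plus scikit-learn):

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"Age": ["child", "adult", "old", "adult"]})
# One-hot: one new column per category, no implied order
one_hot = pd.get_dummies(df["Age"], prefix="Age")
# Ordinal: a single column, with the order stated explicitly
ordinal = OrdinalEncoder(categories=[["child", "adult", "old"]]).fit_transform(df[["Age"]])
print(one_hot)
print(ordinal)   # child=0, adult=1, old=2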
H: Comparing accuracies of Grid Search CV & Randomized Search CV with K-Fold Cross Validation? Are Grid Search CV & Randomized Search CV always/necessarily supposed to give more accurate results after hyperparameter tuning as compared to K-Fold Cross Validation? AI: From your comment above, " though Grid & Radom Searches are expected to do better." They are EXPECTED to perform better but it is not a given that in each and every case they will outperform K Fold CV. Sometimes K FoldCV can outperform Grid or Random SearchCV.
H: Tune learning rate while tuning other HP When doing hyperparameters optimisation, like a Random Search, should you add a search space for the learning rate ? My intuition is that some HP might work better with a certain LR, and be sub-optimal with a lower LR. But if I add LR to the search space, I fear that the random search will only favour high LR trials, as they will reach lower loss for the same limited number of max epochs. What would be the right way to do it ? AI: Learning rate probably should not be considered an independent hyperparameter as it is usually a good idea to adjust it proportionally to batch size.
H: problem with using f1 score with a multi class and imbalanced dataset - (lstm , keras) I'm trying to use f1 score because my dataset is imbalanced. I already tried this code but the problem is that val_f1_score is always equal to 1. I don't know if I did it correctly or not. my X_train data has a shape of (50000,30,10) and Y_train data has a shape of (50000,). I have 3 classes: 0, 1 and 2. this is my code so far: maximum_epochs = 40 early_stop_epochs= 60 learning_rate_epochs = 30 maximum_time = 8*60*60 model = Sequential() model.add(LSTM(32,activation='tanh', input_shape=(X_train.shape[1],X_train.shape[2]), return_sequences=True)) model.add(LSTM(16,activation='tanh', return_sequences=False)) model.add(Dense(3, activation='softmax')) def recall(y_true, y_pred): y_true = K.ones_like(y_true) true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) all_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (all_positives + K.epsilon()) return recall def precision(y_true, y_pred): y_true = K.ones_like(y_true) true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision def f1_score(y_true, y_pred): p = precision(y_true, y_pred) r = recall(y_true, y_pred) return 2*((p*r)/(p+r+K.epsilon())) model.compile(loss='sparse_categorical_crossentropy',optimizer='adam', metrics=['accuracy', f1_score, precision, recall]) callbacks_list = [ tf.keras.callbacks.ReduceLROnPlateau(monitor='val_f1_score', factor=0.9, patience=learning_rate_epochs, verbose=0, mode='max', min_lr=0.0000001), tf.keras.callbacks.ModelCheckpoint(filepath=fn, save_weights_only=True, monitor='val_f1_score',mode='max', save_best_only=True)] history = model.fit(x=X_train, y= Y_train, validation_data=(X_val, Y_val), batch_size=500, epochs=maximum_epochs, shuffle=True, verbose=2, callbacks=callbacks_list) pyplot.plot(history.history['f1_score'], label='train') pyplot.plot(history.history['val_f1_score'], label='val') pyplot.legend() pyplot.show() this is the log of first epochs: Epoch 1/40 85/85 - 29s - loss: 0.7125 - accuracy: 0.8806 - f1_score: 0.9736 - precision: 1.0000 - recall: 0.9515 - val_loss: 0.5389 - val_accuracy: 0.8862 - val_f1_score: 1.0000 - val_precision: 1.0000 - val_recall: 1.0000 Epoch 2/40 85/85 - 8s - loss: 0.5590 - accuracy: 0.8900 - f1_score: 0.9903 - precision: 1.0000 - recall: 0.9808 - val_loss: 0.4930 - val_accuracy: 0.8862 - val_f1_score: 1.0000 - val_precision: 1.0000 - val_recall: 1.0000 UPDATE: thanks to @Erwan's answer I changed compilation as below: import tensorflow_addons as tfa from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder(sparse=False) Y_train = encoder.fit_transform(Y_train.reshape(-1,1)) Y_val = encoder.fit_transform(Y_val.reshape(-1,1)) model.compile(loss='categorical_crossentropy',optimizer='adam', metrics=[tfa.metrics.F1Score(average='macro',num_classes=3)]) callbacks_list = [ tf.keras.callbacks.ReduceLROnPlateau(monitor='val_f1_score', factor=0.9, patience=learning_rate_epochs, verbose=0, mode='max', min_lr=0.0000001), tf.keras.callbacks.ModelCheckpoint(filepath=fn, save_weights_only=True, monitor='val_f1_score',mode='max', save_best_only=True)] here is the epochs log(I think it's going well and f1_score is increasing and loss is decreasing): Epoch 1/15 85/85 - 27s - loss: 0.8422 - f1_score: 0.3337 - val_loss: 0.5830 - val_f1_score: 0.3145 Epoch 2/15 85/85 - 7s - loss: 0.6539 - f1_score: 0.3221 - val_loss: 
0.5218 - val_f1_score: 0.3145 AI: The problem is simple: recall, precision and F1-score work only with binary classification. If you try with a example manually you will see that the definitions that you're using for precision and recall can only work with classes 0 and 1, they go wrong with class 2 (and this is normal). When working with more than 2 classes you must use either micro f1-score (but this is the same as accuracy) or macro f1-score, which would be the standard option with imbalanced data. Macro F1-score is the average of the f1-score across all 3 classes, where the f1-score for one class is obtained by considering all the other classes as the negative class.
H: Multiple models have extreme differences during evaluation My dataset has about 100k entries, 6 features, and the label is simple binary classification (about 65% zeros, 35% ones). When I train my dataset on different models: random forest, decision tree, extra trees, k-nearest neighbors, logistic regression, sgd, dense neural networks, etc, the evaluations differ GREATLY from model to model. tree classifiers: about 80% for both accuracy and precision k-nearest neighbors: 56% accuracy and 36% precision. linear svm: 65% accuracy and 0 positives guessed sgd : 63% accuracy and 2 true positives + 4 false positives I don't understand the difference in such disparity. Can someone explain why that happens? Am I doing something wrong? Also cannot find an answer to my question, so please link if someone asked it already Would really appreciate the help! AI: A few thoughts: The first thing I would check is whether the other models overfit. You could check this by comparing the performance between the training set and the test set. Also there's something a bit strange about k-NN always predicting the majority class. This would happen only if any instance is always closer to more majority instances than minority instances. In this case there's something wrong with either the features or the distance measure. 100k instances looks like a large dataset but with only 6 features it's possible that the data contains many duplicates and/or near-duplicates which don't bring any information for the model. In general it's possible that the features are simply not good indicators, although in this case the decision tree models would fail as well. The better performance of the tree models points to something discontinuous in the features (btw you didn't mention if they are numerical or categorical?). Decision trees and especially random forests can handle discontinuity but like logistic regression might have trouble with it.
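One common culprit worth checking, assuming the 6 features are numeric: distance-based models like k-NN break down when features are on very different scales, while tree models are unaffected. A quick sketch of standardising inside a pipeline before k-NN (X and y stand for your features and labels):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))
# scores = cross_val_score(knn, X, y, cv=5)   # compare against the unscaled k-NN results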
H: Robustness vs Generalization I don't quite understand the difference between robustness and generalisability in relation to image processing (CNN). If my model generalises well, it is also robust to changes in the image material. Unfortunately, I haven't found any concrete definitions or other materials that describe the exact difference, AI: Check this paper. Its introduction gives a very good definition of both: The classic approach towards the assessment of any machine learning model revolves around the evaluation of its generalizability i.e. its performance on unseen test scenarios. Evaluating such models on an available non-overlapping test set is popular, yet significantly limited in its ability to explore the model’s resilience to outliers and noisy data / labels (i.e. robustness). For generalizability, unseen data does not have to be noisy or contain more outliers compared to original data. You can simply split your original data set into 3: training, validation and test; use training and validation for the model development and keep test data unseen for a final check after cross validation. This will check your model's generalizability. Test set created in this way won't be more noise or have more outliers compared to the other two.
H: when I only give command 'fit', my class does 'transform' too I have created 2 classes, first of which is: away_defencePressure_idx = 15 class IterImputer(TransformerMixin): def __init__(self): self.imputer = IterativeImputer(max_iter=10) def fit(self, X, y=None): self.imputer.fit(X) return self def transform(self, X, y=None): imputed = self.imputer.transform(X) X['away_defencePressure'] = imputed[:,away_defencePressure_idx] return X and the second one is home_chanceCreationPassing_idx = 3 class KneighborImputer(TransformerMixin): def __init__(self): self.imputer = KNNImputer(n_neighbors=1) def fit(self, X, y=None): self.imputer.fit(X) return self def transform(self, X, y=None): imputed = self.imputer.transform(X) X['home_chanceCreationPassing'] = imputed[:,home_chanceCreationPassing_idx] return X When I put IterImputer() in a pipeline and fit_transform, the outcome is: ******************** Before Imputing ******************** 7856 49.166667 12154 44.666667 10195 48.333333 18871 57.333333 267 48.833333 Name: home_chanceCreationPassing, dtype: float64 # of null values 70 ******************** After Imputing ******************** 7856 49.166667 12154 44.666667 10195 48.333333 18871 57.333333 267 48.833333 Name: home_chanceCreationPassing, dtype: float64 # of null values 0 It works fine. But then if I put the two imputers into one pipeline as follows and fit: p = Pipeline([ ('imputerA', IterImputer()), ('imputerB', KneighborImputer()) ]) p = Pipeline([ ('imputerA', IterImputer()), ('imputerB', KneighborImputer()) ]) X = X_train.copy() p.fit(X) even without transforming display(X.head()) print('# of null values', X.isnull().sum()) the outcome would be like home_buildUpPlaySpeed home_buildUpPlayDribbling home_buildUpPlayPassing home_chanceCreationPassing home_chanceCreationCrossing home_chanceCreationShooting home_defencePressure home_defenceAggression home_defenceTeamWidth away_buildUpPlaySpeed away_buildUpPlayDribbling away_buildUpPlayPassing away_chanceCreationPassing away_chanceCreationCrossing away_chanceCreationShooting away_defencePressure away_defenceAggression away_defenceTeamWidth 7856 50.833333 44.5 37.666667 49.166667 55.000000 48.166667 49.333333 43.000000 53.166667 61.333333 56.0 51.333333 67.000000 58.333333 57.166667 55.000000 47.166667 53.000000 12154 59.333333 69.0 42.666667 44.666667 59.166667 52.333333 40.333333 41.833333 52.666667 47.000000 54.0 41.166667 60.833333 53.833333 54.833333 49.666667 47.500000 56.500000 10195 58.000000 54.0 57.666667 48.333333 53.833333 55.833333 34.833333 60.333333 53.166667 56.333333 41.5 42.333333 52.166667 51.666667 57.166667 46.333333 53.666667 53.333333 18871 61.833333 54.5 58.000000 57.333333 55.000000 49.500000 47.833333 48.000000 57.000000 59.000000 64.0 57.333333 52.500000 63.000000 58.666667 46.500000 47.666667 60.833333 267 49.166667 52.0 46.500000 48.833333 55.833333 47.666667 53.666667 53.833333 54.666667 59.666667 45.0 60.333333 54.666667 58.833333 61.333333 51.500000 57.500000 56.500000 # of null values home_buildUpPlaySpeed 0 home_buildUpPlayDribbling 0 home_buildUpPlayPassing 0 home_chanceCreationPassing 70 home_chanceCreationCrossing 0 home_chanceCreationShooting 0 home_defencePressure 0 home_defenceAggression 0 home_defenceTeamWidth 0 away_buildUpPlaySpeed 0 away_buildUpPlayDribbling 0 away_buildUpPlayPassing 0 away_chanceCreationPassing 0 away_chanceCreationCrossing 0 away_chanceCreationShooting 0 away_defencePressure 0 away_defenceAggression 0 away_defenceTeamWidth 0 dtype: int64 So the thing is only by doing 'fit', second 
last step is committed! and the last step is committed when I do 'transform'. Does anyone know why such a thing happens? AI: Imputer fit(): computes the statistics the imputer needs, i.e. fits the imputer to the data. transform(): imputes and fills the missing values. fit_transform(): fits to the data, then transforms it. A Pipeline applies a chain of transforms so that only the final estimator needs to implement fit: when you call p.fit(X), every step except the last is fit and transformed, while the last step is only fit. That is why, after fit alone, the first imputer's transform has already run while the second one has not (home_chanceCreationPassing still has 70 nulls). On top of that, your custom transform methods write the imputed column back into the DataFrame you pass in (X['away_defencePressure'] = ...), so X itself is modified in place, which is why you see the change without ever calling p.transform(X). Calling transform (or fit_transform) on the fitted pipeline will then run the last step's transform as well.
H: Number of parameters in CNN I'm trying to understand the convolutional neural network and especially its parameters. I found several formulas on the internet, but I cannot understand them. For example: ((filter_size*filter_size)*stride+1)*filters) What is the number of filters here? Does it mean that we train different size*size weights for every stride that we do, and the total number of strides will be the number of filters? AI: A convolutional layer is composed of a grid of numbers called a filter (or kernel). This is the filter that scans the image (talking about 2D convolutions here). Applying it means simply multiplying the values of each pixel of the filter with the corresponding values of the image. For a visual explanation of this process of applying the filter to the image, check this video. Stride refers to the number of pixels between each application of the filter. For this, I will also refer to the video above. If we have a 5x5 image and a 3x3 filter, stride = 1 refers to centering the filter on each of the 25 pixels in the image; stride = 2, on the other hand, skips every other position. Check this video for a visual explanation of stride. Applying the filter to the entire image finishes the processing of that filter. But usually, we have multiple filters at each layer to capture different features, which means applying the above procedure again for a different filter. Filter size can be anything from 1x1 to 5x5 for a 5x5 image. 1x1 is a little meaningless but still possible. A good explanation of the terms used in CNNs is also given in this article. And finally, you can find a good discussion on calculating the number of parameters in a convolutional layer here. To answer the parameter question directly: the number of filters is simply how many distinct filters the layer learns, and each filter has its own weights. For a 2D convolution the parameter count is (filter_height * filter_width * input_channels + 1) * number_of_filters, where the +1 is the bias of each filter. Stride only changes how the filter slides over the image (and therefore the size of the output); it does not add or remove trainable weights, so a formula that multiplies by the stride is not correct.
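A quick check of that count with Keras (the sizes here are arbitrary examples):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=8, kernel_size=(3, 3), strides=2, input_shape=(32, 32, 3))
])
# (3 * 3 * 3 + 1) * 8 = 224 trainable parameters, unchanged whatever strides is set to
print(model.count_params())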
H: How to improve the result of f1 on imbalanced dataset I have a dataset in which these are the distribution of the data: Neutral. 15000 Negative 3000 positive 2000 And I am mostly interested to improve the performance on the negative category. I would say neutral and positive are not important for me. And I am using Bert model. What I have tried so far: undersample data: result was poor on negative category Augment data with different approaches available in NLPaug. The result not only did not improve but it dropped by 4 percent Class weight. Gave more weight to the negative class however did not affect the result and in some scenarios dropped I tried to change the batch_size epoch etc... and it just had 0.5percent improvement Now my question is that what could be the problem here? (is there anything I need to check in my dataset?) And what else I can try to improve my model?, this is the general result I have so far Negative 65 positive 72 neutral 90 And this is my confusion matrix: Pred_negative Pred_neutral Pred_positive True_negative 138 101 3 True_neutral 53 1408 24 True_positive 2 25 69 I need to improve the negative category by at least 5 percent. AI: A few thoughts: The evaluation method is not clear, in particular what are the evaluation scores shown, is it f1 score? Why do you need to improve "by at least 5%"? Do you know the results of another system on the same data? If not it doesn't really make sense to aim for a particular performance value: performance depends a lot on the data, it's possible that your system already reaches the maximum performance with this dataset for example. You should at least have a baseline system to compare to, for example a basic Naive Bayes classifier. One thing you could try is to remove the neutral category, this might help the model focus on the difference between negative and positive instead of trying to correctly classify the neutral category.
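A possible baseline of the kind mentioned above (TF-IDF plus Naive Bayes; the toy texts and labels are placeholders for your data):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

texts = ["terrible battery life", "works great", "it is a phone", "awful support", "love it", "arrived on time"]
labels = ["negative", "positive", "neutral", "negative", "positive", "neutral"]   # toy stand-in

baseline = make_pipeline(TfidfVectorizer(), MultinomialNB())
preds = cross_val_predict(baseline, texts, labels, cv=2)
print(classification_report(labels, preds, zero_division=0))   # per-class precision/recall/F1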
H: Why scikit-learn's sequential feature selection requires how much features to be selected beforehand? From the version 0.24, the scikit-learn has new method 'SequentialFeatureSelector', which adds (forward selection) or removes (backward selection) features to form a feature subset in a greedy fashion. It lets us to select features in the 'forward stepwise selection' or 'backward stepwise selection', described in the book 'Introduction to Statistical Learning (ISLR)'. To use the SequentialFeatureSelector, you need to put 'int' or 'float' value to the parameter n_features_to_select. If you don't write anything, half of feature numbers are automatically put into the parameter. However, according to ISLR, you can know how many number of variables are appropriate only after you test all number of parameters and get the best model of each number of parameters. The plots shown below is from the ISLR, which shows that you can figure out how many number of features are appropriate only after testing all numbers of predictors. You cannot figure it out beforehand. So I think an input "the lowest adjusted r_squared" for the parameter n_features_to_select. By doing so, you can choose the number of features that has the lowest adjusted r_squared, which cannot be known beforehand. Why is scikit-learn made in such a way that int or float must be put into the parameter? AI: For "why", I think it's just a new transformer that maybe didn't get thoroughly thought-out. The default of "half the features" in particular seems very odd to me. A middle ground, that I think is more useful, is to select features until there is no (or little) further improvement. That's being implemented in PR20145. If they would also expose the scores in an attribute, you could post-process with this PR by setting tol=-np.inf (so that all the features would eventually get added) and then selecting the best exposed score. I don't see an Issue suggesting storing scores (as in RFECV.cv_results_), but the ranking has been suggested in Issue19583.
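One way to get the ISLR-style behaviour today is to loop over candidate sizes yourself and score each subset with cross-validation (a sketch using a generic linear estimator and the default R² scoring as assumptions; note it refits the selector for every candidate size, so it can be slow on large data):

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
est = LinearRegression()
results = {}
for n in range(1, X.shape[1]):
    sfs = SequentialFeatureSelector(est, n_features_to_select=n, direction="forward", cv=5)
    X_sel = sfs.fit_transform(X, y)
    results[n] = cross_val_score(est, X_sel, y, cv=5).mean()   # R^2 by default for regressors
best_n = max(results, key=results.get)
print(best_n, results[best_n])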
H: Correct approach to scale (min-max scaler) both input and output signal data for unsupervised learning? I am working on a denoising autoencoder problem with noisy and clean signals. Before I pass the signals to my model I want to apply min-max normalization and am unsure of the correct way to apply this. The model will see the noisy signal as the input and the output/reference signal as the clean signal (denoising autoencoders are a type of unsupervised learning where concepts of features and labels perhaps don't apply in the original sense). The current way I am applying scaling is by fitting and transforming the noisy and clean signals separately before fitting into the model - is this the correct strategy? from sklearn.preprocessing import MinMaxScaler scaler_noisy = MinMaxScaler() scaler_clean = MinMaxScaler() X_noisy_train = scaler_noisy.fit_transform(X_noisy_train) X_clean_train = scaler_clean.fit_transform(X_clean_train) AI: No, this is not the correct strategy. If the transformation you apply takes any parameters, in this case the minimal and maximal values, you should fit it on the training set only and then apply it to the test set to avoid data leakage. This would not matter for something like a log transformation, which has no parameters to fit. Your code should look more like this: from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split noisy_train, noisy_test, clean_train, clean_test = train_test_split(noisy, clean) scaler_noisy = MinMaxScaler() scaler_clean = MinMaxScaler() noisy_train = scaler_noisy.fit_transform(noisy_train) noisy_test = scaler_noisy.transform(noisy_test) clean_train = scaler_clean.fit_transform(clean_train) clean_test = scaler_clean.transform(clean_test) Or you could just put the scaler inside a sklearn pipeline, although you would then have to use a TransformedTargetRegressor to transform the y as well. I don't see this as an example of unsupervised learning. In unsupervised learning you don't have any output associated with your data points. In this case your output/labels are the clean signals.
H: Google's Bayesian Structural Time-Series I am attempting to get my head around Google's Causal Impact paper, which isn't completely clear to me. In the methodology part of the paper, the authors say: "The framework of our model allows us to choose from among a large set of potential controls by placing a spike-and-slab prior on the set of regression coefficients and by allowing the model to average over the set of controls". My question is the following: For the synthetic control variables, I know that we're supposed to come up with variables that determine y but do not receive the treatment, but I am unclear whether Causal Impact runs an automatic test to examine whether those variables are actually useful as controls. Their Github post: https://google.github.io/CausalImpact/CausalImpact.html Paper: https://research.google/pubs/pub41854/ Not sure that's the right community for the post, but I could not find anything related to questions about DS papers AI: To provide an answer after a quick read of the sources: selection of predictors for the counterfactual outcome (what the series would have looked like without the intervention) is done automatically via the Bayesian spike-and-slab method, which is designed with automatic variable selection in mind. Both the original paper and the Wikipedia article state so (emphasis mine): The model consists of three main components: Kalman filter. The technique for time series decomposition. In this step, a researcher can add different state variables: trend, seasonality, regression, and others. Spike-and-slab method. In this step, the most important regression predictors are selected. Bayesian model averaging. Combining the results and prediction calculation.
H: Improving text classification & labeling in imbalanced dataset I am trying to classify text titles (NLP) into categories. Let us say I have 6K titles that should fall into four categories. My questions: I do not understand why in some ML techniques categories are converted into numerical values ("Transforming the prediction target"); will this impact the model accuracy compared to using nominal values? My data is severely imbalanced towards some categories, ex: CAT A has 4K titles and CAT B has 500 titles. So oversampling or undersampling could impact the accuracy, as the chances of a correct prediction will be higher for the biggest category, as in the original distribution, am I correct? Finally, titles could have brand names like corporations, products, etc. Should these be cleaned and replaced before training the model? Because the model can guess that a text will fall into the automotive category if a brand name like Toyota is in the title? AI: Why are categories converted to numeric values? It's due to the simple fact that most machine learning models do not accept categorical values when performing prediction. On the imbalance question: yes, and for this reason there are techniques (like SMOTE) to ensure the data is reasonably balanced. You can also opt for other metrics like the F1 score, which works better for imbalanced data. On brand names: it's ideal to clean and replace them prior to training the model (your example of Toyota falling under the automotive category is exactly that kind of shortcut). A few techniques to remember while dealing with imbalanced text data: remove duplicate data, i.e. texts with the same semantic meaning (e.g. "where is my product" and "where is the product" are one and the same); merge minority classes; resample the dataset (undersample the majority class, or oversample the minority class, e.g. with SMOTE); data augmentation (using spaCy, spacy-wordnet, word embeddings, etc.).
H: Meaning of NER Training values using Spacy I am trying to train custom entities using Spacy. During the training process I am getting number of values of LOSS, score etc. What is the meaning of these values ============================= Training pipeline ============================= ℹ Pipeline: ['tok2vec', 'ner'] ℹ Initial learn rate: 0.001 E # LOSS TOK2VEC LOSS NER ENTS_F ENTS_P ENTS_R SCORE --- ------ ------------ -------- ------ ------ ------ ------ 0 0 0.00 278.46 0.00 0.00 0.00 0.00 20 200 3647.33 10920.67 91.75 93.68 89.90 0.92 40 400 92.82 679.78 98.21 99.48 96.97 0.98 60 600 66.59 274.91 98.98 100.00 97.98 0.99 80 800 87.59 252.62 98.98 99.49 98.48 0.99 AI: The values for LOSS TOK2VEC and LOSS NER are the loss values for the token-to-vector and named entity recognition steps in your pipeline. The ENTS_F, ENTS_P, and ENTS_R column indicate the values for the F-score, precision, and recall for the named entities task (see also the items under the 'Accuracy Evaluation' block on this link. The score column shows the overall score of the pipeline, which may or may not be a weighted more to specific subtasks.
H: Best platform to work with when having millions of rows in dataframe I have table with around 20 features and millions of observations (rows). I need to create model base on this table, however, as it is huge, training models like random forest or XGB takes forever. I'm working mainly with scikit-learn and the XGBoost packages on Jupyter lab server, using python, and i'm struggling with this when the dataframes are very large. Also it is important to mention that I have windows (not Linux). My question is for people with more experience than I have: what way do you deal with huge dataframes? are there any better packages or platforms to work with when the data is so big? AI: A million observations of 20 features should be very manageable on a laptop, if a little slow. Cloud computing for very large datasets is staggeringly expensive and offers little or no benefit unless and until you have good parallelization in place. I would recommend keeping that option as your last resort. For the initial data exploration and experimentation, I suggest you sample your data. Spending a few minutes googling "data sampling" will save you a lot of time and effort later. Only when you are getting reasonable results with your samples should you consider apply your methods to the larger dataset. Also give some serious thought to dimensionality reduction, methods like PCA can be very helpful here. If you haven't already done so, a correlation analysis of your features might help you eliminate the less useful ones.
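A quick sketch of stratified sampling with pandas for the exploration phase (the column names and toy data are assumptions):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"feature": rng.normal(size=1000), "label": rng.integers(0, 2, size=1000)})  # stand-in data
# sample 5% of each class so the 0/1 ratio is preserved while exploring
sample = df.groupby("label", group_keys=False).apply(lambda g: g.sample(frac=0.05, random_state=42))
print(sample["label"].value_counts(normalize=True))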
H: Understanding Sklearns learning_curve I have been using sklearns learning_curve , and there are a few questions I have that are not answered by the documentation(see also here and here), as well as questions that are raised by the function about sklearn more generally Here are some learning curves from my models of a data set And the code that produced them: train_sizes, train_scores, valid_scores =learning_curve(linear_regression_model,rescaled_X_train,Y_train) axes[0,0].plot(train_sizes,train_scores) axes[0,1].plot(train_sizes,valid_scores) train_sizes, train_scores, valid_scores =learning_curve(random_forest_model, rescaled_X_train,Y_train) axes[1,0].plot(train_sizes,train_scores) axes[1,1].plot(train_sizes,valid_scores) The documentation makes it seem like, the line learning_curve(linear_regression_model, rescaled_X_train, Y_train) fits the model rather than simply showing how the models fitting process previously behaved? a. If it is fitting the model again – how do you pass hyperparameters (for example gamma for a SVM or maximum tree depth) and determine the cost function that is being used? b. If not, this seems very strange. I would have assumed that a linear regressor was by default just fit by least squares rather than something involving k-fold validation, as it appears to be if I am viewing the above graphs correctly. Is this how sklearn normally fits regressors? is the y- axis on these graphs accuracy score? AI: The term learning curve can mean different things in different context, which is confusing. When talking about neural networks (and other iteratively trained models) the learning curve describes the model's training progress. It is often used to determine when it's time to stop training. In scikit-learn, the learning curve is interpreted differently. It describes how your model would perform if it was (re-)trained with less data. This can help you guess if the model would likely improve by getting more data. The same hyperparameters specified when constructing the model are used when the model is re-fitted. The score function used is also a parameter of the model. Many regression models default to the R2 score, which is likely the score you plotted.
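The score arrays returned by learning_curve have shape (n_train_sizes, n_cv_folds), so averaging over the folds before plotting usually gives a cleaner picture. A sketch under the assumption that the model is a scikit-learn regressor scored with R²:

import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True)
sizes, train_scores, valid_scores = learning_curve(LinearRegression(), X, y, cv=5, scoring="r2")
plt.plot(sizes, train_scores.mean(axis=1), label="train (mean R2 over folds)")
plt.plot(sizes, valid_scores.mean(axis=1), label="validation (mean R2 over folds)")
plt.legend()
plt.show()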
H: Creating a DataFrame in Pandas from a numpy array and a list labels is array([3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=int32) and species is ['Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Bream', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Roach', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Smelt', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike', 'Pike'] I'm trying to understand how to create a DataFrame from these elements. If I run the following: df = pd.DataFrame([labels,species],columns=["labels","species"]) I get ValueError: 2 columns passed, passed data had 85 columns However, if I pass them in a dictionary as in pd.DataFrame({"labels":labels,"species":species] everything goes smoothly... Why is that? AI: This reason for this error is that pandas expects each list within the list you're providing to be data for a single row. In your example, however, you are providing a list of lists, where each list contains all data for a single column, instead of all data for a single row. Changing the format from column-wise data to row-wise data using zip solves the error: import pandas as pd pd.DataFrame([*zip(labels.tolist(), species)], columns=["labels", "species"]) labels species 3 Bream 1 Bream 1 Bream 1 Bream 1 Bream
H: What is the difference between batch_encode_plus() and encode_plus() I am doing a project using T5 Transformer. I have read documentations related to T5 Transformer model. While using T5Tokenizer I am kind of confused with tokenizing my sentences. Can someone please help me understand the difference between batch_encode_plus() and encode_plus() and when should I use either of the tokenizers. AI: See also the huggingface documentation, but as the name suggests batch_encode_plus tokenizes a batch of (pairs of) sequences whereas encode_plus tokenizes just a single sequence. Looking at the documentation both of these methods are deprecated and you use __call__ instead, which checks by itself if the inputs are batched or not and calls the correct method (see the source code with the is_batched variable and if statement).
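A small illustration of the tokenizer's __call__ interface (the model name is just an example, assuming the transformers and sentencepiece packages are installed):

from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
single = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
batch = tokenizer(
    ["translate English to German: The house is wonderful.", "summarize: a much longer article ..."],
    padding=True, truncation=True, return_tensors="pt",
)
print(single["input_ids"].shape)   # (1, seq_len)
print(batch["input_ids"].shape)    # (2, padded_seq_len)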
H: difference between novelty, concept drift and anomaly Concept drift is when the relation between the input data and the target variable changes over time. like changes in the conditional distribution. is novelty an outlier? what should I think of? what is the difference between concept drift and novelty and anomaly? is the concept drift considered a type of novelty? how exactly? can you please explain !! AI: Roughly all three concepts are related. Drift means the relationship between input and output is dynamic and changes (stochastically) over (sufficiently long periods of) time. That is, it is not stationary. For example, consumers' criteria about what to buy, change over time, for example as people become more eco-conscious. More importantly drift, when it happens, invalidates the existing model used for prediction. Anomaly also called an outlier is a very rare non-typical event (when input-output relationship is considered stationary over time), that happens upon exceptional circumstances. Something like a white snake. It may happen but is not typical of snakes and if it happens it does not mean that input-output relationship has necessarily drifted from the original assumptions (eg the assumptions about the color distribution of snakes). Accordingly anomaly, when it happens, does not invalidate the existing model used for prediction. Novelty as far as I understand it, is an umbrella term for something new and unpredictable happening, which however may be attributable to anything (drift, anomaly, etc). Please note that determining the reason for the observed novelty requires careful analysis (for example, multiple anomalies may mean drift is what is actually happening)! References: Anomaly detection Concept drift
H: Keras ImageDataGenerator unable to find images I'm trying to add image data to a Kaggle notebook so I can run a convolutional neural network but I'm having trouble doing this via ImageDataGenerator. This is the link to my Kaggle notebook These are my imports: import numpy as np # linear algebra# import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from random import randint from sklearn.utils import shuffle from sklearn.preprocessing import MinMaxScaler import tensorflow as tf# from tensorflow import keras# from tensorflow.keras.models import Sequential# from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D# from tensorflow.keras.optimizers import Adam # from tensorflow.keras.metrics import categorical_crossentropy # from tensorflow.keras.preprocessing.image import ImageDataGenerator # from sklearn.metrics import confusion_matrix # import itertools # import matplotlib.pyplot as plt # import os import shutil import random import glob import warnings # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) Here is my code where I attempt to import the data using the ImageDataGenerator: #Define datagen: datagen = ImageDataGenerator(rescale=1./255) #Train Data: cat_train_datagen = datagen.flow_from_directory('../input/dog-vs-cat-images-data/dogcat/train/cats', batch_size=500, shuffle=True) dog_train_datagen = datagen.flow_from_directory('../input/dog-vs-cat-images-data/dogcat/train/dogs', batch_size=500, shuffle=True) #Valid Data: cat_valid_datagen = datagen.flow_from_directory('../input/dog-vs-cat-images-data/dogcat/validation/cats', batch_size=100, shuffle=True) dog_valid_datagen = datagen.flow_from_directory('../input/dog-vs-cat-images-data/dogcat/validation/dogs', batch_size=100, shuffle=True) #Test Data: test_datagen = datagen.flow_from_directory('../input/dog-vs-cat-images-data/dogcat/test1/test1', batch_size=100, shuffle=True) This is my terminal output: Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Any input would be greatly appreciated, as I'm fairly new to Keras and am unsure whether I am using ImageDataGenerator correctly. AI: The path you are providing to the flow_from_directory method is one level to deep. The data generator expects a path to a directory which contains one subdirectory for each class in your dataset, see tensorflow documentation. This github gist shows how to apply the ImageDataGenerator to a dataset (coincidentally also using 'cat' and 'dog classes') together with the correct folder structure to use. Changing the provided path to ../input/dog-vs-cat-images-data/dogcat/train should solve the issue.
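Based on that, the calls might look like the sketch below (paths assumed from the question, class_mode='binary' since there are two classes; each parent folder must contain one subfolder per class):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_directory(
    '../input/dog-vs-cat-images-data/dogcat/train',        # parent folder containing cats/ and dogs/
    target_size=(150, 150), batch_size=32, class_mode='binary')
valid_gen = datagen.flow_from_directory(
    '../input/dog-vs-cat-images-data/dogcat/validation',
    target_size=(150, 150), batch_size=32, class_mode='binary')
# should now report "Found N images belonging to 2 classes."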
H: Churn prediction model doesn't predict well on real data I am currently working on a churn prediction problem. As input I use data from a data warehouse for the period 082016 - 032021 (one row per month for each customer). Based on this data I have created a time window of 18 months, where I track customer behaviour (feature engineering). Based on the features, I predict churn 4 months into the future, 122020-032021. As a model I use LightGBM with the following parameters:
parameters = {
    'objective': 'binary',
    'metric': 'auc',
    'is_unbalance': 'true',
    'boosting': 'gbdt',
    'num_leaves': 31,
    'feature_fraction': 0.5,
    'bagging_fraction': 0.5,
    'bagging_freq': 20,
    'learning_rate': 0.05,
    'verbose': 0
}

and get the following classification report based on test data (training/test split 80/20%):
              precision    recall  f1-score   support

           0       0.96      0.93      0.95     48008
           1       0.68      0.80      0.73      8745

    accuracy                           0.91     56753
   macro avg       0.82      0.86      0.84     56753
weighted avg       0.92      0.91      0.91     56753

In the real example I use the period 082016-032021 for creating features, and predict churn for the next 4 months (042021-072021). In the last step I create a dataset from clients who were active in the month 03/2021 and who churned in the period of 4 months (042021-072021), about 1700 customers. When I compare the predicted values (what the model says about who will churn) with the real values for churned customers, the model has 44% accuracy. The model can correctly predict only 844 of 1700 customers. I cannot find the reason for such a huge difference between the test data and using the model in real prediction. Does anybody have similar experience?
Edit: Thanks for the useful suggestions! Here are the numbers of features and observations:

293552 rows × 152 columns
number of not churners - 242385
number of churners - 51167

I will try cross validation and the suggested metrics for churn. One more question: what is the best method to determine the threshold in this situation? At the moment, I use exactly what you said: 50%+ = churn, <50% = not churn.
AI: 2020 has thrown a lot of models off. I'd suggest training your model on 2016-2018 and evaluating it on 2019 data. If that looks good, you'll know that your pipeline is fine.
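A minimal sketch of that time-based check with LightGBM, reusing the parameters from the question. It assumes the monthly feature table is in a DataFrame df, and the column names snapshot_month (stored as a 'YYYY-MM' string) and churn are made up:

import lightgbm as lgb

train = df[df['snapshot_month'] <= '2018-12']                    # fit on 2016-2018
valid = df[df['snapshot_month'].between('2019-01', '2019-12')]   # evaluate on 2019

feature_cols = [c for c in df.columns if c not in ('snapshot_month', 'churn')]
dtrain = lgb.Dataset(train[feature_cols], label=train['churn'])
dvalid = lgb.Dataset(valid[feature_cols], label=valid['churn'])

# The AUC on the 2019 hold-out shows whether the pipeline generalises across time
model = lgb.train(parameters, dtrain, num_boost_round=500, valid_sets=[dvalid])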
H: How to write a reward function that optimizes for profit and revenue? So I want to write a reward function for a reinforcement learning model which picks products to display to a customer. Each product has a profit margin %. Higher price products will have a higher profit margin but a lower probability of being purchased. Lower price products have a lower profit margin but a higher probability of being purchased. The goal is to maintain an AVERAGE margin of 5% for ALL products sold, while maximizing the total revenue. What's the best way to write this reward function?
AI: Your goals include two criteria that interact and may conflict. It is not possible to write a single reward function to solve this perfectly. You have to decide first on the relative importance of the two goals. As one is effectively a constraint, you need to decide how hard you want to apply this constraint.
As the revenue is easy to measure, and is already a natural expression of what that part of the optimisation is supposed to achieve, you can start by using an arbitrary scaling for revenue that makes the numbers simple for your approximator - e.g. a neural network. Having numbers in the thousands or millions is not great because the error values could be really large during training, so I would try to scale this part of the reward by some order of magnitude depending on the values you are expecting.
Following that, you then have to decide how to add in some reward factor for the gross profit margin. There are lots of ways to do this, because the constraint you have been given is not "natural"; it is something that a business owner or analyst has determined will result in an overall acceptable net profit margin, which is related to but not the same as the gross profit margin goal you have been given (this is not unexpected: net profit margin is the real goal of the company, but much more complicated to figure out than gross profit margin per sale). I can think of two additional rewards that you could add in order to represent the goal of meeting the gross profit margin target:

As it has been phrased as a constraint, you will want negative rewards for sales that result in a gross profit margin below 5% and positive rewards for sales that result in a gross profit margin above 5%. You may be able to simplify that down to +1 or -1 per sale depending on what side of the line your margin currently is.
As an individual sale may not move this average by much, you may want to add a third reward centred on the 5% mark that simply is the amount above or below the 5% mark for an individual sale. So e.g. an object sold at £104 with a cost of £100 would score -1 reward. This option is a form of "reward shaping". There is a chance it could be counter-productive, but bear it in mind in case short-term learning does not steer sales in the right direction.

There are several other ways that you could construct a reward system. The key thing to bear in mind is that all rewards that you are adding from different sources need to be scaled to work together and express the goal of your agent. This is something you will need to establish through trial and error. You may be able to get a feel for the behaviour your weightings are encouraging by working through some examples from your data.
High weights on meeting the 5% constraint may reduce revenue through lack of sales (because all offered items may be more expensive); low weights on the constraint may have the business operating at a loss overall (as it makes sales that cost the company more in overheads than the smaller profit margins can make up for). However, there is no mathematically correct answer to that unless you can somehow model the relationship to net profit margin well enough to use that as the goal instead.
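To make the combination concrete, here is a minimal sketch of such a per-sale reward. The revenue scale, the two weights and the shaping term are illustrative assumptions, not a recommended setting; they are exactly the knobs that would need tuning by trial and error:

def sale_reward(price, cost,
                revenue_scale=1e-3,   # assumed: brings revenue into a small numeric range
                w_constraint=1.0,     # assumed weight on the +/-1 margin-constraint term
                w_shaping=0.1,        # assumed weight on the distance-from-5% shaping term
                target_margin=0.05):
    revenue_term = price * revenue_scale
    margin = (price - cost) / price                 # gross profit margin of this sale
    constraint_term = 1.0 if margin >= target_margin else -1.0
    shaping_term = (margin - target_margin) * 100   # percentage points above/below the 5% mark
    return revenue_term + w_constraint * constraint_term + w_shaping * shaping_term

# Example: a £104 sale that cost £100 (below the 5% target, so the reward is negative)
print(sale_reward(104.0, 100.0))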
H: log(odds) to p formulation
$$Log(Odds) = log({p \over (1-p)}) $$
$${p \over (1-p)} = e^{b+b_1x_1+....}$$
I understand up to here, however how does this:
$$p = (1-p) e^{b+b_1x_1+...}$$
become:
$$ p = {1 \over {1+e^{-(b+b_1x_1+...)}}}$$
Can someone explain the last two steps?
AI: We have, $p = (1 - p)e^{b + b_1x_1 + \ldots}$
Let $y= {b + b_1x_1 + \ldots}$
So, $p = (1 - p)e^y$
or, $p = e^y - pe^y$
or, $p+pe^y = e^y$
or, $p(1+e^y) = e^y$
or, $p = e^y/(1+e^y)$
or, $p = 1/(e^{-y}+1)$ (dividing both numerator and denominator by $e^y$ on the RHS)
or, $p = 1/(e^{-{b + b_1x_1 + \ldots}}+1)$
or, $p = 1/(1+e^{-{b + b_1x_1 + \ldots}})$
Let me know if you have any doubts.
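A quick numerical sanity check of the identity (the coefficient values are arbitrary, just to confirm both forms agree):

import numpy as np

b, b1, x1 = 0.3, -1.2, 2.0                    # arbitrary made-up coefficients
y = b + b1 * x1

p_from_odds = np.exp(y) / (1 + np.exp(y))     # p = e^y / (1 + e^y)
p_sigmoid = 1 / (1 + np.exp(-y))              # p = 1 / (1 + e^{-y})
print(np.isclose(p_from_odds, p_sigmoid))     # True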
H: Interpreting evaluation metrics with threshold/cutoff I was doing churn prediction for a company. I've got the following results by applying 3 classifiers.

Model                      Accuracy   AUC
Logistic Regression        0.671      0.736
Decision Tree (pruned)     0.681      0.665
Decision Tree (unpruned)   0.623      0.627

Now, I want to know two things:

Which model has a better accuracy for a cutoff of 0.9? As the logistic regression has the highest AUC, in my opinion Logistic Regression is better.
Which model is the best in terms of ranking the predictions according to their probability of leaving?

Can anyone explain how I can interpret them?
AI: The accuracy is likely to go down if you change the cutoff point to 0.9, since any model tries to separate the classes so that the probability of the correct class is higher than 0.5. But the only way to know would be to actually do the experiment (I assume that the results you show are obtained with the default cutoff).
AUC is a complex measure for a soft classifier, i.e. it doesn't use any cutoff point but provides a performance value across cutoff points. Importantly, the AUC score considers one class as positive, like the precision, recall and F1-score measures. Btw I would suggest looking at these scores instead, which are more precise than accuracy and more easily interpretable than AUC.
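A small sketch of both checks with scikit-learn, assuming y_test and the predicted churn probabilities proba for the test set are already available (e.g. proba = model.predict_proba(X_test)[:, 1]):

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

acc_default = accuracy_score(y_test, (proba >= 0.5).astype(int))   # default cutoff
acc_090 = accuracy_score(y_test, (proba >= 0.9).astype(int))       # cutoff of 0.9
auc = roc_auc_score(y_test, proba)                                 # cutoff-free, measures ranking quality

# Question 2: rank customers by predicted probability of leaving, highest risk first
ranking = np.argsort(proba)[::-1]
print(acc_default, acc_090, auc)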
H: How do I minimize cost for EV charging? I want to find a charging schedule that minimizes the cost of charging an EV. The main objective is to have a fully charged car for the next morning, but the sub-objective is to minimize cost based on these two things combined:

Charge when electricity is cheapest - I know the hourly electricity price for the next 24 hours
Minimize hourly peak demand charges for the household - I pay a small additional fee each month if my hourly demand exceeds different steps.

I know the power size of the charger (W), the capacity of the car battery (Wh), how many hours I have to charge (h), what my household peak is right now (W), and all prices for both consumption (Money/Wh) and peak demands (xx Money, if hourly demand > xxxx Wh).
What would one call this type of problem? How would one go forward to solve this? Is there a python package that can help me solve this? (I have seen similar problems solved with Gurobi)
AI: This is an optimization problem: you're trying to find which combination of parameter values gives the smallest value for a cost function, taking into account some constraints on the parameters.
The first step is to formalize the problem: the fixed parameters (electricity rate, charge duration, ...), the variable parameters (when to charge), the constraints (e.g. when you need the car to be charged). From the description the problem is simple enough: not that many parameters, and the cost can be directly calculated for particular parameter values. So I think you could simply use a grid search to solve it. A more advanced option would be a genetic algorithm, but that's probably overkill.
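A minimal sketch of the cost-only part written as a linear program with scipy (all numbers are made up, and the stepped peak-demand fee is simplified to a hard cap on the hourly draw rather than modelled exactly):

import numpy as np
from scipy.optimize import linprog

# Made-up hourly prices for the next 24 hours, in money per kWh
price_per_kwh = np.array([0.30, 0.28, 0.25, 0.22, 0.20, 0.21, 0.25, 0.35,
                          0.45, 0.50, 0.48, 0.45, 0.40, 0.38, 0.37, 0.40,
                          0.50, 0.60, 0.65, 0.55, 0.45, 0.40, 0.35, 0.32])
price = price_per_kwh / 1000.0   # money per Wh
charger_power = 7000             # W, so at most 7000 Wh can go into the car per hour
energy_needed = 40000            # Wh still missing from the battery (made up)
hourly_cap = 6000                # Wh/hour kept below the next peak-demand step (assumption)

n = len(price)
res = linprog(
    c=price,                                           # minimize total charging cost
    A_eq=np.ones((1, n)), b_eq=[energy_needed],        # battery must be full by morning
    bounds=[(0, min(charger_power, hourly_cap))] * n,  # physical and peak-demand limits
    method="highs",
)
schedule = res.x    # Wh to charge in each of the next 24 hours
print(schedule, res.fun)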
H: Is it necessary to use stratified sampling if I am using SMOTE already? I have already applied SMOTE to my imbalanced dataset with more than 300K observations. Does it still make sense to use stratified K-fold cross validation rather than simply ordinary K-fold cross validation (it seems unlikely that any of the K-fold training sets would be imbalanced)?
AI: It doesn't make sense to stratify your data after balancing it, since your data is now balanced, so how would you determine the stratification? It would be equal to regular sampling, unless you used the ratio from before balancing your data, but that is not relevant anymore. Also, the whole point of resampling techniques is to avoid such procedures.
Whether resampling techniques will work for your data is another question. You can't generate new information only from existing information, so quite a lot of the original imbalance may still be reflected in the signal, even after balancing the class counts. This depends a lot on how imbalanced your data was and how much information it contained.
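A small sketch illustrating the point on synthetic data (the sample size and the 95/5 imbalance are made up): after SMOTE the classes are balanced, so plain and stratified K-fold end up with essentially the same class composition per fold.

from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, StratifiedKFold
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=20000, weights=[0.95, 0.05], random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))   # imbalanced before, roughly 50/50 after

for cv in (KFold(5, shuffle=True, random_state=0),
           StratifiedKFold(5, shuffle=True, random_state=0)):
    _, test_idx = next(iter(cv.split(X_res, y_res)))
    print(type(cv).__name__, Counter(y_res[test_idx]))   # near-identical class ratios per fold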
H: What does "S" in Shannon's entropy stands for? I see many machine learning texts using the following notation to represent Shannon's entropy in classification/supervised learning contexts: $$ H(S) = \sum_{i \in Y}p_i \log(p_i) $$ Where $p_i$ is the probability of a given point being of class $i$. I just do not understand what is $S$ because no further explanation about it is provided. Does it has something to do with the feature $S$ in the dateset? $S$ seems to appear again in Information Gain formula: $$ \operatorname{IG}(S,A) = H(S) - \sum_{a \in A} \frac{S_a}{S}H(S_a) $$ I know Information Gain and Entropy concepts, I just would like to understand the mathematical formalism. AI: To answer your question, $S$ in shannon entropy represents a discrete random variable with values $s_{1},s_{2},..s_{n}$ $S$ in Information Gain represents set of training examples, in the form ${\displaystyle ({\textbf {s}},t)=(s_{1},s_{2},s_{3},...,s_{k},t)}$, where ${\displaystyle s_{a}\in vals(a)}$ is the value of the ${\displaystyle a^{\text{th}}}$ attribute or feature of example ${\displaystyle {\textbf {s}}}$ and $t$ is the class label. Below is information from wikipedia Shannon Entropy: wiki link Given a discrete random variable $X$, with possible outcomes $x_{1} ,x_{2} ,....x_{n}$ , which occur with probability ${\displaystyle \mathrm {P} (x_{1}),...,\mathrm {P} (x_{n}),}{\displaystyle \mathrm {P} (x_{1}),...,\mathrm {P} (x_{n}),}$ the entropy of $X$ is formally defined as: ${\displaystyle \mathrm {H} (X)=-\sum _{i=1}^{n}{\mathrm {P} (x_{i})\log \mathrm {P} (x_{i})}}$ Information Gain:wiki link Let ${\displaystyle T}$ denote a set of training examples, each of the form ${\displaystyle ({\textbf {x}},y)=(x_{1},x_{2},x_{3},...,x_{k},y)}$ where ${\displaystyle x_{a}\in vals(a)}$ is the value of the ${\displaystyle a^{\text{th}}}$ attribute or feature of example ${\displaystyle {\textbf {x}}}$ and $y$ is the corresponding class label. The information gain for an attribute ${\displaystyle a}$ is defined in terms of Shannon entropy ${\displaystyle \mathrm {H} (-)}$ as follows. For a value ${\displaystyle v}$ taken by attribute ${\displaystyle a}$, let ${\displaystyle S_{a}{(v)}=\{{\textbf {x}}\in T|x_{a}=v\}}$ be defined as the set of training inputs of ${\displaystyle T}$ for which attribute ${\displaystyle a}$ is equal to ${\displaystyle v}$. Then the information gain of ${\displaystyle T}$ for attribute ${\displaystyle a}$ is the difference between the a priori Shannon entropy ${\displaystyle \mathrm {H} (T)}$ of the training set and the conditional entropy ${\displaystyle \mathrm {H} (T|a)}$ . ${\displaystyle \mathrm {H} (T|a)=\sum _{v\in vals(a)}{{\frac {|S_{a}{(v)}|}{|T|}}\cdot \mathrm {H} \left(S_{a}{\left(v\right)}\right)}.}$ ${\displaystyle IG(T,a)=\mathrm {H} (T)-\mathrm {H} (T|a)}$
H: Why is the BERT NSP task useful for sentence classification tasks? BERT pre-trains the special [CLS] token on the NSP task - for every pair A-B, predicting whether sentence B follows sentence A in the corpus or not. When fine-tuning BERT for sentence classification (e.g. spam or not), it is recommended to use a degenerate pair A-null and use the [CLS] token output for our task.
How does that make sense? In the pre-training stage, BERT never saw such pairs, so how come it handles them just fine and "knows" that, instead of extracting the relation between A and B, it should extract the meaning of sentence A, as there is no sentence B?
Is there another practice of fine-tuning the model with A-spam and A-notspam for every sentence A, and seeing which pair gets the better NSP score? Or is that totally equivalent to fine-tuning with A-null?
Related to: Bert-Transformer : Why Bert transformer uses [CLS] token for classification instead of average over all tokens?
AI: The motivation is that the [CLS] embedding should contain "a summary" of both sentences to be able to decide if they follow each other or not. However, in follow-up papers such as RoBERTa or XLNet, only the masked LM objective is used and they reach better results than the original BERT. Here is the table with results from the RoBERTa paper (Table 2 on page 5) that specifically measures the effect of the next-sentence-prediction (NSP) loss.
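For reference, the degenerate single-sentence input needs no special handling in practice. A minimal sketch with the Hugging Face transformers library (the model name and the example sentence are arbitrary choices): the tokenizer simply emits [CLS] sentence [SEP] with all token_type_ids set to 0, and BertForSequenceClassification puts its (initially untrained) classification head on top of the [CLS] output.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

enc = tokenizer("Win a free prize now!!!", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()))  # ['[CLS]', ..., '[SEP]'] - no second segment
print(enc["token_type_ids"])                                          # all zeros for a single sentence

with torch.no_grad():
    logits = model(**enc).logits   # one score per class (e.g. spam / not spam)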